Space entrepreneurship at Moffett Field?

On approach to Moffett Federal Airfield. On left of runways, NASA Ames, Hangar One. On right, Hangars Two and Three.  (Photo by I. Kluft.)

One of the landmarks of the San Francisco Bay Area is Hangar One, located at Moffett Federal Airfield. It was built in the early 1930s to house lighter-than-air ships, specifically the USS Macon. The field started as Naval Air Station Sunnyvale, but was renamed in 1935 to NAS Moffett, in honor of Admiral William A. Moffett, who died in the crash of the airship USS Akron in 1933.

In its history, several federal agencies and military branches have been located there, including the Navy, Army Reserve, and the Air Force. It is also home to NASA Ames Research Center, which started there as the NACA Ames Aeronautical Laboratory in 1939, on recommendations of a committee chaired by Charles Lindbergh.

Crossroads for Moffett

The airfield and surrounding land are now at a crossroads. The General Services Administration and NASA have put out a request for proposal (RFP) to lease either just Hangar One, or the whole airfield, including Hangars One, Two, and Three.  This post started out as a reply to questions posed in the Mountain View Voice, the community newspaper of Mountain View, the town on the western border of NASA Ames. Mountain View is home to Google, and was once also home to Silicon Graphics, Sun Microsystems, Adobe, etc. To the south of the field, right under the approach to the runway, is Sunnyvale, current or former home to a variety of other tech companies. Hence the original name of the field, NAS Sunnyvale.

There is a lot of debate about the future use of the field and the hangars. A couple of years ago, one plan called for ripping out the runways and using the land to house World Expo 2020.  This plan seems to no longer be viable.

Released in May, the RFP allows for two kinds of proposals: (1) just Hangar One, or (2) the airfield, including Hangars One, Two, and Three.  In the first case, it seems NASA (via the California Air National Guard) would continue to operate the airfield. In both cases, the airfield would remain operational.

I have just started studying the RFP. There is nothing I see in it that allows the hangars to be torn down; if anything, there are varying levels of rehabilitation required. There may be reimbursable costs, but it is up to the lessee to make the field financially viable. NASA Ames and the NASA Research Park in front of Hangar One are not part of the RFP, and remain under NASA control.

There appear to be four interested parties getting ready to submit proposals:

  • Google (presumably through its related company H211 LLC)
  • International Space Development Hub (ISDhub)
  • Silicon Valley Space Center (SVSC) and its Moffett Federal Airfield (MFA) Alliance
  • Earth, Air, and Space Center of the Air and Space West Foundation

Of these groups, at least two seem intent on space entrepreneurship: ISDhub and SVSC. The Google founders have invested in space companies, but there is no indication of whether space entrepreneurship would factor into Google’s proposal. Thus, what space entrepreneurship means for Moffett Field will depend on who wins the lease.

Among the key problems: Hangar One was stripped of its external skin, which showed evidence of asbestos. That job was performed by the Navy before it handed the airfield over to NASA. A protective coating has been applied to the skeleton. But the hangar needs to be reskinned by whoever wins the lease on the property.

The opportunity

(Disclosure: For regular readers of this blog, it is no surprise that I am an SVSC member. I am not one of the key personnel on the SVSC/MFA Alliance proposal. However, I’ve worked with other SVSC members on projects, and have had opportunities to talk to the founders of companies being incubated through SVSC. I represent the AIAA SF side of the Small Payload Entrepreneur TechTalks which are co-sponsored with SVSC.)

The opinions below are mine. They reflect my current thinking on the future of Moffett Field. If/when the proposals are released for public consumption, I might change my mind on some aspects.

In my view, using Moffett Field for endeavors other than aerospace development activities would be a lost opportunity. There is collaboration that can come from pulling several complementary companies together in a single setting. There are suddenly a lot more potential users of products concentrated together; discovering common needs comes much more quickly. These users may be NASA or small companies in the area.  In effect, as new economic supply chains emerge, participants are able to identify their current niches, and discover missing links and opportunities. [mod:0926]

To me, NASA Ames has demonstrated more interest and support for commercial applications in space than any other NASA center.  It had the Space Portal long before the rest of NASA put such a priority on commercialization. The NASA Flight Opportunities program at Ames matches technology with research flights all across the country; it is helping to accelerate the maturity of hardware to be used in space. These programs stand to reduce the costs of space flight even faster if Moffett Field is dedicated to that kind of collaboration.

Mastering aerospace complexities

What about technologies other than aerospace? Would only aerospace companies reside in an aerospace entrepreneurial research park? There are two answers to this:

1. Concentrated aerospace.  The unique value of Ames is to be able to pull developers of various parts of the aerospace ecosystem together in a single place. There are experts at Ames in various aerospace problems and technologies. The National Full-scale Aerodynamics Complex (NFAC) is at Ames. The Arc Jet Complex, used to test materials for atmospheric entry, is at Ames. The Lunar Science Institute is there. The Astrobiology Institute is there. The list goes on and on.

There are things that researchers at Ames want and entrepreneurs would like to offer. Rather than travel (which the GSA has managed to restrict), conference calls, and shipping intermediate deliverables around the country, all of which usually have to be highly focused activities, co-location opens up the ability to informally explore the secondary and tertiary effects of technologies, leading to quicker feedback and optimization.

If you want to develop better computing technologies, or simply want to have a manufacturing line, there are other places in the Valley to do it.  And close proximity to a flight line is not necessarily conducive to those activities, particularly if it is not a shipping port for manufactured goods.

2. The truly complex nature of aerospace.  Ultimately, the goal of an aerospace enterprise is to design, construct, or integrate a flight vehicle that accomplishes a class of missions.  For larger projects, the undertaking is so complex and has so many spin-offs that large aerospace companies sometimes identify themselves as systems companies.  The complexity and resulting cash flow requirements are just too high for the vast majority of entrepreneurs. Starting with aeronautics, the field traditionally includes: aerodynamics, structures, propulsion, and controls. But when you get to satellites, the dominant discipline is electronics.  This can be broken into power systems, sensors, computing, communications, and probably a few other things I’m forgetting. I expect to see boutique companies focusing on a single discipline or a small cluster of disciplines. The integration of these disciplines is what makes aircraft or launch vehicles possible; it is virtually impossible for entrepreneurs to do, except for the most well-financed.

The smallest flight vehicle that an aerospace company might attempt is a small satellite or an autonomous unmanned aerial vehicle (UAV). In such enterprises, an orchestrated solution for power, communications, sensors, attitude control, and overall resource management is being attempted. For serious UAVs, structures and mass are traded against propulsion, which may be traded against aerodynamics. Successful flight vehicles need expertise in all these areas. Concentrating so much intellectual power in an entrepreneurial company is a major challenge.

Thus, the most likely scenario is to see companies with highly focused products or activities which are able to occupy a niche in an aerospace supply chain.

Museums

As for museums, I don’t see an inherent conflict between space entrepreneurship and having part of Hangar One as a museum.  If you go to the Computer History Museum, The Tech Museum, or the California Science Center down south, the exhibits show how technology works and what the potentials are for the future. The challenge would be how you cost-effectively add value above and beyond the other excellent venues that are already available in the region.  Furthermore, NASA Ames has a Visitor Center at the entrance just off Highway 101. Would that continue as an independent venue, or be folded into a museum in Hangar One? Presumably, those who are proposing a museum in Hangar One are figuring that out.

If you dedicate a substantial part of Hangar One to a museum or other educational center, then you will need the rest of the airfield and its facilities if you also want to support space entrepreneurship. Some entrepreneurial work will involve chemicals, gases, or other hazards which probably should not be present in a public venue.  Otherwise, you need to accept from the outset that such activities cannot be pursued on the premises.  (Since Hangar One is a historical site, they may not be allowed anyway.)

What is at stake

The decision on how to lease/manage Moffett Field has major repercussions for the future of space entrepreneurship.  Space entrepreneurship could struggle along in Silicon Valley, but increasingly, companies will find that it is more expedient to move out of California to Texas (home of the Johnson Space Center), or Colorado (home to a lot of spacecraft design and construction), or other states that want space business.

The infusion of capital from Silicon Valley, where half of venture capital deals are made, is likely to accelerate the maturity of new commercial space operations.  This is much more likely to happen when companies are located within easy access to venture firms.  (Local residents understand the relationship between projects spun out from Stanford University and Sand Hill Road, which is just west of the campus.)

To be sure, a large portion of aerospace vehicle design and construction happens in southern California. That concentration of talent has allowed SpaceX to rise very quickly.  The concentration of experimental flight vehicle talent around Mojave, just north of Los Angeles, makes possible Scaled Composites, XCOR Aerospace, Masten Space Systems, etc.  However, some of these entrepreneurial companies are moving manufacturing and test operations to Texas (particularly, XCOR and SpaceX). Other states would certainly like to be home base to aerospace vehicle design and manufacturing, e.g., Alabama (Marshall Space Flight Center), Mississippi (Stennis Space Center) and Florida (Cape Canaveral and Kennedy Space Center), and even Virginia (Wallops Island). If companies don’t want to stay in California, there are welcome mats in a lot of other places.

The decision for the next phase of Moffett Field lies in the hands of NASA and the GSA.  The residents of surrounding communities have their preferences on what they want to see, based on good and bad experiences with other local enterprises.  Space entrepreneurs badly want to enable technology for humans returning to the Moon, reaching Mars, and settlements on both.

The next phase of Moffett Field is more than just the next side-effect of base realignment when the Navy and Air Force pulled out.  It potentially has a major impact on how quickly a commercial space economy gets a foothold beyond Earth orbit, and how soon NASA’s limited resources can be freed up for more robust exploration missions.

[PostScript: Since I originally pushed this post out a few days ago, I’m making occasional small fixes, chiefly spelling or grammatical.  Paragraphs that have such mods are marked with [mod:mmdd], where mmdd is obviously the month and day of modification.  Given the readership I’m seeing, there may be a need to expand on a particular aspect of this article.  But I’ll deal with that in a separate post.  –RSR]

The energy cost of computation

If you have a smartphone, and the battery is quickly being drained, you may have discovered that by quitting apps or removing them from your phone, the battery lasts longer.  Sensors, transmitters and receivers, and computers all take energy.  It is something that mobile device designers are concerned about.  It turns out app developers make important decisions which affect the battery life.

Of course, I consider aircraft and spacecraft to be very mobile devices.  In fact, in the last week, I’ve heard from parts of the aerospace community on this subject.  More specifically, they are concerned about how to minimize the energy consumption of computation.  Their interest ranges from mobile CPUs and sensors to high performance computing.

Aside from the work that hardware designers do, what can/should software developers and users be aware of?  I’m going to try to lay a foundation for the subject in this post.  It may initially seem that energy consumption is outside a software person’s control.  I will give some examples where it actually makes a difference.  Other examples are taken from hardware, but illustrate the implications of system design choices, e.g., using a simple embedded system vs. a multi-tasking system.

Before doing so, I need to confess something.  Between the computing hardware and software worlds, each side tends to believe that I belong to the other.  Neither side enthusiastically claims me as their own.  Really, I learned computing because I couldn’t get my hands on a wind tunnel.  My youth was spent with T-square, right triangles, and French curves rather than oscilloscopes, resistors, and diodes. But professionally, I am a computer scientist.  I happen to have worked with a lot of digital and computing systems designers, including a few years at the chip level.

All gates are created equal… to a first approximation…

For our purposes, we will think in terms of gates — fundamental blocks which have a 0 or 1 as their output.  It is a major simplification.  Chip designers frequently think in terms of NAND or NOR gates, switches, or transistors (going from most to least abstract).  In fact, a NAND gate may be a 2-, 3-, or 4-input NAND gate.  There are also nuances in rise and fall times of propagated signals.  I’m also going to assume a silicon technology like CMOS rather than nMOS or pMOS or bipolar types.  In practice, CMOS dominates the vast majority of digital devices.  But for our purposes, all gates are created equal. (We’ll ignore whether or not some seem to be more equal than others…)

Fundamentally, changing the state of a gate takes energy.  To a first-order approximation, it doesn’t matter if it moves from 0 to 1 or 1 to 0; the result is pretty much the same. The amount of energy needed for a computation is directly related to the number of gate state changes needed to complete the computation.

Program counters, page boundaries

One of the strange side effects is on the program counter.  Assume you have a short loop that runs from 0x0010 to 0x0013 and back again.  This takes less energy than a loop that runs from 0x0ffe to 0x1001.  Why?  Two major sets of gate state changes happen:

  • Going from 0x0fff to 0x1000, there are 1 change of 0->1 and 12 changes of 1->0 — a total of 13 state changes.
  • When a pass through the loop finishes, it jumps from 0x1001 to 0x0ffe, which has 11 changes of 0->1 and 2 changes of 1->0 — again 13 state changes.

In the other case, running between 0x0010 and 0x0013, the state changes are confined to 2 bits.

Moral of the story:  frequently executed tight loops should avoid crossing page boundaries.

The program counter is simply one part of the story.  There is the execution of instructions, the impact on registers and memory, etc.  But if those are the same between the two address sets, the code placement emerges as a variable that can be manipulated.
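
To make the arithmetic concrete, here is a minimal sketch that counts the program-counter bit flips in the two loops above.  It is a toy model: real processors have pipelines, prefetch, and much more state, so take it as an illustration of the counting, not a power estimator.

```python
# Count program-counter bit flips for one pass through a loop.
# Toy model: only the PC register is considered, nothing else.

def bit_flips(a: int, b: int) -> int:
    """Number of bits that change state going from value a to value b."""
    return bin(a ^ b).count("1")

def loop_flips(start: int, end: int) -> int:
    """Total PC bit flips: increment start..end, then jump back to start."""
    total = sum(bit_flips(pc, pc + 1) for pc in range(start, end))
    return total + bit_flips(end, start)  # the backward jump

print(loop_flips(0x0010, 0x0013))  # tight loop: 6 flips per pass
print(loop_flips(0x0ffe, 0x1001))  # boundary-crossing loop: 28 flips per pass
```

The boundary-crossing loop toggles more than four times as many program-counter gates per pass, even though both loops span four addresses.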

Algorithmic performance

In understanding the order of an algorithm, a sorting program that processes n inputs in n steps (or even 2n steps) is considered to be linear.  Its run-time is of order n, written O(n).  As the number of inputs increases, asymptotic trends emerge.

A sorting program whose run-time increases as O(n log n) ultimately performs better than a program that runs in O(n²). (Computer scientists and software engineers understand this as the difference between the “quicksort” and “bubblesort” algorithms.)  If tricks can be played to let the sorting algorithm approach O(n), that algorithm has the potential to perform even better.
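
A quick way to feel the difference is to count comparisons, since each comparison ultimately costs gate state changes.  The sketch below pits a hand-written bubble sort against Python’s built-in sort (Timsort, roughly O(n log n)); the CountingInt wrapper is just a device I’m introducing here to count comparisons.

```python
# Comparison counts as a proxy for work (and thus energy):
# bubble sort grows ~n^2, the built-in sort ~n log n.

import random

def bubble_comparisons(data):
    a = list(data)
    count = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return count

class CountingInt(int):
    """An int that counts how many times it is compared."""
    compares = 0
    def __lt__(self, other):
        CountingInt.compares += 1
        return int(self) < int(other)

for n in (100, 1000):
    data = [random.randrange(n) for _ in range(n)]
    CountingInt.compares = 0
    sorted(CountingInt(x) for x in data)
    print(n, bubble_comparisons(data), CountingInt.compares)
```

At n = 1000, the bubble sort performs 499,500 comparisons; the built-in sort typically needs on the order of ten thousand.  The quadratic growth swamps everything else.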

Varying processor performance

In the smartphone world, we’re beginning to see devices that have multiple high-performance cores and one low-power low-performance core on the same piece of silicon.  When performance demand drops off and only maintenance tasks are running, the low-power core continues running while the high-performance cores are sleeping.

This technique is exploited by processor designer ARM Holdings in its big.LITTLE™ processing design.  (Yes, the word “big” is lower case, and “LITTLE” is all caps.)  ARM claims this can reduce energy consumption by 70% or more for light workloads and by 50% for moderate workloads.  In the case of big.LITTLE, the ARM Cortex-A15 is paired with the lower power ARM Cortex-A7. [strange typos fixed 9/18]

Of course, the selection of whether to run just the low-power core as opposed to the high-performance cores is made by the operating system.

In some simpler systems, the user is able to select between higher performance or lower power, thus extending battery life.  In this case, the system clock speed is set high or low accordingly.  The user enters a choice via the device user interface, which is then interpreted by the operating system as a performance setting.
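
The payoff of slowing down comes from the standard dynamic-power relation for CMOS, P ≈ αCV²f: frequency enters linearly, but voltage enters squared, and lower clocks usually permit lower voltage.  Here is a back-of-envelope sketch; the capacitance, voltage, and frequency figures are made-up illustrative values, not data for any real chip.

```python
# Back-of-envelope dynamic power model for CMOS: P ~ alpha * C * V^2 * f.
# All numeric values below are illustrative assumptions.

def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    return alpha * c_farads * v_volts**2 * f_hertz

high = dynamic_power(0.1, 1e-9, 1.2, 2.0e9)   # "performance" setting
low  = dynamic_power(0.1, 1e-9, 0.9, 0.8e9)   # "power saver" setting
print(f"high: {high:.3f} W, low: {low:.4f} W, ratio: {high/low:.1f}x")
```

Halving the frequency alone would save about half the dynamic power; dropping the voltage at the same time is what makes the “power saver” setting pay off superlinearly (about 4.4x in this made-up example).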

Communications

Smartphones typically have at least three types of radio — Bluetooth, wi-fi, and a wireless telecom standard (3G, 4G, perhaps even more). Communications for an app are a trade-off between necessary data rate and power consumption. When possible, the app developer should choose the communication mode that requires the least amount of power for the job.  Often this means preferring wi-fi over the wireless telecom.  (In fact, Bluetooth takes far less power than the others, but is not used as a multi-hop network protocol.)
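
As a sketch of that trade-off: the energy to move a payload is roughly active power times transfer time, i.e. power × (bits ÷ data rate).  The power and rate figures below are rough placeholder assumptions for illustration, not measured values for any device.

```python
# Sketch of the app-level trade-off: energy = active power * transfer time.
# The power and rate numbers are assumed placeholders, not measurements.

RADIOS = {
    # name: (active_power_watts, data_rate_bits_per_sec)
    "bluetooth": (0.05, 1e6),
    "wifi":      (0.8, 50e6),
    "cellular":  (1.5, 10e6),
}

def transfer_energy_joules(radio, payload_bytes):
    power, rate = RADIOS[radio]
    return power * (payload_bytes * 8 / rate)

for name in RADIOS:
    print(name, round(transfer_energy_joules(name, 10_000_000), 2), "J")
```

Under these assumed numbers, wi-fi beats the cellular radio for a bulk transfer despite drawing more instantaneous power, because it finishes sooner and can go back to sleep.  That “race to sleep” effect is why the choice belongs to the app developer.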

DMA, buffers, interrupts

An operating system faces varying challenges in servicing I/O requests, particularly if it operates under real-time constraints.  In modern computer architectures, block data can be transferred under DMA control without interrupting the CPU.  (DMA = direct memory access)  But once the DMA operation is finished, an interrupt has to be issued so that the data can be dealt with.  This, of course, requires a context switch from a process to the kernel to possibly another process.

Some operating systems are more efficient about switching between processes than others.  Historically, UNIX has been better at this than Windows.  However, when you introduce threads (a lighter weight scheduling mechanism) into the picture, there is no clear advantage.  Thus, Windows programs are often designed with many threads.  In general, basic UNIX programs do not use multiple threads, but can easily be assembled as building blocks.  On the other hand, UNIX database programs typically run as monolithic components, but make extensive use of threads.  Java programs invariably run many threads.

Some I/O interfaces cannot run DMA and require more frequent OS attention.  Sometimes there is a buffer for 3 or 4 characters.  Before that overflows, the OS needs to copy the buffer content.  This can result in very high context switching overhead when certain apps are running.
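
A back-of-envelope calculation shows why the small buffer hurts.  Assume (these are example figures, not from any particular device) a 115200-baud serial link, a 4-byte hardware FIFO, and a 4 KB DMA block:

```python
# Rough interrupt-rate estimate: tiny non-DMA FIFO vs. DMA block transfers.
# Baud rate and buffer sizes are example assumptions for illustration.

BAUD = 115200                 # bits/s; ~10 bits per byte with framing
bytes_per_sec = BAUD / 10

fifo_bytes = 4                # tiny hardware buffer, serviced by the OS
dma_block_bytes = 4096        # one interrupt per completed DMA block

print("FIFO interrupts/s:", bytes_per_sec / fifo_bytes)       # ~2880
print("DMA interrupts/s: ", bytes_per_sec / dma_block_bytes)  # ~2.8
```

Three orders of magnitude fewer interrupts means three orders of magnitude fewer context switches, each of which toggles a great many gates.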

Co-processors

The newest iPhone, the 5S, supplements the main A7 chip with an M7 chip to directly deal with data from the accelerometers, gyroscope, and compass.  Details of the M7 chip have not yet been published.  I suspect this vastly reduces the CPU load when motion-based apps are running.  It certainly cuts out a lot of interrupts.  What is not clear to me is whether or not the M7 also does low-power matrix computations.  If a sequence of matrix operations is only done 50 or 100 times per second, a high-performance multiplier-accumulator may not be necessary, and a low-power version can be put in the co-processor, further relieving the CPU of certain real-time burdens.

Loop vs. ‘halt’

When there is no more computation to do, some operating systems put themselves into a tight idle loop, waiting for the next interrupt to come in.  Others execute a ‘halt’ instruction and wait.  If you measure the CPU temperature, the latter is significantly cooler.  The former would not be practical for a mobile device.  Naturally, I consider aircraft and spacecraft to be mobile.  So I dislike operating systems with tight idle loops.
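
The same choice shows up at the application level as polling versus blocking.  Here is a small user-space analogy, using only the Python standard library: the spinning thread pegs a core the whole time, while the blocking one lets the OS put the core into a low-power state.  It illustrates the principle, not an actual OS idle loop.

```python
# Polling vs. blocking: a user-space analogy to 'loop vs. halt'.
import threading, time

event = threading.Event()

def spin_wait():          # analogous to a tight idle loop
    while not event.is_set():
        pass              # burns a core at ~100% utilization

def blocking_wait():      # analogous to 'halt' until an interrupt
    event.wait()          # thread sleeps; the core can idle

# Run the well-behaved version; watch CPU usage sit near zero.
# (Swapping in spin_wait instead would peg one core for the second.)
t = threading.Thread(target=blocking_wait)
t.start()
time.sleep(1.0)
event.set()
t.join()
```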

Memory management schemes

A key computing architecture feature that makes smartphones possible is demand paging, a memory management scheme invented in the late 1960s.  It makes multi-tasking and adaptability to new programs a fundamental reality. But the logic design behind a memory management unit (MMU) requires a LOT of gates, and thus consumes a lot of energy.  Thus, for simple dedicated real-time systems, it may be best to avoid the need for a paging MMU.

Processors such as the ARM Cortex-M series use a segmentation scheme that loads registers with a segment base address and a segment length.  The complexity and power costs of a demand-paging MMU are not there.

The PDP-11 used a hybrid between paging and segmentation.  PDP-11 programs were limited to 64K bytes.  But the processor had 8 segment registers to map 8K segments to the desired parts of memory.  The processor could then switch application programs by switching the contents of the segment registers.  As a result, many models of the PDP-11 were able to handle several users on a time-sharing system, giving rise to the UNIX operating system.
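
A minimal sketch of how base/length translation works, in the spirit of the PDP-11’s eight segment registers.  The table layout and fault behavior here are my own simplifications for illustration, not a model of the actual hardware:

```python
# Toy model of base/length segment translation (PDP-11 flavored):
# eight 8K segments cover a 64K virtual address space.

SEG_SIZE = 8 * 1024

def translate(segment_table, virtual_addr):
    """Map a 16-bit virtual address through (base, length) registers."""
    seg = virtual_addr // SEG_SIZE
    offset = virtual_addr % SEG_SIZE
    base, length = segment_table[seg]
    if offset >= length:
        raise MemoryError(f"segment {seg} fault at offset {offset:#x}")
    return base + offset

# One process's mapping: segment 0 at physical 0x20000, 4K long.
table = [(0x20000, 4096)] + [(0, 0)] * 7
print(hex(translate(table, 0x0123)))   # -> 0x20123
```

Switching processes amounts to reloading eight register pairs, which is why this scheme needs so few gates compared to a demand-paging MMU.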

Given the small address space and small set of segment registers, a PDP-11 would take considerably fewer gates than a VAX-11.  If both were resurrected today, the PDP-11 would be more power efficient.

Virtual machines

It is worth noting that virtualization, the current practice of replacing several physical machines with virtual machines, has had an immense impact on the energy footprint of data centers.  However, a context switch between virtual machines is even more heavyweight than switching processes on a single machine.

The virtualization kernel and hypervisor(s) are presumed to be reliable components that separate unreliable operating systems from each other. Thus, the failure of a single process in one operating system doesn’t affect key application processes in another virtual machine, because those processes simply aren’t there.

In fact, the physical machine running the multiple virtual machines is probably consuming more energy than combining all the processes on a reliable operating system on the same physical machine.  But managers of data centers are constrained by the commercial choices available to them, which favor certain operating systems.  Thus, the minimum energy consumption that can be achieved is not as low as it could be for a single operating system.

To be sure, there are good uses for virtualization.  When different software packages require different versions of the same operating system, virtualization provides a way to host all the packages on the same hardware. Virtualization has evolved to where an application can be migrated off one physical machine and onto another one, letting the former be brought down for maintenance without interrupting program operation.  These are just a couple of examples of legitimate uses.

The list goes on…

There are certainly CPU design strategies that can affect energy consumption.

I could start in on speculative execution and other CPU accelerators, which gain performance at the expense of additional power.  A stark contrast to these is the SPARC chip-multithreading (CMT) architecture.  The fundamental concept gets rid of the accelerators, replacing them with a massive set of simpler cores and threads, resulting in much lower power per CPU thread.

At this point, we’re clearly no longer in a realm that software or system designers can affect; little or nothing can be done through software.

So that’s my view of computation energy consumption from a software perspective, but also peering into the system hardware.

The lure and mystery of moondust

Friday evening, September 6, 2013, the Lunar Atmosphere and Dust Environment Explorer (LADEE) was launched aboard a Minotaur rocket from Wallops Island, VA, on the mid-Atlantic seaboard.  This mission is proving interesting in a lot of ways — some to do with the Moon, and some to do with how space missions are now executed.

[Photo: LADEE launch event at NASA Ames]

The LADEE spacecraft was developed at NASA Ames Research Center, in Silicon Valley and sitting right on the edge of the San Francisco Bay. Ames is home to the NASA Lunar Science Institute. So it was fitting that Ames hosted a “science night” for thousands of enthusiasts, with the LADEE launch as the highlight.

Now, for my friends who were expecting to see me at the Ames event, I apologize.  I had a major case of fatigue very early in the day, and did not think I would be able to drive home after the event.  So I left work after the last meeting of the day, into a big traffic jam (sigh), and connected my laptop to the biggest screen in the house so that I could watch it.

In recent years, our understanding of the Moon has been rapidly evolving. After decades of neglect following the Apollo missions of the late 1960s and early 1970s, we were suddenly on the hunt for water. As a result of Apollo, we believed that the Moon was an exceedingly dry, barren place with no hope of supporting humans without a continual lifeline of supplies from Earth.  So we stopped going.  Even the first pictures from the surface of Mars, from Viking in the 1970s, showed a dry, barren place with hardly any atmosphere.  A lot of people were probably wondering why governments were spending money on space at all.  In fact, the reason was international prestige; with that in hand, further scientific understanding was hard to justify to government budget and oversight committees.

It turns out we were wrong.  We now know water ice has been accumulating on the Moon, and Mars was once a very wet place.  We suspect that Jupiter’s moon Europa has an ocean beneath its icy surface. And comets, coming from the far reaches of the solar system, are an amalgamation of rock, dust, water ice, and frozen “gases” like carbon dioxide, methane, ammonia, etc.  (At least, the latter would be gases at room temperature and pressure on Earth.)

The rocks from the Apollo missions did indeed contain faint traces of moisture, but it was felt these were probably contaminants brought from Earth.  Indeed, the samples are pretty dry.  Similarly sized samples of the Earth’s driest deserts have more water content.

In a nutshell, it now appears that water is virtually everywhere in the solar system.  With some ingenuity, human settlements can be sustained as far from the Sun as the water goes.  The Moon is the first step in figuring out how to do this.

The Moon’s exosphere

Like the Earth, the Moon is bombarded by a steady stream of atomic and subatomic particles from the Sun, as well as meteorites from asteroids or other debris orbiting the Sun.  With no atmosphere like the Earth’s to slow or disintegrate small objects before they hit, the Moon is littered with small impact fragments that retain their sharp edges for millennia.  There is no erosion from abrasion against other fragments to round off the edges.  There is extreme heat and cold, which would drive out moisture and perhaps produce cracks.

The molecules and gases driven out would follow ballistic trajectories, hardly ever hitting another molecule before hitting the Moon’s surface, and perhaps bouncing again until they run out of energy. Protons from the Sun might ultimately combine with atomic oxygen, also from the Sun, first forming hydroxyl (one hydrogen, one oxygen), and later water (two hydrogens, one oxygen).  [The Sun is about 78% hydrogen, 20% helium, 0.86% oxygen, 0.4% carbon, etc.]  Furthermore, the surface is bathed in ultraviolet, knocking electrons off some atoms, leaving positively charged dust on the Moon that also has sharp edges.

[Sketch: twilight rays over the lunar horizon, as drawn during Apollo 17]

A funny thing happened on the Moon during the Apollo 17 mission.  Astronaut Gene Cernan was set to observe the coronal and zodiacal light (CZL) of the Sun when it was hidden by the Moon. Indeed, he did see it, but there was more. There should have been just a small hump of light over the horizon, but in fact there were additional columns of light across the horizon as the Sun prepared to rise. It turned out there were similar sightings during Apollo 8, 10, and 15 during CZL observations. In fact, there was a lunar horizon glow (LHG) during the Surveyor missions preceding Apollo. The problem is, it wasn’t consistently seen; it was highly variable.

The LHG seen by Surveyor was from the surface of the Moon. To an extent, this can be explained by electrostatic charge on lunar dust. Ultraviolet radiation kicks electrons off the dust, leaving the grains positively charged; they push away from each other and rise off the Moon’s surface. This effect is expected to reach a few meters up. But what the Apollo astronauts saw was at much higher altitudes.

Low-level lunar horizon glow (LHG) seen by Surveyor 7. Credit: NASA

It is possible that the electrostatic charge is propelling smaller particles faster, onto trajectories high above the lunar surface. It could also be from sodium atoms in the Moon’s exosphere.

Part of what LADEE will do is gather more information on the composition of this tenuous lunar atmosphere and dust environment.  (Hence, the name.)  What is learned here will undoubtedly shed light on what is happening on other small bodies elsewhere in the solar system.  One has to imagine the moons of Mars, or for that matter, the moons of Pluto.  (Pluto itself seems to have a very faint atmosphere; New Horizons will be visiting soon to find out more.)

Modularity

LADEE is built on a “modular common spacecraft bus” (MCSB); that is, a basic spacecraft framework (structure, propulsion, electrical buses).  It is intended to be utilized for a variety of missions, thus reducing the overall cost of spacecraft and mission development.

In fact, the MCSB is being utilized by Moon Express, a team competing for the Google Lunar X Prize.

Modularity and commonality are not new concepts in the spacecraft business. Commercial satellites are often built on a common bus, with small incremental improvements from one spacecraft bus to the next.  Exploratory spacecraft are often developed with pairs of hardware — one for the spacecraft to be launched, and a spare set for testing and backup.  The spare set is often later used for a cheaper spacecraft.  Venus Radar Mapper, aka Magellan, utilized spare components from Galileo.  If the Galileo spacecraft had been lost during launch or early in the mission, the spare set would have been appropriated to create the backup spacecraft, and there would have been no Magellan.  (At least, that was the plan. Check the launch dates; you’ll see something else happened.)

In the emerging industry of asteroid mining, one can see the glimmer of a spacecraft production line, where perhaps hundreds of spacecraft will ultimately be manufactured for various stages of asteroid resource exploration.  These are, so far, much smaller than the MCSB that LADEE uses.  (Although once resource extraction begins, it seems impossible for the spacecraft to retain a small size.)

With the bus prototyped and adapted to LADEE, the hope is that other missions will see fit to utilize it.  LADEE itself is a lunar orbiter.  Moon Express is building a lander.  A variety of other lander and rendezvous missions are possible using MCSB components.

Spaceports for small spacecraft

Wallops Island has been a flight facility for NASA, and for NACA before it, dating back to 1945. Sounding rockets have been launched from Wallops to study the upper atmosphere.  A variety of aircraft and scientific balloon missions have originated from there as well.

However, it is emerging as a launch facility for orbital and lunar missions.  This means payload integration, tracking range, etc.  While larger payloads are launched from Cape Canaveral to destinations like the International Space Station, Wallops is focused on small payloads in the range of 4-400 pounds (1.8 to 180 kilograms).

Rather than hitching rides as secondary payloads on larger rockets, small payloads get the opportunity to be primaries, giving mission teams more control and flexibility in what they can do.

This particular launch involved an Orbital Sciences Minotaur V rocket.  A few months earlier, on April 21, Orbital launched an Antares rocket, carrying a few small payloads, including three copies of PhoneSat-1, developed at NASA Ames.

These are unmanned launches.  There is also a trend toward adapting hardware developed for manned suborbital launches to unmanned orbital launches as well.

XCOR Aerospace is building its Lynx rocketplane to take off and land on a runway.  As the vehicle matures, a version is planned to have a dorsal pod that extends from the fuselage.  The pod will be able to hold a rocket stage that could put a small payload into orbit.  (This would be a nanosatellite such as a CubeSat.)

Virgin Galactic, which is building SpaceShipTwo for tourists, is also building LauncherOne for small payloads.  Both will utilize the WhiteKnightTwo aircraft as their launch platform.  Currently, Virgin Galactic is planning commercial flights at New Mexico’s “Spaceport America.”

Thus, the options for payloads ranging from small to very small are growing. Wallops and the state of Virginia undoubtedly hope to be a leading flight facility for launches of small payloads into orbit and beyond.

Putting it all together…

LADEE is a research and exploration mission built using a modular spacecraft design.  It is intended to answer questions about the nature of the tenuous lunar atmosphere/exosphere.  The lessons learned may be extrapolated to other small planetoids beyond Earth in the solar system. The launch from Wallops is part of an emerging trend to provide better options for smaller spacecraft.

Additional info

[Updates: 2013-09-08 around 0820 PDT – a few additional words about Galileo and Magellan.]