
History of US Astronuclear Reactors part 1: SNAP-2 and 10A

Hello, and welcome to Beyond NERVA! Today we’re going to look at the program that birthed the first astronuclear reactor to go into orbit, although the extent of the program far exceeds the flight record of a single launch.

Before we get into that, I have a minor administrative announcement that will develop into major changes for Beyond NERVA in the near-to-mid future! As you may have noticed, we have moved to a new domain. For the moment, there isn’t much different, but in the background a major webpage update is brewing! Not only will the home page be updated to make it easier to navigate the site (and see all the content that’s already available!), but the number of pages on the site is going to be increasing significantly. A large part of this is going to be integrating information that I’ve written about in the blog into a more topical, focused format, with easier access to both basic concepts and technical details being a priority. However, there will also be expansions on concepts, pages for technical concepts that don’t really fit anywhere in the blog posts, and more! As these updates go live, I’ll mention them in future blog posts. I’ll also post them on both the Facebook group and the new Twitter feed (currently not super active, largely because I haven’t found my “tweet voice” yet, but I hope to expand this soon!). If you are on either platform, you should definitely check them out!

The Systems for Nuclear Auxiliary Power, or SNAP, program was a major focus for a wide range of organizations in the US for many decades. The program extended everywhere from the bottom of the seas (SNAP-4, which we won’t be covering in this post) to deep space travel with electric propulsion. SNAP was divided up into an odd/even numbering scheme, with the odd model numbers (starting with SNAP-3) being radioisotope thermoelectric generators, and the even numbers (beginning with SNAP-2) being fission reactor electrical power systems.

Due to the sheer scope of the SNAP program, even eliminating systems that aren’t fission-based, this is going to be a multi-post subject. This post will cover the US Air Force’s portion of the SNAP reactor program: the SNAP-2 and SNAP-10A reactors; their development programs; the SNAPSHOT mission; and a look at the missions that these reactors were designed to support, including satellites, space stations, and other crewed and uncrewed installations. The next post will cover the NASA side of things: SNAP-8 and its successor designs, as well as SNAP-50/SPUR. The one after that will cover the SP-100, SABRE, and other designs from the late 1970s through the early 1990s, and will conclude with a look at a system that we mentioned briefly in the last post: the ENISY/TOPAZ-II reactor, the only astronuclear design to be flight qualified by the space agencies and nuclear regulatory bodies of two different nations.

SNAP capabilities 1964
SNAP Reactor Capabilities and Status as of 1973, image DOE

The Beginnings of the US Astronuclear Program: SNAP’s Early Years

Core Cutaway Artist
Early SNAP-2 Concept Art, image courtesy DOE

Beginning in the earliest days of both the nuclear age and the space age, nuclear power had a lot of appeal for the space program: high power density, high power output, and mechanically simple systems were in high demand for space agencies worldwide. The earliest mention of a program to develop nuclear electric power systems for spacecraft was the Pied Piper program, begun in 1954. This led to the development of the Systems for Nuclear Auxiliary Power program, or SNAP, the following year (1955), which was eventually canceled in 1973, as were so many other space-focused programs.

s2 Toroidal Station
SNAP-2 powered space station concept image via DOE

Once space became a realistic place to send not only scientific payloads but personnel, the need to provide them with significant amounts of power became evident. Not only were most systems of the day far from the electrically efficient designs that both NASA and Roscosmos would develop in the coming decades; but, at the time, the vision for a semi-permanent space station wasn’t 3-6 people orbiting in a (completely epic, scientifically revolutionary, collaboratively brilliant, and invaluable) zero-gee conglomeration of tin cans like the ISS, but larger space stations that provided centrifugal gravity, staffed ’round the clock by dozens of individuals. These weren’t just space stations for NASA, which was an infant organization at the time, but for the USAF, and possibly other institutions in the US government as well. In addition, what would provide a livable habitation for a group of astronauts would also be able to power a remote, uncrewed radar station in the Arctic, or in other extreme environments. Even if crew were present, the fact that the power plant wouldn’t have to be maintained was a significant military advantage.

Responsible for both radioisotope thermoelectric generators (which run on the natural radioactive decay of a radioisotope, selected according to its energy density and half-life) as well as fission power plants, SNAP programs were numbered with an even-odd system: even numbers were fission reactors, odd numbers were RTGs. These designs were never solely meant for in-space application, but the increased mission requirements and complexities of being able to safely launch a nuclear power system into space made this aspect of their use the most stringent, and therefore the logical one to design around. Additionally, while the benefits of a power-dense electrical supply are obvious for any branch of the military, the need for this capability in space far surpassed the needs of those on the ground or at sea.

Originally jointly run by the AEC’s Division of Reactor Development (which funded the reactor itself) and the USAF’s Wright Air Development Center (which funded the power conversion system), full control was handed over to the AEC in 1957. Atomics International was the prime contractor for the program.

There are a number of similarities in almost all the SNAP designs, probably for a number of reasons. First, all of the reactors that we’ll be looking at (as well as some other designs we’ll look at in the next post) used the same type of fissile fuel, even though the form, and the cladding, varied reasonably widely between the different concepts. Uranium-zirconium hydride (U-ZrH) was a very popular fuel choice at the time. Assuming hydrogen loss could be controlled (this was a major part of the testing regime in all the reactors that we’ll look at), it provided a self-moderating, moderate-to-high-temperature fuel form, which was a very attractive feature. This type of fuel is still used today in the TRIGA reactor, which, together with its direct descendants, is the most common form of research and test reactor worldwide. The higher-powered reactors (SNAP-2 and -8) both initially used variations on the same power conversion system: a boiling mercury Rankine power conversion cycle. By the end of the testing regime this cycle was shown to be workable, but to my knowledge it has never been proposed again (we’ll look at this briefly in the post on heat engines as power conversion systems, and a more in-depth look will be available in the future), although a mercury-based MHD conversion system is being offered as a power conversion system for an accelerator-driven molten salt reactor.

SNAP-2: The First American Built-For-Space Nuclear Reactor Design

S2 Artist Cutaway Core
SNAP-2 Reactor Cutaway, image DOE

The idea for the SNAP-2 reactor originally came from a 1951 Rand Corporation study, looking at the feasibility of having a nuclear powered satellite. By 1955, the possibilities that a fission power supply offered in terms of mass and reliability had captured the attention of many people in the USAF, which was (at the time) the organization that was most interested and involved (outside the Army Ballistic Missile Agency at the Redstone Arsenal, which would later become the Goddard Spaceflight Center) in the exploitation of space for military purposes.

The original request for the SNAP program, for what ended up becoming known as SNAP-2, came in 1955 from the AEC’s Division of Reactor Development and the USAF Wright Air Development Center. It called for possible power sources in the 1 to 10 kWe range that would be able to operate autonomously for one year, and the original proposal was for a zirconium hydride moderated, sodium-potassium (NaK) liquid metal cooled reactor with a boiling mercury Rankine power conversion system (similar to a steam turbine in operational principles, but we’ll look at the power conversion systems more in a later post), which is now known as SNAP-2. The design was refined into a 55 kWt, 5 kWe reactor operating at about 650°C outlet temperature, massing about 100 kg unshielded, and was tested for over 10,000 hours. The epithermal neutron spectrum that this self-moderating fuel provides would remain popular throughout much of the US in-space reactor program, both for electrical power and thermal propulsion designs. This design would later be adapted into the SNAP-10A reactor, with some modifications, as well.
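The 55 kWt / 5 kWe design point quoted above implies a net conversion efficiency of roughly 9% for the mercury Rankine system, quite good for the era. A quick sanity check on that arithmetic (the power figures are from the text; the efficiency is simply their ratio):

```python
# SNAP-2 design-point figures quoted in the text.
thermal_kw = 55.0   # reactor thermal output, kWt
electric_kw = 5.0   # net electrical output, kWe

# Net conversion efficiency is just electrical out over thermal in.
efficiency = electric_kw / thermal_kw
print(f"SNAP-2 net conversion efficiency: {efficiency:.1%}")  # ~9.1%
```

For comparison, the thermoelectric SNAP-10A discussed later in this post managed well under a tenth of that figure, which is the core tradeoff between the two designs.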

S2 Critical Assembly
SNAP Critical Assembly core, image DOE

SNAP-2’s first critical assembly test was in October of 1957, shortly after Sputnik-1’s successful launch. With 93% enriched 235U making up 8% of the weight of the U-ZrH fuel elements, a 1” beryllium inner reflector, and an outer graphite reflector (which could be varied in thickness), separated into two rough hemispheres to control the construction of a critical assembly, this device was able to test many of the reactivity conditions needed for materials testing economically and on a small scale, as well as test the behavior of the fuel itself. The primary concerns with testing on this machine were reactivity, activation, and the intrinsic steady-state behavior of the fuel that would be used for SNAP-2. A number of materials were also tested for reflection and neutron absorbency, both for main core components and for out-of-core mechanisms. This was followed by the SNAP-2 Experimental Reactor in 1959-1960 and the SNAP-2 Development Reactor in 1961-1962.

S2ER Cross Section
SNAP-2 Experimental Reactor core cross section diagram, image DOE

The SNAP-2 Experimental Reactor (S2ER or SER) was built to verify the core geometry and basic reactivity controls of the SNAP-2 reactor design, as well as to test the basics of the primary cooling system, materials, and other basic design questions, but was not meant to be a good representation of the eventual flight system. Construction started in June 1958, with construction completed by March 1959. Dry (Sept 15) and wet (Oct 20) critical testing was completed the same year, and power operations started on Nov 5, 1959. Four days later, the reactor reached design power and temperature operations, and by April 23 of 1960, 1000 hours of continuous testing at design conditions were completed. Following transient and other testing, the reactor was shut down for the last time on November 19, 1960, just over one year after it had first achieved full power operations. Between May 19 and June 15, 1961, the reactor was disassembled and decommissioned. Testing on various reactor materials, especially the fuel elements, was conducted, and these test results refined the design for the Development Reactor.

S2ER Schedule and Timeline

S2DR Core Xsec
SNAP 2 Development Reactor core cross section, image DOE

The SNAP-2 Development Reactor (S2DR or SDR, also called the SNAP-2 Development System, S2DS) was installed in a new facility at the Atomics International Santa Susana research facility to better manage the increased testing requirements for the more advanced reactor design. While this wasn’t going to be a flight-type system, it was designed to inform the flight system on many of the details that the S2ER wasn’t able to. This reactor is, interestingly, much harder to find information on than the S2ER. It incorporated many changes from the S2ER, and went through several iterations to tweak the design for a flight reactor. Zero power testing occurred over the summer of 1961, and testing at power began shortly after (although at SNAP-10 power and temperature levels). Testing continued until December of 1962, and further refined the SNAP-2 and -10A reactors.

S2DR Development Timeline

A third set of critical assembly reactors, known as the SNAP Development Assembly series, was constructed at about the same time, meant to provide fuel element testing, criticality benchmarks, reflector and control system worth, and other core dynamic behaviors. These were also built at the Santa Susana facility, and would provide key test capabilities throughout the SNAP program. This water-and-beryllium reflected core assembly allowed for a wide range of testing environments, and would continue to serve the SNAP program through to its cancellation. Going through three iterations, the designs were used more to test fuel element characteristics than the core geometries of individual core concepts. This informed all three major SNAP designs in fuel element material and, to a lesser extent, heat transfer (the SNAP-8 used thinner fuel elements) design.

Extensive testing was carried out on all aspects of the core geometry, fuel element geometry and materials, and other behaviors of the reactor; but by May 1960 there was enough confidence in the reactor design for the USAF and AEC to plan a launch program for the reactor (and the SNAP-10A), called SNAPSHOT (more on that below). Testing using the SNAP-2 Experimental Reactor occurred in 1960-1961, and the follow-on test program, including the SNAP-2 Development Reactor, occurred in 1962-63. These programs, as well as the SNAP Critical Assembly 3 series of tests (used for SNAP-2 and -10A), allowed for a mostly finalized reactor design to be completed.

S2 PCS Cutaway Drawing
CRU mercury Rankine power conversion system cutaway diagram, image DOE

Development of the power conversion system (PCS), a Rankine (steam) cycle turbine using mercury as the working fluid, was carried out starting in 1958, beginning with a mercury boiler to test the components in a non-nuclear environment. The turbine had many technical challenges, including bearing lubrication and wear issues, turbine blade pitting and erosion, fluid dynamics challenges, and other technical difficulties. As is often the case with advanced reactor designs, the main challenge wasn’t the reactor core itself, nor the control mechanisms for the reactor, but the non-nuclear portions of the power unit. This is a common theme in astronuclear engineering. More recently, JIMO experienced similar problems when the final system design called for a theoretical but not-yet-demonstrated supercritical CO2 Brayton turbine (as we’ll see in a future post). However, without a power conversion system of usable efficiency and low enough mass, an astronuclear power system doesn’t have a means of delivering the electricity that it’s called upon to deliver.

Reactor shielding, in the form of a metal honeycomb impregnated with a high-hydrogen material (in this case a form of paraffin), was common to all SNAP reactor designs. The paraffin offered the best hydrogen density of the available materials, and since hydrogen is an excellent neutron shield, this provided the greatest shielding per unit mass of the available options.

s10 FSM Reactor
SNAP-2/10A FSM reflector and drum mechanism pre-test, image DOE

Testing on the SNAP 2 reactor system continued until 1963, when the reactor core itself was re-purposed into the redesigned SNAP-10, which became the SNAP-10A. At this point the SNAP-2 reactor program was folded into the SNAP-10A program. SNAP-2 specific design work was more or less halted from a reactor point of view, due to a number of factors, including the slower development of the CRU power conversion system, the large number of moving parts in the Rankine turbine, and the advances made in the more powerful SNAP-8 family of reactors (which we’ll cover in the next post). However, testing on the power conversion system continued until 1967, due to its application to other programs. This didn’t mean that the reactor was useless for other missions; in fact, it was far more useful, due to its far more efficient power conversion system for crewed space operations (as we’ll see later in this post), especially for space stations. However, even this role would be surpassed by a derivative of the SNAP-8, the Advanced ZrH Reactor, and the SNAP-2 would end up being deprived of any useful mission.

The SNAP Reactor Improvement Program, in 1963-64, continued to optimize and advance the design without nuclear testing, through computer modeling, flow analysis, and other means; but the program ended without flight hardware being either built or used. We’ll look more at the missions that this reactor was designed for later in this blog post, after looking at its smaller sibling, the first reactor (and only US reactor) to ever achieve orbit: the SNAP-10A.

S2 Program History Table

SNAP-10: The Father of the First Reactor in Space

At about the same time as the early SNAP-2 development work (1958), the USAF requested a study on a thermoelectric power conversion system, targeting a 0.3 to 1 kWe power regime. This was the birth of what would eventually become the SNAP-10 reactor. This reactor would evolve in time into the SNAP-10A, the first nuclear reactor to go into orbit.

In the beginning, this design was superficially quite similar to the Soviet Romashka reactor (which we’ll examine when we cover the USSR’s designs), with plates of U-ZrH fuel, separated by beryllium plates for heat conduction, and surrounded by radial and axial beryllium reflectors. Purely conductively cooled internally, and radiatively cooled externally, the design was later changed to NaK forced convection cooling for better thermal management (see below); the conduction-cooled concept was later adapted into the SNAP-4 reactor, which was designed for underwater military installations rather than spaceflight. Outside the radial reflectors were the thermoelectric power conversion systems, with a finned radiating casing being the only major component that was visible. The design looked, superficially at least, remarkably like the RTGs that would be used for the next several decades. However, even with the low conversion efficiency of the thermoelectric system, this was a far more powerful source of electricity than the RTGs that were available at the time (or even today) for space missions.

Reactor and Shield Cutaway
SNAP-10A Reactor sketch, image DOE

Within a short period, however, the design was changed dramatically, resulting in a core very similar to that of the SNAP-2 reactor under development at the same time. Modifications were made to the SNAP-2 baseline, resulting in the reactor cores themselves becoming identical. This also led to the NaK cooling system being implemented on the SNAP-10A. Many of the test reactors for the SNAP-2 system were also used to develop the SNAP-10A; this is because the final design, while far lower powered in electrical output, differed mainly in the power conversion system, not the reactor structure. This reactor design was tested extensively, with the S2ER, S2DR, and SCA test series (4A, 4B, and 4C) reactors, as well as the SNAP-10 Ground Test Reactor (S10FS-1). The new design used a similar, but slightly smaller, conical radiator, with NaK as the working fluid feeding the radiator.

This was a far lower power design than the SNAP-2, coming in at 30 kWt, but with the 1.6% conversion efficiency of the thermoelectric system, its electrical power output was only about 500 We. It also ran almost 100°C (about 180°F) cooler, allowing for longer fuel element life, but leaving less of a thermal gradient to work with, and therefore a lower theoretical maximum efficiency. This tradeoff was the best on offer, though, and the power conversion system’s lack of moving parts, and its ease of being tested in a non-nuclear environment without extensive support equipment, made it more robust from an engineering point of view. The overall design life of the reactor, though, remained short: only about 1 year, and less than 1% fissile fuel burnup. It’s possible, and maybe even likely, that (barring spacecraft-associated failure) the reactor could have provided power for longer durations; however, the longer the reactor operates, the more the fuel swells due to fission product buildup, and at some point this would cause the cladding of the fuel to fail. Other challenges to reactor design, such as fission poison buildup, cladding erosion, mechanical wear, and others, would end the reactor’s operational life at some point, even if the fuel elements could still provide more power.
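The numbers above check out, and they also illustrate why the thermal gradient matters. The 30 kWt and 1.6% figures are from the text; the temperatures in the Carnot comparison below are illustrative assumptions (hot side taken as roughly 100°C below SNAP-2’s ~650°C outlet, and a guessed ~300°C radiator temperature), not program data:

```python
# Sanity check on the SNAP-10A power figures quoted above.
thermal_w = 30_000   # 30 kWt, from the text
conversion = 0.016   # 1.6% thermoelectric efficiency, from the text

electric_w = thermal_w * conversion
print(f"Electrical output: {electric_w:.0f} We")  # 480 We, i.e. the ~500 We quoted

# Illustrative Carnot ceiling at assumed temperatures (NOT program data):
t_hot_k = 550 + 273.15   # assumed hot-side temperature, K
t_cold_k = 300 + 273.15  # assumed radiator temperature, K
carnot = 1 - t_cold_k / t_hot_k
print(f"Carnot limit at those temperatures: {carnot:.0%}")  # ~30%
```

The gap between the ~30% thermodynamic ceiling and the 1.6% actually achieved is characteristic of thermoelectrics: reliability and simplicity were bought at the cost of efficiency.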

SNAP Meteorological Satellite
SNAP-10A satellite concept, image DOE

The SNAP-10A was not meant to power crewed facilities, since the power output was so low that multiple installations would be needed. This meant that, while all SNAP reactors were meant to be largely or wholly unmaintained by crew personnel, this reactor had no possibility of being maintained. The reliability requirements for the system were higher because of this, and the lack of moving parts in the power conversion system aided in meeting this requirement. The system was also designed to have only a brief (72 hour) period during which active reactivity control would be used, to mitigate any startup transients and to establish steady-state operations, before the active control systems would be left in their final configuration, leaving the reactor entirely self-regulating. This placed an additional burden on the reactor designers to have a very strong understanding of the behavior of the reactor, its long-term stability, and any effects that would occur during the year-long lifetime of the system.
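The self-regulation described above rests on the U-ZrH fuel’s negative temperature coefficient of reactivity: as the core heats up, reactivity falls, so the reactor settles at an equilibrium temperature without any active control. A toy sketch of that equilibrium, with every coefficient invented purely for illustration (the actual SNAP-10A coefficients are not quoted in this post):

```python
# Toy model of self-regulation via a negative temperature coefficient.
# All numbers below are assumptions for demonstration, not SNAP data.
alpha = 2e-5       # reactivity lost per degree C above nominal (assumed)
t_nominal = 500.0  # nominal core temperature in C (assumed)

def net_reactivity(t_core, rho_inserted):
    """Inserted reactivity minus temperature feedback."""
    return rho_inserted - alpha * (t_core - t_nominal)

# With the control drums locked leaving a small excess reactivity,
# the core heats up until feedback cancels it out exactly.
rho = 0.001  # excess reactivity left in at lockout (assumed)
t_eq = t_nominal + rho / alpha
print(f"Core settles at {t_eq:.0f} C")  # 550 C: power self-limits
print(f"Net reactivity there: {net_reactivity(t_eq, rho):.2e}")  # ~0
```

Any perturbation that heats the core drives reactivity negative and pulls it back down, which is why the designers could lock the drums after 72 hours and walk away, provided they understood the feedback behavior precisely.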

Reflector Ejection Test
Reflector ejection test in progress, image DOE

At the end of the reactor’s life, it was designed to stay in orbit until the short-lived and radiotoxic portions of the reactor had gone through at least five half-lives, reducing the radioactivity of the system to a very low level. At the end of this process, the reactor would re-enter the atmosphere, the radial and end reflectors would be ejected, and the entire thing would burn up in the upper atmosphere. From there, winds would dilute any residual radioactivity to less than what was released by a single small nuclear test (which were still being conducted in Nevada at the time). While there’s nothing wrong with this approach from a health physics point of view, as we saw in the last post on the BES-5 reactors the Soviet Union was flying, there are major international political problems with this concept. The SNAPSHOT reactor continues to orbit the Earth (currently at an altitude of roughly 1300 km), and will do so for more than 2000 years according to recent orbital models, so the system is not in danger of re-entry any time soon; but, at some point, the reactor will need to be moved into a graveyard orbit or collected and returned to Earth, a problem which currently has no solution.
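The “five half-lives” rule of thumb above is simple exponential decay: after n half-lives, the activity of a given fission product falls by a factor of 2^n, so five half-lives leaves only about 3% of the initial activity.

```python
# After n half-lives, a radionuclide's activity falls by a factor of 2**n.
for n in (1, 3, 5):
    remaining = 0.5 ** n
    print(f"After {n} half-lives: {remaining:.1%} of initial activity")
# After 5 half-lives only ~3.1% of the initial activity remains.
```

Note that this applies per isotope; a real fission product inventory is a mix of half-lives, which is why the criterion is stated in terms of the short-lived, most radiotoxic fraction.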

The Runup to Flight: Vehicle Verification and Integration

1960 brought big plans for orbital testing of both the SNAP-2 and SNAP-10 reactors under the SNAPSHOT program: two SNAP-10 launches and two SNAP-2 launches would be made. Lockheed Missiles System Division was chosen as the launch vehicle, systems integration, and launch operations contractor for the program, while Atomics International, working under the AEC, was responsible for the power plant.

The SNAP-10A reactor design was meant to be decommissioned by orbiting for long enough that the fission product inventory (the waste portion of the burned fuel elements, and the source of the vast majority of the radiation from the reactor post-fission) would naturally decay away, and then the reactor would be de-orbited, and burn up in the atmosphere. This was planned before the KOSMOS-954 accident, when the possibility of allowing a nuclear reactor to burn up in the atmosphere was not as anathema as it is today. This plan wouldn’t increase the amount of radioactivity that the public would receive to any appreciable degree; and, at the time, open-air testing of nuclear weapons was the norm, sending up thousands of kilograms of radioactive fallout per year. However, it was important that the fuel rods themselves would burn up high in the atmosphere, in order to dilute the fuel elements as much as possible, and this is something that needed to be tested.

RFD-1 Experimental Payload

Enter the SNAP Reactor Flight Demonstration Number 1 mission, or RFD-1. The concept of this test was to demonstrate that the planned disassembly and burnup process would occur as expected, and to inform the further design of the reactor if there were any unexpected effects of re-entry. Sandia National Labs took the lead on this part of the SNAPSHOT program. After looking at the budget available, the launch vehicles available, and the payloads, the team realized that orbiting a nuclear reactor mockup would be too expensive, and another solution needed to be found. This led to the mission design of RFD-1: a sounding rocket would be used, and the core geometry would be changed to account for the short flight time, compared to a real reentry, in order to get the data needed for the de-orbiting testing of the actual SNAP-10A reactor that would be flown.

So what does this mean? Ideally, a mockup of the SNAP-10A reactor would have been developed, with the only difference being that there wouldn’t be any highly enriched uranium in the fuel elements as normally configured; instead, depleted uranium would be used. It would be launched on the same launch vehicle that the SNAPSHOT mission would use (an Atlas-Agena D), be placed in the same orbit, and then be deorbited at the same angle and in the same place as the actual reactor would be; perhaps even at a slightly less favorable reentry angle, to learn how accurate the calculations were, and what the margin of error would be. However, an Atlas-Agena rocket isn’t a cheap piece of hardware, either to purchase or to launch, and the project managers knew that they wouldn’t be able to afford one, so they went hunting for a more economical alternative.

RFD1 Flight Path
RFD-1 Mission Profile, image DOE

This led the team to decide on a NASA Scout sounding rocket as the launch vehicle, launched from the Wallops Island launch site (which still launches sounding rockets, as well as the Antares rocket, to this day, and is expanding to launch Vector Space and RocketLab orbital rockets in the coming years). Sounding rockets don’t reach orbital altitudes or velocities, but they get close, and so can be used effectively, and for much less money, to test components for systems that would eventually fly in orbit. The downside is that they’re far smaller, with less payload capacity and less velocity than their larger, orbital cousins. This meant compromising on the design of the dummy reactor in significant ways, but in ways that couldn’t compromise the usefulness of the test.

Sandia Corporation (which runs Sandia National Laboratories to this day, although who runs Sandia Corp changes… it’s complicated) and Atomics International engineers got together to figure out what could be done with the Scout rocket and a dummy reactor to provide as useful an engineering validation as possible, while sticking within the payload requirements and flight profile of the relatively small, suborbital rocket that they could afford. Because the dummy reactor wouldn’t be going nearly as fast as it would during true re-entry, a steeper angle of attack when the test was returning to Earth was necessary to get the velocity high enough to get meaningful data.

Scout rocket
Scout sounding rocket, image DOE

The Scout rocket that was being used had much less payload capability than the Atlas rocket, so if there was a system that could be eliminated, it was removed to save weight. No NaK was flown on RFD-1, the power conversion system was left off, the NaK pump was simulated by an empty stainless steel box, and the reflector assembly was made out of aluminum instead of beryllium, both for weight and toxicity reasons (BeO is not something that you want to breathe!). The reactor core didn’t contain any dummy fuel elements, just a set of six stainless steel spacers to keep the grid plates at the appropriate separation. Because the angle of attack was steeper, the test would be shorter, meaning that there wouldn’t be time for the reactor’s reflectors to degrade enough to release the fuel elements. The fuel elements were the most important part of the test, however, since it needed to be demonstrated that they would completely burn up upon re-entry, so a compromise was found.

The fuel elements would be clustered on the outside of the dummy reactor core, and ejected early in the burnup test period. While the short time and high angle of attack meant that there wouldn’t be enough time to observe full burnup, the beginning of the process would provide enough data to allow accurate simulations of the process to be made. How to ensure that this data, the most important part of the test, could be collected was another challenge, though, which forced even more compromises in RFD-1’s design. Testing equipment had to be mounted in such a way as to not change the aerodynamic profile of the dummy reactor core. Other minor changes were needed as well; but despite all of the differences between RFD-1 and the actual SNAP-10A, the thermodynamics and aerodynamics of the system were different in only very minor ways.

Testing support came from Wallops Island and NASA’s Bermuda tracking station, as well as three ships and five aircraft stationed near the impact site for radar observation. The ground stations would provide both radar and optical support for the RFD-1 mission, verifying reactor burnup, fuel element burnup, and other test objective data, while the aircraft and ships were primarily tasked with collecting telemetry data from on-board instruments, as well as providing additional radar data; although one NASA aircraft carried a spectrometer in order to analyze the visible radiation coming off the reentry vehicle as it disintegrated.

RFD-1 Burnup Splice
Film splice of RV burnup during RFD-1, image DOE

The test went largely as expected. Due to the steeper angle of attack, full fuel element burnup wasn’t possible, even with the early ejection of the simulated fuel rods, but the amount that they did disintegrate during the mission showed that the reactor’s fuel would be sufficiently distributed at a high enough altitude to prevent any radiological risk. The dummy core behaved mostly as expected, although there were some disagreements between the predicted behavior and the flight data, due to the fact that the re-entry vehicle was on such a steep angle of attack. However, the test was considered a success, and paved the way for SNAPSHOT to go forward.

The next task was to mount the SNAP-10A to the Agena spacecraft. Because the reactor was a very different power supply than was used at the time, special power conditioning units were needed to transfer power from the reactor to the spacecraft. This subsystem was mounted on the Agena itself, along with tracking and command functionality, control systems, and voltage regulation. While Atomics International worked to ensure the reactor would be as self-contained as possible, the reactor and spacecraft were fully integrated as a single system. Besides the reactor itself, the spacecraft carried a number of other experiments, including a suite of micrometeorite detectors and an experimental cesium contact thruster, which would operate from a battery system that would be recharged by electricity produced by the reactor.

S10FSM Vac Chamber
FSEM-3 in vacuum chamber for environmental and vibration tests, image DOE

In order to ensure the reactor could be integrated with the spacecraft, a series of Flight System Prototypes were built (FSM-1 and -4; FSEM-2 and -3 were used for electrical system integration). These were full scale, non-nuclear mockups that contained a heating unit to simulate the reactor core. Simulations were run using FSM-1 from launch to startup on orbit, with all testing occurring in a vacuum chamber. The final one of the series, FSM-4, was the only one that used NaK coolant in the system, and was used to verify that the thermal performance of the NaK system met flight system requirements. FSEM-2 did not have a functional power system; instead, it used a mass mockup of the reactor, power conversion system, radiator, and other associated components. Testing with FSEM-2 showed that there were problems with the original electrical design of the spacecraft, which required a rebuild of the test-bed and a modification of the flight system itself. Once complete, the renamed FSEM-2A underwent a series of shock, vibration, acceleration, temperature, and other tests (known as the “Shake and Bake” environmental tests), which it subsequently passed. The final mockup, FSEM-3, underwent extensive electrical systems testing at Lockheed’s Sunnyvale facility, using simulated mission events to test the compatibility of the spacecraft and the reactor. Additional electrical systems changes were implemented before the program proceeded, but by the middle of 1965 the electrical system and spacecraft integration tests were complete, and the necessary changes were incorporated into the flight vehicle design.

SNAP-10A F-3 undergoing final checks before spacecraft integration; the SNAPSHOT flight unit, S10F-4, was identical.

The last round of pre-flight testing was a test of a flight-configured SNAP-10A reactor under fission power. This nuclear ground test unit, S10F-3, was identical to the system that would fly on SNAPSHOT, save some small ground safety modifications, and was tested from January 22, 1965 to March 15, 1966. It operated uninterrupted for over 10,000 hours, with the first 390 days at a power output of 35 kWt, and (following AEC approval) an additional 25 days of testing at 44 kWt. This testing showed that, after one year of operation, the continuing problem of hydrogen redistribution caused the reactor’s outlet temperature to drop more than expected, and additional, relatively minor, uncertainties about reactor dynamics were seen as well. Overall, however, the test was a success, and paved the way for the launch of the SNAPSHOT spacecraft in April 1965. The continued testing of S10F-3 during the SNAPSHOT mission verified that the thermal behavior of an astronuclear power system during ground test is essentially identical to that of an orbiting system, proving the ground test strategy that had been employed for the SNAP program.

SNAPSHOT: The First Nuclear Reactor in Space

In 1963 there was a change in the way the USAF was funding these programs. While the reactors themselves were solely under the direction of the AEC, the USAF still funded research into the power conversion systems, since they were still operationally useful; that changed in 1963, with the removal of the 0.3 kWe to 1 kWe portion of the program. Budget cuts killed the ZrH-moderated core of the SNAP-2 reactor, although funding continued for the Hg vapor Rankine conversion system (which was being developed by TRW) until 1966. The SNAP-4 reactor, which had not even been run through criticality testing, was canceled, as was the planned flight test of the SNAP-10A, which had been funded under the USAF, because the Air Force no longer had an operational need for the power system once the 0.3-1 kWe program was cut. The associated USAF program that would have used the power supply was well behind schedule and over budget, and was canceled at the same time.

The USAF attempted to get more funding, but was denied. All parties involved held a series of meetings to figure out how to save the program, and worked together to try to get a reduced SNAPSHOT program through, but the needed funds weren’t forthcoming: funding shortfalls in the AEC (which received only $8.6 million of the $15 million it requested), as well as severe restrictions on the Air Force (which continued to fund Lockheed for the development and systems integration work through bureaucratic creativity), kept the program from moving forward. At the same time, it was realized that being able to deliver kilowatts or megawatts of electrical power, rather than the watts currently available, would make the reactor a much more attractive program for a potential customer (either the USAF or NASA).

Finally, in February of 1964 the Joint Congressional Committee on Atomic Energy was able to fund the AEC to the tune of $14.6 million to complete the SNAP-10A orbital test. This reactor design had already been extensively tested and modeled, and unlike the SNAP-2 and -8 designs, no complex, highly experimental, mechanical-failure-prone power conversion system was needed.

On Orbit Artist
SNAP-10A reactor, artist’s rendering (artist unknown), image DOE

SNAPSHOT consisted of a SNAP-10A fission power system mounted to a modified Agena-D spacecraft, by this time an off-the-shelf, highly adaptable spacecraft used by the US Air Force for a variety of missions. An experimental cesium contact ion thruster (read more about these thrusters on the Gridded Ion Engine page) was installed on the spacecraft for in-flight testing. The mission was to validate the SNAP-10A architecture with on-orbit experience, proving the capability to operate for 90 days without active control while providing 500 W (28.5 V DC) of electrical power. Additional requirements included: the use of a SNAP-2 reactor core with minimal modification (so that the higher-output SNAP-2 system, with its mercury vapor Rankine power conversion system, could be validated as well when the need arose); eliminating the need (while offering the option) for active control of the reactor for one year once startup was achieved, to prove autonomous operation capability; facilitating safe ground handling during spacecraft integration and launch; and accommodating future growth potential in both available power and power-to-weight ratio.

While the threshold for mission success was set at 90 days, Atomics International wanted to prove one year of capability for the system; so, in those 90 days, the goal was to demonstrate that the entire reactor system was capable of one year of operation (the SNAP-2 requirement). Atomics International imposed additional, more stringent guidelines for the mission as well, specifying a number of design requirements, including: self-containment of the power system outside the structure of the Agena, as much as possible; more stringent mass and center-of-gravity requirements for the system than specified by the US Air Force; and meeting the military specifications for EM radiation exposure to the Agena, among others.

SNAPSHOT launch, image USAF via Gunter’s Space Page

The flight was formally approved in March, and the launch occurred on April 3, 1965 on an Atlas-Agena D rocket from Vandenberg Air Force Base. The launch went perfectly, and placed the SNAPSHOT spacecraft in a polar orbit, as planned. Sadly, the mission could not be considered either routine or simple. One of the impedance probes failed before launch, and part of the micrometeorite detector system failed before returning data. A number of other minor faults were detected as well, but perhaps the most troubling were the shorts and voltage irregularities coming from the ion thruster, due to high voltage failure modes, as well as excessive electromagnetic interference from the system, which reduced the telemetry data to an unintelligible mess. The thruster was shut off until later in the flight, in order to focus on testing the reactor itself.

The reactor was given the startup order 3.5 hours into the flight, when the two gross adjustment control drums were fully inserted and the two fine control drums began a stepwise reactivity insertion into the reactor. Within 6 hours, the reactor achieved on-orbit criticality, and the active control portion of the reactor test program began. For the next 154 hours, the control drums were operated by ground command to test reactor behavior. Due to the problems with the ion engine, the failure-sensing and malfunction-sensing systems were also switched off, because these could have been corrupted by the errant thruster. Following the first 200 hours of reactor operations, the reactor was set to autonomous operation at full power. Between 600 and 700 hours later, the voltage output of the reactor, as well as its temperature, began to drop, an effect that the S10F-3 test reactor had also demonstrated due to hydrogen migration in the core.

On May 16, just over one month after launch, contact was lost with the spacecraft for about 40 hours. Some time during this blackout, the reactor’s reflectors ejected from the core (although they remained attached to their actuator cables), shutting down the core. This spelled the end of reactor operations for the spacecraft, and when the emergency batteries died five days later, all communication with the spacecraft was lost for good. Only 45 days had passed since launch, and data was received from the spacecraft for only 616 orbits.

What caused the failure? There are many possibilities, but when the telemetry from the spacecraft was read, it was obvious that something had gone badly wrong. The only thing that can be said with complete confidence is that the error came from the Agena spacecraft rather than from the reactor. No indications had been received before the blackout that the reactor was about to scram itself (the reflector ejection was the emergency scram mechanism), and the problem wasn’t one that should have been able to occur without ground commands. However, with the telemetry data gained from the dwindling battery after the shutdown, some suppositions could be made. The most likely immediate cause of the reactor’s shutdown was traced to a possible spurious command from the high voltage command decoder, part of the Agena’s power conditioning and distribution system. This in turn was likely caused by one of two possible scenarios: either a piece of the voltage regulator failed, or it became overstressed because of either the unusually low-power vehicle loads or commanding the reactor to increase power output. Sadly, the root cause of this failure cascade was never directly determined, but all of the data received pointed to a high-voltage failure of some sort, rather than a low-voltage error (which could also have resulted in a reactor scram). Other possible causes of instrumentation or reactor failure – the thermal or radiation environment, collision with another object, onboard explosion of the chemical propellants used in the Agena’s main engines, and previously noted flight anomalies, including the arcing and EM interference from the ion engine – were all eliminated as well.

Despite the spacecraft’s mysterious early demise, SNAPSHOT provided many valuable lessons in space reactor design, qualification, ground handling, launch challenges, and many other aspects of handling an astronuclear power source for potential future missions. Suggestions for improved instrumentation design and performance characteristics; provision of a sunshade for the main radiator to eliminate the sun/shade efficiency difference observed during the mission; and the use of a SNAP-2 type radiation shield to allow off-the-shelf, non-radiation-hardened electronic components, saving both money and weight on the spacecraft itself, were all among the changes proposed after the conclusion of the mission. Finally, the safety program developed for SNAPSHOT – including the SCA4 submersion criticality tests and the RFD-1 test – and the good agreement in reactor behavior between the on-orbit and ground test versions of the SNAP-10A showed that both the AEC and the customer of the SNAP-10A (be it the US Air Force or NASA) could have confidence that the system was ready for whatever mission needed it.

Sadly, at the time of SNAPSHOT there simply wasn’t a mission that needed this system. 500 We isn’t much power, even though it was more than many contemporary systems required. While improvements in the thermoelectric generators continued to come in (and would do so all the way to the present day, where thermoelectric systems are used for everything from RTGs on space missions to waste heat recapture in industrial facilities), the simple truth of the matter was that there was no mission that needed the SNAP-10A, so the program was largely shelved. Some follow-on paper studies would be conducted, but the lowest-powered of the US astronuclear designs, and the first reactor to operate in Earth orbit, was retired almost immediately after the SNAPSHOT mission.

Post-SNAPSHOT SNAP: the SNAP Improvement Program

The SNAP fission-powered program didn’t end with SNAPSHOT – far from it. While the SNAP reactors only ever flew once, their design was mature, well-tested, and in most particulars ready to fly in short order, and the problems associated with those particulars had been well-addressed on the nuclear side of things. The Rankine power conversion system for the SNAP-2, which went through five iterations, reached technological maturity as well, having operated in a non-nuclear environment for close to 5,000 hours while remaining in excellent condition, meaning that the 10,000 hour requirement for the PCS could be met without any significant challenges. The thermoelectric power conversion system also continued to be developed, focusing on an advanced silicon-germanium thermoelectric converter, which was highly sensitive to fabrication and manufacturing processes. We’ll look more at thermoelectrics in the power conversion systems series of blog posts; for now, just keep in mind that the power conversion systems continued to improve throughout this time, not just the reactor core design.

LiH FE Cracking
Fuel element post-irradiation. Notice the cracks where the FE would rest in the endcap reflector

On the reactor side of things, the biggest challenge was definitely hydrogen migration within the fuel elements. As the hydrogen migrates away from the ZrH fuel, many problems occur: unpredictable reactivity within the fuel elements; temperature changes (dehydrogenated fuel elements developed hotspots, which in turn drove more hydrogen out of the fuel element); and changes in the ductility of the fuel. These caused major headaches for end-of-life behavior of the reactors and severely limited the fuel element temperature that could be achieved. However, the necessary testing for the improvement of these systems could easily be conducted with less-expensive reactor tests, including the SCA4 test-bed, and didn’t require flight architecture testing.

The maturity of these two reactors led to a short-lived program in the 1960s to improve them, the SNAP Reactor Improvement Program. The SNAP-2 and -10 reactors went through many different design changes, some large and some small – and some leading to new reactor designs based on the shared reactor core architecture.

By this time, the SNAP-2 had mostly faded into obscurity. However, the fact that it shared a reactor core with the SNAP-10A, and that the power conversion system was continuing to improve, warranted some small studies to improve its capabilities. Two of note are independent of the core (all of the design changes for the -10 that will be discussed can be applied to the -2 core as well, since at this point the cores were identical): the change from a single mercury boiler to three, to allow more power throughput and to reduce loads on one of the more challenging components; and the combination of multiple cores into a single power unit. These were proposed together for a space station design (which we’ll look at later) to allow an 11 kWe power supply for a crewed station.

The vast majority of this work was done on the -10A. Any further reactors of this type would have had an additional three sets of 1/8” beryllium shims on the external reflector, increasing the initial reactivity by about 50 cents (reactivity is measured in dollars and cents: $0 is exactly critical, and $1 of reactivity – equal to the effective delayed neutron fraction – makes a reactor prompt critical; excess reactivity at startup is often somewhere around $2-$3 to account for fission product buildup). This means that additional burnable poisons (elements which absorb neutrons, then decay into something that is mostly neutron transparent, evening out the reactivity of the reactor over its lifetime) could be inserted in the core at construction, mitigating the problems of reactivity loss experienced during earlier operation of the reactor. With this, and a number of other minor tweaks to reflector geometry and a slightly lowered core outlet temperature, the life of the SNAP-10A was able to be extended from the initial design goal of one year to five years of operation. The end-of-life power level of the improved -10A was 39.5 kWt, with an outlet temperature of 980 F (527°C) and a power density of 0.14 kWt/lb (0.31 kWt/kg).
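
The dollar unit is convenient because it normalizes reactivity to the effective delayed neutron fraction, β_eff, so $1 always marks the prompt-critical threshold regardless of the fuel. A minimal sketch of the bookkeeping, assuming a β_eff of 0.007 (a typical order of magnitude for a U-235 fueled core; the actual SNAP value isn’t quoted here):

```python
# Reactivity in dollars is absolute reactivity divided by the effective
# delayed neutron fraction beta_eff; rho = beta_eff (i.e. $1) is prompt critical.
beta_eff = 0.007  # assumed typical value for a U-235 fueled core

def dollars_to_rho(dollars):
    """Convert reactivity in dollars to absolute (dimensionless) reactivity."""
    return dollars * beta_eff

def rho_to_dollars(rho):
    """Convert absolute reactivity to dollars."""
    return rho / beta_eff

# The three extra beryllium shim sets were worth roughly 50 cents:
shim_rho = dollars_to_rho(0.50)  # ~0.0035 in absolute reactivity
```

With this normalization, the quoted $2-$3 of startup excess reactivity corresponds to an absolute reactivity of roughly 0.014-0.021, held down by the control drums and burnable poisons until fission product buildup consumes it.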


Interim 10A 2
Interim SNAP 10A/2, image DOE

These design modifications led to another iteration of the SNAP-10A, the Interim SNAP-10A/2 (I-10A/2). This reactor’s core was identical, but the reflector was further enhanced, and the outlet temperature and reactor power were both increased. In addition, even more burnable poisons were added to the core to account for the higher power output of the reactor. Perhaps the biggest design change with the Interim -10A/2 was the method of reactor control: rather than the passive control used on the -10A, the entire period of operation for the I-10A/2 was actively controlled, using the control drums to manage the reactivity and power output of the reactor. As with the improved -10A design, this reactor would have an operational lifetime of five years. These improvements gave the I-10A/2 an end-of-life power rating of 100 kWt, an outlet temperature of 1200 F (648°C), and an improved power density of 0.33 kWt/lb (0.73 kWt/kg).
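
The mixed US/metric figures quoted for these variants convert directly; a quick sketch to cross-check the reported numbers (the conversion factors are standard, the specific values are the ones from the text):

```python
# Cross-checking the quoted outlet temperatures (F -> C) and power
# densities (kWt/lb -> kWt/kg) for the SNAP-10A variants.
LB_PER_KG = 2.20462  # pounds per kilogram

def f_to_c(t_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (t_f - 32.0) * 5.0 / 9.0

def per_lb_to_per_kg(x):
    """Convert a per-pound quantity to per-kilogram."""
    return x * LB_PER_KG

print(round(f_to_c(980)))                # improved -10A outlet: 527 C
print(round(per_lb_to_per_kg(0.33), 2))  # I-10A/2 power density: 0.73 kWt/kg
```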

10A2 Table
Interim SNAP 10A/2 Design References, image DOE

This design, in turn, led to the Upgraded SNAP-10A/2 (U-10A/2). The biggest in-core difference between the I-10A/2 and the U-10A/2 was the hydrogen barrier used in the fuel elements: rather than the initial design common to the -2, -10A, and I-10A/2, this reactor used the hydrogen barrier from the SNAP-8 reactor, which we’ll look at in the next blog post. This is significant because the degradation of the hydrogen barrier over time, and the resulting loss of hydrogen from the fuel elements, was the major lifetime-limiting factor of the SNAP-10 variants up until this point. This reactor also went back to static control, rather than the active control used in the I-10A/2. As with the other -10A variants, the U-10A/2 had a possible core lifetime of five years, and other than an improvement of 100 F in outlet temperature (to 1300 F, or 704°C) and a marginal drop in power density to 0.31 kWt/lb, it shared the characteristics of the I-10A/2.

SNAP-10B: The Upgrade that Could Have Been

10B Cutaway System
SNAP-10B Cutaway Diagram, image DOE

One consistent mass penalty in the SNAP-10A variants that we’ve looked at so far is the control drums: relatively large reactivity insertions were possible with a minimum of movement due to the wide profile of the drums, but this also meant that they extended well away from the reflector, especially early in the mission. This meant that, in order to prevent neutron backscatter from hitting the rest of the spacecraft, the shield had to be relatively wide compared to the size of the core – and the shield was not exactly a lightweight system component.

The SNAP-10B reactor was designed to address this problem. It used a similar core to the U-10A/2, with the upgraded hydrogen barrier from the -8, but the reflector was tapered to better fit the profile of the shadow shield, and axially sliding control cylinders would be moved in and out to provide control instead of the rotating drums of the -10A variants. A number of minor reactor changes were needed, and some of the reactor physics parameters changed due to this new control system; but, overall, very few modifications were needed.

The first -10B reactor, the -10B Basic (B-10B), was a very simple and direct evolution of the U-10A/2, with nothing but the reflector and control structures changed to the -10B configuration. Other than a slight drop in power density (to 0.30 kWt/lb), the rest of the performance characteristics of the B-10B were identical to the U-10A/2. This design would have been a simple evolution of the -10A/2, with a slimmer profile to help with payload integration challenges.

10B Basic Table
Image DOE

The next iteration of the SNAP-10B, the Advanced -10B (A-10B), had options for significant changes to the reactor core and the fuel elements themselves. One thing to keep in mind about these reactors is that they were being designed above and beyond any specific mission needs, and, on top of that, a production schedule hadn’t been laid out for them. This means that many of the design characteristics of these reactors were never “frozen” – the point in the design process when the production team of engineers needs a basic configuration that won’t change in order to proceed with the program, although many minor changes (and possibly some major ones) would continue to be made up until the system was flight qualified.

Up until now, every SNAP-10 design used a 37 fuel element core, with the only difference occurring in the Upgraded -10A/2 and Basic -10B reactors (which changed the hydrogen barrier ceramic enamel inside the fuel element clad). With the A-10B, however, there were three core size options: the first kept the 37 fuel element core, the second used a medium-sized 55-element core, and the third a large 85-element core. There were other open questions about the final design as well, centered on two other major core changes (along with a lot of minor ones). The first option was to add a “getter”: a sheath of highly hydrogen-absorbing metal outside the steel casing of the clad, but still within the active region of the core. While this isn’t as ideal as containing the hydrogen within the U-ZrH itself, the neutron moderation provided by the hydrogen would be lost at a far lower rate. The second option was to change the core geometry itself as the temperature of the core changed, using devices called “Thermal Coefficient Augmenters” (TCA). Two mechanisms were suggested: first, a bellows system driven by NaK core temperature (using ruthenium vapor), which would move a portion of the radial reflector to change the core’s reactivity; second, securing grids for the fuel elements that would expand as the NaK increased in temperature and contract as the coolant cooled.

Between the options available, with core size, fuel element design, and variable core and fuel element configuration all up in the air, the Advanced SNAP-10B was a wide range of reactors rather than just one. Many characteristics remained identical across the range, including the fissile fuel itself, the overall core size, maximum outlet temperature, and others. However, the number of fuel elements in the core alone resulted in a wide range of power outputs, and whichever core modification the designers ultimately decided upon (getter vs TCA; I haven’t seen any indication that the two were combined) would determine the actual capabilities of the reactor core. For simplicity’s sake, and because of the very limited documentation available on the SNAP-10B program (other than a general comparison table from the SNAP Systems Capability Study from 1966), we’ll focus on the 85 fuel element versions of the two options: the Getter core and the TCA core.

A final note, which isn’t clear from these tables: each of these reactor cores was nominally optimized to a 100 kWt power output; the additional fuel elements reduced the power density required from the core at any given time in order to maximize fuel lifetime. Even with the improved hydrogen barriers and the variable core geometry, while these systems CAN offer higher power, it comes at the cost of a shorter – but still minimum one year – reactor system life. Because of this, all reported estimates assume a 100 kWt power level unless otherwise stated.

Yt Getter FE
Fuel element with yttrium getter, image DOE

The idea of a hydrogen “getter” was not a new one at the time that it was proposed, but it was one that hadn’t been investigated thoroughly at that point (and is a very niche requirement in terrestrial nuclear engineering). The basic concept is to get the second-best option when it comes to hydrogen migration: if you can’t keep the hydrogen in your fuel element itself, then the next best option is keeping it in the active region of the core (where fission is occurring, and neutron moderation is the most directly useful for power production). While this isn’t as good as increasing the chance of neutron capture within the fuel element itself, it’s still far better than hydrogen either dissolving into your coolant, or worse yet, migrating outside your reactor and into space, where it’s completely useless in terms of reactor dynamics. Of course, there’s a trade-off: because of the interplay between the various aspects of reactor physics and design, it wasn’t practical to change the external geometry of the fuel elements themselves – which means that the only way to add a hydrogen “getter” was to displace the fissile fuel itself. There’s definitely an optimization question to be considered; after all, the overall reactivity of the reactor will have to be reduced because the fuel is worth more in terms of reactivity than the hydrogen that would be lost, but the hydrogen containment in the core at end of life means that the system itself would be more predictable and reliable. Especially for a static control system like the A-10B, this increase in behavioral predictability can be worth far more than the reactivity that the additional fuel would offer. Of the materials options that were tested for the “getter” system, yttrium metal was found to be the most effective at the reactor temperatures and radiation flux that would be present in the A-10B core. 
However, while improvements in the fuel element design kept the “getter” program going until the cancellation of the SNAP-2/10 core experiments, many uncertainties remained as to whether the concept was worth employing in a flight system.

The second option was to vary the core geometry with temperature, the Thermal Coefficient Augmentation (TCA) variant of the A-10B. This would change the reactivity of the reactor mechanically, but not require active commands from any systems outside the core itself. There were two options investigated: a bellows arrangement, and a design for an expanding grid holding the fuel elements themselves.
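
Both TCA schemes amount to building a strong negative temperature coefficient into the core: as coolant temperature rises, reactivity falls, so power throttles itself back with no commands from outside the core. A toy lumped-parameter sketch of that feedback loop (all constants here are illustrative assumptions, not SNAP design values, and the kinetics are greatly simplified):

```python
# Toy model of passive reactivity feedback: reactivity falls linearly as core
# temperature rises above a reference point, so power self-regulates.
alpha = -0.01   # temperature coefficient of reactivity, 1/deg C (assumed)
T_ref = 500.0   # temperature at which the core is exactly critical, C (assumed)
k = 1.0         # heat removal to the NaK loop, kW per deg C (assumed)
T_cool = 400.0  # coolant return temperature, C (assumed)
C = 1.0         # lumped core heat capacity, kJ per deg C (assumed)

P, T = 50.0, 450.0  # initial thermal power (kW) and core temperature (C)
dt = 0.01
for _ in range(10000):  # simulate 100 seconds with simple Euler steps
    rho = alpha * (T - T_ref)              # reactivity from temperature feedback
    P += rho * P * dt                      # power grows or shrinks with reactivity
    T += (P - k * (T - T_cool)) * dt / C   # fission heats the core, NaK removes heat

# The loop settles where rho = 0 (T = T_ref) and heat in equals heat out,
# i.e. P = k * (T_ref - T_cool) = 100 kW, with no external control action.
```

The equilibrium is set entirely by geometry and materials (here by `alpha` and `T_ref`), which is exactly why a sufficiently strong, well-characterized temperature coefficient allows static control of the kind the -10A and U-10A/2 relied on.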

A-10B Bellows Diagram
Ruthenium vapor bellows design for TCA, image DOE

The first variant used a bellows to move a portion of the reflector out as the temperature increased. This was done using a ruthenium reservoir within the core itself: as the NaK increased in temperature, the ruthenium would boil, pushing a bellows which would move some of the beryllium shims away from the reactor vessel, reducing the overall worth of the radial reflector. While this sounds simple in theory, gas diffusion from a number of different sources (from fission products migrating through the clad to offgassing of various components) meant that the gas in the bellows would not be pure ruthenium vapor. While this could have been accounted for, a great deal of study on a flight-type system would have been needed to properly model the behavior.

A-10B Expandable Baseplate

The second option would change the distance between the fuel elements themselves, using a base plate with concentric accordion folds for each ring of fuel elements, called the “convoluted baseplate.” As the NaK heated beyond the optimized design temperature, the base plates would expand radially, separating the fuel elements and reducing the reactivity of the core. This involved a different set of materials tradeoffs, with just getting the device constructed causing major headaches. The design used both 316 stainless steel and Hastelloy C in its construction, and was cold annealed. The alternative, hot annealing, resulted in random cracks; and while explosive manufacture was explored, it wasn’t practical at the time to map the shockwave propagation through such a complex structure well enough to ensure reliable construction.

A-10B Expandable Baseplate 2

While this concept has certainly got me thinking about variable reactor geometry of this sort, there were many problems with the approach (which might have been solved, or might have proven insurmountable). Major lifetime concerns would include ductility and elasticity changes through the wide range of temperatures the baseplate would be exposed to; work hardening of the metal, thermal stresses, and neutron bombardment of the base plates would also be major concerns.

These design options were briefly tested, but most ended up not being developed fully. Because the reactor’s design was never frozen, many engineering challenges remained in every option that had been presented. Also, while I know that a report was written on the SNAP-10B reactor’s design (R. J. Gimera, “SNAP 10B Reactor Conceptual Design,” NAA-SR-10422), I can’t find it… yet. This makes writing about the design difficult, to say the least.

Because of this, and the extreme paucity of documentation on this later design, it’s time to turn to what these innovative designs could have offered when it comes to actual missions.

The Path Not Taken: Missions for SNAP-2, -10A

Every space system has to have a mission, or it will never fly. Both SNAP-2 and SNAP-10 offered a lot for the space program, both for crewed and uncrewed missions; and what they offered only grew with time. However, due to priorities at the time, and the fact that many records from these programs appear to never have been digitized, it’s difficult to point to specific mission proposals for these reactors in a lot of cases, and the missions have to be guessed at from scattered data, status reports, and other piecemeal sources.

SNAP-10 was always a lower-powered system, even with its growth to a kWe-class power supply. Because of this, it was always seen as a power supply for unmanned probes, mostly in low Earth orbit; but it would certainly also have been useful for interplanetary studies, which at this point were just appearing on the horizon as practical. Had the SNAPSHOT system worked as planned, the cesium thruster on board the Agena spacecraft would have been an excellent propulsion source for an interplanetary mission. However, due to the long mission times and the relatively fragile fuel of the original SNAP-10A, it is unlikely that these missions would have been initially successful, while the SNAP-10A/2 and SNAP-10B systems, with their higher power outputs and longer lifetimes, would have been ideal for many interplanetary missions.

As we saw in the US-A program, one of the major advantages that a nuclear reactor offers over photovoltaic cells – which were just starting to become practical at the time – is its small surface area: the atmospheric drag that all satellites experience from the thin atmosphere in lower orbits is far less of a concern. There are many cases where a lower altitude offers clear benefits, and the vast majority of them deal with image resolution: the lower you are, the clearer your imagery can be with the same sensors. For the Soviets, the ability to get better imagery of US Navy movements in all weather conditions was of strategic importance, leading to the US-A program. For the Americans, who had other means of surveillance (and an opponent’s far less capable blue-water navy to track), radar surveillance was not a major focus – although it should be noted that 500 We isn’t going to give you much, if any, radar resolution, no matter your altitude.

SNAP Meteorological Satellite
SNAP-10 powered meteorological satellite, image DOE

One area where the SNAP-10A was considered was meteorological satellites. With a growing understanding of how weather could be monitored, and of what types of data were available through orbital systems, the ability to take pictures on orbit using the first generations of digital cameras (which were just coming into existence, and not nearly good enough to interest the intelligence agencies of the day) and transmit the data back to Earth would have provided the best weather-tracking capability in the world at the time. By using a low orbit, these satellites could have made the most of the primitive equipment then available, and possibly (speculation on my part) gathered rudimentary moisture-content data as well.

However, throughout the roughly decade-long SNAP-10A effort, there was always the question of “what do you do with 500-1000 We?” Sure, it’s not an insignificant amount of power, even then, but… communications and propulsion, the two most immediately interesting applications for satellites with reliable power, both have a linear relationship between power level and capability: the more power, the more bandwidth, or delta-vee, you have available. Also, the -10A was only ever rated for one year of operations (although it was always suspected it could be limped along for longer), which precluded many other missions.
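As a back-of-the-envelope illustration of that linear relationship (my own sketch, not from the SNAP documentation): for an electric thruster at a fixed exhaust velocity, available thrust scales directly with available electrical power, so doubling the reactor’s output doubles the thrust. The 50% conversion efficiency below is an assumed round number.

```python
# Jet power is P_jet = 0.5 * mdot * ve**2 and thrust is F = mdot * ve,
# so F = 2 * eta * P_electric / ve: thrust is linear in electrical power.
G0 = 9.80665  # standard gravity, m/s^2

def thrust_newtons(power_we, isp_s, efficiency=0.5):
    """Thrust from `power_we` watts at `isp_s` seconds of specific impulse.
    `efficiency` (electrical power -> jet power) is an assumed round number."""
    ve = G0 * isp_s  # exhaust velocity, m/s
    return 2 * efficiency * power_we / ve

# 500 We vs 1000 We at 2000 s specific impulse: twice the power, twice the thrust.
f_500 = thrust_newtons(500, 2000)    # roughly 25 mN
f_1000 = thrust_newtons(1000, 2000)  # roughly 51 mN
```

The same linearity holds on the communications side: usable bandwidth scales with the transmitter power available, which is why the jump from ~500 We to multi-kilowatt systems mattered so much.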

The later SNAP-10A/2 and -10B designs, with their multi-kilowatt output and years-long lifespans, offered far more flexibility, but by this point many in the AEC, the US Air Force, NASA, and elsewhere were no longer very interested in the program, since newer, more capable reactor designs were becoming available (we’ll look at some of those in the next post). While the SNAP-10A was the only flight-qualified and flight-tested reactor design (and the failures on the mission were shown to be the fault of the Agena spacecraft, not the reactor), it was destined to fade into obscurity.

SNAP-10A was always the smallest of the reactors, and also the least powerful. What about the SNAP-2, the 3-6 kWe reactor system?

Initial planning for the SNAP-2 offered many options, with communications satellites being mentioned as an option early on – especially if the reactor lifetime could be extended. While not designed specifically for electric propulsion, it could have utilized that capability either on orbit around the Earth or for interplanetary missions. Other options were also proposed, but one was seized on early: a space station.

S2 Cylinder Station
Cylindrical space station, image DOE

At the time, most space station designs were nuclear powered, and there were many different configurations. Two were most common: first, the simple cylinder, launched as a single piece (although multi-module designs keeping the basic cylinder shape were also proposed), which would finally be realized with the Skylab mission; second, the torus-shaped station, proposed almost half a century earlier by Tsiolkovsky and popularized at the time by Wernher von Braun. SNAP-2 was adapted to both types of station. Sadly, while I can find one paper on the use of the SNAP-2 on a station, it focuses exclusively on the reactor system and doesn’t use a particular station design, instead laying out the ground rules for the use of the reactor on each type of station, especially the shielding requirements for each station’s geometry. It was also noted that the reactors could be clustered, providing up to 11 kWe of power for a space station without significant change to the radiation shield geometry. We’ll look at radiation shielding in a couple of posts, and examine the particulars of these designs there.

s2 Toroidal Station
Hexagonal/Toroid space station. Note the wide radiation shield. Image DOE

Since space stations were something NASA didn’t have the budget for at the time, most designs remained vaguely defined, without much funding or impetus within either NASA or the US Air Force (although SNAP-2 would certainly have been an option for the USAF’s Manned Orbiting Laboratory program). By the time NASA was seriously looking at space stations as a major funding focus, the SNAP-8-derived Advanced Zirconium Hydride reactor, and later the SNAP-50 (which we’ll look at in the next post), offered more capability than the SNAP-2. Once again, the lack of a mission spelled the doom of the SNAP-2 reactor.

Hg Rankine Cutaway Drawing
Power conversion system, SNAP-2

The SNAP-2 reactor met its piecemeal fate even earlier than the SNAP-10A, but oddly enough both its reactor and its power conversion system lasted just as long as the SNAP-10A did. The reactor core for the SNAP-2 became the SNAP-10A/2 core, and the CRU power conversion system continued under development even after the reactor cores had been canceled. However, mention of the SNAP-2 as a system disappears from the literature around 1966, while the -2/10A core and the CRU power conversion system continued until the late 1960s and late 1970s, respectively.

The Legacy of The Early SNAP Reactors

The SNAP program was canceled in 1971 (with one ongoing exception), after flying a single reactor which was operational for 43 days, and after conducting over five years of fission-powered testing on the ground. The death of the program was slow and drawn out: the US Air Force canceled its requirement for the SNAP-10A in 1963 (before the SNAPSHOT mission even launched), SNAP-2 reactor development ended in 1967, all SNAP reactors (including the SNAP-8, which we’ll look at next week) were canceled by 1974, and the CRU power conversion system continued until 1979 as an internal Rockwell International project, NASA-supported but never fully funded.

The promise of SNAP was not enough to save the program from the massive cuts to space programs, both for NASA and the US Air Force, that fell even as humanity stepped onto the Moon for the first time. This is an all-too-common fate, both in advanced nuclear reactor engineering and design as well as aerospace engineering. As one of the engineers who worked on the Molten Salt Reactor Experiment noted in a recent documentary on that technology, “everything I ever worked on got canceled.”

However, this does not mean that the SNAP-2/10A programs were useless, or that nothing except a permanently shut down reactor in orbit was achieved. In fact, the SNAP program has left a lasting mark on the astronuclear engineering world, one that is still felt today. The design of the SNAP-2/10A core, and the challenges faced with both it and the SNAP-8 core, informed hydride fuel element development, including the thermal limits of this fuel form, hydrogen migration mitigation strategies, and materials and modeling for multiple burnable poison options across many different fuel types. The silicon-germanium thermoelectric conversion system became a common choice for high-temperature thermoelectric power conversion, both for spacecraft power and for thermal testing equipment. Many other materials and systems used in this reactor continued to be developed through other programs.

Possibly the broadest and most enduring legacy of this program is in the realm of launch safety, flight safety, and operational paradigms for crewed astronuclear power systems. The foundations of the launch and operational safety guidelines that are used today, for both fission power systems and radioisotope thermoelectric generators, were laid out, refined, or strongly informed by the SNAPSHOT and Space Reactor Safety programs – a subject for a future web page, or possibly a blog post. From the ground handling of a nuclear reactor being integrated to a spacecraft, to launch safety and abort behavior, to characterizing nuclear reactor behavior if it falls into the ocean, to operating crewed space stations with on-board nuclear power plants, the SNAP-2/10A program literally wrote the book on how to operate a nuclear power supply for a spacecraft.

While the reactors themselves never flew again, nor did their direct descendants in design, the SNAP reactors formed the foundation for astronuclear engineering of fission power plants for decades. When we start to launch nuclear power systems in the future, these studies, and the carefully studied lessons of the program, will continue to offer lessons for future mission planners.

More Coming Soon!

The SNAP program extended well beyond the SNAP-2/10A program. The SNAP-8 reactor, started in 1959, was the first astronuclear design specifically developed for a nuclear electric propulsion spacecraft. It evolved into several different reactors, notably the Advanced ZrH reactor, which remained the preferred power option for NASA’s nascent modular space station through the mid-to-late 1970s, due to its ability to be effectively shielded from all angles. Its eventual replacement, the SNAP-50 reactor, offered megawatts of power using technology from the Aircraft Nuclear Propulsion program. Many other designs were proposed in this time period, including the SP-100 reactor, the ancestor of Kilopower (the SABRE heat pipe cooled reactor concept), as well as the first American in-core thermionic power system, advances in fuel element designs, and many other innovations.

Originally, these concepts were included in this blog post, but this post quickly expanded to the point that there simply wasn’t room for them. While some of the upcoming post has already been written, and a lot of the research has been done, this next post is going to be a long one as well. Because of this, I don’t know exactly when the post will end up being completed.

After we look at the reactor programs from the 1950s to the late 1980s, we’ll look at NASA and Rosatom’s collaboration on the TOPAZ-II reactor program, and the more recent history of astronuclear designs, from SDI through the Fission Surface Power program. We’ll finish up the series by looking at the most recent power systems from around the world, from JIMO to Kilopower to the new Russian on-orbit nuclear electric tug.

After this, we’ll look at shielding for astronuclear power plants, and possibly ground handling, launch safety, and launch abort considerations, then move on to power conversion systems, which will be a long series of posts due to the sheer number of options available.

These next posts are more research-intensive than usual, even for this blog, so while I’ll be hard at work on the next posts, it may be a bit more time than usual before these posts come out.



SNAP Reactor Overview, Voss 1984

Preliminary Results of the SNAP-2 Experimental Reactor, Hulin et al. 1961

Application of the SNAP 2 to Manned Orbiting Stations, Rosenberg et al. 1962

The ORNL-SNAP Shielding Program, Mynatt et al. 1971

SNAP-10A Nuclear Analysis, Dayes et al. 1965

SNAP 10 FS-3 Reactor Performance, Hawley et al. 1966

SNAPSHOT and the Flight Safety Program

SNAP-10A SNAPSHOT Program Development, Atomics International 1962

Reliability Improvement Program Planning Report for the SNAP-10A Reactor, Coombs et al. 1961

Aerospace Safety Reentry Analytical and Experimental Program SNAP 2 and 10A Interim Report, Elliot 1963

SNAPSHOT orbit, Heavens Above

SNAP Improvement Program

Static Control of SNAP Reactors, Birney et al. 1966

SNAP Systems Capabilities Vol 2, Study Introduction, Reactors, Shielding, Atomics International 1965

Progress Report, SNAP Reactor Improvement Program, April-June 1965

Progress Report for SNAP General Supporting Technology, May-July 1964


Electric Propulsion: The Oldest “Futuristic” Propulsion Possibility

Hello, and welcome back to Beyond NERVA. Today, we are looking at a very popular topic, but one that doesn’t necessarily require nuclear power: electric propulsion. However, it IS an area that nuclear power plants are often tied to, because the amount of thrust available is highly dependent on the amount of power available for the drive system. We will touch a little bit on the history of electric propulsion, as well as the different types of electric thrusters, their advantages and disadvantages, and how fission power plants can change the paradigm for how electric thrusters can be used. It’s important to realize that most electric propulsion systems are power-source-agnostic: all they require is electricity; how it’s produced usually doesn’t matter much to the drive system itself. As such, nuclear power plants won’t be mentioned much in this post, until we look at the optimization of electric propulsion systems.

We also aren’t going to be looking at specific types of thrusters in this post. Instead, we’re going to do a brief overview of the general types of electric propulsion, their history, and how electrically propelled spacecraft differ from thermally or chemically propelled spacecraft. The next few posts will focus more on the specific technology itself, its application, and some of the current options for each type of thruster.

Electric Propulsion: What is It?

In its simplest definition, electric propulsion is any means of producing thrust in a spacecraft using electrical energy. There’s a wide range of different concepts that get rolled into this concept, so it’s hard to make generalizations about the capabilities of these systems. As a general rule of thumb, though, most electric propulsion systems are low-thrust, long-burn-time systems. Since they’re not used for launch, and instead for on-orbit maneuvering or interplanetary missions, the fact that these systems generally have very little thrust is a characteristic that can be worked with, although there’s a great deal of variety as far as how much thrust, and how efficient in terms of specific impulse, these systems are.

There are three very important basic concepts to understand when discussing electric propulsion: thrust-to-weight ratio (T/W), specific impulse (isp), and burn time. The first is self-explanatory: how hard the engine can push compared to how much it weighs, commonly expressed relative to Earth’s gravity. A T/W ratio of 1/1 means the engine can just barely hover, but no more; a T/W ratio of 3/1 means it can push just under three times its own weight off the ground. Specific impulse is a measure of how much thrust you get out of a given unit of propellant, ignoring everything else, including the weight of the propulsion system; it’s directly related to fuel efficiency, and is measured in seconds: if the engine had a T/W ratio of 1/1 and were made entirely of propellant, the isp would be the length of time it could hover at 1 gee. Finally, you have burn time: the T/W ratio and isp give you the thrust imparted per unit time based on the masses of the drive system and the propellant; factoring in the spacecraft’s mass then gives the acceleration over a given period. The longer the engine burns, the more total velocity change is produced.
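These three quantities can be tied together with a couple of one-line formulas. The following is a sketch of my own with made-up round numbers, not data for any particular engine:

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_s):
    # Specific impulse in seconds maps to exhaust velocity: ve = g0 * isp.
    return G0 * isp_s

def thrust(mdot_kg_s, isp_s):
    # Thrust is propellant mass flow rate times exhaust velocity.
    return mdot_kg_s * exhaust_velocity(isp_s)

def thrust_to_weight(thrust_n, engine_mass_kg):
    # T/W compares thrust to the engine's weight under Earth gravity.
    return thrust_n / (engine_mass_kg * G0)

# A notional chemical engine: 300 kg dry mass, 100 kg/s flow, 350 s isp.
f_chem = thrust(100, 350)   # ~343 kN; T/W well above 1, can lift itself
# A notional ion thruster: 10 kg, 1 mg/s flow, 3000 s isp.
f_ion = thrust(1e-6, 3000)  # ~29 mN; T/W far below 1, space-only
```

The contrast between the two made-up engines shows why the same arithmetic sends chemical engines to the launch pad and electric thrusters to deep space.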

Electric propulsion has a very poor thrust-to-weight ratio (as a general rule), but incredible specific impulse and burn times. The T/W ratio of many of these thrusters is very low because they provide very little thrust, often measured in millinewtons – the thrust is often illustrated with sheets of paper, or pennies, in Earth gravity. However, this doesn’t matter once you’re in space: with no drag, and with orbital mechanics not requiring huge amounts of thrust over a short period, the total impulse delivered matters more for most maneuvers than how quickly it is delivered. This is where burn time comes in: most electric thrusters burn continuously, providing minute amounts of thrust over months, sometimes years, pushing the spacecraft in the direction of travel until the halfway point of the trip (in energy-budget terms, not necessarily in total mission time), then turning around and decelerating for the rest of the journey. The trump card for electric propulsion is specific impulse: rather than the few hundred seconds of isp for chemical propulsion, or the thousand or so for a solid-core nuclear thermal rocket, electric propulsion offers thousands of seconds. This means less fuel, which in turn makes the spacecraft lighter and allows for truly astounding total velocities. The downside is that it takes months or years to build up these velocities, so escaping a gravity well (for instance, starting from low Earth orbit) can take months. It’s therefore best suited for long trips, or for very minor changes in orbit – as with communications satellites, where it has made spacecraft smaller, more efficient, and far longer-lived.
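The fuel savings can be made concrete with the Tsiolkovsky rocket equation. A quick sketch, using illustrative round numbers rather than any particular mission or engine:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_m_s, isp_s):
    """Fraction of initial mass that must be propellant for a given
    delta-v, from dv = g0 * isp * ln(m0 / m_final)."""
    return 1.0 - math.exp(-delta_v_m_s / (G0 * isp_s))

# A 5 km/s maneuver with each class of engine:
dv = 5000.0
chem = propellant_fraction(dv, 450)   # high-end chemical: ~68% propellant
ntr = propellant_fraction(dv, 900)    # solid-core NTR: ~43% propellant
ion = propellant_fraction(dv, 3000)   # gridded ion: ~16% propellant
```

Because the relationship is exponential, every doubling of specific impulse cuts the required propellant fraction dramatically, which is exactly the leverage electric propulsion offers.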

Electric propulsion is an old idea, but one that has yet to reach its full potential, due to a number of challenges. Tsiolkovsky and Goddard both wrote about electric propulsion, but because neither lived in a time when it was possible to get into orbit, their ideas went unrealized in their lifetimes: electric propulsion isn’t suitable for lifting rockets off the surface of a planet, but for in-space propulsion it’s incredibly promising. Both showed that, to put it simply, the only thing that matters for a rocket engine is that some mass is thrown out the back to provide thrust; it doesn’t matter what that mass is. Electric acceleration isn’t (directly) limited by thermodynamics (beyond entropic losses), only by electric potential differences, and can offer very efficient conversion of electric potential into kinetic energy (the “throwing something out of the back” part of the system).

In chemical propulsion, combustion produces heat, which causes the byproducts of the chemical reaction to expand and accelerate. These are then directed out of a nozzle, increasing the exhaust velocity and producing thrust. This is the first type of rocket ever developed; however, while advances continue to be made, in many ways the field is chasing ever more esoteric or exotic ways to produce ever more marginal gains. The reason is that there’s only so much chemical potential energy available in a given propellant combination. The most efficient chemical engines top out around 500 seconds of specific impulse, and most hover around the 350-second mark. Where chemical engines excel, though, is thrust-to-weight ratio. They remain – arguably – our best, and currently our only, way of actually getting off the Earth.

Thermal propulsion doesn’t rely on chemical potential energy; instead, the reaction mass is heated directly by some other source, causing it to expand. The lighter the propellant, the more it expands, and therefore the more thrust is produced for a given mass; heavier propellants can be used to give more thrust per unit volume, at lower efficiency. It should be noted that electrically driven thermal propulsion is not only possible but common, in the form of electrothermal thrusters – we’ll dig more into that later.

Electric propulsion, on the other hand, is kind of a catch-all term when you start to look at it. There are many mechanisms for changing electrical energy into kinetic energy, and looking at most – but not all – of the options is what this blog post is about.

In order to get a better idea of how these systems work, and the fundamental principles behind electric propulsion, it may be best to look into the past. While the potential of electric propulsion is far from realized, it has a far longer history than many realize.

Futuristic Propulsion? … Sort Of, but With A Long Pedigree

The Origins of Electric Propulsion

Goddard drive drawing
First Patented Ion Drive, Robert Goddard 1917

When looking into the history of spaceflight, two great visionaries stand out: Konstantin Tsiolkovsky and Robert Goddard. Both worked independently on the basics of rocketry, both provided much in the way of theory, and both were visionaries who saw far beyond their time to the potential of rocketry and spaceflight in general; both were working on these questions at the turn of the 20th century. Both also independently came up with the concept of electric propulsion, although determining who did it first requires some splitting of hairs: Goddard mentioned it first, but in a private journal, while Tsiolkovsky published the concept first in a scientific paper, even if the reference is fairly vague (understandably so, considering the era). Additionally, electricity was a relatively poorly understood phenomenon at the time (the nature of cathode and anode “rays” was much debated, and positively charged ions had yet to be formally described), and neither visionary had a deep understanding of the phenomena involved, so their ideas were little more than just that: concepts that could serve as a starting point, not actual designs for systems that could propel a spacecraft.


Tsiolkovsky small portrait
Konstantin Tsiolkovsky, image via Wikimedia

The first mention of electric propulsion in the formal scientific literature was in 1911, in Russia. Konstantin Tsiolkovsky wrote that “it is possible that in time we may use electricity to produce a large velocity of particles ejected from a rocket device.” He began to focus on the electron, rather than the ion, as the ejected particle. While he never designed a practical device, the promise of electric propulsion was clearly seen: “It is quite possible that electrons and ions can be used, i.e. cathode and especially anode rays. The force of electricity is unlimited and can, therefore, produce a powerful flux of ionized helium to serve a spaceship.” The lack of understanding of electric phenomena hindered him, though, and prevented him from ever designing a practical system, much less building one.


Robert Goddard, image via Wikimedia

The first mention of electric propulsion in history is from Goddard, in 1906, in a private notebook; but as Edgar Choueiri notes in his excellent 2004 historical paper (a major source for this section), these early notes don’t describe an actual drive system, only the basic principle of accelerating electrons (rather than positively charged ions) to nearly the “speed of light.” A practical design didn’t come until 1917. In the intervening years, the concept fermented in his mind, culminating in patents in 1912 (for an ionization chamber using magnetic fields, similar to modern ionization chambers) and in 1917 (for a “Method and Means for Producing Electrified Jets of Gas”). The third of the 1917 patent’s three variants was the first recognizable electric thruster, what would come to be known as an electrostatic thruster. Shortly after, though, America entered WWI, and Goddard spent the rest of his life focused on the then-far-more-practical field of chemical propulsion.


Yuri Kondratyuk, image via Wikimedia

Other visionaries of rocketry also came up with concepts for electric propulsion. Yuri Kondratyuk (another, lesser-known, Russian rocket pioneer) wrote “Concerning Other Possible Reactive Drives,” which examined electric propulsion, and pointed out the high power requirements for this type of system. He didn’t just examine electron acceleration, but also ion acceleration, noting that the heavier particles provide greater thrust (in the same paper he may have designed a nascent colloid thruster, another type of electric propulsion).





Hermann Oberth, image via Wikimedia

Another of the first generation of rocket pioneers to look at the possibilities of electric propulsion was Hermann Oberth. His 1929 opus, “Ways to Spaceflight,” devoted an entire chapter to electric propulsion. Not only did he examine electrostatic thrusters, but he looked at the practicalities of a fully electric-powered spacecraft.






Valentin Glushko, image via Wikimedia

Finally, we come to Valentin Glushko, another early Russian rocketry pioneer, and a giant of the Soviet rocketry program. In 1929, he actually built an electric thruster (an electrothermal system, which vaporized fine wires to produce superheated particles), although this particular concept never flew. By this time, it was clear that much more work in many fields was needed before electric propulsion could be used, and so, one by one, these early visionaries turned their attention to chemical rockets, while electric propulsion sat on the dusty shelf of spaceflight concepts yet to be realized. It collected dust next to centrifugal artificial gravity, solar sails, and other promising ideas that wouldn’t be realizable for decades.

The First Wave of Electric Propulsion

Electric propulsion began to be investigated after WWII, both in the US and in the USSR, but it would be another 19 years of development before a flight system was introduced. The two countries both focused on one general type of electric propulsion, the electrostatic thruster, but they looked at different types of this thruster, reflecting the technical capabilities and priorities of each country. The US focused on what is now known as a gridded ion thruster, most commonly called an ion drive, while the USSR focused on the Hall effect thruster, which uses a magnetic field perpendicular to the current direction to accelerate particles. Both of these concepts will be examined more in the section on electrostatic thrusters; though, for now it’s worth noting that the design differences in these concepts led to two very different systems, and two very different conceptions of how electric propulsion would be used in the early days of spaceflight.

In the US, the most vigorous early proponent of electric propulsion was Ernst Stuhlinger, who was the project manager for many of the earliest electric propulsion experiments. He was inspired by the work of Oberth, and encouraged by von Braun to pursue this area, especially now that being able to get into space to test and utilize this type of propulsion was soon to be at hand. His leadership and designs had a lasting impact on the US electric propulsion program, and can still be seen today.

SERT-I thruster, image courtesy NASA

The first spacecraft to be propelled using electric propulsion was SERT-I, a follow-on to a suborbital test (Program 661A, Test A, the first of three suborbital tests for the USAF) of the ion drives that would be used. These drive systems used cesium and mercury as propellants, rather than the inert gases commonly used today, because these metals have very low ionization energies and reasonably favorable masses for providing more significant thrust. Tungsten buttons were used in place of the grids used in modern ion drives, and a tantalum wire was used to neutralize the ion stream. Unfortunately, the cesium engine short-circuited, but the mercury system was tested for 31 minutes and 53 cycles of the engine. This not only demonstrated ion propulsion in principle, but, just as importantly, demonstrated ion beam neutralization. Neutralization matters for most electric propulsion systems because it prevents the spacecraft from becoming negatively charged, and possibly even attracting the ion stream back to itself, robbing it of thrust and contaminating sensors on board (a common problem in early electric propulsion systems).

The SNAPSHOT program, which launched the SNAP 10A nuclear reactor on April 3, 1965, also had a cesium ion engine as a secondary experimental payload. The failure of the electrical bus prevented this from being operated, but SNAPSHOT could be considered the first nuclear electric spacecraft in history (if unsuccessful).

ATS (either 4 or 5), image courtesy NASA

The ATS program continued to develop the cesium thrusters from 1968 through 1970. The ATS-4 flight was the first demonstration of an orbital spacecraft with electric propulsion, but sadly there were problems with beam neutralization in the drive systems, indicating more work needed to be done. ATS-5 was a geostationary satellite meant to have electrically powered stationkeeping, but it could not be despun after launch, meaning the thruster couldn’t be used for propulsion (the emission chamber was flooded with un-ionized propellant), although it was used as a neutral plasma source for experimentation. ATS-6 was a similar design, and successfully operated for a total of over 90 hours (one thruster failed early due to a similar emission-chamber flooding issue). The SERT-II and SCATHA satellites demonstrated further improvements as well, using both cesium and mercury ion devices (SCATHA’s wasn’t optimized as a drive system, but used similar components to test spacecraft charge-neutralization techniques).

Despite these tests in the 1960s, no operational satellite would use ion propulsion for another thirty years. Problems with the aforementioned thrusters becoming saturated, spacecraft contamination from the highly reactive cesium and mercury propellants, and relatively short engine lifetimes (due to erosion of the grids used in this type of ion thruster) didn’t offer much promise to mission planners. The high (2000+ s) specific impulse was very promising for interplanetary spacecraft, but the low reliability and reasonably short lifetimes of these early ion drives made them of marginal use. Ground testing of various concepts continued in the US, but flight missions were rare until the end of the 1990s. This likely helped feed the idea that electric propulsion is new and futuristic, rather than having conceptual roots reaching all the way back to the dawn of the age of flight.

Early Electric Propulsion in the USSR

Unlike in the US, the USSR started development of electric propulsion early, and continued it almost continuously to the modern day. Sergei Korolev’s OKB-1 was tasked, from the beginning of the space race, with developing a wide range of technologies, including nuclear-powered spacecraft and electric propulsion.

Early USSR TAL, Kim et al
Early sketch of a Hall effect (TAL) thruster in USSR, image from Kim et al

Part of this may be due to the different architecture the Soviet engineers pursued: rather than accelerating ions toward a pair of charged grids, Soviet designs used a stream of ionized gas with a magnetic field perpendicular to the current to accelerate the ions. This is the Hall effect thruster, which has several advantages over the gridded ion thruster, including simplicity, fewer problems with erosion, and higher thrust (admittedly, at the cost of specific impulse). Other designs, including the PPT, or pulsed plasma thruster, were also experimented with (the ZOND-2 spacecraft carried a PPT system). However, thanks to the rapidly growing Soviet mastery of plasma physics, the Hall effect thruster became a very attractive system.

There are two main types of Hall thruster that were experimented with: the stationary plasma thruster (SPT) and the thruster with anode layer (TAL), which differ in how the electric charge is produced, the behavior of the plasma, and the path the current follows through the thruster. The TAL was conceived in 1957 by Askold Zharinov, and proven in 1958-1961, but a prototype wasn't built until 1967 (using cesium, bismuth, cadmium, and xenon propellants, with specific impulses of up to 8,000 s), and it wasn't published in the open literature until 1973. This thruster is characterized by a narrow acceleration zone, meaning it can be more compact.

E1 SPT Thruster, Kim et al
E1 SPT-type Hall thruster, image via Kim et al

The SPT, on the other hand, can be larger, and is the most common form of Hall thruster used today. Complications in the plasma dynamics of this system meant that it took longer to develop, but its greater electrical efficiency and thrust make it a more attractive choice for station-keeping thrusters. Research began in 1962, under Alexey Morozov at the Institute of Atomic Energy, and was later moved to the Moscow Aviation Institute, and then again to what became known as FDB Fakel (now Fakel Industries, still a major producer of Hall thrusters). The first breadboard thruster was built in 1968, and flew in 1970. It was then used on the Meteor series of weather satellites for attitude control. Development of the design has continued to the present day, but outside the USSR these thrusters went largely unused for decades, despite their higher thrust and lack of spacecraft contamination (unlike similar vintage American designs).

It would be a mistake to think that only the US and the USSR were working on these concepts, though. Germany had a diverse set of programs: arcjet thrusters, as well as magnetoplasmadynamic thrusters, were researched by the predecessors of the DLR. This work was inherited by the University of Stuttgart Institute for Space Systems, which remains a major research institution for electric propulsion in many forms. France, on the other hand, focused on the Hall effect thruster, which provides lower specific impulse, but more thrust. The Japanese program tended to focus on microwave frequency ion thrusters, which later provided the main means of propulsion for the Hayabusa sample return mission (more on that below).

The Birth of Modern Electric Propulsion

DS1 Mission Patch, Image courtesy JPL

For many people, electric propulsion was an unknown until 1998, when NASA launched the Deep Space 1 mission. DS1 was a technology demonstration mission, part of the New Millennium program of advanced technology testing and experimentation. A wide array of technologies were to be tested in space, after extensive ground testing; but, for the purposes of Beyond NERVA, the most important of these new concepts was the first operational ion drive, the NASA Solar Technology Applications Readiness thruster (NSTAR). As is typical of many modern NASA programs, DS1 far exceeded the minimum requirements. Originally meant to do a flyby of the asteroid 9969 Braille, the mission was extended twice: first for a transit to the comet 19P/Borrelly, and later to extend engineering testing of the spacecraft.

NSTAR thruster, image courtesy NASA

In many ways, NSTAR was a departure from most of the flight-tested American electric propulsion designs. The biggest difference was the propellant: cesium and mercury were easy to ionize, but a combination of problems with neutralizing the propellant stream and the resultant contamination of the spacecraft and its sensors (along with a desire to avoid chemical complications and a growing conservatism about toxic components in spacecraft) led to the decision to use a noble gas, in this case xenon. This doesn't mean that it was a great overall departure from the gridded ion drives the US had been developing; it was an evolution, not a revolution, in propulsion technology. Despite an early failure (after 4.5 hours of operation), the NSTAR thruster was restarted, and went on to accumulate 8,200 hours of operation, with the backup achieving more than 500 hours beyond that.

Nor was this the only use of the thruster. The Dawn mission to the minor planet Ceres uses NSTAR thrusters, and is still in operation around that body, sending back incredibly detailed and fascinating information about the hydrocarbon and water content of the asteroid belt, along with many other discoveries that will matter when humanity begins to mine the asteroid belt.

Many satellites, especially geostationary ones, use electric propulsion today, for stationkeeping and even for final orbital insertion. The low thrust of these systems is not a major detriment, since they can be fired over long periods of time to maintain a stable orbit; and the small amount of propellant they require allows for larger payloads or longer mission lifetimes for the same launch mass.
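To put rough numbers on that propellant saving, here's a back-of-the-envelope sketch in Python using the Tsiolkovsky rocket equation. The satellite mass, stationkeeping budget, and specific impulse values are assumed, illustrative figures, not data for any particular spacecraft:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(dry_mass_kg, delta_v_ms, isp_s):
    """Tsiolkovsky rocket equation solved for propellant mass."""
    ve = isp_s * G0  # effective exhaust velocity, m/s
    return dry_mass_kg * (math.exp(delta_v_ms / ve) - 1.0)

# Assumed numbers: ~50 m/s/year of north-south stationkeeping
# for a 2,000 kg GEO satellite over a 15-year life.
dv = 50.0 * 15          # m/s
chem = propellant_mass(2000, dv, 220)    # hydrazine-class thruster
hall = propellant_mass(2000, dv, 1500)   # Hall-effect thruster

print(f"chemical: {chem:.0f} kg, Hall: {hall:.0f} kg")  # ≈ 831 kg vs ≈ 105 kg
```

Even with these rough numbers, the Hall thruster cuts the propellant load by roughly a factor of eight, mass that can go to payload or station lifetime instead.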

After decades of being considered impractical, immature, or unreliable, electric propulsion has come in from the cold. Many designs for interplanetary spacecraft now use electric propulsion, taking full advantage of the high-specific-impulse, low-thrust regime that these thruster systems excel at.

GT arcjet small
Electrothermal arcjet, image courtesy Georgia Tech

Another type of electric thruster is also becoming popular with small-sat users: the electrothermal thruster, which offers higher thrust from chemically inert propellants in a compact form, at the cost of specific impulse. These thrusters provide many of the benefits of high-thrust chemical propulsion in a more compact and chemically inert package, a major requirement for most smallsats, which fly as secondary payloads and have to demonstrate that they won't threaten the primary payload.

So, now that we’ve looked into how we’ve gotten to this point, let’s see what the different possibilities are, and what is used today.

What are the Options?

Ion drive schematic, NASA
Ion drive schematic, image courtesy NASA

The most well-known and popularized version of electric propulsion is electrostatic propulsion, which uses an ionization chamber (or ionic fluid) to develop a positively charged stream of ions, which are then accelerated out the “nozzle” of the thruster. A stream of electrons is added to the propellant as it leaves the spacecraft, to prevent the buildup of a negative charge. There are many different variations of this concept, including the best known types of thrusters (the Hall effect and gridded ion thrusters), as well as field effect thrusters and electrospray thrusters.
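The physics of the gridded version is simple enough to sketch: a singly charged ion falling through the grid potential V picks up kinetic energy qV, so its exhaust velocity is v = sqrt(2qV/m). The Python below uses an assumed 1.2 kV grid potential with xenon propellant; real thrusters come in below this ideal figure due to beam divergence, doubly charged ions, and other losses:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass constant, kg
XENON_MASS = 131.293 * AMU   # mass of a xenon ion, kg

def ideal_exhaust_velocity(grid_voltage_v, ion_mass_kg=XENON_MASS):
    """Exhaust velocity of a singly charged ion falling through the
    grid potential: qV = (1/2) m v^2, so v = sqrt(2qV/m)."""
    return math.sqrt(2.0 * E_CHARGE * grid_voltage_v / ion_mass_kg)

v = ideal_exhaust_velocity(1200.0)   # assumed ~1.2 kV grid potential
isp = v / 9.80665                    # ideal specific impulse, s
print(f"v ≈ {v/1000:.1f} km/s, ideal Isp ≈ {isp:.0f} s")  # ≈ 42 km/s
```

That ideal specific impulse of roughly 4,300 seconds is an upper bound; flown gridded ion thrusters at similar voltages deliver somewhat less once real-world losses are included.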

NASA MPD concept
MPD Thruster concept, image courtesy NASA

The next most common version, and one with a large number of popular mentions these days, is the electromagnetic thruster. Here, the propellant is converted to a relatively dense plasma, and usually (but not always) magnetic fields are used to accelerate this plasma at high speed out of a magnetic nozzle, using the electromagnetic and thermal properties of plasma physics. In cases where the plasma isn't accelerated by magnetic fields directly, magnetic nozzles and other plasma-shaping structures are used to constrict or expand the plasma flow. There are many different versions, from magnetohydrodynamic thrusters (MHD, where a charge isn't transferred into the plasma from the magnetic field), to the less well-known magnetoplasmadynamic thrusters (MPD, where the Lorentz force is used to at least partially accelerate the plasma), electrodeless plasma thrusters, and pulsed inductive thrusters (PIT).

GT arcjet small
Electrothermal arcjet, image courtesy Georgia Tech

Thirdly, we have electrothermal drive systems: basically, highly advanced electric heaters used to heat a propellant. These tend to be the less energy-efficient, but higher-thrust, systems (although, theoretically, some versions of electromagnetic thrusters can achieve high thrust as well). The most common types of electrothermal system proposed have been arcjet, resistojet, and inductive heating drives, the first two being popular choices for reaction control systems on large, nuclear-powered space station concepts. Inductive heating has already made a number of appearances on this page, both in testing apparatus (CFEET and NTREES are both inductively heated) and as part of a bimodal NTR (the nuclear thermal electric rocket, or NTER, covered on our NTR page).

VASIMR sketch, Ad Astra
VASIMR operating principles diagram, image courtesy Ad Astra

The electromagnetic and electrothermal categories often use similar mechanisms when you look at the details, and the line between the two isn't always clear. For instance, the pulsed plasma thruster (PPT), which most commonly uses a solid propellant such as PTFE (Teflon) that is electrically vaporized, and occasionally ionized, before being accelerated out of the spacecraft, is described by some authors as an MHD thruster and by others as an arcjet; which term best applies depends on the particulars of the system in question. A more famous example of this gray area is the VASIMR thruster (VAriable Specific Impulse Magnetoplasma Rocket). This system uses a dense plasma contained in a magnetic field, but the plasma is inductively heated using RF energy, and then accelerated by the thermal behavior of the plasma while being magnetically contained. Because of this, the system can be seen either as an MHD thruster or as an electrothermal thruster (that debate, and the way these terms are used, was one of the more enjoyable parts of the editing process for this blog post, and I'm sure one that will continue as we examine EP further).

Finally, we come to the photon drives. These use photons themselves as the reaction mass, and as such are sometimes jokingly called flashlight drives. They have the lowest thrust of any of these systems, but the exhaust velocity is literally the speed of light, so they have insanely high specific impulse. Just… don't expect any sort of significant acceleration; getting up to speed with these systems could take decades, if not centuries, making them popular choices for interstellar missions rather than interplanetary ones. Photon drives have another option, though: the power source for the photons doesn't need to be on board the spacecraft at all! This is the principle behind the lightsail (the best-known version being the solar sail): a fixed installation produces a laser or other stream of photons (such as the maser, or microwave beam, of the Starwisp concept), which then strikes a reflective surface to provide thrust. That type of system follows a different set of rules and limitations from systems where the power supply (and associated equipment), drive system, and any propellant needed are on board the spacecraft, so we won't go into much depth on it initially, instead focusing on designs that have everything on board.
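The "insanely high specific impulse, vanishingly low thrust" trade is easy to quantify: a photon of energy E carries momentum E/c, so a beam of power P pushes back on its emitter with a force of only P/c (or 2P/c if it's bounced off a reflective sail). A quick sketch, with a 1 MW beam as an assumed example:

```python
C = 299_792_458.0  # speed of light, m/s

def photon_thrust(power_w, reflective=False):
    """Thrust of a photon rocket: F = P/c, or 2P/c if the beam is
    reflected off a sail rather than simply emitted."""
    return (2.0 if reflective else 1.0) * power_w / C

# Even a full megawatt of beam power produces only millinewtons:
f = photon_thrust(1.0e6)
print(f"{f*1000:.2f} mN")   # ≈ 3.34 mN
```

Three and a third millinewtons from a megawatt is why photon drives only make sense for missions that can thrust continuously for years.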

Each of these systems has its advantages and disadvantages. Electrostatic thrusters are conceptually simple to build: ionization chambers are well understood, and so is creating a charged field; but something has to generate that charge, and whatever that something is will be struck by the ionized propellant, causing erosion. Plasmadynamic thrusters can provide incredible flexibility, but generally require large power plants, and reducing their power requirements calls for superconducting magnets and other materials challenges; in addition, plasma physics, while increasingly well understood, provides a unique set of challenges. Electrothermal thrusters are simple, but generally provide poor specific impulse, and thermal cycling of the components causes wear. Finally, photon drives are incredibly efficient but very, very low thrust systems, requiring exceedingly long burn times to produce any noticeable velocity change. Let's look at each of these options in a bit more detail, and at the practical limitations each system has.

Optimizing the System: The Fiddly Bits

As we've seen, there's a huge array of technologies that fall under the umbrella of "electric propulsion," each with its advantages and disadvantages. The mission to be performed determines which types of thruster are feasible, depending on a number of factors. If the mission is stationkeeping for a geosynchronous communications satellite, the Hall thruster offers a wonderful balance between thrust and specific impulse. If the mission is a sample return from an asteroid, the lower-thrust, higher-specific-impulse gridded ion thruster is better, because the longer mission time (and greater overall delta-v needed) make this low-thrust, high-efficiency thruster far better suited. If the mission is stationkeeping on a small satellite flying as a piggyback load, the arcjet may be the best option, due to its compactness, the chemically inert nature of its propellant, and its relatively high thrust. If higher thrust is needed over a longer period for a larger spacecraft, MPD may be the best bet. Very few systems are designed to cover a wide range of capabilities in spaceflight, and electric propulsion is no different.

There are other key concepts to consider in the selection of an electric propulsion system as well. The first is the efficiency of the system: how much electricity the thruster consumes, compared to how much kinetic energy it imparts to the propellant stream. This efficiency varies between specific designs, and its improvement is a major goal in every thruster's development process. The quality of electrical power needed is also an important consideration: some thrusters require direct current, some require alternating current, some require RF or microwave power inputs, and matching the electricity produced to the thruster itself is a necessary step, which on occasion can make one thruster more attractive than another by reducing the overall mass of the system. Another key question is the total change in velocity needed for the mission, and the timeframe over which this delta-v can be applied; the longer the timeframe, the more efficient your thruster can be at lower thrust (trading thrust for specific impulse).
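That thrust-for-specific-impulse trade falls straight out of the kinetic energy in the exhaust: jet power is F·ve/2, so for a fixed electrical input the available thrust drops as exhaust velocity climbs. A hedged sketch, with assumed (but plausible) efficiency and specific impulse figures for three thruster classes:

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_per_kw(isp_s, efficiency):
    """Thrust in newtons per kilowatt of electrical input.
    Jet power P_jet = F * ve / 2 and P_jet = eta * P_in,
    so F = 2 * eta * P_in / ve, with ve = Isp * g0."""
    ve = isp_s * G0
    return 2.0 * efficiency * 1000.0 / ve

# Assumed, illustrative numbers only:
print(f"arcjet  (Isp  600 s): {thrust_per_kw(600, 0.35)*1000:.0f} mN/kW")
print(f"Hall    (Isp 1600 s): {thrust_per_kw(1600, 0.55)*1000:.0f} mN/kW")
print(f"gridded (Isp 3100 s): {thrust_per_kw(3100, 0.65)*1000:.0f} mN/kW")
```

Even with the higher efficiencies of the high-Isp thrusters, thrust per kilowatt falls as specific impulse climbs, which is exactly why mission timeframe drives thruster selection.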

However, looking past the drive itself, there are quite a few things about the spacecraft, and especially its power supply, that also have to be considered. The first is the power available to the drive system. If you've got an incredibly efficient drive system that requires a megawatt to run, you're going to be severely limited in your power supply options (and there are very few, if any, drive systems that require that much power). For more realistic systems, the mass of the power supply, and therefore of the spacecraft, has a direct impact on the amount of delta-v that can be applied in a given time: if you want your spacecraft to be able to, say, maneuver out of the way of a piece of space debris, or a mission to another planet needs to arrive within a given timeframe, then the less mass per unit of power, the better. This is an area where nuclear power can offer real benefits: it's debatable whether solar or nuclear is better for low-powered applications in terms of power per unit mass (known in engineering as specific power), but once higher power levels are needed, nuclear shines. It can be difficult (though far from impossible) to scale nuclear down in size and power output, but it scales up very easily and efficiently, and this scaling is non-linear: a reactor with three times the output of a smaller one can have a very similar core size, and the power conversion systems used often have similar scaling advantages. There are additional advantages as well: radiators are generally smaller in surface area, and harder to damage, than photovoltaic arrays, and can often be repaired more easily (once a PV cell gets hit by space debris it needs to be replaced, but a radiator tube designed for repair can in many cases just be patched or welded and continue functioning).

A related concept is power density, or power per unit volume, which also has a significant impact on the capabilities of many (especially large) spacecraft. The volume of the power supply is a limiting factor when it comes to launching the vehicle, since it has to fit into the payload fairing of the launch vehicle (or the satellite bus that will use it).

The specific power, on the other hand, has quite a few implications, most importantly for the available payload mass fraction of the spacecraft. Without a payload, whether scientific instruments or crew life support and habitation modules, there's no point to the mission, and the specific power of the entire power and propulsion unit has a large impact on the amount of mass that can be brought along.

Another factor to consider when designing an electrically propelled spacecraft is how the capabilities and limitations of the entire power and propulsion unit interact with the spacecraft itself. Just as with chemical and thermal rockets, the ratio of wet (fueled) to dry (unfueled) mass has a direct impact on the vehicle's capabilities: Tsiolkovsky's rocket equation still applies, and on long missions there can be a significant mass of propellant on board, despite the high specific impulse of most of these thrusters. The specific mass of the power and propulsion system has a huge impact on this, so the more power-dense your power supply, and the more mass-efficient your conversion of electricity into useful thruster power, the more capable the spacecraft will be.
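The rocket equation makes the specific impulse advantage concrete. A sketch, using an assumed 10 km/s delta-v budget and round-number specific impulse values for illustration:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def mass_ratio(delta_v_ms, isp_s):
    """Wet-to-dry mass ratio from the Tsiolkovsky rocket equation:
    m_wet / m_dry = exp(delta_v / (Isp * g0))."""
    return math.exp(delta_v_ms / (isp_s * G0))

# Assumed 10 km/s interplanetary delta-v budget:
print(f"chemical, Isp 450 s:  {mass_ratio(10_000, 450):.1f}")   # ≈ 9.6
print(f"ion,      Isp 3100 s: {mass_ratio(10_000, 3100):.2f}")  # ≈ 1.39
```

A chemical stage would have to be almost 90% propellant for this budget, while the ion-propelled version is under a third propellant, leaving far more mass for power supply and payload.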

Finally, the overall energy budget for the mission needs to be accounted for: how much change in velocity, or delta-v, is needed, and the time period over which this change can be applied, are perhaps the biggest factors in selecting one type of thruster over another. We've already discussed the relative advantages and disadvantages of many of the different types of thrusters, so we won't examine them in detail again, but this consideration needs to be taken into account for any spacecraft design.
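The timeframe side of that budget can be estimated very roughly by treating the spacecraft mass and thrust as constant over the burn, so t ≈ m·Δv/F. The numbers below are assumed, NSTAR-class illustrations, not figures from any specific mission:

```python
def burn_time_days(spacecraft_mass_kg, delta_v_ms, thrust_n):
    """Rough burn time for a low-thrust maneuver, treating mass and
    thrust as constant: t = m * delta_v / F, converted to days."""
    return spacecraft_mass_kg * delta_v_ms / thrust_n / 86_400.0

# A 1,000 kg spacecraft, 5 km/s of delta-v, 90 mN of thrust
# (all assumed, roughly NSTAR-class figures):
t = burn_time_days(1000, 5000, 0.090)
print(f"{t:.0f} days")
```

Nearly two years of continuous thrusting for a fairly modest delta-v: this is why thruster lifetime, not just efficiency, is a make-or-break parameter for electric propulsion.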

With each of these factors applied appropriately, it's possible to create a mathematical description of the spacecraft's capabilities and match it to a given mission profile, or (as is more common) to go the other way and set a spacecraft's basic design parameters from a specific mission. After all, a spacecraft designed to deliver 100 kg of science payload to Jupiter in two years is going to look very different from one designed to carry 100 kg to the Moon in two weeks, due to the huge differences in mission profile. The math itself isn't that difficult, but for now we'll stick with the general concepts rather than going into the numbers (there are a number of dimensionless variables in the equations, and for a lot of people those become confusing).

Let’s look instead at some of the more important parts of the power and propulsion unit that are tied more directly to the drives themselves.

Just as in any electrical system, you can't simply hook wires up to a battery, solar panel, or power conversion system and feed the output into the thruster; the electricity needs to be conditioned first. This ensures the correct type of current (alternating or direct), the correct voltage, and the correct amperage: all the things that are done multiple times across our power grid on Earth have to be done on board the spacecraft as well, and this is one of the biggest factors in deciding which specific drive is placed on a particular satellite.

After the electricity is generated, it goes through a number of control systems to first ensure protection for the spacecraft from things like power surges and inappropriate routing, and then goes to a system to actually distribute the power, not just to the thruster, but to the rest of the on-board electrical systems. Each of these requires different levels of power, and as such there’s a complex series of systems to distribute and manage this power. If electric storage is used, for instance for a solar powered satellite, this is also where that energy is tapped off and used to charge the batteries (with the appropriate voltage and battery charge management capability).

After the electricity needed for other systems has been rerouted, the remainder is directed into a system that ensures the correct amount and type (AC, DC, necessary voltage, etc.) of electricity is delivered to the thruster. These power conditioning units, or PCUs, are some of the most complex components of an electric propulsion system, and have to be highly reliable. Power fluctuations will affect the functioning of a thruster (possibly even forcing it to shut down if the current drops too low), and in extreme cases can even damage it, so this is a key function these systems must provide. Because of this, some thruster manufacturers don't design the PCU in-house at all, instead selling the thruster alone; the customer must then contract for or design the PCU independently (although obviously with the supplier's support).

Finally, the thermal load on the thruster itself needs to be managed. In many cases the thermal load is small enough that radiation, or convection through the propellant stream, is sufficient, but for high-powered systems an additional waste heat removal system may be necessary. If so, that's one more system that needs to be designed and integrated, and the amount of heat generated will be a major factor in the type of heat rejection used.

There’s a lot more than just these factors to consider when integrating an electric propulsion system into a spacecraft, but it tends to get fairly esoteric fairly quickly, and the best way to understand it is to look at the relevant mathematical functions for a better understanding. Up until this point, I’ve managed to avoid using the equations behind these concepts, because for many people it’s easier to grasp the concepts without the numbers. This will change in the future (as part of the web pages associated with these blog posts), but for now I’m going to continue to try and leave the math out of the posts themselves.

Conclusions, and Upcoming Posts

As we’ve seen, electric propulsion is a huge area of research and design, and one that extends all the way back to the dawn of rocketry. Despite a slow start, research has continued more or less continuously across the world in a wide range of different types of electric propulsion.

We also saw that the term “electric propulsion” is very vague, with a huge range of capabilities and limitations for each system. I was hoping to do a brief look at each type of electric propulsion in this post (but longer than a paragraph or two each), but sadly I discovered that just covering the general concepts, history, and integration of electric propulsion was already a longer-than-average blog post. So, instead, we got a brief glimpse into the most general basics of electrothermal, electrostatic, magnetoplasmadynamic, and photonic thrusters, with a lot more to come in the coming posts.

Finally, we looked at the challenges of integrating an electric propulsion system into a spacecraft, and some of the implications for the very wide range of capabilities and limitations that this drive concept offers. This is an area that will be expanded a lot as well, since we barely scratched the surface. We also briefly looked at the other electrical systems that a spacecraft has in between the power conversion system and the thruster itself, and some of the challenges associated with using electricity as your main propulsion system.

Our next post will look at two designs for electric propulsion that are similar in concept but different in mechanics: electrothermal and magnetoplasmadynamic thrusters. I've already written most of the electrothermal side, and have a good friend who's far better than I am at MPD, so hopefully that one will be coming soon.

The post after that will focus on electrostatic thrusters. Because these are some of the most widely used, and also some of the most diverse in their mechanisms, this may end up being its own post; but at this point I'm planning on also covering photon drive systems (mostly on-board, but also lightsail-based concepts) in that post, to wrap up our discussion of the particulars of electric propulsion.

Once we’ve finished our look at the different drive systems, we’ll look at how these systems don’t have to be standalone concepts. Many designs for crewed spacecraft integrate both thermal and electric nuclear propulsion into a single propulsion stage, bimodal nuclear thermal rockets. We’ll examine two different design concepts, one American (the Copernicus-B), and one Russian (the TEM stage), in that post, and look at the relative advantages and disadvantages of each concept.

I would like to acknowledge the huge amount of help that Roland Antonius Gabrielli of the University of Stuttgart Institute for Space Systems has been in this post, and the ones to follow. His knowledge of these topics has made this a far better post than it would have been without his invaluable input.

As ever, I hope you’ve enjoyed the post. Feel free to leave a comment below, and join our Facebook group to join in the discussion!



References

A Critical History of Electric Propulsion: The First Fifty Years, Choueiri, Princeton 2004

A Method and Means of Producing Jets of Electrified Gas, US Patent 1363037A, Goddard 1917

A Synopsis of Ion Propulsion Development Projects in the United States: SERT I to Deep Space 1, Sovey et al, NASA Glenn Research Center 1999

History of the Hall Thruster’s Development in the USSR, Kim et al 2007

NSTAR Technology Validation, Brophy et al 2000

Review Papers for Electric Propulsion

Electric Propulsion: Which One for my Spacecraft? Jordan 2000

Electric Propulsion, Jahn and Choueiri, Princeton University 2002

Spacecraft Optimization

Joint Optimization of the Trajectory and the Main Parameters of an Electric Propulsion Spacecraft, Petukhov et al 2017

Power Sources and Systems of Satellites and Deep Space Probes (slideshow), Farkas ESA


Development and Testing Low Enriched Uranium Nuclear Thermal Systems Test Stands

NTR Hot Fire Testing 2: Modern Designs, New Plans for the LEU NTP

Hello, and welcome back to Beyond NERVA for the second part of our two-part series on ground testing NTRs. In part one, we examined the testing done at the National Defense Research Site in Nevada as part of Project Rover, as well as a little of the zero-power testing done at the Los Alamos Scientific Laboratory to support the construction, assembly, and zero-power reactivity characterization of these reactors. We saw that the environmental impact on the population (even those living closest to the tests) rarely exceeded the equivalent dose of a full-body CT scan. However, even this low amount of radioisotope release is unacceptable in today's regulatory environment, so new avenues of testing must be explored.

NERVAEngineTest, AEC
NRX (?) Hot-fire test, image courtesy DOE

We will look at proposals from the last 25 years for new ways of conducting full-flow, fission-powered testing of nuclear thermal rockets, as well as at cost estimates (which, as always, should be taken with a grain of salt) and the challenges associated with each concept.

Finally, we’re going to look at NASA’s current plans for test facilities, facility costs, construction schedules, and testing schedules for the LEU NTP program. This information is based on the preliminary estimates released by NASA, and as such there’s still a lot that’s up in the air about these concepts and cost estimates, but we’ll look at what’s available.

Diagram side by side with A3
Full exhaust capture at NASA’s A3 test stand, Stennis Space Center. Image courtesy NASA

Pre-Hot Fire Testing: Thermal Testing, Neutronic Analysis, and Preparation for Prototypic Fuel Testing

Alumina sleeve during test, Bradley

We've already taken a look at the test stands currently in use for fuel element development, CFEET and NTREES. These test stands allow electrically heated testing in a hydrogen environment, characterizing the thermal and chemical properties of NTR fuel. They also allow things like erosion tests, to ensure clad materials can withstand not just the thermal stresses of the test but also the erosive effects of hot hydrogen moving through them at a high rate.

However, there are a number of other effects that the fuel elements will be exposed to during reactor operation, and the behavior of these materials in an irradiated environment still needs to be characterized. Fuel element irradiation is done using existing reactors: either in a beamline for initial out-of-core testing, or, for in-core testing, in specially designed capsules that ensure the fuel elements won't adversely affect the operation of the reactor, and that the fuel element sits in the proper environment for its operation.


TRIGA reactor core, image courtesy Wikimedia

A number of reactors could be used for these tests, including the TRIGA-type reactors common at many universities around the US. This is one of the advantages of LEU over the traditional HEU: there are fewer restrictions on LEU fuels, so many of these early tests could be carried out by universities and contractors operating these types of reactors. This will be less expensive than using DOE facilities, and has the additional advantage of supporting further research and education in the field of astronuclear engineering.



Irradiation vessel design for ATF, Thody
Design of an irradiation capsule for use with the ATF, Thody OSU 2018

The initial fuel element prototypes for in-pile testing will be unfueled versions of the fuel element, to verify that the rest of the materials involved won't react adversely to the neutronic and radiation environment they'll be subjected to. This is less of a concern than it used to be, because material properties under radiation flux have been continually refined over the decades, but caution is the watchword with nuclear reactors, so this sort of test will still need to be carried out.

These experiments will be finally characterized in the Safety Analysis Report and Technical Safety Review documents, a major milestone for any fuel element development program. These documents provide the reactor operators all the information they need on the behavior of these fuel elements in the research reactor, in preparation for fueled in-pile testing. Concurrently, extensive neutronic and thermal analysis will be carried out, incorporating any changes necessitated by the unfueled in-pile testing. Finally, a Quality Assurance Plan must be formulated, verified, and approved: each material has different challenges in producing fuel elements of the required quality, and each facility has slightly different regulations and guidelines to meet its particular needs and research requirements.

After these studies are completed, the unfueled in-pile fuel elements are irradiated, and then subjected to post-irradiation examination for chemical, mechanical, and radiological behavior changes. Fracture toughness, tensile strength, thermal diffusivity, and microstructure examination (through both scanning electron and transmission electron microscopy) are particular areas of focus at this point in the testing process.


One last thing to consider for in-pile testing is that the containment vessel (often called a can) that holds the fuel elements inside the reactor has to be characterized, especially its impact on the neutron flux and thermal transfer properties, before in-pile testing can be done. This is a relatively straightforward process, though complex due to the number of variables involved: an MCNP model is made of the fuel element in the can at various points in each potential test reactor, in order to verify the behavior of the test article in the test reactor. This is something that can be done early in the process, but may need to be slightly modified after the refinements and experimental regime that we've been looking at above.
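A full MCNP model is far beyond the scope of a blog post, but the kind of question it answers can be illustrated with a trivial one-group attenuation estimate. Everything below is an assumption for illustration only (pure titanium rather than an alloy, a hypothetical 2 mm wall, a textbook thermal cross section), not values from any actual irradiation plan:

```python
import math

# Rough, illustrative estimate of thermal-neutron transmission through a
# titanium can wall -- NOT a substitute for the MCNP modeling described
# above, just an order-of-magnitude sanity check. All values assumed.
AVOGADRO = 6.022e23

density_ti = 4.51        # g/cm^3, pure titanium (assumed)
atomic_mass_ti = 47.87   # g/mol
sigma_abs = 6.1e-24      # cm^2, thermal absorption cross section (~6.1 barns)
wall_cm = 0.2            # 2 mm can wall (hypothetical)

number_density = density_ti * AVOGADRO / atomic_mass_ti   # atoms/cm^3
macro_sigma = number_density * sigma_abs                  # 1/cm
transmission = math.exp(-macro_sigma * wall_cm)

print(f"Macroscopic cross section: {macro_sigma:.3f} /cm")
print(f"Fraction of thermal flux transmitted: {transmission:.3f}")
```

Even this crude sketch shows why the can has to be modeled: a few millimeters of metal shave several percent off the thermal flux reaching the test article, which matters when you're trying to verify prototypic fission power levels.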

Another consideration for the can will be its thermal insulation properties. NTR fuel elements are run at the edge of the thermal capabilities of the materials they're made out of, since this maximizes thermal transfer and therefore specific impulse. This also means that, for the test to be as accurate as possible, the fuel element itself must be far hotter than the surrounding reactor, generally in the ballpark of 2500 K. The ORNL Irradiation Plan suggests the use of SIGRATHERM, a soft graphite felt, for this insulating material. Graphite's behavior is well understood in reactors (and for those in the industry, the fact that it has about 4% of the density of solid graphite makes Wigner energy release minimal).
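To get a feel for how hard the felt has to work, here's a simple 1-D conduction sketch. The conductivity, felt thickness, and reactor-side temperature are all assumptions on my part for illustration; only the ~2500 K fuel element temperature comes from the discussion above:

```python
# Simple 1-D conduction estimate of heat leakage through a graphite-felt
# liner around the fuel element -- illustrative only; conductivity and
# geometry are assumed, not taken from the ORNL Irradiation Plan.
k_felt = 0.5        # W/(m*K), graphite felt at high temperature (assumed)
t_fuel = 2500.0     # K, fuel element surface (from the text)
t_reactor = 600.0   # K, surrounding reactor environment (assumed)
thickness = 0.01    # m, a hypothetical 1 cm felt layer

# Fourier's law: q = k * dT / dx
heat_flux = k_felt * (t_fuel - t_reactor) / thickness  # W/m^2
print(f"Conductive heat flux through felt: {heat_flux/1000:.0f} kW/m^2")
```

Tens of kilowatts per square meter leaking into the can is manageable for a test reactor's cooling loop, which is part of why a low-conductivity, low-density felt is an attractive choice.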

Pre-Hot Fire Testing: In-Pile Prototypic Fuel Testing


High Flux Isotope Reactor (HFIR), Oak Ridge National Lab, image courtesy Wikimedia

Once this extensive testing regime for fuel elements has been completed, a fueled set of fuel elements would be manufactured and transported to the appropriate test reactor. In addition to the TRIGA-type reactors common at many universities, three research reactors are available with unique capabilities. The first is the High Flux Isotope Reactor at Oak Ridge, one of the longest-operating research reactors, with quite a few ports for irradiation studies at different neutron flux densities. As an incredibly well-characterized reactor, there are many advantages to using this well-understood system, especially for analysis at different levels of fuel burnup and radiation flux.






Transient Reactor Test (TREAT) at Idaho NL. Image courtesy DOE

The second is a newly-reactivated reactor at Idaho National Laboratory, the Transient Reactor Test (TREAT). An air-cooled, graphite-moderated thermal reactor, TREAT's most immediately useful instrument for this sort of experiment is the hodoscope. This device uses fast neutrons to detect fission activity in the prototypic fuel element in real time, allowing unique analysis of fuel element behavior, burnup behavior, and other characteristics that can only be estimated after in-pile testing in other reactors.


Advanced Test Reactor, Idaho NL. Image courtesy DOE

The third, also at Idaho National Lab, is the Advanced Test Reactor. A pressurized light water reactor, its core has four lobes, and almost looks like a clover from above. This allows for very fine control of the neutron flux the fuel elements would experience. In addition, six of the locations in the core have independent cooling systems that are separated from the primary cooling system. This would allow (with modification, and possible site permission requirements due to the explosive nature of H2) the use of hydrogen coolant to examine the chemical and thermal transfer behaviors of the NTR fuel element while undergoing fission.

Each of these reactors uses a slightly different form of canister to contain the test article. This is required to prevent any damage to the fuel element from contaminating the rest of the reactor core – an incredibly expensive, difficult, and lengthy cleanup that can be avoided by chemically isolating the fuel elements from their surrounding environment. Most often, these cans are made out of aluminum-6061, 300 series stainless steel, or grade 5 titanium (links in the reference section). According to a recent Oak Ridge document (linked in references), the most preferred material would be the titanium, with the stainless being the least attractive due to 59Fe and 60Co activation causing the can to become highly gamma-active. This makes the transportation and disposal of the cans post-irradiation much more costly.
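To see why those two activation products make stainless so unattractive, here's a quick decay-law sketch. The half-lives are the standard published values; the initial activities are normalized to 1, since only the relative decay rates matter for this point:

```python
import math

# Why activated stainless cans are a disposal headache: 59Fe decays away
# in months, but 60Co lingers for years. Initial activities are arbitrary
# (normalized to 1.0); only the relative decay is illustrated here.
HALF_LIFE_FE59 = 44.5        # days
HALF_LIFE_CO60 = 5.27 * 365  # days (5.27 years)

def remaining_fraction(half_life_days, elapsed_days):
    """Fraction of initial activity left after elapsed_days."""
    return math.exp(-math.log(2) * elapsed_days / half_life_days)

for days in (30, 180, 365):
    fe = remaining_fraction(HALF_LIFE_FE59, days)
    co = remaining_fraction(HALF_LIFE_CO60, days)
    print(f"after {days:3d} d: 59Fe {fe:.2%} remaining, 60Co {co:.2%} remaining")
```

A year after irradiation the 59Fe is essentially gone, but nearly 90% of the 60Co activity remains – so a gamma-hot stainless can stays gamma-hot on any practical disposal timeline.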

Here’s an example of the properties that would be tested by the time that the tests we’ve looked at so far have been completed:

Fuel Properties and Parameters to Test
Image courtesy Oak Ridge NL

NTR Hot Fire Testing For Today’s Regulatory Environment

It goes without saying that with the current regulatory strictures placed on nuclear testing, the type of testing done during Rover could not be repeated today. Radioisotope release into the environment is incredibly stringently regulated, so open-air testing like that conducted at Jackass Flats would not be possible. However, multiple options have been proposed over the ensuing years for testing an NTR within this more rigorous regulatory regime, along with cost estimates (some more reliable than others) and characterizations of the challenges that need to be overcome to ensure that the necessary environmental regulations are met.

The options for current hot-fire testing of an NTR are: the use of upgraded versions of the effluent scrubbers used in the Nuclear Furnace test reactor; the use of boreholes as effluent capture and scrubbing systems (either already-existing boreholes drilled for nuclear weapons tests that have not been used for that purpose at Frenchman’s Flat, or new boreholes at the Idaho National Laboratory); the use of a horizontal, hydrogen-cooled scrubbing system (either using existing U-la or P-tunnel facilities modified for the purpose, or constructing a new facility at the National Nuclear Security Site); and the use of a new, full-exhaust-capture system at NASA’s current rocket test facilities at the John C. Stennis Space Center in Mississippi.

The Way We Did It Before: Nuclear Furnace Exhaust Scrubbers

Transverse view, Finseth
NF1 configuration, image from Finseth, 1991 courtesy NASA

The NF-1 test, the last test of Project Rover, actually included an exhaust scrubber to minimize the amount of effluent released in the test. Because this test was looking at different types of fuel elements than had been used in most previous tests, there was some concern that erosion would be more of an issue with these fuel elements than with others.

Effluent Cleanup System Flow Chart
Image from Nuclear Furnace 1 test report, Kirk, courtesy DOE

Axial view, Finseth

The hydrogen exhaust, after passing the instrumentation that would provide similar data to the Elephant Gun used in earlier tests, would be cooled with a spray of water, which then flashed to steam. This water was initially used to moderate the reactor itself; part of it was then siphoned off into a wastewater holding tank while the rest was used for this exhaust cooling injection system. After this, the steam/H2 mixture had a temperature of about 1100 R.
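A back-of-envelope energy balance shows roughly how much spray water a quench stage like this needs. The only number below taken from the test description is the ~1100 R (611 K) target temperature; the exhaust temperature, water supply temperature, and averaged heat capacities are all assumptions for illustration – the NF-1 report has the actual conditions:

```python
# Back-of-envelope energy balance for the water-spray quench stage.
# Every property value and boundary temperature here is an assumption for
# illustration; only the ~1100 R target comes from the test description.
CP_H2 = 15.0       # kJ/(kg*K), hydrogen, rough high-temperature average (assumed)
CP_WATER = 4.18    # kJ/(kg*K), liquid water
CP_STEAM = 2.0     # kJ/(kg*K), steam, rough average (assumed)
H_VAP = 2257.0     # kJ/kg, latent heat of vaporization at 1 atm

t_exhaust = 2300.0  # K, H2 entering the quench (hypothetical)
t_target = 611.0    # K, ~1100 R, from the text
t_water_in = 300.0  # K, spray water supply (assumed)
t_boil = 373.0      # K, boiling point at 1 atm

heat_per_kg_h2 = CP_H2 * (t_exhaust - t_target)                 # kJ to remove
heat_per_kg_water = (CP_WATER * (t_boil - t_water_in) + H_VAP
                     + CP_STEAM * (t_target - t_boil))          # kJ absorbed
water_ratio = heat_per_kg_h2 / heat_per_kg_water
print(f"~{water_ratio:.1f} kg of spray water per kg of H2")
```

The takeaway is that quenching NTR exhaust with water takes several times the exhaust's own mass flow in water – which is why wastewater handling dominates so much of the effluent system design.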

After leaving the water injector system, the coolant went through a radial outflow filter that was about 3 ft long, containing two wire mesh screens, the first with 0.078 inch square openings, the second with 0.095 inch square openings.

Once it had passed through the screens, a steam generator was used to further cool the effluent, and to pull some of the H2O out of the exhaust stream. Once past this steam generator, the first separator drew the now-condensed water out of the effluent stream. Part of the radioactive component of the exhaust was at this point dissolved in the water. The water was drawn off to maintain an appropriate liquid level, and was moved into the wastewater disposal tank for filtering. A further round of exhaust cooling followed, using a water heat exchanger to cool the remaining effluent enough to condense out the rest of the water. The water used in this heat exchanger would be used by the steam generator earlier in the effluent stream as its cool water intake, and would be discharged into the wastewater holding tank, but would not come in direct contact with the effluent stream. Once past the heat exchanger, the now much cooler H2/H2O mixture would go through a second separator identical in design to the first. At this point, most of the radioactive contaminant that could be dissolved in water had been, and the discharge from this unit was at this point pretty much completely dry.

A counterflow, U-tube type heat exchanger was then used to cool the effluent even more, and then a third separator – identical to the first two – was used to capture any last amounts of water still present in the effluent stream. During normal operation, though, basically no water would collect in this separator. The gas would then be passed through a silica gel sorption bed to further dry it. A back flow of gaseous nitrogen would be used to dry this bed for reuse. The gas, at this point completely dried, was then passed through another heat exchanger almost identical to the one that preceded the silica gel bed.

Charcoal Trap System
From NFI test report, Kirk, via DOE

After passing through a throttle valve (used to maintain back-pressure in the reactor), the gas was mixed with LH2 to further cool it to 250-350 R, then passed through an activated charcoal filter trap, 60 inches long and 60 inches in diameter, to capture the rest of the radioactive effluent left in the hydrogen stream. Finally, the now-cleaned H2 was burned to prevent a buildup of H2 gas in the area – a major explosion hazard. This filter system was adjusted after each power test, because pressure problems kept cropping up for a number of reasons, from too much flow resistance to thermal disequilibrium.

So how well did this system do at scrubbing the effluent? Two of the biggest concerns were the capture of radiokrypton and radioxenon, both mildly radioactive noble gasses. The activated charcoal bed was primarily tasked with scrubbing these gasses out of the exhaust stream. Since xenon is far more easily captured than krypton in activated charcoal, the focus was on ensuring the krypton would be scrubbed out of the gas stream, since this meant that all the xenon would be captured as well. Because the Kr could be pushed through the charcoal bed by the flow of the H2, a number of traps were placed through the charcoal bed to measure gamma activity at various points. Furthermore, the effluent was sampled before being flared off, to get a final measurement of how much krypton was released by the trap itself.
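The standard way to size a charcoal trap like this is the holdup relation t = k·M/Q, where k is the dynamic adsorption coefficient, M the charcoal mass, and Q the carrier gas flow. The numbers below are purely illustrative assumptions (the real coefficient depends strongly on temperature, pressure, and the carrier gas – and hydrogen is a much less favorable carrier than air):

```python
# Rough holdup-time estimate for krypton in an activated charcoal bed,
# using the standard t = k * M / Q relation. All three inputs are
# illustrative assumptions, not NF-1 values.
k_kr = 25.0          # cm^3/g, dynamic adsorption coefficient for Kr (assumed)
bed_mass_g = 1.0e6   # g, ~1 tonne of charcoal (hypothetical bed)
flow_cm3_s = 5.0e4   # cm^3/s, carrier gas flow at bed conditions (assumed)

holdup_s = k_kr * bed_mass_g / flow_cm3_s
print(f"Mean Kr holdup time: {holdup_s:.0f} s (~{holdup_s/60:.0f} min)")
```

Because holdup time scales linearly with charcoal mass and inversely with flow, a full-flow NTR test (orders of magnitude more hydrogen than NF-1) drives the bed size up very quickly – which is exactly the scaling problem discussed below.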

Looking at the sampling of the exhaust plume, as well as the ground test stations, the highest dose rate was 1 mCi/hr, far lower than in the other NTR tests. Radioisotope concentrations were also far lower than in the other tests. However, some radiation was still released from the reactor, and the complications of ensuring that this doesn't occur (effectively no release is allowed under current testing regimes) due to material, chemical, and gas-dynamic challenges make this a very challenging, and costly, proposition to adapt to a full-flow NTR test.

Above Ground Test Option #1: Exhaust Scrubbing

The most detailed analysis of this concept was done in support of the Space Nuclear Thermal Propulsion program, run by the Department of Defense – better known as Project Timber Wind. This was a far larger engine (111 kN, as opposed to 25 kN), so the exhaust volume would be far larger, and the costs associated with the program would be higher due to the higher exhaust flow rate. A smaller engine would cost less to test, but unfortunately it's impossible to make a reasonable estimate of the cost reduction, since these costs are far from linear in nature (a 25 kN test would cost significantly more than 20% of the cost estimated for the SNTP engine). However, it's a good example of the types of facilities needed, and the challenges associated with this approach.

SNTP Test Facility
Image courtesy DOE

The primary advantage of the ETS concept is that it doesn't use H2O to cool the exhaust, but LH2. This means that the potential for release of large amounts of (very mildly) irradiated water into the groundwater supply is severely limited (although the water solubility of the individual fission products would not change). The disadvantage, of course, is that it requires large amounts of LH2 to be on hand. At Stennis SC this is less of an issue, since LH2 facilities are already in place, but LH2 is – as we saw in the last blog post – a major headache. It was estimated that either a combined propellant-effluent coolant supply (~181,440 kg) or a separate supply for the coolant system (~136,000 kg) could be used (numbers based on a maximum of 2 hours burn time per test). To get a sense of what this amount of LH2 would require, two ~1400 kl dewars of LH2 would be needed for the combined system – nearly 90% of the LH2 supply available at Kennedy Space Center (~3200 kl).
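The mass-to-volume conversion behind that dewar sizing is worth making explicit, since LH2's absurdly low density (~70.85 kg/m³, with 1 m³ = 1 kl) is what drives the storage problem:

```python
# Converting the quoted LH2 masses to storage volume, using liquid
# hydrogen's density of ~70.85 kg/m^3 at its normal boiling point.
LH2_DENSITY = 70.85  # kg/m^3 (1 m^3 = 1 kl)

combined_kg = 181_440   # combined propellant + effluent coolant supply
separate_kg = 136_000   # separate coolant-only supply

for label, kg in (("combined", combined_kg), ("separate", separate_kg)):
    kl = kg / LH2_DENSITY
    print(f"{label}: {kg:,} kg -> ~{kl:,.0f} kl of LH2")
```

The combined supply works out to roughly 2,560 kl – consistent with the two ~1400 kl dewars quoted above, once you allow some ullage headroom in each tank.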

Once the exhaust is sufficiently cooled, it is a fairly routine matter to filter out the fission products (a combination of physical filters and chemical reactions can ensure that no radionuclides are released, and radiation monitoring can verify that the H2 has been cleaned of all radioactive effluent). In the NF-1 test, water was used to capture the particulate matter, and the H2O was passed through a silica gel bed to remove the fission products. An activated carbon filter was used to remove the noble gasses and other gaseous and aerosol fission products. After this, depending on the facility setup, it is possible to recycle a good portion of the H2 from the test; however, this imposes massive power requirements for the cryocoolers and hydrogen densification equipment needed to handle such a large amount of H2.

Saddle Mountain facility diagram
Alternative test facility layout

Due to both the irradiation of the facilities and the very different requirements for this type of test facility, it was determined that the facilities built for the NRDS during Rover would be insufficient for this sort of testing, and so new facilities would need to be constructed, with much larger LH2 storage capabilities. One more recent update to the concept is brought up in the SAFE proposal (next section), using already existing facilities at the Nevada Test Site (now National Nuclear Security Site), in the U-la or P-tunnel complexes. These underground facilities were horizontal, interconnected tunnel complexes used for sub-critical nuclear testing. There are a number of benefits to using these (now-unused) facilities for this type of testing: first, the rhyolite that the P-tunnel facility is cut into is far less permeable to fission products, but remains an excellent heat sink for the thermal effects of the exhaust plume. Second, it’s unlikely to fracture due to overpressure, although back-pressure into the engine itself will constrain the minimum size of the tunnel. Third, a hot cell can be cut into the mountain adjacent to the test location, making a very well-shielded facility for cool-down and disassembly beside the test location, eliminating the need to transport the now-hot engine to another facility for disassembly.

After the gas has passed through a length of tunnel and cooled sufficiently, a heat exchanger is used to further cool it, and then it's passed through an activated charcoal filter similar to the one used in the NF-1 test. This filtered H2 will then be flared off after going through a number of fission product detectors to ensure the filter maintained its integrity. The U-la tunnels are dug into alluvium, so we'll look at those in the next section.

One concern with using charcoal filters is that their effectiveness varies greatly depending on the temperature of the effluent, and the pressure that it’s fed into the filter. Indeed, the H2 can push fission products through the filter, so there’s a definite limit to how small the filter can be. The longer the test, the larger the filter will be. Activated charcoal is relatively cheap, but by the end of the test it will be irradiated, meaning that it has to be disposed of in nuclear waste repositories.

Detailed cost estimates were avoided in the DOD assessment due to a number of factors, including uncertain site location and the possibility of using this facility for multiple programs (allowing for cost sharing), but the overall cost for the test systems and facilities was estimated at $500M in 1993 dollars. Most papers seem to treat this as the most expensive, and least practical, option for above ground NTR testing.

The Borehole Option: Subsurface Active Filtration of Exhaust

Many different testing options have been suggested over the years. The simplest is to fire the rocket with its nozzle pointed into a deep borehole at the Nevada Test Site, which has had extensive geological work done to determine soil porosity and other characteristics that would be important to the concept. Known as Subsurface Active Filtration of Exhaust, or SAFE, it was proposed in 1999 by the Center for Space Studies, and continued to be refined for a number of years.

SAFE schematic
SAFE concept, Howe 2012, image courtesy NASA

In this concept, the engine is placed over an already existing (from below-ground nuclear weapons testing) 8 foot wide, 1200 foot deep borehole, with a water spray system mounted adjacent to the nozzle of the NTR. The first section of the hole will be clad in steel, and the rest will simply be lined with the rock that has been bored into. The main limiting consideration will be the migration of radionuclides into the surrounding rock, which is something that's been modeled computationally using Frenchman's Flat geologic data, but has not been verified.

SAFE injector model
SAFE injection system model, Howe 2012

The primary challenges associated with this type of testing are twofold: first, it must be ensured that the fission products will not migrate into groundwater or the atmosphere; and second, to ensure that the surrounding bedrock isn't fractured – which would allow greater-than-anticipated migration of fission products from the borehole – the pressure in the borehole must be kept below a certain level. A sub-scale test with an RL-10 chemical rocket engine and radioisotope tracers was proposed (this test would use a much smaller borehole, and add known radioisotope tracers – either Xe or Kr isotopes – to the fuel to test dispersion of fission products through the bedrock). This test would provide the necessary migration, permeability, and (given appropriate borehole scaling to ensure prototypic temperature and pressure regimes) soil fracture pressure data to ensure the full filtration of the exhaust of an NTR.

The advantage to doing this test at Frenchman’s Flat is that the ground has already been extensively tested for the porosity (35%), permeability (8 darcys), water content (initial pore saturation 30%), and homogeneity (alluvium, so pretty much 100%) that is needed. In fact, a model already exists to calculate the behavior of the soil to these effects, known as WAFE, and the model was applied to the test parameters in 1999. Both full thrust (73.4 kg/s of H2O from both exhaust and cooling spray, and 0.64 kg/s of H2) and 30% thrust (20.5 kg/s H2O and 0.33 kg/s of H2) were modeled, both assuming 600 C exhaust injection after the steel liner. They found that the maximum equilibrium pressure in the borehole would reach 36 psia for the full thrust test, and 21 psia in the 30% thrust case, after about 2 hours, well within the acceptable pressure range for the borehole, assuming the exhaust gases were limited to below Mach 1 to prevent excess back-pressure buildup.
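The flow rates quoted for the full-thrust WAFE case make for a quick composition check on what's actually being injected into the soil – mostly steam, with a small hydrogen fraction. The flow rates are from the modeling results above; the molar masses are standard values:

```python
# Molar composition of the flow injected into the borehole in the
# full-thrust WAFE case: 73.4 kg/s of H2O (exhaust plus cooling spray)
# and 0.64 kg/s of uncombined H2, per the figures quoted above.
M_H2O = 18.015  # g/mol
M_H2 = 2.016    # g/mol

mol_h2o = 73.4 / M_H2O   # kmol/s
mol_h2 = 0.64 / M_H2     # kmol/s
steam_fraction = mol_h2o / (mol_h2o + mol_h2)

print(f"H2O: {mol_h2o:.2f} kmol/s, H2: {mol_h2:.2f} kmol/s")
print(f"Steam mole fraction entering the soil: {steam_fraction:.1%}")
```

Over nine-tenths of the injected gas (by mole) is steam, which is why the soil's pore saturation and permeability to water dominate the SAFE modeling, with the residual hydrogen as a secondary concern.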

P-Tunnel setup

Other options were explored as well, including the use of the U-la facility at the NNSS for horizontal testing. This is an underground set of tunnels in Nevada which would provide safety for the testing team and the availability of a hot cell for reactor disassembly beside the test point (the P-tunnel facility is also cut into similar alluvial deposits, so primary filtration will come from the soil itself, and water cooling will still be necessary).

INL geology 2
INL geological composition, image courtesy DOE

Further options were explored in the “Final Report – Assessment of Testing Options for the NTR at the INL.” This is a more geologically complex region, including pahoehoe and rubble basalt and various types of sediment. Another complication is that INL is on the Snake River plain, above an aquifer, so the site will be limited to those places where the aquifer is more than 450 feet below the surface. However, pahoehoe basalt is gas-impermeable, so if a site can be found with a layer of this basalt between the bottom of the borehole and the aquifer, the basalt can serve as a barrier protecting the groundwater.

A 1998 cost estimate by Bechtel Nevada on the test concept estimated a cost of $5M for the non-nuclear validation test, and $16M for the full-scale NTR test, but it’s unclear if this included cost for the hot cell and associated equipment that would need to be built to support the test campaign, and I haven’t been able to find the specific report.

However, this testing option does not seem to feature heavily in NASA’s internal discussions for NTR testing at this point. One of the disadvantages is that it would require the rocket testing equipment, and support facilities, to be built from scratch, and to occur on DOE property. NASA has an extensive rocket testing facility at the John C. Stennis Space Center in Hancock County, MS, which has geology that isn’t conducive to subterranean testing of any sort, much less testing that requires significant isolation from the water table, and most NASA presentations seem to focus on using this facility.

The main reasons given in a late 2017 presentation for not pursuing this option are unresolved issues concerning water saturation effects on soil permeability, hole pressure during engine operation, and soil effectiveness in exhaust filtering. I have been unable to find the Bechtel Nevada and Desert Research Institute studies on this subject, but these questions have been studied. I would be curious to know why these studies would be considered incomplete.

One advantage to these options, though, which cannot be overstated, is that these facilities would be located on DOE land. As was seen in the recent KRUSTY fission-powered test, nuclear reactors in DOE facilities use an internal certification and licensing program independent of the NRC. This means that the 9-10 year (or longer), incredibly expensive certification process, which has never been approved for a First of a Kind reactor, would be bypassed. This alone is a potentially huge cost savings for the project, and may offset the additional study required to verify the suitability of these sites for NTR testing compared to certifying a new location – no matter how well established it is for rocket testing already.

Above Ground Test Option #2: Complete Capture

Flow Diagram Coote 2017
Image via Coote 2017, courtesy NASA

In this NTR test setup, the exhaust is slowed from supersonic to subsonic speeds, allowing O2 to be injected and mixed well past the molar equilibrium point for H2O. The resultant mixture is then combusted, resulting in hot steam and free O2. A water sprayer is used to cool the steam, and the flow then passes through a debris trap filled with water at the bottom. It is then captured in a storage pool, and the remaining gaseous O2 is run through a desiccant filter, which is exhausted into the same storage pool. The water is filtered of all fission products and any unburned fuel, and then released. The gaseous O2 is recaptured and cooled using liquid nitrogen, and whatever is unable to be efficiently recaptured is vented into the atmosphere. The primary advantage of this system is that the resulting H2O can be filtered at leisure, allowing for more efficient and thorough filtration without the worry of over-pressurization of the system if there's a blockage in the filters.
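The stoichiometry behind that O2 injection stage is simple but unforgiving: burning hydrogen to water takes half a mole of O2 per mole of H2, which works out to roughly 8 kg of oxygen per kg of hydrogen – and the text notes the O2 is supplied well past that point. The hydrogen flow rate and excess factor below are my own illustrative assumptions, not program figures:

```python
# Stoichiometric check for the O2 injection stage: 2 H2 + O2 -> 2 H2O,
# i.e. half a mole of O2 per mole of H2. Flow rate and excess factor
# are hypothetical, chosen only for illustration.
M_H2 = 2.016   # g/mol
M_O2 = 31.998  # g/mol

h2_flow = 13.0   # kg/s, hypothetical engine hydrogen flow (assumed)
excess = 1.25    # 25% excess O2 to guarantee complete combustion (assumed)

stoich_o2_per_kg_h2 = 0.5 * M_O2 / M_H2       # kg O2 per kg H2
o2_flow = h2_flow * stoich_o2_per_kg_h2 * excess
print(f"Stoichiometric ratio: {stoich_o2_per_kg_h2:.2f} kg O2 per kg H2")
print(f"Required O2 injection rate: {o2_flow:.0f} kg/s")
```

In other words, the oxidizer feed to the burner has to run at many times the engine's own propellant flow, which is a major driver of the facility's propellant-handling infrastructure.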

Subscale Concept Render
Subscale test stand render, image courtesy BWXT via NASA

There are many questions that need to be answered to ensure that this system works properly, as there are with all of the systems that have yet to be tested. In order to verify that the system will work as advertised, a sub-scale demonstrator will need to be built. This facility will use a hydrogen wave heater in place of the nuclear reactor, and test the rest of the components at a smaller scale wherever possible. Due to the specific needs of the exhaust capture system, especially the need to test complete combustion at different heat loads, the height of the facility may not be able to be scaled down (in order to ensure complete combustion, the gas flow will need to be subsonic before mixing and combustion). Thermal loading on structures is another major concern for the sub-scale test, since many components must be tested at the appropriate temperature, and the smaller structures won't be able to passively reject heat as well. Finally, some things won't be able to be tested in a sub-scale system, so what data will need to be collected in the full-scale system must also be assessed.

One last thing to note is that this system will also be used to verify that high-velocity impacts of hot debris will not be a concern. This was, of course, seen in many of the early Rover tests, as fuel elements would break and be ejected from the nozzle at velocities similar to the exhaust. While CERMET fuels are (likely) more durable, this is an accident condition that has to be prepared for. In addition, smaller pieces of debris (such as flakes of clad, or non-nuclear components) need to be fully captured as well. These tests will need to be carried out on the sub-scale test bed to demonstrate to regulators that any accident can be addressed. This adds to the complexity of the test setup, and encourages making the test stand changeable as quickly and efficiently as possible – in other words, as modular as possible. This also increases the flexibility of the facility for any other uses that it may be put to.

NTP Testing at Stennis Space Center

SSC overview
Stennis SC test facilities, image courtesy NASA

This last testing concept seems to be the front-runner for current NASA designs, to be integrated into the A3 test stand at NASA's Stennis Space Center (SSC). SSC is the premier rocket test facility for NASA, testing both solid and liquid rocket engines. The test facilities are located in the “fee area,” a 20 square mile area (avg. radius 2.5 miles) surrounded by an acoustic “buffer zone” that averages 7.9 miles in radius (195 sq mi). With available office space, manufacturing spaces, and indoor and outdoor warehouse space, as well as a number of rocket engine test stands, the facility has much going for it. Most of the rocket engines used by American launch companies have been tested here, going all the way back to the Moon program. This is a VERY advanced, well-developed facility for the development of any type of chemical engine… but unfortunately, nuclear is different. Because SSC has never supported nuclear operations, a number of facilities will need to be constructed at both the E3 and A3 test stands to support NTR testing, raising the overall cost of the program considerably – to just under $850M (in 2017 dollars).
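As a quick sanity check, the quoted areas and average radii are self-consistent if you treat both zones as circles:

```python
import math

# Cross-checking the SSC site figures quoted above, treating the fee
# area and buffer zone as circles of the stated average radius.
fee_radius_mi = 2.5      # miles, average fee-area radius (from the text)
buffer_radius_mi = 7.9   # miles, average buffer-zone radius (from the text)

fee_area = math.pi * fee_radius_mi**2        # quoted as ~20 sq mi
buffer_area = math.pi * buffer_radius_mi**2  # quoted as ~195 sq mi
print(f"Fee area: {fee_area:.1f} sq mi, buffer zone: {buffer_area:.1f} sq mi")
```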

Diagram side by side with A3
Image from Houts presentation 2017, via NASA

As one of the newer facilities at SSC, the A3 test stand groundbreaking was held in August of 2007, and was completed in 2014. It is the only facility that is able to handle the thrust level (300+ Klbf at altitude, 1,000 Klbf nominal design) and simulated altitude (100 Kft) that testing a powerful upper stage requires. There are two additional facilities designed to operate at lower-than-ambient atmospheric pressures at SSC, the A2 test stand (650 Klbf at 60 Kft) and the E3 test facility (60 Klbf at 100 Kft). The E3 facility will be used for sub-scale testing, turbopump validation, and other tests for the NTP program, but the A2 test stand seems to not be under consideration at this time. The rest of the test stands at SSC are designed to operate at ambient pressure (i.e. sea level), and so they are not suitable for NTP testing.

The E3 facility would be used for sub-scale testing, first of the turbopumps (similar to the tests done there for the SSME), and then sub-scale reactor tests. These would likely be the first improvements made at SSC to support NTP testing, within the next couple of years, and would cost $35-38M ($15-16M for sub-scale turbopump tests, $20-22M for the sub-scale reactor test, according to preliminary BWXT cost estimates). Another thing that would be tested at E3 is a sub-scale engine exhaust capture system, which has been approved for both Phases 1 & 2; work to support this should be starting at any time ($8.74M was allocated to this goal in the FY ’14 budget). From what I can see, work has already started (to an unknown extent) at E3 on this sub-scale system, however I have been unable to find information regarding the extent of the work or the scale of the test stand compared to the full system.

A3 under construction
A3 test stand under construction, image courtesy NASA

The A3 facility has the most that needs to be added, including facilities for power pack testing ($21M); a full-flow, electrically heated depleted uranium test (cost TBD); a facility for zero power testing and reactor verification before testing ($15M); an adjacent hot cell for reactor cool-down and disassembly (the new version of the EMAD facility, $220M); and testing for both sub-scale and full scale fission powered NTP testing (cost to be determined, as it's likely to be heavily influenced by regulatory burden). This does not include radiation shielding, or an alternate ducting system to ensure that the HVAC system doesn't become irradiated (a major headache in the decommissioning of the original E-MAD facility). It is unlikely that design work for this facility will start in earnest until FY21, and construction of the facility is not likely to start until FY24. Assuming a 10 year site licensing lead time (which is typical), it is unlikely that any nuclear testing will be able to be done until FY29, with full power nuclear testing not likely until FY30.

Notional schedule
Notional Development Timeline

Documents relating to the test stands at SSC show that there has been some funding for this project since FY ‘16, but it’s difficult to tell how much of that has gone to analysis, environmental impact studies, and other bureaucratic and regulatory necessities, and how much has gone to actual construction. I HAVE had one person who works at SSC mention that physical work has started, but they were unwilling to provide any more information than that due to their not being authorized to speak to the public about the work, and their unfamiliarity with what is and isn’t public knowledge (most of it simply isn’t public). According to a presentation at SSC in July of 2017, the sub-scale turbopump testing may start in the next year or two, but initial design work for the A3 test stand is unlikely to start before FY’21.

NTP draft tech demonstration draft timeline
Draft Tech Development Roadmap, image via NASA

According to the presentation (linked below), there are two major hurdles the program needs to overcome on the policy and regulatory side. First, a national/agency level decision needs to be made between NASA, the DOE, and the NRC as to the specific responsibilities and roles for NTP development, especially in regards to: 1. reactor production, engine and launch vehicle integration strategy, and 2. ground, launch, and in-space operations of the NTR. Second, NTP testing at SSC requires a nuclear site license, a 9-10 year process even for a traditional light water power reactor, much less as unusual a reactor architecture as an NTR. This is another area where BWXT's experience is being leaned on heavily, with two (not publicly available) studies having been carried out in FY16, one on a site licensing strategy and implementation roadmap, and one on initial identification of policy issues related to licensing an NTP ground test at SSC.

Regulatory Burdens, Bureaucratic Concerns, and Other Matters

Originally, this post was going to delve into the regulatory and environmental challenges of doing NTR testing. An NTR is very different from any other sort of nuclear reactor, not only because it’s a once-through gas cooled reactor operating at a very high temperature, but also due to the performance characteristics that the reactor is expected to be able to provide.

Additionally, these are short-lived reactors – 100 hours of operation is more than enough to complete an entire crewed mission to Mars, and is a long lifetime for a rocket engine. However, as we saw during the Rover hot-fire testing, there are always issues that come up that aren’t able to be adequately tested beforehand (even with our far more advanced computational and modeling capabilities), so iteration is key. This means that the site has to be licensed for multiple different reactors.

Unfortunately, these subjects are VERY complex, and are very difficult to learn. Communicating with the NRC in and of itself is a subspecialty of both the nuclear and legal industries for reactor designers. The fact that the DOE, NASA, and the NRC are having to interact on this project just adds to the complexity.

So, I’m going to put that part off for now, and it will become its own separate blog post. I have contacted NASA, the DOE, and the NRC looking for additional information and clarification in their various areas, and hopefully will hear back in the coming weeks or months. I’m also reading the relevant regulations and internal rules for these organizations, and there’s more than enough there for a quite lengthy blog post on its own. If you work with any of these organizations, and are able either to help me gather this information or to put me in touch with someone who can, I would greatly appreciate it if you contact me.

Upcoming Posts!

For now, we’re going to leave testing behind as the main focus of the blog, though we will still look at the subject as it becomes relevant in other posts. Next up is one final post on solid core pure NTRs, looking at carbide-fueled NTRs, both the RD-0410 in Russia and some legacy and new designs from the US. After that, we’ll move on to bimodal NTR/chemical and bimodal NTR/thermal electric designs.

After that, with one small exception, we’ll leave NTRs behind for a while and look at nuclear electric propulsion. I plan on doing pages for individual reactor designs during this time, both NTR and NEP, and adding them as their own pages on the website. As I write posts, I’ll link to the new (or updated) pages as they’re completed.

Be sure to check out the rest of the website, and join us on Facebook! This blog is far from the only thing going on!



In Pile Testing

Technology Implementation Plan: Irradiation Testing and Qualification for Nuclear Thermal Propulsion Fuel; ORNL/TM-2017/376, Howard et al September 2017

DOE Order 414.1D, Quality Assurance; approved 4/2011

10 CFR Part 830, Nuclear Safety Management;

High Flux Isotope Reactor homepage:

Advanced Test Reactor Irradiation Facilities and Capabilities; Furstenau and Glover 2009

Transient Reactor Test Facility homepage:

Al 6061 Matweb page:

300 Stainless Steel; Pennsylvania Stainless,

Grade 5 Titanium Matweb page:

SIGRATHERM, SGL (manufacturer) website:

Nuclear Furnace ECS

Nuclear Furnace 1 Test Report; LA-5189-MS, by W.L. Kirk, 1973

DOE Fact Sheet, Appendix 2

Above Ground Effluent Treatment System

Space Nuclear Thermal Propulsion Final Report, R.A. Haslett, Grumman Aerospace Corp, 1995

Space Nuclear Thermal Propulsion Test Facilities Subpanel Final Report, Allen et al, 1993

Subsurface Active Filtration of Exhaust (SAFE)

Ground Testing a Nuclear Thermal Rocket: Design of a sub-scale demonstration experiment, Howe et al, Center for Space Nuclear Research, 2012

Subscale Validation of the Subsurface Active Filtration of Exhaust Approach to NTP Ground Testing, Marshall et al, NASA Glenn RC, 2015 (Conference Paper) and (Presentation Slides)

Final Report – Assessment of Testing Options for the NTR at INL, Howe et al, Idaho NL, 2013

Complete Exhaust Capture and NASA Planning

Stennis Space Center Activities and Plans Overview presentation, NASA

Development and Utilization of Nuclear Thermal Propulsion; Houts and Mitchell, 2016 (slideshow)

Low Enriched Uranium (LEU) Nuclear Thermal Propulsion: System Overview and Ground Test Strategy, Coote 2017 (slideshow)

NASA FY18 Budget Estimates:

NTP Technical Interchange Meeting at SSC, June 2017 (slideshow)


NTR Hot Fire Testing Part I: Rover and NERVA Testing

Hello, and welcome back to Beyond NERVA, where today we are looking at ground testing of nuclear rockets. This is the first of two posts on ground testing NTRs, focusing on the testing methods used during Project ROVER, including a look at the zero-power testing and assembly tests carried out at Los Alamos Scientific Laboratory and the hot-fire testing done at the Nuclear Rocket Development Station at Jackass Flats, Nevada. The next post will focus on the options that have been and are being considered for hot-fire testing the next generation of LEU NTP, as well as a brief look at cost estimates for the different options and the plans NASA has proposed (what little is available) for the facilities needed to support this program.

We have examined how to test NTR fuel elements in non-nuclear situations before, and looked at two of the test stands that were developed for testing thermal, chemical, and erosion effects on them as individual components: the Compact Fuel Element Environment Simulator (CFEET) and the Nuclear Thermal Rocket Environment Effects Simulator (NTREES). These test stands provide economical means of testing fuel elements before loading them into a nuclear reactor for neutronic and reactor physics behavioral testing, and can catch many chemical and structural problems without the headaches of testing a nuclear reactor.

However, as any engineer can tell you, computer modeling alone is far from enough to qualify a full system. Without extensive real-life testing, no system can be trusted in real-life situations. This is especially true of something as complex as a nuclear reactor – much less a rocket engine. NTRs have the challenge of being both.

Engine Maintenance and Disassembly Facility, image via Wikimedia Commons

Back in the days of Project Rover, there were many nuclear propulsion tests performed. The most famous of these were carried out at Jackass Flats, NV, on the Nevada Test Site (now the Nevada National Security Site), in open-air testing on specialized rail cars. This was far from the vast majority of human habitation (there was one small ranch – fewer than 100 people – upwind of the facility, but downwind was the test site for nuclear weapons tests, so any fallout from a reactor meltdown was not considered a major concern).

The test program at the Nevada site started with the fully constructed, preliminarily tested rocket engines arriving by rail from Los Alamos, NM, along with a contingent of scientists, engineers, and technicians. After another check-out, each reactor (still attached to the custom rail car it was shipped on) was hooked up to instrumentation and hydrogen propellant and run through a series of tests, ramping up to either full power or engine failure. Rocket engine development in those days (and even today, sometimes) could be an explosive business, and hydrogen was a new propellant, so accidents were unfortunately common in the early days of Rover.

After the test, the rockets were wheeled off onto a remote stretch of track to cool down (from a radiation point of view) for a period of time, before being disassembled in a hot cell (a heavily shielded facility using remote manipulators to protect the engineers) and closely examined. This examination verified how much power was produced based on the fission product ratios of the fuel, examined and detailed all of the material and mechanical failures that had occurred, and started the reactor decommissioning and disposal procedures.

As time went on, great strides were made not only in NTR design, but in metallurgy, reactor dynamics, fluid dynamics, materials engineering, manufacturing techniques, cryogenics, and a host of other areas. These rocket engines were well beyond the bleeding edge of technology, even for NASA and the AEC – two of the most scientifically advanced organizations in the world at that point. This, unfortunately, also meant that early on there were many failures, for reasons that either weren’t immediately apparent or that didn’t have a solution based on the design capabilities of the day. However, they persisted, and by the end of the Rover program in 1972, a nuclear thermal rocket was tested successfully in flight configuration repeatedly, the fuel elements for the rocket were advancing by leaps and bounds past the needed specifications, and with the ability to cheaply iterate and test new versions of these elements in new, versatile, and reusable test reactors, the improvements were far from stalling out – they were accelerating.

However, as we know, the Rover program was canceled after NASA was no longer going to Mars, and the development program was largely scrapped. Scientists and engineers at Westinghouse Astronuclear Laboratory (the commercial contractor for the NERVA flight engine), Oak Ridge National Laboratory (where much of the fuel element fabrication was carried out) and Los Alamos Scientific Laboratory (the AEC facility primarily responsible for reactor design and initial testing) spent about another year finishing paperwork and final reports, and the program was largely shut down. The final report on the hot-fire test programs for NASA, though, wouldn’t be released until 1991.

Behind the Scenes: Pre-Hot Fire Testing of ROVER reactors

Pajarito map
Pajarito Test Area, image courtesy LANL

These hot fire tests were actually the end result of many more tests carried out in New Mexico, at Los Alamos Scientific Laboratory – specifically the Pajarito Test Area. Here, there were many test stands and experimental reactors used to measure such things as neutronics, reactor behavior, material behavior, critical assembly limitations and more.

Honeycomb grainy closeup
Honeycomb, with a KIWI mockup loaded. Image via LANL

The first of these was known as Honeycomb, due to its use of square grids made out of aluminum (which is mostly transparent to neutrons) held in large aluminum frames. Prisms of nuclear fuel, reflectors, neutron absorbers, moderator, and other materials were assembled carefully (to prevent accidental criticality, something the Pajarito Test Site had seen early in its existence with the Demon Core experiments and subsequent accident) to verify that the behavior of possible core configurations matched predictions closely enough to justify the effort and expense of refining and testing fuel elements in an operating reactor core. Especially for cold and warm criticality tests, this test stand was invaluable, but with the cancellation of Project Rover there was no need to continue using it, and it was largely mothballed.

PARKA, image courtesy LANL

The second was a modified KIWI-A reactor, which used a low-pressure, heavy water moderated island in the center of the reactor to reduce the amount of fissile fuel necessary to achieve criticality. This reactor, known as Zepo-A (for zero power, or cold criticality), was the first of a series of experiments carried out for each successive design in the Rover program, supporting Westinghouse Astronuclear Laboratory and the NNTS design and testing operations. As each reactor went through its zero-power neutronic testing, the design was refined and problems corrected. This sort of testing was conducted in late 2017 and early 2018 at the NCERC in support of the KRUSTY series of tests, which culminated in March with the first full-power test of a new nuclear reactor in the US in more than 40 years, and it remains a crucial testing phase for all nuclear reactor and fuel element development. An early, KIWI-type critical assembly ended up being re-purposed into a test stand called PARKA, which was used to test liquid metal fast breeder reactor (LMFBR, now known as the Integral Fast Reactor or IFR, under development at Idaho National Laboratory) fuel pins in a low-power, epithermal neutron environment for startup and shutdown transient behavior testing, as well as serving as a well-understood general radiation source.

Hot Gas Furnace
Hot gas furnace at LASL, image courtesy LANL

Finally, there was a pair of hot gas furnaces (one at LASL, one at WANL) that used resistive heating to bring a fuel element up to temperature in an H2 environment. This became more and more important as the project continued, since development of the clad on the fuel element was a major undertaking. As the fuel elements became more complex, or as the materials used in them changed, the thermal properties (and chemical properties at temperature) of the new designs needed to be tested before irradiation testing, to ensure the changes didn’t have unintended consequences. This was not just for the clad: the graphite matrix composition changed over time as well, transitioning from graphite flour with thermoset resin to a mix of flour and flakes, and the fuel particles themselves changed from uranium oxide to uranium carbide, with the particles individually coated by the end of the program. The gas furnace was invaluable in these tests, and can be considered the grandfather of today’s NTREES and CFEET test stands.

KIWI A Zepo A and Honeycomb shot
KIWI-A, Zepo-A, and Honeycomb mockup in Kiva 3. Image courtesy LANL

An excellent example of the importance of these tests, and of the careful checkout each Rover reactor received, is the KIWI-B4 reactor. Initial mockups, both on Honeycomb and in more rigorous Zepo mockups, showed that the design had good reactivity and control capability, but while the team at Los Alamos was assembling the actual test reactor, it was discovered that there was so much reactivity the core couldn’t be assembled! Inert material was used in place of some of the fuel elements, and neutron poisons were added to the core to counteract this excess reactivity. Careful testing showed that the uranium carbide fuel particles suspended in the graphite matrix had undergone hydrolysis; the hydrogen they absorbed moderated the neutrons and therefore increased the reactivity of the core. Later versions of the fuel used larger particles of UC2, individually coated before being distributed through the graphite matrix, to prevent this hydrolysis. Careful testing and assembly of these experimental reactors by the team at Los Alamos ensured the safe testing and operation of the reactors once they reached the Nevada test site, and supported Westinghouse’s design work, Oak Ridge National Lab’s manufacturing efforts, and the ultimate full-power testing carried out at Jackass Flats.

NTR Core Development Process
NTR Core Design Process, image courtesy IAEA

Once this series of criticality mockups, zero-power testing, assembly, and checkout was completed, the reactors were loaded nozzle-up onto a special rail car that would also act as a test stand and – accompanied by a team of scientists and engineers from both New Mexico and Nevada – transported by train to the test site at Jackass Flats, adjacent to Nellis Air Force Base and the Nevada Test Site, where nuclear weapons testing was done. Once there, a final series of checks ensured that nothing untoward had happened during transport, and the reactors were hooked up to test instrumentation and the hydrogen coolant supply for testing.

Problems at Jackass Flats: Fission is the Easy Part!

The testing challenges that the Nevada team faced extended far beyond the nuclear testing that was the primary goal of this test series. Hydrogen is a notoriously difficult material to handle due to its incredibly small molecular size and mass. It seeps through solid metal, valves have to be made with incredibly tight clearances, and when it’s exposed to the atmosphere it is a major explosion hazard. To add to the problems, these were the first days of cryogenic H2 experimentation. Even today, handling cryogenic H2 is far from routine, and the often unavoidable problems of using hydrogen as a propellant can be seen in many places – perhaps most spectacularly during the launch of a Delta IV Heavy, a hydrolox (H2/O2) rocket. Upon ignition, it appears that the rocket isn’t launching from the pad but exploding on it, due to the outgassing of H2 not only from the pressure relief valves in the tanks, but from seepage through valves, welds, and the tank walls themselves – the rocket catching itself on fire is actually standard operating procedure!

PB K Site Cryo Tank Test
Plum Brook Cryo Tank Pressure Test, image courtesy NASA

In the late 1950s, these problems were just being discovered – the hard way. NASA’s Plum Brook Research Station in Ohio was a key facility for exploring techniques for handling gaseous and liquid hydrogen safely. Not only did they experiment with cryogenic equipment, hydrogen densification methods, and liquid H2 transport and handling, they did materials and mechanical testing on valves, sensors, tanks, and other components, and developed welding techniques and verification capabilities to improve the handling of this extremely difficult, potentially explosive, but incredibly valuable propellant, coolant, and nuclear moderator – valuable because of its low atomic mass, the exact same property that caused the problems in the first place! The other options available for NTR propellant (basically anything that’s a gas at reactor operating temperatures and won’t leave excessive residue) weren’t nearly as good an option due to their lower exhaust velocity – and therefore lower specific impulse.
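The tradeoff described above falls straight out of the ideal-rocket relation: for a given chamber temperature, exhaust velocity scales with the square root of temperature over molar mass, which is why hydrogen (M ≈ 2 g/mol) beats every heavier candidate. Here is a minimal sketch of that scaling; the 2500 K chamber temperature, the fixed γ = 1.4, and the ammonia comparison are all illustrative assumptions, not figures from any test report:

```python
import math

R_U = 8.314  # universal gas constant, J/(mol*K)

def ideal_exhaust_velocity(t_chamber_k, molar_mass_kg_mol, gamma=1.4):
    # Ideal exhaust velocity for complete expansion to vacuum:
    #   v_e = sqrt( 2*gamma/(gamma-1) * (R_u/M) * T_c )
    # Real nozzles (finite expansion ratio, gamma varying with T) do worse,
    # but the sqrt(T/M) scaling is what matters for propellant choice.
    return math.sqrt(2 * gamma / (gamma - 1) * (R_U / molar_mass_kg_mol) * t_chamber_k)

T_C = 2500.0                                    # K, representative NTR chamber temperature (assumption)
v_h2 = ideal_exhaust_velocity(T_C, 0.002016)    # hydrogen, M ~ 2 g/mol
v_nh3 = ideal_exhaust_velocity(T_C, 0.017031)   # ammonia, M ~ 17 g/mol, a heavier comparison propellant

print(f"H2: {v_h2:.0f} m/s, NH3: {v_nh3:.0f} m/s, ratio {v_h2 / v_nh3:.2f}")
```

At the same chamber temperature, the hydrogen exhaust comes out roughly 2.9 times faster than ammonia’s in this ideal limit – which is the entire case for putting up with hydrogen’s handling headaches.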

Plum Brook is another often-overlooked facility that was critical to the success of not just NERVA, but all current liquid hydrogen fueled systems. I plan on doing another post (this one’s already VERY long) looking into the history of the various facilities involved with the Rover and NERVA program.

Indeed, all of the KIWI-A tests and KIWI-B1A used gaseous rather than liquid hydrogen, because the equipment that was planned (and would be used in subsequent tests) was delayed by construction problems, welding issues, valve failures, and fires during checkout of the new systems. These teething troubles with the propellant caused major problems at Jackass Flats, and many of the flashiest accidents of the testing program. Hydrogen fires were commonplace, and an accident during the installation of propellant lines on one reactor caused major damage to the test car, the shed it was contained in, and exposed instrumentation, but only minor apparent damage to the reactor itself, delaying its test for a full month while repairs were made (this test also saw two hydrogen fires, a common problem that improved as the program continued and the methods for handling the H2 matured).

While the H2 coolant was the source of many problems at Jackass Flats, other issues arose because these NTRs used technology that was well beyond bleeding-edge at the time. “New construction methods” doesn’t begin to describe the level of technological innovation required in virtually every area of these engines. Materials that had been theoretical chemical engineering possibilities only a few years (sometimes even months!) before were being used to build innovative, very high temperature, chemically and neutronically complex reactors – that also functioned as rocket engines. New metal alloys were developed, new forms of graphite were employed, and experimental methods of coating the fuel elements to prevent hydrogen from attacking the carbon of the fuel element matrix (a major concern, as seen in the KIWI-A reactor, which used unclad graphite plates for fuel) were constantly being adjusted – indeed, clad material experimentation continues to this day, though with advanced micro-imaging capabilities and a half century of materials science and manufacturing experience since then, the results now are light-years ahead of what was available to the scientists and engineers of the 50s and 60s. Hydrodynamic principles that were only poorly understood, stress and vibrational patterns that couldn’t be predicted, and material interactions at temperatures higher than are experienced in the vast majority of situations caused many problems for the Rover reactors.

One common problem in many of these reactors was transverse fuel element cracking, where a fuel element would split across its narrow axis, disrupting coolant flow through the interior channels and exposing the graphite matrix to the hot H2 (which would ferociously eat away the graphite, exposing both fission products and unburned fuel to the H2 stream, which carried them elsewhere – mostly out of the nozzle, though it turned out the uranium would congregate at the hottest points in the reactor, even against the H2 stream, which could have terrifying implications for accidental fission power hot spots). Sometimes, large sections of the fuel elements would be ejected out of the nozzle, spraying partially burned nuclear fuel into the air – sometimes as large chunks, but almost always with some of the fuel aerosolized. Today this would definitely be unacceptable, but at the time the US government was testing nuclear weapons literally next door to this facility, so it wasn’t considered a major concern.

If this sounds like there were major challenges and significant accidents at Jackass Flats, well, in the beginning of the program that was certainly true. These early problems were also cited in Congress’s decision not to continue funding the program (although, without a manned Mars mission, there was really no reason to use these expensive and difficult-to-build systems anyway). The thing to remember, though, is that these were EARLY tests, with materials that had been a concept in a materials engineer’s imagination only a few years (or sometimes months) beforehand, mechanical and thermal stresses that no one had ever dealt with, and a technology that seemed the only way to send humans to another planet. The Moon was hard enough; Mars was millions of miles further away.

Hot Fire Testing: What Did a Test Look Like?

Nuclear testing is far more complex than just hooking up the test reactor to coolant and instrumentation lines, turning the control drums and hydrogen valves, and watching the dials. Not only are there many challenges associated with just deciding what instrumentation is possible, and where it would be placed, but early in the program the installation of these instruments and the collection of data from them were often challenges as well.

Axial flow path
NRX A2 Flow Diagram, image via NASA (Finseth, 1991)

To get an idea of what a successful hot-fire test looked like, let’s look at a single reactor’s test series from later in the program: the NRX A2 technology demonstration test. This was the first NERVA reactor design to be tested at full power by Westinghouse ANL; the others, including KIWI and PHOEBUS, were not technology demonstration tests but proof-of-concept and design development tests leading up to NERVA, and were tested by LASL. The core itself consisted of 1626 hexagonal prismatic fuel elements. This reactor was significantly different from the XE-PRIME reactor that would be tested five years later. One difference was the hydrogen flow path: after going through the nozzle, the propellant entered a chamber beside the nozzle and above the axial reflector (the engine was tested nozzle-up; in flight configuration this would be below the reflector), then passed through the reflector to cool it before being diverted again by the shield, through the support plate, and into the propellant channels in the core, finally exiting through the nozzle.

Two power tests were conducted, on September 24 and October 15, 1964.

With two major goals and 22 lesser goals, the September 24 test packed a lot into six minutes of half-to-full power operation (the reactor was only at full power for 40 seconds). The major goals were: 1. provide significant information for verifying steady-state design analysis for powered operation, and 2. provide significant information to aid in assessing the reactor’s suitability for operation at the steady-state power and temperature levels required if it was to be a component in an experimental engine system. In addition to these major, but not very specific, goals, a number of more specific ones were laid out, including top-priority goals of evaluating environmental conditions on the structural integrity of the reactor and its components, core assembly performance, lateral support and seal performance, the core axial support system, the outer reflector assembly, the control drum system, and overall reactivity. The less urgent goals were more extensive, and included nozzle assembly performance, pressure vessel performance, shield design assessment, instrumentation analysis, propellant feed and control system analysis, nucleonic and advanced power control system analysis, radiological environment and radiation hazard evaluation, the thermal environment around the reactor, in-core and nozzle chamber temperature control system evaluation, reactivity and thermal transient analysis, and test car evaluation.

Test plot
Image via NASA (Finseth, 1991)

Several power holds were conducted during the test, at 51%, 84%, and 93-98%, all slightly above the powers the holds were planned at. This was due to compressibility of the hydrogen gas (leading to more moderation than planned), issues with the venturi flowmeters used to measure H2 flow rates, and issues with the in-core thermocouples used for instrumentation (a common problem in the program), and it provides a good example of the sorts of unanticipated challenges these tests are meant to uncover. The test length was limited by the availability of hydrogen to drive the turbopump, but despite being a short test, it was a sweet one: all of the objectives were met, and a vacuum-equivalent ideal specific impulse of 811 s was determined (low for an NTR, but still over twice as good as any chemical engine at the time).
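To put the 811 s figure in context, specific impulse converts to effective exhaust velocity via v_e = Isp · g0, so it can be compared directly against chemical engines. A quick sketch – the 300 s kerolox comparison value is a rough period-typical assumption, not a number from the test report:

```python
G0 = 9.80665  # m/s^2, standard gravity used in the definition of specific impulse

def exhaust_velocity_m_s(isp_s):
    # Effective exhaust velocity from specific impulse: v_e = Isp * g0
    return isp_s * G0

v_ntr = exhaust_velocity_m_s(811)   # NRX A2 vacuum-equivalent ideal Isp from the test
v_chem = exhaust_velocity_m_s(300)  # rough kerolox engine of the era (assumption)

print(f"NTR: {v_ntr:.0f} m/s, chemical: {v_chem:.0f} m/s, ratio {v_ntr / v_chem:.2f}")
```

That works out to roughly 7950 m/s of effective exhaust velocity, and since propellant mass needed for a given maneuver scales exponentially with the inverse of v_e (the rocket equation), even this “low for an NTR” number is a dramatic improvement.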

Low Power Test plot
Image via NASA (Finseth, 1991)

The October 15th test was a low-power, low-flow test meant to evaluate the reactor’s operation outside high-power steady state, focusing on reactor behavior at startup and cool-down. The relevant part of the test lasted about 20 minutes, at 21-53 MW of power and a flow rate of 2.27-5.9 kg/s of LH2. As with any system, the state the reactor was designed to operate in was easier to evaluate and model than startup and shutdown, two conditions every engine has to pass through but which are far outside its “ideal” conditions, and operating with liquid hydrogen only multiplied the questions. Only four specific objectives were set for this test: demonstrating stability at low LH2 flow (using dewar pressure as a gauge), demonstrating stability at constant power with varying H2 flow, demonstrating stability with fixed control drums but variable H2 flow used to change reactor power, and obtaining a reactivity feedback value associated with LH2 at the core entrance. Many of these tests hinge on the fact that the LH2 isn’t just a coolant but a major source of neutron moderation, so the flow rate (and associated changes in temperature and pressure) of the propellant has impacts extending well beyond the temperature of the exhaust. This test showed that there were no power or flow instabilities in the low-power, low-flow conditions that would be seen during reactor startup (when the H2 entering the core is at its densest, and therefore most moderating). The predicted behavior and the test results showed good correlation, especially considering that the instrumentation (like the reactor itself) really wasn’t designed for these conditions, with the majority of the transducers operating at the extreme low end of their range.
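A first-order energy balance shows why those power and flow figures move together: if essentially all reactor power goes into the propellant, the bulk temperature rise is ΔT ≈ P / (ṁ · cp). A back-of-envelope sketch – the constant cp is an assumption (hydrogen’s cp varies strongly with temperature), and this ignores the heat of vaporizing the liquid:

```python
CP_H2 = 14.3e3  # J/(kg*K), rough constant-pressure specific heat of H2 gas (assumption)

def bulk_temp_rise_k(power_w, mdot_kg_s):
    # DT = P / (mdot * cp): assumes all fission power is deposited in the flow
    return power_w / (mdot_kg_s * CP_H2)

# Endpoints of the October 15 low-power test: 21-53 MW at 2.27-5.9 kg/s
print(f"{bulk_temp_rise_k(21e6, 2.27):.0f} K")  # low end
print(f"{bulk_temp_rise_k(53e6, 5.9):.0f} K")   # high end
```

Both endpoints land near 630-650 K of temperature rise, suggesting power and flow were throttled together to hold a roughly constant, modest outlet temperature – consistent with a test aimed at startup and cool-down behavior rather than full-power performance.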

After the October test, the reactor was wheeled down a shunt track to cool down radiologically (allowing the short-lived fission products to decay, reducing the gamma flux coming off the reactor), and then disassembled in the NRDS hot cell. These post-mortem examinations were an incredibly important tool for evaluating a number of variables, including how much power was generated during the test (based on the distribution of fission products, which changes depending on a number of factors, but mainly the power produced and the neutron spectrum the reactor was operating in when they were produced), chemical reactivity issues, mechanical problems in the reactor itself, and several other factors. Unfortunately, disassembling even a simple system without accidentally breaking something is difficult, and this was far from a simple system. A recurring question became “did the reactor break that itself, or did we?” This was especially true of the fuel elements, which often broke due to inadequate lateral support along their length, but which also often broke at the joint to the cold end of the core (which usually involved high-temperature, reasonably neutronically stable adhesives).

This issue was illustrated in the A2 test, when multiple broken fuel elements were found that did not have erosion at the break. This is a strong indicator that they broke during disassembly, not during the test itself: hot H2 heavily erodes the carbon in the graphite matrix – and the carbide fuel particles – and such erosion is a very good indicator that a fuel rod broke during a power test. Broken fuel elements were a persistent problem through the entire Rover and NERVA programs (sometimes leading to ejection of the hot-end portion of the elements), and the fact that all of the fueled elements seem to have survived intact was a major victory for the fuel fabricators.

This doesn’t mean the fuel elements were without their problems. Each generation of reactors used different fuel elements, sometimes multiple types in a single core. In this case the propellant channels, fuel element ends, and the tips of the exterior of the elements were clad in NbC, but the full length of the outside of the elements was not, to save mass and avoid overly complicating the neutronic environment of the reactor. Unfortunately, this meant that the small amount of gas that slipped between the filler strips and pyro-tiles placed to prevent this problem could eat away at the middle of the outside of a fuel element (toward the hot end), something known as mid-band corrosion. This occurred mostly on the periphery of the core, and had a characteristic pattern of striations on the fuel elements. A change was made to fully clad all of the peripheral fuel elements with NbC, since the areas that had this clad were unaffected. Once again the core became more complex, and more difficult to model and build, but a particular problem was addressed thanks to empirical data gathered during the test. However, a number of unfueled, instrumented fuel elements in the core were found to have broken in such a way that handling during disassembly couldn’t conclusively be ruled out, so the integrity of the fuel elements remained in doubt.

The problems associated with these graphite composite fuel elements never really went away during Rover or NERVA: a number of fuel elements known to have broken during the test were found in the Pewee reactor, the last test of this sort of fuel element matrix (NF-1 used either CERMET – then called composite – or carbide fuel elements; no GC fuel elements were used). The follow-on A3 reactor exhibited a form of fuel erosion known as pin-hole erosion, which the NbC clad was unable to address, forcing the NERVA team to look at other alternatives. This was another area where the GC fuel elements were shown to be unsustainable for long-duration use past the specific mission parameters, and a large part of why the entire NERVA engine was discarded during staging, rather than just the propellant tanks as in modern designs. New clad materials and application techniques show a lot of promise, and GC can be used in a carefully designed LEU reactor, but this isn’t being explored in any depth in most cases (both the LANTR and NTER concepts still use GC fuel elements, with the NTER specifying them exclusively due to fuel swelling issues, but that seems to be the only time it’s actually required).

Worse Than Worst Case: KIWI-TNT

One question often asked by those unfamiliar with NTRs is “what happens if it blows up?” The short answer is that it can’t, for a number of reasons. There is only so much reactivity in a nuclear reactor, and only so fast that it can be utilized. The amount of reactivity is carefully managed through fuel loading in the fuel elements and strategically placed neutron poisons. Also, the control systems used for these reactors (in this case, control drums placed around the reactor in the radial reflector) can only be turned so fast. I recommend checking out the report on Safety Neutronics in Rover Reactors linked at the end of this post if this is something you’d like to look at more closely.

However, during the Rover testing at NRDS one reactor WAS blown up, after significant modification that would never be done to a flight reactor: the KIWI-TNT test (TNT is short for Transient Nuclear Test). The behavior of a nuclear reactor as it approaches a runaway reaction, or a failure of some sort, is studied for all types of reactors, usually in specially constructed test reactors, since the production design of every reactor is highly optimized to prevent this sort of failure from occurring. This was also true of the Rover reactors. However, knowing what a fast excursion would do to the reactor was an important question early in the program, so a test was designed to discover exactly how bad things could be, and to characterize what happened in a worse-than-worst-case scenario. It yielded valuable data on the possibility of a launch abort that dropped the reactor into the ocean (water being an excellent moderator, making accidental criticality more likely), on the launch vehicle exploding on the pad, and on the option of destroying the reactor in space after it had exhausted its propellant (something that ended up not being planned for in the final mission profiles).

B4A Cutaway
KIWI B4A reactor, which KIWI-TNT was based on, image via LANL

What was the KIWI-TNT reactor? The last of the KIWI series, its design was very similar to the KIWI-B4A reactor (the predecessor of the NERVA-1 series of reactors), originally designed as a 1000 MW reactor with a chamber exit exhaust temperature of 2000 C. However, a number of things prevented a fast excursion in that design: first, the shims used for the fuel elements were made of tantalum, a neutron poison, to limit excess reactivity; second, the control drums used stepping motors slow enough that a runaway reaction wasn’t possible; finally, this experiment would be done without coolant, which also acted as moderator, so much more reactivity was needed than the B4A design allowed. With the shims removed, excess reactivity added to the point that the reactor was just under $1 subcritical with the control drums fully inserted – with $6 of excess reactivity available relative to prompt critical – and the drum rotation rate increased by a factor of 89(!!), from 45 deg/s to 4000 deg/s, the stage was set for this rapid scheduled disassembly on January 12, 1965. This degree of modification shows how difficult it would be to have an accidental criticality accident in a standard NTR design.
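To put those reactivity figures in context: reactivity in “dollars” is normalized to the delayed neutron fraction of the fuel, so $1 is exactly the threshold of prompt criticality. A minimal sketch of the conversion, assuming a typical delayed neutron fraction of about 0.0065 for U-235 fuel (an assumed textbook value, not a figure from the KIWI reports):

```python
# Reactivity in "dollars" is reactivity divided by the delayed
# neutron fraction (beta). $1.00 = prompt critical: above that, the
# chain reaction can grow on prompt neutrons alone, on microsecond
# timescales, too fast for any control system to follow.
BETA_U235 = 0.0065  # assumed typical value for U-235 fuel

def dollars_to_reactivity(dollars, beta=BETA_U235):
    """Convert reactivity in dollars to absolute reactivity (delta-k/k)."""
    return dollars * beta

prompt_critical = dollars_to_reactivity(1.0)  # beta itself
kiwi_tnt_excess = dollars_to_reactivity(6.0)  # the $6 quoted above

print(f"prompt critical: {prompt_critical:.4f} delta-k/k")
print(f"$6 excess:       {kiwi_tnt_excess:.4f} delta-k/k")
```

A normal reactor transient stays well below $1; needing to strip the tantalum shims, run dry, and speed the drums up 89-fold just to reach this regime is the point the test engineers were making.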

Test Vehicle Schematic
KIWI-TNT Test Stand Schematic, image via LANL

The test had six specific goals:

1. Measure the reaction history and total fissions produced under a known reactivity, and compare them to theoretical predictions, in order to improve calculations for accident predictions;
2. Determine the distribution of fission energy between core heating, vaporization, and kinetic energy;
3. Determine the nature of the core breakup, including the degree of vaporization and particle sizes produced, to test a possible nuclear destruct system;
4. Measure the release of fission debris into the atmosphere under known conditions, to better calculate other possible accident scenarios;
5. Measure the radiation environment during and after the power transient; and
6. Evaluate launch site damage and clean-up techniques for a similar accident, should it occur (although the degree of modification required to the reactor core shows that this is a highly unlikely event; should an explosive accident occur on the pad, it would have been chemical in nature, with the reactor never going critical, so fission products would not be present in any meaningful quantities).

Eleven types of measurements were taken during the test, including: reactivity time history, fission rate time history, total fissions, core temperatures, core and reflector motion, external pressures, radiation effects, cloud formation and composition, fragmentation and particle studies, and geographic distribution of debris. An angled mirror above the reactor core (where the nozzle would be if propellant were being fed into the reactor) was used in conjunction with high-speed cameras at the North bunker to image the hot end of the core during the test, along with a number of thermocouples placed in the core.

KIWI-TNT via Pinterest
KIWI-TNT test, AEC image via SomethingAwful

As can be expected, this was a very short test, with a total of 3.1×10^20 fissions achieved after only 12.4 milliseconds. The result was a highly unusual explosion, consistent with neither a chemical nor a nuclear one. The core temperature exceeded 17,500 C in some locations, vaporizing approximately 5-15% of the core (the majority of the rest either burned in the air or was aerosolized into the cloud of effluent), and produced roughly 150 MW-sec (150 MJ) of kinetic energy – about the same as approximately 100 pounds of high explosive (although due to the nature of this explosion, caused by rapid overheating rather than chemical combustion, getting the same effect from chemical explosives would take considerably more HE). Material in the core was observed moving at 7300 m/sec before it came into contact with the pressure vessel, and the largest intact piece of the pressure vessel (a 0.9 sq m, 67 kg fragment) was flung 229 m from the test location. There were some issues with instrumentation in this test, namely with the pressure transducers used to measure the shock wave: all but two of these instruments (placed 100 ft away) recorded not the pressure wave but an electromagnetic signal at the time of peak power; those two recorded a 3-5 psi overpressure.
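As a sanity check on those energy figures, 150 MW-sec is 150 MJ, which can be compared against the conventional energy content of TNT (4.184 MJ/kg, the standard “ton of TNT” definition; real high explosives vary, which is part of why the report’s ~100 lb comparison is approximate):

```python
# Compare the ~150 MJ of kinetic energy released by KIWI-TNT to a
# high-explosive equivalent by raw energy content alone.
MJ_PER_KG_TNT = 4.184     # conventional "ton of TNT" definition
KG_PER_LB = 0.45359237

kinetic_energy_mj = 150.0
tnt_lb = kinetic_energy_mj / MJ_PER_KG_TNT / KG_PER_LB
print(f"{kinetic_energy_mj:.0f} MJ is roughly {tnt_lb:.0f} lb of TNT by energy content")
```

This comes out to roughly 79 lb – the same order as the ~100 lb HE figure, with the difference reflecting that a rapid-overheating burst couples its energy into blast less efficiently than a chemical detonation does.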

KIWI-TNT residue
KIWI-TNT remains, image via LANL

Radioactive Release during Rover Testing Prequel: Radiation is Complicated

Radiation is a major source of fear for many people, and the source of a huge amount of confusion in the general population. To be completely honest, when I dig into the nitty gritty of health physics (the study of radiation’s effects on living tissue), I spend a lot of time re-reading most of the documents, because it is easy to get confused by the terms that are used. To make matters worse, especially for the Rover documentation, everything is in the old, now-outdated measures of radioactivity. Sorry, SI users out there: all the AEC and NASA documentation uses Ci, rad, and rem, and converting all of it would be a major headache. If someone would like to volunteer to help me convert everything to common sense units, please contact me, I’d love the help! It’s worth remembering that the natural environment is radioactive, and the Sun emits a prodigious amount of radiation, only some of which is absorbed by the atmosphere. Indeed, there is evidence that the human body REQUIRES a certain amount of radiation to maintain health, based on a number of studies done in the Soviet Union using completely non-radioactive, specially prepared caves and diets.

Exactly how much is healthy is a matter of intense debate – and not much study – and three main competing theories have arisen. The first, the linear no-threshold model, is the law of the land. It sets a maximum amount of radiation that is allowable to a person over the course of a year, whether it comes in one incident (which is usually a bad thing) or evenly spaced throughout the whole year. Each rad (or gray, we’ll get to that below) of radiation increases a person’s chance of getting cancer by a certain percentage in a linear fashion, so effectively the LNT model (as it’s known) sets a maximum acceptable increase in the chance of a person getting cancer in a given timeframe (usually quarters and years). This doesn’t take into account the human body’s natural repair mechanisms, though, which can replace damaged cells (no matter how they’re damaged) – which leads most health physicists to see issues with the model, even as they work within it in their professions.

The second model is known as the linear-threshold model, which holds that low-level radiation (under the threshold of the body’s repair mechanisms) shouldn’t count toward the likelihood of getting cancer. After all, if you replace the Formica counter top in your kitchen with a granite one, the natural radioactivity in the granite is going to expose you to more radiation, but there’s no difference in the likelihood that you’re going to get cancer from the change. Ramsar, Iran (which has the highest natural background radiation of any inhabited place on Earth) doesn’t have higher cancer rates – in fact they’re slightly lower – so why not set the threshold where the normal human body’s repair mechanisms can control any damage, and THEN start using the linear model of increase in likelihood of cancer?

The third model, hormesis, takes this one step further. In a number of cases, such as Ramsar, and an apartment building in Taiwan which was built with steel contaminated with radioactive cobalt (causing the residents to be exposed to a MUCH higher than average chronic, or over time, dose of gamma radiation), people have not only been exposed to higher than typical doses of radiation, but had lower cancer rates when other known carcinogenic factors were accounted for. This is evidence that having an increased exposure to radiation may in fact stimulate the immune system and make a person more healthy, and reduce the chance of that person getting cancer! A number of places in the world actually use radioactive sources as places of healing, including radium springs in Japan, Europe, and the US, and the black monazite sands in Brazil. There has been very little research done in this area, since the standard model of radiation exposure says that this is effectively giving someone a much higher risk for cancer, though.
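The difference between the three models is easiest to see as dose-response curves for excess cancer risk. A purely illustrative sketch – the slope, threshold, and hormetic-benefit values below are made-up placeholders, not health physics data:

```python
# Three competing dose-response models for excess cancer risk.
# All constants are illustrative placeholders only.
RISK_PER_REM = 0.0005   # hypothetical linear risk slope
THRESHOLD_REM = 10.0    # hypothetical repair-mechanism threshold

def lnt(dose_rem):
    """Linear no-threshold: every increment of dose adds risk."""
    return dose_rem * RISK_PER_REM

def linear_threshold(dose_rem):
    """No added risk below the threshold; linear above it."""
    return max(0.0, dose_rem - THRESHOLD_REM) * RISK_PER_REM

def hormesis(dose_rem):
    """Below the threshold, modeled risk actually goes down; linear above."""
    if dose_rem > THRESHOLD_REM:
        return (dose_rem - THRESHOLD_REM) * RISK_PER_REM
    return -0.1 * RISK_PER_REM * dose_rem  # small modeled benefit

for dose in (5.0, 50.0):
    print(dose, lnt(dose), linear_threshold(dose), hormesis(dose))
```

At low doses the three models disagree completely; at high doses they converge, which is why the debate is entirely about the low-dose region that ground testing actually produces.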

I am not a health physicist. It has become something of a hobby for me in the last year, but this is a field that is far more complex than astronuclear engineering. As such, I’m not going to weigh in on the debate as to which of these three theories is right, and would appreciate it if the comments section on the blog didn’t become a health physics flame war. Talking to friends of mine that ARE health physicists (and whom I consult when this subject comes up), I tend to lean somewhere between the linear threshold and hormesis theories of radiation exposure, but as I noted before, LNT is the law of the land, and so that’s what this blog is going to mostly work within.

Radiation (in the context of nuclear power, especially) starts with the emission of either a particle or ray from a radioisotope – an unstable nucleus of an atom. This is measured with the curie (Ci), a measure of how much radioactivity IN GENERAL is released: 1 Ci = 3.7×10^10 emissions (whether alpha, beta, neutron, or gamma) per second. SI uses the becquerel (Bq), which is simple: one decay per second = 1 Bq, so 1 Ci = 3.7×10^10 Bq. Because the becquerel is so small, megabecquerels (MBq) are often used – unless you’re looking at highly sensitive laboratory experiments, even a dozen Bq is effectively nothing.

Each different type of radiation affects both materials and biological systems differently, though, so there’s another unit used to describe energy produced by radiation being deposited onto a material, the absorbed dose: this is the rad, and SI unit is the gray (Gy). The rad is defined as 100 ergs of energy deposited in one gram of material, and the gray is defined as 1 joule of radiation absorbed by one kilogram of matter. This means that 1 rad = 0.01 Gy. This is mostly seen for inert materials, such as reactor components, shielding materials, etc. If it’s being used for living tissue, that’s generally a VERY bad sign, since it’s pretty much only used that way in the case of a nuclear explosion or major reactor accident. It is used in the case of an acute – or sudden – dose of radiation, but not for longer term exposures.

This is because many things go into how bad a particular radiation dose is: a gamma beam that goes through your hand, for instance, is far less damaging than one that goes through your brain or your stomach. This is where the final measurement comes into play: NASA and AEC documentation uses the rem (roentgen equivalent man), which in SI units is the sievert (Sv). This is the dose equivalent, which normalizes the different radiation types’ effects on the various tissues of the body by applying a quality factor to each type of radiation for each part of the body that is exposed to it. If you’ve ever wondered what health physicists do, it’s all the hidden work that goes on when that quality factor is applied.
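The unit relationships above are all simple scalings, which a short sketch makes concrete (the quality factors shown are typical textbook values, not figures from the Rover documentation):

```python
# Conversions between the legacy units in the AEC/NASA reports and SI.
CI_TO_BQ = 3.7e10   # 1 curie = 3.7e10 decays per second
RAD_TO_GY = 0.01    # 1 rad = 100 erg/g = 0.01 J/kg = 0.01 gray
REM_TO_SV = 0.01    # 1 rem = 0.01 sievert

# Typical textbook quality factors: dose equivalent (rem) =
# absorbed dose (rad) x Q for the radiation type.
QUALITY_FACTOR = {"gamma": 1, "beta": 1, "neutron_fast": 10, "alpha": 20}

def rem_from_rad(rad, radiation_type):
    """Dose equivalent in rem from an absorbed dose in rad."""
    return rad * QUALITY_FACTOR[radiation_type]

print(f"5 rem/yr occupational limit = {5 * REM_TO_SV} Sv/yr")
print(f"1 rad of alpha = {rem_from_rad(1.0, 'alpha')} rem equivalent")
```

The real work hidden in that dictionary is enormous: the published quality factors encode decades of health physics research into a single multiplier per radiation type and tissue.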

The upshot of all of this is the way that radiation dose is assessed. A number of variables were assessed at the time (and still are today, with these limits as an effective starting point for ground testing, where release of radioactivity to the general public is a minuscule but necessary consideration). Exposure was broadly divided into three types: full-body (5 rem/yr for an occupational worker, 0.5 rem/yr for the public); skin, bone, and thyroid (30 rem/yr occupational, 3 rem/yr for the public); and other organs (15 rem/yr occupational, 1.5 rem/yr for the public). In 1971, the guidelines for the public were changed to 0.5 rem/yr full body and 1.5 rem/yr for the general population, but as has been noted (including in the NRDS Effluent Final Report) this was more an administrative convenience than a biomedical need.

1974 Occupational Radiological Release Standards, Image via EPA

Additional considerations were made for discrete fuel element particles ejected from the core – there was a less than one in ten thousand chance that a person would come into contact with one, with a number of factors considered in determining that probability. The biggest concern was that skin contact could result in a lesion, at an exposure above 750 rads (an energy deposition measure rather than an expressly medical one, because only one type of tissue is being assessed).

Finally, and perhaps most complex to address, is the aerosolized effluent from the exhaust plume, which could include both gaseous fission products (which were not captured by the clad materials used) and particles small enough to float through the atmosphere for a longer duration – and possibly be inhaled. The relevant limits of radiation exposure for off-site populations in these tests were 170 mrem/yr whole-body gamma dose, and a thyroid exposure dose of 500 mrem/yr. The highest full-body dose recorded in the program was 20 mrem, in 1966, and the highest thyroid dose was 72 mrem, from 1965.

The Health and Environmental Impact of Nuclear Propulsion Testing Development at Jackass Flats

So just how much radioactive material did these tests release? Considering the sparsely populated area, few people – if any – who weren’t directly associated with the program received any dose of radiation from aerosolized (inhalable, fine particulate) radioactive material. By the regulations of the day, no dose greater than 15% of the allowable AEC/FRC (Federal Radiation Council, an early federal health physics advisory board) dose for the general public was ever estimated or recorded. The actual release of fission products into the atmosphere (with the exception of cadmium-115) was never more than 10%, and often less than 1% (115Cd release was 50%). The vast majority of these fission products are very short lived, decaying in minutes or days, so there was not much – if any – chance for migration of fallout (fission products bound to atmospheric dust that then fell along the exhaust plume of the engine) off the test site. According to a 1995 study by the Department of Energy, the total radiation release from all Rover and Tory-II nuclear propulsion tests was approximately 843,000 curies. To put this in perspective, a nuclear explosive produces about 30,300,000 curies per kiloton (depending on the size and efficiency of the explosive), so the total release was equivalent to the fission products of a roughly 30 ton TNT-equivalent explosion.
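That comparison is a one-line calculation worth making explicit, using the two curie figures quoted above from the DOE study:

```python
# Total fission-product release from all Rover/Tory-II tests versus
# the per-kiloton release of a nuclear explosive, as quoted above.
total_release_ci = 843_000            # all Rover and Tory-II tests
release_ci_per_kiloton = 30_300_000   # per kiloton of nuclear yield

tnt_tons = total_release_ci / release_ci_per_kiloton * 1000
print(f"Equivalent to the fission products of a ~{tnt_tons:.0f} ton explosion")
```

The result is about 28 tons, consistent with the ~30 ton figure – a tiny fraction of even a single small atmospheric weapons test of the era.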

Test Release Summary NRDS
Summary of Radiological Release, image via DOE

This release came from either migration of the fission products through the metal clad and into the hydrogen coolant, or due to cladding or fuel element failure, which resulted in the hot hydrogen aggressively attacking the graphite fuel elements and carbide fuel particles.

The amount of fission product released is highly dependent on the temperature and power level the reactors were operated at, the duration of the test, how quickly the reactors were brought to full power, and a number of other factors. The actual sampling of the reactor effluent occurred three ways: sampling by aircraft fitted with special sensors for both radiation and particulate matter, the “Elephant Gun” effluent sampler placed in the exhaust stream of the engine, and post-mortem chemical analysis of the fuel elements to determine fuel burnup, migration, and fission product inventory. One thing to note is that effluent release was not nearly as well characterized for the KIWI tests as for the later Phoebus, NRX, Pewee, and Nuclear Furnace tests, so the data for those later tests is not only more accurate but far more complete as well.

cy67 offsite dose map
Offsite Dose Map, 1967 (a year with higher-than-average release, and the first to employ better sampling techniques) Image via EPA

Two sets of aircraft data were collected. The first (by LASL/WANL) was flown at fixed heights and transects in the six miles surrounding the effluent plume, collecting particulate effluent which was used (combined with known release rates of 115Cd and post-mortem analysis of the reactor) to determine the total fission product release at those altitudes and vectors; it was discontinued in 1967. The second (NERC) method used a fixed coordinate system to measure cloud size and density, utilizing a mass particulate sampler, charcoal bed, cryogenic sampler, external radiation sensor, and other equipment. Because these samples were taken more than ten miles from the reactor tests, it’s quite likely that much of the fission product inventory had either decayed or come down to the ground as fallout by the time the cloud reached the aircraft. This technique was used after 1967.

The next sampling method also came online in 1967: the Elephant Gun. This was a probe inserted directly into the hot hydrogen coming out of the nozzle, collecting several moles of exhaust at several points throughout the test, which were then stored in sampling tanks. Combined with hydrogen temperature and pressure data, acid leaching analysis of fission products, and gas sample data, this provided a more direct estimate of the fission product release, as well as a better view of the gaseous fission products released by the engine.

EMAD wikimedia
Engine Maintenance and Disassembly Building at NRDC under construction, image via Wikimedia Commons

Finally, after testing and cool-down, each engine was put through a rigorous post-mortem inspection. Here, the amount of reactivity lost compared to the amount of uranium present, power levels and test duration, and chemical and radiological analysis were used to determine which fission products were present (and in which ratios) compared to what SHOULD have been present. This technique enhanced understanding of reactor behavior, neutronic profile, and actual power achieved during the test as well as the radiological release in the exhaust stream.

Radioactive release from these engine tests varied widely, as can be seen in the table above; however, the total amount released by the “dirtiest” of the reactor tests, the second Phoebus 1B test, was only 240,000 Curies, and the majority of the tests released less than 2000 Curies. Another thing that varied widely was HOW the radiation was released. The immediate area (within a few meters) of the reactor would be exposed to radiation during operation, in the form of both neutron and gamma radiation. The exhaust plume would contain not only the hydrogen propellant (which wasn’t in the reactor long enough to capture additional neutrons and turn into deuterium, much less tritium, in any meaningful quantity), but also the gaseous fission products (most of which, such as 135Xe, the human body isn’t able to absorb) and – if fuel element erosion or breakage occurred – a certain quantity of particles that may either have become irradiated or contain burned or unburned fission fuel.

Isotope Release Distribution 25 mi arc
Image via EPA

These particles, and the cloud of effluent created by the propellant stream during the test, were the primary concern for both humans and the environment from these tests. The reason for this is that the radiation is able to spread much further this way (once emitted, and all other things being equal, radiation goes in a straight line), and most especially it can be absorbed by the body, through inhalation or ingestion, and some of these elements are not just radioactive, but chemically toxic as well. As an additional complication, while alpha and beta radiation are generally not a problem for the human body (your skin stops both particles easily), when they’re IN the human body it’s a whole different ballgame. This is especially true of the thyroid, which is more sensitive than most to radiation, and soaks up iodine (131I is a fairly active radioisotope) like nobody’s business. This is why, after a major nuclear accident (or a theoretical nuclear strike), iodine tablets, containing a radio-inert isotope, are distributed: once the thyroid is full, the excess radioactive iodine passes through the body since nothing else in the body can take it up and store it.

There are quite a few factors that go into how far this particulate will spread, including particle mass, temperature, velocity, altitude, wind (at various altitudes), moisture content of the air (particles could be absorbed into water droplets), plume height, and a host of other factors. The NRDS Effluent Program Final Report goes into great depth on the modeling used, and the data collection methods used to collect data to refine these estimates.

Another thing to consider in the context of Rover in particular is that open-air testing of nuclear weapons was taking place in the area immediately surrounding the Rover tests, and those tests released FAR more fallout (by many orders of magnitude) – Rover contributed only a very minor fraction of the radionuclides released in the area at the time.

The offsite radiation monitoring program, which included sampling of milk from cows to estimate thyroid exposure, collected data through 1972, and all exposures measured were well below the exposure limits set on the program.

Since we looked at the KIWI-TNT test earlier, let’s look at the environmental effects of this particular test. After all, a nuclear rocket blowing up has to be the most harmful test, right? Surprisingly, ten other tests released more radioactivity than KIWI-TNT. The discrete particles didn’t travel more than 600 feet from the explosion. The effluent cloud was recorded from 4000 feet to 50 miles downwind of the test site, and aircraft monitoring the cloud were able to track it until it went out over the Pacific Ocean (although by that point it was far less radioactive). By the time the cloud had moved 16,000 feet from the test site, the highest whole-body dose measured from the cloud was 1.27×10^-3 rad (at station 16-210), and the same station registered an inhalation thyroid dose of 4.55×10^-3 rads. This shows that even the worst credible accident possible with a NERVA-type reactor has only a negligible environmental and biological impact from either the radiation released or the explosion of the reactor itself, further attesting to the safety of this engine type.

KIWI-TNT Particle Map
Map of discrete particle distribution, image via LANL

If you’re curious about more in-depth information about the radiological and environmental effects of the KIWI-TNT tests, I’ve linked the (incredibly detailed) reports on the experiment at the end of this post.

KIWI-TNT Rad readings
Radiological distribution from particle monitors, image via LANL

The Results of the Rover Test Program

Throughout the Rover testing program, the fuel elements were the source of most of the non-H2-related issues. Other problems, such as instrumentation, were also encountered, but the fuel elements remained the main headache.

A lot of the problems came down to the mechanical and chemical properties of the graphite fuel matrix. Graphite is easily attacked by hot H2, leading to massive fuel element erosion, and a number of solutions were experimented with throughout the test series. With the exception of the KIWI-A reactor (which used unclad fuel plates, and was heavily affected by the propellant), each of the reactors featured fuel elements that were clad to a greater or lesser extent, using a variety of methods and materials. Niobium carbide (NbC) was usually the favored clad material, though other options, such as tungsten, were also investigated.

CVD coating
CVD Coating device, image courtesy LANL

Chemical vapor deposition was an early option, but unfortunately it was not feasible to consistently and securely coat the interior of the propellant tubes, and differential thermal expansion was a major challenge. As the fuel elements heated, they expanded, but at a different rate than the coating did. This led to cracking – and in some cases flaking off – of the clad material, exposing the graphite to the propellant and allowing it to be eroded away. Machined inserts were a more reliable clad form, but were more complex to install.

The exterior of the fuel elements originally wasn’t clad, but as time went on it became obvious that this would need to be addressed as well. Some propellant would leak between the prisms, leading to erosion of the outside of the fuel elements. This changed the fission geometry of the reactor, led to fission product and fuel release through erosion, and weakened the already somewhat fragile fuel elements. Usually, though, vapor deposition of NbC was sufficient to eliminate this problem.

Fortunately, these issues are exactly the sort of thing that CFEET and NTREES are able to test, and these systems are far more economical to operate than a hot-fired NTR is. It is likely that by the time a hot-fire test is being conducted, the fuel elements will be completely chemically and thermally characterized, so these issues shouldn’t arise.

The other issue with the fuel elements was mechanical failure, stemming from a number of problems. The pressure across the system changes dramatically, which leads to differential stress along the length of the fuel elements. The original, minimally supported fuel elements would often undergo transverse cracking, leading to blockage of propellant channels and erosion. In a number of cases, after a fuel element broke this way, its hot end would be ejected from the core.

Tie Tube
Rover tie tube image courtesy NASA

This led to the development of a structure still found in many NTR designs today: the tie tube. This is a hexagonal prism, the same size as the fuel elements, which supports the adjacent fuel elements along their length. In addition to providing support, the tie tubes are also a major source of neutron moderation, because they’re cooled by hydrogen propellant from the regeneratively cooled nozzle: the hydrogen makes two passes through each tie tube, one in each direction, before being injected into the reactor’s cold end to be fed through the fuel elements.

The tie tubes didn’t eliminate all of the mechanical issues the fuel elements faced. Indeed, even in the NF-1 test, extensive fuel element failure was observed, although none of the fuel elements were ejected from the core. However, new types of fuel elements were being tested (uranium carbide-zirconium carbide-carbon composite, and (U,Zr)C carbide), which offered better mechanical properties as well as higher thermal tolerances.

Current NTR designs still usually incorporate tie tubes, especially because the low-enriched uranium that is the main notable difference in NASA’s latest design requires a much more moderated neutron spectrum than a HEU reactor does. However, the ability to support the fuel element mechanically along its entire length (rather than just at the cold end, as was common in NERVA designs) does also increase the mechanical stability of the reactor, and helps maintain the integrity of the fuel elements.

The KIWI-B and Phoebus reactors were successful enough designs to use as starting points for the NERVA engines. NERVA (Nuclear Engine for Rocket Vehicle Application) took place in two parts: NERVA-1, or NERVA-NRX, developed the KIWI-B4D reactor into a more flight-prototypic design, including balance-of-plant optimization, enhanced documentation of the workings of the reactor, and coolant flow studies. The second group of engines, NERVA-2, was based on the Phoebus 2 type of reactor from Rover, and was finally developed into the NERVA-XE, which was meant to be the engine that would power the manned mission to Mars. The NERVA-XE PRIME test was of the engine in flight configuration: the turbopumps, coolant tanks, instrumentation, and even the reactor’s orientation (nozzle down, instead of up) were all the way they would have been configured during the mission.

The first ground experimental nuclear rocket engine (XE) assembly
NERVA XE-PRIME pre-fire installation and verification, image via Westinghouse Engineer (1974)

The XE-PRIME test series lasted for nine months, from December 1968 to September 1969, and involved 24 startups and shutdowns of the reactor. The 1140 MW reactor, operating at a 2272 K exhaust temperature, produced 247 kN of thrust at 710 seconds of specific impulse. The series included new startup techniques from cold-start conditions and verification of reactor control systems – including using different subsystems to manipulate the power and operating temperature of the reactor – and demonstrated that the NERVA program had successfully produced a flight-ready nuclear thermal rocket.
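
Those performance figures are self-consistent: thrust, specific impulse, and propellant mass flow are tied together by F = ṁ·g0·Isp. A quick sanity-check sketch (the function and the check are mine, not from any NERVA document):

```python
G0 = 9.80665  # standard gravity, m/s^2

def mass_flow(thrust_n: float, isp_s: float) -> float:
    """Propellant mass flow implied by a given thrust and specific impulse."""
    return thrust_n / (G0 * isp_s)

# XE-PRIME figures quoted above: 247 kN of thrust at 710 s of Isp
mdot = mass_flow(247e3, 710)
print(f"{mdot:.1f} kg/s of hydrogen")  # roughly 35.5 kg/s
```

About 35 kg/s of liquid hydrogen per second gives a feel for why propellant logistics dominated the Nevada test site's planning.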

Ending an Era: Post-Flight Design Testing

Toward the end of the Rover program, the engine design itself had been largely finalized, with the NERVA XE-Prime test demonstrating an engine tested in flight configuration (with all the relevant support hardware in place, and the nozzle pointing down). However, some challenges remained for the fuel elements themselves. In order to have a more cost-effective testing program for fuel elements, two new reactors were constructed.

NERVAPewee2, AEC 1971
PEWEE Test Stand, image courtesy LANL

The first, Pewee, was a smaller (25 klbf class, about the same size as NASA's new NTR) nuclear rocket engine whose core could be replaced for multiple rounds of testing, though it was only used once before the cancellation of the program – but not before achieving the highest specific impulse of any of the Rover engines. This reactor was never tested outside of a breadboard configuration, because it was never meant to be used in flight. Instead, it was a cost-saving measure for NASA and the AEC: due to its smaller size, it was much cheaper to build, and due to its lower propellant flow rate, it was also much easier to test. This meant that experimental fuel elements that had undergone thermal and irradiation testing could be tested in a fission-powered, full-flow environment at lower cost.

Transverse view, Finseth
NF-1 Transverse view, image courtesy NASA

The second was the Nuclear Furnace, which mimicked the neutronic environment and propellant flow rates of the larger NTRs, but was not configured as an engine. This reactor was the first to incorporate an effluent scrubber, capturing the majority of the non-gaseous fission products and significantly reducing the radiological release into the environment – something we're going to look at more in depth in the next post, since it remains a theoretically possible method of hot-fire testing a modern NTR. It also achieved the highest operating temperatures of any of the reactors tested in Nevada, meaning that the thermal stresses on the fuel elements were higher than would be experienced in a full-power burn of an actual NTR. Like Pewee, it was designed to be repeatedly reused in order to maximize the financial benefit of the reactor's construction, but was only used once before the cancellation of the program. The fuel elements were tested in separate cans, and none of them were the graphite composite fuel form: instead, CERMET (then known as composite) and carbide fuel elements, which had been under development but not extensively used in Rover or NERVA reactors, were tested.

A-type reactor
NRX A reactor, which PAX was based on, image courtesy NASA

Westinghouse Astronuclear Laboratory also proposed a design based on the NERVA XE, called the PAX reactor, which was designed to have its core replaced, but it never left the drawing board. Again, the focus had shifted toward lower-cost, more easily maintained experimental NTR test stands, although this one was much closer to flight configuration. It would have been very useful: not only would the fuel have been subjected to a very similar radiological and chemical environment, but the mechanical linkages, hydrogen flow paths, and the resulting harmonic and gas-dynamic issues could have been evaluated in a near-prototypic environment. However, this reactor was never tested.


As we've seen, hot-fire testing was something that the engineers involved in the Rover and NERVA programs were exceptionally concerned about. Yes, there were radiological releases into the environment well beyond what would be considered acceptable today, but compared to the releases from the open-air nuclear weapons tests that were occurring in the immediate vicinity, they were minuscule.

Today, though, these releases would be unacceptable. So, in the next blog post we’re going to look at the options, and restrictions, for a modern testing facility for NTR hot firing, including a look at the proposals over the years and NASA’s current plan for NTR testing. This will include the exhaust filtration system on the Nuclear Furnace, a more complex (but also more effective) filtering system proposed for the SNTP pebblebed reactor (TimberWind), a geological filtration concept called SAFE, and a full exhaust capture and combustion system that could be installed at NASA’s current rocket test facility at Stennis Space Center.

This post is already started, and I hope to have it out in the next few weeks. I look forward to hearing all your feedback, and if there are any more resources on this subject that I’ve missed, please share them in the comments below!



Los Alamos Pajarito Site

Los Alamos Critical Assemblies Facility, LA-8762-MS, by R. E. Malenfant,

Thirty-Five Years at Pajarito Canyon Site, LA-7121-H, Rev., by Hugh Paxton

A History of Critical Experiments at Pajarito Site, LA-9685-H, by R.E. Malenfant, 1983

Environmental Impacts and Radiological Release Reports

NRDS Nuclear Rocket Effluent Program, 1959-1970; NERC-LV-539-6, by Bernhardt et al, 1974

Offsite Monitoring Report for NRX-A2; 1965

Radiation Measurements of the Effluent from the Kiwi-TNT Experiment; LA-3395-MS, by Henderson et al, 1966

Environmental Effects of the KIWI-TNT Effluent: A Review and Evaluation; LA-3449, by R.V.Fultyn, 1968

Technological Development and Non-Nuclear Testing

A Review of Fuel Element Development for Nuclear Rocket Engines; LA-5931, by J.M. Taub

Hot Fire Testing

Rover Nuclear Rocket Engine Program: Overview of Rover Engine Tests; N92-15117, by J.L. Finseth, 1992

Nuclear Furnace 1 Test Report; LA-5189-MS, W.L. Kirk, 1973

KIWI-TNT Testing

KIWI-Transient Nuclear Test; LA-3325-MS, 1965

Kiwi-TNT Explosion; LA-3551, by Roy Reider, 1965

An Analysis of the KIWI-TNT Experiment with MARS Code; Journal of Nuclear Science and Technology, Hirakawa et al. 1968

Miscellaneous Resources

Safety Neutronics for Rover Reactors; LA-3558-MS, Los Alamos Scientific Laboratory, 1965

The Behavior of Fission Products During Nuclear Rocket Reactor Tests; LA-UR-90-3544, by Bokor et al, 1996

Development and Testing Low Enriched Uranium Nuclear Thermal Systems

LEU NTP Part Two: CERMET Fuel – NASA’s Path to Nuclear Thermal Propulsion

Hello, and welcome back to Beyond NERVA, for our second installment of our blog series on NASA’s new nuclear thermal propulsion (NTP) system.

In the last post, we looked briefly at nuclear thermal rockets (NTRs) in general, and NERVA’s XE-Prime engine, the only time a flight configuration NTR has ever been tested in the US. We also looked at the implications for modern manufacturing and methods that would be used in any new NTR, since we are hardly going to be falling back on 60’s era technology for things like turbopumps and cryogenic storage of fuels. Finally, we looked briefly at a new material for the fuel elements, a composite of ceramic fissile fuel and metal matrix called CERMET.

This post is a deep dive into CERMET itself, including its design and manufacture, a little bit of its history during the Rover program, its rebirth in the 1990s, the test stands currently used for non-nuclear testing, and some current ideas to continue to improve its capabilities. This is going to be more of a materials and fuel elements deep dive; the next post will look at the engines themselves, the hot-fire test options and plans will be covered in the one after that, and our last post in the series will look at other low-enriched uranium designs that don't use CERMET fuels, but instead use carbides.

Fuel elements are where the fission itself occurs, and as such they tend to be perhaps the most important part of any nuclear reactor. In the case of nuclear thermal propulsion systems (NTR, or NTP to NASA), they come in three broad categories: graphite composite (GC, such as in NERVA, which we looked at in the last post), CERMET, and carbides (something we'll look at down the road in this series). Each has its advantages and disadvantages, but all have the same goal: to heat the propellant gas passing through the reactor as much as possible, in order to produce the maximum thrust and efficiency that the engine can provide.
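
Why does temperature matter so much? In the ideal-rocket approximation, exhaust velocity (and therefore specific impulse) scales with the square root of chamber temperature divided by propellant molar mass. A rough sketch of that scaling (the temperatures below are illustrative values I've picked, not figures from this article):

```python
import math

def relative_isp(t_hot_k: float, t_ref_k: float) -> float:
    """Ideal-rocket scaling: exhaust velocity (and so Isp) goes as sqrt(T/M),
    so at a fixed propellant molar mass the ratio is sqrt(T_hot / T_ref)."""
    return math.sqrt(t_hot_k / t_ref_k)

# Illustrative chamber temperatures: a graphite-composite core near 2500 K
# vs. a hotter CERMET-class core near 2850 K
gain = relative_isp(2850, 2500)
print(f"~{(gain - 1) * 100:.0f}% higher specific impulse")  # ~7%
```

A few hundred kelvin of extra core temperature buys several percent of Isp, which is why fuel element temperature limits drive so much of the materials work described below.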

Fuel Element Temperature Map, Borowski
Graph of operating temperature vs. lifetime of various NTR fuel element material options, image courtesy NASA

CERMET is a higher-temperature option than the GC elements used during the majority of Rover (although CERMET FEs were tested as part of Rover), and allows for much more control in fabrication thanks to the unique structure of the material itself. In fact, it’s able to provide the possibility of using low enriched uranium for NTR propulsion, which makes it incredibly attractive to NASA.

CERMET composites are used in many different areas of manufacturing and industry, for tooling, bearings, and other materials where hardness, heat resistance, and thermal conductivity are all needed, and the combinations used vary wildly. Different CERMET combinations have different properties, and as such are an incredibly flexible material choice.

Even in the broader nuclear field, there are other CERMET fuel elements being developed, to make more accident-tolerant fuels for terrestrial reactors. These are obviously very different in design (U3O8-Al CERMET fuels are one of the IAEA’s accident tolerant fuels of interest, and are also outside the scope of this blog post), but keep in mind that every time you hear about CERMET nuclear fuel, it’s not necessarily flying humans to Mars, it may be coming soon to a nuclear power plant near you!

However, the focus of Beyond NERVA is space, so let’s turn back to the skies. How is it that CERMET will make NASA’s new nuclear thermal rocket work? To understand that, we first need to understand what CERMET is, and why NASA decided to pick it as a fuel type of interest 20 years ago.

CERMET Fuel Elements

CERMET micrograph, NASA
W-UO2 CERMET micrograph, image courtesy NASA

CERMET is an acronym for CERamic METal composite. CERMET fuels were among the first fuel forms tested as part of Project Rover, primarily by Argonne National Laboratory (ANL) and General Electric in the 1960s, and were picked up again in the 1990s as an alternative to carbides for advanced nuclear thermal fuel elements. This fuel form offers increased temperature resistance, better thermal conductivity, and greater strength compared to the graphite fuel elements that ended up being selected for NERVA, but unfortunately it also required much more development. Other options for fuel elements included advanced graphite composite and carbide fuel elements of various types, which are introduced on the NTR-S page and will be examined in their own posts.

CERMET fuel elements are a way to gain the thermal resistance and chemical advantages of oxide fuels and the thermal conduction properties of metal fuels in a single fuel form. In order to have both, uranium oxide (UO2) fuel pellets measured in millimeters or micrometers are suspended in a metal matrix, usually tungsten. To protect the oxide from any potential chemical change, these microparticles of UO2 are usually coated before the fuel element itself is made. Then the metal matrix is made, usually using a hot isostatic press (HIP), where the powdered material is placed in a mold, then pressed and cooked, although other techniques are possible as well.

There is another characteristic that makes CERMET fuel attractive in the West: it offers the possibility of using low-enriched uranium instead of highly-enriched uranium, by carefully selecting the metals that the matrix is made out of to maximize the amount of moderation available from the fuel elements themselves. Low enriched uranium (LEU) offers one major advantage: a lowering of the security burden required to handle the nuclear material needed to test reactor components. The vast majority of NTR systems that have been proposed over the years have been fueled with highly enriched uranium (HEU), enriched to roughly 90% or more 235U. This isn't quite bomb-grade, but it's close, and it's relatively easy to complete the final few steps of isotopic enrichment needed to construct a weapon. (There are many other safeguards in place that make the loss of HEU unlikely, not the least of which is that the reactor won't even be on the planet anymore, but nuclear non-proliferation is a serious concern that must be addressed in depth – just not here! For a good, in-depth look into non-proliferation I recommend (among many others) the Nuclear Diner blog, most especially the posts on the Iran nuclear treaty, from a technical-policy point of view.) Due to the increased costs that come with HEU (security, permitting, site re-licensing, etc.), the vast majority of institutions are unable to assist NASA and the DOE with their testing of NTR components. This is a problem, because much of the experimental engineering testing work is often done by Master's and Doctoral students working on their dissertations. Without access to the materials used in construction, this isn't an option, leaving the testing to NASA and DOE personnel (who are far more expensive and busy), and slowing down the whole development process.
By using LEU, these institutions (that are mostly already certified to work with LEU, and many even have research reactors) are able to more fully participate in the development of the next generation of NTRs.

Often, the assumption is that HEU is superior to LEU, because the majority of LEU is fertile, not fissile: 238U can absorb a neutron (becoming 239U), then go through two beta decays (to 239Np, then 239Pu), ending up as fissile plutonium-239, which can then undergo fission. Why not bring along only the stuff that can split already? Breeding is a far messier process in real life than on paper, after all, and the neutronic environment is far more predictable with (mostly) only one isotope of uranium present. However, breeding occurs in all fuel elements, to the point that by the time fuel is removed from a reactor in the current fleet, a large share of the energy isn't coming from fissioning 235U, but from 239Pu. The amount of breeding that occurs is called the breeding ratio; a ratio of 1:1 means that exactly as much fissile material is being produced as is being burned. Generally speaking, this ratio is higher than 1, in order to account for the buildup of fission byproducts (or poisons) produced over the course of the fuel element's life. The breeding ratio for this type of reactor is likely not much above 1 (most aren't, unless the reactor is meant either to fuel other plants or to produce weapons, neither of which is a goal with a rocket engine); one nuclear engineer of my acquaintance suggested a back-of-envelope guess of about 1.01 for the breeding ratio, but this will largely depend on the details of the fuel element that is finally selected, the reactor core geometry, and the amount of propellant being used (among other factors). With this being the case, and assuming careful management of the reactor's neutron budget (how many neutrons are bouncing off, being absorbed, causing fission, or being generated, compared to what's needed to ensure stable operation), the majority of the "useless" 238U can in fact be burned.
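
The two beta decays just mentioned happen on very different timescales: 239U has a half-life of about 23.45 minutes, while 239Np takes about 2.36 days. A quick exponential-decay sketch (the half-lives are standard nuclear-data values; the helper function is mine):

```python
import math

def fraction_decayed(half_life_s: float, t_s: float) -> float:
    """Fraction of an initial population that has decayed after time t."""
    return 1.0 - math.exp(-math.log(2) * t_s / half_life_s)

U239_HALF_LIFE = 23.45 * 60        # seconds (~23.45 minutes)
NP239_HALF_LIFE = 2.356 * 86400    # seconds (~2.36 days)

# After one day, essentially every captured-neutron 239U atom is gone...
print(f"U-239 decayed after 1 day:  {fraction_decayed(U239_HALF_LIFE, 86400):.4f}")
# ...but only about a quarter of the resulting 239Np has become 239Pu
print(f"Np-239 decayed after 1 day: {fraction_decayed(NP239_HALF_LIFE, 86400):.2f}")
```

So freshly bred material spends days in transit through 239Np before it contributes as fissile 239Pu – one reason breeding is messier in practice than on paper.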
A paper by Vishal Patel et al (sorry about the paywall, I try and avoid them but they’re very common in nuclear engineering) suggests that the overall system could actually mass less for the same power output, which would mean that it would be better from an engineering perspective to use LEU rather than HEU. These results were for one particular reactor geometry, but the PI did mention in private correspondence that this isn’t necessarily a difficult thing to achieve, as long as the designers don’t remain tied to one particular fuel element geometry, and so could apply to many different reactor architectures.

CERMET Composition and Manufacture

CERMET fuels have many different components to them, and as such many different physical and chemical properties that have to be accounted for. However, the primary concern from a materials point of view tends to be the thermal limitations of the materials used in the FE.

CERMET Material Melting and Vaporization Points, Stewart 2015
Image from “A Historical Review of CERMET Fuel Development and the Engine Performance Implications,” Stewart, 2015

As with any composite material, there are quite a few steps to making CERMET fuels. This will be a shallow but reasonably thorough look at the manufacturing challenges on each step of the way.

In order to construct a CERMET fuel element, first the fissile fuel granules need to be made. This is not too different from the process used to make terrestrial fuel elements, which are uranium oxide (UO2) based; the main difference is the size of the resulting fuel: instead of a pellet the size of the last joint of your finger, it's a roughly spherical granule ~100 um in diameter.

Angular UO2 Microparticles
Angular UO2 microparticles, image courtesy NASA

There are relatively few suppliers for this form of UO2, and the most common one (BWXT) does not offer it at a price that NASA can work with. Y12 has plenty available in the right size, but the granules are angular and irregular in shape. This is a problem because the release of neutrons and fission products is difficult enough to calculate when the beads are spherical (due to their distribution in the overall matrix); if they aren't spherical enough, that will affect the direction and spectrum of the resulting neutron flux, and therefore the behavior of the reactor as a whole. Fortunately, NASA has the capability to spheroidize these too-angular granules (thanks to its experience and equipment for plasma spray coatings in the Plasma Spheroidization System in the Thermal Spray Laboratory), and both Oak Ridge NL and the Center for Space Nuclear Research are working on gelation processes that allow these small particles to become spherical.

W-ZrO2 CVD Coated Particles, image courtesy NASA

After the sphere is made, it (usually) has to be coated with a cladding material, for three reasons: first, the hot hydrogen propellant attacks the oxide very aggressively; second, the metal matrix surrounding the fissile fuel is unable to completely trap the fission products in the fuel element, leading to irradiated exhaust; and finally, the UO2 in the fuel particles tends to break down, so the clad keeps the now-crystallized U in basically the same place it was before thermal damage to the fuel element. The first coatings experimented with were pyrolytic graphite, the same as is used in TRISO fuel. However, this still has a reasonably low melting temperature (for something in an NTR), so tungsten was experimented with next. Attempts to solidify W powder around the UO2 particles led to inconsistent or relatively poor quality results, and so other options have been explored, including chemical vapor deposition (CVD, for a long time the preferred method) and plasma deposition. In the last couple of years, a new technique using fine grains of tungsten rather than a CVD spray has been shown to offer better results: while the individual coats are less uniform, it offers advantages in fission fragment capture and overall coating integrity that make it superior to the CVD coatings.
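
To get a feel for how much of each particle that clad layer represents, a little sphere-shell geometry helps. The 5 um coating thickness below is an assumed illustrative value, not a figure from the programs discussed here:

```python
def clad_volume_fraction(particle_diameter_um: float, coat_um: float) -> float:
    """Volume fraction of a coated particle occupied by the clad shell."""
    r = particle_diameter_um / 2       # bare fuel kernel radius
    R = r + coat_um                    # radius including the clad shell
    return 1.0 - (r / R) ** 3

# For the ~100 um particles described above, with an assumed 5 um tungsten coat:
frac = clad_volume_fraction(100, 5)
print(f"clad is ~{frac * 100:.0f}% of the coated particle's volume")  # ~25%
```

Even a thin coat is a large fraction of such a tiny sphere, which is part of why clad material and deposition quality matter so much to the fuel element's neutronics and mass budget.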

HIP process
Image courtesy NASA


After the fuel particles themselves are manufactured, it's time to make the fuel element itself. This is done by pouring the powdered tungsten and fuel particles (at carefully selected ratios, and in this case into particular locations) into a mold (usually niobium), which is placed on a vibrating table to settle the particles, then compressed at high temperature for an extended period of time. This process is known as hot isostatic press (HIP) sintering, and it continues to be used in many fuel element designs. However, the size of the granules, the amount of pressure and temperature applied and for how long, and many other factors play into HIP sintering; especially in a field where crystalline phase can be a major factor in whether your reactor will work or not (in fuel, moderator, and even some structural components), having a consistent, high-quality matrix around the fuel particles is essential. Again, there are processes that have been proposed in recent years that offer benefits such as lower temperature and shorter time, but we'll go into those below.
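
The "carefully selected ratios" above are, at their simplest, a mass-and-volume bookkeeping exercise. Here's a minimal rule-of-mixtures sketch using handbook densities for UO2 and tungsten; the 60 vol% fuel loading is an assumed illustrative value, not a specification from these programs:

```python
RHO_UO2 = 10.97  # g/cm^3, theoretical density of uranium dioxide
RHO_W = 19.25    # g/cm^3, density of tungsten

def cermet_density(uo2_vol_frac: float) -> float:
    """Rule-of-mixtures density of a fully dense W-UO2 CERMET."""
    return uo2_vol_frac * RHO_UO2 + (1 - uo2_vol_frac) * RHO_W

def uo2_mass_fraction(uo2_vol_frac: float) -> float:
    """Mass fraction of UO2 implied by a given volume loading."""
    return uo2_vol_frac * RHO_UO2 / cermet_density(uo2_vol_frac)

# Assumed illustrative loading of 60 vol% UO2:
print(f"density: {cermet_density(0.60):.1f} g/cm^3")        # ~14.3
print(f"UO2 mass fraction: {uo2_mass_fraction(0.60):.2f}")  # ~0.46
```

Real compacts fall short of these fully dense numbers, since residual porosity after sintering is one of the quality measures the HIP parameters are tuned against.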

61 channel near-full size HIP can sealed
Modern HIP can, NASA

Initially, the result of these processes was a squat cylinder with coolant channels, which would then be milled and assembled into a fuel element. As time went on, and both techniques and materials understanding improved, the fuel elements began to be cast in longer and longer single units.

Finally, the external clad is applied to the fuel elements. Both chemical vapor deposition and milled inserts have been used over the years for the propellant channel clad; bubbling in the early CVD tests and mismatches in the thermal expansion coefficients of the different materials (the clad and the fuel element it's bonded to would swell at different rates, leading to a number of materials problems) led to milled inserts being used from an early stage. These inserts (usually tungsten or niobium) are then welded to end plates and external clad sheets, also usually niobium.

The Beginnings of CERMET Fuels

Originally developed by Argonne National Laboratory (ANL) and General Electric (GE) in the 1960s, what were then called composite fuel elements (CFEs) are a fuel type that regained attention for NTRs in the early to mid 1990s due to the increased thermal conductivity that the metal matrix offers to the FE as a whole. GE developed what would ultimately become the GE 710 fuel element from 1962 to 1968, using HEU. After over 300,000 hours of in-environment testing, this program collected a significant amount of data.

ANL 200 MW Reactor
Image courtesy DOE

According to Gordon Kruger (of General Electric at the time of his presentation to the joint NASA/DOD/DOE Nuclear Thermal Propulsion workshop in 1990, the "seed" source, as it were, for this section), there were two different ANL designs: one was a 100 klbf, 2,000 MWt NTR with a thrust-to-weight ratio of 5:1, offering 850 s of specific impulse; the second was a smaller, 200 MWt design. These were (as with most CERMET designs) tungsten-uranium oxide (W-UO2) fuel elements. The fuel particles themselves were chemically stabilized by doping them with gadolinium, and the clad for the fuel particles was W doped with rhenium. The fuel element developed in this process is now called the ANL-2000 CERMET FE, and it remains a popular one for NTR designers. It has a very high number of propellant channels (331 per FE) to allow for greater cooling of the fuel.

The GE design, on the other hand, was meant to be more versatile. The base design was for a high temperature gas cooled reactor (HTGR), with helium as a working fluid, designed for a 10,000 hour life. Those same fuel elements, in a different core geometry, could instead burn much faster, and much hotter, for use as an NTR (with cryogenic H2 propellant), but the harder use (and harsher chemical environment) correspondingly shortened the life of the fuel elements. This is the GE 710 fuel element, which in a slightly modified form – known as the GE 711 – is still a strong contender for NTR designs, and was the front-runner for the LEU NTP that NASA is working on. With 64 propellant channels of larger diameter, this FE trades easier manufacture (due to the larger, less numerous boreholes) against the potential for greater thermal gradients in the FE due to the greater distance between the channels.

Both of these designs have many things in common, such as the hexagonal prism shape, and information sharing between the groups was a regular thing. As such, the techniques used for the different stages of manufacture were common as well.

Non-Spherical Microparticles
UO2 particles

Both designs used spheres of UO2. These can still be manufactured by two places in the US (Oak Ridge National Labs and BWXT), but there are challenges to getting the pieces spherical when they're that small, so the price is correspondingly high. This indicates at least something of a learning curve at this stage of manufacture, both for ensuring homogeneity of the fissile fuel load (if it's poorly mixed, hot spots and dead zones can form, leading to very bad things – or nothing at all), and for size and shape consistency. Because of the extreme temperatures, both during manufacture and operation, the gadolinium (Gd) doping experimented with at ANL became essential to stabilize the UO2 and prevent the dissociation of the oxygen and uranium. Raising the dissociation temperature was a consistent effort throughout this process.

ZrO2 MSFC

The clad on the fuel pellets is a challenge in another way, as well: applying an even coat of tungsten across the tiny spherical oxide pellets is a major technical challenge, and one that was addressed at the time with chemical vapor deposition (CVD), where the tungsten is liquefied and then sprayed (under a certain set of conditions) over the oxide spheres. Because the droplets are small, they have a high relative surface area, so they are able to coat a material that wouldn't normally be able to resist the temperature of the molten substance (in this case tungsten, doped with rhenium to lower the melting point). This can lead to a very even coating, if the two substances are chemically compatible, and if the conditions are just right for the droplets to spread out enough, and evenly enough, across the surface. This is a very large challenge, and one that took a lot of time and energy from the teams designing the fuel elements. A competing process, pressure bonded cladding, was also examined for both the fuel particles and the clad for the fuel element itself.

Can component fit check pic

Once the fuel particles were fabricated, the metal matrix of the fuel element could be formed around them. Hot isostatic press (HIP) sintering was the preferred method of manufacture for both groups. This created complications in stabilizing the UO2 in the fuel (which can't withstand the temperatures of molten tungsten – hence sintering rather than casting), and hence the gadolinium doping of the fuel pellets. The trade-off was always how to increase the density of the tungsten (and therefore the energy density and strength of the FE as a whole) while decreasing the degradation of the UO2, either by lowering the temperature or shortening the time the material is cooked, or by chemically stabilizing the oxide itself. Once sintering was complete, the mold was set aside to cool, then the CERMET plug was removed.

SPS Sample

The result of this exercise was known as a compact. This was then machined to drill propellant holes and do final shaping, and its fissile fuel load was assessed. Each compact was labeled and set aside until a sufficient collection of machined compacts had been completed. These were then stacked according to fissile fuel load, and the tungsten fuel element end plates, external clad, and propellant clad tubes were welded into place to form the overall hexagonal prism shape. These were then assembled in a number of different ways for either an HTGR or an NTR.

The most mature designs to come out of this development series were the GE 710 fuel element, with 19 working fluid channels, and the ANL-2000 design with 331 coolant channels. In many ways, these form a baseline for CERMET fuels just as the NERVA XE-Prime serves as a baseline for NTRs as a whole, and many CERMET NTR designs use them as their baseline fuel form, for good reason. The GE fuel element was tested for HTGR use in the 1970s, and showed promising results. However, gas cooled reactors were never popular in the US, and production ended.

The Rebirth of the Idea, and the Building of Test Stands

After the cancellation of the GE 710 project, CERMET FE design went quiet for a couple of decades, until the idea was revived in the 1990s after Project Timberwind (and the rest of the Strategic Defense Initiative) was cancelled during defense cuts under President George H.W. Bush.

In the early 1990s, focus shifted back from the pebblebed and toward other options. While it was acknowledged that graphite composite was better developed, and carbides offered higher-temperature operation, CERMET fuels were seen as a good compromise. At some point after the 1991 Nuclear Thermal Propulsion conference, focus shifted to CERMET fuels as being compatible enough with the legacy NERVA systems and data collected, while also being easier to work with than carbide fuels. A good overview of the decision process to proceed with CERMET fuels can be seen in Mark Stewart’s presentation for NETS 2015, “A Historical Review of CERMET Fuel Development and Engine Performance Implications” (paper and slides).

Many of the best-known designs for NTRs in the last 25-30 years have been the work of either Michael Houts at NASA's Marshall Space Flight Center or Stan Borowski of NASA's Glenn Research Center. Looking at the systemic implications of not only the rocket engineering side of things, but also the mission analysis, development cost, and testing options available to develop NTRs, they firmly established a new baseline nuclear rocket, seen in popular artwork for decades. Many of these designs were based around a smaller, Rover-legacy, advanced graphite composite fueled reactor known as the Small Nuclear Rocket Engine. The idea was to design an engine just big enough to be useful, and if that wasn't powerful enough, just add another engine! We'll look at this design more in depth at a later point, but it is important in that it was a mid-1990s design that could use CERMET fuel, possibly the first modern one, and is in many ways the baseline for what a modern NTR can do.

In order to gather the information needed to develop the nuclear fuel elements, a number of test stands have been built by NASA in recent years to thermally and environmentally test experimental fuel elements, using depleted uranium (DU) and induction heating. The two most commonly used are the Nuclear Thermal Rocket Element Environmental Simulator (NTREES) and the CERMET Fuel Element Environment (CFEET) test stand. Since hot-fire tests were no longer an option, and the experimental fuel elements still needed to be exposed to the thermal and environmental conditions of an operating NTR, these were seen as the best way to spend what little money had been allocated to nuclear spaceflight over a number of years.


The Nuclear Thermal Rocket Element Environmental Simulator was first proposed by William Emrich of NASA's Marshall Space Flight Center in 2008, and was designed to simulate everything but the radiation environment that an NTR fuel element would experience. This was the next best thing possible short of restarting nuclear hot-fire tests (which neither the regulations nor the budget would allow): many of the other questions that needed to be answered in order to build a new NTR were being addressed in other programs – for example, cryogenic hydrogen was a major challenge in Rover, but research had continued through chemical propulsion programs. The questions that remained mostly had to do with either core geometry or the fuel element itself, and most of those questions were chemical. By substituting other materials with similar properties (thermal behavior, etc.), such as ZrO2, for UO2 in initial tests, and then moving on to the more difficult-to-handle depleted uranium (DU) for more promising test runs (as we saw in the KRUSTY post, DU carries a far stricter burden as far as safety procedures and regulation), testing could continue – and be focused on the last details that needed to be worked out chemically and thermally.

Houts NTREES Facility 2013

When the test stand was being designed, flexibility was one of the main foci of the design decisions. After all, new equipment for nuclear thermal testing is incredibly rare, and funding for it is virtually impossible to come by, so a piece of test equipment can’t be specialized to just one design, left to collect dust on the shelf after that project is canceled and a new one comes along with requirements that make the old equipment obsolete.

NTREES consists of a pressure vessel, an induction heating arrangement for the test article, a data acquisition unit, and an exhaust treatment system. Hydrogen is introduced at the needed pressure and flow rate into the pressure vessel, where it encounters the test article. Measurements are taken through view ports in the side of the pressure vessel, and the hot hydrogen is then cooled by adding a large amount of nitrogen. This gas mixture is passed through a mass spectrometer, and then further cooled and collected. The mass spectrometer is designed to detect a wide range of atomic masses, so that uranium-bearing compounds can be detected to measure fissile fuel erosion; together with the pressure, temperature, and flow sensors, it makes up the inputs for the data acquisition system.

Chamber installation
Pressure Chamber during upgrade, image courtesy NASA MSFC

The bulk of the test stand is the pressure vessel, which is water cooled, ASME code stamped, and has a maximum operating pressure of 6.9 megapascals (MPa). Because of the need for flexibility, NTREES can handle test articles up to 2.5 meters long and 0.3 m in diameter. A number of sapphire view ports along each side of the pressure vessel are used for instrumentation and observation. Along the bottom are ports for the induction heater used to bring the test article up to temperature (one of these can also be modified for vacuum system use). The induction heater is a 1.2 MW unit, upgraded in 2014, although the full power of the upgrade couldn’t be used immediately: funding was needed to upgrade the N2 cooling system to handle the power increase.

After the now-hot H2 leaves the test article, it enters a gas mixer, which adds cold nitrogen to cool the H2 rapidly and to dilute it with a more inert gas to reduce explosive hazards. The mixer sleeve is also water-cooled, which draws even more heat out of the gas. The lessons learned in earlier programs about handling gaseous and liquid hydrogen have clearly been applied, and multiple safety systems and design choices have gone into handling this potentially dangerous and reactive gas safely. Another example of this is at the hot-end interface with the test article: the nitrogen outside the H2 feed is kept at a higher pressure, so that N2 bleeds inward and prevents any H2 leakage at a seal that would otherwise be very prone to failure due to the high temperatures involved.
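As a sketch of what the mixer accomplishes, here's a simple cp-weighted energy balance. The flow rates are made up for illustration and the specific heats are treated as constant (in reality hydrogen's cp climbs with temperature), so this is not a published NTREES operating point:

```python
# Back-of-the-envelope energy balance for the NTREES gas mixer.
# All numbers are illustrative assumptions, not published NTREES figures:
# constant specific heats and example flow rates are used for simplicity.

CP_H2 = 14.3e3   # J/(kg*K), hydrogen cp near room temperature (rises at high T)
CP_N2 = 1.04e3   # J/(kg*K), nitrogen cp

def mixed_temperature(m_h2, T_h2, m_n2, T_n2):
    """Adiabatic mixing temperature from a cp-weighted energy balance."""
    return (m_h2 * CP_H2 * T_h2 + m_n2 * CP_N2 * T_n2) / (m_h2 * CP_H2 + m_n2 * CP_N2)

# Example: 10 g/s of hydrogen leaving a test article at 2500 K,
# diluted with 1 kg/s of nitrogen at 300 K.
T_mix = mixed_temperature(0.01, 2500.0, 1.0, 300.0)
print(f"Mixed stream temperature: {T_mix:.0f} K")
```

Even with hydrogen's much higher specific heat, a large enough nitrogen flow quenches the stream to a few hundred kelvin almost instantly, which is what makes the downstream water-cooled sleeve and filters practical.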

The mixer is also the first stage of the effluent cleanup system, designed to ensure that no potentially harmful chemical releases occur when the exhaust is released into the atmosphere. The second stage of the cleanup system is a water cooled sleeve that further chills the gas mixture (this system was upgraded in 2014 as well, to allow the system to carry away all the heat generated – and therefore be able to run longer-duration tests at higher temperatures). Finally, a filter and back-pressure system is used to clean the now-cool gas before it is exhausted through a smokestack on the outside of the facility.

After dilution, the gas stream passes in front of a far more flexible mass spectrometer than usual. Most spectrometers only examine a relatively small band of the periodic table, because they only need to measure particular elements. In this case, the elements that could be in the exhaust stream are spread fairly widely across the periodic table, so a more versatile spectrometer was needed to accurately assess the effluent stream.

The data acquisition system consists of the mass spectrometer, pressure sensors, gas temperature sensors, flow sensors, thermocouples for general temperature measurements, H2 detectors in the chamber and the room, and pyrometers to measure the temperature of the test article itself, and the associated electronics to collect the information from these sensors.

The design of the facility was safety-oriented from the beginning, with every precaution taken to handle the GH2 safely. If you’re interested, the systems are covered in more detail on the NTREES page.

When put together, this facility allows for chemical and thermal testing of NTR fuel elements for extended periods of time in an environment that is missing only one component to mimic the environment of an NTR core: radiation. This means that fuel elements can be easily tested for manufacturing technique verification, clad material choice, erosion rates of fuel element materials, and other questions that are primarily chemical or mechanical rather than nuclear in origin.


There is one other difference between this test stand and the environment that a nuclear fuel element will actually see, and that’s the source and distribution of the heat. In NTREES, the induction heating coil is the source of the heat, so power deposition starts at the outside of the fuel element and works inward, following the skin-depth behavior of induction heating. While the coil can be customized to a certain extent to manage the thermal load for different test articles, the spiral pattern will still be there, and the heat will be generated in the fuel element following the rules of inductive heating, not nuclear heating.


In a nuclear fuel element, considerable effort is taken to ensure that there is an even distribution of heat across the fuel element (taking into account all factors), because having a “hot spot” in your fuel element (a higher-than-desired density of fissile material) can do bad things to your reactor. Because of this, the power density is carefully assessed during manufacture and assembly. In the fuel element, temperature tends to peak around the edge of the fuel element, but is otherwise consistently distributed throughout. This difference can be significant, especially for clad/matrix interfaces, where local hot spots can exacerbate thermal expansion differences and clad failure.

The radiation environment in a nuclear reactor will cause additional swelling; neutron damage, fission product buildup, and other effects will need to be accounted for as well. These differences can be modeled, either through extrapolation from old data sets or from materials analysis in various radiation environments and beamlines at facilities around the world. While verification and validation tests in a reactor environment similar to an NTR core will be needed for whatever fuel elements are selected, this testing allows many of the hurdles to be addressed before that very expensive step is taken.


Front photo with labels, Bradley
CFEET front view, NASA MSFC

The CERMET Fuel Element Environmental Test (CFEET) stand was originally proposed in 2012 by David Bradley at NASA’s Marshall Space Flight Center as a lower-cost alternative to NTREES. One of the consistent problems in engineering is that making something more flexible increases its complexity. This increases the cost to both build and maintain the test stand, which results in a higher cost per test. Also, the larger the volume of the test stand, the more supplies are needed (in the case of NTREES, GH2 and GN2, plus water for the cooling system), which also increases cost.

CFEET is a low-cost, small-scale test stand for NTR fuel elements. It also exposes a test article to the temperatures and hydrogen environment that it would experience in the core of an NTR, but again the radiation effects aren’t accounted for, since this is purely an inductively heated test stand. Rather than the extensive piping, effluent cleanup, and exhaust systems that NTREES uses, CFEET uses a simple vacuum chamber with a single RF coil for induction heating to test thermal properties and general reactions with hydrogen (the hydrogen is pumped through the fuel element during testing, but I can’t find any information about the flow rate of the gas).

CFEET Dimensions, Bradley

This means that the majority of CFEET fits on a (large) desktop. The vacuum chamber is only 16.9” tall and 10” in diameter, and it’s the largest component of the system. Rated to 10^-6 Torr, the chamber has a vacuum-rated RF feed-through port on one side, and opposite that a sapphire port for pyrometer readings. Additional ports connect the turbopumps and other equipment to the chamber.

The induction heating equipment is rated to 15 kW, with an output frequency of 20-60 kHz. While its output is significantly lower than that of NTREES, CFEET is still able to heat test articles to temperatures over 2400 K. An insulating sleeve (with a hole formed in it to allow pyrometer readings) of various materials is used to minimize heat loss through radiation.
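To see why the insulating sleeve earns its keep, here's a rough Stefan-Boltzmann estimate of how much power a bare sample radiates at CFEET temperatures. The sample dimensions and emissivity below are hypothetical values chosen just for illustration, not CFEET specifications:

```python
import math

# Rough Stefan-Boltzmann estimate of radiative heat loss from a small
# test article at CFEET temperatures. Sample size and emissivity are
# illustrative assumptions, not CFEET specifications.

SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W/(m^2*K^4)
emissivity = 0.4                 # assumed effective emissivity of hot tungsten
diameter, length = 0.01, 0.10    # hypothetical 1 cm x 10 cm cylindrical sample

area = math.pi * diameter * length          # lateral surface area, m^2
T = 2400.0                                  # target temperature, K
P_rad = emissivity * SIGMA * area * T**4    # radiated power, W
print(f"Radiated power at {T:.0f} K: {P_rad/1e3:.1f} kW")
```

Even a small unshielded sample radiates on the order of a couple of kilowatts at 2400 K, a sizeable bite out of a 15 kW heater, so recovering some of that loss with an insulating sleeve directly extends the temperatures CFEET can reach.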

While CFEET is not able to simulate gas flow the way NTREES can, it is able to assess thermal, chemical, and mechanical properties of materials at temperature in a pure-hydrogen atmosphere. Because the system is far simpler, and consumes far fewer supplies to operate, it is far cheaper to use as a test bed.

More info on CFEET is available on the CFEET page!

What Have They Taught Us?

FE Post-Test W HfN
CERMET FE post-CFEET test, image via NASA

Both NTREES and CFEET have been used to help assess various manufacturing techniques for fuel elements, and to evaluate clad materials and thermal expansion issues. NTREES is also able to assess erosion rates (both in mass and in chemical composition). While these aren’t the sexy tests, they have informed decisions about clad materials, manufacturing methods, and the inherent tradeoffs in different designs without the major expense of designing, building, testing, and then hot-fire testing a nuclear reactor.

Work has continued on investigating different microstructures within the FE, using depleted UO2 (dUO2) for chemical and thermal analysis. These tests have explored many different options as far as fine structure of the fuel forms available, and continue to inform CERMET fuel element design today.

Development Challenges for LEU NTP, and a New Direction

A major change occurred in 2012, however: the White House decided that highly enriched uranium (HEU) would no longer be used for civilian purposes in the US, in order to reduce the risk of nuclear weapons proliferation, and that low enriched uranium (LEU) would be used for all civilian purposes, including medical and industrial isotope production. This decision has resulted in thousands, if not tens of thousands, of pages of response, from dry, indifferent technical papers to proponents and opponents of the move screaming and raging in every direction. Because of this decision, NASA’s nuclear programs were forced to look at LEU systems, not the HEU ones that they’d always used. While there are a number of ways to make an NTR with LEU instead of HEU, the two main options are CERMET and carbide fuel elements. Because CERMET was already under development, and there were ways to use LEU in CERMET fuel, this was the path NASA’s management chose. However, LEU carbide designs (most notably SULEU, the Superior Utilization of Low Enriched Uranium carbide-based NTR) are also an option, and one that offers higher-temperature operation as well; but since CERMET fuels are more developed within NASA’s design paradigm, they remain the primary focus of NASA’s development.

One of the greatest fears in any development program is the problems that simply can’t be assessed within the budget, the timeframe, or both, of a program. Every program has them, and many engineering fish tales have been made out of solving them. When they haven’t been solved, though, they are the things that often define a program’s schedule… and its cancellation date.

For the LEU NTP program, the main challenge is in the fuel element matrix, and in particular the isotopic purity of the tungsten (W) needed for the metal matrix of the fuel. For an HEU reactor, the isotopic makeup of the tungsten was less of a concern, because the higher fuel load gave the reactor a more flexible neutron budget. With LEU, the neutron budget becomes tighter, and the more management of the neutron spectrum you can do within the fuel element, the fewer neutrons are lost to the structural components of the reactor. Isotopic enrichment of reactor components other than fuel elements is relatively common, and so this wasn’t initially seen as a major challenge.

Most of the analysis up to this point on LEU NTP has focused on this line of development. Tungsten-184 has a small enough neutron capture cross section that it can reflect a neutron many times within the fuel element itself, increasing the likelihood of a capture by the higher-cross-section fuel nuclei. In fact, a recent paper by Vishal Patel of the Center for Space Nuclear Research in Idaho Falls, ID (who has kindly answered many questions, often sent at odd hours of the night, while I was researching this post) demonstrated some surprising characteristics that are possible with LEU CERMET fuel, including an overall reduction in system mass! He went on Facebook to discuss the finding in the first day or two after the paper came out, and the overall conclusion was interesting:

 So the reason all this ends up working is that you are constrained by thermal design concerns (need enough surface area for heat transfer) rather than neutronic reasons (needing enough volume to go critical). This is typical for reactors of this size and above. At much lower thrusts the neutronics eventually dominates and HEU looks better, but no rocket person cares for those lower levels of thrust for this type of system. The idea of this study was to show the systems are comparable, choose whichever one you want (but the obvious first thought is proliferation and economics, so choose the one that fits your constraints). 

Unfortunately, tungsten enrichment is a major challenge, and one that we aren’t going to be able to discuss in detail, because 184W is useful in another nuclear technology: explosives. W is a great neutron reflector, and so is used in fission explosives to increase the number of neutrons entering the core during the initial neutron pulse at the initiation of the nuclear detonation. According to NASA, the LEU fuel elements, as designed, require 90% enriched 184W. It was expected that a 1 mg sample at 50% purity would be available in October of 2016, but a mix of accidents (an inadvertent chemical release is mentioned in the Mid-Year Game Changing Development Status Report for 2017) and technical challenges (which are classified) has pushed this requirement to the forefront of the mind of everyone involved in the NTP program.
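To see roughly why 184W is worth all this trouble, we can compare the effective thermal-neutron capture cross section of natural tungsten against a 90%-enriched 184W mix. The cross sections and abundances below are approximate thermal values from standard nuclear data tables, and the isotopic split of the non-184 remainder is my own assumption:

```python
# Why enriched 184W matters: compare the effective thermal-neutron capture
# cross section of natural tungsten against a 90% 184W-enriched mix.
# Cross sections (barns) and abundances are approximate thermal values;
# the makeup of the non-184 remainder is an assumption.

capture_barns = {"W182": 20.3, "W183": 10.1, "W184": 1.7, "W186": 37.9}
natural = {"W182": 0.265, "W183": 0.143, "W184": 0.306, "W186": 0.284}

def effective_sigma(fractions):
    """Abundance-weighted capture cross section, in barns."""
    return sum(f * capture_barns[iso] for iso, f in fractions.items())

# 90% 184W, remainder split in natural proportions among the other isotopes
rest = {k: v for k, v in natural.items() if k != "W184"}
scale = 0.10 / sum(rest.values())
enriched = {"W184": 0.90, **{k: v * scale for k, v in rest.items()}}

sigma_nat = effective_sigma(natural)
sigma_enr = effective_sigma(enriched)
print(f"Natural W: ~{sigma_nat:.1f} b")
print(f"90% 184W:  ~{sigma_enr:.1f} b ({sigma_nat/sigma_enr:.1f}x fewer captures)")
```

Even with rough numbers, the enriched matrix captures several times fewer thermal neutrons than natural tungsten, which is exactly the breathing room an LEU core's tight neutron budget needs.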

Alternatives exist, however. BWXT, already a major supplier of experimental fuel elements, has suggested a different core design, in which graded molybdenum (Mo) and tungsten can be used instead of (90%) pure 184W. This design is still very new, and because of that (and since it’s being developed by a private company rather than a public institution) there’s not much information available. New contracts were signed between NASA and BWXT in 2017 to fund the development of their fuel element design, and hopefully more information will become available as time goes on. According to one person knowledgeable about the program, the Nuclear and Emerging Technologies for Space 2018 conference (to be held in Las Vegas in February) may bring more information. I don’t have the background to determine whether the manufacturing techniques described above will be compatible with this particular fuel element design, or the reasons why they would or wouldn’t be. With the year coming to a close, it would be surprising if we heard anything before NETS.

Another change that has been floating around since about 2011 is a new process for manufacturing the metal matrix of the fuel element: spark plasma sintering (SPS). This seems to have been most thoroughly explored at Idaho National Laboratory and the Center for Space Nuclear Research in Idaho Falls, ID. Instead of hot isostatic pressing (HIP), where heat and pressure alone consolidate the metal matrix, the individual grains are welded together using electric arcing. This allows a lower sintering temperature to be used, which means less decomposition of the UO2 in the fuel particles.

This also allows a new type of clad to be used. Rather than the CVD clad, which has proven difficult to apply, a binder is used to apply tungsten microparticles. This is one of the newest techniques being explored for fuel particle coating, and taking advantage of it requires SPS, because the HIP temperatures are too high. For more info on these developments I recommend this paper by Zhong et al from INL and this presentation by Barnes.

How This Changes the Core

BWXT Core, image via BWXT

Any time a fuel element is changed, either in composition or enrichment, it can lead to significant changes in the core of the reactor. The biggest change in NASA’s NTP system is that tie tubes have been eliminated from the core. As discussed in the last post, the tie tubes performed many different functions: not just structural support for the fuel elements (which suffered persistent failures due to vibrations in the core), but also neutron moderation and powering the turbopumps. Some LEU NTR core designs do retain tie tubes, although these are often placed around the periphery of the core rather than spread throughout, as was originally planned for the NERVA core. This changes the power distribution in the core and makes some reactor geometry design changes necessary, but those are incredibly specific to the fuel elements used and the results of extensive modeling of neutronic behavior and reactor physics.

Because the fuel elements are able to withstand higher temperatures, the entire reactor will run at elevated temperatures compared to the XE-Prime engine. This gives an increase in specific impulse over the graphite composite core type, although how much of one will largely depend on the particulars of the fuel elements and reactor power, and therefore core geometry, of the design that is finally tested.

More to Come!

Keep checking back for our next installment, which will look at the various reactor cores and engines themselves, for both the LEU NTP system and the Nuclear Cryogenic Propulsion Stage. We’ll also look at test stands and the limitations of hot-fire ground testing, and how those will influence the decisions made for the new engines. Finally, we’ll wrap up with a look at the advanced carbide designs that are being considered (although not too closely on NASA’s part… yet!)

Sources and Additional Reading

A Summary of Historical Solid Core Nuclear Thermal Propulsion Fuels, Benensky 2013

  • If you only read one reference on this list, make it this one!

CERMET Fueled Reactors, Cowan et al 1987

A CERMET Fueled Reactor for Nuclear Propulsion, Kruger 1991

Hot Hydrogen Testing of Tungsten-Uranium Dioxide (W-UO2) CERMET Fuel Materials for NTP, Hickman et al 2014

Affordable Development and Optimization of CERMET Fuels for NTP Ground Testing, Hickman et al 2014

Design Evolution of HIP Cans for NTP CERMET Fuel Fabrication, Mireles 2014

Spark Plasma Sintering of Fuel CERMETs for Nuclear Reactor Applications, Zhong et al 2011

Low Enriched Nuclear Thermal Propulsion Systems, Houts et al 2017

NTP CERMET Fuel Development Status, Barnes 2017

2017 Game Changing Development program Mid-year Review Slides

Channel update:

My apologies for the delay in posting; the holidays have a way of slowing down writing. Hopefully I will be able to post more regularly soon. Research for the next post (on NASA’s plans for hot-fire test capability at Stennis Space Center, and the limitations that may place on testing) is underway, as well as research to prepare for the results that will hopefully be announced at NETS 2018. Sadly, I will not be able to attend, but I look forward to all the papers that will be presented on these fascinating engines. I hope to publish on the latest in these new designs shortly after the conference ends. After that, a final post in the series, on carbide fuel element LEU NTRs, will wrap up this blog series.

At that point, the focus will shift back to trying to get the YT channel going. I haven’t touched Blender in a while, but I don’t think it will be difficult to do what I need to do; I just need to sit down and learn. The scripts are largely written in draft form; I just need to go back over them for a final edit, then start recording the audio. The search still goes on for video clips to use, especially for Project Rover. Any links to clips that I would be able to use would be greatly appreciated!


Copyright 2018 Beyond NERVA. Contact for reprint permission.


KRUSTY: First of a New Breed of Reactors, Kilopower Part II

UPDATE: The full-power fission test is complete! If you’re looking for a description of the reactor, this is the post to read. If you want to know how the test went (spoiler: it went perfectly), check out part 3 here.

Hello, and welcome back to the Beyond NERVA blog, and the second installment in our series on NASA’s current plans for in-space nuclear reactors. Last time, we looked at the experiments leading up to the development of NASA and the Department of Energy’s newest reactor. Today, we’re looking at the reactor that will be tested later this year (2017), and the reactors that will follow that test. We have two more installments after this, on larger power systems that NASA has planned and done non-nuclear testing on, but couldn’t continue due to testing and regulatory limitations: the Fission Surface Power program and Project Prometheus.

As we saw in the last post, in-space nuclear reactors have flown before, mainly launched by the USSR, and their development in the West has stalled in terms of testing since the 1970s. However, a recent (2012) test at the National Nuclear Security Site by scientists and engineers from the Department of Energy (DOE) and NASA, the Demonstration Using Flattop Fissions (DUFF) test, has breathed new life into the program by demonstrating new heat transport and power conversion techniques with a nuclear reactor for the first time.

Now, the results of this experiment are being used to finalize the design and move forward with a new reactor, the Kilowatt Reactor Utilizing Stirling TechnologY, or KRUSTY. This is an incredibly simple small nuclear reactor being developed by Los Alamos National Laboratory (LANL) for the DOE, and Glenn Research Center (GRC) and Marshall Space Flight Center (MSFC) for NASA.

Since we’ve seen what’s been planned and tested in the past, let’s look at the next big step for in-space nuclear reactors!

KRUSTY: The Little Reactor that Could

KRUSTY cutaway, image courtesy LLNL

As we saw in the last post, there are many hurdles to getting a new nuclear reactor developed and funded. One of the biggest is lack of interest (and therefore funding) from the DOE and NASA. Between the size limitations of current test stands, the expense of new stands, and the regulatory and safety limitations on nuclear testing, development of new nuclear reactors has always been under major constraints. Then engineers at Los Alamos realized that there was a hole in the range of operational and planned in-space power systems, between the Advanced Stirling Radioisotope Generator (1 kWe) and the Fission Surface Power reactor (40 kWe). Because of the new reactor’s small size, it could be tested in current facilities, and there are plenty of missions that fit into that power level. Larger units can easily be built based on the data gathered from the test of the 1 kWe design.

What’s so special about this reactor? In summary, it’s a small reactor that uses heat pipes to transfer heat out of the reactor core instead of a pumped liquid, with that heat running a Stirling power conversion unit. We discussed the theory behind this in the previous post, and we’ll look at the application in more detail in a little while, but this is the first reactor to use this common, but not well-known, technology for both heat transport and power conversion.

Let’s look at the basics of the flight reactor concept before we get into the work that’s already been done, and the work that will be done by the end of 2017:

The Core

LANL Kilopower screencap core highlight

Kilopower’s 1 kWe nuclear reactor (the one to be directly tested as KRUSTY) uses metal fuel cast as a single cylinder 11 cm in diameter, with a 4 cm central hole for the test stand and cutouts along the periphery to accommodate the heat pipes. The fuel is a uranium-molybdenum alloy, 92% uranium by weight (enriched to 95% 235U) and 8% molybdenum. Y-12 has extensive experience with this particular fuel form, so it’s well understood both from a reactor physics perspective and from a fabrication and manufacturing perspective.

The geometry of the reactor means that there’s a strong negative temperature reactivity coefficient, meaning that this reactor is largely self-regulating and the control rod is only needed for startup, shutdown, or major power level changes. Concepts for increasing the power level of this reactor have heat pipe channels inside the U-Mo fuel as well, but we’ll look at that more later.
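To get a feel for why a strong negative temperature coefficient makes the reactor self-regulating, here is a toy lumped-parameter feedback loop in Python. Every number in it (the coefficient, the heat capacity, the artificially long generation time, the cooling constant) is an illustrative assumption chosen for numerical clarity, not a Kilopower design value, and delayed neutrons are ignored:

```python
# Toy lumped model of negative temperature reactivity feedback.
# All parameters are illustrative, chosen only to show the self-regulating
# behavior; they are NOT Kilopower design values, and delayed neutrons
# are ignored entirely.

alpha  = 1e-4     # negative temperature coefficient, (delta-k/k) per K
Lam    = 0.1      # artificially long neutron generation time, s (for stability)
C      = 1000.0   # lumped core heat capacity, J/K
h      = 2.0      # heat removal coefficient to the heat pipes, W/K
T_sink = 300.0    # heat pipe sink temperature, K

P, T = 1000.0, 800.0      # start at equilibrium: P = h * (T - T_sink)
rho_insert = 1e-3         # small step reactivity insertion at t = 0

dt = 0.05
for _ in range(int(10000 / dt)):            # simulate 10,000 s
    rho = rho_insert - alpha * (T - 800.0)  # feedback fights the insertion
    P += (rho / Lam) * P * dt               # prompt point kinetics (toy)
    T += (P - h * (T - T_sink)) / C * dt    # lumped thermal balance

print(f"Final temperature: {T:.1f} K, final power: {P:.0f} W")
```

The insertion doesn't run away: the core warms by about rho_insert/alpha (10 K here, settling near 810 K and 1020 W), the feedback cancels the inserted reactivity, and power settles at whatever the heat pipes are removing, with no control rod motion required.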

There are minor changes to the geometry of the core for the test, but mainly they are to accommodate the reflector being raised around the core rather than a control rod being used to start the fission reaction.

The Reflector

LANL Kilopower screencap Core Reflector highlight

The reactor has two axial neutron reflectors and one radial reflector, all made of beryllium, totaling 70.5 kg. The radial reflector is a frustum, or truncated cone, with an overall diameter of 27 cm at its widest point, and a cutout running axially down its center to accommodate the core and heat pipes.

This is the part of the reactor that underwent the biggest change from flight configuration to testing configuration. In short, the reflector is separated from the rest of the reactor, and will be lifted around the core to initiate the fission reaction. More on this, and its implications for the test and the flight article, later.

The Heat Pipes

LANL Kilopower screencap Heat Pipes

This is one of the new and exciting things about this reactor. Most previous reactors have relied on cooling loops driven by pumps, either mechanical or electromagnetic (the latter working on electrically conductive fluids such as liquid metals). Often the working fluid has been sodium, which has been extensively tested for the Liquid Metal Fast Breeder Reactor (more recently, the Integral Fast Reactor, or IFR), as well as for military and civilian power projects around the world. Here on Earth, combustibility concerns due to sodium’s violent reaction with water severely limit its use, but this isn’t nearly as much of a problem in space, where there’s no water or atmosphere to cause problems.

Sodium is still the working fluid for this reactor, but the way it’s moved has changed. The heat pipe doesn’t require any moving parts to function, instead relying on evaporation, condensation, and wicking action (as detailed in the last post), and is dirt simple in construction: no pumps, and very little in the way of painstaking welding of different sections of pipe. As long as evaporation and capillary action are balanced in the heat pipe, it’s happy.

In this case, NASA is using sodium heat pipes made out of Haynes 230 alloy (a nickel-chromium-tungsten-molybdenum alloy). These heat pipes have an outer diameter of 1.59 cm and an internal diameter of 1.4 cm, and mass 4.1 kg. They have an operating temperature range of 500 to 1100 C, and have operated for over 20,000 hours without signs of degradation. The contractor building the heat pipes for KRUSTY and Kilopower is Advanced Cooling Technologies, or ACT.
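A quick number shows why heat pipes move so much heat with no pump: the latent heat of the working fluid does the work. The sodium property value below is approximate, and the 1 kW per-pipe duty is just an illustrative figure, not a stated Kilopower requirement:

```python
# Why heat pipes carry so much heat passively: the latent heat of
# vaporization of the working fluid does the work. Approximate property
# value for sodium; the 1 kW per-pipe duty is an illustrative assumption.

h_fg_sodium = 4.24e6     # J/kg, approx. latent heat of vaporization of sodium

def vapor_flow_for_power(Q_watts):
    """Sodium vapor mass flow needed to carry Q watts of heat."""
    return Q_watts / h_fg_sodium   # kg/s

m_dot = vapor_flow_for_power(1000.0)   # carry 1 kW down one pipe
print(f"Vapor flow for 1 kW: {m_dot*1e3:.2f} g/s")
```

About a quarter of a gram per second of sodium vapor carries a full kilowatt, and capillary action in the wick can return that much condensed liquid with no pump at all.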


The Radiation Shield

LANL Kilopower screencap Rad Shield

In order to shield the rest of the spacecraft, including the power conversion system and the radiators, from the reactor core, stacked depleted uranium and lithium hydride plates are placed between the reactor and everything else. In total, 40.4 kg of LiH and 45.3 kg of DU are used to shield the reactor.

The Power Conversion System

LANL Kilopower screencap Stirling closeup cutaway

Here’s the other exciting part of this reactor: the power conversion system (PCS). Stirling engines are simple, reliable, and can theoretically reach high efficiencies, but they have rarely been used in real-world applications. Space has unique challenges and demands, and simplicity is one of the biggest requirements for a system. Static conversion options offer no moving parts, but also low efficiencies, and the Rankine and Brayton cycle options are complex and heavy. So NASA turned to the Stirling engine as a way to gain more efficiency while minimizing complexity and the number of moving parts.

Being NASA, and being leery of any unnecessary moving parts, they’ve tested these Stirling convertors for over 30,000 hours as part of their Advanced Stirling Radioisotope Generator and Fission Surface Power programs. Manufactured by Sunpower, Inc., these eight free-piston Stirling engines will produce from 1 to 10 kilowatts of electricity (kWe) across the Kilopower family of designs.
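As a rough sanity check on the conversion side, here's a Carnot-limit estimate. The hot- and cold-end temperatures and the half-of-Carnot assumption are illustrative guesses on my part, not Sunpower or NASA figures:

```python
# Rough sizing of the power conversion: Carnot limit plus an assumed
# fraction-of-Carnot for a free-piston Stirling. The temperatures and the
# 50%-of-Carnot figure are illustrative assumptions, not Sunpower specs.

T_hot  = 800.0   # K, assumed Stirling hot-end temperature
T_cold = 400.0   # K, assumed rejection temperature at the radiator side

eta_carnot   = 1.0 - T_cold / T_hot    # ideal thermodynamic limit
eta_stirling = 0.5 * eta_carnot        # assume engine reaches half of Carnot

thermal_power_needed = 1000.0 / eta_stirling   # watts thermal per kWe
print(f"Carnot limit: {eta_carnot:.0%}, assumed Stirling: {eta_stirling:.0%}")
print(f"Reactor thermal power for 1 kWe: ~{thermal_power_needed/1e3:.0f} kWt")
```

Numbers in this ballpark are why a core of only a few kilowatts thermal can feed a 1 kWe system, and why Stirling conversion is so attractive compared to the few-percent efficiency of static options.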

The Heat Rejection System

LANL Kilopower screencap 3

Kilopower’s radiators for the 1 kWe space design are made of titanium-water heat pipes, with carbon fiber panels to protect the heat pipes. Radiators for surface operations have also been designed, working off data and design lessons from the FSP program. As with the Haynes 230 sodium heat pipes, ACT is the contractor supplying these heat pipes. For KRUSTY, heat will be extracted using cryogenic gas cooling to simulate the radiator structure.

Kilopower vs. KRUSTY

As we saw in the last post, testing nuclear reactors is difficult, if not practically impossible. Instead, individual components are tested thermally, chemically, and neutronically before an integrated test can be done. This involves thermal vacuum testing of all components, exposure of key components to beamlines at research reactors, and incredibly extensive modelling.

Often, once all this is done, the researchers are stuck for money. Funding for in-space nuclear systems is rare, and much of what there is goes to radioisotope systems that are either already flight-proven or are improvements on the same basic architecture, and therefore low-risk investments. With only a limited number of small testing facilities able to accommodate a test reactor, larger designs were left out in the cold, not because they wouldn’t work, or due to insurmountable engineering challenges, but simply due to the cost of building new testing facilities. There are places where this could be done, such as the National Nuclear Security Site, where Project Rover occurred, but scratch-building a facility to do environmental testing in a vacuum chamber for an operating nuclear reactor is far from cheap.

KRUSTY test article 3d cutaway LANL
KRUSTY mounted on COMET, image via LLNL

The difference here is that KRUSTY is small, both in size and in power output. Even accounting for a vacuum chamber’s weight, current facilities can be used to test this reactor design. As we saw in the last post, there are critical assemblies that have been used since the 1950s to do benchmarked fission tests. There, we looked at Flattop, the spherical reactor used to prove that heat pipe cooling of a reactor could be predictably modeled. This time, we’re going to meet COMET, another of these benchmark criticality test stands. COMET is not a reactor, nor does it have any nuclear components itself. Instead, it is designed to bring two different parts of a test together very precisely. It consists of a table with hydraulic presses and vernier adjustment that can handle a sizeable test apparatus secured to it, and a central pillar on which to place the other part of the critical assembly being tested. COMET is even older than Flattop, having been used for criticality testing for the Little Boy gun-type atomic bomb. Since then, it was used at Los Alamos’ Pajarito test site until that site’s decommissioning, when it was moved, along with the rest of the critical assemblies, to the National Criticality Experiments Research Center at the National Nuclear Security Site in Nevada.

The biggest difference between KRUSTY and Kilopower lies in the reflector of the reactor: rather than controlling the reactor with a rod that moves within a fixed core and reflector, KRUSTY’s core, along with everything else but the reflector, is placed in a vacuum chamber, which is then mounted on the upper platen of Comet. A reflector made of beryllium oxide (BeO) discs of various thicknesses is then raised around the core using Comet’s lift system. By using different reflector thicknesses, the amount of available reactivity can be changed, and the data collected can be used to finalize the reflector design for Kilopower.

Building KRUSTY: Prototyping and Non-Nuclear Testing

Compromises for the Budget

One of the impressive things about Kilopower is that so far the team has managed to keep costs incredibly low. This is wonderful: more work can be done for much less money, and the system is far more likely to fly when it can be cheaply tested. As we’ve seen, though, this means compromising between the ideal, prototypic testing regime and the one that can be afforded. A great example is the Fission Surface Power program, where not only did the team never manage a single nuclear test, but even the heat rejection system was only half-tested in a vacuum chamber.

This doesn’t mean that we learned nothing from that program, or that there wasn’t extensive testing of individual components. For instance, that program looked not only at Rankine power conversion systems, but at carbon fiber radiators and heat pipe thermal management as well. Virtually every component in this reactor is similar: it’s been tested before, just not in this specific application. KRUSTY is the system-level series of tests that ensures the flight reactor will function as advertised.

Building off this knowledge base and available, off-the-shelf technology, the team at GRC built a mockup of KRUSTY, using stainless steel (SS) instead of uranium for the core (these tests were detailed in the DU thermal testing report). They then did a number of thermal tests using electric resistance heaters to mimic the nuclear reaction, using the models of reactor behavior that the DOE (LANL and Y12, mainly) have shown to be the most likely behavior.

At the same time, a decision was reached that would save significant money for the development effort, but would lead to a change in the power conversion system for KRUSTY. Rather than purchase the full set of eight 125 W Stirling convertors that Kilopower would have, the engineers at GRC decided to reuse two 70 W Stirling convertors that had been built and tested for the ASRG, and to replace the other six Stirlings with thermal simulators designed for the purpose. This meant that the test wouldn’t use a prototypic PCS, but this is less of a concern than for other components, especially the core and heat pipes. The PCS can be further refined and tested without the headaches and difficulties of dealing with a critical assembly.

Initial testing focused on the individual components that needed to be verified. This proceeded on two fronts: materials testing for unavailable or inconclusive data from past research, and subcomponent testing. The materials questions included fuel creep, thermal expansion coefficient of the fuel, and diffusion between the fuel and the heat pipes. The subcomponent tests focused on the heat pipes and their connection to the reactor core, ensuring that there would be enough heat transport available for the power conversion system.

The success of these tests led to a full-scale thermal prototype test using the stainless steel core, conducted in a vacuum chamber at NASA Glenn in the second half of 2015. Electrical heaters brought the core to its design temperature of 800 C while a variety of tests were carried out. Of special note were the tests of heat pipe thermal transport. In one, the cold end of the heat pipe was connected to an evaporator, which discharged into the vacuum chamber. This test demonstrated that the Haynes 230 heat pipe was able to transfer 4 kWt over 1 m of distance, showing that the system could handle the thermal load required to keep the reactor cool.
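To get a feel for why 4 kWt over a meter is impressive, consider what a solid metal rod of roughly heat-pipe size would need to move the same power by conduction. This is a back-of-the-envelope sketch: the diameter and conductivity below are assumed round numbers, not the actual Haynes 230 pipe geometry.

```python
import math

# Why heat pipes: moving 4 kW of heat over 1 m through a solid metal rod
# of roughly heat-pipe size would require an absurd temperature difference.
# Dimensions and conductivity are assumed, not the real pipe's values.

Q = 4000.0         # W, heat to transport (from the GRC test)
LENGTH = 1.0       # m, transport distance
DIAMETER = 0.0127  # m, ~half an inch (assumed)
K_METAL = 20.0     # W/m-K, typical for a nickel superalloy (assumed)

area = math.pi * (DIAMETER / 2.0) ** 2

# Fourier's law for steady conduction along a rod: dT = Q * L / (k * A)
delta_t_solid = Q * LENGTH / (K_METAL * area)

print(f"A solid rod would need dT of about {delta_t_solid:,.0f} C")
# A working heat pipe moves the same power with a temperature drop of only
# a few degrees, because the heat rides the latent heat of boiling and
# condensing sodium rather than conducting through solid metal.
```

However the real numbers shake out, the conclusion is the same: no solid conductor of that size comes close, which is why heat pipes are so attractive for compact reactors.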

With this latest set of successful tests complete, a new dummy core was made, this time out of depleted uranium (DU), to test for any chemical reactions, and for mechanical and thermal interfaces with the heat pipes and other components. This also marked the end to individual component testing for the components that would be used for the KRUSTY test, as they would be integrated with the modified power conversion system for testing with the new core in the last round of non-nuclear thermal testing.

Regulations Rear Their Head Again… But We Can Work With That!

DU core in hand
DU test article, image via DOE/LANL

DU is a more difficult substance to work with than stainless steel. This particular core simulator was produced at Y12 and contained 8% molybdenum by weight; the only difference between it and the KRUSTY core is the percentage of 235U (<15%). This is important for characterizing thermal effects arising from the density and structural properties of the fuel, and it also isolates any chemical, mechanical, or tooling issues with the manufacture of the HEU core to be used by KRUSTY later this year. This was a major step forward for the program: dotting the i’s and crossing the t’s before the nuclear test. It also allowed Y12 to make the molds and other tooling required for the HEU core, and to verify that there wouldn’t be any issues in its manufacture.

This also forced a design decision on the power conversion team. Two ways of mounting the Stirling pistons had been discussed. The first, and ultimately the one chosen, is what’s called a dual-opposed convertor design: the convertors are arranged in pairs radially, with their hot ends toward the center, and by managing the stroke of the pistons their motions cancel each other out from an overall inertia point of view. The alternative was to mount the pistons vertically and run them in parallel, which requires an active structure to counterbalance the inertial force from the pistons, adding complexity. When it came time to finalize the design, the simpler dual-opposed arrangement won. Another compromise from the prototypic flight configuration was the heat rejection system: cold nitrogen gas would be used to remove heat from the cold ends of the Stirling convertors and simulators. This allowed for a smaller test apparatus, and also allowed the 70 W Stirlings to simulate a system 50% more powerful.

Stirling PCU for KRUSTY, Mason 2011
Image via NASA

Due to its mild radioactivity, DU does require special handling as a nuclear material. While this is often “only” a major headache, in this case the Kilopower team looked upon it as an opportunity to get everyone together for a dress rehearsal before KRUSTY.

One of the reasons that NASA is looking to get away from using HEU is due to the security required for its storage and handling. The required personnel and organizational resources don’t come cheap, so the less time the fuel actually has to sit in the reactor on the ground, the better. Fortunately, since all nuclear spacecraft have the reactor as far away from the rest of the spacecraft as possible, and since the reactor core is a single piece, fueling can be held off until much later in the process of vehicle assembly and integration.

Neutronics is a tricky business. When first assembling a new nuclear reactor, there are many unknowns, and accidental criticality lurks behind seemingly innocuous mistakes. Because of this, it would be nice to run through an assembly process WITHOUT having to worry about having a nuclear reaction occur.

DU Integration pre clamp
Integration of heat pipes and DU core, NASA/LANL

So this is what everyone did. Personnel from NASA’s Glenn Research Center and Marshall Space Flight Center (MSFC), Los Alamos (LANL), and the Device Assembly Facility (DAF) at the Nevada National Security Site gathered for the first dress rehearsal for fueling the reactor. This way, any hitches could be dealt with on this far simpler test, and everyone was able to run through their roles in preparation for the big day. No major problems were discovered, the core was installed, and KRUSTY was ready for its last round of non-nuclear testing.

One other note about the preparations for nuclear testing: using the DAF sets limitations on instrumentation and other conditions, for instance on coolant for the cold end of the Stirling engines. In order to make sure that there were no issues here, all the connections to the test stand at GRC were identical to the ones that would be used for KRUSTY at the DAF.

DU Core w clamps
DU core with integrated heat pipes, courtesy NASA

Final integration allowed for assessment of the DU core’s interface with the rest of the reactor, especially the mechanical and thermal connection with the heat pipes. This is one of those critical areas that can be well estimated and modeled, but unless it’s experimentally verified with flight-like components there will be unanswerable questions. Another is the possibility of chemical reactions. With no major problems discovered, testing moved on to preparing for the nuclear test.

A final benefit gained from this test rehearsal is the ability to better estimate fueling requirements for a flight reactor. It was determined that the reactor could be fueled, instrumented, insulated, and canned in 12 hours, with final assembly of the radial reflector and control rod requiring another 8 hours; conservatively, this works out to four working days in total. This is good to know: one of the biggest costs associated with highly enriched “Special Nuclear Material” is the security required for its transport and handling, so the shorter the time the fuel takes to be integrated with the spacecraft, the less time you have to pay for those expensive nuclear security personnel and procedures.

The Final Non-Nuclear Tests

The last round of non-nuclear testing occurred in 2016 at GRC. In these tests, the heaters’ control software was programmed with the projected behavior of KRUSTY during a number of reactor and PCS states. The test profile that was programmed in was meant to mimic as closely as possible the thermal environment the reactor would experience at various points in the testing process.

This does not mean that the reactor system will experience exactly these conditions. This is a model of predicted thermal behavior based on nuclear modeling of components that have only been thermally tested using non-nuclear methods. Further nuclear testing before the full-power test would refine the model, and the thermal test profile is designed to account for any unknown thermal effects during the actual test.

So what did the test look like? As close to the testing that will be done at DAF as possible, so we’ll look at that test timetable and mention any variations from it as they come up.

The first test at the DAF will be a thermal break-in test, where the HEU core is electrically heated. This is a final verification that all of the components are functioning correctly, heating rates can be easily controlled, and thermal interfaces are functioning properly (especially at the hot and cold ends of the heat pipes) before the reflector is raised around the core for the cold (or zero-power) fission test. This is also the test that was duplicated by GRC with the DU dummy core.

Full power DU Results
Heating profile of DU core, Briggs et al, NASA GRC 2016

After a ~3 hour warm-up period, the sodium in the heat pipes started to boil, and the hot end of the Stirling engines began to warm. Once they reached 650 C, the two Stirlings and six thermal simulators were turned on, dropping their hot end temperatures. The system was then left to reach a steady state thermal equilibrium, which took about 2-3 hours.

Before the test was concluded, though, transient testing was conducted. This was to verify the behavior of the reactor in case something failed, such as a piston jamming and reducing the amount of heat that the Stirling convertor would remove. To simulate this, the convertors were stalled and the simulators turned up to full power, verifying that a partial loss of power conversion could be compensated for by the rest of the convertors. Then the convertors were shut down as well, to see the thermal response of the core to a complete loss of cooling. After this, the heater was turned off, the simulators were turned back on to full power, and shutdown occurred.

This is identical to the thermal break-in test that will occur in Nevada. From there, the testing at the Nevada National Security Site’s Device Assembly Facility diverges from what can be done at GRC, under much more stringent conditions and at much greater cost, so every lesson that could be squeezed out of this testbed saved the program money and headaches further down the road.

As Dr. Mason mentions in a conference paper on the electrically heated DU core test article, this test was a major milestone.

“Testing of the Kilopower technology demonstration with the DU core, using the test sequence and configuration for the final testing with the HEU core, has reduced the risk of any unexpected issues in fueling, assembly, or test operations at NNSS… The system as it stands is capable of delivering 120 W electric from two ASC convertors with a maximum thermal power draw of roughly 3000 W, which is sufficient to verify neutronics models at the nominal Kilopower operating condition. There were no issues encountered during DU testing that caused unexpected operational issues which would need to be addressed prior to HEU testing.”

As with any well-constructed test, problems were isolated, which led to potential changes in the design. In this case, Dr. Briggs of NASA Glenn points out (in the DU testing final report) one such issue, as well as a potential fix:

“Testing of the DU core shows that at nominal operating conditions there is a 200º C temperature drop between the core and the Stirling convertor hot end. The majority of this temperature drop takes place through the conduction plate and through the ASC heat acceptor, both of which can be eliminated in future design iterations if money comes available for customized convertors and interfaces.”

However, he also notes:

“There were no issues encountered during DU testing that caused unexpected operational issues which would need to be addressed prior to HEU testing.”

Additional advances continue to be made, and there will be further changes between KRUSTY and a flight reactor, but those changes are likely to be minor.

Nuclear Testing: Now Things Get Serious

As seen above, nuclear materials are not taken lightly by NASA. There are arguments to be made both for and against the abundance of caution that government agencies are required to exercise (these days), but that’s another topic that we aren’t going to touch today. Instead, we’re going to look at what goes into the nuclear testing side of reactor development.

Comet Critical Assembly, image via ORNL

As mentioned in the last post, the critical assembly machines that used to be at the Pajarito Test Site (in structures known as Kivas) were either decommissioned or moved to the then-new National Criticality Experiments Research Center (NCERC) at the Nevada National Security Site (NNSS). While we looked at one of the critical assemblies last time, this time we’re looking at what could be called a “critical assembler”: a machine that brings the parts of a critical assembly together without having any nuclear components itself. There are two of these at the NCERC: Comet (which dates back to Little Boy testing) and Planet (built to relieve scheduling pressure on Comet, and essentially a slightly smaller version of the same machine). There were others in the past, but they have since been decommissioned.

Why is this important? One of the odd, and difficult, things about nuclear reactions is that they are sensitive to things that we don’t often think of as important. A classic example comes from the dawn of the nuclear age, when Enrico Fermi was still at the University of Rome. In Italy, marble was a common choice for lab benches, as was wood. When he and his team were conducting early neutronics research, they discovered by accident that the material of the tabletop made a significant difference in the outcome of their experiments, because a wood tabletop would slow the neutrons more than a marble one would.

Device Assembly Facility, Nevada National Security Site. Image via DOE

This is one example of hundreds that could be offered of the odd effects of neutronics and the condition known as critical geometry, in which a self-sustaining fission reaction can occur. Depending on the fissile material, the moderators, and the relative positions of these components, a reactor can go critical when you aren’t prepared for it to do so. This is called accidental criticality, and it is a spectre that hangs over any operation involving fissile material. Criticality accident reviews are compiled on a regular basis by Los Alamos National Laboratory, and the lessons from these accidents are then integrated into new handling procedures for nuclear material.

A quick aside on units here: reactivity is often measured in dollars and cents. One dollar of reactivity is the amount that takes a reactor from delayed critical to prompt critical, that is, reactivity equal to the effective delayed neutron fraction, and one cent is 1/100 of a dollar. Reactivity itself is derived from the effective neutron multiplication factor, keff: a reactor is exactly critical at keff = 1, and reactivity measures the fractional departure from that point. When these terms are used, they refer to reactivity being “inserted” into or “removed” from a reactor. In this case, as the reflector is raised around the core, it reflects more neutrons back into the core and therefore inserts reactivity, but removing control rods, rotating control drums, or removing fission poisons all insert reactivity as well. There are also a number of ways to remove reactivity: inserting control rods, changing the moderator configuration, allowing fission product buildup, or, in this case, lowering the reflector so it reflects fewer neutrons back into the core.
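To make the relationship between keff, reactivity, and dollars concrete, here is a small sketch in Python. The delayed neutron fraction used is a typical textbook value for a fast 235U system, not a published KRUSTY figure.

```python
# Illustrative conversion between k_eff, reactivity, and dollars.
# BETA_EFF is a typical effective delayed neutron fraction for a fast
# HEU system -- an assumed textbook value, not KRUSTY-specific data.

BETA_EFF = 0.0065  # effective delayed neutron fraction (assumed)

def reactivity(k_eff: float) -> float:
    """Reactivity rho = (k_eff - 1) / k_eff."""
    return (k_eff - 1.0) / k_eff

def dollars(k_eff: float) -> float:
    """Reactivity expressed in dollars: rho / beta_eff."""
    return reactivity(k_eff) / BETA_EFF

# Exactly critical: k_eff = 1 gives zero reactivity, i.e. zero dollars.
print(f"k_eff = 1.000 -> ${dollars(1.000):+.2f}")
# A slightly supercritical core:
print(f"k_eff = 1.005 -> ${dollars(1.005):+.2f}")
# One dollar ($1.00) corresponds to prompt criticality: rho = BETA_EFF.
k_prompt = 1.0 / (1.0 - BETA_EFF)
print(f"prompt critical at k_eff of roughly {k_prompt:.4f}")
```

Notice how tiny the margin is: a fraction of a percent change in keff is worth a whole dollar, which is why reflector discs are added in two-cent steps.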

The key to preventing accidental criticality is care in the design of the test, and in the movement of everything in the test area. Sometimes, a known reactor is used (such as Flattop or one of the Godiva reactors), and sometimes a machine is used that is designed to avoid accidental criticality (such as Comet and Planet, but other designs have been used over the years as well). This allows for a level of certainty and precision that takes into account the unique challenges of initial nuclear assembly. To build a critical assembly for the first time takes an incredible amount of forethought, modeling, planning, and preparation.

That being said, I am not a nuclear engineer, merely an enthusiast with a penchant for research and a respect for the limitations of physics and engineering, so explaining the conditions that lead to accidental criticality is not so much off the deep end of my skill set as off the continental shelf. Treat me as the classic “guy on the internet”: I try to provide as many original sources as I can and always love finding more, and as ever, if there’s information I’m missing, please let me know in the comments!

Comet (and Planet) have been designed to take care of those concerns to a large extent. They are simple machines, with a fixed platen and a movable one. The lower, movable platen on Comet has rough control via a set of hydraulic lifts, and fine control using a screw drive connected to an electric stepping motor. In this case, all of KRUSTY but the core will be placed in a vacuum chamber mounted on the upper platen on Comet. The core will hang down below this assembly, and the lower platen will be used to raise a reflector around the core.

Platen position details
COMET platen position diagram, Poston LANL 2016

A number of things remained untested (although modeled) after the DU tests, all of the significant ones being on the nuclear side. The first is how much reactivity will be needed for the core to operate. To account for this, the reflector for KRUSTY is modular in design, containing a number of annular discs (think thick washers) of varying thicknesses that can be selected to tweak how many neutrons are reflected back into the core, and therefore how much reactivity is added. According to Monte Carlo (MCNP) modeling, $1.70 of reactivity is needed to reach nominal operating temperatures, but with a possible margin of error of up to $1.50. This was an issue, because Comet at the DAF was only authorized to handle $0.80 in excess reactivity; however, site permit changes and modifications were authorized for testing to proceed (a facility safety basis change was approved by the DOE, and there are some indications that even higher reactivity insertion limits may be allowed with the construction of new facilities at the NNSS).

KRUSTY startup reactivity, Poston LANL
Modeled startup reactivity requirements, Poston LANL 2016

Before the nuclear testing occurs, the reactor will undergo a thermal break-in test, mimicking the electrically heated DU test precisely. This is to ensure that all of the thermal interfaces and thermodynamic properties observed at GRC with the DU core are the same with the HEU one, and that the integration of the new core is done properly. After that, testing can move on to neutronics requirement refinements.

In order to refine the reflector geometry, a number of cold criticality (also known as zero power) tests were conducted over the summer (cold criticals in June or July, warm criticals in September). By using different thicknesses of BeO, different amounts of reactivity can be inserted into the test article, and this information will then inform the final design of the reflector. By adding components in a step-wise fashion, the reactivity requirements can be pinned down.

After the cold criticality testing data has been fed back into the models to verify the predicted behavior and make any adjustments necessary, low temperature testing can begin. This is important to refine the models of reactor behavior before full-power testing can occur, and again is done in a step-by-step manner, from $0.15, to $0.30, $0.60, and finally up to $0.80 excess reactivity (added in $0.02 intervals). This maximum number was chosen because it ensures that the reaction is sustained by the delayed neutrons and occurs at a lower temperature than what would be found at full power.
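A classic way to make these step-wise additions safely is the inverse multiplication (1/M) plot. I can’t confirm this exact bookkeeping was used for KRUSTY, but it illustrates the logic of approaching critical in small steps; all numbers below are invented for illustration, not measured data.

```python
# Sketch of the inverse-multiplication (1/M) method used during a
# step-wise approach to critical. In a subcritical assembly with a fixed
# neutron source, the detector count rate C scales as S / (1 - k_eff),
# so 1/M = C0 / C shrinks linearly toward zero as k_eff approaches 1.
# All values here are hypothetical.

source_rate = 100.0  # baseline count rate with reflector fully withdrawn (assumed)

def count_rate(k_eff: float) -> float:
    """Subcritical multiplication of a fixed source."""
    return source_rate / (1.0 - k_eff)

# Hypothetical k_eff after each reflector increment:
steps = [0.90, 0.94, 0.97, 0.985]
inv_m = [source_rate / count_rate(k) for k in steps]

for k, m in zip(steps, inv_m):
    print(f"k_eff = {k:.3f}  ->  1/M = {m:.3f}")

# Extrapolating the 1/M points to zero predicts the critical
# configuration *before* it is ever reached; each new increment is
# added only after the previous point confirms the extrapolation.
```

The safety of the method comes from that extrapolation: the experimenters always know roughly how far away critical is before they take the next step.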

With the successful completion of the cold-critical tests, KRUSTY’s operating conditions will be well-enough characterized that any potential issues or unknowns discovered during the cold critical tests can be addressed before the full-power, high temperature testing.

Full-Power Test: The Day 40+ Years in the Making

The big test comes this year: KRUSTY will undergo full power testing by the end of November 2017. It’s been a long time since the US has done a nuclear-powered ground test of an in-space reactor. In fact, it’s been so long that the Department of Energy has never conducted one at full power; the last such tests were conducted by the AEC! There are a number of reasons for this, some of which we’ve looked at before, and we’ll touch on them again in the future. For now, let’s look at the test itself, and what we’ll learn. Based on modeling and non-nuclear testing (again, I can find no data on zero power or cold critical testing), all possible required modifications to the test apparatus have been isolated to the reflector, which can easily be reconfigured to allow for as much reactivity as is needed.

The test itself will be a long one, starting the same way that the low-power tests did and continuing through a series of tests meant to measure steady-state operation and to simulate failures of the cooling or power conversion systems, to probe reactor dynamics. Because of the strong negative thermal reactivity coefficient, no issues are anticipated in these power transient tests (as they’re known), but the dynamics of the system still need to be verified experimentally. By selecting the tests wisely, a great number of other interactions can be simulated, ensuring efficient and reliable operation under many different errors or types of damage. To ensure that there’s enough excess reactivity to cover a worst-case underestimate of the amount of reflection needed, the reflector will be loaded with enough BeO plates to allow an insertion of up to $2.20, well more than the $1.70 that MCNP modeling predicts, in case of unanticipated reactivity loss.

Full Power Temp Run Full Slide, Poston 2016

As with the other tests, this one begins with an insertion of $0.15 in reactivity, and stepping the reactivity insertions up at regular intervals until the desired fuel temperature is reached (850 C). After this, several hours of steady state operation are recorded, verifying thermal equilibrium across the system.

Now things get interesting. The first transient test involves cutting the power draw from the two Stirlings in half and monitoring the reactor as it adjusts itself to the new power level. Once the system achieves a steady state, the power draw is increased again to test the load following capabilities of the reactor. This is followed by several more hours of steady state operation.

Load Following Experiment Slide

The next transient test involves shutting off one of the Stirling engines to simulate a failed heat pipe. The resulting changes in temperatures and power levels will be monitored and used to refine the failure mode prediction modeling for Kilopower. After the system adjusts to this change, power will be brought back online from the stalled Stirling, and the system will reach full power again.

Full Run Power Removal Halt Test Full Slide

Finally, a total loss of cooling will be simulated, by turning off all heat removal from the Stirling engines and the simulators. Based on the modeling of this system, the temperature will increase, decreasing the reactivity in the core to the point that dissipation of heat from the core and the fission reaction reach a balance point. At this point, the reflector is withdrawn, and the reactor is left to cool, both thermally and radioactively, for a few days.
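That balance point can be sketched with a toy quasi-static model of the negative temperature feedback. The feedback coefficient and temperatures below are assumed round numbers for illustration, not published KRUSTY values.

```python
# Toy quasi-static model of the loss-of-cooling transient: as the core
# heats up, thermal expansion removes reactivity until the excess is
# cancelled and fission power settles to match whatever heat still leaks
# out. ALPHA and T_NOMINAL are assumed values, not KRUSTY data.

ALPHA = 0.0015     # reactivity lost per degree C of core heat-up, $/C (assumed)
T_NOMINAL = 800.0  # nominal operating core temperature, C (approximate)

def equilibrium_temp(excess_dollars: float) -> float:
    """Core settles where feedback has cancelled the excess reactivity:
    excess_dollars - ALPHA * (T - T_NOMINAL) = 0."""
    return T_NOMINAL + excess_dollars / ALPHA

# If all heat removal stops, temperature climbs until the negative
# feedback has eaten the excess reactivity:
print(f"Equilibrium with $0.15 excess and no cooling: "
      f"{equilibrium_temp(0.15):.0f} C")
```

The point of the sketch is the mechanism, not the numbers: the stronger the negative coefficient, the smaller the temperature excursion for a given upset, which is why the transient tests are expected to be uneventful.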

Modeling has been done on other failure modes, such as adjacent heat pipe failures, but these situations won’t be tested, because the data collected from these tests will be enough to predict reactor behavior under those conditions.

Typically, a post-mortem analysis of a test reactor is done after a nuclear-powered test, but due to the low power, relatively short burn time, and high enrichment of the fuel, this may not be as necessary as in other reactors. I’ve found no reference to post-mortem inspections in the papers and presentations I’ve gathered, but it seems like a logical set of tests to make, and presumably with the resources of Y12 available the fuel recycling may be easier than in other projects. If anyone has any information on these plans (or the results after the test is complete) please let me know in the comments!

However, the reactor will be left for between 45 and 60 days to cool while on COMET, and then will be set “in a corner” (in Dr. Poston’s words) to cool further.

After Final Run with stowed platen

What We Expect to Learn

According to Gibson and Poston:

“The KRUSTY test will be the first flight prototype nuclear test of a space reactor performed in decades. The results of the KRUSTY test will validate the computer models, methods, and data used in reactor design. In addition, valuable experience in design, fabrication, startup, operation, transient behavior (load following based on reactor physics), and reactor shutdown will be obtained. The ultimate goal of the KRUSTY experiment is to show that a nuclear system can be designed, built, nuclear tested, and produce electricity via a power conversion system in a cost effective manner.”

Nuclear engineering is a field where the lessons learned from one test can be applied to many different systems, even ones with different specific operating conditions (as seen in the DUFF experiment in the last post). Heat pipe cooled reactors are an attractive option, but the single test of this concept in a nuclear reactor was with a very low power reactor, and a heat pipe material that wouldn’t be used in a flight reactor. KRUSTY takes it a step further, using a coolant that would be used in a wide variety of reactor architectures, and operating at considerably higher power (although still only 1 kW) and temperature. Until heat pipe cooled reactor tests are more common, the data from KRUSTY will inform every reactor design using this technology.

Other unusual characteristics of the reactor will also be able to be better modeled. For instance, the monolithic fuel design (where all the fuel for the reactor is in a single piece) is unusual, and additional information about its behavior can be gained.

Radial Power Distribution simulation
Radial power deposition model, Poston LANL 2017

The Stirling PCS, while not prototypic, is another subsystem new to nuclear power production, and the lessons gained in this experiment can be applied to future reactor designs (as well as possibly improving solar thermal designs). Another system that will be directly impacted is the Fission Surface Power reactor, the next in our series, which uses Stirling convertors very similar to Kilopower’s. Questions raised there about the effectiveness of this type of PCS can be addressed with this smaller-scale test.

The lessons learned from this reactor will impact many designs, in many different ways, for many years to come, and as with any technological advance, it’s impossible to guess what will have the most impact.

First Order Value of Experiments, slide 1
First Order Value of Experiments, slide 2

A Fundamentally Enabling Technology

There is no more difficult field of human endeavor than spaceflight. The difficulties, the costs those difficulties impose, and the extreme distances and environments that missions must endure all present challenges that are rarely paralleled elsewhere. In fact, the only other field with comparable costs and comparable constraints of engineering, chemistry, and physics is nuclear power. Despite this, an inventive team has managed to make do with existing technology, facilities, and equipment to design a reactor that not only can but will be tested, for a pittance. Engineering students are constantly told to keep the cost of materials in mind from the first class they take, but that goal is rarely carried over into aerospace OR nuclear engineering. Here, Dr. Poston and his team at Los Alamos have performed miracles of economy, bringing about the first electricity generation from a DOE reactor, and the first full-scale nuclear test of an in-space fission power system since the DOE was founded.

Juno solar panel size
Relative size of Juno spacecraft, image via NASA

So why is this important? Many in aerospace consider nuclear power not worth it: the costs, in terms of both money and procedural burden, are high, even for a simple plug of 238Pu for an RTG. However, there are limits to what solar power, the most often suggested substitute, can provide. A classic example is the outer solar system: the Juno spacecraft currently in orbit around Jupiter has three solar arrays, each roughly the size of a semi trailer, but is so power-starved that it takes almost two weeks to transmit its data back to Earth. Communications isn’t the only area where power is important: radar is a power hog (in fact, the nuclear-powered US-A satellites launched by the USSR were Radar Ocean Reconnaissance Satellites), and electric propulsion is another classic example of more benefits deriving directly from more available power.

Another limitation crops up with the use of radioisotope thermoelectric generators, or RTGs, which use decay heat from a radioactive substance (usually 238Pu, though other isotopes have been used on shorter missions): their power is at its peak when the fuel element is assembled, and it only drops from there. The amount of power can’t be changed, and the decay process can’t be slowed. Additionally, engineering practicalities limit how efficient an RTG can be, and moving toward a heat engine-based conversion system (such as the ASRG) can only do so much to increase available power for a given mass with these systems.
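To make the decay limitation concrete, here’s a minimal sketch; the heat source size and conversion efficiency are assumed round numbers for illustration, not figures for any particular RTG:

```python
# A minimal sketch of RTG output decline from decay alone (assumed,
# illustrative values -- NOT mission figures). Pu-238 has a half-life of
# about 87.7 years; a ~2000 Wt heat source and ~6% thermoelectric
# conversion efficiency are round-number assumptions.
HALF_LIFE_YEARS = 87.7
P0_THERMAL_W = 2000.0
EFFICIENCY = 0.06

def rtg_power_we(years_after_fueling):
    """Electrical output from decay alone; real RTGs decline faster,
    because the thermocouples themselves also degrade over time."""
    thermal_w = P0_THERMAL_W * 0.5 ** (years_after_fueling / HALF_LIFE_YEARS)
    return thermal_w * EFFICIENCY

for t in (0, 10, 20, 40):
    print(f"{t:2d} yr after fueling: {rtg_power_we(t):6.1f} We")
```

Note that the decline is inexorable: nothing the mission does can pause it, which is exactly the contrast with a fission reactor drawn below.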

A fission reactor can be launched “cold,” be left off until needed (perhaps even years into a mission) with no nuclear degradation of fuel or materials, and can draw more or less power, and even be turned completely off, on command. Combining this capability with the ability to operate independent of local conditions (mostly) and the ability to provide a very dense power source, fission power supplies offer unique capabilities for both unmanned and manned missions throughout the solar system.

Kilopower: A Nuclear Reactor for Higher-Power Missions

TSSM Enceladus
Titan System Survey Mission, ASRG variant, over Enceladus. Image via NASA

For those in the nuclear engineering field, often the reactor itself seems to be an end in and of itself (I am guilty of this, as well). However, no matter how simple, elegant, unique, or original a concept is, it still is a power source for… something; in this case a NASA mission of some sort. These fall into two broad categories: spacecraft (orbiters or fly-by missions) and landers (either fixed or rovers). Both orbiters and landers have been considered for Kilopower, and we’ll look at some options for each.

What missions have been proposed that this reactor makes possible? Remember, NASA has stacks and stacks of mission concepts, each the subject of a commissioned one-to-three (usually two) year study, waiting for certain enabling technologies to come about. Often, that enabling technology is the power supply, and those are the missions that stand out for Kilopower.

Most of these missions did not incorporate a nuclear reactor among their power supply options, so the mission often changes from what was originally proposed to account for the reactor. In fact, they were all powered by multiple RTGs, as Cassini was (three GPHS-RTGs), which don’t scale well as a general rule. Even where a mission had planned for a reactor, the specific data about this reactor firms up questions that were left open in the original design study.

Titan Saturn System Mission (TSSM)

TSSM Artist's Impression, NASA
Artist’s Impression of TSSM with RTG, image via NASA

This mission concept came from the 2010 decadal survey, and was re-examined by the Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team in 2014. Originally designed with 500 We of ASRG power, the 2014 study substituted a 1 kWe Kilopower reactor. This is a good example of the trade-offs considered when comparing power supplies: the original, RTG-based electric propulsion spacecraft has less mass and a shorter trip time, but the fission power supply (the reactor) provides more power for instruments and communications, allowing real-time, continuous communications at a higher bandwidth as well as higher-resolution imaging.


TSSM FPS config
TSSM Kilopower variant, NASA GRC

As with the following concepts, this was a mission that was briefly looked at as an option for a mission to use the Kilopower reactor, not a mission designed with the Kilopower reactor in mind from the outset. The short development time of the reactor (I never thought I’d write those words…), combined with the newness of the capability, caught NASA a bit flat-footed in the mission planning area, so not all the implications of this change in power supply have been analyzed.


TSSM Baseline Mission Graphic
Mission vehicle breakdown, NASA/ESA

The mission as designed is impressive: not only is there an orbiter, but a lander (to be designed by ESA, who have already successfully landed on Titan with the Huygens probe), and as a buoyant cherry on top, a balloon for atmospheric study as well.


These low-power missions are where any new in-space power plant will be tested, to ensure a TRL high enough for crewed missions. Because of this, looking at these nuclear-powered probes is the best way to see what could be coming down the pipeline in the near future, and I’m going to be adding mission pages to the website over time, with this being the first.

TSSM Mission Summary Timeline
Chart courtesy ESA

Here are the published papers on the mission:


Chiron Orbiter

Chiron Orbiter FP config
This design is for an orbiter mission to 2060 Chiron, a Centaur-class asteroid. Originally proposed in 2004 as part of a study of radioisotope electric propulsion across the solar system, it was re-examined in 2014 by the COMPASS team.

This is a mission that I’m very interested in, but unfortunately not much has been published on it:

Kuiper Belt Object Orbiter (KBOO):

KBOO on orbit, NASA
Kuiper Belt Object Orbiter, MMRTG variant, image via NASA

A close cousin to the Chiron Orbiter, KBOO was originally an RPS-powered mission that used an incredible 9 ASRGs, with a total power output of a little over 4 kWe, to examine an as-yet undetermined target in the Kuiper Belt. Access to nuclear power is a requirement that far out in the solar system, and with Kilopower the mission is not only freed of its power constraints, but gains more bandwidth for data return, and the extra power will allow radar surveys of the objects KBOO flies by.


KBOO Components, NASA

Jupiter-Europa Orbiter

JEO Artists Rendering
Jupiter-Europa Orbiter MMRTG variant, NASA

A predecessor to the Europa Clipper, JEO was originally designed with 5 MMRTGs (each roughly equivalent to an ASRG, for about 500 We total). However, the design could have double the available power, with much higher data return rates and better data collection capabilities, if a 1 kWe reactor were used. This would increase the power plant mass (260 kg for the MMRTGs) by an additional 360 kg, but it would also eliminate the need for Pu-238, which remains very difficult to get hold of.
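As a rough sanity check on that trade, here is the arithmetic using only the figures quoted above (illustrative bookkeeping, not a full mission trade study):

```python
# Specific-power comparison for the JEO power supply options, using the
# numbers quoted in the text above (illustrative arithmetic only).
mmrtg_mass_kg = 260.0       # mass of the 5-MMRTG power supply
mmrtg_power_we = 500.0      # approximate total electrical output
reactor_extra_mass_kg = 360.0   # additional mass for the 1 kWe reactor
reactor_power_we = 1000.0

mmrtg_specific = mmrtg_power_we / mmrtg_mass_kg
reactor_specific = reactor_power_we / (mmrtg_mass_kg + reactor_extra_mass_kg)

print(f"MMRTGs:  {mmrtg_specific:.2f} We/kg")
print(f"Reactor: {reactor_specific:.2f} We/kg, but double the total power")
```

The reactor actually comes out slightly worse per kilogram at this scale; the win is the doubled total power and the escape from the Pu-238 supply problem, not specific power.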


JEO Planned Payload
Note: This is initial planned payload, not accounting for the higher power from Kilopower. Image via ESA

The Europa Clipper is based on a more economical version of this mission, the Europa Multiple Flyby Mission, and has some of the same hardware.

Here are the published papers on the mission:

Human Exploration Missions

While Kilopower is certainly smaller than the power requirements of many crewed surface missions, it has been designed with those missions in mind. The orientation of the heat pipes has already been tested, and will be tested more thoroughly at NNSS (when held vertical in a gravity field, a heat pipe acts as a thermosyphon, increasing how much heat it can carry). This reactor could certainly be used for crewed space missions as well, but only for what’s known as “hotel load,” not for providing the large amounts of electrical power an electric drive system needs (we’ll get to that in a couple of blog posts). As such, it’s typically envisioned for crewed missions as a modular power unit, with more reactors added as the base grows, to keep up with increasing power demand.

Kilopower vs FPS


Kilopower integrates fairly easily into NASA’s Design Reference Architecture 5.0, the “Handbook for Getting to Mars” as it were. This is a multiphase program.

Phase 1 launches before humans ever leave Earth, to begin in-situ resource utilization (ISRU), and will be either solar or fission powered. The trade-off is between system mass and the time required for refueling: more fuel and water can be extracted faster using Kilopower, but it masses more than solar panels (after factoring in the full power production system). Phase 2 is the beginning of crewed missions; here, a NASA study showed significant mass savings over solar, due to the cost of energy storage.

Surface deployment, Mars NASA


The fundamental advantage of fission power systems on the Moon is eliminating the energy storage requirements of the lunar night. The Fission Surface Power program was, in fact, primarily aimed at crewed Lunar (and later Martian) missions. Kilopower will be able to operate well in these environments, albeit only offering up to 40 kWe of power (which is where FSP takes over). The study above looks at Lunar mission options and requirements as well.
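A back-of-envelope sketch shows why energy storage dominates the lunar trade; the load, night length, and battery specific energy below are assumptions for illustration, not mission numbers:

```python
# Back-of-envelope estimate of the battery mass needed to ride out the
# lunar night on solar power. All values are illustrative assumptions:
# a 10 kWe continuous base load, the ~354-hour lunar night, and a
# pack-level battery specific energy of 150 Wh/kg.
LUNAR_NIGHT_HOURS = 354
LOAD_KWE = 10.0
BATTERY_WH_PER_KG = 150.0

energy_kwh = LOAD_KWE * LUNAR_NIGHT_HOURS            # energy to store for the night
battery_mass_kg = energy_kwh * 1000 / BATTERY_WH_PER_KG

print(f"Energy to ride out the night: {energy_kwh:.0f} kWh")
print(f"Battery mass alone: {battery_mass_kg / 1000:.1f} tonnes")
```

Tens of tonnes of batteries for a modest base load, before counting the oversized solar arrays needed to recharge them, is why a reactor that simply runs through the night is so attractive.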

A New Family of Reactors

Kilopower 4 kWt
This is just the first step for the program, however. The 1 kWe design is the smallest considered practicable for a fission power system, but the basic design concept can be extended up to 40 kWe. While the basic reactor can produce up to about 4 kWt, any further power increase starts to exceed the heat transfer limits of the heat pipes. To move more heat from the core to the power conversion system, changes to the heat pipe system are needed.

12 kWt Kilopower core
The first option is to move the heat pipes from the periphery of the core to its interior. This is harder than it looks at first glance: the components of the heat pipes affect the reactivity of the reactor in a number of different ways, which are best assessed “in the wild” with the 1-4 kWt design before moving on to this change.

The advantage is that a greater surface area of each heat pipe is exposed to the fuel element generating the heat, so more heat can be transported using the same heat pipes, or even smaller ones (in this case, a reduction in major diameter from 1/2” to 3/8”, and an increase in number to 12). By moving the heat pipes from the periphery to the center of the fuel element, power can be boosted to 13 kWt using only two more kilograms of 235U, and only 24 kg more reactor mass.

21 kWt Kilopower
The larger sizes continue this trend, increasing the number of heat pipes in the core to increase heat removal. Because the reactor has a strong negative thermal reactivity coefficient, it has a corresponding tendency to increase reactivity as heat is more completely removed, increasing the power output of the reactor. The configuration and size of the heat pipes are based on Monte Carlo and thermal conductivity modeling, to ensure the temperature gradient across the fuel remains acceptable even if a heat pipe fails.
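The load-following behavior described above can be sketched with a toy lumped model; every number here is an illustrative assumption, not a KRUSTY parameter:

```python
# Toy lumped-parameter model of load following via a negative thermal
# reactivity coefficient. All values are illustrative assumptions, not
# KRUSTY parameters.
ALPHA = -0.1      # reactivity feedback per kelvin (negative), assumed
T_REF = 1073.0    # fuel temperature where net reactivity is zero (K), assumed
C = 5.0           # lumped heat capacity, kJ/K, assumed
P0 = 4.0          # nominal fission power at T_REF, kW, assumed

def simulate(heat_removal_kw, steps=20000, dt=0.01):
    """March a simple energy balance to equilibrium: power responds
    quasi-statically to temperature through the feedback coefficient."""
    T = T_REF
    for _ in range(steps):
        rho = ALPHA * (T - T_REF)            # negative feedback reactivity
        power = P0 * (1 + rho)               # quasi-static power response
        T += (power - heat_removal_kw) * dt / C
    return power, T

# Pull more heat (4 -> 5 kW): the fuel cools slightly, feedback goes
# positive, and fission power rises to match the new load on its own.
p, t = simulate(5.0)
print(f"Equilibrium power {p:.2f} kW at {t:.1f} K")
```

The reactor settles at the new power level with only a small temperature dip, which is the essence of the passive load following the text describes: no control drum motion is needed.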

43 kWt Kilopower
With the testing of KRUSTY, enough information will be gained in both reactor engineering and fuel element manufacturing to enable the internal heat pipes. Ground testing of these larger reactors will also require expansion of the test area at the Nevada site (to test KRUSTY, a site regulation had to be changed to account for the amount of reactivity being inserted into the reactor).

At this point, the UMo metal fuel used in KRUSTY can’t be used anymore. Issues of critical fuel density, power leveling across a monolithic fuel element, and other constraints mean that metal fuel can’t be used for a larger reactor. Metal fuel is relatively rare in terrestrial reactor designs, oxide fuels being more common. Oxide fuel is also an option for a heat-pipe-cooled reactor, and the result is a very attractive small modular reactor in its own right.

Reactor Size Cross section diagram
Image via DOE/LANL

This is the Megapower concept, being explored by the Department of Defense for forward operating bases, disaster relief, and other missions where electrical power is critical but a fuel supply chain can’t be counted on.

2MWe Megapower
Image via Poston, LANL

Nuclear power is the key to enabling more effective autonomous and crewed exploration, and eventually colonization, of the solar system. Kilopower is the first in a range of nuclear reactors for electricity production that NASA is looking to deploy on future missions. We will look at the others in the next few posts: the next will be on Fission Surface Power (reading up on that system somewhat delayed this post, as did a family reunion and changing jobs), followed by Project Prometheus, and finally the drive system for the Jupiter Icy Moons Orbiter (JIMO), the final design selected out of Prometheus.

If you have any comments, questions, or corrections, please leave them below.


All sources are linked in-line, but because I’m currently on a free WordPress account I’m not allowed to add code to tweak the blog’s appearance. So I’m also linking the important sources here, to make them easier to find. This isn’t an exhaustive list; if there’s something that should be here but isn’t, let me know in the comments and I’ll add it.

A wonderful resource for those interested in the beginnings of Kilopower is Dr. David Poston’s personal blog, SpaceNuke, mostly written before the DUFF experiment. There’s a lot of insight into the design philosophy behind the reactor, and also into the difficulties of developing nuclear fission systems for in-space use. I can’t recommend it highly enough.

If an image doesn’t have credit, it’s from NASA or the DOE, from one of the sources below.

I’m going to break this up into KRUSTY and Kilopower sections, organized chronologically. The KRUSTY papers tend to be focused more on the reactor physics and hardware testing side, and are a great source for more detailed information about the reactor. The Kilopower papers and presentations are bigger-picture, and focus more on missions and policy.


KRUSTY Experiment Nuclear Design, presentation by Poston et al, Los Alamos NL, July 2015

Kilowatt Reactor Using Stirling TechnologY (KRUSTY) Demonstration: CEDT Phase 1 Preliminary Design Documentation, Sanchez et al, Los Alamos NL, Aug 2015

KRUSTY Design and Modelling, presentation by Poston for KRUSTY Program review, Los Alamos NL, Nov 2016

NCERC Kilowatt Reactor Using Stirling TechnologY (KRUSTY) Experiment Update: March 2017, presentation by Sanchez of LANL, Mar 2017

Electrically Heated Testing of the Kilowatt Reactor Using Stirling TechnologY (KRUSTY) Experiment Using a Depleted Uranium Core, Briggs et al, NASA GRC, July 2017


Design and Testing of Small Nuclear Reactors for Defense and Space Applications, presentation for American Nuclear Society Trinity Section, McClure and Poston, Los Alamos NL Sept 2013

Development of NASA’s Small Fission Power System for Science and Human Exploration, conference paper by Gibson et al, NASA GRC, for AIAA Propulsion and Energy Forum, July 2014

Nuclear Systems Kilopower Overview for Game Changing Development Program, presentation by Palac et al, NASA GRC, Feb 2016

Space Nuclear Reactor Development, McClure et al, Los Alamos NL technical report, Mar 2017

Space Nuclear Reactor Engineering, Nuclear Engineering Capability Review, presentation by Poston, Los Alamos NL, Mar 2017

NASA’s Kilopower Reactor Development and the Path to Higher Power Missions, Gibson et al NASA GRC, conference paper for IEEE Aerospace Conference, Mar 2017

Other related sources

Summary of Test Results From a 1 kWe-Class Free-Piston Stirling Power Convertor Integrated With a Pumped NaK Loop, Briggs et al NASA GRC, Dec 2010

High Temperature Water Heat Pipes for Kilopower System, Beard et al, Advanced Cooling Technologies, conference paper IECEC 2017

Considerations for Launching a Nuclear Fission Reactor for Space-Based Missions, Voss et al, Global Nuclear Network Analysis LLC, conference paper for AIAA SPACE Forum, Sept 2017

Channel and Webpage Updates

I hope to have these posts out somewhat more regularly; this last month has been a busy one for me, between switching to a new (overnight) position at work, a week-long family vacation (in Hawaii!), and getting the resources together for web pages that will hopefully be up soon.

The YouTube channel has been on hold for about a month due to these delays, but that should make editing the scripts slightly easier. Work on Blender is proceeding as well, with a 3D model of the Boeing Integrated Manned Interplanetary Spacecraft (Von Braun’s Battlestar Galactica to Mars) pretty much roughed in for the channel, as well as various 3D flow diagrams for the ROVER A6, and LARS proceeding apace. I hope to start releasing videos early next year, but I was hoping to have two or three done by now anyway, so we’ll see.

Again, thanks to everyone that has helped on this so far. Your support and expert assistance are what has made this project possible so far, and will continue to make this possible in the future!
All rights copyright Stuart Graham 2016

Development and Testing Fission Power Systems

DUFF, Father of KRUSTY, Kilopower part 1

Hello, and welcome to the Beyond NERVA blog! This was originally meant to be a post about KRUSTY, the small nuclear reactor that has been in the news a lot recently, but it ended up exploding into a four-part series.

In this post, the first in our series, we’re looking at what came before this design, particularly the DUFF experiment, and the organization within NASA currently tasked with developing the next generation of in-space nuclear power systems: the Kilopower program within NASA’s Game Changing Development program.

The second post will be an overview of the Kilowatt Reactor Using Stirling TechnologY, or KRUSTY, including a more detailed look at its design, and at the testing that’s been done and is planned for that particular reactor. The more I learn about KRUSTY, the more I like this reactor design: it’s simple, it’s safe, it has very few moving parts, and the parts that do move are well-tested to be maintenance-free in space for years, if not decades.

The third post will look at the Fission Surface Power reactor, or FSP, a step bigger than KRUSTY, but still rated well under a megawatt of electric power (100 kilowatts is the largest this design gets on the books). This reactor was a recent (2010-2012) design with some interesting features, but due to design and testing limitations (more on that in the blog post), it hasn’t been tested as thoroughly as KRUSTY. This is the point where significant additional funding for new programs will be required to prepare a flight system, and where NASA’s longer-range planning begins.

Finally, in our fourth post, we will look at NASA’s planned reactor for nuclear electric propulsion: Project Prometheus, for the Jupiter Icy Moons Orbiter spacecraft, or JIMO, a nuclear-electric spacecraft that we’ll look at later in this blog series. This design went through some serious revisions over its lifetime, and we’ll look at what those were and why. This reactor starts at about 100 kilowatts of electric power (kWe, as opposed to kilowatts of thermal power, kWt) and goes into the multi-megawatt range. It is the largest reactor NASA is currently exploring for electricity production, and could be used to send unmanned, and possibly even manned, missions across the solar system.

What we Have (Or Used to Have)


In-space nuclear reactors are far from a new thing. The US launched the SNAP-10A reactor in 1965 from Vandenberg AFB atop an Atlas-Agena D rocket. The launch was successful, but after 43 days in orbit a voltage regulator on the Agena failed, shutting the reactor down after it had reached a maximum power of 590 W. It’s still in orbit.

The USSR had a much more successful program, launching a series of Radar Ocean Reconnaissance Satellites, or RORSATs, powered by small nuclear reactors. These were the US-A spacecraft (ironic, that). The active radars that were the primary instrument on board were (and are) major power hogs, so putting a nuclear reactor on one made sense. They flew a total of 33 of these satellites, with two different reactor types on board. The first was known by at least three names: the BUK or BES-5 in the USSR, and TOPAZ in the West. It was used more than 30 times, and employed a thermoelectric power conversion system to minimize the number of moving parts needed for the reactor. The second, often called TOPAZ-II in the West, is the ENISY reactor, a more modern thermionic design. We’ll look at these more in depth on the channel. The Russians ended up losing one of these satellites, KOSMOS 954, in the tundra of Canada, the only time a space-based nuclear reactor has hit land upon reentry, although one other (KOSMOS 1402) fell into the sea. The rest remain in graveyard orbits, where they can be kept out of the way for the indefinite future.

US-A spacecraft diagram via Sven Grahn

We’re going to look at these reactors in some of the first videos on the channel, as well as one of the most interesting ironies of NASA’s Moon base plans in the 1990s, but I want to mention one piece of little-known space history. Shortly after the Iron Curtain fell, the US bought a number of these reactors from the Russians, and an exchange program was set up. The US bought the reactors as a “turn-key” setup, meaning they came with all the necessary testing apparatus. Because of the design of these reactors, they could be fully tested without fissile fuel, which meant they could be tested and flight-qualified without becoming radioactive. According to the TSET Facility Construction report, the US saved $400,000 (1992 dollars) compared to developing and building an equivalent test stand of its own. The three-year program ended, and for a while in the mid-to-late 90s, NASA had plans on the books for a Moon base powered by a Soviet-built nuclear reactor.

These reactors were small. This may seem odd to people used to terrestrial nuclear power plants, which have much higher power outputs (on the order of hundreds of megawatts to gigawatts). However, for any mission that has been designed in depth (and has a chance of being funded), that much electrical power is completely unnecessary, and a larger reactor also means launching more reactor mass. These systems will be some of the first we look at on the channel, so I’ll be brief here. Larger reactors for in-space use (mainly propulsion) have been designed, and we’ll look at those on the channel as well, but none has ever been built and tested.

International Space Station via NASA

Let’s look at the power levels of these reactors a bit more: what needs that much power in space? The International Space Station is the biggest power hog we have in space right now, using about 75-90 kW on average, with a maximum of about 120 kW. To put this in perspective, this orbiting research laboratory requires as much power as roughly 55 American homes.

Now, a space station’s operational power load is much less than, say, a high-powered electric propulsion system, but it’s not that different from the likely power requirements for an early Moon or Mars base. Sure, there are things that we could do if we had more power, but those things also (usually) require more payload mass as well, which is already one of our biggest constraints. So why put money into a program that you don’t need right now, especially considering the incredibly constrained budgets that NASA and the DOE’s in-space power programs operate under?

Kilopower: NASA’s Latest Nuclear Power Reactor Program

At NASA, the Space Technology Mission Directorate (STMD) is where components and systems for space missions are evaluated and tested, and where mission hardware comes together. Within the STMD, the Game Changing Development Program (GCD) focuses on enabling missions that are currently out of reach, but not technically impossible, across the whole range of aerospace mission and component types. In turn, GCD is the home of Kilopower, NASA’s current program to develop advanced in-space nuclear reactors. A range of reactor sizes and power outputs has been proposed for the program, from 1-10 kWe reactors for unmanned probes and space missions, to 10-100 kWe reactors for surface operations, to 100 kWe+ reactors for nuclear electric propulsion.

This blog post focuses on the work leading up to this point, so I won’t belabor it here. DUFF was an excellent test, and set the stage for more work to come.

The next big goal for the program is to build a small reactor: the Kilowatt Reactor Using Stirling TechnologY, or KRUSTY. This simple, small reactor uses enriched 235U metal fuel, will have a power rating of 1-10 kWe, and will primarily be used for in-space missions, although surface operations are another possible application.

This isn’t the only reactor within NASA’s current range of possible fission power sources that they’re interested in:

GCD Reactor Family

This range of reactors has been described in papers before, with testing regimes carried out to a greater or lesser extent. The Fission Surface Power reactor was studied between 2010 and 2012 (the date of the final report), and provides 40-100 kWe for surface operations. The largest of the reactors, rated from 130 kWe up, is the design from Project Prometheus, which was chosen for the Jupiter Icy Moons Orbiter, or JIMO, a nuclear electric spacecraft that we’ll be looking at later on the channel.

Pre-Existing Conditions, The Budgetary Straitjacket, and the Challenges of Nuclear Reactor Design

NASA, as an organization, is tasked with doing a lot with very little money (see the FY 2017 budget request). It also has very little flexibility in how the vast majority of that money is spent. While NASA has shown interest in nuclear power in space since the earliest days of the Space Age, funding levels have fluctuated wildly over the years, with a distinct and depressing downward trend.

The thing is, nuclear power isn’t NASA’s bailiwick. In the early days, this was the realm of the Atomic Energy Commission (AEC); now it’s the domain of the Department of Energy (DOE). Both organizations have (or had) a general interest in advancing nuclear power in space, and on an individual level the idea is met with optimism and enthusiasm. However, as with NASA, the DOE does not control where its funding goes, and other critical missions (like worldwide nuclear weapons non-proliferation work) hold a lot more of the official attention, and budget.

Unfortunately, even the DOE isn’t allowed to just go and build a test reactor. Since the dissolution of the AEC, oversight and regulation of nuclear reactors in the US has been the responsibility of the Nuclear Regulatory Commission (NRC). Even with a design that is largely finalized, extensively modeled, and the result of hundreds of man-hours of careful review, it’s highly unlikely that the reactor itself will ever be built and tested as a whole. Instead, each component and subsystem is tested separately, in various test stands, to incredibly stringent standards, to determine its properties and operating behavior. This makes using already-designed components incredibly attractive: for any new design, the parts that aren’t off-the-shelf either have to fit into existing test equipment, or a new test rig has to be built, and these rigs are rarely cheap. The conditions inside a nuclear reactor are extreme, and the restrictions the NRC places on building an experimental reactor make “just build it” impossible. Every component has to be tested beforehand, and the push to use existing test systems is strong (and often the only way to stay within budget).

Many of these test systems are unique, and while between the DOE and NASA there is a wide variety of test stands, each has its own limitations. Mainly, each test stand looks at one aspect of a system, so a single system will need to be tested on multiple stands. Thermal testing is separated from nuclear testing, except for fuel element tests, which have their own test stands. The two most commonly used are the Nuclear Thermal Rocket Element Environment Simulator, or NTREES (which we’ll look at more when we cover NASA’s LEU NTR design), and the Compact Fuel Element Environment Test, or CFEET. Both are geared toward the extreme conditions experienced by fuel elements in a reactor running as hot as possible, as in a nuclear thermal rocket, rather than the steady baseload conditions of an electrical power system, so we’ll look at them more then.

The other side to this is that, over the decades, many individual components have been tested, and can be used in a new design. The place that this is most frequently applied is in fuel element design. To make a new type of fuel element, every step in the manufacture and testing has to be modeled extensively before the first test article is made, and a full testing campaign can require dozens of these highly precise and difficult to manufacture objects to ensure all steps of manufacture and use are well understood.

Various national labs develop fuel elements for different purposes. One of these facilities is Y-12, outside Oak Ridge, Tennessee. Originally built to enrich uranium for nuclear weapons, like most DOE facilities it also does research and development work in a variety of areas, and the development and testing of specialized fuel elements has been a focus since its earliest days. Other facilities, such as Idaho National Laboratory and Los Alamos National Laboratory, design experimental fuel elements as well, and on the channel we’ll see some of their work come up in various designs. Y-12 is particularly relevant in this post because it developed the fuel used in KRUSTY: the uranium-molybdenum (UMo) metal fuel. This is a well-understood fuel form, enriched to 93% (by weight) 235U. While this is far higher enrichment than is seen in terrestrial reactor fuel, highly enriched uranium has been the norm for the majority of the history of in-space nuclear reactors. It isn’t strictly necessary, as we’ll see in my next blog series on the low-enriched uranium nuclear thermal rocket currently being developed for NASA.

KRUSTY LANL 3D Model 3 Stirling
KRUSTY Stirling convertors via LANL

The development and testing of an in-space nuclear power system goes far beyond the fuel elements, or the core design we’ll look at more in the next post. The power conversion system is another component that has been extensively researched and is available commercially off-the-shelf: in this case, the Stirling power convertor built by Sunpower, Inc. Its use here draws on an earlier design integrating it with a larger reactor, the Fission Surface Power (FSP) reactor we’ll cover in the third post in this series, but one close enough that much of the work done there applies to this system as well.

These are just two of hundreds of needed components, and while off-the-shelf hardware is easier on the budget in many cases, it also adds engineering constraints that the final system has to work within (a good, if imperfect, case in point is the Space Launch System). The demanding nature of spaceflight, and the unforgiving nature of reactor physics, means these systems must be highly reliable, and the regulatory restrictions make the process very difficult and expensive. New testing facilities would be ideal, but for the missions on the books, the current facilities can support a system that’s just big enough, at a far lower price than building new ones.


DUFF, Father of KRUSTY

It’s often easy to overlook the origins of a newly popular piece of technology, and NASA’s new nuclear reactor is no exception. Remember, NASA doesn’t (or wouldn’t) build these reactors; the Department of Energy does. The DOE side of things has been largely ignored in the mainstream media, which focus on the flashier and more PR-conscious NASA. On the DOE side, this reactor is the direct result of an experiment carried out in 2012, called the Demonstration Using Flattop Fissions, or DUFF.

DUFF Bounds Closeup LANL 2012-11-26
Dr. John Bounds with DUFF via NNSA

This experiment was conducted by Dr. John Bounds of Los Alamos National Labs, at the Device Assembly Facility (DAF) at the Nevada National Security Site (formerly the Nevada Test Site). From the weapons tests, reactor tests, and US nuclear thermal rocket programs of the 1950s through 1970s that it’s best known for, to today’s quieter work with actinide irradiation studies, this site has been a keystone of American nuclear technology development since the early days of the Atomic Era.

In a conference paper for Nuclear and Emerging Technologies for Space 2013 (NETS 2013), Gibson et al. point out that there’s a gap between the designs that have been used for radioisotope thermoelectric generators (RTGs) and the designs that have been proposed for fission power systems (FPS). This gap also happens to coincide with the available testing facilities, and the ideas they proposed were not new, just untested. To reduce risk, the team chose a single, fundamental issue with the proposed reactor to demonstrate, and worked with what they had on hand for everything else. DUFF was focused on a novel (for nuclear reactors) heat rejection system: the heat pipe.

Heat_Pipe_Mechanism

The heat pipe is a simple, attractive way of getting rid of waste heat. By using thermal differences and phase changes in a working fluid, a heat pipe can transport a large amount of heat in a simple system with no moving parts. At the hot end, the working fluid evaporates after coming in contact with the hot casing material, and flows to the cold end. At the cold end of the heat pipe, it condenses onto a wicking material, which then carries the working fluid back to the hot end, completing the cycle. Most of the energy transport actually happens in those phase changes, rather than in the movement of the working fluid itself. The working fluid, wick, and casing materials are defined by the operating environment, especially the amount of heat being transported and the temperature of that waste heat. While heat pipes may seem like a peculiar means of rejecting heat, and they’d never been used in a nuclear reactor before this test, most of us use these devices every day: the CPU of the computer you’re reading this on is cooled by a heat pipe (even if it’s a phone), as are a huge number of other electronic components. They can operate from very cold temperatures (such as a helium heat pipe) to very high temperatures (such as an aluminum one). For KRUSTY, the plan is to use sodium as the working fluid.
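To get a feel for why the phase change dominates, here’s a back-of-the-envelope sketch in Python. The latent-heat figures are approximate handbook values, and the mass flow is an invented number chosen to land near DUFF’s ~400 W, not a measured one:

```python
# Heat carried by a heat pipe is dominated by the latent heat of the
# working fluid's phase change: Q = m_dot * h_fg. The sensible heat in
# the moving vapor is small by comparison.

H_FG = {               # latent heat of vaporization, J/kg (approximate)
    "water": 2.26e6,
    "sodium": 4.26e6,  # one reason high-temperature designs favor sodium
}

def heat_transported(fluid: str, mass_flow_kg_s: float) -> float:
    """Watts carried by evaporation at the hot end and condensation at the cold end."""
    return H_FG[fluid] * mass_flow_kg_s

# A mere ~0.18 g/s of water vapor moves roughly DUFF's 400 W:
print(f"{heat_transported('water', 0.18e-3):.0f} W")  # 407 W
```

A tiny mass flow moving hundreds of watts, with no pump and no moving parts, is exactly why heat pipes are so attractive for space reactors.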

By replacing pumped coolant with heat pipes, you’re changing more than how the reactor is cooled – you’re also changing how the neutrons in the reactor behave. Any structure in a reactor is going to affect the behavior, or dynamics, of the reactor system. After all, slowing neutrons down isn’t the only way to affect their behavior: they can be reflected as well. Depending on their chemical and isotopic composition, most materials will either reflect (at a lower energy level), slow, or capture neutrons. This is something that can be modeled with incredible accuracy, but you still have to have a real-world test article to show that it works. That test article was DUFF.
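The self-limiting behavior this feedback produces (which mattered during the DUFF runs, as we’ll see below) can be sketched with a toy linear model. The coefficient and temperatures here are invented for illustration, not DUFF or KRUSTY parameters:

```python
# Toy model of a negative temperature coefficient of reactivity: as the
# core heats up, net reactivity drops linearly, so power self-limits.
# ALPHA and T_REF are invented illustrative numbers.

ALPHA = -0.1   # cents of reactivity lost per degree C of core temperature rise
T_REF = 20.0   # cold (reference) core temperature, C

def net_reactivity(inserted_cents: float, core_temp_c: float) -> float:
    """Inserted reactivity minus linear temperature feedback."""
    return inserted_cents + ALPHA * (core_temp_c - T_REF)

print(net_reactivity(20.0, 20.0))    # 20.0 cents when the core is cold
print(net_reactivity(20.0, 220.0))   # 0.0 -- a 200 C rise has cancelled it
```

In a real core the feedback comes from thermal expansion and Doppler broadening and isn’t perfectly linear, but the sign is what matters: hotter means less reactive, so a power excursion chokes itself off.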

DUFF used an existing test rig for the criticality portion of the experiments. Several were available at the facility, but each required compromises. The chosen test rig was the Flattop criticality test rig, a workhorse of nuclear testing.

Flattop overhead cutaway 2 Gibson 2013
Flattop Elevation Gibson 2013

Dr. Dennis Beller, a research professor in nuclear engineering at the University of Nevada, Las Vegas, posted this about the choice:

Flattop is one of four old [ed note: built in 1951], much used critical assemblies that were moved from the Criticality Experiments Facility (CEF) at LANL Technical Area 18 to the Nevada National Security Site’s National Critical Experiments Research Center (NCERC) several years ago (at the time of the move it was NTS and CEF, both later renamed). These assemblies (others are Godiva 4, Comet, and Planet) are used for criticality benchmark experiments, cross section measurements, DOE/NNSA’s hands-on criticality safety training courses (I was a recent student), and a variety of other training and research projects. Flattop, which is a uranium-reflected highly enriched (233U, HEU, Pu, or other) sphere, is unique in that it can be operated super critical to produce an internal temperature of about 300 C (not quite what one would expect in a power reactor “prototype”). Flattop also has a hole through its center that permits insertion of experiments or other actinides, in this case a heat pipe that was built specifically for this test (I don’t believe it’s a space power prototype either, but someone from NASA or LANL might disagree [ed note: it’s not a prototypic test article, just a proof of concept]). In addition, the purpose of the heat pipe is not to cool the reactor (LANL’s words, not this authors) although it does remove a tiny amount of the fission power, it is to transfer energy to the Sterling engine so it will produce electric power.

DUFF was a perfect example of the kind of compromises that are taken in an in-space nuclear development campaign: a test article, using different materials than the flight article would (page 9), to demonstrate that the principles of operation were sound, and that nuclear testing could be done more affordably under the current regulatory regime. To get good data, you need a well-characterized system like Dr. Beller describes above, and it’s easier on the budget as a rule to design your test article to the testbed than the other way around. The other advantage to using an existing criticality test rig is that they are usually very heavily studied and very well understood, so that the effects of the particular test equipment can be more easily isolated and studied.

DUFF Heat Pipe cross-section Gibson 2013

DUFF wasn’t focused on the reactor, remember, it was focused on the heat pipe. Because the power output from Flattop is so low (700 W, estimated max temp 300 C, steady-state operation at 200 C), they weren’t able to use a sodium heat pipe, as the final reactor would, because sodium evaporates at too high a temperature. Instead, they looked at options with lower boiling points, something that limited their choices greatly. After testing two candidates, they settled on water as the working fluid (Dowtherm A was the other option considered, but it wasn’t able to transfer enough heat). After selecting the working fluid, the casing and wick needed to be chosen as well. Following testing at NASA’s Glenn Research Center, the best option was determined to be a sintered nickel wick (200-mesh) with a 316L stainless steel casing, although other mesh sizes and casing materials were tried. Since this wasn’t going to be a flight article, the fact that it was made of steel and nickel didn’t matter: the test stand didn’t care about every ounce of weight like a spacecraft would.

DUFF Heat Pipe Elevation, inches Gibson 2013

The final part of the system is the Stirling convertors, and once again Dr. Bounds and his team used an existing piece of equipment to both increase the certainty of the measurements and decrease the cost of the test. This was a challenge for a number of reasons. While Stirling conversion systems had been researched for in-space use before, the vast majority were much higher-power units, requiring a minimum of 200 C hot-end temperatures to operate. This was still 50 C higher than the worst-case hot-end scenario for Flattop, so those systems weren’t an option. Instead, they went with one of the only options available, Buzz convertors. By cooling the cold end down to -50 C, there was enough temperature difference to produce power (although definitely not net-positive electrical power!). As Dr. Marc Gibson noted,

Although the Buzz convertors do not represent the state of the art in Stirling design and performance, they were affordable, available, and compatible with the DUFF test constraints, making them the best choice for this proof-of-concept test.

DUFF Test setup McClure 2013
DUFF Test Stand via LANL

DUFF passed with flying colors. It proved that a heat pipe waste heat rejection system could be used in a reactor, and also demonstrated the flexibility of the researchers and designers involved, who worked within very tight budget and scheduling constraints imposed across multiple DOE facilities. For more info on the challenges leading up to DUFF, I recommend reading through the presentation. The challenges it describes in critical assembly testing are enlightening, and the presentation and paper are my main sources of information about the tests.

Two test runs were made, on Sept. 13th and Sept. 18th, 2012. In the test on the 13th, the reactor power was raised to 2 kWt and held there for about 5 minutes. After reactivity was increased, the Internet connection for the thermal data collection system went out, leaving only the pedestal temperature data available (this is a much lower temperature, possibly the reflector temperature). From here, they decided to fly blind, relying on information from the power conversion system and their models to complete as much testing as possible within the allotted time. Thanks to pre-test work, it was known that the reactor would have a negative temperature coefficient of reactivity (i.e. the hotter the reactor got, the less neutronically active it would be), so this wasn’t a safety concern. Limited data collection is a persistent problem in all areas of science, and in astronuclear engineering it has been consistent enough to be ingrained into the researchers’ mindset: some data is better than none. Two minutes later (7 minutes into the test) the heat pipe activated, and more data flowed in. At this point the heat pipe was carrying about 400 W. Over the next ten minutes, the core temperature increased, heating the Stirling engine’s hot end to about 200 C. The heat being drawn away also kept the core cooler, which in turn added more reactivity to the core due to the negative temperature coefficient.

Seventeen minutes in, the Stirling was kick-started (when the hot end was at 225 C), producing 24 We. Thermal transfer from the hot end of the Stirling changed the temperatures of the various components significantly over the next minute, until an equilibrium was established and observed, with output leveling off at 18 We. One minute later the reactor was scrammed, and the Stirling engine continued to draw off the residual heat from the various system components. Once the hot end of the Stirling hit ~120 C, it stalled. Four hours later, the team learned that the computer with which contact had been lost was still intact, so the issue was in communications, not hardware.
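To put that 24 We in context, it’s worth checking it against the Carnot limit for the quoted temperatures. This is a rough sketch only: the ~400 W heat pipe throughput is from earlier in the run, so the comparison is indicative rather than exact.

```python
# Carnot limit for any heat engine: eta = 1 - T_cold / T_hot, with
# temperatures in kelvin. Hot end ~225 C at kick-start, cold end
# chilled to about -50 C, per the DUFF first run.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

print(f"Carnot limit: {carnot_efficiency(225.0, -50.0):.0%}")  # 55%
print(f"Observed:     {24 / 400:.0%}")                         # 6%
```

The gap between ~6% and the 55% ideal is no surprise: the Buzz convertors were a proof-of-concept unit run far off any optimized design point, and, as noted above, the chilled cold end meant the electrical output wasn’t net-positive anyway.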

Experimental data 2 Sept 13 McClure 2013

After some repairs and adjustments, a second test run was done on Sept. 18th. This was a slower, lower, and longer test than the first, with some other differences as well. Not only were they going to verify the results from the first run (hopefully with full instrumentation this time), but they were also going to stop and restart the Stirling engine during the test to see the resulting changes in core reactivity and thermal profiles. This is important in a number of ways, but the most important may be that it allows better predictions of how a different core would react to the Stirlings being shut off for maintenance or mission requirements.

After a 9 minute rise to power, the heat pipe activated and the entire system started to approach thermal equilibrium, which was reached about 13 minutes later at ~160 C. At this point (22 minutes in), both fission power and system temperature continued to rise at about 5 C a minute, while the Stirling remained off. A half hour into the test, the technician turned on the Stirling engine, and heat began to be removed from the system. When they started the engine, the hot end measured 180 C and electric output was 13 W. Within a minute, the hot head had cooled rapidly, and power output held steady at ~7 W. At the same time, the cooler temperatures increased reactivity, with power output (and temperature) increasing to 11 W.

Four minutes later (35 minutes into the test) a final reactor power increase was ordered (a reactivity insertion), bringing peak fission power to ~5.5 kWt. Over the next five minutes, the negative temperature coefficient of the reactor kicked in, and the system reached a new equilibrium at about 3 kWt. At this point, the technician changed to a high stroke on the Stirling engine (similar to shifting gears in a car, it changes the torque on the Stirling engine), increasing the electrical output (from 11 to 17 W) and removing more energy from the reactor system. Three minutes later, the technician stopped the engine, and the temperature rapidly increased from 185 C to 225 C. Two minutes after that, he restarted the engine, reconfirming the results from the first critical run. Forty-six minutes into the test, the reactor was scrammed, and the entire system decayed toward thermal equilibrium. The Stirling continued to draw power for about 5 minutes, stalling out when the temperature reached ~115 C. With one final gasp of the Stirling 56 minutes after the start of the test, the cooldown period continued through normal radiation of heat energy.
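The way the 5.5 kWt insertion settled toward ~3 kWt can be echoed with a toy lumped-parameter model. All the constants below are invented to reproduce the rough shape of that final phase; they are not measured DUFF parameters:

```python
# Toy lumped model: after a reactivity insertion, a negative temperature
# coefficient pulls fission power down as the core heats, until power
# generation balances heat removal. Invented constants, chosen so a
# 5.5 kW peak settles near 3 kW -- NOT measured DUFF values.

ALPHA = -125.0           # W of fission power lost per degree C of core rise
H = 15.0                 # W/C, total heat removal (heat pipe + losses)
C = 5000.0               # J/C, lumped core heat capacity
T0, P0 = 200.0, 5500.0   # core temp (C) and power (W) just after insertion
T_ENV = 20.0             # ambient, C

def simulate(seconds: int, dt: float = 1.0):
    t = T0
    for _ in range(int(seconds / dt)):
        p = P0 + ALPHA * (t - T0)   # feedback-limited fission power
        q = H * (t - T_ENV)         # heat carried away
        t += dt * (p - q) / C       # core warms until p == q
    return t, P0 + ALPHA * (t - T0)

temp, power = simulate(600)                # ten simulated minutes
print(f"{temp:.0f} C, {power:.0f} W")      # settles near 220 C, ~3 kW
```

Real reactor kinetics involve delayed neutrons and nonlinear feedback, but even this crude model shows the key behavior the test demonstrated: the system finds a stable equilibrium on its own, without operator intervention.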

Experimental data 2 Sept 18 McClure 2013

Many firsts were demonstrated with this test, some technical and some organizational. One thing that surprised me was that this was the first reactor system developed by Los Alamos to produce electricity. Other organizational notes (which make incredibly depressing reading, to be honest) include: the first nuclear space power demonstration since the founding of the DOE (on August 4, 1977), and the first power system operated at the NCERC. More hopeful firsts include the first heat pipe power reactor (of any size), and the first reactor system to use a Stirling convertor. Further successes look not to DUFF, but to its successor, KRUSTY.

As with many advances in nuclear power, DUFF went largely unnoticed by those outside the nuclear engineering community.

An Interlude into the Humor of Nuclear Engineers

Homer DUFF LANL logo
From NETS 2013 presentation, McClure LANL

DUFF and KRUSTY are far from the first, or most significant, nuclear experiments that bore the names of cartoon characters (although cartoon beers may be a first). Since the earliest days of nuclear research, lighthearted names have been attached to various testing programs. This, I believe, is partly due to an attempt to lighten the mood for a test that may take years to prepare for but be over in an instant, and partly due to the fact that almost no-one reads the reports!! (Yes, that did deserve an italicized double exclamation point; the amount of information available compared to the number of interested people that have read it continues to astound me!)

Reading through the list of test explosions that the AEC (predecessor to the DOE) conducted, you find names like “Danny Boy” (3/62), “Chinchilla” (2–3/62), “Dormouse Prime” (4/62) (and “Ferret Prime” in 2/63), “Gazook” (3/73), and many different bird and fish names from around the world. Some of these were randomly selected by the AEC out of a list of words, but some were named by the scientists and test engineers themselves.

This isn’t limited to just the US, either. At Hokkaido University there’s a program called PIKACHU, for instance.

I hope to be able to write a blog post on just this in the future, but for now… the Simpsons are popular in the nuclear community, nuclear engineers have a sense of humor, and yes, there’s going to be a nuclear reactor named after a cartoon clown, begat by a cartoon beer.

Continue to Part II: KRUSTY, First in a New Breed of Reactors