
SNAP-50: The Last of the SNAP Reactors

Hello, and welcome to Beyond NERVA, for our first blog post of the year! Today, we reach the end of the reactor portion of the SNAP program. A combination of the holidays and personal circumstances prevented me from finishing this post as early as I would have liked to, but it’s finally here! Check the end of the blog post for information on an upcoming blog format change. [Author’s note: somehow the references section didn’t attach to the original post, that issue is now corrected, and I apologize, references are everything in as technical a field as this.]

The SNAP-50 was the last, and most powerful, of the SNAP series of reactors, and had a very different start when compared to the other three reactors that we’ve looked at. A fifth reactor, SNAP-4, also underwent some testing, but was meant for undersea applications for the Navy. The SNAP-50 reactor started life in the Aircraft Nuclear Propulsion program for the US Air Force, and ended its life with NASA, as a power plant for the future modular space station that NASA was planning before the budget cuts of the mid to late 1970s took hold.

Because it came from a different program originally, it also uses different technology than the reactors we’ve looked at on the blog so far: uranium nitride fuel, and higher-temperature, lithium coolant made this reactor a very different beast than the other reactors in SNAP. However, these changes also allowed for a more powerful reactor, and a less massive power plant overall, thanks to the advantages of the higher-temperature design. It was also the first major project to move the space reactor development process away from SNAP-2/10A legacy designs.

The SNAP-50 would permanently alter the way that astronuclear reactors were designed, and would change the course of in-space reactor development for over 20 years. By the time of its cancellation in 1973, it had approached flight readiness to the point that funding and time allowed, but changes in launch vehicle configuration rang the death knell of the SNAP-50.

The Birth of the SNAP-50

Mockup of SNAP-50, image DOE

Up until now, the SNAP program had focused on a particular subset of nuclear reactor designs. They were all fueled with uranium-zirconium hydride fuel (within a small range of uranium content, all HEU), cooled with NaK-78, and fed either mercury Rankine generators or thermoelectric power conversion systems. This had a lot of advantages for the program: fuel element development improvements for one reactor could be implemented in all of them, challenges in one reactor system that weren’t present in another allowed for distinct data points to figure out what was going on, and the engineers and reactor developers were able to look at each others’ work for ideas on how to improve reliability, efficiency, and other design questions.

Tory IIA reactor inlet end, image DOE

However, there was another program going on at about the same time which had a very different purpose, but similar enough design constraints that it could be very useful for an in-space fission power plant: the Aircraft Nuclear Propulsion program (ANP), which was primarily run out of Oak Ridge National Laboratory. Perhaps the most famous part of the ANP program was the series of direct cycle ramjets for Project PLUTO: the TORY series. These ramjets were nuclear fission engines using the atmosphere itself as the working fluid. There were significant challenges to this approach, because the clad for the fuel elements could not be allowed to fail: if it did, the fission products from the fuel elements would be released as something virtually identical to nuclear fallout, differing only in how it was generated. The fuel elements themselves would also be heavily eroded by the hot air moving through the reactor (which turned out to be a much smaller problem than initially anticipated). The advantage of this system, though, was that it was simple, and could be made relatively lightweight.

Another option was what was known as the semi-indirect cycle, where the reactor would heat a working fluid in a closed loop, which would then heat the air through a heat exchanger built into the engine pod. While this was marginally safer from a fission product release point of view, there were a number of issues with the design. The reactor would have to run at a higher temperature than the direct cycle, because there are always losses whenever you transfer heat from one working fluid to another, and the increased mass of the system also required greater thrust to maintain the desired flight characteristics. The primary coolant loop would become irradiated when going through the reactor, leading to potential irradiation of the air as it passed through the heat exchanger. Another concern was that the heat exchanger could fail, leading to the working fluid (usually a liquid metal) being exposed at high temperature to the superheated air, where it could easily explode. Finally, if a clad failure occurred in the fuel elements, fission products could migrate into the working fluid, making the primary loop even more radioactive, increasing the irradiation of the air as it passed through the engine – and releasing fission products into the atmosphere if the heat exchanger failed.

The alternative to these approaches was an indirect cycle, where the reactor heated a working fluid in a closed loop, transferred this to another working fluid, which then heated the air. The main difference between these systems is that, rather than having the possibly radioactive primary coolant come in close proximity with the air and therefore transferring ionizing radiation, there is an additional coolant loop to minimize this concern, at the cost of both mass and thermal efficiency. This setup allowed for far greater assurances that the air passing through the engine would not be irradiated, because the irradiation of the secondary coolant loop would be so low as to be functionally nonexistent. However, if the semi-indirect cycle was more massive, this indirect cycle would be the heaviest of all of the designs, meaning far higher power outputs and temperatures were needed in order to get the necessary thrust-to-weight ratios for the aircraft. Nevertheless, from the point of view of the people responsible for the ANP program, this was the most attractive design for a crewed aircraft.

Both SNAP and ANP needed many of the same things out of a nuclear reactor: it had to be compact, it had to be lightweight, it had to have a VERY high power density, and it needed to be able to operate virtually maintenance-free in a variety of high-power conditions. These requirements are in stark contrast to terrestrial, stationary nuclear reactors, which can afford heavy, voluminous construction and can thus benefit from low power density. As a general rule of thumb, an increase in power density will also intensify the engineering, materials, and maintenance challenges. The fact that the ANP program needed high outlet temperatures to run a jet engine also meant the potential for a large thermal gradient across a power conversion system – meaning that high-conversion-efficiency electrical generation was possible. That led SNAP program leaders to see about adapting an aircraft system into a spacecraft system.

Image DOE

The selected design was under development at the Connecticut Advanced Nuclear Engine Laboratory (CANEL) in Middletown, Connecticut, with Pratt and Whitney as the prime contractor. Originally part of the indirect-cycle program, the challenges of heat exchanger design, adequate thrust, and a host of other problems continually set back the indirect cycle, and when the ANP program was canceled in 1961, Pratt and Whitney no longer had a customer for their reactor, despite having done extensive testing and even fabricated novel alloys to deal with certain challenges that their reactor design presented. This led them to look for another customer, and they discovered that both NASA and the US Air Force were interested in high-power-density, high-temperature reactors for in-space use. Out of that shared interest, the SNAP-50 was born.

PWAR-20 cross-section and elevation, image DOE

This reactor was an evolution of the PWAR series of test reactors. Three reactors (the PWAR-2, -4, and -8, for 2, 4, and 8 MW of thermal power per reactor core) had already been run for initial design of an aircraft reactor, focused on testing not only the critical geometry of the reactor, but the materials needed to contain its unique (at the time) coolant: liquid lithium. Lithium has an excellent specific heat capacity – the amount of energy that can be stored as heat per unit mass for a given temperature rise: about 3.558 kJ/kg-°C, compared to the 1.124 kJ/kg-°C of NaK-78, the coolant of the other SNAP reactors. This means that less coolant would be needed to transport the energy away from the reactor and into the engine in the ANP program, and for SNAP it meant that less working fluid mass would be needed to carry heat from the reactor to the power conversion system. The facts that Li is much less dense than NaK, and that less of it would be needed, make lithium a highly attractive option for an astronuclear reactor design. However, this design decision also created a need for novel concepts for how to contain liquid lithium. Even compared to NaK, lithium is highly toxic and highly corrosive to most materials, and during the ANP program this led Pratt and Whitney to investigate novel alloy compositions for their containment structures. We’ll look at just what they did later.
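To put those specific heat numbers in perspective, here is a quick sketch of how much coolant mass flow each fluid would need to carry a given thermal power. The 2200 kWt output and the ~100°F core temperature rise are taken from the SNAP-50 figures discussed later in this post; the calculation is illustrative, not from the design reports.

```python
# Rough comparison of coolant mass flow for lithium vs NaK-78.
# Specific heats are the values quoted above; power and temperature
# rise come from the SNAP-50 numbers discussed later in the post.

CP_LI = 3558.0    # J/(kg*C), liquid lithium
CP_NAK = 1124.0   # J/(kg*C), NaK-78

def mass_flow(q_thermal_w, cp, delta_t_c):
    """Coolant mass flow (kg/s) needed to carry q_thermal_w with a delta_t_c rise."""
    return q_thermal_w / (cp * delta_t_c)

Q = 2.2e6                 # 2200 kWt, the 300 kWe core's thermal output
DT = (2000 - 1900) / 1.8  # 100 F rise across the core, ~55.6 C

flow_li = mass_flow(Q, CP_LI, DT)
flow_nak = mass_flow(Q, CP_NAK, DT)
print(f"Li:  {flow_li:.1f} kg/s")   # ~11 kg/s
print(f"NaK: {flow_nak:.1f} kg/s")  # ~3.2x more NaK by mass
```

Since lithium is also far less dense than NaK, the advantage in pumped coolant mass is even larger than this ratio suggests.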

SNAP-50: Designing the Reactor Core

This reactor ended up using a form of fuel element that we have yet to look at on this blog: uranium nitride, UN. While both UC (you can read more about carbide fuels here) and UN were considered at the beginning of the program, the reactor designers settled on UN because of a unique capacity that this fuel form offers: it has the highest fissile fuel density of any type of fuel element. This is offset by the fact that UN isn’t the most heat-tolerant of fuel elements, requiring a lower core operating temperature. Other options were considered as well, including CERMET fuels using oxides, carbides, and nitrides suspended in a tungsten metal matrix to increase thermal conductivity and reduce the temperature of the fissile fuel itself. The choice between UN, with its higher mass efficiency (due to its higher fissile density), and uranium carbide (UC), with the highest operating temperature of any solid fuel element, was a difficult one, and a lot of fuel element testing occurred at CANEL before a decision was reached. After a lot of study, it was determined that UN in a tungsten CERMET was the best balance of high fissile fuel density, high thermal conductivity, and the ability to manage low fuel burnup over the course of the reactor’s life.

From SNAP-50/SPUR Design Summary

Perhaps the most important design consideration for the fuel elements, after the type of fuel, was how dense the fuel would be, and how to increase that density if the final design called for it. While higher-density fuel is, generally speaking, better for specific power, it was discovered that the denser the fuel, the less burnup would be possible before the fuel failed due to fission product gas buildup within the fuel itself. Initial calculations showed that UN at 80% of its theoretical density had effectively unlimited fuel burnup potential, since most of the gasses could diffuse out of the fuel element; however, once the fuel reached 95% density, burnup was limited to 1%. Additional work was done to determine that this low burnup was in fact not a project killer for the 10,000 hour reactor lifetime specified by NASA, and the program moved ahead.
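As a sanity check on why a 1% burnup cap wasn’t a project killer, here’s a back-of-the-envelope sketch. It assumes roughly 200 MeV of recoverable energy per fission (a standard textbook figure, not from the SNAP-50 reports) and treats burnup as the fraction of uranium atoms fissioned; all numbers are illustrative.

```python
# Back-of-the-envelope check that a 1% burnup cap supports a
# 10,000 hour core. Assumes ~200 MeV recoverable per fission;
# illustrative numbers, not from the SNAP-50 design reports.

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
E_FISSION = 200 * MEV_TO_J      # J per fission, rough
U_MOLAR = 0.235                 # kg/mol, U-235

# Energy released per kg of uranium if every atom fissioned:
e_full_burn = (1.0 / U_MOLAR) * AVOGADRO * E_FISSION   # ~8.2e13 J/kg

def min_fuel_mass(power_w, hours, burnup_fraction):
    """Minimum uranium mass (kg) to deliver power_w for hours at this burnup."""
    energy_needed = power_w * hours * 3600.0
    return energy_needed / (burnup_fraction * e_full_burn)

# 400 kWt for 10,000 hours at 1% burnup:
m = min_fuel_mass(400e3, 10_000, 0.01)
print(f"~{m:.0f} kg of uranium")  # roughly 18 kg -- easily fits a dense UN core
```

A dense, HEU-fueled UN core the size quoted later in the post carries far more uranium than that, so the 1% limit leaves comfortable margin.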

These fuel pellets needed a cladding material, as most fuel does, and this led to some additional unique materials challenges. With the decision to use lithium coolant, and the need for both elasticity and strength in the fuel element cladding (to deal with both structural loads and fuel swelling), it was necessary to do extensive experimentation on the metal that would be used for the clad. Eventually, a carburized columbium-zirconium alloy (Cb-1Zr-0.6C) was decided on as a barrier between the Cb-Zr alloy of the clad (which resisted the high-temperature lithium erosion on the pressure vessel side of the clad) and the UN-W CERMET fuel (which would react strongly with the clad without the carburized layer).

These decisions led to an interesting reactor design, but not necessarily a unique one from a non-materials point of view. The fuel would be formed into high-density pellets, which would then be loaded into a clad, with a spring to keep the fuel at the bottom (spacecraft end) of the reactor. The gap between the top of the fuel pellets and the top of the clad was for the release of fission product gasses produced during operation of the reactor. These rods would be loaded in a hexagonal prism pattern into a larger collection of fuel elements, called a can. Seven of these cans, placed side by side (one regular hexagon, surrounded by six slightly truncated hexagons), would form the fueled portion of the reactor core. Shims of beryllium would shape the core into a cylinder, which was surrounded by a pressure vessel and lateral reflectors. Six poison-backed control drums mounted within the reflector would rotate to provide reactor control. Should the reactor need to be scrammed, a spring mechanism would return all the drums to a position with the neutron poison facing the reactor, stopping fission from occurring.

SNAP-50 flow diagram, image DOE

The lithium, after being heated to 2000°F (1093°C), would feed into a potassium boiler before returning to the core at an inlet temperature of 1900°F (1038°C). From the boiler, the potassium vapor, at 1850°F (1010°C), would enter a Rankine turbine to produce electricity. The potassium vapor would cool to 1118°F (603°C) in the process and return – condensed to its liquid form – to the boiler, closing the loop. Several secondary coolant loops were used in this reactor: the main one served the neutron reflectors, shield actuators, control drums, and other radiation-hardened equipment, and used NaK as a coolant; this coolant was also used as a lubricant for the condensate pump in the potassium system. Another, lower-temperature organic coolant was used for other systems that weren’t in as high a radiation flux. The radiators used to reject heat also used NaK as a working fluid, and were split into primary and secondary radiator arrays. The primary array pulled heat from the condenser, reducing coolant temperature from 1246°F (674°C) to 1096°F (591°C), while the secondary array took the lower-temperature coolant from 730°F (388°C) to 490°F (254°C). The system was designed to operate in both single- and dual-loop configurations, with the second (identical) loop used for high-power operation and to increase redundancy in the power plant.
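Those temperatures set the thermodynamic ceiling on the potassium loop. Here’s a quick sketch of the Carnot limit between the turbine inlet and condenser temperatures quoted above; a real Rankine cycle would, of course, come in well below this bound.

```python
# Thermodynamic headroom in the potassium Rankine loop, using the
# vapor temperatures quoted above. Carnot is an upper bound only.

def f_to_c(t_f):
    """Convert Fahrenheit to Celsius."""
    return (t_f - 32.0) / 1.8

def f_to_k(t_f):
    """Convert Fahrenheit to kelvin."""
    return f_to_c(t_f) + 273.15

t_boiler = f_to_k(1850)     # potassium vapor into the turbine
t_condense = f_to_k(1118)   # vapor leaving the turbine

carnot = 1.0 - t_condense / t_boiler
print(f"Carnot limit: {carnot:.1%}")  # ~32%
```

Even at a realistic fraction of that limit, the cycle handily beats the few-percent efficiency of the thermoelectric systems used on SNAP-10A.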

These design decisions led to a flexible reactor core size, and the ability to adapt to changing requirements from either NASA or the USAF, both of which were continuing to show interest in the SNAP-50 for powering the new, larger space stations that were becoming a major focus of both organizations.

The Power Plant: Getting the Juice Flowing

By 1973, the SNAP 2/10A program had ended, and the SNAP-8/ZrHR program was winding down. These systems simply didn’t provide enough power for the new, larger space station designs that were being envisaged by NASA, and the smaller reactor sizes (the 10B advanced designs that we looked at a couple blog posts back, and the 5 kWe Thermoelectric Reactor) didn’t provide capabilities that were needed at the time. This left the SNAP-50 as the sole reactor design that was practical to take on a range of mission types… but there was a need to have different reactor power outputs, so the program ended up developing two reactor sizes. The first was a 35 kWe reactor design, meant for smaller space stations and lunar bases, although this particular part of the 35 kWe design seems to have never been fully fleshed out. A larger, 300 kWe type was designed for NASA’s proposed modular space station, a project which would eventually evolve into the actual ISS.

Unlike in the SNAP-2 and SNAP-8 programs, the SNAP-50 kept its Rankine turbine design, which used potassium vapor as its working fluid. This meant that the power plant could meet its electrical power output requirements far more easily than it could have with the lower efficiency of thermoelectric conversion systems. The CRU system meant for the SNAP-2 had reached its design requirements for reliability and life by this time, but sadly the overall program had been canceled, so there was no reactor to pair with this ingenious design (and its mercury working fluid is so toxic that testing would be nearly impossible on Earth). The boiler, pumps, and radiators for the secondary loop were tested past the 10,000 hour design lifetime of the power plant, and all major complications discovered during the testing process were addressed, proving that the power conversion system was ready for the next stage of testing in a flight configuration.

One concern that was studied in depth was the secondary coolant loop’s tendency to become irradiated in the neutron flux coming off the reactor. Potassium has a propensity for absorbing neutrons, and in particular 41K (about 6% of unrefined K) can capture a neutron and become 42K. This is a problem because 42K decays with energetic gamma emissions, so anywhere the secondary coolant goes needs gamma radiation shielding to prevent the radiation from reaching the crew. This limited where the power conversion system could be mounted: it had to stay inside the gamma shielding of the temporary, reactor-mounted shield. The compact nature of both the reactor core and the power conversion system meant that this was a reasonably small concern, but still one worthy of in-depth examination by the design team.
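To get a feel for the timescale involved: 42K has a half-life of about 12.4 hours (a standard nuclear data value, not a figure from the SNAP reports), so the induced activity in the secondary loop dies away within a few days of shutdown.

```python
# How quickly the activated potassium loop cools off radiologically.
# K-42 half-life is ~12.36 hours (standard nuclear data, approximate).
import math

HALF_LIFE_H = 12.36  # hours

def fraction_remaining(hours):
    """Fraction of K-42 activity left after the given number of hours."""
    return math.exp(-math.log(2) * hours / HALF_LIFE_H)

for t in (12, 24, 72):
    print(f"after {t:3d} h: {fraction_remaining(t):.3f}")
```

This is why the activation problem is mostly an operational-shielding concern rather than a long-term disposal one: the short-lived 42K is gone long before the core itself would be jettisoned.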

The power conversion system and auxiliary equipment, including the actuators for the control drums, power conditioning equipment, and other necessary hardware, were cooled by a third coolant loop, which used an organic coolant (basically the oil needed to lubricate the moving parts) running through its own set of pumps and radiators. This tertiary loop was kept isolated from the vast majority of the radiation flux coming off the reactor, and as such irradiation damage to the coolant/lubricant wasn’t a major concern.

Some Will Stay, Some Will Go: Mounting SNAP-50 To A Space Station

SNAP50 mounted to early NASA modular space station concept, image DOE

Each design used a 4-pi (fully enclosing) shield with a secondary shadow shield pointing toward the space station in order to reduce radiation exposure for crews of spacecraft rendezvousing with or undocking from the station. This primary shield was made of a layer of beryllium to reflect neutrons back into the core, and boron carbide (B4C, enriched in boron-10) to absorb the neutrons that weren’t reflected back. These structures needed to be cooled to ensure that the shield wouldn’t degrade, so a NaK shield coolant system (using technology adapted from the SNAP-8 program) was used to keep the shield at an acceptable temperature.

The shadow shield was built in two parts: the entire structure would be launched at the same time for the initial reactor installation for the space station, and then when the reactor needed to be replaced only a portion of the shield would be jettisoned with the reactor. The remainder, as well as the radiators for the reactor’s various coolant systems, would be kept mounted to the space station in order to reduce the amount of mass that needed to be launched for the station resupply. The shadow shield was made out of layers of tungsten and LiH, for gamma and neutron shielding respectively.

Image DOE

When it came time to replace the core of the reactor at the end of its 10,000 hour design life (which was a serious constraint on the UN fuels that they were working with due to fuel burnup issues), everything from the separation plane back would be jettisoned. This could theoretically have been dragged to a graveyard orbit by an automated mission, but the more likely scenario at the time would have been to leave it in a slowly degrading orbit to give the majority of the short-lived isotopes time to decay, and then design it to burn up in the atmosphere at a high enough altitude that diffusion would dilute the impact of any radioisotopes from the reactor. This was, of course, before the problems that the USSR ran into with their US-A program, which eliminated this lower cost decommissioning option.

Image DOE

After the old reactor core was discarded, the new core, together with the small forward shield and power conversion system, could be put in place using a combination of off-the-shelf hardware, which at the time was expected to be common enough: either Titan-III or Saturn 1B rockets, with appropriate upper stages to handle the docking procedure with the space station. The reactor would then be attached to the radiator, the docking would be completed, and within 8 hours the reactor would reach steady-state operations for another 10,000 hours of normal use. The longest that the station would be running on backup power would be four days. Unfortunately, information on the exact docking mechanism used is thin, so the details on how they planned this stage are still somewhat hazy, but there’s nothing preventing this from being done.

A number of secondary systems, including accumulators, pumps, and other equipment are mounted along with the radiator in the permanent section of the power supply installation. Many other systems, especially anything that has been exposed to a large radiation flux or high temperatures during operation (LiH, the primary shielding material, loses hydrogen through outgassing at a known rate depending on temperature, and can almost be said to have a half-life), will be separated with the core, but everything that was practicable to leave in place was kept.

This basic design principle for reloadable (which in astronuclear often just means “replaceable core”) reactors will be revisited time and again for orbital installations. Variations on the concept abound, although surface power units seem to favor “abandon in place” far more. In the case of large future installations, it’s not unreasonable to suspect that refueling of a reactor core would be possible, but at this point in astronuclear mission utilization, even having this level of reusability was an impressive feat.

35 kWe SNAP-50: The Starter Model

In the 1960s, having 35 kWe of power for a space station was considered significant enough to supply the vast majority of mission needs. Because of this, a smaller version of the SNAP-50 was designed to fit this mission design niche. While the initial power plant would require the use of a Saturn 1B to launch it into orbit, the replacement reactors could be launched on either an Atlas-Centaur or Titan IIIA-Centaur launch vehicle. This was billed as a low cost option, as a proof of concept for the far larger – and at this point, far less fully tested – 300 kWe version to come.

NASA was still thinking of very large space stations at this time. The baseline crew requirements alone were incredible: 24-36 crew, with rotations lasting from 3 months to a year, and a station life of five years. While 35 kWe wouldn’t be sufficient for the full station, it would be an attractive option. Other programs had looked at nuclear power plants for space stations as well, like we saw with the Manned Orbiting Laboratory and the Orbital Workshop (later Skylab), and facilities of that size would be good candidates for the 35 kWe system.

The core itself measured 8.3 inches (0.211 m) across and 11.2 inches (0.284 m) long, and used 236 fuel elements arranged into seven fuel element cans within the pressure vessel of the core. Six poison-backed control drums were used for primary reactor control. The core would produce up to 400 kW of thermal power. The pressure vessel, control drums, and all other control and reflective materials together measured just 19.6 inches (0.498 m) by 27.9 inches (0.709 m), and the replaceable portion of the reactor was between four and five feet (1.2–1.5 m) tall, and five and six feet (1.5–1.8 m) across – including shielding.

SNAP-50 powered probe concept, image DOE

This reactor could also have been a good prototype for a nuclear electric probe, a concept that will be revisited later, although there’s little evidence that this path was ever seriously explored. Like many smaller reactor designs, this one did not receive the amount of attention that its larger brother did, but at the time it was considered a good, solid space station power supply.

300 kWe SNAP-50: The Most Powerful Space Reactor to Date

While there were sketches for more powerful reactors than the 300 kWe SNAP-50 variant, these were never developed to any great extent, and certainly not to the point of experimental verification that the SNAP-50 had achieved. The 300 kWe design was considered a good starting point for a possible crewed nuclear electric spacecraft, as well as being able to power a truly huge space station.

The 300 kWe variant of the reactor differed in more than just size from its smaller brother. Despite using the same fuel, clad, and coolant as the 35 kWe system, the 300 kWe system could achieve over four times the fuel burnup of the smaller reactor (1.3% vs 0.32%), and had a higher maximum fuel power density as well, both of which have a huge impact on core lifetime and dynamics. This was partially achieved by making the fuel elements almost half as narrow, and increasing the number of fuel elements to 1093, held in 19 cans within the core. This led to a core that was 10.2 inches (0.259 m) wide and 14.28 inches (0.363 m) long (keeping the same 1:1.4 core geometry between the reactors), and a pressure vessel that was 12 inches (0.305 m) in diameter by 43 inches (1.092 m) in length. It also increased the thermal output of the reactor to 2200 kWt. The number of control drums was increased from six to eight longer drums to fit the longer core, and some rearrangement of lithium pumps and other power conversion equipment occurred within the larger 4-pi shield structure. The entire reactor assembly that would undergo replacement was five to six feet (1.5–1.8 m) high, and six to seven feet (1.8–2.1 m) in diameter.

Lander-based SNAP-50 concept, image DOE

Sadly, even the ambitious NASA space station wasn’t big enough to need even the smaller 35 kWe version of the reactor, much less the 300 kWe variants. Plans had been made for a fleet of nuclear electric tugs that would ferry equipment back and forth to a permanent Moon base, but cancellation of that program occurred at the same time as the death of the moon base itself.

Mass Tradeoffs: Why Nuclear Instead of Solar?

By the middle of the 1960s, photovoltaic solar panels had become efficient and reliable enough for regular use on spacecraft. Because of this, it was a genuine question for the first time whether to go with solar panels or a nuclear reactor, whereas in the 1950s and early 60s nuclear was pretty much the only option. However, solar panels have a downside: drag. Even in orbit there is a very thin atmosphere, so a satellite in a lower orbit has to raise itself regularly or it will burn up in the atmosphere. Another downside comes from MM/OD: micrometeoroids and orbital debris. Since solar panels are large, flat, and pointing at the sun all the time, there’s a greater chance that something will strike one of them, damaging or possibly even destroying it. Managing these two issues is the primary orbital-behavior concern of using solar panels as a power supply, and it determines the majority of the resupply mass needed for a solar powered space station.

Image DOE, from SNAP-50 Design Summary

On the nuclear side, by 1965 there were two power plant options on the table: the SNAP-8 (pre-ZrHR redesign) and the SNAP-50, and solar photovoltaics had developed to the point that they could be deployed in space. Because of this, Pratt and Whitney compared the three systems to determine the mass efficiency of each, not only in initial deployment but also in yearly fueling and tankage requirements. Each of the systems was compared at a 35 kWe power level delivered to the space station in order to allow for a level playing field.

One thing that stands out about the solar system (based on a pair of Lockheed and General Electric studies) is that it’s marginally the lightest of all the systems at launch, but within a year the total system maintenance mass required far outstrips the mass of the nuclear power plants, especially the SNAP-50. This is because the solar panels have a large sail area, which catches the very thin atmosphere at the station’s orbital altitude and drags the station down into the thicker atmosphere, so thrust is needed to re-boost the space station. This is something that has to be done on a regular basis for the ISS. The mass of the fuel, tankage, and structure to allow for this reboost is extensive. Even back in 1965 there were discussions on using electric propulsion for the reboosting of the space station, in order to significantly reduce the mass needed for this procedure. That discussion is still happening casually with the ISS, and Ad Astra still hopes to use VASIMR for this purpose – a concept that’s been floated for the last ten or so years.
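To see why the reboost mass dominates the comparison, here’s a rough sketch of the drag-and-propellant arithmetic. Every number in it (atmospheric density, sail area, drag coefficient, thruster specific impulse) is an illustrative assumption of mine, not a figure from the 1965 studies.

```python
# Rough orbital-drag budget for a large solar array, showing why
# reboost propellant dominates yearly maintenance mass. All inputs
# below are illustrative assumptions, not numbers from the studies.

RHO = 4e-12      # kg/m^3, thin air near ~400 km (varies hugely with solar activity)
V = 7670.0       # m/s, orbital velocity
CD = 2.2         # typical flat-plate drag coefficient
AREA = 500.0     # m^2 of sail area, assumed
ISP = 300.0      # s, chemical reboost thruster, assumed
G0 = 9.81        # m/s^2

drag_n = 0.5 * RHO * V**2 * CD * AREA      # steady drag force, N
impulse_year = drag_n * 365.25 * 86400     # N*s of impulse to cancel per year
propellant_kg = impulse_year / (ISP * G0)  # rocket propellant to supply it

print(f"drag: {drag_n*1000:.2f} mN, propellant: {propellant_kg:.0f} kg/yr")
```

Even a fraction of a newton of steady drag, integrated over a year, demands on the order of a tonne of chemical propellant – which is exactly the kind of yearly mass penalty the Pratt and Whitney comparison found.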

Overall, the mass difference between the SNAP-50 and the optimistic Lockheed proposal of the time was significant: the original deployment was only about 70 lbs (31.75 kg) different, but the yearly maintenance mass requirements would be 5,280 lbs (2395 kg) different – quite a large amount of mass.

Because the SNAP-50 and SNAP-8 don’t have these large sail areas, and the radiators needed can be made aerodynamic enough to greatly reduce drag on the station, the reboost requirements are significantly lower than for the solar panels. The SNAP-50 weighs significantly less than the SNAP-8, and has significantly less surface area, because the reactor operates at a far higher temperature and therefore needs a smaller radiator. Another difference between the reactors is volume: the SNAP-50 is physically smaller than the SNAP-8 because of that same higher temperature, and also because the UN fuel is far more dense than its U-ZrH fueled counterpart.

These reactors were designed to be replaced once a year, with the initial launch being significantly more massive than the follow-up launches, benefitting from the sectioned architecture with a separation plane just behind the small forward section of the shadow shield, as described above. Only the smaller section of shield remained with the reactor when it was separated. The larger, heavier section, along with the radiators, would remain with the space station and serve as the mounting point for the new reactor core and power conversion system, which would be sent to the station on an automated resupply launch.

Solar panels, on the other hand, require both reboost to compensate for drag and equipment to repair or replace the panels, batteries, and associated components as they wear out. This in turn requires a somewhat robust repair capability for ongoing maintenance – a requirement for any large, long-term space station, but the more area there is to be hit by space debris, the more time and mass must be spent on repairs rather than on doing science.

Of course, today solar panels are far lighter, and electric thrusters are far more mature than they were at that time. This, in addition to widespread radiophobia, has made solar power the standard choice for most satellites, and all space stations, to date. However, the savings in overall lifetime mass, and a sail area that is both smaller and more physically robust, remain key advantages for a nuclear powered space station in the future.

The End of an Era: Changing Priorities, Changing Funding

The SNAP-50, even the small 35 kWe version, offered more power, more efficiency, and less mass and volume than the most advanced of SNAP-8’s children: the A-ZrHR [Link]. This was the end of the zirconium hydride fueled reactor era for the Atomic Energy Commission, and while this type of fuel continues to be used in reactors all over the world in TRIGA research and training reactors (a common type of small reactor for colleges and research organizations), its time as the preferred fuel for astronuclear designs was over.

In fact, by the end of the study period, the SNAP-50 was extended to 1.5 MWe in some designs, the most powerful design to be proposed until the 1980s, and one of the most powerful ever proposed… but this ended up going nowhere, as did much of the mission planning surrounding the SNAP program.

At the same time as these higher-powered reactor designs were coming to maturity, funding for both civilian and military space programs virtually disappeared. National priorities, and perceptions of nuclear power, were shifting. Technological advances eliminated many future military crewed missions in favor of uncrewed ones with longer lifetimes, less mass, less cost – and far smaller power requirements. NASA funding began falling under the axe even as we were landing on the Moon for the first time, and from then on funding was very thin on the ground.

The transition from the Atomic Energy Commission to the Department of Energy wasn’t without its hiccups, or reductions in funding, either, and where once every single AEC lab seemed to have its own family of reactor designs, the field narrowed greatly. As we’ll see, even at the start of Star Wars the reactor design was not too different from the SNAP-50.

Finally, the changes in launch system had their impact as well. NASA was heavily investing in the Space Transportation System (the Space Shuttle), which was assumed to be the way that most or all payloads would be launched, so the nuclear reactor had to be able to be flown up – and in some cases returned – by the Shuttle. This placed a whole different set of constraints on the reactor, requiring a large rewrite of the basic design. The follow-on design, the SP-100, used the same UN fuel and Li coolant as the SNAP-50, but was designed to be launched and retrieved by the Shuttle. The fact that the STS never lived up to its promise in launch frequency or cost (and that other launchers remained continuously available) means that this was ultimately a diversion, but at the time it was a serious consideration.

All of this spelled the death of the SNAP-50 program, as well as the end of dedicated research into a single reactor design until 1983, with the SP-100 nuclear reactor system, a reactor we’ll look at another time.

While I would love to go into many of the reactors that were developed up to this time, including heat pipe cooled reactors (SABRE at Los Alamos), thermionic power conversion systems (the 5 kWe Thermionic Reactor), and other ideas, there simply isn’t time to go into them here. As we look at different reactor components they’ll come up, and we’ll mention them there. Some labs were able to continue limited research with the help of NASA and sometimes the Department of Defense or the Defense Nuclear Safety Agency, but the days of big astronuclear programs were fading into the past. Both space and nuclear power would refocus, and then fade in the rankings of budgetary requirements over the years. We will be looking at these reactors more as time goes on, in our new “Forgotten Reactors” column (more on that below).

The Blog is Changing!

With the new year, I’ve been thinking a lot about the format of both the website and the blog, and where I hope to go in the next year. I’ve had several organizational projects on the back burner, and some of them are going to be started here soon. The biggest part is going to be the relationship between the blog and the website, and what I write more about where.

Expect another blog post shortly (it’s already written, just not edited yet) about our plans for the next year!

I’ve got big plans for Beyond NERVA this year, and there are a LOT of things that are slowly getting started in the background which will greatly improve the quality of the blog and the website, and this is just the start!


References

SNAP-50/SPUR Program Summary, Pratt and Whitney staff, 1964

35 and 300 kWe SNAP-50/SPUR Power Plants for the Manned Orbiting Space Station Application, Pratt and Whitney staff, 1965

Uranium Nitride Fuel Development SNAP-50, Pratt and Whitney staff, 1965

SNAP Program Summary Report, Voss 1984


Timber Wind: America’s Return to Nuclear Thermal Rockets

Hello, and welcome to Beyond NERVA! Today, we’re continuing to look at the pebble bed nuclear thermal rocket (check out the most recent blog post on the origins of the PBR nuclear thermal rocket here)!

Sorry it took so long to get this out… between the huge amount of time it took just to find the minimal references I was able to get my hands on, the ongoing COVID pandemic, and several IRL challenges, this took me far longer than I wanted – but now it’s here!

Today is special because it is covering one of the cult classics of astronuclear engineering, Project Timber Wind, part of the Strategic Defense Initiative (better known colloquially as “Star Wars”). This was the first time since Project Rover that the US put significant resources into developing a nuclear thermal rocket (NTR). For a number of reasons, Timber Wind has a legendary status among people familiar with NTRs, but isn’t well reported on, and a lot of confusion has built up around the project. It’s also interesting in that it was an incredibly (and according to the US Office of the Inspector General, overly) classified program, which means that there’s still a lot we don’t know about this program 30 years later. However, as one of the most requested topics I hear about, I’m looking forward to sharing what I’ve discovered with you… and honestly I’m kinda blown away with this concept.

Timber Wind was an effort to build a second stage for a booster rocket, to replace the second (and sometimes third) stage of anything from an MX ballistic missile to an Atlas or Delta booster. This could be used for a couple of different purposes: it could be used similarly to an advanced upper stage, increasing the payload capacity of the rocket and the range of orbits that the payload could be placed in; alternatively it could be used to accelerate a kinetic kill vehicle (basically a self-guided orbital bullet) to intercept an incoming enemy intercontinental ballistic missile before it deploys its warheads. Both options were explored, with much of the initial funding coming from the second concept, before the kill vehicle concept was dropped and the slightly more traditional upper stage took precedence.

Initially, I planned on covering both Timber Wind and the Space Nuclear Thermal Propulsion program (which it morphed into) in a single post, but the mission requirements, and even architectures, were too different to incorporate into a single blog post. So, this will end up being a two-parter, with this post focusing on the three early mission proposals for the Department of Defense (DOD) and Strategic Defense Initiative Organization (SDIO): a second stage of an ICBM to launch an anti-booster kinetic kill vehicle, an orbital transfer vehicle (basically a fancy, restartable second stage for a booster), and a multi-megawatt orbital nuclear power plant. The next post will cover when the program became more open, testing became more prevalent, and grander plans were laid out – and some key restrictions on operating parameters eliminated the first and third missions on this list.

Ah, Nomenclature, Let’s Deal with That

So, there’s a couple things to get out of the way before we begin.

The first is the name. If you do a Google/Yandex/etc search for “Timber Wind,” you aren’t going to find much compared to “Timberwind,” but from what I’ve seen in official reporting it should be the other way around. The official name of this program is Project Timber Wind (two words), which according to the information I’ve been able to find is not unusual. The anecdotal evidence I have (and if you know more, please feel free to leave a comment below!) is that programs classified Top Secret: Special Access (as this was) had a name assigned by picking two random words via computer, whereas other Top Secret (or Q, or equivalent) programs didn’t necessarily follow this protocol.

However, when I look for information about this program, I constantly see “Timberwind,” not the original “Timber Wind.” I don’t know when this shift happened – it almost never happened in official documentation, even in the post-cancellation reporting, but public reporting somehow always uses the single-word variation. My personal head-canon is that it comes from readers used to digitally written documents transcribing typewritten reports, but that’s all that explanation is – my guess which makes sense to me.

So there’s a disconnect between what most easily accessible sources use (single word) and the official reporting (two words). I’m going to use the original, because the only reason I’ve gotten as far as I have is by being weird about minor details in esoteric reports, so I’m not planning on stopping now (I will tag the single word in the blog, just so people can find this, but that’s as far as I’m going)!

The second is in nuclear reactor geometry definitions.

Reactors with discrete, generally small fuel elements fall into two categories: particle beds and pebble beds. Particles are small, pebbles are big, and where the line falls seems to be fuzzy. In modern contexts, the line seems to fall around the 1 cm diameter mark, although a formal definition has so far eluded me. “Pebble bed” is also the more colloquial term of the two: in common use, a particle bed is a type of pebble bed, but not vice versa.

In this context, the RBR and Timber Wind are both particle bed reactors, and I’ll call them such, but if a source calls the reactor a pebble bed (which many do), I may end up slipping up and using that term.

OK, nomenclature lesson done. Back to the reactor!

Project Timber Wind: Back to the Future

For those in the know, Timber Wind is legendary. This was the first time after Project Rover that the US put its economic and industrial might behind an NTR program. While there had been programs in nuclear electric propulsion (poorly funded, admittedly, and mostly carried through creative accounting in NASA and the DOE), nuclear thermal propulsion had taken a back seat since 1972, when Project Rover’s continued funding was canceled, along with plans for a crewed Mars mission, a crewed base on the Moon, and a whole lot of other dreams that the Apollo generation grew up on.

There was another difference, as well. Timber Wind wasn’t a NASA program. Despite all the conspiracy theories, the assumptions based on the number of astronauts with military service records, and the number of classified government payloads that NASA has handled, it remains a civilian organization, with the goal of peacefully exploring the solar system in an open and transparent manner. The Department of Defense, on the other hand, is a much more secretive organization, and as such many of the design details of this reactor were more highly classified than is typical in astronuclear engineering as they deal with military systems. However, in recent years, many details have become available on this system, which we’ll cover in brief today – and I will be linking not only my direct sources but all the other information I’ve found below.

Also unlike NTR designs since the earliest days of Rover, Timber Wind was meant to act as a rocket stage during booster flight. Most NTR designs are in-space only: the reactor is launched into a stable, “nuclear-safe” (i.e. a long-term stable orbit with minimal collision risk with other satellites and debris) orbit, then after being mated to the spacecraft is brought to criticality and used for in-space orbital transfers, interplanetary trajectories, and the like. (Interesting aside, this program’s successor seems to be the first time that now-common term was used in American literature on the subject.)

Timber Wind was meant to support the Strategic Defense Initiative (SDI), popularly known as Star Wars. Started in 1983, this extensive program was meant to provide a ballistic missile shield, among other things, for the US, and was given a high priority and funding level for a number of programs. One of these programs, the Boost Phase Intercept vehicle, was meant to destroy an intercontinental ballistic missile during the boost phase of its flight using a kinetic impactor which would either be launched from the ground or be pre-deployed in space. A kinetic kill vehicle is basically a set of reaction control thrusters designed to guide a small autonomous spacecraft into its target at high velocity and destroy it. They are typically small, very nimble, and limited only by the sensors and autonomous guidance software available for them.

In order to do this, the NTR would need to function as the second stage of a rocket, meaning that while the engine would be fired only after it had entered the lower reaches of space or the upper reaches of the atmosphere (minimizing the radiation risk from the launch), it would still very much be in a sub-orbital flight path at the time, and would have much higher thrust-to-weight ratio requirements as a result.

The engine that was selected was based on a design by James Powell at Brookhaven National Laboratory (BNL) in the late 1970s. He presented the design to Grumman in 1982, and from there it came to the attention of the Strategic Defense Initiative Organization (SDIO), the organization responsible for all SDI activities.

Haslett 1994

SDIO proceeded to break the program up into three phases:

  • Phase I (November 1987 to September 1989): verify that the particle bed reactor concept would meet the requirements of the upper stage of the Boost Phase Intercept vehicle, including the Preliminary Design Review of both the stage and the whole vehicle (an MX first stage, with the PBR second stage exceeding Earth escape velocity after being ignited outside the atmosphere)
  • Phase II (September 1989-October 1991 under SDIO, October 1991-January 1994 when it was canceled under the US Air Force, scheduled completion 1999): perform all tests to support the ground test of a full PBR NTR system in preparation for a flight test, including fuel testing, final design of the reactor, design and construction of testing facilities, etc. Phase II would have been completed with the successful ground hot fire test of the PBR NTR; however, the program was canceled before the ground test could be conducted.
    • Once the program was transferred to the US Air Force (USAF), the mission envelope expanded from an impactor’s upper stage to a more flexible, on-orbit multi-mission role, requiring a redesign and re-optimization. This is also when NASA became involved in the program.
    • Another change was that the program name shifted from Timber Wind to the Space Nuclear Thermal Propulsion program (SNTP), reflecting both the change in management as well as the change in the mission design requirements.
  • Phase III (never conducted, planned for 2000): Flight test of the SNTP upper stage using an Atlas II launch vehicle to place the NTR into a nuclear-safe orbit. Once on orbit, a number of on-orbit tests would be conducted on the engine, but those were not specified to any degree due to the relatively early cancellation of the program.

While the program offered promise, many factors combined to ensure it would not be completed. First, the hot fire testing facilities required (two were proposed, one at San Tan and one at the Nevada National Security Site) would have been incredibly expensive to construct; second, the Space Exploration Initiative was heavily criticized for reasons of cost (a common problem with early 90’s programs); and third, the Clinton administration cut many nuclear research programs across all US federal departments in a very short period of time (the Integral Fast Reactor at Argonne National Laboratory was another program cut at about the same time).

The program was transferred into a combined USAF and NASA program in 1991, and ended in 1994 under those auspices, having cleared many hurdles along the way. It remains an attractive design, one which has become a benchmark for pebble bed nuclear thermal rockets, and a favorite of the astronuclear community for speculating about what would be possible with this incredible engine.

To understand why it was so attractive, we need to go back to the beginning, in the late 1970s at Brookhaven National Laboratory in the aftermath of the Rotating Fluidized Bed Reactor (RBR, covered in our last post here).

The Beginning of Timber Wind

When we last left particle bed NTRs, the Rotating Fluidized Bed Reactor program had made a lot of progress on many of the fundamental challenges with the concept of a particle bed reactor, but still faced many more. However, the team, including Dr. James Powell, was still very enthusiastic about the promise it offered – and conscious of the limitations of the system.

Dr. Powell continued to search for funding for a particle bed reactor (PBR) NTR program, and interest in NTR was growing again in both industry and government circles, but there were no major programs and funding was scarce. In 1982, eight years after the conclusion of the RBR, he had a meeting with executives in the Grumman Corporation, where he made a pitch for the PBR NTR concept. They were interested in the promise of higher specific impulse and greater thrust to weight ratios compared to what had become the legacy NERVA architecture, but there wasn’t really a currently funded niche for the project. However, they remained interested enough to start building a team of contractors willing to work on the concept, in case the US government revived its NTR program. The companies included other major aerospace companies (such as Garrett Corp and Aerojet) and nuclear contractors (such as Babcock and Wilcox), as well as subcontractors for many components.

At the same time, they tried to sell the concept of astronuclear PBR designs to potentially interested organizations: a 1985 briefing to the Air Force Space Division on the possibility of using the PBR as a boost phase interceptor was an early, but major, presentation that would end up being a major part of the initial Timber Wind architecture, and the next year the Air Force Astronautics Laboratory issued a design study contract for a PBR-based Orbital Transfer Vehicle (OTV, a kind of advanced upper stage for an already-existing booster). While neither of these contracts was big enough to fund a complete development program, they WERE enough money to continue to advance the design of the PBR, which by now was showing two distinct applications: the boost phase interceptor and the OTV. There was also a brief flirtation with using the PBR architecture from Timber Wind as a nuclear electric power source, which we’ll examine as well, but this was never particularly well focused or funded, so it remains a footnote in the program.

Reactor Geometry

From Atomic Power in Space, INL 2015

Timber Wind was a static particle bed reactor, in the general form of a cylinder 50 cm long by 50 cm in diameter, using 19 fuel elements to heat the propellant in a folded flow path. Each fuel element was roughly cylindrical with a 6.4 cm diameter, consisting of a cold frit (a perforated cylinder) made of stainless steel and a hot frit made of zirconium carbide (ZrC; rhenium (Re) cladding would also have met thermal and reactivity requirements) coated carbon-carbon composite, which held a total of 125 kg (15 liters) of 500 micron diameter uranium/zirconium carbide fuel particles, each clad in two layers of different carbon compositions followed by ZrC cladding. These would be held in place through mechanical means, rather than centrifugal force as in the RBR, reducing the mass of the system at the (materially quite significant) cost of developing a hot frit to mechanically contain the fuel. This is something we’ll cover more in depth in the next post.
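
As a sanity check on those figures, we can back out what they imply: 125 kg of particles occupying 15 liters gives the bulk density of the bed, and an assumed random packing fraction of ~60% (my assumption, not a program figure) gives the particle material density and a rough particle count:

```python
import math

# Sanity check of the particle bed figures quoted above: 125 kg of
# 500-micron fuel particles occupying 15 liters across 19 elements.
# The 60% packing fraction is a typical random-packing assumption.
bed_volume_cm3 = 15_000.0   # 15 liters
bed_mass_g = 125_000.0      # 125 kg
packing = 0.60              # assumed sphere packing fraction

bulk_density = bed_mass_g / bed_volume_cm3   # g/cm^3 of the packed bed
particle_density = bulk_density / packing    # g/cm^3 of particle material

r_cm = 0.05 / 2                              # 500 um diameter -> 0.025 cm radius
particle_vol = (4 / 3) * math.pi * r_cm**3
n_particles = bed_volume_cm3 * packing / particle_vol

print(f"bulk bed density:     {bulk_density:.1f} g/cm^3")
print(f"material density:     {particle_density:.1f} g/cm^3")
print(f"rough particle count: ~{n_particles:.1e}")
```

The implied particle material density comes out around 14 g/cm^3, consistent with a dense uranium/zirconium carbide fuel, so the quoted mass and volume hang together.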

From Atomic Power in Space, INL 2015

The propellant would then pass into a truncated-cone central void, becoming wider from the spacecraft end to the nozzle end. This sizing is called orificing. An interesting challenge with nuclear reactors is that the distribution of energy generation changes based on location within the reactor, called radial/axial power peaking (something that occurs in individual fuel elements, both in isolation and in terms of their location in the core – part of why refueling a nuclear reactor is an incredibly complex process). In this case it was dealt with in a number of ways, one of the primary ones being individually changing the orificing of each fuel element to accommodate its power generation and propellant flow rate.

Along these lines, another advantage of this type of core is the ability to precisely control the amount of fissile fuel in each fuel element along the length of the reactor, and along the radius of the fuel element. Since the fuel particles are so small, and each batch is a small-batch manufacturing process (even fueling a hundred of these reactors would only take 1500 liters of particle volume, with the fissile component being a small percentage of that), a variety of fuel loading options were inherently available, and adjustments to power distribution were reasonably easy to achieve from reactor to reactor. This power leveling homogenizes power distribution in some reactors, and increases local power in other, more specialized reactors (like some types of NTRs), but here an even power distribution along the length of the fuel element was desired. Power leveling is done in virtually every fuel element in every reactor, but it is a difficult and complex process with large fuel elements due to the need to change how much uranium is in each portion of the fuel element. With a particle bed reactor, on the other hand, the uranium content doesn’t need to vary within each individual fuel particle; instead, fueled and unfueled particles can be mixed in specific regions of the fuel element to achieve the desired power balance within the element. There was actually a region of unfueled particles in the last cm of the particle bed in each fuel element to maximize the efficiency of power deposition into the propellant, and the level of enrichment of the 235U fuel was varied from 70% to 93.5% throughout the fueled portions. This resulted in an incredibly flat power profile, with a ratio of only 1.01:1 from the peak power density to the average power density.
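
To see why mixing fueled and unfueled particles makes power leveling so tractable, here's a toy one-dimensional sketch: if the local fuel loading is varied inversely with an assumed cosine-shaped flux, the resulting power profile comes out flat by construction. All the numbers are purely illustrative, not Timber Wind data:

```python
import math

# Toy illustration of axial power flattening by varying fuel loading
# (e.g. the fueled-particle fraction) along a fuel element. The cosine
# flux shape and all numbers are illustrative assumptions.
N = 20    # axial zones along the fuel element
H = 1.2   # extrapolated height factor (arbitrary units)

# Assumed chopped-cosine axial flux shape, peaked at the element center.
flux = [math.cos(math.pi * ((i + 0.5) / N - 0.5) / H) for i in range(N)]

# Load each zone inversely to the local flux; local power is then
# proportional to flux * loading, i.e. flat by construction.
loading = [1.0 / f for f in flux]
power = [f * l for f, l in zip(flux, loading)]

peak_to_avg = max(power) / (sum(power) / len(power))
print(f"peak-to-average power: {peak_to_avg:.3f}")
```

A real core can't hit exactly 1.0 (loading is quantized by particle mixing and enrichment steps, and the flux shape shifts with burnup and control state), which is why the reported 1.01:1 figure is so impressive.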

Since the propellant would pass from the outside of each fuel element to the inside, cooling the reactor was far easier, and lower-mass (or higher efficiency) options for components such as the moderator became available. This is a benefit of what’s called a folded-flow propellant path, something that we’ve discussed before in some depth in our post on Dumbo, the first folded flow NTR concept [insert link]. In short, instead of heating the propellant as it passes down the length of the reactor as in Rover, a folded flow injects the cold propellant laterally into the fuel element, heats it in a very short distance, and ejects it through a central void in the fuel element. This has the advantage of keeping the vast majority of the reactor very cool, eliminating many of the thermal structural problems that Rover experienced, at the cost of more complex gasdynamic behavior. This also allowed lighter-weight materials to be used in the construction, such as aluminum structural members and pressure vessel, to further reduce the mass of the reactor.

Interestingly, many of these lower-mass options, such as a 7LiH moderator, were never explored, since the mass of the reactor came in at only about 0.6 tons – a very small number compared to the 10 ton payload – so it just wasn’t seen as a big enough issue to continue working on at that point.

Finally, because of the short (~1 hr) operating time of the reactor, radiation problems were minimized. With the reactor shielded only by propellant, tankage, and the structures of the NTR itself, it was estimated that the NOTV would subject its payload to a total gamma dose of 100 Gy and a neutron fluence of less than 10^14 n/cm^2. Obviously, reducing this for a crewed mission would be necessary, but depending on the robotic mission payload, additional shielding may not be necessary. The residual radiation would also be minimal due to the short burn time, although if the reactor were reused this would grow over time.

In 1987, the estimated cost per unit (not including development and testing) was about $4 million, a surprisingly low number, due to the ease of construction, low thermal stresses requiring fewer exotic materials and solutions, and low uranium load requirements.

This design would continue to evolve throughout Timber Wind and into SNTP as mission requirements changed (this description is based on a 1987 paper linked below), and we’ll look at the final design in the next post.

For now, let’s move on to how this reactor would be used.

Nuclear Thermal Kinetic Kill Vehicle

The true break for the project came in the same year: 1987. This is when the SDIO picked the Brookhaven (and now Grumman) concept as their best option for a nuclear-enhanced booster for their proposed ground deployed boost phase interceptor.

I don’t do nuclear weapons coverage – in fact, that’s a large part of why I’ve never covered systems like Pluto here – but it is something that I’ve gained some knowledge of through osmosis, through interactions with many intelligent and well-educated people on social media and in real life… but this time I’m going to make a slight exception for strategic ballistic missile shield technology, because an NTR powered booster is… extremely rare. I can think of four American proposals that continued to be pursued after the 1950s, one (apocryphal) Soviet design from the early 1950s, one modern Chinese concept, and that’s it! I get asked about it relatively frequently, and my answer is basically always the same: unless something significant changes, it’s not a great idea, but in certain contexts it may work. I leave it up to the reader to decide if this is a good context. (The list I can think of is the Reactor In-Flight Test, or RIFT, which was the first major casualty of Rover/NERVA cutbacks; Timber Wind; and for private proposals the Liberty Ship nuclear lightbulb booster and the Nuclear Thermal Turbo Rocket single stage to orbit concept.)

So, the idea behind boost stage interception is that it targets an intercontinental ballistic missile and destroys the vehicle while it’s still gaining velocity – the earlier the interception that can destroy the vehicle, the better. There were many ideas on how to do this, including high powered lasers, but the simplest idea (in theory, not in execution) was the kinetic impactor: basically a self-guided projectile would hit the very thin fuel or oxidizer tanks of the ICBM, and… boom, no more ICBM. This was especially attractive since, by this time, missiles could carry over a dozen warheads, and this would take care of all of them at once, rather than a terminal phase interceptor, which would have to deal with each warhead individually.

The general idea behind Timber Wind was that a three-stage weapon would be used to deliver a boost-phase kinetic kill vehicle. The original first stage was based on the LGM-118 Peacekeeper (“MX,” or Missile – Experimental) first stage, which had just deployed two years earlier. This solid fueled ICBM first stage normally used a 500,000 lbf (2.2 MN) SR118 solid rocket motor, although it’s not clear if this engine was modified in any way for Timber Wind. The second stage would be the PBNTR Timber Wind stage, which would achieve Earth escape velocity to prevent reactor re-entry, and the third stage was the kinetic kill vehicle (which I have not been able to find information about).

Here’s a recent Lockheed Martin KKV undergoing testing, so you can get an idea of what this “bullet” looks and behaves like:

Needless to say, this would be a very interesting launch profile, and one that I have not seen detailed anywhere online. It would also be incredibly challenging to

  1. detect the launch of an ICBM;
  2. counter-launch even as rapid-fire-capable a missile as a Peacekeeper;
  3. provide sufficient guidance to the missile in real-time to guide the entire stack to interception;
  4. go through three staging events (two of which were greater than Earth escape velocity!);
  5. guide the kinetic kill vehicle to the target with sufficient maneuvering capability to intercept the target;
  6. and finally have a reasonably high chance of mission success, which required both the reactor to go flying off into a heliocentric orbit and have the kinetic kill vehicle impact the target booster

all before the second (or third) staging event for the target ICBM (i.e. before warhead deployment).

This presents a number of challenges to the designers: thrust-to-weight ratio is key to a booster stage, something that to this point (and even today) NTRs struggle with – mostly due to shielding requirements for the payload.

There simply isn’t a way to mitigate gamma radiation in particular without high atomic number nuclei to absorb and re-emit these high energy photons enough times that a lighter shielding material can then stop or deflect the great-great-great-…-great grand-daughter photons before they reach sensitive payloads, whether crew or electronics. However, electronics are far less sensitive than humans to this sort of irradiation, so right off the bat this program had an advantage over Rover: there weren’t any people on board, so shielding mass could be minimized.

Ed. Note: figuring out shielded T/W ratio in this field is… interesting to say the least. It’s an open question whether reported T/W includes anything but the thrust structure (i.e. no turbopumps and associated hardware, generally called the “power pack” in rocket engineering), much less whether it includes shielding – and the amount of necessary shielding is another complex question which changes with time. Considering the age of many of these studies, and the advances in computational capability to model not only the radiation being emitted from the reactor vessel but the shielding ability of many different materials, every estimate of required shielding must be taken with 2-3 dump trucks of salt!!! Given that shielding is an integral part of the reactor system, this makes pretty much every T/W estimate questionable.

One of the major challenges of the program, apparently, was to ensure that the reactor would not re-enter the atmosphere, meaning that it had to achieve Earth orbit escape velocity, while still able to deploy the third stage kinetic kill vehicle. I’ve been trying to figure out this staging event for a while now, and have come to the conclusion that my orbital mechanics capabilities simply aren’t good enough to assess how difficult this is beyond “exceptionally difficult.”
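
That said, a rough Tsiolkovsky sketch at least bounds the problem: ignoring gravity losses, steering losses, and the drop in escape velocity with altitude, a ~1000 s Isp stage handed off at some staging velocity needs the following mass ratio to reach surface escape velocity. Every input here is my own illustrative assumption, not a program figure:

```python
import math

# Rough Tsiolkovsky bound on the "stage must exceed escape velocity"
# requirement. All inputs (Isp, staging velocity) are illustrative
# assumptions, not Timber Wind program figures. Gravity losses and the
# decrease of escape velocity with altitude are ignored.
g0 = 9.80665         # m/s^2, standard gravity
isp_s = 1000.0       # s, roughly PBR-NTR-class specific impulse
v_escape = 11_186.0  # m/s, escape velocity from Earth's surface
v_staging = 3_000.0  # m/s, assumed velocity handed off by the first stage

dv_needed = v_escape - v_staging
mass_ratio = math.exp(dv_needed / (isp_s * g0))
prop_fraction = 1 - 1 / mass_ratio
print(f"mass ratio: {mass_ratio:.2f}, propellant fraction: {prop_fraction:.0%}")
```

Even this optimistic sketch demands well over half the stage's initial mass in propellant before the kill vehicle's mass is accounted for, which gives a feel for why "exceptionally difficult" is the right characterization.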

However, details of this portion of the program were more highly classified than even the already-highly-classified program, and incredibly few details are available about this portion in specific. We do know that by 1991, the beginning of Phase II of Timber Wind, this portion of the program had been de-emphasized, so apparently the program managers also found it either impractical or no longer necessary, focusing instead on the Nuclear Orbital Transfer Vehicle, or NOTV.

PBR-NOTV: Advanced Upper Stage Flexibility

NOTV Mockup, Powell 1987

At the same time as Timber Wind was gaining steam, the OTV concept was going through a major evolution into the PBR-NOTV (Particle Bed Reactor – Nuclear Orbital Transfer Vehicle). This was another interesting concept, and one which played around with many concepts that are often discussed in the astronuclear field (some related to pebble bed reactors, some related to NTRs), but are almost never realized.

The goals were… modest…

  1. ~1000 s isp
  2. multi-meganewton thrust
  3. ~50% payload mass fraction from LEO to GEO
  4. LEO to GEO transfer time measured in hours, burn time measured in minutes
  5. Customizable propellant usage to change thrust level from same reactor (H2, NH3, and mixtures of the two)

These NOTVs were designed to be the second stage of a booster, similar to the KKV concept we discussed above, but rather than deliver a small kinetic impactor and then leave the cislunar system, these would be designed to place payloads into specific orbits (low Earth orbit, or LEO, mid-Earth orbit, or MEO, and geostationary orbit, GEO, as well as polar and retrograde orbits) using rockets which would normally be far too small to achieve these mission goals. Since the reactor and nozzle were quite small, it was envisioned that a variety of launch vehicles could be used as a first stage, and the tanks for the NTR could be adjusted in size to meet both mission requirements and launch vehicle dimensions. By 1987, there was even discussion of launching it in the Space Shuttle cargo bay, since (until it was taken critical) the level of danger to the crew was negligible due to the lack of oxidizer on board (a major problem facing the Shuttle-launched Centaur with its chemical engine).
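To put goal 4 above in context, the minimum-energy (Hohmann) transfer from LEO to GEO can be sketched with basic two-body orbital mechanics. The 300 km LEO altitude here is my own illustrative choice, not a program number:

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_E = 6.378e6         # Earth equatorial radius, m

r1 = R_E + 300e3      # 300 km LEO (assumed starting orbit)
r2 = 4.2164e7         # GEO radius, m
a = (r1 + r2) / 2     # semi-major axis of the transfer ellipse

# Delta-v for the two impulsive burns (perigee raise, apogee circularize)
v1       = math.sqrt(MU / r1)
v_peri   = math.sqrt(MU * (2 / r1 - 1 / a))
v_apo    = math.sqrt(MU * (2 / r2 - 1 / a))
v2       = math.sqrt(MU / r2)
dv_total = (v_peri - v1) + (v2 - v_apo)

# Transfer time is half the transfer ellipse's orbital period
t_hours = math.pi * math.sqrt(a**3 / MU) / 3600

print(f"delta-v ~ {dv_total:.0f} m/s, coast time ~ {t_hours:.1f} hours")
```

This gives roughly 3.9 km/s of delta-v and a coast of about 5.3 hours, consistent with the “transfer time measured in hours, burn time measured in minutes” goal.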

There were a variety of missions that the NOTV was designed around, including single-use missions which would go to LEO/MEO/GEO, drop off the payload, and then go into a graveyard orbit for disposal, as well as two way space tug missions. The possibility of on-orbit propellant reloading was also discussed, with propellant being stored in an orbiting depot, for longer term missions. While it wasn’t discussed (since there was no military need) the stage could easily have handled interplanetary missions, but those proposals would come only after NASA got involved.

Multiple Propellants: a Novel Solution to Novel Challenges with Novel Complications

In order to achieve these different orbits, and to account for the orbital mechanics of placing satellites into particular orbits, a novel scheme for adjusting both thrust and specific impulse was devised: use a more flexible propellant scheme than cryogenic H2 alone. In this case, the proposal was to use NH3, H2, or a combination of the two. It was observed that the most efficient way to use the two-propellant mode was to burn the NH3 first, followed by the H2, since thrust is more important earlier in the booster flight profile. One paper observed that in a Hohmann transfer, the first part of the perigee burn would use ammonia, followed by the hydrogen to finish the burn (and, I presume, to circularize the orbit at the end).

When pure ammonia was used, the specific impulse of the stage was reduced to only 500 s isp (compared to the 200-300 s for most second stages), but the thrust would double from 10,000 lbs to 20,000 lbs. By the time the gas had passed into the nozzle, it would have effectively completely dissociated into 3H2 + N2.
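That roughly two-to-one isp gap between hydrogen and dissociated ammonia falls straight out of the ideal-rocket relation isp ∝ sqrt(T/M). Here's a quick sketch; the chamber temperature and γ are my own illustrative assumptions, not program numbers:

```python
import math

R = 8.314462618   # universal gas constant, J/(mol*K)
G0 = 9.80665      # standard gravity, m/s^2

def ideal_isp(chamber_temp_k, molar_mass_kg_mol, gamma=1.4):
    """Ideal specific impulse for full expansion to vacuum:
    v_e = sqrt(2*gamma/(gamma-1) * R*T/M)."""
    v_e = math.sqrt(2 * gamma / (gamma - 1) * R * chamber_temp_k / molar_mass_kg_mol)
    return v_e / G0

# 2 NH3 -> N2 + 3 H2: 34 g of propellant becomes 4 mol of gas, so the
# fully dissociated exhaust has a mean molar mass of 8.5 g/mol
m_h2 = 2.016e-3            # kg/mol
m_nh3_products = 8.5e-3    # kg/mol

T = 2800.0  # assumed chamber temperature, K
isp_h2  = ideal_isp(T, m_h2)
isp_nh3 = ideal_isp(T, m_nh3_products)
print(f"H2: {isp_h2:.0f} s, dissociated NH3: {isp_nh3:.0f} s, "
      f"ratio {isp_h2 / isp_nh3:.2f}")
```

With these assumptions hydrogen lands near the ~1000 s class and ammonia products near ~450-500 s, and the ratio is exactly sqrt(8.5/2.016) ≈ 2.05, matching the halving of specific impulse the study reported.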

One of the main advantages of the composite system is that it significantly reduced the propellant volume needed for the NTR, a key consideration for some of the boosters that were being investigated. In both the Shuttle and Titan rockets, center of gravity and NTR+payload length were a concern, as was volume.

Sadly, using NH3 instead of pure H2 also cost a significant chunk (5,000 lb) of the payload advantage over the Centaur, but the overall thrust budget could be maintained.

There are quite a few complications to consider in this design. First, hydrogen behaves very differently from ammonia in a turbopump, not only because of density but also because of compressibility: NH3 is minimally compressible, meaning it can be treated as having a constant volume at a given pressure and temperature while being accelerated by the turbopump, while hydrogen is INCREDIBLY compressible, which accounts for much of the difficulty in designing the power pack (the turbopumps, turbines, and supporting hardware of a rocket) for a hydrogen system. It is likely (although not explicitly stated) that at least two turbopumps and two turbines would be needed for this scheme, meaning increased system mass.

Next are the chemical sensitivities and complications of the different propellants: while NH3 is far less reactive than H2 at the temperatures an NTR operates at, it nevertheless has its own set of behaviors which have to be accounted for, both chemical and thermal. Ammonia is far more opaque to radiation than hydrogen, for instance, so it will pick up a lot more energy directly from the reactor. This in turn changes the reactivity behavior of the core, which might require the reactor to run at a higher power level with NH3 than it would with H2 to maintain reactor equilibrium.

This leads us neatly into the next behavioral difference: NH3 expands less than H2 when heated to the same temperature, but at these higher temperatures the molecule itself will start to dissociate, as the thermal energy in the molecule exceeds the strength of its covalent bonds. This means you’ve now got hydrogen and various partially-dissociated nitrogen compounds, with different masses and densities, to deal with – although this dissociation does decrease the average molecular mass of the propellant, increasing specific impulse, and since none of the constituent atoms are solids, plating material onto your reactor won’t be a concern. These gasdynamic differences have many knock-on effects, though, including engine orificing.

See how the top end of the fuel element’s central void is so much narrower than the bottom? One of the reasons for this is that the propellant is hotter – and therefore less dense – at the bottom (it’s also because as you travel down the fuel element more and more propellant is being added). This is something you see in prismatic fuel elements as well, but it’s not something I’ve seen depicted well anywhere so I don’t have as handy a diagram to use.

This taper is called “orificing,” and is used to balance the propellant pressure within an NTR. It depends on the thermal capacity of the propellant, how much it expands, and how much pressure is desired at that particular portion of the reactor – and the result of these calculations is different for NH3 and H2! So some compromises would have to be reached in this case as well.
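As a toy illustration of why the orificing math differs between the two propellants, here's a constant-velocity, constant-pressure channel model (entirely my own simplification, not the program's actual method). The key input is that NH3's mean molar mass roughly halves as it dissociates on the way down the channel, while H2 stays essentially H2 in this sketch:

```python
R = 8.314462618  # universal gas constant, J/(mol*K)

def flow_area(mdot, temp_k, molar_mass, pressure_pa, velocity):
    """Flow area needed to pass mdot at a fixed gas speed and pressure;
    ideal-gas density rho = P*M/(R*T), so hotter gas needs more area."""
    rho = pressure_pa * molar_mass / (R * temp_k)
    return mdot / (rho * velocity)

P, v, mdot = 3.0e6, 50.0, 0.5   # assumed pressure (Pa), speed (m/s), flow (kg/s)

# (inlet temp/molar mass, outlet temp/molar mass) for each propellant:
# H2 is unchanged; NH3 (17 g/mol) dissociates toward N2 + 3H2 (8.5 g/mol)
h2_states  = [(300.0, 2.016e-3), (2700.0, 2.016e-3)]
nh3_states = [(300.0, 17.03e-3), (2700.0, 8.5e-3)]

for name, states in [("H2", h2_states), ("NH3", nh3_states)]:
    a_in, a_out = (flow_area(mdot, T, M, P, v) for T, M in states)
    print(f"{name}: outlet/inlet area ratio {a_out / a_in:.1f}")
```

In this sketch the hydrogen channel needs about 9x more area at the hot end than the cold end, but the dissociating ammonia needs about 18x, so an orifice profile optimized for one propellant is badly off for the other.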

Finally, the tankage for the propellant is another complex question. The H2 has to be stored at such a lower temperature compared to the NH3 that a common bulkhead between the tanks simply wouldn’t be possible – the hydrogen would freeze the ammonia. This could lead to a failure mode similar to what happened to SpaceX’s Falcon 9 in September 2016, when the helium tanks became super-chilled and then ruptured on the pad leading to the loss of the vehicle. Of course, the details would be different, but the danger is the same. This leads to the necessity for a complex tankage system in addition to the problems with the power pack that we discussed earlier.

All of this leads to a heavier and heavier system, with more compromises overall, and with a variety of reactor architectures being discussed it was time to consolidate the program.

Multi-Megawatt Power: Electricity Generation

While all these studies were going on, other portions of SDIO were also studying astronuclear power systems. The primary electric power system was the SP-100, a multi-hundred kilowatt power supply using technology that had evolved out of the SNAP reactor program in the 60s and 70s. While this program was far along in its development, it was over budget, delayed, and simply couldn’t provide enough power for some of the more ambitious projects within SDIO. Because of this, SDIO (briefly) investigated higher power reactors for their more ambitious – and power-hungry – on-orbit systems.

Power generation was something that was often discussed for pebble bed reactors – the same characteristics that make the concept phenomenal for nuclear thermal rockets also make it a very attractive high temperature gas cooled reactor (HTGR): the high thermal transfer rates reduce the size of the needed reactor, while the pebble bed allows for very high gas flow rates (necessary due to the low thermal capacity of the coolant in an HTGR). To generate power, the gas doesn’t go through a nozzle, but instead through a gas turbine – the Brayton cycle. This has huge efficiency advantages over the thermoelectric generators used in SP-100, meaning that the same size reactor can generate much more electricity – but this would definitely not be the same size reactor!

The team behind Timber Wind (including the BNL, SNL and B&W teams) started discussing both electric generation and bimodal nuclear thermal and nuclear electric reactor geometry as early as 1986, before SDIO picked up the program. Let’s take a look at the two proposals by the team, starting with the bimodal proposal.

Particle Bed BNTR: A Hybrid of a Hybrid

Powell et al 1987

The bimodal NTR (BNTR) system never gained any traction, despite being a potentially valuable addition to the NOTV concept. The likely culprit is the combination of the increased complexity and mass of the BNTR compared to the design that was finally selected for Timber Wind, but it was interesting to the team, and they figured someone else might be interested as well. This design used the same coolant channels for both the propellant and the coolant, which in this case was He. This allowed for similar thermal expansion characteristics and mass flow in the coolant compared to the propellant, while minimizing both corrosion and gas escape challenges.

Horn et al 1987

A total of 37 fuel elements, similar to those used on Timber Wind, were placed in a triangular configuration, surrounded by zirconium hydride moderator, with twelve control rods for reactivity control. Unusually for a power generation system, this concept used a combination of a low power, closed loop coolant system (using He) and a high power, open loop system using H2, which would be vented out into space through a nozzle (this second option was limited to about 30 minutes of high power operation before exhausting the H2 reserves). A pair of He Brayton turbines and a radiator were integrated into the BNTR structure. The low power system was designed to operate for “years at a time,” producing 555 kWe, while the high power system was rated at 100 MWe in either rapid ramp or burst mode.

Horn et al 1987

However, due to the very preliminary nature of this design very few things are completely fleshed out in the only report on the concept that I’ve been able to find. The images, such as they are, are also disappointingly poor in quality, but provide at least a vague idea of the geometry and layout of the reactor:

Horn et al 1987

Multi-Megawatt Steady State and Burst Reactor Proposal

By 1989, two years into Timber Wind, SDIO wanted a powerful nuclear reactor offering two different power configurations: a steady state, 10 MWe reactor with a 1 year full power lifetime, which was also able to provide bursts of up to 500 MW for long enough to power neutral particle beams and free electron lasers. A variety of proposals were made, including an adaptation of Timber Wind’s reactor core, an adaptation of a NERVA A6 type core (the same family of NERVA reactors used in XE-Prime), a Project Pluto-based core, a hybrid NERVA/Pluto core, a larger, pellet fueled reactor, and two rarer types of fuel: a wire core reactor and a foam fueled reactor. This was in addition to both thermionic and metal Rankine power conversion systems.

The designs for a PBR-based reactor, though, were very different from the Timber Wind reactor. While using the same TRISO-type fuel, they bear little resemblance to the initial reactor proposal. Both open and closed cycle concepts were explored.

However, this concept, while considered promising, was passed over in favor of more mature fuel forms in different reactor configurations, namely a NERVA-derived gas reactor.

Finding information about this system is… a major challenge, and one that I’m continuing to work on. This is the best summary I’ve been able to assemble after over a week of searching for source material which, as far as I can tell, is either still classified or has never been digitized – so, as unsatisfying a summary as this is, I’m going to leave it here for now.

When I come back to nuclear electric concepts, we’ll return to this study. I’ve got… words… about it, but at the present moment it’s not something I’m comfortable commenting on (within my very limited expertise).

Phase I Experiments

The initial portion of Timber Wind, Phase I, wasn’t just a paper study. Due to the lack of experience with PBR reactors, fuel elements, and integrating them into an NTR, a pair of experiments were run to verify that this architecture was actually workable, with more experiments being devised for Phase II.

Sandia NL ACRR, image DOE

The first of these tests was PIPE (Pulse Irradiation of a Particle Bed Fuel Element), a test of the irradiation behavior of the PBR fuel element, which was divided into two testing campaigns in 1988 and 1989 at Sandia National Laboratory’s Annular Core Research Reactor (ACRR) using fuel elements manufactured by Babcock and Wilcox. While the ACRR prevented the fuel elements from reaching the power density desired for the full PBR, the data indicated that the optimism about the potential power densities was justified. Exhaust temperatures were close to those needed for an NTR, so the program continued to move forward. Sadly, there were some manufacturing and corrosion issues with the fuel elements in PIPE-II, leading to some carbon contamination in the test loop, but this didn’t impact the ability to gather the necessary data or reduce the promise of the system (it just created more work for the team at SNL).

A later test, PIPET (Particle Bed Reactor Integral Performance Tester) began undergoing preliminary design reviews at the same time, which would end up consuming a huge amount of time and money while growing more and more important to the later program (more on that in the next post).

The other major test to occur at this time was CX1, or Critical Experiment 1.

Carried out at Sandia National Laboratory, CX1 was a novel configuration of prototypic fuel elements and a non-prototypical moderator to verify the nuclear worth of fuel elements in a reactor environment and then conduct post-irradiation testing. This sort of testing is vitally important to any new fuel element, since the computer modeling used to estimate reactor designs requires experimental data to confirm the required assumptions used in the calculations.

This novel architecture looked nothing like an NTR, since it was a research test-bed. In fact, because it was a low power system there wasn’t much need for many of the support structures a nuclear reactor generally uses. Instead, it used 19 fuel elements placed within polyethylene moderator plugs, which were surrounded by a tank of water for both neutron reflection and moderation. This was used to analyze a host of different characteristics, from prompt neutron production (since the delayed neutron behavior would be dependent on other materials, this wasn’t a major focus of the testing) to the initial criticality and excess reactivity produced by the fuel elements in this configuration.

CX-1 was the first of two critical experiments carried out using the same facilities in Sandia, and led to further testing configurations, but we’ll discuss those more in the next post.

Phase II: Moving Forward, Moving Up

With the success of the programmatic, computational and basic experiments in Phase I, it was time for the program to focus on a particular mission type, prepare for ground (and eventual flight) testing, and move forward.

This began Phase II of the program, which would continue from the foundation of Phase I until a flight test was able to be flown. By this point, ground testing would be completed, and the program would be in a roughly similar position to NERVA after the XE-Prime test.

Phase II began in 1990 under the SDIO, and would continue under their auspices until October 1991. The design was narrowed further, focusing on the NOTV concept, which was renamed the Orbital Maneuvering Vehicle.

Many decisions were made at this point which I’ll go into more in the next post, but some of the major decisions were:

  1. 40,000 lbf (~175 kN) thrust level
  2. 1000 MWt power level
  3. Hot bleed cycle power pack configuration
  4. T/W of 20:1
  5. Initial isp est of 930 s

While this is a less ambitious reactor, it could be improved as the program matured and certain challenges, especially in materials and reactor dynamics uncertainties, were overcome.
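Those listed numbers hang together reasonably well under standard rocket relations. A quick consistency check (my own arithmetic, using only the figures in the list):

```python
G0 = 9.80665                 # standard gravity, m/s^2
LBF_TO_N = 4.4482216153      # pounds-force to newtons
LB_TO_KG = 0.45359237        # pounds-mass to kilograms

thrust_n = 40_000 * LBF_TO_N           # ~177.9 kN (the post's ~175 kN)
isp_s    = 930.0
v_e      = isp_s * G0                  # effective exhaust velocity, ~9.1 km/s
mdot     = thrust_n / v_e              # propellant flow, kg/s
jet_kw   = 0.5 * mdot * v_e**2 / 1e3   # kinetic power carried by the exhaust

engine_mass_kg = (40_000 / 20) * LB_TO_KG  # T/W 20:1 -> 2,000 lb engine

print(f"mdot ~ {mdot:.1f} kg/s, jet power ~ {jet_kw / 1e3:.0f} MW "
      f"of the 1,000 MWt, engine ~ {engine_mass_kg:.0f} kg")
```

The exhaust carries roughly 810 MW of kinetic power, a plausible fraction of the 1,000 MWt reactor rating, and the implied engine mass is a bit over 900 kg.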

Another critical experiment (CX2) was conducted at Sandia, not only further refining the nuclear properties of the fuel but also demonstrating a unique control system, called a “Peek-A-Boo” scheme. Here, revolving rings made up of aluminum and gadolinium surrounded the central fuel element, and would be rotated to either absorb neutrons or allow them to interact with the other fuel elements. While the test was promising (the worth of the system was $1.81 closed and $5.02 open, both close to calculated values), this system would not end up being used in the final design.

Changing of the Guard: Timber Wind Falls to Space Nuclear Thermal Propulsion

Even as Timber Wind was being proposed, tensions with the USSR had been falling, and by the time the program got going in 1987 they were at an all-time low, reducing the priority of the SDIO mission. Finally, the Soviet Union fell, eliminating the need for the KKV concept.

At the same time, the program was meeting its goals (for the most part), and showed promise not just for SDIO but for the US Air Force (who were responsible for launching satellites for DOD and intelligence agencies) as well as NASA.

1990 was a major threshold year for the program. After a number of Senate-requested assessments by the Defense Science Board, as well as assessment by NASA, the program was looking like it was finding a new home, one with a less (but still significantly) military-oriented focus, and with a civilian component as well.

The end of Timber Wind would come in 1991. Control of the program would transfer from SDIO to the US Air Force, which would locate the programmatic center of the project at the Phillips Research Laboratory in Albuquerque, NM – a logical choice due to the close proximity of Sandia National Lab where much of the nuclear analysis was taking place, as well as being a major hub of astronuclear research (the TOPAZ International program was being conducted there as well). Additional stakes in the program were given to NASA, which saw the potential of the system for both uncrewed and crewed missions from LEO to the Moon and beyond.

With this, Timber Wind stopped being a thing, and the Space Nuclear Thermal Propulsion program picked up basically exactly where it left off.

The Promise of SNTP

With the demise of Timber Wind, the Space Nuclear Thermal Propulsion program gained steam. Being a wider collaboration between different parts of the US government, both civil and military, brought a lot of advantages, wider funding, and more mission options, but it also brought its own problems.

In the next post, we’ll look at this program, what its plans, results, and complications were, and what the legacy of this program was.

References and Further Reading

Timber Wind/SNTP General References

Orbital Transfer Vehicle

Horn et al., “The Use of Nuclear Power for Bimodal Applications in Space,” Brookhaven NL, 1987

Multi-Megawatt Power Plant

Marshall, A.C., “A Review of Gas-Cooled Reactor Concepts for SDI Applications,” Sandia NL, 1987

“Atomic Power in Space: A History,” Chapter 15


Pebblebed NTRs: Solid Fuel, but Different

Hello, and welcome back to Beyond NERVA!

Today, we’re going to take a break from the closed cycle gas core nuclear thermal rocket (which I’ve been working on constantly since mid-January) to look at one of the most popular designs in modern NTR history: the pebblebed reactor!

Honestly, I should have covered this between the solid and liquid fueled NTRs – and there are even a couple of reactor types which MAY be usable for NTRs in between as well, the fluidized bed and slurry fuel reactors – but with the lack of information on liquid fueled reactors online I got a bit overzealous.

Beads to Explore the Solar System

Most of the solid fueled NTRs we’ve looked at have been either part of, or heavily influenced by, the Rover and NERVA programs in the US. These types of reactors, also called “prismatic fuel reactors,” use a solid block of fuel of some form, usually tileable, with holes drilled through each fuel element.

The other designs we’ve covered fall into one of two categories, either a bundled fuel element, such as the Russian RD-0410, or a folded flow disc design such as the Dumbo or Tricarbide Disc NTRs.

However, there’s another option which is far more popular for modern American high temperature gas cooled reactor designs: the pebblebed reactor. This is a clever design, which increases the surface area of the fuel by using many small, spherical fuel elements held in a (usually) unfueled structure. The coolant/propellant passes between these beads, picking up the heat as it passes between them.

This has a number of fundamental advantages over the prismatic style fuel elements:

  1. The surface area of the fuel is so much greater than with simple holes drilled in the prismatic fuel elements, increasing thermal transfer efficiency.
  2. Since all types of fuel swell when heated, the density of the packed fuel elements could be adjusted to allow for better thermal expansion behavior within the active region of the reactor.
  3. The fuel elements themselves were reasonably loosely contained within separate structures, allowing for higher temperature containment materials to be used.
  4. The individual elements could be made smaller, allowing for a lower temperature gradient from the inside to the outside of a fuel, reducing the overall thermal stress on each fuel pebble.
  5. In a folded flow design, it was possible to not even have a physical structure along the inside of the annulus if centrifugal force was applied to the fuel element structure (as we saw in the fluid fueled reactor designs), eliminating the need for as many super-high temperature materials in the highest temperature region of the reactor.
  6. Because each bead is individually clad, in the case of an accident during launch, even if the reactor core is breached and a fuel release occurs, the release of radiological components or other fuel materials into the environment is minimized.
  7. Because each bead is relatively small, it is less likely that they will sustain sufficient damage either during mechanical failure of the flight vehicle or impact with the ground that would breach the cladding.

However, there is a complication with this design type as well, since there are many (usually hundreds, sometimes thousands) of individual fuel elements:

  1. Large numbers of fuel beads mean large numbers of fuel beads to manufacture and perform quality control checks on.
  2. Each bead will need to be individually clad, sometimes with multiple barriers for fission product release, hydrogen corrosion, and the like.
  3. While each fuel bead will be individually clad, and so the loss of one or all the fuel will not significantly impact the environment from a radiological perspective in the case of an accident, there is potential for significant geographic dispersal of the fuel in the event of a failure-to-orbit or other accident.

There are a number of different possible flow paths through the fuel elements, but the two most common are either an axial flow, where the propellant passes through a tubular structure packed with the fuel elements, or a folded flow design, where the fuel is in a porous annular structure, with the coolant (usually) passing from the outside of the annulus, through the fuel, and the now-heated coolant exiting through the central void of the annulus. We’ll call these direct flow and folded flow pebblebed fuel elements.

In addition, there are many different possible fuel types, which regulars of this blog will be familiar with by now: oxides, carbides, nitrides, and CERMET are all possible in a pebblebed design, and if differential fissile fuel loading is needed, or gradients in fuel composition (such as using tungsten CERMET in higher temperature portions of the reactor, with beryllium or molybdenum CERMET in lower temperature sections), this can be achieved using individual, internally homogeneous fuel types in the beads, which can be loaded into the fuel support structure at the appropriate time to create the desired gradient.

Just like in “regular” fuel elements, these pebbles need to be clad in a protective coating. There have been many proposals over the years, obviously depending on what type of fissile fuel matrix the fuel uses, to ensure thermal expansion and chemical compatibility with the fuel and coolant. Often, multiple layers of different materials are used to ensure structural and chemical integrity of the fuel pellets. Perhaps the best known example of this today is the TRISO fuel element, used in the US Advanced Gas Reactor fuel development program. The TRI-Structural ISOtropic fuel element uses either oxide or carbide fuel in the center, followed by a porous carbon layer, a pyrolytic carbon layer (sort of like graphite, but with some covalent bonds between the carbon sheets), followed by a silicon carbide outer shell for mechanical and fission product retention. Some variations include a burnable poison for reactivity control (the QUADRISO at Argonne), or use different outer layer materials for chemical protection. Several types have been suggested for NTR designs, and we’ll see more of them later.

The (sort of) final significant variable is the size of the pebble. As the pebbles go down in size, the available surface area of the fuel-to-coolant interface increases, but also the amount of available space between the pebbles decreases and the path that the coolant flows through becomes more resistant to higher coolant flow rates. Depending on the operating temperature and pressure, the thermal gradient acceptable in the fuel, the amount of decay heat that you want to have to deal with on shutdown (the bigger the fuel pebble, the more time it will take to cool down), fissile fuel density, clad thickness requirements, and other variables, a final size for the fuel pebbles can be calculated, and will vary to a certain degree between different reactor designs.
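The pressure-drop side of that pebble-size trade can be sketched with the Ergun equation, the standard correlation for flow through a packed bed of spheres. The gas properties and velocity below are rough hot-hydrogen guesses of mine, not design values; only the trend with pebble diameter matters here:

```python
def ergun_dp_per_m(d_p, u, rho, mu, eps=0.39):
    """Pressure gradient (Pa/m) through a packed bed of spheres of diameter
    d_p (m) at superficial velocity u (m/s); eps is the bed void fraction.
    First term: viscous (laminar) losses; second: inertial (turbulent) losses."""
    viscous  = 150 * mu * (1 - eps)**2 * u / (eps**3 * d_p**2)
    inertial = 1.75 * (1 - eps) * rho * u**2 / (eps**3 * d_p)
    return viscous + inertial

rho, mu, u = 1.0, 2.0e-5, 10.0   # gas density (kg/m^3), viscosity (Pa*s), speed (m/s)
for d_mm in (0.5, 1.0, 5.0):
    dp = ergun_dp_per_m(d_mm / 1000, u, rho, mu)
    print(f"{d_mm} mm pebbles: {dp / 1e3:.0f} kPa per metre of bed")
```

Halving the pebble diameter roughly quadruples the viscous term and doubles the inertial one, which is exactly the "smaller pebbles, more surface area, but much more resistance to flow" trade described above.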

Not Just for NTRs: The Electricity Generation Potential of Pebblebed Reactors

Obviously, the majority of the designs for pebblebed reactors are not meant to ever fly in space; they’re mostly meant to operate as high temperature gas cooled reactors on Earth. This type of architecture has been proposed for astronuclear designs as well, although that isn’t the focus of this post.

Furthermore, the pebblebed design lends itself to other cooling methods, such as molten salt, liquid metal, and other heat-carrying fluids, which like the gas would flow through the fuel pellets, pick up the heat produced by the fissioning fuel, and carry it into a power conversion system of whatever design the reactor has integrated into its systems.

Finally, while it’s rare, pebblebed designs were popular for a while with radioisotope power systems. There are a number of reasons for this beyond being able to run a liquid coolant through the fuel (which was done on one occasion that I can think of, and which we’ll cover in a future post). In an alpha-emitting radioisotope such as 238Pu, the fuel will generate helium gas over time: the alpha particles slow, stop, and become doubly ionized helium nuclei, which then strip electrons off whatever materials are around and become normal 4He. This gas needs SOMEWHERE to go, which is why, just as with a fissile fuel structure, there are gas management mechanisms in radioisotope power source fuel assemblies, such as areas of vacuum, pressure relief valves, and the like. In some types of RTG, such as the SNAP-27 RTG used by Apollo and the Multi-Hundred Watt RTG used by Voyager, the fuel was made into spheres, with the gaps between the spheres (normally used to pass coolant through) used as the gas expansion volume.
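To get a feel for how much helium a 238Pu heat source actually generates, here's a back-of-the-envelope decay calculation. The 1 kg of PuO2 is just an illustrative amount, and I'm ignoring second-order effects like helium retained in the fuel lattice:

```python
import math

# Every 238Pu alpha decay eventually yields one 4He atom.
N_A = 6.02214076e23                 # Avogadro's number
half_life_yr = 87.7                 # 238Pu half-life, years
lam = math.log(2) / half_life_yr    # decay constant, 1/yr

pu238_mass_kg = 1.0 * (238 / 270)   # Pu fraction of 1 kg of PuO2 (~0.88 kg)
n_pu = pu238_mass_kg * 1000 / 238 * N_A   # initial 238Pu atoms

for years in (1, 10, 50):
    n_he = n_pu * (1 - math.exp(-lam * years))   # He atoms produced
    litres_stp = n_he / N_A * 22.4               # ideal-gas volume at STP
    print(f"after {years:2d} yr: ~{litres_stp:.1f} L of helium at STP")
```

Even in the first year, well over half a litre (at STP) of helium appears inside a kilogram of fuel, and over a multi-decade Voyager-class mission it adds up to tens of litres, which is why the inter-sphere voids double as expansion volume.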

We’ll discuss these ideas more in the future, but I figured it was important to point them out here. Let’s get back to the NTRs, and the first (and only major) NTR program to focus on the pebblebed concept: Project Timber Wind and the Space Nuclear Thermal Propulsion program of the 1980s and early 1990s.

The Beginnings of Pebblebed NTRs

The first proposals for a gas cooled pebblebed reactor date from 1944/45, although they were never pursued beyond the concept stage, and a proposal for the “Space Vehicle Propulsion Reactor” was made by Levoy and Newgard at Thiokol in 1960, again with no further development. If you can get that paper, I’d love to read it; here’s all I’ve got: “Aero/Space Engineering 19, no. 4, pgs 54-58, April 1960,” “AAE Journal, 68, no. 6, pgs. 46-50, June 1960,” and “Engineering 189, pg 755, June 3, 1960.” Sounds like they pushed hard, and for good reason, but at the time a pebblebed reactor was a radical concept even for a terrestrial reactor, and a prismatic fueled reactor, something far more familiar to nuclear engineers, seemed a far simpler and more achievable challenge.

Sadly, while this design may have ended up informing its contemporary reactor designs, it seems this proposal was never pursued.

Rotating Fluidized Bed Reactor (“Hatch” Reactor) and the Groundwork for Timberwind

Another proposal was made at the same time at Brookhaven National Laboratory, by L.P. Hatch, W.H. Regan, and a name that will continue to come up for the rest of this series, John R. Powell (sorry, can’t find the given names of the other two, even). This relied on very small (100-500 micrometer) fuel, held in a perforated drum to contain the fuel but also allow propellant to be injected into the fuel particles, which was spun at a high rate to provide centrifugal force to the particles and prevent them from escaping.

Now, fluidized beds need a bit of explanation, which I figured was best to put in here since this is not a generalized property of pebblebed reactors. In this reactor (and some others) the pebbles are quite small, and the coolant flow can be quite high. This means that it’s possible – and sometimes desirable – for the pebbles to move through the active zone of the reactor! This type of mobile fuel is called a “fluidized bed” reactor, and comes in several variants, including pebble (solid spheres), slurry (solid particulate suspended in a liquid), and colloid (solid particulate suspended in a gas). The best way to describe the phenomenon is with what is called the point of minimum fluidization, or when the drag forces on the mass of the solid objects from the fluid flow balance the weight of the bed (keep in mind that lift is a specialized form of drag). There are a number of reasons to do this – in fact, many chemical reactions using a solid and a fluid component use fluidization to ensure maximum mixing of the components. In the case of an NTR, the concern is more to do with achieving as close to thermal equilibrium between the solid fuel and the gaseous propellant as possible, while minimizing the pressure drop between the cold propellant inlet and the hot propellant outlet. For an NTR, the way that the “weight” is applied is through centrifugal force on the fuel. This is a familiar concept to those that read my liquid fueled NTR series, but it actually began with the fluidized bed concept.

The point of minimum fluidization is calculated using two different relations between the same variables: the Reynolds number (Re), which determines how turbulent the fluid flow is, and the friction coefficient (CD, or coefficient of drag, which determines how much force acts on the fuel particles based on fluid interactions with them), which can be found plotted below. The plotted lines represent either the Reynolds number or the void fraction ε, which represents the amount of gas present in the volume defined by the presence of fuel particles.

Hendrie 1970

If you don’t follow the technical details of the relationships depicted, that’s more than OK! Basically, the y axis is proportional to the gas turbulence, while the x axis is proportional to the particle diameter, so you can see that for relatively small increases in particle size you can get larger increases in propellant flow rates.
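If you want to play with the balance yourself, here’s a minimal sketch of the minimum fluidization condition using the standard Ergun packed-bed pressure drop relation (which may not be the exact correlation used in the original work); all the material numbers are illustrative cold-flow stand-ins, not values from the reports:

```python
import math

def u_min_fluidization(d, eps, rho_p, rho_g, mu, g_eff):
    """Superficial gas velocity at minimum fluidization: the Ergun
    packed-bed pressure drop set equal to the bed weight per unit
    volume, solved as a quadratic in velocity. All inputs SI;
    g_eff is the applied (centrifugal) acceleration in m/s^2."""
    a = 1.75 * rho_g / (eps**3 * d)                  # inertial (turbulent) term
    b = 150.0 * mu * (1.0 - eps) / (eps**3 * d**2)   # viscous (laminar) term
    c = (rho_p - rho_g) * g_eff                      # effective bed weight term
    return (-b + math.sqrt(b**2 + 4.0 * a * c)) / (2.0 * a)

# Illustrative cold-flow numbers: 300 um glass beads in nitrogen at 100 gee
u_mf = u_min_fluidization(d=300e-6, eps=0.45, rho_p=2500.0,
                          rho_g=1.25, mu=1.8e-5, g_eff=100 * 9.81)
```

Running it shows the trend from the plot: bigger particles (or more gees) need faster gas flow before the bed lifts.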

The next proposal for a pebble bed reactor grew directly out of the Hatch reactor: the Rotating Fluidized Bed Reactor for Space Nuclear Propulsion (RBR). From the documentation I’ve been able to find, work continued at a very low level at BNL from the time of the original proposal until 1973, but the only reports I’ve been able to find are from 1971-73 under the RBR name. A rotating fuel structure, with small, 100-500 micrometer spherical particles of uranium-zirconium carbide fuel (the ZrC forming the outer clad, with a maximum U content of 10% to maximize the thermal limits of the fuel particles), was surrounded by a reflector of either metallic beryllium or BeO (which was preferred as a moderator, but its increased density also increased both reactor mass and manufacturing requirements). Four drums in the reflector would control the reactivity of the engine, and an electric motor would be attached to a porous “squirrel cage” frit, which would rotate to contain the fuel.

Much discussion was had as to the form of uranium used, be it 235U or 233U. In the 235U reactor, the reactor had a cavity length of 25 in (63.5 cm), an inner diameter of 25 in (63.5 cm), and a fuel bed depth when fluidized of 4 in (10.2 cm), with a critical mass of U-ZrC being achieved at 343.5 lbs (155.8 kg) with 9.5% U content. The 233U reactor was smaller, at 23 in (56 cm) cavity length, 20 in (51 cm) bed inner diameter, 3 in (7.62 cm) deep fuel bed with a higher (70%) void fraction, and only 105.6 lbs (47.9 kg) of U-ZrC fuel at a lower (and therefore more temperature-tolerant) 7.5% U loading.

233U was the much preferred fuel in this reactor, with two options being available to the designers: either the decreased fuel loading could be used to form the smaller, higher thrust-to-weight ratio engine described above, or the reactor could remain at the dimensions of the 235U-fueled option, but the temperature could be increased to improve the specific impulse of the engine.

There was also a trade-off between the size of the fuel particles and the thermal efficiency of the reactor:

  • Smaller particle advantages
    • Higher surface area, and therefore better thermal transfer capabilities
    • Smaller radius reduces thermal stresses in the fuel
  • Smaller particle disadvantages
    • Fuel loss from the fluidized bed would be a more immediate concern
    • More sensitive to fluid dynamic behavior in the bed
    • Bubbles could more easily form in the fuel
    • Higher centrifugal force required for fuel containment
  • Larger particle advantages
    • Ease of manufacture
    • Lower centrifugal force requirements for a given propellant flow rate
  • Larger particle disadvantages
    • Higher thermal gradients and stresses in the fuel pellets
    • Less surface area, so lower thermal transfer efficiency

It would require testing to determine the best fuel particle size, which could largely be done through cold flow testing.

These studies looked at cold flow testing in depth. While this is something that I’ve usually skipped over in my reporting on NTR development, it’s a crucial type of testing in any gas cooled reactor, and even more so in a fluidized bed NTR, so let’s take a look at what it’s like in a pebblebed reactor: the equipment, the data collection, and how the data modified the reactor design over time.

Cold flow testing is usually the predecessor to electrically heated flow testing in an NTR. These tests determine a number of things, including areas within the reactor that may end up with stagnant propellant (not a good thing), undesired turbulence, and other negative consequences to the flow of gas through the reactor. They are preliminary tests, since as the propellant heats up while going through the reactor, a couple major things will change: first, the density of the gas will decrease and second, as the density changes the Reynolds number (a measure of self-interaction, viscosity, and turbulent vs laminar flow behavior) will change.

In this case, the cold flow tests were especially useful, since one of the biggest considerations in this reactor type is how the gas and fuel interact.

The first consideration that needed to be examined is the pressure drop across the fuel bed – the highest pressure point in the system is always the turbopump, and the pressure will decrease from that point throughout the system due to friction with the pipes carrying propellant, heating effects, and a host of other inefficiencies. One of the biggest questions initially in this design was how much pressure would be lost from the frit (the outer containment structure and propellant injection system into the fuel) to the central void in the body of the fuel, where it exits the nozzle. Happily, this pressure drop is minimal: according to initial testing in the early 1960s (more on that below), the pressure drop was equal to the weight of the fuel bed.
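As a sanity check on that result, the “pressure drop equals bed weight” relation is easy to sketch for the rotating geometry, where the bed’s “weight” comes from centrifugal force spread over the frit’s surface; the rotation rate here is an assumption for illustration, not a figure from the test reports:

```python
import math

# The bed's "weight" in the rotating frame is m * omega^2 * r, spread over
# the frit's cylindrical surface; per the result above, that is also the
# pressure drop across the bed. The rotation rate is assumed.
def bed_pressure_drop(bed_mass, rpm, frit_radius, frit_length):
    omega = rpm * 2.0 * math.pi / 60.0          # rad/s
    a_centrifugal = omega**2 * frit_radius      # m/s^2 at the frit surface
    frit_area = 2.0 * math.pi * frit_radius * frit_length
    return bed_mass * a_centrifugal / frit_area   # Pa

# 155.8 kg bed (the 235U critical mass quoted above), 25 in diameter frit,
# 25 in long, spun at an assumed 2,000 rpm:
dp = bed_pressure_drop(155.8, 2000, 0.3175, 0.635)
```

For these assumed numbers the drop comes out on the order of a megapascal or two, which gives a feel for why the frit and its bearings were a structural concern.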

The next consideration was the range between fluidizing the fuel and losing the fuel through literally blowing it out the nozzle – otherwise known as entrainment, a problem we looked at extensively on a per-molecule basis in the liquid fueled NTR posts (since that was the major problem with all those designs). Initial calculations and some basic experiments were able to map the propellant flow rate and centrifugal force required to both get the benefit of a fluidized bed and prevent fuel loss.
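The entrainment limit can be sketched the same way: a particle is lost once the gas velocity exceeds its terminal velocity under the applied centrifugal acceleration. This toy calculation uses the Schiller-Naumann drag correlation (my choice, not necessarily BNL’s) with the same illustrative cold-flow numbers as before:

```python
import math

def terminal_velocity(d, rho_p, rho_g, mu, g_eff, iters=200):
    """Gas velocity above which a particle is entrained (lost out the
    nozzle): drag force equals centrifugal weight. Uses the
    Schiller-Naumann drag correlation with a damped fixed-point
    iteration; an illustrative sketch, not BNL's method."""
    u = 1.0
    for _ in range(iters):
        re = max(rho_g * u * d / mu, 1e-9)          # particle Reynolds number
        cd = 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000.0 else 0.44
        u_new = math.sqrt(4.0 * d * (rho_p - rho_g) * g_eff / (3.0 * cd * rho_g))
        u = 0.5 * (u + u_new)                        # damped update
    return u

# 300 um glass beads in nitrogen at 100 gee (illustrative numbers):
u_t = terminal_velocity(300e-6, 2500.0, 1.25, 1.8e-5, 100 * 9.81)
# The usable flow window sits between minimum fluidization and u_t.
```

The mapping work described above was, in essence, charting where that window sits across particle sizes and rotation rates.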

Rotating Fluidized Bed Reactor testbed test showing bubble formation.

Another concern is the formation of bubbles in the fuel body. As we covered in the bubbler LNTR post (which you can find here), bubbles are a problem in any fuel type, but in a fluid fueled reactor with coolant passing through it there’s special challenges. In this case, the main method of transferring heat from the fuel to the propellant is convection (i.e. contact between the fuel and the propellant causing vortices in the gas which distributes the heat), so an area that doesn’t have any (or minimal) fuel particles in it will not get heated as thoroughly. That’s a headache not only because the overall propellant temperature drops (proportional to the size of the bubbles), but it also changes the power distribution in the reactor (the bubbles are fission blank spots).

Finally, the initial experiment set looked at the particle-to-fluid thermal transfer coefficients. These tests were far from ideal, using a 1 g system rather than the much higher planned centrifugal forces, but they did give some initial numbers.

The first round of tests was done at Brookhaven National Laboratory (BNL) from 1962 to 1966, using a relatively simple test facility. A small, 10” (25.4 cm) length by 1” (2.54 cm) diameter centrifuge was installed, with gas pressure provided by a pressurized liquefied air system. 138 to 3450 grams of glass particles were loaded into the centrifuge, and various rotational velocities and gas pressures were used to test the basic behavior of the particles under both centrifugal force and gas pressure. While some bubbles were observed, the fuel beds remained stable and no fuel particles were lost during testing, a promising beginning.

These tests provided not just initial thermal transfer estimates, pressure drop calculations, and fuel bed behavioral information, but also informed the design of a new, larger test rig, this one 10 in by 10 in (25.4 by 25.4 cm), which was begun in 1966. This system would not only have a larger centrifuge, but would also use liquid nitrogen rather than liquefied air, be able to test different fuel particle simulants rather than just relatively lightweight glass, and provide much more detailed data. Sadly, the program ran out of funding later that year, and the partially completed test rig was mothballed.

Rotating Fluidized Bed Reactor (RBR): New Life for the Hatch Reactor

Work resumed in 1970, when the Space Nuclear Systems Office of the Atomic Energy Commission and NASA provided additional funding to complete the test stand and conduct a series of experiments on particle behavior, reactor dynamics and optimization, and other analytical studies of a potential advanced pebblebed NTR.

The First Year: June 1970-June 1971

After completing the test stand, the team at BNL began a series of tests with this larger, more capable equipment in Building 835. The first, most obvious difference is the diameter of the centrifuge, which was upgraded from 1 inch to 10 inches (25.4 cm), allowing for a more prototypical fuel bed depth. This was made out of perforated aluminum, held in a stainless steel pressure housing for feeding the pressurized gas through the fuel bed. In addition, the gas system was changed from the pressurized air system to one designed to operate on nitrogen, which was stored in liquid form in trailers outside the building for ease of refilling (and safety), then pre-vaporized and held in two other, high-pressure trailers.

Photographs were used to record fluidization behavior, taken viewing the bottom of the bed from underneath the apparatus. While initially photos were only able to be taken 5 seconds apart, later upgrades would improve this over the course of the program.

The other major piece of instrumentation surrounded the pressure and flow rate of the nitrogen gas throughout the system. The gas was introduced at a known pressure through two inlets into the primary steel body of the test stand, with measurements of upstream pressure, cylindrical cavity pressure outside the frit, and finally a pitot tube to measure pressure inside the central void of the centrifuge.

Three main areas of pressure drop were of interest: due to the perforated frit itself, the passage of the gas through the fuel bed, and finally from the surface of the bed and into the central void of the centrifuge, all of which needed to be measured accurately, requiring calibration of not only the sensors but also known losses unique to the test stand itself.

The tests themselves were undertaken with a range of glass particle sizes from 100 to 500 micrometers in diameter, similar to the earlier tests, as well as 500 micrometer copper particles to more closely replicate the density of the U-ZrC fuel. Rotation rates between 1,000 and 2,000 rpm and gas flow rates from 1,340-1,800 scf/m (38-51 m^3/min) were used with the glass beads, and rotation rates from 700-1,500 rpm with the copper particles (the lower rotation rate was due to gas pressure feed limitations preventing the bed from becoming fully fluidized with the more massive particles).
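For a sense of scale, here’s roughly what those rotation rates mean in terms of centrifugal acceleration, assuming the particles ride at the frit radius of 5 in (0.127 m):

```python
import math

# Centrifugal acceleration implied by the test rotation rates, assuming
# the particles sit at the frit radius of 5 in (0.127 m):
def centrifugal_gees(rpm, radius_m=0.127):
    omega = rpm * 2.0 * math.pi / 60.0   # rad/s
    return omega**2 * radius_m / 9.81

for rpm in (700, 1000, 1500, 2000):
    print(f"{rpm:>5} rpm -> {centrifugal_gees(rpm):6.0f} gee")
# spans roughly 70 gee at 700 rpm up to about 570 gee at 2,000 rpm
```

So even this modest 10 inch rig was holding its simulated fuel down at hundreds of gees, far beyond the 1-gee apparatus that preceded it.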

Finally, there were a series of physics and mechanical engineering design calculations that were carried out to continue to develop the nuclear engineering, mechanical design, and system optimization of the final RBR.

The results from the initial testing were promising: much of the testing was focused on getting the new test stand commissioned and calibrated, with a focus on figuring out how to use the rig as it was constructed as well as which parts (such as the photography setup) could be improved in the next fiscal year of testing. However, particle dynamics in the fluidized bed were comfortably within stable, expected behavior, and while there were interesting findings as to the variation in pressure drop along the axis of the central void, this was something that could be worked with.

Based on the calculations performed, as well as the experiments carried out in the first year of the program, a range of engine configurations was determined for both 233U and 235U variants:

Work Continues: 1971-1972

This led directly into the 1971-72 series of experiments and calculations. Now that the test stand had been mostly completed (although modifications would continue), and the behavior of the test stand was now well-understood, more focused experimentation could continue, and the calculations of the physics and engineering considerations in the reactor and engine system could be advanced on a more firm footing.

One major change in this year’s design choices was the shift toward a low-thrust, high-isp system, in part due to greater interest at NASA and the AEC in a smaller NTR than the original design envelope. While analyzing the proposed engine sizes above, though, it was discovered that the smallest two reactors were simply not practical, meaning that the smallest workable design was over the 1 GW power level.

Another thing that was emphasized during this period from the optimization side of the program was the mass of the reflector. Since the low thrust option was now the main focus of the design, any increase in the mass of the reactor system had a larger impact on the thrust-to-weight ratio, but reducing the reflector thickness also increased the neutron leakage rate. To limit this leakage, a narrower nozzle throat is preferred, but a narrower throat also increases the thermal loading across the throat itself, meaning that additional cooling, and probably more mass, is needed – especially in a high-specific-impulse (aka high temperature) system. It also means higher chamber pressures are needed to maintain the desired thrust level (a narrower throat with the same mass flow throughput means that the pressure in the central void has to be higher).
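That last coupling falls straight out of the choked-flow relation; this sketch uses assumed round-number values for mass flow and characteristic velocity, not figures from the RBR reports:

```python
import math

# Choked-flow relation: mdot = P_c * A_t / c*, so for a fixed mass flow
# the required chamber pressure scales as 1 / A_t. MDOT and C_STAR are
# assumed round numbers, not figures from the RBR reports.
def chamber_pressure(mdot, throat_diameter, c_star):
    a_throat = math.pi * throat_diameter**2 / 4   # throat area, m^2
    return mdot * c_star / a_throat               # chamber pressure, Pa

MDOT = 10.0      # kg/s of hydrogen propellant (assumed)
C_STAR = 4600.0  # m/s, roughly right for ~2400 K hydrogen (assumed)

p1 = chamber_pressure(MDOT, 0.15, C_STAR)    # 15 cm throat (assumed)
p2 = chamber_pressure(MDOT, 0.075, C_STAR)   # halve the throat diameter...
# ...and the required chamber pressure quadruples (p2 / p1 == 4)
```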

These changes required a redesign of the reactor itself, with a new critical configuration:

Hendrie 1972

One major change is how fluidized the bed actually is during operation. In order to get full fluidization, there needs to be enough inward (“upward” in terms of force vectors) velocity at the inner surface of the fuel body to lift the fuel particles without losing them out the nozzle. During calculations in both the first and second years, two major subsystems contributed hugely to the weight and were very dependent on both the rotational speed and the pellet size/mass: the weight of the frit and motor system, which holds the fuel particles, and the weight of the nozzle, which not only forms the outlet-end containment structure for the fuel but also (through the challenges of rocket motor dynamics) is linked to the chamber pressure of the reactor – oh, and the narrower the nozzle, the less surface area is available to reject the heat from the propellant, so the harder it is to keep cool enough that it doesn’t melt.

Now, fluidization isn’t a binary system: a pebblebed reactor is able to be settled (no fluidization), partially fluidized (usually expressed as a percentage of the pebblebed being fluidized), and fully fluidized to varying degrees (usually expressed as a percentage of the volume occupied by the pebbles being composed of the fluid). So there’s a huge range, from fully settled to >95% fluid in a fully fluidized bed.

The designers of the RBR weren’t going for excess fluidization: at some point, the designer faces diminishing returns on the complications required to maintain that level of fluid flow (I’m sure it’s the same, with different criteria, in the chemical industry, where most fluidized beds actually are used). More powerful turbopumps would be needed for the hydrogen, the propellant would stop being fully thermalized because there’s simply too much of it to heat, and particulate fuel would start being blown out of the nozzle. The calculations for the bed dynamics therefore assumed minimal full fluidization (i.e. when all the pebbles are moving in the reactor) as the maximum flow rate – somewhere around 70% gas in the fuel volume (that number was never specifically defined in the source documentation that I found; if it was, please let me know) – though the exact point depends on both the pressure drop in the reactor (which is related to the mass of the particle bed) and the gas flow.

Ludewig 1974

However, the designers at this point decided that full fluidization wasn’t actually necessary – and in fact was detrimental – to this particular NTR design. Because of the dynamics of the design, the first particles to be fluidized were on the inner surface of the fuel bed, and as the fluidization percentage increased, the pebbles further toward the outer circumference became fluidized. Because the temperature difference between the fuel and the propellant is greater as the propellant is being injected through the frit and into the fuel body, more heat is carried away by the propellant per unit mass, and as the propellant warms up, thermal transfer becomes less efficient (the temperature difference between two different objects is one of the major variables in how much energy is transferred for a given surface area), and fluidization increases that efficiency between a solid and a fluid.

Because of this, the engineers re-thought what “minimal fluidization” actually meant. If the bed could be fluidized enough to maximize the benefit of that dynamic, while at a minimum level of fluidization to minimize the volume the pebblebed actually took up in the reactor, there would be a few key benefits:

  1. The fueled volume of the reactor could be smaller, meaning that the nozzle could be wider, so they could have lower chamber pressure and also more surface area for active cooling of the nozzle
  2. The amount of propellant flow could be lower, meaning that turbopump assemblies could be smaller and lighter weight
  3. The frit could be made less robustly, saving on weight and simplifying the challenges of the bearings for the frit assembly
  4. The nozzle, frit, and motor/drive assembly for the frit are all net neutron poisons in the RBR, meaning that minimizing any of these structures’ overall mass improves the neutron economy in the reactor, leading to either a lower mass reactor or a lower U mass fraction in the fuel (as we discussed in the 233U vs. 235U design trade-off)

After going through the various options, the designers decided to go with a partially fluidized bed. At this point in the design evolution, they decided on having about 50% of the bed by mass fluidized, with the rest settled (there’s a transition region in the fuel body where partial fluidization occurs, and they briefly discuss the challenges of modeling the dynamics of that portion). This maximizes the benefit at the circumference, where the thermal difference is greatest (and therefore the thermal exchange between the fuel and the propellant is most efficient), while still thermalizing the propellant as much as possible as the temperature difference shrinks and the propellant becomes increasingly hotter. They still managed to reach an impressive 2400 K propellant cavity temperature with this reactor, which makes it one of the hottest (and therefore highest isp) solid core NTR designs proposed at that time.
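For a rough idea of what a 2400 K chamber buys you, here’s the ideal (frozen-flow) specific impulse for hydrogen; the ratio of specific heats and the expansion ratio are assumed round numbers, so treat the output as a ballpark, not the RBR’s quoted performance:

```python
import math

# Ideal (frozen-flow) specific impulse for hydrogen at a given chamber
# temperature. Gamma and the exit/chamber pressure ratio are assumed
# round numbers; real hot hydrogen does somewhat better.
def ideal_isp(t_chamber_k, gamma=1.4, r_h2=4124.0, pe_over_pc=0.01):
    term = 1.0 - pe_over_pc ** ((gamma - 1.0) / gamma)
    v_exhaust = math.sqrt(2.0 * gamma / (gamma - 1.0) * r_h2 * t_chamber_k * term)
    return v_exhaust / 9.81   # specific impulse in seconds

isp = ideal_isp(2400.0)   # roughly 700-750 s for these assumptions
```

Either way, the point stands: chamber temperature is what specific impulse lives and dies on, which is why the partially fluidized bed was worth the trouble.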

This has various implications for the reactor, including the density of the fissile component of the fuel (as well as the other solid components that make up the pebbles), the void fraction of the reactor (what part of the reactor is made up of something other than fuel, in this particular instance hydrogen within the fuel), and other components, requiring a reworking of the nuclear modeling for the reactor.

An interesting thing to me in the Annual Progress Report (linked below) is the description of how this new critical configuration was modeled; while this is reasonably common knowledge among nuclear engineers from the days before computational modeling (and even to the present day), I’d never seen someone explain it in the literature before.

Basically, they made a bunch of extremely simplified (in both dimensionality and fidelity) one-dimensional models of various points in the reactor. They then assumed that they could rotate each model around its elevation to make something like an MRI slice of the nuclear behavior in the reactor. Then they moved far enough along the axis that conditions were different enough (say, where the frit turns in toward the middle of the reactor to hold the fuel, or where the nozzle starts, or even the center of the fuel compared to the edge) that the dynamics would change, and made the same sort of one-dimensional model; they would end up doing this 18 times. Then, sort of like an MRI in reverse, they took these models, called “few-group” models, and combined them into a larger group – called a “macro-group” – for calculations that were able to handle the interactions between these different few-group simulations, building up a two-dimensional model of the reactor’s nuclear structure to determine the critical configuration of the reactor. They added a few other ways to subdivide the reactor for modeling – for instance, they split the neutron spectrum calculations into fast and thermal – but this is the general shape of how nuclear modeling is done.
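Here’s a toy version of that slice-and-stack process, just to make the idea concrete; the flux shapes are simple stand-ins, not the actual RBR physics:

```python
import math

# Toy version of the "few-group -> macro-group" assembly: 18 independent,
# crude 1-D radial profiles, one per axial station, stacked into a 2-D
# flux map. The shapes are illustrative stand-ins, not the RBR's physics.
N_R, N_Z = 20, 18   # radial points per slice, number of axial stations

def radial_slice(z_frac):
    """One 1-D 'few-group' model: an axial cosine weighting times a
    simple parabolic radial falloff standing in for the real flux shape."""
    axial = math.cos(math.pi * (z_frac - 0.5))   # peaked at the mid-plane
    return [axial * (1.0 - 0.8 * (i / (N_R - 1)) ** 2) for i in range(N_R)]

# "Macro-group" step: combine the independent slices into one 2-D map
flux_map = [radial_slice((k + 0.5) / N_Z) for k in range(N_Z)]
peak = max(max(row) for row in flux_map)
flux_map = [[value / peak for value in row] for row in flux_map]  # normalize
```

The real method, of course, coupled the slices through actual neutron transport rather than just stacking independent shapes, but the bookkeeping is the same.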

Ok, let’s get back to the RBR…

Experimental testing using the rotating pebblebed simulator continued through this fiscal year, with some modifications. A new, seamless frit structure was procured to eliminate some experimental uncertainty, the pressure measuring equipment was used to test more areas of the pressure drop across the system, and a long-standing challenge for the experimental team was finally resolved: 100 micrometer copper spheres regular enough in shape to provide a useful analogue to the UC-ZrC fuel (Cu specific gravity 8.9, UC-ZrC specific gravity ~6.5) were at last procured.

Additionally, while thermal transfer experiments had been done with the 1-gee small test apparatus which preceded the larger centrifugal setup (with variable gee forces available), the changes were too great to allow for accurate predictions of thermal transfer behavior. Therefore, thermal transfer experiments began to be planned for the new test rig – another expansion of the capabilities of the new system, which was now in rigorous use following its completion and the calibration testing of the previous year. While the experiments weren’t conducted that year, setting up an experimental program requires careful analysis of what the test rig is capable of, and of how good data accuracy can be achieved given the experimental limitations of the design.

The major achievement of the year’s experimentation was a refinement of the relationship between particle size, centrifugal force, and the pressure drop of the propellant from the turbopump through the frit inlet to the central cavity – most especially from the frit through the fuel body to the inner cavity – across a wide range of particle sizes, flow rates, and bed fluidization levels, which would be key as the design for the RBR evolved.

The New NTR Design: Mid-Thrust, Small RBR

So, given the priorities at both the AEC and NASA, it was decided that it was best to focus primarily on a given thrust, and to optimize the thrust-to-weight ratio of the reactor around that thrust level, in part because the outlet temperature of the reactor – and therefore the specific impulse – was fixed by the engineering decisions made in regards to the rest of the reactor design. In this case, the target thrust was 90 kN (20,230 lbf), or about 120% of a Pewee-class engine.

This, of course, constrained the reactor design, which at this point in any reactor’s development is a good thing. Every general concept has a huge variety of options to play with: fuel type (oxide, carbide, nitride, metal, CERMET, etc), fissile component (233U and 235U being the big ones, but 242mAm, 251Cf, and other more exotic options exist), thrust level, physical dimensions, fuel size in the case of a PBR, and more can all be played with to a huge degree, so having a fixed target to work towards in one metric provides a reference point that the rest of the reactor can work around.

Also, having an optimization point to work from is important, in this case thrust-to-weight ratio (T/W). Other options, such as specific impulse, for a target to maximize would lead to a very different reactor design, but at the time T/W was considered the most valuable consideration since one way or another the specific impulse would still be higher than the prismatic core NTRs currently under development as part of the NERVA program (being led by Los Alamos Scientific Laboratory and NASA, undergoing regular hot fire testing at the Jackass Flats, NV facility). Those engines, while promising, were limited by poor T/W ratios, so at the time a major goal for NTR improvement was to increase the T/W ratio of whatever came after – which might have been the RBR, if everything went smoothly.

One of the characteristics that has the biggest impact on the T/W ratio in the RBR is the nozzle throat diameter. The smaller the diameter, the higher the chamber pressure, which reduces the T/W ratio while increasing the amount of volume the fuel body can occupy given the same reactor dimensions – meaning that smaller fuel particles could be used, since there’s less chance that they would be lost out of the narrower nozzle throat. However, by increasing the nozzle throat diameter, the T/W ratio improved (up to a point), and the chamber pressure could be decreased, but at the cost of a larger particle size; this increases the thermal stresses in the fuel particles, and makes it more likely that some of them would fail – not as catastrophic as on a prismatic fueled reactor by any means, but still something to be avoided at all costs. Clearly a compromise would need to be reached.

Here are some tables looking at the design options leading up to the 90 kN engine configuration with both the 233U and 235U fueled versions of the RBR:

After analyzing the various options, a number of lessons were learned:

  1. It was preferable to work from a fixed design point (the 90 kN thrust level), because while the reactor design was flexible, operating near an optimized power level was more workable from a reactor physics and thermal engineering point of view
  2. The main stress points on the design were reflector weight (one of the biggest mass components in the system), throat diameter (from both a mass and active cooling point of view as well as fuel containment), and particle size (from a thermal stress and heat transfer point of view)
  3. On these lower-thrust engines, 233U was looking far better than 235U for the fissile component, with a T/W ratio (without radiation shielding) of 65.7 N/kg compared to 33.3 N/kg respectively
    1. As reactor size increased, this difference reduced significantly, but with a constrained thrust level – and therefore reactor power – the difference was quite significant.

The End of the Line: RBR Winds Down

1973 was a bad year in the astronuclear engineering community. The flagship program, NERVA – which was approaching flight ready status with preparations for the XE-PRIME test, the successful testing of the flexible, (relatively) inexpensive Nuclear Furnace about to occur to speed not only prismatic fuel element development but also a variety of other reactor architectures (such as the nuclear lightbulb we began looking at last time), and the establishment of a robust hot fire testing structure at Jackass Flats – was fighting for its life, and its funding, in the halls of Congress. The national attention, after the success of Apollo 11, was turning away from space, and the missions that made NTR technologically relevant – and a good investment – were disappearing from the mission planners’ “to do” lists, migrating to “if we only had the money” ideas. The Rotating Fluidized Bed Reactor would be one of those casualties, and wouldn’t even last through the fiscal year.

This doesn’t mean that more work wasn’t done at Brookhaven, far from it! Both analytical and experimental work would continue on the design, with the new focus on the 90 kN thrust level, T/W optimized design discussed above making the effort more focused on the end goal.

Multi-program computational architecture used in 1972/73 for RBR, Hoffman 1973

On the analytical side, many of the components had reasonably good analytical models independently, but they weren’t well integrated. Additionally, new and improved analytical models for things like the turbopump system, system mass, temp and pressure drop in the reactor, and more were developed over the last year, and these were integrated into a unified modeling structure, involving multiple stacked models. For more information, check out the 1971-72 progress report linked in the references section.

The system developed was on the verge of being able to do dynamics modeling of the proposed reactor designs, and plans were laid out for what this proposed dynamic model system would look like, but sadly by the time this idea was mature enough to implement, funding had run out.

On the experimental side, further refinement of the test apparatus was completed. Most importantly, because of the new design requirements, and the limitations of the experiments that had been conducted so far, the test-bed’s nitrogen supply system had to be modified to deliver higher gas throughput for a much thicker fuel bed than had been experimentally tested. Because of the limited information about multi-gee centrifugal force behavior in a pebblebed, the existing experimental data could only inform the experimental program needed for the much thicker fuel bed the new design required.

Additionally, as was discussed the previous year, thermal transfer testing in the multi-gee environment was necessary to properly evaluate thermal transfer in this novel reactor configuration, but the traditional methods of thermal transfer testing simply weren’t an option. Normally, the procedure would be to subject the bed to alternating temperatures of gas: cold gas would be used to chill the pebbles to gas-ambient temperatures, then hot gas would be used on the chilled pebbles until they achieved thermal equilibrium at the new temperature, then cold gas would be used again, and so on. The temperature of the exit gas, the pebbles, and the amount of gas (and time) needed to reach each equilibrium state would be analyzed, allowing accurate heat transfer coefficients to be obtained at a variety of pebble sizes, centrifugal forces, propellant flow rates, and so on – but this is a very energy-intensive process.
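The physics behind that measurement can be sketched with a lumped-capacitance model: each pebble relaxes exponentially toward the gas temperature, and the measured time constant gives you the heat transfer coefficient. All the property values here are assumed placeholders, not figures from the reports:

```python
import math

# Lumped-capacitance model of one pebble suddenly exposed to gas at a new
# temperature; the measured time constant tau yields the heat transfer
# coefficient h. Property values used below are assumed placeholders.
def pebble_temperature(t, t_gas, t_start, h, d, rho_p, cp):
    """Valid when the Biot number h*d/(6*k_pebble) is much less than 1."""
    tau = rho_p * cp * d / (6.0 * h)   # (m*cp)/(h*A) for a sphere of diameter d
    return t_gas + (t_start - t_gas) * math.exp(-t / tau)

# 500 um pebble at rho ~6500 kg/m^3, cp ~400 J/kg-K, h ~1000 W/m^2-K (assumed),
# dropped from 77 K (cold nitrogen) into 300 K gas:
t_after_1s = pebble_temperature(1.0, 300.0, 77.0, 1000.0, 500e-6, 6500.0, 400.0)
```

With these placeholder numbers the pebble equilibrates in well under a second, which hints at why the full hot/cold cycling approach chews through so much gas.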

An alternative was proposed, which would basically split the test stand’s propellant inlet into two halves, one hot and one cold. Stationary thermocouples placed through the central void in the centrifuge would record variations in the propellant at various points, and the gradient as the pebbles moved from hot to cold gas and back could provide good quality data at a much lower energy cost, though data fidelity would fall in proportion to bed thickness. For a cash-strapped program, however, this was enough to get the data necessary to proceed with the 90 kN design that the RBR program was focused on.

Looking forward, while the team knew that this was the end of the line as far as current funding was concerned, they looked to how their data could be applied most effectively. The dynamics models were ready to be developed on the analytical side, and thermal cycling capability in the centrifugal test-bed would prepare the design for fission-powered testing. The plan was to address the acknowledged limitations with the largely theoretical dynamic model with hot-fired experimental data, which could be used to refine the analytical capabilities: the more the system was constrained, and the more experimental data that was collected, the less variability the analytical methods had to account for.

NASA had proposed a cavity reactor test-bed, which would serve primarily to test the open and closed cycle gas core NTRs also under development at the time, and which could theoretically have been used to test the RBR as well in a hot-fire configuration due to its unique gas injection system. Sadly, this test-bed never came to be (it was canceled along with most other astronuclear programs), so the faint hope for fission-powered RBR testing in an existing facility died as well.

The Last Gasp for the RBR

The final paper that I was able to find on the Rotating Fluidized Bed Reactor was by Ludewig, Manning, and Raseman of Brookhaven, in the Journal of Spacecraft, Vol. 11, No. 2, 1974. It summarized the work leading up to the Brookhaven program, as well as the Brookhaven program itself, and threw out some new ideas as possibilities. It’s evident reading the paper that they still saw promise in the RBR, and were looking to continue developing the project under different funding structures.

Other than a brief mention of the possibility of continuous refueling, though, the system largely sits where it was in the middle of 1973, and from what I’ve seen no funding was forthcoming.

While this was undoubtedly a disappointing outcome, one that virtually every astronuclear program in history has faced, and the RBR was never revived, the concept of a pebblebed NTR would gain new, better-funded interest in the decades to come.

This program, which has its own complex history, will be the subject for our next blog post: Project Timberwind and the Space Nuclear Thermal Propulsion program.


While the RBR was no more, the idea of a pebblebed NTR would live on, as I mentioned above. With a new, physically demanding job, finishing up moving, and the impacts of everything going on in the world right now, I’m not sure exactly when the next blog post is going to come out, but I have already started it, and it should hopefully be coming in relatively short order! After covering Timberwind, we’ll look at MITEE (the whole reason I’m going down this pebblebed rabbit hole, not that the digging hasn’t been fascinating!), before returning to the closed cycle gas core NTR series (which is already over 50 pages long!).

As ever, I’d like to thank my Patrons on Patreon, especially in these incredibly financially difficult times. I would be facing far greater motivation challenges without their support! They get early access to blog posts, 3d modeling work that I’m still moving forward on for an eventual YouTube channel, exclusive content, and more. If you’re financially able, consider becoming a Patron!

You can also follow me for more regular updates!


Rotating Fluidized Bed Reactor

Hendrie et al, “Rotating Fluidized Bed Reactor for Space Nuclear Propulsion, Annual Report: Design Studies and Experimental Results, June 1970 – June 1971,” Brookhaven NL, August 1971

Hendrie et al, “Rotating Fluidized Bed Reactor for Space Nuclear Propulsion, Annual Report: Design Studies and Experimental Results, June 1971 – June 1972,” Brookhaven NL, September 1972

Hoffman et al, “Rotating Fluidized Bed Reactor for Space Nuclear Propulsion, Annual Report: Design Studies and Experimental Results, July 1972 – January 1973,” Brookhaven NL, September 1973


Development and Testing History Nuclear Thermal Systems

The Nuclear Lightbulb – A Brief Introduction

Hello, and welcome back to Beyond NERVA! Really quickly, I apologize that I haven’t published more recently. Between moving to a different state, job hunting, and the challenges we’re all facing with the current medical situation worldwide, this post is coming out later than I was hoping. I have been continuing to work in the background, but as you’ll see, this engine isn’t one that’s easy to take in discrete chunks!

Today, we jump into one of the most famous designs of advanced nuclear thermal rocket: the “nuclear lightbulb,” more properly known as the closed cycle gas core nuclear thermal rocket. This will be a multi-part post on not only the basics of the design, but a history of the way the design has changed over time, as well as examining both the tests that were completed as well as the tests that were proposed to move this design forward.

Cutaway of simplified LRC Closed Cycle Gas Core NTR, image credit Winchell Chung of Atomic Rockets

One of the challenges that we saw with the liquid core NTR was that fission products could be released into the environment. This isn’t really a problem from the pollution side for a space nuclear reactor (we’ll look at the extreme version of this in a couple months with the open cycle gas core), but as a general rule it’s advantageous to avoid it, not least to keep the exhaust mass low (the reason we use hydrogen in the first place). In ideal circumstances, and with a high enough thrust-to-weight ratio, eliminating this release could even enable an NTR to be used in surface launches.

That’s the potential of the reactor type we’re going to be discussing today, and in the next few posts. Due to the complexities of this reactor design, and how interconnected all the systems are, there may be an additional pause in publication after this post. I’ve been working on the details of this system for over a month and a half now, and am almost done covering the basics of the fuel itself… so if there’s a bit of delay, please be understanding!

The closed cycle gas core uses uranium hexafluoride (UF6) as fuel, which is contained within a fused silica “bulb” to form the fuel element – hence the popular name “nuclear lightbulb”. Several of these are distributed through the reactor’s active zone, with liquid hydrogen coolant flowing through the silica bulb, and then the now-gaseous hydrogen passing around the bulbs and out the nozzle of the reactor. This is the most conservative of the gas core designs, and only a modest step above the vapor core designs we examined last time, but still offers significantly higher temperatures, and potentially higher thrust-to-weight ratios, than the VCNTR.

A combined research effort by NASA’s Lewis (now Glenn) Research Center and United Aircraft Corporation in the 1960s and 70s made significant progress in the design of these reactors, but sadly, with the demise of the AEC and NASA efforts in nuclear thermal propulsion, the project languished on the shelves of astronuclear research. While it has seen a resurgence of interest in popular media in recent decades, most designs for spacecraft that use the lightbulb reactor reference the efforts from the 60s and 70s in their reactor designs – despite this being, in many ways, one of the most easily tested advanced NTR designs available.

Today’s blog post focuses on the general shape of the reactor: its basic geometry, a brief examination of its analysis and testing, and the possible uses of the reactor. The next post will cover the analytical studies of the reactor in more detail, including the limits of what this reactor could provide, what tradeoffs would be required to make a practical NTR, and the practicalities of the fuel element design itself. Finally, in the third we’ll look at the testing that was done, and what could have been done with in-core fission-powered testing, the lessons learned from this testing, and maybe even some possibilities for modern improvements to this well-known, classic design.

With that, let’s take a look at this reactor’s basic shape, how it works, and what the advantages of and problems with the basic idea are.

Nuclear Lightbulb: Nuclear Powered Children’s Toy (ish)

Easy Bake Oven, image Wikimedia

For those of us of a certain age, there was a toy that was quite popular: the Easy-Bake Oven. This was a very simple toy: an oven designed for children with minimal adult supervision to be able to cook a variety of real baked goods, often with premixed dry mixes or simple recipes. Rather than having a more normal resistive heating element as you find in a normal oven, though, a special light bulb was mounted in the oven, and the waste heat from the bulb would heat the oven enough to cook the food.

Closed cycle gas core bulb, image DOE colorized by Winchell Chung

The closed cycle gas core NTR takes this idea and ramps it up to the edge of what materials limits allow. Rather than a tungsten wire, the heat in the bulb is generated by a critical mass of uranium hexafluoride, a gas at room temperature that is used in, among other things, fissile fuel enrichment. This is contained in a fused silica bulb made up of dozens of very thin tubes – not much different in material, but very different in design, compared to the Easy-Bake Oven’s bulb – which holds the fissile fuel and prevents the fission products from escaping. The fuel heats from gas to plasma, and forms a vortex in the center of the fuel element.

Axial cross-section of the fuel/buffer/wall region of the lightbulb, Rodgers 1972

To further protect the bulb from direct contact with the uranium and free fluorine, a gaseous barrier of noble gas (either argon or neon) is injected between the fuel and the wall of the bulb itself. Because of the extreme temperatures, the majority of the electromagnetic radiation coming off the fuel isn’t in the form of infrared (heat), but rather ultraviolet radiation, which the silica is transparent to, minimizing the amount of energy deposited in the bulb itself. To protect the silica bulb further, microparticles of the same silica are added to the neon flow to absorb the portion of the radiation the bulb isn’t transparent to before it can reach the bulb. This neon passes around the walls of the chamber, creating a vortex in the uranium which further constrains it, and then passes out of one or both ends of the bulb. It then goes through a purification and cooling process, using a cryogenic hydrogen heat exchanger and gas centrifuge, before being reused.
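Wien’s displacement law gives a quick feel for why the emission is dominated by UV rather than infrared at these temperatures. The temperatures below are illustrative round numbers of mine, not design values from the UA studies:

```python
# Wien's displacement law: lambda_peak = b / T. At lightbulb-class fuel
# temperatures, blackbody emission peaks in the near-ultraviolet rather
# than the infrared, which is why a UV-transparent silica wall helps.
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_nm(T_kelvin):
    return WIEN_B / T_kelvin * 1e9

# Illustrative temperatures (not design values from the UA studies):
for T in (3000, 6000, 8000):
    print(f"{T} K blackbody peaks at {peak_wavelength_nm(T):.0f} nm")
```

A lamp filament near 3000 K peaks deep in the infrared, while a fuel plasma in the several-thousand-kelvin range pushes the peak below the ~400 nm visible/UV boundary.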

Now, of course there is still an intense amount of energy generated in the fuel which will be deposited in the silica, and will attempt to melt the bulb almost instantly, so the bulb must be cooled regeneratively. This is done by liquid hydrogen, which is also mostly transparent to the majority of the radiation coming off the fuel plasma, minimizing the amount of energy the coolant absorbs from anything but the silica of the bulb itself.

Finally, the now-gaseous hydrogen from both the neon and bulb cooling processes, mixed with any hydrogen needed to cool the pressure vessel, reflectors, and other components of the reactor, is seeded with microparticles of tungsten to increase the amount of the fuel’s UV radiation that the propellant absorbs. This seeded hydrogen then passes around the bulbs in the reactor, getting heated to its final temperature, before exiting the nozzle of the NTR.

Overall configuration, Rodgers 1972

The most commonly examined version of the lightbulb uses a total of seven bulbs, each made up of a spiral of hydrogen coolant channels in fused silica. This configuration was pioneered by NASA’s Lewis Research Center (LRC), and studied by United Aircraft Corporation (UA). These studies were carried out between 1963 and 1972, with a small number of follow-up studies at UA completed by 1980. The reference design was a 4600 MWt reactor fueled by 233U, with a specific impulse of 1870 seconds and a thrust-to-weight ratio of 1.3.
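As a back-of-envelope check on those quoted figures, here is the ideal-limit relationship between thermal power, exhaust velocity, and thrust. This assumes perfect conversion of thermal power into jet power, so the thrust it prints is an upper bound, not the design thrust:

```python
G0 = 9.80665  # standard gravity, m/s^2

isp = 1870.0       # quoted specific impulse, s
P_thermal = 4.6e9  # quoted reactor power, W (4600 MWt)

v_e = isp * G0                    # effective exhaust velocity, m/s
F_ideal = 2.0 * P_thermal / v_e   # thrust IF all thermal power became jet power
m_implied = F_ideal / (1.3 * G0)  # engine mass implied by T/W = 1.3 at that thrust

print(f"exhaust velocity ≈ {v_e/1e3:.1f} km/s")
print(f"upper-bound thrust ≈ {F_ideal/1e3:.0f} kN")
print(f"implied engine mass ≈ {m_implied/1e3:.1f} t")
```

Real losses (radiation to structure, cooling loops, nozzle inefficiency) mean the actual design thrust sits below this bound, but the numbers hang together: multi-hundred-kilonewton thrust from a multi-gigawatt core at a thrust-to-weight ratio near one.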

A smaller version of this system, using a single bulb rather than seven, was proposed by the same team for probe missions and the like, but unfortunately the only papers are behind paywalls.

During the re-examination of nuclear thermal technology in the early 1990s by NASA and the DOE, the design was re-examined briefly to assess the advantages that the design could offer, but no advances in the design were made at the time.

Since then, while interest in this concept has grown, new studies have not been done, and the design remains dormant despite the extensive amount of study which has been carried out.

What’s Been Done Before: Previous Studies on the Lightbulb

Bussard 1958

The first version of the closed cycle gas core was proposed by Robert Bussard in 1958. This design looked remarkably like the firing chamber of an internal combustion engine, with the UF6 gas being mechanically compressed to a critical density by a piston. Coolant would be run across the outside of the fuel element and then exit the reactor through a nozzle. While this design hasn’t been explored in any depth that I’ve been able to determine, a new version using pressure waves rather than mechanical pistons to compress the gas into a critical mass has been explored in recent years (we’ll cover that in the open cycle gas core posts).

Starting in 1963, United Aircraft (UA, a subsidiary of United Technologies) worked with NASA’s Lewis Research Center (LRC) and Los Alamos Scientific Laboratory (LASL) on both the open and closed cycle gas core concepts, but the difficulties of containing the fuel in the open cycle concept caused the company to focus exclusively on the closed cycle concepts. Interestingly, according to Tom Latham of UA (who worked on the program), the design was limited in both mass and volume by the then-current volume of the proposed Space Shuttle cargo bay. Another limitation of the original concept was that no external radiators could be used for thermal management, due to the increased mass of the closed radiator system and its associated hardware.

System flow diagram, Rodgers 1972

The design that evolved was quite detailed, and also quite efficient in many ways. However, the sheer number of interdependent subsystems makes it fairly heavy, limiting its potential usefulness and increasing its complexity.

In order to get there, a large number of studies were done on various subsystems and physical behaviors, and due to the extreme nature of the system design itself, many experimental apparatuses had to be not only built but redesigned multiple times to get the results needed to design this reactor.

We’ll look at the testing history more in depth in a future blog post, but it’s worth looking at the types of tests that were conducted to get an idea of just how far along this design was:

RF Heating Test Apparatus, Roman 1969

Both direct current and radio frequency testing of simulated fuel plasmas were conducted, starting with the RF (induction heating) testing at the UA facility in East Hartford, CT. These studies typically used tungsten in place of uranium (a common practice, still used today) since it’s both massive and somewhat similar to uranium in its physical properties. At the time, argon was considered for the buffer gas rather than neon; this change in composition is something we’ll look at later in the detailed testing post.

Induction heating works by using an oscillating magnetic field to induce currents in an electrically conductive material, heating it resistively. It is a good option for nuclear testing since it can heat the simulated fuel more evenly and achieve very high temperatures – it’s still used for nuclear fuel element testing today, not only in the Compact Fuel Element Environment Test (CFEET) test stand, but also in the Nuclear Thermal Rocket Environmental Effects Simulator, both of which I’ve covered previously. One of the challenges of this sort of heating, though, is the induction coil, the device that creates the heating in the material. In early testing the team managed to melt the copper coil they were using due to resistive heating (the same way heat is made in a space heater or oven), and constructing a higher-powered apparatus wasn’t possible for the team.
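The coupling between an RF coil and its load is governed by how shallowly the field penetrates the conductor, which the classical skin depth captures. A quick sketch: the resistivity below is room-temperature tungsten, but the frequency is an illustrative assumption of mine, not a value from the UA test reports:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth_m(resistivity_ohm_m, freq_hz):
    """Classical skin depth: delta = sqrt(2*rho / (mu0 * omega))."""
    return math.sqrt(2.0 * resistivity_ohm_m / (MU0 * 2.0 * math.pi * freq_hz))

rho_w = 5.6e-8  # tungsten resistivity at room temperature, ohm*m
f_rf = 5e6      # assumed RF frequency for illustration, Hz

print(f"skin depth in tungsten at {f_rf/1e6:.0f} MHz ≈ "
      f"{skin_depth_m(rho_w, f_rf)*1e6:.0f} µm")
```

Depositing megawatt-class power through a layer tens of microns thick is exactly why the coil currents, and thus the resistive losses in the copper, grew large enough to melt the coil.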

This led to direct current heating tests, which use an electrical arc through the tungsten plasma. This isn’t as good at simulating the way heat is distributed in the plasma body, but it could achieve higher temperatures, which was important for testing the stability of the vortex generated not only by the internal heating of the fuel, but also by the interactions between the fuel and the neon containment system.

Spectral flux from the edge of the fuel body, Rodgers 1972 (will be covered more in depth in another post)

Another concern was determining which frequencies of radiation silica, aluminum, and neon were transparent to. By varying the temperature of the fissioning fuel mass, the frequency of the radiation could, to a certain degree, be tuned to maximize how much energy would pass through both the noble gas (then argon) and the bulb structure itself. Again, at the time (and to a certain extent later), the bulb configuration was slightly different: a layer of aluminum was added to the inner surface of the bulb to reflect more thermal radiation back into the fissioning fuel, increasing the heating, and therefore the temperature, of the fuel. We’ll look at how this design option changed over time in future posts.

More studies and tests were done looking at the effects of neutron and gamma radiation on reactor materials. These are significant challenges in any reactor, but the materials being used in the lightbulb reactor are unusual, even by the standards of astronuclear engineering, so detailed studies of the effects of these radiation types were needed to ensure that the reactor would be able to operate throughout its required lifetime.

Fused silica test article, Vogt 1970

Perhaps one of the biggest concerns was verifying that the bulb itself would maintain both its integrity and its functionality throughout the life of the reactor. Silica is a highly unusual material for a nuclear reactor, and the fact that it needed to remain transparent to a useful range of radiation, while containing both hydrogen and a noble gas seeded with silica particles, all while being bombarded with neutrons (which would change its crystalline structure) and gamma rays (which would deposit energy throughout the material), was a major focus of the program. On top of that, the walls of the individual tubes that made up the bulbs needed to be incredibly thin, and the shape of each of the individual tubes was quite unusual, so there were significant experimental manufacturing considerations to deal with. Neutron, gamma, and beta (high energy electron) radiation could all have their effects on the bulb over the course of the reactor’s lifetime, and these effects needed to be understood and accounted for. These tests were mostly successful, with some interesting materials properties of silica discovered along the way, but when Dr. Latham discussed this project 20 years later, one of the things he mentioned was that modern materials science could possibly offer better alternatives to the silica tubing – a concept that we will touch on again in a future post.

Another challenge of the design was that it required seeding two different materials into two different gasses: the neon/argon had to be seeded with silica in order to protect the bulb, and the hydrogen propellant needed to be seeded with tungsten to make it absorb the radiation passing through the bulb as efficiently as possible while minimizing the increase in the propellant’s mass. While the hydrogen seeding process was being studied for other reactor designs – we saw this in the radiator liquid fueled NTR, and will see it again in open cycle gas core and some solid core designs we haven’t covered yet – the silica seeding was a new challenge, especially because the seeded particles were made of the same material as the bulb walls the seeded gas would flow along.

Image DOE via Chris Casilli on Twitter

Finally, there’s the challenge of nuclear testing. Los Alamos Scientific Laboratory conducted some fission-powered tests, which proved the concept in theory, but these were low-powered bench-top tests (which we’ll cover in depth in the future). To really test the design, it would be ideal to do a hot-fire test of an NTR. Fortunately, at the time the Nuclear Furnace test-bed was being completed (I’ve covered NERVA hot fire testing and the Nuclear Furnace’s exhaust scrubbers in earlier posts). This meant that it was possible to use this versatile test-bed to test a single, sub-scale lightbulb in a controlled, well-understood system. While this test was never actually conducted, much of the preparatory design work for the test was completed – another thing we’ll cover in a future post.

A Promising, Developed, Unrealized Option

The closed cycle gas core nuclear thermal rocket is one of the most perennially fascinating concepts in astronuclear history. Not only does it offer an option for a high-temperature nuclear reactor which is able to avoid many of the challenges of solid fuel, but it offers better fission product containment than any other design besides the vapor core NTR.

It is also one of the most complex systems that has ever been proposed, with two different types of closed cycle gas systems involving heat exchangers and separation systems supporting seven different fuel chambers, a host of novel materials in unique environments, the need to tune both the temperature and emissivity of a complex fuel form to ensure the reactor’s components won’t melt down, and the constant concerns of mass and complexity hanging over the heads of the designers.

Most of these challenges were addressed in the 1960s and 1970s, with most of the still-unanswered questions needing testing that simply wasn’t possible at the time of the project’s cancellation due to shifting priorities in the space program. Modern materials science may also offer better solutions than those that were available at the time, both in the testing and the operation of this reactor.

Sadly, updating this design has not happened, but the original design remains one of the most iconic designs in astronuclear engineering.

In the next two posts, we’ll look at the testing done for the reactor in detail, followed by a detailed look at the reactor itself. Make sure to keep an eye out for them!

If you would like to support my work, consider becoming a Patreon supporter. Not only do you get early access to blog posts, but I post extra blogs, images from the 3d models I’m working on of both spacecraft and reactors, and more! Every bit helps.

You can also follow me on Twitter for more content and conversation!


McLafferty, G.H. “Investigation of Gaseous Nuclear Rocket Technology – Summary Technical Report” 1969

Rodgers, R.J. and Latham, T.S. “Analytical Design and Performance Studies of the Nuclear Light Bulb Engine” 1972

Latham, T.S. “Nuclear Light Bulb,” 1992

History Nuclear Thermal Systems

Radiator LNTR: The Last of the Line

Hello, and welcome back to Beyond NERVA! Today, we’re finishing (for now) our in-depth look at liquid fueled nuclear thermal rockets, by looking at the second major type of liquid NTR (LNTR): the radiator-type LNTR. If you’re just joining us, make sure to check out the introduction (available here) and the bubbler post (available here) for some important context to understand how this design got here.

Rather than passing the propellant directly through the molten fuel, in this system the propellant would pass through the central void of the fuel element, heated primarily through radiation (some convection within the propellant flow would occur, but overall it was a minor effect), hence the name.

This concept had been mentioned in previous works on bubbler-type LNTRs, and initial studies on the thermalization behavior of the propellant (and, conversely, the cooling behavior of the fuel) were conducted during the early 1960s, but the first major study wouldn’t occur until 1966. However, its development would extend into the 1990s, making it a far longer-lived design.

Let’s begin by looking at the differences between the bubbler and radiator designs, and why the radiator offers an attractive trade-off compared to the bubbler.

The Vapor Problem, or Is Homogenization of Propellant/Fuel Temp Worth It?

Liquid fuels offer many advantages for an NTR, including the fact that the fuel will distribute its heat evenly across the volume of the fuel element, the fact that the effective temperature of the fuel can be quite high, and that the fuel is able to be reasonably well contained with minimal theoretical challenges.

The bubbler design had an additional advantage: by passing the propellant directly through the fuel in discrete enough bundles (the bubbles themselves), the fuel and the propellant would reach the same temperature.

Maximum specific impulse due to vapor pressure, Barrett Jr.

Sadly, there are significant challenges to making this sort of nuclear reactor into a rocket, the biggest one being propellant mass. These types of NTRs still use hydrogen propellant; the problem occurs in the fuel mass itself. As each bubble moves through the zirconium/niobium-uranium carbide fuel, it heats up rapidly, and its pressure drops significantly in the process. This means that all of the components of the fuel (the Zr/Nb, C, and U) end up vaporizing into the bubbles, to the point that each bubble is completely saturated by a mix of these elements in vapor form by the time it exits the fuel body. This is called vapor entrainment.

This is a major problem, because it means that the propellant leaving the nozzle has a far higher molecular mass than the hydrogen that was originally fed into the system. A different propellant might not entrain as much of the fuel mass, but it would also have a higher molecular mass to start with – likely to the point that the losses would outweigh the gains (if you feel like exploring this trade-off on a more technical footing, please let me know! I’d love to explore this more) – and it wouldn’t eliminate the entrainment problem.
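The size of the entrainment penalty falls straight out of ideal-rocket scaling, where specific impulse goes as the square root of temperature over molar mass. A short sketch, with the post-entrainment mixture masses being assumed round numbers for illustration, not values from the LNTR studies:

```python
import math

def isp_ratio(m_clean, m_entrained):
    """Ideal-rocket scaling: isp is proportional to sqrt(T/M) at a fixed
    chamber temperature, so raising the mean molar mass M from m_clean
    to m_entrained multiplies isp by sqrt(m_clean/m_entrained)."""
    return math.sqrt(m_clean / m_entrained)

M_H2 = 2.016  # molar mass of pure hydrogen, g/mol

# Assumed mixture molar masses after entrainment (illustrative only):
for m_mix in (2.5, 3.0, 4.0):
    loss_pct = (1.0 - isp_ratio(M_H2, m_mix)) * 100.0
    print(f"mean molar mass {m_mix} g/mol -> ~{loss_pct:.0f}% isp loss")
```

Even a modest amount of heavy carbide vapor in the exhaust eats a double-digit percentage of the specific impulse, which is why entrainment dominated the bubbler trade studies.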

This led people to wonder whether you have to pass the propellant through the fuel in the first place. After all, while there is a thermodynamically appealing symmetry to homogenizing your fuel and propellant temperatures, this isn’t actually necessary, is it? The fuel elements are already annular in shape, so why not use them as a more traditional fuel element for an NTR? The lower surface area would mean that the inconveniently high vapor pressure of the fuel was mitigated, since the majority of the propellant would never come in contact with the fuel (or even with the layer of propellant that does interact with the fuel), so the overall propellant molecular mass would be kept low… right?

The problem is that this means the only significant method of heating the propellant becomes radiation (there’s a small amount of convection, but it’s negligible)… which isn’t that great in hydrogen, especially in the UV spectrum where most of the photons from the nuclear reaction are emitted. The possibility of using either microparticles or vapors which would absorb the UV and re-emit it at a longer wavelength, more easily absorbed by the hydrogen, was already being investigated in relation to gas core NTRs (which have the same problem, but at a completely different order of magnitude) and offered promise, but it was also a compromise: it deliberately increases the molar mass of the propellant in one way to minimize the molar mass in a different way. This was a design possibility that needed to be carefully studied before it could be considered more feasible than the bubbler LNTR.

The leader of the effort to study this trade-off was one of the best-known fluid fueled NTR designers on the NASA side: Robert Ragsdale at Lewis Research Center (LRC; we’ll come back to Ragsdale in gas core NTR design as well). A collection of studies grew around a particular design, beginning with one looking at reactor geometry and fuel element size optimization, not only to maximize the thrust and specific impulse but also to minimize the uranium loss rate of the reactor.

This study concluded that the radiator-type LNTR held many advantages over the bubbler-type. First, the vapor entrainment problem that was plaguing the bubbler was minimized, although not completely eliminated, in the radiator design. Next, the specific impulse of the engine could be maintained, or even increased, to 1400 s or more. Finally, one of the most striking improvements was in thrust-to-core-weight ratio, which went from about 1:1 in the Nelson/Princeton design that we discussed in the last post all the way up to (potentially) 19:1! This is because the propellant flow rate isn’t limited by the bubble velocity moving through the fuel (for more detail on this problem, and the other related constraints, check out the last blog post).

These conclusions led NASA to gather a team of researchers, including Ragsdale, Kasack, Donovan, and Putre, to develop the Lewis LNTR reactor.

Lewis LNTR: The First of the Line

Lewis Radiator LNTR, Ragsdale 1967

Once the basic feasibility of the radiator LNTR was demonstrated, a number of studies were conducted to determine the performance characteristics, as well as the basic engineering challenges, facing this type of NTR. They were conducted in 1967/68, and showed distinct promise for the desired 2000 to 5000 MWt power range (similar to the power goal of Phoebus 2, whose 2A test remains the most powerful nuclear reactor ever operated, at roughly 4000 MWt).

Fuel tube cross-section, Putre 1968

As with any design, the first question was the basic reactor configuration. The LRC team never looked at a single-tube LNTR, for a variety of reasons, and instead focused their efforts on a multi-tube design, but the number and diameter of the tubes was one of the major questions to be addressed in the initial studies. Because of this, and the particular characteristics of the heat transfer required, the reactor would have many fuel elements with diameters between 1 and 4 inches, but which diameter was best would be a matter of quite some study.

Another question for the study team was what the fuel element temperature would be. As in every NTR design, the hotter the propellant, the higher the isp (all other things being equal), but as we saw in the bubbler design, higher temperatures also mean higher vapor pressure, meaning that fuel mass is lost more easily into the propellant – which increases the propellant’s molecular mass and reduces the isp, at some point costing more specific impulse through added mass than is gained through the higher temperature. Because the propellant and the fuel would only interact at the surface of the fuel element, the surface temperature of the fuel was the overriding consideration, and it was explored in the range of 5000 to 6100 K.

Effect of Reactor Pressure on T/W Ratio and U mass loss ratio in H, Ragsdale 1967

The final consideration which was optimized in this design was engine operating pressures. Because this design wasn’t fundamentally limited by the bubble velocity and void fraction of the molten fuel, the chamber pressure could be increased significantly, leading to both more thrust and a higher thrust-to-weight ratio. However, the trade-off here is that at some point the propellant isn’t being completely thermalized, resulting in a lower specific impulse. This final consideration was explored in the range of 200 to 1000 atm (2020-10100 N/cm2).

The three primary goals were: to maximize specific impulse, maximize thrust-to-weight ratio, and minimize uranium mass loss. They quickly discovered that they couldn’t have their cake and eat it, too: higher temperatures, and therefore higher isp, led to faster U mass loss rates, increasing T/W ratio reduced the specific impulse, and minimizing the U loss rate hurt both T/W and isp. They could improve any one (or often two) of these characteristics, but always at the cost of the third characteristic.

Four potential LNTR configurations, note the tradeoffs between isp, T/W, and fuel loss rates. Ragsdale 1967

We’ll look at many of the design characteristics and engineering considerations of the LRC work in the next section on general design challenges and considerations for the radiator LNTR, but for now we’ll look at their final compromise reactor.

The reactor itself would be made up of several (oddly, never specified) fuel elements, in a beryllium structure, with each fuel element being made up of Be as well. These would be cooled by cryogenic hydrogen moving from the nozzle end to the spacecraft end of the reactor, before flowing back into the central void of the fuel element. As it was fed through the central annulus, it would be seeded with tungsten microparticles to increase the amount of heat the propellant would absorb. Finally, it would be exhausted through a standard De Laval nozzle to provide thrust.

Reference LRC LNTR design characteristics, Putre 1968

The final fuel that they settled on was a liquid ternary carbide design, with the majority of the fuel being niobium carbide (although ZrC was also considered), with a molar mass fraction of 0.02 being UC2. This compromise offered good power density for the reactor while minimizing the vaporization rate of the fuel mass. This would be held in 2 inch diameter, 5 foot long fuel element tubes, with a fuel surface temperature of 5060 K. The propellant would be pressurized to 200 atm in the reactor.

Final LRC LNTR Fuel Characteristics, Putre 1968

This led to a design that struck a compromise between isp, T/W, and U mass loss which was not only acceptable, but impressive: 1400 s isp (on par with some forms of electric propulsion), a T/W ratio (of the core alone) of 4, and a hydrogen-to-uranium flow rate ratio of 50.
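As a sanity check on numbers like these, the jet-power relation ties reactor power, isp, and thrust together. The reactor power and conversion efficiency below are my own illustrative assumptions (the LRC studies quote powers in the 3,000-5,000 MWt range):

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_from_power(p_thermal_w, isp_s, efficiency=0.8):
    """Thrust if `efficiency` of the reactor's thermal power ends up as
    jet power: P_jet = 0.5 * mdot * ve^2 and F = mdot * ve
    combine to give F = 2 * eta * P / ve."""
    ve = isp_s * G0
    return 2.0 * efficiency * p_thermal_w / ve

# Illustrative: a 4,000 MWt core at 1400 s isp yields several hundred kN
f_newtons = thrust_from_power(4.0e9, 1400.0)
```

This also shows why high-isp engines are power-hungry: at fixed power, thrust falls inversely with exhaust velocity.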

They did observe that none of these characteristics were as high as they could individually be, especially the T/W ratio (which they calculated could go as high as 19!) and the isp (with a theoretical maximum of 1660 s), and the uranium loss rate was twice the theoretical minimum; sadly, the engineering cost of maximizing any one of these characteristics was so high that it wasn't feasible.

Sadly, I haven’t been able to find any documentation on this reactor design – and very few references to it – after February 1968. The exact time of the cancellation, and the reasons why, are a mystery to me. If someone is able to help me find that information it would be greatly appreciated.

LARS: The Brookhaven Design

LARS cross section

The radiator LNTR would remain dormant for decades, as astronuclear funding was scarce and focused on particular, well-characterized systems (most of which were electric powerplant concepts), until the start of the Space Exploration Initiative. In 1991, a conference was held to explore the use of various types of NTR in future crewed space missions. This led to many proposals, including one from the Department of Energy’s Brookhaven National Laboratory in New York. This was the Liquid Annular Reactor System, or LARS.

A team of physicists and engineers, including Powell, Ludewig, Lazareth, and Maise, decided to revisit the radiator LNTR design, but as far as I can tell didn't use any of the research done by the LRC team. Given the different design philosophies, the lack of references, and the general compartmentalization of knowledge within the different parts of the astronuclear community, I can only conclude that they began this design from scratch (if this is incorrect, and anyone has knowledge of this program, please get in contact with me!).

LARS was a very different design from the LRC concept, and seems to have gone through two distinct iterations. Rather than the high-pressure system that the LRC team investigated, this was a low-pressure, low-thrust design, which optimized a different characteristic: hydrogen dissociation. This maximizes the specific impulse of the NTR by reducing the molecular mass of the propellant to the lowest theoretically possible value at a given propellant temperature (yielding up to 1600 s, according to the BNL team). The other main distinction from the LRC design was the power level: rather than a very powerful reactor (3000 to 5000 MWt), this was a modest reactor of only 200 MWt. This leads to a very different set of design tradeoffs, but many of the engineering and materials challenges remain the same.
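The dissociation payoff follows directly from the √(1/M) scaling of exhaust velocity at fixed temperature. A minimal sketch (the dissociation fractions are illustrative, not BNL figures):

```python
import math

def mean_molar_mass_h2(alpha, m_h2=2.016):
    """Mean molar mass (g/mol) of hydrogen with a fraction `alpha` of the
    original H2 dissociated: 1 mol H2 -> (1-alpha) mol H2 + 2*alpha mol H,
    i.e. 1+alpha moles carrying the same 2.016 g."""
    return m_h2 / (1.0 + alpha)

def isp_gain(alpha):
    """Relative isp at fixed temperature, since isp scales as sqrt(1/M)."""
    return math.sqrt(mean_molar_mass_h2(0.0) / mean_molar_mass_h2(alpha))

# Full dissociation buys ~41% more isp at the same temperature, which is
# why a low-pressure core (dissociation increases as pressure falls) can
# chase 1600+ s where a high-pressure core cannot.
```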

LARS would continue to use NbC diluted with UC2, but the fuel would not completely melt in the fuel element, leaving a solid layer against the walls of the beryllium fuel element tube. This tube would in turn be regeneratively cooled by hydrogen flowing through a number of channels in the drum, as well as through a gap surrounding the body of the fuel element, which would also be filled with cold hydrogen. A drive system attached at the cold end of the tube would spin it at an appropriate rate (which was not detailed in the papers). The main changes between the two iterations were in the fuel element configuration, size, and number.

The first iteration of LARS was an interesting concept, using a folded-flow system. This used many small fuel element tubes, arranged in a similar manner to the flow channels in the Dumbo reactor, with the propellant moving from the center of the reactor to the outer circumference, before being ejected out of the nozzle of the reactor. Each layer of fuel elements contained eleven individual tubes, with between 1 and 10 layers of fuel elements in the reactor. As the number of layers increased, the length and radius of the fuel elements decreased.

One important note made by the team at this early design stage was that the perpendicular fuel element orientation would minimize the amount of fission products ejected from the rocket. I'm unable to determine how this was accomplished, however, unless the fission products were solids that would stick to the outside of the propellant flow path.

Unfortunately, I haven't been able to discover exactly why this design was abandoned for a more traditional LNTR architecture, but the need to cool the entire exterior of the reactor to keep it from melting seems a likely concern. Reversing the flow, with the hot propellant in the center of the reactor rather than at the outer circumference, seems like an easy fix if this was the primary issue, but later discussions of the reactor architecture seem to simply ignore this early iteration. Another complication would be the complexity of the reactor architecture: whether dedicated motors or a geared system allowing one motor to spin multiple fuel elements was used, a complex drive system would be needed, one that would not only be more prone to breakdown but would also require far more mass than a simpler arrangement.

The second version of LARS kept the same fuel type, power output, and low-pressure operation, but rather than the folded flow concept it went with seven fuel elements in a beryllium body. The propellant would first cool the nozzle of the rocket, then the rotating beryllium drum containing each fuel element, before entering the main propellant channel. The final thermalization of the propellant would be aided by tungsten microparticles seeded into the H2, necessary due to the low partial pressure and high transparency of pure H2 (while the vapor pressure issues of any LNTR were acknowledged, their effect on thermalization seems not to have been considered a significant factor in the seeding requirement). Two versions, distinguished by the emissivity of the fuel element, were proposed.

Final two LARS options, f is fuel emissivity, Maise 1999

This design was targeted to reach up to 2000 s isp, but due to uncertainties in U loss rates (as well as C and Nb), the overall mass of the propellant upon exiting the reactor was uncertain, so the authors used a range of 1600-2000 s. The thrust of the engine was approximately 20,000 N, which would result in a T/W ratio of about 1:1 when including a shadow shield (one author points out that without the shield the ratio would be about 3:1 to 4:1).
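Working backward from those figures gives a feel for how much of the system is shield. The thrust and T/W ratios are from the papers; taking 3.5:1 as the midpoint of the unshielded range is my own assumption:

```python
G0 = 9.80665          # standard gravity, m/s^2
THRUST_N = 20000.0    # approximate thrust from the BNL papers

m_total = THRUST_N / (1.0 * G0)    # T/W ~ 1:1 with the shadow shield
m_engine = THRUST_N / (3.5 * G0)   # assumed midpoint of the 3:1-4:1 range
m_shield = m_total - m_engine      # shield mass implied by the difference

# Roughly 2 tonnes total, of which the shadow shield is the large majority.
```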

I have been unable to find the research reports themselves for this program (unlike the LRC design), so the specifics of the reactor physics tradeoffs, engineering compromises, actual years of research, and the like aren't something that I'm able to discuss. The majority of my sources are conference papers and journal articles published in 1991 and 1992, but there was one paper from 1999, so the concept was at least under discussion through the 1990s (interestingly, that paper discussed using LARS for the 550 AU mission concept, which was later remade into the FOCAL gravitational lens mission).

This seems to be the last time that LARS has been mentioned in the technical literature, so while it is mentioned as the “baseline” liquid core concept in places such as Atomic Rockets, it has not been explored in depth since.

Lessons Learned, Lessons to Learn: The Challenges of LNTR

In many ways, the apparent dual genesis of radiator LNTRs offers a look into two distinct thought processes about the challenges of this engine type. One example is what each team emphasizes in the “fundamental challenges” portions of the introductory sections of their reports: the LRC team focuses on minimizing vapor entrainment, whereas the BNL presentations seem most concerned with establishing that “yes, containing a refractory in a spinning, gas cooled drum is relatively trivial.” This juxtaposition of foci is interesting to me, as an examination of the different engineering philosophies of the two teams, and the priorities of their times.

Wall Construction

Both the LRC and LARS LNTRs ended up with similar fuel element configurations: a high-temperature material, with coolant tubes running the length of the fuel element walls to regeneratively cool them. This material would have to withstand not only the temperature of the fuel element, but also resist chemical attack by the hydrogen used for regenerative cooling, as well as the mechanical strain of the spinning fuel and the torque from whatever drive system spins the fuel element to maintain the centripetal force containing the fuel.

Another constant concern is the temperature of the wall. While high temperature loadings can be handled using regenerative cooling, the more heat that is removed from the fuel during the regenerative cooling step, the lower the specific impulse of the engine. A table in the LRC study examines the implications of wall cooling ratio versus specific impulse for that design, and the trend applies as a general rule of thumb for LARS as well.
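The qualitative trend behind that table can be sketched with a toy mixing model: some fraction of the propellant is capped near the wall temperature rather than the fuel-surface temperature, and the mixed-mean temperature then sets isp via the √T scaling. All numbers here (wall temperature, core temperature, cooling fraction, constant cp) are illustrative assumptions, not LRC values:

```python
import math

def isp_ratio(frac_cooled, t_wall_k=2800.0, t_core_k=5060.0):
    """Toy model: a fraction of the flow exits near the wall temperature
    instead of the fuel-surface temperature; the mixed-mean temperature
    then sets isp via isp ~ sqrt(T) (constant cp assumed)."""
    t_mix = frac_cooled * t_wall_k + (1.0 - frac_cooled) * t_core_k
    return math.sqrt(t_mix / t_core_k)

# e.g. capping 20% of the flow near the wall costs roughly 5% of the isp
ratio_20 = isp_ratio(0.20)
```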

However, from there the two designs differed significantly. The LARS fuel element is far simpler: a can of beryllium, with 20% of its volume devoted to the regenerative cooling channels. As mentioned previously, the fuel didn't become completely molten; a solid layer (mostly the ZrC/NbC component, with very little U) remained against the wall. Surrounding the outside of the fuel element can was another coolant gap. The can would be mounted to the reactor body with a drive system at the ship end and a bearing at the hot end, all within the stationary moderator which made up the majority of the internal volume of the reactor core, insulated from the heat of the fuel elements in a very heterogeneous temperature profile.

The LRC concept, on the other hand, was far more complex in some ways. Rather than a metal can, the LRC design used graphite, which maintains its strength far better than most metals at high temperatures. A number of options were considered to protect the wall of the can, not only because the fuel mixture could attack the graphite (the carbon could dissolve into the carbide of the fuel element), but also because of attack by the hydrogen in the coolant channels (which could be addressed much as NERVA fuel elements used refractory metal coatings to prevent the same erosive effects).

Since the fuel in the LRC design would be completely molten across the entire volume of the fuel element, wall protection was a more complex challenge there. A number of options were considered to minimize the wall heating of the fuel element, including:

  • Selective fuel loading
    • A common strategy in solid fuel elements, this creates hotter and cooler zones in the fuel element
      • However, neutron heating will distribute heat beyond the U distribution
    • Convection and fuel mixing will end up distributing the fuel over time
    • May be able to be limited by affecting the temperature and viscosity of the fuel for the life of the reactor
  • Multiple fluids in fuel
    • Step beyond selective loading, a different material may be used as the outer layer of the fuel body, resisting mixing and reducing thermal load on the wall
  • Vapor insulation along exterior of fuel body
    • Using thermally opaque vapor to insulate the fuel element wall from the fuel body
    • Significantly reduces the heating on the outer wall
    • Two options for maintaining vapor wall:
      • Ablative coating on inner wall of fuel element can
      • Porous wall in can (similar to a low-flow version of a bubbler fuel element) pumping vapor into gap between fuel and can
    • Maximum stable vapor-layer thickness based on vapor bubble force balance vs centripetal force of liquid fuel
      • Two phase flow dynamics needed to maintain the vapor layer would be complex

This set of options offers a trade-off: either a simpler option which sets hard limits on the fuel element temperature in order to preserve the phase gradient in the fuel element (the LARS concept), or the fully liquid, more complex-behaving LRC design, which has better power distribution and a higher theoretical fuel element temperature – limited only by the vapor pressure increase and fuel loss rates in the fuel element, rather than by the wall heating limits of the LARS design.

Anyone designing a new radiator LNTR has much work that they can draw from, but other than the dynamics of the actual fuel behavior (which have never gone through a criticality test), the fuel element can design will be perhaps the largest set of engineering challenges in this type of system (although simpler than the bubbler-type LNTR).

Propellant Thermalization

The major change between the bubbler and radiator-type LNTRs is the difference in the thermalization behavior of the propellant: in a bubbler-type LNTR, assuming the propellant can be fed through the fuel, the two components reach thermal equilibrium, so the only thing needed is to direct it out of the nozzle; a radiator on the other hand has a similar flow path to the Rover-type NTRs, once through from nozzle to ship side for regenerative cooling, then a final thermalization pass through the central void of the fuel element.

This is a problem for hydrogen propellant, which is largely transparent to the EM radiation coming off the reactor. Radiative transfer accounted for all but 10% of the thermalization in the LARS design, and in many of the LRC studies convection was ignored entirely as negligible, with the convective effects in the propellant mainly being a concern in terms of fuel mass loss and propellant mass increase.

While the fuel mass loss would increase the opacity of the gas (making it absorb more heat), a far better option was available: adding a material in microparticle form to the propellant flow as it goes through the final thermalization cycle. The preferred material for the vast majority of these applications, which we’ll see in the gas cycle NTRs as well, is microparticles of tungsten.

This has been studied in a host of different applications, and will be something that I'll discuss in depth in a section of the propellant webpage in the future (which I'll link to here once it's done), but for the LRC design the target was to increase the opacity of the H2 to between 10,000 and 20,000 cm^2/g, at the cost of a single-digit percentage reduction in specific impulse due to the higher propellant mass. They pointed out that the simplified calculations used for the fuel mass loss behavior could introduce an error that they were unable to properly address, which could either increase or decrease the amount of additive needed.
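To see what that opacity target means physically, a Beer-Lambert absorption sketch over a flow-channel-sized path works well. The pressure, temperature, path length, and the unseeded-H2 opacity are all illustrative assumptions on my part:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def h2_density_g_cc(p_atm, temp_k, m_h2=2.016e-3):
    """Ideal-gas H2 density in g/cm^3."""
    rho_kg_m3 = (p_atm * 101325.0) * m_h2 / (R * temp_k)
    return rho_kg_m3 * 1.0e-3

def absorbed_fraction(opacity_cm2_g, rho_g_cc, path_cm):
    """Beer-Lambert: fraction of incident radiation absorbed over path_cm."""
    return 1.0 - math.exp(-opacity_cm2_g * rho_g_cc * path_cm)

rho = h2_density_g_cc(200.0, 3000.0)  # ~LRC chamber pressure, mid-channel temp
f_seeded = absorbed_fraction(10000.0, rho, 2.5)  # seeded to the target opacity
f_clean = absorbed_fraction(1.0, rho, 2.5)       # unseeded H2 (assumed ~1 cm^2/g)
```

At the target opacity, essentially all of the radiated heat is captured within a few centimeters of gas; unseeded, almost none of it is.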

The LARS concept used tungsten microparticles as well, and radiative absorption was in fact the defining difference between the two designs they proposed, which were distinguished by the emissivity of the fuel and how its radiation was split between the wall and the propellant.

Two other options are available for increasing the opacity of the hydrogen gas.

The first is to use a metal vapor deliberately, as was the paradigm in Soviet gas core design. Here, they used either NaK or Li vapor, both of which have small neutron absorption cross-sections and high thermal capacity. This has the advantage of being more easily mixed with the turbulent propellant stream, as well as being far lower mass than the W that is often used in US designs, but may be less opaque to the EM frequencies being emitted by the fuel’s surface in an LNTR design. I’m still trying to track down a more thorough writeup of the use of these vapors in NTR design at the moment (a common problem in both Soviet and Russian astronuclear literature is a lack of translations), but when I do I’ll discuss it in far more depth, since it’s an idea that doesn’t seem to have translated into the American NTR design paradigm.

As I said, this is a concept that I’m going to cover more in depth in both the gas core and general propellant pages, so with one final – and fascinating – note, we’ll move on to the conclusion.

An Interesting Proposal

The final option is something that Cavan Stone mentioned to me on Facebook a while ago: the use of lithium deuteride (LiD) as a propellant or additive in this design. This is an interesting concept, since lithium (6Li in particular) undergoes an energy-releasing breakup reaction when it captures a neutron, and LiD is reasonably opaque to the frequencies being discussed in these reactors. The use of deuterium rather than protium also increases the neutron moderation of the propellant, which can in turn increase the fissile efficiency of the reactor. The Li will harden the neutron spectrum overall, while the D and Be (in the fuel element can/reactor body) will thermalize the spectrum.

There was a discussion of using LiD as a propellant in NTRs in the 1960s, but sadly I can't find it anywhere online. If someone is able to help me find it, please let me know. This is a fascinating concept, and one that I'm very glad Cavan brought up to me, but also one complex enough that I really need to see an in-depth study by someone far more knowledgeable than me before I can intelligently discuss its implications.

Conclusion, or The Future of the Forgotten Reactor

While often referenced in passing in any general presentation on nuclear thermal rockets, the liquid core NTR seems to be the least studied of the different NTR types, and also the least covered. While the bubbler offers distinct advantages from a purely thermodynamic point of view, the radiator offers far more promise from a functional perspective.

Sadly, while both solid and gas core NTRs have been studied into the 21st century, the liquid core has been largely forgotten, and the radiator in particular seems to have gone through a reinvention of the wheel, as it were, between the 1960s NASA design and the 1990s DOE design, with few of the lessons learned from the LRC concept being applied to the BNL design in terms of vapor dynamics, thermal transfer, and the like.

This doesn’t mean that the design is without promise, though, or that the challenges that the reactor faces are insurmountable. A number of hurdles in testing need to be overcome for this design to work – but many of the problems that there simply isn’t any data for can be solved with a simple set of criticality and reactor physics tests, something well within the capabilities of most research nuclear programs with the capability to test NTRs.

With the advances in nuclear and two-phase flow modeling, a body of research that doesn't seem to have been examined in depth for over two decades, and the possibility of a high-isp, moderate-to-high-thrust engine without the complications of a gas core NTR (a subject that we'll be covering soon), the LNTR – and the radiator in particular – offers a combination of promise and low-hanging fruit for developing advanced NTRs that few systems can match.

Final Note

With that, we’re leaving the realm of liquid fueled NTRs for now. This is a fascinating field, and one that I haven’t seen much discussion of outside the original technical papers, so I hope you enjoyed it! I’m going to work on getting these posts into a more easily-referenced form on the website proper, and will make a note of that in my blog (and on my social media) when I do! If anyone is aware of any additional references pertaining to the LNTR, as well as its thermophysical behavior, fuel materials options, or anything else relating to these desgins, please let me know, either in the comments or by sending me a message to beyondnerva at gmail dot com.

Our next blog post will be on droplet and vapor core NTRs, and will be covered by a good friend of mine and fellow astronuclear enthusiast: Calixto Lopez. These reactors have fascinated him since he was in school many moons ago, and he’s taught me the majority of what I know about them, so I asked him if he was willing to write that post.

After that, we’re going to move on to the closed cycle gas core NTR, which I’ve already begun research on. There’s lots of fascinating tidbits about this reactor type that I’ve already uncovered, so this may end up being another multiple part blog series.

Finally, to wrap up our discussion of advanced NTRs, we’re going to do a series on the open cycle gas core NTR types. This is going to be a long, complex series on not only the basic physics challenges, but the design evolution of the engine type, as well as discussion on various engineering methods to mitigate the major fuel loss and energy waste issues involved in this type of engine. There may be a delay between the closed and open cycle NTR posts due to the sheer amount of research necessary to do open cycles justice, but rest assured I’m already doing research on them.

As you can guess, this blog takes a lot of time, and a lot of research, to write. If you would like to support me in my efforts to bring the wide and complex history of astronuclear engineering to light, consider supporting me on Patreon. Every dollar helps, and you get access to not only early releases of every blog post and webpage, but at the higher donation amounts you also get access to the various 3d models that I'm working on, videos, and eventually the completed 3d models themselves for your own projects (with credit for the model construction, of course!).

I’m also always looking for new or forgotten research in astronuclear engineering, especially that done by the Soviet Union, Russia, China, India, and European countries. If you run across anything interesting, please consider sending it to beyondnerva at gmail dot com.

You can find me on Twitter (@beyondnerva), as well as join my facebook group (insert group link) for more astronuclear goodness.


General References



Fundamental Material Limitations in Heat-Exchanger Nuclear Rockets, Kane and Wells, Jr. 1965


Radiator-Specific LNTR References

Lewis Research Center LNTR




Liquid Annular Reactor System (LARS)

[Paywall] Conceptual Design of a LARS Based Propulsion System, Ludewig et al 1991

The Liquid Annular Reactor System (LARS) Propulsion, Powell et al 1991

LIQUID ANNULUS, Ludewig 1992

[Paywall] The liquid annular reactor system (LARS) for deep space exploration, Maise et al 1999


The Bubbler: Liquid NTRs Without Barriers

Hello, and welcome back to Beyond NERVA! Today, we continue our look at liquid fueled nuclear thermal rockets (LNTRs), with a deep dive into the first of the two main types: what I call the bubbler LNTR.

This potentially attractive form of advanced NTR is a design that has been largely forgotten in the history of NTR designs outside some minor footnotes. Because of this, I felt that it was a great subject for the blog! All of the sources that I can find on the designs are linked at the end of this post, including a couple that are not available digitally, so if you’re interested in a more technical analysis of the concept please check that out!

What is a Bubbler LNTR?

Every NTR has to heat the (usually hydrogen) propellant in some way, which is usually done through (usually thermal) radiation from the fuel’s surface into the propellant.

Bubbles passing through fuel, Nelson 1963

This design, though, changes that paradigm by passing the propellant through the liquid fuel (usually a mix of uranium carbide (UC2) and some other carbide – either zirconium (ZrC) or niobium (NbC)). This is done by having a porous outer wall through which the propellant is injected. This is known as a “folded flow propellant path,” and is seen in other NTRs as well, notably the Dumbo reactor from the early days of Project Rover.

In order to keep the fuel in place, each fuel element is spun rapidly enough that centrifugal force holds the fuel against the porous wall. The number of fuel elements varies from design to design, and the overall diameter, as well as the thickness of the fuel layer, is a matter of some design flexibility as well, but on average the individual fuel elements range from about 2 to about 6 inches in diameter, with the ratio between the thickness of the fuel layer and the thickness of the central void (through which the now-hot propellant passes to the nozzle) being roughly 1:1.
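The required spin rates are modest by turbomachinery standards. A centripetal-acceleration sketch shows this; the 100 g containment figure is my own assumption for illustration, not a number from the design studies:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def rpm_for_g_load(radius_m, g_load):
    """Spin rate (rpm) so the fuel surface at `radius_m` feels `g_load`
    gees: a = omega^2 * r  =>  omega = sqrt(g_load * g0 / r)."""
    omega = math.sqrt(g_load * G0 / radius_m)  # rad/s
    return omega * 60.0 / (2.0 * math.pi)

# A 2-inch-diameter element (r ~ 2.54 cm) held at an assumed 100 g:
rpm = rpm_for_g_load(0.0254, 100.0)  # on the order of 2000 rpm
```

Note the inverse-square-root dependence on radius: smaller fuel elements must spin faster for the same containment force, which feeds into the drive-system complexity discussed later.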

This was the first type of LNTR to be proposed, and was a subject of study for over a decade, but seems to have fallen out of favor with NTR designers in the late 1960s/early 1970s due to fuel/propellant interaction complications and engineering challenges related to the physical structures for injecting the propellant (more on that later).

Let’s look at the history of bubbler LNTR in more depth, and see how the proposals have evolved over time.

History of the Bubbler-type LNTR: The First of its Kind

McCarthy, 1954

Image from Barrett, Jr 1964

The first proposal for a liquid fueled NTR was in 1954, by J McCarthy in “Nuclear Reactors for Rockets” [ed. Note I have been unable to locate this report in digital form, if anyone is able to help me get ahold of it I would greatly appreciate your assistance; the following summary is based on references to this study in later works]: This design was the first to suggest the centrifugal containment of liquid fuel, and was also the first of the bubbler designs. It used a single fuel element as the entire reactor, with a large central void in the center of the fuel body as the propellant flow channel once it left the fuel itself.

This design was fundamentally limited by four factors:

  1. A torus is a terrible neutronic structure, and while the hydrogen propellant in the central void of the fuel would provide some neutron moderation, McCarthy found upon running his neutronics calculations that the difference was so negligible that it could be assumed to be a vacuum; and
  2. Only a certain amount of heat could be removed from the fuel by the propellant based on assumed fuel element geometry, and that cooling the reactor could pose a major challenge at higher reactor powers; and
  3. The behavior of the hydrogen as it passes through, and also out of, the liquid fuel was not well understood in practice, and
  4. the vapor pressure of the fuel’s constituent components could lead to fuel being absorbed in the gas as vapor in both the bubbles and exhausting propellant flow, causing both a loss of specific impulse and fissile fuel. This process is called “entrainment,” and is a (if not the) major issue for this type of reactor.
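The severity of entrainment comes from the molar-mass mismatch: even a tiny fuel-vapor partial pressure carries a large fuel mass per unit of hydrogen. Here's a sketch under ideal-gas saturation assumptions (the pressures chosen are illustrative):

```python
def entrained_mass_ratio(p_vap_atm, p_chamber_atm, m_fuel=262.0, m_h2=2.016):
    """kg of fuel vapor carried per kg of H2, if the bubbles leave the
    melt saturated with fuel vapor (ideal-gas partial-pressure mixing).
    m_fuel ~ UC2 molar mass in g/mol."""
    p_h2 = p_chamber_atm - p_vap_atm
    return (p_vap_atm / p_h2) * (m_fuel / m_h2)

# A mere 0.1 atm of fuel vapor in a 10 atm chamber means the exhaust
# carries more fuel mass than hydrogen mass: the ratio exceeds 1.
ratio = entrained_mass_ratio(0.1, 10.0)
```

This is why raising chamber pressure (or lowering fuel surface temperature, which lowers vapor pressure) is such a powerful lever against entrainment.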

However, despite these problems this design jump started the design of LNTRs, defined the beginnings of the design envelope for this type of engine, and introduced the concept of the bubbler LNTR for the first time.

The Princeton LNTR, 1963

Princeton LNTR, Nelson et al 1963

The next major design step was undertaken by Nelson et al at Princeton’s Dept. of Aeronautical Engineering in 1963, under contract by NASA. This was a far more in-depth study than the proposal by McCarthy, and looked to address many of the challenges that the original design faced.

Perhaps the most notable change was the shift from a single large fuel element to multiple smaller ones, arranged in a hexagonal matrix for maximum fuel element packing. This does a couple of things:

  1. It homogenizes the reactor more. While heterogeneous (mixed-region) reactors work well, for a variety of reasons it’s beneficial to have a more consistent distribution of materials through the core – mainly for neutronic properties and ease of modeling (this is 1963, MCNP in a heterogeneous core using a slide rule sounds… agonizing).
  2. Given a materially limited, fixed specific impulse (see the Fuel Materials Constraints section for more in-depth discussion on this) NTR, the thrust is proportional to the total surface area of the fuel/propellant interface. By using multiple fuel elements (which they call vortices), the total available surface area increases in the same volume, increasing the thrust without compromising isp (this also implies a greater specific power, another good thing in an NTR).
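The surface-area scaling behind point 2 can be sketched directly: splitting the same fueled cross-section into N equal vortices multiplies the fuel/propellant interface area by √N. The dimensions below are arbitrary placeholders:

```python
import math

def interface_area(n_elements, total_radius_m, length_m):
    """Total cylindrical fuel/propellant interface area when a fixed
    fueled cross-section (pi * total_radius^2) is split into n equal
    elements of the same length."""
    r = total_radius_m / math.sqrt(n_elements)  # each element's radius
    return n_elements * 2.0 * math.pi * r * length_m

a1 = interface_area(1, 0.5, 1.5)   # single large vortex
a7 = interface_area(7, 0.5, 1.5)   # 7 vortices: sqrt(7) ~ 2.65x the area
```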

This was a thermal (0.37 eV) neutron spectrum reactor, fueled by a mix of UC2 and ZrC, varying the dilution level for greater moderation and increased thermal limits. It was surrounded by a 21 cm reflector of beryllium (a “standard reflector”).

From there, the basic geometry of the reactor, from the number of fuel elements and their fueled thickness, to the core diameter and volume (the length was at a fixed ratio compared to the radius), to the shape, velocity, and number of bubbles (as well as vapor entrainment losses of the fuel material) were studied.

This was a fairly limited study, despite its length, due to the limitations of the resources available. Transients and reactor kinetics were specifically excluded, the hydrogen was again replaced with vacuum in the calculations, the temperature was assumed to be higher than vapor entrainment would actually allow (4300 K, instead of 3600 K at 10 atm or 3800 K at 30 atm), the chamber pressure was only constrained to be above 1 atm, and age-diffusion theory calculations only give results within an order of magnitude… but it's still one of the most thorough studies of LNTRs I've found, and the most researched bubbler architecture. They pointed out the potential benefits of using 233U, or a larger but neutronically equivalent volume of 232Th (turning the reactor into a thermal breeder), in order to improve the overall vaporization characteristics, but this was not included in the study.

Barrett LNTR, 1964

The next year, W. Louis Barrett presented a variation of the Princeton LNTR at the AIAA Conference. The main distinction between the two designs was the addition of zirconium hydride in the areas between the fuel elements and the outer reflector; the paper also presented the first results from a study of bubble behavior in the fuel, then being conducted at Princeton. The UC2/ZrC fuel was the same, as were the number of fuel elements and the reactor dimensions. The author concluded that a specific impulse of 1500-1550 seconds was possible, with a T/W of 1 at 100 atm, with thrust limited not by heat transfer but by available flow area.

Below are the two relevant graphs from his findings: the first is the point at which the fissile fuel itself would end up becoming captured by the passing gas, and the second looks at the maximum specific impulse any particular fissile fuel could theoretically offer. The image for the McCarthy reactor above was from the same paper.

Final Work: Bubbles are Annoying

For this reactor to work, the heat must be adequately transferred from the fuel element to the propellant as it bubbles through the fuel mass radially. The amount of heat that needs to be removed, and the time and distance that it can be removed in, is a function of both the fuel and the bubbles of H2.

Sadly, the most comprehensive study of this has never been digitized, but for anyone who’s able to get documents digitized at Princeton University and would like to help make the mechanics of bubbler-type LNTRs more accessible, here’s the study: Liebherr, J.F., Williams, P.M., and Grey, J., “Bubble Motion Studies for the Liquid Core Nuclear Rocket,” Princeton University Aeronautical Engineering Report No. 673, December 1963. Apparently you can check it out once you convince the librarians to excavate it, based on their website:

McGuirk 1972

Here, a clear plastic housing was constructed which consisted of two main layers: an outer, solid casing which formed the outer body of the apparatus, and a perforated, inner cylinder, which simulated the fuel element canister. Water was used as the fuel element analog, and the entire apparatus was spun along its long axis to apply centrifugal acceleration to the water at various rotation rates. Pressurized air (again, at various pressures) was used in place of the hydrogen coolant. Stroboscopic photography was used to document bubble size, shape, and behavior, and these behaviors were then used to calculate the potential thermal exchange, vapor entrainment, and other characteristics of the behavior of this system.

One significant finding, based on Grey’s reporting, is that there’s a complex relationship between the dimensions, shape, velocity, and transverse momentum of the bubbles and their thermal uptake capacity, as well as their vapor entrainment of fuel element components. However, without being able to read the original work, I can only hope someone makes it accessible to the world at large (and if you’ve got technical knowledge and interest in the subject, and feel like writing about it, let me know: I’m more than happy to have you write a blog post here on this INSANELY complex topic).

The last reference to a bubbler LNTR I can find is from AIAA’s Engineering Notes from May 1972, by McGuirk and Park: “Propellant Flow Rate through Simulated Liquid-Core Nuclear Rocket Fuel Bed.” This paper brings up a fundamental problem that had not previously been addressed in the literature on bubblers, and quite possibly sounded their death knell.

Every study until this point greatly simplified, or ignored, two-phase flow thermodynamic interactions. If you’re familiar with thermodynamics, this is… kinda astounding, to be honest. It also leads me to a diversion that could be far longer than the two pages that this report covers, but I won’t indulge myself. In short, two-phase flow modeling is used to capture the thermal transfer, hydro/gasdynamic properties, and other interactions between (in this case) a liquid and a gas, or a melting or boiling fluid going through a phase change.

This is… a problem, to say the least. Based on the simplified modeling, the fundamental thermal limitation for this sort of reactor was vapor entrainment of the fuel matrix, reducing the specific impulse and changing the proportions of elements in the matrix, causing potential phase change and neutronics complications.

Entrainment remains a problem, but it turns out not to be the main thermal limitation of this reactor: it was discovered that the amount of heat that can be rejected by bubbling propellant through the fuel is not nearly as high as expected at lower propellant flow rates, while higher flow rates led to splattering as the bubbles burst, as well as unstable flow in the system. We’ll look at the consequences of this later, but needless to say this was a major hiccup in the development of the bubbler-type LNTR.

While there may be further experimentation on the bubbler type LNTR, this paper came out shortly before the cancellation of the vast majority of astronuclear funding in the US, and when research was restarted it appears that the focus had shifted to radiator-type LNTRs, so let’s move on to looking at them.

Bubbler-Specific Constraints

Fuel Element Thickness and Heat Transfer

One of the biggest considerations in a bubbler LNTR is the thickness of the fuel within each fuel canister. The fundamental trade-off is one of mechanical vs thermodynamic requirements: the smaller the internal radius at the fuel element’s interior surface, the higher the angular velocity has to be to maintain sufficient centrifugal force to contain the fuel, but also the greater the time and distance over which the bubbles can collect heat from the fuel.
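The geometry side of this trade-off is easy to sketch: the centrifugal acceleration at the fuel’s free surface is a = omega^2 * r, so the spin rate required for a given containment force rises as the inner radius shrinks. The numbers below are illustrative placeholders, not values from any of the studies:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def required_angular_velocity(accel_gs, inner_radius_m):
    """Angular velocity (rad/s) that produces the target centrifugal
    acceleration at the fuel's inner (free) surface, where containment
    is weakest: a = omega^2 * r, so omega = sqrt(a / r)."""
    return math.sqrt(accel_gs * G0 / inner_radius_m)

def rpm(omega_rad_s):
    """Convert rad/s to revolutions per minute."""
    return omega_rad_s * 60.0 / (2.0 * math.pi)

# Illustrative numbers only, not values from the Princeton study:
# holding 100 g at a 5 cm inner radius vs. an 8 cm one.
w_small = required_angular_velocity(100.0, 0.05)  # thicker fuel layer
w_large = required_angular_velocity(100.0, 0.08)  # thinner fuel layer
# w_small > w_large: the smaller the central void, the faster the spin.
```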

In the Princeton study, the total volume within the fuel canister was roughly equally divided between fuel and propellant to achieve a comfortable trade-off between fuel mass, reactor volume, and thermal uptake in the propellant. In this case, they counted the volume of the propellant passing through the fuel as part of the central annulus’ volume, which eases the neutronic calculations, but also complicates the actual diameter of the central void: as propellant flow increases, the void diameter decreases, requiring a higher angular velocity to maintain sufficient centrifugal force.

A thinner fuel element, on the other hand, requires a greater volume of propellant to pass through it to remove the same amount of energy, at an overall lower propellant temperature. This, in turn, reduces the propellant’s final velocity, resulting in lower specific impulse but higher thrust. However, another problem is that the fluid mixture of propellant and fuel can only contain so much gas before major problems develop in its behavior. In an unpublished memorandum from 1963 (“Some Considerations on the Liquid Core Reactor Concept,” Mar 23), Bussard speculated that the maximum ratio of gas to fuel would be around 0.3 to 0.4; at this point the walls of the bubbles are likely to merge, converting the fuel into a very liquidy droplet core reactor (a concept that we’ll discuss in a future blog post), as well as leading to excess splattering of the fuel into the central void of the fuel element. While some sort of recapture system may be possible to prevent fuel loss, in a classic bubbler LNTR this is an unacceptable situation, and therefore this limit (which may or may not actually be 0.3-0.4, something for future research to examine) intrinsically ties fuel element thickness to maximum volumetric propellant flow rates.
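Bussard’s void fraction limit translates directly into a cap on volumetric propellant flow: at steady state, the gas resident in the fuel at any instant is the flow rate times the bubble transit time, and that quantity has to stay below the limit. A rough sketch, where every number is an illustrative guess rather than a value from the memorandum:

```python
def max_gas_volume_flow(fuel_volume_m3, void_fraction_limit, transit_time_s):
    """Upper bound on in-situ gas volumetric flow (m^3/s) through one
    fuel element. At steady state the gas resident in the fuel is
    Q * t_transit, which must stay below phi_max * V_fuel."""
    return void_fraction_limit * fuel_volume_m3 / transit_time_s

# Illustrative guesses only: 0.02 m^3 of molten fuel per element,
# Bussard's phi_max ~ 0.3, and a 0.1 s bubble transit time.
q_max = max_gas_volume_flow(0.02, 0.3, 0.1)  # ~0.06 m^3/s per element
```

Note that this is the flow of gas at in-fuel conditions; the equivalent mass flow of hydrogen depends strongly on the local temperature and pressure, which is exactly why the thickness trade-off is so entangled with the thermal design.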

There are some additional limits here, as well, but we’ll discuss those in the next section. While the propellant will gain some additional power through its passage out of the fuel element and toward the nozzle, as in the radiator-type LNTR, this will not be as significant since the propellant enters along the entire length of the fuel element.

Bubble Dynamics

This is probably the single largest problem that a bubbler faces: the behavior of the bubbles themselves. As this is the primary means of cooling the fuel, as well as heating the propellant, the behavior of these bubbles, and the ability of the propellant stream to carry away the entirety of the heat generated in the fuel, is of absolutely critical importance. We looked briefly in the last section at the impacts of the thickness of the fuel, but what occurs within that distance is a far more complex topic than it may appear at first glance. With advances in two-phase flow modeling (which I’m unable to accurately assess), this problem may not be nearly as daunting as it was when this reactor was being researched, but in all likelihood this set of challenges is perhaps the single largest reason that the bubbler LNTR disappeared from the design literature when it did.

The other effect the bubbles have on the fuel is that they are the main source of vapor entrainment of fuel element materials in a bubbler: they are the longest-lived liquid/gas interface in the system, and have the largest relative surface area. We aren’t going to discuss this particular dynamic to any great degree, but its behavior compared to inner-surface interactions will potentially be significant, both because these bubbles are the longest-lived liquid/gas interaction by surface area and because they are completely encircled by the fuel itself while undergoing heating (and therefore expansion, exacerbated by the decreasing pressure along the centrifugal acceleration gradient). One final note on this behavior: the bubbles may become saturated with vapor during their thermalization, preventing uptake of more material while also increasing their thermal uptake of energy from the fuel (metal vapors, including Li and NaK, were suggested by Soviet NTR designers to deal with the thermal transparency of H2 in advanced NTR designs).

The behavior of the bubbles depends on a number of characteristics:

  1. Size: The smaller the bubble, the greater the surface area to volume ratio, increasing the amount of heat that can be absorbed in a given time relative to the volume, but also the less thermal energy that can be transported by each bubble. The size of the bubbles will increase as they move through the fuel element, gaining energy through heating, and therefore expanding and becoming less dense.
  2. Shape: Partially a function of size, shape can have several impacts on the behavior and usefulness of the bubbles. Only the smallest bubbles (how “small” depends on the fluids under consideration) can retain a spherical shape. The other two main shape classifications of bubbles in the LNTR literature are oblate spheroid and spherical cap. In practice, the higher propellant flow rates result in the largest, spherical cap-type bubbles in the fuel, which complicate both thermal transfer and motion modeling. One consequence of this is that the bubbles tend to have a high Reynolds number, leading to more turbulent behavior as they move through the fuel mass. Most standard two-phase modeling equations at the time had a difficult time adequately predicting the behavior of these sorts of bubbles. Another important consideration is that the bubbles will change shape to a certain degree as they pass through the fuel element, due to the higher temperature and lower centrifugal force being experienced on them as they move into the central void of the fuel element.
  3. Velocity: A function of centrifugal force, viscosity of the fuel, initial injection pressure of the propellant, density of the constituent gas/vapor mix, and other factors, the velocity of a bubble through the fuel element determines how much heat – and vapor – can be absorbed by a bubble of a given size and shape. An increase in velocity also changes the bubble shape, for instance from an oblate spheroid to a spherical cap. One thing to note is that the bubbles don’t move directly along the radius of the fuel element: they oscillate both laterally and radially as their shape deforms and as centrifugal, convective, and other forces interact with them. Whether this effect is significant enough to change the necessary modeling of the system will depend on a number of factors, including fuel element thickness, convective and Coriolis behavior in the fuel mass, bubble Reynolds number, and angular velocity of the fuel element.
  4. Distribution: One concern in a bubbler LNTR is ensuring that the bubbles passing through the fuel mass don’t combine into larger conglomerations, and that the density of bubbles doesn’t destroy the overall cohesion of the fuel mass. This means that the distribution system for the bubbles must balance propellant flow rate, bubble size, velocity, and shape, non-radial motion of the bubbles, and the overall gas fraction of the fuel element based on the fuel element design being used.
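To give a feel for the velocity scales involved: large spherical-cap bubbles are commonly estimated with the Davies-Taylor correlation, u = 0.711 * sqrt(g * d_e), with the local centrifugal acceleration standing in for gravity here. Whether this correlation actually holds in molten UC2/ZrC is exactly the kind of open question the Princeton work was probing; the numbers below are purely illustrative:

```python
import math

def davies_taylor_velocity(accel_m_s2, equiv_diameter_m):
    """Rise velocity (m/s) of a spherical-cap bubble via the
    Davies-Taylor correlation, u = 0.711 * sqrt(g * d_e), with the
    local centrifugal acceleration used in place of gravity."""
    return 0.711 * math.sqrt(accel_m_s2 * equiv_diameter_m)

# Illustrative: a 5 mm bubble under 100 g of centrifugal acceleration
u = davies_taylor_velocity(100 * 9.80665, 0.005)  # ~1.57 m/s
```

A few meters per second across a few centimeters of fuel gives bubble residence times on the order of tens of milliseconds, which is the window in which all of the heat transfer has to happen.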

As mentioned previously, the final paper on the bubbler I was able to find looked at the challenges of bubble dynamics in a simulated LNTR fuel element, in this case using water and compressed air. Several compromises had to be made, leading to unpredictable behavior of the propellant stream and the simulated fuel, which could be due to the challenges of using water to simulate ZrC/UC2, including insufficient propellant pressure, bubble behavior irregularities, and other problems. Perhaps the biggest challenge faced in this study is that there were three distinct behavioral regimes in the two-phase system: orderly (low propellant pressure), disordered (medium propellant pressure), and violent (high propellant pressure), each a function of the relationship between propellant flow and the centrifugal force being applied. As suspected, too high a void fraction within the fuel mass led to splattering, and therefore to unacceptably high fuel mass loss rates, but violent disorder set in at a flow rate low enough that it was not assured the propellant could remove all of the thermal energy from the fuel element. If the energy level of each fuel element is reduced (by reducing the fissile component of the fuel while maintaining a critical mass, for instance), this can be compensated for, but only by losing power density and engine performance. The alternative, increasing the centrifugal force on the system, leads to greater material and mechanical challenges.

Adequately modeling these characteristics was a major challenge at the time these studies were being conducted, and the number of unique circumstances involved in this type of reactor means realistic modeling remains non-trivial. Advances in both computational and modeling techniques make this set of challenges more accessible than in the 1960s and 70s, though, which may make this sort of LNTR more feasible than it once was, and could restart interest in this unique architecture.

These constraints define many things in a bubbler LNTR, as they form the single largest thermodynamic constraint on the engine. Increasing centrifugal force increases the demands on the fuel element canister (with its incorporated propellant distribution system) and on the mechanical systems that maintain the angular velocity needed for fuel containment, and sets the maximum thrust and isp for a given design, among other considerations.

Suffice it to say, until the bubble behavior and its interactions with the fuel mass can be adequately modeled and balanced, the bubbler LNTR would require significant basic empirical testing to be developed, and this limitation was probably a significant contributor to the fact that it hasn’t been re-examined since the early-to-mid 1970s.

The “Restart Problem”

The last major issue in a bubbler-type design is the “restart problem”: when the reactor is powered down, there will be a period of time when the fuel is still molten, requiring centrifugal containment, but the reactor being powered down allows for the fuel to be pressed into the pores of the fuel element canister, blocking the propellant passages.

One potential solution for the single fuel element design was proposed by L. Crocco, who suggested that the fuel material itself be used for the bubbling structure. At startup, the fuel would be completely solid, radiating heat in all directions until it becomes molten [ed. Note: according to Crocco, melting would proceed from the inner surface to the outer one, but I can’t find backup for that assumption of edge power peaking behavior, or for how it would translate to a multi-fuel-element design], and propellant would be able to pass through the inner layers of the fuel element once the liquid/solid interface reached the pre-drilled propellant channels.

Another would be to continue to pass the hydrogen propellant through the fuel element until the pressure needed to keep pumping the H2 reaches a certain threshold, then use a relief valve to vent the system elsewhere while continuing to reject the final waste heat until a suitable wall temperature has been reached. This would make the fuel element less dense overall, and also leave a lower fuel density near the wall than at the inner surface of the fuel element. While this could maybe [ed. Note: speculation on my part] make the fuel more likely to melt from the inner surface to the outer one, the trapped H2 may also be just enough to cause power peaking around the bubbles, allow chemical reactions with unknown consequences during startup, and other complications that I couldn’t even begin to guess at – but the tubes would be kept clear.

Wall Material Constraints

Other than the “restart problem,” additional constraints apply to the wall material. It needs to handle the rotational stresses of the spinning fuel element, be permeable to the propellant, and withstand a rather extreme thermal gradient: gaseous hydrogen at near-cryogenic temperatures on the outside (the propellant will have already absorbed some heat from the reactor body), and about 6000 K on the inside, where it comes in contact with the molten fuel.

Also, the bearings holding the fuel element will need to be designed with care. Not only do they need to handle the rather large amount of thermal expansion that will occur in all directions during reactor startup, they have to be able to deal with high rotation rates throughout the temperature range.

The Paths Not (Yet?) Taken

Perhaps due to the early time period in which the LNTR was explored, a number of design options don’t seem to have been explored in this sort of reactor.

One option is a neutron moderator. Because the steep thermal gradients in this reactor leave relatively cool regions outside the fuel elements, ZrH and other thermally sensitive moderators could be used to further thermalize the neutron spectrum. While this might not be explicitly required, it may help reduce the fissile requirements of the reactor, and would not be likely to significantly increase reactor mass.

A host of other options are possible as well, if you can think of one, comment below!

Diffuser LNTR

The other option was brought up by Michael Turner at Project Persephone, in regard to the vapor entrainment and restart problems: what if you get rid of the holes in the walls of the fuel element, and the bubbles through the fuel, altogether? As we saw when discussing Project Rover, hydrogen gets through EVERYTHING, especially hot metals. This diffusion occurs molecule by molecule, not through bubbles, meaning that the possibility of vapor entrainment is eliminated. The downside is that the propellant mass flow will be drastically reduced, resulting in a higher-isp (since fuel temperature can be increased once vapor losses are minimized), much-lower-thrust reactor than those designed before. As he points out, this might be combined with bubbling for a high-thrust, lower-isp mode, if “shutters” on the fuel element’s outer frit could be engineered. Another possible requirement would be to reduce the fissile component density of the fuel to match the power output to the hydrogen flow rates, or to create a hybrid diffuser/radiator LNTR to balance the propellant flow and thermal output of the reactor.

I have not been able to calculate if this would be feasible or not, and am reasonably skeptical, but found it an intriguing possibility.
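For anyone who wants to take a crack at the feasibility question: hydrogen transport through hot metal walls is usually modeled as Sieverts’-law permeation, with the flux proportional to the difference of the square roots of the hydrogen pressures across the wall. Here’s a minimal sketch; every material constant in it is a placeholder I made up for illustration, not data for any real candidate wall alloy:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def permeation_flux(phi0, e_act, temp_k, p_up_pa, p_down_pa, wall_m):
    """Hydrogen flux (mol H2 per m^2 per s) through a metal wall via
    Sieverts'-law permeation: J = Phi(T) * (sqrt(p_hi) - sqrt(p_lo)) / d,
    where Phi(T) = Phi0 * exp(-Ea / (R * T)) is the permeability."""
    phi = phi0 * math.exp(-e_act / (R * temp_k))
    return phi * (math.sqrt(p_up_pa) - math.sqrt(p_down_pa)) / wall_m

# All constants below are placeholder values, not measurements:
# permeability prefactor, activation energy, 2000 K wall, 10 bar
# upstream, 1 bar downstream, 5 mm wall thickness.
j = permeation_flux(phi0=1e-7, e_act=60e3, temp_k=2000,
                    p_up_pa=10e5, p_down_pa=1e5, wall_m=0.005)
```

Even before plugging in real permeabilities, the form of the equation shows Turner’s point: the hydrogen throughput scales with wall area, temperature, and pressure difference rather than with bubble dynamics, so vapor entrainment disappears at the cost of mass flow.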


The bubbler liquid nuclear thermal rocket is a fascinating concept which has not been explored nearly as much as many other advanced NTR designs. The advantage of being able to fully thermalize the propellant to the highest fuel element temperature while maintaining cryogenic temperatures outside the fuel element is a rarity in NTR design, and offers many options for structures outside the fuel elements themselves. After over a decade of research at Princeton (and other centers), the basic research on the dynamics of this type of reactor has been established, and with the computational and modeling capabilities that were unavailable at the time of these studies, new and promising versions of this concept may come to light if anyone chooses to study the design.

The problems of vapor entrainment, fissile fuel loss, and restarting the reactor are significant, however, and impact many areas of the reactor design which have not been addressed in previous studies. Nevertheless, the possibility remains that this drive may one day indeed make a useful stepping stone from the solid-fueled NTRs of tomorrow to the advanced NTRs of the decades ahead.


“Conceptual Design Study of a Liquid-Core Nuclear Rocket,” technical report, Nelson et al., 1963

“The Liquid Core Nuclear Rocket,” Grey, 1965 (pg 92)

“Specific Impulse of a Liquid Core Nuclear Rocket,” Barrett Jr., 1963

“Propellant Flow Rate through Simulated Liquid-Core Nuclear Rocket Fuel Bed,” McGuirk and Park, 1972


Transport and Energy Module: Russia’s new NEP Tug

Hello, and welcome back to Beyond NERVA! Today’s blog post is a special one, spurred on by the announcement recently about the Transport and Energy Module, Russia’s new nuclear electric space tug! Because of the extra post, the next post on liquid fueled NTRs will come out on Monday or Tuesday next week.

This is a fascinating system with a lot of promise, but it has also gone through major changes in the last year that seem to have delayed the program. However, once it’s flight certified (planned for the 2030s), Roscosmos intends to mass-produce the spacecraft for a variety of missions, including cislunar transport services and interplanetary mission power and propulsion.

Begun in 2009, the TEM is being developed by Energia on the spacecraft side and the Keldysh Center on the reactor side. This 1 MWe (4 MWt) nuclear reactor will power a number of gridded ion engines for high-isp missions over the spacecraft’s expected 10-year mission life.

First publicly revealed in 2013 at the MAKS aerospace show, a new model last year showed significant changes, with additional reporting coming out in the last week indicating that more changes are on the horizon (there’s a section below on the current TEM status).

This is a rundown of the TEM and its YaEDU reactor. I also did a longer analysis of the history of the TEM on my Patreon page, including a year-by-year analysis of the developments and design changes. Consider becoming a Patron for only $1 a month for additional content like early blog access, extra blog posts and visuals, and more!

TEM Spacecraft

Lower left: stowed configuration for launch, upper right: operational configuration. Image Roscosmos

The TEM is a nuclear electric spacecraft, designed around a gas-cooled high temperature reactor and a cluster of ion engines.

The TEM is designed to be delivered by either Proton or Angara rockets, although with the retirement of the Proton the only available launcher for it currently is the Angara-5.

Secondary Power System

Both versions of the TEM have had secondary folding photovoltaic power arrays. Solar panels are relatively commonly used for what’s known as “hotel load,” or the load used by instrumentation, sensors, and other, non-propulsion systems.

It is unclear if these feed into the common electrical bus of the spacecraft or form a secondary system. Both schemes are possible; if the power is run through a common electrical bus the system is simpler, but a second power distribution bus allows for greater redundancy in the spacecraft.

Propulsion System

Image: user Valentin on Habr from MAKS 2013

The Primary propulsion system is the ID-500 gridded ion engine. For more information about gridded ion engines in general, check out my page on them here:

The ID-500 was designed by the Keldysh Center specifically for the TEM, in conjunction with the YaEDU. Due to the very high power availability of the YaEDU, standard ion engines simply weren’t able to handle either the power input or the needed propellant flow rates, so a new design had to be developed.

The ID-500 is a xenon-propelled ion engine, with each thruster having a maximum power level of about 35 kW, with a grid diameter of 500 mm. The initially tested design in 2014 (see references below) had a tungsten cathode, with an expected lifetime of 5000 hours, although additional improvements through the use of a carbon-carbon cathode were proposed which could increase the lifetime by a factor of 10 (more than 50,000 hours of operation).

Each ID-500 is designed to throttle from 375-750 mN of thrust, varying both propellant flow rate and ionization chamber pressure. The projected exhaust velocity of the engine is 70,000 m/s (7000 s isp), making it an attractive option for the types of orbit-altering, long duration missions that the TEM is expected to undertake.
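Those figures let us back out a rough total efficiency for the thruster: the kinetic power in the exhaust beam is F * ve / 2, and comparing that to the quoted ~35 kW input gives a plausibility check (my arithmetic, not an official specification):

```python
def jet_power(thrust_n, exhaust_velocity_m_s):
    """Kinetic power carried by the exhaust beam: P = F * ve / 2."""
    return thrust_n * exhaust_velocity_m_s / 2.0

# Quoted ID-500 figures: up to 750 mN at 70,000 m/s on ~35 kW input.
p_beam = jet_power(0.750, 70_000.0)   # 26.25 kW in the beam
total_efficiency = p_beam / 35_000.0  # ~0.75
xenon_flow = 0.750 / 70_000.0         # ~10.7 mg/s propellant flow
```

Roughly 26 kW in the beam from 35 kW of input works out to about 75% total efficiency, which is a normal range for large gridded ion engines, so the quoted numbers hang together.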

The fact that this system uses a gridded ion thruster, rather than a Hall effect thruster (HET), is interesting, since HETs are the area in which Soviet, then Russian, engineers and scientists have excelled. The higher isp makes sense for a long-term tug, but for a system that looks like it could be refueled, the isp-to-thrust trade-off is an interesting decision.

The initial design released at MAKS 2013 had a total of 16 ion thrusters on four foldable arms, but the latest version from MAKS-2019 has only five thrusters. The new design is visible below:

The first design is ideal for the tug configuration: the distance between the thrusters and the payload ensures that a minimal amount of the propellant hits the payload, robbing the spacecraft of thrust, contaminating the payload, and possibly building up a skin charge on it. The downside is that those arms, and their hinge system, cost mass and complexity.

The new design clusters only five thrusters (less than a third of the original sixteen) along the center-line of the spacecraft. This saves mass, but the decrease in the number of thrusters, and the fact that they’re placed in the exact location where a payload makes the most sense to attach, has me curious about the mission profile for this initial TEM.

It is unclear if the thrusters are the same design.


Thermal Management

This may be the most interesting thing in the TEM: the heat rejection system.

Most of the time, spacecraft use what are commonly called “bladed tubular radiators.” These are tubes which carry coolant after it reaches its maximum temperature. Welded to each tube are plates, which do two things: they increase the surface area of the tube (with the better conductivity of metal compared to most fluids, this means that the heat can be spread further than the diameter of the pipe), and they protect the pipe from debris impacts. However, there are limitations on how much heat can be rejected by this type of radiator: the pipes, and the joints between pipes, have definite thermal limits, with the joints often being the weakest part in folding radiators.

The TEM has the option of using a panel-type radiator, in fact there’s many renderings of the spacecraft using this type of radiator, such as this one:

Image Roscosmos

However, many more renderings present a far more exciting possibility: a liquid droplet radiator, called a “drip refrigerator” in Russian. This design uses a spray of droplets in place of the panels of the radiator. This increases the surface area greatly, and therefore allows far more heat to be rejected. In addition, it can reduce the mass of the system significantly, both due to the increased surface area and the potentially higher operating temperature, assuming the system can recapture the majority of its coolant.
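The reason surface area and temperature matter so much falls straight out of the Stefan-Boltzmann law: the radiating area needed scales as 1/T^4. A quick sketch using TEM-scale waste heat (the emissivity and temperatures here are my own assumptions, not published figures):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiator_area(heat_w, temp_k, emissivity=0.9):
    """One-sided radiating area (m^2) needed to reject heat_w watts
    at temp_k, ignoring view factor and sink temperature:
    A = Q / (eps * sigma * T^4)."""
    return heat_w / (emissivity * SIGMA * temp_k ** 4)

# TEM-scale waste heat: ~4 MWt in, ~1 MWe out -> ~3 MW to reject.
a_600 = radiator_area(3e6, 600)  # ~454 m^2 at a 600 K radiator
a_900 = radiator_area(3e6, 900)  # ~90 m^2: T^4 rewards running hot
```

This is why a droplet sheet, with its enormous effective surface area and tolerance for higher working temperatures, is such an attractive way to shed megawatt-class heat loads.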

Work has been done both on the ground and in space on this system. The Drop-2 test is being conducted on the ISS, and multiple papers were published on it. It began in 2014, and according to Roscosmos will continue until 2024.

Here it is being installed:

Image Roscosmos

Here’s an image of the results:

Image Roscosmos via Twitter user Niemontal

A patent for what is possibly the droplet collection system has also been registered in Russia:

This system was also tested on the ground throughout 2018, and appears to have passed all the vacuum chamber ground tests needed. Based on the reporting, more in-orbit tests will be needed, but with Drop-2 already on-station it may be possible to conduct these tests reasonably easily.

I have been unable to determine what working fluid would be used, but anything with a sufficiently low vapor pressure to survive the vacuum of space and the right working temperature range could be used, from oils to liquid metals.

For more on this type of system, check out Winchell Chung’s incredible page on them at Atomic Rockets: I will also cover them in the future (possibly this fall, hopefully by next year) in my coverage of thermal management solutions.

Of all the technologies on this spacecraft, this has to be the one that I’m most excited about. Some reporting says that this radiator can hit between 0.12 and 0.2 kW/kg system specific power!

Reaction Control Systems

Nothing is known of the reaction control system for the TEM. A number of options are available and currently used in Russian systems, but it doesn’t seem that this part of the design has been discussed publicly.

Additional Equipment

The biggest noticeable change in the rest of the spacecraft is the change in the spine structure. The initial model and renders had a square-cross-section telescoping truss with an open triangular girder profile. The new version has a cylindrical truss structure, with a tetrahedral girder structure that looks almost like chicken-wire. I’m certain that there’s a trade-off between mass and rigidity in this change, but what precisely it is is unclear, since we don’t have dimensions or materials for the two structures. The change in cross-section also means that while the new design is likely stronger from all angles, it’s harder to pack into the payload fairing of the launch vehicle.

Image Twitter user Katya Pavlushcenko

The TEM seems like it has gone through a major redesign in the last couple years. Because of this, it’s difficult to tell what other changes are going to be occurring with the spacecraft, especially if there’s a significant decrease in electrical power available.

It is safe to assume that the first version of the TEM will be more heavily instrumented than later versions, in order to support flight testing and problem-solving, but this is purely an assumption on my part. The reconfiguration of the spacecraft at MAKS-2019 does seem to indicate, at least for one spacecraft, the loss of the payload capability, but at this point it’s impossible to say.

YaEDU Architecture

The YaEDU is the reactor that will be used on the TEM spacecraft. Overall, with its power conversion system, the power plant will weigh about 6800 kg.


Image NIKIET at MAKS 2013

The reactor itself is a gas-cooled, fast-neutron-spectrum, oxide-fueled reactor, designed, oddly enough, to an electrical output requirement of 1 MWe rather than a thermal output requirement (the choice of power conversion system changes the ratio of thermal to electrical power significantly, and as we’ll see, it’s not set in stone yet). This requires a thermal output of at least 4 MWt, although depending on power conversion efficiency it may be higher. Currently, though, the 4 MWt figure seems to be the baseline for the design. It is meant to have a ten-year reactor lifetime.
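The arithmetic tying the electrical and thermal requirements together is simple, but it shows why the conversion choice matters so much (the alternative efficiency value below is my own illustration, not a quoted figure):

```python
def thermal_power_required(electric_w, efficiency):
    """Reactor thermal power needed for a given electrical output."""
    return electric_w / efficiency

def waste_heat(electric_w, efficiency):
    """Heat the radiators must reject for that output."""
    return thermal_power_required(electric_w, efficiency) - electric_w

# 1 MWe at the quoted 4 MWt implies ~25% conversion efficiency;
# a less efficient cycle drives the thermal requirement up fast.
pt_25 = thermal_power_required(1e6, 0.25)  # 4 MWt
pt_20 = thermal_power_required(1e6, 0.20)  # 5 MWt
q_rej = waste_heat(1e6, 0.25)              # 3 MW to the radiators
```

That 3 MW of waste heat at 25% efficiency is also what drives the heat rejection system discussed above: every point of conversion efficiency lost shows up directly as more radiator.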

This system has undergone many changes over its 11-year life, and because much of its development and architecture is not completely clear, there's a lot about the system that we have conflicting or incomplete information on. Therefore, I'm going to be providing line-by-line references for the design details in these sections, and if you've got confirmable technical details on any part of this system, please comment below with your references!


The fuel for the reactor appears to be highly enriched uranium oxide, encased in monocrystalline molybdenum cladding. According to some reporting ( ), the total fuel mass is somewhere between 80 and 150 kg, depending on enrichment level. There have been passing mentions of carbonitride fuel, which offers a higher fissile fuel density but is more thermally sensitive (by how much is unclear).

The use of monocrystalline structures in nuclear reactors is something the Russians have been investigating and improving for decades, going all the way back to the Romashka reactor in the 1950s. The reason is simple: grain boundaries, the places where different crystalline structures meet within a solid material, scatter neutrons, similarly to how a cracked pane of glass distorts the light coming through it via internal reflection and disrupted refraction. There are two ways around this: either make sure that there are no grain boundaries (the Russian method), or make it so that the entire structure, or as close to it as possible, is grain boundaries, using what are called nanocrystalline materials (the method preferred by the US and other Western countries). While the monocrystalline option is better in many ways, since it makes an effectively transparent, homogeneous material, it's difficult to grow large monocrystalline structures, and they can be quite fragile in certain materials and circumstances. This led the US and others to investigate the somewhat easier to execute, but more loss-intensive, nanocrystalline paradigm. For astronuclear reactors, particularly ones with a relatively low keff (effective neutron multiplication factor, or how many neutrons the reactor has to work with), the monocrystalline approach makes sense. I've been unable to find the keff of this reactor anywhere, so it may in theory be quite high.

It was reported in 2014 ( ) that the first fuel element (or TVEL in Russian) was assembled at Mashinostroitelny Zavod OJSC.

Reference was made ( ) in 2015 to the fuel rods as "RUGBK" and "RUEG," although the significance of these designations is beyond me. If you're familiar with them, please comment below!

In 2016, Dmitry Markov, the Director of the Institute of Reactor Materials in Zarechny, Sverdlovsk, reported that full size fuel elements had been successfully tested (https://xn--80aaxridipd.xn--p1ai/uchenye-iz-sverdlovskoj-oblasti-uspeshno-zavershili-ispytaniya-tvel-dlya-kosmicheskogo-yadernogo-dvigatelya/ ).


The TEM uses a mix of helium and xenon as its primary coolant, a common choice for fast-spectrum reactors. Initial reporting indicated an inlet temperature of 1200K, with an outlet temperature of 1500K, although I haven’t been able to confirm this in any more recent sources. Molybdenum, tantalum, tungsten and niobium alloys are used for the primary coolant tubes.
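Those coolant figures let us do a quick back-of-envelope check on the mass flow needed to carry the reactor's thermal power. The He/Xe mixture ratio isn't public; the molar mass below (~40 g/mol, a common choice for gas-reactor turbomachinery) is my assumption, as is using the 4 MWt baseline:

```python
# Back-of-envelope: He-Xe mass flow needed to carry 4 MWt between the
# reported 1200 K inlet and 1500 K outlet. The mixture ratio is not
# public; a molar mass of ~0.040 kg/mol is assumed for illustration.
R = 8.314          # J/(mol*K), universal gas constant
M = 0.040          # kg/mol, assumed mixture molar mass
cp = 2.5 * R / M   # J/(kg*K), ideal monatomic-gas specific heat
q_thermal = 4.0e6  # W, assumed baseline thermal power
dT = 1500.0 - 1200.0

m_dot = q_thermal / (cp * dT)
print(f"cp ≈ {cp:.0f} J/(kg*K), mass flow ≈ {m_dot:.1f} kg/s")
```

On these assumptions the loop only needs to move a few tens of kilograms of gas per second, which is one reason gas cooling is attractive at this power level.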

Testing of the coolant loop took place at the MIR research reactor in NIIAR, in the city of Dimitrovgrad. Due to the high reactor temperature, a special test loop was built in 2013 to conduct the tests. Interestingly, other options, including liquid metal coolant, were considered ( ), but rejected due to lower efficiency and the promise of the initial He-Xe testing.

Power Conversion System

There have been two primary options proposed for the power conversion system of the TEM, and in many ways the program seems to bounce back and forth between them: the Brayton cycle gas turbine and a thermionic power conversion system. The first offers far superior power conversion ratios, but is notoriously difficult to make into a working system for a high temperature astronuclear application; the second is a well-understood technology that has been used through multiple iterations in flown Soviet astronuclear systems, and was demonstrated on the Buk, Topaz, and Yenisei reactors (the first two types flew; the third is the only astronuclear reactor to be flight-certified by both Russia and the US).

Prototype Brayton turbine. Image: Habr user Valentin, from MAKS 2013

In 2013, shortly after the design outline for the TEM was approved, the MAKS trade show had models of many components of the TEM, including a model of the Brayton system. At the time, the turbine was advertised as a 250 kW system, meaning that four would have been used by the TEM to meet the YaEDU's 1 MWe output. This system was meant to operate at an inlet temperature of 1550 K, with a rotational speed of 60,000 rpm and a turbine tip speed of 500 m/s. The design work was being carried out primarily at Keldysh Center.
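Those two figures, 60,000 rpm and a 500 m/s tip speed, are enough to back out the implied rotor size, which is a useful sanity check on the reported numbers (this is my own arithmetic, not a published dimension):

```python
import math

# Sanity check on the reported turbine figures: a 500 m/s tip speed at
# 60,000 rpm implies a rotor radius of u / omega.
rpm = 60_000
tip_speed = 500.0                   # m/s
omega = rpm * 2 * math.pi / 60      # angular velocity, rad/s
radius = tip_speed / omega          # implied rotor radius, m
print(f"implied rotor radius ≈ {radius * 100:.1f} cm "
      f"(diameter ≈ {radius * 200:.1f} cm)")
```

The implied rotor is only about 16 cm across, consistent with the compact prototype shown at MAKS.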

Prototype heat exchanger plates for the turbine. Image: Habr user Valentin, from MAKS 2013

The Brayton system would include both DC/AC and AC/DC converters, buffer batteries as part of a power conditioning system, and a secondary coolant system for both the power conversion system's bearing lubricant and the batteries.

Building and testing of a prototype turbine began before the 2013 major announcement, and was carried out at Keldysh Center. ( )

As early as 2015, though, there were reports ( ) that RSC Energia, the spacecraft manufacturer, was considering going with a simpler power conversion system: a thermionic one. Thermionic power conversion heats a material, which emits electrons (thermions); these electrons pass through either a vacuum or certain exotic materials (so-called Cs-Rydberg matter) and deposit on another surface, creating a current.
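The physics of that emission step is captured by the Richardson-Dushman law, which shows why thermionic converters demand very hot emitters. The emitter temperature and work function below are illustrative values for a cesiated emitter, not published TEM parameters:

```python
import math

# Thermionic emission current density (Richardson-Dushman law):
#   J = A * T^2 * exp(-phi / (k_B * T))
# Emitter temperature and work function below are illustrative only.
A = 1.2e6        # A/(m^2 K^2), Richardson constant (approximate)
K_B = 8.617e-5   # eV/K, Boltzmann constant

def emission_current_density(temp_k, work_function_ev):
    """Emitted current density in A/m^2 for a bare-surface emitter."""
    return A * temp_k**2 * math.exp(-work_function_ev / (K_B * temp_k))

j = emission_current_density(1800.0, 2.6)  # assumed cesiated emitter
print(f"J ≈ {j / 1e4:.1f} A/cm^2")
```

The exponential term dominates: modest drops in emitter temperature collapse the output, which is part of why thermionic converters are typically placed in or near the hottest part of the reactor.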

This would reduce the power conversion efficiency, and so the overall electrical power available, but it is a technology the Russians have a long history with. Those earlier reactors were designed by the Arsenal Design Bureau, which apparently also had designs for a large (300-500 kW) thermionic system. If you'd like to learn more about the history of thermionic reactors in the USSR and Russia, check out these posts:

This was potentially confirmed just a few days ago on the website Atomic Energy ( ) by the first deputy head of Roscosmos, Yuri Urlichich. If so, this is not only a major change, but a recent one. Assuming the reactor itself remains in the same configuration, this would be a departure from the historical precedent of Soviet designs, which used in-core thermionics (due to their radiation hardness) rather than the out-of-core designs investigated by the US for the SNAP-8 program (something we'll cover in the future).

So, for now we wait and see what the system will be. If it is indeed the thermionic system, then system efficiency will drop significantly (from somewhere around 30-40% to about 10-15%), meaning that far less electrical power will be available for the TEM.

Radiation Shielding

The shielding for the YaEDU is a mix of high-hydrogen blocks, structural components, and boron-containing materials ( ).

The hydrogen is useful for shielding most types of radiation, but the inclusion of boron-bearing materials stops neutron radiation especially effectively. This is important for minimizing damage from neutron irradiation through both atomic displacement and neutron capture, and boron does a very good job of it.

Current TEM Status

Two Russian media articles came out within the past week about the TEM, which spurred me to write this article.

RIA, an official state media outlet, reported a couple days ago that the first flight of a test unit is scheduled for 2030. In addition:

Roscosmos announced the completion of the first project to create a unique space “tug” – a transport and energy module (TEM) – based on a megawatt-class nuclear power propulsion system (YaEDU), designed to transport goods in deep space, including the creation of long-term bases on the planets. A technical complex for the preparation of satellites with a nuclear tug is planned to be built at Vostochny Cosmodrome and put into operation in 2030.

A second report ( ) said that the reactor is now using a thermionic power conversion system, which is consistent with the reports that Arsenal is now involved with the program. This is a major design change from the Brayton cycle option, but one that could be considered unsurprising: in the US, both Rankine and Brayton cycles have often been proposed for space reactors, only to be replaced by thermoelectric power conversion systems. While the Russians have extensive thermoelectric experience, their experience with the more efficient thermionic systems is also quite extensive.

It appears that there's a current tender for 525 million rubles for the TEM project by Roscosmos, according to the Russian government procurement website, running through November 2021 ( ), for:

“Creation of theoretical and experimental backlogs to ensure the development of highly efficient rocket propulsion and power plants for promising rocket technology products, substantiation of their main directions (concepts) of innovative development, the formation of basic requirements, areas of rational use, design and rational level of parameters with development software and methodological support and guidance documents on the design and solution of problematic issues of creating a new generation of propulsion and power plants.”

Work continues on the Vostochny Cosmodrome facilities, and the reporting still concludes that they will be completed by 2030, when the first mass-production TEMs are planned to be deployed.

According to Yuri Urlichich, deputy head of Roscosmos, the prototype power plant would be completed by 2025, and life testing of the reactor would be completed by 2030. This is the second major delay in the program, and may indicate a massive redesign of the reactor. If the system has been converted to thermionic power conversion, that would explain both the delay and the redesign of the spacecraft, but it's not clear that this is the reason.

For now, we just have to wait and see. It still appears that the TEM is a major goal of both Roscosmos and Rosatom, but it is also becoming apparent that there have been challenges with the program.

Conclusions and Author Commentary

It deserves reiterating: for all intents and purposes I'm some random person on the Internet, but my research record, and my care in reporting on developments with extensive documentation, are things I think deserve attention. So I'm gonna put my opinion on this spacecraft out there.

This is a fascinating possibility. As I’ve commented on Twitter, the capabilities of this spacecraft are invaluable. Decommissioning satellites is… complicated. The so-called “graveyard orbits,” or those above geosynchronous where you park satellites to die, are growing crowded. Satellites break early in valuable orbits, and the operators, and the operating nations, are on the hook for dealing with that – except they can’t.

Additionally, while many low-cost launchers are available for low and mid Earth orbit launches, geostationary orbit is a whole different thing. The fact that India maintains separate "Polar Satellite Launch Vehicle" (PSLV) and "Geosynchronous Satellite Launch Vehicle" (GSLV) classes of launch vehicle drives this home, even within a single national space launch architecture.

The ability to contract with whatever operator runs TEM missions (I'm guessing Roscosmos, but I may be wrong), specify an orbital path post-booster-cutoff and a new final orbit, and have what is effectively an external, orbital-class stage come and move the satellite into that final orbit is… unprecedented. The idea of an inter-orbital tug is one that's been proposed since the 1960s, before electric propulsion was practical. If this works the way the design specs suggest, it literally rewrites the way mission planning can be done in cislunar space for any satellite operator who's willing to take advantage of it (most obviously, military and intelligence customers outside Russia won't be).

The other thing to consider in cislunar space is decommissioning satellites: dragging things from GEO into a low enough orbit that they'll burn up is costly in mass, and assumes that the propulsion and the guidance, navigation, and control systems survive to the end of the satellite's mission. As a satellite operator, and as a host nation with all the treaty obligations the OST imposes, being able to drag defunct satellites out of orbit is incredibly valuable. The TEM can deliver one satellite and drag another into a disposal orbit on the way back. To paraphrase a wonderful Sir Terry Pratchett character, Harry King: "They pay me to take it away, and they pay me to buy it after." In this case it's the opposite: they pay me to take it out, and they pay me to take it back. Especially in mitigating the graveyard orbit problem, this is a potentially golden financial opportunity for the TEM operator: every mm/s of mission dV can potentially be operationally profitable. This is potentially the only system I've ever seen that can actually say that.

More than that, depending on payload restrictions for TEM cargoes, interplanetary missions can gain significant delta-vee from using this spacecraft. It may even be possible, should mass production actually take place, to purchase the end-of-life (or more) dV of a TEM during decommissioning (something I've never seen discussed) to boost an interplanetary mission without paying the launch mass penalty for Earth escape. The spacecraft was proposed as propulsion for crewed Mars missions for the first half of its existence, so it has the capability; but just as SpaceX Starship interplanetary missions require SpaceX to give up a Starship, the same applies here, and it has to be worth the (in this case interplanetary) launch provider's while to lose the spacecraft for them to agree to it.

This is an exciting spacecraft, and one that I want to know more about. If you’re familiar with technical details about either the spacecraft or the reactor that I haven’t covered, please either comment or contact me via email at

We'll continue with our coverage of fluid fueled NTRs in the next post. These systems offer many advantages over both traditional, solid core NTRs and electrically propelled spacecraft such as the TEM, and making the details more available is something I've greatly enjoyed. We'll finish up liquid fueled NTRs, followed by vapor fuels, then closed and open cycle gas core NTRs, probably by the end of the summer.

If you’re able to support my efforts to continue to make these sorts of posts possible, consider becoming a Patron at My supporters help me cover systems like this, and also make sure that this sort of research isn’t lost, forgotten, or unavailable to people who come into the field after programs have ended.

Development and Testing Forgotten Reactors History Nuclear Thermal Systems

Liquid Fueled NTRs: An Introduction

Hello, and welcome back to Beyond NERVA! Today we continue our look into advanced NTR fuel types, by diving into an extended look at one of the least covered design types in this field: the liquid fueled NTR (LNTR).

This is a complex field, with many challenges unique to the phase state of the fuel, so while I was planning on making this a single post, it has grown into three! This first one is going to discuss LNTRs in general, as well as some common problems and challenges that they face. I'll include a very brief history of the designs, almost all of them dating from the 1950s and 1960s, which we'll look at more in depth in the next couple of posts.

Unfortunately, a lot of the fundamental problems of an LNTR get deep fast for a lot of people, but the fundamental concepts are often not too hard to grasp in broad strokes. I'm gonna try my best to explain them the way that I learned them, and if there are more questions I'll attempt to point you to the references I've used as a layperson. I honestly believe that this architecture has suffered from a combination of middling engine performance ("not terrible, not great": ~1300 s isp, 19:1 T/W) and a lack of sustained research attention.

With that, let’s get into liquid fueled NTRs (LNTR), their history, and their design!

Basic Design Options for LNTR

LNTRs are not a very diverse group of reactor concepts, partly due to the nature of the fuel and partly because they haven't been well researched overall. All the designs I've found use centrifugal force to contain molten fuel inside a tube, with the central void in the spinning tube being the outlet for the propellant. The first design used a single large fuel mass in a single fuel element, but this was quickly divided into multiple individual fuel elements, which became the norm for LNTRs through the latest designs. One consequence of that first design was the calculation of the neutronic moderation capacity of the H2 propellant in this toroidal fuel structure: the authors of the study determined that it was so close to zero that it was reasonable to treat the center of the fuel element as a vacuum as far as MCNP (the standard neutronic modeling code both at the time and, in updated form, now) is concerned. This is worth noting: any significant neutron moderation for the core must come from the reflectors and any moderator integrated into the fuel structure (complex to do with a liquid in many cases) or the body of the reactor; the propellant flow won't matter enough to cause a significant decrease in neutron velocities.

They do seem to fall into two broad categories, which I'll call bubblers and radiators. A bubbler LNTR is one where the propellant is fed from the outside of the fuel element, through the molten fuel, and into the central void of the fuel element; a radiator LNTR passes propellant only through the central void, along the long axis of the fuel element.

A bubbler has the advantage of an incredible amount of surface area for heat transfer from fuel to propellant, with the surface area inversely proportional to the size of the individual bubbles: smaller bubbles mean more surface area, more heat transfer, and greater theoretical power density in the active region of the reactor. Bubblers also regeneratively cool the entire length of the fuel element's outer surface as a natural consequence of how the propellant is fed in, rather than needing specialized regenerative cooling in the fuel element canister and reactor body. However, bubblers have a few problems. First, the reactor will not operate continuously, so on shutdown the fuel will solidify, and the bubbling mechanism will clog with frozen nuclear fuel. Second, bubbles breaching the surface can fling molten fuel into the fast-moving propellant stream, causing fuel to be lost. Finally, the bubbles increase mixing of the fuel, which is mostly good, but which can also carry off certain chemical components of the fuel at a greater rate, either through vaporization into the bubbles or through entrainment and outgassing when a bubble breaches the surface. In a way, it's like boiling pasta sauce: the water boils, and the bubbles mix the sauce as they rise, but some chemical compounds diffuse into the water vapor along the way (which ones depend on what's in the sauce), and unless there's a lid on the pot, the sauce splatters across the stove, again depending on the other components of the sauce you're cooking. (The obvious problem with this metaphor is that in a bubbler the gaseous component is externally introduced, rather than being part of the initial solution.)
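That inverse relationship between bubble size and heat-transfer area is easy to quantify: for spherical bubbles occupying a void fraction f of a fuel volume V, the total interfacial area is 3fV/r. The numbers below are purely illustrative, not from any LNTR study:

```python
# Why bubblers chase small bubbles: total fuel/propellant interfacial
# area scales as 1/r for spherical bubbles. All values illustrative.

def bubble_area_m2(fuel_volume_m3, void_fraction, bubble_radius_m):
    """Total surface area of spherical bubbles filling a void fraction."""
    return 3.0 * void_fraction * fuel_volume_m3 / bubble_radius_m

V = 0.05  # m^3 of molten fuel (assumed)
for r_mm in (5.0, 1.0, 0.1):
    a = bubble_area_m2(V, 0.3, r_mm / 1000.0)
    print(f"r = {r_mm} mm -> interfacial area ≈ {a:.0f} m^2")
```

Halving the bubble radius doubles the heat-transfer area for the same void fraction, which is exactly the trade the Princeton work was chasing, at the cost of the splattering and entrainment problems above.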

Radiators avoid many, but not all, of a bubbler's problems by treating the fuel almost as a solid mass while it's under centrifugal force: the propellant enters from the ship end, passes through the central void in the fuel element, and exits the aft end into the nozzle through an outlet plenum. This makes fuel retention a far simpler problem overall, but fuel will still be lost through vaporization into the propellant stream (more on this later). Another issue with radiators is that, without the propellant passing all the way through the fuel from the outer to the inner diameter, the thermal emissions go not only into the propellant but also into the fuel canister and the reactor itself; more efficiently, in fact, since H2 isn't especially good at capturing heat and conduction is more efficient than radiation. This usually requires regenerative cooling for both the fuel canister and the reactor, which, while doable, requires more complex plumbing within the reactor body to stay within material thermal limits, even for relatively high-temperature materials, much less hydrides (which are good low-volume, low-mass moderators for compact reactors, but incredibly thermally sensitive).

As with any other astronuclear design, there’s a huge design envelope to play with in terms of fuel matrix, even in liquid form (although this is more limited in liquid designs, as we’ll see), as well as moderation level, number and size of fuel elements, moderator type, and other decisions. However, the vast majority of the designs have been iterative concepts on the same basic two ideas, with modifications mostly focusing on fuel element dimensions and number, fuel temperature, propellant flow rates, and individual fuel matrix materials rather than entirely different reactor architectures.

It’s worth noting that there’s another concept, the droplet core NTR, which diffuses the liquid fuel into the propellant, then recaptures it using (usually) centrifugal force before the droplets can leave the nozzle, but this is a concept that will be covered alongside the vapor core reactor, since it’s a hybrid of the two concepts.

A (Very) Brief History of LNTR

Because we’re going to be discussing the design evolution of each type of LNTR in depth in the next two posts, I’m going to be incredibly brief here, giving a general overview of the history of LNTRs. While they’re often mentioned as an intermediate-stage NTR option, there’s been a surprisingly small amount of research done on them, with only two programs of any significant size being conducted in the 1960s.

Single cavity LNTR, Barrett 1963

The first proposal for an LNTR was by J. McCarthy in 1954, in his "Nuclear Reactors for Rockets." This design used a single, large cylinder, spun around its long axis, as both the reactor and the fuel element. The propellant was fed radially through the fuel mass, which was made of uranium carbide (UC2), into the central void. This design, like any first design, had a number of problems, but showed sufficient promise to be re-examined, tweaked, and further researched to make it more practical. While I don't have access to this paper, a subsequent study of the design placed the maximum specific impulse of this type of NTR in the range of 1200-1400 seconds.

Multiple Fuel Element LNTR, Nelson et al 1963

This led to the first significant research program into the LNTR, carried out by Nelson et al at the Princeton Aeronautical Engineering Laboratory in 1963. This design changed the single large rotating cylinder into several smaller ones, each rotating independently, while keeping the bubbler architecture of the McCarthy design. This ended up improving the thrust-to-weight ratio, specific impulse, power density, and other key characteristics. The study also enumerated many of the challenges of both the LNTR in general and the bubbler in particular, for the first time in a detailed and systematic fashion; but between the lack of information on the materials involved and the lack of both computational theory and modeling capability, the study was hampered by many assumptions of convenience. Despite these challenges (which would continue to be addressed over time in smaller studies and other designs), the Princeton LNTR became the benchmark for most LNTR designs of both types that followed. The final design chosen by the team had a vacuum specific impulse of 1250 s, a chamber pressure of 10 atm, and a thrust-to-weight ratio of about 2:1, with a reactor mass of approximately 100 metric tons.

Experimental setup for bubble behavior studies, Barrett Jr 1963

Studies on the technical details of the most challenging aspect of this design, bubble motion, would continue at Princeton for a number of years, including experiments to observe the behavior of the particular bubble form needed while under centrifugal acceleration, but challenges in modeling the two-phase (liquid/gas) thermodynamic and hydrodynamic interactions continued to dog the bubbler design. It is unclear when work stopped on the bubbler, but the last reference to it that I can find in the literature is from 1972, in a published Engineering Note by W.L. Barrett, who observed that many of the hoped-for goals were overly optimistic, but not by a huge margin. This was during the period when American astronuclear funding was being demolished, so it would not be surprising for the concept to go dormant at that point. Since the restart of modest astronuclear funding, though, I have been unable to find any reference to a modern bubbler design for either terrestrial or astronuclear use.

Perhaps the main reason for this, which we'll discuss in the next section, is the inconveniently high vapor pressure of many compounds at LNTR operating temperatures (about 8800 K). This means that the constituent parts of the fuel body, most notably the uranium, would vaporize into the propellant, not only removing fissile material from the reactor but also raising the average molecular mass of the propellant stream, decreasing specific impulse. This, in fact, was the reason the Lewis Research Center focused on a different form of LNTR: the radiator.
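The reason vapor pressure dominates at these temperatures is that it rises roughly exponentially with temperature, per the Clausius-Clapeyron relation. The enthalpy of vaporization and reference point below are placeholder assumptions for illustration, not measured UC2 data:

```python
import math

# Clausius-Clapeyron sketch of why fuel vaporization explodes at LNTR
# temperatures: p(T) = p_ref * exp(-(dH/R) * (1/T - 1/T_ref)).
# The enthalpy of vaporization below is an assumed placeholder value,
# not measured carbide-fuel data.
R = 8.314        # J/(mol*K)
DH_VAP = 500e3   # J/mol, assumed enthalpy of vaporization

def vapor_pressure(temp_k, p_ref, t_ref):
    """Vapor pressure relative to a reference point (p_ref at t_ref)."""
    return p_ref * math.exp(-(DH_VAP / R) * (1.0 / temp_k - 1.0 / t_ref))

# Ratio of vapor pressures between 6000 K and 5000 K operation
ratio = vapor_pressure(6000.0, 1.0, 5000.0)
print(f"raising 5000 K -> 6000 K multiplies vapor pressure by ≈ {ratio:.0f}x")
```

Even under these rough assumptions, a modest push for higher core temperature multiplies the vaporization loss rate severalfold, which is the central tension in every LNTR design: hotter fuel means better isp and worse fuel retention.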

Work on the radiator concept began in 1964, and was conducted by a team headed by R. Ragsdale, one of the leading NTR designers at Lewis Research Center. To mitigate the vapor losses of the bubbler type, the question was asked whether the propellant actually had to pass through the fuel, or whether radiant heating would suffice to thermalize the hydrogen propellant while minimizing the fuel loss from the liquid/gas interaction zone. The answer was a definite yes, although the fuel temperature would have to be higher, and the propellant would likely need to be seeded with some particulate or vapor to increase its thermal absorption. While the overall efficiency would be slightly lower, only a minimal loss of specific impulse would occur, and the thrust-to-weight ratio could be increased due to higher propellant flow (only so much propellant can pass through a given volume of bubbler-type fuel before unacceptable splattering and other difficulties arise). This work seems to have reached its conclusion in 1967, the last date on any of the papers or reports I've been able to find, with a final compromise design achieving 1400 s of isp and a thrust-to-core-weight ratio of 4:1, at a core temperature of 5060 K and a reactor pressure of 200 atm (roughly 20 MPa).

However, unlike with the bubbler-type LNTR, the radiator would have one last, minor hurrah. In the 1990s, at the beginning of the Space Exploration Initiative, funding became available again for NTR development. A large conference was held in 1991, in Albuquerque, NM, and served as a combination state-of-research and idea presentation for what direction NTR development should go in, as well as determining which concepts should be explored more in depth. As part of this, presentations were made on many different fundamental reactor architectures, and proposals for each type of NTR were made. While the bubbler LNTR was not represented, the radiator was.

LARS cross-section, Powell 1991

This concept, presented by J. Powell of Brookhaven National Lab, was the Liquid Annular Reactor System (LARS). Compared to the Lewis and Princeton designs it was a simple reactor, with only seven fuel elements. These would be spaced in a cylinder of Be/H moderator, and would use a twice-through coolant/propellant path: each cylinder was regeneratively cooled from nozzle end to ship end, and then the propellant, seeded with tungsten microparticles, would pass through the central void and out the nozzle. Interestingly, this design does not seem to reference the work done at either Princeton or Lewis RC, so it may have been a new design from first principles (other designs presented at the conference made extensive use of legacy data and modeling). The reactor was only conceptually sketched out in the documentation I've found; it operated at higher temperatures (~6000 K) and lower pressures (~10 atm) than the previous designs in order to dissociate virtually all of the hydrogen propellant, and no thrust-to-core-weight ratio was estimated.

It is unclear how much work was done on this reactor design, and it also remains the last design of any LNTR type that I’ve been able to come across.

Lessons from History: Considerations for LNTR Design

Having looked through the history of LNTR design, it's worth examining the lessons learned from these design studies and experiments, as well as the reasons (as far as we can tell) that the designs evolved the way they did. I just want to say up front that I'm going to be especially careful to distinguish my own interpretation from that of someone more qualified when it comes to the constraints and design philosophies here, because this is an area that runs into SO MANY different materials, neutronics, and other constraints that I don't even know where to begin independently assessing the advantages and disadvantages.

Also, we’re going to be focusing on the lessons that (mostly) apply to both the bubbler and radiator concepts. The following posts, covering the types individually, will address the specific challenges of the two types of LNTR.

Reactor Architecture

The number of fuel elements in an LNTR is a trade-off.

  • Advantages to increasing the number of fuel elements
    • The total surface area available in the fuel/propellant boundary increases, increasing thrust for a given specific impulse
    • The core becomes more homogeneous, making for a more idealized neutronic environment (there's a limit to this, and interstitial moderating blocks between the fuel elements can further thermalize the reactor, but it's a good rule of thumb in most cases)
  • Advantages to minimizing the number of fuel elements
    • The more fuel elements, the more manufacturing headache in making the fuel element canisters and elements themselves, as well as the support equipment for maintaining the rotation of the fuel elements;
      • depending on the complexity of the manufacturing process, this could be a significant hurdle,
      • Electric motors don't do well in a high neutron flux, generally requiring the driveshaft to penetrate at least part of the shadow shield, and turbines to drive the system can be so complex that they're often not considered an option in NTRs (to be fair, it's rare that they would be needed)
    • Less angular velocity is needed for each fuel element to achieve the same centrifugal force, due to the larger radius of each element
    • For a variety of reasons, the fuel thickness increases to maintain the same critical mass in the reactor. NOTE: this is a benefit for bubbler-type LNTRs, but either neutral or detrimental to radiator-type LNTRs.
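The angular velocity trade in the list above falls directly out of the centrifugal acceleration formula: a = ω²r, so for the same acceleration at the fuel surface, a larger-radius element needs a lower spin rate. The target acceleration below is an assumption of mine for illustration:

```python
import math

# Spin rate needed for a given centrifugal acceleration at the fuel
# surface: a = omega^2 * r  ->  omega = sqrt(a / r). Values illustrative.

def rpm_for_acceleration(accel_m_s2, radius_m):
    """Spin rate (rpm) to hold a given acceleration at a given radius."""
    omega = math.sqrt(accel_m_s2 / radius_m)   # rad/s
    return omega * 60.0 / (2.0 * math.pi)

accel = 100 * 9.81  # assumed target of 100 g at the fuel surface
for r_cm in (5.0, 10.0, 20.0):
    rpm = rpm_for_acceleration(accel, r_cm / 100.0)
    print(f"r = {r_cm:4.1f} cm -> {rpm:.0f} rpm")
```

Quadrupling the element radius halves the required rpm, easing the bearing and drive problems noted above, which is part of why fewer, larger elements remain attractive despite the neutronic penalty.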

Another major area of trade-off is propellant mass flow rate. Mass flow is fundamentally limited in bubbler LNTRs (something we’ll discuss in the next post): the bubbles can’t be allowed to combine (or splattering and free droplets will occur), the more bubbles there are the more the fuel expands (causing headaches for fuel containment), and other issues present themselves as flow increases. For radiator – and to a lesser extent bubbler – type LNTRs, on the other hand, the major limitation is thermal uptake in the propellant (too much mass flow means that the exhaust velocity will drop), which can be somewhat addressed by propellant seeding (something that we’ll discuss in a future webpage).

Fuel Material Constraints

One fundamental question for any LNTR fuel is the maximum theoretical isp of a design, which is a direct function of the temperature at which the fuel boils and the rate at which the fuel vaporizes at the fuel/propellant interface. Pretty much every material has a range of temperature and pressure values where either sublimation (in a solid) or vaporization (in a liquid) will occur, and these characteristics were not well understood at the time.

This is actually one of the major tradeoffs between bubbler and radiator designs. In a bubbler, the propellant can reach the same maximum temperature as the fuel, but it also becomes effectively saturated with any available fuel vapor. The actual vapor concentrations are… well, as far as I can tell, they have only ever been modeled with 1960s methods, and those interactions are far beyond what I’m either qualified or comfortable to assess, but I suspect that while the problem may be slightly mitigated it can’t be completely avoided.

However, there are general constraints on the fuels available for use, and the choice for every LNTR design has been UC2, usually with the majority of the fuel mass being either ZrC or NbC as the diluent. Other options are potentially available, such as 184W-U or U-Si metals, but they have not been explored in depth.

Let’s look at the vapor pressure implications more in depth, since it really is the central limitation of LNTR fuels at temperatures that are reasonable for these rockets.

Vapor Pressure Implications

A study on the vapor pressure of uranium was conducted in 1953 by Rauh et al at Argonne National Laboratory, which determined an approximate function for the vapor pressure of “pure” uranium metal (the experimental setup required some discussion of the inhibiting effects of oxygen – which would not be present in an NTR to any great degree – as well as of tantalum contamination of the uranium). However, this was based on solid U, so it was only useful as a starting point.

Barrett Jr 1963

W. Louis Barrett Jr. conducted another study in 1963 on the implications of fuel composition for a bubbler-type LNTR, and the constraints on the potential specific impulse of this type of reactor. He examined many different fissile fuel matrices in the paper, including Pu and Th compounds:

From this, and assuming a propellant pressure of 10^3 psi, a maximum theoretical isp was calculated for each type of fuel:

Barrett Jr 1963
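As a rough sanity check on figures like these (this is the textbook ideal-rocket estimate, not Barrett’s actual method), the upper bound on specific impulse for a given fuel surface temperature can be sketched from the ideal exhaust velocity of hydrogen. The constant gamma and full expansion to vacuum are simplifying assumptions, so this overestimates somewhat:

```python
import math

G0 = 9.80665         # standard gravity, m/s^2
R_UNIVERSAL = 8.314  # universal gas constant, J/(mol*K)

def ideal_isp(chamber_temp_k: float, molar_mass_kg: float, gamma: float = 1.4) -> float:
    """Upper-bound specific impulse (s) assuming full expansion to vacuum.

    v_e = sqrt( 2*gamma/(gamma-1) * (R/M) * T_c ); Isp = v_e / g0.
    Frozen flow and constant gamma are assumed.
    """
    v_e = math.sqrt(2.0 * gamma / (gamma - 1.0)
                    * (R_UNIVERSAL / molar_mass_kg) * chamber_temp_k)
    return v_e / G0

# Pure H2 propellant (M = 2.016 g/mol) at a few candidate temperatures:
for t in (3000.0, 4500.0, 5500.0):
    print(f"{t:.0f} K -> ~{ideal_isp(t, 2.016e-3):.0f} s")
```

Even with these generous assumptions, the ~4500-5500 K range studied for LNTR fuels corresponds to isp in the 1100-1300 s neighborhood, which lines up with the numbers quoted later in this post.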

Additional studies were carried out on uranium metal and carbon compounds – mostly Zr-C-U, Nb-C-U, and 184W-C-U, in various concentrations – in 1965 and 1966 by Kaufman and Peters of MANLABS for NASA’s Lewis Research Center (the center of LNTR development at the time), conducted at 100 atmospheres and ~4500 to ~5500 K. These were low atomic mass fraction systems (0.001-0.02), which may be too low for some designs, but which minimize fissile fuel loss to the propellant flow. Other candidate materials considered were Mo-C-U, B-C-U, and Me-C-U, but these were not studied at the time.

A summary of the results can be found below:

Perhaps the most significant question is mass loss rates due to hydrogen transport, which can be found in this table:

Kaufman, 1966

These values offer a good starting point for those who want to explore the maximum operating temperature of this type of reactor, but additional options may exist. For instance, a high vapor pressure, high boiling point, low neutron absorption metal which will mix minimally with the uranium-bearing fuel could be used as a liquid fuel clad layer, either in a persistent form (meant to survive the lifetime of the fuel element) or as a sacrificial vaporization layer, similar to how ablative coatings are used in some rocket nozzles (one note here: this will increase the atomic mass of the propellant stream, decreasing the specific impulse of such a design). However, other than the use of ZrC in the inner region of the Princeton design study’s fuel element (where it was also considered a sacrificial component of the fuel), I haven’t seen anyone discuss this concept in depth in the literature.

A good place to start investigating this concept, however, would be with a study done by Charles Masser in 1967 entitled “Vapor-Pressure Data Extrapolated to 1000 Atmospheres of 13 Refractory Materials with Low Thermal Absorption Cross Sections.” While this was focused on the seeding of propellant with microparticles to increase thermal absorption in colder H2, the vapor-pressure information can provide a good jumping off point for anyone interested in investigating this subject further. The paper can be found here:
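For anyone wanting to play with this sort of extrapolation, vapor pressure data is commonly fit to a Clausius-Clapeyron-style relation, ln(P) = A − B/T, which can then be pushed (cautiously) beyond the measured range. The data points below are invented for illustration – they are not taken from Masser’s tables:

```python
import math

def fit_clausius_clapeyron(t1: float, p1: float, t2: float, p2: float):
    """Fit ln(P) = A - B/T through two (T [K], P [atm]) data points."""
    b = math.log(p2 / p1) / (1.0 / t1 - 1.0 / t2)
    a = math.log(p1) + b / t1
    return a, b

def vapor_pressure(t: float, a: float, b: float) -> float:
    """Evaluate the fitted relation at temperature t [K], returning atm."""
    return math.exp(a - b / t)

# Hypothetical measurements for a refractory carbide: 0.01 atm at 3000 K,
# 1 atm at 4000 K. Extrapolate to a higher operating temperature:
a, b = fit_clausius_clapeyron(3000.0, 0.01, 4000.0, 1.0)
print(vapor_pressure(5000.0, a, b))  # extrapolated vapor pressure at 5000 K
```

The steep exponential climb with temperature is exactly why vapor losses are the central limit on LNTR fuel temperature.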

Author speculation concept:

Another, far more speculative option is available if the LNTR can be designed as a thermal breeder and can deal with certain challenges in fuel worth fluctuations (and other headaches), especially at startup: thorium. Th has a much lower vapor pressure than U (the vapor pressure behavior of the carbides in a high temperature, high pressure situation doesn’t seem to have been studied, but ThO2 and ThO3 outperform UC2 – though oxides are a far worse idea than carbides in this sort of reactor), so it may be possible to make a Th-breeder LNTR to reduce fissile fuel vapor losses – which does nothing for the C, Zr, or Nb losses, but may be worth it.

This requires a couple of things to happen: first, the reactor’s available reactivity needs to remain within the control authority of the control systems in a far more complex system; second, the breeding ratio of the reactor needs to be carefully managed. There are a few reasons for this, but let’s look at the general shape of the challenge.

Many LNTR designs are either fast or epithermal designs, with few extending into the thermal neutron spectrum. Thorium breeds into 233U best in the thermal neutron spectrum, so the neutron flux needs to be balanced against the Th present in the reactor in order to make sure that the proper breeding ratio is maintained. This can be adjusted by adding moderator blocks between the fuel elements, using other filler materials, and other options common to NTR neutronics design, but isn’t something that I’ve seen addressed anywhere.

Let’s briefly look at the breeding process: when 232Th captures a neutron, it becomes 233Th, which quickly decays into 233Pa, a strong neutron poison, and remains in that state for roughly a month (233Pa has a ~27 day half-life) before decaying into 233U. Unlike the thorium breeding molten salt reactor, these designs don’t have on-board fuel reprocessing – that’s a very heavy, complex system that would kill your engine’s dry mass, so just adding one isn’t a good option from a systems engineering point of view. So, initially, the reactor loses a neutron to the 232Th, and the resulting 233Pa stays in the reactor as a neutron poison until long after the reactor is shut down (so the decay energy will need to be dealt with, though passive radiation may well be enough). Then, by the next time the engine is started up, it’s likely that this neutron poison has transmuted into a fuel even more fissile than your original load – unless you loaded the fuel with 233U in the first place (233U has a stronger fission capture cross-section than 235U, which in practical effect reduces the fissile requirements by ~33%)!

This means that the reactor has to go through startup with a reasonably large amount of control authority in reserve, so that it can continue to add reactivity to counterbalance the buildup of 233Pa and other fission product neutron poisons, as well as fissile fuel worth degradation (if the fuel element has been used before), and then be able to deal with a potentially more reactive reactor on a later startup (if the breeding ratio has more of a fudge factor due to the fast ramp-up/ramp-down behavior of this reactor, varying power levels, etc., making it higher in effect than ~1.01/4).

The other potential issue is that if you need less fissile material in the core, every fissile atom is more valuable there than it would be in a core with a less fissile fuel. If the vapor entrainment losses end up being higher than the effective breeding rate (i.e. the breeding that occurs while the reactor is operating), then the reactor is going to lose reactivity too fast to maintain criticality. Along these lines, the behavior of 233Pa is also going to need to be studied, because it is not only your future fuel but also a strong neutron poison, sitting in a not-great neutronic configuration for your fuel element, so there are a few complications in that intermediate step.

This is an addressable option, potentially, but it’s also a lot of work on a reactor that already has a lot of work needed to make feasible.

Conclusion


Liquid fueled NTRs (LNTRs) show great promise as a stepping stone to advanced NTR development in both of their variants, the bubbler and the radiator. The high specific impulse, as well as the potentially high thrust-to-weight ratio, offers benefits for many interplanetary missions, both crewed and uncrewed.

However, there are numerous challenges in the way of developing these systems. Of all the NTR types, they are among the least researched, with only a handful of studies conducted in the 1960s and a single project in the 1990s. These projects focused on a single family of fuels, which has never been tested under fission power for the neutronic and reactor physics behaviors necessary for proper modeling of these systems.

Additionally, the interactions between the fuel and propellant in these systems are far more complex than in most other fuel types. Only two other types of NTR (the droplet/colloid core and open cycle gas core NTRs) face the same level of challenge in fissile fuel retention and fuel mass entrainment that the LNTR faces, especially in the bubbler variation.

Finally, they are some of the least well-known variations of NTR in both popular and technical literature, with only a few papers ever being published and only short blurbs on popular websites due to the difficulty in finding the technical source material.

We will continue to look at these systems in the next two blog posts, covering the bubbler-type LNTR in the next one, and the radiator type in the one following that. These blog posts are already in progress, and should be ready for publication in the near term.

If you would like early access to these, as well as all future blog posts and websites, consider becoming a Patron of the page! My Patrons help me be able to devote the time that I need to the website, and provide strong encouragement for me to put out more material as well! You can sign up here:



References

Specific Impulse of a Liquid Core Nuclear Rocket, Barrett Jr., 1963

A Technical Report on the Conceptual Design Study of a Liquid-Core Nuclear Rocket, Nelson et al., 1963

The Liquid Annular Reactor System (LARS) Propulsion, Powell et al., 1992


Fluid Fueled NTRs: A Brief Introduction

Hello, and welcome back to Beyond NERVA! This is actually about the sixth blog post I’ve started in the last month: each one ran more than 20 pages long and had to be split up, because more explanatory material was needed before I could discuss the concepts I was trying to cover (this blog post has itself been split up multiple times).

I apologize about the long hiatus. A combination of personal, IRL complications (I’ve updated the “About Me” section to reflect this, but those will not affect the type of content I share on here), and the professional (and still under wraps) opportunity of a lifetime have kept me away from the blog for a while. I want to return to Nuclear Thermal Rockets (NTRs) for a while, rather than continuing Nuclear Electric Propulsion (NEP) power plants, as a fun, still-not-covered area for me to work my way back into writing regularly for y’all again.

This is the first in an extensive blog series on fluid fueled NTRs, of three main types: liquid, vapor, and gas core NTRs. These reactors avoid the thermal limitations of solid fuel elements, increasing the potential core temperature to above 2550 K (the generally accepted maximum thermal limit on workable carbide fuel elements) and thereby increasing the specific impulse of these rockets. At the same time, structural material thermal limits, challenges in adequately heating the propellant to gain these advantages in a practical way, fissile fuel containment, and power density issues are major concerns in these types of reactors, so in this blog post we’re going to dig into the weeds of the general challenges of fluid fueled reactors (with some details on each reactor type’s design envelope).

Let’s start by looking at the basics behind how a nuclear reactor can operate without any solid fuel elements, and what the advantages and disadvantages of going this route are.

Non-Solid Fuels

A nuclear reactor is, at its basic level, a method of maintaining a fission reaction in a particular region for a given time. This depends on maintaining a combination of two characteristics: the number of fissile atoms in a given volume, and the number and energy of neutrons in that same volume (the neutron flux). As long as the number of neutrons and the number of fissile atoms in the area are held in balance, a controlled fission reaction will occur in that area.

Solid Core Fuel Element, image DOE

The easiest way to maintain that reaction is to hold the fissile atoms in a given place using a solid matrix of material – a fuel element. However, a number of things have to be balanced for a fuel element to be a useful and functional piece of reactor equipment. For an astronuclear reactor, there are two main concerns: the amount of power produced by the fission reaction has to be balanced by how much thermal energy the fuel element is able to contain, and the fuel element needs to survive the chemical and thermal environment that it is exposed to in the reactor. (Another concern, for terrestrial reactors, is that the fuel element has to contain the resulting fission products from the reaction itself, as well as any secondary chemical pollutants, but this isn’t necessarily a problem for astronuclear reactors, where the only environment of concern is the more heavily shielded payload of the rocket.)

This doesn’t mean that a reactor has to use a solid fuel element. As the increasingly well known molten salt reactor, as well as various other fluid fueled reactor concepts, demonstrates, the only requirement is for the necessary number of fissile atoms and the required energy level and density of neutrons to exist in the same region of the reactor. This, especially in Russian literature, is called the “active zone” of the reactor core. It can be an especially useful term, since the reactor core can contain areas that aren’t as active in terms of fission activity. (A great example of this is the travelling wave reactor, most recently investigated – and then abandoned – by TerraPower.) More generally, it’s useful to differentiate the fueled areas undergoing fission from other structures in the reactor, such as neutron moderation and control regions. The key takeaway is that, as long as there is enough fuel, and the right density of neutrons at the right energy, then a sustained – and controlled – fission reaction can be achieved.

The obvious consequence is that the solid fuel element isn’t required – and in the case of a nuclear thermal rocket, where the efficiency of the rocket is directly tied to the temperature it can achieve, the solid fuel is in fact a major limitation to a designer. The downside to this is that, unlike solids, fluids tend to move, especially under thrust. Because the materials used in a solid fueled rocket are already at the extremes of what molecular bonds can handle, this means that either very clever cooling or very robust containment methods need to be used to keep the rest of the reactor from destroying itself.

Finally, one of the interesting consequences of not having a solid fuel element is that the reactor’s power density (W/m^3) and specific power (W/kg) can, in theory, be increased in proportion to how much coolant can be used. In practice, it can be challenging to maintain a high power density in certain types of fluid fueled reactors because of the large thermal expansion these fuels can undergo. There are ways around this, and fluid fueled reactors can have higher power densities than even closely related solid fueled variants, but the fact that fluids expand much more than solids at high temperatures has to be taken into account: if the fluid expands too much, the power density drops, though not necessarily the specific mass of the system.
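A toy example of the expansion effect (every number here is invented for illustration): volumetric thermal expansion lowers the power density while leaving the specific power untouched, since the fuel mass doesn’t change:

```python
def power_density(power_w: float, volume_m3: float) -> float:
    """Power density in W/m^3."""
    return power_w / volume_m3

def expanded_volume(v0_m3: float, beta_per_k: float, delta_t_k: float) -> float:
    """Volumetric thermal expansion: V = V0 * (1 + beta * dT)."""
    return v0_m3 * (1.0 + beta_per_k * delta_t_k)

# Assumed, illustrative numbers: 100 MW generated in 0.1 m^3 (500 kg) of
# fluid fuel, with a volumetric expansion coefficient of 1e-4 /K,
# heated 2000 K above its reference temperature.
power, v_cold, mass = 100e6, 0.1, 500.0
v_hot = expanded_volume(v_cold, 1e-4, 2000.0)

print(power_density(power, v_cold))  # cold geometry, W/m^3
print(power_density(power, v_hot))   # same power, expanded fuel: lower W/m^3
print(power / mass)                  # specific power (W/kg) is unchanged
```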

Types of and Reasons for Fluid Fuels

Fluid fuels fall into three broad categories: liquids, vapors, and gasses. There are intermediate steps, and hybrids between various phase states of fuel, but these three broad categories are useful. While liquid fuels are fairly self-explanatory (a liquid state fissile material is used to fuel the core, often uranium carbide mixed with other carbides, or U-Mo, but other options exist), the vapor and gas concepts are far less straightforward overall. The vapor core has two major variants: discrete liquid droplets, or a low pressure, relatively low temperature gaseous suspension similar to a cloud. The gas core could be more appropriately called a “plasma core,” since these are very high temperature reactors, which either mechanically hold the plasma in place, or use hydrodynamic or electrodynamic forces to hold the plasma in place.

However, they all have some common advantages, so we’ll look at them as a group first. The obvious reason for using non-solid fuel, in most cases, is that they are generally less thermally limited than solid fuels are (with some exceptions). This means that higher core temperatures, and therefore higher exhaust velocity (and specific impulse) can be achieved.

Convection pattern in radiator-type
liquid fuel element, image DOE

An additional benefit to most fluid fueled designs is that the fluid nature of the fuel helps mitigate or eliminate hot spots in the fuel. With solid fuels, one of the major challenges is to distribute the fissile material throughout the fuel as evenly as possible (or along a specifically desired gradient of fissile content depending on the position of the fuel element within the reactor). If this isn’t done properly, whether through a manufacturing flaw or migration of the fissile component as a fuel element becomes weakened or damaged during use, then a hot spot can develop and degrade both the nuclear and mechanical properties of the fuel element, leading to a potentially failed fuel element. If the process is widespread enough, this can damage or destroy the entire reactor.

Fluid fuels, on the other hand, have the advantage that the fuel isn’t statically held in a solid structure. Let’s look at what happens when the fuel isn’t fully homogeneous (completely mixed) to understand this:

  1. A higher density of fissile atoms in the fuel results in more fission occurring in a particular volume.
  2. The fuel heats up through both radiation absorption and fission fragment heating.
  3. The fuel in this volume becomes less dense as the temperature increases.
  4. The increased volume, combined with convective mixing of cooler fuel fluids and radiation/conduction from the surface of the hotter region cools the region further.
  5. At the same time, the lower density decreases the fission occurring in that volume, while it remains at previous levels in the “normally heated” regions.
  6. The hot spot dissipates, and the fuel returns to a (mostly) homogeneous thermal and fissile profile.
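The negative-feedback loop above can be sketched as a toy iteration – every coefficient below is invented, and only the sign of the feedback matters (more density, more fission, more heat, expansion, less density, less fission):

```python
def simulate_hot_spot(steps: int = 50) -> list:
    """Toy negative-feedback loop for a local hot spot in fluid fuel.

    All coefficients are illustrative; this is not a reactor kinetics model,
    just a demonstration that the feedback is self-damping.
    """
    t = 1.10          # local temperature, normalized (start 10% hot)
    history = [t]
    for _ in range(steps):
        density = 1.0 / t        # hotter fluid is less dense
        fission_heat = density   # local fission rate scales with fissile density
        cooling = t              # convective/radiative loss grows with temperature
        t += 0.1 * (fission_heat - cooling)  # relax toward equilibrium
        history.append(t)
    return history

hist = simulate_hot_spot()
print(hist[0], hist[-1])  # the perturbation decays back toward 1.0
```

The same loop run with a positive feedback sign (as can happen in a poorly designed solid fuel element with fissile migration) would diverge instead of settling, which is the failure mode described above.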

In practice, this doesn’t necessarily mean that the fuel is the same temperature throughout the element – this very rarely occurs, in fact. Power levels and temperatures will vary throughout the fuel, causing natural vortices and other structures to appear. Depending on the fuel element configuration, this can be either minimized or enhanced depending on the need of the reactor. However, the mixing of the fuel is considered a major advantage in this sort of fuel.

Another advantage to using fluid fuels (although one that isn’t necessarily high on the priority list of most designs) is that the reactor can be refueled more easily. In most solid fueled reactors, the fissile content, fission poison content, and other key characteristics are carefully distributed through the reactor before startup, to ensure that the reactor will behave as predictably as possible for as long as possible at the desired operating conditions. In terrestrial solid reactors, refueling is a complex, difficult process, which involves moving specific fuel bundles in a complex pattern to ensure the reactor will continue to operate properly, with only a little bit of new fuel added with each refueling cycle.

PEWEE Test stand, image courtesy DOE

There were only two refuelable NTR testbeds in the US Rover program: Pewee and the Nuclear Furnace. Both of these were designed as fuel element development apparatus, rather than functional NTRs (although Pewee managed to hit the highest Isp of any NTR tested in Rover without even trying!), so this is a significant difference. While it’s possible to refuel a solid core NTR, especially one such as the RD-0410 with its discrete fuel bundles, the likely method would be to replace the entire fueled portion of the reactor – not the best option for ease of refueling, and one that would likely require a drydock of sorts to complete the work. To give an example, even the US Navy doesn’t always refuel its reactors, opting for long-lived highly enriched uranium fuel which will last for the life of the reactor; if a ship needs to be refueled, the reactor is removed and replaced whole in most cases. This reluctance to refuel solid core reactors is likely to persist in astronuclear reactors for the indefinite future, since placing the fuel elements is a complex process that requires a lot of real-time analysis of the particulars of the individual fuel elements and reactors (in Rover this was done at the Pajarito Site in Los Alamos).

Fluid fuels, though, can be added or removed from the reactor using pumps, compressed gasses, centrifugal force, or other methods. While not all designs have the capability to be refueled, many do, and some even require online fuel removal, processing and reinsertion into the active region of the core to maintain proper operation. If this is being done in a microgravity environment, there will be other challenges to address as well, but these have already been at least partially addressed by on-orbit experiments over the decades in the various space programs. (Specific behaviors of certain fluids will likely need to be experimentally tested for this particular application, but the basic physics and engineering solutions have been researched before).

Finally, fluid fuels also allow for easier transport of the fuel from one location to another, including into orbit or to another planet. Rather than a potentially damageable solid pellet, rod, prism, or ribbon, which must be carefully packaged to prevent not only damage but accidental criticality, fluids can be transported with far less risk: ensure that accidental criticality can’t occur, that the fluid is chemically compatible with the vessel carrying it, and that the package is strong enough to survive an accident, and the problem is solved. If chemical processing and synthesis are available wherever the fuel is being sent (likely, if extensive and complex ISRU is being conducted), then the fuel doesn’t even need to be in its final form: more chemically inert options (UF4 and UF6 can be quite corrosive, but are easily managed with current materials and techniques), or less fissile-dense options (to further reduce the chance of accidental criticality) can be used as fuel precursors, and the final fuel form can be synthesized at the fueling depot. This may not be necessary, or even desirable, in most cases, but the option is available.

So, while solid fuels offer certain advantages over fluid fuels, their greater delicacy (thermal, chemical, and mechanical) makes fluid fuels a very attractive option. Once NTRs are in use, it is likely that research into fluid fueled NTRs will accelerate, making these “advanced” systems a reality.

Fuel Elements: An Overview

Now that we’ve looked at the advantages of fluid fuels in general, let’s look at the different types of fluid fuels and the proposals for the form the fuel elements in these reactors would take. This will be a brief overview of the various types of fuels, with more in-depth examinations coming up in future blog posts.

Liquid Fuel

A liquid fueled reactor is the best known type popularly, although the most common variety (the molten salt reactor) uses either fluoride or chloride salts, both of which are very corrosive at the temperatures an NTR operates at. While I’ve heard arguments that the extensive use of regenerative cooling can address the thermal limitations involved, this would still remain a major problem for an NTR. Another liquid fuel type, the molten metal reactor, has also been tested, using highly corrosive plutonium fuel in the best known case (the Los Alamos Molten Plutonium Reactor Experiment, or LAMPRE, run by Los Alamos Scientific Laboratory from 1957 to 1963, covered very well here).

Early bubbler-type liquid NTR, Barrett 1963

The first proposal for a liquid fueled NTR was in 1954, by J. McCarthy in “Nuclear Reactors for Rockets.” This design spun molten uranium carbide to produce centrifugal force (a common characteristic of liquid NTRs of all designs), and passed the propellant through a porous outer wall, through the fuel mass, and into the central void in the reactor before it was ejected out of the nozzle. The main problem with this reactor was that the tube was simply too large to allow for as much heat transfer as was ideal, so the next evolution of the design broke the single large spinning fuel element up into several thinner ones of the same length, increasing the total surface area available for heating the propellant. This work was conducted at Princeton, and would continue on and off until 1973. These designs I generally call “bubblers,” due to the propellant flow path.

Princeton multi-fuel-element bubbler, Nelson et al 1963

One problem with these designs is that the fuel would vaporize in the low pressure hydrogen environment of the bubbles, and significant amounts of uranium would be lost as the propellant went through the fuel. Not only is uranium valuable, but it’s heavy, reducing the exhaust velocity and therefore the specific impulse. Another issue is that there are hard limits to how much propellant can be passed through the fuel at any given time before it starts to splatter, directly tying thrust to fuel volume. 

In order to combat this, a team at NASA’s Lewis Research Center decided to study the idea of passing the propellant only through the central void in the fuel, allowing radiation to be the sole means of heating the propellant. Additional regenerative cooling structures were needed for this design, and ensuring the propellant got heated sufficiently was a challenge, but this sort of LNTR, the radiator type, became the predominant design. Vapor losses of the uranium were still a problem, but were minimized in this configuration.

It too would be cancelled in the late 1960s, but it was briefly revived by a team at Brookhaven National Laboratory in the early 1990s for possible use in the Space Exploration Initiative; however, this program was not selected for further development.

Despite these challenges, liquid core NTRs have the potential to reach above 1300 s isp, and a T/W ratio of up to 0.5, so there is definite promise in the concept.

Droplet/Vapor Fuel

Picture a spray bottle, the sort used for household plants, ironing, or cleaning products like window cleaner. When the trigger is pulled, a fine spray of liquid exits the nozzle, containing a mix of liquid and gas. Using a similar system to mix liquids and gasses is possible in a nuclear reactor, and is called a droplet core NTR. This reactor type is useful in that there’s an incredible amount of surface area available for heat transfer into the propellant, but unfortunately it also means that separating the fuel droplets from the propellant before it leaves the nozzle (as well as preventing the fuel from coating the reactor core walls) is a major hydrodynamics challenge in this type of reactor.

Vapor core NTR, Diaz et al 1992

The other option is to use a vapor as fuel. A vapor is a substance in a gaseous state below the critical point of the material – i.e. it can still be condensed back into a liquid. One interesting property of a vapor is that it can be condensed or evaporated – changing the phase state of the substance without changing its temperature – which could be a useful tool for reactor startup. The downside of this type of fuel is that it has to be kept in an enclosed vessel in order to maintain the vapor state.

So why is this useful in an NTR? Despite the headaches we’ve just (briefly, believe it or not) discussed in the liquid fuels section, liquid fuel has a major advantage over gaseous fuel (our next section): the liquid phase is far better at containing its constituent parts than the gas phase is, due to the higher interatomic bond strength. At the same time, maintaining a large, liquid body can be a challenge, especially in the context of complex molecular structures in some of the most chemically difficult elements known to humanity (the actinides and transuranics). If the liquid component is small, though, it’s far easier to manage the thermal distribution, as well as offering greater thermal diffusion options (remember, the heat IN the fissile fuel needs to be moved OUT of it, and into the propellant, which is a direct function of available surface area).

The droplet core NTR offers many advantages over a liquid fuel in that the large-scale behavior of the liquid fuel isn’t a concern for reactor dynamics, and the aforementioned high surface area offers awesome thermal transfer properties throughout the propellant feed, rather than being focused on one volume of the propellant.

Vapors offer a middle ground between liquids and gasses: the fissile fuel itself is in suspension, meaning that the individual molecules of fissile fuel are able to circulate and maintain a more or less homogeneous temperature. 

This is another design concept that has seen very little development as an NTR (although NEP applications have been investigated more thoroughly, something whose applications and complications for an NTR we’ll discuss in the future). In fact, I’ve only ever been able to find one design of each type intended for NTR use (and a series of evolving designs for NEP): the appropriately named Droplet Core Nuclear Rocket (DCNR) and the Nuclear Vapor Thermal Reactor (NVTR).

Droplet Core NTR, Anghaie et al 1992

The DCNR was developed in the late 1980s based on an earlier design from the 1970s, the colloid core reactor. The original design used ultrafine microparticles of U-C-Zr carbide fuel, which would be suspended in the propellant flow. This sort of fuel is something that we’ll look at more when covering gas core NTRs (metal microparticles are one of the fuel types available for a GCNTR), but the use of carbides increases the fuel failure temperature to the point that structural components would fail before the fuel itself would, leading to what could be called an early pseudo-dusty plasma NTR. The droplet core NTR took this concept, and applied it to a liquid rather than solid fuel form. We’ll look at how the fuel was meant to be contained before exiting the nozzle in the next section, but this was the main challenge of the DCNR from an engineering point of view.

The NVTR was a compromise design based on NERVA fuel element development with a different fissile fuel carrier. Here, the fuel (in the form of UF4) is contained within a carbon-carbon composite fuel element in sealed channels, with interspersed coolant channels to manage the thermal load on the fuel element. While significant thrust-to-weight ratio improvements were possible, and (in advanced NTR terms) modest specific impulse gains were possible, the design didn’t undergo any significant development. We’ll cover containment in the next section, and other options for architectures as well.

Gas Fuel

Finally, there are gas core NTRs. In these, the fuel is in gaseous form, allowing for the highest core temperatures of any core configuration. Due to the very high temperatures of these reactors, the uranium (and in general the rest of the components in the fuel) become ionized, meaning that a “plasma core” is as accurate a description as a “gas core” is, but gas remains the convention. The fuel form for a gas core NTR has a few variants, with the most common being UF6, or metal fuel which vaporizes as it is injected into the core. Due to the high temperatures of these reactors, the UF6 will often break down as all of the constituent molecules become ionized, meaning that whatever structures will come in contact with the fuel itself (either containment structures or nozzle components) must be designed in such a way to prevent being attacked by high temperature fluorine ions and hydrofluoric acid vapors formed when the fluorine ions come in contact with the propellant.

Containing the gas is generally done in one of three ways: by compressing the gas mechanically in a container; by holding it in the middle of the reactor using the pressure of the propellant being injected into the core; or by using electromagnets to contain the plasma, similarly to how a spherical tokamak operates. The first concept is a closed cycle gas core (CCGCNTR, or GC-C), while the second two are called open cycle gas core NTRs (OCGCNTR or GC-O). The closed cycle physically contains the fuel, preventing fission products, unburned fuel, and the previously mentioned free fluorine from exiting in the exhaust plume of the reactor. The open cycle’s largest design problem is that the vast majority (often upwards of 90%) of the uranium ends up being stripped away from the plasma body before it undergoes fission – a truly hot radioactive mess which you don’t want to use anywhere near anything sensitive to radiation, and an insanely inefficient use of fissile material. There are many other designs and hybrids of these concepts, which we’ll cover in the gas core NTR series; we’ll look briefly at the containment challenges below.

Fluid Fuel Elements: Containment Strategies

Fluid fuels are, well, fluid. Unlike with a solid fuel element, as we’ve looked at in the past, a fluid has to be contained somehow. This can be in a sealed container or by using some outside force to keep it in place.

Another issue with fluid fuels can be (but isn’t always) maintaining the necessary density to achieve the power requirements for an NTR (or any astronuclear system, for that matter). All materials expand when heated, but with fluids this change can be quite dramatic, especially in the case of gas core NTRs. Because of this, careful design is required in order to maintain the high density of fissile fuel necessary to make a mass-efficient rocket engine possible.
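To put rough numbers on that density problem, here's a back-of-the-envelope ideal-gas sketch. The pressure and temperatures are illustrative assumptions only, and dissociation of UF6 at plasma temperatures would drive the density down even further:

```python
# Rough ideal-gas sketch of how strongly gaseous fuel density falls with
# temperature at constant pressure (illustrative numbers, not a design point).
R = 8.314          # J/(mol*K), universal gas constant

def gas_density(pressure_pa, molar_mass_kg, temp_k):
    """Ideal-gas density: rho = P*M / (R*T)."""
    return pressure_pa * molar_mass_kg / (R * temp_k)

M_UF6 = 0.352      # kg/mol, uranium hexafluoride
P = 5.0e6          # Pa (~50 atm), an assumed operating pressure

rho_cold = gas_density(P, M_UF6, 300.0)     # near room temperature
rho_hot  = gas_density(P, M_UF6, 6000.0)    # deep in the plasma regime
# (at 6000 K UF6 has long since dissociated, which only lowers density further)

print(f"{rho_cold:.0f} kg/m^3 at 300 K vs {rho_hot:.1f} kg/m^3 at 6000 K")
print(f"density falls by a factor of {rho_cold / rho_hot:.0f}")
```

At fixed pressure the density simply scales as 1/T, so holding a high fissile density at tens of thousands of kelvin forces either very high pressures or very clever geometry.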

This leads to a rather obvious conclusion: rather than the fuel element being a physical object, in a fluid fueled NTR the fuel element is a containment structure. Depending on the fuel type and the reactor architecture, this can take many forms, even in the same type of fuel. This will be a long-ish review of the proposed fuel containment strategies, and how they impact the performance of the reactors themselves.

One thing to note about all of these reactor types is that 235U is not required to be the fissile component in the fuel; in fact, many gas core designs use 233U instead, due to its lower critical mass requirements (according to most Russian literature on gas core NTRs, this reduces the critical mass requirement to roughly a third). Another option is 242mAm, a long-lived metastable isomer of 242Am, which has the lowest critical mass of any fissile fuel. By using these fuels rather than the typical 235U, either less of the fuel mass needs to be fissile (in the case of a liquid fueled NTR), or less fuel in general is needed (in the case of vapor/gas core NTRs). This can be a double-edged sword in systems with high fuel loss rates (like an open cycle gas core), which would require more robust and careful fuel management strategies to prevent power transients due to fuel level variations in the active zone of the reactor, but the overall reduction in fuel requirements means that there’s less fuel to lose. Many other fissile fuels exist, but generally speaking either short half-lives, high spontaneous fission rates, or manufacturing expense has prevented them from being extensively researched.
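For a sense of scale, here are approximate bare-sphere critical masses from the open literature – a crude metric, since a reflected or moderated reactor needs far less, and published figures for 242mAm in particular vary widely:

```python
# Approximate bare-sphere critical masses (kg) from open literature; actual
# reactor requirements depend heavily on geometry, reflection, and moderation.
bare_sphere_kg = {
    "U-235":   52.0,
    "U-233":   15.0,
    "Pu-239":  10.0,
    "Am-242m":  9.0,   # published estimates for this isomer vary widely
}

baseline = bare_sphere_kg["U-235"]
for isotope, mass in bare_sphere_kg.items():
    print(f"{isotope:>8}: {mass:5.1f} kg  ({mass / baseline:.0%} of U-235)")
```

The 233U figure lands near 29% of the 235U value, which is consistent with the "roughly a third" claim in the Russian literature.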

Let’s look at each of the design types in general, with a particular focus on gas core NTRs at the end.

Liquid FE

For liquid fuels, there’s one universal option for containing the fuel: spinning the fuel element. Beyond that, there are two main camps on how a liquid fueled NTR interacts with the propellant. The original design, first proposed in the 1950s and researched at least through the 1960s, used either one or several spinning cylinders with porous outer walls (frits), through which the propellant would be injected into the reactor’s active region. For those that remember the Dumbo reactor, this may be familiar as a folded flow NTR, and it does two things: first, it allows the area surrounding the fuel elements to be kept at very low temperatures, allowing the use of ZrH and other thermally sensitive materials throughout the reactor; second, it increases the heat transfer area available from the fuel to the propellant. Experiments (using water as a uranium analog) were conducted to study the basics of bubble behavior in a spinning fluid, to estimate fuel mass loss rates, and to gauge the impact of evaporation or vaporization of various forms of uranium (including U metal, UC2, and others).

The second concept is the radiator type LNTR. Here, rather than the folded flow used previously, axial flow is used: the H2 first cools the reactor structures (including the nozzle), passing from the nozzle end to the ship end, and is then injected through the central void of each of the fuel elements before exiting the nozzle. This design reduces the loss of fuel mass due to bubbling in the fuel, but adds an additional challenge: it severely reduces the amount of surface area available for heat transfer from the fuel to the propellant. To mitigate this, some designs propose seeding the propellant with microparticles of tungsten, which would absorb the significant amount of UV and X-rays coming off the fuel and re-emit it as IR radiation, which is more easily absorbed by the hydrogen. At the designed operating temperatures, this reactor would dissociate the majority of the H2 into monatomic hydrogen, increasing the specific impulse significantly.
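The specific impulse gain from dissociation falls out of the ideal-rocket scaling Isp ∝ √(T/M): halving the propellant's molar mass at the same chamber temperature buys roughly a 41% improvement. A quick sketch, ignoring the energy cost of dissociation and any recombination in the nozzle:

```python
import math

# Ideal-rocket scaling: exhaust velocity (and so specific impulse) goes as
# sqrt(T / M) for chamber temperature T and propellant molar mass M.
def isp_ratio(molar_mass_before, molar_mass_after):
    """Isp gain from a molar-mass change at fixed chamber temperature."""
    return math.sqrt(molar_mass_before / molar_mass_after)

# H2 (2.016 g/mol) fully dissociated into monatomic H (1.008 g/mol):
gain = isp_ratio(2.016, 1.008)
print(f"full dissociation at fixed T: Isp x {gain:.2f} (~{gain - 1:.0%} gain)")
```

Real gains are smaller, since dissociation soaks up thermal energy and partial recombination occurs in the nozzle, but the direction and rough magnitude of the effect hold.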

In all these designs, there is no solid clad between the fuel itself and the propellant, since a clad would limit the fuel element’s peak temperature to whatever the clad could survive without melting. Some early LNTR designs used a mix of molten UC2 and ZrC/NbC as a fuel element, with the ZrC meant to migrate to the upper areas of the fuel element and not only provide neutron moderation but reduce the amount of erosion from the propellant. It may be possible to use a liquid metal clad as a barrier to prevent mass erosion of the fissile fuel in a metal fueled reactor as well, and possibly even add some neutron moderation for the fuel element itself. However, the material would need not only a very high boiling point, high thermal conductivity, low reactivity with both hydrogen and the fuel, and a low neutron capture cross section, but also a low vapor pressure, to keep it from being stripped away by the propellant flow (although I suppose adding additional clad during the course of operation would also be an option, at the cost of higher propellant mass and therefore lost specific impulse).

Droplet/Vapor FE

Now let’s look at the vapor core NTR.

NVTR fuel element, Diaz et al 1992

Containing the UF4 vapor in the NVTR is done by using sealed tubes embedded in a fuel element, surrounded by propellant channels to carry away the heat. Two configurations were proposed in the NVTR concept: the first used a large central cavity, sealed at both ends, to contain the vapor, and the second dispersed the fuel cylinders in an alternating hexagonal pattern throughout the fuel element. The second option provides a more even thermal distribution, not only within the fuel element itself, but across the entire active zone of the reactor core.

Droplet core NTRs are very different in their core structure. Rather than isolating the fissile fuel in multiple discrete areas, the droplet core sprays droplets of fissile fuel into a large cylinder, which is spun to induce centrifugal force. The fuel is kept away from the walls of the reactor core by a collection of high-pressure H2 jets, which inject the propellant into the fuel suspension and maintain hydrostatic containment of the fuel. The last section of the reactor core, instead of using hydrogen, injects a liquid lithium spray to bind with the uranium, which is then carried to the walls of the reactor in the absence of the tangential force. The fuel is then recirculated to the top of the reactor vessel, where it is once again injected into the core.

This hydrostatic equilibrium concept is very similar to how many gas core NTRs operate (which we’ll look at below), and has proven to be the biggest Achilles’ Heel of these sorts of designs. While it may be theoretically possible to do this (the lower temperatures of the droplet core allow for collection and recirculation, which may provide a means of fissile fuel loss reduction), many of the challenges of the droplet core are very similar to that of the open cycle gas core, a far more capable engine type.

Gas Core

Gas core containment is possibly the most complex topic in this post, due to the sheer variety of possible designs and extreme engineering requirements. We’ll be discussing the different designs in depth in upcoming blog posts, but it’s worth doing an overview of the different designs, their strengths and weaknesses, here.

Closed Cycle

One half of the lightbulb configuration, McLafferty et al 1968

The simplest design to describe is the closed cycle gas core, which in many ways resembles a vapor core NTR. In most iterations, a sealed cylinder with a piston at one end (similar in many ways to the piston in an automobile engine) is filled with UF6 gas. This is compressed in order to reach critical geometry, and fission occurs in the cylinder. The walls of the cylinder are generally made out of quartz, which is transparent to the majority of the radiation coming off the fissioning uranium, and is able to resist the fluorination from the gas (other options include silicon dioxide, magnesium oxide, and aluminum oxide). Additionally, while the quartz will darken under the heat, the radiation actually “anneals” it to keep it transparent, and coolant is run through the cylinder walls to keep the material within thermal limits. A vortex is induced during fission which, when properly managed, also keeps the majority of the uranium (now in a charged state) from coming in contact with the walls of the chamber, reducing the thermal load on the material. Some designs have used pressure waves in place of the piston to induce fission, but the fluid-mechanical result is very similar. The result is a lightbulb-like structure, hence the common nickname “nuclear lightbulb.” One variation mentioned in Russian literature also uses a closed uranium loop, circulating the fissile fuel to minimize fission product buildup and maintain the fissile density of the reactor.

The main advantage to these types of designs is that all fission products and particle radiation are contained within the bulb structure, meaning that fission product and radiation release into the environment is eliminated, with only gamma and x-ray radiation during operation being a concern. However, due to the fact that there’s a solid structure between the fuel element and the propellant, this engine is thermally limited more than any other gas core design, and its performance in both thrust and specific impulse suffers as a result.

Open Cycle

The next very broad category is the open cycle gas core. Here, there is usually no solid structure between the fissioning uranium and the propellant, meaning that core temperatures can reach astoundingly high levels (sometimes limited only by the melting temperature of the materials surrounding the active reactor zone, such as reflectors and pressure vessel). Sadly, this also means that actually containing the fuel is the single largest challenge in this type of reactor, and the exhaust tends to be incredibly radioactive as a result. On the plus side, this sort of rocket can achieve specific impulses in the tens of thousands of seconds (similar to or better than electric propulsion) while also achieving high thrust.

Perhaps the easiest way to make a pure open cycle gas core NTR is to allow the fuel and the propellant to fully mix, similarly to the droplet core NTR, and ensure all (or most) of the fissile fuel is burned before leaving the rocket nozzle. Insanely radioactive, sure, but with complete mixing of the fissioning atoms and the propellant, the most efficient transfer of energy is theoretically possible. However, the challenge of fully fissioning the fuel in such a short period of time is significant, and I can’t find any evidence of significant research into this type of gas core reactor.

Due to the challenges of burning the fissile fuel completely enough during a single pass through the reactor, though, it is generally considered necessary to maintain a more stable fissile structure within the reactor’s active region. Maintaining this sort of structure is a challenge, but is generally done through gasdynamic effects: the propellant injected into the reactor is used to push the fuel back into the center of the reactor. This involves a porous outer wall of the reactor, through which the hydrogen propellant is injected at high enough pressure, and at evenly enough spaced intervals, to counterbalance both the tendency of the plasma to expand until it can no longer undergo fission and the tendency of the fuel to leave the nozzle before being burned.

Soviet-type Vortex Stabilized open cycle, image Koroteev et al 2007

The next way is to create a low pressure stagnant area in the center of the core, which will contain the fissile fuel. In order to maintain this type of pressure differential, a solid structure is usually needed, generally made out of a high temperature refractory metal. In a way this is a hybrid closed/open cycle gas core (even though the plasma isn’t in direct contact with the structure of the reactor itself), because the structure itself is key to generating this low pressure zone necessary for maintaining this plasma body fuel element. This type of NTR has been the focus of Russian gas core research since the 1970s, and will be covered more in the future.

Spherical gas core diagram, image NASA

As I’m sure most of you have guessed, fuel containment is a very complex and difficult problem, and one that’s had many solutions over the years (which we’ll cover in a future post). Most recent gas core NTR designs in the US are based on the spherical gas core. Here, the plasma is held in the center of the active zone using jets of propellant from all sides. This is generally called a porous wall gas core NTR, and while it takes advantage of any vortex stabilization that may occur in the fuel, it does not rely on it; in many ways, it’s a lot like an indoor skydiving arena with air jets blowing from all sides. This design, first proposed in the 1970s, uses high pressure propellant to contain the fuel in the reactor, and in many designs the flow can be adjusted to deal with the engine being under thrust, pushing the fuel toward the nozzle in traditional design configurations. Most designs suffer from massive erosion of the fuel by shear forces from the propellant eroding the fuel from the outside edge, but in some conceptual sketches this can be gotten around using non-traditional nozzle configurations which have a solid structure along the main thrust axis of the rocket. (More on that in a future post. I’m still trying to track down the sources to fully explain that pseudo-aerospike concept).

Hybrid gas core diagram, Beveridge 2017

The most promising designs as far as fuel loss rates go minimize the amount of plasma required to maintain the reaction. This is what’s known as a hybrid solid-gas NTR, first proposed by Hyland in the 1970s, and also the design most recently investigated by Lucas Beveridge. Here, the fissile fuel is split between two components: the high-temperature plasma fuel is used for final heating of the propellant, but isn’t able to sustain fission independently. Instead, a shell of solid fuel encases the outside of the active zone of the reactor. This minimizes the amount of fuel that can be easily eroded while ensuring that a critical mass of fissile material is contained in the active region of the reactor. This really is less complicated than it sounds, but is difficult to summarize briefly without delving into the details of critical geometry, so I’ll try to explain it this way: the neutrons in the reactor see the interior as a high-density, low-temperature fuel region surrounding a low-density, high-temperature fuel region, with the coolant/moderator passing through the high-density region and flowing around the low-density one. Together these parts make a complete reactor while minimizing how much of the low-density fuel is needed, and therefore minimizing the fuel loss. I wish I was able to make this more clear in less than a couple pages, but sadly I’m not that good at summarizing in non-technical terms. I’ll try and do better on the hybrid core post coming in the future.

All of these designs suffer from massive fuel loss, leading to highly radioactive exhaust and incredibly inefficient engines which are absurdly expensive to operate due to the amount of highly enriched fissile fuel needed. (Because everything going into the reactor needs to fission as quickly as possible, every component of the fuel itself needs to undergo fission as easily as possible.) This is the major Achilles heel of this NTR type: despite the massive potential promise, the fuel loss, and radioactive plume coming off these reactors, make them unusable with current engineering.

There’s going to be a lot more that I’m going to write about this type of NTR, and I skipped a lot of ideas, and variations on these ideas, so expect a lot more in the coming year on this subject.

Cooling the Reactor/Heating the Propellant

Finally there’s cooling, which usually comes in one of two varieties:

  1. cooling using the propellant, as in most NTR designs that we’ve seen, to reject all the heat from the reactor
  2. cooling in a closed loop, as is done in an NEP system
Hybrid gas core with secondary cooling diagram, Beveridge 2017

While the ideal situation is to reject all the heat into the propellant, which maximizes the thrust and minimizes the dry mass of the system, this is the exception in many of these systems rather than the norm. There are a couple of reasons for this: containing a fluid against fast-moving (or high pressure) hydrogen is challenging, because the gas wants to strip away whatever mass it comes in contact with (far easier with a fluid than a solid); H2 is insanely difficult to contain at almost any temperature; and these reactors are designed to achieve incredibly high temperatures, which can outstrip the available heat rejection area that the reactor designs allow.

Complicating the issue further, hydrogen is mostly transparent to the radiation that a nuclear reactor puts off (mostly in the hard UV/X/gamma spectrum), meaning that it takes a lot of hydrogen to reject the heat produced in the reactor (a common complaint in any gas-cooled reactor, to be fair), and that hydrogen doesn’t get heated that much on an atom-by-atom basis, all things considered.

There’s a way around this, though, which many designs, from LARS on the liquid side to basically every gas core design I’ve ever seen, use: microparticle or vapor seeding. This is a form of hybrid propellant, which I mention in my NTR propellants page. Basically, a metal is ground incredibly fine (or is vaporized), and then included in the propellant feed. This captures the short-wavelength photons (thanks to the metal’s higher atomic mass, and its greater opacity at those wavelengths as a result), which are re-emitted at a lower frequency that is more easily absorbed by the propellant. While the US prefers tungsten microparticles in its designs, the USSR and Russia have also examined two other seeding materials: lithium and NaK vapor. These have the advantage of lower mass, impacting the overall propellant mass less, and their insertion rates are far easier to control (although microparticles can act as fluidized materials due to their small size, and maintain suspension in the H2 propellant well). This is a subject that I’ll cover in more depth in the future in the gas core NTR post.
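A Beer-Lambert sketch shows why even a few percent of seedant by mass transforms the picture; the opacity values below are purely illustrative assumptions, not measured data:

```python
import math

# Beer-Lambert sketch of why seeding helps: the fraction of thermal radiation
# absorbed over a path length L is 1 - exp(-kappa * rho * L), for mass opacity
# kappa and flow density rho. All values here are illustrative assumptions.
def fraction_absorbed(kappa_m2_per_kg, density_kg_m3, path_m):
    return 1.0 - math.exp(-kappa_m2_per_kg * density_kg_m3 * path_m)

rho_h2 = 1.0     # kg/m^3, assumed propellant density in the channel
path   = 0.5     # m, assumed optical path length

clean  = fraction_absorbed(0.1,  rho_h2, path)          # nearly transparent H2
seeded = fraction_absorbed(40.0, rho_h2 * 1.05, path)   # +5% tungsten by mass

print(f"unseeded H2 absorbs {clean:.1%}; seeded flow absorbs {seeded:.1%}")
```

Because the absorbed fraction depends exponentially on opacity, a small mass of a strongly absorbing seedant takes the flow from nearly transparent to effectively opaque.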

(Side note: I’ve NEVER seen data on non-hydrogen propellants in a liquid-fueled NTR. This problem would be somewhat ameliorated by using a higher atomic mass propellant, but which one is used would determine both how much more radiation would be directly absorbed and what kind of loss in specific impulse would accompany the substitution. Also, using other elements/molecules would significantly change the neutronic structure and hydrodynamic behavior of the reactor, a subject I’ve never seen covered in any paper.)

Sadly, in many designs there simply isn’t the heat capacity to remove all of the reactor’s thermal energy through the propellant stream. Early gas core NTRs were especially notorious for this, with some only able to reject about 3% of the reactor’s thermal energy into the propellant. In order to prevent the reactor and pressure vessel from melting, external radiators were used – hence the large, arrowhead-shaped radiators on many gas core NTR designs.

This is unfortunate, since it directly affects the dry mass of the system, making it not only heavier but less power efficient overall. Fortunately, due to the high temperatures which need to be rejected, advanced high temperature radiators can be used (such as liquid droplet radiators, membrane radiators, or high temperature liquid metal radiators) which can reject more energy in less mass and surface area.
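The payoff of running the radiators hot falls out of the Stefan-Boltzmann law: required area scales as 1/T⁴, so doubling the rejection temperature shrinks the radiator sixteen-fold. A quick sketch with illustrative numbers:

```python
# Stefan-Boltzmann sizing sketch: radiator area needed to reject heat Q at
# temperature T is A = Q / (eps * sigma * T^4). All figures are illustrative.
SIGMA = 5.670e-8   # W/(m^2*K^4), Stefan-Boltzmann constant

def radiator_area_m2(q_watts, temp_k, emissivity=0.9):
    return q_watts / (emissivity * SIGMA * temp_k ** 4)

Q = 100e6   # 100 MW of waste heat, an assumed figure
for T in (750.0, 1500.0):
    print(f"reject 100 MW at {T:.0f} K: {radiator_area_m2(Q, T):,.0f} m^2")
```

The same 100 MW needs roughly sixteen times the area at 750 K as at 1500 K, which is why high-temperature radiator concepts (droplet, membrane, liquid metal) matter so much for these designs.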

Another option, one which I’ve never seen discussed before (with one exception), is the use of a bimodal system. If significant amounts of heat are coming off the reactor, then it may be worth using a power conversion system to convert some of that heat into electricity for an electric propulsion system to back up the pure thermal system. This is something that would have to be carefully considered, for a number of reasons:

  1. It increases the complexity of the system: power conversion system, power conditioning system, thrusters, and support subsystems for each must be added, and each needs extensive reliability testing.
  2. It will significantly increase the mass of the system, so either the thrust needs to be significantly increased or the overall thrust efficiency needs to offset the additional dry mass (depending on the desire for thrust or efficiency in the system).
    1. Knock on mass increases will be extensive, with likely additions being: an additional primary heat loop, larger radiators for heat rejection, main truss restructuring and brackets, additional radiation shielding for certain radiation sensitive components, possible backup power conditioning and storage systems, and many other subsystem support structures.
  3. This concept has not been extensively studied; the only example that I’ve seen is the RD-600, which used a low power mode with an MHD generator that the plasma passed directly through in a closed loop system (more on this system in the future) – obviously not the same type of system being discussed here. The only other similar parallel is the Werka-type dusty plasma fission fragment rocket, which uses a helium-xenon Brayton turbine to provide about 100 kWe for housekeeping and system electrical power. However, that system rejects less than 1% of the total FFRE waste heat.
    1. The proper power conversion system needs to be selected, thruster selection is in a similar position, and other systems would need to go through similar selection and optimization processes. This is made more complex by the need to match the PCS and thermal management to the reactor, which has not been finalized and is currently very inefficient in terms of fissile material use. If a heat engine is used, the quality of the heat is reduced, meaning larger (and heavier) radiators are needed as well.
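To make the trade concrete, here's a toy power budget for a bimodal concept; every figure is an assumption for illustration, not a number from any study:

```python
# Toy power budget for a bimodal concept: of the reactor's thermal power, a
# fraction goes into the propellant, a power conversion system (PCS) taps the
# remainder, and whatever the PCS can't convert must still be radiated away.
# All figures below are assumptions for illustration only.
def bimodal_budget(thermal_w, frac_to_propellant, pcs_efficiency):
    waste_heat   = thermal_w * (1.0 - frac_to_propellant)
    electric     = waste_heat * pcs_efficiency
    to_radiators = waste_heat - electric
    return electric, to_radiators

elec, rad = bimodal_budget(thermal_w=1.0e9,         # 1 GWt reactor
                           frac_to_propellant=0.7,  # 70% into the propellant
                           pcs_efficiency=0.25)     # Brayton-class efficiency
print(f"electric output: {elec/1e6:.0f} MWe, still to radiate: {rad/1e6:.0f} MWt")
```

Even with a respectable conversion efficiency, most of the diverted heat still ends up at the radiators, which is exactly the knock-on mass problem described in the list above.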

Fluid Fuels: Promises of Advanced Rockets, but Many Challenges to Overcome

As we’ve seen in this brief overview of fluid fueled NTRs, the diversity in advanced NTR designs is broad, with an incredible amount of research having been done over the decades on many aspects of this incredibly promising, but challenging, propulsion technology. From the chemically challenging liquid fuel NTR, with several materials and propellant feed challenges and options, to the reliable vapor core, to the challenging but incredibly promising gas core NTR, the future of nuclear thermal propulsion is far more promising than the already-impressive solid core designs we’ve examined in the past.

Coming up on Beyond NERVA, we will examine each of these types in detail in a series of blog posts, and the information both in this post and future posts will be adapted into more-easily referenced web pages. Interspersed with this, I will be working on filling in details on the Rover series of engines and tests on the webpage, and we may also cover some additional solid core concepts that haven’t been covered yet, especially the pebble-bed designs, such as Timberwind and MITEE (the pebble-bed concept is also sometimes called a fluidized bed, since the fuel is able to move in relation to the other pellets in the fueled section of the reactor in many designs, so can be considered a hybrid system in some ways).

With the holiday season, life events, and concluding the project which has kept me from working on here as much as I would have liked, I can’t predict when the next post (the first of three on liquid fueled NTRs) will be published in the coming months, but I’ve already got seven pages written on that post, six on the next (bubblers), and six on the final in that trilogy (radiator LNTR), with another four on vapor cores and about ten pages on the basic physics principles of gas core reactors (which are insanely complex), so hopefully these will be coming in the near future!

As ever, I look forward to your feedback, and follow me on Twitter, or join the Beyond NERVA Facebook page, for more content!


Since I’m covering all of this in more depth later, this is just a short list of references, rather than the more extensive typical one:

Liquid fuels

“Analysis of Vaporization of Liquid Uranium, Metal, and Carbon Systems at 9000 and 10000 R,” Kaufman et al 1966

“A Technical Report on Conceptual Design Study of a Liquid Core Nuclear Rocket,” Nelson et al 1963

“Performance Potential of a Radiant Heat Transfer Liquid Core Nuclear Rocket Engine,” Ragsdale 1967

Vapor and Droplet Core

“Droplet Core Nuclear Reactor (DCNR),” Anghaie 1992

“Vapor Core Propulsion Reactors,” Diaz 1992

Gas Core

“Analytical Design and Performance Studies of the Nuclear Light Bulb Engine,” Rogers et al 1973

“Open Cycle Gas Core Nuclear Rockets,” Ragsdale 1992

“A Study of the Potential Feasibility of a Hybrid-Fuel Open Cycle Gas Core Nuclear Thermal Rocket,” Beveridge 2017


Dragonfly: NASA’s Newest Nuclear Powered Spacecraft

Hello, and welcome back to Beyond NERVA! The blog itself has been quiet for a while for a number of reasons, but the website continues to grow! I’ve added extensive sections on radioisotope power sources and many details of their operation. There are now pages for radioisotope power sources in general (which you can find here), the fuel elements and heating units used in these systems (here), the considerations for fuel selection (here), and the various options for the fuel itself (each of these has its own page: 238Pu, 241Am, and 210Po). I also covered the RTGs of the SNAP era (available here), which evolved concurrently with the SNAP-2, SNAP-10, and SNAP-8 reactors that we’ve already looked at in depth (each of those now has its own page as well, along with the experimental and development reactors associated with the systems – just click on their names to find them!). There are also pages on the Multi-Hundred Watt RTG that powered the Voyager spacecraft, and the General Purpose Heat Source RTG, which powered Galileo, Ulysses, and Cassini, and still powers New Horizons; and as this is released, a page will be coming online about the Multi-Mission RTG, NASA’s current workhorse RTG! Make sure to check out those pages for in-depth information on an ecosystem of power supplies which are fascinating, but often overlooked (including by me) in favor of the flashier, higher-powered fission reactor proposals.

Today, we’re going to look at a particular application of the Multi-Mission RTG, or MMRTG.

Concept model of Dragonfly lander in flight configuration, image JHUAPL

This is the newly announced Dragonfly mission to Titan. This mission will only be the second to touch down on the surface of that Saturnian moon, and promises to transform our understanding of the complex hydrocarbon cycles, liquid water (which is under the surface of Titan), and complex organic chemistry that may hold clues to the early atmospheric conditions of Earth.

Congratulations are very much in order for the Johns Hopkins University Applied Physics Laboratory team for their successful mission concept!

Lander on surface, image JHUAPL

This mission will be primarily based around the quadcopter which has gained the most attention in the recent announcements. Using eight rotors mounted on four motor mounts, this system will charge a set of lithium ion batteries with the output of the MMRTG, and then use this higher-power-density supply to fly in short hops: on each flight it will scout out the landing point for a future flight, then touch down at a position already scouted on a previous flight. There it will deploy a suite of instruments, including a mass spectrometer, a neutron and gamma spectrometer, a meteorological and seismic sensor suite, and a camera suite. There may be others that are yet to be fully determined, and the payload will be refined over the course of the final mission planning (which can begin now that the mission has been selected for funding).

Flying on Titan is far easier than on Earth due to the high atmospheric density (over 4 times higher than Earth's at sea level) and low gravity (1/7 g) of the moon, and as such a host of flying probes have been proposed over the years, from balloons to helicopters to airships to airplanes – and even a radioisotope thermal rocket! Both surface and lake landers have been proposed as well.

As the team pointed out in their proposal, the Dragonfly concept combines the equipment suite and surface science capability of a Curiosity-class lander with the mobility of an aerial platform. This provides the scientific advantages of surface deployment along with the ability to relocate the lander at a far higher rate of travel than a wheeled rover such as Curiosity – a good balance of the strengths of each design type.

Let’s first look at the mission itself, from launch to landing on Titan, followed by the science goals of the lander, and finally we’ll look at the power supply relatively briefly, focusing on the thermal management strategies needed for both the lander and the RTG itself. All of the information here is as of July 2019, so much is still subject to change in the next few years before the launch window.

For a more in-depth look at the MMRTG itself, check out the MMRTG page here!

If you’re looking for a particular aspect of the mission, the links below will jump you to the appropriate section, but as with all missions, each phase or component informs every other one, so by skipping ahead you may miss some interesting tidbits about this incredible mission!

Mission Profile Pre-Titan: Launch, Cruise, and EDL

Launch of Atlas V-541 configuration for Curiosity, November 26, 2010 from Cape Canaveral AFS, image ULA

Dragonfly will launch on an Atlas 541 or equivalent launcher on April 12th, 2025, and conduct a series of planetary flybys to get out to the Saturnian system. This means that a smaller, and therefore less expensive, launcher can be used, saving more funding for the spacecraft and science teams. However, since this launch vehicle (to my knowledge) hasn't been firmly pinned down, differences in the actual launcher – as well as any possible non-optimal launch conditions – could change the exact timing of the gravitational assists listed below.

The first gravitational assist will be from Earth on April 11th, 2026, followed by a Venusian gravitational assist on 4/16/2027, another Earth gravitational assist on 5/27/2028, and a final Earth gravitational assist on 9/3/2031. While there are options to rearrange these gravitational assists, this sequence was selected due to a number of orbital mechanics factors. Sadly, Jupiter will be out of phase with Saturn at the appropriate time, so it's impossible to use the large planet's convenient gravity well to shorten the trip to the Saturnian system.


After this series of four inner-system gravitational assists, Dragonfly will have the necessary dV to get to Titan. A mid-course correction around December 2, 2031 (while the spacecraft is between Mars and Jupiter) will ensure that the spacecraft is oriented correctly for Titan capture.

Fueled MMRTG for Curiosity. External radiator piping seen as silver “U” between blades of radiator. Image DOE/INL

During this time, the MMRTG will use its secondary coolant loop (visible as the silver tubes at the base of the RTG's fins in the above image) combined with a pumped coolant loop similar to the one used on Curiosity's cruise stage, in order to reject the waste heat from the RTG while it's enclosed in the heat shield and cruise stage. It's unclear from the mission documentation whether the RTG will be used to power any instruments during cruise, as Curiosity's particle detection system was. The instrumentation on Dragonfly is significantly different from Curiosity's, and it's not apparent whether any of the instruments would be useful in cruise, or whether operation of the cruise stage would interfere with the potentially useful ones to the point that the data would simply be too messy or corrupted to bother.

Mars Science Laboratory cruise stage thermal management diagram, image NASA/JPL

The cruise stage design is also something that I haven't been able to find, so it's possible (although unlikely, due to increased electrical bus complexity) that the MMRTG could be used to power scientific instruments on the cruise stage itself. While the spacecraft will spend the majority of its time performing gravitational assists in the inner Solar System, where solar panels are practical, the power available from sunlight falls off with the square of the distance from the Sun as the spacecraft heads outward. This could likely be handled by battery power on board the cruise stage, or possibly by the MMRTG. However, the added complexity of connecting the electrical system to both the cruise stage and the lander may be enough for the mission planners to decide against that route.

Entry, Descent, and Landing for Dragonfly

Finally, at the close of 2034 (12/30), the spacecraft will reach Titan, and perform its entry, descent and landing procedures.

For those that remember Curiosity's landing, it was (rightly) touted as the "Seven Minutes of Terror," involving a huge number of complex and risky maneuvers by a collection of sometimes exotic craft to land in Gale Crater. These included a set of hypersonic parachutes designed for the thin Martian atmosphere, followed by an eight-engined "sky crane," which hovered meters off the surface of Mars to lower the rover on a winch and cable system (during which the rover's wheel and bogie drive system deployed for the first time), placed the rover on the surface, disconnected the cables, and then flew away to crash at a safe distance. This approach was driven by the combination of thin atmosphere, reasonably high gravity, and the need to keep dust and debris from collecting on the rover (to protect the wet and dry lab sample collection system on its deck from contamination). It was a stunning series of firsts in an EDL system, one which rightly received worldwide attention – and prompted NASA to repeat the process for Mars 2020.

Titan, on the other hand, is a whole different ballgame. Its thick atmosphere and low gravity make for a very different EDL environment, one that is in many ways easier to deal with but imposes its own set of constraints. Because the gravity is so low (1/7 of Earth's), Titan's sensible atmosphere extends far higher than even Earth's does. In fact, even though the atmospheric density at the surface is 4 times that of Earth's, the surface pressure is only slightly higher (1.4x). With a far lower vehicle weight (weight = mass x gravity), far better lift, and onboard flight capability, the sky-crane maneuver isn't necessary on Titan.
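The aerodynamic advantage can be made concrete with ideal rotor momentum theory, under which hover power scales as thrust to the 3/2 power divided by the square root of air density. This sketch uses only the ratios quoted above (1/7 gravity, 4x density) for a hypothetical vehicle of identical mass and rotor area on both worlds; it is an idealized estimate, not a Dragonfly performance figure.

```python
import math

# Momentum-theory ideal hover power: P = T**1.5 / sqrt(2 * rho * A)
# For the same vehicle (same mass m, same rotor disk area A),
# thrust T = m * g, so P scales as g**1.5 / sqrt(rho).

g_ratio = 1.0 / 7.0      # Titan surface gravity relative to Earth's
rho_ratio = 4.0          # Titan atmospheric density relative to Earth sea level

power_ratio = g_ratio**1.5 / math.sqrt(rho_ratio)
print(f"Ideal hover power on Titan vs Earth: {power_ratio:.3f}")
print(f"i.e. roughly {1/power_ratio:.0f}x less power to hover")
```

The roughly 40-fold reduction in ideal hover power is why rotorcraft, balloons, and airplanes have all looked so attractive for Titan.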

The beginning will look similar: the cruise stage is outwardly very similar to the MSL cruise stage, and will also be ejected as the craft begins to enter the atmosphere. A set of parachutes will slow the craft further, until an optimal velocity is achieved above the surface of Titan. During this time, it’s likely that the radar designed to map the Titanic surface in flight will be used to verify the lander’s location over the surface and find an acceptable landing location for its first touchdown. Based on the typical flight profile of Dragonfly, and depending on the power level in the batteries (which would likely be fully charged) compared to how much power it will take to get to a safe landing location, the next flight’s landing location may be scouted as well.

This video from JHUAPL shows the EDL process for Dragonfly:


The first landing location (as well as all subsequent ones) is dependent on communication with Earth. Dragonfly is designed to communicate directly with Earth, which is standard for outer solar system missions due to the lack of a communications architecture in a useful location (the one exception is the Huygens probe, also deployed to Titan, which used the Cassini spacecraft as a communications relay). This places certain limits on where Dragonfly can deploy on the Titanic surface, and also on where the lander can move across the surface during its ground science mission – but more on that in the surface mission section.


The mission planners for Dragonfly have selected a similar landing latitude and season as the Huygens mission’s landing in order to maximize the knowledge of the atmospheric conditions for the initial, riskiest portion of the lander’s atmospheric operations. This also maximizes communications availability with Earth and works well with orbital mechanical constraints upon entering the Saturnian system and aerocapture by Titan itself.

After the lander is safely on the surface of Titan, it will deploy its communications dish, send data back to Earth, and begin surface science experiments. This will be the start of the Dragonfly surface mission, which will last a number of years – until either a failure occurs on the lander which prevents its further operation, or the MMRTG degrades to the point that communication with Earth is no longer possible (the science equipment takes less power than the communications do, so even if Dragonfly isn’t able to fly it can still provide valuable scientific data – if it can phone home).

This marks the beginning of the mission's real purpose, which will be explored in the next section.

Titan Surface Mission and Flight Profile

Titan is a fascinating place. One of the coldest surfaces in the Solar System, complex hydrocarbon cycles which in many ways mimic the hydrological cycles of Earth (but with a mix of liquid hydrocarbons rather than water), and a chemical profile that may resemble Earth's at the beginning of the evolution of life all make it a compelling place to conduct science, with implications reaching not only far back into Earth's past, but also forward to the future of humanity in the Solar System.

For an in-depth look at the scientific appeal of Titan, I highly recommend reading Ralph Lorenz's "The Exploration of Titan," available here:

The surface mission of Dragonfly largely comes in two major phases: landed science instruments and communications, and movement. While some data is collected in flight (mostly imaging), the in-depth data collection and transmission are done on the surface of the moon. This matters because, while the MMRTG is the best power supply available for this mission, it can't deliver power at the rate flight demands. Instead, a set of lithium ion batteries, stored in an insulated box heated by waste heat from the MMRTG, is charged while on the surface; once the desired charge level is reached, a new flight can occur.

Battery charge vs time, Image JHUAPL
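The charge-then-fly cycle can be illustrated with a toy energy budget. Every number below is a hypothetical placeholder (the actual battery capacity, flight energy, and housekeeping loads aren't specified in the public documentation I've seen); the point is just that a tens-of-watts RTG can support short bursts of much higher-power flight if the recharge periods between hops are long, which Titan's nearly 16-Earth-day sol readily allows.

```python
# Illustrative recharge-time estimate; ALL figures here are assumptions
# for the sake of the sketch, not published Dragonfly numbers.
rtg_power_w = 70.0          # assumed net MMRTG output at Titan (We)
housekeeping_w = 25.0       # assumed lander loads during recharge (W)
flight_energy_wh = 800.0    # assumed energy drawn from batteries per flight (Wh)
charge_efficiency = 0.9     # assumed battery charge efficiency

net_charge_w = (rtg_power_w - housekeeping_w) * charge_efficiency
recharge_hours = flight_energy_wh / net_charge_w
print(f"Net charging power: {net_charge_w:.1f} W")
print(f"Time to recharge for one flight: {recharge_hours:.0f} hours "
      f"({recharge_hours/24:.1f} Earth days)")
```

Even with these made-up numbers, a recharge period of under an Earth day fits comfortably within a single daylight portion of a Titan sol.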

Let's begin by looking at the science instruments which will be included on Dragonfly. This list is quoted from "Dragonfly: A Rotorcraft Lander Concept for Scientific Exploration of Titan," a white paper from Lorenz et al at JHUAPL, January 9th, 2019:

"DraMS—Dragonfly Mass Spectrometer (Goddard Space Flight Center). A central element of the payload is a highly capable mass spectrometer instrument, with front-end sample processing able to handle high-molecular-weight materials and samples of prebiotic interest. The system has elements from the highly successful SAM (Sample Analysis at Mars) instrument on Curiosity, which has pyrolysis and gas chromatographic analysis capabilities, and also draws on developments for the ExoMars/MOMA (Mars Organic Material Analyser).

DraGNS—Dragonfly Gamma-Ray and Neutron Spectrometer (APL/Goddard Space Flight Center). This instrument allows the elemental composition of the ground immediately under the lander to be determined without requiring any sampling operations. Note that because Titan’s thick and extended atmosphere shields the surface from cosmic rays that excite gamma rays on Mars and airless bodies, the instrument includes a pulsed neutron generator to excite the gamma-ray signature, as also advocated for Venus missions. The abundances of carbon, nitrogen, hydrogen, and oxygen allow a rapid classification of the surface material (for example, ammonia-rich water ice, pure ice, and carbon-rich dune sands). This instrument also permits the detection of minor inorganic elements such as sodium or sulfur. This quick chemical reconnaissance at each new site can inform the science team as to which types of sampling (if any) and detailed chemical analysis should be performed.

DraGMet—Dragonfly Geophysics and Meteorology Package (APL). This instrument is a suite of simple sensors with low-power data handling electronics. Atmospheric pressure and temperature are sensed with COTS sensors. Wind speed and direction are determined with thermal anemometers (similar to those flown on several Mars missions) placed outboard of each rotor hub, so that at least one senses wind upstream of the lander body, minimizing flow perturbations due to obstruction and by the thermal plume from the MMRTG. Methane abundance (humidity) is sensed by differential near-IR absorption, using components identified in the TiME Phase A study. Electrodes on the landing skids are used to sense electric fields (and in particular the AC field associated with the Schumann resonance, which probes the depth to Titan’s interior liquid water ocean) as well as to measure the dielectric constant of the ground. The thermal properties of the ground are sensed with a heated temperature sensor to assess porosity and dampness. Finally, seismic instrumentation assesses regolith properties (e.g., via sensing drill noise) and searches for tectonic activity and possibly infers Titan’s interior structure.

DragonCam—Dragonfly Camera Suite (Malin Space Science Systems). A set of cameras, driven by a common electronics unit, provides for forward and downward imaging (landed and in flight), and a microscopic imager can examine surface material down to sand-grain scale. Panoramic cameras can survey sites in detail after landing: in many respects, the imaging system is similar to that on Mars landers, although the optical design takes the weaker illumination at Titan (known from Huygens data) into account. LED illuminators permit color imaging at night, and a UV source permits the detection of certain organics (notably polycyclic aromatic hydrocarbons) via fluorescence.

Engineering systems. Data from the inertial measurement unit (IMU) may be used to recover an atmospheric density profile via the deceleration history during entry. IMU and other navigation data may provide constraints on winds during rotorcraft flight. Additionally, the radio link via Doppler and/or ranging measurements may shed light on Titan’s rotation state, which, in turn, is influenced by its internal structure.”

Source: “Dragonfly: A Rotorcraft Lander Concept for Scientific Exploration of Titan,” Lorenz et al at JHUAPL

This is a diverse set of instruments, and offers a wonderful range of science data return in a compact and highly mobile lander platform.

For more information on how these instruments will be applied in studying the surface composition of Titan, check out this paper from Trainer (NASA GSFC) et al: “DRAGONFLY: INVESTIGATING THE SURFACE COMPOSITION OF TITAN”

It's unclear whether all electrical power will be routed through these batteries or not. While there are advantages from a power conditioning point of view (ensuring the correct voltage and wattage, preventing power dropouts or spikes to sensitive instruments, etc.), it can also shorten battery life due to the constant discharging of the batteries themselves, and the resulting degradation, especially of the anodes. It's unclear which power conditioning scheme will be used for the always-on systems, such as the meteorological package, but the high-powered systems will likely draw power from the batteries exclusively.

Flight profile representation, image JHUAPL

In order to deploy these instruments, the lander will fly from one location to another, scouting the landing site for the flight after the one it is currently flying. A good schematic of the flight profile can be seen below:

Flight profile as function of distance and altitude, image JHUAPL

This ensures that flight time for the next flight can be maximized, since a safe landing location is already known before the lander takes off, and it provides margin in flight time for every leg of the mission after the initial one. It's unclear whether the first flight will also use this profile, since that's part of the entry, descent, and landing sequence, but there's no apparent reason why it couldn't if the lander is ejected from the backshell at moderate altitude and velocity.

Direct-to-Earth Communications: Mission Profile Implications

Due to the remote nature of this mission, there aren't many (if any) options for using communications hubs off Earth as relays for Dragonfly, a major contrast to Martian operations, where the many orbiters around the planet also serve as communications satellites for the various landers and rovers operating on the surface. This means that in order for the scientific and engineering data from Dragonfly to be returned to Earth, and additional commands to be transmitted, the lander needs line of sight to Earth. This is done via a deployable high gain antenna, which will be stowed during flight operations to reduce drag and stress on the antenna itself.

This complicates matters in two ways. First, if the lander is at too high or too low a latitude, there's no line of sight to Earth from the Titanic surface, making communications impossible. Second, the length of the sol (the extraterrestrial equivalent of a day; on Titan this is almost 16 Earth days, the same as Titan's orbital period around Saturn) means that the windows when Earth is above the horizon – and the blackouts between them – are correspondingly long.

While it may be possible to hop outside the communications window, collect scientific data, then return to the window to transmit it, doing so increases the chance of an unrecoverable failure, since the engineering team on Earth would be unable to troubleshoot and resolve any spacecraft problem in the meantime.

Available landing locations in polar view, image JHUAPL

The lander isn't tied to gathering sunlight like a solar-powered spacecraft is, but the conditions for communicating with Earth and having the Sun visible overlap quite closely, so night-time science on this mission essentially means that all data must be stored on board; the rate of data collection and the data storage capabilities of Dragonfly aren't clear from current documentation. This means that, should the lander need to overnight on Titan (a very real possibility), some data may need to be overwritten to make room for more immediately interesting scientific data.

Whether this is an avoidable circumstance or not is something that I’ve been unable to determine, but the mission design team have made provisions for overnighting Dragonfly on Titan, and this may in fact be required to allow for the time necessary to recharge the batteries to flight condition. If this is the case, it’s likely that the mission’s data storage capabilities will be sufficient to collect all desired data through the Titanic night and transmit them during daytime surface operations, when line-of-sight with Earth is possible.

Now that we’ve looked at what Dragonfly is going to be doing on the surface of Titan, let’s look at how it will do it from a power point of view: the Multi-Mission RTG, or MMRTG.

Multi-Mission RTG: NASA and the DOE’s Flagship RTG

The Multi-Mission RTG was the second design to use the GPHS fuel architecture (for more information on that, check out our GPHS page), after the GPHS-RTG used on Galileo and Cassini (more on that here). It also revived and improved a technology we’ve seen before, on the SNAP-19 for Pioneer and Viking (more information here).

This is also the first RTG to be built for NASA in decades that was designed to operate in an atmosphere, a major thermal management change from the typical spaceborne MHW-RTG and GPHS-RTG systems. RTGs have of course been designed to reject heat into both atmosphere and into liquid during the SNAP RTG program, which included naval and air force programs (more on those systems here), but this application hadn’t been used by NASA since the SNAP-19 (the Viking landers used two of the generators, details available here).

All of the non-GPHS elements of the MMRTG are manufactured by Teledyne (the company that the thermocouple inventors worked for in the 1960s and 1970s), and Aerojet Rocketdyne. Lockheed Martin and the Department of Energy provide the systems and materials for the GPHS modules that fuel the MMRTG.

The thermoelectric generator (TEG) assembly of the MMRTG is based on a legacy design: the lead telluride/tellurium-antimony-germanium-silver (PbTe/TAGS-85) thermocouple system. This was first used in the SNAP-19 RTG (more information available here). The MMRTG uses an updated configuration of 768 thermocouples wired as two series-parallel chains for fault tolerance: any thermocouple that fails does so individually, resulting in a negligible, isolated loss of power that is easy and reliable to account for in mission planning.
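The fault-tolerance argument for that series-parallel wiring can be shown with a toy model. The real MMRTG network topology is more involved than this sketch, which simply contrasts a single series string, where one open-circuit couple kills the whole output, with a series-parallel layout where a failed couple costs only roughly its own share of the total.

```python
# Toy model of fault tolerance: fraction of output lost per failed couple.
# The real MMRTG wiring is more involved; this just illustrates why a
# series-parallel network beats a single series string.
n_couples = 768

# Single series string: one open-circuit couple breaks the whole circuit.
series_loss_one_failure = 1.0

# Series-parallel network: an open couple removes (approximately) only
# its own contribution, because current reroutes around it.
sp_loss_one_failure = 1.0 / n_couples

print(f"Series string, 1 failure: {series_loss_one_failure:.0%} of power lost")
print(f"Series-parallel, 1 failure: ~{sp_loss_one_failure:.2%} of power lost")
```

A loss on the order of a tenth of a percent per failed couple is easy to absorb in a mission power budget; a dead string is not.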

Unfortunately, one significant barrier to the use of these materials is differential sublimation of the thermoelectric materials themselves, though there are a number of ways to manage the effect. In the current incarnation, a mix of argon and helium is used as a cover gas, and other cover gas compositions are also possible. Additionally, silica insulation immediately surrounding the thermocouples reduces sublimation rates.

This has significant implications for Dragonfly, as we’ll see below.

The MMRTG is fascinating for a number of reasons in the context of thermal management. The two most prominent (upon investigation) differences between the MMRTG and almost every other off-Earth RTG design are:

  • The radiators are designed to work in both vacuum and atmospheric conditions, and

  • The MMRTG is able to use a pumped coolant thermal management system during operation, the first to do so during a mission with the Mars Science Laboratory cruise stage. This was not only critical for the cruise stage thermal management, but also had an impact on the Entry, Descent, and Landing profile for a lander or rover.

In space, as part of a composite spacecraft such as Dragonfly in cruise, thermal transfer points are mechanically and thermally attached to the radiator fins of the RTG. The cruise stage for Curiosity had a total of 23 m of aluminum tubing in a two-split-flow configuration for this purpose, which, when welded to a 1.5 mm aluminum sheet, forms a radiator about 6 m^2 in area.

Cruise stage configuration. The silver ring on the cruise stage is likely the radiator for the MMRTG. Image JHUAPL

This is integrated into the base of the fins on the radiator, and is jettisoned for surface operations (although exactly when and how this occurs is unclear).

Coolant tube configuration, image NASA/JPL

The spacecraft interface consists of a mounting bracket which connects to the spacecraft for mechanical attachment, as well as an electrical and telemetry connection to the spacecraft through a single connector. Mechanical integration uses a four bolt mounting interface. The only telemetry provided is from platinum resistance thermometers within the RTG.

The RTG is only ever integrated onto the spacecraft at the launch pad, due to nuclear material security concerns, waste heat management simplification, and radiological safety. Interestingly, the entire RTG was installed in Curiosity at the launch pad, after the rest of the rover and cruise stage had already been integrated into the launch vehicle (a ULA Atlas 541). The same will be done for Mars 2020, with a mass simulator used in its place until then, and I presume the same will be done for Dragonfly.

For more information on the MMRTG in general, check out the MMRTG page here!

Multi-Mission RTG Use for Dragonfly

While the MMRTG (more information available here) was designed to handle the Titan environment from the beginning, there are many differences between this version of the MMRTG and those that are (and will be) used on Mars. Additionally, because aerodynamics and mass distribution are major design criteria for Dragonfly, additional requirements are placed on the use of the system.

Martian MMRTG configuration. Note the two shields on either side; this is extended into a full cylinder on Dragonfly. Image JPL

A standard MMRTG uses an eight-bladed, cylindrical radiator to reject heat. This is present on both the Mars and deep space versions of the MMRTG, but the Martian version adds a pair of shields on either side of the rover, both to control air flow past the radiator (limiting convective heat loss) and to capture waste heat from the RTG to warm certain temperature-sensitive system components. These shields don't touch the radiator, and the radiator configuration is the same as on the deep space version of the system (although the deep space version may be painted black instead of white for thermal management).

All systems that use radiators have a minimum and maximum operating temperature for the radiator itself; for the MMRTG, this is specified at the root of each fin. While the MMRTG was designed with a maximum allowable fin root temperature of 200 C (!) for Martian or orbital operations in an absolute worst-case scenario (with a corresponding loss in conversion efficiency due to the lower thermal gradient), the system faces the opposite problem on Titan: the cryogenic surface environment (around 94 K, or -179 C) would pull the fin root temperature well below its minimum operating limit. This requires the radiator to be insulated from the exterior environment to a certain degree, meaning that the heat rejection system needs to be changed.

Of course, the fins on the radiator aren't the most aerodynamic things around: not only would they make changes in yaw and roll more difficult, but the upward angle of the RTG's mounting causes problems with drag as well.

Rear view of Dragonfly in flight. Note the angled cylinder on the back of the lander: this is the MMRTG housing. Image JHUAPL

Fortunately, both of these problems can be addressed together by taking an idea already in use on the Curiosity rover and extending and adapting it for the Titanic environment. On Mars, as we already mentioned, a pair of shields sits on either side of the rover; for Dragonfly, those shields are extended into a full cylindrical housing for the MMRTG, seen at the back of the lander. Not only does this keep the RTG components at the appropriate temperature, it also improves the lander's aerodynamics. This is especially important because, while lift is easy to come by on Titan, drag is also far more powerful.
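To get a feel for why an unshielded radiator would overcool in Titan's dense, cryogenic atmosphere, here's a rough comparison of radiative and convective heat loss from a warm fin surface. All of the parameter values (fin temperature, emissivity, convective coefficient) are illustrative assumptions rather than MMRTG design values; only the ~94 K ambient temperature is a property of Titan itself.

```python
# Rough comparison of heat-loss modes for a warm radiator surface on Titan.
# Parameter values are illustrative assumptions, not MMRTG design values.
sigma = 5.670e-8        # Stefan-Boltzmann constant, W/m^2/K^4
emissivity = 0.9        # assumed surface emissivity
T_fin = 430.0           # assumed fin temperature, K (~157 C)
T_titan = 94.0          # Titan surface ambient temperature, K
h_conv = 10.0           # assumed convective coefficient in Titan's dense air, W/m^2/K

q_rad = emissivity * sigma * (T_fin**4 - T_titan**4)   # radiative, W/m^2
q_conv = h_conv * (T_fin - T_titan)                    # convective, W/m^2

print(f"Radiative loss:  {q_rad:.0f} W/m^2")
print(f"Convective loss: {q_conv:.0f} W/m^2")
print(f"Convection adds ~{q_conv/q_rad:.1f}x the radiative loss")
```

Even with a modest assumed convective coefficient, convection roughly triples the total heat loss compared to radiation alone, which is exactly the kind of overcooling the cylindrical housing is there to prevent.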

Caution and the RTG: Unknown Unknowns in Mission Design and Power Supply Fidelity

While the MMRTG has been performing within the design envelope for the needs of the Curiosity mission on Mars for years, it has shown some moderately concerning degradation behaviors that the Dragonfly team are taking into account when designing the power profile for Dragonfly.

The JPL “Radioisotope Power Systems Reference Book for Mission Designers and Planners,” by Young Lee and Brian Barstow at JPL, covers many of the details of the MMRTG, as well as recommendations on the design margins that should be adhered to when applying this power supply to a mission proposal.

The first concern is fuel age. While the MMRTG on Curiosity supplied 114 We of power on landing on Mars (after three years in storage and four years of vehicle integration, launch, and cruise), the Reference Book lists the conservative power output for a brand new MMRTG as only 107 We on the Martian surface. Curiosity's pre-landing period is also longer than that of the Mars 2020 rover's MMRTG, which was delivered in August 2018 for a 2020 launch (although when its fuel was produced is something I have yet to determine). This bodes well for having sufficient power from the 238Pu fuel at the beginning of Dragonfly's surface mission, nine years after launch.

Sadly, radioactive decay isn't the only source of steady degradation in the MMRTG. The thermoelectric generator itself, built around PbTe/TAGS-85 thermocouples descended from those flown on the Pioneer 10 and 11 probes, loses thermoelectric material (mainly germanium) over time through sublimation and migration out of the thermocouples themselves. This is currently reduced by using a cover gas of argon and helium in a carefully controlled ratio, but sadly its utility seems to be less than ideal on the Martian surface. Additionally, silica insulation immediately surrounding the thermocouples can help reduce sublimation, but whether that's currently in use is unclear.

Ideal power output as a function of time, image JPL

The MMRTG's output was assumed to degrade between 3.5% and 4.8% per year, combining the decline in decay energy from the 238Pu fuel with the sublimation of materials in the thermocouples themselves. Sadly, real-world data from Curiosity shows that the MMRTG on board the rover is degrading at the top end of that scale: 4.8% per year. This is a cause for concern for a mission planner, and one that the Dragonfly team at JHUAPL have taken into account in the mission design.
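These degradation rates can be put in perspective with a quick back-of-the-envelope calculation. The 238Pu half-life (87.7 years) is a known physical constant, and the 4.8%/yr total rate comes from the Curiosity data above; the ~110 We output at launch is an assumed beginning-of-mission figure used only for illustration, and the ~9.7-year cruise duration follows from the launch and arrival dates discussed earlier.

```python
# Contribution of 238Pu decay heat alone (half-life 87.7 years):
pu238_half_life_yr = 87.7
fuel_decline_per_yr = 1.0 - 0.5 ** (1.0 / pu238_half_life_yr)
print(f"Fuel decay alone: {fuel_decline_per_yr:.2%}/yr")  # ~0.79%/yr

# Total observed degradation on Curiosity is ~4.8%/yr, so most of the
# loss comes from thermocouple degradation, not fuel decay.
total_decline_per_yr = 0.048

# Projecting to Titan arrival, with an ASSUMED ~110 We at launch and
# roughly 9.7 years of cruise (April 2025 to December 2034):
p_launch_we = 110.0
cruise_yr = 9.7
p_arrival = p_launch_we * (1.0 - total_decline_per_yr) ** cruise_yr
print(f"Projected power at Titan arrival: ~{p_arrival:.0f} We")
```

Fuel decay alone accounts for well under 1%/yr, so the thermocouples dominate the losses; and even at the pessimistic 4.8%/yr rate, the projection lands in the high-60s of watts, consistent with the conservative planning figure discussed below.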

While advanced mitigation techniques (such as cladding the thermoelectric materials in Al2O3 using an atomic layer deposition process, yielding a highly regular clad similar to that on nuclear fuel elements but far more exacting) have been proposed and demonstrated by the University of Dayton, it's unclear whether they will be used on Dragonfly due to the unknowns they can introduce into the system.

Because of the long cruise phase of the mission, the Dragonfly team assumes that the MMRTG will only be able to provide about 70 We of power once the mission arrives at Titan. This is sufficient to charge the lander's lithium ion batteries (held in a thermally insulated box and kept at temperature with waste heat from the RTG) for both flight and energy-intensive experiments, as well as to power the scientific instruments that will run during grounded science operations.

According to Lorenz et al at JHUAPL in 2018:

“Although sample acquisition and chemical analysis are somewhat power-hungry activities, they require only a few hours of activity. Science activities that require continuous monitoring, namely meteorological and seismological measurements, although of low power, actually dominate the payload energy budget. Indeed, for these extended periods, the lander avionics are powered down and data acquisition is performed only by the instrument, to maximize the rate of recharge of the battery.”

This means that there is more than enough margin for a years-long successful mission even from an MMRTG degrading at the high end of the degradation curve, so the power supply is unlikely to be a significant cause of mission failure.

Titan, Here We Come!

The exploration of Titan has long been a goal, ever since the Voyager spacecraft first sent data back from this fascinating moon. Cassini and Huygens sent back reams of data, but sadly only served to whet our appetite for more.

Now, Dragonfly will provide us far more data, from an incredibly mobile platform, on the composition, chemical processes, and weather on Titan. This will not only increase our knowledge of the moon itself, but also our understanding of early chemistry on Earth and how it could have led to the rise of life in the Solar System.

Sadly, it will be 2034 before we receive data back from Dragonfly on the surface of Titan, but this is not unusual for so distant a location. Until then, we can only follow the mission development, cheer on the team at Johns Hopkins University APL, and wait with bated breath for the first data to be transmitted from this distant, intriguing moon.

Dragonfly team with Earth analogue test article, Image credit JHUAPL

For more information on the MMRTG, make sure to check out the new MMRTG page, available here: MMRTG page.

More coming soon!

References and Additional Resources

Dragonfly mission homepage, Johns Hopkins Applied Physics Laboratory

Dragonfly: A Rotorcraft Lander Concept for Scientific Exploration at Titan, Lorenz et al 2018

Dragonfly: New Frontiers mission concept study in situ exploration of Titan’s prebiotic organic chemistry and habitability, (presentation slides) Turtle et al 2018

Preliminary Interplanetary Mission Design and Navigation for the Dragonfly New Frontiers Mission Concept, Scott et al 2018

The Exploration of Titan, Lorenz 2018


Development and Testing RHU RTGs Test Stands

ESA’s RTG Program: Breaking New Ground

Hello, and welcome back to Beyond NERVA! I would like to start by apologizing for the infrequency in posting recently. My wife is currently finishing her thesis in wildlife biology (multispecies occupancy, or the combination of statistical normalization of detection likelihood with the interactions between various species in an ecosystem to determine inter-species interactions and sensitivities), which has definitely made our household more hectic in the last few months. She should defend her thesis soon, and things will return to somewhat normal afterward, including a return to more frequent posting.

Today, we return to radioisotope thermoelectric generators (RTGs), which have once again been in the news. This time, the news is on a happier note than the passing of one of the pioneers in the field: the refining of a different type of RTG fuel, americium-241. This work was done at the United Kingdom’s National Nuclear Laboratory Cumbria Lab by a team including personnel from the University of Leicester, and announced on May 3rd. The material was isolated from the UK’s nuclear weapons stockpile of plutonium, which is something that we’ll look at more in the post itself.

A quick note on nomenclature and notation, since this is something that varies a bit: the way I learned to notate isotopes of an element (or nuclei of a given element with different numbers of neutrons) is as follows, and is what I generally use. If the element is spelled out, the result is [Element name][atomic mass of the nucleus](isomer state – if applicable); if it’s abbreviated, it’s [atomic mass](isomer)[element symbol]. An isomer shows the energetic state of the nucleus: if gamma rays are absorbed by the nucleus, a nucleon can jump to a higher energy state, just as electrons do for longer-wavelength (and lower-energy) photons. For this post it may not matter, but it will come up with this element in the future. This means that Plutonium 238 becomes 238Pu, and Americium 242(m) becomes 242(m)Am – I chose that example because it’s a long-lived isomer that we will return to at some point, due to its incredible usefulness in astronuclear design and its reactor geometry advantages.
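If you like, the abbreviated convention can be captured in a toy helper function (the function name and structure here are purely my own illustration, not anything from a real library):

```python
def notate(symbol, mass, isomer=None):
    """Format an isotope in the [mass](isomer)[symbol] style used on this blog."""
    iso = f"({isomer})" if isomer else ""
    return f"{mass}{iso}{symbol}"

# notate("Pu", 238)        -> "238Pu"
# notate("Am", 242, "m")   -> "242(m)Am"
```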

In this post, we’ll take a brief look at the fuels we currently use, as well as this “new” fuel, and one or two more that have been used in the past.

Radioisotope Power Source Fuels: What Are They, and How Do they Work?

Radioisotopes release energy in proportion to the instability of the isotope of whatever element is being used. This level of instability is a very complex issue, and how they decay is largely determined by where they fall in relation to the Valley of Stability, the region in the Chart of the Isotopes where the number of protons and the number of neutrons balance the strong nuclear and electroweak forces in the nucleus. We aren’t going to go into this in too much detail, but the CEA did an excellent video on this topic, which is highly recommended viewing if you want more information on the mechanics of radioactive decay and its different manifestations:

For our purposes, the important things to consider about radioactive decay are: how much energy is released in a given amount of time, and how easily that energy can be harnessed as kinetic energy, and therefore heat, while minimizing the amount of unharnessable radiation that could damage sensitive components in the spacecraft’s payload. The amount of energy released in a given time is inversely proportional to the half-life of the isotope: the shorter the half-life, the more energy is released in a given time, but as a consequence the fuel will produce useful energy levels for a shorter period of time.

How much of that energy is useful is determined by the decay chain of the isotope in question, which shows the potential decay mechanisms that the isotope will go through, how much energy is released in each decay, and what the subsequent “daughter” isotopes are – with their decay mechanisms as well. The overall energy release from a particular isotope has to take all of these daughters, granddaughters, etc. into consideration in order to calculate the total energy release, and how useful that energy is. If it’s mostly (or purely) alpha radiation, that’s ideal, since it’s easily converted to kinetic energy, and each decay releases a proportionally larger amount of energy on average due to the high relative mass of the alpha particle. Beta radiation is not as good, but still easily shieldable (which converts the energy into kinetic energy and therefore heat), and so while not as ideal for heating is still acceptable.
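The arithmetic behind this trade-off is simple enough to sketch: decay heat per gram is the decay constant, times the number of atoms per gram, times the energy per decay. A minimal version (ignoring daughter contributions and treating the full decay energy as recoverable heat, which is a fair approximation for alpha emitters):

```python
import math

AVOGADRO = 6.022e23      # atoms per mole
MEV_TO_J = 1.602e-13     # joules per MeV
YEAR_S = 3.156e7         # seconds per year

def specific_power(half_life_years, decay_mev, atomic_mass):
    """Watts of decay heat per gram for an alpha emitter, daughters ignored."""
    decay_const = math.log(2) / (half_life_years * YEAR_S)   # decays/s per atom
    atoms_per_gram = AVOGADRO / atomic_mass
    return decay_const * atoms_per_gram * decay_mev * MEV_TO_J

pu238 = specific_power(87.7, 5.59, 238)   # ~0.57 W/g
am241 = specific_power(433, 5.64, 241)    # ~0.11 W/g
```

These reproduce the figures quoted later in the post: about 0.57 W/g for 238Pu and roughly 0.1 W/g for the much longer-lived 241Am.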

For an RTG designer, the biggest problem (in this area at least) is gamma radiation. Not only is this very hard to convert into useful energy, but it can damage the payload and components of a spacecraft. This is because of the way gamma radiation interacts with matter: the wavelength is so short that it is absorbed most efficiently by very dense elements with a high number of nucleons (protons and neutrons), after which that nucleon jumps to a higher energy state (an isomer), and usually just spits it back out at a lower energy state. This process is repeated until there isn’t enough energy in the short wavelength photon to actually do anything meaningful.

Diverted particle (blue) emitting a photon (green) through bremsstrahlung. Image: Wikipedia

Unfortunately, radioactive decay isn’t the only way to produce gamma rays; you also have to deal with one of the key concepts in radiation shielding, which always ties my fingers in knots whenever I try to type it (my tongue just says “nah, not even going to try”): Bremsstrahlung. This is also known as “braking radiation,” and is closely related to the concept of cyclotron radiation. Basically, when a massive, high-energy charged particle is deflected from its original course, it slows down, and the energy it loses has to be conserved somehow. Elastic collisions between the particle and whatever it hits deposit kinetic energy (heat), but whenever the particle is decelerated or diverted by an electromagnetic field – including magnetic containment – a photon is also produced. This photon’s energy is proportional to the energy the particle originally had, the degree that its direction is changed, and how much energy is lost through mechanisms other than elastic collisions – and a charged particle is never fully slowed by elastic collisions alone, for very complex quantum mechanical reasons. The amount converted to bremsstrahlung grows with the initial velocity of the particle in the case we’re talking about (magnetic containment and diversion of radiation isn’t the subject of this blog post, after all). This means that if you have a high-energy alpha emitter and a low-energy alpha emitter, the high-energy one will create more gamma radiation, making it more problematic in terms of the energy efficiency of a radioisotope for heat production.

Another concern is the chemical form the radioisotope takes in the fuel element itself. This defines a number of things, including how well any heat produced is transmitted to where it’s useful (or, if the heat isn’t transferred efficiently enough, the thermal failure point of the fuel), what kind of nuclear interactions will occur within the fuel, the chemical impact of the fuel’s elemental composition changing through the radioactive decay that is the entire point of using these materials, how much the particles are slowed within the fuel itself (and any subsequent bremsstrahlung), how much of the radiation that comes from the radioisotope decay is shielded by the fuel itself, and a whole host of other characteristics that are critical to the real-world design of a radioisotope power source.

Now that we’ve (briefly) looked at how radioisotopes are used as power sources, let’s look at what options are available for fueling these systems, which ones have been used in the past, and which ones may be used in the future.

How Are Radioisotopes Used in Space?

The best known use of radioisotopes in space is in radioisotope thermoelectric generators, or RTGs. RTGs are devices that use the thermoelectric effect, paired with a material containing an isotope undergoing radioactive decay, to produce electricity. While these are the most well known of the radioisotope power sources, they aren’t the only ones, and in fact they are built around an even more fundamental type of power source: a radioisotope heating unit, or RHU.

These RHUs are actually more common than RTGs, since often the challenge isn’t getting power to your systems but keeping them from freezing in the extreme cold of space (when there isn’t direct sunlight, in which case extreme heat is the problem). In that case, a small pellet of radioisotope is connected to a heat transfer mechanism, which could be simple thermal conduction through metal components, or heat pipes, which move a working fluid from a heat source to a cold sink through boiling and condensation, with a wick returning the condensed fluid to the hot end. Heat pipes have become well known in the astronuclear community thanks to the Kilopower reactor, but are common components in electronics and other systems, and have been used in many flight systems for decades.

RTGs use RHUs as well, as the source of the heat they use to produce electricity. In fact, the design of a common heat source for both RTG and spacecraft thermal management requirements was a focus of NASA for years, resulting in the General Purpose Heat Source (GPHS) and its successor variations by the same name. This is important to efficiently and reliably manufacture the radioisotope fuel needed for the wide variety of systems that NASA has used in its spacecraft over the decades. While RTGs are the focus of this technology, we aren’t interested here in either the power conversion system or the heat rejection system that make up two of the four main systems in an RTG, so we won’t delve into their details. Rather, our focus is on the RHU radioisotope fuel itself, and the shielding requirements that this fuel mandates for both spacecraft and payload functionality (the other two major systems of an RTG). Because of this, for the rest of the post we will mostly be discussing RHUs rather than RTGs (although mentions of RTG implications will be liberally scattered throughout).

Another use for radioisotopes in space is something that is rarely discussed for space systems but is common on Earth: as a source of well-characterized and well-understood radiation for both analysis and manufacturing. Radioisotope tracking is common in everything from medicine to agriculture, and radioactive analysis is used on everything from ancient artifacts to modern currency. The ISS has hosted experiments that use mildly radioactive isotopes to analyze the growth of living beings and microbes in microgravity, for instance – a very common technique in agriculture for analyzing nutrient uptake in crops, and a variation of a technique used in nuclear medicine to analyze everything from circulatory flow to tumor growth. X-ray analysis of materials is also a common method in high-end manufacturing, and as groups such as Made in Space, Relativity Space, and Tethers Unlimited continue exploring 3D printing and ISRU in microgravity and low gravity environments, this will be an invaluable tool. However, this is a VERY different subject, so this is where we leave biological analysis and technological development and move on to the most common use of radioisotopes: providing heat to do some sort of work.

RHU Fuel: The Choices Available, and the Choices Made

Most RHUs, for space applications at least, are fueled with 238Pu, an isotope of plutonium that is not only useless as a weapons material itself, but in fairly minute quantities will render Pu meant for nuclear weapons completely unusable. In the early days of the American (and possibly Soviet) nuclear weapons program, small amounts of this isotope were isolated from material meant for nuclear weapons, but as time went on and the irradiation process became more efficient for producing weapons material (which is very different from producing power), the percentage of 238Pu dropped from the single digits to insignificant. By this time, though, the Mound Laboratories in Miamisburg, Ohio had become very interested in the material as a source of radioactive decay heat for a variety of uses. These uses ranged from spacecraft to pacemakers (yes, pacemakers… they were absolutely safe unless the person was cremated, and the fact that the removal of said pacemakers couldn’t be guaranteed is what killed the program). This doesn’t mean that they didn’t investigate the rest of what we’ll be looking at as well, but much of their later focus was on 238Pu.

The advantages of 238Pu are significant: it decays almost exclusively by alpha emission, with an exceptionally small chance of spontaneous fission (which occurs in the vast majority, if not all, isotopes of uranium and above on the periodic table); it has a good middle-of-the-road half-life (87.7 years) for long-term use as a power source on missions lasting decades (as the majority of outer solar system missions require, just to get to the target destination and provide useful science returns); it releases a good amount of energy with each alpha decay (5.59 MeV); and its daughter – 234U – is incredibly stable, with a half-life so long (245,500 years) that it basically won’t undergo any significant decay until long after the fuel is useless and the spacecraft has fulfilled its mission. The overall specific power of 238Pu is 0.57 W/g, which is one of the best power densities available among long-lived radioisotopes.

Additionally, the fuel form used, PuO2, is incredibly chemically stable, and when 238Pu becomes 234U, those two oxygen atoms easily reattach to the newly formed U to form UO2 – the same fuel form used in most nuclear reactors (although, being 234U rather than 235U, it would be useless in a reactor) – which is also incredibly chemically stable. Finally, the fuel itself is largely self-shielding once encased in the usual iridium clad that protects it during handling and transport, meaning that the minimal gamma radiation coming off the power source is mainly a concern as noise for instruments looking in the X-ray and gamma bands of the EM spectrum, rather than a source of significant material or electronic degradation.

238Pu is generated by first making a target of 237Np, usually through irradiation of a precursor material in a nuclear reactor. This target is then exposed to the neutrons produced by a nuclear reactor for a set length of time, after which the 238Pu is chemically separated. Due to the irradiation time and energy level, the final result is almost pure 238Pu (close enough for the DOE and NASA’s purposes, anyway), which can then be turned into a ceramic pellet and encased in the clad material. This is then mated to the system that the spacecraft will use the power source for, usually just before spacecraft integration with the launch vehicle. Due to the rigid and strict interpretations of radiation protection, this is an incredibly complex and challenging process – but one that is done on a fairly regular basis. The supply of 238Pu has been a challenge from a bureaucratic perspective, but a recent shakeup of the American astronuclear industry, due to a GAO report released last year, offers hope for sufficient supply for American flight systems.

This is far from the only radioisotope used for RHUs – in fact, it wasn’t even the first. Many early US designs used strontium 90, as did many Soviet designs: it powered multiple nautical navigation buoys, Soviet lighthouses along their northern coast, and many other (mostly terrestrial) systems. This isotope has a half-life of 28.79 years, which means that it’s better suited to shorter-lived systems than 238Pu, but its half-life is still long enough to make it a useful radioisotope source. The disadvantage is that it decays via beta decay (at 546 keV) to 90Y, which is less efficient at converting radioactive decay into heat – not least because a significant share of each beta decay’s energy is carried away by antineutrinos. However, this isotope goes through another decay as well: 90Y has only a 2.66 day (!) half-life, ejecting a 2.28 MeV beta particle and leaving a final decay product of 90Zr, which is radiologically stable. This means that the total energy release across the chain is up to 2.826 MeV, through two beta emissions. The overall specific power is still attractive, at about 0.9 W/g, so either sufficient shielding (of sufficiently high thermal conductivity) to convert this beta radiation into kinetic energy, or a power conversion system that isn’t thermally based, is a potential way to increase the efficiency of these systems.

Finally, we come to the one that scares many, and has a horrible reputation in international politics: polonium 210. This is most famous for its use as a poison in the case of Alexander Litvinenko, a Russian defector who was poisoned with 210Po in his tea (with a whole lot of other people being exposed in the process) – a lethality that comes from its intense radiotoxicity once ingested. 210Po has an incredibly short half-life of only 138.38 days. This is only acceptable for short mission times, but the massive amount of heat generated is still incredibly attractive for designers looking at very high temperature applications. Designs such as radioisotope thermal rockets (where the decay heats hydrogen or another propellant, much like an NTR driven by decay rather than a reactor), whose efficiency is defined purely by temperature, can gain significant advantages from this high decay power. 210Po decays via alpha emission into lead 206, a stable isotope, so there are no complications from daughter products emitting additional radiation, and a 5.4 MeV alpha particle carries quite a lot of energy.
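Plugging 210Po’s numbers into the same kind of back-of-envelope decay-heat arithmetic shows why designers find it so attractive despite the short half-life (a sketch that treats the full 5.4 MeV as recoverable heat and ignores small recoil and gamma corrections):

```python
import math

# 210Po: 138.38-day half-life, ~5.4 MeV per alpha decay, mass ~210 g/mol.
decay_const = math.log(2) / (138.38 * 86400)     # decays/s per atom
atoms_per_gram = 6.022e23 / 210
watts_per_gram = decay_const * atoms_per_gram * 5.4 * 1.602e-13
# on the order of 140 W/g -- hundreds of times 238Pu's 0.57 W/g
```

That enormous specific power is exactly what a radioisotope thermal rocket designer wants, and exactly why the fuel burns itself out in months.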

Other isotopes are also available, and there’s a fascinating table in one of my sources that shows the ESA decision process when it comes to radioisotope selection from 2012:

241Am: The Fuel in the News

Americium-241 is the big news, however, and the main focus of this post. 241Am is produced during irradiation of uranium, which goes through successive neutron captures to 241Pu, one of the isotopes that degrades the usefulness of weapons-grade 239Pu. 241Pu decays in a relatively short time (roughly 14-year half-life) into 241Am, which is not useful as weapons-grade material, but still matters in nuclear reactors. According to Ed Pheil, the CEO of Elysium Industries, the Monju sodium cooled fast reactor in Japan faced a restart problem because the 241Pu in the reactor’s fuel decayed before the reactor was restarted, causing a reactivity deficit and delaying startup.

241Am decays through a 5.64 MeV alpha emission into 237Np, which in turn goes through a 4.95 MeV alpha decay with a half-life of 2.14 x 10^6 years. This means that the daughter is effectively radiologically stable. With its longer half-life of 433 years (compared to about 88 years for 238Pu), 241Am won’t put out as much energy at any given time as 238Pu fuel, but the smaller change in power output over the course of a mission allows for a steadier power supply for the spacecraft. A comparison of beginning-of-life power to 20-year end-of-life power shows a reduction of about 15% for a 238Pu RTG, compared to only about 3.5% for 241Am. This allows for more consistent power availability, and, for extremely long range or long duration missions, a greater total amount of power. However, in the ESA design documentation this extreme longevity is not examined – a curious omission on first inspection. It can be explained by two factors, however: mission component lifetime, which is influenced by many factors independent of the power supply, and the continuing high cost of maintaining a mission control team and science complement to support the probe.
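The 20-year comparison falls straight out of exponential decay. A quick check, considering fuel decay alone (the quoted figures presumably also fold in some converter degradation):

```python
def fraction_lost(years, half_life_years):
    """Fraction of a radioisotope's decay heat lost after a given time."""
    return 1 - 0.5 ** (years / half_life_years)

pu_loss = fraction_lost(20, 87.7)   # ~15% for 238Pu
am_loss = fraction_lost(20, 433)    # ~3% for 241Am
```

Fuel decay alone gives roughly 15% for 238Pu and a bit over 3% for 241Am over 20 years, matching the comparison above.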

Depending on what power level is needed (more on that in the next section about the ESA RTG design), and how long the mission is, the longer half-life could make 241Am superior in terms of useful energy released compared to 238Pu – one of the reasons ESA made 241Am the main focus of its RTG efforts.

Why the focus on weapons material in the description of production methods? Because that’s where the UK’s National Nuclear Laboratory gained their 241Am in the latest announcement. The NNL is responsible for production of all 241Am for Europe’s RTG programs, but doesn’t HAVE to produce the material from weapons stockpiles.

Fuel cycle using civilian power reactors, Summerer 2012

The EU, unlike the US, reprocesses its fuel, and uses the Pu to create mixed-oxide (MOX) fuel. If the Pu is chemically separated from the irradiated fuel pellets and then allowed to decay, the much shorter half-life of 241Pu compared to the other Pu isotopes makes it possible to chemically separate the resulting 241Am from the Pu destined for MOX. This could, in theory, allow for a steady supply of 241Am for European space missions. As to how much 241Am would be available through reprocessing, this is a complex question, and one that I have not been able to explore sufficiently to give a good answer. Jaro Franta was kind enough to provide a pressurized water reactor spent fuel composition table, which provides a vague baseline:

However, MOX fuel generally undergoes higher burnup, and according to several experts the Pu is quickly incorporated into fresh fuel as part of the reprocessing of spent fuel. This may be done partly to ensure weapons-usable material isn’t lying around at La Hague, but it also prevents enough decay time to separate the 241Am. In addition – as we see in 238Pu production, where the materials are fabricated in one place, separated in another, and made into fuel in a third – La Hague and the Cumbria lab are not only in different locations but in different countries, and after separation the Pu is even more useful for weapons; this bureaucratic tangle makes using spent nuclear fuel for 241Am production an iffy proposition at best. However, according to Summerer and Stephenson (referenced in one of the papers, but theirs is behind a paywall), the separation of 241Am from spent civilian fuel can be economical (I’m assuming due to the short half-life of 241Pu), so it seems the problem is systemic, not technical.

241Am is used as RHU fuel in the form of Am2O3. This allows for very good chemical stability, as well as reasonable thermal transfer properties (for an oxide). This fuel is encased in a “multilayer containment structure similar to that of the general-purpose heat source (GPHS) system,” with thermal and structural trade-offs made to account for the different thermal profile and power level of the ESA RTG (which, as far as I can tell, doesn’t have a catchy name like “GPHS-RTG” or “MMRTG” yet). Neptunium oxide most often takes the form of NpO2, meaning that an oxygen deficit will develop in the fuel pellet over time. The implications of this are something that I am unable to fully answer, but it is definitely distinct from the behavior of 238Pu fuel, which becomes 234U – both of which take the dioxide form. There is, in other words, a stoichiometric mismatch between the initial material, the partially-decayed material, and the final, fully-decayed state of the fuel element. I know just enough to know that this is far from ideal, and will change a whole host of properties, from thermal conductivity to chemical reactivity with the clad, so there will be other (potentially insignificant) factors affecting fuel element life from the chemical point of view rather than the nuclear one.

ESA’s RTG: the 241Am RTG

ESA has been interested in RTGs for basically its entire existence, but for reasons that I haven’t been able to determine with certainty, it has never investigated 238Pu production for RTG fuel to any significant degree. Rather, it has focused on 241Am. This comes with a trade-off: due to the longer half-life, less energy is available per unit mass at any given time (0.57 W/g for 238Pu vs 0.1 W/g for 241Am), but as previously noted there are a couple of threshold points at which this longer half-life becomes an advantage.
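One way to see how extreme the longevity threshold is: ask when the two fuels’ per-gram heat outputs actually cross over, starting from 0.57 W/g and 0.1 W/g. A sketch:

```python
import math

def crossover_years(p0_a, t_half_a, p0_b, t_half_b):
    """Time at which two exponentially decaying heat outputs become equal."""
    # Solve p0_a * 2^(-t/ta) == p0_b * 2^(-t/tb) for t.
    return math.log2(p0_a / p0_b) / (1 / t_half_a - 1 / t_half_b)

t = crossover_years(0.57, 87.7, 0.10, 433)   # roughly 275 years
```

On a per-gram basis, 241Am only out-produces 238Pu after centuries – which suggests (my inference, not ESA’s statement) that the European case rests more on fuel availability, cost, and power steadiness than on raw specific power.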

Mass to power design envelope, Ambrose et al 2013

The first advantage is in mission lifetime (assuming the radiochemistry situation is of minimal concern): the centuries-long half-life could allow for unprecedented mission durations, where the overall mass budget for the power source matters less in mission design than the longevity of the mission. The second comes when only a very small amount of power is needed, and this is the focus of ESA’s RTG program. The GPHS used in American systems produces 250 Wt at the time of manufacture in no more than 1.44 kg. Sadly, it’s very difficult to determine the ESA fuel element’s mass and specific composition, so a direct comparison is currently impossible. Based on a 2012 presentation, there were two parallel options being explored, with different specific power characteristics but also different – and more efficient – thermal transport. One was the use of CERMET fuel, where the oxide fuel is encapsulated in a refractory metal (which was unspecified), manufactured by spark plasma sintering. This is a technology that we’ve examined in terms of fissile fuel rather than radioisotope fuel, but many of the advantages apply in both cases. In addition, the refractory metal provides internal bremsstrahlung shielding, potentially reducing the need for robust clad materials – although the need to contain the radioisotope in the case of a launch failure demands a mechanical strength that may render that reduction in shielding requirements effectively moot. The second was a parallel of American RHU design, using a multi-layered clad structure of refractory metals, carbon-based insulators, and carbon-carbon composites. This seems to be the architecture that was ultimately chosen, after a contract with Areva TA in France and other partners (which I have not been able to find documentation on – if you have that info, please post it in the comments below!).

Cross-section of 1st generation ESA RTG, Ambrosi et al 2013

ESA has two RTG designs that it has been discussing over the last two decades: an incredibly small one, producing only about 1 We, and a larger one in the 10-50 We range. These systems are similar in that they both start from the same design philosophy, but different materials and design tradeoffs were required for the two systems, so they have evolved in different directions.

BiTe thermocouple unit, Summerer 2012

The 10-50 We RTG combines the 241Am RHU and its composite clad with a surrounding bismuth telluride thermocouple structure. This is very similar to lead telluride TE converters in many ways, but is more efficient at lower operating temperatures. PbTe is also under investigation, developed and manufactured by Fraunhofer IPM in Germany, but BiTe seems to be the current technological forerunner in ESA RTG development. The BiTe units appear to be commercially available, but sadly, given the large number of thermopiles (another name for TE converters) on the market, it is not clear which is being used.

The 1-1.5 We RTG doesn’t seem to be explored in depth in the currently published literature. This power output is useful for certain applications, such as powering an individual sensor, but is a much more niche application. Details on this design are very thin on the ground, though, so we will leave it at that and move on to the experiment performed – again, very little is available on the specifics, but the experiment WAS described in previous papers.

Gen 2 ESA RTG, Ambrosi 2015

This design went through an evolution into the model seen in the public announcement video. This is called the Gen 2 Flight System Design, and was likely introduced sometime in the 2015 timeframe. At the same time, the radioisotope heating unit itself reached TRL 3.

On the fuel element fabrication side, after 241Am was selected in 2010 two phases of isotope production occurred. The first was from 2011 to 2013, and the second from 2013 to 2016.

This was the first test of that batch of RTG fuel to produce electricity, which is a major achievement, for which the entire team should be congratulated.

The Experiment in the News: What is All the Fuss Actually About?

Sadly, there’s no publicly available information about the Am-fueled test in the news. The promotional video and press release from NNL/UL provided only two pieces of information: 241Am fuel was used, and it lit a light bulb. No details about how much 241Am, its fuel form or clad, the thermocouples used, or the power requirements of the light bulb… this was meant for maximum viral distribution, not for conveying technical information.

The best way to look at the test is by looking at the preceding tests. RTG design from the ground up takes time, and in the case of ESA and the NNL, this process has only been funded for about the last ten years. They have had to pick a fuel type, continue to go through selection on thermocouple type, and will be working to finalize the design of their flight system.

Electrically heated breadboard experiment, Ambrosi 2012

In 2012, an electrically heated breadboard experiment was conducted. The test used a cuboid form factor, rather than the more common cylindrical form, and was conducted in a liquid-nitrogen-cooled vacuum chamber. It was designed around a theoretical Am2O3 fuel element which would provide 83 Wt to the thermocouple. This thermocouple, in the proposal phase, was either a commercially available BiTe or a bespoke PbTe unit, contained in an argon cover gas. The maximum electrical output was 5 We, which is below the minimum size of the “normal” 10 We RTG design. It’s possible that the experiment was carried out at 83 Wt / 5 We to allow for maximum comparison between the electrically heated version and a future radioisotope-powered version, but the lack of RI-powered experiment data prevents us from knowing whether this was the case, or whether running at 10 We (166 Wt?) was ruled out by manufacturing and fuel element design constraints.
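The breadboard numbers imply a thermal-to-electric conversion efficiency of about 6%, which also lets us sanity-check the 166 Wt figure suggested above for a 10 We unit:

```python
thermal_in = 83.0    # Wt, design heat input from the notional Am2O3 element
electric_out = 5.0   # We, maximum measured electrical output

efficiency = electric_out / thermal_in    # ~0.06, typical for thermoelectrics
heat_for_10_we = 10.0 / efficiency        # ~166 Wt to reach 10 We
```

A ~6% conversion efficiency is in the normal range for thermoelectric generators, which is part of why Stirling converters keep coming up as an alternative.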

The electrically heated breadboard demonstrated that, in the 5-50 We power output range, a specific power of 2 We/kg is feasible. There is room for some improvement, but this provides a good baseline for the power range and the specific power that these units will provide. This paper (linked below, Development and Testing of Americium-241 Radioisotope Thermoelectric Generator: Concept Designs and Breadboard System, Ambrosi et al) notes that only at small power outputs can 241Am compete with 238Pu systems, but extended mission lifetime considerations were not addressed.
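The 2 We/kg figure translates directly into generator mass budgets across that power range. A quick illustrative sketch (purely back-of-envelope; real designs will vary with fuel loading and converter choice):

```python
# System mass estimates from the ~2 We/kg specific power quoted in the
# Ambrosi et al breadboard study.
SPECIFIC_POWER_WE_PER_KG = 2.0

def system_mass_kg(power_we):
    """Approximate RTG system mass for a given electrical output."""
    return power_we / SPECIFIC_POWER_WE_PER_KG

for p in (5, 10, 50):
    print(f"{p:>2} We unit -> ~{system_mass_kg(p):.1f} kg")
```

So a 10 We unit comes in around 5 kg, and even the top of the studied range (50 We) stays around 25 kg, which is the kind of mass that small deep-space probes can realistically carry.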

ESA and the University of Leicester continue to look at expanding the use of RTGs in the future. As of 2015, the focus for the thermocouples in the first flight design was on bismuth telluride TEGs. They are also looking into Stirling convertors, continuing the current drive to move away from the low conversion efficiency of thermoelectric systems.

The things they’re attempting take time: there are few specialists in these areas (the fundamental tasks aren’t difficult if you know what you’re doing, but it takes a while to learn the ins and outs of any sort of large-scale chemistry), and each step of the way requires a lot of research to verify that it is both safe and reliable without compromising efficiency.

I have reached out to Dr. Ambrosi at the University of Leicester for additional information about this test. If I hear back from him, I will add the information about this particular test to the page on this RTG system, which should release soon.

Conclusions: 241Am, Is It the RTG Fuel of the Future?

As with most things in astronuclear engineering, the choice of an RTG fuel is a complex question, and one with no simple answer. 238Pu remains the preferred long-duration RTG fuel for space missions in terms of specific power, but its expense, and the infrastructure required for its fabrication and handling, present a high cost barrier for new entrants. For Europe, this barrier to entry has been considered unacceptable, and it has kept them out of the small community of RTG-flying nations for their entire history.

241Am, on the other hand, is available to nations that reprocess spent nuclear fuel, such as the signatories to the Euratom treaty (as well as the UK, who are voluntarily withdrawing from the treaty as part of Brexit… don’t ask me why, it’s not required), where reprocessing of spent nuclear fuel is normal practice. Bureaucratic roadblocks similar to those that have hampered significant US production of 238Pu can be seen in European production of 241Am, but the existence of significant reprocessing capability makes it theoretically far more available. 241Am is also available commercially in the US, meaning that in at least one country the regulatory barriers to possession, and therefore the cost, are significantly lower than for 238Pu.

This choice, however, comes at the cost of roughly halving the specific power available to the RTG system, due to the lower specific power of the fuel, at least as the design is historically described. Optimization calculations by ESA and its partners, primarily the University of Leicester, show that in the 5-50 We range of electrical output the impact on mission mass is minimal, and for very low power applications (1-1.5 We) 241Am is actually superior. From the European perspective, the greater availability and lower acquisition cost of sufficiently pure 241Am, and the fact that its production integrates into existing industrial capabilities, outweigh the engineering advantages of 238Pu for the organizations involved. Even so, this is an exciting development for deep space exploration nerds, and one whose significance can’t be overstated.
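The fuel-level gap behind that system-level penalty can be sanity-checked from basic decay physics. A small sketch, using standard half-lives and alpha decay Q-values (my assumed nuclear data, not values from the sources above): the specific thermal power of the bare fuel differs by roughly 5x, while the system-level penalty is closer to 2x because the fuel is only one part of an RTG's mass.

```python
import math

# Specific thermal power of an alpha emitter from half-life and decay energy.
# Half-lives and Q-values below are standard nuclear data, used as assumptions.
N_A = 6.022e23          # Avogadro's number, atoms/mol
MEV_TO_J = 1.602e-13    # joules per MeV
YEAR_S = 3.156e7        # seconds per year

def specific_power_w_per_g(half_life_yr, q_mev, molar_mass_g):
    decay_const = math.log(2) / (half_life_yr * YEAR_S)   # decays per second per atom
    atoms_per_g = N_A / molar_mass_g
    return decay_const * atoms_per_g * q_mev * MEV_TO_J   # W per gram of isotope

am241 = specific_power_w_per_g(432.2, 5.64, 241)   # ~0.11 W/g
pu238 = specific_power_w_per_g(87.7, 5.59, 238)    # ~0.57 W/g
print(f"241Am: {am241:.3f} W/g, 238Pu: {pu238:.3f} W/g, ratio ~{pu238/am241:.1f}x")
```

241Am's longer half-life (432 vs 88 years) is exactly what makes it weaker per gram: fewer decays per second means less heat, but also a far slower power decline over a multi-decade mission.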

While this was a fascinating experiment, and I will be trying to find more information, its significance boils down to one thing: this is the first time, outside the US or Russia, that an RTG designed for spacecraft use has produced electrical power. It opens up new mission opportunities for ESA, which has been hampered in deep space exploration by the lack of a suitable RTG fuel, and offers hope for more missions, more science, and more discoveries in the future.


Isotope information for 241Am

Isotope information for 238Pu

Isotope information for 90Sr

Isotope information for 210Po

241Am ESA RTG Design

Development and Testing of Americium-241 Radioisotope Thermoelectric Generator: Concept Designs and Breadboard System; Ambrosi et al 2012

Americium-241 Radioisotope Thermoelectric Generator Development for Space Applications, Ambrosi et al 2013

Nuclear Power Sources for Space Applications – a key enabling technology (slideshow), Summerer et al, ESA 2012

Space Nuclear Power Systems: Update on Activities and Programmes in the UK, Ambrosi (University of Leicester) and Tinsley (National Nuclear Laboratory), 2015