Pebblebed NTRs: Solid Fuel, but Different

Hello, and welcome back to Beyond NERVA!

Today, we’re going to take a break from the closed cycle gas core nuclear thermal rocket (which I’ve been working on constantly since mid-January) to look at one of the most popular designs in modern NTR history: the pebblebed reactor!

Honestly, I should have covered this between the solid and liquid fueled NTRs, and there are even a couple of reactor types which MAY be able to be used for NTRs in between as well – the fluidized bed and slush fuel reactors – but with the lack of information on liquid fueled reactors online, I got a bit overzealous.

Beads to Explore the Solar System

Most of the solid fueled NTRs we’ve looked at have been either part of, or heavily influenced by, the Rover and NERVA programs in the US. These types of reactors, also called “prismatic fuel reactors,” use a solid block of fuel of some form, usually tileable, with holes drilled through each fuel element.

The other designs we’ve covered fall into one of two categories, either a bundled fuel element, such as the Russian RD-0410, or a folded flow disc design such as the Dumbo or Tricarbide Disc NTRs.

However, there’s another option which is far more popular for modern American high temperature gas cooled reactor designs: the pebblebed reactor. This is a clever design, which increases the surface area of the fuel by using many small, spherical fuel elements held in a (usually) unfueled structure. The coolant/propellant passes between these beads, picking up the heat as it passes between them.

This has a number of fundamental advantages over the prismatic style fuel elements:

  1. The surface area of the fuel is so much greater than with simple holes drilled in the prismatic fuel elements, increasing thermal transfer efficiency.
  2. Since all types of fuel swell when heated, the packing density of the fuel elements can be adjusted to allow for better thermal expansion behavior within the active region of the reactor.
  3. The fuel elements themselves are relatively loosely contained within a separate structure, allowing higher temperature containment materials to be used.
  4. The individual elements can be made smaller, allowing for a lower temperature gradient from the inside to the outside of each pebble, reducing the overall thermal stress on the fuel.
  5. In a folded flow design, it is possible to dispense with a physical structure along the inside of the annulus entirely if centrifugal force is applied to the fuel element structure (as we saw in the fluid fueled reactor designs), eliminating the need for as many super-high temperature materials in the highest temperature region of the reactor.
  6. Because each bead is individually clad, in the case of an accident during launch, even if the reactor core is breached and a fuel release into the environment occurs, the release of radiological components or other fuel materials into the environment is minimized.
  7. Because each bead is relatively small, it is less likely that they will sustain sufficient damage either during mechanical failure of the flight vehicle or impact with the ground that would breach the cladding.
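To put advantage 1 in rough numbers, here's a minimal sketch comparing the fuel-to-coolant surface area per unit core volume for a packed bed of spheres versus drilled channels in a prismatic block. The pebble diameter, channel diameter, packing fraction, and channel fraction are all illustrative values I've picked, not figures from any specific reactor:

```python
# Specific surface area (m^2 of fuel-coolant interface per m^3 of core):
# packed spheres vs. drilled prismatic channels. Illustrative numbers only.

def sphere_bed_specific_surface(d, packing=0.60):
    """Packed bed of spheres: area/volume = 6 * packing / d."""
    return 6.0 * packing / d

def prismatic_specific_surface(d_channel, channel_fraction=0.30):
    """Drilled channels: area/volume = 4 * channel_fraction / d_channel."""
    return 4.0 * channel_fraction / d_channel

pebbles = sphere_bed_specific_surface(500e-6)   # 500 micrometer pebbles
prism = prismatic_specific_surface(2.5e-3)      # 2.5 mm channels
print(f"pebble bed: {pebbles:.0f} m^2/m^3")
print(f"prismatic:  {prism:.0f} m^2/m^3")
print(f"ratio: {pebbles / prism:.0f}x")
```

Even with generous channel packing, the spheres win by an order of magnitude at these (assumed) dimensions, which is the whole point of the architecture.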

However, there is a complication with this design type as well, since there are many (usually hundreds, sometimes thousands) of individual fuel elements:

  1. Large numbers of fuel beads mean large numbers of fuel beads to manufacture and perform quality control checks on.
  2. Each bead will need to be individually clad, sometimes with multiple barriers for fission product release, hydrogen corrosion, and the like.
  3. While each fuel bead will be individually clad, so that the loss of one or all of the beads will not significantly impact the environment from a radiological perspective in the case of an accident, there is potential for significant geographic dispersal of the fuel in the event of a failure-to-orbit or other accident.

There are a number of different possible flow paths through the fuel elements, but the two most common are either an axial flow, where the propellant passes through a tubular structure packed with the fuel elements, or a folded flow design, where the fuel is in a porous annular structure, with the coolant (usually) passing from the outside of the annulus, through the fuel, and the now-heated coolant exiting through the central void of the annulus. We’ll call these direct flow and folded flow pebblebed fuel elements.

In addition, there are many different possible fuel types, which regulars of this blog will be familiar with by now: oxides, carbides, nitrides, and CERMET are all possible in a pebblebed design, and if differential fissile fuel loading is needed, or gradients in fuel composition (such as using tungsten CERMET in higher temperature portions of the reactor, with beryllium or molybdenum CERMET in lower temperature sections), this can be achieved using individual, internally homogeneous fuel types in the beads, which can be loaded into the fuel support structure at the appropriate time to create the desired gradient.

Just like in “regular” fuel elements, these pebbles need to be clad in a protective coating. There have been many proposals over the years, obviously depending on what type of fissile fuel matrix the fuel uses to ensure thermal expansion and chemical compatibility with the fuel and coolant. Often, multiple layers of different materials are used to ensure structural and chemical integrity of the fuel pellets. Perhaps the best known example of this today is the TRISO fuel element, used in the US Advanced Gas Reactor fuel development program. The TRI-Structural ISOtropic fuel element uses either oxide or carbide fuel in the center, followed by a porous carbon layer, a pyrolytic carbon layer (sort of like graphite, but with some covalent bonds between the carbon sheets), followed by a silicon carbide outer shell for mechanical and fission product retention. Some variations include a burnable poison for reactivity control (the QUADRISO at Argonne), or use different outer layer materials for chemical protection. Several types have been suggested for NTR designs, and we’ll see more of them later.

The (sort of) final significant variable is the size of the pebble. As the pebbles go down in size, the available surface area of the fuel-to-coolant interface increases, but also the amount of available space between the pebbles decreases and the path that the coolant flows through becomes more resistant to higher coolant flow rates. Depending on the operating temperature and pressure, the thermal gradient acceptable in the fuel, the amount of decay heat that you want to have to deal with on shutdown (the bigger the fuel pebble, the more time it will take to cool down), fissile fuel density, clad thickness requirements, and other variables, a final size for the fuel pebbles can be calculated, and will vary to a certain degree between different reactor designs.
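The competing pressures in that sizing exercise can be sketched with the standard Ergun equation for pressure drop through a packed bed: surface area per unit volume scales as 1/d, while the viscous pressure-drop term grows as 1/d². The void fraction, gas properties, and flow velocity below are rough hydrogen-like placeholders, not design values:

```python
# Pebble-size trade-off sketch: smaller pebbles give more surface area but
# a much steeper pressure drop. Ergun equation for a packed bed; all fluid
# properties are rough placeholders, not design values.

def ergun_dp_per_length(u, d, eps=0.40, mu=1.0e-5, rho=0.5):
    """Ergun equation: pressure gradient (Pa/m) through a packed bed.
    u = superficial velocity (m/s), d = particle diameter (m),
    eps = void fraction, mu = viscosity (Pa*s), rho = gas density (kg/m^3)."""
    viscous = 150.0 * mu * (1 - eps)**2 / (eps**3 * d**2) * u
    inertial = 1.75 * rho * (1 - eps) / (eps**3 * d) * u**2
    return viscous + inertial

for d in (100e-6, 250e-6, 500e-6):
    area = 6 * (1 - 0.40) / d                  # interface area per bed volume
    dp = ergun_dp_per_length(u=1.0, d=d)
    print(f"d={d*1e6:4.0f} um  area={area:8.0f} m^2/m^3  dp/L={dp:12.0f} Pa/m")
```

Shrinking the pebbles from 500 to 100 micrometers quintuples the surface area but raises the viscous pressure drop twenty-five-fold, which is exactly the tension the paragraph above describes.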

Not Just for NTRs: The Electricity Generation Potential of Pebblebed Reactors

Obviously, the majority of the designs for pebblebed reactors are not meant to ever fly in space, they’re mostly meant to operate as high temperature gas cooled reactors on Earth. This type of architecture has been proposed for astronuclear designs as well, although that isn’t the focus of this post.

Furthermore, the pebblebed design lends itself to other cooling methods, such as molten salt, liquid metal, and other heat-carrying fluids, which like the gas would flow through the fuel pellets, pick up the heat produced by the fissioning fuel, and carry it into a power conversion system of whatever design the reactor has integrated into its systems.

Finally, while it’s rare, pebblebed designs were popular for a while with radioisotope power systems. There are a number of reasons for this beyond being able to run a liquid coolant through the fuel (which was done on one occasion that I can think of, and which we’ll cover in a future post): in an alpha-emitting radioisotope, such as 238Pu, the fuel will generate helium gas over time – the alpha particles will slow, stop, and become doubly ionized helium nuclei, which will then strip electrons off whatever materials are around and become normal 4He. This gas needs SOMEWHERE to go, which is why, just like with a fissile fuel structure, there are gas management mechanisms used in radioisotope power source fuel assemblies, such as areas of vacuum, pressure relief valves, and the like. In some types of RTG, such as the SNAP-27 RTG used by Apollo, as well as the Multi-Hundred Watt RTG used by Voyager, the fuel was made into spheres, with the gaps between the spheres (normally used to pass coolant through) serving as the gas expansion volume.
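The helium buildup rate is easy to estimate from the decay constant alone, since each alpha eventually becomes one helium atom. A quick back-of-envelope for a kilogram of pure 238Pu (87.7 year half-life):

```python
import math

# Rough rate of helium buildup in 238Pu fuel: every alpha decay eventually
# becomes one atom of 4He. Back-of-envelope for 1 kg of pure 238Pu.

N_A = 6.022e23
half_life_yr = 87.7
lam = math.log(2) / half_life_yr            # decay constant, 1/yr

atoms_per_kg = 1000.0 / 238.0 * N_A         # 238Pu atoms in 1 kg
alphas_per_yr = atoms_per_kg * lam          # decays (He atoms) in the first year
mol_he_per_yr = alphas_per_yr / N_A
litres_stp = mol_he_per_yr * 22.4           # ideal-gas volume at STP

print(f"~{mol_he_per_yr:.3f} mol He per kg-yr (~{litres_stp:.2f} L at STP)")
```

Three-quarters of a litre of gas (at standard conditions) per kilogram per year is why those expansion volumes and relief valves are not optional.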

We’ll discuss these ideas more in the future, but I figured it was important to point out here. Let’s get back to the NTRs, and the first (and only major) NTR program to focus on the pebblebed concept: the Project Timberwind and the Space Nuclear Propulsion Program in the 1980s and early 1990s.

The Beginnings of Pebblebed NTRs

The first proposals for a gas cooled pebblebed reactor were from 1944/45, although they were never pursued beyond the concept stage, and a proposal for the “Space Vehicle Propulsion Reactor” was made by Levoy and Newgard at Thiokol in 1960, again with no further development. If you can get that paper, I’d love to read it; here’s all I’ve got: “Aero/Space Engineering 19, no. 4, pgs 54-58, April 1960,” “AAE Journal, 68, no. 6, pgs. 46-50, June 1960,” and “Engineering 189, pg 755, June 3, 1960.” Sounds like they pushed hard, and for good reason, but at the time a pebblebed reactor was a radical concept even for a terrestrial reactor, and a prismatic fueled reactor, something far more familiar to nuclear engineers, seemed a far simpler challenge.

Sadly, while this design may have informed the designs of its contemporary reactors, it seems this proposal was never pursued.

Rotating Fluidized Bed Reactor (“Hatch” Reactor) and the Groundwork for Timberwind

Another proposal was made at the same time at Brookhaven National Laboratory, by L.P. Hatch, W.H. Regan, and a name that will continue to come up for the rest of this series, John R. Powell (sorry, I can’t find the given names of the other two). This design relied on very small (100-500 micrometer) fuel particles, held in a perforated drum that contained the fuel while allowing propellant to be injected through it; the drum was spun at a high rate to provide centrifugal force to the particles and prevent them from escaping.

Now, fluidized beds need a bit of explanation, which I figured was best to put in here since this is not a generalized property of pebblebed reactors. In this reactor (and some others) the pebbles are quite small, and the coolant flow can be quite high. This means that it’s possible – and sometimes desirable – for the pebbles to move through the active zone of the reactor! This type of mobile fuel is called a “fluidized bed” reactor, and comes in several variants, including pebble (solid spheres), slurry (solid particulate suspended in a liquid), and colloid (solid particulate suspended in a gas). The best way to describe the phenomenon is with what is called the point of minimum fluidization: when the drag forces on the mass of the solid objects from the fluid flow balance the weight of the bed (keep in mind that lift is a specialized form of drag). There are a number of reasons to do this – in fact, many chemical reactions using a solid and a fluid component use fluidization to ensure maximum mixing of the components. In the case of an NTR, the concern is more to do with achieving as close to thermal equilibrium between the solid fuel and the gaseous propellant as possible, while minimizing the pressure drop between the cold propellant inlet and the hot propellant outlet. For an NTR, the “weight” is applied through centrifugal force on the fuel. This is a familiar concept to those that read my liquid fueled NTR series, but it actually began with the fluidized bed concept.
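For a concrete (if entirely illustrative) sense of the minimum fluidization point, here's a sketch using the Wen-Yu correlation, a standard empirical fit relating the Reynolds number at minimum fluidization to the Archimedes number, with the centrifugal acceleration standing in for gravity. The particle and gas properties are rough placeholders, not RBR design numbers:

```python
import math

# Minimum-fluidization sketch: the bed fluidizes when the gas drag balances
# the bed's (here centrifugal) weight. Wen-Yu empirical correlation; all
# property values are rough placeholders, not RBR design numbers.

def min_fluidization_velocity(d, rho_s, rho_g, mu, g_eff):
    """Wen & Yu: Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7.
    Returns the superficial gas velocity (m/s) at minimum fluidization."""
    Ar = rho_g * (rho_s - rho_g) * g_eff * d**3 / mu**2   # Archimedes number
    Re_mf = math.sqrt(33.7**2 + 0.0408 * Ar) - 33.7
    return Re_mf * mu / (rho_g * d)

# 500 um carbide-like particles, spun so the bed sees ~300 g:
u_mf = min_fluidization_velocity(d=500e-6, rho_s=6500.0, rho_g=0.5,
                                 mu=1.0e-5, g_eff=300 * 9.81)
print(f"u_mf ~ {u_mf:.1f} m/s")
```

Note how the centrifugal setting enters directly: spin the drum faster and the gas velocity needed to fluidize the bed rises with it, which is the designer's knob for containing the fuel at high flow rates.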

This is calculated using two different relations between the same variables: the Reynolds number (Re), which determines how turbulent the fluid flow is, and the friction coefficient (CD, or coefficient of drag, which determines how much force acts on the fuel particles based on fluid interactions with the particles), which can be found plotted below. The plotted lines represent either the Reynolds number or the void fraction ε, which represents the amount of gas present in the volume defined by the presence of fuel particles.

Hendrie 1970

If you don’t follow the technical details of the relationships depicted, that’s more than OK! Basically, the y axis is proportional to the gas turbulence, while the x axis is proportional to the particle diameter, so you can see that for relatively small increases in particle size you can get larger increases in propellant flow rates.

The next proposal for a pebblebed reactor grew directly out of the Hatch reactor: the Rotating Fluidized Bed Reactor for Space Nuclear Propulsion (RBR). From the documentation I’ve been able to find, work continued at a very low level at BNL from the time of the original proposal until 1973, but the only reports I’ve been able to find are from 1971-73 under the RBR name. A rotating fuel structure, with small, 100-500 micrometer spherical particles of uranium-zirconium carbide fuel (the ZrC forming the outer clad, with a maximum U content of 10% to maximize the thermal limits of the fuel particles), was surrounded by a reflector of either metallic beryllium or BeO (which was preferred as a moderator, but its increased density also increased both reactor mass and manufacturing requirements). Four drums in the reflector would control the reactivity of the engine, and an electric motor would be attached to a porous “squirrel cage” frit, which would rotate to contain the fuel.

Much discussion was had as to the form of uranium used, be it 235U or 233U. The 235U reactor had a cavity length of 25 in (63.5 cm), an inner diameter of 25 in (63.5 cm), and a fuel bed depth when fluidized of 4 in (10.2 cm), with a critical mass of U-ZrC being achieved at 343.5 lbs (155.8 kg) with 9.5% U content. The 233U reactor was smaller, at 23 in (56 cm) cavity length, 20 in (51 cm) bed inner diameter, 3 in (7.62 cm) deep fuel bed with a higher (70%) void fraction, and only 105.6 lbs (47.9 kg) of U-ZrC fuel at a lower (and therefore more temperature-tolerant) 7.5% U loading.

233U was the much preferred fuel in this reactor, with two options being available to the designers: either the decreased fuel loading could be used to form the smaller, higher thrust-to-weight ratio engine described above, or the reactor could remain at the dimensions of the 235U-fueled option, but the temperature could be increased to improve the specific impulse of the engine.

There was also a trade-off between the size of the fuel particles and the thermal efficiency of the reactor:

  • Smaller particles advantages
    • Higher surface area, and therefore better thermal transfer capabilities,
    • Smaller radius reduces thermal stresses on fuel
  • Smaller particles disadvantages
    • Fluidized particle bed fuel loss would be a more immediate concern
    • More sensitive to fluid dynamic behavior in the bed
    • Bubbles could more easily form in fuel
    • Higher centrifugal force required for fuel containment
  • Larger particle advantages
    • Ease of manufacture
    • Lower centrifugal force requirements for a given propellant flow rate
  • Larger particle disadvantages
    • Higher thermal gradient and stresses in fuel pellets
    • Less surface area, so lower thermal transfer efficiency

It would require testing to determine the best fuel particle size, which could largely be done through cold flow testing.

These studies looked at cold flow testing in depth. While this is something that I’ve usually skipped over in my reporting on NTR development, it’s a crucial type of testing in any gas cooled reactor, and even more so in a fluidized bed NTR, so let’s take a look at what it’s like in a pebblebed reactor: the equipment, the data collection, and how the data modified the reactor design over time.

Cold flow testing is usually the predecessor to electrically heated flow testing in an NTR. These tests determine a number of things, including areas within the reactor that may end up with stagnant propellant (not a good thing), undesired turbulence, and other negative consequences to the flow of gas through the reactor. They are preliminary tests, since as the propellant heats up while going through the reactor, a couple major things will change: first, the density of the gas will decrease and second, as the density changes the Reynolds number (a measure of self-interaction, viscosity, and turbulent vs laminar flow behavior) will change.
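The Reynolds number effect is easy to see at a fixed mass flux G = ρu, where Re = GD/μ depends only on viscosity, and viscosity rises as the gas heats. The power-law viscosity fit below is a rough approximation I'm assuming for hydrogen, not tabulated data:

```python
# Why cold-flow numbers are only preliminary: at fixed mass flux G = rho*u,
# Re = G*D/mu depends only on viscosity, and gas viscosity rises with
# temperature. The mu ~ T^0.7 power law is a rough approximation for H2.

def reynolds(G, D, mu):
    """Re for a channel: mass flux G (kg/m^2/s), diameter D (m)."""
    return G * D / mu

mu_cold = 8.9e-6                       # ~H2 viscosity near 300 K, Pa*s
mu_hot = mu_cold * (2400 / 300)**0.7   # rough extrapolation to 2400 K

re_cold = reynolds(G=10.0, D=0.01, mu=mu_cold)
re_hot = reynolds(G=10.0, D=0.01, mu=mu_hot)
print(f"Re cold: {re_cold:.0f}, Re hot: {re_hot:.0f}")
```

Even with the same propellant mass flowing through the same geometry, the hot-flow Reynolds number is several times lower than the cold-flow one, so turbulence behavior measured cold has to be corrected before it says anything about the operating reactor.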

In this case, the cold flow tests were especially useful, since one of the biggest considerations in this reactor type is how the gas and fuel interact.

The first consideration that needed to be examined is the pressure drop across the fuel bed – the highest pressure point in the system is always the turbopump, and the pressure will decrease from that point throughout the system due to friction with the pipes carrying propellant, heating effects, and a host of other inefficiencies. One of the biggest questions initially in this design was how much pressure would be lost from the frit (the outer containment structure and propellant injection system into the fuel) to the central void in the body of the fuel, where it exits the nozzle. Happily, this pressure drop is minimal: according to initial testing in the early 1960s (more on that below), the pressure drop was equal to the weight of the fuel bed.
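That result is just a force balance: at full fluidization the drag supporting the bed equals its (centrifugal) weight, so Δp × (frit area) = M × g_eff. A quick sketch with made-up numbers, not the BNL test values:

```python
import math

# The reported result that the bed pressure drop equals the bed weight is
# a force balance at full fluidization: dp * A = M * g_eff. The bed mass,
# frit area, and spin rate below are illustrative, not from the BNL tests.

def bed_pressure_drop(mass_kg, frit_area_m2, g_eff):
    """Pressure drop (Pa) across a fully fluidized bed."""
    return mass_kg * g_eff / frit_area_m2

rpm = 1500
r = 0.127                        # 10 in diameter frit -> 0.127 m radius
omega = rpm * 2 * math.pi / 60   # angular velocity, rad/s
g_eff = omega**2 * r             # centrifugal acceleration at the frit

dp = bed_pressure_drop(mass_kg=3.0, frit_area_m2=0.10, g_eff=g_eff)
print(f"g_eff ~ {g_eff / 9.81:.0f} g, dp ~ {dp / 1000:.0f} kPa")
```

The useful takeaway is that the bed itself costs the turbopump only its own weight in pressure, which for a small, fast-spinning bed is a modest fraction of typical chamber pressures.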

The next consideration was the range between fluidizing the fuel and losing the fuel through literally blowing it out the nozzle – otherwise known as entrainment, a problem we looked at extensively on a per-molecule basis in the liquid fueled NTR posts (since that was the major problem with all those designs). Initial calculations and some basic experiments were able to map the propellant flow rate and centrifugal force required to both get the benefit of a fluidized bed and prevent fuel loss.
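The upper end of that map is set by the particles' terminal velocity in the centrifugal field: once the gas moves faster than a particle can "settle" against it, the particle is entrained and lost out the nozzle. A sketch assuming Newton-regime drag (Cd ≈ 0.44) and placeholder properties:

```python
import math

# Entrainment check: a particle is blown out once the local gas velocity
# exceeds its terminal velocity in the centrifugal field. Newton-regime
# drag (Cd ~ 0.44) is assumed; all properties are placeholders.

def terminal_velocity(d, rho_s, rho_g, g_eff, cd=0.44):
    """Terminal settling velocity (m/s) for a sphere, Newton drag regime."""
    return math.sqrt(4.0 * (rho_s - rho_g) * g_eff * d / (3.0 * cd * rho_g))

g_eff = 300 * 9.81   # centrifugal acceleration standing in for gravity
u_t = terminal_velocity(d=500e-6, rho_s=6500.0, rho_g=0.5, g_eff=g_eff)
print(f"terminal velocity ~ {u_t:.0f} m/s")
# safe operating window: minimum fluidization velocity < gas velocity < u_t
```

The gap between the minimum fluidization velocity and this terminal velocity is the operating window the BNL experiments were mapping.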

Rotating Fluidized Bed Reactor testbed test showing bubble formation

Another concern is the formation of bubbles in the fuel body. As we covered in the bubbler LNTR post (which you can find here), bubbles are a problem in any fuel type, but in a fluid fueled reactor with coolant passing through it there are special challenges. In this case, the main method of transferring heat from the fuel to the propellant is convection (i.e. contact between the fuel and the propellant causing vortices in the gas which distribute the heat), so an area that doesn’t have any (or minimal) fuel particles in it will not get heated as thoroughly. That’s a headache not only because the overall propellant temperature drops (proportional to the size of the bubbles), but also because it changes the power distribution in the reactor (the bubbles are fission blank spots).

Finally, the initial experiment set looked at the particle-to-fluid thermal transfer coefficients. These tests were far from ideal, using a 1 g system rather than the much higher planned centrifugal forces, but they did give some initial numbers.

The first round of tests was done at Brookhaven National Laboratory (BNL) from 1962 to 1966, using a relatively simple test facility. A small, 10” (25.4 cm) length by 1” (2.54 cm) diameter centrifuge was installed, with gas pressure provided by a pressurized liquefied air system. 138 to 3450 grams of glass particles were loaded into the centrifuge, and various rotational velocities and gas pressures were used to test the basic behavior of the particles under both centrifugal force and gas pressure. While some bubbles were observed, the fuel beds remained stable and no fuel particles were lost during testing – a promising beginning.

These tests provided not just initial thermal transfer estimates, pressure drop calculations, and fuel bed behavioral information, but also informed the design of a new, larger test rig, this one 10 in by 10 in (25.4 by 25.4 cm), which was begun in 1966. This system would not only have a larger centrifuge, but would also use liquid nitrogen rather than liquefied air, be able to test different fuel particle simulants rather than just relatively lightweight glass, and provide much more detailed data. Sadly, the program ran out of funding later that year, and the partially completed test rig was mothballed.

Rotating Fluidized Bed Reactor (RBR): New Life for the Hatch Reactor

It would take until 1970, when the Space Nuclear Systems office of the Atomic Energy Commission and NASA provided additional funding to complete the test stand and conduct a series of experiments on particle behavior, reactor dynamics and optimization, and other analytical studies of a potential advanced pebblebed NTR.

The First Year: June 1970-June 1971

After completing the test stand, the team at BNL began a series of tests with this larger, more capable equipment in Building 835. The first, most obvious difference is the diameter of the centrifuge, which was upgraded from 1 inch to 10 inches (25.4 cm), allowing for a more prototypical fuel bed depth. This was made out of perforated aluminum, held in a stainless steel pressure housing for feeding the pressurized gas through the fuel bed. In addition, the gas system was changed from the pressurized air system to one designed to operate on nitrogen, which was stored in liquid form in trailers outside the building for ease of refilling (and safety), then pre-vaporized and held in two other, high-pressure trailers.

Photographs were used to record fluidization behavior, taken viewing the bottom of the bed from underneath the apparatus. While initially photos were only able to be taken 5 seconds apart, later upgrades would improve this over the course of the program.

The other major piece of instrumentation surrounded the pressure and flow rate of the nitrogen gas throughout the system. The gas was introduced at a known pressure through two inlets into the primary steel body of the test stand, with measurements of upstream pressure, cylindrical cavity pressure outside the frit, and finally a pitot tube to measure pressure inside the central void of the centrifuge.

Three main areas of pressure drop were of interest: due to the perforated frit itself, the passage of the gas through the fuel bed, and finally from the surface of the bed and into the central void of the centrifuge, all of which needed to be measured accurately, requiring calibration of not only the sensors but also known losses unique to the test stand itself.

The tests themselves were undertaken with a range of glass particle sizes from 100 to 500 micrometers in diameter, similar to the earlier tests, as well as 500 micrometer copper particles to more closely replicate the density of the U-ZrC fuel. Rotation rates of between 1,000 and 2,000 rpm, and gas flow rates from 1,340-1,800 scf/m (38-51 m^3/min) were used with the glass beads, and from 700-1,500 rpm with the copper particles (the lower rotation rate was due to gas pressure feed limitations preventing the bed from becoming fully fluidized with the more massive particles).
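As a quick sanity check on the reported flow rates, the standard-cubic-feet figures do convert to the quoted metric range:

```python
# Quick unit check on the reported flow rates: standard cubic feet per
# minute to cubic metres per minute (1 ft^3 = 0.0283168 m^3).

FT3_TO_M3 = 0.0283168

for scfm in (1340, 1800):
    print(f"{scfm} scf/min = {scfm * FT3_TO_M3:.1f} m^3/min")
```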

Finally, there were a series of physics and mechanical engineering design calculations that were carried out to continue to develop the nuclear engineering, mechanical design, and system optimization of the final RBR.

The results from the initial testing were promising: much of the testing was focused on getting the new test stand commissioned and calibrated, with a focus on figuring out how to both use the rig as it was constructed as well as which parts (such as the photography setup) could be improved in the next fiscal year of testing. However, particle dynamics in the fluidized bed were comfortably within stable, expected behavior, and while there were interesting findings as to the variation in pressure drop along the axis of the central void, this was something that could be worked with.

Based on the calculations performed, as well as the experiments carried out in the first year of the program, a range of engines were determined for both 233U and 235U variants:

Work Continues: 1971-1972

This led directly into the 1971-72 series of experiments and calculations. Now that the test stand had been mostly completed (although modifications would continue), and the behavior of the test stand was now well-understood, more focused experimentation could continue, and the calculations of the physics and engineering considerations in the reactor and engine system could be advanced on a more firm footing.

One major change in this year’s design choices was the shift toward a low-thrust, high-isp system, in part due to greater interest at NASA and the AEC in a smaller NTR than the original design envelope. While analyzing the proposed engine sizes above, though, it was discovered that the smallest two reactors were simply not practical, meaning that the smallest workable design was at over the 1 GW power level.

Another thing that was emphasized during this period on the optimization side of the program was the mass of the reflector. Since the low thrust option was now the focus of the design, any increase in the mass of the reactor system has a larger impact on the thrust-to-weight ratio, but reducing the reflector thickness also increases the neutron leakage rate. To limit this leakage, a narrower nozzle throat is preferred, but this also increases the thermal loading across the throat itself, meaning that additional cooling, and probably more mass, is needed – especially in a high-specific-impulse (aka high temperature) system. A narrower throat also requires higher chamber pressure to maintain the desired thrust level (a narrower throat with the same mass flow throughput means that the pressure in the central void has to be higher).
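The throat/chamber-pressure coupling follows from the choked-flow relation ṁ = p_c·A_t/c*. A sketch with illustrative numbers (the c* value is a rough figure I'm assuming for hot hydrogen, not from the RBR documentation):

```python
import math

# Choked-flow relation mdot = p_c * A_t / c*: at fixed mass flow, chamber
# pressure scales as 1/A_t, i.e. 1/d^2 in throat diameter. The mass flow
# and c* ~ 8000 m/s are illustrative assumptions, not RBR design values.

def chamber_pressure(mdot, throat_diameter, c_star=8000.0):
    """Chamber pressure (Pa) needed to push mdot (kg/s) through a choked throat."""
    a_t = math.pi * (throat_diameter / 2.0)**2
    return mdot * c_star / a_t

for d_t in (0.10, 0.07):   # narrowing the throat 30%...
    print(f"d_t = {d_t} m -> p_c = {chamber_pressure(5.0, d_t) / 1e6:.1f} MPa")
# ...roughly doubles the required chamber pressure (1/d^2 scaling)
```

That inverse-square scaling is why a modest throat reduction for neutronics reasons cascades into heavier turbopumps and tougher cooling everywhere else.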

These changes required a redesign of the reactor itself, with a new critical configuration:

Hendrie 1972

One major change is how fluidized the bed actually is during operation. In order to get full fluidization, there needs to be enough inward (“upward” in terms of force vectors) velocity at the inner surface of the fuel body to lift the fuel particles without losing them out the nozzle. During calculations in both the first and second years, two major subsystems contributed hugely to the weight and were very dependent on both the rotational speed and the pellet size/mass: the weight of the frit and motor system, which holds the fuel particles, and the weight of the nozzle, which not only forms the outlet-end containment structure for the fuel but also (through the challenges of rocket motor dynamics) is linked to the chamber pressure of the reactor – oh, and the narrower the nozzle, the less surface area is available to reject the heat from the propellant, so the harder it is to keep cool enough that it doesn’t melt.

Now, fluidization isn’t a binary system: a pebblebed reactor is able to be settled (no fluidization), partially fluidized (usually expressed as a percentage of the pebblebed being fluidized), and fully fluidized to varying degrees (usually expressed as a percentage of the volume occupied by the pebbles being composed of the fluid). So there’s a huge range, from fully settled to >95% fluid in a fully fluidized bed.

The designers of the RBR weren’t going for excess fluidization: at some point, the designer faces diminishing returns on the complications required for increased fluid flow to maintain that level of particulate suspension (I’m sure it’s the same, with different criteria, in the chemical industry, where most fluidized beds are actually used). These complications come both from needing more powerful turbopumps for the hydrogen and from the loss of thermalization of that hydrogen, because there’s simply too much propellant to be heated fully – not to mention fuel loss from the particulate fuel being blown out of the nozzle. So the calculations for the bed dynamics assumed minimal full fluidization (i.e. when all the pebbles are moving in the reactor) as the maximum flow rate – somewhere around 70% gas in the fuel volume (that number was never specifically defined that I could find in the source documentation; if it was, please let me know) – but this is dependent on both the pressure drop in the reactor (which is related to the mass of the particle bed) and the gas flow.

Ludewig 1974

However, the designers at this point decided that full fluidization wasn’t actually necessary – and in fact was detrimental – to this particular NTR design. Because of the dynamics of the design, the first particles to be fluidized were on the inner surface of the fuel bed, and as the fluidization percentage increased, the pebbles further toward the outer circumference became fluidized. Because the temperature difference between the fuel and the propellant is greater as the propellant is being injected through the frit and into the fuel body, more heat is carried away by the propellant per unit mass, and as the propellant warms up, thermal transfer becomes less efficient (the temperature difference between two different objects is one of the major variables in how much energy is transferred for a given surface area), and fluidization increases that efficiency between a solid and a fluid.
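This diminishing-returns behavior falls straight out of Newton's law of cooling: the heat picked up per unit length is proportional to the remaining fuel-propellant temperature difference. A toy marching calculation (all coefficients invented for illustration):

```python
# Sketch of why the cold gas entering at the frit does most of the work:
# marching propellant through a constant-temperature bed, the heat picked
# up per step is proportional to (T_fuel - T_gas), so it shrinks as the
# gas warms. All temperatures and coefficients are invented for illustration.

def march_gas_temp(t_fuel, t_in, ntu_per_step, steps):
    """Return gas temperature after each step through the bed."""
    temps, t = [], t_in
    for _ in range(steps):
        t += (t_fuel - t) * ntu_per_step   # exponential approach to T_fuel
        temps.append(t)
    return temps

temps = march_gas_temp(t_fuel=2600.0, t_in=100.0, ntu_per_step=0.5, steps=8)
rises = [b - a for a, b in zip([100.0] + temps, temps)]
print("per-step temperature rise:", [round(r) for r in rises])
```

The first layers of the bed heat the gas far more than the last ones, which is exactly why fluidizing the outer circumference (where the temperature difference is largest) buys the most and fluidizing the whole bed buys the least.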

Because of this, the engineers re-thought what “minimal fluidization” actually meant. If the bed could be fluidized enough to maximize the benefit of that dynamic, while at a minimum level of fluidization to minimize the volume the pebblebed actually took up in the reactor, there would be a few key benefits:

  1. The fueled volume of the reactor could be smaller, meaning that the nozzle could be wider, so they could have lower chamber pressure and also more surface area for active cooling of the nozzle
  2. The amount of propellant flow could be lower, meaning that turbopump assemblies could be smaller and lighter weight
  3. The frit could be made less robustly, saving on weight and simplifying the challenges of the bearings for the frit assembly
  4. The nozzle, frit, and motor/drive assembly for the frit are all net neutron poisons in the RBR, meaning that minimizing any of these structures’ overall mass improves the neutron economy in the reactor, leading to either a lower mass reactor or a lower U mass fraction in the fuel (as we discussed in the 233U vs. 235U design trade-off)

After going through the various options, the designers decided to go with a partially fluidized bed. At this point in the design evolution, they decided on having about 50% of the bed by mass being fluidized, with the rest being settled (there’s a transition point in the fuel body where partial fluidization is occurring, and they discuss the challenges of modeling that portion in terms of the dynamics of the system briefly). This maximizes the benefit at the circumference, where the thermal difference (and therefore the thermal exchange between the fuel and the propellant) is most efficient, while also thermalizing the propellant as much as possible as the temperature difference decreases from the propellant becoming increasingly hotter. They still managed to reach an impressive 2400 K propellant cavity temperature with this reactor, which makes it one of the hottest (and therefore highest isp) solid core NTR designs proposed at that time.

This has various implications for the reactor, including the density of the fissile component of the fuel (as well as the other solid components that make up the pebbles), the void fraction of the reactor (what part of the reactor is made up of something other than fuel, in this particular instance hydrogen within the fuel), and other components, requiring a reworking of the nuclear modeling for the reactor.

An interesting thing to me in the Annual Progress Report (linked below) is the description of how this new critical configuration was modeled; while this approach is reasonably common knowledge among nuclear engineers from the days before computational modeling (and even to the present day), I'd never seen it explained in the literature before.

Basically, they made a number of extremely simplified (in both fidelity and dimensionality) one-dimensional models at various points in the reactor, assuming that each could be rotated around the axis at that elevation to produce something like an MRI slice of the nuclear behavior there. They then moved to another location where conditions differed enough that the dynamics would change (say, where the frit turns in toward the middle of the reactor to hold the fuel, where the nozzle starts, or the center of the fuel compared to its edge) and repeated the same sort of one-dimensional model; they ended up doing this 18 times. Then, somewhat like an MRI in reverse, they took these models, called "few-group" models, and combined them into a larger group – called a "macro-group" – for calculations able to handle the interactions between the individual few-group simulations, building up a two-dimensional model of the reactor's nuclear structure and determining the critical configuration of the reactor. They added a few other ways of subdividing the reactor for modeling – for instance, the neutron spectrum calculations were split into fast and thermal groups – but this is the general shape of how nuclear modeling is done.
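
To make the slice-and-stitch idea above concrete, here's a toy sketch (not the Brookhaven code, and with all numbers invented) of building a 2D axisymmetric flux map from a stack of cheap 1D radial solutions at 18 axial stations:

```python
import numpy as np

# Toy illustration of the "few-group -> macro-group" approach: solve a
# cheap 1D problem at several axial stations, then stitch the results
# into an approximate 2D (z, r) map of the reactor. The cosine shapes
# and station count are illustrative stand-ins for real transport solves.

def radial_flux_profile(n_r, peak):
    """Crude 1D radial 'few-group' result: cosine flux, zero at the edge."""
    r = np.linspace(0.0, 1.0, n_r)
    return peak * np.cos(np.pi * r / 2.0)

n_r, n_z = 50, 18                      # 18 axial stations, as in the report
z = np.linspace(-1.0, 1.0, n_z)        # normalized axial position
axial_peak = np.cos(np.pi * z / 2.2)   # invented chopped-cosine axial shape

# "Macro-group" assembly: stack the independent 1D solutions into a 2D map
flux_2d = np.array([radial_flux_profile(n_r, p) for p in axial_peak])

print(flux_2d.shape)   # (axial stations, radial points)
```

A real macro-group calculation would also couple the slices through leakage terms rather than treating them as independent, which is exactly the interaction handling the report describes.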

Ok, let’s get back to the RBR…

Experimental testing using the rotating pebblebed simulator continued through this fiscal year, with some modifications. A new, seamless frit structure was procured to eliminate some experimental uncertainty, the pressure-measuring equipment was used to characterize more of the pressure drop across the system, and a persistent challenge for the experimental team was finally resolved: 100 micrometer copper spheres regular enough in shape to provide a useful analogue to the UC-ZrC fuel (Cu specific gravity 8.9, UC-ZrC specific gravity ~6.5) were at last procured.

Additionally, while thermal transfer experiments had been done with the 1-gee small test apparatus which preceded the larger centrifugal setup (with variable gee forces available), the changes were too great to allow accurate predictions of thermal transfer behavior. Therefore, thermal transfer experiments began to be planned for the new test rig – another expansion of the capabilities of the new system, which had been in rigorous use since its completion and calibration testing the previous year. While the experiments weren't conducted that year, setting up an experimental program requires careful analysis of what the test rig is capable of, and of how good data accuracy can be achieved given the experimental limitations of the design.

The major achievement of the year's experimentation was a refined relationship between particle size, centrifugal force, and the pressure drop of the propellant from the turbopump to the central cavity – most especially from the frit inlet through the fuel body to the inner cavity – across a wide range of particle sizes, flow rates, and bed fluidization levels, which would be key as the design of the RBR evolved.

The New NTR Design: Mid-Thrust, Small RBR

So, given the priorities at both the AEC and NASA, it was decided to focus primarily on a given thrust and optimize the reactor's thrust-to-weight ratio around that thrust level, in part because the outlet temperature of the reactor – and therefore the specific impulse – was fixed by the engineering decisions made regarding the rest of the reactor design. In this case, the target thrust was 90 kN (20,230 lbf), or about 120% of a Pewee-class engine.

This, of course, constrained the reactor design, which at this point in any reactor’s development is a good thing. Every general concept has a huge variety of options to play with: fuel type (oxide, carbide, nitride, metal, CERMET, etc), fissile component (233U and 235U being the big ones, but 242mAm, 241Cf, and other more exotic options exist), thrust level, physical dimensions, fuel size in the case of a PBR, and more all can be played with to a huge degree, so having a fixed target to work towards in one metric allows a reference point that the rest of the reactor can work around.

Also, having an optimization point to work from is important – in this case, thrust-to-weight ratio (T/W). Maximizing other metrics, such as specific impulse, would lead to a very different reactor design, but at the time T/W was considered the most valuable consideration, since one way or another the specific impulse would still be higher than that of the prismatic core NTRs then under development as part of the NERVA program (led by Los Alamos Scientific Laboratory and NASA, undergoing regular hot fire testing at the Jackass Flats, NV facility). Those engines, while promising, were limited by poor T/W ratios, so at the time a major goal for NTR improvement was to increase the T/W ratio of whatever came after – which might have been the RBR, if everything went smoothly.

One of the characteristics with the biggest impact on the RBR's T/W ratio is the nozzle throat diameter. The smaller the throat, the higher the chamber pressure, which reduces the T/W ratio while increasing the volume the fuel body can occupy for the same reactor dimensions – meaning that smaller fuel particles could be used, since there would be less chance of losing them out of the narrower throat. Increasing the throat diameter improved the T/W ratio (up to a point) and allowed the chamber pressure to be decreased, but at the cost of a larger particle size; this increases the thermal stresses in the fuel particles and makes it more likely that some of them would fail – not as catastrophic as in a prismatic fueled reactor by any means, but still something to be avoided at all costs. Clearly a compromise would need to be reached.
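
As a rough illustration of this trade, thrust scales as F ≈ Cf · Pc · At, so at a fixed 90 kN, chamber pressure falls in inverse proportion to throat area. A hedged sketch, with the thrust coefficient Cf assumed for illustration (not taken from the RBR reports):

```python
import math

# Throat-diameter vs chamber-pressure trade at fixed thrust,
# using the ideal relation F = Cf * Pc * At. Cf is an assumed
# typical nozzle thrust coefficient, not an RBR design value.

F = 90e3    # target thrust, N (the 90 kN design point)
Cf = 1.8    # assumed thrust coefficient

def chamber_pressure(throat_diameter_m):
    """Chamber pressure (Pa) needed to hold 90 kN at a given throat size."""
    area = math.pi * (throat_diameter_m / 2.0) ** 2
    return F / (Cf * area)

# Doubling the throat diameter quarters the required chamber pressure
for d_cm in (10, 15, 20):
    print(d_cm, "cm:", round(chamber_pressure(d_cm / 100.0) / 1e5, 1), "bar")
```

This is why the designers could not simply open up the throat: the lower pressure comes packaged with a wider opening that larger (more thermally stressed) particles are needed to avoid escaping through.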

Here are some tables looking at the design options leading up to the 90 kN engine configuration with both the 233U and 235U fueled versions of the RBR:

After analyzing the various options, a number of lessons were learned:

  1. It was preferable to work from a fixed design point (the 90 kN thrust level), because while the reactor design was flexible, operating near an optimized power level was more workable from a reactor physics and thermal engineering point of view
  2. The main stress points on the design were reflector weight (one of the biggest mass components in the system), throat diameter (from both a mass and active cooling point of view as well as fuel containment), and particle size (from a thermal stress and heat transfer point of view)
  3. On these lower-thrust engines, 233U was looking far better than 235U for the fissile component, with a T/W ratio (without radiation shielding) of 65.7 N/kg compared to 33.3 N/kg respectively
    1. As reactor size increased, this difference reduced significantly, but with a constrained thrust level – and therefore reactor power – the difference was quite significant.

The End of the Line: RBR Winds Down

1973 was a bad year for the astronuclear engineering community. The flagship program, NERVA – approaching flight-ready status with preparations for the XE-PRIME test, with the successful testing of the flexible, (relatively) inexpensive Nuclear Furnace about to occur (to speed not only prismatic fuel element development but also a variety of other reactor architectures, such as the nuclear lightbulb we began looking at last time), and with a robust hot fire testing structure established at Jackass Flats – was fighting for its life, and its funding, in the halls of Congress. National attention, after the success of Apollo 11, was turning away from space, and the missions that made NTR technologically relevant – and a good investment – were disappearing from mission planners' "to do" lists, migrating to "if we only had the money" ideas. The Rotating Fluidized Bed Reactor would be one of those casualties, and wouldn't even last through the 1971/72 fiscal year.

This doesn’t mean that more work wasn’t done at Brookhaven, far from it! Both analytical and experimental work would continue on the design, with the new focus on the 90 kN thrust level, T/W optimized design discussed above making the effort more focused on the end goal.

Multi-program computational architecture used in 1972/73 for RBR, Hoffman 1973

On the analytical side, many of the components had reasonably good analytical models independently, but they weren't well integrated. Additionally, new and improved analytical models for things like the turbopump system, system mass, and temperature and pressure drop in the reactor were developed over the preceding year, and these were integrated into a unified modeling structure involving multiple stacked models. For more information, check out the 1971-72 progress report linked in the references section.

The system developed was on the verge of being able to do dynamics modeling of the proposed reactor designs, and plans were laid out for what this proposed dynamic model system would look like, but sadly by the time this idea was mature enough to implement, funding had run out.

On the experimental side, further refinement of the test apparatus was completed. Most importantly, because of the new design requirements and the limitations of the experiments conducted so far, the test-bed's nitrogen supply system had to be modified for higher gas throughput, to accommodate a much thicker fuel bed than had yet been tested. Because of the limited information about multi-gee centrifugal behavior in a pebblebed, the existing experimental data could only be used to inform the experimental program needed for the much thicker fuel bed the new design required.

Additionally, as was discussed the previous year, thermal transfer testing in the multi-gee environment was necessary to properly evaluate thermal transfer in this novel reactor configuration, but the traditional methods of thermal transfer testing simply weren't an option. Normally, the procedure would be to subject the bed to alternating temperatures of gas: cold gas would be used to chill the pebbles to gas-ambient temperatures, then hot gas would be run over the chilled pebbles until they reached thermal equilibrium at the new temperature, then cold gas again, and so on. The temperature of the exit gas, the temperature of the pebbles, and the amount of gas (and time) needed to reach each equilibrium state would be analyzed, allowing accurate heat transfer coefficients to be obtained for a variety of pebble sizes, centrifugal forces, propellant flow rates, and so on – but this is a very energy-intensive process.
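
For a sense of how heat transfer coefficients fall out of this kind of transient test, here's a minimal lumped-capacitance sketch for a single pebble – my own illustration, with invented property values, not the Brookhaven analysis:

```python
import math

# Lumped-capacitance view of the hot/cold cycling test: a small sphere's
# temperature relaxes toward the gas temperature exponentially, with time
# constant tau = rho*cp*V / (h*A) = rho*cp*d / (6*h). Measuring tau from
# the transient lets you invert for h. All numbers below are invented.

def pebble_time_constant(h, d, rho, cp):
    """Thermal time constant (s) of a sphere of diameter d, coefficient h."""
    volume = math.pi * d**3 / 6.0
    area = math.pi * d**2
    return rho * cp * volume / (h * area)

def h_from_time_constant(tau, d, rho, cp):
    """Invert the relation: infer h (W/m^2-K) from a measured tau."""
    return rho * cp * d / (6.0 * tau)

# Example: 500-micron particle with roughly UC-ZrC-like density,
# assumed specific heat, and an assumed h of 2000 W/m^2-K
d, rho, cp = 500e-6, 6500.0, 300.0
tau = pebble_time_constant(2000.0, d, rho, cp)
print(round(tau * 1000, 2), "ms")   # sub-second equilibration
```

The tiny time constant is the point: small particles equilibrate almost instantly, which is both why the pebblebed transfers heat so well and why the transient measurements had to be carefully instrumented.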

An alternative was proposed, which would essentially split the reactor's propellant inlet into two halves, one hot and one cold. Stationary thermocouples placed through the central void in the centrifuge would record variations in the propellant at various points, and the gradient as the pebbles moved from hot to cold gas and back would yield good quality data at a much lower energy cost – though with data fidelity dropping in proportion to bed thickness. For a cash-strapped program, however, this was enough to get the data necessary to proceed with the 90 kN design the RBR program was focused on.

Looking forward, while the team knew that this was the end of the line as far as current funding was concerned, they looked to how their data could be applied most effectively. The dynamics models were ready to be developed on the analytical side, and thermal cycling capability in the centrifugal test-bed would prepare the design for fission-powered testing. The plan was to address the acknowledged limitations with the largely theoretical dynamic model with hot-fired experimental data, which could be used to refine the analytical capabilities: the more the system was constrained, and the more experimental data that was collected, the less variability the analytical methods had to account for.

NASA had proposed a cavity reactor test-bed, which would serve primarily to test the open and closed cycle gas core NTRs also under development at the time, and which could theoretically have been used to test the RBR as well in a hot-fire configuration thanks to its unique gas injection system. Sadly, this test-bed never came to be (it was canceled along with most other astronuclear programs), so the faint hope for fission-powered RBR testing in an existing facility died as well.

The Last Gasp for the RBR

The final paper that I was able to find on the Rotating Fluidized Bed Reactor was by Ludewig, Manning, and Raseman of Brookhaven in the Journal of Spacecraft, Vol 11, No 2, in 1974. The work leading up to the Brookhaven program, as well as the Brookhaven program itself, was summarized, and new ideas were thrown out as possibilities as well. It’s evident reading the paper that they still saw the promise in the RBR, and were looking to continue to develop the project under different funding structures.

Other than a brief mention of the possibility of continuous refueling, though, the system largely sits where it was in the middle of 1973, and from what I’ve seen no funding was forthcoming.

While this was undoubtedly a disappointing outcome, as virtually every astronuclear program in history has faced, and the RBR never revived, the concept of a pebblebed NTR would gain new and better-funded interest in the decades to come.

This program, which has its own complex history, will be the subject for our next blog post: Project Timberwind and the Space Nuclear Thermal Propulsion program.

Conclusion

While the RBR was no more, the idea of a pebblebed NTR would live on, as I mentioned above. With a new, physically demanding job, finishing up moving, and the impacts of everything going on in the world right now, I’m not sure exactly when the next blog post is going to come out, but I have already started it, and it should hopefully be coming in relatively short order! After covering Timberwind, we’ll look at MITEE (the whole reason I’m going down this pebblebed rabbit hole, not that the digging hasn’t been fascinating!), before returning to the closed cycle gas core NTR series (which is already over 50 pages long!).

As ever, I’d like to thank my Patrons on Patreon (www.patreon.com/beyondnerva), especially in these incredibly financially difficult times. I definitely would have far more motivation challenges now than I would have without their support! They get early access to blog posts, 3d modeling work that I’m still moving forward on for an eventual YouTube channel, exclusive content, and more. If you’re financially able, consider becoming a Patron!

You can also follow me at https://twitter.com/BeyondNerva for more regular updates!

References

Rotating Fluidized Bed Reactor

Hendrie et al, “ROTATING FLUIDIZED BED REACTOR FOR SPACE NUCLEAR PROPULSION Annual Report: Design Studies and Experimental Results, June, 1970- June, 1971,” Brookhaven NL, August 1971 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19720017961.pdf

Hendrie et al, “ROTATING FLUIDIZED BED REACTOR FOR SPACE NUCLEAR PROPULSION Annual Report: Design Studies and Experimental Results, June 1971 – June 1972,” Brookhaven NL, Sept. 1972 https://inis.iaea.org/collection/NCLCollectionStore/_Public/04/061/4061469.pdf

Hoffman et al, “ROTATING FLUIDIZED BED REACTOR FOR SPACE NUCLEAR PROPULSION Annual Report: Design Studies and Experimental Results, July 1972 – January 1973,” Brookhaven NL, Sept 1973 https://inis.iaea.org/collection/NCLCollectionStore/_Public/05/125/5125213.pdf

Cavity Test Reactor

Whitmarsh, Jr, C. “PRELIMINARY NEUTRONIC ANALYSIS OF A CAVITY TEST REACTOR,” NASA Lewis Research Center 1973 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730009949.pdf

Whitmarsh, Jr, C. “NUCLEAR CHARACTERISTICS OF A FISSIONING URANIUM PLASMA TEST REACTOR WITH LIGHT -WATER COOLING,” NASA Lewis Research Center 1973 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730019930.pdf


The Nuclear Lightbulb – A Brief Introduction

Hello, and welcome back to Beyond NERVA! Really quickly, I apologize that I haven’t published more recently. Between moving to a different state, job hunting, and the challenges we’re all facing with the current medical situation worldwide, this post is coming out later than I was hoping. I have been continuing to work in the background, but as you’ll see, this engine isn’t one that’s easy to take in discrete chunks!

Today, we jump into one of the most famous designs of advanced nuclear thermal rocket: the “nuclear lightbulb,” more properly known as the closed cycle gas core nuclear thermal rocket. This will be a multi-part post on not only the basics of the design, but a history of the way the design has changed over time, as well as examining both the tests that were completed as well as the tests that were proposed to move this design forward.

Cutaway of simplified LRC Closed Cycle Gas Core NTR, image credit Winchell Chung of Atomic Rockets

One of the challenges we saw with the liquid core NTR was that fission products could be released into the environment. This isn't really a problem from the pollution side for a space nuclear reactor (we'll look at the extreme version of this in a couple months with the open cycle gas core), but as a general rule it is advantageous to avoid, in part to keep the exhaust's average molecular mass low (the reason we use hydrogen in the first place). In ideal circumstances, and with a high enough thrust-to-weight ratio, eliminating this release could even enable an NTR to be used in surface launches.

That’s the potential of the reactor type we’re going to be discussing today, and in the next few posts. Due to the complexities of this reactor design, and how interconnected all the systems are, there may be an additional pause in publication after this post. I’ve been working on the details of this system for over a month and a half now, and am almost done covering the basics of the fuel itself… so if there’s a bit of delay, please be understanding!

The closed cycle gas core uses uranium hexafluoride (UF6) as fuel, which is contained within a fused silica “bulb” to form the fuel element – hence the popular name “nuclear lightbulb”. Several of these are distributed through the reactor’s active zone, with liquid hydrogen coolant flowing through the silica bulb, and then the now-gaseous hydrogen passing around the bulbs and out the nozzle of the reactor. This is the most conservative of the gas core designs, and only a modest step above the vapor core designs we examined last time, but still offers significantly higher temperatures, and potentially higher thrust-to-weight ratios, than the VCNTR.

A combined research effort by NASA’s Lewis (now Glenn) Research Center and United Aircraft Corporation in the 1960s and 70s made significant progress in the design of these reactors, but sadly with the demise of the AEC and NASA efforts in nuclear thermal propulsion, the project languished on the shelves of astronuclear research for decades. While it has seen a resurgence of interest in the last few decades in popular media, most designs for spacecraft that use the lightbulb reactor reference the efforts from the 60s and 70s in their reactor designs- despite this being, in many ways, one of the most easily tested advanced NTR designs available.

Today’s blog post focuses on the general shape of the reactor: its basic geometry, a brief examination of its analysis and testing, and the possible uses of the reactor. The next post will cover the analytical studies of the reactor in more detail, including the limits of what this reactor could provide, and what the tradeoffs in the design would require to make a practical NTR, as well as the practicalities of the fuel element design itself. Finally, in the third we’ll look at the testing that was done, could have been done with in-core fission powered testing, the lessons learned from this testing, and maybe even some possibilities for modern improvements to this well-known, classic design.

With that, let’s take a look at this reactor’s basic shape, how it works, and what the advantages of and problems with the basic idea are.

Nuclear Lightbulb: Nuclear Powered Children’s Toy (ish)

Easy Bake Oven, image Wikimedia

For those of us of a certain age, there was a toy that was quite popular: the Easy-Bake Oven. This was a very simple toy: an oven designed for children with minimal adult supervision to be able to cook a variety of real baked goods, often with premixed dry mixes or simple recipes. Rather than having a more normal resistive heating element as you find in a normal oven, though, a special light bulb was mounted in the oven, and the waste heat from the bulb would heat the oven enough to cook the food.

Closed cycle gas core bulb, image DOE colorized by Winchell Chung

The closed cycle gas core NTR takes this idea, and ramps it up to the edges of what materials limits allow. Rather than a tungsten wire, the heat in the bulb is generated by a critical mass of uranium hexafluoride, a gas at room temperature that’s used in, among other things, fissile fuel enrichment for reactors and other applications. This is contained in a fused silica bulb made up of dozens of very thin tubes – not much different in material, but very different in design, compared to the Easy-Bake Oven – which contains the fissile fuel, and prevents the fission products from escaping. The fuel turns from gas to plasma, and forms a vortex in the center of the fuel element.

Axial cross-section of the fuel/buffer/wall region of the lightbulb, Rodgers 1972

To further protect the bulb from direct contact with the uranium and free fluorine, a gaseous barrier of noble gas (either argon or neon) is injected between the fuel and the wall of the bulb itself. Because of the extreme temperatures, the majority of the electromagnetic radiation coming off the fuel isn’t in the form of infrared (heat), but rather as ultraviolet radiation, which the silica is transparent to, minimizing the amount of energy that’s deposited into the bulb itself. In order to further protect the silica bulb, microparticles of the same silica are added to the neon flow to absorb some of the radiation the bulb isn’t transparent to, in order to remove that part of the radiation before it hits the bulb. This neon passes around the walls of the chamber, creating a vortex in the uranium which further constrains it, and then passes out of one or both ends of the bulb. It then goes through a purification and cooling process using a cryogenic hydrogen heat exchanger and gas centrifuge, before being reused.

Now, of course, there is still an intense amount of energy generated in the fuel which is deposited in the silica – enough to melt the bulb almost instantly – so the bulb must be cooled regeneratively. This is done with liquid hydrogen, which is also mostly transparent to the majority of the radiation coming off the fuel plasma, minimizing the amount of energy the coolant absorbs from anything but the silica of the bulb itself.

Finally, the now-gaseous hydrogen from both the neon and bulb cooling processes, mixed with any hydrogen needed to cool the pressure vessel, reflectors, and other components, is seeded with microparticles of tungsten to increase the amount of the fuel's UV radiation that the propellant absorbs. This mixture then passes around the bulbs in the reactor, being heated to its final temperature before exiting the nozzle of the NTR.

Overall configuration, Rodgers 1972

The most commonly examined version of the lightbulb uses a total of seven bulbs, each made up of a spiral of hydrogen coolant channels in fused silica. The concept was pioneered by NASA's Lewis Research Center (LRC) and studied by United Aircraft Corporation (UA). These studies were carried out between 1963 and 1972, with a very small number of follow-up studies at UA completed by 1980. The resulting design was a 4600 MWt reactor fueled by 233U, with a specific impulse of 1870 seconds and a thrust-to-weight ratio of 1.3.
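
As a quick sanity check on these quoted figures – and assuming, idealistically, that all 4600 MWt ends up as jet power, which is my simplification rather than a number from the UA/LRC reports – the implied thrust and engine mass can be estimated:

```python
# Back-of-envelope consistency check of the quoted lightbulb figures
# (4600 MWt, Isp 1870 s, T/W 1.3), using the ideal jet-power relation
# P = F * ve / 2. Real losses mean the actual thrust would be lower.

g0 = 9.80665          # standard gravity, m/s^2
P = 4600e6            # reactor thermal power, W
isp = 1870.0          # quoted specific impulse, s
tw = 1.3              # quoted thrust-to-weight ratio

ve = isp * g0                        # exhaust velocity, m/s
thrust = 2.0 * P / ve                # N, upper bound (100% into the jet)
engine_mass = thrust / (tw * g0)     # kg implied by the quoted T/W

print(round(thrust / 1e3), "kN")          # roughly 500 kN
print(round(engine_mass / 1e3, 1), "t")   # roughly 39 t
```

The tens-of-tonnes implied engine mass is consistent with the article's later point that the sheer number of interdependent subsystems makes the design fairly heavy.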

A smaller version of this system, using a single bulb rather than seven, was proposed by the same team for probe missions and the like, but unfortunately the only papers are behind paywalls.

During the re-examination of nuclear thermal technology in the early 1990s by NASA and the DOE, the design was re-examined briefly to assess the advantages that the design could offer, but no advances in the design were made at the time.

Since then, while interest in this concept has grown, new studies have not been done, and the design remains dormant despite the extensive amount of study which has been carried out.

What’s Been Done Before: Previous Studies on the Lightbulb

Bussard 1958

The first version of the closed cycle gas core was proposed by Robert Bussard in 1958. This design looked remarkably like an internal combustion engine's firing chamber, with the UF6 gas being mechanically compressed to critical density with a piston. Coolant would be run across the outside of the fuel element and then exit the reactor through a nozzle. While this design hasn't been explored in any depth that I've been able to determine, a new version using pressure waves rather than mechanical pistons to compress the gas into a critical mass has been explored in recent years (we'll cover that in the open cycle gas core posts).

Starting in 1963, United Aircraft (UA, a subsidiary of United Technologies) worked with NASA’s Lewis Research Center (LRC) and Los Alamos Scientific Laboratory (LASL) on both the open and closed cycle gas core concepts, but the difficulties of containing the fuel in the open cycle concept caused the company to focus exclusively on the closed cycle concepts. Interestingly, according to Tom Latham of UA (who worked on the program), the design was limited in both mass and volume by the then-current volume of the proposed Space Shuttle cargo bay. Another limitation of the original concept was that no external radiators could be used for thermal management, due to the increased mass of the closed radiator system and its associated hardware.

System flow diagram, Rodgers 1972

The design that evolved was quite detailed, and also quite efficient in many ways. However, the sheer number of interdependent subsystems makes it fairly heavy, limiting its potential usefulness and increasing its complexity.

In order to get there, a large number of studies were done on many different subsystems and physical behaviors, and due to the extreme nature of the system design, many experimental apparatuses had to be not only built but redesigned multiple times to get the results needed to design this reactor.

We’ll look at the testing history more in depth in a future blog post, but it’s worth looking at the types of tests that were conducted to get an idea of just how far along this design was:

RF Heating Test Apparatus, Roman 1969

Both direct current and radio frequency testing of simulated fuel plasmas was conducted, starting with the RF (induction heating) testing at the UA facility in East Hartford, CT. These studies typically used tungsten in place of uranium (a common practice, still used today), since it's both massive and somewhat similar in physical properties to uranium. At the time, argon was considered for the buffer gas rather than neon; this change in composition is something we'll look at later in the detailed testing post.

Induction heating works by using an alternating magnetic field to induce eddy currents in an electrically conductive material, heating it resistively. It is a good option for nuclear testing since it can heat the simulated fuel more evenly and can achieve high temperatures – it's still used for nuclear fuel element testing, not only in the Compact Fuel Element Environment Test (CFEET) test stand, which I've covered here https://beyondnerva.com/nuclear-test-stands-and-equipment/non-nuclear-thermal-testing/cfeet-compact-fuel-element-environmental-test/ , but also in the Nuclear Thermal Rocket Environmental Effects Simulator, which I covered here: https://beyondnerva.com/nuclear-test-stands-and-equipment/non-nuclear-thermal-testing/ntrees/ . One of the challenges of this sort of heating, though, is the induction coil, the device that creates the heating field. In early testing the team managed to melt the copper coil they were using due to resistive heating (the same mechanism that makes heat in a space heater or oven), and constructing a higher-powered apparatus wasn't possible for the team.

This led to direct current heating testing to achieve higher temperatures, which uses an electrical arc through the tungsten plasma. This isn’t as good at simulating the way that heat is distributed in the plasma body, but could achieve higher temperatures. This was important for testing the stability of the vortex generated by not only the internal heating of the fuel, but also the interactions between the fuel and the neon containment system.

Spectral flux from the edge of the fuel body, Rodgers 1972 (will be covered more in depth in another post)

Another concern was determining what frequencies of radiation the silica, aluminum, and neon were transparent to. By varying the temperature of the fissioning fuel mass, the frequency of the emitted radiation could, to a certain degree, be tuned to maximize how much energy would pass through both the noble gas (then argon) and the bulb structure itself. Again, at the time (and to a certain extent later), the bulb configuration was slightly different: a layer of aluminum was added to the inner surface of the bulb to reflect more thermal radiation back into the fissioning fuel, increasing heating and therefore fuel temperature. We'll look at how this design option changed over time in future posts.

More studies and tests were done looking at the effects of neutron and gamma radiation on reactor materials. These are significant challenges in any reactor, but the materials being used in the lightbulb reactor are unusual, even by the standards of astronuclear engineering, so detailed studies of the effects of these radiation types were needed to ensure that the reactor would be able to operate throughout its required lifetime.

Fused silica test article, Vogt 1970

Perhaps one of the biggest concerns was verifying that the bulb itself would maintain both its integrity and its functionality throughout the life of the reactor. Silica is a highly unusual material for a nuclear reactor: it needed to remain transparent to a useful range of radiation, and able to contain both hydrogen and a noble gas seeded with silica particles, all while being bombarded with neutrons (which would change its crystalline structure) and gamma rays (which would change the energy states of the material to varying degrees) – so this was a major focus of the program. On top of that, the walls of the individual tubes that made up the bulbs needed to be incredibly thin, and the shape of each tube was quite unusual, so there were significant experimental manufacturing considerations to deal with. Neutron, gamma, and beta (high energy electron) radiation could all have their effects on the bulb over the course of the reactor's lifetime, and these effects needed to be understood and accounted for. While these tests were mostly successful, with some interesting materials properties of silica discovered along the way, when Dr. Latham discussed this project 20 years later, one of the things he mentioned was that modern materials science could possibly offer better alternatives to the silica tubing – a concept that we will touch on again in a future post.

Another challenge of the design was that it required seeding two different materials into two different gasses: the neon/argon had to be seeded with silica in order to protect the bulb, and the hydrogen propellant needed to be seeded with tungsten to make it absorb the radiation passing through the bulb as efficiently as possible while minimizing the increase in the mass of the propellant. While the hydrogen seeding process was being studied for other reactor designs – we saw this in the radiator liquid fueled NTR, and will see it again in open cycle gas core and some solid core designs we haven’t covered yet – the silica seeding was a new challenge, especially because the material being seeded into the gas was the same as the material of the bulb that the seeded gas was meant to protect.

Image DOE via Chris Casilli on Twitter

Finally, there’s the challenge of nuclear testing. Los Alamos Scientific Laboratory conducted some fission-powered tests which proved the concept in principle, but these were low powered bench-top tests (which we’ll cover in depth in the future). To really prove the design, it would be ideal to do a hot-fire test of an NTR. Fortunately, at the time the Nuclear Furnace test-bed was being completed (more on NERVA hot fire testing here: https://beyondnerva.com/2018/06/18/ntr-hot-fire-testing-part-i-rover-and-nerva-testing/ and the exhaust scrubbers for the Nuclear Furnace here: https://beyondnerva.com/nuclear-test-stands-and-equipment/nuclear-furnace-exhaust-scrubbers/ ). This meant that it was possible to use this versatile test-bed to test a single, sub-scale lightbulb in a controlled, well-understood system. While this test was never actually conducted, much of the preparatory design work for it was completed, another thing we’ll cover in a future post.

A Promising, Developed, Unrealized Option

The closed cycle gas core nuclear thermal rocket is one of the most perennially fascinating concepts in astronuclear history. Not only does it offer an option for a high-temperature nuclear reactor which is able to avoid many of the challenges of solid fuel, but it offers better fission product containment than any other design besides the vapor core NTR.

It is also one of the most complex systems that has ever been proposed, with two different types of closed cycle gas systems involving heat exchangers and separation systems supporting seven different fuel chambers, a host of novel materials in unique environments, the need to tune both the temperature and emissivity of a complex fuel form to ensure the reactor’s components won’t melt down, and the constant concerns of mass and complexity hanging over the heads of the designers.

Most of these challenges were addressed in the 1960s and 1970s, with most of the still-unanswered questions needing testing that simply wasn’t possible at the time of the project’s cancellation due to shifting priorities in the space program. Modern materials science may also offer better solutions than those that were available at the time, both in the testing and operation of this reactor.

Sadly, this design has never been updated, but the original remains one of the most iconic concepts in astronuclear engineering.

In the next two posts, we’ll look at the testing done for the reactor in detail, followed by a detailed look at the reactor itself. Make sure to keep an eye out for them!

If you would like to support my work, consider becoming a Patreon supporter at https://www.patreon.com/beyondnerva . Not only do you get early access to blog posts, but I post extra blogs, images from the 3d models I’m working on of both spacecraft and reactors, and more! Every bit helps.

You can also follow me on Twitter (https://twitter.com/BeyondNerva) for more content and conversation!

References

McLafferty, G.H. “Investigation of Gaseous Nuclear Rocket Technology – Summary Technical Report” 1969 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19700008165.pdf

Rodgers, R.J. and Latham, T.S. “Analytical Design and Performance Studies of the Nuclear Light Bulb Engine” 1972 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730003969.pdf

Latham, T.S. “Nuclear Light Bulb,” 1992 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19920001892.pdf


Radiator LNTR: The Last of the Line

Hello, and welcome back to Beyond NERVA! Today, we’re finishing (for now) our in-depth look at liquid fueled nuclear thermal rockets, by looking at the second major type of liquid NTR (LNTR): the radiator-type LNTR. If you’re just joining us, make sure to check out the introduction (available here) and the bubbler post (available here) for some important context to understand how this design got here.

Rather than passing the propellant directly through the molten fuel, in this system the propellant would pass through the central void of the fuel element, heated primarily through radiation (some convection within the propellant flow would occur, but overall it was a minor effect), hence the name.

This concept had been mentioned in previous works on bubbler-type LNTRs, and initial studies on the thermalization behavior of the propellant (and, conversely, the fuel cooling behavior) were conducted during the early 1960s, but the first major study wouldn’t occur until 1966. Development would continue into the 1990s, however, making this a far longer-lived design.

Let’s begin by looking at the differences between the bubbler and radiator designs, and why the radiator offers an attractive trade-off compared to the bubbler.

The Vapor Problem, or Is Homogenization of Propellant/Fuel Temp Worth It?

Liquid fuels offer many advantages for an NTR, including the fact that the fuel will distribute its heat evenly across the volume of the fuel element, the fact that the effective temperature of the fuel can be quite high, and that the fuel is able to be reasonably well contained with minimal theoretical challenges.

The bubbler design had an additional advantage: by passing the propellant directly through the fuel in small, discrete bundles (the bubbles themselves), the fuel and the propellant would reach the same temperature.

Maximum specific impulse due to vapor pressure, Barrett Jr.

Sadly, there are significant challenges to making this sort of nuclear reactor into a rocket, the biggest one being propellant mass. These types of NTRs still use hydrogen propellant; the problem occurs in the fuel mass itself. As the bubbles move through the zirconium/niobium-uranium carbide fuel, they heat up rapidly, and their pressure drops significantly in the process. This means that all of the components of the fuel (the Zr/Nb, C, and U) end up vaporizing into the bubbles, to the point that each bubble is completely saturated with a mix of these elements in vapor form by the time it exits the fuel body. This is called vapor entrainment.

This is a major problem, because it means that the propellant leaving the nozzle has a far higher molecular mass than the hydrogen that was originally fed into the system. There’s the possibility that a different propellant could be used which would entrain less of the fuel mass, but it would also have a higher molecular mass to start with – to the point that the gains might outweigh the losses (if you feel like exploring this trade-off on a more technical footing, please let me know! I’d love to explore this more) – and it wouldn’t eliminate the entrainment problem.
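A back-of-envelope sketch shows why entrainment hurts: at a fixed temperature, ideal exhaust velocity scales as the inverse square root of molar mass, so even a modest mass fraction of heavy carbide vapor drags the specific impulse down. The vapor fractions and the choice of NbC as the representative vapor species below are my own illustrative assumptions, not values from the studies:

```python
import math

# Hedged back-of-envelope: ideal exhaust velocity scales as sqrt(T/M), so at
# fixed temperature the isp falls as sqrt(M_H2 / M_mix) as heavy vapor mixes in.
# Entrainment fractions are illustrative, not taken from the LNTR reports.

M_H2 = 2.016    # g/mol, molecular hydrogen
M_NBC = 104.9   # g/mol, niobium carbide vapor (assumed representative species)

def mixture_molar_mass(mass_frac_vapor: float) -> float:
    """Molar mass of an H2 + carbide-vapor mixture, given vapor mass fraction."""
    return 1.0 / ((1 - mass_frac_vapor) / M_H2 + mass_frac_vapor / M_NBC)

def isp_retained(mass_frac_vapor: float) -> float:
    """Fraction of ideal isp retained relative to pure hydrogen, at fixed T."""
    return math.sqrt(M_H2 / mixture_molar_mass(mass_frac_vapor))

for f in (0.0, 0.1, 0.3):
    print(f"{f:.0%} vapor by mass -> {isp_retained(f):.0%} of pure-H2 isp")
```

Because the carbide vapor is so much heavier than hydrogen, it contributes mass while barely changing the mole count, which is exactly the trade the bubbler design cannot escape.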

This led people to wonder whether you have to pass the propellant through the fuel in the first place. After all, while there is a thermodynamically appealing symmetry to homogenizing your fuel and propellant temperatures, this isn’t actually necessary, is it? The fuel elements are already annular in shape, after all, so why not use them as a more traditional fuel element for an NTR? The lower surface area would mean less vapor entrainment, and the inconveniently high vapor pressure of the fuel would be mitigated by the fact that the majority of the propellant would never come in contact with the fuel (or even with the layer of propellant that does interact with the fuel), meaning that the overall propellant molecular mass would be kept low… right?

The problem is that this means that the only method of heating the propellant becomes radiation (there’s a small amount of convection, but it’s negligible enough to be ignored)… which isn’t that great in hydrogen, especially in the UV spectrum where most of the photons from the nuclear reaction are emitted. The possibility of using either microparticles or vapors which would absorb the UV and re-emit it at a longer wavelength, more easily absorbed by the hydrogen, was already being investigated in relation to gas core NTRs (which have the same problem, but on a completely different order of magnitude), and offered promise, but was also a compromise: it deliberately increases the molar mass of the propellant in one way in order to minimize it in another. This was a design possibility that needed to be carefully studied before it could be considered more feasible than the bubbler LNTR.

The leader of the effort to study this trade-off was one of the best-known fluid fueled NTR designers on the NASA side: Robert Ragsdale at Lewis Research Center (LRC, and we’ll come back to Ragsdale in gas core NTR design as well). A collection of studies grew up around a particular design, beginning with a study of reactor geometry and fuel element size optimization meant not only to maximize the thrust and specific impulse, but also to minimize the uranium loss rate of the reactor.

This study concluded that there were many advantages to the radiator-type LNTR over the bubbler-type. First, the vapor entrainment problem that was plaguing the bubbler was minimized, though not completely eliminated, in the radiator design. Next, the specific impulse of the engine could be maintained, or even increased, to 1400 s or more. Finally, one of the most striking improvements was in thrust-to-core-weight ratio, which went from about 1:1 in the Nelson/Princeton design that we discussed in the last post all the way up to (potentially) 19:1! This is because the propellant flow rate isn’t limited by the velocity of the bubbles moving through the fuel (for more detail on this problem, and the other related constraints, check out the last blog post, here).

These conclusions led NASA to gather a team of researchers, including Ragsdale, Kasack, Donovan, Putre, and others to develop the Lewis LNTR reactor.

Lewis LNTR: The First of the Line

Lewis Radiator LNTR, Ragsdale 1967

Once the basic feasibility of the radiator LNTR was demonstrated, a number of studies were conducted to determine the performance characteristics, as well as the basic engineering challenges, facing this type of NTR. They were conducted in 1967/68, and showed distinct promise in the desired 2000 to 5000 MWt power range (similar to the Phoebus 2 reactor’s power goal, which remains the most powerful nuclear reactor ever tested at 3500 MWt).

Fuel tube cross-section, Putre 1968

As with any design, the first question was the basics of reactor configuration. The LRC team never looked at a single-tube LNTR, for a variety of reasons, and instead focused their efforts on a multi-tube design, but the number and diameter of the tubes was one of the major questions to be addressed in initial studies. Because of this, and the particular characteristics of the heat transfer required, the reactor would have many fuel elements with a diameter of between 1 and 4 inches, but which diameter was best would be a matter of quite some study.

Another question for the study team was what the fuel element temperature would be. As in every NTR design, the hotter the propellant, the higher the isp (all other things being equal), but as we saw in the bubbler design, higher temperatures also mean higher vapor pressure, meaning that fuel mass is lost more easily into the propellant – which increases the propellant’s molecular mass and reduces the isp, until at some point the added mass costs more specific impulse than the higher temperature gains. Because the propellant and the fuel would only interact at the surface of the fuel element, the surface temperature of the fuel was the overriding consideration, and was explored in the range of 5000 to 6100 K.
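For a rough sense of scale, treating the exhaust as fully converting its thermal enthalpy into jet velocity gives isp ≈ sqrt(2·cp·T)/g0. Using a room-temperature cp for hydrogen (an underestimate at these temperatures, and ignoring dissociation) already lands in the same neighborhood as the ~1400 s figures these studies quote. This is my own hedged sketch, not the LRC team’s method:

```python
import math

# Upper-bound sketch: if all thermal enthalpy becomes jet kinetic energy,
# isp = sqrt(2 * cp * T) / g0. Uses a constant room-temperature cp for H2
# and ignores dissociation and real-gas effects, so it's indicative only.

G0 = 9.80665       # m/s^2, standard gravity
CP_H2 = 14_300     # J/(kg*K), H2 specific heat near room temperature (assumed)

def ideal_isp_s(temp_k: float) -> float:
    """Indicative upper-bound specific impulse for hydrogen at temp_k."""
    return math.sqrt(2 * CP_H2 * temp_k) / G0

for t in (5000, 6100):  # the fuel surface temperature range studied
    print(f"{t} K surface -> ideal isp ~{ideal_isp_s(t):.0f} s")
```

Dissociation and the higher cp of hot hydrogen push the real ceiling somewhat above these numbers, while incomplete thermalization and fuel vapor pull it back down.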

Effect of Reactor Pressure on T/W Ratio and U mass loss ratio in H, Ragsdale 1967

The final consideration optimized in this design was the engine operating pressure. Because this design wasn’t fundamentally limited by the bubble velocity and void fraction of the molten fuel, the chamber pressure could be increased significantly, leading to both more thrust and a higher thrust-to-weight ratio. The trade-off is that at some point the propellant isn’t completely thermalized, resulting in a lower specific impulse. This consideration was explored in the range of 200 to 1000 atm (roughly 2,000 to 10,100 N/cm²).

The three primary goals were: to maximize specific impulse, maximize thrust-to-weight ratio, and minimize uranium mass loss. They quickly discovered that they couldn’t have their cake and eat it, too: higher temperatures, and therefore higher isp, led to faster U mass loss rates, increasing T/W ratio reduced the specific impulse, and minimizing the U loss rate hurt both T/W and isp. They could improve any one (or often two) of these characteristics, but always at the cost of the third characteristic.

Four potential LNTR configurations, note the tradeoffs between isp, T/W, and fuel loss rates. Ragsdale 1967

We’ll look at many of the design characteristics and engineering considerations of the LRC work in the next section on general design challenges and considerations for the radiator LNTR, but for now we’ll look at their final compromise reactor.

The reactor itself would be made up of several (oddly, never specified) fuel elements, in a beryllium structure, with each fuel element being made up of Be as well. These would be cooled by cryogenic hydrogen moving from the nozzle end to the spacecraft end of the reactor, before flowing back into the central void of the fuel element. As it was fed through the central annulus, it would be seeded with tungsten microparticles to increase the amount of heat the propellant would absorb. Finally, it would be exhausted through a standard De Laval nozzle to provide thrust.

Reference LRC LNTR design characteristics, Putre 1968

The final fuel that they settled on was a liquid ternary carbide design, with the majority of the fuel being niobium carbide (although ZrC was also considered), with a molar mass fraction of 0.02 being UC2. This compromise offered good power density for the reactor while minimizing the vaporization rate of the fuel mass. This would be held in 2 inch diameter, 5 foot long fuel element tubes, with a fuel surface temperature of 5060 K. The propellant would be pressurized to 200 atm in the reactor.

Final LRC LNTR Fuel Characteristics, Putre 1968

This led to a design that struck a compromise between isp, T/W, and U mass loss which was not only acceptable, but impressive: 1400 s isp (on par with some forms of electric propulsion), a T/W ratio (of the core alone) of 4, and a hydrogen-to-uranium flow rate ratio of 50.

They did observe that none of these characteristics were as high as they could be, especially the T/W ratio (which they calculated could go as high as 19!) or the isp (with a theoretical maximum of 1660 s), and the uranium loss rate was twice the theoretical minimum; sadly, the engineering cost of maximizing any one of these characteristics was so high that it wasn’t feasible.

Sadly, I haven’t been able to find any documentation on this reactor design – and very few references to it – after February 1968. The exact time of the cancellation, and the reasons why, are a mystery to me. If someone is able to help me find that information it would be greatly appreciated.

LARS: The Brookhaven Design

LARS cross section

The radiator LNTR would remain dormant for decades, as astronuclear funding was scarce and focused on particular, well-characterized systems (most of which were electric powerplant concepts), until the start of the Space Exploration Initiative. In 1991, a conference was held to explore the use of various types of NTR in future crewed space missions. This led to many proposals, including one from the Department of Energy’s Brookhaven National Laboratory in New York. This was the Liquid Annular Reactor System, or LARS.

A team of physicists and engineers, including Powell, Ludewig, Lazareth, and Maise decided to revisit the radiator LNTR design, but as far as I can tell didn’t use any of the research done by the LRC team. Due to the different design philosophies, lack of references, and also the general compartmentalization of knowledge within the different parts of the astronuclear community, I can only conclude that they began this design from scratch (if this is incorrect, and anyone has knowledge of this program, please get in contact with me!).

LARS was a very different design than the LRC concept, and seems to have gone through two distinct iterations. Rather than the high-pressure system that the LRC team investigated, this was a low-pressure, low-thrust design, which optimized a different characteristic: hydrogen dissociation. This maximizes the specific impulse of the NTR by reducing the molecular mass of the propellant to the lowest theoretically possible value while maintaining the temperature of the propellant (yielding up to 1600 s, according to the BNL team). The other main distinction from the LRC design was the power level: rather than a very powerful reactor (3000 to 5000 MWt), this was a modest reactor of only 200 MWt. This leads to a very different set of design tradeoffs, but many of the engineering and materials challenges remain the same.
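The dissociation payoff can be seen with one line of algebra (a sketch of the general principle, not the BNL analysis): at a fixed temperature, exhaust velocity scales as sqrt(T/M), and dissociating H2 into atomic H halves the molar mass.

```python
import math

# Sketch: splitting H2 (~2.016 g/mol) into atomic H (~1.008 g/mol) halves the
# molar mass, and since exhaust velocity goes as sqrt(T/M) at fixed T, full
# dissociation buys up to a sqrt(2) ~ 41% gain in specific impulse.

def isp_gain_factor(m_before: float, m_after: float) -> float:
    """Multiplier on isp from reducing molar mass at a fixed temperature."""
    return math.sqrt(m_before / m_after)

print(f"full H2 dissociation -> isp x{isp_gain_factor(2.016, 1.008):.2f}")
```

Low chamber pressure favors dissociation (and slows recombination in the nozzle), which is exactly why LARS traded thrust for this isp advantage.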

LARS would continue to use NbC diluted with UC2, but the fuel would not completely melt in the fuel element, leaving a solid layer against the walls of the beryllium fuel element tube. This in turn would be regeneratively cooled with hydrogen flowing through a number of channels in the drum, as well as through a gap surrounding the body of the fuel element, which would also be filled with cold hydrogen. A drive system attached to the cold end of the tube would spin it at an appropriate rate (which was not detailed in the papers). The main changes were in the fuel element configuration, size, and number.

The first iteration of LARS was an interesting concept, using a folded-flow system. This used many small fuel element tubes, arranged in a similar manner to the flow channels in the Dumbo reactor, with the propellant moving from the center of the reactor to the outer circumference, before being ejected out of the nozzle of the reactor. Each layer of fuel elements contained eleven individual tubes, with between 1 and 10 layers of fuel elements in the reactor. As the number of layers increased, the length and radius of the fuel elements decreased.

One of the important notes made by the team at this early design stage was that the perpendicular fuel element orientation would minimize the amount of fission products ejected from the rocket. However, I’m unable to determine how this was accomplished, unless the fission products were solids which would stick to the outside of the propellant flow path.

Unfortunately, I haven’t been able to discover exactly why this design was abandoned for a more traditional LNTR architecture, but the need to cool the entire exterior of the reactor to keep it from melting seems to have been a concern. Reversing the flow, with the hot propellant in the center of the reactor rather than at the external circumference, seems like an easy fix if this was the primary concern, but the discussions of reactor architecture after this point pretty much ignore this early iteration. Another complication would be the complexity of the reactor architecture: whether with dedicated motors, or with a geared system allowing one motor to spin multiple fuel elements, a complex system is needed to spin the fuel elements, which would not only be more prone to breaking down, but would also require far more mass than a simpler system.

The second version of LARS kept the same type of fuel, power output, and low pressure operation, but rather than using the folded flow concept it went with seven fuel elements in a beryllium body. The propellant would be used to cool first the nozzle of the rocket, then the rotating beryllium drum containing the fuel element, before entering the main propellant channel. The final thermalization of the propellant would be facilitated by tungsten microparticles seeded into the H2, necessary due to the low partial pressure and high transparency of pure H2 (while the vapor pressure issues of any LNTR were acknowledged, their effect on thermalization seems not to have been considered a significant factor in the seeding necessity). Two versions, defined by the emissivity of the fuel element, were proposed.

Final two LARS options, f is fuel emissivity, Maise 1999

This design was targeted to reach up to 2000 s isp, but due to uncertainties in U loss rates (as well as C and Nb), the overall mass of the propellant upon exiting the reactor was uncertain, so the authors used a range of 1600-2000 s. The thrust of the engine was approximately 20,000 N, which would result in a T/W ratio of about 1:1 when including a shadow shield (one author points out that without the shield the ratio would be about 3-4:1).
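Plugging the quoted figures into the standard thrust relation gives a feel for the scale of the engine. This is my own arithmetic on the numbers above, not a calculation from the papers:

```python
# Sanity check on the LARS figures: mdot = F / (isp * g0), and a ~1:1
# thrust-to-weight ratio implies an engine weight roughly equal to thrust.

G0 = 9.80665        # m/s^2, standard gravity
THRUST_N = 20_000   # approximate LARS thrust from the papers

def mass_flow_kg_s(thrust_n: float, isp_s: float) -> float:
    """Propellant mass flow implied by a given thrust and specific impulse."""
    return thrust_n / (isp_s * G0)

for isp in (1600, 2000):  # the isp range quoted by the BNL team
    print(f"isp {isp} s -> mdot ~{mass_flow_kg_s(THRUST_N, isp):.2f} kg/s")

# With T/W ~ 1:1 (shielded), engine + shield mass ~ F / g0:
print(f"engine + shield mass ~{THRUST_N / G0:.0f} kg")
```

So LARS works out to roughly a kilogram of hydrogen per second through a two-metric-ton engine-plus-shield package, which underlines just how modest a 200 MWt design is next to the multi-gigawatt LRC concept.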

I have been unable to find the research reports themselves for this program (unlike the LRC design), so the specifics of the reactor physics tradeoffs, engineering compromises, actual years of research and the like aren’t something that I’m able to discuss. The majority of my sources are conference papers and journal articles, which occurred in 1991 and 1992, but there was one paper from 1999, so it was at least under discussion through the 1990s (interestingly, that paper discussed using LARS for the 550 AU mission concept, which later got remade into the FOCAL gravitational lens mission: https://www.centauri-dreams.org/2010/11/15/a-focal-mission-into-the-oort-cloud/ ).

This seems to be the last time that LARS has been mentioned in the technical literature, so while it is mentioned as the “baseline” liquid core concept in places such as Atomic Rockets (http://www.projectrho.com/public_html/rocket/enginelist2.php#id–Nuclear_Thermal–Liquid_Core–LARS) it has not been explored in depth since.

Lessons Learned, Lessons to Learn: The Challenges of LNTR

In many ways, the apparent dual genesis of radiator LNTRs offers a window into two different ways of thinking about the challenges of radiator-type LNTRs. One example is what’s discussed in the “fundamental challenges” sections of the introductory portions of the reports: the LRC team focuses on vapor entrainment minimization, whereas the BNL presentations seem to consider it quite important to point out that “yes, containing a refractory in a spinning, gas-cooled drum is relatively trivial.” This juxtaposition of foci is interesting to me, as an examination of the different engineering philosophies of the two teams, and of the priorities of the times.

Wall Construction

Both the LRC and LARS LNTRs ended up with similar fuel element configurations: a high temperature material, with coolant tubes traveling the length of the fuel element walls to regeneratively cool them. This material would have to not only withstand the temperature of the fuel element, but also resist chemical attack by the hydrogen used for regenerative cooling, as well as withstand the mechanical strain of the spinning fuel and the torque from whatever drive system spins the fuel element to maintain the centripetal force that contains the fuel.

Another constant concern is the temperature of the wall. While high thermal loads can be handled using regenerative cooling, the more heat that is removed from the fuel during the regenerative cooling step, the lower the specific impulse of the engine. Here’s a table from the LRC study that examines the implications of wall cooling ratio vs specific impulse in that design, which will also apply as a general rule of thumb for LARS:

However, from there, the two designs differed significantly. The LARS design is far simpler: a can of beryllium, with a total of 20% of its volume devoted to the regenerative cooling channels. As mentioned previously, the fuel didn’t become completely molten, but kept a solid layer (consisting mostly of the ZrC/NbC component, with very little U). Surrounding the outside of the fuel element can itself was another coolant gap. The can would be mounted to the reactor body with a drive system at the ship end and a bearing at the hot end, all within the stationary moderator that made up the majority of the internal volume of the reactor core, which was insulated from the heat of the fuel element, resulting in a very heterogeneous temperature profile.

The LRC concept, on the other hand, was far more complex in some ways. Rather than using a metal can, the LRC design used graphite, which maintains its strength far better than most metals at high temperatures. A number of options were considered to maintain the wall of the can, due not only to the fuel mixture potentially attacking the graphite (the carbon could be dissolved into the carbide of the fuel), but also to attack from the hydrogen in the coolant channels (which could be addressed in a similar way to how NERVA fuel elements used refractory metal coatings to prevent the same erosive effects).

The LRC design, since the fuel would be completely molten across the entire volume of the fuel element, was a more complex challenge. A number of options were considered to minimize the wall heating of the fuel element, including:

  • Selective fuel loading
    • A common strategy in solid fuel elements, this creates hotter and cooler zones in the fuel element
      • Neutron heating will spread the heating beyond the U distribution
    • Convection and fuel mixing will end up distributing the fuel over time
    • May be able to be limited by affecting the temperature and viscosity of the fuel for the life of the reactor
  • Multiple fluids in fuel
    • Step beyond selective loading, a different material may be used as the outer layer of the fuel body, resisting mixing and reducing thermal load on the wall
  • Vapor insulation along exterior of fuel body
    • Using thermally opaque vapor to insulate the fuel element wall from the fuel body
    • Significantly reduces the heating on the outer wall
    • Two options for maintaining vapor wall:
      • Ablative coating on inner wall of fuel element can
      • Porous wall in can (similar to a low-flow version of a bubbler fuel element) pumping vapor into gap between fuel and can
    • Maximum stable vapor-layer thickness based on vapor bubble force balance vs centripetal force of liquid fuel
      • Two phase flow dynamics needed to maintain the vapor layer would be complex

This set of options offers a trade-off: either a simpler option, which sets hard limits on the fuel element temperature in order to ensure the phase gradient in the fuel element (the LARS concept), or the fully liquid, more complex-behaving LRC design, which has better power distribution and a higher theoretical fuel element temperature – limited only by the vapor pressure increase and fuel loss rates in the fuel element, rather than the wall heating limits of the LARS design.

Anyone designing a new radiator LNTR has much work that they can draw from, but other than the dynamics of the actual fuel behavior (which have never gone through a criticality test), the fuel element can design will be perhaps the largest set of engineering challenges in this type of system (although simpler than the bubbler-type LNTR).

Propellant Thermalization

The major change between the bubbler and radiator-type LNTRs is the difference in the thermalization behavior of the propellant. In a bubbler-type LNTR, assuming the propellant can be fed through the fuel, the two components reach thermal equilibrium, so the only thing needed is to direct the propellant out of the nozzle. A radiator, on the other hand, has a flow path similar to the Rover-type NTRs: a once-through pass from the nozzle end to the ship end for regenerative cooling, then a final thermalization pass through the central void of the fuel element.

This is a problem for hydrogen propellant, which is largely transparent to the EM radiation coming off the fuel. Radiation accounted for all but 10% of the thermalization in the LARS design, and in many of the LRC studies convection was ignored entirely as negligible, with the convective effects in the propellant mainly being a concern in terms of fuel mass loss and propellant mass increase.

While the fuel mass loss would increase the opacity of the gas (making it absorb more heat), a far better option was available: adding a material in microparticle form to the propellant flow as it goes through the final thermalization pass. The preferred material for the vast majority of these applications, which we’ll see in the gas core NTRs as well, is tungsten microparticles.

This has been studied in a host of different applications, and will be something I’ll discuss in depth in a section of the propellant webpage in the future (which I’ll link to here once it’s done), but for the LRC design the target was to increase the opacity of the H2 to between 10,000 and 20,000 cm²/g, at the cost of a single-digit percentage reduction in specific impulse due to the higher mass. They pointed out that the simplified calculations used for the fuel mass loss behavior could lead to an error they were unable to properly bound, which could either increase or decrease the amount of additive needed.
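To see why an opacity in that range is sufficient, a Beer-Lambert estimate shows the seeded propellant going from nearly transparent to effectively opaque over a single channel radius. The hydrogen density and path length below are my own assumed values for illustration, not figures from the LRC reports:

```python
import math

# Beer-Lambert sketch: the fraction of thermal radiation absorbed over a
# path L in a gas of density rho and mass opacity kappa is 1 - exp(-kappa*rho*L).
# Density and path length are illustrative assumptions, not report values.

def absorbed_fraction(kappa_cm2_per_g: float, rho_g_per_cm3: float,
                      path_cm: float) -> float:
    """Fraction of incident radiation absorbed along the given path."""
    return 1.0 - math.exp(-kappa_cm2_per_g * rho_g_per_cm3 * path_cm)

RHO = 1.6e-3   # g/cm^3, hydrogen at roughly 200 atm and ~3000 K (assumed)
PATH = 2.5     # cm, about one fuel element channel radius (assumed)

for kappa in (1, 10_000, 20_000):  # unseeded vs seeded opacity, cm^2/g
    frac = absorbed_fraction(kappa, RHO, PATH)
    print(f"kappa {kappa:>6} cm^2/g -> absorbs {frac:.1%}")
```

With opacities in the target range the optical depth is in the tens, so essentially all of the radiated power is captured within the propellant stream instead of striking the far wall.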

The LARS concept used tungsten microparticles as well, and their absorption actually was the defining factor in the two designs they proposed: the emissivity and reflectivity of the fuel in terms of the absorption of the wall and the propellant.

Two other options are available for increasing the opacity of the hydrogen gas.

The first is to use a metal vapor deliberately, as was the paradigm in Soviet gas core design. Here, they used either NaK or Li vapor, both of which have small neutron absorption cross-sections and high thermal capacity. This has the advantage of being more easily mixed with the turbulent propellant stream, as well as being far lower mass than the W that is often used in US designs, but may be less opaque to the EM frequencies being emitted by the fuel’s surface in an LNTR design. I’m still trying to track down a more thorough writeup of the use of these vapors in NTR design at the moment (a common problem in both Soviet and Russian astronuclear literature is a lack of translations), but when I do I’ll discuss it in far more depth, since it’s an idea that doesn’t seem to have translated into the American NTR design paradigm.

As I said, this is a concept that I’m going to cover more in depth in both the gas core and general propellant pages, so with one final – and fascinating – note, we’ll move on to the conclusion.

An Interesting Proposal

The final option is something that Cavan Stone mentioned to me on Facebook a while ago: the use of lithium deuteride (LiD) as a propellant or additive in this design. This is an interesting concept, since Li-7 has a very small neutron absorption cross-section, and LiD is reasonably opaque to the frequencies being discussed in these reactors. The use of deuterium rather than protium also improves the moderating ratio of the propellant (deuterium absorbs far fewer neutrons than protium), which can in turn increase the fission efficiency of the reactor. The Li will tend to harden the neutron spectrum overall, while the D and Be (in the fuel element can/reactor body) will thermalize it.

There was a discussion of using LiD as a propellant in NTRs in the 1960s [https://www.osti.gov/biblio/4764043-nuclear-effect-using-lithium-hydride-propellant-nuclear-rocket-reactor-thesis], but sadly I can’t find it anywhere online. If someone is able to help me find it, please let me know. This is a fascinating concept, and one that I’m very glad Cavan brought up to me, but also one that is complex enough that I really need to see an in-depth study by someone far more knowledgeable than me to be able to intelligently discuss the implications of.

Conclusion, or The Future of the Forgotten Reactor

While often referenced in passing in any general presentation on nuclear thermal rockets, the liquid core NTR seems to be the least studied of the different NTR types, and also the least covered. While the bubbler offers distinct advantages from a purely thermodynamic point of view, the radiator offers far more promise from a functional perspective.

Sadly, while both solid and gas core NTRs have been studied into the 21st century, the liquid core has been largely forgotten, and the radiator in particular seems to have gone through a reinvention of the wheel, as it were, between the 1960s NASA design and the 1990s DOE design, with few of the lessons learned from the LRC concept being applied to the BNL design as far as vapor dynamics, thermal transfer, and the like.

This doesn’t mean that the design is without promise, though, or that the challenges the reactor faces are insurmountable. A number of hurdles in testing need to be overcome for this design to work – but many of the problems for which there simply isn’t any data can be solved with a simple set of criticality and reactor physics tests, something well within the reach of most nuclear research programs capable of testing NTRs.

With the advances in nuclear and two-phase flow modeling, a body of research that doesn’t seem to have been examined in depth for over two decades, and the possibility of a high-isp, moderate-to-high thrust engine without the complications of a gas core NTR (a subject that we’ll be covering soon), the LNTR – and the radiator in particular – offers a combination of promise and low-hanging-fruit development potential that few advanced NTR systems can match.

Final Note

With that, we’re leaving the realm of liquid fueled NTRs for now. This is a fascinating field, and one that I haven’t seen much discussion of outside the original technical papers, so I hope you enjoyed it! I’m going to work on getting these posts into a more easily-referenced form on the website proper, and will make a note of that in my blog (and on my social media) when I do! If anyone is aware of any additional references pertaining to the LNTR, its thermophysical behavior, fuel materials options, or anything else relating to these designs, please let me know, either in the comments or by sending a message to beyondnerva at gmail dot com.

Our next blog post will be on droplet and vapor core NTRs, and will be covered by a good friend of mine and fellow astronuclear enthusiast: Calixto Lopez. These reactors have fascinated him since he was in school many moons ago, and he’s taught me the majority of what I know about them, so I asked him if he was willing to write that post.

After that, we’re going to move on to the closed cycle gas core NTR, which I’ve already begun research on. There’s lots of fascinating tidbits about this reactor type that I’ve already uncovered, so this may end up being another multiple part blog series.

Finally, to wrap up our discussion of advanced NTRs, we’re going to do a series on the open cycle gas core NTR types. This is going to be a long, complex series on not only the basic physics challenges, but the design evolution of the engine type, as well as discussion on various engineering methods to mitigate the major fuel loss and energy waste issues involved in this type of engine. There may be a delay between the closed and open cycle NTR posts due to the sheer amount of research necessary to do open cycles justice, but rest assured I’m already doing research on them.

As you can guess, this blog takes a lot of time, and a lot of research, to write. If you would like to support me in my efforts to bring the wide and complex history of astronuclear engineering to light, consider supporting me on Patreon: https://www.patreon.com/beyondnerva . Every dollar helps, and you get access to not only early releases of every blog post and webpage, but at the higher donation amounts you also get access to the various 3d models that I’m working on, videos, and eventually the completed 3d models themselves for your own projects (with credit for the model construction, of course!).

I’m also always looking for new or forgotten research in astronuclear engineering, especially that done by the Soviet Union, Russia, China, India, and European countries. If you run across anything interesting, please consider sending it to beyondnerva at gmail dot com.

You can find me on Twitter ( @beyondnerva), as well as join my facebook group (insert group link) for more astronuclear goodness.

References

General References

ANALYSES OF VAPORIZATION IN LIQUID URANIUM BEARING SYSTEMS AT VERY HIGH TEMPERATURES, Kaufman and Peters 1965 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19660002967.pdf

ANALYSIS OF VAPORIZATION OF LIQUID URANIUM, METAL, AND CARBON SYSTEMS AT 9000° AND 10,000° R, Kaufman and Peters 1966 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19660025363.pdf

Fundamental Material Limitations in Heat-Exchanger Nuclear Rockets, Kane and Wells, Jr. 1965 https://www.osti.gov/servlets/purl/4610034/

VAPOR-PRESSURE DATA EXTRAPOLATED TO 1000 ATMOSPHERES (1.01×10^8 N/m^2) FOR 13 REFRACTORY MATERIALS WITH LOW THERMAL ABSORPTION CROSS SECTIONS, Masser 1967 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19670030361.pdf

Radiator-Specific LNTR References

Lewis Research Center LNTR

PERFORMANCE POTENTIAL OF A RADIANT-HEAT-TRANSFER LIQUID-CORE NUCLEAR ROCKET ENGINE, Ragsdale 1967 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19670030774.pdf

HEAT- AND MASS-TRANSFER CHARACTERISTICS OF AN AXIAL-FLOW LIQUID-CORE NUCLEAR ROCKET EMPLOYING RADIATION HEAT TRANSFER, Ragsdale et al 1967 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19670024548.pdf

FEASIBILITY OF SUPPORTING LIQUID FUEL ON A SOLID WALL IN A RADIATING LIQUID-CORE NUCLEAR ROCKET CONCEPT, Putre and Kasack 1968 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19680007624.pdf

Liquid Annular Reactor System (LARS)

[Paywall] Conceptual Design of a LARS Based Propulsion System, Ludewig et al 1991 https://arc.aiaa.org/doi/abs/10.2514/6.1991-3515

The Liquid Annular Reactor System (LARS) Propulsion, Powell et al 1991 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19910012832.pdf

LIQUID ANNULUS, Ludewig 1992 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19920001886.pdf

[Paywall] The liquid annular reactor system (LARS) for deep space exploration, Maise et al 1999 https://www.sciencedirect.com/science/article/abs/pii/S0094576599000442


The Bubbler: Liquid NTRs Without Barriers

Hello, and welcome back to Beyond NERVA! Today, we continue our look at liquid fueled nuclear thermal rockets (LNTRs), with a deep dive into the first of the two main types: what I call the bubbler LNTR.

This potentially attractive form of advanced NTR is a design that has been largely forgotten in the history of NTR designs outside some minor footnotes. Because of this, I felt that it was a great subject for the blog! All of the sources that I can find on the designs are linked at the end of this post, including a couple that are not available digitally, so if you’re interested in a more technical analysis of the concept please check that out!

What is a Bubbler LNTR?

Every NTR has to heat its propellant (usually hydrogen) in some way; in most designs this is done through thermal radiation from the fuel’s surface into the propellant.

Bubbles passing through fuel, Nelson 1963

This design, though, changes that paradigm by passing the propellant directly through the liquid fuel (usually a mix of uranium carbide (UC2) and another carbide, either zirconium (ZrC) or niobium (NbC)). This is done by injecting the propellant through a porous outer wall. This is known as a “folded flow” propellant path, and is seen in other NTRs as well, notably the Dumbo reactor from the early days of Project Rover.

In order to keep the fuel in place, each fuel element is spun rapidly enough that centrifugal force holds the molten fuel against the canister wall. The number of fuel elements varies from design to design, and the overall diameter, as well as the thickness of the fuel layer, is a matter of some design flexibility as well; on average the individual fuel elements range from about 2 to about 6 inches in diameter, with the ratio between the thickness of the fuel layer and the thickness of the central void (where the now-hot propellant passes through to the nozzle) being roughly 1:1.

This was the first type of LNTR to be proposed, and was a subject of study for over a decade, but seems to have fallen out of favor with NTR designers in the late 1960s/early 1970s due to fuel/propellant interaction complications and engineering challenges related to the physical structures for injecting the propellant (more on that later).

Let’s look at the history of bubbler LNTR in more depth, and see how the proposals have evolved over time.

History of the Bubbler-type LNTR: The First of its Kind

McCarthy, 1954

Image from Barrett, Jr 1964

The first proposal for a liquid fueled NTR was in 1954, by J McCarthy in “Nuclear Reactors for Rockets” [ed. Note I have been unable to locate this report in digital form, if anyone is able to help me get ahold of it I would greatly appreciate your assistance; the following summary is based on references to this study in later works]: This design was the first to suggest the centrifugal containment of liquid fuel, and was also the first of the bubbler designs. It used a single fuel element as the entire reactor, with a large central void in the center of the fuel body as the propellant flow channel once it left the fuel itself.

This design was fundamentally limited by four factors:

  1. A torus is a terrible neutronic structure, and while the hydrogen propellant in the central void of the fuel would provide some neutron moderation, McCarthy found upon running the neutronics calculations that the difference was so negligible that the void could be assumed to be a vacuum;
  2. Only a certain amount of heat could be removed from the fuel by the propellant for a given fuel element geometry, meaning that cooling the reactor could pose a major challenge at higher reactor powers;
  3. The behavior of the hydrogen as it passed through, and out of, the liquid fuel was not well understood in practice; and
  4. The vapor pressure of the fuel’s constituent components could lead to fuel being carried off as vapor, in both the bubbles and the exhausting propellant flow, causing a loss of both specific impulse and fissile fuel. This process is called “entrainment,” and is a (if not the) major issue for this type of reactor.

However, despite these problems this design jump started the design of LNTRs, defined the beginnings of the design envelope for this type of engine, and introduced the concept of the bubbler LNTR for the first time.

The Princeton LNTR, 1963

Princeton LNTR, Nelson et al 1963

The next major design step was undertaken by Nelson et al at Princeton’s Dept. of Aeronautical Engineering in 1963, under contract by NASA. This was a far more in-depth study than the proposal by McCarthy, and looked to address many of the challenges that the original design faced.

Perhaps the most notable change was the shift from a single large fuel element to multiple smaller ones, arranged in a hexagonal matrix for maximum fuel element packing. This does a couple of things:

  1. It homogenizes the reactor more. While heterogeneous (mixed-region) reactors work well, for a variety of reasons it’s beneficial to have a more consistent distribution of materials through the core – mainly for neutronic properties and ease of modeling (this is 1963, MCNP in a heterogeneous core using a slide rule sounds… agonizing).
  2. Given a materially limited, fixed specific impulse (see the Fuel Materials Constraints section for more in-depth discussion on this) NTR, the thrust is proportional to the total surface area of the fuel/propellant interface. By using multiple fuel elements (which they call vortices), the total available surface area increases in the same volume, increasing the thrust without compromising isp (this also implies a greater specific power, another good thing in an NTR).
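The surface-area argument in point 2 can be sketched numerically: if N equal cylindrical fuel elements share a fixed core cross-section, each element's radius shrinks as 1/sqrt(N), so the total fuel/propellant interface area grows as sqrt(N). A minimal illustration (my own idealization, ignoring packing losses and inter-element structure):

```python
# Sketch of the surface-area argument for multiple fuel elements.
# Idealized: N equal cylinders share a fixed total cross-sectional area,
# so each has radius core_radius/sqrt(N); packing losses are ignored.
import math

def total_interface_area(n_elements, core_radius_m, core_length_m):
    """Total inner (fuel/propellant) surface area of n cylindrical vortices."""
    element_radius = core_radius_m / math.sqrt(n_elements)
    return n_elements * 2.0 * math.pi * element_radius * core_length_m

base = total_interface_area(1, 0.5, 1.0)
for n in (1, 7, 19):  # typical hexagonal-packing element counts
    area = total_interface_area(n, 0.5, 1.0)
    print(f"N={n:>2}: area = {area:6.2f} m^2 ({area/base:.2f}x the single-element case)")
```

So going from McCarthy's single vortex to Princeton's multi-element layout buys a sqrt(N) increase in thrust-producing surface area in the same core volume, exactly the trade the study describes.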

This was a thermal (0.37 eV) neutron spectrum reactor, fueled by a mix of UC2 and ZrC, varying the dilution level for greater moderation and increased thermal limits. It was surrounded by a 21 cm reflector of beryllium (a “standard reflector”).

From there, they studied the basic geometry of the reactor: the number of fuel elements and their fueled thickness, the core diameter and volume (the length was at a fixed ratio to the radius), and the shape, velocity, and number of bubbles (as well as the vapor entrainment losses of the fuel material).

This was a fairly limited study, despite its length, due to the limitations of the resources available. Transients and reactor kinetics were specifically excluded, the hydrogen was again replaced with vacuum in the calculations, the assumed temperature was higher than vapor entrainment would actually allow (4300 K, instead of 3600 K at 10 atm or 3800 K at 30 atm), the chamber pressure was limited to >1 atm, and the age-diffusion theory calculations only give results accurate to within an order of magnitude… but it’s still one of the most thorough studies of LNTRs I’ve found, and the most researched bubbler architecture. They pointed out the potential benefits of using 233U, or a larger but neutronically equivalent volume of 232Th (turning the reactor into a thermal breeder), in order to improve the overall vaporization characteristics, but this was not included in the study.

Barrett LNTR, 1964

The next year, W. Louis Barrett presented a variation of the Princeton LNTR at the AIAA Conference. The main distinction between the two designs was the addition of zirconium hydride in the areas between the fuel elements and the outer reflector; Barrett also presented the first results from a study of bubble behavior in the fuel (then being conducted at Princeton). The UC2/ZrC fuel was the same, as were the number of fuel elements and the reactor dimensions. The author concluded that a specific impulse of 1500-1550 seconds was possible, with a T/W of 1 at 100 atm, with thrust limited not by heat transfer but by available flow area.

Below are the two relevant graphs from his findings: the first is the point at which the fissile fuel itself would end up becoming captured by the passing gas, and the second looks at the maximum specific impulse any particular fissile fuel could theoretically offer. The image for the McCarthy reactor above was from the same paper.
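As a sanity check on figures like Barrett's, the ideal frozen-flow relation ve = sqrt(2·cp·T) gives a rough lower bound on specific impulse. A sketch with an assumed constant cp for hydrogen; note that it undershoots 1500+ seconds precisely because it ignores dissociation and the climb of cp at liquid-carbide temperatures:

```python
# Ballpark check on LNTR Isp claims: ideal frozen-flow exhaust velocity
# ve = sqrt(2*cp*T), with cp for H2 held constant at ~14.3 kJ/(kg*K).
# This deliberately ignores dissociation/recombination and variable cp,
# which is why it falls short of Barrett's 1500-1550 s at high temperature.
import math

G0 = 9.80665      # m/s^2, standard gravity
CP_H2 = 14_300.0  # J/(kg*K), rough constant-cp assumption for hydrogen

def ideal_isp_s(chamber_temp_k):
    """Frozen-flow, fully expanded specific impulse in seconds."""
    ve = math.sqrt(2.0 * CP_H2 * chamber_temp_k)
    return ve / G0

for temp in (3000, 4500, 6000):
    print(f"T = {temp} K -> Isp ~ {ideal_isp_s(temp):.0f} s (frozen flow)")
```

The gap between this simple bound (~1300 s at 6000 K) and the claimed 1500-1550 s is a useful reminder of how much of the LNTR's promised performance rides on real-gas hydrogen behavior.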

Final Work: Bubbles are Annoying

For this reactor to work, the heat must be adequately transferred from the fuel element to the propellant as it bubbles through the fuel mass radially. The amount of heat that needs to be removed, and the time and distance that it can be removed in, is a function of both the fuel and the bubbles of H2.
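For a rough feel of the cooling requirement, a simple energy balance ties reactor power to the hydrogen mass flow that must bubble through the fuel. The power level and temperature rise below are illustrative assumptions of mine, not from any of the studies:

```python
# Back-of-envelope energy balance for the bubbler: essentially all reactor
# power must leave in the propellant, so mdot = P / (cp * dT).
# The power and temperature rise are illustrative assumptions only.
P_REACTOR_W = 1.0e9   # assumed 1 GW thermal
CP_H2 = 14_300.0      # J/(kg*K), rough mean for hot hydrogen
DELTA_T_K = 3300.0    # assumed inlet-to-outlet temperature rise

mdot = P_REACTOR_W / (CP_H2 * DELTA_T_K)
print(f"required H2 flow ~ {mdot:.1f} kg/s per GW thermal")
```

Tens of kilograms of hydrogen per second, every gram of it passing through the molten fuel as bubbles, is why the bubble dynamics below matter so much.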

Sadly, the most comprehensive study of this has never been digitized, but for anyone who’s able to get documents digitized at Princeton University and would like to help make the mechanics of bubbler-type LNTRs more accessible, here’s the study: Liebherr, J.F., Williams, P.M., and Grey, J., “Bubble Motion Studies for the Liquid Core Nuclear Rocket,” Princeton University Aeronautical Engineering Report No. 673, December 1963. Apparently you can check it out if you can convince the librarians to excavate it, based on their website: https://catalog.princeton.edu/catalog/1534764.

McGuirk 1972

Here, a clear plastic housing was constructed which consisted of two main layers: an outer, solid casing which formed the outer body of the apparatus, and a perforated, inner cylinder, which simulated the fuel element canister. Water was used as the fuel element analog, and the entire apparatus was spun along its long axis to apply centrifugal acceleration to the water at various rotation rates. Pressurized air (again, at various pressures) was used in place of the hydrogen coolant. Stroboscopic photography was used to document bubble size, shape, and behavior, and these behaviors were then used to calculate the potential thermal exchange, vapor entrainment, and other characteristics of the behavior of this system.

One significant finding, based on Grey’s reporting, though, is that there’s a complex relationship between the dimensions, shape, velocity, and transverse momentum of the bubbles and their thermal uptake capacity, as well as their vapor entrainment of fuel element components. However, without access to the original, I can only hope someone can make this work accessible to the world at large (and if you’ve got technical knowledge and interest in the subject, and feel like writing about it, let me know: I’m more than happy to have you write a guest post here on this INSANELY complex topic).

The last reference to a bubbler LNTR I can find is from AIAA’s Engineering Notes from May 1972 by McGuirk and Park, “Propellant Flow Rate through Simulated Liquid-Core Nuclear Rocket Fuel Bed.” This paper brings up a fundamental problem that heretofore had not been addressed in the literature on bubblers, and quite possibly spelled their death knell.

Every study until this point greatly simplified, or ignored, two-phase flow thermodynamic interactions. If you’re familiar with thermodynamics, this is… kinda astounding, to be honest. It also leads me to a diversion that could be far longer than the two pages this report covers, but I won’t indulge myself. In short, two-phase flow modeling is used to capture the thermal transfer, hydro/gasdynamic properties, and other interactions between (in this case) a liquid and a gas, or a melting or boiling fluid going through a phase change.

This is… a problem, to say the least. Based on the simplified modeling, the fundamental thermal limitation for this sort of reactor was vapor entrainment of the fuel matrix, reducing the specific impulse and changing the proportions of elements in the matrix, causing potential phase change and neutronics complications.

This remains a problem, but it turned out not to be the main thermal limitation of the reactor: it was discovered that the amount of heat rejection available through bubbling the propellant through the fuel is not nearly as high as expected at lower propellant flow rates, while higher flow rates led to splattering as the bubbles burst, as well as unstable flow in the system. We’ll look at the consequences of this later, but needless to say this was a major hiccup in the development of the bubbler-type LNTR.

While there may be further experimentation on the bubbler type LNTR, this paper came out shortly before the cancellation of the vast majority of astronuclear funding in the US, and when research was restarted it appears that the focus had shifted to radiator-type LNTRs, so let’s move on to looking at them.

Bubbler-Specific Constraints

Fuel Element Thickness and Heat Transfer

One of the biggest considerations in a bubbler LNTR is the thickness of the fuel within each fuel canister. The fundamental trade-off is one of mechanical vs thermodynamic requirements: the smaller the internal radius at the fuel element’s interior surface, the higher the angular velocity has to be to maintain sufficient centrifugal force to contain the fuel, but also the greater the time and distance over which the bubbles can collect heat from the fuel.

In the Princeton study, the total volume within the fuel canister was roughly equally divided between fuel and propellant to achieve a comfortable trade-off between fuel mass, reactor volume, and thermal uptake in the propellant. In this case, they included the volume of the propellant as it passed through the fuel as part of the central annulus’ volume, which eases the neutronic calculations, but also induces a complication in the actual diameter of the central void: as propellant flow increases, the void diameter decreases, requiring a higher angular velocity to maintain sufficient centrifugal force.

A thinner fuel element, on the other hand, runs into the challenge of requiring a greater volume of propellant to pass through it to remove the same amount of energy, but an overall lower temperature of the propellant that is used. This, in turn, reduces the propellant’s final velocity, resulting in lower specific impulse but higher thrust. However, another problem is that the fluid mixture of the propellant/fuel can only contain so much gas before major problems develop in the behavior of the mixture. In an unpublished memorandum from 1963 (“Some Considerations on the Liquid Core Reactor Concept,” Mar 23), Bussard speculated that the maximum ratio of gas to fuel would be around 0.3 to 0.4; at this point the walls of the bubbles are likely to merge, converting the fuel into a very liquidy droplet core reactor (a concept that we’ll discuss in a future blog post), as well as leading to excess splattering of the fuel into the central void of the fuel element. While some sort of recapture system may be possible to prevent fuel loss, in a classic bubbler LNTR this is an unacceptable situation, and therefore this type of limitation (which may or may not actually be 0.3-0.4, something for future research to examine) intrinsically ties fuel element thickness to maximum propellant flow rates based on volume.
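To put rough numbers on this containment trade-off, here's a sketch of the spin rate needed for a given centrifugal acceleration at the fuel's inner surface, plus the in-fuel gas-volume ceiling implied by Bussard's speculated 0.3-0.4 limit. The acceleration target, radius, and canister volume are my own illustrative assumptions:

```python
# Sketch of the bubbler containment trade-off: the spin rate needed to hold
# the fuel against a chosen inner-surface acceleration, and the propellant
# volume ceiling implied by Bussard's speculated ~0.3-0.4 gas-fraction limit.
# All specific numbers are illustrative assumptions.
import math

def rpm_for_acceleration(accel_g, inner_radius_m):
    """Spin rate (RPM) so the fuel's inner surface feels accel_g gravities."""
    omega = math.sqrt(accel_g * 9.80665 / inner_radius_m)  # rad/s, from a = w^2 * r
    return omega * 60.0 / (2.0 * math.pi)

def max_gas_volume(fuel_canister_volume_m3, gas_fraction_limit=0.35):
    """Bussard's speculated ceiling on instantaneous gas volume in the fuel."""
    return fuel_canister_volume_m3 * gas_fraction_limit

print(f"{rpm_for_acceleration(100.0, 0.025):.0f} RPM for 100 g at r = 2.5 cm")
print(f"gas ceiling: {max_gas_volume(0.01):.4f} m^3 in a 0.01 m^3 canister")
```

Note how the two constraints pull against each other: a smaller inner radius (thicker fuel) demands a faster spin, while the gas-fraction ceiling caps how much propellant can be in transit through the fuel at any instant, and therefore the volumetric flow rate.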

There are some additional limits here as well, but we’ll discuss those in the next section. While the propellant will gain some additional energy during its passage out of the fuel element and toward the nozzle, as in the radiator-type LNTR, this will not be as significant, since the propellant enters along the entire length of the fuel element.

Bubble Dynamics

This is probably the single largest problem that a bubbler faces: the behavior of the bubbles themselves. As this is the primary means of cooling the fuel, as well as heating the propellant, the behavior of these bubbles, and the ability of the propellant stream to carry away the entirety of the heat generated in the fuel, is of absolutely critical importance. We looked briefly in the last section at the impacts of the thickness of the fuel, but what occurs within that distance is a far more complex topic than it may appear at first glance. With advances in two-phase flow modeling (which I’m unable to accurately assess), this problem may not be nearly as daunting as it was when this reactor was being researched, but in all likelihood this set of challenges is perhaps the single largest reason that the bubbler LNTR disappeared from the design literature when it did.

The other effect the bubbles have on the fuel is that they are the main source of vapor entrainment of fuel element materials in a bubbler: they are the longest-lived liquid/gas interface, and have the largest relative surface area. We aren’t going to discuss this particular dynamic in great depth, but its behavior compared to the inner-surface interactions will potentially be significant, both because these bubbles are the longest-lived liquid/gas interaction by surface area and because they are completely encircled by the fuel itself while undergoing heating (and therefore expansion, exacerbated by the decreasing pressure along the centrifugal acceleration gradient). One final note on this behavior: it may be possible that the bubbles become saturated with vapor during their heating, preventing uptake of more material while also increasing the thermal uptake of energy from the fuel (metal vapors, including Li and NaK, were suggested by Soviet NTR designers to deal with the thermal transparency of H2 in advanced NTR designs).

The behavior of the bubbles depends on a number of characteristics:

  1. Size: The smaller the bubble, the greater the surface area to volume ratio, increasing the amount of heat that can be absorbed in a given time relative to the volume, but also the less thermal energy that can be transported by each bubble. The size of the bubbles will increase as they move through the fuel element, gaining energy through heating, and therefore expanding and becoming less dense.
  2. Shape: Partially a function of size, shape can have several impacts on the behavior and usefulness of the bubbles. Only the smallest bubbles (how “small” depends on the fluids under consideration) can retain a spherical shape. The other two main shape classifications of bubbles in the LNTR literature are oblate spheroid and spherical cap. In practice, the higher propellant flow rates result in the largest, spherical cap-type bubbles in the fuel, which complicate both thermal transfer and motion modeling. One consequence of this is that the bubbles tend to have a high Reynolds number, leading to more turbulent behavior as they move through the fuel mass. Most standard two-phase modeling equations at the time had a difficult time adequately predicting the behavior of these sorts of bubbles. Another important consideration is that the bubbles will change shape to a certain degree as they pass through the fuel element, due to the higher temperature and lower centrifugal force being experienced on them as they move into the central void of the fuel element.
  3. Velocity: A function of centrifugal force, viscosity of the fuel, initial injection pressure of the propellant, density of the constituent gas/vapor mix, and other factors, the velocity of a bubble through the fuel element determines how much heat – and vapor – can be absorbed by a bubble of a given size and shape. An increase in velocity also changes the bubble shape, for instance from an oblate spheroid to a spherical cap. One thing to note is that the bubbles don’t move directly along the radius of the fuel element; both lateral and radial oscillations occur as the shape deforms and as centrifugal, convective, and other forces interact with the bubble. Whether this effect is significant enough to change the necessary modeling of the system will depend on a number of factors, including fuel element thickness, convective and Coriolis behavior in the fuel mass, bubble Reynolds number, and the angular velocity of the fuel element.
  4. Distribution: One concern in a bubbler LNTR is ensuring that the bubbles passing through the fuel mass don’t combine into larger conglomerations, and that the density of bubbles doesn’t result in a lack of overall cohesion in the fuel mass. This means that the distribution system for the bubbles must balance propellant flow rate; bubble size, velocity, and shape; non-radial behavior of the bubbles; and the overall gas fraction of the fuel element, based on the fuel element design being used.
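Two of these relationships can be put in miniature: the surface-to-volume ratio from point 1, and the Davies-Taylor terminal velocity correlation for spherical-cap bubbles (a standard two-phase-flow result, here with the centrifugal field standing in for gravity). The 100 g field and bubble sizes are illustrative assumptions, and real LNTR bubbles would deform and oscillate in ways this ignores:

```python
# Miniature versions of two bubble relationships from the list above:
# surface-to-volume ratio for a sphere, and the Davies-Taylor velocity
# for spherical-cap bubbles, with centrifugal acceleration replacing gravity.
# Field strength and bubble sizes are illustrative assumptions only.
import math

def surface_to_volume(diameter_m):
    """6/d for a sphere: smaller bubbles exchange heat faster per unit volume."""
    return 6.0 / diameter_m

def davies_taylor_velocity(diameter_m, accel_m_s2):
    """Terminal velocity of a spherical-cap bubble, U = 0.711*sqrt(a*d)."""
    return 0.711 * math.sqrt(accel_m_s2 * diameter_m)

accel = 100.0 * 9.80665  # assumed 100 g centrifugal field
for d in (0.002, 0.005, 0.010):  # assumed bubble diameters, meters
    print(f"d = {d*1000:>4.1f} mm: S/V = {surface_to_volume(d):6.0f} 1/m, "
          f"U = {davies_taylor_velocity(d, accel):.2f} m/s")
```

The sketch captures the core tension: larger bubbles move faster (shorter residence time) and carry less surface area per unit volume, so both heat pickup and vapor entrainment hinge on keeping the bubbles small and well distributed.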

As mentioned previously, the final paper on the bubbler I was able to find looked at the challenges of bubble dynamics in a simulated LNTR fuel element, in this case using water and compressed air. Several compromises had to be made, leading to unpredictable behavior of the propellant stream and the simulated fuel, which could be due to the challenges of using water to simulate ZrC/UC2, including insufficient propellant pressure, bubble behavior irregularities, and other problems. Perhaps the biggest challenge faced in this study is that there were three distinct behavioral regimes in the two-phase system: orderly (low propellant pressure), disordered (medium propellant pressure), and violent (high propellant pressure), each a function of the relationship between propellant flow and the centrifugal force being applied. As suspected, having too high a void fraction within the fuel mass led to splattering, and therefore fuel mass loss rates that were unacceptably high; but the point at which this violent disorder set in was at a low enough propellant flow rate that it was not assured the propellant could remove all of the thermal energy from the fuel element. If the energy output of each fuel element is reduced (by reducing the fissile component of the fuel while maintaining a critical mass, for instance), this can be compensated for, but only by losing power density and engine performance. The alternative, increasing the centrifugal force on the system, leads to greater material and mechanical challenges for the system.

Adequately modeling these characteristics was a major challenge at the time these studies were being conducted, and the number of unique circumstances involved in this type of reactor means that realistic modeling remains non-trivial. Advances in both computational and modeling techniques make this set of challenges more accessible than it was in the 1960s and 70s, though, which may make this sort of LNTR more feasible than it once was and restart interest in this unique architecture.

These constraints define many things in a bubbler LNTR, as they form the single largest thermodynamic constraint on the engine. Increasing the centrifugal force increases the demands on the fuel element canister (with its incorporated propellant distribution system) and on the mechanical systems that maintain the angular velocity needed for fuel containment, and it bounds the maximum thrust and isp for a given design, among other considerations.

Suffice to say, until the bubble behavior, and its interactions with the fuel mass, can be adequately modeled and balanced, the bubbler LNTR would require significant basic empirical testing to be able to be developed, and this limitation was probably a significant contributor to the reason that it hasn’t been re-examined since the early-to-mid 1970s.

The “Restart Problem”

The last major issue in a bubbler-type design is the “restart problem”: when the reactor is powered down, there will be a period of time when the fuel is still molten, requiring centrifugal containment, but the reactor being powered down allows for the fuel to be pressed into the pores of the fuel element canister, blocking the propellant passages.

One potential solution for the single fuel element design was proposed by L. Crocco, who suggested that the fuel material be used for the bubbling structure itself. At startup, the fuel would be completely solid, and would radiate heat in all directions until it became molten [ed. Note: according to Crocco, melting would proceed from the inner surface to the outer one, but I can’t find backup for that assumption of edge power peaking behavior, or how it would translate to a multi-fuel-element design], and propellant would be able to pass through the inner layers of the fuel element once the liquid/solid interface reached the pre-drilled propellant channels in the fuel element.

Another would be to continue passing the hydrogen propellant through the fuel element until the pressure needed to keep pumping the H2 reaches a certain threshold, then use a relief valve to vent the system elsewhere while continuing to reject the final waste heat until a suitable wall temperature has been reached. This would leave the solidified fuel less dense overall, and also less dense near the wall than at the inner surface of the fuel element. While this could maybe [ed. Note: speculation on my part] make the fuel more likely to melt from the inner surface to the outer one, the trapped H2 may also be just enough to cause power peaking around the bubbles, allow chemical reactions to occur during startup with unknown consequences, and cause other complications that I couldn’t even begin to guess at – but the tubes would be kept clear.

Wall Material Constraints

Other than the “restart problem,” additional constraints apply to the wall material. It needs to handle the rotational stresses of the spinning fuel element, be permeable to the propellant, and withstand rather extreme thermal gradients: gaseous hydrogen at near-cryogenic temperatures on the outside (the propellant will have already absorbed some heat from the reactor body), and roughly 6000 K on the inside, where the wall is in contact with the molten fuel.

The bearings holding the fuel element will also need to be designed with care. Not only must they accommodate the rather large amount of thermal expansion that will occur in all directions during reactor startup, they must also tolerate high rotation rates throughout the temperature range.
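To get a feel for the magnitudes involved, here’s a minimal Python sketch of the centrifugal loads on the spinning fuel element wall, using the standard thin-rotating-ring and thin-wall pressure-vessel formulas. Every dimension, density, and rotation rate below is an illustrative assumption of mine, not a value taken from any LNTR study:

```python
import math

# All numbers below are illustrative assumptions, not values from any LNTR design.
RHO_WALL = 19_300.0  # kg/m^3, tungsten-like wall material (assumed)
RHO_FUEL = 8_000.0   # kg/m^3, molten carbide-ish fuel mix (assumed)
R_OUTER  = 0.05      # m, outer radius of the spinning fuel element (assumed)
R_INNER  = 0.03      # m, inner (free-surface) radius of the fuel layer (assumed)
T_WALL   = 0.005     # m, wall thickness (assumed)

def hoop_stress(rpm: float) -> float:
    """Approximate hoop stress (Pa) in a thin rotating shell that must carry
    both its own mass and the centrifugal pressure of the liquid fuel."""
    omega = rpm * 2.0 * math.pi / 60.0
    # Self-stress of a thin rotating ring: sigma = rho * omega^2 * r^2
    sigma_self = RHO_WALL * omega**2 * R_OUTER**2
    # Centrifugal pressure of the liquid fuel annulus against the wall:
    # p = 0.5 * rho_fuel * omega^2 * (r_o^2 - r_i^2)
    p_fuel = 0.5 * RHO_FUEL * omega**2 * (R_OUTER**2 - R_INNER**2)
    # Thin-wall pressure-vessel hoop stress from that fuel load
    sigma_fuel = p_fuel * R_OUTER / T_WALL
    return sigma_self + sigma_fuel

if __name__ == "__main__":
    for rpm in (1_000, 3_000, 10_000):
        print(f"{rpm:6d} rpm -> ~{hoop_stress(rpm) / 1e6:6.1f} MPa")
```

At these assumed values the stresses come out in the tens of MPa, which is modest for refractory metals at room temperature; the real difficulty is that the wall must carry these loads (which grow with the square of the rotation rate) while porous and at extreme temperature.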

The Paths Not (Yet?) Taken

Perhaps due to the early time period in which the LNTR was explored, a number of design options don’t seem to have been explored in this sort of reactor.

One option is the use of a neutron moderator. Because the steep thermal gradients in this reactor keep the regions outside the fuel elements relatively cool, ZrH and other thermally sensitive moderators could be used to further thermalize the neutron spectrum. While this might not be explicitly required, it may help reduce the fissile loading of the reactor, and would not be likely to significantly increase reactor mass.

A host of other options are possible as well, if you can think of one, comment below!

Diffuser LNTR

The other option was brought up by Michael Turner of Project Persephone, in regards to the vapor entrainment and restart problems: what if you got rid of the holes in the walls of the fuel element, and the bubbles through the fuel mass, altogether? As we saw when discussing Project Rover, hydrogen gets through EVERYTHING, especially hot metals. This diffusion occurs molecule by molecule, not through bubbles, meaning that the possibility of vapor entrainment is eliminated. The downside is that the propellant mass flow would be drastically reduced, resulting in a higher-isp (since the fuel temperature can be raised once vapor losses are minimized), much-lower-thrust reactor than those designed before. As he points out, this could perhaps be combined with bubbling for a high-thrust, lower-isp mode, if “shutters” on the fuel element’s outer frit could be engineered. It might also be necessary to reduce the fissile density of the fuel to match the power output to the hydrogen flow rate, or to create a hybrid diffuser/radiator LNTR to balance the propellant flow and thermal output of the reactor.

I have not been able to calculate whether this would be feasible, and am reasonably skeptical, but found it an intriguing possibility.
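To get a rough sense of why the thrust would be so low, here’s a minimal Python sketch using Richardson’s equation for hydrogen permeation through a metal wall (flux proportional to the difference of the square roots of the hydrogen pressures on either side, per Sievert’s law). The permeability parameters and geometry are order-of-magnitude assumptions of mine, not measured values for any particular wall material:

```python
import math

R_GAS = 8.314  # J/(mol*K), universal gas constant

# Illustrative Sievert's-law permeability parameters for a hot refractory
# metal wall; these are order-of-magnitude assumptions, not measured data.
P0  = 1.0e-7  # mol H2 / (m * s * Pa^0.5), pre-exponential factor (assumed)
E_A = 1.3e5   # J/mol, activation energy of permeation (assumed)

def h2_permeation_rate(t_wall_k: float, p_up_pa: float, p_down_pa: float,
                       thickness_m: float, area_m2: float) -> float:
    """Steady-state molecular-hydrogen flow (mol/s) diffusing through a metal
    wall, via Richardson's equation: J = P(T) * (sqrt(p1) - sqrt(p2)) / d."""
    perm = P0 * math.exp(-E_A / (R_GAS * t_wall_k))
    flux = perm * (math.sqrt(p_up_pa) - math.sqrt(p_down_pa)) / thickness_m
    return flux * area_m2

if __name__ == "__main__":
    # ~2000 K wall, ~10 atm upstream, near-vacuum downstream, 5 mm wall, 1 m^2
    mol_per_s = h2_permeation_rate(2000.0, 1.0e6, 1.0e2, 0.005, 1.0)
    print(f"~{mol_per_s * 2.016e-3:.2e} kg/s of H2 through the wall")
```

With these assumptions, the flow through a full square meter of hot wall comes out to micrograms per second of hydrogen, which illustrates both the extremely low thrust of a pure diffuser and why a hybrid bubbler/diffuser mode was suggested.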

Conclusion

The bubbler liquid nuclear thermal rocket is a fascinating concept which has not been explored nearly as much as many other advanced NTR designs. The ability to fully thermalize the propellant to the highest fuel element temperature while maintaining cryogenic temperatures outside the fuel element is a rarity in NTR design, and offers many options for structures outside the fuel elements themselves. After over a decade of research at Princeton (and other centers), the basic dynamics of this type of reactor have been established, and with computational and modeling capabilities that were unavailable at the time of those studies, new and promising versions of this concept may come to light if anyone chooses to revisit the design.

The problems of vapor entrainment, fissile fuel loss, and reactor restart are significant, however, and affect many areas of the reactor design which have not been addressed in previous studies. Nevertheless, the possibility remains that this drive may one day make a useful stepping stone from the solid-fueled NTRs of tomorrow to the advanced NTRs of the decades ahead.

References

A Technical Report on the Conceptual Design Study of a Liquid-Core Nuclear Rocket, Nelson et al., 1963: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19650026954.pdf

The Liquid Core Nuclear Rocket, Grey, 1965 (pg 92): https://permalink.lanl.gov/object/tr?what=info:lanl-repo/lareport/LA-03229-MS

Specific Impulse of a Liquid Core Nuclear Rocket, Barrett Jr., 1963: https://arc.aiaa.org/doi/abs/10.2514/3.2141?journalCode=aiaaj

Propellant Flow Rate through Simulated Liquid-Core Nuclear Rocket Fuel Bed, McGuirk and Park, 1972: https://arc.aiaa.org/doi/abs/10.2514/3.61690?journalCode=jsr


Topaz International part II: The Transition to Collaboration


Hello, and welcome back to Beyond NERVA! Before we begin, I would like to announce that our Patreon page, at https://www.patreon.com/beyondnerva, is live! This blog consumes a considerable amount of my time, and being able to pay my bills is of critical importance to me. If you are able to support me, please consider doing so. The reward tiers are still very much up for discussion with my Patrons due to the early stage of this part of the Beyond NERVA ecosystem, but I can only promise that I will do everything I can to make it worth your support! Every dollar counts, both in terms of the financial and motivational support!

Today, we continue our look at the collaboration between the US and the USSR/Russia involving the Enisy reactor: Topaz International. Today, we’ll focus on the transfer from the USSR (which became Russia during this process) to the US, which was far more drama-ridden than I ever realized, as well as the management and bureaucratic challenges and amusements that occurred during the testing. Our next post will look at the testing program that occurred in the US, and the changes to the design once the US got involved. The final post will overview the plans for missions involving the reactors, and the aftermath of the Topaz International Program, as well as the recent history of the Enisy reactor.

For clarification: in this blog post (and the next one), the reactor will mostly be referred to as the Topaz-II; however, it is the same as the Enisy (Yenisey is another common spelling) reactor discussed in the last post. Some modifications were made by the Americans over the course of the program, which will be covered in the next post, but the basic reactor architecture is the same.

When we left off, we had looked at the testing history within the USSR. Accounts of how the US entered the list of customers for the Enisy reactor conflict: according to one document (Topaz-II Design History, Voss, linked in the references), the USSR approached a private (unnamed) US company in 1980; the company did not purchase the reactor, instead forwarding the offer up the chain in the US, but this account offers few details beyond that. According to another paper (US-Russian Cooperation… TIP, Dabrowski 2013, also linked), the exchange grew out of frustration within the Department of Defense over the development of the SP-100 reactor for the Strategic Defense Initiative. We’ll treat the second, more fleshed-out narrative as the beginning of the official exchange of technology between the USSR (and soon after, Russia) and the US.

The Topaz International Program (TIP) was the final name for a number of programs that ended up coming under the same umbrella: the Thermionic System Evaluation Test (TSET) program, the Nuclear Electric Propulsion Space Test Program (NEPSTP), and some additional materials testing as part of the Thermionic Fuel Element Verification Program (TFEVP). We’ll look at the beginnings of the overall collaboration in this post, with the details of TSET, NEPSTP, TFEVP, the potential lunar base applications, and the aftermath of the Topaz International Program, in the next post.

Let’s start, though, with the official beginnings of the TIP, and the challenges involved in bringing the test articles, reactors, and test stands to the US in one of the most politically complex times in modern history. One thing to note here: this was most decidedly not the US just buying a set of test beds, reactor prototypes, and flight units (all unfueled); this was a true international technical exchange. The American and Soviet (later Russian) organizations involved were true collaborators at every level, with the Russian head of the program, Academician Nikolay Nikolayevich Ponomarev-Stepnoy, remaining highly appreciative of the effort put into the program by his American counterparts as late as this decade, when he was still working to launch the reactor that resulted from the TIP, because it is not only an engineering masterpiece, but could still perform a very useful role in space exploration even today.

The Beginnings of the Topaz International Program

While the US had invested in the development of thermionic power conversion systems in the 1960s, the funding cuts of the 1970s that affected so many astronuclear programs also bit into the thermionic power conversion programs, leading to their cancellation or diminution to the point of insignificance. Several programs investigated this technology, but we won’t address them in this post, which is already going to run longer than typical even for this blog! An excellent resource for these programs, though, is Thermionics Quo Vadis by the Defense Threat Reduction Agency, available in PDF here: https://www.nap.edu/catalog/10254/thermionics-quo-vadis-an-assessment-of-the-dtras-advanced-thermionics (paywall warning).

Our story begins in detail in 1988. The US was at the time heavily invested in the Strategic Defense Initiative (SDI), whose main in-space nuclear power supply was to be the SP-100 reactor system (another reactor that we’ll be covering in a Forgotten Reactors post or two). However, certain key players in the decision-making process, including Dr. Richard Verga of the Strategic Defense Initiative Organization (SDIO), the organizational lynchpin of the SDI, had doubts. The SP-100 was growing in both cost and development time, leading Verga to look elsewhere to either meet the specific power needs of the SDI, or to find a fission power source that could operate as a test-bed for the SDI’s technologies.

Investigations into the astronuclear capabilities of other nations led Dr. Verga to conclude that the most advanced designs were those of the USSR, which had just launched the two Topol-powered Plasma-A satellites. This led him to invite a team of Soviet space nuclear power program personnel to the Eighth Albuquerque Space Nuclear Power Symposium (the predecessor to today’s Nuclear and Emerging Technologies for Space, or NETS, conference, which had just wrapped up at the time of this writing) in January of 1991. The invitation was accepted, and the team brought a mockup of the Topaz with them. The night after their presentation, Academician Nikolay Nikolayevich Ponomarev-Stepnoy, the Soviet head of the Topol program, along with his team of visiting academicians, met with Joe Wetch, the head of Space Power Incorporated (SPI, a company made up mostly of SNAP veterans working to make space fission power plants a reality), and they came to a general understanding: the US should buy this reactor from the USSR – assuming both governments could be brought to agree to the sale. The terms of this “sale” would take significant political and bureaucratic wrangling, as we’ll see, and sadly the problems started less than a week later, thanks to the Soviets’ generosity in bringing a mockup of the Topaz reactor with them. While the researchers were warmly welcomed, and they themselves seemed to enjoy their time at the conference, when it came time to leave a significant bureaucratic hurdle was placed in their path.

Soviet researchers at Space Nuclear Power Symposium, 1991, image Dabrowski

This mockup, and the headaches surrounding returning it home with the researchers, were a harbinger of things to come. While the mockup was non-functional, the Nuclear Regulatory Commission claimed that, since it could theoretically be modified to be functional (a claim which I haven’t found any evidence for, but which is theoretically possible), it was a “nuclear utilization facility” which could not be shipped outside the US. Five months later, and with the direct intervention of numerous elected officials, including US Senator Pete Domenici, the mockup was finally returned to Russia. This decision by the NRC would shape a different approach to importing further reactors from the USSR and Russia when the time came, but whatever damage the incident caused to the newly-minted (hopeful) partnership was largely weathered thanks to the interpersonal relationships that had developed in Albuquerque.

Teams of US researchers (including Susan Voss, who was the major source for the last post) traveled to the USSR to inspect the facilities used to build the Enisy (Yenisey is another common spelling; the reactor was named after the river in Siberia). These visits started in Moscow, with Drs Wetch and Britt of SPI, where a revelation came to the American astronuclear establishment: there wasn’t one thermionic reactor in the USSR, but two, and the more promising one was available for potential export and sale!

These visits continued, and personal relationships between the team members from both sides of the Iron Curtain grew. Due to headaches and bureaucratic difficulties in getting technical documentation translated effectively in the timeframe that the program required, often it was these interpersonal relationships that allowed the US team to understand the necessary technical details of the reactor and its components. The US team also visited many of the testing and manufacturing locations used in the production and development of the Enisy reactor (if you haven’t read it yet, check out the first blog post on the Enisy for an overview of how closely these were linked), as well as observing testing in Russia of these systems. This is also the time when the term “Topaz-II” was coined by one of the American team members, to differentiate the reactor from the original Topol (known in the west as Topaz, and covered in our first blog post on Soviet astronuclear history) in the minds of the largely uninformed Western academic circles.

The seeds of the first cross-Iron Curtain technical collaboration on astronuclear systems development, planted in Albuquerque, were germinating in Russian soil.

The Business of Intergovernmental Astronuclear Development

During this time, due to the bureaucratic headaches involved in both the US and the USSR (I’ve never found any information showing that the two teams themselves ever felt there were problems in the technological exchange; the problems all seem to have been political and bureaucratic in nature, and to have come exclusively from outside the framework of what would become known as the Topaz International Program), two companies were founded to provide administrative touchstones for various points in the technology transfer program.

The first was International Scientific Products (ISP), which from its founding in 1989 was created specifically to facilitate the purchase of the reactors for the US, and which worked closely with the SDIO. This company was the private lubricant that allowed the US government to purchase these reactor systems (for reasons too complex to get into in this blog post). The two main players in ISP were Drs Wetch and Britt, who also appear to have been the main administrative driving force behind the visits to the USSR. The company provided a legal means to transmit non-classified data from the USSR to the US, and vice versa. After each visit, Wetch and Britt would meet with Dr. Verga, who was still intimately involved and kept his management at SDIO consistently briefed on the progress of the technical exchange and the eventual purchase of the reactors.

The second was the International Nuclear Energy Research and Technology corporation, known as INERTEK. This was a joint US-USSR company, involving the staff of ISP as well as individuals from all of the Soviet design bureaus, manufacturing centers (except possibly the facility in Tallinn, though I haven’t been able to confirm this due to the extreme loss of documentation from that facility following the collapse of the USSR), and research institutions that we saw in the last post. These included the Kurchatov Institute of Atomic Energy (headed by Academician and Director Ponomarev-Stepnoy, the head of the Russian portion of the Topaz International Program), the Scientific Industrial Association “LUCH” (represented by Deputy Director Yuri Nikolayev), the Central Design Bureau for Machine Building (represented by Director Vladimir Nikitin), and the Keldysh Institute of Rocket Research (represented by Director Academician Anatoli Koreteev). INERTEK was the vehicle by which the technology and, more importantly to the bureaucrats, the hardware would be exported from the USSR to the US. Academician Ponomarev-Stepnoy was the director of the company, and Dr. Wetch was his deputy. Due to the sensitive nature of the company’s focus, it required approval from the Ministry of Atomic Energy (Minatom) in Moscow, which was finally granted in December 1990.

In order to gain this approval, the US had to agree to a number of demands from Minatom, including that the Topaz-II reactors be returned to Russia after testing and that the reactors not be used for military purposes. Dr. Verga insisted on additional international cooperation, including staff from the UK and France. This was not only a cost-saving measure, but reinforced the international and transparent nature of the program, and made military use more challenging.

While this was occurring, the Americans insisted that the non-nuclear testing of the reactors be duplicated in the US, to ensure they met American safety and design criteria. This was a major sticking point for Minatom, and delayed approval of the export for months, but the Americans did not slow their preparations for building a test facility. Due to the concentration of space nuclear power research resources in New Mexico (Los Alamos and Sandia National Laboratories, the US Air Force Phillips Laboratory, and the University of New Mexico’s New Mexico Engineering Research Institute, or NMERI), as well as the presence of the powerful Republican Senator Pete Domenici of New Mexico to smooth political feathers in Washington, DC, it was decided to test the reactors in Albuquerque, NM. The USAF purchased an empty building from the NMERI, and hired personnel from UNM to handle the human resources side of things. The selection of UNM emphasized the transparent, exploratory nature of the program (an absolute requirement for Minatom), and the university had considerable organizational flexibility compared to either the USAF or the DOE. According to the contract manager, Tim Stepetic:

“The University was very cooperative and accommodating… UNM allowed me to open checking accounts to provide responsive payments for the support requirements of the INTERTEK and LUCH contracts – I don’t think they’ve ever permitted such checkbook arrangements either before or since…”

These freedoms were necessary for working with the Russian team members, who were in culture shock and dealing with very different organizational restrictions than their American counterparts. As has been observed both before and since, the Russian scientists and technicians preferred to save as much of their per diem (generous by their standards) as possible for after the project, when the money would go further at home; local travel expenses were covered as well. One of the technicians had to return to Russia for his son’s brain tumor operation, and was asked by the surgeon to bring back some Tylenol, a request that was rapidly acquiesced to, with some bemusement, by his American colleagues. In addition, personal calls (of a limited nature, due to international calling rates at the time) were allowed so the scientists and technicians could keep in touch with their families and reduce their homesickness.

As should surprise no one, the highly unusual nature of this financial arrangement, as well as the large amount of money involved (which came to about $400,000 in 1990s dollars), meant that a routine audit led to the General Accounting Office being called in to investigate the arrangement later. Fortunately, no significant irregularities were found in the financial dealings of the NMERI, and the program continued. Additionally, the reuse of over $500,000 worth of equipment scrounged from SNL and LANL’s junk yards allowed for incredible cost savings in the program.

With the business side of the testing underway, it was time to begin preparing for the testing of the reactors in the US, beginning with the conversion of an empty building into a non-nuclear test facility. The building’s conversion, with Frank Thome heading the facilities modification side and Scott Wold serving as the TSET training manager, began in April of 1991, only four months after Minatom’s approval of INERTEK. Over the course of the next year, the facility would be prepared for testing, completed just before the delivery of the first shipment of reactors and equipment from Russia.

By this point, the test program had grown to include two programs. The first was the Thermionic Systems Evaluation Test (TSET), which would study mechanical, thermophysical, and chemical properties of the reactors to verify the data collected in Russia. This was to flight-qualify the reactors for American space mission use, and establish the collaboration of the various international participants in the Topaz International Program.

The second program was the Nuclear Electric Propulsion Space Test Program (NEPSTP). Run by the Johns Hopkins Applied Physics Laboratory, and funded by the SDIO’s successor, the Ballistic Missile Defense Organization, it proposed an experimental spacecraft that would use a set of six different electric thrusters, as well as equipment to monitor the environmental effects of both the thrusters and the reactor during operation. Design work for the spacecraft began almost immediately after the TSET program began, and the program was of interest to both the American and Russian parts of the team.

Later, one final program would be added: the Thermionic Fuel Element Verification Program (TFEVP). This program, which predated TIP, is where many of the UK and French researchers were involved, and it focused on increasing the lifetime of the thermionic fuel elements from one year (the best US estimate before TSET) to at least three, and preferably seven, years. This would be achieved through better knowledge of materials properties, as well as improved manufacturing methods.

Finally, there were smaller programs attached to the big three, looking at materials effects in intense radiation and plasma environments, long-term contact with cesium vapor, chemical reactions within the hardware itself, and the surface electrical properties of various ceramics. These tests, while not the primary focus of the program, WOULD contribute to the understanding of the environment an astronuclear spacecraft would experience, and would significantly affect future spacecraft designs. They would occur in the same building as the TSET testing, and the teams involved would frequently collaborate on all projects, leading to a very well-integrated and collegial atmosphere.

Reactor Shipment: A Funny Little Thing Occurred in Russia

While all of this was going on in the Topaz International Program, major changes were happening throughout the USSR: it was falling apart. From the uprisings in Latvia and Lithuania (violently put down by the Soviet military), to the fall of the Berlin Wall, to the ultimate lowering of the hammer and sickle from the Kremlin in December 1991 and its replacement with the tricolor of the Russian Federation, the fall of the Iron Curtain was accelerating. The TIP teams continued to work on their program, knowing that it offered hope for the Topaz-II project as well as a vehicle for closer technological collaboration with their former adversaries, but the complications would rear their heads in this small group as well.

The American purchase of the Topaz reactors was approved by President George H.W. Bush on 27 March, 1992, during a meeting with his Secretary of State, James Baker, and Secretary of Defense, Richard Cheney. This freed the American side of the collaboration to do what needed to be done to make the program happen, as well as to begin bringing in Russian specialists for test facility preparations.

Trinity site obelisk

The first group of 14 Russian scientists and technicians to arrive in the US for the TSET program landed on April 3, 1992, but only got to sleep for a few hours before being woken up by their hosts (who also brought their families) for a long van journey. This was something the Russians greatly appreciated, because April 4 is a special day in one small part of the world: it’s one of only two days of the year that the Trinity Site, the location of the first nuclear explosion in history, is open to the public. According to one of them, Georgiy Kompaniets:

“It was like for a picnic! And at the entrance to the site there were souvenir vendors selling t-shirts with bombs and rocks supposedly from the epicenter of the blast…” (note: no trinitite is allowed to be collected at the Trinity site anymore, and according to some interpretations of federal law it is considered low-level radioactive waste from weapons production)

The Russians were a hit at the Trinity site, being the center of attention, and were even interviewed for television. They also got to tour the McDonald ranch house, where the Gadget’s core was assembled. The visit made a huge impression on the visiting Russians, and did wonders in cementing the team’s culture.

Hot air balloon in New Mexico, open source

Another cultural exchange that occurred later (exactly when, I’m not sure) was the chance to ride in a hot air balloon. Albuquerque’s International Balloon Fiesta is the largest hot air ballooning event in the world, and whenever atmospheric conditions are right, a half dozen or more balloons can be seen floating over the city. A local ballooning club, having heard about the Russian scientists and technicians (who had become minor local celebrities at this point), offered them a free hot air balloon ride, an offer the Russians universally accepted, since none of them had ever flown in one.

According to Boris Steppenov:

“The greatest difficulty, it seemed, was landing. And it was absolutely forbidden to touch down on the reservations belonging to the Native Americans, as this would be seen as an attack on their land and an affront to their ancestors…

[after the flight] there were speeches, there were oaths, there was baptism with champagne, and many other rituals. A memory for an entire life!”

The balloon that Steppenov flew in did indeed land on the Sandia Pueblo Reservation, but before touchdown the tribal police were notified, and they showed up to the landing site, issued a ticket to the ballooning company, and allowed them to pack up and leave.

These events, as well as other uniquely New Mexican experiences, cemented the TIP team into a group of lifelong friends, and would reinforce the willingness of everyone to work together as much as possible to make TIP as much of a success as it could be.

C-141 taking off, image DOD

In late April, 1992, a team of US military personnel (led by Army Major Fred Tarantino of SDIO, with AF Major Dan Mulder in charge of logistics), including a USAF Airlift Control Element team, landed in St. Petersburg aboard a C-141 and a C-130, carrying the equipment needed to properly secure the test equipment and reactors that would be flown to the US. Overflight permissions were secured, and special packing cases were prepared, especially for the very delicate tungsten TISA heaters. These preparations were complicated by the lack of effective packing materials for the heaters, until Dr. Britt (of both ISP and INERTEK) had the idea of using foam bedding pads from a furniture store. Due to the large size and weight of the equipment, though, the C-141 and C-130 were not sufficient for the airlift, so the teams had to wait on the larger C-5 Galaxy transports intended for this task, which were en route from the US at the time.

Sadly, when the time came to present the export licenses to the customs officer, he refused to honor them, because they were Soviet documents and the Soviet Union no longer existed. This led Academician Ponomarev-Stepnoy and INERTEK’s director, Benjamin Usov, to travel to Moscow on April 27 to meet with the Chairman of the Government, Alexander Shokhin, to get new export licenses. After consulting with the Minister of Foreign Economic Relations, Sergei Glazev, a one-time, urgent export license was issued for the shipment to the US, and was sent via fast courier to St. Petersburg on May 1.

C-5 Galaxy, image USAF

The C-5s, though, weren’t in Russia yet. Once they landed, a complex paperwork ballet had to be carried out to get the reactors and test equipment to America. First, the reactors were purchased by INERTEK from the Russian bureaus responsible for the various components. Then, INERTEK sold the reactors and equipment to Dr. Britt of ISP once the equipment was loaded onto the C-5, and Dr. Britt immediately resold the equipment to the US government. This avoided the import issues that would have arisen on the US side if the equipment had been imported by ISP, a private company, or INERTEK, a Russian-led international consortium.

One of the C-5s landed in St. Petersburg on May 6, was loaded with the two Topaz-II reactors (V-71 and Ya-21U) and as much equipment as could fit in the aircraft, and left the same day, arriving in Albuquerque on May 7. The other developed maintenance problems and was forced to wait in England for five days, finally arriving in St. Petersburg on May 8. The rest of the equipment was loaded up (including the Baikal vacuum chamber), and the plane left later that day. Sadly, it ran into difficulties again upon reaching England, and was forced to wait two more days for repairs, arriving in Albuquerque on May 12.

Preparations for Testing: Two Worlds Coming Together

Unpacking and beryllium checks at TSET Facility in Albuquerque, Image DOE/NASA

Once the equipment was in the US, detailed examination of the payload was required due to the beryllium used in the reflectors and control drums of the reactor. Berylliosis, caused by breathing in beryllium dust, is a serious health issue, and one that the DOE takes incredibly seriously (they’ll evacuate an entire building at the slightest possibility that beryllium dust could be present, at the cost of millions of dollars on occasion). Detailed checks were performed both before the equipment was removed from the aircraft and during the unpackaging of the reactors. No beryllium dust was detected, however, and the program continued with minimal disruption.

Then it came time to unbox the equipment, but another problem arose: this required the approval of the director of the Central Design Bureau of Heavy Machine Building, Vladimir Nikitin, who was in Moscow. Rather than calling him for every decision, Dr. Britt called and got approval for Valery Sinkevych, the Albuquerque representative for INERTEK, to have discretionary control over these sorts of decisions. The approval was given, greatly smoothing the process of both setup and testing during TIP.

Sinkevych, Scott Wold, and Glen Schmidt worked closely together in the management of the project. All three were on hand to answer questions, smooth out difficulties, and resolve other challenges in the testing process, to the point that the Russians began calling Schmidt “The Walking Stick.” His response was classic: that’s my style, “Management by Walking Around.”

Soviet technicians at TSET Test Facility, image Dabrowski

Every day, Schmidt would hold a lab-wide meeting, ensuring everyone was present, before walking everyone through the procedures that needed to be completed for the day, as well as ensuring that everyone had the resources they needed to complete their tasks. He also made sure that he was aware of any upcoming issues, and worked to resolve them (mostly through Wetch and Britt) before they became a problem for the facility preparations. This was a revelation to the Russian team, who despite working on the program (in Russia) for years, often didn’t know anything beyond the component they worked on. This synthesis of knowledge would continue throughout the program, leading to a far more capable and cohesive team.

Initial estimates were that it would take nine months to prepare the facility and equipment for testing of the reactors. Due to both the well-integrated team and the more relaxed management structure of the American effort, this was completed in only 6 ½ months. According to Sinkevych:

“The trust that was formed between the Russian and American side allowed us in an unusually short time to complete the assembly of the complex and demonstrate its capabilities.”

This was so incredible to Schmidt that he went to Wetch and Britt, asking for a bonus for the Russians due to their exceptional work. This was approved, and paid proportional to technical assignment, duration, and quality of workmanship. This was yet another culture shock for the Russian team, who had never received a bonus before. The response was twofold: greatly appreciative, and also “if we continue to save time, do we get another bonus?” The answer to this was a qualified “perhaps,” and indeed one more, smaller bonus was paid due to later time savings.

Installation of Topaz-II reactor at TSET Facility, image DOE/NASA

Mid-Testing Drama, and the Second Shipment

Both in the US and Russia, there were many questions about whether this program was even possible. The reason for its success, though, is unequivocally that it was a true partnership between the American and Russian parts of TIP. This was the first Russian-US government-to-government cooperative program after the fall of the USSR. Unlike the Nunn-Lugar agreement that followed, TIP was always intended to be a true technological exchange, not an assistance program. This is one of the main reasons why the participants of TIP still look fondly and respectfully at the project, while most Russian (and other former Soviet) participants in N-L consider it demeaning, condescending, and not something to ever be repeated. More than this, though, the Russian design philosophy that allowed full-system, non-nuclear testing of the Topaz-II permanently changed American astronuclear design philosophy, and left its mark on every subsequent astronuclear design.

However, not all organizations in the US saw it this way. Drs. Thorne and Mulder provided excellent bureaucratic cover for the testing program, preventing the majority of the politics of government work from trickling down to the management of the test itself. Even so, as Scott Wold, the TSET training manager, pointed out, they would still get letters from outside organizations stating:

“[After careful consideration] they had concluded that an experiment we proposed to do wouldn’t be possible and that we should just stop all work on the project as it was obviously a waste of time. Our typical response was to provide them with the results of the experiment we had just wrapped up.”

As mentioned, this was not uncommon, but it was only a minor annoyance. In fact, if anything it cemented the practicality of collaborations of this nature, and over time the program’s demonstrated capabilities reduced the friction it faced. Other headaches would arise, but overall they were relatively minor.

Sadly, one of the programs, NEPSTP, was canceled out from under the team near the completion of the spacecraft. The new Clinton administration was not nearly as open to the use of nuclear power as the Bush administration had been (to put it mildly), and as such the program ended in 1993.

One type of drama that was avoided was the second shipment of four more Topaz-II reactors from Russia to the US: the Eh-40, Eh-41, Eh-43, and Eh-44. The use of these designations directly contradicts the earlier-specified prefixes for Soviet determinations of capabilities (the systems were built, then assessed for mechanical, thermal, and nuclear suitability after construction; for more on this see our first Enisy post here). These units were for:

  1. Eh-40: thermal-hydraulic mockup with a functioning NaK heat rejection system, for “cold-test” testing of thermal covers during integration, launch, and orbital injection.
  2. Eh-41: structural mockup for mechanical testing, and demonstration of the mechanical integrity of the anticriticality device (more on that in the next post), the modified thermal cover, and American launch vehicle integration.
  3. Eh-43 and Eh-44: potential flight systems, which would undergo modal testing, charging of the NaK coolant system, fuel loading and criticality testing, mechanical vibration, shock, and acoustic tests, 1000-hour thermal vacuum steady-state stability and NaK system integrity tests, and others before launch.

An-124, image Wikimedia

How was drama avoided in this case? The previous shipment was done by the US Air Force, which has many regulations involved in the transport of any cargo, much less flight-capable nuclear reactors containing several toxic substances. This led to delays in approval the first time this shipment method was used. The second time, in 1994, INTERTEK and ISP contracted a private Russian cargo company, Volga-Dnepr Airlines, to transport these four reactors, which flew them on an An-124 from St. Petersburg to Albuquerque.

For me personally, this was a very special event, because I was there. My dad got me out of school (I wasn’t even a teenager yet), drove me out to the landing strip fence at Kirtland AFB, and we watched with about 40 other people as this incredible aircraft landed. He told me about the shipment, and why they were bringing it in, and the seed of my astronuclear obsession was planted.

No beryllium dust was found in this shipment, and the reactors were prepared for testing. Additional thermophysical testing, as well as design work for modifications needed to get the reactors flight-qualified and able to be integrated with the American launchers, were conducted on these reactors. These tests and changes will be the subject of the next blog post, as well as the missions that were proposed for the reactors.

These tests would continue until 1995, and the end of testing in Albuquerque. All reactors were packed up, and returned to Russia per the agreement between INTERTEK and Minatom. The Enisy would continue to be developed in Russia until at least 2007.

More Coming Soon!

The story of the Topaz International Program is far from over. The testing in the US, as well as the programs that the US/Russian team had planned, have barely been touched on beyond cursory mentions. These programs, as well as the end of the Topaz International Program and the possible future of the Enisy reactor, will be the focus of our next blog post, the final one in this series.

This program provided a foundation, as well as a harbinger of challenges to come, in international astronuclear collaboration. As such, I feel that it is a very valuable subject to spend a significant amount of time on.

I hope to have the next post out in about a week and a half to two weeks, but the amount of research necessary for this series has definitely surprised me. The few documents available that fill in the gaps are, sadly, behind paywalls that I can’t afford to breach at my current funding availability.

As such, I ask, once again, that you support me on Patreon. You can find my page at https://www.patreon.com/beyondnerva; every dollar counts.

References:

US-Russian Cooperation in Science and Technology: A Case Study of the TOPAZ Space-Based Nuclear Reactor International Program, Dabrowski 2013 https://www.researchgate.net/profile/Richard_Dabrowski/publication/266516447_US-Russian_Cooperation_in_Science_and_Technology_A_Case_Study_of_the_TOPAZ_Space-Based_Nuclear_Reactor_International_Program/links/5433d1e80cf2bf1f1f2634b8/US-Russian-Cooperation-in-Science-and-Technology-A-Case-Study-of-the-TOPAZ-Space-Based-Nuclear-Reactor-International-Program.pdf

Topaz-II Design Evolution, Voss 1994 http://gnnallc.com/pdfs/NPP%2014%20Voss%20Topaz%20II%20Design%20Evolution%201994.pdf

Categories
Development and Testing Fission Power Systems History Spacecraft Concepts

US Astro-nuclear History part 2: SNAP-8, NASA’s Space Station Power Supply

Hello, and welcome back to Beyond NERVA! As some of you may have noticed, the website has moved! Yes, we’re now at beyondnerva.com! I’m working on updating the webpage, and am getting the pieces together for a major website redesign (still a ways off, but lots of the pieces are starting to fall into place) to make the site easier to navigate and more user friendly. Make sure to update your bookmarks with this new address! With that brief administrative announcement out of the way, let’s get back to our look at in-space fission power plants.

Today, we’re going to continue our look at the SNAP program, America’s first major attempt to provide electric power in space using nuclear energy, and finishing up our look at the zirconium hydride fueled reactors that defined the early SNAP reactors by looking at the SNAP-8, and its two children – the 5 kW Thermoelectric Reactor and the Advanced Zirconium Hydride Reactor.

SNAP 8 was the first reactor designed with crewed space stations in mind. While SNAP-10A was a low-power system (at 500 watts when flown, later upgraded to 1 kW), and SNAP-2 was significantly larger (3 kW), there was a potential need for far more power. Crewed space stations take a lot of power (the ISS uses close to 100 kWe, as an example), and neither the SNAP-10 nor the SNAP-2 was capable of powering the space stations that NASA was in the beginning stages of planning.

Initially, the design called for far higher power, 30-60 kilowatts of electricity: a supply that could power a truly impressive outpost for humanity in orbit. However, the Atomic Energy Commission and NASA (which was just coming into existence at the time this program was started) didn’t want to throw the baby out with the bathwater, as it were. While the reactor was far higher powered than the SNAP 2 reactor that we looked at last time, many of the power system’s components were shared: both used the same fuel (with minor exceptions), both used similar control drum structures for reactor control, both used mercury Rankine cycle power conversion systems, and, perhaps most attractively, each was able to evolve with lessons learned from the other part of the program.

While SNAP 8 never flew, it was developed to a very high degree of technical understanding, so that if the need for the reactor arose, it would be available. One design modification late in the SNAP 8 program (when the reactor wasn’t even called SNAP 8 anymore, but the Advanced Zirconium Hydride Reactor) had a very rare attribute in astronuclear designs: it was shielded on all sides for use on a space station, providing more than twice the electrical power available to the International Space Station without any of the headaches normally associated with approach and docking with a nuclear powered facility.

Let’s start back in 1959, though, with the SNAP 8, the first nuclear electric propulsion reactor system.

SNAP 8: NASA Gets Involved Directly

The SNAP 2 and SNAP 10A reactors were both collaborations between the Atomic Energy Commission (AEC), who were responsible for the research, development, and funding of the reactor core and primary coolant portions of the system, and the US Air Force, who developed the secondary coolant system, the power conversion system, the heat rejection system, the power conditioning unit, and the rest of the components. Each organization had a contractor that they used: the AEC used Atomics International (AI), one of the movers and shakers of the advanced reactor industry, while the US Air Force went to Thompson Ramo Wooldridge (better known by their acronym, TRW) for the SNAP-2 mercury (Hg) Rankine turbine and Westinghouse Electric Corporation for the SNAP-10’s thermoelectric conversion unit.

S8ER Slightly Disassembled
SNAP-8 with reflector halves slightly separated, image DOE

1959 brought NASA directly into the program on the reactor side of things, when they requested a fission reactor in the 30-60 kWe range for up to one year; one year later the SNAP-8 Reactor Development Program was born. It would use a similar Hg-based Rankine cycle to the SNAP-2 reactor, which was already under development, but the increased power requirements and the unique environment of the power conversion system necessitated significant redesign work, which was carried out by Aerojet General as the prime contractor. This led to a 600 kWt reactor core with a 700 C outlet temperature. As with the SNAP-2 and SNAP-10 programs, the SNAP-8’s reactor core was funded by the AEC, but in this case the power conversion system was the funding responsibility of NASA.

The fuel itself was similar to that in the SNAP-2 and -10A reactors, but the fuel elements were far longer and thinner than those of the -2 and -10A. Because the fuel element geometry was different, and the power level of the reactor was so much higher than the SNAP-2 reactor, the SNAP-8 program required its own experimental and developmental reactor program to run in parallel to the initial SNAP Experimental and Development reactors, although the materials testing undertaken by the SNAP-2 reactor program, and especially the SCA4 tests, were very helpful in refining the final design of the SNAP-8 reactor.

The power conversion system for this reactor was split in two: identical Hg turbines would be used, with either one or both running at any given time depending on the power needs of the mission. This allowed for more flexibility in operation, and also simplified the design challenges of the turbines themselves: it’s easier to design a turbine for a smaller power output range than a larger one. If the reactor was at full power and both turbines were used, the design was supposed to produce up to 60 kW of electrical power, while the minimum output of a single turbine would be in the 30 kWe range. Another advantage was that if one turbine was damaged, the reactor could continue to produce power.

S8ER Assembly 1962
SNAP-8 Experimental Reactor being assembled, 1962

Due to the much higher power levels, an extensive core redesign was called for, meaning that different test reactors would be needed to verify the design. While the fuel elements were very similar, and the overall design philosophy was developed in parallel with the SNAP-2/10A program, there was only so much that the tests done for the USAF system could do to help the new program. This led to the SNAP-8 development program, which began in 1960, and had its first reactor, the SNAP-8 Experimental Reactor, come online in 1963.

SNAP-8 Experimental Reactor: The First of the Line

S8ER Cutaway
Image DOE

The first reactor in this series, the SNAP 8 Experimental Reactor (S8ER), went critical in May 1963, and operated until 1965. It operated for 2,522 hours above 600 kWt, and over 8,000 hours at lower power levels. The fuel elements for the reactor were 14 inches in length and 0.532 inches in diameter, with uranium-zirconium hydride (U-ZrH, the same basic fuel type as the SNAP-2/10A system that we looked at last time) enriched to 93.15% 235U, with 6 × 10^22 hydrogen atoms per cubic centimeter.

S8ER Cutaway Drawing
S8ER Fuel Element

The biggest chemical change in this reactor’s fuel elements compared to the SNAP-2/10A system was the burnable poison: instead of using gadolinium (which would absorb neutrons, then decay into a neutron-transparent element as the reactor underwent fission over time), the S8ER used samarium. The reasons for the change are rather esoteric, relating to the neutron spectrum of the reactor, the particular fission products and their ratios, thermal and chemical characteristics of the fuel elements, and other factors. However, the change was so advantageous that the new burnable poison would eventually be used in the SNAP-2/10A system as well.

S8ER Cross Section

The fuel elements were still loaded in a triangular array, but formed more of a cylinder than a hexagon like the -2/10A’s, with small internal reflectors to fill out the smooth cylinder of the pressure vessel. The base and head plates that held the fuel elements were very similar to the smaller design’s, but obviously had more holes to hold the increased number of fuel elements. The NaK-78 coolant (identical to the SNAP-2/10A system’s) entered the bottom of the reactor into a space in the pressure vessel (a plenum), flowed through the base plate and up the reactor, then exited the top of the pressure vessel through an upper plenum. A small startup neutron source (sort of like a spark plug for a reactor) was mounted to the top of the pressure vessel, by the upper coolant plenum. The pressure vessel itself was made of 316 stainless steel.

Instead of four control drums, the S8ER used six void-backed control drums. These were directly derived from the SNAP-2/10A control system. Two of the drums were used for gross reactivity control – either fully rotated in or out, depending on if the reactor is under power or not. Two were used for finer control, but at least under nominal operation would be pretty much fixed in their location over longer periods of time. As the reactor approached end of life, these drums would rotate in to maintain the reactivity of the system. The final two were used for fine control, to adjust the reactivity for both reactor stability and power demand adjustment. The drums used the same type of bearings as the -2/10A system.

S8ER Facility Setup
SNAP-8 Experimental Reactor test facility, Santa Susana Field Site, image DOE

The S8ER first underwent criticality benchmark tests (pre-dry critical testing) from September to December 1962 to establish the reactor’s precise control parameters. Before the reactor was filled with NaK coolant, water immersion experiments for failure-to-orbit safety testing (an additional set of tests to the SCA-4 testing which also supported SNAP-8) were carried out between January and March of 1963. After a couple months of modifications and refurbishment, dry criticality tests were once again conducted on May 19, 1963, followed the next month by the reactor reaching wet critical power levels on June 23. Months of low-power testing followed, to establish the precise reactor control element characteristics, thermal transfer characteristics, and a host of other technical details before the reactor was brought up to full design power.

S8ER Core Containment Structure

The reactor was shut down from early August to late October, because some of the water coolant channels used for the containment vessel failed, necessitating that the entire structure be dug up, repaired, and reinstalled, with significant reworking of the facility required to complete this intensive repair process. Further modifications and upgrades to the facility continued into November, but by November 22 the reactor underwent its first “significant” power level testing. Sadly, this revealed problems with the control drum actuators, requiring the reactor to be shut down again.

After more modifications and repairs, lower power testing resumed to verify the repairs, study reactor transient behavior, and investigate other considerations. The day finally came on December 11, 1963, when the SNAP-8 Experimental Reactor achieved its first full-power, at-temperature testing. Shortly after, the reactor had to be shut down again to repair a NaK leak in one of the primary coolant loop pumps, but it was up and operating again shortly thereafter. Lower power tests were conducted to evaluate the samarium burnable poisons in the fuel elements, measure xenon buildup, and measure hydrogen migration in the core until April 28, interrupted briefly by another NaK pump failure and a number of instrumentation malfunctions in the automatic scram system (which was designed to automatically shut down the reactor in the case of an accident or certain reactor behaviors). Despite these problems, April 28 marked 60 days of continuous operation at 450 kWt and 1300 F (design temperature, but less-than-nominal power).

S8ER Drive Mechanism
S8ER Control drum actuator and scram mechanism, image DOE

After a shutdown to repair the control drive mechanisms (again), the reactor went into near-continuous operation at either 450 or 600 kWt and a 1300 F outlet temperature until April 15, 1965, when it was shut down for the last time. By September 2, 1964, the S8ER had operated at design power and temperature for 1,000 continuous hours, and went on in that same test to exceed the maximum continuous operation time of any SNAP reactor to date on November 5 (1,152 hours). On January 18, 1965 it achieved 10,000 hours of total operation, and in February of that year it reached 100 days of continuous operation at design power and temperature conditions. Just 8 days later, on February 12, it exceeded the longest continuous operation of any reactor to that point (147 days, beating the Yankee reactor). March 5 marked one full year of the core outlet temperature being continuously over 1200 F. By April 15, when it was shut down for the last time, it had achieved an impressive set of accomplishments:

  1. 5,016.5 hours of continuous operation immediately preceding the shutdown (most at 450 kWt, all at 1200 F or greater)
  2. 12,080 hours of total operations
  3. A total of 5,154,332 kilowatt-hours of thermal energy produced
  4. 91.09% Time Operated Efficiency (percentage of calendar time that the reactor was critical) from November 22, 1963 (the day of first significant power operations of the reactor), and 97.91% in the last year of operations.
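As a quick sanity check, the headline figures in this list are mutually consistent. A minimal Python sketch (figures taken straight from the list above; the average works out well below the 450-600 kWt design points because the 12,080-hour total includes low-power operation):

```python
# Figures quoted in the S8ER accomplishments list above.
TOTAL_KWH = 5_154_332   # total thermal energy produced, kWh
TOTAL_HOURS = 12_080    # total hours of operation

def average_power_kwt(energy_kwh: float, hours: float) -> float:
    """Average thermal power over the whole operating history."""
    return energy_kwh / hours

print(f"Average thermal power: {average_power_kwt(TOTAL_KWH, TOTAL_HOURS):.1f} kWt")
# Roughly 427 kWt on average, consistent with "most at 450 kWt".
```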

Once the tests were concluded, the reactor was disassembled and inspected, and the fuel elements were examined. These tests took place at the Atomics International Hot Laboratory (also at Santa Susana) starting on July 28, 1965. For about six weeks this was all the facility focused on: the core was disassembled and cleaned, and each fuel element was examined, with many of them disassembled and run through a significant testing regime to determine everything from fuel burnup to fission product percentages to hydrogen migration. The fuel element tests were the most significant, because, to put it mildly, there were problems.

S8ER FE Damage Map
Core cross section with location and type of damage to fuel elements, image DOE

Of the 211 fuel elements in the core, only 44 were intact. Many of the fuel elements also underwent dimensional changes, either swelling (with a very small number actually shrinking) across the diameter or the length, becoming oblong, dishing, or otherwise changing geometry. The clad on most elements was damaged in one way or another, leading to a large amount of hydrogen migrating out of the fuel elements, mostly into the coolant and then out of the reactor. This meant that much of the neutron moderation needed for the reactor to operate properly had left the core, reducing the overall available reactivity even as the amount of fission poisons, in the form of fission products, was increasing. For a flight system this is a major problem, and one that definitely needed to be addressed. However, this is exactly the sort of problem that an experimental reactor is meant to discover and assess, so in this way the reactor was a complete success, if not as smooth a development as the designers would have preferred.

S8ER FE Damage Photo

It was also discovered that, while the cracks in the clad suggested that hydrogen would be migrating out through the breached diffusion barrier, far less hydrogen was lost than expected based on the amount of damage the fuel elements underwent. In fact, hydrogen migration in these tests was low enough that the core would most likely be able to carry out its 10,000-hour operational lifetime requirement as-is. Without knowing what mechanism was preventing the hydrogen migration, though, it would be difficult if not impossible to verify this without extensive additional testing, when changes in the fuel element design could instead yield a more satisfactory clad lifetime, reduced damage, and greater assurance that hydrogen migration would not become an issue.

S8ER Post irradiation fuel characteristics

The SNAP-8 Experimental Reactor was an important stepping stone in high-temperature ZrH fuel development, and in some ways greatly changed the direction of the whole SNAP-8 program. The large number of cladding failures, the hydrogen migration from the fuel elements, and the phase changes within the crystalline structure of the U-ZrH itself were a huge wake-up call to the reactor developers. With the SNAP-2/10A reactor these issues were minor at best, but that was a far lower-powered reactor with very different geometry. The large number of fuel elements, the flow of the coolant through the reactor, and numerous other factors made the S8ER far more complex to deal with on a practical level than most, if any, anticipated. Plating of elements associated with the Hastelloy onto the stainless steel components raised concerns that the selected materials could cause blockages in flow channels, further exacerbating the local hot spots in the fuel elements that caused many of the problems in the first place. The cladding material could (and would) be changed relatively easily to account for the problems with the metal’s ductility (the ability to undergo significant plastic deformation before rupture; in other words, to endure fuel swelling without the metal splitting, cracking, fracturing, or otherwise allowing the clad to be breached) under high temperature and radiation fluxes over time. A number of changes were proposed to the reactor’s design, which strongly encouraged, or required, changes in the SNAP-8 Development Reactor then being designed and fabricated. Those changes would alter what the SNAP-8 reactor would become, and what missions it would be proposed for, until the program was finally put to rest.

After the S8ER test, a mockup reactor, the SNAP-8 Development Mockup, was built based on the 1962 version of the design. This mockup never underwent nuclear testing, but was used for extensive non-nuclear testing of the design’s components. Basically, every component that could be tested under non-nuclear conditions (but otherwise identical, including temperature, stress loading, vacuum, etc.) was tested and refined with this mockup. The tweaks to the design that this mockup suggested are far more minute than we have time to cover here, but it was an absolutely critical step in preparing the SNAP-8 reactor’s systems for flight test.

SNAP-8 Development Reactor: Facing Challenges with the Design

S8DR with installed reflector assembly
SNAP 8 Development Reactor post-reflector installation, before being lowered into containment structure. Image DOE

The final reactor in the series, the SNAP-8 Development Reactor, was shorter-lived, partly because many of the questions that needed to be answered about the geometry had been answered by the S8ER, and partly because the unanswered materials questions could be answered with the SCA4 reactor. This reactor underwent dry critical testing in June 1968, and power testing began at the beginning of the next year. From January 1969 to December 1969, when the reactor was shut down for the final time, it operated at nominal (600 kWt) power for 668 hours, and at 1000 kWt for 429 hours.

S8DR Cutaway Drawing in test vault
S8DR in Test Vault, image DOE

The SNAP-8 Development Reactor (S8DR) was installed in the same facility as the S8ER, although it operated under different conditions than the S8ER. Instead of having a cover gas, the S8DR was tested in a vacuum, and a flight-type radiation shield was mounted below it to facilitate shielding design and materials choices. Fuel loading began on June 18, 1968, and criticality was achieved on June 22, with 169 out of the 211 fuel elements containing the U-ZrH fuel (the rest of the fuel elements were stainless steel “dummy” elements) installed in the core. Reactivity experiments for the control mechanisms were carried out before the remainder of the dummy fuel elements were replaced with actual fuel in order to better calibrate the system.

Finally, on June 28, all the fuel was loaded and the final calibration experiments were carried out. These tests then led to automatic startup testing of the reactor, beginning on December 13, 1968, as well as transient analysis, flow oscillation, and temperature reactivity coefficient testing on the reactor. From January 10 to 15, 1969, the reactor was started using the proposed automated startup process a total of five times, proving the design concept.

1969 saw the beginning of full-power testing, with the ramp up to full design power occurring on January 17. Beginning at 25% power, the reactor was stepped up to 50% after 8 hours, then another 8 hours in it was brought up to full power. The coolant flow rates in both the primary and secondary loops started at full flow, then were reduced to maintain design operating temperatures, even at the lower power setting. Immediately following these tests on January 23, an additional set of testing was done to verify that the power conversion system would start up as well. The biggest challenge was verification that the initial injection of mercury into the boiler would operate as expected, so a series of mercury injection tests were carried out successfully. While they weren’t precisely at design conditions due to test stand limitations, the tests were close enough that it was possible to verify that the design would work as planned.

Control Room

After these tests, the endurance testing of the reactor began. From January 25 to February 24 came the 500-hour test at design conditions (600 kWt and 1300 F), although two scram incidents led to short interruptions. On March 20, a planned 9,000-hour endurance run at design conditions began, but it lasted only until April 10. This was followed by a ramp up to the alternate design power of 1 MWt. While this was meant to run at only 1100 F (to reduce thermal stress on the fuel elements, among other things), the airblast heat exchanger used for heat rejection couldn’t keep up with the power flow at that temperature, so the outlet temperature was increased to 1150 F (the greater the temperature difference between a radiator and its environment, the more efficient it is, something we’ll discuss more in the heat rejection posts). After 18 days of 1 MWt testing, the power was once again reduced to 600 kWt for another 9,000-hour test, but on June 1 the reactor scrammed itself again due to a loss of coolant flow. At this point there was a significant loss of reactivity in the core, which led the team to proceed at a lower temperature to mitigate hydrogen migration in the fuel elements. Sadly, reducing the outlet temperature (to 1200 F) wasn’t enough to prevent this test from ending prematurely due to a severe loss of reactivity, and the reactor scrammed itself again.
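The radiator point in the parenthetical above can be made concrete with the Stefan-Boltzmann law. This is an illustrative sketch only: the S8DR test stand actually rejected heat through an airblast heat exchanger, and the 10 m² area, 0.9 emissivity, and deep-space sink are assumed values for a notional flight radiator, not S8DR figures. It shows why even a modest 1100 F to 1150 F bump buys a meaningful amount of heat rejection for a radiating surface:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def f_to_k(temp_f: float) -> float:
    """Convert Fahrenheit to Kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

def radiated_power_w(area_m2: float, emissivity: float,
                     t_surface_k: float, t_sink_k: float = 0.0) -> float:
    """Net power radiated to the environment (Stefan-Boltzmann law)."""
    return emissivity * SIGMA * area_m2 * (t_surface_k**4 - t_sink_k**4)

# Same hypothetical radiator at the two outlet temperatures mentioned above.
p_1100 = radiated_power_w(10.0, 0.9, f_to_k(1100))
p_1150 = radiated_power_w(10.0, 0.9, f_to_k(1150))
print(f"Heat rejection gain: {100.0 * (p_1150 / p_1100 - 1.0):.1f}%")
# Because radiated power scales as T^4, the 50 F bump yields roughly
# 13% more heat rejection from the same surface.
```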

The final power test of the S8DR began on November 20, 1969. For the first 11 days it operated at 300 kWt and 1200 F; the power was then increased back to 600 kWt, with the outlet temperature reduced to 1140 F, for an additional 7 days. The outlet temperature was then returned to 1200 F for the final 7 days of the test, after which the reactor was shut down.

This shutdown was an interesting and long process, especially compared to simply removing all reactivity by rotating the control drums fully out. First, the temperature was dropped to 1000 F while the reactor was still at 600 kWt, and then the reactor’s power was reduced to the point that both the outlet and inlet coolant temperatures were 800 F. This was held until December 21 to study the xenon transient behavior, and then the temperatures were further reduced to 400 F to study the decay power level of the reactor. On January 7 the temperature was once again increased to 750 F, and two days later the coolant was removed. The core temperature then dropped steadily before leveling off at 180-200 F.

Once again, the reactor was disassembled and examined at the Hot Laboratory, with special attention paid to the fuel elements. These held up much better than the S8ER’s fuel elements, with only 67 of the 211 showing cracking. However, quite a few elements that hadn’t cracked showed significant dimensional changes and higher hydrogen loss rates. Another curiosity was that a thin (less than 0.1 mil thick) metal film of iron, nickel, and chromium developed fairly quickly on the exterior of the cladding (the exact composition varied with location, and therefore with local temperature, within the core and along each fuel element).

S8DR FE Damage Map
S8DR Fuel Element Damage map, image DOE

The fuel elements that had intact cladding and little to no deformation showed very low hydrogen migration, an average of 2.4% (consistent with modeling showing that the permeation barrier was damaged early in its life, perhaps during the 1 MWt run). However, those with some damage lost between 6.8% and 13.2% of their hydrogen. This damage wasn’t limited to cracked cladding, though: the swelling of the fuel element was a better indication of the amount of hydrogen lost than whether the clad had split. This is likely due to phase changes in the fuel elements, where the U-ZrH changes crystalline structure, usually due to high temperatures. This changes how well (and at what bond angle) the hydrogen is held within the fuel element’s crystalline structure, and can lead to more intense hot spots in the fuel element, making the problem worse. The loss-of-reactivity scrams from the testing in May-July 1969 seem consistent with the worst failures in the fuel elements, called Type 3 in the reports: high hydrogen loss and a highly oval cross section of the swollen fuel elements (there were 31 of these in total; 18 were intact and 13 were cracked). One interesting note on clad composition: where there was a higher copper content due to irregularities in metallography, there was far less swelling of the Hastelloy N clad, although the precise mechanism was not understood at the time (and my rather cursory perusal of current literature didn’t show an explanation either). However, testing at the time showed that these problems could be mitigated, even to the point of insignificance, by maintaining a lower core temperature so that localized over-temperature failures (like the changes in crystalline structure) would not occur.

S8DR H loss rate table
Image DOE

The best that can be said about the reactivity loss rate (partially due to hydrogen losses, and partially due to fission product buildup) is that it could be extrapolated from the data available, and that the failure would have occurred after the design’s required lifetime (had the S8DR been operated at design temperature and power, the reactor would have lost all excess reactivity, and therefore the ability to maintain criticality, between October and November of 1970).
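The extrapolation described above is, at its simplest, a fit of measured excess reactivity against operating time, solved for the time at which it reaches zero. Here's a minimal sketch of that idea; the data points are illustrative placeholders, not the actual S8DR measurements.

```python
# Extrapolating loss of excess reactivity to the end of criticality.
# The hours/excess-reactivity pairs below are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Operating hours vs. remaining excess reactivity (illustrative, in dollars)
hours = [0, 2000, 4000, 6000]
excess = [2.0, 1.6, 1.1, 0.7]

slope, intercept = fit_line(hours, excess)
hours_to_zero = -intercept / slope  # where the fitted line crosses zero
print(f"excess reactivity extrapolates to zero at ~{hours_to_zero:.0f} hours")
```

In practice the loss rate was temperature-dependent (hydrogen migration accelerates with temperature), so the real projection was conditioned on operating at design conditions, as the parenthetical above notes.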

On this mixed-news note, the reactor’s future was somewhat in doubt. NASA was certainly still interested in a nuclear reactor of a similar core power, but this particular configuration was neither the most useful for their needs, nor did many of the particulars of its design show exceptional promise. While NASA’s reassessment of the program was not solely due to the S8DR’s testing history, it may have been a contributing factor.

One way or the other, NASA was looking for something different out of the reactor system, and this led to many changes. Rather than an electric propulsion system, focus shifted to a crewed space station, which had different design requirements, most especially in shielding. In fact, the reactor was split into four designs, none of which kept the SNAP name (but all kept the fuel element and basic core geometry).

A New Life: the Children of SNAP-8

Even as the SNAP-8 Development Reactor was undergoing tests, the mission for the SNAP-8 system was being changed. This would have major consequences for the design of the reactor, its power conversion system, and what missions it would be used in. These changes would be so extensive that the SNAP-8 reactor name would be completely dropped, and the reactor would be split into four concepts.

The first concept was the Space Power Facility – Plum Brook (SPF) reactor, which would be used to test shielding and other components at NASA’s Plum Brook Station outside Cleveland, OH, and could also be used for space missions if needed. The smallest of the designs (at 300 kWt), it was designed to avoid many of the problems associated with the S8ER and S8DR; however, funding was cut before the reactor could be built. In fact, it was cut so early that details on the design are very difficult to find.

The second reactor, the Reactor Core Test, was very similar to the SPF reactor, but at the same power output as the nominal “full power” reactor, 600 kWt. Both of these designs increased the number of control drums to eight, and were designed to be used with a traditional shadow shield. Neither was developed to any great extent, much less built.

A third design, the 5 kWe Thermoelectric Reactor, was a space system meant to apply the lessons of the SNAP-8 ER and DR, along with the SNAP-10A’s experience with thermoelectric power conversion systems, to a reactor between the SNAP-10B and the Reference Zirconium Hydride reactor in power output.

The final design, the Reference Zirconium Hydride Reactor (ZrHR), was extensively developed, even if geometry-specific testing was never conducted. This was the most direct replacement for the SNAP-8 reactor, and the last of the major U-ZrH fueled space reactors in the SNAP program. Rather than powering a nuclear electric spacecraft, however, this design was meant to power space stations.

The 5 kWe Thermoelectric Reactor: Simpler, Cleaner, and More Reliable

Artists Depiction
5 kWe Thermoelectric Reactor, artist’s concept cutaway drawing. Image DOE

The 5 kWe Thermoelectric Reactor (5 kWe reactor) was a reasonably simple adaptation of the SNAP-8 design, intended to be used with a shadow shield. Unsurprisingly, a lot of the design changes mirrored work done on the SNAP-10B Interim design, which was underway at about the same time. Meant to supply 5 kWe of power for 5 years using lead telluride thermoelectric convertors (derived from the SNAP-10A convertors), this system was meant to provide power for everything from small crewed space stations to large communications satellites. In many ways this was a significant departure from the SNAP-8 reactor, but at the same time the proposed changes were based on evolutionary changes made during the S8ER and S8DR experimental runs, as well as advances in the SNAP 2/10 core, which was undergoing parallel post-SNAPSHOT design evolution (the SNAP-10A design had been frozen for the SNAPSHOT program at this point, so these changes were either for the follow-on SNAP-10A Advanced or SNAP-10B reactors). The change from mercury Rankine to thermoelectric power conversion, though, paralleled a change in the SNAP-2/10A program, where greater conversion efficiency was seen as unnecessary given the consistently lower power requirements of the missions under consideration.

5 kWe Schematic
The first difference in the reactor itself was that the axial reflector was tapered, rather than cylindrical. This was done to keep the exterior profile of the reactor cleaner. While aerodynamic considerations aren’t a big deal for astronuclear power plants (although they do still play a minute part in low Earth orbit), everything that’s exposed to the unshielded reactor becomes a radiation source itself, due to radiation scattering and material activation under neutron bombardment. If you can make your reactor a continuation of the taper of your shadow shield, rather than sticking out from that cone shape, you can make the shadow shield smaller for a given reactor. Since the shield is often many times heavier than the power system itself, especially for crewed applications, the single biggest place a designer can save mass is the shadow shield.
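The shield-mass argument above is just similar-triangle geometry: the shadow shield disc must block every line of sight from the reactor's widest point to the payload, and shield mass grows roughly with disc area. A small sketch, with all dimensions invented for illustration rather than taken from the 5 kWe reactor design:

```python
# Shadow shield sizing by similar triangles. All dimensions illustrative.

def shield_radius(r_reactor, r_payload, d_reactor, d_payload):
    """Smallest shield disc radius (same units as inputs) that blocks every
    straight line from a reactor of radius r_reactor, sitting d_reactor in
    front of the shield, to a payload of radius r_payload, d_payload behind."""
    return (r_reactor * d_payload + r_payload * d_reactor) / (d_reactor + d_payload)

# A reflector bulging to 0.5 m radius vs. one tapered to 0.4 m at the shield
# end, for a 2 m radius payload zone 10 m behind a shield 0.5 m from the core:
wide = shield_radius(0.5, 2.0, 0.5, 10.0)
taper = shield_radius(0.4, 2.0, 0.5, 10.0)
saving = 1 - (taper / wide) ** 2  # shield mass scales roughly with disc area
print(f"disc radius {wide:.2f} m -> {taper:.2f} m, ~{saving:.0%} shield mass saved")
```

Because the required disc radius is linear in the reactor's silhouette radius and shield mass goes roughly as radius squared, even a modest taper at the shield end pays off disproportionately.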

This tapered profile meant two things: first, there would be a gradient in the amount of neutron moderation between the top and the bottom of the reactor, and second, the control system would have to be reworked. It’s unclear exactly how far the neutronics analysis for the new reflector configuration had proceeded, sadly, but the control systems were adaptations of the design changes that were proposed for the SNAP-10B reactor: instead of having the wide, partial cylinder control drums of the original design, large sections (235 degrees in total) of the reflector would be slid up or down around the core containment vessel to control the amount of reactivity available. This is somewhat similar to the SNAP-10B and BES-5 concepts in its execution, but the mechanism is quite different from a neutronics perspective: rather than capturing the unwanted neutrons using a neutron poison like boron or europium, they’re essentially vented into space.

There were a few other big changes from the SNAP-8 reference design in the core itself. The first was in the fuel: instead of a single long fuel rod inside the clad, the U-ZrH fuel was split into five shorter “slugs,” held together by the clad. This would create a far more complex thermal distribution in the fuel, but would also allow for better thermal stress management within the hydride itself. The number of fuel elements was reduced to 85, in three configurations: one set of 27 had radial fins that spiralled the flow around the fuel element in a right-hand direction, another set of 27 had fins in the left-hand direction, and the final 31 were unfinned. This was done to better manage the flow of the NaK coolant through the core and avoid some of the hydrodynamic problems experienced on the S8DR.

Blueprint Layout of System
5kWe Thermoelectric power system. Image DOE

Summary Table
Summary Table 2

The U-ZrH Reactor: Power for America’s Latest and Greatest Space Stations

ZrHR Cutaway Drawing
Cutaway drawing, Image DOE

The Reference ZrH Reactor program began in 1968, while the S8DR was still under construction. Because of the increased focus on a crewed space station configuration, and the resulting shielding requirement changes, some redesign of the reactor core was needed. The axial shield would change the reactivity of the core, and the control drums would no longer be able to effectively expose portions of the core to the vacuum of space to dump excess reactivity. Because of this, the number of fuel elements in the core was increased from 211 to 295. Another change was that, rather than the even spacing of fuel elements used in the S8DR, the fuel elements were spaced so that the amount of coolant around each fuel element was proportional to the power that element produced. This meant that the fuel elements in the interior of the core were spaced more widely than those around the periphery. This made it far less likely that local hot spots would develop and lead to fuel element failures, but it also meant that the flow of coolant through the reactor core would need to be far more thoroughly studied than it had been for the SNAP-8 reactor design. These thermohydrodynamic studies would be a major focus of the ZrHR program.

Reference Design Xsec and Elevation
Another change was in the control drum configuration, as well as the need to provide coolant to the drums. This was because the drums were now not only fully enclosed solid cylinders, but were surrounded by a layer of molten lead gamma shielding. Each drum was a solid cylinder in overall cross section; the main body was beryllium, but a crescent of europium alloy was used as a neutron poison (one of the more popular alternatives to boron for control mechanisms that operate in a high-temperature environment) to absorb neutrons when that portion of the control drum was turned toward the core. These drums would be placed in dry wells, with NaK coolant flowing around them from the spacecraft (bottom) end before entering the upper reactor core plenum to flow through the core itself. The bearings would be identical to those used on the SNAP-8 Development Reactor, and minimal modifications would be needed for the drum motion control and position sensing apparatus. Fixed cylindrical beryllium reflectors, a small one along the interior radius of the control drums and a larger one along their outside, filled the gaps left by the control drums in this annular reflector structure. These, too, would be kept cool by the NaK coolant flowing around them.

Surrounding this would be an axial gamma shield, with the preferred material being molten lead encased in stainless steel – but tungsten was also considered as an alternative. Why the lead was kept molten is still a mystery to me, but my best guess is that this was due to the thermal conditions of the axial shield, which would have forced the lead to remain above its melting point. This shield would have made it possible to maneuver near the space station without having to remain in the shadow of the directional shield – although obviously dose rates would still be higher than being aboard the station itself.

TE Layout diagram
Another interesting thing about the shielding is that the shadow shield was divided in two, in order to balance thermal transfer and radiation protection for the power conversion system, and also to maximize the effectiveness of the shadow shields. Most designs used a 4 pi shield, which is basically a frustum completely surrounding the reactor core, with the wide end pointing at the spacecraft. The primary coolant loops wrapped around this structure before entering the thermoelectric conversion units. After this there’s a small “galley” where the power conversion system is mounted, followed by a slightly larger shadow shield, with the heat rejection system feed loops running across the outside as well. Finally, the radiator (usually cylindrical or conical) completed the main body of the power system. The base of the radiator would meet the mounting hardware for attachment to the spacecraft, although the majority of the structural load was carried by an internal spar running from the core all the way to the spacecraft.

While the option for using a pure shadow shield concept was always kept on the table, the complications in docking with a nuclear powered space station which has an unshielded nuclear reactor at one end of the structure were significant. Because of this, the ZrHR was designed with full shielding around the entire core, with supplementary shadow shields between the reactor itself and the power conversion system, and also a second shadow shield after the power conversion system. These shadow shields could be increased to so-called 4-pi shields for more complete shielding area, assuming the mission mass budget allowed, but as a general rule the shielding used was a combination of the liquid lead gamma shield and the combined shadow shield configuration. These shields would change to a fairly large extent depending on the mission that the ZrHR would be used on.

Radiator Design Baseline
Another thing that was highly variable was the radiator configuration. Some designs had a radiator that was fixed in relation to the reactor, even if it was extended on a boom (as was the case for the Saturn V Orbital Workshop, later known as Skylab). Others would telescope out, as was the case for the later Modular Space Station (which much later evolved into the International Space Station). The last option was for the radiators to be hinged, with flexible joints that the NaK coolant would flow through (this was the configuration for the lunar surface mission), and those joints took a lot of careful study, design, and material testing to verify that they would be reliable, seal properly, and not force too many engineering compromises. We’ll look at the challenges of designing a radiator in the future, when we look at heat rejection systems (at this point, possibly next summer), but suffice to say that designing and executing a hinged radiator is a significant challenge for engineers, especially with a material as hot, and as reactive, as liquid NaK.

The ZrHR was continually being updated, since there was no reason to freeze the majority of the design components (although the fuel element spacing and fin configuration in the core may have indeed been frozen to allow for more detailed hydrodynamic predictability), until the program’s cancellation in 1973. Because of this, many design details were still in flux, and the final reactor configuration wasn’t ever set in stone. Additional modifications for surface use for a crewed lunar base would have required tweaking, as well, so there is a lot of variety in the final configurations.

The Stations: Orbital Missions for SNAP-8 Reactors

OWS with two CSMs, 1966
Frontispiece for nuclear-powered Saturn V Orbital Workstation (which flew as Skylab), image NASA 

At the time of the redesign, three space stations were being proposed for the near future. The first, the Manned Orbiting Research Laboratory (later shortened to the Manned Orbiting Laboratory, or MOL), was a US Air Force project as part of the Blue Gemini program. Primarily designed as a surveillance platform, it was made redundant by advances in photoreconnaissance satellites after just a single flight of an uncrewed, upgraded Gemini capsule.

The second was part of the Apollo Applications Program. Originally known as the Saturn V Orbital Workshop (OWS), this later evolved into Skylab. Three crews visited this space station after it was launched on the final Saturn V, and despite huge amounts of work needed to repair damage caused during a particularly difficult launch, the scientific return in everything from anatomy and physiology to meteorology to heliophysics (the study of the Sun and other stars) fundamentally changed our understanding of the solar system around us, and the challenges associated with continuing our expansion into space.

The final space station that was then under development was the Modular Space Station, which would in the late 1980s and early 1990s evolve into Space Station Freedom, and at the start of its construction in 1998 (exactly 20 years ago as of the day I’m writing this, actually) was known as the International Space Station. While many of the concepts from the MSS were carried over through its later iterations, this design was also quite different from the ISS that we know today.

TE Detail Flow
Because of this change in mission, quite a few of the subsystems for the power plant were changed extensively, starting just outside the reactor core and extending through the shielding, power conversion, and heat rejection systems. The power conversion system was changed to four parallel thermoelectric convertors, a more advanced setup than the SNAP-10 series of reactors used, which allowed partial outages of the PCS without complete power loss. The heat rejection system was one of the most mission-dependent structures, so it would vary in size and configuration quite a bit from mission to mission. It, too, would use NaK-78 as the working fluid, and in general would be 1200 (on the OWS) to 1400 (reference mission) sq ft in surface area. We’ll look more at these concepts in later posts on power conversion and heat rejection systems, but these changes took up a great deal of the work done on the ZrHR program.
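The quoted radiator area passes an order-of-magnitude sanity check. Assuming essentially all of a ~600 kWt core's heat (roughly 550 kWt after electrical output) is rejected by radiation at an effective radiating temperature around 600 F with emissivity 0.85, and ignoring the sink temperature and view factors (all of these are my illustrative assumptions, not program figures):

```python
# Rough radiator area estimate from the Stefan-Boltzmann law.
# Rejected power, radiating temperature, and emissivity are illustrative
# assumptions, not figures from the ZrHR program.

SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W/(m^2 K^4)
SQFT_PER_M2 = 10.7639  # square feet per square meter

def radiator_area_sqft(q_watts, temp_f, emissivity=0.85):
    """One-sided radiating area needed to reject q_watts to a ~0 K sink."""
    temp_k = (temp_f - 32) * 5 / 9 + 273.15
    area_m2 = q_watts / (emissivity * SIGMA * temp_k**4)
    return area_m2 * SQFT_PER_M2

area = radiator_area_sqft(550e3, 600)
print(f"~{area:.0f} sq ft needed under these assumptions")
```

The result lands in the same ballpark as the 1200-1400 sq ft quoted above; real designs need margin for the sink temperature, fin efficiency, and micrometeoroid armor, which pushes the area up.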

Radiation Shield Zones
One of the biggest reasons for this unusual shielding configuration was to allow a compromise between shielding mass and crew radiation dose. In this configuration there would be three zones of radiation exposure: the rendezvous zone, shielded only by the 4 pi shield during rendezvous and docking (a relatively short time period); the scatter shield zone, with more significant shielding for the docked spacecraft but still slightly higher dose rates than the fully shielded zone (acceptable because the spacecraft would be empty the vast majority of the time it was docked); and the crewed portion of the space station itself, the most heavily shielded, called the primary shielded zone. With the 4 pi shield, the entire system would mass 24,450 pounds, of which 16,500 lbs was radiation shielding, leading to a crew dose of between 20 and 30 rem a year from the reactor.
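To put the quoted 20-30 rem per year crew dose in perspective, spreading it over continuous occupancy gives a modest hourly rate; this is just a unit conversion:

```python
# Converting an annual crew dose budget into an average hourly rate.

HOURS_PER_YEAR = 365.25 * 24

def rem_per_year_to_mrem_per_hour(rem_per_year):
    """Annual dose (rem) -> average continuous dose rate (mrem/hr)."""
    return rem_per_year * 1000 / HOURS_PER_YEAR

low = rem_per_year_to_mrem_per_hour(20)
high = rem_per_year_to_mrem_per_hour(30)
print(f"20-30 rem/yr is an average of {low:.1f}-{high:.1f} mrem/hr")
```

That average rate is what the primary shielded zone was designed around; the rendezvous and scatter shield zones traded higher instantaneous rates against short occupancy times.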

The mission planning for the OWS was flexible in its launch configuration: the reactor could have been launched integral to the OWS on a Saturn V (although, considering the troubles that the Skylab launch actually had, I’m curious how well the system would have performed), or it could have been launched on a separate launcher with an upper stage to deliver it to the OWS. The two options proposed were either a Saturn IB with a modified Apollo Service Module as a trans-stage, or a Titan IIIF with the Titan Transtage for on-orbit delivery (the Titan IIIC was considered unworkable due to mass restrictions).

Deorbit System Concept
After its 3-5 years of operational life, the reactor could be disposed of in two ways: it could be deorbited into a deep ocean area (as with the SNAP-10A, although as we saw during the BES-5’s operational history this ended up not being considered a good option), or it could be boosted into a graveyard orbit. One consideration very different from the SNAP-10A is that, thanks to the 4 pi shield, the reactor would likely reach the ground intact rather than burning up as the SNAP-10A would have, meaning that a terrestrial impact could expose civilian populations to fission products, and would also leave highly enriched (although not quite bomb grade) uranium somewhere for someone to relatively easily pick up. This made the deorbiting of the reactor a bit pickier in terms of location, so an uncontrolled re-entry was not considered. The ideal was to leave it in a parking orbit of at least 400 nautical miles in altitude for a few hundred years to allow the fission products to decay away before de-orbiting the reactor over the ocean.
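The "few hundred years" figure can be sanity-checked. After the first few years of decay, the fission product inventory is dominated by the long-lived pair Sr-90 (half-life about 28.8 years) and Cs-137 (about 30.1 years), and simple half-life decay shows why a few centuries in a parking orbit makes the eventual re-entry far more benign:

```python
# Decay of the dominant long-lived fission products over a parking-orbit
# storage period. Half-lives are standard values; the 300-year storage
# time is the ballpark implied by the text.

def remaining_fraction(half_life_years, elapsed_years):
    """Fraction of the original inventory left after elapsed_years."""
    return 0.5 ** (elapsed_years / half_life_years)

for isotope, half_life in [("Sr-90", 28.8), ("Cs-137", 30.1)]:
    frac = remaining_fraction(half_life, 300)
    print(f"{isotope}: {frac:.1e} of shutdown inventory after 300 years")
```

Roughly ten half-lives means a reduction by a factor of about a thousand; the long-lived actinides in the fuel itself don't decay away on this timescale, which is part of why an ocean-area deorbit was still the planned endpoint.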

Nuclear Power for the Moon

Lunar configuration cutaway
Lunar landing configuration, image DOE

The final configuration that was examined for the Advanced ZrH Reactor was for the lunar base that was planned as a follow-on to the Apollo Program. While this never came to fruition, it was still studied carefully. Nuclear power on the Moon was nothing new: the SNAP-27 radioisotope thermoelectric generator had been used on every Apollo surface mission as part of the Apollo Lunar Surface Experiment Package (ALSEP). However, these RTGs would not provide nearly enough power for a permanently crewed lunar base. As an additional complication, all of the power sources available would be severely taxed by the roughly two-week-long, incredibly cold lunar night that the base would have to contend with. Only nuclear fission offered both the power and the heat needed for a permanently staffed lunar base, and the reactor that was considered the best option was the Advanced ZrH Reactor.

The configuration of this form of the reactor was very different. There are three options for a surface power plant: the reactor is offloaded from the lander and buried in the lunar regolith for shielding (which is how the Kilopower reactor is being planned for surface operations); an integral lander and power plant which is assembled in Earth (or lunar) orbit before landing, with a 4 pi shield configuration; or an integrated lander and reactor with a deployable radiator which is activated once the reactor is on the surface of the Moon, again with a 4 pi shield configuration. There are, of course, in-between options between the last two configurations, where part of the radiator is fixed and part deploys. The designers of the ZrHR went with the second option, due to the ability to check out the reactor before deployment to the lunar surface while minimizing the amount of effort needed by the astronauts to prepare the reactor for power operations after landing. This makes sense because, while on-orbit assembly and checkout is a complex and difficult process, it’s still cheaper in terms of crew time to do this work in Earth orbit than on a lunar EVA, given the value of every minute on the lunar surface. If additional heat rejection was required, a deployable radiator could be used, but this would require flexible joints for the NaK coolant, which would pose a significant materials and design challenge. A heat shield was used when the reactor wasn’t in operation to prevent excessive heat loss from the reactor. This eased startup transient issues, as well as ensuring that the NaK coolant remained liquid even during reactor shutdown (frozen working fluids are never good for a mechanical system, after all).
The power conversion system was exactly the same configuration as would be used in the OWS configuration that we discussed earlier, with the upgraded, larger tubes rather than the smaller, more numerous ones (we’ll discuss the tradeoffs here in the power conversion system blog posts).

Surface configuration diagram

This power plant would provide a total of 35.5 kWe of conditioned (i.e. usable, reliable) electricity from the 962 kWt reactor core, with 22.9 kWe delivered to the habitat itself, for at least 5 years. The overall power supply system, including the radiator, shield, power conditioning unit, and the rest of the ancillary bits and pieces that turn a nuclear reactor core into a fission power plant, ended up massing a total of 23,100 lbs, comfortably under the 29,475 lb weight limit of the lander design that was selected (unfortunately, finding information on that design is proving difficult). A dose rate of 7.55 mrem/hr at a half mile for an unshielded astronaut was considered sufficient for crew radiation safety (this is a small dose compared to the lunar radiation environment, and the astronauts would spend much of their time in the much better shielded habitat).
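The quoted figures let us work out the plant's net conversion efficiency directly, and it comes out to only a few percent, which is typical of lead telluride thermoelectrics of the era:

```python
# Net efficiency and habitat power fraction from the figures quoted above.

core_kwt = 962.0        # reactor thermal power
conditioned_kwe = 35.5  # total conditioned electrical output
habitat_kwe = 22.9      # power actually delivered to the habitat

efficiency = conditioned_kwe / core_kwt
habitat_fraction = habitat_kwe / conditioned_kwe
print(f"net plant efficiency ~{efficiency:.1%}, "
      f"with {habitat_fraction:.0%} of the output reaching the habitat")
```

The gap between 35.5 kWe generated and 22.9 kWe delivered covers pumps, power conditioning losses, and the plant's other housekeeping loads.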

Sadly, this power supply was not developed to a great extent, because the plans for the crewed lunar base were canceled before much work was done on the design (I was also unable to find the source document for this particular design: NAA-SR-12374, “Reactor Power Plants for Lunar Base Applications,” Atomics International, 1967). The plans were developed to the point that future lunar base plans would have a significant starting-off point, but again the design was never frozen, so a lot of flexibility remained in the design.

The End of the Line

Sadly, these plans never reached fruition. The U-ZrH Reactor had its budget cut by 75% in 1971, with simultaneous cuts to thermionic power conversion (30%) and reactor safety work (50%), and the outright cancellation of the advanced Brayton system. NERVA, which we covered in a number of earlier posts, also had its funding slashed at the same time. This was due to a reorientation of funds away from many current programs and toward the Space Shuttle and a modular space station, whose power requirements were higher than the U-ZrH Reactor would be able to offer.

At this point, the AEC shifted its funding philosophy, moving away from preparing specific designs for flight readiness and toward a long-term development strategy. In 1973, the head of the AEC’s Space Nuclear Systems Division said that, given the lower funding levels that NASA was forced to work within, “…the missions which were likely to require large amounts of energy, now appear to be postponed until around 1990 or later.” This led to the cancellation of all nuclear reactor systems, and a shift in focus to radioisotope thermoelectric generators, which provided enough power for NASA’s and the DoD’s mission priorities in a far simpler form.

Funding would continue at a low level all the way to the current day for space fission power systems, but the shift in focus led to a very different program. While new reactor concepts continue to be regularly put forth, both by Department of Energy laboratories and NASA, for decades the focus was more on enhancing the technological capability of many areas, especially materials, which could be used by a wide range of reactor systems. This meant that specific systems wouldn’t be developed to the same level of technological readiness in the US for over 30 years, and in fact it wouldn’t be until 2018 that another fission power system of US design would undergo criticality testing (the KRUSTY test for Kilopower, in early 2018).

More Coming Soon!

Originally, I was hoping to cover another system in this blog post as well, but the design is so different from the ZrH fueled reactors we’ve been discussing in this series that it warranted its own post. This reactor is the SNAP-50, which didn’t start out as a space reactor, but rather as one of the most serious contenders for the indirect-cycle Aircraft Nuclear Propulsion program. It used uranium carbide/nitride fuel elements and liquid lithium coolant, and was far more powerful than anything we’ve discussed yet in terms of electric power plants. Having it in its own post will also allow me to talk a little about the ANP program, something I’ve wanted to cover for a while now, but considering how much more there is to discuss about in-space systems (and my personal aversion to nuclear reactors for atmospheric craft on Earth), it hasn’t really been in the cards until now.

This series continues to expand, largely because there’s so much to cover that we haven’t gotten to yet – and no-one else has covered these systems much either! I’m currently planning on doing the SNAP-50/SPUR system as a standalone post, followed by the SP-100 and a selection of other reactor designs. After that, we’ll cover the ENISY reactor program in its own post, followed by the NEP designs from the 90s and early 00s, both in the US and Russia. Finally, we’ll cover the predecessors to Kilopower, and round out our look at fission power plant cores by revisiting Kilopower to have a look at what’s changed, and what’s stayed the same, over the last year since the KRUSTY test. We will then move on to shielding materials and design (probably two or three posts, because there’s a lot to cover there) before moving on to power conversion systems, another long series. We’ll finish up the nuclear systems side of nuclear power supplies by looking at heat sinks, radiators, and other heat rejection systems, followed by a look at nuclear electric spacecraft architecture and design considerations.

A lot of work continues in the background, especially in terms of website planning and design, research on a lot of the lesser-known reactor systems, and planning for the future of the web page. The blog is definitely set for topics for at least another year, probably more like two, just covering the basics and history of astronuclear design, but getting the web page to be more functional is a far more complex, and planning-heavy, task.

I hope you enjoyed this post, and much more is coming next year! Don’t forget to join us on Facebook, or follow me on Twitter!

References

SNAP 8 Summary Report, AEC/Atomics International Staff, 1973 https://www.osti.gov/servlets/purl/4393793

SNAP-8, the First Electric Propulsion Power System, Wood et al 1961 https://www.osti.gov/servlets/purl/4837472

SNAP 8 Reactor Preliminary Design Summary, ed. Rosenberg, 1961 https://www.osti.gov/servlets/purl/4476265

SNAP 8 Reactor and Shield, Johnson and Goetz 1963 https://www.osti.gov/servlets/purl/4875647

SNAP 8 Reactors for Unmanned and Manned Applications, Mason 1965 https://www.osti.gov/servlets/purl/4476766

SNAP 8 Reactor Critical Experiment, ed. Crouter, 1964 https://www.osti.gov/servlets/purl/4471079

Disassembly and Postoperation Examination of the SNAP 8 Experimental Reactor, Dyer 1967 https://www.osti.gov/servlets/purl/4275472

SNAP 8 Experimental Reactor Fuel Element Behavior, Pearlman et al 1966 https://www.osti.gov/servlets/purl/4196260

Summary of SNAP 8 Development Reactor Operations, Felten and May 1973 https://www.osti.gov/servlets/purl/4456300

Structural Analysis of the SNAP 8 Development Reactor Fuel Cladding, Dalcher 1969 https://www.osti.gov/servlets/purl/6315913

Reference Zirconium Hydride Thermoelectric System, AI Staff 1969 https://www.osti.gov/servlets/purl/4004948

Reactor-Thermoelectric System for NASA Space Station, Gylfe and Johnson 1969 https://www.osti.gov/servlets/purl/4773689


History of US Astronuclear Reactors part 1: SNAP-2 and 10A

Hello, and welcome to Beyond NERVA! Today we’re going to look at the program that birthed the first astronuclear reactor to go into orbit, although the extent of the program far exceeds the flight record of a single launch.

Before we get into that, I have a minor administrative announcement that will develop into major changes for Beyond NERVA in the near-to-mid future! As you may have noticed, we have moved from beyondnerva.wordpress.com to beyondnerva.com. For the moment, there isn’t much different, but in the background a major webpage update is brewing! Not only will the home page be updated to make it easier to navigate the site (and see all the content that’s already available!), but the number of pages on the site is going to be increasing significantly. A large part of this is going to be integrating information that I’ve written about in the blog into a more topical, focused format – with easier access to both basic concepts and technical details being a priority. However, there will also be expansions on concepts, pages for technical concepts that don’t really fit anywhere in the blog posts, and more! As these updates go live, I’ll mention them in future blog posts. Also, I’ll post them on both the Facebook group and the new Twitter feed (currently not super active, largely because I haven’t found my “tweet voice” yet, but I hope to expand this soon!). If you are on either platform, you should definitely check them out!

The Systems for Nuclear Auxiliary Power, or SNAP, program was a major focus for a wide range of organizations in the US for many decades. The program extended everywhere from the bottom of the seas (SNAP-4, which we won’t be covering in this post) to deep space travel with electric propulsion. SNAP was divided up into an odd/even numbering scheme, with the odd model numbers (starting with the SNAP-3) being radioisotope thermoelectric generators, and the even numbers (beginning with SNAP-2) being fission reactor electrical power systems.

Due to the sheer scope of the SNAP program, even eliminating systems that aren’t fission-based, this is going to be a multi-post subject. This post will cover the US Air Force’s portion of the SNAP reactor program: the SNAP-2 and SNAP-10A reactors; their development programs; the SNAPSHOT mission; and a look at the missions that these reactors were designed to support, including satellites, space stations, and other crewed and uncrewed installations. The next post will cover the NASA side of things: SNAP-8 and its successor designs, as well as SNAP-50/SPUR. The one after that will cover the SP-100, SABRE, and other designs from the late 1970s through to the early 1990s, and will conclude with a look at a system that we mentioned briefly in the last post: the ENISY/TOPAZ II reactor, the only astronuclear design to be flight qualified by the space agencies and nuclear regulatory bodies of two different nations.

SNAP capabilities 1964
SNAP Reactor Capabilities and Status as of 1973, image DOE

The Beginnings of the US Astronuclear Program: SNAP’s Early Years

Core Cutaway Artist
Early SNAP-2 Concept Art, image courtesy DOE

Beginning in the earliest days of both the nuclear age and the space age, nuclear power had a lot of appeal for the space program: high power density, high power output, and mechanically simple systems were in high demand for space agencies worldwide. The earliest mention of a program to develop nuclear electric power systems for spacecraft was the Pied Piper program, begun in 1954. This led to the development of the Systems for Nuclear Auxiliary Power program, or SNAP, the following year (1955), which was eventually canceled in 1973, as were so many other space-focused programs.

s2 Toroidal Station
SNAP-2 powered space station concept image via DOE

Once space became a realistic place to send not only scientific payloads but personnel, the need to provide them with significant amounts of power became evident. Most systems of the day were far from the electricity-efficient designs that both NASA and its Soviet counterpart would develop in the coming decades; and, at the time, the vision for a semi-permanent space station wasn’t 3-6 people orbiting in a (completely epic, scientifically revolutionary, collaboratively brilliant, and invaluable) zero-gee conglomeration of tin cans like the ISS, but larger space stations that provided centrifugal gravity, staffed ‘round the clock by dozens of individuals. These weren’t just space stations for NASA, which was an infant organization at the time, but for the USAF, and possibly other institutions in the US government as well. In addition, what would provide a livable habitation for a group of astronauts would also be able to power a remote, uncrewed radar station in the Arctic, or in other extreme environments. Even if crew were there, the fact that the power plant wouldn’t have to be maintained was a significant military advantage.

Responsible for both radioisotope thermoelectric generators (which run on the natural radioactive decay of a radioisotope, selected according to its energy density and half-life) as well as fission power plants, SNAP programs were numbered with an even-odd system: even numbers were fission reactors, odd numbers were RTGs. These designs were never solely meant for in-space application, but the increased mission requirements and complexities of being able to safely launch a nuclear power system into space made this aspect of their use the most stringent, and therefore the logical one to design around. Additionally, while the benefits of a power-dense electrical supply are obvious for any branch of the military, the need for this capability in space far surpassed the needs of those on the ground or at sea.

Originally jointly run by the AEC’s Department of Reactor Development (which funded the reactor itself) and the USAF’s Wright Air Development Center (which funded the power conversion system), the program was handed over fully to the AEC in 1957. Atomics International was the prime contractor for the program.

There are a number of similarities in almost all the SNAP designs, probably for a number of reasons. First, all of the reactors that we’ll be looking at (as well as some other designs we’ll look at in the next post) used the same type of fissile fuel, even though the form, and the cladding, varied reasonably widely between the different concepts. Uranium-zirconium hydride (U-ZrH) was a very popular fuel choice at the time. Assuming hydrogen loss could be controlled (this was a major part of the testing regime in all the reactors that we’ll look at), it provided a self-moderating, moderate-to-high-temperature fuel form, which was a very attractive feature. This type of fuel is still used today in the TRIGA reactor, which, together with its direct descendants, is the most common form of research and test reactor worldwide. The high-powered reactors (SNAP-2 and -8) both initially used variations on the same power conversion system: a boiling mercury Rankine power conversion cycle, which was determined by the end of the testing regime to be workable, but which, to my knowledge, has never been proposed again (we’ll look at this briefly in the post on heat engines as power conversion systems, and a more in-depth look will be available in the future), although a mercury-based MHD conversion system is being offered as a power conversion system for an accelerator-driven molten salt reactor.

SNAP-2: The First American Built-For-Space Nuclear Reactor Design

S2 Artist Cutaway Core
SNAP-2 Reactor Cutaway, image DOE

The idea for the SNAP-2 reactor originally came from a 1951 Rand Corporation study, looking at the feasibility of having a nuclear powered satellite. By 1955, the possibilities that a fission power supply offered in terms of mass and reliability had captured the attention of many people in the USAF, which was (at the time) the organization that was most interested and involved (outside the Army Ballistic Missile Agency at the Redstone Arsenal, which would later form the core of the Marshall Space Flight Center) in the exploitation of space for military purposes.

The original request for the SNAP program, which ended up becoming known as SNAP 2, occurred in 1955, from the AEC’s Defense Reactor Development Division and the USAF Wright Air Development Center. It was for possible power sources in the 1 to 10 kWe range that would be able to autonomously operate for one year, and the original proposal was for a zirconium hydride moderated sodium-potassium (NaK) metal cooled reactor with a boiling mercury Rankine power conversion system (similar to a steam turbine in operational principles, but we’ll look at the power conversion systems more in a later post), which is now known as SNAP-2. The design was refined into a 55 kWt, 5 kWe reactor operating at about 650°C outlet temperature, massing about 100 kg unshielded, and was tested for over 10,000 hours. The epithermal neutron spectrum that results from the hydride fuel’s self-moderation would remain popular throughout much of the US in-space reactor program, both for electrical power and thermal propulsion designs. This design would later be adapted to the SNAP-10A reactor, with some modifications, as well.

S2 Critical Assembly
SNAP Critical Assembly core, image DOE

SNAP-2’s first critical assembly test was in October of 1957, shortly after Sputnik-1’s successful launch. With 93% enriched 235U making up 8% of the weight of the U-ZrH fuel elements, a 1” beryllium inner reflector, and an outer graphite reflector (which could be varied in thickness, and was separated into two rough hemispheres to control the assembly of a critical configuration), this device was able to test many of the reactivity conditions needed for materials testing economically and at a small scale, as well as test the behavior of the fuel itself. The primary concerns with testing on this machine were reactivity, activation, and intrinsic steady state behavior of the fuel that would be used for SNAP-2. A number of materials were also tested for reflection and neutron absorbency, both for main core components as well as out-of-core mechanisms. This was followed by the SNAP-2 Experimental Reactor in 1959-1960 and the SNAP-2 Development Reactor in 1961-1962.

S2ER Cross Section
SNAP-2 Experimental Reactor core cross section diagram, image DOE

The SNAP-2 Experimental Reactor (S2ER or SER) was built to verify the core geometry and basic reactivity controls of the SNAP-2 reactor design, as well as to test the basics of the primary cooling system, materials, and other basic design questions, but was not meant to be a good representation of the eventual flight system. Construction started in June 1958, with construction completed by March 1959. Dry (Sept 15) and wet (Oct 20) critical testing was completed the same year, and power operations started on Nov 5, 1959. Four days later, the reactor reached design power and temperature operations, and by April 23 of 1960, 1000 hours of continuous testing at design conditions were completed. Following transient and other testing, the reactor was shut down for the last time on November 19, 1960, just over one year after it had first achieved full power operations. Between May 19 and June 15, 1961, the reactor was disassembled and decommissioned. Testing on various reactor materials, especially the fuel elements, was conducted, and these test results refined the design for the Development Reactor.

S2ER Schedule and Timeline

S2DR Core Xsec
SNAP 2 Development Reactor core cross section, image DOE

The SNAP-2 Development Reactor (S2DR or SDR, also called the SNAP-2 Development System, S2DS) was installed in a new building at Atomics International’s Santa Susana research facility to better manage the increased testing requirements for the more advanced reactor design. While this wasn’t going to be a flight-type system, it was designed to inform the flight system on many of the details that the S2ER wasn’t able to. This reactor is, interestingly, much harder to find information on than the S2ER. It incorporated many changes from the S2ER, and went through several iterations to tweak the design for a flight reactor. Zero power testing occurred over the summer of 1961, and testing at power began shortly after (although at SNAP-10 power and temperature levels). Testing continued until December of 1962, and further refined the SNAP-2 and -10A reactors.

S2DR Development Timeline

A third set of critical assembly reactors, known as the SNAP Development Assembly series, was constructed at about the same time, meant to provide fuel element testing, criticality benchmarks, reflector and control system worth measurements, and data on other core dynamic behaviors. These were also built at the Santa Susana facility, and would provide key test capabilities throughout the SNAP program. This water-and-beryllium reflected core assembly allowed for a wide range of testing environments, and would continue to serve the SNAP program through to its cancellation. Going through three iterations, the designs were used more to test fuel element characteristics than the core geometries of individual core concepts. This informed all three major SNAP designs in fuel element material and, to a lesser extent, heat transfer design (the SNAP-8 used thinner fuel elements).

Extensive testing was carried out on all aspects of the core geometry, fuel element geometry and materials, and other behaviors of the reactor; but by May 1960 there was enough confidence in the reactor design for the USAF and AEC to plan on a launch program for the reactor (and the SNAP-10A), called SNAPSHOT (more on that below). Testing using the SNAP-2 Experimental Reactor occurred in 1960-1961, and the follow-on test program, including the SNAP-2 Development Reactor, occurred in 1962-63. These programs, as well as the SNAP Critical Assembly 3 series of tests (used for SNAP 2 and 10A), allowed for a mostly finalized reactor design to be completed.

S2 PCS Cutaway Drawing
CRU mercury Rankine power conversion system cutaway diagram, image DOE

Work on the power conversion system (PCS), a Rankine (steam) turbine using mercury, was carried out starting in 1958, beginning with the development of a mercury boiler to test the components in a non-nuclear environment. The turbine had many technical challenges, including bearing lubrication and wear issues, turbine blade pitting and erosion, fluid dynamics challenges, and other technical difficulties. As is often the case with advanced reactor designs, the main challenge wasn’t the reactor core itself, nor the control mechanisms for the reactor, but the non-nuclear portions of the power unit. This is a common theme in astronuclear engineering. More recently, JIMO experienced similar problems when the final system design called for a theoretical but not yet experimental supercritical CO2 Brayton turbine (as we’ll see in a future post). However, without a power conversion system of useable efficiency and low enough mass, an astronuclear power system doesn’t have a means of delivering the electricity that it’s called upon to deliver.

Reactor shielding, in the form of a metal honeycomb impregnated with a high-hydrogen material (in this case a form of paraffin), was common to all SNAP reactor designs. The high hydrogen content allowed for the best hydrogen density of the available materials, and therefore the greatest shielding per unit mass of the available options.
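To put a rough number on that claim, here’s a back-of-the-envelope comparison of hydrogen number density for a few candidate shield materials. This is a quick sketch using rounded textbook densities, not figures from the SNAP reports:

```python
# Approximate hydrogen number density of candidate neutron-shield materials.
# Densities and formula masses are rounded textbook values, purely
# illustrative -- not taken from the SNAP documentation.

AVOGADRO = 6.022e23  # atoms per mole

# (material, density in g/cm^3, formula-unit molar mass, H atoms per unit)
materials = [
    ("paraffin (CH2 unit)", 0.93, 14.0, 2),
    ("water (H2O)",         1.00, 18.0, 2),
    ("lithium hydride",     0.78,  7.95, 1),
]

for name, density, molar_mass, h_per_unit in materials:
    n_h = density / molar_mass * h_per_unit * AVOGADRO  # H atoms per cm^3
    print(f"{name:20s} ~{n_h:.2e} H atoms per cm^3")
```

Paraffin comes out ahead of water and even lithium hydride on hydrogen atoms per cubic centimeter, the figure of merit for slowing down fast neutrons in a given volume (though paraffin’s low melting point limits it to relatively cool shield locations).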

s10 FSM Reactor
SNAP-2/10A FSM reflector and drum mechanism pre-test, image DOE

Testing on the SNAP 2 reactor system continued until 1963, when the reactor core itself was re-purposed into the redesigned SNAP-10, which became the SNAP-10A. At this point the SNAP-2 reactor program was folded into the SNAP-10A program. SNAP-2 specific design work was more or less halted from a reactor point of view, due to a number of factors, including the slower development of the CRU power conversion system, the large number of moving parts in the Rankine turbine, and the advances made in the more powerful SNAP-8 family of reactors (which we’ll cover in the next post). However, testing on the power conversion system continued until 1967, due to its application to other programs. This didn’t mean that the reactor was useless for other missions; in fact, its far more efficient power conversion system made it well suited to crewed space operations (as we’ll see later in this post), especially space stations. However, even this role would be surpassed by a derivative of the SNAP-8, the Advanced ZrH Reactor, and the SNAP-2 would end up being deprived of any useful mission.

The SNAP Reactor Improvement Program, in 1963-64, continued to optimize and advance the design without nuclear testing, through computer modeling, flow analysis, and other means; but the program ended without flight hardware being either built or used. We’ll look more at the missions that this reactor was designed for later in this blog post, after looking at its smaller sibling, the first reactor (and only US reactor) to ever achieve orbit: the SNAP-10A.

S2 Program History Table

SNAP-10: The Father of the First Reactor in Space

At about the same time as the SNAP-2 Experimental Reactor’s construction (1958), the USAF requested a study on a thermoelectric power conversion system, targeting a 0.3-1 kWe power regime. This was the birth of what would eventually become the SNAP-10, which would evolve in time into the SNAP-10A, the first nuclear reactor to go into orbit.

In the beginning, this design was superficially quite similar to the Romashka reactor that we’ll examine in the USSR portion of this series, with plates of U-ZrH fuel, separated by beryllium plates for heat conduction, and surrounded by radial and axial beryllium reflectors. The core was purely conductively cooled internally and radiatively cooled externally; this was later changed to a NaK forced convection cooling system for better thermal management (see below). The resulting design was later adapted to the SNAP-4 reactor, which was designed to be used for underwater military installations, rather than spaceflight. Outside these radial reflectors were thermoelectric power conversion systems, with a finned radiating casing being the only major component that was visible. The design looked, superficially at least, remarkably like the RTGs that would be used for the next several decades. However, the advantages of even a low-efficiency thermoelectric conversion system made this a far more powerful source of electricity than the RTGs that were available at the time (or even today) for space missions.

Reactor and Shield Cutaway
SNAP-10A Reactor sketch, image DOE

Within a short period, however, the design was changed dramatically, resulting in a design very similar to the core for the SNAP-2 reactor that was under development at the same time. Modifications were made to the SNAP-2 baseline, resulting in the reactor cores themselves becoming identical. This also led to the NaK cooling system being implemented on the SNAP-10A. Many of the test reactors for the SNAP-2 system were also used to develop the SNAP-10A. This is because the final design, while far lower powered in electrical output, differed mainly in the power conversion system, not the reactor structure. This reactor design was tested extensively, with the S2ER, S2DR, and SCA test series (4A, 4B, and 4C) reactors, as well as the SNAP-10 Ground Test Reactor (S10FS-1). The new design used a similar, but slightly smaller, conical radiator, with NaK as the working fluid.

This was a far lower power design than the SNAP-2, coming in at 30 kWt, but with the 1.6% power conversion ratio of the thermoelectric systems, its electrical power output was only 500 We. It also ran almost 100°C (about 180°F) cooler, allowing for longer fuel element life, but leaving less of a thermal gradient to work with, and therefore less theoretical maximum efficiency. This tradeoff was the best on offer, though, and the power conversion system’s lack of moving parts, and ease of being tested in a non-nuclear environment without extensive support equipment, made it more robust from an engineering point of view. The overall design life of the reactor, though, remained short: only about 1 year, and less than 1% fissile fuel burnup. It’s possible, and maybe even likely, that (barring spacecraft-associated failure) the reactor could have provided power for longer durations; however, the longer the reactor operates, the more the fuel swells, due to fission product buildup, and at some point this would cause the clad of the fuel to fail. Other challenges to reactor design, such as fission poison buildup, clad erosion, mechanical wear, and others would end the reactor’s operational life at some point, even if the fuel elements could still provide more power.
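The quoted figures are worth a quick arithmetic check, and the contrast with the SNAP-2 is stark (using the rounded power figures from this post):

```python
# Sanity check on the conversion efficiencies implied by the rounded
# power figures quoted in this post.

systems = {
    "SNAP-2 (mercury Rankine)":  (55.0, 5.0),  # thermal kW, electric kW
    "SNAP-10A (thermoelectric)": (30.0, 0.5),
}

for name, (thermal_kw, electric_kw) in systems.items():
    efficiency = electric_kw / thermal_kw
    print(f"{name}: {efficiency:.1%} thermal-to-electric")
```

The SNAP-2 numbers imply roughly 9% conversion efficiency against the SNAP-10A’s roughly 1.7% (consistent with the 1.6% figure once the rounding of the 500 We output is accounted for): an order-of-magnitude difference, bought at the price of the Rankine system’s moving parts.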

SNAP Meteorological Satellite
SNAP-10A satellite concept, image DOE

The SNAP-10A was not meant to power crewed facilities, since the power output was so low that multiple installations would be needed. This meant that, while all SNAP reactors were meant to be largely or wholly unmaintained by crew personnel, this reactor had no possibility of being maintained. The reliability requirements for the system were higher because of this, and the lack of moving parts in the power conversion system aided in this design requirement. The reactor was also designed to have only a brief (72 hour) period during which active reactivity control would be used, to mitigate any startup transients and to establish steady-state operations, before the active control systems would be left in their final configuration, leaving the reactor entirely self-regulating. This placed additional burden on the reactor designers to have a very strong understanding of the behavior of the reactor, its long-term stability, and any effects that would occur during the year-long lifetime of the system.
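That self-regulation rests on the reactor’s negative temperature coefficient: if power rises, fuel temperature rises, reactivity falls, and power settles back down. A zero-dimensional toy model shows the behavior; every coefficient here is invented for illustration, and none of it is SNAP-10A data:

```python
# Toy zero-dimensional model of a self-regulating reactor with a negative
# temperature coefficient of reactivity. All coefficients are invented for
# illustration; none of this is SNAP-10A data.

dt = 0.01             # time step, seconds
alpha = -2.0e-4       # reactivity change per degree C of fuel temperature rise
heat_capacity = 50.0  # lumped core heat capacity, kJ per degree C
cooling = 0.6         # heat removal, kW per degree C above coolant temperature
t_coolant = 500.0     # coolant temperature, C
prompt_tau = 0.1      # crude effective kinetics time constant, seconds

power = 30.0          # kW thermal, starting at steady state
temp = 550.0          # C, starting fuel temperature (matches 30 kW removal)
temp_nominal = temp
rho_inserted = 1.0e-3 # small reactivity perturbation applied at t = 0

for _ in range(200_000):  # simulate 2000 seconds
    rho = rho_inserted + alpha * (temp - temp_nominal)  # net reactivity
    power += power * rho / prompt_tau * dt              # crude point kinetics
    temp += (power - cooling * (temp - t_coolant)) / heat_capacity * dt

print(f"power settles near {power:.1f} kW at {temp:.1f} C")
```

After the small step of inserted reactivity, the power doesn’t run away; it settles at a slightly higher steady state where the temperature rise has exactly cancelled the inserted reactivity. Guaranteeing that kind of behavior, for a year, unattended, was the designers’ burden.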

Reflector Ejection Test
Reflector ejection test in progress, image DOE

At the end of the reactor’s life, it was designed to stay in orbit until the short-lived and radiotoxic portions of the reactor had gone through at least five half-lives of the fission products, reducing the radioactivity of the system to a very low level. At the end of this process, the reactor would re-enter the atmosphere, the radial and end reflectors would be ejected, and the entire thing would burn up in the upper atmosphere. From there, winds would dilute any residual radioactivity to less than what was released by a single small nuclear test (such tests were still being conducted in Nevada at the time). While there’s nothing wrong with this approach from a health physics point of view, as we saw in the last post on the BES-5 reactors the Soviet Union was flying, there are major international political problems with this concept. The SNAPSHOT reactor continues to orbit the Earth (currently at an altitude of roughly 1300 km), and will do so for more than 2000 years, according to recent orbital models, so the only system of concern is not in danger of re-entry any time soon; but, at some point, the reactor will need to be moved into a graveyard orbit or collected and returned to Earth – a problem which currently has no solution.
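The “five half-lives” rule of thumb is just the exponential decay relation A = A0 / 2^n: after five half-lives, any given isotope is down to about 3% of its initial activity. A one-liner makes the point:

```python
# Activity remaining after n half-lives: A = A0 / 2**n.
# After five half-lives an isotope is down to ~3% of its initial activity,
# which is the rule of thumb behind the orbit-until-decayed disposal plan.

for n in range(1, 8):
    print(f"after {n} half-lives: {100 / 2**n:.2f}% of initial activity remains")
```

The catch, of course, is that five half-lives means very different things for different isotopes: minutes to days for most activation products, but well over a century for long-lived fission products like Cs-137 and Sr-90, which is part of why the planned storage orbits were so long.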

The Runup to Flight: Vehicle Verification and Integration

1960 brought big plans for orbital testing of both the SNAP-2 and SNAP-10 reactors, under the program SNAPSHOT: Two SNAP-10 launches, and two SNAP-2 launches would be made. Lockheed Missiles System Division was chosen as the launch vehicle, systems integration, and launch operations contractor for the program; while Atomics International, working under the AEC, was responsible for the power plant.

The SNAP-10A reactor design was meant to be decommissioned by orbiting for long enough that the fission product inventory (the waste portion of the burned fuel elements, and the source of the vast majority of the radiation from the reactor post-fission) would naturally decay away, and then the reactor would be de-orbited, and burn up in the atmosphere. This was planned before the KOSMOS-954 accident, when the possibility of allowing a nuclear reactor to burn up in the atmosphere was not yet the anathema it is today. This plan wouldn’t increase the amount of radioactivity that the public would receive to any appreciable degree; and, at the time, open-air testing of nuclear weapons was the norm, sending up thousands of kilograms of radioactive fallout per year. However, it was important that the fuel rods themselves would burn up high in the atmosphere, in order to dilute the fuel elements as much as possible, and this is something that needed to be tested.

RFD1
RFD-1 Experimental Payload

Enter the SNAP Reactor Flight Demonstration Number 1 mission, or RFD-1. The concept of this test was to demonstrate that the planned disassembly and burnup process would occur as expected, and to inform the further design of the reactor if there were any unexpected effects of re-entry. Sandia National Labs took the lead on this part of the SNAPSHOT program. After looking at the budget available, the launch vehicles available, and the payloads, the team realized that orbiting a nuclear reactor mockup would be too expensive, and another solution needed to be found. This led to the mission design of RFD-1: a sounding rocket would be used, and the core geometry would be changed to account for the short flight time, compared to a real reentry, in order to get the data needed for the de-orbiting testing of the actual SNAP-10A reactor that would be flown.

So what does this mean? Ideally, it would have meant developing a mockup of the SNAP-10A reactor identical to the flight configuration, except with depleted uranium in the fuel elements in place of the highly enriched uranium. It would be launched on the same launch vehicle that the SNAPSHOT mission would use (an Atlas-Agena D), be placed in the same orbit, and then be deorbited at the same angle and the same place as the actual reactor would be; maybe even in a slightly less favorable reentry angle to know how accurate the calculations were, and what the margin of error would be. However, an Atlas-Agena rocket isn’t a cheap piece of hardware, either to purchase, or to launch, and the project managers knew that they wouldn’t be able to afford that, so they went hunting for a more economical alternative.

RFD1 Flight Path
RFD-1 Mission Profile, image DOE

This led the team to decide on a NASA Scout sounding rocket as the launch vehicle, launched from Wallops Island launch site (which still launches sounding rockets, as well as the Antares rocket, to this day, and is expanding to launch Vector Space and RocketLab orbital rockets as well in the coming years). Sounding rockets don’t reach orbital altitudes or velocities, but they get close, and so can be used effectively to test orbital components for systems that would eventually fly in orbit, but for much less money. The downside is that they’re far smaller, with less payload and less velocity than their larger, orbital cousins. This meant compromising on the design of the dummy reactor in significant ways – but those ways couldn’t compromise the usefulness of the test.

Sandia Corporation (which runs Sandia National Laboratories to this day, although who runs Sandia Corp changes… it’s complicated) and Atomics International engineers got together to figure out what could be done with the Scout rocket and a dummy reactor to provide as useful an engineering validation as possible, while sticking within the payload requirements and flight profile of the relatively small, suborbital rocket that they could afford. Because the dummy reactor wouldn’t be going nearly as fast as it would during true re-entry, a steeper angle of attack when the test was returning to Earth was necessary to get the velocity high enough to get meaningful data.

Scout rocket
Scout sounding rocket, image DOE

The Scout rocket that was being used had much less payload capability than the Atlas rocket, so if there was a system that could be eliminated, it was removed to save weight. No NaK was flown on RFD-1; the power conversion system was left off, the NaK pump was simulated by an empty stainless steel box, and the reflector assembly was made out of aluminum instead of beryllium, both for weight and toxicity reasons (BeO is not something that you want to breathe!). The reactor core didn’t contain any dummy fuel elements, just a set of six stainless steel spacers to keep the grid plates at the appropriate separation. Because the angle of attack was steeper, the test would be shorter, meaning that there wouldn’t be time for the reactor’s reflectors to degrade enough to release the fuel elements. The fuel elements were the most important part of the test, however, since it needed to be demonstrated that they would completely burn up upon re-entry, so a compromise was found.

The fuel elements would be clustered on the outside of the dummy reactor core, and ejected early in the burnup test period. While the short time and high angle of attack meant that there wouldn’t be enough time to observe full burnup, the beginning of the process would be able to provide enough data to allow for accurate simulations of the process to be made. How to ensure that this data, which was the most important part of the test, would be able to be collected was another challenge, though, which forced even more compromises for RFD-1’s design. Testing equipment had to be mounted in such a way as to not change the aerodynamic profile of the dummy reactor core. Other minor changes were needed as well, but despite all of the differences between RFD-1 and the actual SNAP-10A, the thermodynamics and aerodynamics of the two systems differed in only very minor ways.

Testing support came from Wallops Island and NASA’s Bermuda tracking station, as well as three ships and five aircraft stationed near the impact site for radar observation. The ground stations would provide both radar and optical support for the RFD-1 mission, verifying reactor burnup, fuel element burnup, and other test objective data, while the aircraft and ships were primarily tasked with collecting telemetry data from on-board instruments, as well as providing additional radar data; although one NASA aircraft carried a spectrometer in order to analyze the visible radiation coming off the reentry vehicle as it disintegrated.

RFD-1 Burnup Splice
Film splice of RV burnup during RFD-1, image DOE

The test went largely as expected. Due to the steeper angle of attack, full fuel element burnup wasn’t possible, even with the early ejection of the simulated fuel rods, but the extent to which they did disintegrate during the mission showed that the reactor’s fuel would be sufficiently dispersed at a high enough altitude to prevent any radiological risk. The dummy core behaved mostly as expected, although there were some disagreements between the predicted behavior and the flight data, due to the re-entry vehicle’s steep angle of attack. However, the test was considered a success, and paved the way for SNAPSHOT to go forward.

The next task was to mount the SNAP-10A to the Agena spacecraft. Because the reactor was a very different power supply from those in use at the time, special power conditioning units were needed to transfer power from the reactor to the spacecraft. This subsystem was mounted on the Agena itself, along with tracking and command functionality, control systems, and voltage regulation. While Atomics International worked to ensure the reactor would be as self-contained as possible, the reactor and spacecraft were fully integrated as a single system. Besides the reactor itself, the spacecraft carried a number of other experiments, including a suite of micrometeorite detectors and an experimental cesium contact thruster, which would operate from a battery system recharged by electricity produced by the reactor.

S10FSM Vac Chamber
FSEM-3 in vacuum chamber for environmental and vibration tests, image DOE

In order to ensure the reactor could be integrated with the spacecraft, a series of Flight System Prototypes (FSM-1 and -4; FSEM-2 and -3, which were used for electrical system integration) were built. These were full scale, non-nuclear mockups that contained a heating unit to simulate the reactor core. Simulations were run using FSM-1 from launch to startup on orbit, with all testing occurring in a vacuum chamber. The final one of the series, FSM-4, was the only one that used NaK coolant in the system, which was used to verify that the thermal performance of the NaK system met the flight system requirements.

FSEM-2 did not have a power system mockup; instead it used a mass mockup of the reactor, power conversion system, radiator, and other associated components. Testing with FSEM-2 showed that there were problems with the original electrical design of the spacecraft, which required a rebuild of the test-bed and a modification of the flight system itself. Once complete, the renamed FSEM-2A underwent a series of shock, vibration, acceleration, temperature, and other tests (known as the “Shake and Bake” environmental tests), which it subsequently passed. The final mockup, FSEM-3, underwent extensive electrical systems testing at Lockheed’s Sunnyvale facility, using simulated mission events to test the compatibility of the spacecraft and the reactor. Additional electrical systems changes were implemented before the program proceeded, but by the middle of 1965 the electrical system and spacecraft integration tests were complete, and the necessary changes were implemented into the flight vehicle design.

SNAP_10A_Space_Nuclear_Power_Plant
SNAP-10A F-3 (flight unit for SNAPSHOT) undergoing final checks before spacecraft integration. S10F-4 was identical.

The last round of pre-flight testing was a test of a flight-configured SNAP-10A reactor under fission power. This nuclear ground test, S10F-3, was identical to the system that would fly on SNAPSHOT, save some small ground safety modifications, and was tested from January 22, 1965 to March 15, 1966. It operated uninterrupted for over 10,000 hours, with the first 390 days at a power output of 35 kWt and (following AEC approval) an additional 25 days of testing at 44 kWt. This testing showed that, after one year of operation, the continuing problem of hydrogen redistribution caused the reactor’s outlet temperature to drop more than expected, and additional, relatively minor, uncertainties about reactor dynamics were seen as well. Overall, however, the test was a success, and paved the way for the launch of the SNAPSHOT spacecraft in April 1965. The continued testing of S10F-3 during the SNAPSHOT mission verified that the thermal behavior of an astronuclear power system during ground test is essentially identical to that of the orbiting system, validating the ground test strategy that had been employed for the SNAP program.

SNAPSHOT: The First Nuclear Reactor in Space

In 1963 there was a change in the way the USAF was funding these programs. While the reactors themselves were solely under the direction of the AEC, the USAF still funded research into the power conversion systems, since they were still operationally useful; that changed in 1963, with the removal of the 0.3 kWe to 1 kWe portion of the program. Budget cuts killed the Zr-H moderated core of the SNAP-2 reactor, although funding continued for the Hg vapor Rankine conversion system (which was being developed by TRW) until 1966. The SNAP-4 reactor, which had not even been run through criticality testing, was canceled, as was the planned flight test of the SNAP-10A, which had been funded by the USAF: with the cancellation of the 0.3–1 kWe power system program, the Air Force no longer had an operational need for it. The associated USAF program that would have used the power supply was well behind schedule and over budget, and was canceled at the same time.

The USAF attempted to get more funding, but was denied. All parties involved had a series of meetings to figure out what to do to save the program, but the needed funds weren’t forthcoming. All partners in the program worked together to try and have a reduced SNAPSHOT program go through, but funding shortfalls in the AEC (who received only $8.6 million of the $15 million they requested), as well as severe restrictions on the Air Force (who continued to fund Lockheed for the development and systems integration work through bureaucratic creativity), kept the program from moving forward. At the same time, it was realized that being able to deliver kilowatts or megawatts of electrical power, rather than the watts currently able to be produced, would make the reactor a much more attractive program for a potential customer (either the USAF or NASA).

Finally, in February of 1964 the Joint Congressional Committee on Atomic Energy was able to fund the AEC to the tune of $14.6 million to complete the SNAP-10A orbital test. This reactor design had already been extensively tested and modeled, and unlike the SNAP-2 and -8 designs, no complex, highly experimental, mechanical-failure-prone power conversion system was needed.

On Orbit Artist
SNAP-10A reactor, artist’s rendering (artist unknown), image DOE

SNAPSHOT consisted of a SNAP-10A fission power system mounted to a modified Agena-D spacecraft, which by this time was an off-the-shelf, highly adaptable spacecraft used by the US Air Force for a variety of missions. An experimental cesium contact ion thruster (read more about these thrusters on the Gridded Ion Engine page) was installed on the spacecraft for in-flight testing. The mission was to validate the SNAP-10A architecture with on-orbit experience, proving the capability to operate for 90 days without active control while providing 500 W (28.5 V DC) of electrical power. Additional requirements included: the use of a SNAP-2 reactor core with minimal modification, so that the higher-output SNAP-2 system, with its mercury vapor Rankine power conversion system, could be validated as well when the need for it arose; eliminating the need (while offering the option) for active control of the reactor for one year once startup was achieved, to prove autonomous operation capability; facilitating safe ground handling during spacecraft integration and launch; and accommodating future growth potential in both available power and power-to-weight ratio.

While the threshold for mission success was set at 90 days, Atomics International wanted to prove one year of capability for the system; so, in those 90 days, the goal was to demonstrate that the entire reactor system was capable of one year of operation (the SNAP-2 requirements). Atomics International imposed additional, more stringent guidelines for the mission as well, specifying a number of design requirements, including: self-containment of the power system outside the structure of the Agena, as much as possible; mass and center-of-gravity requirements for the system more stringent than those specified by the US Air Force; meeting the military specifications for EM radiation exposure to the Agena; and others.

atlas-slv3_agena-d__snapshot__1
SNAPSHOT launch, image USAF via Gunter’s Space Page

The flight was formally approved in March, and the launch occurred on April 3, 1965 on an Atlas-Agena D rocket from Vandenberg Air Force Base. The launch went perfectly, and placed the SNAPSHOT spacecraft in a polar orbit, as planned. Sadly, the mission could not be considered either routine or simple. One of the impedance probes failed before launch, and a part of the micrometeorite detector system failed before returning data. A number of other minor faults were detected as well, but perhaps the most troubling were the shorts and voltage irregularities coming from the ion thruster, due to high voltage failure modes, as well as excessive electromagnetic interference from the system, which reduced the telemetry data to an unintelligible mess. The thruster was shut off until later in the flight, in order to focus on testing the reactor itself.

The reactor was given the startup order 3.5 hours into the flight, when the two gross adjustment control drums were fully inserted, and the two fine control drums began a stepwise reactivity insertion into the reactor. Within 6 hours, the reactor achieved on-orbit criticality, and the active control portion of the reactor test program began. For the next 154 hours, the control drums were operated with ground commands, to test reactor behavior. Due to the problems with the ion engine, the failure sensing and malfunction sensing systems were also switched off, because these could have been corrupted by the errant thruster. Following the first 200 hours of reactor operations, the reactor was set to autonomous operation at full power. Between 600 and 700 hours later, the voltage output of the reactor, as well as its temperature, began to drop; an effect that the S10-F3 test reactor had also demonstrated, due to hydrogen migration in the core.
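For readers unfamiliar with this kind of stepwise startup, the idea can be sketched in a few lines of Python. Every number below is a hypothetical placeholder for illustration, not an actual SNAP-10A figure:

```python
# Toy model of stepwise reactivity insertion during reactor startup.
# Reactivity is in pcm (1 pcm = 1e-5 delta-k/k); every value below is
# a hypothetical placeholder, NOT actual SNAP-10A data.

def steps_to_critical(shutdown_margin_pcm: int, step_worth_pcm: int) -> int:
    """Number of equal control-drum steps needed to erase the shutdown
    reactivity deficit, rounding up for a partial final step."""
    return -(-shutdown_margin_pcm // step_worth_pcm)  # ceiling division

# Hypothetical: 5000 pcm shutdown margin, 200 pcm per fine-drum step.
print(steps_to_critical(5000, 200))  # -> 25 steps
```

The real startup sequence was far more involved (the gross drums were inserted first, then the fine drums stepped in while the reactor's response was monitored), but the core idea is the same: many small, individually modest reactivity additions rather than one large one.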

On May 16, just over one month after being launched into orbit, contact was lost with the spacecraft for about 40 hours. Some time during this blackout, the reactor’s reflectors ejected from the core (although they remained attached to their actuator cables), shutting down the core. This spelled the end of reactor operations for the spacecraft, and when the emergency batteries died five days later, all communication with the spacecraft was lost forever. Only 45 days had passed since the spacecraft’s launch, and information was received from the spacecraft for only 616 orbits.

What caused the failure? There are many possibilities, but when the telemetry from the spacecraft was read, it was obvious that something had gone badly wrong. The only thing that can be said with complete confidence is that the error came from the Agena spacecraft rather than from the reactor. No indications had been received before the blackout that the reactor was about to scram itself (the reflector ejection was the emergency scram mechanism), and the problem wasn’t one that should have been able to occur without ground commands. However, with the telemetry data gained from the dwindling battery after the shutdown, some suppositions could be made. The most likely immediate cause of the reactor’s shutdown was traced to a possible spurious command from the high voltage command decoder, part of the Agena’s power conditioning and distribution system. This in turn was likely caused by one of two scenarios: either a piece of the voltage regulator failed, or the regulator was overstressed, either by unusually low vehicle power loads or by a command for the reactor to increase power output. Sadly, the root cause of this failure cascade was never directly determined, but all of the data received pointed to a high-voltage failure of some sort, rather than a low-voltage error (which could also have resulted in a reactor scram). Other possible causes of instrumentation or reactor failure (thermal or radiation environment, collision with another object, onboard explosion of the chemical propellants used in the Agena’s main engines, and previously noted flight anomalies, including the arcing and EM interference from the ion engine) were all eliminated as well.

Despite the spacecraft’s mysterious early demise, SNAPSHOT provided many valuable lessons in space reactor design, qualification, ground handling, launch challenges, and many other aspects of handling an astronuclear power source for potential future missions. Suggestions made after the conclusion of the mission included improved instrumentation design and performance characteristics; a sunshade for the main radiator, to eliminate the sun/shade efficiency difference that was observed during the mission; the use of a SNAP-2 type radiation shield to allow off-the-shelf, non-radiation-hardened electronic components, saving both money and weight on the spacecraft itself; and other minor changes. Finally, the safety program developed for SNAPSHOT, including the SCA4 submersion criticality tests, the RFD-1 test, and the good agreement in reactor behavior between the on-orbit and ground test versions of the SNAP-10A, showed that both the AEC and the customer of the SNAP-10A (be it the US Air Force or NASA) could have confidence that the program was ready for whatever mission needed it.

Sadly, at the time of SNAPSHOT there simply wasn’t a mission that needed this system. 500 We isn’t much power, even though it was more power than was needed for many systems that were being used at the time. While improvements in the thermoelectric generators continued to come in (and would do so all the way to the present day, where thermoelectric systems are used for everything from RTGs on space missions to waste heat recapture in industrial facilities), the simple truth of the matter was that there was no mission that needed the SNAP-10A, so the program was largely shelved. Some follow-on paper studies would be conducted, but the lowest powered of the US astronuclear designs, and the first reactor to operate in Earth orbit, would be retired almost immediately after the SNAPSHOT mission.

Post-SNAPSHOT SNAP: the SNAP Improvement Program

The SNAP fission-powered program didn’t end with SNAPSHOT; far from it. While the SNAP reactors only ever flew once, their design was mature, well-tested, and in most particulars ready to fly on short notice, and the problems associated with those particulars had been well-addressed on the nuclear side of things. The Rankine power conversion system for the SNAP-2, which went through five iterations, reached technological maturity as well, having operated in a non-nuclear environment for close to 5,000 hours while remaining in excellent condition, meaning that the 10,000 hour requirement for the PCS could be met without any significant challenges. The thermoelectric power conversion system also continued to be developed, focusing on an advanced silicon-germanium thermoelectric convertor, which was highly sensitive to fabrication and manufacturing processes. We’ll look more at thermoelectrics in the power conversion systems series of blog posts; for now, just keep in mind that the power conversion systems continued to improve throughout this time, not just the reactor core design.

LiH FE Cracking
Fuel element post-irradiation. Notice the cracks where the FE would rest in the endcap reflector

On the reactor side of things, the biggest challenge was definitely hydrogen migration within the fuel elements. As the hydrogen migrates away from the ZrH fuel, many problems occur: unpredictable reactivity within the fuel elements; temperature changes (dehydrogenated fuel elements developed hotspots, which in turn drove more hydrogen out of the fuel element); and changes in the ductility of the fuel. These caused major headaches for end-of-life behavior of the reactors and severely limited the fuel element temperature that could be achieved. However, the necessary testing for the improvement of those systems could easily be conducted with less-expensive reactor tests, including the SCA4 test-bed, and didn’t require flight architecture testing to continue to be improved.

The maturity of these two reactors led to a short-lived program in the 1960s to improve them, the SNAP Reactor Improvement Program. The SNAP-2 and -10 reactors went through many different design changes, some large and some small – and some leading to new reactor designs based on the shared reactor core architecture.

By this time, the SNAP-2 had mostly faded into obscurity. However, the fact that it shared a reactor core with the SNAP-10A, and that the power conversion system was continuing to improve, warranted some small studies to improve its capabilities. The two changes of note that are independent of the core (all of the design changes for the -10 that will be discussed can be applied to the -2 core as well, since at this point they were identical) are the move from a single mercury boiler to three, to allow more power throughput and to reduce loads on one of the more challenging components, and the combination of multiple cores into a single power unit. These were proposed together for a space station design (which we’ll look at later) to provide an 11 kWe power supply for a crewed station.

The vast majority of this work was done on the -10A. Any further reactors of this type would have had an additional three sets of 1/8” beryllium shims on the external reflector, increasing the initial reactivity by about 50 cents (reactivity is measured in dollars and cents: $1 of reactivity equals the delayed neutron fraction, the margin between delayed and prompt criticality, and a reactor at exactly critical sits at $0; a few dollars of excess reactivity is typically built in to account for fission product buildup). This means that additional burnable poisons (elements which absorb neutrons, then transmute into something that is mostly neutron transparent, evening out the reactivity of the reactor over its lifetime) could be inserted in the core at construction, mitigating the problems of reactivity loss that were experienced during earlier operation of the reactor. With this, and a number of other minor tweaks to reflector geometry and a slight lowering of the core outlet temperature, the life of the SNAP-10A was extended from the initial design goal of one year to five years of operation. The end-of-life power level of the improved -10A was 39.5 kWt, with an outlet temperature of 980 F (527°C) and a power density of 0.14 kWt/lb (0.31 kWt/kg).
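As a quick sanity check on the quoted figures (and on the dollars-and-cents reactivity convention), the conversions can be verified in a few lines of Python. The delayed neutron fraction used below is a typical value for U-235 fueled reactors, not a published SNAP number:

```python
# Sanity-check the unit conversions quoted for the improved SNAP-10A,
# plus the dollars/cents reactivity convention. BETA is a typical
# effective delayed neutron fraction for U-235 fuel, not a SNAP figure.

LB_PER_KG = 2.20462  # pounds per kilogram
BETA = 0.0065        # typical effective delayed neutron fraction, U-235

def f_to_c(deg_f: float) -> float:
    """Fahrenheit to Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

def reactivity_in_dollars(k_eff: float, beta: float = BETA) -> float:
    """Reactivity rho = (k - 1)/k, expressed in dollars (rho / beta).
    $0 is exactly critical; $1 is prompt critical."""
    return (k_eff - 1.0) / k_eff / beta

print(round(f_to_c(980)))          # 527 C, matching the quoted figure
print(round(0.14 * LB_PER_KG, 2))  # 0.31 kWt/kg, matching the quoted figure
print(round(f_to_c(1200)))         # ~649 C for the later I-10A/2 figure
```

In this convention, the 50-cent shim addition is, by definition, half of the margin between delayed and prompt criticality.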

Interim 10A 2
Interim SNAP 10A/2, image DOE

These design modifications led to another iteration of the SNAP-10A, the Interim SNAP-10A/2 (I-10A/2). This reactor’s core was identical, but the reflector was further enhanced, and the outlet temperature and reactor power were both increased. In addition, even more burnable poisons were added to the core to account for the higher power output of the reactor. Perhaps the biggest design change with the Interim -10A/2 was the method of reactor control: rather than the passive control used on the -10A, the entire period of operation for the I-10A/2 was actively controlled, using the control drums to manage reactivity and power output of the reactor. As with the improved -10A design, this reactor would have an operational lifetime of five years. These improvements gave the I-10A/2 an end-of-life power rating of 100 kWt, an outlet temperature of 1200 F (648°C), and an improved power density of 0.33 kWt/lb (0.73 kWt/kg).

10A2 Table
Interim SNAP 10A/2 Design References, image DOE

This design, in turn, led to the Upgraded SNAP-10A/2 (U-10A/2). The biggest in-core difference between the I-10A/2 and the U-10A/2 was the hydrogen barrier used in the fuel elements: rather than using the initial design that was common to the -2, -10A, and I-10A/2, this reactor used the hydrogen barrier from the SNAP-8 reactor, which we’ll look at in the next blog post. This is significant, because the degradation of the hydrogen barrier over time, and the resulting loss of hydrogen from the fuel elements, was the major lifetime limiting factor of the SNAP-10 variants up until this point. This reactor also went back to static control, rather than the active control used in the I-10A/2. As with the other -10A variants, the U-10A/2 had a possible core lifetime of five years, and other than an improvement of 100 F in outlet temperature (to 1300 F), and a marginal drop in power density to 0.31 kWt/lb, it shared many of the characteristics that the I-10A/2 had.

SNAP-10B: The Upgrade that Could Have Been

10B Cutaway System
SNAP-10B Cutaway Diagram, image DOE

One consistent mass penalty in the SNAP-10A variants that we’ve looked at so far is the control drums: relatively large reactivity insertions were possible with a minimum of movement due to the wide profile of the control drums, but this also meant that they extended well away from the reflector, especially early in the mission. As a result, in order to prevent neutron backscatter from hitting the rest of the spacecraft, the shield had to be relatively wide compared to the size of the core – and the shield was not exactly a lightweight system component.

The SNAP-10B reactor was designed to address this problem. It used a similar core to the U-10A/2, with the upgraded hydrogen barrier from the -8, but the reflector was tapered to better fit the profile of the shadow shield, and axially sliding control cylinders would be moved in and out to provide control instead of the rotating drums of the -10A variants. A number of minor reactor changes were needed, and some of the reactor physics parameters changed due to this new control system; but, overall, very few modifications were needed.

The first -10B reactor, the -10B Basic (B-10B), was a very simple and direct evolution of the U-10A/2, with nothing but the reflector and control structures changed to the -10B configuration. Other than a slight drop in power density (to 0.30 kWt/lb), the rest of the performance characteristics of the B-10B were identical to the U-10A/2. This design would have been a simple evolution of the -10A/2, with a slimmer profile to help with payload integration challenges.

10B Basic Table
Image DOE

The next iteration of the SNAP-10B, the Advanced -10B (A-10B), had options for significant changes to the reactor core and the fuel elements themselves. One thing to keep in mind about these reactors is that they were being designed above and beyond any specific mission needs; and, on top of that, a production schedule hadn’t been laid out for them. This means that many of the design characteristics of these reactors were never “frozen,” which is the point in the design process when the production team of engineers need to have a basic configuration that won’t change in order to proceed with the program, although obviously many minor changes (and possibly some major ones) would continue to be made up until the system was flight qualified.

Up until now, every SNAP-10 design used a 37 fuel element core, with the only difference in the design occurring in the Upgraded -10A/2 and Basic -10B reactors (which changed the hydrogen barrier ceramic enamel inside the fuel element clad). With the A-10B, however, there were three core size options: the original 37-element core, a medium-sized 55-element core, and a large 85-element core. There were other questions about the final design as well, involving two other major core changes (plus a lot of open minor questions). The first option was to add a “getter,” a sheath of hydrophilic (highly hydrogen-absorbing) metal outside the steel casing of the clad, but still within the active region of the core. While this isn’t as ideal as containing the hydrogen within the U-ZrH itself, the neutron moderation provided by the hydrogen would be lost at a far lower rate. The second option was to change the core geometry itself as the temperature of the core changed, using devices called “Thermal Coefficient Augmenters” (TCA). Two TCA options were suggested: first, a bellows system driven by NaK core temperature (using ruthenium vapor), which would move a portion of the radial reflector to change the core’s reactivity; second, securing grids for the fuel elements that would expand as the NaK increased in temperature and contract as the coolant dropped in temperature.

Between the options available, with core size, fuel element design, and variable core and fuel element configuration all up in the air, the Advanced SNAP-10B was a wide range of reactors, rather than just one. Many of the characteristics of the reactors remained identical, including the fissile fuel itself, the overall core size, maximum outlet temperature, and others. However, the number of fuel elements in the core alone resulted in a wide range of different power outputs; and, which core modification the designers ultimately decided upon (Getter vs TCA, I haven’t seen any indication that the two were combined) would change what the capabilities of the reactor core would actually be. However, both for simplicity’s sake, and due to the very limited documentation available on the SNAP-10B program, other than a general comparison table from the SNAP Systems Capability Study from 1966, we’ll focus on the 85 fuel element core of the two options: the Getter core and the TCA core.

A final note, which isn’t clear from these tables: each of these reactor cores was nominally optimized to a 100 kWt power output; the additional fuel elements reduced the power density required of the core at any given time in order to maximize fuel lifetime. Even with the improved hydrogen barriers and the variable core geometry, while these systems CAN offer higher power, it comes at the cost of a shorter – but still minimum one year – life for the reactor system. Because of this, all reported estimates assume a 100 kWt power level unless otherwise stated.

Yt Getter FE
Fuel element with yttrium getter, image DOE

The idea of a hydrogen “getter” was not a new one at the time that it was proposed, but it was one that hadn’t been investigated thoroughly at that point (and is a very niche requirement in terrestrial nuclear engineering). The basic concept is to settle for the second-best option when it comes to hydrogen migration: if you can’t keep the hydrogen in the fuel element itself, then the next best option is keeping it in the active region of the core (where fission is occurring, and where neutron moderation is the most directly useful for power production). While this isn’t as good as keeping the hydrogen, and thus the moderation it provides, within the fuel element itself, it’s still far better than the hydrogen either dissolving into the coolant or, worse yet, migrating outside the reactor and into space, where it’s completely useless in terms of reactor dynamics.

Of course, there’s a trade-off: because of the interplay between the various aspects of reactor physics and design, it wasn’t practical to change the external geometry of the fuel elements themselves – which means that the only way to add a hydrogen “getter” was to displace some of the fissile fuel itself. There’s definitely an optimization question to be considered; after all, the overall reactivity of the reactor has to be reduced, because the fuel is worth more in terms of reactivity than the hydrogen that would otherwise be lost, but containing the hydrogen in the core at end of life means that the system itself would be more predictable and reliable. Especially for a statically controlled system like the A-10B, this increase in behavioral predictability can be worth far more than the reactivity that the additional fuel would offer. Of the materials tested for the “getter” system, yttrium metal was found to be the most effective at the reactor temperatures and radiation fluxes that would be present in the A-10B core.

However, while the fuel element design had improved to the point that the “getter” program continued until the cancellation of the SNAP-2/10 core experiments, many uncertainties remained as to whether the concept was worth employing in a flight system.

The second option was to vary the core geometry with temperature, the Thermal Coefficient Augmentation (TCA) variant of the A-10B. This would change the reactivity of the reactor mechanically, but not require active commands from any systems outside the core itself. There were two options investigated: a bellows arrangement, and a design for an expanding grid holding the fuel elements themselves.

A-10B Bellows Diagram
Ruthenium vapor bellows design for TCA, image DOE

The first variant used a bellows to move a portion of the reflector out as the temperature increased. This was done using a ruthenium reservoir within the core itself. As the NaK increased in temperature, the ruthenium would boil, pushing a bellows which would move some of the beryllium shims away from the reactor vessel, reducing the overall worth of the radial reflector. While this sounds simple in theory, gas diffusion from a number of different sources (from fission products migrating through the clad to offgassing of various components) meant that the gas in the bellows would not just be ruthenium vapor. While this could have been accounted for, a lot of study would have needed to have been done with a flight-type system to properly model the behavior.

A-10B Expandable Baseplate

The second option would change the distance between the fuel elements themselves, using a baseplate with concentric accordion folds for each ring of fuel elements, called the “convoluted baseplate.” As the NaK heated beyond the optimized design temperature, the baseplates would expand radially, separating the fuel elements and reducing the reactivity in the core. This involved a different set of materials tradeoffs, with just getting the device constructed causing major headaches. The design used both 316 stainless steel and Hastelloy C in its construction, and was cold annealed. The alternative, hot annealing, resulted in random cracks; and while explosive manufacture was explored, it wasn’t practical at the time to map the shockwave propagation through such a complex structure well enough to ensure reliable construction.

A-10B Expandable Baseplate 2

While this design has certainly caused me to think a lot about variable reactor geometry of this sort, there are many problems with this approach (which could perhaps have been solved, or could have proven insurmountable). Major lifetime concerns would include ductility and elasticity changes across the wide range of temperatures that the baseplate would be exposed to; work hardening of the metal, thermal stresses, and neutron bombardment of the baseplates would also be major concerns with this concept.
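For a rough feel of the magnitudes involved, here is my own back-of-the-envelope estimate, not a figure from the SNAP reports: free linear thermal expansion follows ΔL = αLΔT, and the handbook expansion coefficient for 316 stainless steel is roughly 16×10⁻⁶ per kelvin. The plate radius and temperature swing below are illustrative guesses.

```python
# Back-of-the-envelope free thermal expansion of a 316 stainless steel
# baseplate: dR = alpha * R * dT. ALPHA is a standard handbook value;
# the radius and temperature swing are illustrative guesses, not SNAP data.

ALPHA_316SS = 16e-6  # 1/K, approximate mean expansion coefficient, 316 SS

def radial_growth_mm(radius_mm: float, delta_t_k: float,
                     alpha: float = ALPHA_316SS) -> float:
    """Unconstrained radial growth of a heated disc of the given radius."""
    return alpha * radius_mm * delta_t_k

# Illustrative: a 120 mm plate radius heated 300 K above assembly temperature.
print(radial_growth_mm(120.0, 300.0))  # ~0.58 mm of free radial growth
```

Plain expansion of this order moves the fuel elements by only fractions of a millimeter, which hints at why the folded geometry, and the manufacturing tolerances on it, mattered so much for getting a usable reactivity feedback out of the concept.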

These design options were briefly tested, but most of them ended up not being developed fully. Because the reactor’s design was never frozen, many engineering challenges remained in every option that had been presented. Also, while I know that a report was written on the SNAP-10B reactor’s design (R. J. Gimera, “SNAP 10B Reactor Conceptual Design,” NAA-SR-10422), I can’t find it… yet. This makes writing about the design difficult, to say the least.

Because of this, and the extreme paucity of documentation on this later design, it’s time to turn to what these innovative designs could have offered when it comes to actual missions.

The Path Not Taken: Missions for SNAP-2, -10A

Every space system has to have a mission, or it will never fly. Both SNAP-2 and SNAP-10 offered a lot for the space program, both for crewed and uncrewed missions; and what they offered only grew with time. However, due to priorities at the time, and the fact that many records from these programs appear to never have been digitized, it’s difficult to point to specific mission proposals for these reactors in a lot of cases, and the missions have to be guessed at from scattered data, status reports, and other piecemeal sources.

SNAP-10 was always a lower-powered system, even with its growth to a kWe-class power supply. Because of this, it was always seen as a power supply for unmanned probes, mostly in low Earth orbit, though it would also have been useful for interplanetary missions, which at this point were just appearing on the horizon as practical. Had the SNAPSHOT system worked as planned, the cesium thruster on board the Agena spacecraft would have been an excellent propulsion source for an interplanetary mission. However, due to the long mission times and relatively fragile fuel of the original SNAP-10A, it is unlikely that these missions would have been initially successful, while the SNAP-10A/2 and SNAP-10B systems, with their higher power output and longer lifetimes, would have been ideal for many interplanetary missions.

As we saw in the US-A program, one of the major advantages that a nuclear reactor offers over photovoltaic cells – which were just starting to be a practical technology at the time – is that a reactor presents very little surface area, so the atmospheric drag that all satellites experience due to the thin atmosphere in lower orbits is less of a concern. There are many cases where this lower altitude offers clear benefits, but the vast majority of them deal with image resolution: the lower you are, the clearer your imagery can be with the same sensors. For the Russians, the ability to get better imagery of US Navy movements in all weather conditions was of strategic importance, leading to the US-A program. For the Americans, who had other means of surveillance (and an opponent with a far less capable blue-water navy to track), radar surveillance was not a major focus – although it should be noted that 500 We isn’t going to give you much, if any, resolution, no matter what your altitude.

SNAP Meteorological Satellite
SNAP-10 powered meteorological satellite, image DOE

One area where the SNAP-10A was considered was meteorological satellites. With a growing understanding of how weather could be monitored, and of what types of data orbital systems could provide, the ability to take pictures with the first generations of digital cameras (which were just coming into existence, and not nearly good enough to interest the intelligence organizations of the time) and transmit them back to Earth would have allowed the best weather-tracking capability in the world at the time. By using a low orbit, these satellites would have been able to make the most of the primitive equipment available, and possibly (speculation on my part) gather rudimentary moisture content data as well.

However, while the SNAP-10A was worked on for about a decade, the entire program was dogged by the question of “what do you do with 500-1000 We?” Sure, it’s not an insignificant amount of power, even then, but… communications and propulsion, the two most immediately interesting uses for a satellite with reliable power, both have a linear relationship between power level and capability: the more power, the more bandwidth, or delta-vee, you have available. Also, the -10A was only ever rated for one year of operation (although it was always suspected it could be limped along for longer), which precluded many other missions.
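That linear power-to-capability relationship can be sketched with the ideal jet-power formula. This is a generic illustration, not a SNAP program calculation; the efficiency and exhaust velocity figures below are assumptions chosen for a notional early electric thruster.

```python
# Ideal relationship between electrical power and thrust for an electric
# thruster: P_jet = F * v_e / 2, so F = 2 * eta * P / v_e.
# All numbers below are illustrative assumptions, not SNAP figures.

def thrust_newtons(power_we: float, efficiency: float, exhaust_velocity_ms: float) -> float:
    """Thrust produced by an electric thruster at a given input power."""
    return 2.0 * efficiency * power_we / exhaust_velocity_ms

# A 500 We supply driving a notional thruster (v_e = 30 km/s, eta = 0.6):
f_500 = thrust_newtons(500, 0.6, 30_000)    # 0.02 N
f_1000 = thrust_newtons(1000, 0.6, 30_000)  # exactly double: thrust scales linearly with power
```

Doubling the power doubles the available thrust (or, for a transmitter, the bandwidth), which is why the 500-1000 We class always struggled to find a compelling mission.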

The later SNAP-10A/2 and -10B systems, with their multi-kilowatt range and years-long lifespans, offered far more flexibility, but by this point many in the AEC, the US Air Force, NASA, and elsewhere were no longer very interested in the program, with newer, more capable reactor designs available (we’ll look at some of those in the next post). While the SNAP-10A was the only flight-qualified and flight-tested reactor design (and the failures on the mission were shown to be the fault not of the reactor, but of the Agena spacecraft), it was destined to fade into obscurity.

SNAP-10A was always the smallest of the reactors, and also the least powerful. What about the SNAP-2, the 3-6 kWe reactor system?

Initial planning for the SNAP-2 offered many options, with communications satellites being mentioned as an option early on – especially if the reactor lifetime could be extended. While not designed specifically for electric propulsion, it could have utilized that capability either on orbit around the Earth or for interplanetary missions. Other options were also proposed, but one was seized on early: a space station.

S2 Cylinder Station
Cylindrical space station, image DOE

At the time, most space station designs were nuclear powered, and there were many different configurations. However, two were the most common: first was the simple cylinder, launched as a single piece (although multiple-module designs keeping the basic cylinder shape were also proposed), which would finally be realized with the Skylab mission; second was a torus-shaped space station, proposed almost half a century before by Tsiolkovsky and popularized at the time by Wernher von Braun. SNAP-2 was adapted to both of these types of stations. Sadly, while I can find one paper on the use of the SNAP-2 on a station, it focuses exclusively on the reactor system and doesn’t use a particular space station design, instead laying out the general constraints on the use of the reactor with each type of station, especially the shielding requirements for each station’s geometry. It was also noted that the reactors could be clustered, providing up to 11 kWe of power for a space station without significant change to the radiation shield geometry. We’ll look at radiation shielding in a couple of posts, and look at the particulars of these designs there.

s2 Toroidal Station
Hexagonal/Toroid space station. Note the wide radiation shield. Image DOE

Since space stations were something that NASA didn’t have the budget for at the time, most designs remained vaguely defined, without much funding or impetus within either NASA or the US Air Force (although SNAP-2 would definitely have been an option for the USAF’s Manned Orbiting Laboratory program). By the time NASA was seriously looking at space stations as a major funding focus, the SNAP-8-derived Advanced Zirconium Hydride reactor, and later the SNAP-50 (which we’ll look at in the next post), offered more capability than the SNAP-2. Once again, the lack of a mission spelled the doom of the SNAP-2 reactor.

Hg Rankine Cutaway Drawing
Power conversion system, SNAP-2

The SNAP-2 reactor met its piecemeal fate even earlier than the SNAP-10A, but oddly enough both its reactor core and its power conversion system lasted just as long as the SNAP-10A did. The reactor core for the SNAP-2 became the SNAP-10A/2 core, and the CRU power conversion system continued under development until after the reactor cores had been canceled. However, mention of the SNAP-2 as a system disappears from the literature around 1966, while the -2/10A core and the CRU power conversion system continued until the late 1960s and late 1970s, respectively.

The Legacy of The Early SNAP Reactors

The SNAP program was canceled in 1971 (with one ongoing exception), after flying a single reactor which was operational for 43 days, and conducting over five years of fission powered testing on the ground. The death of the program was slow and drawn out, with the US Air Force canceling the program requirement for the SNAP-10A in 1963 (before the SNAPSHOT mission even launched), the SNAP-2 reactor development being canceled in 1967, all SNAP reactors (including the SNAP-8, which we’ll look at next week) being canceled by 1974, and the CRU power conversion system being continued until 1979 as a separate internal, NASA-supported but not fully funded, project by Rockwell International.

The promise of SNAP was not enough to save the program from the massive cuts to space programs, both for NASA and the US Air Force, that fell even as humanity stepped onto the Moon for the first time. This is an all-too-common fate, both in advanced nuclear reactor engineering and design as well as aerospace engineering. As one of the engineers who worked on the Molten Salt Reactor Experiment noted in a recent documentary on that technology, “everything I ever worked on got canceled.”

However, this does not mean that the SNAP-2/10A programs were useless, or that nothing except a permanently shut down reactor in orbit was achieved. In fact, the SNAP program has left a lasting mark on the astronuclear engineering world, and one that is still felt today. The design of the SNAP-2/10A core, and the challenges that were faced with both this reactor core and the SNAP-8 core informed hydride fuel element development, including the thermal limits of this fuel form, hydrogen migration mitigation strategies, and materials and modeling for multiple burnable poison options for many different fuel types. The thermoelectric conversion system (germanium-silicon) became a common one for high-temperature thermoelectric power conversion, both for power conversion and for thermal testing equipment. Many other materials and systems that were used in this reactor system continued to be developed through other programs.

Possibly the most broad and enduring legacy of this program is in the realm of launch safety, flight safety, and operational paradigms for crewed astronuclear power systems. The foundation of the launch and operational safety guidelines that are used today, for both fission power systems and radioisotope thermoelectric generators, were laid out, refined, or strongly informed by the SNAPSHOT and Space Reactor Safety program – a subject for a future web page, or possibly a blog post. From the ground handling of a nuclear reactor being integrated to a spacecraft, to launch safety and abort behavior, to characterizing nuclear reactor behavior if it falls into the ocean, to operating crewed space stations with on-board nuclear power plants, the SNAP-2/10A program literally wrote the book on how to operate a nuclear power supply for a spacecraft.

While the reactors themselves never flew again, nor did their direct descendants in design, the SNAP reactors formed the foundation for astronuclear engineering of fission power plants for decades. When we start to launch nuclear power systems in the future, these studies, and the carefully studied lessons of the program, will continue to offer lessons for future mission planners.

More Coming Soon!

The SNAP program extended well beyond the SNAP-2/10A program. The SNAP-8 reactor, started in 1959, was the first astronuclear design specifically developed for a nuclear electric propulsion spacecraft. It evolved into several different reactors, notably the Advanced ZrH reactor, which remained the preferred power option for NASA’s nascent modular space station through the mid-to-late 1970s, due to its ability to be effectively shielded from all angles. Its eventual replacement, the SNAP-50 reactor, offered megawatts of power using technology from the Aircraft Nuclear Propulsion program. Many other designs were proposed in this time period, including the SP-100 reactor, the ancestor of Kilopower (the SABRE heat pipe cooled reactor concept), as well as the first American in-core thermionic power system, advances in fuel element designs, and many other innovations.

Originally, these concepts were included in this blog post, but this post quickly expanded to the point that there simply wasn’t room for them. While some of the upcoming post has already been written, and a lot of the research has been done, this next post is going to be a long one as well. Because of this, I don’t know exactly when the post will end up being completed.

After we look at the reactor programs from the 1950s to the late 1980s, we’ll look at NASA and Rosatom’s collaboration on the TOPAZ-II reactor program, and the more recent history of astronuclear designs, from SDI through the Fission Surface Power program. We’ll finish up the series by looking at the most recent power systems from around the world, from JIMO to Kilopower to the new Russian on-orbit nuclear electric tug.

After this, we’ll look at shielding for astronuclear power plants, and possibly ground handling, launch safety, and launch abort considerations, then move on to power conversion systems, which will be a long series of posts due to the sheer number of options available.

These next posts are more research-intensive than usual, even for this blog, so while I’ll be hard at work on the next posts, it may be a bit more time than usual before these posts come out.

References

SNAP

SNAP Reactor Overview, Voss 1984 http://www.dtic.mil/dtic/tr/fulltext/u2/a146831.pdf

SNAP-2

Preliminary Results of the SNAP-2 Experimental Reactor, Hulin et al 1961 https://www.osti.gov/servlets/purl/4048774

Application of the SNAP 2 to Manned Orbiting Stations, Rosenberg et al 1962 https://www.osti.gov/servlets/purl/4706177

The ORNL-SNAP Shielding Program, Mynatt et al 1971 https://www.osti.gov/servlets/purl/4045094

SNAP-10/10A

SNAP-10A Nuclear Analysis, Dayes et al 1965 https://www.osti.gov/servlets/purl/4471077

SNAP 10 FS-3 Reactor Performance Hawley et al 1966 https://www.osti.gov/servlets/purl/7315563

SNAPSHOT and the Flight Safety Program

SNAP-10A SNAPSHOT Program Development, Atomics International 1962 https://www.osti.gov/servlets/purl/4194781

Reliability Improvement Program Planning Report for the SNAP-10A Reactor, Coombs et al 1961 https://www.osti.gov/servlets/purl/966760

Aerospace Safety Reentry Analytical and Experimental Program SNAP 2 and 10A Interim Report, Elliot 1963 https://www.osti.gov/servlets/purl/4657830

SNAPSHOT orbit, Heavens Above https://www.heavens-above.com/orbit.aspx?satid=1314

SNAP Improvement Program

Static Control of SNAP Reactors, Birney et al 1966 https://digital.library.unt.edu/ark:/67531/metadc1029222/m2/1/high_res_d/4468078.pdf

SNAP Systems Capabilities Vol 2, Study Introduction, Reactors, Shielding, Atomics International 1965 https://www.osti.gov/servlets/purl/4480419

Progress Report, SNAP Reactor Improvement Program, April-June 1965 https://www.osti.gov/servlets/purl/4467051

Progress Report for SNAP General Supporting Technology May-July 1964 https://www.osti.gov/servlets/purl/4480424/

Categories
Electric propulsion Electrothermal Thrusters MPD Thrusters Spacecraft Concepts

Electric Propulsion Part 1: Thermal and Magnetoplasmadynamic Thrusters

Hello, and welcome back to Beyond NERVA! My apologies for the delay in this post, electric propulsion is not one of my strong points, so I spent a lot of extra time on research and in discussion with people who are more knowledgeable than I am on this subject. Special thanks to both Roland A. Gabrielli and Mikkel Haaheim for their invaluable help, not only for extensively picking their brains but also for their excellent help in editing (and sometimes rewriting large sections of) this post.

Today, we continue looking at electric propulsion, by starting to look at electrothermal and magnetoplasmadynamic (MPD) propulsion. Because there’s a fair bit of overlap, and because there are a lot of similarities in design, between these two types of thruster, we’ll start here, and then move to electrostatic thrusters in the next post.

As we saw in the last post, there are many ways to produce thrust using electricity, and many different components are shared between the different thruster types. This is most clear, though, when looking at thermal and plasma-dynamic thrusters, as we will see in this post. I’ve also made a compromise on this post’s structure: there are a few different types of thruster that fall in the gray area between thermal and MPD thrusters, but rather than writing about it between the two thruster types, one will be left for last: VASIMR, the Variable Specific Impulse Magnetoplasma Rocket. This thruster has captured the public’s imagination like few types of electric propulsion ever have; and, sadly, this has bred an incredible amount of clickbait. At the end of this post, I hope to lay to rest some of the misconceptions, and look at not only the advantages of this type of thruster, but the limitations as well. This will involve looking a little bit into mission planning and orbital mechanics, an area that we haven’t addressed much in this blog, but I hope to keep it relatively simple.

Electrothermal Propulsion

This is, to put it simply, the use of electric heaters to energize a propellant, producing thrust by expanding it. In the most primitive, lowest-energy thrusters, this is done with a Laval nozzle, as in chemical and other thermal engines. This can be an inefficient use of energy (although this is definitely not always the case), but it can produce the most thrust for the same amount of power of any of the systems discussed today (debatably, depending on the systems and methods used). It has been used since the 1960s, and continues to be used today for small-sat propulsion systems.

There are a number of ways to use electricity to make heat, and each of these methods are used for propulsion. We’ll look at them in turn: resistance heating, induction heating, and arc heating are all used by different designs. Each have their advantages and disadvantages, some of the concepts used for each thruster type are used in other types of thrusters also, and we’ll look at each in turn.

Resistojets

Primex resistojet, Choueiri
Primex hydrazine fueled resistojet, Choueiri

Using electricity to produce heat is something everyone is familiar with. Space heaters and central heating are the obvious examples, but any electrical circuit produces heat due to electrical resistance within the system; this is why incandescent light bulbs and the computer you’re reading this on get hot. This is resistive heating (also called Joule or Ohmic heating), and in a propulsion application it is called a “resistojet,” or “electro-thermal thruster.” Often, this is used as a second stage after a chemical reaction – in this case, hydrazine propellant that undergoes catalytic decomposition into a more voluminous gas. This two-stage approach is something we’ll see again with a different heating method later in this post.

The first use of resistojets was on the Vela military satellites (first launched in 1963, canceled in 1985), which used a suite of instruments to detect nuclear tests from space, using a BE-3A AKM thruster (of which I can find nothing but the name – if someone has documentation, please leave it in a comment below). The Intelsat-V program also used resistojet thrusters, and the resistojet has become a favored station-keeping option for smallsats. The reason is that the thrust is produced thermally, with no need for chemically reactive components – often effectively a requirement for smallsats, which are generally secondary payloads on larger launches and so need to be absolutely safe for everything around them in order to get permission to be placed on the launch vehicle.

One of the main advantages of resistojets is that they can achieve very high thrust efficiencies, up to 90%. They are primarily limited by two factors: first, the heat resistance of the Ohmic elements themselves; and second, the thermal transfer capacity of the system. As we have seen with NTRs, the ability to remove heat needs to be balanced with the heat produced, and the overall system needs to provide enough thrust to be useful. For propelling a spacecraft on interplanetary missions, this is unlikely to come out to a useful result; for station-keeping with a high thrust requirement, however, it proves useful, as shown in figure ##, which names a few examples of satellites with EP. Exhaust velocities of about 3500 m/s are possible with decomposed hydrazine monopropellant, at about 80% efficiency. According to ESA, specific impulse for this type of system is between 150 and 700 s depending on the propellant – the bottom of the electric propulsion range.
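The figures quoted above hang together by definition, and can be cross-checked with a few lines of arithmetic. Only the 3500 m/s and 80% numbers come from the text; everything else follows from the standard definitions of specific impulse and jet power.

```python
# Relating the resistojet figures above: exhaust velocity, specific impulse,
# and electrical input power per newton of thrust.

G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse_s(v_exhaust_ms: float) -> float:
    """Specific impulse: Isp = v_e / g0."""
    return v_exhaust_ms / G0

def power_per_newton_w(v_exhaust_ms: float, efficiency: float) -> float:
    """Electrical input power needed per newton of thrust: P/F = v_e / (2 * eta)."""
    return v_exhaust_ms / (2.0 * efficiency)

isp = specific_impulse_s(3500)                      # ~357 s, inside the quoted 150-700 s range
watts_per_newton = power_per_newton_w(3500, 0.80)   # ~2190 W of input power per newton
```

That roughly 2.2 kW per newton is why resistojets make sense for small station-keeping burns but not as primary interplanetary propulsion.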

Induction Thermal Thrusters

Another option for electrothermal thrusters is induction heating, which occurs when a high frequency alternating current is passed through a coil. The induced magnetic field in the surroundings swings rapidly, stirring polar particles (particles that have distinct plus and minus poles even if they’re electrically neutral overall) in the field. This can even rip molecules apart (dissociation) and knock electrons out of their orbitals (ionization). The charged remnants – ions and electrons – form a plasma, and plasmas are even more susceptible to this high frequency heating. Because of this, the device is called an “inductive plasma generator,” or IPG. In purely thermal IPG-based thrusters, Laval nozzles are once more used for expansion, but magnetic nozzles, as explained later, can augment the performance beyond what a physical nozzle alone can provide. This principle is something we’ve already seen a lot on this blog (both CFEET and NTREES operate through induction heating), and it is used in one concept for bimodal nuclear thermal-electric propulsion, the Nuclear Thermo-Electric Rocket (NTER) concept by Dr. Dujarric at ESA. The principle is also used in several other sorts of electric thrusters, such as the Pulsed Inductive Thruster (PIT); that is not a thermal thruster, though, so we’ll look at it later.

This is a higher-powered system if you want significant thrust, due to the current required for the induction heater, so it’s not commonly used for smaller satellites like most of the systems discussed here. One other limitation, noted by the NTER team, is that supersonic induction heating is not an area that has been studied, and in most cases heating supersonic gases doesn’t actually make them travel faster (the energy is “frozen”), so it’s necessary to keep the propellant velocity subsonic during heating.

IPGs are also one of the foci of research at the Institute of Space Systems of the University of Stuttgart, which studies both space and terrestrial applications, grouped in figure ## below, demonstrating the versatility of the concept. Because the plasma is generated without contact, the propellant cannot damage components such as electrodes. This allows a nearly arbitrary selection of gasses as propellant, and therefore viable in-situ resource utilization concepts – even space station wastes could be fed to such a thruster. Eventually, this prompted research on IPG-based waste treatment for terrestrial communities. At the Institute of Space Systems, IPGs also serve to emulate planetary atmospheres for re-entry experimentation in plasma wind tunnels.

IPG tech tree
Applications for inductive plasma generators investigated at the Insitute of Space Systems USTUTT and affiliations. Gabrielli 2018

Which type of heating is used is generally a function of the frequency of the energy used to cause the oscillations, and therefore the heat. Induction heating, as we’ve discussed before in the context of testing NTR fuel elements, usually occurs between 100 and 500 kHz. Radio frequency heating occurs between 5 and 50 MHz. Finally, microwave heating occurs above 100 MHz, although GHz operational ranges are common for many applications, like the domestic microwave ovens found in most kitchens.
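Those band edges can be captured in a tiny classifier. The boundaries are the approximate figures from the paragraph above – rough conventions, not hard physical limits.

```python
# Approximate heating-regime bands from the text, as a quick classifier.

def heating_regime(freq_hz: float) -> str:
    if 100e3 <= freq_hz <= 500e3:
        return "induction"
    if 5e6 <= freq_hz <= 50e6:
        return "radio frequency"
    if freq_hz > 100e6:
        return "microwave"
    return "outside the bands quoted"

r_induction = heating_regime(250e3)    # mid-range for NTR fuel element test stands
r_rf = heating_regime(13.56e6)         # a common industrial ISM band
r_microwave = heating_regime(2.45e9)   # domestic microwave oven frequency
```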

RF Electrothermal Thruster

RF thrusters operate via dielectric heating, in which polar molecules are oscillated rapidly in an electromagnetic field by a beam of radio waves – more properly, the molecules flip orientation relative to the field as the radio waves pass across them, transferring heat to adjacent molecules. One side effect of using RF for heating is that the wavelengths involved are very long, meaning that the object being heated (in this case the propellant) can be heated more evenly throughout its entire mass than is typically possible in a microwave heating device.

While this is definitely a viable way to heat a propellant, this mechanism is more commonly used in ionization chambers, where the oscillating orientation of the dielectric molecules causes electrons of adjacent molecules to be stripped off, ionizing the propellant. This ionized propellant is then often accelerated using either MPD or electrostatic forces. We’ll look at that later, though, it’s just a good example of the way that many different components of these thrusters are used in different ways depending on the configuration and type of the thrusters in question.

Microwave Thermal Thrusters

 

MeT Clemens 2008
Microwave Electrothermal Thruster Diagram, Clemens 2008

Finally, we come to the last major type of electrothermal thruster: the microwave thermal thruster. This is not the Em-drive (nor will that concept be covered in this post)! Rather, it’s more akin to the microwave in your home: either radio frequencies or microwaves are used to convert the propellant, often Teflon (polytetrafluoroethylene, PTFE), into a plasma, which expands and accelerates out of a nozzle. This is most commonly done with microwaves rather than the longer-wavelength radio frequencies for a number of practical reasons.

 

Microwave thermal thrusters have been demonstrated with a very wide range of propellants, from H2 and N2 to Kr, Ar, Xe, PTFE, and others, at a wide variety of power levels. Due to the different power levels and propellant masses, specific impulse and thrust vary wildly. However, hydrogen-based thruster concepts have been proposed with a specific impulse of approximately 1000 s and 54 kN of thrust.
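Taking those hydrogen figures at face value, it’s worth checking what jet power they imply; this is my own back-of-envelope arithmetic from the standard jet-power relation, not a figure from any source.

```python
# Minimum (100%-efficient) jet power implied by a thrust / specific impulse
# pair: P_jet = F * v_e / 2, with v_e = Isp * g0.

G0 = 9.80665  # standard gravity, m/s^2

def jet_power_w(thrust_n: float, isp_s: float) -> float:
    """Kinetic power carried away in the exhaust for a given thrust and Isp."""
    return thrust_n * isp_s * G0 / 2.0

# The 1000 s / 54 kN hydrogen figures quoted above imply roughly 265 MW of
# jet power: launch-vehicle scale, plausible only with beamed power rather
# than an on-board supply.
p_implied = jet_power_w(54_000, 1000)  # ~2.65e8 W
```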

An interesting option for this type of thruster is to not have your power supply on board the spacecraft at all: instead, a beam of microwaves is aimed at a receiving antenna on the spacecraft and directed into the propellant. This has the major advantage of not having the power supply, electric conversion system, and microwave emitters weighing down your spacecraft. The beam will diverge over distance, growing wider and requiring larger and larger collectors, but this may still end up being a major mass savings for quite a few different applications. Prof. Komurasaki at the University of Tokyo is a major contributor to research on this concept, but it isn’t something we’re going to delve too deeply into in this post.

Electrothermal: What’s it Good For?

As we’ve seen, these systems aren’t much, if any, more efficient than a nuclear thermal system in terms of specific impulse, and the additional mass of the power conversion and heat rejection systems make them less attractive than a purely nuclear thermal system. So why would you use them?

There are a number of current applications, as has been mentioned in each of the concepts. They offer a fair bit of thrust for the system mass and complexity, a wide array of propellant options, and a huge range of sizes for the thrusters as well (including systems that are simply too small for a dedicated reactor for a nuclear thermal rocket).

Some designs for nuclear powered space stations (including von Braun’s inflatable torus space station in the 1960s) use electrothermal thrusters for reaction control systems, partly due to their relatively high thrust. This could be a very attractive option, especially with chemically inert, inexpensive propellants such as PTFE that don’t require cryogenic storage. They could also be used for orbital insertion burns, as they offer advantages in thrust capability rather than efficiency, thanks to their simplicity and relatively low dry mass. For instance, an electric spacecraft on an interplanetary mission might use an electrothermal system for its departure burn from low Earth orbit, with another drive system used for the interplanetary portions of the mission; the burned-out electrothermal drive (or its propellant tankage, if any) could then be discarded to minimize mass, or the system could be reactivated for the orbital insertion burn at the destination. This is, of course, not necessary, but in some cases it may be advantageous – for instance on crewed missions, where the people on board would rather not spend a couple of months climbing out of Earth’s gravity well if they can avoid it.

Overall, electrical and thrust efficiency can be high, which makes these systems attractive for spacecraft. However, as a sole method of propulsion for interplanetary missions, this type of system DOES leave a lot to be desired, due to the generally low specific impulse of these types of thrusters, and in practice is not something that would be able to be used for this type of mission. Electric propulsion’s advantages for spaceflight are in high exhaust velocities, high specific impulse, and continuous use resulting in high spacecraft velocities, and thrust is generally secondary.

Arcjets – The First Middle Ground Between Thermal and MPD

These aren’t the only ways to produce heat from electricity, though. The first option we will discuss in the gray area between thermal and magnetically based propulsion, arc heating, is a very interesting one. Here, a spark, or arc, of electricity is sustained between two electrodes – virtually the same way an arc welder operates. This has the advantage that your peak temperature isn’t limited by the heat resistance of a resistor: instead of being limited to about a thousand kelvin by the melting point of your electrical components, you can use the tens of thousands of kelvin of the arc plasma – meaning more energy can be transferred, over a shorter time, within the same volume, due to the greater temperature difference. In most modern designs, the positive electrode – the anode – is at the throat of the nozzle. However, the arc also erodes the electrodes, carrying off ablated, vaporized, and even plasmified bits of their material, so there’s a limit to how long this sort of thruster can operate before the components have to be replaced. The propellant isn’t heated just by the arc itself, but also by conduction and convection from the heated structural components of the thruster.

Arcjet diagram, Gabrielli
Schematic of an arcjet (Institute of Space Systems, USTUTT). F: thrust, ce: exhaust velocity,m: propellant mass flow (feed).

Arcjets have been studied by NASA since the 1950s, but they didn’t become commonly used on spacecraft until the 1990s. Several companies, including Lockheed Martin, offer a variety of operational arcjet thrusters. As with the resistojet, their chemical stability is excellent for piggyback payloads, and they offer better efficiencies than a resistojet. They are higher-powered systems, though, often drawing more than the average satellite power bus provides (sometimes in the kW range), necessitating custom power supplies for most spacecraft (in a nuclear-powered spacecraft, this would obviously be less of an issue). Arcjets offer much higher exhaust velocities than resistojets – generally 3500-5000 m/s for hydrazine decomposition drives similar to what we discussed above, and up to 9000 m/s using ammonia – and also scale efficiently in both size and power. In these systems, the propellant doesn’t necessarily need to be a gas or liquid at operational temperature: some thrusters have used polytetrafluoroethylene (PTFE, Teflon) as a propellant.
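What those exhaust velocities buy in delta-v can be sketched with the Tsiolkovsky rocket equation. The 20% propellant mass fraction below is an illustrative assumption for a small station-keeping satellite, not a figure from the text.

```python
import math

# Tsiolkovsky rocket equation: dv = v_e * ln(m0 / m_final).
# A 0.2 propellant mass fraction means m0 / m_final = 1 / (1 - 0.2) = 1.25.

def delta_v_ms(v_exhaust_ms: float, propellant_mass_fraction: float) -> float:
    """Ideal delta-v for a given exhaust velocity and propellant mass fraction."""
    return v_exhaust_ms * math.log(1.0 / (1.0 - propellant_mass_fraction))

dv_resistojet = delta_v_ms(3500, 0.2)    # ~780 m/s  (decomposed hydrazine resistojet)
dv_arcjet_n2h4 = delta_v_ms(5000, 0.2)   # ~1120 m/s (hydrazine arcjet, upper figure)
dv_arcjet_nh3 = delta_v_ms(9000, 0.2)    # ~2010 m/s (ammonia arcjet)
```

For the same propellant load, the ammonia arcjet delivers roughly two and a half times the delta-v of the resistojet, which is the whole argument for accepting its higher power draw.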

This type of propellant is also very common in a sub-type of arcjet thruster that doesn’t use an internal cathode: the pulsed plasma thruster. Here, the PTFE propellant block is brought into contact with a cathode on one side of the thruster and an anode on the other. The electric charge arcs across the gap, vaporizing a thin layer of propellant and pushing the propellant block back slightly. The arc and resulting plasma continue to the end of the thrust chamber, and the propellant (usually loaded on a spring mechanism) is then pushed forward again to the point where more propellant can be vaporized. This type of thruster is very common for small spacecraft, since it’s incredibly simple and compact, although certain designs face engineering challenges with the spring mechanism and the lifetime of the cathode and anode.

Arcjets can also be combined with magnetoplasmadynamic thruster acceleration chambers, since arc heating is also a good way to create plasma. The pulsed plasma thruster often uses an electric arc to charge its propellant, for instance. This mechanism is also used in magnetoplasmadynamic (MPD) thrusters, which is why we haven’t placed them with the rest of the thermal thrusters.

In fact, there’s more in common between an arcjet and an MPD thruster than between other thermal designs. The cathode and anode of an arcjet are placed in exactly the same configuration as most designs for an MPD (with some exceptions, to be fair). The exhaust itself is not only vaporized, but ionized, which – like with the RF or MW thrusters – lends itself to adding electromagnetic acceleration.

Self Field MPD
Comparison of self-field MPD and arcjet thruster geometry. The top half of the diagram is the MPD thruster, the bottom half the arcjet, at the same scale. (Institute of Space Systems, USTUTT)

VASIMR: the VAriable Specific Impulse Magnetoplasma Rocket

Coauthor Mikkel Haaheim

VASIMR 3d image Bering et al 2014
VASIMR 3d diagram, Bering et al 2014

As mentioned earlier, the difference between MPD and thermal thrusters is a very gray area, and, even more than our previous examples, the VASIMR engine shows just how gray: the propellant is a plasma of various types (although most design and testing have focused on argon and xenon, other propellants could be used). This propellant is first ionized in what’s typically a separate chamber, then fed into a magnetically confined chamber with RF heating. There it is accelerated, and the resulting thrust is directed out through a magnetic nozzle.

VASIMR is the stuff of clickbait. Between the various space and futurism groups I’m active in on Facebook, I see terribly written, poorly understood, and factually incorrect articles on this design. This has led me to avoid the concept for a long time, and also puts a lot of pressure on me to get the thruster’s design and details right.

VASIMR isn’t that different from many types of electrothermal thrusters; after all, the primary method of accelerating the propellant, imparting thrust, and determining the specific impulse of the thruster is electrically generated RF heating. The fact that the propellant is a plasma also isn’t just an MPD concept, after all: the pulsed plasma thruster, and in fact most arcjets, produce plasma as their propellants as well. This thruster really demonstrates the gray area between thermal and MPD thrusters in ways that are unique in electric propulsion.

VASIMR Schematic Bering et al 2014
VASIMR system sketch, Bering et al 2014


Despite the incredible amount of overblown hype, VASIMR is an incredibly interesting design. Dr. Franklin Chang Diaz, the founder of Ad Astra Rocket Company, became interested in the concept while he was a PhD candidate in applied plasma physics at MIT. Before he was able to pursue the concept, though, his obsession with space led him to become an astronaut, and he flew seven times on the Space Transportation System (the Space Shuttle), spending a total of over 66 days on orbit. Upon retiring from NASA, he founded the Ad Astra Rocket Company to pursue the idea that had captured his imagination during his doctoral work, refined by the understanding of aerospace engineering and propulsion he gained at NASA. Ad Astra continues to develop the VASIMR thruster, and has consistently met its deadlines, budgetary requirements, and its own modeling expectations, but the concept is complex in application, and, as with everything in aerospace, development takes a long time to come to fruition.

VX-200 Prototype
Ad Astra VX-200 Prototype, image courtesy Ad Astra

After the end of several rounds of NASA funding, and a series of successful tests of their VX-100 prototype, Ad Astra continued to develop the thruster privately. Their newer VX-200 thruster is designed for higher power, with better optimization of several of its components. Following additional testing, the engine is currently going through another round of upgrades to prepare for a 100-hour test firing. Ad Astra has been criticized for its development schedule, and the problems it faces are indeed significant, but so far it has managed to meet every target it has set.

The main advantage of this concept is that it eliminates both friction and erosion between the propellant and the body of the thruster. It also reduces the thermal load on the thruster: since there’s no physical contact, conduction can’t occur, and the heat absorbed by the thruster is limited to radiation (which is itself limited by the surface area of the plasma and the temperature difference between the plasma and the thruster body). This doesn’t eliminate the need to cool the thruster in most cases, but it does mean that more heat is kept within the plasma – and, in fact, by using regenerative cooling (as most modern chemical engines do) it’s possible to increase the efficiency of the thruster.

Another major advantage, and one that may be unique to VASIMR, is the first part of the acronym: VAriable Specific Impulse. Every staged rocket has variable specific impulse, in a way: most first-stage boosters have very low specific impulse compared to the upper stages (although, in the case of the boosters, this is due both to atmospheric pressure and to the need to impart a large amount of thrust over a limited timespan), and there are designs that use different propulsion systems with different specific impulse and thrust characteristics to optimize their usefulness for particular mission profiles (such as bimodal thermal-electric nuclear rockets, the subject of our next blog series after this look at electric propulsion). But VASIMR offers the ability to vary its exhaust velocity by changing the temperature to which it heats the propellant. This, in turn, changes the specific impulse, and therefore the thrust. This is where the “30 Day Round Trip to Mars” clickbait headlines come into play: by continuously varying its thrust and isp depending on where it is in the interplanetary transfer maneuver, VASIMR is able to optimize trip time in ways that few, if any, other contemporary propulsion types can. However, the trip time is highly dependent on available power – trip times on the order of 90 days require a power source of 200 MW, at which point the specific power of the system becomes a major concern. To explain this in detail gets into orbital mechanics far more deeply than I would like in this already very long blog post, so we’ll save that discussion for another time.

So how does VASIMR actually work, what are the requirements for efficient operation, and how does it achieve these highly unusual capabilities? In many ways, it’s very similar to a typical RF thruster: a gas, usually argon, is injected into a quartz plenum and run through a helicon RF emitter. Because of the shape of the radio waves produced, this causes a cascading ionization effect within the gas, converting it into a plasma – but the electrons aren’t removed, as in the more familiar electrostatic thrusters (the focus of our next blog post). This excitation also heats the plasma to about 5800 K. The plasma then moves to a second RF stage, an ion cyclotron emitter, which efficiently heats the plasma to the desired temperature before it is directed out of the back of the thruster.

Because all of this occurs at very high temperatures, the entire thruster is wrapped in superconducting electromagnets to keep the plasma away from the walls, and the nozzle used to direct the thrust is magnetic as well. Since no components are in physical contact with the plasma after it becomes ionized, there are no erosion wear points within the thruster, which extends the lifetime of the system.

By varying the amount of gas fed into the system while maintaining the same power level, the gas is ionized to different degrees and heated by different amounts, so the exhaust velocity rises, increasing the specific impulse of the engine while reducing the thrust. This is perhaps the most interesting part of this propulsion concept, and the reason it gets so much attention.
Other systems that use pulsed rather than steady-state thrust are able to vary the thrust level without changing the isp (such as the pulsed induction thruster, or PIT) by changing the pulse rate of the system, but those systems have limits on how much the pulse rate can be varied. We’ll look at these differences more in a later blog post, though.
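The constant-power trade between thrust and specific impulse can be sketched with two textbook relations, F = ṁ·ve and P_jet = ½·ṁ·ve², which combine to F = 2·P_jet/ve. The power level below is an arbitrary illustration, not an Ad Astra figure:

```python
# At fixed jet power, thrust and exhaust velocity trade off inversely:
# F = mdot * ve and P_jet = 0.5 * mdot * ve**2, hence F = 2 * P_jet / ve.
def thrust_at_fixed_power(p_jet_w: float, ve_m_s: float) -> float:
    """Thrust (N) for a given jet power (W) and exhaust velocity (m/s)."""
    return 2.0 * p_jet_w / ve_m_s

P_JET = 100e3  # 100 kW of jet power -- an arbitrary illustrative value
for ve in (10e3, 30e3, 50e3):  # exhaust velocities in m/s
    print(f"ve = {ve/1e3:>4.0f} km/s -> thrust = {thrust_at_fixed_power(P_JET, ve):.1f} N")
# Raising exhaust velocity (and so Isp) at constant power lowers thrust
# in direct proportion -- the heart of "variable specific impulse."
```

This is exactly the knob the gas flow rate turns: less gas per unit power means a hotter, faster exhaust and proportionally less thrust.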

 

Thrust Efficiency charts, Bering et al 2014
Thrust efficiency vs RF power and isp, Bering et al 2014

Many studies have looked at the thrust efficiency of VASIMR. Like many electric propulsion concepts, it becomes more efficient as more power is applied to the system; in addition, the higher the specific impulse being used, the more efficiently it uses the available electrical power. The current VX-200 prototype is a 212 kW input, 120 kW thrust-power system, far more powerful than the original VX-10, and as such more efficient. Most estimates suggest a minimum of 60% thrust efficiency (efficiency increases with power input), rising to 90% for higher-isp operation. However, given the system’s sensitivity to available power, and the fact that it’s not clear what a final flight thruster’s power availability will be, it’s difficult to guess what a flight system’s thrust efficiency will be.
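Thrust efficiency is just the ratio of jet power to electric input power, so the VX-200 figures quoted above give a quick ballpark (the actual reported efficiency varies with the operating point, so treat this as a rough check only):

```python
# Thrust efficiency = power delivered to the exhaust jet / electric input power.
def thrust_efficiency(p_jet_w: float, p_electric_w: float) -> float:
    return p_jet_w / p_electric_w

# VX-200 figures quoted above: 212 kW electric input, 120 kW in the jet.
eta = thrust_efficiency(120e3, 212e3)
print(f"{eta:.0%}")  # prints "57%"
```

That sits right around the 60% floor mentioned above; higher-isp operating points push the ratio up.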

 

Ad Astra is currently upgrading the VX-200 thruster for extended operations. So far, problems with cooling various components (such as the superconducting electromagnets) have led to shorter lifetimes than are theoretically possible – although, to be fair, those cooling problems come down to cooling systems simply not having been installed on the test articles yet. Additionally, more optimization is being done on the magnetic nozzle. One of the challenges with a magnetic nozzle is that the plasma doesn’t want to “unstick” from the magnetic field lines used to contain the propellant. While this isn’t as major a challenge for the thruster as the thermal management problems, it is a source of inefficiency in the system, and so is worth addressing.

There’s a lot more that we could go into on VASIMR, and in the future we will come back to this concept; but, for the purposes of this article, it’s a wonderful example of how gray the area between thermal and MPD thrusters is: the propellant ionization and magnetic confinement of the heated plasma are both virtually identical to an applied-field MPD thruster (more on that below), but the heating mechanism and thrust production are virtually identical to an RF thruster.

Let’s go ahead and look at what happens if you use magnetic fields instead of heat to accelerate your propellant, but keep most of the systems we’ve described identical in function: the magnetoplasmadynamic thruster.

 

Magnetoplasmadynamic Thrusters

Coauthor: Roland A. Gabrielli, IRS

NASA MPD concept
Self-field MPD Thruster Concept, image courtesy NASA

Magnetoplasmadynamic thrusters are a high-performance electric propulsion concept, and as such offer greater thrust potential than the electrostatic thrusters we’ll look at in the next blog post. They also tend to have higher power requirements, which is why they have not yet been used as dedicated thrusters on operational spacecraft, although they’ve been researched since the 1960s in the USSR, the USA, West Germany, Italy, and Japan. A few demonstrators have flown on Russian and Japanese experimental satellites. They remain an attractive and cost-efficient option for high-thrust electric propulsion, including Mars transfer engines.

So far in this article, we have discussed electric thrusters which are principally thermal thrusters: the propellant runs into a reaction chamber, takes on heat, and expands through a nozzle. This is as true for VASIMR as for resistojets, thrusters based on inductive plasma generators, and arcjets. Yet VASIMR introduces a different set of physics for thrusters: magnetohydrodynamics (MHD). This term designates the harnessing of fluids (hence ‘hydro’) with the forces (hence ‘dynamic’) emerging from magnetic fields (hence ‘magneto’). In order to effectively use magnetic forces, the fluid has to be susceptible to them, so its particles should be electrically polar or even charged. The latter case, plasma, is the most common in electric propulsion.

Since the characteristics of plasma did not play a vital role in the working principles of the previous thermo-electric thrusters, we should briefly discuss the concept. The energy of plasma is so high that electrons are no longer tied to their atoms, which then become ions. Both electrons and ions are charged particles whizzing around in a shared cloud, the plasma. Despite being neutral to the outward due to containing the same amount of negative as of positive charges, the plasma interacts with magnetic fields. These magnetohydrodynamic interactions present themselves for various applications, ranging from power generators in terrestrial power plants over magnetic plasma bottles to propulsion.

In VASIMR, these forces push the hot plasma away from the walls, protecting both the walls from damaging heat loads and the plasma from cooling so rapidly that thrust is lost. This allows VASIMR to have a very hot medium for expansion. While this puts VASIMR among MHD thrusters, it would not yet be a genuine “plasma thruster” if it were not for the magnetic nozzle, which adds electromagnetic components to the forces generating the thrust. Among these components, the most important is the Lorentz force, which arises when a charged particle moves through a magnetic field. The Lorentz force is at right angles to both the local magnetic field line and the particle’s trajectory.

There are two main characteristics of an MPD thruster:

  1. The plasma constitutes a substantial part of the medium, which imparts a significant integral Lorentz force,
  2. the integral Lorentz force has a relevant contribution towards the exhaust direction.

The electromagnetic contribution is the real distinction from the previous thermal approaches, as the kinetic energy of the jet is gained not only from undirected heating, but also from a very directed acceleration. The greater the electric discharge, and the more powerful the magnetic field, the more the propellant is accelerated, and therefore the higher the exhaust velocity. Besides the Lorentz force, there are also minor electromagnetic effects, like a “swirl” and Hall acceleration (which we’ll look at in the next blog post), but the defining electromagnetic contribution is the Lorentz force. Since the latter acts on a plasma, this type of thruster is called the magnetoplasmadynamic (MPD) thruster.

The Lorentz force contribution is also how magnetic nozzles work: the forces involved can be broken into three parts – along the thruster axis, toward the thruster axis, and at right angles to both of these, around the axis. The first part adds to the thrust, the second pushes the plasma toward the centre, and the third generates a swirling effect, the latter two contributing both to the thrust and to spreading the arc into radial symmetry.

 

X16 Plasma
X-16 Plasma plume with argon propellant, ISS USTUTT

There are various ways to build MPD thrusters, with different propellants, geometries, methods of plasma and magnetic field generation, and operating regimes (stationary or pulsed). The actual design depends mostly on the available power. The core of the most common architecture for stationary MPD thrusters is the arc plasma generator, which makes them seem fairly similar to thermal arcjets. But that’s just in appearance: you can in fact build MPD thrusters almost completely free of any thermal contribution to the thrust, as evidenced by the German Aerospace Center’s (DLR) X-16, or the PEGASUS thruster that we’ll look at later in this post.

 

These types of thruster (technically, stationary MPD thrusters with arc generation) differ most noticeably in the way the magnetic field is generated:

  • Applied-field (AF) MPD thrusters, equipped with either a torus of permanent magnets or a Helmholtz coil placed around the jet.
  • Self-field (SF) MPD thrusters, which generate their magnetic field by induction around the current travelling in the arc.

Note that arc-generator-based AF-MPD thrusters also experience self-field effects, to a minor extent. A schematic of an SF-MPD thruster is shown below, illustrating the conceptual differences between arcjets and MPD thrusters (the top half is an MPD thruster, the bottom half an arcjet; note the difference in throat length and nozzle size). The most crucial difference is the contact of the arc with the anode. While a very long arc is undesirable in arcjets (for very important design reasons which we don’t have time to go into here), in the MPD thruster it is crucial to provide the thruster with sufficient Lorentz force. Moreover, the longer the oblique leg of the arc, the more of the Lorentz force will point out of the thruster. This is why relatively large anode diameters are the norm in MPD thrusters of this type, and why simple arcjets and other electrothermal thrusters tend to be more slender than most arc-based MPD thrusters. The anode diameter may not be too large, however, as the arc becomes more resistive with increasing length, entailing more and more energy losses.

Self Field MPD
Schematic comparing an SF-MPD thruster (above the dash-dotted axis) to a simple arcjet (below) (Institute of Space Systems, USTUTT). F: thrust, ce: exhaust velocity, m: propellant mass flow (feed). Note how the arc pushes far out of the nozzle exit. The dashed lines j indicate the current in the MPD thruster’s arc, and the bold lines B the induced magnetic field, whose circle lies in a plane perpendicular to the thruster axis. The thin arrows show the local direction of both lines; it is to these arrows that the Lorentz force FLor is at right angles.

In stationary arc-based MPD thrusters, the choice of propellant is mainly dictated by ease of ionisation, which tends to be more important than the low molar mass that drives the preference for hydrogen in thermal thrusters. This shift becomes more pronounced the more the Lorentz-force contribution outweighs the thermal contribution. Consequently, many arc-based MPD experiments are run with noble gases like helium or neon; xenon, while often discussed in pure development work, is rarely considered for missions due to its cost. The most important noble gas for MPD is thus argon. Other easily ionised substances are the liquid alkali metals, commonly lithium, which enables very good efficiencies – however, the complicated propellant feed system and the risk of deposition are serious drawbacks in that case. Nevertheless, there is still a very large field for hydrogen or ammonia as propellants.

The major lifetime-restricting components in arc-based MPD thrusters are the cathode and – to a lesser extent – the anode. These erode over time under the plasma arc, which gnaws at the metal through electron emission, sublimation, and other mechanisms. Depending on the quality of the design and the materials, this becomes significant after anywhere from a few hundred hours of operation to tens of thousands. Extending electrode life is a challenge, because the plasma behavior changes depending on a number of factors determined by the plasma and the system in question – and, to add to the complexity, the geometry of the electrodes, which is itself altered by the erosion, is one of them. Because of this, some designs have easily replaceable cathodes, while others (like the PEGASUS, which we’ll cover below) just swap out the whole drive: the original design for the SEI mission that PEGASUS was proposed for actually had seven thrusters on board, run in series as the cathode on each one wore out.

AF-MPD – The Lower-Power Option

 

Japanese AF-MPD, permanent magnet
Japanese AF-MPD concept with 0.1 T permanent magnet

Depending on the available power, the arc current in an MPD thruster may or may not be intense enough to induce a significant magnetic self-field. At the lower end of the power scale, it definitely isn’t, breaking the MPD thruster principle. Because of this, lower-powered systems require an external magnet to create the magnetic field, which is why they’re called applied-field MPD thrusters. In general, these systems range from 50 to 500 kW of electric power, although this is far from a hard limit. The advantage of applied-field MPD thrusters over self-field types (more on the self-field later) is that the magnetic field can be manipulated independently of the amount of charge running through the cathode and anode, which can mean longer component life for the thruster. There are two main approaches to providing an external field: the first is a ring of permanent magnets around the volume occupied by the arc; the second is a Helmholtz coil (an electromagnet whose coil wraps around the lengthwise axis of the thruster, sometimes using superconductors). At the lower end of the power range, the permanent magnet may be the better option because it doesn’t consume what little electricity you have, while the electromagnets are more interesting at the upper end.

 

All these solutions require cooling, and the requirements grow more demanding the more powerful the magnet is. This cooling can be achieved passively at the lower end of the power range (given enough free volume). At mid-level power, the cold propellant itself can provide the cooling before running alongside the hot anode and entering the plasma generator – using cold propellant to cool the thruster is called regenerative cooling (a mainstay of current chemical and nuclear thermal engines). The highest-performing magnets for AF-MPD, superconducting coils, must be kept at very low temperatures, and this tends to require an additional, secondary coolant cycle, including its own refrigeration system with pumps, compressors, and radiators.

 

ISS SX-3 AF-MPD Helmholtz coil
Recent development at the Institute of Space Systems, Stuttgart: SX 3 prior to experimentation. The outer flange covers a Helmholtz coil.

The nice thing about electromagnets is that the strength of the field can be tuned within a certain range: if the coil degrades over time, more electricity (and more coolant, due to increased electrical resistance) can be pumped through. This isn’t an option with a permanent magnet. However, the magnetic field generation equipment is one of the lifetime-limiting components of this type of thruster, so it’s worth considering.

 

There’s not really a limit to how much power you can use in an applied-field MPD thruster, and especially with a Helmholtz coil you can theoretically tune your drive system in a number of interesting ways – for example, increasing the field strength to constrict the plasma more when running a lower-mass-flow stream. Something happens once enough current is running through the plasma, though: the unavoidable self-field contribution increases. Besides complicating the determination of the field topology, the self-field is an advantage: at sufficient power, you can get away without coils or magnets, making the system lighter, simpler, and less temperature-sensitive. This is why most very high powered systems use the self-field type of MPD.

Before we look at this concept in the next subsection, let us have a look at current developments from around the world. Table ## summarises a few interesting AF-MPD thrusters, both their performance parameters – thrust F, exhaust velocity c_e, thrust efficiency η_T, electric (feed) power P_e, and jet power P_T – and design parameters, like anode radius r_A, cathode radius r_C, arc current I, magnetic field B, and the propellant. Recent AF-MPD thruster development has been conducted by Myers in the USA, by MAI (the Moscow Aviation Institute) in Russia, at the University of Tokyo in Japan, and, with SX 3, in Germany at the Institute of Space Systems, Stuttgart. The types X 9 and X 16 in table ## are the IRS’ legacy from the German Aerospace Center.

Thruster  Propellant  r_A / mm  r_C / mm  I / A  B / T  F / mN  c_e / km/s  η_T / %  P_e / kW  P_T / kW
Myers     Ar          25        6.4       1000   0.12   1400    14          22       44.5      9.8
MAI       Li          80        22.5      1800   0.09   2720    33.6        44.1     103.5     45.7
U Tokyo   H2          40?       4         200    0.1    50      55.6        19.3     7.2       1.4
SX 3      Ar          43        6         450    0.4    2270    37.9        58       74        42.9
X 16      Ar          20        3         80     0.6    251     35.9        38.8     11.6      4.5
X 16      Xe          20        3         80     0.6    226     25.1        29.6     9.6       2.84
X 9       Ar          20        5         1200   0.17   2500    20.8        28.1     93        26.1

Design parameters and experimental performance data from various AF-MPD thrusters from over the world. Gabrielli 2018
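The columns of the table hang together through two simple relations: jet power is P_T = ½·F·c_e, and thrust efficiency is η_T = P_T / P_e. A quick cross-check against two of the rows (values transcribed from the table above):

```python
# Cross-check two table rows: P_T = 0.5 * F * c_e and eta_T = P_T / P_e.
def jet_power_kw(thrust_mn: float, ve_km_s: float) -> float:
    """Jet power in kW from thrust in mN and exhaust velocity in km/s."""
    return 0.5 * (thrust_mn / 1e3) * (ve_km_s * 1e3) / 1e3

# Myers (Ar): F = 1400 mN, c_e = 14 km/s, P_e = 44.5 kW
p_t_myers = jet_power_kw(1400, 14.0)
print(f"Myers: P_T = {p_t_myers:.1f} kW, eta_T = {p_t_myers / 44.5:.1%}")  # 9.8 kW, 22.0%

# SX 3 (Ar): F = 2270 mN, c_e = 37.9 km/s, P_e = 74 kW
p_t_sx3 = jet_power_kw(2270, 37.9)
print(f"SX 3:  P_T = {p_t_sx3:.1f} kW, eta_T = {p_t_sx3 / 74.0:.1%}")  # ~43.0 kW, ~58%
```

Both match the tabulated P_T and η_T to rounding, which is a nice sanity check on the transcription.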

SX-3 plume (argon propellant), IRS
Visual plume of SX 3 at the Institute of Space Systems, Stuttgart. 

Russian 100 kWe AF-MPD
Russian 100 kWe lithium AF-MPD thruster.

Self-Field MPD: When Power Isn’t a Problem

In the previous section, we looked at low- and medium-powered MPD thrusters. At those power levels, an external field has to be applied to ensure a magnetic field powerful enough to generate the Lorentz force on the plasma. But even when it wasn’t strong enough to impart meaningful thrust, there was always a self-field contribution, albeit a weak, almost negligible one. The cause of the self-field contribution is the induction of a magnetic field around the arc due to the current it carries. You can get an idea of the direction of this magnetic field with the “right fist rule”: close your right fist around the generating current, with your thumb pointing toward the cathode, and your fingers will curl in the direction of the magnetic field. To get the direction of the Lorentz force, align your right hand again: this time your thumb points in the direction of the magnetic field and – at a right angle – your index finger in the direction of the current; at right angles to both, your middle finger points in the direction of the Lorentz force. (Note that you can also use this three-finger rule to study the acceleration in AF-MPD thrusters.)

The strength of the induced self-field depends on the current: the stronger the current, the stronger the magnetic field, and, in turn, the Lorentz acceleration. As a consequence, given a sufficient current, the self-field becomes effective enough to provide a decent Lorentz acceleration on its own.
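For a rough feel of the scaling, the field induced around a long straight current follows B = μ₀I/(2πr), and the force direction is the cross product the fist rules encode. Treating the arc as a straight conductor is only an order-of-magnitude sketch, and the current and radius below are illustrative, not measured values:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def self_field_t(current_a: float, radius_m: float) -> float:
    """Azimuthal field (T) at radius r around a long straight current."""
    return MU0 * current_a / (2 * math.pi * radius_m)

# A 5 kA arc, evaluated 2 cm off-axis (order-of-magnitude sketch only):
print(f"{self_field_t(5_000, 0.02):.3f} T")  # prints "0.050 T"

def cross(a, b):
    """Right-hand-rule cross product -- the three-finger rule in code."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Current along -z, local field along +y: the Lorentz force j x B
# points along +x, at right angles to both, as the fist rule predicts.
print(cross((0, 0, -1), (0, 1, 0)))  # prints "(1, 0, 0)"
```

Note that a multi-kiloampere arc self-generates tenths of a tesla at close range – the same order as the applied fields in the AF-MPD table above, which is exactly why the external magnet becomes redundant at high power.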

The current depends on the available electric power put into the arc generator, making the applied field obsolete from a certain power level up. This avoids the complications of an external magnet, and provides good efficiencies and attractive performance parameters. For example, at 300 kWe, with an arc current of almost 5 kA (compare the AF-MPD currents above, ranging from 50 A to 2 kA), DT2 – an SF-MPD thruster developed at the Institute of Space Systems in Stuttgart – can provide a thrust of approximately 10 N at an exhaust velocity of 12 km/s, with a thrust efficiency of 20%. These performance possibilities have many people considering SF-MPD as a key technology for rapid, crew-rated interplanetary transport, in particular to Mars. In this use case, SF-MPD thrusters may even be competitive with VASIMR, weighing possible shortcomings in efficiency against a significantly simpler construction and, hence, much smaller cost. However, lacking current astronuclear power sources of sufficient output, development is stagnant, awaiting disruption on the power source side.
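The DT2 figures quoted above are internally consistent, which makes a handy back-of-envelope check (jet power is half of thrust times exhaust velocity):

```python
# Check the quoted DT2 figures: 10 N at 12 km/s from 300 kW electric input.
thrust_n = 10.0
ve_m_s = 12_000.0
p_electric_w = 300e3

p_jet_w = 0.5 * thrust_n * ve_m_s  # kinetic power carried by the exhaust jet
eta = p_jet_w / p_electric_w       # thrust efficiency
print(f"jet power = {p_jet_w/1e3:.0f} kW, efficiency = {eta:.0%}")  # 60 kW, 20%
```

Ten newtons may not sound like much, but it is hundreds of times the thrust of the ion engines flying today, which is the whole appeal of high-power MPD.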

DT-2 Plume Argon
DT 2 in operation at the Institute of space systems, Stuttgart.

 

MPD Cutaway high res loose pin
Simplified model of DT 2. ISS USTUTT design, image BeyondNERVA

Another example of a “typical” self-field high-powered MPD thruster application (since, as with all types of electric propulsion, the amount of power applied to the thruster defines the operational parameters) is the PEGASUS drive, an electric propulsion system developed under the Space Exploration Initiative (SEI) for an electric propulsion mission to Mars. Committed research on this concept began in the mid-1980s, aiming at a mission in the late 1990s to early 2000s, but funding for SEI was canceled, and development has been on hold ever since. Perhaps most notable is the shape of the nozzle, which is fairly typical of nozzles designed for a concept we discussed briefly earlier in the post: the sinuous curvature of the nozzle profile is designed to minimize the amount of thermal heating within the plasma – if a nozzle has this shape, it means the thermal contribution to the thrust is not only unneeded, but actually detrimental to the performance of the thruster.

 

Thruster Components
PEGASUS drive system schematic, Coomes et al 1993

 

A number of novel technologies were used in this design, and as such we’ll look at it again a couple of times during this series: first for the thruster, then for its power conversion system, and finally for its heat rejection system.

Nozzle xsection
PEGASUS MPD Thruster, Coomes et al

Pulsed Inductive Thrusters

Pulsed inductive thrusters (PIT) are a type of thruster with many advantages over other MPD thrusters. They don’t need an electrode – one of the major causes of wear in most thrusters – and they are able to maintain their specific impulse over a wide range of power levels. This is because the thruster isn’t steady-state, like many other forms of thruster in common use; instead, a gaseous propellant is sprayed in brief jets onto a flat induction coil, through which a bank of capacitors is discharged for a very brief period (usually in the nanosecond range), causing the gas to become ionized and then accelerated by the Lorentz force. The frequency of pulses depends on the time it takes to charge the capacitors, so the more power that’s available, the faster the pulses can be discharged. This directly affects the amount of thrust available from the thruster; but, since the discharges and the volume of gas are all the same, the Lorentz force applied – and therefore the exhaust velocity of the propellant and the isp – remain the same. Another advantage of inductive plasma generation is the wide variety of available propellants, from water to ammonia to hydrazine, making it attractive for possible in-situ propellant use with minimal processing. In fact, one proposal by Kurt Polzin at Marshall SFC uses the Martian atmosphere for propellant, making refueling a Mars-bound interplanetary spacecraft a much easier proposition.
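Since every pulse is identical, average thrust is simply the impulse delivered per pulse (the “impulse bit”) times the pulse rate; the impulse-bit value below is a made-up illustration, not a measured PIT figure:

```python
# Average PIT thrust scales linearly with pulse rate while Isp stays fixed,
# because every pulse delivers the same impulse bit at the same exhaust velocity.
def average_thrust(impulse_bit_n_s: float, pulse_rate_hz: float) -> float:
    """Mean thrust (N) = impulse per pulse (N*s) * pulses per second (Hz)."""
    return impulse_bit_n_s * pulse_rate_hz

I_BIT = 0.1  # N*s per pulse -- an assumed, illustrative value
for rate in (1, 5, 10):  # Hz; set by how fast the capacitor bank can recharge
    print(f"{rate:>2d} Hz -> {average_thrust(I_BIT, rate):.1f} N average thrust")
# More available power -> faster capacitor recharge -> higher pulse rate ->
# more thrust, with exhaust velocity (and so Isp) unchanged.
```

Contrast this with VASIMR, where throttling at constant power moves the operating point along a thrust/isp trade rather than leaving isp fixed.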

Schematic of PIT operation. Image on left is gas flow, image on right is magnetic fields. Frisbee, 2005

This gives the system a lot of flexibility, especially for interplanetary missions: extra thrust has distinct advantages when escaping a gravity well (such as Earth orbit) or during orbital capture, but isn't necessary for the "cruise" phase of an interplanetary mission. Another benefit shows up on power-constrained missions: in many thruster types the specific impulse – and therefore the propellant needed for the mission – varies with how much power is left over for propulsion after other loads, like sensors and communications, are accounted for. For the PIT, less power just means less thrust per unit time, while the isp stays the same. This isn't a major advantage for every mission type, but for some it could be a significant draw.

PIT was one of the proposed propulsion types for Project Prometheus (which ended up selecting the HiPEP system that we'll discuss in the next blog post), under the name NuPIT. This thruster offered thrust efficiency greater than 70%, and an isp of between 2,000 and 9,000 seconds depending on the specific design selected (the isp would remain constant at whatever value was chosen), using a 200 kWe nuclear power plant (on the lower end of what a crewed NEP mission would use) with ammonia propellant. Other propellants could have been selected, but they would have affected the thruster's performance in different ways. A further advantage of the PIT is that its range of propellant options is far wider than most other thruster types', even thermal rockets': when chemical dissociation occurs (as it does to some degree in most propellants), anything that would become a solid has no effective surface to deposit onto, and what little residue does build up lands on a flat surface that doesn't rely on thermal conductance or orifice size to function – it's just a plate holding the induction coil.
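As a back-of-envelope check on those numbers (not figures from the NuPIT papers), the standard electric-propulsion relation between jet power, efficiency, and exhaust velocity – jet power = η·P = ½·F·vₑ, so F = 2ηP/vₑ – gives the thrust available at each end of the quoted isp range for an assumed 200 kWe, 70%-efficient system:

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_from_power(power_w, efficiency, isp_s):
    """F = 2*eta*P/v_e: thrust of an electric thruster converting a
    fraction `efficiency` of input electrical power into jet kinetic power."""
    exhaust_velocity = isp_s * G0
    return 2.0 * efficiency * power_w / exhaust_velocity

# Assumed NuPIT-like operating points: 200 kWe input, 70% efficiency.
f_low_isp = thrust_from_power(200e3, 0.70, 2000)   # low-isp, high-thrust end
f_high_isp = thrust_from_power(200e3, 0.70, 9000)  # high-isp, low-thrust end
```

The trade is linear: choosing the 9,000 s design point costs a factor of 4.5 in thrust relative to the 2,000 s point, for the same power plant.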

NuPIT pulsed inductive thruster characteristics, Frisbee 2005

For a “live off the land” approach to propellant, PIT thrusters offer many advantages in their flexibility (assuming replacement of the gaseous diffuser used for the gas pulses), predictable (and fairly high) specific impulse, and variable thrust. This makes them incredibly attractive for many mission types. As higher powered electrical systems become available, they may become a popular option for many mission applications.

We’ll return to PIT thrusters in a future post, to explore the implications of the variable thrust levels on mission planning, because that’s a very different topic than just propulsion mechanics. It does open fascinating possibilities for unique mission profiles, though, in some ways very similar to the VASIMR drive.

More to follow!

Electrothermal and MPD thrusters cover a wide range of low- to high-power options, and offer many unique capabilities to mission planners and spacecraft architects. With the future availability of dense, high-power fission power systems for spacecraft, these thrusters may prove valuable not just for short missions or reaction control systems, but for interplanetary missions as well. Some will have to wait until those power sources are available, but others are already in use on operational satellites, with decades of efficient and effective operation behind them.

The next post will complete our look at electric propulsion systems with a look at electrostatic thrusters, including gridded ion drives, Hall effect thrusters, and other forms of electric propulsion that use differences in electric potential to accelerate an ionized propellant. These have been in use for a long time, and are far more familiar to many people, but there are some incredible designs that have yet to be flown that extend the capabilities of these systems beyond even the very efficient systems in use today.

Another thank you to Roland Gabrielli and Mikkel Haaheim for their invaluable help on this blog post. Without them, it wouldn't be nearly as comprehensive or accurate as it is.

Again, I apologize that this blog post has taken so long. I reached the point where I typically decide to split one post into two several times while writing it, and actually DID split it a couple of times. Much of this information, along with a lot of the material in the next post on electrostatic thrusters, was originally going to be part of the last post, but once that one reached 25 pages I split it between history and summary; the same thing happened again while writing this post, separating the electrostatic thrusters from the thermal and MPD thrusters. The latter two concepts also almost got their own posts, but as we've seen, they share key features, so it made sense to keep them together. The electrostatic thruster post is already coming along well, and I hope it won't take as long to write as this one did… sadly, I can't promise that, but I'm trying.

Sources

Electrothermal

An Analysis of Current Propulsion Systems, Weebly website http://currentpropulsionsystems.weebly.com/electrothermal-propulsion-systems.html

Resistojet

Vela spacecraft, Gunter's Space Page profile https://space.skyrocket.de/doc_sdat/vela.htm

Alta Space Systems Resistojet page: https://web.archive.org/web/20130604101644/http://www.alta-space.com/index.php?page=resistojet

Induction Thermal Thruster

 

Microwave/RF Thermal Thruster

Coaxial Microwave Electrothermal Thruster Performance in Hydrogen, Richardson et al, Michigan State 1968 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19950005171.pdf

Microwave Electrothermal Thruster webpage, Princeton http://alfven.princeton.edu/research/past/met

The Microwave Thermal Thruster Concept, Parkin et al, CalTech https://authors.library.caltech.edu/3304/1/PARaipcp04b.pdf

Microwave Electro-thermal Thruster patent, Raytheon 1999 https://patents.google.com/patent/US5956938

Fourth Symposium on Beamed Energy Propulsion, ed. Komurasaki and Yabe, 2005 https://sciencedocbox.com/Physics/70705799-Beamed-energy-propulsion.html

Arcjet

Arc-Jet Thruster for Space Propulsion, Wallner et al 1965 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19650017046.pdf

Aerojet MR-510 hydrazine arcjet page, Astronautix: http://www.astronautix.com/m/mr-510.html

University of Stuttgart ISS PTFE Arcjet Mission concept webpage: https://web.archive.org/web/20140318021932/

http://www.elringklinger.de/en/germany-land-of-ideas-elringklinger-drives-satellite

VASIMR

High Power Electric Propulsion with VASIMR Technology, Chang-Diaz et al 2016

http://www.unoosa.org/documents/pdf/psa/hsti/CostaRica2016/2-4.pdf

VX-200 Magnetoplasma Thruster Performance Results Exceeding 50% Thruster Efficiency, Longmier et al 2011 https://www.researchgate.net/publication/228977378_VX-200_Magnetoplasma_Thruster_Performance_Results_Exceeding_Fifty-Percent_Thruster_Efficiency

Improved Efficiency and Throttling Range of the VX-200 Magnetoplasma Thruster, Longmier et al 2014 http://www.adastrarocket.com/Ben-JPP-2014.pdf

Low Thrust Trajectory Analysis (A Survey of Missions using VASIMR for Flexible Space Exploration), Ilin et al 2012 http://www.adastrarocket.com/VASIMR_for_flexible_space_exploration-2012.pdf

Nuclear Electric Propulsion Mission Scenarios using VASIMR, Chang-Diaz et al 2012 https://www.lpi.usra.edu/meetings/nets2012/pdf/3091.pdf

MPD

Steady State MPD

Magnetic Nozzle Design for High-Power MPD Thrusters, Hoyt, Tethers Unlimited 2005 http://www.tethers.com/papers/IEPC05_HoytNozzlePaper.pdf

Applied Field MPD

Applied-Field MPD Thruster Geometry Effects, Myers, Sverdrup 1991 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19910017903.pdf

Performance of an Applied Field MPD Thruster, Paganucci et al 2001 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2001index/2002iepc/papers/t12/132_2.pdf

Mayer, T., Gabrielli, R. A., Boxberger, A., Herdrich, G., and Petkow, D.: "Development of Analytical Scaling Models for Applied Field Magnetoplasmadynamic Thrusters," 64th International Astronautical Congress, International Astronautical Federation, Beijing, September 2013.

Myers, R. M., “Geometric Scaling of Applied-Field Magnetoplasmadynamic Thrusters,” Journal of Propulsion and Power, Vol. 11, No. 2, 1995, pages 343–350.

Tikhonov, V. B., Semenikhin S. A., Brophy J.R., and Polk J.E., “Performance of 130 kW MPD Thruster with an External Magnetic Field and Li as a Propellant”, International Electric Propulsion Conference, IEPC 97-117, Cleveland, Ohio, 1997, pp. 728-733.

Boxberger, A., et al., "Experimental Investigation of Steady-State Applied-Field Magnetoplasmadynamic Thrusters at Institute of Space Systems", 48th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, Atlanta, Georgia, 2012.

Boxberger, A., and G. Herdrich. “Integral Measurements of 100 kW Class Steady State Applied-Field Magnetoplasmadynamic Thruster SX3 and Perspectives of AF-MPD Technology.” 35th International Electric Propulsion Conference. 2017.

Pegasus Drive

The Pegasus Drive: A Nuclear Electric Propulsion System for the Space Exploration Initiative; Coomes and Dagle, PNL 1990 https://www.osti.gov/servlets/purl/6399282

A Low-Alpha Nuclear Electric Propulsion System for Lunar and Mars Missions; Coomes and Dagle, PNL 1992 https://www.osti.gov/servlets/purl/10116111

MPD Thruster Performance Analysis Models; Gilland and Johnson NASA GRC, 2007 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20070032052.pdf

Self-Field MPD

On the Thrust of Self-Field MPD Thrusters, Choueiri 1997 https://alfven.princeton.edu/publications/choueiri-iepc-1997-121

Pulsed Inductive Thruster (PIT)

The PIT Mark V Pulsed Inductive Thruster, Dailey et al 1993 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19930023164.pdf

The Nuclear Electric Pulsed Inductive Thruster (NuPIT) Mission Analysis for Prometheus, Frisbee et al 2005 https://trs.jpl.nasa.gov/bitstream/handle/2014/38357/05-1846.pdf

Pulsed Inductive Thruster Using Martian Atmosphere as Propellant, Polzin 2012 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20120015307.pdf

Categories
Development and Testing Electric propulsion History Non-nuclear Testing Nuclear Electric Propulsion Spacecraft Concepts

Electric Propulsion: The Oldest “Futuristic” Propulsion Possibility

Hello, and welcome back to Beyond NERVA. Today, we are looking at a very popular topic, but one that doesn’t necessarily require nuclear power: electric propulsion. However, it IS an area that nuclear power plants are often tied to, because the amount of thrust available is highly dependent on the amount of power available for the drive system. We will touch a little bit on the history of electric propulsion, as well as the different types of electric thrusters, their advantages and disadvantages, and how fission power plants can change the paradigm for how electric thrusters can be used. It’s important to realize that most electric propulsion is power-source-agnostic: all they require is electricity; how it’s produced usually doesn’t mean much to the drive system itself. As such, nuclear power plants are not going to be mentioned much in this post, until we look at the optimization of electric propulsion systems.

We also aren’t going to be looking at specific types of thrusters in this post. Instead, we’re going to do a brief overview of the general types of electric propulsion, their history, and how electrically propelled spacecraft differ from thermally or chemically propelled spacecraft. The next few posts will focus more on the specific technology itself, its’ application, and some of the current options for each type of thruster.

Electric Propulsion: What is It?

In its simplest definition, electric propulsion is any means of producing thrust in a spacecraft using electrical energy. There’s a wide range of different concepts that get rolled into this concept, so it’s hard to make generalizations about the capabilities of these systems. As a general rule of thumb, though, most electric propulsion systems are low-thrust, long-burn-time systems. Since they’re not used for launch, and instead for on-orbit maneuvering or interplanetary missions, the fact that these systems generally have very little thrust is a characteristic that can be worked with, although there’s a great deal of variety as far as how much thrust, and how efficient in terms of specific impulse, these systems are.

There are three very important basic concepts to understand when discussing electric propulsion: thrust-to-weight ratio (T/W), specific impulse (isp), and burn time. The first is self-explanatory: how hard the engine can push compared to how much it weighs, commonly referenced to Earth's gravity. A T/W ratio of 1/1 means the engine can just hover, but no more; a T/W ratio of 3/1 means it can push just under three times its own weight off the ground. Specific impulse is a measure of how much thrust you get out of a given unit of propellant, ignoring everything else, including the weight of the propulsion system; it's directly related to fuel efficiency, and is measured in seconds: if a drive system had a T/W ratio of 1/1 and were made entirely of propellant, the isp would be the length of time it could hover at 1 gee. Finally, there's burn time: the T/W ratio and isp give you the thrust imparted per unit time based on the mass of the drive system and propellant; factoring in the spacecraft's mass then gives the total acceleration over a given period. The longer the engine burns, the more total velocity change is produced.
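Those three definitions translate directly into arithmetic. The engine numbers below are assumed, illustrative values for a generic large chemical engine and a generic ion thruster, not data for any real hardware:

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_to_weight(thrust_n, engine_mass_kg):
    """T/W ratio: thrust divided by the engine's weight at 1 gee."""
    return thrust_n / (engine_mass_kg * G0)

def specific_impulse(thrust_n, mass_flow_kg_s):
    """isp in seconds: thrust per unit weight-flow of propellant."""
    return thrust_n / (mass_flow_kg_s * G0)

# Generic chemical engine: 1 MN thrust, 2 t mass, 300 kg/s propellant flow.
chem_tw = thrust_to_weight(1_000_000, 2_000)
chem_isp = specific_impulse(1_000_000, 300)

# Generic ion thruster: 90 mN thrust, 25 kg mass, 3 mg/s propellant flow.
ion_tw = thrust_to_weight(0.09, 25)
ion_isp = specific_impulse(0.09, 3e-6)
```

With these assumed figures, the chemical engine wins T/W by around five orders of magnitude, while the ion thruster wins isp by roughly a factor of ten – exactly the trade electric propulsion lives on.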

Electric propulsion has a very poor thrust-to-weight ratio (as a general rule), but incredible specific impulse and burn times. The T/W ratio of many of these thrusters is very low because they provide very little thrust, often measured in micronewtons – the thrust is often illustrated as the weight of a few pieces of paper, or a penny, in Earth gravity. However, this doesn't matter once you're in space: with no drag, and orbital mechanics not demanding huge amounts of thrust over short periods, the total impulse delivered matters more for most maneuvers than how long it takes to build up. This is where burn time comes in: most electric thrusters burn continuously, providing minute amounts of thrust over months, sometimes years; typically they accelerate the spacecraft until the halfway point of the mission (in energy budget terms, not necessarily in total mission time), then flip around and decelerate it for the remainder. The trump card for electric propulsion is specific impulse: rather than the few hundred seconds of isp for chemical propulsion, or the thousand or so for a solid core nuclear thermal rocket, electric propulsion delivers thousands of seconds. This means less fuel, which makes the spacecraft lighter and allows truly astounding total velocities; the downside is that building those velocities takes months or years, so escaping a gravity well (for instance, starting from low Earth orbit) can take months. It's therefore best suited for long trips, or for very minor orbital changes – as on communications satellites, where it has made spacecraft smaller, more efficient, and far longer-lived.
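The payoff of those thousands of seconds follows from the standard Tsiolkovsky rocket equation, Δv = isp·g₀·ln(m₀/m_f). The spacecraft masses and isp values here are assumed round numbers for illustration:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, wet_mass_kg, dry_mass_kg):
    """Tsiolkovsky rocket equation: total velocity change for a given
    propellant fraction and specific impulse."""
    return isp_s * G0 * math.log(wet_mass_kg / dry_mass_kg)

# The same 10 t spacecraft, half of it propellant, with two engines:
dv_chemical = delta_v(450, 10_000, 5_000)   # hydrolox-class chemical engine
dv_electric = delta_v(3000, 10_000, 5_000)  # modest ion thruster
```

For the same propellant load, Δv scales directly with isp – here roughly 20 km/s for the electric system versus roughly 3 km/s for the chemical one; the chemical engine just delivers its share in minutes instead of months.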

Electric propulsion is an old idea, but one that has yet to reach its full potential due to a number of challenges. Tsiolkovsky and Goddard both wrote about electric propulsion, but because neither lived in a time when reaching orbit was possible, their ideas went unrealized in their lifetimes: electric propulsion isn't suitable for lifting rockets off the surface of a planet, but for in-space propulsion it's incredibly promising. Both showed that, to put it simply, the only thing a rocket engine fundamentally requires is that some mass be thrown out the back to provide thrust; it doesn't matter what that mass is. Electric acceleration isn't (directly) limited by thermodynamics (except through entropic losses), only by electric potential differences, and can offer very efficient conversion of electric potential into kinetic energy (the "throwing something out of the back" part of the system).

In chemical propulsion, combustion produces heat, which causes the byproducts of the chemical reaction to expand and accelerate. These are then directed out of a nozzle, increasing the velocity of the exhaust and providing thrust. This is the first type of rocket ever developed; however, while advances continue to be made, in many ways the field is chasing ever more esoteric or exotic means of achieving ever more marginal gains, because there's only so much chemical potential energy available in a given system. The most efficient chemical engines top out around 500 seconds of specific impulse, and most hover around the 350-second mark. Where chemical engines excel, though, is thrust-to-weight ratio. They remain – arguably – our best, and currently our only, way of actually getting off Earth.

Thermal propulsion doesn't rely on chemical potential energy; instead, the reaction mass is heated directly by some other source, causing it to expand. The lighter the propellant, the more it expands, and therefore the more thrust is produced per unit mass; heavier propellants can be used to give more thrust per unit volume, at lower efficiency. It should be noted that electrically driven thermal propulsion is not only possible but common, in the form of electrothermal thrusters – we'll dig more into that later.
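The propellant-mass trade can be seen in the ideal-gas limit, where fully expanded exhaust velocity scales as √(T/M). This is a deliberately simplified sketch (fixed ratio of specific heats, complete expansion, no dissociation or losses), not a real engine model:

```python
import math

R_UNIVERSAL = 8.314462618  # universal gas constant, J/(mol*K)

def ideal_exhaust_velocity(temp_k, molar_mass_kg_per_mol, gamma=1.4):
    """Upper-bound exhaust velocity of an ideal gas expanded to vacuum:
    v_e = sqrt(2*gamma/(gamma-1) * R*T/M)."""
    return math.sqrt(2 * gamma / (gamma - 1)
                     * R_UNIVERSAL * temp_k / molar_mass_kg_per_mol)

# Same 2500 K source temperature, two propellants:
v_hydrogen = ideal_exhaust_velocity(2500, 2.016e-3)  # H2, M = 2.016 g/mol
v_ammonia = ideal_exhaust_velocity(2500, 17.03e-3)   # NH3, M = 17.03 g/mol
```

At the same temperature, hydrogen comes out roughly 2.9 times faster than ammonia – more thrust per kilogram of propellant – while ammonia, being denser, gives more thrust per unit tank volume, just as the paragraph above notes.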

Electric propulsion, on the other hand, is kind of a catch-all term when you start to look at it. There are many mechanisms for changing electrical energy into kinetic energy, and looking at most – but not all – of the options is what this blog post is about.

In order to get a better idea of how these systems work, and the fundamental principles behind electric propulsion, it may be best to look into the past. While the potential of electric propulsion is far from realized, it has a far longer history than many realize.

Futuristic Propulsion? … Sort Of, but With A Long Pedigree

The Origins of Electric Propulsion

First Patented Ion Drive, Robert Goddard 1917

When looking into the history of spaceflight, two great visionaries stand out: Konstantin Tsiolkovsky and Robert Goddard. Both worked independently on the basics of rocketry at the turn of the 20th century, both contributed much of the theory, and both saw far beyond their time to the potential of rocketry and spaceflight in general. Both also independently came up with the concept of electric propulsion, although deciding who was first requires some splitting of hairs: Goddard mentioned it first, but in a private journal, while Tsiolkovsky published the concept first in a scientific paper, even if the reference is fairly vague (understandably so, considering the era). Electricity was a relatively poorly understood phenomenon at the time (the nature of cathode and anode "rays" was much debated, and positively charged ions had yet to be formally described), and neither visionary had a deep understanding of the mechanisms involved; their ideas were little more than concepts to serve as starting points, not designs for systems that could actually propel a spacecraft.

Konstantin Tsiolkovsky, image via Wikimedia

The first mention of electric propulsion in the formal scientific literature was in 1911, in Russia. Konstantin Tsiolkovsky wrote that “it is possible that in time we may use electricity to produce a large velocity of particles ejected from a rocket device.” He began to focus on the electron, rather than the ion, as the ejected particle. While he never designed a practical device, the promise of electric propulsion was clearly seen: “It is quite possible that electrons and ions can be used, i.e. cathode and especially anode rays. The force of electricity is unlimited and can, therefore, produce a powerful flux of ionized helium to serve a spaceship.” The lack of understanding of electric phenomena hindered him, though, and prevented him from ever designing a practical system, much less building one.

Robert Goddard, image via Wikimedia

The first mention of electric propulsion in history is from Goddard, in 1906, in a private notebook; but, as noted by Edgar Choueiri in his excellent historical paper published in 2004 (a major source for this section), these early notes don't actually describe (or even reference the use of) an electric propulsion drive system. It wasn't a practical design (that didn't come until 1917), but the basic principles were laid out for the acceleration of electrons (rather than positively charged ions) to the "speed of light." Over the next few years, the concept fermented in his mind, culminating in patents in 1912 (for an ionization chamber using magnetic fields, similar to modern ionization chambers) and in 1917 (for a "Method and Means for Producing Electrified Jets of Gas"). The third of the 1917 patent's three variants described the first recognizable electric thruster, of the type that would come to be known as an electrostatic thruster. Shortly after, though, America entered WWI, and Goddard spent the rest of his life focused on the then far more practical field of chemical propulsion.

Yuri Kondratyuk, image via Wikimedia

Other visionaries of rocketry also came up with concepts for electric propulsion. Yuri Kondratyuk (another, lesser-known, Russian rocket pioneer) wrote “Concerning Other Possible Reactive Drives,” which examined electric propulsion, and pointed out the high power requirements for this type of system. He didn’t just examine electron acceleration, but also ion acceleration, noting that the heavier particles provide greater thrust (in the same paper he may have designed a nascent colloid thruster, another type of electric propulsion).

Hermann Oberth, image via Wikimedia

Another of the first generation of rocket pioneers to look at the possibilities of electric propulsion was Hermann Oberth. His 1929 opus, “Ways to Spaceflight,” devoted an entire chapter to electric propulsion. Not only did he examine electrostatic thrusters, but he looked at the practicalities of a fully electric-powered spacecraft.

Valentin Glushko, image via Wikimedia

Finally, we come to Valentin Glushko, another early Russian rocketry pioneer and a giant of the Soviet rocketry program. In 1929, he actually built an electric thruster (an electrothermal system, which vaporized fine wires to produce superheated particles), although this particular concept never flew. By this time, it was clear that much more work had to be done in many fields before electric propulsion could be used; and so, one by one, these early visionaries turned their attention to chemical rockets, while electric propulsion sat on the dusty shelf of spaceflight concepts yet to be realized, collecting dust next to centrifugal artificial gravity, solar sails, and other practical ideas that wouldn't be realizable for decades.

The First Wave of Electric Propulsion

Electric propulsion began to be investigated after WWII, both in the US and in the USSR, but it would be another 19 years of development before a flight system was introduced. The two countries both focused on one general type of electric propulsion, the electrostatic thruster, but they looked at different types of this thruster, reflecting the technical capabilities and priorities of each country. The US focused on what is now known as a gridded ion thruster, most commonly called an ion drive, while the USSR focused on the Hall effect thruster, which uses a magnetic field perpendicular to the current direction to accelerate particles. Both of these concepts will be examined more in the section on electrostatic thrusters; though, for now it’s worth noting that the design differences in these concepts led to two very different systems, and two very different conceptions of how electric propulsion would be used in the early days of spaceflight.

In the US, the most vigorous early proponent of electric propulsion was Ernst Stuhlinger, who was the project manager for many of the earliest electric propulsion experiments. He was inspired by the work of Oberth, and encouraged by von Braun to pursue this area, especially now that being able to get into space to test and utilize this type of propulsion was soon to be at hand. His leadership and designs had a lasting impact on the US electric propulsion program, and can still be seen today.

SERT-I thruster, image courtesy NASA

The first spacecraft to be propelled using electric propulsion was SERT-I, a follow-on to a suborbital test (Program 661A, Test A, the first of three suborbital tests for the USAF) of the ion drives that would be used. These drive systems used cesium and mercury as propellants, rather than the inert gases commonly used today, because these metals have very low ionization energies and reasonably favorable masses for providing more significant thrust. Tungsten buttons were used in place of the grids of modern ion drives, and a tantalum wire was used to neutralize the ion stream. Unfortunately, the cesium engine short-circuited, but the mercury system was tested for 31 minutes and 53 engine cycles. This demonstrated not only ion propulsion in principle but, just as importantly, ion beam neutralization. Neutralization matters for most electric propulsion systems because it prevents the spacecraft from becoming negatively charged, and possibly even attracting the ion stream back to itself, robbing it of thrust and contaminating onboard sensors (a common problem in early electric propulsion systems).

The SNAPSHOT program, which launched the SNAP 10A nuclear reactor on April 3, 1965, also had a cesium ion engine as a secondary experimental payload. The failure of the electrical bus prevented this from being operated, but SNAPSHOT could be considered the first nuclear electric spacecraft in history (if unsuccessful).

ATS (either 4 or 5), image courtesy NASA

The ATS program continued to develop the cesium thrusters from 1968 through 1970. ATS-4 was the first demonstration of electric propulsion on an orbital spacecraft, but sadly there were problems with beam neutralization in the drive systems, indicating more work needed to be done. ATS-5 was a geostationary satellite meant to use electrically powered stationkeeping, but it could not be despun after launch, so the thruster couldn't be used for propulsion (the emission chamber was flooded with un-ionized propellant), although it served as a neutral plasma source for experimentation. ATS-6 carried a similar design, and one of its thrusters successfully operated for a total of over 90 hours (the other failed early due to a similar emission chamber flooding issue). The SERT-II and SCATHA satellites continued to demonstrate improvements as well, using both cesium and mercury ion devices (SCATHA's wasn't optimized as a drive system, but used similar components to test spacecraft charge neutralization techniques).

Despite these tests, the 1960s efforts didn't lead to an operational satellite using ion propulsion for another thirty years. Problems with thruster saturation, spacecraft contamination from the highly reactive cesium and mercury propellants, and relatively short engine lifetimes (due to erosion of the screens used in this type of ion thruster) didn't offer mission planners much promise. The high (2,000+ s) specific impulse was very attractive for interplanetary spacecraft, but the low reliability and short lifetimes of these early ion drives made them of marginal use. Ground testing of various concepts continued in the US, but additional flight missions were rare until the end of the 1990s. This likely helped feed the idea that electric propulsion is new and futuristic, rather than having conceptual roots reaching all the way back to the dawn of the age of flight.

Early Electric Propulsion in the USSR

Unlike in the US, the USSR started development of electric propulsion early, and continued its development almost continuously to the modern day. Sergei Korolev’s OKB-1 was tasked, from the beginning of the space race, with developing a wide range of technologies, including nuclear powered spacecraft and the development of electric propulsion.

Early sketch of a Hall effect (TAL) thruster in USSR, image from Kim et al

Part of this may be due to the different architecture Soviet engineers pursued: rather than accelerating ions toward a pair of charged grids, Soviet designs used a stream of ionized gas with a magnetic field perpendicular to the current to accelerate the ions. This is the Hall effect thruster, which has several advantages over the gridded ion thruster, including simplicity, fewer erosion problems, and higher thrust (admittedly, at the cost of specific impulse). Other designs were also experimented with, including the pulsed plasma thruster, or PPT (the ZOND-2 spacecraft carried a PPT system). However, thanks to rapidly growing Soviet mastery of plasma physics, the Hall effect thruster became a very attractive system.

There are two main types of Hall thruster that were experimented with: the stationary plasma thruster (SPT) and the thruster with anode layer (TAL), which refer to how the electric charge is produced, the behavior of the plasma, and the path that the current follows through the thruster. The TAL was developed in 1957 by Askold Zharinov, and proven in 1958-1961, but a prototype wasn’t built until 1967 (using cesium, bismuth, cadmium, and xenon propellants, with isp of up to 8000 s), and it wasn’t published in open literature until 1973. This thruster can be characterized by a narrow acceleration zone, meaning it can be more compact.

E1 SPT-type Hall thruster, image via Kim et al

The SPT, on the other hand, can be larger, and is the most common form of Hall thruster used today. Complications in the plasma dynamics of this system meant that it took longer to develop, but its greater electrical efficiency and thrust make it a more attractive choice for station-keeping. Research began in 1962 under Alexey Morozov at the Institute of Atomic Energy, later moving to the Moscow Aviation Institute, and then again to what became known as EDB Fakel (now Fakel Industries, still a major producer of Hall thrusters). The first breadboard thruster was built in 1968, and flew in 1970; it was then used for attitude control on the Meteor series of weather satellites. Development has continued on the design to the present day, but these thrusters weren't widely adopted elsewhere, despite their higher thrust and lack of spacecraft contamination (unlike American designs of similar vintage).

It would be a mistake to think that only the US and the USSR were working on these concepts, though. Germany also had a diversity of programs. Arcjet thrusters, as well as magnetoplasmadynamic thrusters, were researched by the predecessors of the DLR. This work was inherited by the University of Stuttgart Institute for Space Systems, which remains a major research institution for electric propulsion in many forms. France, on the other hand, focused on the Hall effect thruster, which provides lower specific impulse, but more thrust. The Japanese program tended to focus on microwave frequency ion thrusters, which later provided the main means of propulsion for the Hayabusa sample return mission (more on that below).

The Birth of Modern Electric Propulsion

DS1 Mission Patch, Image courtesy JPL

For many people, electric propulsion was an unknown until 1998, when NASA launched the Deep Space 1 mission. DS1 was a technology demonstration mission, part of the New Millennium program of advanced technology testing and experimentation. A wide array of technologies was to be tested in space after extensive ground testing; but, for the purposes of Beyond NERVA, the most important of these new concepts was the first operational ion drive, the NASA Solar Technology Applications Readiness thruster (NSTAR). As is typical of many modern NASA programs, DS1 far exceeded its minimum requirements. Originally meant to do a flyby of the asteroid 9969 Braille, the mission was extended twice: first for a transit to the comet 19P/Borrelly, and later to extend engineering testing of the spacecraft.

NSTAR thruster, image courtesy NASA

In many ways, NSTAR was a departure from most of the flight-tested American electric propulsion designs. The biggest difference was the propellant used: cesium and mercury are easy to ionize, but problems with neutralizing the propellant stream, the resulting contamination of the spacecraft and its sensors, chemical reaction complications, and a growing conservatism about toxic components in spacecraft all led to the decision to use noble gases instead – in this case, xenon. This doesn’t mean that NSTAR was a great departure from the gridded ion drives of earlier US development; it was an evolution, not a revolution, in propulsion technology. Despite an early failure of the NSTAR thruster (4.5 hours into operation), it was able to be restarted, and the overall thruster life was 8,200 hours, with the backup achieving more than 500 hours beyond that.

Not only that, but this was not the only use of this thruster. The Dawn mission to the minor planet Ceres also uses NSTAR thrusters, and is still in operation around that body, sending back incredibly detailed and fascinating information about the water and hydrocarbon content of bodies in the asteroid belt, among many other discoveries that will matter when humanity begins to mine the asteroid belt.

Many satellites, especially geostationary satellites, use electric propulsion today, for stationkeeping and even for final orbital insertion. The low thrust of these systems is not a major detriment, since they can be used over long periods of time to ensure a stable orbital path; and the small amount of propellant required allows for larger payloads or longer mission lifetimes with the same mass of propellant.

After decades of being considered impractical, immature, or unreliable, electric propulsion has come in from the cold. Many designs for interplanetary spacecraft now use electric propulsion, taking advantage of the high-specific-impulse, low-thrust propulsion regime that these thruster systems excel at.

Electrothermal arcjet, image courtesy Georgia Tech

Another type of electric thruster is also becoming popular with small-sat users: the electrothermal thruster, which offers higher thrust from chemically inert propellants in a compact package, at the cost of specific impulse. These thrusters offer many of the benefits of high-thrust chemical propulsion in a more compact and chemically inert form – a major requirement for most smallsats, which fly as secondary payloads and have to demonstrate that they won’t threaten the primary payload.

So, now that we’ve looked into how we’ve gotten to this point, let’s see what the different possibilities are, and what is used today.

What are the Options?

Ion drive schematic, image courtesy NASA

The most well-known and popularized version of electric propulsion is electrostatic propulsion, which uses an ionization chamber (or ionic fluid) to develop a positively charged stream of ions, which are then accelerated out the “nozzle” of the thruster. A stream of electrons is added to the propellant as it leaves the spacecraft, to prevent the buildup of a negative charge. There are many different variations of this concept, including the best known types of thrusters (the Hall effect and gridded ion thrusters), as well as field effect thrusters and electrospray thrusters.

MPD Thruster concept, image courtesy NASA

The next most common version – and one with a large amount of popular mentions these days – is the electromagnetic thruster. Here, the propellant is converted to a relatively dense plasma, and usually (but not always) magnets are used to accelerate this plasma at high speed out of a magnetic nozzle using the electromagnetic and thermal properties of plasma physics. In the cases that the plasma isn’t accelerated using magnetic fields directly, magnetic nozzles and other plasma shaping functions are used to constrict or expand the plasma flow. There are many different versions, from magnetohydrodynamic thrusters (MHD, where a charge isn’t transferred into the plasma from the magnetic field), to the less-well-known magnetoplasmadynamic (MPD, where the Lorentz force is used to at least partially accelerate the plasma), electrodeless plasma, and pulsed inductive thruster (PIT).

Electrothermal arcjet, image courtesy Georgia Tech

Thirdly, we have electrothermal drive systems: basically, highly advanced electric heaters used to heat a propellant. These tend to be the less energy-efficient, but higher-thrust, systems (although, theoretically, some versions of electromagnetic thrusters can achieve high thrust as well). The most common types of electrothermal systems proposed have been arcjet, resistojet, and inductive heating drives; the first two were actually popular choices for reaction control systems on large, nuclear-powered space station designs. Inductive heating has already made a number of appearances on this page, both in testing apparatus (CFEET and NTREES are both inductively heated), and as part of a bimodal NTR (the nuclear thermal electric rocket, or NTER, covered on our NTR page).

VASIMR operating principles diagram, image courtesy Ad Astra

The last two categories, electromagnetic and electrothermal, often use similar mechanisms of operation when you look at the details, and the line between the two isn’t always clear. Consider the pulsed plasma thruster (PPT), which most commonly uses a solid propellant such as PTFE (Teflon) that is vaporized, and occasionally ionized, electrically before being accelerated out of the spacecraft: some authors describe it as an MHD thruster, others as an arcjet, and which term best applies depends on the particulars of the system in question. A more famous example of this gray area is the VASIMR thruster (VAriable Specific Impulse Magnetoplasma Rocket). This system uses a dense plasma, contained in a magnetic field, which is inductively heated using RF energy and then accelerated by the thermal behavior of the plasma while being magnetically contained. Because of this, the system can be seen as either an MHD thruster or an electrothermal thruster (that debate, and the way these terms are used, was one of the more enjoyable parts of the editing process of this blog post, and I’m sure one that will continue as we continue to examine EP).

Finally, we come to the photon drives. These use photons as the reaction mass – and as such are sometimes somewhat jokingly called flashlight drives. They have the lowest thrust of any of these systems, but the exhaust velocity is literally the speed of light, so they have insanely high specific impulse. Just… don’t expect any sort of significant acceleration; getting up to speed with these systems could take decades, if not centuries, making them popular choices for interstellar missions rather than interplanetary ones. Photonic drives have another option as well, though: the power source for the photons doesn’t need to be on board the spacecraft at all! This is the principle behind the lightsail (the best-known version being the solar sail): a fixed installation produces a laser or other stream of photons (such as the microwave maser in the Starwisp concept), which then impacts a reflective surface to provide thrust. This type of system follows a different set of rules and limitations, however, from systems where the power supply (and associated equipment), drive system, and any propellant needed are all on board the spacecraft, so we won’t go too much into depth on that concept initially, instead focusing on designs that have everything on board.
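
To put “don’t expect significant acceleration” in numbers: the thrust of a photon drive is just beam power divided by the speed of light (doubled for a perfectly reflective sail, since the photons bounce back). A quick sketch in Python – the 1 MW power level is an arbitrary illustrative assumption, not any particular design:

```python
# Photon drive thrust: each watt of emitted light carries momentum flux P/c.
# A perfectly reflective lightsail gets twice that, since the photon reverses direction.

C = 299_792_458.0  # speed of light, m/s

def photon_thrust(power_w: float, reflective: bool = False) -> float:
    """Thrust in newtons from an onboard photon emitter (or a perfect reflecting sail)."""
    return (2 if reflective else 1) * power_w / C

# A full megawatt of onboard beam power yields only a few millinewtons:
print(photon_thrust(1e6))        # ~0.0033 N
# The same beam bounced off a perfect lightsail gives twice that:
print(photon_thrust(1e6, True))  # ~0.0067 N
```

This is why photon drives only make sense for missions where burn times of years to decades are acceptable.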

Each of these systems has its advantages and disadvantages. Electrostatic thrusters are very simple to build: ionization chambers are easy, and creating a charged field is easy as well; but something has to generate that charge, and whatever that something is will be hit by the ionized particles used as propellant, causing erosion. Plasmadynamic thrusters can provide incredible flexibility, but generally require large power plants; reducing their power requirements demands superconducting magnets and other materials challenges. In addition, plasma physics, while increasingly well understood, provides its own unique set of challenges. Electrothermal thrusters are simple, but generally provide poor specific impulse, and thermal cycling of the components causes wear. Finally, photon drives are incredibly efficient but very, very low-thrust systems, requiring exceedingly long burn times to produce any noticeable velocity change. Let’s look at each of these options in a bit more detail, and at the practical limitations of each system.

Optimizing the System: The Fiddly Bits

As we’ve seen, there’s a huge array of technologies that fall under the umbrella of “electric propulsion,” each with its advantages and disadvantages. The mission to be performed will determine which types of thrusters are feasible, depending on a number of factors. If the mission is stationkeeping for a geosynchronous communications satellite, the Hall thruster offers a wonderful balance between thrust and specific impulse. If the mission is a sample return from an asteroid, the lower-thrust, higher-specific-impulse gridded ion thruster is better, because the longer mission time and greater overall delta-v requirement favor a low-thrust, high-efficiency system. If the mission is stationkeeping on a small satellite flying as a piggyback load, the arcjet may be the best option, due to its compactness, the chemically inert nature of its propellant, and its relatively high thrust. If higher thrust is needed over a longer period for a larger spacecraft, MPD may be the best bet. Very few systems are designed to handle a wide range of missions in spaceflight, and electric propulsion is no different.

There are other key concepts to consider in the selection of an electric propulsion system as well. The first is the efficiency of the system: how much electricity the thruster requires, compared to how much kinetic energy is imparted to the spacecraft via the propellant. This efficiency varies between specific designs, and its improvement is a major goal in every thruster’s development process. The quality of electrical power needed is also an important consideration: some thrusters require direct current, some require alternating current, some require RF or microwave power inputs. Matching the electricity produced to the thruster itself is a necessary step, which can occasionally make one thruster more attractive than another by reducing the overall mass of the system. Another key question is the total change in velocity needed for the mission, and the timeframe over which this delta-v can be applied; the longer the timeframe, the more efficient your thruster can be at lower thrust (trading thrust for specific impulse).
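
That thrust-for-specific-impulse trade falls straight out of the jet power relation: the kinetic power in the exhaust is half the thrust times the exhaust velocity, so at fixed input power and efficiency, doubling exhaust velocity halves thrust. A quick sketch – the power level and efficiency here are illustrative assumptions, not figures for any real thruster:

```python
# At fixed input power, thrust and exhaust velocity trade off directly:
# jet power = eta * P_elec = 0.5 * thrust * v_e, so thrust = 2 * eta * P_elec / v_e.

G0 = 9.80665  # standard gravity, m/s^2, converts Isp (s) to exhaust velocity (m/s)

def thrust_at_power(p_elec_w: float, isp_s: float, eta: float) -> float:
    """Thrust (N) from a thruster of efficiency eta, running at the given Isp."""
    v_e = isp_s * G0
    return 2.0 * eta * p_elec_w / v_e

# Assume 5 kW of electrical power at 60% efficiency:
print(thrust_at_power(5e3, 1600, 0.6))  # Hall-thruster-like Isp: ~0.38 N
print(thrust_at_power(5e3, 3100, 0.6))  # gridded-ion-like Isp: ~0.20 N
```

The gridded ion thruster gets roughly half the thrust of the Hall thruster for the same power, but uses about half the propellant for the same impulse – exactly the trade described above.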

However, looking past the drive itself, there are quite a few things about the spacecraft and its power supply that also have to be considered. The first is the power supply available to the drive system. If you’ve got an incredibly efficient drive system that requires a megawatt to run, you’re going to be severely limited in your power supply options (there are very few, if any, drive systems that require this much power). For more realistic systems, the mass of the power supply, and therefore of the spacecraft, has a direct impact on the amount of delta-v that can be applied over a given time: if you want your spacecraft to be able to, say, maneuver out of the way of a piece of space debris, or a mission to another planet needs to arrive within a given timeframe, then the less mass per unit of power, the better. Power per unit mass is known in engineering as specific power, and this is an area where nuclear power can offer real benefits. While it’s debatable whether solar or nuclear power offers better specific power for low-powered applications, once higher power levels are needed, nuclear shines: it can be difficult (but far from impossible) to scale nuclear down in size and power output, but it scales up very easily and efficiently, and this scaling is non-linear. A small reactor and one with three times its output can be very similar in core size, and the power conversion systems used often have similar scaling advantages. There are additional advantages as well: radiators are generally smaller in surface area, and harder to damage, than photovoltaic arrays, and can often be repaired more easily (once a PV cell gets hit with space debris it needs to be replaced, but a radiator tube designed to be repaired can in many cases just be patched or welded and continue functioning).
A related concept is power density, or power per unit volume, which also has a significant impact on the capabilities of many (especially large) spacecraft. The volume of the power supply is a limiting factor when it comes to launching the vehicle itself, since it has to fit into the payload fairing of the launch vehicle (or the satellite bus of the satellite that will use it).
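
As a rough illustration of why specific power matters so much, here’s a back-of-the-envelope comparison. The kg/kW figures and spacecraft mass are assumptions for illustration only, not measured values for any real system:

```python
# Specific power is often quoted inverted, as alpha (kg of power system per kW
# of electrical output). The power supply's mass is then just alpha * power.

def power_supply_mass(power_kw: float, alpha_kg_per_kw: float) -> float:
    """Mass (kg) of a power supply with the given specific mass alpha."""
    return power_kw * alpha_kg_per_kw

# A 100 kWe supply at an assumed 30 kg/kW versus an assumed 10 kg/kW:
heavy = power_supply_mass(100, 30)  # 3000 kg
light = power_supply_mass(100, 10)  # 1000 kg
# On a hypothetical 6000 kg spacecraft, the lighter supply frees up a third of
# the total mass for payload, propellant, and structure:
print(heavy - light)  # 2000 kg
```

This is the mechanism behind the payload-mass-fraction argument: every kilogram saved in the power and propulsion unit is a kilogram available for the actual mission.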

The specific power, on the other hand, has quite a few different implications, most importantly in the available payload mass fraction of the spacecraft itself. Without a payload, of whatever type is needed, either scientific missions or crew life support and habitation modules, then there’s no point to the mission, and the specific power of the entire power and propulsion unit has a large impact on the amount of mass that is able to be brought on the mission.

Another factor to consider when designing an electrically propelled spacecraft is how the capabilities and limitations of the entire power and propulsion unit interact with the spacecraft itself. Just as in chemical and thermal rockets, the ratio of wet (or fueled) to dry (unfueled) mass has a direct impact on the vehicle’s capabilities: Tsiolkovsky’s rocket equation still applies, and in long missions there can be a significant mass of propellant on-board, despite the high isp of most of these thrusters. The specific mass of the power and propulsion system will have a huge impact on this, so the more power-dense, and more mass-efficient you are when converting your electricity into useful power for your thruster, the more capable the spacecraft will be.
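
Since Tsiolkovsky’s rocket equation still applies, it’s easy to see why high specific impulse transforms the wet-to-dry mass ratio. A quick sketch comparing a chemical-class and an ion-class Isp – the 10 km/s delta-v and 1000 kg dry mass are round illustrative numbers, not any real mission:

```python
import math

# Tsiolkovsky, solved for propellant load:
# m_prop = m_dry * (exp(delta_v / v_e) - 1), with v_e = Isp * g0.

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(m_dry_kg: float, delta_v_ms: float, isp_s: float) -> float:
    """Propellant (kg) a vehicle of the given dry mass needs for a given delta-v."""
    return m_dry_kg * (math.exp(delta_v_ms / (isp_s * G0)) - 1.0)

# A 1000 kg dry spacecraft needing 10 km/s of delta-v:
print(propellant_mass(1000, 10_000, 450))   # chemical-class Isp (~450 s): ~8600 kg
print(propellant_mass(1000, 10_000, 3000))  # ion-class Isp (~3000 s): ~405 kg
```

An order of magnitude less propellant for the same mission – which is why, despite the propellant savings, the specific mass of the power system becomes the dominant design driver for electric spacecraft.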

Finally, the overall energy budget for the mission needs to be accounted for: how much change in velocity, or delta-v, is needed for the mission, and over what time period this change in velocity can be applied, are perhaps the biggest factors in selecting one type of thruster over another. We’ve already discussed the relative advantages and disadvantages of many of the different types of thrusters earlier, so we won’t examine it in detail again, but this consideration needs to be taken into account for any designed spacecraft.

With each of these factors applied appropriately, it’s possible to create a mathematical description of the spacecraft’s capabilities, and match it to a given mission profile, or (as is more common) to go the other way and derive a spacecraft’s basic design parameters from a specific mission. After all, a spacecraft designed to deliver 100 kg of science payload to Jupiter in two years is going to look very different from one designed to carry 100 kg to the Moon in two weeks, due to the huge differences in mission profile. The math itself isn’t that difficult, but for now we’ll stick with the general concepts rather than going into the numbers (there are a number of dimensionless variables in the equations, which many people find confusing).

Let’s look instead at some of the more important parts of the power and propulsion unit that are tied more directly to the drives themselves.

Just as in any electrical system, you can’t simply hook wires up to a battery, solar panel, or power conversion system and feed them into the thruster: the electricity needs to be conditioned first. This ensures the correct type of current (alternating or direct), the correct voltage, the correct amperage… all the things that are done multiple times across any terrestrial power grid have to be done on board the spacecraft as well, and this is one of the biggest factors in deciding which specific drive is placed on a particular satellite.

After the electricity is generated, it goes through a number of control systems to first ensure protection for the spacecraft from things like power surges and inappropriate routing, and then goes to a system to actually distribute the power, not just to the thruster, but to the rest of the on-board electrical systems. Each of these requires different levels of power, and as such there’s a complex series of systems to distribute and manage this power. If electric storage is used, for instance for a solar powered satellite, this is also where that energy is tapped off and used to charge the batteries (with the appropriate voltage and battery charge management capability).

After the electricity needed for other systems has been routed away, the remainder is directed into a system that ensures the correct amount and type (AC, DC, necessary voltage, etc.) of electricity is delivered to the thruster. These power conditioning units, or PCUs, are some of the most complex systems in an electric propulsion system, and have to be highly reliable. Power fluctuations will affect the functioning of a thruster (possibly even forcing it to shut down if the current falls too low), and in extreme cases can even damage it, so this is a key function these systems must provide. Because of this, some thruster manufacturers don’t design the PCU in-house, instead selling the thruster alone; the customer must then contract out or design the PCU independently (although obviously with the supplier’s support).

Finally, the thermal load on the thruster itself needs to be managed. In many cases, small enough thermal loads on the thruster mean that radiation, or thermal convection through the propellant stream, is sufficient for managing this, but for high-powered systems, an additional waste heat removal system may be necessary. If this is the case, then it’s an additional system that needs to be designed and integrated into the system, and the amount of heat generated will play a major factor in the types of heat rejection used.
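
For a rough feel for what heat rejection costs, a simple Stefan-Boltzmann estimate gives the radiating area needed for a given waste heat load. The temperature, emissivity, and heat load below are illustrative assumptions (and the estimate ignores any heat absorbed from the sun or nearby bodies):

```python
# Waste heat leaves a spacecraft almost entirely by radiation:
# Q = emissivity * sigma * A * T^4, solved here for the required area A.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(heat_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """One-sided radiating area (m^2) needed to reject heat_w at temp_k."""
    return heat_w / (emissivity * SIGMA * temp_k**4)

# Rejecting 10 kW of thruster waste heat from a radiator running at 400 K:
print(radiator_area(10_000, 400))  # ~7.7 m^2
```

The strong T^4 dependence is why high-temperature radiators are so attractive: running the same radiator hotter shrinks it dramatically.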

There’s a lot more than just these factors to consider when integrating an electric propulsion system into a spacecraft, but it gets fairly esoteric fairly quickly, and the best way to go deeper is to work through the relevant mathematics. Up until this point, I’ve managed to avoid using the equations behind these concepts, because for many people it’s easier to grasp the concepts without the numbers. This will change in the future (as part of the web pages associated with these blog posts), but for now I’m going to continue to try to leave the math out of the posts themselves.

Conclusions, and Upcoming Posts

As we’ve seen, electric propulsion is a huge area of research and design, and one that extends all the way back to the dawn of rocketry. Despite a slow start, research has continued more or less continuously across the world in a wide range of different types of electric propulsion.

We also saw that the term “electric propulsion” is very vague, with a huge range of capabilities and limitations for each system. I was hoping to do a brief look at each type of electric propulsion in this post (but longer than a paragraph or two each), but sadly I discovered that just covering the general concepts, history, and integration of electric propulsion was already a longer-than-average blog post. So, instead, we got a brief glimpse into the most general basics of electrothermal, electrostatic, magnetoplasmadynamic, and photonic thrusters, with a lot more to come in the coming posts.

Finally, we looked at the challenges of integrating an electric propulsion system into a spacecraft, and some of the implications for the very wide range of capabilities and limitations that this drive concept offers. This is an area that will be expanded a lot as well, since we barely scratched the surface. We also briefly looked at the other electrical systems that a spacecraft has in between the power conversion system and the thruster itself, and some of the challenges associated with using electricity as your main propulsion system.

Our next post will look at two similar in concept, but different in mechanics, designs for electric propulsion: electrothermal and magnetoplasmadynamic thrusters. I’ve already written most of the electrothermal side, and have a good friend who’s far better than I at MPD, so hopefully that one will be coming soon.

The post after that will focus on electrostatic thrusters. Because these are some of the most widely used, and also some of the most diverse in their mechanisms, this may end up being its own post; but at this point I’m planning on also covering photon drive systems (mostly on-board, but also lightsail-based concepts) in that post as well, to wrap up our discussion on the particulars of electric propulsion.

Once we’ve finished our look at the different drive systems, we’ll look at how these systems don’t have to be standalone concepts. Many designs for crewed spacecraft integrate both thermal and electric nuclear propulsion into a single propulsion stage: bimodal nuclear thermal rockets. We’ll examine two different design concepts, one American (the Copernicus-B) and one Russian (the TEM stage), in that post, and look at the relative advantages and disadvantages of each concept.

I would like to acknowledge the huge amount of help that Roland Antonius Gabrielli of the University of Stuttgart Institute for Space Systems has been in this post, and in the ones to follow. His knowledge of these topics has made this a far better post than it would have been without his invaluable input.

As ever, I hope you’ve enjoyed the post. Feel free to leave a comment below, and join our Facebook group to join in the discussion!

References:

History

A Critical History of Electric Propulsion: The First Fifty Years, Choueiri Princeton 2004 http://mae.princeton.edu/sites/default/files/ChoueiriHistJPC04.pdf

A Method and Means of Producing Jets of Electrified Gas, US Patent 1363037A, Goddard 1917 https://patents.google.com/patent/US1363037A/en

A Synopsis of Ion Propulsion Development Projects in the United States: SERT I to Deep Space 1, Sovey et al, NASA Glenn Research Center 1999 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19990116725.pdf

History of the Hall Thruster’s Development in the USSR, Kim et al 2007 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2007index/IEPC-2007-142.pdf

NSTAR Technology Validation, Brophy et al 2000 https://trs.jpl.nasa.gov/handle/2014/13884

Review Papers for Electric Propulsion

Electric Propulsion: Which One for my Spacecraft? Jordan 2000 http://www.stsci.edu/~jordan/other/electric_propulsion_3.pdf

Electric Propulsion, Jahn and Choueiri, Princeton University 2002 https://alfven.princeton.edu/publications/ep-encyclopedia-2001

Spacecraft Optimization

Joint Optimization of the Trajectory and the Main Parameters of an Electric Propulsion Spacecraft, Petukhov et al 2017 https://reader.elsevier.com/reader/sd/D49CFC08B1988AA61C8107737D614C89A86DB8DAE56D09D3E8E60C552C9566ABCBB8497CF9D0CDCFB9773815820C7678

Power Sources and Systems of Satellites and Deep Space Probes (slideshow), Farkas ESA http://www.ujfi.fei.stuba.sk/esa/slidy_prezentacie/power_sources_and_systems_of_satellites_and_deep_space_probes_mk_2.pdf


Categories
Fission Power Systems Nuclear Thermal Systems

Carbides: Nuclear Thermal Fuels of the Past and Future

Hello, and welcome to Beyond NERVA!

Today, we’re looking at a different kind of fuel element than the ones we’ve been examining so far on this blog, one that promises higher operation temperatures and therefore more efficient NTRs: carbide fuel elements. We’ll also look at a few different options for NTR designs using carbide fuels: the first one being from Russia (and the only NTR to be tested outside the US), the RD-0410/0411 architecture (two different sizes of a very similar reactor type); the second is the grooved ring tricarbide NTR (a modern US design involving a unique fuel element geometry); and, finally, the SULEU reactor (Superior Use of Low Enriched Uranium, another modern US design with many unique reactor architecture and safety features).

NaCl cubic structure, which is very similar to the structure of UC. Image via Wikipedia

So, to begin, what are carbides? Carbides are solid solutions of carbon and at least one other, less electronegative element. These materials are known for very high melting points, and are often used in high-speed tooling. Tungsten carbide, for instance, is used in drill bits for both wood and metal, in blades, and in other tools.

In the NERVA reactors, niobium carbide and zirconium carbide were used as fuel element cladding, to prevent the fuel elements from being aggressively eroded by the hot hydrogen propellant. By the time of the XE-Prime test, the fuel particles suspended in the graphite matrix of the fuel element were uranium carbide, individually coated with zirconium carbide.

These are monocarbide compositions, though. There are other options: tricarbides (with three metallic components, leading to a different lattice structure, as well as different mechanical and thermal properties) and carbonitrides (composite materials containing both carbide and nitride structures; nitrides are a similar concept to carbides, but with nitrogen instead of carbon) – a possibility that is apparently of great interest to Russian NTR designers, but more on that later.

Even during Rover, however, the advantages of making the fuel elements themselves out of carbides were known, and research on the fuel elements began as far back as the 1960s in the US. This research included two of the test chambers in the nuclear furnace tests (examined in the Hot Fire Part 2 blog post to a small extent); but these were considered a more advanced follow-on technology, while the graphite fuel elements with encapsulated fuel particles were the ones that were intended to be used for the planned Mars missions.

Carbides have many advantages over other materials. One example is that carbides can be built up with many different processes, most notably chemical vapor deposition (CVD), where a series of chemical precursors is used to deposit the different components of the carbide structure at a temperature far below the melting – or decomposition – point of the carbide. Another advantage is that they tend to be relatively dimensionally stable under high heating, meaning they don’t swell much.

The USSR, on the other hand, decided very early on to commit to using carbide fuel elements for their NTR, and came up with a novel reactor architecture to both take advantage of the high temperatures of the carbide fuel elements, and to deal with the problems that they posed.

One major disadvantage of carbides is that they are prone to cracking… to a rather severe degree. This means that any cladding material needs to be able to handle this cracking. This was seen in the fuel elements of the NF-1 test, where every (U,Zr)C composite fuel element showed a great deal of splitting; this was one of the reasons that this fuel was not considered the best option for early NTRs, until these issues were worked out.

Another disadvantage to carbides is the difficulty in manufacturing a consistent carbide, especially if multiple different types of electronegative components are used. Often there will be clusters of different monocarbides in what is supposed to be a tricarbide solution, meaning that the physical properties (notably, the fissile properties of the fuel itself) vary at different points in the fuel element. This can be made even worse if the fuel element is exposed to the hot hydrogen propellant stream as the H2 strips away the carbon (forming CH4, C2H2, and a number of other hydrocarbons); it also changes the chemical properties of the solution, sometimes allowing droplets of metal to form at well above their melting point, resulting in various other problems.

Oxides: The Familiar Fissile Chemical Composition

Carbides have been used for nuclear fuel elements for a very long time. The fuel pellets in later Rover and NERVA engines were encapsulated carbide beads spread in a graphite matrix. This allowed the fissile fuel itself to become hotter before decomposition occurred. To understand the advantages, though, we have to compare them to the other uranium-bearing compound that is more frequently used: uranium oxide.

UO2 fuel pellets. Courtesy of Areva.

In the oxide fuel pellets, oxygen would separate from the uranium, causing metallic crystals to form in the fuel pellet and changing its neutronic and chemical properties. To make matters worse, the freed oxygen could then migrate outside the pyrocarbon or ZrC coating, causing chemical reactions in the surrounding graphite. All of this can occur below the 2,865 C (3,138 K) melting point of UO2. These changes alter the neutronic behavior by different amounts at different locations within the reactor, causing control issues for the operators, and requiring more design work from the engineers to ensure the reactor can deal with these problems.

Another problem with UO2 is that it has very poor thermal conductivity. Temperature gradients of more than a thousand degrees C are seen in terrestrial UO2 fuel pellets roughly the thickness of a pencil. There are many ways around this, the latest being the use of CERMET fuels, which surround very small pellets of UO2 with refractory metals that are much better thermal conductors; but these metals themselves also limit the temperature the fuel element can operate at (in the new reactor designs that use beryllium for its moderation properties, the relatively low melting point of Be, 1,287 C, determines the maximum specific impulse of the rocket).
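
That “more than a thousand degrees” figure can be sanity-checked with the standard conduction result for a cylindrical pellet with uniform heat generation: the centerline-to-surface temperature rise equals the linear power divided by 4πk, and is independent of pellet radius. A quick sketch with rough, assumed values (UO2 conductivity and linear power are ballpark figures, not data for a specific reactor):

```python
import math

# Centerline-to-surface temperature rise in a cylindrical fuel pellet with
# uniform volumetric heating: dT = q_linear / (4 * pi * k).

def pellet_delta_t(linear_power_w_per_m: float, k_w_per_m_k: float) -> float:
    """Temperature rise (C or K, same interval) from pellet surface to centerline."""
    return linear_power_w_per_m / (4.0 * math.pi * k_w_per_m_k)

# Hot UO2 conducts heat poorly (roughly 2.5 W/m-K). At an assumed linear
# power rating of 30 kW/m, typical of power reactors:
print(pellet_delta_t(30_000, 2.5))  # ~950 C across the pellet
```

Note that the radius cancels out of the result: making the pellet thinner doesn’t help at a fixed linear rating, which is why CERMET and carbide fuels attack the conductivity itself.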

The Advantages to Carbide Fuel Elements

(Chemistry warning! I’ll keep it as light as possible, but…)

Neutron absorption spectra of HfC, TaC, NbC and ZrC, ENDF data, image courtesy NASA

Carbides, on the other hand, have some of the highest melting points known to humanity. Tantalum hafnium carbide (Ta4HfC5) has a melting point of 3,942 C, the highest of any known material. Exactly how high the melting point is depends on a number of factors, including which elements are used and the ratio between those elements in the structure of the carbide itself.

Unfortunately, both tantalum and hafnium have fairly high neutron absorption cross sections, so they are not ideal materials for carbide nuclear fuel elements. These are typically made out of some combination of uranium carbide and either niobium carbide, zirconium carbide, or both.

Another advantage of carbide fuel elements is that the actual fissile fuel can be spread more evenly throughout the fuel element, creating a more homogeneous (i.e. consistent) fission power profile across the fuel element. This is an advantage to reactor designers, since the more heterogeneous the reactor, the more of a headache it is for the designer to ensure stable fission behavior in the fuel element. The more consistently the fissile material is spread, the more controllable it is, and the more evenly the power is produced, making the behavior of the reactor more predictable. This has been known since the beginning of nuclear power, and is why later Rover fuel elements were moving away from the coated-pellets-in-graphite style of fuel and toward a composite fuel element, where the uranium carbide fuel was spread in a webwork throughout the graphite matrix of the fuel element.

The Complications of Carbide Fuel Elements

What the actual melting temperature is for a given material is… complicated, though, for a number of reasons.

The first depends on what proportion everything is in, and this is difficult to get consistent. As noted in a recent paper on a unique NTR geometry (which we’ll look at in the next post), getting the perfect stoichiometric ratio (i.e. the ratio between carbon, uranium, and any other elements present) is virtually impossible, so compromises need to be made. Too much carbon, and the melting temperature drops slightly. Too little carbon, and the material doesn’t mix as well, causing areas with lower melting points, higher thermal conductivity, or a number of other undesirable properties.

The second problem is in mixing: a fuel element designer wants to have a material that’s consistent all the way through the fuel element, not discrete little clumps of different materials as one moves through the fuel element. Because of the way that carbide fuel elements are made (DC sintering, a similar process to spark plasma sintering that’s used for CERMET fuel elements), the end result is grains of NbC, ZrC, and UC2 side by side, rather than a mixture (a solid solution, to be precise) of (Nb, Zr, U)C; and each grain has different thermal, neutronic, and chemical properties. It is possible to heat the fuel element, and have the constituents become this ideal solid solution, as was discovered using CFEET for carbide fuel element testing (more on that in the next post as well). This offers hope for more consistent mixing of the elements in the fuel itself, but establishing the correct ratios remains a problem.

Corrosion effects in carbide fuels, Pelaccio et al

There’s one more big problem with carbide fuel elements, though: hydrogen corrosion. Unlike in graphite composite or CERMET fuel elements, the carbon that is stripped away by hot hydrogen is actually chemically bound to the uranium, zirconium, and niobium in the fuel element, not as a material matrix surrounding the chemical components that support fission in the fuel element. This means that if there’s a clad failure, the local ratio of carbon will change, causing free metal to form, either as a pure metal or an alloy, unevenly across the fuel element. This means that hot spots can develop, or parts of the fuel element will melt far below the melting temperature of the carbide the fuel element was originally made of. Flecks or droplets of metal can be eroded into the hot hydrogen stream, potentially causing damage downstream of the fuel element failure. In a worst case scenario, uranium could collect in areas of the reactor that it’s not meant to, creating a power peak in a spot that could be… inconvenient, to say the least.

These are challenges that carbide fuel element designers have always faced, and continue to face today. Careful chemical synthesis will definitely help, but there are limits to this. Heat treating the fuel elements after sintering to ensure a more consistent solid solution is already showing considerable advantages in composition, and in material properties as well. Cladding the fuel element with carefully selected clad materials (often ZrC, which is already a component of the carbide fuel element, with a similar coefficient of thermal expansion and good modulus of elasticity), and ensuring consistent, high quality application of the coating (usually through chemical vapor deposition these days, which has increased in quality and consistency a lot since the days of Project Rover), will eliminate, or at least minimize, the hydrogen erosion effects that plagued graphite fuels.

Another option that I’ve seen mentioned, but have been unable to find much information on, is an idea from Russian papers about their RD-041X engines: carbides and nitrides (which have a similar chemical structure, but with the metallic components bonded to nitrogen rather than carbon) in a solid solution. This leads to a more complex chemical structure, and may allow for less erosion of the carbon from the fuel element. Unfortunately, this literature is hard to find, and when it is available, it hasn’t been translated from Russian. However, according to the most commonly available paper (linked in the references), adding a nitride component to the fuel element may boost the maximum fuel element temperature.

The Other Fuel: Plutonium Carbides

We don’t talk about plutonium much on this blog (yet), but plutonium carbides have been investigated to a certain degree as well. They may not be as attractive as uranium carbide for a number of reasons, but as a potential fuel element, they may show some promise.

Why are they less attractive? First, the fission cross section of Pu is skewed much more toward the fast spectrum than that of U. This means that the more moderated the neutron flux, the more likely it is that when a neutron interacts with a nucleus of 239Pu, it won’t cause fission, but will instead breed isotopes further up the transuranic chain. Many of these are also fissile, but again much more so in the fast spectrum. This means that more and more neutron poisons can build up in your core, requiring more and more reactivity to overcome. It also means that when it’s time to decommission the core, it will be much more radioactive than a similar U-fueled reactor (on average; there are of course a lot of factors that go into this). Finally, it means that the core has to have more fuel in it; and, unlike with uranium, there’s no “Low Enriched Plutonium”: in weapons-grade material, the fraction of 238Pu (used in RTGs) or 240Pu (which is gamma-active, and a headache) is very low. This is convenient if you’re making fuel elements, but a very different regulatory game than LEU, with huge restrictions on who can work with the fuel element materials for development of an NTR.

Second, 239Pu is illegal to use in space, in accordance with international treaty. Now, LEU 235U is also prohibited, but that is more likely to change, since it involves having less concentrated fissile material in space, unlike Pu, which is considered a major nuclear proliferation risk even out in space. The treaty was written to prevent nuclear weapons from sneaking into space through the back door, and Pu has been (in the public’s mind) intimately tied to nuclear weapons development from day one.

Mixed carbide fuels (containing both uranium and plutonium) have been investigated as an alternative to MOX (mixed oxide) fuels for fast breeder reactors, either in the (U, Pu)C or the (U, Pu)2C3 phase. The usual benefits of carbides over oxides apply to this fuel form: higher metal density and better thermal conductivity being the main two. Due to a number of challenges, including the need for a very low-oxygen environment during fabrication, minimal experience fabricating mixed carbide fuels, and the general lack of information on the chemistry of PuC, this is a largely unknown field, but research is being conducted to extend our knowledge of these areas.

At present, these materials are a curiosity, although they could lead to advanced fuels for terrestrial use. Until their chemistry and materials properties are better known, however, it is unlikely we’ll see an NTR powered with mixed carbide fuel.

How are Carbides Used in NTRs?

“Traditional” Carbide-fueled NTRs

Sketch of NF-1 Carbide Fuel Test Cell with Carbide Fuel Cross Section, Finseth 1991

In Rover, carbide fuel elements were researched that had a form factor very similar to the graphite composite fuel elements: hexagonal in cross section, about 33 cm long, and clad in NbC. The main difference was a single large central hole, rather than nineteen small holes. A carbide-fueled NTR was in the early concept design stage, but was never put through the reactor geometry refinement process.

Designs have been proposed over the years using hexagonal prism fuels similar to Rover carbide fuel elements, but none are currently under development, as far as I can see. This doesn’t exclude their use, even with LEU, but NASA and the DOE are currently pursuing other fuel element geometries.

The Other Tradition: Russian NRE

Russia has been in the nuclear thermal rocket business for as long as the United States, but their design philosophy is hugely different from the American one. Just like NASA and the DOE don’t use the term “nuclear thermal rocket” (NTR), instead preferring “nuclear thermal propulsion” (NTP), Roscosmos and Rosatom (who work together to develop the Russian program) use the term “nuclear rocket engine”, or NRE.

The design changes start with the fuel element design, extend through the basic geometry of the reactor and beyond, and have major implications for testing and materials options with this system.

First, let’s look at the fuel elements. One of the considerations for fuel element design is the amount of surface area that can be contacted by the propellant. Heat transfer depends on the surface properties of the fuel element material and on the thermal conductivity and transparency of the propellant; with those factors held equal, the more surface area, the more heat is transferred. Rather than using a fuel prism as American NTP designs have done, with an increasing number of holes through a hexagonal prism, the Russian NRE uses what is commonly known as a “twisted ribbon” design, where a rectangular prism (or any number of other shapes, such as a cluster of rods or square prisms – see the image above for the variations that have been tested) is rotated along its long axis. A cluster of these fuel elements is placed in a tube (known as a calandria, similar to the design used in CANDU reactors, but with different geometry and materials), ending in a nozzle at the end of the bundle.
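As a rough illustration of why the ribbon shape helps, compare the surface-to-volume ratio of a thin rectangular ribbon to a round rod of the same cross-sectional area. The 2 mm × 1 mm dimensions here are hypothetical, chosen only to show the trend, not actual NRE ribbon dimensions:

```python
import math

def ribbon_s_to_v(width_m: float, thickness_m: float) -> float:
    """Surface-to-volume ratio [1/m] of a straight rectangular ribbon,
    per unit length: wetted perimeter / cross-sectional area."""
    return 2 * (width_m + thickness_m) / (width_m * thickness_m)

def rod_s_to_v(area_m2: float) -> float:
    """Surface-to-volume ratio [1/m] of a round rod with the same
    cross-sectional area (this simplifies to 2/r)."""
    radius = math.sqrt(area_m2 / math.pi)
    return 2 / radius

w, t = 2e-3, 1e-3  # hypothetical 2 mm x 1 mm ribbon cross section
print(ribbon_s_to_v(w, t) / rod_s_to_v(w * t))  # ~1.2x more area
```

Twisting the ribbon adds a little more surface on top of this, and, just as importantly, stirs the hydrogen flowing along it, improving heat transfer beyond what the raw area comparison suggests.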


Unlike with the American NTP designs, there isn’t a single fuel element cluster running down the center of the NRE. In fact, there’s NO fuel at the center of the reactor: the Russians don’t use a homogeneous reactor design, either neutronically or thermally. The center of the reactor, rather than containing fuel, contains moderator. Since the fuel elements (and therefore all the sources of heat for the reactor) are spread around the periphery of the reactor core, rather than being evenly distributed through it, a moderator with a much lower melting temperature can be used in the design (both zirconium and lithium hydrides are mentioned as options, neither of which would be able to withstand the temperatures of a homogeneous core NTR). This also means that a bimodal design (known in the Russian program as a “nuclear power and propulsion system,” or NPPS, rather than BNTP as NASA calls it) can integrate the working fluid channels more easily, without a complete redesign of either the fuel element or the header and footer support plates. We’ll cover BNTRs in a later post, including the NPPS, but it’s worth mentioning that this design offers more flexibility than the traditional hexagonal prism NTP fuel elements used in American designs.

Loading of fuel bundles into core, Russian film

Finally, due to the fact that a number of fuel element bundles are spread radially across the reactor, an individual fuel bundle can be tested on its own in a prototypic neutronic and thermal environment, rather than needing to test the entire core in a hot fire test, as is required for the American designs. This testing has been conducted both at the EWG-1 research reactor [with ten consecutive restarts, a total testing time of 4000 s (although how much was at full power, and what sort of transient testing was done, is unknown), at a maximum hydrogen exhaust temperature of 3100 K, achieving a theoretical specific impulse of 925 s and a power density for the system of 10 MWt/L] and at the rocket test stand in Semipalatinsk (although those test results are still classified). The Russians have also done full-scale electric heating tests of NRE designs, settling on two: the RD-0410 (35 kN thrust, for unmanned probes – and possibly for proof-of-concept mission use) and the RD-0411 (~392 kN of thrust, for crewed missions). Statistics for the RD-0410, based on these electrically heated tests, can be seen below:

NRE Performance Specifications, Zhukov et al
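The quoted 925 s at a 3100 K exhaust temperature is consistent with ideal nozzle theory for hot hydrogen. Here is a back-of-the-envelope check; the γ = 1.4 and 1000:1 pressure ratio are my simplifying assumptions (real hot hydrogen has a lower, temperature-dependent γ plus dissociation effects), so treat this as a sanity check rather than a performance prediction:

```python
import math

def ideal_isp(t_chamber_k: float, molar_mass_kg: float = 2.016e-3,
              gamma: float = 1.4, p_exit_over_p_chamber: float = 1e-3,
              g0: float = 9.80665) -> float:
    """Ideal-nozzle specific impulse [s] from isentropic expansion of a
    perfect gas with chamber temperature t_chamber_k."""
    r_specific = 8.314462 / molar_mass_kg  # specific gas constant, J/(kg K)
    v_exhaust = math.sqrt(
        2 * gamma / (gamma - 1) * r_specific * t_chamber_k
        * (1 - p_exit_over_p_chamber ** ((gamma - 1) / gamma))
    )
    return v_exhaust / g0

print(ideal_isp(3100))  # ~895 s, the same ballpark as the quoted 925 s
```

The strong point of hydrogen is visible in the `molar_mass_kg` term: exhaust velocity scales as the square root of temperature divided by molar mass, which is why NTRs chase both hotter fuel elements and the lightest possible propellant.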

Sadly, there isn’t much more information available about the current NRE designs and plans. We’ll come back to its variant, the NPPS, when we look at bimodal designs in the future.

Grooved Ring NTR: Not All American Designs are Hexagonal

Fuel element and element cluster, Taylor et al 2017

This is a new NTR design, built around a (Zr, Nb, U)C fuel element of a very different shape than the traditional hexagonal prism, currently under development at NASA and the University of Tennessee. Just as with the twisted ribbon fuel elements, the fuel element geometry for this NTR has been changed to maximize surface area and allow more heat to be transferred to the propellant. This both maximizes the specific impulse and minimizes the amount of propellant needed for cooling purposes (however, H2 remains the best moderator available, and a minimum amount will always be needed for neutronic reasons, even if not for cooling the fuel elements).

The fuel elements are radially grooved discs of uranium tricarbide, (Nb, Zr, U)C; hafnium and tantalum carbides were also investigated, and eliminated due to their much higher neutron absorption. The hydrogen flows in from the outside of a stack of these fuel elements, separated by beryllium spacers, through the grooves, and then down a central channel.
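The grooves exist to multiply the heat-transfer area over what a smooth disc face would offer: for rectangular grooves, each groove pitch adds two side walls. A toy estimate (the 1 mm pitch and 2 mm depth are hypothetical numbers for illustration, not dimensions from Taylor et al):

```python
def groove_area_multiplier(pitch_m: float, depth_m: float) -> float:
    """Surface-area multiplier of a face cut with rectangular grooves,
    relative to a smooth face: each pitch adds two walls of height depth_m."""
    return (pitch_m + 2 * depth_m) / pitch_m

# Hypothetical 1 mm groove pitch, 2 mm groove depth:
print(groove_area_multiplier(1e-3, 2e-3))  # ~5x the smooth-face area
```

Deeper, more closely spaced grooves buy more area, but at the cost of pressure drop and thinner webs of fuel, which is why the groove geometry was one of the main optimization variables in the study.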

Due to the unique geometry of this fuel element design, much optimization was needed for the groove depth, hydrogen flow rates, uranium density in the fuel element (in the initial design, 95% enriched HEU was used for ease of calculation; however, with additional optimization and research into the stoichiometric ratios of U with the other carbide constituents, the authors believe less than 20% enrichment is possible), and other factors.

CFEET testing of fuel element, Taylor et al

Thermal testing, including hot hydrogen testing using CFEET, has been carried out at Marshall SFC, using vanadium as a surrogate for depleted uranium. The team hopes to continue to refine such factors as manufacturing consistency, improved mixing of the solid solution of the carbide, and other manufacturing issues in carbide fuels, before hopefully moving on to electrically heated carbide tests using depleted uranium (DU) to optimize the carbide chemistry of uranium itself.

This NTR offers the potential for 3000 C exhaust temperatures at 4 psi. Unfortunately, due to the preliminary nature of the work carried out to date (this reactor design is less than a year old, unlike designs that have gone through decades of development of not just the fuel elements themselves but also the engine system), thrust and theoretical specific impulse for this reactor design have not yet been determined.

This novel fuel element form offers promise, though, of a new NTR fuel element geometry that allows for better thermal transfer to the propellant, and the team is performing extensive material fabrication and optimization experiments to further our understanding of tricarbide fuel element performance and manufacture, in addition to developing this new fuel element form factor.

Tricarbide Foam Fuel Elements: You REALLY Want Surface Area? We Got It!

This is a very different carbide fuel form, with novel manufacturing practices yielding a truly unique fuel element.

Most solid core fuel elements are chunks of material, no matter what form they take (and we’ve seen quite a few forms in this post already), with the propellant flowing around or through them: through milled or drilled holes, over the surface of a twisted ribbon, or through grooves cut in a disc. That’s not the case here, however!

65 ppi RVC foam tricarbide, Youchison et al

The team at Sandia National Laboratory, Ultramet, Inc., and the University of Florida have come up with a new take on carbide manufacture, utilizing chemical vapor deposition (CVD, a common method of carbide manufacture) on a matrix that starts life as open-pore polyurethane foam. This foam is then pyrolized (baked… ish) to form a carbonized skeleton of the foam structure. This is then heated, and CVI (chemical vapor infiltration, a variation of CVD) processes are used to impregnate the carbonized skeleton with uranium, zirconium, and niobium; turning the structure’s outer surfaces to (U, Zr, Nb)C carbide (a number of factors affect the depth of the penetration). Then, CVD is used to coat the new carbide structure with ZrC or NbC to clad the more chemically fragile tricarbide, and protect it from the H2 propellant that will flow through the open pores remaining after this carbidization and CVD coating process.

Foam cross section, Youchison et al

This concept has been tested using tantalum as a surrogate for uranium (a common choice for electrically heated testing of carbide fuel elements before depleted uranium is used), at two foam densities, 78% and 85%. This revealed a trade-off: the 78% foam had better thermal transfer properties, but the 85% foam offers more volume for the fissile material, meaning that lower enrichment is possible.

The team members at Sandia made a preliminary MCNP model of an NTR for use with these fuel elements, with a number of unique features. This was a heterogeneous core (meaning uneven fuel distribution), with 60% porosity foam fuel, using yttrium hydride for the moderator (which has to be maintained below 1400 K by circulating hydrogen between it and the fuel), and with a Be reflector. For these initial modeling calculations, 93.5% enriched HEU was used. It was discovered that a 500 MWt NTR was possible using this fuel form, but due to the unoptimized, preliminary nature of this design, values for thrust and specific impulse are still up in the air.

INSPI at the University of Florida will be conducting electrically heated hot hydrogen tests on DU-containing tricarbide fuel foams in the temperature range of 2500-3000 K, as these fuel foams become available, although the timeline for this is unclear. However, research is continuing in this truly novel fuel form, and the possibilities are very promising.

Carbides: Great Promise, with Complications

As we’ve seen in this post, carbide fuel elements offer many advantages for designers of nuclear thermal rockets. Their high melting points allow for higher propellant exhaust temperatures, improving the specific impulse of an NTR. The ability to tune their properties by changing the composition and ratio of the components allows a material designer to optimize the fuel elements for a number of different purposes. Their strength allows for truly novel fuel forms that give an NTR designer a lot more flexibility in design. Finally, their similar coefficients of thermal expansion, and often good modulus of elasticity, make them important materials for use in all NTRs, not just those fueled with fissile-containing carbides.

However, the chemical and materials properties of these substances, manufacturing processes required to consistently produce them, and modes of failure (including the implications for these types of failure in an operating NTR) show that there’s still much work to be done in order to bring carbide fuel elements to the same level of technological maturity currently enjoyed by graphite composite fuel elements.

The promise of carbides, though, makes developing the chemistry of fissile-bearing carbides of all forms, perhaps most especially uranium tricarbides, a worthy goal for the advancement of nuclear power in space. This research has been ongoing for decades, continues worldwide, and is bearing fruit.

References

Uranium Dioxide

Uranium Dioxide Wikipedia page: https://en.wikipedia.org/wiki/Uranium_dioxide

Thermodynamic and Transport Properties of Uranium Dioxide and Related Phases, IAEA 1965 http://www.iaea.org/inis/collection/NCLCollectionStore/_Public/24/071/24071477.pdf

Thermal Conductivity of Uranium Dioxide, IAEA 1966: http://www.iaea.org/inis/collection/NCLCollectionStore/_Public/34/065/34065217.pdf

Uranium Carbide

Nuclear Thermal Propulsion Carbide Fuel Corrosion and Key Issues; Pelaccio et al 1994

https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19950011802.pdf

Evaluation of Novel Refractory Carbide Matrix Fuels for Nuclear Thermal Propulsion; Benensky et al 2018

https://www.researchgate.net/publication/324164284_Evaluation_of_Novel_Refractory_Carbide_Matrix_Fuels_for_Nuclear_Thermal_Propulsion?ev=publicSearchHeader&_sg=c-5LZwXyF_AvDFznQi5AHQdF_KlJYE7p8Qiii6M3H6nNFhlKWQ1oQ8Kh8B40UI13RMZ_7DTLgNp1KgE

Ultra High Specific Impulse Nuclear Thermal Rocket, Part II; Charmeau et al 2009

https://www.osti.gov/servlets/purl/950459

Study of a Tricarbide Grooved Ring Fuel Element for Nuclear Thermal Propulsion; Taylor et al 2017

https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180002033.pdf

Mixed Carbides

Plutonium Tricarbide Isomers: A Theoretical Approach; Molpeceres de Diego, 2015 https://uvadoc.uva.es/bitstream/10324/13556/1/TFM-G413.pdf

Mastery of (U, Pu)C Carbide Fuel: From Raw Materials to Final Characteristics, Christelle Duguay, 2012

https://www.epj-conferences.org/articles/epjconf/pdf/2013/12/epjconf_MINOS2012_01005.pdf

Rover Carbide Fuel Elements

Nuclear Furnace-1 Test Report, LA-5189MS; Kirk et al 1973

https://ntrl.ntis.gov/NTRL/dashboard/searchResults/titleDetail/LA5189MS.xhtml

Performance of (U, Zr)C-Graphite (Composite) and of (U, ZR)C (Carbide) Fuel Elements in the Nuclear Furnace 1 Test Reactor, LA-5398-MS; Lyon 1973

https://www.osti.gov/servlets/purl/4419566

Nuclear Rocket Engine

Russian Nuclear Rocket Engine Design for Mars Exploration Zakirov et al 2007 https://www.researchgate.net/publication/222548572_Russian_Nuclear_Rocket_Engine_Design_for_Mars_Exploration

Tricarbide Grooved Ring NTR

Grooved Fuel Rings For Nuclear Thermal Rocket Engines tech brief; MSFC 2009 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20090008640.pdf

Multiphysics Modeling of a Single Channel in a Nuclear Thermal Propulsion Grooved Ring Fuel Element; Barkett et al 2013 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20130011208.pdf

Study of a Tricarbide Grooved Ring Fuel Element for Nuclear Thermal Propulsion; Taylor et al 2017 Conference paper: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180002033.pdf Presentation Slides: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180002060.pdf

Tricarbide Foam Fuel Element

A Tricarbide Foam Fuel Matrix for Nuclear Thermal Propulsion, SAND-2006-3797C; Youchison et al 2006

https://www.osti.gov/servlets/purl/1266203