
Timber Wind: America’s Return to Nuclear Thermal Rockets

 Hello, and welcome to Beyond NERVA! Today, we’re continuing to look at the pebble bed nuclear thermal rocket (check out the most recent blog post on the origins of the PBR nuclear thermal rocket here)!

Sorry it took so long to get this out… between the huge amount of time it took just to find the few references I was able to get my hands on, the ongoing COVID pandemic, and several IRL challenges, this took me far longer than I wanted – but now it’s here!

Today is special because we’re covering one of the cult classics of astronuclear engineering, Project Timber Wind, part of the Strategic Defense Initiative (better known colloquially as “Star Wars”). This was the first time since Project Rover that the US put significant resources into developing a nuclear thermal rocket (NTR). For a number of reasons, Timber Wind has a legendary status among people familiar with NTRs, but it isn’t well reported on, and a lot of confusion has built up around the project. It’s also interesting in that it was an incredibly (and, according to the US Office of the Inspector General, overly) classified program, which means that there’s still a lot we don’t know about it 30 years later. However, as one of the most requested topics I hear about, I’m looking forward to sharing what I’ve discovered with you… and honestly I’m kinda blown away by this concept.

Timber Wind was an effort to build a second stage for a booster rocket, to replace the second (and sometimes third) stage of anything from an MX ballistic missile to an Atlas or Delta booster. This could be used for a couple of different purposes: it could be used similarly to an advanced upper stage, increasing the payload capacity of the rocket and the range of orbits that the payload could be placed in; alternatively it could be used to accelerate a kinetic kill vehicle (basically a self-guided orbital bullet) to intercept an incoming enemy intercontinental ballistic missile before it deploys its warheads. Both options were explored, with much of the initial funding coming from the second concept, before the kill vehicle concept was dropped and the slightly more traditional upper stage took precedence.

Initially, I planned on covering both Timber Wind and the Space Nuclear Thermal Propulsion program (which it morphed into) in a single post, but the mission requirements, and even architectures, were too different to incorporate into a single blog post. So, this will end up being a two-parter, with this post focusing on the three early mission proposals for the Department of Defense (DOD) and Strategic Defense Initiative Organization (SDIO): a second stage of an ICBM to launch an anti-booster kinetic kill vehicle, an orbital transfer vehicle (basically a fancy, restartable second stage for a booster), and a multi-megawatt orbital nuclear power plant. The next post will cover when the program became more open, testing became more prevalent, and grander plans were laid out – and some key restrictions on operating parameters eliminated the first and third missions on this list.

Ah, Nomenclature, Let’s Deal with That

So, there’s a couple things to get out of the way before we begin.

The first is the name. If you do a Google/Yandex/etc search for “Timber Wind,” you aren’t going to find much compared to “Timberwind,” but from what I’ve seen in official reporting it should be the other way around. The official name of this program is Project Timber Wind (two words), which according to the information I’ve been able to find is not unusual. The anecdotal evidence I have (and if you know more, please feel free to leave a comment below!) is that programs classified Top Secret: Special Access (as this was) had a name assigned by picking two random words via computer, whereas other Top Secret (or Q, or equivalent) programs didn’t necessarily follow this protocol.

However, when I look for information about this program, I constantly see “Timberwind,” not the original “Timber Wind.” I don’t know when this shift happened – it essentially never happened in official documentation, even in the post-cancellation reporting, but public reporting almost always uses the single-word variation. My personal head-canon is that it comes from readers used to digitally written documents transcribing typewritten reports, but that’s all that explanation is – a guess that makes sense to me.

So there’s a disconnect between what most easily accessible sources use (single word) and the official reporting (two words). I’m going to use the original, because the only reason I’ve gotten as far as I have is by being weird about minor details in esoteric reports, so I’m not planning on stopping now (I will tag the single word in the blog, just so people can find this, but that’s as far as I’m going)!

The second is in nuclear reactor geometry definitions.

Reactors with discrete, generally small fuel elements fall into two categories: particle beds and pebble beds. Particles are small, pebbles are big, and where the line falls seems to be fuzzy. In modern contexts, the line seems to fall around the 1 cm diameter mark, although a formal definition has so far eluded me. “Pebble bed” is also the more colloquial term of the two: in common usage a particle bed is a type of pebble bed, but not vice versa.

In this context, the RBR and Timber Wind are both particle bed reactors, and I’ll call them such, but if a source calls the reactor a pebble bed (which many do), I may end up slipping up and using the term.

OK, nomenclature lesson done. Back to the reactor!

Project Timber Wind: Back to the Future

For those in the know, Timber Wind is legendary. This was the first time after Project Rover that the US put its economic and industrial might behind an NTR program. While there had been programs in nuclear electric propulsion (poorly funded, admittedly, and mostly carried through creative accounting in NASA and the DOE), nuclear thermal propulsion had taken a back seat since 1972, when Project Rover’s continued funding was canceled, along with plans for a crewed Mars mission, a crewed base on the Moon, and a whole lot of other dreams that the Apollo generation grew up on.

There was another difference, as well. Timber Wind wasn’t a NASA program. Despite all the conspiracy theories, the assumptions based on the number of astronauts with military service records, and the number of classified government payloads that NASA has handled, it remains a civilian organization, with the goal of peacefully exploring the solar system in an open and transparent manner. The Department of Defense, on the other hand, is a much more secretive organization, and as such many of the design details of this reactor were more highly classified than is typical in astronuclear engineering as they deal with military systems. However, in recent years, many details have become available on this system, which we’ll cover in brief today – and I will be linking not only my direct sources but all the other information I’ve found below.

Also unlike NTR designs since the earliest days of Rover, Timber Wind was meant to act as a rocket stage during booster flight. Most NTR designs are in-space only: the reactor is launched into a stable, “nuclear-safe” (i.e. a long-term stable orbit with minimal collision risk with other satellites and debris) orbit, then after being mated to the spacecraft is brought to criticality and used for in-space orbital transfers, interplanetary trajectories, and the like. (Interesting aside, this program’s successor seems to be the first time that now-common term was used in American literature on the subject.)

Timber Wind was meant to support the Strategic Defense Initiative (SDI), popularly known as Star Wars. Started in 1983, this extensive program was meant to provide a ballistic missile shield, among other things, for the US, and was given a high priority and funding level across a number of programs. One of these programs, the Boost Phase Intercept vehicle, was meant to destroy an intercontinental ballistic missile during the boost phase of its flight using a kinetic impactor which would either be launched from the ground or be pre-deployed in space. A kinetic kill vehicle is basically a set of reaction control thrusters designed to guide a small autonomous spacecraft into its target at high velocity and destroy it. They are typically small, very nimble, and limited only by the sensors and autonomous guidance software available for them.

In order to do this, the NTR would need to function as the second stage of a rocket, meaning that while the engine would be fired only after it had entered the lower reaches of space or the upper reaches of the atmosphere (minimizing the radiation risk from the launch), it would still very much be in a sub-orbital flight path at the time, and would have much higher thrust-to-weight ratio requirements as a result.

The engine that was selected was based on a design by James Powell at Brookhaven National Laboratory (BNL) in the late 1970s. He presented the design to Grumman in 1982, and from there it came to the attention of the Strategic Defense Initiative Organization (SDIO), the organization responsible for all SDI activities.

Haslett 1994

SDIO proceeded to break the program up into three phases:

  • Phase I (November 1987 to September 1989): verify that the pebble bed reactor concept would meet the requirements of the upper stage of the Boost Phase Intercept vehicle, including the Preliminary Design Review of both the stage and the whole vehicle (an MX first stage, with the PBR second stage exceeding Earth escape velocity after being ignited outside the atmosphere)
  • Phase II (September 1989-October 1991 under SDIO, October 1991-January 1994, when it was canceled, under the US Air Force; scheduled completion 1999): Perform all tests to support the ground test of a full PBR NTR system in preparation for a flight test, including fuel testing, final design of the reactor, design and construction of testing facilities, etc. Phase II would have been completed with the successful ground hot fire test of the PBR NTR; however, the program was canceled before the ground test could be conducted.
    • Once the program was transferred to the US Air Force (USAF), the mission envelope expanded from an impactor’s upper stage to a more flexible, on-orbit multi-mission purpose, requiring a design optimization redesign. This is also when NASA became involved in the program.
    • Another change was that the program name shifted from Timber Wind to the Space Nuclear Thermal Propulsion program (SNTP), reflecting both the change in management as well as the change in the mission design requirements.
  • Phase III (never conducted, planned for 2000): Flight test of the SNTP upper stage using an Atlas II launch vehicle to place the NTR into a nuclear-safe orbit. Once on orbit, a number of on-orbit tests would be conducted on the engine, but those were not specified to any degree due to the relatively early cancellation of the program.

While the program offered promise, many factors combined to ensure it would not be completed. First, the hot fire testing facilities required (two were proposed, one at San Tan and one at the Nevada National Security Site) would be incredibly expensive to construct; second, the Space Exploration Initiative was heavily criticized for reasons of cost (a common problem with early 90s programs); and third, the Clinton administration cut many nuclear research programs across all US federal departments in a very short period of time (the Integral Fast Reactor at Argonne National Laboratory was another program cut at about the same time).

The program would be transferred into a combined USAF and NASA program in 1991, and would end in 1994 under those auspices, with many hurdles successfully overcome. It remains an attractive design, one which has become a benchmark for pebble bed nuclear thermal rockets, and a favorite of the astronuclear community for speculating about what would be possible with this incredible engine.

To understand why it was so attractive, we need to go back to the beginning, in the late 1970s at Brookhaven National Laboratory in the aftermath of the Rotating Fluidized Bed Reactor (RBR, covered in our last post here).

The Beginning of Timber Wind

When we last left particle bed NTRs, the Rotating Fluidized Bed Reactor program had made a lot of progress on many of the fundamental challenges with the concept of a particle bed reactor, but still faced many challenges. However, the team, including Dr. James Powell, was still very enthusiastic about the promise it offered – and conscious of the limitations of the system.

Dr. Powell continued to search for funding for a particle bed reactor (PBR) NTR program, and interest in NTR was growing again in both industry and government circles, but there were no major programs and funding was scarce. In 1982, eight years after the conclusion of the RBR, he had a meeting with executives in the Grumman Corporation, where he made a pitch for the PBR NTR concept. They were interested in the promise of higher specific impulse and greater thrust to weight ratios compared to what had become the legacy NERVA architecture, but there wasn’t really a currently funded niche for the project. However, they remained interested enough to start building a team of contractors willing to work on the concept, in case the US government revived its NTR program. The companies included other major aerospace companies (such as Garrett Corp and Aerojet) and nuclear contractors (such as Babcock and Wilcox), as well as subcontractors for many components.

At the same time, they tried to sell the concept of astronuclear PBR designs to potentially interested organizations: a 1985 briefing to the Air Force Space Division on the possibility of using the PBR as a boost phase interceptor was an early but important presentation that would end up forming a major part of the initial Timber Wind architecture, and the next year the Air Force Astronautics Laboratory issued a design study contract for a PBR-based Orbital Transfer Vehicle (OTV, a kind of advanced upper stage for an already-existing booster). While neither of these contracts was big enough to fund a complete development program, they WERE enough money to continue advancing the design of the PBR, which by now was showing two distinct applications: the boost phase interceptor, and the OTV. There was also a brief flirtation with using the PBR architecture from Timber Wind as a nuclear electric power source, which we’ll examine as well, but this was never particularly well focused on or funded, so it remains a footnote in the program.

Reactor Geometry

From Atomic Power in Space, INL 2015

Timber Wind was a static particle bed reactor, in the general form of a cylinder 50 cm long by 50 cm in diameter, using 19 fuel elements to heat the propellant in a folded flow path. Each fuel element was roughly cylindrical, 6.4 cm in diameter, consisting of a cold frit (a perforated cylinder) made of stainless steel and a hot frit made of zirconium carbide (ZrC; rhenium – Re – cladding would also have met the thermal and reactivity requirements) coated carbon-carbon composite, which held a total of 125 kg (15 liters) of 500 micron diameter uranium/zirconium carbide fuel particles, each clad in two layers of different carbon compositions followed by ZrC cladding. These would be held in place through mechanical means, rather than centrifugal force as in the RBR, reducing the mass of the system at the (materially quite significant) cost of developing a hot frit to mechanically contain the fuel. This is something we’ll cover more in depth in the next post.

From Atomic Power in Space, INL 2015

The propellant would then pass into a truncated-cone central void, becoming wider from the spacecraft end to the nozzle end. This is called orificing. An interesting challenge with nuclear reactors is that the distribution of energy generation changes based on location within the reactor, called radial/axial power peaking (something that occurs in individual fuel elements, both in isolation and as a function of their location in the core, and part of why refueling a nuclear reactor is an incredibly complex process). In this case it was dealt with in a number of ways, one of the primary ones being individually changing the orificing of each fuel element to accommodate that element’s power generation and propellant flow rate.

Along these lines, another advantage of this type of core is the ability to precisely control the amount of fissile fuel along the length of the reactor and along the radius of each fuel element. Since the fuel particles are so small, and their manufacturing would be a small-batch process (even fueling a hundred of these reactors would only take 1500 liters of volume, with the fissile component being a small percentage of that), a variety of fuel loading options were inherently available, and adjustments to power distribution were reasonably easy to achieve from reactor to reactor. This homogenizes power distribution in some reactors, and increases local power in other, more specialized reactors (like some types of NTRs); here, an even power distribution along the length of the fuel element was desired. This power leveling is done in virtually every fuel element in every reactor, but it’s a difficult and complex process with large fuel elements due to the need to change how much uranium is in each portion of the fuel element. With a particle bed reactor, on the other hand, the uranium content doesn’t need to vary inside each individual fuel particle: fueled and unfueled particles can be mixed in specific regions of the fuel element to achieve the desired power balance within the element. There was actually a region of unfueled particles in the last centimeter of the particle bed in each fuel element to maximize the efficiency of power deposition into the propellant, and the level of enrichment of the 235U fuel was varied from 70% to 93.5% throughout the fueled portions. This resulted in an incredibly flat power profile, with a ratio of only 1.01:1 from peak power density to average power density.
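To make the particle-mixing idea concrete, here’s a toy sketch (my own illustration, not the actual Timber Wind loading scheme): assume the unflattened axial power shape follows a chopped cosine, then grade the fueled-particle fraction inversely to the local flux so the fission power comes out flat along the element.

```python
import math

# Toy axial power-flattening sketch (illustrative only): assume the
# axial flux follows a chopped cosine, then blend fueled/unfueled
# particles so local fission power (flux x fuel fraction) is flat.

N = 20  # axial zones in one fuel element
flux = [math.cos(math.pi * (i + 0.5 - N / 2) / (N + 4)) for i in range(N)]

def peaking(profile):
    """Peak-to-average power ratio."""
    return max(profile) / (sum(profile) / len(profile))

uniform_fuel = [1.0] * N                             # same loading everywhere
flat_fuel = [min(1.0, min(flux) / f) for f in flux]  # graded fueled fraction

power_uniform = [f * u for f, u in zip(flux, uniform_fuel)]
power_flat = [f * u for f, u in zip(flux, flat_fuel)]

print(f"uniform loading peaking: {peaking(power_uniform):.3f}")
print(f"graded loading peaking:  {peaking(power_flat):.3f}")   # ~1.00
```

The same trick, done with much more care (and with enrichment grading on top of fueled/unfueled mixing), is how the design reached its reported 1.01:1 peaking ratio.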

Since the propellant would pass from the outside of each fuel element to the inside, cooling the reactor was far easier, and lower-mass (or higher-efficiency) choices for things such as the moderator became options. This is a benefit of what’s called a folded-flow propellant path, something that we’ve discussed before in some depth in our post on Dumbo, the first folded flow NTR concept [insert link]. In short, instead of heating the propellant as it passes down the length of the reactor as in Rover, a folded flow injects the cold propellant laterally into the fuel element, heats it in a very short distance, and ejects it through a central void in the fuel element. This has the advantage of keeping the vast majority of the reactor very cool, eliminating many of the thermal structural problems that Rover experienced, at the cost of more complex gasdynamic behavior. It also allowed lighter-weight materials to be used in the construction, such as aluminum structural members and pressure vessel, to further reduce the mass of the reactor.

Interestingly, many of these lower-mass options, such as a lithium-7 hydride (7LiH) moderator, were never explored, since the mass of the reactor came in at only about 0.6 tons – a very small number compared to the 10 ton payload – so it just wasn’t seen as a big enough issue to keep working on at that point.

Finally, because of the short (~1 hr) operating time of the reactor, radiation problems were minimized. With a reactor shielded only by propellant, tankage, and the structures of the NTR itself, it was estimated that the NOTV would subject its payload to a total of 100 Gy of gamma radiation and a neutron fluence of less than 10^14 n/cm^2. Obviously, reducing this for a crewed mission would be necessary, but depending on the robotic mission payload, additional shielding might not be. The residual radioactivity would also be minimal due to the short burn time, although if the reactor were reused this would grow over time.

In 1987, the estimated cost per unit (not including development and testing) was about $4 million – a surprisingly low number, due to the ease of construction, low thermal stresses requiring fewer exotic materials and solutions, and low uranium load requirements.

This design would continue to evolve throughout Timber Wind and into SNTP as mission requirements changed (this description is based on a 1987 paper linked below), and we’ll look at the final design in the next post.

For now, let’s move on to how this reactor would be used.

Nuclear Thermal Kinetic Kill Vehicle

The true break for the project came in the same year: 1987. This is when the SDIO picked the Brookhaven (and now Grumman) concept as their best option for a nuclear-enhanced booster for their proposed ground deployed boost phase interceptor.

I don’t do nuclear weapons coverage – in fact, that’s a large part of why I’ve never covered systems like Pluto here – but it is something that I’ve gained some knowledge of through osmosis, via interactions with many intelligent and well-educated people on social media and in real life… but this time I’m going to make a slight exception for strategic ballistic missile shield technology, because an NTR-powered booster is… extremely rare. I can think of four American proposals that continued to be pursued after the 1950s, one (apocryphal) Soviet design from the early 1950s, one modern Chinese concept, and that’s it! I get asked about it relatively frequently, and my answer is basically always the same: unless something significant changes, it’s not a great idea, but in certain contexts it may work. I leave it up to the reader to decide if this is a good context. (The American proposals I can think of are the Reactor In-Flight Test, or RIFT, which was the first major casualty of Rover/NERVA cutbacks; Timber Wind; and, among private proposals, the Liberty Ship nuclear lightbulb booster and the Nuclear Thermal Turbo Rocket single stage to orbit concept.)

So, the idea behind boost stage interception is that it targets an intercontinental ballistic missile and destroys the vehicle while it’s still gaining velocity – the earlier the interception that can destroy the vehicle, the better. There were many ideas on how to do this, including high powered lasers, but the simplest idea (in theory, not in execution) was the kinetic impactor: basically a self-guided projectile would hit the very thin fuel or oxidizer tanks of the ICBM, and… boom, no more ICBM. This was especially attractive since, by this time, missiles could carry over a dozen warheads, and this would take care of all of them at once, rather than a terminal phase interceptor, which would have to deal with each warhead individually.

The general idea behind Timber Wind was that a three-stage weapon would be used to deliver a boost-phase kinetic kill vehicle. The original first stage was based on the LGM-118 Peacekeeper (“MX,” or Missile – Experimental) first stage, which had just deployed two years earlier. This solid fueled ICBM first stage normally used a 500,000 lbf (2.2 MN) SR118 solid rocket motor, although it’s not clear if this engine was modified in any way for Timber Wind. The second stage would be the PBNTR Timber Wind stage, which would achieve Earth escape velocity to prevent reactor re-entry, and the third stage was the kinetic kill vehicle (which I have not been able to find information about).

Here’s a recent Lockheed Martin KKV undergoing testing, so you can get an idea of what this “bullet” looks and behaves like: https://www.youtube.com/watch?v=KBMU6l6GsdM

Needless to say, this would be a very interesting launch profile, and one that I have not seen detailed anywhere online. It would also be incredibly challenging to

  1. detect the launch of an ICBM;
  2. counter-launch even as rapid-fire-capable a missile as a Peacekeeper;
  3. provide sufficient guidance to the missile in real-time to guide the entire stack to interception;
  4. go through three staging events (two of which occurred above Earth escape velocity!);
  5. guide the kinetic kill vehicle to the target with sufficient maneuvering capability to intercept the target;
  6. and finally have a reasonably high chance of mission success, which required both that the reactor go flying off into a heliocentric orbit and that the kinetic kill vehicle impact the target booster

all before the second (or third) staging event for the target ICBM (i.e. before warhead deployment).

This presents a number of challenges to the designers: thrust-to-weight ratio is key to a booster stage, something that to this point (and even today) NTRs struggle with – mostly due to shielding requirements for the payload.

There simply isn’t a way to mitigate gamma radiation in particular without high atomic number nuclei to absorb and re-emit these high energy photons enough times that a lighter shielding material can stop or deflect the great-great-great-…-grand-daughter photons before they reach sensitive payloads, whether crew or electronics. However, electronics are far less sensitive than humans to this sort of irradiation, so right off the bat this program had an advantage over Rover: there weren’t any people on board, so shielding mass could be minimized.
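As a back-of-envelope illustration of why gamma shielding is so punishing (my own rough numbers, not from the program): simple exponential attenuation with approximate ~1 MeV linear attenuation coefficients, ignoring buildup factors entirely.

```python
import math

# Rough gamma-shielding sketch (back-of-envelope, not a transport code):
# exponential attenuation I = I0 * exp(-mu * x), with approximate linear
# attenuation coefficients for ~1 MeV photons.  Real shield design needs
# buildup factors and Monte Carlo transport.

mu = {            # ~1 MeV linear attenuation coefficients, 1/cm (approximate)
    "lead":     0.77,
    "aluminum": 0.17,
}

def thickness_for_reduction(material: str, factor: float) -> float:
    """Shield thickness (cm) for a given attenuation factor, ignoring buildup."""
    return math.log(factor) / mu[material]

for m in mu:
    print(f"{m:>8}: {thickness_for_reduction(m, 10.0):5.1f} cm per 10x reduction")
```

Every factor of 10 costs centimeters of dense metal, and those centimeters sit right on top of the thrust-to-weight ratio – which is why dropping the crew (and most of the shield) mattered so much here.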

Ed. Note: figuring out shielded T/W ratio in this field is… interesting to say the least. It’s an open question whether reported T/W includes anything but the thrust structure (i.e. no turbopumps and associated hardware, generally called the “power pack” in rocket engineering), much less whether it includes shielding – and the amount of necessary shielding is another complex question which changes with time. Considering the age of many of these studies, and the advances in computational capability to model not only the radiation being emitted from the reactor vessel but the shielding ability of many different materials, every estimate of required shielding must be taken with 2-3 dump trucks of salt!!! Given that shielding is an integral part of the reactor system, this makes pretty much every T/W estimate questionable.

One of the major challenges of the program, apparently, was ensuring that the reactor would not re-enter the atmosphere, meaning that it had to achieve Earth escape velocity while still being able to deploy the third stage kinetic kill vehicle. I’ve been trying to figure out this staging event for a while now, and have come to the conclusion that my orbital mechanics capabilities simply aren’t good enough to assess how difficult this is beyond “exceptionally difficult.”

However, details of this portion of the program were more highly classified than even the already-highly-classified program, and incredibly few details are available about this portion in specific. We do know that by 1991, the beginning of Phase II of Timber Wind, this portion of the program had been de-emphasized, so apparently the program managers also found it either impractical or no longer necessary, focusing instead on the Nuclear Orbital Transfer Vehicle, or NOTV.

PBR-NOTV: Advanced Upper Stage Flexibility

NOTV Mockup, Powell 1987

At the same time as Timber Wind was gaining steam, the OTV concept was going through a major evolution into the PBR-NOTV (Particle Bed Reactor – Nuclear Orbital Transfer Vehicle). This was another interesting concept, and one which played around with many concepts that are often discussed in the astronuclear field (some related to pebble bed reactors, some related to NTRs), but are almost never realized.

The goals were… modest…

  1. ~1000 s isp
  2. multi-meganewton thrust
  3. ~50% payload mass fraction from LEO to GEO
  4. LEO to GEO transfer time measured in hours, burn time measured in minutes
  5. Customizable propellant usage to change thrust level from same reactor (H2, NH3, and mixtures of the two)
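As a sanity check on that payload goal (my own back-of-envelope figures, assuming a rough 4.2 km/s LEO-to-GEO delta-v), the Tsiolkovsky rocket equation shows why 1000 s of specific impulse makes a ~50% payload fraction plausible:

```python
import math

g0 = 9.80665          # m/s^2, standard gravity
isp = 1000.0          # s, the stated Isp goal
dv = 4200.0           # m/s, rough LEO -> GEO delta-v (assumed value)

mass_ratio = math.exp(dv / (isp * g0))   # Tsiolkovsky rocket equation
final_frac = 1.0 / mass_ratio            # fraction of initial mass arriving at GEO

print(f"mass ratio: {mass_ratio:.3f}")
print(f"arriving mass fraction: {final_frac:.1%}")
# ~65% of the stage's initial mass arrives; subtract the dry stage and a
# ~50% *payload* fraction is plausible.  A ~450 s chemical stage on the
# same transfer arrives with only ~39% of its initial mass.
```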

These NOTVs were designed to be the second stage of a booster, similar to the KKV concept we discussed above, but rather than deliver a small kinetic impactor and then leave the cislunar system, these would be designed to place payloads into specific orbits (low Earth orbit, or LEO, mid-Earth orbit, or MEO, and geostationary orbit, GEO, as well as polar and retrograde orbits) using rockets which would normally be far too small to achieve these mission goals. Since the reactor and nozzle were quite small, it was envisioned that a variety of launch vehicles could be used as a first stage, and the tanks for the NTR could be adjusted in size to meet both mission requirements and launch vehicle dimensions. By 1987, there was even discussion of launching it in the Space Shuttle cargo bay, since (until it was taken critical) the level of danger to the crew was negligible due to the lack of oxidizer on board (a major problem facing the Shuttle-launched Centaur with its chemical engine).

There were a variety of missions that the NOTV was designed around, including single-use missions which would go to LEO/MEO/GEO, drop off the payload, and then go into a graveyard orbit for disposal, as well as two way space tug missions. The possibility of on-orbit propellant reloading was also discussed, with propellant being stored in an orbiting depot, for longer term missions. While it wasn’t discussed (since there was no military need) the stage could easily have handled interplanetary missions, but those proposals would come only after NASA got involved.

Multiple Propellants: a Novel Solution to Novel Challenges with Novel Complications

In order to achieve these different orbits, and account for many of the orbital mechanics considerations of launching satellites into particular orbits, a novel scheme for adjusting both thrust and specific impulse was devised: use a more flexible propellant scheme than just cryogenic H2. In this case, the proposal was to use NH3, H2, or a combination of the two. It was observed that the most efficient method of using the two-propellant mode was to use the NH3 first, followed by the H2, since thrust is more important earlier in the flight. One paper observed that in a Hohmann transfer, the first part of the perigee burn would use ammonia, followed by the hydrogen to finish the burn (and, I presume, to circularize the orbit at the end).

When pure ammonia was used, the specific impulse of the stage was reduced to only 500 s (still comfortably above the roughly 300-450 s of chemical second stages), but the thrust would double from 10,000 lbf to 20,000 lbf. By the time the gas had passed into the nozzle, it would have effectively completely dissociated into 3H2 + N2.
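Those numbers are internally consistent, too: since thrust is F = mdot · g0 · Isp, halving the specific impulse while doubling the thrust means quadrupling the mass flow through the reactor. A quick check using the figures from the text:

```python
g0 = 9.80665           # m/s^2, standard gravity
LBF = 4.44822          # newtons per pound-force

# Figures from the text: ~1000 s / 10,000 lbf on hydrogen,
# ~500 s / 20,000 lbf on ammonia.
cases = {
    "H2":  {"isp": 1000.0, "thrust": 10_000 * LBF},
    "NH3": {"isp": 500.0,  "thrust": 20_000 * LBF},
}

for name, c in cases.items():
    c["mdot"] = c["thrust"] / (c["isp"] * g0)   # from F = mdot * g0 * Isp
    print(f"{name}: {c['mdot']:.1f} kg/s")

print(f"flow ratio NH3/H2: {cases['NH3']['mdot'] / cases['H2']['mdot']:.1f}x")  # 4.0x
```

That 4x mass flow is exactly why the denser ammonia shrinks the tankage so dramatically, as discussed below.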

One of the main advantages of the composite system is that it significantly reduced the propellant volume needed for the NTR, a key consideration for some of the boosters that were being investigated. In both the Shuttle and Titan rockets, center of gravity and NTR+payload length were a concern, as was volume.

Sadly, there was also a significant (5,000 lb) decrease in payload advantage over the Centaur using NH3 instead of pure H2, but the overall thrust budget could be maintained.

There are quite a few complications to consider in this design. First, hydrogen behaves very differently than ammonia in a turbopump, not only due to density but also due to compressibility: while NH3 is minimally compressible, meaning it can be treated as having a constant density for a given pressure and temperature while being accelerated by the turbopump, hydrogen is INCREDIBLY compressible, leading to a lot of the difficulties in designing the power pack (the turbopumps, turbines, and supporting hardware of a rocket) for a hydrogen system. It is likely (although not explicitly stated) that at least two turbopumps and two turbines would be needed for this scheme, meaning increased system mass.

Next are the chemical sensitivities and complications of the different propellants: while NH3 is far less reactive than H2 at the temperatures an NTR operates at, it nevertheless has its own set of behaviors which have to be accounted for, both in chemical reactions and in thermal behavior. Ammonia is far more opaque to radiation than hydrogen, for instance, so it'll pick up a lot more energy from the reactor. This in turn changes the thermal reactivity behavior, which might require the reactor to run at a higher power level with NH3 than it would with H2 to maintain reactor equilibrium.

This leads us neatly into the next behavioral difference: NH3 expands less than H2 when heated to the same temperature, but at these higher temperatures the molecule itself may (or will) start to dissociate, as the thermal energy in the molecule exceeds the strength of its covalent bonds. This means you've now got hydrogen and various partially-deconstructed nitrogen compounds with different masses and densities to deal with – although this dissociation decreases the propellant's average molecular mass, increasing specific impulse, and none of the constituent atoms are solids, so plating material onto your reactor isn't a concern. These gasdynamic differences have many knock-on effects though, including engine orificing.

See how the top end of the fuel element’s central void is so much narrower than the bottom? One of the reasons for this is that the propellant is hotter – and therefore less dense – at the bottom (it’s also because as you travel down the fuel element more and more propellant is being added). This is something you see in prismatic fuel elements as well, but it’s not something I’ve seen depicted well anywhere so I don’t have as handy a diagram to use.

This taper is called "orificing," and is used to balance the propellant pressure within an NTR. It depends on the thermal capacity of the propellant, how much it expands, and how much pressure is desired at that particular portion of the reactor – and the result of these calculations is different for NH3 and H2! So some compromises would have to be reached here as well.
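A toy version of the orificing calculation falls out of continuity (ṁ = ρAv) plus the ideal gas law: as the propellant heats up its density falls, so the flow area has to grow down the element to pass the same mass flow. All numbers here are illustrative, not Timber Wind design values:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def flow_area(mdot, molar_mass, temp_k, pressure_pa, velocity):
    """Channel area (m^2) to pass mdot (kg/s) of an ideal gas at a given
    bulk velocity: continuity gives A = mdot / (rho * v), with density
    from the ideal gas law, rho = P * M / (R * T)."""
    rho = pressure_pa * molar_mass / (R * temp_k)
    return mdot / (rho * velocity)

# Illustrative H2 numbers: the same mass flow and bulk velocity at the
# cold inlet and near the hot outlet (with a small pressure drop).
cold = flow_area(0.1, 2.016e-3, 300.0, 5.0e6, 100.0)
hot = flow_area(0.1, 2.016e-3, 2800.0, 4.5e6, 100.0)
print(round(hot / cold, 1))  # ~10x more area needed at the hot end
```

Holding the bulk velocity fixed is itself a simplification; a real orificing calculation also trades velocity against pressure drop, and comes out differently for NH3 and H2.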

Finally, the tankage for the propellant is another complex question. The H2 has to be stored at so much lower a temperature than the NH3 that a common bulkhead between the tanks simply wouldn't be possible – the hydrogen would freeze the ammonia. This could lead to a failure mode similar to what happened to SpaceX's Falcon 9 in September 2016, when a super-chilled helium tank ruptured on the pad, leading to the loss of the vehicle. Of course, the details would be different, but the danger is the same. This leads to the necessity for a complex tankage system, in addition to the power pack problems we discussed earlier.

All of this leads to a heavier and heavier system, with more compromises overall, and with a variety of reactor architectures being discussed it was time to consolidate the program.

Multi-Megawatt Power: Electricity Generation

While all these studies were going on, other portions of SDIO were also conducting studies of astronuclear power systems. The primary electric power system was the SP-100, a multi-hundred kilowatt power supply using technology that had evolved out of the SNAP reactor program in the 60s and 70s. While this program was far along in its development, it was over budget, delayed, and simply couldn't provide enough power for some of the more ambitious projects within SDIO. Because of this, SDIO (briefly) investigated higher power reactors for their more ambitious – and power-hungry – on-orbit systems.

Power generation was often discussed for pebble bed reactors – the same properties that make the concept phenomenal for nuclear thermal rockets also make it a very attractive high temperature gas cooled reactor (HTGR): the high thermal transfer rates reduce the size of the needed reactor, while the pebble bed allows for very high gas flow rates (necessary due to the low thermal capacity of the coolant in an HTGR). To generate power, the gas doesn't go through a nozzle but instead through a gas turbine – the Brayton cycle. This has huge efficiency advantages over the thermoelectric generators used in SP-100, meaning the same size reactor can generate much more electricity – but this would definitely not be the same size reactor!
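For a rough sense of the gap: an ideal Brayton cycle running helium (monatomic, γ ≈ 5/3) at even a modest pressure ratio already beats the few-percent conversion efficiency typical of thermoelectrics. This is a textbook loss-free idealization, not a figure from the SDIO studies:

```python
def brayton_ideal_efficiency(pressure_ratio, gamma=5.0 / 3.0):
    """Ideal (loss-free) Brayton cycle thermal efficiency:
    eta = 1 - r^(-(gamma - 1)/gamma). Helium is monatomic, gamma ~ 5/3."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# Even a pressure ratio of 3 gives ~36% ideal efficiency; real turbines
# with losses land lower, but still far above thermoelectric conversion.
print(round(brayton_ideal_efficiency(3.0), 3))
```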

The team behind Timber Wind (including the BNL, SNL and B&W teams) started discussing both electric generation and bimodal nuclear thermal and nuclear electric reactor geometry as early as 1986, before SDIO picked up the program. Let’s take a look at the two proposals by the team, starting with the bimodal proposal.

Particle Bed BNTR: A Hybrid of a Hybrid

Powell et al 1987

The bimodal NTR (BNTR) system never gained any traction, despite being a potentially valuable addition to the NOTV concept. The likely culprit is the combination of the increased complexity and mass of the BNTR compared to the design that was finally selected for Timber Wind, but it was interesting to the team, and they figured someone else might be interested as well. This design used the same coolant channels for both the propellant and the coolant, which in this case was He. This allowed for similar thermal expansion characteristics and mass flow in the coolant compared to the propellant, while minimizing both corrosion and gas escape challenges.

Horn et al 1987

A total of 37 fuel elements, similar to those used on Timber Wind, were placed in a triangular configuration, surrounded by zirconium hydride moderator, with twelve control rods for reactivity control. Unusually for a power generation system, this concept used a combination of a low power, closed loop coolant system (using He) and a high power, open loop system using H2, which would be vented into space through a nozzle (this second option was limited to about 30 minutes of high power operation before exhausting the H2 reserves). A pair of He Brayton turbines and a radiator were integrated into the BNTR structure. The low power system was designed to operate for "years at a time," producing 555 kWe, while the high power system was rated to 100 MWe in either rapid ramp or burst mode.

Horn et al 1987

However, due to the very preliminary nature of this design very few things are completely fleshed out in the only report on the concept that I’ve been able to find. The images, such as they are, are also disappointingly poor in quality, but provide at least a vague idea of the geometry and layout of the reactor:

Horn et al 1987

Multi-Megawatt Steady State and Burst Reactor Proposal

By 1989, two years into Timber Wind, SDIO wanted a powerful nuclear reactor to provide two different power configurations: a steady state, 10 MWe reactor with a 1 year full power lifetime, which was also able to provide bursts of up to 500 MW for long enough to power neutral particle beams and free electron lasers. A variety of proposals were made, including an adaptation of Timber Wind's reactor core, an adaptation of a NERVA A6 type core (the same family of NERVA reactors used in XE-Prime), a Project Pluto-based core, a hybrid NERVA/Pluto core, a larger, pellet fueled reactor, and two rarer types of fuel: a wire core reactor and a foam fueled reactor. This was in addition to both thermionic and metal Rankine power conversion systems.

The designs for a PBR-based reactor, though, were very different than the Timber Wind reactor. While using the same TRISO-type fuel, they bear little resemblance to the initial reactor proposal. Both the open and closed cycle concepts were explored.

However, this concept, while considered promising, was passed over in favor of more mature fuel forms (under different reactor configurations, namely a NERVA-derived gas reactor).

Finding information about this system is… a major challenge, and one that I'm continuing to work on. This is the best summary I've been able to put together after over a week of searching for source material which, as far as I can tell, is still classified or has never been digitized, so as unsatisfying as this summary is, I'm going to leave it here for now.

When I come back to nuclear electric concepts, we'll come back to this study. I've got… words… about it, but at the present moment it's not something I'm comfortable enough to comment on (within my very limited expertise).

Phase I Experiments

The initial portion of Timber Wind, Phase I, wasn’t just a paper study. Due to the lack of experience with PBR reactors, fuel elements, and integrating them into an NTR, a pair of experiments were run to verify that this architecture was actually workable, with more experiments being devised for Phase II.

Sandia NL ACRR, image DOE

The first of these tests was PIPE (Pulse Irradiation of a Particle Bed Fuel Element), a test of the irradiation behavior of the PBR fuel element, divided into two testing regimes in 1988 and 1989 at Sandia National Laboratory's Annular Core Research Reactor (ACRR), using fuel elements manufactured by Babcock & Wilcox. While the ACRR prevented the fuel elements from reaching the power density desired for the full PBR, the data indicated that the optimism about the potential power densities was justified. Exhaust temperatures were close to those needed for an NTR, so the program continued to move forward. Sadly, there were some manufacturing and corrosion issues with the fuel elements in PIPE-II, leading to some carbon contamination in the test loop, but this didn't impact the ability to gather the necessary data or reduce the promise of the system (it just created more work for the team at SNL).

A later test, PIPET (Particle Bed Reactor Integral Performance Tester) began undergoing preliminary design reviews at the same time, which would end up consuming a huge amount of time and money while growing more and more important to the later program (more on that in the next post).

The other major test to occur at this time was CX1, or Critical Experiment 1.

Carried out at Sandia National Laboratory, CX1 was a novel configuration of prototypic fuel elements and a non-prototypical moderator to verify the nuclear worth of fuel elements in a reactor environment and then conduct post-irradiation testing. This sort of testing is vitally important to any new fuel element, since the computer modeling used to estimate reactor designs requires experimental data to confirm the required assumptions used in the calculations.

This novel architecture looked nothing like an NTR, since it was a research test-bed. In fact, because it was a low power system, there wasn't much need for many of the support structures a nuclear reactor generally uses. Instead, it used 19 fuel elements placed within polyethylene moderator plugs, which were surrounded by a tank of water for both neutron reflection and moderation. This was used to analyze a host of different characteristics, from prompt neutron production (since the delayed neutron behavior would be dependent on other materials, this wasn't a major focus of the testing) to initial criticality and the excess reactivity produced by the fuel elements in this configuration.

CX-1 was the first of two critical experiments carried out using the same facilities in Sandia, and led to further testing configurations, but we’ll discuss those more in the next post.

Phase II: Moving Forward, Moving Up

With the success of the programmatic, computational and basic experiments in Phase I, it was time for the program to focus on a particular mission type, prepare for ground (and eventual flight) testing, and move forward.

This began Phase II of the program, which would continue from the foundation of Phase I until a flight test was able to be flown. By this point, ground testing would be completed, and the program would be in a roughly similar position to NERVA after the XE-Prime test.

Phase II began in 1990 under the SDIO, and would continue under their auspices until October 1991. The design was narrowed further, focusing on the NOTV concept, which was renamed the Orbital Maneuvering Vehicle.

Many decisions were made at this point which I’ll go into more in the next post, but some of the major decisions were:

  1. 40,000 lbf (~175 kN) thrust level
  2. 1000 MWt power level
  3. Hot bleed cycle power pack configuration
  4. T/W of 20:1
  5. Initial isp est of 930 s

While this is a less ambitious reactor, it could be improved as the program matured and certain challenges, especially in materials and reactor dynamics uncertainties, were overcome.

Another critical experiment (CX2) was conducted at Sandia, not only further refining the nuclear properties of the fuel but also demonstrating a unique control system, called a "Peek-A-Boo" scheme. Here, revolving rings made up of aluminum and gadolinium surrounded the central fuel element, and would be rotated to either absorb neutrons or allow them to interact with the other fuel elements. While the test was promising (the worth of the system was $1.81 closed and $5.02 open, both close to calculated values), this system would not end up being used in the final design.
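For readers unfamiliar with reactivity "dollars": one dollar is one effective delayed neutron fraction (β_eff) worth of reactivity. A quick conversion of the quoted worths, assuming the 0.0065 β_eff typical of 235U-fueled systems (the actual CX2 value isn't given in my sources):

```python
BETA_EFF = 0.0065  # assumed effective delayed neutron fraction (235U-typical)

def dollars_to_dk_over_k(dollars, beta_eff=BETA_EFF):
    """Reactivity in dollars -> delta-k/k. One dollar equals one
    effective delayed neutron fraction of reactivity."""
    return dollars * beta_eff

closed = dollars_to_dk_over_k(1.81)
opened = dollars_to_dk_over_k(5.02)
swing = (opened - closed) * 100.0  # control swing in % delta-k/k
print(round(swing, 2))  # ~2.1% delta-k/k between closed and open rings
```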

Changing of the Guard: Timber Wind Falls to Space Nuclear Thermal Propulsion

Even as Timber Wind was being proposed, tensions with the USSR had been falling. By the time it got going in 1987, tensions were at an all-time low, reducing the priority of the SDIO mission. Finally, the Soviet Union fell, eliminating the need for the KKV concept.

At the same time, the program was meeting its goals (for the most part), and showed promise not just for SDIO but for the US Air Force (who were responsible for launching satellites for DOD and intelligence agencies) as well as NASA.

1990 was a major threshold year for the program. After a number of Senate-requested assessments by the Defense Science Board, as well as assessment by NASA, the program was looking like it was finding a new home, one with a less (but still significantly) military-oriented focus, and with a civilian component as well.

The end of Timber Wind would come in 1991. Control of the program would transfer from SDIO to the US Air Force, which would locate the programmatic center of the project at the Phillips Research Laboratory in Albuquerque, NM – a logical choice due to the close proximity of Sandia National Lab where much of the nuclear analysis was taking place, as well as being a major hub of astronuclear research (the TOPAZ International program was being conducted there as well). Additional stakes in the program were given to NASA, which saw the potential of the system for both uncrewed and crewed missions from LEO to the Moon and beyond.

With this, Timber Wind stopped being a thing, and the Space Nuclear Thermal Propulsion program picked up basically exactly where it left off.

The Promise of SNTP

With the demise of Timber Wind, the Space Nuclear Thermal Propulsion program gained steam. Being a wider collaboration between different portions of the US government, both civil and military, brought a lot of advantages, wider funding, and more mission options, but it also brought its own problems.

In the next post, we’ll look at this program, what its plans, results, and complications were, and what the legacy of this program was.

References and Further Reading

Timber Wind/SNTP General References

Haslett, E. A., "Space Nuclear Thermal Propulsion Program Final Report" https://apps.dtic.mil/dtic/tr/fulltext/u2/a305996.pdf

Orbital Transfer Vehicle

Powell et al, "Nuclear Propulsion Systems for Orbit Transfer Based on the Particle Bed Reactor," Brookhaven NL 1987 https://www.osti.gov/servlets/purl/6383303

Araj et al, "Ultra-High Temperature Direct Propulsion," Brookhaven NL 1987 https://www.osti.gov/servlets/purl/6430200

Horn et al, “The Use of Nuclear Power for Bimodal Applications in Space,” Brookhaven NL 1987 https://www.osti.gov/servlets/purl/5555461

Multi-Megawatt Power Plant

Powell et al, "High Power Density Reactors Based on Direct Cooled Particle Beds," Brookhaven NL 1987 https://inis.iaea.org/collection/NCLCollectionStore/_Public/17/078/17078909.pdf

Marshall, A.C., "A Review of Gas-Cooled Reactor Concepts for SDI Applications," Sandia NL 1987 https://www.osti.gov/servlets/purl/5619371

“Atomic Power in Space: a History, chapter 15” https://inl.gov/wp-content/uploads/2017/08/AtomicPowerInSpaceII-AHistory_2015_chapters6-10.pdf


Pebblebed NTRs: Solid Fuel, but Different

Hello, and welcome back to Beyond NERVA!

Today, we’re going to take a break from the closed cycle gas core nuclear thermal rocket (which I’ve been working on constantly since mid-January) to look at one of the most popular designs in modern NTR history: the pebblebed reactor!

Honestly, I should have covered this between the solid and liquid fueled NTRs, and there are even a couple types of reactor which MAY be usable for NTRs that sit in between as well – the fluidized and slurry fuel reactors – but with the lack of information on liquid fueled reactors online I got a bit zealous.

Beads to Explore the Solar System

Most of the solid fueled NTRs we’ve looked at have been either part of, or heavily influenced by, the Rover and NERVA programs in the US. These types of reactors, also called “prismatic fuel reactors,” use a solid block of fuel of some form, usually tileable, with holes drilled through each fuel element.

The other designs we’ve covered fall into one of two categories, either a bundled fuel element, such as the Russian RD-0410, or a folded flow disc design such as the Dumbo or Tricarbide Disc NTRs.

However, there’s another option which is far more popular for modern American high temperature gas cooled reactor designs: the pebblebed reactor. This is a clever design, which increases the surface area of the fuel by using many small, spherical fuel elements held in a (usually) unfueled structure. The coolant/propellant passes between these beads, picking up the heat as it passes between them.

This has a number of fundamental advantages over the prismatic style fuel elements:

  1. The surface area of the fuel is so much greater than with simple holes drilled in the prismatic fuel elements, increasing thermal transfer efficiency.
  2. Since all types of fuel swell when heated, the density of the packed fuel elements can be adjusted to allow for better thermal expansion behavior within the active region of the reactor.
  3. The fuel elements themselves are relatively loosely contained within separate structures, allowing higher temperature containment materials to be used.
  4. The individual elements can be made smaller, allowing for a lower temperature gradient from the inside to the outside of each fuel pebble, reducing the overall thermal stress on the pebble.
  5. In a folded flow design, it is possible to not even have a physical structure along the inside of the annulus if centrifugal force is applied to the fuel element structure (as we saw in the fluid fueled reactor designs), eliminating the need for as many super-high temperature materials in the highest temperature region of the reactor.
  6. Because each bead is individually clad, in the case of an accident during launch, even if the reactor core is breached and a fuel release into the environment occurs, the release of any radiological components or other fuel materials into the environment is minimized.
  7. Because each bead is relatively small, it is less likely to sustain enough damage during a mechanical failure of the flight vehicle, or on impact with the ground, to breach the cladding.

However, there is a complication with this design type as well, since there are many (usually hundreds, sometimes thousands) of individual fuel elements:

  1. Large numbers of fuel beads mean large numbers of fuel beads to manufacture and perform quality control checks on.
  2. Each bead will need to be individually clad, sometimes with multiple barriers for fission product release, hydrogen corrosion, and the like.
  3. While each fuel bead will be individually clad, and so the loss of one or all the fuel will not significantly impact the environment from a radiological perspective in the case of an accident, there is potential for significant geographic dispersal of the fuel in the event of a failure-to-orbit or other accident.

There are a number of different possible flow paths through the fuel elements, but the two most common are either an axial flow, where the propellant passes through a tubular structure packed with the fuel elements, or a folded flow design, where the fuel is in a porous annular structure, with the coolant (usually) passing from the outside of the annulus, through the fuel, and exiting through the central void of the annulus. We'll call these direct flow and folded flow pebblebed fuel elements.

In addition, there are many different possible fuel types, which regulars of this blog will be familiar with by now: oxides, carbides, nitrides, and CERMET are all possible in a pebblebed design, and if differential fissile fuel loading is needed, or gradients in fuel composition (such as using tungsten CERMET in higher temperature portions of the reactor, with beryllium or molybdenum CERMET in lower temperature sections), this can be achieved using individual, internally homogeneous fuel types in the beads, which can be loaded into the fuel support structure at the appropriate time to create the desired gradient.

Just like in "regular" fuel elements, these pebbles need to be clad in a protective coating. There have been many proposals over the years, obviously depending on what type of fissile fuel matrix the fuel uses, to ensure thermal expansion and chemical compatibility with the fuel and coolant. Often, multiple layers of different materials are used to ensure structural and chemical integrity of the fuel pellets. Perhaps the best known example of this today is the TRISO fuel element, used in the US Advanced Gas Reactor fuel development program. The TRI-Structural ISOtropic fuel element uses either oxide or carbide fuel in the center, followed by a porous carbon layer, a pyrolytic carbon layer (sort of like graphite, but with some covalent bonds between the carbon sheets), followed by a silicon carbide outer shell for mechanical and fission product retention. Some variations include a burnable poison for reactivity control (the QUADRISO at Argonne), or use different outer layer materials for chemical protection. Several types have been suggested for NTR designs, and we'll see more of them later.

The (sort of) final significant variable is the size of the pebble. As the pebbles go down in size, the available surface area of the fuel-to-coolant interface increases, but also the amount of available space between the pebbles decreases and the path that the coolant flows through becomes more resistant to higher coolant flow rates. Depending on the operating temperature and pressure, the thermal gradient acceptable in the fuel, the amount of decay heat that you want to have to deal with on shutdown (the bigger the fuel pebble, the more time it will take to cool down), fissile fuel density, clad thickness requirements, and other variables, a final size for the fuel pebbles can be calculated, and will vary to a certain degree between different reactor designs.
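The core of that trade-off is geometric: for a randomly packed bed of spheres, fuel surface area per unit core volume scales as 1/d. A minimal sketch (the 0.37 void fraction and the particle sizes are assumptions for illustration, not from a specific design):

```python
def bed_area_per_volume(pebble_diameter_m, void_fraction=0.37):
    """Fuel surface area per unit bed volume (m^2 per m^3) for a packed
    bed of spheres: a = 6 * (1 - eps) / d. A void fraction of ~0.37 is
    typical of random sphere packing (assumed here)."""
    return 6.0 * (1.0 - void_fraction) / pebble_diameter_m

# Sub-millimeter PBR-style particles vs centimeter-scale terrestrial
# HTGR pebbles: two orders of magnitude more heat transfer area.
for d_mm in (0.5, 5.0, 60.0):
    print(round(bed_area_per_volume(d_mm / 1000.0)))  # 7560, 756, 63
```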

Not Just for NTRs: The Electricity Generation Potential of Pebblebed Reactors

Obviously, the majority of the designs for pebblebed reactors are not meant to ever fly in space; they're mostly meant to operate as high temperature gas cooled reactors on Earth. This type of architecture has been proposed for astronuclear designs as well, although that isn't the focus of this post.

Furthermore, the pebblebed design lends itself to other cooling methods, such as molten salt, liquid metal, and other heat-carrying fluids, which like the gas would flow through the fuel pellets, pick up the heat produced by the fissioning fuel, and carry it into a power conversion system of whatever design the reactor has integrated into its systems.

Finally, while it's rare, pebblebed designs were popular for a while with radioisotope power systems. There are a number of reasons for this beyond being able to run a liquid coolant through the fuel (which was done on one occasion that I can think of, and which we'll cover in a future post). In an alpha-emitting radioisotope such as 238Pu, the fuel will generate helium gas over time: the alpha particles – which are just doubly ionized helium nuclei – slow, stop, and strip electrons off whatever materials are around, becoming normal 4He. This gas needs SOMEWHERE to go, which is why, just as with a fissile fuel structure, there are gas management mechanisms in radioisotope power source fuel assemblies: areas of vacuum, pressure relief valves, and the like. In some types of RTG, such as the SNAP-27 RTG used by Apollo and the Multi-Hundred Watt RTG used by Voyager, the fuel was made into spheres, with the gaps between the spheres (normally used to pass coolant through) used as the gas expansion volume.

We’ll discuss these ideas more in the future, but I figured it was important to point out here. Let’s get back to the NTRs, and the first (and only major) NTR program to focus on the pebblebed concept: the Project Timberwind and the Space Nuclear Propulsion Program in the 1980s and early 1990s.

The Beginnings of Pebblebed NTRs

The first proposals for a gas cooled pebblebed reactor date from 1944/45, although they were never pursued beyond the concept stage, and a proposal for the "Space Vehicle Propulsion Reactor" was made by Levoy and Newgard at Thiokol in 1960, again with no further development. If you can get that paper, I'd love to read it; here's all I've got: "Aero/Space Engineering 19, no. 4, pgs 54-58, April 1960," "AAE Journal, 68, no. 6, pgs. 46-50, June 1960," and "Engineering 189, pg 755, June 3, 1960." Sounds like they pushed hard, and for good reason, but at the time a pebblebed reactor was a radical concept for a terrestrial reactor, and a prismatic fueled reactor, something far more familiar to nuclear engineers, seemed a far simpler and more achievable challenge.

Sadly, while this design may have ended up informing the design of its contemporary reactors, it seems this proposal was never pursued.

Rotating Fluidized Bed Reactor (“Hatch” Reactor) and the Groundwork for Timberwind

Another proposal was made at the same time at Brookhaven National Laboratory, by L.P. Hatch, W.H. Regan, and a name that will continue to come up for the rest of this series, John R. Powell (sorry, can’t find the given names of the other two, even). This relied on very small (100-500 micrometer) fuel, held in a perforated drum to contain the fuel but also allow propellant to be injected into the fuel particles, which was spun at a high rate to provide centrifugal force to the particles and prevent them from escaping.

Now, fluidized beds need a bit of explanation, which I figured was best to put in here since this is not a generalized property of pebblebed reactors. In this reactor (and some others) the pebbles are quite small, and the coolant flow can be quite high. This means that it's possible – and sometimes desirable – for the pebbles to move through the active zone of the reactor! This type of mobile fuel is called a "fluidized bed" reactor, and comes in several variants, including pebble (solid spheres), slurry (solid particulate suspended in a liquid), and colloid (solid particulate suspended in a gas). The best way to describe the phenomenon is with what is called the point of minimum fluidization, which is when the drag forces on the mass of the solid objects from the fluid flow balance the weight of the bed (keep in mind that lift is a specialized form of drag). There are a number of reasons to do this – in fact, many chemical reactions using a solid and a fluid component use fluidization to ensure maximum mixing of the components. In the case of an NTR, the concern is more to do with achieving as close to thermal equilibrium between the solid fuel and the gaseous propellant as possible, while minimizing the pressure drop between the cold propellant inlet and the hot propellant outlet. For an NTR, the way that the "weight" is applied is through centrifugal force on the fuel. This is a familiar concept to those that read my liquid fueled NTR series, but it actually began with the fluidized bed concept.
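The minimum fluidization point can be estimated from the Ergun packed-bed pressure drop correlation by setting the pressure gradient equal to the effective bed weight. Everything numerical below is an assumption for illustration (particle size and density, hydrogen conditions, and the centrifugal acceleration standing in for gravity), not a value from the RBR reports:

```python
import math

def u_min_fluidization(d, rho_s, rho_f, mu, eps=0.45, g_eff=9.81):
    """Minimum fluidization velocity (m/s): the Ergun pressure gradient
    set equal to the buoyant bed weight per unit length,
    150*(1-eps)*mu*u/(eps^3*d^2) + 1.75*rho_f*u^2/(eps^3*d)
        = (rho_s - rho_f) * g_eff,
    solved as a quadratic in u. In a rotating bed, g_eff is the
    centrifugal acceleration at the frit rather than gravity."""
    a = 1.75 * rho_f / (eps**3 * d)
    b = 150.0 * (1.0 - eps) * mu / (eps**3 * d**2)
    c = (rho_s - rho_f) * g_eff
    return (-b + math.sqrt(b * b + 4.0 * a * c)) / (2.0 * a)

# Assumed values: 300 micron U-ZrC-like particles (~6000 kg/m^3),
# cold hydrogen at 5 MPa / 300 K, and ~300 g of centrifugal acceleration.
rho_h2 = 5.0e6 * 2.016e-3 / (8.314 * 300.0)  # ideal gas density, kg/m^3
u_mf = u_min_fluidization(d=300e-6, rho_s=6000.0, rho_f=rho_h2,
                          mu=9.0e-6, g_eff=300.0 * 9.81)
print(round(u_mf, 1))  # gas velocity at which the bed fluidizes, m/s
```

Note how spinning the bed faster raises g_eff and therefore the gas flow the bed can take before the particles start to migrate, which is exactly the containment lever the rotating designs rely on.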

This is calculated using two different relations between the same variables: the Reynolds number (Re), which determines how turbulent the fluid flow is, and the friction coefficient (CD, or coefficient of drag, which determines how much force acts on the fuel particles based on fluid interactions with the particles), which can be found plotted below. The plotted lines represent either the Reynolds number or the void fraction ε, which represents the amount of gas present in the volume defined by the presence of fuel particles.

Hendrie 1970

If you don’t follow the technical details of the relationships depicted, that’s more than OK! Basically, the y axis is proportional to the gas turbulence, while the x axis is proportional to the particle diameter, so you can see that for relatively small increases in particle size you can get larger increases in propellant flow rates.

The next proposal for a pebble bed reactor grew directly out of the Hatch reactor: the Rotating Fluidized Bed Reactor for Space Nuclear Propulsion (RBR). From the documentation I've been able to find, work continued at a very low level at BNL from the original proposal until 1973, but the only reports I've found under the RBR name are from 1971-73. A rotating fuel structure, with small, 100-500 micrometer spherical particles of uranium-zirconium carbide fuel (the ZrC forming the outer clad, with a maximum U content of 10% to maximize the thermal limits of the fuel particles), was surrounded by a reflector of either metallic beryllium or BeO (which was preferred as a moderator, but its increased density also increased both reactor mass and manufacturing requirements). Four drums in the reflector would control the reactivity of the engine, and an electric motor would be attached to a porous "squirrel cage" frit, which would rotate to contain the fuel.

Much discussion was had as to the form of uranium used, be it 235U or 233U. In the 235U reactor, the reactor had a cavity length of 25 in (63.5 cm), an inner diameter of 25 in (63.5 cm), and a fuel bed depth when fluidized of 4 in (10.2 cm), with a critical mass of U-ZrC being achieved at 343.5 lbs (155.8 kg) with 9.5% U content. The 233U reactor was smaller, at 23 in (56 cm) cavity length, 20 in (51 cm) bed inner diameter, 3 in (7.62 cm) deep fuel bed with a higher (70%) void fraction, and only 105.6 lbs (47.9 kg) of U-ZrC fuel at a lower (and therefore more temperature-tolerant) 7.5% U loading.

233U was the much preferred fuel in this reactor, with two options being available to the designers: either the decreased fuel loading could be used to form the smaller, higher thrust-to-weight ratio engine described above, or the reactor could remain at the dimensions of the 235U-fueled option, but the temperature could be increased to improve the specific impulse of the engine.

There was also a trade-off between the size of the fuel particles and the thermal efficiency of the reactor:

  • Smaller particle advantages
    • Higher surface area, and therefore better thermal transfer capabilities
    • Smaller radius reduces thermal stresses on fuel
  • Smaller particle disadvantages
    • Fuel loss from the fluidized particle bed would be a more immediate concern
    • More sensitive to fluid dynamic behavior in the bed
    • Bubbles could more easily form in the fuel
    • Higher centrifugal force required for fuel containment
  • Larger particle advantages
    • Ease of manufacture
    • Lower centrifugal force requirements for a given propellant flow rate
  • Larger particle disadvantages
    • Higher thermal gradients and stresses in fuel particles
    • Less surface area, so lower thermal transfer efficiency

It would require testing to determine the best fuel particle size, which could largely be done through cold flow testing.

These studies looked at cold flow testing in depth. While this is something that I’ve usually skipped over in my reporting on NTR development, it’s a crucial type of testing in any gas cooled reactor, and even more so in a fluidized bed NTR, so let’s take a look at what it’s like in a pebblebed reactor: the equipment, the data collection, and how the data modified the reactor design over time.

Cold flow testing is usually the predecessor to electrically heated flow testing in an NTR. These tests identify a number of things, including areas within the reactor that may end up with stagnant propellant (not a good thing), undesired turbulence, and other negative consequences to the flow of gas through the reactor. They are preliminary tests, because as the propellant heats up while passing through the reactor, two major things change: first, the density of the gas decreases, and second, the Reynolds number (the ratio of inertial to viscous forces in the flow, which indicates whether the flow is laminar or turbulent) changes along with it.
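To make that concrete, here’s a small sketch of how the particle Reynolds number shifts between a cold-flow test and hot operation. The mass flux, particle diameter, and the viscosity power law for hydrogen are all assumed illustrative values, not figures from the BNL reports:

```python
# Sketch: why cold-flow results shift once the propellant heats up.
# At a fixed mass flux G = rho*v (kg/m^2/s), Re = G*d/mu, so Re depends on
# temperature only through viscosity - and gas viscosity rises with T.

def h2_viscosity(temp_k):
    """Approximate H2 viscosity (Pa*s) via a power law anchored at 300 K.
    The 0.7 exponent is an assumed, illustrative value."""
    return 0.90e-5 * (temp_k / 300.0) ** 0.7

def reynolds(mass_flux, particle_diam, temp_k):
    """Particle Reynolds number for mass flux (kg/m^2/s) and diameter (m)."""
    return mass_flux * particle_diam / h2_viscosity(temp_k)

G = 10.0    # assumed mass flux through the bed, kg/m^2/s
d = 300e-6  # 300 micrometer particle, mid-range of the 100-500 um fuel

re_cold = reynolds(G, d, 300.0)   # cold-flow test condition
re_hot = reynolds(G, d, 2400.0)   # near the hot outlet temperature
print(re_cold, re_hot, re_cold / re_hot)
```

The same bed that is comfortably fluidized in a cold nitrogen test sits at a several-times-lower Reynolds number once the gas is hot, which is exactly why cold-flow data can only inform, not replace, heated testing.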

In this case, the cold flow tests were especially useful, since one of the biggest considerations in this reactor type is how the gas and fuel interact.

The first consideration that needed to be examined was the pressure drop across the fuel bed – the highest pressure point in the system is always at the turbopump, and the pressure decreases from that point throughout the system due to friction with the pipes carrying propellant, heating effects, and a host of other inefficiencies. One of the biggest initial questions in this design was how much pressure would be lost as the propellant passed from the frit (the outer containment structure and propellant injection system for the fuel) to the central void in the body of the fuel, where it exits through the nozzle. Happily, this pressure drop is minimal: according to initial testing in the early 1960s (more on that below), the pressure drop was simply equal to the weight of the fuel bed (per unit frit area).
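That “pressure drop equals bed weight” result has a simple force-balance reading: a just-fluidized bed is supported entirely by the gas, so the pressure difference across it must carry the body force on the particles. A minimal sketch, where the bed mass comes from the 235U design above but the rotation rate and frit geometry are my own assumptions:

```python
# Force balance for a supported (just-fluidized) bed: the gas pressure drop
# balances the body force on the particles, dP = M_bed * a / A_frit.
import math

def bed_pressure_drop(bed_mass_kg, accel_m_s2, frit_area_m2):
    """Pressure drop (Pa) needed to support the bed against acceleration a."""
    return bed_mass_kg * accel_m_s2 / frit_area_m2

# Assumed values loosely based on the 235U design: ~156 kg of fuel, a
# cylindrical frit with D = L = 0.635 m, and an assumed 300 g rotation.
area = math.pi * 0.635 * 0.635   # lateral frit area, m^2
a_centrifugal = 300 * 9.81       # centrifugal acceleration, m/s^2
print(bed_pressure_drop(155.8, a_centrifugal, area))  # Pa
```

Under these assumptions the bed alone costs a few bar of pressure drop – modest compared to the chamber pressures involved, which is why the result was good news.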

The next consideration was the range between fluidizing the fuel and losing the fuel through literally blowing it out the nozzle – otherwise known as entrainment, a problem we looked at extensively on a per-molecule basis in the liquid fueled NTR posts (since that was the major problem with all those designs). Initial calculations and some basic experiments were able to map the propellant flow rate and centrifugal force required to both get the benefit of a fluidized bed and prevent fuel loss.
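The size of that operating window – between barely fluidizing the bed and blowing it out the nozzle – can be sketched with the standard low-Reynolds closures for minimum fluidization velocity and terminal (entrainment) velocity. These Stokes-regime formulas are only illustrative, and every particle and gas property below is an assumed stand-in (light glass beads in roughly air-like gas at 1 g, like the early BNL tests), not data from the program:

```python
# The gas must flow faster than the minimum fluidization velocity u_mf but
# slower than the terminal velocity u_t (above which particles are entrained).
# Both scale with the effective "gravity" a, here either 1 g or centrifugal.

def u_terminal(d, rho_p, rho_g, mu, a):
    """Stokes terminal velocity (m/s); only valid at low particle Re."""
    return (rho_p - rho_g) * a * d**2 / (18.0 * mu)

def u_min_fluidization(d, rho_p, rho_g, mu, a, eps=0.45):
    """Viscous-limit (low-Re Ergun) minimum fluidization velocity (m/s)."""
    return d**2 * (rho_p - rho_g) * a * eps**3 / (150.0 * mu * (1.0 - eps))

# Assumed: 100 um glass bead (rho ~2500) in air-like gas at 1 g.
d, rho_p, rho_g, mu, a = 100e-6, 2500.0, 1.2, 1.8e-5, 9.81
umf = u_min_fluidization(d, rho_p, rho_g, mu, a)
ut = u_terminal(d, rho_p, rho_g, mu, a)
print(umf, ut, ut / umf)  # operating window lies between umf and ut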

Rotating Fluidized Bed Reactor testbed test showing bubble formation.

Another concern is the formation of bubbles in the fuel body. As we covered in the bubbler LNTR post (which you can find here), bubbles are a problem in any fuel type, but in a fluid fueled reactor with coolant passing through it there are special challenges. In this case, the main method of transferring heat from the fuel to the propellant is convection (i.e. contact between the fuel and the propellant causes vortices in the gas which distribute the heat), so an area that doesn’t have any (or has minimal) fuel particles in it will not get heated as thoroughly. That’s a headache not only because the overall propellant temperature drops (in proportion to the size of the bubbles), but also because it changes the power distribution in the reactor (the bubbles are fission blank spots).

Finally, the initial experiment set looked at the particle-to-fluid thermal transfer coefficients. These tests were far from ideal, using a 1 g system rather than the much higher planned centrifugal forces, but they did give some initial numbers.

The first round of tests was done at Brookhaven National Laboratory (BNL) from 1962 to 1966, using a relatively simple test facility. A small, 10” (25.4 cm) long by 1” (2.54 cm) diameter centrifuge was installed, with gas pressure provided by a pressurized liquefied air system. 138 to 3450 grams of glass particles were loaded into the centrifuge, and various rotational velocities and gas pressures were used to test the basic behavior of the particles under both centrifugal force and gas pressure. While some bubbles were observed, the fuel beds remained stable and no fuel particles were lost during testing, a promising beginning.

These tests provided not just initial thermal transfer estimates, pressure drop calculations, and fuel bed behavioral information, but also informed the design of a new, larger test rig, this one 10 in by 10 in (25.4 by 25.4 cm), which was begun in 1966. This system would not only have a larger centrifuge, but would also use liquid nitrogen rather than liquefied air, be able to test different fuel particle simulants rather than just relatively lightweight glass, and provide much more detailed data. Sadly, the program ran out of funding later that year, and the partially completed test rig was mothballed.

Rotating Fluidized Bed Reactor (RBR): New Life for the Hatch Reactor

It would take until 1970 for work to resume, when the Space Nuclear Systems Office of the Atomic Energy Commission and NASA provided additional funding to complete the test stand and conduct a series of experiments on particle behavior, reactor dynamics and optimization, and other analytical studies of a potential advanced pebblebed NTR.

The First Year: June 1970-June 1971

After completing the test stand, the team at BNL began a series of tests with this larger, more capable equipment in Building 835. The first, most obvious difference was the diameter of the centrifuge, upgraded from 1 inch (2.54 cm) to 10 inches (25.4 cm), allowing a more prototypical fuel bed depth. The centrifuge was made of perforated aluminum, held in a stainless steel pressure housing that fed the pressurized gas through the fuel bed. In addition, the gas system was changed from pressurized air to nitrogen, stored in liquid form in trailers outside the building for ease of refilling (and safety), then pre-vaporized and held in two other high-pressure trailers.

Photographs, taken viewing the bottom of the bed from underneath the apparatus, were used to record fluidization behavior. Initially, photos could only be taken 5 seconds apart, but later upgrades would improve this over the course of the program.

The other major piece of instrumentation surrounded the pressure and flow rate of the nitrogen gas throughout the system. The gas was introduced at a known pressure through two inlets into the primary steel body of the test stand, with measurements of upstream pressure, cylindrical cavity pressure outside the frit, and finally a pitot tube to measure pressure inside the central void of the centrifuge.

Three main areas of pressure drop were of interest: due to the perforated frit itself, the passage of the gas through the fuel bed, and finally from the surface of the bed and into the central void of the centrifuge, all of which needed to be measured accurately, requiring calibration of not only the sensors but also known losses unique to the test stand itself.

The tests themselves used a range of glass particle sizes from 100 to 500 micrometers in diameter, similar to the earlier tests, as well as 500 micrometer copper particles to more closely replicate the density of the U-ZrC fuel. Rotation rates of between 1,000 and 2,000 rpm and gas flow rates of 1,340-1,800 scf/m (38-51 m^3/min) were used with the glass beads, and rotation rates of 700-1,500 rpm with the copper particles (the lower rotation rate was due to gas pressure feed limitations preventing the bed from becoming fully fluidized with the more massive particles).
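It’s worth pausing on what those rotation rates mean physically: the “gravity” the particles feel at the frit wall is just a = ω²r. A quick sketch, using nothing beyond the quoted 10 in (25.4 cm) centrifuge diameter:

```python
# Centrifugal acceleration at the frit wall of the 10 in (0.254 m) diameter
# centrifuge, expressed in Earth gees: a = omega^2 * r.
import math

def centrifugal_gees(rpm, radius_m):
    """Centrifugal acceleration at radius_m for a given rpm, in Earth g."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular velocity, rad/s
    return omega**2 * radius_m / 9.81

for rpm in (700, 1000, 1500, 2000):
    print(rpm, "rpm ->", round(centrifugal_gees(rpm, 0.127), 1), "g")
```

So the glass-bead runs spanned roughly 140 to 570 gees at the wall – which is why fuel containment was plausible despite the enormous gas flows, and why 1 g thermal-transfer data from the old rig couldn’t simply be extrapolated.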

Finally, there were a series of physics and mechanical engineering design calculations that were carried out to continue to develop the nuclear engineering, mechanical design, and system optimization of the final RBR.

The results from the initial testing were promising: much of the testing was focused on getting the new test stand commissioned and calibrated, with a focus on figuring out how to use the rig as constructed as well as which parts (such as the photography setup) could be improved in the next fiscal year of testing. However, particle dynamics in the fluidized bed were comfortably within stable, expected behavior, and while there were interesting findings as to the variation in pressure drop along the axis of the central void, this was something that could be worked with.

Based on the calculations performed, as well as the experiments carried out in the first year of the program, a range of engine configurations was determined for both 233U and 235U variants:

Work Continues: 1971-1972

This led directly into the 1971-72 series of experiments and calculations. Now that the test stand had been mostly completed (although modifications would continue), and the behavior of the test stand was now well-understood, more focused experimentation could continue, and the calculations of the physics and engineering considerations in the reactor and engine system could be advanced on a more firm footing.

One major change in this year’s design work was the shift toward a low-thrust, high-isp system, in part due to greater interest at NASA and the AEC in a smaller NTR than the original design envelope. While analyzing the proposed engine sizes above, though, it was discovered that the smallest two reactors were simply not practical, meaning the smallest workable design had a power level of over 1 GW.

Another thing emphasized during this period on the optimization side of the program was the mass of the reflector. Since the low thrust option was now the main focus of the design, any increase in the mass of the reactor system had a larger impact on the thrust-to-weight ratio, but reducing the reflector thickness also increases the neutron leakage rate. To limit that leakage, a narrower nozzle throat is preferred, but a narrower throat increases the thermal loading across the throat itself, meaning that additional cooling, and probably more mass, is needed – especially in a high-specific-impulse (i.e. high temperature) system. A narrower throat also requires higher chamber pressure to maintain the desired thrust level (a narrower throat passing the same mass flow means that the pressure in the central void has to be higher).

These changes required a redesign of the reactor itself, with a new critical configuration:

Hendrie 1972

One major change was how fluidized the bed would actually be during operation. To get full fluidization, there needs to be enough inward (“upward” in terms of force vectors) gas velocity at the inner surface of the fuel body to lift the fuel particles without losing them out the nozzle. During calculations in both the first and second years, two major subsystems contributed hugely to the weight, and both were very dependent on the rotational speed and the pellet size/mass: the frit and motor system, which holds the fuel particles, and the nozzle, which not only forms the outlet-end containment structure for the fuel but also (through the challenges of rocket motor dynamics) is linked to the chamber pressure of the reactor. On top of that, the narrower the nozzle, the less surface area is available to reject heat from the propellant, so the harder it is to keep the nozzle cool enough that it doesn’t melt.

Now, fluidization isn’t binary: a pebblebed reactor can be settled (no fluidization), partially fluidized (usually expressed as the percentage of the pebblebed that is fluidized), or fully fluidized to varying degrees (usually expressed as the fraction of the bed volume occupied by the fluid rather than the pebbles). So there’s a huge range, from fully settled to >95% fluid in a fully fluidized bed.

The designers of the RBR weren’t going for excess fluidization: at some point, the designer faces diminishing returns on the complications required to maintain that level of fluid flow (I’m sure the same is true, with different criteria, in the chemical industry, where most fluidized beds actually are used). Those complications include more powerful turbopumps for the hydrogen, incomplete thermalization of that hydrogen because there’s simply too much propellant to be heated fully, and, of course, fuel loss from particulate fuel being blown out of the nozzle. The calculations for the bed dynamics therefore assumed minimal full fluidization (i.e. the point at which all the pebbles are moving in the reactor) as the maximum flow rate – somewhere around 70% gas in the fuel volume (that number was never specifically defined in the source documentation that I found; if it was, please let me know) – though this depends on both the pressure drop in the reactor (which is related to the mass of the particle bed) and the gas flow.

Ludewig 1974

However, the designers at this point decided that full fluidization wasn’t actually necessary – and in fact was detrimental – to this particular NTR design. Because of the dynamics of the design, the first particles to be fluidized were on the inner surface of the fuel bed, and as the fluidization percentage increased, pebbles further toward the outer circumference became fluidized. The temperature difference between the fuel and the propellant is greatest as the propellant is injected through the frit into the fuel body, so more heat is carried away by the propellant per unit mass there; as the propellant warms up, thermal transfer becomes less efficient (the temperature difference between two objects is one of the major variables in how much energy is transferred for a given surface area), and fluidization improves that solid-to-fluid transfer efficiency.

Because of this, the engineers re-thought what “minimal fluidization” actually meant. If the bed could be fluidized enough to maximize the benefit of that dynamic, while at a minimum level of fluidization to minimize the volume the pebblebed actually took up in the reactor, there would be a few key benefits:

  1. The fueled volume of the reactor could be smaller, meaning that the nozzle could be wider, so they could have lower chamber pressure and also more surface area for active cooling of the nozzle
  2. The amount of propellant flow could be lower, meaning that turbopump assemblies could be smaller and lighter weight
  3. The frit could be made less robustly, saving on weight and simplifying the challenges of the bearings for the frit assembly
  4. The nozzle, frit, and motor/drive assembly for the frit are all net neutron poisons in the RBR, meaning that minimizing any of these structures’ overall mass improves the neutron economy in the reactor, leading to either a lower mass reactor or a lower U mass fraction in the fuel (as we discussed in the 233U vs. 235U design trade-off)

After going through the various options, the designers decided to go with a partially fluidized bed. At this point in the design evolution, they decided on having about 50% of the bed by mass being fluidized, with the rest being settled (there’s a transition point in the fuel body where partial fluidization is occurring, and they discuss the challenges of modeling that portion in terms of the dynamics of the system briefly). This maximizes the benefit at the circumference, where the thermal difference (and therefore the thermal exchange between the fuel and the propellant) is most efficient, while also thermalizing the propellant as much as possible as the temperature difference decreases from the propellant becoming increasingly hotter. They still managed to reach an impressive 2400 K propellant cavity temperature with this reactor, which makes it one of the hottest (and therefore highest isp) solid core NTR designs proposed at that time.
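That 2400 K figure is what drives the specific impulse claim, and the scaling is easy to sketch: ideal exhaust velocity goes as the square root of chamber temperature over molecular mass. The snippet below uses the ideal-nozzle relation for hydrogen with an assumed constant gamma and expansion pressure ratio – a sketch of the scaling, not the RBR’s actual quoted performance:

```python
# Ideal-nozzle exhaust velocity for hydrogen, frozen flow, constant gamma:
# v_e = sqrt(2*gamma/(gamma-1) * (R/M) * Tc * (1 - (pe/pc)^((gamma-1)/gamma)))
import math

R_UNIV = 8.314    # J/mol/K
M_H2 = 2.016e-3   # kg/mol, molecular hydrogen
G0 = 9.81         # m/s^2

def exhaust_velocity(t_chamber, gamma=1.4, pressure_ratio=1e-3):
    """Ideal exhaust velocity (m/s); gamma and expansion ratio are assumed."""
    term = 1.0 - pressure_ratio ** ((gamma - 1.0) / gamma)
    return math.sqrt(2.0 * gamma / (gamma - 1.0) * (R_UNIV / M_H2)
                     * t_chamber * term)

isp_2400 = exhaust_velocity(2400.0) / G0  # RBR-class chamber temperature
isp_2000 = exhaust_velocity(2000.0) / G0  # a cooler solid-core-like case
print(round(isp_2400), "s vs", round(isp_2000), "s")
```

Under these assumptions the 2400 K chamber lands in the high-700s of seconds of specific impulse, comfortably above cooler solid core designs – which is the whole argument for pushing bed temperature as hard as the fuel particles allow.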

This partial-fluidization approach has various implications for the reactor, including the density of the fissile component of the fuel (as well as the other solid components that make up the pebbles), the void fraction of the reactor (the part of the reactor made up of something other than fuel, in this instance hydrogen within the fuel bed), and other parameters, requiring a reworking of the nuclear modeling for the reactor.

An interesting thing to me in the Annual Progress Report (linked below) is the description of how this new critical configuration was modeled; while this technique is reasonably common knowledge among nuclear engineers from the days before computational modeling (and even to the present day), I’d never seen it explained in the literature before.

Basically, they made a set of extremely simplified one-dimensional models of various points in the reactor. They then assumed that each profile could be rotated around the reactor’s axis at that elevation, making something like an MRI slice of the nuclear behavior there. Then they moved far enough away that the dynamics would be different (say, where the frit turns in toward the middle of the reactor to hold the fuel, where the nozzle starts, or even from the center of the fuel to its edge) and repeated the same one-dimensional modeling; they would end up doing this 18 times. Then, somewhat like an MRI in reverse, they took these “few-group” models and combined them into a larger “macro-group” calculation able to handle the interactions between the different few-group simulations, building up a two-dimensional model of the reactor’s nuclear structure and determining the critical configuration of the reactor. They added a few other ways to subdivide the reactor for modeling – for instance, they split the neutron spectrum calculations into fast and thermal groups – but this is the general shape of how this sort of nuclear modeling is done.
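The stitching idea can be shown with a toy sketch: compute a cheap 1-D radial profile at each of 18 axial stations, then assemble the slices into a 2-D (axial × radial) map. The profile function here is a made-up stand-in (a sine axial shape times a cosine radial shape), not the actual neutronics:

```python
# Toy "few-group -> macro-group" stitching: 18 independent 1-D radial
# profiles, one per axial station, assembled into a 2-D flux map.
import math

N_AXIAL, N_RADIAL = 18, 10  # 18 one-dimensional "few-group" slices

def radial_profile(z_frac):
    """Hypothetical relative flux profile at axial position z_frac in [0,1]."""
    axial = math.sin(math.pi * z_frac)  # assumed sine-like axial shape
    return [axial * math.cos(0.5 * math.pi * (r / (N_RADIAL - 1)) * 0.9)
            for r in range(N_RADIAL)]

# "Macro-group" step: combine the independent slices into one 2-D model.
flux_map = [radial_profile((k + 0.5) / N_AXIAL) for k in range(N_AXIAL)]

peak = max(max(row) for row in flux_map)
print(len(flux_map), "slices x", len(flux_map[0]), "radial points, peak", round(peak, 3))
```

The real method, of course, solved coupled multi-group diffusion equations at each station and let the macro-group step carry the coupling between slices; the point of the sketch is only the data flow from many cheap 1-D solutions to one 2-D picture.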

Ok, let’s get back to the RBR…

Experimental testing using the rotating pebblebed simulator continued through this fiscal year, with some modifications. A new, seamless frit structure was procured to eliminate some experimental uncertainty, the pressure measuring equipment was used to survey more of the pressure drops across the system, and a long-standing challenge for the experimental team was finally resolved: 100 micrometer copper spheres regular enough in shape to provide a useful analogue to the UC-ZrC fuel (Cu specific gravity 8.9, UC-ZrC specific gravity ~6.5) were at last procured.

Additionally, while thermal transfer experiments had been done with the 1-gee small test apparatus that preceded the larger centrifugal setup (with its variable gee forces), the changes were too great to allow accurate predictions of thermal transfer behavior. Therefore, thermal transfer experiments began to be planned for the new test rig – another expansion of the capabilities of the new system, which was now in rigorous use following the previous year’s completion and calibration testing. While the experiments weren’t conducted that year, setting up an experimental program requires careful analysis of what the test rig is capable of, and of how good data accuracy can be achieved given the experimental limitations of the design.

The major achievement of the year’s experimentation was a refined understanding of the relationship between particle size, centrifugal force, and the pressure drop of the propellant from the turbopump to the frit inlet to the central cavity – most especially from the frit through the fuel body to the inner cavity – across a wide range of particle sizes, flow rates, and bed fluidization levels, which would be key as the design of the RBR evolved.

The New NTR Design: Mid-Thrust, Small RBR

So, given the priorities at both the AEC and NASA, it was decided to focus primarily on a given thrust level and try to optimize the thrust-to-weight ratio of the reactor around it, in part because the outlet temperature of the reactor – and therefore the specific impulse – was fixed by the engineering decisions made with regard to the rest of the reactor design. In this case, the target thrust was 90 kN (20,230 lbf), or about 120% of a Pewee-class engine.

This, of course, constrained the reactor design, which at this point in any reactor’s development is a good thing. Every general concept has a huge variety of options to play with: fuel type (oxide, carbide, nitride, metal, CERMET, etc), fissile component (233U and 235U being the big ones, but 242mAm, 241Cf, and other more exotic options exist), thrust level, physical dimensions, fuel size in the case of a PBR, and more all can be played with to a huge degree, so having a fixed target to work towards in one metric allows a reference point that the rest of the reactor can work around.

Also, having an optimization target to work from is important – in this case thrust-to-weight ratio (T/W). Maximizing other metrics, such as specific impulse, would lead to a very different reactor design, but at the time T/W was considered the most valuable consideration, since one way or another the specific impulse would still be higher than that of the prismatic core NTRs then under development in the NERVA program (led by Los Alamos Scientific Laboratory and NASA, with regular hot fire testing at the Jackass Flats, NV facility). Those engines, while promising, were limited by poor T/W ratios, so at the time a major goal for NTR improvement was to increase the T/W ratio of whatever came after – which might have been the RBR, if everything had gone smoothly.

One of the characteristics with the biggest impact on the T/W ratio in the RBR is the nozzle throat diameter. The smaller the diameter, the higher the chamber pressure, which reduces the T/W ratio while increasing the volume the fuel body can occupy within the same reactor dimensions – meaning that smaller fuel particles could be used, since there’s less chance they would be lost out of the narrower nozzle throat. Increasing the nozzle throat diameter improved the T/W ratio (up to a point) and allowed the chamber pressure to be decreased, but at the cost of a larger particle size; this increases the thermal stresses in the fuel particles and makes it more likely that some of them would fail – not as catastrophic as in a prismatic fueled reactor by any means, but still something to be avoided at all costs. Clearly a compromise would need to be reached.
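The throat/pressure coupling follows directly from the thrust equation F = C_f · P_c · A_t: hold thrust fixed at 90 kN and the required chamber pressure falls with the square of throat diameter. A sketch, where the thrust coefficient and the diameters are assumed illustrative values rather than RBR design figures:

```python
# At fixed thrust, chamber pressure scales as 1/A_throat:
# F = Cf * Pc * At  ->  Pc = F / (Cf * At).
import math

def chamber_pressure(thrust_n, throat_diam_m, cf=1.8):
    """Chamber pressure (Pa) for a given thrust, throat diameter, and an
    assumed typical vacuum thrust coefficient cf."""
    a_throat = math.pi * throat_diam_m**2 / 4.0
    return thrust_n / (cf * a_throat)

for d_cm in (8, 12, 16):  # illustrative throat diameters
    pc = chamber_pressure(90e3, d_cm / 100.0)
    print(d_cm, "cm throat ->", round(pc / 1e5, 1), "bar chamber pressure")
```

Doubling the throat diameter cuts the required chamber pressure by a factor of four – but, as the text notes, the wider throat demands larger fuel particles to keep the bed contained, which is exactly the compromise the designers were wrestling with.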

Here are some tables looking at the design options leading up to the 90 kN engine configuration with both the 233U and 235U fueled versions of the RBR:

After analyzing the various options, a number of lessons were learned:

  1. It was preferable to work from a fixed design point (the 90 kN thrust level), because while the reactor design was flexible, operating near an optimized power level was more workable from a reactor physics and thermal engineering point of view
  2. The main stress points on the design were reflector weight (one of the biggest mass components in the system), throat diameter (from both a mass and active cooling point of view as well as fuel containment), and particle size (from a thermal stress and heat transfer point of view)
  3. On these lower-thrust engines, 233U looked far better than 235U as the fissile component, with a T/W ratio (without radiation shielding) of 65.7 N/kg compared to 33.3 N/kg respectively
    1. As reactor size increased, this difference reduced significantly, but with a constrained thrust level – and therefore reactor power – the difference was quite significant.

The End of the Line: RBR Winds Down

1973 was a bad year for the astronuclear engineering community. The flagship program, NERVA – approaching flight-ready status with preparations for the XE-PRIME test, with the successful testing of the flexible, (relatively) inexpensive Nuclear Furnace about to occur (to speed not only prismatic fuel element development but also a variety of other reactor architectures, such as the nuclear lightbulb we began looking at last time), and with a robust hot fire testing infrastructure established at Jackass Flats – was fighting for its life, and its funding, in the halls of Congress. National attention, after the success of Apollo 11, was turning away from space, and the missions that made NTRs technologically relevant – and a good investment – were disappearing from mission planners’ “to do” lists, migrating to “if we only had the money” ideas. The Rotating Fluidized Bed Reactor would be one of those casualties, and wouldn’t even last through the fiscal year.

This doesn’t mean that more work wasn’t done at Brookhaven, far from it! Both analytical and experimental work would continue on the design, with the new focus on the 90 kN thrust level, T/W optimized design discussed above making the effort more focused on the end goal.

Multi-program computational architecture used in 1972/73 for RBR, Hoffman 1973

On the analytical side, many of the components had reasonably good analytical models independently, but they weren’t well integrated. Additionally, new and improved analytical models for things like the turbopump system, system mass, and temperature and pressure drops in the reactor had been developed over the preceding year, and these were integrated into a unified modeling structure involving multiple stacked models. For more information, check out the 1971-72 progress report linked in the references section.

The system developed was on the verge of being able to do dynamics modeling of the proposed reactor designs, and plans were laid out for what this proposed dynamic model system would look like, but sadly by the time this idea was mature enough to implement, funding had run out.

On the experimental side, further refinement of the test apparatus was completed. Most importantly, because of the new design requirements and the limitations of the experiments conducted so far, the test-bed’s nitrogen supply system had to be modified for higher gas throughput, to handle a much thicker fuel bed than had been experimentally tested. Because of the limited information about multi-gee centrifugal behavior in a pebblebed, the existing experimental data could only be used to inform the experimental program needed for the much thicker fuel bed required by the new design.

Additionally, as discussed the previous year, thermal transfer testing in the multi-gee environment was necessary to properly evaluate thermal transfer in this novel reactor configuration, but the traditional methods simply weren’t an option. Normally, the procedure would be to subject the bed to alternating gas temperatures: cold gas would chill the pebbles to gas-ambient temperature, then hot gas would be run over the chilled pebbles until they reached thermal equilibrium at the new temperature, then cold gas again, and so on. By analyzing the temperature of the exit gas, the temperature of the pebbles, and the amount of gas (and time) needed to reach each equilibrium state, accurate heat transfer coefficients could be obtained for a variety of pebble sizes, centrifugal forces, propellant flow rates, and so on – but this is a very energy-intensive process.

An alternative was proposed, which would basically split the test bed’s gas inlet into two halves, one hot and one cold. Stationary thermocouples placed through the central void of the centrifuge would record variations in the gas at various points, and by measuring the gradient as the pebbles moved from hot to cold gas and back, good quality data could be had at a much lower energy cost – at the price of data fidelity decreasing in proportion to bed thickness. However, for a cash-strapped program, this was enough to get the data necessary to proceed with the 90 kN design that the RBR program was focused on.
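Either way, the cycling procedure amounts to measuring an exponential time constant and backing out the heat transfer coefficient. A minimal sketch of that reduction, assuming a lumped-capacitance pebble (reasonable for the copper simulant, whose high conductivity keeps the internal temperature nearly uniform) and an invented 50 ms time constant:

```python
# Lumped-capacitance reduction: a pebble approaching the gas temperature
# follows T(t) = T_gas + (T0 - T_gas) * exp(-t / tau), with tau = m*cp/(h*A).
# Measure tau from the cycling data, then invert for h. Particle properties
# are assumed stand-ins for the 500 um copper simulant.
import math

d = 500e-6                       # 500 um copper particle diameter, m
rho, cp = 8900.0, 385.0          # copper density (kg/m^3), specific heat (J/kg/K)
m = rho * math.pi * d**3 / 6.0   # particle mass, kg
area = math.pi * d**2            # particle surface area, m^2

def h_from_time_constant(tau_s):
    """Heat transfer coefficient (W/m^2/K) implied by measured tau."""
    return m * cp / (area * tau_s)

# Suppose the pebble closes 63% of the gas-pebble temperature gap in 50 ms
# (i.e. one time constant) - an invented number for illustration:
print(round(h_from_time_constant(0.05), 1), "W/m^2/K")
```

This is also why the split hot/cold inlet is clever: the thermocouple traces give you the gradient (and hence tau) continuously as pebbles circulate between the two gas streams, instead of requiring the whole bed to be cycled to equilibrium over and over.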

Looking forward, while the team knew that this was the end of the line as far as current funding was concerned, they looked to how their data could be applied most effectively. The dynamics models were ready to be developed on the analytical side, and thermal cycling capability in the centrifugal test-bed would prepare the design for fission-powered testing. The plan was to address the acknowledged limitations with the largely theoretical dynamic model with hot-fired experimental data, which could be used to refine the analytical capabilities: the more the system was constrained, and the more experimental data that was collected, the less variability the analytical methods had to account for.

NASA had proposed a cavity reactor test-bed, primarily to test the open and closed cycle gas core NTRs also under development at the time, which could theoretically have been used to test the RBR as well in a hot-fire configuration thanks to its unique gas injection system. Sadly, this test-bed never came to be (it was canceled along with most other astronuclear programs), so the faint hope of fission-powered RBR testing in an existing facility died as well.

The Last Gasp for the RBR

The final paper that I was able to find on the Rotating Fluidized Bed Reactor was by Ludewig, Manning, and Raseman of Brookhaven in the Journal of Spacecraft, Vol 11, No 2, in 1974. The work leading up to the Brookhaven program, as well as the Brookhaven program itself, was summarized, and new ideas were thrown out as possibilities as well. It’s evident reading the paper that they still saw the promise in the RBR, and were looking to continue to develop the project under different funding structures.

Other than a brief mention of the possibility of continuous refueling, though, the system largely sits where it was in the middle of 1973, and from what I’ve seen no funding was forthcoming.

While this was undoubtedly a disappointing outcome – one virtually every astronuclear program in history has faced – and the RBR was never revived, the concept of a pebblebed NTR would gain new and better-funded interest in the decades to come.

This program, which has its own complex history, will be the subject for our next blog post: Project Timberwind and the Space Nuclear Thermal Propulsion program.

Conclusion

While the RBR was no more, the idea of a pebblebed NTR would live on, as I mentioned above. With a new, physically demanding job, finishing up moving, and the impacts of everything going on in the world right now, I’m not sure exactly when the next blog post is going to come out, but I have already started it, and it should hopefully be coming in relatively short order! After covering Timberwind, we’ll look at MITEE (the whole reason I’m going down this pebblebed rabbit hole, not that the digging hasn’t been fascinating!), before returning to the closed cycle gas core NTR series (which is already over 50 pages long!).

As ever, I’d like to thank my Patrons on Patreon (www.patreon.com/beyondnerva), especially in these incredibly financially difficult times. I definitely would have far more motivation challenges now than I would have without their support! They get early access to blog posts, 3d modeling work that I’m still moving forward on for an eventual YouTube channel, exclusive content, and more. If you’re financially able, consider becoming a Patron!

You can also follow me at https://twitter.com/BeyondNerva for more regular updates!

References

Rotating Fluidized Bed Reactor

Hendrie et al., “ROTATING FLUIDIZED BED REACTOR FOR SPACE NUCLEAR PROPULSION Annual Report: Design Studies and Experimental Results, June 1970 – June 1971,” Brookhaven NL, August 1971 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19720017961.pdf

Hendrie et al., “ROTATING FLUIDIZED BED REACTOR FOR SPACE NUCLEAR PROPULSION Annual Report: Design Studies and Experimental Results, June 1971 – June 1972,” Brookhaven NL, September 1972 https://inis.iaea.org/collection/NCLCollectionStore/_Public/04/061/4061469.pdf

Hoffman et al., “ROTATING FLUIDIZED BED REACTOR FOR SPACE NUCLEAR PROPULSION Annual Report: Design Studies and Experimental Results, July 1972 – January 1973,” Brookhaven NL, September 1973 https://inis.iaea.org/collection/NCLCollectionStore/_Public/05/125/5125213.pdf

Cavity Test Reactor

Whitmarsh, Jr., C. “PRELIMINARY NEUTRONIC ANALYSIS OF A CAVITY TEST REACTOR,” NASA Lewis Research Center, 1973 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730009949.pdf

Whitmarsh, Jr., C. “NUCLEAR CHARACTERISTICS OF A FISSIONING URANIUM PLASMA TEST REACTOR WITH LIGHT-WATER COOLING,” NASA Lewis Research Center, 1973 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730019930.pdf


Fluid Fueled NTRs: A Brief Introduction

 Hello, and welcome back to Beyond NERVA! This is actually about the sixth blog post I’ve started in the last month – each one split up after it ran more than 20 pages long and needed more explanatory material before I could discuss the concepts I was trying to cover (this blog post has itself been split up multiple times).

I apologize about the long hiatus. A combination of personal, IRL complications (I’ve updated the “About Me” section to reflect this, but those will not affect the type of content I share on here), and the professional (and still under wraps) opportunity of a lifetime have kept me away from the blog for a while. I want to return to Nuclear Thermal Rockets (NTRs) for a while, rather than continuing Nuclear Electric Propulsion (NEP) power plants, as a fun, still-not-covered area for me to work my way back into writing regularly for y’all again.

This is the first in an extensive blog series on fluid fueled NTRs, of three main types: liquid, vapor, and gas core NTRs. These reactors avoid the thermal limitations of the fuel elements themselves, increasing the potential core temperature to above 2550 K (the generally accepted maximum thermal limit on workable carbide fuel elements) and so increasing the specific impulse of these rockets. At the same time, structural material thermal limits, challenges in adequately heating the propellant to gain these advantages in a practical way, fissile fuel containment, and power density issues are major concerns in these types of reactors, so in this blog post we’re going to dig into the weeds of the general challenges facing fluid fueled reactors (with some details on each reactor type’s design envelope).

Let’s start by looking at the basics behind how a nuclear reactor can operate without any solid fuel elements, and what the advantages and disadvantages of going this route are.

Non-Solid Fuels

A nuclear reactor is, at its basic level, a method of maintaining a fission reaction in a particular region for a given time. This depends on maintaining a combination of two characteristics: the number of fissile atoms in a given volume, and the number and energy of neutrons in that same volume (the neutron flux). As long as the number of neutrons and the number of fissile atoms in the area are held in balance, a controlled fission reaction will occur in that area.

Solid Core Fuel Element, image DOE

The easiest way to maintain that reaction is to hold the fissile atoms in a given place using a solid matrix of material – a fuel element. However, a number of things have to be balanced for a fuel element to be a useful and functional piece of reactor equipment. For an astronuclear reactor, there are two main concerns: the amount of power produced by the fission reaction has to be balanced by how much thermal energy the fuel element is able to contain, and the fuel element needs to survive the chemical and thermal environment that it is exposed to in the reactor. (A third concern for terrestrial reactors is that the fuel element has to contain the resulting fission products from the reaction itself, as well as any secondary chemical pollutants, but this isn’t necessarily a problem for astronuclear reactors, where the only environment of concern is the more heavily shielded payload of the rocket.)

This doesn’t mean that a reactor has to use a solid fuel element. As the increasingly well known molten salt reactor, as well as various other fluid fueled reactor concepts, demonstrate, the only requirement is that the necessary number of fissile atoms and the required energy level and density of neutrons exist in the same region of the reactor. This, especially in Russian literature, is called the “active zone” of the reactor core. It can be an especially useful term, since the reactor core can contain areas that aren’t as active in terms of fission activity. (A great example of this is the traveling wave reactor, most recently investigated – and then shelved – by TerraPower.) But more generally it’s useful to differentiate the fueled areas undergoing fission from other structures in the reactor, such as neutron moderation and control regions. The key takeaway is that, as long as there is enough fuel, and the right density of neutrons at the right energy, then a sustained – and controlled – fission reaction can be achieved.

The obvious consequence is that the solid fuel element isn’t required – and in the case of a nuclear thermal rocket, where the efficiency of the rocket is directly tied to the temperature it can achieve, the solid fuel is in fact a major limitation to a designer. The downside to this is that, unlike solids, fluids tend to move, especially under thrust. Because the materials used in a solid fueled rocket are already at the extremes of what molecular bonds can handle, this means that either very clever cooling or very robust containment methods need to be used to keep the rest of the reactor from destroying itself.

Finally, one of the interesting consequences of not having a solid fuel element is that the reactor’s power density (W/m^3) and specific power (W/kg) can, in theory, be increased in proportion to how much coolant can be used. In practice, though, it can be challenging to maintain a high power density in certain types of fluid fueled reactors because of how strongly these fuels thermally expand. There are ways around this, and fluid fueled reactors can have higher power densities than even closely related solid fueled variants, but the fact that fluids expand much more than solids at high temperatures always has to be taken into account: if the fluid expands too much, the power density drops, though not necessarily the specific mass of the system.
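To make the expansion point concrete, here’s a back-of-the-envelope sketch of my own (not from any of the referenced designs) using the ideal gas law. The pressure and temperatures are arbitrary illustrative values, and real UF6 at the higher temperature would long since have dissociated and ionized, so only the 1/T scaling is the point:

```python
# Illustrative only: ideal-gas estimate of how strongly a gaseous fuel
# expands with temperature at constant pressure (rho = P*M / (R*T)).
R = 8.314  # J/(mol*K), universal gas constant

def gas_density(pressure_pa, molar_mass_kg_mol, temp_k):
    """Ideal-gas density in kg/m^3."""
    return pressure_pa * molar_mass_kg_mol / (R * temp_k)

# Hypothetical conditions: UF6 (0.352 kg/mol) at 10 atm.
P = 10 * 101_325
M_UF6 = 0.352

rho_cool = gas_density(P, M_UF6, 2_000)    # warm startup-ish region
rho_hot  = gas_density(P, M_UF6, 20_000)   # plasma-like operating region

# A 10x temperature rise at constant pressure cuts density 10x, which is
# why holding fissile density in the active zone is such a challenge.
print(f"{rho_cool:.1f} kg/m^3 -> {rho_hot:.2f} kg/m^3")
print(f"density ratio: {rho_cool / rho_hot:.1f}")
```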

Types of and Reasons for Fluid Fuels

Fluid fuels fall into three broad categories: liquids, vapors, and gases. There are intermediate steps, and hybrids between various phase states of fuel, but these three broad categories are useful. While liquid fuels are fairly self-explanatory (a liquid state fissile material is used to fuel the core, often uranium carbide mixed with other carbides, or U-Mo, but other options exist), the vapor and gas concepts are far less straightforward. The vapor core has two major variants: discrete liquid droplets, or a low pressure, relatively low temperature gaseous suspension similar to a cloud. The gas core could more appropriately be called a “plasma core,” since these are very high temperature reactors which hold the plasma in place either mechanically, or with hydrodynamic or electrodynamic forces.

However, they all have some common advantages, so we’ll look at them as a group first. The obvious reason for using non-solid fuels, in most cases, is that they are generally less thermally limited than solid fuels (with some exceptions). This means that higher core temperatures, and therefore higher exhaust velocity (and specific impulse), can be achieved.
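As a rough illustration of why core temperature matters so much, ideal exhaust velocity scales with the square root of chamber temperature over molar mass. The sketch below uses the textbook frozen-flow, full-expansion relation with values I’ve picked for illustration; it shouldn’t be read as a performance figure for any specific design:

```python
import math

G0 = 9.80665   # m/s^2, standard gravity
R = 8.314      # J/(mol*K), universal gas constant

def ideal_isp(temp_k, molar_mass_kg_mol, gamma=1.4):
    """Ideal specific impulse (s), frozen flow, full expansion:
    v_e = sqrt(2*gamma/(gamma - 1) * R*T/M)."""
    v_e = math.sqrt(2 * gamma / (gamma - 1) * R * temp_k / molar_mass_kg_mol)
    return v_e / G0

M_H2 = 2.016e-3  # kg/mol, molecular hydrogen

print(f"2550 K (carbide solid-core limit): ~{ideal_isp(2550, M_H2):.0f} s")
print(f"5000 K (liquid-core territory):    ~{ideal_isp(5000, M_H2):.0f} s")
```

Even this idealized estimate shows roughly a 40% specific impulse gain just from raising the core temperature past the solid-fuel limit.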

Convection pattern in radiator-type
liquid fuel element, image DOE

An additional benefit to most fluid fueled designs is that the fluid nature of the fuel helps mitigate or eliminate hot spots in the fuel. With solid fuels, one of the major challenges is to distribute the fissile material throughout the fuel as evenly as possible (or along a specifically desired gradient of fissile content depending on the position of the fuel element within the reactor). If this isn’t done properly, whether through a manufacturing flaw or migration of the fissile component as a fuel element becomes weakened or damaged during use, then a hot spot can develop, degrading both the nuclear and the mechanical properties of the fuel element and potentially leading to its failure. If the process is widespread enough, this can damage or destroy the entire reactor.

Fluid fuels, on the other hand, have the advantage that the fuel isn’t statically held in a solid structure. Let’s look at what happens when the fuel isn’t fully homogeneous (completely mixed) to understand this:

  1. A higher density of fissile atoms in the fuel results in more fission occurring in a particular volume.
  2. The fuel heats up through both radiation absorption and fission fragment heating.
  3. The fuel in this volume becomes less dense as the temperature increases.
  4. The increased volume, combined with convective mixing of cooler fuel fluids and radiation/conduction from the surface of the hotter region cools the region further.
  5. At the same time, the lower density decreases the fission occurring in that volume, while it remains at previous levels in the “normally heated” regions.
  6. The hot spot dissipates, and the fuel returns to a (mostly) homogeneous thermal and fissile profile.
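The steps above can be sketched as a toy simulation. The feedback coefficients and temperatures here are arbitrary values of my own choosing, not from any reactor analysis; the point is only that the density and cooling feedbacks damp the perturbation back toward the bulk temperature:

```python
# Toy model of the self-damping hot spot described in steps 1-6.
# Assumptions (mine, for illustration): fission power scales linearly
# with local fuel density, density scales as 1/T (ideal-gas-like), and
# the hot region loses heat at a rate proportional to how far it is
# above the bulk fuel temperature.
T_BULK = 3000.0  # K, the "normally heated" fuel

def simulate_hot_spot(t_initial, steps=200, dt=0.01, k_cool=2.0):
    t = t_initial
    history = [t]
    for _ in range(steps):
        density = T_BULK / t                      # hotter -> less dense (step 3)
        power = density                           # less dense -> less fission (step 5)
        cooling = k_cool * (t - T_BULK) / T_BULK  # mixing/radiation losses (step 4)
        t += dt * T_BULK * (power - 1.0 - cooling)
        history.append(t)
    return history

hist = simulate_hot_spot(3600.0)  # start with a 600 K hot spot
# The perturbation decays back toward the bulk temperature (step 6).
print(f"start: {hist[0]:.0f} K, end: {hist[-1]:.0f} K")
```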

In practice, this doesn’t necessarily mean that the fuel is the same temperature throughout the element – this very rarely occurs, in fact. Power levels and temperatures will vary throughout the fuel, causing natural vortices and other structures to appear. Depending on the fuel element configuration, this can be either minimized or enhanced depending on the need of the reactor. However, the mixing of the fuel is considered a major advantage in this sort of fuel.

Another advantage to using fluid fuels (although one that isn’t necessarily high on the priority list of most designs) is that the reactor can be refueled more easily. In most solid fueled reactors, the fissile content, fission poison content, and other key characteristics are carefully distributed through the reactor before startup, to ensure that the reactor will behave as predictably as possible for as long as possible at the desired operating conditions. In terrestrial solid reactors, refueling is a complex, difficult process, which involves moving specific fuel bundles in a complex pattern to ensure the reactor will continue to operate properly, with only a little bit of new fuel added with each refueling cycle.

PEWEE Test stand, image courtesy DOE

There were only two refuelable NTR testbeds in the US Rover program: Pewee and the Nuclear Furnace. Both of these were designed as fuel element development apparatus, rather than functional NTRs (although Pewee managed to hit the highest Isp of any NTR tested in Rover without even trying!), so this is a significant difference. While it’s possible to refuel a solid core NTR, especially one such as the RD-0410 with its discrete fuel bundles, the likely method would be to replace the entire fueled portion of the reactor – not the best option for ease of refueling, and one that would likely require a drydock of sorts to complete the work. To give an example, even the US Navy doesn’t always refuel its reactors, opting for long-lived highly enriched uranium fuel which will last for the life of the reactor; if the ship needs to be refueled, the reactor is removed and replaced whole in most cases. This reluctance to refuel solid core reactors is likely to persist in astronuclear designs for the indefinite future, since placing the fuel elements is a complex process that requires a lot of real-time analysis of the particulars of the individual fuel elements and reactors (in Rover this was done at the Pajarito Site in Los Alamos).

Fluid fuels, though, can be added to or removed from the reactor using pumps, compressed gases, centrifugal force, or other methods. While not all designs have the capability to be refueled, many do, and some even require online fuel removal, processing, and reinsertion into the active region of the core to maintain proper operation. If this is being done in a microgravity environment, there will be other challenges to address as well, but these have already been at least partially addressed by on-orbit experiments over the decades in the various space programs. (Specific behaviors of certain fluids will likely need to be experimentally tested for this particular application, but the basic physics and engineering solutions have been researched before.)

Finally, fluid fuels also allow for easier transport of the fuel from one location to another, including into orbit or to another planet. Rather than a potentially damageable solid pellet, rod, prism, or ribbon, which must be carefully packaged to prevent not only damage but accidental criticality, fluids can be transported with far less risk: ensure that accidental criticality can’t occur, verify chemical compatibility between the fluid and the vessel carrying it, and package it strongly enough to survive an accident, and the problem is solved. If chemical processing and synthesis is available wherever the fuel is being sent (likely, if extensive and complex ISRU is being conducted), then the fuel doesn’t even need to be in its final form: more chemically inert options (UF4 and UF6 can be quite corrosive, but are easily managed with current materials and techniques), or less fissile-dense options (to further reduce the chance of accidental criticality), can be used as fuel precursors, with the final fuel form synthesized at the fueling depot. This may not be necessary, or even desirable, in most cases, but the option is available.

So, while solid fuels offer certain advantages over fluid fuels, their relative delicacy – thermal, chemical, and mechanical – makes fluid fuels a very attractive option. Once NTRs are in use, it is likely that research into fluid fueled NTRs will accelerate, making these “advanced” systems a reality.

Fuel Elements: An Overview

Now that we’ve looked at the advantages of fluid fuels in general, let’s look at the different types of fluid fuels and the proposals for the form the fuel elements in these reactors would take. This will be a brief overview of the various types of fuels, with more in-depth examinations coming up in future blog posts.

Liquid Fuel

A liquid fueled reactor is the best known popularly, although the most common type (the molten salt reactor) uses either fluoride or chloride salts, both of which are very corrosive at the temperatures an NTR operates at. While I’ve heard arguments that the extensive use of regenerative cooling can address this thermal limitation, it would still remain a major problem for an NTR. Another liquid fuel type, the molten metal reactor, has been tested as well, using highly corrosive plutonium fuel in the best known case (the Los Alamos Molten Plutonium Reactor Experiment, or LAMPRE, run by Los Alamos Scientific Laboratory from 1957 to 1963, covered very well here).

Early bubbler-type liquid NTR, Barrett 1963

The first proposal for a liquid fueled NTR came in 1954, from J. McCarthy in “Nuclear Reactors for Rockets.” This design spun molten uranium carbide to produce centrifugal force (a common characteristic in liquid NTRs of all designs), and passed the propellant through a porous outer wall, through the fuel mass, and into the central void in the reactor before it was ejected out of the nozzle. The main problem with this reactor was that the tube was simply too large to allow as much heat transfer as was ideal, so the next evolution of the design broke the single large spinning fuel element up into several thinner ones of the same length, increasing the total surface area available for heating the propellant. This work was conducted at Princeton, and would continue on and off until 1973. I generally call these designs “bubblers,” due to the propellant flow path.

Princeton multi-fuel-element bubbler, Nelson et al 1963

One problem with these designs is that the fuel would vaporize in the low pressure hydrogen environment of the bubbles, and significant amounts of uranium would be lost as the propellant went through the fuel. Not only is uranium valuable, but it’s heavy, reducing the exhaust velocity and therefore the specific impulse. Another issue is that there are hard limits to how much propellant can be passed through the fuel at any given time before it starts to splatter, directly tying thrust to fuel volume. 

In order to combat this, a team at NASA’s Lewis Research Center decided to study the idea of passing the propellant only through the central void in the fuel, allowing radiation to be the sole means of heating the propellant. Additional regenerative cooling structures were needed for this design, and ensuring the propellant got heated sufficiently was a challenge, but this sort of LNTR, the radiator type, became the predominant design. Vapor losses of the uranium were still a problem, but were minimized in this configuration.

It too would be cancelled in the late 1960s, though it was briefly revived by a team at Brookhaven National Laboratory in the early 1990s for possible use in the Space Exploration Initiative; however, the program was not selected for further development.

Despite these challenges, liquid core NTRs have the potential to reach a specific impulse above 1300 s and a thrust-to-weight ratio of up to 0.5, so there is definite promise in the concept.

Droplet/Vapor Fuel

Picture a spray bottle, the sort used for household plants, ironing, or cleaning products like window cleaner. When the trigger is pulled, there’s a fine spray of liquid exiting the nozzle, which contains a mix of liquid and gas. Using a similar system to mix liquids and gases is possible in a nuclear reactor, and is called a droplet core NTR. This reactor type is useful in that there’s enormous surface area available for radiating heat into the propellant, but it also means that separating the fuel droplets from the propellant upon leaving the nozzle (as well as preventing the fuel from coating the reactor core walls) is a major hydrodynamics challenge in this type of reactor.
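To get a feel for why the surface area gain is so dramatic, here’s a quick geometric sketch with illustrative numbers of my own choosing: splitting a fixed fuel volume into N equal spherical droplets multiplies the total surface area by the cube root of N.

```python
import math

def total_surface_area(total_volume_m3, n_droplets):
    """Total surface area (m^2) when a volume is split into n equal spheres."""
    v = total_volume_m3 / n_droplets
    r = (3 * v / (4 * math.pi)) ** (1 / 3)
    return n_droplets * 4 * math.pi * r**2

V = 0.05  # m^3 of fuel, an arbitrary illustrative value

one_body = total_surface_area(V, 1)
droplets = total_surface_area(V, 10**9)  # roughly half-millimeter droplets

# Area scales as N^(1/3): a billion droplets give 1000x the area
# of a single liquid body of the same volume.
print(f"area gain: {droplets / one_body:.0f}x")
```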

Vapor core NTR, Diaz et al 1992

The other option is to use a vapor as fuel. A vapor is a substance in a gaseous state below the critical point of the material – i.e., at standard temperature and pressure it would still be a liquid. One interesting property of a vapor is that it can be condensed or evaporated – changing the phase state of the substance – without changing its temperature, which could be a useful tool for reactor startup. The downside of this type of fuel is that it has to be kept in an enclosed vessel in order to maintain the vapor state.

So why is this useful in an NTR? Despite the headaches we’ve just (briefly, believe it or not) discussed in the liquid fuels section, liquid fuel has a major advantage over gaseous fuel (our next section): the liquid phase is far better at containing its constituent parts than the gas phase is, due to the higher interatomic bond strength. At the same time, maintaining a large, liquid body can be a challenge, especially in the context of complex molecular structures in some of the most chemically difficult elements known to humanity (the actinides and transuranics). If the liquid component is small, though, it’s far easier to manage the thermal distribution, as well as offering greater thermal diffusion options (remember, the heat IN the fissile fuel needs to be moved OUT of it, and into the propellant, which is a direct function of available surface area).

The droplet core NTR offers many advantages over a liquid fuel in that the large-scale behavior of the liquid fuel isn’t a concern for reactor dynamics, and the aforementioned high surface area offers awesome thermal transfer properties throughout the propellant feed, rather than being focused on one volume of the propellant.

Vapors offer a middle ground between liquids and gasses: the fissile fuel itself is in suspension, meaning that the individual molecules of fissile fuel are able to circulate and maintain a more or less homogeneous temperature. 

This is another design concept that has seen very little development as an NTR (although NEP applications have been investigated more thoroughly, something whose applications and complications for an NTR we’ll discuss in the future). In fact, I’ve only ever been able to find one design of each type intended for NTR use (and a series of evolving designs for NEP): the appropriately named Droplet Core Nuclear Rocket (DCNR) and the Nuclear Vapor Thermal Reactor (NVTR).

Droplet Core NTR, Anghaie et al 1992

The DCNR was developed in the late 1980s based on an earlier design from the 1970s, the colloid core reactor. The original design used ultrafine microparticles of U-C-Zr carbide fuel, which would be suspended in the propellant flow. This sort of fuel is something we’ll look at more when covering gas core NTRs (metal microparticles are one of the fuel types available for a GCNTR), but the use of carbides raises the fuel failure temperature to the point that structural components would fail before the fuel itself would, leading to what could be called an early pseudo-dusty plasma NTR. The droplet core NTR took this concept and applied it to a liquid rather than solid fuel form. We’ll look at how the fuel was meant to be contained before exiting the nozzle in the next section, but this was the main challenge of the DCNR from an engineering point of view.

The NVTR was a compromise design based on NERVA fuel element development with a different fissile fuel carrier. Here, the fuel (in the form of UF4) is contained within a carbon-carbon composite fuel element in sealed channels, with interspersed coolant channels to manage the thermal load on the fuel element. While significant thrust-to-weight ratio improvements and (in advanced NTR terms) modest specific impulse gains were possible, the design didn’t undergo any significant development. We’ll cover containment in the next section, along with other options for the architecture.

Gas Fuel

Finally, there are gas core NTRs. In these, the fuel is in gaseous form, allowing for the highest core temperatures of any core configuration. Due to the very high temperatures of these reactors, the uranium (and, in general, the rest of the components in the fuel) becomes ionized, meaning that “plasma core” is as accurate a description as “gas core,” but gas remains the convention. The fuel form for a gas core NTR has a few variants, with the most common being UF6, or metal fuel which vaporizes as it is injected into the core. Because of the high temperatures, the UF6 will often break down as its constituent molecules become ionized, meaning that whatever structures come in contact with the fuel itself (either containment structures or nozzle components) must be designed to resist attack by high temperature fluorine ions and by the hydrofluoric acid vapors formed when the fluorine ions come in contact with the propellant.

Containing the gas is generally done in one of three ways: by compressing the gas mechanically in a container, by holding the gas in the middle of the reactor using the gas pressure from the propellant being injected into the core, or by using electromagnets to contain the plasma similarly to how a spherical tokamak operates. The first concept is a closed cycle gas core (CCGCNTR, or GC-C), while the second two are called open cycle gas cores (OCGCNTR or GC-O). The closed cycle physically contains the fuel, preventing fission products, unburned fuel, and the previously mentioned free fluorine from exiting in the exhaust plume of the reactor. The open cycle’s largest problem in producing a workable NTR, on the other hand, is that the vast majority (often upwards of 90%) of the uranium ends up being stripped away from the plasma body before it undergoes fission – a truly hot radioactive mess which you don’t want to use anywhere near anything sensitive to radiation, and an insanely inefficient use of fissile material. There are many other designs and hybrids of these concepts, which we’ll cover in the gas core NTR series; we’ll look briefly at the containment challenges below.

Fluid Fuel Elements: Containment Strategies

Fluid fuels are, well, fluid. Unlike with a solid fuel element, as we’ve looked at in the past, a fluid has to be contained somehow. This can be in a sealed container or by using some outside force to keep it in place.

Another issue with fluid fuels can be (but isn’t always) maintaining the necessary density to achieve the power requirements for an NTR (or any astronuclear system, for that matter). All materials expand when heated, but with fluids this change can be quite dramatic, especially in the case of gas core NTRs. Because of this, careful design is required in order to maintain the high density of fissile fuel necessary to make a mass-efficient rocket engine possible.

This leads to a rather obvious conclusion: rather than the fuel element being a physical object, in a fluid fueled NTR the fuel element is a containment structure. Depending on the fuel type and the reactor architecture, this can take many forms, even in the same type of fuel. This will be a long-ish review of the proposed fuel containment strategies, and how they impact the performance of the reactors themselves.

One thing to note about all of these reactor types is that 235U is not required to be the fissile component in the fuel; in fact, many gas core designs use 233U instead, due to the lower requirements for critical mass. (According to most Russian literature on gas core NTRs, this reduces the critical mass requirements by 1/3.) Other options include using 242mAm, a metastable isomer of 242Am, which has one of the lowest critical masses of any fissile fuel. By using these fuels rather than the typical 235U, either less of the fuel mass needs to be fissile (in the case of a liquid fueled NTR), or less fuel in general is needed (in the case of vapor/gas core NTRs). This can be a double-edged sword in systems with high fuel loss rates (like an open cycle gas core), which would require more robust and careful fuel management strategies to prevent power transients due to fuel level variations in the active zone of the reactor, but the overall reduction in fuel requirements means that there’s less fuel that can be lost. Many other fissile fuel types also exist, but generally speaking either short half-lives, high spontaneous fission rates, or expense of manufacture have prevented them from being extensively researched.
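For a rough sense of scale, here are approximate bare-sphere (unreflected metal) critical masses from the open literature. These are ballpark figures only, and the 242mAm value in particular varies widely between estimates; actual in-core requirements for a moderated, reflected NTR would be far lower and entirely design-specific, so only the relative ranking is the point:

```python
# Approximate bare-sphere critical masses (kg), open-literature ballparks.
# Reflected/moderated reactor configurations need far less material.
CRITICAL_MASS_KG = {
    "U-235":   52.0,
    "U-233":   16.0,
    "Pu-239":  10.0,
    "Am-242m":  9.0,   # estimates vary widely; among the lowest known
}

baseline = CRITICAL_MASS_KG["U-235"]
for isotope, mass in CRITICAL_MASS_KG.items():
    print(f"{isotope:>7}: {mass:5.1f} kg ({mass / baseline:.0%} of U-235)")
```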

Let’s look at each of the design types in general, with a particular focus on gas core NTRs at the end.

Liquid FE

For liquid fuels, there’s one universal option for containing the fuel: spinning the fuel element. Beyond this, though, there are two main camps on how a liquid fueled NTR interacts with the propellant. The original design, first proposed in the 1950s and researched at least through the 1960s, proposed the use of one or several spinning cylinders with porous outer walls (frits), which would be used to inject the propellant into the reactor’s active region. For those that remember the Dumbo reactor, this may be familiar as a folded flow NTR, and it does two things: first, it allows the area surrounding the fuel elements to be kept at very low temperatures, permitting the use of ZrH and other thermally sensitive materials throughout the reactor, and second, it increases the heat transfer area available from the fuel to the propellant. Experiments (using water as a uranium analog) were conducted to study the basics of bubble behavior in a spinning fluid in order to estimate fuel mass loss rates, and the impact of evaporation or vaporization of various forms of uranium (including U metal, UC2, and others) was studied as well.

This concept is the radiator type LNTR. Here, rather than the folded flow used previously, axial flow is used: the H2 is used as a coolant for reactor structures (including the nozzle), passing from the nozzle end to the ship end, and is then injected through the central void of each of the fuel elements before exiting the nozzle. This design reduces the loss of fuel mass due to bubbling in the fuel, but adds the additional challenge of severely reducing the amount of surface area available for heat transfer from the fuel to the propellant. In order to mitigate this, some designs propose seeding the propellant with microparticles of tungsten, which would absorb the significant amount of UV and X rays coming off the fuel and re-emit it as IR radiation, which is more easily absorbed by the hydrogen. At the designed operating temperatures, this reactor would dissociate the majority of the H2 into monatomic hydrogen, increasing the specific impulse significantly.
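The dissociation benefit can be sketched with the same sqrt(1/M) scaling used for exhaust velocity. Note that this simple sketch (mine, not from the LNTR literature) holds the heat-capacity ratio fixed, which is itself an approximation, since monatomic hydrogen has a different gamma than H2, so the real gain would be smaller:

```python
import math

# Frozen-flow idealization: at a fixed chamber temperature, exhaust
# velocity scales as sqrt(1/M), so fully dissociating H2 -> 2H
# (molar mass 2.016 -> 1.008 g/mol) buys a factor of sqrt(2),
# ignoring the change in heat-capacity ratio.
def isp_gain(m_before_g_mol, m_after_g_mol):
    return math.sqrt(m_before_g_mol / m_after_g_mol)

gain = isp_gain(2.016, 1.008)
print(f"ideal Isp multiplier from full dissociation: {gain:.2f}x")
```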

In all these designs, there is no solid clad between the fuel itself and the propellant, because the hottest portion of the fuel element would then be limited by how high the temperature could climb before melting the clad. Some early LNTR designs used a mix of molten UC2 and ZrC/NbC as a fuel element, with the ZrC meant to migrate to the upper areas of the fuel element and not only provide neutron moderation but reduce the amount of erosion from the propellant. It may be possible to use a liquid metal clad as a barrier to prevent mass erosion of the fissile fuel in a metal fueled reactor as well, and possibly even add some neutron moderation for the fuel element itself. However, the material would need not only a very high boiling point, high thermal conductivity, low reactivity to both hydrogen and the fuel, and a low neutron capture cross section, but also a low vapor pressure in order to limit erosion by the propellant flow (although I suppose adding additional clad during the course of operation would also be an option, at the cost of higher propellant mass and therefore lost specific impulse).

Droplet/Vapor FE

Now let’s look at the vapor core NTR.

NVTR fuel element, Diaz et al 1992

Containing the UF4 vapor in the NVTR is done by using sealed tubes embedded in a fuel element, which are surrounded by propellant channels to carry away the heat. Two configurations were proposed in the NVTR concept: the first used a large central cavity, sealed at both ends, to contain the vapor, while the second dispersed the fuel cylinders in an alternating hexagonal pattern throughout the fuel element. The second option provides a more even thermal distribution, not only within the fuel element itself but across the entire active zone of the reactor core.

Droplet core NTRs are very different in their core structure. Rather than having multiple areas that the fissile fuel is isolated in, the droplet core sprays droplets of fissile fuel into a large cylinder, which is spun to induce centrifugal force. The fuel is kept away from the walls of the reactor core using a collection of high-pressure H2 jets, injecting the propellant into the fuel suspension and maintaining hydrostatic containment on the fuel. The last section of the reactor core, instead of using hydrogen, injects a liquid lithium spray to bind with the uranium, which is then carried to the walls of the reactor due to the lack of tangential force. The fuel is then recirculated to the top of the reactor vessel, where it is once again injected into the core.

This hydrostatic containment concept is very similar to how many gas core NTRs operate (which we’ll look at below), and has proven to be the biggest Achilles’ heel of these sorts of designs. While it may be theoretically possible to make it work (the lower temperatures of the droplet core allow for collection and recirculation, which may provide a means of reducing fissile fuel loss), many of the challenges of the droplet core are very similar to those of the open cycle gas core, a far more capable engine type.

Gas Core

Gas core containment is possibly the most complex topic in this post, due to the sheer variety of possible designs and extreme engineering requirements. We’ll be discussing the different designs in depth in upcoming blog posts, but it’s worth doing an overview of the different designs, their strengths and weaknesses, here.

Closed Cycle

One half of the lightbulb configuration, McLafferty et al 1968

The simplest design to describe is the closed cycle gas core, which in many ways resembles a vapor core NTR. In most iterations, a sealed cylinder with a piston at one end (similar in many ways to the piston in an automobile engine) is filled with UF6 gas. This gas is compressed in order to reach critical geometry, and fission occurs inside the cylinder. The walls of the cylinder are generally made out of quartz, which is transparent to the majority of the radiation coming off the fissioning uranium and is able to resist fluorination by the gas (other options include silicon dioxide, magnesium oxide, and aluminum oxide); the resulting structure looks much like a lightbulb, hence the common nickname “nuclear lightbulb.” While the quartz will darken under the heat, the radiation actually “anneals” it to keep it transparent, and coolant is run through the cylinder to keep the material within thermal limits. A vortex is induced during fission which, when properly managed, also keeps the majority of the uranium (now in a charged state) from coming in contact with the walls of the chamber, reducing the thermal load on the material. Some designs have used pressure waves in place of the piston to induce fission, but the fluid-mechanical result is very similar. One variation mentioned in Russian literature also uses a closed uranium loop, circulating the fissile fuel to minimize fission product buildup and maintain the fissile density of the reactor.

The main advantage of this type of design is that all fission products and particle radiation are contained within the bulb structure, eliminating their release into the environment, with only gamma and x-ray radiation during operation remaining a concern. However, because there is a solid structure between the fuel and the propellant, this engine is more thermally limited than any other gas core design, and its performance in both thrust and specific impulse suffers as a result.

Open Cycle

The next very broad category is the open cycle gas core. Here, there is usually no solid structure between the fissioning uranium and the propellant, meaning that core temperatures can reach astoundingly high values (sometimes limited only by the melting temperatures of the materials surrounding the active reactor zone, such as the reflectors and pressure vessel). Sadly, this also means that actually containing the fuel is the single largest challenge in this type of reactor, and the exhaust tends to be incredibly radioactive as a result. On the plus side, this sort of rocket can achieve a specific impulse in the tens of thousands of seconds (similar to or better than electric propulsion) while also achieving high thrust.

Perhaps the easiest way to make a pure open cycle gas core NTR is to allow the fuel and the propellant to fully mix, much as in the droplet core NTR, and ensure that all (or most) of the fissile fuel is burned before leaving the rocket nozzle. Insanely radioactive, sure, but with complete mixing of the fissioning atoms and the propellant, the theoretically most efficient transfer of energy is possible. However, the challenge of fully fissioning the fuel in such a short period of time is significant, and I can’t find any evidence of significant research into this type of gas core reactor.

Due to the challenge of burning the fissile fuel completely enough during a single pass through the reactor, though, it is generally considered necessary to maintain a more stable fissile structure within the reactor’s active region. Maintaining this sort of structure is a challenge, but is generally done through gasdynamic effects: the propellant injected into the reactor is used to push the fuel back into the center of the core. This involves a porous outer reactor wall, through which the hydrogen propellant is injected at high enough pressure, and at evenly enough spaced intervals, to counterbalance both the tendency of the plasma to expand until it can no longer undergo fission and the tendency of the fuel to leave through the nozzle before being burned.

Soviet-type Vortex Stabilized open cycle, image Koroteev et al 2007

The next approach is to create a low pressure, stagnant area in the center of the core to contain the fissile fuel. In order to maintain this type of pressure differential, a solid structure is usually needed, generally made out of a high temperature refractory metal. In a way this is a hybrid closed/open cycle gas core (even though the plasma isn’t in direct contact with the structure of the reactor itself), because the structure is key to generating the low pressure zone that maintains this plasma-body fuel element. This type of NTR has been the focus of Russian gas core research since the 1970s, and will be covered more in the future.

Spherical gas core diagram, image NASA

As I’m sure most of you have guessed, fuel containment is a very complex and difficult problem, and one that’s had many proposed solutions over the years (which we’ll cover in a future post). Most recent gas core NTR designs in the US are based on the spherical gas core. Here, the plasma is held in the center of the active zone using jets of propellant from all sides. This is generally called a porous wall gas core NTR, and while it takes advantage of any vortex stabilization that may occur in the fuel, it does not rely on it; in many ways, it’s a lot like an indoor skydiving arena with air jets blowing from all sides. This design, first proposed in the 1970s, uses high pressure propellant to contain the fuel in the reactor, and in many designs the flow can be adjusted to deal with the engine being under thrust, which pushes the fuel toward the nozzle in traditional configurations. Most designs suffer massive fuel erosion from the shear forces of the propellant flowing past the outside edge of the fuel body, but in some conceptual sketches this can be mitigated using non-traditional nozzle configurations which place a solid structure along the main thrust axis of the rocket. (More on that in a future post; I’m still trying to track down the sources to fully explain that pseudo-aerospike concept.)

Hybrid gas core diagram, Beveridge 2017

The most promising designs in terms of fuel loss rates minimize the amount of plasma required to maintain the reaction. This is what’s known as a hybrid solid-gas NTR, first proposed by Hyland in the 1970s, and also one of the designs most recently investigated by Lucas Beveridge. Here, the fissile fuel is split between two components: a high-temperature plasma provides the final heating of the propellant, but isn’t able to sustain fission on its own; instead, a shell of solid fuel encases the outside of the active zone of the reactor. This minimizes the amount of fuel that can be easily eroded while ensuring that a critical mass of fissile material is contained in the active region of the reactor. This really is less complicated than it sounds, but is difficult to summarize briefly without delving into the details of critical geometry, so I’ll try to explain it this way: the neutrons in the reactor see a high-density, low-temperature fuel region surrounding a low-density, high-temperature fuel region, with the coolant/moderator passing through the high density region and flowing around the low density one, making a complete reactor between the two parts while minimizing how much of the low density fuel is needed, and therefore minimizing the fuel loss. I wish I were able to make this clearer in less than a couple of pages, but sadly I’m not that good at summarizing in non-technical terms. I’ll try to do better in the hybrid core post coming in the future.

All of these designs suffer from massive fuel loss, leading to highly radioactive exhaust and incredibly inefficient engines which are absurdly expensive to operate due to the amount of highly enriched fissile fuel needed. (Because everything going into the reactor needs to fission as quickly as possible, every component of the fuel itself needs to undergo fission as easily as possible.) This is the major Achilles’ heel of this NTR type: despite its massive potential promise, the fuel loss and the radioactive plume coming off these reactors make them unusable with current engineering.

There’s going to be a lot more that I’m going to write about this type of NTR, and I skipped a lot of ideas, and variations on these ideas, so expect a lot more in the coming year on this subject.

Cooling the Reactor/Heating the Propellant

Finally there’s cooling, which usually comes in one of two varieties:

  1. cooling using the propellant, as in most NTR designs that we’ve seen, to reject all the heat from the reactor
  2. cooling in a closed loop, as is done in an NEP system
Hybrid gas core with secondary cooling diagram, Beveridge 2017

While the ideal situation is to reject all the heat into the propellant, which maximizes the thrust and minimizes the dry mass of the system, this is the exception in many of these systems rather than the norm. There are a couple of reasons for this: containing a fluid with fast-moving (or high pressure) hydrogen is challenging, because the gas wants to strip away any mass it comes in contact with (far more easily from a fluid than from a solid); H2 is insanely difficult to contain at almost any temperature; and these reactors are designed to achieve incredibly high temperatures, which can outstrip the heat rejection area the reactor designs allow.

Complicating the issue further, hydrogen is mostly transparent to the radiation that a nuclear reactor puts off (mostly in the hard UV/X/gamma spectrum), meaning that it takes a lot of hydrogen to reject the heat produced in the reactor (a common complaint in any gas-cooled reactor, to be fair), and that hydrogen doesn’t get heated that much on an atom-by-atom basis, all things considered.

There’s a way around this, though, which many designs use, from LARS on the liquid side to basically every gas core design I’ve ever seen: microparticle or vapor seeding. This is a form of hybrid propellant, which I mention on my NTR propellants page. Basically, a metal is ground incredibly fine (or vaporized) and then included in the propellant feed. This captures the short-wavelength photons (due to its higher atomic mass, and greater opacity at those wavelengths as a result), which are re-emitted at lower frequencies that are more easily absorbed by the propellant. While the US prefers tungsten microparticles in its designs, the USSR and Russia have also examined two other metal options: lithium and NaK vapor. These have the advantage of being lower mass, impacting the overall propellant mass less, and their insertion rates are far easier to control (although microparticles can act as fluidized materials due to their small size, and maintain suspension in the H2 propellant well). This is a subject that I’ll cover in more depth in the future gas core NTR post.

(Side note: I’ve NEVER seen data on non-hydrogen propellants in a liquid-fueled NTR. This problem would be somewhat ameliorated by using a higher atomic mass propellant, but which one is used would determine both how much more radiation would be directly absorbed and what kind of loss in specific impulse would accompany the substitution. Using other elements or molecules would also significantly change the neutronic structure and hydrodynamic behavior of the reactor, a subject I’ve never seen covered in any paper.)

Sadly, in many designs there simply isn’t the heat capacity to remove all of the reactor’s thermal energy through the propellant stream. Early gas core NTRs were especially notorious for this, with some only able to reject about 3% of the reactor’s thermal energy into the propellant. In order to prevent the reactor and pressure vessel from melting, external radiators were used – hence the large, arrowhead-shaped radiators on many gas core NTR designs.

This is unfortunate, since it directly affects the dry mass of the system, making it not only heavier but less power efficient overall. Fortunately, due to the high temperatures which need to be rejected, advanced high temperature radiators can be used (such as liquid droplet radiators, membrane radiators, or high temperature liquid metal radiators) which can reject more energy in less mass and surface area.
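
To put rough numbers on that last claim: radiated power follows the Stefan-Boltzmann law, P = εσAT⁴, so the radiator area required for a given heat load drops with the fourth power of temperature. Here’s a minimal Python sketch of that arithmetic (the 500 MW waste heat figure and the emissivity are illustrative values I’ve picked, not from any specific engine design):

```python
# Back-of-the-envelope radiator sizing using the Stefan-Boltzmann law.
# All numbers are illustrative, not taken from any specific design.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(waste_heat_w, temp_k, emissivity=0.9):
    """Radiating area needed to reject waste_heat_w at temperature temp_k."""
    return waste_heat_w / (emissivity * SIGMA * temp_k**4)

waste_heat = 500e6  # 500 MW of waste heat, just for illustration
for t in (1000, 1500, 2000):
    print(f"{t} K: {radiator_area(waste_heat, t):,.0f} m^2")
```

Doubling the radiator temperature from 1000 K to 2000 K cuts the required area sixteen-fold, which is exactly why the high temperature radiator concepts mentioned above are so attractive for these designs.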

Another option, one which I’ve never seen discussed before (with one exception), is the use of a bimodal system. If significant amounts of heat are coming off the reactor, then it may be worth using a power conversion system to convert some of that heat into electricity for an electric propulsion system to back up the pure thermal system. This is something that would have to be carefully considered, for a number of reasons:

  1. It increases the complexity of the system: power conversion system, power conditioning system, thrusters, and support subsystems for each must be added, and each needs extensive reliability testing.
  2. It will significantly increase the mass of the system, so either the thrust needs to be significantly increased or the overall thrust efficiency needs to offset the additional dry mass (depending on the desire for thrust or efficiency in the system).
    1. Knock-on mass increases will be extensive, with likely additions including an additional primary heat loop, larger radiators for heat rejection, main truss restructuring and brackets, additional radiation shielding for certain radiation-sensitive components, possible backup power conditioning and storage systems, and many other subsystem support structures.
  3. This concept has not been extensively studied; the only example that I’ve seen is the RD-600, which used a low power mode with an MHD generator through which the plasma passed directly in a closed loop (more on this system in the future); this is obviously not the same type of system being discussed here. The only other similar parallel is the Werka-type dusty plasma fission fragment rocket, which uses a helium-xenon Brayton turbine to provide about 100 kWe for housekeeping and system electrical power. However, that system rejected less than 1% of the total FFRE waste heat.
    1. The proper power conversion system needs to be selected, thruster selection is in a similar position, and other subsystems would need to go through similar selection and optimization processes. This is made more complex by the necessity of matching the PCS and thermal management to the reactor, which has not been finalized and is currently very inefficient in terms of fissile material use. If a heat engine is used, the quality of the rejected heat is reduced, meaning larger (and heavier) radiators are needed as well.

Fluid Fuels: Promises of Advanced Rockets, but Many Challenges to Overcome

As we’ve seen in this brief overview, fluid fueled NTR designs are remarkably diverse, with an incredible amount of research having been done over the decades on many aspects of this incredibly promising, but challenging, propulsion technology. From the chemically challenging liquid fuel NTR, with its several materials and propellant feed challenges and options, to the reliable vapor core, to the challenging but incredibly promising gas core NTR, the future of nuclear thermal propulsion is far more promising than the already-impressive solid core designs we’ve examined in the past.

Coming up on Beyond NERVA, we will examine each of these types in detail in a series of blog posts, and the information both in this post and future posts will be adapted into more-easily referenced web pages. Interspersed with this, I will be working on filling in details on the Rover series of engines and tests on the webpage, and we may also cover some additional solid core concepts that haven’t been covered yet, especially the pebble-bed designs, such as Timberwind and MITEE (the pebble-bed concept is also sometimes called a fluidized bed, since the fuel is able to move in relation to the other pellets in the fueled section of the reactor in many designs, so can be considered a hybrid system in some ways).

With the holiday season, life events, and the project which has kept me from working on here as much as I would have liked still wrapping up in the coming months, I can’t predict when the next post (the first of three on liquid fueled NTRs) will be published. However, I’ve already got seven pages written on that post, six on the next (bubblers), and six on the final post in that trilogy (radiator LNTR), with another four on vapor cores and about ten pages on the basic physics principles of gas core reactors (which are insanely complex), so hopefully these will be coming in the near future!

As ever, I look forward to your feedback, and follow me on Twitter, or join the Beyond NERVA Facebook page, for more content!

References

Since I’ll be covering all of this in more depth later, this is just a short list of references rather than the more extensive typical one:

Liquid fuels

“Analysis of Vaporization of Liquid Uranium, Metal, and Carbon Systems at 9000 and 10000 R,” Kaufman et al 1966 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19660025363.pdf

“A Technical Report on Conceptual Design Study of a Liquid Core Nuclear Rocket,” Nelson et al 1963 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19650026954.pdf

“Performance Potential of a Radiant Heat Transfer Liquid Core Nuclear Rocket Engine,” Ragsdale 1967 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19670030774.pdf

Vapor and Droplet Core

“Droplet Core Nuclear Reactor (DCNR),” Anghaie 1992 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19920001887.pdf

“Vapor Core Propulsion Reactors,” Diaz 1992 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19920001891.pdf

Gas Core

“Analytical Design and Performance Studies of the Nuclear Light Bulb Engine,” Rogers et al 1973 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730003969.pdf

“Open Cycle Gas Core Nuclear Rockets,” Ragsdale 1992 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19920001890.pdf

“A Study of the Potential Feasibility of a Hybrid-Fuel Open Cycle Gas Core Nuclear Thermal Rocket,” Beveridge 2017 https://etd.iri.isu.edu/ViewSpecimen.aspx?ID=439


ESA’s RTG Program: Breaking New Ground

Hello, and welcome back to Beyond NERVA! I would like to start by apologizing for the infrequency in posting recently. My wife is currently finishing her thesis in wildlife biology (multispecies occupancy, or the combination of statistical normalization of detection likelihood with the interactions between various species in an ecosystem to determine inter-species interactions and sensitivities), which has definitely made our household more hectic in the last few months. She should defend her thesis soon, and things will return to somewhat normal afterward, including a return to more frequent posting.

Today, we return to radioisotope thermoelectric generators (RTGs), which have once again been in the news. This time, the news is on a happier note than the passing of one of the pioneers in the field: the refining of a different type of fuel for RTGs, Americium 241. This work was done at the United Kingdom’s National Nuclear Laboratory Cumbria Lab by a team including personnel from the University of Leicester, and was announced on May 3rd. The material was isolated out of the UK’s stockpile of plutonium for nuclear warheads, which is something that we’ll look at more in the post itself.

A quick note on nomenclature and notation, since this is something that varies a bit: the way I learned to notate isotopes of an element (nuclei of a given element with different numbers of neutrons) is as follows, and is what I generally use. If the element is spelled out, the result is [Element name][atomic mass of the nucleus](isomer state, if applicable); if it’s abbreviated, it’s [atomic mass](isomer)[element symbol]. An isomer denotes the energetic state of the nucleus: if gamma rays are absorbed by the nucleus, then a nucleon can jump to a higher energy state, just as electrons do for longer-wavelength (and lower energy) photons. For this post isomers may not matter much, but they will come up with this element in the future. This means that Plutonium 238 becomes 238Pu, and Americium 242(m) becomes 242(m)Am – I chose that example because it’s a long-lived isomer that we will return to at some point due to its incredible usefulness in astronuclear design and reactor geometry advantages.

In this post, we’ll take a brief look at the fuels we currently use, as well as this “new” fuel, and one or two more that have been used in the past.

Radioisotope Power Source Fuels: What Are They, and How Do they Work?

Radioisotopes release energy in proportion to the instability of the isotope being used. This level of instability is a very complex issue, and how an isotope decays is largely determined by where it falls in relation to the Valley of Stability: the region of the Chart of the Nuclides where the number of protons and the number of neutrons balances the strong nuclear and electroweak forces in the nucleus. We aren’t going to go into this in too much detail, but the CEA did an excellent video on this topic, which is highly recommended viewing if you want more information on the mechanics of radioactive decay and its different manifestations:


For our purposes, the important things to consider about radioactive decay are how much energy is released in a given amount of time, and how easy that energy is to harness into kinetic energy (and therefore heat) while minimizing the amount of unharnessable radiation that could damage sensitive components in the spacecraft’s payload. The amount of energy released in a given time is inversely proportional to the half-life of the isotope: the shorter the half-life, the more energy is released in a given time, but as a consequence the fuel will produce useful energy levels for a shorter period of time.

How much of that energy is useful is determined by the decay chain of the isotope in question, which shows the potential decay mechanisms the isotope will go through, how much energy is released in each decay, and what the subsequent “daughter” isotopes are, along with their decay mechanisms as well. The overall energy release from a particular isotope has to take all of these daughters, granddaughters, etc. into consideration in order to calculate the total energy release, and how useful that energy is. If it’s mostly (or purely) alpha radiation, that’s ideal, since it’s easily converted to kinetic energy, and each decay releases a proportionally larger amount of energy on average due to the high relative mass of the alpha particle. Beta radiation is not as good, but is still easily shielded (which converts the energy into kinetic energy and therefore heat), and so while not as ideal for heating it is still acceptable.
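
This inverse relationship between half-life and power output is easy to make concrete: the specific thermal power of a pure alpha emitter is just the decay constant (ln 2 divided by the half-life) times the number of atoms per gram times the energy per decay. Here’s a quick Python sketch using commonly quoted half-lives and alpha energies (these values vary slightly between sources, and daughter contributions are ignored):

```python
import math

N_A = 6.022e23        # Avogadro's number, atoms/mol
MEV_TO_J = 1.602e-13  # joules per MeV
YEAR_S = 3.156e7      # seconds per year

def specific_power_w_per_g(half_life_yr, alpha_energy_mev, atomic_mass):
    """Thermal watts per gram of a pure alpha emitter (daughters ignored)."""
    decay_const = math.log(2) / (half_life_yr * YEAR_S)  # decays per atom per second
    atoms_per_gram = N_A / atomic_mass
    return decay_const * atoms_per_gram * alpha_energy_mev * MEV_TO_J

# Shorter half-life -> hotter fuel, but a shorter useful lifetime:
print(specific_power_w_per_g(0.379, 5.3, 210))    # 210Po (138 day half-life): ~140 W/g
print(specific_power_w_per_g(87.84, 5.59, 238))   # 238Pu: ~0.57 W/g
print(specific_power_w_per_g(432.2, 5.49, 241))   # 241Am: ~0.11 W/g
```

The ~0.57 W/g figure for 238Pu matches the value quoted later in this post, and the same arithmetic shows why short-lived isotopes like 210Po run very hot but fade within a year or two.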

For an RTG designer, the biggest problem (in this area, at least) is gamma radiation. Not only is it very hard to convert into useful energy, it can damage the payload and components of a spacecraft. This is because of the way gamma radiation interacts with matter: the wavelength is so short that it is absorbed most efficiently by very dense elements with a high number of nucleons (protons and neutrons), after which a nucleon jumps to a higher energy state (an isomer) and usually re-emits the photon at a lower energy. This process repeats until there isn’t enough energy left in the photon to do anything meaningful.

Diverted particle (blue) emitting a photon (green) through bremsstrahlung. Image Wikipedia

Unfortunately, radioactive decay isn’t the only way to produce gamma rays; you also have to deal with one of the key concepts in radiation shielding, one which always ties my fingers in knots whenever I try to type it (my tongue just says “nah, not even going to try”): bremsstrahlung. This is also known as “braking radiation,” and is closely related to the concept of cyclotron radiation. Basically, when a massive, high energy particle is diverted from its original course, two things happen: it slows down, and the energy it loses has to be conserved somehow. Elastic collisions between the particle and whatever it interacts with (in practice, an electromagnetic field) create kinetic energy (heat) if it hits a particle, and in every case (including magnetic containment of the particle) a photon is produced. This photon’s energy is proportional to the energy the particle originally had, the degree to which its direction is changed, and how much energy is lost through mechanisms other than elastic collisions with other nucleons – and the particle is never able to be fully slowed by elastic collisions alone, for very complex quantum mechanical reasons. The amount converted to bremsstrahlung is proportional to the initial velocity of the particle in the case we’re talking about (magnetic containment and diversion of radiation isn’t the subject of this blog post, after all). This means that if you have a high energy alpha emitter and a low energy alpha emitter, the high energy emitter will create more gamma radiation, making it more problematic in terms of the energy efficiency of a radioisotope used for heat production.

Another concern is the chemical form the radioisotope takes in the fuel element itself. This defines a number of things, including how well any heat produced is transmitted to where it’s useful (or, if the heat isn’t transferred efficiently enough, the thermal failure point of the fuel), what kind of nuclear interactions will occur within the fuel, the chemical impact of the fuel’s elemental composition changing through the radioactive decay that is the entire point of using these materials, how much the emitted particles are slowed within the fuel itself (and any subsequent bremsstrahlung), how much of the radiation coming from the radioisotope decay is shielded by the fuel itself, and a whole host of other characteristics that are critical to the real-world design of a radioisotope power source.

Now that we’ve (briefly) looked at how radioisotopes are used as power sources, let’s look at what options are available for fueling these systems, which ones have been used in the past, and which ones may be used in the future.

How Are Radioisotopes Used in Space?

The best known use of radioisotopes in space is the radioisotope thermoelectric generator, or RTG. RTGs are devices that use the thermoelectric effect, paired with a material containing an isotope going through radioactive decay, to produce electricity. While these are the most well known of the radioisotope power sources, they aren’t the only ones, and in fact they are built around an even more fundamental type of power source: the radioisotope heating unit, or RHU.

These RHUs are actually more common than RTGs, since often the challenge isn’t getting power to your systems but keeping them from freezing in the extreme cold of space (when there isn’t direct sunlight, in which case it’s extreme heat which is the problem). In that case, a small pellet of radioisotope is connected to a heat transfer mechanism, which could be simple thermal conduction through metal components or the use of heat pipes, which use a wick to transfer a working fluid from a heat source to a cold sink through the boiling and then condensation of the working fluid. Heat pipes have become well known in the astronuclear community thanks to the Kilopower reactor, but are common components in electronics and other systems, and have been used in many flight systems for decades.

RTGs use RHUs as well, as the source of the heat they convert to electricity. In fact, the design of a common heat source for both RTG and spacecraft thermal management requirements was a focus of NASA for years, resulting in the General Purpose Heat Source (GPHS) and its successor variations by the same name. This is important for efficiently and reliably manufacturing the radioisotope fuel needed for the wide variety of systems that NASA has used in its spacecraft over the decades. While RTGs are the focus of this technology, we aren’t interested here in either the power conversion system or the heat rejection system that make up two of the four main systems in an RTG, so we won’t delve into the details of those systems in particular. Rather, our focus is on the RHU radioisotope fuel itself, and the shielding requirements that this fuel mandates for both spacecraft and payload functionality (the other two major systems of an RTG). Because of this, for the rest of the post we will be discussing RHUs rather than RTGs for the most part (although mentions of RTG implications will be liberally scattered throughout).

Another potential use for radioisotopes in space is something that is rare in discussions of space systems, but common on Earth: as a source of well-characterized and well-understood radiation for both the analysis and the manufacture of materials and systems. Radioisotope tracking is common in everything from medicine to agriculture, and radioactive analysis is used on everything from ancient artifacts to modern currency. The ISS has hosted experiments that use mildly radioactive isotopes to analyze the growth of organisms and microbes in microgravity, for instance – a very common technique in agriculture for analyzing nutrient uptake in crops, and a variation of a technique used in nuclear medicine to analyze everything from circulatory flow to tumor growth. X-ray analysis of materials is also a common method in high-end manufacturing, and as groups such as Made in Space, Relativity Space and Tethers Unlimited continue exploring 3d printing and ISRU in microgravity and low gravity environments, this will be an invaluable tool. However, this is a VERY different subject, so this is where we leave biological analysis and technological development behind and move to the most common use of radioisotopes in space: providing heat to do some sort of work.

RHU Fuel: The Choices Available, and the Choices Made

Most RHUs, for space applications at least, are fueled with 238Pu, an isotope of plutonium that is not only unable to sustain fission, but in fairly minute quantities will render plutonium meant for nuclear weapons completely unusable. In the early days of the American (and possibly Soviet) nuclear weapons program, small amounts of this isotope were isolated from material meant for nuclear weapons, but as time went on and the irradiation process became more efficient for producing weapons material (a very different process from producing power), the percentage of 238Pu dropped from single digits to insignificant. By this time, though, the Mound Laboratories in Miamisburg, Ohio had become very interested in the material as a source of radioactive decay heat for a variety of uses, ranging from spacecraft to pacemakers (yes, pacemakers… they were absolutely safe unless the person was cremated, and the fact that the removal of said pacemakers couldn’t be guaranteed is what killed the program). This doesn’t mean that they didn’t investigate the rest of the isotopes we’ll be looking at as well, but much of their later focus was on 238Pu.

The advantages of 238Pu are significant: it undergoes only alpha decay, with an exceptionally small chance of spontaneous fission (a process that occurs in most, if not all, isotopes of uranium and heavier elements); it has a good middle-of-the-road half-life (87.84 years) for missions lasting the decades that most outer solar system probes require just to reach their target destination and provide useful science returns; it releases a good amount of energy with each alpha decay (5.59 MeV); and its daughter, 234U, is incredibly stable, with a half-life so long (245,500 years) that it basically won't undergo any significant further decay until long after the fuel is useless and the spacecraft has fulfilled its mission. The overall specific power of 238Pu is 0.57 W/g, which is one of the best power densities available among long-lived radioisotopes.
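To show where figures like 0.57 W/g come from, here's a quick back-of-the-envelope sketch of my own (not from any of the sources below) that derives specific decay heat from nothing but the half-life, decay energy, and atomic mass:

```python
import math

AVOGADRO = 6.022e23   # atoms per mole
MEV_TO_J = 1.602e-13  # joules per MeV
YEAR_S = 3.156e7      # seconds per year

def specific_power(half_life_yr, decay_energy_mev, atomic_mass):
    """Decay heat of a pure isotope in W/g: P = lambda * N * E."""
    decay_const = math.log(2) / (half_life_yr * YEAR_S)  # decays/s per atom
    atoms_per_gram = AVOGADRO / atomic_mass
    return decay_const * atoms_per_gram * decay_energy_mev * MEV_TO_J

print(f"238Pu: {specific_power(87.84, 5.59, 238):.2f} W/g")  # ~0.57
print(f"241Am: {specific_power(433.0, 5.64, 241):.2f} W/g")  # ~0.11
```

This is for the pure metal; real oxide fuel dilutes the heat somewhat, which is one reason published fuel-form figures run a little lower.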

Additionally, the fuel form used, PuO2, is incredibly chemically stable, and when a 238Pu atom becomes 234U, the two oxygen atoms easily reattach to the newly formed uranium to form UO2, the same (and similarly stable) fuel form used in most nuclear reactors (although, being 234U rather than 235U, it would be useless in a reactor). Finally, the fuel itself is largely self-shielding once encased in the usual iridium clad that protects it during handling and transport, meaning the minimal gamma radiation coming off the power source is mainly a concern as a noise source for instruments looking in the x-ray and gamma bands of the EM spectrum, rather than a cause of significant material or electronic degradation.

238Pu is generated by first making a target of 237Np, which is itself usually recovered from a precursor material irradiated in a nuclear reactor. This target is then exposed to the neutrons of a nuclear reactor for a set length of time, after which the 238Pu that has been bred in is chemically separated out. Due to the irradiation time and neutron energies involved, the final result is almost pure 238Pu (close enough for the DOE's and NASA's purposes, anyway), which can then be formed into ceramic pellets and encased in the clad material. The heat source is then mated to the system the spacecraft will use it for, usually just before spacecraft integration with the launch vehicle. Due to rigid and strict interpretations of radiation protection requirements, this is an incredibly complex and challenging process, but one that is done on a fairly regular basis. The supply of 238Pu has been a challenge from a bureaucratic perspective, but a recent shakeup of the American astronuclear industry, following a GAO report released last year, offers hope for sufficient supply for American flight systems.

This is far from the only radioisotope used in RHUs; in fact, it wasn't even the first. Many early US designs used strontium-90, as did many Soviet designs. The first RTG to fly, SNAP-3, used this fuel, as did multiple nautical navigation buoys, Soviet lighthouses along their northern coast, and many other systems. 90Sr has a half-life of 28.79 years, making it better suited to shorter-lived systems than 238Pu, but still long-lived enough to be a useful heat source. The disadvantage is that it decays via beta emission (with a 546 keV endpoint energy) to 90Y, and beta decay is less efficient at converting decay energy into heat. The chain doesn't stop there, though: 90Y has a half-life of only 2.66 days (!), ejecting a beta particle with a 2.28 MeV endpoint energy and leaving radiologically stable 90Zr, for a total release of 2.826 MeV across the two beta emissions. The resulting specific power of 0.9 W/g is attractive, but capturing it requires either shielding of sufficiently high thermal conductivity to convert the beta radiation into heat, or a power conversion system that isn't thermally based.

Finally, we come to the one that scares many, and has a horrible reputation in international politics: polonium-210. This is most famous for being used as a poison in the case of Alexander Litvinenko, a Russian defector who was poisoned with Po in his tea (with a whole lot of other people being exposed in the process); its effectiveness as a poison comes from its intense alpha radiotoxicity once ingested, not chemical toxicity. 210Po has an incredibly short half-life of only 138.38 days. This is only acceptable for short mission times, but the massive amount of heat generated is still incredibly attractive for designers looking at very high temperature applications. Designs such as radioisotope thermal rockets (where decay heat warms hydrogen or another propellant, much like an NTR driven by decay rather than a reactor), whose efficiency is defined purely by temperature, can gain significant advantages from this high decay power. 210Po decays via alpha emission directly into lead-206, a stable isotope, so there are no complexities from daughter products emitting additional radiation, and a 5.4 MeV alpha particle carries quite a lot of energy.

Other isotopes are also available, and there’s a fascinating table in one of my sources that shows the ESA decision process when it comes to radioisotope selection from 2012:


241Am: The Fuel in the News

Americium-241 is the big news, however, and the main focus of this post. 241Am is produced during irradiation of uranium fuel: successive neutron captures breed 241Pu, which is one of the isotopes that degrades the usefulness of weapons-grade 239Pu. 241Pu then decays, with a relatively short 14.4-year half-life, into 241Am, which is not useful as weapons-grade material but is still useful in nuclear reactors. According to Ed Pheil, the CEO of Elysium Industries, the Monju sodium-cooled fast reactor in Japan faced exactly this problem: the 241Pu in the reactor's fuel decayed before the reactor was restarted, causing a reactivity deficit and delaying startup.

241Am decays through a 5.64 MeV alpha emission into 237Np, which in turn undergoes a 4.95 MeV alpha decay with a half-life of 2.14 x 10^6 years, meaning the daughter is effectively radiologically stable. With its longer half-life of 433 years (compared to 88 years for 238Pu), 241Am won't put out as much energy at any given time as 238Pu fuel, but its power output declines far more slowly over the course of a mission, allowing a more steady power supply for the spacecraft. Comparing beginning-of-life power to 20-year end-of-life power, a 238Pu RTG loses about 15% of its output, compared to only about 3.5% for 241Am. This allows for more consistent power availability, and, for extremely long range or long duration missions, a greater total amount of power. However, the ESA design documentation does not examine this extreme longevity, a curious omission on first inspection. Two factors could explain it: mission component lifetime, which is influenced by many factors independent of the power supply, and the continuing high cost of maintaining a mission control team and science complement to support the probe.
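The ~15% vs ~3.5% comparison can be checked from the half-lives alone. A quick sketch (my own, counting only the decay of the fuel itself; real RTGs lose a bit more to thermocouple degradation, which presumably accounts for the slightly higher quoted figures):

```python
def power_fraction(half_life_yr, years):
    """Fraction of initial decay heat remaining after `years`."""
    return 0.5 ** (years / half_life_yr)

pu_loss = 1 - power_fraction(87.84, 20)   # ~14.6% lost to decay alone
am_loss = 1 - power_fraction(433.0, 20)   # ~3.2% lost to decay alone
print(f"238Pu: {pu_loss:.1%} lost, 241Am: {am_loss:.1%} lost")
```

Run out to a century, the same formula gives 238Pu only 45% of its initial output while 241Am still retains 85%, which is the whole case for the longer-lived fuel in one line of arithmetic.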

Depending on what power level is needed (more on that in the next section about the ESA RTG design), and how long the mission is, the longer half-life could make 241Am superior in terms of useful energy released compared to 238Pu, and is one of the reasons ESA started looking at 241Am as the main focus of their RTG efforts.

Why the focus on weapons material in the description of production methods? Because that's where the UK's National Nuclear Laboratory obtained the 241Am in the latest announcement. The NNL is responsible for producing all of the 241Am for Europe's RTG programs, but it doesn't HAVE to produce the material from weapons stockpiles.

Fuel cycle using civilian power reactors, Summerer 2012

The EU, unlike the US, reprocesses its spent fuel, using the recovered Pu to create mixed-oxide (MOX) fuel. If the Pu is chemically separated from the irradiated fuel pellets and then allowed to sit, the much shorter half-life of 241Pu relative to the other Pu isotopes means that the 241Am which grows in can later be chemically separated from the Pu destined for MOX. This could, in theory, provide a steady supply of 241Am for European space missions. Exactly how much 241Am reprocessing could yield is a complex question, and one that I haven't been able to explore sufficiently to answer well. Jaro Franta was kind enough to provide a pressurized water reactor spent fuel composition table, which provides a rough baseline:

However, MOX fuel generally undergoes higher burnup, and according to several experts the Pu is quickly integrated into fresh fuel as part of the reprocessing of spent fuel. This may be done to ensure weapons-usable material isn't lying around at La Hague, but it also prevents the decay time needed for the 241Am to grow in for separation. On top of that, as we see in 238Pu production, where materials are fabricated in one place, separated in another, and made into fuel in a third, La Hague and the NNL's Cumbria laboratory are not just in different locations but in different countries, and the separated Pu becomes more attractive as weapons material along the way. All of this makes using spent nuclear fuel for 241Am production an iffy proposition at best. However, according to Summerer and Stephenson (referenced in one of the papers below, though theirs is behind a paywall), the separation of 241Am from spent civilian fuel can be economical (I'm assuming due to the short half-life of 241Pu), so the problem appears to be systemic, not technical.

241Am is used as RHU fuel in the form of Am2O3, which offers very good chemical stability as well as reasonable thermal transfer properties (for an oxide). This fuel is encased in a "multilayer containment structure similar to that of the general-purpose heat source (GPHS) system," with thermal and structural trade-offs made to account for the different thermal profile and power level of the ESA RTG (which, as far as I can tell, doesn't have a catchy name like "GPHS-RTG" or "MMRTG" yet). One wrinkle: neptunium oxide most often takes the form of NpO2, so an oxygen deficit will develop in the fuel pellet over time as the 241Am decays. This is distinctly different from 238Pu fuel, where both the parent and the 234U daughter carry two oxygen atoms in their most common oxide forms. The implications are something I'm not able to fully answer, but there is a stoichiometric mismatch between the initial material, the partially decayed material, and the final, fully decayed state of the fuel element. I know just enough to know that this is far from ideal, and it will change a whole host of properties, from thermal conductivity to chemical reactivity with the clad, so there will be other (potentially insignificant) factors affecting fuel element life from the chemical point of view rather than the nuclear one.
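The scale of that stoichiometric mismatch is easy to put a number on. Am2O3 carries 1.5 oxygen atoms per metal atom while NpO2 wants 2, so the shortfall grows with the decayed fraction. A toy calculation of my own (assuming, purely for illustration, that every 237Np daughter would fully oxidize to NpO2):

```python
def oxygen_deficit(half_life_yr=433.0, years=100.0):
    """Oxygen atoms 'missing' per metal atom if every 237Np daughter
    formed NpO2 in a pellet that started as pure Am2O3."""
    decayed = 1 - 0.5 ** (years / half_life_yr)      # fraction Am -> Np
    o_available = 1.5                                # O/metal in Am2O3
    o_needed = (1 - decayed) * 1.5 + decayed * 2.0   # Am2O3 + NpO2 mix
    return o_needed - o_available

print(f"{oxygen_deficit():.3f} O per metal atom after a century")  # ~0.074
```

A deficit of a few hundredths of an oxygen atom per metal atom over a century sounds small, but off-stoichiometric oxides are exactly where thermal conductivity and clad-compatibility surprises tend to live.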

ESA’s RTG: the 241Am RTG

ESA has been interested in RTGs for basically its entire existence, but for reasons I haven't been able to determine with certainty, it has never significantly investigated 238Pu production for RTG fuel in all that time. Rather, the focus has been on 241Am. This comes with a trade-off: due to the longer half-life, less energy is available per unit mass at any given time (0.57 W/g for 238Pu vs 0.1 W/g for 241Am), but as previously noted there are a couple of thresholds beyond which the longer half-life becomes an advantage.

Mass to power design envelope, Ambrose et al 2013

The first advantage is in mission lifetime (assuming the radiochemistry situation is of minimal concern): the centuries-long half-life could allow unprecedented mission durations, where the power source's longevity matters more to mission design than its mass budget. The second appears when only a very small amount of power is needed, and this is the focus of ESA's RTG program. The GPHS used in American systems produces 250 Wt at the time of manufacture in no more than 1.44 kg. Sadly, it's very difficult to determine the ESA fuel element's mass and specific composition, so a direct comparison is currently impossible. Based on a 2012 presentation, two parallel options were being explored, with different specific power characteristics and different thermal conductivity (and thus thermal transport efficiency). One was CERMET fuel, with the oxide fuel encapsulated in an unspecified refractory metal and manufactured by spark plasma sintering. This is a technology we've examined in terms of fissile fuel rather than radioisotope fuel, but many of the advantages apply in both cases. In addition, the refractory metal provides internal bremsstrahlung shielding, potentially reducing the need for robust clad materials; however, the clad must still contain the radioisotope in the case of a launch failure, a mechanical strength requirement that may render the reduction in shielding requirements effectively moot. The second option paralleled American RHU design, using a multi-layered clad structure of refractory metals, carbon-based insulators, and carbon-carbon composites. This seems to be the architecture ultimately chosen, after a contract with Areva TA in France and other partners (which I have not been able to find documentation on; if you have that info, please post it in the comments below!).

Cross-section of 1st generation ESA RTG, Ambrosi et al 2013

ESA has two RTG designs that it has been discussing over the last two decades: an incredibly small one, producing only about 1 We, and a larger one in the 10-50 We range. The two share the same design philosophy, but different materials and design tradeoffs were required for each, so they have evolved in different directions.

BiTe thermocouple unit, Summerer 2012

The 10-50 We RTG combines the 241Am RHU and its composite clad with a surrounding bismuth telluride thermocouple structure. BiTe is very similar to lead telluride TE converters in many ways, but more efficient at lower operating temperatures. PbTe, developed and manufactured by Fraunhofer IPM in Germany, is also under investigation, but BiTe seems to be the current technological forerunner in ESA RTG development. The BiTe units appear to be commercially available, but sadly, given the large number of thermopiles (another name for TE converters) on the market, it isn't clear which is being used.

The 1-1.5 We RTG is one that doesn’t seem to be explored in depth in the currently published literature. This power output is useful for certain applications, such as powering an individual sensor, but is a much more niche application. Details on this design are very thin on the ground, though, so we will leave this design at that and move on to the experiment performed – again, very little is available on the specifics, but the experiment WAS described in previous papers.

Gen 2 ESA RTG, Ambrosi 2015

This design went through an evolution into the model seen in the public announcement video. This is called the Gen 2 Flight System Design, and likely was introduced sometime in the 2015 timeframe. At the same time, the Radioisotope Heating Unit itself went to TRL 3.

On the fuel element fabrication side, after 241Am was selected in 2010 two phases of isotope production occurred. The first was from 2011 to 2013, and the second from 2013 to 2016.

This was the first test of that batch of RTG fuel to produce electricity, which is a major achievement, for which the entire team should be congratulated.

The Experiment in the News: What is All the Fuss Actually About?

Sadly, there’s no publicly available information about the Am-fueled test in the news. The promotional video and press release from NNL/UL provided only two pieces of information: 241Am fuel was used, and they lit a light bulb. No details about how much 241Am, its fuel form or clad, the thermocouples used, the power requirements of the light bulb… this was meant for maximum viral distribution, not for conveying technical information.

The best way to look at the test is by looking at preceding tests. RTG design from the ground up takes time, and in the case of ESA and the NNL, this process has only been funded for the last ten years. They have had to pick a fuel type, are continuing to go through thermocouple selection, and will be working to finalize the design of their flight system.

Electrically heated breadboard experiment, Ambrosi 2012

In 2012, an electrically heated breadboard experiment was conducted. The test article used a cuboid form factor, rather than the more common cylindrical form, and was tested in a liquid nitrogen cooled vacuum chamber. It was designed around a theoretical Am2O3 fuel element which would provide 83 Wt of power to the thermocouple. This thermocouple, in the proposal phase, was either a commercially available BiTe or a bespoke PbTe unit, contained in an argon cover gas. The maximum electrical output was 5 We, which is less than the minimum size of the "normal" 10 We RTG design. It's possible that the experiment was carried out at 83 Wt / 5 We to allow maximum comparison between the electrically heated version and a future radioisotope powered version, but the lack of RI-powered experiment data prevents us from knowing whether this was the case, or whether the experiment was sized below 10 We (166 Wt?) due to manufacturing and fuel element design constraints.

The electrically heated design in the breadboard experiment demonstrated that in the 5-50 We power output range, a specific power of 2 We/kg is feasible. There are possibilities that this could be improved somewhat, but it provides a good baseline for the power range and the specific power that these units will provide. This paper (linked below, Development and Testing of Americium-241 Radioisotope Thermoelectric Generator: Concept Designs and Breadboard System, Ambrosi et al) notes that only at small power outputs can 241Am compete with 238Pu systems, but extended mission lifetime considerations were not addressed.
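The breadboard numbers hang together nicely if you run them backwards. A back-of-the-envelope sketch of my own (the scaling to a 10 We unit is my extrapolation, not an ESA figure):

```python
# Figures from the breadboard description above.
P_ELEC_TEST = 5.0    # We out of the breadboard
P_THERM_TEST = 83.0  # Wt into the thermocouple

efficiency = P_ELEC_TEST / P_THERM_TEST          # ~6% thermal-to-electric
thermal_needed = 10.0 / efficiency               # Wt for a 10 We unit
fuel_mass_kg = thermal_needed / 0.1 / 1000       # at 0.1 W/g for 241Am
system_mass_kg = 10.0 / 2.0                      # at 2 We/kg specific power

print(f"efficiency ~{efficiency:.1%}, {thermal_needed:.0f} Wt, "
      f"{fuel_mass_kg:.2f} kg fuel, {system_mass_kg:.0f} kg system")
```

That ~6% conversion efficiency immediately explains the "(166 Wt?)" guess above, and it puts the fuel at roughly a third of the 5 kg system mass implied by 2 We/kg, with clad, insulation, and the thermocouple stack making up the rest.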


ESA and the University of Leicester continue to look at expanding the use of RTGs in the future. As of 2015, the focus for the first flight design's thermocouples was on bismuth telluride TEGs. They are also looking into Stirling convertors, continuing the broader drive to move away from the low conversion efficiencies of purely thermoelectric systems.

It takes time to do the things that they’re attempting, there are few specialists in these areas (although the fundamental tasks aren’t difficult if you know what you’re doing, it takes a while to know the ins and outs of any sort of large-scale chemistry), and it requires a lot of research to verify that each step of the way is both safe and reliable without compromising efficiency.

I have reached out to Dr. Ambrosi at the University of Leicester for additional information about this test. If I hear anything from him, I will add the information about this particular test to the page on this RTG system, which should go up soon.

Conclusions: 241Am, Is It the RTG Fuel of the Future?

As with most things in astronuclear engineering, the choice of an RHU fuel is a very complex question, and one with no simple answer. 238Pu remains the preferred long-duration RTG fuel for space missions in terms of specific power, but its expense, and the infrastructure required for its fabrication and handling, present a high cost barrier for new entrants into the use of RHUs. For Europe, this barrier to entry was considered unacceptable, and it has kept them out of the RTG-flying community for the entirety of their history.

241Am, on the other hand, is available to nations that reprocess spent nuclear fuel, such as the signatories to the Euratom treaty (as well as the UK, which is voluntarily withdrawing from the treaty as part of Brexit… don't ask me why, it's not required), where reprocessing is normal practice. Bureaucratic roadblocks similar to those hampering significant US production of 238Pu can be seen in European production of 241Am, but the existence of significant reprocessing capability makes it theoretically far more available. 241Am is also available commercially in the US, meaning that in at least one country the regulatory barriers to possession, and therefore the cost, are significantly lower than for 238Pu.

This choice, however, comes at the cost of roughly halving the specific power available to RTG systems, at least as the design is historically described. Optimization calculations by ESA and its partners, primarily the University of Leicester, show that in the 5-50 We range of electrical output the impact on mission mass is minimal, and for very low power applications (1-1.5 We) 241Am is actually superior. From the European perspective, the increased availability and lower acquisition cost of sufficiently pure 241Am, and the way it integrates into current industrial capabilities, outweigh the engineering advantages of 238Pu for the organizations involved. Even so, this is an exciting development for deep space exploration nerds, and one whose importance can't be overstated.

While this was a fascinating experiment, and I will be trying to find more information, the significance of this experiment boils down to one thing: this is the first time, outside the US or Russia, that an RTG designed for spacecraft use produced electrical power. It opens up new mission opportunities for ESA, who have been hampered in deep space exploration by their lack of suitable RHU fuel, and offers hope for more missions, more science, and more discoveries in the future.

References

Isotope information for 241Am, Periodictable.com https://periodictable.com/Isotopes/095.241/index.html

Isotope information for 238Pu, Periodictable.com https://periodictable.com/Isotopes/094.238/index2.full.dm.prod.html

Isotope information for 90Sr, Periodictable.com https://periodictable.com/Isotopes/038.90/index2.full.dm.prod.html

Isotope information for 210Po, Periodictable.com https://periodictable.com/Isotopes/084.210/index2.full.dm.prod.html

241Am ESA RTG Design

Development and Testing of Americium-241 Radioisotope Thermoelectric Generator: Concept Designs and Breadboard System; Ambrosi et al 2012 https://www.lpi.usra.edu/meetings/nets2012/pdf/3043.pdf

Americium-241 Radioisotope Thermoelectric Generator Development for Space Applications, Ambrosi et al 2013 https://inis.iaea.org/collection/NCLCollectionStore/_Public/45/066/45066049.pdf

Nuclear Power Sources for Space Applications – a key enabling technology (slideshow), Summerer et al, ESA 2012 https://www.euronuclear.org/events/enc/enc2012/presentations/L-Summerer.pdf

Space Nuclear Power Systems: Update on Activities and Programmes in the UK, Ambrosi (University of Leicester) and Tinsley (National Nuclear Laboratory), 2015 http://www.unoosa.org/pdf/pres/stsc2015/tech-15E.pdf


Topaz International part II: The Transition to Collaboration


Hello, and welcome back to Beyond NERVA! Before we begin, I would like to announce that our Patreon page, at https://www.patreon.com/beyondnerva, is live! This blog consumes a considerable amount of my time, and being able to pay my bills is of critical importance to me. If you are able to support me, please consider doing so. The reward tiers are still very much up for discussion with my Patrons due to the early stage of this part of the Beyond NERVA ecosystem, but I can only promise that I will do everything I can to make it worth your support! Every dollar counts, both in terms of the financial and motivational support!

Today, we continue our look at the collaboration between the US and the USSR/Russia involving the Enisy reactor: Topaz International. Today, we’ll focus on the transfer from the USSR (which became Russia during this process) to the US, which was far more drama-ridden than I ever realized, as well as the management and bureaucratic challenges and amusements that occurred during the testing. Our next post will look at the testing program that occurred in the US, and the changes to the design once the US got involved. The final post will overview the plans for missions involving the reactors, and the aftermath of the Topaz International Program, as well as the recent history of the Enisy reactor.

For clarification: In this blog post (and the next one), the reactor will mostly be referred to as Topaz-II, however it’s the same as the Enisy (Yenisey is another common spelling) reactor discussed in the last post. Some modifications were made by the Americans over the course of the program, which will be covered in the next post, but the basic reactor architecture is the same.

When we left off, we had looked at the testing history within the USSR. The entry of the US into the list of customers for the Enisy reactor has some conflicting accounts: according to one document (Topaz-II Design History, Voss, linked in the references), the USSR approached a private (unnamed) US company in 1980, which did not purchase the reactor but instead forwarded the offer up the chain in the US, though this account has very few details beyond that; according to another paper (US-Russian Cooperation… TIP, Dabrowski 2013, also linked), the exchange grew out of frustration within the Department of Defense over the development of the SP-100 reactor for the Strategic Defense Initiative. We'll follow the second, more fleshed-out narrative of the start of the Topaz International Program as the beginning of the official exchange of technology between the USSR (and soon after, Russia) and the US.

The Topaz International Program (TIP) was the final name for a number of programs that ended up coming under the same umbrella: the Thermionic System Evaluation Test (TSET) program, the Nuclear Electric Propulsion Space Test Program (NEPSTP), and some additional materials testing as part of the Thermionic Fuel Element Verification Program (TFEVP). We’ll look at the beginnings of the overall collaboration in this post, with the details of TSET, NEPSTP, TFEVP, the potential lunar base applications, and the aftermath of the Topaz International Program, in the next post.

Let’s start, though, with the official beginnings of the TIP, and the challenges involved in bringing the test articles, reactors, and test stands to the US in one of the most politically complex times in modern history. One thing to note here: this was most decidedly not the US just buying a set of test beds, reactor prototypes, and flight units (all unfueled), this was a true international technical exchange. Both the American and Soviet (later Russian) organizations involved on all levels were true collaborators in this program, with the Russian head of the program, Academician Nikolay Nikolayvich Ponomarev-Stepnoy, still being highly appreciative of the effort put into the program by his American counterparts as late as this decade, when he was still working to launch the reactor that resulted from the TIP – because it’s still not only an engineering masterpiece, but could perform a very useful role in space exploration even today.

The Beginnings of the Topaz International Program

While the US had invested in the development of thermionic power conversion systems in the 1960s, the funding cuts in the 1970s that affected so many astronuclear programs also bit into the thermionic power conversion programs, leading to their cancellation or diminution to the point of being insignificant. There were several programs run investigating this technology, but we won’t address them in this post, which is already going to run longer than typical even for this blog! An excellent resource for these programs, though, is Thermionics Quo Vadis by the Defense Threat Reduction Agency, available in PDF here: https://www.nap.edu/catalog/10254/thermionics-quo-vadis-an-assessment-of-the-dtras-advanced-thermionics (paywall warning).

Our story begins in detail in 1988. The US was at the time heavily invested in the Strategic Defense Initiative (SDI), whose main in-space nuclear power supply was to be the SP-100 reactor system (another reactor that we'll be covering in a Forgotten Reactors post or two). However, the SP-100 was growing in both cost and development time, leading certain key players in the decision-making process, including Richard Verga of the Strategic Defense Initiative Organization (SDIO), the organizational lynchpin of the SDI, to look elsewhere, either to meet the specific power needs of SDI or to find a fission power source that could serve as a test-bed for the SDI's technologies.

Investigations into the technological development of all other nations’ astronuclear capabilities led Dr. Verga to realize that the most advanced designs were those of the USSR, who had just launched the two TOPOL-powered Plasma-A satellites. This led him to invite a team of Soviet space nuclear power program personnel to the Eighth Albuquerque Space Nuclear Power Symposium (the predecessor to today’s Nuclear and Emerging Technologies for Space, or NETS, conference, which just wrapped up recently at the time of this writing) in January of 1991. The invitation was accepted, and they brought a mockup of the TOPAZ. The night after their presentation, Academician Nikolay Nicolayvich Ponomarev-Stepnoy, the Russian head of the Topol program, along with his team of visiting academicians, met with Joe Wetch, the head of Space Power Incorporated (SPI, a company made up mostly of SNAP veterans working to make space fission power plants a reality), and they came to a general understanding: the US should buy this reactor from the USSR – assuming they could get both governments to agree to the sale. The terms of this “sale” would take significant political and bureaucratic wrangling, as we’ll see, and sadly the problems started less than a week later, thanks to their generosity in bringing a mockup of the Topaz reactor with them. While the researchers were warmly welcomed, and they themselves seemed to enjoy their time at the conference, when it came time to leave a significant bureaucratic hurdle was placed in their path.

Soviet researchers at Space Nuclear Power Symposium, 1991, image Dabrowski

This mockup, and the headaches surrounding taking it back home with the researchers, were a harbinger of things to come. Though the mockup was non-functional, the Nuclear Regulatory Commission claimed that, since it could theoretically be modified to be functional (a claim which I haven't found any evidence for, though it is theoretically possible), it was a "nuclear utilization facility" which could not be shipped outside the US. Five months later, with the direct intervention of numerous elected officials, including US Senator Pete Domenici, the mockup was finally returned to Russia. This decision by the NRC led to a different approach to importing further reactors from the USSR and Russia when the time came. Whatever damage the incident caused to the newly-minted (and hopeful) partnership was largely weathered thanks to the interpersonal relationships that had been developed in Albuquerque.

Teams of US researchers (including Susan Voss, who was the major source for the last post) traveled to the USSR to inspect the facilities used to build the Enisy (Yenisey is another common spelling; the reactor was named after the river in Siberia). These visits started in Moscow, with Drs. Wetch and Britt of SPI, when a revelation came to the American astronuclear establishment: there wasn't one thermionic reactor program in the USSR, but two, and the more promising of the pair was available for potential export and sale!

These visits continued, and personal relationships between the team members from both sides of the Iron Curtain grew. Due to headaches and bureaucratic difficulties in getting technical documentation translated effectively in the timeframe the program required, it was often these interpersonal relationships that allowed the US team to understand the necessary technical details of the reactor and its components. The US team also visited many of the testing and manufacturing locations used in the production and development of the Enisy reactor (if you haven’t read it yet, check out the first blog post on the Enisy for an overview of how closely these were linked), as well as observing testing of these systems in Russia. This is also the time when the term “Topaz-II” was coined by one of the American team members, to differentiate the reactor from the original Topol (known in the West as Topaz, and covered in our first blog post on Soviet astronuclear history) in the minds of the largely uninformed Western academic circles.

The seeds of the first cross-Iron Curtain technical collaboration on astronuclear systems development, planted in Albuquerque, were germinating in Russian soil.

The Business of Intergovernmental Astronuclear Development

During this time, two companies were founded to provide an administrative touchstone for various points in the technology transfer program. This was due to bureaucratic headaches on both the US and USSR sides; I’ve never found any information showing that the two teams themselves felt there were problems in the technological exchange. Rather, the problems all seem to have been political and bureaucratic in nature, and to have come exclusively from outside the framework of what would become known as the Topaz International Program.

The first was International Scientific Products (ISP), founded in 1989 specifically to facilitate the purchase of the reactors for the US, and which worked closely with the SDIO. This company was the private lubricant that allowed the US government to purchase these reactor systems (for reasons too complex to get into in this blog post), and it gave a legal means to transmit non-classified data from the USSR to the US, and vice versa. The two main players in ISP were Drs Wetch and Britt, who also appear to have been the main administrative driving force behind the visits. After each visit, the two would meet with Dr. Verga, who remained intimately involved and kept his management at SDIO consistently briefed on the progress of the technical exchange and the eventual purchase of the reactors.

The second was the International Nuclear Energy Research and Technology corporation, known as INTERTEK. This was a joint US-USSR company, involving the staff of ISP as well as individuals from all of the Soviet design bureaus, manufacturing centers (except possibly the one in Tallinn, though I haven’t been able to confirm this due to the extreme loss of documentation from that facility following the collapse of the USSR), and research institutions that we saw in the last post. These included the Kurchatov Institute of Atomic Energy (headed by Academician and Director Ponomarev-Stepnoy, the head of the Russian portion of the Topaz International Program), the Scientific Industrial Association “LUCH” (represented by Deputy Director Yuri Nikolayev), the Central Design Bureau for Machine Building (represented by Director Vladimir Nikitin), and the Keldysh Institute of Rocket Research (represented by Director Academician Anatoly Koroteev). INTERTEK was the vehicle by which the technology, and (more importantly to the bureaucrats) the hardware, would be exported from the USSR to the US. Academician Ponomarev-Stepnoy was the director of the company, and Dr Wetch was his deputy. Due to the sensitive nature of the company’s focus, it required approval from the Ministry of Atomic Energy (Minatom) in Moscow, which was finally granted in December 1990.

In order to gain this approval, the US had to agree to a number of demands from Minatom. These included requirements that the Topaz-II reactors be returned to Russia after testing, and that the reactors not be used for military purposes. Dr. Verga insisted on additional international cooperation, including staff from the UK and France. This was not only a cost-saving measure, but also reinforced the international and transparent nature of the program, and made military use more challenging.

While this was occurring, the Americans insisted that the non-nuclear testing of the reactors be duplicated in the US, to ensure they met American safety and design criteria. This was a major sticking point for Minatom, and delayed the approval of the export for months, but the Americans did not slow their preparations for building a test facility. It was decided to test the reactors in Albuquerque, NM, due to the concentration of space nuclear power research resources in New Mexico (Los Alamos and Sandia National Laboratories, the US Air Force Phillips Laboratory, and the University of New Mexico’s New Mexico Engineering Research Institute, or NMERI), as well as the presence of the powerful Republican senator Pete Domenici (all of the labs were in his home state) to smooth political feathers in Washington, DC. The USAF purchased an empty building from the NMERI, and hired personnel from UNM to handle the human resources side of things. The selection of UNM emphasized the transparent, exploratory nature of the program, an absolute requirement for Minatom, and the university had considerable organizational flexibility compared to either the USAF or the DOE. According to the contract manager, Tim Stepetic:

“The University was very cooperative and accommodating… UNM allowed me to open checking accounts to provide responsive payments for the support requirements of the INTERTEK and LUCH contracts – I don’t think they’ve ever permitted such checkbook arrangements either before or since…”

These freedoms were necessary to work with the Russian team members, who were in culture shock and dealing with very different organizational constraints than their American counterparts. As has been observed both before and since, the Russian scientists and technicians preferred to save as much of their per diem (generous by their standards) as possible for after the project, when the money would go much further back home. Local travel expenses were covered as well. One of the technicians, who had to return to Russia for his son’s brain tumor operation, was asked by the surgeon to bring back some Tylenol, a request his American colleagues granted quickly, if with some bemusement. In addition, personal calls (of a limited nature, due to international calling rates at the time) were allowed so the scientists and technicians could keep in touch with their families and reduce their homesickness.

As should surprise no one, the highly unusual nature of this financial arrangement, as well as the large amount of money involved (which ended up coming to about $400,000 in 1990s dollars), meant that a routine audit led to the General Accounting Office being called in to investigate. Fortunately, no significant irregularities in the financial dealings of the NMERI were found, and the program continued. Additionally, the reuse of over $500,000 worth of equipment scrounged from SNL and LANL’s junk yards allowed for incredible cost savings in the program.

With the business side of the testing underway, it was time to begin preparing for the testing of the reactors in the US, beginning with the conversion of an empty building into a non-nuclear test facility. The conversion, with Frank Thome heading the facilities modification side and Scott Wold as the TSET training manager, began in April of 1991, only four months after Minatom’s approval of INTERTEK. Over the course of the next year the facility was prepared for testing, and it would be completed just before the delivery of the first shipment of reactors and equipment from Russia.

By this point, the test program had grown to include two programs. The first was the Thermionic Systems Evaluation Test (TSET), which would study mechanical, thermophysical, and chemical properties of the reactors to verify the data collected in Russia. This was to flight-qualify the reactors for American space mission use, and establish the collaboration of the various international participants in the Topaz International Program.

The second program was the Nuclear Electric Propulsion Space Test Program (NEPSTP). Run by the Johns Hopkins Applied Physics Laboratory, and funded by the SDIO (later the Ballistic Missile Defense Organization), it proposed an experimental spacecraft that would use a set of six different electric thrusters, as well as equipment to monitor the environmental effects of both the thrusters and the reactor during operation. Design work for the spacecraft began almost immediately after the TSET program began, and the program was of interest to both the American and Russian parts of the team.

Later, one final program would be added: the Thermionic Fuel Element Verification Program (TFEVP). This program, which predated TIP, is where many of the UK and French researchers were involved, and focused on increasing the lifetime of the thermionic fuel elements from one year (the best US estimate before TSET) to at least three, and preferably seven, years. This would be achieved through better knowledge of materials properties, as well as improved manufacturing methods.

Finally, there were smaller programs attached to the big three, looking at materials effects in intense radiation and plasma environments, long-term contact with cesium vapor, chemical reactions within the hardware itself, and the surface electrical properties of various ceramics. These tests, while not the primary focus of the program, would contribute to the understanding of the environment an astronuclear spacecraft would experience, and would significantly affect future spacecraft designs. They would occur in the same building as the TSET testing, and the teams involved would frequently collaborate on all projects, leading to a very well-integrated and collegial atmosphere.

Reactor Shipment: A Funny Little Thing Occurred in Russia

While all of this was going on in the Topaz International Program, major changes were happening throughout the USSR: it was falling apart. From the uprisings in Latvia and Lithuania (violently put down by the Soviet military), to the fall of the Berlin Wall, to the ultimate lowering of the hammer and sickle from the Kremlin in December 1991 and its replacement with the tricolor of the Russian Federation, the fall of the Iron Curtain was accelerating. The TIP teams continued to work on their program, knowing that it offered hope for the Topaz-II project as well as a vehicle for closer technological collaboration with their former adversaries, but the complications would rear their heads in this small group as well.

The American purchase of the Topaz reactors was approved by President George H.W. Bush on 27 March, 1992 during a meeting with his Secretary of State, James Baker, and Secretary of Defense Richard Cheney. This freed the American side of the collaboration to do what needed to be done to make the program happen, as well as to begin bringing in Russian specialists for test facility preparations.

Trinity site obelisk

The first group of 14 Russian scientists and technicians to arrive in the US for the TSET program landed on April 3, 1992, but only got to sleep for a few hours before being woken up by their American hosts (who brought their families along) for a long van journey. This was something the Russians greatly appreciated, because April 4 is a special day in one small part of the world: it’s one of only two days of the year that the Trinity Site, the location of the first nuclear explosion in history, is open to the public. According to one of them, Georgiy Kompaniets:

“It was like for a picnic! And at the entrance to the site there were souvenir vendors selling t-shirts with bombs and rocks supposedly at the epicenter of the blast…” (note: no trinitite is allowed to be collected at the Trinity site anymore, and according to some interpretations of federal law it is considered low-level radioactive waste from weapons production)

The Russians were a hit at the Trinity site, being the center of attention from those there, and were interviewed for television. They even got to tour the McDonald ranch house, where the Gadget was assembled and the blast was initiated. This made a huge impression on the visiting Russians, and did wonders in cementing the team’s culture.

Hot air balloon in New Mexico, open source

Another cultural exchange that occurred later (exactly when, I’m not sure) was the chance to ride in a hot air balloon. Albuquerque’s International Balloon Fiesta is the largest hot air ballooning event in the world, and whenever atmospheric conditions are right a half dozen or more balloons can be seen floating over the city. A local ballooning club, having heard about the Russian scientists and technicians (who had become minor local celebrities at this point), offered them a free hot air balloon ride. It was an offer the Russians universally accepted, since none of them had ever flown in one.

According to Boris Steppenov:

“The greatest difficulty, it seemed, was landing. And it was absolutely forbidden to touch down on the reservations belonging to the Native Americans, as this would be seen as an attack on their land and an affront to their ancestors…

“[after the flight] there were speeches, there were oaths, there was baptism with champagne, and many other rituals. A memory for an entire life!”

The balloon that Steppenov flew in did indeed land on the Sandia Pueblo Reservation, but before touchdown the tribal police were notified, and they showed up to the landing site, issued a ticket to the ballooning company, and allowed them to pack up and leave.

These events, as well as other uniquely New Mexican experiences, cemented the TIP team into a group of lifelong friends, and would reinforce the willingness of everyone to work together as much as possible to make TIP as much of a success as it could be.

C-141 taking off, image DOD

In late April, 1992, a team of US military personnel (led by Army Major Fred Tarantino of SDIO, with AF Major Dan Mulder in charge of logistics), including a USAF Airlift Control Element Team, landed in St. Petersburg on a C-141 and a C-130, carrying the equipment needed to properly secure the test equipment and reactors that would be flown to the US. Overflight permissions were secured, and special packing cases, especially for the very delicate tungsten TISA heaters, were prepared. These preparations were complicated by the lack of effective packing materials for the heaters, until Dr. Britt of both ISP and INTERTEK had the idea of using foam bedding pads from a furniture store. Due to the large size and weight of the equipment, though, the C-141 and C-130 were not sufficient for the airlift, so the teams had to wait on the larger C-5 Galaxy transports intended for this task, which were en route from the US at the time.

Sadly, when the time came for the export licenses to be given to the customs officer, he refused to honor them – because they were Soviet documents, and the Soviet Union no longer existed. This led Academician Ponomarev-Stepnoy and INTERTEK’s director, Benjamin Usov, to travel to Moscow on April 27 to meet with the Chairman of the Government, Alexander Shokhin, to get new export licenses. After consulting with the Minister of Foreign Economic Relations, Sergei Glazev, a one-time, urgent export license was issued for the shipment to the US. This was then sent via fast courier to St. Petersburg on May 1.

C-5 Galaxy, image USAF

The C-5s, though, weren’t in Russia yet. Once they landed, a complex paperwork ballet had to be carried out to get the reactors and test equipment to America. First, the reactors were purchased by INTERTEK from the Russian bureaus responsible for the various components. Then INTERTEK sold the reactors and equipment to Dr. Britt of ISP once the equipment was loaded onto the C-5, and Dr. Britt immediately resold the equipment to the US government. This avoided the import issues that would have arisen on the US side if the equipment had been imported by ISP, a private company, or INTERTEK, a Russian-led international consortium.

One of them landed in St. Petersburg on May 6, was loaded with the two Topaz-II reactors (V-71 and Ya-21U) and as much equipment as could fit in the aircraft, and left the same day, arriving in Albuquerque on May 7. The other developed maintenance problems and was forced to wait in England for five days, finally arriving in St. Petersburg on May 8. The rest of the equipment was loaded up (including the Baikal vacuum chamber), and the plane left later that day. Sadly, it ran into difficulties again upon reaching England, and was forced to wait two more days for repairs, arriving in Albuquerque on May 12.

Preparations for Testing: Two Worlds Coming Together

Unpacking and beryllium checks at TSET Facility in Albuquerque, Image DOE/NASA

Once the equipment was in the US, detailed examination of the payload was required due to the beryllium used in the reflectors and control drums of the reactor. Berylliosis, a serious lung disease caused by inhaling beryllium dust, is a health hazard that the DOE takes incredibly seriously (they’ll evacuate an entire building at the slightest possibility that beryllium dust could be present, at the cost of millions of dollars on occasion). Detailed checks were performed both before the equipment was removed from the aircraft and during the unpackaging of the reactors. No beryllium dust was detected, however, and the program continued with minimal disruption.

Then it came time to unbox the equipment, but another problem arose: this required the approval of the director of the Central Design Bureau of Heavy Machine Building, Vladimir Nikitin, who was in Moscow. Rather than call him for approval on each decision, Dr Britt called and got approval for Valery Sinkevych, the Albuquerque representative for INTERTEK, to have discretionary control over these sorts of decisions. The approval was given, greatly smoothing the process of both setup and testing during TIP.

Sinkevych, Scott Wold, and Glen Schmidt worked closely together in the management of the project. All three were on hand to answer questions, smooth out difficulties, and handle other challenges in the testing process, to the point that the Russians began calling Schmidt “The Walking Stick.” His response was classic: “That’s my style: management by walking around.”

Soviet technicians at TSET Test Facility, image Dabrowski

Every day, Schmidt would hold a lab-wide meeting, ensuring everyone was present, before walking everyone through the procedures that needed to be completed for the day, as well as ensuring that everyone had the resources they needed to complete their tasks. He also made sure he was aware of any upcoming issues, and worked to resolve them (mostly through Wetch and Britt) before they became a problem for the facility preparations. This was a revelation to the Russian team, who despite working on the program (in Russia) for years, often didn’t know anything other than the component that they worked on. This synthesis of knowledge would continue throughout the program, leading to a far better-integrated and more broadly knowledgeable team.

Initial estimates were that it would take 9 months to prepare the facility and equipment for testing of the reactors. Due to both the well-integrated team and the more relaxed management structure of the American effort, the work was completed in only 6½ months. According to Sinkevych:

“The trust that was formed between the Russian and American side allowed us in an unusually short time to complete the assembly of the complex and demonstrate its capabilities.”

This was so incredible to Schmidt that he went to Wetch and Britt, asking for a bonus for the Russians due to their exceptional work. This was approved, and the bonuses were paid in proportion to technical assignment, duration, and quality of workmanship. This was yet another culture shock for the Russian team, who had never received a bonus before. The response was twofold: great appreciation, and also “if we continue to save time, do we get another bonus?” The answer was a qualified “perhaps,” and indeed one more, smaller bonus was paid due to later time savings.

Installation of Topaz-II reactor at TSET Facility, image DOE/NASA

Mid-Testing Drama, and the Second Shipment

Both in the US and Russia, there were many questions about whether this program was even possible. The reason for its success, though, is unequivocally that it was a true partnership between the American and Russian parts of TIP. This was the first Russian-US government-to-government cooperative program after the fall of the USSR. Unlike the Nunn-Lugar agreement afterward, TIP was always intended to be a true technological exchange, not an assistance program, which is one of the main reasons why the participants of TIP still look fondly and respectfully at the project, while most Russian (and other former Soviet) participants in N-L consider it to be demeaning, condescending, and not something to ever be repeated again. More than this, though, the Russian design philosophy that allowed full-system, non-nuclear testing of the Topaz-II permanently changed American astronuclear design philosophy, and left its mark on every subsequent astronuclear design.

However, not all organizations in the US saw it this way. Thome and Mulder provided excellent bureaucratic cover for the testing program, preventing the majority of the politics of government work from trickling down to the management of the test itself. However, as Scott Wold, the TSET training manager, pointed out, they would still get letters from outside organizations stating:

“[after careful consideration] they had concluded that an experiment we proposed to do wouldn’t be possible and that we should just stop all work on the project as it was obviously a waste of time. Our typical response was to provide them with the results of the experiment we had just wrapped up.”

As mentioned, this sort of thing was not uncommon, but it was only a minor annoyance. In fact, if anything it cemented the practicality of collaborations of this nature, and over time the friction the program faced was reduced through proof of capabilities. Other headaches would arise, but overall they were relatively minor.

Sadly, one of the programs, NEPSTP, was canceled out from under the team near the completion of the spacecraft. The new Clinton administration was not nearly as open to the use of nuclear power as the Bush administration had been (to put it mildly), and as such the program ended in 1993.

One type of drama that was avoided surrounded the second shipment of four more Topaz-II reactors from Russia to the US: the Eh-40, Eh-41, Eh-43, and Eh-44. The use of these designations directly contradicts the earlier-specified prefixes for Soviet determinations of capabilities (the systems were built, then assessed for mechanical, thermal, and nuclear suitability after construction; for more on this see our first Enisy post here). The Eh-40 was a thermal-hydraulic mockup, with a functioning NaK heat rejection system, for “cold-test” verification of the thermal covers during integration, launch, and orbital injection. The Eh-41 was a structural mockup for mechanical testing, and for demonstrating the mechanical integrity of the anticriticality device (more on that in the next post), the modified thermal cover, and American launch vehicle integration. The Eh-43 and -44 were potential flight systems, which would undergo modal testing, charging of the NaK coolant system, fuel loading and criticality testing, mechanical vibration, shock, and acoustic tests, 1000-hour thermal vacuum steady-state stability and NaK system integrity tests, and more before launch.

An-124, image Wikimedia

How was drama avoided in this case? The previous shipment was done by the US Air Force, which has many regulations covering the transport of any cargo, much less flight-capable nuclear reactors containing several toxic substances. This led to delays in approval the first time this shipment method was used. The second time, in 1994, INTERTEK and ISP contracted a private Russian cargo company, Volga-Dnepr Airlines, which used an An-124 to fly these four reactors from St. Petersburg to Albuquerque.

For me personally, this was a very special event, because I was there. My dad got me out of school (I wasn’t even a teenager yet), drove me out to the landing strip fence at Kirtland AFB, and we watched with about 40 other people as this incredible aircraft landed. He told me about the shipment, and why they were bringing it in, and the seed of my astronuclear obsession was planted.

No beryllium dust was found in this shipment, and the reactors were prepared for testing. Additional thermophysical testing, as well as design work for modifications needed to get the reactors flight-qualified and able to be integrated with the American launchers, were conducted on these reactors. These tests and changes will be the subject of the next blog post, as well as the missions that were proposed for the reactors.

These tests would continue until 1995, and the end of testing in Albuquerque. All reactors were packed up, and returned to Russia per the agreement between INTERTEK and Minatom. The Enisy would continue to be developed in Russia until at least 2007.

More Coming Soon!

The story of the Topaz International Program is far from over. The testing in the US, as well as the programs that the US/Russian team had planned have not even been touched on yet besides very cursory mentions. These programs, as well as the end of the Topaz International Program and the possible future of the Enisy reactor, are the focus of our next blog post, the final one in this series.

This program provided a foundation, as well as a harbinger of challenges to come, in international astronuclear collaboration. As such, I feel that it is a very valuable subject to spend a significant amount of time on.

I hope to have the next post out in about a week and a half to two weeks, but the amount of research necessary for this series has definitely surprised me. The few documents available that fill in the gaps are, sadly, behind paywalls that I can’t afford to breach at my current funding availability.

As such, I ask, once again, that you support me on Patreon. You can find my page at https://www.patreon.com/beyondnerva. Every dollar counts!

References:

US-Russian Cooperation in Science and Technology: A Case Study of the TOPAZ Space-Based Nuclear Reactor International Program, Dabrowski 2013 https://www.researchgate.net/profile/Richard_Dabrowski/publication/266516447_US-Russian_Cooperation_in_Science_and_Technology_A_Case_Study_of_the_TOPAZ_Space-Based_Nuclear_Reactor_International_Program/links/5433d1e80cf2bf1f1f2634b8/US-Russian-Cooperation-in-Science-and-Technology-A-Case-Study-of-the-TOPAZ-Space-Based-Nuclear-Reactor-International-Program.pdf

Topaz-II Design Evolution, Voss 1994 http://gnnallc.com/pdfs/NPP%2014%20Voss%20Topaz%20II%20Design%20Evolution%201994.pdf

Categories
Development and Testing Fission Power Systems Forgotten Reactors History Test Stands

Topaz International part 1: ENISY, the Soviet Years

Hello, and welcome back to Beyond NERVA! Today, we’re going to return to our discussion of fission power plants, and look at a program that was unique in the history of astronuclear engineering: a Soviet-designed and -built reactor design that was purchased and mostly flight-qualified by the US for an American lunar base. This was the Enisy, known in the West as Topaz-II, and the Topaz International program.

This will be a series of three posts on the system: this post focuses on the history of the reactor in the Soviet Union, including the testing history – which as we’ll see, heavily influenced the final design of the reactor. The next will look at the Topaz International program, which began as early as 1980, while the Soviet Union still appeared strong. Finally, we’ll look at two American uses for the reactor: as a test-bed reactor system for a nuclear electric test satellite, and as a power supply for a crewed lunar base. This fascinating system, and the programs associated with it, definitely deserve a deep dive – so let’s jump right in!

We’ve looked at the history of Soviet astronuclear engineering, and their extensive mission history. The last two of these reactors were the Topaz (Topol) reactors, on the Plasma-A satellites. These reactors used a very interesting type of power conversion system: an in-core thermionic system. Thermionic power conversion takes advantage of the fact that certain materials, when heated, eject electrons, gaining a positive static charge as whatever the electrons impact gains a negative charge. Because the materials required for a thermionic system can be made incredibly neutronically robust, they can be placed inside the core of the reactor itself! This is a concept that I’ve loved since I first heard of it, and it remains as cool today as it was back then.

Diagram of multi-cell thermionic fuel element concept, Bennett 1989

The original Topaz reactor used a multi-cell thermionic element concept, where fuel was stacked in individual thermionic conversion elements, and several of these were placed end-to-end to form the length of the core. While this is a perfectly acceptable way to set up one of these systems, there are also inefficiencies and complexities associated with so many individual elements. An alternative is to make a single, full-length thermionic cell, and use either one or several fuel rods inside the thermionic element. This is the – wait for it – single cell thermionic element design, and it is the one that was chosen for the Enisy/Topaz-II reactor (which we’ll call Enisy in this post, since we’re focusing on the Soviet history of the reactor). Though development started in 1967, and the design was tested thoroughly in the 70s, it wasn’t flight-qualified until the 80s… and then the Soviet Union collapsed, and the program died.

After the fall of the USSR, there was a concerted effort by the US to keep the specialist engineers and scientists of the former Soviet republics employed (to ensure they didn’t find work with international bad actors such as North Korea), and to see what technology had been developed behind the Iron Curtain that could be purchased for use by the US. This is where the RD-180 rocket engine, still in use on the United Launch Alliance Atlas rockets, came from. Another part of this program, though, focused on the extensive Soviet experience with astronuclear missions, and in particular on the most advanced – but as yet unflown – design of the renowned NPO Luch design bureau, attached to the Ministry of Medium Machine Building: the Enisy reactor (which had the US designation Topaz-II due to early confusion about the design by American observers).

Enisy power supply, image Department of Defense

The Enisy, in its final iteration, was designed to have a thermal output of 115 kWt (at the beginning of life), with a mission requirement of at least 6 kWe at the electrical outlet terminals for at least three years. Additional requirements included a ten year shelf life after construction (without fissile fuel, coolant, or other volatiles loaded), a maximum mass of 1061 kg, and prevention of criticality before achieving orbit (which was complicated from an American point of view, more on that below). The coolant for the reactor remained NaK-78, a common coolant in most of the reactors we’ve looked at so far. Cesium, needed to maintain the proper partial pressure between the cathode and anode of the fuel elements, would leak out over time (about 0.5 g/day during operation), and so was stored in a reservoir at the “bottom” (away from the spacecraft) end of the reactor vessel. This was meant to be the next upgrade in the Soviet astronuclear fleet, and as such was definitely a step above the Topaz-I reactor.
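A quick sanity check of these numbers is straightforward. This is a minimal sketch using only the figures quoted above (115 kWt, 6 kWe, a three-year mission, 0.5 g of cesium per day); the variable names are mine, not from any design document.

```python
# Back-of-the-envelope check of the Enisy figures quoted above.
# All inputs come from the post itself, not from design documentation.

thermal_power_kwt = 115.0    # beginning-of-life thermal output
electric_power_kwe = 6.0     # requirement at the electrical outlet terminals
mission_years = 3.0
cesium_use_g_per_day = 0.5   # quoted leakage/consumption rate during operation

# Overall conversion efficiency implied by the two power requirements
efficiency = electric_power_kwe / thermal_power_kwt
print(f"implied conversion efficiency: {efficiency:.1%}")  # ~5.2%

# Minimum cesium reservoir needed to cover the full mission
cesium_needed_g = cesium_use_g_per_day * mission_years * 365.25
print(f"cesium consumed over {mission_years:.0f} years: {cesium_needed_g:.0f} g")  # ~548 g
```

The implied system efficiency of roughly 5% is in line with what in-core thermionic conversion delivers, and the roughly half-kilogram of cesium consumed over the mission shows why a dedicated reservoir was part of the design.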

Perhaps the most interesting part of the design is that it could be tested as a complete system without any fissile fuel in the reactor. Instead, electrical resistance heaters could be inserted into the thermionic fuel elements to simulate the heat of fission, allowing far more complete testing of the system in flight configuration before launch. This design decision heavily influenced American astronuclear design and testing procedures, and continues to influence designs today: the induction heating used in the KRUSTY thermal simulator is a good recent example of the concept, even if it was heavily modified for a very different reactor geometry. It helped that the Enisy used cylindrical fuel elements, which made this heater substitution much easier.

So what did the Enisy look like? This changed over time, but in this post we'll look at the basics of the power plant's design in its final Soviet iteration, and then examine the changes the Americans made during the collaboration in the next post. We'll also look at why the design changed as it did.

First, though, we need to look at how the system worked, since compared to every system that we’ve looked at in depth, the physics behind the power conversion system are quite novel.

Thermionics: How to Keep Your Power Conversion System in the Core

We haven't looked at power conversion systems much in this blog yet, but this is a good place to discuss the first kind, since it's so integral to this reactor. If the details of how the power conversion system actually worked don't interest you, feel free to skip to the next section; but for many people interested in astronuclear design, this power conversion system is potentially the most efficient and reliable option available for in-space nuclear reactors geared toward electricity production.

In short, thermionic reactions occur when a material is heated and gives off charged particles. The effect has been observed since ancient times, even though the physical mechanism was completely unknown until after the discovery of the electron; the name comes from the term "thermions," or "thermal ions." One of the first to describe the effect was Thomas Edison, working with a hot filament in a vacuum – the modern incandescent lightbulb – who observed a static charge building up on the glass of his bulbs while they were turned on. Today, though, the field has expanded to include dedicated collector electrodes, as well as solid-state systems and systems that don't require a vacuum.

The efficiency of these systems depends on the temperature difference between the emitter and collector, on the work function (the minimum thermodynamic work needed to remove an electron from a solid to a vacuum immediately outside the solid surface) of the emitter used, and on the Boltzmann constant (which relates temperature to the average kinetic energy of particles in a gas), as well as a number of other factors. In modern systems, the structure of a thermionic convertor which isn't completely solid state is fairly standard: a hot cathode (the emitter) is separated from a cold anode (the collector), with cesium vapor in between. For nuclear systems the emitter is often tungsten or molybdenum, the collector material seems to vary depending on the system, and the gap between them – called the inter-electrode gap – is system specific.
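The emitter-side physics in the paragraph above is usually summarized with the Richardson–Dushman equation for emitted current density. This is the idealized vacuum form – a real cesium-vapor converter like Enisy's runs in an ignited arc mode, so treat it as illustrative rather than as the Enisy design equation:

```latex
% Richardson–Dushman thermionic emission law (ideal vacuum form)
J = A_G \, T^2 \, e^{-W / (k_B T)}
```

where \(J\) is the emitted current density, \(T\) the emitter temperature, \(W\) the emitter's work function, \(k_B\) the Boltzmann constant, and \(A_G\) the Richardson constant (roughly \(1.2 \times 10^6\ \mathrm{A\,m^{-2}\,K^{-2}}\), scaled by a material-dependent correction factor). The exponential term is why emitter temperature dominates converter performance – and why thermionics reward running the core as hot as materials allow.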

The cesium exists in an interesting state of matter. Solid, liquid, gas, and plasma are familiar to pretty much everyone at this point, but other states exist under unusual circumstances; perhaps the best known is a supercritical fluid, which exhibits the properties of both a liquid and a gas (although this is a range of possibilities, with some having more liquid properties and some more gaseous). The one that concerns us today is something called Rydberg matter, one of the more exotic forms of matter – although it has been observed in many places across the universe. In its simplest form, Rydberg matter can be seen as small clusters of interconnected atoms within a gas (the largest cluster observed in a laboratory is 91 atoms, according to Wikipedia, although there's evidence for far larger numbers in interstellar gas clouds). These clusters affect the electron clouds of their constituent atoms, delocalizing the electrons across the nuclei and creating a new lowest-energy state for the cluster as a whole. Thanks to a number of quantum mechanical properties, these structures don't degrade any faster under radiation bombardment, which brought them to the attention of the Los Alamos Scientific Laboratory staff in the 1950s, and a short time later Soviet nuclear physicists as well.

This sounds complex, and it is, but the key point is this: because the clumps act as a unit within Rydberg matter, their ability to transmit electricity is enhanced compared to other gasses. In particular, cesium seems to be a very good vehicle for creating Rydberg matter, and cesium vapor seems to be the best available for the gap between the cathode and anode of a thermionic convertor. The density of the cesium vapor is variable and dependent on many factors, including the materials properties of the cathode and anode, the temperature of the cathode, the inter-electrode gap distance, and a number of other factors. Tuning the amount of cesium in the inter-electrode gap is something that must occur in any thermionic power conversion system; in fact the original version of the Enisy had the ability to vary the inter-electrode gap pressure (this was later dropped when it was discovered to be superfluous to the efficient function of the reactor).

This type of system comes in two varieties: in-core and out-of-core. The out-of-core variant is very similar to the power conversion systems we saw (briefly) on the SNAP systems: the coolant from the reactor passes around or through the radiation shield of the system and heats the cathode, which emits electrons across the gap to be collected by the anode; the electricity then goes through the power conditioning unit and into the electrical system of the spacecraft. Because thermionic conversion is theoretically more efficient than thermoelectric conversion, and in practice is more flexible in temperature range, even keeping this configuration of the power conversion system relative to the rest of the power plant offers some advantages.

The in-core variant, on the other hand, wraps the power conversion system directly around the fissile fuel in the core, with electrical power being conducted out of the core itself and through the shield. The coolant runs across the outside of the thermionic unit, providing the thermal gradient for the system to work, and then exits the reactor. While this increases the volume of the core (admittedly, not by much), it also eliminates the need for more complex plumbing in the primary coolant loop, and it reduces heat loss, since the coolant travels a shorter distance. Finally, there's far less chance of a stray meteoroid hitting your power conversion system and causing problems – if a thermionic fuel element is damaged by a foreign object, you have far bigger problems with the system as a whole, since the object damaged your control systems and pressure vessel on its way to the power conversion unit!

The in-core thermionic power conversion system, while originally proposed in the US, was seen as a curiosity on that side of the Iron Curtain. Some designs were proposed, but none were researched to the point of being serious contenders for the significant funding needed to develop something as complex as an astronuclear fission power plant; and the low conversion efficiency available in practice prevents its application in terrestrial power plants, which to this day continue to use steam turbine generators.

On the other side of the Iron Curtain, however, this was seen as the ideal solution for a power conversion system: everything needed for the system to work could be solid-state, with no moving parts other than the heaters that vaporize the cesium and the electromagnetic pumps that move the working fluids through the reactor. Greater radiation resistance, more flexible operating temperatures, and greater conversion efficiency all offered more promise to Soviet astronuclear systems designers than the thermoelectric path that the US ended up following. The first Soviet reactor designed for in-space use, the Romashka, used direct in-core conversion (thermoelectric, in its case), but the challenges involved in such tightly integrated systems led the Krasnaya Zvezda design bureau (responsible for the Romashka, Bouk, and Topol reactors) to initially choose conventional thermoelectric convertors for their first flight system: the BES-5 Bouk, which we've seen before.

Now that we've looked at the physics that let you place your power conversion system within the reactor vessel of your power plant (and, as far as I've been able to determine, if you're looking to generate electricity beyond what a simple sensor needs, this is the only way to do so without going to something very exotic), let's look at the reactor itself.

Enisy: The Design of the TOPAZ-II Reactor

The Enisy was a uranium oxide fueled, zirconium hydride moderated, sodium-potassium eutectic cooled reactor, which used a single-element thermionic fuel element design for in-core power conversion. The multi-cell version was used in the Topol reactor, where each short stack of fuel pellets was wrapped in its own thermionic convertor. This is sometimes called a "flashlight" configuration, since it looks a bit like the batteries in a large flashlight, but it comes at the cost of complexity, mass, and increased inefficiencies. On the other hand, many issues are easier to deal with in this configuration, especially as the fuel reaches higher burnup percentages and begins to swell. The ultimate goal was single-unit thermionic fuel elements, which were realized in the Enisy reactor: while more challenging in terms of materials requirements, the greater simplicity, lower mass, and greater efficiency of the system offered more promise.

The power plant was required to provide 6 kWe of electrical power at the reactor terminals (before the power conditioning unit) at 27 volts. It had to have an operational life of three years, and a storage life if not immediately used in a mission of at least ten years. It also had to have an operational reliability of >95%, and could not under any circumstances achieve criticality before reaching orbit, nor could the coolant freeze at any time during operation. Finally, it had to do all of this in less than 1061 kg (excluding the automatic control system).
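Those requirements imply two more figures of merit worth spelling out (again my own arithmetic, not a design document): the DC current a 27 V bus must carry to deliver 6 kWe, and the specific power implied by the 1061 kg mass cap.

```python
# Two quick figures of merit from the requirements above (my arithmetic):
# the DC current the 27 V bus must carry, and the specific power implied
# by the 1061 kg mass cap.

POWER_W = 6_000.0   # required electrical output at the reactor terminals
VOLTAGE_V = 27.0
MASS_KG = 1_061.0   # maximum allowed mass (excluding the control system)

current_a = POWER_W / VOLTAGE_V
specific_power_w_per_kg = POWER_W / MASS_KG

print(f"Terminal current: {current_a:.0f} A")                 # ~222 A
print(f"Specific power: {specific_power_w_per_kg:.1f} W/kg")  # ~5.7 W/kg
```

Over 200 amps is a lot of current to route through a shadow shield, which is part of why low-voltage, high-current busbars and the power conditioning unit are such prominent parts of the design.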

TFE Full Length, image DOD

Thirty-seven fuel elements were used in the core, which was contained in a stainless steel reactor vessel. These contained uranium oxide fuel pellets, with a central fission gas void about 22% of the diameter of the fuel pellets to prevent swelling as fission products built up. The emitters were made out of molybdenum, a fairly common choice for in-core applications. Al2O3 (sapphire) insulators were used to electrically isolate the fuel elements from the rest of the core. Three of these would be used to power the cesium heater and pump directly, while another (unknown) number powered the NaK coolant pump (my suspicion is that it’s about the same number). The rest would output power directly from the element into the power conditioning unit on the far side of the power plant.

Enisy Core Cross-section, image DOD

Nine control drums, made mostly of beryllium but with a neutron poison (boron carbide/silicon carbide) along one portion of the outer surface, surrounded the core. Three of these drums were safety drums with only two positions: in, with the neutron poison facing the center of the core, and out, where the beryllium acted as a neutron reflector. The rest of the drums could be rotated in or out as needed to maintain reactivity at the appropriate level in the core. The drums had actuators mounted outside the pressure vessel to control their rotation, and were connected to an automatic control system to ensure autonomous, stable operation of the reactor within the mission profile it was required to support.
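To get a feel for how a rotating drum trims reactivity, here's a toy model using the (1 − cos θ)/2 shape commonly assumed for drum worth curves in reactor physics texts. The maximum worth here is a made-up illustrative value, not an Enisy design number:

```python
import math

# Toy model of a rotating control drum's reactivity worth, using the
# standard (1 - cos(theta))/2 shape often assumed for drum worth curves.
# rho_max is a made-up illustrative worth, NOT an Enisy design value.

def drum_worth(theta_deg: float, rho_max: float = 1.0) -> float:
    """Reactivity added as the drum rotates its poison face away from
    the core: 0 at theta=0 (poison in), rho_max at theta=180 (poison out)."""
    theta = math.radians(theta_deg)
    return rho_max * (1.0 - math.cos(theta)) / 2.0

# Safety drums are two-position (fully in or fully out); regulating
# drums rotate continuously to trim reactivity over core life.
for angle in (0, 45, 90, 135, 180):
    print(f"{angle:3d} deg -> {drum_worth(angle):.3f} of max worth")
```

The flat slope near 0° and 180° is convenient: a small actuator error near either end position changes reactivity very little, which is one reason rotating drums are favored for compact space reactors over translating control rods.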

Image DOD

The NaK coolant, driven by an electromagnetic pump, flowed around the fuel elements in an annular flow path immediately surrounding the TFEs, and then passed through a radiator. Two inlet and two outlet pipes connected the core to the radiator. In between the radiator and the core was a radiation shield made of stainless steel and lithium hydride (more on this seemingly odd choice when we look at the testing history).

The coolant tubes were embedded in a zirconium hydride moderator, which was contained in stainless steel casings.

Finally, a reservoir of cesium was at the opposite end of the reactor from the radiator. This was necessary for the proper functioning of the thermionic fuel elements, and underwent many changes throughout the design history of the reactor, including a significant expansion as the design life requirements increased.

Once the Topaz International program began, additional – and quite significant – changes were made to the reactor’s design, including a new automated control system and an anti-criticality system that actually removed some of the fuel from the core until the start-up commands were sent, but that’s a discussion for the next post.

TISA Heater Installation During Topaz International, image NASA

I saved the coolest part of this system for last: the TISA, or “Thermal Simulators of Apparatus Cores” (the acronym was from the original Russian), heaters. These units were placed in the active section of the thermionic fuel elements to simulate the heat of fission occurring in the thermionic fuel elements, with the rest of the systems and subsystems being in flight configuration. This led to unprecedented levels of testing capability, but at the same time would lead to a couple of problems later in testing – which would be addressed as needed.

How did this design end up this way? To understand that, we need to look at the Soviet design team's development and testing process.

The History of Enisy’s Design

The Enisy reactor started with the development of the thermionic fuel element by the Sukhumi Institute in the early 1960s, which produced two options: the single cell and multiple cell variants. In 1967, these two options were split into two different programs: the Topol (Topaz), which we looked at in the Soviet Astronuclear History post, led by the Krasnaya Zvezda design bureau in Moscow, and the Enisy, headed by the Central Design Bureau of Machine Building in Leningrad (now St. Petersburg). Aside from the lead bureau, which was in charge of overall program and system management, a number of other organizations were involved in the reactor's design, fabrication, and testing. On the design and modeling side, the Kurchatov Institute of Atomic Energy was responsible for nuclear design and analysis, the Scientific Industrial Association Luch for the thermionic fuel elements, and the Sukhumi Institute for the reactor's automatic control systems. On the fabrication and testing side, the Research Institute of Chemical Machine Building handled thermal vacuum testing, the Scientific Institute for Instrument Building ran the Turaevo nuclear test facility, the Krasnoyarsk spacecraft designers handled mechanical testing and spacecraft integration, and the Prometheus Laboratory handled materials development (including the liquid metal eutectic for the cooling system and materials testing) and welding. Finally, the Enisy manufacturing facility was located in Tallinn, Estonia – a decision that would cause headaches later, during the collaboration.

The Enisy originally had three customers (whose identities I don't know, other than that at least one was military), and each had different requirements for the reactor. It was originally designed to operate at 6 kWe for one year with a >95% success rate, but customer requirements changed both of these characteristics significantly. As an example, one customer needed a one year system life with a 6 kWe power output, while another only needed 5 kWe – but needed a three year mission lifetime. This longer lifetime ended up becoming the baseline requirement of the system, although the 6 kWe requirement and >95% mission success rate remained unchanged. This led to numerous changes, especially to the cesium reservoir needed for the thermionic convertors, as well as to insulators, sensors, and other key components in the reactor itself. As the cherry on top, manufacture of the system was moved from Moscow to Tallinn, Estonia, requiring a new set of technicians to be trained to the specific requirements of the system, changes in documentation, and – at the fall of the Soviet Union – the loss of significant program documentation which could have assisted the later Russian/US collaboration on the system.

The nuclear design side of things changed throughout the design life as well. The number of thermionic fuel elements (TFEs) in the core was increased from 31 to 37 in 1974, along with an increase in the height of the "active" section of the TFE (whether the overall TFE length – and therefore the core length – changed is information I have not been able to find). Additional space in the TFEs was added to account for greater fuel swelling as fission products built up in the fuel pellets, and the bellows used to ensure proper fitting of the TFEs against reactor components were modified as well. The moderator blocks in the core, made out of zirconium hydride, were modified at least twice, including a change to the material the moderator was contained in. Manufacturing changes in the stainless steel reactor vessel were also required, as were changes to the gamma shielding design for the shadow shield. All in all, the reactor went through significant changes from the first model tested to the end of its design life.

Another area with significantly changing requirements was systems integration. The reactor was initially meant to be launched in a reactor-up position, but this was changed in 1979 to a reactor-down launch configuration, necessitating changes to several systems in what ended up being a significant effort. Another change in the launch integration requirements was an increase in the acceleration levels required during dynamic testing by a factor of almost two, which caused failures in testing – and redesigns of many of the structures used in the system. The boom that mounted the power plant to the spacecraft changed as well: three different designs were used through the lifetime of the system on the Russian side, and doubtless at least two more would have been needed for American spacecraft integration.

Perhaps the most changed design was the coolant loop, due to significant problems during testing and manufacturing of the system.

Design Driven by (Expected) Failure: The USSR Testing Program

Flight qualification for nuclear reactors in the USSR at the time was very different from the way that the US did flight qualification, something that we’ll look at a bit more later in this post. The Soviet method of flight qualification was to heavily test a number of test-beds, using both nuclear and non-nuclear techniques, to validate the design parameters. However, the actual flight articles themselves weren’t subjected to nearly the same level of testing that the American systems would be, instead going through a relatively “basic” (according to US sources) workmanship examination before any theoretical launch.

In the US, extensive systems modeling is a routine part of nuclear design of any sort, as well as astronautical design. Failures are not unexpected, but at the same time the ideal is that the system has been studied and modeled mathematically thoroughly enough that it’s not unreasonable to predict that the system will function correctly the first time… and the second… and so on. This takes not only a large amount of skilled intellectual and manual labor to achieve, but also significant computational capabilities.

In the Soviet Union, however, the preferred method of astronautical – and astronuclear – development was to build what seemed to be a well-designed system and then test it, expecting failure. Once this happened, the causes of the failure were analyzed, the problem corrected, and then the newly upgraded design would be tested again… and again, for as many times as were needed to develop a robust system. Failure was literally built into the development process, and while it could be frustrating to correct the problems that occurred, the design team knew that the way their system could fail had been thoroughly examined, leading to a more reliable end result.

This design philosophy required a large number of each system to be built. Each reactor underwent a post-manufacturing examination to determine the quality of its fabrication, and from this the appropriate use of the reactor was decided. These systems had four prefixes: SM, V, Ya, and Eh. Each type in this order was able to do everything the previous type could, in addition to having superior capabilities. The SM, or static mockup, articles were built only for mechanical testing, and as such were stripped down, "boilerplate" versions of the system. The V reactors were the next step up, used for thermophysical (heat transfer, vibration, etc.) or mechanical testing, but not of sufficient quality to undergo nuclear testing. The Ya reactors were suitable for nuclear testing as well, and in a pinch could be used in flight. The Eh reactors were the highest quality, and were designated potential flight systems.
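The strictly ordered quality ladder described above maps naturally onto an ordered enumeration. This is just my paraphrase of the grading scheme as a data structure, not anything from the Soviet documentation:

```python
from enum import IntEnum

# The four article grades described above, ordered so that each grade
# can do everything the grades below it can. The capability notes are
# my paraphrase of the post, not an official specification.

class ArticleGrade(IntEnum):
    SM = 1   # static mockup: mechanical testing only
    V = 2    # thermophysical/mechanical testing, not nuclear-rated
    YA = 3   # nuclear-test-rated, flyable in a pinch
    EH = 4   # highest quality: designated potential flight unit

def can_perform(grade: ArticleGrade, required: ArticleGrade) -> bool:
    """A unit can serve in any role at or below its own grade."""
    return grade >= required

assert can_perform(ArticleGrade.EH, ArticleGrade.YA)     # flight unit can do nuclear tests
assert not can_perform(ArticleGrade.V, ArticleGrade.YA)  # V units aren't nuclear-rated
```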

In addition to this designation, there were four distinct generations of reactor. The first generation ran from V-11 to Ya-22: this core used 31 thermionic fuel elements, had a one year design life, was intended to be launched upright, and had a lightweight radiation shield. In the next generation, V-15 to Ya-26, the operational lifetime was increased to a year and a half.

The third generation, V-71 to Eh-42, had a number of changes. The number of TFEs was increased from 31 to 37, in large part to accommodate another increase in design life, to above 3 years. The TFE emitters were changed to monocrystalline Mo, and later units had Nb added to the Mo (more on this below). The ground testing thermal power level was reduced, to address thermal damage from the heating units in earlier non-nuclear tests. This is also when the launch configuration was changed from upright to inverted, necessitating changes in the freeze-prevention thermal shield, integration boom, and radiator mounting brackets. The last two of this generation, Eh-41 and Eh-42, had the heavier radiation shield installed, while the rest used the earlier, lighter gamma shield.

The final generation, Ya-21u to Eh-44, had the longest core lifetime requirement of three years at 5.5 kWe power output. These included all of the other changes above, as well as many smaller changes to the reactor vessel, mounting brackets, and other mechanical components. Most of these systems ended up becoming either Ya or Eh units due to lessons learned in the previous three generations, and all of the units which would later be purchased by the US as flight units came from this final generation.

A total of 29 articles were built by 1992, when the US became involved in the program. As of 1992, two of the units were not completed, and one was never assembled into its completed configuration.

Sixteen of the 21 units were tested between 1970 and 1989, providing an extensive experimental record of the reactor type. Of these, thirteen underwent non-nuclear thermal, mechanical, and integration testing, and nuclear testing occurred six times at the Baikal nuclear facility. As of 1992, there were two built but untested flight units available – the Eh-43 and Eh-44 – with the Eh-45 still under construction.


Unit Name

Generation

Series #

Core Life

# of TFEs

TFE Generation

ACS Unit

Launch configuration

Manufacturing location

Test type

Test stand

Testing begin

Testing end

Testing duration

System notes

SM-0

0

Static Model 1

n/a

n/a

n/a

Upright

CDBMB

Static

01/01/76

01/01/76

Original mockup, with three main load bearing systems.

SM-1

0

Static Model 2

n/a

n/a

n/a

Inverted

CDBMB

Static

Krasnoyarsk

01/01/83

01/01/84

Inverted launch configuration static test model.

SM-2

0

Static Model 3

n/a

n/a

n/a

Inverted

CDBMB

Static

Krasnoyarsk

01/01/83

01/01/84

Inverted launch configuration static test model.

V-11

1

Prototype 1

1

1

Upright

CDBMB

Electric heat

Baikal

07/23/71

02/03/72

3200

Development of system test methods and operations. Incomplete set of TFEs

V-12

1

Prototype 2

1

31

1

Upright

CDBMB

Electrical

Baikal

06/21/72

04/18/73

850

Development of technology for prelaunch operations and system testing

V-13

1

Prototype 3

1

31

1

Upright

Talinn

Mechanical

Baikal, Mechanical

08/01/72

05/01/73

?

Transportation, dynamic, shock, cold temperature testing. Reliability at freezing and heating.

Ya(?)-20

1

Specimen 1

1

31

1

Upright

Talinn

Nuclear

Romashka

10/01/72

03/01/74

2500

Zero power testing. Neutron physical characteristics, radiation field characterization. Development of nuclear tests methods.

Ya-21

1

Specimen 2

1

31

1

Upright

Talinn

Nuclear

Baikal, Romashka

?

?

?

Nuclear test methods and test stand trials. Prelaunch operations. Neutron plysical characteristics

Ya-22

1

Specimen 3

1

31

1

Upright

Talinn

n/a

n/a

n/a

n/a

n/a

Unfabricated, was intended to use Ya-21 design documents

Unit Name

Generation

Series #

Core Life

# of TFEs

TFE Generation

ACS Unit

Launch configuration

Manufacturing location

Test type

Test stand

Testing begin

Testing end

Testing duration

System notes

V-15

2

Serial 1

1-1.5

31

2

Upright

Talinn

Cold temp

Baikal, Cold Temp Testing

02/12/80

?

Operation and functioning tests at freezing and heating.

V-16

2

Serial 2

1-1.5

31

2

Upright

Talinn

Mechanical, Electrical

Mechanical

08/01/79

12/01/79

2300

Transportation, vibration, shock. Post-mechanical electirc serviceability testing.

Ya-23

2

Serial 3

1-1.5

31

2

SAU-35

Upright

Talinn

Nuclear

Romashka

03/10/75

06/30/76

5000

Nuclear testing revision and development, including fuel loading, radiation and nuclear safety. Studied unstable nuclear conditions and stainless steel material properties, disassembly and inspection. LiH moderator hydrogen loss in test.

Eh-31

2

Serial 4

1-1.5

31

2

SAU-105

Upright

Talinn

Nuclear

Romashka

02/01/76

09/01/78

4600

Nuclear ground test. ACS startup, steady-state functioning, post-operation disassembly and inspection. TFE lifetime limited to ~2 months due to fuel swelling

Ya-24

2

Serial 5

1-1.5

31

2

SAU-105

Upright

Talinn

Nuclear

Tureavo

12/01/78

04/01/81

14000

Steady state nuclear testing. Significant TFE shortening post-irradiation.

(??)-33

2

Serial 6

1-1.5

31

2

Upright

Talinn

Spacecraft integration

Tureavo

n/a

n/a

n/a

TFE needed redesign, no systems testing. Installed at Turaevo as mockup. Used to establish transport and handling procedures

V(?)-25

2

Serial 7

1-1.5

31

2

Upright

Talinn

Spacecraft integration

Krasnoyarsk

n/a

n/a

?

System incomplete. Used as spacecraft mockup, did not undergo physical testing.

(??)-35

2

Serial 8

1-1.5

31

2

Upright

Talinn

Test stand preparation

Baikal

?

?

?

Second fabrication stage not completed. Used for some experiments with Baikal test stand. Disassembled in Sosnovivord.

V(?)-26

2

Serial 9

1-1.5

31

2

Upright

Talinn, CDBMB

n/a

n/a

n/a

n/a

n/a

Refabricated at CDBMB. TFE burnt and damaged during second fadrication. Notch between TISA and emitter

Unit Name

Generation

Series #

Core Life

# of TFEs

TFE Generation

ACS Unit

Launch configuration

Manufacturing location

Test type

Test stand

Testing begin

Testing end

Testing duration

System notes

V-71

3

Serial 10

1.5

37

3

Upright, Inverted

Talinn

Mechanical, Electrical, Spacecraft integration

Baikal, Krasnoyarsk, Cold Temp Testing

01/01/81

01/01/87

1300

Converted from upright to inverted launch configuration, spacecraft integration heavily modified. First to use 37 TFE core configuration. Transport testing (railroad vibration and shock), cold temperature testing. Electrical testing post-mechanical. Zero power testing at Krasnoyarsk.

Ya-81

3

Serial 11

1.5

37

3

Ground control (no ACS)

Inverted

Talinn

Nuclear

Romashka

09/01/80

01/01/83

12500

Nuclear ground test, steady state operation. Leaks observed in two cooling pipes 120 hrs into test; leaks plugged and test continued. Disassembly and inspection.

Ya-82

3

Serial 12

1.5

37

3

Prototype Sukhumi ACS

Inverted

Talinn

Nuclear

Tureavo

09/01/83

11/01/84

8300

Nuclear ground test, startup using ACS, steady state. Initial leak in EM pump led to large leak later in test. Test ended in loss of coolant accident. Reactor disassembled and inspected post-test to determine leak cause.

Eh(?)-37

3

Serial 13

1.5

37

3

Inverted

Talinn

Static

?

?

?

?

Quality not sufficient for flight (despite Eh “flight” designation). Static and torsion tests conducted.

Eh-38 (Generation 3, Serial 14): core life 1.5 yr; 37 TFEs (Gen 3); ACS unit: Factory #1; inverted launch configuration; manufactured in Talinn. Tests: nuclear (Romashka stand), 02/01/86 to 05/01/86, 4700 hours. Notes: Nuclear ground test, pre-launch simulation. ACS startup and operation. Steady state test. Post-operation disassembly and examination.

(??)-39 (Generation 3, Serial 15): core life 1.5 yr; 37 TFEs (Gen 3); inverted launch configuration; manufactured in Talinn. Tests: all test fields listed as “special.” Notes: Fabrication began in Estonia, with some changed components. After the changes, the system name changed to Eh-41 and the serial number to 17. Significant reactor changes.

Eh-40 (Generation 3, Serial 16): core life 1.5 yr; 37 TFEs (Gen 3); inverted launch configuration; manufactured in Talinn. Tests: cold temperature and coolant flow (stand unknown), 01/03/88 to 12/31/88, duration unknown. Notes: Cold temperature testing. No electrical testing. Filled with NaK during second stage of fabrication.

Eh-41 (Generation 3, Serial 17): core life 1.5 yr; 37 TFEs (Gen 3); inverted launch configuration; manufactured in Talinn. Tests: mechanical and leak (Baikal and mechanical stands), beginning 01/01/88; end date and duration unknown. Notes: Began life as Eh(?)-39; this is the post-retrofit designation. Transportation (railroad), dynamic, and impact testing. Leak testing done post-mechanical testing. First use of the increased shield mass.

Eh-42 (Generation 3, Serial 18): core life 1.5 yr; 37 TFEs (Gen 3); inverted launch configuration; manufactured in Talinn. Tests: n/a. Notes: Critical component welding failure during fabrication. Unit never used.

The fourth-generation units below follow the same format: unit name; generation; series number; core life (years); number of TFEs; TFE generation; launch configuration; manufacturing location; test type; test stand; testing dates; duration; and system notes.

Ya-21u (Generation 4, Serial 19): core life 3 yr; 37 TFEs (Gen 4); inverted launch configuration; manufactured in Talinn. Tests: electrical (Baikal stand), 12/01/87 to 12/01/89, duration unknown. Notes: First Gen 4 reactor using modified TFEs. Electrical testing on TFEs conducted. New end-cap insulation on TFEs tested.

Eh-43 (Generation 4, Serial 20): core life 3 yr; 37 TFEs (Gen 4); inverted launch configuration; manufactured in Talinn. Tests: n/a, though one entry reads “6/30/88” (unclear what testing is indicated). Notes: Flight unit. First fabrication phase in Talinn completed, second incomplete as of 1994.

Eh-44 (Generation 4, Serial 21): core life 3 yr; 37 TFEs (Gen 4); inverted launch configuration; manufactured in Talinn. Tests: n/a. Notes: Flight unit. First fabrication phase in Talinn completed, second incomplete as of 1994.

Eh(?)-45 (Generation 4, Serial 22): core life 3 yr; 37 TFEs (Gen 4); inverted launch configuration; manufactured in Talinn. Tests: n/a. Notes: Partially fabricated unit with missing components.

Not many fine details are known about the testing of these systems, but we do have some information about the tests that led to significant design changes. These changes are best broken down by power plant subsystem, because while there’s significant interplay between these various subsystems, their functionality can change in minor ways quite easily without affecting the plant as a whole. Those systems are: the thermionic fuel elements, the moderator, the pressure vessel, the shield, the coolant loop (which includes the radiator piping), the radiator coatings, the launch configuration, the cesium unit, and the automatic control system (including the sensors for the system and the drum drive units). While this seems like a lot of systems to cover, many of them have very little information about their design history to pass on, so it’s less daunting than it initially appears.

Thermionic Fuel Elements

It should come as no surprise that the thermionic fuel elements (TFEs) were extensively modified throughout the testing program. One of the big problems was short circuiting across the inter-electrode gap due to fuel swelling, although other problems caused short circuits as well.

Perhaps the biggest change was the move from 31 to 37 TFEs in the core, one of the major changes made to minimize fuel swelling. The active core length (where the fuel pellets were) was increased by 40 mm (from 335 mm to 375 mm), and the inter-electrode gap was widened by 0.05 mm (from 0.45 to 0.5 mm). In addition, the hole through the center of the fuel element was increased in diameter to allow for greater internal swelling, reducing the mechanical stress on the emitter.

The method of attaching the bellows for thermal expansion was also modified (the braze temperature was dropped 10 K) to prevent crystallization of the palladium braze and increase the bellows’ thermal cycling capability, after failures on the Ya-24 system (1977-1981).

Equally significant were the changes to the materials used in the TFE. The emitter started off as polycrystalline molybdenum in the first two generations of reactors, but the grain boundaries between the Mo crystals caused brittleness over time. Because of this, the program developed the capability to use monocrystalline Mo, which improved performance in the early third generation of reactors – just not enough. In the final version, seen in the later third and fourth generation systems, the Mo was doped with 3% niobium, which created the best available material for the emitter.

There were many other changes during the development of the thermionic fuel elements, including the addition of coatings on some materials for corrosion resistance, changes in electrical insulation type, and others, but these were the most significant in terms of functionality of the TFEs, and their impact on the overall systems design.

ZrH Moderator

The zirconium hydride neutron moderator was placed around the outside of the core. Failures were observed several times in testing, including during the Ya-23 test, which resulted in the loss of hydrogen from the core and the permanent shutdown of that reactor. Overpower issues, combined with a loss of coolant, led to moderator failure in Ya-82 as well, but in this case the improved hydrogen barriers used in the stainless steel “cans” holding the ZrH prevented a loss-of-hydrogen accident despite the ZrH breaking up (the failure was due to the ZrH being spread more thinly across the reactor, not to the loss of H from damaged ZrH).

This development process was one of the least well documented areas of the Soviet program.

Reactor Vessel

Again, this subsystem’s development seems poorly documented. The biggest change, though, seems to be in the way the triple coating (of chrome, then nickel, then enamel) was applied to the stainless steel of the reactor vessel. This was due to the failure of the Ya-23 unit, which failed at the join between the tube and the tube end on one of the TFEs. The crack self-sealed, but for future units the coatings didn’t go all the way to the weld, and the hot CO2 used as a cover gas was allowed to carburize the steel to prevent fatigue cracking.

Radiation Shield

The LiH component of the radiation shield (for neutron shielding) seems to not have changed much throughout the development of the reactor. The LiH was contained in a 1.5 mm thick stainless steel casing, polished on the ends for reflectivity and coated black on the outside face.

However, the design of the stainless steel casing was changed in the early 1980s to meet more stringent payload gamma radiation dose limits. Rather than add a denser shielding material such as tungsten or depleted uranium, as is typical, the designers decided to simply thicken the reactor and spacecraft sides of the LiH can, to 65 mm and 60 mm respectively. While this was definitely less mass-efficient than using W or U, the manufacturing change was fairly trivial with stainless steel, and this was considered the most effective way to ensure the required flux rates with a minimum of engineering challenges.
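
To get a feel for the trade the designers made, here’s a rough back-of-the-envelope gamma attenuation sketch. The photon energy (~1 MeV), the attenuation coefficients, and the tungsten comparison are my own illustrative assumptions, not figures from the Enisy program:

```python
import math

# Rough shielding comparison: the 65 mm stainless steel can vs. the
# tungsten thickness giving the same gamma attenuation. Coefficients
# are illustrative values for ~1 MeV photons (narrow-beam, no buildup
# factor) -- assumptions, not program data.
MU_RHO_STEEL = 0.0596          # cm^2/g, mass attenuation coefficient
MU_RHO_W = 0.0662              # cm^2/g
RHO_STEEL, RHO_W = 7.9, 19.3   # g/cm^3

t_steel = 6.5                                 # cm, reactor-side can thickness
mu_t = MU_RHO_STEEL * RHO_STEEL * t_steel     # optical depth
attenuation = math.exp(-mu_t)                 # surviving photon fraction
t_w = mu_t / (MU_RHO_W * RHO_W)               # W thickness, same optical depth

print(f"65 mm steel passes {attenuation:.1%} of ~1 MeV gammas")
print(f"Equivalent tungsten thickness: {t_w * 10:.0f} mm")
print(f"Areal mass: steel {t_steel * RHO_STEEL:.0f} vs W {t_w * RHO_W:.0f} g/cm^2")
```

At ~1 MeV, where Compton scattering dominates, the areal mass penalty of steel is actually modest; tungsten’s real advantages are its much smaller thickness and its better performance at lower photon energies. For a program where manufacturing simplicity mattered more than mass optimization, thicker steel was a defensible choice.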

The first unit to use this was the Eh-41, fabricated in 1985, which was also the first unit to be tested in the inverted flight configuration. The heavier shield, combined with the new position, led to the failure of one of the shield-to-reactor brackets, as well as the attachment clips for the radiator piping. These components were changed, and no further challenges occurred with the shield in the rest of the test program.

Coolant Loop

The NaK coolant loop was the biggest source of headaches during the development of the Enisy. A brief list of failures, and the actions taken to correct them, follows:

V-11 (July 1971-February 1972): A weld failed at the join between the radiator tubing and collector during thermophysical testing. The double weld was changed to a triple weld to correct the failure mode.

Ya-21 (1971): This reactor seemed to have everything go wrong with it. Another leak at the same tube-to-collector interface led to the welding on of a small sleeve to repair the crack. This fix seemed to solve the problem of failures in that location.

Ya-23 (March 1975-June 1976): Coolant leak between coolant tube and moderator cavity. Both coating changes and power ramp-up limits eliminated issues.

V-71 (January 1981-1994?): NaK leak in a radiator tube after 290 hours of testing. Plugged, testing continued. A new leak occurred 210 test hours later, and the radiator was examined under x-ray. Two additional poorly-manufactured tubes were replaced with structural supports. One of the test reactors was sent to the US under Topaz International.

Ya-81 (September 1980-January 1983): Two radiator pipe leaks 180 hours into nuclear testing (no pre-nuclear thermophysical testing of unit). Piping determined to be of lower quality after switching manufacturers. Post-repair, the unit ran for 12,500 hours in nuclear power operation.

Ya-82 (September 1983 to November 1984): A slow leak led to coolant pump voiding and oscillations, then to one of six pump inlet lines splitting. There were two additional contributors to this failure: first, the surfaces were pressed into shape from square pipes, which can cause stress microfractures at the corners; and second, the inlet pump was forced into place, causing stress fracturing at the joint. This failure led to reactor overheating due to a loss-of-coolant condition, and to the failure of the ZrH moderator blocks. It prompted increased manufacturing controls on the pump assembly, and no further major pump failures were noted in the remainder of the testing.

Eh-38 (February 1986-August 1986): This failure is a source of some debate among the Russian specialists. Some believe it was a slow leak that began shortly after startup, while others believe it was a larger leak that started at some point toward the end of the 4700 hour nuclear test. The exact location of the leak was never found, but it is known to have been in the upper collector of the radiator assembly.

Ya-21u (December 1987-December 1989): Caustic stress-corrosion cracking occurred about a month and a half into thermophysical testing in the lower collector assembly, likely caused by a coating flaw growing during thermal cycling. This means that subsurface residual stresses existed within the collector itself. Due to the higher-than-typical (by U.S. standards) carbon content in the stainless steel (the specification allowed for 0.08%-0.12% carbon, rather than the less than 0.08% carbon content of U.S. SS-321), the steel was less ductile than was ideal, which could have contributed to the flaw growing as it did. Additionally, increased oxygen levels in the NaK coolant could have exacerbated the problem further. A combination of ensuring that heat treatments had occurred post-forming and maintaining a more oxygen-poor environment was essential to reducing the chances of this failure happening again.

Radiator

Pen and ink diagram of radiator, image DOD

The only known data point on the radiator’s development comes from the Ya-23 test, where the radiator coating changed the nuclear properties of the system at elevated temperature (how is unknown). The coating was changed to something less affected by the radiation environment. The final radiator configuration was a chrome and polymer substrate with an emissivity of 0.85 at beginning of life.
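
A quick Stefan-Boltzmann estimate shows what that beginning-of-life emissivity means for heat rejection. The radiator area and temperature below are illustrative assumptions of mine, not Enisy figures; only the 0.85 emissivity comes from the text:

```python
# Radiative heat rejection to deep space: P = epsilon * sigma * A * T^4
# (environmental heat return neglected).
SIGMA = 5.670e-8      # W/(m^2 K^4), Stefan-Boltzmann constant
emissivity = 0.85     # beginning-of-life value from the text
area_m2 = 7.0         # assumed radiator area
temp_k = 600.0        # assumed NaK radiator temperature

power_w = emissivity * SIGMA * area_m2 * temp_k ** 4
print(f"Rejected heat: {power_w / 1e3:.0f} kW")
```

The T^4 dependence is why coating degradation matters so much: any drop in emissivity over the mission has to be made up with a hotter radiator or more area.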

Launch configuration

As we saw, the orientation in which the reactor was to be launched was changed from upright to inverted, with the boom connecting the reactor to the spacecraft sitting beside it inside the payload fairing. This required the thermal cover used to prevent the NaK from freezing to be redesigned; it was modified after the V-13 test, when it proved unable to prevent the coolant from freezing. The new cover was verified on the V-15 tests, and remained largely unchanged after this.

Some of the load-bearing brackets needed to be changed or reinforced as well, as did the clips used to secure the radiator pipes to the structural components of the radiator.

Cesium Supply Unit

For the TFEs to work properly, it was critical that the Cs vapor pressure stayed within the right range relative to the temperature of the reactor core. This system was designed from first physical principles, leading to a novel structure that used temperature and pressure gradients to operate. While initially throttleable, the throttling functionality had issues during the Ya-24 nuclear test. This changed when it was discovered that there was an ideal pressure setting for all power levels, so the feed pressure was fixed. Sadly, on the Ya-81 test the throttle was set too high, leading to the need to cool the Cs as it returned to the reservoir.

Additional issues were found in the startup subsystem (a single-use puncture valve) used to vent the inert He gas from the interelectrode gap (this was used during launch and before startup to prevent Cs from liquefying or freezing in the system), as well as to balance the Cs pressure by venting it into space at a rate of about 0.4 g/day. The Ya-23 test saw a sensor not register the release of the He, leading to an upgraded spring for the valve.

Finally, the mission lifetime extension in the 1985/86 timeframe tripled the required lifetime of the system, necessitating a much larger Cs reservoir to account for Cs venting: the capacity went from 0.455 g to 1 kg. These reservoirs were tested on Ya-21u and Eh-44, despite one (military) customer objecting due to insufficient testing of the upgraded system. This system would later be tested and found acceptable as part of the Topaz International program.
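
Using the ~0.4 g/day vent rate mentioned above and the tripled (three-year) core life of the Gen 4 units, a quick check (my arithmetic, not the program’s) shows why the reservoir grew to about a kilogram:

```python
# Cesium budget: venting rate times mission life vs. reservoir capacity.
vent_g_per_day = 0.4     # Cs vented to space, from the text
mission_years = 3.0      # tripled design life
reservoir_g = 1000.0     # upgraded 1 kg reservoir

cs_needed_g = vent_g_per_day * mission_years * 365.25
margin = reservoir_g / cs_needed_g
print(f"Cs vented over the mission: {cs_needed_g:.0f} g")
print(f"Reservoir margin: {margin:.1f}x")
```

Roughly 440 g would be vented over three years, so a 1 kg reservoir leaves a bit over 2x margin.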

Automatic Control System

The automatic control system, or ACS, was used for automatic startup and autonomous reactor power management, and went through more significant changes than any other system, save perhaps the thermionic fuel elements. The first ACS, called the SAU-35, was used for the Ya-23 ground test, followed by the SAU-105 for the Eh-31 and Ya-24 tests. Problems arose, however, because these systems were manufactured by the Institute for Instrument Building of the Ministry of Aviation Construction, while the Enisy program was under the purview of the Ministry of Atomic Energy, and bureaucratic conflicts reared their heads.

This led the Enisy program to look to the Sukhumi Institute (who, if you remember, were the institute that started both the Topol and Enisy programs in the 1960s before control was transferred elsewhere) for the next generation of ACS. During this transition, the Ya-81 ground nuclear test occurred, but due to the bureaucratic wrangling, manufacturer change, and ACS certification tests there was no unit available for the test. This led the Ya-81 reactor to be controlled from the ground station. The Ya-82 test was the first to use a prototype Sukhumi-built ACS, with nine startups being successfully performed by this unit.

The Ya-82 loss-of-coolant accident led to what was potentially the final major change to the ACS for the Eh-38 test: the establishment of an upper temperature limit. After this, an increased dead-band to allow greater power drift in the reactor (reducing the necessary control drum movement), along with some minor modifications rerouting wires to ensure proper thermocouple sensor readings, were the final significant modifications before Topaz International started.
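
The dead-band change is easy to illustrate: the ACS only commands a drum move when power drifts outside a tolerance band around the setpoint, so widening the band directly cuts actuations. This toy simulation is entirely my own construction (not the SAU logic), with made-up drift numbers:

```python
import math

def count_drum_moves(dead_band, steps=1000):
    """Count control drum actuations for a reactor whose power drifts
    sinusoidally around its setpoint; the controller only acts when the
    deviation exceeds the dead-band."""
    moves = 0
    for t in range(steps):
        drift = 3.0 * math.sin(t / 25.0)   # simulated power drift, % of setpoint
        if abs(drift) > dead_band:
            moves += 1                      # drum nudged back toward setpoint
    return moves

narrow = count_drum_moves(dead_band=1.0)   # tight dead-band: frequent moves
wide = count_drum_moves(dead_band=2.5)     # widened dead-band: far fewer
print(narrow, wide)
```

Fewer actuations means less wear on the drive unit and less reactivity chatter, at the cost of letting the reactor wander a little further from its setpoint.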

Sensors

The sensors on the Enisy seem to have been regularly problematic, but rather than being replaced, they were either removed or left as instrumentation sensors rather than control sensors. These included the volume accumulator sensors on the stainless steel bellows for the thermionic fuel elements (which were removed), and the set of sensors used to monitor the He gas in the TFE gas gap (for fission product buildup), the volume accumulator (which also contained Ar), and the radiation shield. This second set of sensors was kept in place, but it could only register gross changes rather than make precise measurements, so it was not useful for the ACS.

Control Drive Unit

The control drive unit was responsible for the positioning of the control drums, both on startup as well as throughout the life of the reactor to maintain appropriate reactivity and power levels. Like in the SNAP program, these drive systems were a source of engineering headaches.

Perhaps the most recurring problem during the mid-1970s was the failure of the position sensor for the drive system, which was used to monitor the rotational position of the drum relative to the core. This failed in the Ya-20, Ya-21, and Ya-23, after which it was replaced with a sensor of a new design and the problem isn’t reported again. The Ya-81 test saw the loss of the Ar gas used as the initial lubricant in the drive system, and later seizing of the bearing the drive system connected to, leading to its replacement with a graphite-based lubricant.

The news wasn’t all bad, however. The Eh-40 test demonstrated greater control of drum position by reducing the backlash in the kinematic circuit, for instance, and improvements to the materials and coatings used eliminated coating delamination problems, improving the system’s resistance to thermal cycling and vibrational stresses.

The Eh-44 drive unit was replaced against the advice of one of the Russian customers due to a lack of mandatory testing on the advanced drive system. This system remained installed at the time of Topaz International, and is something that we’ll look at in the next blog post.

A New Customer Enters the Fold

During this testing, an American company (which is not named) was approached about purchasing nearly complete Enisy reactors: the only thing the Soviets wouldn’t sell was the fissile fuel itself, although they offered to help with its manufacture. This was in addition to the three Russian customers (at least one of which was military, but again all remain unnamed). This company did not purchase any units, but did go to the US government with the offer.

This led to the Topaz International program, funded by the US Department of Defense’s Ballistic Missile Defense Organization. The majority of the personnel involved were employees of Los Alamos and Sandia National Laboratories, and the testing occurred at Kirtland Air Force Base in Albuquerque, NM.

As a personal note, I was just outside the perimeter fence when the aircraft carrying the test stand and reactors landed, and it remains one of the formational events of my childhood, even though I had only the vaguest understanding of what was actually happening, or that some day, more than 20 years later, I would be writing about this very program, which I saw reach a major inflection point.

The Topaz International program will be the subject of our next blog post. It’s likely to be a longer one (as this was), so it may take me a little longer than a week to get out, but the ability to compare and contrast Soviet and American testing standards on the same system is too golden an opportunity to pass up.

Stay tuned! More is coming soon!

References:

Topaz II Design Evolution, Voss 1994 https://www.researchgate.net/publication/234517721_TOPAZ_II_Design_Evolution

Russian Topaz II Test Program, Voss 1993 http://gnnallc.com/pdfs_r/SD%2006%20LA-UR-93-3398.pdf

Overview of the Nuclear Electric Propulsion Space Test Program, Voss 1994 https://www.osti.gov/servlets/purl/10157573

Thermionic System Evaluation Test: Ya-21U System, Topaz International Program, Schmidt et al 1996 http://www.dtic.mil/dtic/tr/fulltext/u2/b222940.pdf

Categories
Development and Testing Fission Power Systems History Nuclear Electric Propulsion Test Stands

History of US Astronuclear Reactors part 1: SNAP-2 and 10A

Hello, and welcome to Beyond NERVA! Today we’re going to look at the program that birthed the first astronuclear reactor to go into orbit, although the extent of the program far exceeds the flight record of a single launch.

Before we get into that, I have a minor administrative announcement that will develop into major changes for Beyond NERVA in the near-to-mid future! As you may have noticed, we have moved from beyondnerva.wordpress.com to beyondnerva.com. For the moment, there isn’t much different, but in the background a major webpage update is brewing! Not only will the home page be updated to make it easier to navigate the site (and see all the content that’s already available!), but the number of pages on the site is going to be increasing significantly. A large part of this is going to be integrating information that I’ve written about in the blog into a more topical, focused format – with easier access to both basic concepts and technical details being a priority. However, there will also be expansions on concepts, pages for technical concepts that don’t really fit anywhere in the blog posts, and more! As these updates become live, I’ll mention them in future blog posts. Also, I’ll post them on both the Facebook group and the new Twitter feed (currently not super active, largely because I haven’t found my “tweet voice” yet, but I hope to expand this soon!). If you are on either platform, you should definitely check them out!

The Systems for Nuclear Auxiliary Power, or SNAP program, was a major focus for a wide range of organizations in the US for many decades. The program extended everywhere from the bottom of the seas (SNAP-4, which we won’t be covering in this post) to deep space travel with electric propulsion. SNAP was divided up into an odd/even numbering scheme, with the odd model numbers (starting with the SNAP-3) being radioisotope thermoelectric generators, and the even numbers (beginning with SNAP-2) being fission reactor electrical power systems.
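
The even/odd convention is simple enough to capture in a line of code; a trivial sketch:

```python
def snap_system_type(model_number: int) -> str:
    """SNAP convention: odd model numbers were radioisotope thermoelectric
    generators (RTGs); even numbers were fission reactor power systems."""
    return "RTG" if model_number % 2 else "fission reactor"

print(snap_system_type(3))    # SNAP-3: RTG
print(snap_system_type(10))   # SNAP-10A: fission reactor
```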

Due to the sheer scope of the SNAP program, even eliminating systems that aren’t fission-based, this is going to be a two post subject. This post will cover the US Air Force’s portion of the SNAP reactor program: the SNAP-2 and SNAP-10A reactors; their development programs; the SNAPSHOT mission; and a look at the missions that these reactors were designed to support, including satellites, space stations, and other crewed and uncrewed installations. The next post will cover the NASA side of things: SNAP-8 and its successor designs as well as SNAP-50/SPUR. The one after that will cover the SP-100, SABRE, and other designs from the late 1970s through to the early 1990s, and will conclude with looking at a system that we mentioned briefly in the last post: the ENISY/TOPAZ II reactor, the only astronuclear design to be flight qualified by the space agencies and nuclear regulatory bodies of two different nations.

SNAP Reactor Capabilities and Status as of 1973, image DOE

The Beginnings of the US Astronuclear Program: SNAP’s Early Years

Early SNAP-2 Concept Art, image courtesy DOE

Beginning in the earliest days of both the nuclear age and the space age, nuclear power had a lot of appeal for the space program: high power density, high power output, and mechanically simple systems were in high demand for space agencies worldwide. The earliest mention of a program to develop nuclear electric power systems for spacecraft was the Pied Piper program, begun in 1954. This led to the development of the Systems for Nuclear Auxiliary Power program, or SNAP, the following year (1955), which was eventually canceled in 1973, as were so many other space-focused programs.

SNAP-2 powered space station concept image via DOE

Once space became a realistic place to send not only scientific payloads but personnel, the need to provide them with significant amounts of power became evident. Not only were most systems of the day far from the electricity-efficient designs that both NASA and Roscosmos would develop in the coming decades; but, at the time, the vision for a semi-permanent space station wasn’t 3-6 people orbiting in a (completely epic, scientifically revolutionary, collaboratively brilliant, and invaluable) zero-gee conglomeration of tin cans like the ISS, but larger space stations that provided centrifugal gravity, staffed ’round the clock by dozens of individuals. These weren’t just space stations for NASA, which was an infant organization at the time, but for the USAF, and possibly other institutions in the US government as well. In addition, what would provide a livable habitation for a group of astronauts would also be able to power a remote, uncrewed radar station in the Arctic, or in other extreme environments. Even if crew were there, the fact that the power plant wouldn’t have to be maintained was a significant military advantage.

Responsible for both radioisotope thermoelectric generators (which run on the natural radioactive decay of a radioisotope, selected according to its energy density and half-life) as well as fission power plants, SNAP programs were numbered with an even-odd system: even numbers were fission reactors, odd numbers were RTGs. These designs were never solely meant for in-space application, but the increased mission requirements and complexities of being able to safely launch a nuclear power system into space made this aspect of their use the most stringent, and therefore the logical one to design around. Additionally, while the benefits of a power-dense electrical supply are obvious for any branch of the military, the need for this capability in space far surpassed the needs of those on the ground or at sea.

The program was originally jointly run by the AEC’s Department of Reactor Development (which funded the reactor itself) and the USAF’s Wright Air Development Center (which funded the power conversion system); full control was handed over to the AEC in 1957. Atomics International was the prime contractor for the program.

There are a number of similarities in almost all the SNAP designs, probably for a number of reasons. First, all of the reactors that we’ll be looking at (as well as some other designs we’ll look at in the next post) used the same type of fissile fuel, even though the form, and the cladding, varied reasonably widely between the different concepts. Uranium-zirconium hydride (U-ZrH) was a very popular fuel choice at the time. Assuming hydrogen loss could be controlled (this was a major part of the testing regime in all the reactors that we’ll look at), it provided a self-moderating, moderate-to-high-temperature fuel form, which was a very attractive feature. This type of fuel is still used today in the TRIGA reactor, which, together with its direct descendants, is the most common form of research and test reactor worldwide. The high-powered reactors (SNAP-2 and -8) both initially used variations on the same power conversion system: a boiling mercury Rankine power conversion cycle. By the end of the testing regime this was determined to be workable, but to my knowledge it has never been proposed again (we’ll look at it briefly in the post on heat engines as power conversion systems, and a more in-depth look will be available in the future), although a mercury-based MHD conversion system is being offered as a power conversion system for an accelerator-driven molten salt reactor.

SNAP-2: The First American Built-For-Space Nuclear Reactor Design

SNAP-2 Reactor Cutaway, image DOE

The idea for the SNAP-2 reactor originally came from a 1951 Rand Corporation study looking at the feasibility of a nuclear powered satellite. By 1955, the possibilities that a fission power supply offered in terms of mass and reliability had captured the attention of many people in the USAF, which was (at the time) the organization most interested and involved in the exploitation of space for military purposes (outside the Army Ballistic Missile Agency at the Redstone Arsenal, whose space team would later form the core of NASA’s Marshall Space Flight Center).

The original request for the SNAP program, for what ended up becoming known as SNAP-2, came in 1955 from the AEC’s Defense Reactor Development Division and the USAF Wright Air Development Center. It called for possible power sources in the 1 to 10 kWe range that would be able to operate autonomously for one year, and the original proposal was for a zirconium hydride moderated, sodium-potassium (NaK) liquid metal cooled reactor with a boiling mercury Rankine power conversion system (similar to a steam turbine in operational principles, but we’ll look at the power conversion systems more in a later post), which is now known as SNAP-2. The design was refined into a 55 kWt, 5 kWe reactor operating at about 650°C outlet temperature and massing about 100 kg unshielded, and it was tested for over 10,000 hours. The epithermal neutron spectrum provided by the ZrH moderation would remain popular throughout much of the US in-space reactor program, both for electrical power and thermal propulsion designs. This design would later be adapted into the SNAP-10A reactor, with some modifications, as well.
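
The quoted figures make the net conversion efficiency easy to check:

```python
# SNAP-2 as quoted above: 55 kW thermal in, 5 kWe out of the
# mercury Rankine power conversion loop.
thermal_kw = 55.0
electric_kw = 5.0

efficiency = electric_kw / thermal_kw
waste_heat_kw = thermal_kw - electric_kw
print(f"Net efficiency: {efficiency:.1%}")          # ~9.1%
print(f"Heat to reject: {waste_heat_kw:.0f} kW")    # 50 kW
```

Around 9% net: respectable for a compact early dynamic conversion system, but it means ten times the electrical output has to be rejected as waste heat, which is what drives radiator size.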

SNAP Critical Assembly core, image DOE

SNAP-2’s first critical assembly test came in October of 1957, shortly after Sputnik-1’s successful launch. With 93% enriched 235U making up 8% of the weight of the U-ZrH fuel elements, a 1” beryllium inner reflector, and an outer graphite reflector (which could be varied in thickness), separated into two rough hemispheres to control the construction of a critical assembly, this device was able to test many of the reactivity conditions needed for materials testing on a small economic scale, as well as the behavior of the fuel itself. The primary concerns with testing on this machine were reactivity, activation, and the intrinsic steady state behavior of the fuel that would be used for SNAP-2. A number of materials were also tested for reflection and neutron absorbency, both for main core components and for out-of-core mechanisms. This was followed by the SNAP-2 Experimental Reactor in 1959-1960 and the SNAP-2 Development Reactor in 1961-1962.

SNAP-2 Experimental Reactor core cross section diagram, image DOE

The SNAP-2 Experimental Reactor (S2ER or SER) was built to verify the core geometry and basic reactivity controls of the SNAP-2 reactor design, as well as to test the basics of the primary cooling system, materials, and other basic design questions, but was not meant to be a good representation of the eventual flight system. Construction started in June 1958, with construction completed by March 1959. Dry (Sept 15) and wet (Oct 20) critical testing was completed the same year, and power operations started on Nov 5, 1959. Four days later, the reactor reached design power and temperature operations, and by April 23 of 1960, 1000 hours of continuous testing at design conditions were completed. Following transient and other testing, the reactor was shut down for the last time on November 19, 1960, just over one year after it had first achieved full power operations. Between May 19 and June 15, 1961, the reactor was disassembled and decommissioned. Testing on various reactor materials, especially the fuel elements, was conducted, and these test results refined the design for the Development Reactor.

S2ER Schedule and Timeline

SNAP 2 Development Reactor core cross section, image DOE

The SNAP-2 Development Reactor (S2DR or SDR, also called the SNAP-2 Development System, S2DS) was installed in a new facility at the Atomics International Santa Susana research facility to better manage the increased testing requirements for the more advanced reactor design. While this wasn’t going to be a flight-type system, it was designed to inform the flight system on many of the details that the S2ER wasn’t able to. Interestingly, it is much harder to find information on than the S2ER. This reactor incorporated many changes from the S2ER, and went through several iterations to tweak the design for a flight reactor. Zero power testing occurred over the summer of 1961, and testing at power began shortly after (although at SNAP-10 power and temperature levels). Testing continued until December of 1962, and further refined the SNAP-2 and -10A reactors.

S2DR Development Timeline

A third set of critical assembly reactors, known as the SNAP Development Assembly series, was constructed at about the same time, meant to provide fuel element testing, criticality benchmarks, reflector and control system worth, and other core dynamic behaviors. These were also built at the Santa Susana facility, and would provide key test capabilities throughout the SNAP program. This water-and-beryllium reflected core assembly allowed for a wide range of testing environments, and would continue to serve the SNAP program through to its cancellation. Going through three iterations, the designs were used more to test fuel element characteristics than the core geometries of individual core concepts. This informed all three major SNAP designs in fuel element material and, to a lesser extent, heat transfer (the SNAP-8 used thinner fuel elements) design.

Extensive testing was carried out on all aspects of the core geometry, fuel element geometry and materials, and other behaviors of the reactor; but by May 1960 there was enough confidence in the reactor design for the USAF and AEC to plan on a launch program for the reactor (and the SNAP-10A), called SNAPSHOT (more on that below). Testing using the SNAP-2 Experimental Reactor occurred in 1960-1961, and the follow-on test program, including the SNAP-2 Development Reactor, occurred in 1962-63. These programs, as well as the SNAP Critical Assembly 3 series of tests (used for SNAP-2 and -10A), allowed for a mostly finalized reactor design to be completed.

S2 PCS Cutaway Drawing
CRU mercury Rankine power conversion system cutaway diagram, image DOE

Development of the power conversion system (PCS), a Rankine (steam-cycle) turbine using mercury as the working fluid, began in 1958 with a mercury boiler built to test the components in a non-nuclear environment. The turbine faced many technical challenges, including bearing lubrication and wear issues, turbine blade pitting and erosion, and fluid dynamics problems. As is often the case with advanced reactor designs, the main challenge wasn’t the reactor core itself, nor the control mechanisms, but the non-nuclear portions of the power unit. This is a common theme in astronuclear engineering. More recently, JIMO experienced similar problems when the final system design called for a theoretical but not-yet-demonstrated supercritical CO2 Brayton turbine (as we’ll see in a future post). Without a power conversion system of usable efficiency and low enough mass, an astronuclear power system has no means of delivering the electricity it’s called upon to deliver.

Reactor shielding, in the form of a metal honeycomb impregnated with a high-hydrogen material (in this case a form of paraffin), was common to all SNAP reactor designs. The high hydrogen content allowed for the best hydrogen density of the available materials, and therefore the greatest shielding per unit mass of the available options.

s10 FSM Reactor
SNAP-2/10A FSM reflector and drum mechanism pre-test, image DOE

Testing on the SNAP 2 reactor system continued until 1963, when the reactor core itself was re-purposed into the redesigned SNAP-10, which became the SNAP-10A. At this point the SNAP-2 reactor program was folded into the SNAP-10A program. SNAP-2 specific design work was more or less halted from a reactor point of view, due to a number of factors, including the slower development of the CRU power conversion system, the large number of moving parts in the Rankine turbine, and the advances made in the more powerful SNAP-8 family of reactors (which we’ll cover in the next post). However, testing on the power conversion system continued until 1967, due to its application to other programs. This didn’t mean that the reactor was useless for other missions; in fact, it was far more useful, due to its far more efficient power conversion system for crewed space operations (as we’ll see later in this post), especially for space stations. However, even this role would be surpassed by a derivative of the SNAP-8, the Advanced ZrH Reactor, and the SNAP-2 would end up being deprived of any useful mission.

The SNAP Reactor Improvement Program, in 1963-64, continued to optimize and advance the design without nuclear testing, through computer modeling, flow analysis, and other means; but the program ended without flight hardware being either built or used. We’ll look more at the missions that this reactor was designed for later in this blog post, after looking at its smaller sibling, the first reactor (and only US reactor) to ever achieve orbit: the SNAP-10A.

S2 Program History Table

SNAP-10: The Father of the First Reactor in Space

At about the same time as the SNAP 2 Development Reactor tests (1958), the USAF requested a study on a thermoelectric power conversion system, targeting a 0.3 kWe-1kWe power regime. This was the birth of what would eventually become the SNAP-10 reactor. This reactor would evolve in time to become the SNAP-10A reactor, the first nuclear reactor to go into orbit.

In the beginning, this design was superficially quite similar to the Romashka reactor that we’ll examine in the USSR part of this blog post, with plates of U-ZrH fuel separated by beryllium plates for heat conduction, and surrounded by radial and axial beryllium reflectors. The core was cooled purely by conduction internally and by radiation externally; this was later changed to a NaK forced-convection cooling system for better thermal management (see below). The resulting design was later adapted to the SNAP-4 reactor, which was designed for underwater military installations rather than spaceflight. Outside the radial reflectors were the thermoelectric power conversion systems, with a finned radiating casing being the only major component that was visible. The design looked, superficially at least, remarkably like the RTGs that would be used for the next several decades. However, even with the low conversion efficiency of thermoelectric systems, this was a far more powerful source of electricity than the RTGs available at the time (or even today) for space missions.

Reactor and Shield Cutaway
SNAP-10A Reactor sketch, image DOE

Within a short period, however, the design changed dramatically, resulting in a core very similar to that of the SNAP-2 reactor under development at the same time. Modifications were made to the SNAP-2 baseline, and the two reactor cores eventually became identical. This also led to the NaK cooling system being implemented on the SNAP-10A. Many of the test reactors for the SNAP-2 system were also used to develop the SNAP-10A; the final design, while far lower in electrical output, differed mainly in its power conversion system, not its reactor structure. This reactor design was tested extensively, with the S2ER, S2DR, and SCA test series (4A, 4B, and 4C) reactors, as well as the SNAP-10 Ground Test Reactor (S10FS-1). The new design used a similar, but slightly smaller, conical radiator with NaK as the working fluid.

This was a far lower power design than the SNAP-2, coming in at 30 kWt, but with the 1.6% conversion efficiency of the thermoelectric system, its electrical power output was only 500 We. It also ran almost 100°C (about 180°F) cooler, allowing for longer fuel element life, but offering less of a thermal gradient to work with, and therefore a lower theoretical maximum efficiency. This tradeoff was the best on offer, though, and the power conversion system’s lack of moving parts, and its ease of being tested in a non-nuclear environment without extensive support equipment, made it more robust from an engineering point of view. The overall design life of the reactor, though, remained short: only about 1 year, and less than 1% fissile fuel burnup. It’s possible, and maybe even likely, that (barring spacecraft-associated failure) the reactor could have provided power for longer durations; however, the longer the reactor operates, the more the fuel swells due to fission product buildup, and at some point this would cause the clad of the fuel to fail. Other challenges to reactor design, such as fission poison buildup, clad erosion, and mechanical wear, would end the reactor’s operational life at some point, even if the fuel elements could still provide more power.
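A quick back-of-the-envelope check of these figures is straightforward; the sketch below uses the 30 kWt and 1.6% numbers from the post, while the Carnot-limit temperatures are purely illustrative placeholders, not published SNAP values:

```python
# Sanity check of the SNAP-10A power figures quoted above.
thermal_power_w = 30_000   # 30 kWt reactor thermal output (from the post)
efficiency = 0.016         # ~1.6% thermoelectric conversion efficiency

electrical_power_w = thermal_power_w * efficiency
print(electrical_power_w)  # 480.0 We, consistent with the quoted ~500 We

# The Carnot limit illustrates why a cooler core means a lower theoretical
# maximum efficiency. These temperatures are illustrative only.
def carnot_limit(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum possible heat-engine efficiency between two temperatures."""
    return 1.0 - t_cold_k / t_hot_k

print(carnot_limit(800.0, 500.0))  # hotter core: higher efficiency ceiling
print(carnot_limit(700.0, 500.0))  # ~100 K cooler core: lower ceiling
```

The second half shows the tradeoff in the text: dropping the hot-side temperature by roughly 100 K shrinks the theoretical efficiency ceiling, regardless of the conversion technology used.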

SNAP Meteorological Satellite
SNAP-10A satellite concept, image DOE

The SNAP-10A was not meant to power crewed facilities, since the power output was so low that multiple installations would be needed. This meant that, while all SNAP reactors were meant to be largely or wholly unmaintained by crew personnel, this reactor had no possibility of being maintained. The reliability requirements for the system were higher because of this, and the lack of moving parts in the power conversion system aided in meeting this requirement. The system was also designed to have only a brief (72 hour) period during which active reactivity control would be used, to mitigate any startup transients and to establish steady-state operations, before the active control systems would be left in their final configuration, leaving the reactor entirely self-regulating. This placed an additional burden on the reactor designers to have a very strong understanding of the behavior of the reactor, its long-term stability, and any effects that would occur during the year-long lifetime of the system.

Reflector Ejection Test
Reflector ejection test in progress, image DOE

At the end of the reactor’s life, it was designed to stay in orbit until the short-lived and radiotoxic portions of the reactor had gone through at least five half-lives, reducing the radioactivity of the system to a very low level. At the end of this process, the reactor would re-enter the atmosphere, the radial and end reflectors would be ejected, and the entire thing would burn up in the upper atmosphere. From there, winds would dilute any residual radioactivity to less than what was released by a single small nuclear test (which were still being conducted in Nevada at the time). While there’s nothing wrong with this approach from a health physics point of view, as we saw in the last post on the BES-5 reactors the Soviet Union was flying, there are major international political problems with this concept. The SNAPSHOT reactor continues to orbit the Earth (currently at an altitude of roughly 1300 km), and will do so for more than 2000 years according to recent orbital models, so it is not in danger of re-entry any time soon; but, at some point, the reactor will need to be moved into a graveyard orbit or collected and returned to Earth – a problem which currently has no solution.
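The “five half-lives” rule of thumb above can be checked directly: each half-life cuts the remaining activity in half, so after n half-lives a fraction (1/2)^n of the original activity remains. A minimal sketch:

```python
# Fraction of a radionuclide's activity remaining after n half-lives.
def remaining_fraction(n_half_lives: float) -> float:
    return 0.5 ** n_half_lives

print(remaining_fraction(5))  # 0.03125 -> only ~3% of the original activity
```

This is why five half-lives is a common benchmark in health physics: the short-lived fission product inventory has decayed by a factor of 32 by that point.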

The Runup to Flight: Vehicle Verification and Integration

1960 brought big plans for orbital testing of both the SNAP-2 and SNAP-10 reactors, under the program SNAPSHOT: Two SNAP-10 launches, and two SNAP-2 launches would be made. Lockheed Missiles System Division was chosen as the launch vehicle, systems integration, and launch operations contractor for the program; while Atomics International, working under the AEC, was responsible for the power plant.

The SNAP-10A reactor design was meant to be decommissioned by orbiting for long enough that the fission product inventory (the waste portion of the burned fuel elements, and the source of the vast majority of the radiation from the reactor post-fission) would naturally decay away, and then the reactor would be de-orbited, and burn up in the atmosphere. This was planned before the KOSMOS-954 accident, when the possibility of allowing a nuclear reactor to burn up in the atmosphere was not as anathema as it is today. This plan wouldn’t increase the amount of radioactivity that the public would receive to any appreciable degree; and, at the time, open-air testing of nuclear weapons was the norm, sending up thousands of kilograms of radioactive fallout per year. However, it was important that the fuel rods themselves would burn up high in the atmosphere, in order to dilute the fuel elements as much as possible, and this is something that needed to be tested.

RFD1
RFD-1 Experimental Payload

Enter the SNAP Reactor Flight Demonstration Number 1 mission, or RFD-1. The concept of this test was to demonstrate that the planned disassembly and burnup process would occur as expected, and to inform the further design of the reactor if there were any unexpected effects of re-entry. Sandia National Labs took the lead on this part of the SNAPSHOT program. After looking at the budget available, the launch vehicles available, and the payloads, the team realized that orbiting a nuclear reactor mockup would be too expensive, and another solution needed to be found. This led to the mission design of RFD-1: a sounding rocket would be used, and the core geometry would be changed to account for the short flight time, compared to a real reentry, in order to get the data needed for the de-orbiting testing of the actual SNAP-10A reactor that would be flown.

So what did this mean? Ideally, a mockup of the SNAP-10A reactor would be developed, differing only in that the fuel elements would contain depleted rather than highly enriched uranium. It would be launched on the same launch vehicle that the SNAPSHOT mission would use (an Atlas-Agena D), be placed in the same orbit, and then be deorbited at the same angle and in the same place as the actual reactor would be; maybe even at a slightly less favorable reentry angle, to learn how accurate the calculations were and what the margin of error would be. However, an Atlas-Agena rocket isn’t a cheap piece of hardware, either to purchase or to launch, and the project managers knew that they wouldn’t be able to afford one, so they went hunting for a more economical alternative.

RFD1 Flight Path
RFD-1 Mission Profile, image DOE

This led the team to decide on a NASA Scout sounding rocket as the launch vehicle, launched from Wallops Island launch site (which still launches sounding rockets, as well as the Antares rocket, to this day, and is expanding to launch Vector Space and RocketLab orbital rockets as well in the coming years). Sounding rockets don’t reach orbital altitudes or velocities, but they get close, and so can be used effectively to test orbital components for systems that would eventually fly in orbit, but for much less money. The downside is that they’re far smaller, with less payload and less velocity than their larger, orbital cousins. This led to needing to compromise on the design of the dummy reactor in significant ways – but those ways couldn’t compromise the usefulness of the test.

Sandia Corporation (which runs Sandia National Laboratories to this day, although who runs Sandia Corp changes… it’s complicated) and Atomics International engineers got together to figure out what could be done with the Scout rocket and a dummy reactor to provide as useful an engineering validation as possible, while sticking within the payload requirements and flight profile of the relatively small, suborbital rocket that they could afford. Because the dummy reactor wouldn’t be going nearly as fast as it would during true re-entry, a steeper angle of attack when the test was returning to Earth was necessary to get the velocity high enough to get meaningful data.

Scout rocket
Scout sounding rocket, image DOE

The Scout rocket that was being used had much less payload capability than the Atlas rocket, so if there was a system that could be eliminated, it was removed to save weight. No NaK was flown on RFD-1, the power conversion system was left off, the NaK pump was simulated by an empty stainless steel box, and the reflector assembly was made out of aluminum instead of beryllium, both for weight and toxicity reasons (BeO is not something that you want to breathe!). The reactor core didn’t contain any dummy fuel elements, just a set of six stainless steel spacers to keep the grid plates at the appropriate separation. Because the angle of attack was steeper, the test would be shorter, meaning that there wouldn’t be time for the reactor’s reflectors to degrade enough to release the fuel elements. The fuel elements were the most important part of the test, however, since it needed to be demonstrated that they would completely burn up upon re-entry, so a compromise was found.

The fuel elements would be clustered on the outside of the dummy reactor core, and ejected early in the burnup test period. While the short time and high angle of attack meant that there wouldn’t be enough time to observe full burnup, the beginning of the process would provide enough data to allow accurate simulations of the process to be made. How to ensure that this data, the most important part of the test, could actually be collected was another challenge, though, which forced even more compromises in RFD-1’s design. Testing equipment had to be mounted in such a way as to not change the aerodynamic profile of the dummy reactor core. Other minor changes were needed as well, but despite all of the differences between RFD-1 and the actual SNAP-10A, the thermodynamic and aerodynamic behavior of the two systems differed in only very minor ways.

Testing support came from Wallops Island and NASA’s Bermuda tracking station, as well as three ships and five aircraft stationed near the impact site for radar observation. The ground stations would provide both radar and optical support for the RFD-1 mission, verifying reactor burnup, fuel element burnup, and other test objective data, while the aircraft and ships were primarily tasked with collecting telemetry data from on-board instruments, as well as providing additional radar data; although one NASA aircraft carried a spectrometer in order to analyze the visible radiation coming off the reentry vehicle as it disintegrated.

RFD-1 Burnup Splice
Film splice of RV burnup during RFD-1, image DOE

The test went largely as expected. Due to the steeper angle of attack, full fuel element burnup wasn’t possible, even with the early ejection of the simulated fuel rods, but the amount that they did disintegrate during the mission showed that the reactor’s fuel would be sufficiently distributed at a high enough altitude to prevent any radiological risk. The dummy core behaved mostly as expected, although there were some disagreements between the predicted behavior and the flight data, due to the fact that the re-entry vehicle was on such a steep angle of attack. However, the test was considered a success, and paved the way for SNAPSHOT to go forward.

The next task was to mount the SNAP-10A to the Agena spacecraft. Because the reactor was a very different power supply than was used at the time, special power conditioning units were needed to transfer power from the reactor to the spacecraft. This subsystem was mounted on the Agena itself, along with tracking and command functionality, control systems, and voltage regulation. While Atomics International worked to ensure the reactor would be as self-contained as possible, the reactor and spacecraft were fully integrated as a single system. Besides the reactor itself, the spacecraft carried a number of other experiments, including a suite of micrometeorite detectors and an experimental cesium contact thruster, which would operate from a battery system that would be recharged by electricity produced by the reactor.

S10FSM Vac Chamber
FSEM-3 in vacuum chamber for environmental and vibration tests, image DOE

In order to ensure the reactor could be integrated with the spacecraft, a series of Flight System Prototypes (FSM-1 and -4; FSEM-2 and -3 were used for electrical system integration) were built. These were full scale, non-nuclear mockups that contained a heating unit to simulate the reactor core. Simulations were run using FSM-1 from launch to startup on orbit, with all testing occurring in a vacuum chamber. The final one of the series, FSM-4, was the only one that used NaK coolant, and it was used to verify that the thermal performance of the NaK system met flight system requirements. FSEM-2 did not have a power system mockup; instead it used a mass mockup of the reactor, power conversion system, radiator, and other associated components. Testing with FSEM-2 showed that there were problems with the original electrical design of the spacecraft, which required a rebuild of the test-bed and a modification of the flight system itself. Once complete, the renamed FSEM-2A underwent a series of shock, vibration, acceleration, temperature, and other tests (known as the “Shake and Bake” environmental tests), which it subsequently passed. The final mockup, FSEM-3, underwent extensive electrical systems testing at Lockheed’s Sunnyvale facility, using simulated mission events to test the compatibility of the spacecraft and the reactor. Additional electrical systems changes were implemented before the program proceeded, but by the middle of 1965 the electrical system and spacecraft integration tests were complete and the necessary changes were implemented into the flight vehicle design.

SNAP_10A_Space_Nuclear_Power_Plant
SNAP-10A F-3 (flight unit for SNAPSHOT) undergoing final checks before spacecraft integration. S10F-4 was identical.

The last round of pre-flight testing was a test of a flight-configured SNAP-10A reactor under fission power. This nuclear ground test, S10F-3, was identical to the system that would fly on SNAPSHOT, save some small ground safety modifications, and was tested from January 22, 1965 to March 15, 1966. It operated uninterrupted for over 10,000 hours, with the first 390 days being at a power output of 35 kWt, and (following AEC approval) an additional 25 days of testing at 44 kWt. This testing showed that, after one year of operation, the continuing problem of hydrogen redistribution caused the reactor’s outlet temperature to drop more than expected, and additional, relatively minor, uncertainties about reactor dynamics were seen as well. However, overall, the test was a success, and paved the way for the launch of the SNAPSHOT spacecraft in April 1965; and the continued testing of S10F-3 during the SNAPSHOT mission was able to verify that the thermal behavior of astronuclear power systems during ground test is essentially identical to orbiting systems, proving the ground test strategy that had been employed for the SNAP program.
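The endurance figures quoted above hang together arithmetically; a minimal sketch using only the durations given in the post:

```python
# Sanity check on the S10F-3 endurance figures quoted above.
hours_at_35_kwt = 390 * 24   # 390 days at 35 kWt
hours_at_44_kwt = 25 * 24    # 25 days at 44 kWt

print(hours_at_35_kwt)                    # 9360
print(hours_at_35_kwt + hours_at_44_kwt)  # 9960; with startup and transient
                                          # operations, the full run exceeded
                                          # 10,000 hours
```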

SNAPSHOT: The First Nuclear Reactor in Space

In 1963 there was a change in the way the USAF was funding these programs. While the reactors themselves were solely under the direction of the AEC, the USAF still funded research into the power conversion systems, since they were still operationally useful; but that changed in 1963, with the removal of the 0.3 kWe to 1 kWe portion of the program. Budget cuts killed the ZrH-moderated core of the SNAP-2 reactor, although funding continued for the Hg vapor Rankine conversion system (which was being developed by TRW) until 1966. The SNAP-4 reactor, which had not even been run through criticality testing, was canceled, as was the planned flight test of the SNAP-10A, which had been funded by the USAF, because with the cancellation of the 0.3-1 kWe power system program they no longer had an operational need for the power system. The associated USAF program that would have used the power supply was well behind schedule and over budget, and was canceled at the same time.

The USAF attempted to get more funding, but was denied. All parties involved held a series of meetings to try to save the program, even in a reduced form, but the needed funds weren’t forthcoming: funding shortfalls in the AEC (which received only $8.6 million of the $15 million it requested), as well as severe restrictions on the Air Force (which continued to fund Lockheed for development and systems integration work through bureaucratic creativity), kept the program from moving forward. At the same time, it was realized that being able to deliver kilowatts or megawatts of electrical power, rather than the watts then available, would make the reactor a much more attractive program for a potential customer (either the USAF or NASA).

Finally, in February of 1964 the Joint Congressional Committee on Atomic Energy was able to fund the AEC to the tune of $14.6 million to complete the SNAP-10A orbital test. This reactor design had already been extensively tested and modeled, and unlike the SNAP-2 and -8 designs, no complex, highly experimental, mechanical-failure-prone power conversion system was needed.

On Orbit Artist
SNAP-10A reactor, artist’s rendering (artist unknown), image DOE

SNAPSHOT consisted of a SNAP-10A fission power system mounted to a modified Agena-D spacecraft, which by this time was an off-the-shelf, highly adaptable spacecraft used by the US Air Force for a variety of missions. An experimental cesium contact ion thruster (read more about these thrusters on the Gridded Ion Engine page) was installed on the spacecraft for in-flight testing. The mission was to validate the SNAP-10A architecture with on-orbit experience, proving the capability to operate for 90 days without active control while providing 500 W (28.5 V DC) of electrical power. Additional requirements included: the use of a SNAP-2 reactor core with minimal modification (to allow the higher-output SNAP-2 system, with its mercury vapor Rankine power conversion system, to be validated as well when the need arose); eliminating the need (while retaining the option) for active control of the reactor for one year once startup was achieved (to prove autonomous operation capability); facilitating safe ground handling during spacecraft integration and launch; and accommodating future growth potential in both available power and power-to-weight ratio.
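For scale, the quoted electrical requirement implies the bus current the Agena’s power conditioning hardware had to handle. This is simple Ohm’s-law arithmetic from the numbers in the post, not a figure from the SNAPSHOT documentation:

```python
# Bus current implied by the quoted SNAPSHOT electrical requirement.
power_w = 500.0    # required electrical output (from the post)
voltage_v = 28.5   # DC bus voltage (from the post)

current_a = power_w / voltage_v
print(round(current_a, 1))  # ~17.5 A
```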

While the threshold for mission success was set at 90 days, Atomics International wanted to prove 1 year of capability for the system; so, in those 90 days, the goal was to demonstrate that the entire reactor system was capable of one year of operation (the SNAP-2 requirement). Atomics International imposed additional, more stringent guidelines for the mission as well, specifying a number of design requirements, including: self-containment of the power system outside the structure of the Agena, as much as possible; more stringent mass and center-of-gravity requirements than those specified by the US Air Force; meeting the military specifications for EM radiation exposure to the Agena; and others.

atlas-slv3_agena-d__snapshot__1
SNAPSHOT launch, image USAF via Gunter’s Space Page

The flight was formally approved in March, and the launch occurred on April 3, 1965 on an Atlas-Agena D rocket from Vandenberg Air Force Base. The launch went perfectly, and placed the SNAPSHOT spacecraft in a polar orbit, as planned. Sadly, the mission could not be considered either routine or simple. One of the impedance probes failed before launch, and part of the micrometeorite detector system failed before returning data. A number of other minor faults were detected as well, but perhaps the most troubling was that there were shorts and voltage irregularities coming from the ion thruster, due to high voltage failure modes, as well as excessive electromagnetic interference from the system, which reduced the telemetry data to an unintelligible mess. The thruster was shut off until later in the flight, in order to focus on testing the reactor itself.

The reactor was given the startup order 3.5 hours into the flight, when the two gross adjustment control drums were fully inserted and the two fine control drums began a stepwise reactivity insertion. Within 6 hours, the reactor achieved on-orbit criticality, and the active control portion of the reactor test program began. For the next 154 hours, the control drums were operated by ground commands to test reactor behavior. Due to the problems with the ion engine, the failure sensing and malfunction sensing systems were also switched off, because these could have been corrupted by the errant thruster. Following the first 200 hours of reactor operations, the reactor was set to autonomous operation at full power. Between 600 and 700 hours later, the voltage output of the reactor, as well as its temperature, began to drop; an effect that the S10F-3 ground test reactor had also demonstrated, due to hydrogen migration in the core.

On May 16, just over one month after being launched into orbit, contact was lost with the spacecraft for about 40 hours. Some time during this blackout, the reactor’s reflectors ejected from the core (although they remained attached to their actuator cables), shutting down the core. This spelled the end of reactor operations for the spacecraft, and when the emergency batteries died five days later all communication with the spacecraft was lost forever. Only 45 days had passed since the spacecraft’s launch, and information was received from the spacecraft for only 616 orbits.

What caused the failure? There are many possibilities, but when the telemetry from the spacecraft was read, it was obvious that something had gone badly wrong. The only thing that can be said with complete confidence is that the error came from the Agena spacecraft rather than from the reactor. No indications had been received before the blackout that the reactor was about to scram itself (the reflector ejection was the emergency scram mechanism), and the problem wasn’t one that should have been able to occur without ground commands. However, with the telemetry data gained from the dwindling battery after the shutdown, some suppositions could be made. The most likely immediate cause of the reactor’s shutdown was traced to a possible spurious command from the high voltage command decoder, part of the Agena’s power conditioning and distribution system. This in turn was likely caused by one of two scenarios: either a component of the voltage regulator failed, or the regulator became overstressed, either by the unusually low-power vehicle loads or by the command for the reactor to increase power output. Sadly, the root cause of this failure cascade was never directly determined, but all of the data received pointed to a high-voltage failure of some sort, rather than a low-voltage error (which could also have resulted in a reactor scram). Other possible causes of instrumentation or reactor failure – such as the thermal or radiation environment, collision with another object, onboard explosion of the chemical propellants used in the Agena’s main engines, and previously noted flight anomalies, including the arcing and EM interference from the ion engine – were all eliminated as well.

Despite the spacecraft’s mysterious early demise, SNAPSHOT provided many valuable lessons in space reactor design, qualification, ground handling, launch challenges, and many other aspects of handling an astronuclear power source for potential future missions. Suggestions for improved instrumentation design and performance characteristics; provision of a sunshade for the main radiator to eliminate the sun/shade efficiency difference observed during the mission; the use of a SNAP-2 type radiation shield to allow off-the-shelf, non-radiation-hardened electronic components, saving both money and weight on the spacecraft itself; and other minor changes were all suggested after the conclusion of the mission. Finally, the safety program developed for SNAPSHOT, including the SCA4 submersion criticality tests, the RFD-1 test, and the good agreement in reactor behavior between the on-orbit and ground test versions of the SNAP-10A, showed that both the AEC and the customer of the SNAP-10A (be it the US Air Force or NASA) could have confidence that the program was ready for whatever mission it was needed for.

Sadly, at the time of SNAPSHOT there simply wasn’t a mission that needed this system. 500 We isn’t much power, even though it was more power than was needed for many systems that were being used at the time. While improvements in the thermoelectric generators continued to come in (and would do so all the way to the present day, where thermoelectric systems are used for everything from RTGs on space missions to waste heat recapture in industrial facilities), the simple truth of the matter was that there was no mission that needed the SNAP-10A, so the program was largely shelved. Some follow-on paper studies would be conducted, but the lowest powered of the US astronuclear designs, and the first reactor to operate in Earth orbit, would be retired almost immediately after the SNAPSHOT mission.

Post-SNAPSHOT SNAP: the SNAP Improvement Program

The SNAP fission-powered program didn’t end with SNAPSHOT; far from it. While the SNAP reactors only ever flew once, their design was mature, well-tested, and in most particulars ready to fly in a short time – and the problems associated with those particulars had been well-addressed on the nuclear side of things. The Rankine power conversion system for the SNAP-2, which went through five iterations, reached technological maturity as well, having operated in a non-nuclear environment for close to 5,000 hours while remaining in excellent condition, meaning that the 10,000 hour requirement for the PCS would be met without any significant challenges. The thermoelectric power conversion system also continued to be developed, focusing on an advanced silicon-germanium thermoelectric converter, which was highly sensitive to fabrication and manufacturing processes. We’ll look more at thermoelectrics in the power conversion systems series of blog posts; for now, just keep in mind that the power conversion systems continued to improve throughout this time, not just the reactor core design.

LiH FE Cracking
Fuel element post-irradiation. Notice the cracks where the FE would rest in the endcap reflector

On the reactor side of things, the biggest challenge was definitely hydrogen migration within the fuel elements. As the hydrogen migrates away from the ZrH fuel, many problems occur: unpredictable reactivity within the fuel elements; temperature changes (dehydrogenated fuel elements developed hotspots, which in turn drove more hydrogen out of the fuel element); and changes in the ductility of the fuel. These caused major headaches for end-of-life behavior of the reactors and severely limited the fuel element temperature that could be achieved. However, the necessary testing for the improvement of those systems could easily be conducted with less-expensive reactor tests, including the SCA4 test-bed, and didn’t require flight architecture testing to continue to be improved.

The maturity of these two reactors led to a short-lived program in the 1960s to improve them, the SNAP Reactor Improvement Program. The SNAP-2 and -10 reactors went through many different design changes, some large and some small – and some leading to new reactor designs based on the shared reactor core architecture.

By this time, the SNAP-2 had mostly faded into obscurity. However, the fact that it shared a reactor core with the SNAP-10A, and that the power conversion system was continuing to improve, warranted some small studies to improve its capabilities. The two of note that are independent of the core (all of the design changes for the -10 that will be discussed can be applied to the -2 core as well, since at this point they were identical) are the change from a single mercury boiler to three, to allow more power throughput and to reduce loads on one of the more challenging components, and combining multiple cores into a single power unit. These were proposed together for a space station design (which we’ll look at later) to allow an 11 kWe power supply for a crewed station.

The vast majority of this work was done on the -10A. Any further reactors of this type would have had an additional three sets of 1/8” beryllium shims on the external reflector, increasing the initial reactivity by about 50 cents (reactivity is measured relative to exact criticality: zero dollars means the reactor is exactly critical, and one dollar of excess reactivity means prompt criticality; a fresh core typically carries a few dollars of excess reactivity, often around $2-$3, to account for fission product buildup); this means that additional burnable poisons (elements which absorb neutrons and transmute into something that is mostly neutron-transparent, evening out the reactivity of the reactor over its lifetime) could be inserted in the core at construction, mitigating the reactivity loss experienced during earlier operation of the reactor. With this, and a number of other minor tweaks to reflector geometry and a slightly lowered core outlet temperature, the life of the SNAP-10A was able to be extended from the initial design goal of one year to five years of operation. The end-of-life power level of the improved -10A was 39.5 kWt, with an outlet temperature of 980 F (527°C) and a power density of 0.14 kWt/lb (0.31 kWt/kg).
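To make the dollars-and-cents bookkeeping concrete, here is a minimal Python sketch. The delayed neutron fraction used here is a typical figure for U-235 fuel, assumed purely for illustration; the actual value for the SNAP core would come from the reactor physics reports.

```python
def reactivity_dollars(k_eff, beta_eff=0.0065):
    """Reactivity rho = (k_eff - 1) / k_eff, expressed in dollars (rho / beta_eff).

    beta_eff is the effective delayed neutron fraction; ~0.0065 is a
    textbook value for U-235 fuel, assumed here for illustration.
    Zero dollars = exactly critical; +1 dollar = prompt critical.
    """
    rho = (k_eff - 1.0) / k_eff
    return rho / beta_eff

# An exactly critical core sits at zero dollars:
print(reactivity_dollars(1.0))                    # 0.0

# A shim addition worth ~50 cents corresponds to only a tiny change in k_eff:
print(round(reactivity_dollars(1.00326), 2))      # ~0.5 dollars
```

The takeaway is that the 50-cent shim worth quoted above is a very small perturbation in multiplication factor, which is why fine reflector adjustments could be used for lifetime reactivity management.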

Interim 10A 2
Interim SNAP 10A/2, image DOE

These design modifications led to another iteration of the SNAP-10A, the Interim SNAP-10A/2 (I-10A/2). This reactor’s core was identical, but the reflector was further enhanced, and the outlet temperature and reactor power were both increased. In addition, even more burnable poisons were added to the core to account for the higher power output of the reactor. Perhaps the biggest design change with the Interim -10A/2 was the method of reactor control: rather than the passive control used on the -10A, the I-10A/2 was actively controlled for its entire period of operation, using the control drums to manage the reactivity and power output of the reactor. As with the improved -10A design, this reactor would be able to have an operational lifetime of five years. These improvements gave the I-10A/2 an end-of-life power rating of 100 kWt, an outlet temperature of 1200 F (648°C), and an improved power density of 0.33 kWt/lb (0.73 kWt/kg).

10A2 Table
Interim SNAP 10A/2 Design References, image DOE

This design, in turn, led to the Upgraded SNAP-10A/2 (U-10A/2). The biggest in-core difference between the I-10A/2 and the U-10A/2 was the hydrogen barrier used in the fuel elements: rather than using the initial design that was common to the -2, -10A, and I-10A/2, this reactor used the hydrogen barrier from the SNAP-8 reactor, which we’ll look at in the next blog post. This is significant, because the degradation of the hydrogen barrier over time, and the resulting loss of hydrogen from the fuel elements, was the major lifetime limiting factor of the SNAP-10 variants up until this point. This reactor also went back to static control, rather than the active control used in the I-10A/2. As with the other -10A variants, the U-10A/2 had a possible core lifetime of five years, and other than an improvement of 100 F in outlet temperature (to 1300 F), and a marginal drop in power density to 0.31 kWt/lb, it shared many of the characteristics that the I-10A/2 had.

SNAP-10B: The Upgrade that Could Have Been

10B Cutaway System
SNAP-10B Cutaway Diagram, image DOE

One consistent mass penalty in the SNAP-10A variants that we’ve looked at so far is the control drums: relatively large reactivity insertions were possible with a minimum of movement due to the wide profile of the control drums, but this also meant that they extended well away from the reflector, especially early in the mission. This meant that, in order to prevent neutron backscatter from hitting the rest of the spacecraft, the shield had to be relatively wide compared to the size of the core – and the shield was not exactly a lightweight system component.

The SNAP-10B reactor was designed to address this problem. It used a similar core to the U-10A/2, with the upgraded hydrogen barrier from the -8, but the reflector was tapered to better fit the profile of the shadow shield, and axially sliding control cylinders would be moved in and out to provide control instead of the rotating drums of the -10A variants. A number of minor reactor changes were needed, and some of the reactor physics parameters changed due to this new control system; but, overall, very few modifications were needed.

The first -10B reactor, the -10B Basic (B-10B), was a very simple and direct evolution of the U-10A/2, with nothing but the reflector and control structures changed to the -10B configuration. Other than a slight drop in power density (to 0.30 kWt/lb), the rest of the performance characteristics of the B-10B were identical to the U-10A/2. This design would have been a simple evolution of the -10A/2, with a slimmer profile to help with payload integration challenges.

10B Basic Table
Image DOE

The next iteration of the SNAP-10B, the Advanced -10B (A-10B), had options for significant changes to the reactor core and the fuel elements themselves. One thing to keep in mind about these reactors is that they were being designed above and beyond any specific mission needs and, on top of that, a production schedule hadn’t been laid out for them. This means that many of the design characteristics of these reactors were never “frozen” – the point in the design process when the production team of engineers needs a basic configuration that won’t change in order to proceed with the program – although obviously many minor changes (and possibly some major ones) would continue to be made up until the system was flight qualified.

Up until now, every SNAP-10 design used a 37 fuel element core, with the only difference occurring in the Upgraded -10A/2 and Basic -10B reactors (which changed the hydrogen barrier ceramic enamel inside the fuel element clad). With the A-10B, however, there were three core size options: the original 37-element core, a medium-sized 55-element core, and a large 85-element core. There were other open questions about the final design as well, with two other major core changes under consideration (plus a number of open minor questions). The first option was to add a “getter”: a sheath of strongly hydrogen-absorbing (hydriding) metal around the clad, outside the steel casing but still within the active region of the core. While this isn’t as ideal as containing the hydrogen within the U-ZrH itself, the neutron moderation provided by the hydrogen would be lost at a far lower rate. The second option was to change the core geometry itself as the temperature of the core changed, using devices called “Thermal Coefficient Augmenters” (TCA). Two schemes were suggested: first, a bellows system driven by NaK core temperature (using ruthenium vapor), which would move a portion of the radial reflector to change the core’s reactivity; second, securing grids for the fuel elements which would expand as the NaK increased in temperature and contract as the coolant cooled.

Between the options available, with core size, fuel element design, and variable core and fuel element configuration all up in the air, the Advanced SNAP-10B was a wide range of possible reactors rather than just one. Many characteristics remained identical across the range, including the fissile fuel itself, the overall core size, the maximum outlet temperature, and others. However, the number of fuel elements in the core alone resulted in a wide range of power outputs, and whichever core modification the designers ultimately decided upon (getter vs TCA; I haven’t seen any indication that the two were combined) would change what the capabilities of the reactor core would actually be. Both for simplicity’s sake and due to the very limited documentation available on the SNAP-10B program (other than a general comparison table from the SNAP Systems Capability Study from 1966), we’ll focus on the 85 fuel element versions of the two options: the getter core and the TCA core.

A final note, which isn’t clear from these tables: each of these reactor cores was nominally optimized to a 100 kWt power output; the additional fuel elements reduced the power density required from the core at any given time, maximizing fuel lifetime. Even with the improved hydrogen barriers and the variable core geometry, these systems CAN offer higher power, but it comes at the cost of a shorter (though still minimum one year) reactor life. Because of this, all reported estimates assume a 100 kWt power level unless otherwise stated.

Yt Getter FE
Fuel element with yttrium getter, image DOE

The idea of a hydrogen “getter” was not a new one at the time it was proposed, but it was one that hadn’t been investigated thoroughly at that point (and is a very niche requirement in terrestrial nuclear engineering). The basic concept is to settle for the second-best option when it comes to hydrogen migration: if you can’t keep the hydrogen in the fuel element itself, the next best option is keeping it in the active region of the core (where fission is occurring, and where neutron moderation is the most directly useful for power production). While this isn’t as good as keeping the hydrogen within the fuel element itself, it’s still far better than letting it dissolve into the coolant or, worse, migrate out of the reactor entirely and into space, where it’s completely useless in terms of reactor dynamics. Of course, there’s a trade-off: because of the interplay between the various aspects of reactor physics and design, it wasn’t practical to change the external geometry of the fuel elements themselves, which means that the only way to add a hydrogen “getter” was to displace some of the fissile fuel. There’s definitely an optimization question to be considered: the overall reactivity of the reactor has to be reduced, because the fuel is worth more in reactivity terms than the hydrogen that would otherwise be lost, but containing the hydrogen in the core through end of life makes the system more predictable and reliable. Especially for a statically controlled system like the A-10B, this increase in behavioral predictability can be worth far more than the reactivity the additional fuel would offer. Of the materials tested for the “getter” system, yttrium metal was found to be the most effective at the reactor temperatures and radiation flux that would be present in the A-10B core. However, while improvements had been made in the fuel element design to the point that the “getter” program continued until the cancellation of the SNAP-2/10 core experiments, many uncertainties remained as to whether the concept was worth employing in a flight system.

The second option was to vary the core geometry with temperature, the Thermal Coefficient Augmentation (TCA) variant of the A-10B. This would change the reactivity of the reactor mechanically, but not require active commands from any systems outside the core itself. There were two options investigated: a bellows arrangement, and a design for an expanding grid holding the fuel elements themselves.

A-10B Bellows Diagram
Ruthenium vapor bellows design for TCA, image DOE

The first variant used a bellows to move a portion of the reflector out as the temperature increased. This was done using a ruthenium reservoir within the core itself. As the NaK increased in temperature, the ruthenium would boil, pushing a bellows which would move some of the beryllium shims away from the reactor vessel, reducing the overall worth of the radial reflector. While this sounds simple in theory, gas diffusion from a number of different sources (from fission products migrating through the clad to offgassing of various components) meant that the gas in the bellows would not just be ruthenium vapor. While this could have been accounted for, a lot of study would have needed to have been done with a flight-type system to properly model the behavior.

A-10B Expandable Baseplate

The second option would change the distance between the fuel elements themselves, using a base plate with concentric accordion folds for each ring of fuel elements, called the “convoluted baseplate.” As the NaK heated beyond the optimized design temperature, the base plates would expand radially, separating the fuel elements and reducing the reactivity in the core. This involved a different set of materials tradeoffs, with just getting the device constructed causing major headaches. The design used both 316 stainless steel and Hastelloy C in its construction, and was cold annealed. The alternative, hot annealing, resulted in random cracks; and while explosive forming was explored, it wasn’t practical at the time to map the shockwave propagation through such a complex structure well enough to ensure reliable construction.

A-10B Expandable Baseplate 2

While this concept has given me a lot to think about regarding variable reactor geometry of this sort, there are many problems with the approach (which might have been solved, or might have proven insurmountable). Major lifetime concerns would include ductility and elasticity changes across the wide range of temperatures the baseplate would be exposed to; work hardening of the metal, thermal stresses, and neutron bombardment of the base plates would also be major concerns.

These design options were briefly tested, but most were never fully developed. Because the reactor’s design was never frozen, many engineering challenges remained in every option that had been presented. Also, while I know that a report was written on the SNAP-10B reactor’s design (R. J. Gimera, “SNAP 10B Reactor Conceptual Design,” NAA-SR-10422), I can’t find it… yet. This makes writing about the design difficult, to say the least.

Because of this, and the extreme paucity of documentation on this later design, it’s time to turn to what these innovative designs could have offered when it comes to actual missions.

The Path Not Taken: Missions for SNAP-2, -10A

Every space system has to have a mission, or it will never fly. Both SNAP-2 and SNAP-10 offered a lot for the space program, both for crewed and uncrewed missions; and what they offered only grew with time. However, due to priorities at the time, and the fact that many records from these programs appear to never have been digitized, it’s difficult to point to specific mission proposals for these reactors in a lot of cases, and the missions have to be guessed at from scattered data, status reports, and other piecemeal sources.

SNAP-10 was always a lower-powered system, even with its growth to a kWe-class power supply. Because of this, it was always seen as a power supply for unmanned probes, mostly in low Earth orbit, though it would also have been useful for interplanetary studies, which at this point were just appearing on the horizon as practical. Had the SNAPSHOT system worked as planned, the cesium thruster on board the Agena spacecraft would have been an excellent propulsion source for an interplanetary mission. However, due to the long mission times and the relatively fragile fuel of the original SNAP-10A, such missions would have been unlikely to succeed initially, while the SNAP-10A/2 and SNAP-10B systems, with their higher power outputs and longer lifetimes, would have been ideal for many interplanetary missions.

As we saw in the US-A program, one of the major advantages that a nuclear reactor offers over photovoltaic cells – which were just starting to become a practical technology at the time – is that it presents very little surface area, so the atmospheric drag that all satellites experience from the thin atmosphere in lower orbits is less of a concern. There are many cases where this lower altitude offers clear benefits, most of them to do with image resolution: the lower you are, the clearer your imagery can be with the same sensors. For the Russians, the ability to get better imagery of US Navy movements in all weather conditions was of strategic importance, leading to the US-A program. For the Americans, who had other means of surveillance (and an opponent with a far less capable blue-water navy to track), radar surveillance was not a major focus – although it should be noted that 500 We isn’t going to give you much, if any, resolution, no matter what your altitude.
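The drag argument can be made concrete with the standard drag equation; the numbers below are purely illustrative assumptions on my part, not figures from the SNAP or US-A studies:

```python
def drag_force(rho, v, c_d, area):
    """Standard drag equation: F = 0.5 * rho * v^2 * C_d * A (newtons)."""
    return 0.5 * rho * v * v * c_d * area

# Illustrative (assumed) low-orbit values:
rho = 2e-11   # kg/m^3, rough thermospheric density around ~300 km
v = 7730.0    # m/s, approximate circular orbital velocity at that altitude
c_d = 2.2     # a commonly assumed satellite drag coefficient

compact_reactor_sat = drag_force(rho, v, c_d, area=2.0)   # small frontal area
solar_array_sat = drag_force(rho, v, c_d, area=20.0)      # large panel area

# Same orbit, ten times the area means ten times the drag,
# and therefore much faster orbital decay for the solar-powered satellite.
print(solar_array_sat / compact_reactor_sat)  # 10.0
```

The point is simply that drag scales linearly with frontal area, so a compact reactor-powered satellite can sustain a lower orbit than a large-panel solar satellite of comparable mass.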

SNAP Meteorological Satellite
SNAP-10 powered meteorological satellite, image DOE

One area that SNAP-10A was considered for was for meteorological satellites. With a growing understanding of how weather could be monitored, and what types of data were available through orbital systems, the ability to take and transmit pictures from on-orbit using the first generations of digital cameras (which were just coming into existence, and not nearly good enough to interest the intelligence organizations at the time), along with transmitting the data back to Earth, would have allowed for the best weather tracking capability in the world at the time. By using a low orbit, these satellites would be able to make the most of the primitive equipment available at the time, and possibly (speculation on my part) have been able to gather rudimentary moisture content data as well.

However, while the SNAP-10A was worked on for about a decade, throughout the entire program there was always the question of “what do you do with 500-1000 We?” Sure, it’s not an insignificant amount of power, even then, but… communications and propulsion, the two things most immediately interesting for satellites with reliable power, both have a linear relationship between power level and capability: the more power, the more bandwidth, or delta-vee, you have available. Also, the -10A was only ever rated for one year of operation (although it was always suspected it could be limped along for longer), which precluded many other missions.
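For the propulsion side, that linear relationship falls directly out of the ideal jet-power equation. The efficiency and exhaust velocity below are assumed illustrative values, not specifications of the Agena’s cesium thruster:

```python
def thrust_from_power(power_w, exhaust_velocity, efficiency):
    """Ideal jet power is P_jet = 0.5 * T * v_e, so T = 2 * eta * P / v_e.

    power_w: electrical power fed to the thruster (watts)
    exhaust_velocity: effective exhaust velocity v_e (m/s)
    efficiency: overall power-to-jet efficiency (dimensionless, assumed)
    """
    return 2.0 * efficiency * power_w / exhaust_velocity

# Assumed, illustrative values for an early ion thruster:
v_e = 50_000.0   # m/s exhaust velocity (roughly 5000 s specific impulse)
eta = 0.5        # assumed overall efficiency

t_500w = thrust_from_power(500.0, v_e, eta)
t_1000w = thrust_from_power(1000.0, v_e, eta)

# Thrust scales linearly with power: doubling the power doubles the thrust,
# and over a fixed burn time, doubles the delta-vee delivered to the payload.
print(t_1000w / t_500w)          # 2.0
print(round(t_500w * 1000, 1))   # thrust in millinewtons: 10.0
```

This is why a 500-1000 We supply was hard to justify: at these power levels the available thrust is on the order of millinewtons, useful but far from transformative.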

The later SNAP-10A/2 and -10B designs, with their multi-kilowatt outputs and years-long lifespans, offered far more flexibility, but by this point many in the AEC, the US Air Force, NASA, and elsewhere were no longer very interested in the program, with newer, more capable reactor designs becoming available (we’ll look at some of those in the next post). While the SNAP-10A was the only flight-qualified and flight-tested reactor design (and the errors on the mission were shown to be the fault not of the reactor but of the Agena spacecraft), it was destined to fade into obscurity.

SNAP-10A was always the smallest of the reactors, and also the least powerful. What about the SNAP-2, the 3-6 kWe reactor system?

Initial planning for the SNAP-2 offered many options, with communications satellites being mentioned as an option early on – especially if the reactor lifetime could be extended. While not designed specifically for electric propulsion, it could have utilized that capability either on orbit around the Earth or for interplanetary missions. Other options were also proposed, but one was seized on early: a space station.

S2 Cylinder Station
Cylindrical space station, image DOE

At the time, most space station designs were nuclear powered, and there were many different configurations. Two were the most common: first, the simple cylinder, launched as a single piece (although multiple-module designs keeping the basic cylinder shape were also proposed), which would finally be realized with the Skylab mission; second, a torus-shaped station, proposed almost half a century earlier by Tsiolkovsky and popularized at the time by Wernher von Braun. SNAP-2 was adapted to both of these station types. Sadly, while I can find one paper on the use of the SNAP-2 on a station, it focuses exclusively on the reactor system and doesn’t use a particular space station design, instead laying out the basic constraints on the use of the reactor with each type of station, especially the shielding requirements for each station’s geometry. It was also noted that the reactors could be clustered, providing up to 11 kWe for a space station without significant change to the radiation shield geometry. We’ll look at radiation shielding in a couple of posts, and examine the particulars of these designs there.

s2 Toroidal Station
Hexagonal/Toroid space station. Note the wide radiation shield. Image DOE

Since space stations were something NASA didn’t have the budget for at the time, most designs remained vaguely defined, without much funding or impetus within either NASA or the US Air Force (although SNAP-2 would definitely have been an option for the USAF’s Manned Orbiting Laboratory program). By the time NASA was seriously looking at space stations as a major funding focus, the SNAP-8-derived Advanced Zirconium Hydride reactor, and later the SNAP-50 (which we’ll look at in the next post), offered more capability than the SNAP-2. Once again, the lack of a mission spelled the doom of the SNAP-2 reactor.

Hg Rankine Cutaway Drawing
Power conversion system, SNAP-2

The SNAP-2 reactor met its piecemeal fate even earlier than the SNAP-10A, but oddly enough both the reactor and the power conversion system lasted just as long as the SNAP-10A did. The reactor core for the SNAP-2 became the SNAP-10A/2 core, and the CRU power conversion system continued under development until after the reactor cores had been canceled. However, mention of the SNAP-2 as a system disappears in the literature around 1966, while the -2/10A core and CRU power conversion system continued until the late 1960s and late 1970s, respectively.

The Legacy of The Early SNAP Reactors

The SNAP program was canceled in 1971 (with one ongoing exception), after flying a single reactor which was operational for 43 days, and conducting over five years of fission powered testing on the ground. The death of the program was slow and drawn out, with the US Air Force canceling the program requirement for the SNAP-10A in 1963 (before the SNAPSHOT mission even launched), the SNAP-2 reactor development being canceled in 1967, all SNAP reactors (including the SNAP-8, which we’ll look at next week) being canceled by 1974, and the CRU power conversion system being continued until 1979 as a separate internal, NASA-supported but not fully funded, project by Rockwell International.

The promise of SNAP was not enough to save the program from the massive cuts to space programs, both for NASA and the US Air Force, that fell even as humanity stepped onto the Moon for the first time. This is an all-too-common fate, both in advanced nuclear reactor engineering and design as well as aerospace engineering. As one of the engineers who worked on the Molten Salt Reactor Experiment noted in a recent documentary on that technology, “everything I ever worked on got canceled.”

However, this does not mean that the SNAP-2/10A programs were useless, or that nothing except a permanently shut down reactor in orbit was achieved. In fact, the SNAP program left a lasting mark on astronuclear engineering, one that is still felt today. The design of the SNAP-2/10A core, and the challenges faced with both this core and the SNAP-8 core, informed hydride fuel element development, including the thermal limits of this fuel form, hydrogen migration mitigation strategies, and materials and modeling for multiple burnable poison options across many different fuel types. The silicon-germanium thermoelectric system became a common choice for high-temperature thermoelectric power conversion, used both in flight power systems and in thermal testing equipment. Many other materials and systems used in this reactor continued to be developed through other programs.

Possibly the most broad and enduring legacy of this program is in the realm of launch safety, flight safety, and operational paradigms for crewed astronuclear power systems. The foundation of the launch and operational safety guidelines that are used today, for both fission power systems and radioisotope thermoelectric generators, were laid out, refined, or strongly informed by the SNAPSHOT and Space Reactor Safety program – a subject for a future web page, or possibly a blog post. From the ground handling of a nuclear reactor being integrated to a spacecraft, to launch safety and abort behavior, to characterizing nuclear reactor behavior if it falls into the ocean, to operating crewed space stations with on-board nuclear power plants, the SNAP-2/10A program literally wrote the book on how to operate a nuclear power supply for a spacecraft.

While the reactors themselves never flew again, nor did their direct descendants in design, the SNAP reactors formed the foundation for astronuclear engineering of fission power plants for decades. When we start to launch nuclear power systems in the future, these studies, and the carefully studied lessons of the program, will continue to offer lessons for future mission planners.

More Coming Soon!

The SNAP program extended well beyond the SNAP-2/10A program. The SNAP-8 reactor, started in 1959, was the first astronuclear design specifically developed for a nuclear electric propulsion spacecraft. It evolved into several different reactors, notably the Advanced ZrH reactor, which remained the preferred power option for NASA’s nascent modular space station through the mid-to-late 1970s, due to its ability to be effectively shielded from all angles. Its eventual replacement, the SNAP-50 reactor, offered megawatts of power using technology from the Aircraft Nuclear Propulsion program. Many other designs were proposed in this time period, including the SP-100 reactor, the ancestor of Kilopower (the SABRE heat pipe cooled reactor concept), as well as the first American in-core thermionic power system, advances in fuel element designs, and many other innovations.

Originally, these concepts were included in this blog post, but this post quickly expanded to the point that there simply wasn’t room for them. While some of the upcoming post has already been written, and a lot of the research has been done, this next post is going to be a long one as well. Because of this, I don’t know exactly when the post will end up being completed.

After we look at the reactor programs from the 1950s to the late 1980s, we’ll look at NASA and Rosatom’s collaboration on the TOPAZ-II reactor program, and the more recent history of astronuclear designs, from SDI through the Fission Surface Power program. We’ll finish up the series by looking at the most recent power systems from around the world, from JIMO to Kilopower to the new Russian on-orbit nuclear electric tug.

After this, we’ll look at shielding for astronuclear power plants, and possibly ground handling, launch safety, and launch abort considerations, then move on to power conversion systems, which will be a long series of posts due to the sheer number of options available.

These next posts are more research-intensive than usual, even for this blog, so while I’ll be hard at work on the next posts, it may be a bit more time than usual before these posts come out.

References

SNAP

SNAP Reactor Overview, Voss 1984 http://www.dtic.mil/dtic/tr/fulltext/u2/a146831.pdf

SNAP-2

Preliminary Results of the SNAP-2 Experimental Reactor, Hulin et al 1961 https://www.osti.gov/servlets/purl/4048774

Application of the SNAP 2 to Manned Orbiting Stations, Rosenberg et al 1962 https://www.osti.gov/servlets/purl/4706177

The ORNL-SNAP Shielding Program, Mynatt et al 1971 https://www.osti.gov/servlets/purl/4045094

SNAP-10/10A

SNAP-10A Nuclear Analysis, Dayes et al 1965 https://www.osti.gov/servlets/purl/4471077

SNAP 10 FS-3 Reactor Performance, Hawley et al 1966 https://www.osti.gov/servlets/purl/7315563

SNAPSHOT and the Flight Safety Program

SNAP-10A SNAPSHOT Program Development, Atomics International 1962 https://www.osti.gov/servlets/purl/4194781

Reliability Improvement Program Planning Report for the SNAP-10A Reactor, Coombs et al 1961 https://www.osti.gov/servlets/purl/966760

Aerospace Safety Reentry Analytical and Experimental Program SNAP 2 and 10A Interim Report, Elliot 1963 https://www.osti.gov/servlets/purl/4657830

SNAPSHOT orbit, Heavens Above https://www.heavens-above.com/orbit.aspx?satid=1314

SNAP Improvement Program

Static Control of SNAP Reactors, Birney et al 1966 https://digital.library.unt.edu/ark:/67531/metadc1029222/m2/1/high_res_d/4468078.pdf

SNAP Systems Capabilities Vol 2, Study Introduction, Reactors, Shielding, Atomics International 1965 https://www.osti.gov/servlets/purl/4480419

Progress Report, SNAP Reactor Improvement Program, April-June 1965 https://www.osti.gov/servlets/purl/4467051

Progress Report for SNAP General Supporting Technology May-July 1964 https://www.osti.gov/servlets/purl/4480424/

Categories
Electric propulsion Electrostatic Propulsion Non-nuclear Testing Nuclear Electric Propulsion Test Stands Uncategorized

Electric Propulsion Part 2: Electrostatic Propulsion

Hello, and welcome back to Beyond NERVA! Today, we finish our look at electric propulsion systems by looking at electrostatic propulsion. This is easily the most common form of in-space electric propulsion system, and as we saw in our History of Electric Propulsion post, it’s also the first that was developed.

I apologize about how long it’s taken to get this blog post published. As I’ve mentioned before, electric propulsion is one of my weak subjects, so I’ve been very careful to try to ensure that the information that I’m giving is correct. Another complication came from the fact that I had no idea how complex and varied each type of drive system is. I have glossed over many details in this blog post on many of the systems, but I’ve also included an extensive list of documentation on all of the drive systems I discuss in the post at the end, so if you’re curious about the details of these systems, please check out the published papers on them!

Electrostatic Drives

By far the most common type of electric propulsion today, and the type most likely to be called an “ion thruster,” is electrostatic propulsion. The electrostatic effect was one of the first electrical effects ever formally described, and the first ever observed (lightning is an electrostatic phenomenon, after all). Electrostatics as a general field of study refers to the study of electric charges at rest (hence, “electro-static”). The electrostatic effect is the tendency of objects with opposite charges (one positive, one negative) to attract each other, and of objects with the same charge to repel each other. This occurs when electrons are stripped from or added to one material. Some of the earliest scientific experiments involving this effect used bars of amber and wool – the amber would become negatively charged, and the wool positively charged, due to the interactions of the very fine hairs of the wool and the crystalline and elemental composition of the amber (for the nitpicky types, this is known as the triboelectric effect, but it is still a manifestation of the electrostatic effect). Other experimenters during the 18th and 19th centuries used cat fur instead of wool, a much more mentally amusing way to build an electrostatic charge. However, we aren’t going to be discussing using a rotating wheel of cats to produce an electric thruster (although, if someone feels like animating said concept, I’d love to see it).

There are a number of designs that use electrostatic effects to produce thrust. Some are very similar to some of the concepts that we discussed in the last post, like the RF ionized thruster (a major area of focus in Japan), the Electron Cyclotron Resonance thrusters (which use the same mechanisms as VASIMR’s acceleration mechanism), and the largely-abandoned Cesium Contact thruster (which has a fair amount of similarities with a pulsed plasma or arcjet thruster). Others, such as the Field Emission Electrostatic Thruster (FEEP) and Ionic Liquid Ion Source thruster (also sometimes called an electrospray) thruster, have far fewer similarities. None of these, though, are nearly as common as the electron bombardment noble gas thruster types: the gridded ion (either electron bombardment, cyclotron resonance, or RF ionization) thruster and the Hall effect thruster (which also has two types: the thruster with anode layer and stationary plasma thruster). The gridded ion thruster, commonly just called an ion thruster, is the propulsion system of choice for interplanetary missions, because it has the highest specific impulse of any currently available propulsion system. Hall effect thrusters have lower specific impulse, but higher thrust, making them a popular choice for reaction control systems on commercial and military satellites.

Most electrostatic drives use an ionization chamber or zone to strip electrons from an easily-ionized material. These now-positively-charged ions are then accelerated toward a negatively charged structure (or, in some cases, by an electromagnetic field) and spat out the back of the thruster. Because of the low density of these ion streams, and the lack of an expanding gas, no physical nozzle is used: the characteristic bell-shaped de Laval nozzle of chemical or thermal engines is absolutely useless in this case. However, there are many ways that the propellant can be ionized, and many ways that the resulting ions can be accelerated, leading to a huge variety of design options within the area of electrostatic propulsion.

Goddard drive drawing
Drawing for first patented electrostatic thruster, Goddard 1917

The first design for a practical electric propulsion system, patented by Robert Goddard in 1917, was an electrostatic device, and most designs since, both in the US and the USSR, have used this concept. In the earliest days of electric propulsion design, each country went a different way in the development of this drive concept: the US focused on the technically simpler, but materially more problematic, gridded ion thruster, while the Soviet Union worked to develop the more technically promising, but more difficult to engineer, Hall thruster. Variations of each have been produced over the years, and additional options have been explored as well. These systems have traveled to almost every body in the Solar System, including Pluto and many of the asteroids in the Main Belt, and provide a lot of the station-keeping thrust for satellites in orbit around Earth. Let’s go ahead and look at what the different types are, what their advantages and disadvantages are, and how they’ve been used in the past.

Gridded Ion Drives

nstar
NSTAR Gridded Ion Thruster, image courtesy NASA

This is the best-known of the electric propulsion thrusters of any type, and is often shortened to “ion drive.” Here, the thruster has four main parts: the propellant supply, an ionization chamber, an array of conductive grids, and a neutralizing beam emitter. The propellant can be anything that is easily ionized; cesium and mercury were the first options, but these have largely been replaced by xenon and argon.

Ion drive schematic, NASA
Gridded Ion Drive Schematic, image courtesy NASA

The type of ionization chamber varies widely, and is the main difference between the different types of ion drive. The first designs used gaseous agitation to cause electrons to be stripped, but many higher-powered systems use particle (mostly electron) beams, radio frequency or microwave agitation, or cyclotron resonance to strip the electrons off the atoms. The efficiency of the ionization chamber, and its capacity, define how much propellant mass flow is possible, which is one of the main limiting factors for the overall thrust possible for the thruster.

3 grid ion schematic
Schematic of 3 Grid Ion engine, image courtesy ESA

After being ionized, the gas and plasma are then separated, using a negatively charged grid to extract the positively charged ions and leave the neutral gas in the ionization chamber to be ionized. In most modern designs, this is also the beginning of the acceleration process. Often, two or three grids are used, and the term “ion optics” is often used instead of “grids.” This is because these structures not only extract and accelerate the plasma, they also shape the beam as well. The charge on, and geometry of, these grids define the exhaust velocity of the ions, and the desired specific impulse of the thruster is largely determined by the charge applied to these screens. Many US designs use a more highly charged inner screen to ensure better separation of the ions, and the charge potential difference between this grid and the second accelerates the ions. Because of this, the first grid is often called the extractor, and the second is called the accelerator grid. The charge potential possible on each grid is another major limiter of the possible power level – and therefore the maximum exhaust velocity – of these thrusters.
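Since the ions are accelerated electrostatically, the ideal exhaust velocity follows directly from the net accelerating voltage and the ion’s charge-to-mass ratio. Here’s a minimal sketch of that relation; the 1.1 kV xenon case is an illustrative example, not any specific thruster’s operating point:

```python
import math

# Physical constants
E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053906660e-27     # atomic mass unit, kg

def ion_exhaust_velocity(net_voltage_v: float, ion_mass_amu: float) -> float:
    """Ideal exhaust velocity of a singly-charged ion falling through
    a net accelerating potential: v = sqrt(2*q*V/m)."""
    return math.sqrt(2.0 * E_CHARGE * net_voltage_v / (ion_mass_amu * AMU))

# Xenon (131.29 amu) through a 1.1 kV net grid potential -- illustrative only.
v = ion_exhaust_velocity(1100.0, 131.29)
print(f"exhaust velocity: {v/1000:.1f} km/s")   # ~40 km/s
print(f"equivalent Isp:   {v/9.80665:.0f} s")   # ~4100 s
```

Doubling the grid voltage only raises exhaust velocity by a factor of sqrt(2), which is why very high specific impulse demands very high grid potentials.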

Wear Pattern on grids CSSA
Idealized wear pattern of a grid. Image Sangregorio et al CSAA 2018

These screens are also one of the main limiting factors for the thruster’s lifetime, since the ions will impact the grid to a certain degree as they flow past (although the difference in charge potential between the plasma in the ionization chamber at the apertures and the structure of the grid tends to minimize this). With many of the early gridded ion thrusters that used highly reactive materials, chemical interactions in the grids could change the conductivity of these surfaces, cause more rapid erosion, and produce other problems; the transition to noble gas propellants has made this less of an issue. Finally, the geometry of the grids has a huge impact on the direction and velocity of the ions themselves, so there’s a wide variety of options available through the manipulation of this portion of the thruster as well.

At the end of the drive cycle, after the ions are leaving the thruster, a spray of electrons is added to the propellant stream, to prevent the spacecraft from becoming negatively charged over time, and thereby attracting some of the propellant back toward the spacecraft due to the same electrostatic effect that was used to accelerate them in the first place. Problems with incomplete ion stream neutralization were common in early electrostatic thrusters; and with the cesium and mercury propellants used in these thrusters, chemical contamination of the spacecraft became an issue for some missions. Incomplete neutralization is something that is still a concern for some thruster designs, although experiments in the 1970s showed that a spacecraft can ground itself without the ion stream if the differential charge becomes too great. In three grid systems (or four, more on that concept later), the final grid takes the place of this electron beam, and ensures better neutralization of the plasma beam, as well as greater possible exhaust velocity.
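To get a feel for why beam neutralization matters so much, consider a crude back-of-the-envelope model: treat the spacecraft as an isolated conducting sphere and let a small unneutralized current charge it up. Every number here is an illustrative assumption, not data from any flight program:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Model the spacecraft very crudely as an isolated conducting sphere;
# its self-capacitance is C = 4*pi*eps0*r.
radius_m = 1.0
capacitance = 4.0 * math.pi * EPS0 * radius_m   # ~1.1e-10 F

# Suppose just 1 mA of the ion beam current goes unneutralized.
net_current_a = 1.0e-3
charging_rate = net_current_a / capacitance      # dV/dt = I/C

print(f"capacitance:   {capacitance:.2e} F")
print(f"charging rate: {charging_rate:.2e} V/s")  # millions of volts per second
```

Even a milliamp of net current charges this toy spacecraft at millions of volts per second, which is why the exhaust would rapidly be pulled back toward the vehicle without an electron source to neutralize the beam.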

Gridded ion thrusters offer very attractive specific impulse, in the range of 1500-4000 seconds, with exhaust velocities of roughly 15-40 km/s for typical designs. The other side of the coin is their low thrust, generally from 20-100 milliNewtons (a low thrust-to-power ratio even for electric propulsion, although their specific impulse is higher than average), which is a mission planning constraint, but isn’t a major show-stopper for many applications. An advanced concept from the Australian National University and the European Space Agency, the Dual Stage 4 Grid (DS4G) thruster, achieved far higher exhaust velocities by using a staged gridded ion thruster – up to 210 km/s.
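The specific impulse and exhaust velocity figures quoted above are tied together by the simple relation Isp = ve/g0, which makes them easy to sanity-check:

```python
G0 = 9.80665  # standard gravity, m/s^2

def isp_from_velocity(ve_m_s: float) -> float:
    """Specific impulse (s) from exhaust velocity (m/s)."""
    return ve_m_s / G0

def velocity_from_isp(isp_s: float) -> float:
    """Exhaust velocity (m/s) from specific impulse (s)."""
    return isp_s * G0

# Top of the "typical" gridded-ion Isp range:
print(f"{velocity_from_isp(4000)/1000:.1f} km/s")   # ~39 km/s

# The DS4G's ~210 km/s exhaust velocity implies:
print(f"{isp_from_velocity(210_000):.0f} s")        # ~21,000 s Isp
```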

Past and Current Gridded Ion Thrusters

sert1
SERT 1 Gridded Ion Thruster, image courtesy NASA

These drive systems have been used on a number of different missions over the years, starting with the SERT missions mentioned in the history of electric propulsion post, and continuing on an experimental basis through the Deep Space 1 technology demonstration mission – the first spacecraft to use ion propulsion as its main form of propulsion. That same thruster design, the NSTAR, is still in use today on the Dawn mission, studying the minor planet Ceres. Hughes Aircraft also developed a number of thrusters for station-keeping on their geosynchronous satellite bus (the XIPS thruster).

Hayabusa
Hayabusa probe, image courtesy JAXA

JAXA used this type of drive system for their Hayabusa mission to the asteroid belt, but this thruster used microwaves to ionize the propellant. This thruster operated successfully throughout the mission’s life, and propelled the first spacecraft to return a sample from an asteroid back to Earth.

ESA has used different variations of this thruster on multiple satellites as well, all of which have been radio frequency ionization types. The ArianeSpace RIT-10 has been used on multiple missions, and the QinetiQ T5 thruster was used successfully on the GOCE mission mapping the Earth’s gravitational field.

NASA certainly hasn’t given up on further developing this technology. The NEXT thruster produces three times the thrust of the NSTAR thruster, although it operates on similar principles. The testing regime for this thruster has been completed, demonstrating a specific impulse of 4150 s and 236 mN of thrust over a testing life of over 48,000 hours, and it is currently awaiting a mission to fly on. It has also been a testbed for new designs and materials for many of the drive system components, including a new hollow cathode made of LaB6 (lanthanum hexaboride) and several new screen materials.
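A quick back-of-the-envelope calculation shows what a test campaign like NEXT’s implies. Assuming (simplistically) that the thruster ran at full thrust for the entire test – the real campaign was throttled, so this overstates things – the total impulse and propellant throughput work out as:

```python
G0 = 9.80665  # standard gravity, m/s^2

# Figures quoted above for NEXT: 236 mN, 4150 s Isp, >48,000 hours of testing.
thrust_n = 0.236
isp_s = 4150.0
hours = 48_000.0

total_impulse = thrust_n * hours * 3600.0        # N*s
propellant_kg = total_impulse / (isp_s * G0)     # from F = mdot * ve

print(f"total impulse: {total_impulse:.2e} N*s")  # ~4.1e7 N*s
print(f"propellant:    {propellant_kg:.0f} kg")   # on the order of a tonne of xenon
```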

HiPEP: NASA’s Nuclear Ion Propulsion System

HiPEP Prefire, Foster 2004
HiPEP Being Readied for Test, image courtesy NASA

Another NASA project in gridded ion propulsion, although one that has since been canceled, is far more germane to nuclear electric propulsion: the High Power Electric Propulsion drive (HiPEP) for the Jupiter Icy Moons Orbiter mission. JIMO was an NEP-propelled mission to Jupiter, canceled in 2005, meant to study Europa, Ganymede, and Callisto (this mission will get an in-depth look later in this blog series on NEP). HiPEP used two types of ionization chamber. The first was Electron Cyclotron Resonance (ECR) ionization: the magnetic containment of the ionization chamber moves the small number of free electrons present in any gas in circles, and microwaves tuned to resonate with these gyrating electrons allow them to ionize the xenon gas more efficiently. The second was direct-current ionization, using a hollow cathode to strip off electrons, which carries additional problems with cathode failure and so is the less-preferred option. Cathode failure of this sort is another major failure point for ion drives, so being able to eliminate it is a significant advantage; however, the microwave system consumes more power, so in less energy-intensive applications it’s often not used.

HiPEP Schematic Foster 2004
HiPEP Schematic with Neutralizer, Foster et al 2004

One very unusual thing about this system is its shape: rather than the typical circular discharge chamber and grids, this system uses a rectangular configuration. The designers note that not only does this make the system more compact when stacking multiple units together (reducing the structural, propellant, and electrical feed system mass requirements for the full system), it also means that the current density across the grids can be lower for the same electrostatic potential, reducing current erosion in the grids. This means that the grid can support a 100 kg/kW throughput margin for both of the isp configurations that were studied (6000 and 8000 s isp). The longest distance between two supported sections of grid can be reduced as well, preventing issues like thermal deformation, launch vibration damage, and electrostatic attraction between the grids and either the propellant or the back of the ionization chamber itself. One final benefit is that the rectangular geometry makes the system more scalable from a structural engineering standpoint.

As the power of the thruster increases, so do the beam neutralization requirements. In this case, up to 9 Amperes of continuous beam current must be neutralized, which is very high compared to most systems. This means that the neutralizing beam has to be both powerful and reliable. While the HiPEP team discussed using a common neutralization system for tightly packed thrusters, the baseline design is a fairly typical hollow cathode, similar to what was used on the NSTAR thruster, but with a rectangular cross section rather than a circular one to accommodate the different thruster geometry. Other concepts, like using microwave beam neutralization, were also discussed; however, due to the success and long life of this type of system on NSTAR, the designers felt that a hollow cathode would be the most reliable way to deal with the high throughput requirements of this system.
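The beam current itself fixes the propellant mass flow, since each singly-charged ion carries one elementary charge. A short sketch, assuming the quoted ~9 A beam is entirely singly-charged xenon (a simplification – real beams contain some doubly-charged ions):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053906660e-27     # atomic mass unit, kg

def beam_mass_flow(current_a: float, ion_mass_amu: float) -> float:
    """Propellant mass flow carried by a beam of singly-charged ions:
    mdot = I * m_ion / q."""
    return current_a * ion_mass_amu * AMU / E_CHARGE

# ~9 A of singly-charged xenon (131.29 amu):
mdot = beam_mass_flow(9.0, 131.29)
print(f"beam mass flow: {mdot * 1e6:.1f} mg/s")   # ~12 mg/s
```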

HiPEP DC 34 kW
HiPEP operating at 34 kW, Foster et al 2004

HiPEP consistently met its program guidelines, for both engine thrust efficiency and erosion studies. Testing of the microwave ionization system was conducted at both 2.45 and 5.85 GHz, and was successfully concluded. The 2.45 GHz test, at 16 kW of power, achieved a specific impulse of 4500-5500 seconds, clearing the way for the higher-powered microwave emitter to be used. The 5.85 GHz ionization chamber was tested at multiple power levels, from 9.7 to 39.3 kW, and achieved a maximum specific impulse of 9620 s, with thrust climbing to nearly 800 mN during this test.

Sadly, with the cancellation of JIMO (a program we will continue to come back to frequently as we continue looking at NEP), the need for a high-powered gridded ion thruster (and the means of powering it) went away. Much like NERVA, and almost every nuclear spacecraft ever designed, the cancellation of the mission it was meant to fly on sounded the death knell for the drive system. However, HiPEP remains on the books as an attractive, powerful gridded ion drive for when an NEP spacecraft becomes a reality.

DS4G: Fusion Research-Inspired, High-isp Drives to Travel to the Edge of the Solar System

DS4G Photo
DS4G Thruster, all images Bramanti et al 2006

The Dual Stage 4 Grid (DS4G) ion drive is perhaps the most efficient electric drive system ever proposed, offering specific impulse well over 10,000 seconds. While there are some drive systems that offer higher isp, they’re either rare concepts (like the fission fragment rocket, a concept that we’ll cover in a future post), or have difficulties in the development process (such as Orion derivatives, which run afoul of nuclear weapons test bans and treaty limitations concerning the use of nuclear explosives in space).

 

DS4G Diagram
Cutaway DS4G Diagram, with ionization chamber at the top

So how does this design work? Traditional ion drives either use two grids (like the HiPEP drive), combining the extraction and acceleration stages in those grids and then using a hollow cathode or electron emitter to neutralize the beam, or use three grids, where the third grid takes the place of the hollow cathode. In either case, these are very closely spaced grids, which has its advantages, but also a couple of disadvantages: combining the extraction and acceleration systems forces a compromise between extraction efficiency and acceleration capability, and the close spacing limits the acceleration possible for the propellant. The DS4G, as the name implies, does things slightly differently: there are two pairs of grids, each grid close to its partner but the pairs further apart from each other, allowing for a greater acceleration length – and therefore higher exhaust velocity – while the distance between the extraction grid and the final acceleration grid allows each to be better optimized for its individual purpose. As an added benefit, the plasma beam of the propellant is better collimated than that of a traditional ion drive, which means that the drive is able to be more efficient with the mass of the propellant, increasing the specific impulse even further.

DS4G Diagram of Principle
DS4G Concept diagram (above) as compared to a 3-grid ion thruster (bottom)


This design didn’t come out of nowhere, though. In fact, most tokamak-type fusion reactors use a device very similar to an ion drive to accelerate beams of hydrogen to high velocities, but because the beam has to pass through the intense magnetic fields surrounding the reactor, the atoms can’t remain ionized. This means that a very effective neutralizer needs to be stuck on the back of what’s effectively an ion drive… and these designs all use four grids, rather than three. Dr. David Fearn knew of these devices and decided to try to adapt the concept to space propulsion, with the help of ESA, leading to a 2005 test-bed prototype in collaboration with the Australian National University. An RF ionization system was designed for the plasma production unit, and a 35 kV electrical system was designed for the thruster prototype’s ion optics. This was not optimized for in-space use; rather, it was used as a low-cost test-bed for optics geometry testing and general troubleshooting of the concept. Another benefit of this design is a higher-than-usual thrust density of 0.86 mN/cm^2, which was seen in the second phase of testing.

Two rounds of highly successful testing were done at ESA’s CORONA test chamber in 2005 and 2006, the results of which can be seen in the tables above. The first test series used a single aperture design, which while highly inefficient was good enough to demonstrate the concept; this was later upgraded to a 37 aperture design. The final test results in 2006 showed impressive specific impulse (14000-14500 s), thrust (2.7 mN), electrical, mass, and total efficiency (0.66, 0.96, and 0.63, respectively). The team is confident that total efficiencies of about 70% are possible with this design, once optimization is complete.
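The quoted DS4G figures can be cross-checked with the standard jet-power relation P_jet = F·ve/2. The midpoint of the quoted Isp range and the quoted total efficiency are the assumptions here; the implied input power is a derived estimate, not a number from the test reports:

```python
G0 = 9.80665  # standard gravity, m/s^2

# DS4G figures quoted above: ~2.7 mN thrust, 14,000-14,500 s Isp
# (midpoint assumed), total efficiency ~0.63.
thrust_n = 2.7e-3
isp_s = 14_250.0
total_eff = 0.63

ve = isp_s * G0                        # exhaust velocity, m/s
jet_power = 0.5 * thrust_n * ve        # kinetic power in the beam
input_power = jet_power / total_eff    # implied electrical input

print(f"exhaust velocity: {ve/1000:.0f} km/s")   # ~140 km/s
print(f"jet power:        {jet_power:.0f} W")    # ~190 W
print(f"implied input:    {input_power:.0f} W")  # ~300 W
```

The tiny thrust relative to the beam power is the flip side of the extreme specific impulse: at fixed power, thrust falls as exhaust velocity rises.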

DS4G ESTEC Test Results
DS4G Round 2 Testing

There remain significant engineering challenges, but nothing that’s incredibly different from any other high powered ion drive. Indeed, many of the complications concerning ion optics, and electrostatic field impingement in the plasma chamber, are largely eliminated by the 4-grid design. Unfortunately, there are no missions that currently have funding that require this type of thruster, so it remains on the books as “viable, but in need of some final development for application” when there’s a high-powered mission to the outer solar system.

Cesium Contact Thrusters: Liquid Metal Fueled Gridded Ion Drives

As we saw in our history of electric propulsion blog post, many of the first gridded ion engines were fueled with cesium (Cs). These systems worked well, and the advantages of having an easily storable, easily ionized, non-volatile propellant (in vapor terms, at least) were significant. However, cesium is also a reactive metal, and is toxic to boot, so by the end of the 1970s development on this type of thruster was stopped. As an additional problem, due to the inefficient and incomplete beam neutralization with the cathodes available at the time, contamination of the spacecraft by the Cs ions (as well as loss of thrust) were a significant challenge for the thrusters of the time.

Perhaps the most useful part of this type of thruster to consider is the propellant feed system, since it can be applied to many different low-melting-point metals. The propellant itself was stored as a liquid in a porous metal sponge made out of nickel, which was attached to two tungsten resistive heaters. By adjusting the size of the pores of the sponge (called Feltmetal in the documentation), the flow rate of the Cs is easily, reliably, and simply controlled. Wicks of graded-pore metal sponges were used to draw the Cs to a vaporizer, made of porous tungsten and heated with two resistive heaters. This then fed to the contact ionizer, and once ionized the propellant was accelerated using two screens.

As we’ll see in the propellant section, after looking at Hall Effect thruster, Cs (as well as other metals, such as barium) could have a role to play in the future of electric propulsion, and looking at the solutions of the past can help develop ideas in the future.

Hall Effect Thrusters

When the US was beginning to investigate the gridded ion drive, the Soviet Union was investigating the Hall Effect thruster (HET). This is a very similar concept to the ion drive in many ways, in that it uses the electrostatic effect to accelerate propellant, but the mechanism is very different. Rather than using a system of electrically charged grids to produce the electrostatic potential needed to accelerate the ionized propellant, in a HET the plasma itself sustains the accelerating field through the Hall effect, discovered in the 1870s by Edwin Hall. In these thrusters, the backplate functions as both a gas injector and an anode. A radial magnetic field, produced by a set of radial solenoids and a central solenoid, traps the electrons that have been stripped off the propellant as it becomes ionized (mostly through electron bombardment), and these trapped electrons drift in a circle, forming an azimuthal Hall current through the plasma. The potential between the anode and this trapped electron cloud provides the electrostatic field that accelerates the ions out of the thruster to produce thrust. After the ions are ejected, a hollow cathode very similar to the one used in the ion drives we’ve been looking at neutralizes the plasma beam, for the same reasons this is done on an ion drive system (this cathode is also the source of approximately 10% of the propellant mass flow). Cathodes are commonly mounted external to the thruster on a small arm; however, some designs – especially modern NASA designs – use a central cathode instead.

The list of propellants used tends to be similar to that of other ion drives: xenon, krypton, argon, iodine, bismuth, magnesium, and zinc have all been used (along with some others, such as NaK), but Xe and Kr are the most popular propellants. While this system has a lower average specific impulse (1500-3000 s isp) than the gridded ion drives, it has a higher thrust-to-power ratio (a typical drive system used today uses 1.35 kW of power to generate 83 mN of thrust), meaning that it’s very good for either orbital inclination maintenance or reaction control systems on commercial satellites.
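Those “typical” figures can be sanity-checked against the efficiency relation η = F·ve/(2P). The ~1600 s Isp used below is a representative SPT-class value I’ve assumed for illustration, not a number from the text:

```python
G0 = 9.80665  # standard gravity, m/s^2

# The "typical" Hall thruster quoted above: 1.35 kW in, 83 mN out.
# Assumed Isp of ~1600 s (representative SPT-class value).
power_w = 1350.0
thrust_n = 0.083
isp_s = 1600.0

ve = isp_s * G0                                  # exhaust velocity, m/s
efficiency = thrust_n * ve / (2.0 * power_w)     # eta = F*ve / (2*P)
print(f"total efficiency: {efficiency:.2f}")     # roughly 0.5
```

The result lands right in the ballpark of commercially flown Hall thrusters, which is a good sign the quoted thrust and power figures are mutually consistent.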

There are a number of types of Hall effect thruster, with the most common being the Thruster with Anode Layer (TAL), the Stationary Plasma Thruster (SPT), and the cylindrical Hall thruster (CHT). The cylindrical thruster is optimized for low power applications, such as for cubesats, and I haven’t seen a high power design, so we aren’t going to really go into those. There are two obvious differences between these designs:

  1. What the walls of the acceleration chamber are made out of: the TAL uses metallic walls, while the SPT uses an insulator (usually boron nitride), which has the effect of the TAL having higher electron velocities in the plasma than the SPT.
  2. The length of the acceleration zone, and its impact on ionization behavior: the TAL has a far shorter acceleration zone than the SPT (sort of, see Choueiri’s analytical comparison of the two systems for dimensional vs non-dimensional characteristics http://alfven.princeton.edu/publications/choueiri-jpc-2001-3504). Since the walls of the acceleration zone are a major lifetime limiter for any Hall effect thruster, there’s an engineering trade-off available here for the designer (or customer) of an HET to consider.

There’s a fourth type of thruster as well, the External Discharge Plasma Thruster, which doesn’t have a physically constrained acceleration zone, that we’ll also look at; however, as far as I’ve been able to find there are very few designs, most of them operating at low voltage, so they, too, aren’t as attractive for nuclear electric propulsion.

Commercially available HETs generally have a total efficiency in the range of 50-60%, however all thrusters that I’ve seen increase in efficiency as the power increases up to the design power limits, so higher-powered systems, such as ones that would be used on a nuclear electric spacecraft, would likely have higher efficiencies. Some designs, such as the dual stage TAL thruster that we’ll look at, approach 80% efficiency or better.

SPT Hall Effect Thrusters

SPT Hall cutaway rough
SPT type thruster

Stationary Plasma Thrusters use an insulating material for the propellant channel immediately downstream of the anode. This means that the electrostatic potential in the drive can be further separated than in other thruster designs, leading to greater separation of ionized vs. non-ionized propellant, and therefore potentially more complete ionization – and more thrust efficiency. While they have been proposed since the beginning of research into Hall effect thrusters in the Soviet Union, the lack of an effective and cost-efficient insulator able to survive long enough to allow a useful thruster lifetime was a major limitation in early designs, leading to an early focus on the TAL.

SPT Liu et al
SPT Diagram, Liu et al 2010

The SPT has the greatest depth between the gas diffuser (or propellant injector) and the nozzle of the thruster. This is nice, because it gives volume and distance to work with in terms of propellant ionization. The ionized propellant is accelerated toward the nozzle, and the not-yet-ionized portion can still be ionized, even as the plasma component scoots it toward the nozzle by simply bouncing off the un-ionized portion like a billiard ball. Because of this, SPT thrusters can have much higher propellant ionization percentages than the other types of Hall effect thruster, which directly translates into greater thrust efficiency. This extended ionization chamber is made out of an electromagnetic insulator, usually boron nitride, although Borosil, a solution of BN and SiO2, is also used. Other types of materials, such as nanocrystalline diamond, graphene, and a new material called ultra-BN, or plasma-assisted chemical vapor deposition (PACVD) built BN, have also been proposed and tested.

Worn SPT-30, NASA 1998
SPT-30 Hall thruster post testing, image courtesy NASA

The downside to this type of thruster is that the insulator is eroded during operation. Because the erosion of the propellant channel is the main lifetime limiter of this type of thruster, the longer propellant channel in this type of thruster is a detriment to thruster lifetime. Improved materials for the insulator cavity are a major research focus, but replacing boron nitride is going to be a challenge because it’s advantageous for a Hall effect thruster in a number of ways (and also in the other application we’ve looked at, reactor shielding): in addition to being a good electromagnetic insulator, it’s incredibly strong and very thermally conductive. The only major downside is its expense, especially when forming it into single, large, complex shapes; so, often, SPT thrusters have two boron nitride inserts: one at the base, near the anode, and another at the “waist,” or start of the nozzle, of the SPT thruster. Inconsistencies in the composition and conductivity of the insulator can lead to plasma instabilities in the propellant due to the local magnetic field gradients, which can cause losses in ionization efficiency. Additionally, as the magnetic field strength increases, plasma instabilities develop in proportion to the total field strength along the propellant channel.

Another problem that surfaces with these sorts of thrusters is that under high power, electrical arcing can occur, especially in the cathode or at a weak point in the insulator channel. This is especially true for a design that uses a segmented insulator lining for the propellant channel.

HERMeS: NASA’s High Power Single Channel SPT Thruster

Dr. Peterson with TAL-2 HERMeS Test Bed, image courtesy NASA

The majority of NASA’s research into Hall thrusters is currently focused on the Advanced Electric Propulsion System, or AEPS. This is a solar electric propulsion system which encompasses the power generation and conditioning equipment, as well as a 14 kW SPT thruster known as HERMeS, or the Hall Effect Rocket with Magnetic Shielding. Originally meant to be the primary propulsion unit for the Asteroid Redirect Mission, the AEPS is currently planned for the Power and Propulsion Element (PPE) for the Gateway platform (formerly Lunar Gateway and LOP-G) around the Moon. Since the power and conditioning equipment would be different for a nuclear electric mission, though, our focus will be on the HERMeS thruster itself.

This thruster is designed to operate as part of a 40 kW system, meaning that three thrusters will be clustered together (complications in clustering Hall thrusters will be covered later as part of the Japanese RAIJIN TAL system). Each thruster has a central hollow cathode, and is optimized for xenon propellant.

Many materials technologies are being experimented with in the HERMeS thruster. For instance, two different hollow cathodes are being tested: LaB6 (which was experimented with extensively for the NEXT gridded ion thruster) and barium oxide (BaO). Since the LaB6 was already extensively tested, the program has focused on the BaO cathode. Testing is still underway for the 2000 hour wear test; however, the testing conducted to date has confirmed the behavior of the BaO cathode. Another example is the propellant discharge channel: normally boron nitride is used for the discharge channel; however, the latest iteration of the HERMeS thruster is using a boron nitride-silicon (BN-Si) composite discharge channel. This could potentially reduce erosion in the discharge channel and increase the life of the thruster. As of today, the differences in plasma plume characterization are minimal to the point of being insignificant, and erosion tests are similarly inconclusive; however, theoretically, the BN-Si composite could improve the lifetime of the thruster. It is also worth noting that, as with any new material, it takes time to fully develop its manufacture and optimize it for a particular use.

As of the latest launch estimates, the PPE is scheduled to launch in 2022, and all development work of the AEPS is on schedule to meet the needs of the Gateway.

Nested Channel SPT Thrusters: Increasing Power and Thrust Density

Nested SPT Thruster, Liang 2013

One concept that has grown more popular recently (although it's far from new) is to increase the number of propellant channels in a single thruster, in what's called a nested channel Hall thruster. Several designs have used two nested channels. While there are a number of programs investigating nested Hall effect thrusters, including in Japan and China, we'll use the X2, studied at the University of Michigan, as an example. While this design has been supplanted by the X3 (more on that below), many of the questions about the operation of these types of thrusters were addressed by experimenting with the X2. Generally speaking, the amount of propellant flow in the different channels is proportional to the surface area of the emitter anode, and the power and flow rate of the centrally mounted cathode are adjusted to match whether one or multiple channels are firing. Since these designs often use a single central cathode despite having multiple anodes, a lot of development work has gone into improving the hollow cathodes for increased life and power capability. None of the designs that I saw used external cathodes, like those sometimes seen with single-channel HETs, but I'm not sure if that's just due to the design philosophies of the institutions (primarily JPL and the University of Michigan) whose papers I was able to access.

Image Liang 2013

There are a number of advantages to the nested-channel design. Not only is it possible to get more propellant flow from less mass and volume, but the thruster can be throttled as well. For higher thrust operation (such as rapid orbital changes), both channels are fired at once, and the mass flow through the cathode is increased to match. By turning off the central channel and leaving the outer channel firing, a medium “gear” is possible, with mass flow similar to a typical SPT thruster. The smallest channel can be used for the highest-isp operation for interplanetary cruise operations, where the lower mass flow allows for greater exhaust velocities.
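To put rough numbers on that trade: at a fixed jet power, thrust and specific impulse pull against each other as T = 2ηP/(g0·Isp). Here's a minimal sketch in Python; the 5 kW power level, 50% thrust efficiency, and the three "gear" Isp values are illustrative assumptions of mine, not X2 test data:

```python
# Rough sketch of the thrust/Isp trade behind the "gearing" described
# above: at fixed jet power, thrust falls as specific impulse rises.
# All numbers here are illustrative assumptions, not X2 test data.
G0 = 9.80665  # standard gravity, m/s^2

def thrust_newtons(power_w: float, isp_s: float, efficiency: float) -> float:
    """Ideal thrust for a given input power, Isp, and thrust efficiency."""
    return 2.0 * efficiency * power_w / (G0 * isp_s)

# Same 5 kW input, assumed 50% efficiency, at three notional "gears":
for isp in (1200.0, 1800.0, 2600.0):
    print(f"Isp {isp:>6.0f} s -> thrust {1000 * thrust_newtons(5e3, isp, 0.5):.0f} mN")
```

Roughly doubling the Isp at the same power halves the available thrust, which is exactly why a multi-channel thruster that can shift its mass flow between channels is attractive.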

Ignition sequence for dual channel operation in X2, Liang 2013

A number of important considerations were studied during the X2 program, including anode efficiency during the different modes of operation (a slight decrease in efficiency during two-channel operation, with the highest efficiency during inner-channel-only operation), interactions between the plasma plumes (harmonic oscillations were detected at 125 and 150 V, more frequent in the outer channel, but not detected at 200 V operation, indicating some cross-channel interactions that would need to be characterized in any design), and power-to-thrust efficiency (slightly higher during two-channel operation compared to the sum of each channel operating independently, for reasons that weren't fully characterized). The success of this program led to its direct successor, which is currently under development by the University of Michigan, Aerojet Rocketdyne, and NASA: the X3 SPT thruster.

X3 Three Channel Nested SPT on test stand, Hall 2018

The X3 is a 100 kWe design that uses three nested discharge chambers. The cathode for this thruster is a centrally mounted hollow cathode, which accounts for 7% of the total gas flow of the thruster under all modes of operation. Testing during 2016 and 2017 ranging from 5 to 102 kW, 300 to 500 V, and 16 to 247 A, demonstrated a specific impulse range of 1800 to 2650 s, with a maximum thrust of 5.4 N. As part of the NextSTEP program, the X3 thruster is part of the XR-100 electric propulsion system that is currently being developed as a flexible, high-powered propulsion system for a number of missions, both crewed and uncrewed.
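Those figures can be sanity-checked against each other, since thrust efficiency is η = T·g0·Isp/(2P). A quick sketch (treating the quoted maxima as simultaneous values is my assumption, so this is only a ballpark):

```python
# Sanity check on the X3 figures quoted above (102 kW, 5.4 N, ~2650 s):
# thrust efficiency is eta = T * g0 * Isp / (2 * P). The figures come
# from the text; treating them as simultaneous peak values is an
# assumption, so the result is only a rough plausibility check.
G0 = 9.80665  # standard gravity, m/s^2

def thrust_efficiency(thrust_n: float, isp_s: float, power_w: float) -> float:
    return thrust_n * G0 * isp_s / (2.0 * power_w)

eta = thrust_efficiency(5.4, 2650.0, 102e3)
print(f"implied X3 thrust efficiency: {eta:.0%}")  # roughly 70%
```

A result in the 60-70% range is in line with what large Hall thrusters typically achieve, so the quoted envelope hangs together.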

LaB6 Cathode, Hall 2018

While this thruster is showing a great deal of promise, there remain a number of challenges to overcome. One of the biggest is cathode efficiency, which was shown to be only 23% during operation of just the outermost channel. This is a heavy-duty cathode, rated to 120 A. Due to the concerns of erosion, especially under high-power, high-flow conditions, there are three different gas injection points: through the central bore of the cathode (limited to 20 sccm), external flow injectors around the cathode keeper, and supplementary internal injectors.

The cross-channel thrust increases seen in the X2 thruster weren’t observed, meaning that this effect could have been something particular to that design. In addition, due to the interactions between the different magnetic lenses used in each of the discharge channels, the strength and configuration of each magnetic field has to be adjusted depending on the other channels that are operating, a challenge that increases with magnetic field strength.

Finally, the BN insulator was shown to expand in earlier tests to the point that a gap was formed, allowing arcing to occur from the discharge plasma to the body of the thruster. Not only does this mean that the plasma is losing energy – and therefore decreasing thrust – but it also heats the body of the thruster as well.

These challenges are all being addressed, and in the next year the 100-hour, full power test of the system will be conducted at NASA’s Glenn Research Center.

X3 firing in all anode configurations: 1. Inner only, 2. Middle only, 3. Outer only, 4. Inner and middle, 5. Middle and outer, 6. Inner and outer, 7. Inner, middle and outer; Florenz 2013

TAL Hall Effect Thrusters

Early TAL concept, Kim et al 2007

The TAL concept has been around since the beginning of Hall thruster development. In the USSR, development of the TAL was tasked to the Central Research Institute for Machine Building (TsNIIMash). Early challenges with the design meant it was not explored as thoroughly in the US; in Europe and Asia, however, this sort of thruster has been a major focus of research for decades, and recently the US has increased its study of the design as well. Since these designs (as a general rule) have higher power requirements for operation, they have not been used nearly as much as the SPT-type Hall thruster, but for high-powered systems they offer a lot of promise.

As we mentioned before, the TAL uses a conductor for the walls of the plasma chamber, meaning that the radial current moving across the plasma is continuous across the acceleration chamber of the thruster. Because of the high magnetic fields in this type of thruster (0.1-0.2 T), the electron cyclotron radius is very small, allowing for more efficient ionization of the propellant and therefore limiting the size necessary for the acceleration zone. However, because a fraction of the ion stream is directed toward these conducting walls, leading to degradation, the lifetime of these types of thrusters is often shorter than that of their SPT counterparts. This is one area of investigation for designers of TAL thrusters, especially higher-powered variants.
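For a sense of scale, here's a back-of-envelope estimate of that electron cyclotron (Larmor) radius, r = m_e * v / (e * B); the 10 eV electron energy is an assumed, typical-order value of mine rather than a figure from any particular thruster:

```python
# Back-of-envelope electron cyclotron (Larmor) radius in a TAL-strength
# magnetic field, r = m_e * v / (e * B). The 10 eV electron energy is an
# assumed, typical-order value, not a figure from the text.
import math

M_E = 9.109e-31   # electron mass, kg
Q_E = 1.602e-19   # elementary charge, C

def larmor_radius_m(energy_ev: float, b_tesla: float) -> float:
    v = math.sqrt(2.0 * energy_ev * Q_E / M_E)  # electron speed, m/s
    return M_E * v / (Q_E * b_tesla)

r = larmor_radius_m(10.0, 0.15)  # 10 eV electron in a 0.15 T field
print(f"electron Larmor radius: {r * 1e3:.3f} mm")
```

A radius of less than a tenth of a millimeter means the electrons are effectively trapped in the thin anode layer, which is what allows the acceleration zone to be so short.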

As a general rule, TAL thrusters have lower thrust, but higher isp, than SPT thrusters. Since the majority of Hall thrusters are used for station-keeping, where thrust levels are a significant design consideration, this has also militated in favor of wider deployment of the SPT thruster.

High-Powered TAL Development in Japan: Clustered TAL with a Common Cathode

RAIJIN94, Hamada et al 2017

One country that has been doing a lot of development work on the TAL thruster is Japan. Most of their designs seem to be in the 5 kW range, and are being designed to operate clustered around a single cathode for charge neutralization. The RAIJIN program (Robust Anode-layer Intelligent Thruster for Japanese IN-space propulsion system) has been focusing on addressing many of the issues with high-powered TAL operation, mainly for raising satellites from low Earth orbit to geosynchronous orbit (a maneuver that drives a large part of the propellant budget for many satellite launches today, and one directly applicable to de-orbiting satellites as well). The RAIJIN94 thruster is a 5 kW TAL under development by Kyushu University, the University of Tokyo, and the University of Miyazaki. Overall targets for the program are a thruster that operates at 6 kW, providing 360 mN of thrust at 1700 s isp, with an anode mass flow rate of 20 mg/s and a cathode flow rate of 1 mg/s. The ultimate goal of the program is a 25 kW TAL system containing five of these thrusters with a common cathode. Based on mass penalty analysis, this is a more mass-efficient approach for a TAL than a single large thruster with its increased thermal management requirements. Management of anode and conductor erosion is a major focus of this program, but not one that has been written about extensively. The limiting of the thruster power to about 5 kW, though, seems to indicate that scaling a traditional TAL beyond this size, at least with current materials, is impractical.
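Those targets hang together reasonably well, since for an ideal thruster T = ṁ·g0·Isp. A quick consistency check (whether the 1700 s figure refers to anode-only or total flow isn't stated, so this is approximate):

```python
# Quick consistency check on the RAIJIN94 targets quoted above: thrust
# should be roughly total mass flow * g0 * Isp. Whether the 1700 s
# figure is anode-only or total-flow Isp isn't stated in the text, so
# this is only approximate.
G0 = 9.80665  # standard gravity, m/s^2

mdot_total = (20.0 + 1.0) * 1e-6  # anode + cathode flow, kg/s
isp = 1700.0                      # target specific impulse, s
thrust_mn = mdot_total * G0 * isp * 1e3
print(f"implied thrust: {thrust_mn:.0f} mN (stated target: 360 mN)")
```

The implied figure comes out within a few percent of the stated 360 mN target, which is about as close as this kind of ideal-thruster estimate gets.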

Side by side HETs for testing with common cathode, Miyasaka et al 2013

There are challenges with this design paradigm, however, which will also impact other clustered Hall designs. Cathode performance, as we saw in the SPT section, is a concern, especially when operating at the very high power and mass flow rates that a large cluster would need. Perhaps a larger consideration was the plasma oscillations that occurred in the 20 kHz range when two thrusters were fired side by side, as was done (and continues to be done) at Gifu University. It was found that by varying the mass flow rate, operating at a slightly lower power, and maintaining a wider spacing of the thruster heads, the plasma flow instabilities could be accounted for. Experiments continue there to study this phenomenon, and the researchers, headed by Dr. Miyasaka, are confident that this issue can be managed.

Dual Stage TAL and VHITAL

One of the most interesting concepts investigated at TsNIIMash was the dual-stage TAL, which used two anodes. The first anode is very similar to the one used in a typical TAL or SPT; it also serves as the injector for the majority of the propellant and provides the charge to ionize it. As the plasma exits this first anode, it encounters a second anode at the opening of the propellant channel, which accelerates the propellant. An external cathode is used to neutralize the beam. This design demonstrated specific impulses of up to 8000 s, among the highest (if not the highest) of any Hall thruster to date. The final iteration during this phase of research was the water-cooled TAL-160, which operated at a power consumption of 10-140 kW.

VHITAL bismuth propellant feed system, Sengupta et al 2007

Another point of interest with this design is the use of bismuth as the propellant for the thruster. As we'll see below, the propellant options for an electrostatic thruster are very broad, and the choice among them depends on a number of characteristics. Bismuth is reasonably inexpensive, relatively common, and storable as a solid. That last point is usually a headache for an electrostatic thruster, since ionized powders are notorious for sticking to surfaces and gumming up the works, as it were. In this case, since bismuth has a reasonably low melting temperature, a pre-heater was used to resistively heat the bismuth, and an electromagnetic pump was then used to propel it toward the anode. Just before injection into the thruster, a carbon vaporization plug was used to ensure proper mass flow into the thruster. As long as the operating temperature of the thruster was high enough, and the mass flow was carefully regulated, this novel fueling concept was not a problem.

VHITAL mounted to test bracket at TsNIIMash, Sengupta et al 2007

This design was later picked up in 2004 by NASA, who worked with TsNIIMash researchers to develop the VHITAL, or Very High Isp Thruster with Anode Layer, over two and a half years. While this thruster uses significantly less power (36 kW as opposed to up to 140 kW), many of the design details are the same, but with a few major differences: the NASA design is radiatively cooled rather than water cooled, it added a resistive heater to the base of the first anode as well, and tweaks were made to the propellant feed system. The original TAL-160 was used for characterization tests, and the new VHITAL-160 thruster and propellant feed system were built to characterize the system using modern design and materials. Testing was carried out at TsNIIMash in 2006, and demonstrated stable operation without using a neutralizing cathode, and expected metrics were met.

While I have been able to find a summary presentation from the time of the end of the program in the US, I have been unable to find verified final results of this program. However, 8000 s isp was demonstrated experimentally at 36 kW, with a thrust of ~700 mN and a thrust efficiency of close to 80%.

If anyone has additional information about this program, please comment below or contact me via email!

Hybrid and Non-Traditional Hall Effect Thrusters: Because Tweaking Happens

PLaS-40 thruster sketch

As we saw with the VHITAL, the traditional SPT and TAL thrusters – while the most common – are far from the only way to use these technologies. One interesting concept, studied by EDB Fakel in Russia, is a hybrid SPT-TAL thruster. SPT thrusters, due to their extended ionization chamber lined with an insulator, generally provide fuller ionization of the propellant. TAL thrusters, on the other hand, are better able to efficiently accelerate the propellant once it's ionized. So the designers at EDB Fakel, led by M. Potapenko, developed, built, and tested the PlaS-40 hybrid thruster, rated at up to 0.4 kW, and proposed and breadboard-tested a larger (up to 4.5 kW) PlaS-120 thruster as well, during the 1990s (initial conception was in the early 90s, but the test model was built in 1999). While fairly similar in outward appearance to an SPT, the acceleration chamber was shorter. The PlaS-40 achieved 1000-1750 s isp and a thrust of 23.5 mN, while the PlaS-120 showed the capability of reaching 4000 s isp and up to 400 mN of thrust (these tests were not continued, due to a lack of funding). This design concept could offer advances in specific impulse and thrust efficiency beyond traditional thruster designs, but currently there isn't enough research to show a clear advantage.

PLaS-40 Thruster


Early EDB Fakel concept for SPT thruster with “magnetic screens.” Kim et al 2007

Another interesting hybrid design was a gridded Hall thruster, researched by V. Kim at Fakel in 1973-1975. Here, again, an SPT-type ionization chamber was used, and the screens were used to more finely control the magnetic lensing effect of the thruster. This was an early design, used because the knowledge and technology needed to do away with the grids didn't yet exist. However, it's possible that a hybrid Hall-gridded ion thruster may offer higher specific impulse while taking advantage of the more efficient ionization of an SPT thruster. As we saw with both the DS4G and VHITAL, increasing the separation between the ionization, ion extraction, and acceleration portions of the thruster allows for greater thrust efficiency, and this may be another mechanism to do that.

One design, out of the University of Michigan, modifies the anode itself by segmenting it into many different parts. This was done to manage plasma instabilities within the propellant plume, which cause parasitic power losses. While it's unclear exactly how much efficiency can be gained this way, it addresses a problem near the anode that had been observed since the 1960s. Small tweaks like this may end up changing the geometry of the thruster significantly over time as optimization occurs.

Other modifications have been made as well, including combining discharge chambers, using conductive materials for discharge chambers but retaining a dielectric ceramic in the acceleration zone of the thruster, and many other designs. Many of these were early ideas that were demonstrated but not carried through for one reason or another. For instance, the metal discharge chambers were considered an economic benefit, because the ceramic liners are the major cost-limiting factor in SPT thrusters. With improved manufacturing and availability, costs went down, and the justification went away.

There remains an incredible amount of flexibility in the Hall effect thruster design space. While two stage, nested, and clustered designs are the current most advanced high power designs, it’s difficult to guess if someone will come up with a new idea, or revisit an old one, to rewrite the field once again.

Propellants: Are the Current Propellant Choices Still Effective For High Powered Missions?

One of the interesting things to consider about these types of thrusters, both gridded ion and Hall effect, is propellant choice. Xenon is, as of today, the primary propellant used by all operational electrostatic thrusters (although some early thrusters used cesium and mercury); however, Xe is rare and reasonably expensive. In smaller Hall thruster designs, such as the 5-10 kWe thrusters used on telecommunications satellites, the propellant load (as of 1999) for many spacecraft is less than 100 kg – a significant but not exorbitant amount, and launch costs (and design considerations) make this a cost-effective decision. For larger spacecraft, such as a Hall-powered spacecraft to Mars, the propellant mass could easily be in the 20-30 ton range (assuming 2500 s isp and a 100 mg/s flow rate of Xe), which is a very different matter in terms of Xe availability and cost. Alternatives, then, become far more attractive if possible.
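To see where a figure in the tens of tons comes from, the rocket equation gives the propellant fraction directly. A minimal sketch (the 60 t initial mass and 10 km/s delta-v are round-number assumptions of mine for a Mars-class electric mission, not figures from a specific study):

```python
# Illustrative propellant-mass estimate via the rocket equation,
# m_p = m0 * (1 - exp(-dv / (g0 * Isp))). The 60 t initial mass and
# 10 km/s delta-v are assumed round numbers for a Mars-class electric
# mission, not figures from the text or any specific study.
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass_kg(m0_kg: float, delta_v_ms: float, isp_s: float) -> float:
    return m0_kg * (1.0 - math.exp(-delta_v_ms / (G0 * isp_s)))

mp = propellant_mass_kg(60e3, 10e3, 2500.0)
print(f"propellant needed: {mp / 1e3:.1f} t of xenon")
```

Even under these gentle assumptions the answer lands at roughly 20 t of xenon, which is consistent with the 20-30 ton range mentioned above and is far beyond the annual production that the Xe market could comfortably supply.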

Argon is also an attractive option, and is often proposed as a propellant as well, being less rare. However, it's also considerably lower in mass, leading to higher specific impulse but lower thrust. Depending on the mission, this could be a problem if large changes in delta-vee are needed in a shorter period of time. The higher ionization energy also means that either the propellant won't be as completely ionized, leading to a loss of efficiency, or more energy is required to ionize the propellant.

The next most popular choice for propellant is krypton (Kr), the next lightest noble gas. The chemical advantages of Kr are basically identical, but there are a couple things that make this trade-off far from straightforward: first, tests with Kr in Hall effect thrusters often demonstrate an efficiency loss of 15-25% (although this may be able to be mitigated slightly by optimizing the thruster design for the use of Kr rather than Xe), and second the higher ionization energy of Kr compared to Xe means that more power is required to ionize the same amount of propellant (or with an SPT, a deeper ionization channel, with the associated increased erosion concerns). Sadly, several studies have shown that the higher specific impulse gained from the lower atomic mass of Kr aren’t sufficient to make up for the other challenges, including losses from Joule heating (which we briefly discussed during our discussion of MPD thrusters in the last post), radiation, increased ionization energy requirements, and even geometric beam divergence.
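The specific impulse advantage Kr offers is easy to quantify: for singly charged ions accelerated through the same voltage, exhaust velocity scales as 1/√(ion mass), since v = √(2qV/m). A quick sketch (the 300 V acceleration voltage is an assumed example value of mine):

```python
# The specific-impulse advantage of krypton's lower mass: at the same
# acceleration voltage, ideal exhaust velocity scales as 1/sqrt(ion
# mass), v = sqrt(2 * q * V / m). The 300 V figure is an assumed
# example value, not from the text.
import math

Q_E = 1.602e-19   # elementary charge, C
AMU = 1.6605e-27  # atomic mass unit, kg

def exhaust_velocity_ms(volts: float, mass_amu: float) -> float:
    return math.sqrt(2.0 * Q_E * volts / (mass_amu * AMU))

v_xe = exhaust_velocity_ms(300.0, 131.293)
v_kr = exhaust_velocity_ms(300.0, 83.798)
print(f"Kr/Xe ideal exhaust velocity ratio: {v_kr / v_xe:.2f}")  # ~1.25
```

That roughly 25% isp gain is what has to pay for the lower ionization efficiency and higher per-ion ionization cost described above, and per the studies mentioned, it usually doesn't.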

This has led some designers to propose a mixture of Xe and Kr propellants, to gain the advantages of lower ionization energy for part of the propellant, as a compromise solution. The downside is that this doesn’t necessarily improve many of the problems of Kr as a propellant, including Joule heating, thermal diffusion into the thruster itself, and other design headaches for an electrostatic thruster. Additionally, some papers report that there is no resonant ionization phenomenon that facilitates the increase of partial krypton utilization efficiency, so the primary advantage remains solely cost and availability of Kr over Xe.

| Propellant | Atomic Mass (Ar, std.) | Ionization Energy (1st, kJ/mol) | Density (g/cm^3) | Melting Point (K) | Boiling Point (K) | Estimated Cost ($/kg) |
|---|---|---|---|---|---|---|
| Xenon | 131.293 | 1170.4 | 2.942 (BP) | 161.4 | 165.051 | 1200 |
| Krypton | 83.798 | 1350.8 | 2.413 (BP) | 115.78 | 119.93 | 75 |
| Bismuth | 208.98 | 703 | 10.05 (MP) | 544.7 | 1837 | 29 |
| Mercury | 200.592 | 1007.1 | 13.534 (at STP) | 234.32 | 629.88 | 500 |
| Cesium | 132.905 | 375.7 | 1.843 (at MP) | 301.7 | 944 | >5000 |
| Sodium | 22.989 | 495.8 | 0.927 (at MP), 0.968 (solid) | 370.94 | 1156.09 | 250 |
| Potassium | 39.098 | 418.8 | 0.828 (MP), 0.862 (solid) | 336.7 | 1032 | 1000 |
| Argon | 39.792 | 1520.6 | 1.395 (BP) | 83.81 | 87.302 | 5 |
| NaK | Varies | Differential | 0.866 (20 C) | 260.55 | 1445 | Varies |
| Iodine | 126.904 | 1008.4 | 4.933 (at STP) | 386.85 | 457.4 | 80 |
| Magnesium | 24.304 | 737.7 | 1.584 (MP) | 923 | 1363 | 6 |
| Cadmium | 112.414 | 867.8 | 7.996 (MP) | 594.22 | 1040 | 5 |


Early thrusters used cesium and mercury for propellant, and for higher-powered systems this may end up being an option again. As we've seen earlier in this post, neither Cs nor Hg is unknown in electrostatic propulsion (another design that we'll look at a little later is the cesium contact ion thruster); however, they've fallen out of favor. The primary reason always given is environmental and occupational health concerns: during development of the thrusters, during handling of the propellant in construction and launch, and in the immediate environment of the spacecraft. The thrusters have to be built and extensively tested before they're used on a mission, and all these experiments are a perfect way to strongly contaminate delicate (and expensive) equipment such as thrust stands, vacuum chambers, and sensing apparatus – not to mention the lab and surrounding environment in the case of an accident. Additionally, any accident that exposes workers to Hg or Cs will be expensive and difficult to address, notwithstanding any long-term health effects for the personnel involved (handling procedures have been well established, but one worker not wearing the correct personal protective equipment could be costly in terms of both personal and programmatic health). There is also a concern on the spacecraft side: perfect propellant stream neutralization doesn't actually occur in electrostatic drives (although this has consistently improved over time), leading to a buildup of negative charge on the spacecraft; subsequently, a portion of the positive ions used for propellant circle back around the magnetic fields and impact the spacecraft.
Not only does this reduce the thrust of the spacecraft, but if the propellant is chemically active (as both Cs and Hg are), it can lead to chemical reactions with spacecraft structural components, sensors, and other systems, accelerating degradation of the spacecraft.

A while back on the Facebook group I asked the members about the use of these propellants, and an interesting discussion developed (primarily between Mikkel Haaheim, my head editor and frequent contributor to this blog, and Ed Pheil, who has extensive experience in nuclear power, including the JIMO mission, and is currently the head of Elysium Industries, developing a molten chloride fast reactor) concerning the pros and cons of using these propellants. Two other options, with their own complications from the engineering side, were also proposed, which we’ll touch on briefly: sodium and potassium both have low ionization energies, and form a low melting temperature eutectic, so they may offer additional options for future electrostatic propellants as well. Three major factors came up in the discussion: environmental and occupational health concerns during testing, propellant cost (which is a large part of what brings us to this discussion in the first place), and tankage considerations.

As far as cost goes, this is listed in the table above. These costs are all ballpark estimates, and costs for space-qualified supplies are generally higher, but it illustrates the general costs associated with each propellant. So, from an economic point of view, Cs is the least attractive, while Hg, Kr, and Na are all attractive options for bulk propellants.

Tankage in and of itself is a simpler question than the full propellant feed system, but it can offer some insights into the overall challenges in storing and using the various propellants. Xe, our baseline propellant, has a density as a liquid of 2.942 g/cm^3, Kr of 2.413, and Hg of 13.534. All other things aside, this indicates that the tank volume needed for the same mass of Hg is a small fraction of that needed for Xe or Kr. However, additional complications arise when considering tank material differences. For instance, both Xe and Kr require cryogenic cooling (something we discussed briefly in the LEU NTP series, which you can read here [insert LEU NTP 3 link]). While the challenges of Xe and Kr cryogenics are less difficult than H2 cryogenics, due to the higher atomic mass and lower chemical reactivity, many of the same considerations still apply. Hg, on the other hand, has to be kept in a stainless steel tank (by law); other common containers, such as glass, don't lend themselves to spacecraft tank construction. However, a stainless steel liner in a carbon composite tank is a lower-mass option.
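Those densities translate directly into tank volume for a given propellant load. A minimal comparison (the 1000 kg load is an arbitrary assumption of mine, and tank structure, pressurization, and boil-off margins are all ignored):

```python
# Rough tankage-volume comparison from the liquid densities quoted
# above, for an assumed 1000 kg propellant load. Tank structure,
# pressurization, and cryogenic boil-off margins are all ignored;
# 1 g/cm^3 equals 1 kg/L.
DENSITY_G_CM3 = {"Xe": 2.942, "Kr": 2.413, "Hg": 13.534}
LOAD_KG = 1000.0

tank_liters = {name: LOAD_KG / rho for name, rho in DENSITY_G_CM3.items()}
for name, liters in tank_liters.items():
    print(f"{name}: {liters:6.0f} L")
```

On density alone, a mercury tank holds the same propellant mass in roughly a quarter of the volume of a xenon tank, and the gap widens further once the cryogenic hardware that Xe and Kr need is counted against them.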

The last type of fluid propellant to mention is NaK, a common fast reactor coolant which has been extensively studied. Many of the problems with tankage of NaK are similar to those seen with Cs or Hg, chemical reactivity chief among them (although with different particulars for the tankage); however, the research into using NaK as a fast reactor coolant has largely addressed the immediate corrosion issues.

The main problem with NaK would be differential ionization causing plating of the higher-ionization-energy metal (Na in this case) onto the anode or propellant channels of the thruster. It may be possible to deal with this, either by shortening the propellant channel (like in a TAL or EDPT), or by ensuring full ionization through excess charge in the anode and cathode. The possibility of using NaK was studied in an SPT thruster in the Soviet Union, but unfortunately I cannot find the papers associated with these studies. However, NaK remains an interesting option for future thrusters.

Thrusters using solid propellants are generally considered condensable propellant thrusters. These designs have been studied for a number of decades. Most designs use a resistive heater to melt the propellant, which is then vaporized just before entering the anode. This was first demonstrated with the cesium contact gridded ion thrusters used as part of the SERT program. There (as mentioned earlier) a metal foam was used as the storage medium, kept warm enough that the cesium remained liquid. By varying the pore size, a metal wick was made which controlled the flow of the propellant from the reservoir to the ionization head. This results in a greater overall mass for the propellant tankage, but on the other hand the lack of moving parts, and the ability to ensure even heating across the propellant volume, makes this an attractive option in some cases.

A more recent design that we also discussed (the VHITAL) uses bismuth propellant for a TAL thruster, a NASA update of a Soviet TsNIIMash design from the 1970s (which was shelved due to the lack of high-powered space power systems at the time). This design uses a reservoir of liquid bismuth, which is resistively heated to above the melting temperature. An argon pressurization system is used to force the liquid bismuth through an outlet, where it’s then electromagnetically pumped into a carbon vaporization plug. This then discharges into the anode (which in the latest iteration is also resistively heated), where the Hall current then ionizes the propellant. It may be possible with this design to use multiple reservoirs to reduce the power demand for the propellant feed system; however, this would also lead to greater tankage mass requirements, so it will largely depend on the particulars of the system whether the increase in mass is worth the power savings of using a more modular system. This propellant system was successfully tested in 2007, and could be adapted to other designs as well.

Other propellants have been proposed as well, including magnesium, iodine, and cadmium. Each has its advantages and disadvantages in tankage, chemical reactivity (limiting thruster materials options), and other factors, but all remain possible for future thruster designs.

For the foreseeable future, most designs will continue to use xenon, with argon being the next most popular choice, but as the amount of propellant needed increases with the development of nuclear electric propulsion, it’s possible that these other propellant options will become more prominent as tankage mass, propellant cost, and other considerations become more significant.

Electrospray Thrusters

Electrospray thrusters use electrically charged liquids as a propellant. They fall into three main categories: colloid thrusters, which accelerate charged droplets dissolved in a solvent such as glycerol or formamide; field emission electric propulsion (FEEP) thrusters, which use liquid metals to produce positively charged metal ions; and, finally, ionic liquid ion source (ILIS) thrusters, which use room temperature molten salts to produce a beam of salt ions.

Colloid thruster operational diagram, Prajana et al 2001

All types of electrospray exploit a phenomenon known as a Taylor cone, which forms on the surface of a conductive fluid exposed to an electric field. If the field is strong enough, the tip of the cone is drawn out until it breaks down, emitting a spray of droplets or ions from the liquid. Electrospray is now common in many industrial applications, and advances in those fields, along with a growing focus on compact propulsion systems, have made the electrospray thruster more attractive. The amount of thrust produced, and the thrust density, is directly proportional to the density of emitters in a given area, and recent developments in nanomaterials fabrication have made it possible to increase that density significantly. The main lifetime limitation of this type of thruster is emitter wear, which depends on both the mass flow rate and any chemical interactions between the emitters and the propellant.
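
Since total thrust is, to first order, just the per-emitter thrust times the number of emitters, the payoff of denser emitter arrays is easy to see. The per-emitter thrust, densities, and chip area below are hypothetical order-of-magnitude values for illustration only:

```python
# Electrospray thrust scales with emitter count: each Taylor-cone emitter
# produces a roughly fixed thrust, so total thrust is emitter density
# times array area. All numbers here are hypothetical illustrations.

def array_thrust(thrust_per_emitter_N, emitters_per_cm2, area_cm2):
    """Total thrust (N) of an electrospray emitter array."""
    return thrust_per_emitter_N * emitters_per_cm2 * area_cm2

# A 10 cm^2 chip at 500 emitters/cm^2, assuming ~0.1 uN per emitter:
low_density = array_thrust(0.1e-6, 500, 10.0)
# Nanofabrication pushing density to 5000/cm^2 on the same chip:
high_density = array_thrust(0.1e-6, 5000, 10.0)
print(round(low_density * 1e3, 3), "mN ->", round(high_density * 1e3, 3), "mN")
```

A tenfold increase in emitter density buys a tenfold increase in thrust from the same chip area, which is why nanofabrication matters so much here.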

TILE5000, Accion Space Systems

The vast majority of these systems focus on cubesat propulsion, but one company, Accion Systems, has developed a tileable system that could offer higher-powered operation through the use of dozens of thrusters arrayed in a grid. Their largest thruster (which measures 35 mm by 35 mm by 16 mm, including propellant) produces a total impulse of 200,000 N·s and a thrust of 10 mN at a specific impulse of 1500 s. While their primary focus is on cubesats, the CEO, Natalya Bailey, has mentioned that it would be possible to use many of their TILE drive systems in parallel for higher-powered missions.
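
The quoted figures are linked by the standard performance relations; a quick sketch, taking the TILE numbers as quoted above at face value:

```python
# Standard relations linking the figures quoted for an electric thruster:
# specific impulse, thrust, propellant mass flow rate, and the burn time
# implied by a total impulse. Input values are the TILE figures as
# quoted in the text above, used here purely for illustration.

G0 = 9.80665  # m/s^2, standard gravity

def mass_flow(thrust_N, isp_s):
    """Propellant mass flow rate (kg/s) at a given thrust and isp."""
    return thrust_N / (isp_s * G0)

def burn_time(total_impulse_Ns, thrust_N):
    """Time (s) to expend the total impulse at constant thrust."""
    return total_impulse_Ns / thrust_N

mdot = mass_flow(10e-3, 1500)   # under a microgram per second
t = burn_time(200_000, 10e-3)   # continuous thrusting time, seconds
print(f"{mdot:.2e} kg/s, {t / 86400:.0f} days")
```

The tiny mass flow and months-long thrusting time are typical of electrospray systems: very little propellant, expelled very fast, for a very long time.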

One of the biggest power demands of an electrostatic engine of almost any type is the ionization cost of the propellant. Depending on the mass flow and power, different systems are used to ionize the propellant, including electron beams, RF ionization, cyclotron resonance, and the Hall effect. What if we could get rid of that power cost, and instead spend all of the energy accelerating the propellant? Especially in small spacecraft this is very attractive, and it may be possible to scale it up significantly as well (up to the limits of the electric charge that can be placed on the screens themselves). Some fluids are ionic liquids: room-temperature molten salts whose constituent ions are already charged, yet which are reasonably chemically stable and easily storable. By replacing an uncharged propellant with one that carries its charge without the need for on-board ionization equipment, mass, volume, and power can be conserved. Not all electrospray thrusters use an ionic liquid, but the ones that do offer considerable advantages in energy efficiency, and possibly greater overall thruster efficiency as well. I have yet to see a design for a gridded ion or Hall effect thruster that uses these types of propellants, but it may be possible to do so.
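
To put a number on that ionization cost: for singly charged xenon, the first ionization energy can be compared with the kinetic energy each ion carries at a given specific impulse. This sketch counts only the ionization potential itself; real ionization stages lose considerably more energy, so it is a lower bound on the penalty.

```python
# Lower bound on the fraction of per-ion energy spent on ionization for
# singly charged xenon, versus the kinetic energy the ion gains in the
# accelerating field. Physical constants and xenon's first ionization
# energy (12.13 eV) are standard reference values.

EV = 1.602176634e-19      # J per eV
AMU = 1.66053906660e-27   # kg per atomic mass unit
G0 = 9.80665              # m/s^2

XE_MASS = 131.293 * AMU       # kg, mass of a xenon atom
XE_IONIZATION = 12.13 * EV    # J, first ionization energy of xenon

def ionization_fraction(isp_s):
    """Fraction of the per-ion energy budget spent on ionization,
    for singly charged xenon at a given specific impulse."""
    v = isp_s * G0                   # exhaust velocity, m/s
    kinetic = 0.5 * XE_MASS * v**2   # kinetic energy per ion, J
    return XE_IONIZATION / (XE_IONIZATION + kinetic)

# At isp = 1500 s, about 8% of the per-ion energy goes to ionization;
# at higher isp the kinetic term dominates and the fraction drops:
print(f"{ionization_fraction(1500):.3f}, {ionization_fraction(3000):.3f}")
```

Even this idealized floor of several percent, before any ionization-stage inefficiency, is energy an ionic-liquid electrospray simply does not have to spend.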

Conclusions

With that, we come to the end of our overview of electric thrusters. While there are some types of thruster that we did not discuss, they are unlikely to be usable in high-powered systems such as would be found on an NEP spacecraft. When I began this series of blog posts, I knew that electric propulsion is a very broad topic, but the learning process during writing these three posts has been far more intense, and broad, than I was expecting. Electric propulsion has never been my strong suit, so I've been even more careful than usual to stick to the resources available to write these posts, and I've had a lot of help from some very talented people to get to this point.

I was initially planning on writing a post about the power conditioning units that adapt the power supply's output for these thrusters, but the more I researched, the less these systems made sense to me – something that I've been assured isn't uncommon – so I'm going to skip that for now.

Instead, the next post is going to look at the power conversion systems that nuclear electric spacecraft can use. Due to the unique combination of available temperature from a nuclear reactor, the high power levels available, and the unique properties of in-space propulsion, there are many options available that aren’t generally considered for terrestrial power plants, and many designs that are used by terrestrial plants aren’t available due to mass or volume requirements. I’ve already started writing the post, but if there’s anything writing on NEP has taught me, it’s that these posts take longer than I expect, so I’m not going to give a timeline on when that will be available – hopefully in the next 2-3 weeks, though.

After that, we’ll look more in depth at thermal management and heat rejection systems for a wide range of temperatures, how they work, and the fundamental limitations that each type has. After another look at the core of an NEP spacecraft’s reactor, we will then look at combining electric and thermal propulsion in a post on bimodal NTRs, before moving on to our next blog post series (probably on pulse propulsion, but we may return to NTRs briefly to look at liquid core NTRs and the LARS proposal).

I hope you enjoyed the post. Leave a comment below with any comments, questions, or corrections, and don’t forget to check out our Facebook group, where I post work-in-progress visuals, papers I come across during research, and updates on the blog (and if you do, don’t feel shy about posting yourself on astronuclear propulsion designs and news!).

References

Electrostatic Thrusters

Gridded Ion Thrusters

MIT Open Courseware Astronautics Course Notes, Lecture 10-11: Kaufmann Ion Drives https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-522-space-propulsion-spring-2015/lecture-notes/MIT16_522S15_Lecture10-11.pdf

NSTAR Technology Validation, Brophy et al 2000 https://trs.jpl.nasa.gov/handle/2014/13884

The High Power Electric Propulsion (HiPEP) Ion Thruster, Foster et al 2004 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20040139476.pdf

Dual stage 4 Grid thruster page, ANU: https://physics.anu.edu.au/cpf/sp3/ds4g/

Dual Stage Gridded Ion Thruster ESA page: http://www.esa.int/gsp/ACT/projects/ds4g_overview.html

RIT-10 http://www.space-propulsion.com/brochures/electric-propulsion/electric-propulsion-thrusters.pdf

The NASA Evolutionary Xenon Thruster: The Next Step for U.S. Deep Space Propulsion, Schmidt et al NASA GRC 2008 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20080047732.pdf

Deflectable Beam Linear Strip Cesium Contact Ion Thruster System, Dulgeroff et al 1971 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19710027900.pdf

Hall Effect Thrusters

Fundamental Difference Between the Two Variants of Hall Thruster: SPT and TAL http://alfven.princeton.edu/publications/choueiri-jpc-2001-3504

History of the Hall Thrusters Development in the USSR, Kim et al 2007 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2007index/IEPC-2007-142.pdf

Recent Progress and Perspectives on Space Electric Propulsion Systems Based on Smart Nanomaterials, Levchenko et al 2018 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5830404/pdf/41467_2017_Article_2269.pdf

SPT

Testing of the PPU Mk 3 with the XR-5 Hall Effect Thruster, Xu et al, Aerojet Rocketdyne 2017 https://iepc2017.org/sites/default/files/speaker-papers/iepc-2017-199.pdf

AEPS and HERMeS

Overview of the Development and Mission Application of the Advanced Electric Propulsion System (AEPS), Herman et al 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180001297.pdf

13kW Advanced Electric Propulsion Flight System Development and Qualification, Jackson et al 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180000353.pdf

Wear Testing of the HERMeS Thruster, Williams et al 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170000963.pdf

Performance, Stability, and Plume Characterization of the HERMeS Thruster with Boron Nitride Silica Composite Discharge Channel, Kamhawi et al 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180000687.pdf

Hollow Cathode Assembly Development for the HERMeS Hall Thruster, Sarver-Verhey et al 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170001280.pdf

Nested HET

Constant-Power Performance and Plume Effects of a Nested-Channel Hall-Effect Thruster, Liang et al University of Michigan 2011 http://pepl.engin.umich.edu/pdf/IEPC-2011-049.pdf

The Combination of Two Concentric Discharge Channels into a Nested Hall Effect Thruster, Liang PhD Thesis, University of Michigan 2013 http://pepl.engin.umich.edu/pdf/2013_Liang_Thesis.pdf

Plasma Oscillation Effects on Nested Hall Thruster Operation and Stability, McDonald et al University of Michigan 2013 http://pepl.engin.umich.edu/pdf/IEEE-2013-2502.pdf

Investigation of Channel Interactions in a Nested Hall Thruster Part 1, Georgin et al University of Michigan 2016 http://pepl.engin.umich.edu/pdf/JPC_2016_Georgin.pdf

Investigation of Channel Interactions in a Nested Hall Thruster Part 2, Cusson et al University of Michigan 2016 http://pepl.engin.umich.edu/pdf/JPC_2016_Cusson.pdf

X3 NHT

The X3 100 kW-class Nested Hall Thruster: Motivation, Implementation, and Initial Performance; Florenz, University of Michigan doctoral thesis http://pepl.engin.umich.edu/pdf/2014_Florenz_Thesis.pdf

First Firing of a 100-kW Nested Channel Hall Thruster, Florenz et al University of Michigan, 2013 http://www.dtic.mil/dtic/tr/fulltext/u2/a595910.pdf

High-Power Performance of a 100-kW Class Nested Hall Thruster, Hall et al University of Michigan 2017 http://pepl.engin.umich.edu/pdf/IEPC-2017-228.pdf

Update on the Nested Hall Thruster Subsystem for the NextSTEP XR-100 System, Jorns et al University of Michigan 2018 http://pepl.engin.umich.edu/pdf/AIAA-2018-4418.pdf

Multichannel Hall Effect Thruster patent, McVey et al, Aerojet Rocketdyne https://patents.google.com/patent/US7030576B2/en

MW-Class Electric Propulsion Systems Designs for Mars Cargo Transport; Gilland et al 2011 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20120001636.pdf

TAL

An Overview of the TsNIIMASH/TsE Efforts under the VHITAL Program, Tverdokhlebov et al, TsNIIMASH, 2005 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2005index/141.pdf

Thrust Performance in a 5 kW Class Anode Layer Type Hall Thruster, Yamamoto et al Kyushu University 2015 https://www.jstage.jst.go.jp/article/tastj/14/ists30/14_Pb_183/_pdf

Investigation of a Side by Side Hall Thruster System, Miyasaka et al 2013 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2013index/1vrun4h9.pdf

Modeling of a High Power Thruster with Anode Layer, Keidar et al University of Michigan 2004 https://deepblue.lib.umich.edu/bitstream/handle/2027.42/69797/PHPAEN-11-4-1715-1.pdf;sequence=2

Design and Performance Evaluation of Thruster with Anode Layer UT-58 For High-Power Application, Schonherr et al, University of Tokyo 2013 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2013index/k97rps25.pdf

Very High Isp Thruster with Anode Layer (VHITAL): An Overview, Marrese-Reading et al, JPL 2004 https://trs.jpl.nasa.gov/handle/2014/40499

The Development of a Bismuth Feed for the Very High Isp Thruster with Anode Layer VHITAL program, Marrese-Reading et al, JPL 2004 https://trs.jpl.nasa.gov/handle/2014/37752

Hybrid

Characteristic Relationship Between Dimensions and Parameters of a Hybrid Plasma Thruster, Potapenko et al 2011 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2011index/IEPC-2011-042.pdf

Measurement of Cross-Field Electron Current in a Hall Thruster due to Rotating Spoke Instabilities, McDonald et al 2011 https://core.ac.uk/download/pdf/3148750.pdf

Metallic Wall Hall Thruster patent, Goebel et al 2016 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20160012003.pdf

History of the Hall Thrusters Development in the USSR, Kim et al 2007 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2007index/IEPC-2007-142.pdf

Propellant Considerations:

High Power Hall Thrusters; Jankovsky et al, 1999

Energetics of Propellant Options for High-Power Hall Thrusters, Kieckhafer and King, Michigan Technological University, 2005 http://aerospace.mtu.edu/__reports/Conference_Proceedings/2005_Kieckhafer_1.pdf

A Performance Comparison Of Xenon and Krypton Propellant on an SPT-100 Hall Thruster, Nakles et al 2011 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2011index/IEPC-2011-003.pdf

Evaluation of Magnesium as Hall Thruster Propellant; Hopkins, Michigan Tech, 2015 https://pdfs.semanticscholar.org/0520/494153e1d19a46eaa0a63c9bc5bd466c06eb.pdf

Modeling of an Iodine Hall Thruster Plume in the Iodine Satellite (ISAT), Choi 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170001565.pdf

Electrospray Thrusters

Electrospray Thruster lecture notes, MIT Open Courseware 2015 https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-522-space-propulsion-spring-2015/lecture-notes/MIT16_522S15_Lecture20.pdf

Application of Ion Electrospray Propulsion to Lunar and Interplanetary Missions, Whitlock and Lozano 2014 http://ssl.mit.edu/files/website/theses/SM-2014-WhitlockCaleb.pdf

Preliminary Sizing of an Electrospray Thruster, Rostello 2017 http://tesi.cab.unipd.it/56748/1/Rostello_Marco_1109399.pdf

Accion Systems TILE Data sheet https://beyondnerva.files.wordpress.com/2018/10/bd6da-tileproductfamilycombineddatasheet.pdf

Categories
Development and Testing Low Enriched Uranium Nuclear Thermal Systems Test Stands

NTR Hot Fire Testing 2: Modern Designs, New Plans for the LEU NTP

Hello, and welcome back to Beyond NERVA for the second part of our two-part series on ground testing NTRs. In part one, we examined the testing done at the National Defense Research Site in Nevada as part of Project Rover, as well as a little of the zero-power testing done at the Los Alamos Scientific Laboratory to support the construction, assembly, and zero-power reactivity characterization of these reactors. We saw that the environmental impact on the population (even those living closest to the tests) rarely exceeded the dose equivalent of a full-body, high-contrast CT scan. However, even this low level of radioisotope release is unacceptable in today's regulatory environment, so new avenues of testing must be explored.

NERVAEngineTest, AEC
NRX (?) Hot-fire test, image courtesy DOE

We will look at the proposals over the last 25 years for new ways of testing nuclear thermal rockets in full flow, fission-powered testing, as well as looking at cost estimates (which, as always, should be taken with a grain of salt) and the challenges associated with each concept.

Finally, we’re going to look at NASA’s current plans for test facilities, facility costs, construction schedules, and testing schedules for the LEU NTP program. This information is based on the preliminary estimates released by NASA, and as such there’s still a lot that’s up in the air about these concepts and cost estimates, but we’ll look at what’s available.

Full exhaust capture at NASA’s A3 test stand, Stennis Space Center. Image courtesy NASA

Pre-Hot Fire Testing: Thermal Testing, Neutronic Analysis, and Preparation for Prototypic Fuel Testing

Alumina sleeve during test, Bradley
CFEET test, NASA MSFC

We’ve already taken a look at the test stands that are currently in use for fuel element development, CFEET and NTREES. These test stands allow for electrically heated testing in a hydrogen environment, enabling characterization of the thermal and chemical properties of NTR fuel. They also allow erosion tests, to ensure clad materials can withstand not just the thermal stresses of the test but also the erosive effects of the hot hydrogen moving through them at a high rate.

However, fuel elements will be exposed to a number of other effects during reactor operation, and the behavior of these materials in an irradiated environment still needs to be characterized. Fuel element irradiation is done using existing reactors: either in a beamline for initial out-of-core testing, or, for in-core testing, using specially designed capsules that both ensure the fuel elements won't adversely affect the operation of the reactor and keep the fuel element in the proper environment for its operation.

TRIGA reactor core, image courtesy Wikimedia

A number of reactors could be used for these tests, including the TRIGA-type reactors common at many universities around the US. This is one of the advantages of LEU over traditional HEU: there are fewer restrictions on LEU fuels, so many of these early tests could be carried out by universities and contractors that have these types of reactors. This would be less expensive than using DOE facilities, and has the additional advantage of supporting research and education in the field of astronuclear engineering.

Design of an irradiation capsule for use with the ATF, Thody OSU 2018

The initial fuel element prototypes for in-pile testing will be unfueled versions of the fuel element, to verify that the rest of the materials involved won't react adversely to the neutronic and radiation environment they'll be subjected to. This is less of a concern than it used to be, because knowledge of material properties under radiation flux has been continually refined over the decades, but caution is the watchword with nuclear reactors, so this sort of test still needs to be carried out. These experiments will ultimately be characterized in the Safety Analysis Report and Technical Safety Review documents, a major milestone for any fuel element development program; these documents provide the reactor operators with all the necessary information on the behavior of the fuel elements in the research reactor in preparation for fueled in-pile testing.

Concurrently with these plans, extensive neutronic and thermal analysis will be carried out, incorporating any changes necessitated by the unfueled in-pile testing. Finally, a Quality Assurance Plan must be formulated, verified, and approved: each material poses different challenges in producing fuel elements of the required quality, and each facility has slightly different regulations and guidelines to meet its particular needs and research priorities.

After these studies are completed, the unfueled fuel elements are irradiated in-pile and then subjected to post-irradiation examination for chemical, mechanical, and radiological behavior changes. Fracture toughness, tensile strength, thermal diffusivity, and microstructure examination through both scanning and transmission electron microscopy are particular areas of focus at this point in the testing process.

One last thing to consider for in-pile testing is that the containment vessel (often called a can) that holds the fuel elements inside the reactor has to be characterized, especially its impact on the neutron flux and thermal transfer properties, before in-pile testing can be done. This is a conceptually straightforward process, though complex in practice due to the number of variables involved: an MCNP model of the fuel element in its can is built at various points in each potential test reactor, in order to verify the behavior of the test article in that reactor. This can be done early in the process, but may need to be slightly modified to reflect the refinements and experimental results we've been looking at above.

Another consideration for the can is its thermal insulation properties. NTR fuel elements run at the edge of the thermal capabilities of the materials they're made of, since this maximizes thermal transfer and therefore specific impulse. This also means that, for the test to be as accurate as possible, the fuel element itself must be far hotter than the surrounding reactor, generally in the ballpark of 2500 K. The ORNL irradiation plan suggests the use of SIGRATHERM, a soft graphite felt, as this insulating material. Graphite's behavior in reactors is well understood (and, for those in the industry, the felt's density of about 4% that of solid graphite makes Wigner energy release minimal).
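
To see why the insulation matters, a one-dimensional conduction estimate gives the scale of the heat leaking from the test article into the host reactor. The felt conductivity, thickness, and reactor-side temperature below are assumed values for illustration, not figures from the ORNL plan.

```python
# Rough 1-D conduction estimate of the heat leaking from a 2500 K fuel
# element through a graphite-felt sleeve into the much cooler host
# reactor. Conductivity, thickness, and cold-side temperature are
# assumed illustrative values, not figures from the irradiation plan.

def conductive_heat_flux(k_W_mK, t_hot_K, t_cold_K, thickness_m):
    """Steady-state 1-D heat flux (W/m^2) through an insulating layer."""
    return k_W_mK * (t_hot_K - t_cold_K) / thickness_m

# Assumed: k ~ 0.5 W/(m*K) for graphite felt at temperature, 10 mm of
# felt, fuel element at 2500 K, surrounding reactor at 600 K:
q = conductive_heat_flux(0.5, 2500.0, 600.0, 0.010)
print(f"{q / 1000:.0f} kW/m^2")
```

Even through a low-conductivity felt, tens of kilowatts per square meter leak out of the can, heat the host reactor's cooling system has to carry away; without the insulation the leak would be far larger still.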

Pre-Hot Fire Testing: In-Pile Prototypic Fuel Testing

High Flux Isotope Reactor (HFIR), Oak Ridge National Lab, image courtesy Wikimedia

Once this extensive testing regime for fuel elements has been completed, a fueled set of fuel elements would be manufactured and transported to the appropriate test reactor. Besides the TRIGA-type reactors common at many universities, three research reactors with unique capabilities are also available. The first is the High Flux Isotope Reactor at Oak Ridge, one of the longest-operating research reactors, with quite a few ports for irradiation studies at different neutron flux densities. As an incredibly well-characterized reactor, it offers many advantages, especially for analysis at different levels of fuel burnup and radiation flux.

Transient Reactor Test (TREAT) at Idaho NL. Image courtesy DOE

The second is a newly reactivated reactor at Idaho National Laboratory, the Transient Reactor Test facility (TREAT). An air-cooled, graphite-moderated thermal reactor, its most immediately useful instrument for this sort of experiment is the hodoscope, a device that detects the fast neutrons produced by fission in the prototypic fuel element, mapping fission activity in real time. This allows unique analysis of fuel element behavior, burnup behavior, and other characteristics that can otherwise only be estimated after in-pile testing in other reactors.

Advanced Test Reactor, Idaho NL. Image courtesy DOE

The third, also at Idaho National Lab, is the Advanced Test Reactor. A pressurized light water reactor, its core has four lobes and, seen from above, looks almost like a clover. This arrangement allows very fine control of the neutron flux the fuel elements would experience. In addition, six of the locations in the core have independent cooling systems, separated from the primary cooling system. This would allow (with modification, and possibly site permission requirements due to the explosive nature of H2) the use of hydrogen coolant to examine the chemical and thermal transfer behaviors of an NTR fuel element while undergoing fission.

Each of these reactors uses a slightly different form of canister to contain the test article. The can is required to prevent any damage to the fuel element from contaminating the rest of the reactor core, an incredibly expensive, difficult, and lengthy cleanup that can be avoided by chemically isolating the fuel elements from their surrounding environment. Most often, these cans are made of aluminum 6061, 300-series stainless steel, or grade 5 titanium (links in the reference section). According to a recent Oak Ridge document (linked in the references), titanium is the most preferred material, with stainless steel the least attractive: activation to 59Fe and 60Co causes the can to become highly gamma-active, which makes the transportation and disposal of the cans post-irradiation much more costly.
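
The half-lives involved show why those activation products matter so much for stainless steel cans: 59Fe decays away within months, while 60Co keeps the can gamma-hot for decades. A minimal decay sketch (activities normalized to 1.0 at the end of irradiation; real levels depend on flux, steel composition, and time in core):

```python
# Simple exponential-decay comparison of the two activation products
# called out above for stainless steel cans. Half-lives are standard
# nuclear data; initial activities are normalized to 1.0, since the
# absolute levels depend on the irradiation history.
import math

HALF_LIVES_DAYS = {"Fe-59": 44.5, "Co-60": 5.27 * 365.25}

def remaining_fraction(isotope, days):
    """Fraction of a nuclide's initial activity left after 'days'."""
    return math.exp(-math.log(2) * days / HALF_LIVES_DAYS[isotope])

for years in (1, 5, 10):
    d = years * 365.25
    print(f"{years:2d} y: Fe-59 {remaining_fraction('Fe-59', d):.2e}, "
          f"Co-60 {remaining_fraction('Co-60', d):.2f}")
```

After a year of cooldown the 59Fe is essentially gone, but a decade later roughly a quarter of the 60Co activity remains, which is what drives the long-term handling and disposal cost of activated steel.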

Here’s an example of the properties that would be tested by the time that the tests we’ve looked at so far have been completed: