Hello, and welcome back to Beyond NERVA! Today’s blog post is a special one, spurred on by the announcement recently about the Transport and Energy Module, Russia’s new nuclear electric space tug! Because of the extra post, the next post on liquid fueled NTRs will come out on Monday or Tuesday next week.
This is a fascinating system with a lot of promise, but it has also gone through major changes in the last year that seem to have delayed the program. However, once it’s flight certified (expected in the 2030s), Roscosmos plans to mass-produce the spacecraft for a variety of missions, including cislunar transport services and interplanetary mission power and propulsion.
Begun in 2009, the TEM is being developed by Energia on the spacecraft side and the Keldysh Center on the reactor side. This 1 MWe (4 MWt) nuclear reactor will power a number of gridded ion engines for high-isp missions over the spacecraft’s expected 10-year mission life.
First publicly revealed in 2013 at the MAKS aerospace show, a new model last year showed significant changes, with additional reporting coming out in the last week indicating that more changes are on the horizon (there’s a section below on the current TEM status).
This is a rundown of the TEM, and its YaEDU reactor. I also did a longer analysis of the history of the TEM on my Patreon page (patreon.com/beyondnerva), including a year-by-year analysis of the developments and design changes. Consider becoming a Patron for only $1 a month for additional content like early blog access, extra blog posts and visuals, and more!
The TEM is a nuclear electric spacecraft, designed around a gas-cooled high temperature reactor and a cluster of ion engines.
The TEM is designed to be delivered by either Proton or Angara rockets, although with the retirement of the Proton the only available launcher for it currently is the Angara-5.
Secondary Power System
Both versions of the TEM have had secondary folding photovoltaic power arrays. Solar panels are relatively commonly used for what’s known as “hotel load,” or the load used by instrumentation, sensors, and other, non-propulsion systems.
It is unclear if these feed into the common electrical bus of the spacecraft or form a secondary system. Both schemes are possible; if the power is run through a common electrical bus the system is simpler, but a second power distribution bus allows for greater redundancy in the spacecraft.
The ID-500 was designed by the Keldysh Center specifically to be used on the TEM, in conjunction with the YaEDU. Due to the very high power availability of the YaEDU, standard ion engines simply weren’t able to handle either the power input or the needed propellant flow rates, so a new design had to be developed.
The ID-500 is a xenon-propelled ion engine, with each thruster having a maximum power level of about 35 kW, with a grid diameter of 500 mm. The initially tested design in 2014 (see references below) had a tungsten cathode, with an expected lifetime of 5000 hours, although additional improvements through the use of a carbon-carbon cathode were proposed which could increase the lifetime by a factor of 10 (more than 50,000 hours of operation).
Each ID-500 is designed to throttle from 375-750 mN of thrust, varying both propellant flow rate and ionization chamber pressure. The projected exhaust velocity of the engine is 70,000 m/s (7000 s isp), making it an attractive option for the types of orbit-altering, long duration missions that the TEM is expected to undertake.
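As a quick sanity check on these numbers, the jet power and propellant flow implied by the quoted figures can be worked out directly. The ~75% thrust efficiency that falls out is my own inference from the published thrust, exhaust velocity, and input power, not an official figure:

```python
# Back-of-the-envelope check on the published ID-500 figures.
# Thrust, exhaust velocity, and input power are from the article;
# the efficiency computed here is an inference, not an official spec.
G0 = 9.80665            # standard gravity, m/s^2

def jet_power(thrust_n, v_exhaust_ms):
    """Kinetic power carried by the exhaust beam: P = 1/2 * F * v_e."""
    return 0.5 * thrust_n * v_exhaust_ms

thrust = 0.750          # N, top of the quoted 375-750 mN throttle range
v_e = 70_000.0          # m/s, quoted exhaust velocity (~7,000 s isp)
p_in = 35_000.0         # W, quoted maximum electrical input per thruster

mdot = thrust / v_e                 # propellant mass flow, kg/s
p_jet = jet_power(thrust, v_e)      # W carried by the beam
eff = p_jet / p_in                  # inferred overall thrust efficiency

print(f"isp        ~ {v_e / G0:,.0f} s")
print(f"mass flow  ~ {mdot * 1e6:.1f} mg/s")
print(f"jet power  ~ {p_jet / 1e3:.1f} kW")
print(f"efficiency ~ {eff:.0%}")
```

The result, roughly 26 kW of beam power from 35 kW of input at full throttle, is in line with typical gridded ion engine efficiencies, which lends some credibility to the published specs.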
The fact that this system uses a gridded ion thruster, rather than a Hall effect thruster (HET), is interesting, since HETs are the area in which Soviet, and then Russian, engineers and scientists have excelled. The higher isp makes sense for a long-term tug, but for a system that could seemingly be refueled, the isp-to-thrust trade-off is an interesting decision.
The initial design released at MAKS 2013 had a total of 16 ion thrusters on four foldable arms, but the latest version from MAKS-2019 has only five thrusters. The new design is visible below:
The first design is ideal for the tug configuration: the distance between the thrusters and the payload ensures that a minimal amount of the propellant hits the payload, robbing the spacecraft of thrust, contaminating the spacecraft, and possibly building up a skin charge on the payload. The downside is that those arms, and their hinge system, cost mass and complexity.
The new design clusters only five thrusters (fewer than a third of the original count) along the centerline of the spacecraft. This saves mass, but the decrease in the number of thrusters, and the fact that they’re placed in exactly the location where the payload makes most sense to attach, has me curious about what the mission profile for this initial TEM is.
It is unclear if the thrusters are the same design.
This may be the most interesting thing in the TEM: the heat rejection system.
Most of the time, spacecraft use what are commonly called “bladed tubular radiators.” These are tubes which carry coolant after it reaches its maximum temperature. Welded to the tubes are plates, which do two things: they increase the effective surface area of the tube (since metal conducts heat better than most fluids, the heat can be spread well beyond the diameter of the pipe), and they protect the pipe from debris impacts. However, there are limits to how much heat this type of radiator can reject: the pipes, and the joints between pipes, have definite thermal limits, with the joints often being the weakest part in folding radiators.
The TEM has the option of using a panel-type radiator; in fact, there are many renderings of the spacecraft using this type of radiator, such as this one:
However, many more renderings present a far more exciting possibility: a liquid droplet radiator, called a “drip refrigerator” in Russian. This design uses a spray of droplets in place of the panels of the radiator. This increases the surface area greatly, and therefore allows far more heat to be rejected. In addition it can reduce the mass of the system significantly, both due to the increased surface area and also the potentially higher temperature, assuming the system can recapture the majority of its coolant.
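To see why radiator temperature and area matter so much, here’s a minimal Stefan-Boltzmann sizing sketch. The ~3 MW waste heat figure follows from the 4 MWt / 1 MWe numbers elsewhere in this post; the emissivity and temperatures are illustrative assumptions on my part, not published TEM values:

```python
# Rough radiator sizing: area needed to reject ~3 MW of waste heat
# (4 MWt reactor minus 1 MWe of electricity) to deep space.
# Emissivity and temperatures are illustrative assumptions.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(q_watts, temp_k, emissivity=0.85):
    """Two-sided radiating area needed to reject q_watts to deep space."""
    return q_watts / (2.0 * emissivity * SIGMA * temp_k ** 4)

waste_heat = 3.0e6  # W
for t in (500, 700, 900):
    print(f"T = {t} K -> ~{radiator_area(waste_heat, t):,.0f} m^2")
```

Because rejected power scales as T^4, running the radiator hotter slashes the required area; the droplet concept attacks the other half of the problem by offering far more radiating area per kilogram than solid panels can.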
This system was also tested on the ground throughout 2018 (https://ria.ru/20181029/1531649544.html?referrer_block=index_main_2), and appears to have passed all the vacuum chamber ground tests needed. Based on the reporting, more in-orbit tests will be needed, but with Drop-2 already on-station it may be possible to conduct these tests reasonably easily.
I have been unable to determine what the working fluid that would be used is, but anything with a sufficiently low vapor pressure to survive the vacuum of space and the right working fluid range can be used, from oils to liquid metals.
Nothing is known of the reaction control system for the TEM. A number of options are available and currently used in Russian systems, but it doesn’t seem that this part of the design has been discussed publicly.
The biggest noticeable change in the rest of the spacecraft is the change in the spine structure. The initial model and renders had a square-cross-section telescoping truss with an open triangular girder profile. The new version has a cylindrical truss with a tetrahedral girder structure that looks almost like chicken wire. I’m certain there’s a trade-off between mass and rigidity in this change, but precisely what it is remains unclear, since we don’t have dimensions or materials for either structure. The change in cross-section also means that, while the new design is likely stronger from all angles, it will be harder to pack into the payload fairing of the launch vehicle.
The TEM seems like it has gone through a major redesign in the last couple years. Because of this, it’s difficult to tell what other changes are going to be occurring with the spacecraft, especially if there’s a significant decrease in electrical power available.
It is safe to assume that the first version of the TEM will be more heavily instrumented than later versions, in order to support flight testing and problem-solving, but this is purely an assumption on my part. The reconfiguration of the spacecraft at MAKS-2019 does seem to indicate, at least for one spacecraft, the loss of the payload capability, but at this point it’s impossible to say.
The YaEDU is the reactor that will be used on the TEM spacecraft. Overall, with its power conversion system, the power plant will weigh about 6800 kg.
The reactor itself is a gas-cooled, fast-neutron-spectrum, oxide-fueled design, specified, oddly enough, by an electrical output requirement of 1 MWe rather than a thermal output requirement (the choice of power conversion system changes the ratio of thermal to electrical power significantly, and as we’ll see, it’s not set in stone yet). This requires a thermal output of at least 4 MWt, although depending on power conversion efficiency it may be higher. Currently, though, the 4 MWt figure seems to be the baseline for the design. It is meant to have a ten-year reactor lifetime.
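Since the requirement is electrical rather than thermal, the reactor’s thermal rating moves with whatever conversion efficiency is assumed. A quick sketch (the 25% line matches the 4 MWt / 1 MWe baseline; the other efficiencies are illustrative):

```python
# How the 1 MWe electrical requirement maps to thermal power for a few
# plausible conversion efficiencies. The 25% case matches the quoted
# 4 MWt -> 1 MWe baseline; the other values are illustrative.
def thermal_power_mwt(electric_mwe, efficiency):
    """Reactor thermal power required for a given electrical output."""
    return electric_mwe / efficiency

for eta in (0.10, 0.15, 0.25, 0.35):
    mwt = thermal_power_mwt(1.0, eta)
    print(f"eta = {eta:.0%}: {mwt:.1f} MWt reactor, {mwt - 1.0:.1f} MW waste heat")
```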
This system has undergone many changes over its 11 year life, and due to the not-completely-clear nature of much of its development and architecture, there’s much about the system that we have conflicting or incomplete information on. Therefore, I’m going to be providing line-by-line references for the design details in these sections, and if you’ve got confirmable technical details on any part of this system, please comment below with your references!
The fuel for the reactor appears to be highly enriched uranium oxide, encased in a monocrystalline molybdenum clad. According to some reporting (https://habr.com/en/post/381701/ ), the total fuel mass is somewhere between 80-150 kg, depending on enrichment level. There have been some mentions of carbonitride fuel, which offers a higher fissile fuel density but is more thermally sensitive (although how much is unclear), but these have been only passing mentions.
The use of monocrystalline structures in nuclear reactors is something that the Russians have been investigating and improving for decades, going all the way back to the Romashka reactor in the 1950s. The reason for this is simple: grain boundaries, the places where different crystalline structures meet within a solid material, act as scattering points for neutrons, similarly to how a cracked pane of glass distorts the light coming through it through internal reflection and refraction.

There are two ways around this: either make sure that there are no grain boundaries (the Russian method), or make it so that the entire structure, or as close to it as possible, is grain boundaries, using so-called nanocrystalline materials (the preferred method of the US and other Western countries). While the monocrystalline option is better in many ways, since it makes an effectively transparent, homogeneous material, it’s difficult to grow large monocrystalline structures, and they can be quite fragile in certain materials and circumstances. This led the US and others to investigate the somewhat easier to execute, but more loss-intensive, nanocrystalline material paradigm.

For astronuclear reactors, particularly ones with a relatively low keff (effective neutron multiplication factor, or how many neutrons the reactor has to work with), the monocrystalline approach makes sense. I’ve been unable to find the keff of this reactor anywhere, however, so it may be quite high in theory.
The TEM uses a mix of helium and xenon as its primary coolant, a common choice for fast-spectrum reactors. Initial reporting indicated an inlet temperature of 1200K, with an outlet temperature of 1500K, although I haven’t been able to confirm this in any more recent sources. Molybdenum, tantalum, tungsten and niobium alloys are used for the primary coolant tubes.
Testing of the coolant loop took place at the MIR research reactor in NIIAR, in the city of Dimitrovgrad. Due to the high reactor temperature, a special test loop was built in 2013 to conduct the tests. Interestingly, other options, including liquid metal coolant, were considered (http://osnetdaily.com/2014/01/russia-advances-development-of-nuclear-powered-spacecraft/ ), but rejected due to lower efficiency and the promise of the initial He-Xe testing.
Power Conversion System
There have been two primary options proposed for the power conversion system of the TEM, and in many ways it seems to bounce back and forth between them: the Brayton cycle gas turbine and a thermionic power conversion system. The first offers far superior power conversion ratios, but is notoriously difficult to make into a working system for a high temperature astronuclear system; the second is a well-understood system that has been used through multiple iterations in flown Soviet astronuclear systems, and was demonstrated on the Buk, Topol, and Yenesiy reactors (the first two types flew, the third is the only astronuclear reactor to be flight-certified by both Russia and the US).
In 2013, shortly after the design outline for the TEM was approved, the MAKS trade show had models of many components of the TEM, including a model of the Brayton system. At the time, the turbine was advertised to be a 250 kW system, meaning that four would have been used by the TEM to support YaEDU. This system was meant to operate at an inlet temperature of 1550K, with a rotational speed of 60,000 rpm and a turbine tip speed of 500 m/s. The design work was being primarily carried out at Keldysh Center.
The Brayton system would include both DC/AC and AC/DC convertors, buffer batteries as part of a power conditioning system, and a secondary coolant system for both the power conversion system bearing lubricant and the batteries.
As early as 2015, though, there were reports (https://habr.com/en/post/381701/ ) that RSC Energia, the spacecraft manufacturer, was considering a simpler power conversion system: a thermionic one. Thermionic power conversion heats a material, which emits electrons (thermions). These electrons cross either a vacuum gap or certain types of exotic materials (called Cs-Rydberg matter) to deposit on another surface, creating a current.
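For a feel of why thermionic converter electrodes use cesium or other low-work-function surfaces, here’s a minimal sketch using the Richardson-Dushman law for emission-limited current density. The temperatures and work functions are textbook-style assumptions, not YaEDU design values:

```python
import math

# Richardson-Dushman emission-limited current density.
# Work functions and temperature below are illustrative, textbook-style
# assumptions, not design values for any particular reactor.
A0 = 1.20173e6      # Richardson constant, A/(m^2 K^2)
K_B = 8.617333e-5   # Boltzmann constant, eV/K

def emission_current(temp_k, work_fn_ev):
    """Emission-limited current density in A/m^2 for a hot surface."""
    return A0 * temp_k ** 2 * math.exp(-work_fn_ev / (K_B * temp_k))

t_emitter = 2000.0  # K, illustrative emitter temperature
print(f"bare tungsten (4.5 eV):    {emission_current(t_emitter, 4.5):.1f} A/m^2")
print(f"cesiated surface (1.8 eV): {emission_current(t_emitter, 1.8):.2e} A/m^2")
```

Dropping the effective work function from ~4.5 eV (bare tungsten) to ~1.8 eV (a cesiated surface) raises the emission-limited current density by many orders of magnitude at the same emitter temperature, which is what makes a practical converter possible.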
This would reduce the power conversion efficiency, and therefore the overall electric power available, but it is a technology that the Russians have a long history with. These reactors were designed by the Arsenal Design Bureau, which apparently had designs for a large (300-500 kW) thermionic system. If you’d like to learn more about the history of thermionic reactors in the USSR and Russia, check out my earlier posts on Soviet astronuclear history.
This was potentially confirmed just a few days ago by the website Atomic Energy (http://www.atomic-energy.ru/news/2020/01/28/100970 ) by the first deputy head of Roscosmos, Yuri Urlichich. If so, this is not only a major change, but a recent one. Assuming the reactor itself remains in the same configuration, this would be a departure from the historical precedent of Soviet designs, which used in-core thermionics (due to their radiation hardness) rather than out-of-core designs, which were investigated by the US for the SNAP-8 program (something we’ll cover in the future).
So, for now we wait and see what the system will be. If it is indeed the thermionic system, then system efficiency will drop significantly (from somewhere around 30-40% to about 10-15%), meaning that far less electrical power will be available for the TEM.
Hydrogen is useful for shielding most types of radiation, but the inclusion of boron-bearing materials is what stops neutron radiation most effectively. This is important for minimizing damage from neutron irradiation through both atomic displacement and neutron capture, and boron does a very good job on both counts.
Current TEM Status
Two Russian media articles came out within the past week about the TEM, which spurred me to write this article.
RIA, an official state media outlet, reported a couple days ago that the first flight of a test unit is scheduled for 2030. In addition:
Roscosmos announced the completion of the first project to create a unique space “tug” – a transport and energy module (TEM) – based on a megawatt-class nuclear power propulsion system (YaEDU), designed to transport goods in deep space, including the creation of long-term bases on the planets. A technical complex for the preparation of satellites with a nuclear tug is planned to be built at Vostochny Cosmodrome and put into operation in 2030. https://ria.ru/20200128/1563959168.html
A second report (http://www.atomic-energy.ru/news/2020/01/28/100970) said that the reactor will now use a thermionic power conversion system, which is consistent with the reports that Arsenal is now involved with the program. This is a major design change from the Brayton cycle option, but not a surprising one: in the US, both Rankine and Brayton cycles have often been proposed for space reactors, only to be replaced by thermoelectric power conversion systems. While the Russians have extensive thermoelectric experience, their experience with the more efficient thermionic systems is also quite extensive.
“Creation of theoretical and experimental backlogs to ensure the development of highly efficient rocket propulsion and power plants for promising rocket technology products, substantiation of their main directions (concepts) of innovative development, the formation of basic requirements, areas of rational use, design and rational level of parameters, with development of software and methodological support and guidance documents on the design and solution of problematic issues of creating a new generation of propulsion and power plants.”
Work continues on the Vostochny Cosmodrome facilities, and the reporting still concludes that they will be completed by 2030, when the first mass-production TEMs are planned to be deployed.
According to Yuri Urlichich, deputy head of Roscosmos, the prototype for the power plant would be completed by 2025, and life testing on the reactor would be completed by 2030. This is the second major delay in the program, and may indicate that there’s a massive redesign of the reactor. If the system has been converted to thermionic power, it would explain both the delay and the redesign of the spacecraft, but it’s not clear if this is the reason.
For now, we just have to wait and see. It still appears that the TEM is a major goal of both Roscosmos and Rosatom, but it is also becoming apparent that there have been challenges with the program.
Conclusions and Author Commentary
It deserves reiterating: for all intents and purposes, I’m some random person on the Internet, but my research record, as well as my care in reporting on developments with extensive documentation, deserves some attention. So I’m gonna put my opinion on this spacecraft out there.
This is a fascinating possibility. As I’ve commented on Twitter, the capabilities of this spacecraft are invaluable. Decommissioning satellites is… complicated. The so-called “graveyard orbits,” or those above geosynchronous where you park satellites to die, are growing crowded. Satellites break early in valuable orbits, and the operators, and the operating nations, are on the hook for dealing with that – except they can’t.
Additionally, while many low-cost launchers are available for low and medium Earth orbit launches, geostationary orbit is a whole different thing. The fact that India maintains a “Polar Satellite Launch Vehicle” (PSLV) and a “Geosynchronous Satellite Launch Vehicle” (GSLV) as two distinct launcher families for two very different classes of mission drives this home within a single national space launch architecture.
The ability to contract with whatever operator runs TEM missions (I’m guessing Roscosmos, but I may be wrong), specify a post-booster-cutoff orbit and a new target orbit, and have what is effectively an external, orbital-class stage come and move the satellite into its final orbit is… unprecedented. The idea of an inter-orbital tug is one that’s been proposed since the 1960s, before electric propulsion was practical. If this works the way the design specs suggest, it literally rewrites the way mission planning can be done for any satellite operator in cislunar space who’s willing to take advantage of it (most obviously, military and intelligence customers outside Russia won’t be).
The other thing to consider in cislunar space is decommissioning satellites: dragging things from GEO into a low enough orbit that they’ll burn up is costly in mass, and assumes that the propulsion and the guidance, navigation, and control systems survive to the end of the satellite’s mission. As a satellite operator, or as a host nation with all the obligations the OST imposes, being able to drag defunct satellites out of orbit is incredibly valuable. The TEM can deliver one satellite and drag another into a disposal orbit on the way back. To paraphrase a wonderful character from Sir Terry Pratchett (Harry King): “They pay me to take it away, and they pay me to buy it after.” In this case, it’s the opposite: they pay me to take it out, and they pay me to take it back. Especially in mitigating the graveyard orbit problem, this is a potentially golden financial opportunity for the TEM operator: every mm/s of mission dV can potentially be operationally profitable. This is potentially the only system I’ve ever seen that can actually say that.
More than that, depending on payload restrictions for TEM cargoes, interplanetary missions can gain significant delta-v from using this spacecraft. Should mass production actually take place, it may even be possible to purchase the end-of-life (or more) dV of a TEM during decommissioning (something I’ve never seen discussed) to boost an interplanetary mission without paying the launch mass penalty of reaching Earth escape velocity. The spacecraft was proposed for crewed Mars mission propulsion for the first half of its existence, so it has the capability; but just as SpaceX Starship interplanetary missions require SpaceX to lose a Starship, the same applies here, and it has to be worth the while of the (in this case interplanetary) launch provider to lose the spacecraft.
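The reason a 7,000 s isp tug changes the economics is visible straight from the Tsiolkovsky rocket equation; the delta-v values below are illustrative mission sizes, not published TEM figures:

```python
import math

# Propellant fraction for a given delta-v via the rocket equation.
# The isp is the quoted ID-500 figure; the delta-v values are
# illustrative mission sizes, not published TEM numbers.
G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_ms, isp_s):
    """Fraction of initial mass expended as propellant for a given delta-v."""
    return 1.0 - math.exp(-delta_v_ms / (isp_s * G0))

for dv_kms in (4.0, 8.0, 12.0):
    frac = propellant_fraction(dv_kms * 1000.0, 7000.0)
    print(f"dV = {dv_kms:4.1f} km/s -> propellant fraction ~ {frac:.1%}")
```

Even a 12 km/s mission burns only about 16% of the tug’s initial mass as propellant, which is what makes a reusable, refuelable orbital tug plausible in the first place.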
This is an exciting spacecraft, and one that I want to know more about. If you’re familiar with technical details about either the spacecraft or the reactor that I haven’t covered, please either comment or contact me via email at email@example.com
We’ll continue with our coverage of fluid fueled NTRs in the next post. These systems offer many advantages over both traditional, solid core NTRs and electrically propelled spacecraft such as the TEM, and making the details more available is something I’ve greatly enjoyed. We’ll finish up liquid fueled NTRs, followed by vapor fuels, then closed and open cycle gas core NTRs, probably by the end of the summer.
If you’re able to support my efforts to continue to make these sorts of posts possible, consider becoming a Patron at patreon.com/beyondnerva. My supporters help me cover systems like this, and also make sure that this sort of research isn’t lost, forgotten, or unavailable to people who come into the field after programs have ended.
Hello, and welcome back to Beyond NERVA! Before we begin, I would like to announce that our Patreon page, at https://www.patreon.com/beyondnerva, is live! This blog consumes a considerable amount of my time, and being able to pay my bills is of critical importance to me. If you are able to support me, please consider doing so. The reward tiers are still very much up for discussion with my Patrons due to the early stage of this part of the Beyond NERVA ecosystem, but I can only promise that I will do everything I can to make it worth your support! Every dollar counts, both in terms of the financial and motivational support!
Today, we continue our look at the collaboration between the US and the USSR/Russia involving the Enisy reactor: Topaz International. Today, we’ll focus on the transfer from the USSR (which became Russia during this process) to the US, which was far more drama-ridden than I ever realized, as well as the management and bureaucratic challenges and amusements that occurred during the testing. Our next post will look at the testing program that occurred in the US, and the changes to the design once the US got involved. The final post will overview the plans for missions involving the reactors, and the aftermath of the Topaz International Program, as well as the recent history of the Enisy reactor.
For clarification: In this blog post (and the next one), the reactor will mostly be referred to as Topaz-II, however it’s the same as the Enisy (Yenisey is another common spelling) reactor discussed in the last post. Some modifications were made by the Americans over the course of the program, which will be covered in the next post, but the basic reactor architecture is the same.
When we left off, we had looked at the testing history within the USSR. The entry of the US into the list of customers for the Enisy reactor has some conflicting information: according to one document (Topaz-II Design History, Voss, linked in the references), the USSR approached a private (unnamed) US company in 1980, but the company did not purchase the reactor, instead forwarding the offer up the chain in the US, but this account has very few details other than that; according to another paper (US-Russian Cooperation… TIP, Dabrowski 2013, also linked), the exchange built out of frustration within the Department of Defense over the development of the SP-100 reactor for the Strategic Defense Initiative. We’ll look at the second, more fleshed out narrative of the start of the Topaz International Program, as the beginning of the official exchange of technology between the USSR (and soon after, Russia) and the US.
The Topaz International Program (TIP) was the final name for a number of programs that ended up coming under the same umbrella: the Thermionic System Evaluation Test (TSET) program, the Nuclear Electric Propulsion Space Test Program (NEPSTP), and some additional materials testing as part of the Thermionic Fuel Element Verification Program (TFEVP). We’ll look at the beginnings of the overall collaboration in this post, with the details of TSET, NEPSTP, TFEVP, the potential lunar base applications, and the aftermath of the Topaz International Program, in the next post.
Let’s start, though, with the official beginnings of the TIP, and the challenges involved in bringing the test articles, reactors, and test stands to the US during one of the most politically complex times in modern history. One thing to note here: this was most decidedly not the US simply buying a set of test beds, reactor prototypes, and flight units (all unfueled); this was a true international technical exchange. The American and Soviet (later Russian) organizations involved were, at all levels, true collaborators in this program. The Russian head of the program, Academician Nikolay Nikolayvich Ponomarev-Stepnoy, remained highly appreciative of the effort put into the program by his American counterparts as late as this decade, when he was still working to launch the reactor that resulted from the TIP, because it’s not only an engineering masterpiece, but could still perform a very useful role in space exploration even today.
The Beginnings of the Topaz International Program
While the US had invested in the development of thermionic power conversion systems in the 1960s, the funding cuts in the 1970s that affected so many astronuclear programs also bit into the thermionic power conversion programs, leading to their cancellation or diminution to the point of being insignificant. There were several programs run investigating this technology, but we won’t address them in this post, which is already going to run longer than typical even for this blog! An excellent resource for these programs, though, is Thermionics Quo Vadis by the Defense Threat Reduction Agency, available in PDF here: https://www.nap.edu/catalog/10254/thermionics-quo-vadis-an-assessment-of-the-dtras-advanced-thermionics (paywall warning).
Our story begins in detail in 1988. The US was at the time heavily invested in the Strategic Defense Initiative (SDI), whose main in-space nuclear power supply was to be the SP-100 reactor system (another reactor that we’ll be covering in a Forgotten Reactors post or two). However, certain key players in the decision-making process, including Richard Verga of the Strategic Defense Initiative Organization (SDIO), the organizational lynchpin of the SDI, were growing concerned: the SP-100 was growing in both cost and development time, leading Dr. Verga to look elsewhere to either meet the specific power needs of SDI, or to find a fission power source that could serve as a test-bed for the SDI’s technologies.
Investigations into the technological development of all other nations’ astronuclear capabilities led Dr. Verga to realize that the most advanced designs were those of the USSR, who had just launched the two TOPOL-powered Plasma-A satellites. This led him to invite a team of Soviet space nuclear power program personnel to the Eighth Albuquerque Space Nuclear Power Symposium (the predecessor to today’s Nuclear and Emerging Technologies for Space, or NETS, conference, which just wrapped up recently at the time of this writing) in January of 1991. The invitation was accepted, and they brought a mockup of the TOPAZ. The night after their presentation, Academician Nikolay Nikolayvich Ponomarev-Stepnoy, the Russian head of the Topol program, along with his team of visiting academicians, met with Joe Wetch, the head of Space Power Incorporated (SPI, a company made up mostly of SNAP veterans working to make space fission power plants a reality), and they came to a general understanding: the US should buy this reactor from the USSR, assuming they could get both governments to agree to the sale. The terms of this “sale” would take significant political and bureaucratic wrangling, as we’ll see, and sadly the problems started less than a week later, thanks to their generosity in bringing a mockup of the Topaz reactor with them. While the researchers were warmly welcomed, and they themselves seemed to enjoy their time at the conference, when it came time to leave a significant bureaucratic hurdle was placed in their path.
This mockup, and the headaches surrounding the researchers’ ability to take it back home with them, were a harbinger of things to come. While the mockup was non-functional, the Nuclear Regulatory Commission claimed that it could theoretically be modified to be functional (a claim which I haven’t found any evidence for, but which is theoretically possible), and as such considered it a “nuclear utilization facility” which could not be shipped outside the US. Five months later, with the direct intervention of numerous elected officials, including US Senator Pete Domenici, the mockup was finally returned to Russia. This decision by the NRC led to a different approach to importing further reactors from the USSR and Russia when the time came. Whatever damage the incident caused to the newly-minted (and hopeful) partnership was largely weathered thanks to the interpersonal relationships that had been developed in Albuquerque.
Teams of US researchers (including Susan Voss, who was the major source for the last post) traveled to the USSR, to inspect the facilities used to build the Enisy (Yenisey is another common spelling, the reactor was named after the river in Siberia). These visits started in Moscow, with Drs Wetch and Britt of SPI, when a revelation came to the American astronuclear establishment: there wasn’t one thermionic reactor in the USSR, but two, and the most promising one was available for potential export and sale!
These visits continued, and personal relationships between the team members from both sides of the Iron Curtain grew. Due to headaches and bureaucratic difficulties in getting technical documentation translated effectively in the timeframe that the program required, often it was these interpersonal relationships that allowed the US team to understand the necessary technical details of the reactor and its components. The US team also visited many of the testing and manufacturing locations used in the production and development of the Enisy reactor (if you haven’t read it yet, check out the first blog post on the Enisy for an overview of how closely these were linked), as well as observing testing in Russia of these systems. This is also the time when the term “Topaz-II” was coined by one of the American team members, to differentiate the reactor from the original Topol (known in the west as Topaz, and covered in our first blog post on Soviet astronuclear history) in the minds of the largely uninformed Western academic circles.
The seeds of the first cross-Iron Curtain technical collaboration on astronuclear systems development, planted in Albuquerque, were germinating in Russian soil.
The Business of Intergovernmental Astronuclear Development
During this time, two companies were founded to provide an administrative touchstone for various points in the technological transfer program, largely because of the bureaucratic headaches involved on both the US and USSR sides. (I’ve never found any information showing that the two teams themselves felt there were problems in the technological exchange; the problems all seem to have been political and bureaucratic in nature, and exclusively from outside the framework of what would become known as the Topaz International Program.)
The first was International Scientific Products, which from its founding in 1989 was made specifically to facilitate the purchase of the reactors for the US, and worked closely with the SDIO. This company was the private lubricant that allowed the US government to purchase these reactor systems (for reasons too complex to get into in this blog post). The two main players in ISP were Drs Wetch and Britt, who also appear to have been the main administrative driving force behind the visits. The company gave a legal means to transmit non-classified data from the USSR to the US, and vice versa. After each visit, these three would meet, and Dr. Verga kept his management at SDIO consistently briefed on the progress of the technical exchange and the eventual purchase of the reactors.
The second was the International Nuclear Energy Research and Technology corporation, known as INERTEK. This was a joint US-USSR company, involving the staff of ISP, as well as individuals from all of the Soviet design bureaus, manufacturing centers (except possibly the facility in Tallinn, which I haven’t been able to confirm due to the extreme loss of documentation from that facility following the collapse of the USSR), and research institutions that we saw in the last post. These included the Kurchatov Institute of Atomic Energy (headed by Academician and Director Ponomarev-Stepnoy, the head of the Russian portion of the Topaz International Program), the Scientific Industrial Association “LUCH” (represented by Deputy Director Yuri Nikolayev), the Central Design Bureau for Machine Building (represented by Director Vladimir Nikitin), and the Keldysh Institute of Rocket Research (represented by Director Academician Anatoly Koroteev). INERTEK was the vehicle by which the technology, and more importantly to the bureaucrats the hardware, would be exported from the USSR to the US. Academician Ponomarev-Stepnoy was the director of the company, and Dr Wetch was his deputy. Due to the sensitive nature of the company’s focus, the company required approval from the Ministry of Atomic Energy (Minatom) in Moscow, which was finally achieved in December 1990.
In order to gain this approval, the US had to agree to a number of demands from Minatom. These included requirements that the Topaz-II reactors be returned to Russia after testing, and that the reactors not be used for military purposes. Dr. Verga insisted on additional international cooperation, including staff from the UK and France. This was not only a cost-saving measure, but reinforced the international and transparent nature of the program, and made military use more challenging.
While this was occurring, the Americans insisted that the non-nuclear testing of the reactors be duplicated in the US, to ensure they met American safety and design criteria. This was a major sticking point for Minatom, and delayed the approval of the export for months, but the Americans did not slow in their preparations for building a test facility. Due to the concentration of space nuclear power research resources in New Mexico (Los Alamos and Sandia National Laboratories, the US Air Force Phillips Laboratory, and the University of New Mexico’s New Mexico Engineering Research Institute, or NMERI), as well as the presence of the powerful Republican senator Pete Domenici to smooth political feathers in Washington, DC (all of the labs were within his home state), it was decided to test the reactors in Albuquerque, NM. The USAF purchased an empty building from the NMERI, and hired personnel from UNM to handle the human resources side of things. The selection of UNM emphasized the transparent, exploratory nature of the program, an absolute requirement for Minatom, and the university had considerable organizational flexibility compared to either the USAF or the DOE. According to the contract manager, Tim Stepetic:
“The University was very cooperative and accommodating… UNM allowed me to open checking accounts to provide responsive payments for the support requirements of the INTERTEK and LUCH contracts – I don’t think they’ve ever permitted such checkbook arrangements either before or since…”
These freedoms were necessary for working with the Russian team members, who were dealing with culture shock and very different organizational restrictions than their American counterparts. As has been observed both before and since, the Russian scientists and technicians preferred to save as much of their (by their standards, generous) per diem as possible for after the project, when the money would go further back home; the per diem covered local travel expenses as well. One of the technicians had to return to Russia for his son’s brain tumor operation, and was asked by the surgeon to bring back some Tylenol, a request that was rapidly granted, with some bemusement, by his American colleagues. In addition, personal calls (of a limited nature, due to international calling rates at the time) were allowed so the scientists and technicians could keep in touch with their families and reduce their homesickness.
As should surprise no one, given the highly unusual nature of this financial arrangement and the large amount of money involved (which ended up coming to about $400,000 in 1990s dollars), a routine audit led to the General Accounting Office being called in to investigate the arrangement later. Fortunately, no significant irregularities in the financial dealings of the NMERI were found, and the program continued. Additionally, the reuse of over $500,000 in equipment scrounged from SNL and LANL’s junk yards allowed for incredible cost savings in the program.
With the business side of the testing underway, it was time to begin preparations for testing the reactors in the US, starting with the conversion of an empty building into a non-nuclear test facility. The conversion, headed by Frank Thome on the facilities modification side, with Scott Wold as the TSET training manager, began in April of 1991, only four months after Minatom’s approval of INERTEK. Over the course of the next year, the facility would be prepared for testing, and would be completed just before the delivery of the first shipment of reactors and equipment from Russia.
By this point, the test program had grown to include two programs. The first was the Thermionic Systems Evaluation Test (TSET), which would study mechanical, thermophysical, and chemical properties of the reactors to verify the data collected in Russia. This was to flight-qualify the reactors for American space mission use, and establish the collaboration of the various international participants in the Topaz International Program.
The second program was the Nuclear Electric Propulsion Space Test Program (NEPSTP). Run by the Johns Hopkins Applied Physics Laboratory, and funded by the SDIO (later the Ballistic Missile Defense Organization), it proposed an experimental spacecraft that would use a set of six different electric thrusters, as well as equipment to monitor the environmental effects of both the thrusters and the reactor during operation. Design work for the spacecraft began almost immediately after the TSET program began, and the program was of interest to both the American and Russian parts of the team.
Later, one final program would be added: the Thermionic Fuel Element Verification Program (TFEVP). This program, which had predated TIP, is where many of the UK and French researchers were involved, and focused on increasing the lifetime of the thermionic fuel elements from one year (the best US estimate before the TSET) to at least three, and preferably seven, years. This would be achieved through better knowledge of materials properties, as well as improved manufacturing methods.
Finally, there were smaller programs attached to the big three, looking at materials effects in intense radiation and plasma environments, as well as long-term contact with cesium vapor, chemical reactions within the hardware itself, and the surface electrical properties of various ceramics. These tests, while not the primary focus of the program, would contribute to the understanding of the environment an astronuclear spacecraft would experience, and would significantly affect future spacecraft designs. These tests would occur in the same building as the TSET testing, and the teams involved would frequently collaborate on all projects, leading to a very well-integrated and collegial atmosphere.
Reactor Shipment: A Funny Little Thing Occurred in Russia
While all of this was going on in the Topaz International Program, major changes were happening throughout the USSR: it was falling apart. From the uprisings in Latvia and Lithuania (violently put down by the Soviet military), to the fall of the Berlin Wall, to the ultimate lowering of the hammer and sickle from the Kremlin in December 1991 and its replacement with the tricolor of the Russian Federation, the fall of the Iron Curtain was accelerating. The TIP teams continued to work on their program, knowing that it offered hope for the Topaz-II project as well as a vehicle to form closer technological collaborations with their former adversaries, but the complications would rear their heads in this small group as well.
The American purchase of the Topaz reactors was approved by President George H.W. Bush on 27 March, 1992 during a meeting with his Secretary of State, James Baker, and Secretary of Defense Richard Cheney. This freed the American side of the collaboration to do what needed to be done to make the program happen, as well as begin bringing in Russian specialists to begin test facility preparations.
The first group of 14 Russian scientists and technicians to arrive in the US for the TSET program landed on April 3, 1992, but only got to sleep for a few hours before being woken up by their hosts (who brought their families along) for a long van journey. This was something the Russians greatly appreciated, because April 4 is a special day in one small part of the world: it’s one of only two days of the year that the Trinity Site, the location of the first nuclear explosion in history, is open to the public. According to one of them, Georgiy Kompaniets:
“It was like for a picnic! And at the entrance to the site there were souvenir vendors selling t-shirts with bombs and rocks supposedly at the epicenter of the blast…” (note: no trinitite is allowed to be collected at the Trinity site anymore, and according to some interpretations of federal law is considered low-level radioactive waste from weapons production)
The Russians were a hit at the Trinity site, being the center of attention from those there, and were interviewed for television. They even got to tour the McDonald ranch house, where the Gadget was assembled and the blast was initiated. This made a huge impression on the visiting Russians, and did wonders in cementing the team’s culture.
Another cultural exchange that occurred later (exactly when, I’m not sure) was the chance to ride in a hot air balloon. Albuquerque’s International Balloon Fiesta is the largest hot air ballooning event in the world, and whenever atmospheric conditions are right a half dozen or more balloons can be seen floating over the city. A local ballooning club, having heard about the Russian scientists and technicians (who had become minor local celebrities at this point), offered them a free hot air balloon ride, an offer the Russians universally accepted, since none of them had ever been ballooning before.
According to Boris Steppenov:
“The greatest difficulty, it seemed, was landing. And it was absolutely forbidden to touch down on the reservations belonging to the Native Americans, as this would be seen as an attack on their land and an affront to their ancestors…
[after the flight] there were speeches, there were oaths, there was baptism with champagne, and many other rituals. A memory for an entire life!”
The balloon that Steppenov flew in did indeed land on the Sandia Pueblo Reservation, but before touchdown the tribal police were notified, and they showed up to the landing site, issued a ticket to the ballooning company, and allowed them to pack up and leave.
These events, as well as other uniquely New Mexican experiences, cemented the TIP team into a group of lifelong friends, and would reinforce the willingness of everyone to work together as much as possible to make TIP as much of a success as it could be.
In late April, 1992, a team of US military personnel (led by Army Major Fred Tarantino of SDIO, with AF Major Dan Mulder in charge of logistics), including a USAF Airlift Control Element Team, landed in St. Petersburg on a C-141 and a C-130, carrying the equipment needed to properly secure the test equipment and reactors that would be flown to the US. Overflight permissions were secured, and special packing cases, especially for the very delicate tungsten TISA heaters, were prepared. These preparations were complicated by the lack of effective packing materials for the heaters, until Dr. Britt of both ISP and INERTEK had the idea of using foam bedding pads from a furniture store. Due to the large size and weight of the equipment, though, the C-141 and C-130 were not sufficient for airlifting it, so the teams had to wait on the larger C-5 Galaxy transports intended for this task, which were en route from the US at the time.
Sadly, when the time came to present the export licenses to the customs officer, he refused to honor them, because they were Soviet documents and the Soviet Union no longer existed. This led Academician Ponomarev-Stepnoy and INERTEK’s director, Benjamin Usov, to travel to Moscow on April 27 to meet with the Deputy Chairman of the Government, Alexander Shokhin, to get new export licenses. After consulting with the Minister of Foreign Economic Relations, Sergei Glazyev, a one-time, urgent export license was issued for the shipment to the US. This was then sent via fast courier to St. Petersburg on May 1.
The C-5s, though, weren’t in Russia yet. Once they landed, a complex paperwork ballet needed to be carried out to get the reactors and test equipment to America. First, the reactors were purchased by INERTEK from the Russian bureaus responsible for the various components. Then, INERTEK would sell the reactors and equipment to Dr. Britt of ISP once the equipment was loaded onto the C-5. Dr. Britt then immediately resold the equipment to the US government. This avoided the import issues that would have occurred on the US side if the equipment had been imported by ISP, a private company, or INERTEK, a Russian-led international consortium.
One of them landed in St. Petersburg on May 6, was loaded with the two Topaz-II reactors (V-71 and Ya-21U) and as much equipment as could fit in the aircraft, and left the same day, arriving in Albuquerque on May 7. The other developed maintenance problems, and was forced to wait in England for five days, finally arriving on May 8. The rest of the equipment was loaded up (including the Baikal vacuum chamber), and the plane left later that day. Sadly, it ran into difficulties again upon reaching England, and was forced to wait two more days while it was repaired, arriving in Albuquerque on May 12.
Preparations for Testing: Two Worlds Coming Together
Once the equipment was in the US, detailed examination of the payload was required due to the beryllium used in the reflectors and control drums of the reactor. Berylliosis, a disease caused by breathing in beryllium dust, is a serious health issue, and one that the DOE takes incredibly seriously (they’ll evacuate an entire building at the slightest possibility that beryllium dust could be present, at a cost of millions of dollars on occasion). Detailed checks were carried out both before the equipment was removed from the aircraft and during the unpackaging of the reactors. However, no beryllium dust was detected, and the program continued with minimal disruption.
Then it came time to unbox the equipment, but another problem arose: this required the approval of the director of the Central Design Bureau of Heavy Machine Building, Vladimir Nikitin, who was in Moscow. Rather than calling him for approval on every such decision, Dr Britt called and got approval for Valery Sinkevych, the Albuquerque representative for INERTEK, to have discretionary control over these sorts of decisions. The approval was given, greatly smoothing the process of both setup and testing during TIP.
Sinkevych, Scott Wold, and Glen Schmidt worked closely together in the management of the project. All three were on hand to answer questions, smooth out difficulties, and handle other challenges in the testing process, to the point that the Russians began calling Schmidt “The Walking Stick.” His response was classic: “That’s my style: Management by Walking Around.”
Every day, Schmidt would hold a lab-wide meeting, ensuring everyone was present, before walking everyone through the procedures that needed to be completed that day, and making sure that everyone had the resources they needed to complete their tasks. He also made sure that he was aware of any upcoming issues, and worked to resolve them (mostly through Wetch and Britt) before they became a problem for the facility preparations. This was a revelation to the Russian team, who despite working on the program (in Russia) for years, often didn’t know anything beyond the components they individually worked on. This synthesis of knowledge would continue throughout the program, leading to a far better-integrated team.
Initial estimates for the time it would take to prepare the facility and equipment for testing of the reactors were nine months. Thanks to both the well-integrated team and the more relaxed management structure of the American effort, the work was completed in only 6½ months. According to Sinkevych:
“The trust that was formed between the Russian and American side allowed us in an unusually short time to complete the assembly of the complex and demonstrate its capabilities.”
This was so incredible to Schmidt that he went to Wetch and Britt to ask for a bonus for the Russians due to their exceptional work. This was approved, and the bonus was paid in proportion to technical assignment, duration, and quality of workmanship. This was yet another culture shock for the Russian team, who had never received a bonus before. The response was twofold: great appreciation, and also “if we continue to save time, do we get another bonus?” The answer was a qualified “perhaps,” and indeed one more, smaller bonus was paid for later time savings.
Mid-Testing Drama, and the Second Shipment
Both in the US and Russia, there were many questions about whether this program was even possible. The reason for its success is unequivocally that it was a true partnership between the American and Russian parts of TIP, the first Russian-US government-to-government cooperative program after the fall of the USSR. Unlike the later Nunn-Lugar agreement, TIP was always intended to be a true technological exchange, not an assistance program. This is one of the main reasons why the participants of TIP still look fondly and respectfully on the project, while most Russian (and other former Soviet) participants in N-L consider it demeaning, condescending, and not something to ever be repeated. More than this, though, the Russian design philosophy that allowed full-system, non-nuclear testing of the Topaz-II permanently changed American astronuclear design philosophy, and left its mark on every subsequent astronuclear design.
However, not all organizations in the US saw it this way. Drs. Thome and Mulder provided excellent bureaucratic cover for the testing program, preventing the majority of the politics of government work from trickling down to the management of the test itself. However, as Scott Wold, the TSET training manager, pointed out, they would still get letters from outside organizations stating:
“[after careful consideration] they had concluded that an experiment we proposed to do wouldn’t be possible and that we should just stop all work on the project as it was obviously a waste of time. Our typical response was to provide them with the results of the experiment we had just wrapped up.”
As mentioned, this was not uncommon, but it was only a minor annoyance. In fact, if anything it cemented the practicality of collaborations of this nature, and over time reduced the friction the program faced by proving its capabilities. Other headaches would arise, but overall they were relatively minor.
Sadly, one of the programs, NEPSTP, was canceled out from under the team near the completion of the spacecraft. The new Clinton administration was not nearly as open to the use of nuclear power as the Bush administration had been (to put it mildly), and as such the program ended in 1993.
One type of drama that was avoided involved the second shipment of four more Topaz-II reactors from Russia to the US: the Eh-40, Eh-41, Eh-43, and Eh-44. The use of these designations directly contradicts the earlier-specified prefixes for Soviet determinations of capabilities (the systems were built, then assessed for suitability for mechanical, thermal, and nuclear capabilities after construction; for more on this, see our first Enisy post here). These units were for:
- Eh-40: a thermal-hydraulic mockup, with a functioning NaK heat rejection system, for “cold-test” testing of thermal covers during integration, launch, and orbital injection;
- Eh-41: a structural mockup for mechanical testing, and for demonstration of the mechanical integrity of the anticriticality device (more on that in the next post), the modified thermal cover, and American launch vehicle integration;
- Eh-43 and Eh-44: potential flight systems, which would undergo modal testing, charging of the NaK coolant system, fuel loading and criticality testing, mechanical vibration, shock, and acoustic tests, 1000-hour thermal vacuum steady-state stability and NaK system integrity tests, and more before launch.
How was drama avoided in this case? The previous shipment was handled by the US Air Force, which has many regulations covering the transport of any cargo, much less flight-capable nuclear reactors containing several toxic substances, and this led to delays in approval the first time this shipment method was used. The second time, in 1994, INERTEK and ISP contracted a private cargo company, the Russian carrier Volga Dnepr Airlines, to transport these four reactors, and Volga Dnepr used one of their An-124s to fly them from St. Petersburg to Albuquerque.
For me personally, this was a very special event, because I was there. My dad got me out of school (I wasn’t even a teenager yet), drove me out to the landing strip fence at Kirtland AFB, and we watched with about 40 other people as this incredible aircraft landed. He told me about the shipment, and why they were bringing it in, and the seed of my astronuclear obsession was planted.
No beryllium dust was found in this shipment, and the reactors were prepared for testing. Additional thermophysical testing, as well as design work for modifications needed to get the reactors flight-qualified and able to be integrated with the American launchers, were conducted on these reactors. These tests and changes will be the subject of the next blog post, as well as the missions that were proposed for the reactors.
These tests would continue until 1995, when testing in Albuquerque came to an end. All of the reactors were packed up and returned to Russia, per the agreement between INERTEK and Minatom. The Enisy would continue to be developed in Russia until at least 2007.
More Coming Soon!
The story of the Topaz International Program is far from over. The testing in the US, as well as the programs the US/Russian team had planned, has barely been touched on beyond cursory mentions. These programs, as well as the end of the Topaz International Program and the possible future of the Enisy reactor, are the focus of our next blog post, the final one in this series.
This program provided a foundation, as well as a harbinger of challenges to come, in international astronuclear collaboration. As such, I feel that it is a very valuable subject to spend a significant amount of time on.
I hope to have the next post out in about a week and a half to two weeks, but the amount of research necessary for this series has definitely surprised me. The few documents available that fill in the gaps are, sadly, behind paywalls that I can’t afford to breach at my current funding availability.
Hello, and welcome back to Beyond NERVA! Today, we’re going to return to our discussion of fission power plants, and look at a program that was unique in the history of astronuclear engineering: a Soviet-designed and -built reactor design that was purchased and mostly flight-qualified by the US for an American lunar base. This was the Enisy, known in the West as Topaz-II, and the Topaz International program.
This will be a series of three posts on the system: this post focuses on the history of the reactor in the Soviet Union, including the testing history – which as we’ll see, heavily influenced the final design of the reactor. The next will look at the Topaz International program, which began as early as 1980, while the Soviet Union still appeared strong. Finally, we’ll look at two American uses for the reactor: as a test-bed reactor system for a nuclear electric test satellite, and as a power supply for a crewed lunar base. This fascinating system, and the programs associated with it, definitely deserve a deep dive – so let’s jump right in!
We’ve looked at the history of Soviet astronuclear engineering, and their extensive mission history. The last two of these reactors were the Topaz (Topol) reactors, on the Plasma-A satellites. These reactors used a very interesting type of power conversion system: an in-core thermionic system. Thermionic power conversion takes advantage of the fact that certain materials, when heated, eject electrons, gaining a positive static charge as whatever the electrons impact gains a negative charge. Because the materials required for a thermionic system can be made incredibly neutronically robust, they can be placed inside the core of the reactor itself! This is a concept that I’ve loved since I first heard of it, and it remains as cool today as it did back then.
The original Topaz reactor used a multi-cell thermionic element concept, where fuel elements were stacked in individual thermionic conversion elements, and several of these were placed end-to-end to form the length of the core. While this is a perfectly acceptable way to set up one of these systems, there are also inefficiencies and complexities associated with so many individual fuel elements. An alternative is to make a single, full-length thermionic cell, and use either one or several fuel rods inside the thermionic element. This is the – wait for it – single cell thermionic element design, and it is the one that was chosen for the Enisy/Topaz-II reactor (which we’ll call Enisy in this post, since it focuses on the Soviet history of the reactor). While development started in 1967, and the design was tested thoroughly in the 70s, it wasn’t flight-qualified until the 80s… and then the Soviet Union collapsed, and the program died.
After the fall of the USSR, there was a concerted effort by the US to keep the specialist engineers and scientists of the former Soviet republics employed (to ensure they didn’t find work for international bad actors such as North Korea), and to see what technology had been developed behind the Iron Curtain that could be purchased for use by the US. This is where the RD-180 rocket engine, still in use by the United Launch Alliance Atlas rockets, came from. Another part of this program, though, focused on the extensive experience that the Soviets had in astronuclear missions, and in particular the most advanced – but as yet unflown – design of the renowned NPO Luch design bureau, attached to the Ministry of Medium Machine Building: the Enisy reactor (which had the US designation of Topaz-II due to early confusion about the design by American observers).
The Enisy, in its final iteration, was designed to have a thermal output of 115 kWt (at the beginning of life), with a mission requirement of at least 6 kWe at the electrical outlet terminals for at least three years. Additional requirements included a ten-year shelf life after construction (without fissile fuel, coolant, or other volatiles loaded), a maximum mass of 1061 kg, and prevention of criticality before achieving orbit (which was complicated from an American point of view; more on that below). The coolant for the reactor remained NaK-78, a common coolant in most reactors we’ve looked at so far. Cesium, needed to maintain the proper partial pressure between the cathode and anode of the fuel elements, was stored in a reservoir at the “bottom” (away from the spacecraft) end of the reactor vessel; it would slowly leak out over time (about 0.5 g/day during operation). This was meant to be the next upgrade in the Soviet astronuclear fleet, and as such was definitely a step above the Topaz-I reactor.
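As a quick sanity check on those figures, the stated power levels imply a system conversion efficiency of a little over 5%, and the quoted cesium leak rate sets the minimum reservoir size for a three-year mission. Here is a back-of-the-envelope sketch using only the numbers quoted above (any margin the real reservoir design carried on top of this is not something I have figures for):

```python
# Back-of-the-envelope checks on the Enisy requirements quoted above.
THERMAL_POWER_KWT = 115.0      # beginning-of-life thermal output
ELECTRIC_POWER_KWE = 6.0       # minimum power at the outlet terminals
CESIUM_LOSS_G_PER_DAY = 0.5    # quoted leak rate during operation
MISSION_YEARS = 3.0            # minimum mission requirement

# System conversion efficiency implied by the requirements
efficiency = ELECTRIC_POWER_KWE / THERMAL_POWER_KWT
print(f"implied conversion efficiency: {efficiency:.1%}")  # about 5.2%

# Minimum cesium the reservoir must hold for the full mission
cesium_needed_g = CESIUM_LOSS_G_PER_DAY * MISSION_YEARS * 365.25
print(f"cesium consumed over {MISSION_YEARS:.0f} years: {cesium_needed_g:.0f} g")  # about 548 g
```

That single-digit efficiency is typical of thermionic conversion, which is why the reactor needs a 115 kWt core to guarantee 6 kWe at the terminals.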
Perhaps the most interesting part of the design is that it could be tested as a complete system without the use of fissile fuel in the reactor. Instead, electrical resistance heaters could be inserted into the thermionic fuel elements to simulate the fission process, allowing far more complete testing of the system in flight configuration before launch (the fact that the reactor used cylindrical fuel elements made this much easier). This design decision heavily influenced US nuclear power plant design and testing procedures, and continues to influence designs today; the induction heating testing of the KRUSTY thermal simulator is a good recent example of the concept, even if it’s been heavily modified for a different reactor geometry.
So what did the Enisy look like? This changed over time, but we will look at the basics of the power plant’s design in its final Soviet iteration in this post, and then examine the changes that the Americans made during the collaboration in the next post. We’ll also look at why the design changed as it did.
First, though, we need to look at how the system worked, since compared to every system that we’ve looked at in depth, the physics behind the power conversion system are quite novel.
Thermionics: How to Keep Your Power Conversion System in the Core
We haven’t looked at power conversion systems much in this blog yet, but this is a good place to discuss the first kind, since it’s so integral to this reactor. If the details of how the power conversion system actually worked don’t interest you, feel free to skip to the next section; but for many people interested in astronuclear design, this power conversion system promises to be perhaps the most efficient and reliable option available for in-space nuclear reactors geared toward electricity production.
In short, thermionic reactions occur when a material is heated and gives off charged particles. The effect has been known since ancient times, even though the physical mechanism remained a mystery until after the discovery of the electron. The name comes from the term “thermions,” or “thermal ions.” One of the first to describe this effect observed it on a hot filament – effectively a hot cathode – in a vacuum: Thomas Edison, who noticed a static charge building up on the glass of his incandescent bulbs while they were turned on. Today, however, the field has expanded to include many emitter materials and geometries, as well as solid-state systems and systems that don’t use a vacuum.
The efficiency of these systems depends on the temperature difference between the cathode and anode, the work function (the minimum thermodynamic work needed to remove an electron from a solid to a vacuum immediately outside the solid surface) of the emitter used, and the Boltzmann constant (which relates to the average kinetic energy of particles in a gas), as well as a number of other factors. In modern systems, however, the structure of a thermionic convertor which isn’t completely solid state is fairly standard: a hot cathode is separated from a cold anode, with cesium vapor in between. For nuclear systems, the cathode (the emitter) is often tungsten or molybdenum, the anode varies depending on the system, and the gap between – called the inter-electrode gap – is system specific.
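The emitter-side physics can be sketched with the Richardson-Dushman law for thermionic emission, which ties together the temperature, work function, and Boltzmann constant mentioned above. This is a minimal illustration, not a model of the Enisy: the 1900 K emitter temperature and 4.2 eV work function below are assumed round numbers for a molybdenum-like emitter, not figures from the program.

```python
import math

# Richardson-Dushman saturation current density: J = A * T^2 * exp(-W / (k*T))
A_R = 1.20173e6        # Richardson constant, A / (m^2 K^2)
K_B = 8.617333e-5      # Boltzmann constant, eV / K

def emission_current_density(temp_k: float, work_function_ev: float) -> float:
    """Thermionic emission current density (A/m^2) from a hot surface."""
    return A_R * temp_k ** 2 * math.exp(-work_function_ev / (K_B * temp_k))

# Assumed illustrative values -- not Enisy design figures.
j_hot = emission_current_density(1900.0, 4.2)   # hot cathode (emitter)
j_cold = emission_current_density(900.0, 4.2)   # cold anode back-emission
print(f"emitter: {j_hot:.3e} A/m^2, collector back-emission: {j_cold:.3e} A/m^2")
```

The steep exponential is why the emitter runs so hot, and why back-emission from the much cooler anode is negligible in comparison.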
The cesium exists in an interesting state of matter. Solid, liquid, gas, and plasma are familiar to pretty much everyone at this point, but other states exist under unusual circumstances; perhaps the best known is a supercritical fluid, which exhibits the properties of both a liquid and a gas (although this is a range of possibilities, with some having more liquid properties and some more gaseous). The one that concerns us today is Rydberg matter, one of the more exotic forms of matter – although it has been observed in many places across the universe. In its simplest form, Rydberg matter can be seen as small clusters of interconnected atoms within a gas (the largest cluster observed in a laboratory is 91 atoms, according to Wikipedia, although there’s evidence for far larger numbers in interstellar gas clouds). Clustering affects the electron clouds of the atoms involved, causing electrons to orbit across multiple nuclei and creating a new lowest-energy state for the entire cluster. Thanks to a number of quantum mechanical properties, these structures are also unusually resistant to degradation under radiation bombardment, which brought them to the attention of the Los Alamos Scientific Laboratory staff in the 1950s, and a short time later Soviet nuclear physicists as well.
This sounds complex, and it is, but the key point is this: because the clumps act as a unit within Rydberg matter, their ability to transmit electricity is enhanced compared to other gases. Cesium in particular seems to be a very good vehicle for creating Rydberg matter, and cesium vapor seems to be the best available material for the gap between the cathode and anode of a thermionic convertor. The density of the cesium vapor is variable and depends on many factors, including the materials properties of the cathode and anode, the temperature of the cathode, and the inter-electrode gap distance. Tuning the amount of cesium in the inter-electrode gap is something that must occur in any thermionic power conversion system; in fact, the original version of the Enisy had the ability to vary the inter-electrode gap pressure (this was later dropped when it was discovered to be superfluous to the efficient function of the reactor).
This type of system comes in two varieties: in-core and out-of-core. The out-of-core variant is very similar to the power conversion systems we saw (briefly) on the SNAP systems: the coolant from the reactor passes around or through the radiation shield of the system and heats the cathode, which then emits electrons across the gap; these are collected by the anode, and the electricity goes through the power conditioning unit and into the electrical system of the spacecraft. Because thermionic conversion is theoretically more efficient than thermoelectric conversion, and in practice is more flexible in temperature range, even keeping this configuration of the power conversion system relative to the rest of the power plant offers some advantages.
The in-core variant, on the other hand, wraps the power conversion system directly around the fissile fuel in the core, with electrical power being conducted out of the core itself and through the shield. The coolant runs across the outside of the thermionic unit, providing the thermal gradient for the system to work, and then exits the reactor. While this increases the volume of the core (admittedly, not by much), it also eliminates the need for more complex plumbing in the primary coolant loop, and it reduces heat loss since the coolant doesn’t have to travel as far. Finally, there’s far less chance of a stray meteor hitting your power conversion system and causing problems – if a thermionic fuel element is damaged by a foreign object, you’re going to have far bigger problems with the system as a whole, since that object damaged your control systems and pressure vessel on the way to your power conversion unit!
The in-core thermionic power conversion system, while originally proposed in the US, was seen as a curiosity on the western side of the Iron Curtain. Some designs were proposed, but none were researched deeply enough to become serious contenders for the significant funding needed to develop a system as complex as an astronuclear fission power plant, and the low conversion efficiency available in practice prevented its application in terrestrial power plants, which to this day continue to use steam turbine generators.
On the other side of the Iron Curtain, however, this was seen as the ideal solution for a power conversion system: everything needed for the system to work could be solid-state, with no moving parts apart from heaters to vaporize the cesium and electromagnetic pumps to move the coolant through the reactor. Greater radiation resistance and more flexible operating temperatures, as well as greater conversion efficiency, all offered more promise to Soviet astronuclear systems designers than the thermoelectric path that the US ended up following. The first Soviet reactor designed for in-space use, the Romashka, used direct thermoelectric conversion mounted on the core, and the challenges involved in thermionics led the Krasnaya Zvezda design bureau (who were responsible for the Romashka, Bouk, and Topol reactors) to also choose thermoelectric convertors for their first flight system: the BES-5 Bouk, which we’ve seen before.
Now that we’ve looked at the physics behind how you can place your power conversion system within the reactor vessel of your power plant (and as far as I’ve been able to determine, if you’re looking to generate electricity beyond what a simple sensor needs, this is the only option without going to something very exotic), let’s look at the reactor itself.
Enisy: The Design of the TOPAZ-II Reactor
The Enisy was a uranium oxide fueled, zirconium hydride moderated, sodium-potassium eutectic cooled reactor, which used a single-cell thermionic fuel element design for in-core power conversion. The multi-cell version, used in the Topol reactor, wrapped each fuel pellet in its own thermionic convertor. This is sometimes called a “flashlight” configuration, since it looks a bit like the batteries in a large flashlight, but it comes at the cost of complexity, mass, and increased inefficiencies. On the other hand, many issues are easier to deal with in this configuration, especially as your fuel reaches higher burnup percentages and swells. The ultimate goal was the single-cell thermionic fuel element, which was realized in the Enisy reactor. While more challenging in terms of materials requirements, the greater simplicity, lower mass, and greater efficiency of the system offered more promise.
The power plant was required to provide 6 kWe of electrical power at the reactor terminals (before the power conditioning unit) at 27 volts. It had to have an operational life of three years, and a storage life if not immediately used in a mission of at least ten years. It also had to have an operational reliability of >95%, and could not under any circumstances achieve criticality before reaching orbit, nor could the coolant freeze at any time during operation. Finally, it had to do all of this in less than 1061 kg (excluding the automatic control system).
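These requirements, combined with the 115 kWt beginning-of-life thermal output quoted earlier, let us do a quick back-of-the-envelope check on the overall conversion efficiency. A minimal sketch using only the numbers from the post:

```python
# Overall conversion efficiency implied by the post's figures:
# 115 kWt thermal output at beginning of life, 6 kWe at the terminals.
thermal_kw = 115.0
electric_kw = 6.0

efficiency = electric_kw / thermal_kw          # fraction of heat converted
waste_heat_kw = thermal_kw - electric_kw       # everything else is rejected

print(f"conversion efficiency: {efficiency:.1%}")        # ~5.2%
print(f"heat rejected by the radiator: {waste_heat_kw:.0f} kWt")  # 109 kWt
```

That ~5% figure is typical of in-core thermionics of the era, and it shows why the radiator is such a large part of the power plant: nearly twenty times the electrical output has to be rejected as heat.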
Thirty-seven fuel elements were used in the core, which was contained in a stainless steel reactor vessel. These contained uranium oxide fuel pellets, with a central fission gas void about 22% of the diameter of the fuel pellets to accommodate swelling as fission products built up. The emitters were made out of molybdenum, a fairly common choice for in-core applications. Al2O3 (sapphire) insulators were used to electrically isolate the fuel elements from the rest of the core. Three of the fuel elements powered the cesium heater and pump directly, while another (unknown) number powered the NaK coolant pump (my suspicion is that it’s about the same number). The rest output power directly into the power conditioning unit on the far side of the power plant.
Nine control drums, made mostly out of beryllium but with a neutron poison along one portion of the outer surface (Boron carbide/silicon carbide) surrounded the core. Three of these drums were safety drums, with two positions: in, with the neutron poison facing the center of the core, and out, where the beryllium acted as a neutron reflector. The rest of the drums could be rotated in or out as needed to maintain reactivity at the appropriate level in the core. These had actuators mounted outside the pressure vessel to control the rotation of the drums, and were connected to an automatic control system to ensure autonomous stable function of the reactor within the mission profile that the reactor would be required to support.
The NaK coolant flowed around the fuel elements in an annular flow path immediately surrounding the TFEs, driven by an electromagnetic pump, and then passed through a radiator. Two inlet and two outlet pipes connected the core to the radiator. In between the radiator and the core was a radiation shield, made up of stainless steel and lithium hydride (more on this seemingly odd choice when we look at the testing history).
The coolant tubes were embedded in a zirconium hydride moderator, which was contained in stainless steel casings.
Finally, a reservoir of cesium was at the opposite end of the reactor from the radiator. This was necessary for the proper functioning of the thermionic fuel elements, and underwent many changes throughout the design history of the reactor, including a significant expansion as the design life requirements increased.
Once the Topaz International program began, additional – and quite significant – changes were made to the reactor’s design, including a new automated control system and an anti-criticality system that actually removed some of the fuel from the core until the start-up commands were sent, but that’s a discussion for the next post.
I saved the coolest part of this system for last: the TISA heaters, or “Thermal Simulators of Apparatus Cores” (the acronym is from the original Russian). These units were placed in the active section of the thermionic fuel elements to simulate the heat of fission, with the rest of the systems and subsystems in flight configuration. This allowed unprecedented levels of testing capability, but would also lead to a couple of problems later in testing – which were addressed as needed.
How did this design end up this way? To understand that, we need to look at the Soviet design team’s development and testing process.
The History of Enisy’s Design
The Enisy reactor started with the development of the thermionic fuel element by the Sukhumi Institute in the early 1960s, which pursued two options: single-cell and multi-cell variants. In 1967, these two options were split into two different programs: the Topol (Topaz), which we looked at in the Soviet Astronuclear History post, led by the Krasnaya Zvezda design bureau in Moscow, and the Enisy, headed by the Central Design Bureau of Machine Building in Leningrad (now St. Petersburg). Aside from the lead bureau, which was in charge of overall program and system management, a number of other organizations were involved in the design, fabrication, and testing of the reactor system.

On the design and modeling side:
- the Kurchatov Institute of Atomic Energy was responsible for nuclear design and analysis;
- the Scientific Industrial Association Lutch was responsible for the thermionic fuel elements;
- the Sukhumi Institute remained involved in the reactor’s automatic control system design.

On the fabrication and testing side:
- the Research Institute of Chemical Machine Building handled thermal vacuum testing;
- the Scientific Institute for Instrument Building operated the Turaevo nuclear test facility;
- Krasnoyarsk Spacecraft Designer handled mechanical testing and spacecraft integration;
- the Prometheus Laboratory handled materials development (including the liquid metal eutectic for the cooling system and materials testing) and welding;
- the Enisy manufacturing facility itself was located in Tallinn, Estonia (a decision that would cause headaches later, during the collaboration).
The Enisy originally had three customers (whose identities I don’t know, other than that at least one was military), and each had different requirements for the reactor. The system was originally designed to operate at 6 kWe for one year with a >95% success rate, but customer requirements changed these characteristics significantly. As an example, one customer needed a one-year system life with a 6 kWe power output, while another only needed 5 kWe – but needed a three-year mission lifetime. This longer lifetime ended up becoming the baseline requirement of the system, although the 6 kWe requirement and >95% mission success rate remained unchanged. This led to numerous changes, especially to the cesium reservoir needed for the thermionic convertors, as well as to insulators, sensors, and other key components in the reactor itself. As the cherry on top, the manufacture of the system was moved from Moscow to Tallinn, Estonia, resulting in a new set of technicians needing to be trained to the specific requirements of the system, changes in documentation, and – at the fall of the Soviet Union – the loss of significant program documentation which could have assisted the later Russian/US collaboration on the system.
The nuclear design side of things changed throughout the design life as well. In 1974, the number of thermionic fuel elements (TFEs) in the reactor core was increased from 31 to 37, along with an increase in the height of the “active” section of the TFE (whether the overall TFE length, and therefore the core length, changed is information I have not been able to find). Additional space in the TFEs was added to account for greater fuel swelling as fission products built up in the fuel pellets, and the bellows used to ensure proper fitting of the TFEs with reactor components were modified as well. The moderator blocks in the core, made out of zirconium hydride, were modified at least twice, including changing the material of their containers. Manufacturing changes in the stainless steel reactor vessel were also required, as were changes to the gamma shielding design for the shadow shield. All in all, the reactor went through significant changes from the first model tested to the end of its design life.
Another area with significantly changing requirements was systems integration. The reactor was initially meant to be launched in a reactor-up position, but this was changed in 1979 to a reactor-down launch configuration, necessitating changes to several systems in what ended up being a significant effort. The acceleration levels required during dynamic testing were also increased by a factor of almost two, resulting in failures in testing – and redesigns of many of the structures used in the system. The boom that mounted the power plant to the spacecraft changed as well: three different designs were used through the lifetime of the system on the Russian side, and doubtless at least another two were needed for the American spacecraft integration.
Perhaps the most changed design was the coolant loop, due to significant problems during testing and manufacturing of the system.
Design Driven by (Expected) Failure: The USSR Testing Program
Flight qualification for nuclear reactors in the USSR at the time was very different from the way that the US did flight qualification, something that we’ll look at a bit more later in this post. The Soviet method of flight qualification was to heavily test a number of test-beds, using both nuclear and non-nuclear techniques, to validate the design parameters. However, the actual flight articles themselves weren’t subjected to nearly the same level of testing that the American systems would be, instead going through a relatively “basic” (according to US sources) workmanship examination before any theoretical launch.
In the US, extensive systems modeling is a routine part of nuclear design of any sort, as well as astronautical design. Failures are not unexpected, but at the same time the ideal is that the system has been studied and modeled mathematically thoroughly enough that it’s not unreasonable to predict that the system will function correctly the first time… and the second… and so on. This takes not only a large amount of skilled intellectual and manual labor to achieve, but also significant computational capabilities.
In the Soviet Union, however, the preferred method of astronautical – and astronuclear – development was to build what seemed to be a well-designed system and then test it, expecting failure. Once this happened, the causes of the failure were analyzed, the problem corrected, and then the newly upgraded design would be tested again… and again, for as many times as were needed to develop a robust system. Failure was literally built into the development process, and while it could be frustrating to correct the problems that occurred, the design team knew that the way their system could fail had been thoroughly examined, leading to a more reliable end result.
This design philosophy leads to a large number of each system needing to be built. Each reactor that was built underwent a post-manufacturing examination to determine the quality of its fabrication, and from this the appropriate use of the reactor. These systems had four prefixes: SM, V, Ya, and Eh, each type in this order able to do everything the previous type could, plus more. The SM, or static mockup, articles were never built for anything but mechanical testing, and as such were stripped-down, “boilerplate” versions of the system. The V reactors were the next step up, used for thermophysical (heat transfer, vibration testing, etc.) or mechanical testing, but not of sufficient quality to undergo nuclear testing. The Ya reactors were suitable for nuclear testing as well, and in a pinch could have been flown. The Eh reactors were the highest quality, and were designated potential flight systems.
In addition to this designation, there were four distinct generations of reactor. The first generation ran from V-11 to Ya-22. This core used 31 thermionic fuel elements, with a one-year design life. These units were intended to be launched upright, and had a lightweight radiation shield. In the next generation, V-15 to Ya-26, the operational lifetime was increased to a year and a half.
The third generation, V-71 to Eh-42, had a number of changes. The number of TFEs was increased from 31 to 37, in large part to accommodate another increase in design life, to above three years. The emitters on the TFEs were changed to monocrystalline Mo, and the later ones had Nb added to the Mo (more on this below). The ground testing thermal power level was reduced, to address thermal damage from the heating units in earlier non-nuclear tests. This is also when the launch configuration was changed from upright to inverted, necessitating changes in the freeze-prevention thermal shield, integration boom, and radiator mounting brackets. The last two of this generation, Eh-41 and Eh-42, had the heavier radiation shield installed, while the rest used the earlier, lighter gamma shield.
The final generation, Ya-21u to Eh-44, had the longest core lifetime requirement of three years at 5.5 kWe power output. These included all of the other changes above, as well as many smaller changes to the reactor vessel, mounting brackets, and other mechanical components. Most of these systems ended up becoming either Ya or Eh units due to lessons learned in the previous three generations, and all of the units which would later be purchased by the US as flight units came from this final generation.
A total of 29 articles were built by 1992, when the US became involved in the program. As of 1992, two of the units were not completed, and one was never assembled into its completed configuration.
Sixteen of the 21 units were tested between 1970 and 1989, providing an extensive experimental record of the reactor type. Of these, thirteen underwent thermal, mechanical, and integration non-nuclear testing, and nuclear testing occurred six times at the Baikal nuclear facility. As of 1992, there were two built but untested flight units available, the Eh-43 and Eh-44, with the Eh-45 still under construction.
Surviving testing and fabrication notes for the individual units include:
- Nuclear testing revision and development, including fuel loading and radiation and nuclear safety work; studies of unstable nuclear conditions and stainless steel material properties; disassembly and inspection. LiH moderator hydrogen loss in test.
- Nuclear ground test: ACS startup, steady-state operation, post-operation disassembly and inspection. TFE lifetime limited to ~2 months due to fuel swelling.
- Steady-state nuclear testing. Significant TFE shortening post-irradiation.
- TFEs needed redesign, so no systems testing; installed at Turaevo as a mockup and used to establish transport and handling procedures.
- System incomplete; used as a spacecraft mockup, did not undergo physical testing.
- Test stand preparation.
- Second fabrication stage not completed; used for some experiments with the Baikal test stand. Disassembled at Sosnovy Bor.
- Refabricated at CDBMB; TFE burnt and damaged during second fabrication, with a notch between the TISA heater and the emitter.
- Mechanical, electrical, and spacecraft integration testing.
- Testing at Baikal and Krasnoyarsk, plus cold temperature testing.
- Converted from upright to inverted launch configuration, with spacecraft integration heavily modified; first to use the 37-TFE core configuration. Transport testing (railroad vibration and shock) and cold temperature testing; electrical testing post-mechanical; zero-power testing at Krasnoyarsk.
- Ground control testing (no ACS).
- Nuclear ground test, steady-state operation. Leaks observed in two cooling pipes 120 hours into the test; leaks plugged and the test continued. Disassembly and inspection.
- Prototype Sukhumi ACS testing.
- Nuclear ground test: startup using the ACS, then steady state. An initial leak in the EM pump led to a large leak later in the test, which ended in a loss-of-coolant accident; the reactor was disassembled and inspected post-test to determine the cause of the leak.
- Quality not sufficient for flight (despite the Eh “flight” designation); static and torsion tests conducted.
- Nuclear ground test with pre-launch simulation: ACS startup and operation, steady-state test, post-operation disassembly and examination.
- Fabrication begun in Estonia with some changed components; after the changes, the system was renamed Eh-41 and its serial number changed to 17. Significant reactor changes.
- Cold temperature and coolant flow testing.
- Cold temperature testing, with no electrical testing; filled with NaK during the second stage of fabrication.
- Began life as Eh(?)-39; designation changed post-retrofit. Transportation (railroad) dynamic and impact testing, with leak testing post-mechanical testing. First use of the increased shield mass.
- Critical component welding failure during fabrication; the unit was never used.
- First Gen 4 reactor using the modified TFEs; electrical testing conducted on the TFEs, and new end-cap insulation on the TFEs tested.
- Flight unit; first fabrication phase in Tallinn completed, second incomplete as of 1994.
- Flight unit; first fabrication phase in Tallinn completed, second incomplete as of 1994.
- Partially fabricated unit with missing components.
Not many fine details are known about the testing of these systems, but we do have some information about the tests that led to significant design changes. These changes are best broken down by power plant subsystem, because while there’s significant interplay between these various subsystems, their functionality can change in minor ways quite easily without affecting the plant as a whole. Those systems are: the thermionic fuel elements, the moderator, the pressure vessel, the shield, the coolant loop (which includes the radiator piping), the radiator coatings, the launch configuration, the cesium unit, and the automatic control system (including the sensors for the system and the drum drive units). While this seems like a lot of systems to cover, many of them have very little design history information to pass on, so it’s less daunting than it initially appears.
Thermionic Fuel Elements
It should come as no surprise that the thermionic fuel elements (TFEs) were extensively modified throughout the testing program. One of the big problems was short circuiting across the inter-electrode gap due to fuel swelling, although other problems caused short circuits as well.
Perhaps the biggest change was the move from 31 to 37 TFEs in the core, one of the major changes to minimize fuel swelling. The active core length (where the pellets were) was increased by 40 mm (from 335 mm to 375 mm), and the inter-electrode gap was widened by 0.05 mm (from 0.45 to 0.5 mm). In addition, the hole through the center of the fuel element was increased in diameter to allow for greater internal swelling, reducing the mechanical stress on the emitter.
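It's worth putting numbers on what those geometry changes bought. The emitter diameter isn't given in the post, so the sketch below compares only the quoted active lengths and element counts, which is enough to see the relative change in total active emitter length across the core:

```python
# Effect of the third-generation TFE changes, using only numbers quoted above.
old_len_mm, new_len_mm = 335.0, 375.0   # active section length per TFE
old_tfes, new_tfes = 31, 37             # TFE count in the core

# Per-element active length gain, and total active length gain core-wide.
length_gain = new_len_mm / old_len_mm - 1.0
core_emitter_gain = (new_tfes * new_len_mm) / (old_tfes * old_len_mm) - 1.0

print(f"active length increase per TFE: {length_gain:.1%}")          # ~11.9%
print(f"total active emitter length increase: {core_emitter_gain:.1%}")  # ~33.6%
```

A roughly one-third increase in total active emitter length, for the same power requirement, means each element runs at a lower power density – which is exactly what you want when fuel swelling is the life-limiting failure mode.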
The method of attaching the bellows for thermal expansion was modified (and the operating temperature dropped by 10 K) to prevent crystallization of the palladium braze and increase the bellows’ thermal cycling capability, after failures on the Ya-24 system (1977-1981).
The other major change was to the materials used in the TFE. The emitter started off as polycrystalline molybdenum in the first two generations of reactors, but the grain boundaries between the Mo crystals caused brittleness over time. Because of this, the team developed the capability to produce monocrystalline Mo, which improved performance in the early third generation of reactors – just not enough. In the final version, seen in later third generation and fourth generation systems, the Mo was doped with 3% niobium, creating the best available material for the emitter.
There were many other changes during the development of the thermionic fuel elements, including the addition of coatings on some materials for corrosion resistance, changes in electrical insulation type, and others, but these were the most significant in terms of functionality of the TFEs, and their impact on the overall systems design.
Moderator
The zirconium hydride neutron moderator was placed around the TFEs within the core. Failures were observed several times in testing, including during the Ya-23 test, which resulted in loss of hydrogen in the core and the permanent shutdown of that reactor. Overpower issues, combined with a loss of coolant, led to moderator failure in Ya-82 as well, but in this case the improved hydrogen barriers used in the stainless steel “cans” holding the ZrH prevented a loss of hydrogen accident despite the ZrH breaking up (the failure was due to the ZrH being spread more thinly across the reactor, not the loss of H due to ZrH damage).
This development process was one of the least well documented areas of the Soviet program.
Pressure Vessel
Again, this subsystem’s development seems poorly documented. The biggest change, though, seems to be in the way the triple coating (of chrome, then nickel, then enamel) was applied to the stainless steel of the reactor vessel. This was due to the failure of the Ya-23 unit, which failed at the joint between the tube and the tube end on one of the TFEs. The crack self-sealed, but on future units the coatings didn’t go all the way to the weld, and the hot CO2 used as a cover gas was allowed to carburize the steel to prevent fatigue cracking.
Radiation Shield
The LiH component of the radiation shield (for neutron shielding) seems to not have changed much throughout the development of the reactor. The LiH was contained in a 1.5 mm thick stainless steel casing, polished on the ends for reflectivity and coated black on the outside face.
However, the design of the stainless steel casing was changed in the early 1980s to meet more stringent payload gamma radiation doses. Rather than add a new material such as tungsten or depleted uranium as is typical, the designers decided to just thicken the reactor and spacecraft sides of the LiH can to 65 mm and 60 mm respectively. While this was definitely less mass-efficient than using W or U, the manufacturing change was fairly trivial to do with stainless steel, and this was considered the most effective way to ensure the required flux rates with the minimum of engineering challenges.
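To get a feel for why simply thickening the steel faces works, a basic exponential attenuation estimate (I/I0 = exp(-μx)) is enough. The attenuation coefficient below (~0.47 per cm for stainless steel at roughly 1 MeV) is an assumed textbook-style value, not a figure from the Enisy program, and the sketch ignores buildup factors – it’s an order-of-magnitude illustration only:

```python
import math

# Narrow-beam gamma attenuation through steel: I/I0 = exp(-mu * x).
# mu ~0.47 /cm is an assumed value for stainless steel at ~1 MeV,
# not a number from the Topaz-II program; buildup is ignored.
MU_STEEL_PER_CM = 0.47

def transmitted_fraction(thickness_cm: float) -> float:
    """Fraction of incident gammas passing straight through the steel."""
    return math.exp(-MU_STEEL_PER_CM * thickness_cm)

# Face thicknesses quoted in the post: 65 mm reactor side, 60 mm spacecraft side.
for face, thickness_mm in [("reactor side", 65.0), ("spacecraft side", 60.0)]:
    frac = transmitted_fraction(thickness_mm / 10.0)
    print(f"{face}: {frac:.1%} of incident gammas transmitted")
```

Even with these rough assumptions, 60-65 mm of steel cuts the direct gamma flux to a few percent, which is why the designers could meet the tighter dose requirement without resorting to tungsten or depleted uranium.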
The first unit to use this was the Eh-41, fabricated in 1985, which was also the first unit to be tested in the inverted flight configuration. The heavier shield, combined with the new position, led to the failure of one of the shield-to-reactor brackets, as well as the attachment clips for the radiator piping. These components were changed, and no further challenges occurred with the shield in the rest of the test program.
Coolant Loop
The NaK coolant loop was the biggest source of headaches during the development of the Enisy. A brief list of failures, and the actions taken to correct them:
V-11 (July 1971-February 1972): A weld failed at the joint between the radiator tubing and the collector during thermophysical testing. The double weld was changed to a triple weld to correct the failure mode.
Ya-21 (1971): This reactor seemed to have everything go wrong with it. Another leak at the same tube-to-collector interface led to the welding on of a small sleeve to repair the crack. This fix seemed to solve the problem of failures in that location.
Ya-23 (March 1975-June 1976): Coolant leak between coolant tube and moderator cavity. Both coating changes and power ramp-up limits eliminated issues.
V-71 (January 1981-1994?): NaK leak in radiator tube after 290 hours of testing. Plugged, testing continued. New leak occurred 210 test hours later, radiator examined under x-ray. Two additional poorly-manufactured tubes replaced with structural supports. One of the test reactors was sent to the US under Topaz International.
Ya-81 (September 1980-January 1983): Two radiator pipe leaks 180 hours into nuclear testing (no pre-nuclear thermophysical testing of unit). Piping determined to be of lower quality after switching manufacturers. Post-repair, the unit ran for 12,500 hours in nuclear power operation.
Ya-82 (September 1983-November 1984): A slow leak led to coolant pump voiding and oscillations, then to one of the six pump inlet lines splitting. There were two additional contributions to this failure: first, the square surfaces were pressed into shape from square pipes, which can cause stress microfractures at the corners, and second, the inlet pump was forced into place, causing stress fracturing at the joint. The resulting loss-of-coolant condition overheated the reactor and led to the failure of the ZrH moderator blocks. This led to increased manufacturing controls on the pump assembly, and no further major pump failures were noted in the remainder of the testing.
Eh-38 (February 1986-August 1986): This failure is a source of some debate among the Russian specialists. Some believe it was a slow leak that began shortly after startup, while others believe it was a larger leak that started at some point toward the end of the 4700 hour nuclear test. The exact location of the leak was never determined, though it is known to have been in the upper collector of the radiator assembly.
Ya-21u (December 1987-December 1989): Caustic stress-corrosion cracking occurred about a month and a half into thermophysical testing in the lower collector assembly, likely caused by a coating flaw growing during thermal cycling. This means that subsurface residual stresses existed within the collector itself. Due to the higher-than-typical (by U.S. standards) carbon content in the stainless steel (the specification allowed for 0.08%-0.12% carbon, rather than the less than 0.08% carbon content of U.S. SS-321), the steel was less ductile than was ideal, which could have been a source of the flaw growing as it did. Additionally, increased oxygen levels in the NaK coolant could have exacerbated the problem. A combination of ensuring that heat treatments had occurred post-forming, and of ensuring a more oxygen-poor environment, was essential to reducing the chances of this failure happening again.
The only known data point on the radiator development came during the Ya-23 test, where the radiator coating changed the nuclear properties of the system at elevated temperature (exactly how is unknown). It was changed to something that would be less affected by the radiation environment. The final radiator configuration was a chrome and polymer substrate with an emissivity of 0.85 at beginning of life.
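As a back-of-the-envelope check on what that emissivity buys, here is a gray-body heat rejection sketch per square meter of radiator. The operating temperature is an assumed placeholder, not a figure from the source, and the deep-space sink temperature is neglected.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_w_per_m2(temp_k: float, emissivity: float = 0.85) -> float:
    """Gray-body radiated power per unit radiator area, ignoring the cold sink."""
    return emissivity * SIGMA * temp_k ** 4

# Assumed radiator surface temperature of 600 K, for illustration only:
print(f"{radiated_w_per_m2(600.0):.0f} W/m^2")  # ~6.2 kW per square meter
```

Because the rejected power scales with the fourth power of temperature, even modest changes in radiator temperature or emissivity swing the required radiator area substantially, which is why the coating's beginning-of-life emissivity was worth tracking.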
As we saw, the orientation that the reactor was to be launched in was changed from upright to inverted, with the boom that connects the reactor to the spacecraft lying side by side with it inside the payload fairing. This required the thermal cover used to prevent the NaK from freezing to be redesigned; it was modified after the V-13 test, when it was discovered that it could not prevent the coolant from freezing. The new cover was verified on the V-15 tests, and remained largely unchanged after this.
Some of the load-bearing brackets needed to be changed or reinforced as well, as did the clips used to secure the radiator pipes to the structural components of the radiator.
Cesium Supply Unit
For the TFEs to work properly, it was critical that the Cs vapor pressure stayed within the right range relative to the temperature of the reactor core. This system was designed from first physical principles, leading to a novel structure that used temperature and pressure gradients to operate. It was initially throttleable, but there were issues with this functionality during the Ya-24 nuclear test. This changed when it was discovered that there was an ideal pressure setting for all power levels, so the feed pressure was fixed. Sadly, on the Ya-81 test the throttle was set too high, leading to the need to cool the Cs as it returned to the reservoir.
Additional issues were found in the startup subsystem (a single-use puncture valve) used to vent the inert He gas from the interelectrode gap (this was used during launch and before startup to prevent Cs from liquefying or freezing in the system), as well as to balance the Cs pressure by venting it into space at a rate of about 0.4 g/day. The Ya-23 test saw a sensor not register the release of the He, leading to an upgraded spring for the valve.
Finally, the mission lifetime extension during the 1985/86 timeframe tripled the required lifetime of the system, necessitating a much larger Cs reservoir to account for Cs venting: capacity went from 0.455 kg to 1 kg. These were tested on Ya-21u and Eh-44, despite one (military) customer objecting due to insufficient testing of the upgraded system. The upgraded system would later be tested and found to be acceptable as part of the Topaz International program.
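The reservoir sizing follows directly from the 0.4 g/day venting rate quoted above. A minimal sketch of the arithmetic (the one-year baseline lifetime here is my assumption for illustration; the source only says the required lifetime tripled):

```python
VENT_RATE_G_PER_DAY = 0.4  # Cs vented to space, from the text

def cs_vented_g(days: float) -> float:
    """Total cesium (grams) lost to venting over a mission of the given length."""
    return VENT_RATE_G_PER_DAY * days

baseline_g = cs_vented_g(365)       # assumed 1-year baseline: ~146 g
extended_g = cs_vented_g(3 * 365)   # tripled lifetime: ~438 g
print(baseline_g, extended_g)  # a 1 kg reservoir covers the extended case with margin
```

Under these assumptions a tripled lifetime needs well over 400 g of cesium just for venting losses, which makes the jump to a 1 kg reservoir look like straightforward margin-keeping rather than over-design.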
Automatic Control System
The automatic control system, or ACS, was used for automatic startup and autonomous reactor power management, and went through more significant changes than any other system, save perhaps the thermionic fuel elements. The first ACS, called the SAU-35, was used for the Ya-23 ground test, followed by the SAU-105 for the Eh-31 and Ya-24 tests. Problems arose, however, because these systems were manufactured by the Institute for Instrument Building of the Ministry of Aviation Construction, while the Enisy program was under the purview of the Ministry of Atomic Energy, and bureaucratic problems reared their heads.
This led the Enisy program to look to the Sukhumi Institute (who, if you remember, were the institute that started both the Topol and Enisy programs in the 1960s before control was transferred elsewhere) for the next generation of ACS. During this transition, the Ya-81 ground nuclear test occurred, but due to the bureaucratic wrangling, manufacturer change, and ACS certification tests there was no unit available for the test. This led the Ya-81 reactor to be controlled from the ground station. The Ya-82 test was the first to use a prototype Sukhumi-built ACS, with nine startups being successfully performed by this unit.
The loss-of-cooling accident potentially led to the final major change to the ACS for the Eh-38 test: the establishment of an upper temperature limit. After this, the dead-band was increased to allow greater power drift in the reactor (reducing the necessary control drum movement), and some wiring was rerouted to ensure proper thermocouple sensor readings; these were the final significant modifications before Topaz International started.
The sensors on the Enisy seem to have been regularly problematic, but rather than replace them, they were either removed or left as instrumentation sensors rather than control sensors. These included the volume accumulator sensors on the stainless steel bellows for the thermionic fuel elements (which were removed), and the set of sensors used to monitor the He gas in the TFE gas gap (for fission product buildup), the volume accumulator (which also contained Ar), and the radiation shield. This second set of sensors was kept in place, but was only able to measure absolute changes, not precise measurements, so was not useful for the ACS.
Control Drive Unit
The control drive unit was responsible for the positioning of the control drums, both on startup as well as throughout the life of the reactor to maintain appropriate reactivity and power levels. Like in the SNAP program, these drive systems were a source of engineering headaches.
Perhaps the most recurring problem during the mid-1970s was the failure of the position sensor for the drive system, which was used to monitor the rotational position of the drum relative to the core. This failed in the Ya-20, Ya-21, and Ya-23, after which it was replaced with a sensor of a new design and the problem isn’t reported again. The Ya-81 test saw the loss of the Ar gas used as the initial lubricant in the drive system, and later seizing of the bearing the drive system connected to, leading to its replacement with a graphite-based lubricant.
The news wasn’t all bad, however. The Eh-40 test demonstrated greater control of drum position by reducing the backlash in the kinematic circuit, for instance, and improvements to the materials and coatings used eliminated problems of coating delamination and the radiator coating issues, improving the system’s resistance to thermal cycling and vibrational stresses.
The Eh-44 drive unit was replaced against the advice of one of the Russian customers due to a lack of mandatory testing on the advanced drive system. This system remained installed at the time of Topaz International, and is something that we’ll look at in the next blog post.
A New Customer Enters the Fold
During this testing, an American company (which is not named) was approached about purchasing nearly complete Enisy reactors: the only thing the Soviets wouldn’t sell was the fissile fuel itself, though they offered to help with its manufacture. This was in addition to the three Russian customers (at least one of which was military, but again, all remain unnamed). The company did not purchase any units, but did take the offer to the US government.
This led to the Topaz International program, funded by the US Department of Defense’s Ballistic Missile Defense Organization. The majority of the personnel involved were employees of Los Alamos and Sandia National Laboratories, and the testing occurred at Kirtland Air Force Base in Albuquerque, NM.
As a personal note, I was just outside the perimeter fence when the aircraft carrying the test stand and reactors landed, and it remains one of the formational events of my childhood, even though I had only the vaguest understanding of what was actually happening, or that some day, more than 20 years later, I would be writing about this very program, which I saw reach a major inflection point.
The Topaz International program will be the subject of our next blog post. It’s likely to be a longer one (as this was), so it may take me a little longer than a week to get out, but the ability to compare and contrast Soviet and American testing standards on the same system is too golden an opportunity to pass up.
Hello, and welcome to Beyond NERVA, for our first blog post of the year! Today, we reach the end of the reactor portion of the SNAP program. A combination of the holidays and personal circumstances prevented me from finishing this post as early as I would have liked to, but it’s finally here! Check the end of the blog post for information on an upcoming blog format change. [Author’s note: somehow the references section didn’t attach to the original post, that issue is now corrected, and I apologize, references are everything in as technical a field as this.]
The SNAP-50 was the last, and most powerful, of the SNAP series of reactors, and had a very different start when compared to the other three reactors that we’ve looked at. A fifth reactor, SNAP-4, also underwent some testing, but was meant for undersea applications for the Navy. The SNAP-50 reactor started life in the Aircraft Nuclear Propulsion program for the US Air Force, and ended its life with NASA, as a power plant for the future modular space station that NASA was planning before the budget cuts of the mid to late 1970s took hold.
Because it came from a different program originally, it also uses different technology than the reactors we’ve looked at on the blog so far: uranium nitride fuel and higher-temperature lithium coolant made this reactor a very different beast than the other reactors in SNAP. However, these changes also allowed for a more powerful reactor, and a less massive power plant overall, thanks to the advantages of the higher-temperature design. It was also the first major project to move the space reactor development process away from SNAP-2/10A legacy designs.
The SNAP-50 would permanently alter the way that astronuclear reactors were designed, and would change the course of in-space reactor development for over 20 years. By the time of its cancellation in 1973, it had approached flight readiness to the point that funding and time allowed, but changes in launch vehicle configuration rang the death knell of the SNAP-50.
The Birth of the SNAP-50
Up until now, the SNAP program had focused on a particular subset of nuclear reactor designs. They were all fueled with uranium-zirconium hydride fuel (within a small range of uranium content, all HEU), cooled with NaK-78, and fed either mercury Rankine generators or thermoelectric power conversion systems. This had a lot of advantages for the program: fuel element development improvements for one reactor could be implemented in all of them, challenges in one reactor system that weren’t present in another allowed for distinct data points to figure out what was going on, and the engineers and reactor developers were able to look at each others’ work for ideas on how to improve reliability, efficiency, and other design questions.
However, there was another program going on at about the same time which had a very different purpose, but similar enough design constraints that it could be very useful for an in-space fission power plant: the Aircraft Nuclear Propulsion program (ANP), which was primarily run out of Oak Ridge National Laboratory. Perhaps the most famous part of the ANP program was the series of direct cycle ramjets for Project PLUTO: the TORY series. These ramjets were nuclear fission engines using the atmosphere itself as the working fluid. There were significant challenges to this approach, because the clad for the fuel elements must not fail, or else the fission products from the fuel elements would be released as something virtually identical to nuclear fallout, differing only in how it was generated. The fuel elements themselves would also be heavily eroded by the hot air moving through the reactor (which turned out to be a much smaller problem than was initially anticipated). The advantage to this system, though, was that it was simple, and could be made relatively lightweight.
Another option was what was known as the semi-indirect cycle, where the reactor would heat a working fluid in a closed loop, which would then heat the air through a heat exchanger built into the engine pod. While this was marginally safer from a fission product release point of view, there were a number of issues with the design. The reactor would have to run at a higher temperature than the direct cycle, because there are always losses whenever you transfer heat from one working fluid to another, and the increased mass of the system also required greater thrust to maintain the desired flight characteristics. The primary coolant loop would become irradiated when going through the reactor, leading to potential irradiation of the air as it passed through the heat exchanger. Another concern was that the heat exchanger could fail, leading to the working fluid (usually a liquid metal) being exposed at high temperature to the superheated air, where it could easily explode. Finally, if a clad failure occurred in the fuel elements, fission products could migrate into the working fluid, making the primary loop even more radioactive, increasing the irradiation of the air as it passed through the engine – and releasing fission products into the atmosphere if the heat exchanger failed.
The alternative to these approaches was an indirect cycle, where the reactor heated a working fluid in a closed loop, transferred this to another working fluid, which then heated the air. The main difference between these systems is that, rather than having the possibly radioactive primary coolant come in close proximity with the air and therefore transferring ionizing radiation, there is an additional coolant loop to minimize this concern, at the cost of both mass and thermal efficiency. This setup allowed for far greater assurances that the air passing through the engine would not be irradiated, because the irradiation of the secondary coolant loop would be so low as to be functionally nonexistent. However, if the semi-indirect cycle was more massive, this indirect cycle would be the heaviest of all of the designs, meaning far higher power outputs and temperatures were needed in order to get the necessary thrust-to-weight ratios for the aircraft. Nevertheless, from the point of view of the people responsible for the ANP program, this was the most attractive design for a crewed aircraft.
Both SNAP and ANP needed many of the same things out of a nuclear reactor: it had to be compact, it had to be lightweight, it had to have a VERY high power density, and it needed to be able to operate virtually maintenance-free in a variety of high-power conditions. These requirements are in stark contrast to terrestrial, stationary nuclear reactors, which can afford heavy, voluminous construction and can thus benefit from low power density. As a general rule of thumb, an increase in power density will also intensify the engineering, materials, and maintenance challenges. The fact that the ANP program needed high outlet temperatures to run a jet engine also carried the potential for a large thermal gradient across a power conversion system, meaning that high-conversion-efficiency electrical generation was possible. That led SNAP program leaders to see about adapting an aircraft system into a spacecraft system.
The selected design was under development at the Connecticut Advanced Nuclear Engine Laboratory (CANEL) in Middletown, Connecticut, with Pratt and Whitney as the prime contractor. Originally part of the indirect-cycle program, the challenges of heat exchanger design, adequate thrust, and a host of other problems continually set back the indirect cycle, and when the ANP program was canceled in 1961, Pratt and Whitney no longer had a customer for their reactor, despite having done extensive testing and even fabricated novel alloys to deal with certain challenges that their reactor design presented. This led them to look for another customer, and they discovered that both NASA and the US Air Force were interested in high-power-density, high-temperature reactors for in-space use. Thus the SNAP-50 was born.
This reactor was an evolution of the PWAR series of test reactors. Three reactors (the PWAR-2, -4, and -8, for 2, 4, and 8 MW of thermal power per reactor core) had already been run for the initial design of an aircraft reactor, focused on testing not only the critical geometry of the reactor, but also the materials needed to contain its unique (at the time) coolant: liquid lithium. Lithium has an excellent specific heat capacity, the amount of heat needed to raise the temperature of a unit mass by one degree: 3.558 J/g-°C, compared to the 1.124 J/g-°C of NaK-78, the coolant of the other SNAP reactors. This means that less coolant would be needed to transport the energy away from the reactor and into the engine in the ANP program, and for SNAP this meant that less working fluid mass would be needed between the reactor and the power conversion system. The facts that Li is much less dense than NaK, and that less of it would be needed, make lithium a highly coveted option for an astronuclear reactor design. However, this design decision also led to needing novel concepts for how to contain liquid lithium. Even compared to NaK, lithium is highly toxic and highly corrosive to most materials, and during the ANP program this led Pratt and Whitney to investigate novel alloy compositions for their containment structures. We’ll look at just what they did later.
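The coolant-mass argument can be made concrete with the heat-capacity figures above. A minimal sketch, with a hypothetical thermal power and coolant temperature rise chosen purely for illustration:

```python
# Specific heats from the post, converted to J/(kg*K)
CP_J_PER_KG_K = {"lithium": 3558.0, "NaK-78": 1124.0}

def coolant_mass_flow(q_watts: float, delta_t_k: float, coolant: str) -> float:
    """Mass flow (kg/s) needed to carry q_watts across a delta_t_k temperature rise."""
    return q_watts / (CP_J_PER_KG_K[coolant] * delta_t_k)

# Hypothetical example: 100 kW(t) carried across an assumed 50 K coolant rise
for name in CP_J_PER_KG_K:
    print(f"{name}: {coolant_mass_flow(100e3, 50.0, name):.2f} kg/s")
```

Whatever power and temperature rise you pick, the lithium loop needs roughly three times less circulating mass than the NaK loop, before even counting lithium's lower density, which is the whole of the mass argument in one ratio.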
SNAP-50: Designing the Reactor Core
This reactor ended up using a form of fuel element that we have yet to look at on this blog: uranium nitride, UN. While both UC (you can read more about carbide fuels here) and UN were considered at the beginning of the program, the reactor designers settled on UN because of a unique capability this fuel form offers: the highest fissile fuel density of any type of fuel element. This is offset by the fact that UN isn’t the most heat-tolerant of fuel elements, requiring a lower core operating temperature. Other options were considered as well, including CERMET fuels using oxides, carbides, and nitrides suspended in a tungsten metal matrix to increase thermal conductivity and reduce the temperature of the fissile fuel itself. The choice between UN, with its higher mass efficiency (due to its higher fissile density), and uranium carbide (UC), with the highest operating temperature of any solid fuel element, was a difficult one, and a lot of fuel element testing occurred at CANEL before a decision was reached. After much study, it was determined that UN in a tungsten CERMET fuel was the best balance of high fissile fuel density, high thermal conductivity, and the ability to manage low fuel burnup over the course of the reactor’s life.
Perhaps the most important design consideration for the fuel elements, after the type of fuel, was how dense the fuel would be, and how to increase the density if this was desired in the final design. While higher-density fuel is, generally speaking, a better idea when it comes to specific power, it was discovered that the higher the fuel density, the lower the burnup that would be possible before the fuel failed due to fission product gas buildup within the fuel itself. Initial calculations showed effectively unlimited fuel burnup potential for UN at 80% of its theoretical density, since much of the gas could diffuse out of the fuel element. However, once the fuel reached 95% density, this was limited to 1% fuel burnup. Additional work was done to determine that this low burnup was in fact not a project killer for a 10,000 hour reactor lifetime, as was specified by NASA, and the program moved ahead.
These fuel pellets needed a cladding material, as most fuel does, and this led to some additional unique materials challenges. With the decision to use lithium coolant, and the need for both elasticity and strength in the fuel element cladding (to deal with both structural loads and fuel swelling), it was necessary to do extensive experimentation on the metal that would be used for the clad. Eventually, a columbium-zirconium alloy with a small amount of carbon (Cb-1Zr-0.6C) was decided on: the carburized layer formed a barrier between the Cb-1Zr alloy of the clad (which resisted high-temperature lithium erosion on the pressure vessel side of the clad) and the UN-W CERMET fuel (which would react strongly with the clad without the carburized layer).
These decisions led to an interesting reactor design, but not necessarily one that is unique from a non-materials point of view. The fuel would be formed into high-density pellets, which would then be loaded into a clad, with a spring to keep the fuel at the bottom (spacecraft end) of the reactor. The gap between the top of the fuel elements and the top of the clad was for the release of fission product gasses produced during operation of the reactor. These rods would be loaded in a hexagonal prism pattern into a larger collection of fuel elements, called a can. Seven of these cans, placed side by side (one regular hexagon, surrounded by six slightly truncated hexagons), would form the fueled portion of the reactor core. Shims of beryllium would shape the core into a cylinder, which was surrounded by a pressure vessel and lateral reflectors. Six poison-backed control drums mounted within the reflector would rotate to provide reactor control. Should the reactor need to be scrammed, a spring mechanism would return all the drums to a position with the neutron poison facing the reactor, stopping fission from occurring.
The lithium, after being heated to a temperature of 2000°F (1093°C), would feed into a potassium boiler, before being returned to the core at an inlet temperature of 1900°F (1038°C). From the boiler, the potassium vapor, at 1850°F (1010°C), would enter a Rankine turbine which would produce electricity. The potassium vapor would cool down to 1118°F (603°C) in the process and return, condensed to its liquid form, to the boiler, closing the circulation. Several secondary coolant loops were used in this reactor: the main one served the neutron reflectors, shield actuators, control drums, and other radiation-hardened equipment, and used NaK as a coolant; this coolant was also used as a lubricant for the condensate pump in the potassium system. Another, lower-temperature organic coolant was used for other systems that weren’t in as high a radiation flux. The radiators used to reject heat also used NaK as a working fluid, and were split into a primary and a secondary radiator array. The primary array pulled heat from the condenser, reducing it from 1246°F (674°C) to 1096°F (591°C), while the secondary array took the lower-temperature coolant from 730°F (388°C) to 490°F (254°C). The plant was designed to operate in both single- and dual-loop configurations, with the second (identical) loop used for high-power operation and to increase redundancy in the power plant.
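Those turbine inlet and outlet temperatures imply a respectable ideal-cycle ceiling. A quick sketch of the Carnot bound computed from the quoted figures (Carnot is only an upper limit; the real potassium Rankine cycle would convert noticeably less):

```python
def f_to_k(temp_f: float) -> float:
    """Convert degrees Fahrenheit to kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Ideal (Carnot) efficiency between two reservoir temperatures."""
    return 1.0 - t_cold_k / t_hot_k

# Turbine inlet (1850 F) and outlet (1118 F) temperatures from the text
eta = carnot_efficiency(f_to_k(1850.0), f_to_k(1118.0))
print(f"Carnot ceiling across the turbine: {eta:.1%}")  # ~32%
```

Even a Rankine cycle capturing only part of that ~32% ceiling still comfortably beats the few-percent conversion efficiency of the thermoelectric systems used on the smaller SNAP reactors.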
These design decisions led to a flexible reactor core size, and the ability to adapt to changing requirements from either NASA or the USAF, both of which were continuing to show interest in the SNAP-50 for powering the new, larger space stations that were becoming a major focus of both organizations.
The Power Plant: Getting the Juice Flowing
By 1973, the SNAP 2/10A program had ended, and the SNAP-8/ZrHR program was winding down. These systems simply didn’t provide enough power for the new, larger space station designs being envisaged by NASA, and the smaller reactor designs (the 10B advanced designs that we looked at a couple blog posts back, and the 5 kWe Thermoelectric Reactor) didn’t provide the capabilities needed at the time. This left the SNAP-50 as the sole reactor design practical for a range of mission types… but there was a need for different reactor power outputs, so the program ended up developing two reactor sizes. The first was a 35 kWe design, meant for smaller space stations and lunar bases (although the lunar base application of the 35 kWe design seems to have never been fully fleshed out). A larger, 300 kWe type was designed for NASA’s proposed modular space station, a project which would eventually evolve into the actual ISS.
Unlike in the SNAP-2 and SNAP-8 programs, the SNAP-50 kept its Rankine turbine design, with potassium vapor as its working fluid. This meant the power plant was able to meet its electrical power output requirements far more easily than it could have with the lower efficiency of thermoelectric conversion systems. The CRU system meant for the SNAP-2 did end up reaching its design requirements for reliability and life by this time, but sadly the overall program had been canceled, so there was no reactor to pair with this ingenious design (and its mercury working fluid is so highly toxic that further testing on Earth would be nearly impossible). The boiler, pumps, and radiators for the secondary loop were tested past the 10,000 hour design lifetime of the power plant, and all major complications discovered during the testing process were addressed, proving that the power conversion system was ready for the next stage of testing in a flight configuration.
One concern that was studied in depth was the secondary coolant loop’s tendency to become irradiated in the neutron flux coming off the reactor. Potassium has a propensity for absorbing neutrons, and in particular 41K (about 6% of unrefined K) can capture a neutron and become 42K. This is a problem because 42K emits energetic gamma rays as it decays, so anywhere the secondary coolant goes needs gamma shielding to prevent the radiation from reaching the crew. This limited where the power conversion system could be mounted, to keep it inside the gamma shielding of the temporary, reactor-mounted shield. The compact nature of both the reactor core and the power conversion system meant that this was a reasonably small concern, but one worthy of in-depth examination by the design team.
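The 42K hazard is at least short-lived. A minimal decay sketch using the isotope's roughly 12.36 hour half-life (a standard nuclear-data value, not a figure from the source):

```python
T_HALF_HOURS = 12.36  # approximate half-life of K-42

def k42_fraction_remaining(hours: float) -> float:
    """Fraction of an initial K-42 inventory left after the given time."""
    return 0.5 ** (hours / T_HALF_HOURS)

print(f"after 1 day:  {k42_fraction_remaining(24):.1%}")   # ~26%
print(f"after 3 days: {k42_fraction_remaining(72):.1%}")   # ~1.8%
```

So while the loop is strongly radioactive during operation, the activity collapses within days of shutdown, unlike the long-lived fission products in the core itself.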
The power conversion system and auxiliary equipment, including the actuators for the control drums, power conditioning equipment, and other necessary equipment was cooled by a third coolant loop, which used an organic coolant (basically the oil needed for the moving parts to be lubricated), which ran through its own set of pumps and radiators. This tertiary loop was kept isolated from the vast majority of the radiation flux coming off the reactor, and as such wasn’t a major concern for irradiation damage of the coolant/lubricant.
Some Will Stay, Some Will Go: Mounting SNAP-50 To A Space Station
Each design used a 4-pi (a fully enclosing) shield with a secondary shadow shield pointing to the space station in order to reduce radiation exposure for crews of spacecraft rendezvousing or undocking from the space station. This primary shield was made out of a layer of beryllium to reflect neutrons back into the core, and boron carbide (B4C, enriched in boron-10) to absorb the neutrons that weren’t reflected back into the core. These structures needed to be cooled to ensure that the shield wouldn’t degrade, so a NaK shield coolant system (using technology adapted from the SNAP-8 program) was used to keep the shield at an acceptable temperature.
The shadow shield was built in two parts: the entire structure would be launched at the same time for the initial reactor installation for the space station, and then when the reactor needed to be replaced only a portion of the shield would be jettisoned with the reactor. The remainder, as well as the radiators for the reactor’s various coolant systems, would be kept mounted to the space station in order to reduce the amount of mass that needed to be launched for the station resupply. The shadow shield was made out of layers of tungsten and LiH, for gamma and neutron shielding respectively.
When it came time to replace the core of the reactor at the end of its 10,000 hour design life (which was a serious constraint on the UN fuels that they were working with due to fuel burnup issues), everything from the separation plane back would be jettisoned. This could theoretically have been dragged to a graveyard orbit by an automated mission, but the more likely scenario at the time would have been to leave it in a slowly degrading orbit to give the majority of the short-lived isotopes time to decay, and then design it to burn up in the atmosphere at a high enough altitude that diffusion would dilute the impact of any radioisotopes from the reactor. This was, of course, before the problems that the USSR ran into with their US-A program, which eliminated this lower-cost decommissioning option.
After the old reactor core was discarded, the new core, together with the small forward shield and power conversion system, could be put in place using a combination of off-the-shelf hardware, which at the time was expected to be common enough: either Titan-III or Saturn 1B rockets, with appropriate upper stages to handle the docking procedure with the space station. The reactor would then be attached to the radiator, the docking would be completed, and within 8 hours the reactor would reach steady-state operations for another 10,000 hours of normal use. The longest that the station would be running on backup power would be four days. Unfortunately, information on the exact docking mechanism used is thin, so the details on how they planned this stage are still somewhat hazy, but there’s nothing preventing this from being done.
A number of secondary systems, including accumulators, pumps, and other equipment are mounted along with the radiator in the permanent section of the power supply installation. Many other systems, especially anything that has been exposed to a large radiation flux or high temperatures during operation (LiH, the primary shielding material, loses hydrogen through outgassing at a known rate depending on temperature, and can almost be said to have a half-life), will be separated with the core, but everything that was practicable to leave in place was kept.
This basic design principle for reloadable (which in astronuclear often just means “replaceable core”) reactors will be revisited time and again for orbital installations. Variations on the concept abound, although surface power units seem to favor “abandon in place” far more. In the case of large future installations, it’s not unreasonable to suspect that refueling of a reactor core would be possible, but at this point in astronuclear mission utilization, even having this level of reusability was an impressive feat.
35 kWe SNAP-50: The Starter Model
In the 1960s, having 35 kWe of power for a space station was considered significant enough to supply the vast majority of mission needs. Because of this, a smaller version of the SNAP-50 was designed to fit this mission design niche. While the initial power plant would require the use of a Saturn 1B to launch it into orbit, the replacement reactors could be launched on either an Atlas-Centaur or Titan IIIA-Centaur launch vehicle. This was billed as a low cost option, as a proof of concept for the far larger – and at this point, far less fully tested – 300 kWe version to come.
NASA was still thinking of very large space stations at this time. The baseline crew requirements alone were incredible: 24-36 crew, with rotations lasting from 3 months to a year, and a station life of five years. While 35 kWe wouldn’t be sufficient for the full station, it would be an attractive option. Other programs had looked at nuclear power plants for space stations as well, like we saw with the Manned Orbiting Laboratory and the Orbital Workshop (later Skylab), and facilities of that size would be good candidates for the 35 kWe system.
The core itself measured 8.3 inches (0.211 m) across, 11.2 inches (0.284 m) long, and used 236 fuel elements arranged into seven fuel element cans within the pressure vessel of the core. Six poison-backed control drums were used for primary reactor control. The core would produce up to 400 kW of thermal power. The pressure vessel, control drums, and all other control and reflective materials together measured just 19.6 inches (0.498 m) by 27.9 inches (0.709 m), and the replaceable portion of the reactor was between four and five feet (1.2 m and 1.5 m) tall, and five and six feet (1.5 m and 1.8 m) across – including shielding.
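Since mixed imperial and metric figures like these are easy to garble, here is a quick sanity-check sketch of the conversions quoted above; the inch-to-meter factor is exact by definition, and the dimension labels are just for readability.

```python
# Sanity check on the unit conversions for the 35 kWe SNAP-50 dimensions
# quoted above (sketch only; labels are for readability).
IN_TO_M = 0.0254  # exact definition of the international inch

dims_in = {
    "core diameter": 8.3,
    "core length": 11.2,
    "reactor package width": 19.6,
    "reactor package length": 27.9,
}

for name, inches in dims_in.items():
    print(f"{name}: {inches} in = {inches * IN_TO_M:.3f} m")
```

Running this reproduces the metric figures in the text (0.211 m, 0.284 m, 0.498 m, 0.709 m).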
This reactor could also have been a good prototype reactor for a nuclear electric probe, a concept that will be revisited later, although there's little evidence that this path was ever seriously explored. Like many smaller reactor designs, this one did not get the amount of attention that its larger brother received, but at the time this was considered a good, solid space station power supply.
300 kWe SNAP-50: The Most Powerful Space Reactor to Date
While there were sketches for reactors more powerful than the 300 kWe SNAP-50 variant, they were never developed to any great extent, and certainly not to the point of experimental verification that SNAP-50 had achieved. This was considered a good starting point for a possible crewed nuclear electric spacecraft, as well as being able to power a truly huge space station.
The 300 kWe variant of the reactor differed in more than just size when compared to its smaller brother. Despite using the same fuel, clad, and coolant as the 35 kWe system, the 300 kWe system could achieve over four times the fuel burnup of the smaller reactor (1.3% vs 0.32%), and had a higher maximum fuel power density as well, both of which have a huge impact on core lifetimes and dynamics. This was partially achieved by making the fuel elements almost half as narrow, and increasing the number of fuel elements to 1093, held in 19 cans within the core. This led to a core that was 10.2 inches (0.259 m) wide and 14.28 inches (0.363 m) long (keeping the same 1:1.4 core geometry between the reactors), and a pressure vessel that was 12 inches (0.305 m) in diameter by 43 inches (1.092 m) in length. It also increased the thermal output of the reactor to 2200 kWt. The number of control drums was increased from six to eight longer drums to fit the longer core, and some rearrangement of lithium pumps and other equipment for the power conversion system occurred within the larger 4 pi shield structure. The entire reactor assembly that would undergo replacement was five to six feet (1.5 to 1.8 m) high, and six to seven feet (1.8 to 2.1 m) in diameter.
Sadly, even the ambitious NASA space station program never grew to the point of needing the smaller 35 kWe version of the reactor, much less the 300 kWe variant. Plans had been made for a fleet of nuclear electric tugs that would ferry equipment back and forth to a permanent Moon base, but that program was cancelled at the same time as the Moon base itself died.
Mass Tradeoffs: Why Nuclear Instead of Solar?
By the middle of the 1960s, photovoltaic solar panels had become efficient and reliable enough for regular use in spacecraft. Because of this, for the first time there was a genuine question of whether to go with solar panels or a nuclear reactor, whereas in the 1950s and early 60s nuclear was pretty much the only option. However, solar panels have a downside: drag. Even in orbit there is a very thin atmosphere, so in lower orbits a satellite has to regularly raise itself or it will burn up in the atmosphere. Another downside comes from MMOD: micrometeoroids and orbital debris. Since solar panels are large, flat, and all pointing at the sun all the time, there's a greater chance that something will strike one of those panels, damaging or possibly even destroying it. Managing these two issues is the primary orbital-behavior concern of using solar panels as a power supply, and determines the majority of the resupply mass needed for a solar powered space station.
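The drag problem described above comes down to the standard aerodynamic drag relation, where the force scales linearly with frontal area – which is why sail area dominates reboost requirements. A minimal sketch, with all density and area values being illustrative assumptions rather than numbers from the studies discussed here:

```python
# Sketch of the drag relation driving the reboost problem:
# F = 1/2 * rho * v^2 * Cd * A. All numbers are illustrative assumptions,
# not figures from the 1965 studies.
def drag_force(rho, v, cd, area):
    """Aerodynamic drag force in newtons."""
    return 0.5 * rho * v**2 * cd * area

rho = 5e-12   # kg/m^3, rough thermospheric density near 400 km altitude
v = 7700.0    # m/s, low Earth orbital velocity
cd = 2.2      # typical free-molecular drag coefficient

solar_array = drag_force(rho, v, cd, area=400.0)  # big flat sun-pointing array
radiator = drag_force(rho, v, cd, area=40.0)      # slim edge-on radiator

print(f"solar array drag: {solar_array:.2e} N")
print(f"radiator drag:    {radiator:.2e} N")  # 10x less drag for 10x less area
```

Tiny as these forces look, they act continuously, and the propellant to cancel them year after year is what drives the maintenance mass totals in the comparison below.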
On the nuclear side, by 1965 there were two power plant options on the table: the SNAP-8 (pre-ZrHR redesign) and the SNAP-50, and solar photovoltaics had developed to the point that they could be deployed in space. Because of this, Pratt and Whitney compared the three systems to determine the mass efficiency of each, not only at initial deployment but also in yearly resupply and tankage requirements. Each system was compared at a 35 kWe power level for the space station in order to allow for a level playing field.
One thing that stands out about the solar option (based on a pair of Lockheed and General Electric studies) is that it's marginally the lightest of the systems at launch, but within a year the total system maintenance mass required far outstrips the mass of the nuclear power plants, especially the SNAP-50. This is because the solar panels have a large sail area, which catches the very thin atmosphere at the station's orbital altitude and drags the station down toward the thicker atmosphere, so thrust is needed to re-boost the space station. This is something that has to be done on a regular basis for the ISS. The mass of the fuel, tankage, and structure needed for this reboost is extensive. Even back in 1965 there were discussions of using electric propulsion to reboost the space station, in order to significantly reduce the mass needed for this procedure. That discussion still comes up casually for the ISS, and Ad Astra still hopes to use VASIMR for this purpose – a concept that's been floated for the last ten or so years.
Overall, the mass difference between the SNAP-50 and the optimistic Lockheed proposal of the time was significant: the original deployment was only about 70 lbs (31.75 kg) different, but the yearly maintenance mass requirements would be 5,280 lbs (2395 kg) different – quite a large amount of mass.
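Using only the two figures quoted above, the break-even point can be sketched in a few lines; the assumption that maintenance mass accumulates linearly over the year is mine, not the study's.

```python
# Mass tradeoff sketch from the two figures quoted above: solar launches
# ~70 lb lighter, but needs ~5,280 lb more maintenance mass per year than
# the SNAP-50. Linear accumulation is an illustrative assumption.
SOLAR_LAUNCH_ADVANTAGE_LB = 70.0
SOLAR_YEARLY_PENALTY_LB = 5280.0

def solar_extra_mass_lb(years):
    """Cumulative extra mass of the solar option (negative = solar lighter)."""
    return SOLAR_YEARLY_PENALTY_LB * years - SOLAR_LAUNCH_ADVANTAGE_LB

breakeven_days = SOLAR_LAUNCH_ADVANTAGE_LB / SOLAR_YEARLY_PENALTY_LB * 365
print(f"solar's launch advantage is gone after ~{breakeven_days:.0f} days")
print(f"extra solar-side mass after one year: {solar_extra_mass_lb(1):.0f} lb")
```

On these numbers the solar option's launch-mass advantage evaporates in under a week of operations.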
Because the SNAP-50 and SNAP-8 don't have these large sail areas, and the radiators can be made aerodynamic enough to greatly reduce the drag on the station, the reboost requirements are significantly lower than for the solar panels. The SNAP-50 weighs significantly less than the SNAP-8, and has significantly less surface area, because the reactor operates at a far higher temperature and therefore needs a smaller radiator. Another difference between the reactors is volume: the SNAP-50 is physically smaller than the SNAP-8 because of that same higher temperature, and because UN fuel is far more dense than its U-ZrH counterpart.
These reactors were designed to be replaced once a year, with the initial launch being significantly more massive than the follow-up launches, benefiting from the sectioned architecture, with its separation plane at the narrow end of the shadow shield, described above. Only the smaller section of shield left with the reactor when it was separated. The larger, heavier section would remain with the space station, along with the radiators, and serve as the mounting point for the new reactor core and power conversion system, which would be sent to the station on an automated resupply launch.
Solar panels, on the other hand, require both reboost to compensate for drag and equipment to repair or replace the panels, batteries, and associated components as they wear out. This in turn requires a fairly robust repair capability for ongoing maintenance – a requirement for any large, long-term space station in any case, but the more area there is to be hit by space debris, the more time and mass must be spent on repairs rather than on science.
Of course, today solar panels are far lighter, and electric thrusters are far more mature than they were at that time. This, in addition to widespread radiophobia, has made solar the power source of choice for most satellites, and all space stations, to date. However, the savings available in overall lifetime mass, and a sail area that is both smaller and more physically robust, remain key advantages for a nuclear powered space station in the future.
The End of an Era: Changing Priorities, Changing Funding
The SNAP-50, even the small 35 kWe version, offered more power, more efficiency, and less mass and volume than the most advanced of SNAP-8's children: the A-ZrHR. This was the end of the zirconium hydride fueled reactor era for the Atomic Energy Commission, and while this type of fuel continues to be used in TRIGA research and training reactors all over the world (a common type of small reactor for colleges and research organizations), its time as the preferred fuel for astronuclear designs was over.
In fact, by the end of the study period, the SNAP-50 was extended to 1.5 MWe in some designs, the most powerful design to be proposed until the 1980s, and one of the most powerful ever proposed… but this ended up going nowhere, as did much of the mission planning surrounding the SNAP program.
At the same time as these higher-powered reactor designs were coming to maturity, funding for both civilian and military space programs virtually disappeared. National priorities, and perceptions of nuclear power, were shifting. Technological advances eliminated many future military crewed missions in favor of uncrewed ones with longer lifetimes, less mass, less cost – and far smaller power requirements. NASA funding began falling under the axe even as we were landing on the Moon for the first time, and from then on funding became very scarce on the ground.
The transition from the Atomic Energy Commission to the Department of Energy wasn't without its hiccups, or reductions in funding, either, and where once every single AEC lab seemed to have its own family of reactor designs, the field narrowed greatly. As we'll see, even at the start of the Strategic Defense Initiative ("Star Wars") the favored reactor design was not too different from the SNAP-50.
Finally, changes in launch systems had their impact as well. NASA was heavily investing in the Space Transportation System (the Space Shuttle), which was assumed to be the way that most or all payloads would be launched, so a nuclear reactor had to be able to be flown up – and in some cases returned – by the Shuttle. This placed a whole different set of constraints on the reactor, requiring a large rewrite of the basic design. The follow-on design, the SP-100, used the same UN fuel and Li coolant as the SNAP-50, but was designed to be launched and retrieved by the Shuttle. The fact that the STS never lived up to its promise in launch frequency or cost (and that other launchers remained continuously available) means that this was ultimately a diversion, but at the time it was a serious consideration.
All of this spelled the death of the SNAP-50 program, as well as the end of dedicated research into a single reactor design until 1983, with the SP-100 nuclear reactor system, a reactor we’ll look at another time.
While I would love to go into many of the reactors that were developed up to this time, including heat pipe cooled reactors (SABRE at Los Alamos), thermionic power conversion systems (the 5 kWe Thermionic Reactor), and other ideas, there simply isn't time to cover them here. As we look at different reactor components they'll come up, and we'll mention them there. Some labs were able to continue limited research with the help of NASA and sometimes the Department of Defense or the Defense Nuclear Safety Agency, but the days of big astronuclear programs were fading into the past. Both space and nuclear power would refocus, and then fall in the rankings of budgetary priorities over the years. We will be looking at these reactors more as time goes on, in our new "Forgotten Reactors" column (more on that below).
The Blog is Changing!
With the new year, I’ve been thinking a lot about the format of both the website and the blog, and where I hope to go in the next year. I’ve had several organizational projects on the back burner, and some of them are going to be started here soon. The biggest part is going to be the relationship between the blog and the website, and what I write more about where.
Expect another blog post shortly (it’s already written, just not edited yet) about our plans for the next year!
I’ve got big plans for Beyond NERVA this year, and there are a LOT of things that are slowly getting started in the background which will greatly improve the quality of the blog and the website, and this is just the start!
Hello, and welcome back to Beyond NERVA! As some of you may have noticed, the website has moved! Yes, we’re now at beyondnerva.com! I’m working on updating the webpage, and am getting the pieces together for a major website redesign (still a ways off, but lots of the pieces are starting to fall into place) to make the site easier to navigate and more user friendly. Make sure to update your bookmarks with this new address! With that brief administrative announcement out of the way, let’s get back to our look at in-space fission power plants.
Today, we’re going to continue our look at the SNAP program, America’s first major attempt to provide electric power in space using nuclear energy, and finishing up our look at the zirconium hydride fueled reactors that defined the early SNAP reactors by looking at the SNAP-8, and its two children – the 5 kW Thermoelectric Reactor and the Advanced Zirconium Hydride Reactor.
SNAP 8 was the first reactor designed with these space stations in mind. While SNAP-10A was a low-power system (500 watts when flown, later upgraded to 1 kW), and SNAP-2 was significantly larger (3 kW), there was a potential need for far more power. Crewed space stations take a lot of power (the ISS uses close to 100 kWe, as an example), and neither the SNAP-10 nor the SNAP-2 was capable of powering the space stations that NASA was in the beginning stages of planning.
Initially designed to be far higher powered, with 30-60 kilowatts of electrical power, this was an electric supply that could power a truly impressive outpost for humanity in orbit. However, the Atomic Energy Commission and NASA (which was just coming into existence at the time this program was started) didn’t want to throw the baby out with the bath water, as it were. While the reactor was far higher powered than the SNAP 2 reactor that we looked at last time, many of the power system’s components are shared: both use the same fuel (with minor exceptions), both use similar control drum structures for reactor control, both use mercury Rankine cycle power conversion systems, and perhaps most attractively both were able to evolve with lessons learned from the other part of the program.
While SNAP 8 never flew, it was developed to a very high degree of technical understanding, so that if the need for the reactor arose, it would be available. One design modification late in the SNAP 8 program (when the reactor wasn’t even called SNAP 8 anymore, but the Advanced Zirconium Hydride Reactor) had a very rare attribute in astronuclear designs: it was shielded on all sides for use on a space station, providing more than twice the electrical power available to the International Space Station without any of the headaches normally associated with approach and docking with a nuclear powered facility.
Let’s start back in 1959, though, with the SNAP 8, the first nuclear electric propulsion reactor system.
SNAP 8: NASA Gets Involved Directly
The SNAP 2 and SNAP 10A reactors were both collaborations between the Atomic Energy Commission (AEC), who were responsible for the research, development, and funding of the reactor core and primary coolant portions of the system, and the US Air Force, who developed the secondary coolant system, the power conversion system, the heat rejection system, the power conditioning unit, and the rest of the components. Each organization had a contractor that they used: the AEC used Atomics International (AI), one of the movers and shakers of the advanced reactor industry, while the US Air Force went to Thompson Ramo Wooldridge (better known by their acronym, TRW) for the SNAP-2 mercury (Hg) Rankine turbine and Westinghouse Electric Corporation for the SNAP-10’s thermoelectric conversion unit.
1959 brought NASA directly into the program on the reactor side of things, when they requested a fission reactor in the 30-60 kWe range for up to one year; one year later the SNAP-8 Reactor Development Program was born. It would use a similar Hg-based Rankine cycle to the SNAP-2 reactor, which was already under development, but the increased power requirements and unique environment of the power conversion system necessitated significant redesign work, which was carried out by Aerojet General as the prime contractor. This led to a 600 kWt reactor core with a 700 C outlet temperature. As with the SNAP-2 and SNAP-10 programs, the SNAP-8's reactor core was funded by the AEC, but in this case the power conversion system was the funding responsibility of NASA.
The fuel itself was similar to that in the SNAP-2 and -10A reactors, but the fuel elements were far longer and thinner than those of the -2 and -10A. Because the fuel element geometry was different, and the power level of the reactor was so much higher than the SNAP-2 reactor, the SNAP-8 program required its own experimental and developmental reactor program to run in parallel to the initial SNAP Experimental and Development reactors, although the materials testing undertaken by the SNAP-2 reactor program, and especially the SCA4 tests, were very helpful in refining the final design of the SNAP-8 reactor.
The power conversion system for this reactor was split in two: identical Hg turbines would be used, with either one or both running at any given time depending on the power needs of the mission. This allows for more flexibility in operation, and also simplifies the design challenges involved in the turbines themselves: it’s easier to design a turbine with a smaller power output range than a larger one. If the reactor was at full power, and both turbines were used, the design was supposed to produce up to 60 kW of electrical power, while the minimum power output of a single turbine would be in the 30 kWe range. Another advantage was that if one was damaged, the reactor would continue to be able to produce power.
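The one-or-both-turbines logic above can be sketched as a simple dispatch rule. This is a hedged illustration of the operating concept, not the actual SNAP-8 control scheme; the threshold rule and function names are my own.

```python
import math

# Sketch of the dual-turbine operating concept described above; the simple
# threshold rule is an illustrative assumption, not the SNAP-8 control scheme.
TURBINE_RATING_KWE = 30.0  # nominal output of each Hg Rankine turbine

def turbines_needed(demand_kwe, turbines_available=2):
    """How many turbines must run to cover a given electrical demand."""
    needed = math.ceil(demand_kwe / TURBINE_RATING_KWE)
    return min(needed, turbines_available)

print(turbines_needed(25))                        # 1: one turbine covers up to 30 kWe
print(turbines_needed(55))                        # 2: both needed near the 60 kWe design point
print(turbines_needed(55, turbines_available=1))  # 1: degraded mode if a turbine is lost
```

The last case is the redundancy argument in the paragraph above: losing one turbine halves the output ceiling but does not take the power plant offline.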
Due to the much higher power levels, an extensive core redesign was called for, meaning that different test reactors would need to be used to verify this design. While the fuel elements were very similar, and the overall design philosophy was operating in parallel to the SNAP-2/10A program, there was only so much that the tests done for the USAF system would be able to help the new program. This led to the SNAP-8 development program, which began in 1960, and had its first reactor, the SNAP-8 Experimental Reactor, come online in 1963.
SNAP-8 Experimental Reactor: The First of the Line
The first reactor in this series, the SNAP-8 Experimental Reactor (S8ER), went critical in May 1963 and operated until 1965. It operated for 2522 hours at above 600 kWt, and over 8000 hours at lower power levels. The fuel elements for the reactor were 14 inches in length and 0.532 inches in diameter, with uranium-zirconium hydride (U-ZrH, the same basic fuel type as the SNAP-2/10A system that we looked at last time) enriched to 93.15% 235U, with 6 x 10^22 atoms of hydrogen per cubic centimeter.
The biggest chemical change in this reactor's fuel elements compared to the SNAP-2/10A system was in the burnable poison: instead of using gadolinium (which would absorb neutrons, then transmute into a more neutron-transparent element as the reactor underwent fission over time), the S8ER used samarium. The reasons for the change are rather esoteric, relating to the neutron spectrum of the reactor, the particular fission products and their ratios, thermal and chemical characteristics of the fuel elements, and other factors. However, the change was so advantageous that the new burnable poison would eventually be used in the SNAP-2/10A system as well.
The fuel elements were still loaded in a triangular array, but formed more of a cylinder than the hexagon of the -2/10A, with small internal reflectors filling out the smooth cylinder of the pressure vessel. The base and head plates that held the fuel elements were very similar to the smaller design, but obviously had more holes to hold the increased number of fuel elements. The NaK-78 coolant (identical to the SNAP-2/10A system) entered the bottom of the reactor into a space in the pressure vessel (a plenum), flowed through the base plate and up the reactor, then exited the top of the pressure vessel through an upper plenum. A small startup neutron source (sort of like a spark plug for a reactor) was mounted to the top of the pressure vessel, by the upper coolant plenum. The pressure vessel itself was made out of 316 stainless steel.
Instead of four control drums, the S8ER used six void-backed control drums, directly derived from the SNAP-2/10A control system. Two of the drums were used for gross reactivity control – either fully rotated in or out, depending on whether the reactor was under power or not. Two were used for finer control, but under nominal operation would remain pretty much fixed in position over long periods; as the reactor approached end of life, these drums would rotate in to maintain the reactivity of the system. The final two were used for fine control, to adjust reactivity for both reactor stability and power demand. The drums used the same type of bearings as the -2/10A system.
The S8ER first underwent criticality benchmark tests (pre-dry critical testing) from September to December 1962 to establish the reactor's precise control parameters. Before the reactor was filled with NaK coolant, water immersion experiments for failure-to-orbit safety testing (an additional set of tests on top of the SCA-4 testing, which also supported SNAP-8) were carried out between January and March of 1963. After a couple months of modifications and refurbishment, dry criticality tests were once again conducted on May 19, 1963, followed the next month by the reactor reaching wet critical power levels on June 23. Months of low-power testing followed, to establish the precise reactor control element characteristics, thermal transfer characteristics, and a host of other technical details before the reactor was brought up to full design power.
The reactor was shut down from early August to late October, because some of the water coolant channels used for the containment vessel failed, necessitating the entire structure to be dug up, repaired, and reinstalled, with significant reworking of the facility being required to complete this intensive repair process. Further modifications and upgrades to the facility continued into November, but by November 22, the reactor underwent its first “significant” power level testing. Sadly, this revealed that there were problems with the control drum actuators, requiring the reactor to be shut down again.
After more modifications and repairs, lower power testing resumed to verify the repairs, study reactor transient behavior, and other considerations. The SNAP-8 Experimental Reactor finally achieved its first full-power, at-temperature testing on December 11, 1963. Shortly after, the reactor had to be shut down again to repair a NaK leak in one of the primary coolant loop pumps, but it was up and operating again shortly afterward. Lower power tests were conducted to evaluate the samarium burnable poison in the fuel elements, measure xenon buildup, and measure hydrogen migration in the core until April 28, interrupted briefly by another NaK pump failure and a number of instrumentation malfunctions in the automatic scram system (which was designed to automatically shut down the reactor in the case of an accident or certain types of reactor behavior). However, despite these problems, April 28 marked 60 days of continuous operation at 450 kWt and 1300 F (design temperature, but less-than-nominal power).
After a shutdown to repair the control drive mechanisms (again), the reactor went into near-continuous operation, either at 450 or 600 kWt of power output and 1300 F outlet temperature until April 15, 1965, when the reactor was shut down for the last time. By September 2 of 1964, the S8ER had operated at design power and temperature levels for 1000 continuous hours, and went on in that same test to exceed the maximum continuous operation time of any SNAP reactor to date on November 5 (1152 hours). January 18 of 1965 it achieved 10,000 hours of total operations, and in February of that year reached 100 days of continuous operation at design power and temperature conditions. Just 8 days later, on February 12, it exceeded the longest continuous operation of any reactor to that point (147 days, beating the Yankee reactor). March 5 marked the one year anniversary of the core outlet temperature being continuously at over 1200 F. By April 15, when the reactor was shut down for the last time it achieved an impressive set of accomplishments:
5,016.5 hours of continuous operation immediately preceding the shutdown (most at 450 kWt, all at 1200 F or greater)
12,080 hours of total operations
A total of 5,154,332 kilowatt-hours of thermal energy produced
91.09% Time Operated Efficiency (percentage of time that the reactor was critical) from November 22, 1963 (the day of first significant power operations of the reactor), and 97.91% efficiency in the last year of operations.
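As a quick illustration of what a "Time Operated Efficiency" figure means – hours critical divided by elapsed calendar hours over the same interval – here is a minimal sketch; the hour totals in the example are hypothetical, not the actual S8ER operating log.

```python
# Sketch of the "Time Operated Efficiency" definition: percentage of elapsed
# calendar time that the reactor was critical. The hour totals below are
# illustrative assumptions, not the actual S8ER accounting.
def time_operated_efficiency(hours_critical, hours_elapsed):
    """Percentage of elapsed time the reactor was critical."""
    return 100.0 * hours_critical / hours_elapsed

# e.g. ~11,150 critical hours out of ~12,240 elapsed hours (hypothetical):
print(f"{time_operated_efficiency(11150, 12240):.2f}%")
```

A figure in the 90s, as reported for the S8ER, means the reactor spent only a few weeks of its roughly 17-month power-testing campaign shut down.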
Once the tests were concluded, the reactor was disassembled, inspected, and fuel elements were examined. These tests took place at the Atomics International Hot Laboratory (also at Santa Susana) starting on July 28, 1965. For about 6 weeks, this was all that the facility focused on; the core was disassembled and cleaned, and the fuel elements were each examined, with many of them being disassembled and run through a significant testing regime to determine everything from fuel burnup to fission product percentages to hydrogen migration. The fuel element tests were the most significant, because to put it mildly there were problems.
Of the 211 fuel elements in the core, only 44 were intact. Many of the fuel elements also underwent dimensional changes, either swelling (with a very small number actually shrinking) across the diameter or the length, becoming oblong, dishing, or otherwise changing geometry. The clad on most elements was damaged in one way or another, leading to a large amount of hydrogen migrating out of the fuel elements, mostly into the coolant and then out of the reactor. This meant that much of the neutron moderation needed for the reactor to operate properly migrated out of the core, reducing the overall available reactivity even as the amount of fission poisons, in the form of fission products, was increasing. For a flight system this is a major problem, and one that definitely needed to be addressed. However, this is exactly the sort of problem that an experimental reactor is meant to discover and assess, so in this way as well the reactor was a complete success, if not as smooth a development as the designers would likely have preferred.
It was also discovered that, while the cracks in the clad would suggest that hydrogen was escaping through the breaches in the hydrogen diffusion barrier, far less hydrogen was lost than expected based on the amount of damage the fuel elements underwent. In fact, the hydrogen migration in these tests was low enough that the core would most likely be able to meet its 10,000 hour operational lifetime requirement as-is. Without knowing what mechanism was limiting the hydrogen migration, though, it would be difficult if not impossible to verify this without extensive additional testing, and changes to the fuel element design could deliver a more satisfactory clad lifetime, reduced damage, and greater assurance that hydrogen migration would not become an issue.
The SNAP-8 Experimental Reactor was an important stepping stone in high-temperature ZrH nuclear fuel development, and in some ways greatly changed the direction of the whole SNAP-8 program. The large number of cladding failures, the hydrogen migration from the fuel elements, and the phase changes within the crystalline structure of the U-ZrH itself were a huge wake-up call to the reactor developers. With the SNAP-2/10A reactor these issues were minor at best, but that was a far lower-powered reactor with very different geometry. The large number of fuel elements, the flow of the coolant through the reactor, and numerous other factors made the S8ER far more complex to deal with on a practical level than most, if any, anticipated. Plating of elements associated with the Hastelloy onto the stainless steel fuel elements raised concern that the selected materials could cause blockages in coolant flow channels, further exacerbating the local hot spots in the fuel elements that caused many of the problems in the first place. The cladding material could (and would) be changed relatively easily to address problems with the metal's ductility (its ability to undergo significant plastic deformation before rupture – in other words, to endure fuel swelling without the clad splitting, cracking, fracturing, or otherwise being breached) under high temperatures and radiation fluxes over time. A number of changes were proposed to the reactor's design, which strongly encouraged – or required – changes in the SNAP-8 Development Reactor that was then being designed and fabricated. Those changes would alter what the SNAP-8 reactor would become, and what missions it would be proposed for, until the program was finally put to rest.
After the S8ER test, a mockup reactor, the SNAP-8 Development Mockup, was built based on the 1962 version of the design. This mockup never underwent nuclear testing, but was used for extensive non-nuclear testing of the design’s components. Basically, every component that could be tested under non-nuclear (but otherwise identical, including temperature, stress loading, vacuum, etc.) conditions was tested and refined with this mockup. The tweaks to the design that this mockup suggested are far more minute than we have time to cover here, but it was an absolutely critical step in preparing the SNAP-8 reactor’s systems for flight test.
SNAP-8 Development Reactor: Facing Challenges with the Design
The final reactor in the series, the SNAP-8 Development Reactor, was a shorter-lived reactor, partly because many of the questions about the core geometry had been answered by the S8ER, and partly because the remaining materials questions could be answered with the SCA4 reactor. This reactor underwent dry critical testing in June 1968, and power testing began at the beginning of the next year. From January 1969 to December 1969, when the reactor was shut down for the final time, the reactor operated at nominal (600 kWt) power for 668 hours, and at 1000 kWt for 429 hours.
The SNAP-8 Development Reactor (S8DR) was installed in the same facility as the S8ER, although it operated under different conditions than the S8ER. Instead of having a cover gas, the S8DR was tested in a vacuum, and a flight-type radiation shield was mounted below it to facilitate shielding design and materials choices. Fuel loading began on June 18, 1968, and criticality was achieved on June 22, with 169 out of the 211 fuel elements containing the U-ZrH fuel (the rest of the fuel elements were stainless steel “dummy” elements) installed in the core. Reactivity experiments for the control mechanisms were carried out before the remainder of the dummy fuel elements were replaced with actual fuel in order to better calibrate the system.
Finally, on June 28, all the fuel was loaded and the final calibration experiments were carried out. These tests then led to automatic startup testing of the reactor, beginning on December 13, 1968, as well as transient analysis, flow oscillation, and temperature reactivity coefficient testing on the reactor. From January 10 to 15, 1969, the reactor was started using the proposed automated startup process a total of five times, proving the design concept.
1969 saw the beginning of full-power testing, with the ramp up to full design power occurring on January 17. Beginning at 25% power, the reactor was stepped up to 50% after 8 hours, then after another 8 hours it was brought up to full power. The coolant flow rates in both the primary and secondary loops started at full flow, then were reduced to maintain design operating temperatures, even at the lower power settings. Immediately following these tests, on January 23, an additional set of tests was run to verify that the power conversion system would start up as well. The biggest challenge was verifying that the initial injection of mercury into the boiler would behave as expected, so a series of mercury injection tests were carried out successfully. While they weren’t precisely at design conditions due to test stand limitations, the tests were close enough to verify that the design would work as planned.
After these tests, endurance testing of the reactor began. From January 25 to February 24, the 500-hour test at design conditions (600 kWt and 1300 F) was carried out, although two scram incidents led to short interruptions. Starting on March 20, a planned 9000-hour endurance run at design conditions began, but it lasted only until April 10. This was followed by a ramp up to the alternate design power of 1 MWt. While this run was meant to operate at only 1100 F (to reduce thermal stress on the fuel elements, among other things), the airblast heat exchanger used for heat rejection couldn’t keep up with the power flow at that temperature, so the outlet temperature was increased to 1150 F (the greater the temperature difference between a radiator and its environment, the more heat it can reject, something we’ll discuss more in the heat rejection posts). After 18 days of 1 MWt testing, the power was once again reduced to 600 kWt for another 9000-hour test attempt, but on June 1 the reactor scrammed itself again due to a loss of coolant flow. At this point, there was a significant loss of reactivity in the core, which led the team to decide to proceed at a lower temperature to mitigate hydrogen migration in the fuel elements. Sadly, reducing the outlet temperature (to 1200 F) wasn’t enough to prevent this test from ending prematurely due to a severe loss in reactivity, and the reactor scrammed itself again.
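That parenthetical about radiator temperature can be made concrete with the Stefan-Boltzmann law: radiated power scales with the fourth power of absolute surface temperature, so even a 50 F bump in outlet temperature buys a meaningful amount of extra heat rejection. A minimal sketch (the emissivity and sink temperature here are illustrative assumptions, not SNAP-8 figures):

```python
# Sketch: why a hotter radiating surface rejects more heat (Stefan-Boltzmann).
# Emissivity and sink temperature are illustrative guesses, not SNAP-8 values.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def f_to_k(temp_f):
    """Convert Fahrenheit to Kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

def radiated_w_per_m2(t_surface_k, t_sink_k=200.0, emissivity=0.85):
    """Net radiated power per unit area to a cold sink."""
    return emissivity * SIGMA * (t_surface_k**4 - t_sink_k**4)

p_1100 = radiated_w_per_m2(f_to_k(1100))
p_1150 = radiated_w_per_m2(f_to_k(1150))
print(f"1100 F: {p_1100 / 1000:.1f} kW/m^2")
print(f"1150 F: {p_1150 / 1000:.1f} kW/m^2 (+{100 * (p_1150 / p_1100 - 1):.0f}%)")
```

Under these assumptions the 50 F increase yields roughly 13% more rejected heat per unit area, which is the kind of margin the test stand needed.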
The final power test on the S8DR began on November 20, 1969. For the first 11 days it operated at 300 kWt and 1200 F; power was then increased back to 600 kWt, with the outlet temperature reduced to 1140 F, for an additional 7 days. The outlet temperature was then brought back up to 1200 F for the final 7 days of the test, after which the reactor was shut down.
This shutdown was an interesting and long process, especially compared to simply removing all the reactivity by rotating the control drums fully out. First, the temperature was dropped to 1000 F while the reactor was still at 600 kWt, and then the reactor’s power was reduced to the point that both the outlet and inlet coolant temperatures were 800 F. This was held until December 21 to study the xenon transient behavior, and then the temperatures were further reduced to 400 F to study the decay power level of the reactor. On January 7, the temperature was once again increased to 750 F, and two days later the coolant was removed. The core temperature then dropped steadily before leveling off at 180-200 F.
Once again, the reactor was disassembled and examined at the Hot Laboratory, with special attention being paid to the fuel elements. These fuel elements held up much better than the S8ER’s fuel elements, with only 67 of the 211 fuel elements showing cracking. However, quite a few elements, while not cracked, showed significant dimensional changes and higher hydrogen loss rates. Another curiosity was that a thin (less than 0.1 mil thick) metal film, made up of iron, nickel, and chromium, developed fairly quickly on the exterior of the cladding (the exact composition changed based on location, and therefore on local temperature, within the core and along each fuel element).
The fuel elements that had intact cladding and little to no deformation showed very low hydrogen migration, an average of 2.4% (this is consistent with modeling showing that the permeation barrier was damaged early in its life, perhaps during the 1 MWt run). However, those with some damage lost between 6.8% and 13.2% of their hydrogen. This damage wasn’t limited to cracked cladding, though – the swelling of the fuel element was a better indication of the amount of hydrogen lost than whether the clad itself had split. This is likely due to phase changes in the fuel elements, in which the U-ZrH changes crystalline structure, usually due to high temperatures. This changes how well – and at what bond angle – the hydrogen is held within the fuel element’s crystalline structure, and can lead to more intense hot spots in the fuel element, making the problem worse. The loss-of-reactivity scrams during the May-July 1969 testing seem to be consistent with the worst failures in the fuel elements, called Type 3 in the reports: high hydrogen loss and a highly oval cross section in the swollen fuel elements (there were a total of 31 of these; 18 were intact, 13 were cracked). One interesting note about the clad composition is that where there was a higher copper content due to irregularities in metallography, there was far less swelling of the Hastelloy N clad, although the precise mechanism was not understood at the time (and my rather cursory perusal of current literature didn’t show any explanation either). However, testing at the time showed that these problems could be mitigated, to the point of insignificance even, by maintaining a lower core temperature to ensure that localized over-temperature failures (like the changes in crystalline structure) would not occur.
The best thing that can be said about the reactivity loss rate (partially due to hydrogen losses, and partially due to fission product buildup) is that it was able to be extrapolated given the data available, and that the failure would have occurred after the design’s required lifetime (had S8DR been operated at design temperature and power, the reactor would have lost all excess reactivity – and therefore the ability to maintain criticality – between October and November of 1970).
On this mixed-news note, the reactor’s future was somewhat in doubt. NASA was certainly still interested in a nuclear reactor of similar core power, but this particular configuration was neither the most useful for their needs, nor especially promising in many particulars of its design. While NASA’s reassessment of the program was not solely due to the S8DR’s testing history, it may well have been a contributing factor.
One way or the other, NASA was looking for something different out of the reactor system, and this led to many changes. Rather than an electric propulsion system, the focus shifted to a crewed space station, which had different design requirements, especially in shielding. In fact, the reactor was split into four designs, none of which kept the SNAP name (but all of which kept the fuel element and basic core geometry).
A New Life: the Children of SNAP-8
Even as the SNAP-8 Development Reactor was undergoing tests, the mission for the SNAP-8 system was being changed. This would have major consequences for the design of the reactor, its power conversion system, and what missions it would be used in. These changes would be so extensive that the SNAP-8 reactor name would be completely dropped, and the reactor would be split into four concepts.
The first concept was the Space Power Facility – Plum Brook (SPF) reactor, which would be used to test shielding and other components at NASA’s Plum Brook Station near Sandusky, OH, and could also be used for space missions if needed. The smallest of the designs (at 300 kWt), it was designed to avoid many of the problems associated with the S8ER and S8DR; however, funding was cut before the reactor could be built. In fact, it was cut so early that details on the design are very difficult to find.
The second reactor, the Reactor Core Test, was very similar to the SPF reactor, but it was the same power output as the nominal “full power” reactor, at 600 kWt. Both of these designs increased the number of control drums to eight, and were designed to be used with a traditional shadow shield. Neither of them were developed to any great extent, much less built.
A third design, the 5 kWe Thermoelectric Reactor, was a space power system meant to apply both the lessons of the SNAP-8 ER and DR and the SNAP-10A’s experience with thermoelectric power conversion to a medium-power reactor, sitting between the SNAP-10B and the Reference Zirconium Hydride Reactor in power output.
The final design, the Reference Zirconium Hydride Reactor (ZrHR), was extensively developed, even if geometry-specific testing was never conducted. This was the most direct replacement for the SNAP-8 reactor, and the last of the major U-ZrH fueled space reactors in the SNAP program. Rather than powering a nuclear electric spacecraft, however, this design was meant to power space stations.
The 5 kWe Thermoelectric Reactor: Simpler, Cleaner, and More Reliable
The 5 kWe Thermoelectric Reactor (5 kWe reactor) was a reasonably simple adaptation of the SNAP-8 design, intended to be used with a shadow shield. Unsurprisingly, many of the design changes mirrored work done on the SNAP-10B Interim design, which was underway at about the same time. Meant to supply 5 kWe of power for 5 years using lead telluride thermoelectric convertors (derived from the SNAP-10A convertors), this system was intended to provide power for everything from small crewed space stations to large communications satellites. In many ways this was a significant departure from the SNAP-8 reactor, but at the same time the proposed changes were evolutionary, based on the S8ER and S8DR experimental runs as well as advances in the SNAP-2/10 core, which was undergoing parallel post-SNAPSHOT design evolution (the SNAP-10A design had been frozen for the SNAPSHOT program at this point, so these changes were for the follow-on SNAP-10A Advanced or SNAP-10B reactors). The change from mercury Rankine to thermoelectric power conversion, though, paralleled a change in the SNAP-2/10A program, where greater conversion efficiency was seen as unnecessary due to the consistently lower power requirements of the missions.
The first thing (in the reactor itself, at least) that was different about the design was that the axial reflector was tapered, rather than cylindrical. This was done to keep the exterior profile of the reactor cleaner. While aerodynamic considerations aren’t a big deal for astronuclear power plants (although they do still play a minute part in low Earth orbit), everything that’s exposed to the unshielded reactor becomes a radiation source itself, due to radiation scattering and material activation under neutron bombardment. If you can make your reactor a continuation of the taper of your shadow shield, rather than having it stick out from that cone shape, you can make the shadow shield smaller for a given reactor. Since the shield is often many times heavier than the power system itself, especially for crewed applications, the single biggest place a designer can save mass is the shadow shield.
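The geometry argument here is just similar triangles: every sight line from the reactor’s widest point to the payload’s widest point must pass through the shield, so the required shield radius interpolates between the two. A toy sketch with made-up dimensions (none of these are actual SNAP-8 or 5 kWe reactor values):

```python
# Sketch: shadow-shield sizing by similar triangles.
# All dimensions are illustrative, not taken from any SNAP design document.
# The shield must cover every sight line from the reactor's widest point to
# the payload's widest point, so its radius interpolates between the two.

def shield_radius_m(r_reactor, r_payload, shield_pos, payload_pos):
    """Minimum shield radius (m) at shield_pos meters from the reactor,
    protecting a payload of radius r_payload at payload_pos meters."""
    frac = shield_pos / payload_pos
    return r_reactor + (r_payload - r_reactor) * frac

# A reactor that stays inside the shadow cone's taper keeps the shield small;
# one that bulges past the cone forces a wider (and much heavier) shield.
slim = shield_radius_m(r_reactor=0.3, r_payload=2.0, shield_pos=1.0, payload_pos=20.0)
wide = shield_radius_m(r_reactor=0.5, r_payload=2.0, shield_pos=1.0, payload_pos=20.0)
print(f"tapered reactor profile: shield radius ~{slim:.3f} m")
print(f"bulging reactor profile: shield radius ~{wide:.3f} m")
```

Since shield mass grows roughly with the square of its radius, even the modest difference in this toy example compounds quickly, which is why the tapered reflector was worth the neutronics headaches.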
This tapered profile meant two things: first, there would be a gradient in the amount of neutron moderation between the top and the bottom of the reactor, and second, the control system would have to be reworked. It’s unclear exactly how far the neutronics analysis for the new reflector configuration had proceeded, sadly, but the control systems were adaptations of the design changes that were proposed for the SNAP-10B reactor: instead of having the wide, partial cylinder control drums of the original design, large sections (235 degrees in total) of the reflector would be slid up or down around the core containment vessel to control the amount of reactivity available. This is somewhat similar to the SNAP-10B and BES-5 concepts in its execution, but the mechanism is quite different from a neutronics perspective: rather than capturing the unwanted neutrons using a neutron poison like boron or europium, they’re essentially vented into space.
There were a few other big changes from the SNAP-8 reference design when it came to the core itself. The first was in the fuel: instead of having a single long fuel rod in the clad, the U-ZrH fuel was split into five different “slugs,” which were held together by the clad. This created a far more complex thermal distribution situation in the fuel, but also allowed for better thermal stress management within the hydride itself. The number of fuel elements was reduced to 85, and they came in three configurations: one set of 27 had radial fins that spiraled around the fuel element in a right-hand direction, another set of 27 had fins in the left-hand direction, and the final 31 were unfinned. This was done to better manage the flow of the NaK coolant through the core, and to avoid some of the hydrodynamic problems that were experienced on the S8DR.
The U-ZrH Reactor: Power for America’s Latest and Greatest Space Stations
The Reference ZrH Reactor was begun in 1968, while the S8DR was still under construction. Because of the increased focus on a crewed space station configuration, and the changed shielding requirements, some redesign of the reactor core was needed. The axial shield would change the reactivity of the core, and the control drums would no longer be able to effectively expose portions of the core to the vacuum of space to dump excess reactivity. Because of this, the number of fuel elements in the core was increased from 211 to 295. Another change was that, rather than the even spacing of fuel elements used in the S8DR, the fuel elements were spaced so that the amount of coolant around each element was proportional to the power that element produced. This meant that the fuel elements in the interior of the core were more widely spaced than those around the periphery. This made it far less likely that local hot spots would develop and lead to fuel element failures, but it also meant that the flow of coolant through the reactor core would need to be studied far more thoroughly than it had been for the SNAP-8 design. These thermohydrodynamic studies would be a major focus of the ZrHR program.
Another change was in the control drum configuration, as well as the need to provide coolant to the drums. This was because the drums were now not only fully enclosed solid cylinders, but were surrounded by a layer of molten lead gamma shielding. Each drum would be a solid cylinder in overall cross section; the main body was beryllium, but a crescent of europium alloy was used as a neutron poison (this is one of the more popular alternatives to boron for control mechanisms that operate in a high temperature environment) to absorb neutrons when this portion of the control drum was turned toward the core. These drums would be placed in dry wells, with NaK coolant flowing around them from the spacecraft (bottom) end before entering the upper reactor core plenum to flow through the core itself. The bearings would be identical to those used on the SNAP-8 Development Reactor, and minimal modifications would be needed for the drum motion control and position sensing apparatus. Fixed cylindrical beryllium reflectors, one small one along the interior radius of the control drums and a larger one along the outside of the drums, filled the gaps left by the control drums in this annular reflector structure. These, too, would be kept cool by the NaK coolant flowing around them.
Surrounding this would be an axial gamma shield, with the preferred material being molten lead encased in stainless steel – but tungsten was also considered as an alternative. Why the lead was kept molten is still a mystery to me, but my best guess is that this was due to the thermal conditions of the axial shield, which would have forced the lead to remain above its melting point. This shield would have made it possible to maneuver near the space station without having to remain in the shadow of the directional shield – although obviously dose rates would still be higher than being aboard the station itself.
Another interesting thing about the shielding is that the shadow shield was divided in two, in order to balance thermal transfer and radiation protection for the power conversion system, and also to maximize the effectiveness of the shadow shields. Most designs used a shield that is basically a frustum surrounding the reactor core, with the wide end pointing at the spacecraft. The primary coolant loops wrapped around this structure before entering the thermoelectric conversion units. After this, there’s a small “galley” where the power conversion system is mounted, followed by a slightly larger shadow shield, with the heat rejection system feed loops running across the outside as well. Finally, the radiator – usually cylindrical or conical – completed the main body of the power system. The base of the radiator would meet up with the mounting hardware for attachment to the spacecraft, although the majority of the structural load was carried by an internal spar running from the core all the way to the spacecraft.
While the option for using a pure shadow shield concept was always kept on the table, the complications in docking with a nuclear powered space station which has an unshielded nuclear reactor at one end of the structure were significant. Because of this, the ZrHR was designed with full shielding around the entire core, with supplementary shadow shields between the reactor itself and the power conversion system, and also a second shadow shield after the power conversion system. These shadow shields could be increased to so-called 4-pi shields for more complete shielding area, assuming the mission mass budget allowed, but as a general rule the shielding used was a combination of the liquid lead gamma shield and the combined shadow shield configuration. These shields would change to a fairly large extent depending on the mission that the ZrHR would be used on.
Another thing that was highly variable was the radiator configuration. Some designs had a radiator that was fixed in relation to the reactor, even if it was extended on a boom (as was the case for the Saturn V Orbital Workshop, later known as Skylab). Others would telescope out, as was the case for the later Modular Space Station (which much later became the International Space Station). The last option was for the radiators to be hinged, with flexible joints that the NaK coolant would flow through (this was the configuration for the lunar surface mission), and those joints took a lot of careful study, design, and material testing to verify that they would be reliable, seal properly, and not force too many engineering compromises. We’ll look at the challenges of designing a radiator in the future, when we look at heat rejection systems (at this point, possibly next summer), but suffice to say that designing and executing a hinged radiator is a significant challenge for engineers, especially with a material as hot, and as reactive, as liquid NaK.
The ZrHR was continually being updated, since there was no reason to freeze the majority of the design components (although the fuel element spacing and fin configuration in the core may have indeed been frozen to allow for more detailed hydrodynamic predictability), until the program’s cancellation in 1973. Because of this, many design details were still in flux, and the final reactor configuration wasn’t ever set in stone. Additional modifications for surface use for a crewed lunar base would have required tweaking, as well, so there is a lot of variety in the final configurations.
The Stations: Orbital Missions for SNAP-8 Reactors
At the time of the redesign, three space stations were being proposed for the near future. The first, the Manned Orbiting Research Laboratory (later changed to the Manned Orbiting Laboratory, or MOL), was a US Air Force project as part of the Blue Gemini program. Primarily designed as a surveillance platform, it was made redundant by advances in photoreconnaissance satellites after just a single flight of an uncrewed, upgraded Gemini capsule.
The second was part of the Apollo Applications Program. Originally known as the Saturn V Orbital Workshop (OWS), this later evolved into Skylab. Three crews visited this space station after it was launched on the final Saturn V, and despite huge amounts of work needed to repair damage caused during a particularly difficult launch, the scientific return in everything from anatomy and physiology to meteorology to heliophysics (the study of the Sun and other stars) fundamentally changed our understanding of the solar system around us, and the challenges associated with continuing our expansion into space.
The final space station that was then under development was the Modular Space Station, which would in the late 1980s and early 1990s evolve into Space Station Freedom, and at the start of its construction in 1998 (exactly 20 years ago as of the day I’m writing this, actually) was known as the International Space Station. While many of the concepts from the MSS were carried over through its later iterations, this design was also quite different from the ISS that we know today.
Because of this change in mission, quite a few of the subsystems for the power plant were changed extensively, starting just outside the reactor core and extending through to shielding, power conversion systems, and heat rejection systems. The power conversion system was changed to four parallel thermoelectric convertors, a more advanced setup than the SNAP-10 series of reactors used. These allowed for partial outages of the PCS without complete power loss. The heat rejection system was one of the most mission-dependent structures, so would vary in size and configuration quite a bit from mission to mission. It, too, would use NaK-78 as the working fluid, and in general would be 1200 (on the OWS) to 1400 (reference mission) sq. ft in surface area. We’ll look more at these concepts in later posts on power conversion and heat rejection systems, but these changes took up a great deal of the work that was done on the ZrHR program.
One of the biggest reasons for this unusual shielding configuration was to allow a compromise between shielding mass and crew radiation dose. In this configuration, there would be three zones of radiation exposure: the rendezvous zone, shielded only by the 4 pi shield during rendezvous and docking (a relatively short time period); the scatter shield zone, with more significant shielding for the spacecraft but a dose still slightly higher than the fully shielded areas (acceptable because the docked spacecraft would be empty the vast majority of the time); and the primary shielded zone, the crewed portion of the space station itself, which would be the most heavily shielded. With the 4 pi shield, the entire system would mass 24,450 pounds, of which 16,500 lbs was radiation shielding, leading to a crew dose of between 20 and 30 rem a year from the reactor.
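A quick sanity check on those numbers shows just how thoroughly shielding dominated the mass budget, and what the quoted annual dose works out to per hour of continuous occupancy:

```python
# Arithmetic check on the figures quoted above (24,450 lb system,
# 16,500 lb of shielding, 20-30 rem/year crew dose from the reactor).
total_lb = 24_450
shield_lb = 16_500
print(f"shield fraction of system mass: {shield_lb / total_lb:.0%}")  # roughly two-thirds

HOURS_PER_YEAR = 8_760
for rem_per_year in (20, 30):
    mrem_per_hr = rem_per_year * 1_000 / HOURS_PER_YEAR
    print(f"{rem_per_year} rem/yr is about {mrem_per_hr:.1f} mrem/hr of continuous exposure")
```

So roughly two out of every three pounds launched would have been shielding, and the quoted annual dose corresponds to only a few mrem per hour averaged over a year aboard the station.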
The mission planning for the OWS was flexible in its launch configuration: the reactor could have been launched integral to the OWS on a Saturn V (although, considering the troubles that the Skylab launch actually had, I’m curious how well the system would have performed), or it could have been launched on a separate launcher with an upper stage to deliver it to the OWS. The two options proposed were either a Saturn 1B with a modified Apollo Service Module as a trans-stage, or a Titan IIIF with the Titan Transtage for on-orbit delivery (the Titan IIIC was considered unworkable due to mass restrictions).
After its 3-5 years of operational life, the reactor could be disposed of in two ways: it could be deorbited into a deep ocean area (as with the SNAP-10A, although as we saw with the BES-5’s operational history this ended up not being considered a good option), or it could be boosted into a graveyard orbit. One consideration very different from the SNAP-10A is that the reactor would likely reach the surface intact due to the 4 pi shield, rather than burning up as the SNAP-10A would have. A terrestrial impact could therefore expose civilian populations to fission products, and would also leave highly enriched (although not quite bomb grade) uranium lying somewhere relatively easy to pick up. This made the choice of deorbit location much less forgiving, so an uncontrolled re-entry was not considered. The ideal was to leave the reactor in a parking orbit of at least 400 nautical miles in altitude for a few hundred years to allow the fission products to decay away completely before de-orbiting it over the ocean.
Nuclear Power for the Moon
The final configuration that was examined for the Advanced ZrH Reactor was for the lunar base that was planned as a follow-on to the Apollo Program. While this never came to fruition, it was still studied carefully. Nuclear power on the Moon was nothing new: the SNAP-27 radioisotope thermoelectric generator had been used on every Apollo surface mission from Apollo 12 onward as part of the Apollo Lunar Surface Experiment Package (ALSEP). However, these RTGs would not provide nearly enough power for a permanently crewed lunar base. As an additional complication, all of the available power sources would be severely taxed by the roughly 14-day-long, incredibly cold lunar night that the base would have to contend with. Only nuclear fission offered both the power and the heat needed for a permanently staffed lunar base, and the reactor that was considered the best option was the Advanced ZrH Reactor.
The configuration of this form of the reactor was very different. There are three options for a surface power plant: the reactor is offloaded from the lander and buried in the lunar regolith for shielding (which is how the Kilopower reactor is planned for surface operations); an integral lander and power plant which is assembled in Earth (or lunar) orbit before landing, with a 4 pi shield configuration; or an integrated lander and reactor with a deployable radiator which is activated once the reactor is on the surface of the Moon, again with a 4 pi shield configuration. There are, of course, in-between options between the last two configurations, where part of the radiator is fixed and part deploys. The designers of the ZrHR went with the second option as their best design choice, due to the ability to check out the reactor before deployment to the lunar surface while also minimizing the amount of effort needed by the astronauts to prepare the reactor for power operations after landing. This makes sense because, while on-orbit assembly and checkout is a complex and difficult process, it’s still cheaper in terms of manpower to do this work in Earth orbit rather than on a lunar EVA, given the value of every minute on the lunar surface. If additional heat rejection was required, a deployable radiator could be used, but this would require flexible joints for the NaK coolant, which would pose a significant materials and design challenge. A heat shield was used when the reactor wasn’t in operation to prevent excessive heat loss from the reactor. This eased startup transient issues, as well as ensuring that the NaK coolant remained liquid even during reactor shutdown (frozen working fluids are never good for a mechanical system, after all).
The power conversion system was exactly the same configuration as would be used in the OWS configuration that we discussed earlier, with the upgraded, larger tubes rather than the smaller, more numerous ones (we’ll discuss the tradeoffs here in the power conversion system blog posts).
This power plant would end up providing a total of 35.5 kWe of conditioned (i.e. usable, reliable) electricity out of the 962 kWt reactor core, with 22.9 kWe being delivered to the habitat itself, for at least 5 years. The overall power supply system, including radiator, shield, power conditioning unit, and the rest of the ancillary bits and pieces that make a nuclear reactor core into a fission power plant, ended up massing a total of 23,100 lbs, comfortably under the 29,475 lb weight limit of the lander design that was selected (unfortunately, finding information on that design is proving difficult). A total dose rate of 7.55 mrem/hr at a half mile for an unshielded astronaut was considered sufficient for crew radiation safety (this is a small dose compared to the lunar radiation environment, and the astronauts would spend much of their time in the much better shielded habitat).
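Those figures imply a net conversion efficiency consistent with thermoelectric conversion, and show just how much waste heat the radiator had to handle; a quick check:

```python
# Arithmetic check on the lunar plant figures quoted above:
# 962 kWt core, 35.5 kWe conditioned output, 22.9 kWe to the habitat.
thermal_kw = 962.0
electric_kw = 35.5
habitat_kw = 22.9

efficiency = electric_kw / thermal_kw
print(f"net conversion efficiency: {efficiency:.1%}")  # ~3.7%, typical for thermoelectrics
print(f"fraction of output delivered to habitat: {habitat_kw / electric_kw:.1%}")
print(f"waste heat the radiator must reject: {thermal_kw - electric_kw:.1f} kWt")
```

In other words, well over 900 kWt of the core’s output ends up as waste heat, which is why the radiator and its configuration were such a large part of the design effort.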
Sadly, this power supply was not developed to a great extent (although I was unable to find the source document for this particular design: NAA-SR-12374, “Reactor Power Plants for Lunar Base Applications,” Atomics International, 1967), because the plans for the crewed lunar base were canceled before much work was done on this design. The plans were developed to the point that future lunar base efforts would have a significant starting point, but again the design was never frozen, so there was a lot of flexibility remaining in the design.
The End of the Line
Sadly, these plans never reached fruition. The U-ZrH Reactor had its budget cut by 75% in 1971, with simultaneous cuts to thermionic power conversion (30%) and reactor safety work (50%), and the outright cancellation of the advanced Brayton system. NERVA, which we covered in a number of earlier posts, also had its funding slashed at the same time. This was due to a reorientation of funds away from many then-current programs toward the Space Shuttle and a modular space station, whose power requirements were higher than the U-ZrH Reactor could offer.
At this point, the AEC shifted its funding philosophy, moving away from preparing specific designs for flight readiness and toward a long-term development strategy. In 1973 the head of the AEC’s Space Nuclear Systems Division said that, given the lower funding levels that NASA was forced to work within, “…the missions which were likely to require large amounts of energy, now appear to be postponed until around 1990 or later.” This led to the cancellation of all nuclear reactor systems, and a shift in focus to radioisotope thermoelectric generators, which provided enough power for NASA’s and the DoD’s mission priorities of the day in a far simpler form.
Funding would continue at a low level all the way to the current day for space fission power systems, but the shift in focus led to a very different program. While new reactor concepts continue to be regularly put forth, both by Department of Energy laboratories and NASA, for decades the focus was more on enhancing the technological capability of many areas, especially materials, which could be used by a wide range of reactor systems. This meant that specific systems wouldn’t be developed to the same level of technological readiness in the US for over 30 years, and in fact it wouldn’t be until 2018 that another fission power system of US design would undergo criticality testing (the KRUSTY test for Kilopower, in early 2018).
More Coming Soon!
Originally, I was hoping to cover another system in this blog post as well, but the design is so different from the ZrH fueled reactors that we’ve been discussing so far in this series that it warranted its own post. This reactor is the SNAP-50, which didn’t start out as a space reactor, but rather as one of the most serious contenders for the indirect-cycle Aircraft Nuclear Propulsion program. It used uranium carbide/nitride fuel elements and liquid lithium coolant, and was far more powerful than anything that we’ve discussed yet in terms of electric power plants. Giving it its own post will also allow me to talk a little bit about the ANP program, something that I’ve wanted to cover for a while now but that, considering how much more there is to discuss about in-space systems (and my personal aversion to nuclear reactors for atmospheric craft on Earth), hasn’t really been in the cards until now.
This series continues to expand, largely because there’s so much to cover that we haven’t gotten to yet – and no one else has covered these systems much either! I’m currently planning on doing the SNAP-50/SPUR system as a standalone post, followed by the SP-100 and a selection of other reactor designs. After that, we’ll cover the ENISY reactor program in its own post, followed by the NEP designs from the 90s and early 00s, both in the US and Russia. Finally, we’ll cover the predecessors to Kilopower, and round out our look at fission power plant cores by revisiting Kilopower to have a look at what’s changed, and what’s stayed the same, over the last year since the KRUSTY test. We will then move on to shielding materials and design (probably two or three posts, because there’s a lot to cover there) before moving on to power conversion systems, another long series. We’ll finish up the nuclear systems side of nuclear power supplies by looking at heat sinks, radiators, and other heat rejection systems, followed by a look at nuclear electric spacecraft architecture and design considerations.
A lot of work continues in the background, especially in terms of website planning and design, research on a lot of the lesser-known reactor systems, and planning for the future of the web page. The blog is definitely set for topics for at least another year, probably more like two, just covering the basics and history of astronuclear design, but getting the web page to be more functional is a far more complex, and planning-heavy, task.
I hope you enjoyed this post, and much more is coming next year! Don’t forget to join us on Facebook, or follow me on Twitter!
Hello, and welcome to Beyond NERVA! Today we’re going to look at the program that birthed the first astronuclear reactor to go into orbit, although the extent of the program far exceeds the flight record of a single launch.
Before we get into that, I have a minor administrative announcement that will develop into major changes for Beyond NERVA in the near-to-mid future! As you may have noticed, we have moved from beyondnerva.wordpress.com to beyondnerva.com. For the moment, there isn’t much different, but in the background a major webpage update is brewing! Not only will the home page be updated to make it easier to navigate the site (and see all the content that’s already available!), but the number of pages on the site is going to be increasing significantly. A large part of this is going to be integrating information that I’ve written about in the blog into a more topical, focused format – with easier access to both basic concepts and technical details being a priority. However, there will also be expansions on concepts, pages for technical concepts that don’t really fit anywhere in the blog posts, and more! As these updates go live, I’ll mention them in future blog posts. I’ll also post them on both the Facebook group and the new Twitter feed (currently not super active, largely because I haven’t found my “tweet voice” yet, but I hope to expand this soon!). If you are on either platform, you should definitely check them out!
The Systems for Nuclear Auxiliary Power, or SNAP, program was a major focus for a wide range of organizations in the US over many decades. The program extended everywhere from the bottom of the seas (SNAP-4, which we won’t be covering in this post) to deep space travel with electric propulsion. SNAP was divided up into an odd/even numbering scheme, with the odd model numbers (starting with SNAP-3) being radioisotope thermoelectric generators, and the even numbers (beginning with SNAP-2) being fission reactor electrical power systems.
Due to the sheer scope of the SNAP program, even eliminating systems that aren’t fission-based, this is going to be a two post subject. This post will cover the US Air Force’s portion of the SNAP reactor program: the SNAP-2 and SNAP-10A reactors; their development programs; the SNAPSHOT mission; and a look at the missions that these reactors were designed to support, including satellites, space stations, and other crewed and uncrewed installations. The next post will cover the NASA side of things: SNAP-8 and its successor designs as well as SNAP-50/SPUR. The one after that will cover the SP-100, SABRE, and other designs from the late 1970s through to the early 1990s, and will conclude with looking at a system that we mentioned briefly in the last post: the ENISY/TOPAZ II reactor, the only astronuclear design to be flight qualified by the space agencies and nuclear regulatory bodies of two different nations.
The Beginnings of the US Astronuclear Program: SNAP’s Early Years
Beginning in the earliest days of both the nuclear age and the space age, nuclear power had a lot of appeal for the space program: high power density, high power output, and mechanically simple systems were in high demand for space agencies worldwide. The earliest mention of a program to develop nuclear electric power systems for spacecraft was the Pied Piper program, begun in 1954. This led to the creation of the Systems for Nuclear Auxiliary Power program, or SNAP, the following year (1955); SNAP was eventually canceled in 1973, as were so many other space-focused programs.
Once space became a realistic place to send not only scientific payloads but personnel, the need to provide them with significant amounts of power became evident. Most systems of the day were far from the power-efficient designs that the American and Soviet space programs would develop in the coming decades; and, at the time, the vision for a semi-permanent space station wasn’t 3-6 people orbiting in a (completely epic, scientifically revolutionary, collaboratively brilliant, and invaluable) zero-gee conglomeration of tin cans like the ISS, but larger space stations providing centrifugal gravity, staffed ’round the clock by dozens of individuals. These weren’t just space stations for NASA, which was an infant organization at the time, but for the USAF, and possibly other institutions in the US government as well. In addition, what could provide a livable habitation for a group of astronauts could also power a remote, uncrewed radar station in the Arctic, or in other extreme environments. Even if crew were there, the fact that the power plant wouldn’t have to be maintained was a significant military advantage.
Responsible for both radioisotope thermoelectric generators (which run on the natural radioactive decay of a radioisotope, selected according to its energy density and half-life) as well as fission power plants, SNAP programs were numbered with an even-odd system: even numbers were fission reactors, odd numbers were RTGs. These designs were never solely meant for in-space application, but the increased mission requirements and complexities of being able to safely launch a nuclear power system into space made this aspect of their use the most stringent, and therefore the logical one to design around. Additionally, while the benefits of a power-dense electrical supply are obvious for any branch of the military, the need for this capability in space far surpassed the needs of those on the ground or at sea.
Originally run jointly by the AEC’s Department of Reactor Development (which funded the reactor itself) and the USAF’s Wright Air Development Center (which funded the power conversion system), the program was handed over entirely to the AEC in 1957. Atomics International was the prime contractor for the program.
There are a number of similarities across almost all the SNAP designs, and for good reason. First, all of the reactors that we’ll be looking at (as well as some other designs we’ll look at in the next post) used the same type of fissile fuel, even though the form, and the cladding, varied fairly widely between the different concepts. Uranium-zirconium hydride (U-ZrH) was a very popular fuel choice at the time. Assuming hydrogen loss could be controlled (a major part of the testing regime for all the reactors that we’ll look at), it provided a self-moderating, moderate-to-high-temperature fuel form, which was a very attractive feature. This type of fuel is still used today in the TRIGA reactor, which, together with its direct descendants, is the most common research and test reactor worldwide. The higher-powered reactors (SNAP-2 and SNAP-8) both initially used variations on the same power conversion system: a boiling mercury Rankine cycle. By the end of the testing regime this was judged possible to execute, but to my knowledge it has never been proposed again (we’ll look at it briefly in the post on heat engines as power conversion systems, with a more in-depth look coming in the future), although a mercury-based MHD conversion system is being offered as a power conversion option for an accelerator-driven molten salt reactor.
SNAP-2: The First American Built-For-Space Nuclear Reactor Design
The idea for the SNAP-2 reactor originally came from a 1951 Rand Corporation study, looking at the feasibility of having a nuclear powered satellite. By 1955, the possibilities that a fission power supply offered in terms of mass and reliability had captured the attention of many people in the USAF, which was (at the time) the organization that was most interested and involved (outside the Army Ballistic Missile Agency at the Redstone Arsenal, which would later become the Goddard Spaceflight Center) in the exploitation of space for military purposes.
The original request for what became known as SNAP-2 came in 1955, from the AEC’s Defense Reactor Development Division and the USAF Wright Air Development Center. It called for possible power sources in the 1 to 10 kWe range able to operate autonomously for one year, and the original proposal was for a zirconium hydride moderated, sodium-potassium (NaK) cooled reactor with a boiling mercury Rankine power conversion system (similar to a steam turbine in operational principles, but we’ll look at power conversion systems more in a later post). The design was refined into a 55 kWt, 5 kWe reactor operating at about 650°C outlet temperature, massing about 100 kg unshielded, and was tested for over 10,000 hours. The epithermal neutron spectrum of this design would remain popular throughout much of the US in-space reactor program, both for electrical power and thermal propulsion designs. The design would later be adapted, with some modifications, into the SNAP-10A reactor as well.
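For a sense of scale, the quoted figures imply an overall conversion efficiency of roughly 9% for the mercury Rankine system. The numbers are from the text; the one-line calculation is my own:

```python
# Implied overall conversion efficiency of SNAP-2's mercury Rankine system,
# using only the figures quoted above (illustrative, not from the source).
thermal_kw = 55.0     # reactor thermal output (kWt)
electric_kwe = 5.0    # electrical output (kWe)

efficiency = electric_kwe / thermal_kw
print(f"SNAP-2 implied conversion efficiency: {efficiency:.1%}")  # ~9.1%
```

That several-fold advantage over thermoelectric conversion is what kept the troublesome mercury turbine in development for so long.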
SNAP-2’s first critical assembly test was in October of 1957, shortly after Sputnik-1’s successful launch. With 93% enriched 235U making up 8% of the weight of the U-ZrH fuel elements, a 1” beryllium inner reflector, and an outer graphite reflector of variable thickness, separated into two rough hemispheres to control the approach to criticality, this device was able to test many of the reactivity conditions needed for materials testing on a small economic scale, as well as test the behavior of the fuel itself. The primary concerns with testing on this machine were reactivity, activation, and the intrinsic steady state behavior of the fuel that would be used for SNAP-2. A number of materials were also tested for reflection and neutron absorbency, both for main core components and for out-of-core mechanisms. This was followed by the SNAP-2 Experimental Reactor in 1959-1960 and the SNAP-2 Development Reactor in 1961-1962.
The SNAP-2 Experimental Reactor (S2ER or SER) was built to verify the core geometry and basic reactivity controls of the SNAP-2 reactor design, as well as to test the basics of the primary cooling system, materials, and other basic design questions, but was not meant to be a good representation of the eventual flight system. Construction started in June 1958, with construction completed by March 1959. Dry (Sept 15) and wet (Oct 20) critical testing was completed the same year, and power operations started on Nov 5, 1959. Four days later, the reactor reached design power and temperature operations, and by April 23 of 1960, 1000 hours of continuous testing at design conditions were completed. Following transient and other testing, the reactor was shut down for the last time on November 19, 1960, just over one year after it had first achieved full power operations. Between May 19 and June 15, 1961, the reactor was disassembled and decommissioned. Testing on various reactor materials, especially the fuel elements, was conducted, and these test results refined the design for the Development Reactor.
The SNAP-2 Development Reactor (S2DR or SDR, also called the SNAP-2 Development System, S2DS) was installed in a new facility at the Atomics International Santa Susana research facility to better manage the increased testing requirements of the more advanced reactor design. While this wasn’t going to be a flight-type system, it was designed to inform the flight system on many of the details that the S2ER couldn’t. Interestingly, it is much harder to find information on than the S2ER. This reactor incorporated many changes from the S2ER, and went through several iterations to tweak the design for a flight reactor. Zero power testing occurred over the summer of 1961, and testing at power began shortly after (although at SNAP-10 power and temperature levels). Testing continued until December of 1962, and further refined the SNAP-2 and -10A reactors.
A third set of critical assembly reactors, known as the SNAP Development Assembly series, was constructed at about the same time, meant to provide fuel element testing, criticality benchmarks, reflector and control system worth, and other core dynamic behaviors. These were also built at the Santa Susana facility, and would provide key test capabilities throughout the SNAP program. This water-and-beryllium reflected core assembly allowed for a wide range of testing environments, and would continue to serve the SNAP program through to its cancellation. Going through three iterations, the designs were used more to test fuel element characteristics than the core geometries of individual core concepts. This informed all three major SNAP designs in fuel element material and, to a lesser extent, heat transfer (the SNAP-8 used thinner fuel elements) design.
Extensive testing was carried out on all aspects of the core geometry, fuel element geometry and materials, and other behaviors of the reactor; but by May 1960 there was enough confidence in the reactor design for the USAF and AEC to plan on a launch program for the reactor (and the SNAP-10A), called SNAPSHOT (more on that below). Testing using the SNAP-2 Experimental Reactor occurred in 1960-1961, and the follow-on test program, including the Snap 2 Development reactor occurred in 1962-63. These programs, as well as the SNAP Critical Assembly 3 series of tests (used for SNAP 2 and 10A), allowed for a mostly finalized reactor design to be completed.
Work on the power conversion system (PCS), a Rankine (steam-cycle) turbine using mercury as the working fluid, started in 1958 with the development of a mercury boiler to test the components in a non-nuclear environment. The turbine faced many technical challenges, including bearing lubrication and wear issues, turbine blade pitting and erosion, fluid dynamics problems, and other difficulties. As is often the case with advanced reactor designs, the main challenge wasn’t the reactor core itself, nor the control mechanisms, but the non-nuclear portions of the power unit. This is a common theme in astronuclear engineering. More recently, JIMO experienced similar problems when the final system design called for a theoretical but not-yet-demonstrated supercritical CO2 Brayton turbine (as we’ll see in a future post). However, without a power conversion system of usable efficiency and low enough mass, an astronuclear power system has no means of delivering the electricity it’s called upon to deliver.
Reactor shielding, in the form of a metal honeycomb impregnated with a high-hydrogen material (in this case a form of paraffin), was common to all SNAP reactor designs. The high hydrogen content allowed for the best hydrogen density of the available materials, and therefore the greatest shielding per unit mass of the available options.
Testing on the SNAP-2 reactor system continued until 1963, when the reactor core itself was re-purposed for the redesigned SNAP-10, which became the SNAP-10A; at this point the SNAP-2 reactor program was folded into the SNAP-10A program. SNAP-2-specific design work was more or less halted on the reactor side, due to a number of factors, including the slower development of the CRU power conversion system, the large number of moving parts in the Rankine turbine, and the advances made in the more powerful SNAP-8 family of reactors (which we’ll cover in the next post). However, testing on the power conversion system continued until 1967, due to its application to other programs. This didn’t mean the reactor itself was without prospects: its far more efficient power conversion system made it well suited to crewed space operations (as we’ll see later in this post), especially space stations. However, even this role would eventually be taken by a derivative of the SNAP-8, the Advanced ZrH Reactor, leaving the SNAP-2 without a mission.
The SNAP Reactor Improvement Program, in 1963-64, continued to optimize and advance the design without nuclear testing, through computer modeling, flow analysis, and other means; but the program ended without flight hardware being either built or used. We’ll look more at the missions that this reactor was designed for later in this blog post, after looking at its smaller sibling, the first reactor (and only US reactor) to ever achieve orbit: the SNAP-10A.
SNAP-10: The Father of the First Reactor in Space
At about the same time as the SNAP 2 Development Reactor tests (1958), the USAF requested a study on a thermoelectric power conversion system, targeting a 0.3 kWe-1kWe power regime. This was the birth of what would eventually become the SNAP-10 reactor. This reactor would evolve in time to become the SNAP-10A reactor, the first nuclear reactor to go into orbit.
In the beginning, this design was superficially quite similar to the Soviet Romashka reactor (which we’ll examine when we get to the USSR’s programs), with plates of U-ZrH fuel separated by beryllium plates for heat conduction, and surrounded by radial and axial beryllium reflectors. Purely conductively cooled internally, and radiatively cooled externally, the design was later changed to NaK forced convection cooling for better thermal management (see below); the conduction-cooled version was also adapted into the SNAP-4 reactor, which was designed for underwater military installations rather than spaceflight. Outside the radial reflectors were thermoelectric power conversion systems, with a finned radiating casing being the only major externally visible component. The design looked, superficially at least, remarkably like the RTGs that would be used for the next several decades. However, even with the low efficiency of thermoelectric conversion, this was a far more powerful source of electricity than the RTGs that were available at the time (or even today) for space missions.
Within a short period, however, the design changed dramatically, becoming very similar to the core of the SNAP-2 reactor that was under development at the same time. Modifications were made to the SNAP-2 baseline, and the two reactor cores eventually became identical. This also led to the NaK cooling system being implemented on the SNAP-10A, and many of the test reactors for the SNAP-2 system were also used to develop the SNAP-10A. This was possible because the final design, despite its far lower electrical output, differed mainly in the power conversion system, not the reactor structure. The reactor design was tested extensively, with the S2ER, S2DR, and SCA test series (4A, 4B, and 4C) reactors, as well as the SNAP-10 Ground Test Reactor (S10FS-1). The new design used a similar, but slightly smaller, conical radiator, with NaK as the radiator working fluid.
This was a far lower power design than the SNAP-2, coming in at 30 kWt; with the 1.6% conversion efficiency of the thermoelectric system, its electrical power output was only about 500 We. It also ran almost 100°C (about 180°F) cooler, allowing for longer fuel element life, but leaving less of a thermal gradient to work with, and therefore a lower theoretical maximum efficiency. This tradeoff was the best on offer, though, and the power conversion system’s lack of moving parts, and the ease of testing it in a non-nuclear environment without extensive support equipment, made it more robust from an engineering point of view. The overall design life of the reactor, though, remained short: only about 1 year, and less than 1% fissile fuel burnup. It’s possible, and maybe even likely, that (barring spacecraft-associated failure) the reactor could have provided power for longer; however, the longer the reactor operates, the more the fuel swells due to fission product buildup, and at some point this would cause the fuel cladding to fail. Other challenges, such as fission product poison buildup, clad erosion, and mechanical wear, would end the reactor’s operational life at some point, even if the fuel elements could still provide more power.
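As a quick check on these figures (the thermal power and efficiency are from the text; the arithmetic is mine), 30 kWt at 1.6% works out to roughly the 500 We quoted:

```python
# Thermoelectric output implied by the SNAP-10A figures above (illustrative).
thermal_w = 30_000.0    # reactor thermal power (W)
te_efficiency = 0.016   # thermoelectric conversion efficiency from the text

electric_w = thermal_w * te_efficiency
print(f"electrical output: {electric_w:.0f} We")  # ~480 We, in line with the ~500 We quoted
```

The two-order-of-magnitude gap between thermal and electrical power is also why the radiator, not the core, dominated so much of the spacecraft’s layout.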
The SNAP-10A was not meant to power crewed facilities, since its output was so low that multiple installations would have been needed. This meant that, while all SNAP reactors were meant to be largely or wholly unmaintained by crew personnel, this reactor had no possibility of being maintained at all. The reliability requirements were accordingly higher, and the lack of moving parts in the power conversion system helped meet them. The reactor was also designed to use active reactivity control only for a brief (72 hour) period, to mitigate any startup transients and establish steady-state operations, before the active control systems were left in their final configuration, leaving the reactor entirely self-regulating. This placed an additional burden on the reactor designers to have a very strong understanding of the behavior of the reactor, its long-term stability, and any effects that would occur during the year-long lifetime of the system.
At the end of the reactor’s life, it was designed to stay in orbit until the short-lived, radiotoxic portions of its inventory had gone through at least five half-lives, reducing the radioactivity of the system to a very low level. After that, the reactor would re-enter the atmosphere, the radial and end reflectors would be ejected, and the entire thing would burn up in the upper atmosphere. From there, winds would dilute any residual radioactivity to less than what was released by a single small nuclear test (which were still being conducted in Nevada at the time). While there’s nothing wrong with this approach from a health physics point of view, as we saw in the last post on the BES-5 reactors the Soviet Union was flying, there are major international political problems with this concept. The SNAPSHOT reactor continues to orbit the Earth (currently at an altitude of roughly 1300 km), and will do so for more than 2000 years according to recent orbital models, so it is not in danger of re-entry any time soon; but, at some point, the reactor will need to be moved into a graveyard orbit, or collected and returned to Earth – a problem which currently has no solution.
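The “five half-lives” rule of thumb behind this decay-orbit strategy can be sketched in a few lines (a generic illustration of radioactive decay, not a calculation from the source):

```python
# Rule of thumb behind the planned decay orbit: after n half-lives,
# the remaining activity of a given isotope is (1/2)**n of its initial value.
def remaining_fraction(half_lives: float) -> float:
    """Fraction of initial activity left after the given number of half-lives."""
    return 0.5 ** half_lives

for n in (1, 3, 5):
    print(f"after {n} half-lives: {remaining_fraction(n):.1%} remaining")
# After 5 half-lives, only ~3.1% of the original activity remains,
# which is why "five half-lives" is a common decay benchmark.
```

Since the dominant short-lived fission products have half-lives of days to a few decades, the required orbital lifetime is set by the longest-lived isotope of concern, not by the short-lived ones.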
The Runup to Flight: Vehicle Verification and Integration
1960 brought big plans for orbital testing of both the SNAP-2 and SNAP-10 reactors, under the program SNAPSHOT: Two SNAP-10 launches, and two SNAP-2 launches would be made. Lockheed Missiles System Division was chosen as the launch vehicle, systems integration, and launch operations contractor for the program; while Atomics International, working under the AEC, was responsible for the power plant.
The SNAP-10A reactor was meant to be decommissioned by orbiting long enough for the fission product inventory (the waste portion of the burned fuel elements, and the source of the vast majority of the radiation from the reactor post-fission) to decay away naturally; the reactor would then be de-orbited and burn up in the atmosphere. This was planned before the KOSMOS-954 accident, when letting a nuclear reactor burn up in the atmosphere was not the anathema it is today. The plan wouldn’t have increased the radioactivity the public received to any appreciable degree; and, at the time, open-air testing of nuclear weapons was the norm, sending up thousands of kilograms of radioactive fallout per year. However, it was important that the fuel rods themselves burn up high in the atmosphere, in order to dilute the fuel elements as much as possible, and this is something that needed to be tested.
Enter the SNAP Reactor Flight Demonstration Number 1 mission, or RFD-1. The concept of this test was to demonstrate that the planned disassembly and burnup process would occur as expected, and to inform the further design of the reactor if there were any unexpected effects of re-entry. Sandia National Labs took the lead on this part of the SNAPSHOT program. After looking at the budget available, the launch vehicles available, and the payloads, the team realized that orbiting a nuclear reactor mockup would be too expensive, and another solution needed to be found. This led to the mission design of RFD-1: a sounding rocket would be used, and the core geometry would be changed to account for the short flight time, compared to a real reentry, in order to get the data needed for the de-orbiting testing of the actual SNAP-10A reactor that would be flown.
So what did this mean in practice? Ideally, a mockup of the SNAP-10A reactor would be built, identical in every way except that depleted uranium would replace the highly enriched uranium in the fuel elements. It would launch on the same vehicle as the SNAPSHOT mission (an Atlas-Agena D), be placed in the same orbit, and be deorbited at the same angle and location as the actual reactor; perhaps even at a slightly less favorable reentry angle, to find out how accurate the calculations were and what the margin of error would be. However, an Atlas-Agena isn’t a cheap piece of hardware, either to purchase or to launch, and the project managers knew they couldn’t afford one, so they went hunting for a more economical alternative.
This led the team to decide on a NASA Scout sounding rocket as the launch vehicle, launched from Wallops Island launch site (which still launches sounding rockets, as well as the Antares rocket, to this day, and is expanding to launch Vector Space and RocketLab orbital rockets as well in the coming years). Sounding rockets don’t reach orbital altitudes or velocities, but they get close, and so can be used effectively to test orbital components for systems that would eventually fly in orbit, but for much less money. The downside is that they’re far smaller, with less payload and less velocity than their larger, orbital cousins. This led to needing to compromise on the design of the dummy reactor in significant ways – but those ways couldn’t compromise the usefulness of the test.
Sandia Corporation (which runs Sandia National Laboratories to this day, although who runs Sandia Corp changes… it’s complicated) and Atomics International engineers got together to figure out what could be done with the Scout rocket and a dummy reactor to provide as useful an engineering validation as possible, while sticking within the payload requirements and flight profile of the relatively small, suborbital rocket that they could afford. Because the dummy reactor wouldn’t be going nearly as fast as it would during true re-entry, a steeper angle of attack when the test was returning to Earth was necessary to get the velocity high enough to get meaningful data.
The Scout rocket that was being used had much less payload capability than the Atlas rocket, so if there was a system that could be eliminated, it was removed to save weight. No NaK was flown on RFD-1, the power conversion system was left off, the NaK pump was simulated by an empty stainless steel box, and the reflector assembly was made out of aluminum instead of beryllium, both for weight and toxicity reasons (BeO is not something that you want to breathe!). The reactor core didn’t contain any dummy fuel elements, just a set of six stainless steel spacers to keep the grid plates at the appropriate separation. Because the angle of attack was steeper, the test would be shorter, meaning that there wouldn’t be time for the reactor’s reflectors to degrade enough to release the fuel elements. The fuel elements were the most important part of the test, however, since it needed to be demonstrated that they would completely burn up upon re-entry, so a compromise was found.
The fuel elements would be clustered on the outside of the dummy reactor core, and ejected early in the burnup test period. While the short flight time and steep re-entry angle meant that there wouldn’t be enough time to observe full burnup, the beginning of the process would provide enough data to allow accurate simulations of the process to be made. Ensuring that this data, the most important part of the test, could actually be collected was another challenge, though, which forced even more compromises in RFD-1’s design. Testing equipment had to be mounted in such a way as to not change the aerodynamic profile of the dummy reactor core. Other minor changes were needed as well, but despite all of the differences between RFD-1 and the actual SNAP-10A, the thermodynamics and aerodynamics of the two systems differed in only very minor ways.
Testing support came from Wallops Island and NASA’s Bermuda tracking station, as well as three ships and five aircraft stationed near the impact site for radar observation. The ground stations would provide both radar and optical support for the RFD-1 mission, verifying reactor burnup, fuel element burnup, and other test objective data, while the aircraft and ships were primarily tasked with collecting telemetry data from on-board instruments, as well as providing additional radar data; although one NASA aircraft carried a spectrometer in order to analyze the visible radiation coming off the reentry vehicle as it disintegrated.
The test went largely as expected. Due to the steeper re-entry angle, full fuel element burnup wasn’t possible, even with the early ejection of the simulated fuel rods, but the amount that they did disintegrate during the mission showed that the reactor’s fuel would be sufficiently dispersed at a high enough altitude to prevent any radiological risk. The dummy core behaved mostly as expected, although there were some disagreements between the predicted behavior and the flight data, due to the steepness of the re-entry trajectory. However, the test was considered a success, and paved the way for SNAPSHOT to go forward.
The next task was to mount the SNAP-10A to the Agena spacecraft. Because the reactor was a very different power supply than those in use at the time, special power conditioning units were needed to transfer power from the reactor to the spacecraft. This subsystem was mounted on the Agena itself, along with tracking and command functionality, control systems, and voltage regulation. While Atomics International worked to ensure the reactor would be as self-contained as possible, the reactor and spacecraft were fully integrated as a single system. Besides the reactor itself, the spacecraft carried a number of other experiments, including a suite of micrometeorite detectors and an experimental cesium contact thruster, which would operate from a battery system recharged by electricity produced by the reactor.
To ensure the reactor could be integrated with the spacecraft, a series of Flight System Prototypes (FSM-1 and -4; FSEM-2 and -3 were used for electrical system integration) were built. These were full scale, non-nuclear mockups that contained a heating unit to simulate the reactor core. Simulations were run using FSM-1 from launch to startup on orbit, with all testing occurring in a vacuum chamber. The final one of the series, FSM-4, was the only one that used NaK coolant in the system, which was used to verify that the thermal performance of the NaK system met flight system requirements. FSEM-2 did not have a power system mockup; instead, it used a mass mockup of the reactor, power conversion system, radiator, and other associated components. Testing with FSEM-2 showed that there were problems with the original electrical design of the spacecraft, which required a rebuild of the test-bed and a modification of the flight system itself. Once complete, the renamed FSEM-2A underwent a series of shock, vibration, acceleration, temperature, and other tests (known as the “Shake and Bake” environmental tests), which it subsequently passed. The final mockup, FSEM-3, underwent extensive electrical systems testing at Lockheed’s Sunnyvale facility, using simulated mission events to test the compatibility of the spacecraft and the reactor. Additional electrical systems changes were implemented before the program proceeded, but by the middle of 1965 the electrical system and spacecraft integration tests were complete and the necessary changes were incorporated into the flight vehicle design.
The last round of pre-flight testing was a test of a flight-configured SNAP-10A reactor under fission power. This nuclear ground test, S10F-3, was identical to the system that would fly on SNAPSHOT, save some small ground safety modifications, and was tested from January 22, 1965 to March 15, 1966. It operated uninterrupted for over 10,000 hours, with the first 390 days being at a power output of 35 kWt, and (following AEC approval) an additional 25 days of testing at 44 kWt. This testing showed that, after one year of operation, the continuing problem of hydrogen redistribution caused the reactor’s outlet temperature to drop more than expected, and additional, relatively minor, uncertainties about reactor dynamics were seen as well. However, overall, the test was a success, and paved the way for the launch of the SNAPSHOT spacecraft in April 1965; and the continued testing of S10F-3 during the SNAPSHOT mission was able to verify that the thermal behavior of astronuclear power systems during ground test is essentially identical to orbiting systems, proving the ground test strategy that had been employed for the SNAP program.
SNAPSHOT: The First Nuclear Reactor in Space
In 1963 there was a change in the way the USAF was funding these programs. While the reactors themselves were solely under the direction of the AEC, the USAF still funded research into the power conversion systems, since they were operationally useful; but that changed in 1963, with the removal of the 0.3 kWe to 1 kWe portion of the program. Budget cuts killed the Zr-H moderated core of the SNAP-2 reactor, although funding continued for the Hg vapor Rankine conversion system (which was being developed by TRW) until 1966. The SNAP-4 reactor, which had not even been run through criticality testing, was canceled, as was the planned flight test of the SNAP-10A, which had been funded by the USAF, because the USAF no longer had an operational need for the power system once the 0.3-1 kWe power system program was cancelled. The associated USAF program that would have used the power supply was well behind schedule and over budget, and was canceled at the same time.
The USAF attempted to get more funding, but was denied. All parties involved held a series of meetings to figure out how to save the program, and worked together to try to get a reduced SNAPSHOT program through, but the needed funds weren’t forthcoming: funding shortfalls at the AEC (which received only $8.6 million of the $15 million it requested), as well as severe restrictions on the Air Force (which continued to fund Lockheed for the development and systems integration work through bureaucratic creativity), kept the program from moving forward. At the same time, it was realized that being able to deliver kilowatts or megawatts of electrical power, rather than the watts then available, would make the reactor a much more attractive program for a potential customer (either the USAF or NASA).
Finally, in February of 1964 the Joint Congressional Committee on Atomic Energy was able to fund the AEC to the tune of $14.6 million to complete the SNAP-10A orbital test. This reactor design had already been extensively tested and modeled, and unlike the SNAP-2 and -8 designs, no complex, highly experimental, mechanical-failure-prone power conversion system was needed.
SNAPSHOT consisted of a SNAP-10A fission power system mounted to a modified Agena-D spacecraft, which by this time was an off-the-shelf, highly adaptable spacecraft used by the US Air Force for a variety of missions. An experimental cesium contact ion thruster (read more about these thrusters on the Gridded Ion Engine page) was installed on the spacecraft for in-flight testing. The mission was to validate the SNAP-10A architecture with on-orbit experience, proving the capability to operate for 9 days without active control while providing 500 W (28.5 V DC) of electrical power. Additional requirements included the use of a SNAP-2 reactor core with minimal modification (to allow the higher-output SNAP-2 system, with its mercury vapor Rankine power conversion system, to be validated as well when the need for it arose); eliminating the need for (while preserving the option of) active control of the reactor for one year after startup (to prove autonomous operation capability); facilitating safe ground handling during spacecraft integration and launch; and accommodating future growth potential in both available power and power-to-weight ratio.
While the threshold for mission success was set at 90 days, Atomics International wanted to prove one year of capability for the system; so, in those 90 days, the goal was to demonstrate that the entire reactor system was capable of one year of operation (the SNAP-2 requirement). Atomics International imposed additional, more stringent guidelines for the mission as well, specifying a number of design requirements, including self-containment of the power system outside the structure of the Agena as much as possible; more stringent mass and center-of-gravity requirements for the system than those specified by the US Air Force; meeting the military specifications for EM radiation exposure to the Agena; and others.
The flight was formally approved in March, and the launch occurred on April 3, 1965 on an Atlas-Agena D rocket from Vandenberg Air Force Base. The launch went perfectly, and placed the SNAPSHOT spacecraft in a polar orbit, as planned. Sadly, the mission could not be considered either routine or simple. One of the impedance probes failed before launch, and part of the micrometeorite detector system failed before returning data. A number of other minor faults were detected as well, but perhaps the most troubling were the shorts and voltage irregularities coming from the ion thruster, due to high-voltage failure modes, as well as excessive electromagnetic interference from the system, which reduced the telemetry data to an unintelligible mess. The thruster was shut off until later in the flight, in order to focus on testing the reactor itself.
The reactor was given the startup order 3.5 hours into the flight, when the two gross adjustment control drums were fully inserted and the two fine control drums began a stepwise reactivity insertion into the reactor. Within 6 hours, the reactor achieved on-orbit criticality, and the active control portion of the reactor test program began. For the next 154 hours, the control drums were operated by ground command to test reactor behavior. Due to the problems with the ion engine, the failure sensing and malfunction sensing systems were also switched off, because these could have been corrupted by the errant thruster. Following the first 200 hours of reactor operations, the reactor was set to autonomous operation at full power. Between 600 and 700 hours later, the voltage output of the reactor, as well as its temperature, began to drop; an effect that the S10F-3 test reactor had also demonstrated, due to hydrogen migration in the core.
On May 16, just over one month after being launched into orbit, contact was lost with the spacecraft for about 40 hours. Some time during this blackout, the reactor’s reflectors ejected from the core (although they remained attached to their actuator cables), shutting down the core. This spelled the end of reactor operations for the spacecraft, and when the emergency batteries died five days later all communication with the spacecraft was lost forever. Only 45 days had passed since the spacecraft’s launch, and information was received from the spacecraft for only 616 orbits.
What caused the failure? There are many possibilities, but when the telemetry from the spacecraft was read, it was obvious that something had gone badly wrong. The only thing that can be said with complete confidence is that the error came from the Agena spacecraft rather than from the reactor. No indications had been received before the blackout that the reactor was about to scram itself (the reflector ejection was the emergency scram mechanism), and the problem wasn’t one that should have been able to occur without ground commands. However, with the telemetry data gained from the dwindling battery after the shutdown, some suppositions could be made. The most likely immediate cause of the reactor’s shutdown was traced to a possible spurious command from the high voltage command decoder, part of the Agena’s power conditioning and distribution system. This in turn was likely caused by one of two possible scenarios: either a piece of the voltage regulator failed, or it became overstressed because of the unusual low-power vehicle loads or a command for the reactor to increase power output. Sadly, the root cause of this failure cascade was never directly determined, but all of the data received pointed to a high-voltage failure of some sort, rather than a low-voltage error (which could also have resulted in a reactor scram). Other possible causes of instrumentation or reactor failure, such as the thermal or radiation environment, collision with another object, onboard explosion of the chemical propellants used in the Agena’s main engines, and previously noted flight anomalies, including the arcing and EM interference from the ion engine, were all eliminated as well.
Despite the spacecraft’s mysterious early demise, SNAPSHOT provided many valuable lessons in space reactor design, qualification, ground handling, launch challenges, and many other aspects of handling an astronuclear power source for potential future missions. Improved instrumentation design and performance characteristics; a sunshade for the main radiator to eliminate the sun/shade efficiency difference that was observed during the mission; and the use of a SNAP-2 type radiation shield to allow off-the-shelf, non-radiation-hardened electronic components (saving both money and weight on the spacecraft itself) were all suggested after the conclusion of the mission. Finally, the safety program developed for SNAPSHOT, including the SCA4 submersion criticality tests, the RFD-1 test, and the good agreement in reactor behavior between the on-orbit and ground test versions of the SNAP-10A, showed that both the AEC and the customer of the SNAP-10A (be it the US Air Force or NASA) could have confidence that the program was ready for whatever mission it was needed for.
Sadly, at the time of SNAPSHOT there simply wasn’t a mission that needed this system. 500 We isn’t much power, even though it was more power than was needed for many systems that were being used at the time. While improvements in the thermoelectric generators continued to come in (and would do so all the way to the present day, where thermoelectric systems are used for everything from RTGs on space missions to waste heat recapture in industrial facilities), the simple truth of the matter was that there was no mission that needed the SNAP-10A, so the program was largely shelved. Some follow-on paper studies would be conducted, but the lowest powered of the US astronuclear designs, and the first reactor to operate in Earth orbit, would be retired almost immediately after the SNAPSHOT mission.
Post-SNAPSHOT SNAP: the SNAP Improvement Program
The SNAP fission-powered program didn’t end with SNAPSHOT – far from it. While the SNAP reactors only ever flew once, their design was mature, well-tested, and in most particulars ready to fly in a short time – and the problems associated with those particulars had been well-addressed on the nuclear side of things. The Rankine power conversion system for the SNAP-2, which went through five iterations, reached technological maturity as well, having operated in a non-nuclear environment for close to 5,000 hours while remaining in excellent condition, meaning that the 10,000 hour requirement for the PCS would be able to be met without any significant challenges. The thermoelectric power conversion system also continued to be developed, focusing on an advanced silicon-germanium thermoelectric convertor, which was highly sensitive to fabrication and manufacturing processes. We’ll look more at thermoelectrics in the power conversion systems series of blog posts; just keep in mind that the power conversion systems continued to improve throughout this time, not just the reactor core design.
On the reactor side of things, the biggest challenge was definitely hydrogen migration within the fuel elements. As the hydrogen migrates away from the ZrH fuel, many problems occur: unpredictable reactivity within the fuel elements; temperature changes (dehydrogenated fuel elements developed hotspots, which in turn drove more hydrogen out of the fuel element); and changes in the ductility of the fuel. These caused major headaches for end-of-life behavior of the reactors and severely limited the fuel element temperature that could be achieved. However, the necessary testing for the improvement of these systems could easily be conducted with less-expensive reactor tests, including the SCA4 test-bed, and didn’t require flight architecture testing to continue to be improved.
The maturity of these two reactors led to a short-lived program in the 1960s to improve them, the SNAP Reactor Improvement Program. The SNAP-2 and -10 reactors went through many different design changes, some large and some small – and some leading to new reactor designs based on the shared reactor core architecture.
By this time, the SNAP-2 had mostly faded into obscurity. However, the fact that it shared a reactor core with the SNAP-10A, and that the power conversion system was continuing to improve, warranted some small studies to improve its capabilities. The two of note that are independent of the core (all of the design changes for the -10 that will be discussed can be applied to the -2 core as well, since at this point they were identical) are the change from a single mercury boiler to three, to allow more power throughput and to reduce loads on one of the more challenging components, and combining multiple cores into a single power unit. These were proposed together for a space station design (which we’ll look at later) to allow an 11 kWe power supply for a crewed station.
The vast majority of this work was done on the -10A. Any further reactors of this type would have had an additional three sets of 1/8” beryllium shims on the external reflector, increasing the initial reactivity by about 50 cents (a dollar of reactivity equals the delayed neutron fraction of the fuel, the threshold for prompt criticality; excess reactivity at startup is often somewhere around $2-$3 to account for fission product buildup); this means that additional burnable poisons (elements which absorb neutrons and transmute into something that is mostly neutron-transparent, evening out the reactivity of the reactor over its lifetime) could be inserted in the core at construction, mitigating the problems of reactivity loss that were experienced during earlier operation of the reactor. With this, and a number of other minor tweaks to reflector geometry and a slight lowering of the core outlet temperature, the life of the SNAP-10A was able to be extended from the initial design goal of one year to five years of operation. The end-of-life power level of the improved -10A was 39.5 kWt, with an outlet temperature of 980 F (527°C) and a power density of 0.14 kWt/lb (0.31 kWt/kg).
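As a quick sanity check on the units in this section, here’s a short Python sketch covering the reactivity units and the temperature and power density conversions quoted above. The delayed neutron fraction (beta ≈ 0.0065, typical of U-235 fueled reactors) is an assumed textbook value, not a figure from the SNAP documentation:

```python
# Quick unit helpers for the figures quoted in this section.
# beta = 0.0065 is an assumed typical delayed neutron fraction for
# U-235 fuel, not a value taken from the SNAP reports.

def reactivity(k_eff):
    """Reactivity rho = (k_eff - 1) / k_eff."""
    return (k_eff - 1.0) / k_eff

def reactivity_in_dollars(k_eff, beta=0.0065):
    """Reactivity expressed in dollars: rho / beta ($1 = prompt critical)."""
    return reactivity(k_eff) / beta

def f_to_c(deg_f):
    """Convert degrees Fahrenheit to Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

def kwt_per_lb_to_kwt_per_kg(val):
    """Convert a power density from kWt/lb to kWt/kg."""
    return val * 2.20462  # 1 kg = 2.20462 lb

# The ~50-cent shim worth corresponds to a k_eff of only about 1.0033:
print(round(reactivity_in_dollars(1.00326), 2))  # ~0.5 dollars, i.e. ~50 cents
print(round(f_to_c(980)))                        # 527, matching the 980 F figure
print(round(kwt_per_lb_to_kwt_per_kg(0.14), 2))  # 0.31, matching 0.14 kWt/lb
```

Running the conversions against the improved -10A figures (980 F, 0.14 kWt/lb) reproduces the 527°C and 0.31 kWt/kg values quoted above.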
These design modifications led to another iteration of the SNAP-10A, the Interim SNAP-10A/2 (I-10A/2). This reactor’s core was identical, but the reflector was further enhanced, and the outlet temperature and reactor power were both increased. In addition, even more burnable poisons were added to the core to account for the higher power output of the reactor. Perhaps the biggest design change with the Interim -10A/2 was the method of reactor control: rather than the passive control used on the -10A, the I-10A/2 was actively controlled throughout its entire period of operation, using the control drums to manage the reactivity and power output of the reactor. As with the improved -10A design, this reactor would have an operational lifetime of five years. These improvements gave the I-10A/2 an end-of-life power rating of 100 kWt, an outlet temperature of 1200 F (648°C), and an improved power density of 0.33 kWt/lb (0.73 kWt/kg).
This design, in turn, led to the Upgraded SNAP-10A/2 (U-10A/2). The biggest in-core difference between the I-10A/2 and the U-10A/2 was the hydrogen barrier used in the fuel elements: rather than using the initial design that was common to the -2, -10A, and I-10A/2, this reactor used the hydrogen barrier from the SNAP-8 reactor, which we’ll look at in the next blog post. This is significant, because the degradation of the hydrogen barrier over time, and the resulting loss of hydrogen from the fuel elements, was the major lifetime-limiting factor of the SNAP-10 variants up until this point. This reactor also went back to static control, rather than the active control used in the I-10A/2. As with the other -10A variants, the U-10A/2 had a possible core lifetime of five years, and other than an improvement of 100 F in outlet temperature (to 1300 F, or 704°C) and a marginal drop in power density to 0.31 kWt/lb, it shared many of the characteristics of the I-10A/2.
SNAP-10B: The Upgrade that Could Have Been
One consistent mass penalty in the SNAP-10A variants that we’ve looked at so far is the control drums: relatively large reactivity insertions were possible with a minimum of movement due to the wide profile of the control drums, but this also meant that they extended well away from the reflector, especially early in the mission. This meant that, in order to prevent neutron backscatter from hitting the rest of the spacecraft, the shield had to be relatively wide compared to the size of the core – and the shield was not exactly a lightweight system component.
The SNAP-10B reactor was designed to address this problem. It used a similar core to the U-10A/2, with the upgraded hydrogen barrier from the -8, but the reflector was tapered to better fit the profile of the shadow shield, and axially sliding control cylinders would be moved in and out to provide control instead of the rotating drums of the -10A variants. A number of minor reactor changes were needed, and some of the reactor physics parameters changed due to this new control system; but, overall, very few modifications were needed.
The first -10B reactor, the -10B Basic (B-10B), was a very simple and direct evolution of the U-10A/2, with nothing but the reflector and control structures changed to the -10B configuration. Other than a slight drop in power density (to 0.30 kWt/lb), the rest of the performance characteristics of the B-10B were identical to the U-10A/2. This design would have been a simple evolution of the -10A/2, with a slimmer profile to help with payload integration challenges.
The next iteration of the SNAP-10B, the Advanced -10B (A-10B), had options for significant changes to the reactor core and the fuel elements themselves. One thing to keep in mind about these reactors is that they were being designed above and beyond any specific mission needs; and, on top of that, a production schedule hadn’t been laid out for them. This means that many of the design characteristics of these reactors were never “frozen,” which is the point in the design process when the production team of engineers needs to have a basic configuration that won’t change in order to proceed with the program, although obviously many minor changes (and possibly some major ones) would continue to be made up until the system was flight qualified.
Up until now, every SNAP-10 design used a 37 fuel element core, with the only difference occurring in the Upgraded -10A/2 and Basic -10B reactors (which changed the hydrogen barrier ceramic enamel inside the fuel element clad). With the A-10B, however, there were three core size options: the original 37 fuel element core, a medium-sized 55-element core, and a large 85-element core. There were other open questions about the final design as well, including two other major core changes (plus a lot of minor ones). The first option was to add a “getter”: a sheath of highly hydrogen-absorbing metal outside the steel casing of the clad, but still within the active region of the core. While this isn’t as ideal as containing the hydrogen within the U-ZrH itself, the neutron moderation provided by the hydrogen would be lost at a far lower rate. The second option was to change the core geometry itself as the temperature of the core changed, with devices called “Thermal Coefficient Augmenters” (TCA). Two variants were suggested: first, a bellows system driven by NaK core temperature (using ruthenium vapor), which would move a portion of the radial reflector to change the core’s reactivity coefficient; second, securing grids for the fuel elements that would expand as the NaK increased in temperature and contract as the coolant cooled.
Between the options available, with core size, fuel element design, and variable core and fuel element configuration all up in the air, the Advanced SNAP-10B was a wide range of possible reactors, rather than just one. Many of the characteristics of the reactors remained identical, including the fissile fuel itself, the overall core size, the maximum outlet temperature, and others. However, the number of fuel elements in the core alone resulted in a wide range of different power outputs, and whichever core modification the designers ultimately decided upon (getter vs TCA; I haven’t seen any indication that the two were combined) would change what the capabilities of the reactor core would actually be. However, both for simplicity’s sake and due to the very limited documentation available on the SNAP-10B program (other than a general comparison table from the SNAP Systems Capability Study from 1966), we’ll focus on the 85 fuel element versions of the two options: the Getter core and the TCA core.
A final note, which isn’t clear from these tables: each of these reactor cores was nominally optimized to a 100 kWt power output; the additional fuel elements reduced the power density required of the core at any given time, in order to maximize fuel lifetime. Even with the improved hydrogen barriers and the variable core geometry, while these systems CAN offer higher power, it comes at the cost of a shorter (but still minimum one year) life for the reactor system. Because of this, all reported estimates assume a 100 kWt power level unless otherwise stated.
The idea of a hydrogen “getter” was not a new one at the time that it was proposed, but it was one that hadn’t been investigated thoroughly at that point (and is a very niche requirement in terrestrial nuclear engineering). The basic concept is to settle for the second-best option when it comes to hydrogen migration: if you can’t keep the hydrogen in the fuel element itself, then the next best option is keeping it in the active region of the core (where fission is occurring, and neutron moderation is the most directly useful for power production). While this isn’t as good as keeping the hydrogen within the fuel element itself, it’s still far better than the hydrogen either dissolving into the coolant or, worse yet, migrating outside the reactor and into space, where it’s completely useless in terms of reactor dynamics. Of course, there’s a trade-off: because of the interplay between the various aspects of reactor physics and design, it wasn’t practical to change the external geometry of the fuel elements themselves, which means that the only way to add a hydrogen “getter” was to displace some of the fissile fuel itself. There’s definitely an optimization question to be considered; after all, the overall reactivity of the reactor would have to be reduced, because the fuel is worth more in terms of reactivity than the hydrogen that would be lost; but hydrogen containment in the core at end of life means that the system itself would be more predictable and reliable. Especially for a statically controlled system like the A-10B, this increase in behavioral predictability can be worth far more than the reactivity that the additional fuel would offer. Of the materials tested for the “getter” system, yttrium metal was found to be the most effective at the reactor temperatures and radiation fluxes that would be present in the A-10B core.
However, while improvements had been made in the fuel element design to the point that the “getter” program continued until the cancellation of the SNAP-2/10 core experiments, many uncertainties remained as to whether the concept was worth employing in a flight system.
The second option was to vary the core geometry with temperature: the Thermal Coefficient Augmentation (TCA) variant of the A-10B. This would change the reactivity of the reactor mechanically, without requiring active commands from any system outside the core itself. Two options were investigated: a bellows arrangement, and a design for an expanding grid holding the fuel elements themselves.
The first variant used a bellows to move a portion of the reflector out as the temperature increased. This was done using a ruthenium reservoir within the core itself: as the NaK increased in temperature, the ruthenium would boil, pushing a bellows which would move some of the beryllium shims away from the reactor vessel, reducing the overall worth of the radial reflector. While this sounds simple in theory, gas diffusion from a number of different sources (from fission products migrating through the clad to offgassing of various components) meant that the gas in the bellows would not be pure ruthenium vapor. While this could have been accounted for, a lot of study would have needed to be done with a flight-type system to properly model the behavior. The second option would change the distance between the fuel elements themselves, using a base plate with concentric accordion folds for each ring of fuel elements, called the “convoluted baseplate.” As the NaK heated beyond the optimized design temperature, the base plates would expand radially, separating the fuel elements and reducing the reactivity in the core. This involved a different set of materials tradeoffs, with just getting the device constructed causing major headaches. The design used both 316 stainless steel and Hastelloy C in its construction, and was cold annealed. The alternative, hot annealing, resulted in random cracks; and while explosive manufacture was explored, it wasn’t practical at the time to map the shockwave propagation through such a complex structure to ensure reliable construction. While this concept has caused me to think a lot about variable reactor geometry of this sort, there are many problems with the approach (which could have been solved, or could have proven insurmountable).
Major lifetime concerns would include ductility and elasticity changes across the wide range of temperatures that the baseplate would be exposed to; work hardening of the metal, thermal stresses, and neutron bombardment of the base plates would also be significant issues for this concept.
These design options were briefly tested, but most were never fully developed. Because the reactor’s design was never frozen, many engineering challenges remained in every option that had been presented. Also, while I know that a report was written on the SNAP-10B reactor’s design (R. J. Gimera, “SNAP 10B Reactor Conceptual Design,” NAA-SR-10422), I can’t find it… yet. This makes writing about the design difficult, to say the least.
Because of this, and the extreme paucity of documentation on this later design, it’s time to turn to what these innovative designs could have offered when it comes to actual missions.
The Path Not Taken: Missions for SNAP-2, -10A
Every space system has to have a mission, or it will never fly. Both SNAP-2 and SNAP-10 offered a lot for the space program, for crewed and uncrewed missions alike, and what they offered only grew with time. However, due to the priorities of the time, and the fact that many records from these programs appear never to have been digitized, it’s often difficult to point to specific mission proposals for these reactors, and the missions have to be pieced together from scattered data, status reports, and other piecemeal sources.
SNAP-10 was always a lower-powered system, even with its growth to a kWe-class power supply. Because of this, it was always seen as a power supply for unmanned probes, mostly in low Earth orbit, but it would certainly also have been useful for interplanetary studies, which at this point were just appearing on the horizon as practical. Had the SNAPSHOT system worked as planned, the cesium thruster that had been on board the Agena spacecraft would have been an excellent propulsion source for an interplanetary mission. However, due to the long mission times and relatively fragile fuel of the original SNAP-10A, it is unlikely that these missions would have been initially successful, while the SNAP-10A/2 and SNAP-10B systems, with their higher power output and lifetimes, would have been ideal for many interplanetary missions.
As we saw in the US-A program, one of the major advantages that a nuclear reactor offers over photovoltaic cells – which were just starting to be a practical technology at the time – is that it presents very little surface area, so the atmospheric drag that all satellites experience from the thin atmosphere in lower orbits is less of a concern. There are many cases where this lower altitude offers clear benefits, but the vast majority of them deal with image resolution: the lower you are, the clearer your imagery can be with the same sensors. For the Soviets, the ability to get better imagery of US Navy movements in all weather conditions was of strategic importance, leading to the US-A program. For Americans, who had other means of surveillance (and an opponent’s far less capable blue-water navy to track), radar surveillance was not a major focus – although it should be noted that 500 We isn’t going to give you much, if any, resolution, no matter what your altitude.
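The altitude-resolution tradeoff described above can be sketched with the diffraction limit: for a fixed sensor aperture, the smallest resolvable ground feature scales linearly with altitude, so flying lower directly sharpens the imagery. A minimal illustration in Python, with purely hypothetical aperture and wavelength values (none of these numbers come from any US-A or SNAP documentation):

```python
# Illustrative only: Rayleigh-criterion ground resolution for a
# nadir-pointing, diffraction-limited sensor. Halving the altitude
# halves the smallest resolvable feature for the same optics.

def ground_resolution_m(altitude_m: float, wavelength_m: float, aperture_m: float) -> float:
    """Smallest resolvable ground feature (meters) at the given altitude."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

# Hypothetical 1 m optical aperture at a visible wavelength (550 nm):
for alt_km in (250, 500, 1000):
    res = ground_resolution_m(alt_km * 1e3, 550e-9, 1.0)
    print(f"{alt_km:>4} km altitude -> ~{res:.2f} m resolution")
```

The same linear scaling applies to a radar aperture, which is why the US-A satellites flew as low as they could manage.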
One area where SNAP-10A was considered was meteorological satellites. With a growing understanding of how weather could be monitored, and of what types of data orbital systems could provide, the ability to take pictures on orbit with the first generations of digital cameras (which were just coming into existence, and not nearly good enough to interest the intelligence organizations of the time) and transmit the data back to Earth would have provided the best weather tracking capability in the world at the time. By using a low orbit, these satellites would have been able to make the most of the primitive equipment available, and possibly (speculation on my part) have been able to gather rudimentary moisture content data as well.
However, throughout the roughly decade-long SNAP-10A program, there was always the question of “what do you do with 500-1000 We?” Sure, it’s not an insignificant amount of power, even then, but… communications and propulsion, the two things that are most immediately interesting for satellites with reliable power, both have a roughly linear relationship between power level and capability: the more power, the more bandwidth, or delta-v, you have available. Also, the -10A was only ever rated for one year of operations (although it was always suspected it could be limped along for longer), which precluded many other missions.
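The linear power-to-capability relationship can be made concrete for propulsion: an electric thruster’s jet power is half the thrust times the exhaust velocity, so at fixed specific impulse and efficiency, thrust scales directly with input power. A quick sketch, with illustrative thruster figures that are my own assumptions rather than values from any SNAP-era design:

```python
# Sketch of why "more power -> more delta-v capability" is roughly
# linear: P_jet = 0.5 * T * v_e, so T = 2 * eta * P / v_e, and for a
# fixed burn duration acceleration scales with thrust.

G0 = 9.80665  # standard gravity, m/s^2

def thrust_newtons(power_w: float, isp_s: float, efficiency: float) -> float:
    v_e = G0 * isp_s  # effective exhaust velocity, m/s
    return 2.0 * efficiency * power_w / v_e

# Hypothetical cesium ion thruster: Isp ~5000 s, 60% efficient
for p in (500.0, 1000.0, 5000.0):
    print(f"{p:>6.0f} We -> {thrust_newtons(p, 5000, 0.6) * 1000:.2f} mN")
```

Doubling the electrical power doubles the available thrust, which is exactly why the question “what do you do with 500-1000 We?” kept dogging the program.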
The later SNAP-10A/2 and -10B designs, with their multi-kilowatt output and years-long lifespans, offered far more flexibility, but by this point many in the AEC, the US Air Force, NASA, and elsewhere were no longer very interested in the program, since newer, more capable reactor designs were becoming available (we’ll look at some of those in the next post). While the SNAP-10A was the only flight-qualified and -tested reactor design (and the failures on the mission were shown to be the fault of the Agena spacecraft, not the reactor), it was destined to fade into obscurity.
SNAP-10A was always the smallest of the reactors, and also the least powerful. What about the SNAP-2, the 3-6 kWe reactor system?
Initial planning for the SNAP-2 offered many options, with communications satellites being mentioned as an option early on – especially if the reactor lifetime could be extended. While not designed specifically for electric propulsion, it could have utilized that capability either on orbit around the Earth or for interplanetary missions. Other options were also proposed, but one was seized on early: a space station.
At the time, most space station designs were nuclear powered, and there were many different configurations. However, two were the most common: the first was the simple cylinder, launched as a single piece (although multiple-module designs keeping the basic cylinder shape were also proposed), which would finally be realized with the Skylab mission; the second was a torus-shaped space station, proposed almost half a century earlier by Tsiolkovsky and popularized at the time by Wernher von Braun. SNAP-2 was adapted to both of these types of stations. Sadly, while I can find one paper on the use of the SNAP-2 on a station, it focuses exclusively on the reactor system, and doesn’t use a particular space station design, instead laying out the ground limits of the use of the reactor on each type of station, and especially the shielding requirements for each station’s geometry. It was also noted that the reactors could be clustered, providing up to 11 kWe of power for a space station without significant change to the radiation shield geometry. We’ll look at radiation shielding in a couple of posts, and examine the particulars of these designs there.
Since space stations were something that NASA didn’t have the budget for at the time, most designs remained vaguely defined, without much funding or impetus within either NASA or the US Air Force (although SNAP-2 would definitely have been an option for the USAF’s Manned Orbiting Laboratory program). By the time NASA was seriously looking at space stations as a major funding focus, the SNAP-8-derived Advanced Zirconium Hydride reactor, and later the SNAP-50 (which we’ll look at in the next post), offered more capability than the SNAP-2. Once again, the lack of a mission spelled the doom of the SNAP-2 reactor.
The SNAP-2 reactor met its piecemeal fate even earlier than the SNAP-10A, but oddly enough both the reactor and the power conversion system lasted just as long as the SNAP-10A did. The reactor core for the SNAP-2 became the SNAP-10A/2 core, and the CRU power conversion system continued under development until after the reactor cores had been canceled. However, mention of the SNAP-2 as a system disappears in the literature around 1966, while the -2/10A core and CRU power conversion system continued until the late 1960s and late 1970s, respectively.
The Legacy of The Early SNAP Reactors
The SNAP program was canceled in 1971 (with one ongoing exception), after flying a single reactor which was operational for 43 days, and after conducting over five years of fission-powered testing on the ground. The death of the program was slow and drawn out: the US Air Force canceled its program requirement for the SNAP-10A in 1963 (before the SNAPSHOT mission even launched), SNAP-2 reactor development was canceled in 1967, all SNAP reactors (including the SNAP-8, which we’ll look at next week) were canceled by 1974, and the CRU power conversion system continued until 1979 as a separate internal project at Rockwell International, NASA-supported but never fully funded.
The promise of SNAP was not enough to save the program from the massive cuts to space programs, both for NASA and the US Air Force, that fell even as humanity stepped onto the Moon for the first time. This is an all-too-common fate, both in advanced nuclear reactor engineering and design as well as aerospace engineering. As one of the engineers who worked on the Molten Salt Reactor Experiment noted in a recent documentary on that technology, “everything I ever worked on got canceled.”
However, this does not mean that the SNAP-2/10A programs were useless, or that nothing except a permanently shut down reactor in orbit was achieved. In fact, the SNAP program has left a lasting mark on the astronuclear engineering world, and one that is still felt today. The design of the SNAP-2/10A core, and the challenges that were faced with both this reactor core and the SNAP-8 core informed hydride fuel element development, including the thermal limits of this fuel form, hydrogen migration mitigation strategies, and materials and modeling for multiple burnable poison options for many different fuel types. The thermoelectric conversion system (germanium-silicon) became a common one for high-temperature thermoelectric power conversion, both for power conversion and for thermal testing equipment. Many other materials and systems that were used in this reactor system continued to be developed through other programs.
Possibly the most broad and enduring legacy of this program is in the realm of launch safety, flight safety, and operational paradigms for crewed astronuclear power systems. The foundation of the launch and operational safety guidelines that are used today, for both fission power systems and radioisotope thermoelectric generators, were laid out, refined, or strongly informed by the SNAPSHOT and Space Reactor Safety program – a subject for a future web page, or possibly a blog post. From the ground handling of a nuclear reactor being integrated to a spacecraft, to launch safety and abort behavior, to characterizing nuclear reactor behavior if it falls into the ocean, to operating crewed space stations with on-board nuclear power plants, the SNAP-2/10A program literally wrote the book on how to operate a nuclear power supply for a spacecraft.
While the reactors themselves never flew again, nor did their direct descendants in design, the SNAP reactors formed the foundation for astronuclear engineering of fission power plants for decades. When we start to launch nuclear power systems in the future, these studies, and the carefully studied lessons of the program, will continue to offer lessons for future mission planners.
More Coming Soon!
The SNAP program extended well beyond the SNAP-2/10A program. The SNAP-8 reactor, started in 1959, was the first astronuclear design specifically developed for a nuclear electric propulsion spacecraft. It evolved into several different reactors, notably the Advanced ZrH reactor, which remained the preferred power option for NASA’s nascent modular space station through the mid-to-late 1970s, due to its ability to be effectively shielded from all angles. Its eventual replacement, the SNAP-50 reactor, offered megawatts of power using technology from the Aircraft Nuclear Propulsion program. Many other designs were proposed in this time period, including the SP-100 reactor, the ancestor of Kilopower (the SABRE heat pipe cooled reactor concept), as well as the first American in-core thermionic power system, advances in fuel element designs, and many other innovations.
Originally, these concepts were included in this blog post, but this post quickly expanded to the point that there simply wasn’t room for them. While some of the upcoming post has already been written, and a lot of the research has been done, this next post is going to be a long one as well. Because of this, I don’t know exactly when the post will end up being completed.
After we look at the reactor programs from the 1950s to the late 1980s, we’ll look at NASA and Rosatom’s collaboration on the TOPAZ-II reactor program, and the more recent history of astronuclear designs, from SDI through the Fission Surface Power program. We’ll finish up the series by looking at the most recent power systems from around the world, from JIMO to Kilopower to the new Russian on-orbit nuclear electric tug.
After this, we’ll look at shielding for astronuclear power plants, and possibly ground handling, launch safety, and launch abort considerations, then move on to power conversion systems, which will be a long series of posts due to the sheer number of options available.
These next posts are more research-intensive than usual, even for this blog, so while I’ll be hard at work on the next posts, it may be a bit more time than usual before these posts come out.
Hello, and welcome back to Beyond NERVA, where we’re getting back into issues directly related to nuclear power in space, rather than how that power is used (as we’ve examined in our last three blog posts on electric propulsion)! However, the new Electric Propulsion page is up on the website, including a summary of all the information that we’ve covered in the last three blog posts, which you can find here [Insert Link]! Also, each type of thruster has its own page as well for easier reference, which are all linked on that summary page! Make sure to check it out!
In this blog series, we’re going to look at the reactor cores of nuclear electric power systems themselves. While we’ve looked at a number of designs for nuclear thermal reactor cores (insert link for NTR-S page), those cores differ in several ways from ones designed purely for electricity production. Perhaps the biggest difference is operating temperature, and therefore core lifetime: because the coolant doesn’t have to be hydrogen, and because the heat output doesn’t have to be pushed as high as possible (there will be a LOT more discussion of this in the next series, on power conversion systems), the reactor can be run at cooler temperatures. This prevents a large number of thermally related headaches, makes far more materials available for the reactor core, and generally simplifies matters.
Nuclear electric power systems are also unique in that they’re the only type of fission powered electrical supply system that’s ever flown. We’ve mentioned those systems briefly before, but we’ll look at some of them more in depth today, and in the next post as well. While there have been many reactor designs proposed over the years, we’re going to focus on the programs developed by the USSR during the Cold War, since they have the longest operational history of any sort of fission-powered heat source in space.
The United States was the first to fly a nuclear reactor in space, on the SNAPSHOT mission in 1965; but, sadly, no other American reactor has ever been placed on a spacecraft. The Soviet program was far longer-running, flying reactors almost continuously from 1970 to 1988, often on two spacecraft at once. With the fall of the Berlin Wall and the end of the Cold War, the Soviet astronuclear program ended, and Russia hasn’t flown another nuclear reactor since. There was a time, though, in the 1990s, that a mission was on the books (but never funded at a sufficient level to ever have flown) to use US-purchased, Russian-built nuclear reactors for an American crewed Moon base!
History of Soviet In-Space Fission Power Systems
From the beginning, the Soviet in-space nuclear power designers focused on two different design concepts for their power systems: single-cell and multi-cell thermionic fuel elements. The biggest difference between the two is how many fuel elements make up each thermionic fuel element assembly: the single-cell design uses one fuel element, while the multi-cell option uses several, separated by passive spacers, moderator blocks, or thermionic power conversion systems. Both designs were extensively researched, and eventually flown, but the initial research focused on the multi-cell approach with the Romashka (sometimes translated as “Chamomile,” other times as “Daisy”) reactor. This line of development led to the BES-5 (Bouk, or “Beech”) flight reactor and to the multi-cell TEU-5 TOPOL (TOPAZ-1), which flew twice, while the single-cell variation led to the ENISY (TOPAZ-2) reactor, which was later purchased by the US. We won’t be looking in-depth at the ENISY reactor in this post, despite its close relation to the TOPOL, because later in the blog series we’ll focus on it far more, including the time the Americans bought two of them (and took out an option on another four) and how it could have powered an American lunar base in the 1990s, had the funding been available.
As is our wont here, let’s begin at the beginning, with one of Korolev’s pet projects and the first in-space reactor design of the USSR: Romashka.
Romashka: The Reactor that Started it All
The Romashka (daisy or chamomile in English) was a Soviet adaptation of an American idea first developed at Los Alamos in the mid-1950s: in-core thermionic energy conversion. We’ll be looking at thermionics much more in-depth in the next post on power conversion systems, but the short version is that it combines a heat pipe (which we looked at in the Kilopower posts) with the tendency for an incandescent light to develop a static charge on its bulb. More on the conversion system itself in the next post; for now, it’s worth noting that this is a way to actually put your power conversion system inside the core of your reactor, and, as far as I’ve seen, it’s the only one.
Design of this reactor started in 1957, following a trip by Soviet scientists to Los Alamos (where thermionic energy conversion had been proposed, but not yet tested). The design offered the potential for no moving parts and no pumps, needing only conductive cooling through the reactor body for thermal management; all very attractive properties for a reactor that could never be maintained over its lifetime. Work began at the I.V. Kurchatov Institute of Atomic Energy in Moscow, but by the end of the program many design bureaus were involved in the conceptual design, manufacturing, and testing of this reactor.
(Images: core and moderator blocks; radial reflector and radiator)
A series of disc-shaped uranium carbide (UC2) fuel elements (90% enriched 235U) was used in this reactor, with holes drilled through the center of each disc and roughly halfway between the central hole and the edge of the fuel element. Both sets of holes were used to thread the thermionic power conversion system through the core of the reactor. Spacing of the fuel elements was provided by a mixture of beryllium oxide and graphite, which also slightly moderated the neutron spectrum – though the reactor remained a fast-spectrum design. Surrounding the reactor core itself, both radially and at the ends of the core, were beryllium reflectors. Boron and boron nitride control rods placed in the radial reflector and base axial reflector were used to maintain reactor control through a hydraulic system; however, a large negative thermal reactivity coefficient in the core was also meant to largely self-regulate the reactor during normal operations. Finally, the reactor was surrounded by a finned steel casing that provided all heat rejection through passive radiation – no pumps required! The nominal operating temperature of the reactor was meant to be between 1200 C and 1800 C at the center of the core, and about 800 C at the edges of the core at the ends of the cylinder.
Construction and warm-critical tests were completed by April 1966, and testing began in Moscow. There are some indications that materials incompatibilities in the first Romashka built led to the need to rebuild it with different materials, but it’s unclear what was changed (the only other reference to this, besides a CIA document, is that the thermionic fuel element materials in the reactor were changed, so that may be what occurred – more on that in the direct power conversion post). This reactor underwent about 15,000 hours of testing, and in that time it produced about 6,100 kWh of electricity at a relatively constant 40 kW of thermal and 500-800 W of electrical power (1.5%-2% energy conversion efficiency). Initial testing (about 1,200 hours) rejected heat into a vacuum chamber using only the fins’ radiative cooling capability, and other particulars of reactor behavior were tested as well, including the core’s self-regulation capability. Later tests (about 14,000 hours) were done using natural convection in a helium environment. During these tests, thermal deformation of the core and the reflector led to a reduction in reactivity, which was compensated for with the control system. By the end of the test cycle, electrical power production had dropped by 25%, and overall reactivity had dropped by 30%. Maximum sustained power production was about 450 W at 88 amps with all thermionic converters active, and pulsed power of up to 800 W was observed at the beginning of the actively controlled tests.
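The conversion efficiency quoted above follows directly from the thermal and electrical power figures; a quick back-of-the-envelope check of the Romashka test numbers:

```python
# Sanity check on the Romashka figures quoted above: 40 kW thermal,
# 500-800 W electrical, and ~6,100 kWh produced over ~15,000 hours.

P_THERMAL_W = 40_000.0

# Efficiency at the low and high ends of the electrical output range:
for p_elec in (500.0, 800.0):
    print(f"{p_elec:.0f} We / 40 kWt -> {p_elec / P_THERMAL_W:.2%} efficiency")

# Average electrical output over the whole test campaign (kWh / h -> W),
# which includes the 25% power degradation noted at end of test:
avg_we = 6_100.0 / 15_000.0 * 1000.0
print(f"campaign average output: ~{avg_we:.0f} We")
```

The campaign-average figure comes out a bit below the sustained range, which is consistent with the power degradation observed over the test cycle.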
Korolev planned to pair this reactor with a pulsed plasma thruster (based on the time period, possibly a pulsed inductive thruster, or PIT, which we looked at briefly in the second blog post on electric propulsion systems). However, two things conspired to end the Romashka system: Korolev’s death in 1966 meant the loss of its most powerful proponent, and development of the more powerful, more efficient Bouk reactor advanced enough to make that design available for spaceflight in the same time frame.
While there were plans to adapt Romashka into a small power plant for remote outposts (the core was known as “Gamma”), the testing program ended in 1966, to be supplanted by the BES-5 “Beech”. The legacy of the Romashka reactor lives on, however, as the first successful design of a thermionic energy conversion system for in-core use, a test-bed for the development and testing of thermionic energy conversion materials (more on that in the first power conversion system post); and it remains the father and grandfather of all Russian in-space reactors to ever fly.
Bouk: The Most Flown Nuclear Reactor in History
The Bouk (“Beech”) reactor, also known as the “Buk” or BES-5 reactor, is arguably the most successful astronuclear design in history. Begun in 1960 by the Krasnaya Zvezda Scientific and Production Association, this reactor promised greater power output than the Romashka, at the cost of additional complexity and a requirement for pumped coolant. From 1963 to 1969, testing of the fuel elements and reactor core was carried out without the thermoelectric fuel elements (TFEs), which were still under development. From 1968 to 1970, three reactor cores with full TFEs were tested at Baikal; with testing successfully completed, the reactor design was prepared for launch, integrated into the Upravlenniye Sputnik Aktivny (US-A; in the West, RORSAT, for Radar Ocean Reconnaissance SATellite) spacecraft, designed to use radar for naval surveillance.
Rather than stacked discs of UC2, the BES-5 used 79 fuel rods of a uranium-molybdenum alloy (90% enriched, 30 kg total uranium), encased in high-temperature steel. NaK was used as the reactor coolant, circulated by an electromagnetic pump powered by 19 of the fuel assemblies. The reactor produced over 100 kW of thermal energy, and after conversion by the in-core germanium-silicon thermoelectric elements (which exploit the voltage that develops across a junction of two dissimilar conductors when a temperature gradient is applied across it; again, more in a later post), a maximum of 5 kW of electrical energy was available for the spacecraft’s instrumentation. The fact that this core used thermoelectric rather than thermionic conversion is a good indicator that the common use of the term TOPAZ for this reactor is incorrect. Reactor control was provided by six beryllium reflector elements that would be slowly lowered through channels in the radial reflector over the reactor’s life, increasing the local neutron flux to account for the buildup of neutron poisons.
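For those unfamiliar with thermoelectric conversion, the parenthetical above can be sketched numerically: the open-circuit voltage of a thermoelectric couple is roughly the Seebeck coefficient times the temperature difference, summed over however many couples are wired in series. The coefficient and string size below are generic assumed values for illustration, not BES-5 specifications:

```python
# Minimal sketch of the Seebeck effect behind thermoelectric conversion.
# The coefficient is a rough, assumed textbook-order value for a SiGe
# couple, NOT a BES-5 figure.

SEEBECK_SIGE_V_PER_K = 300e-6  # ~300 uV/K per couple (assumed)

def open_circuit_voltage(t_hot_k: float, t_cold_k: float, n_couples: int) -> float:
    """Open-circuit voltage of n series-wired thermoelectric couples."""
    return SEEBECK_SIGE_V_PER_K * (t_hot_k - t_cold_k) * n_couples

# e.g. a hypothetical 100-couple string across a 500 K gradient:
print(f"{open_circuit_voltage(1000.0, 500.0, 100):.1f} V open-circuit")
```

Because each couple only produces millivolts, practical converters like the BES-5’s wire many couples in series to reach useful bus voltages.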
One unique aspect to the BES-5 is that the reactor was able to decommission itself at end of life (although this wasn’t always successful) by moving the reactor to a higher orbit and then ejecting the end reflector and fuel assemblies (which were subcritical at time of assembly, and required the Be control rods to be inserted to reach delayed criticality), as well as dumping the NaK coolant overboard. This ensured that the reactor core would not re-enter the atmosphere (although there were two notable exceptions to this, and one late unexpected success). As an additional safety measure following the failure of KOSMOS-954 (more on that below), the reactor was redesigned so that the fuel elements would burn up upon re-entry, diluting the radioactive material to the point that no significant increase in radiation would occur. Over the reactor’s long operational history (31 BES-5 reactors were launched), the lifetime of the reactors was constantly extended, beginning with a lifetime of just 110 minutes (used for radar broadcast testing) to up to 135 days of operational life.
The first BES-5 to be launched was serial number 37, on the KOSMOS-367 satellite on October 3, 1970 (there’s some confusion on this score, with another source claiming the first was KOSMOS-469, launched on 25 December 1971). After a very short (110-minute) operational life, the spacecraft was moved into a graveyard orbit and the reactor ejected due to overheating in the reactor core. Three more spacecraft (KOSMOS-402, -469, and -516) were launched over the next two years, with -469 possibly being the first to carry the 8.2 GHz side-looking radar system that the power plant was selected for. Over time, the US-A spacecraft were launched in parallel, co-planar orbits, usually deployed in pairs with closely attending Russian US-P electronics intelligence satellites (for more on the operational use of the US-A, check out Sven Grahn’s excellent blog on the program’s operational history).
The US-A program wasn’t without its failures, sadly, and one led to one of the biggest radiological cleanup missions in the history of nuclear power. On September 18, 1977, a Tsyklon-2 rocket launched from Baikonur Cosmodrome in Kazakhstan carrying the KOSMOS-954 US-A spacecraft into an orbital inclination of 65 degrees. By December, the spacecraft’s orbital maneuvering had become erratic, and Soviet officials informed US officials that they had lost control of the satellite before the reactor core could be moved into its designated graveyard orbit. On January 24, 1978, the satellite re-entered over Canada, spreading debris over a 600 km long section of the country. Operation Morning Light, the resulting joint Canadian and US DOE program, recovered the debris that could be located over several months, in an effort that involved hundreds of people from the Canadian government and armed forces, the DOE, the NEST teams that were then available, and US Military Airlift Command. No fatalities or radiation poisoning cases were reported as a result of KOSMOS-954’s unplanned re-entry, although the remote nature of the re-entry was probably as much of a help as a challenge in this regard. A second spacecraft, KOSMOS-1402, also had its fuel elements re-enter the atmosphere following a failure of the spacecraft to ascend into its graveyard orbit. The core re-entered the atmosphere on 23 January 1983, breaking up over the North Atlantic, north of England. No fragments of this reactor were ever recovered, and no significant increase in radioactivity was detected as a result of this unplanned re-entry.
These two incidents caused significant delays in the US-A program, and subsequent redesigns of the reactor as well. However, launches of this system continued until March 14, 1988, with the KOSMOS-1932 mission, which was moved into a graveyard orbit on 20 May 1988 after a mission time of 66 days. The fate of its immediate predecessor, KOSMOS-1900, showed that the additional safety mechanisms for the US-A spacecraft’s reactor were successful: despite an apparent loss of control of the spacecraft, an increasingly eccentric orbit, and the buildup of aerodynamic forces, the reactor core was boosted to a stable graveyard orbit, with the maneuver completed on 17 October 1988. The main body of the spacecraft had re-entered over the Indian Ocean 16 days earlier.
One interesting note on the controversy surrounding these reactor cores’ re-entry into Earth’s atmosphere is that the US planned on doing the exact same thing with the SNAP-10A reactors. The design was supposed to orbit for long enough (on the order of hundreds of years) for the short-lived fission products to decay away, and then the entire reactor would self-disassemble through a combination of mechanical, explosive, and aerodynamic systems, burning up in the upper atmosphere. While the amount of radioactivity that would be added to the atmosphere would have been negligible, these accidents showed that this disposal method would not be acceptable, further complicating the American astronuclear program as well as the Soviet one. The SNAPSHOT reactor is still in orbit, and is expected to remain there for 2,800 years; but, considering the fallout of these accidents, retrieval or boosting to a graveyard orbit may be a future mission necessity for this reactor.
The US-A spacecraft demonstrated in-space nuclear fission power, and serial fission power plant production, for over two decades. Despite two major failures resulting in re-entry of the reactor core, the US-A program managed successful operation of the BES-5 reactor for 29 missions, and minimal impact from the two failures. The rest of the BES-5 cores remain parked in graveyard orbits, where they will remain for many hundreds of years until the radioactivity has dropped to natural background radiation.
There is one long-lasting legacy of the BES-5 program for in-orbit spaceflight, however: the ejected NaK coolant. The coolant droplets remain a cratering hazard for spacecraft in certain orbits, though they are not thought to be an object multiplication hazard. It is doubtful that the same core ejection system would be used in a newly designed astronuclear reactor, but this legacy lives on as another example of how little humanity understood, at the time, about the danger of a Kessler syndrome situation.
While this program was not 100% successful, either from a mission success point of view or in terms of leaving no ongoing impact from its operations, the more than 25 years of BES-5 operations remain to this day the most extensive and successful record of any astronuclear fission-powered design, and it meets or exceeds even the service histories of the RTG designs deployed by any country.
TOPOL: The Most Powerful Reactor Ever Flown
The TEU-5 TOPOL (TOPAZ-1) program is the second type of Soviet reactor to fly; and, although it only flew twice, it can be argued to have been even more successful than the BES-5 reactor design. The TEU-5 was the return of the in-core thermionic power conversion system that was first utilized in Romashka; and, just as the Bouk was a step above the Romashka, the Topol was a step beyond that. Thermionic conversion remained more attractive than thermoelectric in terms of wider range of operating capabilities, increased temperature potential, and more forgiving materials requirements, but thermoelectric conversion was able to be readied for flight first. Because of this, and because of the inertia that any flight-tested and more-refined (from a programmatic and serial production sense) program has over one that has yet to fly, the BES-5 flew for over a decade before the TEU-5 would take to orbit.
Despite the different structure, and much higher power, of the TEU-5, the design was able to fulfill the same role of ocean radar reconnaissance; but, initially, it was meant to be a powerful on-orbit TV transmission station. The major advantage of the TEU-5 over the BES-5 is that, due to its higher power level, it wasn’t forced to be in a very low orbit, which increased atmospheric drag, caused the dry mass of the craft to be severely reduced in order to allow for more propellant to be on board, and created a lot of complexity in terms of reactor decommissioning and disposal. Following the KOSMOS-954 and -1402 accidents, the low-flying profile of the US-A satellite was no longer available for astronuclear reactors, and so the orbital altitude increased. TEU-5 offered the capability to get useful image resolution at this higher altitude due to its higher power, and improvements to the (never flown, but ground tested) radar systems.
The TOPOL program was begun in the 1960s, under the Russian acronym for Thermionic Experimental Converter in the Active Zone (which transliterates as "Topaz"), but ground testing didn't begin until 1970. This was a multi-cell thermionic fuel element design, similar in basic concept to Romashka but far more complex. Instead of a single stack of disc-shaped fuel elements, a "garland" of fuel elements was formed into a thermionic fuel element. The fissile fuel was surrounded by a thimble of tungsten or molybdenum, which formed the cathode of the thermionic converter, while the anode of the converter was a thin niobium tube; as with most thermionic converters, the gap between cathode and anode was filled with cesium vapor. The anode was cooled with pumped NaK, although some sources indicate that lithium was also considered as a coolant for higher-powered versions of the reactor.
The differences between the BES-5 and TEU-5 went far beyond the power conversion system. Instead of being a fast reactor, the Topaz was designed for the thermal neutron spectrum, and as such used zirconium hydride for in-core moderation (which also created a thermal limitation for the materials in the core; however, hydrogen loss mitigation measures were taken throughout the development process). Rather than using the metal fuels of its predecessor, or the carbides of the Romashka, the Topol used a material far more familiar to nuclear power plant operators: uranium oxide (UO2), enriched to 90% 235U. This, along with reactor core geometry changes, allowed the amount of uranium needed for the core to drop from 30 kg in the BES-5 to 11.5 kg. NaK remained the coolant, due to its low melting temperature, good thermal conductivity, and neutronic transparency. The cathode temperature in the TEU-5 was in the range of 1,500-1,800°C, which resulted in an electrical power output of up to 10 kW.
One of the most technically challenging parts of this reactor's design was the cesium management system. The metal would only be a gas inside the core; electromagnetic pumps were used to move the liquid through a series of filters, heaters, and pipes. Because the purity of the cesium had a large impact on the efficiency of the thermionic elements, a number of filters were installed, and gaseous fission products were separated out and vented into space.
The first flight of the TEU-5 was on the KOSMOS-1818 satellite, launched on February 1st, 1987, onto a significantly different orbital trajectory than the rest of the US-A series of spacecraft, despite the fact that superficially it appeared to be quite similar. This was because it was the test-bed of a new type of US-A spacecraft, the US-AM, taking advantage of not only the more powerful nuclear reactor but also employing numerous other new technologies. The USSR eventually announced that the spacecraft's name was Plasma-A, and that it was a technology demonstrator for a number of new systems. These included six SPT-70 Hall thrusters for maneuvering and reaction control, and a suite of electromagnetic and sun-finding sensors. Some sources indicate that part of the mission for the spacecraft was the development of a magnetospherically-based navigation system for the USSR. An additional advantage of the higher orbit of this spacecraft was that it eliminated the need for the ascent stage for the reactor core and fuel elements, saving spacecraft mass for completing its mission. It had an operational life of 187 days, before the reactor was placed in its graveyard orbit, and the remainder of the spacecraft was allowed to re-enter the atmosphere as its orbit decayed.
The second Plasma-A (KOSMOS-1867) launch was on July 10th, 1987. While the initial flight profile was remarkably similar to the original Plasma-A satellite, the later portions of the mission showed a much larger variation in orbital period, possibly indicating more extensive testing of the thrusters. It was operational for just over a year before it, too, was decommissioned.
Neither of the TEU-5 launches carried radar equipment aboard; but, considering the cancellation of the program also coincided with the fall of the Soviet Union, it’s possible that the increased power output of the TEU-5 would have allowed acceptable radar resolution from this higher orbit (the US-A spacecraft’s orbit was determined by the distance and power requirements of its radar system, and due to the higher aerodynamic drag also significantly limited the lifetime of each spacecraft).
After decommissioning, the TEU-5 reactors experienced similar problems with released NaK coolant as the BES-5s. There was one additional complication from the decommissioning of these larger reactor cores, however, which led to some confusion during the Solar Maximum Mission (SMM) study of solar behavior. Due to the higher altitude at which the reactor operated at full power, and the behavior of the materials the reactor was made of, what is usually a minor curiosity of reactor physics became noticeable to astrophysical and heliophysical researchers: when some materials are bombarded by a sufficiently high gamma flux, they eject electron-positron pairs, which were then trapped in the Earth's magnetosphere. While these radiation fluxes are minuscule, and unable to adversely affect living tissue, for scientists carefully studying solar behavior during the solar maximum the difference in the number of positrons was not only noticeable, but statistically significant. Both the SMM satellite and one other (Ginga, a Japanese X-ray telescope launched in 1987, which reentered in 1991) have been confirmed to have experienced instrument interference from either the gamma flux or the resulting positron emissions of the two flown TEU-5 reactors. While this problem affected only a very small number of missions, should astronuclear reactors become more commonly used in orbit, these types of emissions will need to be taken into account for future astrophysical missions.
The Topol program as a whole would survive the collapse of the Soviet Union, but just as with the BES-5, the TEU-5 never flew again after the Berlin Wall came down. KOSMOS-1867 was the last TEU-5 reactor, and the last US-AM satellite, to fly.
ENISY, The Final Soviet Reactor
The single-element thermionic reactor concept never went away. In fact, it remained in side-by-side development with the TOPOL reactor, and shared many of the basic characteristics, but was not ready in as timely a fashion as TOPOL was. The program was begun in 1967, with a total of 26 units built.
ENISY was seen by Soviet planners as the logical extension of the TEU-5 program, and in many ways the two reactor designs are linked. While the TEU-5 was designed for high-powered radar reconnaissance, the ENISY reactor was designed to power a communications and TV broadcast satellite. The amount of data that can be transmitted is directly proportional to the amount of power available, and abundant power remains one of the most attractive advantages that astronuclear power plants offer to deep space probes (along with propulsion).
We’ll look at this design more in a later post, but it’s important to mention here since it is, in many ways, a direct evolution of the TEU-5. One nice thing about this reactor is that, due to the geometry of the reactor, its non-nuclear components were able to be tested as a unit without fissile fuel. Instead, induction heating units of the same size as the fuel elements could be slid into the location that the fuel rods would be for preflight testing without issues of neutron activation and material degradation due to the radiation flux.
This capability was demonstrated at the 8th Symposium on Space Nuclear Power Systems in Albuquerque, NM, and led to the US purchasing two already-tested units from Russia (numbers V-71 and I-21U), with a buy option taken out on an additional four units, if needed. This purchase included technical information on the fuel elements, and offers of assistance from Russia in fabricating the fuel elements, but no actual fuel was sold. This reactor design would form the core of the American crewed lunar base concept of the 1990s under the Space Exploration Initiative, as well as the core of a proposed technology demonstration deep space probe, but those programs never reached fruition.
We’ll look at this design in our usual depth in a couple blog posts. For now, it’s worth noting that this design reached flight-ready status; but, due to the financial situation of Russia after the collapse of the USSR, the increased availability of high-powered photovoltaic communications satellites, and the lack of funding for an American astronuclear flight test program, this reactor never achieved orbit as its predecessors did.
The Legacy of the USSR Astronuclear Program
The USSR flew more nuclear power plants than the rest of the world combined: 33 reactors to the United States' one. Their program focused on an area of power generation that continues to hold great promise for the future, and in many ways helped define the problem for the rest of the world: in-core direct power conversion (something we'll talk more about in the power conversion series). Even the failures of the program taught the world much about the challenges of astronuclear design, and changed the face of what is and isn't acceptable when it comes to flying a nuclear reactor in Earth orbit. The ENISY reactor went on to be the preferred power plant for American lunar base concepts for over a decade, and remains the only astronuclear design that's been flight-certified by multiple countries.
Russia continues to build on the experience and expertise gained during the Romashka, BES-5, TEU-5, and ENISY programs. A recent test of a heat rejection system that offers far higher heat rejection capacity for its mass than any that has flown to date (a liquid droplet radiator, a concept we'll cover in the thermal management post coming up in a few months), their focus on high-power Hall thrusters, and their design for an on-orbit nuclear electric tug with a far more powerful reactor than any we looked at today (1 MWe, and depending on the power conversion system likely between 2 and 5 MWt) all show that this experience has not been shoved into a closet and left to gather dust, but continues to be developed to advance the future of spaceflight.
More Coming Soon!
This post focused on the USSR's and Russia's astronuclear power plant expertise and operational history, a subject about which very little has been written in English (outside a number of reports, mostly focusing on the ENISY/TOPAZ-2 reactor), and one that has long fascinated me. However, the USSR wasn't the only country pursuing the idea, and wasn't even the first to fly a reactor; it was just the most successful at building an ongoing astronuclear program.
The next post (which might be split into two due to the sheer number of fission power plant designs proposed in the US) is on the American programs from the same era, the Systems for Nuclear Auxiliary Power, or SNAP, series of reactors (if split, the first post will cover SNAP-2, -10A, SNAPSHOT, -8, and the three reactors that evolved from SNAP-8, with SNAP-50/SPUR, SABRE, SP-100, and possibly a couple more, as well as the ENISY/TOPAZ II US-USSR TSET/NEP Space Test Program/lunar base program, in the second). While the majority of the SNAP designs that were used were radioisotope thermoelectric generators, the ones that we'll be focusing on are the fission power plants: the SNAP-2, SNAP-8, SNAP-10A (the first reactor to be launched into orbit), and the SNAP-50/SPUR reactor.
Following that, we'll wrap up our look at the history of astronuclear electric power plants (the reactors themselves, at least) with a look at the designs proposed for the Strategic Defense Initiative (Reagan's "Star Wars" program), a return to the Russian-designed reactor which would have powered an American lunar base, had the funding for the base been available (ENISY), and the designs that rounded out the 20th century's exploration of this fascinating and promising concept.
We’ll do one last post on NEP reactor cores looking at more recent designs from the last twenty years up to the present time, including the JIMO mission and a look at where Kilopower stands today, and then move on to power conversion systems in what’s likely to be another long series. As it stands that one will have a post on direct energy conversion, one on general heat engines and Stirling power conversion systems, one on Rankine cycle power conversion systems, one on Brayton cycle systems (including the ever-popular, never-realized, supercritical CO2 turbines), one on AMTEC and magnetohydrodynamic power conversion systems (possibly with a couple other non-mechanical heat engines as well), and a wrap up of the concepts, looking at which designs work best for which power levels and mission types. After that, it’ll be on to: heat rejection systems, for another multi-part series; a post on NEP ship and mission design; and, finally, one on bimodal NTR/NEP systems, looking at how to get the thrust of an NTR when it’s convenient and the efficiency of an NEP system when it’s most useful.
Hello, and welcome back to the Beyond NERVA blog! Before we get started on today’s blog post, it has come to my attention that many people, even regulars to this blog, have been missing out on a lot of the content available here at BN. While the blog is a major focus of Beyond NERVA, it is far from the only thing that’s going on at beyondnerva.wordpress.com. There is a host of topical websites available, with a combination of content that’s been covered in this blog, as well as other areas that I didn’t have space for in the blog posts, didn’t fit anywhere, or that I discovered after I wrote the post on a subject. Each page is updated intermittently, depending on information received, time available, and other factors; but I will start posting on the homepage when each page is updated or added to make it easier to see what’s new, what’s updated, and what’s interesting. Two pages that have had major overhauls recently are the Fuel Elements page, and the Test Stands and Equipment page, so I’d encourage you to check them out. We also have the BN Facebook group, where I post page updates, interesting papers, and occasional 3D images (and now movie clips!) for the forthcoming YT channel (still a ways off yet, but work is continuing). Please check out the rest of the site, and come by to join the conversation on FB! With the community announcements out of the way, let’s get on to today’s blog post!
Fission-Based Electric Power Systems
Today, we’re going to start looking at another way to use nuclear propulsion in space. One not directly using the heat of the reactor like we’ve been examining for the last few months. Yes, today we’re beginning to look at nuclear electric propulsion. This will be an overview of the technology and options available, with a focus on the reactor and power conversion system options. This isn’t a new topic on this blog by any stretch of the imagination. Kilopower has been proposed as a spacecraft power supply to power electric thrusters; and, while this isn’t considered the most attractive option to many in NASA (they prefer to use the reactor to power surface installations, especially for crewed missions), this option is still being explored, researched, and utilized.
Most people that have looked into in-space nuclear power know that using a nuclear reactor for electrical power is nothing new. The US has flown one nuclear reactor (SNAP-10a); and the USSR and Russia have flown over thirty, using two different designs. All of these systems were designed to provide electrical power for a satellite. In the case of the US it was an experimental version of the Agena spacecraft; and for the USSR and Russia, it was used for RORSATs (Radar Ocean Reconnaissance SATellites). Radar is a notorious power hog, since resolution and power supply are directly proportional to each other. When these designs were first flown, photovoltaic panels were not nearly as efficient as they are today, so they weren’t considered a practical option.
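That proportionality comes straight out of the radar range equation: the echo returned from a target falls off as the fourth power of the distance, so a radar satellite in even a modestly higher orbit needs far more transmitter power to see the same ships. A quick sketch (the numbers here are illustrative, not actual RORSAT parameters):

```python
import math

def radar_received_power(p_tx, gain, wavelength, cross_section, range_m):
    """Classic monostatic radar range equation: echo power scales as 1/R^4."""
    return (p_tx * gain**2 * wavelength**2 * cross_section) / (
        (4 * math.pi) ** 3 * range_m**4
    )

# Doubling the distance to the target cuts the echo power by 2^4 = 16,
# so the transmitter needs 16x the power for the same return:
low = radar_received_power(p_tx=3000.0, gain=1000.0, wavelength=0.1,
                           cross_section=100.0, range_m=270e3)
high = radar_received_power(p_tx=3000.0, gain=1000.0, wavelength=0.1,
                            cross_section=100.0, range_m=540e3)
print(high / low)  # -> 0.0625, i.e. 1/16
```

That fourth-power penalty is why the Soviet RORSATs flew so low, and why moving to higher, safer orbits demanded the much more powerful reactors discussed above.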
We’ll look into these spacecraft more as this series continues on. For today, though, we’re going to take a broad overview of the mechanisms that a fission power supply needs; and a brief look at the different options available as far as the reactor itself, power conversion systems, and the reasons one would be used on a mission in the first place.
Electricity On Earth
The vast majority of people are familiar with the way a nuclear power plant works on Earth: fission in the fuel elements (rods, bundles, or pellets, usually) heats water in the reactor, and that heat is used in a steam generator. The steam then drives a set of steam turbines, which spin a generator and produce electricity. The water is then run through a cooling system (either cooling towers or a body of water), before being returned to start the process over again. Most reactors use what's known as a two-loop system, meaning that the water that goes through the turbines never enters the reactor core; instead, the water from the core is run through a heat exchanger that transfers its heat to a second loop of water (the primary and secondary loops, respectively). This prevents the turbines from becoming radioactive over time, which greatly simplifies the maintenance of the power plant. This is known as a two-loop pressurized water reactor (PWR), and it is the most common type of nuclear reactor in the world. Another version produces the steam for the turbines in the reactor vessel itself, and runs it through the turbines directly. This much simpler system is called a boiling water reactor (BWR), and it has the advantage of mechanical simplicity, but adds the challenge of the turbines becoming radioactive over time, making maintenance more difficult. This design is most common in Japan. However, the systems are very similar: heat water, produce steam, run turbines, cool the water, return it to the core.
One notable exception to this general trend for deployed reactors is in the UK, where gas cooled reactors are the most common type of reactor (in this case the AGR, or Advanced Gas Cooled Reactor). We’ve been looking at gas cooled reactors (sometimes known as HTGRs, or “High Temperature Gas Cooled Reactors”) a lot in the previous few months, so this concept should be familiar to regulars of Beyond NERVA. If you aren’t familiar, check out our Nuclear Thermal Rocket page for more information. The fuel elements have a different shape, and helium is used instead of hydrogen (as we’ve seen, if you can get away with NOT using hydrogen, that’s a good thing).
However, as many readers of this blog are familiar, nuclear power is going through a renaissance worldwide right now, with many reactor concepts that were proposed, but never deployed, being heavily investigated once again. Terms like “Liquid Metal Fast Breeder Reactor,” “Aqueous Homogeneous Reactor,” the ubiquitous “Molten Salt Reactor,” and others are being discussed, each with their own set of advantages and disadvantages over the current PWRs and BWRs that have been deployed worldwide. Even the HTGR is being talked about more, despite the fact that it’s been deployed in the UK fleet for decades, for many of the same reasons these other reactor designs are being investigated: it’s possible to make each of these designs “walk-away safe”, among other safety features inherent in their design, making nuclear accidents far more unlikely.
This doesn’t mean that these reactors are maintenance-free, however. Many of the components, especially the generators, require regular, sometimes quite difficult, maintenance. Some of the more advanced designs also have unique subsystems that will require maintenance or replacement (which is why many of the reactor designers are making modular designs, where the core itself, or peripheral components, can be disconnected, loaded onto a barge or truck, and delivered to a dedicated maintenance facility, while a replacement module is installed at the power plant) on a timeline that isn’t always clear – but will be once these reactors are being deployed for power generation. This brings us to the biggest driving factor in space nuclear power supply design, and leads the conversation to what we’re all here for: in-space nuclear power.
In-Space Fission Power Systems
With the notable exception of the Hubble Space Telescope; a few military and reconnaissance satellites retrieved by the now-long-retired Space Transport System (otherwise known as the Space Shuttle); and the various space stations launched by the US, the USSR/Russia, and now the Chinese; if something breaks after it’s launched into space, then it stays broken. Satellite operators have gotten very good over the decades at working around malfunctions, but the fact remains that on-orbit repair is by far the exception, not the rule. It’s difficult, if not impossible, to send a repair crew out to make repairs; and often those repairs would be impossible, even if it were possible to get to the satellite, due to the design considerations that went into the satellite in the first place.
This is just as true for an in-space nuclear reactor as for any other system, and malfunctions have occurred on several missions involving fission power systems in space. Possibly the best known on-orbit failure is that of SNAP-10a, the only American nuclear reactor to ever fly in space. Shortly after achieving orbit, and after initial check-out of the spacecraft, the reactor was activated, and operated normally… for 43 days. Then, an electrical bus on the spacecraft (not part of the nuclear reactor) failed, and the entire mission was lost, including telemetry on reactor behavior. It still follows its original polar orbit, and will continue to do so for thousands of years, or until someone retrieves it at some future date (a challenging prospect for any satellite, and this was a larger-than-average one). Other failures have occurred as well, but we'll look at those as we examine the individual systems that have been used in space so far, as well as proposed and tested systems.
There are two main concerns for space systems engineers: first, the satellite has to survive launch and deployment on orbit; and, second, everything on the spacecraft has to be as reliable as possible. The first consideration isn't something that we are going to be examining in depth on this blog (at least not in the near future). The second part, on the other hand, affects most design decisions made in fission power supply design for space missions of all types. Maintenance is not an option in space.
The Parts of an In-Space Fission Power Supply
The same general parts for a terrestrial power plant are used for in-space fission power supply systems, with a reactor core using a working fluid (water or gas in the examples that we’ve seen so far), which then runs through a power conversion system (either a steam or gas turbine in terrestrial designs), and is then cooled before returning to the reactor core. However, water is heavy, and isn’t liquid over a very large range of temperatures unless it’s pressurized (which requires heavy pressure-tolerant pipes and reactor chambers), so it’s generally not used. Gas is sometimes used (often helium, argon, or xenon, all noble gases that won’t generally react chemically with the components of the spacecraft), but liquid metals (often sodium) are the most common option.
The Reactor Core
The geometry of the reactor core is often quite different from that seen in terrestrial reactor designs. The biggest difference is that it will be far smaller; often only a couple meters long and wide, or smaller; as opposed to the dozens or hundreds of meters that large terrestrial fission plants have. This is, of course, to save mass of the spacecraft, and to get more power out of less mass and volume. We’ll examine the different options for core geometry more as we go through this blog series, but the shape ranges from plugs of uranium oxide or carbide arranged in a row, surrounded by a power conversion system that we’ll look at later in this post; to cylindrical, square, or hexagonal fuel elements with coolant channels running through them (as we’ve seen with the Kilopower reactor, although that’s unique in that there’s just one fuel element, not several); to more exotic options, which all depend on a number of factors.
One big difference between these reactor cores and the ones that we've been examining in the nuclear thermal rocket posts is that they don't run nearly as hot. While there are efficiency benefits to having a hotter reactor, as we've seen, the thermal stresses, increased chemical reaction rates, and limitations on usable materials often mean that this is simply more trouble than it's worth. An NTR has to be as hot as possible to maximize the specific impulse, or rocket efficiency, of the engine; whereas to produce electricity there are many options that work well enough at cooler temperatures, so engineers generally run cooler and save themselves a lot of the concerns and headaches that high temperatures cause.
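To see why an NTR is pushed so hard on temperature while an electric system isn't, it helps to look at how weakly specific impulse responds to core temperature: it only grows as the square root. Here's a simplified ideal-nozzle sketch (assuming a fully expanded ideal gas; the gamma value and temperatures are illustrative, not figures from any specific engine):

```python
import math

R_UNIVERSAL = 8.314  # J/(mol*K), universal gas constant
G0 = 9.80665         # m/s^2, standard gravity

def ideal_isp(chamber_temp_k, molar_mass_kg_per_mol, gamma=1.4):
    """Very simplified ideal specific impulse for a fully expanded nozzle:
    exhaust velocity ~ sqrt(2 * gamma/(gamma-1) * (R/M) * T)."""
    v_exhaust = math.sqrt(
        2 * gamma / (gamma - 1) * (R_UNIVERSAL / molar_mass_kg_per_mol) * chamber_temp_k
    )
    return v_exhaust / G0  # seconds

# Hydrogen propellant (molar mass ~0.002 kg/mol):
# pushing the core 20% hotter buys only ~9.5% more specific impulse,
# since isp scales with the square root of temperature.
print(ideal_isp(2500, 0.002))
print(ideal_isp(3000, 0.002))
```

That square-root scaling is why NTR designers chase every last degree, and why a power reactor, which has other knobs to turn, doesn't have to.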
How hot is hot enough? Well, that depends on a number of factors, mostly to do with the power conversion system and the heat rejection system. The fact that these systems are linked together (in much the same way as the turbopumps, the propellant being used, and the other components of an NTR work together) is by now a familiar concept to regulars of this blog. These considerations, no matter what the details are, are called "balance of plant" issues; and if anything they're the biggest concern for a reactor designer just beginning to design an in-space fission power system. Many factors, such as the amount of power that needs to be provided, the mass and volume of individual components, the maximum and minimum working fluid outlet temperatures, and the radiator requirements, all work together to define a system; but perhaps the most important are the power requirements and the power conversion system. The power requirements, as well as the mass requirements, will be defined by the mission that will use this reactor, so they can be taken as a given: if NASA wants a reactor to provide 100 megawatts of electricity (MWe) for a crewed spacecraft using nuclear electric propulsion… well, it's the job of the reactor designer to deliver that, ideally within the mass budget allotted (or else other systems on the spacecraft have to get lighter, something that's difficult or impossible to do).
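As one example of how these balance-of-plant numbers interlock, the radiator requirement falls out of the power requirement and conversion efficiency almost immediately: whatever heat isn't converted to electricity has to be radiated to space, and radiated power grows with the fourth power of radiator temperature (the Stefan-Boltzmann law). A rough sketch, with illustrative numbers and the small amount of heat absorbed back from the space environment ignored:

```python
STEFAN_BOLTZMANN = 5.670e-8  # W/(m^2*K^4)

def radiator_area(electric_power_w, conversion_eff, radiator_temp_k, emissivity=0.9):
    """One-sided radiator area needed to reject the reactor's waste heat,
    ignoring heat absorbed from the environment (a simplification)."""
    waste_heat = electric_power_w * (1.0 / conversion_eff - 1.0)
    return waste_heat / (emissivity * STEFAN_BOLTZMANN * radiator_temp_k**4)

# 100 kWe at 25% conversion efficiency means 300 kW of waste heat.
# Running the radiator hotter shrinks it dramatically (T^4 in the denominator):
print(radiator_area(100e3, 0.25, 500))  # cooler radiator: large area
print(radiator_area(100e3, 0.25, 900))  # hotter radiator: ~10x smaller
```

This is the basic tension: the power conversion system wants the biggest possible temperature drop, but the radiator wants the cold end as hot as possible, and the reactor designer has to split the difference.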
Power Conversion Systems
A nuclear reactor doesn’t produce electricity directly. In most cases, what it does produce is heat (caused by the collision of the fission fragments and neutrons as they slow down inside the fuel element, or by hitting reactor components), which then has to be converted into electricity. There are a number of ways to do this, many more than are used on Earth in nuclear power plants. For the terrestrial reactors, it’s often a matter of using off-the-shelf technology for other power plant types – a nuclear reactor builder will likely buy their steam turbines from a company that makes steam turbines for coal-fired power plants, for instance (although for a single-loop system they have to be manufactured with much higher tolerances, because working on them will be much harder; so they have to break less, and require less maintenance).
In space, the field is much more wide open, partially because there’s very little off-the-shelf going on in space. This means that a reactor designer has a lot more options when it comes to their power conversion system. However, these designs tend to fall into three broad categories: heat engines, thermal-electric materials conversion, and charged particle conversion.
Heat Engines: The Workhorses of Modern Life
The most familiar is the heat engine, in which heat is turned into mechanical work, which in turn spins a generator. Heat engines were extensively studied by a 19th century French military engineer, Nicolas Léonard Sadi Carnot, who worked out the theoretical limits of their efficiency in 1824; for this, he's often called the "father of thermodynamics." These engines, no matter what specific type, have a hot side and a cold side, and use the temperature difference between them to create mechanical work. The bigger the difference, the more efficient, within certain limits. The most common types are the Rankine cycle (the working principle of steam turbines), the Brayton cycle (the working principle of gas turbines), and the Stirling cycle (which we saw when we examined the Kilopower system). This temperature difference is why large cooling towers or ponds are used in most power plants: the amount of heat coming out is a fixed quantity, but the cold end can be improved (down to ambient temperature, at least) to maximize the efficiency of the system. In greatly simplified terms, the theoretical maximum efficiency of this type of engine is ([hot end temperature] - [cold end temperature]) / [hot end temperature], with the temperatures measured in absolute units (kelvins); for the hot and cold ends available to these systems, that works out to somewhere around 80%, though this is a matter of some debate. In practical terms, a free-piston Stirling engine can achieve 50% efficiency, while a supercritical-CO2 Brayton convertor (more on that in the power conversion blog post, coming up soon!) can achieve similar numbers, but with many more moving parts to break.
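The Carnot limit above is easy to compute; the key points are that the temperatures must be absolute (kelvins), and that the limit depends only on the hot- and cold-end temperatures, not on the working fluid or cycle details. A quick sketch with illustrative temperatures:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Theoretical maximum efficiency of any heat engine operating
    between two temperatures (both in kelvin)."""
    return (t_hot_k - t_cold_k) / t_hot_k

# A reactor loop at 900 K rejecting heat to a 400 K radiator:
# a ceiling of about 56% that no real engine actually reaches.
print(carnot_efficiency(900, 400))
```

Note the trade-off this creates for a space system: lowering the cold-end temperature raises the theoretical efficiency, but (as the radiator discussion above shows) a colder radiator has to be enormously larger.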
These power conversion systems will be covered in depth in a later post, but for right now let’s look at the thing that they all have in common: moving parts, often lots of moving parts. Moving parts wear out, they can get out of balance, they can grind against each other and create flecks of metal to get in the way or cause short circuits… so space fission power designers, as a general rule, don’t want to use them unless they have to.
The Stirling engines used in Kilopower were tested on the ground for far longer than the life of any proposed mission, and not just for Kilopower: those exact same power conversion units were first used in the Advanced Stirling Radioisotope Generator program, and KRUSTY reused that same hardware. These are mechanically simple systems, and that makes them attractive. Stirling engines are also able to achieve a higher theoretical efficiency than the other two common options (more on that in the power conversion systems blog post), which makes them attractive as well; but it should be noted that theory and practice don't always go hand in hand.
Regardless of how the motion is produced, it is then transferred to an electrical generator, which spins a magnet inside a coil of conductive wire, thereby producing an electric current. These generators, since they have moving parts, must be very carefully manufactured; and, again, are a potential source of mechanical or material failure over time.
Materials-Based Electricity Production
So, at this point, an interesting question rears its head: what if we could convert heat to electricity without moving parts? There are in fact a number of ways, but they come with a big trade-off: single-digit power conversion efficiency. This tradeoff was still worth it to every nuclear electric power source designer that has had hardware fly, though. No moving parts, combined with rigorous quality control, means that whatever's going to break is not going to be your power conversion system. There are two main options for directly converting heat to electricity, with similar names: the thermoelectric effect and the thermionic effect.
The thermoelectric effect exploits the fact that different metals hold onto their electrons with different strengths, and react to heat differently: join two dissimilar metals, then heat one side of the junction and cool the other, and an electrical current is produced (how much depends on the temperature difference and the combination of metals used). The main downsides (other than low efficiency) to this process are that, for any two given metals, the sweet spot for system efficiency is very small, so unless the reactor is at JUST the right temperature, and unless your radiators are working properly, very little power is going to be produced. Additionally, these are usually relatively low-heat systems, due to the materials that efficiently produce this effect in combination; although high-temperature thermoelectrics are gaining in efficiency. These systems are often used for energy capture in industrial facilities: wrapped around waste pipes, chimneys, and other areas.
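As a rough illustration of why a single thermoelectric junction produces so little power, here's a sketch of the open-circuit (Seebeck) voltage relationship; the 200 µV/K coefficient is a ballpark figure for a good thermoelectric material, not a value for any specific flight system:

```python
def seebeck_voltage(seebeck_uv_per_k: float, dt_k: float) -> float:
    """Open-circuit voltage (volts) of a thermocouple: V = S * dT,
    where S is the Seebeck coefficient in microvolts per kelvin."""
    return seebeck_uv_per_k * 1e-6 * dt_k

# Even a 500 K temperature difference across a couple with S ~ 200 uV/K
# yields only about a tenth of a volt, which is why practical generators
# stack hundreds of couples in series.
print(f"{seebeck_voltage(200.0, 500.0):.2f} V")  # 0.10 V
```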
The next option is the thermionic effect. This is an incredibly ancient observation: known since the days of ancient Greece, and described in detail by Thomas Edison (as the "Edison effect"), before the electron was ever discovered. Certain materials, when heated, emit electrons, which then build up a static charge on another material. The tendency for a coal on the end of a stick to attract ash was observed and described in ancient Greece, and this is that principle in action. Edison's experiments were more efficient, since he used a vacuum: an incandescent bulb's filament will emit electrons (becoming positively charged itself), depositing them on the glass of the bulb rather more efficiently than the coal, because there's no air to capture the electrons and become mildly ionized. Once again, a static charge builds up on the glass, and this can then be drawn off to be used (assuming the positive side of the circuit connects back to the filament). In this case, the thermal difference being exploited is between the filament (which gets quite hot) and the glass (which has a lot of surface area, and rejects heat by radiation or convection). This option is the one that has been used in space nuclear power sources, although with somewhat more efficient design of the power conversion system. These power conversion systems use an interesting property found in cesium (which we aren't going to explore in depth in this post), where it turns into a very unusual state of matter: cesium Rydberg vapor. This vapor collects the negative charge from the hot end of the power converter, and then condenses on the cold end, transferring the electricity as it does so. Usually, a wick is then used to return the cooled cesium to the hot end of the converter, and the process begins again.
Theoretically, thermionics can achieve greater efficiency for a couple reasons: first, they can run much hotter; and, just as in all power conversion system options, the larger the thermal difference, the more efficient the system is. Secondly, the Rydberg matter state allows for greater conductivity than the metals of the thermoelectric effect, so less energy is lost in conduction (as heat, which makes the system less efficient the less conductive it is, in a couple different ways). These systems also have another property that makes them interesting to in-space fission power system designers: they can be installed in the core of the reactor itself. This was actually done in the Soviet TOPAZ series of nuclear reactors, two of which flew on Kosmos satellites (the far more numerous RORSAT radar satellites used the thermoelectric BES-5 instead). Individual fuel elements were effectively wrapped in thermionic power conversion systems, in a design often called a "flashlight core," since they look something like the batteries in an old-style flashlight handle.
However, these systems are still fundamentally limited by their low efficiency, usually in the single or low double digits. This means that if you've got a 1 MWt nuclear reactor on your spacecraft, you're going to have to get rid of the better part of that 1 MW of heat in order to keep your spacecraft running, but you'll only get a few tens of kilowatts of electricity out of it. This is fine in many cases, and can be designed around (a fission reactor's core doesn't have to get much bigger to put out a lot more power, and fission and fusion scale up much better than they scale down), but that doesn't mean that this is an ideal circumstance. Which leads to the next question: what other options are there?
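To put numbers on that trade-off, here's a trivial sketch; the 5% figure is an illustrative thermoelectric-class efficiency, not the performance of any specific system:

```python
def electric_and_waste(thermal_w: float, efficiency: float) -> tuple[float, float]:
    """Split a reactor's thermal output into electric power and waste heat."""
    electric = thermal_w * efficiency
    return electric, thermal_w - electric

# A 1 MWt reactor with 5% direct conversion: almost all of the reactor's
# output still has to go out through the radiators.
elec, waste = electric_and_waste(1_000_000.0, 0.05)
print(f"{elec/1000:.0f} kWe, {waste/1000:.0f} kW to the radiators")  # 50 kWe, 950 kW to the radiators
```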
Advanced Power Conversion Systems
As we saw from both of the previous concepts, there are fundamental theoretical limitations to both heat engines and materials-based power conversion options; primarily, that the heat is needed to induce another phenomenon, and the difference between the hot and cold temperatures of the system or material defines how efficient it can be. Is there a way to get around this thermal limitation? Are there other options available? The answer to the first question is, unfortunately, no, in most cases; but the answer to the second is a resounding yes!
The first concept to look at is the magnetohydrodynamic (MHD) generator. MHD is often discussed in spaceflight, but usually as a form of propulsion rather than a power generator. The relationship works in largely the same way that an electric generator and an electric motor are effectively the same thing run in reverse: it just matters whether you're putting in electricity to make the shaft spin, or spinning the shaft to make electricity. In this case, charged particles are sent through a series of magnetic coils, which slow the particles down, converting their kinetic energy into a current in the coils, which is then used as a power source. This obviously requires a source of charged particles, though, which leads reactor designers that want to use this power conversion system to some very different reactor core geometries, including one that we'll look at more when we cover this subject in more depth: the vapor core reactor, where the fuel is a liquid sprayed as a very fine mist within a reaction chamber, allowing the particles to have less physical interaction while undergoing fission. The fission products are highly charged, and moving at very high speeds, so theoretically this can be a very efficient form of power production (and could make a good NTR as well, so we'll be coming back to this a couple different ways). Work on a vapor core MHD power system has been explored since the early 90s at the Innovative Nuclear Space Power and Propulsion Institute (INSPI) at the University of Florida, and shows promise; but only a handful of tests examining the criticality requirements of this reactor have been done, so it remains in the early stages of design (other experimentation has been done, and continues to be done, though).
The second option is an oddity among power conversion systems: AMTEC, or the Alkali Metal Thermal to Electric Converter. Here, the ease with which alkali metals give up electrons is exploited by running a liquid metal (usually sodium, but potassium has been used as well, at lower operating temperatures) in a closed loop between a high pressure hot end and a low pressure cold end; the pressure difference drives the metal ions through a solid electrolyte, while the electrons are forced through the external circuit as useful current. This process doesn't require mechanical pumps, merely a difference in heat. The theoretical efficiency of this system approaches 40%. The fact that this approaches the efficiency of a heat engine, but without moving parts, makes it a very attractive option for space nuclear power designers, but the details get technical rather quickly, so we'll leave them for the power conversion system post.
Getting Rid of Heat
As we all know, fission generates a lot of heat very efficiently; that's the whole reason it's so attractive. Unfortunately, that heat all has to go somewhere, or two things will happen: the power conversion system stops working, because the hot and cold ends approach the same temperature; and your spacecraft gets baked until it's destroyed. Heat always transfers from a higher temperature to a lower one, and how much heat is transferred depends on the temperature difference, the surface area across the thermal difference, and the thermal conductivity of the materials involved. There are three mechanisms for heat to transfer: conduction (through physical contact with another material), convection (through a fluid heating up and then moving away from the heat source), and radiation (photons, mostly infrared at these temperatures, carrying energy away from the surface in straight lines). Sadly, in the vacuum of space only radiation works, unless you're throwing large amounts of gas overboard (as an NTR does, operating on convection and radiation), which weighs a fair bit and takes up a lot of volume – and the container weighs a fair bit, too. This means that if we want to generate electricity, we have to get rid of heat through radiators, which heat a working fluid or other substance, then use convection or conduction to carry that heat to a structure with a large surface area, maximizing the area available for radiation. How much heat a radiator emits is governed by the Stefan-Boltzmann law: it scales with the surface area of the radiating surface, and with the fourth power of the radiator's absolute temperature (more on that in the heat rejection blog post coming up in a few posts). On the practical side of a radiator, if the substance used as a working fluid freezes at ambient temperatures, the designer needs to make sure it's not going to freeze anywhere in the radiator and gum up the works.
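To get a feel for the radiator sizing problem, here's a back-of-the-envelope Stefan-Boltzmann calculation; the heat load, temperature, and emissivity are illustrative assumptions, and real radiators need extra margin for absorbed sunlight, view factors, and structure:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiating area needed to reject heat_w watts at temp_k kelvin.

    Treats the radiator as a one-sided ideal gray surface radiating to
    empty space, so this is an optimistic lower bound on the real area.
    """
    return heat_w / (emissivity * SIGMA * temp_k**4)

# Rejecting 950 kW of waste heat at a 500 K radiator temperature takes
# roughly three hundred square meters of radiating surface:
print(f"{radiator_area_m2(950_000.0, 500.0):.0f} m^2")  # 298 m^2
```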
On the International Space Station, a pumped loop radiator using ammonia is used (and is responsible for a lot of the maintenance work done during EVAs, as was its predecessor on Mir). Higher temperature radiators can use sodium, lead-bismuth, or other liquid metals. The advantage of this system is that, to a degree, the more heat that needs to be rejected, the faster the pump can be run.
The most popular type of radiator for modern spacecraft is the titanium-water heat pipe radiator. This system doesn’t require a pump, since it works through wicking, as all heat pipes do; which means one less set of moving parts is needed. Other heat pipes are possible, at higher temperatures, including the sodium heat pipes used in Kilopower; but in order to use these, and not lose power conversion efficiency, the reactor has to operate at a higher temperature. The advantage of these heat pipes is that, due to the Stefan-Boltzmann law, these radiators can get rid of much more heat with much less mass.
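The payoff of running radiators hotter falls straight out of the T⁴ term in the Stefan-Boltzmann law; here's a one-line sketch (the 800 K and 400 K temperatures are illustrative stand-ins for a sodium heat pipe and a titanium-water heat pipe, not specifications):

```python
def relative_rejection(temp_hot_k: float, temp_ref_k: float) -> float:
    """How many times more heat a radiator rejects per unit area at
    temp_hot_k than at temp_ref_k (Stefan-Boltzmann T^4 scaling,
    identical emissivity assumed for both)."""
    return (temp_hot_k / temp_ref_k) ** 4

# Doubling the radiator temperature rejects sixteen times the heat per
# square meter -- hence sixteen times less radiator area (and mass).
print(relative_rejection(800.0, 400.0))  # 16.0
```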
More advanced options don't contain the liquid in pipes at all. The simplest uses a spray of liquid droplets, sprayed in a controlled fashion through space toward a collector, which then pumps the liquid back into the reactor. This has a much higher surface area than a pipe-based radiator, since the droplets themselves have a huge surface area to volume ratio compared to the pipe surface area facing space. A waterfall on the Moon may be a good method of rejecting heat, both due to the low gravity and the low average temperature (in the shade, at least).
Of course, if the spacecraft maneuvers before all the spray is collected, a certain amount will be lost; so it can't effectively be used during maneuvering… unless the drops are magnetic. This is the idea behind the Curie point liquid droplet radiator, which sprays liquid metal droplets that are then collected by electromagnets. The liquid is sprayed at a temperature above its Curie point, and becomes magnetic once it cools back below that point, allowing the electromagnets to gather it up. This system is likely to be a heavy power hog, though, and the returning working mass is probably a solid, which has its own (not insurmountable) issues with being returned to the core to be recycled. One advantage is that the phase change absorbs a fair bit of energy on melting and releases it again on solidifying, so each droplet carries away more heat than its temperature drop alone would suggest.
Another option, which would require less mass, is the membrane radiator, where the droplets are sprayed onto a membrane, which then collects them through centrifugal force. This has been proposed in a number of configurations, including spherical shapes, belts, and others. The most studied version, proposed by Boeing for a crewed nuclear electric spacecraft, used liquid sodium that transferred its heat through a heat exchanger into water, then caught the droplets on a distorted spherical membrane. This system was called the Rotating Multi-Megawatt Boiling Liquid Reactor, and will be one of the systems we cover in the future.
However, these aren’t the only options for radiators. Anything that can efficiently conduct heat, and that has a high surface area, can be used as a radiator. Assuming carbon nanotubes are able to be made miles long, a collection of these would make a good radiator, due to their high thermal conductivity and large surface area. This is the principle behind “belt”, “wire”, or “spaghetti” radiator designs.
Ideas on how to maximize radiation abound, and we'll cover them more in depth in a later post. I leave this topic with a chart comparing many different options for heat rejection systems, from Winchell Chung's wonderful Atomic Rockets website, showing heat capacity, specific area mass, radiation area, and array mass (including supporting structures) for various proposed and historical systems, most of which we'll cover at some later point.
Why Use Fission Power Systems, When Most Missions Use Less Power than My Microwave?
So what needs a lot of power in space? Not much, really. The ISS uses about the same amount of power as 14 typical American houses (which are themselves notorious for wasting power), and that's to support multiple humans and dozens of experiments. Curiosity, NASA's current flagship Mars mission, is an SUV-sized, six-wheeled rover with: two chemistry labs; two sets of cameras; a drill; a robotic arm; and a laser that vaporizes pinpricks of rock from dozens of feet away and analyzes the light from the vaporized rock to determine its composition. All of this is done with only 110 watts (admittedly, it only does one of these things at a time, and would be challenged to travel faster than a small tortoise, but still). Instrumentation as a general rule doesn't need a lot of power, and NASA and other space agencies over the years have gotten very good at using less and less power to do more and more.
There are really only four reasons for high power use: life support, in-situ resource utilization (ISRU), radar, and propulsion. Life support is a major focus of the Kilopower program, with a strong possibility that the first deployment of the system will be to the surface of the Moon, to support a crewed base (not a new idea: when we cover the Enisy reactor, better known in the US as TOPAZ-II, we'll look at a NASA mission concept that used a Soviet-built reactor to do the same thing). However, while on orbit in the inner solar system, most life support systems can be supported by solar panels (unless there's weather or an inconveniently long day-night cycle to deal with), so those will likely be used instead.
In situ resource utilization is a very hot topic right now: most Mars sample return missions assume that either incredibly power-dense propellants will be used, or much of the propellant will be manufactured on Mars (often using some small portion of the raw materials brought from Earth, such as hydrogen feedstock, which is hard to get on Mars easily). Other uses include using either lunar or Martian regolith to construct habitats using 3D printing; extraction of valuable resources, either for their monetary or practical value; producing tools and other products for use on future crewed bases… the list is extensive. The other thing that’s extensive is the power requirements for this type of activity, not only for the process itself, but also for associated materials and tools needed. Many forms of chemical synthesis or extraction need precursor chemicals, solvents, reagents, and other substances that also have to be extracted, refined, and used. If a furnace is needed for metal refining, that furnace is likely going to have to be built in situ in order to be able to begin the actual end goal of refining metals, then the molds, forgings, or other associated metalworking tools to make a useful product need to be built as well. All of these things require large amounts of energy, and may require acres of solar panels (with intermittency issues, maintenance issues, and the cost of manufacturing THOSE in situ as well), or can be satisfied with a relatively small, power-dense fission power system.
In Earth orbit, radar can now be powered by solar panels (unlike on the early RORSATs, which flew when photovoltaics were far less efficient than today). Out to the main asteroid belt, solar power is again usually sufficient, unless a high degree of reliability is needed, as for crewed surface installations (the focus for Kilopower at the moment). Beyond Earth orbit, other systems have been devised to gather similar data for far less power, but these systems have limitations as far as data gathering is concerned. Given a powerful, sufficiently power-dense power source, we may see on-orbit ground penetrating radar used to, say, map the rocky core and liquid layer of Enceladus, or probe the depths of the ice giants; but until something like Kilopower becomes operational, there's no power supply that can support that, and radar tends to be bulky – and, even worse, heavy.
In the outer solar system, the situation is quite different; but, even then, power requirements tend to be low. Kilopower's 1 kWe system is more power than was available to Galileo, Cassini, New Horizons, or any other outer planet flagship mission. All of these except Juno (which used experimental, and very expensive, solar panels with the collective surface area of a tennis court) were powered by radioisotope thermoelectric generators (another upcoming topic, if not for a blog post, then for the website), which are fundamentally limited by the decay of the isotope used. These missions usually run on just a few hundred watts of power, if not less, and bring back incredible science returns.
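To see what "fundamentally limited by decay" means in practice, here's a simple sketch using the 87.7-year half-life of plutonium-238; the 300-watt starting point is illustrative, and real RTGs lose electrical output even faster because their thermocouples degrade too:

```python
PU238_HALF_LIFE_YR = 87.7  # half-life of Pu-238, in years

def rtg_power_w(initial_w: float, years: float) -> float:
    """Power remaining after `years` of Pu-238 decay, ignoring
    thermocouple degradation (so real electrical output falls faster)."""
    return initial_w * 0.5 ** (years / PU238_HALF_LIFE_YR)

# A ~300 W unit after a 20-year outer-planets mission:
print(f"{rtg_power_w(300.0, 20.0):.0f} W")  # 256 W
```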
The last use is going to be the subject of our next blog post, since it's another hugely diverse area in its own right: propulsion. Briefly, electric propulsion uses electric potential rather than chemical potential to accelerate a spacecraft. While it is often seen as a new thing, the concept has been around since the earliest days of spaceflight, although the details of how it's used have changed greatly over time. The thrust and efficiency available in these systems is often dependent on having more electrical power, though, so fission power systems offer a uniquely attractive set of characteristics for this type of propulsion. More on that in the next post, though.
What’s Coming Up?
While this has been a long blog post, it’s also been a very brief summary of the basic needs of a fission power system, its limitations, and a couple of its uses, if not the one that most people think of. Each of these subsystems is a hugely important area, and each must balance with the others in terms of power output, mass, volume, and a host of other requirements, to make a cohesive whole, or a nuclear electric spacecraft would be impossible.
The next post will focus on the most commonly thought of reason for nuclear power in space: high powered electric propulsion. This will be another overview post, looking at the various options for electric propulsion, with some of the advantages and disadvantages of each.
In the following few posts in this series, we'll focus on the basics of the power system itself. We'll focus on the core of the reactor first, including different cooling systems for the reactor, and touch briefly on one sort of power conversion system: in-core thermionic power conversion systems. This may also be our introduction to the TOPAZ reactor, depending on how the blog post goes. After that, we'll look at power conversion systems, a hugely diverse area… so that may become two posts as well, depending on how things go: heat engines (Rankine, Brayton, and Stirling engines) for one post, and materials-based and advanced power conversion options for the other. After that, we'll move on to heat rejection technologies, an area with far more options than most people realize, and one that has some difficult-to-explain or unusual concepts to cover. Then we'll have a set of blog posts for each type of electric thruster: electrostatic, electromagnetic, electrothermal, and photonic. Finally, we'll bring it all together in a post examining the concept of balance-of-plant, and a few of the basics of the unique challenges of designing a nuclear electric spacecraft.
We will likely have interludes in here examining other subjects as well, possibly on different proposed reactor systems or spacecraft over the years… and maybe even a look at non-solid-core NTRs, depending on how things go. There will, of course, also be updates to the website on many subjects, as I come across more information on a subject, or to flesh out information from previous or forthcoming blog posts. Again, I’ll try and keep the webpage updated with information on which pages are new or updated!
This is going to be a long and grand adventure into the depths of nuclear electric spacecraft! Don’t worry if the blog posts start slow (not in frequency, though… many of these posts may come out more frequently than others have, due to their narrower focus): just like the systems we’re looking into, it may take time to get up to speed, but once everything’s had time to build up thrust the posts will seem to fly by!
Today, we’re looking at a different kind of fuel element than the ones we’ve been examining so far on this blog, one that promises higher operation temperatures and therefore more efficient NTRs: carbide fuel elements. We’ll also look at a few different options for NTR designs using carbide fuels: the first one being from Russia (and the only NTR to be tested outside the US), the RD-0410/0411 architecture (two different sizes of a very similar reactor type); the second is the grooved ring tricarbide NTR (a modern US design involving a unique fuel element geometry); and, finally, the SULEU reactor (Superior Use of Low Enriched Uranium, another modern US design with many unique reactor architecture and safety features).
So, to begin, what are carbides? Carbides are compounds of carbon and at least one other, less electronegative element. These materials are known for very high melting points, and are often used in high speed tooling. Tungsten carbide, for instance, is used for high-speed wood and metal bits, blades, and other tools.
In the NERVA reactors, niobium carbide and zirconium carbide were used as fuel element cladding, to prevent the fuel elements from being aggressively eroded by the hot hydrogen propellant. By the time of the XE-Prime test, the fuel particles suspended in the graphite matrix of the fuel element were uranium carbide, individually coated with zirconium carbide.
These are monocarbide compositions, though. There are other options: tricarbides (with three metallic components, leading to a different lattice structure, as well as different mechanical and thermal properties) and carbide-nitrides, or carbonitrides (a composite material containing both carbide and nitride structures; nitrides being a similar concept to carbides, but with N instead of C) – a possibility that is apparently of great interest to Russian NTR designers, but more on that later.
Even during Rover, however, the advantages of making the fuel elements themselves out of carbides were known, and research on the fuel elements began as far back as the 1960s in the US. This research included two of the test chambers in the nuclear furnace tests (examined in the Hot Fire Part 2 blog post to a small extent); but these were considered a more advanced follow-on technology, while the graphite fuel elements with encapsulated fuel particles were the ones that were intended to be used for the planned Mars missions.
Carbides have many advantages over many other materials. One example is that carbides are able to be built up with many different processes, most notably chemical vapor deposition (CVD), where a series of chemical precursors are used to deposit the different components in the carbide structure at much lower temperature than the melting – or decomposition – point of the carbide. Another advantage is that they tend to be relatively dimensionally stable when under high heating, meaning they don’t swell that much.
The USSR, on the other hand, decided very early on to commit to using carbide fuel elements for their NTR, and came up with a novel reactor architecture to both take advantage of the high temperatures of the carbide fuel elements, and to deal with the problems that they posed.
One major disadvantage to carbides is that they are prone to cracking… to a rather severe degree. This means that any cladding material needs to be able to handle this cracking. This was seen in the fuel elements in the NF-1 test, where every (U, Zr)C carbide fuel element had a great deal of splitting; and this was one of the reasons that this fuel was not considered the best option for early NTRs, until these issues were worked out.
Another disadvantage to carbides is the difficulty in manufacturing a consistent carbide, especially if multiple different metallic components are used. Often there will be clusters of different monocarbides in what is supposed to be a tricarbide solution, meaning that the physical properties (notably, the fissile properties of the fuel itself) vary at different points in the fuel element. This can be made even worse if the fuel element is exposed to the hot hydrogen propellant stream, as the H2 strips away the carbon (forming CH4, C2H2, and a number of other hydrocarbons); this also changes the chemical properties of the solution, sometimes allowing droplets of metal to form at well above their melting point, resulting in various other problems.
Oxides: The Familiar Fissile Chemical Composition
Carbides have been used for nuclear fuel elements for a very long time. The fuel pellets in later Rover and NERVA engines were encapsulated carbide beads spread in a graphite matrix. This allowed the fissile fuel itself to become hotter before decomposition occurred. To understand the advantages, though, we have to compare them to the other uranium-bearing compound that is more frequently used: uranium oxide.
In oxide fuel pellets, the oxygen would separate from the uranium, causing metallic crystals to form in the fuel pellet and changing its neutronic and chemical properties. To make matters worse, the oxygen could then migrate outside the pyrocarbon or ZrC coating, causing chemical reactions in the surrounding graphite. All of this can occur below UO2's melting point of 2,865 C (3,138 K). These changes alter the neutronic behavior within the fuel elements by different amounts at different locations within the reactor, causing control issues for the operators, and requiring more design work from the engineers to ensure the reactor can deal with these problems.
Another problem with UO2 is that it has very poor thermal conductivity. Temperature gradients of more than a thousand degrees C are seen in terrestrial UO2 fuel pellets roughly the thickness of a pencil. There are many ways around this, the latest being the use of CERMET fuels, which use very small pellets of UO2 surrounded by refractory metals that are much better thermal conductors; but these metals themselves also limit the temperature the fuel element can operate at (in the new reactor designs that use beryllium for its moderation properties, the relatively low 1,287 C melting point of Be determines the maximum specific impulse of the rocket).
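That thousand-degree figure is easy to sanity-check with the standard conduction result for a cylindrical pellet with uniform internal heating; the 40 kW/m linear power and k ≈ 3 W/m-K are illustrative textbook-style values, not numbers from any specific reactor:

```python
import math

def pellet_centerline_dt(linear_power_w_per_m: float, k_w_per_m_k: float) -> float:
    """Centerline-to-surface temperature rise (K) in a cylindrical fuel
    pellet with uniform volumetric heating: dT = q' / (4 * pi * k),
    where q' is the linear heat rate in W/m."""
    return linear_power_w_per_m / (4.0 * math.pi * k_w_per_m_k)

# UO2 conducts heat poorly, so a hard-driven pin sees roughly a
# thousand-kelvin rise from pellet surface to centerline:
print(f"{pellet_centerline_dt(40_000.0, 3.0):.0f} K")  # 1061 K
```

Note that the radius cancels out of this expression: for a fixed linear power, the centerline rise depends only on conductivity, which is why poor conductors like UO2 are the bottleneck.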
The Advantages to Carbide Fuel Elements
(Chemistry warning! I’ll keep it as light as possible, but…)
Carbides, on the other hand, have some of the highest melting points known to humanity. Tantalum hafnium carbide (Ta4HfC5) has a melting point of 3942 C, the highest known melting point. How high the melting point is depends on a number of factors, including what materials are used and the ratio between those elements in the structure of the carbide itself.
Unfortunately, both tantalum and hafnium have fairly high neutron absorption cross sections, so they are not ideal materials for carbide nuclear fuel elements. These are typically made out of some combination of uranium carbide and either niobium carbide, zirconium carbide, or both.
Another advantage to using carbide fuel elements is that this allows the actual fissile fuel to be more evenly spread throughout the fuel element, creating a more homogeneous (i.e. consistent) fission power profile across the fuel element. This is an advantage to reactor designers, since the more heterogeneous the reactor, the more headache it is for the designer to ensure stable fission behavior in the fuel element. The more consistently the fissile material is spread, the more controllable it is, and the more evenly the power is produced, making the behavior of the reactor more predictable. This has been known since the beginning of nuclear power, and is why later Rover fuel elements were moving away from the coated pellets mixed into graphite style of fuel and toward a composite fuel element, where the uranium carbide fuel was spread in a webwork throughout the graphite matrix of the fuel element.
The Complications of Carbide Fuel Elements
What the actual melting temperature is for a given material is… complicated, though, for a number of reasons.
The first depends on what proportion everything is in, and this is difficult to get consistent. As noted in a recent paper on a unique NTR geometry (which we’ll look at in the next post), getting the perfect stoichiometric ratio (i.e. the ratio between carbon, uranium, and any other elements present) is virtually impossible, so compromises need to be made. Too much carbon, and the temperature drops slightly. Too little carbon, and the material doesn’t mix as well, causing areas that have lower melting points, or higher thermal conductivity, or a number of other undesirable properties.
The second problem is in mixing: a fuel element designer wants to have a material that’s consistent all the way through the fuel element, not discrete little clumps of different materials as one moves through the fuel element. Because of the way that carbide fuel elements are made (DC sintering, a similar process to spark plasma sintering that’s used for CERMET fuel elements), the end result is grains of NbC, ZrC, and UC2 side by side, rather than a mixture (a solid solution, to be precise) of (Nb, Zr, U)C; and each grain has different thermal, neutronic, and chemical properties. It is possible to heat the fuel element, and have the constituents become this ideal solid solution, as was discovered using CFEET for carbide fuel element testing (more on that in the next post as well). This offers hope for more consistent mixing of the elements in the fuel itself, but establishing the correct ratios remains a problem.
There’s one more big problem with carbide fuel elements, though: hydrogen corrosion. Unlike in graphite composite or CERMET fuel elements, the carbon that is stripped away by hot hydrogen is actually chemically bound to the uranium, zirconium, and niobium in the fuel element, not as a material matrix surrounding the chemical components that support fission in the fuel element. This means that if there’s a clad failure, the local ratio of carbon will change, causing free metal to form, either as a pure metal or an alloy, unevenly across the fuel element. This means that hot spots can develop, or parts of the fuel element will melt far below the melting temperature of the carbide the fuel element was originally made of. Flecks or droplets of metal can be eroded into the hot hydrogen stream, potentially causing damage downstream of the fuel element failure. In a worst case scenario, uranium could collect in areas of the reactor that it’s not meant to, creating a power peak in a spot that could be… inconvenient, to say the least.
These are challenges that carbide fuel element designers have always faced, and continue to face today. Careful chemical synthesis will definitely help, but there are limits to this. Heat-treating the fuel elements after sintering to ensure a more consistent solid solution is already showing considerable advantages in composition, and in material properties as well. Cladding the fuel element with carefully selected materials (often ZrC, which is already a component of the fuel, with a similar coefficient of thermal expansion and a good modulus of elasticity), and ensuring consistent, high-quality application of the coating (usually through chemical vapor deposition these days, which has improved greatly in quality and consistency since the days of Project Rover), will eliminate, or at the least minimize, the hydrogen erosion of the fuel.
Another option that I've seen mentioned, but have been unable to find much information on, is an idea raised in Russian papers about their RD-041X engines: solid solutions of carbides and nitrides (which have a similar chemical structure, but with the metals bonded to nitrogen rather than carbon). This leads to a more complex chemical structure, and may allow for less erosion of the carbon from the fuel element. Unfortunately, this literature is hard to find, and when it is available, it hasn't been translated from Russian. However, according to the most commonly available paper (linked in the references), adding a nitride component to the fuel element may boost the maximum fuel element temperature.
The Other Fuel: Plutonium Carbides
We don't talk about plutonium much on this blog (yet), but plutonium carbides have been investigated to a certain degree as well. They're less attractive than uranium carbide for a number of reasons, but as a potential fuel they may show some promise.
Why are they less attractive? First, the fission cross section of 239Pu is skewed much further toward the fast spectrum than that of 235U. The more moderated the neutron flux, the more likely it is that a neutron interacting with a 239Pu nucleus won't cause fission, but will instead be captured, building up heavier transuranics. Many of these are also fissionable, but again mostly in the fast spectrum, so in a more moderated core they act as neutron poisons, accumulating and requiring more and more reactivity to overcome. This also means that when it's time to decommission the core, it will be much more radioactive than a similar U-fueled reactor (on average; there are of course a lot of factors that go into this). Finally, the core has to contain more fuel; and, unlike with uranium, there's no "Low Enriched Plutonium": the fraction of 238Pu (used in RTGs) or 240Pu (which is gamma-active, and a headache) is very low. This is convenient if you're making fuel elements, but a very different regulatory game than LEU, with huge restrictions on who can work with the fuel element materials for development of an NTR.
Second, 239Pu is barred from use in space by international treaty. LEU 235U is similarly restricted, but that is more likely to change, since it involves having far less concentrated fissile material in space; Pu, by contrast, is considered a major nuclear proliferation risk even when it's out in space. The treaty was written to prevent nuclear weapons sneaking into space through the back door, and Pu has been (in the public's mind) intimately tied to nuclear weapons development from day one.
Mixed carbide fuels (containing both uranium and plutonium) have been investigated as an alternative to MOX (mixed oxide) fuels for fast breeder reactors, either in the (U, Pu)C or the (U, Pu)2C3 phases. The usual benefits of carbides over oxides apply to this fuel form: higher metal density and better thermal conductivity being the main two. Due to a number of challenges, including very low oxygen requirements for fabrication, minimal experience with fabrication of mixed carbide fuels, and the general lack of information on the chemistry of PuC, this is a largely unknown field, but research is being conducted to extend our knowledge of these areas.
At present, these materials are a curiosity, although they could lead to advanced fuels for terrestrial use. Until their chemistry and materials properties are better known, however, it is unlikely we’ll see an NTR powered with mixed carbide fuel.
How are Carbides Used in NTRs?
“Traditional” Carbide-fueled NTRs
In Rover, carbide fuel elements were researched that had a very similar form factor to the graphite composite fuel elements: hexagonal in cross section, about 33 cm long, and clad in NbC. The main difference was that there was a single large central hole, rather than nineteen small ones. A carbide-fueled NTR reached early concept design, but was never put through the reactor geometry refinement process.
Designs have been proposed over the years using hexagonal prism fuels similar to Rover carbide fuel elements, but none are currently under development, as far as I can see. This doesn’t exclude their use, even with LEU, but NASA and the DOE are currently pursuing other fuel element geometries.
The Other Tradition: Russian NRE
Russia has been in the nuclear thermal rocket business for as long as the United States, but their design philosophy is hugely different from the American one. Just like NASA and the DOE don’t use the term “nuclear thermal rocket” (NTR), instead preferring “nuclear thermal propulsion” (NTP), Roscosmos and Rosatom (who work together to develop the Russian program) use the term “nuclear rocket engine”, or NRE.
The design changes start with the fuel element design, extend through the basic geometry of the reactor and beyond, and have major implications for testing and materials options with this system.
First, let’s look at the fuel elements. One of the considerations for fuel element design is the amount of surface area that can be contacted by the propellant. Thermal transfer is determined by the thermal emissivity of the fuel element material, and the thermal conductivity and transparency of the propellant. The more surface area, the more heat is transferred, given those previously mentioned factors are equal. Rather than using a fuel prism as American NTP has done, with increasing number of holes through a hexagonal prism, the Russian NRE uses what is commonly known as a “twisted ribbon” design, where a rectangular prism (or any number of other designs, such as a cluster of rods, square prisms, or other shapes- see the image above for the variations that have been tested) is rotated along its long axis. A cluster of these fuel elements are placed in a tube (known as a calandria, similar to the design used in CANDU reactors, but with different geometry and materials), ending in a nozzle at the end of the bundle.
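The surface-area argument above can be made concrete with a quick Python sketch. Splitting a fixed coolant flow area into more, smaller channels grows the heated (wetted) perimeter as the square root of the channel count, which is exactly why designers keep subdividing or reshaping the flow path. The flow area used here is an arbitrary illustrative number:

```python
import math

def wetted_perimeter(total_flow_area_cm2, n_channels):
    """Total wetted perimeter (cm) when a fixed coolant flow area is
    split into n equal circular channels."""
    area_each = total_flow_area_cm2 / n_channels
    diameter = math.sqrt(4.0 * area_each / math.pi)
    return n_channels * math.pi * diameter

# Hypothetical 1 cm^2 of flow area per element, for illustration only.
for n in (1, 19, 100):
    print(f"{n:3d} channels -> {wetted_perimeter(1.0, n):6.2f} cm of heated perimeter")
```

The same logic motivates the twisted ribbon: a thin, twisted strip presents far more heated surface per unit of fuel volume than a drilled prism of the same mass.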
Unlike the American NTP designs, there isn't a single fuel element cluster running down the center of the NRE. In fact, there's NO fuel at the center of the reactor. The Russian design isn't homogeneous, either neutronically or thermally: the center of the reactor, rather than containing fuel, contains moderator. Since the fuel elements (and therefore all the sources of heat) are spread around the periphery of the core, rather than being evenly distributed through it, a moderator with a much lower melting temperature can be used (both zirconium and lithium hydrides are mentioned as options, neither of which would be able to withstand the temperatures of a homogeneous-core NTR). This also means that a bimodal design (known in the Russian program as a "nuclear power and propulsion system," or NPPS, rather than BNTP as NASA calls it) can integrate the working fluid channels more easily, without a complete redesign of either the fuel element or the header and footer support plates. We'll cover BNTRs in a later post, including the NPPS, but it's worth mentioning that this design offers more flexibility than the traditional hexagonal prism fuel elements used in American designs.
Finally, because a number of fuel element bundles are spread radially across the reactor, an individual fuel bundle can be tested on its own in a prototypic neutronic and thermal environment, rather than needing a hot-fire test of the entire core, as is required for the American designs. This testing has been conducted both at the EWG-1 research reactor [with ten consecutive restarts, a total testing time of 4000 s (although how much was at full power, and what sort of transient testing was done, is unknown), at a maximum hydrogen exhaust temperature of 3100 K, achieving a theoretical specific impulse of 925 s and a power density for the system of 10 MWt/L] and at the rocket test stand in Semipalatinsk (although those test results are still classified). The Russians have also done full-scale electric heating tests of NRE designs, settling on two: the RD-0410 (35 kN thrust, for unmanned probes – and possibly for proof-of-concept mission use) and the RD-0411 (~392 kN of thrust, for crewed missions). Statistics for the RD-0410, based on these electrically heated tests, can be seen below:
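As a sanity check on figures like the 3100 K exhaust temperature and 925 s theoretical specific impulse, the ideal-rocket upper bound can be sketched in a few lines of Python. This assumes full expansion to vacuum and a fixed ratio of specific heats (gamma = 1.4 is a cold-hydrogen value; hot H2 is lower, and dissociation and nozzle losses are ignored), so it lands somewhat above the reported number:

```python
import math

R_UNIVERSAL = 8.314462  # J/(mol K)
G0 = 9.80665            # standard gravity, m/s^2

def ideal_isp(chamber_temp_k, molar_mass_kg_per_mol, gamma=1.4):
    """Upper-bound specific impulse (s) for full expansion to vacuum,
    ignoring dissociation, recombination, and nozzle losses."""
    r_specific = R_UNIVERSAL / molar_mass_kg_per_mol
    v_exhaust = math.sqrt(2.0 * gamma / (gamma - 1.0) * r_specific * chamber_temp_k)
    return v_exhaust / G0

# H2 (molar mass ~2.016 g/mol) at the 3100 K reported for EWG-1:
print(f"ideal Isp ~ {ideal_isp(3100.0, 2.016e-3):.0f} s")
```

The crude bound comes out in the mid-900s of seconds, consistent with (and, as expected, slightly above) the 925 s theoretical figure quoted for the EWG-1 tests.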
Sadly, there isn't much more information available about the current NRE designs and plans. We'll come back to its variant, the NPPS, when we look at bimodal designs in the future.
Grooved Ring NTR: Not All American Designs are Hexagonal
This is a new NTR concept, built around a (Zr, Nb, U)C fuel element of a very different shape than the traditional hexagonal prism, currently under development at NASA and the University of Tennessee. Just as with the twisted ribbon fuel elements, the fuel element geometry has been changed to maximize surface area, allowing more heat to be transferred to the propellant. This both maximizes the specific impulse and minimizes the amount of propellant needed for cooling purposes (however, H2 remains the best moderator available, and a minimum amount will always be needed for neutronic reasons, even if not for cooling the fuel elements).
The fuel elements are radially grooved discs of uranium tricarbide (Nb, Zr, U)C, although hafnium and tantalum were also investigated (and eliminated due to their much higher neutron absorption). The hydrogen flows inward from the outside of a stack of these discs, which are separated by beryllium spacers, and then down a central channel.
Due to the unique geometry of this fuel element design, much optimization was needed for the groove depth, hydrogen flow rates, uranium density in the fuel element (in the initial design, 95% enriched HEU was used for ease of calculation; however, with additional optimization and research into the stoichiometric ratios of U with the other metallic components, the authors believe less than 20% enrichment is possible), and other factors.
Thermal testing, including hot hydrogen testing using CFEET, has been carried out at Marshall SFC, using vanadium as a surrogate for depleted uranium. The team hopes to continue to refine such factors as manufacturing consistency, improved mixing of the solid solution of the carbide, and other manufacturing issues in carbide fuels, before hopefully moving on to electrically heated carbide tests using depleted uranium (DU) to optimize the carbide chemistry of uranium itself.
This NTR offers the potential for 3000 C exhaust temperatures at 4 psi. Unfortunately, due to the preliminary nature of the work carried out to date (this reactor design is less than a year old, unlike designs that have gone through decades of development of not just the fuel elements but also the engine system), thrust and theoretical specific impulse have not yet been determined.
This novel fuel form offers the promise of an NTR fuel element geometry with better thermal transfer to the propellant, and the team is performing extensive material fabrication and optimization experiments to further our understanding of tricarbide fuel element performance and manufacture, in addition to developing the new form factor itself.
Tricarbide Foam Fuel Elements: You REALLY Want Surface Area? We Got It!
This is a very different carbide fuel form, with novel manufacturing practices yielding a truly unique fuel element.
Most solid core fuel elements are chunks of material, no matter what form they take (and we’ve seen quite a few forms in this post already), with the propellant flowing around or through them; either through holes that are milled or drilled, the surface of the twisted ribbon, or through grooves cut in a disc. That’s not the case here, however!
The team at Sandia National Laboratory, Ultramet, Inc., and the University of Florida have come up with a new take on carbide manufacture, utilizing chemical vapor deposition (CVD, a common method of carbide manufacture) on a matrix that starts life as open-pore polyurethane foam. This foam is pyrolyzed (baked… ish) to form a carbonized skeleton of the foam structure. The skeleton is then heated, and CVI (chemical vapor infiltration, a variation of CVD) processes are used to impregnate it with uranium, zirconium, and niobium, turning the structure's outer surfaces into (U, Zr, Nb)C carbide (a number of factors affect the depth of the penetration). Then, CVD is used to coat the new carbide structure with ZrC or NbC, cladding the more chemically fragile tricarbide and protecting it from the H2 propellant that will flow through the pores remaining open after this carbidization and coating process.
This concept has been tested using tantalum as a surrogate for uranium (a common choice for electrically heated testing of carbide fuel elements before moving to depleted uranium), with two foam densities, 78% and 85%. This revealed a trade-off: the 78% foam had better thermal transfer properties, but the 85% foam offers more volume for the fissile material, meaning that lower enrichment was possible.
The team members at Sandia made a preliminary MCNP model of an NTR for use with these fuel elements, with a number of unique features. This was a heterogeneous core (meaning uneven fuel distribution), with 60% porosity foam fuel, using yttrium hydride for the moderator (which has to be maintained below 1400 K by circulating hydrogen between it and the fuel), and with a Be reflector. For these initial modeling calculations, 93.5% enriched HEU was used. It was discovered that a 500 MWt NTR was possible using this fuel form, but due to the unoptimized, preliminary nature of this design, values for thrust and specific impulse are still up in the air.
INSPI at the University of Florida will be conducting electrically heated hot hydrogen tests on DU-containing tricarbide fuel foams in the temperature range of 2500-3000 K, as these fuel foams become available, although the timeline for this is unclear. However, research is continuing in this truly novel fuel form, and the possibilities are very promising.
Carbides: Great Promise, with Complications
As we've seen in this post, carbide fuel elements offer many advantages for designers of nuclear thermal rockets. Their high melting points allow for higher propellant exhaust temperatures, improving the specific impulse of an NTR. The ability to tune their properties by changing the composition and ratio of the components allows a material designer to optimize the fuel for a number of different purposes. Their strength allows for truly novel fuel forms that give an NTR designer a lot more flexibility. Finally, their coefficients of thermal expansion, similar to those of the fuels they protect, and their often good modulus of elasticity make them important clad materials for all NTRs, not just those fueled with fissile-bearing carbides.
However, the chemical and materials properties of these substances, manufacturing processes required to consistently produce them, and modes of failure (including the implications for these types of failure in an operating NTR) show that there’s still much work to be done in order to bring carbide fuel elements to the same level of technological maturity currently enjoyed by graphite composite fuel elements.
The promise of carbides, though, makes developing the chemistry of fissile-bearing carbides of all forms, perhaps most especially uranium tricarbides, a worthy goal for the advancement of nuclear power in space. This research has been ongoing for decades, continues worldwide, and is bearing fruit.
Hello, and welcome back to Beyond NERVA, where today we are looking at ground testing of nuclear rockets. This is the first of two posts on ground testing NTRs, focusing on the testing methods used during Project Rover, including a look at the zero-power testing and assembly tests carried out at Los Alamos Scientific Laboratory, and the hot-fire testing done at the Nuclear Rocket Development Station at Jackass Flats, Nevada. The next post will focus on the options that have been and are being considered for hot-fire testing the next generation of LEU NTP, as well as a brief look at cost estimates for the different options, and at what little is available of NASA's proposed plans for the facilities needed to support this program.
We have examined how to test NTR fuel elements in non-nuclear situations before, and looked at two of the test stands developed for testing thermal, chemical, and erosion effects on them as individual components: the Compact Fuel Element Environment Simulator (CFEET) and the Nuclear Thermal Rocket Environment Effects Simulator (NTREES). These test stands provide an economical means of testing fuel elements before loading them into a nuclear reactor for neutronic and reactor physics testing, and can catch many chemical and structural problems without the headaches of testing a nuclear reactor.
However, as any engineer can tell you, computer modeling is far from enough to test a full system. Without extensive real-life testing, no system can be used in real-life situations. This is especially true of something as complex as a nuclear reactor – much less a rocket engine. NTRs have the challenge of being both.
Back in the days of Project Rover, there were many nuclear propulsion tests performed. The most famous of these were carried out at Jackass Flats, NV, in the Nevada Test Site, in open-air testing on specialized rail cars. This was far from the vast majority of human habitation (there was one small – less than 100 people – ranch upwind of the facility, but downwind was the test site for nuclear weapons, so any fallout from a reactor meltdown was not considered a major concern).
The test program at the Nevada site started with the fully-constructed and preliminarily tested rocket engines arriving by rail from Los Alamos, NM, along with a contingent of scientists, engineers, and technicians. After another check-out, each reactor (still attached to the custom rail car it was shipped on) was hooked up to instrumentation and hydrogen propellant, and run through a series of tests, ramping up to either full power or engine failure. Rocket engine development in those days (and even today, sometimes) could be an explosive business, and hydrogen was a new propellant, so accidents were unfortunately common in the early days of Rover.
After the test, the rockets were wheeled off onto a remote stretch of track to cool down (from a radiation point of view) for a period of time, before being disassembled in a hot cell (a heavily shielded facility using remote manipulators to protect the engineers) and closely examined. This examination verified how much power was produced based on the fission product ratios of the fuel, examined and detailed all of the material and mechanical failures that had occurred, and started the reactor decommissioning and disposal procedures.
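To get a feel for the bookkeeping behind inferring power from fission products, here's a rough Python sketch of the underlying arithmetic: at roughly 200 MeV released per fission, even a long full-power run consumes only grams of 235U. The power level and run time below are illustrative, not figures from any specific Rover test:

```python
# Rough burnup arithmetic: energy released -> fissions -> mass consumed.
MEV_PER_FISSION = 200.0        # approximate recoverable energy per fission
J_PER_MEV = 1.602e-13
AVOGADRO = 6.022e23
U235_MOLAR_MASS_G = 235.0

def grams_fissioned(power_mw, seconds):
    """Approximate mass of U-235 fissioned for a given thermal power and duration."""
    energy_j = power_mw * 1e6 * seconds
    fissions = energy_j / (MEV_PER_FISSION * J_PER_MEV)
    return fissions * U235_MOLAR_MASS_G / AVOGADRO

# e.g. a hypothetical 1000 MWt run lasting 600 seconds:
print(f"{grams_fissioned(1000, 600):.1f} g of U-235 fissioned")
```

Working backwards from measured fission product ratios to a number like this is how the post-test examination could verify how much power the reactor had actually produced.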
As time went on, great strides were made not only in NTR design, but in metallurgy, reactor dynamics, fluid dynamics, materials engineering, manufacturing techniques, cryogenics, and a host of other areas. These rocket engines were well beyond the bleeding edge of technology, even for NASA and the AEC – two of the most scientifically advanced organizations in the world at that point. This, unfortunately, also meant that early on there were many failures, for reasons that either weren’t immediately apparent or that didn’t have a solution based on the design capabilities of the day. However, they persisted, and by the end of the Rover program in 1972, a nuclear thermal rocket was tested successfully in flight configuration repeatedly, the fuel elements for the rocket were advancing by leaps and bounds past the needed specifications, and with the ability to cheaply iterate and test new versions of these elements in new, versatile, and reusable test reactors, the improvements were far from stalling out – they were accelerating.
However, as we know, the Rover program was canceled after NASA was no longer going to Mars, and the development program was largely scrapped. Scientists and engineers at Westinghouse Astronuclear Laboratory (the commercial contractor for the NERVA flight engine), Oak Ridge National Laboratory (where much of the fuel element fabrication was carried out) and Los Alamos Scientific Laboratory (the AEC facility primarily responsible for reactor design and initial testing) spent about another year finishing paperwork and final reports, and the program was largely shut down. The final report on the hot-fire test programs for NASA, though, wouldn’t be released until 1991.
Behind the Scenes: Pre-Hot Fire Testing of ROVER reactors
These hot fire tests were actually the end result of many more tests carried out in New Mexico, at Los Alamos Scientific Laboratory – specifically the Pajarito Test Area. Here, there were many test stands and experimental reactors used to measure such things as neutronics, reactor behavior, material behavior, critical assembly limitations and more.
The first of these was known as Honeycomb, due to its use of square grids made out of aluminum (which is mostly transparent to neutrons), held in large aluminum frames. Prisms of nuclear fuel, reflectors, neutron absorbers, moderator, and other materials were assembled carefully (to prevent accidental criticality, something the Pajarito Test Site had seen early in its existence with the Demon Core experiments and subsequent accident) to verify that the modeled behavior of possible core configurations matched predicted behavior closely enough to justify the effort and expense of refining and testing fuel elements in an operating reactor core. Especially for cold and warm criticality tests, this test stand was invaluable, but with the cancellation of Project Rover there was no need to continue using it, and it was largely mothballed.
The second was a modified KIWI-A reactor, which used a low-pressure, heavy water moderated island in the center of the reactor to reduce the amount of fissile fuel necessary to achieve criticality. This reactor, known as Zepo-A (for zero power, or cold criticality), was the first of a series of experiments carried out for each successive design in the Rover program, supporting Westinghouse Astronuclear Laboratory and the NNTS design and testing operations. As each reactor went through its zero-power neutronic testing, the design was refined and problems corrected. This sort of testing was conducted in late 2017 and early 2018 at the NCERC in support of the KRUSTY series of tests, which culminated in March with the first full-power test of a new nuclear reactor in the US in more than 40 years, and it remains a crucial phase for all nuclear reactor and fuel element development. An early, KIWI-type critical assembly ended up being re-purposed into a test stand called PARKA, which was used to test liquid metal fast breeder reactor (LMFBR, a concept later developed as the Integral Fast Reactor, or IFR, at Idaho National Laboratory) fuel pins in a low-power, epithermal neutron environment for startup and shutdown transient behavior, as well as serving as a well-understood general radiation source.
Finally, there was a pair of hot gas furnaces (one at LASL, one at WANL) that used resistive electrical heating to bring fuel elements up to temperature in an H2 environment. These became more and more important as the project continued, since development of the fuel element clad was a major undertaking. As the fuel elements became more complex, or as their materials changed, the thermal properties (and chemical properties at temperature) of the new designs needed to be tested before irradiation testing, to ensure the changes didn't have unintended consequences. This was not just for the clad; the graphite matrix composition changed over time as well, transitioning from graphite flour with thermoset resin to a mix of flour and flakes, and the fuel particles changed from uranium oxide to uranium carbide, with the particles themselves coated as well by the end of the program. The gas furnaces were invaluable in these tests, and can be considered the grandfathers of today's NTREES and CFEET test stands.
An excellent example of the importance of these tests, and of the careful checkout each Rover reactor received, can be seen with the KIWI-B4 reactor. Initial mockups, both on Honeycomb and in more rigorous Zepo mockups, showed that the design had good reactivity and control capability; but while the team at Los Alamos was assembling the actual test reactor, it was discovered that there was so much reactivity that the core couldn't be assembled! Inert material was used in place of some of the fuel elements, and neutron poisons were added to the core, to counteract this excess reactivity. Careful testing showed that the uranium carbide fuel particles suspended in the graphite matrix had undergone hydrolysis, moderating the neutrons and therefore increasing the reactivity of the core. Later versions of the fuel used larger particles of UC2, individually coated before being distributed through the graphite matrix, to prevent this absorption of hydrogen. Careful testing and assembly of these experimental reactors by the team at Los Alamos ensured their safe testing and operation once they reached the Nevada test site, and supported Westinghouse's design work, Oak Ridge National Laboratory's manufacturing efforts, and the ultimate full-power testing carried out at Jackass Flats.
Once this series of mockup crude criticality testing, zero-power testing, assembly, and checkout was completed, the reactors were loaded onto a special rail car that would also act as a test stand with the nozzle up, and – accompanied by a team of scientists and engineers from both New Mexico and Nevada – transported by train to the test site at Jackass Flats, adjacent to Nellis Air Force Base and the Nevada Test Site, where nuclear weapons testing was done. Once there, a final series of checks was done on the reactors to ensure that nothing untoward had happened during transport, and the reactors were hooked up to test instrumentation and the coolant supply of hydrogen for testing.
Problems at Jackass Flats: Fission is the Easy Part!
The testing challenges that the Nevada team faced extended far beyond the nuclear testing that was the primary goal of this test series. Hydrogen is a notoriously difficult material to handle due to its incredibly small molecular size and mass. It seeps through solid metal, valves have to be made with incredibly tight clearances, and when it's exposed to the atmosphere it is a major explosion hazard. To add to the problems, these were the first days of cryogenic H2 experimentation. Even today, handling cryogenic H2 is far from routine, and the often unavoidable problems of using hydrogen as a propellant can be seen in many places – perhaps most spectacularly during the launch of a Delta IV Heavy, a hydrolox (H2/O2) rocket. Upon ignition, it appears that the rocket isn't launching from the pad but exploding on it, due to the outgassing of H2 not only from the pressure relief valves in the tanks, but from seepage through valves, welds, and the body of the tanks themselves – the rocket catching itself on fire is actually standard operating procedure!
In the late 1950s, these problems were just being discovered – the hard way. NASA's Plum Brook Research Station in Ohio was a key facility for developing techniques to handle gaseous and liquid hydrogen safely. Not only did they experiment with cryogenic equipment, hydrogen densification methods, and liquid H2 transport and handling, they did materials and mechanical testing on valves, sensors, tanks, and other components, and developed welding, testing, and verification techniques for this extremely difficult, potentially explosive, but also incredibly valuable (due to its low atomic mass – the exact same property that caused the problems in the first place!) propellant, coolant, and nuclear moderator. The other options for NTR propellant (basically anything that's a gas at reactor operating temperatures and won't leave excessive residue) weren't nearly as attractive, due to their lower exhaust velocity – and therefore lower specific impulse.
Plum Brook is another often-overlooked facility that was critical to the success of not just NERVA, but all current liquid hydrogen fueled systems. I plan on doing another post (this one’s already VERY long) looking into the history of the various facilities involved with the Rover and NERVA program.
Indeed, all the KIWI-A tests and the KIWI-B1A used gaseous hydrogen instead of liquid hydrogen, because the equipment that was planned to be used (and would be used in subsequent tests) was delayed due to construction problems, welding issues, valve failures, and fires during checkout of the new systems. These teething troubles with the propellant caused major problems at Jackass Flats, and caused many of the flashiest accidents that occurred during the testing program. Hydrogen fires were commonplace, and an accident during the installation of propellant lines in one reactor ended up causing major damage to the test car, the shed it was contained in, and exposed instrumentation, but only minor apparent damage to the reactor itself, delaying the test of the reactor for a full month while repairs were made (this test also saw two hydrogen fires during testing, a common problem that improved as the program continued and the methods for handling the H2 were improved).
While the H2 coolant was the source of many problems at Jackass Flats, other issues arose because these NTRs used technology that was well beyond bleeding-edge at the time. "New construction methods" doesn't begin to describe the level of innovation required in virtually every area. Materials that had been theoretical chemical engineering possibilities only a few years (sometimes even months!) before were being used to build innovative, very high temperature, chemically and neutronically complex reactors – that also functioned as rocket engines. New metal alloys were developed, new forms of graphite were employed, and experimental methods of coating the fuel elements to prevent hydrogen from attacking the carbon of the fuel matrix (a major concern, as seen in the KIWI-A reactor, which used unclad graphite plates for fuel) were constantly being adjusted – indeed, clad material experimentation continues to this day, though with advanced micro-imaging capabilities and a half century of materials science and manufacturing experience, the results now are light-years ahead of what was available to the scientists and engineers of the 50s and 60s. Hydrodynamic principles that were only poorly understood, stress and vibrational patterns that couldn't be predicted, and material interactions at temperatures higher than are encountered in the vast majority of applications all caused problems for the Rover reactors.
One common problem in many of these reactors was transverse fuel element cracking, where a fuel element would split across the narrow axis, disrupting coolant flow through the interior channels and exposing the graphite matrix to the hot H2 – which would then ferociously eat it away, exposing both fission products and unburned fuel to the H2 stream and carrying them elsewhere. Most of this material went out the nozzle, but it turned out the uranium would congregate at the hottest points in the reactor – even against the H2 stream – which could have terrifying implications for accidental fission power hot spots. Sometimes, large sections of the fuel elements would be ejected out of the nozzle, spraying partially burned nuclear fuel into the air – sometimes as large chunks, but almost always with some of the fuel aerosolized. Today this would definitely be unacceptable, but at the time the US government was testing nuclear weapons literally next door to this facility, so it wasn't considered a cause of major concern.
If this sounds like there were major challenges and significant accidents at Jackass Flats, well, in the beginning of the program that was certainly correct. These early problems were also cited in Congress' decision not to continue funding the program (although, without a manned Mars mission, there was really no reason to use these expensive and difficult-to-build systems anyway). The thing to remember, though, is that these were EARLY tests, with materials that had been a concept in a materials engineer's imagination only a few years (or sometimes months) beforehand, mechanical and thermal stresses that no one had ever dealt with, and a technology that seemed the only way to send humans to another planet. The Moon was hard enough; Mars was millions of miles further away.
Hot Fire Testing: What Did a Test Look Like?
Nuclear testing is far more complex than just hooking up the test reactor to coolant and instrumentation lines, turning the control drums and hydrogen valves, and watching the dials. Not only are there many challenges in deciding what instrumentation is even possible, and where it should be placed, but installing those instruments and collecting data from them was often a challenge as well, especially early in the program.
To get an idea of what a successful hot fire test looked like, let's look at a single reactor's test series from later in the program: the NRX A2 technology demonstration test. This was the first NERVA reactor design to be tested at full power by the Westinghouse Astronuclear Laboratory (WANL); the others, including KIWI and PHOEBUS, were not technology demonstration tests but proof-of-concept and design development tests leading up to NERVA, and were tested by LASL. The core itself consisted of 1626 hexagonal prismatic fuel elements. This reactor was significantly different from the XE-PRIME reactor that would be tested five years later. One difference was the hydrogen flow path: after going through the nozzle, the propellant would enter a chamber beside the nozzle and above the axial reflector (the engine was tested nozzle-up; in flight configuration this would be below the reflector), then pass through the reflector to cool it, before being diverted again by the shield, through the support plate, and into the propellant channels in the core before exiting the nozzle.
Two power tests were conducted, on September 24 and October 15, 1964.
With two major goals and 22 lesser goals, the September 24 test packed a lot into its six minutes of half-to-full power operation (the reactor was only at full power for 40 seconds). The major goals were: 1. provide significant information for verifying steady-state design analysis for powered operation, and 2. provide significant information to aid in assessing the reactor's suitability for operation at the steady-state power and temperature levels required if it was to be a component in an experimental engine system. In addition to these major, but not very specific, test goals, a number of more specific goals were laid out, including the top-priority goals of evaluating environmental conditions on the structural integrity of the reactor and its components, core assembly performance evaluation, lateral support and seal performance analysis, core axial support system analysis, outer reflector assembly evaluation, control drum system evaluation, and overall reactivity assessment. The less urgent goals were also more extensive, and included nozzle assembly performance, pressure vessel performance, shield design assessment, instrumentation analysis, propellant feed and control system analysis, nucleonic and advanced power control system analysis, radiological environment and radiation hazard evaluation, thermal environment around the reactor, in-core and nozzle chamber temperature control system evaluation, reactivity and thermal transient analysis, and test car evaluation.
Several power holds were conducted during the test, at 51%, 84%, and 93-98%, all slightly above the power levels the holds were planned at. This was due to the compressibility of the hydrogen gas (leading to more moderation than planned), issues with the venturi flowmeters used to measure H2 flow rates, and issues with the in-core thermocouples used for instrumentation (a common problem in the program) – a good example of the sorts of unanticipated challenges these tests are meant to uncover. The test length was limited by the availability of hydrogen to drive the turbopump, but despite being a short test, it was a sweet one: all of the objectives were met, and an ideal vacuum-equivalent specific impulse of 811 s was determined (low for an NTR, but still nearly twice as good as any chemical engine at the time).
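To put that 811 s figure in context, specific impulse converts directly to effective exhaust velocity, which is what actually drives rocket performance. A quick sketch (the chemical-engine Isp here is an illustrative value for a LOX/LH2 upper-stage engine of the era, not a number from the test reports):

```python
# Comparing the NRX A2's measured vacuum-equivalent specific impulse with
# a representative chemical engine, via effective exhaust velocity.
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_s: float) -> float:
    """Effective exhaust velocity (m/s) from specific impulse (s)."""
    return isp_s * G0

ntr_isp = 811.0       # s, from the September 24, 1964 test
chemical_isp = 421.0  # s, illustrative LOX/LH2 upper-stage value (assumed)

print(f"NTR:      {exhaust_velocity(ntr_isp):.0f} m/s")
print(f"Chemical: {exhaust_velocity(chemical_isp):.0f} m/s")
print(f"Improvement: {ntr_isp / chemical_isp:.2f}x")
```

Because the rocket equation is exponential in exhaust velocity, even that factor of roughly 1.9 translates into dramatically more payload for the same propellant mass.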
The October 15th test was a low-power, low-flow test meant to evaluate the reactor when it was not in high-power, steady-state operation, focusing on behavior at startup and cool-down. The relevant part of the test lasted about 20 minutes, at 21-53 MW of power and a flow rate of 2.27-5.9 kg/s of LH2. As with any system, the state the reactor was designed to operate in was easier to evaluate and model than startup and shutdown, two conditions that every engine has to go through but that are far outside the "ideal" conditions for the system – and operating with liquid hydrogen only compounded the questions. Only four specific objectives were set for this test: demonstration of stability at low LH2 flow (using dewar pressure as a gauge), demonstration of stability at constant power but with H2 flow variation, demonstration of stability with fixed control drums but variable H2 flow to effect a change in reactor power, and measuring the reactivity feedback value associated with LH2 at the core entrance. Many of these tests hinge on the fact that the LH2 isn't just a coolant but a major source of neutron moderation, so the flow rate (and associated changes in temperature and pressure) of the propellant has impacts extending well beyond the temperature of the exhaust. This test showed that there were no power or flow instabilities in the low-power, low-flow conditions that would be seen during reactor startup (when the H2 entering the core was at its densest, and therefore most moderating). The predicted behavior and the test results showed good correlation, especially considering that the instrumentation (like the reactor itself) really wasn't designed for these conditions, and the majority of the transducers were operating at the extreme low end of their range.
After the October test, the reactor was wheeled down a shunt track to cool down radiologically (allowing the short-lived fission products to decay, reducing the gamma flux coming off the reactor), and then was disassembled in the NRDS hot cell. These post-mortem examinations were an incredibly important tool for evaluating a number of variables, including how much power was generated during the test (based on the distribution of fission products, which changes with a number of factors, but mainly the power produced and the neutron spectrum the reactor was operating in when they were produced), chemical reactivity issues, mechanical problems in the reactor itself, and several other factors. Unfortunately, disassembling even a simple system without accidentally breaking something is difficult, and this was far from a simple system. A recurring question became "did the reactor break that itself, or did we?" This is especially true of the fuel elements, which often broke due to inadequate lateral support along their length, but also would often break at the joint to the cold end of the core (which usually involved high-temperature, reasonably neutronically stable adhesives).
This issue was illustrated in the A2 test, when multiple broken fuel elements were found with no erosion at the break. This is a strong indicator that they broke during disassembly, not during the test itself: hot H2 heavily erodes the carbon in the graphite matrix – and the carbide fuel pellets – making erosion a very good tell for whether the fuel rods broke during a power test. Broken fuel elements were a persistent problem through the entire Rover and NERVA programs (sometimes leading to ejection of the hot-end portion of the fuel elements), and the fact that all of the fueled elements appear to have survived the test intact was a major victory for the fuel fabricators.
This doesn't mean that the fuel elements were without their problems. Each generation of reactors used different fuel elements, sometimes multiple types in a single core. In this case the propellant channels, fuel element ends, and the tips of the exterior of the elements were clad in NbC, but the full length of the outside of the elements was not, in an attempt to save mass and avoid overly complicating the neutronic environment of the reactor. Unfortunately, this meant that the small amount of gas that slipped between the filler strips and pyro-tiles (placed to prevent this problem) could eat away at the middle of the outside of the fuel element, toward the hot end – something known as mid-band corrosion. This occurred mostly on the periphery of the core, and left a characteristic pattern of striations on the fuel elements. A change was made to fully clad all of the peripheral fuel elements with NbC, since the areas that had this clad were unaffected. Once again, the core became more complex, and more difficult to model and build, but a particular problem was addressed thanks to empirical data gathered during the test. However, a number of unfueled, instrumented fuel elements in the core were found broken in such a way that handling during disassembly couldn't be conclusively ruled out, so the integrity of the fuel elements was still in doubt.
The problems associated with these graphite composite fuel elements never really went away during Rover or NERVA: a number of fuel elements known to have broken during the test were found in the Pewee reactor, the last test of this sort of fuel element matrix (NF-1 used composite or carbide fuel elements; no GC fuel elements were used). The follow-on A3 reactor exhibited a form of fuel erosion known as pin-hole erosion, which the NbC clad was unable to address, forcing the NERVA team to look at other alternatives. This was another area where long-term use of the GC fuel elements was shown to be unsustainable beyond the specific mission parameters, and a large part of why the entire NERVA engine was to be discarded during staging, rather than just the propellant tanks as in modern designs. New clad materials and application techniques show a lot of promise, and GC can be used in a carefully designed LEU reactor, but this isn't being explored in any depth in most cases (both the LANTR and NTER concepts still use GC fuel elements, with the NTER specifying them exclusively due to fuel swelling issues, but that seems to be the only time it's actually required).
Worse Than Worst Case: KIWI-TNT
One question often asked by those unfamiliar with NTRs is "what happens if it blows up?" The short answer is that it can't, for a number of reasons. There is only so much reactivity in a nuclear reactor, and only so fast that it can be utilized. The amount of reactivity is carefully managed through fuel loading in the fuel elements and strategically placed neutron poisons. Also, the control systems used for these reactors (in this case, control drums placed around the reactor in the radial reflector) can only be turned so fast. I recommend checking out the report on Safety Neutronics in Rover Reactors linked at the end of this post if this is something you'd like to look at more closely.
However, during the Rover testing at NRDS one reactor WAS blown up, after significant modification that would never be done to a flight reactor: the KIWI-TNT test (TNT is short for Transient Nuclear Test). The behavior of a nuclear reactor as it approaches a runaway reaction, or a failure of some sort, is studied for all types of reactors, usually in specially constructed test reactors. This is required because the production design of every reactor is highly optimized to prevent this sort of failure from occurring – and this was also true of the Rover reactors. However, knowing what a fast excursion would do to the reactor was an important question early in the program, so a test was designed to discover exactly how bad things could get, and to characterize what happened in a worse-than-worst-case scenario. It yielded valuable data for the possibility of a launch abort that dropped the reactor into the ocean (water being an excellent moderator, making accidental criticality more likely), for the launch vehicle exploding on the pad, and for the option of destroying the reactor in space after it had exhausted its propellant (something that ended up not being planned for in the final mission profiles).
What was the KIWI-TNT reactor? The last of the KIWI series, its design was very similar to the KIWI-B4A reactor (the predecessor of the NERVA-1 series), which was originally designed as a 1000 MW reactor with an exhaust chamber temperature of 2000 C. A number of things normally prevented a fast excursion in this design: first, the shims used for the fuel elements were made of tantalum, a neutron poison, to limit excess reactivity; second, the control drums used stepping motors slow enough that a runaway reaction wasn't possible; finally, this experiment would be done without coolant, which also acted as moderator, so much more reactivity was needed than the B4A design allowed. With the shims removed, excess reactivity added to the point that the reactor was less than $1 subcritical with the control drums fully inserted and had $6 of excess reactivity available relative to prompt critical, and the drum rotation rate increased by a factor of 89(!!), from 45 deg/s to 4000 deg/s, the stage was set for this rapid scheduled disassembly on January 12, 1965. This degree of modification shows how difficult it would be to have an accidental criticality in a standard NTR design.
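A quick sanity check on those modification figures. A "dollar" of reactivity is reactivity normalized so that $1 equals prompt critical; the delayed neutron fraction β used below is the standard textbook value for U-235 thermal fission (~0.0065), an assumption on my part rather than a number from the KIWI-TNT report:

```python
# Sanity-checking the KIWI-TNT modification figures quoted above.
# BETA is the delayed neutron fraction for U-235 thermal fission; 0.0065 is
# a standard textbook value, assumed here rather than taken from the report.
BETA = 0.0065

# $6 of excess reactivity relative to prompt critical, in dk/k:
excess_dollars = 6.0
excess_reactivity = excess_dollars * BETA
print(f"$6 excess reactivity ~= {excess_reactivity:.4f} dk/k")

# Control drum rotation rate: 45 deg/s normally, 4000 deg/s for the test
speedup = 4000 / 45
print(f"Drum speed-up factor: ~{speedup:.0f}x")
```

The drum-speed ratio works out to just under 89, matching the quoted factor.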
The test had six specific goals: 1. measure the reaction history and total fissions produced under a known reactivity, and compare to theoretical predictions in order to improve calculations for accident predictions; 2. determine the distribution of fission energy between core heating, vaporization, and kinetic energy; 3. determine the nature of the core breakup, including the degree of vaporization and particle sizes produced, to test a possible nuclear destruct system; 4. measure the release of fission debris into the atmosphere under known conditions, to better calculate other possible accident scenarios; 5. measure the radiation environment during and after the power transient; and 6. evaluate launch site damage and clean-up techniques for a similar accident, should it occur (although the degree of modification required to the reactor core shows that this is a highly unlikely event; should an explosive accident occur on the pad, it would have been chemical in nature with the reactor never going critical, so fission products would not be present in any meaningful quantities).
Eleven types of measurements were taken during the test, including: reactivity time history, fission rate time history, total fissions, core temperatures, core and reflector motion, external pressures, radiation effects, cloud formation and composition, fragmentation and particle studies, and geographic distribution of debris. An angled mirror above the reactor core (where the nozzle would be if propellant were being fed into the reactor) was used in conjunction with high-speed cameras at the North bunker to image the hot end of the core during the test, along with a number of thermocouples placed in the core.
As can be expected, this was a very short test, with a total of 3.1×10^20 fissions achieved in only 12.4 milliseconds. The result was a highly unusual explosion, consistent with neither a chemical nor a nuclear explosion. The core temperature exceeded 17,500 C in some locations, vaporizing approximately 5-15% of the core (the majority of the rest either burned in the air or was aerosolized into the effluent cloud), and produced about 150 MW-sec (150 MJ) of kinetic energy – roughly the same kinetic energy as 100 pounds of high explosive (although because this explosion was caused by rapid overheating rather than chemical combustion, getting the same overall effect from chemical explosives would take considerably more HE). Material in the core was observed moving at 7300 m/sec before it came into contact with the pressure vessel, and the largest intact piece of the pressure vessel (a 0.9 sq. m, 67 kg section) was flung 229 m from the test location. There were some issues with instrumentation in this test, namely with the pressure transducers used to measure the shock wave: all but two of these instruments (placed 100 ft away) recorded not the pressure wave but an electromagnetic signal at the time of peak power (those two recorded a 3-5 psi overpressure).
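Some rough energy bookkeeping, using the figures above plus the standard ~200 MeV of recoverable energy per fission (a textbook value, not one from the test report), shows how small a fraction of the total fission energy ended up as kinetic energy:

```python
# Rough energy bookkeeping for the KIWI-TNT excursion, using the figures
# quoted above plus ~200 MeV recoverable energy per fission (an assumed
# textbook value, not from the test report).
MEV_TO_J = 1.602e-13
fissions = 3.1e20
energy_per_fission_mev = 200.0

total_energy_j = fissions * energy_per_fission_mev * MEV_TO_J
kinetic_energy_j = 150e6  # 150 MW-sec = 150 MJ, from the test figure

print(f"Total fission energy: {total_energy_j / 1e9:.1f} GJ")
print(f"Fraction appearing as kinetic energy: "
      f"{kinetic_energy_j / total_energy_j:.1%}")

# For scale: 100 lb of TNT at the conventional 4.184 MJ/kg
tnt_energy_j = 100 * 0.4536 * 4.184e6
print(f"100 lb TNT equivalent: {tnt_energy_j / 1e6:.0f} MJ")
```

Only around 1.5% of the roughly 10 GJ of fission energy became kinetic energy; the rest went into heating and vaporizing the core, which is why the comparison to chemical explosives understates the total energy released.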
Prequel to Radioactive Release During Rover Testing: Radiation is Complicated
Radiation is a major source of fear for many people, and the source of a huge amount of confusion in the general population. To be completely honest, when I dig into the nitty gritty of health physics (the study of radiation's effects on living tissue), I spend a lot of time re-reading most of the documents, because it is easy to get confused by the terms that are used. To make matters worse, especially for the Rover documentation, everything is in the old, outdated measures of radioactivity. Sorry, SI users out there: all the AEC and NASA documentation uses Ci, rad, and rem, and converting all of it would be a major headache. If someone would like to volunteer to help me convert everything to common sense units, please contact me, I'd love the help! Keep in mind, though, that the natural environment is radioactive, and the Sun emits a prodigious amount of radiation, only some of which is absorbed by the atmosphere. Indeed, there is evidence that the human body REQUIRES a certain amount of radiation to maintain health, based on a number of studies done in the Soviet Union using completely non-radioactive, specially prepared caves and diets.
Exactly how much radiation is healthy or not is a matter of intense debate – and not much study – and three main competing theories have arisen. The first, the linear no-threshold (LNT) model, is the law of the land: it sets a maximum amount of radiation allowable to a person over the course of a year, whether it comes in one incident (which is usually worse) or spread evenly throughout the year. Each rad (or gray, we'll get to that below) of radiation increases a person's chance of getting cancer by a certain percentage in a linear fashion, so effectively the LNT model defines a maximum acceptable increase in the chance of a person getting cancer in a given timeframe (usually quarters and years). This doesn't take into account the human body's natural repair mechanisms, though, which can replace damaged cells (no matter how they're damaged) – which leads many health physicists to see issues with the model, even as they work within it professionally.
The second model is the linear-threshold model, which states that low-level radiation (under the threshold of the body's repair mechanisms) shouldn't count toward the likelihood of getting cancer. After all, if you replace the Formica countertop in your kitchen with a granite one, the natural radioactivity in the granite will expose you to more radiation, but there's no difference in the likelihood that you'll get cancer from the change. Ramsar, Iran (which has the highest natural background radiation of any inhabited place on Earth) doesn't have higher cancer rates – in fact they're slightly lower – so why not set the threshold where the body's normal repair mechanisms can control any damage, and THEN start using the linear model for the increase in likelihood of cancer?
The third model, hormesis, takes this one step further. In a number of cases, such as Ramsar, and an apartment building in Taiwan built with steel contaminated with radioactive cobalt (exposing the residents to a MUCH higher than average chronic – that is, over-time – dose of gamma radiation), people have not only been exposed to higher than typical doses of radiation, but have had lower cancer rates once other known carcinogenic factors were accounted for. This is evidence that increased exposure to radiation may in fact stimulate the immune system, making a person healthier and reducing the chance of getting cancer! A number of places in the world actually use radioactive sources as places of healing, including radium springs in Japan, Europe, and the US, and the black monazite sands in Brazil. There has been very little research done in this area, though, since the standard model of radiation exposure says this amounts to giving someone a much higher risk of cancer.
I am not a health physicist. It has become something of a hobby for me in the last year, but this is a field that is far more complex than astronuclear engineering. As such, I’m not going to weigh in on the debate as to which of these three theories is right, and would appreciate it if the comments section on the blog didn’t become a health physics flame war. Talking to friends of mine that ARE health physicists (and whom I consult when this subject comes up), I tend to lean somewhere between the linear threshold and hormesis theories of radiation exposure, but as I noted before, LNT is the law of the land, and so that’s what this blog is going to mostly work within.
Radiation (in the context of nuclear power, especially) starts with the emission of either a particle or a ray from a radioisotope – an unstable nucleus of an atom. This is measured with the curie (Ci), a measure of how much radioactivity IN GENERAL is released: 3.7×10^10 emissions (whether alpha, beta, neutron, or gamma) per second. SI uses the becquerel (Bq), which is simple: one decay = 1 Bq. So 1 Ci = 3.7×10^10 Bq. Because the becquerel is so small, megabecquerels (MBq) are often used, since outside of highly sensitive laboratory experiments even a dozen Bq is effectively nothing.
Each type of radiation affects both materials and biological systems differently, though, so there's another unit used to describe the energy deposited in a material by radiation, the absorbed dose: the rad, whose SI counterpart is the gray (Gy). The rad is defined as 100 ergs of energy deposited in one gram of material, and the gray as 1 joule of radiation absorbed by one kilogram of matter, which means 1 rad = 0.01 Gy. This is mostly seen for inert materials, such as reactor components, shielding materials, etc. If it's being used for living tissue, that's generally a VERY bad sign, since for tissue it's pretty much only used in the case of a nuclear explosion or major reactor accident. It is used for an acute – or sudden – dose of radiation, but not for longer-term exposures.
This is because many things go into how bad a particular radiation dose is: a gamma beam that goes through your hand, for instance, is far less damaging than one that goes through your brain or your stomach. This is where the final measurement comes into play: NASA and AEC documentation uses the rem (roentgen equivalent man), while in SI it's the sievert (Sv). This is the dose equivalent – the different radiation types' effects on the various tissues of the body, normalized by applying a quality factor to each type of radiation for each part of the body exposed to it. If you've ever wondered what health physicists do, it's all the hidden work that goes on when that quality factor is applied.
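Since all the Rover-era documents use the legacy units, here's a minimal set of conversion helpers following the definitions above. The quality factors are modern ICRP-style values included purely as an illustration, not figures from the Rover documentation:

```python
# Minimal helpers for moving between the legacy AEC/NASA units and SI,
# following the definitions above.
def ci_to_bq(curies: float) -> float:
    """Activity: 1 Ci = 3.7e10 decays per second (Bq)."""
    return curies * 3.7e10

def rad_to_gy(rads: float) -> float:
    """Absorbed dose: 1 rad = 0.01 Gy."""
    return rads * 0.01

def rem_to_sv(rems: float) -> float:
    """Dose equivalent: 1 rem = 0.01 Sv."""
    return rems * 0.01

# Illustrative quality factors (modern ICRP-style values, NOT from the
# Rover documentation): dose equivalent = absorbed dose x quality factor.
QUALITY_FACTOR = {"gamma": 1.0, "beta": 1.0, "neutron_fast": 20.0, "alpha": 20.0}

def dose_equivalent_rem(absorbed_rad: float, radiation: str) -> float:
    """Convert an absorbed dose (rad) to a dose equivalent (rem)."""
    return absorbed_rad * QUALITY_FACTOR[radiation]

# The highest recorded off-site full-body dose of the program (20 mrem, 1966):
print(f"20 mrem = {rem_to_sv(0.020) * 1000:.1f} mSv")
```

The rad-to-gray and rem-to-sievert conversions are both simple factors of 100, which makes moving between the old documents and modern SI-based literature less painful than it first appears.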
The upshot of all of this is the way that radiation dose is assessed. A number of variables were assessed at the time (and still are, with this serving as an effective starting point for ground testing, where the release of radioactivity to the general public is a minuscule but still necessary consideration). Exposure was broadly divided into three types: full-body (5 rem/yr for an occupational worker, 0.5 rem/yr for the public); skin, bone, and thyroid (30 rem/yr occupational, 3 rem/yr for the public); and other organs (15 rem/yr occupational, 1.5 rem/yr for the public). In 1971, the guidelines for the public were changed to 0.5 rem/yr full body and 1.5 rem/yr for the general population, but as has been noted (including in the NRDS Effluent Final Report) this was more an administrative convenience than a biomedical need.
Additional considerations were made for discrete fuel element particles ejected from the core – there was a less than one in ten thousand chance that a person would come in contact with one, with a number of factors considered in determining this probability. The biggest concern is that skin contact can result in a lesion, at an exposure above 750 rads (an energy deposition measure rather than an expressly medical one, because only one type of tissue is being assessed).
Finally, and perhaps most complex to address, is the aerosolized effluent from the exhaust plume, which could include both gaseous fission products (which were not captured by the clad materials used) and particles small enough to float through the atmosphere for a longer duration – and possibly be inhaled. The relevant off-site exposure limits for these tests were 170 mrem/yr whole-body gamma dose and 500 mrem/yr thyroid dose. The highest full-body dose recorded in the program was 20 mrem, in 1966, and the highest thyroid dose was 72 mrem, in 1965.
The Health and Environmental Impact of Nuclear Propulsion Testing and Development at Jackass Flats
So how much radioactive material did these tests actually release? Considering the sparsely populated area, few people – if any – who weren't directly associated with the program received any dose of radiation from aerosolized (inhalable, fine particulate) radioactive material. By the regulations of the day, no dose greater than 15% of the allowable AEC/FRC (Federal Radiation Council, an early federal health physics advisory board) dose for the general public was ever estimated or recorded. The actual release of fission products into the atmosphere was never more than 10% of the inventory, and often less than 1% (the exception was Cadmium-115, at 50%). The vast majority of these fission products are very short-lived, decaying in minutes or days, so there was not much – if any – chance for migration of fallout (fission products bound to atmospheric dust that then fell along the exhaust plume of the engine) off the test site. According to a 1995 study by the Department of Energy, the total radiation release from all Rover and Tory-II nuclear propulsion tests was approximately 843,000 Curies. To put this in perspective, a nuclear explosive produces about 30,300,000 Curies per kiloton (depending on the size and efficiency of the explosive), so the total release was equivalent to that of a roughly 30-ton-of-TNT nuclear explosion.
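That DOE comparison is easy to check yourself – dividing the total program release by the per-kiloton fission product inventory gives roughly 28 tons of TNT equivalent, in line with the quoted ~30-ton figure:

```python
# Checking the DOE comparison: total Rover/Tory-II release vs. the fission
# product inventory of a nuclear explosive, both in curies.
total_release_ci = 843_000     # all Rover and Tory-II tests, 1995 DOE study
ci_per_kiloton = 30_300_000    # fission products produced per kt of yield

equivalent_tons = total_release_ci / ci_per_kiloton * 1000
print(f"Equivalent yield: ~{equivalent_tons:.0f} tons of TNT")
```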
This release came from either migration of the fission products through the metal clad and into the hydrogen coolant, or due to cladding or fuel element failure, which resulted in the hot hydrogen aggressively attacking the graphite fuel elements and carbide fuel particles.
The amount of fission product released was highly dependent on the temperature and power level the reactors operated at, the duration of the test, how quickly the reactors were brought to full power, and a number of other factors. The actual sampling of the reactor effluent occurred three ways: by aircraft fitted with special sensors for both radiation and particulate matter, by the "elephant gun" effluent sampler placed in the exhaust stream of the engine, and by post-mortem chemical analysis of the fuel elements to determine fuel burnup, migration, and fission product inventory. One thing to note is that effluent release from the KIWI tests was not nearly as well characterized as from the later Phoebus, NRX, Pewee, and Nuclear Furnace tests, so the data for those later tests is not only more accurate but far more complete as well.
Two sets of aircraft data were collected. The first (by LASL/WANL) flew fixed heights and transects within six miles of the effluent plume, collecting particulate effluent which was used (combined with known release rates of 115Cd and post-mortem analysis of the reactor) to determine the total fission product inventory released at those altitudes and vectors; it was discontinued in 1967. The second (NERC) method used a fixed coordinate system to measure cloud size and density, using a mass particulate sampler, charcoal bed, cryogenic sampler, external radiation sensor, and other equipment. Because these samples were taken more than ten miles from the reactor tests, however, it's quite likely that more of the fission products had either decayed or come down to the ground as fallout, so much of the fission product inventory could easily have been depleted by the time the cloud reached the aircraft. This technique was used after 1967.
The next sampling method also came online in 1967: the elephant gun. This was a probe stuck directly into the hot hydrogen coming out of the nozzle, collecting several moles of exhaust at several points throughout the test, which were then stored in sampling tanks. Combined with hydrogen temperature and pressure data, acid leaching analysis of fission products, and gas sample data, this provided a closer-to-hand estimate of the fission product release, as well as a better view of the gaseous fission products released by the engine.
Finally, after testing and cool-down, each engine was put through a rigorous post-mortem inspection. Here, the amount of reactivity lost compared to the amount of uranium present, power levels and test duration, and chemical and radiological analysis were used to determine which fission products were present (and in which ratios) compared to what SHOULD have been present. This technique enhanced understanding of reactor behavior, neutronic profile, and actual power achieved during the test as well as the radiological release in the exhaust stream.
Radioactive release from these engine tests varied widely, as can be seen in the table above; however, the total amount released by the "dirtiest" of the reactor tests, the second Phoebus 1B test, was only 240,000 Curies, and the majority of the tests released less than 2000 Curies. Another thing that varied widely was HOW the radiation was released. The immediate area (within a few meters) of the reactor was exposed to radiation during operation, in the form of both neutron and gamma radiation. The exhaust plume contained not only the hydrogen propellant (which wasn't in the reactor long enough to absorb additional neutrons and turn into deuterium, much less tritium, in any meaningful quantity), but also the gaseous fission products (most of which, such as 135Xe, the human body can't absorb) and – if fuel element erosion or breakage occurred – a certain quantity of particles that may have become irradiated or contained fissioned or unfissioned fuel.