Hello, and welcome back to Beyond NERVA! Today’s blog post is a special one, spurred by the recent announcement about the Transport and Energy Module, Russia’s new nuclear electric space tug! Because of the extra post, the next post on liquid fueled NTRs will come out on Monday or Tuesday of next week.
This is a fascinating system with a lot of promise, but it has also gone through major changes in the last year that seem to have delayed the program. However, once it’s flight certified (expected in the 2030s), Roscosmos plans to mass-produce the spacecraft for a variety of missions, including cislunar transport services and interplanetary mission power and propulsion.
Begun in 2009, the TEM is being developed by Energia on the spacecraft side and the Keldysh Center on the reactor side. This 1 MWe (4 MWt) nuclear reactor will power a number of gridded ion engines for high-isp missions over the spacecraft’s expected 10-year mission life.
First publicly revealed in 2013 at the MAKS aerospace show, a new model last year showed significant changes, with additional reporting coming out in the last week indicating that more changes are on the horizon (there’s a section below on the current TEM status).
This is a rundown of the TEM, and its YaEDU reactor. I also did a longer analysis of the history of the TEM on my Patreon page (patreon.com/beyondnerva), including a year-by-year analysis of the developments and design changes. Consider becoming a Patron for only $1 a month for additional content like early blog access, extra blog posts and visuals, and more!
The TEM is a nuclear electric spacecraft, designed around a gas-cooled high temperature reactor and a cluster of ion engines.
The TEM is designed to be delivered by either Proton or Angara rockets, although with the retirement of the Proton the only available launcher for it currently is the Angara-5.
Secondary Power System
Both versions of the TEM have had secondary folding photovoltaic power arrays. Solar panels are relatively commonly used for what’s known as “hotel load,” or the load used by instrumentation, sensors, and other, non-propulsion systems.
It is unclear if these feed into the common electrical bus of the spacecraft or form a secondary system. Both schemes are possible; if the power is run through a common electrical bus the system is simpler, but a second power distribution bus allows for greater redundancy in the spacecraft.
The ID-500 was designed by the Keldysh Center specifically for use on the TEM, in conjunction with the YaEDU. Due to the very high power availability of the YaEDU, standard ion engines simply weren’t able to handle either the power input or the needed propellant flow rates, so a new design had to be developed.
The ID-500 is a xenon-propelled ion engine, with each thruster having a maximum power level of about 35 kW, with a grid diameter of 500 mm. The initially tested design in 2014 (see references below) had a tungsten cathode, with an expected lifetime of 5000 hours, although additional improvements through the use of a carbon-carbon cathode were proposed which could increase the lifetime by a factor of 10 (more than 50,000 hours of operation).
Each ID-500 is designed to throttle from 375-750 mN of thrust, varying both propellant flow rate and ionization chamber pressure. The projected exhaust velocity of the engine is 70,000 m/s (7000 s isp), making it an attractive option for the types of orbit-altering, long duration missions that the TEM is expected to undertake.
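The published figures for the ID-500 are internally consistent, and we can use the standard electric-propulsion relations to cross-check them. This is a minimal sketch using only the numbers stated above; the efficiency that falls out is my own inference, not an official figure.

```python
# Back-of-envelope check of the published ID-500 figures.
# All input numbers are the article's stated values; the implied
# efficiency is derived here, not an official specification.
G0 = 9.80665          # standard gravity, m/s^2

power_w = 35_000      # max input power per thruster, W
thrust_n = 0.750      # max thrust, N
v_exhaust = 70_000    # exhaust velocity, m/s

isp_s = v_exhaust / G0                      # specific impulse, s
mdot = thrust_n / v_exhaust                 # propellant mass flow, kg/s
jet_power = 0.5 * mdot * v_exhaust**2       # kinetic power in the beam, W
efficiency = jet_power / power_w            # implied total efficiency

print(f"Isp ≈ {isp_s:.0f} s")               # ≈ 7138 s, matching the ~7000 s quoted
print(f"mass flow ≈ {mdot*1e6:.1f} mg/s")
print(f"implied efficiency ≈ {efficiency:.0%}")  # ≈ 75%
```

An implied total efficiency of about 75% at maximum throttle is quite respectable for a gridded ion engine, which lends some credibility to the published numbers.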
The fact that this system uses a gridded ion thruster, rather than a Hall effect thruster (HET), is interesting, since HETs are the area where Soviet, then Russian, engineers and scientists have excelled. The higher isp makes sense for a long-term tug, but for a system that could seemingly be refueled, the isp-to-thrust trade-off is an interesting decision.
The initial design released at MAKS 2013 had a total of 16 ion thrusters on four foldable arms, but the latest version from MAKS-2019 has only five thrusters. The new design is visible below:
The first design is ideal for the tug configuration: the distance between the thrusters and the payload ensures that a minimal amount of the propellant hits the payload, robbing the spacecraft of thrust, contaminating the spacecraft, and possibly building up a skin charge on the payload. The downside is that those arms, and their hinge system, cost mass and complexity.
The new design places only five thrusters (less than a third of the original number) clustered along the center-line of the spacecraft. This saves mass, but the decrease in the number of thrusters, and the fact that they’re placed exactly where it makes most sense to attach the payload, has me curious about what the mission profile for this initial TEM is.
It is unclear if the thrusters are the same design.
This may be the most interesting thing in the TEM: the heat rejection system.
Most of the time, spacecraft use what are commonly called “bladed tubular radiators.” These are tubes which carry coolant after it reaches its maximum temperature. Welded to the tubes are plates, which do two things: they increase the surface area of the tube (given the better conductivity of metal compared to most fluids, this means the heat can be distributed further than the diameter of the pipe alone), and they protect the pipe from debris impacts. However, there are limitations in how much heat can be rejected by this type of radiator: the pipes, and the joints between pipes, have definite thermal limits, with the joints often being the weakest part in folding radiators.
The TEM has the option of using a panel-type radiator; in fact, there are many renderings of the spacecraft using this type of radiator, such as this one:
However, many more renderings present a far more exciting possibility: a liquid droplet radiator, called a “drip refrigerator” in Russian. This design uses a spray of droplets in place of the panels of the radiator. This increases the surface area greatly, and therefore allows far more heat to be rejected. In addition it can reduce the mass of the system significantly, both due to the increased surface area and also the potentially higher temperature, assuming the system can recapture the majority of its coolant.
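The leverage that temperature and surface area give a radiator comes straight from the Stefan-Boltzmann law: radiated power scales with the fourth power of temperature. A minimal sketch, with illustrative assumptions of my own (the ~3 MWt of waste heat follows from 4 MWt in and ~1 MWe out; the temperatures and emissivity are not published TEM values):

```python
# Why radiator temperature and area matter so much: radiated power
# scales as T^4 (Stefan-Boltzmann). Temperatures and emissivity here
# are illustrative assumptions, not published TEM figures.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def area_needed(q_waste_w, temp_k, emissivity=0.9):
    """One-sided radiating area needed to reject q_waste_w at temp_k."""
    return q_waste_w / (emissivity * SIGMA * temp_k**4)

waste_heat = 3e6   # ~3 MWt to reject, assuming 4 MWt in and 1 MWe out

for t in (600, 900):   # plausible panel vs. droplet-sheet temperatures
    print(f"{t} K → {area_needed(waste_heat, t):,.0f} m²")
```

Pushing the rejection temperature from 600 K to 900 K cuts the required area by a factor of (900/600)⁴ ≈ 5, which is exactly the kind of mass saving a droplet radiator is chasing.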
This system was also tested on the ground throughout 2018 (https://ria.ru/20181029/1531649544.html?referrer_block=index_main_2), and appears to have passed all the vacuum chamber ground tests needed. Based on the reporting, more in-orbit tests will be needed, but with Drop-2 already on-station it may be possible to conduct these tests reasonably easily.
I have been unable to determine which working fluid would be used, but anything with a sufficiently low vapor pressure to survive the vacuum of space and the right working temperature range can be used, from oils to liquid metals.
Nothing is known of the reaction control system for the TEM. A number of options are available and currently used in Russian systems, but it doesn’t seem that this part of the design has been discussed publicly.
The biggest noticeable change in the rest of the spacecraft is the change in the spine structure. The initial model and renders had a square cross section telescoping truss with an open triangular girder profile. The new version has a cylindrical truss structure, with a tetrahedral girder structure which looks almost like chicken wire. I’m certain there’s a trade-off between mass and rigidity in this change, but precisely what it is remains unclear, since we don’t have dimensions or materials for the two structures. The change in cross-section also means that while the new design is likely stronger from all angles, it is harder to pack into the payload fairing of the launch vehicle.
The TEM seems like it has gone through a major redesign in the last couple years. Because of this, it’s difficult to tell what other changes are going to be occurring with the spacecraft, especially if there’s a significant decrease in electrical power available.
It is safe to assume that the first version of the TEM will be more heavily instrumented than later versions, in order to support flight testing and problem-solving, but this is purely an assumption on my part. The reconfiguration of the spacecraft at MAKS-2019 does seem to indicate, at least for one spacecraft, the loss of the payload capability, but at this point it’s impossible to say.
The YaEDU is the reactor that will be used on the TEM spacecraft. Overall, with power conversion system, the power system will weigh about 6800 kg.
The reactor itself is a gas cooled, fast neutron spectrum, oxide fueled design, specified – oddly enough – by an electrical output requirement of 1 MWe rather than a thermal output requirement (the choice of power conversion system changes the ratio of thermal to electrical power significantly, and as we’ll see, it’s not set in stone yet). This requires a thermal output of at least 4 MWt, although depending on power conversion efficiency it may be higher. Currently, though, the 4 MWt figure seems to be the baseline for the design. It is meant to have a ten-year reactor lifetime.
This system has undergone many changes over its 11 year life, and due to the not-completely-clear nature of much of its development and architecture, there’s much about the system that we have conflicting or incomplete information on. Therefore, I’m going to be providing line-by-line references for the design details in these sections, and if you’ve got confirmable technical details on any part of this system, please comment below with your references!
The fuel for the reactor appears to be highly enriched uranium oxide, encased in a monocrystalline molybdenum clad. According to some reporting (https://habr.com/en/post/381701/ ), the total fuel mass is somewhere between 80-150 kg, depending on enrichment level. There have been some mentions of carbonitride fuel, which offers a higher fissile fuel density but is more thermally sensitive (although how much is unclear), but these have been only passing mentions.
The use of monocrystalline structures in nuclear reactors is something that the Russians have been investigating and improving for decades, going all the way back to the Romashka reactor in the 1950s. The reason for this is simple: grain boundaries, or the places where different crystalline structures meet within a solid material, act as scattering points for neutrons, similarly to how a cracked pane of glass distorts the light coming through it via internal reflection and refraction. There are two ways around this: either make sure that there are no grain boundaries (the Russian method), or make it so that the entire structure – or as close to it as possible – is grain boundaries, using what are called nanocrystalline materials (the preferred method of the US and other Western countries). While the monocrystalline option is better in many ways, since it makes an effectively transparent, homogeneous material, it’s difficult to grow large monocrystalline structures, and they can be quite fragile in certain materials and circumstances. This led the US and others to investigate the somewhat easier to execute, but more loss-intensive, nanocrystalline material paradigm. For astronuclear reactors, particularly ones with a relatively low keff (the effective neutron multiplication factor – roughly, how many neutrons the reactor has to work with), the monocrystalline approach makes sense, but I’ve been unable to find the keff of this reactor anywhere, so it may be quite high in theory.
The TEM uses a mix of helium and xenon as its primary coolant, a common choice for fast-spectrum reactors. Initial reporting indicated an inlet temperature of 1200K, with an outlet temperature of 1500K, although I haven’t been able to confirm this in any more recent sources. Molybdenum, tantalum, tungsten and niobium alloys are used for the primary coolant tubes.
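If the reported 1200 K inlet and 1500 K outlet temperatures are right, we can estimate the coolant mass flow from the basic heat-balance relation Q = ṁ·cp·ΔT. A rough sketch, with the caveat that the He-Xe specific heat below is my assumption for a ~40 g/mol mixture; the real value depends on the (unpublished) mixture ratio:

```python
# Rough coolant mass-flow estimate from Q = m_dot * cp * dT, using the
# reported 1200 K inlet / 1500 K outlet temperatures. The He-Xe
# specific heat is an assumed value for a ~40 g/mol mixture.
q_thermal = 4e6            # reactor thermal power, W
t_in, t_out = 1200, 1500   # reported coolant temperatures, K
cp_hexe = 520              # J/(kg*K), assumed for the He-Xe blend

m_dot = q_thermal / (cp_hexe * (t_out - t_in))
print(f"required coolant flow ≈ {m_dot:.1f} kg/s")   # ≈ 25.6 kg/s
```

A flow on the order of tens of kilograms per second of hot gas is a substantial engineering challenge in its own right, which helps explain why a dedicated test loop was needed.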
Testing of the coolant loop took place at the MIR research reactor in NIIAR, in the city of Dimitrovgrad. Due to the high reactor temperature, a special test loop was built in 2013 to conduct the tests. Interestingly, other options, including liquid metal coolant, were considered (http://osnetdaily.com/2014/01/russia-advances-development-of-nuclear-powered-spacecraft/ ), but rejected due to lower efficiency and the promise of the initial He-Xe testing.
Power Conversion System
There have been two primary options proposed for the power conversion system of the TEM, and in many ways it seems to bounce back and forth between them: the Brayton cycle gas turbine and a thermionic power conversion system. The first offers far superior power conversion ratios, but is notoriously difficult to make into a working system for a high temperature astronuclear system; the second is a well-understood system that has been used through multiple iterations in flown Soviet astronuclear systems, and was demonstrated on the Buk, Topol, and Yenisey reactors (the first two types flew, the third is the only astronuclear reactor to be flight-certified by both Russia and the US).
In 2013, shortly after the design outline for the TEM was approved, the MAKS trade show had models of many components of the TEM, including a model of the Brayton system. At the time, the turbine was advertised to be a 250 kW system, meaning that four would have been used by the TEM to support YaEDU. This system was meant to operate at an inlet temperature of 1550K, with a rotational speed of 60,000 rpm and a turbine tip speed of 500 m/s. The design work was being primarily carried out at Keldysh Center.
The Brayton system would include both DC/AC and AC/DC convertors, buffer batteries as part of a power conditioning system, and a secondary coolant system for both the power conversion system bearing lubricant and the batteries.
As early as 2015, though, there were reports (https://habr.com/en/post/381701/ ) that RSC Energia, the spacecraft manufacturer, was considering going with a simpler power conversion system, a thermionic one. Thermionic power conversion heats a material, which emits electrons (thermions). These electrons pass through either a vacuum or certain types of exotic materials (called Cs-Rydberg matter) to deposit on another surface, creating a current.
This would reduce the power conversion efficiency, and therefore the overall electric power available, but it is a technology that the Russians have a long history with. These reactors were designed by the Arsenal Design Bureau, which apparently had designs for a large (300-500 kW) thermionic system. If you’d like to learn more about the history of thermionic reactors in the USSR and Russia, check out my earlier posts on Soviet astronuclear history.
This was potentially confirmed just a few days ago by the website Atomic Energy (http://www.atomic-energy.ru/news/2020/01/28/100970 ) by the first deputy head of Roscosmos, Yuri Urlichich. If so, this is not only a major change, but a recent one. Assuming the reactor itself remains in the same configuration, this would be a departure from the historical precedent of Soviet designs, which used in-core thermionics (due to their radiation hardness) rather than out-of-core designs, which were investigated by the US for the SNAP-8 program (something we’ll cover in the future).
So, for now we wait and see what the system will be. If it is indeed the thermionic system, then system efficiency will drop significantly (from somewhere around 30-40% to about 10-15%), meaning that far less electrical power will be available for the TEM.
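The efficiency swing matters in both directions: at a fixed 4 MWt the electrical output collapses, or, to keep the 1 MWe target, the reactor (and its radiators) must grow considerably. A quick sketch using mid-range values from the efficiency bands quoted above:

```python
# Consequences of the power-conversion choice, using mid-range values
# from the efficiency bands cited in the text (30-40% Brayton,
# 10-15% thermionic). These are illustrative, not official figures.
q_thermal = 4e6    # baseline reactor thermal power, W
p_target = 1e6     # target electrical output, W

for name, eta in (("Brayton", 0.35), ("thermionic", 0.125)):
    p_e = q_thermal * eta          # electrical output at fixed 4 MWt
    q_needed = p_target / eta      # thermal power needed for 1 MWe
    print(f"{name}: {p_e/1e6:.1f} MWe from 4 MWt, "
          f"or {q_needed/1e6:.1f} MWt needed for 1 MWe")
```

Either outcome – roughly half a megawatt electric, or a reactor twice as large – would ripple through the whole spacecraft design, which may be part of why the redesign has been so extensive.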
Hydrogen is useful for shielding most types of radiation, while the inclusion of boron compounds stops neutron radiation very effectively. This is important for minimizing damage from neutron irradiation through both atomic displacement and neutron capture, and boron does a very good job of both.
Current TEM Status
Two Russian media articles came out within the past week about the TEM, which spurred me to write this article.
RIA, an official state media outlet, reported a couple days ago that the first flight of a test unit is scheduled for 2030. In addition:
Roscosmos announced the completion of the first project to create a unique space “tug” – a transport and energy module (TEM) – based on a megawatt-class nuclear power propulsion system (YaEDU), designed to transport goods in deep space, including the creation of long-term bases on the planets. A technical complex for the preparation of satellites with a nuclear tug is planned to be built at Vostochny Cosmodrome and put into operation in 2030. https://ria.ru/20200128/1563959168.html
A second report (http://www.atomic-energy.ru/news/2020/01/28/100970) said that the reactor was now using a thermionic power conversion system, which is consistent with the reports that Arsenal is now involved with the program. This is a major design change from the Brayton cycle option; however, it’s not an entirely surprising one: in the US, both Rankine and Brayton cycles have often been proposed for space reactors, only to be replaced by thermoelectric power conversion systems. While the Russians have extensive thermoelectric experience, their experience with the more efficient thermionic systems is also quite extensive.
“Creation of theoretical and experimental backlogs to ensure the development of highly efficient rocket propulsion and power plants for promising rocket technology products, substantiation of their main directions (concepts) of innovative development, the formation of basic requirements, areas of rational use, design and rational level of parameters with development software and methodological support and guidance documents on the design and solution of problematic issues of creating a new generation of propulsion and power plants.”
Work continues on the Vostochny Cosmodrome facilities, and the reporting still concludes that they will be completed by 2030, when the first mass-production TEMs are planned to be deployed.
According to Yuri Urlichich, first deputy head of Roscosmos, the prototype for the power plant would be completed by 2025, and life testing on the reactor would be completed by 2030. This is the second major delay in the program, and may indicate that there’s a massive redesign of the reactor. If the system has been converted to thermionic power, it would explain both the delay and the redesign of the spacecraft, but it’s not clear if this is the reason.
For now, we just have to wait and see. It still appears that the TEM is a major goal of both Roscosmos and Rosatom, but it is also becoming apparent that there have been challenges with the program.
Conclusions and Author Commentary
It deserves reiterating: I’m some random person on the Internet for all intents and purposes, but my research record, as well as my care in reporting on developments with extensive documentation, is something that I think deserves paying attention to. So I’m gonna put my opinion on this spacecraft out there.
This is a fascinating possibility. As I’ve commented on Twitter, the capabilities of this spacecraft are invaluable. Decommissioning satellites is… complicated. The so-called “graveyard orbits,” or those above geosynchronous where you park satellites to die, are growing crowded. Satellites break early in valuable orbits, and the operators, and the operating nations, are on the hook for dealing with that – except they can’t.
Additionally, while many low-cost launchers are available for low and mid Earth orbit launches, geostationary orbit is a whole different thing. The fact that India has a “Polar Satellite Launch Vehicle” (PSLV) and “Geostationary Satellite Launch Vehicle” (GSLV) classification for two very different launch vehicles drives this home within a national space launch architecture.
The ability to contract whatever operator runs TEM missions (I’m guessing Roscosmos, but I may be wrong), specify an orbital path post-booster-cutoff and a new final orbit, and have what is effectively an external, orbital-class stage come and move the satellite into that final orbit is… unprecedented. The idea of an inter-orbital tug is one that’s been proposed since the 1960s, before electric propulsion was practical. If this works the way the design specs suggest, it literally rewrites the way mission planning can be done for any satellite operator in cislunar space who’s willing to take advantage of it (most obviously, military and intelligence customers outside Russia won’t be).
The other thing to consider in cislunar space is decommissioning satellites: dragging things from GEO into a low enough orbit that they’ll burn up is costly in mass, and assumes that the propulsion and the guidance, navigation, and control systems survive to the end of the satellite’s mission. As a satellite operator, and as a host nation with all the treaty obligations the OST requires the nation to take on, being able to drag defunct satellites out of orbit is incredibly valuable. The TEM can deliver one satellite and drag another into a disposal orbit on the way back. To paraphrase a wonderful character from Sir Terry Pratchett (Harry King): “They pay me to take it away, and they pay me to buy it after.” In this case, it’s the opposite: they pay me to take it out, and they pay me to take it back. Especially for mitigating the graveyard orbit problem, this is a potentially golden financial opportunity for the TEM operator: every m/s of mission dV can potentially be operationally profitable. This is the only system I’ve ever seen that can plausibly claim that.
More than that, depending on payload restrictions for TEM cargoes, interplanetary missions can gain significant delta-v from using this spacecraft. It may even be possible, should mass production actually take place, to purchase the end-of-life (or more) dV of a TEM during decommissioning (something I’ve never seen discussed) to boost an interplanetary mission without having to pay the launch mass penalty of reaching Earth escape velocity. The spacecraft was proposed for Mars crewed mission propulsion for the first half of its existence, so it has the capability, but just as SpaceX Starship interplanetary missions require SpaceX to lose a Starship, the same applies here, and it’s got to be worth the while of the (in this case interplanetary) launch provider to lose the spacecraft to get them to agree to it.
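The economics above ultimately come down to the rocket equation: at ~7,000 s Isp, even large orbit changes consume only a few percent of the vehicle's mass as propellant. A sketch using a rough low-thrust LEO-to-GEO spiral Δv and a generic storable-chemical Isp, both of which are my assumptions for comparison purposes:

```python
# Rocket-equation sketch of why ~7,000 s Isp changes tug economics.
# The spiral dv is a rough low-thrust LEO->GEO value, and the chemical
# Isp is a generic storable-propellant assumption, not a TEM figure.
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(dv, isp):
    """Fraction of initial mass burned as propellant (Tsiolkovsky)."""
    return 1 - math.exp(-dv / (isp * G0))

dv_spiral = 4700   # m/s, approximate low-thrust LEO->GEO spiral
for name, isp in (("ID-500 ion", 7000), ("chemical", 320)):
    f = propellant_fraction(dv_spiral, isp)
    print(f"{name}: {f:.0%} of initial mass is propellant")
```

Roughly 7% of initial mass versus nearly 80% for a chemical stage on the same transfer: that gap is what makes the repeated-tug business model conceivable at all.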
This is an exciting spacecraft, and one that I want to know more about. If you’re familiar with technical details about either the spacecraft or the reactor that I haven’t covered, please either comment or contact me via email at email@example.com
We’ll continue with our coverage of fluid fueled NTRs in the next post. These systems offer many advantages over both traditional, solid core NTRs and electrically propelled spacecraft such as the TEM, and making the details more available is something I’ve greatly enjoyed. We’ll finish up liquid fueled NTRs, followed by vapor fuels, then closed and open fueled gas core NTRs, probably by the end of the summer.
If you’re able to support my efforts to continue to make these sorts of posts possible, consider becoming a Patron at patreon.com/beyondnerva. My supporters help me cover systems like this, and also make sure that this sort of research isn’t lost, forgotten, or unavailable to people who come into the field after programs have ended.
Hello, and welcome back to Beyond NERVA! Before we begin, I would like to announce that our Patreon page, at https://www.patreon.com/beyondnerva, is live! This blog consumes a considerable amount of my time, and being able to pay my bills is of critical importance to me. If you are able to support me, please consider doing so. The reward tiers are still very much up for discussion with my Patrons due to the early stage of this part of the Beyond NERVA ecosystem, but I can only promise that I will do everything I can to make it worth your support! Every dollar counts, both in terms of the financial and motivational support!
Today, we continue our look at the collaboration between the US and the USSR/Russia involving the Enisy reactor: Topaz International. Today, we’ll focus on the transfer from the USSR (which became Russia during this process) to the US, which was far more drama-ridden than I ever realized, as well as the management and bureaucratic challenges and amusements that occurred during the testing. Our next post will look at the testing program that occurred in the US, and the changes to the design once the US got involved. The final post will overview the plans for missions involving the reactors, and the aftermath of the Topaz International Program, as well as the recent history of the Enisy reactor.
For clarification: In this blog post (and the next one), the reactor will mostly be referred to as Topaz-II, however it’s the same as the Enisy (Yenisey is another common spelling) reactor discussed in the last post. Some modifications were made by the Americans over the course of the program, which will be covered in the next post, but the basic reactor architecture is the same.
When we left off, we had looked at the testing history within the USSR. The entry of the US into the list of customers for the Enisy reactor has some conflicting information: according to one document (Topaz-II Design History, Voss, linked in the references), the USSR approached a private (unnamed) US company in 1980, but the company did not purchase the reactor, instead forwarding the offer up the chain in the US, but this account has very few details other than that; according to another paper (US-Russian Cooperation… TIP, Dabrowski 2013, also linked), the exchange built out of frustration within the Department of Defense over the development of the SP-100 reactor for the Strategic Defense Initiative. We’ll look at the second, more fleshed out narrative of the start of the Topaz International Program, as the beginning of the official exchange of technology between the USSR (and soon after, Russia) and the US.
The Topaz International Program (TIP) was the final name for a number of programs that ended up coming under the same umbrella: the Thermionic System Evaluation Test (TSET) program, the Nuclear Electric Propulsion Space Test Program (NEPSTP), and some additional materials testing as part of the Thermionic Fuel Element Verification Program (TFEVP). We’ll look at the beginnings of the overall collaboration in this post, with the details of TSET, NEPSTP, TFEVP, the potential lunar base applications, and the aftermath of the Topaz International Program, in the next post.
Let’s start, though, with the official beginnings of the TIP, and the challenges involved in bringing the test articles, reactors, and test stands to the US in one of the most politically complex times in modern history. One thing to note here: this was most decidedly not the US just buying a set of test beds, reactor prototypes, and flight units (all unfueled); this was a true international technical exchange. Both the American and Soviet (later Russian) organizations involved on all levels were true collaborators in this program, with the Russian head of the program, Academician Nikolay Nikolayevich Ponomarev-Stepnoy, still being highly appreciative of the effort put into the program by his American counterparts as late as this decade, when he was still working to launch the reactor that resulted from the TIP – because it’s still not only an engineering masterpiece, but could perform a very useful role in space exploration even today.
The Beginnings of the Topaz International Program
While the US had invested in the development of thermionic power conversion systems in the 1960s, the funding cuts in the 1970s that affected so many astronuclear programs also bit into the thermionic power conversion programs, leading to their cancellation or diminution to the point of being insignificant. There were several programs run investigating this technology, but we won’t address them in this post, which is already going to run longer than typical even for this blog! An excellent resource for these programs, though, is Thermionics Quo Vadis by the Defense Threat Reduction Agency, available in PDF here: https://www.nap.edu/catalog/10254/thermionics-quo-vadis-an-assessment-of-the-dtras-advanced-thermionics (paywall warning).
Our story begins in detail in 1988. The US was at the time heavily invested in the Strategic Defense Initiative (SDI), whose main in-space nuclear power supply was to be the SP-100 reactor system (another reactor that we’ll be covering in a Forgotten Reactors post or two). However, certain key players in the decision making process, including Richard Verga of the Strategic Defense Initiative Organization (SDIO), the organizational linchpin of the SDI, were growing concerned. The SP-100 was growing in both cost and time to develop, leading him to decide to look elsewhere to either meet the specific power needs of SDI, or to find a fission power source that was able to operate as a test-bed for the SDI’s technologies.
Investigations into the technological development of all other nations’ astronuclear capabilities led Dr. Verga to realize that the most advanced designs were those of the USSR, who had just launched the two Topol-powered Plasma-A satellites. This led him to invite a team of Soviet space nuclear power program personnel to the Eighth Albuquerque Space Nuclear Power Symposium (the predecessor to today’s Nuclear and Emerging Technologies for Space, or NETS, conference, which just wrapped up at the time of this writing) in January of 1991. The invitation was accepted, and they brought a mockup of the Topaz. The night after their presentation, Academician Nikolay Nikolayevich Ponomarev-Stepnoy, the Russian head of the Topol program, along with his team of visiting academicians, met with Joe Wetch, the head of Space Power Incorporated (SPI, a company made up mostly of SNAP veterans working to make space fission power plants a reality), and they came to a general understanding: the US should buy this reactor from the USSR – assuming they could get both governments to agree to the sale. The terms of this “sale” would take significant political and bureaucratic wrangling, as we’ll see, and sadly the problems started less than a week later, thanks to their generosity in bringing a mockup of the Topaz reactor with them. While the researchers were warmly welcomed, and they themselves seemed to enjoy their time at the conference, when it came time to leave a significant bureaucratic hurdle was placed in their path.
This mockup, and the headaches surrounding being able to take it back home, were a harbinger of things to come. Although the mockup was non-functional, the Nuclear Regulatory Commission claimed that, since it could theoretically be modified to be functional (a claim for which I haven’t found any evidence, though it is theoretically possible), it was considered a “nuclear utilization facility” which could not be shipped outside the US. Five months later, and with the direct intervention of numerous elected officials, including US Senator Pete Domenici, the mockup was finally returned to Russia. This decision by the NRC led to a different approach to importing further reactors from the USSR and Russia when the time came, and whatever damage the incident caused to the newly-minted (hopeful) partnership was largely weathered thanks to the interpersonal relationships that were developed in Albuquerque.
Teams of US researchers (including Susan Voss, who was the major source for the last post) traveled to the USSR to inspect the facilities used to build the Enisy (Yenisey is another common spelling; the reactor was named after the river in Siberia). These visits started in Moscow, with Drs. Wetch and Britt of SPI, when a revelation came to the American astronuclear establishment: there wasn’t one thermionic reactor in the USSR, but two – and the more promising one was available for potential export and sale!
These visits continued, and personal relationships between the team members from both sides of the Iron Curtain grew. Due to headaches and bureaucratic difficulties in getting technical documentation translated effectively in the timeframe that the program required, often it was these interpersonal relationships that allowed the US team to understand the necessary technical details of the reactor and its components. The US team also visited many of the testing and manufacturing locations used in the production and development of the Enisy reactor (if you haven’t read it yet, check out the first blog post on the Enisy for an overview of how closely these were linked), as well as observing testing in Russia of these systems. This is also the time when the term “Topaz-II” was coined by one of the American team members, to differentiate the reactor from the original Topol (known in the west as Topaz, and covered in our first blog post on Soviet astronuclear history) in the minds of the largely uninformed Western academic circles.
The seeds of the first cross-Iron Curtain technical collaboration on astronuclear systems development, planted in Albuquerque, were germinating in Russian soil.
The Business of Intergovernmental Astronuclear Development
During this time, due to the bureaucratic headaches involved on both the US and USSR sides (I’ve never found any information showing that the two teams ever felt there were problems in the technological exchange; the problems all seem to have been political and bureaucratic in nature, and to have come exclusively from outside the framework of what would become known as the Topaz International Program), two companies were founded to provide an administrative touchstone for various points in the technology transfer program.
The first was International Scientific Products (ISP), which from its founding in 1989 was created specifically to facilitate the purchase of the reactors for the US, and which worked closely with the SDIO. This company was the private lubricant that allowed the US government to purchase these reactor systems (for reasons too complex to get into in this blog post), and it gave a legal means to transmit non-classified data from the USSR to the US, and vice versa. The two main players in ISP were Drs. Wetch and Britt, who also appear to have been the main administrative driving force behind the visits to the USSR. After each visit, the two of them would meet with Dr. Verga, who remained intimately involved and kept his management at SDIO consistently briefed on the progress of the technical exchange and the eventual purchase of the reactors.
The second was the International Nuclear Energy Research and Technology corporation, known as INERTEK. This was a joint US-USSR company, involving the staff of ISP as well as individuals from all of the Soviet design bureaus, manufacturing centers (except possibly the facility in Tallinn, though I haven’t been able to confirm this, mainly due to the extreme loss of documentation from that facility following the collapse of the USSR), and research institutions that we saw in the last post. These included the Kurchatov Institute of Atomic Energy (headed by Academician and Director Ponomarev-Stepnoy, the head of the Russian portion of the Topaz International Program), the Scientific Industrial Association “LUCH” (represented by Deputy Director Yuri Nikolayev), the Central Design Bureau for Machine Building (represented by Director Vladimir Nikitin), and the Keldysh Institute of Rocket Research (represented by Director Academician Anatoly Koroteev). INERTEK was the vehicle by which the technology, and more importantly to the bureaucrats the hardware, would be exported from the USSR to the US. Academician Ponomarev-Stepnoy was the director of the company, and Dr. Wetch was his deputy. Due to the sensitive nature of the company’s focus, it required approval from the Ministry of Atomic Energy (Minatom) in Moscow, which was finally granted in December 1990.
In order to gain this approval, the US had to agree to a number of demands from Minatom, including that the Topaz-II reactors be returned to Russia after testing and that they not be used for military purposes. Dr. Verga insisted on additional international cooperation, including staff from the UK and France. This was not only a cost-saving measure, but also reinforced the international and transparent nature of the program, and made military use more challenging.
While this was occurring, the Americans insisted that the non-nuclear testing of the reactors be duplicated in the US, to ensure they met American safety and design criteria. This was a major sticking point for Minatom, and delayed the approval of the export for months, but the Americans did not slow their preparations for building a test facility. Due to the concentration of space nuclear power research resources in New Mexico (Los Alamos and Sandia National Laboratories, the US Air Force Phillips Laboratory, and the University of New Mexico’s New Mexico Engineering Research Institute, or NMERI), as well as the presence of the powerful Republican Senator Pete Domenici to smooth political feathers in Washington, DC (all of the labs were in his home state), it was decided to test the reactors in Albuquerque, NM. The USAF purchased an empty building from the NMERI, and hired personnel from UNM to handle the human resources side of things. The selection of UNM emphasized the transparent, exploratory nature of the program, an absolute requirement for Minatom, and the university had considerable organizational flexibility compared to either the USAF or the DOE. According to the contract manager, Tim Stepetic:
“The University was very cooperative and accommodating… UNM allowed me to open checking accounts to provide responsive payments for the support requirements of the INTERTEK and LUCH contracts – I don’t think they’ve ever permitted such checkbook arrangements either before or since…”
These freedoms were necessary to work with the Russian team members, who were dealing with culture shock and very different organizational restrictions than their American counterparts. As has been observed both before and since, the Russian scientists and technicians preferred to save as much of their per diem (generous by their standards) as possible for after the project, when the money would go much further back home, and they covered their local travel expenses out of it as well. One of the technicians had to return to Russia for his son’s brain tumor operation, and was asked by the surgeon to bring back some Tylenol, a request that was rapidly granted, with some bemusement, by his American colleagues. In addition, personal calls (of a limited nature, due to international calling rates at the time) were allowed so the scientists and technicians could keep in touch with their families and reduce their homesickness.
As should surprise no one, the highly unusual nature of this financial arrangement, as well as the large amount of money involved (which ended up coming to about $400,000 in 1990s dollars), meant that a routine audit led to the General Accounting Office being called in to investigate the arrangement later. Fortunately, no significant irregularities in NMERI’s financial dealings were found, and the program continued. Additionally, the reuse of over $500,000 worth of equipment scrounged from SNL and LANL’s junk yards allowed for incredible cost savings in the program.
With the business side of the testing underway, it was time to begin preparations for testing the reactors in the US, starting with the conversion of an empty building into a non-nuclear test facility. The conversion, led by Frank Thome on the facilities modification side and Scott Wold as the TSET training manager, began in April of 1991, only four months after Minatom’s approval of INTERTEK. Over the course of the next year, the facility was prepared for testing, and was completed just before the delivery of the first shipment of reactors and equipment from Russia.
By this point, the test program had grown to include two programs. The first was the Thermionic Systems Evaluation Test (TSET), which would study mechanical, thermophysical, and chemical properties of the reactors to verify the data collected in Russia. This was to flight-qualify the reactors for American space mission use, and establish the collaboration of the various international participants in the Topaz International Program.
The second program was the Nuclear Electric Propulsion Space Test Program (NEPSTP). Run by the Johns Hopkins Applied Physics Laboratory and funded by the SDIO (later the Ballistic Missile Defense Organization), it proposed an experimental spacecraft that would use a set of six different electric thrusters, as well as equipment to monitor the environmental effects of both the thrusters and the reactor during operation. Design work for the spacecraft began almost immediately after the TSET program began, and the program was of interest to both the American and Russian parts of the team.
Later, one final program would be added: the Thermionic Fuel Element Verification Program (TFEVP). This program, which predated TIP, is where many of the UK and French researchers were involved; it focused on increasing the lifetime of the thermionic fuel elements from one year (the best US estimate before TSET) to at least three, and preferably seven, years. This would be achieved through better knowledge of materials properties, as well as improved manufacturing methods.
Finally, there were smaller programs attached to the big three, looking at materials effects in intense radiation and plasma environments, long-term contact with cesium vapor, chemical reactions within the hardware itself, and the surface electrical properties of various ceramics. These tests, while not the primary focus of the program, would contribute to the understanding of the environment an astronuclear spacecraft would experience, and would significantly affect future spacecraft designs. They took place in the same building as the TSET testing, and the teams involved frequently collaborated on all projects, leading to a very well-integrated and collegial atmosphere.
Reactor Shipment: A Funny Little Thing Occurred in Russia
While all of this was going on in the Topaz International Program, major changes were happening throughout the USSR: it was falling apart. From the uprisings in Latvia and Lithuania (violently put down by the Soviet military), to the fall of the Berlin Wall, to the ultimate lowering of the hammer and sickle from the Kremlin in December 1991 and its replacement with the tricolor of the Russian Federation, the fall of the Iron Curtain was accelerating. The TIP teams continued their work, knowing that the program offered hope for the Topaz-II project as well as a vehicle for closer technological collaboration with their former adversaries, but the complications would rear their heads in this small group as well.
The American purchase of the Topaz reactors was approved by President George H.W. Bush on 27 March, 1992 during a meeting with his Secretary of State, James Baker, and Secretary of Defense Richard Cheney. This freed the American side of the collaboration to do what needed to be done to make the program happen, as well as to begin bringing in Russian specialists for test facility preparations.
The first group of 14 Russian scientists and technicians to arrive in the US for the TSET program landed on April 3, 1992, but only got to sleep for a few hours before being woken up by their hosts (who also brought their families) for a long van journey. This was something the Russians greatly appreciated, because April 4 is a special day in one small part of the world: it’s one of only two days of the year that the Trinity Site, the location of the first nuclear explosion in history, is open to the public. According to one of them, Georgiy Kompaniets:
“It was like for a picnic! And at the entrance to the site there were souvenir vendors selling t-shirts with bombs and rocks supposedly at the epicenter of the blast…” (note: no trinitite is allowed to be collected at the Trinity site anymore, and according to some interpretations of federal law is considered low-level radioactive waste from weapons production)
The Russians were a hit at the Trinity site, being the center of attention from those there, and were interviewed for television. They even got to tour the McDonald ranch house, where the Gadget was assembled and the blast was initiated. This made a huge impression on the visiting Russians, and did wonders in cementing the team’s culture.
Another cultural exchange that occurred later (exactly when I’m not sure) was the chance to ride in a hot air balloon. Albuquerque’s International Balloon Fiesta is the largest hot air ballooning event in the world, and whenever atmospheric conditions are right a half dozen or more balloons can be seen floating over the city. A local ballooning club, having heard about the Russian scientists and technicians (they had become minor local celebrities at this point) offered them a free hot air balloon ride. This is something that the Russians universally accepted, since none of them had ever experienced this.
According to Boris Steppenov:
“The greatest difficulty, it seemed, was landing. And it was absolutely forbidden to touch down on the reservations belonging to the Native Americans, as this would be seen as an attack on their land and an affront to their ancestors…
[after the flight] there were speeches, there were oaths, there was baptism with champagne, and many other rituals. A memory for an entire life!”
The balloon that Steppenov flew in did indeed land on the Sandia Pueblo Reservation, but before touchdown the tribal police were notified, and they showed up to the landing site, issued a ticket to the ballooning company, and allowed them to pack up and leave.
These events, as well as other uniquely New Mexican experiences, cemented the TIP team into a group of lifelong friends, and would reinforce the willingness of everyone to work together as much as possible to make TIP as much of a success as it could be.
In late April, 1992, a team of US military personnel (led by Army Major Fred Tarantino of SDIO, with AF Major Dan Mulder in charge of logistics), including a USAF Airlift Control Element Team, landed in St. Petersburg on a C-141 and C-130, carrying the equipment needed to properly secure the test equipment and reactors that would be flown to the US. Overflight permissions were secured, and special packing cases, especially for the very delicate tungsten TISA heaters, were prepared. These preparations were complicated by the lack of effective packing materials for these heaters, until Dr. Britt of both ISP and INTERTEK had the idea of using foam bedding pads from a furniture store. Due to the large size and weight of the equipment, though, the C-141 and C-130 aircraft were not sufficient for airlifting the equipment, so the teams had to wait on the larger C-5 Galaxy transports intended for this task, which were en route from the US at the time.
Sadly, when the time came to present the export licenses to the customs officer, he refused to honor them, because they were Soviet documents and the Soviet Union no longer existed. This led Academician Ponomarev-Stepnoy and INTERTEK’s director, Benjamin Usov, to travel to Moscow on April 27 to meet with the Chairman of the Government, Alexander Shokhin, to obtain new export licenses. After consulting with the Minister of Foreign Economic Relations, Sergei Glazyev, a one-time, urgent export license was issued for the shipment to the US. This was then sent via fast courier to St. Petersburg on May 1.
The C-5s, though, weren’t in Russia yet. Once they landed, a complex paperwork ballet had to be carried out to get the reactors and test equipment to America. First, INTERTEK purchased the reactors from the Russian bureaus responsible for the various components. Then, once the equipment was loaded onto the C-5, INTERTEK sold the reactors and equipment to Dr. Britt of ISP, who immediately resold them to the US government. This avoided the import issues that would have occurred on the US side if the equipment had been imported by ISP, a private company, or INTERTEK, a Russian-led international consortium.
One of them landed in St. Petersburg on May 6, was loaded with the two Topaz-II reactors (V-71 and Ya-21U) and as much equipment as could fit in the aircraft, and left the same day, arriving in Albuquerque on May 7. The other developed maintenance problems and was forced to wait in England for five days, finally arriving in St. Petersburg on May 8. The rest of the equipment was loaded up (including the Baikal vacuum chamber), and the plane left later that day. Sadly, it ran into difficulties again upon reaching England, and was forced to wait two more days for repairs, arriving in Albuquerque on May 12.
Preparations for Testing: Two Worlds Coming Together
Once the equipment was in the US, detailed examination of the payload was required due to the beryllium used in the reflectors and control drums of the reactor. Berylliosis, the disease caused by breathing in beryllium dust, is a serious health issue, and one that the DOE takes incredibly seriously (they’ll evacuate an entire building at the slightest possibility that beryllium dust could be present, at a cost that occasionally runs into the millions of dollars). Detailed checks were carried out both before the equipment was removed from the aircraft and during the unpackaging of the reactors. No beryllium dust was detected, however, and the program continued with minimal disruption.
Then it came time to unbox the equipment, but another problem arose: this required the approval of the director of the Central Design Bureau of Heavy Machine Building, Vladimir Nikitin, who was in Moscow. Rather than call Moscow for every decision, Dr. Britt called Nikitin and got approval for Valery Sinkevych, the Albuquerque representative for INTERTEK, to have discretionary control over these sorts of decisions. The approval was given, greatly smoothing the process of both setup and testing during TIP.
Sinkevych, Scott Wold, and Glen Schmidt worked closely together in the management of the project. All three were on hand to answer questions, smooth out difficulties, and tackle other challenges in the testing process, to the point that the Russians began calling Schmidt “The Walking Stick.” His response was classic: “that’s my style, ‘Management by Walking Around.’”
Every day, Schmidt would hold a lab-wide meeting, ensuring everyone was present, before walking everyone through the procedures that needed to be completed for the day and making sure that everyone had the resources they needed to complete their tasks. He also kept himself aware of any upcoming issues, and worked to resolve them (mostly through Wetch and Britt) before they became a problem for the facility preparations. This was a revelation to the Russian team, who despite having worked on the program in Russia for years, often didn’t know anything beyond the component they worked on. This synthesis of knowledge would continue throughout the program, leading to a far better-integrated team.
The initial estimate for the time it would take to prepare the facility and equipment for testing of the reactors was 9 months. Thanks both to the well-integrated team and to the more relaxed management structure of the American effort, the work was completed in only 6 ½ months. According to Sinkevych:
“The trust that was formed between the Russian and American side allowed us in an unusually short time to complete the assembly of the complex and demonstrate its capabilities.”
This was so incredible to Schmidt that he went to Wetch and Britt, asking for a bonus for the Russians due to their exceptional work. This was approved, and the bonuses were paid in proportion to technical assignment, duration, and quality of workmanship. This was yet another culture shock for the Russian team, who had never received a bonus before. The response was twofold: great appreciation, and also “if we continue to save time, do we get another bonus?” The answer was a qualified “perhaps,” and indeed one more, smaller bonus was paid thanks to later time savings.
Mid-Testing Drama, and the Second Shipment
Both in the US and Russia, there were many questions about whether this program was even possible. The reason for its success, though, is unequivocally that it was a true partnership between the American and Russian parts of TIP. This was the first Russian-US government-to-government cooperative program after the fall of the USSR. Unlike the Nunn-Lugar agreement afterward, TIP was always intended to be a true technological exchange, not an assistance program, which is one of the main reasons why the participants of TIP still look fondly and respectfully at the project, while most Russian (and other former Soviet states) participants in N-L consider it to be demeaning, condescending, and not something to ever be repeated again. More than this, though, the Russian design philosophy that allowed full-system, non-nuclear testing of the Topaz-II permanently changed American astronuclear design philosophy, and left its mark on every subsequent astronuclear design.
However, not all organizations in the US saw it this way. Drs. Thorne and Mulder provided excellent bureaucratic cover for the testing program, preventing the majority of the politics of government work from trickling down to the management of the test itself. However, as Scott Wold, the TSET training manager pointed out, they would still get letters from outside organizations stating:
“[after careful consideration] they had concluded that an experiment we proposed to do wouldn’t be possible and that we should just stop all work on the project as it was obviously a waste of time. Our typical response was to provide them with the results of the experiment we had just wrapped up.”
As mentioned, this sort of pushback was not uncommon, but it was only a minor annoyance. In fact, if anything it cemented the practicality of collaborations of this nature, and over time the program reduced the friction it faced through proof of capabilities. Other headaches would arise, but overall they were relatively minor.
Sadly, one of the programs, NEPSTP, was canceled out from under the team near the completion of the spacecraft. The new Clinton administration was not nearly as open to the use of nuclear power as the Bush administration had been (to put it mildly), and as such the program ended in 1993.
One type of drama that was avoided was the second shipment of four more Topaz-II reactors from Russia to the US: the Eh-40, Eh-41, Eh-43, and Eh-44. (The use of these designations directly contradicts the earlier-specified prefixes for Soviet determinations of capabilities; the systems were built, then assessed for suitability for mechanical, thermal, and nuclear use after construction. For more on this, see our first Enisy post here.) The units were designated for the following uses: the Eh-40 was a thermal-hydraulic mockup, with a functioning NaK heat rejection system, for “cold-test” evaluation of the thermal covers during integration, launch, and orbital injection. The Eh-41 was a structural mockup for mechanical testing, and for demonstration of the mechanical integrity of the anticriticality device (more on that in the next post), the modified thermal cover, and American launch vehicle integration. The Eh-43 and Eh-44 were potential flight systems, which would undergo modal testing, charging of the NaK coolant system, fuel loading and criticality testing, mechanical vibration, shock, and acoustic tests, 1000-hour thermal vacuum steady-state stability and NaK system integrity tests, and more before launch.
How was drama avoided in this case? The previous shipment had been done by the US Air Force, which has many regulations governing the transport of any cargo, much less flight-capable nuclear reactors containing several toxic substances, and this led to delays in approval the first time. The second time, in 1994, INTERTEK and ISP contracted a private cargo company, Volga-Dnepr Airlines of Russia, which used one of its An-124s to fly the four reactors from St. Petersburg to Albuquerque.
For me personally, this was a very special event, because I was there. My dad got me out of school (I wasn’t even a teenager yet), drove me out to the landing strip fence at Kirtland AFB, and we watched with about 40 other people as this incredible aircraft landed. He told me about the shipment, and why they were bringing it in, and the seed of my astronuclear obsession was planted.
No beryllium dust was found in this shipment, and the reactors were prepared for testing. Additional thermophysical testing, as well as design work for modifications needed to get the reactors flight-qualified and able to be integrated with the American launchers, were conducted on these reactors. These tests and changes will be the subject of the next blog post, as well as the missions that were proposed for the reactors.
These tests would continue until 1995, and the end of testing in Albuquerque. All reactors were packed up, and returned to Russia per the agreement between INTERTEK and Minatom. The Enisy would continue to be developed in Russia until at least 2007.
More Coming Soon!
The story of the Topaz International Program is far from over. The testing in the US, as well as the programs that the US/Russian team had planned have not even been touched on yet besides very cursory mentions. These programs, as well as the end of the Topaz International Program and the possible future of the Enisy reactor, are the focus of our next blog post, the final one in this series.
This program provided a foundation, as well as a harbinger of challenges to come, in international astronuclear collaboration. As such, I feel that it is a very valuable subject to spend a significant amount of time on.
I hope to have the next post out in about a week and a half to two weeks, but the amount of research necessary for this series has definitely surprised me. The few documents available that fill in the gaps are, sadly, behind paywalls that I can’t afford to breach at my current funding availability.
Hello, and welcome to Beyond NERVA, for our first blog post of the year! Today, we reach the end of the reactor portion of the SNAP program. A combination of the holidays and personal circumstances prevented me from finishing this post as early as I would have liked to, but it’s finally here! Check the end of the blog post for information on an upcoming blog format change. [Author’s note: somehow the references section didn’t attach to the original post, that issue is now corrected, and I apologize, references are everything in as technical a field as this.]
The SNAP-50 was the last, and most powerful, of the SNAP series of reactors, and had a very different start when compared to the other three reactors that we’ve looked at. A fifth reactor, SNAP-4, also underwent some testing, but was meant for undersea applications for the Navy. The SNAP-50 reactor started life in the Aircraft Nuclear Propulsion program for the US Air Force, and ended its life with NASA, as a power plant for the future modular space station that NASA was planning before the budget cuts of the mid to late 1970s took hold.
Because it came from a different program originally, it also used different technology than the reactors we’ve looked at on the blog so far: uranium nitride fuel and a higher-temperature lithium coolant made this reactor a very different beast than the others in SNAP. However, these changes also allowed for a more powerful reactor, and a less massive power plant overall, thanks to the advantages of the higher-temperature design. It was also the first major project to move the space reactor development process away from SNAP-2/10A legacy designs.
The SNAP-50 would permanently alter the way that astronuclear reactors were designed, and would change the course of in-space reactor development for over 20 years. By the time of its cancellation in 1973, it had approached flight readiness to the point that funding and time allowed, but changes in launch vehicle configuration rang the death knell of the SNAP-50.
The Birth of the SNAP-50
Up until now, the SNAP program had focused on a particular subset of nuclear reactor designs. They were all fueled with uranium-zirconium hydride fuel (within a small range of uranium content, all HEU), cooled with NaK-78, and fed either mercury Rankine generators or thermoelectric power conversion systems. This had a lot of advantages for the program: fuel element development improvements for one reactor could be implemented in all of them, challenges in one reactor system that weren’t present in another allowed for distinct data points to figure out what was going on, and the engineers and reactor developers were able to look at each others’ work for ideas on how to improve reliability, efficiency, and other design questions.
However, there was another program going on at about the same time which had a very different purpose, but similar enough design constraints that it could be very useful for an in-space fission power plant: the Aircraft Nuclear Propulsion program (ANP), which was primarily run out of Oak Ridge National Laboratory. Perhaps the most famous part of the ANP program was the series of direct cycle ramjets for Project PLUTO: the TORY series. These ramjets were nuclear fission engines using the atmosphere itself as the working fluid. There were significant challenges to this approach, because the cladding of the fuel elements could not be allowed to fail; if it did, the fission products from the fuel elements would be released as something virtually identical to nuclear fallout, differing only in how it was generated. The fuel elements themselves would also be heavily eroded by the hot air moving through the reactor (which turned out to be a much smaller problem than initially anticipated). The advantage of this system, though, was that it was simple, and could be made relatively lightweight.
Another option was what was known as the semi-indirect cycle, where the reactor would heat a working fluid in a closed loop, which would then heat the air through a heat exchanger built into the engine pod. While this was marginally safer from a fission product release point of view, there were a number of issues with the design. The reactor would have to run at a higher temperature than the direct cycle, because there are always losses whenever you transfer heat from one working fluid to another, and the increased mass of the system also required greater thrust to maintain the desired flight characteristics. The primary coolant loop would become irradiated when going through the reactor, leading to potential irradiation of the air as it passed through the heat exchanger. Another concern was that the heat exchanger could fail, leading to the working fluid (usually a liquid metal) being exposed at high temperature to the superheated air, where it could easily explode. Finally, if a clad failure occurred in the fuel elements, fission products could migrate into the working fluid, making the primary loop even more radioactive, increasing the irradiation of the air as it passed through the engine – and releasing fission products into the atmosphere if the heat exchanger failed.
The alternative to these approaches was an indirect cycle, where the reactor heated a working fluid in a closed loop, transferred this to another working fluid, which then heated the air. The main difference between these systems is that, rather than having the possibly radioactive primary coolant come in close proximity with the air and therefore transferring ionizing radiation, there is an additional coolant loop to minimize this concern, at the cost of both mass and thermal efficiency. This setup allowed for far greater assurances that the air passing through the engine would not be irradiated, because the irradiation of the secondary coolant loop would be so low as to be functionally nonexistent. However, if the semi-indirect cycle was more massive, this indirect cycle would be the heaviest of all of the designs, meaning far higher power outputs and temperatures were needed in order to get the necessary thrust-to-weight ratios for the aircraft. Nevertheless, from the point of view of the people responsible for the ANP program, this was the most attractive design for a crewed aircraft.
Both SNAP and ANP needed many of the same things out of a nuclear reactor: it had to be compact, it had to be lightweight, it had to have a VERY high power density, and it needed to be able to operate virtually maintenance-free in a variety of high-power conditions. These requirements are in stark contrast to terrestrial, stationary nuclear reactors, which can afford heavy, voluminous construction and can thus get by with low power density. As a general rule of thumb, an increase in power density will also intensify the engineering, materials, and maintenance challenges. The fact that the ANP program needed high outlet temperatures to run a jet engine also meant that a large thermal gradient was available across a power conversion system – meaning that high-conversion-efficiency electrical generation was possible. That led SNAP program leaders to see about adapting an aircraft system into a spacecraft system.
The selected design was under development at the Connecticut Advanced Nuclear Engine Laboratory (CANEL) in Middletown, Connecticut, with Pratt and Whitney as the prime contractor. Originally part of the indirect-cycle program, the challenges of heat exchanger design, adequate thrust, and a host of other problems continually set back the indirect cycle, and when the ANP program was canceled in 1961, Pratt and Whitney no longer had a customer for their reactor, despite extensive testing and even the fabrication of novel alloys to deal with certain challenges that their reactor design presented. This led them to look for another customer, and they discovered that both NASA and the US Air Force were interested in high-power-density, high temperature reactors for in-space use. Out of that shared interest, the SNAP-50 was born.
This reactor was an evolution of a series of test reactors, the PWAR series. Three reactors (the PWAR-2, -4, and -8, for 2, 4, and 8 MW of thermal power per reactor core) had already been run for initial design of an aircraft reactor, focused on testing not only the critical geometry of the reactor, but the materials needed to contain its unique (at the time) coolant: liquid lithium. This is because lithium has an excellent specific heat capacity, or the amount of energy that can be stored as heat per unit mass at a given temperature: 3.558 kJ/kg-°C, compared to the 1.124 kJ/kg-°C of NaK78, the coolant of the other SNAP reactors. This means that less coolant would be needed to transport the energy away from the reactor and into the engine in the ANP program, and for SNAP this meant that less working fluid mass would be needed to carry heat from the reactor to the power conversion system. The facts that lithium is far less dense than NaK, and that less of it would be needed, make lithium a highly coveted option for an astronuclear reactor design. However, this design decision also led to needing novel concepts for how to contain liquid lithium. Even compared to NaK, lithium is highly toxic and highly corrosive to most materials, and during the ANP program this led Pratt and Whitney to investigate novel elemental compositions for their containment structures. We’ll look at just what they did later.
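To see what that specific heat advantage buys in practice: the coolant mass flow needed to carry a given amount of heat scales inversely with specific heat. A quick sketch, using the specific heats above but with an illustrative thermal power and coolant temperature rise (neither number is a SNAP-50 design figure):

```python
# Coolant mass flow needed to carry away reactor heat: m_dot = Q / (cp * dT).
# Power and temperature rise below are illustrative assumptions,
# not SNAP-50 design figures.
def mass_flow_kg_s(power_kw, cp_kj_per_kg_k, delta_t_k):
    """Mass flow (kg/s) to transport power_kw with a delta_t_k coolant rise."""
    return power_kw / (cp_kj_per_kg_k * delta_t_k)

POWER_KW = 2000.0   # assumed thermal power
DELTA_T = 100.0     # assumed coolant temperature rise, K

li = mass_flow_kg_s(POWER_KW, 3.558, DELTA_T)    # lithium
nak = mass_flow_kg_s(POWER_KW, 1.124, DELTA_T)   # NaK-78
print(f"lithium: {li:.1f} kg/s, NaK-78: {nak:.1f} kg/s ({nak / li:.2f}x more)")
```

The roughly 3.2x reduction in required coolant flow, on top of lithium's much lower density, is what made the containment headaches worth enduring.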
SNAP-50: Designing the Reactor Core
This reactor ended up using a form of fuel element that we have yet to look at in this blog: uranium nitride, UN. While both UC (you can read more about carbide fuels here) and UN were considered at the beginning of the program, the reactor designers ended up settling on UN because of a unique capability that this fuel form offers: one of the highest fissile fuel densities of any solid fuel form. This is offset by the fact that UN isn’t the most heat tolerant of fuels, requiring a lower core operating temperature. Other options were considered as well, including CERMET fuels using oxides, carbides, and nitrides suspended in a tungsten metal matrix to increase thermal conductivity and reduce the temperature of the fissile fuel itself. The choice between UN, with its higher mass efficiency (due to its higher fissile density), and uranium carbide (UC), with the highest operating temperature of any solid fuel element, was a difficult one, and a lot of fuel element testing occurred at CANEL before a decision was reached. After a lot of study, it was determined that UN in a tungsten CERMET was the best balance of high fissile fuel density, high thermal conductivity, and the ability to manage low fuel burnup over the course of the reactor’s life.
Perhaps the most important design consideration for the fuel elements, after the type of fuel, was how dense the fuel would be, and how to increase that density if desired in the final design. While higher density fuel is, generally speaking, better for specific power, it was discovered that the denser the fuel, the less burnup was possible before the fuel would fail due to fission product gas buildup within the fuel itself. Initial calculations showed an effectively unlimited fuel burnup potential for UN at 80% of its theoretical density, since most of the gasses could diffuse out of the fuel element; once the fuel reached 95% density, however, this was limited to 1% fuel burnup. Additional work determined that this low burnup was in fact not a project killer for the 10,000 hour reactor lifetime specified by NASA, and the program moved ahead.
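A quick sanity check shows why a 1% burnup cap can still support a 10,000-hour life. Using the ~400 kWt thermal output the text later quotes for the 35 kWe core, and the standard rule of thumb that fissioning one kilogram of U-235 yields roughly 950 megawatt-days (a textbook value, not a figure from the SNAP-50 reports):

```python
# How much U-235 actually fissions in 10,000 hours at 400 kWt, and the
# minimum fissile load a 1% burnup cap implies. The 950 MWd/kg figure
# is a standard rule of thumb, not from the SNAP-50 reports.
MWD_PER_KG = 950.0

power_mwt = 0.4
hours = 10_000
energy_mwd = power_mwt * hours / 24        # total thermal energy, MWd
fissioned_kg = energy_mwd / MWD_PER_KG     # U-235 actually consumed
min_load_kg = fissioned_kg / 0.01          # load needed to stay under 1% burnup
print(f"fissioned: {fissioned_kg * 1000:.0f} g, minimum load: {min_load_kg:.1f} kg")
```

A highly enriched fast-spectrum core of this class carries tens of kilograms of fissile material, so the 1% cap was restrictive but workable.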
These fuel pellets needed a cladding material, as most fuel does, and this led to some additional unique materials challenges. With the decision to use lithium coolant, and the need for both elasticity and strength in the fuel element cladding (to deal with both structural loads and fuel swelling), extensive experimentation was needed on the metal that would be used for the clad. Eventually, a columbium-zirconium alloy with a small amount of carbon (Cb-1Zr-0.6C) was decided on: the Cb-Zr alloy resisted high-temperature lithium erosion on the pressure vessel side of the clad, while the carburized layer acted as a barrier to the UN-W CERMET fuel, which would otherwise react strongly with the clad.
These decisions led to an interesting reactor design, but not necessarily a unique one from a non-materials point of view. The fuel would be formed into high-density pellets, which would then be loaded into a clad, with a spring to keep the fuel at the bottom (spacecraft end) of the reactor. The gap between the top of the fuel pellets and the top of the clad was for the release of fission product gasses produced during operation of the reactor. These rods would be loaded in a hexagonal prism pattern into a larger collection of fuel elements, called a can. Seven of these cans, placed side by side (one regular hexagon, surrounded by six slightly truncated hexagons), would form the fueled portion of the reactor core. Shims of beryllium would shape the core into a cylinder, which was surrounded by a pressure vessel and lateral reflectors. Six poison-backed control drums mounted within the reflector would rotate to provide reactor control. Should the reactor need to be scrammed, a spring mechanism would return all the drums to a position with the neutron poison facing the reactor, stopping fission from occurring.
The lithium, after being heated to a temperature of 2000°F (1093°C), would feed into a potassium boiler, before being returned to the core at an inlet temperature of 1900°F (1038°C). From the boiler, the potassium vapor, at 1850°F (1010°C), would enter a Rankine turbine which would produce electricity. The potassium vapor would cool down to 1118°F (603°C) in the process and return – condensed to its liquid form – to the boiler, closing the loop. Several secondary coolant loops were used in this reactor: the main one was for the neutron reflectors, shield actuators, control drums, and other radiation-hardened equipment, and used NaK as a coolant; this coolant was also used as a lubricant for the condensate pump in the potassium system. Another, lower temperature organic coolant was used for other systems that weren’t in as high a radiation flux. The radiators that were used to reject heat also used NaK as a working fluid, and were split into primary and secondary radiator arrays. The primary array pulled heat from the condenser, reducing it from 1246°F (674°C) to 1096°F (591°C), while the secondary array took the lower-temperature coolant from 730°F (388°C) to 490°F (254°C). The power plant was designed to operate in both single and dual loop configurations, with the second (identical) loop used for high powered operation and to increase redundancy.
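As a rough bound on what that temperature spread is worth, here is the Carnot limit between the quoted turbine inlet and outlet temperatures. This is an ideal-cycle ceiling only; a real potassium Rankine system would deliver well under half of it:

```python
# Ideal (Carnot) efficiency ceiling for the potassium Rankine loop,
# using the turbine inlet/outlet temperatures quoted above.
def f_to_k(deg_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (deg_f - 32.0) * 5.0 / 9.0 + 273.15

t_hot = f_to_k(1850)    # potassium vapor entering the turbine
t_cold = f_to_k(1118)   # vapor leaving the turbine
carnot = 1.0 - t_cold / t_hot
print(f"Carnot ceiling: {carnot:.1%}")
```

Even a real-world fraction of that ~32% ceiling handily beats the few percent that thermoelectric conversion offered at the time.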
These design decisions led to a flexible reactor core size, and the ability to adapt to changing requirements from either NASA or the USAF, both of which were continuing to show interest in the SNAP-50 for powering the new, larger space stations that were becoming a major focus of both organizations.
The Power Plant: Getting the Juice Flowing
By 1973, the SNAP 2/10A program had ended, and the SNAP-8/ZrHR program was winding down. These systems simply didn’t provide enough power for the new, larger space station designs being envisaged by NASA, and the smaller reactor designs (the 10B advanced designs that we looked at a couple blog posts back, and the 5 kWe Thermoelectric Reactor) didn’t provide capabilities that were needed at the time. This left the SNAP-50 as the sole reactor design that was practical to take on a range of mission types… but there was a need for different reactor power outputs, so the program ended up developing two reactor sizes. The first was a 35 kWe reactor design, meant for smaller space stations and lunar bases, although the lunar base application seems never to have been fully fleshed out. A larger, 300 kWe type was designed for NASA’s proposed modular space station, a project which would eventually evolve into the actual ISS.
Unlike the SNAP-2 and SNAP-8 programs, the SNAP-50 kept its Rankine turbine design, with potassium vapor as its working fluid. This meant that the power plant was able to meet its electrical power output requirements far more easily than systems saddled with the lower efficiency of thermoelectric conversion. The CRU system meant for the SNAP-2 ended up reaching its design requirements for reliability and life by this time, but sadly the overall program had been canceled, so there was no reactor to pair with this ingenious design (its mercury working fluid is also so toxic that testing would be nearly impossible on Earth today). The boiler, pumps, and radiators for the secondary loop were tested past the 10,000 hour design lifetime of the power plant, and all major complications discovered during the testing process were addressed, proving that the power conversion system was ready for the next stage of testing in a flight configuration.
One concern that was studied in depth was the secondary coolant loop’s tendency to become irradiated in the neutron flux coming off the reactor. Potassium has a propensity for absorbing neutrons; in particular, 41K (about 6% of unrefined K) can capture a neutron and become 42K. This is a problem because 42K emits gamma radiation as it decays, so anywhere the secondary coolant goes needs gamma shielding to prevent the radiation from reaching the crew. This limited where the power conversion system could be mounted: it had to stay within the gamma shielding of the temporary, reactor-mounted shield. The compact nature of both the reactor core and the power conversion system meant that this was a reasonably small concern, but one worthy of in-depth examination by the design team.
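The saving grace of 42K is its short half-life, about 12.36 hours (a standard nuclear data value, not a figure from the SNAP-50 reports), so induced activity in the potassium loop dies away quickly once the reactor shuts down:

```python
import math

# Fraction of shutdown 42K activity remaining after a given time,
# assuming a 12.36-hour half-life (standard nuclear data, an assumption here).
HALF_LIFE_H = 12.36

def activity_fraction(hours):
    """Remaining fraction of initial activity after `hours` of decay."""
    return math.exp(-math.log(2.0) * hours / HALF_LIFE_H)

for t in (12.36, 24.0, 72.0):
    print(f"after {t:5.2f} h: {activity_fraction(t):.3f} of shutdown activity")

# Time for the activity to fall a thousandfold:
t_1000 = HALF_LIFE_H * math.log2(1000.0)
print(f"1000-fold decay: ~{t_1000:.0f} h (~{t_1000 / 24:.0f} days)")
```

In other words, the shielding problem is dominated by operation, not by handling a loop that has been cold for a few days.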
The power conversion system and auxiliary equipment, including the actuators for the control drums, power conditioning equipment, and other necessary equipment was cooled by a third coolant loop, which used an organic coolant (basically the oil needed for the moving parts to be lubricated), which ran through its own set of pumps and radiators. This tertiary loop was kept isolated from the vast majority of the radiation flux coming off the reactor, and as such wasn’t a major concern for irradiation damage of the coolant/lubricant.
Some Will Stay, Some Will Go: Mounting SNAP-50 To A Space Station
Each design used a 4-pi (a fully enclosing) shield with a secondary shadow shield pointing to the space station in order to reduce radiation exposure for crews of spacecraft rendezvousing or undocking from the space station. This primary shield was made out of a layer of beryllium to reflect neutrons back into the core, and boron carbide (B4C, enriched in boron-10) to absorb the neutrons that weren’t reflected back into the core. These structures needed to be cooled to ensure that the shield wouldn’t degrade, so a NaK shield coolant system (using technology adapted from the SNAP-8 program) was used to keep the shield at an acceptable temperature.
The shadow shield was built in two parts: the entire structure would be launched at the same time for the initial reactor installation for the space station, and then when the reactor needed to be replaced only a portion of the shield would be jettisoned with the reactor. The remainder, as well as the radiators for the reactor’s various coolant systems, would be kept mounted to the space station in order to reduce the amount of mass that needed to be launched for the station resupply. The shadow shield was made out of layers of tungsten and LiH, for gamma and neutron shielding respectively.
When it came time to replace the core of the reactor at the end of its 10,000 hour design life (a serious constraint imposed by the fuel burnup issues of the UN fuels they were working with), everything from the separation plane back would be jettisoned. This could theoretically have been dragged to a graveyard orbit by an automated mission, but the more likely scenario at the time would have been to leave it in a slowly degrading orbit to give the majority of the short-lived isotopes time to decay, and then design it to burn up in the atmosphere at a high enough altitude that diffusion would dilute the impact of any radioisotopes from the reactor. This was, of course, before the problems that the USSR ran into with their US-A program, which eliminated this lower cost decommissioning option.
After the old reactor core was discarded, the new core, together with the small forward shield and power conversion system, could be put in place using a combination of off-the-shelf hardware, which at the time was expected to be common enough: either Titan-III or Saturn 1B rockets, with appropriate upper stages to handle the docking procedure with the space station. The reactor would then be attached to the radiator, the docking would be completed, and within 8 hours the reactor would reach steady-state operations for another 10,000 hours of normal use. The longest that the station would be running on backup power would be four days. Unfortunately, information on the exact docking mechanism used is thin, so the details on how they planned this stage are still somewhat hazy, but there’s nothing preventing this from being done.
A number of secondary systems, including accumulators, pumps, and other equipment, were mounted along with the radiator in the permanent section of the power supply installation. Many other systems, especially anything exposed to a large radiation flux or high temperatures during operation (LiH, the primary shielding material, loses hydrogen through outgassing at a known rate depending on temperature, and can almost be said to have a half-life), would be separated with the core, but everything that was practicable to leave in place was kept.
This basic design principle for reloadable (which in astronuclear often just means “replaceable core”) reactors will be revisited time and again for orbital installations. Variations on the concept abound, although surface power units seem to favor “abandon in place” far more. In the case of large future installations, it’s not unreasonable to suspect that refueling of a reactor core would be possible, but at this point in astronuclear mission utilization, even having this level of reusability was an impressive feat.
35 kWe SNAP-50: The Starter Model
In the 1960s, having 35 kWe of power for a space station was considered significant enough to supply the vast majority of mission needs. Because of this, a smaller version of the SNAP-50 was designed to fit this mission design niche. While the initial power plant would require the use of a Saturn 1B to launch it into orbit, the replacement reactors could be launched on either an Atlas-Centaur or Titan IIIA-Centaur launch vehicle. This was billed as a low cost option, as a proof of concept for the far larger – and at this point, far less fully tested – 300 kWe version to come.
NASA was still thinking of very large space stations at this time. The baseline crew requirements alone were incredible: 24-36 crew, with rotations lasting from 3 months to a year, and a station life of five years. While 35 kWe wouldn’t be sufficient for the full station, it would be an attractive option. Other programs had looked at nuclear power plants for space stations as well, like we saw with the Manned Orbiting Laboratory and the Orbital Workshop (later Skylab), and facilities of that size would be good candidates for the 35 kWe system.
The core itself measured 8.3 inches (0.211 m) across and 11.2 inches (0.284 m) long, and used 236 fuel elements arranged into seven fuel element cans within the pressure vessel of the core. Six poison-backed control drums were used for primary reactor control. The core would produce up to 400 kW of thermal power. The pressure vessel, control drums, and all other control and reflective materials together measured just 19.6 inches (0.498 m) by 27.9 inches (0.709 m), and the replaceable portion of the reactor was between four and five feet (1.2 m and 1.5 m) tall, and five and six feet (1.5 m and 1.8 m) across – including shielding.
This reactor could also have been a good prototype for a nuclear electric probe, a concept that will be revisited later, although there’s little evidence that this path was ever seriously explored. Like many smaller reactor designs, this one did not get the amount of attention that its larger brother received, but at the time it was considered a good, solid space station power supply.
300 kWe SNAP-50: The Most Powerful Space Reactor to Date
While there were sketches for reactors more powerful than the 300 kWe SNAP-50 variant, these were never developed to any great extent, and certainly not to the point of experimental verification that SNAP-50 had achieved. The 300 kWe design was considered a good starting point for a possible crewed nuclear electric spacecraft, as well as being able to power a truly huge space station.
The 300 kWe variant of the reactor differed from its smaller brother in more than just size. Despite using the same fuel, clad, and coolant as the 35 kWe system, the 300 kWe system could achieve over four times the fuel burnup of the smaller reactor (1.3% vs 0.32%), and had a higher maximum fuel power density as well, both of which have a huge impact on core lifetime and dynamics. This was partially achieved by making the fuel elements almost half as wide, and increasing their number to 1093, held in 19 cans within the core. This led to a core that was 10.2 inches (0.259 m) wide and 14.28 inches (0.363 m) long (keeping the same 1:1.4 core geometry between the reactors), and a pressure vessel that was 12” (0.305 m) in diameter by 43” (1.092 m) in length. It also increased the thermal output of the reactor to 2200 kWt. The number of control drums was increased from six to eight, with longer drums to fit the longer core, and some rearrangement of lithium pumps and other power conversion equipment occurred within the larger 4-pi shield structure. The entire reactor assembly that would undergo replacement was five to six feet (1.5–1.8 m) high, and six to seven feet (1.8–2.1 m) in diameter.
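Those numbers can be turned into a rough power-density comparison, treating each core as a simple cylinder and using the dimensions, thermal outputs, and element counts quoted in the text:

```python
import math

# Volumetric power density and per-element power for the two SNAP-50
# cores, modeled as plain cylinders from the quoted dimensions.
def core_stats(diameter_m, length_m, kw_thermal, n_elements):
    """Return (kW per liter of core, kW per fuel element)."""
    volume_l = math.pi * (diameter_m / 2.0) ** 2 * length_m * 1000.0
    return kw_thermal / volume_l, kw_thermal / n_elements

small_kw_l, small_per_el = core_stats(0.211, 0.284, 400.0, 236)
large_kw_l, large_per_el = core_stats(0.259, 0.363, 2200.0, 1093)
print(f"35 kWe core:  {small_kw_l:.0f} kW/L, {small_per_el:.2f} kW/element")
print(f"300 kWe core: {large_kw_l:.0f} kW/L, {large_per_el:.2f} kW/element")
```

The larger core runs nearly three times denser by volume, consistent with the tighter burnup and thermal margins described above.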
Sadly, even the ambitious NASA space station wasn’t big enough to need even the smaller 35 kWe version of the reactor, much less the 300 kWe variants. Plans had been made for a fleet of nuclear electric tugs that would ferry equipment back and forth to a permanent Moon base, but cancellation of that program occurred at the same time as the death of the moon base itself.
Mass Tradeoffs: Why Nuclear Instead of Solar?
By the middle of the 1960s, photovoltaic solar panels had become efficient and reliable enough for regular use in spacecraft. Because of this, it was a genuine question for the first time whether to go with solar panels or a nuclear reactor, whereas in the 1950s and early 60s nuclear was pretty much the only option. However, solar panels have a downside: drag. Even in orbit there is a very thin atmosphere, so in lower orbits a satellite has to regularly raise its orbit or it will eventually fall and burn up in the atmosphere. Another downside comes from MM/OD: micrometeoroids and orbital debris. Since solar panels are large, flat, and pointing at the sun all the time, there’s a greater chance that something will strike one of those panels, damaging or possibly even destroying it. Managing these two issues is the primary concern of using solar panels as a power supply in terms of orbital behavior, and determines the majority of the refueling mass needed for a solar powered space station.
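To put rough numbers on the drag penalty, here is a crude estimate of the reboost delta-v a station pays per year of drag makeup. Every input is an illustrative assumption (thermospheric density alone swings by an order of magnitude with solar activity), not a figure from any of the period studies:

```python
# Yearly drag-makeup delta-v: F = 0.5 * rho * v^2 * Cd * A, dv = (F/m) * t.
# All inputs are illustrative assumptions for an ISS-like orbit.
RHO = 4e-12              # kg/m^3, assumed atmospheric density near 400 km
V = 7670.0               # m/s, orbital velocity
CD = 2.2                 # flat-plate drag coefficient (assumed)
YEAR_S = 365.25 * 86400  # seconds per year

def reboost_dv_per_year(area_m2, mass_kg):
    """Delta-v per year needed to cancel drag on a given frontal area."""
    force_n = 0.5 * RHO * V ** 2 * CD * area_m2
    return force_n / mass_kg * YEAR_S

# Same hypothetical 50-tonne station: big solar wings vs. a compact radiator.
print(f"500 m^2 of arrays:  {reboost_dv_per_year(500.0, 50_000.0):.0f} m/s per year")
print(f"50 m^2 of radiator: {reboost_dv_per_year(50.0, 50_000.0):.1f} m/s per year")
```

Because the drag force is linear in frontal area, a tenfold reduction in sail area is a tenfold reduction in yearly reboost propellant, which is exactly the effect the period trade studies kept finding.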
On the nuclear side, by 1965, there were two power plant options on the table: the SNAP-8 (pre-ZrHR redesign) and the SNAP-50, and solar photovoltaics had developed to the point that they could be deployed in space. Because of this, a comparison was done by Pratt and Whitney of the three systems to determine the mass efficiency of each system, not only in initial deployment but also in yearly fueling and tankage requirements. Each of the systems was compared at a 35 kWe power level to the space station in order to allow for a level playing field.
One thing that stands out about the solar system (based on a pair of Lockheed and General Electric studies) is that it’s marginally the lightest of all the systems at launch, but within a year the total system maintenance mass required far outstrips the mass of the nuclear power plants, especially the SNAP-50. This is because the solar panels have a large sail area, which catches the very thin atmosphere at the station’s orbital altitude and drags the station down into the thicker atmosphere, so thrust is needed to re-boost the space station. This is something that has to be done on a regular basis for the ISS. The mass of the fuel, tankage, and structure to allow for this reboost is extensive. Even back in 1965 there were discussions on using electric propulsion for the reboosting of the space station, in order to significantly reduce the mass needed for this procedure. That discussion is still happening casually with the ISS, and Ad Astra still hopes to use VASIMR for this purpose – a concept that’s been floated for the last ten or so years.
Overall, the mass difference between the SNAP-50 and the optimistic Lockheed proposal of the time was significant: the original deployment was only about 70 lbs (31.75 kg) different, but the yearly maintenance mass requirements would be 5,280 lbs (2395 kg) different – quite a large amount of mass.
Because the SNAP-50 and SNAP-8 don’t have these large sail areas, and the radiators needed can be made aerodynamic enough to greatly reduce the drag on the station, the reboost requirements are significantly lower than for the solar panels. The SNAP-50 weighs significantly less than the SNAP-8, and has significantly less surface area, because the reactor operates at a far higher temperature and therefore needs a smaller radiator. Another difference between the reactors is volume: the SNAP-50 is physically smaller than the SNAP-8 because of that same higher temperature, and also because the UN fuel is far more dense than its U-ZrH fueled counterpart.
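The radiator-size advantage follows directly from the Stefan-Boltzmann law: radiated power goes as T⁴, so the area required to reject a given heat load goes as 1/T⁴. A sketch with round-number temperatures and an assumed heat load (not the actual figures of either reactor):

```python
# Radiator area needed to reject a fixed heat load at two temperatures.
# Heat load, temperatures, and emissivity are illustrative assumptions.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(power_w, temp_k, emissivity=0.9):
    """Idealized radiating area for a given heat load and temperature."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

heat_w = 1.8e6                          # assumed waste heat to reject
hot = radiator_area_m2(heat_w, 900.0)   # hotter, SNAP-50-like radiator
cool = radiator_area_m2(heat_w, 600.0)  # cooler, SNAP-8-like radiator
print(f"900 K: {hot:.0f} m^2, 600 K: {cool:.0f} m^2 ({cool / hot:.2f}x larger)")
```

A 50% bump in radiator temperature cuts the required area by a factor of five, which is why high-temperature coolants like lithium and potassium were worth so much trouble.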
These reactors were designed to be replaced once a year, with the initial launch being significantly more massive than the follow-up launches, benefitting from the sectioned architecture with a separation plane at the narrow end of the shadow shield, as described above. Only the smaller section of shield left with the reactor when it was separated. The larger, heavier section would remain with the space station, along with the radiators, and serve as the mounting point for the new reactor core and power conversion system, which would be sent to the space station on an automated refueling launch.
Solar panels, on the other hand, require both reboost to compensate for drag and equipment to repair or replace the panels, batteries, and associated components as they wear out. This in turn requires a somewhat robust repair capability for ongoing maintenance – a requirement for any large, long term space station, but the more area exposed to space debris, the more time and mass must be spent on repairs rather than on doing science.
Of course, today solar panels are far lighter, and electric thrusters are far more mature, than they were at that time. This, in addition to widespread radiophobia, has made solar the power source of choice for most satellites, and all space stations, to date. However, the savings available in overall lifetime mass, and a sail area that is both smaller and more physically robust, remain key advantages for a nuclear powered space station in the future.
The End of an Era: Changing Priorities, Changing Funding
The SNAP-50, even the small 35 kWe version, offered more power, more efficiency, and less mass and volume than the most advanced of SNAP-8’s children: the A-ZrHR. This was the end of the zirconium hydride fueled reactor era for the Atomic Energy Commission, and while this type of fuel continues to be used in TRIGA research and training reactors all over the world (a common type of small reactor for colleges and research organizations), its time as the preferred fuel for astronuclear designs was over.
In fact, by the end of the study period, the SNAP-50 was extended to 1.5 MWe in some designs, the most powerful design to be proposed until the 1980s, and one of the most powerful ever proposed… but this ended up going nowhere, as did much of the mission planning surrounding the SNAP program.
At the same time as these higher-powered reactor designs were coming to maturity, funding for both civilian and military space programs virtually disappeared. National priorities, and perceptions of nuclear power, were shifting. Technological advances eliminated many future military crewed missions in favor of uncrewed ones with longer lifetimes, less mass, less cost – and far smaller power requirements. NASA funding began falling under the axe even as we were landing on the Moon for the first time, and from then on funding became very scarce on the ground.
The transition from the Atomic Energy Commission to the Department of Energy wasn’t without its hiccups, or reductions in funding, either, and where once every single AEC lab seemed to have its own family of reactor designs, the field narrowed greatly. As we’ll see, even at the start of the Strategic Defense Initiative (“Star Wars”), the reactor design of choice was not too different from the SNAP-50.
Finally, the changes in launch systems had their impact as well. NASA was heavily investing in the Space Transportation System (the Space Shuttle), which was assumed to be the way that most or all payloads would be launched, so a nuclear reactor had to be able to be flown up – and in some cases returned – by the Shuttle. This placed a whole different set of constraints on the reactor, requiring a large rewrite of the basic design. The follow-on design, the SP-100, used the same UN fuel and Li coolant as the SNAP-50, but was designed to be launched and retrieved by the Shuttle. The fact that the STS never lived up to its promise in launch frequency or cost (and that other launchers remained continuously available) means that this was ultimately a diversion, but at the time it was a serious consideration.
All of this spelled the death of the SNAP-50 program, as well as the end of dedicated research into a single reactor design until 1983, with the SP-100 nuclear reactor system, a reactor we’ll look at another time.
While I would love to go into many of the reactors that were developed up to this time, including heat pipe cooled reactors (SABRE at Los Alamos), thermionic power conversion systems (the 5 kWe Thermionic Reactor), and other ideas, there simply isn’t time to go into them here. As we look at different reactor components they’ll come up, and we’ll mention them there. Some labs were able to continue funding limited research with the help of NASA, and sometimes the Department of Defense or the Defense Nuclear Safety Agency, but the days of big astronuclear programs were fading into the past. Both space and nuclear power would refocus, and then fade in the rankings of budgetary requirements over the years. We will be looking at these reactors more as time goes on, in our new “Forgotten Reactors” column (more on that below).
The Blog is Changing!
With the new year, I’ve been thinking a lot about the format of both the website and the blog, and where I hope to go in the next year. I’ve had several organizational projects on the back burner, and some of them are going to be started here soon. The biggest part is going to be the relationship between the blog and the website, and what I write more about where.
Expect another blog post shortly (it’s already written, just not edited yet) about our plans for the next year!
I’ve got big plans for Beyond NERVA this year, and there are a LOT of things that are slowly getting started in the background which will greatly improve the quality of the blog and the website, and this is just the start!
Hello, and welcome to Beyond NERVA! Today we’re going to look at the program that birthed the first astronuclear reactor to go into orbit, although the extent of the program far exceeds the flight record of a single launch.
Before we get into that, I have a minor administrative announcement that will develop into major changes for Beyond NERVA in the near-to-mid future! As you may have noticed, we have moved from beyondnerva.wordpress.com to beyondnerva.com. For the moment, there isn’t much different, but in the background a major webpage update is brewing! Not only will the home page be updated to make it easier to navigate the site (and see all the content that’s already available!), but the number of pages on the site is going to be increasing significantly. A large part of this is going to be integrating information that I’ve written about in the blog into a more topical, focused format – with easier access to both basic concepts and technical details being a priority. However, there will also be expansions on concepts, pages for technical concepts that don’t really fit anywhere in the blog posts, and more! As these updates become live, I’ll mention them in future blog posts. Also, I’ll post them on both the Facebook group and the new Twitter feed (currently not super active, largely because I haven’t found my “tweet voice yet,” but I hope to expand this soon!). If you are on either platform, you should definitely check them out!
The Systems for Nuclear Auxiliary Power, or SNAP program, was a major focus for a wide range of organizations in the US for many decades. The program extended everywhere from the bottom of the seas (SNAP-4, which we won’t be covering in this post) to deep space travel with electric propulsion. SNAP was divided up into an odd/even numbering scheme, with the odd model numbers (starting with the SNAP-3) being radioisotope thermoelectric generators, and the even numbers (beginning with SNAP-2) being fission reactor electrical power systems.
Due to the sheer scope of the SNAP program, even after eliminating the systems that aren't fission-based, this is going to be a multi-post subject. This post will cover the US Air Force's portion of the SNAP reactor program: the SNAP-2 and SNAP-10A reactors; their development programs; the SNAPSHOT mission; and a look at the missions that these reactors were designed to support, including satellites, space stations, and other crewed and uncrewed installations. The next post will cover the NASA side of things: SNAP-8 and its successor designs, as well as SNAP-50/SPUR. The one after that will cover the SP-100, SABRE, and other designs from the late 1970s through to the early 1990s, and will conclude with a look at a system we mentioned briefly in the last post: the ENISY/TOPAZ II reactor, the only astronuclear design to be flight qualified by the space agencies and nuclear regulatory bodies of two different nations.
The Beginnings of the US Astronuclear Program: SNAP’s Early Years
Beginning in the earliest days of both the nuclear age and the space age, nuclear power had a lot of appeal for the space program: high power density, high power output, and mechanically simple systems were in high demand for space agencies worldwide. The earliest mention of a program to develop nuclear electric power systems for spacecraft was the Pied Piper program, begun in 1954. This led to the development of the Systems for Nuclear Auxiliary Power program, or SNAP, the following year (1955), which was eventually canceled in 1973, as were so many other space-focused programs.
Once space became a realistic place to send not only scientific payloads but personnel, the need to provide them with significant amounts of power became evident. Most systems of the day were far from the electrically efficient designs that both the American and Soviet space programs would develop in the coming decades. Moreover, at the time, the vision for a semi-permanent space station wasn't 3-6 people orbiting in a (completely epic, scientifically revolutionary, collaboratively brilliant, and invaluable) zero-gee conglomeration of tin cans like the ISS, but larger space stations that provided centrifugal gravity, staffed 'round the clock by dozens of individuals. These weren't just space stations for NASA, which was an infant organization at the time, but for the USAF, and possibly other institutions in the US government as well. In addition, what would provide a livable habitation for a group of astronauts would also be able to power a remote, uncrewed radar station in the Arctic, or in other extreme environments. Even if crew were present, the fact that the power plant wouldn't have to be maintained was a significant military advantage.
Responsible for both radioisotope thermoelectric generators (which run on the natural radioactive decay of a radioisotope, selected according to its energy density and half-life) as well as fission power plants, SNAP programs were numbered with an even-odd system: even numbers were fission reactors, odd numbers were RTGs. These designs were never solely meant for in-space application, but the increased mission requirements and complexities of being able to safely launch a nuclear power system into space made this aspect of their use the most stringent, and therefore the logical one to design around. Additionally, while the benefits of a power-dense electrical supply are obvious for any branch of the military, the need for this capability in space far surpassed the needs of those on the ground or at sea.
The program was originally jointly run by the AEC's Division of Reactor Development (which funded the reactor itself) and the USAF's Wright Air Development Center (which funded the power conversion system); full control was handed over to the AEC in 1957. Atomics International, a division of North American Aviation, was the prime contractor for the program.
There are a number of similarities across almost all the SNAP designs, for several reasons. First, all of the reactors that we'll be looking at (as well as some other designs we'll look at in the next post) used the same type of fissile fuel, even though the form and the cladding varied widely between the different concepts. Uranium-zirconium hydride (U-ZrH) was a very popular fuel choice at the time: assuming hydrogen loss could be controlled (a major part of the testing regime in all the reactors we'll look at), it provided a self-moderating, moderate-to-high-temperature fuel form, which was a very attractive feature. This type of fuel is still used today in the TRIGA reactor, which, together with its direct descendants, is the most common form of research and test reactor worldwide. The higher-powered reactors (SNAP-2 and SNAP-8) both initially used variations on the same power conversion system: a boiling mercury Rankine cycle. By the end of the testing regime this was shown to be workable, but to my knowledge it has never been proposed again (we'll look at it briefly in the post on heat engines as power conversion systems, with a more in-depth look to come), although a mercury-based MHD conversion system is being offered as a power conversion option for an accelerator-driven molten salt reactor.
SNAP-2: The First American Built-For-Space Nuclear Reactor Design
The idea for the SNAP-2 reactor originally came from a 1951 Rand Corporation study, looking at the feasibility of having a nuclear powered satellite. By 1955, the possibilities that a fission power supply offered in terms of mass and reliability had captured the attention of many people in the USAF, which was (at the time) the organization that was most interested and involved (outside the Army Ballistic Missile Agency at the Redstone Arsenal, which would later become the Goddard Spaceflight Center) in the exploitation of space for military purposes.
The original request for the SNAP program, for what ended up becoming known as SNAP-2, came in 1955 from the AEC's Division of Reactor Development and the USAF Wright Air Development Center. It called for possible power sources in the 1 to 10 kWe range that would be able to operate autonomously for one year, and the original proposal was for a zirconium hydride moderated, sodium-potassium (NaK) cooled reactor with a boiling mercury Rankine power conversion system (similar to a steam turbine in operational principles, but we'll look at power conversion systems more in a later post). The design was refined into a 55 kWt, 5 kWe reactor operating at about 650°C outlet temperature, massing about 100 kg unshielded, and was tested for over 10,000 hours. Epithermal neutron spectrum designs like this one would remain popular throughout much of the US in-space reactor program, both for electrical power and thermal propulsion designs. This design would later be adapted, with some modifications, into the SNAP-10A reactor as well.
SNAP-2's first critical assembly test came in October of 1957, shortly after Sputnik-1's successful launch. The assembly used U-ZrH fuel elements with 93% enriched 235U making up 8% of the fuel's weight, a 1" beryllium inner reflector, and an outer graphite reflector of variable thickness, separated into two rough hemispheres to control the approach to criticality. This device was able to test many of the reactivity conditions needed for materials testing economically and at small scale, as well as the behavior of the fuel itself. The primary concerns with testing on this machine were reactivity, activation, and the intrinsic steady-state behavior of the fuel that would be used for SNAP-2. A number of materials were also tested for neutron reflection and absorption, both for main core components and for out-of-core mechanisms. This was followed by the SNAP-2 Experimental Reactor in 1959-1960 and the SNAP-2 Development Reactor in 1961-1962.
The SNAP-2 Experimental Reactor (S2ER or SER) was built to verify the core geometry and basic reactivity controls of the SNAP-2 reactor design, as well as to test the basics of the primary cooling system, materials, and other basic design questions, but was not meant to be a good representation of the eventual flight system. Construction started in June 1958, with construction completed by March 1959. Dry (Sept 15) and wet (Oct 20) critical testing was completed the same year, and power operations started on Nov 5, 1959. Four days later, the reactor reached design power and temperature operations, and by April 23 of 1960, 1000 hours of continuous testing at design conditions were completed. Following transient and other testing, the reactor was shut down for the last time on November 19, 1960, just over one year after it had first achieved full power operations. Between May 19 and June 15, 1961, the reactor was disassembled and decommissioned. Testing on various reactor materials, especially the fuel elements, was conducted, and these test results refined the design for the Development Reactor.
The SNAP-2 Development Reactor (S2DR or SDR, also called the SNAP-2 Development System, S2DS) was installed in a new facility at the Atomics International Santa Susana research facility to better manage the increased testing requirements for the more advanced reactor design. While this wasn't going to be a flight-type system, it was designed to inform the flight system on many of the details that the S2ER couldn't. Interestingly, information on this reactor is much harder to find than on the S2ER. It incorporated many changes from the S2ER, and went through several iterations to tweak the design for a flight reactor. Zero-power testing occurred over the summer of 1961, and testing at power began shortly after (although at SNAP-10 power and temperature levels). Testing continued until December of 1962, and further refined the SNAP-2 and -10A designs.
A third set of critical assembly reactors, known as the SNAP Development Assembly series, was constructed at about the same time, meant to provide fuel element testing, criticality benchmarks, reflector and control system worth, and other core dynamic behaviors. These were also built at the Santa Susana facility, and would provide key test capabilities throughout the SNAP program. This water-and-beryllium reflected core assembly allowed for a wide range of testing environments, and would continue to serve the SNAP program through to its cancellation. Going through three iterations, the designs were used more to test fuel element characteristics than the core geometries of individual core concepts. This informed all three major SNAP designs in fuel element material and, to a lesser extent, heat transfer (the SNAP-8 used thinner fuel elements) design.
Extensive testing was carried out on all aspects of the core geometry, fuel element geometry and materials, and other behaviors of the reactor; but by May 1960 there was enough confidence in the design for the USAF and AEC to plan a launch program for the reactor (and the SNAP-10A), called SNAPSHOT (more on that below). Testing using the SNAP-2 Experimental Reactor occurred in 1960-1961, and the follow-on test program, including the SNAP-2 Development Reactor, occurred in 1962-63. These programs, as well as the SNAP Critical Assembly 3 series of tests (used for SNAP-2 and -10A), allowed a mostly finalized reactor design to be completed.
Development of the power conversion system (PCS), a Rankine turbine using mercury as the working fluid, began in 1958 with a mercury boiler built to test the components in a non-nuclear environment. The turbine faced many technical challenges, including bearing lubrication and wear issues, turbine blade pitting and erosion, fluid dynamics problems, and other difficulties. As is often the case with advanced reactor designs, the main challenge wasn't the reactor core itself, nor the control mechanisms, but the non-nuclear portions of the power unit. This is a common theme in astronuclear engineering; more recently, JIMO ran into similar problems when the final system design called for a theoretical but as-yet-unbuilt supercritical CO2 Brayton turbine (as we'll see in a future post). Without a power conversion system of usable efficiency and low enough mass, an astronuclear power system has no means of delivering the electricity it's called upon to deliver.
Reactor shielding, in the form of a metal honeycomb impregnated with a high-hydrogen material (in this case a form of paraffin), was common to all SNAP reactor designs. The high hydrogen content allowed for the best hydrogen density of the available materials, and therefore the greatest shielding per unit mass of the available options.
Testing on the SNAP-2 reactor system continued until 1963, when the reactor core itself was re-purposed into the redesigned SNAP-10, which became the SNAP-10A; at that point the SNAP-2 reactor program was folded into the SNAP-10A program. SNAP-2-specific design work was more or less halted on the reactor side, due to a number of factors: the slower development of the CRU power conversion system, the large number of moving parts in the Rankine turbine, and the advances made in the more powerful SNAP-8 family of reactors (which we'll cover in the next post). Testing on the power conversion system, however, continued until 1967, due to its application to other programs. This didn't mean the reactor was useless for other missions; in fact, its more efficient power conversion system made it far more attractive for crewed space operations (as we'll see later in this post), especially space stations. Even this role, however, would eventually be taken over by a derivative of the SNAP-8, the Advanced ZrH Reactor, leaving the SNAP-2 without a useful mission.
The SNAP Reactor Improvement Program, in 1963-64, continued to optimize and advance the design without nuclear testing, through computer modeling, flow analysis, and other means; but the program ended without flight hardware being either built or used. We’ll look more at the missions that this reactor was designed for later in this blog post, after looking at its smaller sibling, the first reactor (and only US reactor) to ever achieve orbit: the SNAP-10A.
SNAP-10: The Father of the First Reactor in Space
At about the same time as the SNAP-2 Development Reactor tests (1958), the USAF requested a study on a thermoelectric power conversion system, targeting the 0.3 to 1 kWe power regime. This was the birth of what would eventually become the SNAP-10 reactor, which would in time evolve into the SNAP-10A, the first nuclear reactor to go into orbit.
In the beginning, this design was superficially quite similar to the Romashka reactor that we’ll examine in the USSR part of this blog post, with plates of U-ZrH fuel, separated by beryllium plates for heat conduction, and surrounded by radial and axial beryllium reflectors. Purely conductively cooled internally, and radiatively cooled externally, this was later changed to a NaK forced convection cooling system for better thermal management (see below). The resulting design was later adapted to the SNAP-4 reactor, which was designed to be used for underwater military installations, rather than spaceflight. Outside these radial reflectors were thermoelectric power conversion systems, with a finned radiating casing being the only major component that was visible. The design looked, superficially at least, remarkably like the RTGs that would be used for the next several decades. However, the advantages to using even the low power conversion efficiency thermoelectric conversion system made this a far more powerful source of electricity than the RTGs that were available at the time (or even today) for space missions.
Within a short period, however, the design changed dramatically, resulting in a core very similar to that of the SNAP-2 reactor under development at the same time. Modifications were made to the SNAP-2 baseline, resulting in the two reactor cores becoming identical; this also led to the NaK cooling system being implemented on the SNAP-10A. Many of the test reactors for the SNAP-2 system were also used to develop the SNAP-10A, because the final design, despite its much lower electrical output, differed mainly in the power conversion system, not the reactor structure. This reactor design was tested extensively, with the S2ER, S2DR, and SCA test series (4A, 4B, and 4C) reactors, as well as the SNAP-10 Ground Test Reactor (S10FS-1). The new design used a similar, but slightly smaller, conical radiator with NaK as the working fluid.
This was a far lower power design than the SNAP-2, coming in at 30 kWt, but with the 1.6% conversion efficiency of the thermoelectric system, its electrical power output was only 500 We. It also ran almost 100°C (about 180°F) cooler, allowing for longer fuel element life, but leaving less thermal gradient to work with, and therefore a lower theoretical maximum efficiency. This tradeoff was the best on offer, though, and the power conversion system's lack of moving parts, and its ease of being tested in a non-nuclear environment without extensive support equipment, made it more robust from an engineering point of view. The overall design life of the reactor remained short, though: only about one year, and less than 1% fissile fuel burnup. It's possible, and maybe even likely, that (barring spacecraft-associated failure) the reactor could have provided power for longer; however, the longer the reactor operates, the more the fuel swells due to fission product buildup, and at some point this would cause the fuel cladding to fail. Other challenges, such as fission poison buildup, clad erosion, and mechanical wear, would end the reactor's operational life at some point, even if the fuel elements could still provide more power.
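A quick back-of-the-envelope check shows how these numbers hang together, and why the radiator ends up being such a dominant part of a thermoelectric astronuclear design. This is just a sanity-check sketch using the figures quoted above, not data from any primary source:

```python
# Sanity check of the SNAP-10A power figures quoted above.
# All input numbers come from the text, not from a primary source.

thermal_power_w = 30_000        # reactor thermal output: 30 kWt
conversion_efficiency = 0.016   # ~1.6% thermoelectric conversion efficiency

# Electrical output is simply thermal power times conversion efficiency.
electrical_power_w = thermal_power_w * conversion_efficiency
print(electrical_power_w)  # 480.0 W, consistent with the ~500 We quoted

# Everything not converted to electricity must be radiated away as heat.
waste_heat_w = thermal_power_w - electrical_power_w
print(waste_heat_w)  # 29520.0 W of waste heat for the radiator to reject
```

The takeaway is that at ~1.6% efficiency, nearly all of the reactor's 30 kWt ends up as waste heat, which is why the conical radiator is such a prominent feature of the spacecraft.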
The SNAP-10A was not meant to power crewed facilities, since its power output was so low that multiple installations would be needed. This meant that, while all SNAP reactors were meant to be largely or wholly unmaintained, this reactor had no possibility of being maintained at all. The reliability requirements for the system were correspondingly higher, and the lack of moving parts in the power conversion system helped meet them. The system was also designed to have only a brief (72-hour) period during which active reactivity control would be used, to mitigate any startup transients and establish steady-state operations, before the active control systems would be left in their final configuration, leaving the reactor entirely self-regulating. This placed an additional burden on the reactor designers to have a very strong understanding of the behavior of the reactor, its long-term stability, and any effects that would occur during the year-long lifetime of the system.
At the end of the reactor's life, it was designed to stay in orbit until the short-lived, highly radiotoxic portion of its fission product inventory had gone through at least five half-lives, reducing the radioactivity of the system to a very low level. At the end of this process, the reactor would re-enter the atmosphere, the reflectors would be ejected, and the entire thing would burn up in the upper atmosphere. From there, winds would dilute any residual radioactivity to less than what was released by a single small nuclear test (which were still being conducted in Nevada at the time). While there's nothing wrong with this approach from a health physics point of view, as we saw in the last post on the BES-5 reactors the Soviet Union was flying, there are major international political problems with this concept. The SNAPSHOT reactor continues to orbit the Earth (currently at an altitude of roughly 1300 km), and according to recent orbital models will do so for more than 2,000 years, so the only system of concern is not in danger of re-entry any time soon; but at some point the reactor will need to be moved into a graveyard orbit, or collected and returned to Earth – a problem which currently has no solution.
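The "five half-lives" rule of thumb mentioned above is simple exponential decay: after n half-lives, an isotope's activity falls to (1/2)^n of its initial value. A minimal sketch (the function name and inputs here are illustrative, not from any SNAP document):

```python
# After n half-lives, activity falls to (1/2)**n of its initial value.
# This is a generic decay illustration, not SNAP-specific data.

def remaining_fraction(elapsed: float, half_life: float) -> float:
    """Fraction of initial activity remaining after `elapsed` time,
    given the isotope's `half_life` (same units for both)."""
    return 0.5 ** (elapsed / half_life)

# Five half-lives, regardless of the isotope:
print(remaining_fraction(5.0, 1.0))  # 0.03125 -> about 3% of initial activity
```

So waiting five half-lives leaves only about 3% of the original activity, which is why orbital storage before re-entry was considered an acceptable decommissioning strategy at the time.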
The Runup to Flight: Vehicle Verification and Integration
1960 brought big plans for orbital testing of both the SNAP-2 and SNAP-10 reactors under the SNAPSHOT program: two SNAP-10 launches and two SNAP-2 launches would be made. Lockheed Missiles System Division was chosen as the launch vehicle, systems integration, and launch operations contractor for the program, while Atomics International, working under the AEC, was responsible for the power plant.
The SNAP-10A reactor design was meant to be decommissioned by orbiting for long enough that the fission product inventory (the waste portion of the burned fuel elements, and the source of the vast majority of the radiation from the reactor post-fission) would naturally decay away, and then the reactor would be de-orbited, and burn up in the atmosphere. This was planned before the KOSMOS-954 accident, when the possibility of allowing a nuclear reactor to burn up in the atmosphere was not as anathema as it is today. This plan wouldn’t increase the amount of radioactivity that the public would receive to any appreciable degree; and, at the time, open-air testing of nuclear weapons was the norm, sending up thousands of kilograms of radioactive fallout per year. However, it was important that the fuel rods themselves would burn up high in the atmosphere, in order to dilute the fuel elements as much as possible, and this is something that needed to be tested.
Enter the SNAP Reactor Flight Demonstration Number 1 mission, or RFD-1. The concept of this test was to demonstrate that the planned disassembly and burnup process would occur as expected, and to inform the further design of the reactor if there were any unexpected effects of re-entry. Sandia National Labs took the lead on this part of the SNAPSHOT program. After looking at the budget available, the launch vehicles available, and the payloads, the team realized that orbiting a nuclear reactor mockup would be too expensive, and another solution needed to be found. This led to the mission design of RFD-1: a sounding rocket would be used, and the core geometry would be changed to account for the short flight time, compared to a real reentry, in order to get the data needed for the de-orbiting testing of the actual SNAP-10A reactor that would be flown.
So what would the ideal test have looked like? A mockup of the SNAP-10A reactor, identical in every way except that depleted uranium would replace the highly enriched uranium in the fuel elements. It would be launched on the same launch vehicle that the SNAPSHOT mission would use (an Atlas-Agena D), be placed in the same orbit, and then be deorbited at the same angle and in the same place as the actual reactor would be – maybe even at a slightly less favorable reentry angle, to find out how accurate the calculations were and what the margin of error was. However, an Atlas-Agena isn't a cheap piece of hardware, either to purchase or to launch, and the project managers knew they wouldn't be able to afford one, so they went hunting for a more economical alternative.
This led the team to decide on a NASA Scout sounding rocket as the launch vehicle, launched from Wallops Island launch site (which still launches sounding rockets, as well as the Antares rocket, to this day, and is expanding to launch Vector Space and RocketLab orbital rockets as well in the coming years). Sounding rockets don’t reach orbital altitudes or velocities, but they get close, and so can be used effectively to test orbital components for systems that would eventually fly in orbit, but for much less money. The downside is that they’re far smaller, with less payload and less velocity than their larger, orbital cousins. This led to needing to compromise on the design of the dummy reactor in significant ways – but those ways couldn’t compromise the usefulness of the test.
Sandia Corporation (which runs Sandia National Laboratories to this day, although who runs Sandia Corp changes… it’s complicated) and Atomics International engineers got together to figure out what could be done with the Scout rocket and a dummy reactor to provide as useful an engineering validation as possible, while sticking within the payload requirements and flight profile of the relatively small, suborbital rocket that they could afford. Because the dummy reactor wouldn’t be going nearly as fast as it would during true re-entry, a steeper angle of attack when the test was returning to Earth was necessary to get the velocity high enough to get meaningful data.
The Scout rocket had much less payload capability than the Atlas, so any system that could be eliminated was removed to save weight: no NaK was flown on RFD-1, the power conversion system was left off, the NaK pump was simulated by an empty stainless steel box, and the reflector assembly was made out of aluminum instead of beryllium, both for weight and for toxicity reasons (BeO is not something that you want to breathe!). The reactor core didn't contain any dummy fuel elements, just a set of six stainless steel spacers to keep the grid plates at the appropriate separation. Because the angle of attack was steeper, the test would be shorter, meaning that there wouldn't be time for the reactor's reflectors to degrade enough to release the fuel elements. The fuel elements were the most important part of the test, however, since it needed to be demonstrated that they would completely burn up upon re-entry, so a compromise was found.
The fuel elements would be clustered on the outside of the dummy reactor core and ejected early in the burnup test period. While the short flight time and high angle of attack meant that there wouldn't be enough time to observe full burnup, the beginning of the process would provide enough data to allow accurate simulations of the rest. Ensuring that this data – the most important part of the test – could actually be collected was another challenge, which forced even more compromises in RFD-1's design: testing equipment had to be mounted in such a way as to not change the aerodynamic profile of the dummy reactor core. Other minor changes were needed as well, but despite all of the differences between RFD-1 and the actual SNAP-10A, the thermodynamics and aerodynamics of the two systems differed in only very minor ways.
Testing support came from Wallops Island and NASA’s Bermuda tracking station, as well as three ships and five aircraft stationed near the impact site for radar observation. The ground stations would provide both radar and optical support for the RFD-1 mission, verifying reactor burnup, fuel element burnup, and other test objective data, while the aircraft and ships were primarily tasked with collecting telemetry data from on-board instruments, as well as providing additional radar data; although one NASA aircraft carried a spectrometer in order to analyze the visible radiation coming off the reentry vehicle as it disintegrated.
The test went largely as expected. Due to the steeper angle of attack, full fuel element burnup wasn’t possible, even with the early ejection of the simulated fuel rods, but the amount that they did disintegrate during the mission showed that the reactor’s fuel would be sufficiently distributed at a high enough altitude to prevent any radiological risk. The dummy core behaved mostly as expected, although there were some disagreements between the predicted behavior and the flight data, due to the fact that the re-entry vehicle was on such a steep angle of attack. However, the test was considered a success, and paved the way for SNAPSHOT to go forward.
The next task was to mount the SNAP-10A to the Agena spacecraft. Because the reactor was a very different power supply than was used at the time, special power conditioning units were needed to transfer power from the reactor to the spacecraft. This subsystem was mounted on the Agena itself, along with tracking and command functionality, control systems, and voltage regulation. While Atomics International worked to ensure the reactor would be as self-contained as possible, the reactor and spacecraft were fully integrated as a single system. Besides the reactor itself, the spacecraft carried a number of other experiments, including a suite of micrometeorite detectors and an experimental cesium contact thruster, which would operate from a battery system that would be recharged by electricity produced by the reactor.
In order to ensure the reactor could be integrated with the spacecraft, a series of Flight System Prototypes (FSM-1 and -4, plus FSEM-2 and -3 for electrical system integration) were built. These were full-scale, non-nuclear mockups that contained a heating unit to simulate the reactor core. Simulations were run using FSM-1 from launch to startup on orbit, with all testing occurring in a vacuum chamber. The last of the series, FSM-4, was the only one that used NaK coolant, and was used to verify that the thermal performance of the NaK system met flight system requirements. FSEM-2 did not have a power system mockup; instead, it used a mass mockup of the reactor, power conversion system, radiator, and other associated components. Testing with FSEM-2 showed that there were problems with the original electrical design of the spacecraft, which required a rebuild of the test-bed and a modification of the flight system itself. Once complete, the renamed FSEM-2A underwent a series of shock, vibration, acceleration, temperature, and other tests (known as the "Shake and Bake" environmental tests), which it subsequently passed. The final mockup, FSEM-3, underwent extensive electrical systems testing at Lockheed's Sunnyvale facility, using simulated mission events to test the compatibility of the spacecraft and the reactor. Additional electrical systems changes were implemented before the program proceeded, but by the middle of 1965 the electrical system and spacecraft integration tests were complete, and the necessary changes were incorporated into the flight vehicle design.
The last round of pre-flight testing was a test of a flight-configured SNAP-10A reactor under fission power. This nuclear ground test, S10F-3, was identical to the system that would fly on SNAPSHOT, save some small ground safety modifications, and was tested from January 22, 1965 to March 15, 1966. It operated uninterrupted for over 10,000 hours, with the first 390 days at a power output of 35 kWt and (following AEC approval) an additional 25 days of testing at 44 kWt. This testing showed that, after one year of operation, the continuing problem of hydrogen redistribution caused the reactor's outlet temperature to drop more than expected, and additional, relatively minor uncertainties about reactor dynamics were seen as well. Overall, however, the test was a success, and paved the way for the launch of the SNAPSHOT spacecraft in April 1965; the continued testing of S10F-3 during the SNAPSHOT mission verified that the thermal behavior of an astronuclear power system during ground test is essentially identical to that of an orbiting system, validating the ground test strategy employed throughout the SNAP program.
SNAPSHOT: The First Nuclear Reactor in Space
In 1963 there was a change in the way the USAF was funding these programs. While the reactors themselves were solely under the direction of the AEC, the USAF still funded research into the power conversion systems, since they were still operationally useful; that changed with the removal of the 0.3 to 1 kWe portion of the program. Budget cuts killed the ZrH-moderated core of the SNAP-2 reactor, although funding continued for the Hg vapor Rankine conversion system (which was being developed by TRW) until 1966. The SNAP-4 reactor, which had not even been run through criticality testing, was canceled, as was the planned flight test of the SNAP-10A, which had been funded by the USAF: with the cancellation of the 0.3-1 kWe power system program, the Air Force no longer had an operational need for the power system. The associated USAF program that would have used the power supply was well behind schedule and over budget, and was canceled at the same time.
The USAF attempted to get more funding, but was denied, and a series of meetings between all the parties involved failed to produce the needed funds. All partners in the program worked together to try and push a reduced SNAPSHOT program through, but funding shortfalls in the AEC (which received only $8.6 million of the $15 million it requested), as well as severe restrictions on the Air Force (which continued to fund Lockheed for the development and systems integration work through bureaucratic creativity), kept the program from moving forward. At the same time, it was realized that being able to deliver kilowatts or megawatts of electrical power, rather than the watts then available, would make the reactor a much more attractive program for a potential customer (either the USAF or NASA).
Finally, in February of 1964 the Joint Congressional Committee on Atomic Energy was able to fund the AEC to the tune of $14.6 million to complete the SNAP-10A orbital test. This reactor design had already been extensively tested and modeled, and unlike the SNAP-2 and -8 designs, no complex, highly experimental, mechanical-failure-prone power conversion system was needed.
SNAPSHOT consisted of a SNAP-10A fission power system mounted to a modified Agena-D spacecraft, which by this time was an off-the-shelf, highly adaptable spacecraft used by the US Air Force for a variety of missions. An experimental cesium contact ion thruster (read more about these thrusters on the Gridded Ion Engine page) was installed on the spacecraft for in-flight testing. The mission was to validate the SNAP-10A architecture with on-orbit experience, proving the capability to operate for 90 days without active control, while providing 500 W (28.5 V DC) of electrical power. Additional requirements included: the use of a SNAP-2 reactor core with minimal modification (to allow the higher-output SNAP-2 system, with its mercury vapor Rankine power conversion system, to be validated as well when the need for it arose); eliminating the need (while offering the option) for active control of the reactor for one year once startup was achieved (to prove autonomous operation capability); facilitating safe ground handling during spacecraft integration and launch; and accommodating future growth potential in both available power and power-to-weight ratio.
While the threshold for mission success was set at 90 days, Atomics International wanted to prove one year of capability for the system; so, in those 90 days, the goal was to demonstrate that the entire reactor system was capable of one year of operation (the SNAP-2 requirement). Atomics International imposed additional, more stringent guidelines for the mission as well, specifying a number of design requirements, including self-containment of the power system outside the structure of the Agena, as much as possible; more stringent mass and center-of-gravity requirements for the system than specified by the US Air Force; meeting the military specifications for EM radiation exposure to the Agena; and others.
The flight was formally approved in March, and the launch occurred on April 3, 1965 on an Atlas-Agena D rocket from Vandenberg Air Force Base. The launch went perfectly, and placed the SNAPSHOT spacecraft in a polar orbit, as planned. Sadly, the mission was not one that could be considered either routine or simple. One of the impedance probes failed before launch, and a part of the micrometeorite detector system failed before returning data. A number of other minor faults were detected as well, but perhaps the most troubling were the shorts and voltage irregularities coming from the ion thruster, due to high voltage failure modes, as well as excessive electromagnetic interference from the system, which reduced the telemetry data to an unintelligible mess. The thruster was shut off until later in the flight, in order to focus on testing the reactor itself.
The reactor was given the startup order 3.5 hours into the flight, when the two gross adjustment control drums were fully inserted and the two fine control drums began a stepwise reactivity insertion into the reactor. Within 6 hours, the reactor achieved on-orbit criticality, and the active control portion of the reactor test program began. For the next 154 hours, the control drums were operated by ground commands, to test reactor behavior. Due to the problems with the ion engine, the failure and malfunction sensing systems were also switched off, because these could have been corrupted by the errant thruster. Following the first 200 hours of reactor operations, the reactor was set to autonomous operation at full power. Between 600 and 700 hours later, the voltage output of the reactor, as well as its temperature, began to drop; an effect that the S10F-3 test reactor had also demonstrated, due to hydrogen migration in the core.
On May 16, just over one month after being launched into orbit, contact was lost with the spacecraft for about 40 hours. Some time during this blackout, the reactor’s reflectors ejected from the core (although they remained attached to their actuator cables), shutting down the core. This spelled the end of reactor operations for the spacecraft, and when the emergency batteries died five days later, all communication with the spacecraft was lost forever. Only 45 days had passed since the spacecraft’s launch, and information was received from the spacecraft for only 616 orbits.
What caused the failure? There are many possibilities, but when the telemetry from the spacecraft was read, it was obvious that something had gone badly wrong. The only thing that can be said with complete confidence is that the error came from the Agena spacecraft rather than from the reactor. No indications had been received before the blackout that the reactor was about to scram itself (the reflector ejection was the emergency scram mechanism), and the problem wasn’t one that should have been able to occur without ground commands. However, with the telemetry data gained from the dwindling battery after the shutdown, some suppositions could be made. The most likely immediate cause of the reactor’s shutdown was traced to a possible spurious command from the high voltage command decoder, part of the Agena’s power conditioning and distribution system. This in turn was likely caused by one of two possible scenarios: either a piece of the voltage regulator failed, or it became overstressed because of either the unusually low-power vehicle loads or commanding the reactor to increase power output. Sadly, the cause of this failure cascade was never directly determined, but all of the data received pointed to a high-voltage failure of some sort, rather than a low-voltage error (which could also have resulted in a reactor scram). Other possible causes of instrumentation or reactor failure, such as the thermal or radiation environment, collision with another object, onboard explosion of the chemical propellants used in the Agena’s main engines, and previously noted flight anomalies – including the arcing and EM interference from the ion engine – were all eliminated as causes as well.
Despite the spacecraft’s mysterious early demise, SNAPSHOT provided many valuable lessons in space reactor design, qualification, ground handling, launch challenges, and many other aspects of handling an astronuclear power source for potential future missions: Suggestions for improved instrumentation design and performance characteristics; provision for a sunshade for the main radiator to eliminate the sun/shade efficiency difference that was observed during the mission; the use of a SNAP-2 type radiation shield to allow for off-the-shelf, non-radiation-hardened electronic components in order to save both money and weight on the spacecraft itself; and other minor changes were all suggested after the conclusion of the mission. Finally, the safety program developed for SNAPSHOT, including the SCA4 submersion criticality tests, the RFT-1 test, and the good agreement in reactor behavior between the on-orbit and ground test versions of the SNAP-10A showed that both the AEC and the customer of the SNAP-10A (be it the US Air Force or NASA) could have confidence that the program was ready to be used for whatever mission it was needed for.
Sadly, at the time of SNAPSHOT there simply wasn’t a mission that needed this system. 500 We isn’t much power, even though it was more power than was needed for many systems that were being used at the time. While improvements in the thermoelectric generators continued to come in (and would do so all the way to the present day, where thermoelectric systems are used for everything from RTGs on space missions to waste heat recapture in industrial facilities), the simple truth of the matter was that there was no mission that needed the SNAP-10A, so the program was largely shelved. Some follow-on paper studies would be conducted, but the lowest powered of the US astronuclear designs, and the first reactor to operate in Earth orbit, would be retired almost immediately after the SNAPSHOT mission.
Post-SNAPSHOT SNAP: the SNAP Improvement Program
The SNAP fission-powered program didn’t end with SNAPSHOT, far from it. While the SNAP reactors only ever flew once, their design was mature, well-tested, and in most particulars ready to fly in a short time – and the problems associated with those particulars had been well-addressed on the nuclear side of things. The Rankine power conversion system for the SNAP-2, which went through five iterations, reached technological maturity as well, having operated in a non-nuclear environment for close to 5,000 hours and remained in excellent condition, meaning that the 10,000 hour requirement for the PCS would be able to be met without any significant challenges. The thermoelectric power conversion system also continued to be developed, focusing on an advanced silicon-germanium thermoelectric convertor, which was highly sensitive to fabrication and manufacturing processes – however, we’ll look more at thermoelectrics in the power conversion systems series of blog posts, just keep in mind that the power conversion systems continued to improve throughout this time, not just the reactor core design.
On the reactor side of things, the biggest challenge was definitely hydrogen migration within the fuel elements. As the hydrogen migrates away from the ZrH fuel, many problems occur; from unpredictable reactivity within the fuel elements, to temperature changes (dehydrogenated fuel elements developed hotspots – which in turn drove more hydrogen out of the fuel element), to changes in ductility of the fuel, causing major headaches for end-of-life behavior of the reactors and severely limiting the fuel element temperature that could be achieved. However, the necessary testing for the improvement of those systems could easily be conducted with less-expensive reactor tests, including the SCA4 test-bed, and didn’t require flight architecture testing to continue to be improved.
The maturity of these two reactors led to a short-lived program in the 1960s to improve them, the SNAP Reactor Improvement Program. The SNAP-2 and -10 reactors went through many different design changes, some large and some small – and some leading to new reactor designs based on the shared reactor core architecture.
By this time, the SNAP-2 had mostly faded into obscurity. However, the fact that it shared a reactor core with the SNAP-10A, and that the power conversion system was continuing to improve, warranted some small studies to improve its capabilities. The two of note that are independent of the core (all of the design changes for the -10 that will be discussed can be applied to the -2 core as well, since at this point they were identical) are the change from a single mercury boiler to three, to allow more power throughput and to reduce loads on one of the more challenging components, and combining multiple cores into a single power unit. These were proposed together for a space station design (which we’ll look at later) to allow an 11 kWe power supply for a crewed station.
The vast majority of this work was done on the -10A. Any further reactors of this type would have had an additional three sets of 1/8” beryllium shims on the external reflector, increasing the initial reactivity by about 50 cents (reactivity is measured in dollars and cents: one dollar equals the effective delayed neutron fraction, i.e. the amount of excess reactivity that would make a reactor prompt critical; around $2-$3 of excess reactivity is often built in to account for fission product buildup); this means that additional burnable poisons (elements which absorb neutrons, then transmute into something that is mostly neutron transparent, to even out the reactivity of the reactor over its lifetime) could be inserted in the core at construction, mitigating the problems of reactivity loss that were experienced during earlier operation of the reactor. With this, and a number of other minor tweaks to reflector geometry and lowering the core outlet temperature slightly, the life of the SNAP-10A was able to be extended from the initial design goal of one year to five years of operation. The end-of-life power level of the improved -10A was 39.5 kWt, with an outlet temperature of 980 F (527°C) and a power density of 0.14 kWt/lb (0.31 kWt/kg).
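For bookkeeping purposes, reactivity in dollars is just the reactivity in Δk/k divided by the effective delayed neutron fraction β_eff. A minimal sketch of that conversion (the β_eff value below is a typical figure for a U-235 fueled core, assumed for illustration, not a number taken from the SNAP documentation):

```python
# Reactivity bookkeeping: one dollar of reactivity equals beta_eff,
# the effective delayed neutron fraction (a dollar of excess reactivity
# makes a reactor prompt critical). BETA_EFF is a typical U-235 value,
# assumed here for illustration.
BETA_EFF = 0.0065

def dollars_to_dk(dollars, beta_eff=BETA_EFF):
    """Reactivity in dollars -> delta-k/k."""
    return dollars * beta_eff

def cents_to_dk(cents, beta_eff=BETA_EFF):
    """Reactivity in cents -> delta-k/k."""
    return dollars_to_dk(cents / 100.0, beta_eff)

# The extra beryllium shims were worth about 50 cents:
print(f"shim worth: {cents_to_dk(50):.5f} delta-k/k")
# Typical $2-$3 of excess reactivity to cover fission product buildup:
print(f"excess: {dollars_to_dk(2):.4f}-{dollars_to_dk(3):.4f} delta-k/k")
```

At this β_eff, the 50-cent shim worth is only about 0.3% in k, which is why burnable poisons were needed on top of the shims to soak up the much larger excess reactivity early in life.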
These design modifications led to another iteration of the SNAP-10A, the Interim SNAP-10A/2 (I-10A/2). This reactor’s core was identical, but the reflector was further enhanced, and the outlet temperature and reactor power were both increased. In addition, even more burnable poisons were added to the core to account for the higher power output of the reactor. Perhaps the biggest design change with the Interim -10A/2 was the method of reactor control: rather than the passive control used on the -10A, the entire period of operation for the I-10A/2 was actively controlled, using the control drums to manage reactivity and power output of the reactor. As with the improved -10A design, this reactor would be able to have an operational lifetime of five years. These improvements gave the I-10A/2 an end-of-life power rating of 100 kWt, an outlet temperature of 1200 F (648°C), and an improved power density of 0.33 kWt/lb (0.73 kWt/kg).
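The temperature and power density figures quoted for these variants are straightforward unit conversions, and can be reproduced directly; a minimal sketch:

```python
# Reproduce the unit conversions quoted for the SNAP-10A variants.
LB_PER_KG = 0.45359237  # exact pound-to-kilogram factor

def f_to_c(deg_f):
    """Degrees Fahrenheit -> degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

def per_lb_to_per_kg(value):
    """Convert a per-pound quantity (e.g. kWt/lb) to per-kilogram."""
    return value / LB_PER_KG

# Improved -10A: 980 F outlet and 0.14 kWt/lb
print(f"{f_to_c(980):.1f} C, {per_lb_to_per_kg(0.14):.2f} kWt/kg")
# Interim -10A/2: 1200 F outlet and 0.33 kWt/lb
print(f"{f_to_c(1200):.1f} C, {per_lb_to_per_kg(0.33):.2f} kWt/kg")
```

Note that 1200 F works out to 648.9°C, so figures of 648°C or 649°C in the literature are just different roundings of the same number.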
This design, in turn, led to the Upgraded SNAP-10A/2 (U-10A/2). The biggest in-core difference between the I-10A/2 and the U-10A/2 was the hydrogen barrier used in the fuel elements: rather than using the initial design that was common to the -2, -10A, and I-10A/2, this reactor used the hydrogen barrier from the SNAP-8 reactor, which we’ll look at in the next blog post. This is significant, because the degradation of the hydrogen barrier over time, and the resulting loss of hydrogen from the fuel elements, was the major lifetime-limiting factor of the SNAP-10 variants up until this point. This reactor also went back to static control, rather than the active control used in the I-10A/2. As with the other -10A variants, the U-10A/2 had a possible core lifetime of five years, and other than an improvement of 100 F in outlet temperature (to 1300 F), and a marginal drop in power density to 0.31 kWt/lb, it shared many of the characteristics of the I-10A/2.

SNAP-10B: The Upgrade that Could Have Been
One consistent mass penalty in the SNAP-10A variants that we’ve looked at so far is the control drums: relatively large reactivity insertions were possible with a minimum of movement due to the wide profile of the control drums, but this also meant that they extended well away from the reflector, especially early in the mission. This meant that, in order to prevent neutron backscatter from hitting the rest of the spacecraft, the shield had to be relatively wide compared to the size of the core – and the shield was not exactly a lightweight system component.
The SNAP-10B reactor was designed to address this problem. It used a similar core to the U-10A/2, with the upgraded hydrogen barrier from the -8, but the reflector was tapered to better fit the profile of the shadow shield, and axially sliding control cylinders would be moved in and out to provide control instead of the rotating drums of the -10A variants. A number of minor reactor changes were needed, and some of the reactor physics parameters changed due to this new control system; but, overall, very few modifications were needed.
The first -10B reactor, the -10B Basic (B-10B), was a very simple and direct evolution of the U-10A/2, with nothing but the reflector and control structures changed to the -10B configuration. Other than a slight drop in power density (to 0.30 kWt/lb), the rest of the performance characteristics of the B-10B were identical to the U-10A/2. This design would have been a simple evolution of the -10A/2, with a slimmer profile to help with payload integration challenges.
The next iteration of the SNAP-10B, the Advanced -10B (A-10B), had options for significant changes to the reactor core and the fuel elements themselves. One thing to keep in mind about these reactors is that they were being designed above and beyond any specific mission needs; and, on top of that, a production schedule hadn’t been laid out for them. This means that many of the design characteristics of these reactors were never “frozen,” which is the point in the design process when the production team of engineers needs a basic configuration that won’t change in order to proceed with the program, although obviously many minor changes (and possibly some major ones) would continue to be made up until the system was flight qualified.
Up until now, every SNAP-10 design had used a 37 fuel element core, with the only difference in the design occurring in the Upgraded -10A/2 and Basic -10B reactors (which changed the hydrogen barrier ceramic enamel inside the fuel element clad). However, with the A-10B there were three core size options: the first kept the 37 fuel element core, the second used a medium-sized 55-element core, and the third a large 85-element core. There were other questions about the final design as well, with two other major core changes being examined (along with a lot of open minor questions). The first option was to add a “getter,” a sheath of strongly hydrogen-absorbing metal applied to the clad outside the steel casing, but still within the active region of the core. While this isn’t as ideal as containing the hydrogen within the U-ZrH itself, the neutron moderation provided by the hydrogen would be lost at a far lower rate. The second option was to change the core geometry itself as the temperature of the core changed, with devices called “Thermal Coefficient Augmenters” (TCA). Two approaches were suggested: first, a bellows system driven by NaK core temperature (using ruthenium vapor), which would move a portion of the radial reflector to change the core’s reactivity coefficient; second, securing grids for the fuel elements which would expand as the NaK increased in temperature, and contract as the coolant dropped in temperature.
Between the options available, with core size, fuel element design, and variable core and fuel element configuration all up in the air, the Advanced SNAP-10B was a wide range of reactors rather than just one. Many of the characteristics of the reactors remained identical, including the fissile fuel itself, the overall core size, maximum outlet temperature, and others. However, the number of fuel elements in the core alone resulted in a wide range of different power outputs; and which core modification the designers ultimately decided upon (getter vs. TCA; I haven’t seen any indication that the two were combined) would change the actual capabilities of the reactor core. However, both for simplicity’s sake, and because the very limited documentation available on the SNAP-10B program offers little more than a general comparison table from the 1966 SNAP Systems Capability Study, we’ll focus on the 85 fuel element versions of the two options: the Getter core and the TCA core.
A final note, which isn’t clear from these tables: each of these reactor cores was nominally optimized to a 100 kWt power output; the additional fuel elements reduced the power density required from the core at any given time in order to maximize fuel lifetime. Even with the improved hydrogen barriers and the variable core geometry, while these systems CAN offer higher power, it comes at the cost of a shorter – but still minimum one year – reactor system life. Because of this, all reported estimates assume a 100 kWt power level unless otherwise stated.
The idea of a hydrogen “getter” was not a new one at the time that it was proposed, but it was one that hadn’t been investigated thoroughly at that point (and is a very niche requirement in terrestrial nuclear engineering). The basic concept is to settle for the second-best option when it comes to hydrogen migration: if you can’t keep the hydrogen in your fuel element itself, then the next best option is keeping it in the active region of the core (where fission is occurring, and neutron moderation is the most directly useful for power production). While this isn’t as good as keeping the hydrogen, and the neutron moderation it provides, within the fuel element itself, it’s still far better than the hydrogen either dissolving into your coolant or, worse yet, migrating outside your reactor and into space, where it’s completely useless in terms of reactor dynamics. Of course, there’s a trade-off: because of the interplay between the various aspects of reactor physics and design, it wasn’t practical to change the external geometry of the fuel elements themselves – which means that the only way to add a hydrogen “getter” was to displace the fissile fuel itself. There’s definitely an optimization question to be considered; after all, the overall reactivity of the reactor will have to be reduced, because the fuel is worth more in terms of reactivity than the hydrogen that would be lost, but the hydrogen containment in the core at end of life means that the system itself would be more predictable and reliable. Especially for a static control system like the A-10B, this increase in behavioral predictability can be worth far more than the reactivity that the additional fuel would offer. Of the materials tested for the “getter” system, yttrium metal was found to be the most effective at the reactor temperatures and radiation flux that would be present in the A-10B core.
However, while improvements had been made in the fuel element design to the point that the “getter” program continued until the cancellation of the SNAP-2/10 core experiments, there were many uncertainties left as to whether the concept was worth employing in a flight system. The second option was to vary the core geometry with temperature, the Thermal Coefficient Augmentation (TCA) variant of the A-10B. This would change the reactivity of the reactor mechanically, but not require active commands from any systems outside the core itself. There were two options investigated: a bellows arrangement, and a design for an expanding grid holding the fuel elements themselves.
The first variant used a bellows to move a portion of the reflector out as the temperature increased. This was done using a ruthenium reservoir within the core itself: as the NaK increased in temperature, the ruthenium would boil, pushing a bellows which would move some of the beryllium shims away from the reactor vessel, reducing the overall worth of the radial reflector. While this sounds simple in theory, gas diffusion from a number of different sources (from fission products migrating through the clad to offgassing of various components) meant that the gas in the bellows would not be pure ruthenium vapor. While this could have been accounted for, a lot of study would have needed to be done with a flight-type system to properly model the behavior. The second option would change the distance between the fuel elements themselves, using a base plate with concentric accordion folds for each ring of fuel elements, called the “convoluted baseplate.” As the NaK heated beyond the optimized design temperature, the base plates would expand radially, separating the fuel elements and reducing the reactivity in the core. This involved a different set of materials tradeoffs, with just getting the device constructed causing major headaches. The design used both 316 stainless steel and Hastelloy C in its construction, and was cold annealed. The alternative, hot annealing, resulted in random cracks; and while explosive manufacture was explored, it wasn’t practical at the time to map the shockwave propagation through such a complex structure well enough to ensure reliable construction. While this concept has made me think a lot about variable reactor geometry of this sort, there are many problems with the approach (which could possibly have been solved, or proven insurmountable).
Major lifetime concerns would include ductility and elasticity changes through the wide range of temperatures the baseplate would be exposed to; work hardening of the metal, thermal stresses, and neutron bombardment of the baseplates would also be major concerns for this concept.
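The intent of both the bellows and the convoluted baseplate is the same: a passive negative temperature coefficient, so that as coolant temperature rises, reactivity falls and the reactor throttles itself back without any external commands. A toy lumped-parameter sketch of that behavior, with every coefficient invented for illustration rather than taken from SNAP data:

```python
# Toy point model of the passive feedback both TCA schemes aim for:
# reactivity falls linearly as coolant temperature rises past the design
# point, so power and temperature settle without any external commands.
# Every coefficient here is invented for illustration, not a SNAP value.
DESIGN_TEMP = 650.0   # assumed NaK design temperature, deg C
ALPHA = -1.0e-4       # reactivity change per deg C above design (invented)
HEAT_CAP = 50.0       # lumped heat capacity, kWt*s per deg C (invented)
COOLING = 0.15        # heat rejection, kWt per deg C above ambient (invented)

def step(temp, power, dt=1.0):
    """Advance the lumped reactor model by one time step."""
    rho = ALPHA * (temp - DESIGN_TEMP)   # feedback reactivity
    power *= 1.0 + rho                   # crude per-step power response
    temp += dt * (power - COOLING * (temp - 25.0)) / HEAT_CAP
    return temp, power

temp, power = 600.0, 40.0  # start below design temperature at 40 kWt
for _ in range(5000):
    temp, power = step(temp, power)
print(f"settles near {temp:.0f} C at {power:.1f} kWt")
```

The model damps toward the design temperature, with the settled power set by the heat rejection term; mechanically moving reflector shims or spreading the fuel elements apart is, in effect, a way of building a strongly negative ALPHA into the hardware.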
These design options were briefly tested, but most of them ended up not being developed fully. Because the reactor’s design was never frozen, many engineering challenges remained in every option that had been presented. Also, while I know that a report was written on the SNAP-10B reactor’s design (R. J. Gimera, “SNAP 10B Reactor Conceptual Design,” NAA-SR-10422), I can’t find it… yet. This makes writing about the design difficult, to say the least.
Because of this, and the extreme paucity of documentation on this later design, it’s time to turn to what these innovative designs could have offered when it comes to actual missions.
The Path Not Taken: Missions for SNAP-2, -10A
Every space system has to have a mission, or it will never fly. Both SNAP-2 and SNAP-10 offered a lot for the space program, both for crewed and uncrewed missions; and what they offered only grew with time. However, due to priorities at the time, and the fact that many records from these programs appear to never have been digitized, it’s difficult to point to specific mission proposals for these reactors in a lot of cases, and the missions have to be guessed at from scattered data, status reports, and other piecemeal sources.
SNAP-10 was always a lower-powered system, even with its growth to a kWe-class power supply. Because of this, it was always seen as a power supply for unmanned probes, mostly in low Earth orbit, but it certainly would also have been useful for interplanetary studies, which at this point were just appearing on the horizon as practical. Had the SNAPSHOT system worked as planned, the cesium thruster that had been on board the Agena spacecraft would have been an excellent propulsion source for an interplanetary mission. However, due to the long mission times and the relatively fragile fuel of the original SNAP-10A, it is unlikely that these missions would have been initially successful, while the SNAP-10A/2 and SNAP-10B systems, with their higher power outputs and lifetimes, would have been ideal for many interplanetary missions.
As we saw in the US-A program, one of the major advantages that a nuclear reactor offers over photovoltaic cells – which were just starting to be a practical technology at the time – is that it presents very little surface area, and therefore the atmospheric drag that all satellites experience due to the thin atmosphere in lower orbits is less of a concern. There are many cases where this lower altitude offers clear benefits, but the vast majority of them deal with image resolution: the lower you are, the clearer your imagery can be with the same sensors. For the Russians, the ability to get better imagery of US Navy movements in all weather conditions was of strategic importance, leading to the US-A program. For the Americans, who had other means of surveillance (and an opponent’s far less capable blue-water navy to track), radar surveillance was not a major focus – although it should be noted that 500 We isn’t going to give you much, if any, resolution, no matter what your altitude.
One area that SNAP-10A was considered for was for meteorological satellites. With a growing understanding of how weather could be monitored, and what types of data were available through orbital systems, the ability to take and transmit pictures from on-orbit using the first generations of digital cameras (which were just coming into existence, and not nearly good enough to interest the intelligence organizations at the time), along with transmitting the data back to Earth, would have allowed for the best weather tracking capability in the world at the time. By using a low orbit, these satellites would be able to make the most of the primitive equipment available at the time, and possibly (speculation on my part) have been able to gather rudimentary moisture content data as well.
However, while SNAP-10A was worked on for about a decade, for the entire program there was always the question of “what do you do with 500-1000 We?” Sure, it’s not an insignificant amount of power, even then, but… communications and propulsion, the two things that are the most immediately interesting for satellites with reliable power, both have a linear relationship between power level and capability: the more power, the more bandwidth, or delta-vee, you have available. Also, the -10A was only ever rated for one year of operations, although it was always suspected it could be limped along for longer, which precluded many other missions.
The later SNAP-10A/2 and -10B satellites, with their multi-kilowatt range and years-long lifespans, offered far more flexibility, but by this point many in the AEC, the US Air Force, NASA, and elsewhere were no longer very interested in the program, with newer, more capable reactor designs becoming available (we’ll look at some of those in the next post). While the SNAP-10A was the only flight-qualified and -tested reactor design (and the errors on the mission were shown to be the fault not of the reactor but of the Agena spacecraft), it was destined to fade into obscurity.
SNAP-10A was always the smallest of the reactors, and also the least powerful. What about the SNAP-2, the 3-6 kWe reactor system?
Initial planning for the SNAP-2 offered many options, with communications satellites being mentioned as an option early on – especially if the reactor lifetime could be extended. While not designed specifically for electric propulsion, it could have utilized that capability either on orbit around the Earth or for interplanetary missions. Other options were also proposed, but one was seized on early: a space station.
At the time, most space station designs were nuclear powered, and there were many different configurations. However, two were the most common: first was the simple cylinder, launched as a single piece (although there were multiple-module designs proposed which kept the basic cylinder shape), which would finally be realized with the Skylab mission; second was a torus-shaped space station, proposed almost half a century before by Tsiolkovsky and popularized at the time by Wernher von Braun. SNAP-2 was adapted to both of these types of stations. Sadly, while I can find one paper on the use of the SNAP-2 on a station, it focuses exclusively on the reactor system, and doesn’t use a particular space station design, instead laying out the ground limits of the use of the reactor on each type of station, and especially the shielding requirements for each station’s geometry. It was also noted that the reactors could be clustered, providing up to 11 kWe of power for a space station, without significant change to the radiation shield geometry. We’ll look at radiation shielding in a couple of posts, and look at the particulars of these designs there.
Since space stations were something NASA didn’t have the budget for at the time, most designs remained vaguely defined, without much funding or impetus within either NASA or the US Air Force (although SNAP-2 would definitely have been an option for the USAF’s Manned Orbiting Laboratory program). By the time NASA was seriously looking at space stations as a major funding focus, the SNAP-8-derived Advanced Zirconium Hydride reactor, and later the SNAP-50 (which we’ll look at in the next post), offered more capability than the SNAP-2 could. Once again, the lack of a mission spelled the doom of the SNAP-2 reactor.
The SNAP-2 reactor met its piecemeal fate even earlier than the SNAP-10A, but oddly enough both its reactor core and its power conversion system lasted just as long as the SNAP-10A did. The SNAP-2 core became the SNAP-10A/2 core, and the CRU power conversion system continued under development even after the reactor cores had been canceled. Mention of the SNAP-2 as a complete system disappears from the literature around 1966, however, while the -2/10A core and the CRU power conversion system continued until the late 1960s and late 1970s, respectively.
The Legacy of The Early SNAP Reactors
The SNAP program was canceled in 1971 (with one ongoing exception), after flying a single reactor which was operational for 43 days, and conducting over five years of fission-powered testing on the ground. The death of the program was slow and drawn out: the US Air Force canceled the program requirement for the SNAP-10A in 1963 (before the SNAPSHOT mission even launched), SNAP-2 reactor development was canceled in 1967, all SNAP reactors (including the SNAP-8, which we’ll look at next week) were canceled by 1974, and the CRU power conversion system continued until 1979 as a separate internal project at Rockwell International, supported but never fully funded by NASA.
The promise of SNAP was not enough to save the program from the massive cuts to space programs, both for NASA and the US Air Force, that fell even as humanity stepped onto the Moon for the first time. This is an all-too-common fate, both in advanced nuclear reactor engineering and design as well as aerospace engineering. As one of the engineers who worked on the Molten Salt Reactor Experiment noted in a recent documentary on that technology, “everything I ever worked on got canceled.”
However, this does not mean that the SNAP-2/10A programs were useless, or that nothing was achieved except a permanently shut down reactor in orbit. In fact, the SNAP program left a lasting mark on the astronuclear engineering world, one that is still felt today. The design of the SNAP-2/10A core, and the challenges faced with both it and the SNAP-8 core, informed hydride fuel element development, including the thermal limits of this fuel form, hydrogen migration mitigation strategies, and materials and modeling for multiple burnable poison options across many different fuel types. The germanium-silicon thermoelectric conversion system became a common choice for high-temperature thermoelectric power conversion, both in flight power systems and in thermal testing equipment. Many other materials and systems used in this reactor continued to be developed through other programs.
Possibly the most broad and enduring legacy of this program is in the realm of launch safety, flight safety, and operational paradigms for crewed astronuclear power systems. The foundation of the launch and operational safety guidelines that are used today, for both fission power systems and radioisotope thermoelectric generators, were laid out, refined, or strongly informed by the SNAPSHOT and Space Reactor Safety program – a subject for a future web page, or possibly a blog post. From the ground handling of a nuclear reactor being integrated to a spacecraft, to launch safety and abort behavior, to characterizing nuclear reactor behavior if it falls into the ocean, to operating crewed space stations with on-board nuclear power plants, the SNAP-2/10A program literally wrote the book on how to operate a nuclear power supply for a spacecraft.
While the reactors themselves never flew again, nor did their direct descendants in design, the SNAP reactors formed the foundation for astronuclear engineering of fission power plants for decades. When we start to launch nuclear power systems in the future, these studies, and the carefully studied lessons of the program, will continue to offer lessons for future mission planners.
More Coming Soon!
The SNAP program extended well beyond the SNAP-2/10A program. The SNAP-8 reactor, started in 1959, was the first astronuclear design specifically developed for a nuclear electric propulsion spacecraft. It evolved into several different reactors, notably the Advanced ZrH reactor, which remained the preferred power option for NASA’s nascent modular space station through the mid-to-late 1970s, due to its ability to be effectively shielded from all angles. Its eventual replacement, the SNAP-50 reactor, offered megawatts of power using technology from the Aircraft Nuclear Propulsion program. Many other designs were proposed in this time period, including the SP-100 reactor, the ancestor of Kilopower (the SABRE heat pipe cooled reactor concept), as well as the first American in-core thermionic power system, advances in fuel element designs, and many other innovations.
Originally, these concepts were included in this blog post, but this post quickly expanded to the point that there simply wasn’t room for them. While some of the upcoming post has already been written, and a lot of the research has been done, this next post is going to be a long one as well. Because of this, I don’t know exactly when the post will end up being completed.
After we look at the reactor programs from the 1950s to the late 1980s, we’ll look at NASA and Rosatom’s collaboration on the TOPAZ-II reactor program, and the more recent history of astronuclear designs, from SDI through the Fission Surface Power program. We’ll finish up the series by looking at the most recent power systems from around the world, from JIMO to Kilopower to the new Russian on-orbit nuclear electric tug.
After this, we’ll look at shielding for astronuclear power plants, and possibly ground handling, launch safety, and launch abort considerations, then move on to power conversion systems, which will be a long series of posts due to the sheer number of options available.
These next posts are more research-intensive than usual, even for this blog, so while I’ll be hard at work on the next posts, it may be a bit more time than usual before these posts come out.
Hello, and welcome back to Beyond NERVA, where we’re getting back into issues directly related to nuclear power in space, rather than how that power is used (as we’ve examined in our last three blog posts on electric propulsion)! However, the new Electric Propulsion page is up on the website, including a summary of all the information that we’ve covered in the last three blog posts, which you can find here [Insert Link]! Also, each type of thruster has its own page as well for easier reference, which are all linked on that summary page! Make sure to check it out!
In this blog series, we’re going to look at the reactor cores of nuclear electric power systems themselves. While we’ve looked at a number of designs for nuclear thermal rocket cores (insert link for NTR-S page), reactor cores designed purely for electricity production differ in a number of ways. Perhaps the biggest is operating temperature, and therefore core lifetime: because the coolant doesn’t have to be hydrogen, and because the reactor doesn’t have to produce as much heat as physically possible (there will be a LOT more discussion of this concept in the next series, on power conversion systems), the core can be run at cooler temperatures. This avoids a large number of thermally-related headaches, makes far more materials available for the reactor core, and generally simplifies matters.
Nuclear electric power systems are also unique in that they’re the only type of fission powered electrical supply system that’s ever flown. We’ve mentioned those systems briefly before, but we’ll look at some of them more in depth today, and in the next post as well. While there have been many reactor designs proposed over the years, we’re going to focus on the programs developed by the USSR during the Cold War, since they have the longest operational history of any sort of fission-powered heat source in space.
The United States was the first to fly a nuclear reactor in space, on the SNAPSHOT mission in 1965; but, sadly, no other American reactor has ever been flown on a spacecraft. The Soviet program was far longer running, flying reactors almost continuously from 1970 to 1988, often with two spacecraft operating at once. With the fall of the Berlin Wall and the end of the Cold War, the Soviet astronuclear program ended, and Russia hasn’t flown a nuclear reactor since. There was a time in the 1990s, though, when a mission was on the books (but never funded to a level sufficient to ever fly) to use US-purchased, Russian-built nuclear reactors for an American crewed Moon base!
History of Soviet In-Space Fission Power Systems
From the beginning, Soviet in-space nuclear power designers focused on two design concepts for their power systems: single-cell and multi-cell fuel elements. The biggest difference between the two is, as the names suggest, how many fuel elements make up each unit: the single-cell design uses one fuel element, while the multi-cell option uses several, separated by passive spacers, moderator blocks, or power conversion hardware. Both approaches were extensively researched, and eventually flown, but the initial research focused on the multi-cell approach with the Romashka (sometimes translated as “Chamomile,” other times as “Daisy”) reactor. That line led to the BES-5 (Bouk, or “Beech”) flight reactor and later the TEU-5 TOPOL (TOPAZ-1), which flew twice, while the single-cell variation led to the ENISY (TOPAZ-2) reactor, which was later purchased by the US. We won’t be looking in-depth at the ENISY in this post, despite its close relation to the TOPOL, because later in the blog series we’ll focus on it far more: the time the Americans bought two of them (and took out an option on another four), and how it could have powered an American lunar base in the 1990s, had the funding been available.
As is our wont here, let’s begin at the beginning, with one of Korolev’s pet projects and the first in-space reactor design of the USSR: Romashka.
Romashka: The Reactor that Started it All
The Romashka (“daisy” or “chamomile” in English) was a Soviet adaptation of an American idea first developed at Los Alamos in the mid-1950s: in-core thermionic energy conversion. We’ll look at thermionics much more in-depth in the next post, on power conversion systems, but the short version is that it combines a heat pipe (which we looked at in the Kilopower posts) with the tendency of an incandescent bulb to develop a static charge on its glass. More on the conversion system itself in the next post; for now, it’s worth noting that this is a way to actually put your power conversion system inside the core of your reactor, and, as far as I’ve seen, it’s the only one.
Design on this reactor started in 1957, following a trip by Soviet scientists to Los Alamos (where thermionic energy conversion had been proposed, but not yet tested). The design offered the potential for no moving parts and no pumps, needing only conductive cooling from the reactor body for thermal management: all very attractive properties for a reactor that could never be maintained over its lifetime. Work began at the I.V. Kurchatov Institute of Atomic Energy in Moscow, but by the end of the program many design bureaus were involved in the conceptual design, manufacturing, and testing of this reactor.
(Images: core and moderator blocks; radial reflector and radiator)
A series of disc-shaped uranium carbide (UC2) fuel elements (90% enriched 235U) made up the core, with holes drilled through the center of each disc and roughly halfway between the central hole and the edge. Both sets of holes were used to thread the thermionic power conversion system through the core of the reactor. Spacing of the fuel elements was provided by a mixture of beryllium oxide and graphite, which also slightly moderated the neutron spectrum, although the reactor remained a fast-spectrum design. Surrounding the core, both radially and at its ends, were beryllium reflectors. Boron and boron nitride control rods, placed in the radial reflector and the base axial reflector and driven by a hydraulic system, maintained reactor control; a large negative thermal reactivity coefficient was also designed into the core so that, under normal operations, the reactor would be largely self-regulating. Finally, the reactor was surrounded by a finned steel casing that provided all heat rejection through passive radiation: no pumps required! The nominal operating temperature was meant to be between 1200 C and 1800 C at the center of the core, and about 800 C at the edges of the core at the ends of the cylinder.
Construction and warm-critical tests were completed by April 1966, and testing began in Moscow. There are some indications that materials incompatibilities in the first Romashka led to it being rebuilt with different materials, but it’s unclear what was changed (the only other reference to this, besides a CIA document, is that the thermionic fuel element materials were changed, so that may be what occurred; more on that in the direct power conversion post). The reactor underwent about 15,000 hours of testing, producing about 6,100 kWh of electricity at a relatively constant 40 kW of thermal and 500-800 W of electrical power (1.5%-2% energy conversion efficiency). Initial testing (about 1,200 hours) rejected heat into a vacuum chamber using only the fins’ radiative cooling, while other reactor behavior, including the core’s self-regulation capability, was characterized. Later tests (about 14,000 hours) were done using natural convection in a helium environment. During these tests, thermal deformation of the core and the reflector led to a reduction in reactivity, which was compensated for with the control system. By the end of the test cycle, electrical power production had dropped by 25%, and overall reactivity by 30%. Maximum sustained power production was about 450 W (at 88 amps) with all thermionic converters active, and pulsed power of up to 800 W was observed at the beginning of the actively controlled tests.
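As a quick sanity check, the quoted test figures are easy to verify with back-of-the-envelope arithmetic (all numbers come from the paragraph above; the quoted efficiency range is approximate):

```python
# Back-of-the-envelope check on the Romashka test figures quoted above.
test_hours = 15_000      # total fission-powered test time
energy_kwh = 6_100       # total electrical energy produced
thermal_kw = 40          # roughly constant thermal output

# Average electrical output over the whole campaign:
avg_electrical_w = energy_kwh / test_hours * 1000
print(f"average electrical output: {avg_electrical_w:.0f} W")

# Conversion efficiency at the quoted 500-800 W electrical range:
for electrical_w in (500, 800):
    efficiency_pct = electrical_w / (thermal_kw * 1000) * 100
    print(f"{electrical_w} We from {thermal_kw} kWt -> {efficiency_pct:.2f}%")
```

The average works out to roughly 400 W, consistent with the quoted sustained range once the roughly 25% decline in output over the campaign is taken into account.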
Korolev planned to pair this reactor with a pulsed plasma thruster (based on the time period, possibly a pulsed inductive thruster, or PIT, which we looked at briefly in the second blog post on electric propulsion systems). However, two things conspired to end the Romashka system: Korolev’s death in 1966 meant the loss of its most powerful proponent, and development of the more powerful, more efficient Bouk reactor advanced enough to make that design available for spaceflight in the same time frame.
While there were plans to adapt Romashka into a small power plant for remote outposts (the core was known as “Gamma”), the testing program ended in 1966, to be supplanted by the BES-5 “Beech”. The legacy of the Romashka reactor lives on, however, as the first successful design of a thermionic energy conversion system for in-core use, a test-bed for the development and testing of thermionic energy conversion materials (more on that in the first power conversion system post); and it remains the father and grandfather of all Russian in-space reactors to ever fly.
Bouk: The Most Flown Nuclear Reactor in History
The Bouk (“Beech”) reactor, also written “Buk” and also known as the BES-5, is arguably the most successful astronuclear design in history. Begun in 1960 by the Krasnaya Zvezda Scientific and Production Association, this reactor promised greater power output than the Romashka, at the cost of additional complexity and a requirement for pumped coolant. From 1963 to 1969, testing of the fuel elements and reactor core was carried out without the thermoelectric fuel elements (TFEs), which were still under development. From 1968 to 1970, three reactor cores with full TFEs were tested at Baikal; with testing successfully completed, the reactor design was prepared for launch, integrated into the Upravlenniye Sputnik Aktivny (US-A; in the West, RORSAT, for Radar Ocean Reconnaissance SATellite) spacecraft, designed to use radar for naval surveillance.
Rather than stacked discs of UC2, the BES-5 used 79 fuel rods of uranium-molybdenum alloy (90% enriched, 30 kg total uranium), clad in high-temperature steel. NaK coolant was circulated by an electromagnetic pump, powered by 19 of the fuel assemblies. The reactor produced over 100 kW of thermal power; after conversion by in-core germanium-silicon thermoelectric elements (which exploit the charge potential that develops across a junction of two dissimilar metals when a temperature gradient is applied across it; again, more in a later post), a maximum of 5 kW of electrical power was available for the spacecraft’s instrumentation. The fact that this core used thermoelectric rather than thermionic conversion is a good indicator that the common use of the term TOPAZ for this reactor is incorrect. Reactor control was provided by six beryllium reflector drums that were slowly lowered through holes in the radial reflector over the reactor’s life, increasing the local neutron flux to compensate for the buildup of neutron poisons.
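For a rough side-by-side of the two conversion approaches seen so far, the figures quoted in this post are enough (a sketch only; both power levels are approximate, and the BES-5 thermal output is quoted as “over” 100 kW):

```python
# Rough comparison of quoted conversion efficiencies (figures from this post).
reactors = {
    # name: (thermal power in kW, max electrical power in kW)
    "Romashka (in-core thermionic)": (40, 0.8),
    "BES-5 (in-core thermoelectric)": (100, 5.0),
}

for name, (kwt, kwe) in reactors.items():
    print(f"{name}: {kwe / kwt * 100:.1f}% at {kwe} kWe")
```

Even these rough numbers show part of why Bouk won out for flight: a several-fold gain in both overall efficiency and total electrical output.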
One unique aspect of the BES-5 is that the reactor was able to decommission itself at end of life (although this wasn’t always successful) by moving to a higher orbit and then ejecting the end reflector and fuel assemblies (which were subcritical as assembled, requiring the Be control rods to be inserted to reach delayed criticality), as well as dumping the NaK coolant overboard. This ensured that the reactor core would not re-enter the atmosphere (although there were two notable exceptions, and one late, unexpected success). As an additional safety measure following the failure of KOSMOS-954 (more on that below), the reactor was redesigned so that the fuel elements would burn up on re-entry, diluting the radioactive material to the point that no significant increase in radiation would occur. Over the reactor’s long operational history (31 BES-5 reactors were launched), its lifetime was constantly extended, beginning at just 110 minutes (used for radar broadcast testing) and eventually reaching up to 135 days of operational life.
The first BES-5 to be launched was serial number 37, on the KOSMOS-367 satellite on October 3, 1970 (there’s some confusion on this score, with another source claiming it was KOSMOS-469, launched on 25 December 1971). After a very short (110 minute) operational life, the spacecraft was moved into a graveyard orbit and the reactor ejected, due to overheating in the reactor core. Three more spacecraft (KOSMOS-402, -469, and -516) were launched over the next two years, with -469 possibly being the first to carry the 8.2 GHz side-looking radar system the power plant was selected for. Over time, the US-A spacecraft were launched into parallel, co-planar orbits, usually deployed in pairs with closely attending Soviet US-P electronic intelligence satellites (for more on the operational use of the US-A, check out Sven Grahn’s excellent blog on the operational history of the US-A).
The US-A program wasn’t without its failures, sadly, and one led to one of the biggest radiological cleanup missions in the history of nuclear power. On September 18, 1977, a Tsyklon-2 rocket launched from Baikonur Cosmodrome in Kazakhstan, carrying the KOSMOS-954 US-A spacecraft to an orbital inclination of 65 degrees. By December, the spacecraft’s orbital maneuvering had become erratic, and Soviet officials informed US officials that they had lost control of the satellite before being able to move the reactor core into its designated graveyard orbit. On January 24, 1978, the satellite re-entered over Canada, spreading debris over a 600 km long swath of the country. Operation Morning Light, the resulting joint Canadian-US cleanup program, recovered the debris over several months, in an effort that involved hundreds of people from Canadian government agencies, the US DOE, the NEST teams then available, and US Military Airlift Command. No fatalities or radiation poisoning cases were reported as a result of KOSMOS-954’s unplanned re-entry, although the remote nature of the re-entry was probably as much a help as a challenge in this regard. A second spacecraft, KOSMOS-1402, also had its fuel elements re-enter the atmosphere following a failure to ascend into its graveyard orbit, this time over the North Atlantic. The core re-entered the atmosphere on 23 January 1983, breaking up over the North Atlantic, north of England. No fragments of this reactor were ever recovered, and no significant increase in radioactivity was detected as a result of this unplanned re-entry.
These two incidents caused significant delays in the US-A program, and subsequent redesigns of the reactor as well. However, launches of this system continued until March 14, 1988, with the KOSMOS-1932 mission, which was moved into a graveyard orbit on 20 May 1988, after a mission time of 66 days. The fate of its immediate predecessor, KOSMOS-1900, showed that the additional safety mechanisms for the US-A spacecraft’s reactor were successful: despite an apparent loss of control of the spacecraft, an increasingly eccentric orbit, and building aerodynamic forces, the reactor core was boosted to a stable graveyard orbit, with the maneuver completed on 17 October 1988. The main body of the spacecraft had re-entered over the Indian Ocean 16 days earlier.
One interesting note on the controversy surrounding these reactor cores’ re-entries is that the US had planned on doing much the same thing with the SNAP-10A reactors. The design was supposed to orbit long enough (on the order of hundreds of years) for the short-lived fission products to decay away, after which the entire reactor would self-disassemble through a combination of mechanical, explosive, and aerodynamic systems and burn up in the upper atmosphere. While the amount of radioactivity added to the atmosphere would have been negligible, these accidents showed that this disposal method would not be acceptable, further complicating the American astronuclear program as well as the Soviet one. The SNAPSHOT reactor is still in orbit, and is expected to remain there for 2,800 years, but considering the fallout of these accidents, retrieval or a boost to a graveyard orbit may be a future mission necessity for this reactor.
The US-A spacecraft demonstrated in-space nuclear fission power, and serial fission power plant production, for over two decades. Despite two major failures resulting in re-entry of the reactor core, the US-A program managed successful operation of the BES-5 reactor for 29 missions, and minimal impact from the two failures. The rest of the BES-5 cores remain parked in graveyard orbits, where they will remain for many hundreds of years until the radioactivity has dropped to natural background radiation.
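To get a feel for the “many hundreds of years” figure, a simple exponential decay sketch is enough. Here I assume the long-term activity is dominated by fission products with roughly 30-year half-lives (strontium-90 and cesium-137); that is an illustrative assumption on my part, not a figure from the sources, and a real core’s curve also depends on burnup and the full fission-product inventory:

```python
# Illustrative decay sketch: fraction of activity remaining after t years,
# assuming a single dominant ~30-year half-life (Sr-90 / Cs-137; an
# assumption for illustration, not a full fission-product inventory).

HALF_LIFE_YEARS = 30.0

def activity_fraction(years: float, half_life: float = HALF_LIFE_YEARS) -> float:
    """Fraction of the initial activity remaining after `years` of decay."""
    return 0.5 ** (years / half_life)

for t in (30, 100, 300, 600):
    print(f"after {t:>3} years: {activity_fraction(t):.1e} of initial activity")
```

After ten half-lives (300 years on this assumption), less than a tenth of a percent of the initial activity remains, which is roughly where the hundreds-of-years rule of thumb comes from.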
There is one long-lasting legacy of the BES-5 program for operations in Earth orbit, however: the ejected NaK coolant. The droplets of coolant remain a cratering hazard for spacecraft in certain orbits, though they are not thought to be a debris-multiplication hazard. It is doubtful that the same core ejection system would be used in a newly designed astronuclear reactor, but this legacy lives on as an example of how poorly the risk of a Kessler Syndrome was appreciated at the time.
While this program was not 100% successful, whether judged by mission success or by the lasting consequences of its operations, the 25-plus years of BES-5 operations remain to this day the most extensive and successful service history of any astronuclear fission power design, meeting or exceeding even the service histories of the RTG designs deployed by any country.
TOPOL: The Most Powerful Reactor Ever Flown
The TEU-5 TOPOL (TOPAZ-1) program produced the second type of Soviet reactor to fly; and, although it only flew twice, it can be argued to have been even more successful than the BES-5 design. The TEU-5 marked the return of the in-core thermionic power conversion system first used in Romashka; and, just as the Bouk was a step beyond the Romashka, the Topol was a step beyond that. Thermionic conversion remained more attractive than thermoelectric, with a wider range of operating capabilities, higher temperature potential, and more forgiving materials requirements, but thermoelectric conversion was ready for flight first. Because of this, and because of the inertia that any flight-tested and more refined (in a programmatic and serial production sense) program has over one that has yet to fly, the BES-5 flew for over a decade before the TEU-5 took to orbit.
Despite its different structure and much higher power, the TEU-5 was able to fulfill the same ocean radar reconnaissance role, though initially it was meant to be a powerful on-orbit TV transmission station. The major advantage of the TEU-5 over the BES-5 was that its higher power meant it wasn’t forced into a very low orbit, which had increased atmospheric drag, forced a severely reduced spacecraft dry mass to allow more propellant on board, and created a lot of complexity in reactor decommissioning and disposal. Following the KOSMOS-954 and -1402 accidents, the low-flying US-A profile was no longer available to astronuclear reactors, so the orbital altitude increased; the TEU-5 offered useful image resolution from this higher altitude thanks to its higher power, along with improvements to the (never flown, but ground-tested) radar systems.
The TOPOL program began in the 1960s, under the Russian acronym for “Thermionic Experimental Converter in the Active Zone,” which spells out TOPAZ in transliteration, but ground testing didn’t begin until 1970. This was a multi-cell thermionic fuel element design, similar in basic concept to Romashka but far more complex. Instead of a single stack of disc-shaped fuel elements, a “garland” of fuel elements was formed into each thermionic fuel element. The fissile fuel was surrounded by a thimble of tungsten or molybdenum, which formed the cathode of the thermionic converter, while the anode was a thin niobium tube; as with most thermionic converters, the gap between cathode and anode was filled with cesium vapor. The anode was cooled with pumped NaK, although some sources indicate that lithium was also considered as a coolant for higher-powered versions of the reactor.
The differences between the BES-5 and TEU-5 went far beyond the power conversion system. Instead of being a fast reactor, the Topol was designed for the thermal neutron spectrum, and as such used zirconium hydride for in-core moderation (which also imposed a thermal limit on core materials; hydrogen loss mitigation measures were taken throughout the development process). Rather than the metal fuel of its predecessor or the carbide of the Romashka, the Topol used a material far more familiar to nuclear power plant operators: uranium oxide (UO2), enriched to 90% 235U. This, along with core geometry changes, allowed the uranium load to drop from 30 kg in the BES-5 to 11.5 kg. NaK remained the coolant, thanks to its low melting temperature, good thermal conductivity, and neutronic transparency. The cathode temperature in the TEU-5 was in the range of 1500-1800 C, resulting in an electrical power output of up to 10 kW.
One of the most technically challenging parts of this reactor’s design was the cesium management system. The metal would only be a vapor inside the converter gaps in the core; electromagnetic pumps moved the liquid through a series of filters, heaters, and pipes. Because the purity of the cesium had a large impact on the efficiency of the thermionic elements, a number of filters were installed, and gaseous fission products were evacuated into space.
The first flight of the TEU-5 was on the KOSMOS-1818 satellite, launched on February 1, 1987, into a significantly different orbit than the rest of the US-A series, despite superficially appearing quite similar. This was because it was the test-bed of a new type of US-A spacecraft, the US-AM, taking advantage not only of the more powerful nuclear reactor but of numerous other new technologies as well. The USSR eventually announced that the spacecraft’s name was Plasma-A, and that it was a technology demonstrator for a number of new systems, including six SPT-70 Hall thrusters for maneuvering and reaction control and a suite of electromagnetic and sun-finding sensors. Some sources indicate that part of the spacecraft’s mission was the development of a magnetospherically-based navigation system for the USSR. An additional advantage of the higher orbit was that it eliminated the need for an ascent stage for the reactor core and fuel elements, saving spacecraft mass. It had an operational life of 187 days before the reactor was placed in its graveyard orbit, and the remainder of the spacecraft was allowed to re-enter the atmosphere as its orbit decayed.
The second Plasma-A (KOSMOS-1867) launch was on July 10th, 1987. While the initial flight profile was remarkably similar to the original Plasma-A satellite, the later portions of the mission showed a much larger variation in orbital period, possibly indicating more extensive testing of the thrusters. It was operational for just over a year before it, too, was decommissioned.
Neither of the TEU-5 launches carried radar equipment aboard; but, considering the cancellation of the program also coincided with the fall of the Soviet Union, it’s possible that the increased power output of the TEU-5 would have allowed acceptable radar resolution from this higher orbit (the US-A spacecraft’s orbit was determined by the distance and power requirements of its radar system, and due to the higher aerodynamic drag also significantly limited the lifetime of each spacecraft).
After decommissioning, the TEU-5 reactors experienced problems with ejected NaK coolant similar to the BES-5’s. There was one additional complication from decommissioning these larger cores, however, which led to some confusion during the Solar Maximum Mission (SMM) to study solar behavior. Due to the higher altitude at which the reactor operated at full power, and the materials the reactor was built from, what is usually a minor curiosity in reactor physics confused some astrophysical and heliophysical researchers: when certain materials are bombarded by a sufficiently high gamma flux, they eject electron-positron pairs, which then became trapped in the Earth’s magnetosphere. While these radiation fluxes are minuscule, and unable to adversely affect living tissue, for scientists carefully studying solar behavior during the solar maximum the difference in the number of positrons was not only noticeable but statistically significant. Both the SMM satellite and one other (Ginga, a Japanese X-ray telescope launched in 1987, which re-entered in 1991) have been confirmed to have suffered instrument interference due to either the gamma flux or the resulting positron emissions from the two flown TEU-5 reactors. While this problem affected only a very small number of missions, should astronuclear reactors become more commonly used in orbit, these types of emissions will need to be taken into account for future astrophysical missions.
The Topol program as a whole would survive the collapse of the Soviet Union, but just as with the BES-5, the TEU-5 never flew again after the Berlin Wall came down. KOSMOS-1867 was the last TEU-5 reactor, and the last US-AM satellite, to fly.
ENISY, The Final Soviet Reactor
The single-element thermionic reactor concept never went away. In fact, it remained in side-by-side development with the TOPOL reactor and shared many of its basic characteristics, but was not ready as quickly as TOPOL. The program began in 1967, with a total of 26 units built.
ENISY was seen by Soviet planners as the logical extension of the TEU-5 program, and in many ways the reactor designs are linked. While the TEU-5 was designed for high-powered radar reconnaissance, the ENISY reactor was designed to power a communications and TV broadcast satellite. The amount of data that can be transmitted is directly proportional to the power available, and abundant power remains one of the most attractive advantages that astronuclear power plants offer deep space probes (along with propulsion).
We’ll look at this design more in a later post, but it’s important to mention here since it is, in many ways, a direct evolution of the TEU-5. One nice thing about this reactor is that, thanks to its geometry, its non-nuclear components could be tested as a unit without fissile fuel: induction heating units the same size as the fuel elements could be slid into the locations the fuel rods would occupy, allowing preflight testing without the neutron activation and material degradation caused by the radiation flux.
This capability was demonstrated at the 8th US Symposium on Nuclear Energy in Albuquerque, NM, and led to the US purchasing two already-tested units from Russia (numbers V-71 and I-21U), with a buy option taken out on an additional four units, if needed. The purchase included technical information on the fuel elements, and offers of assistance from Russia in fabricating the fuel elements, but no actual fuel was sold. This reactor design would form the core of the American crewed lunar base concept in the 1990s as part of the Constellation program, as well as the core of a proposed technology demonstration deep space probe, but those programs never reached fruition.
We’ll look at this design in our usual depth in a couple of blog posts. For now, it’s worth noting that this design reached flight-ready status, but, due to the financial situation of Russia after the collapse of the USSR, the increased availability of high-powered photovoltaic communications satellites, and the lack of funding for an American astronuclear flight test program, it never achieved orbit as its predecessors did.
The Legacy of the USSR Astronuclear Program
The USSR flew more nuclear power plants than the rest of the world combined, 33 times more to be precise. Their program focused on an area of power generation that continues to hold great promise in the future, and in many ways helped define the problem for the rest of the world: in-core direct power conversion (something we’ll talk more about in the power conversion series). Even the failures of the program have taught the world much about the challenges of astronuclear design, and changed the face of what is and isn’t acceptable when it comes to flying a nuclear reactor in Earth orbit. The ENISY reactor went on to be the preferred power plant for American lunar bases for over a decade, and remains the only astronuclear design that’s been flight-certified by multiple countries.
Russia continues to build on the experience and expertise gained during the Romashka, BES-5, TEU-5, and ENISY programs. A recent test of a heat rejection system that offers far higher heat rejection capacity for its mass than any that has flown to date (a liquid droplet radiator, a concept we’ll cover in the thermal management post coming in a few months), their focus on high-power Hall thrusters, and their design for an on-orbit nuclear electric tug with a far more powerful reactor than any we looked at today (1 MWe, and, depending on the power conversion system, likely between 2 and 5 MWt) all show that this experience has not been shoved into a closet and left to gather dust, but continues to be developed to advance the future of spaceflight.
More Coming Soon!
This post focused on the USSR and Russia’s astronuclear power plant expertise and operational history, a subject about which very little has been written in English (outside a number of reports, mostly focused on the ENISY/TOPAZ-2 reactor), and one that has long fascinated me. However, the USSR wasn’t the only country pursuing the idea, and wasn’t even the first to fly a reactor; it was just the most successful at sustaining an astronuclear program.
The next post (which might be split into two due to the sheer number of fission power plant designs proposed in the US) is on the American programs from the same era: the Systems for Nuclear Auxiliary Power, or SNAP, series of reactors (if split, the first post will cover SNAP-2, -10A, SNAPSHOT, and -8, along with the three reactors that evolved from SNAP-8; the second will cover SNAP-50/SPUR, SABRE, SP-100, and possibly a couple more, as well as the ENISY/TOPAZ II US-USSR TSET/NEP Space Test Program/lunar base program). While the majority of the SNAP designs that were used were radioisotope thermoelectric generators, the ones we’ll focus on are the fission power plants: the SNAP-2, SNAP-8, SNAP-10A (the first reactor to be launched into orbit), and the SNAP-50/SPUR reactor.
Following that, we’ll wrap up our look at the history of astronuclear electric power plants (the reactors themselves, at least) with a look at the designs proposed for the Strategic Defense Initiative (Reagan’s “Star Wars” program), return to a Russian-designed reactor which would have powered an American lunar base, had the funding for the base been available (ENISY), and cover the designs that rounded out the 20th century’s exploration of this fascinating and promising concept.
We’ll do one last post on NEP reactor cores looking at more recent designs from the last twenty years up to the present time, including the JIMO mission and a look at where Kilopower stands today, and then move on to power conversion systems in what’s likely to be another long series. As it stands that one will have a post on direct energy conversion, one on general heat engines and Stirling power conversion systems, one on Rankine cycle power conversion systems, one on Brayton cycle systems (including the ever-popular, never-realized, supercritical CO2 turbines), one on AMTEC and magnetohydrodynamic power conversion systems (possibly with a couple other non-mechanical heat engines as well), and a wrap up of the concepts, looking at which designs work best for which power levels and mission types. After that, it’ll be on to: heat rejection systems, for another multi-part series; a post on NEP ship and mission design; and, finally, one on bimodal NTR/NEP systems, looking at how to get the thrust of an NTR when it’s convenient and the efficiency of an NEP system when it’s most useful.
Hello, and welcome back to Beyond NERVA! Today, we finish our look at electric propulsion systems by looking at electrostatic propulsion. This is easily the most common form of in-space electric propulsion system, and as we saw in our History of Electric Propulsion post, it’s also the first that was developed.
I apologize for how long it’s taken to get this blog post published. As I’ve mentioned before, electric propulsion is one of my weak subjects, so I’ve been very careful to try to ensure that the information I’m giving is correct. Another complication came from the fact that I had no idea how complex and varied each type of drive system is. I have glossed over many details of many of the systems in this blog post, but I’ve also included an extensive list of documentation on all of the drive systems I discuss at the end, so if you’re curious about the details of these systems, please check out the published papers on them!
By far the most common type of electric propulsion today, and the type most likely to be called an “ion thruster,” is electrostatic propulsion. The electrostatic effect was one of the first electrical effects ever formally described, and the first ever observed (lightning is an electrostatic phenomenon, after all). Electrostatics as a general field of study refers to the study of electric charges at rest (hence, “electro-static”). The electrostatic effect is the tendency of objects with a differential charge (one positive, one negative) to attract each other, and of objects with the same charge to repel each other. This occurs when electrons are stripped from or added to a material. Some of the earliest scientific experiments involving this effect used bars of amber and wool: the amber would become negatively charged, and the wool positively charged, due to the interactions of the very fine hairs of the wool and the crystalline and elemental composition of the amber (for the nitpicky types, this is known as the triboelectric effect, but it’s still a manifestation of the electrostatic effect). Other experimenters during the 18th and 19th centuries used cat fur instead of wool, a much more mentally amusing way to build an electrostatic charge. However, we aren’t going to be discussing using a rotating wheel of cats to produce an electric thruster (although, if someone feels like animating said concept, I’d love to see it).
There are a number of designs that use electrostatic effects to produce thrust. Some are very similar to some of the concepts that we discussed in the last post, like the RF ionized thruster (a major area of focus in Japan), the Electron Cyclotron Resonance thrusters (which use the same mechanisms as VASIMR’s acceleration mechanism), and the largely-abandoned Cesium Contact thruster (which has a fair amount of similarities with a pulsed plasma or arcjet thruster). Others, such as the Field Emission Electrostatic Thruster (FEEP) and Ionic Liquid Ion Source thruster (also sometimes called an electrospray) thruster, have far fewer similarities. None of these, though, are nearly as common as the electron bombardment noble gas thruster types: the gridded ion (either electron bombardment, cyclotron resonance, or RF ionization) thruster and the Hall effect thruster (which also has two types: the thruster with anode layer and stationary plasma thruster). The gridded ion thruster, commonly just called an ion thruster, is the propulsion system of choice for interplanetary missions, because it has the highest specific impulse of any currently available propulsion system. Hall effect thrusters have lower specific impulse, but higher thrust, making them a popular choice for reaction control systems on commercial and military satellites.
Most electrostatic drives use an ionization chamber or zone to strip electrons from an easily ionized material. The now-positively-charged ions are then accelerated toward a negatively charged structure (or, in some cases, by an electromagnetic field), pass through it, and are spat out the back of the thruster. Because of the low density of these ion streams, and the lack of an expanding gas, no physical nozzle is used; the characteristic bell-shaped de Laval nozzle of chemical or thermal engines is useless here. However, there are many ways the propellant can be ionized, and many ways it can be accelerated, leading to a huge variety of design options within electrostatic propulsion.
The first design for a practical electric propulsion system, patented by Robert Goddard in 1917, was an electrostatic device, and most designs since, both in the US and the USSR, have used this concept. In the earliest days of electric propulsion design, each country went a different way in developing the drive: the US focused on the technically simpler, but materially more problematic, gridded ion thruster, while the Soviet Union worked to develop the more promising, but more difficult to engineer, Hall thruster. Variations of each have been produced over the years, and additional options have been explored as well. These systems have flown throughout the Solar System, visiting many of the asteroids in the Main Belt among other destinations, and provide much of the station-keeping thrust for satellites in orbit around Earth. Let’s go ahead and look at what the different types are, what their advantages and disadvantages are, and how they’ve been used in the past.
Gridded Ion Drives
This is the best-known of the electric propulsion thrusters of any type, and is often shortened to “ion drive.” Here, the thruster has four main parts: the propellant supply, an ionization chamber, an array of conductive grids, and a neutralizing beam emitter. The propellant can be anything that is easily ionized; cesium and mercury were the first options, but these have largely been replaced by xenon and argon.
The type of ionization chamber varies widely, and is the main difference between the types of ion drive. The first designs used gaseous agitation to strip electrons, but most higher-powered systems use particle (mostly electron) beams, radio frequency or microwave excitation, or cyclotron resonance, with different methods appearing in different gridded ion drives over the years and across manufacturers. The efficiency and capacity of the ionization chamber define how much propellant mass flow is possible, which is one of the main limits on the overall thrust of the thruster.
After being ionized, the gas and plasma are separated: a negatively charged grid extracts the positively charged ions, leaving the neutral gas in the ionization chamber to be ionized in its turn. In most modern designs, this is also the beginning of the acceleration process. Often, two or three grids are used, and the term “ion optics” is often used instead of “grids,” because these structures not only extract and accelerate the plasma, but shape its beam as well. The charge on these grids, and their geometry, define the exhaust velocity of the ions, so the desired specific impulse of the thruster is largely determined by the charge applied to them. Many US designs use a more highly charged inner grid to ensure better separation of the ions, with the charge potential difference between this grid and the second accelerating the ions. Because of this, the first grid is often called the extractor, and the second the accelerator grid. The charge potential possible on each grid is another major limit on the possible power level (and therefore the maximum exhaust velocity) of these thrusters.
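Since the net grid potential sets the exhaust velocity, the relationship is worth sketching. For a singly charged ion falling through a net potential V, the electrostatic energy qV becomes kinetic energy, so v = sqrt(2qV/m). A minimal sketch in Python (the 1,500 V figure is a hypothetical illustrative value, not taken from any specific thruster):

```python
import math

# Ideal exhaust velocity for a singly charged ion accelerated through a net
# grid potential V: the electrostatic energy q*V becomes kinetic energy
# (1/2)*m*v^2, so v = sqrt(2*q*V/m). Real thrusters fall slightly short of
# this due to beam divergence and incomplete ionization.
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs
AMU = 1.66053906660e-27              # kilograms per atomic mass unit
XENON_ION_MASS = 131.293 * AMU       # kg, average atomic mass of xenon

def ideal_exhaust_velocity(net_potential_volts, ion_mass_kg=XENON_ION_MASS):
    """Ideal exhaust velocity (m/s) of a singly charged ion."""
    return math.sqrt(2.0 * ELEMENTARY_CHARGE * net_potential_volts / ion_mass_kg)

# A 1,500 V net potential gives roughly 47 km/s for xenon; note that
# quadrupling the voltage only doubles the exhaust velocity.
print(f"{ideal_exhaust_velocity(1500.0) / 1000:.1f} km/s")
```

The square root is why grid potential is such a hard limit on exhaust velocity: higher isp demands disproportionately higher voltages, which is exactly the engineering pressure behind designs like the DS4G discussed below.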
These grids are also one of the main limits on the thruster’s lifetime, since some ions will impact the grid as they flow past (although the difference in charge potential between the apertures and the structure of the grid tends to minimize this). In many of the early gridded ion thrusters that used highly reactive propellants, chemical interactions could change the conductivity of the grid surfaces, cause more rapid erosion, and produce other problems; the transition to noble gas propellants has made this less of an issue. Finally, the geometry of the grids has a huge impact on the direction and velocity of the ions themselves, so there’s a wide variety of options available through manipulation of this portion of the thruster as well.
At the end of the drive cycle, as the ions leave the thruster, a spray of electrons is added to the propellant stream to prevent the spacecraft from becoming negatively charged over time and attracting some of the propellant back toward itself through the same electrostatic effect that accelerated it in the first place. Problems with incomplete ion stream neutralization were common in early electrostatic thrusters, and with the cesium and mercury propellants used at the time, chemical contamination of the spacecraft became an issue for some missions. Incomplete neutralization is still a concern for some thruster designs, although experiments in the 1970s showed that a spacecraft can ground itself without the ion stream if the differential charge becomes too great. In three-grid systems (or four, more on that concept later), the final grid takes the place of this electron beam, and ensures better neutralization of the plasma beam as well as greater possible exhaust velocity.
Gridded ion thrusters offer very attractive specific impulse, in the range of 1500-4000 seconds, with exhaust velocities up to about 100 km/s for typical designs. The other side of the coin is their low thrust, generally from 20-100 millinewtons (low even by electric propulsion standards, although their specific impulse is higher than average), which is a mission planning constraint but isn’t a show-stopper for many applications. An advanced concept from the Australian National University and the European Space Agency, the Dual Stage 4 Grid (DS4G) thruster, achieved far higher exhaust velocities, up to 210 km/s, by using a staged gridded ion design.
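To put those numbers in context: specific impulse is just exhaust velocity divided by standard gravity, and for a fixed jet power, thrust falls as exhaust velocity rises (since jet power is T·ve/2). A quick sketch, where the 10 kW power level and 70% efficiency are assumed illustrative values rather than DS4G test figures:

```python
G0 = 9.80665  # m/s^2, standard gravity

def isp_from_exhaust_velocity(ve_m_s):
    """Specific impulse (seconds) from exhaust velocity."""
    return ve_m_s / G0

def thrust_from_jet_power(power_watts, ve_m_s, total_efficiency=1.0):
    """Thrust (N) available from a given electrical power and exhaust
    velocity: jet power is (1/2)*mdot*ve^2 = (1/2)*T*ve, so T = 2*eta*P/ve.
    Doubling the exhaust velocity halves the thrust at the same power."""
    return 2.0 * total_efficiency * power_watts / ve_m_s

# The DS4G's 210 km/s exhaust velocity corresponds to an isp over 21,000 s:
print(isp_from_exhaust_velocity(210e3))  # about 21,414 s
# ...but at a hypothetical 10 kW and 70% efficiency, a beam that fast only
# carries about 67 mN of thrust:
print(thrust_from_jet_power(10e3, 210e3, 0.7))
```

This trade-off is why high-isp gridded ion drives pair so naturally with high-power nuclear electric systems: the only way to get useful thrust at these exhaust velocities is to throw a lot of power at the problem.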
Past and Current Gridded Ion Thrusters
These drive systems have been used on a number of different missions over the years, starting with the SERT missions mentioned in the history of electric propulsion post, and continuing on an experimental basis until the Deep Space 1 technology demonstration mission, the first spacecraft to use ion propulsion as its main form of propulsion. That same thruster, the NSTAR, is still in use today on the Dawn mission, studying the minor planet Ceres. Hughes Aircraft also developed a number of thrusters for station-keeping on its geosynchronous satellite bus (the XIPS thruster).
JAXA used this type of drive system for its Hayabusa mission to the asteroid Itokawa, though its thruster used microwaves to ionize the propellant. This thruster operated successfully throughout the mission’s life, and propelled the first spacecraft to return a sample from an asteroid back to Earth.
ESA has used different variations of this thruster on multiple satellites as well, all of them radio frequency ionization types. The ArianeSpace RIT-10 has been used on multiple missions, and the QinetiQ T5 thruster was used successfully on the GOCE mission mapping the Earth’s gravitational field.
NASA certainly hasn’t given up on further developing this technology. The NEXT thruster is three times as powerful in terms of thrust as the NSTAR thruster, although it operates on similar principles. The testing regime for this thruster has been completed, demonstrating 4150 s of isp and 236 mN of thrust over a testing life of over 48,000 hours, and it is currently awaiting a mission to fly on. It has also been a testbed for new designs and materials for many drive system components, including a new hollow cathode made of LaB6 (lanthanum hexaboride) and several new grid materials.
HiPEP: NASA’s Nuclear Ion Propulsion System
Another NASA project in gridded ion propulsion, although one that has since been canceled, is far more germane to nuclear electric propulsion specifically: the High Power Electric Propulsion (HiPEP) drive for the Jupiter Icy Moons Orbiter (JIMO) mission. JIMO was an NEP-propelled mission to Jupiter, canceled in 2005, meant to study Europa, Ganymede, and Callisto (this mission will get an in-depth look later in this blog series on NEP). HiPEP used two types of ionization chamber. The first was electron cyclotron resonance ionization, which leverages the small number of free electrons present in any gas: the ionization chamber’s magnetic containment moves them in circles, and microwaves tuned to resonate with these circling electrons feed them energy, efficiently ionizing the xenon gas. The second was direct-current ionization, using a hollow cathode to strip off electrons; this carries additional problems with cathode failure, and so is the less preferred option. Cathode failure of this sort is another major failure point for ion drives, so being able to eliminate it is a significant advantage, but the microwave system consumes more power, so in less energy-intensive applications it’s often not used.
One very unusual thing about this system is its shape: rather than the typical circular discharge chamber and grids, it uses a rectangular configuration. The designers note that not only does this make it more compact to stack multiple units together (reducing the structural, propellant, and electrical feed system mass requirements for the full system), it also means that the current density across the grids can be lower for the same electrostatic potential, reducing current erosion in the grids. This means that the grid can support a 100 kg/kW throughput margin for both of the isp configurations that were studied (6000 and 8000 s). The longest unsupported span of grid can be reduced as well, preventing issues like thermal deformation, launch vibration damage, and electrostatic attraction between the grids and either the propellant or the back of the ionization chamber itself. The fact that it makes the system more scalable from a structural engineering standpoint is one final benefit.
As the power of the thruster increases, so do the beam neutralization requirements. In this case, up to 9 amperes of continuous throughput are required, which is very high compared to most systems, so the neutralizing source has to be both powerful and reliable. While the HiPEP team discussed using a common neutralization system for tightly packed thrusters, the baseline design is a fairly typical hollow cathode, similar to what was used on the NSTAR thruster, but with a rectangular cross section rather than a circular one to accommodate the different thruster geometry. Other concepts, like microwave beam neutralization, were also discussed; however, due to the success and long life of this type of system on NSTAR, the designers felt a hollow cathode would be the most reliable way to meet the high throughput requirements of this system.
HiPEP consistently met its program guidelines, in both engine thrust efficiency and erosion studies. Testing of the microwave ionization system was conducted at both 2.45 and 5.85 GHz and successfully concluded. The 2.45 GHz test, at 16 kW of power, achieved a specific impulse of 4500-5500 seconds, clearing the way for the higher-powered microwave emitter to be used. The 5.85 GHz ionization chamber was tested at multiple power levels, from 9.7 to 39.3 kW, achieved a maximum specific impulse of 9620 s, and showed a clear increase in thrust, up to close to 800 mN, during this test.
Sadly, with the cancellation of JIMO (a program we will continue to come back to frequently as we look at NEP), the need for a high-powered gridded ion thruster (and the means of powering it) went away. Much like NERVA, and almost every nuclear spacecraft ever designed, the cancellation of the mission it was meant to fly on sounded the death knell for the drive system. However, HiPEP remains on the books as an attractive, powerful gridded ion drive for when an NEP spacecraft becomes a reality.
DS4G: Fusion Research-Inspired, High-isp Drives to Travel to the Edge of the Solar System
The Dual Stage 4 Grid (DS4G) ion drive is perhaps the most efficient electric drive system ever proposed, offering specific impulse well over 10,000 seconds. While there are some drive systems that offer higher isp, they’re either rare concepts (like the fission fragment rocket, a concept that we’ll cover in a future post), or have difficulties in the development process (such as Orion derivatives, which run afoul of nuclear weapons test bans and treaty limitations concerning the use of nuclear explosives in space).
So how does this design work? Traditional ion drives use either two grids (like the HiPEP drive), combining the extraction and acceleration stages and then using a hollow cathode or electron emitter to neutralize the beam, or three grids, where the third grid takes the place of the hollow cathode. In either case the grids are very closely spaced, which has its advantages, but also a couple of disadvantages: combining the extraction and acceleration systems forces a compromise between extraction efficiency and acceleration capability, and the close spacing limits the acceleration possible. The DS4G, as the name implies, does things slightly differently: there are two pairs of grids, with each grid close to its partner within the pair but the pairs spaced farther apart. This allows for a longer acceleration region, and therefore higher exhaust velocity, while the separation between the extraction grids and the final acceleration grids allows each to be better optimized for its individual purpose. As an added benefit, the propellant plasma beam is better collimated than that of a traditional ion drive, which means the drive makes more efficient use of the propellant mass, increasing the specific impulse even further.
This design didn’t come out of nowhere, though. In fact, most tokamak-type fusion reactors use a device very similar to an ion drive to accelerate beams of hydrogen to high velocities; but to get through the intense magnetic fields surrounding the reactor, the particles can’t remain ionized, so a very effective neutralizer needs to be attached to the back of what’s effectively an ion drive… and these designs all use four grids, rather than three. Dr. David Fearn knew of these devices and decided to try to adapt them to space propulsion, with the help of ESA, leading to a 2005 test-bed prototype in collaboration with the Australian National University. An RF ionization system was designed for the plasma production unit, and a 35 kV electrical system for the prototype’s ion optics. This was not optimized for in-space use; rather, it was a low-cost test-bed for optics geometry testing and general troubleshooting of the concept. Another benefit of this design is a higher-than-usual thrust density of 0.86 mN/cm^2, seen in the second phase of testing.
Two rounds of highly successful testing were done at ESA’s CORONA test chamber in 2005 and 2006, the results of which can be seen in the tables above. The first test series used a single-aperture design, which, while highly inefficient, was good enough to demonstrate the concept; this was later upgraded to a 37-aperture design. The final test results in 2006 showed impressive specific impulse (14,000-14,500 s), thrust (2.7 mN), and electrical, mass, and total efficiency (0.66, 0.96, and 0.63, respectively). The team is confident that total efficiencies of about 70% are possible with this design once optimization is complete.
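Those efficiency numbers hang together nicely: total efficiency is, to a good approximation, the product of the electrical efficiency (beam power out per electrical power in) and the mass utilization efficiency (the fraction of propellant actually ionized and accelerated). A one-line sanity check on the figures quoted above:

```python
# Checking that the reported DS4G total efficiency is consistent with the
# reported electrical efficiency and mass utilization: to a good
# approximation, eta_total = eta_electrical * eta_mass.
electrical_efficiency = 0.66  # from the 2006 test results
mass_utilization = 0.96       # from the 2006 test results
total_efficiency = electrical_efficiency * mass_utilization
print(round(total_efficiency, 2))  # 0.63, matching the reported total
```

The decomposition also shows where the remaining headroom is: with mass utilization already at 0.96, reaching the team’s ~70% target has to come almost entirely from the electrical side.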
There remain significant engineering challenges, but nothing that’s incredibly different from any other high powered ion drive. Indeed, many of the complications concerning ion optics, and electrostatic field impingement in the plasma chamber, are largely eliminated by the 4-grid design. Unfortunately, there are no missions that currently have funding that require this type of thruster, so it remains on the books as “viable, but in need of some final development for application” when there’s a high-powered mission to the outer solar system.
Cesium Contact Thrusters: Liquid Metal Fueled Gridded Ion Drives
As we saw in our history of electric propulsion blog post, many of the first gridded ion engines were fueled with cesium (Cs). These systems worked well, and the advantages of an easily storable, easily ionized, non-volatile propellant (in vapor terms, at least) were significant. However, cesium is also a reactive metal, and toxic to boot, so by the end of the 1970s development of this type of thruster was stopped. As an additional problem, due to the inefficient and incomplete beam neutralization possible with the cathodes available at the time, contamination of the spacecraft by the Cs ions (as well as loss of thrust) was a significant challenge for the thrusters of the era.
Perhaps the most useful part of this type of thruster to consider is the propellant feed system, since it can be applied to many different low-melting-point metals. The propellant itself was stored as a liquid in a porous metal sponge made out of nickel, which was attached to two tungsten resistive heaters. By adjusting the size of the pores of the sponge (called Feltmetal in the documentation), the flow rate of the Cs is easily, reliably, and simply controlled. Wicks of graded-pore metal sponges were used to draw the Cs to a vaporizer, made of porous tungsten and heated with two resistive heaters. This then fed to the contact ionizer, and once ionized the propellant was accelerated using two screens.
As we’ll see in the propellant section, after we look at Hall effect thrusters, Cs (as well as other metals, such as barium) could have a role to play in the future of electric propulsion, and the solutions of the past can help inform ideas for the future.
Hall Effect Thrusters
When the US was beginning to investigate the gridded ion drive, the Soviet Union was investigating the Hall effect thruster (HET). This is in many ways a very similar concept to the ion drive, in that it uses the electrostatic effect to accelerate propellant, but the mechanism is very different. Rather than using a system of electrically charged grids to produce the electrostatic potential that accelerates the ionized propellant, in a HET the plasma itself creates the accelerating potential through the Hall effect, discovered in the 1870s by Edwin Hall. In these thrusters, the backplate functions as both gas injector and anode. A radial magnetic field, produced by a set of outer solenoids and a central solenoid, traps the electrons that have been stripped off the propellant as it ionizes (mostly through electron impact), forming a circulating Hall current in the plasma; it is this trapped electron cloud that provides the electrostatic potential that accelerates the ions out of the thruster to produce thrust. After the ions are ejected, a hollow cathode very similar to the one used in the ion drives we’ve been looking at neutralizes the plasma beam, for the same reasons as on an ion drive (this cathode is also the source of approximately 10% of the propellant mass flow). Cathodes are commonly mounted external to the thruster on a small arm; however, some designs, especially modern NASA designs, use a central cathode instead.
The list of propellants used tends to be similar to those of other ion drives: xenon, krypton, argon, iodine, bismuth, magnesium, and zinc have all been used (along with some others, such as NaK), with Xe and Kr the most popular. While this system has a lower average specific impulse (1500-3000 s) than the gridded ion drives, it has more thrust (a typical drive system used today turns 1.35 kW of power into 83 mN of thrust), meaning that it’s very good for either orbital inclination maintenance or reaction control systems on commercial satellites.
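Those two figures let us estimate the thruster’s efficiency. Assuming an isp of about 1600 s (an assumed value near the low end of the quoted range; the 1.35 kW and 83 mN figures aren’t paired with an isp here), the jet power T·ve/2 works out to roughly 650 W, for an overall efficiency just under 50%, consistent with typical commercial Hall thruster efficiencies:

```python
G0 = 9.80665  # m/s^2, standard gravity

# Rough efficiency estimate for the representative Hall thruster figures
# above (1.35 kW in, 83 mN out). The isp is an assumption for illustration;
# it is not paired with these figures in the text.
thrust_n = 0.083        # 83 mN
input_power_w = 1350.0  # 1.35 kW
assumed_isp_s = 1600.0  # assumed, near the low end of the 1500-3000 s range

exhaust_velocity = assumed_isp_s * G0            # about 15.7 km/s
jet_power_w = 0.5 * thrust_n * exhaust_velocity  # kinetic power in the beam
efficiency = jet_power_w / input_power_w
print(f"jet power ~{jet_power_w:.0f} W, efficiency ~{efficiency:.0%}")
```
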
SPT type thruster, image S. Graham BeyondNERVA
TAL Type Thruster, image S. Graham BeyondNERVA
There are a number of types of Hall effect thruster, with the most common being the Thruster with Anode Layer (TAL), the Stationary Plasma Thruster (SPT), and the cylindrical Hall thruster (CHT). The cylindrical thruster is optimized for low power applications, such as for cubesats, and I haven’t seen a high power design, so we aren’t going to really go into those. There are two obvious differences between these designs:
What the walls of the acceleration chamber are made out of: the TAL uses metallic, conductive walls, while the SPT uses an insulator (usually boron nitride), which has the effect of the TAL having higher electron velocities in the plasma than the SPT.
The length of the acceleration zone, and its impact on ionization behavior: the TAL has a far shorter acceleration zone than the SPT (sort of – see Choueiri's analytical comparison of the two systems for dimensional vs non-dimensional characteristics: http://alfven.princeton.edu/publications/choueiri-jpc-2001-3504). Since the walls of the acceleration zone are a major lifetime limiter for any Hall effect thruster, there's an engineering trade-off available here for the designer (or customer) of an HET to consider.
There’s a fourth type of thruster as well, the External Discharge Plasma Thruster, which doesn’t have an acceleration zone that’s physically constrained, that we’ll also look at, but as far as I’ve been able to find there are very few designs, most of those operating at low voltage, so they, too, aren’t as attractive for nuclear electric propulsion.
Commercially available HETs generally have a total efficiency in the range of 50-60%; however, every thruster that I've seen increases in efficiency as the power increases, up to its design power limit, so higher-powered systems, such as ones that would be used on a nuclear electric spacecraft, would likely have higher efficiencies. Some designs, such as the dual stage TAL thruster that we'll look at, approach 80% efficiency or better.
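That total efficiency can be estimated from bench-measurable quantities through the standard relation η = T²/(2·ṁ·P). A sketch using the 1.35 kW / 83 mN figures quoted above, where the 5 mg/s total mass flow rate is an assumed typical value for a thruster of this class:

```python
def total_efficiency(thrust_n, mdot_kg_s, power_w):
    # eta = T^2 / (2 * mdot * P): jet kinetic power over electrical input power
    return thrust_n**2 / (2.0 * mdot_kg_s * power_w)

eta = total_efficiency(0.083, 5e-6, 1350.0)  # 83 mN, 5 mg/s (assumed), 1.35 kW
print(f"total efficiency ~{eta:.0%}")
```

This lands right in the quoted 50-60% band, which is a useful sanity check that the three numbers are mutually consistent.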
SPT Hall Effect Thrusters
Stationary Plasma Thrusters use an insulating material for the propellant channel immediately downstream of the anode. This means that the electrostatic potential in the drive can be spread over a greater distance than in other thruster designs, leading to greater separation of ionized vs. non-ionized propellant, and therefore potentially more complete ionization – and therefore thrust efficiency. While they have been proposed since the beginning of research into Hall effect thrusters in the Soviet Union, the lack of an effective, cost-efficient insulator able to survive long enough to allow a useful thruster lifetime was a major limiter in early designs, leading to an early focus on the TAL.
The SPT has the greatest depth between the gas diffuser (or propellant injector) and the nozzle of the thruster. This is nice, because it gives volume and distance to work with in terms of propellant ionization. The ionized propellant is accelerated toward the nozzle, and the not-yet-ionized portion can still be ionized, even as the plasma component knocks it along toward the nozzle, billiard-ball fashion. Because of this, SPT thrusters can have much higher propellant ionization percentages than the other types of Hall effect thruster, which directly translates into greater thrust efficiency. This extended ionization chamber is made out of an electromagnetic insulator, usually boron nitride, although Borosil, a mixture of BN and SiO2, is also used. Other materials, such as nanocrystalline diamond, graphene, and a new material called ultra-BN – BN built up via plasma-assisted chemical vapor deposition – have also been proposed and tested.
The downside to this type of thruster is that the insulator is eroded during operation. Because the erosion of the propellant channel is the main lifetime limiter of this type of thruster, the longer propellant channel is a detriment to thruster lifetime. Improved materials for the insulator cavity are a major research focus, but replacing boron nitride is going to be a challenge, because it's advantageous for a Hall effect thruster in a number of ways (and also in the other application we've looked at, reactor shielding): in addition to being a good electromagnetic insulator, it's incredibly strong and very thermally conductive. The only major downside is its expense, especially forming it into single, large, complex shapes; so, often, SPT thrusters have two boron nitride inserts: one at the base, near the anode, and another at the "waist," or start of the nozzle, of the SPT thruster. Inconsistencies in the composition and conductivity of the insulator can lead to plasma instabilities in the propellant due to local magnetic field gradients, which can cause losses in ionization efficiency. Additionally, as the magnetic field strength increases, plasma instabilities develop in proportion to the total field strength along the propellant channel.
Another problem that surfaces with these sorts of thrusters is that under high power, electrical arcing can occur, especially in the cathode or at a weak point in the insulator channel. This is especially true for a design that uses a segmented insulator lining for the propellant channel.
HERMeS: NASA’s High Power Single Channel SPT Thruster
The majority of NASA’s research into Hall thrusters is currently focused on the Advanced Electric Propulsion System, or AEPS. This is a solar electric propulsion system which encompasses the power generation and conditioning equipment, as well as a 14 kW SPT thruster known as HERMeS, or the Hall Effect Rocket with Magnetic Shielding. Originally meant to be the primary propulsion unit for the Asteroid Redirect Mission, the AEPS is currently planned for the Power and Propulsion Element (PPE) for the Gateway platform (formerly Lunar Gateway and LOP-G) around the Moon. Since the power and conditioning equipment would be different for a nuclear electric mission, though, our focus will be on the HERMeS thruster itself.
This thruster is designed to operate as part of a 40 kW system, meaning that three thrusters will be clustered together (complications in clustering Hall thrusters will be covered later as part of the Japanese RAIJIN TAL system). Each thruster has a central hollow cathode, and is optimized for xenon propellant.
Many materials technologies are being experimented with in the HERMeS thruster. For instance, there are two different hollow cathodes being experimented with: LaB6 (which was experimented with extensively for the NEXT gridded ion thruster) and barium oxide (BaO). Since the LaB6 was already extensively tested, the program has focused on the BaO cathode. Testing is still underway for the 2000 hour wear test; however, the testing conducted to date has confirmed the behavior of the BaO cathode. Another example is the propellant discharge channel: normally boron nitride is used for the discharge channel, however the latest iteration of the HERMeS thruster is using a boron nitride-silicon (BN-Si) composite discharge channel. This could potentially improve the erosion effects in the discharge channel, and increase the life of the thruster. As of today, the differences in plasma plume characterization are minimal to the point of being insignificant, and erosion tests are similarly inconclusive; however, theoretically, BN-Si composite could improve the lifetime of the thruster. It is also worth noting that, as with any new material, it takes time to fully develop the manufacture of the material to optimize it for a particular use.
As of the latest launch estimates, the PPE is scheduled to launch in 2022, and all development work of the AEPS is on schedule to meet the needs of the Gateway.
Nested Channel SPT Thrusters: Increasing Power and Thrust Density
One concept that has grown more popular recently (although it's far from new) is to increase the number of propellant channels in a single thruster in what's called a nested channel Hall thruster. Several designs have used two nested channels for the thruster. While there are a number of programs investigating nested Hall effect thrusters, including in Japan and China, we'll use the X2, studied at the University of Michigan, as an example. While this design has been supplanted by the X3 (more on that below), many of the questions about the operation of these types of thrusters were addressed by experimenting with the X2 thruster. Generally speaking, the amount of propellant flow in the different channels is proportional to the surface area of the emitter anode, and the power and flow rate of the cathode (which is centrally mounted) is adjusted to match whether one or multiple channels are firing. Since these designs often use a single central cathode, despite having multiple anodes, a lot of development work has gone into improving the hollow cathodes for increased life and power capability. None of the designs that I saw used external cathodes, like those sometimes seen with single-channel HETs, but I'm not sure if that's just a reflection of the design philosophies of the institutions whose papers I was able to access (primarily JPL and the University of Michigan).
There are a number of advantages to the nested-channel design. Not only is it possible to get more propellant flow from less mass and volume, but the thruster can be throttled as well. For higher thrust operation (such as rapid orbital changes), both channels are fired at once, and the mass flow through the cathode is increased to match. By turning off the central channel and leaving the outer channel firing, a medium “gear” is possible, with mass flow similar to a typical SPT thruster. The smallest channel can be used for the highest-isp operation for interplanetary cruise operations, where the lower mass flow allows for greater exhaust velocities.
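This "gearing" can be sketched with the efficiency relation η = T²/(2·ṁ·P): at fixed input power and efficiency, thrust scales with the square root of mass flow while specific impulse scales inversely with it. All the numbers below (5 kW, 55% efficiency, the three flow rates) are illustrative assumptions of mine, not X2 test data:

```python
from math import sqrt

G0 = 9.80665

def gear(power_w, mdot_kg_s, eta=0.55):
    """Thrust (N) and Isp (s) at fixed power, from eta = T^2 / (2 * mdot * P)."""
    thrust = sqrt(2.0 * eta * power_w * mdot_kg_s)
    isp = thrust / (mdot_kg_s * G0)
    return thrust, isp

for label, mdot in [("both channels", 10e-6), ("outer only", 5e-6), ("inner only", 2e-6)]:
    thrust, isp = gear(5000.0, mdot)
    print(f"{label}: {thrust * 1e3:.0f} mN at {isp:.0f} s")
```

The trend is the point, not the exact values: halving the flow at the same power trades thrust for exhaust velocity, which is exactly the high-thrust/high-Isp mode switching described above.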
A number of important considerations were studied during the X2 program, including the investigation of anode efficiency during the different modes of operation (slight decrease in efficiency during two-channel operation, highest efficiency during inner channel only operation), interactions between the plasma plumes (harmonic oscillations were detected at 125 and 150 V, more frequent in the outer channel, but not detected at 200 V operation, indicating some cross-channel interactions that would need to be characterized in any design), and power to thrust efficiency (slightly higher during two-channel operation compared to the sum of each channel operating independently, for reasons that weren't fully able to be characterized). The success of this program led to its direct successor, which is currently under development by the University of Michigan, Aerojet Rocketdyne, and NASA: the X3 SPT thruster.
The X3 is a 100 kWe design that uses three nested discharge chambers. The cathode for this thruster is a centrally mounted hollow cathode, which accounts for 7% of the total gas flow of the thruster under all modes of operation. Testing during 2016 and 2017 ranging from 5 to 102 kW, 300 to 500 V, and 16 to 247 A, demonstrated a specific impulse range of 1800 to 2650 s, with a maximum thrust of 5.4 N. As part of the NextSTEP program, the X3 thruster is part of the XR-100 electric propulsion system that is currently being developed as a flexible, high-powered propulsion system for a number of missions, both crewed and uncrewed.
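A quick cross-check on those X3 figures. Pairing the maximum thrust with the maximum power is an assumption on my part – the campaign covered many operating points – but it bounds the thrust-to-power ratio and jet efficiency:

```python
G0 = 9.80665
tp = (5.4 * 1e3) / 102.0                      # thrust-to-power, mN/kW
eff_lo = 5.4 * (1800.0 * G0) / 2.0 / 102e3    # jet efficiency if 5.4 N came at 1800 s
eff_hi = 5.4 * (2650.0 * G0) / 2.0 / 102e3    # ...and if it came at 2650 s
print(f"~{tp:.0f} mN/kW; jet efficiency {eff_lo:.0%}-{eff_hi:.0%}")
```

Either way the numbers are consistent with a large, reasonably efficient Hall thruster rather than with any single extraordinary claim.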
While this thruster is showing a great deal of promise, there remain a number of challenges to overcome. One of the biggest is cathode efficiency, which was shown to be only 23% during operation of just the outermost channel. This is a heavy-duty cathode, rated to 120 A. Due to the concerns of erosion, especially under high-power, high-flow conditions, there are three different gas injection points: through the central bore of the cathode (limited to 20 sccm), external flow injectors around the cathode keeper, and supplementary internal injectors.
The cross-channel thrust increases seen in the X2 thruster weren’t observed, meaning that this effect could have been something particular to that design. In addition, due to the interactions between the different magnetic lenses used in each of the discharge channels, the strength and configuration of each magnetic field has to be adjusted depending on the other channels that are operating, a challenge that increases with magnetic field strength.
Finally, the BN insulator was shown to expand in earlier tests to the point that a gap was formed, allowing arcing to occur from the discharge plasma to the body of the thruster. Not only does this mean that the plasma is losing energy – and therefore decreasing thrust – but it also heats the body of the thruster as well.
These challenges are all being addressed, and in the next year the 100-hour, full power test of the system will be conducted at NASA’s Glenn Research Center.
TAL Hall Effect Thrusters
The TAL concept has been around since the beginning of the development of the Hall thruster. In the USSR, the development of the TAL was tasked to the Central Research Institute for Machine Building (TsNIIMash). Early challenges with the design led to it not being explored as thoroughly in the US. In Europe and Asia, however, this sort of thruster has been a major focus of research for a number of decades, and recently the US has increased its study of this design as well. Since these designs (as a general rule) have higher power requirements for operation, they have not been used nearly as much as the SPT-type Hall thruster, but for high powered systems they offer a lot of promise.
As we mentioned before, the TAL uses a conductor for the walls of the plasma chamber, meaning that the radial electric charge moving across the plasma is continuous across the acceleration chamber of the thruster. Because of the high magnetic fields in this type of thruster (0.1-0.2 T), the electron cyclotron radius is very small, allowing for more efficient ionization of the propellant, and therefore limiting the size necessary for the acceleration zone. However, because a fraction of the ion stream is directed toward these conduction walls, leading to degradation, the lifetime of these types of thrusters is often shorter than their SPT counterparts. This is one area of investigation for designers of TAL thrusters, especially higher-powered variants.
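A quick way to see why a 0.1-0.2 T field traps the electrons while letting the ions stream out is to compare cyclotron (Larmor) radii. The 20 eV electron energy and 300 V ion beam voltage below are assumed, representative discharge values, not figures from the text:

```python
from math import sqrt

E_CHARGE = 1.602176634e-19         # elementary charge, C
M_ELECTRON = 9.1093837015e-31      # electron mass, kg
M_XENON = 131.29 * 1.66053907e-27  # Xe atomic mass in kg

def larmor_radius(mass_kg, energy_ev, b_tesla):
    """Cyclotron radius r = m*v/(q*B) for a singly charged particle."""
    v = sqrt(2.0 * energy_ev * E_CHARGE / mass_kg)
    return mass_kg * v / (E_CHARGE * b_tesla)

r_e = larmor_radius(M_ELECTRON, 20.0, 0.15)  # ~20 eV electron in a 0.15 T field
r_i = larmor_radius(M_XENON, 300.0, 0.15)    # Xe+ accelerated through ~300 V
print(f"electron: {r_e * 1e3:.2f} mm, Xe ion: {r_i * 100:.0f} cm")
```

The electron orbit is a fraction of a millimeter – far smaller than the channel – while the ion orbit is tens of centimeters, so the electrons are magnetized and the ions effectively aren't.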
As a general rule, TAL thrusters have lower thrust, but higher isp, than SPT thrusters. Since the majority of Hall thrusters are used for station-keeping, where thrust levels are a significant design consideration, this has also militated in favor of wider deployment of the SPT thruster.
High-Powered TAL Development in Japan: Clustered TAL with a Common Cathode
One country that has been doing a lot of development work on the TAL thruster is Japan. Most of their designs seem to be in the 5 kW range, and are being designed to operate clustered around a single cathode for charge neutralization. The RAIJIN program (Robust Anode-layer Intelligent Thruster for Japanese IN-space propulsion system) has been focusing on addressing many of the issues with high-powered TAL operation, mainly for raising satellites from low earth orbit to geosynchronous orbit (a maneuver that has a large impact on the amount of propellant needed for many satellite launches today, and one directly applicable to de-orbiting satellites as well). The RAIJIN94 thruster is a 5 kW TAL thruster under development by Kyushu University, the University of Tokyo, and the University of Miyazaki. Overall targets for the program are for a thruster that operates at 6 kW, providing 360 mN of thrust and 1700 s isp, with an anode mass flow rate of 20 mg/s and a cathode flow rate of 1 mg/s. The ultimate goal of the program is a 25 kW TAL system containing five of these thrusters with a common cathode. Based on mass penalty analysis, this is a more mass-efficient approach for a TAL than a single large thruster with its increased thermal management requirements. Management of anode and conductor erosion is a major focus of this program, but not one that has been written about extensively. The limiting of the thruster power to about 5 kW, though, seems to indicate that scaling a traditional TAL beyond this size, at least with current materials, is impractical.
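Those targets can be cross-checked against each other using only the figures quoted above; a minimal sketch:

```python
G0 = 9.80665
mdot_total = 21e-6                             # kg/s: 20 mg/s anode + 1 mg/s cathode
thrust_from_isp = mdot_total * 1700.0 * G0     # N implied by the Isp and flow targets
anode_eff = 0.360**2 / (2.0 * 20e-6 * 6000.0)  # eta = T^2 / (2 * mdot_anode * P)
print(f"implied thrust {thrust_from_isp * 1e3:.0f} mN, anode efficiency ~{anode_eff:.0%}")
```

The implied thrust comes out within a few percent of the stated 360 mN, and the implied anode efficiency sits in the mid-50% range typical of TALs of this size, so the target set is internally consistent.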
There are challenges with this design paradigm, however, which also will impact other clustered Hall designs. Cathode performance, as we saw in the SPT section is a concern, especially if operating at very high power and mass flow rates, which a large cluster would need. Perhaps a larger consideration was plasma oscillations that occurred in the 20 kHz range when two thrusters were fired side by side, as was done (and continues to be done) at Gifu University. It was found that by varying the mass flow rate, operating at a slightly lower power, and maintaining a wider spacing of the thruster heads, the plasma flow instabilities could be accounted for. Experiments continue there to study this phenomenon, and the researchers, headed by Dr. Miyasaka, are confident that this issue can be managed.
Dual Stage TAL and VHITAL
One of the most interesting concepts investigated at TsNIIMash was the dual-stage TAL, which used two anodes. The first anode is very similar to the one used in a typical TAL or SPT: it serves as the injector for the majority of the propellant and provides the charge to ionize the propellant. As the plasma exits this first anode, it encounters a second anode at the opening of the propellant channel, which accelerates the propellant. An external cathode is used to neutralize the beam. This design demonstrated specific impulses of up to 8,000 s, among the highest (if not the highest) of any Hall thruster to date. The final iteration during this phase of research was the water-cooled TAL-160, which operated at powers from 10 to 140 kW.
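The jump from the few-kW thrusters discussed earlier to the TAL-160's 10-140 kW follows directly from the physics: even at 100% efficiency, the jet power per newton of thrust is P/T = g0·Isp/2, so high specific impulse is intrinsically power-hungry. A quick illustration:

```python
G0 = 9.80665
for isp_s in (1600.0, 3000.0, 8000.0):
    kw_per_newton = G0 * isp_s / 2.0 / 1e3  # ideal (lossless) jet power per N of thrust
    print(f"{isp_s:.0f} s: {kw_per_newton:.1f} kW per newton")
```

At 8,000 s every newton of thrust costs roughly 40 kW before any losses, which is why this class of thruster had to wait for megawatt-class space power concepts.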
Another point of interest with this design is the use of bismuth as the propellant. As we'll see below, the range of possible propellants for an electrostatic thruster is very broad, and the choice is driven by a number of considerations. Bismuth is reasonably inexpensive, relatively common, and storable as a solid. This last point can also be a headache for an electrostatic thruster, since ionized powders are notorious for sticking to surfaces and gumming up the works, as it were. In this case, since bismuth has a reasonably low melting temperature, a pre-heater was used to resistively heat the bismuth, and an electromagnetic pump was then used to propel it toward the anode. Just before injection into the thruster, a vaporization plug of carbon was used to ensure proper mass flow into the thruster. As long as the operating temperature of the thruster was high enough, and the mass flow was carefully regulated, this novel fueling concept was not a problem.
This design was later picked up in 2004 by NASA, who worked with TsNIIMash researchers to develop the VHITAL, or Very High Isp Thruster with Anode Layer, over two and a half years. While this thruster uses significantly less power (36 kW as opposed to up to 140 kW), many of the design details are the same, but with a few major differences: the NASA design is radiatively cooled rather than water cooled, it added a resistive heater to the base of the first anode as well, and tweaks were made to the propellant feed system. The original TAL-160 was used for characterization tests, and the new VHITAL-160 thruster and propellant feed system were built to characterize the system using modern design and materials. Testing was carried out at TsNIIMash in 2006, and demonstrated stable operation without using a neutralizing cathode, and expected metrics were met.
If anyone has additional information about this program, please comment below or contact me via email!
Hybrid and Non-Traditional Hall Effect Thrusters: Because Tweaking Happens
As we saw with the VHITAL, the traditional SPT and TAL thrusters – while the most common – are far from the only ways to use these technologies. One interesting concept, studied by EDB Fakel in Russia, is a hybrid SPT-TAL thruster. SPT thrusters, due to their extended ionization chamber lined with an insulator, generally provide fuller ionization of propellant. TAL thrusters, on the other hand, are better able to efficiently accelerate the propellant once it's ionized. So the designers at EDB Fakel, led by M. Potapenko, developed, built, and tested the PlaS-40 Hybrid PT, rated at up to 0.4 kW, and proposed and breadboard-tested a larger (up to 4.5 kW) PlaS-120 thruster as well, during the 1990s (initial conception was in the early 90s, but the test model was built in 1999). While fairly similar in outward appearance to an SPT, the acceleration chamber was shorter. The PlaS-40 achieved 1000-1750 s isp and a thrust of 23.5 mN, while the PlaS-120 showed the capability of reaching 4000 s isp and up to 400 mN of thrust (these tests were not continued, due to a lack of funding). This design concept could offer advances in specific impulse and thrust efficiency beyond traditional thruster designs, but currently there isn't enough research to show a clear advantage.
Another interesting hybrid design was a gridded Hall thruster, researched by V. Kim at Fakel in 1973-1975. Here, again, an SPT-type ionization chamber was used, and the grids were used to more finely control the magnetic lensing effect of the thruster. This was an early design, used because the knowledge and technology needed to do away with the grids didn't yet exist. However, it's possible that a hybrid Hall-gridded ion thruster may offer higher specific impulse while taking advantage of the more efficient ionization of an SPT thruster. As we saw with both the DS4G and VHITAL, increasing the separation of the ionization, ion extraction, and acceleration portions of the thruster allows for greater thrust efficiency, and this may be another mechanism to do that.
One design, out of the University of Michigan, modifies the anode itself, by segmenting it into many different parts. This was done to manage plasma instabilities within the propellant plume, which cause parasitic power losses. While it’s unclear exactly how much efficiency can be gained by this, it solves a problem that had been observed since the 1960s close to the anode of the thruster. Small tweaks like this may end up changing the geometry of the thruster significantly over time as optimization occurs.
Other modifications have been made as well, including combining discharge chambers, using conductive materials for discharge chambers but retaining a dielectric ceramic in the acceleration zone of the thruster, and many other designs. Many of these were early ideas that were demonstrated but not carried through for one reason or another. For instance, the metal discharge chambers were considered an economic benefit, because the ceramic liners are the major cost-limiting factor in SPT thrusters. With improved manufacturing and availability, costs went down, and the justification went away.
There remains an incredible amount of flexibility in the Hall effect thruster design space. While two stage, nested, and clustered designs are the current most advanced high power designs, it’s difficult to guess if someone will come up with a new idea, or revisit an old one, to rewrite the field once again.
Propellants: Are the Current Propellant Choices Still Effective For High Powered Missions?
One of the interesting things to consider about these types of thrusters, both the gridded ion and Hall effect thrusters, is propellant choice. Xenon is, as of today, the primary propellant used by all operational electrostatic thrusters (although some early thrusters used cesium and mercury); however, Xe is rare and reasonably expensive. In smaller Hall thruster designs, such as those for telecommunications satellites in the 5-10 kWe range, the propellant load (as of 1999) for many spacecraft is less than 100 kg – a significant but not exorbitant amount of propellant, and launch costs (and design considerations) make this a cost-effective decision. For larger spacecraft, such as a Hall-powered spacecraft to Mars, the propellant mass could easily be in the 20-30 ton range (assuming 2500 s isp and a 100 mg/s flow rate of Xe), which is a very different matter in terms of Xe availability and cost. Alternatives, then, become far more attractive if possible.
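To put that 20-30 ton figure in context, here is a minimal rocket-equation sketch. Only the 2500 s Isp and 100 mg/s flow rate come from the text; the dry mass and delta-v are round-number assumptions of mine:

```python
from math import exp

G0 = 9.80665
isp_s, mdot = 2500.0, 100e-6   # from the text
ve = isp_s * G0                # effective exhaust velocity, ~24.5 km/s
thrust = mdot * ve             # ~2.45 N of continuous thrust
m_dry, delta_v = 20e3, 20e3    # kg and m/s -- assumed, not from the text
m_prop = m_dry * (exp(delta_v / ve) - 1.0)  # Tsiolkovsky rocket equation
years_thrusting = (m_prop / mdot) / 3.156e7
print(f"{thrust:.2f} N, {m_prop / 1e3:.0f} t of Xe, "
      f"~{years_thrusting:.0f} years of cumulative thrusting")
```

Under these assumptions the propellant load lands squarely in the quoted 20-30 ton range, and the multi-year cumulative burn time shows why thruster lifetime dominates the engineering discussion above.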
Argon is also an attractive option, and is often proposed as a propellant, being less rare. However, it has considerably lower atomic mass, leading to higher specific impulse but lower thrust. Depending on the mission, this could be a problem if large changes in delta-vee are needed in a shorter period of time. Argon's higher ionization energy also means that either the propellant won't be as completely ionized, leading to a loss of efficiency, or more energy is required to ionize the propellant.
The next most popular choice for propellant is krypton (Kr), the next lightest noble gas. The chemical advantages of Kr are basically identical, but a couple of things make this trade-off far from straightforward: first, tests with Kr in Hall effect thrusters often demonstrate an efficiency loss of 15-25% (although this may be mitigated slightly by optimizing the thruster design for Kr rather than Xe), and second, the higher ionization energy of Kr compared to Xe means that more power is required to ionize the same amount of propellant (or, with an SPT, a deeper ionization channel, with the associated increased erosion concerns). Sadly, several studies have shown that the higher specific impulse gained from the lower atomic mass of Kr isn't sufficient to make up for the other challenges, including losses from Joule heating (which we briefly discussed during our look at MPD thrusters in the last post), radiation, increased ionization energy requirements, and even geometric beam divergence.
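The size of both effects falls straight out of the atomic data. For the same accelerating voltage, exhaust velocity scales as the square root of the mass ratio, while the energy spent creating each ion scales with the first ionization energy (standard tabulated values below):

```python
from math import sqrt

M_XE, M_KR = 131.29, 83.80   # standard atomic masses
EI_XE, EI_KR = 12.13, 14.00  # first ionization energies, eV

isp_gain = sqrt(M_XE / M_KR) # Isp ratio at equal beam voltage
ion_cost = EI_KR / EI_XE     # energy to create each ion, relative to Xe
print(f"Kr vs Xe: ~{isp_gain:.2f}x Isp, ~{ion_cost:.2f}x ionization energy per ion")
```

So krypton buys roughly a quarter more specific impulse at the cost of about 15% more energy per ion created – and, as noted above, the real losses (Joule heating, divergence) eat into that Isp gain further.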
This has led some designers to propose a mixture of Xe and Kr propellants, to gain the advantages of lower ionization energy for part of the propellant, as a compromise solution. The downside is that this doesn’t necessarily improve many of the problems of Kr as a propellant, including Joule heating, thermal diffusion into the thruster itself, and other design headaches for an electrostatic thruster. Additionally, some papers report that there is no resonant ionization phenomenon that facilitates the increase of partial krypton utilization efficiency, so the primary advantage remains solely cost and availability of Kr over Xe.
Propellant        Atomic Mass   Ionization Energy   Melting     Boiling     Density
                  (Ar, std.)    (1st, kJ/mol)       Point (K)   Point (K)   (g/cm³)
Hg                200.59        1007.1              234.3       629.9       13.534 (at STP)
Cs                132.91        375.7               301.6       944         1.843 (at MP)
Na                22.99         495.8               370.9       1156        0.927 (at MP), 0.968 (solid)
K                 39.10         418.8               336.7       1032        0.828 (at MP), 0.862 (solid)
NaK (eutectic)    –             –                   ~260        ~1058       0.866 (20 °C)
I                 126.90        1008.4              386.9       457.4       4.933 (solid)
Early thrusters used cesium and mercury for propellant, and for higher-powered systems this may end up being an option again. As we've seen earlier in this post, neither Cs nor Hg is unknown in electrostatic propulsion (another design that we'll look at a little later is the cesium contact ion thruster); however, they've fallen out of favor. The primary reason always given is environmental and occupational health concerns – during development of the thrusters, handling of the propellant during construction and launch, and in the immediate environment of the spacecraft. The thrusters have to be built and extensively tested before they're used on a mission, and all these experiments are a perfect way to strongly contaminate delicate (and expensive) equipment such as thrust stands, vacuum chambers, and sensing apparatus – not to mention the lab and surrounding environment in the case of an accident. Additionally, any accident that leads to the exposure of workers to Hg or Cs will be expensive and difficult to address, notwithstanding any long term health effects of chemical exposure for the personnel involved (handling procedures have been well established, but one worker not wearing the correct personal protective equipment could be costly in terms of both personal and programmatic health). There's also a concern on the spacecraft side: perfect propellant stream neutralization doesn't actually occur in electrostatic drives (although this has consistently improved over time), leading to a buildup of negative charge on the spacecraft; subsequently, a portion of the positive ions used for propellant circle back around the magnetic fields and impact the spacecraft.
Not only does this reduce the thrust of the spacecraft, but if the propellant is chemically active (as both Cs and Hg are), it can lead to chemical reactions with spacecraft structural components, sensors, and other systems, accelerating degradation of the spacecraft.
A while back on the Facebook group I asked the members about the use of these propellants, and an interesting discussion developed (primarily between Mikkel Haaheim, my head editor and frequent contributor to this blog, and Ed Pheil, who has extensive experience in nuclear power, including the JIMO mission, and is currently the head of Elysium Industries, developing a molten chloride fast reactor) concerning the pros and cons of using these propellants. Two other options, with their own complications from the engineering side, were also proposed, which we’ll touch on briefly: sodium and potassium both have low ionization energies, and form a low melting temperature eutectic, so they may offer additional options for future electrostatic propellants as well. Three major factors came up in the discussion: environmental and occupational health concerns during testing, propellant cost (which is a large part of what brings us to this discussion in the first place), and tankage considerations.
As far as cost goes, all of these figures are ballpark estimates, and costs for space-qualified supplies are generally higher, but they illustrate the general costs associated with each propellant. From an economic point of view, Cs is the least attractive, while Hg, Kr, and Na are all attractive options for bulk propellants.
Tankage in and of itself is a simpler question than the full propellant feed system, but it can offer some insights into the overall challenges in storing and using the various propellants. Xe, our baseline propellant, has a density as a liquid of 2.942 g/cm³, Kr of 2.413 g/cm³, and Hg of 13.53 g/cm³. All other things aside, this indicates that the overall tankage mass requirements for the same mass of Hg are less than 1/10th those of Xe or Kr. However, additional complications arise when considering tank material differences. For instance, both Xe and Kr require cryogenic cooling (something we discussed briefly in the LEU NTP series, which you can read here [insert LEU NTP 3 link]). While the challenges of Xe and Kr cryogenics are less difficult than those of H2 due to the higher atomic mass and lower chemical reactivity, many of the same considerations still apply. Hg, on the other hand, has to be kept in a stainless steel tank (by law); other common containers, such as glass, don't lend themselves to spacecraft tank construction. However, a stainless steel liner in a carbon composite tank is a lower-mass option.
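Those densities translate directly into tank volume. A quick sketch, where the 25 t load is an assumed round number for a large nuclear electric mission rather than a figure from the text:

```python
def tank_volume_m3(prop_mass_kg, density_g_cm3):
    # 1 g/cm^3 = 1000 kg/m^3
    return prop_mass_kg / (density_g_cm3 * 1000.0)

for name, rho in [("Xe", 2.942), ("Kr", 2.413), ("Hg", 13.53)]:
    print(f"{name}: {tank_volume_m3(25e3, rho):.1f} m^3 for 25 t")
```

Mercury's 25 t fits in under 2 m³ against Xe's roughly 8.5 m³ and Kr's more than 10 m³ – and the Xe and Kr tanks additionally carry the cryogenic insulation penalty, which this volume-only sketch ignores.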
The last type of fluid propellant to mention is NaK, a common fast reactor coolant which has been extensively studied. Many of the problems with tankage of NaK are similar to those seen with Cs or Hg – chemical reactivity, although with different particulars – but the extensive research into NaK as a fast reactor coolant has largely addressed the immediate corrosion issues.
The main problem with NaK would be differential ionization causing plating of the higher-ionization-energy metal (Na, in this case) onto the anode or propellant channels of the thruster. It may be possible to deal with this, either by shortening the propellant channel (like in a TAL or EDPT) or by ensuring full ionization through excess charge in the anode and cathode. The possibility of using NaK in an SPT thruster was studied in the Soviet Union, but unfortunately I cannot find the papers associated with these studies. However, NaK remains an interesting option for future thrusters.
Solid propellants are generally used in what are called condensable propellant thrusters, designs that have been studied for a number of decades. Most use a resistive heater to melt the propellant, which is then vaporized just before entering the anode. This was first demonstrated with the cesium contact gridded ion thrusters used as part of the SERT program. There, as mentioned earlier, a metal foam kept warm enough to hold the cesium as a liquid was used as the storage medium. By varying the pore size, a metal wick was made which controlled the flow of propellant from the reservoir to the ionization head. This results in greater overall mass for the propellant tankage, but on the other hand the lack of moving parts, and the ability to ensure even heating across the propellant volume, makes this an attractive option in some cases.
A more recent design that we also discussed (the VHITAL) uses bismuth propellant for a TAL thruster, a NASA update of a Soviet TsNIIMash design from the 1970s (which was shelved due to the lack of high-powered space power systems at the time). This design uses a reservoir of liquid bismuth, which is resistively heated to above the melting temperature. An argon pressurization system is used to force the liquid bismuth through an outlet, where it’s then electromagnetically pumped into a carbon vaporization plug. This then discharges into the anode (which in the latest iteration is also resistively heated), where the Hall current then ionizes the propellant. It may be possible with this design to use multiple reservoirs to reduce the power demand for the propellant feed system; however, this would also lead to greater tankage mass requirements, so it will largely depend on the particulars of the system whether the increase in mass is worth the power savings of using a more modular system. This propellant system was successfully tested in 2007, and could be adapted to other designs as well.
Other propellants have been proposed as well, including magnesium, iodine, and cadmium. Each has its advantages and disadvantages in tankage, chemical reactivity (which limits thruster material choices), and other factors, but all remain possible for future thruster designs.
For the foreseeable future, most designs will continue to use xenon, with argon being the next most popular choice, but as the amount of propellant needed increases with the development of nuclear electric propulsion, it’s possible that these other propellant options will become more prominent as tankage mass, propellant cost, and other considerations become more significant.
Electrospray thrusters use electrically charged liquids as a propellant. They fall into three main categories: colloid thrusters, which accelerate charged droplets dissolved in a solvent such as glycerol or formamide; field emission electric propulsion (FEEP) thrusters, which use liquid metals to produce positively charged metal ions; and, finally, ionic liquid ion source (ILIS) thrusters, which use room temperature molten salts to produce a beam of salt ions.
All types of electrospray thruster exploit a phenomenon known as the Taylor cone, which forms in an electrically charged fluid exposed to an electric field. If the field is strong enough, the tip of the cone is drawn out to the point that it breaks, and a spray of droplets is emitted. This effect is now used in many industrial applications, and advances in those fields have made the electrospray thruster more attractive, as has the growing focus on minimizing propulsion system volume. The amount of thrust produced, and the thrust density, is directly proportional to the density of emitters in a given area, and recent developments in nanomaterials fabrication have made it possible to increase the thrust density of these designs significantly. However, the main lifetime limitation of this type of thruster is emitter wear, which depends on both mass flow rates and any chemical interactions between the emitters and the propellant.
The vast majority of these systems focus on cubesat propulsion, but one company, Accion Systems, has developed a tileable system which could offer high-powered operation through the use of dozens of thrusters arrayed in a grid. Their largest thruster (which measures 35 mm by 35 mm by 16 mm, including propellant) produces a total impulse of 200,000 N·s, a thrust of 10 mN, at an isp of 1500 s. While their primary focus is on cubesats, the CEO, Natalya Bailey, has mentioned that it would be possible to use many of their TILE drive systems in parallel for high-powered missions.
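Thrust and isp together fix the propellant consumption rate, which is what makes these tiny thrusters viable with so little stored propellant. A quick sketch using the 10 mN and 1500 s figures quoted above (the relation F = mdot · isp · g0 is standard rocketry; the daily figure assumes continuous firing, which is an illustration, not a claimed duty cycle):

```python
# Propellant mass flow implied by a given thrust and specific impulse:
# F = mdot * isp * g0, so mdot = F / (isp * g0).

G0 = 9.80665  # standard gravity, m/s^2

def mass_flow_kg_s(thrust_n: float, isp_s: float) -> float:
    """Propellant mass flow rate implied by thrust and specific impulse."""
    return thrust_n / (isp_s * G0)

mdot = mass_flow_kg_s(0.010, 1500)       # 10 mN at 1500 s isp
print(f"mass flow: {mdot:.2e} kg/s")      # ~7e-7 kg/s
print(f"per day of continuous firing: {mdot * 86400 * 1000:.0f} g")
```

At under a tenth of a gram per hour, even a thruster with its propellant packed into a few cubic centimeters can fire for a long time.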
One of the biggest power demands of an electrostatic engine of almost any type is the ionization cost of the propellant. Depending on the mass flow and power, different systems are used to ionize the propellant, including electron beams, RF ionization, cyclotron resonance, and the Hall effect. What if we could get rid of that power cost, and instead spend all of the energy accelerating the propellant? Especially in small spacecraft this is very attractive, and it may be possible to scale the approach up significantly as well (to the limits of the electrical charge that can be placed on the screens themselves). Some fluids are ionic liquids, meaning their constituent ions already carry electric charge while the fluid remains reasonably chemically stable and easily storable. By replacing an uncharged propellant with one that carries an electric charge, the need for on-board ionization equipment goes away, conserving mass, volume, and power. Not all electrospray thrusters use an ionic liquid, but the ones that do offer considerable advantages in energy efficiency, and possibly greater overall thruster efficiency as well. I have yet to see a design for a gridded ion or Hall effect thruster that utilizes these types of propellants, but it may be possible to do so.
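For a sense of scale on the ionization cost: a sketch comparing the thermodynamic minimum energy to singly ionize xenon against the kinetic energy delivered to the beam. The 12.13 eV first ionization energy and 131.29 u atomic mass of xenon are standard physical constants; the 3000 s isp is an assumed example, and real ionization stages consume several times the theoretical minimum per ion, so the practical overhead is larger than this figure.

```python
# Ideal ionization energy per kg of xenon vs. kinetic energy per kg
# of beam, at an assumed 3000 s specific impulse.

EV = 1.602176634e-19   # J per electron-volt
AMU = 1.66053907e-27   # kg per atomic mass unit
G0 = 9.80665           # m/s^2

ion_energy_per_kg = 12.13 * EV / (131.29 * AMU)   # J/kg to singly ionize Xe
v_exhaust = 3000 * G0                              # exhaust velocity, m/s
kinetic_per_kg = 0.5 * v_exhaust**2                # J/kg carried by the beam

print(f"ionization (ideal): {ion_energy_per_kg / 1e6:.1f} MJ/kg")
print(f"beam kinetic energy: {kinetic_per_kg / 1e6:.1f} MJ/kg")
print(f"minimum overhead:    {ion_energy_per_kg / kinetic_per_kg:.1%}")
```

Even the theoretical minimum is a few percent of the beam energy at high isp, and it grows at lower isp; eliminating it entirely with a pre-ionized propellant is a real saving for power-starved small spacecraft.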
With that, we come to the end of our overview of electric thrusters. While there are some types of thruster that we did not discuss, they are unlikely to be usable in high-powered systems such as would be found on an NEP spacecraft. When I began this series of blog posts, I knew that electric propulsion is a very broad topic, but the learning process while writing these three posts has been far more intense, and broad, than I expected. Electric propulsion has never been my strong suit, so I've been even more careful than usual to stick to the available resources while writing these posts, and I've had a lot of help from some very talented people to get to this point.
I was initially planning on writing a post about the power conditioning units that are used to prepare the power provided by the power supply to these thrusters, but the more I researched, the less these systems made sense to me – something that I’ve been assured isn’t uncommon – so I’m going to skip that for now.
Instead, the next post is going to look at the power conversion systems that nuclear electric spacecraft can use. Due to the unique combination of available temperature from a nuclear reactor, the high power levels available, and the unique properties of in-space propulsion, there are many options available that aren’t generally considered for terrestrial power plants, and many designs that are used by terrestrial plants aren’t available due to mass or volume requirements. I’ve already started writing the post, but if there’s anything writing on NEP has taught me, it’s that these posts take longer than I expect, so I’m not going to give a timeline on when that will be available – hopefully in the next 2-3 weeks, though.
After that, we’ll look more in depth at thermal management and heat rejection systems for a wide range of temperatures, how they work, and the fundamental limitations that each type has. After another look at the core of an NEP spacecraft’s reactor, we will then look at combining electric and thermal propulsion in a post on bimodal NTRs, before moving on to our next blog post series (probably on pulse propulsion, but we may return to NTRs briefly to look at liquid core NTRs and the LARS proposal).
I hope you enjoyed the post. Leave a comment below with any comments, questions, or corrections, and don’t forget to check out our Facebook group, where I post work-in-progress visuals, papers I come across during research, and updates on the blog (and if you do, don’t feel shy about posting yourself on astronuclear propulsion designs and news!).
Hello, and welcome back to Beyond NERVA. Today, we are looking at a very popular topic, but one that doesn't necessarily require nuclear power: electric propulsion. However, it IS an area that nuclear power plants are often tied to, because the amount of thrust available is highly dependent on the amount of power available for the drive system. We will touch a little bit on the history of electric propulsion, as well as the different types of electric thrusters, their advantages and disadvantages, and how fission power plants can change the paradigm for how electric thrusters can be used. It's important to realize that most electric propulsion is power-source-agnostic: all these systems require is electricity; how it's produced usually doesn't matter to the drive system itself. As such, nuclear power plants won't be mentioned much in this post, until we look at the optimization of electric propulsion systems.
We also aren’t going to be looking at specific types of thrusters in this post. Instead, we’re going to do a brief overview of the general types of electric propulsion, their history, and how electrically propelled spacecraft differ from thermally or chemically propelled spacecraft. The next few posts will focus more on the specific technology itself, its’ application, and some of the current options for each type of thruster.
Electric Propulsion: What is It?
In its simplest definition, electric propulsion is any means of producing thrust in a spacecraft using electrical energy. A wide range of different technologies get rolled into this definition, so it's hard to make generalizations about the capabilities of these systems. As a general rule of thumb, though, most electric propulsion systems are low-thrust, long-burn-time systems. Since they're not used for launch, and instead for on-orbit maneuvering or interplanetary missions, the fact that these systems generally have very little thrust is a characteristic that can be worked with, although there's a great deal of variety as far as how much thrust, and how efficient in terms of specific impulse, these systems are.
There are three very important basic concepts to understand when discussing electric propulsion: thrust-to-weight ratio (T/W), specific impulse (isp), and burn time. The first is self-explanatory: how hard the engine can push compared to how much it weighs, usually in relation to Earth's gravity. A T/W ratio of 1/1 means the engine can just hover its own weight, but no more; a T/W ratio of 3/1 means it produces thrust equal to three times its own weight. Specific impulse is a measure of how much thrust you get out of a given unit of propellant, ignoring everything else, including the weight of the propulsion system; it's directly related to fuel efficiency, and is measured in seconds: if the drive system had a T/W ratio of 1/1 and were made entirely of propellant, the isp would be the number of seconds it could hover at 1 gee. Finally, you have burn time: the T/W ratio and isp give you the thrust imparted per unit time based on the mass of the drive system and propellant, and the spacecraft's mass is then factored in to give the total acceleration of the spacecraft over a given period. The longer the engine burns, the more total velocity change is produced.
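The relationship tying these quantities together is the standard one from rocketry: thrust equals mass flow times exhaust velocity, and exhaust velocity is just isp times standard gravity. A minimal sketch, with all numbers being assumed examples rather than figures for any specific engine:

```python
# Standard relations between specific impulse, exhaust velocity, and
# thrust. isp (seconds) times g0 gives exhaust velocity (m/s);
# thrust is mass flow times exhaust velocity.

G0 = 9.80665  # m/s^2, standard gravity (converts isp in seconds to velocity)

def exhaust_velocity(isp_s: float) -> float:
    return isp_s * G0

def thrust(mdot_kg_s: float, isp_s: float) -> float:
    """Thrust in newtons = mass flow (kg/s) * exhaust velocity (m/s)."""
    return mdot_kg_s * exhaust_velocity(isp_s)

# A chemical engine: huge mass flow, modest isp.
print(f"{thrust(300.0, 350):.2e} N")   # ~1e6 N class
# An ion engine: tiny mass flow, high isp.
print(f"{thrust(3e-6, 3000):.3f} N")   # tens of millinewtons
```

The same formula explains the whole electric propulsion trade: a thruster can buy enormous isp by shrinking mass flow, at the cost of thrust.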
Electric propulsion has a very poor thrust-to-weight ratio as a general rule, but incredible specific impulse and burn times. The T/W ratio of many of these thrusters is very low because they provide very little thrust, often measured in micronewtons; the thrust is often illustrated in terms of sheets of paper, or pennies, in Earth gravity. However, this doesn't matter once you're in space: with no drag, and orbital mechanics not demanding huge amounts of thrust over a very short period, the total impulse delivered matters more for most maneuvers than how long it takes to deliver it. This is where burn time comes in: most electric thrusters burn continuously, providing minute amounts of thrust over months, sometimes years, accelerating the spacecraft until roughly halfway through the mission (in energy budget terms, not necessarily in total mission time), then turning around and decelerating it for the remainder. The trump card for electric propulsion is specific impulse: rather than the few hundred seconds of isp for chemical propulsion, or the thousand or so for a solid core nuclear thermal rocket, electric propulsion gives thousands of seconds. This means less propellant, which makes the spacecraft lighter and allows truly astounding total velocities. The downside is that it takes months or years to build those velocities, so escaping a gravity well (say, starting from low Earth orbit) can take months. It's therefore best suited for long trips, or for very minor changes in orbit, such as for communications satellites, where it has made spacecraft smaller, more efficient, and far longer-lived.
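The "less propellant" claim above follows directly from the Tsiolkovsky rocket equation, which gives the fraction of a spacecraft's initial mass that must be propellant for a given velocity change. A quick sketch comparing the isp regimes mentioned in the text (the 5,000 m/s delta-v is an assumed example mission, not a figure from the post):

```python
# Tsiolkovsky rocket equation: delta-v = isp * g0 * ln(m0 / m_final).
# Rearranged, the propellant fraction needed for a given delta-v is
# 1 - exp(-delta_v / (isp * g0)).
import math

G0 = 9.80665  # m/s^2

def propellant_fraction(delta_v: float, isp_s: float) -> float:
    """Fraction of initial spacecraft mass that must be propellant."""
    return 1 - math.exp(-delta_v / (isp_s * G0))

# Chemical (~350 s), solid-core NTR (~950 s), electric (~3000 s):
for isp in (350, 950, 3000):
    print(f"isp {isp:>4d} s -> {propellant_fraction(5000, isp):.0%} propellant")
```

For the same mission, the chemical vehicle is mostly propellant while the electric one is mostly spacecraft, which is exactly why mission planners tolerate the months-long burns.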
Electric propulsion is an old idea, but one that has yet to reach its full potential due to a number of challenges. Tsiolkovsky and Goddard both wrote about electric propulsion, but because neither lived in a time when it was possible to get into orbit, their ideas went unrealized in their lifetimes. The reason is that electric propulsion isn't suitable for lifting rockets off the surface of a planet, but for in-space propulsion it's incredibly promising. Both showed that the only thing that matters for a rocket engine is, to put it simply, that some mass is thrown out the back of the rocket to provide thrust; it doesn't matter what that mass is. Electricity isn't directly limited by thermodynamics (except through entropic losses), only by electric potential differences, and can offer very efficient conversion of electric potential into kinetic energy (the "throwing something out of the back" part of the system).
In chemical propulsion, combustion produces heat, which causes the byproducts of the chemical reaction to expand and accelerate. These are then directed out of a nozzle to increase the exhaust velocity and provide thrust. This is the first type of rocket ever developed; and while advances continue to be made, in many ways the field is chasing ever more esoteric or exotic approaches for ever more marginal gains. The reason is that there's only so much chemical potential energy available in a given propellant combination. The most efficient chemical engines top out around 500 seconds of specific impulse, and most hover around the 350-second mark. Where chemical engines excel, though, is in thrust-to-weight ratio. They remain, arguably, our best, and currently our only, way of actually getting off Earth.
Thermal propulsion doesn't rely on chemical potential energy; instead, the reaction mass is directly heated by some other source, causing it to expand. The lighter the propellant, the more it expands, and therefore the more thrust is produced for a given propellant mass; heavier propellants can be used to give more thrust per unit volume, at lower efficiency. It should be noted that electrically driven thermal propulsion is not only possible but common, in the form of electrothermal thrusters, but we'll dig more into that later.
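The "lighter propellant, more thrust per unit mass" point can be made concrete: for an ideal gas fully expanded through a nozzle, exhaust velocity scales as the square root of temperature over molar mass. A sketch with assumed example values (2700 K chamber temperature and a ratio of specific heats of 1.4 are illustrative choices, not figures from the text):

```python
# Ideal limiting exhaust velocity for a thermal rocket:
# v_e = sqrt(2 * gamma / (gamma - 1) * R * T / M)
# Lighter molecules (smaller M) leave faster at the same temperature.
import math

R = 8.314462618  # J/(mol K), universal gas constant

def ideal_exhaust_velocity(temp_k: float, molar_mass_kg: float,
                           gamma: float = 1.4) -> float:
    """Limiting exhaust velocity for full expansion of an ideal gas."""
    return math.sqrt(2 * gamma / (gamma - 1) * R * temp_k / molar_mass_kg)

# Hydrogen vs. water vapor at the same (assumed) 2700 K:
for name, molar_mass in (("H2 ", 0.002016), ("H2O", 0.018015)):
    print(f"{name}: {ideal_exhaust_velocity(2700, molar_mass):.0f} m/s")
```

At the same temperature, hydrogen's exhaust is roughly three times faster than water's, which is the whole argument for hydrogen propellant in nuclear thermal rockets despite its miserable density.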
Electric propulsion, on the other hand, is kind of a catch-all term when you start to look at it. There are many mechanisms for changing electrical energy into kinetic energy, and looking at most – but not all – of the options is what this blog post is about.
In order to get a better idea of how these systems work, and the fundamental principles behind electric propulsion, it may be best to look into the past. While the potential of electric propulsion is far from realized, it has a far longer history than many realize.
Futuristic Propulsion? … Sort Of, but With A Long Pedigree
The Origins of Electric Propulsion
When looking into the history of spaceflight, two great visionaries stand out: Konstantin Tsiolkovsky and Robert Goddard. Both worked independently on the basics of rocketry at the turn of the 20th century, both contributed much in the way of theory, and both were visionaries seeing far beyond their time to the potential of rocketry and spaceflight in general. Both also independently came up with the concept of electric propulsion, although deciding who did so first requires some splitting of hairs: Goddard mentioned it first, but in a private journal, while Tsiolkovsky first published the concept in a scientific paper, even if the reference is fairly vague (understandably so, considering the era). Additionally, electricity was a relatively poorly understood phenomenon at the time (the nature of cathode and anode "rays" was much debated, and positively charged ions had yet to be formally described), and neither visionary had a deep understanding of the physics involved. Their ideas were little more than just that: concepts that could serve as a starting point, not actual designs for systems that could propel a spacecraft.
The first mention of electric propulsion in the formal scientific literature was in 1911, in Russia. Konstantin Tsiolkovsky wrote that “it is possible that in time we may use electricity to produce a large velocity of particles ejected from a rocket device.” He began to focus on the electron, rather than the ion, as the ejected particle. While he never designed a practical device, the promise of electric propulsion was clearly seen: “It is quite possible that electrons and ions can be used, i.e. cathode and especially anode rays. The force of electricity is unlimited and can, therefore, produce a powerful flux of ionized helium to serve a spaceship.” The lack of understanding of electric phenomena hindered him, though, and prevented him from ever designing a practical system, much less building one.
The first mention of electric propulsion in history is from Goddard, in 1906, in a private notebook, though as Edgar Choueiri notes in his excellent 2004 historical paper (a major source for this section), these early notes don't actually describe (or even reference the use of) an electric propulsion drive system. They laid out the basic principles for the acceleration of electrons (rather than positively charged ions) to the "speed of light," but this wasn't a practical design; that didn't come until 1917. Over the next few years the concept fermented in his mind, culminating in patents in 1912 (for an ionization chamber using magnetic fields, similar to modern ionization chambers) and in 1917 (for a "Method and Means for Producing Electrified Jets of Gas"). The third of three variants in the latter was the first recognizable electric thruster, which would come to be known as an electrostatic thruster. Shortly afterward, though, America entered WWI, and Goddard spent the rest of his life focused on the then far more practical field of chemical propulsion.
Other visionaries of rocketry also came up with concepts for electric propulsion. Yuri Kondratyuk (another, lesser-known, Russian rocket pioneer) wrote “Concerning Other Possible Reactive Drives,” which examined electric propulsion, and pointed out the high power requirements for this type of system. He didn’t just examine electron acceleration, but also ion acceleration, noting that the heavier particles provide greater thrust (in the same paper he may have designed a nascent colloid thruster, another type of electric propulsion).
Another of the first generation of rocket pioneers to look at the possibilities of electric propulsion was Hermann Oberth. His 1929 opus, “Ways to Spaceflight,” devoted an entire chapter to electric propulsion. Not only did he examine electrostatic thrusters, but he looked at the practicalities of a fully electric-powered spacecraft.
Finally, we come to Valentin Glushko, another early Russian rocketry pioneer and giant of the Soviet rocketry program. In 1929, he actually built an electric thruster (an electrothermal system, which vaporized fine wires to produce superheated particles), although this particular concept never flew. By this time, it was clear that much more work had to be done in many fields before electric propulsion could be used, and so, one by one, these early visionaries turned their attention to chemical rockets, while electric propulsion sat on the dusty shelves of spaceflight concepts yet to be realized. It collected dust next to centrifugal artificial gravity, solar sails, and other promising ideas that couldn't be realized for decades.
The First Wave of Electric Propulsion
Electric propulsion began to be investigated after WWII, both in the US and in the USSR, but it would be another 19 years of development before a flight system was introduced. The two countries both focused on one general type of electric propulsion, the electrostatic thruster, but they looked at different types of this thruster, reflecting the technical capabilities and priorities of each country. The US focused on what is now known as a gridded ion thruster, most commonly called an ion drive, while the USSR focused on the Hall effect thruster, which uses a magnetic field perpendicular to the current direction to accelerate particles. Both of these concepts will be examined more in the section on electrostatic thrusters; though, for now it’s worth noting that the design differences in these concepts led to two very different systems, and two very different conceptions of how electric propulsion would be used in the early days of spaceflight.
In the US, the most vigorous early proponent of electric propulsion was Ernst Stuhlinger, who was the project manager for many of the earliest electric propulsion experiments. He was inspired by the work of Oberth, and encouraged by von Braun to pursue this area, especially now that being able to get into space to test and utilize this type of propulsion was soon to be at hand. His leadership and designs had a lasting impact on the US electric propulsion program, and can still be seen today.
The first spacecraft to be propelled using electric propulsion was the SERT-I spacecraft, a follow-on to a suborbital test (Program 661A, Test A, the first of three suborbital tests for the USAF) of the ion drives that would be used. These drive systems used cesium and mercury as propellants, rather than the inert gases commonly used today, because these metals have very low ionization energies and reasonably favorable masses for providing more significant thrust. Tungsten buttons were used in place of the grids used in modern ion drives, and a tantalum wire was used to neutralize the ion stream. Unfortunately, the cesium engine short-circuited, but the mercury system was tested for 31 minutes and 53 engine cycles. This not only demonstrated ion propulsion in principle, but just as importantly demonstrated ion beam neutralization. Neutralization is important for most electric propulsion systems, because it prevents the spacecraft from becoming negatively charged, and possibly even attracting the ion stream back to itself, robbing it of thrust and contaminating on-board sensors (a common problem in early electric propulsion systems).
The SNAPSHOT program, which launched the SNAP 10A nuclear reactor on April 3, 1965, also had a cesium ion engine as a secondary experimental payload. The failure of the electrical bus prevented this from being operated, but SNAPSHOT could be considered the first nuclear electric spacecraft in history (if unsuccessful).
The ATS program continued to develop cesium thrusters from 1968 through 1970. The ATS-4 flight was the first demonstration of an orbital spacecraft with electric propulsion, but sadly there were problems with beam neutralization in the drive systems, indicating more work needed to be done. ATS-5 was a geostationary satellite meant to have electrically powered stationkeeping, but the satellite could not be despun after launch, meaning the thruster couldn't be used for propulsion (the emission chamber was flooded with un-ionized propellant), although it was used as a neutral plasma source for experimentation. ATS-6 was a similar design, and successfully operated for a total of over 90 hours (one thruster failed early due to a similar emission chamber flooding issue). The SERT-II and SCATHA satellites demonstrated further improvements as well, using both cesium and mercury ion devices (SCATHA's wasn't optimized as a drive system, but used similar components to test spacecraft charge neutralization techniques).
Despite these tests in the 1960s, it would be another thirty years before an operational satellite used ion propulsion. Problems with the aforementioned thrusters becoming saturated, spacecraft contamination from the highly reactive cesium and mercury propellants, and relatively short engine lifetimes (due to erosion of the screens used in this type of ion thruster) didn't offer much promise to mission planners. The high (2000+ s) specific impulse was very promising for interplanetary spacecraft, but the low reliability and reasonably short lifetimes of these early ion drives made them unreliable, or of marginal use, in planners' eyes. Ground testing of various concepts continued in the US, but flight missions were rare until the end of the 1990s. This likely helped feed the idea that electric propulsion is new and futuristic, rather than having conceptual roots reaching all the way back to the dawn of the age of flight.
Early Electric Propulsion in the USSR
Unlike in the US, the USSR started development of electric propulsion early, and continued its development almost continuously to the modern day. Sergei Korolev’s OKB-1 was tasked, from the beginning of the space race, with developing a wide range of technologies, including nuclear powered spacecraft and the development of electric propulsion.
Part of this may be due to the different architecture the Soviet engineers used: rather than accelerating ions toward a pair of charged grids, the Soviet designs used a stream of ionized gas with a perpendicular magnetic field to accelerate the ions. This is the Hall effect thruster, which has several advantages over the gridded ion thruster, including simplicity, fewer problems with erosion, and higher thrust (admittedly, at the cost of specific impulse). Other designs, including the PPT, or pulsed plasma thruster, were also experimented with (the Zond-2 spacecraft carried a PPT system). However, thanks to the rapidly growing Soviet mastery of plasma physics, the Hall effect thruster became a very attractive system.
There are two main types of Hall thruster that were experimented with: the stationary plasma thruster (SPT) and the thruster with anode layer (TAL), which refer to how the electric charge is produced, the behavior of the plasma, and the path that the current follows through the thruster. The TAL was developed in 1957 by Askold Zharinov, and proven in 1958-1961, but a prototype wasn’t built until 1967 (using cesium, bismuth, cadmium, and xenon propellants, with isp of up to 8000 s), and it wasn’t published in open literature until 1973. This thruster can be characterized by a narrow acceleration zone, meaning it can be more compact.
The SPT, on the other hand, can be larger, and is the most common form of Hall thruster used today. Complications in the plasma dynamics of this system meant that it took longer to develop, but its greater electrical efficiency and thrust make it a more attractive choice for stationkeeping thrusters. Research began in 1962 under Alexey Morozov at the Institute of Atomic Energy, was later moved to the Moscow Aviation Institute, and then again to what became known as FDB Fakel (now Fakel Industries, still a major producer of Hall thrusters). The first breadboard thruster was built in 1968, and flew in 1970; it was then used on the Meteor series of weather satellites for attitude control. Development has continued on the design to the present day, but for decades these thrusters weren't widely used outside the Soviet Union, despite their higher thrust and lack of spacecraft contamination (unlike similar-vintage American designs).
It would be a mistake to think that only the US and the USSR were working on these concepts, though. Germany also had a diversity of programs. Arcjet thrusters, as well as magnetoplasmadynamic thrusters, were researched by the predecessors of the DLR. This work was inherited by the University of Stuttgart Institute for Space Systems, which remains a major research institution for electric propulsion in many forms. France, on the other hand, focused on the Hall effect thruster, which provides lower specific impulse, but more thrust. The Japanese program tended to focus on microwave frequency ion thrusters, which later provided the main means of propulsion for the Hayabusa sample return mission (more on that below).
The Birth of Modern Electric Propulsion
For many people, electric propulsion was an unknown until 1998, when NASA launched the Deep Space 1 mission. DS1 was a technology demonstration mission, part of the New Millennium program of advanced technology testing and experimentation. A wide array of technologies were to be tested in space after extensive ground testing, but for the purposes of Beyond NERVA, the most important of these new concepts was the first operational ion drive, the NASA Solar Technology Applications Readiness thruster (NSTAR). As is typical of many modern NASA programs, DS1 far exceeded its minimum requirements. Originally meant to do a flyby of the asteroid 9969 Braille, the mission was extended twice: first for a transit to the comet 19P/Borrelly, and later to extend engineering testing of the spacecraft.
In many ways, NSTAR was a departure from most flight-tested American electric propulsion designs. The biggest difference was the propellant: cesium and mercury were easy to ionize, but a combination of problems with neutralizing the propellant stream and the resultant contamination of the spacecraft and its sensors (as well as a desire to minimize chemical reactivity complications and a growing conservatism about toxic components in spacecraft) led to the decision to use noble gases, in this case xenon. This doesn't mean it was a great overall departure from the gridded ion drives of earlier US development; it was an evolution, not a revolution, in propulsion technology. Despite an early failure of the NSTAR thruster (at 4.5 hours), it was restarted, and the overall thruster life reached 8,200 hours, with the backup achieving more than 500 hours beyond that.
Not only that, but this was not the only use of the design: the Dawn mission to the minor planet Ceres also uses NSTAR thrusters, and has sent back incredibly detailed and fascinating information about the water and organic content of bodies in the asteroid belt, among many other exciting discoveries for when humanity begins to mine the asteroid belt.
Many satellites, especially geostationary satellites, use electric propulsion today, for stationkeeping and even for final orbital insertion. The low thrust of these systems is not a major detriment, since they can be used over long periods of time to ensure a stable orbital path; and the small amount of propellant required allows for larger payloads or longer mission lifetimes with the same mass of propellant.
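To put a rough number on that propellant saving, here's a minimal sketch using the Tsiolkovsky rocket equation. The delta-v budget, satellite mass, and specific impulses below are assumed, representative figures, not data for any particular satellite:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2


def propellant_mass(m0_kg: float, dv_ms: float, isp_s: float) -> float:
    """Propellant needed for a given delta-v, from the Tsiolkovsky rocket equation."""
    return m0_kg * (1.0 - math.exp(-dv_ms / (isp_s * G0)))


# Assumed: ~50 m/s per year of north-south stationkeeping for a 2,000 kg
# GEO satellite, over a 15-year service life.
dv = 50.0 * 15
chemical = propellant_mass(2000.0, dv, 300.0)   # bipropellant thruster, Isp ~300 s
hall = propellant_mass(2000.0, dv, 1600.0)      # Hall effect thruster, Isp ~1,600 s

print(f"chemical: {chemical:.0f} kg, Hall: {hall:.0f} kg")
```

The several hundred kilograms of difference goes straight into extra payload or extra years of stationkeeping, which is why electric propulsion has taken over this niche.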
After decades of being considered impractical, immature, or unreliable, electric propulsion has come in from the cold. Many designs for interplanetary spacecraft now use electric propulsion, taking advantage of the high-isp, low-thrust propulsion regime that these thruster systems excel at.
Another type of electric thruster is also becoming popular for small-sat users: the electrothermal thruster, which offers higher thrust from chemically inert propellants in a compact form, at the cost of specific impulse. These thrusters offer something approaching the thrust of chemical propulsion in a more compact and chemically inert package – a major requirement for most smallsats, which fly as secondary payloads and have to demonstrate that they won't threaten the primary payload.
So, now that we’ve looked into how we’ve gotten to this point, let’s see what the different possibilities are, and what is used today.
What are the Options?
The most well-known and popularized version of electric propulsion is electrostatic propulsion, which uses an ionization chamber (or ionic fluid) to develop a positively charged stream of ions, which are then accelerated out the "nozzle" of the thruster. A stream of electrons is added to the propellant as it leaves the spacecraft, to prevent the buildup of a negative charge on the vehicle. There are many variations on this concept, including the best-known types of thrusters (the Hall effect and gridded ion thrusters), as well as field emission thrusters and electrospray thrusters.
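For a gridded ion thruster, the ideal exhaust velocity follows directly from the accelerating voltage: each singly charged ion gains kinetic energy qV falling through the grid potential, so v_e = sqrt(2qV/m). A quick sketch (the 1 kV figure is just an assumed, representative net accelerating voltage, and grid losses are ignored):

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
XENON_ION_MASS = 2.180e-25    # kg (~131.3 u, singly ionized xenon)
G0 = 9.80665                  # standard gravity, m/s^2


def exhaust_velocity(grid_voltage_v: float, ion_mass_kg: float = XENON_ION_MASS) -> float:
    """Ideal exhaust velocity of a singly charged ion accelerated through the grid potential."""
    return math.sqrt(2.0 * E_CHARGE * grid_voltage_v / ion_mass_kg)


v = exhaust_velocity(1000.0)  # assumed 1 kV net accelerating voltage
print(f"v_e ≈ {v / 1000:.1f} km/s, Isp ≈ {v / G0:.0f} s")
```

Even a modest kilovolt of grid potential puts a xenon ion drive at several thousand seconds of specific impulse, roughly an order of magnitude beyond any chemical rocket.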
The next most common version – and one with a large number of popular mentions these days – is the electromagnetic thruster. Here, the propellant is converted to a relatively dense plasma, and usually (but not always) magnets accelerate this plasma out of a magnetic nozzle at high speed, using the electromagnetic and thermal properties of plasma physics. In the cases where the plasma isn't accelerated by magnetic fields directly, magnetic nozzles and other plasma-shaping structures are used to constrict or expand the plasma flow. There are many different versions, from magnetohydrodynamic thrusters (MHD, where no charge is transferred into the plasma by the fields), to the less-well-known magnetoplasmadynamic thrusters (MPD, where the Lorentz force is used to at least partially accelerate the plasma), electrodeless plasma thrusters, and pulsed inductive thrusters (PIT).
Third, we have electrothermal drive systems: basically, highly advanced electric heaters used to heat a propellant. These tend to be the less energy-efficient, but higher-thrust, systems (although, theoretically, some versions of electromagnetic thrusters can achieve high thrust as well). The most common types of electrothermal systems proposed have been arcjet, resistojet, and inductive heating drives; the first two have actually been popular choices for reaction control systems on large, nuclear-powered space station designs. Inductive heating has already made a number of appearances on this page, both in testing apparatus (CFEET and NTREES are both inductively heated) and as part of a bimodal NTR (the nuclear thermal electric rocket, or NTER, covered on our NTR page).
The last two categories, electromagnetic and electrothermal, often use similar mechanisms of operation when you look at the details, and the line between the two isn't always clear. For instance, the pulsed plasma thruster (PPT) most commonly uses a solid propellant such as PTFE (Teflon), which is vaporized and partially ionized electrically before being accelerated out of the spacecraft; some authors describe it as an electromagnetic thruster, others as an arcjet, and which term best applies depends on the particulars of the system in question. A more famous example of this gray area is the VASIMR thruster (VAriable Specific Impulse Magnetoplasma Rocket). This system uses a dense plasma contained in a magnetic field, but the plasma is heated using RF energy and then accelerated by its own thermal behavior while being contained and directed magnetically. Because of this, the system can be seen as either an electromagnetic or an electrothermal thruster (that debate, and the way these terms are used, was one of the more enjoyable parts of the editing process of this blog post, and I'm sure one that will continue as we continue to examine EP).
Finally, we come to the photon drives. These use photons as the reaction mass – and as such, are sometimes somewhat jokingly called flashlight drives. They have the lowest thrust of any of these systems, but the exhaust velocity is literally the speed of light, so they have insanely high specific impulse. Just… don’t expect any sort of significant acceleration, getting up to speed with these systems could take decades, if not centuries; making them popular choices for interstellar systems, rather than interplanetary ones. Photonic drives have another option, as well, though: the power source for the photons doesn’t need to be on board the spacecraft at all! This is the principle behind the lightsail (the best-known version being the solar sail): a fixed installation can produce a laser, or other stream of photons (such as a maser, out of microwaves, in the Starwisp concept), which then impact a reflective surface to provide thrust. This type of system follows a different set of rules and limitations, however, from systems where the power supply (and associated equipment), drive system, and any propellant needed are on-board the spacecraft, so we won’t go too much into depth on that concept initially, instead focusing on designs that have everything on-board the spacecraft.
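The "flashlight drive" numbers are easy to check: a photon stream carrying power P carries momentum P/c per second, so thrust is P/c for an on-board emitter, or up to 2P/c for perfect reflection off a lightsail. A sketch, with an assumed 1 MW beam:

```python
C = 299_792_458.0  # speed of light, m/s


def photon_thrust(power_w: float, reflective: bool = False) -> float:
    """Thrust from a photon stream: P/c for an on-board emitter,
    2P/c for perfect reflection off a lightsail."""
    return (2.0 if reflective else 1.0) * power_w / C


f = photon_thrust(1.0e6)  # assumed 1 MW on-board "flashlight drive"
print(f"thrust ≈ {f * 1000:.2f} mN")
```

A full megawatt buys only a few millinewtons of thrust, which is exactly why these systems only make sense over interstellar timescales, or when the power source stays home and only the sail flies.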
Each of these systems has its advantages and disadvantages. Electrostatic thrusters are relatively simple to build: ionization chambers are easy, and so is creating a charged field; but something has to generate that charge, and whatever that something is will be hit by the ionized propellant particles, causing erosion. Plasmadynamic thrusters can provide incredible flexibility, but generally require large power plants, and reducing their power requirements demands superconducting magnets and other materials advances. In addition, plasma physics, while increasingly well understood, provides its own unique set of challenges. Electrothermal thrusters are simple, but generally provide poor specific impulse, and thermal cycling causes wear on their components. Finally, photon drives are incredibly efficient but very, very low-thrust systems, requiring exceedingly long burn times to produce any noticeable velocity change. Let's look at each of these options in a bit more detail, and at the practical limitations each system has.
Optimizing the System: The Fiddly Bits
As we’ve seen, there’s a huge array of technologies that fall under the umbrella of “electric propulsion,” each with its advantages and disadvantages. The mission to be performed will determine which types of thrusters are feasible, depending on a number of factors. If the mission is stationkeeping for a geosynchronous communications satellite, the Hall thruster offers a wonderful balance between thrust and specific impulse. If the mission is a sample return from an asteroid, the lower-thrust, higher-specific-impulse gridded ion thruster is better, because the longer mission time (and greater overall delta-v needed) makes this low-thrust, high-efficiency thruster a far more ideal option. If the mission is stationkeeping on a small satellite flying as a piggyback load, the arcjet may be the best option, due to its compactness, the chemically inert nature of its propellant, and its relatively high thrust. If higher thrust is needed over a longer period for a larger spacecraft, MPD may be the best bet. Very few systems are designed to cover a wide range of capabilities in spaceflight, and electric propulsion is no different.
There are other key concepts to consider in selecting an electric propulsion system as well. The first is the efficiency of the system: how much electricity the thruster consumes, compared to how much kinetic energy it imparts to the propellant (and therefore to the spacecraft). This efficiency varies between specific designs, and its improvement is a major goal in every thruster's development process. The quality of electrical power needed is also an important consideration: some thrusters require direct current, some alternating current, some RF or microwave power inputs, and matching the electricity produced to the thruster itself is a necessary step, one which on occasion can make one thruster more attractive than another by reducing the overall mass of the system. Another key question is the total change in velocity needed for the mission, and the timeframe over which this delta-v can be applied; here, the longer the timeframe, the more efficient your thruster can be at lower thrust (trading thrust for specific impulse).
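That thrust-for-specific-impulse trade can be made concrete: the kinetic ("jet") power in the exhaust is F·v_e/2, so at fixed thrust, doubling exhaust velocity doubles the electrical power the thruster demands. A sketch with assumed, representative numbers:

```python
def input_power(thrust_n: float, exhaust_velocity_ms: float, efficiency: float) -> float:
    """Electrical input power: jet power F*v_e/2, divided by thruster efficiency."""
    return thrust_n * exhaust_velocity_ms / (2.0 * efficiency)


# Same 0.1 N of thrust at two exhaust velocities, assuming 60% efficiency:
p_hall = input_power(0.1, 16_000.0, 0.6)   # Hall-thruster-like v_e (~1,600 s Isp)
p_ion = input_power(0.1, 40_000.0, 0.6)    # gridded-ion-like v_e (~4,000 s Isp)
print(f"Hall-class: {p_hall:.0f} W, gridded-ion-class: {p_ion:.0f} W")
```

This is the core of thruster selection: higher specific impulse always costs more power per newton, which is why the available power supply drives the choice as much as the delta-v budget does.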
However, looking past just the drive itself, there are quite a few things about the spacecraft, and especially its power supply, that also have to be considered. The first is the power available to the drive system: if you’ve got an incredibly efficient drive system that requires a megawatt to run, you’re going to be severely limited in your power supply options (and there are very few, if any, current drive systems that require this much power). For more realistic systems, the mass of the power supply, and therefore of the spacecraft, has a direct impact on the amount of delta-v that can be applied over a given time: if you want your spacecraft to be able to, say, maneuver out of the way of a piece of space debris, or a mission to another planet needs to arrive within a given timeframe, then the less mass per unit of power, the better. Power per unit mass is known in engineering as specific power, and it's an area where nuclear power can offer real benefits. While it’s debatable whether solar or nuclear offers better specific power for low-powered applications, once higher power levels are needed, nuclear shines: it can be difficult (but far from impossible) to scale nuclear down in size and power output, but it scales up very easily and efficiently, and this scaling is non-linear. A small reactor and one with three times its output can be very similar in core size, and the power conversion systems used often have similar scaling advantages. There are additional advantages as well: radiators are generally smaller in sail area, and harder to damage, than photovoltaic arrays, and can often be repaired more easily (once a PV cell gets hit with space debris it needs to be replaced, but a radiator tube designed for repair can in many cases simply be patched or welded and continue functioning).
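One way to see why specific power matters so much: for a spacecraft whose mass is dominated by its power plant, the achievable acceleration depends only on the plant's specific mass (often written as alpha, in kg per kW), the thruster efficiency, and the exhaust velocity – the power level itself cancels out of the ratio. A sketch, with all three input values assumed purely for illustration:

```python
def characteristic_acceleration(alpha_kg_per_kw: float,
                                exhaust_velocity_ms: float,
                                efficiency: float) -> float:
    """Acceleration of a power-plant-dominated spacecraft:
    F/m = (2*eta*P/v_e) / (alpha*P) = 2*eta / (alpha * v_e),
    so the power level P cancels out entirely."""
    alpha_kg_per_w = alpha_kg_per_kw / 1000.0
    return 2.0 * efficiency / (alpha_kg_per_w * exhaust_velocity_ms)


# Assumed: 25 kg/kWe power plant, 60% efficient thruster, 30 km/s exhaust velocity.
a = characteristic_acceleration(25.0, 30_000.0, 0.6)
print(f"a ≈ {a * 1000:.2f} mm/s^2")
```

Millimeters per second squared is the natural scale of nuclear electric propulsion, and the only levers that improve it are a lighter power plant (lower alpha), a more efficient thruster, or accepting a lower exhaust velocity.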
A related concept is power density, or power per unit volume, which also has a significant impact on the capabilities of many (especially large) spacecraft. The volume of the power supply is a limiting factor when it comes to launching the vehicle itself, since it has to fit into the payload fairing of the launch vehicle (or the bus of the satellite that will use it).
Specific power, on the other hand, has quite a few different implications, most importantly for the available payload mass fraction of the spacecraft. Without a payload – whether scientific instruments or crew life support and habitation modules – there's no point to the mission, and the specific power of the entire power and propulsion unit has a large impact on the amount of mass that can be brought along.
Another factor to consider when designing an electrically propelled spacecraft is how the capabilities and limitations of the entire power and propulsion unit interact with the spacecraft itself. Just as in chemical and thermal rockets, the ratio of wet (or fueled) to dry (unfueled) mass has a direct impact on the vehicle’s capabilities: Tsiolkovsky’s rocket equation still applies, and in long missions there can be a significant mass of propellant on-board, despite the high isp of most of these thrusters. The specific mass of the power and propulsion system will have a huge impact on this, so the more power-dense, and more mass-efficient you are when converting your electricity into useful power for your thruster, the more capable the spacecraft will be.
Finally, the overall energy budget for the mission needs to be accounted for: how much change in velocity, or delta-v, is needed for the mission, and over what time period this change in velocity can be applied, are perhaps the biggest factors in selecting one type of thruster over another. We’ve already discussed the relative advantages and disadvantages of many of the different types of thrusters earlier, so we won’t examine it in detail again, but this consideration needs to be taken into account for any designed spacecraft.
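As a rough sketch of how the delta-v budget and the timeframe interact, we can combine the rocket equation with a constant-thrust assumption to estimate how long a thruster has to burn. All of the specific numbers below are assumed, for illustration only:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2


def burn_time_days(m0_kg: float, dv_ms: float, thrust_n: float, isp_s: float) -> float:
    """Days of continuous thrusting to deliver dv_ms of delta-v at constant thrust.
    Propellant mass comes from the rocket equation; burn time is m_p * v_e / F."""
    ve = isp_s * G0
    m_prop = m0_kg * (1.0 - math.exp(-dv_ms / ve))
    return m_prop * ve / thrust_n / 86_400.0


# Assumed: a 1,000 kg spacecraft, 5 km/s of delta-v, a 90 mN ion thruster at 3,000 s Isp.
days = burn_time_days(1000.0, 5000.0, 0.09, 3000.0)
print(f"burn time ≈ {days:.0f} days")
```

If the mission can't afford that many months of thrusting, the designer has to either accept a lower-isp, higher-thrust drive (and more propellant) or find more power – which is exactly the trade space the rest of this section describes.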
With each of these factors applied appropriately, it’s possible to create a mathematical description of the spacecraft’s capabilities, and match it to a given mission profile, or (as is more common) to go the other way and design a spacecraft’s basic design parameters for a specific mission. After all, a spacecraft designed to deliver 100 kg of science payload to Jupiter in two years is going to have a very different design than one that’s designed to carry 100 kg to the Moon in two weeks, due to the huge differences in mission profile. The math itself isn’t that difficult, but for now we’ll stick with the general concepts, rather than going into the numbers (there are a number of dimensionless variables in the equations, and for a lot of people that becomes confusing to understand).
Let’s look instead at some of the more important parts of the power and propulsion unit that are tied more directly to the drives themselves.
Just as in any electrical system, you can't simply hook wires up to a battery, solar panel, or power conversion system and feed the output into the thruster; the electricity needs to be conditioned first. This ensures the correct type of current (alternating or direct), the correct voltage, the correct amperage… all the things that are done multiple times across a terrestrial power grid have to be done on board the spacecraft as well, and this is one of the biggest factors in deciding which specific drive is placed on a particular satellite.
After the electricity is generated, it goes through a number of control systems to first ensure protection for the spacecraft from things like power surges and inappropriate routing, and then goes to a system to actually distribute the power, not just to the thruster, but to the rest of the on-board electrical systems. Each of these requires different levels of power, and as such there’s a complex series of systems to distribute and manage this power. If electric storage is used, for instance for a solar powered satellite, this is also where that energy is tapped off and used to charge the batteries (with the appropriate voltage and battery charge management capability).
After the electricity needed for other systems has been routed away, the remainder is directed into a system that ensures the correct amount and type (AC, DC, necessary voltage, etc.) of electricity is delivered to the thruster. These power conditioning units, or PCUs, are some of the most complex systems in an electric propulsion system, and have to be highly reliable. Power fluctuations will affect the functioning of a thruster (possibly even forcing it to shut down if the current falls too low), and in extreme cases can even damage it, so this is a key function these systems must provide. Because of this, some thruster designers don't develop the PCU in-house at all, instead selling the thruster alone, with the customer contracting or designing the PCU independently of the supplier (although obviously with the supplier's support).
Finally, the thermal load on the thruster itself needs to be managed. In many cases, small enough thermal loads on the thruster mean that radiation, or thermal convection through the propellant stream, is sufficient for managing this, but for high-powered systems, an additional waste heat removal system may be necessary. If this is the case, then it’s an additional system that needs to be designed and integrated into the system, and the amount of heat generated will play a major factor in the types of heat rejection used.
There’s a lot more than just these factors to consider when integrating an electric propulsion system into a spacecraft, but it tends to get fairly esoteric fairly quickly, and the best way to understand it is to look at the relevant mathematical functions for a better understanding. Up until this point, I’ve managed to avoid using the equations behind these concepts, because for many people it’s easier to grasp the concepts without the numbers. This will change in the future (as part of the web pages associated with these blog posts), but for now I’m going to continue to try and leave the math out of the posts themselves.
Conclusions, and Upcoming Posts
As we’ve seen, electric propulsion is a huge area of research and design, and one that extends all the way back to the dawn of rocketry. Despite a slow start, research has continued more or less continuously across the world in a wide range of different types of electric propulsion.
We also saw that the term “electric propulsion” is very vague, with a huge range of capabilities and limitations for each system. I was hoping to do a brief look at each type of electric propulsion in this post (but longer than a paragraph or two each), but sadly I discovered that just covering the general concepts, history, and integration of electric propulsion was already a longer-than-average blog post. So, instead, we got a brief glimpse into the most general basics of electrothermal, electrostatic, magnetoplasmadynamic, and photonic thrusters, with a lot more to come in the coming posts.
Finally, we looked at the challenges of integrating an electric propulsion system into a spacecraft, and some of the implications for the very wide range of capabilities and limitations that this drive concept offers. This is an area that will be expanded a lot as well, since we barely scratched the surface. We also briefly looked at the other electrical systems that a spacecraft has in between the power conversion system and the thruster itself, and some of the challenges associated with using electricity as your main propulsion system.
Our next post will look at two similar in concept, but different in mechanics, designs for electric propulsion: electrothermal and magnetoplasmadynamic thrusters. I’ve already written most of the electrothermal side, and have a good friend who’s far better than I at MPD, so hopefully that one will be coming soon.
The post after that will focus on electrostatic thrusters. Because these are some of the most widely used, and also some of the most diverse in their mechanisms, this may end up being its own post, but at this point I'm planning on also covering photon drive systems (mostly on-board, but also lightsail-based concepts) in that post as well, to wrap up our discussion of the particulars of electric propulsion.
Once we’ve finished our look at the different drive systems, we’ll look at how these systems don’t have to be standalone concepts. Many designs for crewed spacecraft integrate both thermal and electric nuclear propulsion into a single propulsion stage, bimodal nuclear thermal rockets. We’ll examine two different design concepts, one American (the Copernicus-B), and one Russian (the TEM stage), in that post, and look at the relative advantages and disadvantages of each concept.
I would like to acknowledge the huge amount of help that Roland Antonius Gabrielli of the University of Stuttgart Institute for Space Systems has been in this post, and the ones to follow. His knowledge of these topics has made this a far better post than it would have been without his invaluable input.
As ever, I hope you’ve enjoyed the post. Feel free to leave a comment below, and join our Facebook group to join in the discussion!