Hello, and welcome back to Beyond NERVA! Today’s blog post is a special one, spurred on by the recent announcement of the Transport and Energy Module, Russia’s new nuclear electric space tug! Because of the extra post, the next post on liquid fueled NTRs will come out on Monday or Tuesday next week.
This is a fascinating system with a lot of promise, but it has also gone through major changes in the last year that seem to have delayed the program. However, once it’s flight certified (which is planned for the 2030s), Roscosmos is planning to mass-produce the spacecraft for a variety of missions, including cislunar transport services and interplanetary mission power and propulsion.
Begun in 2009, the TEM is being developed by Energia on the spacecraft side and the Keldysh Center on the reactor side. This 1 MWe (4 MWt) nuclear reactor will power a number of gridded ion engines for high-isp missions over the spacecraft’s expected 10-year mission life.
First publicly revealed in 2013 at the MAKS aerospace show, a new model last year showed significant changes, with additional reporting coming out in the last week indicating that more changes are on the horizon (there’s a section below on the current TEM status).
This is a rundown of the TEM and its YaEDU reactor. I also did a longer analysis of the history of the TEM on my Patreon page (patreon.com/beyondnerva), including a year-by-year analysis of the developments and design changes. Consider becoming a Patron for only $1 a month for additional content like early blog access, extra blog posts and visuals, and more!
The TEM is a nuclear electric spacecraft, designed around a gas-cooled high temperature reactor and a cluster of ion engines.
The TEM is designed to be delivered by either Proton or Angara rockets, although with the retirement of the Proton the only available launcher for it currently is the Angara-5.
Secondary Power System
Both versions of the TEM have had secondary folding photovoltaic power arrays. Solar panels are relatively commonly used for what’s known as “hotel load,” or the load used by instrumentation, sensors, and other, non-propulsion systems.
It is unclear if these feed into the common electrical bus of the spacecraft or form a secondary system. Both schemes are possible; if the power is run through a common electrical bus the system is simpler, but a second power distribution bus allows for greater redundancy in the spacecraft.
The ID-500 was designed by the Keldysh Center specifically to be used on the TEM, in conjunction with the YaEDU. Due to the very high power availability of the YaEDU, standard ion engines simply weren’t able to handle either the power input or the needed propellant flow rates, so a new design had to be developed.
The ID-500 is a xenon-propelled ion engine, with each thruster having a maximum power level of about 35 kW, with a grid diameter of 500 mm. The initially tested design in 2014 (see references below) had a tungsten cathode, with an expected lifetime of 5000 hours, although additional improvements through the use of a carbon-carbon cathode were proposed which could increase the lifetime by a factor of 10 (more than 50,000 hours of operation).
Each ID-500 is designed to throttle from 375-750 mN of thrust, varying both propellant flow rate and ionization chamber pressure. The projected exhaust velocity of the engine is 70,000 m/s (7000 s isp), making it an attractive option for the types of orbit-altering, long duration missions that the TEM is expected to undertake.
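As a quick sanity check on these published figures, the ideal power carried by an ion engine's exhaust beam is half the thrust times the exhaust velocity, so we can estimate the thrust efficiency implied by the 35 kW input, 750 mN thrust, and 70 km/s exhaust velocity. This is a back-of-the-envelope sketch using only the numbers above, not official Keldysh data:

```python
# Back-of-the-envelope check of the published ID-500 figures.
# P_beam = F * v_e / 2 is the kinetic power carried by the exhaust.
def jet_power(thrust_n: float, exhaust_velocity_ms: float) -> float:
    """Ideal power in the exhaust beam, in watts."""
    return thrust_n * exhaust_velocity_ms / 2.0

thrust = 0.750          # N (max throttle, per the 2014 test reporting)
v_e = 70_000.0          # m/s (7000 s specific impulse)
p_input = 35_000.0      # W (quoted max electrical input)

p_beam = jet_power(thrust, v_e)   # 26,250 W in the beam
efficiency = p_beam / p_input     # fraction of input power reaching the beam

print(f"Beam power: {p_beam/1000:.2f} kW, implied efficiency: {efficiency:.0%}")
```

The implied ~75% total efficiency is in the plausible range for a large gridded ion thruster, which suggests the three published numbers are at least self-consistent.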
The fact that this system uses a gridded ion thruster, rather than a Hall effect thruster (HET), is interesting, since HETs are the area in which Soviet, then Russian, engineers and scientists have excelled. The higher isp makes sense for a long-term tug, but for a system that seems like it could be refueled, the isp-to-thrust trade-off is an interesting decision.
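To put the isp side of that trade-off in numbers, the Tsiolkovsky rocket equation shows how much propellant an orbit-altering mission demands at different specific impulses. The 5 km/s delta-v and the 1500 s HET figure below are illustrative values of my choosing, not TEM mission numbers:

```python
import math

def propellant_fraction(delta_v_ms: float, isp_s: float, g0: float = 9.80665) -> float:
    """Fraction of initial mass that must be propellant, from the rocket equation."""
    return 1.0 - math.exp(-delta_v_ms / (isp_s * g0))

dv = 5_000.0  # m/s, an illustrative large orbit-raising mission
for isp in (1500, 7000):  # typical HET vs. the ID-500-class gridded ion engine
    print(f"Isp {isp:>4} s -> propellant fraction {propellant_fraction(dv, isp):.1%}")
```

At 7000 s the propellant fraction is roughly a quarter of what a 1500 s thruster would need for the same mission, which is exactly why a reusable tug leans toward high isp despite the thrust penalty.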
The initial design released at MAKS 2013 had a total of 16 ion thrusters on four foldable arms, but the latest version from MAKS-2019 has only five thrusters. The new design is visible below:
The first design is ideal for the tug configuration: the distance between the thrusters and the payload ensures that a minimal amount of the propellant hits the payload, robbing the spacecraft of thrust, contaminating the spacecraft, and possibly building up a skin charge on the payload. The downside is that those arms, and their hinge system, cost mass and complexity.
The new design has only five thrusters (less than one third of the original count), clustered on the center-line of the spacecraft. This saves mass, but the decrease in the number of thrusters, and the fact that they’re placed in exactly the location where it makes the most sense to attach the payload, has me curious about what the mission profile for this initial TEM is.
It is unclear if the thrusters are the same design.
This may be the most interesting thing in the TEM: the heat rejection system.
Most of the time, spacecraft use what are commonly called “bladed tubular radiators.” These are tubes which carry coolant after it reaches its maximum temperature. Welded to the tube are plates, which do two things: they increase the surface area of the tube (since metal conducts heat better than most fluids, the heat can be spread well beyond the diameter of the pipe), and they protect the pipe from debris impacts. However, there are limitations in how much heat can be rejected by this type of radiator: the pipes, and the joints between pipes, have definite thermal limits, with the joints often being the weakest part in folding radiators.
The TEM has the option of using a panel-type radiator; in fact, there are many renderings of the spacecraft using this type of radiator, such as this one:
However, many more renderings present a far more exciting possibility: a liquid droplet radiator, called a “drip refrigerator” in Russian. This design uses a spray of droplets in place of the panels of the radiator. This increases the surface area greatly, and therefore allows far more heat to be rejected. In addition it can reduce the mass of the system significantly, both due to the increased surface area and also the potentially higher temperature, assuming the system can recapture the majority of its coolant.
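To see why effective surface area and rejection temperature dominate this trade, a quick Stefan-Boltzmann estimate helps. The 3 MWt of waste heat follows from the 4 MWt / 1 MWe figures elsewhere in this post, but the emissivity and rejection temperatures below are my assumptions, not published TEM numbers:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(q_watts: float, temp_k: float, emissivity: float = 0.85) -> float:
    """Radiating area (m^2) needed to reject q_watts at temperature temp_k."""
    return q_watts / (emissivity * SIGMA * temp_k ** 4)

q_reject = 3.0e6  # W: ~3 MWt of waste heat if ~1 MWe is extracted from 4 MWt
for t in (600, 800, 1000):  # K, assumed coolant rejection temperatures
    print(f"T = {t:>4} K -> area needed ~ {radiator_area(q_reject, t):.0f} m^2")
```

Because rejected power scales with the fourth power of temperature, even modest increases in rejection temperature (or in effective area, which is the droplet radiator's trick) slash the radiator size dramatically.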
This system was also tested on the ground throughout 2018 (https://ria.ru/20181029/1531649544.html?referrer_block=index_main_2), and appears to have passed all the vacuum chamber ground tests needed. Based on the reporting, more in-orbit tests will be needed, but with Drop-2 already on-station it may be possible to conduct these tests reasonably easily.
I have been unable to determine what the working fluid that would be used is, but anything with a sufficiently low vapor pressure to survive the vacuum of space and the right working fluid range can be used, from oils to liquid metals.
Nothing is known of the reaction control system for the TEM. A number of options are available and currently used in Russian systems, but it doesn’t seem that this part of the design has been discussed publicly.
The biggest noticeable change in the rest of the spacecraft is the change in the spine structure. The initial model and renders had a square-cross-section telescoping truss with an open triangular girder profile. The new version has a cylindrical truss structure, with a tetrahedral girder structure which looks almost like the pattern chicken-wire uses. I’m certain that there’s a trade-off between mass and rigidity in this change, but what precisely it is remains unclear, since we don’t have dimensions or materials for the two structures. The change in cross-section also means that while the new design is likely stronger from all angles, it is harder to pack into the payload fairing of the launch vehicle.
The TEM seems like it has gone through a major redesign in the last couple years. Because of this, it’s difficult to tell what other changes are going to be occurring with the spacecraft, especially if there’s a significant decrease in electrical power available.
It is safe to assume that the first version of the TEM will be more heavily instrumented than later versions, in order to support flight testing and problem-solving, but this is purely an assumption on my part. The reconfiguration of the spacecraft at MAKS-2019 does seem to indicate, at least for one spacecraft, the loss of the payload capability, but at this point it’s impossible to say.
The YaEDU is the reactor that will be used on the TEM spacecraft. Overall, including the power conversion system, the power system will weigh about 6,800 kg.
The reactor itself is a gas cooled, fast neutron spectrum, oxide fueled reactor, designed, oddly enough, to an electrical output requirement of 1 MWe rather than a thermal output requirement (the choice of power conversion system changes the ratio of thermal to electrical power significantly, and as we’ll see, it’s not set in stone yet). This requires a thermal output of at least 4 MWt, although depending on power conversion efficiency it may be higher. Currently, though, the 4 MWt figure seems to be the baseline for the design. It is meant to have a ten-year reactor lifetime.
This system has undergone many changes over its 11 year life, and due to the not-completely-clear nature of much of its development and architecture, there’s much about the system that we have conflicting or incomplete information on. Therefore, I’m going to be providing line-by-line references for the design details in these sections, and if you’ve got confirmable technical details on any part of this system, please comment below with your references!
The fuel for the reactor appears to be highly enriched uranium oxide, encased in a monocrystalline molybdenum clad. According to some reporting (https://habr.com/en/post/381701/ ), the total fuel mass is somewhere between 80-150 kg, depending on enrichment level. There have been some mentions of carbonitride fuel, which offers a higher fissile fuel density but is more thermally sensitive (although how much is unclear), but these have been only passing mentions.
The use of monocrystalline structures in nuclear reactors is something that the Russians have been investigating and improving for decades, going all the way back to the Romashka reactor in the 1950s. The reason for this is simple: grain boundaries, or the places where different crystalline structures interact within a solid material, act as refraction points for neutrons, similarly to how a cracked pane of glass distorts the light coming through it through internal reflection and the disruption of light waves undergoing refraction in the material. There are two ways around this: either make sure that there are no grain boundaries (the Russian method), or make it so that the entire structure, or as close to it as possible, is grain boundaries, using what are called nanocrystalline materials (the preferred method of the US and other Western countries).

While the monocrystalline option is better in many ways, since it makes an effectively transparent, homogeneous material, it’s difficult to grow large monocrystalline structures, and they can be quite fragile in certain materials and circumstances. This led the US and others to investigate the somewhat easier to execute, but more loss-intensive, nanocrystalline material paradigm. For astronuclear reactors, particularly ones with a relatively low keff (effective neutron multiplication factor, or how many neutrons the reactor has to work with), the monocrystalline approach makes sense, but I’ve been unable to find the keff of this reactor anywhere, so it may in fact be quite high.
The TEM uses a mix of helium and xenon as its primary coolant, a common choice for fast-spectrum reactors. Initial reporting indicated an inlet temperature of 1200K, with an outlet temperature of 1500K, although I haven’t been able to confirm this in any more recent sources. Molybdenum, tantalum, tungsten and niobium alloys are used for the primary coolant tubes.
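If those inlet and outlet temperatures are right, the required He-Xe mass flow follows directly from Q = ṁ·cp·ΔT. The specific heat here is my assumption for a He-Xe blend with a molar mass around 40 g/mol (a common choice for gas-cooled space reactor studies); the actual TEM mixture ratio isn't published:

```python
def mass_flow(q_watts: float, cp_j_per_kg_k: float, delta_t_k: float) -> float:
    """Coolant mass flow rate (kg/s) from Q = m_dot * cp * dT."""
    return q_watts / (cp_j_per_kg_k * delta_t_k)

q_thermal = 4.0e6        # W, baseline reactor thermal power
cp_hexe = 520.0          # J/(kg K), ASSUMED for a ~40 g/mol He-Xe mixture
dT = 1500.0 - 1200.0     # K, reported outlet minus inlet temperature

print(f"Required coolant mass flow ~ {mass_flow(q_thermal, cp_hexe, dT):.1f} kg/s")
```

A flow on the order of tens of kilograms per second through refractory-alloy piping gives a sense of why the coolant loop needed its own dedicated test facility.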
Testing of the coolant loop took place at the MIR research reactor in NIIAR, in the city of Dimitrovgrad. Due to the high reactor temperature, a special test loop was built in 2013 to conduct the tests. Interestingly, other options, including liquid metal coolant, were considered (http://osnetdaily.com/2014/01/russia-advances-development-of-nuclear-powered-spacecraft/ ), but rejected due to lower efficiency and the promise of the initial He-Xe testing.
Power Conversion System
There have been two primary options proposed for the power conversion system of the TEM, and in many ways it seems to bounce back and forth between them: the Brayton cycle gas turbine and a thermionic power conversion system. The first offers far superior power conversion ratios, but is notoriously difficult to make into a working system for a high temperature astronuclear system; the second is a well-understood technology that has been used through multiple iterations in flown Soviet astronuclear systems, demonstrated on the Topaz and Yenisei reactors (Topaz flew; Yenisei is the only astronuclear reactor to be flight-certified by both Russia and the US), which followed the earlier thermoelectric Buk reactors.
In 2013, shortly after the design outline for the TEM was approved, the MAKS trade show had models of many components of the TEM, including a model of the Brayton system. At the time, the turbine was advertised to be a 250 kW system, meaning that four would have been used by the TEM to support YaEDU. This system was meant to operate at an inlet temperature of 1550K, with a rotational speed of 60,000 rpm and a turbine tip speed of 500 m/s. The design work was being primarily carried out at Keldysh Center.
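The quoted 60,000 rpm and 500 m/s tip speed together pin down the turbine rotor size through simple kinematics (v = ω·r), which is a useful reality check on the model's scale:

```python
import math

def rotor_diameter(tip_speed_ms: float, rpm: float) -> float:
    """Rotor diameter (m) implied by blade tip speed and rotation rate."""
    omega = rpm * 2.0 * math.pi / 60.0  # angular velocity, rad/s
    return 2.0 * tip_speed_ms / omega

d = rotor_diameter(500.0, 60_000.0)
print(f"Implied rotor diameter ~ {d * 100:.1f} cm")
```

The implied rotor is only about 16 cm across, a reminder that these high-speed space turbines are compact machines whose challenges lie in bearings, materials, and longevity rather than sheer size.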
The Brayton system would include both DC/AC and AC/DC convertors, buffer batteries as part of a power conditioning system, and a secondary coolant system for both the power conversion system bearing lubricant and the batteries.
As early as 2015, though, there were reports (https://habr.com/en/post/381701/ ) that RSC Energia, the spacecraft manufacturer, were considering going with a simpler power conversion system, a thermionic one. Thermionic power conversion heats a material, which emits electrons (thermions). These electrons pass through either a vacuum or certain types of exotic materials (called Cs-Rydberg matter) to deposit on another surface, creating a current.
This would reduce the power conversion efficiency, and so reduce the overall electric power available, but it is a technology that the Russians have a long history with. These reactors were designed by the Arsenal Design Bureau, which apparently had a large (300-500 kW) thermionic design in hand. If you’d like to learn more about the history of thermionic reactors in the USSR and Russia, check out these posts:
This was potentially confirmed just a few days ago by the website Atomic Energy (http://www.atomic-energy.ru/news/2020/01/28/100970 ) by the first deputy head of Roscosmos, Yuri Urlichich. If so, this is not only a major change, but a recent one. Assuming the reactor itself remains in the same configuration, this would be a departure from the historical precedent of Soviet designs, which used in-core thermionics (due to their radiation hardness) rather than out-of-core designs, which were investigated by the US for the SNAP-8 program (something we’ll cover in the future).
So, for now we wait and see what the system will be. If it is indeed the thermionic system, then system efficiency will drop significantly (from somewhere around 30-40% to about 10-15%), meaning that far less electrical power will be available for the TEM.
Radiation Shielding

Hydrogen is useful for shielding most types of radiation, but the inclusion of boron compounds stops neutron radiation very effectively. This is important to minimize damage from neutron irradiation through both atomic displacement and neutron capture, and boron does a very good job of this.
Current TEM Status
Two Russian media articles came out within the past week about the TEM, which spurred me to write this article.
RIA, an official state media outlet, reported a couple days ago that the first flight of a test unit is scheduled for 2030. In addition:
Roscosmos announced the completion of the first project to create a unique space “tug” – a transport and energy module (TEM) – based on a megawatt-class nuclear power propulsion system (YaEDU), designed to transport goods in deep space, including the creation of long-term bases on the planets. A technical complex for the preparation of satellites with a nuclear tug is planned to be built at Vostochny Cosmodrome and put into operation in 2030. https://ria.ru/20200128/1563959168.html
A second report (http://www.atomic-energy.ru/news/2020/01/28/100970) said that the reactor was now using a thermionic power conversion system, which is consistent with the reports that Arsenal is now involved with the program. This is a major design change from the Brayton cycle option, however it’s one that could be considered not surprising: in the US, both Rankine and Brayton cycles have often been proposed for space reactors, only to have them replaced by thermoelectric power conversion systems. While the Russians have extensive thermoelectric experience, their experience in the more efficient thermionic systems is also quite extensive.
“Creation of theoretical and experimental backlogs to ensure the development of highly efficient rocket propulsion and power plants for promising rocket technology products, substantiation of their main directions (concepts) of innovative development, the formation of basic requirements, areas of rational use, design and rational level of parameters with development software and methodological support and guidance documents on the design and solution of problematic issues of creating a new generation of propulsion and power plants.”
Work continues on the Vostochny Cosmodrome facilities, and the reporting still concludes that they will be completed by 2030, when the first mass-production TEMs are planned to be deployed.
According to Yuri Urlichich, deputy head of Roscosmos, the prototype for the power plant would be completed by 2025, and life testing on the reactor would be completed by 2030. This is the second major delay in the program, and may indicate that there’s a massive redesign of the reactor. If the system has been converted to thermionic power, it would explain both the delay and the redesign of the spacecraft, but it’s not clear if this is the reason.
For now, we just have to wait and see. It still appears that the TEM is a major goal of both Roscosmos and Rosatom, but it is also becoming apparent that there have been challenges with the program.
Conclusions and Author Commentary
It deserves reiterating: for all intents and purposes, I’m some random person on the Internet, but my research record, as well as my care in reporting on developments with extensive documentation, is something that I think deserves attention. So I’m gonna put my opinion on this spacecraft out there.
This is a fascinating possibility. As I’ve commented on Twitter, the capabilities of this spacecraft are invaluable. Decommissioning satellites is… complicated. The so-called “graveyard orbits,” or those above geosynchronous where you park satellites to die, are growing crowded. Satellites break early in valuable orbits, and the operators, and the operating nations, are on the hook for dealing with that – except they can’t.
Additionally, while many low-cost launchers are available for low and mid Earth orbit launches, geostationary orbit is a whole different thing. The fact that India maintains a “Polar Satellite Launch Vehicle” (PSLV) and a “Geostationary Satellite Launch Vehicle” (GSLV) as two very different launch vehicles drives this home within a single national space launch architecture.
The ability to contract whatever operator runs TEM missions (I’m guessing Roscosmos, but I may be wrong), specify an orbital path post-booster-cutoff and a new final orbit, and have what is effectively an external, orbital-class stage come and move the satellite into that final orbit is… unprecedented. The idea of an inter-orbital tug is one that’s been proposed since the 1960s, before electric propulsion was practical. If this works the way the design specs suggest, it literally rewrites the way mission planning can be done in cislunar space for any satellite operator willing to take advantage of it (most obviously, military and intelligence customers outside Russia won’t be).
The other thing to consider in cislunar space is decommissioning satellites: dragging things down from GEO into an orbit low enough that they’ll burn up is costly in mass, and assumes that the propulsion and guidance, navigation, and control systems survive to the end of the satellite’s mission. For a satellite operator, and for the host nation with all the treaty obligations the OST requires it to take on, being able to drag defunct satellites out of orbit is incredibly valuable. The TEM can deliver one satellite and drag another into a disposal orbit on the way back. To paraphrase a wonderful character from Sir Terry Pratchett, Harry King: “They pay me to take it away, and they pay me to buy it after.” In this case, it’s the opposite: they pay me to take it out, and they pay me to take it back. Especially in graveyard orbit challenge mitigation, this is a potentially golden financial opportunity for the TEM operator: every mm/s of mission dV can potentially be operationally profitable. This is potentially the only system I’ve ever seen that can actually say that.
More than that, depending on payload restrictions for TEM cargoes, interplanetary missions can gain significant delta-v from using this spacecraft. Should mass production actually take place, it may even be possible to purchase the end-of-life (or more) dV of a TEM during decommissioning (something I’ve never seen discussed) to boost an interplanetary mission without having to pay the launch mass penalty of reaching Earth escape velocity. The spacecraft was proposed for crewed Mars mission propulsion for the first half of its existence, so it has the capability, but just as SpaceX Starship interplanetary missions require SpaceX to give up a Starship, the same applies here, and it has to be worth the while of the (in this case interplanetary) launch provider to lose the spacecraft for them to agree to it.
This is an exciting spacecraft, and one that I want to know more about. If you’re familiar with technical details about either the spacecraft or the reactor that I haven’t covered, please either comment or contact me via email at firstname.lastname@example.org
We’ll continue with our coverage of fluid fueled NTRs in the next post. These systems offer many advantages over both traditional, solid core NTRs and electrically propelled spacecraft such as the TEM, and making the details more available is something I’ve greatly enjoyed. We’ll finish up liquid fueled NTRs, followed by vapor fuels, then closed and open cycle gas core NTRs, probably by the end of the summer.
If you’re able to support my efforts to continue to make these sorts of posts possible, consider becoming a Patron at patreon.com/beyondnerva. My supporters help me cover systems like this, and also make sure that this sort of research isn’t lost, forgotten, or unavailable to people who come into the field after programs have ended.
Hello, and welcome back to Beyond NERVA! The blog itself has been quiet for a while for a number of reasons, but the website continues to grow! I’ve added extensive sections on radioisotope power sources and many details of their operation. There are now pages for radioisotope power sources in general (which you can find here), the fuel elements and heating units used in these systems (here), the considerations for fuel selection (which can be found here), and the various options for the fuel itself (each of these has its own page: 238Pu, 241Am, and 210Po).

I also covered the RTGs of the SNAP era (available here), which evolved concurrently with the SNAP-2, SNAP-10, and SNAP-8 reactors that we’ve already looked at in depth (each of those now has its own page, as well as the experimental and development reactors associated with the systems; just click on their names to find them!). There are also pages on the Multi-Hundred Watt RTG that powered the Voyager spacecraft, and on the General Purpose Heat Source RTG which powered Galileo, Ulysses, and Cassini, and still powers New Horizons; as this is released, a page will be coming online about the Multi-Mission RTG, NASA’s current workhorse RTG! Make sure to check out those pages for in-depth information on an ecosystem of power supplies which are fascinating, but often overlooked (including by me) in favor of the flashier, higher-powered fission reactor proposals.
Today, we’re going to look at a particular application of the Multi-Mission RTG, or MMRTG.
This is the newly announced Dragonfly mission to Titan. This mission will only be the second to touch down on the surface of that Saturnian moon, and promises to transform our understanding of the complex hydrocarbon cycles, liquid water (which is under the surface of Titan), and complex organic chemistry that may hold clues to the early atmospheric conditions of Earth.
Congratulations are very much in order for the Johns Hopkins University Applied Physics Laboratory team for their successful mission concept!
This mission will be primarily based around the quadcopter which has gained the most attention in the recent announcements. Using eight rotors mounted to four motor mounts, this system will charge a set of lithium ion batteries with the output of the MMRTG, and then use this higher-power-density supply to fly in short hops: on each flight it will scout out the landing site for a following hop, then land at a position on the Titanic surface that was already scouted on a previous flight, where it will settle for the science-gathering portion of the mission. There, a suite of instruments, including a mass spectrometer, a neutron and gamma spectrometer, a meteorological and seismic sensor suite, and a camera suite, will carry out the surface science. There may be other instruments that are yet to be fully determined, and the suite will be refined over the course of the final mission planning (which can begin now that the mission has been selected for funding).
Flight on Titan is far easier than on Earth due to the high atmospheric density (over 4 times higher than Earth’s at sea level) and low gravity (1/7 g) of the moon, and as such a host of flying probes have been proposed over the years, from balloons to helicopters to airships to airplanes – and even a radioisotope thermal rocket proposal! Furthermore, both surface and lake landers have been proposed.
As the team pointed out in their proposal, the Dragonfly concept combines the benefits of a Curiosity-class lander’s equipment suite and surface science capability with the mobility of an aerial platform. This provides the scientific advantages of surface deployment with the ability to relocate the lander at a far higher rate of travel than a wheeled rover such as Curiosity, a good balance of the advantages of each design type.
We’ll first look at the mission itself, from launch to landing on Titan, followed by the science goals of the lander, and finally we’ll look at the power supply relatively briefly, focusing on the thermal management strategies needed for both the lander and the RTG itself. All of the information here is as of July 2019, so much is still subject to change in the next few years before the launch window.

If you’re looking for a particular aspect of the mission, the links below will jump you to the appropriate section, but as with all missions, each phase or component informs every other one, so by skipping ahead you may miss some interesting tidbits about this mission.

Mission Profile Pre-Titan: Launch, Cruise, and EDL
Dragonfly will launch on either an Atlas V 541 or equivalent launcher on April 12th, 2025, and conduct a series of planetary flybys to get out to the Saturnian system. This means that a smaller, and therefore less expensive, launcher can be used, saving more funding for the spacecraft and science teams. However, since this launch vehicle (to my knowledge) hasn’t been firmly pinned down, differences in the actual launcher, as well as any off-nominal launch conditions, could change the exact timing of the gravitational assists listed below.
The first gravitational assist will be on April 11th, 2026 from Earth, followed by a Venusian gravitational assist on 4/16/2027, another Earth gravitational assist on 5/27/2028, and a final Earth gravitational assist on 9/3/2031. While there are options to rearrange these gravitational assists, this sequence was selected due to a number of orbital mechanics factors. Sadly, Jupiter will be out of phase with Saturn at the appropriate time, so it’s impossible to use the large planet’s convenient gravity well to shorten the trip to the Saturnian system.
After this series of four inner-system gravitational assists, Dragonfly will have the necessary dV to get to Titan. A mid-course correction around December 2, 2031 (while the spacecraft is between Mars and Jupiter) will ensure that the spacecraft is oriented correctly for Titan capture.
During this time, the MMRTG will use its secondary coolant loop (visible as the silver tubes at the base of the RTG’s fins in the above image) combined with a pumped coolant loop similar to what was used on Curiosity’s cruise stage, in order to reject the waste heat from the RTG during the time that it’s enclosed in the heat shield and cruise stage. It’s unclear in the mission documentation whether the RTG will be used to power any instruments during cruise, as Curiosity’s particle detection system was. The instrumentation on Dragonfly is significantly different from Curiosity, and it’s not apparent whether any of the instruments would either be useful in cruise, or perhaps whether operation of the cruise stage would interfere with those experiments that could be useful to the point that the data would simply be too messy or corrupted to bother.
The cruise stage design is also something that I haven’t been able to find, so it’s possible (although unlikely due to increased electrical bus complexity) that the MMRTG could be used to power scientific instruments on the cruise stage itself. While the majority of the time the spacecraft will be doing the series of gravitational assists in the inner Solar System, allowing for the use of solar panels, by the time it is heading out of the inner Solar System the power available from solar radiation will be dropping off exponentially. This could likely be handled by battery power on board the cruise stage, or it could possibly be handled by the MMRTG. However, the increased complexity of the electrical system, should it be connected to both the cruise stage and the lander, may be sufficient to have the mission planners decide not to go this route.
Entry, Descent, and Landing for Dragonfly
Finally, at the close of 2034 (12/30), the spacecraft will reach Titan, and perform its entry, descent, and landing procedures.
For those that remember Curiosity’s landing, this was (rightly) touted as the “Seven Minutes of Terror,” involving a huge number of complex and risky maneuvers with a collection of sometimes exotic craft to land in Gale Crater. These included a set of hypersonic parachutes designed for the thin Martian atmosphere, followed by the use of an eight-engined “sky crane,” which hovered meters off the surface of Mars to lower the rover on a winch and cable system (during which the rover’s wheel and bogey driving system deployed for the first time), placed the rover on the surface of Mars, disconnected the cables, and then flew away to crash at some distance. This approach was necessary due to the combination of thin atmosphere, reasonably high gravity, and the need to minimize dust and debris collecting on the surface of the rover (to protect the wet and dry lab sample collection system from contamination). It was a stunning series of firsts in an EDL system, one which rightly received worldwide attention – and led NASA to repeat the process for Mars 2020.
Titan, on the other hand, is a whole different ballgame. Its thick atmosphere and low gravity make it a very different EDL environment, one that is in many ways easier to deal with but imposes its own set of constraints. Because the gravity is so low (about 1/7 of Earth's), Titan's sensible atmosphere extends far higher than Earth's does. In fact, even though the atmospheric density at the surface is 4 times that of Earth's, the surface pressure is only slightly higher (1.4x). With a far lighter lander (weight = mass x gravity), far better lift, and onboard flight capability, the sky-crane maneuver isn't necessary on Titan.
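The scale of that advantage can be sketched with ideal rotor momentum theory, using the gravity and density figures quoted above. The lander mass and rotor disk area below are hypothetical round numbers (not Dragonfly specifications), and the final power ratio is independent of them:

```python
import math

# Ideal induced hover power from momentum theory: P = W**1.5 / sqrt(2 * rho * A),
# where W is vehicle weight, rho is air density, and A is rotor disk area.
def hover_power(mass_kg, gravity_ms2, rho_kgm3, disk_area_m2):
    weight = mass_kg * gravity_ms2  # weight = mass x gravity
    return weight ** 1.5 / math.sqrt(2.0 * rho_kgm3 * disk_area_m2)

MASS = 450.0  # kg, hypothetical lander mass
AREA = 4.0    # m^2, hypothetical total rotor disk area

# Earth: 9.81 m/s^2, 1.225 kg/m^3. Titan per the text: ~1/7 g, ~4x air density.
p_earth = hover_power(MASS, 9.81, 1.225, AREA)
p_titan = hover_power(MASS, 9.81 / 7.0, 4.0 * 1.225, AREA)

print(f"Hovering on Titan takes ~{p_titan / p_earth:.1%} of the power needed on Earth")
```

Low weight enters at the 1.5 power and high density helps under the square root, so hovering on Titan takes only a few percent of the power the same vehicle would need on Earth.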
The beginning will look similar: the cruise stage, outwardly very similar to the MSL cruise stage, will be ejected as the craft begins to enter the atmosphere. A set of parachutes will then slow the craft until an optimal velocity is reached above the surface of Titan. During this time, the radar designed to map the Titanic surface in flight will likely be used to verify the lander's position and find an acceptable site for its first touchdown. Based on the typical flight profile of Dragonfly, and depending on the charge level in the batteries (which will likely be fully charged) compared to how much power it will take to reach a safe landing site, the landing site for the next flight may be scouted as well.
This video from JHUAPL shows the EDL process for Dragonfly:
The first landing site (as well as all
subsequent ones) is dependent on communication with Earth.
Dragonfly is designed to communicate directly with Earth, something
which is standard for outer solar system missions due to the lack of
a communications architecture in a useful location (the one exception
is the Huygens probe, also deployed to Titan, which used the Cassini
spacecraft as a communications relay). This places certain limits on
the location that Dragonfly can deploy to on the Titanic surface, and
also where the lander will move across the surface during its ground
science mission – but more on that in the surface mission section.
The mission planners for Dragonfly have selected a similar landing
latitude and season as the Huygens mission’s landing in order to
maximize the knowledge of the atmospheric conditions for the initial,
riskiest portion of the lander’s atmospheric operations. This also
maximizes communications availability with Earth and works well with
orbital mechanical constraints upon entering the Saturnian system and
aerocapture by Titan itself.
After the lander is safely on the surface of Titan, it will deploy
its communications dish, send data back to Earth, and begin surface
science experiments. This will be the start of the Dragonfly surface
mission, which will last a number of years – until either a failure
occurs on the lander which prevents its further operation, or the
MMRTG degrades to the point that communication with Earth is no
longer possible (the science equipment takes less power than the
communications do, so even if Dragonfly isn’t able to fly it can
still provide valuable scientific data – if it can phone home).
This brings us to the purpose of the whole mission, which will be explored in the next section.
Surface Mission and Flight Profile
Titan is a fascinating place. One of the coldest locations in the Solar System thanks to its position far from the Sun, it has complex hydrocarbon cycles that in many ways mimic the hydrological cycles of Earth (but with a mix of liquid hydrocarbons rather than water), and a chemical profile that may resemble Earth's at the beginning of the evolution of life. Together these make it a compelling place to conduct science, with implications reaching not only far back into Earth's past, but also forward into the future of humanity in the Solar System.
The surface mission of Dragonfly largely comes in two major phases: landed science and communications, and movement. While some data is collected in flight (mostly imaging), the in-depth data collection and transmission are done on the surface of the moon. This distinction matters because, while the MMRTG is the best power supply available for this mission, it can't deliver power at the rate flight demands. Instead, a set of lithium-ion batteries, stored in an insulated box heated by waste heat from the MMRTG, is charged while on the surface; once the desired charge level is reached, a new flight can occur.
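The charge-then-fly cycle is simple arithmetic: whatever RTG power is left over after the always-on loads trickles into the batteries until enough energy is banked for a flight. A minimal sketch with loudly hypothetical numbers (only the ~70 We RTG figure comes from the mission documentation discussed later; the rest are placeholders for illustration):

```python
RTG_POWER_W = 70.0        # assumed usable RTG output at Titan (see power section)
HOUSEKEEPING_W = 30.0     # hypothetical always-on load (instruments, avionics, heaters)
FLIGHT_ENERGY_WH = 500.0  # hypothetical battery energy drawn per flight

# Net power flowing into the batteries while grounded.
charge_power_w = RTG_POWER_W - HOUSEKEEPING_W

# Hours of surface charging needed before the next flight.
recharge_hours = FLIGHT_ENERGY_WH / charge_power_w

print(f"~{recharge_hours:.1f} hours of charging per flight")
```

Whatever the real numbers turn out to be, the structure of the trade is the same: lower housekeeping loads or a smaller flight energy budget directly shorten the wait between flights.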
Let's begin by looking at the science instruments which will be included on Dragonfly. This list is quoted from "Dragonfly: A Rotorcraft Lander Concept for Scientific Exploration of Titan," a white paper from Lorenz et al. at JHUAPL, published January 9th, 2019:
DraMS—Dragonfly Mass Spectrometer (Goddard Space Flight Center). A central element of the payload is a highly capable mass spectrometer instrument, with front-end sample processing able to handle high-molecular-weight materials and samples of prebiotic interest. The system has elements from the highly successful SAM (Sample Analysis at Mars) instrument on Curiosity, which has pyrolysis and gas chromatographic analysis capabilities, and also draws on developments for the ExoMars/MOMA (Mars Organic Material Analyser).
DraGNS—Dragonfly Gamma-Ray and Neutron Spectrometer (APL/Goddard Space Flight Center). This instrument allows the elemental composition of the ground immediately under the lander to be determined without requiring any sampling operations. Note that because Titan’s thick and extended atmosphere shields the surface from cosmic rays that excite gamma rays on Mars and airless bodies, the instrument includes a pulsed neutron generator to excite the gamma-ray signature, as also advocated for Venus missions. The abundances of carbon, nitrogen, hydrogen, and oxygen allow a rapid classification of the surface material (for example, ammonia-rich water ice, pure ice, and carbon-rich dune sands). This instrument also permits the detection of minor inorganic elements such as sodium or sulfur. This quick chemical reconnaissance at each new site can inform the science team as to which types of sampling (if any) and detailed chemical analysis should be performed.
DraGMet—Dragonfly Geophysics and Meteorology Package (APL). This instrument is a suite of simple sensors with low-power data handling electronics. Atmospheric pressure and temperature are sensed with COTS sensors. Wind speed and direction are determined with thermal anemometers (similar to those flown on several Mars missions) placed outboard of each rotor hub, so that at least one senses wind upstream of the lander body, minimizing flow perturbations due to obstruction and by the thermal plume from the MMRTG. Methane abundance (humidity) is sensed by differential near-IR absorption, using components identified in the TiME Phase A study. Electrodes on the landing skids are used to sense electric fields (and in particular the AC field associated with the Schumann resonance, which probes the depth to Titan’s interior liquid water ocean) as well as to measure the dielectric constant of the ground. The thermal properties of the ground are sensed with a heated temperature sensor to assess porosity and dampness. Finally, seismic instrumentation assesses regolith properties (e.g., via sensing drill noise) and searches for tectonic activity and possibly infers Titan’s interior structure.
DragonCam—Dragonfly Camera Suite (Malin Space Science Systems). A set of cameras, driven by a common electronics unit, provides for forward and downward imaging (landed and in flight), and a microscopic imager can examine surface material down to sand-grain scale. Panoramic cameras can survey sites in detail after landing: in many respects, the imaging system is similar to that on Mars landers, although the optical design takes the weaker illumination at Titan (known from Huygens data) into account. LED illuminators permit color imaging at night, and a UV source permits the detection of certain organics (notably polycyclic aromatic hydrocarbons) via fluorescence.
Engineering systems. Data from the inertial measurement unit (IMU) may be used to recover an atmospheric density profile via the deceleration history during entry. IMU and other navigation data may provide constraints on winds during rotorcraft flight. Additionally, the radio link via Doppler and/or ranging measurements may shed light on Titan’s rotation state, which, in turn, is influenced by its internal structure.”
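The density-profile trick mentioned under "Engineering systems" is worth a quick illustration: during hypersonic entry, drag dominates the forces on the capsule, so each measured deceleration sample can be inverted for atmospheric density via D = 0.5 * rho * v^2 * Cd * A. A sketch under assumed entry parameters (the mass, drag coefficient, and aeroshell area below are hypothetical, not Dragonfly values):

```python
# Invert the drag equation for density: rho = 2 * m * a / (Cd * A * v^2),
# where a is the drag deceleration from the IMU and v is the velocity
# reconstructed by integrating the IMU data.
def density_from_decel(mass_kg, accel_ms2, v_ms, cd, area_m2):
    return 2.0 * mass_kg * accel_ms2 / (cd * area_m2 * v_ms ** 2)

rho = density_from_decel(
    mass_kg=600.0,   # hypothetical entry mass
    accel_ms2=12.0,  # measured drag deceleration
    v_ms=1500.0,     # reconstructed velocity
    cd=1.6,          # typical blunt-body drag coefficient (assumption)
    area_m2=5.3,     # hypothetical aeroshell reference area
)
print(f"inferred density at this point in the trajectory: {rho:.2e} kg/m^3")
```

Repeating this at each timestep of the deceleration history yields a density-versus-altitude profile of the atmosphere the capsule flew through.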
It's unclear whether all electrical power will be routed through these batteries or not. While there are advantages from a power conditioning point of view (ensuring the correct voltage and current, preventing power dropouts or spikes to sensitive instruments, etc.), it can also shorten battery life due to the constant discharging of the batteries, and the resulting degradation of the anode in particular. It's unclear which power conditioning scheme will be used for the always-on systems, such as the meteorological package, but the high-powered systems will likely draw power from the batteries exclusively.
In order to deploy these instruments, the lander will fly from one location to another, scouting the landing site for the following flight before setting down at the target for the current one. A good schematic of the flight profile can be seen below:
This ensures that the flight time for the next flight can be maximized, since a safe landing location is already known before the lander takes off, and it provides margin in flight time for every leg of the mission after the first. It's unclear whether the first flight will also use this profile, since that's part of the entry, descent, and landing sequence, but there's no apparent reason it couldn't if the lander is released from the backshell at moderate altitude and velocity.
Due to the remote nature of this mission, there aren’t many (if
any) options available to use communications hubs off Earth as relays
for Dragonfly, a major contrast to Martian operations where the many
orbiters around the planet also serve as communications satellites
for the various landers and rovers operating on the surface of Mars.
This means that in order for the scientific and engineering data from
Dragonfly to be returned to Earth, and additional commands to be
transmitted, the rover needs line of sight to Earth. This is done via
a deployable high gain antenna, which will be stowed for flight
operations to reduce drag and stress on the antenna itself.
This complicates matters in two ways. First, if the lander is at too high or too low a latitude, there's no line of sight available to Earth from the Titanic surface, meaning that communication is impossible. Second, communication windows are constrained by the length of the sol (the extraterrestrial equivalent of a day; on Titan this is almost 16 Earth days, the same as Titan's orbital period around Saturn), so Earth is only in view for part of each sol. While it may be possible to do a hop outside the communications window, collect scientific data, then return to the communications window to transmit the data, doing so increases the chance of an unrecoverable failure, since the engineering team on Earth would be unable to troubleshoot and resolve any problems that arose in the meantime.
The lander isn't tied to gathering sunlight the way a solar-powered spacecraft is, but the ability to communicate with Earth and having the Sun visible are conditions that closely overlap, so night-time science on this mission essentially means that all data would need to be stored on board, and the rate of data collection and the data storage capabilities of Dragonfly aren't clear from current documentation. This means that, should the lander need to overnight on Titan (a very real possibility), some of the data may need to be written over to make room for more immediately interesting observations.
Whether this is an avoidable circumstance or not is something I've been unable to determine, but the mission design team have made provisions for overnighting Dragonfly on Titan, and this may in fact be required to allow the time necessary to recharge the batteries to flight condition. If so, it's likely that the mission's data storage will be sufficient to collect all desired data through the Titanic night and transmit it during daytime surface operations, when line of sight with Earth is available.
Now that we’ve looked at what Dragonfly is going to be doing
on the surface of Titan, let’s look at how it will do it
from a power point of view: the Multi-Mission RTG, or MMRTG.
The MMRTG: NASA and the DOE's Flagship RTG
The MMRTG is also the first RTG built for NASA in decades that was designed to operate in an atmosphere, a major thermal management change from the typical spaceborne MHW-RTG and GPHS-RTG systems. RTGs had, of course, been designed to reject heat into both atmosphere and liquid during the SNAP RTG program, which included naval and airborne applications (more information on those systems here), but this capability hadn't been used by NASA since the SNAP-19 (the Viking landers used two of the generators). The non-GPHS elements of the MMRTG are manufactured by Teledyne (the company that the thermocouple inventors worked for in the 1960s and 1970s) and Aerojet Rocketdyne, while Lockheed Martin and the Department of Energy provide the systems and materials for the GPHS modules that fuel the MMRTG.
The thermoelectric generator (TEG) assembly of the MMRTG is based on a legacy design: the lead telluride / tellurium-antimony-germanium-silver (PbTe/TAGS-85) thermocouple system, first used in the SNAP-19 RTG (more information available here).
The MMRTG uses an updated configuration of 768 thermocouples arranged as two series-parallel chains for fault tolerance: any thermocouple that fails does so individually, resulting in a negligible, isolated loss of power that is easy to account for in mission planning.
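A toy model shows why this wiring choice matters for mission planning. Assuming each of the 768 couples contributes equally to output and fails open in isolation (a deliberate simplification of the real electrical network, not the flight wiring diagram):

```python
N_COUPLES = 768  # total thermocouples in the MMRTG

# With series-parallel wiring, one open-circuit couple removes only its
# own contribution instead of breaking an entire series string.
def fractional_power_loss(failed_couples):
    return failed_couples / N_COUPLES

# A single failure is a fraction-of-a-percent event, not a mission-ender.
print(f"one failed couple costs ~{fractional_power_loss(1):.2%} of output")
print(f"ten failed couples cost ~{fractional_power_loss(10):.2%} of output")
```

In a purely series string, by contrast, a single open couple would take out the whole string; the series-parallel topology is what turns failures into the "negligible and isolated" losses described above.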
One significant barrier to the use of these materials is differential sublimation of the thermoelectric materials themselves. This is unfortunate, but there are a number of ways to manage the effect. In the current incarnation, a mix of argon and helium is used as a cover gas, though other cover gas compositions are possible. Additionally, silica insulation immediately surrounding the thermocouples reduces sublimation rates. This has significant implications for Dragonfly, as we'll see below.
The MMRTG is fascinating for a number of reasons in the context of thermal management. The two most prominent differences between the MMRTG and almost every other off-Earth RTG are that its radiators are designed to work in both vacuum and atmospheric environments, and that it can use a pumped coolant thermal management system during operation, the first to do so during a mission with the Mars Science Laboratory cruise stage. This was not only critical for the cruise stage's thermal management, but also had an impact on the entry, descent, and landing profile for a lander or rover.
When operating in space as part of a composite spacecraft, such as Dragonfly in cruise, thermal transfer points are mechanically and thermally attached to the radiator fins of the RTG. The cruise stage for Curiosity had a total of 23 m of aluminum tubing in a two-split-flow configuration integrated into the cruise stage for this purpose, which, when welded to a 1.5 mm aluminum sheet, creates a radiator about 6 m^2 in area. This tubing is integrated into the base of the fins on the radiator and was jettisoned for surface operations (although exactly when and how this occurs is unclear).
The spacecraft interface consists of a mounting bracket for mechanical attachment, as well as an electrical and telemetry connection to the spacecraft through a single connector. Mechanical integration uses a four-bolt mounting interface. The only telemetry provided comes from platinum resistance thermometers within the RTG.
The RTG is only ever integrated onto the spacecraft at the launch pad, due to nuclear material security concerns, waste heat management simplification, and radiological safety. Interestingly, the entire RTG was integrated into Curiosity on the launch pad, after the rest of the rover and cruise stage had already been integrated into the launch vehicle (a ULA Atlas V 541). This will be done with Mars 2020 as well, with a mass simulator used in its place until then. I presume the same will be done for Dragonfly.
While the MMRTG (more information available here) was designed to handle the Titan environment from the beginning, there are many differences between this version of the MMRTG and those that are (and will be) used on Mars. Additionally, the fact that aerodynamics and mass distribution are major design criteria for Dragonfly places additional requirements on the use of the system.
A standard MMRTG uses an eight bladed, cylindrical radiator to reject
heat. This is present on both the Mars and deep space version of the
MMRTG, but the Martian version uses a pair of shields on either side
of the rover to both control air flow past the radiator (limiting the
amount of convective heat loss) and to capture waste heat from the
RTG to heat certain temperature-sensitive system components. These
shields don’t touch the radiator, and the configuration of the
radiator is the same as the outer space version of the system
(although the deep space version may be painted black instead of
white for thermal management).
All systems that use radiators have a minimum and maximum operating temperature for the radiator itself; for the MMRTG, this is defined by the temperature at the root of each fin. While the MMRTG was designed with a maximum allowable fin-root temperature of 200 C (!) (with a corresponding loss in conversion efficiency due to the lower thermal gradient) for Martian or orbital operations in an absolute worst-case scenario, the system faces the opposite problem on Titan: the incredibly cold, dense atmosphere strips heat from the radiator far faster than vacuum or the thin Martian atmosphere would, threatening to drive the fin-root temperature below its minimum allowable value. This requires the radiator to be insulated from the exterior environment to a certain degree, meaning that the heat rejection system needed to be reworked for Titan.
Of course, the fins on the radiator aren't the most aerodynamic things around: not only would they make changes in yaw and roll more difficult, but the upward angle of the RTG's mounting causes problems with drag as well.
Fortunately, both of these problems can be addressed together by taking an idea already in use on the Curiosity rover and extending and adapting it for the Titanic environment. On Mars, as we already mentioned, a pair of shields is used on either side of the rover. For Dragonfly, those shields are extended into a cylindrical housing for the MMRTG, as seen at the back of the lander. Not only does this keep the RTG components at the appropriate temperature, it also improves the lander's aerodynamics. This is especially important because, while lift is easy to generate on Titan, drag is also far more powerful.
Dragonfly and the RTG: Unknown Unknowns in Mission Design and Power Supply
While the MMRTG has been performing within the design envelope for the needs of the Curiosity mission on Mars for years, it has shown some moderately concerning degradation behaviors that the Dragonfly team are taking into account when designing the power profile for the mission. The JPL "Radioisotope Power Systems Reference Book for Mission Designers and Planners," by Young Lee and Brian Bairstow at JPL, covers many of the details of the MMRTG, as well as recommendations on the design margins that should be adhered to when applying this power supply to a mission proposal.
The first concern is fuel age. While the MMRTG on Curiosity supplied 114 We of power on landing on Mars (and this was after three years in storage and four years of vehicle integration, launch, and cruise), the Reference Book lists the conservative power output for a brand new MMRTG as only 107 We on the surface of Mars. Curiosity's pre-operational timeline is also longer than that of the Mars 2020 rover's MMRTG, which was delivered in August of 2018 for a 2020 launch (although when its fuel was produced is something I have yet to determine). This bodes well for having sufficient power from the 238Pu fuel for Dragonfly at the beginning of its surface mission, nine years after launch.
Radioactive decay isn't the only cause of regular degradation for the MMRTG. The thermoelectric generator itself, built from PbTe/TAGS-85 thermocouples descended from those used by the Pioneer 10 and 11 probes, loses thermoelectric material (mainly germanium) over time through sublimation and migration out of the thermocouples themselves. This is currently reduced by using a cover gas of argon and helium in a carefully controlled ratio, but sadly this seems to be less effective than hoped on the Martian surface. Additionally, silica insulation immediately surrounding the thermocouples can help reduce sublimation, but whether that's currently in use is unclear.
The MMRTG's output was assumed to degrade between 3.5% and 4.8% per year, between the decay energy decline of the 238Pu fuel and the sublimation of the thermocouple materials themselves. Sadly, real-world data from Curiosity shows that the MMRTG on board the rover is degrading at the top end of that scale: 4.8% per year. This is a fact to worry any mission planner, and one that the Dragonfly team at JHUAPL have taken into account in the mission design.
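Those degradation rates compound, and a one-line model shows why the high end of the range matters. A sketch assuming a simple compound annual loss from a ~110 We beginning-of-life output (a round-number assumption on my part, not an official figure):

```python
# Model total output decline (fuel decay + thermocouple degradation folded
# into one annual rate) as simple compound loss: P(t) = P0 * (1 - r)**t.
def power_after(p0_we, annual_loss, years):
    return p0_we * (1.0 - annual_loss) ** years

P0_WE = 110.0  # assumed approximate beginning-of-life electrical output

for rate in (0.035, 0.048):  # the 3.5%-4.8%/yr range quoted above
    print(f"{rate:.1%}/yr -> {power_after(P0_WE, rate, 9):.0f} We after 9 years")
```

At 4.8% per year this model lands near the ~70 We that the Dragonfly team assumes for arrival at Titan, while the optimistic 3.5% rate would leave roughly 10 We of additional margin.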
While advanced mitigation techniques, such as cladding the thermocouples in Al2O3 via an atomic layer deposition process (a highly regular clad similar to that on nuclear fuel elements, but far more exacting than average), have been proposed and demonstrated by the University of Dayton, it's unclear whether these techniques will be used on Dragonfly due to the unknowns they can introduce into the system's behavior.
Because of the long cruise phase of the mission, the Dragonfly team assume that the MMRTG will only be able to provide about 70 We of power once the mission arrives at Titan. This is sufficient to charge the Li-ion batteries onboard the lander (held in a thermally insulated box and kept at temperature with waste heat from the RTG) for both flight and energy-intensive testing, as well as to power the scientific instruments that will run during grounded science operations. According to Lorenz et al. at JHUAPL in 2018:
“Although sample acquisition and chemical analysis are somewhat power-hungry activities, they require only a few hours of activity. Science activities that require continuous monitoring, namely meteorological and seismological measurements, although of low power, actually dominate the payload energy budget. Indeed, for these extended periods, the lander avionics are powered down and data acquisition is performed only by the instrument, to maximize the rate of recharge of the battery.”
This means there is more than enough margin for a years-long successful mission even from an MMRTG degrading at the high end of the curve, so the power supply is unlikely to be a significant risk to the mission.
Titan, Here We Come!
The exploration of Titan has long been a goal, ever since the Voyager spacecraft first sent data back from this fascinating moon. Cassini and Huygens sent back reams of data, but sadly only served to whet our appetite for more.
Now, Dragonfly will provide us far more data, in an incredibly mobile platform, on the composition, chemical processes, and weather on Titan. This will not only increase our knowledge of the moon itself, but early chemistry on Earth and how it could have led to the rise of life in the Solar System.
Sadly, it will be 2034 before we receive data back from Dragonfly on the surface of Titan, but this is not unusual for so distant a location. Until then, we can only follow the mission development, cheer on the team at Johns Hopkins University APL, and wait with bated breath for the first data to be transmitted from this distant, intriguing moon.
For more information on the MMRTG, make sure to check out the new MMRTG page, available here: MMRTG page.
Hello, and welcome back to Beyond NERVA! As some of you may have noticed, the website has moved! Yes, we’re now at beyondnerva.com! I’m working on updating the webpage, and am getting the pieces together for a major website redesign (still a ways off, but lots of the pieces are starting to fall into place) to make the site easier to navigate and more user friendly. Make sure to update your bookmarks with this new address! With that brief administrative announcement out of the way, let’s get back to our look at in-space fission power plants.
Today, we’re going to continue our look at the SNAP program, America’s first major attempt to provide electric power in space using nuclear energy, and finishing up our look at the zirconium hydride fueled reactors that defined the early SNAP reactors by looking at the SNAP-8, and its two children – the 5 kW Thermoelectric Reactor and the Advanced Zirconium Hydride Reactor.
SNAP-8 was the first reactor designed with crewed space stations in mind. While SNAP-10A was a low-power system (500 watts as flown, later upgraded to 1 kW), and SNAP-2 was significantly larger (3 kW), there was a potential need for far more power. Crewed space stations take a lot of power (the ISS uses close to 100 kWe, as an example), and neither the SNAP-10A nor the SNAP-2 was capable of powering the space stations that NASA was in the beginning stages of planning.
Initially designed to be far higher powered, with 30-60 kilowatts of electrical power, this was an electric supply that could power a truly impressive outpost for humanity in orbit. However, the Atomic Energy Commission and NASA (which was just coming into existence at the time this program was started) didn’t want to throw the baby out with the bath water, as it were. While the reactor was far higher powered than the SNAP 2 reactor that we looked at last time, many of the power system’s components are shared: both use the same fuel (with minor exceptions), both use similar control drum structures for reactor control, both use mercury Rankine cycle power conversion systems, and perhaps most attractively both were able to evolve with lessons learned from the other part of the program.
While SNAP 8 never flew, it was developed to a very high degree of technical understanding, so that if the need for the reactor arose, it would be available. One design modification late in the SNAP 8 program (when the reactor wasn’t even called SNAP 8 anymore, but the Advanced Zirconium Hydride Reactor) had a very rare attribute in astronuclear designs: it was shielded on all sides for use on a space station, providing more than twice the electrical power available to the International Space Station without any of the headaches normally associated with approach and docking with a nuclear powered facility.
Let’s start back in 1959, though, with the SNAP 8, the first nuclear electric propulsion reactor system.
SNAP 8: NASA Gets Involved Directly
The SNAP 2 and SNAP 10A reactors were both collaborations between the Atomic Energy Commission (AEC), who were responsible for the research, development, and funding of the reactor core and primary coolant portions of the system, and the US Air Force, who developed the secondary coolant system, the power conversion system, the heat rejection system, the power conditioning unit, and the rest of the components. Each organization had a contractor that they used: the AEC used Atomics International (AI), one of the movers and shakers of the advanced reactor industry, while the US Air Force went to Thompson Ramo Wooldridge (better known by their acronym, TRW) for the SNAP-2 mercury (Hg) Rankine turbine and Westinghouse Electric Corporation for the SNAP-10’s thermoelectric conversion unit.
1959 brought NASA directly into the program on the reactor side of things, when the agency requested a fission reactor in the 30-60 kWe range for up to one year of operation; one year later, the SNAP-8 Reactor Development Program was born. It would use a similar Hg-based Rankine cycle to the SNAP-2 reactor, which was already under development, but the increased power requirements and the unique environment of the power conversion system necessitated significant redesign work, which was carried out by Aerojet General as the prime contractor. This led to a 600 kWt reactor core with a 700 C outlet temperature. As with the SNAP-2 and SNAP-10 programs, the SNAP-8's reactor core was funded by the AEC, but in this case the power conversion system was the funding responsibility of NASA.
The fuel itself was similar to that in the SNAP-2 and -10A reactors, but the fuel elements were far longer and thinner than those of the -2 and -10A. Because the fuel element geometry was different, and the power level of the reactor was so much higher than the SNAP-2 reactor, the SNAP-8 program required its own experimental and developmental reactor program to run in parallel to the initial SNAP Experimental and Development reactors, although the materials testing undertaken by the SNAP-2 reactor program, and especially the SCA4 tests, were very helpful in refining the final design of the SNAP-8 reactor.
The power conversion system for this reactor was split in two: identical Hg turbines would be used, with either one or both running at any given time depending on the power needs of the mission. This allows for more flexibility in operation, and also simplifies the design challenges involved in the turbines themselves: it’s easier to design a turbine with a smaller power output range than a larger one. If the reactor was at full power, and both turbines were used, the design was supposed to produce up to 60 kW of electrical power, while the minimum power output of a single turbine would be in the 30 kWe range. Another advantage was that if one was damaged, the reactor would continue to be able to produce power.
Due to the much higher power levels, an extensive core redesign was called for, meaning that different test reactors would need to be used to verify this design. While the fuel elements were very similar, and the overall design philosophy was operating in parallel to the SNAP-2/10A program, there was only so much that the tests done for the USAF system would be able to help the new program. This led to the SNAP-8 development program, which began in 1960, and had its first reactor, the SNAP-8 Experimental Reactor, come online in 1963.
SNAP-8 Experimental Reactor: The First of the Line
The first reactor in this series, the SNAP-8 Experimental Reactor (S8ER), went critical in May 1963 and operated until 1965. It ran for 2,522 hours at above 600 kWt, and over 8,000 hours at lower power levels. The fuel elements for the reactor were 14 inches long and 0.532 inches in diameter, containing uranium-zirconium hydride (U-ZrH, the same basic fuel type as the SNAP-2/10A system that we looked at last time) enriched to 93.15% 235U, with 6 x 10^22 hydrogen atoms per cubic centimeter.
The biggest chemical change in this reactor's fuel elements compared to the SNAP-2/10A system was in the hydrogen barrier inside the metal clad: instead of using gadolinium as its burnable poison (which would absorb neutrons, then transmute into a more neutron-transparent element as the reactor underwent fission over time), the S8ER used samarium. The reasons for the change are rather esoteric, relating to the neutron spectrum of the reactor, the particular fission products and their ratios, the thermal and chemical characteristics of the fuel elements, and other factors. However, the change was so advantageous that the new burnable poison would eventually be used in the SNAP-2/10A system as well.
The fuel elements were still loaded in a triangular array, but formed more of a cylinder than a hexagon like in the -2/10A, with small internal reflectors to fill out the smooth cylinder of the pressure vessel. The base and head plates that held the fuel elements were very similar to the smaller design, but obviously had more holes to hold the increased number of fuel elements. The NaK-78 coolant (identical to the SNAP-2/10A system) entered the bottom of the reactor into a space in the pressure vessel (a plenum), flowed through the base plate and up the reactor, then exited the top of the pressure vessel through an upper plenum. A small startup neutron source (sort of like a spark plug for a reactor) was mounted to the top of the pressure vessel, by the upper coolant plenum. The pressure vessel itself was made out of 316 stainless steel.
Instead of four control drums, the S8ER used six void-backed control drums, directly derived from the SNAP-2/10A control system. Two of the drums were used for gross reactivity control – either fully rotated in or out, depending on whether the reactor was under power. Two were used for finer control, but under nominal operation would remain essentially fixed in position over long periods; as the reactor approached end of life, these drums would rotate in to maintain the reactivity of the system. The final two were used for fine control, to adjust reactivity for both reactor stability and power demand. The drums used the same type of bearings as the -2/10A system.
The S8ER first underwent criticality benchmark tests (pre-dry critical testing) from September to December 1962 to establish the reactor’s precise control parameters. Before the reactor was filled with the NaK coolant, water immersion experiments for failure-to-orbit safety testing (an additional set of tests to the SCA-4 testing, which also supported SNAP-8) were carried out between January and March of 1963. After a couple months of modifications and refurbishment, dry criticality tests were once again conducted on May 19, 1963, followed the next month by the reactor reaching wet critical power levels on June 23. Months of low-power testing followed, to establish the precise reactor control element characteristics, thermal transfer characteristics, and a host of other technical details before the reactor was brought up to full design power.
The reactor was shut down from early August to late October because some of the water coolant channels used for the containment vessel failed, necessitating that the entire structure be dug up, repaired, and reinstalled, an intensive repair that required significant reworking of the facility. Further modifications and upgrades to the facility continued into November, but by November 22 the reactor underwent its first “significant” power level testing. Sadly, this revealed problems with the control drum actuators, requiring the reactor to be shut down again.
After more modifications and repairs, lower power testing resumed to verify the repairs, study reactor transient behavior, and address other considerations. The SNAP-8 Experimental Reactor finally achieved its first full-power, at-temperature testing on December 11, 1963. Shortly after, the reactor had to be shut down again to repair a NaK leak in one of the primary coolant loop pumps, but it was up and operating again shortly thereafter. Lower power tests were conducted to evaluate the samarium burnable poison in the fuel elements, measure xenon buildup, and measure hydrogen migration in the core until April 28, interrupted briefly by another NaK pump failure and a number of instrumentation malfunctions in the automatic scram system (which was designed to automatically shut down the reactor in the case of an accident or certain types of reactor behavior). Despite these problems, April 28 marked 60 days of continuous operation at 450 kWt and 1300 F (design temperature, but less-than-nominal power levels).
After a shutdown to repair the control drive mechanisms (again), the reactor went into near-continuous operation, at either 450 or 600 kWt of power output and 1300 F outlet temperature, until April 15, 1965, when the reactor was shut down for the last time. By September 2, 1964, the S8ER had operated at design power and temperature levels for 1000 continuous hours, and went on in that same test to exceed the maximum continuous operation time of any SNAP reactor to date on November 5 (1152 hours). On January 18, 1965, it reached 10,000 hours of total operations, and in February of that year reached 100 days of continuous operation at design power and temperature conditions. Just 8 days later, on February 12, it exceeded the longest continuous operation of any reactor to that point (147 days, beating the Yankee reactor). March 5 marked the one year anniversary of the core outlet temperature being continuously at over 1200 F. By April 15, when the reactor was shut down for the last time, it had achieved an impressive set of accomplishments:
5016.5 hours of continuous operation immediately preceding the shutdown (most at 450 kWt, all at 1200 F or greater)
12,080 hours of total operations
A total of 5,154,332 kilowatt-hours of thermal energy produced
91.09% Time Operated Efficiency (percentage of time that the reactor was critical) from November 22, 1963 (the day of first significant power operations of the reactor), and 97.91% efficiency in the last year of operations.
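As a quick sanity check (my own back-of-the-envelope arithmetic, not a figure from the test reports), the totals above are self-consistent: dividing the total thermal energy by the total operating hours recovers an average power right where you’d expect for a life spent mostly at 450 kWt with long stretches at lower power.

```python
# Back-of-the-envelope check on the S8ER lifetime totals quoted above.
total_energy_kwh = 5_154_332  # total thermal energy produced, kWh
total_hours = 12_080          # total hours of operation

avg_power_kwt = total_energy_kwh / total_hours
print(f"Lifetime average thermal power: {avg_power_kwt:.0f} kWt")
```

That works out to roughly 427 kWt, sitting sensibly between the low-power test periods and the 450-600 kWt endurance runs.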
Once the tests were concluded, the reactor was disassembled and inspected, and the fuel elements were examined. These tests took place at the Atomics International Hot Laboratory (also at Santa Susana) starting on July 28, 1965. For about 6 weeks, this was all that the facility focused on: the core was disassembled and cleaned, and the fuel elements were each examined, with many of them being disassembled and run through a significant testing regime to determine everything from fuel burnup to fission product percentages to hydrogen migration. The fuel element tests were the most significant because, to put it mildly, there were problems.
Of the 211 fuel elements in the core, only 44 were intact. Many of the fuel elements also underwent dimensional changes: swelling (with a very small number actually shrinking) across the diameter or the length, becoming oblong, dishing, or other changes in geometry. The clad on most elements was damaged in one way or another, leading to a large amount of hydrogen migrating out of the fuel elements, mostly into the coolant and then out of the reactor. This meant that much of the neutron moderation needed for the reactor to operate properly migrated out of the core, reducing the overall available reactivity even as the amount of fission poisons in the form of fission products was increasing. For a flight system, this is a major problem, and one that definitely needed to be addressed. However, this is exactly the sort of problem that an experimental reactor is meant to discover and assess, so in this way as well the reactor was a complete success, if not as smooth a development as the designers would likely have preferred.
It was also discovered that, while the cracks in the clad suggested that hydrogen would be escaping through the damaged diffusion barrier, far less hydrogen was lost than was expected based on the amount of damage the fuel elements underwent. In fact, the hydrogen migration in these tests was low enough that the core would most likely be able to carry out its 10,000 hour operational lifetime requirement as-is. Without knowing what mechanism was limiting the hydrogen migration, though, it would be difficult if not impossible to verify this without extensive additional testing, and changes to the fuel element design could instead deliver a more satisfactory fuel clad lifetime, reduced damage, and greater assurance that hydrogen migration would not become an issue.
The SNAP-8 Experimental Reactor was an important stepping stone in high-temperature U-ZrH fuel development, and in some ways it greatly changed the direction of the whole SNAP-8 program. The large number of cladding failures, the hydrogen migration from the fuel elements, and the phase changes within the crystalline structure of the U-ZrH itself were a huge wake-up call to the reactor developers. With the SNAP-2/10A reactor these issues were minor at best, but that was a far lower-powered reactor, with very different geometry. The large number of fuel elements, the flow of the coolant through the reactor, and numerous other factors made the S8ER far more complex to deal with on a practical level than most, if any, anticipated. Plating of Hastelloy-associated elements onto the stainless steel components raised concern that the selected materials could cause blockages in flow channels, further exacerbating the local hot spots in the fuel elements that caused many of the problems in the first place. The cladding material could (and would) be changed relatively easily to address problems with the metal’s ductility (the ability to undergo significant plastic deformation before rupture – in other words, to endure fuel swelling without the clad splitting, cracking, fracturing, or otherwise being breached) under high temperature and radiation fluxes over time. A number of changes were proposed to the reactor’s design, which strongly encouraged – or required – changes in the SNAP-8 Development Reactor that was then being designed and fabricated. Those changes would alter what the SNAP-8 reactor would become, and what missions it would be proposed for, until the program was finally put to rest.
After the S8ER test, a mockup reactor, the SNAP-8 Development Mockup, was built based on the 1962 version of the design. This mockup never underwent nuclear testing, but was used for extensive non-nuclear testing of the design’s components. Basically, every component that could be tested under non-nuclear conditions (but otherwise identical, including temperature, stress loading, vacuum, etc.) was tested and refined with this mockup. The tweaks to the design that this mockup suggested are far more minute than we have time to cover here, but it was an absolutely critical step in preparing the SNAP-8 reactor’s systems for flight test.
SNAP-8 Development Reactor: Facing Challenges with the Design
The final reactor in the series, the SNAP-8 Development Reactor, was a shorter-lived reactor, in part because many of the questions about the geometry had been answered by the S8ER, and partly because the remaining materials questions could be answered with the SCA4 reactor. This reactor underwent dry critical testing in June 1968, and power testing began at the beginning of the next year. From January 1969 to December 1969, when the reactor was shut down for the final time, the reactor operated at nominal (600 kWt) power for 668 hours, and at 1000 kWt for 429 hours.
The SNAP-8 Development Reactor (S8DR) was installed in the same facility as the S8ER, although it operated under different conditions than the S8ER. Instead of having a cover gas, the S8DR was tested in a vacuum, and a flight-type radiation shield was mounted below it to facilitate shielding design and materials choices. Fuel loading began on June 18, 1968, and criticality was achieved on June 22, with 169 out of the 211 fuel elements containing the U-ZrH fuel (the rest of the fuel elements were stainless steel “dummy” elements) installed in the core. Reactivity experiments for the control mechanisms were carried out before the remainder of the dummy fuel elements were replaced with actual fuel in order to better calibrate the system.
Finally, on June 28, all the fuel was loaded and the final calibration experiments were carried out. These tests then led to automatic startup testing of the reactor, beginning on December 13, 1968, as well as transient analysis, flow oscillation, and temperature reactivity coefficient testing on the reactor. From January 10 to 15, 1969, the reactor was started using the proposed automated startup process a total of five times, proving the design concept.
1969 saw the beginning of full-power testing, with the ramp up to full design power occurring on January 17. Beginning at 25% power, the reactor was stepped up to 50% after 8 hours, then brought up to full power after another 8 hours. The coolant flow rates in both the primary and secondary loops started at full flow, then were reduced to maintain design operating temperatures, even at the lower power settings. Immediately following these tests, on January 23, an additional set of testing was done to verify that the power conversion system would start up as well. The biggest challenge was verifying that the initial injection of mercury into the boiler would behave as expected, so a series of mercury injection tests were carried out successfully. While they weren’t precisely at design conditions due to test stand limitations, the tests were close enough to verify that the design would work as planned.
After these tests, the endurance testing of the reactor began. From January 25 to February 24, the 500-hour test at design conditions (600 kWt and 1300 F) was carried out, although two scram incidents led to short interruptions. A 9,000-hour endurance run at design conditions began on March 20, but lasted only until April 10. This was followed by a ramp up to the alternate design power of 1 MWt. While this was meant to operate at only 1100 F (to reduce thermal stress on the fuel elements, among other things), the airblast heat exchanger used for heat rejection couldn’t keep up with the power flow at that temperature, so the outlet temperature was increased to 1150 F (the greater the temperature difference between a radiator and its environment, the more heat it can reject, something we’ll discuss more in the heat rejection posts). After 18 days of 1 MWt testing, the power was once again reduced to 600 kWt for another 9,000-hour test, but on June 1 the reactor scrammed itself again due to a loss of coolant flow. At this point there was a significant loss of reactivity in the core, which led the team to decide to proceed at a lower temperature to mitigate hydrogen migration in the fuel elements. Sadly, reducing the outlet temperature (to 1200 F) wasn’t enough to prevent this test from ending prematurely due to a severe loss in reactivity, and the reactor scrammed itself again.
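The parenthetical about radiator temperature is worth a quick illustration: radiated heat scales with the fourth power of absolute temperature (the Stefan-Boltzmann law), so even the modest bump from 1100 F to 1150 F noticeably increases how much heat each square meter of radiator can reject. The sketch below is purely illustrative; the emissivity and sink temperature are assumed values, not numbers from the SNAP-8 program.

```python
# Illustrative Stefan-Boltzmann estimate of radiator heat rejection.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
EPSILON = 0.85     # assumed surface emissivity (illustrative)
T_SINK_K = 250.0   # assumed effective environmental sink temperature, K

def f_to_k(temp_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

def net_radiated_w_per_m2(temp_f):
    """Net heat rejected per square meter of an ideal radiator at temp_f."""
    t_k = f_to_k(temp_f)
    return EPSILON * SIGMA * (t_k**4 - T_SINK_K**4)

for temp_f in (1100, 1150):
    print(f"{temp_f} F -> {net_radiated_w_per_m2(temp_f) / 1000:.1f} kW/m^2")
```

Under these assumptions, the 50 F increase buys roughly a 13% improvement in heat rejection per unit area, which is why raising the outlet temperature was the straightforward fix for the overtaxed airblast heat exchanger.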
The final power test on the S8DR began on November 20, 1969. For the first 11 days, it operated at 300 kWt and 1200 F, after which power was increased back to 600 kWt with the outlet temperature reduced to 1140 F for an additional 7 days. An increase of outlet temperature back to 1200 F was then dialed in for the final 7 days of the test, and then the reactor was shut down.
This shutdown was an interesting and long process, especially compared to simply rotating all the control drums fully out to remove all reactivity at once. First, the temperature was dropped to 1000 F while the reactor was still at 600 kWt, and then the reactor’s power was reduced to the point that both the outlet and inlet coolant temperatures were 800 F. This was held until December 21 to study the xenon transient behavior, and then the temperatures were further reduced to 400 F to study the decay power level of the reactor. On January 7, the temperature was once again increased to 750 F, and two days later the coolant was removed. The core temperature then dropped steadily before leveling off at 180-200 F.
Once again, the reactor was disassembled and examined at the Hot Laboratory, with special attention being paid to the fuel elements. These fuel elements held up much better than the S8ER’s fuel elements, with only 67 of the 211 fuel elements showing cracking. However, quite a few elements, while not cracked, showed significant dimensional changes and higher hydrogen loss rates. Another curiosity was that a thin (less than 0.1 mil thick) metal film, made up of iron, nickel, and chromium, developed fairly quickly on the exterior of the cladding (the exact composition changed based on location, and therefore on local temperature, within the core and along each fuel element).
The fuel elements that had intact cladding and little to no deformation showed very low hydrogen migration, an average of 2.4% (this is consistent with modeling showing that the permeation barrier was damaged early in its life, perhaps during the 1 MWt run). However, those with some damage lost between 6.8% and 13.2% of their hydrogen. This damage wasn’t limited to just cracked cladding, though – the swelling of the fuel element was a better indication of the amount of hydrogen lost than the clad itself being split. This is likely due to phase changes in the fuel elements, when the U-ZrH changes crystalline structure, usually due to high temperatures. This changes how well – and at what bond angle – the hydrogen is held within the fuel element’s crystalline structure, and can lead to more intense hot spots in the fuel element, making the problem worse. The loss-of-reactivity scrams from the testing in May-July 1969 seem to be consistent with the worst failures in the fuel elements, called Type 3 in the reports: high hydrogen loss and a highly oval cross section of the swollen fuel elements (there were a total of 31 of these; 18 were intact, 13 were cracked). One interesting note about the clad composition is that where there was a higher copper content, due to irregularities in metallography, there was far less swelling of the Hastelloy N clad, although the precise mechanism was not understood at the time (and my rather cursory perusal of current literature didn’t show any explanation either). However, testing at the time showed that these problems could be mitigated, even to the point of insignificance, by maintaining a lower core temperature to ensure that localized over-temperature failures (like the changes in crystalline structure) would not occur.
The best thing that can be said about the reactivity loss rate (partially due to hydrogen losses, and partially due to fission product buildup) is that it was able to be extrapolated given the data available, and that the failure would have occurred after the design’s required lifetime (had S8DR been operated at design temperature and power, the reactor would have lost all excess reactivity – and therefore the ability to maintain criticality – between October and November of 1970).
On this mixed news note, the reactor’s future was somewhat in doubt. NASA was certainly still interested in a nuclear reactor of a similar core power, but this particular configuration was neither the most useful to their needs, nor was it exceptionally hopeful in many of the particulars of its design. While NASA’s reassessment of the program was not solely due to the S8DR’s testing history, this may have been a contributing factor.
One way or the other, NASA was looking for something different out of the reactor system, and this led to many changes. Rather than an electric propulsion system, focus shifted to a crewed space station, which had different design requirements, most especially in shielding. In fact, the reactor was split into four designs, none of which kept the SNAP name (but all kept the fuel element and basic core geometry).
A New Life: the Children of SNAP-8
Even as the SNAP-8 Development Reactor was undergoing tests, the mission for the SNAP-8 system was being changed. This would have major consequences for the design of the reactor, its power conversion system, and what missions it would be used in. These changes would be so extensive that the SNAP-8 reactor name would be completely dropped, and the reactor would be split into four concepts.
The first concept was the Space Power Facility – Plum Brook (SPF) reactor, which would be used to test shielding and other components at NASA’s Plum Brook Station near Sandusky, OH, and could also be used for space missions if needed. The smallest of the designs (at 300 kWt), it was designed to avoid many of the problems associated with the S8ER and S8DR; however, funding was cut before the reactor could be built. In fact, it was cut so early that details on the design are very difficult to find.
The second reactor, the Reactor Core Test, was very similar to the SPF reactor, but it was the same power output as the nominal “full power” reactor, at 600 kWt. Both of these designs increased the number of control drums to eight, and were designed to be used with a traditional shadow shield. Neither of them were developed to any great extent, much less built.
A third design, the 5 kWe Thermoelectric Reactor, was a space system meant to apply the lessons of the SNAP-8 ER and DR, as well as the SNAP-10A’s experience with thermoelectric power conversion, to a medium-power reactor between the SNAP-10B and the Reference Zirconium Hydride Reactor in power output.
The final design, the Reference Zirconium Hydride Reactor (ZrHR), was extensively developed, even if geometry-specific testing was never conducted. This was the most direct replacement for the SNAP-8 reactor, and the last of the major U-ZrH fueled space reactors in the SNAP program. Rather than powering a nuclear electric spacecraft, however, this design was meant to power space stations.
The 5 kWe Thermoelectric Reactor: Simpler, Cleaner, and More Reliable
The 5 kWe Thermoelectric Reactor (5 kWe reactor) was a reasonably simple adaptation of the SNAP-8 design, intended to be used with a shadow shield. Unsurprisingly, a lot of the design changes mirrored some of the work done on the SNAP-10B Interim design, which was underway at about the same time. Meant to supply 5 kWe of power for 5 years using lead telluride thermoelectric converters (derived from the SNAP-10A converters), this system was meant to provide power for everything from small crewed space stations to large communications satellites. In many ways this was a significant departure from the SNAP-8 reactor, but at the same time the proposed changes were based on evolutionary developments during the S8ER and S8DR experimental runs, as well as advances in the SNAP-2/10 core, which was undergoing parallel post-SNAPSHOT design evolution (the SNAP-10A design had been frozen for the SNAPSHOT program at this point, so these changes were either for the follow-on SNAP-10A Advanced or SNAP-10B reactors). The change from mercury Rankine to thermoelectric power conversion, though, paralleled a change in the SNAP-2/10A program, where greater conversion efficiency was seen as unnecessary given the consistently lower power requirements of the missions being considered.
The first thing (in the reactor itself, at least) that was different about the design was that the axial reflector was tapered, rather than cylindrical. This was done to keep the exterior profile of the reactor cleaner. While aerodynamic considerations aren’t a big deal for astronuclear power plants (although they do still play a minute part in low Earth orbit), everything that’s exposed to the unshielded reactor becomes a radiation source itself, due to radiation scattering and material activation under neutron bombardment. If you can get your reactor to be a continuation of the taper of your shadow shield, rather than sticking out from that cone shape, you can make the shadow shield smaller for a given reactor. Since the shield is often many times heavier than the power system itself, especially for crewed applications, the single biggest place a designer can save mass is in the shadow shield.
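To see why the tapered reflector matters for mass, note that for a disc-style shadow shield of fixed thickness, mass grows with the square of the shield radius. Here is a purely hypothetical sketch: the radii, thickness, and density are illustrative assumptions, not ZrHR or 5 kWe reactor design values.

```python
import math

def disc_shield_mass_kg(radius_m, thickness_m=0.30, density_kg_m3=1850.0):
    """Mass of a simple disc shadow shield of the given radius.
    Thickness and density (roughly lithium-hydride-class) are
    illustrative assumptions, not SNAP-8 design values."""
    return math.pi * radius_m**2 * thickness_m * density_kg_m3

# If a reflector protruding past the shield cone forces the shield
# radius out from a hypothetical 0.50 m to 0.60 m:
m_small = disc_shield_mass_kg(0.50)
m_large = disc_shield_mass_kg(0.60)
print(f"Shield grows {m_large / m_small:.2f}x heavier")  # (0.60/0.50)^2 = 1.44
```

A 20% larger radius means a 44% heavier shield, and since the shield can outweigh the power system itself, keeping the reactor inside the shield’s cone is one of the highest-leverage mass savings available to the designer.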
This tapered profile meant two things: first, there would be a gradient in the amount of neutron moderation between the top and the bottom of the reactor, and second, the control system would have to be reworked. It’s unclear exactly how far the neutronics analysis for the new reflector configuration had proceeded, sadly, but the control systems were adaptations of the design changes that were proposed for the SNAP-10B reactor: instead of having the wide, partial cylinder control drums of the original design, large sections (235 degrees in total) of the reflector would be slid up or down around the core containment vessel to control the amount of reactivity available. This is somewhat similar to the SNAP-10B and BES-5 concepts in its execution, but the mechanism is quite different from a neutronics perspective: rather than capturing the unwanted neutrons using a neutron poison like boron or europium, they’re essentially vented into space.
There were a few other big changes from the SNAP-8 reference design when it came to the core itself. The first was in the fuel: instead of having a single long fuel rod in the clad, the U-ZrH fuel was split into five different “slugs,” which were held together by the clad. This would create a far more complex thermal distribution situation in the fuel, but would also allow for better thermal stress management within the hydride itself. The number of fuel elements was reduced to 85, and they came in three configurations: one set of 27 had radial fins that spiraled around the fuel element in a right-hand direction, another set of 27 had fins in the left-hand direction, and the final 31 were unfinned. This was done to better manage the flow of the NaK coolant through the core, and avoid some of the hydrodynamic problems that were experienced on the S8DR.
The U-ZrH Reactor: Power for America’s Latest and Greatest Space Stations
The Reference ZrH Reactor was begun in 1968, while the S8DR was still under construction. Because of the increased focus on crewed space station missions, and the resulting changes in shielding requirements, some redesign of the reactor core was needed. The axial shield would change the reactivity of the core, and the control drums would no longer be able to effectively expose portions of the core to the vacuum of space to get rid of excess reactivity. Because of this, the number of fuel elements in the core was increased from 211 to 295. Another change was that, rather than the even spacing of fuel elements used in the S8DR, the fuel elements were spaced so that the amount of coolant around each fuel element was proportional to the amount of power it produced. This means that the fuel elements on the interior of the core were more widely spaced than the fuel elements around the periphery. This made it far less likely that local hot spots would develop which could lead to fuel element failures, but it also meant that the flow of coolant through the reactor core would need to be far more thoroughly studied than was done on the SNAP-8 reactor design. These thermohydrodynamic studies would be a major focus of the ZrHR program.
Another change was in the control drum configuration, as well as the need to provide coolant to the drums. This was because the drums were now not only fully enclosed solid cylinders, but were surrounded by a layer of molten lead gamma shielding. Each drum would be a solid cylinder in overall cross section; the main body was beryllium, but a crescent of europium alloy was used as a neutron poison (this is one of the more popular alternatives to boron for control mechanisms that operate in a high temperature environment) to absorb neutrons when this portion of the control drum was turned toward the core. These drums would be placed in dry wells, with NaK coolant flowing around them from the spacecraft (bottom) end before entering the upper reactor core plenum to flow through the core itself. The bearings would be identical to those used on the SNAP-8 Development Reactor, and minimal modifications would be needed for the drum motion control and position sensing apparatus. Fixed cylindrical beryllium reflectors, one small one along the interior radius of the control drums and a larger one along the outside of the drums, filled the gaps left by the control drums in this annular reflector structure. These, too, would be kept cool by the NaK coolant flowing around them.
Surrounding this would be an axial gamma shield, with the preferred material being molten lead encased in stainless steel – but tungsten was also considered as an alternative. Why the lead was kept molten is still a mystery to me, but my best guess is that this was due to the thermal conditions of the axial shield, which would have forced the lead to remain above its melting point. This shield would have made it possible to maneuver near the space station without having to remain in the shadow of the directional shield – although obviously dose rates would still be higher than being aboard the station itself.
Another interesting thing about the shielding is that the shadow shield was divided in two, in order to balance thermal transfer and radiation protection for the power conversion system, and also to maximize the effectiveness of the shadow shields. Most designs used a 4-pi shield design, which is basically a frustum completely surrounding the reactor core with the wide end pointing at the spacecraft. The primary coolant loops wrapped around this structure before entering the thermoelectric conversion units. After this, there’s a small “galley” where the power conversion system is mounted, followed by a slightly larger shadow shield, with the heat rejection system feed loops running across the outside as well. Finally, the radiator – usually cylindrical or conical – completed the main body of the power system. The base of the radiator would meet up with the mounting hardware for attachment to the spacecraft, although the majority of the structural load was carried by an internal spar running from the core all the way to the spacecraft.
While the option for using a pure shadow shield concept was always kept on the table, the complications in docking with a nuclear powered space station which has an unshielded nuclear reactor at one end of the structure were significant. Because of this, the ZrHR was designed with full shielding around the entire core, with supplementary shadow shields between the reactor itself and the power conversion system, and also a second shadow shield after the power conversion system. These shadow shields could be increased to so-called 4-pi shields for more complete shielding area, assuming the mission mass budget allowed, but as a general rule the shielding used was a combination of the liquid lead gamma shield and the combined shadow shield configuration. These shields would change to a fairly large extent depending on the mission that the ZrHR would be used on.
Another thing that was highly variable was the radiator configuration. Some designs had a radiator that was fixed in relation to the reactor, even if it was extended on a boom (as was the case for the Saturn V Orbital Workshop, later known as Skylab). Others would telescope out, as was the case for the later Modular Space Station (which much later became the International Space Station). The last option was for the radiators to be hinged, with flexible joints that the NaK coolant would flow through (this was the configuration for the lunar surface mission), and those joints took a lot of careful study, design, and material testing to verify that they would be reliable, seal properly, and not cause too many engineering compromises. We’ll look at the challenges of designing a radiator in the future, when we look at heat rejection systems (at this point, possibly next summer), but suffice to say that designing and executing a hinged radiator is a significant challenge for engineers, especially with a material as hot, and as reactive, as liquid NaK.
The ZrHR was continually being updated, since there was no reason to freeze the majority of the design components (although the fuel element spacing and fin configuration in the core may have indeed been frozen to allow for more detailed hydrodynamic predictability), until the program’s cancellation in 1973. Because of this, many design details were still in flux, and the final reactor configuration wasn’t ever set in stone. Additional modifications for surface use for a crewed lunar base would have required tweaking, as well, so there is a lot of variety in the final configurations.
The Stations: Orbital Missions for SNAP-8 Reactors
At the time of the redesign, three space stations were being proposed for the near future. The first, the Manned Orbiting Research Laboratory (later changed to the Manned Orbiting Laboratory, or MOL), was a US Air Force project as part of the Blue Gemini program. Primarily designed as a surveillance platform, it was made redundant by advances in photoreconnaissance satellites after just a single flight of an uncrewed, upgraded Gemini capsule.
The second was part of the Apollo Applications Program. Originally known as the Saturn V Orbital Workshop (OWS), this later evolved into Skylab. Three crews visited this space station after it was launched on the final Saturn V, and despite huge amounts of work needed to repair damage caused during a particularly difficult launch, the scientific return in everything from anatomy and physiology to meteorology to heliophysics (the study of the Sun and other stars) fundamentally changed our understanding of the solar system around us, and the challenges associated with continuing our expansion into space.
The final space station that was then under development was the Modular Space Station, which would in the late 1980s and early 1990s evolve into Space Station Freedom, and at the start of its construction in 1998 (exactly 20 years ago as of the day I’m writing this, actually) was known as the International Space Station. While many of the concepts from the MSS were carried over through its later iterations, this design was also quite different from the ISS that we know today.
Because of this change in mission, quite a few of the subsystems for the power plant were changed extensively, starting just outside the reactor core and extending through to shielding, power conversion systems, and heat rejection systems. The power conversion system was changed to four parallel thermoelectric converters, a more advanced setup than the SNAP-10 series of reactors used. These allowed for partial outages of the PCS without complete power loss. The heat rejection system was one of the most mission-dependent structures, so it would vary in size and configuration quite a bit from mission to mission. It, too, would use NaK-78 as the working fluid, and in general would be 1200 (on the OWS) to 1400 (reference mission) sq. ft in surface area. We’ll look more at these concepts in later posts on power conversion and heat rejection systems, but these changes took up a great deal of the work that was done on the ZrHR program.
One of the biggest reasons for this unusual shielding configuration was to allow a compromise between shielding mass and crew radiation dose. In this configuration, there would be three zones of radiation exposure: only shielded by the 4 pi shield during rendezvous and docking (a relatively short time period) called the rendezvous zone; a more significant shielding for the spacecraft but still slightly higher than fully shielded (because the spacecraft would be empty when docked the vast majority of the time) called the scatter shield zone; and the crewed portion of the space station itself, which would be the most shielded, called the primary shielded zone. With the 4 pi shield, the entire system would mass 24,450 pounds, of which 16,500 lbs was radiation shielding, leading to a crew dose of between 20 and 30 rem a year from the reactor.
The mission planning for the OWS was flexible in its launch configuration: it could have launched integral to the OWS on a Saturn V (although, considering the troubles that the Skylab launch actually had, I’m curious how well the system would have performed), or it could have been launched on a separate launcher and had an upper stage to attach it to the OWS. The two options proposed were either a Saturn 1B with a modified Apollo Service Module as a trans-stage, or a Titan IIIF with the Titan Trans-stage for on-orbit delivery (the Titan IIIC was considered unworkable due to mass restrictions).
After its 3-5 years of operational life, the reactor could be disposed of in two ways: either it would be deorbited into a deep ocean area (as with the SNAP-10A, although as we saw during the BES-5’s operational history this ended up not being considered a good option), or it could be boosted into a graveyard orbit. One consideration which is very different from the SNAP-10A is that the reactor would likely remain intact due to the 4 pi shield, rather than burning up as the SNAP-10A would have. A terrestrial impact could therefore expose civilian populations to fission products, and would also leave highly enriched (although not quite bomb grade) uranium lying somewhere for someone to relatively easily recover. This made the choice of deorbit location pickier, and so an uncontrolled re-entry was not considered. The ideal was to leave the reactor in a parking orbit of at least 400 nautical miles in altitude for a few hundred years to allow the fission products to completely decay away before de-orbiting it over the ocean.
Nuclear Power for the Moon
The final configuration that was examined for the Advanced ZrH Reactor was for the lunar base that was planned as a follow-on to the Apollo Program. While this never came to fruition, it was still studied carefully. Nuclear power on the Moon was nothing new: the SNAP-27 radioisotope thermoelectric generator had been used on every single Apollo surface mission as part of the Apollo Lunar Surface Experiment Package (ALSEP). However, these RTGs would not provide nearly enough power for a permanently crewed lunar base. As an additional complication, all of the power sources available would be severely taxed by the two-week-long, incredibly cold lunar night that the base would have to contend with. Only nuclear fission offered both the power and the heat needed for a permanently staffed lunar base, and the reactor that was considered the best option was the Advanced ZrH Reactor.
The configuration of this form of the reactor was very different. There are three options for a surface power plant: the reactor is offloaded from the lander and buried in the lunar regolith for shielding (which is how the Kilopower reactor is being planned for surface operations); an integral lander and power plant which is assembled in Earth (or lunar) orbit before landing, with a 4 pi shield configuration; or, finally, an integrated lander and reactor with a deployable radiator which is activated once the reactor is on the surface of the Moon, again with a 4 pi shield configuration. There are, of course, in-between options between the last two configurations, where part of the radiator is fixed and part deploys. The designers of the ZrHR decided to go with the second option as their best choice, due to the ability to check out the reactor before deployment to the lunar surface while also minimizing the amount of effort needed by the astronauts to prepare the reactor for power operations after landing. This makes sense because, while on-orbit assembly and checkout is a complex and difficult process, it’s still cheaper in terms of manpower to do this work in Earth orbit rather than on a lunar EVA, due to the value of every minute on the lunar surface. If additional heat rejection was required, a deployable radiator could be used, but this would require flexible joints for the NaK coolant, which would pose a significant materials and design challenge. A heat shield was used when the reactor wasn’t in operation to prevent excessive heat loss from the reactor. This eased startup transient issues, as well as ensuring that the NaK coolant remained liquid even during reactor shutdown (frozen working fluids are never good for a mechanical system, after all).
The power conversion system was exactly the same configuration as would be used in the OWS configuration that we discussed earlier, with the upgraded, larger tubes rather than the smaller, more numerous ones (we’ll discuss the tradeoffs here in the power conversion system blog posts).
This power plant would end up providing a total of 35.5 kWe of conditioned (i.e. usable, reliable) electricity out of the 962 kWt reactor core, with 22.9 kWe being delivered to the habitat itself, for at least 5 years. The overall power supply system, including radiator, shield, power conditioning unit, and the rest of the ancillary bits and pieces that make a nuclear reactor core into a fission power plant, ended up massing a total of 23,100 lbs, which is comfortably under the 29,475 lb weight limit of the lander design that was selected (unfortunately, finding information on that design is proving difficult). A dose rate of 7.55 mrem/hr at half a mile for an unshielded astronaut was considered sufficient for crew radiation safety (this is a small radiation dose compared to the lunar radiation environment, and the astronauts would spend much of their time in the much better shielded habitat).
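Running the quoted figures through a quick back-of-the-envelope calculation (my own arithmetic, not from the source document) shows just how low thermoelectric conversion efficiency was, and why so much of the reactor's thermal output had to go out through the radiators:

```python
# Sanity check of the lunar ZrH power plant figures quoted above:
# 35.5 kWe of conditioned power from a 962 kWt core, 23,100 lbs total mass.

thermal_power_kwt = 962.0        # reactor thermal output
electric_power_kwe = 35.5        # conditioned electrical output
plant_mass_lb = 23_100           # total power plant mass

efficiency = electric_power_kwe / thermal_power_kwt
specific_mass = plant_mass_lb / electric_power_kwe

print(f"conversion efficiency: {efficiency:.1%}")       # ~3.7%
print(f"specific mass: {specific_mass:.0f} lb/kWe")     # ~651 lb/kWe
```

In other words, over 96% of the core's thermal power ends up as waste heat, which is why the radiator design was such a large part of the engineering effort.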
Sadly, this power supply was not developed to a great extent (although I was unable to find the source document for this particular design: NAA-SR-12374, “Reactor Power Plants for Lunar Base Applications,” Atomics International, 1967), because the plans for the crewed lunar base were canceled before much work was done on this design. The plans were developed to the point that future lunar base plans would have a significant starting-off point, but again the design was never frozen, so there was a lot of flexibility remaining in the design.
The End of the Line
Sadly, these plans never reached fruition. The U-ZrH Reactor had its budget cut by 75% in 1971, with cuts to alternate power conversion systems such as the use of thermionic power conversion (30%) and reactor safety (50%), and the advanced Brayton system (completely canceled) happening at the same time. NERVA, which we covered in a number of earlier posts, also had its funding slashed at the same time. This was due to a reorientation of funds away from many current programs, and instead focusing on the Space Shuttle and a modular space station, whose power requirements were higher than the U-ZrH Reactor would be able to offer.
At this point, the AEC shifted their funding philosophy, moving away from preparing specific designs for flight readiness and instead moving toward a long-term development strategy. In 1973, the head of the AEC’s Space Nuclear Systems Division said that, given the lower funding levels that NASA was forced to work within, “…the missions which were likely to require large amounts of energy, now appear to be postponed until around 1990 or later.” This led to the cancellation of all nuclear reactor systems, and a shift in focus to radioisotope thermoelectric generators, which gave enough power for NASA and the DoD’s current mission priorities in a far simpler form.
Funding would continue at a low level all the way to the current day for space fission power systems, but the shift in focus led to a very different program. While new reactor concepts continue to be regularly put forth, both by Department of Energy laboratories and NASA, for decades the focus was more on enhancing the technological capability of many areas, especially materials, which could be used by a wide range of reactor systems. This meant that specific systems wouldn’t be developed to the same level of technological readiness in the US for over 30 years, and in fact it wouldn’t be until 2018 that another fission power system of US design would undergo criticality testing (the KRUSTY test for Kilopower, in early 2018).
More Coming Soon!
Originally, I was hoping to cover another system in this blog post as well, but the design is so different compared to the ZrH fueled reactors that we’ve been discussing so far in this series that it warranted its own post. This reactor is the SNAP-50, which didn’t start out as a space reactor, but rather as one of the most serious contenders for the indirect-cycle Aircraft Nuclear Propulsion program. It used uranium carbide/nitride fuel elements, liquid lithium coolant, and was far more powerful than anything that we’ve discussed yet in terms of electric power plants. Having it in its own post will also allow me to talk a little bit about the ANP program, something that I’ve wanted to cover for a while now, but considering how much more there is to discuss about in-space systems (and my personal aversion to nuclear reactors for atmospheric craft on Earth), hasn’t really been in the cards until now.
This series continues to expand, largely because there’s so much to cover that we haven’t gotten to yet – and no-one else has covered these systems much either! I’m currently planning on doing the SNAP-50/SPUR system as a standalone post, followed by the SP-100 and a selection of other reactor designs. After that, we’ll cover the ENISY reactor program in its own post, followed by the NEP designs from the 90s and early 00s, both in the US and Russia. Finally, we’ll cover the predecessors to Kilopower, and round out our look at fission power plant cores by revisiting Kilopower to have a look at what’s changed, and what’s stayed the same, over the last year since the KRUSTY test. We will then move on to shielding materials and design (probably two or three posts, because there’s a lot to cover there) before moving on to power conversion systems, another long series. We’ll finish up the nuclear systems side of nuclear power supplies by looking at heat sinks, radiators, and other heat rejection systems, followed by a look at nuclear electric spacecraft architecture and design considerations.
A lot of work continues in the background, especially in terms of website planning and design, research on a lot of the lesser-known reactor systems, and planning for the future of the web page. The blog is definitely set for topics for at least another year, probably more like two, just covering the basics and history of astronuclear design, but getting the web page to be more functional is a far more complex, and planning-heavy, task.
I hope you enjoyed this post, and much more is coming next year! Don’t forget to join us on Facebook, or follow me on Twitter!
Hello, and welcome back to Beyond NERVA! My apologies for the delay in this post, electric propulsion is not one of my strong points, so I spent a lot of extra time on research and in discussion with people who are more knowledgeable than I am on this subject. Special thanks to both Roland A. Gabrielli and Mikkel Haaheim for their invaluable help, not only for extensively picking their brains but also for their excellent help in editing (and sometimes rewriting large sections of) this post.
Today, we continue looking at electric propulsion, by starting to look at electrothermal and magnetoplasmadynamic (MPD) propulsion. Because there’s a fair bit of overlap, and because there are a lot of similarities in design, between these two types of thruster, we’ll start here, and then move to electrostatic thrusters in the next post.
As we saw in the last post, there are many ways to produce thrust using electricity, and many different components are shared between the different thruster types. This is most clear, though, when looking at thermal and plasma-dynamic thrusters, as we will see in this post. I’ve also made a compromise on this post’s structure: there are a few different types of thruster that fall in the gray area between thermal and MPD thrusters, but rather than writing about it between the two thruster types, one will be left for last: VASIMR, the Variable Specific Impulse Magnetoplasma Rocket. This thruster has captured the public’s imagination like few types of electric propulsion ever have; and, sadly, this has bred an incredible amount of clickbait. At the end of this post, I hope to lay to rest some of the misconceptions, and look at not only the advantages of this type of thruster, but the limitations as well. This will involve looking a little bit into mission planning and orbital mechanics, an area that we haven’t addressed much in this blog, but I hope to keep it relatively simple.
Electrothermal Propulsion
This is, to put it simply, the use of electric heaters to energize a propellant, producing thrust by expanding it. In the most primitive and low-energy thrusters, this is done with a Laval nozzle, as in chemical and other thermal engines. This can be an inefficient use of energy (although this is definitely not always the case), but it can produce the most thrust for the same amount of power of any of the systems that will be discussed today (debatably, depending on the systems and methods used). It has been used since the 1960s, and continues to be used today for small-sat propulsion systems.
There are a number of ways to use electricity to make heat, and each of these methods is used for propulsion: resistance heating, induction heating, and arc heating are all used by different designs. Each has its advantages and disadvantages, some of the concepts used in each thruster type appear in other types of thrusters as well, and we’ll look at each in turn.
Using electricity to produce heat is something that everyone is familiar with. Space heaters and central heating are the obvious examples, but any electrical circuit produces heat due to electrical resistance within the system; this is why incandescent light bulbs and the computer that you’re reading this on get hot. This is resistive heating (or joule or Ohmic heating), and in a propulsion application this is called a “resistojet,” or an “electro-thermal thruster.” Often, this is used as a second stage for a chemical reaction – in this case the use of hydrazine propellant that undergoes catalysis to force chemical decay into a more voluminous gas. This two-stage approach is something that we’ll see again with a different heating method later in this post.
The first use of resistojets was on the Vela military satellites (first launched in 1963, canceled in 1985), which carried a suite of instruments to detect nuclear tests from space and used a BE-3A AKM thruster (which I can’t find anything but the name of; if someone has documentation, please leave it in a comment below). The Intelsat-V program also used resistojet thrusters, and they have become a favored station-keeping option for smallsats. The reason for this is that the thrust is produced thermally, with no need for chemically reactive components, which is often pretty much a requirement for smallsats: generally speaking, they are secondary payloads for larger spacecraft, and as such need to be absolutely safe for everything around them in order to get permission to be placed on the launch vehicle.
One of the main advantages of the resistojet is that it can achieve very high thrust efficiencies, up to 90%. Resistojets are primarily limited by two factors: first, the heat resistance of the Ohmic elements themselves; and second, the thermal transfer capacity of the system. As we have seen in NTRs, the ability to remove heat needs to be balanced with the heat produced, and the overall system needs to provide enough thrust to be useful. In the case of propelling a spacecraft on interplanetary missions, this is unlikely to come out to a useful result; however, for station-keeping with a high thrust requirement, it proves to be useful, as shown in figure ##, which names a few examples of satellites with EP. Exhaust velocities of about 3500 m/s are possible with decomposed hydrazine monopropellant, at about 80% efficiency. According to ESA, the specific impulse of this type of system is between 150 and 700 s depending on the propellant, which is the bottom of the electric propulsion range.
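To put those numbers in context, here's a minimal sketch (my own, using the ~3500 m/s exhaust velocity and ~80% efficiency figures quoted above; the 0.5 N thrust level is an assumed, illustrative value) of how exhaust velocity and efficiency translate into the electrical power a resistojet needs. The kinetic power carried away by the exhaust stream is F·ve/2, and the heater must supply that divided by the overall efficiency:

```python
def electrical_power(thrust_n: float, v_exhaust_ms: float, efficiency: float) -> float:
    """Electrical power (W) needed to produce a given thrust.

    Jet power of the exhaust stream is P_jet = F * v_e / 2; the
    thruster's electrical draw is P_jet divided by its efficiency.
    """
    return thrust_n * v_exhaust_ms / 2.0 / efficiency

# A hypothetical 0.5 N station-keeping resistojet on decomposed hydrazine:
p_watts = electrical_power(0.5, 3500.0, 0.80)
print(f"{p_watts:.0f} W")  # ~1094 W
```

This is why resistojets top out at modest thrust levels on conventional satellite power buses: even half a newton of thrust at this exhaust velocity demands roughly a kilowatt of electrical power.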
Induction Thermal Thrusters
Another option for electrothermal thrusters is to use induction heating. Induction heating occurs when a high frequency alternating current is passed through a coil. Because of this, the induced magnetic field in the surroundings is swinging rapidly, stirring polar particles (particles that have a distinct plus and minus pole even if they’re electrically neutral overall) in the field. This can even result in ripping molecules apart (dissociation) and knocking electrons out of their orbitals (ionization). The charged remnants are ions and electrons, forming a plasma. Because of this, the device is called an “inductive plasma generator” or IPG. Plasmas are even more susceptible to this high frequency heating. In purely thermal IPG based thrusters, Laval nozzles are used for expansion, once more, but magnetic nozzles, as explained later, can augment the performance on top of what a physical nozzle can provide. This principle is something that we’ve already seen a lot in this blog (both CFEET and NTREES operate through induction heating), and is used in one concept for bimodal nuclear thermal-electric propulsion, the Nuclear Thermo-Electric Rocket (NTER) concept by Dr. Dujarric at ESA. This principle is also used in several different sorts of electric thrusters, such as the Pulsed Induction Thruster (PIT); this is not a thermal thruster, though, so we’ll look more at it later.
This is a higher-powered system if you want significant thrust, due to the current required for the induction heater, so it’s not one that is commonly in use for smaller satellites like most of these systems. One other limitation noted by the NTER team is that supersonic induction heating is not an area that has been studied, and in most cases heating a supersonic gas doesn’t actually make it travel faster (the energy is “frozen”), so during heating it’s necessary to make sure the propellant velocity remains subsonic.
IPGs are also one of the foci of research at the Institute of Space Systems of the University of Stuttgart, which studies both space and terrestrial applications, grouped in figure ## below, demonstrating the versatility of the concept. Because the plasma is generated without contact, the propellant cannot damage components such as electrodes. This allows a near-arbitrary selection of gases as propellant, and therefore makes in-situ resource utilization concepts viable; even space station wastes could be fed to such a thruster. Eventually, this prompted research on IPG-based waste treatment for terrestrial communities. At the Institute of Space Systems, IPGs also serve for the emulation of planetary atmospheres in re-entry experimentation in plasma wind tunnels.
Which type of heating is used is generally a function of the frequency of energy used to cause the oscillations, and therefore the heat. Induction heating, as we’ve discussed before in the context of testing NTR fuel elements, usually occurs between 100 and 500 kHz. Radio frequency heating occurs between 5 and 50 MHz. Finally, microwave heating occurs above 100 MHz, although GHz operational ranges are common for many applications, like the domestic microwave ovens found in most kitchens.
RF Electrothermal Thruster
RF thrusters operate via dielectric heating, where a material made of polar molecules (molecules with distinct positive and negative ends, even though they’re electrically neutral overall) is oscillated rapidly in an electromagnetic field by a beam of radio waves; more properly, the molecules flip orientation relative to the field as the radio waves pass across them, transferring heat to adjacent molecules. One side effect of using RF for heating is that these are very long wavelengths, meaning that the object being heated (in this case the propellant) can be heated more evenly throughout its entire mass than is typically possible in a microwave heating device.
While this is definitely a viable way to heat a propellant, this mechanism is more commonly used in ionization chambers, where the oscillating orientation of the dielectric molecules causes electrons of adjacent molecules to be stripped off, ionizing the propellant. This ionized propellant is then often accelerated using either MPD or electrostatic forces. We’ll look at that later, though, it’s just a good example of the way that many different components of these thrusters are used in different ways depending on the configuration and type of the thrusters in question.
Microwave Thermal Thrusters
Finally, we come to the last major type of electrothermal thruster: the microwave-frequency thruster. This is not the Em-drive (nor will that concept be covered in this post)! Rather, it’s more akin to the microwave in your home: either radio frequencies or microwaves are used to convert the propellant, often Teflon (polytetrafluoroethylene, PTFE), into a plasma, which causes it to expand and accelerate out of a nozzle. This is most commonly done with microwaves rather than the longer-wavelength radio frequencies for a number of practical reasons.
Microwave thermal thrusters have been demonstrated with a very wide range of propellants, from H2 and N2 to Kr, Ar, Xe, PTFE, and others, at a wide variety of power levels. Due to the different power levels and propellant masses, specific impulse and thrust vary wildly. However, hydrogen-based thruster concepts have been demonstrated to have a specific impulse of approximately 1000 s with 54 kN of thrust.
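Taking those quoted hydrogen figures at face value, a quick sketch (my own arithmetic, not a sourced figure) shows what that combination of thrust and specific impulse implies for power. The jet power alone, before any inefficiencies, is P = F·ve/2, with ve = Isp·g0:

```python
# Implied jet power for the quoted hydrogen microwave thermal figures:
# ~1000 s specific impulse at 54 kN of thrust.

g0 = 9.80665                     # standard gravity, m/s^2
isp_s = 1000.0                   # quoted specific impulse
thrust_n = 54_000.0              # quoted thrust

v_exhaust = isp_s * g0           # ~9807 m/s exhaust velocity
p_jet_mw = thrust_n * v_exhaust / 2.0 / 1e6

print(f"jet power: {p_jet_mw:.0f} MW")  # ~265 MW
```

Hundreds of megawatts is far beyond any onboard electrical supply, which is why thrust at this scale is associated with beamed-power concepts rather than self-contained spacecraft.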
An interesting option for this type of thruster is to not have your power supply on board your spacecraft, and instead have a beam of microwaves hit a receiving antenna (a rectenna) on the spacecraft, with the collected energy then directed into the propellant. This has the major advantage of not having to carry your power supply, electric conversion system, and microwave emitters, which would otherwise weigh down your spacecraft. The beam will diverge over distance, growing wider and requiring larger and larger collectors, but this may still end up being a major mass savings for quite a few different applications. Prof. Komurasaki at the University of Tokyo is a major contributor to research in this concept, but it isn’t something that we’re going to delve too deeply into in this post.
Electrothermal: What’s it Good For?
As we’ve seen, these systems aren’t much, if any, more efficient than a nuclear thermal system in terms of specific impulse, and the additional mass of the power conversion and heat rejection systems make them less attractive than a purely nuclear thermal system. So why would you use them?
There are a number of current applications, as has been mentioned in each of the concepts. They offer a fair bit of thrust for the system mass and complexity, a wide array of propellant options, and a huge range of sizes for the thrusters as well (including systems that are simply too small for a dedicated reactor for a nuclear thermal rocket).
Some designs for nuclear powered space stations (including Von Braun’s inflatable torus space station in the 1960s) use electrothermal thrusters for reaction control systems, partly due to their relatively high thrust. This could be a very attractive option, especially with chemically inert, inexpensive propellants such as PTFE that don’t require cryogenic storage. They could also be used for orbital insertion burns, as they offer advantages in thrust capability rather than efficiency, due to their simplicity and relatively low dry mass. For instance, an electric spacecraft on an interplanetary mission may use an electrothermal system to leave low Earth orbit on an interplanetary insertion burn, with another drive system used for the interplanetary portions of the mission; the now-burned-out electrothermal drive could be staged and discarded, or its propellant tankage (if any) discarded after use to minimize mass, and at orbital insertion at the destination the system could be activated again (this is, of course, not necessary, but in some cases may be advantageous, for instance for crewed missions, where the living beings on board don’t want to spend a couple of months climbing out of Earth’s gravity well if they can avoid it).
Overall, electrical and thrust efficiency can be high, which makes these systems attractive for spacecraft. However, as a sole method of propulsion for interplanetary missions, this type of system does leave a lot to be desired, due to the generally low specific impulse of these types of thrusters, and in practice it is not something that could be used for this type of mission. Electric propulsion’s advantages for spaceflight are in high exhaust velocities, high specific impulse, and continuous operation resulting in high spacecraft velocities; thrust is generally secondary.
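The rocket equation makes this tradeoff concrete. The sketch below is my own illustration; the 6 km/s delta-v budget and the three exhaust velocities are rough, assumed values in the ranges discussed in this post, not figures for any specific mission or thruster:

```python
import math

def propellant_fraction(delta_v_ms: float, v_exhaust_ms: float) -> float:
    """Tsiolkovsky rocket equation, rearranged to give the fraction of
    initial spacecraft mass that must be propellant for a given delta-v."""
    return 1.0 - math.exp(-delta_v_ms / v_exhaust_ms)

delta_v = 6000.0  # illustrative interplanetary delta-v budget, m/s

for name, v_e in [("resistojet (~3500 m/s)", 3500.0),
                  ("arcjet (~9000 m/s)", 9000.0),
                  ("gridded ion (~30000 m/s)", 30000.0)]:
    print(f"{name}: {propellant_fraction(delta_v, v_e):.0%} of launch mass is propellant")
```

At resistojet-class exhaust velocities, over 80% of the spacecraft would need to be propellant for this delta-v, versus under 20% for a gridded ion drive, which is why higher specific impulse dominates interplanetary mission design even at much lower thrust.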
Arcjets – The First Middle Ground Between Thermal and MPD
These aren’t the only ways to produce heat from electricity, though. The first option we will discuss in the gray area between thermal and magnetically based propulsion, arc heating, is a very interesting one. Here, a spark, or arc, of electricity is sustained between two electrodes; this heating works in virtually the same way an arc welder operates. This has the advantage that you aren’t thermally limited by your resistor for your highest thermal temperature: instead of being limited to about a thousand Kelvin by the melting point of your electric components, you can use the tens of thousands of Kelvin from the plasma of the arc, meaning more energy can be transferred, over a shorter time, within the same volume, due to the greater temperature difference. In most modern designs, the positive electrode, the anode, is at the throat of the nozzle. However, this arc also erodes your electrodes, carrying off ablated, vaporized, and even plasmified bits of their material, so there’s a limitation to how long this sort of thruster can operate before the components have to be replaced. The propellant isn’t just heated by the arc itself, but also by conduction and convection from the heated structural components of the thruster.
Arcjets have been studied by NASA since the 1950s; however, they didn’t become commonly used in spacecraft until the 1990s. Several companies, including Lockheed Martin and others, offer a variety of operational arcjet thrusters. As with the resistojet, the chemical stability is excellent for piggyback payloads, and they offer better efficiencies than a resistojet. They are higher-powered systems, though, often drawing more than the average satellite power bus provides (sometimes in the kW range), necessitating custom power supplies for most spacecraft (in a nuclear-powered spacecraft, this would obviously be less of an issue). Arcjets offer much higher exhaust velocities than resistojets, generally 3500-5000 m/s for hydrazine decomposition drives similar to what we discussed above, and up to 9000 m/s using ammonia, and are also able to scale efficiently in both size and power. In these systems, though, the propellant doesn’t necessarily need to be a gas or liquid at operational temperature: some thrusters have used polytetrafluoroethylene (PTFE, Teflon) as a propellant.
This type of propellant is also very common in a sub-type of arcjet thruster that doesn’t use an internal cathode: the pulsed plasma thruster. Here, the PTFE propellant block is brought into contact with a cathode on one side of the thruster and an anode on the other. The electric charge arcs across the gap, vaporizing the surface of the propellant and pushing the block back slightly. The arc and resulting plasma continue to the end of the thrust chamber, and then the propellant (usually loaded on a spring mechanism) is advanced back to the point where it can be vaporized again. This type of thruster is very common for small spacecraft, since it’s incredibly simple and compact, although certain designs can have engineering challenges with the spring mechanism and the lifetime of the cathode and anode.
Arcjets can also be combined with magnetoplasmadynamic thruster acceleration chambers, since arc heating is also a good way to create plasma. The pulsed plasma thruster uses electric arcs to ionize its propellant, for instance. This mechanism is also used in magnetoplasmadynamic (MPD) thrusters, which is why we haven’t placed arcjets with the rest of the thermal thrusters.
In fact, an arcjet has more in common with an MPD thruster than with the other thermal designs. The cathode and anode of an arcjet are placed in exactly the same configuration as in most MPD designs (with some exceptions, to be fair). The exhaust itself is not only vaporized but ionized, which – as with the RF or MW thrusters – lends itself to adding electromagnetic acceleration.
VASIMR: the VAriable Specific Impulse Magnetoplasma Rocket
Coauthor: Mikkel Haaheim
As mentioned earlier, the difference between MPD and thermal thrusters is a very gray area, and, even more than our previous examples, the VASIMR engine shows just how gray it is: the propellant is a plasma of various types (most design and testing has focused on argon and xenon, but other propellants could be used). The propellant is first ionized in what’s typically a separate chamber, then fed into a magnetically confined chamber with RF heating, accelerated, and finally directed out of a magnetic nozzle.
VASIMR is the stuff of clickbait. Between the various space and futurism groups I’m active in on Facebook, I see terribly written, poorly understood, and factually incorrect articles on this design. This has led me to avoid the concept for a long time, and also puts a lot of pressure on me to get the thruster’s design and details right.
VASIMR isn’t that different from many types of electrothermal thrusters; after all, the primary method of accelerating the propellant, imparting thrust, and determining the specific impulse of the thruster is electrically generated RF heating. The fact that the propellant is a plasma also isn’t just an MPD concept, after all: the pulsed plasma thruster, and in fact most arcjets, produce plasma as their propellants as well. This thruster really demonstrates the gray area between thermal and MPD thrusters in ways that are unique in electric propulsion.
Since the characteristics of plasma did not play a vital role in the working principles of the previous thermo-electric thrusters, we should briefly discuss the concept. In a plasma, the particle energies are so high that electrons are no longer tied to their atoms, which then become ions. Both electrons and ions are charged particles whizzing around in a shared cloud, the plasma. Despite being neutral to the outside – it contains as much negative as positive charge – the plasma interacts with magnetic fields. These magnetohydrodynamic interactions lend themselves to various applications, ranging from power generators in terrestrial power plants to magnetic plasma bottles to propulsion.
In VASIMR, these forces push the hot plasma away from the walls, protecting both the walls from damaging heat loads and the plasma from quenching – cooling so rapidly that thrust is lost. This allows VASIMR to have a very hot medium for expansion. While this puts VASIMR among MHD thrusters, it would not yet be a genuine “plasma thruster” if it were not for the magnetic nozzle, which adds electromagnetic components to the forces generating the thrust. Among these components, the most important is the Lorentz force, which occurs when a charged particle moves through a magnetic field. The Lorentz force is orthogonal to the local magnetic field line as well as to the particle’s trajectory.
Despite the incredible amount of overblown hype, VASIMR is an incredibly interesting design. Dr. Franklin Chang Diaz, the founder of Ad Astra Rocket Company, became interested in the concept while he was a PhD candidate in applied plasma physics at MIT. Before he was able to pursue the concept, though, his obsession with space led him to become an astronaut, and he flew seven times on the Space Transportation System (the Space Shuttle), spending a total of over 66 days on orbit. Upon retiring from NASA, he founded the Ad Astra Rocket Company to pursue the idea that had captured his imagination during his doctoral work, refined by his understanding of aerospace engineering and propulsion from his time at NASA. Ad Astra continues to develop the VASIMR thruster, and consistently meets its deadlines, budgetary requirements, and modeling expectations, but the concept is complex in application, and as with everything in aerospace, development takes a long time to come to fruition.
After the end of several rounds of funding from NASA, and a series of successful tests of their VX-100 prototype, Ad Astra continued to develop the thruster privately. Their newer VX-200 thruster is designed for higher power, and with better optimization of several of its components. Following additional testing, the engine is currently going through another round of upgrades to prepare for a 100-hour test firing of the thruster. Ad Astra has been criticized for its development schedule, and the problems that they face are indeed significant, but so far they’ve managed to meet every target that they’ve set.
The main advantage of this concept is that it eliminates both friction and erosion between the propellant and the body of the thruster. This also reduces the thermal load on the thruster: since there’s no physical contact, conduction can’t occur, and the heat absorbed by the thruster is limited to radiation (which depends on the surface area of the plasma and the temperature difference between the plasma and the thruster body). This doesn’t eliminate the need to cool the thruster in most cases, but it does mean that more heat is kept within the plasma; in fact, by using regenerative cooling (as most modern chemical engines do), it’s possible to increase the efficiency of the thruster.
Another major advantage, and one that may be unique to VASIMR, is the first part of the acronym: VAriable Specific Impulse. Every staged rocket has variable specific impulse, in a way: most first-stage boosters have very low specific impulse compared to the upper stages (although, in the case of the boosters, this is due both to atmospheric pressure and to the need to impart a large amount of thrust over a limited timespan), and there are designs that use different propulsion systems with different specific impulse and thrust characteristics to optimize for particular mission profiles (such as bimodal thermal-electric nuclear rockets, the subject of our next blog series after our look at electric propulsion). VASIMR, however, offers the ability to vary its exhaust velocity by changing the temperature to which it heats the propellant. This, in turn, changes its specific impulse, and therefore its thrust. This is where the “30 Day Round Trip to Mars” clickbait headlines come into play: by continuously varying its thrust and isp depending on where it is in the interplanetary transfer maneuver, VASIMR is able to optimize trip time in ways that few, if any, other contemporary propulsion types can. However, the trip time is highly dependent on available power – trip times on the order of 90 days require a power source of 200 MW – and the specific power of the system becomes a major concern. Explaining this in detail gets into orbital mechanics far more deeply than I would like in this already very long blog post, so we’ll save that discussion for another time.
So how does VASIMR actually work, what are the requirements for efficient operation, and how does it have these highly unusual capabilities? In many ways, it’s very similar to a typical RF thruster: a gas, usually argon, is injected into a quartz plenum and run through a helicon RF emitter. Because of the shape of the radio waves produced, this causes a cascading ionization effect within the gas, converting it into a plasma – but the electrons aren’t removed, as they are in the more familiar electrostatic thrusters (the focus of our next blog post). This excitation also heats the plasma to about 5,800 K. The plasma then moves to a second RF emitter, an ion cyclotron emitter, which efficiently heats the plasma to the desired temperature before it is directed out of the back of the thruster. Because all of this occurs at very high temperatures, the entire thruster is wrapped in superconducting electromagnets to keep the plasma away from the walls, and the nozzle used to direct the thrust is magnetic as well. Because no components are physically in contact with the plasma after it becomes ionized, there are no erosion wear points within the thruster, which extends the lifetime of the system. By varying the amount of gas fed into the system while maintaining the same power level, the plasma is heated to different degrees: less gas means more heating per unit mass, so the exhaust velocity rises, increasing the specific impulse of the engine while reducing the thrust. This is perhaps the most interesting part of this propulsion concept, and the reason it gets so much attention.
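The thrust-versus-Isp trade described above follows directly from the jet power relations: F = ṁ·v_e and P_jet = ½·ṁ·v_e², so at fixed power, thrust falls as exhaust velocity rises. A rough sketch (the power level and efficiency here are illustrative assumptions, not Ad Astra figures):

```python
def thrust_at_constant_power(power_w, ve_m_s, efficiency=0.6):
    """Thrust (N) for a fixed jet power accelerating propellant to ve.

    From P_jet = 0.5 * mdot * ve**2 and F = mdot * ve, it follows that
    F = 2 * eta * P / ve: doubling exhaust velocity halves thrust."""
    return 2.0 * efficiency * power_w / ve_m_s

P = 200e3  # 200 kW of input power, illustrative
for ve in (20e3, 50e3, 100e3):  # exhaust velocities in m/s
    print(f"ve = {ve/1000:.0f} km/s -> thrust = {thrust_at_constant_power(P, ve):.1f} N")
```

This is the whole variable-specific-impulse trick in two equations: the thruster slides along this curve by metering the gas flow, without changing its power draw.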
Other systems that use pulsed thrust rather than steady state are able to vary the thrust level without changing the isp (such as the pulsed induction thruster, or PIT) by changing the pulse rate of the system, but these systems have limits as to how much the pulse rate can be varied. We’ll look at these differences more in a later blog post, though.
Many studies have looked at the thrust efficiency of VASIMR. Like many electric propulsion concepts, it becomes more efficient as more power is applied; in addition, the higher the specific impulse being used, the more efficiently it uses the available electrical power. The current VX-200 prototype is a 212 kW input, 120 kW jet power system, far more powerful than the original VX-10, and as such is more efficient. Most estimates suggest a minimum of 60% thrust efficiency (increasing with power input), rising to 90% for higher-isp operation. However, given the system’s sensitivity to available power, and the fact that it’s not clear what the final flight thruster’s power availability will be, it’s difficult to guess what a flight system’s thrust efficiency will be.
Ad Astra is currently upgrading its VX-200 thruster for extended operations. So far, problems with cooling various components (such as the superconducting electromagnets) have led to shorter lifetimes than are theoretically possible, although to be fair these cooling problems largely come down to cooling systems not yet being installed. Additionally, more optimization is being done on the magnetic nozzle. One of the challenges with using a magnetic nozzle is that the plasma doesn’t want to “unstick” from the magnetic field lines used to contain it. While this isn’t as major a challenge for the thruster as the thermal management problems, it is a source of inefficiency in the system, and so is worth addressing.
There’s a lot more that we could go into on VASIMR, and in the future we will come back to this concept; but, for the purposes of this article, it’s a wonderful example of how gray the area between thermal and MPD thrusters is: the propellant ionization and magnetic confinement of the heated plasma are both virtually identical to the applied-field MPD thruster (more on that below), but the heating mechanism and thrust production are virtually identical to an RF thruster.
Let’s go ahead and look at what happens if you use magnetic fields instead of heat to accelerate your propellant, but keep most of the systems we’ve described identical in function: the magnetoplasmadynamic thruster.
Coauthor: Roland A. Gabrielli, IRS
Magnetoplasmadynamic thrusters are a high-performance electric propulsion concept, and as such offer greater thrust potential than the electrostatic thrusters that we’ll look at in the next blog post. They also tend to have higher power requirements, which is why they have not yet been used as dedicated thrusters on operational spacecraft, although they’ve been researched since the 1960s in the USSR, the USA, West Germany, Italy, and Japan. Only a few demonstrators have flown, on Russian and Japanese experimental satellites. They remain an attractive and cost-efficient option for high-thrust electric propulsion, including as Mars transfer engines.
So far in this article, we have discussed electric thrusters which are principally thermal thrusters: the propellant runs into a reaction chamber, takes on heat, and expands through a nozzle. This is as true for VASIMR as for resistojets, thrusters based on inductive plasma generators, and arcjets. Yet VASIMR also introduces a different set of physics for thrusters: magnetohydrodynamics (MHD). This term designates the harnessing of fluids (hence ‘hydro’) with the forces (hence ‘dynamic’) emerging from magnetic fields (hence ‘magneto’). In order to effectively use magnetic forces, the fluid has to be susceptible to them, so its particles should be electrically polar or even charged. The latter case, plasma, is the most common in electric propulsion.
There are two main characteristics of an MPD thruster:
The plasma constitutes a substantial part of the medium, which imparts a significant integral Lorentz force, and
the integral Lorentz force has a relevant contribution in the exhaust direction.
The electromagnetic contribution is the real distinction from the previous thermal approaches, as the kinetic energy of the jet is gained not only from undirected heating but also from a very directed acceleration. The greater the electric discharge, and the more powerful the magnetic field, the more the propellant is accelerated, and therefore the higher the exhaust velocity. Besides the Lorentz force, there are also minor electromagnetic effects, like “swirl” and Hall acceleration (which will be looked at in the next blog post), but the defining electromagnetic contribution is the Lorentz force. Since it acts on a plasma, this type of thruster is called a magnetoplasmadynamic (MPD) thruster.
The Lorentz force contribution is also how magnetic nozzles work: the forces involved can be broken into three parts – along the thruster axis, toward the thruster axis, and at right angles to both of these, around the axis. The first part adds to the thrust, the second pushes the plasma toward the centre, and the third generates a swirling effect, contributing both to the thrust and to spreading the arc into a radial symmetry.
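That decomposition all comes from the Lorentz force F = q·(v × B), which is always at right angles to both the particle’s velocity and the local field. A small sketch of the magnetic term (the numbers are illustrative, and the electric-field term is omitted for clarity):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz(q, v, b_field):
    """Magnetic part of the Lorentz force, F = q * (v x B)."""
    return tuple(q * c for c in cross(v, b_field))

# An ion moving radially outward (+x) through an axial field (+z)
# is pushed azimuthally (-y): the "swirl" component described above.
force = lorentz(1.0, (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(force)  # (0.0, -1.0, 0.0)
```

Swap the velocity for an azimuthal one and the force points back toward the axis – which is exactly how the three components above chase each other around the thruster axis.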
There are various ways to build MPD thrusters, with different propellants, geometries, methods of plasma and magnetic field generation, and operating regimes (stationary or pulsed). The actual design depends mostly on the available power. The core of the most common architecture for stationary MPD thrusters is the arc plasma generator, which makes MPD thrusters seem fairly similar to thermal arcjets. But that’s just in appearance: you can in fact build MPD thrusters almost completely free of a thermal contribution to the thrust, as evidenced by the German Aerospace Center’s (DLR) X-16, or the PEGASUS thruster that we’ll look at later in this post.
Where these thrusters (technically known as stationary MPD thrusters with arc generation) differ most noticeably is in the way the magnetic field is generated:
Applied-field (AF) MPD thrusters, endowed with either a torus of permanent magnets or a Helmholtz coil placed around the jet.
Self-field (SF) MPD thrusters, which generate their magnetic field by induction around the current travelling in the arc.
Note that arc-generator-based AF-MPD thrusters also experience (to a minor extent) self-field effects. A schematic of an SF-MPD thruster is shown below, illustrating the conceptual differences between arcjets and MPD thrusters (the top half is an MPD thruster, the bottom half an arcjet; note the difference in throat length and nozzle size). The most crucial difference is the contact of the arc with the anode. While a very long arc is undesirable in arcjets (for very important design reasons which we don’t have time to go into here), in the MPD thruster it is crucial for providing sufficient Lorentz force. Moreover, the longer the oblique leg of the arc, the more of the Lorentz force will point out of the thruster. This is why relatively large anode diameters are the norm in MPD thrusters of this type, and why simple arcjets and other electrothermal thrusters tend to be more slender than most arc-based MPD thrusters. The anode diameter may not be too large, however, as the arc becomes more resistive with increasing length, entailing more and more energy losses.
In stationary arc-based MPD thrusters, the choice of propellant is dictated mainly by ease of ionisation, which tends to be more important than the low molar mass that drives the preference for hydrogen in thermal thrusters. This shift becomes more pronounced the more the Lorentz-force contribution outweighs the thermal contribution. Consequently, many arc-based MPD experiments are run with noble gases like helium or neon; while xenon is often discussed in pure development work, it is rarely considered for missions due to its cost. The most important noble gas for MPD is thus argon. Other easily ionised substances are liquid alkali metals, commonly lithium, which enables very good efficiencies. However, the complicated propellant feed system and the risk of deposits are serious drawbacks in that case. Nevertheless, there is still a very large field for hydrogen or ammonia as propellants.
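The ease-of-ionisation argument can be made concrete with first ionisation energies; these are rounded reference values (treat them as approximate), but the ordering is what matters:

```python
# Approximate first ionization energies in eV (rounded reference values):
first_ionization_ev = {
    "He": 24.6, "Ne": 21.6, "Ar": 15.8, "H": 13.6, "Xe": 12.1, "Li": 5.4,
}

# Sort from easiest to hardest to ionize:
for species, energy in sorted(first_ionization_ev.items(), key=lambda kv: kv[1]):
    print(f"{species}: {energy} eV")
```

Lithium’s very low ionisation energy is what buys its efficiency advantage; argon sits comfortably below helium and neon while staying far cheaper than xenon, which is why it ends up as the workhorse.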
The major lifetime-limiting components in arc-based MPD thrusters are the cathode and, to a lesser extent, the anode. These erode over time under the plasma arc, which gnaws at the metals through electron emission, sublimation, and other mechanisms. Depending on the quality of the design and the material, this becomes significant after a few hundred hours of operation, or after tens of thousands. Extending electrode life is a challenge, because the plasma behavior changes depending on a number of factors determined by the plasma and the system in question – and, to add to the complexity, the geometry of the electrodes, which the erosion itself alters, is one of them. Because of this, some designs have easily replaceable cathodes, while others (like the PEGASUS, which we’ll cover below) just swap out the whole drive: the original design for the SEI mission that PEGASUS was proposed for actually had seven thrusters on board, run in series as each cathode wore out.
AF-MPD – The Lower-Power Option
Depending on the available power, the arc current in an MPD thruster may or may not be intense enough to induce a significant magnetic self-field. At the lower end of the power scale, it definitely is not, which breaks the MPD thruster principle. Because of this, lower-powered systems require an external magnet to create the magnetic field, which is why this type is called an applied-field MPD. In general, these systems range from 50 to 500 kW of electric power, although this is far from a hard limit. The advantage of applied-field MPD thrusters over self-field types (more on those later) is that the magnetic field can be manipulated independently of the amount of charge running through the cathode and anode, which can mean longer component life for the thruster. There are two main approaches to providing an external field: the first is a ring of permanent magnets around the volume occupied by the arc; the second is a Helmholtz coil (an electromagnet whose coil wraps around the lengthwise axis of the thruster, sometimes using superconductors). At the lower end of the power range, the permanent magnet may be the better option because it doesn’t consume what little electricity you have, while electromagnets are more interesting at the upper end.
All these solutions require cooling, and the requirements grow with the power of the magnet. This cooling can be achieved passively at the lower end of the power range (given enough free volume). At mid-level power, the cold propellant itself can provide the cooling before running alongside the hot anode and entering the plasma generator. Using cold propellant to cool the thruster is called regenerative cooling (a mainstay of current chemical and nuclear thermal engines). The highest-performing magnets for AF-MPD, superconducting coils, must be brought to very low temperatures, and this tends to require an additional, secondary coolant cycle, including its own refrigeration system with pumps, compressors, and radiators.
The nice thing about electromagnets is that the strength of the field can be tuned within a certain range. If the coil degrades over time, more electricity (and coolant, due to increased electrical resistance) can be pumped through; this isn’t an option for a permanent magnet. However, the magnetic field generation equipment is one of the lifetime-limiting components of this type of thruster, so it’s worth considering.
There’s not really a limit to how much power you can use in an applied-field MPD thruster, and especially with a Helmholtz coil you can theoretically tune your drive system in a number of interesting ways, like increasing the field strength to constrict the plasma more at lower mass flows. Something happens once enough charge is running through the plasma, though: the unavoidable self-field contribution increases. Besides complicating the determination of the field topology, the self field is an advantage: at sufficient power, you can get away without coils or magnets, making the system lighter, simpler, and less temperature-sensitive. This is why most very high powered systems use the self-field type of MPD.
Before we look at this concept in the next subsection, let us have a look at current developments from around the world. Table ## summarises a few interesting AF-MPD thrusters, listing both performance parameters – thrust F, exhaust velocity c_e, thrust efficiency η_T, electric (feed) power P_e, and jet power P_T – and design parameters, like anode radius r_A, cathode radius r_C, arc current I, magnetic field B, and the propellant. Recent AF-MPD thruster development has been conducted by Myers in the USA, by MAI (the Moscow Aviation Institute) in Russia, at the University of Tokyo in Japan, and on the SX 3 in Germany at the Institute of Space Systems (IRS), Stuttgart. The types X 9 and X 16 in table ## are the IRS’ legacy from the German Aerospace Center.
[Table ##: columns r_A / mm, r_C / mm, I / A, B / T, F / mN, η_T / %, P_e / kW, P_T / kW; table data not reproduced here.]
Design parameters and experimental performance data from various AF-MPD thrusters from around the world. Gabrielli 2018
Self-Field MPD: When Power Isn’t a Problem
In the previous section, we looked at low- and medium-powered MPD thrusters. At those power levels, an external field has to be applied to ensure a magnetic field powerful enough to generate the Lorentz force in the plasma. Even though it wasn’t enough to impart significant thrust, there was always a self-field contribution, albeit a weak, almost negligible one. The self-field contribution comes from the induction of a magnetic field around the arc due to the current it carries. You can get an idea of the direction of this magnetic field with the “right fist rule”: close your right fist around the generating current, with your thumb pointing towards the cathode, and your fingers will curl in the direction of the magnetic field. To get the direction of the Lorentz force, align your right hand again: this time, your thumb points in the direction of the magnetic field, and – at a right angle – your index finger in the direction of the current. At right angles to both, your middle finger will point in the direction of the Lorentz force. (Note that you can also use this three-finger rule to study the acceleration in AF-MPD thrusters.)
The strength of the induced self-field depends on the current: the stronger the current, the stronger the magnetic field, and, in turn, the Lorentz acceleration. As a consequence, given a sufficient current, the self-field alone is effective enough to provide a decent Lorentz acceleration.
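To get a feel for the magnitudes involved, the field around a long straight current – a crude stand-in for the arc – is B = μ0·I / (2π·r) by Ampère’s law. A sketch with illustrative numbers (real arc geometry is far messier than a straight wire):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def self_field_tesla(current_a, radius_m):
    """Field of a long straight current: B = mu0 * I / (2 * pi * r).
    A rough stand-in for the self-field around an MPD arc column."""
    return MU0 * current_a / (2 * math.pi * radius_m)

# A ~5 kA arc, field evaluated 1 cm from the arc axis (illustrative):
print(f"{self_field_tesla(5000, 0.01):.3f} T")  # 0.100 T
```

A kiloampere-class current produces a field on the order of a tenth of a tesla at centimetre scales – comparable to the applied fields listed for AF-MPD designs, which is why the external magnet becomes dispensable at high power.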
The current depends on the electric power put into the arc generator, making the applied field obsolete from certain power levels up. This removes the complications of an external magnet, and provides good efficiencies and attractive performance parameters. For example, at 300 kWe and an arc current of almost 5 kA (compare AF-MPD currents, which range from 50 A to 2 kA), DT2 – an SF-MPD thruster developed at the Institute of Space Systems in Stuttgart – can provide a thrust of approximately 10 N at an exhaust velocity of 12 km/s, for a thrust efficiency of 20%. These performance possibilities have many people considering SF-MPD as a key technology for rapid, crewed interplanetary transport, in particular to Mars. In this use case, SF-MPD thrusters may even be competitive with VASIMR, offsetting possible shortcomings in efficiency with significantly simpler construction and, hence, much smaller cost. However, lacking current astronuclear power sources of sufficient output, development is stagnant, awaiting disruption on the power source side.
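The DT2 figures quoted above are self-consistent, which we can verify from the definition of thrust efficiency (jet power over electric input power, with jet power P = ½·F·v_e):

```python
def thrust_efficiency(thrust_n, ve_m_s, input_power_w):
    """eta_T = jet power / electric input power, with P_jet = 0.5 * F * ve."""
    return 0.5 * thrust_n * ve_m_s / input_power_w

# DT2 figures quoted above: 10 N at 12 km/s exhaust velocity on 300 kWe.
print(f"{thrust_efficiency(10, 12e3, 300e3):.0%}")  # 20%
```

The other 80% of the input power ends up as heat and ionisation losses, which is exactly the waste-heat problem the radiator discussions elsewhere in this series keep coming back to.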
Another example of a “typical” self-field high-powered MPD thruster application (since, like all types of electric propulsion, the amount of power applied to the thruster defines the operational parameters) is the PEGASUS drive, an electric propulsion system developed under the Space Exploration Initiative (SEI) for an electric propulsion mission to Mars. Committed research on this concept began in the mid-1980s, aimed at a mission in the late 1990s to early 2000s, but funding for SEI was canceled, and development has been on hold ever since. Perhaps most notable is the shape of the nozzle, which is fairly typical of nozzles designed for a concept we discussed briefly earlier in the post: the sinuous curvature of the nozzle profile is designed to minimize thermal heating within the plasma. A nozzle with this shape means that the thermal contribution to the thrust is not only unneeded, but actually detrimental to the performance of the thruster.
A number of novel technologies were used in this design, and as such we’ll look at it again a couple of times during this series: first for the thruster, then for its power conversion system, and finally for its heat rejection system.
Pulsed Inductive Thrusters
Pulsed inductive thrusters (PIT) have many advantages over other MPD thrusters. They don’t need electrodes, which are one of the major causes of wear in most thrusters, and they are able to maintain their specific impulse over a wide range of power levels. This is because the PIT isn’t a steady-state thruster like many other forms commonly in use; instead, a gaseous propellant is sprayed in brief jets onto a flat induction coil, which is then discharged from a bank of capacitors for a very brief period (usually in the nanosecond range), ionizing the gas and accelerating it through the Lorentz force. The frequency of pulses depends on the time it takes to charge the capacitors, so the more power available, the faster the pulses can be discharged. This directly affects the amount of thrust available from the thruster, but since the discharges and the volume of gas are all the same, the Lorentz force applied – and therefore the exhaust velocity of the propellant and the isp – remain the same. Another advantage of inductive plasma generation is the wide variety of propellants available, from water to ammonia to hydrazine, making it attractive for possible in-situ propellant use with minimal processing. In fact, one proposal by Kurt Polzin at Marshall SFC uses the Martian atmosphere as propellant, making refueling a Mars-bound interplanetary spacecraft a much easier proposition.
This gives the system a lot of flexibility, especially for interplanetary missions, because additional thrust has distinct advantages when escaping a gravity well (such as Earth orbit) or during orbital capture, but isn’t necessary for the “cruise” phase of interplanetary missions. Another nice thing: for power-constrained missions, many thruster types vary in specific impulse – and therefore in the amount of propellant needed for the mission – depending on how much power is left for propulsion after other electricity requirements, like sensors and communications. For the PIT, less power just means less thrust per unit time, while the isp remains the same. This isn’t necessarily a major advantage for all mission types, but for some it could be a significant draw.
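The constant-Isp, variable-thrust behavior can be summed up in a couple of lines: each pulse delivers the same impulse bit, so average thrust is simply the impulse bit times the pulse rate. The impulse bit value below is a hypothetical placeholder, not a figure for any real PIT:

```python
def average_thrust_n(impulse_bit_n_s, pulse_rate_hz):
    """Average thrust (N) of a pulsed thruster.

    Each pulse ionizes the same gas charge with the same capacitor discharge,
    so the impulse bit (N*s) and exhaust velocity (hence Isp) stay fixed;
    available power only changes how fast the capacitors can recharge."""
    return impulse_bit_n_s * pulse_rate_hz

# Hypothetical 0.1 N*s impulse bit at different capacitor recharge rates:
for rate_hz in (10, 50, 100):
    print(f"{rate_hz} Hz -> {average_thrust_n(0.1, rate_hz):.1f} N average thrust")
```

Throttling happens entirely in the pulse rate, which is why the propellant budget for a PIT mission is insensitive to power fluctuations in a way most electric thrusters can’t match.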
PIT was one of the proposed propulsion types for Project Prometheus (which ended up using the HiPEP system that we’ll discuss in the next blog post), under the name NuPIT. This thruster offered a thrust efficiency of greater than 70% and an isp of between 2,000 and 9,000 seconds, depending on the specific design chosen (the isp would remain constant at whatever value was selected), using a 200 kWe nuclear power plant (on the lower end of what a crewed NEP mission would use) with ammonia propellant. Other propellants could have been selected, but they would have affected the performance of the thruster in different ways. An advantage of the PIT, though, is that its breadth of propellant options is far wider than most other thruster types, even thermal rockets: if chemical dissociation occurs (as it does to some degree in most propellants), anything that would become a solid doesn’t really have a surface to deposit onto effectively, and what little residue builds up lands on a flat plate that doesn’t rely on thermal conductance or orifice size for its functionality – it’s just a plate holding the induction coil.
For a “live off the land” approach to propellant, PIT thrusters offer many advantages in their flexibility (assuming replacement of the gaseous diffuser used for the gas pulses), predictable (and fairly high) specific impulse, and variable thrust. This makes them incredibly attractive for many mission types. As higher powered electrical systems become available, they may become a popular option for many mission applications.
We’ll return to PIT thrusters in a future post, to explore the implications of the variable thrust levels on mission planning, because that’s a very different topic than just propulsion mechanics. It does open fascinating possibilities for unique mission profiles, though, in some ways very similar to the VASIMR drive.
More to follow!
Electrothermal and MPD thrusters cover a wide range of low- to high-power options, and offer many unique capabilities for mission planners and spacecraft architects. With the future availability of dense, high-powered fission power systems for spacecraft, these systems may prove valuable not just for short missions or reaction control systems, but for interplanetary missions as well. Some will have to wait until those power sources are available to be used, but others are already flying on operational satellites, and have demonstrated decades of efficient and effective operation.
The next post will complete our look at electric propulsion systems with a look at electrostatic thrusters, including gridded ion drives, Hall effect thrusters, and other forms of electric propulsion that use differences in electric potential to accelerate an ionized propellant. These have been in use for a long time, and are far more familiar to many people, but there are some incredible designs that have yet to be flown that extend the capabilities of these systems beyond even the very efficient systems in use today.
Another thank you to Roland Gabrielli and Mikkel Haaheim for their invaluable help on this blog post. Without them, this post wouldn't be nearly as comprehensive or accurate as it is.
Again, I apologize that this blog post has taken so long. I reached the point where I typically decide to split one blog post into two several times while writing it, and actually DID split it a couple of times. Much of this information, along with a lot of the material in the next post on electrostatic thrusters, was originally going to be part of the last post, but once that one reached 25 pages in length I decided to split it between history and summary; the same thing happened again in writing this post, separating the electrostatic thrusters from the thermal and MPD thrusters. The latter two concepts also almost got their own blog posts, but as we've seen, the two share key features, so it made sense to keep them together. The electrostatic thruster post is already coming along well, and I hope it won't take as long to write as this one did... sadly, I can't promise that, but I'm trying.
Mayer, T., Gabrielli, R. A., Boxberger, A., Herdrich, G., and Petkow, D., "Development of Analytical Scaling Models for Applied Field Magnetoplasmadynamic Thrusters," 64th International Astronautical Congress, International Astronautical Federation, Beijing, September 2013.
Myers, R. M., "Geometric Scaling of Applied-Field Magnetoplasmadynamic Thrusters," Journal of Propulsion and Power, Vol. 11, No. 2, 1995, pp. 343–350.
Tikhonov, V. B., Semenikhin, S. A., Brophy, J. R., and Polk, J. E., "Performance of 130 kW MPD Thruster with an External Magnetic Field and Li as a Propellant," International Electric Propulsion Conference, IEPC 97-117, Cleveland, Ohio, 1997, pp. 728–733.
Boxberger, A., et al., "Experimental Investigation of Steady-State Applied-Field Magnetoplasmadynamic Thrusters at Institute of Space Systems," 48th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, Atlanta, Georgia, 2012.
Boxberger, A., and Herdrich, G., "Integral Measurements of 100 kW Class Steady State Applied-Field Magnetoplasmadynamic Thruster SX3 and Perspectives of AF-MPD Technology," 35th International Electric Propulsion Conference, 2017.
Hello, and welcome back to Beyond NERVA. Today, we are looking at a very popular topic, but one that doesn't necessarily require nuclear power: electric propulsion. However, it IS an area that nuclear power plants are often tied to, because the amount of thrust available is highly dependent on the amount of power available for the drive system. We will touch a little bit on the history of electric propulsion, as well as the different types of electric thrusters, their advantages and disadvantages, and how fission power plants can change the paradigm for how electric thrusters can be used. It's important to realize that most electric propulsion systems are power-source-agnostic: all they require is electricity; how it's produced usually doesn't matter much to the drive system itself. As such, nuclear power plants are not going to be mentioned much in this post, until we look at the optimization of electric propulsion systems.
We also aren’t going to be looking at specific types of thrusters in this post. Instead, we’re going to do a brief overview of the general types of electric propulsion, their history, and how electrically propelled spacecraft differ from thermally or chemically propelled spacecraft. The next few posts will focus more on the specific technology itself, its’ application, and some of the current options for each type of thruster.
Electric Propulsion: What is It?
In its simplest definition, electric propulsion is any means of producing thrust in a spacecraft using electrical energy. A wide range of different concepts gets rolled into this term, so it's hard to make generalizations about the capabilities of these systems. As a general rule of thumb, though, most electric propulsion systems are low-thrust, long-burn-time systems. Since they're not used for launch, but rather for on-orbit maneuvering or interplanetary missions, the fact that these systems generally produce very little thrust is a characteristic that can be worked with, although there's a great deal of variety in how much thrust these systems produce and how efficient, in terms of specific impulse, they are.
There are three very important basic concepts to understand when discussing electric propulsion: thrust-to-weight ratio (T/W), specific impulse (isp), and burn time. The first is self-explanatory: how hard the engine can push compared to how much it weighs, commonly expressed in relation to Earth's gravity. A T/W ratio of 1/1 means that the engine can just barely hover, but no more; a T/W ratio of 3/1 means it can lift just under three times its own weight off the ground. Specific impulse is a measure of how much impulse you get out of a given unit of propellant, ignoring everything else, including the weight of the propulsion system; it's directly related to fuel efficiency, and is measured in seconds: if an engine had a T/W ratio of 1/1 and were made entirely of propellant, the isp would be the length of time it could hover at 1 gee. Finally, you have burn time: the thrust and isp together tell you the impulse imparted per unit of time and propellant, and factoring in the spacecraft's mass gives the acceleration over any given period. The longer the engine burns, the more total velocity change is produced.
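These three quantities tie together in a simple way; here's a small Python sketch relating thrust, specific impulse, propellant flow, and burn time. The thruster figures are rough illustrative values (roughly NSTAR-class for the ion engine, a generic storable-propellant chemical engine for comparison), not data from any specific mission:

```python
# Rough numbers tying thrust, specific impulse, and burn time together.
G0 = 9.80665  # standard gravity, m/s^2

def propellant_flow(thrust_n, isp_s):
    """Mass flow (kg/s) implied by the definition isp = F / (mdot * g0)."""
    return thrust_n / (isp_s * G0)

def burn_time_s(propellant_kg, thrust_n, isp_s):
    """How long a given propellant load sustains a given thrust."""
    return propellant_kg / propellant_flow(thrust_n, isp_s)

# Same 80 kg of propellant, wildly different burn profiles:
ion_days = burn_time_s(80, 0.092, 3_100) / 86_400  # ~306 days of thrust
chem_min = burn_time_s(80, 500.0, 350) / 60        # ~9 minutes of thrust
```

The same propellant load that a small chemical engine exhausts in minutes sustains an ion engine for the better part of a year, which is exactly the trade the rest of this section explores.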
Electric propulsion has a very poor thrust-to-weight ratio (as a general rule), but incredible specific impulse and burn times. The T/W ratio of many of these thrusters is very low because they provide very little thrust, often measured in millinewtons or even micronewtons; the thrust is sometimes illustrated as the weight of a few pieces of paper, or a penny, in Earth gravity. However, this doesn't matter much once you're in space: with no drag, and with orbital mechanics not demanding huge amounts of thrust over a short period of time, the total impulse delivered matters more for most maneuvers than how long it takes to build it up. This is where burn time comes in: most electric thrusters burn continuously, providing minute amounts of thrust over months, sometimes years, pushing the spacecraft in the direction of travel until halfway through the mission (in energy-budget terms, not necessarily in total mission time), then flipping around to decelerate for the second half of the trip. The trump card for electric propulsion is specific impulse: rather than the few hundred seconds of isp for chemical propulsion, or the thousand or so for a solid core nuclear thermal rocket, electric propulsion gives thousands of seconds of isp. This means less propellant, which in turn makes the spacecraft lighter, and allows for truly astounding total velocities; the downside is that it takes months or years to build up those velocities, so escaping a gravity well (for instance, if you're starting in low Earth orbit) can take months. It's therefore best suited for long trips, or for very minor changes in orbit, as on communications satellites, where it has made spacecraft smaller, more efficient, and far longer-lived.
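The specific impulse "trump card" falls straight out of the Tsiolkovsky rocket equation. Here's a quick sketch; the 6 km/s delta-v is a notional figure chosen purely for illustration, not any particular mission's budget:

```python
# The propellant-mass payoff of high isp, via the Tsiolkovsky rocket
# equation: m_propellant / m_initial = 1 - exp(-dv / (isp * g0)).
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_ms, isp_s):
    """Fraction of initial mass that must be propellant for a given delta-v."""
    return 1.0 - math.exp(-delta_v_ms / (isp_s * G0))

dv = 6_000.0  # m/s, a notional interplanetary delta-v
chem = propellant_fraction(dv, 450)    # ~74% of the ship is propellant
ntr  = propellant_fraction(dv, 900)    # ~49% (solid-core NTR class)
ion  = propellant_fraction(dv, 3_000)  # ~18% (modest ion drive)
```

Because the delta-v sits in an exponent, every doubling of isp cuts the propellant fraction dramatically, which is why the slow-but-efficient regime pays off so well for long missions.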
Electric propulsion is an old idea, but one that has yet to reach its full potential, due to a number of challenges. Tsiolkovsky and Goddard both wrote about electric propulsion, but since neither lived in a time when it was possible to reach orbit, their ideas went unrealized in their lifetimes. Electric propulsion isn't suitable for lifting rockets off the surface of a planet, but for in-space propulsion it's incredibly promising. Both men showed that, to put it simply, the only thing that matters for a rocket engine is that some mass is thrown out the back to provide thrust; it doesn't matter what that mass is. Electric acceleration isn't (directly) limited by thermodynamics (aside from entropic losses), only by electric potential differences, and can offer very efficient conversion of electric potential to kinetic energy (the "throwing something out of the back" part of the system).
In chemical propulsion, combustion produces heat, which causes the byproducts of the chemical reaction to expand and accelerate; these are then directed out of a nozzle to increase the exhaust velocity and provide thrust. This was the first type of rocket ever developed, and while advances are still being made, in many ways the field is chasing ever more esoteric or exotic ways to produce ever more marginal gains. The reason is that there's only so much chemical potential energy available in a given propellant combination. The most efficient chemical engines top out around 500 seconds of specific impulse, and most hover around the 350-second mark. Where chemical engines excel, though, is thrust-to-weight ratio. They remain, arguably, our best, and currently our only, way of actually getting off Earth.
Thermal propulsion doesn't rely on chemical potential energy; instead, the reaction mass is heated directly by some other source, causing it to expand. The lighter the propellant, the faster it expands, and therefore the more thrust is produced for a given mass of propellant; heavier propellants can be used to give more thrust per unit volume, at lower efficiency. It should be noted that electrically powered thermal propulsion is not only possible but common, in the form of electrothermal thrusters, but we'll dig more into that later.
Electric propulsion, on the other hand, is kind of a catch-all term when you start to look at it. There are many mechanisms for changing electrical energy into kinetic energy, and looking at most – but not all – of the options is what this blog post is about.
In order to get a better idea of how these systems work, and the fundamental principles behind electric propulsion, it may be best to look into the past. While the potential of electric propulsion is far from realized, it has a far longer history than many realize.
Futuristic Propulsion? … Sort Of, but With A Long Pedigree
The Origins of Electric Propulsion
When looking into the history of spaceflight, two great visionaries stand out: Konstantin Tsiolkovsky and Robert Goddard. Both worked independently on the basics of rocketry at the turn of the 20th century, both contributed much to the theory, and both saw far beyond their time to the potential of rocketry and spaceflight in general. Both also independently came up with the concept of electric propulsion, although deciding who did so first requires some splitting of hairs: Goddard mentioned it first, but in a private journal, while Tsiolkovsky was the first to publish the concept in a scientific paper, even if the reference is fairly vague (understandably so, considering the era). Additionally, electricity was a relatively poorly understood phenomenon at the time (the nature of cathode and anode "rays" was much debated, and positively charged ions had yet to be formally described), and neither visionary had a deep understanding of the concepts involved. Their ideas were little more than just that: concepts that could serve as a starting point, not actual designs for systems that could propel a spacecraft.
The first mention of electric propulsion in the formal scientific literature was in 1911, in Russia. Konstantin Tsiolkovsky wrote that “it is possible that in time we may use electricity to produce a large velocity of particles ejected from a rocket device.” He began to focus on the electron, rather than the ion, as the ejected particle. While he never designed a practical device, the promise of electric propulsion was clearly seen: “It is quite possible that electrons and ions can be used, i.e. cathode and especially anode rays. The force of electricity is unlimited and can, therefore, produce a powerful flux of ionized helium to serve a spaceship.” The lack of understanding of electric phenomena hindered him, though, and prevented him from ever designing a practical system, much less building one.
The first mention of electric propulsion in history came from Goddard in 1906, in a private notebook, though as Edgar Choueiri notes in his excellent 2004 historical paper (a major source for this section), these early notes don't actually describe (or even reference the use of) an electric propulsion drive system. It wasn't a practical design (that didn't come until 1917), but the basic principles were laid out for the acceleration of electrons (rather than positively charged ions) to the "speed of light." Over the next few years, the concept fermented in his mind, culminating in patents in 1912 (for an ionization chamber using magnetic fields, similar to modern ionization chambers) and in 1917 (for a "Method and Means for Producing Electrified Jets of Gas"). The third of the three variants in that patent was the first recognizable electric thruster, which would come to be known as an electrostatic thruster. Shortly afterward, though, America entered WWI, and Goddard spent the rest of his life focused on the then far more practical field of chemical propulsion.
Other visionaries of rocketry also came up with concepts for electric propulsion. Yuri Kondratyuk (another, lesser-known, Russian rocket pioneer) wrote “Concerning Other Possible Reactive Drives,” which examined electric propulsion, and pointed out the high power requirements for this type of system. He didn’t just examine electron acceleration, but also ion acceleration, noting that the heavier particles provide greater thrust (in the same paper he may have designed a nascent colloid thruster, another type of electric propulsion).
Another of the first generation of rocket pioneers to look at the possibilities of electric propulsion was Hermann Oberth. His 1929 opus, “Ways to Spaceflight,” devoted an entire chapter to electric propulsion. Not only did he examine electrostatic thrusters, but he looked at the practicalities of a fully electric-powered spacecraft.
Finally, we come to Valentin Glushko, another early Russian rocketry pioneer and a giant of the Soviet rocketry program. In 1929, he actually built an electric thruster (an electrothermal system that vaporized fine wires to produce superheated particles), although this particular concept never flew. By this time, it was clear that much more work had to be done in many fields before electric propulsion could be used, and so, one by one, these early visionaries turned their attention to chemical rockets, while electric propulsion sat on the dusty shelf of spaceflight concepts that had yet to be realized, collecting dust next to centrifugal artificial gravity, solar sails, and other promising ideas that couldn't be realized for decades.
The First Wave of Electric Propulsion
Electric propulsion began to be investigated after WWII, both in the US and in the USSR, but it would be another 19 years of development before a flight system was introduced. Both countries focused on the same general category, the electrostatic thruster, but each pursued a different type, reflecting its own technical capabilities and priorities. The US focused on what is now known as the gridded ion thruster, most commonly called an ion drive, while the USSR focused on the Hall effect thruster, which uses a magnetic field perpendicular to the current direction to accelerate particles. Both of these concepts will be examined further in the section on electrostatic thrusters; for now, it's worth noting that the design differences led to two very different systems, and two very different conceptions of how electric propulsion would be used in the early days of spaceflight.
In the US, the most vigorous early proponent of electric propulsion was Ernst Stuhlinger, who was the project manager for many of the earliest electric propulsion experiments. He was inspired by the work of Oberth, and encouraged by von Braun to pursue this area, especially now that being able to get into space to test and utilize this type of propulsion was soon to be at hand. His leadership and designs had a lasting impact on the US electric propulsion program, and can still be seen today.
The first spacecraft to be propelled using electric propulsion was SERT-I, a follow-on to a suborbital test (Program 661A, Test A, the first of three suborbital tests for the USAF) of the ion drives that would be used. These drive systems used cesium and mercury as propellants, rather than the inert gases commonly used today, because these metals have very low ionization energies and reasonably favorable masses for providing more significant thrust. Tungsten buttons were used in place of the grids found in modern ion drives, and a tantalum wire was used to neutralize the ion stream. Unfortunately, the cesium engine short-circuited, but the mercury system was tested for 31 minutes and 53 engine cycles. This demonstrated not only ion propulsion in principle but, just as importantly, ion beam neutralization. Neutralization matters for most electric propulsion systems because it prevents the spacecraft from building up a negative charge, and possibly even attracting the ion stream back onto itself, robbing it of thrust and contaminating onboard sensors (a common problem in early electric propulsion systems).
The SNAPSHOT program, which launched the SNAP 10A nuclear reactor on April 3, 1965, also had a cesium ion engine as a secondary experimental payload. The failure of the electrical bus prevented this from being operated, but SNAPSHOT could be considered the first nuclear electric spacecraft in history (if unsuccessful).
The ATS program continued to develop cesium thrusters from 1968 through 1970. ATS-4 was the first demonstration of an orbital spacecraft with electric propulsion, but sadly there were problems with beam neutralization in the drive systems, indicating more work needed to be done. ATS-5 was a geostationary satellite meant to use electric propulsion for stationkeeping, but the satellite could not be despun after launch, meaning the thruster couldn't be used for propulsion (the emission chamber was flooded with un-ionized propellant), although it was used as a neutral plasma source for experimentation. ATS-6 was a similar design, and its thrusters successfully operated for a total of over 90 hours (one failed early due to a similar emission chamber flooding issue). The SERT-II and SCATHA satellites continued to demonstrate improvements as well, using both cesium and mercury ion devices (SCATHA's system wasn't optimized as a drive, but used similar components to test spacecraft charge neutralization techniques).
Despite these tests in the 1960s, it would be another thirty years before an operational satellite used ion propulsion. Problems with the aforementioned thrusters becoming saturated, spacecraft contamination from the highly reactive cesium and mercury propellants, and relatively short engine lifetimes (due to erosion of the screens used in this type of ion thruster) didn't offer much promise to mission planners. The high (2,000+ s) specific impulse was very promising for interplanetary spacecraft, but the low reliability and fairly short lifetimes of these early ion drives made them of marginal use at best. Ground testing of various concepts continued in the US, but flight missions were rare until the end of the 1990s. This likely helped feed the idea that electric propulsion is new and futuristic, rather than having conceptual roots reaching all the way back to the dawn of the age of flight.
Early Electric Propulsion in the USSR
Unlike in the US, the USSR started development of electric propulsion early, and continued its development almost continuously to the modern day. Sergei Korolev’s OKB-1 was tasked, from the beginning of the space race, with developing a wide range of technologies, including nuclear powered spacecraft and the development of electric propulsion.
Part of this may be due to the different architecture the Soviet engineers chose: rather than accelerating ions toward a pair of charged grids, the Soviet designs used a stream of ionized gas with a perpendicular magnetic field to accelerate the ions. This is the Hall effect thruster, which has several advantages over the gridded ion thruster, including simplicity, fewer problems with erosion, and higher thrust (admittedly, at the cost of specific impulse). Other designs, including the PPT, or pulsed plasma thruster, were also experimented with (the ZOND-2 spacecraft carried a PPT system). However, thanks to the rapidly growing Soviet mastery of plasma physics, the Hall effect thruster became a very attractive system.
Two main types of Hall thruster were experimented with: the stationary plasma thruster (SPT) and the thruster with anode layer (TAL), the names referring to how the electric charge is produced, the behavior of the plasma, and the path the current follows through the thruster. The TAL was developed in 1957 by Askold Zharinov and proven in 1958-1961, but a prototype wasn't built until 1967 (using cesium, bismuth, cadmium, and xenon propellants, with isp of up to 8,000 s), and it wasn't published in the open literature until 1973. This thruster is characterized by a narrow acceleration zone, meaning it can be more compact.
The SPT, on the other hand, can be larger, and is the most common form of Hall thruster used today. Complications in the plasma dynamics of this system meant that it took longer to develop, but its greater electrical efficiency and thrust make it a more attractive choice for stationkeeping thrusters. Research began in 1962 under Alexey Morozov at the Institute of Atomic Energy, later moving to the Moscow Aviation Institute, and then again to what became known as OKB Fakel (now Fakel Industries, still a major producer of Hall thrusters). The first breadboard thruster was built in 1968, and flew in 1970; it was then used for attitude control on the Meteor series of weather satellites. Development on the design has continued ever since, though for decades these thrusters weren't widely used elsewhere, despite their higher thrust and freedom from the spacecraft contamination problems of similar-vintage American designs.
It would be a mistake to think that only the US and the USSR were working on these concepts, though. Germany also had a diversity of programs. Arcjet thrusters, as well as magnetoplasmadynamic thrusters, were researched by the predecessors of the DLR. This work was inherited by the University of Stuttgart Institute for Space Systems, which remains a major research institution for electric propulsion in many forms. France, on the other hand, focused on the Hall effect thruster, which provides lower specific impulse, but more thrust. The Japanese program tended to focus on microwave frequency ion thrusters, which later provided the main means of propulsion for the Hayabusa sample return mission (more on that below).
The Birth of Modern Electric Propulsion
For many people, electric propulsion was an unknown until 1998, when NASA launched the Deep Space 1 mission. DS1 was a technology demonstration mission, part of the New Millennium program of advanced technology testing and experimentation. A wide array of technologies was to be tested in space after extensive ground testing, but for the purposes of Beyond NERVA, the most important of these new concepts was the first operational ion drive, the NASA Solar Technology Applications Readiness thruster (NSTAR). As is typical of many modern NASA programs, DS1 far exceeded its minimum requirements. Originally meant to do a flyby of the asteroid 9969 Braille, the mission was extended twice: first for a transit to the comet 19P/Borrelly, and later to extend engineering testing of the spacecraft.
In many ways, NSTAR was a departure from most flight-tested American electric propulsion designs. The biggest difference was the propellant: cesium and mercury were easy to ionize, but problems with neutralizing the propellant stream, the resulting contamination of the spacecraft and its sensors, the desire to minimize chemical reaction complications, and a growing conservatism about carrying toxic components on spacecraft led to the decision to use a noble gas, in this case xenon. This doesn't mean it was a great departure from the gridded ion drives of earlier US development, though; it was an evolution, not a revolution, in propulsion technology. Despite an early failure of the NSTAR thruster (at 4.5 hours), it was able to be restarted, and the overall thruster life was 8,200 hours, with the backup achieving more than 500 hours beyond that.
Not only that, but this was not the only use of this thruster. The Dawn mission to the minor planet Ceres uses NSTAR thrusters, and is still in operation around that body, sending back incredibly detailed and fascinating information about the water and organic content of the asteroid belt, among many other discoveries that will be exciting for when humanity begins to mine it.
Many satellites, especially geostationary satellites, use electric propulsion today, for stationkeeping and even for final orbital insertion. The low thrust of these systems is not a major detriment, since they can be used over long periods of time to maintain a stable orbital path, and the small amount of propellant required allows for larger payloads or longer mission lifetimes.
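A rough back-of-the-envelope calculation shows why stationkeeping became electric propulsion's first big commercial niche. The satellite mass, delta-v budget, and isp values below are all assumed round numbers for illustration, not figures for any real spacecraft:

```python
# Rocket-equation comparison of stationkeeping propellant budgets.
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_for(dry_kg, delta_v_ms, isp_s):
    """Propellant needed to give the stated dry mass the stated delta-v."""
    return dry_kg * (math.exp(delta_v_ms / (isp_s * G0)) - 1.0)

# Notional GEO satellite: 2,000 kg dry, ~50 m/s per year of stationkeeping
# over a 15-year life (assumed figures).
dv = 50.0 * 15
hydrazine_kg = propellant_for(2_000, dv, 220)    # ~830 kg of chemical propellant
hall_kg      = propellant_for(2_000, dv, 1_600)  # ~100 kg with a Hall thruster
```

Hundreds of kilograms saved over the satellite's life translates directly into more transponders at launch or more years on station, which is why operators adopted these thrusters even before interplanetary missions did.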
After decades of being considered impractical, immature, or unreliable, electric propulsion has come in from the cold. Many designs for interplanetary spacecraft now use electric propulsion, taking full advantage of the high-isp, low-thrust propulsion regime that these thruster systems excel in.
Another type of electric thruster is also becoming popular with small-sat users: the electrothermal thruster, which offers higher thrust from chemically inert propellants in a compact form, at the cost of specific impulse. These thrusters deliver some of the benefits of high-thrust chemical propulsion without the reactive propellants, a major requirement for most smallsats, which fly as secondary payloads and have to demonstrate that they won't threaten the primary payload.
So, now that we’ve looked into how we’ve gotten to this point, let’s see what the different possibilities are, and what is used today.
What are the Options?
The most well-known and popularized version of electric propulsion is electrostatic propulsion, which uses an ionization chamber (or an ionic fluid) to develop a stream of positively charged ions, which are then accelerated out the "nozzle" of the thruster. A stream of electrons is added to the propellant as it leaves the spacecraft, to prevent the buildup of a negative charge. There are many variations on this concept, including the best-known types of thrusters (the Hall effect and gridded ion thrusters), as well as field emission thrusters and electrospray thrusters.
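For gridded ion thrusters in particular, the ideal exhaust velocity follows directly from energy conservation for an ion falling through the grid potential. This sketch ignores real-world losses (beam divergence, doubly charged ions, neutral propellant leakage), so flight isp runs noticeably lower than the ideal figure:

```python
# Ideal exhaust velocity for a gridded ion thruster, from energy
# conservation: q * V = (1/2) * m * v^2, so v = sqrt(2 * q * V / m).
import math

E_CHARGE = 1.602176634e-19             # elementary charge, C
M_XENON = 131.293 * 1.66053906660e-27  # xenon atomic mass, kg
G0 = 9.80665                           # standard gravity, m/s^2

def ideal_exhaust_velocity(grid_volts, ion_mass_kg=M_XENON):
    """Exit velocity of a singly charged ion falling through grid_volts."""
    return math.sqrt(2.0 * E_CHARGE * grid_volts / ion_mass_kg)

v = ideal_exhaust_velocity(1_100)  # ~40 km/s for xenon at 1.1 kV
isp = v / G0                       # ~4,100 s ideal specific impulse
```

Note how the square root works against you: quadrupling the grid voltage only doubles the exhaust velocity, while heavier ions (like the cesium and mercury of the early programs) trade exhaust velocity for thrust.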
The next most common version, and one with a large number of popular mentions these days, is the electromagnetic thruster. Here, the propellant is converted to a relatively dense plasma, and usually (but not always) magnets are used to accelerate it to high speed out of a magnetic nozzle, exploiting the electromagnetic and thermal properties of plasma physics. In cases where the plasma isn't accelerated by magnetic fields directly, magnetic nozzles and other plasma-shaping fields are used to constrict or expand the flow. There are many versions, from magnetohydrodynamic thrusters (MHD, where a charge isn't transferred into the plasma from the magnetic field) to the less-well-known magnetoplasmadynamic thruster (MPD, where the Lorentz force is used to at least partially accelerate the plasma), the electrodeless plasma thruster, and the pulsed inductive thruster (PIT).
Thirdly, we have electrothermal drive systems: basically, highly advanced electric heaters used to heat a propellant. These tend to be less energy-efficient but higher-thrust systems (although, theoretically, some versions of electromagnetic thrusters can achieve high thrust as well). The most commonly proposed electrothermal systems have been arcjet, resistojet, and inductive heating drives, the first two being popular choices for reaction control systems on large, nuclear-powered space stations. Inductive heating has already made a number of appearances on this page, both in testing apparatus (CFEET and NTREES are both inductively heated) and as part of a bimodal NTR (the nuclear thermal electric rocket, or NTER, covered on our NTR page).
The last two categories, electromagnetic and electrothermal, often rely on similar mechanisms when you look at the details, and the line between them isn't always clear. For instance, the pulsed plasma thruster (PPT), which most commonly uses a solid propellant such as PTFE (Teflon) that is vaporized and occasionally ionized electrically before being accelerated out of the spacecraft, is described by some authors as an MHD thruster and by others as an arcjet, and which term best applies depends on the particulars of the system in question. A more famous example of this gray area is the VASIMR thruster (VAriable Specific Impulse Magnetoplasma Rocket). This system uses a dense plasma contained in a magnetic field, but the plasma is heated inductively using RF energy and then accelerated by its own thermal behavior while being magnetically contained. Because of this, the system can be seen either as an MHD thruster or as an electrothermal thruster (that debate, and the way these terms are used, was one of the more enjoyable parts of editing this blog post, and I'm sure one that will continue as we keep examining EP).
Finally, we come to the photon drives. These use photons as the reaction mass, and as such are sometimes somewhat jokingly called flashlight drives. They have the lowest thrust of any of these systems, but the exhaust velocity is literally the speed of light, so they have insanely high specific impulse. Just... don't expect any sort of significant acceleration; getting up to speed with these systems could take decades, if not centuries, making them popular choices for interstellar concepts rather than interplanetary ones. Photon propulsion has another option, though: the power source doesn't need to be on board the spacecraft at all! This is the principle behind the lightsail (the best-known cousin being the solar sail): a fixed installation produces a laser or other stream of photons (such as the microwave maser in the Starwisp concept), which then impacts a reflective surface to provide thrust. This type of system follows a different set of rules and limitations, however, from systems where the power supply (and associated equipment), drive system, and any propellant are all on board the spacecraft, so we won't go too deeply into that concept initially, instead focusing on designs that carry everything with them.
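The "flashlight drive" thrust numbers are easy to check, since photon momentum gives F = P/c for emitted light (or 2P/c for a perfectly reflecting sail); the megawatt power level here is just an illustrative choice:

```python
# Photon-drive thrust from photon momentum.
C = 299_792_458.0  # speed of light, m/s

def photon_thrust(power_w, reflective=False):
    """Thrust (N) from radiating, or perfectly reflecting, power_w of light."""
    return (2.0 if reflective else 1.0) * power_w / C

emit = photon_thrust(1_000_000)                   # 1 MW beam: ~3.3 mN
sail = photon_thrust(1_000_000, reflective=True)  # perfect reflection doubles it
```

A full megawatt of onboard power buys only a few millinewtons, which is why the interesting photon-drive designs leave the power plant on the ground and ship only the mirror.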
Each of these systems has its advantages and disadvantages. Electrostatic thrusters are very simple to build: ionization chambers are easy, and creating a charged field is easy as well; but to get one to work, something has to generate that charge, and whatever that something is will be hit by the ionized particles used as propellant, causing erosion. Plasmadynamic thrusters can provide incredible flexibility, but generally require large power plants, and reducing the power requirements requires superconducting magnets and other materials challenges. In addition, plasma physics, while becoming increasingly well understood, provides a unique set of challenges. Electrothermal thrusters are simple, but generally provide poor specific impulse, and thermal cycling of the components causes wear. Finally, photon drives are incredibly efficient but very, very low thrust systems, requiring exceedingly long burn times to produce any noticeable change in velocity. Let’s look at each of these options in a bit more detail, and look at the practical limitations that each system has.
Optimizing the System: The Fiddly Bits
As we’ve seen, there’s a huge array of technologies that fall under the umbrella of “electric propulsion,” each with their advantages and disadvantages. The mission to be performed determines which types of thrusters are feasible, depending on a number of factors. If the mission is stationkeeping for a geosynchronous communications satellite, then the Hall thruster has a wonderful balance between thrust and specific impulse. If the mission is a sample return from an asteroid, then the lower-thrust, higher-specific-impulse gridded ion thruster is better, because the longer mission time (and greater overall delta-v needed for the mission) makes this low-thrust, high-efficiency thruster a far more ideal option. If the mission is stationkeeping on a small satellite that is a piggyback load, the arcjet may be the best option, due to its compactness, the chemically inert nature of the propellant, and relatively high thrust. If higher thrust is needed over a longer period for a larger spacecraft, MPD may be the best bet. Very few systems are designed to deal with a wide range of capabilities in spaceflight, and electric propulsion is no different.
There are other key concepts to consider in the selection of an electric propulsion system as well. The first is the efficiency of the system: how much electricity is required by the thruster, compared to how much kinetic energy is imparted to the spacecraft via the propellant. This efficiency varies between specific designs, and its improvement is a major goal in every thruster’s development process. The quality of electrical power needed is also an important consideration: some thrusters require direct current, some require alternating current, some require RF or microwave power inputs, and matching the electricity produced to the thruster itself is a necessary step, which on occasion can make one thruster more attractive than another by reducing the overall mass of the system. Another key question is the total change in velocity needed for the mission, and the timeframe over which this delta-v can be applied; the longer the timeframe you have, the more efficient your thruster can be at lower thrust (trading thrust for specific impulse).
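To make the thrust-versus-specific-impulse tradeoff concrete, here’s a minimal Python sketch. The 100 kW power level and 70% efficiency are illustrative assumptions, not figures for any real thruster:

```python
# For a power-limited thruster, jet power P_jet = 0.5 * thrust * v_e,
# so at a fixed input power, doubling exhaust velocity halves thrust.
G0 = 9.80665  # standard gravity, m/s^2

def thrust_from_power(power_w, isp_s, efficiency):
    """Thrust (N) available from a given electrical power and specific impulse."""
    v_e = isp_s * G0                      # exhaust velocity, m/s
    return 2.0 * efficiency * power_w / v_e

# Same 100 kW bus, 70% efficiency (illustrative numbers):
for isp in (1000, 3000, 10000):
    t = thrust_from_power(100e3, isp, 0.7)
    print(f"Isp {isp:>5} s -> thrust {t:.2f} N")
```

Tripling the specific impulse cuts the available thrust by a factor of three for the same power supply, which is exactly the thrust-for-efficiency trade described above.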
However, looking past just the drive itself, there are quite a few things about the spacecraft, and the power supply, that also have to be considered. The first consideration is the power supply available to the drive system. If you’ve got an incredibly efficient drive system that requires a megawatt to run, then you’re going to be severely limited in your power supply options (there are very few, if any, flown drive systems that require this much power). For more realistic systems, the mass of the power supply, and therefore of the spacecraft, has a direct impact on the amount of delta-v that can be applied over a given time: if you want your spacecraft to be able to, say, maneuver out of the way of a piece of space debris, or a mission to another planet needs to arrive within a given timeframe, then the less mass for a given unit of power, the better. Power per unit mass is known in engineering as specific power, and this is an area where nuclear power can offer real benefits. It’s debatable whether solar or nuclear has the better specific power for low-powered applications; once higher power levels are needed, however, nuclear shines. It can be difficult (but is far from impossible) to scale nuclear down in size and power output, but it scales up very easily and efficiently, and this scaling is non-linear: a reactor and another with three times its output could be very similar in core size, and the power conversion systems used often have similar scaling advantages. There are additional advantages as well: radiators are generally smaller in sail area, and harder to damage, than photovoltaic cells, and can often be repaired more easily (once a PV cell gets hit with space debris it needs to be replaced, but a radiator tube designed to be repaired can in many cases just be patched or welded and continue functioning).
A related concept is power density, or power per unit volume, which also has a significant impact on the capabilities of many (especially large) spacecraft. The volume of the power supply is going to be a limiting factor when it comes to launching the vehicle itself, since it has to fit into the payload fairing of the launch vehicle (or the satellite bus of the satellite that will use it).
The specific power, on the other hand, has quite a few different implications, most importantly for the available payload mass fraction of the spacecraft itself. Without a payload of whatever type is needed, whether scientific instruments or crew life support and habitation modules, there’s no point to the mission, and the specific power of the entire power and propulsion unit has a large impact on the amount of mass that can be brought along.
Another factor to consider when designing an electrically propelled spacecraft is how the capabilities and limitations of the entire power and propulsion unit interact with the spacecraft itself. Just as in chemical and thermal rockets, the ratio of wet (or fueled) to dry (unfueled) mass has a direct impact on the vehicle’s capabilities: Tsiolkovsky’s rocket equation still applies, and in long missions there can be a significant mass of propellant on-board, despite the high isp of most of these thrusters. The specific mass of the power and propulsion system will have a huge impact on this, so the more power-dense, and more mass-efficient you are when converting your electricity into useful power for your thruster, the more capable the spacecraft will be.
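Tsiolkovsky’s rocket equation makes the wet-to-dry tradeoff easy to check. A quick Python sketch, using an illustrative 10-tonne dry mass and 5 km/s of mission delta-v (not figures from any particular mission), shows how much propellant a high-isp thruster saves:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(dry_mass_kg, delta_v_ms, isp_s):
    """Propellant needed for a given delta-v, from the Tsiolkovsky rocket equation:
    delta_v = v_e * ln(m_wet / m_dry), rearranged for propellant mass."""
    v_e = isp_s * G0
    return dry_mass_kg * (math.exp(delta_v_ms / v_e) - 1.0)

# 10-tonne dry spacecraft, 5 km/s of delta-v (illustrative numbers):
chem = propellant_mass(10_000, 5_000, 450)    # chemical-rocket-class isp
ion  = propellant_mass(10_000, 5_000, 3_000)  # gridded-ion-class isp
print(f"chemical-class: {chem:,.0f} kg, ion-class: {ion:,.0f} kg")
```

The high-isp thruster needs roughly a tenth of the propellant for the same job, which is the whole argument for accepting its tiny thrust.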
Finally, the overall energy budget for the mission needs to be accounted for: how much change in velocity, or delta-v, is needed for the mission, and over what time period this change in velocity can be applied, are perhaps the biggest factors in selecting one type of thruster over another. We’ve already discussed the relative advantages and disadvantages of many of the different types of thrusters earlier, so we won’t examine it in detail again, but this consideration needs to be taken into account for any designed spacecraft.
With each of these factors applied appropriately, it’s possible to create a mathematical description of the spacecraft’s capabilities and match it to a given mission profile, or (as is more common) to go the other way and derive a spacecraft’s basic design parameters from a specific mission. After all, a spacecraft designed to deliver 100 kg of science payload to Jupiter in two years is going to have a very different design than one designed to carry 100 kg to the Moon in two weeks, due to the huge differences in mission profile. The math itself isn’t that difficult, but for now we’ll stick with the general concepts rather than going into the numbers (there are a number of dimensionless variables in the equations, and for a lot of people that becomes confusing).
Let’s look instead at some of the more important parts of the power and propulsion unit that are tied more directly to the drives themselves.
Just as in any electrical system, you can’t just hook wires up to a battery, solar panel, or power conversion system and feed them into the thruster: the electricity needs to be conditioned first. This ensures the correct type of current (alternating or direct), the correct voltage, the correct amperage… all the things that are done on Earth multiple times in our power grid have to be done on board the spacecraft as well, and this is one of the biggest factors when it comes to which specific drive is placed on a particular satellite.
After the electricity is generated, it goes through a number of control systems to first ensure protection for the spacecraft from things like power surges and inappropriate routing, and then goes to a system to actually distribute the power, not just to the thruster, but to the rest of the on-board electrical systems. Each of these requires different levels of power, and as such there’s a complex series of systems to distribute and manage this power. If electric storage is used, for instance for a solar powered satellite, this is also where that energy is tapped off and used to charge the batteries (with the appropriate voltage and battery charge management capability).
After the electricity needed for other systems has been rerouted, the remainder is directed into a system that ensures the correct amount and type (AC, DC, necessary voltage, etc.) of electricity is delivered to the thruster. These power conditioning units, or PCUs, are some of the most complex systems in an electric propulsion system, and have to be highly reliable. Power fluctuations will affect the functioning of a thruster (possibly even forcing it to shut down if the current is too low), and in extreme cases can even damage it, so this is a key function that these systems must provide. Because of this, some thruster manufacturers don’t design the PCU in-house at all, instead selling the thruster alone; the customer must contract or design the PCU independently of the supplier (although obviously with the supplier’s support).
Finally, the thermal load on the thruster itself needs to be managed. In many cases, small enough thermal loads mean that radiation, or thermal convection through the propellant stream, is sufficient, but for high-powered systems an additional waste heat removal system may be necessary. If this is the case, then it’s one more system that needs to be designed and integrated, and the amount of heat generated will be a major factor in the type of heat rejection used.
There’s a lot more than just these factors to consider when integrating an electric propulsion system into a spacecraft, but it tends to get fairly esoteric fairly quickly, and the best way to understand it is to look at the relevant mathematical functions for a better understanding. Up until this point, I’ve managed to avoid using the equations behind these concepts, because for many people it’s easier to grasp the concepts without the numbers. This will change in the future (as part of the web pages associated with these blog posts), but for now I’m going to continue to try and leave the math out of the posts themselves.
Conclusions, and Upcoming Posts
As we’ve seen, electric propulsion is a huge area of research and design, and one that extends all the way back to the dawn of rocketry. Despite a slow start, research has continued more or less continuously across the world in a wide range of different types of electric propulsion.
We also saw that the term “electric propulsion” is very vague, with a huge range of capabilities and limitations for each system. I was hoping to do a brief look at each type of electric propulsion in this post (but longer than a paragraph or two each), but sadly I discovered that just covering the general concepts, history, and integration of electric propulsion was already a longer-than-average blog post. So, instead, we got a brief glimpse into the most general basics of electrothermal, electrostatic, magnetoplasmadynamic, and photonic thrusters, with a lot more to come in the coming posts.
Finally, we looked at the challenges of integrating an electric propulsion system into a spacecraft, and some of the implications for the very wide range of capabilities and limitations that this drive concept offers. This is an area that will be expanded a lot as well, since we barely scratched the surface. We also briefly looked at the other electrical systems that a spacecraft has in between the power conversion system and the thruster itself, and some of the challenges associated with using electricity as your main propulsion system.
Our next post will look at two designs for electric propulsion that are similar in concept but different in mechanics: electrothermal and magnetoplasmadynamic thrusters. I’ve already written most of the electrothermal side, and have a good friend who’s far better versed than I am in MPD, so hopefully that one will be coming soon.
The post after that will focus on electrostatic thrusters. Because these are some of the most widely used, and also some of the most diverse in the mechanisms employed, this may end up being its own post, but at this point I’m planning on also covering photon drive systems (mostly on-board, but also lightsail-based concepts) in that post to wrap up our discussion on the particulars of electric propulsion.
Once we’ve finished our look at the different drive systems, we’ll look at how these systems don’t have to be standalone concepts. Many designs for crewed spacecraft integrate both thermal and electric nuclear propulsion into a single propulsion stage, bimodal nuclear thermal rockets. We’ll examine two different design concepts, one American (the Copernicus-B), and one Russian (the TEM stage), in that post, and look at the relative advantages and disadvantages of each concept.
I would like to acknowledge the huge amount of help that Roland Antonius Gabrielli of the University of Stuttgart Institute for Space Studies has been in this post, and the ones to follow. His knowledge of these topics has made this a far better post than it would have been without his invaluable input.
As ever, I hope you’ve enjoyed the post. Feel free to leave a comment below, and join our Facebook group to join in the discussion!
Hello, and welcome to the Beyond NERVA blog! Today, we continue our in-depth look at NASA’s new nuclear thermal rocket. We briefly looked at the history of NTP (as NASA calls it, “nuclear thermal propulsion”) in part one, and in part two we took a deep dive into the materials that NASA is investigating for its new design, ceramic metal (CERMET) fuel elements. Today, we look at the stage and spacecraft itself, with a brief look at some information about the proposed engine design. The next post will focus on the testing and launch safety considerations for an NTP system (as well as some unique guidance, navigation, and control considerations), and we’ll close with a post about other options for using low enriched uranium to fuel a nuclear thermal rocket, this time using advanced carbide fuels.
As we saw in the first post, nuclear thermal rockets are nothing new. The US has built and tested them before, and even successfully tested one in flight configuration. In the second post, we looked more closely at the new materials technologies that are being used to make an even more capable NTR, CERMET fuels, but we also saw that there’s a problem: in order to use low enriched uranium (LEU), the fuel needs large amounts of isotopically separated tungsten, and this has been a major challenge for the supplier. To date, I have been able to find no information about deliveries of even 50% enriched 184W (the needed isotope), much less the more than 90% enriched tungsten needed for the fuel elements that NASA has designed.
So how does NASA plan to address this problem? Well, 184W would be useful for more than NTRs (tungsten is also used as a neutron reflector in the core of certain thermonuclear weapons designs, hence the lack of details on the development process and difficulties associated with it), so just because NASA has not been able to have the process developed doesn’t mean that it won’t be in the future (weapons programs have an easier time getting money than NASA’s nuclear program).
Advances in LEU Nuclear Propulsion
Even if this doesn’t pan out, there are still other options. The one that caught the public’s attention last year was the signing of a new contract with BWXT. While far from a household name, they are well-known within the DOE, the US Navy, and NASA. They helped build the USS Nautilus, and were an early (and currently are the major) supplier of fuel for the US Nuclear Navy. They offer commercial and research fuel resupply and disposal contracts on a number of reactor designs. Since the early 1980s they have fabricated all of the Department of Energy’s experimental fuel elements (with the exception of KRUSTY, which was fabricated by Y12). They are also a prime contractor for many of NASA’s nuclear-related activities, participating in environmental impact assessments, technical consultancy, and other areas.
They have proposed a new, and thus far poorly described, design for an NTR, using CERMET fuels of varying composition at different points in the core to better manage and moderate the neutrons produced during fission. Based on what little information I’ve been able to gather since NETS 2018, according to Michael Eades, using molybdenum/tungsten (MoW) as a matrix material for CERMET fuels is apparently as good as using tungsten-184 for both moderation and thermal limits, and this appears to be the path that BWXT will be using moving forward. However, I haven’t had the chance to go over the research yet, and apparently there are some significant changes, so today we’ll focus on the rest of the spacecraft.
It’s likely that the design will be similar to the one proposed by BWXT last year, however, which uses a technique known as “zoned moderation,” where different parts of the reactor are exposed to different neutron flux energies due to the distribution of moderator and reflectors throughout the core. There is no reason that this technique will not work using natural uranium as a fuel element matrix material rather than the beryllium and tungsten that was proposed for the earlier design.
Even this isn’t the end of the options, though… another fuel form, advanced tricarbides, also offers the potential for LEU use, in particular in the Superior Use of Low Enriched Uranium (SULEU) reactor design, which we’ll cover in a future blog post (not only is it a carbide-fueled reactor, but there are enough other nifty design features that this reactor definitely needs its own post).
The Beginnings of the Modern Astronuclear Thermal Era
Beginnings are important in nuclear engineering. A retired Lawrence Livermore engineer once told me “nuclear design is evolutionary,” and that’s especially the case with this system.
In many ways, the new dawn for nuclear propulsion was in 1990, at “Nuclear Thermal Propulsion: A Joint NASA/DOE/DOD Workshop,” held in Albuquerque, NM from June 10-12. This conference happened during the death throes of the Strategic Defense Initiative (SDI, Reagan’s Star Wars program), when funding was being cut for every single program associated with SDI. There was a nuclear thermal rocket design that was part of SDI, Project Timberwind, which used a pebble-bed reactor to increase fuel surface area, but this was a relatively early casualty of Congressional budget cuts. In addition, as noted by the DOD Office of the Inspector General, the program was not only over budget and consistently failing to meet benchmarks, but there were questions about the predicted performance of the engine as well.
In order to continue moving forward with nuclear thermal propulsion, the three main stakeholders in the US came together to present their concepts. The conference started by establishing what had come before, and a “baseline” was established to compare new ideas against the legacy NERVA designs available at the time. New subsystems, techniques for handling cryogenic hydrogen, and materials advances would all combine to make even an NTR using the same fuel elements and reactor geometry greatly improved over what was available in 1973. After this, presentations were made about many different aspects of nuclear thermal propulsion, from launch safety concerns to materials advances to advanced concepts for liquid, vapor, and plasma fueled reactor designs. The focus was on getting the most bang for the very few bucks that would be coming down the pipeline, and on the difficulties of testing any design under the regulatory regime that was in place at the time (which was not hugely different from what we face now).
The only way at the time to be able to test an engine was to capture ALL of the exhaust that passed through the reactor, which meant that you had to be able to store it somewhere – a very big somewhere. It also meant that more exhaust translated rather directly into greater expense for testing, so the thrust of the designed engine was specified to be in the 25,000 lbf (25 klbf) range, similar to what the Pewee engine provided during Project Rover. We aren’t going to be getting into testing options in this post (that’s the next one), but keep in mind that the ability to fully test an NTP system on Earth is going to be a critical requirement, and the size of the engine (and the amount of propellant that needs to be captured) directly affects how difficult (and expensive) it will be to test as a full system.
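To get a feel for why thrust drives testing cost, here’s a quick sketch of the exhaust mass flow such a test would have to capture. The ~900 s specific impulse is an assumed Pewee-class figure for illustration, not a quoted spec:

```python
G0 = 9.80665          # standard gravity, m/s^2
LBF_TO_N = 4.448222   # newtons per pound-force

def propellant_flow_kg_s(thrust_lbf, isp_s):
    """Propellant mass flow rate implied by a given thrust and specific impulse,
    from thrust = mdot * isp * g0."""
    return thrust_lbf * LBF_TO_N / (isp_s * G0)

# 25,000 lbf engine at an assumed ~900 s isp (illustrative):
mdot = propellant_flow_kg_s(25_000, 900)
print(f"{mdot:.1f} kg/s of hydrogen exhaust to capture")
print(f"~{mdot * 3600 / 1000:.0f} tonnes per hour of full-power run time")
```

Tens of tonnes of hydrogen per hour of full-power testing is why every extra pound of thrust makes the exhaust-capture facility bigger and more expensive.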
This conference can also be seen as the birthplace of the immediate predecessor of the LEU NTP system: Stan Borowski’s Small Nuclear Rocket Engine (an updated version of the design is available here). This engine is a Pewee-class, graphite composite modern design, and still remains an option for a small NTR, although one that would require HEU rather than the LEU that NASA is currently focusing on. This engine also allows for bimodal operation, where oxygen is injected into the hot hydrogen stream and then ignited, giving a big boost to the amount of thrust available (at the cost of specific impulse); this became the LANTR, or LOX-Augmented Nuclear Thermal Rocket, for faster trips to and from the Moon (which is so close that the increased thrust is a big boost to mission capabilities). This design was investigated for more than a decade as NASA’s primary NTP concept, and remains an area of active research at both NASA’s Glenn Research Center and Oak Ridge National Laboratory, where many of the fuel fabrication techniques are being investigated in depth.
Many parts of the SNRE stage remain in the LEU NTP stage, including non-nuclear components, the basic shape and volume of the stage, and nuclear and thermal shielding; while slightly changed, the mission requirements are also largely the same. Perhaps the only major-ish change is the difference in the proposed launch vehicle: in the early days of NASA’s Design Reference Mission for Mars 5.0, the Ares V rocket was still on the drawing boards – and in the mission plans. This rocket ended up being canceled with the end of the Constellation program, and a slightly smaller replacement, the Space Launch System, was proposed. The difference between the rockets necessitates a re-juggling of what is launched on each orbital launch, due to the decrease in payload capacity to low Earth orbit from 140 mt to 110 mt, but this is something that can be addressed relatively easily by using slightly smaller modules (although it often ends up requiring an extra launch). So, looking back over the design proposals leads to a lot of insights into NASA’s thinking and requirements for their new nuclear rocket design.
Since there’s still a lot of questions about the exact form that the engine itself will take, let’s go ahead and look at the rest of the NTP stage: shielding, non-nuclear components, propellant tankage, size, and mission requirements.
Radiation shielding is essential on any nuclear system, but nuclear propulsion presents a number of unique challenges. Of course, the biggest part of this effort is to reduce crew dose during operation, but the engine components that aren’t in the core of the reactor (such as turbopumps, actuators, etc.) will also be materially degraded by the radiation flux coming off the reactor, and the propellant itself can be heated as well (which causes local boiling and cavitation in the turbopumps – and both are bad news). For a deep dive into this subject, I cannot recommend Winchell Chung’s Atomic Rockets page on the subject highly enough.
Because space is pretty much the definition of the middle of nowhere, the only thing that really needs to be shielded is the spacecraft itself. To save mass, the easiest thing to do is to stick your nuclear reactor on one end of the ship, your crew quarters on the other, with the fuel tanks in the middle. Then, place a radiation shield between the nuclear reactor and the rest of the ship. This is called a shadow shield, because the ship stays in the shadow of this radiation shield.
What is Radiation, and How is it Shielded?
There are four main types of radiation that come off a nuclear reactor: alpha, beta, and neutron radiation form the group known as particle radiation, while high-energy photons like hard UV, x-rays, and gamma rays form the ray portion of the radiation flux. (These, obviously, are ionizing radiation types. Non-ionizing radiation, on the other hand, is not a danger to the crew, and is something to just be dealt with or exploited by the ship – infrared, for instance, also copiously comes off a nuclear reactor, but that heat energy is the entire point of running the thing!) The second type of radiation is made up entirely of photons, but of much higher frequency than visible light. The first type, however, is a salad of different particles: alpha particles are bare helium-4 nuclei (and as such have a charge of +2), beta radiation is a high-energy electron (charge -1), and neutron radiation is made up of – surprise! – neutrons (which have no charge).
The easiest way to consider shielding is to split the two types of radiation up and deal with them separately, since they have almost opposite requirements for stopping them. So, let’s look at particles first, and then rays.
Particle radiation is stopped through a process called “elastic scattering,” which is most easily pictured as a pair of balls, one moving and one stationary, hitting each other. Depending on the mass and velocity of each ball, they will reflect off each other, and momentum from the ball that WAS moving gets at least partially transferred into the ball that was stationary. How much is transferred depends on the relative masses of the balls: the closer the masses, the more energy can be transferred. So, to stop any of the particle radiation types, low-atomic-mass (low-Z) materials are ideal, usually something chock full of hydrogen. This lends itself to water, hydrates, and organic materials. However, the atoms in the material will obviously be bounced around, and over time the material will become degraded. As an additional challenge, ray-type radiation will break the hydrocarbon chains that make up organic shielding, degrading these materials even further (for more on these effects, check out the organically moderated reactor concept, only think of the challenges of slowing these particles to a stop). The other option doesn’t work for neutron radiation, but works on the other two: electromagnetic confinement. This is the approach used by the mini-magnetosphere concept for a ship, being explored by NASA and Rutherford-Appleton Laboratory, which diverts the particles before they come in contact with any materials using powerful electromagnets. This is a very advanced concept, though, and often far more massive than using a passive material. In addition, alpha and beta particles aren’t able to leave the reactor’s pressure vessel anyway, so they generally aren’t a concern.
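The ball analogy can be put into numbers: in a head-on elastic collision, the fraction of the incoming particle’s kinetic energy transferred to a stationary target is 4mM/(m+M)², which is exactly why hydrogen-rich materials stop neutrons so well. A quick sketch (using approximate mass numbers for the nuclei):

```python
def max_energy_transfer(m, M):
    """Maximum fraction of kinetic energy a particle of mass m can hand
    to a stationary nucleus of mass M in one head-on elastic collision."""
    return 4.0 * m * M / (m + M) ** 2

# A neutron (mass ~1) hitting various nuclei (approximate mass numbers):
for name, A in [("hydrogen-1", 1), ("carbon-12", 12), ("lead-208", 208)]:
    print(f"{name:>10}: up to {max_energy_transfer(1, A):.1%} per collision")
```

A neutron can lose all of its energy to a hydrogen nucleus in a single hit, but less than 2% to a lead nucleus, which is why particle shielding and ray shielding call for nearly opposite materials.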
There ARE particles that are a concern, though: neutrons, which are uncharged but slowed and stopped the same way, and galactic cosmic rays, or GCRs. These are higher-Z nuclei that have been ejected from some high energy event, like a supernova, and come tearing through space at a significant percentage of the speed of light. They cause a large amount of damage on the atomic level, and are a major source of the radiation flux that astronauts receive. Unfortunately, because they’re moving so fast they’re virtually impossible to stop or divert, unless you have a strong electromagnetic field blanketing whatever you’re protecting (and even then, because they have so much mass, it’s hard to get them completely diverted, just slowed a bit).
Rays, on the other hand, tend to be simpler to stop – assuming you can handle large amounts of mass! When lower-energy photons come into contact with an atom, they are absorbed by an electron in the electron cloud, which jumps to a higher energy state, then drops down, emitting a slightly lower energy photon in the process. This effect is how neon lights are produced, and how chemicals can be identified by spectral emission. This is where lead shielding comes in for terrestrial reactors (along with magnetite-heavy concrete and other design features), and lead is commonly used in shadow shield designs for the same reason. However, any high atomic mass (high-Z) element can make a reasonably effective shadow shield, and depleted uranium (238U) is sometimes used as a shield for compact reactors due to its greater density and atomic number. The downside to this method of shielding, however, is that it’s heavy, and heavy is the LAST thing that you want on your spaceship. Unfortunately, for complicated reasons I’m not going to get into here, there’s no way to effectively reflect these high-energy photons, so this really is the only way we are able to deal with them.
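Ray attenuation follows a simple exponential law, I = I₀e^(−μx), which makes the mass penalty easy to estimate. A short sketch – the attenuation coefficient below is a rough illustrative value for ~1 MeV gammas in lead, not a design number:

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Fraction of a gamma beam that gets through a shield: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

MU_LEAD = 0.8  # per cm; rough illustrative value for ~1 MeV gammas in lead
for x_cm in (1, 5, 10):
    frac = transmitted_fraction(MU_LEAD, x_cm)
    print(f"{x_cm:>2} cm of lead -> {frac:.4f} of the gammas transmitted")
```

Each added centimeter cuts the flux by the same factor, so attenuating gammas by another order of magnitude always costs the same (considerable) extra thickness of dense material.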
Keep in mind, for the main payload (in this case the crew quarters) there is plenty of other mass in the way. This includes the tanks of propellant, the material the tanks are made out of, structural components to transfer the thrust from the engine to the payload without destroying the ship, support equipment… all of these will absorb or reflect radiation to a greater or lesser extent. We’ll look at the propellant and tanks separately, but keep in mind that while the majority of the shielding is provided by the radiation shield, this doesn’t mean that this is the only shielding available.
The exact size and composition of the shield is going to change depending on the final design of the engines that will be used, but there shouldn’t be a huge variation in the type or quantity of radiation coming off one form of solid core NTR versus another, so only minor tweaks to the shield composition should be necessary. For a good example of the types of changes that may be necessary, an analysis of the Kilopower reactor by McClure and Poston shows how shielding requirements change as fuel type changes.
Starting in 2014, a team of researchers at Oregon State University and Marshall Spaceflight Center led by Jarvis Caffery has been examining NASA’s shielding requirements for NTP. In a nutshell, the goal is to REDUCE the overall radiation exposure to the crew by reducing the overall mission flight time, cutting the crew’s exposure to the much more damaging galactic cosmic rays and HZE particle radiation from events like supernovae and neutron star collisions. This doesn’t mean that there aren’t short-term radiation limits that NASA has to work within, or career dose limits that severely constrain current mission planners. Given current NASA radiation dose limits, it’s actually impossible to fly a crewed Mars mission on chemically propelled rockets, because the crew would reach their lifetime dose limit either on the surface of Mars or on the trip back. NASA is re-examining these limits, and recently proposed legislation to study low-dose radiation exposure may end up significantly changing these requirements in the future.
Caffery et al suggest that in order to maximize the benefit of radiation shielding for the available mass budget, it may be best to concentrate on combined shielding around the crew habitat, to deal with the radiation flux coming off both the reactor and the environment, rather than concentrating more mass in the shadow shield. However, they also note that using HZE shielding (like tungsten, lead, or uranium) near the crew habitat is something to be avoided, since this is how you get bremsstrahlung, which floods your crew cabin with gamma or X-rays.
The shielding between the reactor and the rest of the ship is almost certainly going to be more than one shield, and the main one is likely to be a composite shield, for a variety of reasons. Various parts of the rocket engine itself will need to be shielded to ensure the more sensitive components are exposed to as-low-as-practicable neutron fluxes; perhaps the two most important are the stepping motors that will likely be used for the control systems, and the turbopumps. Depending on the final design, the turbopumps may sit on the “hot” side of the main shadow shield; alternatively, if a significant part of the shielding is moved into the main structure of the ship and the shadow shield is reduced in mass, these pumps may see too high a neutron (or gamma) flux. They can be protected by secondary shields – in this case possibly B4C, which is not only a more effective shield per unit volume, but also moderates the neutrons that interact with it less than a hydrogen-rich material would, leading to fewer neutron absorptions into the mechanical assembly.
The main shield has many options, but there are definite limits to what can be done. No single concept is going to be good enough; the solution will need to address many tradeoffs and problems at once. This leads to the composite main shadow shield, a concept we’ve seen before in the Kilopower system.
Looking at the Kilopower shield, there are layers of an HZE material (in this case tungsten, but DU is another good option), with thin layers of LiH sandwiched between. This means that the neutron moderation benefits of the LiH – and therefore the likelihood that the neutron will be slowed enough to be absorbed – are spread through the bulk of the shield.
LiH is one of the best neutron shields out there (the best by mass), especially when enriched with Li6, but it has a number of problems, including chemical and thermal stability. Especially in the Li6-enriched variety, a lot of energy is deposited as the neutrons are first slowed and then absorbed, which means heating can be significant – and unfortunately LiH isn’t a good thermal conductor. If the thermal load gets too high, the LiH can dissociate into lithium metal and H2, which then either forms pockets of gas that weaken the surrounding material or is lost through outgassing.
In order to mitigate this, and also increase the chances of neutron capture and energy deposition in the more thermally conductive HZE shielding plates, the LiH is spread through the shield. Each LiH sheet is thin enough for its heat to disperse into the more thermally conductive (and less sensitive) metal components. It also means that the secondary gamma emissions from neutron moderation and capture have plenty of shielding to stop them before they reach the end of the shield.
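The layered approach can be sketched with a simple one-dimensional exponential-attenuation model. The removal cross-sections below are assumed round numbers chosen purely for illustration, not values from the Kilopower shield:

```python
import math

# Simple 1-D attenuation sketch: flux falls as exp(-sum of Sigma_i * t_i)
# across the layer stack. These macroscopic removal cross-sections are
# ASSUMED round numbers for fast neutrons, not real design values.
SIGMA = {"W": 0.30, "LiH": 0.45}  # 1/cm, illustrative only

def transmission(layers):
    """Fraction of incident flux surviving a list of (material, cm) layers."""
    return math.exp(-sum(SIGMA[mat] * t for mat, t in layers))

# Five tungsten plates (2 cm each) with thin LiH sheets (1.5 cm) between:
stack = [("W", 2.0), ("LiH", 1.5)] * 5
print(f"transmitted fraction: {transmission(stack):.4f}")
```

A 1-D model like this can’t capture the real reasons for the sandwich (heat removal and secondary gamma absorption), but it does show that total thickness of shielding material, not the ordering alone, sets the raw attenuation.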
A design using B4C would have less volume, but more mass. This material is already commonly used in machine tools all over the world, and even enriching the boron to increase its likelihood of absorbing neutrons won’t change those manufacturing techniques significantly. One option studied by Caffery et al was a pebble-bed design, where spheres of B4C would be packed into a casing made of some structural material. This maximizes B4C’s already better thermal properties, while maintaining the shielding properties of the material by minimizing available ray paths for radiation through it. Due to its higher mass, this material hasn’t been studied as extensively as LiH, but it offers some distinct advantages, and so it was explored more thoroughly in the 2015 paper I linked above. With its machinability (and long industrial application), thermal conductivity and resistance, and lower-volume shielding properties, this is a material that will likely show up in many designs – if not necessarily for a main shield, then definitely for secondary shields.
While the flux is going to be highest when the reactor is operating, this does not mean that the only radiation flux coming off the reactor is during operation. Once fission occurs in the reactor core, the entire reactor becomes irradiated – the longer it operates, and the higher power it operates at, the more radioactive the partially used fuel and exposed reactor components will become. This means that the highest radiation flux coming off the reactor will likely be in the final burn of the nuclear fuel’s life (and the reactor itself if it’s not designed for refueling), when it will also likely be pretty much exhausted of fuel. This is the worst case that must be designed for, and unfortunately the one most sensitive to decisions that haven’t been made yet.
Unfortunately, at the moment the design for the shield is up in the air. Until a number of decisions have been finalized in the engine design process – fuel type, enrichment, neutron spectrum, and others – only options and broad outlines can be proposed. Another challenge brought up by the authors is that the primary tool used to model time-dependent dosing calculations, the MCNP code released by Los Alamos National Laboratory, isn’t especially good at these sorts of calculations. Because of this, testing of any shielding system will be needed.
The Propellant Tanks
Any propulsion stage needs propellant to work, and in the case of NTRs the ideal propellant is one of the most difficult to work with: hydrogen. Hydrogen is the lightest of the elements, and as such has a bad habit of seeping through just about everything, weakening it in the process. Even as a cryogenic liquid it remains incredibly bulky, and its very low liquefaction point makes keeping it in cryogenic storage a major challenge.
This is the hydrogen boil-off problem, and it’s something that has vexed every rocket designer to use LH2 since the beginning of the space age (and many chemical engineers before them). In low Earth orbit (LEO), H2 boils off at rates that are predictable but vary with the quirks of each particular system. In addition, as the hydrogen seeps through the structures of the tank and spacecraft, it weakens and embrittles them – a process known as hydrogen embrittlement. Add in the large volume that H2 requires, and this can be one of the most challenging propellants to use in a rocket; many of the challenges dealt with in the Rover program were actually related to using H2 propellant, which hadn’t been done in the US before.
There are two ways to deal with this problem: the first is a purely passive system, as is done in launch vehicles, and the second is an actively cooled system that minimizes or eliminates hydrogen boiloff. Purely passive storage, unfortunately, isn’t sufficient for the NTP stage (due to very long mission times), but passive cooling technology is still used, based on the LH2 tank design for the Space Launch System. This tank is made of a special aluminum/copper/lithium alloy (Al 2195), which is high-strength and weldable. The new welding techniques being used with this alloy on the construction of the SLS main tank at NASA’s Michoud facility by Boeing will also improve the quality of an NTP propellant tank.
Surrounding the main tank is a thermal protection system (TPS): commonly a foam insulation (in the Apollo S-IVB third stage, this was polyethylene foam internal to the tank, which resulted in less than 10% boil-off during the LEO insertion and TMI burn phases of the mission), and, for a longer-term mission, likely a sun shield as well (seen as a gold foil coating on the stage around the propellant tanks). Additional TPS techniques continue to be investigated, including vapor cooling – carefully venting the overpressure of boiled-off H2 gas through the rest of the TPS to cool it – and the use of cryocoolers to further cool the thermal shields and mitigate heat transfer. However, it’s unclear exactly which materials would be used for a long-term cryogenic storage TPS for the NTP stage, and this could be a major problem for a mission as long as a crewed Mars mission, which would require the H2 to be maintained for over a year. An example of what has actually flown is the Power Reactant Storage hydrogen tank, which lost 2.03% of its reactant per day over the 21-day lifetime of the system. Losses like this lead to a huge increase in the needed H2 for an extended mission, and a corresponding loss of payload capacity. Even more modern systems have boil-off rates of about 6% per month, which is prohibitive for a mission as extended as a crewed Mars mission.
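To put those rates in perspective, here is the compounding arithmetic, treating each quoted rate as a constant fractional loss per period (the 14-month transit duration is an illustrative assumption on my part):

```python
# Compounding boil-off: each period a fixed fraction of the REMAINING
# LH2 is lost. A starting mass of 100 units makes the result a percentage.
def remaining(mass, rate_per_period, periods):
    """Mass left after losing `rate_per_period` (fraction) each period."""
    return mass * (1.0 - rate_per_period) ** periods

# Power Reactant Storage tank: 2.03%/day over its 21-day lifetime
shuttle_era = remaining(100.0, 0.0203, 21)
# Modern passive tank at ~6%/month, over an assumed 14-month Mars transit
modern = remaining(100.0, 0.06, 14)

print(f"after 21 days:   {shuttle_era:.1f}% of the LH2 remains")
print(f"after 14 months: {modern:.1f}% of the LH2 remains")
```

Losing a third of the load in three weeks, or more than half of it over an interplanetary transit, is exactly why passive-only storage is a non-starter for NTP.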
Nuclear + Rockets: Always Complicated
When I started this project, it seemed (relatively) easy: nuclear power, while complex, isn’t unknowable. Chemical rocket propulsion is in many ways far more complicated than a nuclear thermal rocket: an NTR only needs heat, not the finicky balance between fuel and oxidizer. Sure, there are a thousand details, but that’s what engineering is for.
There is truth to this, but one of the simplest systems is a wonderful example of why this subject is so difficult to address. Propellant tanks are, in theory, fairly simple: a tank is a fancy thermos, with given rates of boil-off that can be adjusted by improving the insulation of the system. Heat mitigation is primarily needed against the solar environment, a task that all spacecraft have to address.
With a nuclear reactor, there are two additional vectors for thermal heating, both from the reactor. First, there’s gamma ray heating, caused by whatever gamma radiation remains after the primary shadow shield and support equipment; a small amount of this comes from the fission reactions themselves, but the bulk of the shield will absorb most of those photons. The larger component comes from the neutron flux off the reactor, either through elastic collisions between the neutrons and the hydrogen in the tank – which slow the neutron and accelerate the hydrogen, heating it – or through the secondary gamma radiation those collisions cause. As each neutron is slowed (thermalized), it becomes more likely to interact with the next atomic nucleus, increasing the number of reactions while reducing the energy of each interaction, until the neutron is finally captured. The resulting gamma rays are far more easily absorbed by heavier (higher-Z) atoms than lighter ones, so while they are unlikely to be absorbed by the propellant itself, the support structures and the tank will be heated by these interactions, transferring the heat to the propellant through conduction.
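The thermalization chain described above is remarkably short in hydrogen. A quick check with the standard mean-logarithmic-energy-decrement argument (textbook values, nothing NTR-specific):

```python
import math

# For H-1 the average lethargy gain per elastic collision (xi) is 1.0,
# the best of any nuclide - which is why hydrogen moderates so strongly.
E_FISSION = 2.0e6   # eV, typical fission-spectrum neutron energy
E_THERMAL = 0.025   # eV, room-temperature thermal energy
XI_H = 1.0          # mean logarithmic energy decrement for hydrogen

n_collisions = math.log(E_FISSION / E_THERMAL) / XI_H
print(f"~{n_collisions:.0f} elastic collisions to thermalize in hydrogen")
```

Roughly eighteen collisions suffice, each dumping a share of the neutron’s energy into the propellant – which is why, as noted below, the neutron contribution dies out within tens of centimeters of LH2.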
According to a 2015 paper by B.D. Taylor et al of NASA’s Marshall Spaceflight Center, neutron interactions with liquid H2 drop to effectively zero after less than 50 cm of penetration into the tank itself (due to hydrogen’s excellent moderation properties), and gamma heating becomes the major source of nuclear-caused thermal heating after about 15 cm.
Due to the unique nature of the internal heating caused by the radiation flux (rather than the still-present external heating caused by background radiation that is a more well-understood problem), thermal stratification and complex convective cycles are far more likely to develop in an NTR’s propellant tanks. This can be mitigated by careful construction of baffling, and possibly with mixing equipment internal to the tank itself.
Enter Active Cooling: The Zero Boil-Off Tank
If LH2 boiloff were eliminated, not only would your propellant stop leaking away constantly, but hydrogen embrittlement would be reduced or eliminated as well. In addition, according to some studies the launch mass of an LH2 system for cislunar space missions could be reduced by 20% or more if boiloff were eliminated, between the mass of the saved H2 and the smaller tankage required. While thermal shielding is a huge help (such as the gold foil seen on many spacecraft), the ambient thermal environment in space is still warmer than LH2’s boiling point, so active cooling is needed. In addition, the propellant will be warmed by the gamma rays and neutrons that weren’t absorbed by the shadow shield, and so needs to be actively cooled to prevent even faster boiloff. This problem is so severe, in fact, that NASA no longer plans to rely on passive cooling techniques alone to get to Mars.
Enter the Zero-Boiloff Tank, a design that NASA began researching in 2006 with the Florida Solar Energy Center. This system uses a multi-stage cryocooler and hydrogen densification system to ensure continuous cooling of the cryo H2. This design started as a small (150 L, due to facility safety regulations) dewar, built at the FSEC, surrounded by a storage vessel. Later tests used a much larger tank, closer to what would be used for a rocket propellant tank, either for a stage or a propellant depot.
This is a system that we’ll go into more depth on in its own post, so in the interest of blog post length we’re going to look at it more briefly than we typically do here.
In short, a ZBO tank uses integral cryocoolers to maintain the propellant below the boiling temperature of the H2. This definitely adds dry mass and complexity to the system, but by significantly reducing or eliminating boil-off, the overall mass needed for the system to complete the mission requirements is reduced by a large amount. This can be paired with vapor-cooled shielding and passive TPS to optimize the mass of the system.
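The mass trade is easy to sketch. Assuming a hypothetical 500 kg cryocooler system protecting a CPS-sized LH2 load against the ~6%/month passive rate quoted earlier (both the cooler mass and the comparison are illustrative, not design numbers):

```python
# Break-even sketch: a ZBO cryocooler "pays for itself" once its dry
# mass is smaller than the cumulative LH2 a passive tank would have
# boiled off. The cooler mass here is a hypothetical round number.
def breakeven_months(cooler_mass_kg, lh2_mass_kg, boiloff_per_month):
    """Months until cumulative passive losses exceed the cooler's mass."""
    lost, months, remaining = 0.0, 0, lh2_mass_kg
    while lost < cooler_mass_kg:
        months += 1
        lost_this_month = remaining * boiloff_per_month
        lost += lost_this_month
        remaining -= lost_this_month
    return months

# 500 kg of cooler hardware vs. a 47,200 kg LH2 load at 6%/month:
print(breakeven_months(500.0, 47_200.0, 0.06))
```

At these rates the cooler earns back its dry mass within the first month or two; the real trade for a ZBO system is electrical power draw and reliability, not mass.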
There is one advantage that ZBO designs have over traditional tank designs for NTR use: the internal support structure will act as an additional shield down the center line of the spacecraft, protecting the payload beyond what the LH2 remaining in the tanks provides.
LH2 Shielding Goes Away Through the Mission
As propellant is expended during a burn, there will be less mass between the payload and the reactor, meaning that secondary radiation protection will decrease the longer the engines burn.
This is a problem for the payload, because the flux coming off the reactor increases the longer it’s burned (due to fission product decay, ambient delayed neutron flux, and increased reactivity requirements to overcome neutron poisoning in the fuel elements). As mentioned above, the internal structures of a zero boil-off tank mitigate this problem somewhat, but they aren’t so large that they would completely fill the center line of the spacecraft between the reactor and the payload. However, some designs retain a column of H2 in the tanks even when “empty,” which mitigates this. There is a mass penalty if this is done, but depending on the acceptable radiation dose to the payload, and the radiation flux coming off the reactor, it may be a good decision for some spacecraft designs – especially smaller ones, where the distance between the reactor and the payload bus is shorter than on larger spacecraft (a lunar shuttle versus a Mars spacecraft, for instance).
LEU NTP Mars Vehicle
The LEU NTP stage’s primary mission is going to be a crewed mission to Mars. This doesn’t mean that the stage can’t be used for other missions, but every project needs a mission, and in this case that mission is NASA’s Mars Design Reference Mission 5.0 with an expected mission date for the 2037 launch window to Mars.
In order to complete this mission, a number of components are going to need to be assembled on-orbit: the core propulsion stage (CPS, containing not only the main engines, but reaction control systems, avionics, a solar electrical power system – no bimodal plans for the basic design – and cryogenic fluid management hardware), an in-line propellant tank (essentially the same as the CPS, but with the engines and shielding replaced with more LH2 tank, and a smaller RCS), a saddle truss with up to 5 LH2 drop tanks (one in-line, the rest attached to the outside of the truss), a smaller saddle truss for payload, a deep space habitat (based on the TransHab design), and an on-orbit manned spacecraft (the Orion module).
The numbers used in this section are based on the HEU version of this stage, optimized for Mars DRM 5.0. By the time this system is taking its first crew to Mars they will likely be slightly different, not only due to the new engines but also due to advances in mission design and vehicle optimization, but they will be very close.
Core Propulsion Stage
As is typical for NASA, the NTP design is modular in nature, with the idea that the same propulsion and power module can be used for multiple mission types by adding propellant tankage and support equipment appropriate to the mission profile and destination. So a lunar shuttle may only need the core stage, while extensive additional tankage and payload would be necessary for a Mars mission.
The core propulsion stage for the NCPS (and likely the LEU NTP stage) is approximately 25 meters long and 8.4 meters wide, carries three CERMET-fueled 25 klbf NTP engines (Pewee class) for main ship propulsion, 47.2 metric tons of LH2 propellant, and 15.6 metric tons of reaction control system fuel and oxidizer (NTO/MMH). When launched, it will be fully fueled, with a wet mass of 109.5 mt (dry mass 46.2 mt). This pretty much maxes out the payload capability of the Space Launch System, which is the preferred method for lofting the stage into orbit. A composite truss structure provides the structural strength for the stage. The reaction control system is hypergolically fueled with nitrogen tetroxide/monomethylhydrazine, based on the Fregat RCS, with 328 s of specific impulse. The main engines are planned to be rated at 900 s isp, but as we’ve seen there are many questions remaining about the actual design of the engine that will be used.
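As a sanity check on those figures, the ideal rocket equation gives the delta-v of the core stage alone (this ignores RCS propellant, drop tanks, and the rest of the stack, so treat it only as a back-of-envelope number):

```python
import math

# Tsiolkovsky: dv = g0 * Isp * ln(m_wet / m_dry), using the CPS figures
# quoted above. Real mission delta-v differs - the stage pushes the whole
# stack, not just itself - so this is an upper-bound sketch.
G0 = 9.80665                 # m/s^2, standard gravity
ISP = 900.0                  # s, planned NTP specific impulse
M_WET, M_DRY = 109.5, 46.2   # metric tons

dv = G0 * ISP * math.log(M_WET / M_DRY)
print(f"ideal delta-v of the bare CPS: {dv:.0f} m/s")
```

Even as an upper bound, ~7.6 km/s from a single SLS-launched stage shows why a 900 s Isp is so attractive for interplanetary burns.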
In-Line Propellant Tank
Moving up the spacecraft structure, the next module to be launched will be an in-line LH2 tank (ILT), which at 25.7 m is just slightly longer than the CPS, but with the same diameter. This module is very similar to the CPS, but replacing the engines and shadow shield with additional tank volume. The RCS is also smaller on this stage, since it’s not on the end of the stack and therefore needs to apply less force than the rear-most section of the spacecraft. With a dry mass of 29.7 mt, 79.2 mt of usable LH2, and just over 2 mt of RCS fuel/oxidizer, the total wet mass of this module while on the ground is 108.2 mt – once again, constrained by the capabilities of the Space Launch System. A similar composite truss structure is used on this portion as the CPS, and docking adapters on each end are used to secure this module to the CPS aft, and the saddle truss forward.
Saddle Truss with Drop Tanks
The third portion of the spacecraft is a long (27.8 m) saddle truss, which means that the structural components form a cylinder around a central hollow. In this case, that hollow holds an additional in-line drop tank, another part of the RCS, and has the capability to mount additional external drop tanks (this part of the spacecraft is far enough forward that these will be shielded by the shadow shield). With a total dry mass of 29.75 mt, and a total wet mass of 118.4 mt, this portion of the stack carries a minimum of 84 mt of LH2 propellant. Since this is a drop tank, and will be used for trans-Mars injection burns, the ZBO tank will not be used here, leading to LH2 boiloff of approximately 1.54 mt. Once again, this will take up the full launch capabilities of the SLS, and will be the second-to-last module launched.
The final portion of the spacecraft is the mission payload. In this case, it consists of a smaller saddle truss (containing mission specific payload, an RCS, and a canister for holding cargo, approx 12.14 mt), a fully stocked deep space habitat (TransHab, 51.85 mt fully stocked), and the crewed spacecraft (in this case the Orion spacecraft, but the original design called for the MPCV, Orion’s predecessor, which massed 14.49 mt without fuel). This is the lightest weight of all the launched modules, with 78.8 mt of mass on the pad. It’s possible that this launch may carry additional fuel, but instead it may just take advantage of using a less capable (and therefore less costly) launch vehicle.
The Integrated Stack
While in low Earth orbit, and once fully assembled, the Mars crewed spacecraft will mass approximately 414.15 metric tons, delivered by four launches of the Space Launch System. Once assembled, the crew will be delivered to the spacecraft for the beginning of the trip to Mars. This will be the largest spacecraft ever meant to travel ANYWHERE beyond low Earth orbit, and will be smaller only than the International Space Station.
Another nice thing about this spacecraft is that, because it’s so long, and the mass is well-distributed, it will also be the first to use centrifugal artificial gravity. By rotating it end over end, it is possible to induce 1 gee of centrifugal acceleration after the trans-Mars injection (TMI) burns, and slow the rotation down to 0.38 gee by the time of Mars orbital insertion (MOI). Then, the rotation will be stopped, and the MOI burn will take place.
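The spin rates this implies are modest. A minimal sketch, assuming a rotation radius of roughly 50 m (about half the assembled stack length – an assumption on my part, since the habitat’s exact distance from the center of mass isn’t specified):

```python
import math

# Centripetal acceleration a = omega^2 * r, so the required spin rate
# is omega = sqrt(a / r). The ~50 m radius is an assumed value.
G0 = 9.80665  # m/s^2

def spin_rpm(accel_g, radius_m):
    """Revolutions per minute for accel_g of artificial gravity."""
    omega = math.sqrt(accel_g * G0 / radius_m)  # rad/s
    return omega * 60.0 / (2.0 * math.pi)

print(f"1.00 g after TMI:       {spin_rpm(1.00, 50.0):.1f} rpm")
print(f"0.38 g at Mars arrival: {spin_rpm(0.38, 50.0):.1f} rpm")
```

A few rpm at a ~50 m radius is comfortably below the spin rates usually cited as disorienting for crews, which is one of the benefits of such a long spacecraft.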
Variants of this design have been proposed since the mid-20th century, both for pure nuclear thermal and for bimodal thermal-and-electric propulsion. The bimodal variant, named Copernicus-B by its creator, Stan Borowski at NASA’s Glenn Research Center, has a single large Hall thruster mounted at the center of mass of the spacecraft. After TMI and spacecraft spinup, the electric thruster is activated for the Earth-Mars cruise period, turning around at the midway point (RCS and navigational correction on a spinning spacecraft has been demonstrated before; it’s more difficult, but completely doable). This reduces the travel time to Mars significantly over pure NTP, but at the cost of a much more complex reactor system (of a type that the US isn’t currently investigating strongly, although most astronuclear companies have considered the idea, including NASA’s prime contractor for NTP, BWXT), a power conversion system, added heat rejection equipment, and the electric thrusters themselves. Because of this added complexity, the current contracts for NTP focus heavily on the pure thermal core.
Most mission designs for crewed Mars missions assume more than one vehicle: at least one, often two, NTP-powered cargo ships are sent to Mars before the crewed vehicle described here. These cargo missions are usually planned to arrive at Mars, and have systems verification completed, before the crewed mission departs. This imposes an approximately 22-month delay on the crewed mission (the time it takes for another launch window to open from Earth to Mars), but on the other hand it ensures that the supplies and resources needed by the astronauts have been delivered safely. These designs use the CPS as described above, as well as the in-line fuel tank, but the additional saddle truss with drop tanks may or may not be necessary, depending on the mass requirements for delivery to Mars and the number of cargo missions. They follow a slower, minimum-energy (Hohmann transfer) TMI profile, whereas the crewed mission will follow a faster transit (both to reduce crew exposure to the interplanetary radiation environment and to maximize surface stay time).
An early (2009) construction plan for a two-cargo-ship mission (available here, based on the Ares V, the predecessor to the SLS) involved launching two core propulsion stages, to be mounted to two uncrewed cargo ships for a minimum-energy transfer to Mars. This involved a total of four launches for the two craft, each of which would have a mass in LEO of about 236 mt. However, comparing the launch requirements for that version of the crewed vehicle (three launches, as opposed to the more recent estimate of four) against the more recent launch estimates, it is likely that each of these cargo vehicles would now require three launches instead of two (the Ares V was designed for 140 mt to LEO, significantly more than the SLS). One other difference is that the overall mass of that crewed interplanetary transfer vehicle was only 326 mt, indicating that a significant amount of mass that current plans place on the crewed vehicle would instead have been carried by the cargo missions (my guess is that this is because they were planning on 140 mt to LEO for that design study, not 110 mt). These modules would be assembled in LEO before TMI, and the reactors would not achieve criticality until the first burn leaving Earth orbit. This makes each reactor effectively radiologically inert, and not a concern to operate around during launch and construction.
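The launch-count argument above is simple division against each booster’s payload to LEO (the ~110 mt SLS figure is the one assumed in the text):

```python
import math

# How many launches to loft one ~236 t cargo vehicle, given a booster's
# payload to low Earth orbit (140 t Ares V vs. ~110 t SLS, per the text).
def launches(vehicle_mass_t, payload_to_leo_t):
    return math.ceil(vehicle_mass_t / payload_to_leo_t)

print(launches(236, 140))  # Ares V class: 2 launches per cargo vehicle
print(launches(236, 110))  # SLS class:    3 launches per cargo vehicle
```

This is why swapping the Ares V for the SLS adds a launch per cargo vehicle even though the vehicles themselves don’t change.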
These modules could be assembled at the ISS (assuming it’s still around by the time crewed Mars missions are being launched), or independently in LEO. Details on specific construction methods are sketchy, however with extensive experience in multi-module construction on orbit by most international players involved in the ISS, it shouldn’t pose too great of a challenge – just one with many technical details to work out.
LEU NTP: The Latest Plan to Get to Mars
Nuclear thermal propulsion offers the chance to open to human exploration places far more distant than anywhere humanity has ever set foot. While it’s theoretically possible to use chemical or electric propulsion, nuclear thermal propulsion offers far higher efficiency than chemical engines, with high thrust making orbital and interplanetary maneuvering far more rapid than the slow but steady burn of electric thrusters.
Currently, NASA’s plans to go to Mars heavily rely on this promising technology, which was demonstrated over 50 years ago (as we saw in part 1). New requirements in the types of fuel that are able to be used have led to major advances in materials engineering, and open up the possibility of using low enriched uranium (as we saw in part 2). By this point, the basic design for the interplanetary spacecraft is (hopefully) clear.
There remain issues to be dealt with, though. First, the engines need to go through a testing regime that will minimize radiological release to the environment, and be demonstrated to launch safely and to survive a launch failure without causing an environmental disaster or accidental criticality event. Second, the core propulsion stages need not only to be launched, but also to be used to their maximum effectiveness to get us to Mars. These will comprise the next two blog posts, research for which is already well underway. After that, I hope to address a different popular fuel form, carbide fuels, which offer even higher operating temperatures, and also the Russian version of the NTR, the RD-0410 “twisted ribbon” architecture, which China has also been experimenting with in recent years.
I hope to have these blog posts released in a more timely manner. Unfortunately, these posts often have me searching for weeks for obscure information that is difficult to find even when paper titles and authors are known, and this last year has been more… fulsome with events in my personal life, let’s say. Hopefully, the greatest challenges are now behind me, and I hope to be able to post more frequently.
Unfortunately, with the difficulty in putting out just the blog (and associated pages), the YouTube channel is now on indefinite hold. There are draft scripts for many different videos, which will likely be edited into pages for the site in the coming weeks and months, but I can’t reasonably see myself being able to edit those scripts, record them, and do the video editing, much less the animations required for the scripts, at any point in the near future.
On the bright side, as some of you may have seen, the Facebook group has hit over 100 members! Feel free to come join the conversation if you’re on FB! (At some point I may branch out onto other platforms as well, but for now it’s difficult enough just keeping up with the blog and FB groups!)