Hello, and welcome back to Beyond NERVA! Today, we take a break from the Topaz International Program to cover a subject that we haven’t touched on at Beyond NERVA yet, and sadly the inspiration is the death of one of the true luminaries of astronuclear engineering, Dr. Emanuel “Emil” Skrabek, who passed away on March 14th from progressive supranuclear palsy and Parkinson’s disease.
His obituary can be found here: http://www.ruckfuneral.com/obituary/emanuel-andrew-skrabek-phd . May his family have comfort in his passing, and may his memory be eternal. His legacy within astronuclear engineering, and the discoveries that his inventions enabled within (and outside) our solar system, certainly make him one of the true unsung engineering heroes in our race’s ability to reach out beyond our planet and learn about our own solar neighborhood.
Today, we are going to talk about the bread and butter of astronuclear engineering: the radioisotope thermoelectric generator, or RTG. From the dawn of spaceflight, these systems have provided simple, solid state power for missions of all types, from orbiters to landers to rovers, and have enabled incredible science to be done in the far-flung reaches of our solar system – and just recently, beyond.
Simply put, RTGs use the natural radioactive decay of a radioisotope, or radioactive isotope of a material, to produce electricity through the use of a pair (or more, but mostly just two) of materials that, at the place that they meet, produce electricity – assuming that there’s a hot side (where you stick the radioisotope) and a cold side (a radiator). They’re insanely attractive to mission planners for quite a few reasons. First, they’re completely solid state, which means that there are no moving parts to break. Second, they’re well-characterized, meaning that the problems that they face, their effects on a spacecraft, their behavior during launch, and a slew of other factors are well-known. Third, they can provide a heat source for components in a spacecraft, meaning that the freezing cold of space isn’t going to cause mechanical seizing or electronics failure thanks to the waste heat from the generator. Finally, they’re a legacy design that remains incredibly relevant, meaning that we’ve been doing them for a long time but they’re still useful.
This is an incredibly well-documented and widely-used application for in-space nuclear power, so I’m working on a page with more details on this technology, but when it will be complete is still up in the air. Follow me either on FB or Twitter to get notified when it is available!
Radioisotope Thermoelectric Generators: The Fuel
Everything is radioactive, but some things are more radioactive than others, and all fall into a broad set of categories of “radioactivity.” For the purposes of a radioisotope thermoelectric generator, the best option for fuel is an isotope that undergoes only alpha decay (and preferably decays only once), so the fuel needs only minimal radiation shielding to protect the spacecraft’s on-board electronics and other payload. Because of this, and because deep space missions have very long timelines, RTG designers settled on an isotope of plutonium, 238Pu, for many deep space missions.
238Pu is rare in that it emits almost exclusively alpha radiation, meaning that the fuel pretty much shields itself. It also emits a useful amount of heat, for either producing electricity or warming components, over a useful timeframe. There are many other options for radioisotope fuel for RTGs, but of the flown missions the vast majority have used 238Pu. We won’t go into the other fuel types here, but I’m currently working on a page on RTGs which will go into the different options.
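That “useful timeframe” is set by simple physics: 238Pu has a half-life of 87.7 years, so the thermal power of a given fuel load decays exponentially and predictably. Here’s a minimal sketch of that calculation; the 2,000 W starting figure is an assumption on my part (roughly the beginning-of-life thermal output of an MMRTG-class generator), not a measured value for any specific mission.

```python
PU238_HALF_LIFE_YEARS = 87.7  # half-life of Pu-238

def thermal_power(initial_watts, years):
    """Remaining decay heat after `years` of radioactive decay.

    Ignores thermocouple degradation, which in practice reduces
    *electrical* output faster than the fuel itself decays.
    """
    return initial_watts * 0.5 ** (years / PU238_HALF_LIFE_YEARS)

# Assumed ~2000 W thermal at fueling for an MMRTG-class generator.
for t in (0.0, 14.0, 87.7):
    print(f"after {t:5.1f} years: {thermal_power(2000.0, t):7.1f} W thermal")
```

One full half-life in, half the heat is gone, which is why fuel scarcity and long mission timelines together put such pressure on conversion efficiency.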
Unfortunately, due to a number of organizational and planning difficulties within NASA and the DOE, which supplies the 238Pu oxide fuel, the radioisotope itself has become scarce. With less fuel available, more powerful sensors needing more powerful transmitters to get all their information back to Earth, the increased use of electric propulsion, and other growing power requirements, a transition to more efficient power conversion systems is almost being forced on NASA’s mission planners.
Radioisotope Thermoelectric Generators: The Heat Sink
As with pretty much all other known forms of thermal-to-electric conversion, there needs to be a hot side of the system and a cold side. In smaller power units, this heat rejection system is simple: a set of metal fins secured to the metal exterior of the power unit. This casing shields the very minor amounts of radiation coming from the fuel during the decay process (if you’re interested in the radiation environment for the payload of an MMRTG, it can be found here: https://trs.jpl.nasa.gov/handle/2014/45778 ), and also provides micrometeorite protection for the power supply. Due to the low power involved, these simple systems are more than sufficient to cool the converter.
Another use for the waste heat is to warm temperature-sensitive sensors and electronics. In fact, the RTG is only one application of a broader category of systems, “radioisotope heating units,” which can provide not only heat for components and electric power for on-board systems, but even propulsion, in the form of a radioisotope thermal rocket. This last option isn’t available in current systems, but New Horizons successfully used its RTG to power maneuvering thrusters in the outer solar system. If it can be put to use somehow, waste heat isn’t waste yet.
Radioisotope Thermoelectric Generators: The Thermocouple
RTGs use the thermoelectric effect to convert heat to electricity. The thermoelectric effect occurs when there’s a difference in temperature across the junction of two different conductors. It is more properly known as the “Seebeck effect,” after Thomas Johann Seebeck, who independently rediscovered it in 1821 (it was first described by Volta in 1794). The temperature range that the system will be exposed to determines which materials are best for a particular converter. The efficiency depends on a specialized material property known as the “Seebeck coefficient”: a higher Seebeck coefficient means a more efficient power conversion process, and more electricity generated for a given temperature gradient. This coefficient depends on a number of things, including the microstructure and crystalline structure of the materials used for each element of the converter, meaning that changing the alloy of the base materials can have a significant effect on the performance of the converter.
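To make the relationship concrete, here’s a hedged sketch of the two basic formulas: the open-circuit voltage of a single couple, V = S·ΔT, and the textbook upper bound on conversion efficiency for a given average figure of merit ZT. The numbers in the example are round illustrative figures I picked, not data for any flown RTG.

```python
import math

def seebeck_voltage(seebeck_v_per_k, t_hot, t_cold):
    """Open-circuit voltage of a single thermoelectric couple: V = S * dT."""
    return seebeck_v_per_k * (t_hot - t_cold)

def max_efficiency(zt, t_hot, t_cold):
    """Classic upper bound on thermoelectric conversion efficiency.

    eta = (dT / T_hot) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + T_cold / T_hot)
    """
    carnot = (t_hot - t_cold) / t_hot
    root = math.sqrt(1.0 + zt)
    return carnot * (root - 1.0) / (root + t_cold / t_hot)

# Illustrative numbers only: S ~ 200 uV/K, ZT ~ 1, 800 K hot side, 450 K cold side.
print(seebeck_voltage(200e-6, 800.0, 450.0))  # volts per couple
print(max_efficiency(1.0, 800.0, 450.0))      # fraction of heat converted
```

Each couple produces only a fraction of a volt, which is why real converters wire hundreds of couples in series, and why even a good thermoelectric material converts well under a fifth of the heat flowing through it.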
This effect was already widely in use as a type of sensor called a “thermocouple,” which outputs a voltage based on the temperature at its junction and is one of the most common forms of temperature monitoring in many fields. Designers of early astronuclear systems, such as the designers of the SNAP-3 RTG (which flew for the first time in 1961), scaled this concept up, turned it inside out, and placed it on a spacecraft. Many different material combinations have been experimented with over the years, mostly based around lead telluride, PbTe. Thin strips were alternated around the radius of the fuel canister, with heat delivered by conduction and radiation, and rejected through wide radiator fins. Another common choice is silicon-germanium (SiGe), which performs better at higher temperatures.
Today most designs pair lead telluride with a special material that Dr. Skrabek invented with Donald Trimmer while working for Teledyne Energy Systems: TAGS. In TAGS, germanium telluride is carefully alloyed with silver (Ag) and antimony (Sb) – tellurium, antimony, germanium, and silver, hence TAGS. The most commonly used version, TAGS-85, uses (GeTe)0.85(AgSbTe2)0.15. In this combination of metals, the crystalline microstructure of the tellurides is modified by adding materials that differ in one of two properties: an atomic size that is either significantly larger or smaller, or the presence or absence of a localized magnetic moment. The reasons this works are incredibly complex, and are still the focus of ongoing research in the field, with two big goals. The first is decreasing thermal conductivity, enabling better heat retention in the fuel until it can be converted into electricity – something that can be affected by imperfections in the crystalline structure of a material. The other is increasing the power factor, the relationship between the Seebeck coefficient and electrical resistivity, which is where the localized magnetic moments come into play. A paper from 2012 looks at more recent proposed changes to TAGS-85, involving the use of dysprosium as an additional doping agent: https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1333&context=usdoepub .
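Those two goals combine into the standard thermoelectric figure of merit: the power factor is S²/ρ, and the dimensionless figure of merit is ZT = S²T/(ρκ), so raising the Seebeck coefficient, or lowering either the electrical resistivity ρ or the thermal conductivity κ, improves the converter. A quick sketch, with numbers that are order-of-magnitude plausible for a good telluride but are my own assumptions, not measured TAGS-85 data:

```python
def power_factor(seebeck, resistivity):
    """Thermoelectric power factor, S^2 / rho, in W/(m K^2)."""
    return seebeck**2 / resistivity

def figure_of_merit(seebeck, resistivity, thermal_conductivity, temperature):
    """Dimensionless figure of merit ZT = S^2 * T / (rho * kappa)."""
    return power_factor(seebeck, resistivity) * temperature / thermal_conductivity

# Assumed illustrative values: S = 180 uV/K, rho = 1e-5 ohm*m,
# kappa = 1.5 W/(m K), evaluated at 700 K.
zt = figure_of_merit(180e-6, 1e-5, 1.5, 700.0)
print(round(zt, 3))
```

The formula makes the doping strategy legible: crystal imperfections scatter phonons and drive κ down, while the magnetic-moment effects act on S and ρ through the power factor, and both routes push ZT up.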
This type of material was first used in the SNAP-19B RTG, built for the Nimbus-B satellite, which failed to launch. The fuel was recovered and placed in the Nimbus-3 satellite, which launched on April 14, 1969.
This was the first in an impressive list of missions powered by these power supplies: Pioneer 10, the first flyby of Jupiter (launched March 2, 1972, perijove on December 4, 1973), and Pioneer 11, which launched on April 6, 1973, flew by Jupiter on December 2, 1974, and then went on to Saturn on September 1, 1979 (ahead of Voyager 1 and 2). The Martian landers Viking 1 and 2 each used a pair of modified SNAP-19s in their missions as well.
The next major use of TAGS-85 was in the Multi-Mission RTG (MMRTG), a true workhorse of American astronuclear engineering, currently powering the Curiosity rover. His legacy will fly again on the Mars 2020 rover, currently under construction at the Jet Propulsion Laboratory.
Current RTG design work is shifting away from the solid-state conversion systems so long favored by NASA, due to their low power conversion efficiency. This is nothing new, but the inherent simplicity of solid-state systems has so dominated the available supply of radioisotope power systems, and the mission needs of NASA, the USAF, and other major customers, that using a heat engine to produce electricity has so far only been explored, never flown. The upcoming power conversion series will deal with these options in detail.
Thank You, Dr. Skrabek
While RTGs may not be the big, exciting power supplies that we often discuss here on Beyond NERVA, they have literally opened the outer solar system to our understanding, powering the missions that have amazed us all, no matter our level of education, and the knowledge and beauty we’ve all gained is due in part to Dr. Skrabek’s discovery and subsequent work on these systems. The bread and butter power supply for the outer solar system, and one that is powering the most advanced rover ever built by humanity on Mars, is possible thanks to his ideas, and his hard work.
While there is a transition happening to heat engine based RPUs (radioisotope power units, the broader category that a Stirling or Rankine powered generator would fall under), this does not mean that the traditional RTG is going anywhere any time soon. Their inherent stability, durability, and ruggedness, combined with their ability to power a rover the size of a small truck around Mars while it vaporizes rock at a distance with a laser beam to analyze its composition, are not something to be cast aside lightly.
Even if TAGS-85 never flies after Mars 2020 (something I very much doubt), his work will continue to inform us every day about the environment, both past and present, of Mars for years to come. Five of his power supplies (two each on Viking 1 and 2, one on MSL) will, all things being equal, end their days on the Red Planet, with a sixth on its way in another year. His work let us open our eyes to the beauty of the outer solar system, showed us Pluto in fascinating detail for the first time, literally pioneered the path of the Voyager probes at Saturn (Pioneer 11 was targeted through the same region to verify that particle density was within safe limits), and is now flying out of the solar system in two different directions. One small, but crucial, piece of materials engineering allowed these spacecraft and rovers to do more than they could with other materials, and opened our eyes that much more.
Thank you, Dr. Skrabek, for your life’s work. Your memory will live on in all the missions you have, are, and will make possible, and the knowledge that you’ve helped bring humanity as a whole.
Here are some of the pictures that Dr Skrabek helped enable:
Hello, and welcome to a bizarre post for Beyond NERVA, which involves no nuclear reactions at all! I’m deep in the bowels of the next blog post on the Topaz International Program, which has split into two (for long-time fans of the blog, this isn’t weird), and which has taken me down many research paths I hadn’t expected. As such, I want to devote more time and care to those posts, but I also want to keep bringing you content, so I’m bringing up an older post (roughly nine months to a year old) that went through editing, but never seemed to fit anywhere until now.
This is a field that I hope to be expanding into over the coming year: the ecological possibilities offered by rotating habitats.
This is an extension of my interests, and something that I discuss quite frequently with my ornithologist wife (who works in multispecies occupancy modeling). It won’t be as pretty as usual, and nuclear reactions won’t be mentioned once, but I think that it’s an interesting topic that I haven’t seen covered in depth ANYWHERE. The sources are basically nonexistent, thanks to a failure of the system to save most of them and my personal arcane filing system, which has epically failed me, but if you’re interested in this topic, PLEASE comment below, and I’ll do my best to expand this line of research with better referencing in the future.
For those that aren’t familiar, I’m going to defer to my good friend and frequent collaborator Isaac Arthur (I contribute to his work on a regular basis as part of his Production Team, on the research and script-writing side, and we have a future project in the works together… but it’s a long-term one, and not for this page). His YouTube channel, Science and Futurism with Isaac Arthur, is an incredible introduction to the vast diversity of futurist possibilities, explorations of the minutiae of the Fermi Paradox, and a font of novel concepts in futurism, from the settlement and development of the Solar System to stellar engines to (in the near future) the relocation of galaxies. I can’t recommend the channel enough, and I can’t point to any one thing (other than Isaac’s brilliance) that is the strength of the channel: the writing, editing, and custom visuals are all literally world-class.
Here’s his video on O’Neill cylinders, the concept popularized by Dr. Gerard K. O’Neill in the 1970s. I strongly recommend reading Dr. O’Neill’s “The High Frontier,” currently in its third edition, but it’s not required.
Additionally, there’s a video on the environment of rotating habitats, which I consider relevant and educational, even if it’s far more futuristic than this post, which will survey roughly the last five years of work in the field.
An ecosystem is a system of interactions between biological organisms and their environment, and ecology is the study of these ecosystems. There are a number of different ways to study an ecosystem, but probably the most useful for us is to consider it as a structured set of systems that interact with each other. Each system in the ecology has its own subsystems, made up of different components, with some components playing different roles in several of these subsystems. The more subsystems that a species is important within, the more important that species is for the ecosystem as a whole. These interactions between the systems allow for things like nutrient processing and transfer, energy transfer, population limitation, and habitat management, and many of these interactions are far from well understood. On top of that, there are often different species that perform the same functions between systems, but in different ways. This makes the ecosystem more robust, because if something happens to one species, another can fulfill at least some of the roles that the species in trouble used to, keeping the whole system going.
A good example of this is birds that distribute seeds for plants by eating fruit or berries, and defecating elsewhere: there are many different birds that eat different fruits at different times of the year, but just because a bird will eat a particular fruit doesn’t mean that it will eat ALL fruit – there are even cases where a berry that’s toxic to one bird isn’t toxic to another, maybe because the digestive tract of the first bird would destroy the seed, or for some other reason. If you’ve got the bush, but put in birds that can’t or won’t eat those berries, then that bush won’t spread very far, if at all. If you were counting on this bush for something else in the system, say for its leaves to be food for another animal, then you’re going to have a problem.
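Of the interactions listed above, population limitation is one of the few we can write down simply. The classic (and heavily idealized) Lotka-Volterra predator-prey equations capture the basic feedback loop: prey grow, predators eat prey and multiply, predators starve when prey run low. The sketch below isn’t calibrated to any real species – the parameters are made up for illustration – but it shows the kind of coupled bookkeeping a habitat ecologist would be doing at vastly greater scale.

```python
def lotka_volterra(prey, pred, alpha, beta, delta, gamma, dt, steps):
    """Forward-Euler integration of the classic predator-prey equations.

    d(prey)/dt = alpha*prey - beta*prey*pred
    d(pred)/dt = delta*prey*pred - gamma*pred
    """
    history = []
    for _ in range(steps):
        d_prey = (alpha * prey - beta * prey * pred) * dt
        d_pred = (delta * prey * pred - gamma * pred) * dt
        prey, pred = prey + d_prey, pred + d_pred
        history.append((prey, pred))
    return history

# Toy parameters, not calibrated to any real species.
traj = lotka_volterra(prey=40.0, pred=9.0, alpha=0.1, beta=0.02,
                      delta=0.01, gamma=0.1, dt=0.1, steps=2000)
print(traj[-1])  # final (prey, predator) population estimate
```

Even this two-species toy oscillates rather than settling down, which hints at why a designed ecosystem with hundreds of interacting species needs redundancy rather than a single finely-tuned balance.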
If we want to build a rotating habitat with an ecosystem on board, we need to understand what we want out of that ecosystem: food, for instance, or air processing, or species protection and maintenance for endangered species. After we know that, we need to figure out what systems are needed to support the end result we want, the systems that support those, and the environment needed to support all of them. Unfortunately, many of these subsystems are far from obvious, and as we learn more about each of these systems and the species within them, we discover more and more interactions that we never even suspected were happening, much less how important they were.
This leads us to the conclusion that the more complex a system is, the more robust it is, and also that we can get more benefits out of it. On Earth, we see many benefits of natural processes, beyond just having oxygen to breathe and beautiful places to go hiking, boating, fishing, or camping: ocean fishing of wild populations provides the majority of the protein consumed by a large percentage of the world’s population; wild pollinators are a major part of agricultural production (http://science.sciencemag.org/content/339/6127/1608); having wilderness integrated into our agricultural lands has been shown to have significant economic benefits (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4614778/); ecosystems like mangrove forests or water lily fields have a huge impact on processing liquid wastes of many different types (http://myukk.org/SM2017/sm_pdf/SM633.pdf); and access to nature has been shown to have a huge impact on mental health (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5663018/). There are lots of other benefits we get from having a robust ecosystem surrounding our living areas, like recreation, timber for construction and artwork, and other more immediate benefits, but in a lot of cases the things we don’t normally think of first are the things that can have the bigger impact on our well-being as a population, and a people.
These are called “ecosystem services,” and they’re going to be perhaps the biggest driving force behind which ecosystems are selected for various habitats, after the general environment of a habitat is chosen. Water purification and conditioning is something that a desert is going to be pretty much useless for, but a mangrove forest or swamp will be very helpful, and would also provide the ability to have aquaculture, shrimp or crayfish farming, and other food resources in the same location. Recreational areas are another type of ecosystem service, as are the scientific advances that studying these artificial ecosystems will provide. These services, the practical considerations of having these services available, and the aesthetic desires of the occupants of the habitat will define which ecosystems, and in what proportions, will be on a particular habitat.
Another concept we’ve mentioned before, but have only really looked at in passing, is that rotating habitats can be used as ecological preserves for endangered species of plants and animals, and for some of the smaller species they offer the ability to expand their range hundreds of times over in very little space. This is especially true for many amphibians; many species of newt or salamander live along one particular creek and nowhere else, and even a fairly small rotating habitat can have lots of creeks on it. If you make it small enough, and plan some of the other complications well enough, you could even have a small rotating habitat with a salamander as the apex predator. Now, this means you’ve got to have some way of controlling the population of the salamanders, but that’s doable, and we’ll look at some of the ways to do that later in this post. It also means more territory without the threat of habitat loss and poaching for many other, more iconic endangered species, like giraffes, cheetahs, polar bears, and wolves. We looked a little bit at how to do this in the Exporting Earth episode, when we discussed the concept of a keystone species: a species that changes its environment so much, or is so important to so many other species, that it in many ways defines an ecosystem.
One nice thing about a rotating habitat, though, is that if we aren’t trying to preserve a certain type of wilderness, we can to a certain degree mix and match parts of different ecosystems, either because of particular requirements of our rotating habitat or for aesthetic reasons. This has happened on Earth throughout our history, for better or for worse. It is, after all, a variation of both the agricultural revolution and the domestication of various animals, which were then introduced to other environments for the benefit of humans. There was a fairly large movement in the 19th and 20th centuries, the “acclimatisation society” movement (https://en.wikipedia.org/wiki/Acclimatisation_society), that took this a step further, wanting to introduce species not only for the practical benefit of humanity but also for its enjoyment. This often backfired: many of these species went from introduced, having found a reasonably non-destructive niche in the new ecosystem, to invasive, meaning their impact on the ecosystem is destructive and destabilizing, often leading to major population declines in many species, if not outright extinctions. The European starling is a good example of this sort of introduction in the Western Hemisphere from that period; a more modern example of the same idea would be honeysuckle, an ornamental plant that has become incredibly invasive across much of the US. Islands are another great example of this sort of thing happening. Many Hawaiian bird species were already in dramatic decline by the time Western explorers and settlers came, but for many species that contact was the death knell. Many later settlers found the lack of birds depressing, so exotic species were brought in from South America, Southeast Asia, and other parts of the world, to the point that the vast majority of birds seen on the Hawaiian Archipelago today are either introduced or invasive.
A rotating habitat, on the other hand, IS a blank slate when you begin, so as long as the needs of the ecosystem are filled, HOW they’re filled is up to the ecologists designing the ecosystem. This means that species that fill different niches in an ecosystem from different places on Earth can exist side by side, and this could be perfectly OK for the overall ecosystem, like having berry-eating birds from South America, and insect eating birds of various sizes from SE Asia, Europe, and North America all side by side. Habitats may even decide to have this as a tourist draw, since bird watching is already one of the fastest growing hobbies and most active citizen science fields in the world. The same is true for mammals, reptiles, plants, and even microorganisms: as long as the interactions between the different species fill the needs of the overall system, and there aren’t any major conflicts between the different parts of the system, then the exact species of the individual parts of the system is less important. This is true of genetic engineering as well: if you want all of your butterflies to have metallic gold and silver wings, and the color doesn’t mean that they’re eaten more (or less), or that the nutrient or metamorphosis requirements are too high, then why not?
That being said, especially early on it’s probably best to try to transplant already-existing ecosystems, because these are systems that are known to work well. Just because a particular species fills a particular niche in one ecosystem doesn’t mean that it won’t shift to a new niche if given different opportunities; this is a common problem with introduced species, which are meant to fill a similar niche in a different ecosystem and end up causing major problems in other ways while ignoring their intended niche. Even doing things this way is no guarantee, though. It’s pretty much guaranteed that some species just won’t be able to adapt for one reason or another, but a substitute from a different ecosystem may be a better option than genetically modifying the original species to survive. Another thing to consider is that if a species decides to do something other than what was planned, but it still works well with the overall ecosystem, this isn’t necessarily a problem. Another species may need to be brought in to fill a particular role in nutrient recycling or elsewhere, but if the first species isn’t causing problems for the ecosystem, then why not leave it?
Once an ecosystem is established, invasive species DO become a concern, so import/export controls on things like animals, plants, insects, and soil would need to be put in place. This is no different than what happens between most countries here on Earth, and keeping the environment of a rotating habitat isolated is far easier when there are hundreds of kilometers of empty space between your ecosystems, instead of a shared atmosphere, shared oceans, and often shared animals. Even migratory birds can, in some instances, carry fungi, seeds, and even mollusks long distances, after all, and this can spread invasive species to new areas. With human interconnectedness growing every year, the artificial means of spreading invasive species have grown as well. Zebra mussels are a major aquatic pest that were probably transported in the ballast water of cargo ships, fruit fly infestations have been caused by air travel, and escaped pet snakes have become a major ecological problem in Florida, just to name a few examples. Because of this, it’s likely that biological containment is going to be a big priority for rotating habitats – one more reason to try to grow as much food domestically as possible. For more on that, check out our Space Farming episode.
That leads us to something that we haven’t even mentioned yet, but that is more important than most people realize: the plants in an ecosystem. Plants emerged far earlier than animals in our evolutionary history, and far from being dumb, static food for animals to eat, plants have a huge range of different needs, abilities, and even ways of communicating. Stands of trees release chemical signals, either through the air or the soil, to alert other plants that they’re under attack by an insect or fungus, for instance, so those plants can start producing more of a chemical that would normally be a waste of resources but is critical right now as an insecticide or fungicide. Plants are, in many ways, far more complex and difficult to study than animals, because of their incredible genetic and structural complexity. You can’t slice off someone’s nose, put it on an elephant’s trunk, apply a poultice, and expect a good result, but this process is done all the time with trees, and has been for thousands of years. This is called grafting, and it is particularly common with citrus trees; many commercial fruit trees are actually parts of different trees grafted together, like several citrus varieties growing on a single shared rootstock, which can change the character of the resulting fruit. Likewise, while the human body has a lot of different chemical immune responses, a plant has far more options. Plants are chemical synthesis laboratories and factories more flexible, in the range of compounds they can produce, than anything short of the chemistry lab of a major corporation or university.
Another thing we’ve only begun to realize in the last few years is how connected everything in the macrobiological world is to everything in the microbiological one. We used to see microorganisms as affecting the things we can see in only two ways: at some point they’re food for something, which is eaten by something bigger, and something bigger than that, and so on; and they’re ways that these bigger things get sick. Within the last couple of decades, though, we’ve discovered that their impact is far more prevalent, and far more complex, than even the most avid fan of the microscopic world would have guessed 30 years ago. Not a week goes by that we don’t hear about a new, usually overblown, discovery about the effects of our gut microbes on our brains, or a new treatment for mitochondrial diseases – and mitochondria, remember, are absolutely critical for most complex life, yet completely genetically and evolutionarily distinct from their host – or some other surprisingly beneficial impact from an unknown, or formerly feared, maybe even actively killed, micro-critter living on or in us. The tree’s rough equivalent of mitochondria is the symbiotic (mycorrhizal) fungi in its root system and the surrounding soil, which make the nutrients in the soil possible for the plant to absorb. Every species has similar microfloral ecosystems living within it, and often they’re completely different from the ones in humans – with important exceptions, like mitochondria. The symbiotic, beneficial microorganisms in one species can be a major cause of disease in other species in the same ecosystem, and even within a species, what’s symbiotic in one part of the body can be a life-threatening infection in another. This is why, no matter how hard we try, disease will always be a part of any ecosystem – and this isn’t necessarily a bad thing! More on that later, though.
These microorganisms play another role in inter-species interactions, one whose discovery has greatly changed how we view their place in an ecosystem. We’re all familiar with the classically understood role that bacteria and fungi play: they break down anything that’s dead into food for the living organisms in the ecosystem. This is an absolutely critical role, and one that’s not exclusive to microorganisms, but it’s also only a small part of what they do. In the last decade, a lot of research has shown that certain types of microbiota play a far larger role: that of executive assistants, or even administrators, of resources in an ecology. Just as we’ve discovered that bacteria in your gut can regulate your mood, your energy level, and other things that were traditionally thought of as functions of the brain and its neurochemistry, soil fungi (and, to a lesser extent, bacteria) also regulate how nutrients are distributed in an ecosystem. The existence of this effect is well-established at this point, but because these organisms generally don’t accept the confines of a petri dish, it’s still not well understood. Their importance, though, has long been suspected, even when it was pure speculation. Legumes, such as beans, have characteristic nodules on their roots which aren’t actually biologically part of the plant: they house symbiotic nitrogen-fixing bacteria (rhizobia), critical for the plant’s survival but entirely different species in their own right, from a different domain of life altogether. These nodules provide the plant with nitrogen pulled from the atmosphere, and in doing so they also enrich the soil chemistry, because fixed nitrogen compounds end up in the soil as well. Because of this, crop rotation since the early Bronze Age in certain cultures has included legumes as a necessary fertilizing mechanism.
It turns out that this is just the tip of the iceberg: within every forest on Earth, and presumably most other ecosystems, there are species of fungi that manage nutrient availability between different plant species, depending on their proximity to certain plants, the plants’ chemical waste products and mature heights, and the needs of the fungi themselves to survive in the soil. We genuinely don’t understand all of the mechanisms behind these very simple organisms, but just as mitochondria regulate any species that relies on aerobic respiration, these types of fungi (and there are quite a few of them) regulate botanical ecosystems. If we need a lever for customizing biomass within a rotating habitat, this may be one of the most powerful available to us, but we understand nothing beyond the absolute basics of what it is or how it functions.
A major part of designing an ecosystem is the interaction between the organisms in the habitat and their environment, something that we discussed in the Environment of Rotating Habitats video. This can be anything from “how much rain does an area receive, and when does it receive it” to “what kind of seasons are on the habitat” to “what temperature is the atmosphere or the soil at over the course of a year” to “what day length is being used.” Every place on Earth has seasons, from the winter, spring, summer, fall cycle familiar to most people in temperate zones, to the extremes in day length experienced at the poles, to the wet and dry seasons along the equator. In many cases, the species that live in these areas have evolved to need these variations, especially many of the plants that ended up being bred into our food crops. Freezing weather is important for insect and microbial population control: even beneficial pollinators and symbiotic micro-organisms can cause major ecological imbalances if they get out of hand, and some can turn predatory or pathogenic. Many plant species use day length, average temperature, or both as indicators of when to start flowering, and when to start producing or ripening their fruits. Animals use these same indicators, along with plant growth cycles and other environmental factors, for everything from breeding, to which types of food they eat, to how far the young disperse after reaching maturity.
A last variable that we should look at before we start designing an ecosystem for a rotating habitat is migration. Many species, from birds to fish to insects to mammals, migrate, and when they do they perform many different roles, not only in the ecosystems they winter and summer in, but also in the ecosystems in between. Some species that generally migrate have populations that decide not to, due to new resource availability thanks to human activity, like the Canada goose in North America. Other species have something called “migratory restlessness”: even if they don’t have to migrate, or can’t because their migratory route is blocked, or they’re in a zoo, or whatever, they will still do the species equivalent of pacing restlessly, drinking black coffee and puffing on a cigarette irritably until 4 in the morning. Some species even have a higher likelihood of strokes or heart attacks if they can’t migrate. The triggers and methods of migration vary wildly, too: some species follow food availability as it moves toward the equator for the winter, some use day length, some use temperature, and some just breed and leave once the breeding is successful. Male hummingbirds are notorious for this, migrating weeks or months before the females do, sometimes to completely different parts of the world, like Caribbean islands versus Argentina.
Even species that don’t migrate change their behavior and movement patterns a lot between breeding and non-breeding seasons. Species that are intensely territorial while breeding and raising offspring often gather into fairly sizable packs or flocks once the offspring are old enough to be at least self-sufficient, if not completely independent. Some species are nomadic, following learned routes passed down through family groups, waterways, available patches of different foods, or moving up and down mountains with altitudinal differences in plant availability and growth cycles. This movement is actually a major driver of plant propagation in a lot of cases, important for certain species and detrimental to others. All of these things could be very different on a rotating habitat than they are on Earth, and how they differ will have a big impact on what behaviors these species show. Whether a species remains nomadic, and whether we even want it to be, is going to be a question for the people actually designing these ecosystems.
We described an ecosystem earlier as a set of systems that interact with each other. This is a really useful framework, but we also need a way to define how that interaction happens. When we talk about interplanetary colonization, we talk about two things: the energy available and the constituent elements available. The elements usually come from mining, while the energy comes from sunlight or from fission and fusion. This is definitely NOT the way that modern ecologists define things in most cases, but there are a couple of fields in ecology that do. These are the fields of “ecological energetics” and “ecological stoichiometry,” with a healthy leavening of “island biogeography,” “patch dynamics,” and other disciplines.
Some of these concepts have been around for a very long time. Hunter-gatherer cultures worldwide used controlled burns of forests to create meadows and edge habitat, which encourages game animals and also makes them easier to hunt, for instance, but the scientific study of these conditions didn’t start until the 1960s, when Robert MacArthur and E.O. Wilson wrote “The Theory of Island Biogeography,” examining the biodiversity of islands in relation to their size and their separation from other bodies of land, either other islands or the mainland. On land, Jared Diamond posed the SLOSS question in the 1970s: is it better for species diversity and populations to have a Single Large Or Several Small patches of habitat? It turns out that the answer is: it depends on the species, the ecosystem, and the shape of the patches, just as it does on islands. These fields have continued to expand and be refined since, as the study of spatial ecology, which also incorporates nutrient transfer, energy exchange, biological population sensitivity, and other variables that change the number and types of species that can thrive in a particular area, even if the only thing that changes is the arrangement of the same types of species.
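The core quantitative result of island biogeography is usually summarized as the species–area relationship, S = cA^z: the number of species a patch supports grows with its area, but sub-linearly. A toy sketch of why the SLOSS answer is “it depends” follows; the constants c and z here are purely illustrative (real values are fitted per taxon and region, with z often somewhere around 0.2–0.35):

```python
# Toy illustration of the species-area relationship S = c * A**z from
# island biogeography. The constants c and z are illustrative only;
# real values are fitted empirically for each taxon and region.

def species_count(area_km2, c=15.0, z=0.25):
    """Expected species richness for a habitat patch of a given area."""
    return c * area_km2 ** z

# The SLOSS question in miniature: one large patch vs. one of several
# small patches of the same total area.
single_large = species_count(100.0)   # one 100 km^2 patch
one_small = species_count(25.0)       # one of four 25 km^2 patches

print(f"One 100 km^2 patch: ~{single_large:.0f} species")
print(f"One 25 km^2 patch:  ~{one_small:.0f} species")
# Each small patch holds fewer species, but if the four patches hold
# sufficiently different species sets, together they can rival or exceed
# the single large patch - which is why the answer is "it depends".
```

Note how slowly richness falls with area when z is small: quartering the area here only drops the per-patch species count by about 30%, which is exactly the kind of trade-off a habitat designer would be weighing.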
One of the biggest problems until recently was that there was no framework for analyzing an ecology in a complete enough way to be useful in actually designing an ecosystem. As ecology developed, food chains became food webs, and models of the flows of energy and nutrients through a system, and of the interactions between biological organisms and their geological environment, developed as well, but none of it was well integrated. Island biogeography and patch dynamics were possibly the first big step toward being able to do this, but at least initially, they weren’t enough to form the whole picture. Ecological energetics, biogeochemistry, and elemental cycles like the phosphorus, nitrogen, and carbon cycles have all been studied, and to a greater or lesser extent incorporated into various models, but this still wasn’t integrated enough to actually design a full ecosystem, just enough to better manage parts of an already-existing one.
With the rise of supercomputing, improvements in statistical and dynamic modeling of ecosystems, and the increase in available, detailed ecological data, new methods of ecological analysis have begun to reach maturity that incorporate not only local geography, populations of animals, plants, and microorganisms, and energy flows, but also nutrient availability and biogeochemical processes. One of these theories is “ecological stoichiometry,” which integrates most, if not all, of these different systems into an overarching framework. The development of this framework has already led to new insights in everything from species abundance, to the effects of fertilizer runoff on ecosystems, to the management of fishing stocks around the world. While much of this work has so far focused on aquatic ecosystems, the fundamental concepts should be adaptable to terrestrial ecosystems, and to the interactions between watersheds, lakes, and other aquatic ecosystems and their terrestrial neighbors as well. This could very well be the first time that tools are being built that make it possible to analyze, and eventually design, the ecosystems of a rotating habitat.
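At its simplest, the bookkeeping at the heart of ecological stoichiometry is comparing the elemental ratios of a resource against the ratios an organism needs, to find which element runs out first. A minimal sketch, using the classic marine Redfield ratio (C:N:P of 106:16:1) as the demand side; the supply numbers are invented for illustration:

```python
# Minimal sketch of stoichiometric bookkeeping: find the limiting
# element by comparing resource supply against organismal demand.
# The demand ratio is the classic Redfield ratio for marine plankton
# (C:N:P = 106:16:1); the supply figures are made up for illustration.

def limiting_element(supply, demand):
    """Return the element whose supply-to-demand ratio is lowest
    (the Liebig-style limiting nutrient)."""
    return min(demand, key=lambda el: supply[el] / demand[el])

redfield_demand = {"C": 106.0, "N": 16.0, "P": 1.0}  # moles per mole P
water_supply = {"C": 5000.0, "N": 400.0, "P": 8.0}   # illustrative only

# C supports 5000/106 ~ 47 "units" of growth, N supports 25, P only 8,
# so phosphorus is the limiting element here.
print(limiting_element(water_supply, redfield_demand))
```

Real stoichiometric models layer recycling, multiple trophic levels, and spatial transport on top of this, but the comparison of element ratios between organisms and their resources is the common thread.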
The relationships between different species, their distribution, and the shape and size of the necessary components of an ecosystem provide a good framework for the biological, geological, and environmental considerations that define what ecosystems require, but they also provide a set of engineering limitations for the design of the rotating habitat itself. From literally the ground up, each type of ecosystem has different requirements for soil composition and depth, water content, and the mass of its various plants, animals, and micro-organisms. These factors will significantly change the size of the habitat, how much mass is required in each area, hydrologic considerations for weather patterns, and many other things. An early successional habitat, or scrubland, masses far less than a rain forest over the same area, for instance, and a tree with a deep tap root requires a greater soil depth than one with a shallow, wider-spreading root system. These systems will also change over time, and the extent to which this change is managed, and how, will change the requirements from a management point of view. Staying with the tree example, soil depth will affect the mass distribution of the habitat, the configuration of any lower decks (which are likely), and other factors, so deeper-rooting trees may require more maintenance to ensure that they don’t spread beyond their suitable areas. Roots growing through the roof of a lower deck, or into water pipes, air ducts, or other engineering structures, are best avoided whenever possible, after all.
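To get a feel for how strongly soil depth drives habitat mass, here is a back-of-envelope comparison for two ecosystem choices on the same floor area. The bulk density and the depths are rough assumptions for illustration, not design figures:

```python
# Back-of-envelope soil mass for two ecosystem choices on the same
# habitat floor area. Bulk density and depths are rough assumptions.

SOIL_BULK_DENSITY = 1300.0  # kg/m^3, typical for a loam (assumed)

def soil_mass_tonnes(area_m2, depth_m):
    """Mass of a uniform soil layer, in metric tonnes."""
    return area_m2 * depth_m * SOIL_BULK_DENSITY / 1000.0

AREA = 1_000_000.0  # 1 km^2 of habitat floor

scrub = soil_mass_tonnes(AREA, 0.3)   # shallow-rooted scrub/grassland
forest = soil_mass_tonnes(AREA, 2.0)  # deep-rooted trees with tap roots

print(f"Scrub (0.3 m soil):  {scrub:,.0f} t")
print(f"Forest (2.0 m soil): {forest:,.0f} t")
print(f"Forest needs roughly {forest / scrub:.1f}x the soil mass")
```

Even before counting the standing biomass and water, the difference is hundreds of thousands of tonnes per square kilometer, all of which the habitat structure has to carry under rotation.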
Another mass consideration for more wilderness preserve type habitats is large animal migrations: a wildebeest, bison, or elephant herd undergoing annual migration will significantly change the mass distribution within a habitat, so some sort of ballast system would be needed to make sure that rotational balance is maintained in the cylinder.
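A rough sense of the scale of the problem: treating the herd as a point mass on the rim, the center-of-mass shift is just the herd’s mass times the distance it moved, divided by the habitat’s total rotating mass. All of the figures below (habitat mass, radius, herd size) are assumptions for illustration; a real design would model the full mass distribution rather than a single point mass:

```python
# Rough estimate of the center-of-mass shift a migrating herd causes in
# a rotating habitat, and hence the ballast correction needed. All
# figures here are illustrative assumptions, not design numbers.

import math

HABITAT_MASS = 5.0e9  # kg, total rotating mass (assumed)
RADIUS = 1000.0       # m, habitat rotation radius (assumed)

def com_offset(moving_mass, angle_moved_rad):
    """Center-of-mass displacement (m) when a point mass on the rim
    moves through a given angle around the circumference."""
    # Straight-line (chord) distance between start and end positions.
    chord = 2.0 * RADIUS * math.sin(angle_moved_rad / 2.0)
    return moving_mass * chord / HABITAT_MASS

herd = 1000 * 500.0   # 1000 bison at ~500 kg each
offset = com_offset(herd, math.pi / 2)  # herd migrates a quarter-turn

print(f"CoM offset: {offset * 1000:.0f} mm")
# To rebalance, shift an equal mass of (say) water ballast through the
# same chord distance in the opposite direction, or any mass-times-
# distance product that cancels the offset.
```

Even this modest herd shifts the center of mass by over a tenth of a meter in this toy case, which is why the ballast system (water is the obvious candidate) needs to track large animal movements continuously rather than correcting once a season.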
Another consideration when it comes to changes in mass distribution over longer timescales is that ecosystems are not static things. In nature, the distribution of plants and animals across even marginally suitable habitat changes over time, and events like wildfires and droughts are necessary to revitalize soil, create new habitat for edge-dwelling creatures, and drive other biogeochemical effects. Changes in ecosystem distribution are driven by other factors as well, including various micro-organisms, animals, plant seed distribution methods, and environmental conditions. One of the best-known examples is beavers altering the local hydrology of an area. Beavers create habitat for other species of both terrestrial animals and fish, and provide flood and erosion control, plant population and distribution management, nutrient recycling, and several other ecological functions within their ecosystems. Because of this, managing the beaver population, through either population control or by providing suitable habitat in multiple locations to account for migration from one location to another, could be a good example of leveraging a single species to make desired changes to an ecosystem over time, without resorting to brute-force mass landscaping of large tracts of land by mechanical means.
While it may be possible to prevent this ecosystem evolution through brute-force methods like forest thinning, fertilization, planting, and other techniques, whether that’s desirable is going to be very design-specific, both for the habitat and for the ecosystems in it. It may be that it’s simply not worth maintaining the same geographic distribution, and that instead a certain balance of proportions, and certain shapes and sizes of the constituent parts of interrelated biomes, are maintained while the exact locations of these patches change over time. This is actually already studied to a certain extent, as patch dynamics theory. Certain things are probably going to be less dynamic on a rotating habitat than on Earth, like the locations of rivers, ponds, and lakes, but other, less mass-intensive systems could easily be left to change over time as they do on Earth, constrained only by practical considerations like root depth or water requirements. Which systems are geographically limited, and by how much, is a question best addressed when looking at particular designs for rotating habitat ecosystems.
A final thing to consider is the impact that the biological components of an ecosystem have on the environment as a whole. Forests, wetlands, and other ecosystems increase the amount of local rain through evaporation and transpiration; the extreme version of this is the rain forest, which often generates its own rainfall because of the large volume of water vapor rising above the trees. Temperature is another example of an environmental effect of the local plant life: less ground cover means less evaporative cooling and more solar energy absorbed and re-radiated into the air, which, combined with the low albedo of paved and built-up ground, is what causes the urban heat island effect that makes cities hotter than the immediately surrounding countryside. Wind patterns, local humidity, and many other environmental conditions are also modified by the plants, and to a certain extent the animals, in an area. This could be used as a way to modify the environment of the habitat, but it’s also something that has to be considered during the design of the initial ecosystems.
The ecosystems of rotating habitats are a unique area of ecological design and management. Not only do the wide variety of sizes and internal structures provide unique geographic limitations on the structure of an ecosystem, but unlike on Earth, these structures truly are islands: their energy supply, elemental and geological composition, nutrient recycling systems, and geographic location are entirely isolated from each other, and inputs into the system have to be intentional and planned in most cases. They are also truly a blank slate from an ecological point of view: with the exception of accidentally introduced species (which is likely to happen even with strong controls in place to prevent it), everything in these ecosystems has to be intentionally placed, in relation to the other components of the ecosystem and the other structures of the habitat, in order to create a stable system. The challenges of constructing such an ecosystem are huge, and these systems aren’t yet understood well enough to be successful today, but ecology is developing to the point that these concepts can be considered from a scientific point of view. The time scales needed to develop this understanding, both for data collection and for model development and testing, are going to be long, but so is the time frame until these structures are built. The ecological challenges, though, are likely far greater than the engineering challenges, so beginning to consider and research these ecological systems now is not premature. Now is the time to begin to lay the groundwork.
Hello, and welcome back to Beyond NERVA! I wanted to give my readers an update on what’s going to be happening here this year, since there’s major changes coming soon to the blog and website!
So why are things changing, and how? Well, the last few blog posts have been an immense amount of work, and ended up turning out… not as well as I would have liked, to say the least. They were great learning experiences, and I gained a much deeper appreciation for both the designs themselves and the context in which they were designed. However, this doesn’t mean that I want to be reading hundreds to thousands of pages of technical information every week to write a post a couple of times a month, and the depth of the posts was also getting a bit out of hand.
This has led me to move to a shorter post format. Rather than the 10-20 pages of single-spaced plain text that my rough drafts tend to work out to, I’m going to be going for 2-5 pages (depending on the subject matter), and try to cover at least the majority of a system in that overview. The in-depth information will continue to fill out over time on the web pages for each reactor concept that I cover in the blog, including the more technical details that I know a lot of my readers like. Sometimes I’ll have that information ready to go at the time that I publish the post, but often it’ll continue to come in over time, as I find sources and time to write.
The other big change is the focus: up until now, the blog has been following a fairly well-set path, and one that kinda-sorta is inherent to astronuclear reactor designs. However, at the rate I’m covering the subjects over the last few months, it’ll be at least five years before I’ve gotten the basics written up. While I plan on doing this for far longer than just 5 years, it’s still longer than I’d like to take getting the in-depth basics of astronuclear engineering off the ground.
Because of this, I’m splitting the blog into several columns. One of them will be the “primer” series on nuclear electric propulsion that we’ve been covering recently, but in a much shorter format (there are a couple of deep dives still to come in the series, though). This will allow us to progress through the huge array of reactors, power conversion systems, thermal management systems, and the rest of the components that go into a nuclear electric power plant much more quickly, and hopefully in a more digestible length as well. I’ll also start going back to nuclear thermal rocket designs, and possibly even start rotating in some columns on the basics of nuclear pulse propulsion.
The next will be either Forgotten Reactors or Reactor Snapshots, looking at historical designs that either aren’t as well known, have a novel design concept, or led to a later design that we either have or will cover.
We’ll also have a column on the spacecraft, stations, or surface bases that these nuclear reactors were meant to power, as well as any interesting or novel designs, architectures, systems, or pitfalls that those designs have.
Finally, we’ll have some interspersed blog posts on… basically whatever interesting random astronuclear tidbit caught my attention that week and I felt like writing about. This post falls into that category, I suppose. This category will also include “Reader Questions” or “Q&A” posts, so if you have a question, either ask in the Facebook group (the best thing to do, TBH; a lot of people in there are far smarter than I am on various subjects), e-mail me (firstname.lastname@example.org), or ask in the comments section. I’ll write up a blog post about some of the questions, but I try to answer all of them, at least for the person asking. Sometimes people’s comments get shunted into the “Spam” folder by the spam-blocking software I use (blocking your IP address or using a VPN seems to be the likely culprit), but I review that folder at least once a week or so.
Hopefully, continuing to cover the information that simply can’t be found outside technical papers and textbooks, while spreading out into more topical categories, will allow me to cover more of the history of astronuclear engineering, the wide array of possible designs, and the future it offers us in exploring and settling the solar system and beyond.
The New Website
In order to get the blog to a more reasonable length, I’m going to be moving a lot of the details from the blog to the website. Up until now, most of the things that I’ve covered on the website have also been covered in depth on the blog; the big difference was that the website was more topical, with some things that just didn’t fit into the blog.
That’s about to change, in a number of ways. First, in-depth information is going to mostly be the preserve of the website, especially the more technical information. For the nuclear reactor designers, grad students, and some other technical readers, the website will be the better place to check for in-depth information on a particular reactor architecture, coolant, etc. The information will be in a far less narrative style, and for those not familiar with the technical details of nuclear reactor design, it may appear daunting on occasion.
The other part of the website that will be expanding greatly is the tutorial section, which currently doesn’t really exist. One of the consistent challenges that many people run into with nuclear power in general, and astronuclear power in particular, is that it’s very difficult to get from “I know this isn’t black magic, but it sure looks like it from where I’m standing” to “OK, now I understand how a nuclear rocket works, and what the advantages and limitations of the various concepts are.” As someone who’s self-taught in this field, I’m all too familiar with both the immense number of technical resources available and the difficulty of getting to the point where reading a technical paper yields useful insight beyond the introduction and the conclusions.
So, over time I’m going to be adding an “Introduction to Astronuclear” section on the website. I’m not entirely sure exactly how this is going to be done, but it’ll be a combination of a “course” (for lack of a better term) and a glossary that links to useful, astronuclear-specific (when available and appropriate) resources to learn more about unfamiliar concepts or provide an easy, quick reminder on the definition of a term to those more experienced. The course part of this is going to take a fair bit of time to completely flesh out, but it’s a desperately needed resource which is largely unavailable at the moment. The fact that there isn’t a simple “glossary of nuclear engineering” that I’ve been able to find is a major irritation, and leads me to the conclusion that I’m going to have to build one. Again, this is likely to be a long process, but hopefully the end result will be accessible and useful to both newcomers and professionals alike.
Another change that has already been happening (although not as regularly as will happen in the future) is the addition of reactor-specific pages to the website. This is going to be an ongoing project, but is beginning now. These more topical pages will be more in depth than the blog posts, including more engineering details, development history, and component information.
The Reactors Left in the Cold
Up until the 1970s, every single national laboratory, a half dozen companies (Atomics International, Rocketdyne, General Atomic, Pratt and Whitney, General Electric, Westinghouse, and others), and several NASA centers (including Lewis, now Glenn) all had their own designs for in-space nuclear power and propulsion. Many astronuclear reactors are not well known, or were never well developed. We’ve looked at many of the high-profile programs from the 1950s-1970s, but few of the less politically favored designs. Several were built around a particular concept, with less and less care taken in the design as it moved away from the “interesting bits.” Several others were developed only to the point that the company involved could take out a patent. Others still were design projects done mostly by graduate students, a la the Stanford study that led to the original design of the Stanford Torus space habitat.
Then the cuts to funding for space research and development, the cancellation of the post-lunar Apollo missions, cuts to the AEC (and its later reorganization into the DOE), and a host of other problems chipped away at the funding, until fewer and fewer organizations conducted experiments on space nuclear reactors, and only a handful continued to design reactors that they knew would never be tested, much less flown. To make matters worse, very little of the raw data, or of the notes on construction, behavior, and so on, was kept, so it’s difficult to reconstruct the knowledge that was gained while these experiments were going on.
This isn’t to say that astronuclear engineering is dead, by any stretch of the imagination. There’s a new breath of enthusiasm and innovation in the nuclear field in general, and the variety of Generation IV reactors lends itself to a wide variety of concepts. The increased interest in space, likewise, lends itself to a resurgence in astronuclear design and innovation. The first novel nuclear reactor design to undergo criticality testing since the founding of the US Department of Energy was a reactor for space (the KRUSTY experiment, part of the Kilopower program), and already the implications of that test for small modular reactor design have spilled across a wide swath of the industry.
There was a fairly quiet time from the mid-to-late 1970s to the early 1980s (depending on the particular program, what reporting was required after concluding experimentation, and so on), with only a few major programs being pursued to a small degree, and another lull after the end of Reagan’s and Bush’s Strategic Defense Initiative and Constellation programs. Again, there were a few programs during that lull, but only a few, and none with enough funding to complete flight qualification.
Even in the golden days of ROVER and SNAP, when critical experiments, test reactors, and the like were reasonably easy to get approved and funding was plentiful, few of the proposed designs ever moved off the drawing board. Despite minimal regulatory barriers, the technical barriers were high enough that every test was still a major expense. Additionally, especially early in the programs, getting enough fissile material together for a test, on top of the novel manufacturing, instrumentation, and test stand requirements, formed a substantial set of barriers on its own.
The list of astronuclear reactors that underwent minimal to no testing is long, depressing, and not something that I feel like trying to type out right now, while the list of tested reactors is contrastingly short, and the list of flight-ready reactors shorter still; in fact, we’ve already covered all but one of them. This doesn’t mean that those untested reactors are without worth; in fact, many of them offered potentially superior qualities.
For the vast majority of problems in astronuclear engineering, it doesn’t come down to reactor physics, or propellant/coolant flow behavior, or thermal hydraulics; it comes down to materials science. Many of the reactors that were set aside were set aside specifically because they required materials or components that were outside the technological capabilities of the time. Examples of this litter the landscape of astronuclear history, especially when it comes to things like cladding materials or fuel elements. These areas have since seen revolutions in manufacturing capability, many of them completely independent of the nuclear industry, along with other capabilities that were unheard of at the time of these reactor proposals. While in-depth coverage of these issues and their implications is going to be a years-long process, we can begin by covering the reactor proposals over the years.
Spacecraft of Futures Past and Present
Modular construction is the reality of the future. Within certain limits, spacecraft are designed around what the available engines offer, but those limits are relatively few. Similarly, reactor design can be tweaked to a certain extent to account for spacecraft requirements. This can be seen in a more mature form in aerospace: everything from the mission duration of a satellite to which airliner is designed and built next is influenced by propulsion, whether it’s the specific impulse and propellant mass and volume budget of a geosynchronous communication satellite or the operational cost per pound and range of a next-generation airliner, with everything in between.
The same is true in the design of interplanetary spacecraft. Winchell Chung’s Realistic Designs library on the Atomic Rockets page has an incredible assortment of information on various designs, and is an invaluable resource, but each person weighs what’s important slightly differently, so while I’ll reference his page extensively, I hope to bring a different angle to the coverage of these spacecraft.
I hope to cover, in shorter posts, the individual pieces of these spacecraft: not just the engines, but the cargoes needed, lander designs, habitat structures, en-route scientific instruments, and so on. While occasionally there’ll be a deep dive into a particular spacecraft, my hope is to cover particular types of systems and how they’ve evolved over time, like we briefly did with the use of hydrogen propellant for NTRs. Design philosophies and materials advances offer the promise of better structural support for less mass, for instance, and crew quarters concepts have changed over the years; the interplay between specific impulse, mission duration, crew size, and engine capability all plays a significant role in evaluating potential spacecraft concepts, and in deciding what kind of spacecraft would offer the greatest benefits in various mission applications.
However, this won’t just cover spacecraft. As some of you know, I help out on a YouTube channel, Science and Futurism with Isaac Arthur [insert link], which has sometimes been called a “High Temple to Rotating Habitats.” There are many design possibilities out there, but in many cases they aren’t well-covered. From time to time I hope to cover either specific designs or aspects of the concept as a whole. For instance, later this winter I will publish a longer post on creating ecosystems within large rotating habitats, the challenges that still remain, and some potential lines of exploration for those interested in looking deeper into the subject (as I will in the future). Other concepts, such as the opportunities and challenges of migration in joined cylinders with artificial magnetospheres, are things I hope to touch on in the future as well, and more mechanical aspects, such as bearings and seals, transportation concepts, and size optimization, will probably be added over time. We’ll also look at concepts for bases on the Moon, Mars, and other planets, moons, and astronomical bodies, and how they’ve changed over time as our technology and our knowledge of the destination have improved. Finally, we’ll look into in situ resource utilization, automated manufacturing, and the ways these technologies could impact astronuclear engineering (spoiler: a LOT).
More Coming Soon!
Many changes are coming to the blog and website! Keep an eye out for notifications on these exciting updates! The next blog post, and the first of our Forgotten Reactors series, is coming soon! Other projects are still in deep development as well, and as more details get fleshed out and planning is completed, I’ll post more about these exciting opportunities and collaborations!
Our next post will be the first of the Forgotten Reactors, this time looking back at one of the earliest nuclear thermal rocket designs during project Rover: Dumbo!
Hello, and welcome back to Beyond NERVA! Today, we finish our look at electric propulsion systems by looking at electrostatic propulsion. This is easily the most common form of in-space electric propulsion system, and as we saw in our History of Electric Propulsion post, it’s also the first that was developed.
I apologize about how long it’s taken to get this blog post published. As I’ve mentioned before, electric propulsion is one of my weak subjects, so I’ve been very careful to try to ensure that the information that I’m giving is correct. Another complication came from the fact that I had no idea how complex and varied each type of drive system is. I have glossed over many details in this blog post on many of the systems, but I’ve also included an extensive list of documentation on all of the drive systems I discuss in the post at the end, so if you’re curious about the details of these systems, please check out the published papers on them!
By far the most common type of electric propulsion today, and the type most likely to be called an “ion thruster,” is electrostatic propulsion. The electrostatic effect was one of the first electrical effects ever formally described, and the first ever observed (lightning is an electrostatic phenomenon, after all). Electrostatics as a general field of study refers to the study of electric charges at rest (hence, “electro-static”). The electrostatic effect is the tendency of objects with opposite charges (one positive, one negative) to attract each other, and of objects with the same charge to repel each other. This occurs when electrons are stripped from or added to a material. Some of the earliest scientific experiments involving this effect used bars of amber and wool – the amber would become negatively charged, and the wool positively charged, due to the interactions of the very fine hairs of the wool and the crystalline and elemental composition of the amber (for the nitpicky types, this is known as the triboelectric effect, but it is still a manifestation of the electrostatic effect). Other experimenters during the 18th and 19th centuries used cat fur instead of wool, a much more mentally amusing way to build an electrostatic charge. However, we aren’t going to be discussing using a rotating wheel of cats to produce an electric thruster (although, if someone feels like animating said concept, I’d love to see it).
There are a number of designs that use electrostatic effects to produce thrust. Some are very similar to concepts that we discussed in the last post, like the RF ionized thruster (a major area of focus in Japan), the Electron Cyclotron Resonance thruster (which uses the same mechanisms as VASIMR’s acceleration stage), and the largely-abandoned Cesium Contact thruster (which has a fair amount in common with a pulsed plasma or arcjet thruster). Others, such as the Field Emission Electric Propulsion (FEEP) thruster and the Ionic Liquid Ion Source (also sometimes called electrospray) thruster, have far fewer similarities. None of these, though, are nearly as common as the electron bombardment noble gas thruster types: the gridded ion thruster (whether electron bombardment, cyclotron resonance, or RF ionization) and the Hall effect thruster (which also comes in two types: the thruster with anode layer and the stationary plasma thruster). The gridded ion thruster, commonly just called an ion thruster, is the propulsion system of choice for interplanetary missions, because it has the highest specific impulse of any currently available propulsion system. Hall effect thrusters have lower specific impulse but higher thrust, making them a popular choice for station-keeping and reaction control systems on commercial and military satellites.
Most electrostatic drives use an ionization chamber or zone to strip electrons from an easily-ionized material. These now-positively charged ions are then accelerated toward a negatively charged structure (or by an electromagnetic field, in some cases), pass through its apertures, and are expelled out the back of the thruster. Because of the low density of these ion streams, and the lack of an expanding gas, a physical nozzle isn’t used: the characteristic bell-shaped de Laval nozzle of chemical or thermal engines is useless here. However, there are many ways that the propellant can be ionized, and many ways that it can be accelerated, leading to a huge variety of design options within the area of electrostatic propulsion.
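The core physics here is just an energy balance: an ion of charge q falling through a net potential V leaves with velocity v = sqrt(2qV/m). A minimal sketch of that relation (the 1.1 kV figure is my own illustrative pick, roughly the regime of flown gridded ion thrusters, not a number from any specific spec sheet):

```python
import math

ELEMENTARY_CHARGE = 1.602176634e-19  # C
ATOMIC_MASS_UNIT = 1.66053906660e-27  # kg

def exhaust_velocity(net_potential_v: float, ion_mass_amu: float,
                     charge_state: int = 1) -> float:
    """Ideal exhaust velocity from the energy balance q*V = m*v^2/2."""
    ion_mass = ion_mass_amu * ATOMIC_MASS_UNIT
    return math.sqrt(2.0 * charge_state * ELEMENTARY_CHARGE
                     * net_potential_v / ion_mass)

# Singly charged xenon (131.3 amu) through a ~1.1 kV net potential:
v_xe = exhaust_velocity(1100.0, 131.3)
print(f"xenon at 1.1 kV: {v_xe / 1000:.1f} km/s")  # ~40 km/s
```

Note the square root: quadrupling the accelerating potential only doubles the exhaust velocity, which is why the very high isp designs discussed later need tens of kilovolts.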
The first design for a practical electric propulsion system, patented by Robert Goddard in 1917, was an electrostatic device, and most designs since, both in the US and the USSR, have used this concept. In the earliest days of electric propulsion design, each country went a different way in the development of this drive concept: the US focused on the technically simpler, but materially more problematic, gridded ion thruster, while the Soviet Union worked to develop the more technically promising, but more difficult to engineer, Hall thruster. Variations of each have been produced over the years, and additional options have been explored as well. These systems have flown on missions throughout the Solar System, including to many of the asteroids in the Main Belt, and provide much of the station-keeping thrust for satellites in orbit around Earth. Let’s go ahead and look at what the different types are, what their advantages and disadvantages are, and how they’ve been used in the past.
Gridded Ion Drives
This is the best-known of the electric propulsion thrusters of any type, and is often shortened to “ion drive.” Here, the thruster has four main parts: the propellant supply, an ionization chamber, an array of conductive grids, and a neutralizing beam emitter. The propellant can be anything that is easily ionized; cesium and mercury were the first options, though these have largely been replaced by xenon and argon.
The type of ionization chamber varies widely, and is the main difference between the different types of ion drive. The first designs used gaseous agitation to strip electrons, but most later systems use particle (mostly electron) beams, radio frequency or microwave agitation, or cyclotron resonance to strip the electrons off the atoms, with the method varying over the years and across manufacturers. The efficiency of the ionization chamber, and its capacity, define how much propellant mass flow is possible, which is one of the main limiting factors for the overall thrust possible from the thruster.
After being ionized, the gas and plasma are separated, using a negatively charged grid to extract the positively charged ions and leaving the neutral gas in the ionization chamber to be ionized in turn. In most modern designs, this is also the beginning of the acceleration process. Often, two or three grids are used, and the term “ion optics” is often used instead of “grids,” because these structures not only extract and accelerate the plasma, they shape its beam as well. The charge on these grids, and their geometry, define the exhaust velocity of the ions, so the specific impulse produced by the thruster is largely determined by the charge applied to these screens. Many US designs use a more highly charged inner screen to ensure better separation of the ions, with the charge potential difference between this grid and the second accelerating the ions. Because of this, the first grid is often called the extractor, and the second the accelerator grid. The charge potential possible on each grid is another major limiting factor for the possible power level – and therefore the maximum exhaust velocity – of these thrusters.
These screens are also one of the main limits on the thruster’s lifetime, since the ions impact the grid to some degree as they flow past (although the difference in charge potential between the apertures and the structure of the grid tends to minimize this). With many of the early gridded ion thrusters that used highly reactive propellants, chemical interactions could change the conductivity of the grid surfaces, cause more rapid erosion, and produce other problems; the transition to noble gas propellants has made this less of an issue. Finally, the geometry of the grids has a huge impact on the direction and velocity of the ions themselves, so there’s a wide variety of options available through the manipulation of this portion of the thruster as well.
At the end of the drive cycle, as the ions leave the thruster, a spray of electrons is added to the propellant stream to prevent the spacecraft from becoming negatively charged over time and attracting some of the propellant back toward itself through the same electrostatic effect that accelerated it in the first place. Problems with incomplete ion stream neutralization were common in early electrostatic thrusters, and with the cesium and mercury propellants used in those thrusters, chemical contamination of the spacecraft became an issue for some missions. Incomplete neutralization is still a concern for some thruster designs, although experiments in the 1970s showed that a spacecraft can ground itself without the ion stream if the differential charge becomes too great. In three-grid systems (or four, more on that concept later), the final grid takes the place of this electron beam, ensuring better neutralization of the plasma beam as well as greater possible exhaust velocity.
Gridded ion thrusters offer very attractive specific impulse, in the range of 1500-4000 seconds for typical designs (exhaust velocities of roughly 15-40 km/s), with high-power designs approaching 100 km/s. The other side of the coin is their low thrust, generally from 20-100 milliNewtons (low even by electric propulsion standards, although their specific impulse is higher than average), which is a mission planning constraint, but isn’t a major show-stopper for many applications. An advanced concept from the Australian National University and the European Space Agency, the Dual Stage 4 Grid (DS4G) thruster, achieved far higher exhaust velocities – up to 210 km/s – by using a staged gridded ion design.
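These performance numbers are tied together by a handful of simple relations – exhaust velocity ve = Isp·g0, thrust F = ṁ·ve, and beam (jet) power P = F·ve/2 – which is enough to sanity-check any thruster datasheet. A quick sketch using NEXT-class figures (an idealized calculation that ignores beam divergence and ionization losses):

```python
G0 = 9.80665  # standard gravity, m/s^2

def jet_power(thrust_n: float, isp_s: float) -> float:
    """Kinetic power carried by the exhaust beam: P = F * ve / 2."""
    return thrust_n * isp_s * G0 / 2.0

def mass_flow(thrust_n: float, isp_s: float) -> float:
    """Propellant mass flow rate from F = mdot * ve."""
    return thrust_n / (isp_s * G0)

# NEXT-class figures: 236 mN of thrust at 4150 s of specific impulse.
# Beam power alone comes to roughly 4.8 kW; the real electrical input
# is higher once ionization and other losses are included.
p_beam = jet_power(0.236, 4150.0)
mdot = mass_flow(0.236, 4150.0)
print(f"beam power: {p_beam / 1000:.2f} kW")
print(f"mass flow:  {mdot * 1e6:.2f} mg/s")
```

The tiny mass flow – a few milligrams per second – is the flip side of the high exhaust velocity, and is why these thrusters must run for thousands of hours to deliver their total impulse.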
Past and Current Gridded Ion Thrusters
These drive systems have been used on a number of different missions over the years, starting with the SERT missions mentioned in the history of electric propulsion post, and continuing on an experimental basis through the Deep Space 1 technology demonstration mission – the first spacecraft to use ion propulsion as its main form of propulsion. That same thruster, the NSTAR, is still in use today on the Dawn mission, studying the dwarf planet Ceres. Hughes Aircraft also developed a number of thrusters for station-keeping on their geosynchronous satellite bus (the XIPS thruster).
JAXA used this type of drive system for their Hayabusa mission to the asteroid Itokawa, though that thruster used microwaves to ionize the propellant. It operated successfully throughout the mission’s life, and propelled the first spacecraft to return a sample from an asteroid back to Earth.
ESA has used different variations of this thruster on multiple satellites as well, all of them radio frequency ionization types. The ArianeGroup RIT-10 has flown on multiple missions, and the QinetiQ T5 thruster was used successfully on the GOCE mission mapping the Earth’s gravitational field.
NASA certainly hasn’t given up on further developing this technology. The NEXT thruster produces three times the thrust of the NSTAR thruster, although it operates on similar principles. The testing regime for this thruster has been completed, demonstrating 4150 s of isp and 236 mN of thrust over a testing life of over 48,000 hours, and it is currently awaiting a mission to fly on. It has also been a testbed for new designs and materials for many of the drive system components, including a new hollow cathode made of LaB6 (lanthanum hexaboride) and several new screen materials.
HiPEP: NASA’s Nuclear Ion Propulsion System
Another NASA project in gridded ion propulsion, although one that has since been canceled, is far more germane to the specific use of nuclear electric propulsion: the High Power Electric Propulsion drive (HiPEP) for the Jupiter Icy Moons Orbiter mission. JIMO was an NEP-propelled mission to Jupiter, canceled in 2005, meant to study Europa, Ganymede, and Callisto (this mission will get an in-depth look later in this blog series on NEP). HiPEP used two types of ionization chamber. The first was Electron Cyclotron Resonance ionization, which leverages the small number of free electrons present in any gas by moving them in a circle within the magnetic containment of the ionization chamber, while microwaves tuned to be in resonance with these moving electrons ionize the xenon gas more efficiently. The second was direct current ionization, using a hollow cathode to strip off electrons; this option has additional problems with cathode failure, and so is less preferred. Cathode failure of this sort is another major failure point for ion drives, so being able to eliminate it is a significant advantage, but the microwave system consumes more power, so in less energy-intensive applications it’s often not used.
One very unusual thing about this system is its shape: rather than the typical circular discharge chamber and grids, this system uses a rectangular configuration. The designers note that not only does this make the system more compact when stacking multiple units together (reducing the structural, propellant, and electrical feed system mass requirements for the full system), it also means that the current density across the grids can be lower for the same electrostatic potential, reducing current erosion in the grids. This means that the grid can support a 100 kg/kW throughput margin for both of the isp configurations that were studied (6000 and 8000 s isp). The longest distance between two supported sections of grid can be reduced as well, preventing issues like thermal deformation, launch vibration damage, and electrostatic attraction between the grids and either the propellant or the back of the ionization chamber itself. The fact that it makes the system more scalable from a structural engineering standpoint is one final benefit of this design.
As the power of the thruster increases, so do the beam neutralization requirements. In this case, up to 9 Amperes of continuous throughput are required, which is very high compared to most systems. This means that the neutralizing beam has to be both powerful and reliable. While the HiPEP team discusses using a common neutralization system for tightly packed thrusters, the baseline design is a fairly typical hollow cathode, similar to what was used on the NSTAR thruster, but with a rectangular cross section rather than a circular one to accommodate the different thruster geometry. Other concepts, like microwave beam neutralization, were also discussed; however, due to the success and long life of this type of system on NSTAR, the designers felt that a hollow cathode would be the most reliable way to meet the high throughput requirements of this system.
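The neutralization requirement scales directly with beam current, which can be estimated from thrust and specific impulse if we assume singly charged xenon (a back-of-the-envelope sketch; the real mass flow also includes un-ionized and doubly charged propellant, so this is a lower bound on the picture, not a figure from the HiPEP reports):

```python
G0 = 9.80665                 # standard gravity, m/s^2
E_CHARGE = 1.602176634e-19   # C
AMU = 1.66053906660e-27      # kg

def beam_current(thrust_n: float, isp_s: float, ion_amu: float) -> float:
    """Ion beam current implied by thrust and isp for singly charged
    ions: I = mdot * (q / m), with mdot = F / ve."""
    mdot = thrust_n / (isp_s * G0)
    return mdot * E_CHARGE / (ion_amu * AMU)

# HiPEP's highest-power test point (roughly 800 mN at 9620 s on xenon)
# implies a beam current of about 6 A - the same order of magnitude as
# the 9 A neutralizer throughput requirement.
i_beam = beam_current(0.8, 9620.0, 131.3)
print(f"implied beam current: {i_beam:.1f} A")
```

This is why heavy propellants like xenon are attractive: for the same thrust and isp, a lighter ion would mean proportionally more ions per second, and therefore an even larger current to neutralize.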
HiPEP consistently met its program guidelines, for both engine thrust efficiency and erosion studies. Testing of the microwave ionization system was conducted at both 2.45 and 5.85 GHz, and was successfully concluded. The 2.45 GHz test, at 16 kW of power, achieved a specific impulse of 4500-5500 seconds, clearing the way for the higher-powered microwave emitter. The 5.85 GHz ionization chamber was tested at multiple power loads, from 9.7 to 39.3 kW, achieved a maximum specific impulse of 9620 s, and showed a clear increase in thrust, to nearly 800 mN, during this test.
Sadly, with the cancellation of JIMO (a program we will come back to frequently as we continue looking at NEP), the need for a high-powered gridded ion thruster (and the means of powering it) went away. Much like the fate of NERVA, and almost every nuclear spacecraft ever designed, the cancellation of the mission it was meant to fly on sounded the death knell of the drive system. However, HiPEP remains on the books as an attractive, powerful gridded ion drive for when an NEP spacecraft becomes a reality.
DS4G: Fusion Research-Inspired, High-isp Drives to Travel to the Edge of the Solar System
The Dual Stage 4 Grid (DS4G) ion drive is perhaps the most efficient electric drive system ever proposed, offering specific impulse well over 10,000 seconds. While there are some drive systems that offer higher isp, they’re either rare concepts (like the fission fragment rocket, a concept that we’ll cover in a future post), or have difficulties in the development process (such as Orion derivatives, which run afoul of nuclear weapons test bans and treaty limitations concerning the use of nuclear explosives in space).
So how does this design work? Traditional ion drives use either two grids (like the HiPEP drive), combining the extraction and acceleration stages in those grids and then using a hollow cathode or electron emitter to neutralize the beam, or three grids, where the third grid takes the place of the hollow cathode. In either case, the grids are very closely spaced, which has its advantages but also a couple of disadvantages: combining the extraction and acceleration systems forces a compromise between extraction efficiency and acceleration capability, and the close spacing limits the acceleration possible for the propellant. The DS4G, as the name implies, does things slightly differently: there are two pairs of grids, with each grid close to its partner but the pairs spaced further apart from each other. This allows a greater acceleration length, and therefore higher exhaust velocity, and the distance between the extraction pair and the final acceleration pair allows each to be better optimized for its individual purpose. As an added benefit, the plasma beam is better collimated than that of a traditional ion drive, which means the drive uses its propellant mass more efficiently, increasing the specific impulse even further.
This design didn’t come out of nowhere, though. In fact, most tokamak-type fusion reactors use a device very similar to an ion drive to accelerate beams of hydrogen to high velocities, but because the beam has to cross the intense magnetic fields surrounding the reactor, the atoms can’t remain ionized. This means that a very effective neutralizer needs to be attached to the back of what’s effectively an ion drive… and these designs all use four screens, rather than three. Dr. David Fearn knew of these devices and decided to adapt the concept to space propulsion, with the help of ESA, leading to a 2005 test-bed prototype built in collaboration with the Australian National University. An RF ionization system was designed for the plasma production unit, and a 35 kV electrical system was designed for the thruster prototype’s ion optics. This was not optimized for in-space use; rather, it was a low-cost test-bed for optics geometry testing and general troubleshooting of the concept. Another benefit of this design is a higher-than-usual thrust density of 0.86 mN/cm^2, which was seen in the second phase of testing.
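As a sanity check on those numbers, accelerating singly charged xenon through the prototype’s full 35 kV gives an ideal exhaust velocity of about 227 km/s – comfortably above the roughly 210 km/s actually achieved, with the gap plausibly explained by divergence and other losses (a simplified calculation of my own, not a figure from the test reports):

```python
import math

E_CHARGE = 1.602176634e-19  # C
AMU = 1.66053906660e-27     # kg
XENON_AMU = 131.3

# Ideal velocity of a singly charged xenon ion falling through
# the DS4G prototype's 35 kV total accelerating potential:
v_ideal = math.sqrt(2.0 * E_CHARGE * 35_000 / (XENON_AMU * AMU))
print(f"ideal exhaust velocity: {v_ideal / 1000:.0f} km/s")  # ~227 km/s
```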
Two rounds of highly successful testing were done at ESA’s CORONA test chamber in 2005 and 2006, the results of which can be seen in the tables above. The first test series used a single aperture design, which while highly inefficient was good enough to demonstrate the concept; this was later upgraded to a 37 aperture design. The final test results in 2006 showed impressive specific impulse (14000-14500 s), thrust (2.7 mN), electrical, mass, and total efficiency (0.66, 0.96, and 0.63, respectively). The team is confident that total efficiencies of about 70% are possible with this design, once optimization is complete.
There remain significant engineering challenges, but nothing that’s incredibly different from any other high powered ion drive. Indeed, many of the complications concerning ion optics, and electrostatic field impingement in the plasma chamber, are largely eliminated by the 4-grid design. Unfortunately, there are no missions that currently have funding that require this type of thruster, so it remains on the books as “viable, but in need of some final development for application” when there’s a high-powered mission to the outer solar system.
Cesium Contact Thrusters: Liquid Metal Fueled Gridded Ion Drives
As we saw in our history of electric propulsion blog post, many of the first gridded ion engines were fueled with cesium (Cs). These systems worked well, and the advantages of having an easily storable, easily ionized, non-volatile propellant (in vapor terms, at least) were significant. However, cesium is also a reactive metal, and is toxic to boot, so by the end of the 1970s development on this type of thruster was stopped. As an additional problem, due to the inefficient and incomplete beam neutralization with the cathodes available at the time, contamination of the spacecraft by the Cs ions (as well as loss of thrust) were a significant challenge for the thrusters of the time.
Perhaps the most useful part of this type of thruster to consider is the propellant feed system, since it can be applied to many different low-melting-point metals. The propellant itself was stored as a liquid in a porous metal sponge made out of nickel, which was attached to two tungsten resistive heaters. By adjusting the size of the pores of the sponge (called Feltmetal in the documentation), the flow rate of the Cs is easily, reliably, and simply controlled. Wicks of graded-pore metal sponges were used to draw the Cs to a vaporizer, made of porous tungsten and heated with two resistive heaters. This then fed to the contact ionizer, and once ionized the propellant was accelerated using two screens.
As we’ll see in the propellant section, after looking at Hall effect thrusters, Cs (as well as other metals, such as barium) could have a role to play in the future of electric propulsion, and looking at the solutions of the past can help develop ideas for the future.
Hall Effect Thrusters
When the US was beginning to investigate the gridded ion drive, the Soviet Union was investigating the Hall Effect thruster (HET). This is a very similar concept to the ion drive in many ways, in that it uses the electrostatic effect to accelerate propellant, but the mechanism is very different. Rather than using a system of electrically charged grids to produce the electrostatic potential needed to accelerate the ionized propellant, in a HET the plasma itself creates the electrostatic potential through the Hall effect, discovered in the 1870s by Edwin Hall. In these thrusters, the backplate functions as both a gas injector and an anode. A radial magnetic field, produced by a set of outer solenoids and a central solenoid, traps the electrons that have been stripped off the propellant as it’s ionized (mostly through electron impact), forming a circulating current of electrons in the plasma. This provides the electrostatic potential used to accelerate the propellant to produce thrust. After the ions are ejected from the thruster, a hollow cathode very similar to the one used in the ion drives we’ve been looking at neutralizes the plasma beam, for the same reasons as on an ion drive system (this cathode is also the source of approximately 10% of the propellant mass flow). Cathodes are commonly mounted external to the thruster on a small arm; however, some designs – especially modern NASA designs – use a central cathode instead.
The list of propellants used tends to be similar to that of other ion drives: xenon, krypton, argon, iodine, bismuth, magnesium, and zinc have all been used (along with some others, such as NaK), but Xe and Kr are the most popular propellants. While this system has a lower average specific impulse (1500-3000 s isp) than the gridded ion drives, it has more thrust (a typical drive system used today consumes 1.35 kW of power to generate 83 mN of thrust), meaning that it’s very good for orbital inclination maintenance or reaction control on commercial satellites.
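Those figures are enough to estimate total efficiency, η = F·ve / (2·P). A short sketch, assuming 1600 s of isp for the 83 mN / 1.35 kW example (the isp is my own illustrative pick from within the range quoted above, not a datasheet value):

```python
G0 = 9.80665  # standard gravity, m/s^2

def total_efficiency(thrust_n: float, isp_s: float, power_w: float) -> float:
    """Jet (beam kinetic) power divided by electrical input power."""
    return thrust_n * isp_s * G0 / (2.0 * power_w)

# 83 mN from 1.35 kW at an assumed 1600 s of isp comes out just under
# 50% - consistent with the 50-60% typically quoted for commercial HETs.
eta = total_efficiency(0.083, 1600.0, 1350.0)
print(f"estimated total efficiency: {eta:.0%}")
```

The same formula shows the thrust/isp trade directly: at fixed input power and efficiency, thrust falls as isp rises, which is why HETs out-thrust gridded ion drives of the same power class.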
SPT type thruster, image S. Graham BeyondNERVA
TAL Type Thruster, image S. Graham BeyondNERVA
There are a number of types of Hall effect thruster, with the most common being the Thruster with Anode Layer (TAL), the Stationary Plasma Thruster (SPT), and the cylindrical Hall thruster (CHT). The cylindrical thruster is optimized for low power applications, such as for cubesats, and I haven’t seen a high power design, so we aren’t going to really go into those. There are two obvious differences between these designs:
What the walls of the acceleration chamber are made of: the TAL uses metallic walls, while the SPT uses an insulator (usually boron nitride), which has the effect of the TAL having higher electron velocities in the plasma than the SPT.
The length of the acceleration zone, and its impact on ionization behavior: the TAL has a far shorter acceleration zone than the SPT (sort of – see Choueiri’s analytical comparison of the two systems for dimensional vs non-dimensional characteristics: http://alfven.princeton.edu/publications/choueiri-jpc-2001-3504). Since the walls of the acceleration zone are a major lifetime limiter for any Hall effect thruster, there’s an engineering trade-off here for the designer (or customer) of an HET to consider.
There’s a fourth type of thruster as well, the External Discharge Plasma Thruster, which doesn’t have a physically constrained acceleration zone, and which we’ll also look at; however, as far as I’ve been able to find there are very few designs, most of them operating at low voltage, so they, too, aren’t as attractive for nuclear electric propulsion.
Commercially available HETs generally have a total efficiency in the range of 50-60%; however, every thruster that I’ve seen increases in efficiency as the power increases, up to its design power limits, so higher-powered systems, such as those that would be used on a nuclear electric spacecraft, would likely have higher efficiencies. Some designs, such as the dual stage TAL thruster that we’ll look at, approach 80% efficiency or better.
SPT Hall Effect Thrusters
Stationary Plasma Thrusters use an insulating material for the propellant channel immediately downstream of the anode. This means that the electrostatic potential in the drive can be spread over a greater distance than in other thruster designs, leading to greater separation of ionized vs. non-ionized propellant, and therefore potentially more complete ionization – and thus greater thrust efficiency. While they have been proposed since the beginning of research into Hall effect thrusters in the Soviet Union, the lack of an effective, cost-efficient insulator able to survive long enough to allow a useful thruster lifetime was a major limitation in early designs, leading to an early focus on the TAL.
The SPT has the greatest depth between the gas diffuser (or propellant injector) and the nozzle of the thruster. This is nice, because it gives volume and distance to work with in terms of propellant ionization. The ionized propellant is accelerated toward the nozzle, and the not-yet-ionized portion can still be ionized, even as the plasma component nudges it toward the nozzle by bouncing off the un-ionized atoms like billiard balls. Because of this, SPT thrusters can have much higher propellant ionization percentages than the other types of Hall effect thruster, which directly translates into greater thrust efficiency. This extended ionization chamber is made of an electrical insulator, usually boron nitride, although Borosil, a solution of BN and SiO2, is also used. Other materials, such as nanocrystalline diamond, graphene, and a new material called ultra-BN (boron nitride built by plasma-assisted chemical vapor deposition), have also been proposed and tested.
The downside to this type of thruster is that the insulator is eroded during operation. Because the erosion of the propellant channel is the main lifetime limit of this type of thruster, the longer propellant channel in the SPT is a detriment to thruster lifetime. Improved materials for the insulator cavity are a major research focus, but replacing boron nitride is going to be a challenge, because it is advantageous for a Hall effect thruster in a number of ways (and also in the other application we’ve looked at, reactor shielding): in addition to being a good electrical insulator, it’s incredibly strong and very thermally conductive. The only major downside is its expense, especially formed into single, large, complex shapes; so, often, SPT thrusters have two boron nitride inserts: one at the base, near the anode, and another at the “waist,” or start of the nozzle, of the SPT thruster. Inconsistencies in the composition and conductivity of the insulator can lead to plasma instabilities in the propellant due to local magnetic field gradients, which can cause losses in ionization efficiency. Additionally, as the magnetic field strength increases, plasma instabilities develop in proportion to the total field strength along the propellant channel.
Another problem that surfaces with these sorts of thrusters is that under high power, electrical arcing can occur, especially in the cathode or at a weak point in the insulator channel. This is especially true for a design that uses a segmented insulator lining for the propellant channel.
HERMeS: NASA’s High Power Single Channel SPT Thruster
The majority of NASA’s research into Hall thrusters is currently focused on the Advanced Electric Propulsion System, or AEPS. This is a solar electric propulsion system which encompasses the power generation and conditioning equipment, as well as a 14 kW SPT thruster known as HERMeS, or the Hall Effect Rocket with Magnetic Shielding. Originally meant to be the primary propulsion unit for the Asteroid Redirect Mission, the AEPS is currently planned for the Power and Propulsion Element (PPE) for the Gateway platform (formerly Lunar Gateway and LOP-G) around the Moon. Since the power and conditioning equipment would be different for a nuclear electric mission, though, our focus will be on the HERMeS thruster itself.
This thruster is designed to operate as part of a 40 kW system, meaning that three thrusters will be clustered together (complications in clustering Hall thrusters will be covered later, as part of the Japanese RAIJIN TAL system). Each thruster has a central hollow cathode, and is optimized for xenon propellant.
Many materials technologies are being experimented with in the HERMeS thruster. For instance, two different hollow cathodes are being tested: LaB6 (which was experimented with extensively for the NEXT gridded ion thruster) and barium oxide (BaO). Since the LaB6 cathode was already extensively tested, the program has focused on the BaO cathode. The 2000-hour wear test is still underway; however, the testing conducted to date has confirmed the expected behavior of the BaO cathode. Another example is the propellant discharge channel: normally boron nitride is used for the discharge channel, but the latest iteration of the HERMeS thruster uses a boron nitride-silicon (BN-Si) composite instead. This could potentially reduce erosion in the discharge channel and increase the life of the thruster. As of today, the differences in plasma plume characterization are minimal to the point of being insignificant, and erosion tests are similarly inconclusive; however, in theory, the BN-Si composite could improve the lifetime of the thruster. It is also worth noting that, as with any new material, it takes time to fully develop its manufacture and optimize it for a particular use.
As of the latest launch estimates, the PPE is scheduled to launch in 2022, and all development work of the AEPS is on schedule to meet the needs of the Gateway.
Nested Channel SPT Thrusters: Increasing Power and Thrust Density
One concept that has grown more popular recently (although it’s far from new) is to increase the number of propellant channels in a single thruster, in what’s called a nested channel Hall thruster. Several designs have used two nested channels. While a number of programs are investigating nested Hall effect thrusters, including in Japan and China, we’ll use the X2, studied at the University of Michigan, as an example. While this design has been supplanted by the X3 (more on that below), many of the questions about the operation of these types of thrusters were addressed by experimenting with the X2. Generally speaking, the propellant flow in the different channels is proportional to the surface area of the emitter anode, and the power and flow rate of the centrally mounted cathode are adjusted to match whether one or multiple channels are firing. Since these designs often use a single central cathode despite having multiple anodes, a lot of development work has gone into improving the hollow cathodes for longer life and greater power capability. None of the designs that I saw used external cathodes, like those sometimes seen with single-channel HETs, but I’m not sure if that is just a consequence of the design philosophies of the institutions (primarily JPL and the University of Michigan) whose papers I was able to access while investigating this type of design.
There are a number of advantages to the nested-channel design. Not only is it possible to get more propellant flow from less mass and volume, but the thruster can be throttled as well. For higher thrust operation (such as rapid orbital changes), both channels are fired at once, and the mass flow through the cathode is increased to match. By turning off the central channel and leaving the outer channel firing, a medium “gear” is possible, with mass flow similar to a typical SPT thruster. The smallest channel can be used for the highest-isp operation for interplanetary cruise operations, where the lower mass flow allows for greater exhaust velocities.
A number of important considerations were studied during the X2 program, including anode efficiency during the different modes of operation (a slight decrease in efficiency during two-channel operation, with the highest efficiency during inner-channel-only operation), interactions between the plasma plumes (harmonic oscillations were detected at 125 and 150 V, more frequently in the outer channel, but not at 200 V, indicating cross-channel interactions that would need to be characterized in any design), and power-to-thrust efficiency (slightly higher during two-channel operation than the sum of each channel operating independently, for reasons that couldn’t be fully characterized). The success of this program led to its direct successor, currently under development by the University of Michigan, Aerojet Rocketdyne, and NASA: the X3 SPT thruster.
The X3 is a 100 kWe design that uses three nested discharge chambers. The cathode for this thruster is a centrally mounted hollow cathode, which accounts for 7% of the total gas flow of the thruster under all modes of operation. Testing during 2016 and 2017 ranging from 5 to 102 kW, 300 to 500 V, and 16 to 247 A, demonstrated a specific impulse range of 1800 to 2650 s, with a maximum thrust of 5.4 N. As part of the NextSTEP program, the X3 thruster is part of the XR-100 electric propulsion system that is currently being developed as a flexible, high-powered propulsion system for a number of missions, both crewed and uncrewed.
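As a rough cross-check on those published figures, we can compute the kinetic power carried by the exhaust and compare it to the input power. This is only a sketch: the peak thrust, peak power, and peak isp quoted above were not necessarily achieved simultaneously, so the efficiency below is an upper-end estimate, not a measured value.

```python
G0 = 9.80665  # standard gravity, m/s^2

def jet_power(thrust_n: float, isp_s: float) -> float:
    """Kinetic power carried by the exhaust jet: P_jet = F * v_e / 2."""
    v_e = isp_s * G0  # effective exhaust velocity, m/s
    return thrust_n * v_e / 2.0

# Published X3 figures: up to 5.4 N thrust, ~102 kW input, ~2650 s isp
p_jet = jet_power(5.4, 2650.0)
efficiency = p_jet / 102e3
print(f"jet power ~ {p_jet / 1e3:.1f} kW, implied thrust efficiency ~ {efficiency:.0%}")
```

The implied efficiency of roughly 70% is in the range typical of large Hall thrusters, which suggests the quoted figures are mutually consistent.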
While this thruster is showing a great deal of promise, there remain a number of challenges to overcome. One of the biggest is cathode efficiency, which was shown to be only 23% during operation of just the outermost channel. This is a heavy-duty cathode, rated to 120 A. Due to the concerns of erosion, especially under high-power, high-flow conditions, there are three different gas injection points: through the central bore of the cathode (limited to 20 sccm), external flow injectors around the cathode keeper, and supplementary internal injectors.
The cross-channel thrust increases seen in the X2 thruster weren’t observed, meaning that this effect could have been something particular to that design. In addition, due to the interactions between the different magnetic lenses used in each of the discharge channels, the strength and configuration of each magnetic field has to be adjusted depending on the other channels that are operating, a challenge that increases with magnetic field strength.
Finally, the BN insulator was shown to expand in earlier tests to the point that a gap was formed, allowing arcing to occur from the discharge plasma to the body of the thruster. Not only does this mean that the plasma is losing energy – and therefore decreasing thrust – but it also heats the body of the thruster as well.
These challenges are all being addressed, and in the next year the 100-hour, full power test of the system will be conducted at NASA’s Glenn Research Center.
TAL Hall Effect Thrusters
The TAL concept has been around since the beginning of Hall thruster development. In the USSR, TAL development was tasked to the Central Research Institute for Machine Building (TsNIIMash). Early challenges with the design led to it not being explored as thoroughly in the US. In Europe and Asia, however, this type of thruster has been a major research focus for decades, and recently the US has increased its study of the design as well. Since these designs have (as a general rule) higher power requirements, they have not been used nearly as much as the SPT-type Hall thruster, but for high-powered systems they offer a lot of promise.
As we mentioned before, the TAL uses a conductor for the walls of the plasma chamber, meaning that the radial electric field across the plasma is continuous across the acceleration chamber of the thruster. Because of the high magnetic fields in this type of thruster (0.1-0.2 T), the electron cyclotron radius is very small, allowing for more efficient ionization of the propellant and therefore limiting the size needed for the acceleration zone. However, because a fraction of the ion stream is directed toward these conducting walls, degrading them, the lifetime of these thrusters is often shorter than that of their SPT counterparts. This is one area of investigation for designers of TAL thrusters, especially higher-powered variants.
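To see just how small that electron cyclotron (gyro) radius is, here is a quick estimate. The 20 eV electron energy is my assumption, chosen as a representative discharge value, not a figure from the text; only the 0.1-0.2 T field strength comes from the source.

```python
import math

M_E = 9.109e-31   # electron mass, kg
Q_E = 1.602e-19   # elementary charge, C

def larmor_radius(energy_ev: float, b_tesla: float) -> float:
    """Gyroradius of an electron with the given kinetic energy in a field B."""
    v = math.sqrt(2.0 * energy_ev * Q_E / M_E)  # electron speed, m/s
    return M_E * v / (Q_E * b_tesla)

# Assumed 20 eV electrons in a 0.15 T field (mid-range of the quoted 0.1-0.2 T):
r = larmor_radius(20.0, 0.15)
print(f"electron gyroradius ~ {r * 1e3:.2f} mm")
```

A gyroradius on the order of a tenth of a millimeter is far smaller than the channel dimensions, which is why the electrons stay trapped in the Hall current and ionize the propellant efficiently in such a short acceleration zone.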
As a general rule, TAL thrusters have lower thrust, but higher isp, than SPT thrusters. Since the majority of Hall thrusters are used for station-keeping, where thrust levels are a significant design consideration, this has also militated in favor of the SPT thruster’s wider deployment.
High-Powered TAL Development in Japan: Clustered TAL with a Common Cathode
One country that has been doing a lot of development work on the TAL thruster is Japan. Most of their designs seem to be in the 5 kW range, and are designed to operate clustered around a single cathode for charge neutralization. The RAIJIN program (Robust Anode-layer Intelligent Thruster for Japanese IN-space propulsion system) has been focusing on many of the issues with high-powered TAL operation, mainly for raising satellites from low Earth orbit to geosynchronous orbit (a maneuver that accounts for a large part of the propellant needed for many satellite launches today, and is directly applicable to de-orbiting satellites as well). The RAIJIN94 thruster is a 5 kW TAL under development by Kyushu University, the University of Tokyo, and the University of Miyazaki. Overall targets for the program are a thruster that operates at 6 kW, providing 360 mN of thrust at 1700 s isp, with an anode mass flow rate of 20 mg/s and a cathode flow rate of 1 mg/s. The ultimate goal of the program is a 25 kW TAL system containing five of these thrusters with a common cathode. Based on mass penalty analysis, this is a more mass-efficient approach for a TAL than a single large thruster with its increased thermal management requirements. Management of anode and conductor erosion is a major focus of this program, but not one that has been written about extensively. The limiting of the thruster power to about 5 kW, though, seems to indicate that scaling a traditional TAL beyond this size, at least with current materials, is impractical.
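A quick sanity check shows that those program targets hang together. If every propellant particle left the thruster at the effective exhaust velocity implied by the isp target, the thrust would be within a few percent of the 360 mN goal; this is just the ideal-rocket relation F = mdot * isp * g0, not a claim about the actual RAIJIN design.

```python
G0 = 9.80665  # standard gravity, m/s^2

# Program targets quoted above: 360 mN at 1700 s isp,
# 20 mg/s anode flow plus 1 mg/s cathode flow.
mdot_total = 21e-6           # total mass flow, kg/s
isp = 1700.0                 # target specific impulse, s
ideal_thrust = mdot_total * isp * G0  # thrust if all flow exits at isp * g0
print(f"ideal thrust ~ {ideal_thrust * 1e3:.0f} mN vs the 360 mN target")
```

The ideal figure comes out around 350 mN, so the 360 mN target assumes near-complete utilization of the propellant flow, consistent with the program's emphasis on ionization and erosion management.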
There are challenges with this design paradigm, however, which will also affect other clustered Hall designs. Cathode performance, as we saw in the SPT section, is a concern, especially when operating at the very high power and mass flow rates that a large cluster would need. Perhaps a larger consideration is the plasma oscillations, in the 20 kHz range, that occurred when two thrusters were fired side by side, as was done (and continues to be done) at Gifu University. It was found that by varying the mass flow rate, operating at slightly lower power, and maintaining wider spacing between the thruster heads, the plasma flow instabilities could be accounted for. Experiments there continue to study this phenomenon, and the researchers, headed by Dr. Miyasaka, are confident that the issue can be managed.
Dual Stage TAL and VHITAL
One of the most interesting concepts investigated at TsNIIMash was the dual-stage TAL, which used two anodes. The first anode is very similar to the one used in a typical TAL or SPT: it serves as the injector for the majority of the propellant and provides the charge to ionize it. As the plasma exits this first anode, it encounters a second anode at the opening of the propellant channel, which accelerates the propellant. An external cathode is used to neutralize the beam. This design demonstrated specific impulses of up to 8000 s, among the highest (if not the highest) of any Hall thruster to date. The final iteration during this phase of research was the water-cooled TAL-160, which operated at power levels from 10 to 140 kW.
Another point of interest with this design is the use of bismuth as the propellant. As we’ll see below, the propellant choices for an electrostatic thruster are very broad, and which propellant you use depends on a number of characteristics. Bismuth is reasonably inexpensive, relatively common, and storable as a solid. This last point is also a headache for an electrostatic thruster, since ionized powders are notorious for sticking to surfaces and gumming up the works, as it were. In this case, since bismuth has a reasonably low melting temperature, a pre-heater was used to resistively melt the bismuth, and an electromagnetic pump then propelled it toward the anode. Just before injection into the thruster, a carbon vaporization plug was used to ensure proper mass flow. As long as the operating temperature of the thruster was high enough, and the mass flow was carefully regulated, this novel fueling concept was not a problem.
This design was later picked up in 2004 by NASA, who worked with TsNIIMash researchers to develop the VHITAL, or Very High Isp Thruster with Anode Layer, over two and a half years. While this thruster uses significantly less power (36 kW as opposed to up to 140 kW), many of the design details are the same, but with a few major differences: the NASA design is radiatively cooled rather than water cooled, it added a resistive heater to the base of the first anode as well, and tweaks were made to the propellant feed system. The original TAL-160 was used for characterization tests, and the new VHITAL-160 thruster and propellant feed system were built to characterize the system using modern design and materials. Testing was carried out at TsNIIMash in 2006, and demonstrated stable operation without using a neutralizing cathode, and expected metrics were met.
If anyone has additional information about this program, please comment below or contact me via email!
Hybrid and Non-Traditional Hall Effect Thrusters: Because Tweaking Happens
As we saw with the VHITAL, the traditional SPT and TAL thrusters – while the most common – are far from the only way to use these technologies. One interesting concept, studied by EDB Fakel in Russia, is a hybrid SPT-TAL thruster. SPT thrusters, with their extended, insulator-lined ionization chamber, generally provide fuller ionization of the propellant. TAL thrusters, on the other hand, are better able to efficiently accelerate the propellant once it’s ionized. So the designers at EDB Fakel, led by M. Potapenko, developed, built, and tested the PlaS-40 hybrid thruster, rated at up to 0.4 kW, and proposed and breadboard-tested a larger (up to 4.5 kW) PlaS-120 thruster as well, during the 1990s (initial conception was in the early 90s, but the test model was built in 1999). While fairly similar in outward appearance to an SPT, the acceleration chamber was shorter. The PlaS-40 achieved 1000-1750 s isp and a thrust of 23.5 mN, while the PlaS-120 showed the capability of reaching 4000 s isp and up to 400 mN of thrust (these tests were not continued, due to a lack of funding). This design concept could offer advances in specific impulse and thrust efficiency beyond traditional thruster designs, but there currently isn’t enough research to show a clear advantage.
Another interesting hybrid design was a gridded Hall thruster, researched by V. Kim at Fakel in 1973-1975. Here, again, an SPT-type ionization chamber was used, and the screens were used to more finely control the magnetic lensing effect of the thruster. This was an early design, used because the knowledge and technology of the time didn’t yet allow the grids to be eliminated. However, it’s possible that a hybrid Hall-gridded ion thruster may offer higher specific impulse while taking advantage of the more efficient ionization of an SPT thruster. As we saw with both the DS4G and the VHITAL, increasing the separation between the ionization, ion extraction, and acceleration portions of the thruster allows for greater thrust efficiency, and this may be another mechanism to do that.
One design, out of the University of Michigan, modifies the anode itself, by segmenting it into many different parts. This was done to manage plasma instabilities within the propellant plume, which cause parasitic power losses. While it’s unclear exactly how much efficiency can be gained by this, it solves a problem that had been observed since the 1960s close to the anode of the thruster. Small tweaks like this may end up changing the geometry of the thruster significantly over time as optimization occurs.
Other modifications have been made as well, including combining discharge chambers, using conductive materials for discharge chambers but retaining a dielectric ceramic in the acceleration zone of the thruster, and many other designs. Many of these were early ideas that were demonstrated but not carried through for one reason or another. For instance, the metal discharge chambers were considered an economic benefit, because the ceramic liners are the major cost-limiting factor in SPT thrusters. With improved manufacturing and availability, costs went down, and the justification went away.
There remains an incredible amount of flexibility in the Hall effect thruster design space. While two stage, nested, and clustered designs are the current most advanced high power designs, it’s difficult to guess if someone will come up with a new idea, or revisit an old one, to rewrite the field once again.
Propellants: Are the Current Propellant Choices Still Effective For High Powered Missions?
One of the interesting things to consider about these types of thrusters, both gridded ion and Hall effect, is propellant choice. Xenon is, as of today, the primary propellant used by all operational electrostatic thrusters (although some early thrusters used cesium and mercury), but Xe is rare and reasonably expensive. In smaller Hall thruster designs, such as the 5-10 kWe thrusters on telecommunications satellites, the propellant load (as of 1999) for many spacecraft is less than 100 kg – a significant but not exorbitant amount, and launch costs (and design considerations) make this a cost-effective decision. For larger spacecraft, such as a Hall-powered spacecraft to Mars, the propellant mass could easily be in the 20-30 ton range (assuming 2500 s isp and a 100 mg/s flow rate of Xe), which is a very different matter in terms of Xe availability and cost. Alternatives, then, become far more attractive if possible.
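To put those numbers in perspective, here is the arithmetic behind them. The 2500 s isp and 100 mg/s flow rate are the figures from the text; the 25 t propellant load is simply the middle of the quoted 20-30 ton range, and the burn time assumes the thruster runs continuously at full flow.

```python
G0 = 9.80665  # standard gravity, m/s^2

mdot = 100e-6                # xenon mass flow from the text, kg/s
isp = 2500.0                 # specific impulse from the text, s
thrust = mdot * isp * G0     # ideal thrust at full propellant utilization, N

prop_mass = 25_000.0         # kg, middle of the quoted 20-30 t range
burn_time_s = prop_mass / mdot
burn_time_yr = burn_time_s / (365.25 * 24 * 3600)
print(f"thrust ~ {thrust:.2f} N, continuous burn time ~ {burn_time_yr:.1f} years")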
Argon is also an attractive option, and is often proposed as a propellant, being far less rare. However, it’s also considerably lower in mass, leading to higher specific impulse but lower thrust. Depending on the mission, this could be a problem if large changes in delta-v are needed in a short period of time. Argon’s higher ionization energy also means that either the propellant won’t be as completely ionized, leading to a loss of efficiency, or more energy is required to ionize it.
The next most popular choice of propellant is krypton (Kr), the next-lightest noble gas. Kr’s chemical advantages are basically identical to xenon’s, but a couple of things make this trade-off far from straightforward: first, tests with Kr in Hall effect thrusters often demonstrate an efficiency loss of 15-25% (although this may be mitigated slightly by optimizing the thruster design for Kr rather than Xe); and second, the higher ionization energy of Kr means that more power is required to ionize the same amount of propellant (or, with an SPT, a deeper ionization channel is needed, with the associated erosion concerns). Sadly, several studies have shown that the higher specific impulse gained from Kr’s lower atomic mass isn’t sufficient to make up for the other challenges, including losses from Joule heating (which we briefly discussed in the last post, on MPD thrusters), radiation, increased ionization energy requirements, and even geometric beam divergence.
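For a sense of scale, the ideal specific-impulse gain from switching Xe to Kr follows directly from the ion masses: at the same acceleration voltage, exhaust velocity scales as one over the square root of the ion mass. This sketch uses standard atomic masses, not figures from any particular thruster test.

```python
import math

M_XE = 131.293  # xenon standard atomic mass, g/mol
M_KR = 83.798   # krypton standard atomic mass, g/mol

# v_e proportional to sqrt(q * V / m), so at fixed voltage and charge state
# the isp ratio is sqrt(m_Xe / m_Kr):
isp_gain = math.sqrt(M_XE / M_KR)
print(f"ideal isp gain for Kr over Xe ~ {(isp_gain - 1):.0%}")
```

The ideal gain of roughly 25% is of the same magnitude as the 15-25% efficiency losses quoted above, which is why the switch to krypton so rarely pays off in practice.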
This has led some designers to propose a mixture of Xe and Kr propellants, to gain the advantages of lower ionization energy for part of the propellant, as a compromise solution. The downside is that this doesn’t necessarily improve many of the problems of Kr as a propellant, including Joule heating, thermal diffusion into the thruster itself, and other design headaches for an electrostatic thruster. Additionally, some papers report that there is no resonant ionization phenomenon that facilitates the increase of partial krypton utilization efficiency, so the primary advantage remains solely cost and availability of Kr over Xe.
Candidate condensable propellants (densities in g/cm³ are the surviving values from the original table; the remaining columns are standard reference data):

| Propellant | Atomic mass (std.) | 1st ionization energy (kJ/mol) | Melting point (K) | Boiling point (K) | Density (g/cm³) |
|---|---|---|---|---|---|
| Mercury (Hg) | 200.59 | 1007.1 | 234.3 | 629.9 | 13.534 (at STP) |
| Cesium (Cs) | 132.91 | 375.7 | 301.6 | 944 | 1.843 (at MP) |
| Sodium (Na) | 22.99 | 495.8 | 370.9 | 1156 | 0.927 (at MP), 0.968 (solid) |
| Potassium (K) | 39.10 | 418.8 | 336.7 | 1032 | 0.828 (at MP), 0.862 (solid) |
| NaK (eutectic) | n/a | n/a | 260.5 | 1058 | 0.866 (20 C) |
| Iodine (I) | 126.90 | 1008.4 | 386.9 | 457.6 | 4.933 (at STP) |
Early thrusters used cesium and mercury for propellant, and for higher-powered systems this may end up being an option again. As we’ve seen earlier in this post, neither Cs nor Hg is unknown in electrostatic propulsion (another design that we’ll look at a little later is the cesium contact ion thruster), but they’ve fallen out of favor. The primary reason always given is environmental and occupational health concerns: during development of the thrusters, in handling the propellant during construction and launch, and in the immediate environment of the spacecraft. The thrusters have to be built and extensively tested before they’re used on a mission, and all of these experiments are a perfect way to thoroughly contaminate delicate (and expensive) equipment such as thrust stands, vacuum chambers, and sensing apparatus – not to mention the lab and surrounding environment in the case of an accident. Additionally, any accident that exposes workers to Hg or Cs will be expensive and difficult to address, notwithstanding any long-term health effects of chemical exposure for the personnel involved (handling procedures are well established, but one worker not wearing the correct personal protective equipment could be costly, both in terms of personal and programmatic health). Perfect propellant stream neutralization is something that doesn’t actually occur in electrostatic drives (although this has consistently improved over time), leading to a buildup of negative charge on the spacecraft; subsequently, a portion of the positive ions used for propellant circle back around the magnetic fields and impact the spacecraft.
Not only does this reduce the net thrust of the spacecraft, but if the propellant is chemically active (as both Cs and Hg are), it can lead to chemical reactions with spacecraft structural components, sensors, and other systems, accelerating degradation of the spacecraft.
A while back on the Facebook group I asked the members about the use of these propellants, and an interesting discussion developed (primarily between Mikkel Haaheim, my head editor and frequent contributor to this blog, and Ed Pheil, who has extensive experience in nuclear power, including the JIMO mission, and is currently the head of Elysium Industries, developing a molten chloride fast reactor) concerning the pros and cons of using these propellants. Two other options, with their own complications from the engineering side, were also proposed, which we’ll touch on briefly: sodium and potassium both have low ionization energies, and form a low melting temperature eutectic, so they may offer additional options for future electrostatic propellants as well. Three major factors came up in the discussion: environmental and occupational health concerns during testing, propellant cost (which is a large part of what brings us to this discussion in the first place), and tankage considerations.
As far as cost goes, these are all ballpark estimates, and costs for space-qualified supplies are generally higher, but they illustrate the relative expense of each propellant. From an economic point of view, Cs is the least attractive, while Hg, Kr, and Na are all attractive options for bulk propellants.
Tankage in and of itself is a simpler question than the full propellant feed system, but it can offer some insights into the overall challenges of storing and using the various propellants. Xe, our baseline propellant, has a density as a liquid of 2.942 g/cm³, Kr 2.413 g/cm³, and Hg 13.53 g/cm³. All other things aside, this indicates that the overall tankage mass requirements for the same mass of Hg are far lower than those for Xe or Kr. However, additional complications arise when considering tank material differences. For instance, both Xe and Kr require cryogenic cooling (something we discussed briefly in the LEU NTP series, which you can read here [insert LEU NTP 3 link]). While the challenges of Xe and Kr cryogenics are less difficult than those of H2, thanks to the higher atomic mass and lower chemical reactivity, many of the same considerations still apply. Hg, on the other hand, has to be kept in a stainless steel tank (by law); other common containers, such as glass, don’t lend themselves to spacecraft tank construction. However, a stainless steel liner in a carbon composite tank is a lower-mass option.
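The density figures above translate directly into propellant volume, which drives tank size (though not, by itself, tank mass, since cryogenic insulation and material choices matter too). This sketch applies them to the hypothetical 20 t propellant load from the Mars example earlier; the volumes cover the propellant alone, with no allowance for tank walls or ullage.

```python
# Liquid densities quoted in the text, converted from g/cm^3 to kg/m^3
DENSITIES = {"Xe": 2942.0, "Kr": 2413.0, "Hg": 13530.0}

def tank_volume_m3(prop_mass_kg: float, propellant: str) -> float:
    """Volume occupied by the propellant alone (no walls, no ullage)."""
    return prop_mass_kg / DENSITIES[propellant]

for prop in DENSITIES:
    v = tank_volume_m3(20_000.0, prop)  # the 20 t Mars-class load from above
    print(f"{prop}: ~{v:.2f} m^3")
```

Mercury's roughly 4.6-fold density advantage over liquid xenon, plus the lack of any cryogenic requirement, is what makes its tankage so much lighter despite the stainless steel containment requirement.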
The last type of fluid propellant to mention is NaK, a common fast reactor coolant which has been studied extensively. Many of the tankage problems with NaK are similar to those of Cs or Hg – chemical reactivity, though with different particulars – but the research into NaK as a fast reactor coolant has largely addressed the immediate corrosion issues.
The main problem with NaK would be differential ionization causing plating of the higher-ionization-energy metal (Na in this case) onto the anode or propellant channels of the thruster. It may be possible to deal with this, either by shortening the propellant channel (as in a TAL or EDPT), or by ensuring full ionization through excess charge at the anode and cathode. The possibility of using NaK in an SPT thruster was studied in the Soviet Union, but unfortunately I cannot find the papers associated with those studies. NaK remains an interesting option for future thrusters, however.
Thrusters using solid propellants are generally called condensable propellant thrusters, and these designs have been studied for a number of decades. Most use a resistive heater to melt the propellant, which is then vaporized just before entering the anode. This was first demonstrated with the cesium contact gridded ion thrusters used in the SERT program. There (as mentioned earlier), a metal foam was used as the storage medium, kept warm enough that the cesium remained liquid. By varying the pore size, a metal wick was made which controlled the flow of propellant from the reservoir to the ionization head. This results in greater overall propellant tankage mass, but on the other hand the lack of moving parts, and the ability to ensure even heating across the propellant volume, makes it an attractive option in some cases.
A more recent design that we also discussed (the VHITAL) uses bismuth propellant for a TAL thruster, a NASA update of a Soviet TsNIIMash design from the 1970s (which was shelved due to the lack of high-powered space power systems at the time). This design uses a reservoir of liquid bismuth, which is resistively heated to above the melting temperature. An argon pressurization system is used to force the liquid bismuth through an outlet, where it’s then electromagnetically pumped into a carbon vaporization plug. This then discharges into the anode (which in the latest iteration is also resistively heated), where the Hall current then ionizes the propellant. It may be possible with this design to use multiple reservoirs to reduce the power demand for the propellant feed system; however, this would also lead to greater tankage mass requirements, so it will largely depend on the particulars of the system whether the increase in mass is worth the power savings of using a more modular system. This propellant system was successfully tested in 2007, and could be adapted to other designs as well.
Other propellants have been proposed as well, including magnesium, iodine, and cadmium. Each has its advantages and disadvantages in tankage, chemical reactivity (which limits thruster materials choices), and other factors, but all remain possible for future thruster designs.
For the foreseeable future, most designs will continue to use xenon, with argon being the next most popular choice, but as the amount of propellant needed increases with the development of nuclear electric propulsion, it’s possible that these other propellant options will become more prominent as tankage mass, propellant cost, and other considerations become more significant.
Electrospray thrusters use electrically charged liquids as a propellant. They fall into three main categories: colloid thrusters, which accelerate charged droplets dissolved in a solvent such as glycerol or formamide; field emission electric propulsion (FEEP) thrusters, which use liquid metals to produce positively charged metal ions; and, finally, ionic liquid ion source (ILIS) thrusters, which use room temperature molten salts to produce a beam of salt ions.
All types of electrospray thruster exploit a phenomenon known as a Taylor cone, which occurs in an electrically charged fluid exposed to an electric field. If the field is strong enough, the tip of the cone is extruded to the point that it breaks, and a spray of droplets is emitted from the liquid. This is now common in many industrial applications, and advances in those fields have made the electrospray thruster more attractive, as has the increasing focus on compact propulsion systems. The amount of thrust produced, and the thrust density, are directly proportional to the density of emitters in a given area, and recent developments in nanomaterials fabrication have made it possible to increase the thrust density of these designs significantly. However, the main lifetime limitation of this type of thruster is emitter wear, which depends on both mass flow rates and any chemical interactions between the emitters and the propellant.
The vast majority of these systems focus on cubesat propulsion, but one company, Accion Systems, has developed a tileable system which could offer high-powered operation through the use of dozens of thrusters arrayed in a grid. Their largest thruster (which measures 35 mm by 35 mm by 16 mm, including propellant) produces a total impulse of 200,000 N·s and a thrust of 10 mN at an isp of 1500 s. While their primary focus is on cubesats, the CEO, Natalya Bailey, has mentioned that it would be possible to use many of their TILE drive systems in parallel for high-powered missions.
One of the biggest power demands of almost any electrostatic engine is the ionization cost of the propellant. Depending on the mass flow and power, different systems are used to ionize the propellant, including electron beams, RF ionization, cyclotron resonance, and the Hall effect. What if we could get rid of that power cost, and instead use all of the energy to accelerate the propellant? Especially for small spacecraft this is very attractive, and it may be possible to scale it up significantly as well (to the limits of the electric charge that can be placed on the emitters themselves). Some fluids are ionic liquids: salts, composed entirely of ions, that are liquid at modest temperatures, reasonably chemically stable, and easily storable. By replacing an uncharged propellant with one that already carries charge, the need for on-board ionization equipment – and its mass, volume, and power – can be eliminated. Not all electrospray thrusters use an ionic liquid, but the ones that do offer considerable advantages in energy efficiency, and possibly greater overall thruster efficiency as well. I have yet to see a design for a gridded ion or Hall effect thruster that uses these types of propellants, but it may be possible.
With that, we come to the end of our overview of electric thrusters. While there are some types of thruster that we did not discuss, they are unlikely to be usable in high-powered systems such as would be found on an NEP spacecraft. When I began this series of blog posts, I knew that electric propulsion was a very broad topic, but the learning process during the writing of these three posts has been far more intense, and broad, than I was expecting. Electric propulsion has never been my strong suit, so I’ve been even more careful than usual to stick to the resources available while writing these posts, and I’ve had a lot of help from some very talented people to get to this point.
I was initially planning on writing a post about the power conditioning units that are used to prepare the power provided by the power supply to these thrusters, but the more I researched, the less these systems made sense to me – something that I’ve been assured isn’t uncommon – so I’m going to skip that for now.
Instead, the next post is going to look at the power conversion systems that nuclear electric spacecraft can use. Due to the unique combination of available temperature from a nuclear reactor, the high power levels available, and the unique properties of in-space propulsion, there are many options available that aren’t generally considered for terrestrial power plants, and many designs that are used by terrestrial plants aren’t available due to mass or volume requirements. I’ve already started writing the post, but if there’s anything writing on NEP has taught me, it’s that these posts take longer than I expect, so I’m not going to give a timeline on when that will be available – hopefully in the next 2-3 weeks, though.
After that, we’ll look more in depth at thermal management and heat rejection systems for a wide range of temperatures, how they work, and the fundamental limitations that each type has. After another look at the core of an NEP spacecraft’s reactor, we will then look at combining electric and thermal propulsion in a post on bimodal NTRs, before moving on to our next blog post series (probably on pulse propulsion, but we may return to NTRs briefly to look at liquid core NTRs and the LARS proposal).
I hope you enjoyed the post. Leave a comment below with any comments, questions, or corrections, and don’t forget to check out our Facebook group, where I post work-in-progress visuals, papers I come across during research, and updates on the blog (and if you do, don’t feel shy about posting yourself on astronuclear propulsion designs and news!).
This week marks the one year anniversary of Beyond NERVA’s start as a blog. While I’ve been fascinated by nuclear powered spacecraft since I was a child, September 23, 2017 was the start of my outreach efforts in this incredibly diverse and fascinating area of research.
In the past year, I have had the extraordinary opportunity to speak with both amateurs and experts from all over the world about the use of nuclear power in space, and I can’t express how appreciative I am for all the insights and knowledge that they’ve willingly and eagerly shared with me. While I may be the (still mostly anonymous, since this isn’t about me AT ALL) author of the blog, almost no post I’ve ever written has been solely my own work. Most professionals are very wary of putting their name on anything as speculative as this (fairly hard-nosed, in my opinion) blog, and I respect their privacy and professional reputation, but numerous Principal Investigators and holders of various advanced degrees in advanced propulsion have been instrumental in making this blog what it is, and I am incredibly grateful for their time, not only in helping me, but in developing the skillset to help humanity expand beyond our cradle, as Dr. Sagan so eloquently put it. My knowledge of this subject has blossomed thanks to their invaluable help.
We’ve looked at Project ROVER, and the testing that was done to enable humans to go to Mars in the 1980s, which was sadly cancelled with the massive defunding of NASA even as we were preparing to step foot on the Moon for the first time. Not only did we look at the basics of the designs used, but we delved into the testing that it takes to make a paper reactor into a real one, and the challenges that are faced when testing in real life.
In the last couple months, we’ve begun looking at nuclear electric power supplies and propulsion in space. To be honest, I was hoping to be able to post the first of two posts on electric thrusters in the last few days, but this is a huge area to cover, with an incredible number of overlapping parts, and the post is taking longer than I hoped it would. I’m still hopeful to have it out either late this week or early next week, with the follow-on post shortly after, but… we’ll see how the writing and editing process goes.
In between, the website itself has started to develop. Not only have pages been written for the major subjects (except one: Kilopower remains the only one I haven’t reworked into a proper page – ironic, since that’s my most popular set of blog posts, but there’s a lot there to unpack, rewrite, and add to), but other subjects have been expanded far beyond what’s available in this blog.
Many concepts that I haven’t had a chance to cover, or that simply wouldn’t fit in the blog posts, are covered on various pages of the website itself. For instance, the Nuclear Thermal Reactor page covers many designs that we hopefully will discuss over the next year, but haven’t had a chance to yet, such as liquid, vapor, and plasma fueled nuclear thermal rockets, bimodal thermal/electric systems, and others.
Fuel elements, a subject that we’ve covered in some depth on the blog, also have their own page, with a broader scope than what’s been covered here. This is another area that I’m planning to expand as time allows, including new information on both CERMET and carbide fuel elements.
Finally, while we haven’t discussed nuclear pulse propulsion on the blog, there is a page covering it now, with some links to relevant articles. Next year will have many blog posts on this subject, and I hope it won’t take the full year to get to them (my hope is to start the new year off with a bang, as it were).
What’s Been Going On Behind the Scenes
A lot has been going on that doesn’t make its way to the website or blog, mostly because it’s in the early stages of development. For those who are in the Facebook group, much of this won’t be new to you.
I’ve been doing a ton of research on various subjects that will be coming up in the future, and some that will only really be discussed when the YouTube channel gets started (when that will happen is still very much to be determined). This includes a lot of research into Project Rover, and the history of the various programs that weren’t discussed in the blog post for brevity’s sake, but will be a major part of the YouTube channel. My work with various other projects and organizations, especially Science and Futurism with Isaac Arthur, continues as well, bringing new ideas to various parts of this website.
I’ve also been honing my skills in Blender, to produce graphics for both the blog and the upcoming YouTube channel. This has been a very time consuming process, but I’m beginning to see much better results, and hope to be able to produce high-quality, original digital images and animations for the blog starting in the very near future.
As a glimpse into these efforts, here’s a selection of the work that I’ve been doing on the graphics side of things. Some models are more complete than others, and some animations are of better quality than others. Most are, in fact, test renders and animations, due to my lack of computer processing power (I’ve had offers of rendering capability from others, but as of yet have very few projects that I consider ready enough to ask for that help).
One of the first projects was to develop a “baseline model” for a nuclear thermal rocket, in order to demonstrate the various strategies and techniques used to mitigate radiation exposure for the crew, among other things. This is that model (although more work remains to be done, especially on materials). I based it on one half of Crouch’s “Coast Guard” cislunar craft from “Nuclear Power And Propulsion.” Here’s a short test render of the design:
Another early project is Boeing’s Integrated Manned Interplanetary Spacecraft concept, which was NASA’s preferred spacecraft to mount the NERVA engines to. Depending on mission profile, desired payload, and other factors, anything from five to ten engines would be used and discarded during the staging process. This will be another fairly frequently used model on the YouTube channel. Work continues on this model; this is just an early, low-poly render of the spacecraft:
As with the previous model, I also did a short animation of this one leaving Earth orbit about six months ago, although it does show that I’ve got a long way to go in terms of camera motion and other techniques.
During our discussion of Fission Power Systems, I was looking for a good illustration of a disc droplet radiator, and was only able to find a pen sketch from a NASA proposal, so I decided to do my own model.
Finally, while we haven’t covered Orion in this blog, it’s an iconic spacecraft, with a very complex and visible firing sequence. Because of this, I’ve begun to work on a number of models. These are the variants of the 10 m Mars mission architectures. Each uses the same base model, for which I’m currently finishing the animation sequence.
This sequence isn’t complete; there are a number of things still being done to make it more accurate, but the basic timing, as well as the compression of the two shock absorbers, is correct. The most glaringly obvious error is that the oscillations of the structures surrounding the toroid shock absorbers are reversed: the pusher plate should smoothly go in and out, while the secondary plate (with the pistons attached) should be the one oscillating during the firing sequence. I’m rejiggering the armature to make this happen, and after that only a small number of tweaks will be needed to mount it to any of the 10 m models that I’ve already got at least partially modeled.
There are more models in various states of completion that I’m continuing to work on, and the process is definitely going to be improving over time. However, I figured that it would be nice for those that just follow my blog to be able to see that progress is also being made on the visuals side.
Where We’re Going
So what does the next year have in store for Beyond NERVA?
First, we’ll finish covering nuclear electric propulsion. The first post on the drive systems is close to complete, and the second is about half written, so hopefully there won’t be as big a gap between them as there has been between the last blog post and the one coming up. We will then look more deeply at power conversion systems in their own post, followed by heat rejection systems. Finally, we’ll look at designing NEP spacecraft, as well as the different types of missions that they offer for both crewed and uncrewed missions.
After we finish with NEP, we’ll return to thermal rockets… ish. Instead of looking at purely thermal systems, though, we’ll be examining bimodal systems, starting with thermal/electric bimodal rockets. There have been many designs proposed over the years, but outside of technical papers they’re often depicted as “bolt an ion drive on the side, and you’re good”; we’ll look at the actual challenges of designing these spacecraft. We’ll also look at thermal/chemical bimodal systems, with a look at Dr. Stanley Borowski’s LANTR design. We’ll then examine a trimodal system using thermal, chemical, and electric propulsion with a single reactor: the US Navy-funded TRITON concept. Finally, we’ll look at the implications for mission planning with these types of spacecraft, and with other concepts that have variable thrust and specific impulse, such as VASIMR.
Then, it’s off to looking at nuclear pulse propulsion, starting with one of the most iconic nuclear powered ships ever, Project Orion. We’ll examine the later Marshall Spaceflight Center study of the concept, as well as more modern designs such as Mini-Mag Orion, and how they address some of the problems with the original design. Finally, we’ll look at fusion for the first time on this blog, with Project Medusa and Project Longshot, and even examine antimatter-catalyzed microfission and microfusion concepts.
We’ll look at designs that don’t really fit in anywhere in the grand scheme of common reactor types, starting with the fission fragment rocket engine. Two types have been designed, and one has undergone a significant amount of research and optimization in recent years. After this, we’ll look at fission sails, both with spontaneous fission (sometimes called Forward sails) as well as photofission concepts.
After we complete our grand tour of general nuclear rocket types, we’ll return to nuclear thermal rockets once again, to look at non-solid fuel elements. We’ll walk our way up the energy scale of the phases of matter, beginning with molten and liquid fueled designs. Vapor, gas, and plasma cores will follow, including a deep dive into the ever-popular open cycle gas core (which I prefer to call a plasma core) NTR. Finally, we’ll look at nuclear electric concepts that don’t use solid fuel elements, and the advantages and disadvantages they offer spacecraft designers.
My hope is to cover, at least in brief, all of the different fission-powered types of nuclear rocket in the next year, so that in one year’s time I’m able to begin covering fusion concepts, as well as more exotic ideas, such as antimatter, beamed propulsion of both light and smart matter, and possibly even black hole propulsion.
Once covering the fission concepts is concluded, this blog will unfortunately slow down, because focus will shift onto the long-delayed YouTube channel, which will not only cover a lot of the basics in what is hopefully a more understandable way, but will also delve more deeply into the history of nuclear spacecraft and the technology behind them. I may bring on additional writers that I know and trust, perhaps even in the next year (but the two I have my eye on are both neck-deep in their studies at the moment), but that’s a decision for the future.
Another concept that I’ll be beginning to cover in the next year, although on a sporadic basis, is rotating habitats, and their variants, the gravity-enhancing toroid habitats for low-gravity planets to make them more habitable. We’ll look at the basic concept, various proposed designs, and the limitations, both in engineering and in materials, that the different concepts have. We’ll look at the life support systems, including agriculture, volatiles recycling, and the mental health support systems as well, especially when it comes to not feeling like you’re stuck in a “tin can” all the time. Finally, we’ll look at the ecological concepts and models that have been developed over the last 50 years that could enable us to build an artificial ecosystem, as well as the limitations on our understanding of these systems. This is a subject that has been very sparsely covered, but one that I’m quite interested in, so expect to see posts from time to time on this concept as well.
The Adventure Continues!
This first year has definitely been a wonderful experience for me, and I hope that I’ve been able to provide a good resource for those that want to learn more about in-space nuclear power and propulsion. I’ve been very lucky to get assistance from a number of people in the astronuclear community, and I’ve also gained a couple of talented editors and technical experts that have been invaluable with their time and energy.
Thank you everyone for your encouragement and support in this first year. I’ll do my best to make the next year even better than this one has been!
This is a bit of an unusual post from me, but I’m running into IRL realities that need to be addressed that directly impact Beyond NERVA.
This blog and page are the most important thing to me, and something that I want to continue to expand the way that I have for about the past year. I genuinely believe that I’ve found a niche that no one else has attempted to fill. Winchell Chung (otherwise known as Nyrath the Nearly Great [in his words; I tend to use Legendary when I reference him]) has built an incredible resource for all astronuclear geeks, but he himself says that he’s focused on helping hard SF writers (he also started his page in the late 1990s, so he’s got about two decades of development on me).
Beyond NERVA, on the other hand, looks at the nitty-gritty of making the potential power of astronuclear engineering into a reality. This is why I’m focusing on things like test stands, experiments that have been conducted, and making the various technical publications MORE accessible. I tend to put out something closer to a serial white paper than a traditional blog (typically, a blog post here, without visuals, runs about 15 pages, with about that many direct references), and this takes a lot of time and effort on my part.
This page is a labor of love, but it also consumes (on average, whenever my job at the moment allows) between 30 and 60 hours a week of my time… and real life sucks. I have bills, I have a spouse, and I have to eat, have the lights on, have internet… and most especially, the free wordpress account is starting to creak at the seams.
I’m also REALLY interested in starting up the YouTube channel, but again, this is a huge investment of time to get the channel built up. In about 200 hours of searching, I’ve found that the vast majority of the visuals that I need simply aren’t available, and while I’ve been working to develop my knowledge of Blender, audio recording and editing, and video editing, it will still take an additional several hundred hours before I can start putting out video content.
So… I’m looking at monetizing my page. This isn’t really something that I WANT to do, but it’s something that I NEED to do. With WordPress (my current host), this is $25.00 a month, charged yearly, which also gives me the chance to have a MUCH better, and far more accessible, website design for all the work that I do that ISN’T the blog (about 50% of it by time, BTW).
This is FAR outside my current monetary capabilities. $300 isn’t much to some of you, but to others it’s a huge chunk of money, and I fall into the second category. Even that is the tip of the iceberg, what with my time investment and other considerations for building a webpage, much less a YT channel.
So, I’m looking at options for crowdfunding, and for rewards for said crowdfunding. I want to keep the content free, and not plagued by spam, but don’t object to advertisers or sponsors (as long as I can continue to publish as I see fit… any sponsored content will be clearly labeled as such, and any sponsored blog posts will still have serious questions and examinations of the systems proposed in the post).
The obvious choice is Patreon, but what the rewards are, I don’t really know. I’ve barely scratched the surface of the basics of astronuclear engineering (over 300 pages in… and still trying to stop talking about solid core NTRs!) What are your ideas for what good rewards would be?
Kickstarter has many advantages, for getting the webpage up and running, but again, the rewards are something that I struggle with.
What is worth money to you? What is worth a donation of a particular amount that I can continue to pursue this project? How can I incentivize you to help me in this project?
Please, comment, or PM me, or send me an email. I want to continue to be able to produce this content, and I think that we’re building something important here. That $300 initial cost (and about 40 hours of my time initially, with additional time dependent on moderation time) will allow me to build a forum on the website, do embedded features that simply aren’t available to me right now, upgrade the appearance and accessibility of the webpage… and anything more will allow me to eat, and keep the internet going, and other real-life stuff.
Today, the Beyond NERVA Facebook group hit 100 members! There’s a great group of professionals and amateurs, from all levels of education and experience. It’s a great place to exchange ideas, papers, and information, and I post late drafts of each of the blog posts on the group for review and correction from the group members, generally about a week or two before they go live on here.
If you’re a Facebook member, please join in the conversation! It’s already a great group, and each person that joins continues to make it better!
Hello, and welcome back to Beyond NERVA! Today, we return to the KRUSTY test, and the Kilopower system, being presented today by NASA, the Department of Energy, and the Nevada National Security Site.
As we’ve seen before, KRUSTY is the testbed for the Kilopower reactor, developed by Los Alamos as a small, simple nuclear reactor meant for space missions (although it has terrestrial uses as well, and two companies, Oklo Power and Westinghouse, have since proposed similar but larger architectures). After an initial proof-of-concept fission test (DUFF), KRUSTY was designed and built by NASA (at the Glenn Research Center in Cleveland) and the Department of Energy (Y12 in Tennessee fabricated the core, and Los Alamos was the lead design site), and just last month completed fission-powered testing at the Nevada National Security Site (NNSS).
As mentioned in my previous post on KRUSTY (which you can read here), a number of nuclear tests have been conducted over the last 6 months, culminating in the full-power critical test on March 20th and 21st. The initial tests completed at the NNSS included component criticality tests, to verify the neutronic characteristics of various components (fuel, neutron reflector, shield, and startup rod); cold critical testing, which added the rest of the components of KRUSTY (including the heat pipes, clamps, insulation, vacuum vessel, and more); and warm critical testing, where heat was generated through fission in the reactor core, but at low power, to verify reactor dynamics.
The warm critical test demonstrated that the reactor dynamics were well within acceptable operational margins, and gave the all-clear for full power testing. This test produced fission power for just over 6 hours, waiting for the oscillations of the reactor startup to damp out and the system to stabilize in normal operation. One interesting note on the warm critical test is that because so little power was being drawn (100 W), the oscillations of fission power and core temperature after startup were very slow, with a period of almost 75 minutes. This is to be expected at these low power levels, and isn’t typical of a full power startup (as we’ll see in the full power test).
Full Power Test
This is the thing that nuclear engineers have been looking forward to for decades: the first fission-powered, full-system test of a US space reactor since the SNAP program and SNAP-10A. The wait has been long, but it couldn’t have ended with better test results!
On March 20th, KRUSTY’s core was lowered into the neutron reflector on the Comet test stand once again, beginning the 28 hour full power test. The series of reactor dynamics and simulated equipment failure tests conducted was the same as the electrical heating profile used with the depleted uranium core at NASA’s Glenn Research Center, and the results being released today very closely match both the modeling and the results of that earlier (non-nuclear) test.
First, let’s look at the results of the electrical test and fission test side by side, and look at the individual parts of the test:
KRUSTY achieved full fission and Stirling power in the first hour, and the reactor temperature increased to about 850 C. Since the test profile was designed for 800 C (the slightly higher temperatures weren’t a significant issue, but it’s best to be as accurate as possible), the reactivity of the core was adjusted after about 6 hours to meet the target temperature over the course of the next hour.
Eight hours in, they started playing with things: First, the power drawn from the Stirlings was reduced to 60%, resulting in a small (less than 25 C) fluctuation in core temperature and about 750 W fission power reduction in the core. After an hour, the Stirlings were returned to full power, and then an hour later the Stirling simulators were cranked up to 200% power. This resulted in a large (~1200 W) increase in fission power being produced by the core. An hour later, the reactor was once again returned to nominal full power operating conditions.
Now they started (in simulation) breaking the heat removal and power conversion systems: first, they simulated a single Stirling failure, resulting in a dip in fission power (and, if I’m reading the graph right, a slight increase in the heat pipe temperature, which I suspect is the blue line on the graph – but the temperature points aren’t labelled, so I can’t be sure). After another hour, they removed another Stirling from operation, with similar results. In both cases, the reactor temperature only slightly varied from its nominal 800 C.
Another hour of nominal operation, and the Stirlings were once again cranked up to 200% power, with effectively identical results to the first time this was done about 3 hours before. After another return to nominal operation, a series of tests to simulate control rod adjustment were done, including what looks to be (simulated) almost full removal of the control rod 18 hours into the test (this would actually be full insertion of the core into the reflector), resulting in a huge (2500 W) spike in power in the reactor core. Once again, the reactor temperature remained well within the acceptable bounds of the test, despite the rather severe adjustments being made to the amount of reactivity in the core.
With another return to normal operation, they killed most of the heat removal, resulting in a 1500 W drop in fission power – a wonderful demonstration of the strong negative thermal reactivity coefficient that makes Kilopower such an appealing design from a reactor physics point of view. Two and a half hours later the heat removal was eliminated as much as was possible. This resulted in a further, but smaller, drop in fission power being produced. An hour later, two of the Stirlings were restarted, and after the power transients dampened down, the last six were restarted as well, with corresponding increases in fission power.
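The load-following behavior described above can be sketched with a toy lumped model: fission power falls as the core heats up (the negative temperature coefficient), so when heat removal is reduced, the core warms slightly and throttles its own power until production again matches the load, with no control action required. All of the numbers below are illustrative round values, not KRUSTY data.

```python
T_REF = 800.0        # reference core temperature, C
P_REF = 4000.0       # fission power at the reference temperature, W
POWER_COEFF = -40.0  # change in fission power per degree C above T_REF, W/C (illustrative)
HEAT_CAP = 5000.0    # lumped core heat capacity, J/C

def settle(heat_removal_w, steps=50000, dt=0.1):
    """March the core temperature forward until fission power matches the load."""
    temp = T_REF
    for _ in range(steps):
        # Negative feedback: a hotter core produces less fission power
        power = P_REF + POWER_COEFF * (temp - T_REF)
        temp += (power - heat_removal_w) * dt / HEAT_CAP
    return temp, power

t_full, p_full = settle(4000.0)  # matched load: holds ~800 C and ~4000 W
t_low, p_low = settle(2500.0)    # reduced load: core warms, power self-throttles to ~2500 W
print(round(t_low, 1), round(p_low, 1))
```

The equilibrium falls out of the algebra: the core warms until the power deficit closes, so cutting the load by 1500 W here raises the temperature by only 1500/40 = 37.5 C. This passive self-regulation is exactly why the failure tests were so undramatic.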
27 hours after the beginning of the test, all heat removal was once again killed for the core, returning the fission power to the ~1500 W that were produced in the earlier simulation of this situation. An hour later, the reactor was scrammed (all reactivity removed), and the reactor was left to cool down.
Based on the test profile that was designed by the NNSS, KRUSTY was then set aside for the shorter-lived (and therefore more dangerous) fission products to decay.
This highly successful test shows that KRUSTY performed exactly as expected, and that Kilopower is ready for the next step in its development: the construction of a flight article for the first new US astronuclear reactor design in close to 50 years. Considering all the design and testing work for this system has cost less than $20 million, this is nothing short of an epic achievement on the part of Drs. Patrick McClure, David Poston, and the rest of the LANL space nuclear reactor design team, as well as Marc Gibson at NASA’s Glenn Research Center, and everyone else involved in the program.
Further Development of Kilopower
KRUSTY is a major milestone for US in-space reactor development, but Kilopower has a lot more to offer than just the small 1 kilowatt (electric, kWe) reactor that KRUSTY proved the design of.
The first thing the Kilopower program offers is more power. As a system architecture, Kilopower has four different sizes of reactor, ranging from 1 to 40 kWe, for everything from small, electrically propelled deep space probes to in situ resource utilization and power supply for manned missions, both on planetary and orbital missions.
By moving or adding additional heat pipes, upgrading the power conversion system to match, and increasing the reactor core size, much more power can be drawn out of the potential core configurations of this flexible design architecture. Of course, changing the pattern of heat removal affects the thermal gradients (hot and cold spots) of the fuel element. In this case, the entire core is one single fuel element (known as a monolithic core), an unusual arrangement for a nuclear reactor, so the behavior of this type of reactor isn’t as well studied and understood as the more common type of reactor with many separate fuel elements.
However, Patrick McClure, the head of the Kilopower program at Los Alamos National Laboratory, is confident that any additional testing that is needed to verify the thermodynamic behavior of these larger and more complex designs can be done through electrical heating, similar to what was done at NASA Glenn with the depleted uranium dummy core for the electrical heating test (see the previous KRUSTY post for details on that test), without further fission-powered testing. This means that further development of the larger reactors can be done at only a modest increase in program cost.
Another thermal concern that is common in reactors is known as edge heating, where the edges of the reactor core (or individual fuel elements) are hotter than the center. This is often (including for Kilopower) due to the moderated neutrons being reflected back into the reactor core.
Depending on what materials the fuel elements and core structure are made of, this can become a limiting factor for heat rejection (and therefore power extraction) in a nuclear reactor. In the case of KRUSTY and the smallest Kilopower reactor, the heat pipes are placed along the edge of the core, where the problem is worst, but all other designs have the heat pipes internal to the reactor core. Fortunately, Kilopower’s uranium-molybdenum alloy fuel element (U7Mo) has both high thermal conductivity and a high temperature limit, so this isn’t a major concern in this group of reactor cores; however, changes in the fuel element type (for instance, using oxide fuel as is proposed for the Megapower derivative), or the addition of a thermally limited neutron moderator, can make this a much larger issue.
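To get a feel for why fuel conductivity matters so much here, the textbook result for a uniformly heated solid cylinder cooled at its surface gives a centerline-to-surface temperature difference of q‴R²/4k (the geometry is inverted from edge heating, but the scaling with conductivity is the point: double the conductivity, halve the internal gradient). A rough sketch with illustrative numbers, not the actual Kilopower core parameters:

```python
import math

def radial_delta_t(power_w, radius_m, length_m, conductivity_w_mk):
    """Centerline-to-surface temperature rise in a uniformly heated,
    surface-cooled solid cylinder: dT = q''' * R^2 / (4 k)."""
    volume = math.pi * radius_m**2 * length_m
    q_vol = power_w / volume  # volumetric heat generation, W/m^3
    return q_vol * radius_m**2 / (4.0 * conductivity_w_mk)

# Illustrative only: ~4 kW thermal in a 5.5 cm diameter, 25 cm long core,
# with a metal-like conductivity of 25 W/m-K
print(round(radial_delta_t(4000.0, 0.0275, 0.25, 25.0), 1))  # ~50.9 C
```

Swap in an oxide fuel with a conductivity several times lower and the same geometry develops an internal gradient of hundreds of degrees, which is why a change of fuel type can turn thermal gradients from a footnote into a design driver.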
Kilopower and Low Enriched Uranium
As we’ve discussed in the low enriched uranium nuclear thermal propulsion (LEU NTP) posts, the US government has endeavored since 2008 to eliminate the use of highly enriched uranium in all civilian, and many military, reactors. Both KRUSTY and the baseline Kilopower reactor use highly enriched uranium (HEU), but as McClure and Poston point out in a paper from November of last year (available here), the core can be redesigned to use low enriched uranium (LEU) – but there’s a price. Not only does the reactor become significantly larger and more massive, but there are additional thermal limitations on the core as well. These limitations would make a failed heat pipe (a challenge that KRUSTY handled brilliantly) a much more significant problem. As McClure pointed out in an email: “The thermal issues can be dealt with (we have designs) but it is not nearly as elegant as the HEU designs.” The plan for the smaller reactors in the Kilopower group is to continue using HEU (and therefore the fast neutron spectrum) for simplicity’s sake. The larger reactor designs (like the ones that would be used to manufacture rocket fuel on Mars) may end up utilizing a moderator in order to soften the neutron spectrum. Apparently, they’ve been looking at a metal hydride moderator (yttrium hydride, YH) for the larger designs, but hydrogen leakage is an ever-present concern (as we saw during Project Rover, and as it is a major driver of the propellant tank design for the LEU NTP stage – that’s exactly what I’m writing about on the NTP stage now!), because it leads to a loss of moderation, and therefore reactivity. In addition, the moderator introduces a new thermal limitation to the core, and the thermal gradients within the core will be different as well.
These issues will require additional research and testing, probably including fission powered testing along the lines of the successful test announced today, although possibly more extensive (to test hydrogen migration over longer time periods at different power levels). Fortunately, since the design would use LEU, the testing could be done at a non-DOE facility, significantly reducing the cost and regulatory hurdles of the test (a key point in the LEU NTP program as well).
Other LEU options exist as well, and were examined in that paper.
The first simply expands both the core and the reflector of the current U7Mo fueled reactor, the second replaces the molybdenum with natural uranium, and the third uses uranium zirconium hydride (UZrH) as the fuel element matrix. Each has a different impact on the mass of not only the reactor core and reflector, but also the radiation shield and other components.
There are some minor differences between surface and space reactor designs. Here are the cutaways and masses of the equivalent 10 kWe Mars surface systems:
It’s unclear right now why the YH moderated core is the current frontrunner for a LEU design at Los Alamos; however, I hope to find out, and will update this post when that information is available.
Private Companies and Kilopower
This leads directly into the question of private companies, rather than the DOE, continuing the development and deployment of Kilopower-derived reactors for space missions. There has been some speculation about SpaceX (or some other private company) using Kilopower for in situ resource utilization (ISRU) techniques on Mars or some other extraterrestrial body.
Due to the restrictions on HEU, any private company looking to develop and use this technology basically has to use LEU. It may be possible for a private company to use HEU (BWXT does this for naval reactor fuel elements, for instance), but such a company would not have full control of the design of the reactor (for national security reasons), and any portion of the reactor build involving HEU would have to be done at a DOE facility, increasing the cost and lengthening the development timetable due to the limited resources of the DOE.
An additional concern, for a long-term crewed base, would be non-proliferation. HEU is considered special nuclear material, something that the international nuclear community watches closely. However, even at these higher enrichment levels, the enrichment needed to go from HEU to weapons-grade uranium is, in fact, the hardest (and most equipment-intensive) step in the enrichment process, so this is more of a regulatory straw man than a legitimate concern.
To my knowledge, though, there aren’t any private companies looking to license this technology right now, and for SpaceX in particular, other than a couple of tweets, there hasn’t been any interest shown by the company in nuclear technology of any sort.
The Future of Kilopower at NASA
Currently, NASA and the DOE are examining possibilities for a technology demonstration mission, which is the first step toward widespread deployment of this system. According to McClure, this process is still in the early phases of mission definition. One possibility is a lunar mission (either on the surface or, possibly but far less likely, with the LOP-G, the international lunar space station formerly known as the Deep Space Gateway). However, it is still unclear what the future of the upcoming lunar mission proposals is, and NASA is still waiting for a private company to develop a lunar lander that would be able to complete the missions that NASA is interested in.
At the end of the KRUSTY post, I looked at many of the possible initial missions that could utilize Kilopower. For those that did not see that post, I’ll paste it here. Unfortunately, I have not been able to expand on the mission profiles, and I’m sure there have been changes in the priorities and likelihood of any particular mission coming to fruition, but this collection still gives a good idea of the diverse missions that would benefit from this truly game-changing technology.
Most of these missions did not incorporate a nuclear reactor as part of their power supply options, so the mission often changes from what was originally proposed to account for the reactor. In fact, they were all powered by multiple RTGs (Cassini, for comparison, carried three GPHS-RTGs), which don’t scale well as a general rule. Even if a mission had planned for a reactor, the specific data about this reactor firms up questions that were left open in the original design study.
Titan Saturn System Mission (TSSM)
This was a design from the 2010 decadal survey, re-examined by the Collaborative Modeling for Parametric Assessment for Space Systems (COMPASS) team in 2014. Originally designed with a 500 W ASRG, a 1 kWe Kilopower reactor was installed instead in the 2014 study. This is a good example of the tradeoffs that are considered when looking at different power supplies: there’s less mass and a shorter trip time for the original, RTG-based electric propulsion spacecraft, but the fission power supply (the reactor) allows for more power for instruments and communications, allowing for real-time, continuous communications at a higher bandwidth while allowing higher-resolution imaging due to the increased power available.
As with the following concepts, this was a mission that was briefly looked at as an option for a mission to use the Kilopower reactor, not a mission designed with the Kilopower reactor in mind from the outset. The short development time of the reactor (I never thought I’d write those words…), combined with the newness of the capability, caught NASA a bit flat-footed in the mission planning area, so not all the implications of this change in power supply have been analyzed.
These low-power missions are where any new in-space power plant will be tested, to ensure a TRL high enough for crewed missions. Because of this, looking at these nuclear-powered probes is the best way to see what could be coming down the pipeline in the near future, and I’m going to be adding mission pages to the website over time, with this being the first.
Kuiper Belt Object Orbiter (KBOO)

A close cousin to the Chiron Observer, the KBOO was originally an RPS-powered mission which used an incredible 9 ASRGs, with a total power output of a little over 4 kWe, to examine an as-yet undetermined target in the Kuiper Belt. Having access to nuclear power is a requirement that far into the solar system, and with Kilopower not only is the mission not power-constrained, but it is able to increase the amount of bandwidth available for data, and the power will allow for radar surveys of the objects that KBOO will fly by.
Jupiter Europa Orbiter (JEO)

A predecessor to the Europa Clipper, the JEO was originally designed with 5 MMRTGs (the equivalent of 1 ASRG, 500 We). However, the design could have double the available power, and much higher data return rates and better data collection capabilities, if a 1 kWe reactor were used. This would increase the power plant mass (260 kg for the MMRTGs) by an additional 360 kg, but it would also eliminate the need for Pu-238, which remains very difficult to get a hold of.
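For a rough sense of that trade, here is a quick specific-power comparison using only the figures quoted above; this is a back-of-envelope sketch, not a number from the design study itself:

```python
# Specific-power comparison for the JEO power-supply trade, using the figures
# quoted in the text: ~500 We from five MMRTGs at 260 kg, versus 1 kWe from a
# Kilopower unit massing an additional 360 kg over the RTG option.
rtg_power_we, rtg_mass_kg = 500.0, 260.0
reactor_power_we = 1000.0
reactor_mass_kg = 260.0 + 360.0  # the text gives the reactor as +360 kg

rtg_specific_power = rtg_power_we / rtg_mass_kg              # ~1.9 We/kg
reactor_specific_power = reactor_power_we / reactor_mass_kg  # ~1.6 We/kg
```

The reactor’s specific power comes out slightly worse, but it doubles the total power available and sidesteps the Pu-238 supply problem entirely.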
The Europa Clipper is based on a more economical version of this mission, the Europa Multiple Flyby Mission, and has some of the same hardware.
While this is certainly smaller than the power requirements for many crewed surface missions, Kilopower has been designed with crewed surface missions in mind. The orientation of the heat pipes has already been tested, and will be tested more thoroughly at NNSS (when held vertical in a gravity field, the heat pipe acts as a thermosyphon, increasing how much heat the pipes can move). This reactor could certainly be used for manned space missions as well, but only for what’s known as “hotel load,” not for providing large amounts of electrical power for an electric drive system (we’ll get to that in a couple blog posts). As such, it’s typically seen being used in crewed missions as a modular power unit, with more reactors added as the base grows to keep up with increased power demand.
Phase 1 launches before humans ever leave Earth, for ISRU, and will either be solar or fission powered. The trade-off here is between system mass and the time required for refueling: more fuel and water can be extracted faster using Kilopower, but it masses more than solar panels (after factoring in the full power production system). Phase 2 is the beginning of crewed missions. In this case, a NASA study showed significant mass savings over solar, due to solar’s energy storage costs.
The fundamental advantage on the Moon for fission power systems is the lack of energy storage requirements for the lunar night. The Fission Surface Power program was, in fact, primarily oriented at use with manned Lunar (and later Martian) missions. Kilopower will be able to operate well in these environments, though only offering up to 40 kWe of power (which is where FSP takes over). The study above looks at Lunar mission options and requirements as well.
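To see why energy storage dominates the solar option, here is a back-of-envelope sketch; the load, night length, and battery figures are my own assumed round numbers for illustration, not from any NASA study:

```python
# Why the lunar night favors fission: a solar-powered base must store roughly
# two weeks of energy. Assumed round numbers: a 10 kWe base load, a ~354-hour
# lunar night (~14.75 Earth days), and 150 Wh/kg battery pack specific energy.
load_kwe = 10.0
night_hours = 354.0
battery_wh_per_kg = 150.0

storage_kwh = load_kwe * night_hours                        # 3540 kWh per night
battery_mass_kg = storage_kwh * 1000.0 / battery_wh_per_kg  # ~23,600 kg of batteries
```

Tens of tonnes of batteries for a modest base load is why fission looks so attractive for riding out the lunar night.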
A wonderful resource for those interested in the beginnings of Kilopower is Dr. David Poston’s personal blog, SpaceNuke (spacenuke.blogspot.com), mostly written before the DUFF experiment. There’s a lot of insight into the design philosophy behind the reactor, and also into the difficulties of developing nuclear fission systems for in-space use. I can’t recommend it highly enough.
If an image doesn’t have credit, it’s from NASA or the DOE, from one of the sources below.
I’m going to break this up into KRUSTY and Kilopower sections, organized chronologically. The KRUSTY papers tend to be focused more on the reactor physics and hardware testing side, and are a great source for more detailed information about the reactor. The Kilopower papers and presentations are bigger-picture, and focus more on missions and policy.
KRUSTY Experiment Nuclear Design, presentation by Poston et al, Los Alamos NL, July 2015
Hello, and welcome to Beyond NERVA! Today (and in the next series of posts, which I’m trying to keep a bit shorter!) we’re going to begin looking at another NASA nuclear program that’s been in the news a lot recently, Nuclear Thermal Propulsion (which is NASA’s preferred term for nuclear thermal rockets, or NTR). This is a system that is really misunderstood by the majority of those that I’ve seen comment on various articles on the subject, so in this series of (shorter and more frequent) blog posts we’re going to look at this system, and what makes it different from the NERVA-derived engines that have been proposed over the years.
If you’ve found your way to this blog, you are probably already familiar with nuclear thermal propulsion as a concept, and Project Rover was the most famous of the nuclear thermal programs that have been carried out worldwide. Project Rover is also the subject of the second video that I’m working on, and will be the example I use to teach the basics of NTRs as a complete engine. As such, I’m not going to go into either with much depth here.
In a nutshell, a thermal rocket uses heating from an outside source to produce an expansion of a propellant (usually cryogenic), which is then ejected out of a nozzle. This differs from most rockets, which use chemical combustion to produce expansion in the propellants (fuel and oxidizer), in many ways, but perhaps the most significant is in the fact that since combustion is not needed, the plumbing of the engine can be greatly simplified. The fact that fuel mixing, combustion efficiency, and leakage through fuel/oxidizer systems (with resultant explosive dangers) are not an issue also greatly helps matters from an engineering point of view. These subjects have been delved into beautifully by W Greene at “Inside the LEO Doghouse,” with two wonderful posts: first, looking at chemical engine cycles, and second looking at nuclear thermal rockets (I recommend reading both, available here for chemical and here for nuclear, even if you’re only marginally interested in how rockets work, because they really are wonderful pieces of writing, and the nuclear post builds directly on the chemical engine post). If you want a more general explanation of NTRs, please check out the NTR page.
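The reason hydrogen keeps coming up as the propellant of choice falls straight out of the physics: for a given temperature, ideal exhaust velocity scales with the square root of temperature over molecular mass. A minimal sketch (ideal gas, with dissociation and nozzle losses ignored; the 2500 K figure is just an arbitrary round number, and only the ratio matters):

```python
import math

# Ideal exhaust velocity is proportional to sqrt(R*T/M), so at a fixed
# temperature the lightest molecule wins. Compare hydrogen with steam.
R_UNIVERSAL = 8314.5  # J/(kmol*K)

def characteristic_velocity(t_kelvin, molar_mass_kg_kmol):
    """sqrt(R*T/M), proportional to the ideal exhaust velocity."""
    return math.sqrt(R_UNIVERSAL * t_kelvin / molar_mass_kg_kmol)

v_h2 = characteristic_velocity(2500, 2.016)    # hydrogen (M = 2.016)
v_h2o = characteristic_velocity(2500, 18.015)  # steam, for comparison
ratio = v_h2 / v_h2o                           # hydrogen is roughly 3x faster
```

This is why an NTR running at the same temperature as a chemical engine still beats it handily on specific impulse: the chemical engine is stuck exhausting heavy combustion products.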
What the US has Done Before
Project Rover was a program that went through many reorganizations and changes of sponsors during its early years: it started as a program at Los Alamos (LASL/LANL) and Lawrence Radiation (LRL, later Lawrence Livermore) Labs, and included aircraft reactors as well. It was decided in 1957 that the aircraft program would be transferred solely to LRL (and subsequently renamed Project Pluto), leaving nuclear thermal rocket development as LASL’s project under the Rover program. The Air Force had been a partner since the beginning in this program, but with the creation of NASA in 1958 they handed off their stake to the new space agency. The AEC itself was later dissolved, its research functions passing to ERDA in 1975 and then to the Department of Energy (DOE) in 1977. The program ended in 1973, despite meeting many of its test objectives and having one NTR design ready to fly. Many other designs waited in the wings, facilities had been built for all aspects of design, manufacture, and testing of these engines… and then the political winds changed in two different directions: the US canceled the manned Mars missions that were meant to follow the Apollo lunar missions (and thus the reason for the engine’s existence), and the growth of anti-nuclear activism led to the ending of not just Project Rover, but all US nuclear thermal testing powered by nuclear fission all the way to the present day (what testing has been done recently has been electrically heated, a process that we’ll look at more in depth later). This timeline is based on the in-depth overview offered by JD Finseth of Sverdrup Corporation, working as a contractor for NASA. His Rover Nuclear Rocket Engine Program Engine Test Final Report, published in 1991, is the bible of the hot fire tests conducted through the various stages of engine development. This and a summary of historical NTR fuels by Kelsea Benensky are the primary sources for this post.
Looking back at Rover, the engine that was designed for flight as part of the Nuclear Energy for Rocket Vehicle Applications (or NERVA) program, the NRX-XE, was a solid core NTR using hydrogen propellant and graphite composite (GC) fuel clad in niobium carbide (NbC). This engine (as tested for the XE-PRIME test) sucked in 32 kg/s of cryogenic H2 to produce 244 kilonewtons (kN, 55 klbf) of thrust at a specific impulse of 710 s by heating it to 2475 C (2550 K fuel element outlet temperature) using the 1100 MWt nuclear reactor. This flight qualification test reactor burned at full power continuously for over an hour, and based on NERVA test data (and additional testing done at Westinghouse Astronuclear) the design was considered ready for flight. However, the engines had a few problems that I’ll delve into in more detail below, because the potential solutions lead directly to design changes in the current LEU NTR that NASA is looking to test. Other, often flashier, problems that occurred during Rover were related to some of the first experiments with using cryogenic hydrogen as a propellant; happily, these issues have mostly been resolved through the use of hydrogen propellant in chemical rockets (mostly upper stages). Issues that remain I hope to cover in the future, and the hydrogen zero-boil-off system will be featured in an early Dealing with Physics video, but we will touch on them briefly as they come up in this proposed engine as well.
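As a sanity check on those numbers: thrust, propellant flow, and specific impulse are tied together by F = (mass flow) × Isp × g0, so any two of the quoted figures fix the third. The published numbers are rounded, which is why they don’t close exactly:

```python
# Cross-checking the XE-PRIME figures quoted above (244 kN, 710 s, 32 kg/s).
G0 = 9.80665  # standard gravity, m/s^2

isp_s = 710.0
ve = isp_s * G0              # effective exhaust velocity, ~6963 m/s
thrust_at_32 = 32.0 * ve     # ~223 kN, a bit under the quoted 244 kN
mdot_for_244 = 244e3 / ve    # the quoted thrust actually implies ~35 kg/s
```

So the quoted 244 kN and 710 s together imply a flow closer to 35 kg/s than 32 kg/s; the mismatch is within the rounding of the published figures.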
The engines for the NERVA program were based on a design developed at Los Alamos, the Phoebus reactor. This particular engine type (sometimes referred to as a Westinghouse A-type reactor) went through three major iterations (for hot-fire tests), and the XE-PRIME flight prototype alone was tested 24 times. Other concepts were proposed at the time as well, and a good part of the beginning of Beyond NERVA is going to be looking at those different concepts, especially PEWEE (a much smaller engine that’s closer to the engine size proposed more commonly today), which had a direct impact on what a NERVA-legacy engine built today would look like.
This reactor used hexagonal rods of graphite (or prisms) arranged in a roughly cylindrical reactor core, interspersed with tie tubes (more on these later). This was surrounded by a set of control drums, made of beryllium, with a coating of boron along one side to act as a neutron poison. These drums would be rotated to reflect or absorb more neutrons, and therefore control the reactivity of the core – and the power of the engine.
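A common first-order model for control drum behavior (a textbook approximation, not a Rover-specific figure) has the inserted reactivity vary with rotation angle roughly as (1 - cos(theta))/2 between poison-out and poison-in:

```python
import math

# Illustrative first-order sketch of control-drum worth vs. rotation angle.
# At 0 degrees the boron poison faces fully away from the core; at 180 degrees
# it faces fully inward. The total worth below is a made-up placeholder, not
# a figure for any real reactor.
TOTAL_WORTH_PCM = 2000.0  # hypothetical total worth of the drum bank

def inserted_worth_pcm(theta_deg):
    """Negative reactivity inserted at a given drum rotation angle."""
    return TOTAL_WORTH_PCM * (1.0 - math.cos(math.radians(theta_deg))) / 2.0
```

At 0° nothing is inserted, at 90° half the bank’s worth is in, and at 180° all of it is; the flat slope near the end points is one reason drums give fine control near full insertion or withdrawal.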
Graphite composite fuel is a good choice for a beginning NTR, because it has a high thermal capacity, is moldable and millable, and has fairly well-understood thermal expansion characteristics. However, it is highly susceptible to erosion in the propellant stream, so the propellant tubes must be clad to avoid major damage to the fuel element, and release of both fission products and unburned fuel into the propellant stream. As a practical matter, this clad is a milled tube, and a similar material is used to coat the outside of the fuel element as well.
As the name implies, this fuel is a composite of multiple materials: the fuel itself is uranium oxide (UO2, 95%+ enriched 235U), which is dispersed through a matrix of graphite which is pyrolytically deposited to form the basic fuel form shape. The details of ensuring even packing of this graphite consumed a lot of time and study at Los Alamos, and the problem has become well-understood. The fissile material itself came in multiple forms, the most common one being spheres of UO2 clad in pyrolytic graphite before FE manufacture to ensure a clad that would resist the release of fission products into the surrounding graphite, and other particle clad variations were experimented with as well. However, the issue of power distribution through the fuel element remained a technical challenge, and one that is exacerbated by the different isotopic and chemical compositions that occur throughout the more complex fuel element. More advanced FE designs focused on a more controlled distribution of fissile materials, resulting in increased performance out of a GC fuel element that would be used in a modern “rebuilding” of a NERVA engine.
This entire graphite hexagonal rod (called a prism) was then clad: the propellant tubes with niobium carbide (NbC) milled from bar stock to minimize erosion and thermal loading issues in the flow of hot hydrogen, and the end caps and outside surfaces with welded Mo clad to prevent midband erosion and damage from the graphite-to-graphite interactions in the heavy vibrations that can occur in an NTR core. Again, the last 60 years of materials science have offered improvements to clad materials and manufacturing, and graphite deposition that requires extreme accuracy is now a relatively mundane task, as opposed to the major technical challenge it was during Rover.
Later versions used an unclad matrix of UO2 in the graphite, to increase the homogeneity (consistency) of the distribution of fissile fuel. This requires better cladding for the fuel elements, since the particle coatings are not available to catch fission products in the case of fuel element failure. However, the advances in clad materials allowed for this possibility, and it does improve the functioning of the engine.
The Problems with NERVA
Looking briefly at the problems that were most often encountered with the tested engines, common problems for the reactor itself were the fuel elements cracking due to shear stresses and a phenomenon known as mid-band corrosion, where the clad (and then the fuel element itself) would be eroded by the combination of the intense heat and radiation in the core interacting with the hot hydrogen propellant and clad material.
Cracking across the narrow part of the fuel elements (transverse cracking) was a constant problem with the graphite composite fuel elements, leading to a number of hot fire tests aborted due to molten fragments of fuel elements being ejected into the Nevada desert. Because of their graphite composite construction, it’s very easy to shear the fuel elements along the line of deposition, i.e. along the short axis of the prism. This problem is seen in plastic extrusion 3D printing, as well, where the orientation of the printed model often has to take into account the structural needs of the final model to make sure it won’t be too fragile. Add in the relative brittleness of graphite and the vibrational and thermal extremes that the NTR required the fuel elements to deal with, and cracking seemed almost inevitable. Shorter lengths of GC stacked together in the clad had been experimented with, but rejected, before the initiation of NERVA as a potential solution. Having the clad support the fractured fuel element was another strategy that was used, but was complicated by the need to account for the different thermal expansion profiles of the various parts of the reactor core.
Many different clad types were experimented with, and it was found that milled cylinders that were then inserted into the pre-drilled fuel elements were the best option for limiting erosion during Rover. This is an area where constant improvement has occurred, and new manufacturing methods, materials, and dimensional trade-offs are proposed regularly (often for other high temperature gas core systems); this is looked at more in-depth on the fuel element page.
The Versatile Tie Tube
The shear force (vibrationally-caused transverse cracking) problems were solved using a very clever device known as a tie-tube. Beloved by many astronuclear geeks, this is a device that performs many different functions in the reactor: it provides structural support for the fuel elements, it moderates the neutron spectrum to reduce the required fuel loading, and it also collects thermal energy to power the turbopumps used for the propellant feed system.
A consistent problem with the Phoebus-derived reactors was that they were always starved for hydrogen, both as propellant and as moderator, and while the number and size of the holes in the fuel elements was constantly being increased—and the loading of fissile material in the fuel elements was constantly being tweaked – the problem remained. It was realized that whatever structural solution for the breaking fuel elements was going to be found would have to reside within the core, and therefore required its own cooling system. Hydrogen was the natural choice, as a good moderator that was already being used as propellant.
This is pumped first from the propellant tanks into the nozzle of the engine to cool it, then splits into two streams. The first stream enters the reactor vessel at the top and flows through the propellant channels in the fuel elements, to be ejected out the nozzle, but the second takes a longer route, traveling from the nozzle end of the reactor up to the top through the interior of the tie tube, then down the outer part of the tie tube to the bottom again before being fed into a turbine to drive the turbopumps. This now-cooler hydrogen is then used to provide roll control thrust through a smaller nozzle (hot bleed cycle). While in the core, this hydrogen both cools the tie tubes and provides neutron moderation, pushing this reactor into the epithermal spectrum. This has changed in modern NERVA-legacy engines, however, and the more efficient expander cycle is now the common design choice.
However, while the tie tubes perform many roles in this engine, the primary reason for their existence is due to the nature of the fuel elements being used, namely the graphite composite elements. Here, the 235U that made up the nuclear fuel is spread through a graphite matrix. Since the graphite is built up by vapor deposition, distribution of the fissile fuel can be highly controlled with this method, and the form that it takes can vary from uranium in irregular patterns to pyrolytic graphite clad micro-granules similar to minuscule TRISO fuel elements that are sometimes used in either gas- or salt-cooled pebble-bed reactors. The graphite provides moderation for the reactor within the fuel element itself, and the ability to adjust the fuel loading was extensively experimented with, but there are drawbacks to graphite as a fuel for NTRs using hydrogen: it’s incredibly susceptible to hydrogen erosion, exposing both the unburned 235U and the fission products bound in the graphite to the exhaust stream and stripping them away. Cladding was the obvious solution.
Unfortunately, national interest in space waned as more and more astronauts landed on the Moon. First, the Mars missions were cut, then the last three Apollo lunar missions were dropped as well. Because the NERVA engine was specifically funded for the manned Mars missions, all NASA funding was stopped. LASL did two more years of work, closing the project up on the DOE end, and the program went into mothballs.
This meant that the engineers involved were able (now mostly as unpaid hobbyists, unfortunately) to reassess various engineering decisions that were made in deference to a schedule that was no longer there. Incremental, small improvements were suggested, and NASA’s plans were updated over time as dribbles of money could be found.
Other inherent engineering challenges in the engine were able to be addressed in a more leisurely fashion after the cancellation of the manned Mars missions, as well. In any major experimental engineering endeavor, the system as a whole isn’t necessarily optimized to function as well as it could, for a variety of reasons. In order for the design to continue moving ahead, and for other decisions about the engine to be reached, certain specifications have to be defined earlier than others, and sometimes this leads to a limitation in the final design that is difficult or impossible to change later. For instance, the turbopump specified for the NRX flight engine was capable of pumping about 40 kg/s at 1360 psi, with a total mass of 243 kg. As is the case with any equipment that is commercially produced, the companies involved are constantly working to improve their product, and a more modern turbopump (the high-pressure fuel turbopump for the Space Shuttle Main Engine, the usual benchmark for rocket engine comparison) can deliver about 72 kg/s at 7040 psi for only 350 kg. Since the hydrogen doesn’t just affect the thrust provided, and the cooling, but the neutron spectrum as well, the nuclear engineers needed to know exactly how much H2 was going through the pump… so a design was chosen, and the decision was frozen. Any upgrades to the turbopump are going to have to be analyzed to verify the effect of the increased flow rate on reactor dynamics, both neutronic and thermal, and any associated plumbing would have to be upgraded to the higher pressures that a more powerful turbopump would cause in the system. As in anything to do with either space or nuclear reactors, there is much more that goes into an engine than the combustion chamber (or the reactor core).
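The scale of that improvement is easier to appreciate as ideal hydraulic power, P = (mass flow) × (pressure rise) / density. A rough sketch, ignoring pump efficiency and assuming round numbers of ~71 kg/m³ for liquid hydrogen density and ~72 kg/s of flow for the SSME high-pressure fuel pump:

```python
# Ideal hydraulic power of the two turbopumps being compared: the NRX flight
# pump (40 kg/s at 1360 psi, 243 kg) vs. the SSME high-pressure fuel turbopump
# (~72 kg/s at 7040 psi, ~350 kg). Pump efficiency is ignored.
PSI_TO_PA = 6894.76
RHO_LH2 = 71.0  # kg/m^3, an assumed round number for liquid hydrogen

def pump_power_mw(mdot_kg_s, delta_p_psi):
    return mdot_kg_s * delta_p_psi * PSI_TO_PA / RHO_LH2 / 1e6

nerva_pump_mw = pump_power_mw(40, 1360)  # ~5.3 MW from a 243 kg pump
ssme_pump_mw = pump_power_mw(72, 7040)   # ~49 MW from a ~350 kg pump
```

Nearly ten times the hydraulic power for less than 50% more pump mass — that’s the kind of non-nuclear progress a modernized NTR inherits for free, subject to the neutronics re-analysis described above.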
The Rebirth of NTR at NASA
Research into nuclear thermal propulsion waxes and wanes. After the cancellation of NERVA, some research continued for a short time at LANL, but overall the program had ended. Some work continued on the high-temperature gas-cooled reactor being designed by General Electric (which was also seen as a possible NTR FE, as we’ll see later), but for actual NTR designs, NASA entered a drought.
The next NTR to be proposed was not for NASA, but for the Department of Defense, as part of the Strategic Defense Initiative (SDI, also known as Star Wars). This is a reactor that we’ll look at more in a later post, but in short it used fuel pebbles, held in place with centrifugal force by spinning the reactor core, in order to increase fuel element surface area, and therefore make the rocket more efficient. This engine design was not hot-fire tested (besides the NERVA-era engines, the only NTR that has been is the Russian RD-0410), and as far as I can tell almost no hardware was built for it, either. The project was canceled when all SDI funds were methodically stripped out of any budget bill before Congress, and the engineers involved moved on to other projects, still thinking about what could be done with an NTR.
During this time, NASA conducted some smaller-scale tests of NTR components, mostly chemical and thermal analyses of various materials that could be used to build the next-generation NTR. At this point, with no immediate mission, some of the basic assumptions and architectures could be relatively easily changed, so it was decided that a conference would be held to try and jump-start NTR development in the US again. This was the “Nuclear Thermal Propulsion Joint NASA/DOE/DOD Conference,” held in Albuquerque, NM from July 10th to 12th, 1990. The proceedings of the workshop can be found here.
To determine if something’s better than it was, though, you have to accurately assess where it would be today. Dr. Stanley Borowski (arguably the father of modern nuclear thermal rocketry at NASA) prepared a paper in 1990 for that same conference in Albuquerque, NM, titled “Nuclear Thermal Rocket Workshop Reference System -Rover/NERVA,” looking at these considerations. While this paper is now 27 years old, it still provides valuable insight into the impact of design decisions and the impact that new subsystems have on existing rocket engine designs. In it, he lays out the “baseline legacy NERVA engine” that is still referenced today (although it could still likely be improved using modern materials understanding and manufacturing techniques).
This design is based on the NERVA NRX-XE flight-configuration rocket, tested at Jackass Flats in 1969. This engine was a hot-bleed cycle (i.e. the hydrogen used to power the turbopumps was then vented outside the engine in a separate nozzle for roll and spin control) engine, although with modern computational analysis an expander cycle could be used to increase performance (for a more in-depth look at the different engine cycles, check out W Greene’s writing on “Inside the LEO Doghouse;” you can find the links here and here, as well as a great perspective on the Space Shuttle Main Engine in other posts of his). This reactor mostly used the same type of fuel as the NERVA A6 reactor, although many of the challenges that were presented during the manufacturing of these fuel elements have been solved (chemical vapor deposition for clad application, for example), and so the expected performance out of even the baseline graphite composite (with pyrolitic graphite coated fuel pebbles of various levels of fissile fuel content carefully distributed through the reactor) is likely higher than the numbers gathered in the 1960s would seem to indicate. Given these modernizations, it can reasonably be expected that a modernized, expander cycle GC NERVA engine designed for 75 klbf thrust would be able to operate at a chamber temperature of 2500 K, and a chamber pressure of 500 psia, with a 200:1 expansion ratio for the nozzle (rather than the 100:1 used in the 1970s). This results in an expected specific impulse of about 875 s (which is very nice for high-thrust systems, but not astounding).
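Those numbers can be sanity-checked with a simple isentropic-nozzle estimate. This sketch assumes ideal hydrogen with a constant ratio of specific heats of 1.4 and frozen flow, which understates real performance a bit (recombination in the nozzle adds some impulse), so landing somewhat under the quoted 875 s is expected:

```python
import math

GAMMA = 1.4            # assumed constant ratio of specific heats for hot H2
R_H2 = 8314.5 / 2.016  # specific gas constant for hydrogen, J/(kg*K)
G0 = 9.80665           # standard gravity, m/s^2

def exit_mach(area_ratio, gamma=GAMMA):
    """Solve the isentropic area-Mach relation for the supersonic root by bisection."""
    def f(m):
        t = (2.0 / (gamma + 1.0)) * (1.0 + (gamma - 1.0) / 2.0 * m * m)
        return (1.0 / m) * t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) - area_ratio
    lo, hi = 1.001, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def ideal_isp(tc_kelvin, area_ratio, gamma=GAMMA, r=R_H2):
    """Vacuum-expanded ideal specific impulse for the given chamber temperature."""
    m_e = exit_mach(area_ratio, gamma)
    p_ratio = (1.0 + (gamma - 1.0) / 2.0 * m_e**2) ** (-gamma / (gamma - 1.0))  # pe/pc
    ve = math.sqrt(2.0 * gamma / (gamma - 1.0) * r * tc_kelvin
                   * (1.0 - p_ratio ** ((gamma - 1.0) / gamma)))
    return ve / G0

isp_estimate = ideal_isp(2500, 200)  # ~835 s with these simplifying assumptions
```

An ideal frozen-flow estimate of roughly 835 s from a 2500 K chamber and a 200:1 nozzle puts the quoted ~875 s comfortably in the plausible range once real-gas effects are accounted for.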
However, much of the benefit of modern materials and manufacturing gets hidden in the improved thrust to weight ratio, which jumped from 3.0 to 4.4 (with the internal shield). Turbopumps are lighter and more efficient, titanium pressure vessels offer much more strength for much less mass, and are commonplace today, and the regeneratively cooled nozzles for the Space Shuttle’s main engines had to deal with greater extremes than this engine will be able to produce, and did so with far less mass and a far higher degree of reliability. Many of the challenges that seemed to beset Project Rover constantly were non-nuclear in nature; those problems have mostly cropped up in other parts of aerospace development over the decades and have been addressed. This technology was already at Technology Readiness Level 6 (for an explanation of NASA’s TRL system, you can find a good one here), and that’s before taking these advances into account.
There were also NTR programs in the USSR under different design bureaus at different times; the designs that came out of this work are the RD-0410 and RD-0411, by Rosatom, Roscosmos, and NPO Luch. Instead of graphite composite (GC), this reactor uses carbide fuel elements, which are far more temperature-resistant. Another thing the Russians have researched extensively is an on-board effluent cleanup system to deal with radiological release in the rocket exhaust, rather than cladding (the original design used unclad fuel elements, but apparently a contract was signed a number of years ago with Westinghouse to develop a clad for the fuel elements – I’m still trying to find it; I’ve only heard of it from a couple of nuclear engineers I have met). Carbides are far harder, and more temperature- and hydrogen-resistant, than GC; in this case a mix of two ternary carbides, UC-ZrC-NbC and UC-ZrC-C, is used at different places in the reactor. This is an awesome system, with many advantages from both a testing and an operations point of view. The Soviets weren’t the only ones interested in ternary carbide fuel elements: they were also a major area of study in the US, particularly for the nuclear ramjet under development at what was then Lawrence Radiation Lab (now Lawrence Livermore National Lab, LLNL) and Idaho National Lab, working with Vought Aircraft under Project Pluto from 1957 to 1964. Several different fuel elements were manufactured and tested in that program, and we’ll look at them more in-depth when we look at the Russian carbide designs. At the same 1991 conference, carbides were discussed for NTRs, and in the slide above the expected (at the time) operating parameters were very attractive. There’s a little more information on my NTR Fuel Elements page, but digging into Pluto isn’t something I’ve had a chance to do much of yet.
If anyone has more papers, especially about fuel element design and manufacture, core geometry, etc., that they could send me (or even better, link to in the comments!) I’d greatly appreciate it.
One of the other most attractive materials solutions was a curious mix of two materials that we deal with in everyday life: metals and ceramics. The CERMET fuel form was first proposed in the late 1950s at both Argonne National Lab and General Electric’s nuclear division in Cincinnati, OH. It is a composite of a ceramic fissile component (in this case highly enriched uranium oxide) in a metal matrix, and it combines the advantages that each material offers: the metal matrix allows well-understood alloying techniques to be used for fuel element manufacture, while the ceramic fuel form was already well-characterized in terms of radiochemistry and microstructure degradation in direct contact with fissioning uranium atoms. Even better, the fuel wouldn’t suffer the transverse (shear) cracking that constantly challenged the graphite composite fuel elements, because the majority of the element is metal, and therefore tougher and better at handling vibration… but that led to a complication: without the hydrogen-cooled tie tubes of a graphite core (whose coolant could be tapped to drive the turbopumps), the power source for your turbopumps isn’t there anymore. There are two ways around this problem: you can use an external power source to drive your turbopumps (either chemically or electrically, as Rocket Lab does with their Electron rocket), or you can find another way to extract energy from the reactor core. The second option is how the designers of most CERMET-fueled reactors choose to solve the problem, often replacing fuel elements around the periphery of the core with hydrogen feed tubes similar to tie tubes. This creates challenges for a reactor designer, as we’ll see in a later blog post, in terms of evenly distributing power across the reactor, but placing the tubes around the periphery of the core makes the problem easier to deal with.
Other challenges reared their heads as well, perhaps the biggest of which was that the fuel elements swelled quite significantly when at operating temperature. This is something that happens with all nuclear fuel elements, and one that can be addressed for most designs (although some designs are able to handle this better than others, and it does provide challenges to reactor startup and shutdown).
This fuel form does allow a very interesting capability, though: low-enriched uranium fuel, made possible by varying where in the fuel element the fissile material sits, and which metals (each with different moderation and reflection properties) are distributed through the main body of the matrix, and in what densities. From a regulatory standpoint this is a big deal: virtually all in-space reactor designs up to this point have used highly enriched uranium fuel, so working on them requires extensive (and expensive) security procedures. The controlled distribution of moderator and fissile material within the matrix is the key factor enabling a move away from that. Since 2012, the US government has been working to eliminate the use of highly enriched uranium wherever possible in the American nuclear fleet. Initially, shifting regulatory priorities (and pressure from the International Atomic Energy Agency) drove the push toward low enriched uranium fuel, but since then the idea has gained traction in industry (and academia) as well, because universities with nuclear engineering programs could then actually perform many of the tests required to fully verify a fuel form (and many Master’s theses and doctorates flow out of such testing). This fuel form is exciting because of the flexibility it allows in chemical composition, fissile fuel distribution, and other factors that we’ll look at in the next post, so making it available more widely to research institutions across the country will lead to more innovation in this field, with more testing occurring to validate the systems that will fly, or could fly.
Not every reactor can make that switch, though. Certain reactors (such as the High Flux Isotope Reactor at Oak Ridge NL) need a very high neutron flux, either for irradiation or for feeding beamlines. Others, mainly the US Navy’s propulsion reactors, are designed to be fueled only once in their lifetime, which makes going to low enriched fuel essentially a non-starter (although research is currently being done into possibly developing a LEU naval reactor fuel by, among others, BWXT’s naval reactor program). Development of CERMET fuel began in the 1960s at both General Electric and Argonne National Lab, for both NTR and aircraft propulsion reactors, using composites of clad tungsten and pyrolytic-coated UO2 particles; it was then stopped because carbide fuel seemed (with good reason) to be the best option for NTR fuels, and the aircraft reactor programs were canceled not long after.
For the majority of the history of in-space nuclear power, HEU has been the norm. Most of the numbers I’ve found say that 95-97% 235U is standard in both power and propulsion reactors (TOPAZ, BES-5, NERVA/Rover, RD-0410, and KRUSTY/Kilopower are all 95+% enriched). It has been widely assumed, in fact, that designing a LEU NTR is an exercise in futility: the fission poisons would overwhelm the available fissile fuel, and the process of breeding 238U into 239Pu is too slow and too finicky to be relied upon during a mission. Why rely on breeding when you can just make sure your uranium is already all fissile? Bringing along only 20% 235U seems like a waste of mass. However, breeding occurs in all fuel elements, even in terrestrial reactors: by the end of a fuel pellet’s life, the vast majority of what’s being burned isn’t uranium, but plutonium that has been bred from the 238U in the initial fuel load. In a conversation on a Facebook group, a nuclear engineer with experience designing in-space systems ball-parked a breeding ratio of approximately 1.01 as what would be needed to maintain power distribution and overcome fission product buildup over the life of the fuel element (in this case a few dozen hours for a pure NTR, a number of months or years with a bimodal design, and about 18 months or so with current light water reactors). Since then, an interesting paper by Vishal Patel et al of the Center for Space Nuclear Research has come out that found a mass savings may be possible for a CERMET-fueled LEU system compared to an HEU one! The recent development history of CERMET fuels is a discussion for the next blog post, however.
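To make the breeding-ratio idea concrete, here is a toy bookkeeping sketch. All of the numbers are purely illustrative (they don’t come from any actual core design): each time step, some fissile mass fissions away, and the breeding ratio times that mass is bred in as new fissile material (e.g. 239Pu from 238U):

```python
def fissile_inventory(initial_kg, burn_per_step_kg, breeding_ratio, steps):
    """Toy model of fissile inventory over a fuel element's life.

    Each step, burn_per_step_kg of fissile material fissions, and
    breeding_ratio * burn_per_step_kg of new fissile material is bred in,
    so the net change per step is (breeding_ratio - 1) * burn_per_step_kg.
    """
    inventory = [initial_kg]
    for _ in range(steps):
        inventory.append(inventory[-1] + (breeding_ratio - 1.0) * burn_per_step_kg)
    return inventory

# Illustrative numbers only: 60 kg of fissile material, 0.5 kg burned per step
no_breeding = fissile_inventory(60.0, 0.5, 0.0, 20)      # steadily drops
near_breakeven = fissile_inventory(60.0, 0.5, 1.01, 20)  # essentially flat
```

With a breeding ratio near 1.01 the fissile inventory stays essentially flat over the fuel’s life, which is the whole point of the ball-park estimate above: the bred-in plutonium offsets burnup, with a small surplus to push back against the reactivity lost to fission product poisons.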
CERMET gives us a few options to challenge that assumption, which we’ll look at more closely in the next blog post on these new fuel elements. The post after that will look at testing options that have been proposed over the years, comparing them to what I’ve been able to find on the refit of NASA’s Stennis Space Center to allow nuclear thermal rockets to be tested at an already-existing test stand. Finally, we’ll look at the ships and missions that have been proposed for this NTR, and some variations on the design that have been proposed over the years, such as bimodal options.
All writing copyright Beyond NERVA, 2017. Images used with permission. Contact for reprint.
Welcome to the inauguration of the Beyond NERVA blog!
In-space nuclear power has been a fascination for me for the majority of my life, but I have had to spend years fighting to understand the details of these incredible engines. There is an immense library of information available on them, but the majority of it is locked away in technical papers, progress reports, and buried in organizational histories.
This isn’t to say that there aren’t people that have done awesome work bringing the history and future of nuclear propulsion to light. Winchell Chung’s Atomic Rockets is the online Bible of advanced spaceflight concepts. There are several bloggers that have written on this, such as Atomic Skies, and other blogs have covered the subject on occasion as well, such as Inside the LEO Doghouse (no main page for that blog, because NASA) and the ANS Nuclear Cafe.
There was one omission that I saw, though: nobody seems to be doing videos on the subject. YouTube has become an awesome platform for public outreach and education, and as Gordon McDowell has shown in his Thorium videos, even fairly advanced reactor design can be tackled in video format, given enough time.
At the same time I started thinking about this, I also started watching Science and Futurism with Isaac Arthur, and that pushed me harder to want to make something like SFIA, but more narrowly focused on nuclear power in space. I wanted to make the nuclear reactor on a spacecraft into something that people could understand, not the magic black box that it’s often depicted as.
Then, Isaac did two things: he invited me to help on his channel as part of the Production Group, and he released The Nuclear Option, the last video before the PG started, so I got to see his summary of the technology. If you haven’t seen it, it’s very good and worth watching. It’s the only video that looks at the variety of systems available, but even then, time constraints prevented much more than a summary.
Seeing what went into the channel behind the scenes has been a real eye-opener, but has also given me the chance to learn from the best YouTuber out there, and to rub shoulders with incredible writers, artists, researchers, and all around creative people. I also got recruited into a couple other roles as well, but all of these ended up taking time away from getting the channel together.
While all this was happening, I continued to research and get some writing done, but found that there were some common misconceptions out there about what’s going on with nuclear power in space, ones that take a few paragraphs to address properly. The differences between NERVA and the new LEU CERMET engine are huge, but not immediately obvious. The Kilopower program is often seen as some huge LFTR, not a small-scale, heat-pipe-cooled, sealed unit designed to fit a very specific niche. These are things that I’m planning to address at some point on the channel, but they are well down the list of videos to come out.
Enter the blog. Here, I’m going to post on stuff that keeps coming up, or catches my attention during research but isn’t something I’m ready to do a video on. When videos come out, I’ll also do a companion blog post with my sources, and at the very least a transcript of the video. I’ll also post on stuff that I had to cut for time, or doesn’t really fit anywhere else.
Work is continuing on the channel, both in the writing and the visuals. I’m beginning to work on 3D models for animations, and thanks especially to Katie and Jarred I’m learning a lot about Blender. The visual side of things is a huge key to doing this: there are very few stills, and even less video, available to illustrate advanced reactor concepts, and Blender is surprisingly easy. Scripts keep getting written and rewritten, and the other random considerations are being addressed as well. Unfortunately, I don’t really have a timeline available.
In the meantime, I’ll post on here at unknown frequency, on unpredictable subjects related to nuclear spaceflight. The adventure is still just beginning!