Hello, and welcome back to Beyond NERVA, where today we are looking at ground testing of nuclear rockets. This is the first of two posts on ground testing NTRs, focusing on the testing methods used during Project Rover, including a look at the zero-power testing and assembly tests carried out at Los Alamos Scientific Laboratory, and the hot-fire testing done at the Nuclear Rocket Development Station at Jackass Flats, Nevada. The next post will focus on the options that have been and are being considered for hot-fire testing the next generation of LEU NTP, as well as a brief look at cost estimates for the different options, and the plans that NASA has proposed for the facilities needed to support this program (what little information is available).
We have examined how to test NTR fuel elements in non-nuclear situations before, and looked at two of the test stands that were developed for testing thermal, chemical, and erosion effects on them as individual components: the Compact Fuel Element Environment Simulator (CFEET) and the Nuclear Thermal Rocket Environment Effects Simulator (NTREES). These test stands provide economical means of testing fuel elements before loading them into a nuclear reactor for neutronic and reactor physics behavioral testing, and can catch many chemical and structural problems without the headaches of testing a nuclear reactor.
However, as any engineer can tell you, computer modeling alone is far from enough to qualify a full system. Without extensive real-life testing, no system can be trusted in real-world use. This is especially true of something as complex as a nuclear reactor – much less a rocket engine. NTRs have the challenge of being both.
Back in the days of Project Rover, there were many nuclear propulsion tests performed. The most famous of these were carried out at Jackass Flats, NV, on the Nevada Test Site (now the Nevada National Security Site), in open-air testing on specialized rail cars. This was far from the vast majority of human habitation (there was one small ranch – fewer than 100 people – upwind of the facility, but downwind was the range used for nuclear weapons tests, so any fallout from a reactor meltdown was not considered a major concern).
The test program at the Nevada site started with the fully constructed and preliminarily tested rocket engines arriving by rail from Los Alamos, NM, along with a contingent of scientists, engineers, and technicians. After another check-out of the reactor, each engine was hooked up (still attached to the custom rail car it was shipped on) to instrumentation and hydrogen propellant, and run through a series of tests, ramping up to either full power or engine failure. Rocket engine development in those days (and even today, sometimes) could be an explosive business, and hydrogen was a new propellant, so accidents were unfortunately common in the early days of Rover.
After the test, the rockets were wheeled off onto a remote stretch of track to cool down (from a radiation point of view) for a period of time, before being disassembled in a hot cell (a heavily shielded facility using remote manipulators to protect the engineers) and closely examined. This examination verified how much power was produced based on the fission product ratios of the fuel, examined and detailed all of the material and mechanical failures that had occurred, and started the reactor decommissioning and disposal procedures.
As time went on, great strides were made not only in NTR design, but in metallurgy, reactor dynamics, fluid dynamics, materials engineering, manufacturing techniques, cryogenics, and a host of other areas. These rocket engines were well beyond the bleeding edge of technology, even for NASA and the AEC – two of the most scientifically advanced organizations in the world at that point. This, unfortunately, also meant that early on there were many failures, for reasons that either weren’t immediately apparent or that didn’t have a solution based on the design capabilities of the day. However, they persisted, and by the end of the Rover program in 1972, a nuclear thermal rocket was tested successfully in flight configuration repeatedly, the fuel elements for the rocket were advancing by leaps and bounds past the needed specifications, and with the ability to cheaply iterate and test new versions of these elements in new, versatile, and reusable test reactors, the improvements were far from stalling out – they were accelerating.
However, as we know, the Rover program was canceled after NASA was no longer going to Mars, and the development program was largely scrapped. Scientists and engineers at Westinghouse Astronuclear Laboratory (the commercial contractor for the NERVA flight engine), Oak Ridge National Laboratory (where much of the fuel element fabrication was carried out) and Los Alamos Scientific Laboratory (the AEC facility primarily responsible for reactor design and initial testing) spent about another year finishing paperwork and final reports, and the program was largely shut down. The final report on the hot-fire test programs for NASA, though, wouldn’t be released until 1991.
Behind the Scenes: Pre-Hot Fire Testing of ROVER reactors
These hot fire tests were actually the end result of many more tests carried out in New Mexico, at Los Alamos Scientific Laboratory – specifically the Pajarito Test Area. Here, many test stands and experimental reactors were used to measure neutronics, reactor behavior, material behavior, critical assembly limitations, and more.
The first of these was known as Honeycomb, due to its use of square grids made out of aluminum (which is mostly transparent to neutrons), held in large aluminum frames. Prisms of nuclear fuel, reflectors, neutron absorbers, moderator, and other materials were assembled carefully (to prevent accidental criticality, something the Pajarito Test Site had seen early in its existence with the Demon Core experiments and subsequent accident) to verify that the actual behavior of possible core configurations matched predictions closely enough to justify the effort and expense of refining and testing fuel elements in an operating reactor core. Especially for cold and warm criticality tests, this test stand was invaluable, but with the cancellation of Project Rover there was no need to continue using it, and it was largely mothballed.
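Critical-assembly work of this sort is conventionally done with the inverse-multiplication (1/M) method: a neutron source and detector sit in the stack, the count rate is recorded after each increment of fuel, and plotting 1/M against loading and extrapolating to zero predicts the critical loading before it is ever reached. The sources don't detail Honeycomb's exact procedure, so the sketch below is a generic illustration of the technique with invented numbers:

```python
# Illustrative inverse-multiplication (1/M) approach-to-critical estimate.
# Loadings and count rates below are made up for the example; a real assembly
# re-plots 1/M after every increment and stops well short of the extrapolation.

def inverse_multiplication(loadings, count_rates):
    """Return (loading, 1/M) pairs, where M = count rate / source-only rate."""
    baseline = count_rates[0]  # detector response with source but no fuel
    return [(m, baseline / c) for m, c in zip(loadings, count_rates)]

def extrapolate_critical(points):
    """Linearly extrapolate the last two 1/M points down to 1/M = 0."""
    (m1, y1), (m2, y2) = points[-2], points[-1]
    slope = (y2 - y1) / (m2 - m1)
    return m2 - y2 / slope

# Hypothetical data: fuel loading (kg) vs. detector counts per second
loadings = [0, 10, 20, 30, 40]
counts = [100, 125, 167, 250, 500]

points = inverse_multiplication(loadings, counts)
print(extrapolate_critical(points))  # predicted critical loading, kg
```

As the assembly approaches critical the multiplication M diverges, so 1/M falls toward zero; the virtue of the method is that the dangerous point is predicted from safely subcritical data.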
The second was a modified KIWI-A reactor, which used a low-pressure, heavy water moderated island in the center of the core to reduce the amount of fissile fuel necessary to achieve criticality. This reactor, known as Zepo-A (for zero power, i.e. cold criticality), was the first of an experiment that was repeated for each successive design in the Rover program, supporting Westinghouse Astronuclear Laboratory and the design and testing operations in Nevada. As each reactor went through its zero-power neutronic testing, the design was refined and problems were corrected. This sort of testing was carried out again in late 2017 and early 2018 at the NCERC in support of the KRUSTY series of tests, which culminated in March 2018 with the first full-power test of a new nuclear reactor in the US in more than 40 years, and it remains a crucial testing phase for all nuclear reactor and fuel element development. An early, KIWI-type critical assembly ended up being re-purposed into a test stand called PARKA, which was used to test liquid metal fast breeder reactor (LMFBR, a concept later developed as the Integral Fast Reactor, or IFR, at Idaho National Laboratory) fuel pins in a low-power, epithermal neutron environment for startup and shutdown transient behavior, as well as serving as a well-understood general radiation source.
Finally, there was a pair of hot gas furnaces (one at LASL, one at WANL) that used resistive heating to bring fuel elements up to temperature in an H2 environment. These became more and more important as the project continued, since development of the clad on the fuel element was a major undertaking. As the fuel elements became more complex, or as the materials used in them changed, the thermal properties (and chemical properties at temperature) of each new design needed to be tested before irradiation testing, to ensure the changes didn’t have unintended consequences. This was not just for the clad: the graphite matrix composition changed over time as well, transitioning from graphite flour with thermoset resin to a mix of flour and flakes, and the fuel particles themselves changed from uranium oxide to uranium carbide, with the particles individually coated by the end of the program. The gas furnace was invaluable in these tests, and can be considered the grandfather of today’s NTREES and CFEET test stands.
An excellent example of the importance of these tests, and of the careful checkout that each of the Rover reactors received, can be seen with the KIWI-B4 reactor. Initial mockups, both on Honeycomb and in more rigorous Zepo mockups of the reactor, showed that the design had good reactivity and control capability, but while the team at Los Alamos was assembling the actual test reactor, it was discovered that there was so much reactivity the core couldn’t be assembled! Inert material was used in place of some of the fuel elements, and neutron poisons were added to the core, to counteract this excess reactivity. Careful testing showed that the uranium carbide fuel particles suspended in the graphite matrix had undergone hydrolysis, and the hydrogen they took up moderated the neutrons, increasing the reactivity of the core. Later versions of the fuel used larger particles of UC2, individually coated before being distributed through the graphite matrix, to prevent this uptake of hydrogen. Careful testing and assembly of these experimental reactors by the team at Los Alamos ensured the safe testing and operation of the reactors once they reached the Nevada test site, and supported Westinghouse’s design work, Oak Ridge National Laboratory’s manufacturing efforts, and the ultimate full-power testing carried out at Jackass Flats.
Once this series of crude criticality mockups, zero-power testing, assembly, and checkout was completed, each reactor was loaded nozzle-up onto a special rail car that would also act as a test stand and – accompanied by a team of scientists and engineers from both New Mexico and Nevada – transported by train to the test site at Jackass Flats, adjacent to Nellis Air Force Base and the Nevada Test Site, where nuclear weapons testing was done. Once there, a final series of checks was done on the reactors to ensure that nothing untoward had happened during transport, and the reactors were hooked up to test instrumentation and the hydrogen coolant supply for testing.
Problems at Jackass Flats: Fission is the Easy Part!
The testing challenges that the Nevada team faced extended far beyond the nuclear testing that was the primary goal of this test series. Hydrogen is a notoriously difficult material to handle due to its incredibly small molecular size and mass. It seeps through solid metal, valves have to be made with incredibly tight clearances, and when it’s exposed to the atmosphere it is a major explosion hazard. To add to the problems, these were the first days of cryogenic H2 experimentation. Even today, handling cryogenic H2 is far from routine, and the often unavoidable problems with hydrogen as a propellant can be seen in many places – perhaps most spectacularly during the launch of a Delta IV Heavy, a hydrolox (H2/O2) rocket. Upon engine ignition, the rocket appears not to be launching from the pad but exploding on it, due to the outgassing of H2 not only from the pressure relief valves in the tanks, but from seepage through valves, welds, and the tank walls themselves – the rocket catching itself on fire is actually standard operating procedure!
In the late 1950s, these problems were just being discovered – the hard way. NASA’s Plum Brook Research Station in Ohio was a key facility for exploring techniques for handling gaseous and liquid hydrogen safely. Not only did its staff experiment with cryogenic equipment, hydrogen densification methods, and liquid H2 transport and handling, they did materials and mechanical testing on valves, sensors, tanks, and other components, and developed welding, testing, and verification techniques for this extremely difficult, potentially explosive, but also incredibly valuable (due to its low atomic mass – the exact same property that caused the problems in the first place!) propellant, coolant, and nuclear moderator. The other options available for NTR propellant (basically anything that’s a gas at reactor operating temperatures and won’t leave excessive residue) weren’t nearly as good an option due to their lower exhaust velocity – and therefore lower specific impulse.
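Hydrogen's advantage falls straight out of the ideal-rocket relation, where exhaust velocity scales with the square root of chamber temperature divided by propellant molecular mass. A minimal sketch of that scaling – the gamma values, the 2500 K chamber temperature, and the choice of ammonia as the heavier comparison propellant are all representative assumptions, not figures from any Rover test:

```python
import math

# Ideal exhaust velocity v_e = sqrt(2*gamma/(gamma-1) * (R/M) * T_c),
# assuming complete expansion of an ideal gas. Gamma values and the
# chamber temperature are rough, representative guesses.
R = 8.314  # universal gas constant, J/(mol*K)

def exhaust_velocity(gamma, molar_mass_kg, chamber_temp_k):
    return math.sqrt(2 * gamma / (gamma - 1) * R / molar_mass_kg * chamber_temp_k)

# Hydrogen (M = 2 g/mol) vs. a heavier candidate like ammonia (M = 17 g/mol)
# at the same ~2500 K core outlet temperature:
v_h2 = exhaust_velocity(1.4, 0.002, 2500)
v_nh3 = exhaust_velocity(1.3, 0.017, 2500)
print(v_h2, v_nh3, v_h2 / v_nh3)
```

Because the reactor fixes the temperature regardless of what flows through it, the only lever left is molecular mass – which is why hydrogen, for all its handling misery, was worth the trouble.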
Plum Brook is another often-overlooked facility that was critical to the success of not just NERVA, but all current liquid hydrogen fueled systems. I plan on doing another post (this one’s already VERY long) looking into the history of the various facilities involved with the Rover and NERVA program.
Indeed, all the KIWI-A tests and KIWI-B1A used gaseous hydrogen instead of liquid hydrogen, because the equipment that was planned to be used (and would be used in subsequent tests) was delayed by construction problems, welding issues, valve failures, and fires during checkout of the new systems. These teething troubles with the propellant caused major problems at Jackass Flats, and many of the flashiest accidents of the testing program. Hydrogen fires were commonplace, and an accident during the installation of propellant lines on one reactor ended up causing major damage to the test car, the shed it was contained in, and exposed instrumentation, but only minor apparent damage to the reactor itself, delaying that test for a full month while repairs were made (this test also saw two hydrogen fires during testing, a common problem that diminished as the program continued and the methods for handling the H2 improved).
While the H2 coolant was the source of many problems at Jackass Flats, other issues arose because these NTRs used technology that was well beyond bleeding-edge at the time. “New construction methods” doesn’t begin to describe the level of technological innovation these engines required in virtually every area. Materials that had been theoretical chemical engineering possibilities only a few years (sometimes even months!) before were being used to build innovative, very high temperature, chemically and neutronically complex reactors – that also functioned as rocket engines. New metal alloys were developed, new forms of graphite were employed, and experimental methods of coating the fuel elements to prevent hydrogen from attacking the carbon of the fuel element matrix (a major concern, as seen in the KIWI-A reactor, which used unclad graphite plates for fuel) were constantly being adjusted – indeed, clad material experimentation continues to this day, but with advanced micro-imaging capabilities and a half century of materials science and manufacturing experience, the results now are light-years ahead of what was available to the scientists and engineers of the 50s and 60s. Hydrodynamic principles that were only poorly understood, stress and vibrational patterns that couldn’t be predicted, and material interactions at temperatures higher than are experienced in the vast majority of applications caused many problems for the Rover reactors.
One common problem in many of these reactors was transverse fuel element cracking, where a fuel element would split across the narrow axis, disrupting coolant flow through the interior channels and exposing the graphite matrix to the hot H2 (which would ferociously eat away at the graphite, releasing both fission products and unburned fuel into the H2 stream to be carried elsewhere – mostly out of the nozzle, though it turned out the uranium would migrate to the hottest points in the reactor, even against the H2 stream, which could have terrifying implications for accidental fission power hot spots). Sometimes, large sections of the fuel elements would be ejected out of the nozzle, spraying partially burned nuclear fuel into the air – sometimes as large chunks, but almost always with some of the fuel aerosolized. Today this would definitely be unacceptable, but at the time the US government was testing nuclear weapons literally next door to this facility, so it wasn’t considered a cause for major concern.
If this sounds like there were major challenges and significant accidents happening at Jackass Flats, well, in the beginning of the program that was certainly correct. These early problems were also cited in Congress’ decision not to continue funding the program (although, without a manned Mars mission, there was really no reason to use these expensive and difficult-to-build systems anyway). The thing to remember, though, is that these were EARLY tests, with materials that had been a concept in a materials engineer’s imagination only a few years (or sometimes months) beforehand, mechanical and thermal stresses that no one had ever dealt with, and a technology that seemed the only way to send humans to another planet. The Moon was hard enough; Mars was millions of miles further away.
Hot Fire Testing: What Did a Test Look Like?
Nuclear testing is far more complex than just hooking up the test reactor to coolant and instrumentation lines, turning the control drums and hydrogen valves, and watching the dials. Not only are there many challenges in deciding what instrumentation is even possible and where it should be placed, but installing these instruments and collecting data from them was often a challenge as well, especially early in the program.
To get an idea of what a successful hot fire test looks like, let’s look at a single reactor’s test series from later in the program: the NRX A2 technology demonstration test. This was the first NERVA reactor design to be tested at full power by Westinghouse ANL; the others, including KIWI and PHOEBUS, were not technology demonstration tests but proof-of-concept and design development tests leading up to NERVA, and were tested by LASL. The core itself consisted of 1,626 hexagonal prismatic fuel elements. This reactor was significantly different from the XE-PRIME reactor that would be tested five years later. One difference was the hydrogen flow path: after passing through the nozzle, the propellant entered a chamber beside the nozzle and above the axial reflector (the engine was tested nozzle-up; in flight configuration this would be below the reflector), then passed through the reflector to cool it, before being diverted again by the shield, through the support plate, and into the propellant channels of the core before exiting the nozzle.
Two power tests were conducted, on September 24 and October 15, 1964.
With two major goals and 22 lesser goals, the September 24 test packed a lot into its six minutes of half-to-full power operation (the reactor was only at full power for 40 seconds). The major goals were: 1. provide significant information for verifying steady-state design analysis for powered operation, and 2. provide significant information for assessing the reactor’s suitability for operation at the steady-state power and temperature levels required if it was to be a component of an experimental engine system. In addition to these major, but not very specific, goals, a number of more specific ones were laid out, including top-priority goals of evaluating the effects of environmental conditions on the structural integrity of the reactor and its components, core assembly performance evaluation, lateral support and seal performance analysis, core axial support system analysis, outer reflector assembly evaluation, control drum system evaluation, and overall reactivity assessment. The less urgent goals were also more extensive, and included nozzle assembly performance, pressure vessel performance, shield design assessment, instrumentation analysis, propellant feed and control system analysis, nucleonic and advanced power control system analysis, radiological environment and radiation hazard evaluation, the thermal environment around the reactor, in-core and nozzle chamber temperature control system evaluation, reactivity and thermal transient analysis, and test car evaluation.
Several power holds were conducted during the test, at 51%, 84%, and 93-98%, all slightly above the power levels the holds were planned at. This was due to the compressibility of the hydrogen gas (leading to more moderation than planned), issues with the venturi flowmeters used to measure H2 flow rates, and issues with the in-core thermocouples used for instrumentation (a common problem in the program) – a good example of the sorts of unanticipated challenges these tests are meant to uncover. The test length was limited by the availability of hydrogen to drive the turbopump, but despite being a short test it was a sweet one: all of the objectives were met, and a vacuum-equivalent ideal specific impulse of 811 s was determined (low for an NTR, but still nearly twice that of any chemical engine at the time).
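As a sanity check on that 811 s figure, specific impulse converts to effective exhaust velocity by multiplying by standard gravity. The comparison below against a good hydrolox engine of the era uses an assumed benchmark of roughly 430 s vacuum Isp (my illustrative figure, not from the test report):

```python
G0 = 9.80665  # m/s^2, standard gravity used in the definition of Isp

def exhaust_velocity_from_isp(isp_seconds):
    """Effective exhaust velocity in m/s from specific impulse in seconds."""
    return isp_seconds * G0

v_nrx = exhaust_velocity_from_isp(811)   # NRX A2 vacuum-equivalent result
v_chem = exhaust_velocity_from_isp(430)  # assumed 1960s hydrolox benchmark
print(round(v_nrx), round(v_chem), round(v_nrx / v_chem, 2))
```

That works out to roughly 8 km/s of effective exhaust velocity for the NRX A2, versus a bit over 4 km/s for the assumed chemical benchmark – the nearly-two-to-one advantage that justified the whole program.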
The October 15th test was a low-power, low-flow test meant to evaluate the reactor when it was not in high-power, steady-state operation, focusing on behavior at startup and cool-down. The relevant part of the test lasted about 20 minutes, at 21-53 MW of power and a flow rate of 2.27-5.9 kg/s of LH2. As with any system, the state the reactor was designed to operate in was easier to evaluate and model than startup and shutdown, two conditions that every engine has to go through but which are far outside the “ideal” conditions for the system, and operating with liquid hydrogen only added to the unknowns. Only four specific objectives were set for this test: demonstration of stability at low LH2 flow (using dewar pressure as a gauge), demonstration of stability at constant power but with H2 flow variation, demonstration of stability with fixed control drums but variable H2 flow to effect a change in reactor power, and obtaining a reactivity feedback value associated with LH2 at the core entrance. Many of these tests hinge on the fact that the LH2 isn’t just a coolant but a major source of neutron moderation, so the flow rate (and the associated changes in temperature and pressure) of the propellant has impacts beyond just the temperature of the exhaust. This test showed that there were no power or flow instabilities in the low-power, low-flow conditions that would be seen during reactor startup (when the H2 entering the core was at its densest, and therefore most moderating). The predicted behavior and the test results correlated well, especially considering that the instrumentation (like the reactor itself) really wasn’t designed for these conditions, and the majority of the transducers were operating at the extreme low end of their range.
After the October test, the reactor was wheeled down a shunt track to radiologically cool down (allowing the short-lived fission products to decay, reducing the gamma radiation flux coming off the reactor), and then was disassembled in the NRDS hot cell. These post-mortem examinations were an incredibly important tool for evaluating a number of variables, including how much power was generated during the test (based on the distribution of fission products, which varies with a number of factors, but mainly with the power produced and the neutron spectrum the reactor was operating in when they were produced), chemical reactivity issues, mechanical problems in the reactor itself, and several other factors. Unfortunately, disassembling even a simple system without accidentally breaking something is difficult, and this was far from a simple system. A recurring question became “did the reactor break that itself, or did we?” This was especially true of fuel elements, which often broke due to inadequate lateral support along their length, but also often broke at the point where they were joined to the cold end of the core (which usually involved high-temperature, reasonably neutronically stable adhesives).
This issue was illustrated in the A2 test, when multiple broken fuel elements were found that did not have erosion at the break. This is a strong indicator that they broke during disassembly, not during the test itself: hot H2 heavily erodes the carbon in the graphite matrix – and the carbide fuel particles – so erosion at a break is a very good indicator that it happened during a power test. Broken fuel elements were a persistent problem throughout the Rover and NERVA programs (sometimes leading to ejection of the hot-end portion of the elements), and the fact that all of the fueled elements appeared not to have broken during the test was a major victory for the fuel fabricators.
This doesn’t mean the fuel elements were without their problems. Each generation of reactors used different fuel elements, sometimes multiple types in a single core. In this case the propellant channels, fuel element ends, and the tips of the exterior of the elements were clad in NbC, but the full length of the outside of the elements was not, to save mass and avoid overly complicating the neutronic environment of the reactor. Unfortunately, this meant that the small amount of gas that slipped between the filler strips and pyro-tiles placed to prevent this problem could eat away at the middle of the outside of the fuel element (toward the hot end), something known as mid-band corrosion. This occurred mostly on the periphery of the core, and left a characteristic pattern of striations on the fuel elements. Since the areas that had the clad were unaffected, a change was made to fully clad all of the peripheral fuel elements with NbC. Once again the core became more complex and more difficult to model and build, but a particular problem was addressed thanks to empirical data gathered during the test. A number of unfueled, instrumented elements in the core were found to have broken in such a way that it wasn’t possible to conclusively determine whether the break occurred during the test or during disassembly, however, so the integrity of the fuel elements was still in doubt.
The problems associated with these graphite composite fuel elements never really went away during Rover or NERVA: a number of fuel elements known to have broken during the test were found in the PEWEE reactor, the last test of this sort of fuel element matrix (NF-1 used composite – a graphite-carbide mix – or carbide fuel elements; no GC fuel elements were used). The follow-on A3 reactor exhibited a form of fuel erosion known as pin-hole erosion, which the NbC clad was unable to address, forcing the NERVA team to look at alternatives. This was another area where long-term use of the GC fuel elements was shown to be unsustainable beyond the specific mission parameters, and a large part of why the entire NERVA engine was discarded during staging, rather than just the propellant tanks as in modern designs. New clad materials and application techniques show a lot of promise, and GC can be used in a carefully designed LEU reactor, but this isn’t being explored in much depth in most cases (both the LANTR and NTER concepts still use GC fuel elements, with the NTER specifying them exclusively due to fuel swelling issues, but that seems to be the only time it’s actually required).
Worse Than Worst Case: KIWI-TNT
One question that is often asked by those unfamiliar with NTRs is “what happens if it blows up?” The short answer is that it can’t, for a number of reasons. There is only so much reactivity in a nuclear reactor, and it can only be inserted so fast. The amount of reactivity is carefully managed through fuel loading in the fuel elements and strategically placed neutron poisons, and the control systems used for these reactors (in this case, control drums placed around the core in the radial reflector) can only be turned so fast. I recommend checking out the report on safety neutronics in Rover reactors linked at the end of this post if this is something you’d like to look at more closely.
However, during the Rover testing at NRDS one reactor WAS blown up, after significant modifications that would never be made to a flight reactor. This was the KIWI-TNT test (TNT is short for Transient Nuclear Test). The behavior of a nuclear reactor as it approaches a runaway reaction, or a failure of some sort, is studied for all types of reactors, usually in specially constructed test reactors, because the production design of every reactor is highly optimized to prevent this sort of failure from occurring. This was also true of the Rover reactors. However, knowing what a fast excursion would do to the reactor was an important question early in the program, so a test was designed to discover exactly how bad things could get, and to characterize what happened in a worse-than-worst-case scenario. It yielded valuable data on a launch abort that dropped the reactor into the ocean (water being an excellent moderator, making accidental criticality more likely) and on a launch vehicle explosion on the pad, and it also tested the option of destroying the reactor in space after it had exhausted its propellant (something that ended up not being planned for in the final mission profiles).
What was the KIWI-TNT reactor? The last of the KIWI series, its design was very similar to the KIWI-B4A reactor (the predecessor of the NERVA-1 series), which was originally designed as a 1000 MW reactor with a chamber exit temperature of 2000 C. However, a number of things prevented a fast excursion in that design: first, the shims used for the fuel elements were made of tantalum, a neutron poison, to prevent excess reactivity; second, the control drums used stepping motors that were slow enough that a runaway reaction wasn’t possible; finally, this experiment would be done without coolant, which also acted as moderator, so much more reactivity was needed than the B4A design allowed. With the shims removed, enough excess reactivity added that the reactor was less than $1 subcritical with the control drums fully inserted and had $6 of excess reactivity available relative to prompt critical, and the drum rotation rate increased by a factor of 89(!!), from 45 deg/s to 4000 deg/s, the stage was set for this rapid scheduled disassembly on January 12, 1965. The degree of modification required shows how difficult it would be to have an accidental criticality accident in a standard NTR design.
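Reactivity in “dollars” is reactivity normalized by the effective delayed-neutron fraction β_eff: $1 of reactivity is exactly prompt critical, so $6 is deep into prompt-supercritical territory. A quick sketch of the unit conversion and the drum speed-up quoted above – the β_eff value of 0.0065 is a typical assumption for U-235 fueled cores, not the figure from the KIWI-TNT report:

```python
BETA_EFF = 0.0065  # assumed effective delayed-neutron fraction for U-235 fuel

def dollars_to_reactivity(dollars):
    """Convert reactivity in dollars to absolute reactivity (delta-k/k)."""
    return dollars * BETA_EFF

# The $6 insertion above prompt critical used in KIWI-TNT:
rho = dollars_to_reactivity(6)
print(rho)  # ~0.039 delta-k/k

# The control drum speed-up quoted in the text:
print(round(4000 / 45))  # ~89x faster than the standard 45 deg/s drive
```

Below $1, the delayed neutrons (emitted seconds after fission) pace the power rise and give control systems time to react; above $1, prompt neutrons alone sustain the chain reaction and the excursion runs on microsecond-to-millisecond timescales, which is exactly what KIWI-TNT was built to produce.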
The test had six specific goals: 1. measure the reaction history and total fissions produced under a known reactivity insertion and compare them to theoretical predictions, to improve calculations for accident prediction; 2. determine the distribution of fission energy between core heating, vaporization, and kinetic energy; 3. determine the nature of the core breakup, including the degree of vaporization and the particle sizes produced, to evaluate a possible nuclear destruct system; 4. measure the release of fission debris into the atmosphere under known conditions, to better calculate other possible accident scenarios; 5. measure the radiation environment during and after the power transient; and 6. evaluate launch site damage and clean-up techniques for a similar accident, should it occur (although the degree of modification required to the reactor core shows that this is a highly unlikely event, and should an explosive accident occur on the pad it would have been chemical in nature, with the reactor never going critical, so fission products would not be present in any meaningful quantity).
A number of measurements were taken during the test: reactivity time history, fission rate time history, total fissions, core temperatures, core and reflector motion, external pressures, radiation effects, cloud formation and composition, fragmentation and particle studies, and the geographic distribution of debris. An angled mirror above the reactor core (where the nozzle would have been if propellant were being fed into the reactor) was used in conjunction with high-speed cameras at the North bunker to image the hot end of the core during the test, and a number of thermocouples were placed in the core.
As can be expected, this was a very short test, with a total of 3.1×10^20 fissions achieved after only 12.4 milliseconds. The result was a highly unusual explosion, not consistent with either a chemical or a nuclear explosion. The core temperature exceeded 17,500 C in some locations, vaporizing approximately 5-15% of the core (the majority of the rest either burned in the air or was aerosolized into the effluent cloud), and the test produced 150 MW-sec (150 MJ) of kinetic energy, about the same amount of kinetic energy as approximately 100 pounds of high explosive (although because this explosion was caused by rapid overheating rather than chemical combustion, getting the same blast effect from chemical explosives would take considerably more HE). Material in the core was observed moving at 7300 m/sec before it came into contact with the pressure vessel, and the largest intact piece of the pressure vessel (a 0.9 sq. m, 67 kg fragment) was flung 229 m from the test location. There were some issues with instrumentation in this test, namely with the pressure transducers used to measure the shock wave: all but two of these instruments (placed 100 ft away) recorded not the pressure wave but an electromagnetic signal at the time of peak power (the two that did work recorded a 3-5 psi overpressure).
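As a quick sanity check on these figures, we can convert the reported kinetic energy into a TNT-equivalent mass using the standard 4.184 MJ/kg TNT equivalence (this is just back-of-the-envelope arithmetic, not a figure from the test reports):

```python
# Rough consistency check on the KIWI-TNT kinetic energy figure:
# 150 MW-sec = 150 MJ; express it as a TNT-equivalent mass.
TNT_MJ_PER_KG = 4.184   # standard TNT energy equivalence
LB_PER_KG = 2.20462

kinetic_energy_mj = 150.0
tnt_kg = kinetic_energy_mj / TNT_MJ_PER_KG
tnt_lb = tnt_kg * LB_PER_KG
print(f"{tnt_kg:.1f} kg TNT equivalent (~{tnt_lb:.0f} lb)")
```

This lands around 80 lb of TNT equivalent, in line with the "approximately 100 pounds of high explosive" comparison (and with the note that matching the blast effect would actually take more HE).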
Radioactive Release during Rover Testing Prequel: Radiation is Complicated
Radiation is a major source of fear for many people, and is the source of a huge amount of confusion in the general population. To be completely honest, when I look into the nitty gritty of health physics (the study of radiation’s effects on living tissue), I spend a lot of time re-reading most of the documents, because it is easy to get confused by the terms that are used. To make matters worse, especially for the Rover documentation, everything is in the old, outdated measures of radioactivity. Sorry, SI users out there: all the AEC and NASA documentation uses Ci, rad, and rem, and converting all of it would be a major headache. If someone would like to volunteer to help me convert everything to common-sense units, please contact me, I’d love the help! Remember, though, that the natural environment is radioactive, and the Sun emits a prodigious amount of radiation, only some of which is absorbed by the atmosphere. Indeed, there is evidence that the human body REQUIRES a certain amount of radiation to maintain health, based on a number of studies done in the Soviet Union using completely non-radioactive, specially prepared caves and diets.
Exactly how much radiation is healthy and how much is harmful is a matter of intense debate, and not much study, and three main competing models have arisen. The first, the linear no-threshold (LNT) model, is the law of the land: it holds that each rad (or gray, we’ll get to that below) of radiation increases a person’s chance of getting cancer by a certain percentage in a linear fashion, no matter whether the dose arrives in one incident (which is usually a bad thing) or is spread evenly throughout the year. Regulations built on LNT therefore set a maximum amount of radiation allowable to a person over a given timeframe (usually quarters and years), corresponding to a maximum acceptable increase in that person’s chance of getting cancer. This doesn’t take into account the human body’s natural repair mechanisms, though, which can replace damaged cells (no matter how they’re damaged), which leads many health physicists to see issues with the model, even as they work within it for their professions.
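The bookkeeping under a pure LNT reading is simple enough to sketch. The ~5%-per-sievert nominal risk coefficient below is an illustrative assumption (roughly the commonly cited ICRP nominal value), not a figure from the Rover-era documentation:

```python
# Minimal sketch of linear no-threshold (LNT) risk bookkeeping.
# RISK_PER_SV is an illustrative assumption (~ICRP nominal coefficient).
RISK_PER_SV = 0.05   # assumed: ~5% excess lifetime cancer risk per sievert
REM_TO_SV = 0.01     # 1 rem = 0.01 Sv

def lnt_excess_risk(dose_rem: float) -> float:
    """Excess cancer risk from a dose, assuming pure linearity with no threshold."""
    return dose_rem * REM_TO_SV * RISK_PER_SV

# A 5 rem/yr occupational whole-body dose, under this pure-LNT sketch:
print(f"{lnt_excess_risk(5):.4%}")  # -> 0.2500%
```

The linear-threshold and hormesis models discussed below would both report zero (or negative) excess risk for doses this low; the disagreement is entirely about the shape of the curve near the origin.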
The second model is known as the linear threshold model, which holds that low-level radiation (under the threshold of the body’s repair mechanisms) shouldn’t count toward the likelihood of getting cancer. After all, if you replace the Formica countertop in your kitchen with a granite one, the natural radioactivity in the granite will expose you to more radiation, but there’s no difference in the likelihood that you’re going to get cancer from the change. Ramsar, Iran (which has the highest natural background radiation of any inhabited place on Earth) doesn’t have higher cancer rates (in fact, they’re slightly lower), so why not set the threshold where the human body’s normal repair mechanisms can control any damage, and THEN start applying the linear model of increasing cancer likelihood?
The third model, hormesis, takes this one step further. In a number of cases, such as Ramsar, and an apartment building in Taiwan which was built with steel contaminated with radioactive cobalt (exposing the residents to a MUCH higher than average chronic, or over-time, dose of gamma radiation), people who were exposed to higher than typical doses of radiation had lower cancer rates once other known carcinogenic factors were accounted for. This is evidence that increased exposure to radiation may in fact stimulate the immune system, make a person healthier, and reduce the chance of that person getting cancer! A number of places in the world actually use radioactive sources as places of healing, including radium springs in Japan, Europe, and the US, and the black monazite sands in Brazil. There has been very little research done in this area, though, since the standard model of radiation exposure says that this is effectively giving someone a much higher risk of cancer.
I am not a health physicist. It has become something of a hobby for me in the last year, but this is a field that is far more complex than astronuclear engineering. As such, I’m not going to weigh in on the debate as to which of these three theories is right, and would appreciate it if the comments section on the blog didn’t become a health physics flame war. Talking to friends of mine that ARE health physicists (and whom I consult when this subject comes up), I tend to lean somewhere between the linear threshold and hormesis theories of radiation exposure, but as I noted before, LNT is the law of the land, and so that’s what this blog is going to mostly work within.
Radiation (in the context of nuclear power, especially) starts with the emission of either a particle or ray from a radioisotope, an unstable nucleus of an atom. This is measured with the curie (Ci), which is a measure of how much radioactivity IN GENERAL is released: 3.7×10^10 emissions (whether alpha, beta, neutron, or gamma) per second. SI uses the becquerel (Bq), which is simple: one decay per second = 1 Bq, so 1 Ci = 3.7×10^10 Bq. Because the becquerel is so small, megabecquerels (MBq) are often used; unless you’re looking at highly sensitive laboratory experiments, even a dozen Bq is effectively nothing.
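Since the Rover documentation is all in the old units, the conversion comes up constantly. A minimal pair of helpers (using the definitions above, nothing more):

```python
# Activity unit conversions: curie (Ci) <-> becquerel (Bq).
CI_TO_BQ = 3.7e10  # one curie = 3.7e10 decays per second, by definition

def curies_to_becquerels(ci: float) -> float:
    return ci * CI_TO_BQ

def becquerels_to_curies(bq: float) -> float:
    return bq / CI_TO_BQ

# The total Rover/Tory-II release cited later in this post, in SI terms:
print(curies_to_becquerels(843_000))  # -> 3.1191e+16 (Bq)
```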
Each different type of radiation affects both materials and biological systems differently, though, so there’s another unit used to describe the energy that radiation deposits in a material, the absorbed dose: the rad, whose SI equivalent is the gray (Gy). The rad is defined as 100 ergs of energy deposited in one gram of material, and the gray as 1 joule of radiation absorbed by one kilogram of matter, which means that 1 rad = 0.01 Gy. Absorbed dose is mostly quoted for inert materials, such as reactor components, shielding materials, etc. If it’s being used for living tissue, that’s generally a VERY bad sign, since it’s pretty much only used that way in the case of a nuclear explosion or major reactor accident: it describes an acute (sudden) dose of radiation, not longer-term exposures.
This is because many things go into how bad a particular radiation dose is: a gamma beam that goes through your hand, for instance, is far less damaging than one that goes through your brain or your stomach. This is where the final measurement comes into play: NASA and AEC documentation uses the rem (roentgen equivalent man), while the SI unit is the sievert (Sv). This is the dose equivalent, which normalizes the different radiation types’ effects on the various tissues of the body by applying a quality factor to each type of radiation for each part of the body exposed to it. If you’ve ever wondered what health physicists do, it’s all the hidden work that goes on when that quality factor is applied.
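As a sketch of how that quality-factor bookkeeping works (the factors below are commonly cited illustrative values, not the specific ones used in the AEC-era documentation, and real neutron factors vary strongly with energy):

```python
# Hedged sketch: absorbed dose (rad) -> dose equivalent (rem) via
# radiation quality factors. Values below are illustrative assumptions.
QUALITY_FACTOR = {
    "gamma": 1.0,
    "beta": 1.0,
    "neutron_fast": 10.0,  # strongly energy-dependent in practice
    "alpha": 20.0,
}

def dose_equivalent_rem(absorbed_dose_rad: float, radiation: str) -> float:
    return absorbed_dose_rad * QUALITY_FACTOR[radiation]

print(dose_equivalent_rem(1.0, "gamma"))  # 1 rad of gamma -> 1 rem
print(dose_equivalent_rem(1.0, "alpha"))  # 1 rad of alpha -> 20 rem
```

The same rad of absorbed energy can thus correspond to wildly different dose equivalents, which is why the rem/sievert, not the rad/gray, is what appears in the exposure limits below.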
The upshot of all of this is the way that radiation dose is assessed. A number of variables were assessed at the time (and still are assessed, with this as an effective starting point for ground testing, where release of radioactivity to the general public is a minuscule but necessary-to-assess consideration). Exposure was broadly divided into three types: full-body (5 rem/yr for an occupational worker, 0.5 rem/yr for the public); skin, bone, and thyroid exposure (30 rem/yr occupational, 3 rem/yr for the public); and other organs (15 rem/yr occupational, 1.5 rem/yr for the public). In 1971, the guidelines for the general population were changed to 0.5 rem/yr full-body and 1.5 rem/yr for other organs, but as has been noted (including in the NRDS Effluent Final Report) this was more an administrative convenience than a biomedical need.
Additional considerations were made for discrete fuel element particles ejected from the core; there was a less-than-one-in-ten-thousand chance that a person would come in contact with one, and a number of factors were considered in determining this probability. The biggest concern was that skin contact could result in a lesion, at exposures above 750 rads (an energy-deposition measure rather than an expressly medical one, because only one type of tissue is being assessed).
Finally, and perhaps most complex to address, is the aerosolized effluent from the exhaust plume, which included both gaseous fission products (which were not captured by the clad materials used) and particles small enough to float through the atmosphere for longer durations, and possibly small enough to be inhaled. The relevant limits of radiation exposure for these tests for off-site populations were 170 mrem/yr whole-body gamma dose and a thyroid exposure dose of 500 mrem/yr. The highest full-body dose recorded in the program was 20 mrem, in 1966, and the highest thyroid dose recorded was 72 mrem, in 1965.
The Health and Environmental Impact of Nuclear Propulsion Development at Jackass Flats
So how much radioactive material did these tests actually release? Considering the sparsely populated area, few people – if any – who weren’t directly associated with the program received any dose of radiation from aerosolized (inhalable, fine particulate) radioactive material. By the regulations of the day, no dose of greater than 15% of the allowable AEC/FRC (Federal Radiation Council, an early federal health physics advisory board) dose for the general public was ever estimated or recorded. The actual release of the fission product inventory into the atmosphere (with the exception of cadmium-115) was never more than 10%, and often less than 1% (115Cd release was 50%). The vast majority of these fission products are very short-lived, decaying in minutes or days, so there was not much – if any – chance for migration of fallout (fission products bound to atmospheric dust that then fell along the exhaust plume of the engine) off the test site. According to a 1995 study by the Department of Energy, the total radiation release from all Rover and Tory-II nuclear propulsion tests was approximately 843,000 curies. To put this in perspective, a nuclear explosive produces roughly 30,300,000 curies per kiloton (depending on the size and efficiency of the explosive), so the total release was equivalent to that of a roughly 30 ton TNT-equivalent nuclear explosion.
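The arithmetic behind that comparison is worth making explicit, using only the two figures quoted above:

```python
# Reproducing the "~30 ton TNT equivalent" comparison from the cited figures.
total_release_ci = 843_000        # total Rover/Tory-II release (1995 DOE study)
ci_per_kiloton = 30_300_000       # fission product curies per kiloton of yield

kilotons = total_release_ci / ci_per_kiloton
print(f"{kilotons * 1000:.0f} tons TNT-equivalent of fission")  # -> 28 tons
```

That is, roughly a decade of nuclear rocket testing released the fission product inventory of a yield smaller than many conventional-explosive industrial accidents.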
This release came from either migration of the fission products through the metal clad and into the hydrogen coolant, or due to cladding or fuel element failure, which resulted in the hot hydrogen aggressively attacking the graphite fuel elements and carbide fuel particles.
The amount of fission product released is highly dependent on the temperature and power level the reactors were operated at, the duration of the test, how quickly the reactors were brought to full power, and a number of other factors. Sampling of the reactor effluent was carried out in three ways: by aircraft fitted with special sensors for both radiation and particulate matter, by the “Elephant Gun” effluent sampler placed in the exhaust stream of the engine, and by post-mortem chemical analysis of the fuel elements to determine fuel burnup, migration, and fission product inventory. One thing to note is that effluent release in the KIWI tests was not nearly as well characterized as in the later Phoebus, NRX, Pewee, and Nuclear Furnace tests, so the data for those later tests is not only more accurate but also far more complete.
Two sets of aircraft data were collected. The first (by LASL/WANL) flew fixed heights and transects within six miles of the effluent plume, collecting particulate effluent which was used (combined with known release rates of 115Cd and post-mortem analysis of the reactor) to determine the total fission product release at those altitudes and vectors; this method was discontinued in 1967. The second (NERC) method used a fixed coordinate system to measure cloud size and density, utilizing a mass particulate sampler, charcoal bed, cryogenic sampler, external radiation sensor, and other equipment. Because these samples were taken more than ten miles from the reactor tests, however, it’s quite likely that much of the fission product inventory had either decayed or come down to the ground as fallout by the time the cloud reached the aircraft. This technique was used after 1967.
The next sampling method also came online in 1967: the Elephant Gun. This was a probe stuck directly into the hot hydrogen coming out of the nozzle, collecting several moles of exhaust at several points throughout the test, which were then stored in sampling tanks. Combined with hydrogen temperature and pressure data, acid-leaching analysis of fission products, and gas sample data, this provided a more direct estimate of the fission product release, as well as a better view of the gaseous fission products released by the engine.
Finally, after testing and cool-down, each engine was put through a rigorous post-mortem inspection. Here, the amount of reactivity lost compared to the amount of uranium present, power levels and test duration, and chemical and radiological analysis were used to determine which fission products were present (and in which ratios) compared to what SHOULD have been present. This technique enhanced understanding of reactor behavior, neutronic profile, and actual power achieved during the test as well as the radiological release in the exhaust stream.
Radioactive release from these engine tests varied widely, as can be seen in the table above; however, the total amount released by the “dirtiest” of the reactor tests, the second Phoebus 1B test, was only 240,000 curies, and the majority of the tests released less than 2,000 curies. Another thing that varied widely was HOW the radiation was released. The immediate area (within a few meters) of the reactor was exposed to radiation during operation, in the form of both neutron and gamma radiation. The exhaust plume contained not only the hydrogen propellant (which wasn’t in the reactor long enough to absorb additional neutrons and turn into deuterium, much less tritium, in any meaningful quantity), but also the gaseous fission products (most of which, such as 135Xe, the human body isn’t able to absorb) and – if fuel element erosion or breakage occurred – a certain quantity of particles that might be irradiated or contain burned or unburned nuclear fuel.
These particles, and the cloud of effluent created by the propellant stream during the test, were the primary concern for both humans and the environment from these tests. The reason is that the radiation is able to spread much further this way (once emitted, and all other things being equal, radiation travels in a straight line), and most importantly the material can be absorbed by the body through inhalation or ingestion, and some of these elements are not just radioactive, but chemically toxic as well. As an additional complication, while alpha and beta radiation are generally not a problem for the human body from outside (your skin stops both particles easily), when the emitters are IN the human body it’s a whole different ballgame. This is especially true of the thyroid, which is more sensitive to radiation than most organs, and soaks up iodine (131I is a fairly active radioisotope) like nobody’s business. This is why, after a major nuclear accident (or a theoretical nuclear strike), iodine tablets containing a radio-inert isotope are distributed: once the thyroid is full, the excess radioactive iodine passes through the body, since nothing else in the body can take it up and store it.
There are quite a few factors that go into how far this particulate will spread, including particle mass, temperature, velocity, altitude, wind (at various altitudes), moisture content of the air (particles could be absorbed into water droplets), plume height, and a host of other factors. The NRDS Effluent Program Final Report goes into great depth on the modeling used, and the data collection methods used to collect data to refine these estimates.
Another thing to consider, in the context of Rover in particular, is that open-air testing of nuclear weapons was taking place in the area immediately surrounding the Rover tests, and those tests released FAR more fallout (by orders of magnitude), so Rover contributed only a very minor fraction of the radionuclides released in the area at the time.
The offsite radiation monitoring program, which included sampling of milk from cows to estimate thyroid exposure, collected data through 1972, and all exposures measured were well below the exposure limits set on the program.
Since we looked at the KIWI-TNT test earlier, let’s look at the environmental effects of that particular test. After all, a nuclear rocket blowing up has to be the most harmful test, right? Surprisingly, no: ten other tests released more radioactivity than KIWI-TNT. The discrete particles didn’t travel more than 600 feet from the explosion. The effluent cloud was recorded from 4,000 feet to 50 miles downwind of the test site, and aircraft monitoring the cloud were able to track it until it went out over the Pacific Ocean (although by that point it was far less radioactive). By the time the cloud had moved 16,000 feet from the test site, the highest whole-body dose measured from the cloud was 1.27×10^-3 rad (at station 16-210), and the same station registered an inhalation thyroid dose of 4.55×10^-3 rad. This shows that even the worst credible accident possible with a NERVA-type reactor would have only a negligible environmental and biological impact, from either the radiation released or the explosion of the reactor itself, further attesting to the safety of this engine type.
If you’re curious about more in-depth information about the radiological and environmental effects of the KIWI-TNT tests, I’ve linked the (incredibly detailed) reports on the experiment at the end of this post.
The Results of the Rover Test Program
Throughout the Rover test program, the fuel elements were the source of most of the non-hydrogen-related issues. While other problems, such as instrumentation failures, were also encountered, the main headache was the fuel elements themselves.
A lot of the problems came down to the mechanical and chemical properties of the graphite fuel matrix. Graphite is easily attacked by hot H2, leading to massive fuel element erosion, and a number of solutions were experimented with throughout the test series. With the exception of the KIWI-A reactor (which used unclad fuel plates, and was heavily affected by the propellant), each of the reactors featured fuel elements that were clad to a greater or lesser extent, using a variety of methods and materials. Niobium carbide (NbC) was most often the favored clad material, but other options, such as tungsten, were also explored.
Chemical vapor deposition was an early option, but it proved infeasible to consistently and securely coat the interior of the propellant channels, and differential thermal expansion was a major challenge: as the fuel elements heated, they expanded at a different rate than the coating did. This led to cracking, and in some cases flaking off, of the clad material, exposing the graphite to the propellant to be eroded away. Machined inserts were a more reliable clad form, but were more complex to install.
The exterior of the fuel elements originally wasn’t clad, but as time went on it became obvious that this would need to be addressed as well. Some propellant would leak between the prisms, leading to erosion of the outside of the fuel elements. This changed the fission geometry of the reactor, led to fission product and fuel release through erosion, and weakened the already somewhat fragile fuel elements. Usually, though, vapor deposition of NbC was sufficient to eliminate this problem.
Fortunately, these issues are exactly the sort of thing that CFEET and NTREES are able to test, and these systems are far more economical to operate than a hot-fired NTR is. It is likely that by the time a hot-fire test is being conducted, the fuel elements will be completely chemically and thermally characterized, so these issues shouldn’t arise.
The other issue with the fuel elements was mechanical failure, due to a number of problems. The pressure across the system changes dramatically, which leads to differential stress along the length of the fuel elements. The original, minimally supported fuel elements would often undergo transverse cracking, leading to blockage of propellant channels and erosion. In a number of cases, after a fuel element broke this way, its hot end would be ejected from the core.
This led to the development of a structure that is still found in many NTR designs today: the tie tube. This is a hexagonal prism, the same size as the fuel elements, which supports the adjacent fuel elements along their length. In addition to being a means of support, these are also a major source of neutron moderation, due to the fact that they’re cooled by hydrogen propellant from the regeneratively cooled nozzle. The hydrogen would make two passes through the tie tube, one in each direction, before being injected into the reactor’s cold end to be fed through the fuel elements.
The tie tubes didn’t eliminate all of the mechanical issues that the fuel element faced. Indeed, even in the NF-1 test, extensive fuel element failure was observed, although none of the fuel elements were ejected from the core. However, new types of fuel elements were being tested (uranium carbide-zirconium carbide carbon composite, and (U,Zr)C carbide), which offered better mechanical properties as well as higher thermal tolerances.
Current NTR designs still usually incorporate tie tubes, especially because the low-enriched uranium that is the main notable difference in NASA’s latest design requires a much more moderated neutron spectrum than a HEU reactor does. However, the ability to support the fuel element mechanically along its entire length (rather than just at the cold end, as was common in NERVA designs) does also increase the mechanical stability of the reactor, and helps maintain the integrity of the fuel elements.
The KIWI-B and Phoebus reactors were successful enough designs to use as starting points for the NERVA engines. NERVA stands for Nuclear Engine for Rocket Vehicle Application, and the program took place in two parts: NERVA-1, or NERVA-NRX, developed the KIWI-B4D reactor into a more flight-prototypic design, including balance-of-plant optimization, enhanced documentation of the workings of the reactor, and coolant flow studies. The second group of engines, NERVA-2, was based on the Phoebus 2 type of reactor from Rover, and was finally developed into the NERVA-XE, which was meant to be the engine that would power the manned mission to Mars. The NERVA XE-Prime test was of the engine in flight configuration: the turbopumps, coolant tanks, instrumentation, and even the reactor’s orientation (nozzle down, instead of up) were all configured the way they would have been during the mission.
The XE-Prime test series lasted nine months, from December 1968 to September 1969, and involved 24 startups and shutdowns of the reactor. The 1140 MW reactor operated at a 2272 K exhaust temperature and produced 247 kN of thrust at 710 seconds of specific impulse. The series included new startup techniques from cold-start conditions and verification of the reactor control systems – including using different subsystems to manipulate the power and operating temperature of the reactor – and demonstrated that the NERVA program had successfully produced a flight-ready nuclear thermal rocket.
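Those performance figures are internally consistent, which we can check with the standard rocket relation mdot = F / (Isp · g0):

```python
# Cross-checking the XE-Prime figures: the thrust and specific impulse
# quoted above imply a particular hydrogen mass flow rate.
G0 = 9.80665          # standard gravity, m/s^2
thrust_n = 247_000    # 247 kN
isp_s = 710           # specific impulse, seconds

mdot = thrust_n / (isp_s * G0)
print(f"implied hydrogen flow: {mdot:.1f} kg/s")  # -> 35.5 kg/s
```

A flow of roughly 35 kg/s of liquid hydrogen is in the range the NERVA turbopumps were built to deliver, so the numbers hang together.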
Ending an Era: Post-Flight Design Testing
Toward the end of the Rover program, the engine design itself had been largely finalized, with the NERVA XE-Prime test demonstrating an engine tested in flight configuration (with all the relevant support hardware in place, and the nozzle pointing down), however, some challenges remained for the fuel elements themselves. In order to have a more cost-effective testing program for fuel elements, two new reactors were constructed.
The first, Pewee, was a smaller (25 klbf, the same thrust class as NASA’s new NTR) nuclear rocket engine, designed so that its core could be replaced for multiple rounds of testing, although it was only used once before the cancellation of the program – but not before achieving the highest specific impulse of any of the Rover engines. This reactor was never tested outside of a breadboard configuration, because it was never meant to fly. Instead, it was a cost-saving measure for NASA and the AEC: due to its smaller size it was much cheaper to build, and due to its lower propellant flow rate it was also much easier to test. This meant that experimental fuel elements that had undergone thermal and irradiation testing could be tested in a fission-powered, full-flow environment at lower cost.
The second was the Nuclear Furnace, which mimicked the neutronic environment and propellant flow rates of the larger NTRs, but was not configured as an engine. This reactor was also the first to incorporate an effluent scrubber, capturing the majority of the non-gaseous fission products and significantly reducing the radiological release into the environment – something we’re going to look at more in depth in the next post, as it remains a possible method of doing hot-fire testing for a modern NTR. It also achieved the highest operating temperatures of any of the reactors tested in Nevada, meaning that the thermal stresses on the fuel elements were higher than would be experienced in a full-power burn of an actual NTR. Again, it was designed for repeated re-use in order to maximize the financial return on the reactor’s construction, but was only used once before the cancellation of the program. The fuel elements were tested in separate cans, and none of them were the graphite-matrix fuel form used in earlier engines: instead, composite and carbide fuel elements, which had been under development but not extensively used in Rover or NERVA reactors, were tested.
Westinghouse Astronuclear Laboratory also proposed a design based on the NERVA XE, called the PAX reactor, which would have had a replaceable core, but this never left the drawing board. Again, the focus had shifted toward lower-cost, more easily maintained experimental NTR test stands, although this one was much closer to flight configuration. It would have been very useful: not only would the fuel have been subjected to a very similar radiological and chemical environment, but the mechanical linkages, hydrogen flow paths, and resulting harmonic and gas-dynamic issues could have been evaluated in a near-prototypic environment. However, this reactor was never built.
As we’ve seen, hot-fire testing was something that the engineers involved in the Rover and NERVA programs were exceptionally concerned about. Yes, there were radiological releases into the environment well above what would be considered acceptable today, but compared to the releases from the open-air nuclear weapons tests occurring in the immediate vicinity, they were minuscule.
Today, though, these releases would be unacceptable. So, in the next blog post we’re going to look at the options, and restrictions, for a modern NTR hot-fire testing facility, including a look at the proposals over the years and NASA’s current plan for NTR testing. This will include the exhaust filtration system on the Nuclear Furnace, a more complex (but also more effective) filtering system proposed for the SNTP pebble-bed reactor (Timberwind), a geological filtration concept called SAFE, and a full exhaust capture and combustion system that could be installed at NASA’s current rocket test facility at Stennis Space Center.
This post is already started, and I hope to have it out in the next few weeks. I look forward to hearing all your feedback, and if there are any more resources on this subject that I’ve missed, please share them in the comments below!
Los Alamos Pajarito Site
Los Alamos Critical Assemblies Facility, LA-8762-MS, by R. E. Malenfant, 1981
Thirty-Five Years at Pajarito Canyon Site, LA-7121-H, Rev., by Hugh Paxton
A History of Critical Experiments at Pajarito Site, LA-9685-H, by R.E. Malenfant, 1983
Environmental Impacts and Radiological Release Reports
NRDS Nuclear Rocket Effluent Program, 1959-1970; NERC-LV-539-6, by Bernhardt et al., 1974
Offsite Monitoring Report for NRX-A2; 1965
Radiation Measurements of the Effluent from the Kiwi-TNT Experiment; LA-3395-MS, by Henderson et al., 1966
Environmental Effects of the KIWI-TNT Effluent: A Review and Evaluation; LA-3449, by R. V. Fultyn, 1968
Technological Development and Non-Nuclear Testing
A Review of Fuel Element Development for Nuclear Rocket Engines; LA-5931, by J.M. Taub
Hot Fire Testing
Rover Nuclear Rocket Engine Program: Overview of Rover Engine Tests; N92-15117, by J.L. Finseth, 1992
Nuclear Furnace 1 Test Report; LA-5189-MS, W.L. Kirk, 1973
KIWI-Transient Nuclear Test; LA-3325-MS, 1965
Kiwi-TNT Explosion; LA-3551, by Roy Reider, 1965
An Analysis of the KIWI-TNT Experiment with MARS Code; Journal of Nuclear Science and Technology, by Hirakawa et al., 1968
Safety Neutronics for Rover Reactors; LA-3558-MS, Los Alamos Scientific Laboratory, 1965
The Behavior of Fission Products During Nuclear Rocket Reactor Tests; LA-UR-90-3544, by Bokor et al, 1996