Hello, and welcome back to Beyond NERVA! Before we begin, I would like to announce that our Patreon page, at https://www.patreon.com/beyondnerva, is live! This blog consumes a considerable amount of my time, and being able to pay my bills is of critical importance to me. If you are able to support me, please consider doing so. The reward tiers are still very much up for discussion with my Patrons due to the early stage of this part of the Beyond NERVA ecosystem, but I can only promise that I will do everything I can to make it worth your support! Every dollar counts, both in terms of the financial and motivational support!
Today, we continue our look at the collaboration between the US and the USSR/Russia involving the Enisy reactor: Topaz International. Today, we’ll focus on the transfer from the USSR (which became Russia during this process) to the US, which was far more drama-ridden than I ever realized, as well as the management and bureaucratic challenges and amusements that occurred during the testing. Our next post will look at the testing program that occurred in the US, and the changes to the design once the US got involved. The final post will overview the plans for missions involving the reactors, and the aftermath of the Topaz International Program, as well as the recent history of the Enisy reactor.
For clarification: in this blog post (and the next one), the reactor will mostly be referred to as Topaz-II; however, it's the same as the Enisy (Yenisey is another common spelling) reactor discussed in the last post. Some modifications were made by the Americans over the course of the program, which will be covered in the next post, but the basic reactor architecture is the same.
When we left off, we had looked at the testing history within the USSR. There is conflicting information about how the US entered the list of customers for the Enisy reactor. According to one document (Topaz-II Design History, Voss, linked in the references), the USSR approached a private (unnamed) US company in 1980, but the company did not purchase the reactor, instead forwarding the offer up the chain in the US; this account offers few details beyond that. According to another paper (US-Russian Cooperation… TIP, Dabrowski 2013, also linked), the exchange grew out of frustration within the Department of Defense over the development of the SP-100 reactor for the Strategic Defense Initiative. We'll follow the second, more fleshed-out narrative as the beginning of the official exchange of technology between the USSR (and soon after, Russia) and the US.
The Topaz International Program (TIP) was the final name for a number of programs that ended up coming under the same umbrella: the Thermionic System Evaluation Test (TSET) program, the Nuclear Electric Propulsion Space Test Program (NEPSTP), and some additional materials testing as part of the Thermionic Fuel Element Verification Program (TFEVP). We’ll look at the beginnings of the overall collaboration in this post, with the details of TSET, NEPSTP, TFEVP, the potential lunar base applications, and the aftermath of the Topaz International Program, in the next post.
Let's start, though, with the official beginnings of the TIP, and the challenges involved in bringing the test articles, reactors, and test stands to the US during one of the most politically complex times in modern history. One thing to note here: this was most decidedly not the US just buying a set of test beds, reactor prototypes, and flight units (all unfueled); this was a true international technical exchange. The American and Soviet (later Russian) organizations involved were true collaborators at every level of the program. The Russian head of the program, Academician Nikolay Nikolayevich Ponomarev-Stepnoy, remained highly appreciative of the effort put into the program by his American counterparts as late as this decade, when he was still working to launch the reactor that resulted from the TIP – because it's not only an engineering masterpiece, but could still perform a very useful role in space exploration even today.
The Beginnings of the Topaz International Program
While the US had invested in the development of thermionic power conversion systems in the 1960s, the funding cuts in the 1970s that affected so many astronuclear programs also bit into the thermionic power conversion programs, leading to their cancellation or diminution to the point of being insignificant. There were several programs run investigating this technology, but we won’t address them in this post, which is already going to run longer than typical even for this blog! An excellent resource for these programs, though, is Thermionics Quo Vadis by the Defense Threat Reduction Agency, available in PDF here: https://www.nap.edu/catalog/10254/thermionics-quo-vadis-an-assessment-of-the-dtras-advanced-thermionics (paywall warning).
Our story begins in detail in 1988. The US was at the time heavily invested in the Strategic Defense Initiative (SDI), whose main in-space nuclear power supply was to be the SP-100 reactor system (another reactor that we'll be covering in a Forgotten Reactors post or two). However, the SP-100 was growing in both cost and development time, leading certain key players in the decision-making process, including Dr. Richard Verga of the Strategic Defense Initiative Organization (SDIO), the organizational lynchpin of the SDI, to look elsewhere: either to meet the specific power needs of the SDI directly, or to find a fission power source that could serve as a test-bed for the SDI's technologies.
Investigations into the technological development of other nations' astronuclear capabilities led Dr. Verga to realize that the most advanced designs were those of the USSR, which had just launched the two Topol-powered Plasma-A satellites. This led him to invite a team of Soviet space nuclear power program personnel to the Eighth Albuquerque Space Nuclear Power Symposium (the predecessor to today's Nuclear and Emerging Technologies for Space, or NETS, conference, which just wrapped up at the time of this writing) in January of 1991. The invitation was accepted, and the team brought a mockup of the Topaz with them. The night after their presentation, Academician Nikolay Nikolayevich Ponomarev-Stepnoy, the Russian head of the Topol program, along with his team of visiting academicians, met with Joe Wetch, the head of Space Power Incorporated (SPI, a company made up mostly of SNAP veterans working to make space fission power plants a reality), and they came to a general understanding: the US should buy this reactor from the USSR – assuming they could get both governments to agree to the sale. The terms of this "sale" would take significant political and bureaucratic wrangling, as we'll see, and sadly the problems started less than a week later, thanks to the Soviets' generosity in bringing a mockup of the Topaz reactor with them. While the researchers were warmly welcomed, and they themselves seemed to enjoy their time at the conference, when it came time to leave a significant bureaucratic hurdle was placed in their path.
This mockup, and the headaches surrounding taking it back home with the researchers, were a harbinger of things to come. Although the mockup was non-functional, the Nuclear Regulatory Commission claimed that, since it could theoretically be modified to be functional (a claim I haven't found any evidence for, though it is theoretically possible), it was a "nuclear utilization facility" which could not be shipped outside the US. Five months later, and with the direct intervention of numerous elected officials, including US Senator Pete Domenici, the mockup was finally returned to Russia. This decision by the NRC led to a different approach to importing further reactors from the USSR and Russia when the time came. Whatever damage the incident caused to the newly-minted (hopeful) partnership was largely weathered thanks to the interpersonal relationships that had been developed in Albuquerque.
Teams of US researchers (including Susan Voss, who was the major source for the last post) traveled to the USSR to inspect the facilities used to build the Enisy (named after the river in Siberia). These visits started in Moscow, with Drs Wetch and Britt of SPI, when a revelation came to the American astronuclear establishment: there wasn't one thermionic reactor program in the USSR, but two, and the more promising one was available for potential export and sale!
These visits continued, and personal relationships between the team members from both sides of the Iron Curtain grew. Due to headaches and bureaucratic difficulties in getting technical documentation translated effectively in the timeframe that the program required, often it was these interpersonal relationships that allowed the US team to understand the necessary technical details of the reactor and its components. The US team also visited many of the testing and manufacturing locations used in the production and development of the Enisy reactor (if you haven’t read it yet, check out the first blog post on the Enisy for an overview of how closely these were linked), as well as observing testing in Russia of these systems. This is also the time when the term “Topaz-II” was coined by one of the American team members, to differentiate the reactor from the original Topol (known in the west as Topaz, and covered in our first blog post on Soviet astronuclear history) in the minds of the largely uninformed Western academic circles.
The seeds of the first cross-Iron Curtain technical collaboration on astronuclear systems development, planted in Albuquerque, were germinating in Russian soil.
The Business of Intergovernmental Astronuclear Development
During this time, due to the headaches involved in both the US and the USSR from a bureaucratic point of view (I’ve never found any information that showed that the two teams ever felt that there were problems in the technological exchange, rather they all seem to be political and bureaucratic in nature, and exclusively from outside the framework of what would become known as the Topaz International Program), two companies were founded to provide an administrative touchstone for various points in the technological transfer program.
The first was International Scientific Products (ISP), founded in 1989 specifically to facilitate the purchase of the reactors for the US, and working closely with the SDIO. This company was the private lubricant that allowed the US government to purchase these reactor systems (for reasons too complex to get into in this blog post). The two main players in ISP were Drs Wetch and Britt, who also appear to have been the main administrative driving force behind the visits. The company provided a legal means to transmit non-classified data from the USSR to the US, and vice versa. Dr. Verga remained intimately involved: after each visit to Russia, the three would meet, and Dr. Verga kept his management at SDIO consistently briefed on the progress of the technical exchange and the eventual purchase of the reactors.
The second was the International Nuclear Energy Research and Technology corporation, known as INERTEK. This was a joint US-USSR company, involving the staff of ISP, as well as individuals from all of the Soviet design bureaus, manufacturing centers (except possibly the one in Tallinn, but I haven't been able to confirm this, mainly due to the extreme loss of documentation from that facility following the collapse of the USSR), and research institutions that we saw in the last post. These included the Kurchatov Institute of Atomic Energy (headed by Academician and Director Ponomarev-Stepnoy, the head of the Russian portion of the Topaz International Program), the Scientific Industrial Association "LUCH" (represented by Deputy Director Yuri Nikolayev), the Central Design Bureau for Machine Building (represented by Director Vladimir Nikitin), and the Keldysh Institute of Rocket Research (represented by Director Academician Anatoly Koroteev). INERTEK was the vehicle by which the technology, and, more importantly to the bureaucrats, the hardware, would be exported from the USSR to the US. Academician Ponomarev-Stepnoy was the director of the company, and Dr Wetch was his deputy. Due to the sensitive nature of the company's focus, it required approval from the Ministry of Atomic Energy (Minatom) in Moscow, which was finally achieved in December 1990.
In order to gain this approval, the US had to agree to a number of demands from Minatom, including that the Topaz-II reactors be returned to Russia after testing and that the reactors not be used for military purposes. Dr. Verga insisted on additional international cooperation, including staff from the UK and France. This not only saved money, but reinforced the international and transparent nature of the program, and made military use more challenging.
While this was occurring, the Americans insisted that the non-nuclear testing of the reactors be duplicated in the US, to ensure the reactors met American safety and design criteria. This was a major sticking point for Minatom, and delayed the approval of the export for months, but the Americans did not slow their preparations for building a test facility. Due to the concentration of space nuclear power research resources in New Mexico (Los Alamos and Sandia National Laboratories, the US Air Force Phillips Laboratory, and the University of New Mexico's New Mexico Engineering Research Institute, or NMERI), as well as the presence of the powerful Republican senator Pete Domenici to smooth political feathers in Washington, DC (all of the labs were in his home state), it was decided to test the reactors in Albuquerque, NM. The USAF purchased an empty building from the NMERI, and hired personnel from UNM to handle the human resources side of things. The selection of UNM emphasized the transparent, exploratory nature of the program, an absolute requirement for Minatom, and the university had considerable organizational flexibility compared to either the USAF or the DOE. According to the contract manager, Tim Stepetic:
“The University was very cooperative and accommodating… UNM allowed me to open checking accounts to provide responsive payments for the support requirements of the INTERTEK and LUCH contracts – I don’t think they’ve ever permitted such checkbook arrangements either before or since…”
These freedoms were necessary to work with the Russian team members, who were in culture shock and dealing with very different organizational restrictions than their American counterparts. As has been observed both before and since, the Russian scientists and technicians preferred to save as much of their per diem (generous by their standards) as possible for after the project, when the money would go much further at home; the per diem also covered local travel expenses. One of the technicians had to return to Russia for his son's brain tumor operation, and was asked by the surgeon to bring back some Tylenol, a request that was rapidly granted, to the bemusement of his American colleagues. In addition, personal calls (of a limited nature, due to international calling rates at the time) were allowed, so the scientists and technicians could keep in touch with their families and ease their homesickness.
As should surprise no one, the highly unusual nature of this financial arrangement, as well as the large amount of money involved (which came to about $400,000 in 1990s dollars), meant that a routine audit later led to the General Accounting Office being called in to investigate the arrangement. Fortunately, no significant irregularities were found in the financial dealings of the NMERI, and the program continued. Additionally, the reuse of over $500,000 in equipment scrounged from SNL and LANL's junk yards allowed for incredible cost savings in the program.
With the business side of the testing underway, it was time to begin preparations for testing the reactors in the US, beginning with the conversion of an empty building into a non-nuclear test facility. The conversion, with Frank Thome heading the facilities modification side and Scott Wold serving as the TSET training manager, began in April of 1991, only four months after Minatom's approval of INERTEK. Over the course of the next year, the facility would be prepared for testing, and it was completed just before the delivery of the first shipment of reactors and equipment from Russia.
By this point, the test program had grown to include two programs. The first was the Thermionic Systems Evaluation Test (TSET), which would study mechanical, thermophysical, and chemical properties of the reactors to verify the data collected in Russia. This was to flight-qualify the reactors for American space mission use, and establish the collaboration of the various international participants in the Topaz International Program.
The second program was the Nuclear Electric Propulsion Space Test Program (NEPSTP); run by the Johns Hopkins Applied Physics Laboratory, and funded by the SDIO (later the Ballistic Missile Defense Organization), it proposed an experimental spacecraft that would use a set of six different electric thrusters, as well as equipment to monitor the environmental effects of both the thrusters and the reactor during operation. Design work for the spacecraft began almost immediately after the TSET program began, and the program was of interest to both the American and Russian parts of the team.
Later, one final program would be added: the Thermionic Fuel Element Verification Program (TFEVP). This program, which predated TIP, is where many of the UK and French researchers were involved, and it focused on increasing the lifetime of the thermionic fuel elements from one year (the best US estimate before the TSET) to at least three, and preferably seven, years. This would be achieved through better knowledge of materials properties, as well as improved manufacturing methods.
Finally, there were smaller programs attached to the big three, looking at materials effects in intense radiation and plasma environments, long-term contact with cesium vapor, chemical reactions within the hardware itself, and the surface electrical properties of various ceramics. These tests, while not the primary focus of the program, would contribute to the understanding of the environment an astronuclear spacecraft would experience, and would significantly affect future spacecraft designs. They would occur in the same building as the TSET testing, and the teams involved would frequently collaborate on all projects, leading to a very well-integrated and collegial atmosphere.
Reactor Shipment: A Funny Little Thing Occurred in Russia
While all of this was going on in the Topaz International Program, major changes were happening throughout the USSR: it was falling apart. From the uprisings in Latvia and Lithuania (violently put down by the Soviet military), to the fall of the Berlin Wall, to the ultimate lowering of the hammer and sickle from the Kremlin in December 1991 and its replacement with the tricolor of the Russian Federation, the fall of the Iron Curtain was accelerating. The TIP teams continued to work at their program, knowing that it offered hope for the Topaz-II project as well as a vehicle to form closer technological collaborations with their former adversaries, but the complications would rear their heads in this small group as well.
The American purchase of the Topaz reactors was approved by President George H.W. Bush on 27 March, 1992, during a meeting with his Secretary of State, James Baker, and Secretary of Defense Richard Cheney. This freed the American side of the collaboration to do what needed to be done to make the program happen, as well as to begin bringing Russian specialists to the US for test facility preparations.
The first group of 14 Russian scientists and technicians to arrive in the US for the TSET program landed on April 3, 1992, but only got to sleep for a few hours before being woken up by their hosts (who also brought their families) for a long van journey. This was something that the Russians greatly appreciated, because April 4 is a special day in one small part of the world: it's one of only two days of the year that the Trinity Site, the location of the first nuclear explosion in history, is open to the public. According to one of them, Georgiy Kompaniets:
“It was like for a picnic! And at the entrance to the site there were souvenir vendors selling t-shirts with bombs and rocks supposedly at the epicenter of the blast…” (note: no trinitite is allowed to be collected at the Trinity site anymore, and according to some interpretations of federal law is considered low-level radioactive waste from weapons production)
The Russians were a hit at the Trinity site, being the center of attention from those there, and were interviewed for television. They even got to tour the McDonald ranch house, where the Gadget was assembled and the blast was initiated. This made a huge impression on the visiting Russians, and did wonders in cementing the team’s culture.
Another cultural exchange that occurred later (exactly when, I'm not sure) was the chance to ride in a hot air balloon. Albuquerque's International Balloon Fiesta is the largest hot air ballooning event in the world, and whenever atmospheric conditions are right, a half dozen or more balloons can be seen floating over the city. A local ballooning club, having heard about the Russian scientists and technicians (who had become minor local celebrities at this point), offered them a free hot air balloon ride. This was an offer the Russians universally accepted, since none of them had ever experienced ballooning before.
According to Boris Steppenov:
“The greatest difficulty, it seemed, was landing. And it was absolutely forbidden to touch down on the reservations belonging to the Native Americans, as this would be seen as an attack on their land and an affront to their ancestors…
[after the flight] there were speeches, there were oaths, there was baptism with champagne, and many other rituals. A memory for an entire life!”
The balloon that Steppenov flew in did indeed land on the Sandia Pueblo Reservation, but before touchdown the tribal police were notified, and they showed up to the landing site, issued a ticket to the ballooning company, and allowed them to pack up and leave.
These events, as well as other uniquely New Mexican experiences, cemented the TIP team into a group of lifelong friends, and would reinforce the willingness of everyone to work together as much as possible to make TIP as much of a success as it could be.
In late April, 1992, a team of US military personnel (led by Army Major Fred Tarantino of SDIO, with AF Major Dan Mulder in charge of logistics), including a USAF Airlift Control Element Team, landed in St. Petersburg on a C-141 and a C-130, carrying the equipment needed to properly secure the test equipment and reactors that would be flown to the US. Overflight permissions were secured, and special packing cases, especially for the very delicate tungsten TISA heaters, were prepared. These preparations were complicated by the lack of effective packing materials for the heaters, until Dr. Britt, of both ISP and INERTEK, had the idea of using foam bedding pads from a furniture store. Due to the large size and weight of the equipment, though, the C-141 and C-130 were not sufficient to airlift it, so the teams had to wait on the larger C-5 Galaxy transports intended for this task, which were en route from the US at the time.
Sadly, when the time came to present the export licenses to the customs officer, he refused to honor them, because they were Soviet documents, and the Soviet Union no longer existed. This led Academician Ponomarev-Stepnoy and INERTEK's director, Benjamin Usov, to travel to Moscow on April 27 to meet with the Chairman of the Government, Alexander Shokhin, to get new export licenses. After consultation with the Minister of Foreign Economic Relations, Sergei Glazyev, a one-time, urgent export license was issued for the shipment to the US, and was sent via fast courier to St. Petersburg on May 1.
The C-5s, though, weren't in Russia yet. Once they landed, a complex paperwork ballet had to be carried out to get the reactors and test equipment to America. First, the reactors were purchased by INERTEK from the Russian bureaus responsible for the various components. Then, INERTEK sold the reactors and equipment to Dr. Britt of ISP once the equipment was loaded onto the C-5. Dr. Britt then immediately resold the equipment to the US government. This avoided the import issues that would have arisen on the US side if the equipment had been imported by ISP, a private company, or by INERTEK, a Russian-led international consortium.
One of them landed in St. Petersburg on May 6, was loaded with the two Topaz-II reactors (V-71 and Ya-21U) and as much equipment as could fit in the aircraft, and left the same day, arriving in Albuquerque on May 7. The other developed maintenance problems and was forced to wait in England for five days, finally reaching St. Petersburg on May 8. The rest of the equipment was loaded up (including the Baikal vacuum chamber), and the plane left later that day. Sadly, it ran into difficulties again upon reaching England, and was forced to wait two more days for repairs, arriving in Albuquerque on May 12.
Preparations for Testing: Two Worlds Coming Together
Once the equipment was in the US, a detailed examination of the payload was required due to the beryllium used in the reflectors and control drums of the reactor. Berylliosis, a disease caused by breathing in beryllium dust, is a serious health issue, and one that the DOE takes incredibly seriously (they'll evacuate an entire building at the slightest possibility that beryllium dust could be present, at the cost of millions of dollars on occasion). Detailed checks were carried out both before the equipment was removed from the aircraft and during the unpackaging of the reactors. No beryllium dust was detected, however, and the program continued with minimal disruption.
Then it came time to unbox the equipment, but another problem arose: this required the approval of the director of the Central Design Bureau of Heavy Machine Building, Vladimir Nikitin, who was in Moscow. Rather than just calling him for this one approval, Dr Britt called and got approval for Valery Sinkevych, the Albuquerque representative of INERTEK, to have discretionary control over these sorts of decisions. The approval was given, greatly smoothing the process of both setup and testing during TIP.
Sinkevych, Scott Wold, and Glen Schmidt worked closely together in the management of the project. All three were on hand to answer questions, smooth out difficulties, and resolve other challenges in the testing process, to the point that the Russians began calling Schmidt "The Walking Stick." His response was classic: "That's my style: Management by Walking Around."
Every day, Schmidt would hold a lab-wide meeting, ensuring everyone was present, before walking everyone through the procedures that needed to be completed for the day, as well as ensuring that everyone had the resources they needed to complete their tasks. He also made sure that he was aware of any upcoming issues, and worked to resolve them (mostly through Wetch and Britt) before they became a problem for the facility preparations. This was a revelation to the Russian team, who, despite working on the program (in Russia) for years, often didn't know anything beyond the component that they worked on. This synthesis of knowledge would continue throughout the program, leading to a far more integrated and effective team.
Initial estimates were that it would take 9 months to prepare the facility and equipment for testing of the reactors. Thanks both to the well-integrated team and to the more relaxed management structure of the American effort, the work was completed in only 6 ½ months. According to Sinkevych:
“The trust that was formed between the Russian and American side allowed us in an unusually short time to complete the assembly of the complex and demonstrate its capabilities.”
This was so incredible to Schmidt that he went to Wetch and Britt, asking for a bonus for the Russians due to their exceptional work. This was approved, and bonuses were paid in proportion to technical assignment, duration, and quality of workmanship. This was yet another culture shock for the Russian team, who had never received a bonus before. The response was twofold: they were greatly appreciative, and also asked, "if we continue to save time, do we get another bonus?" The answer was a qualified "perhaps," and indeed one more, smaller bonus was paid for later time savings.
Mid-Testing Drama, and the Second Shipment
Both in the US and Russia, there were many questions about whether this program was even possible. The reason for its success, though, is unequivocally that it was a true partnership between the American and Russian parts of TIP. This was the first Russian-US government-to-government cooperative program after the fall of the USSR. Unlike the Nunn-Lugar agreement that followed, TIP was always intended to be a true technological exchange, not an assistance program, which is one of the main reasons why the participants of TIP still look fondly and respectfully on the project, while many Russian (and other former-Soviet) participants in Nunn-Lugar consider it demeaning, condescending, and not something to ever be repeated. More than this, though, the Russian design philosophy that allowed full-system, non-nuclear testing of the Topaz-II permanently changed American astronuclear design philosophy, and left its mark on every subsequent astronuclear design.
However, not all organizations in the US saw it this way. Thome and Mulder provided excellent bureaucratic cover for the testing program, preventing the majority of the politics of government work from trickling down to the management of the test itself. Still, as Scott Wold, the TSET training manager, pointed out, they would get letters from outside organizations stating:
“[after careful consideration] they had concluded that an experiment we proposed to do wouldn’t be possible and that we should just stop all work on the project as it was obviously a waste of time. Our typical response was to provide them with the results of the experiment we had just wrapped up.”
As mentioned, this sort of thing was not uncommon, but it was only a minor annoyance. If anything, it cemented the practicality of collaborations of this nature, and over time the program reduced the friction it faced through proof of its capabilities. Other headaches would arise, but overall they were relatively minor.
Sadly, one of the programs, NEPSTP, was canceled out from under the team near the completion of the spacecraft. The new Clinton administration was not nearly as open to the use of nuclear power as the Bush administration had been (to put it mildly), and as such the program ended in 1993.
One type of drama that was avoided was the second shipment of four more Topaz-II reactors from Russia to the US: the Eh-40, Eh-41, Eh-43, and Eh-44. These designations directly contradict the prefix conventions described earlier for Soviet assessments of unit capabilities (the systems were built, then assessed for suitability for mechanical, thermal, and nuclear testing after construction; for more on this, see our first Enisy post here). These units were for:

- Eh-40: a thermal-hydraulic mockup with a functioning NaK heat rejection system, for "cold" testing of thermal covers during integration, launch, and orbital injection;

- Eh-41: a structural mockup for mechanical testing, and for demonstration of the mechanical integrity of the anticriticality device (more on that in the next post), the modified thermal cover, and American launch vehicle integration;

- Eh-43 and Eh-44: potential flight systems, which would undergo modal testing, charging of the NaK coolant system, fuel loading and criticality testing, mechanical vibration, shock, and acoustic tests, 1000-hour thermal vacuum steady-state stability and NaK system integrity tests, and more before launch.
How was drama avoided in this case? The previous shipment was done by the US Air Force, which has many regulations involved in the transport of any cargo, much less flight-capable nuclear reactors containing several toxic substances. This led to delays in approval the first time this shipment method was used. The second time, in 1994, INTERTEK and ISP contracted a private cargo company, Russian Volga Dnepr Airlines, to transport these four reactors. In order to do this, Volga Dnepr Airlines used their An-124 to fly these reactors from St. Petersburg to Albuquerque.
For me personally, this was a very special event, because I was there. My dad got me out of school (I wasn’t even a teenager yet), drove me out to the landing strip fence at Kirtland AFB, and we watched with about 40 other people as this incredible aircraft landed. He told me about the shipment, and why they were bringing it in, and the seed of my astronuclear obsession was planted.
No beryllium dust was found in this shipment, and the reactors were prepared for testing. Additional thermophysical testing, as well as design work for modifications needed to get the reactors flight-qualified and able to be integrated with the American launchers, were conducted on these reactors. These tests and changes will be the subject of the next blog post, as well as the missions that were proposed for the reactors.
These tests would continue until 1995, when testing in Albuquerque came to an end. All the reactors were packed up and returned to Russia per the agreement between INTERTEK and Minatom. The Enisy would continue to be developed in Russia until at least 2007.
More Coming Soon!
The story of the Topaz International Program is far from over. The testing in the US, as well as the programs that the US/Russian team had planned have not even been touched on yet besides very cursory mentions. These programs, as well as the end of the Topaz International Program and the possible future of the Enisy reactor, are the focus of our next blog post, the final one in this series.
This program provided a foundation, as well as a harbinger of challenges to come, in international astronuclear collaboration. As such, I feel that it is a very valuable subject to spend a significant amount of time on.
I hope to have the next post out in about a week and a half to two weeks, but the amount of research necessary for this series has definitely surprised me. The few documents available that fill in the gaps are, sadly, behind paywalls that I can’t afford to breach at my current funding availability.
Hello, and welcome to Beyond NERVA! Today we’re going to look at the program that birthed the first astronuclear reactor to go into orbit, although the extent of the program far exceeds the flight record of a single launch.
Before we get into that, I have a minor administrative announcement that will develop into major changes for Beyond NERVA in the near-to-mid future! As you may have noticed, we have moved from beyondnerva.wordpress.com to beyondnerva.com. For the moment, there isn't much different, but in the background a major webpage update is brewing! Not only will the home page be updated to make it easier to navigate the site (and see all the content that's already available!), but the number of pages on the site is going to be increasing significantly. A large part of this is going to be integrating information that I've written about in the blog into a more topical, focused format – with easier access to both basic concepts and technical details being a priority. However, there will also be expansions on concepts, pages for technical concepts that don't really fit anywhere in the blog posts, and more! As these updates become live, I'll mention them in future blog posts. Also, I'll post them on both the Facebook group and the new Twitter feed (currently not super active, largely because I haven't found my "tweet voice" yet, but I hope to expand this soon!). If you are on either platform, you should definitely check them out!
The Systems for Nuclear Auxiliary Power, or SNAP, program was a major focus for a wide range of organizations in the US for many decades. The program extended everywhere from the bottom of the seas (SNAP-4, which we won't be covering in this post) to deep space travel with electric propulsion. SNAP was divided up into an odd/even numbering scheme, with the odd model numbers (starting with the SNAP-3) being radioisotope thermoelectric generators, and the even numbers (beginning with SNAP-2) being fission reactor electrical power systems.
Due to the sheer scope of the SNAP program, even eliminating systems that aren’t fission-based, this is going to be a two post subject. This post will cover the US Air Force’s portion of the SNAP reactor program: the SNAP-2 and SNAP-10A reactors; their development programs; the SNAPSHOT mission; and a look at the missions that these reactors were designed to support, including satellites, space stations, and other crewed and uncrewed installations. The next post will cover the NASA side of things: SNAP-8 and its successor designs as well as SNAP-50/SPUR. The one after that will cover the SP-100, SABRE, and other designs from the late 1970s through to the early 1990s, and will conclude with looking at a system that we mentioned briefly in the last post: the ENISY/TOPAZ II reactor, the only astronuclear design to be flight qualified by the space agencies and nuclear regulatory bodies of two different nations.
The Beginnings of the US Astronuclear Program: SNAP’s Early Years
Beginning in the earliest days of both the nuclear age and the space age, nuclear power had a lot of appeal for the space program: high power density, high power output, and mechanically simple systems were in high demand for space agencies worldwide. The earliest mention of a program to develop nuclear electric power systems for spacecraft was the Pied Piper program, begun in 1954. This led to the development of the Systems for Nuclear Auxiliary Power program, or SNAP, the following year (1955), which was eventually canceled in 1973, as were so many other space-focused programs.
Once space became a realistic place to send not only scientific payloads but personnel, the need to provide them with significant amounts of power became evident. Not only were most systems of the day far from the electrically efficient designs that both NASA and Roscosmos would develop in the coming decades, but, at the time, the vision for a semi-permanent space station wasn't 3-6 people orbiting in a (completely epic, scientifically revolutionary, collaboratively brilliant, and invaluable) zero-gee conglomeration of tin cans like the ISS, but larger space stations that provided centrifugal gravity, staffed 'round the clock by dozens of individuals. These weren't just space stations for NASA, which was an infant organization at the time, but for the USAF, and possibly other institutions in the US government as well. In addition, what would provide livable habitation for a group of astronauts would also be able to power a remote, uncrewed radar station in the Arctic, or in other extreme environments. Even if crew were present, the fact that the power plant wouldn't have to be maintained was a significant military advantage.
Responsible for both radioisotope thermoelectric generators (which run on the natural radioactive decay of a radioisotope, selected according to its energy density and half-life) as well as fission power plants, SNAP programs were numbered with an even-odd system: even numbers were fission reactors, odd numbers were RTGs. These designs were never solely meant for in-space application, but the increased mission requirements and complexities of being able to safely launch a nuclear power system into space made this aspect of their use the most stringent, and therefore the logical one to design around. Additionally, while the benefits of a power-dense electrical supply are obvious for any branch of the military, the need for this capability in space far surpassed the needs of those on the ground or at sea.
The program was originally jointly run by the AEC's Department of Reactor Development (which funded the reactor itself) and the USAF's Wright Air Development Center (which funded the power conversion system); full control was handed over to the AEC in 1957. Atomics International was the prime contractor for the program.
There are a number of similarities in almost all the SNAP designs, probably for a number of reasons. First, all of the reactors that we'll be looking at (as well as some other designs we'll look at in the next post) used the same type of fissile fuel, even though the form and the cladding varied fairly widely between the different concepts. Uranium-zirconium hydride (U-ZrH) was a very popular fuel choice at the time. Assuming hydrogen loss could be controlled (this was a major part of the testing regime in all the reactors that we'll look at), it provided a self-moderating, moderate-to-high-temperature fuel form, which was a very attractive feature. This type of fuel is still used today, in the TRIGA reactor – which, together with its direct descendants, is the most common form of research and test reactor worldwide. The high-powered reactors (SNAP-2 and -8) both initially used variations on the same power conversion system: a boiling mercury Rankine power conversion cycle. By the end of the testing regime this was determined to be possible to execute, but to my knowledge it has never been proposed again (we'll look at this briefly in the post on heat engines as power conversion systems, and a more in-depth look will be available in the future), although a mercury-based MHD conversion system is being offered as a power conversion system for an accelerator-driven molten salt reactor.
SNAP-2: The First American Built-For-Space Nuclear Reactor Design
The idea for the SNAP-2 reactor originally came from a 1951 Rand Corporation study looking at the feasibility of a nuclear powered satellite. By 1955, the possibilities that a fission power supply offered in terms of mass and reliability had captured the attention of many people in the USAF, which was (at the time) the organization most interested and involved in the exploitation of space for military purposes (outside the Army Ballistic Missile Agency at the Redstone Arsenal, which would later become the core of the Marshall Space Flight Center).
The original request for the SNAP program, which ended up becoming known as SNAP 2, occurred in 1955, from the AEC’s Defense Reactor Development Division and the USAF Wright Air Development Center. It was for possible power sources in the 1 to 10 kWe range that would be able to autonomously operate for one year, and the original proposal was for a zirconium hydride moderated sodium-potassium (NaK) metal cooled reactor with a boiling mercury Rankine power conversion system (similar to a steam turbine in operational principles, but we’ll look at the power conversion systems more in a later post), which is now known as SNAP-2. The design was refined into a 55 kWt, 5 kWe reactor operating at about 650°C outlet temperature, massing about 100 kg unshielded, and was tested for over 10,000 hours. This epithermal neutron spectrum would remain popular throughout much of the US in-space reactor program, both for electrical power and thermal propulsion designs. This design would later be adapted to the SNAP-10A reactor, with some modifications, as well.
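As a quick sanity check on the figures just quoted (this is simple arithmetic on the numbers in the text, not sourced data), the net conversion efficiency implied by a 55 kWt reactor producing 5 kWe can be computed directly:

```python
# Implied net conversion efficiency of SNAP-2's mercury Rankine loop,
# using only the figures quoted above (55 kWt thermal, 5 kWe electrical).
thermal_kw = 55.0   # reactor thermal power, from the text
electric_kw = 5.0   # electrical output, from the text

efficiency = electric_kw / thermal_kw
print(f"Implied conversion efficiency: {efficiency:.1%}")  # ~9.1%
```

Roughly 9% net efficiency is quite respectable for a compact, 1950s-era closed power cycle, which helps explain the appeal of the Rankine system despite its mechanical complexity.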
SNAP-2's first critical assembly test came in October of 1957, shortly after Sputnik-1's successful launch. The assembly used U-ZrH fuel elements with 93%-enriched 235U making up 8% of the fuel's weight, a 1" beryllium inner reflector, and an outer graphite reflector (which could be varied in thickness), separated into two rough hemispheres to control the construction of a critical assembly. This device was able to test many of the reactivity conditions needed for materials testing on a small economic scale, as well as the behavior of the fuel itself. The primary concerns with testing on this machine were reactivity, activation, and the intrinsic steady-state behavior of the fuel that would be used for SNAP-2. A number of materials were also tested for reflection and neutron absorbency, both for main core components and for out-of-core mechanisms. This was followed by the SNAP-2 Experimental Reactor in 1959-1960 and the SNAP-2 Development Reactor in 1961-1962.
The SNAP-2 Experimental Reactor (S2ER or SER) was built to verify the core geometry and basic reactivity controls of the SNAP-2 reactor design, as well as to test the basics of the primary cooling system, materials, and other basic design questions, but was not meant to be a good representation of the eventual flight system. Construction started in June 1958, with construction completed by March 1959. Dry (Sept 15) and wet (Oct 20) critical testing was completed the same year, and power operations started on Nov 5, 1959. Four days later, the reactor reached design power and temperature operations, and by April 23 of 1960, 1000 hours of continuous testing at design conditions were completed. Following transient and other testing, the reactor was shut down for the last time on November 19, 1960, just over one year after it had first achieved full power operations. Between May 19 and June 15, 1961, the reactor was disassembled and decommissioned. Testing on various reactor materials, especially the fuel elements, was conducted, and these test results refined the design for the Development Reactor.
The SNAP-2 Development Reactor (S2DR or SDR, also called the SNAP-2 Development System, S2DS) was installed in a new facility at the Atomics International Santa Susana research facility to better manage the increased testing requirements for the more advanced reactor design. While this wasn't going to be a flight-type system, it was designed to inform the flight system on many of the details that the S2ER wasn't able to. This reactor is, interestingly, much harder to find information on than the S2ER. It incorporated many changes from the S2ER, and went through several iterations to tweak the design for a flight reactor. Zero power testing occurred over the summer of 1961, and testing at power began shortly after (although at SNAP-10 power and temperature levels). Testing continued until December of 1962, and further refined the SNAP-2 and -10A reactors.
A third set of critical assembly reactors, known as the SNAP Development Assembly series, was constructed at about the same time, meant to provide fuel element testing, criticality benchmarks, reflector and control system worth, and other core dynamic behaviors. These were also built at the Santa Susana facility, and would provide key test capabilities throughout the SNAP program. This water-and-beryllium reflected core assembly allowed for a wide range of testing environments, and would continue to serve the SNAP program through to its cancellation. Going through three iterations, the designs were used more to test fuel element characteristics than the core geometries of individual core concepts. This informed all three major SNAP designs in fuel element material and, to a lesser extent, heat transfer (the SNAP-8 used thinner fuel elements) design.
Extensive testing was carried out on all aspects of the core geometry, fuel element geometry and materials, and other behaviors of the reactor; but by May 1960 there was enough confidence in the reactor design for the USAF and AEC to plan on a launch program for the reactor (and the SNAP-10A), called SNAPSHOT (more on that below). Testing using the SNAP-2 Experimental Reactor occurred in 1960-1961, and the follow-on test program, including the SNAP-2 Development Reactor, occurred in 1962-63. These programs, as well as the SNAP Critical Assembly 3 series of tests (used for SNAP-2 and -10A), allowed for a mostly finalized reactor design to be completed.
Development of the power conversion system (PCS), a Rankine (steam-cycle) turbine using mercury as the working fluid, was carried out starting in 1958, beginning with a mercury boiler to test the components in a non-nuclear environment. The turbine had many technical challenges, including bearing lubrication and wear issues, turbine blade pitting and erosion, fluid dynamics challenges, and other technical difficulties. As is often the case with advanced reactor designs, the main challenge wasn't the reactor core itself, nor the control mechanisms for the reactor, but the non-nuclear portions of the power unit. This is a common theme in astronuclear engineering. More recently, JIMO experienced similar problems when the final system design called for a supercritical CO2 Brayton turbine that was theoretically sound but had never been experimentally demonstrated (as we'll see in a future post). However, without a power conversion system of usable efficiency and low enough mass, an astronuclear power system doesn't have a means of delivering the electricity that it's called upon to deliver.
Reactor shielding, in the form of a metal honeycomb impregnated with a high-hydrogen material (in this case a form of paraffin), was common to all SNAP reactor designs. The high hydrogen content allowed for the best hydrogen density of the available materials, and therefore the greatest shielding per unit mass of the available options.
Testing on the SNAP-2 reactor system continued until 1963, when the reactor core itself was re-purposed into the redesigned SNAP-10, which became the SNAP-10A. At this point the SNAP-2 reactor program was folded into the SNAP-10A program. SNAP-2-specific design work was more or less halted from a reactor point of view, due to a number of factors, including the slower development of the CRU power conversion system, the large number of moving parts in the Rankine turbine, and the advances made in the more powerful SNAP-8 family of reactors (which we'll cover in the next post). However, testing on the power conversion system continued until 1967, due to its application to other programs. This didn't mean that the reactor was useless for other missions; in fact, its far more efficient power conversion system made it far more useful for crewed space operations (as we'll see later in this post), especially for space stations. However, even this role would be surpassed by a derivative of the SNAP-8, the Advanced ZrH Reactor, and the SNAP-2 would end up being deprived of any useful mission.
The SNAP Reactor Improvement Program, in 1963-64, continued to optimize and advance the design without nuclear testing, through computer modeling, flow analysis, and other means; but the program ended without flight hardware being either built or used. We’ll look more at the missions that this reactor was designed for later in this blog post, after looking at its smaller sibling, the first reactor (and only US reactor) to ever achieve orbit: the SNAP-10A.
SNAP-10: The Father of the First Reactor in Space
At about the same time as the early SNAP-2 reactor development (1958), the USAF requested a study on a thermoelectric power conversion system, targeting the 0.3 kWe to 1 kWe power regime. This was the birth of what would eventually become the SNAP-10 reactor, which would evolve in time into the SNAP-10A, the first nuclear reactor to go into orbit.
In the beginning, this design was superficially quite similar to the Romashka reactor that we’ll examine in the USSR part of this blog post, with plates of U-ZrH fuel, separated by beryllium plates for heat conduction, and surrounded by radial and axial beryllium reflectors. Purely conductively cooled internally, and radiatively cooled externally, this was later changed to a NaK forced convection cooling system for better thermal management (see below). The resulting design was later adapted to the SNAP-4 reactor, which was designed to be used for underwater military installations, rather than spaceflight. Outside these radial reflectors were thermoelectric power conversion systems, with a finned radiating casing being the only major component that was visible. The design looked, superficially at least, remarkably like the RTGs that would be used for the next several decades. However, the advantages to using even the low power conversion efficiency thermoelectric conversion system made this a far more powerful source of electricity than the RTGs that were available at the time (or even today) for space missions.
Within a short period, however, the design was changed dramatically, resulting in a design very similar to the core of the SNAP-2 reactor that was under development at the same time. Modifications were made to the SNAP-2 baseline, resulting in the two reactor cores becoming identical. This also led to the NaK cooling system being implemented on the SNAP-10A. Many of the test reactors for the SNAP-2 system were also used to develop the SNAP-10A; this was possible because the final design, while far lower powered in electrical output, differed mainly in the power conversion system, not the reactor structure. This reactor design was tested extensively, with the S2ER, S2DR, and SCA test series (4A, 4B, and 4C) reactors, as well as the SNAP-10 Ground Test Reactor (S10FS-1). The new design used a similar, but slightly smaller, conical radiator, using NaK as the working fluid for the radiator.
This was a far lower power design than the SNAP-2, coming in at 30 kWt; with the 1.6% conversion efficiency of the thermoelectric system, its electrical power output was only about 500 We. It also ran almost 100°C (about 180°F) cooler, allowing for longer fuel element life, but leaving less thermal gradient to work with, and therefore a lower theoretical maximum efficiency. This tradeoff was the best on offer, though, and the power conversion system's lack of moving parts, and its ease of being tested in a non-nuclear environment without extensive support equipment, made it more robust from an engineering point of view. The overall design life of the reactor, though, remained short: only about one year, and less than 1% fissile fuel burnup. It's possible, and maybe even likely, that (barring spacecraft-associated failure) the reactor could have provided power for longer durations; however, the longer the reactor operates, the more the fuel swells due to fission product buildup, and at some point this would cause the cladding of the fuel to fail. Other challenges to reactor design, such as fission poison buildup, clad erosion, mechanical wear, and others, would end the reactor's operational life at some point, even if the fuel elements could still provide more power.
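To make those numbers concrete, here is a minimal sketch using only the figures quoted above, plus illustrative hot- and cold-side temperatures that are my own assumptions for demonstration, not sourced SNAP figures. It checks the thermoelectric output arithmetic and shows, via the Carnot limit, why a cooler core lowers the theoretical maximum efficiency:

```python
# SNAP-10A electrical output from the quoted figures: 30 kWt thermal power
# at a ~1.6% thermoelectric conversion ratio.
thermal_w = 30_000.0
conversion = 0.016
electric_w = thermal_w * conversion
print(f"Electrical output: {electric_w:.0f} We")  # ~480 We, i.e. roughly the 500 We quoted

# Why running cooler hurts: the Carnot limit on any heat engine.
# The hot/cold temperatures below are illustrative assumptions only.
def carnot_limit(t_hot_c: float, t_cold_c: float) -> float:
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

print(f"Carnot limit, 650 C hot / 300 C rejection: {carnot_limit(650, 300):.1%}")
print(f"Carnot limit, 550 C hot / 300 C rejection: {carnot_limit(550, 300):.1%}")
```

Real thermoelectric converters fall far short of the Carnot ceiling, but the trend holds: shrink the temperature gap and the achievable efficiency shrinks with it.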
The SNAP-10A was not meant to power crewed facilities, since the power output was so low that multiple installations would be needed. This meant that, while all SNAP reactors were meant to be largely or wholly unmaintained by crew personnel, this reactor had no possibility of being maintained. The reliability requirements for the system were higher because of this, and the lack of moving parts in the power conversion system aided in this design requirement. The reactor was also designed to have only a brief (72-hour) period during which active reactivity control would be used, to mitigate any startup transients and to establish steady-state operations, before the active control systems would be left in their final configuration, leaving the reactor entirely self-regulating. This placed an additional burden on the reactor designers to have a very strong understanding of the behavior of the reactor, its long-term stability, and any effects that would occur during the year-long lifetime of the system.
At the end of the reactor's life, it was designed to stay in orbit until the short-lived and radiotoxic portions of the reactor had gone through at least five fission product half-lives, reducing the radioactivity of the system to a very low level. At the end of this process, the reactor would re-enter the atmosphere, the radial and end reflectors would be ejected, and the entire thing would burn up in the upper atmosphere. From there, winds would dilute any residual radioactivity to less than what was released by a single small nuclear test (which were still being conducted in Nevada at the time). While there's nothing wrong with this approach from a health physics point of view, as we saw in the last post on the BES-5 reactors the Soviet Union was flying, there are major international political problems with this concept. The SNAPSHOT reactor continues to orbit the Earth (currently at an altitude of roughly 1300 km), and will do so for more than 2000 years, according to recent orbital models, so the only system of concern is not in danger of re-entry any time soon; but, at some point, the reactor will need to be moved into a graveyard orbit or collected and returned to Earth – a problem which currently has no solution.
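The "five half-lives" rule of thumb above can be made concrete with a one-line calculation. This is a single-isotope idealization; a real fission product inventory is a mixture of isotopes with wildly different half-lives, so it only sketches the principle:

```python
# Fraction of a radioisotope's activity remaining after n half-lives:
# each half-life cuts the remaining activity in half.
def remaining_fraction(n_half_lives: float) -> float:
    return 0.5 ** n_half_lives

# After five half-lives, about 3% of the original activity remains.
print(f"After 5 half-lives: {remaining_fraction(5):.2%} remaining")
```

Since the hottest fission products are also the shortest-lived, even a modest orbital storage period removes the bulk of the radiological hazard before any re-entry.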
The Runup to Flight: Vehicle Verification and Integration
1960 brought big plans for orbital testing of both the SNAP-2 and SNAP-10 reactors, under the SNAPSHOT program: two SNAP-10 launches and two SNAP-2 launches would be made. Lockheed Missiles System Division was chosen as the launch vehicle, systems integration, and launch operations contractor for the program, while Atomics International, working under the AEC, was responsible for the power plant.
The SNAP-10A reactor design was meant to be decommissioned by orbiting for long enough that the fission product inventory (the waste portion of the burned fuel elements, and the source of the vast majority of the radiation from the reactor post-fission) would naturally decay away, and then the reactor would be de-orbited, and burn up in the atmosphere. This was planned before the KOSMOS-954 accident, when the possibility of allowing a nuclear reactor to burn up in the atmosphere was not as anathema as it is today. This plan wouldn’t increase the amount of radioactivity that the public would receive to any appreciable degree; and, at the time, open-air testing of nuclear weapons was the norm, sending up thousands of kilograms of radioactive fallout per year. However, it was important that the fuel rods themselves would burn up high in the atmosphere, in order to dilute the fuel elements as much as possible, and this is something that needed to be tested.
Enter the SNAP Reactor Flight Demonstration Number 1 mission, or RFD-1. The concept of this test was to demonstrate that the planned disassembly and burnup process would occur as expected, and to inform the further design of the reactor if there were any unexpected effects of re-entry. Sandia National Labs took the lead on this part of the SNAPSHOT program. After looking at the budget available, the launch vehicles available, and the payloads, the team realized that orbiting a nuclear reactor mockup would be too expensive, and another solution needed to be found. This led to the mission design of RFD-1: a sounding rocket would be used, and the core geometry would be changed to account for the short flight time, compared to a real reentry, in order to get the data needed for the de-orbiting testing of the actual SNAP-10A reactor that would be flown.
So what would the ideal test have looked like? A mockup of the SNAP-10A reactor would be developed, differing only in that the fuel elements, as normally configured, would contain depleted uranium instead of highly enriched uranium. It would be launched on the same launch vehicle that the SNAPSHOT mission would use (an Atlas-Agena D), be placed in the same orbit, and then be deorbited at the same angle and in the same place as the actual reactor would be; perhaps even at a slightly less favorable reentry angle, to learn how accurate the calculations were and what the margin of error would be. However, an Atlas-Agena rocket isn't a cheap piece of hardware, either to purchase or to launch, and the project managers knew that they wouldn't be able to afford that, so they went hunting for a more economical alternative.
This led the team to decide on a NASA Scout sounding rocket as the launch vehicle, launched from Wallops Island launch site (which still launches sounding rockets, as well as the Antares rocket, to this day, and is expanding to launch Vector Space and RocketLab orbital rockets as well in the coming years). Sounding rockets don’t reach orbital altitudes or velocities, but they get close, and so can be used effectively to test orbital components for systems that would eventually fly in orbit, but for much less money. The downside is that they’re far smaller, with less payload and less velocity than their larger, orbital cousins. This led to needing to compromise on the design of the dummy reactor in significant ways – but those ways couldn’t compromise the usefulness of the test.
Sandia Corporation (which runs Sandia National Laboratories to this day, although who runs Sandia Corp changes… it’s complicated) and Atomics International engineers got together to figure out what could be done with the Scout rocket and a dummy reactor to provide as useful an engineering validation as possible, while sticking within the payload requirements and flight profile of the relatively small, suborbital rocket that they could afford. Because the dummy reactor wouldn’t be going nearly as fast as it would during true re-entry, a steeper angle of attack when the test was returning to Earth was necessary to get the velocity high enough to get meaningful data.
The Scout rocket that was being used had much less payload capability than the Atlas rocket, so if there was a system that could be eliminated, it was removed to save weight. No NaK was flown on RFD-1, the power conversion system was left off, the NaK pump was simulated by an empty stainless steel box, and the reflector assembly was made out of aluminum instead of beryllium, both for weight and toxicity reasons (BeO is not something that you want to breathe!). The reactor core didn’t contain any dummy fuel elements, just a set of six stainless steel spacers to keep the grid plates at the appropriate separation. Because the angle of attack was steeper, the test would be shorter, meaning that there wouldn’t be time for the reactor’s reflectors to degrade enough to release the fuel elements. The fuel elements were the most important part of the test, however, since it needed to be demonstrated that they would completely burn up upon re-entry, so a compromise was found.
The fuel elements would be clustered on the outside of the dummy reactor core, and ejected early in the burnup test period. While the short time and high angle of attack meant that there wouldn't be enough time to observe full burnup, the beginning of the process would provide enough data to allow accurate simulations of the process to be made. How to ensure that this data, the most important part of the test, could actually be collected was another challenge, though, which forced even more compromises in RFD-1's design. Testing equipment had to be mounted in such a way as to not change the aerodynamic profile of the dummy reactor core. Other minor changes were needed as well, but despite all of the differences between RFD-1 and the actual SNAP-10A, the thermodynamics and aerodynamics of the system differed in only very minor ways.
Testing support came from Wallops Island and NASA’s Bermuda tracking station, as well as three ships and five aircraft stationed near the impact site for radar observation. The ground stations would provide both radar and optical support for the RFD-1 mission, verifying reactor burnup, fuel element burnup, and other test objective data, while the aircraft and ships were primarily tasked with collecting telemetry data from on-board instruments, as well as providing additional radar data; although one NASA aircraft carried a spectrometer in order to analyze the visible radiation coming off the reentry vehicle as it disintegrated.
The test went largely as expected. Due to the steeper angle of attack, full fuel element burnup wasn’t possible, even with the early ejection of the simulated fuel rods, but the amount that they did disintegrate during the mission showed that the reactor’s fuel would be sufficiently distributed at a high enough altitude to prevent any radiological risk. The dummy core behaved mostly as expected, although there were some disagreements between the predicted behavior and the flight data, due to the fact that the re-entry vehicle was on such a steep angle of attack. However, the test was considered a success, and paved the way for SNAPSHOT to go forward.
The next task was to mount the SNAP-10A to the Agena spacecraft. Because the reactor was a very different power supply from those in use at the time, special power conditioning units were needed to transfer power from the reactor to the spacecraft. This subsystem was mounted on the Agena itself, along with tracking and command functionality, control systems, and voltage regulation. While Atomics International worked to make the reactor as self-contained as possible, the reactor and spacecraft were fully integrated as a single system. Besides the reactor itself, the spacecraft carried a number of other experiments, including a suite of micrometeorite detectors and an experimental cesium contact thruster, which would operate from a battery system recharged by electricity produced by the reactor.
In order to ensure the reactor could be integrated with the spacecraft, a series of Flight System Prototypes (FSM-1 and -4; FSEM-2 and -3 were used for electrical system integration) were built. These were full scale, non-nuclear mockups that contained a heating unit to simulate the reactor core. Simulations were run using FSM-1 from launch to startup on orbit, with all testing occurring in a vacuum chamber. The final one of the series, FSM-4, was the only one that used NaK coolant in the system, which was used to verify that the thermal performance of the NaK system met flight system requirements. FSEM-2 did not have a power system mockup; instead, it used a mass mockup of the reactor, power conversion system, radiator, and other associated components. Testing with FSEM-2 showed that there were problems with the original electrical design of the spacecraft, which required a rebuild of the test-bed and a modification of the flight system itself. Once complete, the renamed FSEM-2A underwent a series of shock, vibration, acceleration, temperature, and other tests (known as the “Shake and Bake” environmental tests), which it subsequently passed. The final mockup, FSEM-3, underwent extensive electrical systems testing at Lockheed’s Sunnyvale facility, using simulated mission events to test the compatibility of the spacecraft and the reactor. Additional electrical systems changes were implemented before the program proceeded, but by the middle of 1965, the electrical system and spacecraft integration tests were complete and the necessary changes were incorporated into the flight vehicle design.
The last round of pre-flight testing was a test of a flight-configured SNAP-10A reactor under fission power. This nuclear ground test, S10F-3, was identical to the system that would fly on SNAPSHOT, save some small ground safety modifications, and was tested from January 22, 1965 to March 15, 1966. It operated uninterrupted for over 10,000 hours, with the first 390 days at a power output of 35 kWt and (following AEC approval) an additional 25 days of testing at 44 kWt. This testing showed that, after one year of operation, the continuing problem of hydrogen redistribution caused the reactor’s outlet temperature to drop more than expected, and additional, relatively minor, uncertainties about reactor dynamics were observed as well. Overall, however, the test was a success, and paved the way for the launch of the SNAPSHOT spacecraft in April 1965. The continued testing of S10F-3 during the SNAPSHOT mission verified that the thermal behavior of an astronuclear power system during ground test is essentially identical to that of the orbiting system, validating the ground test strategy that had been employed for the SNAP program.
SNAPSHOT: The First Nuclear Reactor in Space
In 1963 there was a change in the way the USAF was funding these programs. While the reactors themselves were solely under the direction of the AEC, the USAF still funded research into the power conversion systems, since they were still operationally useful; that changed with the removal of the 0.3 kWe to 1 kWe portion of the program. Budget cuts killed the ZrH-moderated core of the SNAP-2 reactor, although funding continued for the Hg vapor Rankine conversion system (which was being developed by TRW) until 1966. The SNAP-4 reactor, which had not even been run through criticality testing, was canceled, as was the planned flight test of the SNAP-10A, which had been funded by the USAF: with the cancellation of the 0.3-1 kWe power system program, the Air Force no longer had an operational need for the power supply. The associated USAF program that would have used the power supply was well behind schedule and over budget, and was canceled at the same time.
The USAF attempted to get more funding, but was denied, and a series of meetings between all parties involved failed to produce the needed funds. The partners then worked together to try to push through a reduced SNAPSHOT program, but funding shortfalls at the AEC (which received only $8.6 million of the $15 million it requested), as well as severe restrictions on the Air Force (which continued to fund Lockheed for the development and systems integration work through bureaucratic creativity), kept the program from moving forward. At the same time, it was realized that being able to deliver kilowatts or megawatts of electrical power, rather than the watts then available, would make the reactor a much more attractive program for a potential customer (either the USAF or NASA).
Finally, in February of 1964 the Joint Congressional Committee on Atomic Energy was able to fund the AEC to the tune of $14.6 million to complete the SNAP-10A orbital test. This reactor design had already been extensively tested and modeled, and unlike the SNAP-2 and -8 designs, no complex, highly experimental, mechanical-failure-prone power conversion system was needed.
SNAPSHOT consisted of a SNAP-10A fission power system mounted to a modified Agena-D spacecraft, which by this time was an off-the-shelf, highly adaptable spacecraft used by the US Air Force for a variety of missions. An experimental cesium contact ion thruster (read more about these thrusters on the Gridded Ion Engine page) was installed on the spacecraft for in-flight testing. The mission was to validate the SNAP-10A architecture with on-orbit experience, proving the capability to operate for 90 days without active control while providing 500 W (28.5 V DC) of electrical power. Additional requirements included: using a SNAP-2 reactor core with minimal modification (so that the higher-output SNAP-2 system, with its mercury vapor Rankine power conversion system, could be validated as well when the need arose); eliminating the need (while keeping the option) for active control of the reactor for one year once startup was achieved (to prove autonomous operation capability); facilitating safe ground handling during spacecraft integration and launch; and accommodating future growth potential in both available power and power-to-weight ratio.
While the threshold for mission success was set at 90 days, Atomics International wanted to prove a full year of capability; so, in those 90 days, the goal was to demonstrate that the entire reactor system was capable of one year of operation (the SNAP-2 requirement). Atomics International imposed additional, more stringent guidelines for the mission as well, specifying a number of design requirements: self-containment of the power system outside the structure of the Agena, as much as possible; mass and center-of-gravity requirements tighter than those specified by the US Air Force; meeting the military specifications for EM radiation exposure to the Agena; and others.
The flight was formally approved in March, and the launch occurred on April 3, 1965 on an Atlas-Agena D rocket from Vandenberg Air Force Base. The launch went perfectly, and placed the SNAPSHOT spacecraft in a polar orbit, as planned. Sadly, the mission could not be considered either routine or simple. One of the impedance probes failed before launch, and part of the micrometeorite detector system failed before returning data. A number of other minor faults were detected as well, but perhaps the most troubling were the shorts and voltage irregularities coming from the ion thruster, caused by high voltage failure modes, along with excessive electromagnetic interference from the system, which reduced the telemetry data to an unintelligible mess. The thruster was shut off until later in the flight, in order to focus on testing the reactor itself.
The reactor was given the startup order 3.5 hours into the flight, when the two gross adjustment control drums were fully inserted and the two fine control drums began a stepwise reactivity insertion into the reactor. Within 6 hours, the reactor achieved on-orbit criticality, and the active control portion of the reactor test program began. For the next 154 hours, the control drums were operated by ground command to test reactor behavior. Due to the problems with the ion engine, the failure and malfunction sensing systems were also switched off, because these could have been corrupted by the errant thruster. Following the first 200 hours of reactor operations, the reactor was set to autonomous operation at full power. Between 600 and 700 hours later, the voltage output of the reactor, as well as its temperature, began to drop; an effect the S10F-3 test reactor had also demonstrated, due to hydrogen migration in the core.
On May 16, just over one month after launch, contact was lost with the spacecraft for about 40 hours. Some time during this blackout, the reactor’s reflectors ejected from the core (although they remained attached to their actuator cables), shutting down the reactor. This spelled the end of reactor operations for the spacecraft, and when the emergency batteries died five days later, all communication with the spacecraft was lost forever. Only 45 days had passed since launch, and information was received from the spacecraft for only 616 orbits.
What caused the failure? There are many possibilities, but when the telemetry from the spacecraft was read, it was obvious that something had gone badly wrong. The only thing that can be said with complete confidence is that the error originated in the Agena spacecraft rather than in the reactor. No indications had been received before the blackout that the reactor was about to scram itself (the reflector ejection was the emergency scram mechanism), and the problem wasn’t one that should have been able to occur without ground commands. However, with the telemetry data gained from the dwindling battery after the shutdown, some suppositions could be made. The most likely immediate cause of the reactor’s shutdown was a possible spurious command from the high voltage command decoder, part of the Agena’s power conditioning and distribution system. This in turn was likely caused by one of two scenarios: either a piece of the voltage regulator failed, or the regulator became overstressed by either the unusually low-power vehicle loads or the command for the reactor to increase power output. Sadly, the root cause of this failure cascade was never directly determined, but all of the data received pointed to a high-voltage failure of some sort, rather than a low-voltage error (which could also have resulted in a reactor scram). Other possible causes of instrumentation or reactor failure, such as the thermal or radiation environment, collision with another object, onboard explosion of the chemical propellants used in the Agena’s main engines, and the previously noted flight anomalies (including the arcing and EM interference from the ion engine), were all eliminated as well.
Despite the spacecraft’s mysterious early demise, SNAPSHOT provided many valuable lessons in space reactor design, qualification, ground handling, launch challenges, and many other aspects of handling an astronuclear power source for potential future missions. Suggestions made after the conclusion of the mission included improved instrumentation design and performance characteristics; a sunshade for the main radiator, to eliminate the sun/shade efficiency difference observed during the mission; the use of a SNAP-2 type radiation shield to allow off-the-shelf, non-radiation-hardened electronic components, saving both money and weight on the spacecraft itself; and other minor changes. Finally, the safety program developed for SNAPSHOT (including the SCA4 submersion criticality tests, the RFD-1 test, and the good agreement in reactor behavior between the on-orbit and ground test versions of the SNAP-10A) showed that both the AEC and the customer for the SNAP-10A (be it the US Air Force or NASA) could have confidence that the system was ready for whatever mission needed it.
Sadly, at the time of SNAPSHOT there simply wasn’t a mission that needed this system. 500 We isn’t much power, even though it was more than many contemporary systems required. While improvements in thermoelectric generators continued to come (and have done so all the way to the present day, when thermoelectric systems are used for everything from RTGs on space missions to waste heat recapture in industrial facilities), with no mission calling for the SNAP-10A the program was largely shelved. Some follow-on paper studies would be conducted, but the lowest-powered of the US astronuclear designs, and the first reactor to operate in Earth orbit, was retired almost immediately after the SNAPSHOT mission.
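The power numbers above can be put in context with a back-of-the-envelope check, using two figures from this post: the ~35 kWt thermal power of the S10F-3 ground test and the ~500 We electrical output required of SNAPSHOT. This is only a rough sketch using the article’s numbers, not an official efficiency specification, but it shows just how low thermoelectric conversion efficiency was at the time:

```python
# Rough end-to-end efficiency of the SNAP-10A thermoelectric system.
# Inputs are figures quoted in the text, not an official specification.
thermal_power_w = 35_000.0   # S10F-3 reactor thermal output, watts
electric_power_w = 500.0     # SNAPSHOT net electrical output, watts

efficiency = electric_power_w / thermal_power_w
print(f"Thermoelectric system efficiency: {efficiency:.1%}")  # ~1.4%
```

With only about 1.4% of the fission heat emerging as electricity, the rest had to be rejected by the radiator, which is why radiator design and the sun/shade efficiency difference mattered so much.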
Post-SNAPSHOT SNAP: the SNAP Improvement Program
The SNAP fission-powered program didn’t end with SNAPSHOT, far from it. While the SNAP reactors only ever flew once, their design was mature, well-tested, and in most particulars ready to fly on short notice, with the problems in those particulars well-addressed on the nuclear side of things. The Rankine power conversion system for the SNAP-2, which went through five iterations, reached technological maturity as well, having operated in a non-nuclear environment for close to 5,000 hours while remaining in excellent condition, meaning that the 10,000 hour requirement for the PCS could be met without any significant challenges. The thermoelectric power conversion system also continued to be developed, focusing on an advanced silicon-germanium thermoelectric converter, which was highly sensitive to fabrication and manufacturing processes; however, we’ll look more at thermoelectrics in the power conversion systems series of blog posts. Just keep in mind that the power conversion systems continued to improve throughout this time, not just the reactor core design.
On the reactor side of things, the biggest challenge was definitely hydrogen migration within the fuel elements. As the hydrogen migrates away from the ZrH fuel, many problems occur: unpredictable reactivity within the fuel elements; temperature changes (dehydrogenated fuel elements developed hotspots, which in turn drove more hydrogen out of the fuel element); and changes in the ductility of the fuel. Together these caused major headaches for the end-of-life behavior of the reactors and severely limited the fuel element temperatures that could be achieved. However, the necessary testing for the improvement of these systems could easily be conducted with less-expensive reactor tests, including the SCA4 test-bed, and didn’t require flight architecture testing to continue.
The maturity of these two reactors led to a short-lived program in the 1960s to improve them, the SNAP Reactor Improvement Program. The SNAP-2 and -10 reactors went through many different design changes, some large and some small – and some leading to new reactor designs based on the shared reactor core architecture.
By this time, the SNAP-2 had mostly faded into obscurity. However, the fact that it shared a reactor core with the SNAP-10A, and that its power conversion system was continuing to improve, warranted some small studies to improve its capabilities. The two changes of note that are independent of the core (all of the -10 design changes that will be discussed could be applied to the -2 core as well, since at this point the two were identical) were the move from a single mercury boiler to three, to allow more power throughput and to reduce loads on one of the more challenging components, and the combination of multiple cores into a single power unit. These were proposed together for a space station design (which we’ll look at later) to provide an 11 kWe power supply for a crewed station.
The vast majority of this work was done on the -10A. Any further reactors of this type would have had an additional three sets of 1/8” beryllium shims on the external reflector, increasing the initial reactivity by about 50 cents (one dollar of reactivity is the margin between delayed and prompt criticality; total excess reactivity is often somewhere around $2-$3, however, to account for fission product buildup); this meant that additional burnable poisons (isotopes which absorb neutrons and transmute into something that is mostly neutron-transparent, evening out the reactivity of the reactor over its lifetime) could be loaded into the core at construction, mitigating the reactivity losses that were experienced during earlier operation of the reactor. With this, a number of other minor tweaks to reflector geometry, and a slight lowering of the core outlet temperature, the life of the SNAP-10A was extended from the initial design goal of one year to five years of operation. The end-of-life power level of the improved -10A was 39.5 kWt, with an outlet temperature of 980 F (527°C) and a power density of 0.14 kWt/lb (0.31 kWt/kg).
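Reactivity is conventionally bookkept in “dollars,” where $1.00 equals the effective delayed-neutron fraction (the margin between delayed and prompt criticality). A quick sketch of the conversion follows; the delayed-neutron fraction used here is a standard textbook value for U-235 fueled cores, assumed for illustration rather than quoted from any SNAP document:

```python
# Converting reactivity in "dollars" to absolute reactivity (delta-k/k).
# $1.00 of reactivity equals beta_eff, the effective delayed-neutron
# fraction; beta_eff ~= 0.0065 is a standard value for U-235 cores
# (an assumption here, not a SNAP-specific figure).
BETA_EFF = 0.0065

def dollars_to_delta_k(dollars: float, beta_eff: float = BETA_EFF) -> float:
    """Reactivity in delta-k/k corresponding to a dollar value."""
    return dollars * beta_eff

# The extra beryllium shims were worth about 50 cents:
shim_worth = dollars_to_delta_k(0.50)
print(f"$0.50 ~= {shim_worth:.5f} delta-k/k")  # ~0.00325
```

On this convention, the quoted $2-$3 of total excess reactivity corresponds to roughly 1.3-2% delta-k/k held in reserve against fission product buildup.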
These design modifications led to another iteration of the SNAP-10A, the Interim SNAP-10A/2 (I-10A/2). This reactor’s core was identical, but the reflector was further enhanced, and the outlet temperature and reactor power were both increased. In addition, even more burnable poisons were added to the core to account for the higher power output of the reactor. Perhaps the biggest design change with the Interim -10A/2 was the method of reactor control: rather than the passive control used on the -10A, the I-10A/2 was actively controlled over its entire period of operation, using the control drums to manage reactivity and power output. As with the improved -10A design, this reactor would have an operational lifetime of five years. These improvements gave the I-10A/2 an end-of-life power rating of 100 kWt, an outlet temperature of 1200 F (648°C), and an improved power density of 0.33 kWt/lb (0.73 kWt/kg).
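Since the SNAP-era documents quote everything in Imperial units, the metric figures given for these variants can be cross-checked with a quick sketch (the input values are the ones quoted in the text):

```python
# Cross-checking the Imperial-to-metric conversions quoted for the
# SNAP-10A family. Input figures are the ones given in the text.
LB_PER_KG = 0.45359237  # exact definition of the pound in kilograms

def fahrenheit_to_celsius(deg_f: float) -> float:
    return (deg_f - 32.0) * 5.0 / 9.0

def kwt_per_lb_to_kwt_per_kg(x: float) -> float:
    return x / LB_PER_KG

print(f"{fahrenheit_to_celsius(980):.1f} C")         # 526.7 C (text: 527)
print(f"{fahrenheit_to_celsius(1200):.1f} C")        # 648.9 C (text: 648)
print(f"{kwt_per_lb_to_kwt_per_kg(0.33):.2f} kWt/kg")  # 0.73, matching the text
```

The small discrepancies (527 vs 648) come down to whether a given figure was rounded up or truncated; the underlying conversions all check out.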
This design, in turn, led to the Upgraded SNAP-10A/2 (U-10A/2). The biggest in-core difference between the I-10A/2 and the U-10A/2 was the hydrogen barrier used in the fuel elements: rather than the initial design common to the -2, -10A, and I-10A/2, this reactor used the hydrogen barrier from the SNAP-8 reactor, which we’ll look at in the next blog post. This is significant because the degradation of the hydrogen barrier over time, and the resulting loss of hydrogen from the fuel elements, was the major life-limiting factor of the SNAP-10 variants up until this point. This reactor also went back to static control, rather than the active control used in the I-10A/2. As with the other -10A variants, the U-10A/2 had a possible core lifetime of five years, and other than a 100 F improvement in outlet temperature (to 1300 F) and a marginal drop in power density to 0.31 kWt/lb, it shared most of its characteristics with the I-10A/2.

SNAP-10B: The Upgrade that Could Have Been
One consistent mass penalty in the SNAP-10A variants that we’ve looked at so far is the control drums: relatively large reactivity insertions were possible with a minimum of movement due to the wide profile of the control drums, but this also meant that they extended well away from the reflector, especially early in the mission. This meant that, in order to prevent neutron backscatter from hitting the rest of the spacecraft, the shield had to be relatively wide compared to the size of the core – and the shield was not exactly a lightweight system component.
The SNAP-10B reactor was designed to address this problem. It used a similar core to the U-10A/2, with the upgraded hydrogen barrier from the -8, but the reflector was tapered to better fit the profile of the shadow shield, and axially sliding control cylinders would be moved in and out to provide control instead of the rotating drums of the -10A variants. A number of minor reactor changes were needed, and some of the reactor physics parameters changed due to this new control system; but, overall, very few modifications were needed.
The first -10B reactor, the -10B Basic (B-10B), was a very simple and direct evolution of the U-10A/2, with nothing but the reflector and control structures changed to the -10B configuration. Other than a slight drop in power density (to 0.30 kWt/lb), the rest of the performance characteristics of the B-10B were identical to the U-10A/2. This design would have been a simple evolution of the -10A/2, with a slimmer profile to help with payload integration challenges.
The next iteration of the SNAP-10B, the Advanced -10B (A-10B), had options for significant changes to the reactor core and the fuel elements themselves. One thing to keep in mind about these reactors is that they were being designed above and beyond any specific mission needs; and, on top of that, a production schedule hadn’t been laid out for them. This means that many of the design characteristics of these reactors were never “frozen” – the point in the design process when the production team of engineers needs a basic configuration that won’t change in order to proceed with the program, although obviously many minor changes (and possibly some major ones) would continue to be made up until the system was flight qualified.
Up until now, every SNAP-10 design had used a 37 fuel element core, with the only difference occurring in the Upgraded -10A/2 and Basic -10B reactors (which changed the hydrogen barrier ceramic enamel inside the fuel element clad). With the A-10B, however, there were three core size options: the first kept the 37 fuel element core, the second used a medium-sized 55-element core, and the third a large 85-element core. There were other questions about the final design as well, involving two other major core changes (along with a lot of open minor questions). The first option was to add a “getter”: a sheath of hydrophilic (highly hydrogen-absorbing) metal outside the steel casing of the clad, but still within the active region of the core. While this isn’t as ideal as containing the hydrogen within the U-ZrH itself, the neutron moderation provided by the hydrogen would be lost at a far lower rate. The second option was to change the core geometry itself as the temperature of the core changed, using devices called “Thermal Coefficient Augmenters” (TCA). Two approaches were suggested: first, a bellows system driven by NaK core temperature (using ruthenium vapor), which moved a portion of the radial reflector to change the core’s reactivity coefficient; second, securing grids for the fuel elements which would expand as the NaK increased in temperature and contract as the coolant cooled.
Between the options available, with core size, fuel element design, and variable core and fuel element configuration all up in the air, the Advanced SNAP-10B was less a single design than a wide range of possible reactors. Many characteristics remained identical across the range, including the fissile fuel itself, the overall core size, maximum outlet temperature, and others. However, the number of fuel elements in the core alone resulted in a wide range of power outputs, and the core modification the designers ultimately settled on (Getter vs TCA; I haven’t seen any indication that the two were combined) would determine what the capabilities of the reactor core would actually be. For simplicity’s sake, and because of the very limited documentation available on the SNAP-10B program (other than a general comparison table from the SNAP Systems Capability Study from 1966), we’ll focus on the 85 fuel element versions of the two options: the Getter core and the TCA core.
A final note, which isn’t clear from these tables: each of these reactor cores was nominally optimized to a 100 kWt power output; the additional fuel elements reduced the power density required from the core at any given time, in order to maximize fuel lifetime. Even with the improved hydrogen barriers and the variable core geometry, while these systems CAN offer higher power, it comes at the cost of a shorter (but still minimum one year) reactor life. Because of this, all reported estimates assume a 100 kWt power level unless otherwise stated.
The idea of a hydrogen “getter” was not a new one at the time that it was proposed, but it was one that hadn’t been investigated thoroughly at that point (and is a very niche requirement in terrestrial nuclear engineering). The basic concept is to get the second-best option when it comes to hydrogen migration: if you can’t keep the hydrogen in your fuel element itself, then the next best option is keeping it in the active region of the core (where fission is occurring, and neutron moderation is the most directly useful for power production). While this isn’t as good as increasing the chance of neutron capture within the fuel element itself, it’s still far better than hydrogen either dissolving into your coolant, or worse yet, migrating outside your reactor and into space, where it’s completely useless in terms of reactor dynamics. Of course, there’s a trade-off: because of the interplay between the various aspects of reactor physics and design, it wasn’t practical to change the external geometry of the fuel elements themselves – which means that the only way to add a hydrogen “getter” was to displace the fissile fuel itself. There’s definitely an optimization question to be considered; after all, the overall reactivity of the reactor will have to be reduced because the fuel is worth more in terms of reactivity than the hydrogen that would be lost, but the hydrogen containment in the core at end of life means that the system itself would be more predictable and reliable. Especially for a static control system like the A-10B, this increase in behavioral predictability can be worth far more than the reactivity that the additional fuel would offer. Of the materials options that were tested for the “getter” system, yttrium metal was found to be the most effective at the reactor temperatures and radiation flux that would be present in the A-10B core. 
However, while fuel element design had improved to the point that the “getter” program continued until the cancellation of the SNAP-2/10 core experiments, many uncertainties remained as to whether the concept was worth employing in a flight system. The second option was to vary the core geometry with temperature: the Thermal Coefficient Augmentation (TCA) variant of the A-10B. This would change the reactivity of the reactor mechanically, without requiring active commands from any systems outside the core itself. Two approaches were investigated: a bellows arrangement, and an expanding grid holding the fuel elements themselves.
The first variant used a bellows to move a portion of the reflector out as the temperature increased. This was done using a ruthenium reservoir within the core itself: as the NaK increased in temperature, the ruthenium would boil, pushing a bellows which would move some of the beryllium shims away from the reactor vessel, reducing the overall worth of the radial reflector. While this sounds simple in theory, gas diffusion from a number of different sources (from fission products migrating through the clad to offgassing of various components) meant that the gas in the bellows would not be pure ruthenium vapor. While this could have been accounted for, a lot of study with a flight-type system would have been needed to properly model the behavior. The second option would change the distance between the fuel elements themselves, using base plates with concentric accordion folds for each ring of fuel elements, called the “convoluted baseplate.” As the NaK heated beyond the optimized design temperature, the base plates would expand radially, separating the fuel elements and reducing the reactivity of the core. This involved a different set of materials tradeoffs, with just getting the device constructed causing major headaches. The design used both 316 stainless steel and Hastelloy C in its construction, and was cold annealed; the alternative, hot annealing, resulted in random cracks, and while explosive forming was explored, it wasn’t practical at the time to map the shockwave propagation through such a complex structure well enough to ensure reliable construction. While this concept has given me a lot to think about regarding variable reactor geometry, the approach faced many problems (which might have been solved, or might have proven insurmountable).
Major lifetime concerns would have included ductility and elasticity changes across the wide range of temperatures the baseplate would be exposed to; work hardening of the metal, thermal stresses, and neutron bombardment of the base plates would also have been major issues for this concept.
These design options were briefly tested, but most were never fully developed. Because the reactor’s design was never frozen, many engineering challenges remained in every option that had been presented. Also, while I know that a report was written on the SNAP-10B reactor’s design (R. J. Gimera, “SNAP 10B Reactor Conceptual Design,” NAA-SR-10422), I can’t find it… yet. This makes writing about the design difficult, to say the least.
Because of this, and the extreme paucity of documentation on this later design, it’s time to turn to what these innovative designs could have offered when it comes to actual missions.
The Path Not Taken: Missions for SNAP-2, -10A
Every space system has to have a mission, or it will never fly. Both SNAP-2 and SNAP-10 offered a lot for the space program, both for crewed and uncrewed missions; and what they offered only grew with time. However, due to priorities at the time, and the fact that many records from these programs appear to never have been digitized, it’s difficult to point to specific mission proposals for these reactors in a lot of cases, and the missions have to be guessed at from scattered data, status reports, and other piecemeal sources.
SNAP-10 was always a lower-powered system, even with its growth to a kWe-class power supply. Because of this, it was always seen as a power supply for unmanned probes, mostly in low Earth orbit, but it would certainly have been useful in interplanetary studies as well, which at this point were just appearing on the horizon as practical. Had the SNAPSHOT system worked as planned, the cesium thruster that had been on board the Agena spacecraft would have been an excellent propulsion source for an interplanetary mission. However, due to the long mission times and relatively fragile fuel of the original SNAP-10A, it is unlikely that these missions would have been initially successful, while the SNAP-10A/2 and SNAP-10B systems, with their higher power outputs and longer lifetimes, would have been well suited to many interplanetary missions.
As we saw in the US-A program, one of the major advantages that a nuclear reactor offers over photovoltaic cells – which were just starting to be a practical technology at the time – is that it presents very little surface area, and therefore the atmospheric drag that all satellites experience due to the thin atmosphere in lower orbits is less of a concern. There are many cases where this lower altitude offers clear benefits, but the vast majority of them deal with image resolution: the lower you are, the clearer your imagery can be with the same sensors. For the Russians, the ability to get better imagery of US Navy movements in all weather conditions was of strategic importance, leading to the US-A program. For Americans, who had other means of surveillance (and an opponent’s far less capable blue-water navy to track), radar surveillance was not a major focus – although it should be noted that 500 We isn’t going to give you much, if any, resolution, no matter what your altitude.
One area where SNAP-10A was considered was meteorological satellites. With a growing understanding of how weather could be monitored, and what types of data were available through orbital systems, the ability to take pictures from orbit using the first generations of digital cameras (which were just coming into existence, and not nearly good enough to interest the intelligence organizations of the time) and transmit the data back to Earth would have allowed for the best weather tracking capability in the world at the time. By using a low orbit, these satellites would be able to make the most of the primitive equipment available at the time, and possibly (speculation on my part) have been able to gather rudimentary moisture content data as well.
However, while SNAP-10A was worked on for about a decade, for the entire program there was always the question of “what do you do with 500-1000 We?” Sure, it’s not an insignificant amount of power, even then, but… communications and propulsion, the two things that are the most immediately interesting for satellites with reliable power, both have a linear relationship between power level and capability: the more power, the more bandwidth, or delta-vee, you have available. Also, the -10A was only ever rated for one year of operations, although it was always suspected it could be limped along for longer, which precluded many other missions.
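To put a rough number on that linear scaling for electric propulsion, here’s a quick sketch. The thruster figures (exhaust velocity, efficiency) are assumptions chosen for illustration, not values from the SNAP program or any specific flight hardware:

```python
# Why electric propulsion capability scales linearly with electrical power:
# jet power P_jet = eta * P_e = 0.5 * F * v_e, so thrust F = 2 * eta * P_e / v_e.
# All thruster figures below are illustrative assumptions.

def thrust_newtons(power_w, exhaust_velocity_ms, efficiency):
    """Thrust available from electrical power P_e at a given exhaust velocity."""
    return 2.0 * efficiency * power_w / exhaust_velocity_ms

# Assumed parameters: 30 km/s exhaust velocity, 60% power-to-jet efficiency.
v_e, eta = 30_000.0, 0.6

f_low = thrust_newtons(500.0, v_e, eta)    # SNAP-10A-class power level
f_high = thrust_newtons(5_000.0, v_e, eta) # multi-kWe (SNAP-2/-10B-class) level

print(f"{f_low * 1000:.0f} mN at 500 We")   # 20 mN
print(f"{f_high * 1000:.0f} mN at 5 kWe")   # 200 mN: 10x the power, 10x the thrust
```

Ten times the power buys ten times the thrust (and, over a fixed burn time, roughly ten times the delta-vee), which is why the jump from the -10A to the multi-kilowatt designs mattered so much for mission planners.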
The later SNAP-10A/2 and -10B reactors, with their multi-kilowatt range and years-long lifespans, offered far more flexibility, but by this point many in the AEC, the US Air Force, NASA, and others were no longer very interested in the program, with newer, more capable reactor designs becoming available (we’ll look at some of those in the next post). While the SNAP-10A was the only flight-qualified and -tested reactor design (and the errors on the mission were shown to be the fault not of the reactor, but of the Agena spacecraft), it was destined to fade into obscurity.
SNAP-10A was always the smallest of the reactors, and also the least powerful. What about the SNAP-2, the 3-6 kWe reactor system?
Initial planning for the SNAP-2 offered many options, with communications satellites being mentioned as an option early on – especially if the reactor lifetime could be extended. While not designed specifically for electric propulsion, it could have utilized that capability either on orbit around the Earth or for interplanetary missions. Other options were also proposed, but one was seized on early: a space station.
At the time, most space station designs were nuclear powered, and there were many different configurations. However, there were two that were the most common: first was the simple cylinder, launched as a single piece (although there were multiple module designs proposed which kept the basic cylinder shape), which would finally be realized with the Skylab mission; second was a torus-shaped space station, proposed almost half a century before by Tsiolkovsky, and popularized at the time by Wernher von Braun. SNAP-2 was adapted to both of these types of stations. Sadly, while I can find one paper on the use of the SNAP-2 on a station, it focuses exclusively on the reactor system, and doesn’t use a particular space station design, instead laying out the ground limits of the use of the reactor on each type of station, and especially the shielding requirements for each station’s geometry. It was also noted that the reactors could be clustered, providing up to 11 kWe of power for a space station, without significant change to the radiation shield geometry. We’ll look at radiation shielding in a couple posts, and look at the particulars of these designs there.
Since space stations were something that NASA didn’t have the budget for at the time, most designs remained vaguely defined, without much funding or impetus within the structure of either NASA or the US Air Force (although SNAP-2 would have definitely been an option for the Manned Orbiting Laboratory program of the USAF). By the time NASA was seriously looking at space stations as a major funding focus, the SNAP-8 derived Advanced Zirconium Hydride reactor, and later the SNAP-50 (which we’ll look at in the next post), offered more capability than the less powerful SNAP-2. Once again, the lack of a mission spelled the doom of the SNAP-2 reactor.
The SNAP-2 reactor met its piecemeal fate even earlier than the SNAP-10A, but oddly enough both the reactor and the power conversion system lasted just as long as the SNAP-10A did. The reactor core for the SNAP-2 became the SNAP-10A/2 core, and the CRU power conversion system continued under development until after the reactor cores had been canceled. However, mention of the SNAP-2 as a system disappears in the literature around 1966, while the -2/10A core and CRU power conversion system continued until the late 1960s and late 1970s, respectively.
The Legacy of The Early SNAP Reactors
The SNAP program was canceled in 1971 (with one ongoing exception), after flying a single reactor which was operational for 43 days, and conducting over five years of fission powered testing on the ground. The death of the program was slow and drawn out, with the US Air Force canceling the program requirement for the SNAP-10A in 1963 (before the SNAPSHOT mission even launched), the SNAP-2 reactor development being canceled in 1967, all SNAP reactors (including the SNAP-8, which we’ll look at next week) being canceled by 1974, and the CRU power conversion system being continued until 1979 as a separate internal, NASA-supported but not fully funded, project by Rockwell International.
The promise of SNAP was not enough to save the program from the massive cuts to space programs, both for NASA and the US Air Force, that fell even as humanity stepped onto the Moon for the first time. This is an all-too-common fate, both in advanced nuclear reactor engineering and design as well as aerospace engineering. As one of the engineers who worked on the Molten Salt Reactor Experiment noted in a recent documentary on that technology, “everything I ever worked on got canceled.”
However, this does not mean that the SNAP-2/10A programs were useless, or that nothing except a permanently shut down reactor in orbit was achieved. In fact, the SNAP program has left a lasting mark on the astronuclear engineering world, and one that is still felt today. The design of the SNAP-2/10A core, and the challenges that were faced with both this reactor core and the SNAP-8 core informed hydride fuel element development, including the thermal limits of this fuel form, hydrogen migration mitigation strategies, and materials and modeling for multiple burnable poison options for many different fuel types. The thermoelectric conversion system (germanium-silicon) became a common one for high-temperature thermoelectric power conversion, both for power conversion and for thermal testing equipment. Many other materials and systems that were used in this reactor system continued to be developed through other programs.
Possibly the most broad and enduring legacy of this program is in the realm of launch safety, flight safety, and operational paradigms for crewed astronuclear power systems. The foundation of the launch and operational safety guidelines that are used today, for both fission power systems and radioisotope thermoelectric generators, were laid out, refined, or strongly informed by the SNAPSHOT and Space Reactor Safety program – a subject for a future web page, or possibly a blog post. From the ground handling of a nuclear reactor being integrated to a spacecraft, to launch safety and abort behavior, to characterizing nuclear reactor behavior if it falls into the ocean, to operating crewed space stations with on-board nuclear power plants, the SNAP-2/10A program literally wrote the book on how to operate a nuclear power supply for a spacecraft.
While the reactors themselves never flew again, nor did their direct descendants in design, the SNAP reactors formed the foundation for astronuclear engineering of fission power plants for decades. When we start to launch nuclear power systems in the future, these studies, and the carefully studied lessons of the program, will continue to offer lessons for future mission planners.
More Coming Soon!
The SNAP program extended well beyond the SNAP-2/10A program. The SNAP-8 reactor, started in 1959, was the first astronuclear design specifically developed for a nuclear electric propulsion spacecraft. It evolved into several different reactors, notably the Advanced ZrH reactor, which remained the preferred power option for NASA’s nascent modular space station through the mid-to-late 1970s, due to its ability to be effectively shielded from all angles. Its eventual replacement, the SNAP-50 reactor, offered megawatts of power using technology from the Aircraft Nuclear Propulsion program. Many other designs were proposed in this time period, including the SP-100 reactor, the ancestor of Kilopower (the SABRE heat pipe cooled reactor concept), as well as the first American in-core thermionic power system, advances in fuel element designs, and many other innovations.
Originally, these concepts were included in this blog post, but this post quickly expanded to the point that there simply wasn’t room for them. While some of the upcoming post has already been written, and a lot of the research has been done, this next post is going to be a long one as well. Because of this, I don’t know exactly when the post will end up being completed.
After we look at the reactor programs from the 1950s to the late 1980s, we’ll look at NASA and Rosatom’s collaboration on the TOPAZ-II reactor program, and the more recent history of astronuclear designs, from SDI through the Fission Surface Power program. We’ll finish up the series by looking at the most recent power systems from around the world, from JIMO to Kilopower to the new Russian on-orbit nuclear electric tug.
After this, we’ll look at shielding for astronuclear power plants, and possibly ground handling, launch safety, and launch abort considerations, then move on to power conversion systems, which will be a long series of posts due to the sheer number of options available.
These next posts are more research-intensive than usual, even for this blog, so while I’ll be hard at work on the next posts, it may be a bit more time than usual before these posts come out.
Hello, and welcome back to Beyond NERVA in the second part of our two-part series on ground testing NTRs. In part one, we examined the testing done at the Nuclear Rocket Development Station in Nevada as part of Project Rover, and also a little bit of the zero-power testing that was done at the Los Alamos Scientific Laboratory to support the construction, assembly, and zero-power reactivity characterization of these reactors. We saw that the environmental impact to the population (even those living closest to the test) rarely exceeded the equivalent dose of a full-body high-contrast CT scan. However, even this low amount of radioisotope release is unacceptable in today’s regulatory environment, so new avenues of testing must be explored.
We will look at the proposals over the last 25 years for new ways of testing nuclear thermal rockets in full flow, fission-powered testing, as well as looking at cost estimates (which, as always, should be taken with a grain of salt) and the challenges associated with each concept.
Finally, we’re going to look at NASA’s current plans for test facilities, facility costs, construction schedules, and testing schedules for the LEU NTP program. This information is based on the preliminary estimates released by NASA, and as such there’s still a lot that’s up in the air about these concepts and cost estimates, but we’ll look at what’s available.
Pre-Hot Fire Testing: Thermal Testing, Neutronic Analysis, and Preparation for Prototypic Fuel Testing
We’ve already taken a look at the test stands that are currently in use for fuel element development, CFEET and NTREES. These test stands allow for electrically heated testing in a hydrogen environment, allowing the thermal and chemical properties of NTR fuel to be tested. They also allow for things like erosion tests to be done, to ensure clad materials are able to withstand not just the thermal stresses of the test but also the erosive effects of the hot hydrogen moving through them at a high rate.
However, there are a number of other effects that the fuel elements will be exposed to during reactor operation, and the behavior of these materials in an irradiated environment is something that still needs to be characterized. Fuel element irradiation is done using existing reactors, either in a beamline for out-of-core initial testing, or, for in-core testing, using specially designed capsules to ensure the fuel elements won’t adversely affect the operation of the reactor, and to ensure the fuel element is in the proper environment for its operation.
A number of reactors could be used for these tests, including TRIGA-type reactors that are common in many universities around the US. This is one of the advantages of LEU over the traditional HEU: there are fewer restrictions on LEU fuels, so many of these early tests could be carried out by universities and contractors who have these types of reactors. This will be less expensive than using DOE facilities, and has the additional advantage of supporting additional research and education in the field of astronuclear engineering.
The initial fuel element prototypes for in-pile testing will be unfueled versions of the fuel element, to ensure that the rest of the materials involved won’t react adversely to the neutronic and radiation environment they’ll be subjected to. This is less of a concern than it used to be, because material properties under radiation flux have been continually refined over the decades, but caution is the watchword with nuclear reactors, so this sort of test will still need to be carried out. These experiments will be finally characterized in the Safety Analysis Report and Technical Safety Review documents, a major milestone for any fuel element development program. These documents will provide the reactor operators all the necessary information on the behavior of these fuel elements in the research reactor in preparation for fueled in-pile testing.

Concurrently with these plans, extensive neutronic and thermal analysis will be carried out, based on any changes necessitated by the in-pile unfueled testing. Finally, a Quality Assurance Plan must be formulated, verified, and approved. Each material poses different challenges to producing fuel elements of the required quality, and each facility has slightly different regulations and guidelines to meet its particular needs and research goals.

After these studies are completed, the in-pile, unfueled fuel elements are irradiated, and then subjected to post-irradiation examination for chemical, mechanical, and radiological behavior changes. Fracture toughness, tensile strength, thermal diffusivity, and microstructure examination through both scanning electron and transmission electron microscopy are particular areas of focus at this point in the testing process.
One last thing to consider for in-pile testing is that the containment vessel (often called a can) that holds the fuel elements inside the reactor has to be characterized, especially its impact on the neutron flux and thermal transfer properties, before in-pile testing can be done. This process is relatively straightforward, but still complex due to the number of variables involved: it means making an MCNP model of the fuel element in the can at various points in each potential test reactor, in order to verify the behavior of the test article in the test reactor. This is something that can be done early in the process, but the model may need to be slightly modified to reflect the refinements from the experimental regime we’ve been looking at above.
Another consideration for the can will be its thermal insulation properties. NTR fuel elements are run at the edge of the thermal capabilities of the materials they’re made of, since this maximizes thermal transfer and therefore specific impulse. This also means that, for the test to be as accurate as possible, the fuel element itself must be far hotter than the surrounding reactor, generally in the ballpark of 2500 K. The ORNL Irradiation Plan suggests the use of SIGRATHERM, a soft graphite felt, for this insulating material. Graphite’s behavior is well understood in reactors (and for those in the industry, the fact that the felt has about 4% of the density of solid graphite makes Wigner energy release minimal).
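To get a feel for why the insulation matters, here’s a one-dimensional conduction estimate of the heat leaking through a felt layer. The felt conductivity, reactor temperature, and layer thickness below are all assumptions for illustration; real values would come from SIGRATHERM vendor data and the specific test reactor:

```python
# 1-D conduction estimate of heat leak through a felt insulation layer.
# Every number below is an illustrative assumption, not a design value.

def heat_flux_w_per_m2(k_w_mk, t_hot_k, t_cold_k, thickness_m):
    """Fourier's law for a slab: q = k * (T_hot - T_cold) / L."""
    return k_w_mk * (t_hot_k - t_cold_k) / thickness_m

k_felt = 0.3        # W/(m*K), assumed effective conductivity of graphite felt
t_fuel = 2500.0     # K, fuel element surface target from the irradiation plan
t_reactor = 600.0   # K, assumed surrounding test-reactor temperature
thickness = 0.01    # m, assumed 1 cm felt layer

q = heat_flux_w_per_m2(k_felt, t_fuel, t_reactor, thickness)
print(f"{q / 1000:.0f} kW/m^2 leaked through the felt")  # 57 kW/m^2
```

Even a low-conductivity felt passes tens of kilowatts per square meter across a 1900 K temperature difference, which is why the can’s thermal design has to be characterized alongside its neutronics.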
Pre-Hot Fire Testing: In-Pile Prototypic Fuel Testing
Once this extensive testing regime for fuel elements has been completed, a fueled set of fuel elements would be manufactured and transported to the appropriate test reactor. Not only are TRIGA-type reactors common to many universities an option, but three research reactors are also available with unique capabilities. The first is the High Flux Isotope Reactor at Oak Ridge, which is one of the longest-operating research reactors with quite a few ports for irradiation studies at different neutron flux densities. As an incredibly well-characterized reactor, there are many advantages to using this well-understood system, especially for analysis at different levels of fuel burnup and radiation flux.
The second is a newly-reactivated reactor at Idaho National Laboratory, the Transient Reactor Test (TREAT). An air cooled, graphite moderated thermal reactor, the most immediately useful instrument for this sort of experiment is the hodoscope. This device uses fast neutrons to detect fission activity in the prototypic fuel element in real time, allowing unique analysis of fuel element behavior, burnup behavior, and other characteristics that can only be estimated after in-pile testing in other reactors.
The third, also at Idaho National Laboratory, is the Advanced Test Reactor. A pressurized light water reactor, its core has four lobes, and almost looks like a clover from above. This allows for very fine control of the neutron flux the fuel elements would experience. In addition, six of the locations in the core have independent cooling systems that are separated from the primary cooling system. This would allow (with modification, and possible site permission requirements due to the explosive nature of H2) the use of hydrogen coolant to examine the chemical and thermal transfer behaviors of the NTR fuel element while undergoing fission.
Each of these reactors uses a slightly different form of canister to contain the test article. This is required to prevent any damage to the fuel element from contaminating the rest of the reactor core – an incredibly expensive, difficult, and lengthy cleanup that can be avoided by chemically isolating the fuel elements from their surrounding environment. Most often, these cans are made out of aluminum-6061, 300 series stainless steel, or grade 5 titanium (links in the reference section). According to a recent Oak Ridge document (linked in references), the most preferred material would be the titanium, with the stainless steel being the least attractive, due to 59Fe and 60Co activation causing the can to become highly gamma-active. This makes the transportation and disposal of the cans post-irradiation much more costly.
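The stainless steel problem comes down to half-life. A quick decay sketch makes the point: the half-lives below are standard nuclear data, while the one-year comparison point is just a convenient illustration (actual activities depend on the irradiation history):

```python
import math

# Fraction of an activation product remaining after time t:
# A(t)/A0 = exp(-ln(2) * t / t_half).
# Half-lives are standard values; no specific activity levels are assumed.

def fraction_remaining(t_days, half_life_days):
    return math.exp(-math.log(2.0) * t_days / half_life_days)

FE59_HALF_LIFE = 44.5            # days
CO60_HALF_LIFE = 5.27 * 365.25   # days (5.27 years)

one_year = 365.25
print(f"Fe-59 left after 1 year: {fraction_remaining(one_year, FE59_HALF_LIFE):.4f}")
print(f"Co-60 left after 1 year: {fraction_remaining(one_year, CO60_HALF_LIFE):.4f}")
# Fe-59 is essentially gone within a year; Co-60 lingers for decades,
# which is what drives the stainless can's handling and disposal cost.
```

After a year of cooldown, less than 1% of the 59Fe activity remains, but nearly 90% of the 60Co does; a gamma-hot can stays gamma-hot on timescales that matter for shipping and waste disposal.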
Here’s an example of the properties that would be tested by the time that the tests we’ve looked at so far have been completed:
NTR Hot Fire Testing For Today’s Regulatory Environment
It goes without saying that with the current regulatory strictures placed on nuclear testing, the same type of testing done during Rover cannot be done today. Radioisotope release into the environment is incredibly stringently regulated, so open-air testing such as was conducted at Jackass Flats would not be possible. However, multiple options have been proposed over the ensuing years for testing an NTR within the more rigorous regulatory regime, along with cost estimates (some more reliable than others) and characterizations of the challenges that need to be overcome in order to ensure that the necessary environmental regulations are met.
The options for current hot-fire testing of an NTR are: the use of upgraded versions of the effluent scrubbers used in the Nuclear Furnace test reactor; the use of boreholes as effluent capture and scrubbing systems (either already-existing boreholes drilled for nuclear weapons tests that have not been used for that purpose at Frenchman’s Flat, or new boreholes at the Idaho National Laboratory); the use of a horizontal, hydrogen-cooled scrubbing system (either using existing U-1a or P-tunnel facilities modified for the purpose, or constructing a new facility at the Nevada National Security Site); and the use of a new, full-exhaust-capture system at NASA’s current rocket test facilities at the John C. Stennis Space Center in Mississippi.
The Way We Did It Before: Nuclear Furnace Exhaust Scrubbers
The NF-1 test, the last test of Project Rover, actually included an exhaust scrubber to minimize the amount of effluent released in the test. Because this test was looking at different types of fuel elements than had been looked at in most previous tests, there was some concern that erosion would be an issue with these fuel elements more than others.
The hydrogen exhaust, after passing the instrumentation that would provide similar data to the Elephant Gun used in earlier tests, would be cooled with a spray of water, which then flashed to steam. This water was initially used to moderate the reactor itself, and then part of it was siphoned off into a wastewater holding tank while the rest was used for this exhaust cooling injection system. After this, the steam/H2 mixture had a temperature of about 1100 R.
After leaving the water injector system, the coolant went through a radial outflow filter that was about 3 ft long, containing two wire mesh screens, the first with 0.078 inch square openings and the second with 0.095 inch square openings.
Once it had passed through the screens, a steam generator was used to further cool the effluent, and to pull some of the H2O out of the exhaust stream. Once past this steam generator, the first separator drew the now-condensed water out of the effluent stream. Part of the radioactive component of the exhaust was at this point dissolved in the water. The water was drawn off to maintain an appropriate liquid level, and was moved into the wastewater disposal tank for filtering. A further round of exhaust cooling followed, using a water heat exchanger to cool the remaining effluent enough to condense out the rest of the water. The water used in this heat exchanger would be used by the steam generator earlier in the effluent stream as its cool water intake, and would be discharged into the wastewater holding tank, but would not come in direct contact with the effluent stream. Once past the heat exchanger, the now much cooler H2/H2O mixture would go through a second separator, identical in design to the first. At this point, most of the radioactive contaminant that could be dissolved in water had been, and the discharge from this unit was pretty much completely dry.
A counterflow, U-tube type heat exchanger was then used to cool the effluent even more, and then a third separator – identical to the first two – was used to capture any last amounts of water still present in the effluent stream. During normal operation, though, basically no water would collect in this separator. The gas would then be passed through a silica gel sorption bed to further dry it. A back flow of gaseous nitrogen would be used to dry this bed for reuse. The gas, at this point completely dried, was then passed through another heat exchanger almost identical to the one that preceded the silica gel bed.
After passing through a throttle valve (used to maintain back-pressure in the reactor), the gas was passed through an activated charcoal filter trap, 60 inches long and 60 inches in diameter, to capture the rest of the radioactive effluent left in the hydrogen stream, after being mixed with LH2 to further cool the gas to 250-350 R. Finally, the now-cleaned H2 was burned to prevent a buildup of H2 gas in the area – a major explosion hazard. This filter system was constantly adjusted after each power test, because pressure problems kept cropping up for a number of reasons, from too much resistance to thermal disequilibrium.
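The temperatures above are in degrees Rankine, which can read oddly to anyone used to SI. Since both Rankine and Kelvin are absolute scales, the conversion is an exact factor of 5/9:

```python
# Rankine-to-Kelvin conversion for the temperatures quoted in the NF-1
# scrubber description. K = R * 5/9; no offset, since both are absolute scales.

def rankine_to_kelvin(t_r):
    return t_r * 5.0 / 9.0

for t_r in (1100.0, 250.0, 350.0):
    print(f"{t_r:>6.0f} R = {rankine_to_kelvin(t_r):.0f} K")
# 1100 R is ~611 K after the water-spray cooling;
# 250-350 R is ~139-194 K after mixing with LH2, just before the charcoal trap.
```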
So how well did this system do at scrubbing the effluent? Two of the biggest concerns were the capture of radiokrypton and radioxenon, both mildly radioactive noble gasses. The activated charcoal bed was primarily tasked with scrubbing these gasses out of the exhaust stream. Since xenon is far more easily captured than krypton in activated charcoal, the focus was on ensuring the krypton would be scrubbed out of the gas stream, since this meant that all the xenon would be captured as well. Because the Kr could be pushed through the charcoal bed by the flow of the H2, a number of traps were placed through the charcoal bed to measure gamma activity at various points. Furthermore, the effluent was sampled before being flared off, to get a final measurement of how much krypton was released by the trap itself.
Looking at the sampling of the exhaust plume, as well as the ground test stations, the highest dose rate was 1 mCi/hr, far lower than in the other NTR tests. Radioisotope concentrations were also far lower than in the other tests. However, some radiation was still released from the reactor, and the complications of ensuring that this doesn’t occur (effectively no release is allowed under current testing regimes) due to material, chemical, and gas-dynamic challenges make this a very challenging, and costly, proposition to adapt to a full-flow NTR test.
Above Ground Test Option #1: Exhaust Scrubbing
The most detailed analysis of this concept was in support of the Space Nuclear Thermal Propulsion program, run by the Department of Defense – better known as Project Timber Wind. This was a far larger engine (111 kN, as opposed to 25 kN), so the exhaust volume would be far larger. This also means that the costs associated with the program would be larger due to the higher exhaust flow rate, but unfortunately it’s impossible to make a reasonable estimate of the cost reduction for a smaller engine, since these costs are far from linear in nature (testing a 25 kN engine would cost significantly more than 20% of the cost estimated for the SNTP engine). However, it’s a good example of the types of facilities needed, and the challenges associated with this approach.
The primary advantage of the effluent treatment system (ETS) concept is that it doesn’t use H2O to cool the exhaust, but LH2. This means that the potential for release of large amounts of (very mildly) irradiated water into the groundwater supply is severely limited (although the water solubility of the individual fission products would not change). The disadvantage, of course, is that it requires large amounts of LH2 to be on hand. At Stennis, this is less of an issue, since LH2 facilities are already in place, but LH2 is – as we saw in the last blog post – a major headache. It was estimated that either a combined propellant-effluent coolant supply (~181,440 kg) or a separate supply for the coolant system (~136,000 kg) could be used (numbers based on a maximum of 2 hours burn time per test). To get a sense of what this amount of LH2 would require, two ~1400 kl dewars of LH2 would be needed for the combined system, about ¾ of the LH2 supply available at Kennedy Space Center (~3200 kl).
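As a sanity check on those tankage figures, we can convert the quoted masses to storage volume. The only assumption here is the density of liquid hydrogen at its normal boiling point, roughly 70.8 kg/m³:

```python
# Converting the quoted LH2 masses to storage volume.
# LH2 density at the normal boiling point (~20.3 K) is about 70.8 kg/m^3;
# 1 m^3 = 1 kiloliter.

LH2_DENSITY_KG_M3 = 70.8

def mass_to_kl(mass_kg):
    return mass_kg / LH2_DENSITY_KG_M3

combined = mass_to_kl(181_440)  # combined propellant + effluent-coolant supply
separate = mass_to_kl(136_000)  # separate coolant-only supply

print(f"Combined supply: {combined:.0f} kl")  # ~2563 kl
print(f"Separate supply: {separate:.0f} kl")  # ~1921 kl
# ~2563 kl is consistent with the two ~1400 kl dewars quoted for the
# combined system, with some ullage margin.
```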
Once the exhaust is sufficiently cooled, it is a fairly routine matter to filter out the fission products (a combination of physical filters and chemical reactions can ensure that no radionuclides are released, and radiation monitoring can verify that the H2 has been cleaned of all radioactive effluent). In the NF-1 test, water was used to capture the particulate matter, and the H2O was passed through a silica gel bed to remove the fission products. An activated carbon filter was used to remove the noble gasses and other gaseous and aerosol fission products. After this, depending on the facility setup, it is possible to recycle a good portion of the H2 from the test; however, this imposes massive power requirements for the cryocoolers and hydrogen densification equipment needed to handle this much H2.
Due to both the irradiation of the facilities and the very different requirements for this type of test facility, it was determined that the facilities built for the NRDS during Rover would be insufficient for this sort of testing, and so new facilities would need to be constructed, with much larger LH2 storage capabilities. One more recent update to the concept is brought up in the SAFE proposal (next section), using already existing facilities at the Nevada Test Site (now the Nevada National Security Site), in the U-1a or P-tunnel complexes. These underground facilities were horizontal, interconnected tunnel complexes used for sub-critical nuclear testing. There are a number of benefits to using these (now-unused) facilities for this type of testing: first, the rhyolite that the P-tunnel facility is cut into is far less permeable to fission products, but remains an excellent heat sink for the thermal effects of the exhaust plume. Second, it’s unlikely to fracture due to overpressure, although back-pressure into the engine itself will constrain the minimum size of the tunnel. Third, a hot cell can be cut into the mountain adjacent to the test location, making a very well-shielded facility for cool-down and disassembly beside the test location, eliminating the need to transport the now-hot engine to another facility for disassembly.
After the gas has passed through a length of tunnel, and cooled sufficiently, a heat exchanger is used to further cool the gas, and then it’s passed through an activated charcoal filter similar to the one used in the NF-1 test. This filtered H2 will then be flared off after going through a number of fission product detectors to ensure the filter maintained its integrity. The U-1a tunnels are dug into alluvium, so we’ll look at those in the next section.
One concern with using charcoal filters is that their effectiveness varies greatly depending on the temperature of the effluent, and the pressure at which it’s fed into the filter. Indeed, the H2 can push fission products through the filter, so there’s a definite limit to how small the filter can be. The longer the test, the larger the filter will need to be. Activated charcoal is relatively cheap, but by the end of the test it will be irradiated, meaning that it has to be disposed of in a nuclear waste repository.
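That “longer test, bigger filter” relationship can be sketched as a toy linear-scaling model: if the trap must retain all the krypton produced over the whole burn, the charcoal mass grows roughly in proportion to test duration. Every number below is an invented placeholder for illustration, not a design or measured value:

```python
# Toy linear-scaling model: charcoal mass needed grows with test duration.
# All rates, capacities, and margins below are invented placeholders.

KR_PRODUCTION_RATE = 1.0e-4   # kg of radiokrypton per second of burn (assumed)
CHARCOAL_CAPACITY = 1.0e-3    # kg of Kr retained per kg of charcoal (assumed)
SAFETY_FACTOR = 3.0           # margin against breakthrough (assumed)

def charcoal_mass_kg(burn_seconds):
    """Charcoal needed to hold the Kr produced over the burn, with margin."""
    kr_produced = KR_PRODUCTION_RATE * burn_seconds
    return SAFETY_FACTOR * kr_produced / CHARCOAL_CAPACITY

for minutes in (10, 30, 60):
    print(f"{minutes:>3} min burn -> {charcoal_mass_kg(minutes * 60):,.0f} kg charcoal")
# Whatever the real rates are, doubling the burn time doubles the mass of
# (irradiated, disposal-bound) charcoal - the cost driver noted above.
```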
Detailed cost estimates were avoided in the DOD assessment due to a number of factors, including the uncertain site location and the possibility of using this facility for multiple programs (allowing for cost sharing), but the overall cost for the test systems and facilities was roughly estimated at $500M in 1993 dollars. Most papers seem to consider this the most expensive, and least practical, option for above-ground NTR testing.
The Borehole Option: Subsurface Active Filtration of Exhaust
Many different testing options have been suggested over the years. The simplest is to fire the rocket with its nozzle pointed into a deep borehole at the Nevada Test Site, which has had extensive geological work done to determine soil porosity and other characteristics important to the concept. Known as Subsurface Active Filtration of Exhaust, or SAFE, this concept was proposed in 1999 by the Center for Space Studies, and continued to be refined for a number of years.
In this concept, the engine is placed over an already existing (from below-ground nuclear weapons testing) 8 foot wide, 1200 foot deep borehole, with a water spray system mounted adjacent to the nozzle of the NTR. The first section of the hole will be clad in steel, and the rest will simply be lined by the native rock. The main limiting consideration will be the migration of radionuclides into the surrounding rock, something that has been modeled computationally using Frenchman’s Flat geologic data, but has not been verified.
The primary challenges associated with this type of testing will be twofold: first, it needs to be ensured that the fission products will not migrate into groundwater or the atmosphere; and second, in order to ensure that the surrounding bedrock isn’t fractured – which would allow greater-than-anticipated migration of fission products from the borehole – it is necessary to keep the pressure in the borehole below a certain level. A sub-scale test with an RL-10 chemical rocket engine and radioisotope tracers was proposed (this test would have a much smaller borehole, and use known radioisotope tracers – either Xe or Kr isotopes – in the fuel to test dispersion of fission products through the bedrock). This test would provide the necessary data on migration, permeability, and (given appropriate borehole scaling to ensure prototypic temperature and pressure regimes) soil fracture pressure to ensure the full filtration of the exhaust of an NTR.
The advantage to doing this test at Frenchman’s Flat is that the ground has already been extensively characterized for the porosity (35%), permeability (8 darcys), water content (initial pore saturation 30%), and homogeneity (alluvium, so pretty much 100%) that the concept needs. In fact, a model already exists to calculate the response of the soil to these effects, known as WAFE, and the model was applied to the test parameters in 1999. Both full thrust (73.4 kg/s of H2O from both exhaust and cooling spray, and 0.64 kg/s of H2) and 30% thrust (20.5 kg/s H2O and 0.33 kg/s of H2) were modeled, both assuming 600 C exhaust injection after the steel liner. They found that the maximum equilibrium pressure in the borehole would reach 36 psia for the full thrust test, and 21 psia in the 30% thrust case, after about 2 hours – well within the acceptable pressure range for the borehole, assuming the exhaust gases were kept below Mach 1 to prevent excess back-pressure buildup.
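As a rough cross-check on the quoted numbers (not a reproduction of the WAFE model, which handles two-phase flow through the soil), a simple ideal-gas estimate shows that the full-thrust flow through an 8-foot borehole sits far below the Mach 1 back-pressure limit. Treating the effluent as pure steam at the 600 C injection temperature is my own simplifying assumption:

```python
import math

# Back-of-the-envelope Mach check for the SAFE borehole, using the flow
# rates, pressure, and temperature quoted above. The effluent is treated
# as pure steam (the small H2 fraction is neglected) -- a rough ideal-gas
# sketch, not the WAFE two-phase soil model.

R = 8.314           # J/(mol K), universal gas constant
M_H2O = 0.018       # kg/mol, molar mass of steam
gamma = 1.3         # approximate ratio of specific heats for hot steam

p = 36.0 * 6894.8   # 36 psia equilibrium pressure -> Pa
T = 873.0           # 600 C injection temperature -> K
mdot = 73.4 + 0.64  # kg/s, full-thrust H2O plus H2

d = 8 * 0.3048                     # 8 ft borehole diameter -> m
area = math.pi * (d / 2) ** 2      # flow area, m^2

rho = p * M_H2O / (R * T)          # ideal-gas density of the steam
v = mdot / (rho * area)            # bulk flow velocity down the hole
a = math.sqrt(gamma * R * T / M_H2O)  # speed of sound in the steam
print(f"flow ~{v:.0f} m/s, sound ~{a:.0f} m/s, Mach ~{v / a:.2f}")
```

The bulk velocity comes out in the tens of m/s against a sound speed of several hundred m/s, so the subsonic-flow assumption in the modeled cases is comfortably satisfied.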
Other options were explored as well, including the use of the U1a facility at the NNSS for horizontal testing. This is an underground set of tunnels in Nevada which would provide safety for the testing team and the availability of a hot cell for reactor disassembly beside the test point (unlike the rhyolite of the P-tunnel complex, U1a is cut into alluvial deposits, so primary filtration will come from the soil itself, and water cooling will still be necessary).
Further options were explored in the “Final Report – Assessment of Testing Options for the NTR at the INL.” INL is a more geologically complex region, including pahoehoe and rubble basalt and various types of sediment. Another complication is that INL is on the Snake River Plain, above an aquifer, so testing will be limited to those places where the aquifer is more than 450 feet below the surface. However, pahoehoe basalt is gas-impermeable, so if a site can be found with a layer of this basalt between the bottom of the borehole and the aquifer, it can provide a gas-impermeable barrier.
A 1998 cost estimate by Bechtel Nevada on the test concept estimated a cost of $5M for the non-nuclear validation test, and $16M for the full-scale NTR test, but it’s unclear if this included cost for the hot cell and associated equipment that would need to be built to support the test campaign, and I haven’t been able to find the specific report.
However, this testing option does not seem to feature heavily in NASA’s internal discussions for NTR testing at this point. One of the disadvantages is that the rocket testing equipment and support facilities would need to be built from scratch, and the testing would occur on DOE property. NASA has an extensive rocket testing facility at the John C. Stennis Space Center in Hancock County, MS, which has geology that isn’t conducive to subterranean testing of any sort, much less testing that requires significant isolation from the water table, and most NASA presentations seem to focus on using this facility.
The main reasons given in a late 2017 presentation for not pursuing this option are unresolved issues with water saturation effects on soil permeability, hole pressure during engine operation, and soil effectiveness in exhaust filtering. I have been unable to find the Bechtel Nevada and Desert Research Institute studies on this subject, but these questions have been studied, and I would be curious to know why those studies are considered incomplete.
One advantage to these options, though, which cannot be overstated, is that these facilities would be located on DOE land. As was seen in the recent KRUSTY fission-powered test, nuclear reactors in DOE facilities use an internal certification and licensing program independent of the NRC. This means that the 9-10 year (or longer), incredibly expensive certification process, which has never been approved for a First of a Kind reactor, would be bypassed. This alone is a potentially huge cost savings for the project, and may offset the additional study required to verify the suitability of these sites for NTR testing compared to certifying a new location – no matter how well established it is for rocket testing already.
Above Ground Test Option #2: Complete Capture
In this NTR test setup, the exhaust is slowed from supersonic to subsonic speeds, allowing O2 to be injected and mixed well past the molar equilibrium point for H2O. The resultant mixture is then combusted, resulting in hot steam and free O2. A water sprayer is used to cool the steam, which then passes through a debris trap filled with water at the bottom. The condensate is captured in a storage pool, and the remaining gaseous O2 is run through a desiccant filter, which is exhausted into the same storage pool. The water is filtered of all fission products and any unburned fuel, and then released. The gaseous O2 is recaptured and cooled using liquid nitrogen, and whatever is unable to be efficiently recaptured is vented into the atmosphere. The primary advantage of this system is that the resulting H2O can be filtered at leisure, allowing for more efficient and thorough filtration without the worry of over-pressurizing the system if there’s a blockage in the filters.
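The oxygen feed for the afterburner stage follows directly from 2 H2 + O2 → 2 H2O stoichiometry: every kilogram of hydrogen needs roughly eight kilograms of oxygen just to reach stoichiometric, and more to run oxygen-rich. The hydrogen flow rate and excess factor below are illustrative assumptions of mine, not program figures:

```python
# Stoichiometry sketch for the complete-capture afterburner: 2 H2 + O2 -> 2 H2O.
# The example flow rate and excess factor are illustrative, not design values.

M_H2, M_O2, M_H2O = 2.016, 32.0, 18.016  # molar masses, g/mol

def o2_required_kg_s(h2_kg_s, excess=1.5):
    """O2 feed for a given H2 flow. excess > 1 gives the oxygen-rich
    mixture ("well past stoichiometric") needed for complete combustion."""
    stoich = h2_kg_s * (M_O2 / 2) / M_H2   # 1 mol O2 per 2 mol H2
    return excess * stoich

def steam_produced_kg_s(h2_kg_s):
    """Steam generated when all the H2 burns (2 mol H2O per 2 mol H2)."""
    return h2_kg_s * M_H2O / M_H2

h2 = 1.0  # kg/s of hydrogen exhaust, example only
print("O2 feed:", o2_required_kg_s(h2), "kg/s")
print("steam out:", steam_produced_kg_s(h2), "kg/s")
```

This is why the storage pool dominates the facility design: each kilogram of hydrogen exhaust becomes nearly nine kilograms of water to capture and filter, before counting the cooling spray.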
There are many questions that need to be answered to ensure that this system works properly, as there are with all of the systems that have yet to be tested. In order to verify that the system will work as advertised, a sub-scale demonstrator will need to be built. This facility will use a hydrogen wave heater in place of the nuclear reactor, and test the rest of the components at a smaller scale wherever possible. Due to the specific needs of the exhaust capture system, especially the need to test complete combustion at different heat loads, the height of the facility may not be able to be scaled down (in order to ensure complete combustion, the gas flow will need to be subsonic before mixing and combustion). Thermal loading on structures is another major concern for the sub-scale test, since many components must be tested at the appropriate temperature, and the smaller structures won’t be able to passively reject heat as well. Finally, some things won’t be able to be tested in a sub-scale system, so an assessment is needed of what data will have to be collected in the full-scale system instead.
One last thing to note is that this system will also be used to verify that high-velocity impacts of hot debris will not be a concern. This was seen in many of the early Rover tests, as fuel elements would break and be ejected from the nozzle at velocities similar to that of the exhaust. While CERMET fuels are (likely) more durable, this is an accident condition that has to be prepared for. In addition, smaller pieces of debris (such as flakes of clad, or non-nuclear components) need to be fully captured as well. These tests will need to be carried out on the sub-scale test bed to demonstrate to the regulators that any accident can be addressed. This adds to the complexity of the test setup, and encourages making the test stand as quick and efficient to reconfigure as possible – in other words, as modular as possible. This also increases the flexibility of the facility for any other uses that it may be put to.
NTP Testing at Stennis Space Center
This last testing concept seems to be the front-runner for current NASA designs, to be integrated into the A3 test stand at NASA’s Stennis Space Center (SSC). SSC is the premier rocket test facility for NASA, testing both solid and liquid rocket engines. The test facilities are located in the “fee area,” a 20 square mile area (avg. radius 2.5 miles) surrounded by an acoustic “buffer zone” that averages 7.9 miles in radius (195 sq mi). With available office space, manufacturing spaces, indoor and outdoor warehouse space, and a number of rocket engine test stands, the facility has much going for it. Most of the rocket engines used by American launch companies have been tested here, going all the way back to the Moon program. This is a VERY advanced, well-developed facility for any type of chemical engine… but unfortunately, nuclear is different. Because SSC has never supported nuclear operations, a number of facilities will need to be constructed for both the E3 and A3 test stands to support NTR testing. This raises the overall cost of the program considerably, to just under $850M (in 2017 dollars).
The A3 test stand is one of the newer facilities at SSC: groundbreaking was held in August of 2007, and construction was completed in 2014. It is the only facility able to handle the thrust level (300+ Klbf at altitude, 1,000 Klbf nominal design) and simulated altitude (100 Kft) that testing a powerful upper stage requires. There are two additional facilities at SSC designed to operate at lower-than-ambient atmospheric pressures, the A2 test stand (650 Klbf at 60 Kft) and the E3 test facility (60 Klbf at 100 Kft). The E3 facility will be used for sub-scale testing, turbopump validation, and other tests for the NTP program, but the A2 test stand seems to not be under consideration at this time. The rest of the test stands at SSC are designed to operate at ambient pressure (i.e. sea level), and so are not suitable for NTP testing.
The E3 facility would be used for sub-scale testing, first of the turbopumps (similar to the tests done there for the SSME), and then sub-scale reactor tests. These would likely be the first improvements made at SSC to support the NTP testing, within the next couple years, and would cost $35-38M ($15-16M for sub-scale turbopump tests, $20-22M for the sub-scale reactor test, according to preliminary BWXT cost estimates). Another thing that would be tested at E3 is a sub-scale engine exhaust capture system, which has been approved for both Phases 1 and 2; work to support this should be starting at any time ($8.74M was allocated to this goal in the FY ’14 budget). From what I can see, work had already started (to an unknown extent) at E3 on this sub-scale system, but I have been unable to find information regarding the extent of the work or the scale of the test stand compared to the full system.
The A3 facility has the most that needs to be added, including facilities for power pack testing ($21M); a full-flow, electrically heated depleted uranium test (cost TBD); a facility for zero power testing and reactor verification before testing ($15M); an adjacent hot cell for reactor cool-down and disassembly (the new version of the E-MAD facility, $220M); and facilities for both sub-scale and full scale fission powered NTP testing (cost to be determined; it’s likely to be heavily influenced by regulatory burden). This does not include radiation shielding, or an alternate ducting system to ensure that the HVAC system doesn’t become irradiated (a major headache in the decommissioning of the original E-MAD facility). It is unlikely that design work for this facility will start in earnest until FY21, and construction is not likely to start until FY24. Assuming a 10 year site licensing lead time (which is typical), it is unlikely that any nuclear testing will be able to be done until FY29, with full power nuclear testing not likely until FY30.
Documents relating to the test stands at SSC show that there has been some funding for this project since FY ‘16, but it’s difficult to tell how much of that has gone to analysis, environmental impact studies, and other bureaucratic and regulatory necessities, and how much has gone to actual construction. I HAVE had one person who works at SSC mention that physical work has started, but they were unwilling to provide any more information than that due to their not being authorized to speak to the public about the work, and their unfamiliarity with what is and isn’t public knowledge (most of it simply isn’t public). According to a presentation at SSC in July of 2017, the sub-scale turbopump testing may start in the next year or two, but initial design work for the A3 test stand is unlikely to start before FY’21.
According to the presentation (linked below), there are two major hurdles the program needs to overcome on the policy and regulatory side. First, a national/agency level decision needs to be made between NASA, the DOE, and the NRC as to the specific responsibilities and roles for NTP development, especially in regards to: 1. reactor production and the engine and launch vehicle integration strategy, and 2. ground, launch, and in-space operations of the NTR. Second, NTP testing at SSC requires a nuclear site license, a 9-10 year process even for a traditional light water power reactor, much less for as unusual a reactor architecture as an NTR. This is another area where BWXT’s experience is being leaned on heavily, with two (not publicly available) studies having been carried out by them in FY16: one on a site licensing strategy and implementation roadmap, and one on initial identification of policy issues related to licensing an NTP ground test at SSC.
Regulatory Burdens, Bureaucratic Concerns, and Other Matters
Originally, this post was going to delve into the regulatory and environmental challenges of doing NTR testing. An NTR is very different from any other sort of nuclear reactor, not only because it’s a once-through gas cooled reactor operating at a very high temperature, but also due to the performance characteristics that the reactor is expected to be able to provide.
Additionally, these are short-lived reactors – 100 hours of operation is more than enough to complete an entire crewed mission to Mars, and is a long lifetime for a rocket engine. However, as we saw during the Rover hot-fire testing, there are always issues that come up that aren’t able to be adequately tested beforehand (even with our far more advanced computational and modeling capabilities), so iteration is key. This means that the site has to be licensed for multiple different reactors.
Unfortunately, these subjects are VERY complex, and are very difficult to learn. Communicating with the NRC in and of itself is a subspecialty of both the nuclear and legal industries for reactor designers. The fact that the DOE, NASA, and the NRC are having to interact on this project just adds to the complexity.
So, I’m going to put that part off for now, and it will become its own separate blog post. I have contacted NASA, the DOE, and the NRC looking for additional information and clarification in their various areas, and hopefully will hear back in the coming weeks or months. I am also reading the appropriate regulations and internal rules for these organizations, and there’s more than enough there for a quite lengthy blog post on its own. If you work with any of these organizations, and are either able to help me gather this information or can get me in touch with someone who can, I would greatly appreciate it if you contact me.
For now, we’re going to leave testing behind as the main focus of the blog, but we will still look at the subject as it becomes relevant in other posts. Next, we’re going to do one final post on solid core pure NTRs, looking at carbide fueled NTRs, both the RD-0410 in Russia and some legacy and new designs from the US. After that, we’ll move on to bimodal NTR/chemical and bimodal NTR/thermal electric designs.
After that, with one small exception, we’ll leave NTRs behind for a while, and look at nuclear electric propulsion. I plan on doing pages for individual reactor designs during this time, both NTR and NEP, and adding them as their own pages on the website. As I write posts, I’ll link to the new (or updated) pages as they’re completed.
Be sure to check out the rest of the website, and join us on Facebook! This blog is far from the only thing going on!
Hello, and welcome back to Beyond NERVA, where today we are looking at ground testing of nuclear rockets. This is the first of two posts on ground testing NTRs, focusing on the testing methods used during Project ROVER, including a look at the zero power testing and assembly tests carried out at Los Alamos Scientific Laboratory, and the hot-fire testing done at the Nuclear Rocket Development Station at Jackass Flats, Nevada. The next post will focus on the options that have been and are being considered for hot fire testing the next generation of LEU NTP, as well as a brief look at cost estimates for the different options, and the plans that NASA has proposed for the facilities that are needed to support this program (what little of that information is available).
We have examined before how to test NTR fuel elements in non-nuclear situations, and looked at two of the test stands that were developed for testing thermal, chemical, and erosion effects on them as individual components: the Compact Fuel Element Environment Simulator (CFEET) and the Nuclear Thermal Rocket Environment Effects Simulator (NTREES). These test stands provide economical means of testing fuel elements before loading them into a nuclear reactor for neutronic and reactor physics behavioral testing, and can catch many chemical and structural problems without the headaches of testing a nuclear reactor.
However, as any engineer can tell you, computer modeling is far from enough to test a full system. Without extensive real-life testing, no system can be used in real-life situations. This is especially true of something as complex as a nuclear reactor – much less a rocket engine. NTRs have the challenge of being both.
Back in the days of Project Rover, there were many nuclear propulsion tests performed. The most famous of these were carried out in open-air testing on specialized rail cars at Jackass Flats, NV, within the Nevada Test Site (now the Nevada National Security Site). This was far from the vast majority of human habitation (there was one small – less than 100 people – ranch upwind of the facility, but downwind was the test site for nuclear weapons tests, so any fallout from a reactor meltdown was not considered a major concern).
The test program at the Nevada site started with the fully constructed and preliminarily tested rocket engines arriving by rail from Los Alamos, NM, along with a contingent of scientists, engineers, and additional technicians. After another check-out, each reactor (still attached to the custom rail car it was shipped on) was hooked up to instrumentation and hydrogen propellant, and run through a series of tests, ramping up to either full power or engine failure. Rocket engine development in those days (and even today, sometimes) could be an explosive business, and hydrogen was a new propellant to use, so accidents were unfortunately common in the early days of Rover.
After the test, the rockets were wheeled off onto a remote stretch of track to cool down (from a radiation point of view) for a period of time, before being disassembled in a hot cell (a heavily shielded facility using remote manipulators to protect the engineers) and closely examined. This examination verified how much power was produced based on the fission product ratios of the fuel, examined and detailed all of the material and mechanical failures that had occurred, and started the reactor decommissioning and disposal procedures.
As time went on, great strides were made not only in NTR design, but in metallurgy, reactor dynamics, fluid dynamics, materials engineering, manufacturing techniques, cryogenics, and a host of other areas. These rocket engines were well beyond the bleeding edge of technology, even for NASA and the AEC – two of the most scientifically advanced organizations in the world at that point. This, unfortunately, also meant that early on there were many failures, for reasons that either weren’t immediately apparent or that didn’t have a solution based on the design capabilities of the day. However, they persisted, and by the end of the Rover program in 1972, a nuclear thermal rocket was tested successfully in flight configuration repeatedly, the fuel elements for the rocket were advancing by leaps and bounds past the needed specifications, and with the ability to cheaply iterate and test new versions of these elements in new, versatile, and reusable test reactors, the improvements were far from stalling out – they were accelerating.
However, as we know, the Rover program was canceled after NASA was no longer going to Mars, and the development program was largely scrapped. Scientists and engineers at Westinghouse Astronuclear Laboratory (the commercial contractor for the NERVA flight engine), Oak Ridge National Laboratory (where much of the fuel element fabrication was carried out) and Los Alamos Scientific Laboratory (the AEC facility primarily responsible for reactor design and initial testing) spent about another year finishing paperwork and final reports, and the program was largely shut down. The final report on the hot-fire test programs for NASA, though, wouldn’t be released until 1991.
Behind the Scenes: Pre-Hot Fire Testing of ROVER reactors
These hot fire tests were actually the end result of many more tests carried out in New Mexico, at Los Alamos Scientific Laboratory – specifically the Pajarito Test Area. Here, there were many test stands and experimental reactors used to measure such things as neutronics, reactor behavior, material behavior, critical assembly limitations and more.
The first of these was known as Honeycomb, due to its use of square grids made out of aluminum (which is mostly transparent to neutrons), held in large aluminum frames. Prisms of nuclear fuel, reflectors, neutron absorbers, moderator, and other materials were assembled carefully (to prevent accidental criticality, something that the Pajarito Test Site had seen early in its existence in the Demon Core experiments and subsequent accident) to verify that the behavior of possible core configurations matched predictions closely enough to justify going through the effort and expense of refining and testing fuel elements in an operating reactor core. Especially for cold and warm criticality tests, this test stand was invaluable, but with the cancellation of Project Rover there was no need to continue using it, and so it was largely mothballed.
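The careful, incremental assembly described above is classically done with the inverse-multiplication (1/M) method: as fuel is added, the detector count rate is tracked and 1/M is extrapolated to zero to predict the critical loading before it is ever reached. A minimal sketch of the bookkeeping, with invented loadings and count rates (the source doesn't state which procedure Honeycomb used, so take this as an illustration of the general technique):

```python
# Sketch of the inverse-multiplication (1/M) method for a safe approach
# to criticality during assembly tests. All numbers are invented
# illustrative values, not Honeycomb data.

def extrapolate_critical_loading(loadings, count_rates):
    """Fit 1/M = C0/C against fuel loading with least squares, and
    extrapolate to 1/M = 0, which estimates the critical loading."""
    c0 = count_rates[0]
    inv_m = [c0 / c for c in count_rates]
    n = len(loadings)
    mx = sum(loadings) / n
    my = sum(inv_m) / n
    slope = (sum(x * y for x, y in zip(loadings, inv_m)) - n * mx * my) / \
            (sum(x * x for x in loadings) - n * mx * mx)
    intercept = my - slope * mx
    return -intercept / slope   # loading where 1/M reaches zero

# Example: the count rate climbs steeply as the loading nears critical
loadings = [0.0, 10.0, 20.0, 30.0]       # kg of fuel added (illustrative)
counts = [100.0, 133.0, 200.0, 400.0]    # detector counts/s (illustrative)
print("estimated critical loading:", extrapolate_critical_loading(loadings, counts))
```

Because 1/M heads toward zero as the assembly approaches critical, each measurement updates the prediction, and the crew can stop well short of it – which is exactly the discipline the Demon Core accidents made mandatory.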
The second was a modified KIWI-A reactor, which used a low-pressure, heavy water moderated island in the center of the reactor to reduce the amount of fissile fuel necessary for the reactor to achieve criticality. This reactor, known as Zepo-A (for zero-power, or cold criticality), was the first of a series of experiments carried out with each successive design in the Rover program, supporting Westinghouse Astronuclear Laboratory and the NNTS design and testing operations. As each reactor went through its zero-power neutronic testing, the design was refined, and problems corrected. This sort of testing was conducted late in 2017 and early in 2018 at the NCERC in support of the KRUSTY series of tests, which culminated in March with the first full-power test of a new nuclear reactor in the US in more than 40 years, and it remains a crucial testing phase for all nuclear reactor and fuel element development. An early, KIWI-type critical assembly test ended up being re-purposed into a test stand called PARKA, which was used to test liquid metal fast breeder reactor (LMFBR, now known as the Integral Fast Reactor or IFR, under development at Idaho National Labs) fuel pins in a low-power, epithermal neutron environment for startup and shutdown transient behavior testing, as well as serving as a well-understood general radiation source.
Finally, there was a pair of hot gas furnaces (one at LASL, one at WANL) for electrical heating of fuel elements in an H2 environment, using resistive heating to bring the fuel element up to temperature. This became more and more important as the project continued, since development of the clad on the fuel element was a major undertaking. As the fuel elements became more complex, or as the materials used in them changed, the thermal properties (and chemical properties at temperature) of these new designs needed to be tested before irradiation testing to ensure the changes didn’t have unintended consequences. This was not just for the clad; the graphite matrix composition changed over time as well, transitioning from graphite flour with thermoset resin to a mix of flour and flakes, and the fuel particles themselves changed from uranium oxide to uranium carbide, with the particles individually coated by the end of the program. The gas furnace was invaluable in these tests, and can be considered the grandfather of today’s NTREES and CFEET test stands.
An excellent example of the importance of these tests, and the careful checkout that each of the Rover reactors received, can be seen with the KIWI-B4 reactor. Initial mockups, both on Honeycomb and in more rigorous Zepo mockups of the reactor, showed that the design had good reactivity and control capability, but while the team at Los Alamos was assembling the actual test reactor, it was discovered that there was so much reactivity the core couldn’t be assembled! Inert material was used in place of some of the fuel elements, and neutron poisons were added to the core, to counteract this excess reactivity. Careful testing showed that the uranium carbide fuel particles suspended in the graphite matrix had absorbed hydrogen, which moderated the neutrons and therefore increased the reactivity of the core. Later versions of the fuel used larger particles of UC2, which were then individually coated before being distributed through the graphite matrix, to prevent this absorption of hydrogen. Careful testing and assembly of these experimental reactors by the team at Los Alamos ensured the safe testing and operation of these reactors once they reached the Nevada test site, and supported Westinghouse’s design work, Oak Ridge National Lab’s manufacturing efforts, and the ultimate full-power testing carried out at Jackass Flats.
Once this series of crude mockup criticality testing, zero-power testing, assembly, and checkout was completed, the reactors were loaded, nozzle up, onto a special rail car that would also act as a test stand, and – accompanied by a team of scientists and engineers from both New Mexico and Nevada – transported by train to the test site at Jackass Flats, adjacent to Nellis Air Force Base and the Nevada Test Site, where nuclear weapons testing was done. Once there, a final series of checks was done on the reactors to ensure that nothing untoward had happened during transport, and the reactors were hooked up to test instrumentation and the coolant supply of hydrogen for testing.
Problems at Jackass Flats: Fission is the Easy Part!
The testing challenges that the Nevada team faced extended far beyond the nuclear testing that was the primary goal of this test series. Hydrogen is a notoriously difficult material to handle due to its incredibly small molecular size and mass. It seeps through solid metal, valves have to be made with incredibly tight clearances, and when it’s exposed to the atmosphere it is a major explosion hazard. To add to the problems, these were the first days of cryogenic H2 experimentation. Even today, handling cryogenic H2 is far from routine, and the often unavoidable problems with using hydrogen as a propellant can be seen in many areas – perhaps the most spectacular example is the launch of a Delta-IV Heavy rocket, which is a hydrolox (H2/O2) rocket. Upon ignition of the rocket engines, it appears that the rocket isn’t launching from the pad but exploding on it, due to the outgassing of H2 not only from the pressure relief valves in the tanks, but also from seepage through valves, welds, and the body of the tanks themselves – the rocket catching itself on fire is actually standard operating procedure!
In the late 1950s, these problems were just being discovered – the hard way. NASA’s Plum Brook Research Station in Ohio was a key facility for exploring techniques for handling gaseous and liquid hydrogen safely. Not only did they experiment with cryogenic equipment, hydrogen densification methods, and liquid H2 transport and handling, they did materials and mechanical testing on valves, sensors, tanks, and other components, and developed welding techniques and testing and verification capabilities to improve the ability to handle this extremely difficult, potentially explosive, but also incredibly valuable (due to its low atomic mass – the exact same property that caused the problems in the first place!) propellant, coolant, and nuclear moderator. The other options available for NTR propellant (basically anything that’s a gas at reactor operating temperatures and won’t leave excessive residue) weren’t nearly as good an option due to their lower exhaust velocity – and therefore lower specific impulse.
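The "low atomic mass means high specific impulse" argument can be made concrete with the ideal nozzle equation, where exhaust velocity scales as 1/sqrt(molar mass) for a fixed chamber temperature. The sketch below is a frozen-flow idealization (constant gamma, no dissociation, an assumed expansion ratio), so the absolute numbers are rough; the chamber temperature and comparison propellants are my own illustrative choices:

```python
import math

# Why hydrogen: ideal exhaust velocity scales as 1/sqrt(molar mass) at a
# fixed chamber temperature. Rough frozen-flow sketch (constant gamma, no
# dissociation, assumed expansion ratio) -- illustrative, not a design code.

R = 8.314   # J/(mol K), universal gas constant
g0 = 9.81   # m/s^2, for converting exhaust velocity to Isp

def exhaust_velocity(T_chamber, molar_mass, gamma=1.4, p_ratio=0.01):
    """Ideal nozzle exhaust velocity, expanding from the chamber down to
    an exit pressure of p_ratio * chamber pressure."""
    term = 1.0 - p_ratio ** ((gamma - 1.0) / gamma)
    return math.sqrt(2.0 * gamma / (gamma - 1.0) * R / molar_mass
                     * T_chamber * term)

T = 2700.0  # K, roughly a NERVA-class core exit temperature
for name, M in [("H2", 2.016e-3), ("CH4", 16.04e-3), ("NH3", 17.03e-3)]:
    v = exhaust_velocity(T, M)
    print(f"{name}: ideal Isp ~ {v / g0:.0f} s")
```

Even in this crude form, hydrogen comes out far ahead of heavier candidate propellants at the same core temperature, which is exactly why all the hydrogen headaches at Jackass Flats were worth enduring.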
Plum Brook is another often-overlooked facility that was critical to the success of not just NERVA, but all current liquid hydrogen fueled systems. I plan on doing another post (this one’s already VERY long) looking into the history of the various facilities involved with the Rover and NERVA program.
Indeed, all the KIWI-A tests and the KIWI-B1A used gaseous hydrogen instead of liquid hydrogen, because the equipment that was planned to be used (and would be used in subsequent tests) was delayed due to construction problems, welding issues, valve failures, and fires during checkout of the new systems. These teething troubles with the propellant caused major problems at Jackass Flats, and caused many of the flashiest accidents that occurred during the testing program. Hydrogen fires were commonplace, and an accident during the installation of propellant lines in one reactor ended up causing major damage to the test car, the shed it was contained in, and exposed instrumentation, but only minor apparent damage to the reactor itself, delaying the test of the reactor for a full month while repairs were made (this test also saw two hydrogen fires during testing, a common problem that improved as the program continued and the methods for handling the H2 were improved).
While the H2 coolant was the source of many problems at Jackass Flats, other issues arose because these NTRs were using technology that was well beyond bleeding-edge at the time. “New construction methods” doesn’t begin to describe the level of technological innovation required in virtually every area of these engines. Materials that had been theoretical chemical engineering possibilities only a few years (sometimes even months!) before were being used to build innovative, very high temperature, chemically and neutronically complex reactors – that also functioned as rocket engines. New metal alloys were developed, new forms of graphite were employed, and experimental methods of coating the fuel elements to prevent hydrogen from attacking the carbon of the fuel element matrix (a major concern, as seen in the KIWI-A reactor, which used unclad graphite plates for fuel) were constantly being adjusted – indeed, clad material experimentation continues to this day, but with advanced micro-imaging capabilities and a half century of materials science and manufacturing experience since then, the results now are light-years ahead of what was available to the scientists and engineers of the 50s and 60s. Hydrodynamic principles that were only poorly understood, stress and vibrational patterns that couldn’t be predicted, and material interactions at temperatures higher than are experienced in the vast majority of situations all caused problems for the Rover reactors.
One common problem in many of these reactors was transverse fuel element cracking, where a fuel element would split across the narrow axis, disrupting coolant flow through the interior channels and exposing the graphite matrix to the hot H2 (which would then ferociously eat it away, exposing both fission products and unburned fuel to the H2 stream and carrying them elsewhere – mostly out of the nozzle, though it turned out the uranium would congregate at the hottest points in the reactor, even against the H2 stream, which could have terrifying implications for accidental fission power hot spots). Sometimes, large sections of the fuel elements would be ejected out of the nozzle, spraying partially burned nuclear fuel into the air – sometimes as large chunks, but almost always with some of the fuel aerosolized. Today, this would definitely be unacceptable, but at the time the US government was testing nuclear weapons literally next door to this facility, so it wasn’t considered a cause of major concern.
If this sounds like there were major challenges and significant accidents happening at Jackass Flats, well, in the beginning of the program that was certainly correct. These early problems were also cited in Congress’ decision to stop funding the program (although, without a manned Mars mission, there was really no reason to use the expensive and difficult-to-build systems anyway). The thing to remember, though, is that these were EARLY tests, with materials that had been a concept in a materials engineer’s imagination only a few years (or sometimes months) beforehand, mechanical and thermal stresses that no one had ever dealt with, and a technology that seemed the only way to send humans to another planet. The Moon was hard enough; Mars was millions of miles further away.
Hot Fire Testing: What Did a Test Look Like?
Nuclear testing is far more complex than just hooking up the test reactor to coolant and instrumentation lines, turning the control drums and hydrogen valves, and watching the dials. Not only were there many challenges associated with just deciding what instrumentation was possible, and where it would be placed, but installing these instruments and collecting data from them were often challenges as well, especially early in the program.
To get an idea of what a successful hot fire test looks like, let’s look at a single reactor’s test series from later in the program: the NRX A2 technology demonstration test. This was the first NERVA reactor design to be tested at full power by the Westinghouse Astronuclear Laboratory (WANL); the others, including KIWI and PHOEBUS, were not technology demonstration tests, but proof-of-concept and design development tests leading up to NERVA, and were tested by LASL. The core itself consisted of 1626 hexagonal prismatic fuel elements. This reactor was significantly different from the XE-PRIME reactor that would be tested five years later. One way it differed was the hydrogen flow path: after going through the nozzle, the propellant would enter a chamber beside the nozzle and above the axial reflector (the engine was tested nozzle-up; in flight configuration this would be below the reflector), then pass through the reflector to cool it, before being diverted again by the shield, through the support plate, and into the propellant channels in the core before exiting the nozzle.
Two power tests were conducted, on September 24 and October 15, 1964.
With two major goals and 22 lesser goals, the September 24 test packed a lot into the six minutes of half-to-full power operation (the reactor was only at full power for 40 seconds). The major goals were: 1. Provide significant information for verifying steady-state design analysis for powered operation, and 2. Provide significant information to aid in assessing the reactor’s suitability for operation at steady-state power and temperature levels that were required if it was to be a component in an experimental engine system. In addition to these major, but not very specific, test goals, a number of more specific goals were laid out, including top priority goals of evaluating environmental conditions on the structural integrity of the reactor and its components, core assembly performance evaluation, lateral support and seal performance analysis, core axial support system analysis, outer reflector assembly evaluation, control drum system evaluation, and overall reactivity assessment. The less urgent goals were also more extensive, and included nozzle assembly performance, pressure vessel performance, shield design assessment, instrumentation analysis, propellant feed and control system analysis, nucleonic and advanced power control system analysis, radiological environment and radiation hazard evaluation, thermal environment around the reactor, in-core and nozzle chamber temperature control system evaluation, reactivity and thermal transient analysis, and test car evaluation.
Several power holds were conducted during the test, at 51%, 84%, and 93-98%, all of which were slightly above the power that the holds were planned at. This was due to the compressibility of the hydrogen gas (leading to more moderation than planned) and issues with the venturi flowmeters used to measure H2 flow rates, as well as issues with the in-core thermocouples used for instrumentation (a common problem in the program), and provides a good example of the sorts of unanticipated challenges that these tests are meant to evaluate. The test length was limited by the availability of hydrogen to drive the turbopump, but despite this being a short test, it was a sweet one: all of the objectives of the test were met, and an ideal specific impulse in a vacuum equivalent of 811 s was determined (low for an NTR, but still over twice as good as any chemical engine at the time).
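As a quick sanity check on that number, specific impulse in seconds converts directly to effective exhaust velocity by multiplying by standard gravity. A minimal sketch (the 811 s figure is from the test; the conversion itself is standard):

```python
# Convert specific impulse (seconds) to effective exhaust velocity (m/s):
# v_e = Isp * g0
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_s: float) -> float:
    """Effective exhaust velocity in m/s for a given Isp in seconds."""
    return isp_s * G0

print(f"NRX A2 (811 s): {exhaust_velocity(811):.0f} m/s")  # ~7,953 m/s
```

That works out to roughly 7,950 m/s of exhaust velocity; for comparison, even modern hydrolox engines (around 450 s of Isp) only manage about 4,400 m/s.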
The October 15th test was a low-power, low-flow test meant to evaluate the reactor’s operation outside of high-power, steady-state conditions, focusing on reactor behavior at startup and cool-down. The relevant part of the test lasted for about 20 minutes, and operated at 21-53 MW of power and a flow rate of 2.27-5.9 kg/s of LH2. As with any system, the state that the reactor was designed to operate in was easier to evaluate and model than startup and shutdown, two conditions that every engine has to go through but which are far outside the “ideal” conditions for the system – and operating with liquid hydrogen only added to the unknowns. Only four specific objectives were set for this test: demonstration of stability at low LH2 flow (using dewar pressure as a gauge), demonstration of stability at constant power but with H2 flow variation, demonstration of stability with fixed control drums but variable H2 flow to effect a change in reactor power, and getting a reactivity feedback value associated with LH2 at the core entrance. Many of these tests hinge on the fact that the LH2 isn’t just a coolant, but a major source of neutron moderation, so the flow rate (and associated changes in temperature and pressure) of the propellant has impacts extending beyond just the temperature of the exhaust. This test showed that there were no power or flow instabilities in the low-power, low-flow conditions that would be seen during reactor startup (when the H2 entering the core was at its densest, and therefore most moderating). The predicted behavior and the test results showed good correlation, especially considering that the instrumentation used (like the reactor itself) really wasn’t designed for these conditions, and the majority of the transducers were operating at the extreme low end of their range.
After the October test, the reactor was wheeled down a shunt track to radiologically cool down (allowing the short-lived fission products to decay, reducing the gamma radiation flux coming off the reactor), and then was disassembled in the NRDS hot cell. These post-mortem examinations were an incredibly important tool for evaluating a number of variables, including how much power was generated during the test (based on the distribution of fission products, which changes depending on a number of factors, but mainly the power produced and the neutron spectrum the reactor was operating in when they were produced), chemical reactivity issues, mechanical problems in the reactor itself, and several other factors. Unfortunately, disassembling even a simple system without accidentally breaking something is difficult, and this was far from a simple system. A recurring question became “did the reactor break this itself, or did we?” This is especially true of fuel elements, which often broke due to inadequate lateral support along their length, but also would often break due to the way they were joined to the cold end of the core (which usually involved high-temperature, reasonably neutronically stable adhesives).
This issue was illustrated in the A2 test, when multiple broken fuel elements were found that did not have erosion at the break. This is a strong indicator that they broke during disassembly, not during the test itself: hot H2 heavily erodes the carbon in the graphite matrix – and the carbide fuel pellets – so erosion at a break is a very good indicator that a fuel rod broke during a power test. Broken fuel elements were a persistent problem throughout the Rover and NERVA programs (sometimes leading to ejection of the hot end portion of the fuel elements), and the fact that all of the fueled elements seem not to have broken was a major victory for the fuel fabricators.
This doesn’t mean the fuel elements were without their problems. Each generation of reactors used different fuel elements, sometimes multiple types in a single core; in this case the propellant channels, fuel element ends, and the tips of the exterior of the elements were clad in NbC, but the full length of the outside of the elements was not, in an attempt to save mass and avoid overly complicating the neutronic environment of the reactor. Unfortunately, this meant that the small amount of gas that slipped between the filler strips and pyro-tiles (placed to prevent this problem) could eat away at the middle of the outside of the fuel element (toward the hot end), something known as mid-band corrosion. This occurred mostly on the periphery of the core, and had a characteristic pattern of striations on the fuel elements. A change was made to ensure that all of the peripheral fuel elements were fully clad with NbC, since the areas that had this clad were unaffected. Once again, the core became more complex, and more difficult to model and build, but a particular problem was addressed thanks to empirical data gathered during the test. A number of unfueled, instrumented fuel elements in the core were found to have broken in such a way that it wasn’t possible to conclusively rule out handling during disassembly, however, so the integrity of the fuel elements was still in doubt.
The problems associated with these graphite composite fuel elements never really went away during Rover or NERVA: a number of broken fuel elements (known to have broken during the test) were found in the PEWEE reactor, the last test of this sort of fuel element matrix (NF-1 used composite and carbide fuel elements; no GC fuel elements were used). The follow-on A3 reactor exhibited a form of fuel erosion known as pin-hole erosion, which the NbC clad was unable to address, forcing the NERVA team to other alternatives. This was another area where the GC fuel elements were shown to be unsustainable for long-duration use past the specific mission parameters, and a large part of why the entire NERVA engine was discarded during staging, rather than just the propellant tanks as in modern designs. New clad materials and application techniques show a lot of promise, and GC can be used in a carefully designed LEU reactor, but this is something that isn’t really being explored in any depth in most cases (both the LANTR and NTER concepts still use GC fuel elements, with the NTER specifying them exclusively due to fuel swelling issues, but that seems to be the only time it’s actually required).
Worse Than Worst Case: KIWI-TNT
One question that is often asked by those unfamiliar with NTRs is “what happens if it blows up?” The short answer is that it can’t, for a number of reasons. There is only so much reactivity in a nuclear reactor, and it can only be inserted so quickly. The amount of reactivity is carefully managed through fuel loading in the fuel elements and strategically placed neutron poisons. Also, the control systems used for these nuclear reactors (in this case, control drums placed around the reactor in the radial reflector) can only be turned so fast. I recommend checking out the report on Safety Neutronics in Rover Reactors linked at the end of this post if this is something you’d like to look at more closely.
However, during the Rover testing at NRDS one WAS blown up, after significant modifications that would never be made to a flight reactor. This was the KIWI-TNT test (TNT is short for Transient Nuclear Test). The behavior of a nuclear reactor as it approaches a runaway reaction, or a failure of some sort, is studied for all types of reactors, usually in specially constructed test reactors. This is required because the production design of every reactor is highly optimized to prevent this sort of failure from occurring, and this was also true of the Rover reactors. However, knowing what a fast excursion would do to the reactor was an important question early in the program, so a test was designed to discover exactly how bad things could be, and to characterize what happened in a worse-than-worst-case scenario. It yielded valuable data on the possibility of a launch abort that resulted in the reactor falling into the ocean (water being an excellent moderator, making accidental criticality more likely), on the launch vehicle exploding on the pad, and on the option of destroying the reactor in space after it had been exhausted of its propellant (something that ended up not being planned for in the final mission profiles).
What was the KIWI-TNT reactor? The last of the KIWI series of reactors, its design was very similar to the KIWI-B4A reactor (the predecessor of the NERVA-1 series of reactors), which was originally designed as a 1000 MW reactor with an exhaust chamber temperature of 2000 C. However, a number of things prevented a fast excursion from happening in that reactor: first, the shims used for the fuel elements were made of tantalum, a neutron poison, to prevent excess reactivity; second, the control drums used stepping motors that were slow enough that a runaway reaction wasn’t possible; finally, this experiment would be done without coolant, which also acted as moderator, so much more reactivity was needed than the B4A design allowed. With the shims removed, excess reactivity added to the point that the reactor was less than $1 subcritical with the control drums fully inserted (with $6 of excess reactivity available relative to prompt critical), and the drum rotation rate increased by a factor of 89(!!), from 45 deg/s to 4000 deg/s, the stage was set for this rapid scheduled disassembly on January 12, 1965. This degree of modification shows how difficult it would be to have an accidental criticality in a standard NTR design.
The test had six specific goals: 1. Measure reaction history and total fissions produced under a known reactivity and compare to theoretical predictions in order to improve calculations for accident predictions, 2. to determine distribution of fission energy between core heating and vaporization, and kinetic energies, 3. determination of the nature of the core breakup, including the degree of vaporization and particle sizes produced, to test a possible nuclear destruct system, 4. measure the release into the atmosphere of fission debris under known conditions to better calculate other possible accident scenarios, 5. measure the radiation environment during and after the power transient, and 6. to evaluate launch site damage and clean-up techniques for a similar accident, should it occur (although the degree of modification required to the reactor core shows that this is a highly unlikely event, and should an explosive accident occur on the pad, it would have been chemical in nature with the reactor never going critical, so fission products would not be present in any meaningful quantities).
There were 11 measurements taken during the test, including: reactivity time history, fission rate time history, total fissions, core temperatures, core and reflector motion, external pressures, radiation effects, cloud formation and composition, fragmentation and particle studies, and geographic distribution of debris. An angled mirror above the reactor core (where the nozzle would be if propellant were being fed into the reactor) was used in conjunction with high-speed cameras at the North bunker to take images of the hot end of the core during the test, along with a number of thermocouples placed in the core.
As can be expected, this was a very short test, with a total of 3.1×10^20 fissions achieved after only 12.4 milliseconds. This was a highly unusual explosion, consistent with neither a chemical nor a nuclear explosion. The core temperature exceeded 17,500 C in some locations, vaporizing approximately 5-15% of the core (the majority of the rest either burned in the air or was aerosolized into the cloud of effluent), and produced roughly 150 MW-sec (150 MJ) of kinetic energy – about the same kinetic energy as 100 pounds of high explosive (although due to the nature of this explosion, caused by rapid overheating rather than chemical combustion, getting the same overall effect from chemical explosives would take considerably more HE). Material in the core was observed to be moving at 7300 m/sec before it came into contact with the pressure vessel, and the largest intact piece of the pressure vessel (a 0.9 sq. m, 67 kg fragment) was flung 229 m from the test location. There were some issues with instrumentation in this test, namely with the pressure transducers used to measure the shock wave: all but two of these instruments (placed 100 ft away) recorded not the pressure wave but an electromagnetic signal at the time of peak power (the two that worked recorded a 3-5 psi overpressure).
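Those figures can be roughly cross-checked. Assuming about 200 MeV of recoverable energy per fission (a standard textbook value, not a number from the test report), the total fission energy and the fraction that ended up as kinetic energy work out as:

```python
FISSIONS        = 3.1e20     # total fissions in the 12.4 ms transient
MEV_PER_FISSION = 200.0      # assumed recoverable energy per fission (textbook value)
J_PER_MEV       = 1.602e-13  # joules per MeV

total_energy_j = FISSIONS * MEV_PER_FISSION * J_PER_MEV  # ~9.9e9 J total
kinetic_j      = 150e6                                   # reported 150 MJ kinetic

print(f"Total fission energy: {total_energy_j / 1e9:.1f} GJ")
print(f"Kinetic fraction:     {kinetic_j / total_energy_j:.1%}")
```

So only about 1.5% of the fission energy became kinetic energy; the rest went into heating, vaporizing, and burning the core, which is exactly the energy split that test goal 2 above set out to measure.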
Radioactive Release during Rover Testing Prequel: Radiation is Complicated
Radiation is a major source of fear for many people, and is the source of a huge amount of confusion in the general population. To be completely honest, when I get into the nitty gritty of health physics (the study of radiation’s effects on living tissue), I spend a lot of time re-reading most of the documents, because it is easy to get confused by the terms that are used. To make matters worse, especially for the Rover documentation, everything is in the old, outdated measures of radioactivity. Sorry, SI users out there: all the AEC and NASA documentation uses Ci, rad, and rem, and converting all of it would be a major headache. If someone would like to volunteer to help me convert everything to common sense units, please contact me, I’d love the help! Remember, though, that the natural environment is radioactive, and the Sun emits a prodigious amount of radiation, only some of which is absorbed by the atmosphere. Indeed, there is evidence that the human body REQUIRES a certain amount of radiation to maintain health, based on a number of studies done in the Soviet Union using completely non-radioactive, specially prepared caves and diets.
Exactly how much radiation is healthy or harmful is a matter of intense debate – and not much study – and three main competing models have arisen. The first, the linear no-threshold model, is the law of the land: it holds that each rad (or gray, we’ll get to that below) of radiation increases a person’s chance of getting cancer by a certain percentage, linearly, whether the dose arrives in one incident (which is usually worse) or evenly spaced throughout the whole year. Regulators then set a maximum amount of radiation allowable to a person over the course of a year, so effectively the LNT model (as it’s known) defines a maximum acceptable increase in the chance of a person getting cancer in a given timeframe (usually quarters and years). This doesn’t take into account the human body’s natural repair mechanisms, though, which can replace damaged cells (no matter how they’re damaged), which leads many health physicists to see issues with the model, even as they work within it in their professions.
The second model is known as the linear-threshold model, which states that low level radiation (under the threshold of the body’s repair mechanisms) doesn’t make sense to count toward the likelihood of getting cancer. After all, if you replace your Formica counter top in your kitchen with a granite one, the natural radioactivity in the granite is going to expose you to more radiation, but there’s no difference in the likelihood that you’re going to get cancer from the change. Ramsar, Iran (which has the highest natural background radiation of any inhabited place on Earth) doesn’t have higher cancer rates, in fact they’re slightly lower, so why not set the threshold to where the normal human body’s repair mechanisms can control any damage, and THEN start using the linear model of increase in likelihood of cancer?
The third model, hormesis, takes this one step further. In a number of cases, such as Ramsar, and an apartment building in Taiwan which was built with steel contaminated with radioactive cobalt (causing the residents to be exposed to a MUCH higher than average chronic, or over time, dose of gamma radiation), people have not only been exposed to higher than typical doses of radiation, but had lower cancer rates when other known carcinogenic factors were accounted for. This is evidence that having an increased exposure to radiation may in fact stimulate the immune system and make a person more healthy, and reduce the chance of that person getting cancer! A number of places in the world actually use radioactive sources as places of healing, including radium springs in Japan, Europe, and the US, and the black monazite sands in Brazil. There has been very little research done in this area, since the standard model of radiation exposure says that this is effectively giving someone a much higher risk for cancer, though.
I am not a health physicist. It has become something of a hobby for me in the last year, but this is a field that is far more complex than astronuclear engineering. As such, I’m not going to weigh in on the debate as to which of these three theories is right, and would appreciate it if the comments section on the blog didn’t become a health physics flame war. Talking to friends of mine that ARE health physicists (and whom I consult when this subject comes up), I tend to lean somewhere between the linear threshold and hormesis theories of radiation exposure, but as I noted before, LNT is the law of the land, and so that’s what this blog is going to mostly work within.
Radiation (in the context of nuclear power, especially) starts with the emission of either a particle or a ray from a radioisotope, an unstable nucleus of an atom. This is measured with the curie (Ci), which is a measure of how much radioactivity IN GENERAL is released: 3.7×10^10 emissions (alpha, beta, neutron, or gamma) per second. SI uses the becquerel (Bq), which is simple: one decay per second = 1 Bq. So 1 Ci = 3.7×10^10 Bq. Because the becquerel is so small, megabecquerels (MBq) are often used – unless you’re looking at highly sensitive laboratory experiments, even a dozen Bq is effectively nothing.
Each type of radiation affects both materials and biological systems differently, though, so there’s another unit used to describe the energy that radiation deposits in a material, the absorbed dose: this is the rad, and the SI unit is the gray (Gy). The rad is defined as 100 ergs of energy deposited in one gram of material, and the gray is defined as 1 joule of radiation absorbed by one kilogram of matter. This means that 1 rad = 0.01 Gy. This unit is mostly seen for inert materials, such as reactor components, shielding materials, etc. If it’s being used for living tissue, that’s generally a VERY bad sign, since it’s pretty much only used that way in the case of a nuclear explosion or major reactor accident. It is used in the case of an acute – or sudden – dose of radiation, but not for longer term exposures.
This is because many things go into how bad a particular radiation dose is: a gamma beam that goes through your hand, for instance, is far less damaging than one that goes through your brain, or your stomach. This is where the final measurement comes into play: NASA and AEC documentation uses the rem (roentgen equivalent man); the SI unit is the sievert (Sv), with 1 rem = 0.01 Sv. This is the dose equivalent, which normalizes the different radiation types’ effects on the various tissues of the body by applying a quality factor to each type of radiation for each part of the body that is exposed. If you’ve ever wondered what health physicists do, it’s all the hidden work that goes on when that quality factor is applied.
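Since this post (and the source documents) will keep flipping between the legacy and SI units, here is a small sketch of the conversions; the quality factors in the comment are typical round values, not numbers from the Rover reports:

```python
# Legacy AEC/NASA radiation units -> SI equivalents.
CI_TO_BQ  = 3.7e10  # activity:      1 curie = 3.7e10 decays per second
RAD_TO_GY = 0.01    # absorbed dose: 1 rad = 100 erg/g = 0.01 J/kg = 0.01 gray
REM_TO_SV = 0.01    # dose equiv.:   1 rem = 0.01 sievert

def dose_equivalent_rem(absorbed_rad: float, quality_factor: float) -> float:
    """Dose equivalent = absorbed dose x quality factor
    (roughly 1 for gamma/beta, ~20 for alpha)."""
    return absorbed_rad * quality_factor

print(f"1 Ci = {1 * CI_TO_BQ:.1e} Bq")
print(f"750 rad = {750 * RAD_TO_GY:.1f} Gy")
print(f"5 rem/yr = {5 * REM_TO_SV:.2f} Sv/yr")
```

The 750 rad and 5 rem/yr figures show up again below, as the skin-lesion threshold and the occupational full-body limit used during the program.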
The upshot of all of this is the way that radiation dose is assessed. A number of variables were assessed at the time (and still are today, with this program serving as an effective starting point for ground testing, where release of radioactivity to the general public is a minuscule but still necessary consideration). Exposure was broadly divided into three types: full-body (5 rem/yr for an occupational worker, 0.5 rem/yr for the public); skin, bone, and thyroid exposure (30 rem/yr occupational, 3 rem/yr for the public); and other organs (15 rem/yr occupational, 1.5 rem/yr for the public). In 1971, the guidelines for the public were changed to 0.5 rem/yr full body and 1.5 rem/yr for other organs, but as has been noted (including in the NRDS Effluent Final Report) this was more an administrative convenience than a biomedical need.
Additional considerations were made for discrete fuel element particles ejected from the core – the chance that a person would come in contact with one was estimated at less than one in ten thousand, based on a number of factors. The biggest concern is that skin contact can result in a lesion, at an exposure above 750 rads (this is an energy deposition measure, not an expressly medical one, because only one type of tissue is being assessed).
Finally, and perhaps the most complex to address, is the aerosolized effluent from the exhaust plume, which could include both gaseous fission products (which were not captured by the clad materials used) and particles small enough to float through the atmosphere for a longer duration – and possibly be inhaled. The relevant limits of radiation exposure for these tests for off-site populations were 170 mrem/yr whole body gamma dose, and a thyroid exposure dose of 500 mrem/yr. The highest full body dose recorded in the program was 20 mrem, in 1966, and the highest thyroid dose recorded was 72 mrem, in 1965.
The Health and Environmental Impact of Nuclear Propulsion Testing Development at Jackass Flats
So how much radioactive material did these tests actually release? Considering how sparsely populated the area was, few people – if any – who weren’t directly associated with the program received any dose of radiation from aerosolized (inhalable, fine particulate) radioactive material. By the regulations of the day, no dose greater than 15% of the allowable AEC/FRC (Federal Radiation Council, an early federal health physics advisory board) dose for the general public was ever estimated or recorded. The actual release of fission products into the atmosphere (with the exception of Cadmium-115) was never more than 10% of the inventory, and often less than 1% (115Cd release was 50%). The vast majority of these fission products are very short-lived, decaying in minutes or days, so there was not much – if any – chance for migration of fallout (fission products bound to atmospheric dust that then fell along the exhaust plume of the engine) off the test site. According to a 1995 study by the Department of Energy, the total radiation release from all Rover and Tory-II nuclear propulsion tests was approximately 843,000 Curies. To put this in perspective, a nuclear explosive produces roughly 30,300,000 Curies per kiloton (depending on the size and efficiency of the explosive), so the total radiation release was the equivalent of a roughly 30 ton TNT explosion.
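The arithmetic behind that comparison is worth making explicit (using the two figures above; remember the curies-per-kiloton number is only a rough average):

```python
TOTAL_RELEASE_CI = 843_000      # all Rover and Tory-II tests (1995 DOE study)
CI_PER_KILOTON   = 30_300_000   # rough fission-product yield of a 1 kt device

tnt_tons = TOTAL_RELEASE_CI / CI_PER_KILOTON * 1000  # convert kilotons to tons
print(f"Equivalent yield: ~{tnt_tons:.0f} tons of TNT")  # ~28 tons
```

That comes out to about 28 tons, which rounds to the ~30 ton figure quoted above – the fission-product inventory of a single very small tactical weapon, spread over more than a decade of testing.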
This release came from either migration of the fission products through the metal clad and into the hydrogen coolant, or due to cladding or fuel element failure, which resulted in the hot hydrogen aggressively attacking the graphite fuel elements and carbide fuel particles.
The amount of fission product released is highly dependent on the temperature and power level the reactors were operated at, the duration of the test, how quickly the reactors were brought to full power, and a number of other factors. The actual sampling of the reactor effluent occurred three ways: sampling by aircraft fitted with special sensors for both radiation and particulate matter, the “Elephant Gun” effluent sampler placed in the exhaust stream of the engine, and post-mortem chemical analysis of the fuel elements to determine fuel burnup, migration, and fission product inventory. One thing to note is that for the KIWI tests, effluent release was not nearly as well characterized as for the later Phoebus, NRX, Pewee, and Nuclear Furnace tests, so the data for those later tests is not only more accurate, but far more complete as well.
Two sets of aircraft data were collected. The first (by LASL/WANL) sampled at fixed heights and transects within the six miles surrounding the effluent plume, collecting particulate effluent which was used (combined with known release rates of 115Cd and post-mortem analysis of the reactor) to determine the total fission product inventory released at those altitudes and vectors; this method was discontinued in 1967. The second (NERC) method, used after 1967, employed a fixed coordinate system to measure cloud size and density, utilizing a mass particulate sampler, charcoal bed, cryogenic sampler, external radiation sensor, and other equipment. Because these samples were taken more than ten miles from the reactor tests, it’s quite likely that much of the fission product inventory had either decayed or come down to the ground as fallout by the time the cloud reached the aircraft.
The next sampling method also came online in 1967: the Elephant Gun. This was a probe inserted directly into the hot hydrogen exiting the nozzle, which collected several moles of the exhaust stream at several points throughout the test and stored them in sampling tanks. Combined with hydrogen temperature and pressure data, acid leaching analysis of fission products, and gas sample data, this provided a more direct estimate of the fission product release, as well as a better view of the gaseous fission products released by the engine.
Finally, after testing and cool-down, each engine was put through a rigorous post-mortem inspection. Here, the amount of reactivity lost compared to the amount of uranium present, power levels and test duration, and chemical and radiological analysis were used to determine which fission products were present (and in which ratios) compared to what SHOULD have been present. This technique enhanced understanding of reactor behavior, neutronic profile, and actual power achieved during the test as well as the radiological release in the exhaust stream.
Radioactive release from these engine tests varied widely, as can be seen in the table above. However, the total amount released by the “dirtiest” of the reactor tests, the second Phoebus 1B test, was only 240,000 Curies, and the majority of the tests released less than 2,000 Curies. Another thing that varied widely was HOW the radiation was released. The immediate area (within a few meters) of the reactor would be exposed to both neutron and gamma radiation during operation. The exhaust plume would contain not only the hydrogen propellant (which wasn’t in the reactor long enough to absorb neutrons and turn into deuterium, much less tritium, in any meaningful quantity), but also the gaseous fission products (most of which, such as 135Xe, the human body isn’t able to absorb) and – if fuel element erosion or breakage occurred – a certain quantity of particles that may have become irradiated or may contain fissioned or unfissioned fuel.
These particles, and the cloud of effluent created by the propellant stream during the test, were the primary concern for both humans and the environment. Unlike direct radiation from the reactor, which travels in a straight line and falls off rapidly with distance, particulate effluent is carried by the wind, and – most importantly – it can be absorbed by the body through inhalation or ingestion; some of these elements are not just radioactive, but chemically toxic as well. As an additional complication, while alpha and beta radiation are generally not a problem for the human body (your skin stops both particles easily), when they’re IN the human body it’s a whole different ballgame. This is especially true of the thyroid, which is more sensitive to radiation than most organs, and soaks up iodine (131I is a fairly active radioisotope) like nobody’s business. This is why, after a major nuclear accident (or a theoretical nuclear strike), iodine tablets containing a radio-inert isotope are distributed: once the thyroid is full, the excess radioactive iodine passes through the body, since nothing else in the body can take it up and store it.
There are quite a few factors that go into how far this particulate will spread, including particle mass, temperature, velocity, altitude, wind (at various altitudes), moisture content of the air (particles could be absorbed into water droplets), plume height, and a host of other factors. The NRDS Effluent Program Final Report goes into great depth on the modeling used, and the data collection methods used to collect data to refine these estimates.
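The models in the NRDS report are far more sophisticated, but a minimal Gaussian plume sketch illustrates how just a few of these factors (release rate, wind speed, effective plume height, and dispersion with distance) combine into a ground-level concentration estimate. Everything here – the function, its parameters, and the example numbers – is a generic textbook illustration, not the model or data actually used in the Effluent Program:

```python
import math

def gaussian_plume_ground(Q, u, H, sigma_y, sigma_z, y=0.0):
    """Ground-level concentration (Ci/m^3) downwind of a continuous point release.

    Q: source strength (Ci/s), u: wind speed (m/s), H: effective plume height (m),
    sigma_y/sigma_z: horizontal/vertical dispersion coefficients (m) evaluated at
    the downwind distance of interest. Purely illustrative.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # Factor of 2 from the ground-reflection term evaluated at z = 0:
    vertical = 2 * math.exp(-H**2 / (2 * sigma_z**2))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical numbers, not data from the tests:
c = gaussian_plume_ground(Q=1.0, u=5.0, H=300.0, sigma_y=200.0, sigma_z=120.0)
```

Even this toy version reproduces the qualitative behavior described in the report: raising the plume height or increasing the wind speed dilutes the ground-level concentration.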
Another thing to consider in the context of Rover in particular is that open-air testing of nuclear weapons was taking place in the area immediately surrounding the Rover tests. Those tests released FAR more fallout – by many orders of magnitude – so Rover contributed only a very minor fraction of the radionuclides released in the area at the time.
The offsite radiation monitoring program, which included sampling of milk from cows to estimate thyroid exposure, collected data through 1972, and all exposures measured were well below the exposure limits set on the program.
Since we looked at the KIWI-TNT test earlier, let’s look at the environmental effects of that particular test. After all, a nuclear rocket blowing up has to be the most harmful test, right? Surprisingly, ten other tests released more radioactivity than KIWI-TNT. The discrete particles didn’t travel more than 600 feet from the explosion. The effluent cloud was recorded from 4,000 feet to 50 miles downwind of the test site, and aircraft monitoring the cloud were able to track it until it went out over the Pacific Ocean (although by that point it was far less radioactive). By the time the cloud had moved 16,000 feet from the test site, the highest whole-body dose measured from the cloud was 1.27×10^-3 rad (at station 16-210), and the same station registered an inhalation thyroid dose of 4.55×10^-3 rad. This shows that even the worst credible accident possible with a NERVA-type reactor has only a negligible environmental and biological impact, whether from the radiation released or from the explosion of the reactor itself, further attesting to the safety of this engine type.
If you’re curious about more in-depth information about the radiological and environmental effects of the KIWI-TNT tests, I’ve linked the (incredibly detailed) reports on the experiment at the end of this post.
The Results of the Rover Test Program
Throughout the Rover test program, the fuel elements were the source of most of the non-hydrogen-related issues. While other problems, such as instrumentation failures, were also encountered, the main headache was the fuel elements themselves.
A lot of the problems came down to the mechanical and chemical properties of the graphite fuel matrix. Graphite is easily attacked by hot H2, leading to massive fuel element erosion, and a number of solutions were experimented with throughout the test series. With the exception of the KIWI-A reactor (which used unclad fuel plates, and was heavily affected by the propellant), each of the reactors featured fuel elements that were clad to a greater or lesser extent, using a variety of methods and materials. Niobium carbide (NbC) was often the favored cladding material, but other options, such as tungsten, were also explored.
Chemical vapor deposition was an early option, but unfortunately it was not feasible to consistently and securely coat the interior of the propellant tubes, and differential thermal expansion was a major challenge: as the fuel elements heated, they expanded at a different rate than the coating did. This led to cracking – and in some cases flaking off – of the cladding, exposing the graphite to the propellant and allowing it to erode away. Machined inserts were a more reliable cladding form, but were more complex to install.
The exterior of the fuel elements originally wasn’t clad, but as time went on it became obvious that this would need to be addressed as well. Some propellant would leak between the prisms, eroding the outside of the fuel elements. This changed the fission geometry of the reactor, led to fission product and fuel release through erosion, and weakened the already somewhat fragile fuel elements. Usually, though, vapor deposition of NbC was sufficient to eliminate this problem.
Fortunately, these issues are exactly the sort of thing that CFEET and NTREES are able to test, and these systems are far more economical to operate than a hot-fired NTR is. It is likely that by the time a hot-fire test is being conducted, the fuel elements will be completely chemically and thermally characterized, so these issues shouldn’t arise.
The other issue with the fuel elements was mechanical failure, due to a number of problems. The pressure changes dramatically across the system, which leads to differential stress along the length of the fuel elements. The original, minimally supported fuel elements would often undergo transverse cracking, leading to blockage of propellant flow and erosion. In a number of cases, after a fuel element broke this way, its hot end would be ejected from the core.
This led to the development of a structure that is still found in many NTR designs today: the tie tube. This is a hexagonal prism, the same size as the fuel elements, which supports the adjacent fuel elements along their length. In addition to being a means of support, these are also a major source of neutron moderation, due to the fact that they’re cooled by hydrogen propellant from the regeneratively cooled nozzle. The hydrogen would make two passes through the tie tube, one in each direction, before being injected into the reactor’s cold end to be fed through the fuel elements.
The tie tubes didn’t eliminate all of the mechanical issues that the fuel element faced. Indeed, even in the NF-1 test, extensive fuel element failure was observed, although none of the fuel elements were ejected from the core. However, new types of fuel elements were being tested (uranium carbide-zirconium carbide carbon composite, and (U,Zr)C carbide), which offered better mechanical properties as well as higher thermal tolerances.
Current NTR designs still usually incorporate tie tubes, especially because the low-enriched uranium that is the main notable difference in NASA’s latest design requires a much more moderated neutron spectrum than a HEU reactor does. However, the ability to support the fuel element mechanically along its entire length (rather than just at the cold end, as was common in NERVA designs) does also increase the mechanical stability of the reactor, and helps maintain the integrity of the fuel elements.
The KIWI-B and Phoebus reactors were successful enough to serve as starting points for the NERVA engines. NERVA is an acronym for Nuclear Engine for Rocket Vehicle Application, and the program took place in two parts: NERVA-1, or NERVA-NRX, developed the KIWI-B4D reactor into a more flight-prototypic design, including balance of plant optimization, enhanced documentation of the workings of the reactor, and coolant flow studies. The second group of engines, NERVA-2, was based on the Phoebus 2 type of reactor from Rover, and was finally developed into the NERVA-XE, which was meant to be the engine that would power the manned mission to Mars. The NERVA-XE PRIME test was of the engine in flight configuration, with the turbopumps, coolant tanks, instrumentation, and even the reactor’s orientation (nozzle down, instead of up) all configured the way they would have been during the mission.
The XE-PRIME test series lasted nine months, from December 1968 to September 1969, and involved 24 startups and shutdowns of the reactor. The 1140 MW reactor operated at a 2272 K exhaust temperature and produced 247 kN of thrust at 710 seconds of specific impulse. The series included new startup techniques from cold-start conditions and verification of the reactor control systems – including using different subsystems to manipulate the power and operating temperature of the reactor – and demonstrated that the NERVA program had successfully produced a flight-ready nuclear thermal rocket.
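Those performance figures hang together, which is easy to check with the standard thrust and specific impulse relations. This is just a consistency sketch: the computed mass flow and jet power are derived values, not numbers from the program documentation.

```python
G0 = 9.80665            # standard gravity, m/s^2

thrust_n = 247_000      # 247 kN, from the text
isp_s = 710             # specific impulse in seconds, from the text
reactor_mw = 1140       # thermal power, from the text

exhaust_velocity = isp_s * G0                       # effective exhaust velocity, m/s
mass_flow = thrust_n / exhaust_velocity             # propellant mass flow, kg/s
jet_power_mw = 0.5 * mass_flow * exhaust_velocity**2 / 1e6

print(f"mass flow ~{mass_flow:.1f} kg/s, jet power ~{jet_power_mw:.0f} MW")
# ~35.5 kg/s and ~860 MW of jet power
```

Roughly 860 MW of the 1140 MW of thermal power ends up as kinetic power in the exhaust stream, i.e. about 75% – a plausible figure for a nozzle-expanded thermal rocket, which supports the internal consistency of the quoted numbers.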
Ending an Era: Post-Flight Design Testing
Toward the end of the Rover program, the engine design itself had been largely finalized, with the NERVA XE-Prime test demonstrating an engine in flight configuration (with all the relevant support hardware in place, and the nozzle pointing down). However, some challenges remained for the fuel elements themselves. In order to have a more cost-effective fuel element testing program, two new reactors were constructed.
The first, Pewee, was a smaller (75 klbf, the same size as NASA’s new NTR) nuclear rocket engine whose core could be replaced for multiple rounds of testing, but it was only used once before the cancellation of the program – though not before achieving the highest specific impulse of any of the Rover engines. This reactor was never tested outside of a breadboard configuration, because it was never meant to fly. Instead, it was a cost-saving measure for NASA and the AEC: due to its smaller size it was much cheaper to build, and due to its lower propellant flow rate it was also much easier to test. This meant that experimental fuel elements that had undergone thermal and irradiation testing could be tested in a fission-powered, full-flow environment at lower cost.
The second was the Nuclear Furnace, which mimicked the neutronic environment and propellant flow rates of the larger NTRs, but was not configured as an engine. This reactor was also the first to incorporate an effluent scrubber, capturing the majority of the non-gaseous fission products and significantly reducing the radiological release into the environment – something we’ll look at more in depth in the next post, as it remains a theoretically possible method of hot-fire testing a modern NTR. It also achieved the highest operating temperatures of any of the reactors tested in Nevada, meaning that the thermal stresses on the fuel elements were higher than would be experienced in a full-power burn of an actual NTR. Again, it was designed to be reused repeatedly in order to maximize the return on the reactor’s construction, but was only used once before the cancellation of the program. The fuel elements were tested in separate cans, and none of them used the legacy graphite fuel form: instead, composite (uranium carbide-zirconium carbide in a graphite matrix) and (U,Zr)C carbide fuel elements, which had been under development but not extensively used in Rover or NERVA reactors, were tested.
Westinghouse Astronuclear Laboratory also proposed a design based on the NERVA XE, called the PAX reactor, which was designed to have its core replaced, but this never left the drawing board. Again, the focus had shifted toward lower-cost, more easily maintained experimental NTR test stands, although this one was much closer to flight configuration. It would have been very useful: not only would the fuel have been subjected to a very similar radiological and chemical environment, but the mechanical linkages, hydrogen flow paths, and the resulting harmonic and gas-dynamic issues could have been evaluated in a near-prototypic environment. However, this reactor was never built or tested.
As we’ve seen, the radiological consequences of hot-fire testing were something that the engineers involved in the Rover and NERVA programs were exceptionally conscious of. Yes, there were radiological releases into the environment well above and beyond what would be considered acceptable today, but when compared to the releases from the open-air nuclear weapons tests that were occurring in the immediate vicinity, they were minuscule.
Today, though, these releases would be unacceptable. So, in the next blog post we’re going to look at the options, and restrictions, for a modern NTR hot-fire testing facility, including a look at the proposals over the years and NASA’s current plan for NTR testing. This will include the exhaust filtration system on the Nuclear Furnace, a more complex (but also more effective) filtering system proposed for the SNTP pebble-bed reactor (TimberWind), a geological filtration concept called SAFE, and a full exhaust capture and combustion system that could be installed at NASA’s current rocket test facility at Stennis Space Center.
This post is already started, and I hope to have it out in the next few weeks. I look forward to hearing all your feedback, and if there are any more resources on this subject that I’ve missed, please share them in the comments below!