What were the operating principles of Japan's MITI during the 1950s and 60s?

Japan's Ministry of International Trade and Industry (MITI) guided the country's economy through two decades of unprecedented growth. MITI was the primary instrument of Japan's industrial policy, which did not run the entire economy as a centrally planned communist system might, but "provided industries with administrative guidance and other direction, both formal and informal, on modernization, technology, investments in new plants and equipment, and domestic and foreign competition" (source: Wikipedia). How was MITI set up? How did it attract competent and well-intentioned bureaucrats? How did it prevent conflicts of interest between entrenched industries and consumers?

References to a long-form description of how they achieved this feat (in English) would also be appreciated.

MITI was formed by merging the Trade Agency and the Ministry of Commerce and Industry. Its purpose was to help curb inflation and to provide government leadership across different types of industries. MITI helped these industries by establishing policies on exports and imports as well as on domestic industries. In fact, the foreign trade policy it developed was designed to strengthen domestic manufacturing. MITI was also responsible for establishing guidelines pertaining to pollution control, energy and power, and customer complaints.

The key to MITI's success, from a bureaucratic perspective, was that it did not have a group of bureaucrats who simply made whatever decisions they felt would be successful. Instead, it relied on bringing the top figures in each field of industry together to reach a consensus on policies before enacting them. It also encouraged industry leaders to share their best practices to ensure that everyone had an opportunity for success.

Regarding your request for a long-form description of how they did this, I found a research paper from Harvard that focuses specifically on MITI's role in information technology advancements. However, the principles applied in this particular field are equally relevant to other industries. Here is the primary point that brings all this together:

MITI by itself is not responsible for Japan's economic success. Rather, its forte appears to be its ability to bring divergent points of view together in creating national policies that are generally acceptable to the various sectors of the society. Consensus-building remains one of the striking features of the Japanese scene. Japan's and MITI's success in the post-war era have been due in large part to the political stability that has been enjoyed. This stability has enabled MITI to perform its role of guidance and coordination in an effective manner.

There is a very thorough explanation of Japanese economic growth in the work of the Asia scholar Chalmers Johnson, Blowback: The Costs and Consequences of American Empire. In addition to a well-researched account of American actions and the intelligence-community concept of "blowback", he provides a detailed explanation of Japanese economic success (you would probably also be interested in his more direct treatment of the role of MITI specifically, MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925-1975). His argument in Blowback is that in addition to the Japanese skill in industry promoted by the likes of MITI, much of their incredible growth was due to their ability to use their geopolitical surroundings to their benefit.

"From approximately 1950 to 1970, the United States treated Japan as a beloved ward, indulging its every economic need and proudly patronizing it as a star capitalist pupil. The United States sponsored Japan's entry into many international institutions… transferred crucial technologies to the Japanese on virtually concessionary terms and opened its markets to Japanese products while tolerating Japan's protection of its own domestic market."

-Chalmers Johnson, Blowback: The Costs and Consequences of American Empire, p.177

One of the interesting points Johnson makes is his claim that "East Asian export regimes thrived on foreign demand artificially generated by an imperialist power… the strategy only worked so long as Japan and perhaps one or two smaller countries pursued this strategy." These policies initially benefited both Japan and the US: the former as described above, and the latter by gaining cheap consumer goods and an example of the benefits of capitalism to be used in the ideologically driven conflicts raging in East Asia. By about the end of the 1980s, however, the Japanese had developed an overcapacity to produce goods aimed at the American market at the same time that American policies in Japan (and elsewhere) had hollowed out vital American industries, lowering employment opportunities and wages in the US and thus correspondingly lowering the ability of the US consumer to absorb Japanese products.

His argument is elaborated more in chapter 9 of Blowback, and I would really recommend that you check it out from your local library or buy it, if you want to know more.

Underwriters Laboratories, Inc.

Underwriters Laboratories, Inc. (UL) and its subsidiaries around the world evaluate products, materials, and systems for safety and compliance with U.S. and foreign standards. In 1998, more than 14 billion UL Marks appeared on new products worldwide. The UL staff has developed more than 600 Standards for Safety, 80 percent of which are approved as American National Standards. Testing and service fees from clients support the independent, not-for-profit organization.

USS Amberjack (SS-522)

Guppy II configuration as she looked when I served aboard her. The deck guns were removed during the GUPPY conversions.

Displacement: 1,570 (surf.), 2,415 (subm.)
Length: 311′ 8″, Beam: 27′ 3″
Draft: 15′ 5″ (mean), Speed: 20.25 k. (surf.), 8.75 k. (subm.)
Complement: 81
Armament: 10 21″ torpedo tubes, 1 5″ deck gun, 1 40-mm. deck gun
Class: BALAO

The second AMBERJACK (SS-522) was laid down on 8 February 1944 at the Boston Navy Yard, launched on 15 December 1944 sponsored by Mrs. Dina C. Lang, and commissioned on 4 March 1946, Comdr. William B. Parham in command. The first submarine with this name, USS Amberjack (SS-219), was sunk by the Japanese torpedo boat Hiyodori on 16 February 1943, less than nine months after commissioning. To read more about the exploits and loss of the SS-219, you can read her skipper’s two written war patrol reports and his radioed reports of the third patrol up until the time of her sinking. It is a very poignant story.

Following shakedown training in the West Indies and in the Gulf of Mexico, AMBERJACK reported on 17 June for duty with Submarine Squadron (SubRon) 8. Operating out of the Submarine Base, New London, Conn., she conducted training missions in the North Atlantic, and, in November 1946, made a cruise above the Arctic Circle. In January 1947, the submarine entered the Portsmouth (N.H.) Naval Shipyard for extensive modifications and thereafter spent about a year undergoing a “GUPPY” II conversion (for Greater Underwater Propulsion Power), during which her hull and sail were streamlined and additional batteries and a snorkel were installed to increase her submerged speed, endurance, and maneuverability. In January 1948, she reported for duty with SubRon 4 based at Key West, Fla. She operated along the east coast and in the West Indies for a little more than 11 years. Her schedule included the development of tactics and independent ship exercises, type training, periodic overhauls and fleet exercises. During this period, she also visited numerous Caribbean ports. In July of 1952, AMBERJACK was transferred to the newly established SubRon 12, though she remained based at Key West and her employment continued as before.

The January 1950 edition of National Geographic Magazine has a 23-page article on the Navy in Key West with many pictures of the Amberjack and her crew. The author was on board during the steep ascent made famous in the photo at the bottom of this page.

Amberjack snorkeling off Key West in 1950

Early in August 1959, after more than 11 years of operations out of Key West, the submarine’s home port was changed to Charleston, S.C. She arrived there on the 8th and reported for duty with her former squadron, SubRon 4. While working out of her new home port, AMBERJACK’s operations remained much as they had been before with one significant difference: she began making deployments to European waters. In August, September and October of 1960, the submarine participated in a NATO exercise before making a week-long port visit to Portsmouth, England. She returned to Charleston late in October and resumed her normal duties. Between May and September of 1961, the warship deployed to the Mediterranean Sea for duty in the 6th Fleet.

I reported on board on 9 Sep 1961 in Charleston, South Carolina as a Seaman 1C, ET striker. My assignment for the first two weeks was that of a mess cook, which was part of the initiation routine. We immediately left for sea as a hurricane was approaching and all USN ships were to put to open ocean in order to keep from pounding against the piers. We had been in the middle of some repairs that left us unable to submerge, so we had to ride it out on the surface. We tossed about like a coconut in the surf and I was constantly seasick for several days as we weathered huge seas. We couldn’t go out on deck, so I had to lug garbage cans through the control room, up a ladder to the conning tower, and up another ladder to the bridge. Then I had to wait for the correct roll angle so that the garbage wouldn’t land on the deck when I dumped the contents over the side. All without vomiting along the way. If I did a good job the Conning Officer would let me stay topside for awhile to get some fresh air in my face. After the initiation period, I became a regular member of the Electronics gang in the Operations Department.

My duties included standing radar and electronic counter measures watches and maintaining the equipment. I rotated through helm, diving control and lookout watch stations, and my battle station was at the bow planes control station as I was pretty good at holding the ordered angle on the boat.

We made frequent stops in Fort Lauderdale, Florida, where we ran sound profiles at the South Florida Testing Facility. The sea bed dropped to 600 feet within 3 miles of shore, so it made for an ideal test range. We’d tie up at a commercial pier in Port Everglades, leave in the morning, run up and down the range most of the day and return to Fort Lauderdale for liberty in the evening. While on this duty we would run Port and Starboard watches (typical in port) rather than 3 watches as at sea. Sometimes we’d leave one of the 2 watch sections ashore for liberty as we only needed 1 watch for the day. They just had to be back at the pier in time to tie us up. After a day on the town they sometimes had trouble catching the heaving lines.

On non-operational days we’d hold Open House on the boat for local visitors. We used to joke that the reason for us being there so often was because the skipper had a girlfriend there. I know many of the guys on the boat did.

We also stopped in Key West when patrolling the waters between Florida and Cuba. In 1962, we participated in a major fleet exercise, “Spring Board”. We and several other subs shadowed and “attacked” the military flotilla each night on the way to Puerto Rico. We were based in San Juan when not transporting and “locking out” Navy SEALs while submerged so they could “invade” Vieques Island in their rubber rafts.

Today it’s not uncommon for the boats to have armed personnel topside when entering constricted waters. But back then we didn’t have much experience with small arms except for occasional “shark shoots” after swim call. Aside from carrying a .45 when standing OOD, I only carried a sidearm one other time. Our crypto machine had been malfunctioning and needed to go to a secure shop at the Charleston base for maintenance. As I was fast tracked for the Nuclear program, I had already been vetted for a Secret security clearance. So the Operations Officer and I bundled the machine up in a locked canvas satchel with vent holes, designed to sink to the bottom if we had to toss it off the pier. Then we each loaded our .45s, took hold of the satchel’s 2 handles and walked it up the pier and across the Navy yard to a building where 2 armed Marines signed for it and took it into a room behind a vault-like door.

I earned my first Dolphins after qualifying as a Submariner on AMBERJACK in March of 1962. At the time, it took about 7 months to learn all the systems, controls and watch stations necessary for qualification. In June, I was transferred to Nuclear Power Training School in Bainbridge, Maryland, to join the class starting that month.

After a three-year interlude operating along the east coast and in the West Indies, AMBERJACK made another Mediterranean cruise between 7 July and 1 November 1964. She spent the ensuing 29 months working out of Charleston. In 1967, the submarine made a three-month deployment to the Mediterranean between 23 April and 24 July. On 2 September 1969, following another 25 months of operations along the east coast and in the West Indies, she embarked upon her last Charleston-based tour of duty in European waters during which she participated in another NATO exercise with units of the British, Canadian, and Dutch navies. At the conclusion of the exercise, AMBERJACK visited a number of ports in northern Europe before returning to Charleston on 12 December 1969.

On 9 July 1970, AMBERJACK arrived in her new home port, Key West, her base for the remainder of her service in the American Navy. She made her last deployment to the Mediterranean between 27 November 1972 and 30 March 1973. On 17 October 1973, AMBERJACK was decommissioned at Key West, and her name was struck from the Navy list. That same day, she was transferred to the Brazilian Navy and commissioned as CEARA (S-12). As of the end of 1984, she was still active in the Brazilian Navy. In about 1995, I met a business associate in Brasilia who was in the Brazilian Naval Reserve. He confirmed that the AMBERJACK/CEARA was still in operation at that time.

Before GUPPY conversion. Still has her deck guns.
After GUPPY II modifications.
After full sail modifications.

The USS Pickerel (SS524) also recorded a very steep ascent. Here is one report of her experience:

Pickerel (SS-524), surfacing at a 48-degree up angle from a depth of 150 feet, during tests off the coast of Oahu, Hawaii, 1 March 1952. “The purpose of this operation was to enable the Navy’s submarine experts to evaluate the capabilities and characteristics of the GUPPY-snorkel type sub. This picture was taken from Sabalo (SS-302). Her sonarmen kept Pickerel under observation while she was submerged and preparing to surface. During Pickerel’s maneuvering, the sonar gear delivered the constantly changing relative bearing, which enabled the photographers to make this shot as she broke the surface.”

Note: The official record of the “surfacing” pictured above is that it started at 150 feet and reached a 48-degree up-angle. From a crew member manning the helm during this evolution: “We started at 250 feet, flank speed. The surfacing order included ‘use 60 degrees’ (the highest reading on the ‘bubble-type’ angle indicator). We overshot, and lost the bubble at 65 degrees. The maximum angle (72 degrees) was calculated later by the high-water marks in the Pump Room bilges. Thinking back, even with the bow sticking above water up to the bridge fairwater, the screws wouldn’t have been much above where we started, still pushing us upward.

“First message from the Queenfish (SS-393), which was accompanying us: ‘What is the specific gravity of your Torpedo Room bilges?’

“As you may imagine, the C.O. was something of a competitive wildman, pushing to find out what the limits were for these new GUPPY boats, after putting up with the older WW2 boats. And, we had to beat the Amberjack’s (SS-522) record of 43 degrees.”

History of Hygiene Timeline

The word hygiene comes from Hygeia, the Greek goddess of health, who was the daughter of Aesculapius, the god of medicine. Since the arrival of the Industrial Revolution (c.1750-1850) and the discovery of the germ theory of disease in the second half of the nineteenth century, hygiene and sanitation have been at the forefront of the struggle against illness and disease.

4000 BC – Egyptian women apply galena mesdemet (made of copper and lead ore) and malachite to their faces for color and definition.

3000 BC – The Ancient Romans invented lead-lined water pipes and tanks. The rich paid private water companies for their drinking water and other water needs, although it wasn’t much better than the water supply the peasants used. Most water systems were made from elm trunks, with domestic pipes lined with lead. Water was stored in large lead tanks and often became stagnant.

2800 BC – Some of the earliest signs of soap or soap-like products were found in clay cylinders during the excavation of ancient Babylon. Inscriptions on the side of the cylinders say that fats were boiled with ashes, but did not refer to the purpose of ‘soap’.

1550-1200 BC – The Ancient Israelites took a keen interest in hygiene. Moses gave the Israelites detailed laws governing personal cleanliness. He also related cleanliness to health and religious purification. Biblical accounts suggest that the Israelites knew that mixing ashes and oil produced a kind of hair gel.

1500 BC – Records show that ancient Egyptians bathed regularly. The Ebers Papyrus, a medical document from about 1500 B.C describes combining animal and vegetable oils with alkaline salts to form a soap-like material used for treating skin diseases, as well as for washing.

1200-200 BC – The ancient Greeks bathed for aesthetic reasons and apparently did not use soap. Instead, they cleaned their bodies with blocks of clay, sand, pumice and ashes, then anointed themselves with oil, and scraped off the oil and dirt with a metal instrument known as a strigil. They also used oil with ashes.

1000 BC – Grecians whitened their complexion with chalk or lead face powder and fashioned crude lipstick out of ochre clays laced with red iron.

600 BC – Ancient Greeks start using public baths. In The Book of the Bath, Françoise de Bonneville wrote, “The history of public baths begins in Greece in the sixth century BC”, where men and women washed in basins near places of exercise. The Ancient Greeks also started using chamber pots. In use from at least 600 BC, they remained common all over the world until around the 18th century.

300 BC – Wealthy Ancient Romans began to use wiping materials in their toilet habits, commonly wool and rosewater. About 100 years later, ordinary Romans used a sponge soaked in salt water.

19 BC – Ancient Romans began to use public baths. Agrippa (Emperor Augustus’ right-hand man) built the first public baths, called Thermae, in the year 19 BC. They increased in number rapidly: at least 170 were operating in Rome by the year 33 BC, with more than 800 operating at the height of their popularity.

27 BC – Ancient Romans believed in the ability of urine to remove stains. Until the medieval period, people used lye, made of ashes and urine, to clean their clothes.

100 AD – The Ancient Romans developed cesspits, usually in the cellar or garden. In 1183 BC a Roman Emperor’s hall floor collapsed, sending dinner guests into the cesspit where some of them, unfortunately, drowned.

400 AD – In Medieval Britain, the population had begun various habits to keep their teeth clean. These included rinsing the mouth with water, or a mixture of vinegar and mint, to remove gunk. Bay leaves soaked in orange flower water were also used, and the teeth would often be rubbed with a clean cloth too.

1110 AD – In Britain, one pamphlet recommended that people keep their teeth white by rubbing their teeth with powdered fish bones and then rinsing their mouths out with a mixture of vinegar and sulphuric acid!

1308 AD – In Britain it was common for your barber to remove problem teeth! If basic treatments didn’t fix the problem, the barber would remove the tooth, without the help of novocaine! A guild for barbers, established in 1308, taught barbers surgery skills.

1346-1353 AD – The Black Death pandemic swept across Europe, killing 40-50% of the population over a four-year period. Likely originating in Central Asia, it was probably spread through trade routes.

1400 AD – The Chinese invented toilet paper.

1500-1600 AD – Pale faces were fashionable during the reign of Elizabeth I. Ceruse was the foundation make-up of choice for both men and women in the Elizabethan era, as it gave them a smooth, pale look. However, it contained lead that seeped into the body through the skin, leading to poisoning. Lead-based variants had been used for thousands of years.

1566 – King James VI of Scotland wore the same clothes for months on end, even sleeping in them on occasion. He also kept the same hat on 24 hours a day until it fell apart! He didn’t take a bath as he thought it was bad for his health!

1586 – Sir John Harington invented a valve that, when pulled, would release water from a water closet. Albert Giblin holds the 1898 British Patent for the Silent Valveless Water Waste Preventer, a system that allowed a toilet to flush effectively. Unfortunately there were no sewers or running water in Harington’s time, so his design could not be put to practical use.

1600 – New developments in teeth cleaning started to appear in Britain. Rubbing one’s teeth with the ashes of rosemary was common, and powdered sage was rubbed on the teeth as a whitening agent. Vinegar and wine were also mixed to form a mouthwash.

1600-1700 – The same cleaning practices remained in use, but the ‘barbers’ (a.k.a. dentists) had begun to learn more about dentistry. The first dentures, gold crowns, and porcelain teeth arrived in the 1700s. 1790 brought the dental foot engine, which rotated a drill for cleaning out cavities. The first dental chair was made in the late 1700s.

1750 – A letter from Lord Chesterfield to his son urges the use of a sponge and warm water to scrub the teeth each morning. In France, the use of one’s own urine as a rinse was promoted by Fauchard, the French dentist. Gunpowder and alum were also recommended.

1789 – People were already fashion-conscious during the 18th century. When their eyebrows did not look fashionable, they often masked them with tiny pieces of skin from a mouse. Poems from as early as 1718 allude to this practice.

1834 – The 1834 London Medical and Surgical Journal describes sharp stomach pains in patients with no evidence of disease. This led them to believe “painter’s colic” was a “nervous affection” of the intestines occurring when lead “is absorbed into the body”.

1846 – Public baths had been popular since the 13th century. Due to the scarcity of firewood, bathing became an expensive practice. Whole families and friends had to share a bath, or many of them would remain dirty.

1847 – A physician called Ignaz Semmelweis found that childbed fever occurred mostly in women who were assisted by medical students, and that those students often assisted in childbirth straight after performing autopsies. After he instituted a strict hand-washing policy, deaths dropped 20-fold within 3 months.

1837-1901 – A nosegay was typically a small bouquet of flowers or a sachet of herbs. It was attached to the wrist or a lapel, or simply held in the hand, and would be held under one’s nose when walking through crowds. Nosegays gained popularity during Queen Victoria’s reign.

1854 – In mid-19th century England, outbreaks of cholera led to an epidemic. A physician called John Snow observed that cholera seemed to spread via sewage-contaminated water, most noticeably around a water pump in Broad Street, London. Snow had the pump handle removed, and the spread was soon contained.

1858 – Hot weather struck the capital in 1858, lowering the River Thames and leaving raw sewage and other waste piled up and exposed. This was ‘The Great Stink’, which forced Parliament to close for the day and eventually prompted a reform of the sewerage systems and cesspits.

1861 – The modern flushing toilet. Thomas Crapper didn’t invent the flush toilet, but he is understood to have made major contributions to its development by implementing a modern septic system that pumped soiled water out of the city. However, this particular subject is still heavily debated.

1920 – Lysol was sold as a genital disinfectant and birth control method. Lysol ads proclaimed a host of benefits for every gynaecological need, and it was the leading form of birth control from 1930 to 1960. Lysol is actually a caustic poison that causes burning and itching from the first drop – yet many women applied it to their skin throughout those 30 years.

It seemed, then, an opportune moment to pause and reflect on how we have got here. As with all such endeavours, the journey has been part strategic and part serendipitous but underpinning it all has been a commitment to furthering our knowledge of manufacturing in its broadest sense, and passing that knowledge on to industry and government and to successive generations of talented students.

In many respects, the IfM is following in the footsteps of James Stuart, the first ‘true’ Professor of Engineering at Cambridge (1875-1890). An educational innovator and a passionate advocate of putting theory into practice, he challenged the conventions of his day. When faced with what he considered to be inadequate teaching facilities, undeterred he created a workshop for his students in a wooden hut and, less popularly, installed a foundry in Free School Lane. The story of manufacturing at Cambridge is imbued with his indomitable spirit.

1966: students on the first Advanced Course in Production Methods and Management (now the MPhil Industrial Systems, Manufacture and Management)

THE 1950s, 60s AND 70s

The start of manufacturing education in Cambridge: the Advanced Course in Production Methods and Management.

In the 1950s Britain was still an industrial Goliath. Manufacturing accounted for around a third of the national output and employed 40 per cent of the workforce. It played a vital role in rebuilding postwar Britain but for a number of reasons – including a lack of serious competition and an expectation that it would provide high levels of employment – there was little incentive for companies to modernise their factories or improve the skills of their managers and workers.

In those days, it was the norm for engineering graduates to go into industry as ‘graduate apprentices’ for a period of up to two years. In practice, this was often badly organised, resulting in disappointment and frustration for all concerned.

Sir William Hawthorne, Professor of Applied Thermodynamics (and later Head of Department and Master of Churchill College), was himself an unimpressed recipient of graduate training. He likened apprenticeships to an unpleasant initiation ritual “in which people had their noses rubbed in it and then rubbed other people’s noses in it.” Even if you were lucky enough to avoid having your nose rubbed in anything, your apprenticeship probably involved “standing next to Nelly and watching what they did”. Hawthorne could see that this approach perpetuated current practice and inhibited innovation and entrepreneurship.

He decided that Cambridge could – and should – do something about it and asked his colleagues John Reddaway and David Marples to devise some short industrial courses for graduates. These comprised lectures, discussions and site visits and looked at how a whole company operated – how it organised its engineering design, production control, welfare and marketing. And the courses seemed to work. They were run very successfully for the aircraft engine manufacturer D. Napier & Son Ltd., and based on this experience Reddaway, Marples and Napier’s head of personnel, J. D. A. Radford, wrote a paper on “An approach to the techniques of graduate training”. They presented this paper to the Institution of Mechanical Engineers in 1956 with the suggestion that it take over the running of these courses and make them widely available. Although the courses – and the paper – were well received, with no sense of urgency over the need to improve current practice the enterprise succumbed to a lack of funding. In the meantime, Reddaway had been asked by the University to produce a plan for a course similar in style and content that would last a year. This became known as the Reddaway Plan. But there was no money to recruit someone to run it, so the plan gathered dust for the best part of ten years.

During those ten years concern was beginning to mount over Britain’s lagging productivity and its declining share of world export markets. Successive governments embarked on a series of policy interventions and manufacturing became something of a national preoccupation. When John Reddaway was asked to talk about his plan at a conference of the Cambridge University Engineering Association in 1965, there was perhaps a greater imperative for change. In attendance was Sir Eric Mensforth, the Chairman of Westland Aircraft. Coincidentally, Reddaway had been an apprentice at Westland and, when Mensforth established a scholarship at Cambridge, Reddaway had been its first recipient. Mensforth offered the University £5,000 if they could get the Reddaway Plan off the ground.

Also in the audience was Cambridge alumnus Mike Sharman, who immediately volunteered to leave his lectureship at Hatfield Polytechnic to run the course, even though Mensforth’s contribution amounted to just two years’ worth of funding.

The Advanced Course in Production Methods and Management was up and running the following year, with its first intake of 12 students. Lasting a full calendar year, and designed to emulate professional rather than student tasks and disciplines, the course involved an intense series of real two-to-three week projects in factories across the country, interspersed with lectures from practitioners as well as academics.

The projects, typically analysing and improving factory operations, were almost always successful – sometimes spectacularly so. Industry responded well to seeing these students getting to grips with the practicalities of engineering and manufacturing, and graduates from the course were, and continue to be, much in demand. The notion that going into a factory and undertaking short, intensive projects would be an effective way of learning was nothing short of revolutionary, and it gave the students the confidence to tackle increasingly difficult tasks, developing them very rapidly into people who really could go on to become ‘captains of industry’.

Mike Gregory took the course in its fourth year: “For many of us who were introduced to the world of engineering and manufacturing through the ACPMM, the experience was quite literally life changing. We students were swept along by Mike Sharman’s enthusiasm, not to mention the thrill of travelling around the UK and overseas, visiting and working in all manner of factories. How to make a Volkswagen Beetle, how to make a tennis racket, how to put the flavour on both sides of a potato crisp – we learnt all this and much, much more.”

In 1987 a design option was added to ACPMM and it changed its name to the Advanced Course in Design, Manufacture and Management (ACDMM). This was in response to the growing recognition of the importance of design as a competitive differentiator.

But the path of ACPMM/ACDMM did not always run smooth. For many years the course occupied an anomalous position within the University, which remained suspicious of it and would periodically try to close it down. Until 1984, when Wolfson College agreed to take it in, it did not have a proper University home, which meant the students were not members of the University. Funding was a perpetual problem, particularly when universities were required to be more accountable for their spending. For many years, ACPMM did not have a qualification attached to it and the University Grants Committee (UGC) would only fund universities on the basis of the numbers of students who were awarded degrees or diplomas.

Another unusual aspect of the course was that in the 1970s it developed relationships first with the University of Lancaster and then Durham as a way of both expanding its teaching expertise and extending its geographical reach into companies the length and breadth of Britain. This became an additional complication when funds began to be allocated on the basis of student numbers and the administrative task of sharing the funding equitably between the partners proved to be too difficult to resolve. In 1996 Cambridge was left to forge ahead on its own.

Year 17 ACPMM students emerging from a mine

The qualification problem was solved when Professor Colin Andrew arrived in the mid-80s and set about devising an examination which would allow for the awarding of a diploma. He managed to persuade both Mike Sharman and the University that this was a good thing to do. But as one hurdle was surmounted another would appear. Other funding shortfalls emerged as the awarding bodies offered fewer studentships and cut support for staff. In this not entirely conducive environment, ACDMM was looking to increase its student numbers. At this point, David Sainsbury (now Lord Sainsbury of Turville and Chancellor of the University) and the Gatsby Charitable Foundation intervened. The continuation of ACDMM was consistent with one of Gatsby's primary charitable objectives – to strengthen science and engineering skills within the UK – so Gatsby agreed to provide funding for a five-year period.

Mike Sharman finally retired in 1995, having been awarded an MBE the previous year for his endeavours. Tom Ridgman arrived from the University of Warwick with a 20-year career in the automotive industry behind him and took over as Course Director in 1996. In 2004, still facing funding challenges and after a thorough review of the options, the course was renamed again – Industrial Systems, Manufacture and Management (ISMM) – and became an MPhil. It was reduced to an intensive nine months, concluding with a major dissertation. This resulted in an immediate increase in student numbers and the course today, under the stewardship of Simon Pattinson, is oversubscribed by a factor of five, and attracts candidates of an exceptionally high calibre from all over the world.

Recent ISMM students on an overseas study tour

A new course for undergraduates: Production Engineering Tripos

In the 1950s and 60s an undergraduate degree in engineering at Cambridge was all about science and mathematics – management was very much the poor relation. David Newland, who went on to be Head of Department between 1996 and 2002, recalls that as an undergraduate in the 1950s there were just two lectures a week on management, timetabled for Saturday mornings, “which was when most people played sport and, in any case, there was a perception that you could just waffle your way through the exam questions.” By the 1970s, Britain's manufacturers were seeing their share of global export markets continue to decline and were facing an array of domestic challenges, not least in the area of labour relations.

Governments continued to pursue industrial policies and announced that the University Grants Committee would consider applications for a four-year engineering degree course rather than the conventional three years, as long as the focus was on preparing graduates for industry rather than research. The Department of Engineering responded with a proposal, which was accepted, to establish the Production Engineering Tripos (PET). This was a first for Cambridge: it allowed engineering students to specialise for their last two years in learning about manufacturing from both an engineering and a management perspective. The intention was to equip these very bright students with the theoretical and practical knowledge and ability to solve real industrial problems – and the skills and experience to hold their own in a factory setting.

Mike Gregory, who had been recruited in 1975 by Mike Sharman to work on ACPMM, moved across to set up the new PET course. In 1988 PET changed its name to Manufacturing Engineering Tripos (MET) to reflect the breadth of its approach. From the early days of John Reddaway's short courses there had been a recognition that manufacturing was concerned with much more than just ‘production’ and encompassed a range of activities which included understanding markets and technologies, product and process design and performance, supply chain management and service delivery.

Mike Gregory and MET students on the overseas research project, 1988.

By 1997 Mike, as we shall see, was increasingly busy and passed the running of the course on to Ken Platts. Ken steered MET through its first teaching quality assessment before handing it over first to Jim Platts and then to Claire Barlow, who ran it successfully for many years. Today's MET students, like ‘ISMMs’, are very much sought after and the course has produced a string of distinguished alumni who have launched successful start-ups, transformed existing manufacturing organisations, developed new technologies and delivered a wide range of new products and services around the world.

THE 1980s AND 90s

Research and practice go hand in hand

During the 1980s and 1990s UK manufacturing continued to shrink as a proportion of national output. But if manufacturing in the UK was in decline, it was proliferating in both scale and complexity elsewhere. Japan, in particular, was combining automation with innovative working practices and was achieving spectacular results both in terms of quality and productivity. Manufacturers of all nationalities were going global, building new factories in developing countries, giving them access both to rapidly growing new markets and to cheaper sources of labour. Now manufacturers were in the business of managing interconnected global production networks and taking an even broader view of their role – subcontracting parts of their operation to other businesses.

While large companies were becoming increasingly international, entrepreneurship was thriving close to home. The ‘Cambridge Phenomenon’ – a cluster of technology, life sciences and service-based start-ups – was underway and beginning to attract the attention of researchers.

When Colin Andrew was appointed as Professor of Mechanics in 1986, the name of the chair, at his request, was changed to Manufacturing Engineering. This signalled a new direction for the Department and a growing recognition that manufacturing was an important subject for academic engagement. Around the same time, Mike Gregory admitted to harbouring an ambition to establish a manufacturing institute. Colin was sympathetic to the idea, but counselled that a convincing academic track record was a prerequisite for such a task. With characteristic energy, Mike took up the challenge and set about developing a set of research activities which would reflect the broad definition of manufacturing that was already informing both undergraduate and postgraduate teaching.

Ten years later the foundations were in place. In 1994, on Colin Andrew's retirement, Mike was appointed as Professor of Manufacturing and Head of a new Manufacturing and Management Division within the Department of Engineering. An embryonic Manufacturing Systems Research Group was beginning to make a name for itself. Had James Stuart been around, he would have recognised a fellow unstoppable force.

Management research

Following a series of industrial and academic consultations in 1985 and 1986 an EPSRC Research Grant, Manufacturing Audit, was won. It explored how manufacturing strategies might be understood and designed in a business context. The recruitment of Ken Platts from TI's research labs in 1987, and his pursuit of the project as a PhD topic, resulted in a sharper academic focus and the publication of a workbook on behalf of the Department for Trade and Industry, Competitive Manufacturing: a practical approach to the development of manufacturing strategy.

Ken's appointment was important in a number of ways. It established the precedent for bringing in people with industrial experience to research posts and it embedded the principle that manufacturing research at Cambridge should be useful for industry, both in its subject matter and in its outputs. The workbook became the blueprint for a distinctive way of working. For each major research project, a book would be produced that would give managers working in industry a set of tools and approaches they could apply themselves. The fact that this first attempt went on to sell in the region of 10,000 copies was also helpful in establishing Cambridge's credentials.

Ken's work demonstrated the potential for taking an ‘action research’ approach to management. In other words, instead of relying on surveys and case studies, important though these were, the researchers would take their theoretical models into companies and test them in real-life situations. This strand of research led to the Centre for Strategy and Performance and established an approach that would be widely adopted across the IfM. Ken's early work also attracted funding from the Engineering and Physical Sciences Research Council. This large, rolling grant enabled the recruitment of additional researchers, including one Andy Neely, and established Cambridge as a serious player in the field of manufacturing strategy and performance measurement.

The next key research appointment was of David Probert in 1992, another ACPMM alumnus who, like Ken, came from industry. Building on the foundations laid by Mike and Ken in manufacturing strategy, David identified and focused on what was becoming an increasingly common conundrum: whether a manufacturer should make a product or part itself, or outsource it to a supplier. David's work in this area gained immediate traction with companies and his framework was adopted by Rolls-Royce amongst others. This led directly to EPSRC-funded work on technology management which has since developed into a highly successful and wide-ranging research programme. A principal focus has been on creating robust technology management systems to help companies turn new ideas into successful products and services. This work coalesced around five key processes: how to identify, select, acquire, exploit and protect new technologies. Strength in this area was bolstered by the addition of James Moultrie's expertise in industrial design and new product development, and, more recently, by the arrival of Frank Tietze with his research interest in innovation and intellectual property. Research into widely applicable business management tools has also emerged as a fruitful area of investigation, with Rob Phaal establishing the IfM as a centre of expertise in roadmapping.

Much of this research activity has been most applicable to large and mid-size companies but there has also been significant interest in more entrepreneurial technology-based activities, not least those taking place in the ‘Cambridge Cluster’, and in the challenges inherent in trying to commercialise new technologies. This work was pioneered by Elizabeth Garnsey in the 1980s and is continued today by Tim Minshall and his Technology Enterprise Group.

In 1994 Yongjiang Shi joined this small band of researchers to start his PhD on international manufacturing networks. This was the beginning of a whole new research strand which initially focused on ‘manufacturing footprint’. Its groundbreaking work in this area led to a major collaboration with Caterpillar and the IfM's Industry Links Unit (more of which later) and the development of a set of approaches that would help multinational companies ‘make the right things in the right places’. As international manufacturing has become increasingly complex and dispersed, the research, under the leadership of Jag Srai, has broadened to encompass end-to-end supply chains, designing global value networks and creating more resilient and sustainable networks. As with the early work on manufacturing footprint, this new research is carried out in partnership with industrial collaborators.

Technology research

Significant progress had been made in management and operations research when Duncan McFarlane joined the fledgling Division in 1995, bringing his expertise in industrial automation and adding an important technical dimension to the team. Duncan went on to establish the Cambridge Auto-ID Lab, one of a group of seven labs worldwide, leading work on the tracking and tracing of objects within the supply chain using RFID. It was this group that coined the phrase the ‘internet of things’ and has gone on to lead research in this area. Duncan's team subsequently expanded to encompass a wider range of interests, looking at how smart systems and smart data both within factories and across supply chains can be used to create more intelligent products and services. Ajith Parlikad's work on asset management has become a key part of this research programme and is also integral to the innovative work Cambridge's Centre for Smart Infrastructure and Construction is doing to improve the UK's infrastructure and built environment.

Installing robots in Mill Lane

Production processes were clearly an important topic for a manufacturing research programme and a new group drawing on work from across the Division was set up to address it in the late 1990s. In 2001, GKN funded a new chair in Manufacturing Engineering to which Ian Hutchings was appointed. Ian came from the Department of Materials Science and Metallurgy and had an international reputation for his work in tribology. He further developed the Production Processes Group, which brought together a number of research activities including Claire Barlow's work on developing more sustainable processes. In 2005, Ian set up the Inkjet Research Centre with EPSRC funding to work with a group of UK companies, including a number in the local Cambridge cluster, to carry out research both into the science behind this important technology and its use as a production process.

In 2003, Bill O'Neill had joined the IfM from the University of Liverpool, bringing with him his EPSRC Innovative Manufacturing Research Centre (IMRC) in laser-based micro-engineering. This became the Centre for Industrial Photonics which is now, with Cranfield University, home to the EPSRC Centre for Innovative Manufacturing in Ultra Precision and the EPSRC Centre for Doctoral Training in Ultra Precision. Both the Distributed Information and Automation Laboratory and the Centre for Industrial Photonics have been able to commercialise their intellectual property through spin-outs, the former setting up RedBite, a ‘track and trace’ solutions company, and the latter Laser Fusion Technologies, which uses laser fusion cold spray technology for a wide range of energy, manufacturing and aerospace applications.

A new identity

Mike's ambition to create a manufacturing institute finally came to fruition in 1998 when an alliance was forged with the Foundation for Manufacturing and Industry (the FM&I). This was an organisation set up to help companies understand how economic and policy considerations would affect their businesses and to enhance the public profile of manufacturing in the UK. It brought with it a large network of industrial partners and complemented the Division's now considerable strength and breadth in manufacturing and management research and its embryonic Industry Links Unit (see below). The Institute for Manufacturing was born, embedded in the Engineering Department's Manufacturing and Management Division but with a distinct character and set of capabilities which enabled it to address the challenges manufacturers were facing – and the policy context in which they were operating.

Policy research

One of Mike's aspirations for the new Institute was to use its manufacturing expertise – both strategic and technical – to support government thinking and to raise awareness of the continued importance of manufacturing in the context of an increasingly service-oriented economy. The merger with the FM&I added an economics and policy dimension to the IfM. This would develop into an important research strand asking the fundamental question: why are some countries better than others at translating scientific and engineering research into new industries and economic prosperity? The IfM's policy research team, founded by Finbarr Livesey and today led by Eoin O'Sullivan, is very actively engaged with the policy community in addressing these questions (see page 6).

As with all IfM undertakings, the intention was that research in this area should prove useful. It is based, therefore, on practical engagement with policymakers to understand their needs and provide outputs which support them in their decision-making. In 2003, Mike also established the Manufacturing Professors' Forum, an annual event which brings together the UK's leading manufacturing academics, industrialists and policymakers to develop a shared understanding of how to create the conditions in which UK manufacturing can flourish.

Putting research into practice

The notion that the research carried out at the IfM should be of real value to its industrial and governmental collaborators was enshrined in the creation of an Industry Links Unit (ILU), which had been set up in 1997, a year before the IfM came into being. At that time, stimulating fruitful collaborations between universities and industries was not a priority for public funding. The Gatsby Charitable Foundation, which had previously played a critical part in sustaining ACPMM through tricky financial times, believed that fostering such interactions was key to developing long-term economic growth – and that the proposed new unit could have a useful part to play in this regard. It provided initial funding for the ILU which allowed it to develop the three main strands of activity designed to facilitate the transfer of knowledge: education, consultancy and publications. Gatsby also encouraged the ILU to put itself on a clear commercial footing by setting up a separate, University-owned company (Cambridge Manufacturing Industry Links or CMIL) through which it could generate income from the ILU's activities to fund future research.

CMIL was successfully nurtured through its early years first by John Lucas and then by Paul Christodoulou. In 2003, Peter Templeton was recruited as Chief Executive and by 2009 the range and scale of its activities had grown to such an extent that the decision was taken to merge ILU and CMIL into IfM Education and Consultancy Services Limited. This created a clearer organisational structure and a name that ‘does what it says on the tin’.

Education services

CMIL aimed to transfer knowledge and skills to people working in industry through a variety of courses, some of which were one- or two-day practical workshops while others were longer programmes such as the Manufacturing Leaders' Programme, a two-year course for talented mid-career engineers and technologists who had the potential to move into more strategic roles in industry. In 2006, CMIL set up an MSc in Industrial Innovation, Education and Management for the University of Trinidad and Tobago which ran very successfully until 2013 and demonstrated a capability for exporting IfM educational practice. Creating customised courses for very large companies was – and continues to be – an important activity.

Consultancy Services

By appointing ‘industrial fellows’, many of them alumni of ACPMM and MET, CMIL was able to establish a consultancy arm which could disseminate and apply the IfM's research outputs to companies of all sizes, from multinationals to start-ups, as well as to national and regional governments. Initially, much of the focus was on small and medium-sized manufacturers which, according to former Chairman and CEO of Jaguar Land Rover and longstanding friend and advisor to the IfM, Bob Dover, had been largely neglected by academics. The intention was to give an academic rigour to the decisions the companies were taking, underpinned by research from the Centre for Strategy and Performance. This led to the development of ECS's ‘prioritisation’ tool, which has now been used with more than 750 companies, and its ‘fast-start’ approach to business strategy development.

The consultancy programme has grown steadily in recent years, delivering projects which have had a real impact on the organisations concerned and the wider manufacturing environment. IfM ECS, for example, has facilitated many of the roadmaps which define the vision and implementation plans for new technologies in the UK, such as synthetic biology, robotics and autonomous systems and quantum technologies. In 2012, it was commissioned by the Technology Strategy Board (now Innovate UK) to carry out a landscaping exercise looking at opportunities for high value manufacturing across the UK. It is currently engaged in ‘refreshing’ the landscape to establish clear priorities for the government and, in particular, to identify areas where investment in manufacturing capabilities can be maximised by co-ordinating the efforts of delivery agencies.

IfM ECS also carries out a wide range of research-based consultancy activities with companies, including major projects with multinationals to redesign their production networks or end-to-end supply chains. It works with companies of all shapes and sizes to align their technology and business strategies and help them turn new technologies into successful products or services.

2000s AND 2010s

Rapid expansion &ndash and a new home

Since 2000, the manufacturing landscape has changed very rapidly. Disruptive technologies and new business models present threats and opportunities which industry and governments need to understand, and act upon. An increasingly pressing concern is how we can continue to satisfy the world's appetite for products and services without destroying the planet in the process.

The proposed position of the new building on the West Cambridge site

As we have already seen, research, education and practice at the IfM were expanding at speed as we entered the new millennium. In 2001 the IfM was awarded a major grant and became home to one of the EPSRC's flagship Innovative Manufacturing Research Centres which, when joined with Bill O'Neill's IMRC in 2003, created an organisation of significant size and scope. However, it was operating out of a rather ramshackle set of offices and laboratories in Mill Lane in the centre of Cambridge and this was becoming a limiting factor, to the extent that the new photonics team was exiled to the Science Park.

A fundraising campaign raised £15 million from a number of very generous benefactors, including Alan Reece through the Reece Foundation, and the Gatsby Charitable Foundation, which was enough to build the IfM a new home. In 2009, it moved to its current purpose-built premises on the West Cambridge site. This was a hugely significant development, not only from the perspective of staff comfort and morale. It meant the IfM could host a whole range of events and activities which were useful in themselves but also gave more and more people a glimpse of the work going on there and led to further interest in research collaborations and consultancy projects.

19 November 2009: the Duke of Edinburgh unveils the plaque at the opening of the new building, applauded by Dame Alison Richard, the Vice-Chancellor of the University of Cambridge at the time.

The new building also enabled further expansion of the research programme, through increased office space and laboratory facilities. In 2010, Professor Andy Neely returned to the IfM from Cranfield – having worked with Ken Platts on performance measurement in the 1990s – to found the Cambridge Service Alliance, which brings together academics and multinational companies to address the challenge an organisation faces when moving from being a maker of products to a provider of services.

A cross-disciplinary Sustainable Manufacturing Group had been operating at the IfM since the late 1990s and developing sustainable industrial practice has been a common thread running through the IfM's various research programmes. In 2011 this was given a significant boost when the EPSRC Centre for Innovative Manufacturing in Industrial Sustainability, led by Steve Evans, was established within the IfM. This is a collaboration between four universities (Cambridge, Cranfield, Imperial College London and Loughborough), with a membership programme to ensure manufacturing businesses both help set the research agenda and actively participate in its projects.

Understanding business models is at the heart of many of the IfM's research activities: how can a company add services to the products it sells, for example, or learn to operate in a more sustainable way? What impact will new technologies such as 3D printing have on both established firms and new market entrants? How should businesses redesign their operations networks in response to disruptive technologies? Chander Velu has set up a new research initiative which takes a management and economics approach to business model innovation and aims to bring together different perspectives from across the IfM and key UK and international universities to establish a co-ordinated research agenda.

More lab space has allowed the IfM to extend its science and technology research interests, recently acquiring multidisciplinary teams looking at how to manufacture new materials at scale, such as carbon nanotubes (see page 9) and biosensors, led by Michaël De Volder and Ronan Daly respectively. By working with colleagues with policy, management and operations expertise, these teams are able to address the scientific and technological challenges within the broader context of the manufacturing value chain in order to understand the risk factors early on and maximise the chances of successful commercialisation.

IfM common room &ndash a space designed to encourage networking and collaboration.

IfM ECS has continued to expand the range of services it offers. For example, it is currently running a bespoke executive and professional development programme for Atos (see page 27) and is actively expanding its portfolio of open courses and workshops to reflect new research emerging from the IfM's research centres. Similarly, the number of tools and techniques IfM ECS has at its disposal to support industry and government through consultancy is growing to encompass activities such as product design and servitization.

In 2010 IfM ECS took on the management of ideaSpace, an innovation hub in Cambridge which provides flexible office space and networking opportunities for entrepreneurs and innovators looking to start up new, high impact enterprises. As well as helping to create successful new businesses and economic value, ideaSpace also works with governments, agencies and universities to develop policies, strategies and programmes which support a thriving start-up sector.

Taking stock

Manufacturing research, education and practice at Cambridge have come a long way in the last 50 years but they still remain true to the vision of Hawthorne and Reddaway: manufacturing is about much more than shaping materials. To understand the complexities of modern industrial systems with their engineering, managerial and economic dimensions you need to be fully engaged with the people and companies that do it ‘for real’. The research programme here is now extensive, covering the full spectrum of manufacturing activities. This year the University of Cambridge as a whole received more EPSRC funding for manufacturing research than any other UK university. IfM has an important role to play not only in doing its share of that research but in facilitating manufacturing research across the University.

Education is thriving. The ISMM and MET courses go from strength to strength and this year we have more than 75 students doing PhDs or research MPhils.

IfM ECS continues to grow, putting IfM research into practice whether redesigning multinational companies' operations networks, helping to develop robust innovation and technology strategies and systems, or delivering executive and professional development programmes and open courses.

Using the scanning electron microscope in the Centre for Industrial Photonics

Looking to the future

So where will the next 50 years take the IfM? Our strong sense of purpose will not change – we remain committed to making a difference to the world by improving the performance and sustainability of manufacturing. We will continue to create knowledge, insights and technologies which have real value to new and established manufacturing industries and to the associated policy community. And we will continue to ensure that our knowledge has an impact, through our education and consultancy activities.

James Moultrie inspiring recent MET students

But the IfM is fundamentally about innovation. So while we will carry on doing what we do best, we will also look for opportunities to do things differently. We have ambitious plans for the future which include the possible development of a ‘scale-up centre’, a physical space devoted to supporting the transition of ideas and concepts from lab-based prototypes into scalable industrial applications. James Stuart would have approved of the energy and determination that has gone into creating the IfM as we know it today and his pioneering spirit will continue to inspire us. This way we hope to ensure that the next 50 years are even more productive and enjoyable than the last 50.

This article was written by Sarah Fell based on interviews conducted by IfM doctoral students Chara Makri, Katharina Greve and Kirsten Van Fossen with members of staff past and present and with long-standing friends of IfM.

Masagana 99

The Masagana 99 Program was launched in 1973 as a Program of Survival, first to address the acute food shortages and later to increase rice production. The target was to achieve a yield of 99 cavans (or 4.4 tons) of unmilled rice per hectare. Masagana 99 was anchored on two service provisions – a credit program and a transfer of technology. Masagana 99 was an innovative supervised credit program, the first of its kind at the time. To emancipate farmers from usury and from the onerous conditions banks set in extending loans to them, the government guaranteed 85% of all losses on Masagana 99 loans. This guarantee induced rural banks to forgo their traditional practice of requiring collateral. Even the rediscounting policy was revamped to make loans easier to obtain and cheaper for the farmer-borrower. Thereafter, some 420 rural banks and 102 branches of the Philippine National Bank agreed to provide loans on these conditions.

Loan applications were processed quickly and on the spot. Bank employees, together with farm technicians, processed the farm plan and budget for farmers’ seldas[4] or cooperatives. An individual farmer with collateral to offer could also obtain credit. The maximum allowable loan reached the equivalent of US$100 per hectare at one percent (1%) monthly interest. Once approved, many of the loans were delivered to the farm sites by foot, motorcycle, jeep, and even pump boat. The Philippine National Bank called this program “Bank on Wheels”. Part of the loan was given in cash to cover labor costs, while the balance was given in purchase orders which could be exchanged for fertilizers and pesticides at participating stores.
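As a rough illustration of the loan terms described above – the six-month cropping season and simple (non-compounding) interest are my assumptions for the sketch, not documented program terms:

```python
# Hypothetical Masagana 99 loan arithmetic: $100/hectare ceiling at 1% per month.
def loan_cost(principal_usd=100.0, monthly_rate=0.01, months=6):
    """Total due after one cropping season, assuming simple interest."""
    interest = principal_usd * monthly_rate * months
    return principal_usd + interest

# A one-hectare loan at the ceiling, repaid after an assumed 6-month season:
total_due = loan_cost()  # 100 + 100 * 0.01 * 6 = 106.0
```

Even under these assumptions the financing cost is modest (about 6% per season), which underlines how far below the usurious informal rates of the time the program priced its credit.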

PNB’s Bank on Wheels Program designed to supplement the Masagana 99 Program by way of providing loans and even delivering them to the farmers in the fields

If the credit program was innovative, so too was the transfer of technology. Farmers were now introduced to new rice varieties called HYVs (high-yield varieties) which were radically different from the ones they previously planted. These varieties required extensive preparation and use of fertilizers and pesticides so that the farmer, with the aid of farm technicians, would have to follow the method specified by the Program.

To ensure coordination and cooperation of all farm-related initiatives, local chief executives were drawn into the program. Governors were designated chair of the Provincial Action Committees while mayors were made heads of Municipal Action Teams. Both officials were responsible for coordinating various agencies – banks, millers and traders, farm input dealers, local radio networks, DA, DAR, and DLGCD – at their respective levels.

In its first year, Masagana 99 was a huge success. Because of the prevailing political conditions, implementing actors performed their mandated tasks, however grudgingly. Moreover, the country generally enjoyed good weather in 1974, so losses to agriculture were minimal, unlike in the previous three years. Furthermore, as fertilizer prices in 1974 increased sharply due to the turmoil in the Middle East and the dictates of the Organization of Petroleum Exporting Countries (OPEC), the government cushioned the impact through subsidies amounting to about 21% of the retail price. Lastly, the government provided a guaranteed farm gate price of US$6 per sack, relieving farmers of severe losses when market prices fell during harvest time. As far as attaining self-sufficiency in rice is concerned, Masagana 99 was a huge success: after only two years of implementation, the Philippines was able to attain self-sufficiency in 1976[5] and may have exported rice[6].

Table 3: Status of Masagana 99 Credit Program after Expiration (30 April 1979)

Phase Term # of Borrowers Area (has.) Loans Granted (in M pesos) Repayment Rate (in %)
I May – October 1973 401,461 620,922 369.5
II November ’73 – April ‘74 236,115 355,387 230.7 94
III May – October 1974 529,161 866,351 716.2 94
IV November ’74 – April ‘75 354,901 593,609 572.1 84
V May – October 1975 301,879 558,330 572.9 82
VI November ’75 – April ‘76 151,862 255,882 255.9 76
VII May – October 1976 144,265 244,477 274.3 81
VIII November ’76 – April ‘77 89,623 148,763 164.3 80
IX May – October 1977 131,842 222,622 250.5 81
X November ’77 – April ‘78 92,476 155,095 176.1 74
XI May – October 1978 116,624 202,606 236.9 80
XII November ’78 – April ‘79 85,401 157,521 158.0 68
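For readers who want to work with these figures, Table 3 can be tallied programmatically. This is only a sketch transcribing the rows above (Phase I's repayment rate is blank in the source, so it is `None` here):

```python
# Each tuple: (phase, borrowers, area_ha, loans_granted_M_pesos, repayment_pct)
phases = [
    ("I",    401_461, 620_922, 369.5, None),
    ("II",   236_115, 355_387, 230.7, 94),
    ("III",  529_161, 866_351, 716.2, 94),
    ("IV",   354_901, 593_609, 572.1, 84),
    ("V",    301_879, 558_330, 572.9, 82),
    ("VI",   151_862, 255_882, 255.9, 76),
    ("VII",  144_265, 244_477, 274.3, 81),
    ("VIII",  89_623, 148_763, 164.3, 80),
    ("IX",   131_842, 222_622, 250.5, 81),
    ("X",     92_476, 155_095, 176.1, 74),
    ("XI",   116_624, 202_606, 236.9, 80),
    ("XII",   85_401, 157_521, 158.0, 68),
]

total_loans = sum(p[3] for p in phases)            # about 3,977 M pesos over 12 phases
peak_phase = max(phases, key=lambda p: p[1])[0]    # lending peaked in Phase III
first_rate, last_rate = phases[1][4], phases[-1][4]  # repayment slid from 94% to 68%
```

The tally makes the program's arc visible at a glance: participation peaked in the wet season of 1974 and both enrollment and repayment discipline declined steadily thereafter.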

GM: How The Giant Lost Its Voice – An Insider’s Perspective

[first posted 5/21/2013. CraigInNC is a former GM employee, and has been sharing the benefits of his insider’s knowledge and perspective since he arrived here at CC. In this post, which was originally a comment Craig left, he shares his thoughts on the external and internal forces at work during the crucial era that started with the OPEC Oil Embargo, and which he identifies as a key turning point at GM. The decline of GM is the biggest automotive story just about ever, and there obviously are many takes and perspectives to it. Feel free to agree or disagree, but please keep the tone civil. – PN]

GM’s downsized 1985 FWD C-Body cars (Cadillac DeVille and Fleetwood, Olds 98, Buick Electra) and 1986 E/K cars (Eldorado/Seville) represent one of the key turning points at GM. The situation with these all-new cars was not just confined to those models only, but was part of a broader set of directions that GM decided to take ten years before they hit the street. To say that OPEC had influence on this would be an understatement, but that affected all cars, most especially the domestics who built big cars. At that time, most of the imports were very small, with the exception of some Mercedes models.

As much as we blame Roger Smith for all of the troubles of the 1980s, he was only marginally influential in the process that got all of this going. He probably could have exerted more pressure to tweak models, but the die was already cast before he assumed the chairmanship in the fall of 1980. The situation as I saw it was like this:

Of course the OPEC Oil Embargo of 1973-1974 changed everything – it burst the bubble for most Americans and made us realize that the oil that begot gasoline was a finite natural resource, and a resource that was not entirely controlled by the United States. We all pretty much know and understand that part of history so nothing more needs to be said.

CAFE was enacted in 1975, to take effect for MY1978 passenger cars and MY1979 light trucks. GM was most affected by this legislation as they were the master builder of large cars. Chrysler’s fuel economy was probably slightly lower than GM’s during the 1970s, but given Chrysler’s dire financial situation by the late 1970s, the focus at Chrysler was keeping the company alive rather than the threat of government action on CAFE. Also, being the weakest of the Big Three with about a 15% market share, Chrysler was less of a target for government action than GM, which always lived under the threat of anti-trust, much like AT&T and IBM.
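Why was the master builder of large cars hit hardest? A CAFE fleet average is a production-weighted harmonic mean of each model's fuel economy, so high-volume, low-mpg models drag the average down disproportionately. A small sketch – the fleets, model mixes, and mpg figures below are hypothetical illustrations, not GM data:

```python
# CAFE fleet average: production-weighted harmonic mean of model mpg.
def cafe(fleet):
    """fleet: list of (units_produced, mpg) pairs."""
    total_units = sum(units for units, _ in fleet)
    return total_units / sum(units / mpg for units, mpg in fleet)

big_car_fleet = [(300_000, 14), (100_000, 28)]    # mostly large cars
small_car_fleet = [(100_000, 14), (300_000, 28)]  # mostly small cars

# Same two models, reversed volumes: the big-car fleet averages 16.0 mpg,
# the small-car fleet 22.4 mpg - the harmonic mean punishes low-mpg volume.
```

This is why a maker whose volume sat in full-size cars could not comply by adding a few economy models at the margin; the mix itself had to shift.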

As I commented in the article with the Olds Firenza, GM not meeting CAFE standards would likely have resulted in severe consequences. At that time, Washington wrote legislation and benchmarked it against GM. AT&T gave you telephones, RCA gave you TVs, NBC/ABC/CBS gave you TV programs, Kodak sold you film for your camera, and IBM sold you computers. That was it in a nutshell. But as we can see, each and every one of those companies no longer exists in historical form. Some, like AT&T, were forcibly broken up; others, like IBM and GM, endured a decline and recovered in new form; and of course we know how TV went from 3 channels to 3,851 and counting…

Ed Cole (left) retired as GM President in 1974. He was one of the last truly influential GM Presidents who was considered a “car guy.” He started his career working in an auto parts store and ended it as President of Checker before he tragically died. Pete Estes (right) followed Cole and was an operational guy; under him things got rolling, and despite his legendary career with Oldsmobile, Pontiac, and Chevrolet, he did not have the swagger of Ed Cole and others. But he was much loved, and a man of extreme innovation. While others were more big-picture in their pronouncements, Estes understood all the details. As young GMI students, we were all in awe of the big names like Mitchell and Cole, and even Iacocca at Ford, because they were celebrities. But for those who did not possess extreme extroversion, there was a guy like Pete Estes, whose influence was felt less in what you saw on the outside of a car than in what you felt on the inside driving it. He was a real engineer’s man, and it is quite apropos that there is a near shrine to him on display in the Scharchburg Archives at Kettering University.

Roger Smith was a numbers man; his tenure before becoming Chairman and CEO was as Executive VP of public relations, governmental affairs, and finance – things we considered necessary for doing business, but never expected anyone to be promoted to the head of the company from. It was a very unusual situation and one that many had a difficult time adapting to. We were used to getting our ideas approved or disapproved by guys who had walked in our shoes beforehand. To discuss costs before results was tantamount to squashing creativity and productivity.

Part of the genius of GM for so many years was that it was really a collective organization of mini brain trusts, where ideas and energy flowed up to the top from below. Things like the genesis of the turbo Buick V6 originating from a Boy Scout project are testament to this. That would have been nearly impossible after the 1980s; there simply was not that level of flow of communication. Coming from a non-operational background, Smith felt no connection to the engineering crew that actually made everything that we sold. He had no personal affinity for any of them and often did not even know all but the most senior staff of the divisions at the time he became CEO. Thus, he felt unencumbered to embark on various projects of his liking without pangs of guilt.

Bill Mitchell retired in 1977. Mitchell exerted influence over corporate management unlike anything seen before or since. Pretty much whatever Mitchell wanted, he got. Unless it was mandated by the government, no one told Mitchell what to do. Unfortunately, Irv Rybicki, who replaced Mitchell, had neither the spine nor the influence that Mitchell had. By the time that Rybicki retired in 1986, he was basically designing cars that he was told to design, and not the other way around.

By the time that Chuck Jordan assumed the design reins, things were already too far along to correct in any concrete fashion, but had Jordan followed Mitchell, I personally feel things would have been a lot better. Jordan was a real fan of Cadillac, and his influence was most felt with the 1992 E/K designs that are widely considered smart looking. Jordan was also influential in convincing management to begin upsizing cars again and was largely responsible for why cars grew from 1988 on. Of course Mitchell is second to Jesus Christ in automotive styling at GM, but Jordan had a flair for presence second only to that. The current GM design chief, Ed Welburn, is very talented himself, but Jordan was the last of the old guys with the critical eye.

In the aftermath of OPEC, with the coming of CAFE and changing customer tastes, the decision was made within GM in 1975, on a corporate level, to go full speed into FWD and to maximize space efficiency. The belief was, rightly, that the days as we knew them were over. The paradigms that had driven automotive design and development since the first Oldsmobile no longer applied. Up until the late 1960s, automakers built whatever they wanted, totally unencumbered by anything, whether government regulation or world events. Europe and Japan were still digging out from the ashes of WWII. Detroit-built cars represented everything that we thought of about the United States.

I remember sitting in an auditorium at GMI when a GM executive gave a speech extolling to us the virtues of GM and how it fits in with the rest of the country. You know “what’s good for GM is good for the US, etc…” The car was the ultimate expression of the freedom that built this country. Manifest destiny, live free or die, and the power of the individual. Soviet citizens drove Ladas and East Germans drove Trabants. We drove cars that represented the country’s industrial might, and they were styled accordingly.

Then it all changed. A bunch of small men, dressed in white shirts with skinny black ties and coke-bottle glasses, came in and told us it was all a dream. Well, not really, but it felt like that. Suddenly we had insurance companies breathing down our backs, the EPA looking for trouble, NHTSA telling us that people were crazy, and OPEC – strange-sounding men with permanent tans, dressed in bathrobes – showing us that they had more control over our behavior than our own elected government. It was very surreal after a while.

After 1978, everything seemed to be a giant scramble: if it wasn’t CAFE, it was cash flow, or something else. Nothing felt like it flowed freely from the brain to the garage. Everything felt like a compromise; success meant you achieved as much as you could. Everything was a what-if. The days of building cars from dreams that Bill Mitchell had of cars coming out of the clouds in England were over.

Given all of these factors, the decision was made at the corporate level to take the direction of the company towards FWD and space efficiency. The second oil crisis (1980) and the two following years of uncertain energy prices and inflation only validated that. First came the X cars (Citation, etc.). Then came the J & A cars, and the rest followed as we know them. It was a total commitment, not just on the model level, but corporate wide. RWD was gone, done, finished for all but the most specialized models of passenger cars like the Corvette.

If all had gone according to plan there would have been NO RWD cars, except for a few, by MY1985. It was a paradigm shift unheard of in the automotive world, one that probably ranks third behind the invention of the self-starter and the automatic transmission in terms of what changed people’s perception of what a car was. Yes, the market had the VW Rabbit, which was small, efficient, and FWD with a transverse four-cylinder engine, but the Rabbit was a niche vehicle. It was purchased by people that needed a small car, and VW at that time did not produce anything that matched the center of the buying public, the big RWD car.

As much as we disparage the Citation, when it came out it really changed the thinking of both Detroit and the US buying public. Especially as time went on with the release of the J and A body cars, everyone knew where the market was going. Chrysler came out with the K cars. Had GM decided to make smaller RWD cars with longitudinal engines, it is very likely that FWD transverse cars would have remained the purview of imports and/or only smaller cars. To underestimate the impact would not do justice to the sea change that affected both the design culture inside Detroit and the minds of the US buying public. One only has to think of the Chevette and the Vega as ultra-small cars with ultra-conventional designs. While all these ‘deadly sins’ of the 1980s might have cost GM dearly, their influence begot space efficiency as a benchmark for car design that remains with us today.

The fulcrum of influence at GM shifted from the divisions to the executive level under Roger Smith. Nothing was more reflective of that than the ill-fated 1984 reorganization that largely demolished divisional autonomy. I suppose from a business perspective, the old model was not going to last forever. As we saw with the engine-mixing affairs of the 1970s, total vertical integration of the divisions was no longer cost effective in light of the continuing escalation of the cost of goods sold, regulation, and the costs associated with the corporate plan to move to FWD.

In the old days, cars were simple: they were RWD, mostly framed vehicles, V8s, carbureted, and large. Most of the budget in car development went to styling. Engines evolved incrementally, and bodies could be changed rather easily and inexpensively with a body-on-frame design. Everything was set up nicely. Even using identical frames and substructures, you could make a car look and feel completely different with relative ease. GM was the master at this. With FWD and unibody that was no longer possible, at least not as easily and inexpensively as in the past.

FWD costs money, a lot of money. Unibody designs cost more to engineer because they have to be designed as a package, not just as a body that can be dropped on an existing frame. And they cannot easily be made different. Hence all the cookie-cutter cars of the 1980s. The other automakers, especially the imports, only built one kind of car, so there was nothing for their models to look similar to – until the Japanese came out with their premium brands and many of those models started looking and feeling a lot more badge-engineered (although admittedly not to the degree of GM vehicles).

So when you had five divisions now having to build similar-sized FWD cars on unibody designs, you went from platforms that could be easily altered to fit each division’s styling themes and customer characteristics to platforms that were virtually impossible to make different. It was a bad situation that could not be easily rectified. Believe me, that idea was lost on NO ONE at the division level, and it pained many people. But when a company makes a corporate decision to take itself in one direction and invests what is the equivalent of the GDP of probably several states, directions can’t be changed easily. Given the predictions of gas prices going up and regulations continuing, FWD was here to stay and we had to make the best of it.

Given that GM went whole-heartedly into the FWD program, not only did the basic body structure change, but everything else changed with it. The X cars were among the first mass-market vehicles to have fuel injection standard. Real fuel injection, the kind that took cars into the modern age and lasted well into the 1990s. That was another incredible paradigm shift. Of course the Seville was the first big GM model to have a modern EFI system, but it was a niche model, handled only by Cadillac dealers who could train select personnel to service it. And that EFI system borrowed heavily from existing European systems.

The GM TBI system that debuted in 1980 set the standard for a basic but highly efficient throttle-body fuel injection system in production. While there were many engineering failures over the years, that TBI system was not one of them; it became a highly reliable, rock-solid design, along with the subsequently developed SFI that came out in the turbo Buicks for 1984 and again set the standard for fuel control in the industry. Until the recent adoption of direct injection systems, fuel injection systems were largely carbon copies of the original system that debuted in the 1984 Regals.

All of this was done on a massive scale, never before seen. The closest thing to a total re-engineering was the 1966 Toronado, and that was justified because it was sold by Oldsmobile and eventually Cadillac and Buick as premium cars. Now we were building inexpensive everyday cars for the masses that had more development in them than had been spent developing the atomic bomb in WWII. If you added up all the monies spent from the first dollar on the Citation to the last car converted from RWD to FWD and converted them to 2013 dollars, you could almost balance the federal budget. No kidding. It was at that level. It was overwhelming. It was nothing like anything anyone could have dreamed or imagined when they entered engineering school.

When we started college, we all expected to be building variations of RWD body-on-frame cars forever – ones styled like each division wanted them. Some cars like the Vette and the Toronado were different, but they were low-volume vehicles and had dedicated staffs. Little did I know that by the time I effectively retired after 41 years, we would be driving massively computerized FWD vehicles with space-age materials that could protect us from all but the most dire of situations.

And that really was where all the failures came from. Some of them, like the V8-6-4, were stopgaps, clearly introduced to bridge between old and new. Others were like the HT4100, an engine that turned out to be quite sound by the end but was rushed into production due to time and circumstance. So in a way, it was like mobilizing for war. The changes affected everything. Almost nothing was the same from 1975 to 1985. I am not sure a single automotive company on the face of the earth changed like that in a ten-year span. Maybe the Soviet bloc companies, but I suppose we should confine it to free-market countries.

Whenever you undertake massive change on that level, with a company that large, with that amount of influence in the industry, mistakes are going to happen. That does not absolve anyone of the effects, but it would have been difficult to imagine how it could have been totally perfected; so much was going on that we had our hands full just keeping everything moving.

So people ask, well, how did Honda or Mercedes manage to keep it together during this time and grow? Well, quite simply, they were a lot smaller, built fewer products, and were largely unaffected by the forces that affected the Big Three and GM in particular. Toyota built nothing of any particular size except for the Cressida, which was only a bit player in the market. Honda only sold Accords, Civics, and Preludes, all three of which were small vehicles unaffected by CAFE, so Honda as a corporation did not have to endure the wholesale change that the Big Three experienced after OPEC. They could quietly continue to devote their energies to developing their vehicles without radical changes.

When gas was in short supply and fuel economy was of paramount concern, people bought a lot of small imported cars, plus a lot of small domestic cars. But when those concerns subsided, we saw buyers return to more traditional buying patterns, if only for short periods. During the 1990s we had an extended period of prosperity and relatively low gas prices which, since by that time passenger cars were already fully redesigned and much smaller, drove SUV sales – the SUV being the spiritual successor to the traditional American car design. Gas went back up, people started buying smaller again, and the cycle has yo-yo’d around like that for some time.

So in a way, at least for the Japanese automakers building only small cars, when things began to change on the energy front they did not have to go to the market; the market came to them. They just sort of happened to be there, like the Mustang II was in 1974 – designed without any real regard to OPEC, but something that seemed so right for the moment. And it sold, partly because it was more manageable than the recent Mustangs, but often because it was just much more efficient. Same with the Vega: despite the troubles of the first couple of years, MY1974 was a banner year because it was a small, efficient car when people’s worlds were turned upside down.

In the 1980s, the biggest sin for Roger Smith was the money spent on extraneous projects unrelated to car design and build – things like EDS, Hughes Electronics, and buying robots to lick envelopes when money could have been spent refining product. It felt like the Federal government: billions of dollars flying everywhere, but no one really knew where it was going. In 1965, every dollar went into putting cars into people’s garages. Yes, Frigidaire built fridges and appliances, but they did so in part because they also made air conditioning for cars, and those product lines were profitable and did not drain corporate resources. I did not know until about eight years ago that a division of Hughes Electronics developed and introduced DIRECTV – yes, THAT DirecTV that competes with DishNetwork and TimeWarner for our television viewing. So all you GM haters with DirecTV, better switch fast! Well, actually you don’t have to, since it is a fully separate company (spun off in 2003), but just so you know…

By the time that Roger Smith retired in 1990 and Bob Stempel assumed the reins, things were a mess. Bob should have replaced Pete Estes in 1981, but he wasn’t at that point in the food chain at the time; like Pete, though, he was an operations guy. He knew how to get things done. He could not have reversed the push to FWD, but he would not have spent the money that Roger Smith did on everything else, and he might have made these cars the best vehicles ever produced, or at least much better than they were. By the time he got the keys, he was hamstrung. The company was bleeding money, nothing was selling, and he spent most of his time trying to right the ship. Unfortunately for him, he was out of the country for much of the 1980s managing Opel (which was making money hand over fist at the time, unlike today, when it is dying) and did not have influence over North American operations. But Stempel was a car guy and would have done well if he had had the resources to do so.

To tie this into something that Paul might appreciate: back in 2007, there was an article written over at TTAC.

While most of the (first) story documents various ills experienced by the owner and others, I feel he rightly points to the X car as the beginning of the end. Really, that date was June 21, 1975, when the executive committee approved the whole FWD X platform to begin with, but that is being precise.

It’s hard to say whether it was the decision to go FWD that began the decline itself or the ills that the car suffered as a product. Because we have to go back to the original premise of what made GM what it was and what made it great. All that changed with the decision to go FWD unibody. People bought GM cars because each division made something unique and not only was it unique, but at least with regards to imports, the only competition was from within the Big Three.

Before OPEC, no one really cared about imported vehicles except on the margins. Mercedes was chipping away at luxury sales, but unless they started building mass-market vehicles, they would have been confined to a small section of that market. As would VW and the other Europeans, who built small, quirky vehicles that catered to niche segments of the population that had specific needs or were just weird enough to not mind driving 55-hp VW buses that went nowhere fast. If OPEC had not happened, one of two things would have followed: imports would have remained nibbling at both ends of the extremes, or they would have been forced to introduce larger products that would have looked a lot more like old-fashioned American cars than what they were then building. As much as we talk about how much OPEC and CAFE affected the Big Three, just to play devil’s advocate I have often thought about what would have happened if the reverse were true and the government had passed a law mandating cars of a minimum size. GM would have gladly dropped the Vega, and chaos would have reigned in Tokyo. Not unlike how it reigned in Detroit for so long.

So the morals of all of this are many things: we could argue that Roger Smith wasted money that could have been spent on product, we could blame it all on OPEC for destroying the US business model, blame it on the UAW for extracting maximum benefits, or blame it on corporate decisions to go FWD. It’s impossible to really do that constructively. It was just so big. So much was going on. The real deadly sin was that it was all just overwhelming. Almost like a drug addiction.

Once the ball got rolling back in 1975, it blew up into this huge amount of change that was unprecedented in history. It got out of control, sadly, probably to the point that no one man could have stopped it. When you have a corporation the size that GM was at one time – bigger than probably half the countries in the United Nations – it was like nuclear fission. When the reactions start happening they are hard to control. It was like the meltdown in Chernobyl.

So while I look back at the 40+ years of my life, and think of all of this and history and my place in it, what could have been done differently, what I could have done differently, in the end I really do not have any answers. It would have been like trying to figure out how to run the world during WWII. I suppose in the end it had to all go away. The days of vertically integrated massive corporations with dominating market shares are over. Some will dominate for a short time, usually when a new product is introduced, but the days of GM, AT&T, IBM, Kodak and RCA are over. Gone, done. Globalization, technology, communication, whatever the factors are will never let such things happen again.

But it was a ride, a fun ride, a ride I never dreamed would turn out the way it did, but despite all of the bitterness I could have regarding everything, I probably would not have had it any other way.


There were too many overlapping brands (or, more specifically, too many distribution channels). That encouraged a culture of badge engineering and cannibalization, which eroded the value of the individual brands and which necessarily became more desperate as the company began to lose market share.

GM and Detroit generally were woefully unprepared for the OPEC crisis, which provided a sales opportunity to foreign competitors.

And the Japanese beat the pants off of them with total quality management and lean production. Consumers began to realize that the build quality and engineering just weren’t all that good.

Ford invented automotive mass production, and GM had invented the idea of competing based upon branding, styling and features. GM wasn’t expecting that business model to evolve anywhere beyond where they had taken it, and were unwilling to accept that some foreigners might be better at the game than they were. The greatest legacy cost was hubris, which fostered that inability to adapt to change.

This is an outstanding article!
Curbside is becoming the place to be!

Over a period of a decade, GM went from making cars that made Americans happy to making cars that didn’t. Somehow the biggest car company in the US decided that the future was thataway – and their market went thisaway instead. Somehow the biggest car company in the US didn’t see having both thisaway and thataway in their product portfolio as a possibility.

No one demanded that GM drop the cars Americans liked driving. No one told GM they no longer wanted a rear-drive, body-on-frame automobile. GM decided to stop making the cars Americans liked to drive. GM committed itself to a belief that the future of automobiles was not the automobiles that had made them the biggest car company in the world. It was as if the New Coke phenomenon had infiltrated GM.

GM was monolithic, and still thinks monolithically. It was so hide-bound in this that, regardless of the brand and the brand’s market, GM decided its one-size-fits-all front-wheel-drive unibody car of the future was going to fit every brand and market. So what is being described in this wonderful article was SELF-INFLICTED. OPEC didn’t do it. Technology didn’t do it. Labor didn’t do it. Because if those were really the reasons this was done, then why is GM no longer producing the same cookie-cutter front drivers today? It is 2013, and there is more diversity of drivetrains and manufacturing methodologies than there was in 1985. GM’s problems were self-inflicted, not forced upon them.

GM decided to reinvent themselves in a manner that only a monolithic, hide-bound, top-heavy corporation would – and blew it. GM wasted billions of dollars reinventing themselves when no one was forcing them to. H. Ross Perot accurately noted that GM could have BOUGHT Toyota for what they wasted during the Roger Smith years. However, the men at the top of the organization couldn’t understand how to make an organization as big as GM do a 180-degree backflip, even when it wasn’t necessary.

GM’s new cars sucked like an F5 tornado in a black hole. What we wanted in a GM car was no longer made. You wanted a rear-drive Park Avenue? Tough – you got a front driver that couldn’t be a Park Avenue even when it came with tufted crushed velour.

We wanted real GM cars. GM told us that their craptastic front drive unibody pseudomobiles were better. We knew what they knew at the time they said this – these cars were not real GM cars.

When the front drivers didn’t give us what we wanted in a GM car, we left GM.


By the 1990s Australia routinely accessed remote sensing data from the US Landsat and NOAA satellites, the French SPOT satellites, the European ERS-1 radar satellite, and the Japanese Geostationary Meteorological Satellite (GMS). Australia was now recognised internationally as a highly professional provider of ground support, and an innovative and effective user of data provided by other countries, particularly in the analysis and processing of raw data.

The ESA spacecraft Ulysses was launched in October 1990 on a mission to study the poles of the Sun in greater detail. To achieve this, it initially flew out to Jupiter and then swung back south, out of the ecliptic plane, to fly over the south pole of the Sun. During that time Tidbinbilla had an exclusive view of the spacecraft as the first details emerged. The spacecraft lasted for more than 12 years.

The Australian Space Research Institute (ASRI) came about in the early 1990s as the result of a merger between the AUSROC Launch Vehicle Development Group at Monash University in Melbourne and the Australian Space Engineering Research Association (ASERA). They carried out propulsion development engineering, were certified safety officers and launch officers for sounding rockets, and took over a large quantity of Zuni rockets, offering them as launch opportunities for payloads and conducting launch campaigns at Woomera twice a year up until 2010, when the ADF essentially prohibited non-military use of Woomera. Many university theses were completed thanks to ASRI and the Zunis, including the student supersonic projects in Queensland.

After haphazard hangouts as more of a space club, the National Space Society of Australia held the first Australian Space Development Conference in 1990 in Sydney with the financial support of GIO Reinsurance, OTC Australia, Baker & McKenzie, the Cape York Space Agency’s successor The Essington Group, Australian Airlines and American Airlines, and the Australian Space Office.

ASRI held its first national space conference in 1991, and the conference ran annually until 2009 – 19 in total.

The Magellan Mission to Venus spacecraft entered orbit August 1990 and then proceeded to map the surface of Venus in unprecedented detail through 1994. Tidbinbilla supported the mission.

CSIRO and Australian industry provided some design and component construction contributions to the Along Track Scanning Radiometer (ATSR)-1 and -2 instruments and the Advanced ATSR (AATSR) instrument. The ATSR series of instruments were jointly funded by the UK and Australian Governments, and were flown onboard the European Space Agency’s ERS-1 (ATSR-1, launched 1991) and -2 (ATSR-2, launched 1995) satellites. The Advanced ATSR instrument was launched onboard ESA’s ENVISAT satellite in 2002, and continued to function until 2012.

Optus acquired AUSSAT and its satellites when it became Australia’s new telecommunications carrier in January 1992. The communications satellite, Optus B1, was launched into orbit in 1992.

The Essington Group (formed to replace the Cape York Space Agency) ceased in 1992 and its place was taken by the Space Transportation System (STS) formed in 1992. STS planned to launch Proton-Ks from Darwin and Melville island in collaboration with Russia.

An Agreement between Australia and the United States concerning the Conduct of Scientific Balloon Flights for Civil Research Purposes was established in 1992.

In the same year a NASA shuttle flight carried Endeavour, an Australian ultraviolet space telescope, into orbit.

The 2nd Australian Space Development Conference was held in Sydney in October 1992 and included the establishment of Australian Space Industry Chamber of Commerce (which would in the 2010’s become the Space Industry Association of Australia).

Rocketplane Kistler was incorporated in 1993 in South Australia as a joint Australian-US venture with the aim of launching a two-stage reusable launch vehicle (RLV). It all went horribly wrong and ended expensively in 2001, with the company finally de-registering in 2007.

The Australian Government commissioned an expert panel review (the Curtis review) of the National Space Program in 1992. As a result, the Australian Space Council Act 1994 was passed, which created a space council. The council’s mandate was to report on matters affecting the application of space-related science, and to recommend a national space policy called the National Space Program to encourage the application of space-related science and technology by the public and private sector in Australia.

Lockridge Earth Station was built in 1993 and continues to support international and some domestic satellite services. It’s still staffed 24 hours a day in recognition of its key role as a Tracking, Telemetry & Control facility.

The 3rd Australian Space Development Conference was held in Sydney in 1994 and was used by the then Australian Space Office to launch its five-year plan for the Australian space industry.

UniSA’s DCG developed into the Institute for Telecommunications Research (ITR) in 1994. ITR is the largest university-based research organisation in the area of wireless communications in Australia and conducts its research in four main areas: satellite communications, high speed data communications, flexible radios and networks and computational and theoretical neuroscience. They developed satellite ground station modems used in ACRES and commercial ground stations. ITR also operates the ASTRA and S-Band antennas, used by ESA for ATV missions to ISS and the first Dragon missions by SpaceX.

In 1994 the Optus B3 communications satellite was launched into orbit to replace the failed Optus B2 satellite, which never reached orbit due to a launch vehicle failure. It is located at the 164°E orbital slot in inclined orbit with a footprint covering Australia and New Zealand. Optus B3 carries 16 transponders, 15 of them operating in the Ku-band and the remaining in the L-band with Ku-band feeder links.

In 1995, the Galileo probe entered Jupiter’s atmosphere. CDSCC was the prime tracking and communications station for this mission. The probe revealed that the chemical composition and structure of the atmosphere was not what was expected.

Seeing a niche in satellite data distribution, John Douglas founded the highly successful Apogee Imaging International, an Adelaide based Remote Sensing Company, in 1995. He travelled the world and directed projects in Africa, Asia, and Australia for the following 15 years.

The Oxford Falls Earth Station was established in 1995. The facility is Optus’ international gateway for voice, data and video services from international news gatherers as well as providing international communications for key Australian government departments and pay TV providers.

The Government abolished the Australian Space Office and the Australian Space Council, and terminated National Space Program funding in 1996. Several key people tried to morph it into the Australian Space Agency Office but this fell on deaf ears.

Along with ASICC, the NSSA called for the establishment of an Australian National Space Agency (ANSA) and, through the efforts of Philip Young, released the white paper “Space Australia” to the government. Nothing resulted from these petitions.

The Australian-born astronaut Dr Andy Thomas AO flew his first flight into space on Endeavour in the same year.

In 1996 CSIRO, on behalf of Australia, chaired the international Earth observation cooperative body, the Committee on Earth Observation Satellites (CEOS).

The Western Pacific Laser Tracking Network satellite, WESTPAC, owned by Canberra-based Electro Optics Systems Pty Ltd, was launched in 1998.

The Cooperative Research Centre for Satellite Systems was established in 1998, to investigate applications of small satellites for Australia.

In 1998 Melbourne hosted the 49th International Astronautical Congress – the first time this annual global event was held in Australia.

The 5th Australian Space Development Conference was held in Sydney in July 1998 and the Melbourne Space Frontier Society had a brief resurgence. An offshoot group also held Space Frontier conferences for a few years.

Spacelift Australia Ltd (SLA) was formed in 1999 as a joint Australian-Russian venture with the aim of creating 150 jobs and a $200 million industry at Woomera using the START-1 launch vehicle, a converted Russian ICBM. It ended in 2001.

United Launch Systems International (ULSI) proposed a new-generation vehicle, the Unity-22, to be targeted at the LEO market. The ULSI consortium was made up of International Space Development of Bermuda, which held 90% of shares, and Projects International Australia, holding the remaining 10%. International Space Development was in turn majority-owned by Thai Satellite Telecommunications (TST). ULSI proposed to undertake test launches from a new range near Gladstone in northern Queensland, Australia, in 2002, with commercial operations starting in 2003 at an initial rate of six launches a year. None of that happened.

Spacelift Australia raised almost $1million in capital from a Russian investor and targeted the lower end of the commercial launch market, aiming to use the Russian SS-25-based Start rocket as the basis for a total turnkey service provided by STC-Complex MIHT. Starting in November 2000, Spacelift planned three demonstration flights from Woomera and actively sought customers and prepayments, promising full commercial flights from 2001. It didn’t burn rocket fuel but it would go on to burn a lot of money in the next two years.

In 1999 there were five different spaceport consortiums in Australia, four of which were based on Russian hardware, all aiming to set up commercial launch facilities.

The Australian Space Council Act 1994 was repealed in 1999.

Joint Defence Facility Nurrungar (JDFN), located near Woomera, ceased operations and was decommissioned. The ADF now uses the site occasionally for army test and evaluation work under the approval of the Woomera Test Range. The facility is completely empty and stripped – even the lights and power plugs are gone. One of the giant ‘golf balls’ remains intact as an impressive radome structure (non-operational, all mechanisms removed).

NTR Hot Fire Testing Part I: Rover and NERVA Testing

Hello, and welcome back to Beyond NERVA, where today we are looking at ground testing of nuclear rockets. This is the first of two posts on ground testing NTRs, focusing on the testing methods used during Project Rover, including a look at the zero-power testing and assembly tests carried out at Los Alamos Scientific Laboratory, and the hot-fire testing done at the Nuclear Rocket Development Station at Jackass Flats, Nevada. The next post will focus on the options that have been and are being considered for hot-fire testing the next generation of LEU NTP, as well as a brief look at cost estimates for the different options and the plans NASA has proposed for the facilities needed to support this program (what little of that information is available).

We have examined how to test NTR fuel elements in non-nuclear situations before, and looked at two of the test stands that were developed for testing thermal, chemical, and erosion effects on them as individual components: the Compact Fuel Element Environment Simulator (CFEET) and the Nuclear Thermal Rocket Environment Effects Simulator (NTREES). These test stands provide an economical means of testing fuel elements before loading them into a nuclear reactor for neutronics and reactor physics behavioral testing, and can catch many chemical and structural problems without the headaches of testing a nuclear reactor.

However, as any engineer can tell you, computer modeling is far from enough to test a full system. Without extensive real-life testing, no system can be used in real-life situations. This is especially true of something as complex as a nuclear reactor – much less a rocket engine. NTRs have the challenge of being both.

Engine Maintenance and Disassembly Facility, image via Wikimedia Commons

Back in the days of Project Rover, there were many nuclear propulsion tests performed. The most famous of these were carried out at Jackass Flats, NV, on the Nevada Test Site (now home to the National Criticality Experiments Research Center), in open-air testing on specialized rail cars. This was far from the vast majority of human habitation (there was one small ranch – fewer than 100 people – upwind of the facility, but downwind was the test site for nuclear weapons tests, so any fallout from a reactor meltdown was not considered a major concern).

The test program at the Nevada site began with the fully constructed and preliminarily tested rocket engines arriving by rail from Los Alamos, NM, along with a contingent of scientists, engineers, and additional technicians. After another check-out, each reactor (still attached to the custom rail car it was shipped on) was hooked up to instrumentation and hydrogen propellant and run through a series of tests, ramping up to either full power or engine failure. Rocket engine development in those days (and even today, sometimes) could be an explosive business, and hydrogen was a new propellant, so accidents were unfortunately common in the early days of Rover.

After the test, the rockets were wheeled off onto a remote stretch of track to cool down (from a radiation point of view) for a period of time, before being disassembled in a hot cell (a heavily shielded facility using remote manipulators to protect the engineers) and closely examined. This examination verified how much power was produced based on the fission product ratios of the fuel, examined and detailed all of the material and mechanical failures that had occurred, and started the reactor decommissioning and disposal procedures.

As time went on, great strides were made not only in NTR design, but in metallurgy, reactor dynamics, fluid dynamics, materials engineering, manufacturing techniques, cryogenics, and a host of other areas. These rocket engines were well beyond the bleeding edge of technology, even for NASA and the AEC – two of the most scientifically advanced organizations in the world at that point. This, unfortunately, also meant that early on there were many failures, for reasons that either weren’t immediately apparent or that didn’t have a solution based on the design capabilities of the day. However, they persisted, and by the end of the Rover program in 1972, a nuclear thermal rocket was tested successfully in flight configuration repeatedly, the fuel elements for the rocket were advancing by leaps and bounds past the needed specifications, and with the ability to cheaply iterate and test new versions of these elements in new, versatile, and reusable test reactors, the improvements were far from stalling out – they were accelerating.

However, as we know, the Rover program was canceled after NASA was no longer going to Mars, and the development program was largely scrapped. Scientists and engineers at Westinghouse Astronuclear Laboratory (the commercial contractor for the NERVA flight engine), Oak Ridge National Laboratory (where much of the fuel element fabrication was carried out) and Los Alamos Scientific Laboratory (the AEC facility primarily responsible for reactor design and initial testing) spent about another year finishing paperwork and final reports, and the program was largely shut down. The final report on the hot-fire test programs for NASA, though, wouldn’t be released until 1991.

Behind the Scenes: Pre-Hot Fire Testing of ROVER reactors

Pajarito Test Area, image courtesy LANL

These hot fire tests were actually the end result of many more tests carried out in New Mexico, at Los Alamos Scientific Laboratory – specifically the Pajarito Test Area. Here, there were many test stands and experimental reactors used to measure such things as neutronics, reactor behavior, material behavior, critical assembly limitations and more.

Honeycomb, with a KIWI mockup loaded. Image via LANL

The first of these was known as Honeycomb, due to its use of square grids made of aluminum (which is mostly transparent to neutrons) held in large aluminum frames. Prisms of nuclear fuel, reflectors, neutron absorbers, moderator, and other materials were assembled carefully – to prevent accidental criticality, something the Pajarito Test Site had seen early in its existence with the Demon Core experiments and subsequent accident – to ensure that the behavior of possible core configurations matched predictions closely enough to justify the effort and expense of refining and testing fuel elements in an operating reactor core. Especially for cold and warm criticality tests, this test stand was invaluable, but with the cancellation of Project Rover there was no need to continue using it, and it was largely mothballed.

PARKA, image courtesy LANL

The second was a modified KIWI-A reactor, which used a low-pressure, heavy-water-moderated island in the center of the reactor to reduce the amount of fissile fuel necessary to achieve criticality. This reactor, known as Zepo-A (for zero power, or cold criticality), was the first of an experiment that was carried out with each successive design in the Rover program, supporting Westinghouse Astronuclear Laboratory and the NNTS design and testing operations. As each reactor went through its zero-power neutronic testing, the design was refined and problems corrected. This sort of testing was carried out in late 2017 and early 2018 at the NCERC in support of the KRUSTY series of tests, which culminated in March 2018 with the first full-power test of a new nuclear reactor in the US in more than 40 years, and it remains a crucial testing phase for all nuclear reactor and fuel element development. An early KIWI-type critical assembly test ended up being re-purposed into a test stand called PARKA, which was used to test liquid metal fast breeder reactor (LMFBR, now known as the Integral Fast Reactor or IFR, under development at Idaho National Laboratory) fuel pins in a low-power, epithermal neutron environment for startup and shutdown transient behavior testing, as well as serving as a well-understood general radiation source.

Hot gas furnace at LASL, image courtesy LANL

Finally, there was a pair of hot gas furnaces (one at LASL, one at WANL) for electrical heating of fuel elements in an H2 environment, using resistive heating to bring the fuel element up to temperature. This became more and more important as the project continued, since development of the clad on the fuel element was a major undertaking. As the fuel elements became more complex, or as the materials used in them changed, the thermal properties (and chemical properties at temperature) of these new designs needed to be tested before irradiation testing to ensure the changes didn’t have unintended consequences. This was not just for the clad; the graphite matrix composition changed over time as well, transitioning from graphite flour with thermoset resin to a mix of flour and flakes, and the fuel particles themselves changed from uranium oxide to uranium carbide, eventually receiving individual coatings by the end of the program. The gas furnace was invaluable in these tests, and can be considered the grandfather of today’s NTREES and CFEET test stands.

KIWI-A, Zepo-A, and Honeycomb mockup in Kiva 3. Image courtesy LANL

An excellent example of the importance of these tests, and of the careful checkout that each of the Rover reactors received, can be seen with the KIWI-B4 reactor. Initial mockups, both on Honeycomb and in more rigorous Zepo mockups of the reactor, showed that the design had good reactivity and control capability, but while the team at Los Alamos was assembling the actual test reactor, it was discovered that there was so much reactivity the core couldn’t be assembled! Inert material was used in place of some of the fuel elements, and neutron poisons were added to the core to counteract this excess reactivity. Careful testing showed that the uranium carbide fuel particles suspended in the graphite matrix had absorbed hydrogen, moderating the neutrons and therefore increasing the reactivity of the core. Later versions of the fuel used larger particles of UC2, which were individually coated before being distributed through the graphite matrix, to prevent this absorption of hydrogen. Careful testing and assembly of these experimental reactors by the team at Los Alamos ensured the safe testing and operation of the reactors once they reached the Nevada test site, and supported Westinghouse’s design work, Oak Ridge National Lab’s manufacturing efforts, and the ultimate full-power testing carried out at Jackass Flats.

NTR Core Design Process, image courtesy IAEA

Once this series of crude criticality mockups, zero-power testing, assembly, and checkout was completed, the reactors were loaded onto a special rail car that would also act as a test stand, nozzle up, and – accompanied by a team of scientists and engineers from both New Mexico and Nevada – transported by train to the test site at Jackass Flats, adjacent to Nellis Air Force Base and the Nevada Test Site, where nuclear weapons testing was done. Once there, a final series of checks was done to ensure that nothing untoward had happened during transport, and the reactors were hooked up to test instrumentation and the hydrogen coolant supply for testing.

Problems at Jackass Flats: Fission is the Easy Part!

The testing challenges that the Nevada team faced extended far beyond the nuclear testing that was the primary goal of this test series. Hydrogen is a notoriously difficult material to handle due to its incredibly small molecular size and mass. It seeps through solid metal, valves have to be made with incredibly tight clearances, and when it’s exposed to the atmosphere it is a major explosion hazard. To add to the problems, these were the first days of cryogenic H2 experimentation. Even today, handling cryogenic H2 is far from routine, and the often unavoidable problems of using hydrogen as a propellant can be seen in many areas – perhaps most spectacularly during the launch of a Delta IV Heavy, a hydrolox (H2/O2) rocket. Upon ignition of the rocket engines, it appears that the rocket isn’t launching from the pad but exploding on it, due to the outgassing of H2 not only from the pressure relief valves in the tanks, but also from seepage through valves, welds, and the body of the tanks themselves – the rocket catching itself on fire is actually standard operating procedure!

Plum Brook Cryo Tank Pressure Test, image courtesy NASA

In the late 1950s, these problems were just being discovered – the hard way. NASA’s Plum Brook Research Station in Ohio was a key facility for exploring techniques for handling gaseous and liquid hydrogen safely. Not only did they experiment with cryogenic equipment, hydrogen densification methods, and liquid H2 transport and handling, they did materials and mechanical testing on valves, sensors, tanks, and other components, and developed welding techniques and testing and verification capabilities to improve the ability to handle this extremely difficult, potentially explosive, but also incredibly valuable propellant, coolant, and nuclear moderator (valuable due to its low molecular mass – the exact same property that caused the problems in the first place!). The other options available for NTR propellant (basically anything that’s a gas at reactor operating temperatures and won’t leave excessive residue) weren’t nearly as good an option due to their lower exhaust velocity – and therefore lower specific impulse.
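The advantage of a low-molar-mass propellant can be sketched with the ideal-rocket relation, in which exhaust velocity scales roughly with the square root of chamber temperature over molar mass. Here is a minimal illustration of that scaling; the temperature and heat-capacity ratio used are assumed round numbers for illustration, not figures from the Rover tests:

```python
import math

# Simplified ideal exhaust velocity for expansion to vacuum:
#   ve = sqrt(2 * gamma/(gamma-1) * R * T / M)
# The key point: ve scales with sqrt(T / M), so a light propellant wins.

R = 8.314  # universal gas constant, J/(mol*K)

def exhaust_velocity(T_kelvin, molar_mass_kg_mol, gamma=1.4):
    """Idealized exhaust velocity (m/s); gamma assumed, losses ignored."""
    return math.sqrt(2 * gamma / (gamma - 1) * R * T_kelvin / molar_mass_kg_mol)

T = 2500.0  # K, an assumed representative NTR core outlet temperature
v_h2 = exhaust_velocity(T, 0.002016)   # molecular hydrogen, ~2 g/mol
v_h2o = exhaust_velocity(T, 0.018015)  # steam, ~18 g/mol, for comparison

print(f"H2 : {v_h2:.0f} m/s")
print(f"H2O: {v_h2o:.0f} m/s")
print(f"ratio: {v_h2 / v_h2o:.2f}")  # ~3x, i.e. sqrt(18/2)
```

At the same temperature, hydrogen's roughly ninefold molar-mass advantage over steam translates into about three times the exhaust velocity, which is why nothing heavier was seriously competitive despite hydrogen's handling headaches.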

Plum Brook is another often-overlooked facility that was critical to the success of not just NERVA, but all current liquid hydrogen fueled systems. I plan on doing another post (this one’s already VERY long) looking into the history of the various facilities involved with the Rover and NERVA program.

Indeed, all the KIWI-A tests and the KIWI-B1A test used gaseous hydrogen instead of liquid hydrogen, because the equipment that was planned to be used (and would be used in subsequent tests) was delayed due to construction problems, welding issues, valve failures, and fires during checkout of the new systems. These teething troubles with the propellant caused major problems at Jackass Flats, and caused many of the flashiest accidents that occurred during the testing program. Hydrogen fires were commonplace, and an accident during the installation of propellant lines in one reactor caused major damage to the test car, the shed it was contained in, and exposed instrumentation, but only minor apparent damage to the reactor itself, delaying the test for a full month while repairs were made (that test also saw two hydrogen fires, a common problem that diminished as the program continued and H2 handling methods improved).

While the H2 coolant was the source of many problems at Jackass Flats, other issues arose from the fact that these NTRs used technology that was well beyond bleeding-edge at the time. “New construction methods” doesn’t begin to describe the level of technological innovation required in virtually every area of these engines. Materials that had been theoretical chemical engineering possibilities only a few years before (sometimes even months!) were being used to build innovative, very-high-temperature, chemically and neutronically complex reactors – that also functioned as rocket engines. New metal alloys were developed, new forms of graphite were employed, and experimental methods of coating the fuel elements to prevent hydrogen from attacking the carbon of the fuel element matrix (a major concern, as seen in the KIWI-A reactor, which used unclad graphite plates for fuel) were constantly being adjusted – indeed, clad material experimentation continues to this day, but with advanced micro-imaging capabilities and a half century of materials science and manufacturing experience since then, the results now are light-years ahead of what was available to the scientists and engineers in the 50s and 60s. Hydrodynamic principles that were only poorly understood, stress and vibrational patterns that couldn’t be predicted, and material interactions at temperatures higher than are experienced in the vast majority of situations caused many problems for the Rover reactors.

One common problem in many of these reactors was transverse fuel element cracking, where a fuel element would split across its narrow axis, disrupting coolant flow through the interior channels and exposing the graphite matrix to the hot H2 (which would ferociously eat away the graphite, exposing both fission products and unburned fuel to the H2 stream and carrying them elsewhere – mostly out of the nozzle, though it turned out the uranium would congregate at the hottest points in the reactor, even against the H2 stream, which could have terrifying implications for accidental fission power hot spots). Sometimes, large sections of the fuel elements would be ejected out of the nozzle, spraying partially burned nuclear fuel into the air – sometimes as large chunks, but almost always with some of the fuel aerosolized. Today this would definitely be unacceptable, but at the time the US government was testing nuclear weapons literally next door to this facility, so it wasn’t considered a cause for major concern.

If this sounds like there were major challenges and significant accidents that were happening at Jackass Flats, well in the beginning of the program that was certainly correct. These early problems were also cited in Congress’ decision to not continue to fund the program (although, without a manned Mars mission, there was really no reason to use the expensive and difficult to build systems, anyway). The thing to remember, though, is that they were EARLY tests, with materials that had been a concept in a material engineer’s imagination only a few years (or sometimes months) beforehand, mechanical and thermal stresses that no-one had ever dealt with, and a technology that seemed the only way to send humans to another planet. The moon was hard enough, Mars was millions of miles further away.

Hot Fire Testing: What Did a Test Look Like?

Nuclear testing is far more complex than just hooking up the test reactor to coolant and instrumentation lines, turning the control drums and hydrogen valves, and watching the dials. Not only are there many challenges associated with just deciding what instrumentation is possible, and where it would be placed, but installing these instruments and collecting data from them was often a challenge as well, especially early in the program.

NRX A2 Flow Diagram, image via NASA (Finseth, 1991)

To get an idea of what a successful hot fire test looks like, let’s look at a single reactor’s test series from later in the program: the NRX A2 technology demonstration test. This was the first NERVA reactor design to be tested at full power by Westinghouse ANL; the others, including KIWI and PHOEBUS, were not technology demonstration tests but proof-of-concept and design development tests leading up to NERVA, and were tested by LASL. The core itself consisted of 1626 hexagonal prismatic fuel elements. This reactor was significantly different from the XE-PRIME reactor that would be tested five years later. One way it differed was the hydrogen flow path: after going through the nozzle, the H2 would enter a chamber beside the nozzle and above the axial reflector (the engine was tested nozzle-up; in flight configuration this would be below the reflector), then pass through the reflector to cool it, before being diverted again by the shield, through the support plate, and into the propellant channels in the core before exiting the nozzle.

Two power tests were conducted, on September 24 and October 15, 1964.

With two major goals and 22 lesser goals, the September 24 test packed a lot into its six minutes of half-to-full-power operation (the reactor was only at full power for 40 seconds). The major goals were: 1) provide significant information for verifying steady-state design analysis for powered operation, and 2) provide significant information to aid in assessing the reactor’s suitability for operation at the steady-state power and temperature levels required if it was to become a component of an experimental engine system. In addition to these major, but not very specific, test goals, a number of more specific goals were laid out, including the top-priority goals of evaluating the effects of environmental conditions on the structural integrity of the reactor and its components, core assembly performance evaluation, lateral support and seal performance analysis, core axial support system analysis, outer reflector assembly evaluation, control drum system evaluation, and overall reactivity assessment. The lower-priority goals were also more extensive, and included nozzle assembly performance, pressure vessel performance, shield design assessment, instrumentation analysis, propellant feed and control system analysis, nucleonic and advanced power control system analysis, radiological environment and radiation hazard evaluation, the thermal environment around the reactor, in-core and nozzle chamber temperature control system evaluation, reactivity and thermal transient analysis, and test car evaluation.

Image via NASA (Finseth, 1991)

Several power holds were conducted during the test, at 51%, 84%, and 93-98%, all of which were slightly above the planned hold powers. This was due to the compressibility of the hydrogen gas (leading to more moderation than planned) and issues with the venturi flowmeters used to measure H2 flow rates, as well as issues with the in-core thermocouples used for instrumentation (a common problem in the program), and it provides a good example of the sorts of unanticipated challenges these tests are meant to uncover. The test length was limited by the availability of hydrogen to drive the turbopump, but despite being short, it was a sweet one: all of the objectives of the test were met, and an ideal vacuum-equivalent specific impulse of 811 s was determined (low for an NTR, but still over twice as good as any chemical engine at the time).
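For scale, a specific impulse in seconds converts directly to an effective exhaust velocity via ve = Isp · g0. A quick sketch of what the 811 s figure means; the chemical-engine Isp values are rough ballpark assumptions for comparison, not numbers from the test report:

```python
g0 = 9.80665  # standard gravity, m/s^2

def isp_to_ve(isp_s):
    """Effective exhaust velocity (m/s) from specific impulse (s)."""
    return isp_s * g0

ntr = isp_to_ve(811.0)       # vacuum-equivalent Isp from the NRX A2 test
kerolox = isp_to_ve(300.0)   # typical kerosene/LOX engine of the era (assumed)
hydrolox = isp_to_ve(450.0)  # a modern hydrogen/LOX upper stage (assumed)

print(f"NTR:      {ntr:.0f} m/s")       # ~7953 m/s
print(f"kerolox:  {kerolox:.0f} m/s")   # ~2942 m/s
print(f"hydrolox: {hydrolox:.0f} m/s")  # ~4413 m/s
```

Even this early, conservative NTR figure roughly doubles the propellant economy of the kerosene and hypergolic engines that dominated the early 1960s, which is why the program kept its funding as long as a Mars mission was on the table.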

Image via NASA (Finseth, 1991)

The October 15th test was a low-power, low-flow test meant to evaluate the reactor’s behavior outside of high-power, steady-state operation, focusing on startup and cool-down. The relevant part of the test lasted for about 20 minutes, and operated at 21-53 MW of power and a flow rate of 2.27-5.9 kg/s of LH2. As with any system, the state that the reactor was designed to operate in was easier to evaluate and model than startup and shutdown, two conditions that every engine has to go through but which are far outside the “ideal” conditions for the system, and operating with liquid hydrogen only compounded the questions. Only four specific objectives were set for this test: demonstration of stability at low LH2 flow (using dewar pressure as a gauge), demonstration of suitability at constant power but with H2 flow variation, demonstration of stability with fixed control drums but variable H2 flow to effect a change in reactor power, and obtaining a reactivity feedback value associated with LH2 at the core entrance. Many of these tests hinge on the fact that the LH2 isn’t just a coolant but a major source of neutron moderation, so the flow rate (and associated changes in temperature and pressure) of the propellant has impacts extending beyond just the temperature of the exhaust. This test showed that there were no power or flow instabilities in the low-power, low-flow conditions that would be seen even during reactor startup (when the H2 entering the core was at its densest, and therefore most moderating). The predicted behavior and the test results showed good correlation, especially considering that the instrumentation used (like the reactor itself) really wasn’t designed for these conditions, and the majority of the transducers were operating at the extreme low end of their range.

After the October test, the reactor was wheeled down a shunt track to radiologically cool down (allowing the short-lived fission products to decay, reducing the gamma radiation flux coming off the reactor), and was then disassembled in the NRDC hot cell. These post-mortem examinations were an incredibly important tool for evaluating a number of variables, including how much power was generated during the test (based on the distribution of fission products, which would change depending on a number of factors, but mainly on the power produced and the neutron spectrum the reactor was operating in when they were produced), chemical reactivity issues, mechanical problems in the reactor itself, and several other factors. Unfortunately, disassembling even a simple system without accidentally breaking something is difficult, and this was far from a simple system. A recurring challenge became “did the reactor break this itself, or did we?” This is especially true of the fuel elements, which often broke due to inadequate lateral support along their length, but could also break at the point where they were joined to the cold end of the core (a joint which usually involved high-temperature, reasonably neutronically stable adhesives).

This issue was illustrated in the A2 test, when there were multiple broken fuel elements that showed no erosion at the break. This is a strong indicator that they broke during disassembly, not during the test itself: hot H2 heavily erodes the carbon in the graphite matrix – and the carbide fuel pellets – so erosion at a break is a very good sign that the fuel rods broke during a power test. Broken fuel elements were a persistent problem throughout the Rover and NERVA programs (sometimes leading to ejection of the hot-end portion of the fuel elements), and the fact that none of the fueled elements appear to have broken during the test was a major victory for the fuel fabricators.

This doesn’t mean that the fuel elements were without their problems. Each generation of reactors used different fuel elements, sometimes multiple different types in a single core. In this case the propellant channels, fuel element ends, and the tips of the exterior of the elements were clad in NbC, but the full length of the outside of the elements was not, in an attempt to save mass and avoid overly complicating the neutronic environment of the reactor. Unfortunately, this meant that the small amount of gas that slipped past the filler strips and pyro-tiles placed to prevent this problem could eat away at the middle of the outside of the fuel element (toward the hot end), something known as mid-band corrosion. This occurred mostly on the periphery of the core, and left a characteristic pattern of striations on the fuel elements. A change was made to ensure that all of the peripheral fuel elements were fully clad with NbC, since the areas that had this clad were unaffected. Once again, the core became more complex, and more difficult to model and build, but a particular problem was addressed thanks to empirical data gathered during the test. However, a number of unfueled, instrumented elements in the core were found to have broken in such a way that it wasn’t possible to conclusively rule out handling during disassembly, so the integrity of the fuel elements was still in doubt.

The problems associated with these graphite composite fuel elements never really went away during Rover or NERVA: a number of broken fuel elements (which were known to have been broken during the test) were found in the Pewee reactor, the last test of this sort of fuel element matrix (NF-1 used either CERMET – then called composite – or carbide fuel elements; no GC fuel elements were used). The follow-on A3 reactor exhibited a form of fuel erosion known as pin-hole erosion, which the NbC clad was unable to address, forcing the NERVA team to look at other alternatives. This was another area where the GC fuel elements were shown to be unsustainable for long-duration use past the specific mission parameters, and a large part of why the entire NERVA engine was discarded during staging, rather than just the propellant tanks as in modern designs. New clad materials and application techniques show a lot of promise, and GC can be used in a carefully designed LEU reactor, but this is something that isn’t really being explored in any depth in most cases (both the LANTR and NTER concepts still use GC fuel elements, with the NTER specifying them exclusively due to fuel swelling issues, but that seems to be the only time it’s actually required).

Worse Than Worst Case: KIWI-TNT

One question that is often asked by those unfamiliar with NTRs is “what happens if it blows up?” The short answer is that they can’t, for a number of reasons. There is only so much reactivity in a nuclear reactor, and only so fast that it can be utilized. The amount of reactivity is carefully managed through fuel loading in the fuel elements and strategically placed neutron poisons. Also, the control systems used for the nuclear reactors (in this case, control drums placed around the reactor in the radial reflector) can only be turned so fast. I recommend checking out the report on Safety Neutronics in Rover Reactors linked at the end of this post if this is something you’d like to look at more closely.

However, during the Rover testing at NRDS one WAS blown up, after significant modification that would never be done to a flight reactor. This was the KIWI-TNT test (TNT is short for Transient Nuclear Test). The behavior of a nuclear reactor as it approaches a runaway reaction, or a failure of some sort, is something that is studied in all types of reactors, usually in specially constructed test reactors. This is required because the production design of every reactor is highly optimized to prevent this sort of failure from occurring, and that was also true of the Rover reactors. However, knowing what a fast excursion would do to the reactor was an important question early in the program, so a test was designed to discover exactly how bad things could be and to characterize what happened in a worse-than-worst-case scenario. It yielded valuable data on an abort during launch that resulted in the reactor falling into the ocean (water being an excellent moderator, making accidental criticality more likely), on the launch vehicle exploding on the pad, and on the option of destroying the reactor in space after it had been exhausted of its propellant (something that ended up not being planned for in the final mission profiles).

KIWI B4A reactor, which KIWI-TNT was based on, image via LANL

What was the KIWI-TNT reactor? The last of the KIWI series of reactors, its design was very similar to the KIWI-B4A reactor (the predecessor of the NERVA-1 series of reactors), which was originally designed as a 1000 MW reactor with an exhaust exit chamber temperature of 2000 C. However, a number of things prevented a fast excursion from happening in that design: first, the shims used for the fuel elements were made of tantalum, a neutron poison, to prevent excess reactivity; second, the control drums used stepping motors that were slow enough that a runaway reaction wasn’t possible; finally, this experiment would be done without coolant, which also acted as moderator, so much more reactivity was needed than the B4A design allowed. With the shims removed, excess reactivity added to the point that the reactor was less than $1 sub-critical (with control drums fully inserted) and $6 of excess reactivity was available relative to prompt critical, and the drum rotation rate increased by a factor of 89(!!), from 45 deg/s to 4000 deg/s, the stage was set for this rapid scheduled disassembly on January 12, 1965. This degree of modification shows how difficult it would be to have an accidental criticality accident in a standard NTR design.
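The “$” notation above measures reactivity in dollars: reactivity divided by the effective delayed-neutron fraction, with prompt criticality at exactly $1. Here is a minimal sketch of the conversion; the beta_eff and k_eff values are illustrative assumptions for a U-235 fueled core, not KIWI-TNT data:

```python
# Reactivity in "dollars" normalizes reactivity by the effective
# delayed-neutron fraction beta_eff: rho($) = rho / beta_eff.
# Prompt criticality occurs at exactly $1.
BETA_EFF = 0.0065  # assumed beta_eff for a U-235 core (illustrative only)

def rho_from_keff(k_eff):
    """Reactivity rho = (k_eff - 1) / k_eff."""
    return (k_eff - 1.0) / k_eff

def dollars(rho, beta_eff=BETA_EFF):
    """Express reactivity rho in dollars."""
    return rho / beta_eff

# $1 of reactivity is exactly one beta_eff worth of rho:
print(f"$1 = rho of {dollars(BETA_EFF):.1f} beta_eff")

# An illustrative k_eff of 1.046 corresponds to roughly $6.8:
print(f"k_eff = 1.046 -> ${dollars(rho_from_keff(1.046)):.1f}")
```

The point of the unit is that a reactor below $1 of excess reactivity is still controlled by the slow delayed neutrons; only past $1 do the prompt neutrons alone sustain the chain reaction, which is what KIWI-TNT was modified to reach.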

KIWI-TNT Test Stand Schematic, image via LANL

The test had six specific goals: 1. measure the reaction history and total fissions produced under a known reactivity insertion and compare them to theoretical predictions in order to improve calculations for accident predictions; 2. determine the distribution of fission energy between core heating, vaporization, and kinetic energy; 3. determine the nature of the core breakup, including the degree of vaporization and particle sizes produced, to test a possible nuclear destruct system; 4. measure the release of fission debris into the atmosphere under known conditions to better calculate other possible accident scenarios; 5. measure the radiation environment during and after the power transient; and 6. evaluate launch site damage and clean-up techniques for a similar accident, should it occur (although the degree of modification required to the reactor core shows that this is a highly unlikely event, and should an explosive accident occur on the pad, it would be chemical in nature with the reactor never going critical, so fission products would not be present in any meaningful quantities).

There were 11 measurements taken during the test: reactivity time history, fission rate time history, total fissions, core temperatures, core and reflector motion, external pressures, radiation effects, cloud formation and composition, fragmentation and particle studies, and geographic distribution of debris. An angled mirror above the reactor core (where the nozzle would be if propellant were being fed into the reactor) was used in conjunction with high-speed cameras at the North bunker to take images of the hot end of the core during the test, along with a number of thermocouples placed in the core.

KIWI-TNT test, AEC image via SomethingAwful

As can be expected, this was a very short test, with a total of 3.1×10^20 fissions achieved after only 12.4 milliseconds. This was a highly unusual explosion, not consistent with either a chemical or a conventional nuclear explosion. The core temperature exceeded 17,500 C in some locations, vaporizing approximately 5-15% of the core (the majority of the rest either burned in the air or was aerosolized into the cloud of effluent), and the test produced 150 MW-sec of kinetic energy – about the same amount of kinetic energy as approximately 100 pounds of high explosive (although due to the nature of this explosion, caused by rapid overheating rather than chemical combustion, getting the same overall effect from chemical explosives would take considerably more HE). Material in the core was observed to be moving at 7300 m/sec before it came into contact with the pressure vessel, and the explosion flung the largest intact piece of the pressure vessel (a 0.9 sq. m, 67 kg fragment) 229 m away from the test location. There were some issues with instrumentation in this test, namely with the pressure transducers used to measure the shock wave. All of these instruments but two (placed 100 ft away) recorded not the pressure wave but an electromagnetic signal at the time of peak power (those two recorded a 3-5 psi overpressure).
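The kinetic-energy comparison checks out on the back of an envelope: 150 MW-sec is 150 megajoules, and 100 lb of TNT carries roughly 190 MJ using the standard 4.184 MJ/kg energy density – the same order of magnitude. A quick sketch:

```python
# Sanity check: compare 150 MW-sec of kinetic energy (i.e. 150 MJ)
# to the energy content of ~100 lb of TNT at 4.184 MJ/kg.
TNT_MJ_PER_KG = 4.184
LB_TO_KG = 0.45359237

kinetic_mj = 150.0                       # 150 MW-sec = 150 MJ
tnt_mj = 100 * LB_TO_KG * TNT_MJ_PER_KG  # energy in 100 lb of TNT

print(f"kinetic energy: {kinetic_mj:.0f} MJ, 100 lb TNT: {tnt_mj:.0f} MJ")
```

As the text notes, the comparison is only approximate because a blast driven by rapid overheating couples into the surroundings differently than a chemical detonation does.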

KIWI-TNT remains, image via LANL

Radioactive Release during Rover Testing Prequel: Radiation is Complicated

Radiation is a major source of fear for many people, and is the source of a huge amount of confusion in the general population. To be completely honest, when I get into the nitty gritty of health physics (the study of radiation’s effects on living tissue), I spend a lot of time re-reading most of the documents, because it is easy to get confused by the terms that are used. To make matters worse, especially for the Rover documentation, everything is in the old, outdated measures of radioactivity: sorry, SI users out there, all the AEC and NASA documentation uses Ci, rad, and rem, and converting all of it would be a major headache. If someone would like to volunteer to help me convert everything to common-sense units, please contact me, I’d love the help! However, the natural environment is radioactive, and the Sun emits a prodigious amount of radiation, only some of which is absorbed by the atmosphere. Indeed, there is evidence that the human body REQUIRES a certain amount of radiation to maintain health, based on a number of studies done in the Soviet Union using completely non-radioactive, specially prepared caves and diets.

Exactly how much radiation is healthy and how much is harmful is a matter of intense debate, and not much study, and three main competing theories have arisen. The first, the linear no-threshold model, is the law of the land, and states that there’s a maximum amount of radiation that is allowable to a person over the course of a year, no matter whether it comes in one incident (which is usually a bad thing) or evenly spaced throughout the whole year. Each rad (or gray, we’ll get to that below) of radiation increases a person’s chance of getting cancer by a certain percentage in a linear fashion, and so the LNT model (as it’s known) effectively sets a maximum acceptable increase in the chance of a person getting cancer in a given timeframe (usually quarters and years). This doesn’t take into account the human body’s natural repair mechanisms, though, which can replace damaged cells (no matter how they’re damaged), which leads most health physicists to see issues with the model, even as they work within it in their professions.

The second model is known as the linear threshold model, which states that low-level radiation (under the threshold of the body’s repair mechanisms) shouldn’t count toward the likelihood of getting cancer. After all, if you replace the Formica counter top in your kitchen with a granite one, the natural radioactivity in the granite is going to expose you to more radiation, but there’s no difference in the likelihood that you’re going to get cancer from the change. Ramsar, Iran (which has the highest natural background radiation of any inhabited place on Earth) doesn’t have higher cancer rates – in fact they’re slightly lower – so why not set the threshold where the normal human body’s repair mechanisms can control any damage, and THEN start using the linear model of increase in likelihood of cancer?

The third model, hormesis, takes this one step further. In a number of cases, such as Ramsar, and an apartment building in Taiwan which was built with steel contaminated with radioactive cobalt (causing the residents to be exposed to a MUCH higher than average chronic, or over-time, dose of gamma radiation), people have not only been exposed to higher than typical doses of radiation, but have had lower cancer rates when other known carcinogenic factors were accounted for. This is evidence that an increased exposure to radiation may in fact stimulate the immune system, making a person healthier and reducing their chance of getting cancer! A number of places in the world actually use radioactive sources as places of healing, including radium springs in Japan, Europe, and the US, and the black monazite sands in Brazil. There has been very little research done in this area, though, since the standard model of radiation exposure says that this is effectively giving someone a much higher risk of cancer.
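The three models can be summarized as different risk-versus-dose curves. The sketch below is purely illustrative – the slope, threshold, and hormesis factor are arbitrary placeholder numbers, not health-physics data:

```python
# Illustrative comparison of the three dose-response models discussed above.
# SLOPE and THRESHOLD are arbitrary placeholder values, NOT real coefficients.
SLOPE = 0.01       # excess risk per rem (illustrative only)
THRESHOLD = 10.0   # dose the body's repair mechanisms handle (illustrative only)

def lnt(dose):
    """Linear no-threshold: every rem adds risk, no matter how small the dose."""
    return SLOPE * dose

def linear_threshold(dose):
    """No added risk below the threshold, linear above it."""
    return SLOPE * max(0.0, dose - THRESHOLD)

def hormesis(dose):
    """Below the threshold, risk is slightly *reduced*; linear above it."""
    if dose <= THRESHOLD:
        return -0.1 * SLOPE * dose
    return SLOPE * (dose - THRESHOLD)

for model in (lnt, linear_threshold, hormesis):
    print(f"{model.__name__}: risk at 5 rem = {model(5.0):+.3f}, "
          f"at 50 rem = {model(50.0):+.3f}")
```

Note how the three models only disagree below the threshold: at high doses they all predict roughly linear growth in risk, which is why the debate centers on low-level chronic exposure.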

I am not a health physicist. It has become something of a hobby for me in the last year, but this is a field that is far more complex than astronuclear engineering. As such, I’m not going to weigh in on the debate as to which of these three theories is right, and would appreciate it if the comments section on the blog didn’t become a health physics flame war. Talking to friends of mine that ARE health physicists (and whom I consult when this subject comes up), I tend to lean somewhere between the linear threshold and hormesis theories of radiation exposure, but as I noted before, LNT is the law of the land, and so that’s what this blog is going to mostly work within.

Radiation (in the context of nuclear power, especially) starts with the emission of a particle or ray from a radioisotope, or unstable nucleus of an atom. This is measured with the Curie (Ci), which is a measure of how much radioactivity IN GENERAL is released: 3.7×10^10 emissions (whether alpha, beta, neutron, or gamma) per second. SI uses the becquerel (Bq), which is simple: one decay = 1 Bq. So 1 Ci = 3.7×10^10 Bq. Because the becquerel is so small, megabecquerels (MBq) are often used, because unless you’re looking at highly sensitive laboratory experiments, even a dozen Bq is effectively nothing.

Each different type of radiation affects both materials and biological systems differently, though, so there’s another unit used to describe the energy that radiation deposits in a material, the absorbed dose: this is the rad, and the SI unit is the gray (Gy). The rad is defined as 100 ergs of energy deposited in one gram of material, and the gray is defined as 1 joule of radiation absorbed by one kilogram of matter, which means that 1 rad = 0.01 Gy. This is mostly seen for inert materials, such as reactor components, shielding materials, etc. If it’s being used for living tissue, that’s generally a VERY bad sign, since it’s pretty much only used that way in the case of a nuclear explosion or major reactor accident. It is used in the case of an acute – or sudden – dose of radiation, but not for longer-term exposures.

This is because there are many things that go into how bad a particular radiation dose is: a gamma beam that goes through your hand, for instance, is far less damaging than one that goes through your brain or your stomach. This is where the final measurement comes into play: NASA and AEC documentation uses the rem (roentgen equivalent man), while in SI units it’s the sievert (Sv). This is the dose equivalent, which normalizes all the different radiation types’ effects on the various tissues of the body by applying a quality factor to each type of radiation for each part of the body exposed to it. If you’ve ever wondered what health physicists do, a lot of it is the hidden work that goes on when that quality factor is applied.
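Since the conversions between the legacy units and SI are all fixed multiplicative factors, they are easy to capture in code. A small helper sketch using figures quoted in this section:

```python
# Fixed conversion factors between legacy and SI radiation units:
#   1 Ci = 3.7e10 Bq,  1 rad = 0.01 Gy,  1 rem = 0.01 Sv.
def ci_to_bq(ci):
    """Curies to becquerels (activity)."""
    return ci * 3.7e10

def rad_to_gray(rad):
    """Rads to grays (absorbed dose)."""
    return rad * 0.01

def rem_to_sievert(rem):
    """Rems to sieverts (dose equivalent)."""
    return rem * 0.01

# The program-total release of 843,000 Ci in SI terms:
print(f"{ci_to_bq(843_000):.2e} Bq")
# The 5 rem/yr occupational whole-body limit:
print(f"{rem_to_sievert(5):.2f} Sv/yr")
```

The quality factors that turn an absorbed dose (rad/Gy) into a dose equivalent (rem/Sv) are not fixed constants like these – they depend on radiation type and tissue, which is exactly the health-physics work described above.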

The upshot of all of this is the way that radiation dose is assessed. A number of variables were assessed at the time (and still are today, with the program serving as an effective starting point for ground testing, where release of radioactivity to the general public is a minuscule but still necessary consideration). Exposure was broadly divided into three types: full-body (5 rem/yr for an occupational worker, 0.5 rem/yr for the public); skin, bone, and thyroid exposure (30 rem/yr occupational, 3 rem/yr for the public); and other organs (15 rem/yr occupational, 1.5 rem/yr for the public). In 1971, the guidelines for the public were changed to 0.5 rem/yr full body and 1.5 rem/yr for the general population, but as has been noted (including in the NRDS Effluent Final Report), this was more an administrative convenience than a biomedical need.

1974 Occupational Radiological Release Standards, Image via EPA

Additional considerations were made for discrete fuel element particles ejected from the core – there was a less than one in ten thousand chance that a person would come in contact with one, and a number of factors were considered in determining this probability. The biggest concern is that skin contact can result in a lesion at exposures above 750 rads (this is an energy deposition measure, not an expressly medical one, because only one type of tissue is being assessed).

Finally, and perhaps most complex to address, is the aerosolized effluent from the exhaust plume, which could include both gaseous fission products (which were not captured by the clad materials used) and particles small enough to float through the atmosphere for a longer duration – and possibly be inhaled. The relevant limits of radiation exposure for these tests for off-site populations were 170 mrem/yr whole-body gamma dose and a thyroid exposure dose of 500 mrem/yr. The highest full-body dose recorded in the program was 20 mrem, in 1966, and the highest thyroid dose recorded was 72 mrem, in 1965.

The Health and Environmental Impact of Nuclear Propulsion Testing Development at Jackass Flats

So how bad were these tests in terms of releasing radioactive material, exactly? Given the sparsely populated area, few people – if any – who weren’t directly associated with the program received any dose of radiation from aerosolized (inhalable, fine particulate) radioactive material. By the regulations of the day, no dose greater than 15% of the allowable AEC/FRC (Federal Radiation Council, an early federal health physics advisory board) dose for the general public was ever estimated or recorded. The actual release of fission products into the atmosphere (with the exception of Cadmium-115) was never more than 10%, and often less than 1% (115Cd release was 50%). The vast majority of these fission products are very short-lived, decaying in minutes or days, so there was not much – if any – chance for migration of fallout (fission products bound to atmospheric dust that then fell along the exhaust plume of the engine) off the test site. According to a 1995 study by the Department of Energy, the total radiation release from all Rover and Tory-II nuclear propulsion tests was approximately 843,000 Curies. To put this in perspective, a nuclear explosive produces 30,300,000 Curies per kiloton (depending on the size and efficiency of the explosive), so the total release from the programs was equivalent to that of a roughly 30 ton TNT equivalent explosion.
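The "30 ton" figure follows directly from the two numbers quoted; a quick check:

```python
# Cross-check the comparison above: total program release divided by
# the quoted per-kiloton release of a nuclear explosive.
TOTAL_RELEASE_CI = 843_000    # all Rover / Tory-II tests (DOE, 1995)
CI_PER_KILOTON = 30_300_000   # quoted release per kiloton of nuclear yield

kilotons = TOTAL_RELEASE_CI / CI_PER_KILOTON
print(f"{kilotons:.4f} kt = about {kilotons * 1000:.0f} tons TNT equivalent")
```

The arithmetic gives roughly 28 tons, consistent with the "30 ton" round figure cited from the DOE study.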

Summary of Radiological Release, image via DOE

This release came either from migration of the fission products through the metal clad and into the hydrogen coolant, or from cladding or fuel element failure, which resulted in the hot hydrogen aggressively attacking the graphite fuel elements and carbide fuel particles.

The amount of fission product released is highly dependent on the temperature and power level the reactors were operated at, the duration of the test, how quickly the reactors were brought to full power, and a number of other factors. The actual sampling of the reactor effluent occurred in three ways: sampling by aircraft fitted with special sensors for both radiation and particulate matter, the “Elephant Gun” effluent sampler placed in the exhaust stream of the engine, and post-mortem chemical analysis of the fuel elements to determine fuel burnup, migration, and fission product inventory. One thing to note is that for the KIWI tests, effluent release was not nearly as well characterized as for the later Phoebus, NRX, Pewee, and Nuclear Furnace tests, so the data for those later tests is not only more accurate but far more complete as well.

Offsite Dose Map, 1967 (a year with higher-than-average release, and the first to employ better sampling techniques) Image via EPA

Two sets of aircraft data were collected. The first (by LASL/WANL) used fixed heights and transects in the six miles surrounding the effluent plume, collecting particulate effluent which would be used (combined with known release rates of 115Cd and post-mortem analysis of the reactor) to determine the total fission product inventory released at those altitudes and vectors; it was discontinued in 1967. The second (NERC) method used a fixed coordinate system to measure cloud size and density, utilizing a mass particulate sampler, charcoal bed, cryogenic sampler, external radiation sensor, and other equipment; but because these samples were taken more than ten miles from the reactor tests, it’s quite likely that more of the fission products had either decayed or come down to the ground as fallout, so depletion of much of the fission product inventory could easily have occurred by the time the cloud reached the planes’ locations. This technique was used after 1967.

The next sampling method also came online in 1967 – the Elephant Gun. This was a probe that was stuck directly into the hot hydrogen coming out of the nozzle, and collected several moles of the hot hydrogen from the exhaust stream at several points throughout the test, which were then stored in sampling tanks. Combined with hydrogen temperature and pressure data, acid leaching analysis of fission products, and gas sample data, this provided a more close-to-hand estimate of the fission product release, as well as getting a better view of the gaseous fission products that were released by the engine.

Engine Maintenance and Disassembly Building at NRDC under construction, image via Wikimedia Commons

Finally, after testing and cool-down, each engine was put through a rigorous post-mortem inspection. Here, the amount of reactivity lost compared to the amount of uranium present, power levels and test duration, and chemical and radiological analysis were used to determine which fission products were present (and in which ratios) compared to what SHOULD have been present. This technique enhanced understanding of reactor behavior, neutronic profile, and actual power achieved during the test as well as the radiological release in the exhaust stream.

Radioactive release from these engine tests varied widely, as can be seen in the table above; however, the total amount released by the “dirtiest” of the reactor tests, the second Phoebus 1B test, was only 240,000 Curies, and the majority of the tests released less than 2,000 Curies. Another thing that varied widely was HOW the radiation was released. The immediate area (within a few meters) of the reactor would be exposed to radiation during operation, in the form of both neutron and gamma radiation. The exhaust plume would contain not only the hydrogen propellant (which wasn’t in the reactor long enough to absorb additional neutrons and turn into deuterium, much less tritium, in any meaningful quantity), but also the gaseous fission products (most of which the human body isn’t able to absorb, such as 135Xe) and – if fuel element erosion or breakage occurred – a certain quantity of particles that may either have become irradiated or contain burned or unburned fission fuel.

Image via EPA

These particles, and the cloud of effluent created by the propellant stream during the test, were the primary concern for both humans and the environment from these tests. The reason for this is that the radiation is able to spread much further this way (once emitted, and all other things being equal, radiation goes in a straight line), and most especially it can be absorbed by the body, through inhalation or ingestion, and some of these elements are not just radioactive, but chemically toxic as well. As an additional complication, while alpha and beta radiation are generally not a problem for the human body (your skin stops both particles easily), when they’re IN the human body it’s a whole different ballgame. This is especially true of the thyroid, which is more sensitive than most to radiation, and soaks up iodine (131I is a fairly active radioisotope) like nobody’s business. This is why, after a major nuclear accident (or a theoretical nuclear strike), iodine tablets, containing a radio-inert isotope, are distributed: once the thyroid is full, the excess radioactive iodine passes through the body since nothing else in the body can take it up and store it.

There are quite a few factors that go into how far this particulate will spread, including particle mass, temperature, velocity, altitude, wind (at various altitudes), moisture content of the air (particles could be absorbed into water droplets), plume height, and a host of other factors. The NRDS Effluent Program Final Report goes into great depth on the modeling used, and the data collection methods used to collect data to refine these estimates.
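To give a flavor of the kind of modeling involved, here is a minimal Gaussian plume sketch. This is NOT the NRDS effluent model described in the Final Report – the function, its parameters, and the sample numbers are illustrative assumptions showing how source strength, wind speed, and dispersion widths interact:

```python
import math

def ground_level_concentration(q, u, sigma_y, sigma_z, y=0.0, h=0.0):
    """Ground-level concentration downwind of a continuous point source.

    q: emission rate (e.g. Ci/s), u: wind speed (m/s),
    sigma_y / sigma_z: horizontal / vertical dispersion widths (m),
    y: crosswind offset (m), h: effective release height (m).
    Includes the standard ground-reflection term.
    """
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * math.exp(-h**2 / (2 * sigma_z**2)))

# Doubling the wind speed halves the centerline concentration, and the
# wider dispersion widths found further downwind dilute the plume further.
near = ground_level_concentration(q=1.0, u=5.0, sigma_y=50.0, sigma_z=20.0)
far = ground_level_concentration(q=1.0, u=5.0, sigma_y=500.0, sigma_z=200.0)
print(f"near: {near:.2e}, far: {far:.2e}")
```

Even this toy version shows why the report's variables matter: the dispersion widths grow with downwind distance and depend on atmospheric stability, wind, and particle behavior, which is what the extensive sampling campaign was built to pin down.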

Another thing to consider in the context of Rover in particular is that open-air testing of nuclear weapons was taking place in the area immediately surrounding the Rover tests, and those tests released FAR more fallout (by many orders of magnitude), so Rover contributed only a very minor fraction of the radionuclides released in the area at the time.

The offsite radiation monitoring program, which included sampling of milk from cows to estimate thyroid exposure, collected data through 1972, and all exposures measured were well below the exposure limits set on the program.

Since we looked at the KIWI-TNT test earlier, let’s look at the environmental effects of that particular test. After all, a nuclear rocket blowing up has to be the most harmful test, right? Surprisingly, ten other tests released more radioactivity than KIWI-TNT. The discrete particles didn’t travel more than 600 feet from the explosion. The effluent cloud was recorded from 4,000 feet to 50 miles downwind of the test site, and aircraft monitoring the cloud were able to track it until it went out over the Pacific Ocean (although by that point it was far less radioactive). By the time the cloud had moved 16,000 feet from the test site, the highest whole-body dose from the cloud measured was 1.27×10^-3 rad (at station 16-210), and the same station registered an inhalation thyroid dose of 4.55×10^-3 rads. This shows that even the worst credible accident possible with a NERVA-type reactor has only a negligible environmental and biological impact from either the radiation released or the explosion of the reactor itself, further attesting to the safety of this engine type.

Map of discrete particle distribution, image via LANL

If you’re curious about more in-depth information about the radiological and environmental effects of the KIWI-TNT tests, I’ve linked the (incredibly detailed) reports on the experiment at the end of this post.

Radiological distribution from particle monitors, image via LANL

The Results of the Rover Test Program

Throughout the Rover test program, the fuel elements were the source of most of the non-H2-related issues. While other problems, such as instrumentation, were also encountered, the main headache was the fuel elements themselves.

A lot of the problems came down to the mechanical and chemical properties of the graphite fuel matrix. Graphite is readily attacked by hot H2, leading to massive fuel element erosion, and a number of solutions were experimented with throughout the test series. With the exception of the KIWI-A reactor (which used unclad fuel plates and was heavily affected by the propellant), each of the reactors featured fuel elements that were clad to a greater or lesser extent, using a variety of methods and materials. Niobium carbide (NbC) was often the favored clad material, but other options, such as tungsten, exist.

CVD Coating device, image courtesy LANL

Chemical vapor deposition was an early option, but unfortunately it was not feasible to consistently and securely coat the interior of the propellant channels, and differential thermal expansion was a major challenge. As the fuel elements heated, they expanded, but at a different rate than the coating did. This led to cracking, and in some cases flaking off, of the clad material, exposing the graphite to the propellant and allowing it to erode away. Machined inserts were a more reliable clad form, but were more complex to install.
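To get a feel for why the coatings cracked, we can estimate the mismatch strain between clad and matrix. This is a rough sketch using representative, assumed coefficients of thermal expansion (both materials vary considerably with grade, orientation, and temperature; these are textbook-order figures, not Rover measurements):

```python
# Back-of-the-envelope estimate of the clad/matrix thermal expansion
# mismatch described above. CTE values are assumed representative
# figures, not Rover data.
alpha_graphite = 4.0e-6   # 1/K, graphite matrix (assumed, grade-dependent)
alpha_nbc      = 6.6e-6   # 1/K, niobium carbide clad (assumed)
delta_T        = 2200.0   # K, room temperature up to ~2500 K operation

# Strain the coating must accommodate relative to the substrate:
mismatch_strain = (alpha_nbc - alpha_graphite) * delta_T
print(f"mismatch strain = {mismatch_strain:.2%}")
```

Roughly half a percent of differential strain is far more than a brittle ceramic coating can absorb elastically, which is why the clad cracked and flaked as the elements were thermally cycled.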

The exterior of the fuel elements originally wasn't clad, but as time went on it became obvious that this would need to be addressed as well. Some propellant would leak between the prisms, eroding the outside of the fuel elements. This changed the fission geometry of the reactor, released fission products and fuel through erosion, and weakened the already somewhat fragile fuel elements. Usually, though, vapor deposition of NbC was sufficient to eliminate this problem.

Fortunately, these issues are exactly the sort of thing that CFEET and NTREES are able to test, and these systems are far more economical to operate than a hot-fired NTR is. It is likely that by the time a hot-fire test is being conducted, the fuel elements will be completely chemically and thermally characterized, so these issues shouldn’t arise.

The other issue with the fuel elements was mechanical failure, which had a number of causes. The pressure changes dramatically across the system, which leads to differential stress along the length of the fuel elements. The original, minimally supported fuel elements would often undergo transverse cracking, leading to propellant blockage and erosion. In a number of cases, after a fuel element broke this way, its hot end would be ejected from the core.

Rover tie tube image courtesy NASA

This led to the development of a structure that is still found in many NTR designs today: the tie tube. This is a hexagonal prism, the same size as the fuel elements, which supports the adjacent fuel elements along their length. In addition to being a means of support, the tie tubes are also a major source of neutron moderation, because they're cooled by hydrogen propellant drawn from the regeneratively cooled nozzle. The hydrogen makes two passes through the tie tube, one in each direction, before being injected into the reactor's cold end to be fed through the fuel elements.

The tie tubes didn't eliminate all of the mechanical issues that the fuel elements faced. Indeed, even in the NF-1 test, extensive fuel element failure was observed, although none of the fuel elements were ejected from the core. By then, however, new types of fuel elements were being tested (uranium carbide-zirconium carbide-carbon composite, and (U,Zr)C carbide), which offered better mechanical properties as well as higher thermal tolerances.

Current NTR designs still usually incorporate tie tubes, especially because the low-enriched uranium that is the main notable difference in NASA's latest design requires a much more moderated neutron spectrum than an HEU reactor does. Supporting the fuel element mechanically along its entire length (rather than just at the cold end, as was common in NERVA designs) also increases the mechanical stability of the reactor and helps maintain the integrity of the fuel elements.

The KIWI-B and Phoebus reactors were successful enough to serve as starting points for the NERVA engines. NERVA (Nuclear Engine for Rocket Vehicle Application) proceeded in two parts: NERVA-1, or NERVA-NRX, developed the KIWI-B4D reactor into a more flight-prototypic design, including balance-of-plant optimization, enhanced documentation of the workings of the reactor, and coolant flow studies. The second group of engines, NERVA-2, was based on the Phoebus 2 type of reactor from Rover and was ultimately developed into the NERVA-XE, which was meant to be the engine that would power the manned mission to Mars. The NERVA-XE PRIME test was of the engine in flight configuration: the turbopumps, coolant tanks, instrumentation, and even the reactor's orientation (nozzle down, instead of up) were all the way they would have been configured during the mission.

NERVA XE-PRIME pre-fire installation and verification, image via Westinghouse Engineer (1974)

The XE-PRIME test series lasted nine months, from December 1968 to September 1969, and involved 24 startups and shutdowns of the reactor. The 1140 MW reactor operated at a 2272 K exhaust temperature and produced 247 kN of thrust at 710 seconds of specific impulse. The series included new startup techniques from cold-start conditions, verified the reactor control systems – including using different subsystems to manipulate the power and operating temperature of the reactor – and demonstrated that the NERVA program had successfully produced a flight-ready nuclear thermal rocket.
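Those performance figures hang together under the ideal rocket relations, which we can check in a few lines of Python (only the standard gravity constant g0 is introduced here; everything else is from the paragraph above):

```python
# Consistency check on the XE-PRIME figures quoted above, using the
# standard relations F = mdot * Isp * g0 and v_e = Isp * g0.
G0 = 9.80665            # m/s^2, standard gravity
thrust_N = 247e3        # 247 kN of thrust
isp_s = 710.0           # specific impulse, seconds
power_W = 1140e6        # reactor thermal power, 1140 MW

v_exhaust = isp_s * G0                 # effective exhaust velocity, m/s
mdot = thrust_N / v_exhaust            # propellant mass flow, kg/s
energy_per_kg = power_W / mdot         # thermal energy deposited per kg of H2

print(f"exhaust velocity ~ {v_exhaust:.0f} m/s")
print(f"mass flow ~ {mdot:.1f} kg/s of hydrogen")
print(f"~ {energy_per_kg / 1e6:.0f} MJ per kg of propellant")
```

That works out to roughly 35 kg of hydrogen per second, each kilogram picking up about 32 MJ of thermal energy on its way through the core – which is how the exhaust velocity ends up half again higher than the best chemical engines can manage.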

Ending an Era: Post-Flight Design Testing

Toward the end of the Rover program, the engine design itself had been largely finalized, with the NERVA XE-PRIME test demonstrating an engine in flight configuration (with all the relevant support hardware in place, and the nozzle pointing down). However, some challenges remained for the fuel elements themselves. In order to have a more cost-effective testing program for fuel elements, two new reactors were constructed.

PEWEE Test Stand, image courtesy LANL

The first, Pewee, was a smaller (75 klbf, the same size as NASA's new NTR) nuclear rocket engine whose core could be replaced for multiple rounds of testing, but it was only used once before the cancellation of the program – though not before achieving the highest specific impulse of any of the Rover engines. This reactor was never tested outside of a breadboard configuration, because it was never meant to be used in flight. Instead, it was a cost-saving measure for NASA and the AEC: due to its smaller size it was much cheaper to build, and due to its lower propellant flow rate it was also much easier to test. This meant that experimental fuel elements that had undergone thermal and irradiation testing could be tested in a fission-powered, full-flow environment at lower cost.

NF-1 Transverse view, image courtesy NASA

The second was the Nuclear Furnace, which mimicked the neutronic environment and propellant flow rates of the larger NTRs but was not configured as an engine. This reactor was the first to incorporate an effluent scrubber, capturing the majority of the non-gaseous fission products and significantly reducing the radiological release into the environment. It also achieved the highest operating temperatures of any of the reactors tested in Nevada, meaning that the thermal stresses on the fuel elements were higher than would be experienced in a full-power burn of an actual NTR. Again, it was designed to be reused repeatedly in order to maximize the financial return on the reactor's construction, but it was only used once before the cancellation of the program. The fuel elements were tested in separate cans, and none of them were the graphite composite fuel form: instead, CERMET (then known as composite) and carbide fuel elements, which had been under development but not extensively used in Rover or NERVA reactors, were tested. The effluent cleanup system is something we're going to look at in more depth in the next post, as it remains a theoretically possible method of doing hot-fire testing for a modern NTR.

NRX A reactor, which PAX was based on, image courtesy NASA

Westinghouse ANL also proposed a design based on the NERVA XE, called the PAX reactor, which would have had a replaceable core, but this never left the drawing board. Again, the focus had shifted toward lower-cost, more easily maintained experimental NTR test stands, although this one was much closer to flight configuration. It would have been very useful: not only would the fuel have been subjected to a very similar radiological and chemical environment, but the mechanical linkages, hydrogen flow paths, and resulting harmonic and gas-dynamic issues could have been evaluated in a near-prototypic environment. However, this reactor was never tested.

As we've seen, hot-fire testing was something that the engineers involved in the Rover and NERVA programs took exceptional care with. Yes, there were radiological releases into the environment well beyond what would be considered acceptable today, but compared to the releases from the open-air nuclear weapons tests occurring in the immediate vicinity, they were minuscule.

Today, though, these releases would be unacceptable. So, in the next blog post we're going to look at the options, and restrictions, for a modern NTR hot-fire testing facility, including a look at the proposals over the years and NASA's current plan for NTR testing. This will include the exhaust filtration system on the Nuclear Furnace, a more complex (but also more effective) filtering system proposed for the SNTP pebble-bed reactor (Timberwind), a geological filtration concept called SAFE, and a full exhaust capture and combustion system that could be installed at NASA's current rocket test facility at Stennis Space Center.

This post is already started, and I hope to have it out in the next few weeks. I look forward to hearing all your feedback, and if there are any more resources on this subject that I’ve missed, please share them in the comments below!

Los Alamos Pajarito Site

Los Alamos Critical Assemblies Facility, LA-8762-MS, by R. E. Malenfant

A History of Critical Experiments at Pajarito Site, LA-9685-H, by R.E. Malenfant, 1983

Environmental Impacts and Radiological Release Reports

NRDS Nuclear Rocket Effluent Program, 1959-1970 NERC-LV-539-6, by Bernhardt et al, 1974

Offsite Monitoring Report for NRX-A2 1965

Radiation Measurements of the Effluent from the Kiwi-TNT Experiment LA-3395-MS, by Henderson et al, 1966

Environmental Effects of the KIWI-TNT Effluent: A Review and Evaluation, LA-3449, by R. V. Fultyn, 1968

Technological Development and Non-Nuclear Testing

A Review of Fuel Element Development for Nuclear Rocket Engines LA-5931, by J.M. Taub

Rover Nuclear Rocket Engine Program: Overview of Rover Engine Tests N92-15117, by J.L. Finseth, 1992

Nuclear Furnace 1 Test Report LA-5189-MS, W.L. Kirk, 1973

KIWI-Transient Nuclear Test LA-3325-MS, 1965

Kiwi-TNT Explosion LA-3551, by Roy Reider, 1965

An Analysis of the KIWI-TNT Experiment with MARS Code Journal of Nuclear Science and Technology, Hirakawa et al. 1968

Miscellaneous Resources

Safety Neutronics for Rover Reactors LA-3558-MS, Los Alamos Scientific Laboratory, 1965

The Behavior of Fission Products During Nuclear Rocket Reactor Tests LA-UR-90-3544, by Bokor et al, 1996
