The prospect of nuclear power conjures up vastly different feelings today than it did in 1954, when the Obninsk Nuclear Power Plant (USSR) became the world's first to generate electricity for a power grid. The intervening 64 years have witnessed major nuclear accidents, most notably at Three Mile Island in Pennsylvania (1979), Chernobyl in the former Soviet Union (1986), and Fukushima, Japan (2011), although malfunctions at nuclear plants can be traced further back, to the Windscale Piles plant in the UK (1957) and Kyshtym in the Soviet Union (1957), ranked at levels 5 and 6 respectively on the seven-point International Nuclear Event Scale. The events at Fukushima and Chernobyl are the only level 7 events recorded by the International Atomic Energy Agency.
Nuclear fallout from these malfunctions has had devastating consequences, both immediate and long-term: the figures are contested, but elevated rates of thyroid cancers, birth defects, and other health problems point to radiation exposure in the surrounding areas for generations after these disasters. Since the National Academy of Sciences’ 1979 report, “Carbon Dioxide and Climate,” articulated the scientific consensus about the effects of human activity on global climate change, the need to find so-called ‘clean’ alternatives to the rapidly depleting reserves of fossil fuels has shifted the stakes in the conversation about the trade-offs of nuclear energy.
But what were the considerations of that first generation of nuclear physicists and engineers who built and operated the first power plants of a burgeoning and deeply promising nuclear industry? Lewis Lichtenstein Strauss, the third chairperson of the US Atomic Energy Commission, outlined his hope for a future fuelled by nuclear power in a speech to the National Association of Science Writers in New York on September 16, 1954:
Our children will enjoy in their homes electrical energy too cheap to meter. […] It is not too much to expect that our children will know of great periodic regional famines in the world only as matters of history, will travel effortlessly over the seas and under them and through the air with a minimum of danger and at great speeds, and will experience a lifespan far longer than ours, as disease yields and man comes to understand what causes him to age. This is the forecast of an age of peace.
Unsurprisingly, the promise of an age of peace was prevalent in public discourse around nuclear energy in the decade following the bombing of Hiroshima and Nagasaki (1945). President Eisenhower’s “Atoms for Peace” speech to the UN General Assembly in 1953 promised an America that was “constructive, not destructive”; in 1955, the Department of State released Peaceful Uses of Atomic Energy, a booklet which promised “this great new source of energy will eventually make an enormous contribution to agricultural and industrial productivity and production and hence to feeding, clothing, and housing the rapidly increasing population of the world.” Three years later, in May 1958, the first civilian power reactor in America was officially opened at Shippingport, Pennsylvania.
Meanwhile in the UK, the world’s first commercial nuclear power station was opened by Queen Elizabeth II at Calder Hall in Cumberland on October 17, 1956. Lord Privy Seal Richard Butler described the moment as “epoch-making,” and the chairman of the Atomic Energy Authority, Sir Edwin Plowden, said that “Nothing that comes after will be able to detract from the importance of this first great step forward.” Plowden, like many, saw the opening of the plant as heralding “a second industrial revolution” with nuclear power as its basis.
It was against this backdrop of optimism that the desire for a constructive application of nuclear power developed. The ‘Peacetime Atom’ would create far less waste than other forms of energy production, and eventually prove more cost-effective than the burning of coal; this argument, still invoked by those in favour of nuclear power today, comes with the crucial caveat that this minimal quantity of waste is far more toxic and harder to treat than that of the alternatives. And while optimism about nuclear energy dominated early public discourse, for many it was immediately clear that nuclear power offered a deferral of consequences rather than a solution. An article in The New York Times, later quoted at Westminster in March 1954 by Labour MP Frank Anderson, warned that “the day of the atomic power plant has not yet arrived, but already engineers are worrying about what they will do with radioactive by-products.”
Standing in the shadow of the chimneys of the Windscale plant, where less than a year later a three-day fire would release large amounts of radioactive contamination across the continent, Queen Elizabeth II warned at the opening ceremony:
A grave responsibility is placed upon all of us to see that man adds as much to his stature by the application of this new power as he has by its discovery. Future generations will judge us, above all else, by the way in which we use these limitless opportunities which providence has given us and to which we have unlocked the door.
Economics remains the determining factor in the viability of energy production. Today the US is the world’s largest producer of nuclear power, and its ninety-nine nuclear reactors produced 805 billion kWh in 2016, amounting to almost 20% of America’s total electrical output. Two further reactors are under construction, expected to come into use by 2020. The “epoch-making” allure of harnessing a vast power, and its burgeoning economic potential, drove early investment in nuclear power; but for the scientists first engaging in this field, the new exploration was predicated on the downplaying of risk, and on the assurance that the geological disposal of hazardous waste would be, without exception, manageable and safe. In the context of growing Cold War tensions, the respected role of scientists in civic life became increasingly visible. Space was being explored, atoms were being split, and everything seemed knowable.
—John King