U.S. Nuclear Energy: Transform Or Become Irrelevant

By Ramanan Krishnamoorti, Chief Energy Officer, Interim Vice Chancellor for Research & Technology Transfer, Interim Vice President for Research & Technology Transfer and S. Radhakrishnan, Managing Director, UH Energy

The recent financial crisis facing Toshiba, driven by construction cost overruns at the newest nuclear power plants in the U.S., brought home the message: the nuclear power industry in the U.S. must change or become increasingly irrelevant.

This latest financial crisis strikes an industry that has already undergone a radical slowdown since the Fukushima disaster in 2011, which followed stricter regulations and public safety concerns after the Chernobyl disaster in 1986 and the partial meltdown at Three Mile Island in 1979. The increased cost of building traditional high-pressure light water reactors comes at a time when natural gas prices have plummeted and grid-scale solar and wind are becoming price competitive. So with all the financial and environmental concerns – including the very real issue of where and how we should store spent nuclear fuel rods – why should the world even want nuclear power?

Several reasons.

First, nuclear power represents nearly 20% of the electricity generated in the U.S.; only coal and natural gas account for higher shares. More important than the total percentage, nuclear can provide highly reliable base load power, a critical factor as we move toward more intermittent sources such as wind and solar. Nuclear also has the highest capacity factor of any fuel source: that is, the highest ratio of power actually produced to potential power generation, underscored by the fact that it supplies nearly a fifth of U.S. electricity while representing only about 9% of installed capacity.
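That capacity factor arithmetic can be sketched in a few lines. The installed-capacity and annual-generation inputs below are rough, illustrative values for the U.S. nuclear fleet, assumed here for the sake of the calculation rather than taken from official statistics:

```python
# Back-of-envelope capacity factor: actual generation divided by the
# output the fleet would produce if it ran at full power all year.
# Both inputs are approximate, assumed values for the U.S. nuclear fleet.
installed_mw = 99_000        # ~99 GW installed nuclear capacity (assumed)
actual_mwh = 800_000_000     # ~800 TWh generated per year (assumed)
hours_per_year = 8760

potential_mwh = installed_mw * hours_per_year   # output if run flat-out
capacity_factor = actual_mwh / potential_mwh

print(f"capacity factor = {capacity_factor:.2f}")  # roughly 0.9
```

The same two-line ratio applied to gas, coal, wind or solar fleets yields the much lower capacity factors that make nuclear's 9%-of-capacity, 20%-of-generation split possible.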

Clearly, nuclear, combined with natural gas, could be a great mechanism for replacing coal as base-load power. Moreover, natural gas power plants can be rapidly mobilized and de-mobilized and effectively offset the inherent intermittency of solar and wind in the absence of effective grid-scale storage.

Which points to the second reason: energy sources not based on hydrocarbons have become the de facto option to decrease anthropogenic carbon dioxide. Thus, along with solar and wind, nuclear represents a significant technological solution to address the human-caused CO2 issue.

A strong case for nuclear was recently presented at a symposium hosted by UH Energy, especially if we are looking for a rapidly scalable solution. Nuclear power technology continues to evolve away from the concrete-intensive, high-pressure light water design and toward modular and molten salt-based processes, especially outside the U.S. With the broad availability of nuclear fuel, especially as thorium and certain transuranic elements increasingly become fuels of choice, this technology is scalable and ready for global consumption. If done right, the use of thorium and some transuranic fuels might substantially scale down the issue of spent fuel disposal.

But other, less tangible barriers remain. Perhaps the single largest barrier for nuclear energy, after the economics of traditional nuclear power plants, is social acceptance. Near-misses such as Three Mile Island and the catastrophic accidents at Chernobyl and Fukushima highlight the challenge of gaining broad societal acceptance of nuclear energy. Compounding these challenges is the much-publicized possibility of a “dirty bomb” built from nuclear material diverted to rogue actors.

Reducing the amount of fissile material in a power plant, and reducing or even eliminating the risks of accidents and diversion, are crucial to gaining the public’s confidence. One significant advancement that might help is fuel reprocessing and, with it, the virtual elimination of nuclear fuel waste. While these technologies are in their infancy, rapid advancement and scale-up could significantly shift public perception of nuclear power.

Despite the barriers, several symposium speakers argued that the increased use of nuclear energy is not only possible but the best bridge to a low-carbon future. They did not deny the concerns, especially the staggering upfront cost of building a new nuclear power plant. Jessica Lovering, director of energy at The Breakthrough Institute, acknowledged that the upfront cost in the U.S. has quadrupled since the 1970s and ’80s, largely stemming from increased safety engineering in response to tougher regulations and the custom development of each nuclear facility. In contrast, Lovering has reported that in France, through standardization of equipment and centralization of generation capacity, the cost of new generation capacity has risen far more slowly. And therein lies a potential path forward for how the nuclear industry may adapt.

Perhaps the biggest disruption to the current nuclear paradigm comes from two large changes that are just getting started. First is the global reach of South Korea and its ambition to become the leading global supplier of nuclear power generation. Building on technologies imported from Canada, France and the U.S., and applying the key lessons of the French nuclear industry's success with standardization and centralization, Korea has taken on building modular nuclear power plants assembled at a single site. And the site it is working from is the United Arab Emirates! Using these advances, Korea has kept capital costs for new generation capacity under $2,400 per kilowatt, compared with $5,339 per kilowatt in the United States in 2010, according to the Nuclear Energy Agency. Interestingly, China is looking to emulate the Korean model, and with as many as 30 new nuclear power reactors planned over the next two decades in China alone, the global competition is heating up.

Second is the advancement of small modular reactor (SMR) technologies, which have now reached prototype testing. The opportunity and challenge associated with SMRs are captured in a recent DOE report. These reactors are designed with smaller nuclear cores and are inherently more flexible; they employ passive safety features, have fewer parts and components, and thus fewer dynamic points of failure, and can be easily scaled out through their modular design.

Done at scale, this approach would allow reactors to be constructed more quickly and at much lower capital cost than traditional reactors. Aside from the technical advances needed to produce this technology at scale, issues of public policy, public perception, regulatory predictability and (micro)grid integration need to be resolved.

The U.S. nuclear power industry needs to embrace the Korean model and SMR technologies in order to transform itself and provide base load capacity. The traditional model has failed us in too many ways.

Dr. Ramanan Krishnamoorti is the interim vice chancellor and vice president for research and technology transfer and the chief energy officer at the University of Houston. During his tenure at the university, he has served as chair of the Cullen College of Engineering’s chemical and biomolecular engineering department, associate dean of research for engineering, professor of chemical and biomolecular engineering with affiliated appointments as professor of petroleum engineering and professor of chemistry.

Dr. Suryanarayanan Radhakrishnan is a clinical assistant professor in the Decision and Information Sciences department and the managing director of UH Energy. He previously worked for Shell Oil Company, where he held various positions in planning, strategy, marketing and business management. Since retiring from Shell in 2010, Dr. Radhakrishnan has taught courses at the Bauer College of Business in supply chain management, project management, business process management, innovation management and statistics.

Don’t Expect Carbon Capture To Save Coal

By Ramanan Krishnamoorti, Chief Energy Officer and Interim Vice Chancellor for Research and Technology Transfer, University of Houston

There has been a lot of excitement around the recent startup of commercial scale carbon capture and sequestration operations by NRG Energy at the W.A. Parish coal-fired power plant near Houston. The reason is clear: the technology offers the promise of “clean” coal, with little or no CO2 emission, and the potential to revive the coal-based power generation industry, which has declined nationally from about 44% of electric power generation in 2009 to 31% today.

Moreover, effectively sequestering and using the CO2 to enhance oil recovery operations in declining oil and gas fields bolsters the case, both from an environmental perspective and an economic perspective.


This raises the question of why I was less than enthusiastic about the scale out and advancement of this technology in a recent Houston Public Media interview.

Let’s start with economics.

The capital expenses required for the technology, the energy required to power the CO2 capture system and the current price of the crude oil recovered through enhanced oil recovery mean that operating costs for a coal-fired generating plant coupled with carbon capture technology are 30% to 35% higher than those for a coal-fired plant alone. This is consistent with extensive life cycle analysis studies of carbon capture and sequestration (Sathre, 2011).

NRG executives, I should note, disagree and say the technology is essentially cost-neutral when oil is $50 a barrel, as profits from the additional oil harvested with the use of the sequestered carbon cover both capital and operating expenses. Moreover, they estimate that with scale up and improvements in technology, the W. A. Parish plant operates its carbon capture and sequestration with a parasitic energy load of 21% or lower and not the broad industry standard of 30% to 35%.

Their bigger argument in support of the project is that investing in this technology now will pay off globally down the road. Even if the decline of coal in the United States isn’t reversed, due to concerns about climate change and because cleaner-burning natural gas is cheaper, this thinking suggests that new markets for the capture technology may open in China and India, where new coal units continue to come online. They also see it as an important step toward developing other low-carbon technologies, including the use of this carbon capture at natural-gas fired power plants.

I respect that argument, and NRG’s investment. Their project at the W.A. Parish plant has moved us further along the learning curve for this and future technologies that would allow for a more sustainable use of hydrocarbon fuels.

But I question whether the project and technology are economically scalable to other locations – especially coal-fired plants that aren’t near an existing oil or gas field in the U.S. – in the absence of substantial economic incentives for the use of such capture technologies. And there are other concerns:

The physical footprint.

The technology significantly increases the physical space required for the power generation unit, and if scaled up to accommodate an average 1 to 4 gigawatt coal-based power plant, I believe that footprint would be a significant impediment. In an era when most communities globally operate with a “not in my backyard” philosophy, the physical size of the current technology will pose a serious barrier to wide-scale adoption, especially in new construction of coal-fired power plants.

The price of oil.

Using the captured CO2 to improve oil production, and storing it geologically in oil formations, is the critical piece in making carbon capture and sequestration technology economically viable, whether it involves coal or natural gas.

The proximity between power plant and oilfield will matter – sending the CO2 by pipeline to a field 100 miles away is far different than sending it to an oilfield 1,000 miles away. NRG and its partner in the Parish plant, JX Nippon, are sending the CO2 about 90 miles away, to a field in South Texas.

But the price at which a producer can sell the additional oil harvested by CO2 injection is a more fundamental issue for the technology’s viability. With oil hovering around $50 per barrel and an expected improvement in production of four barrels per ton of captured CO2, the economics of the combined process are not promising.
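The revenue side of that arithmetic is easy to check. The sketch below uses only the figures cited here (four incremental barrels per ton of CO2, and the oil prices under discussion), and deliberately ignores the capture, compression, pipeline and oilfield operating costs that this gross revenue would have to cover:

```python
def gross_eor_revenue_per_ton(oil_price_usd, barrels_per_ton=4.0):
    """Gross revenue per ton of captured CO2 from incremental oil,
    before any capture, transport or oilfield operating costs."""
    return oil_price_usd * barrels_per_ton

# Today's ~$50/bbl versus the $80-$100 range where the technology
# might become viable:
for price in (50, 80, 100):
    print(f"${price}/bbl -> ${gross_eor_revenue_per_ton(price):.0f} per ton of CO2")
```

At $50 oil the gross ceiling is $200 per ton of CO2; whether that covers the full cost stack is exactly the question the parasitic-load and capital-cost figures above raise.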

Should the price of oil increase to between $80 and $100 per barrel, or if a considerable carbon tax is incorporated and natural gas prices stay near their current prices, this technology might, in fact, be economically viable in spite of the high capital costs.

Competing technologies, especially solar, solar thermal and wind.

Renewables are becoming increasingly cost competitive with coal and natural gas: the unsubsidized levelized cost of wind energy has dropped by 66% since 2009, and the unsubsidized levelized cost of photovoltaic solar has dropped by 85% over the same period. New power generation from utility-scale solar and wind is now cost-comparable to that from combined cycle natural gas.

There are legitimate questions about how soon these intermittent energy sources can be incorporated into the grid without improvements in affordable grid-scale storage technology. Storage technologies are still being developed and are likely to increase the cost of using renewables, although how much is an open question.

Undoubtedly, however, the share of renewables on the nation’s power grid will grow, along with the deployment of other novel technologies that can render natural gas power generation nearly carbon neutral. One example is using the patented Allam cycle to drive generation turbines with high pressure, high temperature CO2 and then capturing the carbon, a less expensive and possibly less complicated process.

These competing technologies are likely to be adopted much more rapidly than the current technology based on coal, even if the lessons learned from NRG’s Parish plant serve to guide research into other sequestration technologies.

Renewables, backed by newer competing technologies, and the low price of natural gas-based power generation in the United States will pose a significant challenge to the continued nationwide deployment of the clean coal technology demonstrated in Texas.


The Future Of Transportation: Not So Fast, Marty McFly!

By Ramanan Krishnamoorti, UH Chief Energy Officer

Back when I was a college student in India, I was quite taken by “Back to the Future,” the Steven Spielberg-produced movie that promised hoverboards for personal transportation and a fantasy nuclear-powered car that made time travel possible. Today, hoverboards are a reality, albeit as a toy and not quite ready to serve as a personal transportation device. Tesla is selling cars with “falcon” wings, like the DeLorean in Back to the Future, and an “insane speed” button. And I am left pondering the future of personal transportation.

Three fundamental changes are forcing us to pause and consider the future of transportation: the ascendancy of alternative and sustainable forms of energy that look to replace fossil fuels, specifically crude oil-based gasoline and diesel; lighter, higher energy density storage batteries; and the ubiquitous sweep of global positioning satellites, cellular data and the internet of things.

Is the future of transportation one where a driverless, ownerless, electric car is called up on demand, either from the road in the neighborhood — a la Uber — or from a nearby garage?

Very possibly, and in the not-so-distant future. But only in niche markets. Expanding this to the whole realm of personal transportation still faces significant hurdles.

I see this happening in urban locations where space is limited, travel times are relatively short and significant advantages of scale can be used to replace the existing infrastructure of private cars, private garages and public parking lots. In fact, every time I visit New York City or Washington, D.C., I believe those cities are already living in a socialized era of personal transportation that reflects an ownerless car society.  In a controlled urban environment – say within the city limits of any major metropolis where driving bans are common, such as Singapore, Beijing or New Delhi – it would appear that the remaining hurdles of driverless and electric cars would be most easily tackled.

Google and Tesla are already experimenting with driverless cars.  And yes, electric cars are becoming more commonplace.  But an enormous challenge remains.  These concepts, especially driverless cars, must be executed at full scale or risk failure, possibly disaster.

The question is whether human-driven cars and driverless cars can coexist.  Testing is underway in California and Austin, among other places, and much of the preliminary data suggest accidents occur when the driverless cars are confounded by the driving patterns of cars driven by humans.

Even at scale, there are serious ethical questions about driverless cars and the decision-making that might be required of machines. It’s easy to say the cars would be programmed to protect life over property. But what about avoiding an errant pedestrian while risking injury to the car’s passengers? How do we feel about allowing those split-second decisions to be made by the algorithms maneuvering the car?

And then there are natural disasters – such as severe solar flares, which peak with the sun's 11-year activity cycle – that could knock out communications and significantly disrupt the power grid. We will have to build safety features into the technology before large-scale deployment. The weakest links of this increasingly complex communications network will define the resilience of the technology.

Even with all of that confronting driverless cars, replacing the entire fleet of internal combustion engine based personal transportation vehicles with electric cars will prove the more challenging issue.

First, why electric cars? The use of fossil fuels – gasoline, diesel, compressed or liquefied natural gas – and biofuels, including ethanol and biodiesel, leads to the widespread generation of carbon dioxide and, in some cases, pollutants such as nitrogen oxides and particulate matter, including soot. Moreover, natural gas, either liquefied or compressed, faces two challenges: it is intrinsically less energy-dense than petroleum-based gasoline, and the cost of the infrastructure needed to switch to natural gas in a country as large as the U.S. would be enormous – several trillion dollars, according to some estimates. Biofuels production at scale in the U.S., on the other hand, would seriously raise the food-vs.-fuel debate, and their use would pose some of the same challenges as diesel, such as increased particulate pollution. Hydrogen-powered automobiles were considered the panacea about 10 years ago but have fallen behind because of significant engineering challenges.


Source: U.S. Energy Information Administration, based on the National Defense University

Today, with over 300 million automobiles and about three trillion vehicle miles driven annually in the U.S., a little over a quarter of the nation's energy demand comes from the transportation sector; cars, light trucks and motorcycles alone represent about 20% of all the energy consumed in the U.S. Even the most ambitious plans for the adoption of alternate energy platforms suggest that those platforms, if deployed at full throttle 50 years from now, would be able to provide the energy needs for personal transportation and nothing more.

In the meantime – until the alternate energy platforms mature – where will we build the electric power generation we would need to power all of those electric vehicles?  And will those be powered by coal, natural gas or nuclear energy? These are critical questions that need to be confronted by all of us as we consider the future of transportation.

As daunting as those challenges are, one looms larger for the broad-based electrification of the automobile fleet: current battery technology. While significant improvements in battery storage technologies have been made over the last two decades, the challenges of weight, energy density, cost and reliability over an extended period of use have fueled significant anxiety about batteries and, more broadly, electric vehicles. The technology has improved, thanks to millions of dollars invested from both public and private sources, but it will take several years before widespread use can be expected.

Add in one last challenge – a generational low in gasoline prices and continuing improvements in the efficiency of internal combustion engines – and, sorry, Marty McFly, but the anticipated widespread adoption of driverless, ownerless, nomadic, all-electric automobiles is probably going to come later rather than sooner.

Feature image: By Steve Jurvetson, via Wikimedia Commons

Are High Efficiency Clean Diesel Automobiles a Myth?

By Ramanan Krishnamoorti, UH Chief Energy Officer & William S. Epling, Cullen College of Engineering

The controversy over Volkswagen’s admission that it rigged pollution tests in the United States – causing the tests to show fewer emissions than the company’s highly efficient diesel vehicles produced during normal operation – has raised many questions.