U.S. Nuclear Energy: Transform Or Become Irrelevant

By Ramanan Krishnamoorti, Chief Energy Officer, Interim Vice Chancellor for Research & Technology Transfer, Interim Vice President for Research & Technology Transfer and S. Radhakrishnan, Managing Director, UH Energy

The recent financial crisis facing Toshiba due to construction cost overruns at the newest nuclear power plants in the U.S. brought home the message: the nuclear power industry in the U.S. must change or become increasingly irrelevant.

This latest financial crisis strikes an industry that has already undergone a radical slowdown since the Fukushima disaster in 2011, which compounded the stricter regulations and public safety concerns that followed the Chernobyl disaster in 1986 and the partial meltdown at Three Mile Island in 1979. The increased cost of building traditional high-pressure light-water reactors comes at a time when natural gas prices have plummeted and grid-scale solar and wind are becoming price competitive. So with all the financial and environmental concerns – including the very real issue of where and how we should store spent nuclear fuel rods – why should the world even want nuclear power?

Several reasons.

First, nuclear power represents nearly 20% of the electricity generated in the U.S.; only coal and natural gas account for higher shares. More important than the total percentage, nuclear can provide highly reliable base-load power, a critical factor as we move toward more intermittent sources such as wind and solar. Among all fuel sources, nuclear power has the highest capacity utilization factor, that is, the highest ratio of power actually produced to potential power generation, a point highlighted by the fact that it represents only 9% of installed capacity in the U.S.
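The capacity utilization (capacity factor) comparison can be made concrete with a quick calculation. The 1,000 MW plant and its annual output below are hypothetical round numbers, not figures from the article:

```python
HOURS_PER_YEAR = 8760  # 24 * 365

def capacity_factor(energy_mwh, capacity_mw):
    """Ratio of energy actually produced to the maximum the plant
    could produce if it ran flat out all year."""
    return energy_mwh / (capacity_mw * HOURS_PER_YEAR)

# Hypothetical 1,000 MW nuclear unit generating 7.9 million MWh in a year:
print(f"{capacity_factor(7_900_000, 1_000):.0%}")  # → 90%
```

By the same arithmetic, producing roughly 20% of U.S. electricity from only 9% of installed capacity is possible only if nuclear runs at a far higher capacity factor than the rest of the fleet.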

Clearly, nuclear, combined with natural gas, could be a great mechanism for replacing coal as base-load power. Moreover, natural gas power plants can be rapidly mobilized and de-mobilized and effectively offset the inherent intermittency of solar and wind in the absence of effective grid-scale storage.

Which points to the second reason: energy sources not based on hydrocarbons have become the de facto option to decrease anthropogenic carbon dioxide. Thus, along with solar and wind, nuclear represents a significant technological solution to address the human-caused CO2 issue.

A strong case for nuclear was recently presented at a symposium hosted by UH Energy, especially if we are looking for a rapidly scalable solution. Nuclear power technology continues to evolve away from the concrete-intensive, high-pressure light-water design and toward modular and molten salt-based designs, especially outside the U.S. With the broad availability of nuclear fuel, especially in a world where thorium and other trans-uranium elements are increasingly becoming the fuels of choice, this technology is scalable and ready for global consumption. If done right, the use of thorium and some of the trans-uranium elements could substantially scale down the problem of spent fuel disposal.

But other, less tangible barriers remain. Perhaps the single largest barrier for nuclear energy, after the economics of traditional nuclear power plants, is social acceptance. The near-miss at Three Mile Island and the catastrophic incidents at Chernobyl and Fukushima highlight the challenge of gaining broad societal acceptance of nuclear energy. Compounding these challenges is the much-publicized possibility of a “dirty bomb” based on nuclear material from rogue nations.

Reducing the amount of fissile material in a power plant, and reducing or even eliminating the risk of catastrophic failure, are crucial to gaining the public’s confidence. One significant advancement that might help is fuel reprocessing and, with it, the virtual elimination of nuclear fuel waste. While these technologies are in their infancy, rapid advancement and scale-up might result in a significant shift in public perception of nuclear power.

Despite the barriers, several symposium speakers argued that increased use of nuclear energy is not only possible but the best bridge to a low-carbon future. They did not deny the concerns, especially the staggering upfront cost of building a new nuclear power plant. Jessica Lovering, director of energy at The Breakthrough Institute, acknowledged that the upfront cost in the U.S. has quadrupled since the 1970s and ’80s, largely because of increased safety engineering in response to tougher regulations and the custom development of each nuclear facility. In contrast, Lovering has reported that in France, through standardization of equipment and centralization of generation capacity, the cost of new generation capacity has risen far more slowly. And therein lies a potential path forward for how the nuclear industry may adapt.

Perhaps the biggest disruption to the current nuclear paradigm comes from two large changes that are just getting started. First is the global reach of South Korea and its desire to become the leading global supplier of nuclear energy production. Building on technologies imported from Canada, France and the U.S., and applying the key lessons of the French nuclear industry’s success with standardization and centralization, Korea has taken on building modular nuclear power plants, assembled at a single site. And the site it is working from is the United Arab Emirates! Using these advances, Korea has been able to keep capital costs for new generation capacity under $2,400 per kilowatt. That compares with $5,339 per kilowatt in 2010 in the United States, according to the Nuclear Energy Agency. Interestingly, China is looking to emulate the Korean model, and with as many as 30 new nuclear reactors for power generation planned over the next two decades in China alone, the global competition is heating up.

Second is the advancement of small modular reactor (SMR) technologies, which have now reached prototype testing. The opportunities and challenges associated with SMRs are captured in a recent DOE report. These reactors are designed with smaller nuclear cores and are inherently more flexible: they employ passive safety features, have fewer parts and components, and thus fewer dynamic points of failure, and can easily be scaled out through their modular design.

Produced at scale, SMRs could be constructed more quickly and at much lower capital cost than traditional reactors. Aside from the technical advances needed to produce this technology at scale, issues of public policy, public perception, regulatory predictability and (micro)grid integration need to be resolved.
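For a sense of scale, the capital-cost figures quoted above for Korean-built versus U.S. plants can be compared directly. This is a back-of-the-envelope sketch using only the two numbers cited in the text:

```python
# Capital costs for new nuclear generation capacity, per kW, as cited above.
korea_usd_per_kw = 2_400   # Korean-built plants (UAE project)
us_usd_per_kw = 5_339      # U.S., 2010, per the Nuclear Energy Agency

ratio = us_usd_per_kw / korea_usd_per_kw
print(f"U.S. capital cost is about {ratio:.1f}x the Korean figure")  # → about 2.2x
```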

The U.S. nuclear power industry needs to embrace the Korean model and SMR technologies in order to transform itself and provide reliable base-load capacity. The traditional model has failed us in too many ways.


Dr. Ramanan Krishnamoorti is the interim vice chancellor and vice president for research and technology transfer and the chief energy officer at the University of Houston. During his tenure at the university, he has served as chair of the Cullen College of Engineering’s chemical and biomolecular engineering department, associate dean of research for engineering, and professor of chemical and biomolecular engineering, with affiliated appointments as professor of petroleum engineering and professor of chemistry.

Dr. Suryanarayanan Radhakrishnan is a Clinical Assistant Professor in the Department of Decision and Information Sciences and the Managing Director of UH Energy. He previously worked at Shell Oil Company, where he held various positions in planning, strategy, marketing and business management. Since retiring from Shell in 2010, Dr. Radhakrishnan has taught courses at the Bauer College of Business in supply chain management, project management, business process management, innovation management and statistics.


Wind And Solar Power Seem Cheap Now, But Will The Cost Go Up As We Use More Of It?

By Earl J. Ritchie, Lecturer, Department of Construction Management

Everyone talks about wind and solar power becoming cost competitive, but their cost will rise as their share of generation increases and we have to pay more to integrate them into the electrical system. How much it will rise remains the subject of debate.

The cost of electricity from wind and solar energy, as well as other variable sources, has two components: the cost of generation and the cost of integration into the electrical system. As discussed in an earlier post, integration costs are expected to increase disproportionately as the share of wind and solar increases, potentially offsetting the decreasing cost of generation.

The cost of generation alone is fairly well defined. There is some disagreement about the likely extent of future cost reductions, but the ranges are relatively narrow. Bloomberg New Energy Finance estimates of about $40–$50 per megawatt-hour (MWh) are typical.

[Figure ritchie_1_020717. Source: Bloomberg 2016]

As shown below, except for utility-scale solar, the rate of cost reduction has slowed in recent years, so estimates of future reductions in wind power and rooftop solar costs may be optimistic. These are levelized costs: estimates of the actual cost of generation. They do not include integration costs and may differ from reported auction prices, which are affected by market conditions and subsidies.

[Figure ritchie_2_020717. Source: Lazard 2016]
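A levelized cost is lifetime costs divided by lifetime generation, both discounted to present value. The sketch below shows the standard simplified formula; the wind-farm numbers (capex, O&M, capacity factor, discount rate) are illustrative assumptions, not values from the studies cited here:

```python
def lcoe(capex, annual_opex, annual_mwh, discount_rate, years):
    """Levelized cost of electricity, $/MWh: discounted lifetime costs
    divided by discounted lifetime generation (fuel, degradation and tax
    effects are omitted in this simplified form)."""
    disc = [(1 + discount_rate) ** -t for t in range(1, years + 1)]
    costs = capex + sum(annual_opex * d for d in disc)
    energy = sum(annual_mwh * d for d in disc)
    return costs / energy

# Hypothetical 100 MW wind farm: $1,500/kW capex, $40/kW-yr O&M,
# 35% capacity factor, 7% discount rate, 25-year life.
capex = 1_500 * 100_000      # $150M for 100,000 kW
opex = 40 * 100_000          # $4M per year
mwh = 100 * 8760 * 0.35      # 306,600 MWh per year
print(f"LCOE = ${lcoe(capex, opex, mwh, 0.07, 25):.0f}/MWh")  # → $55/MWh
```

Discounting future generation is what makes the levelized number sensitive to the discount rate: a higher rate penalizes capital-heavy sources such as wind, solar and nuclear.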

The IPCC estimate

As addressed in Section 7.8.2 of the IPCC’s fifth Assessment Report, there are three components of integration cost: (1) balancing costs (originating from the required flexibility to maintain a balance between supply and demand), (2) capacity adequacy costs (due to the need to ensure operation even at peak times of the residual load), and (3) transmission and distribution costs.

The IPCC does not give specific costs at high penetration levels. Its ranges for penetration levels of 20% to 30%, in dollars per MWh, are $1–$7 for balancing, $0–$10 for capacity adequacy, and $0–$15 for transmission and distribution, for a total range of $1–$32.

Even at these levels the integration costs are significant. At an estimated future generation cost of $45 per MWh, the middle of the IPCC range of integration costs, $16.50 per MWh, adds about 37%. It is generally recognized that the integration cost of variable renewable energy (VRE) at penetration above 30% will be higher, but it is difficult to estimate.
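That 37% figure follows directly from the numbers above; a minimal check, using the IPCC range and the assumed $45/MWh generation cost:

```python
# IPCC integration-cost range at 20-30% penetration, in $/MWh.
low, high = 1, 32
midpoint = (low + high) / 2    # $16.50/MWh
generation_cost = 45           # assumed future generation cost, $/MWh

print(f"Midpoint integration cost adds {midpoint / generation_cost:.0%}")  # → 37%
```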

The complexities of integration

Intermittency must be managed at a continuum of time scales, from milliseconds to years. There are costs associated with all timeframes; however, published analyses focus primarily on the longer intervals of balancing and adequacy.

[Figure ritchie_3_020717. Source: World Bank 2015]

Various measures to manage this variation – storage, source mix, overcapacity, demand management, etc. – have differing costs, advantages and disadvantages which can be traded off. This results in a complex situation in which the optimum solution is typically not obvious.

Estimates of integration cost at higher levels vary so widely that it is almost impossible to generalize. Local conditions and design choices significantly affect cost. As a study by the Danish Association of Engineers put it, “the design of future 100% renewable energy systems is a very complex process.” An almost infinite number of combinations of sources is possible, depending upon location, anticipated demand, degree of decarbonization and emphasis on economics.

How future costs are estimated

Both optimization and cost forecasting are done with mathematical models. Significant differences may result from the model used. Some characteristics and weaknesses of the three main classes of model are shown below.

[Figure ritchie_4_020717. Source: Ueckerdt 2015]

Limitations of the models mean that not all aspects of the system can be incorporated in any one model. This may result in overestimates or underestimates. In addition, published studies frequently consider only one aspect, such as the addition of wind power alone.

The limitations and possible sources of error in these studies are normally well understood by the authors and explained in the original articles. Such caveats rarely reach the popular articles quoting the results. There can also be deliberate or subconscious bias in the choice of parameters, reflecting the prejudices of the authors.

The variation in estimates

The result of these factors is considerable variation in cost estimates, even when similar systems are being analyzed. Two examples demonstrate the range:

The first estimate below is a model of adding wind energy to an existing grid similar to the European grid. It does not consider externalities, such as renewables mandates, but does include a carbon tax of 20 Euros per ton of CO2. The upper dashed line shows short term costs, and the solid black line long term.

The model shows integration cost equal to generation cost at 40% penetration. That is, the cost doubles. It does not consider possible storage or extending the grid to optimize the system.

[Figure ritchie_5_020717. Source: Ueckerdt, et al. 2013]

A 2016 US study by Lantz, et al., showed a mix of about 42% variable renewable energy to have a net present value cost $59 billion higher than an economically optimized scenario. They did not give a per kilowatt-hour cost, but modeled a modest 3% increase in retail electricity cost in 2050. The authors comment that the cost may be understated because of lack of detail in the model.

[Figure ritchie_6_020717. Source: Modified from Lantz, et al. 2016]

Further examples include the widely publicized papers by DeLucchi and Jacobson, which estimate transmission and storage costs as $20/MWh for 100% variable renewables, and the 2012 NREL study, based on somewhat dated costs, which estimates up to $54/MWh over a fossil fuel dominated scenario for 90% renewables (48% wind and solar). Published scenarios are hotly debated.

The headline cost in such studies cannot be taken at face value. In addition to variances due to choice of model, such obvious influences as assumed fossil fuel prices and future cost reductions in generation methods must be weighed in assessing the estimates. As might be expected, proponents of a particular technology will frequently make assumptions favorable to their preferred energy source.

Other renewables and the social cost of carbon

Some issues not discussed in detail here include the other variable renewables, wave and tide; the dispatchable renewables, hydroelectric, geothermal, and biomass; and the social cost of carbon.

Wave and tide are expected to contribute only a small fraction of future electricity generation. They may be complementary to other forms of variable renewable energy.

Hydroelectric and geothermal can be highly desirable as low carbon, low-cost and dispatchable. Very high renewables penetration has already occurred in areas where these resources are abundant. New Zealand is above 80%; Norway and Iceland are over 90%.

Electricity generated from biomass is dispatchable but creates greenhouse gases at the site of generation. The extent to which this is offset by land use changes and carbon storage of the fuel crops depends upon the generation technology, the type of fuel crop and management of the crop. Estimates of offset are controversial but most calculate net reduction in greenhouse gases compared to fossil fuel generation.

The social cost of carbon (SCC) is not the focus of this article, which concentrates on the actual cost of generation. SCC is speculative, with typically quoted numbers from about $5 per ton of CO2 to $100, although extremes can exceed $1,000. The US government’s 5th percentile to 95th percentile range of the cost in 2020 is from zero to about $180. Obviously, the inclusion of any positive SCC will shift economic analysis toward low carbon sources.
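The way an SCC shifts the economics is mechanical: the per-MWh adder is simply the carbon price times a plant's emissions intensity. The intensities below are rough, commonly cited approximations (about 1 ton CO2/MWh for coal, 0.4 for combined-cycle gas), not figures from this article:

```python
def scc_adder(scc_per_ton, tons_co2_per_mwh):
    """$/MWh added to generation cost by pricing CO2 at scc_per_ton."""
    return scc_per_ton * tons_co2_per_mwh

# At a $50/ton social cost of carbon:
for fuel, intensity in [("coal", 1.0), ("combined-cycle gas", 0.4)]:
    print(f"{fuel}: +${scc_adder(50, intensity):.0f}/MWh")  # coal +$50, gas +$20
```

Zero-carbon sources take no adder, which is why any positive SCC tilts the comparison toward them.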

Little effect in the short run

Wind and solar intermittency are not likely to be very costly in the near-term, say to 2030, because most scenarios do not have them reaching high penetration levels by that time. For example, wind and solar are 15% of electricity generation in the Reference Case of the EIA’s 2016 Annual Energy Outlook.

Even the highly publicized German Energiewende (Energy Transformation) has wind and solar currently at 21%, below the level of potential significant cost increase. Intermittency is still being handled by fossil fuels, dispatchable renewables, and exports. Germany’s target for 2030 is 33%.

[Figure ritchie_7_020717. Source: Burger 2017]

Local areas with more ambitious goals will be an interesting test. California has a goal of 50% of retail electricity sales from renewables by 2030. A 2014 analysis by the consulting firm E3 modeled reaching this goal with 43% wind and solar. The report said “This is a much higher penetration of wind and solar energy than has ever been achieved anywhere in the world.” Capital costs under various scenarios ranged from $89 billion to $128 billion in 2012 dollars, with electricity rates increasing between 15% and 30% solely due to the renewables standard. An additional 40% would be due to infrastructure replacement and other factors. The report further says “overgeneration and other integration challenges have a substantial impact of (sic) the total costs for the 50% RPS scenarios.”

Will intermittency costs limit high penetration?

It is clear that there is a cost to managing intermittency, and this cost will likely be greater than the decrease in generation cost itself. Actual experience suggests that this cost will be higher than is envisioned in the more optimistic scenarios.

However, cost is not the only consideration. High cost generation may have value where the cost of alternative sources is higher or the match to demand is good. Carbon taxes and renewables mandates will increase the share of renewables, regardless of the underlying economics.

Predictions of whether costs associated with increasing share of variable renewables will outweigh future cost reductions depend upon expectations of both, as well as future costs of storage and other means of dealing with intermittency, all of which are speculative. Storage costs are a topic for another day.

Is Flaring Just Bad For Business, Or Is It A Violation Of The Landowner’s Contract?

By Bret Wells, George Butler Research Professor of Law, UH Law Center

I have previously argued that the downturn in oil and gas development is the perfect time for the Texas Railroad Commission to change its regulations on flaring associated gas.

The current rules – known as Rule 32 – allow drillers to burn off natural gas produced along with more profitable crude oil if there isn’t an immediately available pipeline or other marketing facility to take it. That’s been generously interpreted, despite the fact that the gas could be captured and sold.

And while energy companies working in the Eagle Ford and other shale fields may find it more expedient to flare off that excess gas, landowners and other royalty owners may not be so quick to agree.

The landowner does not typically have a working interest in the oil, gas and other minerals that lie beneath their property; instead, mineral rights are typically transferred to an oil and gas operator through a lease. In return, the landowner typically reserves the right to be paid a royalty. Royalty clauses differ, but a typical clause would call for the landowner to be paid a royalty based on the amount of oil and gas that is produced and saved from the well, traditionally 1/8th of the gross production. At the peak of the last boom, that percentage rose to a higher fraction of the gross value of production.

If an operator flares commercially profitable associated gas, however, under the express terms of many leases the landowner and other royalty owners would not be due any payment.

But there’s another factor at play, too. Texas courts have ruled that oil and gas leases create implied obligations among the parties, an effort to enforce the intent of the parties who executed the lease.  Under implied covenant law, the Texas courts require the operator to act in a reasonably prudent manner. Under this standard, the producer is required to consider not only its own financial interest but also that of the royalty owner. The producer is also required to act in a manner that prudently administers the mineral estate. If the operator fails to do any of that, it can be sued over the lost royalty that the landowner would have received had the mineral estate been operated in accordance with this reasonably prudent operator standard.

So, the legally relevant question is whether flaring commercially profitable natural gas in order to accelerate the production of crude oil violates this implied covenant standard.

Certainly the operator, which has a significant financial investment, may want to accelerate the timing of cash flow in order to recover its cost from its investment. But again, the reasonably prudent operator standard requires the operator to consider the financial interest of the royalty owner as well. The royalty owner has no cost investment and would likely want the operator to pursue a strategy that maximizes the amount of gross royalties paid on the associated gas over time. Flaring commercially profitable natural gas diminishes gross royalties. So, although a factual issue, a jury could well conclude that the reasonably prudent operator would have employed a strategy to minimize the flaring of commercially profitable natural gas.

If so, then an operator would be subject to a claim for damages for failing to live up to this standard, and the measure of damages would be the gross royalty that should have been paid on the imprudently flared gas. Damages would be provable because the operator is required to meter its flared gas and file a monthly public report with the Railroad Commission on the amount of natural gas it flares.
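Given those metered monthly reports, the damages calculation itself is simple arithmetic. A minimal sketch; the volume, price and the traditional 1/8th royalty below are hypothetical illustrations:

```python
def lost_royalty(flared_mcf, price_per_mcf, royalty_fraction):
    """Gross royalty that would have been owed had the flared gas been
    captured and sold under the lease's royalty clause."""
    return flared_mcf * price_per_mcf * royalty_fraction

# 10,000 Mcf flared in a month at $3.00/Mcf under a 1/8th royalty:
print(f"${lost_royalty(10_000, 3.00, 1/8):,.2f}")  # → $3,750.00
```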

Successful damage claims for lost royalties on flared natural gas would send a clear message to the industry that it must immediately adopt sound conservation practices in the Eagle Ford shale, including not flaring commercially valuable natural gas. In the end, sound public policy is promoted when private litigation and the Railroad Commission’s own rules work together to motivate operators to conserve finite natural resources.

Providence, coupled with the ingenuity of the oil and gas industry, has blessed the state of Texas with another chance at a prolonged development of its natural resources in ways that were unimaginable less than 20 years ago.  Now is the time to ensure the Eagle Ford shale is developed in accordance with sound conservation practices — flaring commercially valuable natural gas is not one of them. The benefits of private litigation and regulatory changes that discourage flaring are two-fold: helping to create positive public policy for the state of Texas and assuring that reasonable expectations agreed to in oil and gas leases are met.

Bret Wells is the George Butler Research Professor of Law at the University of Houston Law Center.  He is also an Energy Fellow with UH Energy.   For the author’s further scholarly writings on this topic, please see Bret Wells, Please Give Us One More Oil Boom – I Promise Not to Screw It Up This Time: The Broken Promise of Casinghead Gas Flaring in the Eagle Ford Shale, 9 Tex. J. Oil, Gas & Energy Law 319 (2014).