Have We Passed the Climate Change Tipping Point?

By Earl J. Ritchie, Lecturer, Department of Construction Management

A few years ago, 400 parts per million for carbon dioxide was widely cited as the tipping point for climate change. Now that we have passed that value, it has become common to say that it wasn’t really a tipping point, that it was symbolic or a milestone.

Whether it’s a tipping point or a milestone, we have decisively passed it and CO2 levels appear certain to continue higher. Ralph Keeling, the originator of the famous Keeling Curve, said “it already seems safe to conclude that we won’t be seeing a monthly value below 400 ppm this year – or ever again for the indefinite future.”

Let’s consider what a tipping point actually is. The IPCC describes it as “abrupt and irreversible change.” Lenton, et al. say it “will inevitably lead to a large change of the system, i.e., independently of what might happen to the controls thereafter.” In other words, past the tipping point there will be drastic changes even if we stop emitting CO2. Rather than staying “well below 2 degrees Celsius above pre-industrial levels” as is the target of the United Nations Framework Convention on Climate Change (UNFCCC), there could be warming of several degrees, with associated sea level rise and rainfall changes.


Source: Alchemy 4 the Soul

In contrast to these definitions, others say climate change at projected CO2 levels may be reversible. Reversibility is important because otherwise it’s impossible, or at least very difficult, to do anything once you have passed the tipping point. I’ll return to this.

Where do we stand on CO2?

Atmospheric CO2 is not only increasing; the rate of increase is accelerating. The 2001-2016 annual average increase is double that of 1960-1980. As pointed out in an earlier post, commitments under the UNFCCC Paris Agreement do not decrease global CO2 emissions, so it is virtually certain that CO2 concentrations will continue to rise.
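
As a rough check on that acceleration, the short sketch below compares the average annual increase over the two periods using approximate Mauna Loa annual-mean concentrations; the exact figures vary slightly by data release, so the numbers are illustrative.

```python
# Rough check of the acceleration claim using approximate Mauna Loa
# annual-mean CO2 concentrations (ppm); values are illustrative.
co2 = {1960: 316.9, 1980: 338.8, 2001: 371.1, 2016: 404.2}

def avg_annual_increase(start_year, end_year):
    """Average ppm added per year between two annual means."""
    return (co2[end_year] - co2[start_year]) / (end_year - start_year)

early = avg_annual_increase(1960, 1980)   # roughly 1.1 ppm/yr
recent = avg_annual_increase(2001, 2016)  # roughly 2.2 ppm/yr
print(f"1960-1980: {early:.1f} ppm/yr, 2001-2016: {recent:.1f} ppm/yr, "
      f"ratio {recent / early:.1f}x")
```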

Much has been made of the potential impact of Trump’s policies on CO2 emissions. The frequently quoted Lux Research analysis of Clinton and Trump policies projected a difference well under a billion metric tons in 2025. This is just over 1% of the world total under the Paris Agreement commitments. The difference is not significant insofar as it relates to tipping mechanisms.

Climate tipping mechanisms

There are multiple possible tipping mechanisms, some of which are shown on the map below. Several of these are occurring today: Arctic sea ice loss, melt of the Greenland ice sheet and boreal forest dieback (and range shifts) are well documented. The extent of permafrost loss, instability of the West Antarctic Ice Sheet and slowing of the Atlantic deep water formation (also called Atlantic Thermohaline Circulation or Atlantic Meridional Overturning Circulation) are less well supported, but there are indications that these are occurring.

These mechanisms are not directly dependent on CO2 concentration; they are triggered by warming alone. Given the amount of warming in recent decades, it is not surprising that they are occurring.


Source: Lenton, et al. PNAS 2008

The effects of potential tipping mechanisms are difficult to judge. It’s generally agreed that Arctic sea ice melting is a positive feedback: less ice means a darker ocean and more warming. Other mechanisms are not so clear-cut.

For example, boreal forests, which represent about one-third of the world’s forest cover, are carbon sinks but have variable reflectance depending upon the season, snow cover and vegetation type. Compared to tundra and deciduous forests, they have a net warming effect. The extent to which they will migrate due to warming, and the type of vegetation which will succeed them, are speculative.

Further uncertainty exists because climate effects interact. It is possible to have a cascade, in which increased warming from exceeding one tipping point triggers another.

Is climate change reversible?

The IPCC considers some additional warming irreversible. They say “Many aspects of climate change and associated impacts will continue for centuries, even if anthropogenic emissions of greenhouse gases are stopped. The risks of abrupt or irreversible changes increase as the magnitude of the warming increases.”

Per the models cited in the IPCC assessments, anthropogenic climate change can be halted at 2 degrees, although this scenario requires negative industry and energy-related CO2 emissions later this century. By this interpretation, a tipping point has not been reached.

Accomplishing the 2 degree scenario may be difficult. The world’s track record in emissions reductions is poor. According to Friedlingstein, et al., “Current emission growth rates are twice as large as in the 1990s despite 20 years of international climate negotiations under the United Nations Framework Convention on Climate Change (UNFCCC).”

There has been a reported flattening in fossil fuel emissions for the past couple of years, due primarily to reported coal reductions in China. It remains to be seen whether this is the beginning of a reversal. Even so, emissions would have to decrease rapidly to meet even the 2 degree goal.

Prescriptions for reversal of global warming include proposed geoengineering methods for removing CO2 from the atmosphere and cooling the Earth by reflecting or blocking solar radiation. These do not mean that a tipping point was not passed. In the analogy shown in the cartoon above, one can push the rock back up the hill even after it has rolled to the bottom.

Have we passed the tipping point?

Observed advances in multiple tipping mechanisms certainly raise the question whether the tipping point has been passed. However, these mechanisms are accounted for to at least some degree in climate models, so concluding that we have passed the tipping point requires that the models understate warming effects.

This is essentially an issue of the sensitivity of climate, that is, how much warming results from a given greenhouse gas concentration. The IPCC’s analysis concludes the likely range of equilibrium sensitivity for doubling of CO2 is 1.5 degrees to 4.5 degrees. As the graph below shows, there is reasonable probability that it could be substantially higher.


Source: NASA Earth Observatory
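
For a sense of what sensitivity means in practice, the sketch below applies the standard logarithmic approximation (equilibrium warming scales with the sensitivity per doubling times log2 of the concentration ratio) to today's roughly 400 ppm, using the conventional 280 ppm pre-industrial baseline and considering CO2 alone, with no other greenhouse gases or aerosols.

```python
import math

def equilibrium_warming(c_ppm, c0_ppm=280.0, sensitivity_per_doubling=3.0):
    """Equilibrium warming (deg C) from CO2 alone, using the standard
    logarithmic approximation: sensitivity per doubling times log2(C/C0)."""
    return sensitivity_per_doubling * math.log2(c_ppm / c0_ppm)

# Warming implied at 400 ppm for the low, middle and high ends of the
# IPCC likely range of 1.5 to 4.5 degrees per doubling of CO2.
for ecs in (1.5, 3.0, 4.5):
    warming = equilibrium_warming(400.0, sensitivity_per_doubling=ecs)
    print(f"sensitivity {ecs}: {warming:.1f} deg C")
```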

If the actual value of sensitivity falls in these higher ranges, warming will be greater than predicted by the IPCC models and a tipping point or points may have been exceeded. I’m not sure that anyone actually knows the answer, which leaves me with the unsatisfactory conclusion of not having answered the question I have raised.

Regardless of whether we have passed the tipping point, continued warming, rainfall pattern changes, significant sea level rise and continued northward and vertical migration of plant and animal species in the Northern Hemisphere seem certain. We are looking at a changed world and must adapt to it.

Not an excuse for inaction

One should not view the possibility that we have passed a significant tipping point as a reason for inaction. Although I remain somewhat skeptical of the degree of human contribution to climate change, it is prudent to take reasonable actions that may reduce the problem. In addition, there are multiple possible tipping points with different thresholds. Exceeding one does not mean you cannot avoid another.


Earl J. Ritchie is a retired energy executive and teaches a course on the oil and gas industry at the University of Houston. He has 35 years’ experience in the industry. He started as a geophysicist with Mobil Oil and subsequently worked in a variety of management and technical positions with several independent exploration and production companies. Ritchie retired as Vice President and General Manager of the offshore division of EOG Resources in 2007. Prior to his experience in the oil industry, he served at the US Air Force Special Weapons Center, providing geologic and geophysical support to nuclear research activities.


Managing Wind And Solar Intermittency In Current And Future Systems

By Earl J. Ritchie, Lecturer, Department of Construction Management

The problem with variable renewable energy (VRE) – primarily wind and solar – is sometimes it generates too much power and sometimes it doesn’t generate enough. That’s manageable, but it’s more complicated than it may seem.

In the majority of today’s installations, variability can be balanced with so-called dispatchable generation: traditional power plants, hydroelectric and biomass. Generation from traditional power plants is cut when generation from wind and solar is too high, and increased when it’s too low. This creates some power management problems but is manageable at modest cost.

In a system with a large share of wind and solar, maintaining enough dispatchable power in reserve becomes expensive. The electrical grid must be modified to manage the increased variability. It remains to be seen how quickly the transformation to a high share of VRE can be made.

The nature of variability

Power from wind and solar varies on all time scales from seconds to years. The graph below illustrates variation in Irish wind power over one year. The Irish example is pertinent because, at 23% of electricity generated, Ireland has one of the highest shares of wind power, and its wind farms are dispersed over the country. Despite the benefit of the geographic spread, there are moderately long periods during which little or no electricity is generated by wind. The historical average output is 31% of installed capacity according to EirGrid and SONI, but the range is from near zero to about 50%.


Source: EIA
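
The 31% figure is a capacity factor: energy actually generated divided by what the installed capacity could have produced over the same period. A minimal sketch of the bookkeeping is below; the hourly output series and the installed capacity are fabricated for illustration, not EirGrid data.

```python
import random

installed_capacity_mw = 3000.0   # assumed capacity, not the actual Irish figure
hours_per_year = 8760

# Fabricated hourly output for illustration; a real calculation would use
# recorded generation data from the system operator.
random.seed(1)
output_mw = [min(installed_capacity_mw, max(0.0, random.gauss(930, 600)))
             for _ in range(hours_per_year)]

energy_mwh = sum(output_mw)                               # actual generation
potential_mwh = installed_capacity_mw * hours_per_year    # nameplate potential
print(f"Capacity factor: {energy_mwh / potential_mwh:.0%}")
```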

Managing variability is not new

Variability is not a new issue in the power industry: traditional power sources have some variability, and demand varies over all timeframes. The graph below of demand in a large U.S. grid shows much less variability than the Irish wind power example, but the ratio of peak to minimum demand is still almost 3:1, with a noticeable seasonal component.


Source: EIA

Managing the system is a function not only of source variation, but also of matching generation with demand. In an earlier post I discussed the “duck curve” illustrating the ramp down and ramp up needed in dispatchable generation due to the mismatch of daily solar generation peaks with demand.

Reducing source variability

Variability can be reduced by combining different types of variable sources and by spreading sources over a large geographic area. Either of these will reduce short-term variability but may or may not significantly reduce variability on a scale of hours or days.
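
A toy illustration of that smoothing, using synthetic output series: the average of several imperfectly correlated sites swings less, relative to its mean, than any single site, but the shared weather component limits how far the variability can be reduced.

```python
import random
import statistics

random.seed(0)
n_hours, n_sites = 1000, 5

# Synthetic normalized output: a shared weather signal plus site-specific noise.
shared = [random.random() for _ in range(n_hours)]
sites = [[0.6 * shared[t] + 0.4 * random.random() for t in range(n_hours)]
         for _ in range(n_sites)]

def relative_variability(series):
    """Standard deviation as a fraction of the mean."""
    return statistics.pstdev(series) / statistics.mean(series)

aggregate = [sum(site[t] for site in sites) / n_sites for t in range(n_hours)]
print("single site:", round(relative_variability(sites[0]), 2))
print("aggregate:  ", round(relative_variability(aggregate), 2))
```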

A study of the European Union showed that wind power in 2014 fell to as low as 4% of capacity and was less than 10% of capacity 11% of the time, even when aggregated over the entire EU. Since the countries are not all grid connected, the distribution was hypothetical. Variation on the actual smaller grids was higher.

Patterns of available wind and solar power vary tremendously with location. Wind and solar may tend to peak together or at different times. They may generate more during peak demand periods or during low demand periods. This makes generation design a local issue unless very widespread interconnections are available.

The potential for greater smoothing has led to the concept of the supergrid, connecting generating sources over larger areas than traditional grids. Some technological development is necessary to implement supergrids but they likely will be constructed. Even so, they will not completely eliminate variability since weather patterns tend to occur over large areas.

Reducing demand variability

Variability of demand can be reduced by a variety of techniques that shift usage from high demand periods. These include differential pricing, smart controls, jawboning and direct utility control of load. Perhaps the most obvious example is encouraging people to shift tasks such as washing and drying to the night in order to reduce demand during the daytime peak. These methods are discussed within the industry along with methods for reducing overall demand under the term demand-side management.
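
A schematic of the idea with a made-up hourly demand profile: move a fixed block of flexible load (washing, drying, vehicle charging) out of the evening peak into the night and compare the system peak before and after.

```python
# Made-up hourly demand (MW) for one day, with an evening peak.
demand = [50, 48, 47, 46, 46, 48, 55, 65, 70, 72, 73, 74,
          75, 74, 73, 74, 78, 85, 92, 95, 90, 80, 65, 55]

flexible_mw = 8              # load assumed shiftable (washing, drying, charging)
peak_hours = [18, 19, 20]    # evening peak hours
night_hours = [1, 2, 3]      # low-demand overnight hours

shifted = demand[:]
for h_from, h_to in zip(peak_hours, night_hours):
    shifted[h_from] -= flexible_mw   # remove flexible load from the peak
    shifted[h_to] += flexible_mw     # add it back overnight

print(f"peak before: {max(demand)} MW, after: {max(shifted)} MW")
```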

Managing the remaining variability

In existing grids and those foreseeable in the near term, substantial variability and mismatch between generation and demand will continue. Management methods include dispatchable generation, overcapacity, storage and tolerating insufficiency. All have costs.

Dispatchable generation is the traditional method. In effect, it is a form of overcapacity since the dispatchable plants run below capacity until more electricity is needed. The cost of maintaining standby capacity and efficiency losses associated with ramping and partial load operation can be substantial.
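
The hour-by-hour bookkeeping is simple; the expense is in holding the reserve. The sketch below uses made-up numbers: dispatchable plants serve whatever demand is left after wind and solar, up to their capacity, and anything beyond that is a shortfall (or, when wind and solar exceed demand, a surplus to curtail or store).

```python
# Made-up hourly figures (MW) for illustration.
demand = [900, 950, 1000, 1100, 1050, 980]
vre_output = [300, 150, 50, 400, 700, 900]   # wind + solar, not controllable
dispatchable_capacity = 800                  # gas, hydro, biomass held in reserve

for hour, (d, v) in enumerate(zip(demand, vre_output)):
    residual = max(0, d - v)                      # load left for dispatchable plants
    dispatched = min(residual, dispatchable_capacity)
    shortfall = residual - dispatched             # unmet demand
    surplus = max(0, v - d)                       # VRE to curtail or store
    print(f"hour {hour}: dispatch {dispatched} MW, "
          f"shortfall {shortfall} MW, surplus {surplus} MW")
```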

Renewables can serve as dispatchable sources, so this method would not preclude achieving 100% renewables. Some very high renewables scenarios use biomass to balance variability.

The premise of overcapacity is that if you build more generation than is necessary, you will have enough even when the variable sources operate at a fraction of their capacity. As the graph of Irish wind power shows, it is a practical and economic impossibility to build enough variable capacity to meet demand during periods of very low generation.

The downside of overcapacity is that you generate too much electricity during favorable periods of high wind or intense sunlight. Ideally, the excess electricity can be stored. This has some disadvantages which will be discussed below.

A possibility suggested by Mark Jacobson and Mark Delucchi is generating hydrogen during periods of oversupply. In essence, this is increasing demand to match supply, and it could be applied to products other than hydrogen. It is conceptually similar to encouraging electricity use through very low or negative prices during oversupply periods, as has been practiced in Germany and other areas with moderately high VRE share.
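
In its simplest terms, the idea looks like the sketch below; the surplus, electrolyzer efficiency and hydrogen energy content are nominal figures for illustration, not values from Jacobson and Delucchi.

```python
# Assumed figures for illustration only.
surplus_mwh = 500                        # generation above demand in some period
electrolyzer_efficiency = 0.7            # nominal fraction of electricity stored as H2 energy
h2_energy_mwh_per_tonne = 33.3           # roughly the lower heating value of hydrogen

h2_energy_mwh = surplus_mwh * electrolyzer_efficiency
h2_tonnes = h2_energy_mwh / h2_energy_mwh_per_tonne
print(f"{surplus_mwh} MWh of surplus -> about {h2_tonnes:.1f} tonnes of hydrogen")
```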

Storage to clip the peaks and fill the valleys of demand is part of nearly all high VRE scenarios. There are numerous storage technologies with varying cost, scale, duration and technological maturity. This table from Lazard’s 2016 Levelized Cost of Storage shows the cost of the primary technologies and applications. The costs should be taken only as approximations since some of the technologies are not mature, costs vary with location and future cost reductions are likely. Taken at face value, only compressed air, pumped hydro and lithium-ion are competitive today with natural gas peaking cost of about $200 per megawatt hour.


Source: Lazard

Storage cost depends not only on the cost per kilowatt hour, but also on the amount of storage capacity installed. There are no guidelines for the amount of storage needed for a given level of VRE. The optimum capacity is influenced by cost-dependent tradeoffs between generation and storage, as well as the mix of sources and the match with demand. A model study of the PJM Interconnection, used as the demand example above, showed that the lowest-cost alternative relied heavily on overcapacity, with little storage. Other locations and assumptions might give very different answers.

Storage technology is in an early stage of development. Most storage installations to date can only supply rated power for a few minutes to a few hours. Capability to handle extended shortage remains an issue. The extent of storage that will be incorporated in future systems will be heavily dependent upon development of storage methods and cost of generation.

It is likely impossible to build a grid with a very high share of VRE that is completely certain to provide adequate power at all times. It may be necessary, or a deliberate design choice, to allow for curtailment, that is, not supplying some customers when generation falls short of demand.

Market mechanisms, such as interruptible supply contracts, are other ways to match supply and demand.

Optimizing the system

On a theoretical basis, an electrical grid can be optimized through the proper mix of sources, storage and locations. There is a question of what is to be optimized. Is it lowest cost, least pollution, greatest economic benefit, energy security, social equity or some combination of factors? Once the measure is determined, assumptions must still be made regarding performance, cost and demand. Actual performance will frequently differ from modeled performance.

Since the amount of electricity generated by wind and solar varies somewhat randomly, statistical forecasting techniques are used. These generate a distribution of forecasted supply as a function of time. There will be some probability of extreme events, for example, a prolonged inadequacy of supply.
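
In the spirit of that statistical approach, the sketch below draws many synthetic years of daily wind output from an assumed distribution and counts how often a multi-day shortfall occurs. Real studies use correlated weather models rather than independent daily draws, so this is only a schematic of the method.

```python
import random

random.seed(42)
mean_cf, spread = 0.31, 0.18   # assumed daily capacity factor behavior
threshold = 0.10               # "shortfall" = output below 10% of capacity
run_length = 3                 # ...for three days in a row
n_years = 2000

def one_year():
    """One synthetic year of daily capacity factors (independent draws)."""
    return [min(1.0, max(0.0, random.gauss(mean_cf, spread))) for _ in range(365)]

def has_long_shortfall(days):
    streak = 0
    for cf in days:
        streak = streak + 1 if cf < threshold else 0
        if streak >= run_length:
            return True
    return False

p = sum(has_long_shortfall(one_year()) for _ in range(n_years)) / n_years
print(f"Estimated chance of a {run_length}-day shortfall in a year: {p:.0%}")
```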

The choice of when, where, how much and what type of generation to build is made in most countries by private companies. Their choices may be substantially influenced, but not controlled, by government policy. As a result, the grid will not be optimal. Renewables requirements and the structure of government incentives will be important factors.

Very high renewables scenarios

The majority of published scenarios, including those of the IPCC, have traditional sources – nuclear and fossil fuels – continuing to provide a significant fraction of electricity generation through 2050. A few have all electricity, or even all primary energy, from renewables. These scenarios depend not only on rapid technological advancement and implementation of renewable sources, but also on reduction of energy consumption, such as in this World Wildlife Fund (WWF) scenario of 95% renewables.


Source: World Wildlife Fund

The WWF scenario decreases overall energy demand by about 25% from a peak in 2020. It is at odds with many other scenarios that envision continued growth in energy demand due to increasing population and increases in consumption in the developing and less developed countries.

Similarly, this scenario envisions a decrease in annual energy cost of 4 trillion Euros by 2050, based on reduced demand and lower fuel costs. These numbers are at odds with the predicted increase in generation cost associated with high shares of VRE discussed in an earlier post.

It’s not clear to me whether scenarios that envision drastic shifts in energy source are considered plausible or are thought experiments expressing ideal goals. The WWF report describes the task of transforming the system as “a huge one, raising major challenges.” Considering the modest progress to date, differing views of the priority of decarbonization, the need for as yet unproven technology and the time needed to construct new systems, it seems unlikely that this transformation will be completed by 2050.



Wind And Solar Power Seem Cheap Now, But Will The Cost Go Up As We Use More Of It?

By Earl J. Ritchie, Lecturer, Department of Construction Management

Everyone talks about wind and solar power becoming cost competitive, but the cost will rise as its share of generation increases and we have to pay more to integrate it into the electrical system. How much it will rise remains the subject of debate.

The cost of electricity from wind and solar energy, as well as other variable sources, has two components: the cost of generation and the cost of integration into the electrical system. As discussed in an earlier post, integration costs are expected to increase disproportionately as the share of wind and solar increases, potentially offsetting the decreasing cost of generation.

The cost of generation alone is fairly well defined. There is some disagreement about the likely extent of future cost reduction but the ranges are relatively narrow. The Bloomberg New Energy Finance estimates of about $40-$50 per megawatt-hour (MWh) are typical.


Source: Bloomberg 2016

As shown below, except for utility scale solar, the rate of cost reduction has slowed in recent years, so estimates for future reductions in wind power and rooftop solar costs may be optimistic. These are levelized costs, estimates of the actual cost of generation. They do not include integration costs and may differ from reported auction costs, which are affected by market conditions and subsidies.


Source: Lazard 2016

The IPCC estimate

As addressed in Section 7.8.2 of the IPCC’s fifth Assessment Report, there are three components of integration cost: (1) balancing costs (originating from the required flexibility to maintain a balance between supply and demand), (2) capacity adequacy costs (due to the need to ensure operation even at peak times of the residual load), and (3) transmission and distribution costs.

The IPCC does not give specific costs at high penetration levels. Its ranges for 20% to 30% penetration are $1-$7 per MWh for balancing, $0-$10 for capacity adequacy, and $0-$15 for transmission and distribution, for a total range of $1-$32 per MWh.

Even at these levels the integration costs are significant. At an estimated future generation cost of $45 per MWh, the midpoint of the IPCC integration range adds about 37%. It is generally recognized that the integration cost of variable renewable energy (VRE) at penetration above 30% will be higher but is difficult to estimate.
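
The arithmetic behind that 37% is simply the midpoint of the IPCC's $1-$32 per MWh range divided by an assumed $45 per MWh generation cost:

```python
generation_cost = 45.0                         # assumed future generation cost, $/MWh
integration_low, integration_high = 1.0, 32.0  # IPCC range at 20-30% penetration, $/MWh

integration_mid = (integration_low + integration_high) / 2   # $16.50/MWh
print(f"midpoint integration cost: ${integration_mid:.2f}/MWh")
print(f"added to generation cost: {integration_mid / generation_cost:.0%}")  # ~37%
```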

The complexities of integration

Intermittency must be managed across a continuum of time scales from milliseconds to years. There are costs associated with all timeframes; however, published analyses focus primarily on the longer intervals of balancing and adequacy.


Source: World Bank 2015

Various measures to manage this variation – storage, source mix, overcapacity, demand management, etc. – have differing costs, advantages and disadvantages which can be traded off. This results in a complex situation in which the optimum solution is typically not obvious.

Estimates of integration cost at higher levels vary so widely that it is almost impossible to generalize. Local conditions and design choices significantly affect cost. As a study by the Danish Association of Engineers put it, “the design of future 100% renewable energy systems is a very complex process.” An almost infinite number of combinations of sources is possible, depending upon location, anticipated demand, degree of decarbonization and emphasis on economics.

How future costs are estimated

Both optimization and cost forecasting are done with mathematical models. Significant differences may result from the model used. Some characteristics and weaknesses of the three main classes of model are shown below.


Source: Ueckerdt 2015

Limitations of the models mean that not all aspects of the system can be incorporated in any one model. This may result in overestimates or underestimates. In addition, published studies frequently consider only one aspect, such as the addition of wind power alone.

The limitations and possible sources of error in these studies are normally well understood by the authors and explained in the original articles. Such caveats rarely reach the popular articles quoting the results. There may also be deliberate or unconscious bias in the choice of parameters, reflecting the prejudices of the authors.

The variation in estimates

The result of these factors is considerable variation in cost estimates, even when similar systems are being analyzed. Two examples demonstrate the range:

The first estimate below is a model of adding wind energy to an existing grid similar to the European grid. It does not consider externalities, such as renewables mandates, but does include a carbon tax of 20 Euros per ton of CO2. The upper dashed line shows short term costs, and the solid black line long term.

The model shows integration cost equal to generation cost at 40% penetration. That is, the cost doubles. It does not consider possible storage or extending the grid to optimize the system.


Source: Ueckerdt, et al. 2013

A 2016 US study by Lantz, et al., showed a mix of about 42% variable renewable energy to have a net present value cost $59 billion higher than an economically optimized scenario. They did not give a per kilowatt-hour cost, but modeled a modest 3% increase in retail electricity cost in 2050. The authors comment that the cost may be understated because of lack of detail in the model.


Source: Modified from Lantz, et al. 2016

Further examples include the widely publicized papers by Delucchi and Jacobson, which estimate transmission and storage costs at $20/MWh for 100% variable renewables, and the 2012 NREL study, based on somewhat dated costs, which estimates up to $54/MWh over a fossil fuel dominated scenario for 90% renewables (48% wind and solar). Published scenarios are hotly debated.

The headline cost in such studies cannot be taken at face value. In addition to variances due to choice of model, such obvious influences as assumed fossil fuel prices and future cost reductions in generation methods must be weighed in assessing the estimates. As might be expected, proponents of a particular technology will frequently make assumptions favorable to their preferred energy source.

Other renewables and the social cost of carbon

Some issues not discussed in detail here include the other variable renewables, wave and tide; the dispatchable renewables, hydroelectric, geothermal, and biomass; and the social cost of carbon.

Wave and tide are expected to contribute only a small fraction of future electricity generation. They may be complementary to other forms of variable renewable energy.

Hydroelectric and geothermal can be highly desirable as low carbon, low-cost and dispatchable. Very high renewables penetration has already occurred in areas where these resources are abundant. New Zealand is above 80%; Norway and Iceland are over 90%.

Electricity generated from biomass is dispatchable but creates greenhouse gases at the site of generation. The extent to which this is offset by land use changes and carbon storage of the fuel crops depends upon the generation technology, the type of fuel crop and management of the crop. Estimates of offset are controversial but most calculate net reduction in greenhouse gases compared to fossil fuel generation.

The social cost of carbon (SCC) is not the focus of this article, which concentrates on the actual cost of generation. SCC is speculative, with typically quoted numbers from about $5 per ton of CO2 to $100, although extremes can exceed $1,000. The US government’s 5th percentile to 95th percentile range of the cost in 2020 is from zero to about $180. Obviously, the inclusion of any positive SCC will shift economic analysis toward low carbon sources.
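
To see how a carbon price shifts the comparison, the sketch below adds SCC times an assumed emissions intensity to a generator's levelized cost. The intensities and costs are round illustrative numbers, not figures from the estimates quoted above.

```python
def cost_with_scc(lcoe_per_mwh, tonnes_co2_per_mwh, scc_per_tonne):
    """Levelized cost plus a carbon charge, $/MWh."""
    return lcoe_per_mwh + tonnes_co2_per_mwh * scc_per_tonne

# Round illustrative figures: gas combined cycle ~0.4 t CO2/MWh, wind ~0.
for scc in (0, 40, 100):
    gas = cost_with_scc(55, 0.4, scc)
    wind = cost_with_scc(60, 0.0, scc)   # generation-only cost, no integration adder
    print(f"SCC ${scc}/t: gas ${gas:.0f}/MWh vs wind ${wind:.0f}/MWh")
```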

Little effect in the short run

Wind and solar intermittency are not likely to be very costly in the near-term, say to 2030, because most scenarios do not have them reaching high penetration levels by that time. For example, wind and solar are 15% of electricity generation in the Reference Case of the EIA’s 2016 Annual Energy Outlook.

Even the highly publicized German Energiewende (Energy Transformation) has wind and solar currently at 21%, below the level of potential significant cost increase. Intermittency is still being handled by fossil fuels, dispatchable renewables, and exports. Germany’s target for 2030 is 33%.


Source: Burger 2017

Local areas with more ambitious goals will be an interesting test. California has a goal of 50% of retail electricity sales from renewables by 2030. A 2014 analysis by the consulting firm E3 modeled reaching this goal with 43% wind and solar. The report said “This is a much higher penetration of wind and solar energy than has ever been achieved anywhere in the world.” Capital costs under various scenarios ranged from $89 billion to $128 billion in 2012 dollars, with electricity rates increasing between 15% and 30% solely due to the renewables standard. An additional 40% would be due to infrastructure replacement and other factors. The report further says “overgeneration and other integration challenges have a substantial impact of (sic) the total costs for the 50% RPS scenarios.”

Will intermittency costs limit high penetration?

It is clear that managing intermittency has a cost, and at high penetration this cost will likely exceed the expected decreases in generation cost. Actual experience suggests that this cost will be higher than is envisioned in the more optimistic scenarios.

However, cost is not the only consideration. High cost generation may have value where the cost of alternative sources is higher or the match to demand is good. Carbon taxes and renewables mandates will increase the share of renewables, regardless of the underlying economics.

Predictions of whether costs associated with increasing share of variable renewables will outweigh future cost reductions depend upon expectations of both, as well as future costs of storage and other means of dealing with intermittency, all of which are speculative. Storage costs are a topic for another day.

The Cost Of Wind And Solar Intermittency

By Earl J. Ritchie, Lecturer, Department of Construction Management

Until relatively recently, generation of electricity with wind and solar has not been cost competitive. Growth has largely been due to subsidies and renewable energy mandates. Due to decreasing cost, wind and solar are now cost competitive with fossil fuels in favorable locations.

The continuing decrease in wind and solar costs is a very positive development. However, this trend may reverse as the share of variable renewable energy (VRE) – energy that isn’t available on demand but only at specific times, such as when the wind is blowing – reaches high levels. Countries such as Germany that have integrated significant amounts of wind and solar have already seen price increases.

The levelized cost of electricity

Comparisons of electrical generation cost are usually based on the so-called levelized cost of energy (LCOE), an estimate of the total cost of generation expressed in dollars per megawatt hour ($/MWh). The calculation includes capital costs, operating and maintenance costs and fuel cost. It is affected by the assumed utilization rate and interest rates.
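
A stripped-down version of the calculation, assuming a single discount rate and constant annual figures; the EIA and Lazard models also handle financing structure, taxes and degradation, so this only shows the shape of the formula. The plant parameters are invented for illustration.

```python
def lcoe(capital_cost, annual_om, annual_fuel, annual_mwh, years, discount_rate):
    """Levelized cost of electricity, $/MWh: discounted lifetime costs
    divided by discounted lifetime generation."""
    costs, energy = capital_cost, 0.0
    for year in range(1, years + 1):
        discount = (1 + discount_rate) ** year
        costs += (annual_om + annual_fuel) / discount
        energy += annual_mwh / discount
    return costs / energy

# Invented plant: 100 MW, $1,500/kW capital, $3M/yr O&M, no fuel, 35% capacity factor.
capacity_kw = 100_000
annual_mwh = capacity_kw / 1000 * 8760 * 0.35
print(f"LCOE: ${lcoe(1500 * capacity_kw, 3e6, 0.0, annual_mwh, 25, 0.07):.0f}/MWh")
```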

The most widely cited levelized cost estimates are those of the U.S. Energy Information Administration (EIA) and the investment firm Lazard. Although these estimates are useful for comparison, they exclude such costs as network upgrades, integration and transmission, which can become significant as renewables penetration increases. As the International Energy Agency (IEA) put it in the context of integrating variable renewable energy, “comparison based on LCOE is no longer sufficient and can be misleading.”

Levelized cost estimates are based on a large number of assumptions, not least of which is the future cost of fossil fuels. There are some differences in these estimates, with Lazard showing unsubsidized utility scale solar and onshore wind as competitive with natural gas and the EIA not.

The table shows national averages. For wind and solar, location is very important; in some locations they are cheaper than natural gas combined cycle. For the purposes of this discussion, these differences are not significant. The more important point is the added cost of factors not included in the levelized cost.

The sources of integration costs

As described by Mark Delucchi and Mark Jacobson, “any electricity system must be able to respond to changes in demand over seconds, minutes, hours, seasons and years, and must be able to accommodate unanticipated changes in the availability of generation.” Traditionally, this is handled by base load and peak load plants, which handle the minimum load and increases above that level, respectively. This is an oversimplification, since supply is managed by the minute using a variety of sources with different response times.

Wind and solar are non-dispatchable, meaning that they are not under the control of the operator. They only generate electricity when the wind blows or the sun shines. This adds integration costs, shown conceptually below.

Source: Ueckerdt, 2015

 When variable sources are a small fraction of electricity supply, the cost of integration is low. The current level of deployment is below thresholds where the cost of dealing with intermittency becomes significant.

There are numerous possible solutions to intermittency. These include diversification, redundancy, storage and demand shifting. That redundancy and storage add cost is obvious. Diversification also adds cost in control equipment and transmission capability between geographically separated sources.

Demand shifting can theoretically lower cost by reducing the peak capacity needed. It is often discussed jointly with efficiency improvement under the term demand-side management.

One issue in demand management is illustrated in this graph of daily load for a location in Australia. Solar is only available when the sun shines and peaks around midday. As solar generation increases, the average load on the remainder of the system decreases, but the peak is barely affected. Dispatchable sources must make up the difference between the midday low and the evening and morning peaks. This relationship is called the “duck curve.”

Source: Ledwich 2015
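
A sketch of the mismatch with made-up hourly numbers: net load is demand minus solar output, and it is the net load that dispatchable sources must follow, including the steep ramp as the sun sets into the evening peak.

```python
# Made-up hourly values (MW) for a sunny day.
demand = [60, 55, 52, 50, 52, 58, 70, 80, 85, 88, 90, 92,
          93, 92, 90, 88, 90, 100, 110, 115, 105, 90, 75, 65]
solar =  [0, 0, 0, 0, 0, 2, 10, 25, 45, 60, 70, 75,
          78, 75, 68, 55, 35, 15, 3, 0, 0, 0, 0, 0]

net_load = [d - s for d, s in zip(demand, solar)]   # what dispatchable plants must serve
evening_ramp = max(net_load[t + 1] - net_load[t] for t in range(16, 20))
print("midday minimum net load:", min(net_load), "MW")
print("steepest evening ramp:", evening_ramp, "MW per hour")
```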

Measures to shift usage from peak periods include education, jawboning, differential pricing and control of end use by the utility through the smart grid. Education, jawboning and even differential pricing have had limited success to date. Time of day pricing and end-use control require a smart grid, with attendant cost.

Wind power typically will generate throughout the day, but it has its own limitations. It is less predictable, more variable over short periods than solar, may be seasonal and may need to be shut down when the wind is too strong.

The graph below shows generation for one day on the island of Crete. Renewables penetration reaches a peak of 60%, accommodated by curtailment of diesel and gas generation. Even so, average annual renewable share is only 20%, and some difficulties were encountered during peak renewables generation periods.

The Crete example is typical of existing systems in that balancing is done with fossil fuels. Balancing may also be done by dispatchable renewable energy, primarily hydroelectric and biomass, and with storage.

What’s the best generation mix?

Due to the wide variety of generating sources and unique local circumstances, there is considerable flexibility in the design of generating systems. The trade-offs in cost and environmental benefit are complex.

Hundreds of studies which address increasing the share of renewables have been published. These vary greatly in scope and sophistication. Some do not include cost analysis or ignore integration costs. Adequate analysis of high levels of variable generation requires that balancing demand within short time frames be included.

The sample of published scenarios below illustrates the wide range of possible combinations. Wind and solar range from less than 20% to over 80%. The mix is influenced by availability of other sources, and by ideology.

Source: Modified from Cochran 2014

Big differences result from design choices, such as whether expansion or retention of some fossil fuels is included. Accepting periods of inadequate capacity is also a factor.

Most scenarios with high percentages of renewables rely on substantial reduction in growth of electricity demand. It’s questionable how realistic this is, particularly if strong growth in electric automobiles is anticipated.

What is the integration threshold?

There is no threshold, per se. The cost of managing intermittency is nonlinear and depends upon the mix and location of dispatchable and non-dispatchable sources, the match of local demand patterns with variable source pattern, and various other factors.

Based on model studies of Germany and Indiana, Falko Ueckerdt found that integration costs began to become significant at 20% penetration. As of 2015, only four countries had variable renewable energy shares over 20%.

Hawaiian Electric recently approached 50% renewables; however, the share of wind and solar was only about 15%. Even so, the utility has requested a 6.9% rate increase based partly on the cost of renewables integration, and it estimates the cost of grid upgrades necessary to reach 100% renewables at $8 billion.

Champions of wind and solar have characterized integration cost estimates as ploys to discourage renewable energy, but integration costs are real.

Isn’t it being done already?

The poster child for variable renewable energy is Denmark, reported to be over 50% in 2015. Denmark’s success is often used to illustrate that high levels are readily achievable. This is misleading in that Denmark is a small country tied into the European grid. Variable wind power is balanced with hydroelectric and other sources in adjacent countries. The de facto share for the interconnected system is lower. Denmark’s installed wind capacity ranks ninth among EU countries and represents less than 4% of the EU total.

Source: EIA 2016

Germany’s combined wind and solar capacity is the largest in Europe and the second highest per capita. Despite Germany’s progress, the share of variable renewable energy in electrical generation is less than 25% and has been achieved at significant cost. The renewable energy surcharge is 22% of the household electricity price.

Even at relatively low levels of renewables share, there is a clear correlation between the share of variable renewable energy and the retail price of electricity. This is largely due to feed-in tariffs and net metering, which transfer renewable subsidies costs to the retail customer.

 The range of published integration cost estimates at higher shares of wind and solar is very broad and dependent upon both parameter assumptions and model structure. I will discuss these in a later post.



Fact Checking The 97% Consensus On Anthropogenic Climate Change

By Earl Ritchie, Lecturer, Department of Construction Management

The claim that there is a 97% consensus among scientists that humans are the cause of global warming is widely made in climate change literature and by political figures. It has been heavily publicized, often in the form of pie charts, as illustrated by this figure from the Consensus Project.


The 97% figure has been disputed and vigorously defended, with emotional arguments and counterarguments published in a number of papers. Although the degree of consensus is only one of several arguments for anthropogenic climate change – the statements of professional societies and evidence presented in reports from the Intergovernmental Panel on Climate Change are others – there is data to suggest that support is lower. In this post, I attempt to determine whether the 97% consensus is fact or fiction.

The 97% number was popularized by two articles, the first by Naomi Oreskes, now Professor of Science History and Affiliated Professor of Earth and Planetary Sciences at Harvard University, and the second by a group of authors led by John Cook, the Climate Communication Fellow for the Global Change Institute at The University of Queensland. Both papers were based on analyses of earlier publications. Other analyses and surveys arrive at different, often lower, numbers depending in part on how support for the concept was defined and on the population surveyed.

This public discussion was started by Oreskes’ brief 2004 article, which included an analysis of 928 papers containing the keywords “global climate change.” The article says “none of the papers disagreed with the consensus position” of anthropogenic global warming. Although this article makes no claim to a specific number, it is routinely described as indicating 100% agreement and used as support for the 97% figure.

In a 2007 book chapter, Oreskes infers that the lack of expressed dissent “demonstrates that any remaining professional dissent is now exceedingly minor.” The chapter revealed that there were about 235 papers in the 2004 article, or 25%, that endorsed the position. An additional 50% were interpreted to have implicitly endorsed, primarily on the basis that they discussed evaluation of impacts. Authors addressing impacts might believe that the Earth is warming without believing it is anthropogenic. In the article, Oreskes said some authors she counted “might believe that current climate change is natural.” It is impossible to tell from this analysis how many actually believed it. On that basis, I find that this study does not support the 97% number.

The most influential and most debated article was the 2013 paper by Cook, et al., which popularized the 97% figure. The authors used methodology similar to Oreskes but based their analysis on abstracts rather than full content. I do not intend to reopen the debate over this paper. Instead, let’s consider it along with some of the numerous other surveys available.

Reviews of published surveys were published in 2016 by Cook and his collaborators and by Richard S. J. Tol, Professor of Economics at the University of Sussex. The 2016 Cook paper, which reviews 14 published analyses and includes among its authors Oreskes and several authors of the papers shown in the chart below, concludes that the scientific consensus “is robust, with a range of 90%–100% depending on the exact question, timing and sampling methodology.” The chart shows the post-2000 opinions summarized in Table 1 of the paper. Dates given are those of the survey, not the publication date. I’ve added a 2016 survey of meteorologists from George Mason University and omitted the Oreskes article.

The classification of publishing and non-publishing is that used by Cook and his collaborators. These categories are intended to be measures of how active the scientists in the sample analyzed have been in writing peer-reviewed articles on climate change. Because of different methodology, that information is not available in all of the surveys. The categorization should be considered an approximation. The chart shows that over half the surveys in the publishing category and all the surveys in the non-publishing category are below 97%.


Cook is careful to describe his 2013 study results as being based on “climate experts.” Political figures and the popular press are not so careful. President Obama and Secretary of State John Kerry have repeatedly characterized it as 97% of scientists. Kerry has gone so far as to say that “97 percent of peer-reviewed climate studies confirm that climate change is happening and that human activity is largely responsible.” This is patently wrong, since the Cook study and others showed that the majority of papers take no position. One does not expect nuance in political speeches, and the authors of scientific papers cannot be held responsible for the statements of politicians and the media.

Given these results, it is clear that support among scientists for human-caused climate change is below 97%. Most studies including specialties other than climatologists find support in the range of 80% to 90%. The 97% consensus of scientists, when used without limitation to climate scientists, is false.

In the strict sense, the 97% consensus is false, even when limited to climate scientists. The 2016 Cook review found the consensus to be “shared by 90%–100% of publishing climate scientists.” One survey found it to be 84%. Continuing to claim 97% support is deceptive. I find the 97% consensus of climate scientists to be overstated.

An important consideration in this discussion is that we are attempting to define a single number to represent a range of opinions which have many nuances. To begin with, as Oreskes says, “often it is challenging to determine exactly what the authors of the paper[s] do think about global climate change.” In addition, published surveys vary in methodology. They do not ask the same questions in the same format, are collected by different sampling methods, and are rated by different individuals who may have biases. These issues are much discussed in the literature on climate change, including in the articles discussed here.

The range of opinions and the many factors affecting belief in anthropogenic climate change cannot be covered here. The variety of opinion can be illustrated by one graph from the 2013 repeat of the Bray and von Storch survey showing the degree of belief that recent or future climate change is due to or will be caused by human activity. A value of 1 indicates not convinced and a value of 7 is very much convinced. The top three values add to 81%, roughly in the range of several other surveys.


Even though belief is clearly below 97%, support over 80% is strong consensus. Would a lower level of consensus convince anyone concerned about anthropogenic global warming to abandon their views and advocate unrestricted burning of fossil fuels? I think not. Even the 2016 Cook paper says “From a broader perspective, it doesn’t matter if the consensus number is 90% or 100%.”

Despite the difficulty in defining a precise number and the opinion that the exact number is not important, 97% continues to be widely publicized and defended. One might ask why 97% is important. Perhaps it’s because 97% has marketing value. It sounds precise and says that only 3% disagree. By implication, that small number who disagree must be out of the mainstream: cranks, chronic naysayers, or shills of the fossil fuel industry. They are frequently described as a “tiny minority.” It’s not as easy to discount dissenters if the number is 10 or 15 percent.

The conclusions of the IPCC are the other most often cited support for anthropogenic climate change. These conclusions are consensus results of a committee with thousands of contributors. Although this is often viewed as a monolithic conclusion, the nature of committee processes makes it virtually certain that there are varying degrees of agreement, similar to what was shown in the Bray and von Storch survey. The Union of Concerned Scientists says of the IPCC process “it would be clearly unrealistic to aim for unanimous agreement on every aspect of the report.” Perhaps this is a subject for another day.


UH Energy is the University of Houston’s hub for energy education, research and technology incubation, working to shape the energy future and forge new business approaches in the energy industry.

How Bad Will Donald Trump Be For Renewable Energy?

By Earl J. Ritchie, Lecturer, Department of Construction Management

Donald Trump’s rhetoric on climate change and other regulations sounds like bad news for renewables. But subsidies and abundant natural gas will also be factors in how quickly renewable energy grows.

There are varying opinions of Donald Trump’s likely effect on the growth of renewable energy in the U.S.: He’s bad for it; he’s not bad for it. Trump has called climate change a hoax and said he would abolish the Environmental Protection Agency, abandon the EPA’s Clean Power Plan, pull out of the Paris Agreement and boost coal and natural gas, positions which he has since largely moderated.

Certainly, Trump’s pre-election statements on fossil fuels and renewable energy were worlds apart from Hillary Clinton’s. A pre-election estimate of their comparative effects can be seen in this Platts analysis.


So, Clinton would have been great for renewable energy, Trump not so great. Everybody knows that. But, how bad would Trump be?

The pace of renewables growth will be affected by numerous separate policy decisions. These include the Clean Power Plan, the Investment Tax Credit and Production Tax Credit for renewables and the addition or reduction of restrictions on fossil fuel production and consumption.

Let’s look at the analysis in the EIA’s 2016 Annual Energy Outlook of the Clean Power Plan (CPP), which Trump said he would eliminate. In the base case, the impact of the CPP on renewables is actually relatively small. By 2030, electricity generated by wind and solar is 683 billion kWh with the CPP and 571 billion kWh without it, annual growth rates of about 7.5% and 6%, respectively. The big impact is on coal, which declines by 28% under the CPP but grows by 5% without it.

The CPP mandates targets, not methods, so other scenarios are possible, some of which are shown in the EIA report.


The Clean Power Plan is only one factor in the potential growth of renewables. As I pointed out in earlier blogs, subsidies have a big impact. The two important ones at the federal level are the Investment Tax Credit and the Production Tax Credit. The estimate in the graph below, modified from a National Renewable Energy Laboratory report which modeled the effects of the five-year extension passed in 2015, shows the difference that the extension of these credits makes. Added renewable generation capacity due to the credits is about 50 gigawatts in just five years, or about 25% of currently installed capacity. The growth rate with the credits is about twice the rate without.

Continued differences could be expected if subsidies are extended beyond 2020. Trump has not specifically threatened these credits, although he has promised to “cancel billions in climate change spending.”

Installed Renewable Capacity

Assuming the credits for renewables are not extended beyond 2020, natural gas prices, which will also be affected by Trump’s policies, will be a more significant factor. In the National Renewable Energy Laboratory’s analysis, both the extension and no-extension scenarios show slower renewables growth when gas prices are low. In the extension scenario, renewable capacity in 2030 is about 100 gigawatts lower in the Low Gas Price case than in the Base Gas Price case; in the no-extension scenario it is about 125 gigawatts lower.

The 2020 gas price in the low price scenario was about $3 per thousand cubic feet, significantly higher than the current price. If government policies favoring the industry result in continued low natural gas prices, it would further suppress renewables growth.

Arguments for the continued rapid growth of renewables include the possibility that the Clean Power Plan, Investment Tax Credit and Production Tax Credit will not be repealed, that state mandates and subsidies will continue and that the continuing cost decrease of renewables will make them more competitive. The latter two factors will almost certainly continue, so the important differences will be in the federal credits and fossil fuel policies.

It is impossible to tell which policies will be implemented at the national level. The Investment Tax Credit and Production Tax Credit may stay in force because they are favored by many Republicans in states that benefit from these credits. Policies favoring the oil industry and weakening or abandoning the Clean Power Plan seem likely.

It looks virtually certain that renewables growth will continue, but at a much-reduced pace.



100% Renewable Energy? Here’s Why It’s Not Happening Anytime Soon

By Earl J. Ritchie, Lecturer, Department of Construction Management

In an earlier post, I established that, with massive effort, it would be possible to generate all electricity and a substantial fraction of transportation energy with renewable fuels. The pace of conversion is said to depend upon political will.

“Will” is not the correct term. Will implies both desire and determination. A substantial fraction of the public does not have the desire: some do not think it is necessary, some do not want to sacrifice conveniences, and some are not willing or able to pay. On one hand, there is a highly vocal contingent that believes anthropogenic climate change is literally a life or death issue; on the other hand, there are groups that do not see it as a major problem or have a vested interest in the existing energy structure. It is difficult to predict the relative influence these two radically different viewpoints will have on how quickly the transition happens.

Public Support

The Gallup Poll results for the past several years show that only about 40% of Americans believe global warming will be a serious threat to them personally. A 2015 Pew poll indicates higher concern internationally, with 54% saying that climate change is a very serious problem and 78% saying greenhouse gases should be limited.

There is a strong component of political orientation in support for carbon reduction measures, both in the U.S. and internationally. In the Pew poll, Democrats score approximately 2 to 3 times higher on questions of climate change concern. The pace of carbon reduction will be significantly influenced by which party is in power.


Expression of concern says nothing directly about willingness to spend on carbon reduction or change lifestyle. For example, the average size of American houses continues to increase. Three-fourths of Americans drive to work alone, and electric and plug-in hybrid cars are currently less than 1% of U.S. auto and light truck sales. The strong correlation between subsidies and renewable energy spending indicates the pocketbook is more important than the environment.

The majority of people just don’t put their money where their mouth is.

What will happen?

It’s hardly earthshaking to predict the outcome will fall between predicted extremes. A couple of observations can be safely made:

  1. It will happen faster than supporters of traditional energy sources think. There is already considerable support at the government level and the decreasing cost of renewables will favor their use.
  2. It won’t happen as fast as the proponents of 100% renewables predict. The rapid growth of solar and wind power is largely due to projects supported by other people’s money. As cumulative cost increases, there will be resistance by those paying the freight. This is already happening. The technical and economic barriers that begin to become important with a higher share of renewables will slow implementation further.

Perhaps the best indicators of the pace in the near term are the pledges made in the Paris Agreement. While the agreement is hailed as a milestone, it is generally recognized that the INDCs (Intended Nationally Determined Contributions), if implemented, will not decrease CO2 emissions, will not keep global warming below 2 degrees C, and will not mean the end of fossil fuels. CO2 equivalent emissions rates expected to be attained through the agreement are indicated by the red bars in this graph from the United Nations Framework Convention on Climate Change. Emissions increase throughout the commitment period and end well above the historical levels shown in dark gray.


Note also that limiting warming to 2 degrees C requires the sharp decrease, shown in aqua, immediately after the end of the commitment period. This pattern is the same as has been the case since the first Intergovernmental Panel on Climate Change report in 1990: Each report says we have to start reducing greenhouse gas emissions immediately. Although estimated CO2 emissions have recently flattened, measured greenhouse gas concentrations not only have not decreased, they have continued to increase at an accelerating rate. The discrepancy may be due to errors in the estimate or reporting of fossil fuel consumption from the various countries but, in any case, there is no indication in the measurements that emissions have actually decreased.

The effect of the Paris Agreement on fossil fuel consumption is illustrated by this graph of oil consumption in the IEA New Policies Scenario, which incorporates the INDCs. The growth rate through 2040 is about 0.5% annually, roughly one-third of the historical rate, but consumption is still increasing. Natural gas (not shown) grows at 3%.


Even the IEA 450 scenario, consistent with a 2 degree target, leaves oil consumption in 2040 above that in 2000. Natural gas grows at close to 1%.

Barring a drastic change in policy, we will not get anywhere near 100% renewables by 2050. Low carbon energy sources will likely be less than 40% of total energy supply; the “new renewables,” wind, water, and solar, will be less than 15%. This is not a happy scenario for those who worry about anthropogenic climate change. Several analyses of the impact of the INDCs forecast global warming in the range of 2.7 degrees to 3.5 degrees by 2100. Since the commitment period ends in 2030, such analyses require assumptions of actions beyond that date.

Of course, other predictions are possible. Expectations of faster replacement of fossil fuels rely upon more optimistic assumptions of adoption of government policies, speed of implementation, reduction in energy demand, and the availability of funding. Other conditions – not strictly necessary, but probably realistically needed – are continued significant reductions in the cost of renewables and improvements in energy storage methods and carbon capture and storage.

The public has not shown much willingness to sacrifice, unless it’s someone else making the sacrifice. I don’t have a crystal ball, but getting even as high as 50% renewables by 2050 seems highly unlikely to me.