By Earl J. Ritchie, Lecturer, Department of Construction Management
Powering the United States or the world with 100% renewable energy is the stated goal of many individuals and organizations. What they are really talking about is 100% renewables to generate electricity, because it’s not feasible in the near-term to replace motor fuels with renewables. Views of how quickly this can be done are highly polarized – some predict less than two decades, while others see fossil fuels as the dominant source at least through 2050.
The primary argument for renewable energy is to avoid anthropogenic, or human-caused, climate change by reducing CO2 emissions. Progress toward that goal has fallen well short of the reductions believed by the Intergovernmental Panel on Climate Change (IPCC) to be necessary to avoid catastrophic climate change. In fact, the only year in the past 40 in which CO2 emissions decreased was the first full year of the 2008 recession. The rate of growth of carbon emissions has slowed over the past five years, however, giving proponents of carbon reduction some encouragement.
Let’s look at some of the claims of the feasibility of going to 100% renewables.
How quickly can it be done?
In a 2008 speech, former Vice President Al Gore said it was “achievable, affordable and transformative” to generate all electricity in the United States using wind, solar and other renewable sources within 10 years. One might dismiss this as political hyperbole; in any case, it has not happened.
A claim that arguably has a better technical basis appeared in a widely publicized November 2009 Scientific American article by Mark Jacobson and Mark Delucchi, professors at Stanford University and the University of California, respectively. They suggested all electrical generation and ground transportation internationally could be supplied by wind, water and solar resources as early as 2030. Even that is wildly optimistic, since the median of the most optimistic of the projections in the latest IPCC assessment has low carbon sources (which include nuclear, hydro, geothermal and fossil fuels with carbon capture and storage) generating only 60% of world energy supplies by 2050; wind, water and solar are less than 15%.
In a 2015 report addressing only the U.S., Jacobson, Delucchi, and co-authors revised the schedule to 80-85% renewables by 2030 and 100% by 2050. As with nearly all low carbon scenarios, their plan depends heavily on reducing energy demand through efficiency improvements.
Other forecasts are considerably less optimistic. Two examples: the 2015 MIT Energy and Climate Outlook has low carbon sources worldwide as only 25% of primary energy by 2050, and renewables only 16%; the International Energy Agency’s two-degree scenario has renewables, including biomass, as less than 50%. Even the pledges of the widely praised Paris Agreement of the parties to the United Nations Framework Convention on Climate Change (UNFCCC) leave fossil fuels near 75% of energy supply in 2030, when the commitments end.
How are we doing?
Growth of renewables as a fraction of the overall energy supply has been slow, although recent growth of wind and solar is impressive. This graph shows the annual growth rate of renewables in the U.S. since 1980 as less than 2%.
Since 2007, wind and solar have grown over 20% per year in absolute terms, and about 15% per year as a share of supply. There was no growth in other renewables during that period. The international numbers are similar.
What is possible?
Proponents of renewable energy are fond of saying that 100% renewable is technically feasible; it only requires political will. With some caveats, this is true. There is theoretically enough sunlight and wind, and a growth rate of 20% means a doubling every four years. If sustained, this would mean we could have 500 times the existing amount of wind and solar by 2050. However, there are both economic and technical barriers.
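The growth arithmetic above is simple compound interest, and it checks out. A minimal sketch (the 2016 start year is my assumption, not from the article):

```python
# Back-of-envelope check of the compound-growth claims in the text:
# at 20% annual growth, capacity doubles roughly every four years,
# and sustained to 2050 yields about 500 times today's capacity.
import math

growth_rate = 0.20  # 20% annual growth in wind and solar

# Doubling time under compound growth: solve (1 + r)^t = 2 for t
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time: {doubling_time:.1f} years")  # ~3.8, roughly four years

# Cumulative multiple of current capacity if growth holds through 2050
# (assumes a 2016 baseline, which the article does not state explicitly)
years = 2050 - 2016
multiple = (1 + growth_rate) ** years
print(f"Multiple by 2050: about {multiple:.0f}x")
```

The result is on the order of 500 times current capacity, consistent with the figure quoted above; the point of the caveats that follow is that sustaining 20% growth for three decades is the hard part.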
The rapid growth of renewables in both the United States and Europe has been due in large part to subsidies that make investment in renewables highly profitable. As installed capacity has increased, both state and national governments have tended to cut subsidies, resulting in substantial decreases in renewable investments.
Per the United Nations Environment Programme, worldwide new investment in renewable energy has been basically flat for the past five years. This overall view masks substantial local and regional differences. Investment in the developed countries has declined about 30% since the 2011 peak, while investment in the developing countries has almost doubled.
Technical barriers to wind and solar are largely the result of intermittency and the location of favorable areas. Intermittency is not a problem as long as the proportion of renewable energy is small and excess capacity exists in conventional generating plants. It begins to become a problem when intermittent sources reach 30% of capacity and is very significant when it reaches 50%. The numbers are somewhat variable depending upon the makeup of existing plants. A 2008 report of the House of Lords estimated that reaching 34% of renewable energy in the United Kingdom, largely with wind power, would raise electricity costs 38%. The cost goes up as the share of variable renewables increases due to storage and grid flexibility requirements.
Intermittency can theoretically be handled by diversification of sources, load shifting, overbuilding capacity, and storage. All add cost. Diversification on a broad scale would require substantial changes to the energy grid. Storage on a utility scale is in an early stage of development, so costs remain uncertain. A large number of technologies exist, with varying estimated costs and applicability.
A 2012 Deutsche Bank report estimated that renewables plus storage could be competitive in Germany by 2025; however, the calculation included a carbon tax, effectively a subsidy for renewables. Any such comparison of future costs depends upon assumptions about technological improvements and fossil fuel costs.
100% renewable electricity generation is technically feasible. However, even if you assume cost competitiveness, money has to be spent in the near term not only to add capacity but also to replace existing plants. In the industrialized countries, this is not an insurmountable problem, but it does require allocation of funds that have competing demands. In some developing countries, the money is simply not available.
Some proponents of accelerating the replacement of fossil fuels advocate a massive effort, which they call a “moon shot” or compare to World War II. But this transition requires a great deal more effort than the moon shot, and there is serious question as to whether political motivation comparable to that of World War II exists. I’ll talk about that in a future post.
Earl J. Ritchie is a retired energy executive and teaches a course on the oil and gas industry at the University of Houston. He has 35 years’ experience in the industry. He started as a geophysicist with Mobil Oil and subsequently worked in a variety of management and technical positions with several independent exploration and production companies. Ritchie retired as Vice President and General Manager of the offshore division of EOG Resources in 2007. Prior to his experience in the oil industry, he served at the US Air Force Special Weapons Center, providing geologic and geophysical support to nuclear research activities.