Dear President Trump – What Are You Doing About Energy?

By Terry Hallmark, Instructional Assistant Professor, Honors College

Dear President Trump,

I thought I’d drop you a line. They had a symposium at the University of Houston recently on “The Future of Energy Policy.” It was good. Even-tempered. A Republican and a Democrat – U.S. Rep. Pete Olson and Rep. Gene Green – even got along, and no one had a bad word to say about you.

That was refreshing, because lately everywhere I go on campus someone is cracking a joke every time your name is mentioned. Guess it’s because it’s a university campus – you know, where lots of left-leaning college professors hang out. A fellow who ran a bar in Brooklyn laughingly used to call professors “the Intelligenski,” because they think they’re smarter than everybody else. They can’t believe anybody would be foolish enough to pick you over Hillary. Well, I think they’re the fools. Plenty of folks voted for you – after all, you won – they’re just afraid to admit it. Maybe there needs to be something like Alcoholics Anonymous, you know, like Trump Supporters Anonymous – TSA – although it might get confused with the gang that makes you take your shoes off at the airport.

Seriously, the numbskulls who don’t like you say you’re dumb as a shovel, but you don’t get as rich as you are by being dumb – and besides, shovels are useful, especially when you’re digging holes. Plus, you’ve got the support of some smart, conservative academic types. A few weeks ago, the Chronicle of Higher Education published an article about a bunch of political scientists at the Claremont Colleges in California you’re apparently leaning on for advice. That’s where I got my Ph.D., so I know nearly all of them. Charles Kesler, who got most of the coverage in the article, was the chairman of my dissertation committee. He’s an expert on American Political Thought (back when Americans were thinking) and on the U.S. Constitution and the Federalist Papers (the “go to” handbook on how the Constitution is supposed to work). He and his buddies will be handy.

And what about your cabinet appointees, especially those who know something about energy? Rex Tillerson was a bold pick as Secretary of State. I used to work in the oil industry for this outfit called IHS, and the firm has a week-long shindig every spring called CERAWeek, where all the energy execs hang out, network and give talks. It’s run by a member of your Strategic and Policy Forum, Dan Yergin. I spoke there once. Tillerson spoke there in 2015. He has a presence, as they say. He is an Eagle Scout, and he’s from Texas. That means he’s solid and will probably do a good job.

And since he used to run ExxonMobil, he knows energy and has experience with Vladimir Putin and other heavy-handed types. He also knows about oil exploration in garden spots like Chad and Equatorial Guinea – where the people don’t give a flip about their Size 3 carbon footprint and the leaders have names that are impossible to pronounce. (Try saying Teodoro Obiang Nguema Mbasogo three times fast.) I’m a little bit worried, though, because you’re both big time wheelers and dealers at the highest levels of Big Oil and Big Buildings. Hope you guys don’t have to have your egos shoehorned into the Oval Office just to have a chat.

I’m not quite as gung-ho about your pick for Secretary of Energy, Texas’ ex-Governor, Rick Perry (now a member of your National Security Council). Sure, he’s smarter than folks think, he’s won more races for governor than anybody in the state’s history, and Texas is a big energy state – but I still wonder why you picked him. I’m not sure he’s got what President George H.W. Bush used to call “the vision thing.” He’s run for your job twice, and you’ll remember he wanted to shut down the Energy Department. Now I guess he doesn’t. Kinky Friedman, this musician/comedian/writer from Austin, ran against Perry for governor a few years back and called him “Governor Good Hair.” Maybe that’s why you picked him. You clearly know a good ’do when you see one.

As far as the issues go, I think you’ve got some things right, including support for the Dakota Access and Keystone XL oil pipelines. You’re going to take some heat from environmentalists, but don’t let that bother you. Those pipelines mean jobs for Americans, and don’t worry about all those reports casting doubt on that. If the Canadian oil intended for the Keystone XL pipeline doesn’t come here, it’ll go someplace else – like China. That’s no good.

Kudos to you, too, for being bullish on fracking. The country’s awash with shale oil and gas, and oil exports are back for the first time in years. Just when it looked like low oil prices might put the kibosh on several fracking projects, falling break-even costs have allowed them to move forward. Voila, “Permania”! The giant shale play in the Permian Basin could have 20 billion barrels of oil and 16 trillion cubic feet of natural gas. That means more oil on the market and lower crude oil prices, which give our friends in OPEC and the Russians a bad case of nerves. Good.

All the shale oil and natural gas showing up to the Energy Prom brings me to my last point. A decade ago everyone was babbling about “peak oil” and the evils of those God-forsaken, gas-guzzling Hummers. Now the issue is “peak demand,” and GM doesn’t even make Hummers anymore (they were ugly). In 2006, the U.S. ranked 11th in the world in proven oil reserves. Now, thanks to the fracking boom and shale oil, the U.S. is Numero Uno. Check it out. America is great again.

A speaker at the UH symposium said oil and natural gas are cheap, reliable and plentiful sources of energy. He’s right, but that’s just for now. A decade’s nothing – just two years past the end of your next term in office. If nothing else, the last 10 years have shown us just how quickly things can change, and change is certainly in the air when it comes to energy. So, go long – take the blinders off and think about energy out 30 or 40 years. Don’t be afraid to cozy up to new sources of energy, including renewables like solar and wind. Not many people know it, but Texas produces more energy from wind than any other state (plenty of hot air). I’m afraid you’re going to have to finalize a split with coal, though. That miner’s daughter’s not coming back.

Well, that’s it for now. I’ve got to go fill up my car and then wade through as much of Alexis de Tocqueville’s Democracy in America as I can manage before noon (it’s a beast – be glad you don’t need to read it). Maybe I’ll write again sometime. Until then, I remain,

Yours in oil (crude, that is – with associated gas),

Politicus Maximus Texanus


Terry Hallmark is an Instructional Assistant Professor in the Honors College. He teaches the Human Situation sequence, along with courses in ancient, medieval and early modern political philosophy, American political thought, American foreign policy and energy studies. His current research is focused on the political rhetoric and writings of Will Rogers. Prior to his appointment in the Honors College, Dr. Hallmark worked in the international oil and gas industry, where he had a 30-year career as a political risk analyst. He has been an advisor to international oil exploration and service companies, financial institutions and governmental agencies, including the World Bank, U.S. Department of Defense and members of the intelligence community. He is the Honors College coordinator for the minor in Energy and Sustainability Studies.

International Politics Is Always A Risk For Oil Companies, But Business Conditions May Matter More

By Terry Hallmark, Instructional Assistant Professor, Honors College

The field of political risk assessment has been in existence for roughly 40 years, and I was a practitioner in the international oil and gas industry for 30 of those, from 1983 to 2013. When I told someone that I was a political risk analyst, the next question was usually “Do you travel?” (not as much as one might expect); and then, “How many countries do you cover?” (90).

Once the preliminaries were over, the conversations usually turned to the risks themselves – all the “shoot ‘em up” stuff folks might come up with if they think about political risks – war, civil unrest, political violence, regime instability and the like. It always came as a surprise, though, when I noted that while international oil companies care about such risks – because they can mess up operations in a big way and the mitigation takes time and money – they’re not the oil companies’ primary concern.

International oil companies are more interested in what might be called commercial risks – opposition to foreign investment, repatriation difficulties and adverse contract changes.

Last year was a busy one for contract changes – 40 countries changed contract terms in 2016, and predictions are that 2017 will be just as busy. That has implications that go beyond a company’s bottom line, potentially even affecting the price of oil.

Opposition to foreign investment can range from a country being completely closed off to foreign oil exploration to protests by environmentalists or indigenous peoples. Oil companies will simply look elsewhere if they can’t get in or if working in a country is too big a hassle. Repatriation of oil earnings can be a problem, too, but it is more an irritant than anything else, since the regulations are either stipulated in the contract or set forth in standing law.

Contract changes – especially adverse contract changes – are a different story. Oil companies expect the contracts they sign to hold, but that’s not always the case. There are three kinds of adverse contract changes – nationalizations, expropriations, and simple, unilateral changes by the government to existing contracts. The first two aren’t contract changes in the usual sense of the word, although tearing up an existing contract surely is a “change.” Nationalizations occur when a government takes over a complete industry; in the oil patch, that typically means establishing a national oil company to run things. Expropriations occur when a country unilaterally seizes control, through extra-legal means, of a project or facility. Both are “oil weapons” in a country’s arsenal that can be used to exert power, gain influence and implement foreign policy.

There have been several nationalizations over the years – the Soviet Union in 1918, Mexico in 1938, Iran in 1951 and Argentina, Egypt, Indonesia, Iraq and Peru in the 1960s. And while most analysts believed that nationalizations were a thing of the past, Bolivia nationalized the country’s natural gas sector in 2006.

Expropriations are more frequent. For example, Russia took a 50-percent-plus-one stake in Shell’s Sakhalin Island project in late 2006. In May 2007, a subsidiary of Venezuela’s national oil company, PDVSA, assumed control of the Cerro Negro heavy oil project following a decree issued by then-President Hugo Chavez.

Simple contract changes happen all the time. There are two kinds (and this isn’t rocket science): contract changes that are anticipated or known that pertain to new projects and those that are not – contract changes that may come out of the blue and affect existing projects. Wood Mackenzie, a United Kingdom-based oil and gas consulting firm, classifies contract changes as “evolutionary changes” (changes for new projects) and “disruptive changes” (contract changes for existing projects).

Oil companies can generally deal with contract changes that are evolutionary. They simply decide whether or not to invest in a given country under the terms of the new contract.

Disruptive changes can be more problematic in that they can, and frequently do, result in a negative change in cash flow from a given exploration project. However, some disruptive contractual or legislative changes can be positive – i.e., designed to spur exploration activity and investment, such as recent cuts in corporate income tax rates in several countries around the world.

Things have been quite fluid lately, as some 40 countries changed contract terms in 2016. The changes were mostly in response to low oil prices and the subsequent budget shortfalls in oil-dependent countries, which resulted in higher tax rates for foreign oil companies. However, some countries, like the United Kingdom, lowered taxes to help oil companies break even in the low crude oil price environment.

Mexico was the most active country. The United States’ neighbor to the south changed contract terms five times, as it ended more than 70 years of government control of the oil sector. Other contract changes occurred in Russia (which seems to make contract changes constantly), the state of Rio de Janeiro in Brazil and Alaska in the U.S. This year looks to be just as busy as 2016, maybe even busier, as Wood Mackenzie anticipates evolutionary contract changes in Brazil, India, Indonesia, Iran, Mexico, South Africa, Thailand and Trinidad and Tobago; and disruptive changes in Australia, Alaska, Nigeria, Russia and parts of the North Sea. Some of the changes are expected to be positive, others negative, and some are likely to be mixed.

So why does all this matter? Who cares what kind of contract is in effect in a given country, or whether a contract change is evolutionary or disruptive? There are several reasons. At the simplest level, it’s about money. Dealing with an unstable investment climate takes time and trouble, and time is money. Anything that costs major oil companies money has an effect at the pump. Changes in contracts – especially something like a nationalization or a major expropriation – can roil oil markets and drive crude prices through the roof. They can shut a country off from foreign investment altogether.

Further, a host country’s petroleum legislation, and the contracts that flow from it, is instructive, for it says something about how a country views its position in the international oil arena and what it hopes to gain from its oil sector. Is the country trying to maximize its oil earnings by establishing higher tax and royalty rates or is it trying to induce new investment by cutting taxes and royalties or sweetening the pot in other ways? Also, frequent contract changes, especially if they are adverse changes, are an indicator of how a country does business, how it views foreign investment, and perhaps most importantly, how it views the sanctity of law.

Finally, it’s worth reiterating: international oil companies are far more worried about getting into a country, getting the company’s money out of the country and being able to work under the auspices of a stable, signed contract through the life of the project than they are about the political risks in the country. And since contract changes – especially adverse or “disruptive” ones – usually occur unexpectedly, an effort to develop a method perhaps capable of anticipating such changes would seem in order.

That could be a good topic for another blog post down the road.



Have We Passed the Climate Change Tipping Point?

By Earl J. Ritchie, Lecturer, Department of Construction Management

A few years ago, 400 parts per million for carbon dioxide was widely cited as the tipping point for climate change. Now that we have passed that value, it has become common to say that it wasn’t really a tipping point, that it was symbolic or a milestone.

Whether it’s a tipping point or a milestone, we have decisively passed it and CO2 levels appear certain to continue higher. Ralph Keeling, the originator of the famous Keeling Curve, said “it already seems safe to conclude that we won’t be seeing a monthly value below 400 ppm this year – or ever again for the indefinite future.”

Let’s consider what a tipping point actually is. The IPCC describes it as “abrupt and irreversible change.” Lenton, et al. say it “will inevitably lead to a large change of the system, i.e., independently of what might happen to the controls thereafter.” In other words, past the tipping point there will be drastic changes even if we stop emitting CO2. Rather than staying “well below 2 degrees Celsius above pre-industrial levels” as is the target of the United Nations Framework Convention on Climate Change (UNFCCC), there could be warming of several degrees, with associated sea level rise and rainfall changes.

[Cartoon: a rock rolling downhill past a tipping point]

Source: Alchemy 4 the Soul

In contrast to these definitions, others say climate change at projected CO2 levels may be reversible. Reversibility is important because otherwise it’s impossible, or at least very difficult, to do anything once you have passed the tipping point. I’ll return to this.

Where do we stand on CO2?

Atmospheric CO2 has not only been increasing; it has been accelerating. The 2001-2016 annual average increase is double that of 1960-1980. As pointed out in an earlier post, commitments under the UNFCCC Paris Agreement do not decrease global CO2 emissions, so it is virtually certain that CO2 concentrations will continue to rise.

Much has been made of the potential impact of Trump’s policies on CO2 emissions. The frequently quoted Lux Research analysis of Clinton and Trump policies projected a difference well under a billion metric tons in 2025. This is just over 1% of the world total under the Paris Agreement commitments. The difference is not significant insofar as it relates to tipping mechanisms.

Climate tipping mechanisms

There are multiple possible tipping mechanisms, some of which are shown on the map below. Several of these are occurring today: Arctic sea ice loss, melt of the Greenland ice sheet and boreal forest dieback (and range shifts) are well documented. The extent of permafrost loss, instability of the West Antarctic Ice Sheet and slowing of the Atlantic deep water formation (also called Atlantic Thermohaline Circulation or Atlantic Meridional Overturning Circulation) are less well supported, but there are indications that these are occurring.

These mechanisms are not directly dependent on CO2 concentration; they are triggered by warming alone. Given the amount of warming in recent decades, it is not surprising that they are occurring.

[Map: potential climate tipping elements]

Source: Lenton, et al. PNAS 2008

The effects of potential tipping mechanisms are difficult to judge. It’s generally agreed that Arctic sea ice melting is a positive feedback event. Less ice means a darker ocean and more warming. Others are not so clear-cut.

For example, boreal forests, which represent about one-third of the world’s forest cover, are carbon sinks but have variable reflectance depending upon the season, snow cover and vegetation type. Compared to tundra and deciduous forests, they have a net warming effect. The extent to which they will migrate due to warming, and the type of vegetation which will succeed them, are speculative.

Further uncertainty exists because climate effects interact. It is possible to have a cascade, in which increased warming from exceeding one tipping point triggers another.

Is climate change reversible?

The IPCC considers some additional warming irreversible. They say “Many aspects of climate change and associated impacts will continue for centuries, even if anthropogenic emissions of greenhouse gases are stopped. The risks of abrupt or irreversible changes increase as the magnitude of the warming increases.”

Per the models cited in the IPCC assessments, anthropogenic climate change can be halted at 2 degrees, although this scenario requires negative industry and energy-related CO2 emissions later this century. By this interpretation, a tipping point has not been reached.

Accomplishing the 2 degree scenario may be difficult. The world’s track record in emissions reductions is poor. According to Friedlingstein, et al., “Current emission growth rates are twice as large as in the 1990s despite 20 years of international climate negotiations under the United Nations Framework Convention on Climate Change (UNFCCC).”

There has been a reported flattening in fossil fuel emissions for the past couple of years, due primarily to reported coal reductions in China. It remains to be seen whether this is the beginning of a reversal. Even so, emissions would have to decrease rapidly to meet even the 2 degree goal.

Prescriptions for reversal of global warming include proposed geoengineering methods for removing CO2 from the atmosphere and cooling the Earth by reflecting or blocking solar radiation. These do not mean that a tipping point was not passed. In the analogy shown in the cartoon above, one can push the rock back up the hill even after it has rolled to the bottom.

Have we passed the tipping point?

Observed advances in multiple tipping mechanisms certainly raise the question whether the tipping point has been passed. However, these mechanisms are accounted for to at least some degree in climate models, so interpreting that we have passed the tipping point requires that the models understate warming effects.

This is essentially an issue of the sensitivity of climate, that is, how much warming results from a given greenhouse gas concentration. The IPCC’s analysis concludes the likely range of equilibrium sensitivity for doubling of CO2 is 1.5 degrees to 4.5 degrees. As the graph below shows, there is reasonable probability that it could be substantially higher.

[Graph: simulated probability distribution of climate sensitivity]

Source: NASA Earth Observatory

If the actual value of sensitivity falls in these higher ranges, warming will be greater than predicted by the IPCC models and a tipping point or points may have been exceeded. I’m not sure that anyone actually knows the answer, which leaves me with the unsatisfactory conclusion of not having answered the question I have raised.
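To make the sensitivity question concrete, here is a minimal sketch of the standard logarithmic relation between CO2 concentration and equilibrium warming (warming scales with the number of CO2 doublings times the sensitivity). The concentrations and sensitivity values are illustrative round numbers, not a forecast:

```python
import math

def equilibrium_warming(co2_ppm, sensitivity_per_doubling, baseline_ppm=280):
    """Equilibrium warming (deg C) vs. pre-industrial for a given CO2 level.

    Uses the standard approximation that warming is proportional to the
    number of CO2 doublings: S * log2(C / C0).
    """
    doublings = math.log(co2_ppm / baseline_ppm, 2)
    return sensitivity_per_doubling * doublings

# Endpoints and midpoint of the IPCC likely range, evaluated at ~400 ppm
for s in (1.5, 3.0, 4.5):
    print(f"sensitivity {s} C/doubling -> {equilibrium_warming(400, s):.2f} C")
```

At 400 ppm the range of implied equilibrium warming already spans roughly a factor of three, which is why the uncertainty in sensitivity dominates the tipping-point question.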

Regardless of whether we have passed the tipping point, continued warming, rainfall pattern changes, significant sea level rise and continued northward and vertical migration of plant and animal species in the Northern Hemisphere seem certain. We are looking at a changed world and must adapt to it.

Not an excuse for inaction

One should not view the possibility that we have passed a significant tipping point as a reason for inaction. Although I remain somewhat skeptical of the degree of human contribution to climate change, it is prudent to take reasonable actions that may reduce the problem. In addition, there are multiple possible tipping points with different thresholds. Exceeding one does not mean you cannot avoid another.


Earl J. Ritchie is a retired energy executive and teaches a course on the oil and gas industry at the University of Houston. He has 35 years’ experience in the industry. He started as a geophysicist with Mobil Oil and subsequently worked in a variety of management and technical positions with several independent exploration and production companies. Ritchie retired as Vice President and General Manager of the offshore division of EOG Resources in 2007. Prior to his experience in the oil industry, he served at the U.S. Air Force Special Weapons Center, providing geologic and geophysical support to nuclear research activities.

U.S. Nuclear Energy: Transform Or Become Irrelevant

By Ramanan Krishnamoorti, Chief Energy Officer, Interim Vice Chancellor for Research & Technology Transfer, Interim Vice President for Research & Technology Transfer and S. Radhakrishnan, Managing Director, UH Energy

The recent financial crisis facing Toshiba due to construction cost overruns at the newest nuclear power plants in the U.S. brought home the message: the nuclear power industry in the U.S. must change or become increasingly irrelevant.

This latest financial crisis strikes an industry that already has undergone a radical slowdown since the Fukushima disaster in 2011, which followed stricter regulations and safety concerns among the public after the Chernobyl disaster in 1986 and the partial meltdown at Three Mile Island in 1979. The increased cost of building traditional high-pressure light water reactors comes at a time when natural gas prices have plummeted and grid-scale solar and wind are becoming price competitive. So with all the financial and environmental concerns – including the very real issue of where and how we should store spent nuclear rods – why should the world even want nuclear power?

Several reasons.

First, nuclear power represents nearly 20% of the electricity generated in the U.S. Only coal and natural gas account for a higher percentage. More important than the total percentage, nuclear has the ability to provide highly reliable base load power, a critical factor as we move toward more intermittent sources, including wind and solar. Nuclear also has the highest capacity utilization factor of any fuel source – that is, the highest ratio of power actually produced to potential power generation – highlighted by the fact that it represents only 9% of installed capacity in the U.S.
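A quick back-of-the-envelope check of that ratio can be sketched in Python. The shares are the article’s round numbers (roughly 20% of generation from roughly 9% of capacity), not precise EIA figures:

```python
def relative_utilization(generation_share, capacity_share):
    """How hard a source runs compared with the fleet-average plant:
    its share of electricity generated divided by its share of
    installed capacity."""
    return generation_share / capacity_share

# Article's round numbers: ~20% of generation from ~9% of capacity
nuclear = relative_utilization(0.20, 0.09)
print(f"nuclear runs at about {nuclear:.1f}x the fleet-average utilization")
```

A ratio above 1 means a source produces more than its capacity share would suggest; by this rough measure, the nuclear fleet runs at more than twice the utilization of the average plant.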

Clearly, nuclear, combined with natural gas, could be a great mechanism for replacing coal as base-load power. Moreover, natural gas power plants can be rapidly mobilized and de-mobilized and effectively offset the inherent intermittency of solar and wind in the absence of effective grid-scale storage.

Which points to the second reason: energy sources not based on hydrocarbons have become the de facto option to decrease anthropogenic carbon dioxide. Thus, along with solar and wind, nuclear represents a significant technological solution to address the human-caused CO2 issue.

A strong case for nuclear was recently presented at a symposium hosted by UH Energy, especially if we are looking for a rapidly scalable solution. Nuclear power technology continues to evolve away from the concrete-intensive light water high-pressure process and toward a modular and molten salt-based process, especially outside the U.S. With the broad availability of nuclear fuel, especially in a world where thorium and other trans-uranium elements are increasingly becoming the fuel of choice, this technology is scalable and ready for global consumption. If done right, the use of thorium and some of the trans-uranium elements might substantially scale down the issue of spent fuel disposal.

But other, less tangible barriers remain. Perhaps the single largest barrier for nuclear energy, after the economics associated with traditional nuclear power plants, is social acceptance. The near-miss at Three Mile Island and the catastrophic incidents at Chernobyl and Fukushima highlight the challenge of gaining broad societal acceptance of nuclear energy. Compounding these challenges is the much-publicized possibility of a “dirty bomb” based on nuclear material from rogue nations.

Reducing the amount of fissile material in a power plant – and reducing, or even eliminating, the associated risk – is crucial to gaining the public’s confidence. One significant advancement that might help is fuel reprocessing and, with it, the virtual elimination of nuclear fuel waste. While these technologies are in their infancy, rapid advancement and scale-up might result in a significant shift in public perception of nuclear power.

Despite the barriers, several symposium speakers argued that the increased use of nuclear energy is not only possible but the best bridge to a low-carbon future. They did not deny the concerns, especially the staggering upfront cost of building a new nuclear power plant. Jessica Lovering, director of energy at The Breakthrough Institute, acknowledged the upfront cost has quadrupled since the 1970s and ’80s in the U.S., largely stemming from increased safety engineering in response to tougher regulations and the custom development of each nuclear facility. In contrast, Lovering has reported that in France, through standardization of equipment and centralization of generation capacity, the cost of new generation capacity has risen far more slowly. And therein lies a potential path forward for how the nuclear industry may adapt.

Perhaps the biggest disruption to the current nuclear paradigm comes from two large changes that are just getting started. First is the global reach of South Korea and its desire to become the leading global supplier of nuclear energy production. Based on imported technologies from Canada, France and the U.S., and using the key lessons from the success of the French nuclear industry – standardization and centralization – Korea has taken on building modular nuclear power plants, assembled at a single site. And the site they are working from is the United Arab Emirates! Using these advances, they have been able to keep capital costs for new generation capacity to under $2,400 per kilowatt. That compares to $5,339 per kilowatt in 2010 in the United States, according to the Nuclear Energy Agency. Interestingly, China is looking to emulate the Korean model, and with as many as 30 new nuclear reactors for power generation planned over the next two decades in China alone, the global competition is heating up.

Second is the advancement of small modular nuclear reactor (SMR) technologies, which have now achieved prototype testing. The opportunity and challenge associated with SMRs is captured in a recent DOE report. These reactors are designed with smaller nuclear cores and are inherently more flexible, employ passive safety features, have fewer parts and components, thus fewer dynamic points of failure, and can be easily scaled-out through their modular design.

Done at scale, these would result in reactors being constructed more quickly and at much lower capital costs than the traditional reactors. Aside from technical advances that would enable this technology to be produced at scale, issues of public policy, public perception, regulatory predictability and (micro) grid integration need to be resolved.
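To see what the per-kilowatt figures quoted above mean for a single project, here is a minimal sketch of the overnight capital cost of a hypothetical 1,000 MW reactor at each rate. The plant size is an assumption for illustration only:

```python
def plant_cost_billion(capacity_mw, dollars_per_kw):
    """Overnight capital cost in billions of dollars:
    capacity (MW) * 1,000 kW/MW * $/kW, scaled to billions."""
    return capacity_mw * 1_000 * dollars_per_kw / 1e9

# Hypothetical 1,000 MW plant at the two quoted per-kW rates
korea_model = plant_cost_billion(1000, 2400)  # Korean-built UAE units
us_2010 = plant_cost_billion(1000, 5339)      # 2010 U.S. figure
print(f"${korea_model:.1f}B vs ${us_2010:.1f}B per 1,000 MW plant")
```

The more-than-twofold gap per plant is the gap the standardized, modular approach is meant to close.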

The U.S. nuclear power industry needs to embrace the Korean model and SMR technologies in order to transform and provide base load capacity. The traditional model has failed us in too many ways.


Dr. Ramanan Krishnamoorti is the interim vice chancellor and vice president for research and technology transfer and the chief energy officer at the University of Houston. During his tenure at the university, he has served as chair of the Cullen College of Engineering’s chemical and biomolecular engineering department, associate dean of research for engineering, professor of chemical and biomolecular engineering with affiliated appointments as professor of petroleum engineering and professor of chemistry.

Dr. Suryanarayanan Radhakrishnan is a Clinical Assistant Professor in the Decision and Information Sciences department and the Managing Director of UH Energy. He previously worked for Shell Oil Company, where he held various positions in planning, strategy, marketing and business management. Since retiring from Shell in 2010, Dr. Radhakrishnan has taught courses at the Bauer College of Business in Supply Chain Management, Project Management, Business Process Management, Innovation Management and Statistics.

Saudi Oil Minister Sounds Trouble For Russia At Houston Conference

By Paul Gregory, Professor, Department of Economics

Energy producers and OPEC ministers, meeting at CERAWeek in Houston, grappled with a global glut of oil that was not supposed to be. Back in November, OPEC and non-OPEC oil producers agreed to their first production cut in eight years. Thus ended a Saudi-led experiment with free markets that had driven down crude prices to historic lows. The Saudi gamble was that low prices would dry up U.S. shale investment, rig counts, and hence crude production, that competes with OPEC and Russian output.

The experiment apparently failed.

Meeting in Houston with $50-plus crude, the OPEC team, represented by the Saudi oil minister, Khalid Al-Falih, and Russia’s energy minister, Alexander Novak, grudgingly acknowledged being caught off guard by a second wave of U.S. shale production at prices they had thought would throttle the shale industry. The production quotas orchestrated by OPEC and Russia were supposed to stabilize prices below the production costs of shale producers and drive them from the market. To everyone’s surprise, shale producers had used technological advances and short start-up times to push break-even costs below $50 in the Permian Basin.

Even the shale oil producers themselves were surprised by the speed of recovery. U.S. crude output rose to nine million barrels a day, and the global glut as expressed by rising crude inventories refused to go away, despite OPEC actions.

The OPEC-Russia coalition apparently did not anticipate that they were facing a new type of competition, one that could respond quickly and innovate to plumb the depths of cost economies.

The OPEC-Russia production cuts had been scheduled to last a half year, and they appear to have been implemented. With crude prices falling and inventories rising, OPEC – mainly Saudi Arabia – must decide whether to extend the cuts, even though the first set of cuts did not work out as planned.

The Saudi minister’s comments in Houston must have sent a chill down the spine of his Russian counterpart, as he announced that Saudi Arabia will not “bear the burden of free riders.” He also warned U.S. producers that it would be “wishful thinking” to expect Saudi Arabia and OPEC to “underwrite the investments of others [US shale producers] at our expense” through production cuts. Don’t expect us to keep prices so high that your investments are safe and you are freed from the pressure to push down costs to stay in business.

Translated, the Saudi minister warned his fellow OPEC members and Russia that Saudi Arabia is not prepared to cut its own production, which it can pump at low breakeven costs, to keep prices up for high-cost “free riders,” such as Russia. Russia, with its depleting reserves, antiquated technology, isolation from Western capital and technology, and petrostate dependence on oil revenues, must learn to live with $50 (or below) oil.

Energy producers from the Middle East, Latin America, Africa and North America must come to the realization that the energy market has reconstituted itself, with the U.S. as the swing producer. U.S. breakeven costs on unconventional oil will henceforth determine the long-run price of crude. Over the next decade, there will be fluctuations in crude prices as world demand fluctuates and political disruptions interrupt supplies, but the price should tend towards the equilibrium set by marginal costs in the U.S.

Two further factors could push the price even lower. The United States has elected a pro-energy president, who will lessen environmental and other regulations on energy production. These steps will drive break-even costs even lower. If Europe and other countries follow the United States’ political changes, restrictions on unconventional oil would begin to disappear worldwide. If so, the world economy can look forward to cheap energy for decades to come.

In the meantime, Russia’s Putin will be on the outside looking in. The Russian economy and state have survived two-plus years by tapping foreign reserves and depressing living standards. It is unclear how they could survive decades of energy prices below what Russia needs to stabilize its economy and provide the government with the funds it needs to maintain political harmony while fighting its hybrid wars abroad.

Paul Gregory is a professor of economics at the University of Houston. He currently teaches a course on comparative economic systems.

Gregory has published several articles in scholarly journals, in both Russian and English, and is an expert in Soviet economics and transition economics. His current research, funded by the National Science Foundation, is on the topic “High-Level Decision Making in the Soviet Administrative Planned Economy: Evidence from Soviet State and Party Archives.”

In the past, Gregory was involved in several funded economics research projects, and has earned multiple awards and honors, including the Fulbright Fellowship.

Gregory earned his bachelor’s in economics and master’s in Russian from Oklahoma University, and his PhD in economics from Harvard University.

Managing Wind And Solar Intermittency In Current And Future Systems

By Earl J. Ritchie, Lecturer, Department of Construction Management

The problem with variable renewable energy (VRE) – primarily wind and solar – is sometimes it generates too much power and sometimes it doesn’t generate enough. That’s manageable, but it’s more complicated than it may seem.

In the majority of today’s installations, variability can be balanced with so-called dispatchable generation: traditional power plants, hydroelectric and biomass. Generation from traditional power plants is cut when generation from wind and solar is too high, and increased when it’s too low. This creates some power management problems but can be handled at modest cost.

In a system with a large share of wind and solar, maintaining enough dispatchable power in reserve becomes expensive. The electrical grid must be modified to manage the increased variability. It remains to be seen how quickly the transformation to a high share of VRE can be made.

The nature of variability

Power from wind and solar varies on all time scales from seconds to years. The graph below illustrates variation in Irish wind power over one year. The Irish example is pertinent because, at 23% of electricity generated, Ireland has one of the highest shares of wind power, and its wind farms are dispersed over the country. Despite the benefit of this geographic spread, there are moderately long periods during which little or no electricity is generated by wind. The historical average output is 31% of installed capacity, according to EirGrid and SONI, but the range runs from near zero to about 50%.
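The figures above — an average of 31% of installed capacity with a wide range — are capacity-factor statistics, and the arithmetic behind them is simple. A minimal sketch, using made-up hourly generation numbers rather than actual EirGrid data:

```python
# Sketch: computing a capacity factor and output range from hourly
# generation data. All numbers here are illustrative, not EirGrid's.
installed_capacity_mw = 2800  # assumed installed wind capacity

# Hypothetical hourly generation sample (MW)
hourly_generation_mw = [1200, 300, 50, 900, 1400, 20, 700, 1100]

# Capacity factor: energy actually generated divided by the energy
# that would be generated running at full capacity the whole time.
capacity_factor = sum(hourly_generation_mw) / (
    len(hourly_generation_mw) * installed_capacity_mw
)
low = min(hourly_generation_mw) / installed_capacity_mw
high = max(hourly_generation_mw) / installed_capacity_mw

print(f"capacity factor: {capacity_factor:.0%}, range: {low:.1%} to {high:.0%}")
```

Applied to a full year of real data, the same calculation yields the 31% average and near-zero-to-50% range cited above.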

[Figure: Variation in Irish wind power generation over one year]

Source: EIA

Managing variability is not new

Variability is not a new issue in the power industry, since traditional power sources have some variability and demand is also variable over all timeframes. The graph below of demand in a large U.S. grid shows much less variability than the Irish wind power example, but the ratio of peak to minimum demand is still almost 3:1, with a noticeable seasonal component.

[Figure: Electricity demand over one year in a large U.S. grid]

Source: EIA

Managing the system is a function not only of source variation, but also of matching generation with demand. In an earlier post I discussed the “duck curve” illustrating the ramp down and ramp up needed in dispatchable generation due to the mismatch of daily solar generation peaks with demand.

Reducing source variability

Variability can be reduced by combining different types of variable sources and by spreading sources over a large geographic area. Either of these will reduce short-term variability but may or may not significantly reduce variability on a scale of hours or days.

A study of the European Union showed that wind power in 2014 fell to as low as 4% of capacity and was less than 10% of capacity 11% of the time, even when aggregated over the entire EU. Since the countries are not all grid connected, the distribution was hypothetical. Variation on the actual smaller grids was higher.

Patterns of available wind and solar power vary tremendously with location. Wind and solar may tend to peak together or at different times. They may generate more during peak demand periods or during low demand periods. This makes generation design a local issue unless very widespread interconnections are available.

The potential for greater smoothing has led to the concept of the supergrid, connecting generating sources over larger areas than traditional grids. Some technological development is necessary to implement supergrids but they likely will be constructed. Even so, they will not completely eliminate variability since weather patterns tend to occur over large areas.

Reducing demand variability

Variability of demand can be reduced by a variety of techniques that shift usage from high demand periods. These include differential pricing, smart controls, jawboning and direct utility control of load. Perhaps the most obvious example is encouraging people to shift tasks such as washing and drying to the night in order to reduce demand during the daytime peak. These methods are discussed within the industry along with methods for reducing overall demand under the term demand-side management.

Managing the remaining variability

In existing grids and those foreseeable in the near term, substantial variability and mismatch between generation and demand will continue. Management methods include dispatchable generation, overcapacity, storage and tolerating insufficiency. All have costs.

Dispatchable generation is the traditional method. In effect, it is a form of overcapacity since the dispatchable plants run below capacity until more electricity is needed. The cost of maintaining standby capacity and efficiency losses associated with ramping and partial load operation can be substantial.

Renewables can serve as dispatchable sources, so this method would not preclude achieving 100% renewables. Some very high renewables scenarios use biomass to balance variability.

The premise of overcapacity is that if you build more generation than necessary, you will have enough even when the variable sources operate at a fraction of their capacity. As the graph of Irish wind power shows, however, it is a practical and economic impossibility to build enough variable capacity to meet demand during very low generation periods.
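A back-of-the-envelope sketch shows why, using assumed numbers: if wind output falls to a small fraction of installed capacity during a lull, covering demand from wind alone requires an installed base many times the demand.

```python
# Sketch: installed capacity needed to cover a fixed demand from a
# variable source alone, at different output levels. The demand and
# output fractions below are illustrative assumptions.
demand_gw = 5.0

for output_fraction in (0.31, 0.10, 0.02):  # average, low, deep lull
    required_capacity_gw = demand_gw / output_fraction
    overbuild = required_capacity_gw / demand_gw
    print(
        f"at {output_fraction:.0%} output: "
        f"{required_capacity_gw:.0f} GW installed ({overbuild:.0f}x demand)"
    )
```

At a 2% lull, the required overbuild is fifty times demand, which is the sense in which meeting deep lulls with variable capacity alone is economically impossible.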

The downside of overcapacity is that you generate too much electricity during favorable periods of high wind or intense sunlight. Ideally, the excess electricity can be stored. This has some disadvantages which will be discussed below.

A possibility suggested by Mark Jacobson and Mark DeLucchi is generating hydrogen during periods of oversupply. In essence, this is increasing demand to match supply, and it could be applied to products other than hydrogen. It is conceptually similar to encouraging electricity use by very low or negative prices during oversupply periods, as has been practiced in Germany and other areas with moderately high VRE share.

Storage to clip the peaks and fill the valleys of demand is part of nearly all high VRE scenarios. There are numerous storage technologies with varying cost, scale, duration and technological maturity. This table from Lazard’s 2016 Levelized Cost of Storage shows the cost of the primary technologies and applications. The costs should be taken only as approximations since some of the technologies are not mature, costs vary with location and future cost reductions are likely. Taken at face value, only compressed air, pumped hydro and lithium-ion are competitive today with natural gas peaking cost of about $200 per megawatt hour.

[Table: Lazard’s 2016 levelized cost of storage, by technology and application]

Source: Lazard

Storage cost depends not only on the cost per kilowatt hour, but also on the amount of storage capacity installed. There are no guidelines for the amount of storage needed for a given level of VRE. The optimum capacity is influenced by cost-dependent tradeoffs between generation and storage, as well as the mix of sources and the match with demand. A model study of the PJM Interconnection, used as the demand example above, showed that the lowest cost alternative relied heavily on overcapacity, with little storage. Other locations and assumptions might give very different answers.

Storage technology is in an early stage of development. Most storage installations to date can only supply rated power for a few minutes to a few hours. Capability to handle extended shortage remains an issue. The extent of storage that will be incorporated in future systems will be heavily dependent upon development of storage methods and cost of generation.

It is likely impossible to build a grid with a very high share of VRE that has complete certainty of providing adequate power at all times. It may be a necessity, or a deliberate choice, to allow for curtailment, that is, not supplying some customers when generation falls short of demand.

Market mechanisms, such as interruptible supply contracts, are other ways to match supply and demand.

Optimizing the system

On a theoretical basis, an electrical grid can be optimized through the proper mix of sources, storage and locations. There is a question of what is to be optimized. Is it lowest cost, least pollution, greatest economic benefit, energy security, social equity or some combination of factors? Once the measure is determined, assumptions must still be made regarding performance, cost and demand. Actual performance will frequently differ from modeled performance.

Since the amount of electricity generated by wind and solar varies somewhat randomly, statistical forecasting techniques are used. These generate a distribution of forecast supply as a function of time. There will be some probability of extreme events, for example, a prolonged inadequacy of supply.
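The statistical approach can be sketched with a toy Monte Carlo simulation: draw hourly output fractions from an assumed distribution and count how often supply falls below demand. The Weibull parameters, capacities and demand below are illustrative assumptions, not fitted to any real system.

```python
# Sketch: Monte Carlo estimate of the probability that variable
# generation falls short of demand. Distribution parameters are
# illustrative, not fitted to real wind data.
import random

random.seed(1)
installed_mw = 3000
demand_mw = 800
n_hours = 100_000

shortfalls = 0
for _ in range(n_hours):
    # Draw an hourly output fraction (0-1) from a skewed distribution;
    # Weibull-like shapes are commonly used for wind output.
    output_fraction = min(random.weibullvariate(0.35, 1.8), 1.0)
    if installed_mw * output_fraction < demand_mw:
        shortfalls += 1

print(f"estimated probability of shortfall: {shortfalls / n_hours:.1%}")
```

A planner would run this kind of simulation with empirically fitted distributions and correlated weather patterns, then read off the tail probability of prolonged shortfalls.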

The choice of when, where, how much and what type of generation to build is decided in most countries by private companies. Their choices may be substantially influenced, but not controlled, by government policy. As a result, the grid will not be optimum. Renewables requirements and the structure of government incentives will be important factors.

Very high renewables scenarios

The majority of published scenarios, including those of the IPCC, have traditional sources – nuclear and fossil fuels – continuing to provide a significant fraction of electricity generation through 2050. A few have all electricity, or even all primary energy, from renewables. These scenarios depend not only on rapid technological advancement and implementation of renewable sources, but also on reduction of energy consumption, such as in this World Wildlife Fund (WWF) scenario of 95% renewables.

[Figure: World Wildlife Fund scenario of 95% renewables by 2050]

Source: World Wildlife Fund

The WWF scenario decreases overall energy demand by about 25% from a peak in 2020. It is at odds with many other scenarios that envision continued growth in energy demand due to increasing population and increases in consumption in the developing and less developed countries.

Similarly, this scenario envisions a decrease in annual energy cost of 4 trillion Euros by 2050, based on reduced demand and lower fuel costs. These numbers are at odds with the predicted increase in generation cost associated with high shares of VRE discussed in an earlier post.

It’s not clear to me whether scenarios that envision drastic shifts in energy source are considered plausible or are thought experiments expressing ideal goals. The WWF report describes the task of transforming the system as “a huge one, raising major challenges.” Considering the modest progress to date, differing views of the priority of decarbonization, the need for as yet unproven technology and the time needed to construct new systems, it seems unlikely that this transformation will be completed by 2050.


Earl J. Ritchie is a retired energy executive and teaches a course on the oil and gas industry at the University of Houston. He has 35 years’ experience in the industry. He started as a geophysicist with Mobil Oil and subsequently worked in a variety of management and technical positions with several independent exploration and production companies. Ritchie retired as Vice President and General Manager of the offshore division of EOG Resources in 2007. Prior to his experience in the oil industry, he served at the US Air Force Special Weapons Center, providing geologic and geophysical support to nuclear research activities.

Don’t Expect Carbon Capture To Save Coal

By Ramanan Krishnamoorti, Chief Energy Officer and Interim Vice Chancellor for Research and Technology Transfer, University of Houston

There has been a lot of excitement around the recent startup of commercial scale carbon capture and sequestration operations by NRG Energy at the W.A. Parish coal-fired power plant near Houston. The reason is clear: the technology offers the promise of “clean” coal, with little or no CO2 emission, and the potential to revive the coal-based power generation industry, which has declined nationally from about 44% of electric power generation in 2009 to 31% today.

Moreover, effectively sequestering and using the CO2 to enhance oil recovery operations in declining oil and gas fields bolsters the case, both from an environmental perspective and an economic perspective.


This raises the question of why I was less than enthusiastic about the scale out and advancement of this technology in a recent Houston Public Media interview.

Let’s start with economics.

The capital expenses required for the technology, the energy required to power the CO2 capture system and the current price of the crude oil recovered through the enhanced oil recovery process mean that operating costs for a coal-fired generating plant coupled with carbon capture technology are 30% to 35% higher than the operating costs for a coal-fired plant alone. This is consistent with extensive life cycle analysis studies of carbon capture and sequestration reported by Sathre (2011).

NRG executives, I should note, disagree and say the technology is essentially cost-neutral when oil is $50 a barrel, as profits from the additional oil harvested with the use of the sequestered carbon cover both capital and operating expenses. Moreover, they estimate that with scale up and improvements in technology, the W. A. Parish plant operates its carbon capture and sequestration with a parasitic energy load of 21% or lower and not the broad industry standard of 30% to 35%.
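The parasitic load matters because it directly reduces salable output: the capture system consumes part of the plant’s own generation. A minimal sketch of the arithmetic, using an assumed gross plant size (the 600 MW figure is hypothetical; the parasitic-load percentages are the ones cited above):

```python
# Sketch: net salable output of a coal unit running carbon capture,
# at different parasitic loads. Gross capacity is an assumed figure;
# the parasitic percentages are those cited in the text.
gross_mw = 600  # hypothetical coal unit

for parasitic in (0.21, 0.30, 0.35):
    net_mw = gross_mw * (1 - parasitic)
    print(f"parasitic load {parasitic:.0%}: net output {net_mw:.0f} MW")
```

The gap between a 21% and a 35% parasitic load on a unit of this size is roughly 84 MW of salable capacity, which is why NRG’s claimed figure is central to its cost case.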

Their bigger argument in support of the project is that investing in this technology now will pay off globally down the road. Even if the decline of coal in the United States isn’t reversed, due to concerns about climate change and because cleaner-burning natural gas is cheaper, this thinking suggests that new markets for the capture technology may open in China and India, where new coal units continue to come online. They also see it as an important step toward developing other low-carbon technologies, including the use of this carbon capture at natural-gas fired power plants.

I respect that argument, and NRG’s investment. Their project at the W.A. Parish plant has moved us further along the learning curve for this and future technologies that would allow for a more sustainable use of hydrocarbon fuels.

But I question whether the project and technology are economically scalable to other locations – especially coal-fired plants that aren’t near an existing oil or gas field in the U.S. – in the absence of substantial economic incentives for the use of such capture technologies. And there are other concerns:

The physical footprint.

The technology significantly increases the physical space required by the power generation unit, and if scaled up to accommodate the average 1 to 4 gigawatt coal-based power plant, I believe the footprint would be a significant impediment. In an era when most communities globally operate with a “not in my neighborhood” philosophy, the physical size of the current technology will pose a serious barrier to wide-scale adoption, especially in new construction of coal-fired power plants.

The price of oil.

Using the captured CO2 to improve oil production, and storing it geologically in oil formations, is the critical piece in making carbon capture and sequestration technology economically viable, whether it involves coal or natural gas.

The proximity between power plant and oilfield will matter – sending the CO2 by pipeline to a field 100 miles away is far different than sending it to an oilfield 1,000 miles away. NRG and its partner in the Parish plant, JX Nippon, are sending the CO2 about 90 miles away, to a field in South Texas.

But the price for which a producer can sell the additional oil harvested by the CO2 injection is a more fundamental issue for the technology’s viability. With oil hovering around $50 per barrel and the expected improved production of four barrels per ton of captured CO2, the economics of the combined process are not promising.
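The arithmetic behind that judgment is straightforward: gross revenue per ton of captured CO2 is just the recovery rate times the oil price (this sketch uses only the figures cited in the text and ignores capture, transport and injection costs, which must come out of that revenue).

```python
# Sketch: gross EOR revenue per ton of captured CO2, using the
# figures cited in the text. Costs of capture, transport and
# injection are not included.
bbl_per_ton_co2 = 4.0   # expected improved production per ton of CO2

for oil_price_per_bbl in (50, 80, 100):
    revenue = oil_price_per_bbl * bbl_per_ton_co2
    print(f"at ${oil_price_per_bbl}/bbl: ${revenue:.0f} gross per ton of CO2")
```

At $50 oil, each ton of CO2 yields $200 of gross revenue; at $80 to $100 oil, the same ton yields $320 to $400, which is why the technology’s viability hinges on the oil price.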

Should the price of oil increase to between $80 and $100 per barrel, or if a considerable carbon tax is incorporated and natural gas prices stay near their current prices, this technology might, in fact, be economically viable in spite of the high capital costs.

Competing technologies, especially solar, solar thermal and wind.

Renewables are becoming increasingly cost competitive with coal and natural gas; the unsubsidized levelized cost of wind energy has dropped by 66% since 2009, and the unsubsidized levelized cost of photovoltaic solar has dropped by 85% over the same period. New power generation from utility-scale solar and wind is now cost comparable to that from combined cycle natural gas.

There are legitimate questions about how soon these intermittent energy sources can be incorporated into the grid without improvements in affordable grid-scale storage technology. Storage technologies are still being developed and are likely to increase the cost of using renewables, although how much is an open question.

Undoubtedly, however, the share of renewables on the nation’s power grid will grow, along with the deployment of other novel technologies that can render natural gas power generation nearly carbon neutral. One example is using the patented Allam cycle to drive generation turbines with high pressure, high temperature CO2 and then capturing the carbon, a less expensive and possibly less complicated process.

These competing technologies are likely to be adopted much more rapidly than the current technology based on coal, even if the lessons learned from NRG’s Parish plant serve to guide research into other sequestration technologies.

Renewables, backed by newer competing technologies, along with the low price of natural gas-based power generation in the United States, will pose a significant challenge to the continued nationwide deployment of the clean coal technology demonstrated in Texas.
