Building Climate Resilience: Why Lessons from Houston Matter

Ramanan Krishnamoorti, Chief Energy Officer, University of Houston

Aparajita Datta, UH Research Scholar, University of Houston

Most of the conversation around addressing climate change has focused on what the federal government and global community can do. In energy-centric Houston, pledges by oil companies to cut emissions have drawn attention. But when it comes to the risks of climate change, cities are on the front line, and nowhere is that better illustrated than in Houston and along the Gulf Coast, where much of the nation’s refining and petrochemical manufacturing capacity is concentrated. With the start of hurricane season and an overheated Gulf of Mexico and Atlantic Ocean, the issue of our preparedness is front and center.

Like other cities, Houston has worked to promote energy efficiency and a cleaner transportation sector, which are important for addressing climate risks. But cities aren’t equipped to adopt other policy innovations that can quickly and adequately mitigate the impacts of a changing climate. Climate resilience requires coordinated policies across all levels of government and the private sector, but the nation has fallen short on building this resilience and breaking down silos. Houston tells the story of why it is critical to empower local governments with the right resources and facilitate integration across local, state, and federal jurisdictions to build a more resilient country.

It has been almost five years since Hurricane Harvey battered the city and brought the national economy to a temporary halt, as refineries and petrochemical plants that supply the country with gasoline, jet fuel, and other products were shut down. The impact on Houston has been far longer lasting. Every extreme weather event since then, from tropical storm Imelda in 2019 to winter storm Uri in 2021, has tested the limits of Houston’s resilience. As the risks continue to grow, Houston’s future depends on the pace of coordinated policy change and on rethinking how to build resilience within communities and across the systems that connect us.

The Houston Metropolitan Area is expected to add 3 million people, growing from about 7 million to 10 million between now and mid-century. The projected population growth, accompanied by increasing urban sprawl, will compound the risk presented by flooding—a threat well known to Houston-area leaders and residents—and two others that have received far less attention: sea-level rise and land subsidence.

Currently, about a quarter of the homes in the Greater Houston area face a significant flood risk. Increased precipitation by 2100 means that the annual risk of at least one flood exceeding 7 feet will double. While homeownership in Houston once provided working-class families with the promise of upward mobility, this increased risk will expand the number of homes experiencing significant or repeated flood damage. This will push and keep low-income families in neighborhoods that face enduring impacts, increase the cost of homeownership, and make it harder for them to access safe and livable housing.

Simultaneously, sea levels along the Gulf Coast are expected to rise by 5 feet over 1992 levels by 2100. Storm surges are expected to grow tenfold by 2050, resulting in a threefold increase in the number of homes at risk and potentially displacing 500,000 people.

Population growth in the city will increase demand for water and housing over the next three decades, exerting additional pressure on our land and water resources. The increased groundwater pumping and developed land cover directly affect the magnitude and extent of land subsidence, which in some areas of Houston is already as much as 0.3 feet per decade.

Houston’s extensive commercial infrastructure is threatened, too. The Port of Houston, one of the largest U.S. ports by waterborne tonnage, and the Houston Ship Channel are among the most vulnerable to climate and extreme weather risks. Operational disruptions of the Ship Channel from past extreme weather events have caused economic losses of more than $300 million per day. Similarly, a third of the petrochemical facilities in the region are prone to inundation during a 100-year flood, an event now likely to recur every one to thirty years along the Gulf Coast. Over the next decade, the cost of climate risks to the Houston Ship Channel and petrochemical facilities in the Gulf Coast region could increase by as much as 800%, while the cost of the failure of critical equipment and the associated punitive fines will be much higher.

In the absence of upgraded standards of engineering, design, and remediation based on realistic risk analysis, not only will property damage and costs be significant, but the functionality of the city’s energy facilities will also be threatened. Robust government policies informed by high-resolution, real-time modeling of facility-level risks are generally lacking, and existing models fail to capture the effects of land subsidence and encroachment on watersheds and wetlands. As a result, the impacts of extreme weather events extend beyond immediate damage, disrupted operations and supply chains, and personnel safety, and have lasting consequences for neighboring communities and the environment.

Houston’s economic recovery from Harvey has led to the common misperception that the city has fully and successfully bounced back since 2017. Some of the most vulnerable and marginalized Houstonians are still rebuilding from Harvey, and Uri was yet another setback. It held up a mirror to a city that is unprepared and ill-equipped and that, like the rest of the nation, faces deep political divergences between local, state, and federal agencies. What remains unaddressed is that building climate resilience goes beyond immediate recovery. It requires systems-level planning for the unanticipated, equipping local governments with the resources to serve the unique needs of their people, and facilitating communication with federal and state counterparts to safeguard infrastructure, social systems, and communities. Houston is a harbinger of America’s future, demographically and climate-wise, and how the city persists and thrives in its efforts to build resilience will shape the nation’s path forward.

UH Energy is the University of Houston’s hub for energy education, research and technology incubation, working to shape the energy future and forge new business approaches in the energy industry.


Fact Checking The 97% Consensus On Anthropogenic Climate Change

By Earl Ritchie, Lecturer, Department of Construction Management

The claim that there is a 97% consensus among scientists that humans are the cause of global warming is widely made in climate change literature and by political figures. It has been heavily publicized, often in the form of pie charts, as illustrated by this figure from the Consensus Project.

[Figure: Consensus Project pie chart illustrating the 97% consensus claim]

The 97% figure has been disputed and vigorously defended, with emotional arguments and counterarguments published in a number of papers. Although the degree of consensus is only one of several arguments for anthropogenic climate change – the statements of professional societies and evidence presented in reports from the Intergovernmental Panel on Climate Change are others – there is data to suggest that support is lower. In this post, I attempt to determine whether the 97% consensus is fact or fiction.

The 97% number was popularized by two articles, the first by Naomi Oreskes, now Professor of Science History and Affiliated Professor of Earth and Planetary Sciences at Harvard University, and the second by a group of authors led by John Cook, the Climate Communication Fellow for the Global Change Institute at The University of Queensland. Both papers were based on analyses of earlier publications. Other analyses and surveys arrive at different, often lower, numbers depending in part on how support for the concept was defined and on the population surveyed.

This public discussion was started by Oreskes’ brief 2004 article, which included an analysis of 928 papers containing the keywords “global climate change.” The article says “none of the papers disagreed with the consensus position” of anthropogenic global warming. Although this article makes no claim to a specific number, it is routinely described as indicating 100% agreement and used as support for the 97% figure.

In a 2007 book chapter, Oreskes infers that the lack of expressed dissent “demonstrates that any remaining professional dissent is now exceedingly minor.” The chapter revealed that about 235 papers in the 2004 article, or 25%, endorsed the position. An additional 50% were interpreted to have implicitly endorsed it, primarily on the basis that they discussed evaluation of impacts. Authors addressing impacts might believe that the Earth is warming without believing it is anthropogenic. In the article, Oreskes said some authors she counted “might believe that current climate change is natural.” It is impossible to tell from this analysis how many actually believed it. On that basis, I find that this study does not support the 97% number.

The most influential and most debated article was the 2013 paper by Cook, et al., which popularized the 97% figure. The authors used methodology similar to Oreskes but based their analysis on abstracts rather than full content. I do not intend to reopen the debate over this paper. Instead, let’s consider it along with some of the numerous other surveys available.

Reviews of published surveys were published in 2016 by Cook and his collaborators and by Richard S. J. Tol, Professor of Economics at the University of Sussex. The 2016 Cook paper, which reviews 14 published analyses and includes among its authors Oreskes and several authors of the papers shown in the chart below, concludes that the scientific consensus “is robust, with a range of 90%–100% depending on the exact question, timing and sampling methodology.” The chart shows the post-2000 opinions summarized in Table 1 of the paper. Dates given are those of the survey, not the publication date. I’ve added a 2016 survey of meteorologists from George Mason University and omitted the Oreskes article.

The classification of publishing and non-publishing is that used by Cook and his collaborators. These categories are intended to be measures of how active the scientists in the sample analyzed have been in writing peer-reviewed articles on climate change. Because of different methodology, that information is not available in all of the surveys. The categorization should be considered an approximation. The chart shows that over half the surveys in the publishing category and all the surveys in the non-publishing category are below 97%.

[Figure: Post-2000 surveys of scientific opinion on anthropogenic climate change, publishing vs. non-publishing categories]

Cook is careful to describe his 2013 study results as being based on “climate experts.” Political figures and the popular press are not so careful. President Obama and Secretary of State John Kerry have repeatedly characterized it as 97% of scientists. Kerry has gone so far as to say that “97 percent of peer-reviewed climate studies confirm that climate change is happening and that human activity is largely responsible.” This is patently wrong, since the Cook study and others showed that the majority of papers take no position. One does not expect nuance in political speeches, and the authors of scientific papers cannot be held responsible for the statements of politicians and the media.

Given these results, it is clear that support among scientists for human-caused climate change is below 97%. Most studies including specialties other than climatologists find support in the range of 80% to 90%. The 97% consensus of scientists, when used without limitation to climate scientists, is false.

In the strict sense, the 97% consensus is false, even when limited to climate scientists. The 2016 Cook review found the consensus to be “shared by 90%–100% of publishing climate scientists.” One survey found it to be 84%. Continuing to claim 97% support is deceptive. I find the 97% consensus of climate scientists to be overstated.

An important consideration in this discussion is that we are attempting to define a single number to represent a range of opinions which have many nuances. To begin with, as Oreskes says, “often it is challenging to determine exactly what the authors of the paper[s] do think about global climate change.” In addition, published surveys vary in methodology. They do not ask the same questions in the same format, are collected by different sampling methods, and are rated by different individuals who may have biases. These issues are much discussed in the literature on climate change, including in the articles discussed here.

The range of opinions and the many factors affecting belief in anthropogenic climate change cannot be covered here. The variety of opinion can be illustrated by one graph from the 2013 repeat of the Bray and von Storch survey showing the degree of belief that recent or future climate change is due to or will be caused by human activity. A value of 1 indicates not convinced and a value of 7 is very much convinced. The top three values add to 81%, roughly in the range of several other surveys.

[Figure: 2013 Bray and von Storch survey, degree of conviction that climate change is due to human activity, on a scale of 1 to 7]

Even though belief is clearly below 97%, support over 80% is strong consensus. Would a lower level of consensus convince anyone concerned about anthropogenic global warming to abandon their views and advocate unrestricted burning of fossil fuels? I think not. Even the 2016 Cook paper says “From a broader perspective, it doesn’t matter if the consensus number is 90% or 100%.”

Despite the difficulty in defining a precise number and the opinion that the exact number is not important, 97% continues to be widely publicized and defended. One might ask why 97% is important. Perhaps it’s because 97% has marketing value. It sounds precise and says that only 3% disagree. By implication, that small number who disagree must be out of the mainstream: cranks, chronic naysayers, or shills of the fossil fuel industry. They are frequently described as a “tiny minority.” It’s not as easy to discount dissenters if the number is 10 or 15 percent.

The conclusions of the IPCC are the other most often cited support for anthropogenic climate change. These conclusions are consensus results of a committee with thousands of contributors. Although this is often viewed as a monolithic conclusion, the nature of committee processes makes it virtually certain that there are varying degrees of agreement, similar to what was shown in the Bray and von Storch survey. The Union of Concerned Scientists says of the IPCC process “it would be clearly unrealistic to aim for unanimous agreement on every aspect of the report.” Perhaps this is a subject for another day.

Earl J. Ritchie is a retired energy executive and teaches a course on the oil and gas industry at the University of Houston. He has 35 years’ experience in the industry. He started as a geophysicist with Mobil Oil and subsequently worked in a variety of management and technical positions with several independent exploration and production companies. Ritchie retired as Vice President and General Manager of the offshore division of EOG Resources in 2007. Prior to his experience in the oil industry, he served at the US Air Force Special Weapons Center, providing geologic and geophysical support to nuclear research activities.

UH Energy is the University of Houston’s hub for energy education, research and technology incubation, working to shape the energy future and forge new business approaches in the energy industry.

The Shift To Renewables: How Far, How Fast?

By Earl J. Ritchie, Lecturer, Department of Construction Management

Powering the United States or the world with 100% renewable energy is the stated goal of many individuals and organizations. What they are really talking about is 100% renewables to generate electricity, because it’s not feasible in the near-term to replace motor fuels with renewables. Views of how quickly this can be done are highly polarized – some predict less than two decades, while others see fossil fuels as the dominant source at least through 2050.

The primary argument for renewable energy is to avoid anthropogenic, or human-caused, climate change by reducing CO2 emissions. Progress toward that goal has fallen well short of the reductions believed by the Intergovernmental Panel on Climate Change (IPCC) to be necessary to avoid catastrophic climate change. In fact, the only year in the past 40 in which CO2 emissions decreased was the first full year of the 2008 recession. The rate of growth of carbon emissions has slowed over the past five years, however, giving proponents of carbon reduction some encouragement.

Let’s look at some of the claims of the feasibility of going to 100% renewables.

How quickly can it be done?

In a 2008 speech, former Vice President Al Gore said it was “achievable, affordable and transformative” to generate all electricity in the United States using wind, solar and other renewable sources within 10 years. One might dismiss this as political hyperbole, and it has not happened.

A claim that arguably has a better technical basis appeared in a widely publicized November 2009 Scientific American article by Mark Jacobson and Mark Delucchi, professors at Stanford University and the University of California, respectively. They suggested all electrical generation and ground transportation internationally could be supplied by wind, water and solar resources as early as 2030. Even that is wildly optimistic, since the median of the most optimistic of the projections in the latest IPCC assessment has low carbon sources (which include nuclear, hydro, geothermal and fossil fuels with carbon capture and storage) generating only 60% of world energy supplies by 2050; wind, water and solar are less than 15%.

In a 2015 report addressing only the U.S., Jacobson, Delucchi, and co-authors revised the schedule to 80-85% renewables by 2030 and 100% by 2050. As with nearly all low carbon scenarios, their plan depends heavily on reducing energy demand through efficiency improvements.

Other forecasts are considerably less optimistic. Two examples: the 2015 MIT Energy and Climate Outlook has low carbon sources worldwide as only 25% of primary energy by 2050, and renewables only 16%; the International Energy Agency’s two-degree scenario has renewables, including biomass, as less than 50%. Even the pledges of the widely praised Paris Agreement of the parties to the United Nations Framework Convention on Climate Change (UNFCCC) leave fossil fuels near 75% of energy supply in 2030, when the commitments end.

How are we doing?

Growth of renewables as a fraction of the overall energy supply has been slow, although recent growth of wind and solar is impressive. This graph shows the annual growth rate of renewables in the U.S. since 1980 as less than 2%.

[Figure: Primary Energy Production, 1980-2015]

Since 2007, wind and solar have grown over 20% per year in absolute terms, and their share of total supply has grown about 15% per year. There was no growth in other renewables during that period. The international numbers are similar.

What is possible?

Proponents of renewable energy are fond of saying that 100% renewable is technically feasible; it only requires political will. With some caveats, this is true. There is theoretically enough sunlight and wind, and a growth rate of 20% means a doubling every four years. If sustained, this would mean we could have 500 times the existing amount of wind and solar by 2050. However, there are both economic and technical barriers.
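As a rough check on that arithmetic, here is a short Python sketch. It assumes a constant 20% annual growth rate and a 2016 baseline, which are this post's figures rather than a forecast:

```python
import math

growth_rate = 0.20  # assumed constant annual growth of wind and solar capacity

# Doubling time at 20% annual growth: log(2) / log(1.2), roughly four years.
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time: {doubling_time:.1f} years")

# Multiple of today's capacity reached by 2050 if that growth is sustained.
years = 2050 - 2016
multiple = (1 + growth_rate) ** years
print(f"Multiple of 2016 capacity by 2050: {multiple:.0f}x")  # roughly 500x
```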

The rapid growth of renewables in both the United States and Europe has been due in large part to subsidies that make investment in renewables highly profitable. As installed capacity has increased, both state and national governments have tended to cut subsidies, resulting in substantial decreases in renewable investments.

[Figure: Investment in Renewable Power and Fuels]

Per the United Nations Environment Programme, worldwide new investment in renewable energy has been basically flat for the past five years. This overall view masks substantial local and regional differences. Investment in the developed countries has declined about 30% since the 2011 peak, while investment in the developing countries has almost doubled.

Technical barriers to wind and solar are largely the result of intermittency and the location of favorable areas. Intermittency is not a problem as long as the proportion of renewable energy is small and excess capacity exists in conventional generating plants. It begins to become a problem when intermittent sources reach 30% of capacity and becomes very significant at 50%. The numbers are somewhat variable depending upon the makeup of existing plants. A 2008 report of the House of Lords estimated that reaching 34% renewable energy in the United Kingdom, largely with wind power, would raise electricity costs 38%. The cost goes up as the share of variable renewables increases due to storage and grid flexibility requirements.

Intermittency can theoretically be handled by diversification of sources, load shifting, overbuilding capacity, and storage. All add cost. Diversification on a broad scale would require substantial changes to the energy grid. Storage on a utility scale is in an early stage of development, so costs remain uncertain. A large number of technologies exist, with varying estimated costs and applicability.

A 2012 Deutsche Bank report estimated that renewables plus storage could be competitive in Germany by 2025; however, the calculation included a carbon tax, effectively a subsidy for renewables. Any such comparisons of future costs depend upon assumptions about technological improvements and fossil fuel costs.

100% renewable electricity generation is technically feasible. However, even if you assume cost competitiveness, money has to be spent in the near term not only to add capacity but to replace existing plants. In the industrialized countries, this is not an insurmountable problem, but it does require allocation of funds that have competing demands. In some developing countries, the money simply is not available.

Some proponents of accelerating the replacement of fossil fuels advocate a massive effort, which they call a “moon shot” or compare to World War II. But this transition requires a great deal more effort than the moon shot, and there is serious question whether there is political motivation comparable to World War II. I’ll talk about that in a future post.


Earl J. Ritchie is a retired energy executive and teaches a course on the oil and gas industry at the University of Houston. He has 35 years’ experience in the industry. He started as a geophysicist with Mobil Oil and subsequently worked in a variety of management and technical positions with several independent exploration and production companies. Ritchie retired as Vice President and General Manager of the offshore division of EOG Resources in 2007. Prior to his experience in the oil industry, he served at the US Air Force Special Weapons Center, providing geologic and geophysical support to nuclear research activities.

Water, Energy, Food – Increasingly, Everything Is Connected

By Debora Rodrigues, Associate Professor of Civil and Environmental Engineering

People often think of scientists as solitary types, working alone in our labs, focused on a narrow topic. But if that was ever true, it’s not now. Scientific discovery and creating new technologies don’t fit in a box.

That’s certainly the case with questions involving water and energy, and the so-called water-energy nexus has gained attention from both the government and researchers over the past few years.

The two intersect like this: Producing clean water requires energy – to treat the water, to distribute the water and so on – while it takes water to produce energy, from generating electricity to blasting chemicals and sand into shale rock to extract oil and natural gas. Water is a key component of the cooling process in utility plants powered by fossil fuels, and it generates electricity directly in the case of hydroelectricity. Drought can affect power plants by limiting water availability. Similarly, water treatment plants can be shut down when a storm knocks out the power supply.

I experienced the connection in my work, which focuses on bio- and nanotechnologies for water and wastewater treatment. Growing up in Brazil, I saw firsthand that people in rural areas too often were sick or even died because they didn’t have access to clean, safe drinking water. Established techniques such as reverse osmosis – which forces water through a membrane to remove bacteria and other particles – require huge amounts of energy, driving up the cost. That may not be a concern for richer countries, but in the developing world, clean water solutions need to be simple and inexpensive.

And now we know it’s not just energy and water. More recently, food has been added to the wheel.

The United Nations reports that agriculture accounts for 70 percent of global freshwater use. Food production and transportation consume about 30 percent of global energy use. As the demand for food increases to meet projected population growth, it will require both more water and more energy.

It doesn’t stop there, however. Runoff from agricultural operations can lead to pollution, requiring the water to be treated. The treatment requires energy. But agriculture doesn’t just consume water and energy – crops and agricultural waste are used to produce biofuels. About 42 percent of Brazil’s gasoline requirements are fulfilled with ethanol made from sugar cane.

There’s no place to get off the wheel. It goes in so many directions, and if we want to manage our resources sustainably, we have to pay attention.

Why do all of these connections matter? Maybe they don’t to the average consumer. At the height of the California drought last year, the news was full of stories about how much of the state’s dwindling water supply went to almonds, walnuts and other nut crops – almonds and walnuts both require about 50 gallons of water per ounce, a figure that rises to almost 100 gallons an ounce when the nuts are measured unshelled, according to the UNESCO-IHE Institute for Water Education. But people didn’t stop eating pistachios.

Researchers are paying attention, however, and that already has changed the way we think about solving problems. My lab is no longer focused just on finding ways to remove microbes and other toxins from water; instead we make sure the coatings, filters and other technologies we develop are reusable and require little if any energy.

Other researchers are working to reduce water requirements for food production, to more efficiently convert agricultural waste to biofuels, and to address other issues along the wheel.

The Food and Agriculture Organization of the United Nations has called for more data and research to help nations around the world navigate the decisions that these interrelationships will require, allowing individual countries to better manage the tradeoffs that will be required.

We have learned that nothing happens in isolation, and we are moving out of our silos.

Debora Rodrigues is an Associate Professor in the Department of Civil and Environmental Engineering at the Cullen College of Engineering at the University of Houston. Her work focuses on developing bio- and nanotechnologies to reduce energy costs in water and wastewater treatment.

Inventory, Demand And The Enigma Of The Missing Barrels Of Oil

By Chris Ross, Executive Professor, C.T. Bauer College of Business

The Yom Kippur War, Ramadan War, or October War, also known as the 1973 Arab–Israeli War, was fought by a coalition of Arab states led by Egypt and Syria against Israel from October 6-25, 1973. The fighting mostly took place in the Sinai and the Golan Heights, territories that had been occupied by Israel since the Six Day War of 1967.

In retaliation against Israel’s perceived allies, Arab OPEC members cut production and embargoed the U.S., Netherlands and a few other countries, causing spot oil prices to rapidly increase; OPEC solidified the increase into its Saudi Arabian Light “marker” crude oil reference price.

The consequences of the price increase and embargo were compounded by ill-advised price controls installed by the Nixon administration, which caused lengthy gas lines in the U.S. The global economy contracted. Oil consumption declined in 1974 after a decade of 7 percent growth per year, during which demand for light, low-sulfur crude oil had been particularly strong as utilities responded to the 1970 Clean Air Act by switching from high-sulfur coal and No. 6 fuel oil to low-sulfur fuel refined mainly from North African crude oils.

The sudden escalation of crude oil prices led to the nationalization of major oil companies’ oil production in most OPEC countries, and the new national oil companies had to face the reality that oil consumption was not inelastic. They could control production volume or price, but not both.

This lesson was relearned most severely in 1980 and again in 2009 (Figure 1).

[Figure 1: Oil Price and Consumption]

But back to the 1970s and the disruption of the Arab oil embargo. My colleagues and I had been consultants for several years for the Algerian national oil company Sonatrach, and for the first time we and our client were facing a weak crude oil market. The question was raised: how long will this trough last? We created a methodology to track and develop an outlook for future oil supply and demand on a quarterly basis, which helped our client understand how prices responded to the fundamentals of supply and demand.

The methodology migrated out to Petroleum Intelligence Weekly and on to the International Energy Agency (IEA), where it is the basis for its monthly Oil Market Report (OMR), which is closely studied by oil companies and traders. A critical element in the methodology has always been a reconciliation of observed imbalances between oil supply and demand, from which apparent inventory changes can be calculated, with observed actual changes in global inventories.

The problem has always been the integrity of the data: Most countries in the Organization for Economic Co-operation and Development – an international group promoting economic and social well-being – publish reliable data on oil production and consumption, but data outside the developed world are less reliable. Similarly, inventory data for the developed countries are well documented. Indeed, the IEA’s initial mandate was to propose minimum strategic oil storage levels to be adopted by members as a buffer against possible future oil supply interruptions. In addition, there are large quantities of oil stored temporarily in transit on tankers, which the OMR estimates, and there are unpublished quantities in countries such as China stored as strategic oil reserves and elsewhere as a bet on future price increases.

So the data integrity is fragile, and analysts expend considerable energy tracking tanker movements and picking up clues and anecdotes that can illuminate the overall situation. Despite its best efforts, the OMR retains a line item called “Miscellaneous to Balance (MTB)” as an admission that the difference between supply and demand does not match observed changes in inventories. That line item exposes a serious gap in our understanding.

Moreover, it has been getting worse.

From the first quarter of 2009 through the fourth quarter of 2011, quarterly changes in the calculated MTB varied seasonally, and the cumulative change was slightly negative (Figure 2), suggesting that either demand was a little higher than assumed, that supply might have been a little lower, or that there had been a small withdrawal from inventories outside those reported. However, the differences were small, and the OMR presented a reasonable picture of the overall market situation.

That changed in the first quarter of 2012, and cumulative MTB increased to 700 million barrels by the beginning of 2016. This means either demand is higher than reported, production is lower than reported, or there is a massive overhang of oil in storage in addition to the observed 400 million barrels increase in reported OECD inventories.

If these missing barrels are, as in the U.S., in excess of the amounts required to support the oil supply chain, they could act as a serious drag on the market and slow the process of rebalancing of the market.

[Figure 2: Missing Barrels since 1Q09]

A lot depends on which is the correct interpretation of where these barrels are held. Let’s try this one:

  • OECD inventories held by industry in the first quarter of 2011, when inventories were thought to be “normal”, amounted to 2,562 million barrels, which represented 57.1 days of average yearly demand. Non-OECD oil demand grew from 43.1 million barrels per day in 2011 to an expected (by OMR) 49.7 million barrels per day in 2016. If industry holds inventories in non-OECD countries similar to those in the developed world, this would require an increase in working inventory from 2,460 to 2,836 million barrels, an increase of 376 million barrels.
  • The U.S. holds approximately 700 million barrels in its Strategic Petroleum Reserve. China has reportedly been building and filling its own strategic oil reserve, which is aimed at being sufficient to cover 90 days of net imports. Chinese oil demand in 2016 is expected to be 13.1 million barrels per day; with production expected to be 4.1 million barrels per day that would require a reserve of 810 million barrels. It seems quite credible that China may have added at least 300 million barrels since the beginning of 2012.

This interpretation seems plausible: if correct, the missing barrels are safely tucked away in inventory required to meet growing demand in non-OECD countries and in the Chinese strategic petroleum reserve. History suggests that governments are very reluctant to deplete their strategic reserves except in moments of extreme supply insecurity. So there may in fact not be a substantial inventory overhang outside the OECD that could amplify the known overhang of about 400 million barrels within the OECD.
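The arithmetic behind those two estimates is easy to reproduce. Here is a minimal Python sketch using only the figures quoted above, and assuming, as the first bullet does, that non-OECD industry would hold the same days-of-demand coverage as the OECD did in early 2011:

```python
# Non-OECD working-inventory requirement at OECD-style days-of-demand coverage.
days_of_coverage = 57.1   # days of demand represented by OECD industry stocks, 1Q 2011
demand_2011 = 43.1        # non-OECD demand, million barrels per day
demand_2016 = 49.7        # non-OECD demand expected by the OMR, million barrels per day

inventory_2011 = demand_2011 * days_of_coverage
inventory_2016 = demand_2016 * days_of_coverage
print(f"Non-OECD working inventory: {inventory_2011:,.0f} -> {inventory_2016:,.0f} "
      f"million barrels (+{inventory_2016 - inventory_2011:,.0f})")  # ~2,460 -> ~2,840

# Chinese strategic reserve sized to cover 90 days of net imports.
china_demand = 13.1       # expected 2016 demand, million barrels per day
china_production = 4.1    # expected 2016 production, million barrels per day
reserve_target = (china_demand - china_production) * 90
print(f"Implied Chinese strategic reserve: {reserve_target:,.0f} million barrels")  # ~810
```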

The Oil Market Report is projecting global demand growth of 1.4 million barrels per day in 2016 and 1.3 million barrels per day in 2017, along with declining non-OPEC supplies in 2016, then flat in 2017.

If they are right and OPEC producers maintain current production levels, excess inventories should start being depleted fairly soon. Then prices and rig activity should strengthen further. We shall see.

As a consultant, Professor and Energy Fellow Chris Ross works with senior oil and gas executives to develop and implement value creating strategies. His work has covered all stages in the oil and gas value chain.

UH Energy is the University of Houston’s hub for energy education, research and technology incubation, working to shape the energy future and forge new business approaches in the energy industry.

Flaring In The Eagle Ford Shale And Rule 32

By Bret Wells, George Butler Research Professor of Law, UH Law Center

The oil downturn offers an opportunity to reconsider rules for flaring natural gas.

The Eagle Ford shale has provided an economic boom to South Texas. It is the source rock for the storied East Texas Field and also for the Austin Chalk formation, but it wasn’t until 2008 that the industry discovered the viability of producing directly from the Eagle Ford shale using horizontal drilling and hydraulic fracturing techniques.

However, the state of Texas finds itself at an important transition point. The severe downturn in oil and gas development has given regulators and the industry an opportunity to calmly assess whether current development practices in the Eagle Ford shale are appropriate.  One of the most visible and controversial practices has been the flaring of commercially usable and profitable natural gas that could have been efficiently produced but instead was burned off in the rush to bring crude oil to market.

In the oil-rich portions of the Eagle Ford, the formation produces enormous amounts of associated gas along with the liquid-rich crude oil. But pipeline construction was not able to keep pace with the number of wells completed before the downturn. Statewide, the Texas Railroad Commission reported that Texas flared or vented more than 47.7 billion cubic feet (bcf) of associated gas in 2012. According to the commission, this was the largest volume of gas flared in the state since 1972. Based on the amount of flaring nationwide in 2012, the United States had the dubious distinction of being one of the world's most prolific flarers of natural gas.

This downturn, therefore, represents an appropriate time for the Railroad Commission to reassess its existing regulations on flaring.  The industry should know the rules of the game before significant new capital is invested.

Under existing Rule 32, the Railroad Commission accepts that flaring commercially profitable associated gas is “a necessity” any time an oil well is capable of producing crude oil in paying quantities and there is no immediately available pipeline or other marketing facility for the natural gas. Rule 32 doesn’t require weighing the relative benefit of producing the crude oil more quickly versus the economic loss caused by the flaring of the natural gas, nor does it require any factual showing that crude oil would ultimately be lost if it were not produced immediately.

Instead, the only evidence needed to flare an oil well for as long as 180 days is proof that a pipeline is not immediately available. An application does not need to contain a statement that correlative rights are at risk or that the operator is in danger of suffering either drainage or the permanent loss of oil. Instead, the operator need only show that crude oil production would be delayed (not lost, but delayed) if the requested flaring exception were not granted.

In the past, flaring exceptions were requested and granted even though gas pipeline connections were within three miles of the new well and connections were expected to be completed within a matter of a few months. Exceptions were also routinely granted for flaring profitable associated gas even when the operator only needed a few months to remove excessive hydrogen sulfide from the gas.  What is more, Rule 32 allows the commission to provide a flaring exception after the 180-day period as part of an administrative hearing and via the issuance of a final order signed by the Railroad Commission.

Turning reality upside down

The commission has historically provided numerous exceptions for flaring in the Eagle Ford shale. Routinely issuing permits to avoid any delay in crude oil production highlights the oxymoronic reality of the existing Rule 32 exception practice. Within the construct of Rule 32, flaring commercially profitable associated gas is viewed as “not wasting,” while conserving the natural resource and deferring crude oil production until pipeline connections are made is defined as “waste.” It is ironic to suggest, as Rule 32 currently does, that burning a valuable natural resource directly into the atmosphere is “nonwasteful,” while waiting until the crude oil and natural gas could be efficiently and commercially produced is “wasteful.” Rule 32 currently turns reality upside-down.

There is some hope the Railroad Commission may be rethinking its existing rules.  On June 3, 2016, in an interview with the Texas Tribune, Commissioner Ryan Sitton indicated the commission is using this downturn as an opportunity to reconsider rules that are “outdated and need to be updated,” and he specifically referenced rules on flaring as one example.  See Texas Tribune Interview of Commissioner Sitton. This is encouraging, as it is time for the Railroad Commission to revise Rule 32 so that it affirmatively states that flaring gas represents “waste” unless an operator can prove a delay in access to pipeline connections would diminish the ultimate recovery of crude oil or result in significant drainage from neighboring tracts. Said differently, the flaring of natural gas should be allowed only after proof is given that a “no-flare” policy would itself result in the loss of the ultimate recovery of crude oil or would represent a potential loss of one’s opportunity to obtain a fair share of the oil and gas in place.

The mere delay in crude oil production should not be considered “wasteful” for purposes of Rule 32. The Railroad Commission did not think flaring associated gas from oil wells in conventional oil formations made sense in 1947 when it issued no-flare orders to stop massive flaring. That logic still holds in today’s unconventional shale formations.  Amending Rule 32 in the manner I have described would elevate natural gas produced from an oil well to its rightful status as a valuable natural resource that must be produced in accordance with sound conservation practices, rather than a byproduct that need only be conserved if there is an immediately available gas pipeline connection.

If the commission were to amend Rule 32 and grant fewer flaring exceptions in the Eagle Ford shale, the oil would still be in place. Given the low permeability of the Eagle Ford shale formation, the historic issues of conventional formations — the risk of substantial drainage from neighboring tracts and the risk of not allowing the formation to produce at its maximum efficient recovery rate — would appear to be largely inappropriate for today’s unconventional shale formations. Flaring of associated gas in the context of the Eagle Ford shale, therefore, provides an even easier factual case for the Railroad Commission to issue “no-flare” orders than the situation it confronted in 1947.

And this dramatic downturn means now is the time to act. Operators should use sound conservation-minded operating practices to efficiently produce the state’s natural resources before the next upturn, hopefully next year. Changing the standards now gives the industry time to consider how it will complete future oil wells in the Eagle Ford without wasting a valuable natural resource.

Bret Wells is the George Butler Research Professor of Law at the University of Houston Law Center.  He is also an Energy Fellow with UH Energy.  For the author’s further scholarly writings on this topic, please see Bret Wells, Please Give Us One More Oil Boom – I Promise Not to Screw It Up This Time: The Broken Promise of Casinghead Gas Flaring in the Eagle Ford Shale, 9 Tex. J. Oil, Gas & Energy Law 319 (2014).

UH Energy is the University of Houston’s hub for energy education, research and technology incubation, working to shape the energy future and forge new business approaches in the energy industry.

Impending Electric Shock? Consumers And Investors Should Brace Themselves

By Ed Hirs, Energy Economist

What happens when governmental regulation dictates that producers charge less for something than the cost of creating it?  Shortages, of course:  no sane producer is going to make something just to lose money in the process.

That is the road we appear to be headed down with electricity in the United States.  Costs are increasing partly because of increased environmental regulation, but the principal factor is that the cost of building new generating plants has in many cases outstripped the price at which generators can sell their product.

The industry is in a transitional phase.  Gas-fired plants can operate cheaply, but there aren’t enough of them to go around. Renewable sources, such as wind and water, continue to increase, but not quickly enough to make up for the increasing closures of older coal and nuclear plants, and they still cannot come close to competing on cost with gas-fired ones.

The result is that grid managers and utility regulators are worried that capacity will diminish to the point where scarcity will result in dramatic price spikes.

In 2016, no region of the country has had an average wholesale price of electricity greater than $32 per megawatt-hour. (One megawatt-hour is equal to the amount of electricity used in about 330 homes during one hour.) This does not compare well with the costs of new generation as compiled by the Energy Information Administration. These levelized costs provide a way to compare the all-in costs of producing one megawatt-hour of power, as summarized in this chart.
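A levelized cost, roughly speaking, spreads a plant's lifetime capital, fixed and fuel costs over its discounted lifetime output. The Python sketch below shows the basic calculation; the input numbers are purely illustrative assumptions, not EIA's figures.

```python
def levelized_cost(capital_cost, fixed_om_per_year, variable_cost_per_mwh,
                   capacity_mw, capacity_factor, lifetime_years, discount_rate):
    """Simplified LCOE in $/MWh: discounted lifetime costs / discounted lifetime output."""
    annual_mwh = capacity_mw * capacity_factor * 8760  # hours per year
    discounted_costs = capital_cost                    # capital spent up front
    discounted_output = 0.0
    for year in range(1, lifetime_years + 1):
        discount = (1 + discount_rate) ** -year
        discounted_costs += (fixed_om_per_year + variable_cost_per_mwh * annual_mwh) * discount
        discounted_output += annual_mwh * discount
    return discounted_costs / discounted_output

# Illustrative (made-up) inputs loosely resembling a gas-fired combined-cycle plant.
lcoe = levelized_cost(capital_cost=700e6, fixed_om_per_year=10e6,
                      variable_cost_per_mwh=30.0, capacity_mw=700,
                      capacity_factor=0.60, lifetime_years=30, discount_rate=0.07)
print(f"Illustrative levelized cost: ${lcoe:.0f}/MWh")
```

Even with these assumed inputs, the all-in cost lands well above the $32 per megawatt-hour wholesale price cited above, which is exactly the kind of gap described here.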

Gas is the winner by a mile on its current low price, and gas is also attractive because generation facilities can be built in less than two years, less than half the time of a coal plant of the same size, and for less than half the capital investment to boot.

Unlike the old days, when utilities were highly regulated and regulators made sure they were profitable, in today’s world utilities are subject to free market competitive forces.  The Supreme Court has ruled that utilities are not guaranteed to recover their costs or investments.

How did we get here?

In the past, electric utilities were generally monopolies and limited to a regulated return on invested capital — for example, a $1 billion power plant would be limited to electricity rates no higher than those that would generate a 16 percent return on invested capital.
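To make the regulated-return logic concrete, here is a small sketch of a cost-of-service calculation. Only the $1 billion rate base and 16 percent allowed return come from the paragraph above; the operating costs and sales volume are invented for illustration.

```python
# Cost-of-service ("rate of return") regulation, simplified.
rate_base = 1_000_000_000       # invested capital from the example above, $
allowed_return = 0.16           # allowed return on invested capital
operating_costs = 150_000_000   # assumed annual fuel and O&M, $
annual_sales_mwh = 5_000_000    # assumed annual electricity sales, MWh

# Rates are set so revenue covers operating costs plus the allowed return on capital.
revenue_requirement = rate_base * allowed_return + operating_costs
max_average_rate = revenue_requirement / annual_sales_mwh
print(f"Revenue requirement: ${revenue_requirement / 1e6:.0f} million per year")
print(f"Implied ceiling on average rates: ${max_average_rate:.0f}/MWh")
```

Under this structure, adding another billion dollars to the rate base raises the allowed revenue, which is why the incentive to overbuild described next follows directly.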

The incentive for utilities, then, was to build larger plants under the guise of ever-increasing reliability. The overbuild included generators, transmission lines and local distribution lines. There were obviously cross-subsidies built into this system: running one line to a rural area would not pay for itself with the single customer at the end of it, but because every customer on the system paid to string the line until it reached the last customer in the boondocks, utilities always had the incentive to keep stringing line. This led to an overbuild of generation capacity, and sharp operators including Enron and other energy traders convinced governors, legislators and regulatory bodies that by unbundling the various services provided by the industry — “deregulation” — they could open up wholesale power markets and provide lower costs to consumers. That is, generators would have to offer their electricity for sale into a market under strict rules and regulations. Wholesale purchasers would then bundle purchases and resell the electricity to consumers. Generators would be separate from transmission companies and vice versa.

The transmission companies generally remain under old regulations as common carriers; think of them as toll highways for electricity supplied to the consumer. Access and exit are controlled. Payments and profits are guaranteed. Consumers collectively still pay for the last mile of transmission lines.

For the companies that generate electricity, however, it’s a brave new world.  Deregulation began when natural gas was scarce and therefore expensive, making it noncompetitive versus coal and nuclear. However, because natural gas plants were relatively quick and cheap to bring online, they became the go-to solution for short term power supplies necessary to balance the grid during peak periods. These peaker plants extracted monopoly prices from the grid operator simply because they could, and these high costs were spread across all consumers in the market.

Shale gas – and the resulting bonanza of cheap natural gas – upended the old order of electricity generation economics. Beginning with the shale gas revolution, utilities could consider using gas-fired plants not just to manage peak demand, but to compete directly with coal and nuclear for all levels of business. Peaker plants have now been repurposed to also provide continuous electricity supply when required.

Today, nuclear facilities Diablo Canyon (PG&E), Pilgrim (Entergy), Fitzpatrick (Entergy), Clinton (Exelon) and Quad Cities (Exelon) are slated to close. Dominion Resources has requested that the state of Connecticut consider economic incentives to keep open the Millstone nuclear power plants, which can provide more than half of Connecticut’s daily electricity requirements. Nuclear power plant operators cannot cut costs any further. They are acutely aware of Northeast Utilities’ 25 felony convictions for unsafe reactor operations due to zealous cost cutting.

Nuclear operators had expected salvation from the Obama administration’s promises to impose a cap-and-trade scheme or carbon tax on fossil fuels. But the Great Recession interfered, and no congressman could vote to increase electricity prices and expect to be re-elected.

Coal-fired plants received a temporary reprieve when the Supreme Court stayed the implementation of the EPA’s Clean Power Plan, but the future for coal still looks dim.

Grid operators are in the unique position of managing electricity supplies and distribution, but they cannot force utilities to continue to operate at a loss. Utilities know the history of the state of California forcing Pacific Gas & Electric to sell electricity below costs and driving the company into bankruptcy.

The challenge for grid operators in regulated and “deregulated” markets will come when their grids come up short on hot or cold days. Eventually, costs to consumers will begin to increase and be realized either at the meter or by consumers turning to their own solutions, such as rooftop solar, battery storage, backup diesel and gasoline generators. Come up one megawatt hour short at a data or medical center on a hot summer day and prices will skyrocket. One grid manager for a “deregulated” market that has experienced such shortfalls has imposed an old-fashioned regulated price cap of $9,000 per megawatt hour on generators in those circumstances, or about 300 times the average price across the grid. Prudence dictates planning ahead, but grid operators and regulators can only encourage new generation sources. Rising prices will make new generation capacity happen.

Intellectual Property In The Age Of Open Sourcing: Who Owns It, And How Do They Get Paid?

By Wendy W. Fok, Gerald D. Hines College of Architecture

The Internet of Things, as you may have noticed, is changing the world. Architecture, design and construction aren’t immune: young architects no longer line up to work for the field’s undisputed stars, instead launching self-directed, crowdsourced projects, funding them through Kickstarter campaigns, and seeking collaborators for work big and small.

With projects like WikiHouse and the Resilient Modular Systems 2.0 digital platforms, now people can use a smartphone to connect with a manufacturer to order their house.

In some ways, that makes sense. Design no longer lives in a locked filing cabinet. The conversation I’m interested in is the virtual estate: what becomes of the ownership of digital property, and who owns it? If you design a digital system, do you lose ownership if it’s widely reproduced in manufacturing?

The question arose in the late 1990s with Napster, the internet company that allowed people to share music, in the form of MP3 files, with their peers. The industry panicked: Would people still pay for music if it wasn’t in the form of a physical compact disc?

The answer to that is still evolving, although iTunes and music streaming services suggest a qualified “yes.”

But the details of how the internet and open source software change who performs specific tasks and, perhaps equally important, who gets paid for that work are still unresolved. Ownership, at this stage in the contemporary digital conversation, therefore becomes a more active concern than authorship.

How do you protect your work?

That already is disrupting traditional views of innovation, and the global movement toward building a more sustainable future – increasing use of alternative energy, designing “smart” buildings that automatically adjust lighting, heating and air conditioning to conserve power – is a key example.

Current intellectual property law favors the creator and suggests work can’t be taken without payment or changed. That’s outdated. (In practice, current law favors creators with private venture funding or deep-pocketed corporate backing – e.g., Google and other companies that have the funds to patent and trademark their designs and ideas.)

What happens, for example, if a product is translated into code and produced on a 3D printer? Are digital footprints developable concerns for creators of the built environment? Organizations, including the U.S. Library of Congress, are dealing with the thorny issue of sharing digital properties while still protecting their value.

The implications are enormous for medical privacy, private property rights, energy efficiency and other areas.

So-called “smart” building systems are a hot topic of research, as scientists work to develop living buildings, which can learn how occupants behave and adapt to that behavior automatically, without the intervention of a building manager.

But the concept relies on data collected from sensors located throughout the building. To whom does that information belong?

Similarly, what happens when an architect designs a house, and the plans end up online? It’s easy, and common, for people to download the files and buy the plans. Common, too, for a contractor to copy the design of a house built and designed by someone else.

John Locke, the 17th century English philosopher and political theorist, established common theories about ownership – back then, it was ownership of land, cattle and other physical properties – which influenced the founding fathers of the United States.

But there is no virtual line in the sand with digital property. You might own a building, but the information harvested from that building – detailing energy use and similar data – can be equally important. It’s the same with data collected by toll road agencies about the use of your EZ Tag.

Who owns that? Maybe Elon Musk has suggested a middle ground: applying an open source approach to Tesla’s patents, meaning anyone can access the information and work to improve or change it while Tesla retains the patents themselves. Or consider Alejandro Aravena’s Elemental, which open-sourced its social housing construction plans and opened up the field of architecture for social good. Both approaches allow for innovation without giving away the company.

“We believe that Tesla, other companies making electric cars, and the world would all benefit from a common, rapidly-evolving technology platform,” Musk wrote on the Tesla website. “Technology leadership is not defined by patents, which history has repeatedly shown to be small protection indeed against a determined competitor, but rather by the ability of a company to attract and motivate the world’s most talented engineers. We believe that applying the open source philosophy to our patents will strengthen rather than diminish Tesla’s position in this regard.”

Today’s millennials share that sense of social good as they seek to make a difference. They are interested in creating products, but they want something bigger than an app or a new sneaker. A lot of people in their 20s and 30s think of design, product development and architecture as bigger than real estate.

So the culture shift is well underway. Even architecture, long a field that values ownership, originality and being the first to do something, is getting there.

The work itself is evolving, too, from the traditional “architect” to more of a creative director, such as myself, a role in which the architect becomes a conductor of a plethora of issues: not only the design of a structure but what happens within it, from heating and air conditioning to coding a building’s technologies to storing the digital data it generates.

My students know they need more business savvy than architects of a past era in order to successfully work with the community.

The role of the architect continues to become an integrated design proposition.  Architects have always been salesmen. Now we need to be hustlers and entrepreneurs.

UH Energy is the University of Houston’s hub for energy education, research and technology incubation, working to shape the energy future and forge new business approaches in the energy industry.

Driving To Work Alone Is A Costly Habit, So Why Do We Keep Doing It?

By Earl J. Ritchie, Lecturer, Department of Construction Management

In Houston, a quick way to get agreement in a conversation is to bring up the subject of traffic. You’ll almost certainly get comments about how bad it is and that it’s getting worse.

And it’s not anybody’s imagination. Statistics show that despite considerable expansion of the freeways and the addition of HOV, or high occupancy vehicle, lanes, commute times are increasing.

And the largest increase has come over the past few years.

Data from the Texas A&M Transportation Institute’s latest Mobility Scorecard illustrates the problem, and the cost in our daily lives. More than 2.4 million Houston-area commuters are trapped by congestion every day, costing the average commuter 61 hours a year in 2014. That’s up about 45 percent from 1982, when congestion cost commuters about 42 hours a year.

About half of that increase occurred between 2010 and 2013.

It’s not just Houston. Cities of all sizes from around the country have seen similar trends, as this graph from the Mobility Scorecard shows. It illustrates trends in traffic congestion in 471 urban areas.

[Figure: trends in traffic congestion in 471 U.S. urban areas, from the Mobility Scorecard.]

But more than personal inconvenience is at stake. The institute reported that all this time in traffic adds up to $160 billion in additional costs nationally, or $960 per commuter in lost time and wasted fuel. The researchers project that will grow to $192 billion by 2020.
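
A quick back-of-the-envelope check, sketched below in Python, makes the scale of those figures concrete. It uses only the numbers quoted above; the implied count of commuters is an inference from them, not a statistic the institute reports.

    # Rough check on the Mobility Scorecard figures quoted above.
    # Inputs are the numbers cited in the text; the implied commuter count
    # is an inference from them, not a figure reported by the institute.
    hours_1982 = 42              # annual hours lost per Houston commuter, 1982
    hours_2014 = 61              # annual hours lost per Houston commuter, 2014
    national_cost_2014 = 160e9   # national congestion cost, 2014, in dollars
    cost_per_commuter = 960      # lost time and wasted fuel per commuter, 2014, in dollars
    projected_cost_2020 = 192e9  # projected national congestion cost, 2020, in dollars

    delay_growth = (hours_2014 - hours_1982) / hours_1982
    implied_commuters = national_cost_2014 / cost_per_commuter
    cost_growth = (projected_cost_2020 - national_cost_2014) / national_cost_2014

    print(f"Delay per commuter, 1982 to 2014: up {delay_growth:.0%}")
    print(f"Implied commuters bearing that cost: about {implied_commuters / 1e6:.0f} million")
    print(f"Projected growth in congestion costs by 2020: {cost_growth:.0%}")

Run as written, the arithmetic works out to roughly a 45 percent increase in annual delay per commuter, about 167 million commuters nationally, and a projected 20 percent rise in congestion costs by 2020.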

There is no shortage of literature extolling the virtues of mass transit, carpooling, bicycling and other alternatives to driving to work. Despite these virtues, and in spite of complaints about congestion and significant expenditures on mass transit ($69 billion in 2014, according to the Congressional Budget Office), we continue to not only drive to work, but overwhelmingly drive by ourselves.

[Figure: how U.S. commuters get to work. Source: Fusion (Wile 2015)]

There is a wealth of statistical analysis of U.S. and international driving habits. You can see how we drive by location, income level, ethnicity, age, gender, price of gasoline, state of the economy and virtually any other category you can imagine, but the literature does not agree on why we choose to drive alone. Some articles attribute it to a preference for independence or convenience. Elon University economists Stephen B. DeLoach and Thomas Tiemann mentioned the possible influence of the cultural trend described in Robert Putnam’s Bowling Alone. They also cite “assembly time,” effectively a measure of the added duration of the commute, as a factor.

Cost of operation, including gasoline prices, is often cited as a factor, although this seems to have a minor influence.

[Figure: the share of commuters driving alone versus gasoline prices over time.]

Single driving increased from 1980 to 2000 despite a significant decrease in gasoline prices. The flattening from 2006 to 2014 is likely due partly to gasoline prices and partly to the 2008 recession; even so, the higher prices did not materially change driving patterns.

Similarly, demographic factors, such as population density and length of commute, have some influence, but they do not change the strong preference for driving alone. The graphic below shows the fraction of commuters carpooling from the 2011 American Community Survey.

[Figure: the fraction of commuters carpooling, from the 2011 American Community Survey.]

By way of scale, the value for Chicago is 8.6 percent; for Houston, it’s 11.1 percent. Nationally, almost twice as many people carpool as ride public transportation, although proportionately more commuters in a few densely populated cities, such as New York, San Francisco, and Chicago, use public transportation. Washington, D.C., Boston, Seattle and Portland stand out as having high public transportation usage relative to their population density. This suggests local attitudes can affect commuting mode.

Some insight into the psychology of commuters is provided by a 1976 study of ridesharing from Abraham D. Horowitz and Jagdish N. Sheth. They identify differences in attitudes between solo drivers and carpoolers, with solo drivers perceiving ridesharing as significantly less convenient, reliable, pleasant and time-saving than carpoolers did. Interestingly, there was not a significant difference in perception regarding cost, energy use, traffic and effect on the environment. They conclude that arguments of cost saving and pollution reduction would have little influence on solo drivers.

Apparently, a significant majority of drivers perceive the convenience, independence and time savings of driving alone to outweigh cost and environmental considerations. This would explain why HOV lanes, expanded light rail, ride matching services and the numerous arguments for mass transit have not decreased single driving.

Drivers haven’t been convinced by the argument that increased carpooling and mass transit usage would decrease traffic and commute time.

Some environmental advocates want to raise gasoline prices, thereby forcing people to reduce automobile usage, citing the European model. Based on the evidence above, this would require a large increase and is not likely to be politically acceptable in the U.S.
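
To see why, consider a simple elasticity sketch in Python. This is illustrative only: the elasticity values are assumptions drawn from the commonly cited range for the short-run price elasticity of gasoline demand, roughly -0.1 to -0.3; they are not taken from the studies discussed above.

    # Illustrative only: how large a gasoline price increase would be needed
    # to cut driving by a given amount, for a few assumed short-run price
    # elasticities. The elasticity values are assumptions, not figures from
    # the sources cited in this article.
    target_reduction = 0.10  # aim for a 10 percent cut in miles driven

    for elasticity in (-0.1, -0.2, -0.3):
        # With a roughly constant elasticity, the percent change in driving is
        # about the elasticity times the percent change in price, so the required
        # price increase is the target reduction divided by the absolute elasticity.
        required_price_increase = target_reduction / abs(elasticity)
        print(f"elasticity {elasticity:+.1f}: price must rise roughly "
              f"{required_price_increase:.0%} to cut driving {target_reduction:.0%}")

Even at the most responsive end of that assumed range, a meaningful cut in driving implies a price increase on the order of a third or more, which is why the European model is such a hard sell in U.S. politics.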

Oil Bust Blowback: Why Are The Boards Of Directors Still Here?

By Ed Hirs, Energy Economist

The spate of bankruptcies among oil and gas producers has reached epic proportions — more than 69 since January 2015 by one count. And the bankruptcy of Energy Future Holdings Corp., a group of electricity companies undone by the low price of natural gas, and the recent filing of solar energy company SunEdison, Inc., illustrate that financial crisis and questionable management are not confined to oil and gas.

In all of these recent bankruptcies, not only are the shareholders wiped out, but bondholders and banks that provided senior debt have lost money. So what happens to the directors and senior management in those companies, the people who made the decisions that ultimately led to financial crisis and bankruptcy? In the United Kingdom, leading a company into bankruptcy generally lands the directors and management team in jail. In the United States, that almost never happens.

History suggests that for many U.S. energy companies, life after bankruptcy may be temporarily uncomfortable, but it seldom leads to exile.

The great bankruptcy of Texaco in 1987 came after the company lost a $10.5 billion judgment in litigation resulting from its acquisition of Getty Oil, breaking a prior deal Getty had made with Pennzoil. The Texaco management and board remained relatively intact after the company emerged from bankruptcy a year later, following a $3 billion payment to Pennzoil.

Northeast Utilities barely averted bankruptcy in the late 1990s, another case unprecedented in size, scope and threat to the public — company management pursued aggressive cost-cutting, finally to the point that the Nuclear Regulatory Commission declared its operations unsafe. Nuclear power plants were shut down, costing the company billions in losses, and the corporation pled guilty to 25 felony counts. The management was removed, but no one went to jail.

The failure of Enron in 2001 broke new ground, with members of the management team convicted of felony charges and the Enron board of directors ordered to personally pay to settle charges brought by the U.S. Department of Labor. The business model of Enron, a Houston-based energy trading and utility company, ultimately failed to produce ever-growing profits. Management hid the true performance from shareholders and debt holders for four years, while board members professed their profound ignorance. The ex-CEO is still in prison.

What do these textbook cases of energy companies on the brink and beyond tell us about their corporate governance?

The late Paul W. MacAvoy, former dean at the Yale School of Management, described the failure of corporate governance as a syllogism: The CEO sets the strategic direction of the company in consultation with the board of directors. The board is then tasked with monitoring the CEO’s execution and implementation of the strategy. If the company does not meet its performance metrics, there are two possibilities with one common outcome: 1) The strategy is sound, but the CEO is ineffectual and must be fired, or 2) The strategy is bad, and the CEO who is responsible for the strategy must be fired.

In my experience, the directors of failed companies do not think critically about their companies’ business models. It is usually a matter of incompetence, negligence, gross negligence (as the law defines it) or laziness — exacerbated by being cronies of the CEO and not having the personal integrity to act independently on behalf of the shareholders. In the 2012 shakeup of Chesapeake’s board of directors — a full 10 ½ years after Enron — I pointed out,  “It’s like they at last realized that no one on the board had ever leased an acre or drilled a well.”

No one on the Chesapeake board was competent in the company’s business.

Looking at the energy bankruptcies now in process, the management teams and boards appear so far to have emerged relatively unscathed. If removed from one company, they enter the revolving door to reappear as part of a newly reconstituted management team or board at another company. The notion that the same CEOs and boards of directors who steered these companies into bankruptcy and wiped out shareholders — shareholders who can no longer vote to change out those directors — remain in place for the companies’ new owners seems preposterous. Is there any accountability? The decline in oil and gas prices is not an act of God but a real business risk faced in the normal course of business. These companies’ strategic plans should have adequately managed that risk.

We know where these boards of directors were.  The question is: Why are they still here?

Ed Hirs teaches energy economics in the University of Houston’s College of Liberal Arts and Social Sciences. In addition, Hirs is managing director for Hillhouse Resources, LLC, an independent exploration and production company. He founded and co-chairs an annual energy conference at Yale University.