Proximity Counts: How Houston Dominates the Oil Industry

Bill Gilmer, Director of Institute for Regional Forecasting, C.T. Bauer College of Business

Say Detroit, and people think cars. Houston is no different. The city’s oil and gas industry is a broad reflection of the industry as a whole, from the oil and gas extraction, oil services, machinery and fabricated metals that make up the upstream sector, to midstream pipeline construction and management, to the major downstream refining and petrochemical complex along the Houston Ship Channel. This article focuses narrowly on Houston’s upstream oil business and explains why it stands well apart from other oil-producing cities like Midland, Tulsa or Oklahoma City.

When we think of Houston and oil, the better economic model is that of an oil city, one that operates as the headquarters and technical center for its industry in the same way Detroit does for autos, San Jose for tech, New York for finance and Hollywood for the movies.

Houston stands apart from other oil-producing cities in both its scale and its daily operations. There are 175,000 Houston-based employees working directly in production, oil services and machinery and fabricated metals, and tens of thousands more serve as suppliers or contractors. Measured statewide, oil-extraction workers based in Houston earn 64.5% of the sector’s payroll in Texas, and almost half of the U.S. total. For oil services, Houston’s share of Texas payrolls is 45.3%, and 32.0% of the U.S. total. (See Table 1 for details on this and other comparisons.)

However, while the other oil cities are operational centers for oil production, any oil and gas drilling activity in Houston is now a relic of the past. Earlier this year, the nine-county Houston metro area accounted for only 0.8% of Texas oil production, and 0.9% of natural gas output.

The explanation is simple. Modern Houston is a headquarters city, and the chief technical center for a global oil industry. Houston’s daily oil operations are dominated by executives, geoscientists and high-end engineers, not roughnecks and tool pushers. The compensation rates paid by the oil industry in Houston versus the rest of Texas or the U.S. clearly reflect these differences in skills. (See Table 2.) The more difficult and complicated the drilling job, the more likely that a phone call for technical help will be placed to Houston from somewhere around the world.

Table 1: Houston Share of Oil Industry Activity

Table 2: US Oil Industry

Historical Accident

The best way to think of Houston’s upstream oil sector is as a cluster of headquarters and technical companies like Wall Street, San Jose, Detroit or Hollywood. All these cities operate on similar fundamentals, driven by key decision makers, major suppliers and a deep concentration of technical talent. Once these cities form, the proximity of hundreds of industry-specific companies generates large cost savings for every company that joins the cluster, and these lower operating costs become the glue that binds these cities together for decades.

Historical accident often plays an important role in the formation of these cities. Tech and San Jose, for example, were linked in the 1930s by Stanford University’s aggressive promotion and commercialization of inventions like the vacuum tube and the audio oscillator. Wall Street moved to center stage in the 19th century, when the newly opened Erie Canal generated great wealth by linking New York Harbor to the Great Lakes. Detroit’s role as an auto center was cemented by Henry Ford’s new mass production techniques and construction of the huge River Rouge plant. Before the invention of powerful electric lights, the movies needed good weather and California sunshine, and the first studios were in Hollywood by 1911.

For Houston, the historical trigger was Spindletop in 1901, the first of a string of salt dome discoveries in southeast Texas that would bring a huge new wave of American oil production. A series of new discoveries led from Beaumont to Batson, to Sour Lake and on to the Humble oilfield near Houston. Houston emerged as the closest big city with good telegraph and rail connections, the economic development equivalent of today’s internet and big airport.

Inside Houston’s Oil Cluster

As any industrial cluster forms, the key actors are a group of decision makers. For Houston and the oil industry these are the oil producers, who decide whether to drill for oil or gas, where to drill, arrange the financing and share the profit or loss. These local producers can be large integrated oil companies like BP, Shell, Chevron or ExxonMobil, or independents like Anadarko, Apache, Burlington Resources or EOG.

Suppliers then join the cluster to be near the decision makers. Chief among Houston’s local suppliers are the big three oil service companies: Baker Hughes, Halliburton and Schlumberger. The service providers work with the producers on each project, carrying out the geology, drilling and downhole testing, and ultimately delivering hydrocarbons to the wellhead. Houston has long been the heart of a global oil services industry. In the 1960s, when oil was discovered in the North Sea, for example, the British set a public policy goal of becoming a major oil-service provider; when the oil was gone, they would carry these skills forward to future discoveries. Unfortunately for the British, the Texan lead in experience, patents and a history of work in frontier oil horizons simply could not be overcome.

Closely related to oil services, and often overlapping with services in many companies, is a large local machinery and fabricated metal industry that specializes in oil products. Howard Hughes, for example, patented the rotary bit in 1909, and founded the Sharp Hughes Tool Company on Houston’s Second and Girard Streets. And Houston’s “machine shop row” on Hardy Street was in full swing by the 1920s.

Cost Savings and the Power of Proximity

Whether it is oil, autos, finance, tech or the movies, economists call the glue that holds these clusters together economies of agglomeration. These economies are simply cost savings shared by every member of the cluster; they do not derive from the efforts of any one firm but accrue to all of them by virtue of mutual proximity. Every company inside the cluster gains substantial competitive advantage over any company located outside.

Once formed, these clusters set up a virtuous cycle that eventually draws in a major piece of their industry: the bigger the cluster, the greater the cost savings; the greater the savings, the more firms are drawn into the cluster; more firms mean more savings … and the industry concentration continues on. These cost advantages are powerful enough to (1) explain why only one large headquarters/technical center typically dominates each industry, and (2) why it is so hard for other cities to challenge these centers for a share of their work.

Proximity generates the cost savings that accrue to companies operating inside Houston’s oil cluster, and these savings arise in three ways: access to many local companies specializing in oil; access to large numbers of skilled and specialized employees; and company-specific intelligence on oil markets, generated through a local knowledge loop.

  • We have already seen how Houston’s oil producers have immediate access to the major companies in oil services, machinery and fabricated metals. But dozens of such companies occupy a more modest niche, along with a wide variety of technical, engineering, legal, consulting and other industry-specific services. Because most such companies are heavily specialized, they must be constantly shopping across a large number of potential customers if they hope to drive down average cost. Houston’s large cluster of firms eases the search.
  • Companies operating in Houston have access to tens of thousands of potential employees. Table 3 offers insight into the remarkable concentration of Houston’s cluster of key oil-related skills. If the concentration ratio in the table is greater than one, the occupation is more heavily concentrated in Houston than in the rest of the nation. A ratio of 1.1 or 1.2 means that it is 10% or 20% more concentrated than the nation, and can serve as a marker that something interesting is happening. Any ratio above two, doubling the national share, points to something extraordinary. And a ratio of 8 to 16 defines Houston as a critical industry hub.

Augmenting the concentration ratios is the number of local employees working in each occupation, and Houston’s rank by the number of workers across all 383 U.S. metropolitan areas. It is the combination of large numbers of oil workers and their concentration that sets Houston apart as you go down this list of petroleum engineers, geoscientists, chemical engineers, health and safety engineers, cartographers, etc.  For example, the much smaller metro area of Midland is the only place with a higher concentration ratio of petroleum engineers than Houston, 66.3 versus 16.5.  But Houston has 10,950 petroleum engineers versus 1,310 in Midland.

  • Even among the relatively low concentration ratios for purchasing agents, logisticians and cost estimators, their raw numbers put Houston inside the top four metro areas as a major business center.

How does this concentration of oil skills lower cost?  Normal turnover or industry expansion requires hiring, and the nearby workforce lowers the cost of search, hiring, relocation, and training. The concentration of local skills benefits employees as well, allowing them easy access to dozens of potential employers without relocation.  Local workers are certainly paid better inside the oil cluster. We saw earlier in Table 2 that local oil-related compensation rates are much higher in Houston than elsewhere. Much of this difference reflects higher local skill levels, but companies also probably share part of their “agglomeration” savings with local employees as a bonus for proximity.
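The concentration ratios in Table 3 appear to be computed like location quotients: an occupation’s share of local employment divided by its share of national employment. A minimal sketch, which uses the article’s 10,950 Houston petroleum engineers but assumed (not actual) metro and national employment totals:

```python
# Concentration ratio (location quotient) sketch: an occupation's share of
# local employment divided by its share of national employment.

def concentration_ratio(local_occ, local_total, nat_occ, nat_total):
    """Ratio > 1 means the occupation is more concentrated locally than nationally."""
    return (local_occ / local_total) / (nat_occ / nat_total)

# Illustrative inputs: 10,950 Houston petroleum engineers (the article's count)
# against ASSUMED totals of 3 million metro jobs, 33,000 U.S. petroleum
# engineers and 150 million U.S. jobs.
ratio = concentration_ratio(10_950, 3_000_000, 33_000, 150_000_000)
print(round(ratio, 1))  # a double-digit ratio, roughly the magnitude reported for Houston
```

With these placeholder totals the ratio lands in the mid-teens, consistent with the order of magnitude the article reports; the actual published figure depends on the real employment counts.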

Table 3: Concentration and number

  • Finally, there is an important knowledge loop for the oil industry, and it is located in Houston. Proximity allows for the sharing of industry-specific intelligence from many sources, as local companies constantly seek to piece together additional data to form a better strategic picture. The intelligence sources can be local business meetings, industry-wide conferences or local professional meetings. Or they can be an informal lunch, an Astros’ game or a round of golf at the local country club.

As an example, consider the Geophysical Society of Houston and the professional interaction that it generates. It is the largest chapter of the society in the nation with over 2,000 members; it conducts monthly technical meetings in three locations, has six special interest groups, presents an annual symposium, offers distinguished instructor short courses and forms liaisons with local universities. The Gulf Coast Chapter of Petroleum Engineers plays a comparable role in its profession, as do dozens of other local professional, technical and business groups.


Bill Gilmer is director of the Institute for Regional Forecasting at the University of Houston’s Bauer College of Business. The Institute monitors the Houston and Gulf Coast business cycle, analyzing how oil markets, the national economy and global expansion influence the regional economy. Gilmer previously served the Federal Reserve Bank of Dallas for 23 years, retiring from the bank as a Senior Economist and Vice President.


How Much Sea Level Rise Is Actually Locked in?

Earl J. Ritchie, Lecturer, Department of Construction Management

 

Helen Island, Palau, is shown on June 10, 2012. It is believed to be shrinking, as sea levels rise and storms increase in intensity and frequency. The reef is threatened by climate change and overfishing. Photographer: Mike Di Paola/Bloomberg

One frequently sees articles claiming a certain amount of global warming or sea level rise is inevitable based on the amount of CO2 already in the atmosphere. Locked-in warming is commonly estimated to be 1.5 degrees C (2.7° F) above preindustrial levels, about a one-half degree above the current temperature. This is the aspirational target of the 2015 Paris Agreement of the United Nations Framework Convention on Climate Change.

Although there may be some rhetorical benefit in this number, it understates the actual amount of committed warming and sea level rise predicted by mainstream climate change theory. The IPCC says, “Stopping emissions today is a scenario that is not plausible.” Therefore, we will inevitably have higher CO2 concentration than the present, greater warming and more sea level rise.

Under the lowest of the IPCC’s four scenarios, RCP2.6, peak temperature rise of 2 degrees C will be reached before 2100, and sea level rise will be less than about a half meter. However, due to lag effects in ocean warming and ice melt, sea level will continue to rise for centuries. Rise can theoretically be reduced by negative carbon emissions or geoengineering.

The best-case scenario

If one accepts “locked-in” as actually involving some amount of future emissions, the door is opened to the numerous speculations about what is achievable and what will actually happen. RCP2.6 requires very rapid CO2 reduction and, ultimately, negative emissions. There is considerable question, even as expressed by the original authors, whether these reductions can be realized. Assuming the IPCC’s models are correct and RCP2.6 is achievable, one might say locked-in warming is the 2 degree primary target of the Paris Agreement, approximately 1 degree higher than today.

What happens after 2100

The IPCC projects temperatures to decline slowly after 2100 under RCP2.6 and rise slowly under RCP4.5, the second most favorable scenario. However, due to lag, sea level continues to rise under all scenarios. The graph below shows sea level projections to 2500 for scenarios roughly equivalent to RCP2.6 (low CO2) and RCP4.5 (medium CO2).

Figure: Global mean sea level rise. Source: Modified from IPCC Climate Change 2014

The maximum rise of about 2 meters (7 feet) in these scenarios is quite moderate compared to what could happen. The IPCC’s range for “multi-millennial” commitment is 3 to 13 meters (10 to 36 feet) for warming of 2 degrees C.

There is ongoing debate in the scientific community about melting thresholds and rates. As discussed in a recent post, several recent articles have predicted faster rise in this century. Differences of opinion over short-term rise, to 2100, are primarily a question of how fast the Antarctic and Greenland ice sheets will melt. Longer-term rise is a question of how much of the sheets will melt.

Comparison to past warm periods

Although I am somewhat skeptical of the ability of current climate models to predict melt rates, an independent estimate of long-term sea level rise can be made by analogy to earlier warm periods. The argument is shown in the graph below, with the different colored lines representing sea level reconstruction by different researchers.

Source: Modified from Siddall, et al. (NOAA)

In this case, the comparison is to the last interglacial period, known as the Eemian or MIS 5e, about 125,000 years ago.

Temperature in the Eemian is estimated to have been about 1 degree C higher than today and maximum sea level about 5 meters higher. The analogy argument is that peak temperature expected in the near future will be similar to the Eemian maximum; therefore, we can expect ultimate sea level rise to be similar.

Unfortunately, there are two reasons why the value of this comparison is limited. First, there is considerable disagreement about the actual temperature and sea level during the Eemian. The temperature difference is commonly quoted as 1 degree to 2 degrees C; however, estimates range from negligible to “several degrees.” Similarly, sea level has been estimated to have been 3 to 10 meters higher, with estimates around 5 to 6 meters most common.

Second, conditions during the Eemian are not similar to the present. Solar heating was higher, and CO2 was lower.

The Eemian and other warm periods are not great analogs. But they do indicate a high probability of substantial sea level rise over the longer term for temperatures that are already locked in.

How high will the water get?

Kopp, et al. say, “future sea-level rise remains an arena of deep uncertainty.” The range of projected sea level is very wide, and it depends upon how far into the future you project.

The main cause of variation is the amount of melt of Greenland and Antarctic ice sheets. It’s estimated that complete melt of the ice sheets would raise sea level by about 66 meters (217 feet). Since only a small fraction of the sheets melted in the Eemian, projected rise at 2 degrees will be much less.

Levermann, et al. modeled sea level rise 2000 years into the future at 1 degree and 2 degrees above pre-industrial levels, bracketing the Paris Agreement goals. Their median estimates are 2.3 meters at 1 degree and 4.8 meters at 2 degrees. Not too much should be made of the specific numbers, because the model range is large (1 to 4.9 and 2.6 to 9.8, respectively), and other articles have different projections.

It is fair to say ultimate sea level rise could be in the range of 3 to 10 meters (10 to 33 feet). The difference in rise predicted between different models is small for the next two or three decades so there will be little evidence in the near term pointing toward a clearer estimate.

If you can reverse it, is it locked in?

Both warming and sea level rise can theoretically be halted or reversed by geoengineering methods: removing carbon dioxide to reduce the greenhouse effect (carbon dioxide removal, CDR) or reflecting sunlight (solar radiation management, SRM). There are dozens of proposed methods of each. Some are pretty innocuous, such as growing more forest to remove carbon dioxide and having more white roofs to reflect sunlight. Others, such as fertilizing the ocean to encourage algae or phytoplankton growth, have side effects and could get out of control.

At present, they are considered by most to be impractical, too expensive or too dangerous. Keller, et al. say, “At present, there is little consensus on the impacts and efficacy of the 60 different types of proposed CDR.” This is also true of solar radiation management, about which there is considerable concern about adverse effects. In any case, it is unlikely these methods will be implemented on a significant scale for at least two or three decades.

Why should we care?

It’s hard to get people concerned about possible events hundreds, or even thousands, of years in the future. James Hansen says, “nobody cares about matters 1,000 years in the future.” However, these are serious matters. Sea level rise of several meters has significant implications for displacement of populations, damage to infrastructure and loss of land.

Several visualizations available online show the effect of even modest sea level rise. The photo below is a NOAA simulation of a 1-meter (3.3-foot) rise at Galveston, TX.

Source: NOAA

We are very likely facing amounts of sea level rise with serious consequences even at temperatures we have already reached.

What is to be done

Significant sea level rise is unavoidable. Adaptation will be necessary.

The uncertain rate of sea level rise makes planning difficult. A 2015 report by the New Zealand Parliamentary Commissioner for the Environment discusses the choice of time horizons, noting that a 50-year planning horizon may be sufficient for projects with a short intended life, while a 100-year horizon may not be enough for those with a long life. Their recommendation is to use a timeframe “appropriate for different types of development.”

What is necessary or feasible will vary by location. Levees, tidal barriers, seawalls and elevating infrastructure may be possible. It would make sense not to allow new development in low-lying areas. Adaptation methods are extensively discussed in the IPCC Fifth Assessment Working Group II Report.


Earl J. Ritchie is a retired energy executive and teaches a course on the oil and gas industry at the University of Houston. He has 35 years’ experience in the industry. He started as a geophysicist with Mobil Oil and subsequently worked in a variety of management and technical positions with several independent exploration and production companies. He retired as Vice President and General Manager of the offshore division of EOG Resources in 2007. Prior to his experience in the oil industry, he served at the US Air Force Special Weapons Center, providing geologic and geophysical support to nuclear research activities.  Ritchie holds a Bachelor of Science in Geology–Geophysics from the University of New Orleans and a Master of Science in Petroleum Engineering and Construction Management from the University of Houston.

How Houston Survived the Great Oil Bust of 2015-16

Bill Gilmer, Director of Institute for Regional Forecasting, C.T. Bauer College of Business

 

Buildings stand on the city skyline during the 2018 CERAWeek by IHS Markit conference in Houston, Texas, U.S., on Tuesday, March 6, 2018. CERAWeek gathers energy industry leaders, experts, government officials and policymakers, leaders from the technology, financial, and industrial communities to provide new insights and critically-important dialogue on energy markets. Photographer: Aaron M. Sprecher/Bloomberg

In November 2014, OPEC announced it would no longer serve as swing producer in world oil markets, triggering what would arguably become the worst downturn in the history of American oil. Based on the rate of decline of the rig count, the number of rigs left working at the worst of it, the lost oil jobs or the fall in capital expenditure, the 2015-16 oil downturn rivaled or exceeded that of the 1980s.

However, the story for Houston’s economy was very different.  Between 1982 and 1986, for example, Houston suffered its worst recession ever, losing 211,000 jobs or about one job in 12.  In contrast, 2015-16 brought the local metro area a loss of only 4,300 jobs overall, or about 0.1% of payroll employment, making it the mildest of any oil-related downturn in local history.

This improved performance was not because Houston was somehow immune to the oil price collapse.  Before 2014, the city was riding a huge boom in oil-related activity, and its abrupt end spelled the loss of 74,200 local oil-related jobs over the following two years.  Figure 1 shows that at the cyclical peak in both 1982 and 2014, Houston had a similar number of oil-related jobs, and as losses mounted, both cycles followed a very similar path.

The Fall in Houston’s Oil Jobs in 2015-2016 Closely Follows an Earlier Path from the 1980s

Offsetting Losses to Oil

There were probably three key factors that helped Houston’s economy offset the 2015-16 losses in the oil sector: continued growth of the U.S. economy, sustained momentum from a decade of boom-time growth, and a huge petrochemical construction boom driven by low natural gas prices.  This combination added up to just enough to offset a serious setback in oil employment.

  • The most important factor was that the U.S. economy performed well, growing moderately but steadily after 2012. This growth supported Houston’s many companies that are unrelated to oil but which sell into national and global markets.  Local examples would be United Airlines, AIG, Sysco, Men’s Wearhouse and Waste Management.  This contrasts with the 1980s downturn, for example, which began with the long and deep 1982 U.S. recession.
  • Houston had built up tremendous economic momentum from the oil-boom years. Between December 2003 and December 2014, Houston added 696,000 new jobs, enough to match the total employment of major metro areas like Jacksonville, Salt Lake City or Richmond. By 2014, Houston was still hard-pressed to finish building the equivalent of a major new metro area in a short period of time, and a falling oil price did not change the fact that the city had not nearly caught up on much-needed roads, schools, hospitals, shopping centers, banks and restaurants. Growth in these secondary sectors continued at boom-time rates in 2015 and slowed only slightly in 2016.
  • Finally, low natural gas prices came to Houston’s rescue. High oil and natural gas prices drive high levels of activity in exploration, drilling and production, and this same activity contracts if prices fall.  However, for Houston’s Ship Channel complex of refineries and petrochemical plants, low energy prices bring good news in the form of reduced feedstock costs and higher profit margins.  In this particular case, a sharp fall in natural gas prices after 2012, to levels below $4 per thousand cubic feet, kicked off a massive $50 billion construction boom.  Centered in East Houston and along the Ship Channel, it was primarily a petrochemical and plastic boom, with some help from LNG exports and refining.  The construction peaked in 2015-16 and has been winding down quickly since then.

How did all this add up for Houston’s economy?  The Federal Reserve Bank of Dallas publishes a business cycle index for Houston that is specifically designed to track the local business cycle.  The index includes four variables: payroll employment, the unemployment rate, real wages, and real retail sales.  (Figure 2)

From 2001-03, for example, the index shows that the U.S. tech bust pushed Houston into its mildest recession since 1972, a 2.5% decline measured peak to trough. In 2008-09, the Great Recession pushed Houston’s index down much further, a 7.9% peak-to-trough decline. If we had included the 1982-86 period in the chart, it would have shown a local economic collapse of 18.0%.
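Peak-to-trough declines like those quoted for the Dallas Fed index are computed by tracking the running peak of the series and measuring the deepest subsequent drop. A minimal sketch with a hypothetical index series, not the Fed’s actual Houston data:

```python
# Peak-to-trough decline of a business-cycle index, in percent.
# The series below is a hypothetical illustration only.

def peak_to_trough_decline(series):
    """Largest percentage drop from a running peak to any later reading."""
    peak = series[0]
    worst = 0.0
    for x in series:
        peak = max(peak, x)                           # update the running peak
        worst = max(worst, (peak - x) / peak * 100)   # deepest drop so far
    return worst

index = [100, 103, 105, 101, 96.7, 98, 102]  # hypothetical monthly readings
print(round(peak_to_trough_decline(index), 1))
```

A partial recovery after the trough, as in the last two readings above, does not change the measured decline; the statistic captures only the deepest fall from the prior high.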

What’s in the Air? We Need a Comprehensive Approach to Managing Pollution

 by Stephanie Coates, UH Energy, University of Houston

When a waterway is deemed too heavily polluted, there is a federal protocol that state and local authorities can follow to measure pollutants, evaluate and enforce cleanup of the waterway. When air becomes too heavily polluted in an environmental “hotspot,” there is no similar mechanism.

And people living in these hotspots too often pay the price.

It’s essential to regulate air pollution, not only for the sake of clean air but also for the health of communities living nearest the highest concentrations. We already have a model for how to do that in the Clean Water Act.

Under the Clean Water Act, if a state identifies a waterway that is “impaired,” or in danger of not meeting water quality standards, the state is supposed to calculate the pollution affecting the waterway and determine a plan, or Total Maximum Daily Load (TMDL), to reduce the pollution to levels that meet water quality standards. Part of the plan includes identifying the sources of pollution and determining how to allocate responsibility among the various sources for reducing the pollutants to an overall acceptable level.

The plan is implemented and the waterway is then reassessed.

As the graphic demonstrates, states are constantly reevaluating and updating their plans throughout this process and moving their waterways toward meeting cleaner standards.

WLA is the sum of wasteload allocations (point sources), LA is the sum of load allocations (nonpoint sources and background) and MOS is the margin of safety. (EPA)

Source: https://www.epa.gov/tmdl/program-overview-impaired-waters-and-tmdls

A key feature of this process is that if a body of water is threatened by more than one pollutant, TMDLs account for the heavier cumulative load posed by multiple pollutants, then permits for sources of pollution are issued through the Environmental Protection Agency’s National Pollutant Discharge Elimination System, or NPDES program.
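The TMDL arithmetic described above can be sketched in a few lines; the outfalls, loads and safety margin below are hypothetical, invented purely for illustration:

```python
# Sketch of the TMDL identity: TMDL = sum(WLA) + sum(LA) + MOS.
# All loads below are HYPOTHETICAL, for illustration only (units: kg/day).

wasteload_allocations = {"outfall_A": 40.0, "outfall_B": 25.0}       # point sources (WLA)
load_allocations = {"agricultural_runoff": 20.0, "background": 5.0}  # nonpoint sources (LA)
margin_of_safety = 10.0  # explicit reserve for uncertainty (MOS)

tmdl = (sum(wasteload_allocations.values())
        + sum(load_allocations.values())
        + margin_of_safety)
print(tmdl)  # 100.0 -- the total daily load the waterway can receive and still meet standards
```

In an actual TMDL, the allocation runs in the other direction: the total acceptable load is fixed by water quality standards, and responsibility for reductions is divided among the sources so that their combined discharges fit within it.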

There isn’t a comparable plan for air pollution.

The EPA sets limits for six pollutants, including carbon monoxide and lead, but what if an area is already exposed to several pollutants and a company there is seeking a permit for another? Or if an area experiences emissions of a chemical not on the EPA list?

Since there is nothing like a Total Maximum Daily Load for air pollutants – which would set overall levels allowed, adjusting for how many types of pollution are found in one geographic area – communities in “hotspots” are pitted against individual emitters and must fight each new pollutant one at a time, without federal support. The situation is exacerbated by the lack of a flexible process for evaluating and lowering those pollutants.

Public health is potentially at risk.

As an example of how this is playing out, consider the permit fight between Valero Refining – Texas, LP, and the community of Manchester, the southeast Houston neighborhood where the refinery is located.

The Texas Commission on Environmental Quality (TCEQ) in June held a public meeting to take comments on a permit Valero requested to authorize already existing emissions of hydrogen cyanide from the Fluid Catalytic Cracking (FCC) Unit. Emissions of hydrogen cyanide (HCN) have been occurring since the cracking unit was deployed, but Valero was not previously required to track them. However, the EPA recently started requiring testing for HCN, meaning the company needed an addendum to its existing permit for other types of emissions at the site.

According to the notice published by TCEQ, after reviewing the technical aspects of the amendment, the agency’s executive director “made a preliminary decision to issue the permit because it meets all rules and regulations.” The agency apparently saw it as a straightforward issue.

But to the citizens testifying at the public meeting, the permit feels like another nail in the coffin.

The Manchester neighborhood, zip code 77012, straddles Interstate 10 and sits in a fork of Buffalo Bayou at the Houston Ship Channel; the interstate and the ship channel are both heavily trafficked. Other prominent features of the immediate neighborhood include a fertilizer plant, two recycling facilities, two refineries including Valero, and the Union Pacific train yard. A number of chemical plants sit within a three-mile radius.

The University of Texas School of Public Health found a possible link between cancer risk in the area and the air pollutants. In 2016, the Union of Concerned Scientists concluded similarly and also noted that the risk of respiratory hazards is 24 percent greater in Manchester than in more affluent parts of Houston.

At the public meeting with representatives from TCEQ, residents reported health-related issues, including frequent nosebleeds, asthma and headaches. Without regulations on total air quality, it was easy for TCEQ to dismiss the complaints. It is not the hydrogen cyanide alone that causes all the noted health problems, but that was the only issue being considered.

HCN is a neurotoxin, and at high concentrations it causes death. Lower chronic exposure can cause headaches, weakness, nausea and an enlarged thyroid. HCN is, however, lighter than air: released from a tall stack, it usually rises rapidly and disperses into the atmosphere, where it breaks down, albeit slowly. At that point, most people would not consider it a health risk.

In July 2017, TCEQ wrote an interoffice memo regarding the health effects from the emissions related to the new permit. It concludes that they “do not anticipate any short- or long-term adverse health effects to occur among the general public as a result of exposure to the proposed emissions from this facility.”

However, the memo, intended to attest to the health risk, does not examine the total accumulation of existing emissions, nor how permitting the HCN emissions changes that risk. It also does not consider the possibility of leaks, other unplanned emissions or explosions.

Absent an overarching federal rule requiring otherwise, TCEQ grants permit requests for each individual chemical emitted at each individual facility rather than considering the overall impact of adding hydrogen cyanide to the pollution mix over Manchester, although nothing prevents the agency from taking the broader view.

In this permit fight, Valero is not to be seen as an enemy or villain – many Manchester residents work at the refinery, which by at least some accounts has been a good and responsive neighbor.

In fact, we can’t blame any individual refinery, especially since emissions only come as a by-product of supplying the gas, chemicals and other valuable consumer products we all demand.

The cumulative risk – not only the air quality risk posed by total pollutants, but also the health risk from pollutants in an area already made vulnerable by the fact that so many residents are poor, members of a minority ethnic or racial group and speak limited English – should be considered when permitting an additional facility or more emissions. We have a system for reporting air pollution emissions through the Toxic Release Inventory, for example, but after we collect and report the data, we don’t do enough to ensure the safety of affected communities. As it is now, health risk is only assessed as individual chemicals newly become regulated, as in the case of HCN; even then, the assessment is incomplete since it does not address total ambient air quality.

Limiting the overall load of air pollutants is a better way to address hotspots and is already working well under the Clean Water Act.

Residents haven’t given up the fight against allowing hydrogen cyanide emissions at the Valero refinery, but the odds aren’t in their favor. Until Manchester and similar communities have a better way to deal with the source of public health problems, they will need to keep fighting, one chemical at a time.


Stephanie Coates is a member of the staff at UH Energy and is a graduate student, pursuing her master’s degree in public policy, along with her master’s degree in social work at the University of Houston. She has received several awards including the Phi Alpha Honor Society Scholarship, Women’s and Gender Resource Center Scholarship and the Hobby School of Excellence Scholarship. Stephanie is a member of MACRO Student Organization and serves on the Student Center Policy Board, where she chairs the subcommittee for Facilities Use and Planning.  She serves on the UH Sustainability Committee and volunteers with Staff Council. Stephanie has a bachelor’s degree in Spanish from the University of Houston.

What Happened To The IPO Market For Oil And Gas Independents?

There is no IPO market for oil and gas independents today. Why is this?  Because the market value of publicly traded shale companies today is less than the cost of replacing the leaseholds, seismic, reserves and drilling inventory that make up their assets. Consequently, cash-rich companies and private equity managers have acquired or merged publicly held companies into their portfolio companies to acquire assets more cheaply.

When will the market again favor private equity managers’ favored strategy of privately acquiring assets and then exiting to an overvalued public market? Simple: when market values exceed replacement costs.

To understand when that might happen, let’s take a quick look at the fundamentals driving today’s market.  After that we’ll look at some time-honored ways to view risk and reward.

The supply of public equities in oil and gas is disproportionately small relative to the use and value of oil and gas in the national economy. “Market allocations” for oil and gas are currently underweighted in the public equities market. Until capital flows back into public equities, independents and their investors must rely upon excellent science to discover the next low-cost play, drive down current drilling and operating expenses, and maintain positive cash flow. It will happen.

Consumers are short oil and gas for the rest of this year, next year and the years afterward, no matter how much they plan to use. Threats of supply shortfalls lead to remarkable inflows of capital, price increases in the futures markets, surging equity prices, and overweighting of oil and gas equities in the portfolios of institutional investors. Always.

The current price elasticity of demand for oil is negative 0.04. This means that a 1% change in world supply changes the price (in the opposite direction) by roughly 25%.
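The arithmetic behind that claim follows directly from the definition of elasticity; a minimal sketch, using the negative 0.04 figure from the text (the function name is illustrative):

```python
# Price elasticity of demand: elasticity = %change_in_quantity / %change_in_price.
# Rearranged, a given percentage supply (quantity) change implies
#   %change_in_price = %change_in_quantity / elasticity.

def price_change_pct(quantity_change_pct, elasticity=-0.04):
    """Percentage price change implied by a percentage quantity change."""
    return quantity_change_pct / elasticity

print(price_change_pct(1.0))   # a 1% supply increase -> roughly a 25% price drop
print(price_change_pct(-1.0))  # a 1% supply withdrawal -> roughly a 25% price rise
```

The smaller the elasticity, the larger the price swing from a given supply shock, which is why modest OPEC withdrawals can move the price from $40 to $60.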

This lack of elasticity is what Saudi Arabia used to take aim at U.S. shale drillers in 2014, resulting in a catastrophic loss of capital, 330-plus bankruptcies, 250,000 direct jobs lost and more than $200 billion in lost annual GDP. This lack of elasticity also means that despite current sentiment that the world has plenty of oil and gas and that peak demand is only a few years away, OPEC has succeeded in withdrawing sufficient oil supplies to drive up the price from $40 per barrel to more than $60.

Note that “Peak Oil” supply has always been a quaint fiction — especially so in the price-regulated U.S. market in which the notion was advanced. Increased demand puts upward pressure on prices, and higher prices draw out more supply. The 2009 Energy Journal paper “Depletion and the Future Availability of Petroleum Resources” lays out the supply availability of oil, gas and gas liquids as the real price increases and allows for economic production.

The biennial study of the Potential Gas Committee details that gas resources will last well beyond several lifetimes. The marginal cost of producing natural gas from the Barnett and Haynesville shales was about $1 per Mcf in 2011. That number has only decreased as technology has improved by leaps and bounds.

According to recent data, private equity sponsors have stakes in 350 portfolio companies to which $200 billion of equity has been added since 2014.

Much of this funding went to shore up expensive shale and offshore investments that were bleeding cash at $40 per barrel oil.

These ideas are not new.

Time-Honored Analysis

  • In a 1931 article, Stanford University professor Harold Hotelling detailed conditions under which the owner of a limited amount of natural resources would be indifferent between current production and future production if the forecast price increase of the resource was equal to the rate of interest. Known U.S. shale plays offer the certainty of hydrocarbons — essentially, storage in place — the commercial production of which is entirely dependent upon the current gross margin.

Barring supply manipulations elsewhere in the world, investors today in the U.S. domestic shale plays face the prospect of bringing oil to market when the long run prospects for price exceeding marginal costs are not good and, in fact, while the prospect of price increasing at a rate greater than the rate of interest is decidedly negative.
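Hotelling's indifference condition can be made concrete with a toy present-value comparison; the prices and interest rate below are hypothetical, chosen only to illustrate the rule:

```python
# Hotelling (1931): the owner of a finite resource is indifferent between
# producing today and producing later if the resource price grows at the rate
# of interest, because discounted revenue per barrel is then equal across periods.
# All numbers below are hypothetical, for illustration only.

def pv_per_barrel(price_growth, interest, years, p0=60.0):
    """Present value of selling one barrel `years` from now if price grows at `price_growth`."""
    future_price = p0 * (1 + price_growth) ** years
    return future_price / (1 + interest) ** years

r = 0.05  # assumed interest rate
# Price growing at the rate of interest: PV equals today's $60 price (indifference).
print(round(pv_per_barrel(price_growth=r, interest=r, years=10), 2))
# Price expected to grow more slowly than the rate of interest: PV falls below $60,
# so the owner prefers to produce now -- the prospect the article describes for shale.
print(round(pv_per_barrel(price_growth=0.02, interest=r, years=10), 2))
```

When expected price appreciation lags the rate of interest, holding hydrocarbons "in storage" underground destroys value relative to producing and reinvesting today.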

  • Yale University professor William Nordhaus forecast in 1979 that the real price of crude oil would increase at the rate of real economic growth. Discounting short-run manipulations by OPEC, misguided political responses and reactions by producers and consumers adjusting to these divergences, the real price of crude has indeed increased at the rate of real economic growth for the past 40 years. The manipulations and reactions have provided the volatility needed for smart active investors to realize outsized returns.
  • One’s level of success depends on what others do. Think John Nash of “A Beautiful Mind” and his paper “Non-Cooperative Games.” OPEC remains the “swing” producer in the global oil market. The U.S. shale plays have improved their costs, but one cannot characterize these high cost producers as “swing” producers because they do not have the incentives or abilities to increase or decrease production at will.

In recent decades, OPEC has worked backward to assign quotas based on its assessment of world demand and non-OPEC production. OPEC’s quotas were designed to provide an intersection of supply and demand at a forecast price. OPEC often got it right, but when it failed to respond rapidly to China’s 2008 increase in demand (necessary to replace coal to clean up the air before the Beijing Olympics), OPEC inadvertently created a new competitor: the U.S. shale plays. By 2013, it was obvious that the U.S. shale plays had encroached on OPEC market share and that OPEC would employ another Nash response, predatory pricing.

  • Martin Shubik is a titan of game theory and value investing, and in his Dollar Auction paper, he describes a game that investors must avoid. The auction is for a dollar bill. It is won by the high bidder, but the second-place bidder must also pay out his bid while gaining nothing. The Dollar Auction describes perfectly what happens when nations go to war; the winner survives (sometimes barely) and the loser is wiped out.

For some investors, the game also describes the challenge faced when too much money chases too few assets. Investors can find themselves upside down or bidding more than one dollar to win the asset, just to stay in the game.
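The escalation trap is visible in the game's payoff structure; a small sketch with illustrative bid levels (only the $1 prize and the loser-pays rule come from Shubik's game):

```python
# Shubik's Dollar Auction: the high bidder wins the $1 prize, but the
# second-place bidder must also pay out his bid and receives nothing.
PRIZE = 1.00

def payoffs(winning_bid, losing_bid):
    """Net outcome for (winner, loser) at the end of the auction."""
    return PRIZE - winning_bid, -losing_bid

# Sunk bids pull both players upward: the loser at $0.90 is out the full bid...
w, l = payoffs(0.95, 0.90)
print(round(w, 2), round(l, 2))   # 0.05 -0.9
# ...so he "tops up" past the prize itself, and now everyone loses money:
w, l = payoffs(1.05, 1.00)
print(round(w, 2), round(l, 2))   # -0.05 -1.0
```

Once the second-highest bid is sunk, each incremental bid looks cheaper than conceding, which is exactly how investors end up paying more than a dollar for a dollar.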

  • The shale drillers that survived $40 oil are those who followed the dictum of Michael Porter’s book “Competitive Strategy” – be the lowest cost producers. For commodities, it is the only strategy that succeeds over the long run.

Private Equity Game

Private equity sponsors have become larger and larger over the past 20 years.   Portfolio companies backed by hundreds of millions of dollars are rarely allowed to make money on new discoveries and new drilling. Nowadays, they are kept on short leashes and directed to infill drilling of known shale plays that commonly have inbound costs of $30,000 to $40,000 per acre. Ouch! These numbers are reflected in the publicly traded companies adjacent to the private companies in the shale plays.

Here, we see that the efficient market hypothesis and Stephen Ross’s Arbitrage Pricing Theory begin to work against outsize returns for the shale play companies, especially those that have to pay a premium price for entry. In this instance, the sponsor may be better served by making a long-only bet on NYMEX and avoiding the liabilities of owning an operating company.

The time horizons of sponsors do not match those of their pension fund and endowment investors. Institutional investors typically invest in oil and gas as a hedge against increasing energy prices and for diversification. Private equity sponsors have shorter horizons (generally not more than seven years for a fund) and, consequently, their portfolio companies have shorter time horizons. With cycles and manipulations by OPEC occurring over years and even decades, there is often a mismatch of timing among capital providers and their investments.

Where is the opportunity now?

Let’s define microindependents as small oil and gas ventures that have the potential to be company makers. These companies have a competitive advantage in proprietary science and perhaps a portfolio land position. They may or may not have production, but no one can dispute the risk-reward profile they offer to investors. These are not one-well projects with the prospect of a trillion cubic feet (TCF) payoff but portfolios of a half-dozen targets with a TCF payoff. It is difficult for a microindependent to be so well diversified but easy for a private equity portfolio company to assemble a portfolio of such geologically independent targets. Some $50 million to $100 million of investment is needed to get one of these companies over the threshold. No one forgets the lesson of Newfield Exploration’s first 11 busted wells and the success that came with the 12th, which paid for the first 11 and more. This approach does not exclude shale plays per se, but it excludes the strategy of paying top dollar to buy into the current roster of producing shale plays.

Investment strategies that provide investor exposure to upside beyond simple oil price increases will dominate. The options pricing models, however limited, argue in favor of equity investment in assets with higher risks coupled with high potential. See, for example, the Cox-Ross-Rubinstein model, Stanford professor Myron Scholes’ recent work that directs investment managers to move away from “average,” or “The Black Swan” by Nassim Nicholas Taleb of New York University.
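The point that option value rewards high variance can be seen in a minimal Cox-Ross-Rubinstein binomial pricer; this is a generic textbook sketch, and all parameter values below are hypothetical:

```python
import math

def crr_call(S, K, r, sigma, T, steps):
    """European call value in the Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor per step
    d = 1 / u                             # down factor per step
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    value = 0.0
    for k in range(steps + 1):            # sum payoffs over terminal nodes
        prob = math.comb(steps, k) * q**k * (1 - q)**(steps - k)
        payoff = max(S * u**k * d**(steps - k) - K, 0.0)
        value += prob * payoff
    return math.exp(-r * T) * value       # discount back to today

# Tripling volatility on an otherwise identical at-the-money asset roughly
# triples the option value -- the model favors higher risk coupled with
# high potential, as the article argues:
print(round(crr_call(S=100, K=100, r=0.03, sigma=0.2, T=1, steps=200), 2))
print(round(crr_call(S=100, K=100, r=0.03, sigma=0.6, T=1, steps=200), 2))
```

Equity in a risky exploration portfolio behaves like the high-volatility option here: the downside is capped at the investment while the upside is open-ended.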

Profitably selling oil and gas is the first exit.  Fundamentals and risk analysis never go out of style.


Ed Hirs is a BDO Fellow for Natural Resources and a UH Energy Fellow at the University of Houston. He teaches energy economics courses to undergraduate and graduate students within the department of economics at the University of Houston. He is also Managing Director for Hillhouse Resources, LLC, an independent E&P company developing onshore conventional oil and gas discoveries on the Texas Gulf Coast. Previously, Ed was CFO of DJ Resources, Inc., an early leader in the Niobrara Shale. He holds a Bachelor of Arts with honors and distinction in Economics, a Master’s in Economics and an MBA from Yale, and holds the CFA designation.

UH Energy is the University of Houston’s hub for energy education, research and technology incubation, working to shape the energy future and forge new business approaches in the energy industry.

LNG Projects Have Stalled. A New Business Model Could Help

by Chris Ross, Executive Professor, Finance, University of Houston and Justin Varghese, MBA Candidate, Bauer College of Business


Liquefied natural gas (LNG) developers and natural gas producers have depended on third parties to create demand for their product. In recent years, LNG market prices have dropped in response to a surge in supplies and roughly two million tons of LNG contracts are set to expire in the next 10 years. Promising new LNG projects cannot be financed and have stalled.

Developers need to do more to encourage end users – including industrial users and electric generation facilities – to switch from diesel and other liquid fuels to LNG. A new business model could help. We propose a broad collaborative, including natural gas producers, pipeline companies, Engineering, Procurement and Construction (EPC) companies, equipment manufacturers and end users to accelerate market growth.

The International Energy Agency predicts that global oil use will decline as it is replaced by natural gas and renewables. The collaboration we are proposing could accelerate the switch.

A Little Background

Early LNG developments in the 1970s were driven by oil companies that had the misfortune to discover natural gas distant from gas markets. The discovery would have been stranded but for the advent of integrated LNG developments to liquefy, transport and regasify the gas for use in power plants and local distribution. Although LNG was more expensive than oil, utilities in Japan and Europe were prepared to sign long term, take-or-pay contracts because of natural gas’ low emissions and enhanced energy security through the interdependence of buyer and seller and diversification from oil.

U.S. utilities signed similar deals with Sonatrach, the Algerian national oil company, but reneged when domestic production and pipeline companies were deregulated from 1978 through 1985 and advances in 3D seismic technologies opened the Gulf of Mexico shelf as a prolific hydrocarbons resource. A natural gas oversupply “bubble” caused prices to decline below the contractual costs of LNG, and a long arbitration process resulted in settlement agreements. Regasification plants were built, but essentially no LNG was delivered until the bubble deflated after 2000.

Meanwhile, successful lobbying encouraged new domestic natural gas demand, notably through cogeneration facilities that provided steam to industrial customers and sold surplus electricity into the grid at the “avoided cost” that would otherwise have been incurred building a new power generator.

It is time to shake the dust off that playbook.

Recent LNG Contracting Evolution

Those early LNG sales contracts were all point-to-point, stressing the interdependence of buyer and seller. Cracks in the global contracting regime began to emerge in 1995 with Atlantic LNG’s waiver of destination restrictions. From its website: “Atlantic was often described as “The Trinidad Model”, which referred to the unique partnership between four energy majors and the Government of Trinidad and Tobago to form an LNG company. The model was unique too in its objective to target two dedicated primary markets at that time: the US East Coast and Spain, capitalizing on Trinidad and Tobago’s geographic proximity to these markets and therefore competitive delivery costs.” To further that goal, Atlantic successfully lowered the construction cost of its liquefaction plant below previous international LNG projects.

Fifteen years later, the majors led by ExxonMobil doubled the size of single liquefaction trains and the size of the LNG carriers as they invested in massive Qatargas LNG projects commissioned in 1998 through 2011. LNG supplies surged, and the global contracting regime could have come under extreme pressure (Figure 1).

[Figure 1]

However, on March 11, 2011, a massive earthquake offshore Japan caused a tsunami that killed thousands of people and inundated the Fukushima Daiichi nuclear power plant. Failure of back-up systems resulted in a meltdown and release of radiation. In reaction, most nuclear power plants in Japan were shut down and fossil fuel power generation plants had to fill the supply gap; demand for LNG escalated, and fortunately major new Qatar LNG plants were able to supply it.

A robust spot market soon emerged to provide incremental LNG supply to Japan beyond that assured under previously executed long term contracts. LNG prices rose to support new LNG plants in Australia to address growing Asian LNG demand.

At the same time global LNG suppliers were realizing premium prices for their spot sales, U.S. natural gas prices were under tremendous downward pressure in the face of the oversupply of unconventional gas.  The coupling of these premium LNG prices and the glut of U.S. gas combined to provide the economic incentive for the U.S. to evolve from LNG importer to exporter, adding to LNG capacity being built in Australia and Papua New Guinea (Figure 1).

Cheniere was first and pioneered a new tolling contracting model to support financing its Sabine Pass natural gas liquefaction complex. Under this model, buyers would acquire U.S. natural gas at spot market prices and make long term take-or-pay commitments to liquefy their gas in Cheniere’s facilities. Buyers took the risk that the delivered cost of LNG would be lower than it would be under a traditional oil-indexed contracting regime.

Table 1: Traditional and New LNG Contracting Models

                     International                         North America
Natural Gas Supply   Integrated with field production      Purchased at market prices
Liquefaction Cost    Passed through by seller to buyer     Long-term tolling fee charged to buyer
Transportation       Dedicated tanker fleet                Buyer’s responsibility
Marketing/Pricing    Point-to-point, long-term S-curve     Cost recovery
Price risk           Passed to end user                    Buyer’s responsibility

Today we have two competing contracting models (Table 1): the traditional model still used for integrated LNG projects from reservoir through end user, with prices indexed to oil prices, coexisting with the new tolling model seen in the wave of U.S. liquefaction projects. This should provide arbitrage opportunities for global LNG traders, while LNG project developers will see enhanced spot liquidity as they optimize not only the rights they retained to process uncontracted volumes from the new projects but also those volumes from contracts which are soon to expire.

The problem with spot markets for a capital-intensive commodity such as LNG is that variable operating costs are low, especially for the traditional integrated field to liquefaction facilities. It costs very little to produce incremental volumes at the field, especially if condensate is a co-product. Any price above these costs will contribute positively to cash flow and the economic incentive will favor running the liquefaction complex at full utilization. The consequence was illustrated by the collapse of spot Japanese LNG prices in advance of crude oil in 2014 (Figure 2).

[Figure 2]

The market rebalanced in 2016 and 2017, but contracts were shorter term and covered lower volume, with prices influenced by local alternatives and less creditworthy buyers than in the past (Figure 3). New importing countries Egypt, Pakistan, Jordan, Jamaica and Colombia were added in 2016, showing newly price-elastic demand segments benefiting from pre-existing infrastructure but adding to overall credit risk. Buyers have become more sophisticated and are putting together portfolios of contract supplies with different tenors and pricing but will soon need new downstream infrastructure to accommodate higher import volumes.

Figure 3: Deteriorating Contract Quality in 2016-17


Australian supplies continue to expand, the U.S. is emerging as a major LNG supplier and Qatar has promised to increase its LNG production 30% by 2020. Natural gas discoveries in the Levant Basin have the potential to supply Egypt, Jordan and Israel, displacing LNG imports in the next few years.

China and India both suffer from appalling air quality and benefit from switching from coal to natural gas in power generation. However, coal extraction is a major employer in both countries, and there are political risks in switching too fast. China and India will want to negotiate low prices based on coal economics; in the medium term the industry must find innovative ways to expand global LNG demand by providing end users with incentives to encourage a switch from oil to LNG.

Absent long-term contracts with high-credit counterparties, it has become almost impossible for an independent LNG developer to finance the huge capital investment required for a new project, and major oil companies are demonstrating capital discipline. Domestic natural gas producers will struggle to find markets, and prices will remain depressed as associated gas production increases. Project developers are trying different business models but fail to engage with end users, hoping that low LNG prices alone will stimulate demand. Opening a new market segment has the potential to smooth the typical boom and bust commodity price cycle.

Unpacking the LNG Value Chain

It is helpful to start with an assessment of the LNG value chain and its participants (Figure 4) and then review some initiatives that address the need to finance new projects in a market where buyers are looking for flexible pricing mechanisms and are no longer receptive to long term take-or-pay contracts.

[Figure 4]

A wide range of countries and companies have a potential interest in the successful development of new U.S. LNG projects that contribute to a broader and deeper LNG market, with prices below oil parity and lower greenhouse gas emissions.

  • Natural gas producers would benefit from access to a new market segment of companies and utilities currently dependent on expensive diesel fuel and should be interested in a pricing mechanism linked to diesel prices.
  • EPC companies would benefit from the opportunity to provide services and equipment for the switching LNG customer as well as in the liquefaction complex and may be prepared to consider innovative contracting features.
  • Importers and traders may be prepared to take on some price risk to catalyze collaboration among disparate partners.
  • Investors and developers of independent LNG projects should be willing to consider innovative tolling fee structures to spread price risks among collaboration participants.
  • The federal government has expressed its support for LNG exports as helpful to narrowing the trade deficit and achieving “dominance” in global energy. Importing countries should welcome a transition from oil to natural gas as positive for air quality and competitiveness if priced below diesel.
  • End users may be willing to invest in switching to LNG as fuel at a price below diesel prices, so long as supplies are secure and reliable.

New Business Models

The traditional model is an integrated supply chain: IOCs and NOCs develop and operate the gas field, negotiate EPC contracts for construction of the liquefaction complex, sign charter parties with shipping companies, and often offer an ownership share to end use buyers. As the initial contracts reach their term, the IOCs have uncontracted volumes that they can recontract or sell in spot markets. Roughly two million tons of LNG contracts will expire in the next 10 years. The IOCs and publicly traded NOCs have strong balance sheets that can support new projects without recourse to project financing. Independent LNG project developers must find different business models.

The first movers for U.S. LNG exports were able to negotiate long term tolling contracts with creditworthy customers, allowing project financing of the liquefaction complexes. These have been difficult to secure in current times of low spot market LNG prices, where the large global commodity traders (e.g., Koch, Vitol, Trafigura) are finding opportunities to develop new LNG customers, using existing infrastructure and managing the credit risk as part of their risk portfolios. With access to long term take-or-pay contracts scarce, new business models are being tried by developers of new LNG projects with mixed results. However, most of these business models assume that increased LNG supplies will create their own market demand if the price is low enough. The problem is that the required price may not guarantee sufficient cash flow to service debt required to finance a new LNG project and higher prices would suppress new demand. Examples of new business models are:

  • Supply Chain Integration: Tellurian has devised an integrated LNG supply chain from natural gas resource through end use buyer. They have raised over $400 million from Total SA, Bechtel and public equity and have acquired 11,620 net acres in the Haynesville, taking advantage of low prices for natural gas reserves outside Appalachia. They have completed a FEED study for their subsidiary’s Driftwood LNG project and have signed a fixed price EPC contract with Bechtel. They have announced open seasons for planned pipelines connecting Permian and Haynesville production to its LNG project. They have reserved up to 40% of the equity for potential buyers so they can participate in the full U.S. natural gas supply chain. The complete project cost, excluding LNG tankers and regasification investment, is estimated to be over $20 billion: $7.3 billion for pipelines to supply the LNG plant and $15 billion to build the liquefaction facility, plus further investments in acquiring natural gas resources.
  • LNG Demand Creation: AES LNG is a subsidiary of AES Corporation, an international electric power company. They “aim to radically improve the environmental and economic condition of many small liquid petroleum fuel consumers in the Caribbean, Central America and the northern parts of South America by substituting dirty and often expensive fuel oil or diesel with clean-burning natural gas.” Certainly, diesel prices in the Caribbean are set by U.S. Gulf Coast prices, which have historically been significantly above Henry Hub spot natural gas prices (Figure 5). The lowest annual Gulf Coast price spread was $4.96 in 2016, when oil prices were abnormally low. The price spread should most of the time be above $6/MMBtu, sufficient to cover debt finance of the liquefaction and regasification and fuel switching facilities and cover the higher LNG marine transport cost compared to diesel.

[Figure 5]
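The diesel-versus-gas comparison comes down to putting both fuels on a $/MMBtu basis; a rough sketch, using diesel's standard heat content of about 0.137 MMBtu per gallon and purely illustrative prices (not market quotes):

```python
# Putting diesel and natural gas on the same $/MMBtu basis.
# Diesel heat content of ~0.137 MMBtu/gallon is a standard approximation;
# the prices below are illustrative only.
DIESEL_MMBTU_PER_GALLON = 0.137

def diesel_price_per_mmbtu(usd_per_gallon):
    """Convert a diesel price in $/gallon to $/MMBtu."""
    return usd_per_gallon / DIESEL_MMBTU_PER_GALLON

diesel = diesel_price_per_mmbtu(2.00)  # diesel at an assumed $2.00/gal
spread = diesel - 3.00                 # vs. gas at an assumed $3.00/MMBtu
print(round(diesel, 1), round(spread, 1))
# A spread comfortably above $6/MMBtu is the article's threshold for covering
# liquefaction, regasification, fuel-switching equipment and the higher marine
# transport cost of LNG relative to diesel.
```

Even at modest diesel prices, the energy-equivalent spread over Henry Hub gas is wide, which is the economic opening the fuel-switching model targets.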

AES Dominicana has safely been operating the large-scale LNG receiving facility since its inception in 2003. The facility provides gas to AES Dominicana’s two gas-fired power plants as well as 50 industrial clients, two third-party power plants and 15,000 gas vehicles in the Dominican Republic. As well as serving the domestic market the terminal has capacity to serve the regional market. AES is currently constructing a second LNG receiving terminal in Panama to be completed in mid-2019, the first of its kind in Central America. With slightly larger capacity than AES Andres, the Colón terminal will serve AES Panama’s own 381MW power plant as well as the domestic and regional demand for gas. Both terminals are designed to receive LNG on standard large carriers of 125,000m3 – 175,000m3 and redistribute LNG via re-loading small bulk carriers and ISO containers.

Utilizing AES expertise and in partnership with several infrastructure providers, AES believes it can provide entire value chain solutions including LNG supply, logistics, design, build, commission and start-up of an LNG receiving terminal. AES operates in 15 countries so it could extend its model beyond the Dominican Republic and Panama. The question is whether it can negotiate price and credit terms that can support project financing of fuel switching its current power generation assets and underwrite a new liquefaction plant.

Independently of AES, Crowley Maritime through its subsidiary Carib Energy has since 2013 been supplying LNG to Coca-Cola Puerto Rico Bottlers in specially designed vessels. Crowley provides technical solutions including customized regasification systems; design services; mechanical, electrical and site/civil engineering; commissioning; storage and supply management; consultation and training; and bunkering. The arrangement gives the bottler a lower-cost energy source, and the plant utilizes the cold from regasification to chill its products and even captures CO2 exhaust gases to provide the fizz for its sparkling drinks.

  • U.S. Midstream Growth: Dominion Energy, Sempra and KMI saw LNG plants as a natural extension of their midstream pipeline businesses, but shareholders have been less enthusiastic:
    • Dominion Energy, a large electricity and natural gas company has shipped its first cargo from its Cove Point LNG plant with natural gas supplied by Shell and has 20-year sales contracts with Japanese and Indian buyers.
    • Kinder Morgan in 2015 bought out Shell’s interest in Elba Island LNG but Shell remained committed to supply natural gas and purchase all the plant’s LNG production; KMI then in 2017 sold 49% of its Elba Island LNG project to a private equity firm EIG as a “strategic step towards achieving our stated goals of strengthening our balance sheet and positioning the company for long-term value creation” according to Steve Kean, KMI President and CEO.
    • Sempra LNG & Midstream (SLM) is a subsidiary of Sempra Energy, whose main businesses are Southern California Gas; San Diego Gas & Electric; Oncor Electric Delivery and Sempra South American Utilities. SLM is a partner in Cameron LNG in Hackberry, LA with Engie, Mitsubishi, Mitsui and Japanese shipping company NYK Line. The project is under construction, with expected completion in 2019, though the capacity is not yet fully contracted.

Midstream companies are generally unwilling to take commodity price risk and seek a tolling agreement for liquefying the natural gas with a counterparty that has strong credit, leaving the buyer to take any price risk. U.S. midstream companies are quite uncomfortable with any deviation from the contracting model inherited from gas pipeline developments.

AES and Crowley Maritime are making a useful contribution by enabling fuel switching in end-user facilities, but so far on a small scale with a business model that currently depends on low spot LNG prices. Engie, however, with a footprint in 70 countries, may have more upside potential.

Tellurian is taking most of the commodity price risk in its newly acquired production subsidiary. Tellurian recognizes that the variable costs of natural gas production are quite low and that the capital costs are small relative to the cost of liquefaction. By buying producing acreage in the Haynesville, it can absorb periods of low end-user prices through a reduced return on investment in its production subsidiary and, hopefully, compensate investors with superior returns during an up-cycle. It can also modulate its own returns on investment in liquefaction during the price cycle.

However, the Tellurian approach would not be necessary if large natural gas producers (e.g., EOG Resources, Apache and others with assets in the Permian and Haynesville plays) come to recognize that they are likely to achieve better netbacks to their wellheads by negotiating long-term contracts with LNG developers that include price formulae incenting international substitution of oil by natural gas.
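The netback logic behind such oil-indexed contracts can be sketched in a few lines. The sketch below is purely illustrative; the slope, constant and cost figures are hypothetical assumptions, not terms from any actual contract.

```python
# Illustrative wellhead netback under an oil-indexed LNG price formula.
# All numbers are hypothetical assumptions for demonstration only.

def lng_price_oil_indexed(brent_usd_bbl, slope=0.12, constant=0.5):
    """Delivered LNG price in $/MMBtu, linked to the Brent crude price."""
    return slope * brent_usd_bbl + constant

def wellhead_netback(brent_usd_bbl, shipping=1.0, regas=0.5, liquefaction_toll=3.0):
    """Netback to the producer's wellhead after midstream costs ($/MMBtu)."""
    delivered = lng_price_oil_indexed(brent_usd_bbl)
    return delivered - shipping - regas - liquefaction_toll

# At $70 Brent: delivered = 0.12 * 70 + 0.5 = 8.9 $/MMBtu,
# less 4.5 $/MMBtu of midstream costs, leaving roughly 4.4 $/MMBtu.
print(wellhead_netback(70.0))
```

The point of the exercise: when oil prices are high relative to Henry Hub, an oil-linked formula can deliver a wellhead netback well above domestic spot gas prices, which is the producer's incentive to sign.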

Proposed Collaborative

With a plentiful supply, barriers to continued growth in demand and reluctance by traditional buyers to commit to the long-term contracts required to finance needed infrastructure, new projects will be stranded. We propose a new model (Figure 6) that may be difficult to negotiate but would spread the risk among entities which in aggregate should have sufficient credit to support project finance.

Figure 6: Schematic of Hypothetical Collaboration Relationships

fig6

In our view, natural gas producers are the primary medium-term beneficiaries of expanding the global LNG market by encouraging fuel switching from diesel to natural gas. By securing new markets on long-term contracts, producers will eliminate the need to sell at sometimes distressed spot prices and will strengthen the overall market by increasing global demand. End users should also reap strong benefits of improved air quality, lower carbon emissions and lower costs.

  • Natural gas producers should be prepared to commit a proportion of their production to long-term reserve-backed contracts with emerging LNG markets at prices related to the oil products that are being substituted.
  • End users and their stakeholders should benefit from lower costs and improved air quality by switching from diesel fuel to regasified LNG.
  • Providers of equipment needed to switch from oil to LNG should be prepared to lease the equipment and provide ongoing maintenance at fair prices, rather than trying to sell the units at prices that the end user would find difficult to finance.
  • A shipping agreement for small used LNG tankers should be negotiable at favorable rates.
  • A liquefaction agreement could be negotiated with “ceiling and floor” features that allow the developer low returns on investment when netback prices to the producer are below Henry Hub spot rates but deliver superior returns when netback prices are above spot prices.
  • The “fixed price” construction agreement with the EPC contractor could also provide upside when netback prices are favorable.
  • By repeating the same model to various end users in various countries, country risk can be reduced.

This arrangement should spur expanded LNG demand from end users who might not otherwise switch from oil and aggregate credit strength to allow project financing and FID (Final Investment Decision) of the fuel switching and liquefaction construction projects.
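The “ceiling and floor” liquefaction toll described in the list above amounts to a simple clamping function. The sketch below is a minimal illustration; the base toll, floor, ceiling and sharing fraction are hypothetical placeholders, not proposed commercial terms.

```python
def liquefaction_toll(netback, henry_hub_spot,
                      base_toll=2.5, floor_toll=2.0, ceiling_toll=3.5, share=0.25):
    """Toll ($/MMBtu) that adjusts with the producer's netback relative to
    Henry Hub spot: low returns for the developer when the netback is below
    spot, superior returns when it is above, clamped between a floor and a
    ceiling."""
    adjustment = share * (netback - henry_hub_spot)
    return max(floor_toll, min(ceiling_toll, base_toll + adjustment))

# Netback below spot -> toll falls toward the floor
print(liquefaction_toll(netback=2.0, henry_hub_spot=3.0))  # 2.25
# Netback above spot -> toll rises toward the ceiling
print(liquefaction_toll(netback=6.0, henry_hub_spot=3.0))  # 3.25
```

The floor protects the developer’s lenders with a minimum revenue stream; the ceiling caps what the producer gives up in an up-cycle, which is what makes the arrangement negotiable from both sides.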

The primary economic driver is the current and expected future gap between oil and natural gas prices. Google has recently compiled a database of power plants, listing nearly 3,000 worldwide (excluding China) that rely primarily on oil as fuel.

Figure 7: South and Central Americas Power Plants Using Oil as Primary Fuel (Top Capacity Quartile)

fig7

The natural targets for switching to LNG may be in South and Central America (Figure 7), where there are close to 100 oil-fired power plants greater than 80 MW in capacity. The IEA estimates worldwide oil use for power generation in 2016 at 275 million tons of oil equivalent (over 5 million barrels per day), so the potential market is large.
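The conversion behind that parenthetical figure is straightforward, using the standard factor of roughly 7.33 barrels per tonne of oil equivalent:

```python
# Convert 275 million tonnes of oil equivalent per year to barrels per day.
BARRELS_PER_TOE = 7.33   # standard approximate conversion factor
MTOE_PER_YEAR = 275e6    # IEA estimate of oil used for power generation, 2016

barrels_per_day = MTOE_PER_YEAR * BARRELS_PER_TOE / 365
print(f"{barrels_per_day / 1e6:.1f} million barrels per day")
```

The result is about 5.5 million barrels per day, consistent with the “over 5 million” figure in the text.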

Perhaps over time, LNG penetration may happen organically, but it is important to recognize the high inertia for change. The schematic we propose will be difficult to negotiate, but absent a catalyst to overcome that inertia, the alternative is a bust period of low LNG capacity growth as good project ideas are stranded, coupled with depressed U.S. natural gas prices. LNG supplies will then fail to meet demand growth, ultimately leading to a commodity boom in which higher LNG (but not domestic natural gas) prices stifle global LNG demand growth and frustrate low-cost domestic natural gas producers.

It’s an appropriate time to look for innovative ways to accelerate creditworthy LNG demand growth in the medium term. Our hope is that this article will stimulate some productive conversations.

Chris Ross is an Executive Professor of Finance at the C.T. Bauer College of Business, Gutierrez Energy Management Institute (GEMI) at the University of Houston, where he teaches classes on strategies in the oil and gas industry. He also leads research classes investigating how different energy industry segments are creating value for shareholders. Ross holds a Bachelor of Science in Chemistry from King’s College at the University of London and a PMD from Harvard Business School.

Justin Varghese is a Spring 2018 Professional MBA graduate from C.T. Bauer College of Business at the University of Houston. He currently works as a project manager for Siemens and specializes in solutions for the oil and gas industry. He holds a Bachelor of Science in Industrial and Systems Engineering from Texas A&M University in College Station.

Calling Generation Z: The Energy Industry Reaches Out To Its Future Workforce

by Dr. Heather Domjan, Interim Executive Director, University of Houston STEM Center

The energy industry is engaged in a tug of war – it sees itself as playing a crucial role in helping mankind, while many Americans possess a deep-seated mistrust of oil and gas companies. That’s especially true of today’s school-age students.

According to Gallup, almost half of Americans (47%) had a negative view of the oil and gas industry in 2015, while just more than one-third (34%) viewed the industry positively. By 2017, the gap had narrowed, but negative opinions still topped positive ratings by 2 percentage points.

2017 Business & Industry Ranking Net Positive Scores

Public opinion has dampened energy companies’ ability to overcome misconceptions and differences in opinion. And young people may be their toughest audience, at a time when the industry is facing a growing demand for new workers.

Generation Z’s Perception:

Here’s the storyline for America’s youth:

  • Coal was the fuel of their grandparents’ lifetime,
  • Oil and gas was for their parents’ generation, and
  • Renewable energy is the future.

This should be a wake-up call for the industry, which must make members of Generation Z – definitions vary, but generally those between 2 and 19 – a priority, as these individuals have the ability to shape the future of energy through innovation. The complexity of this task becomes clear when you realize this generation may hold beliefs that are not necessarily substantiated by facts, contributing to the divide between supporters of the oil and natural gas industries and those whose concerns about climate change and the production of fossil fuels push them toward renewable energy.

EY last year surveyed U.S. consumers and energy industry executives about current perceptions of the industry, with striking results, especially among teens. Generation Z described the industry as a “problem causer, rather than a problem solver.” More than half of teens – 56% – said the industry isn’t worth the damage it causes to the environment. Media coverage of oil spills and other accidents becomes ingrained in the minds of these young people and, over time, they have developed a one-sided mindset.

Teens are digital natives, and when only 44% deem the energy industry a leader in technology and just 41% consider it “innovative,” there is clearly a disconnect. Only 45% of teens surveyed said the industry is trustworthy.

It is difficult to overcome these negative images, especially when only 35% of teens believe the industry will be important for another century.

US perceptions of the oil and gas industry survey, 2017

This disdain may originate from embedded misconceptions developed through exposure to various media. Young people want to find solutions to climate change, display responsibility through “green” actions and showcase their consumer power by using the premise of renewable initiatives to speak to government and industry regulations.

But these young people can miss the nuances of an argument. For example, teens often fail to note that although renewable energy is considered “clean” because solar and wind power don’t themselves generate greenhouse gases, it has other drawbacks, including that it is a variable source of energy, available only when the sun shines and the wind blows. Renewable energy therefore currently must usually be supplemented with fossil fuels to meet consumer demand.

The insights from the EY survey should capture the industry’s attention, especially considering it is already up against the clock, with one-third of the energy workforce at retirement age.

So how can industry overturn this perceptional tide among young people? It has begun to fight back.

Energy Industry Response:

Investing in K-16 students – that is, those from kindergarten through higher education – is vital, but how can oil and gas companies obtain a return on their investment when identifying which actions work best can take months or even years?

Even with so many education programs encouraged and funded, in part, by the industry, Generation Z remains skeptical.

Time is of the essence for industry to re-evaluate its stance within K-16 education and make a calculated effort to ensure students are exposed to valid points on both sides of the discussion to debunk any falsehoods. The industry must step up its efforts to collaborate with educational experts to forge a united front that ensures the message of transformative energy is appropriately delivered.

Social interaction will be key, too, recognizing that Generation Z will be tomorrow’s decision makers about critical energy issues. Students are exposed to many opinions as they surf the web’s turbulent waves, and if the energy industry is to get buy-in, it must continue to be visible.

There are options. A massive career awareness media campaign highlighting the variety of jobs within the industry could expose students to the possibilities. When was the last time you saw a commercial about careers in the energy industry?

Oil and gas companies are investing both money and manpower in America’s youth, but will the effort be enough to overcome the views Generation Z currently holds? They fund initiatives such as STEM programs and competitions that emphasize science, technology, engineering and math skills, as well as diversity outreach, educator support, career awareness campaigns and community engagement. In Houston – home to dozens of energy firms, both majors and independents – and elsewhere, company employees are encouraged to volunteer with schools as mentors and guest speakers.

Only time will tell; however, energy industry executives must remain in the game so college-bound students will consider the industry with confidence.


Dr. Heather Domjan is the Interim Executive Director of the University of Houston STEM Center as well as a clinical assistant professor in curriculum and instruction. She teaches science pedagogy to future educators with a focus on science, technology, engineering and mathematics. Dr. Domjan also serves as the Executive Director of the Science and Engineering Fair of Houston, one of the largest STEM events in Texas.

UH Energy is the University of Houston’s hub for energy education, research and technology incubation, working to shape the energy future and forge new business approaches in the energy industry.