Watching the Hurricane’s Path

8 09 2017

I can really relate to this latest article by Richard Heinberg….  I still get people saying to me “you’ve been saying this for twenty years, and look, nothing’s happened…” Yet, every day, we are one day closer to the inevitable outcome, just like watching the hurricane coming from your favourite armchair…

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It’s an eerie experience. You’ve just heard that another hurricane has formed in the Atlantic, and that it’s headed toward land. You search for NOAA’s National Hurricane Center website so you can see the forecast path for the storm. You’re horrified at the implications, and you bookmark the site. You check in every few hours to see forecast updates. You know in general terms what’s coming—devastation for the lives of thousands, maybe millions of people. Then a few days later you begin to see the sad, shocking photos and videos of destruction.

Thanks to modern science and technology—satellites and computers—we have days of warning before a hurricane hits. That’s extremely helpful: while people can’t move their houses and all their possessions, they can board up windows, stock up on food and water, and perhaps get out of town. Huge storms are far less deadly than they would be if we didn’t have modern weather forecasting.

Science and technology have also enabled us to forecast “storms” of another kind. Using computers and data about population, energy, pollution, natural resources, and economic trends, it’s possible to generate scenarios for the future of industrial civilization. The first group of researchers to do this, in 1972, found that the “base case,” or most likely scenario, showed essentially the collapse of society: in the early-to-middle decades of the 21st century, industrial production would peak and begin to decline sharply; so would food production and (with a lag of a few years) population. For decades scientists have been updating the software and plugging in new and better data, but ever-more-powerful computers keep spitting out the same base-case scenario.

One of the factors the 1972 researchers thought would be of increasing significance was climate change. Now, 45 years later, many thousands of scientists around the world are feeding their supercomputers data on carbon emissions, carbon cycles, carbon sinks, climate sensitivity, climate feedbacks, and more. They likewise see a “hurricane” on the way: we are altering the chemistry of the Earth’s atmosphere and oceans so significantly, and so quickly, that dire consequences are almost certain, if not already here. Later this century we’ll see storms, droughts, heat waves, and wildfires like none on record. Agriculture will likely be impacted severely.

Ever since I read the 1972 report on Limits to Growth, I’ve had that same eerie feeling as when looking at the charts on the NOAA website. Only the feeling is deeper, more pervasive, and (of course) long-lasting. A storm is coming. We should batten down the hatches.

But, 45 years down the line, the storm is no longer far away. In fact, the photos and videos of destruction are starting to come in. No nations have bothered to make sensible efforts to minimize the storm’s impact by reducing fossil fuel consumption, stabilizing population at 1970s levels, or reconfiguring their economy so it doesn’t require continuous growth in resource and energy usage. Why didn’t we do those sensible things, even though we had plenty of warning?

Our failure to respond has a lot to do with the long time lag. We humans are much better at dealing with immediate threats than ones years ahead. In effect, we have an internal discount rate that we apply to possible disasters, depending on their temporal proximity.

Given a long-term threat, some of us are more likely to develop complicated rationales for doing nothing. After all, averting a really big disaster may require substantial inconvenience. Getting out of the way of a hurricane might mean packing up your most treasured belongings, driving a couple of hundred miles, and trying to find a motel that’s not already overbooked (that is, if you are among the fortunate with the resources to do so).  Minimizing the threat of global overshoot might mean changing our entire economic system—from how we grow food to how we get to work and what kind of work we do. Escaping the hurricane engages our survival instincts; we don’t have time to doubt the weatherman. But given a few decades to think about it, we might come up with lots of (ultimately wrongheaded but carefully reasoned nonetheless) reasons why our current economic system is really just fine, and why global overshoot really isn’t a threat.

Those of us who aren’t so good at coming up with such rationalizations are stuck with the eerie feeling that something very bad is about to happen—maybe in Florida this weekend, maybe everywhere before long. Here’s my recommendation, based on a few decades of watching all kinds of storm charts: please pay attention to the weatherman. Stop finding reasons why you really don’t have to change or prepare. Make your way to higher ground. And be sure to help your neighbors.





The Earth is full

7 09 2017





Peak ERoEI…?

22 08 2017

Inside the new economic science of capitalism’s slow-burn energy collapse

And why the struggle for a new economic paradigm is about to get real

Another MUST READ article by Nafeez Ahmed……….

 

Originally published by INSURGE INTELLIGENCE, a crowdfunded investigative journalism project for people and planet. Support us to keep digging where others fear to tread.

New scientific research is quietly rewriting the fundamentals of economics. The new economic science shows decisively that the age of endlessly growing industrial capitalism, premised on abundant fossil fuel supplies, is over.

The long-decline of capitalism-as-we-know-it, the new science shows, began some decades ago, and is on track to accelerate well before the end of the 21st century.

With capitalism-as-we-know it in inexorable decline, the urgent task ahead is to rewrite economics to fit the real-world: and, accordingly, to redesign our concepts of value and prosperity, precisely to rebuild our societies with a view of adapting to this extraordinary age of transition.


A groundbreaking study in Elsevier’s Ecological Economics journal by two French economists proves, for the first time, that the world has passed a point of no return in its capacity to extract fossil fuel energy, with massive implications for the long-term future of global economic growth.

The study, ‘Long-Term Estimates of the Energy-Return-on-Investment (EROI) of Coal, Oil, and Gas Global Productions’, homes in on the concept of EROI, which measures the amount of energy supplied by an energy resource, compared to the quantity of energy consumed to gather that resource. In simple terms, if a single barrel of oil is used up to extract energy equivalent to 50 barrels of oil, that’s pretty good. But the less energy we’re able to extract using that single barrel, then the less efficient, and more expensive (in terms of energy and money), the whole process.
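To make the ratio concrete, here is a minimal illustrative sketch (my own, not from the paper) of the arithmetic: EROI is simply energy returned divided by energy invested, and roughly 1/EROI of gross output has to be ploughed back into getting more energy.

```python
# Minimal sketch of the EROI arithmetic described above (illustrative only).

def eroi(energy_out, energy_in):
    """Energy returned divided by energy invested."""
    return energy_out / energy_in

def net_energy_fraction(eroi_value):
    """Share of gross output left over for society after reinvesting
    enough energy to keep the extraction process itself going."""
    return 1.0 - 1.0 / eroi_value

# One barrel invested yields 50 barrels of gross output: EROI = 50:1.
print(eroi(50, 1))               # 50.0
print(net_energy_fraction(50))   # 0.98 -> 98% of the output is surplus
print(net_energy_fraction(5))    # 0.8  -> at 5:1 only 80% is surplus
```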

Recent studies suggest that the EROI of fossil fuels has steadily declined since the early 20th century, meaning that as we’re depleting our higher quality resources, we’re using more and more energy just to get new energy out. This means that the costs of energy production are increasing while the quality of the energy we’re producing is declining.

But unlike previous studies, the authors of the new paper — Victor Court, a macroeconomist at Paris Nanterre University, and Florian Fizaine of the University of Burgundy’s Dijon Laboratory of Economics (LEDi)—have removed any uncertainty that might have remained about the matter.

Point of no return

Court and Fizaine find that the EROI values of global oil and gas production reached their maximum peaks in the 1930s and 40s. Global oil production hit peak EROI at 50:1; while global gas production hit peak EROI at 150:1. Since then, the EROI values of oil and gas — the overall energy we’re able to extract from these resources for every unit of energy we put in — have been inexorably declining.

Source: Court and Fizaine (2017)

Even coal, the only fossil fuel resource whose EROI has not yet maxed out, is forecast to undergo an EROI peak sometime between 2020 and 2045. This means that while coal might still have significant production potential in some parts of the world, rising costs of production are making it increasingly uneconomical.

Axiom: Aggregating this data reveals that the world’s fossil fuels overall experienced their maximum cumulative EROI of approximately 44:1 in the early 1960s.

Since then, the total value of energy we’re able to extract from the world’s fossil fuel resource base has undergone a protracted, continuous and irreversible decline.

Insight: At this rate of decline, by 2100, we are projected to extract the same value of EROI from fossil fuels as we did in the 1800s.

Several other studies suggest that this ongoing decline in the overall value of the energy extracted from global fossil fuels has played a fundamental role in the slowdown of global economic growth in recent years.

In this sense, the 2008 financial crash did not represent a singular event, but rather one key event in an unfolding process.

The economy-energy nexus

This is because economic growth remains ultimately dependent on “growth in material and energy use,” as a study in the journal PLOS One found last October. That study, lead authored by James D. Ward of the School of Natural and Built Environments, University of South Australia, challenged the idea that GDP growth can be “decoupled” from environmental impacts.

The “illusion of decoupling”, Ward and his colleagues argued, has been maintained through the following misleading techniques:

  1. substituting one resource for another;
  2. financialization of GDP, such as through increasing “monetary flows” through creation of new debt, without however increasing material or energy throughput (think quantitative easing);
  3. exporting environmental impacts to other nations or regions, so that the realities of increasing material throughput can be suppressed from data calculations.
  4. growing inequality of income and wealth, which allows GDP to grow for the benefit of a few, while the majority of workers see decreases in real income —in other words, a wealthy minority monopolises the largest fraction of GDP growth, but does not increase their level of consumption with as much demand for energy and materials.

Ward and his co-authors sought to test these factors by creating a new economic model to see how well it stacks up against the data.

Insight: They found that continued economic growth in GDP “cannot plausibly be decoupled from growth in material and energy use, demonstrating categorically that GDP growth cannot be sustained indefinitely.”

Other recent scientific research has further fine-tuned this relationship between energy and prosperity.

The prosperity-resource nexus

Adam Brandt, a leading EROI expert at Stanford University’s Department of Energy Resources Engineering, in the March edition of BioPhysical Economics and Resource Quality proves that the decline of EROI directly impacts on economic prosperity.

Earlier studies on this issue, Brandt points out, have highlighted the risk of a “net energy cliff”, which refers to how “declining EROI results in rapid increases in the fraction of energy dedicated to simply supporting the energy system.”

Axiom: The more EROI declines, the greater the proportion of the energy being produced that must be used simply to extract more energy. This means that EROI decline leads to less real-world economic growth.
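A short sketch of that “net energy cliff” arithmetic (my own illustration, not Brandt’s model, assuming the simple approximation that the reinvested share of gross output is 1/EROI):

```python
# Illustrative "net energy cliff": the share of gross energy production that
# must be reinvested in the energy system itself is roughly 1/EROI, so the
# surplus left for the rest of the economy erodes slowly at first and then
# collapses as EROI falls toward single digits.
for eroi in (100, 50, 20, 10, 5, 3, 2):
    reinvested = 1 / eroi
    surplus = 1 - reinvested
    print(f"EROI {eroi:>3}:1  reinvested {reinvested:5.1%}  surplus {surplus:5.1%}")
```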

It also creates a complicated situation for oil prices. While at first, declining EROI can be expected to lead to higher prices reflecting higher production costs, the relationship between EROI and prices begins to break down as EROI becomes smaller.

This could be because, under a significantly reduced EROI, consumers in a less prosperous economy can no longer afford, energetically or economically, the cost of producing more energy — thus triggering a dramatic drop in market prices, despite higher costs of production. At this point, in the new era of shrinking EROI, swinging oil prices become less and less indicative of ‘scarcity’ in supply and demand.

Brandt’s new economic model looks at how EROI impacts four key sectors — food, energy, materials and labor. Exploring what a decline in net energy would therefore mean for these sectors, he concludes:

“The reduction in the fraction of a resource free and the energy system productivity extends from the energy system to all aspects of the economy, which gives an indication of the mechanisms by which energy productivity declines would affect general prosperity.

A clear implication of this work is that decreases in energy resource productivity, modeled here as the requirement for more materials, labor, and energy, can have a significant effect on the flows required to support all sectors of the economy. Such declines can reduce the effective discretionary output from the economy by consuming a larger and larger fraction of gross output for the meeting of inter-industry requirements.”

Brandt’s model is theoretical, but it has direct implications for the real world.

Insight: Given that the EROI of global fossil fuels has declined steadily since the 1960s, Brandt’s work suggests that a major underlying driver of the long-term process of economic stagnation we’re experiencing is resource depletion.

The new age of economic stagnation

Exactly how big the impact of resource depletion on the economy might be can be gauged from a separate study by Professor Mauro Bonauiti of the Department of Economics and Statistics at the University of Turin.

His new paper published in February in the Journal of Cleaner Production assesses data on technological innovations and productivity growth. He concludes that:

“… advanced capitalist societies have entered a phase of declining marginal returns — or involuntary degrowth — with possible major effects on the system’s capacity to maintain its present institutional framework.”

Bonauiti draws on anthropologist Joseph Tainter’s work on the growth and collapse of civilizations. Tainter’s seminal work, The Collapse of Complex Societies, showed that the very growth in complexity driving a civilization’s expansion, generates complex new problems requiring further complexity to solve them.

 

Axiom: Complex civilizations tend to accelerate the use of resources, while diminishing the quantity of resources available for the civilization’s continued expansion — because they are continually being invested in solving the new problems generated by increasing complexity.

The result is that complex societies tend to reach a threshold of growth, after which returns diminish to such an extent that the complexification of the society can no longer be sustained, leading to its collapse or regression.

Bonauiti builds on Tainter’s framework and applies it to new data on ‘Total Factor Productivity’ to assess correlations between the growth and weakening in productivity, industrial revolutions, and the implications for continued economic growth.

The benefits that a certain society obtains from its own investments in complexity “do not increase indefinitely”, he writes. “Once a certain threshold has been reached (T0), the social organisation as a whole will enter a phase of declining marginal returns, that is to say, a critical phase, which, if ignored, may lead to the collapse of the whole system.”

This threshold appears to have been reached by Europe, Japan and the US before the early 1970s, he argues.

Insight: The US economy, he shows, appears to have reached “the peak in productivity in the 1930s, the same period in which the EROI of fossil fuels reached an extraordinary value of about 100.”

Of course, Court and Fizaine quantify the exact value of this peak EROI differently using a new methodology, but they agree that the peak occurred roughly around this period.

The US and other advanced economies are currently tapering off the end of what Bonauiti calls the ‘third industrial revolution’ (IR3), in information communications technologies (ICT). This was, however, the shortest and weakest industrial revolution from a productivity standpoint, with its productivity “evaporating” after just eight years.

In the US, the first industrial revolution utilized coal to power steam engine and telegraph technology, stimulating a rapid increase in productivity that peaked between 1869 and 1892, at almost 2%.

The second industrial revolution was powered by the electric engine and internal combustion engine, which transformed manufacturing and domestic consumption. This led productivity to peak at 2.78%, remaining at around 2% for at least another 25 years.

After the 1930s, however, productivity continually declined, reaching 0.34% in the period 1973–95. Since then, the third industrial revolution driven by computing technology led to a revival of productivity which, however, has already tapered out in a way that is quite tepid compared to the previous industrial revolutions.

Axiom: The highest level of productivity was reached around the 1930s, and it has declined with each successive industrial revolution since then.

The decline period also roughly corresponds to the post-peak EROI era for total fossil fuels identified by Court and Fizaine.

Thus, Bonauiti concludes, “the empirical evidence and theoretical reasons lead one to conclude that the innovations introduced by IR3 are not powerful enough to compensate for the declining returns of IR2.”

Insight: The implication is that the 21st century represents the tail-end of the era of industrial economic expansion, originally ushered in by technological innovations enabled by abundant fossil fuel energy sources.

The latest stage is illustrated with the following graph which demonstrates the rapid rise and decline in productivity of the last major revolution in technological innovation (IR3):

The productivity of the third industrial revolution thus peaked around 2004 and since then has declined back to near 1980s levels.

Bonauiti thus concludes that “advanced capitalist societies (the US, Europe and Japan) have entered a phase of declining marginal returns or involuntary degrowth in many key sectors, with possible major detrimental effects on the system’s capacity to maintain its present institutional framework.”

In other words, the global economic system has entered a fundamentally new era, representing a biophysical phase-shift into an energetically constrained landscape.

Going back to the new EROI analysis by French economists, Victor Court and Florian Fizaine, the EROI of oil is forecast to reduce to 15:1 by 2018. It will continue to decline to around 10:1 by 2035.

They broadly forecast the same pattern for gas and coal: Overall, their data suggests that the EROI of all fossil fuels will hit 15:1 by 2060, and decline further to 10:1 by 2080.

If these projections come to pass, this means that over the next few decades, the overall costs of fossil fuel energy production will increase, even while the market value of fossil fuel energy remains low. The total net energy yield available to fuel continued economic growth will inexorably decline. This will, in turn, squeeze the extent to which the economy can afford to buy fossil fuel energy that is increasingly expensive to produce.

We cannot be sure what this unprecedented state of affairs will herald for the market prices of oil, gas and coal, which are unlikely to follow the conventional supply and demand dynamics we were used to in the 20th century.

But what we can know for sure from the new science is that the era of unlimited economic growth — the defining feature of neoliberal finance capitalism as we know it — is well and truly over.

UK ‘end of growth’ test-case

The real-world workings of this insight have been set out by a team of economists at the University of Leeds’ Centre for Climate Change Economics and Policy, whose research was partly funded by giant engineering firm Arup, along with the main UK government-funded research councils — the UK Energy Research Centre, the Economic and Social Research Council and the Engineering and Physical Sciences Research Council.

In their paper published by the university’s Sustainability Research Institute this January, Lina Brand-Correa, Paul Brockway, Claire Carter, Tim Foxon, Anne Owen, and Peter Taylor develop a national-level EROI measure for the UK.

Studying data for the period 1997-2012, they find that “the country’s EROI has been declining since the beginning of the 21st Century”.

Energy Returned (Eout) and Energy Invested (Ein) in the UK (1997–2012) Source: Brand-Correa (2017)

The UK’s net EROI peaked in 2000 at a maximum value of 9.6, “before gradually falling back to a value of 6.2 in 2012.” What this means is that on average, “12% of the UK’s extracted/captured energy does not go into the economy or into society for productive or well-being purposes, but rather needs to be reinvested by the energy sectors to produce more energy.”

The paper draws on previous work by economists Court and Fizaine suggesting that continuous economic growth requires a minimal societal EROI of 11, based on the current energy intensity of the UK economy. By implication, the UK has been dropping increasingly below this benchmark since the start of the 21st century:

“These initial results show that more and more energy is having to be used in the extraction of energy itself rather than by the UK’s economy or society.”

This also implies that the UK has had to sustain continued economic growth through other mechanisms outside of its own domestic energy context: in particular, as we know, the expansion of debt.

It is no coincidence, then, that debt-to-GDP ratios have continued to grow worldwide. As EROI is in decline, an unsustainable debt-bubble premised on exploitation of working and middle classes is the primary method to keep growth growing — an endeavour that at some point will inevitably come undone under its own weight.

We need a new economics

According to MIT and Harvard trained economist Dr. June Sekera — who leads the Public Economy Project at Tufts University’s Global Development And Environment Institute (GDAE) — net energy decline proves that neoclassical economic theory is simply not fit for purpose.

In Working Paper №17–02 published by the GDAE, Sekera argues that: “One of the most important contributions of biophysical economics is its critique that mainstream economics disregards the biophysical basis of production, and energy in particular.”

Policymakers, she says, “need to understand the biophysical imperative: that societal net energy yield is falling. Hence the need for a biophysical economics, and for policymakers to comprehend its central messages.”

Yet a key problem is that mainstream economics is held back from being able to even comprehend the existence of net energy decline due to an ideological obsession with the market. The result is that production that occurs outside the market is seen as an aberration, a form of government, state or ‘political’ interference in the ‘natural’ dynamics of the market.

And this is why the market alone is incapable of generating solutions to the net energy crisis driving global economic stagnation. The modern market paradigm is fatally self-limited by the following dynamics: “short time horizons, growth as a requisite, gratuitous waste baked-in, profits as life-blood.” This renders it “incapable of producing solutions that demand long-view investment without profits.”

Thus, Sekera calls for a new “public economics” commensurate with what is needed for a successful energy transition. The new public economics will spur on breakthrough scientific and technological innovations that solve “common-need problems” based on “distributed decision-making and collective action.”

The resulting solutions will require “long time-horizon investment: investments with no immediate payoff in terms of saleable products, no visible ROI (return on investment), no profit-making in the near-term. Such investment can be generated only in a non-market environment, in which payment is collective and financial profit is not the point.”

The only problem is that, as Sekera herself recognizes, the main incubator and agent of the non-market public economy is government — but government itself is playing a key role in dismantling, hollowing-out and privatizing the non-market public economy.

There is only one solution to this conundrum, however difficult it might seem:

Citizens themselves at all scales have an opportunity to work together to salvage and regenerate new public economies based on pooling their human, financial and physical assets and resources, to facilitate the emergence of more viable and sustainable economic structures. Part of this will include adapting to post-carbon energy sources.

Far from representing the end of prosperity, this transition represents an opportunity to redefine prosperity beyond the idea of endlessly increasing material accumulation, and to realign society with the goal of meeting real-world human physical, psychological and spiritual needs.

What will emerge from efforts to do so has not yet been written. But those efforts will define the contours of the new post-carbon economy, as the unsustainable juggernaut of the old grinds slowly and painfully to a protracted, chaotic halt.

In coming years and decades, the reality of the need for a new economic science that reflects the dynamics of the economy’s fundamental embeddedness in the biophysical environment will become ever more obvious.

So say goodbye to endless growth neoliberalism.


This INSURGE story was enabled by crowdfunding: Please support independent journalism for the global commons for as little as a $1/month via www.patreon.com/nafeez


Dr. Nafeez Ahmed is an award-winning 16-year investigative journalist and creator of INSURGE intelligence, a crowdfunded public interest investigative journalism project. He is ‘System Shift’ columnist at VICE’s Motherboard.

His work has been published in The Guardian, VICE, Independent on Sunday, The Independent, The Scotsman, Sydney Morning Herald, The Age, Foreign Policy, The Atlantic, Quartz, New York Observer, The New Statesman, Prospect, Le Monde diplomatique, Raw Story, New Internationalist, Huffington Post UK, Al-Arabiya English, AlterNet, The Ecologist, and Asia Times, among other places.

Nafeez has twice been featured in the Evening Standard’s ‘Top 1,000’ list of most influential people in London.

His latest book, Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence (Springer, 2017) is a scientific study of how climate, energy, food and economic crises are driving state failures around the world.





Dick Smith on growth; emphatically yes…and no

16 08 2017


Ted Trainer

Another article by my friend Ted Trainer, originally published at On Line Opinion……

The problems of population and economic growth have finally come onto the public agenda, and Dick Smith deserves much of the credit…but he doesn’t realise what’s on the other end of the trail he’s tugging.

For fifty years a small number of people have been saying that pursuing population and economic growth on a finite planet is a very silly thing to do. Until recently almost no one has taken any notice. However in the last few years there has emerged a substantial “de-growth” movement, especially in Europe. Dick Smith has been remarkably successful in drawing public attention to the issue in Australia. He has done more for the cause in about three years than the rest of us have managed to achieve in decades. (I published a book on the subject in 1985, which was rejected by 60 publishers…and no one took any notice of it anyway.) Dick’s book (2011) provides an excellent summary of the many powerful reasons why growth is absurd, indeed suicidal.


Dick Smith

The problem with the growth-maniacs, a category which includes just about all respectable economists, is that they do not realise how grossly unsustainable present society is, let alone what the situation will be as we continue to pursue growth. Probably the best single point to put to them is to do with our ecological “footprint”. The World Wildlife Fund puts out a measure of the amount of productive land it takes to provide for each person. For the average Australian it takes 8 ha of productive land to supply our food, water, settlement area and energy. If the 10 billion people we are likely to have on earth soon were each to live like us we’d need 80 billion ha of productive land…but there are only about 8 billion ha of land available on the planet. We Australians are ten times over a level of resource use that could be extended to all people. It’s much the same multiple for most other resources, such as minerals, nitrogen emissions and fish. And yet our top priority is to increase our levels of consumption, production, sales and GDP as fast as possible, with no limit in mind!
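The footprint arithmetic quoted above is easy to check; a back-of-envelope sketch using only the figures in that paragraph:

```python
# Back-of-envelope check of the footprint figures quoted above.
footprint_per_australian_ha = 8       # productive hectares per person
projected_population = 10e9           # people
productive_land_available_ha = 8e9    # hectares on the planet

land_needed_ha = footprint_per_australian_ha * projected_population
print(land_needed_ha / 1e9)                           # 80.0 billion ha required
print(land_needed_ha / productive_land_available_ha)  # 10.0 -> ten times over
```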

The World Wildlife Fund also puts the situation another way. We are now using resources at 1.4 times the rate the planet could provide sustainably. We do this by, for example, consuming more timber than grows each year, thereby depleting the stocks. Now if 10 billion people rose to the “living standards” we Australians would have in 2050 given the 3% p.a. economic growth we expect, then every year the amount of producing and consuming going on in the world would be 20 times as great as it is now.

Over-production and over-consumption is the main factor generating all the alarming global problems we face. Why is there an environmental problem? Because we are taking far more resources from nature, especially habitats, than is sustainable. Why do more than 3 billion people in the Third World wallow in poverty? Primarily because the global economy is a market system and in a market resources go to those who can pay most for them, i.e., the rich. That’s why we in rich countries get almost all the oil, the surpluses produced from Third World soils, the fish caught off their coasts, etc. It’s why “development” in the Third World is mostly only development of what will maximise corporate profit, meaning development of industries to export to us. Why is there so much violent conflict in the world? Primarily because everyone is out to grab as many of the scarce resources as they can. And why is the quality of life in the richest countries falling now, and social cohesion deteriorating? Primarily because increasing material wealth and business turnover has been made the top priority, and this contradicts and drives out social bonding.

Dick has done a great job in presenting this general “limits to growth” analysis of our situation clearly and forcefully, and in getting it onto the public agenda. But I want to now argue that he makes two fundamental mistakes.

The first is his assumption that this society can be reformed; that we can retain it while we remedy the growth fault it has. The central argument in my The Transition to a Sustainable and Just World (2010a) is that consumer-capitalist society cannot be fixed. Many of its elements are very valuable and should be retained, but its most crucial, defining fundamental institutions are so flawed that they have to be scrapped and replaced. Growth is only one of these but a glance at it reveals that this problem cannot be solved without entirely remaking most of the rest of society. Growth is not like a faulty air conditioning unit on a house, which can be replaced or removed while the house goes on functioning more or less as before. It is so integrated into so many structures that if it is dumped those structures will have to be scrapped and replaced.

The most obvious implication of this kind is that in a zero growth economy there can be no interest payments at all. Interest is by nature about growth, getting more wealth back than you lent, and this is not possible unless lending and output and earnings constantly increase. There goes almost the entire financial industry I’m afraid (which recently accounted for over 40% of all profits made.) Banks therefore could only be places which hold savings for safety and which lend money to invest in maintenance of a stable amount of capital stock (and readjustments within it.) There also goes the present way of providing for superannuation and payment for aged care; these can’t be based on investing to make money.

The entire energising mechanism of society would have to be replaced. The present economy is driven by the quest to get richer. This motive is what gets options searched for, risks taken, construction and development underway, etc. The most obvious alternative is for these actions to come from a collective working out of what society needs, and organising to produce and develop those things cooperatively, but this would involve an utterly different world view and driving mechanism.

The problem of inequality would become acute and would not only demand attention, it would have to be dealt with in an entirely different way. It could no longer be defused by the assumption that “a rising tide will lift all boats”. In the present economy growth helps to legitimise inequality; extreme inequality is not a source of significant discontent because it can be said that economic growth is raising everyone’s “living standards”.

How would we handle unemployment in a zero-growth economy? At present its tendency to increase all the time is offset by the increase in consumption and therefore production. Given that we could produce all we need for idyllic lifestyles with a fraction of the present amount of work done, any move in this direction in the present economy would soon result in most workers becoming unemployed. There would be no way of dealing with this without scrapping the labour market and then rationally and deliberately planning the distribution of the (small amount of) work that needed doing.

Most difficult of all are the cultural implications, usually completely overlooked. If the economy cannot grow then all concern to gain must be abandoned. People would have to be content to work for stable incomes and abandon all interest in getting richer over time. If any scope remains for some to try to get more and more of the stable stock of wealth, then some will succeed and take more than their fair share of it and others will therefore get less…and soon it will end in chaos, or feudalism as the fittest take control. Sorry, but the 500 year misadventure Western culture has had with the quest for limitless individual and national wealth is over. If we have the sense we will realise greed is incompatible with a sustainable and just society. If, as is more likely we won’t, then scarcity will settle things for us. The few super privileged people, including Australians, will no longer be able to get the quantities of resources we are accustomed to, firstly because the resources are dwindling now, and secondly because we are being increasingly outmanoeuvred by the energetic and very hungry Chinese, Indians, Brazilians…

And, a minor point, you will also have to abandon the market system. It is logically incompatible with a zero-growth economy. You go into a market not to exchange things of equal value but to make money, to get the highest price you can, to trade in a way that will make you richer over time. There are “markets” where people don’t try to do this but just exchange the necessities without seeking to increase their wealth over time, e.g. in tribal and peasant societies. However, these are “subsistence” economies and they do not operate according to market forces. The economies of a zero-growth society would have to be like this. Again, if it remains possible for a few to trade their way to wealth they will end up with most of the pie. This seems to clearly mean that if we are to have a zero-growth economy then we have to work out how to make a satisfactory form of “socialism” work, so that at least the basic decisions about production, distribution and development can be made by society and not left to be determined by what maximises the wealth of individuals and the profits of private corporations competing in the market. Richard Smith (2010) points this out effectively, but some steady-staters, including Herman Daly and Tim Jackson (2009), seem to have difficulty accepting it.

Thus growth is not an isolated element that can be dealt with without remaking most of the rest of society. It is not that this society has a growth economy; it is that this is a growth society.

So in my view Dick has vastly underestimated the magnitude of the changes involved, and gives the impression that consumer-capitalist society can be adjusted, and then we can all go on enjoying high levels of material comfort (he does say we should reduce consumption), travel etc. But the entire socio-economic system we have prohibits the slightest move in this direction; it cannot tolerate slowdown in business turnover (unemployment, bankruptcy, discontent and pressure on governments immediately accelerate), let alone stable levels, let alone reduction to maybe one-fifth of present levels.

This gets us to the second issue on which I think Dick is clearly and importantly mistaken. He believes a zero growth economy can still be a capitalist economy. This is what Tim Jackson says too, in his very valuable critique of the present economy and of the growth commitment. Dick doesn’t offer any explanation or defence for his belief; it is just stated in four sentences. “Capitalism will still be able to thrive in this new system as long as legislation ensures a level playing field. Huge new industries will be created, and vast fortunes are still there to be made by the brave and the innovative.” (p. 173.) “I have no doubt that the dynamism and flexibility of capitalism can adjust to sustainability laws. The profit imperative would be maintained and, as long as there was an equitable base, competition would thrive.” (p. 177.)

Following is a sketch of the case that a zero growth economy is totally incompatible with capitalism.

Capitalism is by definition about accumulation, making more money than was invested, in order to invest the surplus to have even more…to invest to get even richer, in a never-ending upward spiral. Obviously this would not be possible in a steady state economy. It would be possible for a few to still own most capital and factories and to live on income from these investments, but they would be more like rentiers or landlords who draw a stable income from their property. They would not be entrepreneurs constantly seeking increasingly profitable investment outlets for ever-increasing amounts of capital.

Herman Daly believes that “productivity” growth would enable capitalism to continue in an economy with stable resource inputs. This is true, but it would be a temporary effect and too limited to enable the system to remain capitalist. The growth rate which the system, and capitalist accumulation, depends on is mostly due to increased production, not productivity growth. Secondly the productivity measure used (by economists who think dollars are the only things that matter) takes into account labour and capital but ignores what is by far the most important factor, i.e., the increasing quantities of cheap energy that have been put into new productive systems. For instance over half a century the apparent productivity of a farmer has increased greatly, but his output per unit of energy used has fallen alarmingly. From here on energy is very likely to become scarce and costly. Ayres (1999) has argued that this will eliminate productivity gains soon (which have been falling in recent years anyway), and indeed is likely to entirely stop GDP growth before long.

Therefore in a steady state economy the scope for continued capitalist accumulation via productivity gains would be very small, and confined to the increases in output per unit of resource inputs that is due to sheer technical advance. There would not be room for more than a tiny class, accumulating greater wealth very gradually until energy costs eliminated even that scope. Meanwhile the majority would see this class taking more of the almost fixed output pie, and therefore would soon see that it made no sense to leave ownership and control of most of the productive machinery in the hands of a few.

But the overwhelmingly important factor disqualifying capitalism has yet to be taken into account. As has been made clear above the need is not just for zero-growth, it is for dramatic reduction in the amount of producing and consuming going on. These must be cut to probably less than one-fifth of the levels typical of a rich country today, because the planet cannot sustain anything like the present levels of producing and consuming, let alone the levels 9 billion people would generate. This means that most productive capacity in rich countries, most factories and mines, will have to be shut down.

I suspect that Dick Smith is like Tim Jackson in identifying capitalism with the private ownership of firms, and in thinking that “socialism” means public ownership. This is a mistake. The issue of ownership is not central; what matters most is the drive to accumulate, which can still be the goal in socialism of the big state variety (“state capitalism”.) In my ideal vision of the future post-capitalist economy most production would take place within (very small) privately owned firms, but there would be no concern to get richer and the economy would be regulated by society via participatory democratic processes.

So I think Dick has seriously underestimated the magnitude of the change that is required by the global predicament and of what would be involved in moving to a zero-growth economy. The core theme detailed in The Transition… is that consumer-capitalist society cannot be fixed. Dick seems to think you can retain it by just reforming the unacceptable growth bit. My first point above is that you can’t just take out that bit and leave the rest more or less intact. In addition you have to deal with the other gigantic faults in this society driving us to destruction, including allowing the market to determine most things, accepting competition rather than cooperation as the basic motive and process, accepting centralisation, globalisation and representative big-state “democracy”, and above all accepting a culture of competitive, individualistic acquisitiveness.

The Transition… argues that an inevitable, dreadful logic becomes apparent if we clearly grasp that our problems are primarily due to grossly unsustainable levels of consumption. There can be no way out other than by transition to mostly small, highly self-sufficient and cooperative local communities which run their own economies to meet local needs from local resources… with no interest whatsoever in gain. They must have the sense to focus on the provision of security and a high quality of life for all via frugal, non-material lifestyles. In this “Simpler Way” vision there can still be (some small scale) international economies, centralised state governments, high-tech industries, and in fact there can be more R and D on important topics than there is now. But there will not be anything like the resources available to sustain present levels of economic activity or individual or national “wealth” measured in dollars.

I have no doubt that the quality of life in The Simpler Way (see the website, Trainer 2011) would be far higher than it is now in the worsening rat race of late consumer-capitalism. Increasing numbers are coming to grasp all this, for instance within the rapidly emerging Transition Towns movement. We see our task as trying to establish examples of the more sane way in the towns and suburbs where we live while there is time, so that when the petrol gets scarce and large numbers realise that consumer-capitalism will not provide for them, they can come across to join us.

It is great that Dick is saying a zero-growth economy is no threat to capitalism. If he had said it has to be scrapped then he would have been identified as a deluded greenie/commie/anarchist out to wreck society and his growth critique would have been much more easily ignored. What matters at this point in time is getting attention given to the growth absurdity; when the petrol gets scarce they will be a bit more willing to think about whether capitalism is a good idea. Well done Dick!





Lithium’s limits to growth

7 08 2017

The ecological challenges of Tesla’s Gigafactory and the Model 3

From the eclectic brain of Amos B. Batto

A long but well researched article on the limitations of the materials needed for a transition to EVs…..

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Many electric car advocates are heralding the advent of Tesla’s enormous battery factory, known as the “Gigafactory,” and its new Model 3 electric sedan as great advances for the environment.  What they are overlooking are the large quantities of energy and resources that are consumed in lithium-ion battery manufacturing and how these quantities might increase in the future as the production of electric vehicles (EVs) and battery storage ramps up.

Most of the credible life cycle assessment (LCA) studies for different lithium-ion chemistries find large greenhouse gas emissions per kWh of battery. Here are the CO2-eq emissions per kWh with the battery chemistry listed in parentheses:
Hao et al. (2017): 110 kg (LFP), 104 kg (NMC), 97 kg (LMO)
Ellingsen et al. (2014): 170 kg (NMC)
Dunn et al. (2012): 40 kg (LMO)
Majeau-Bettez et al. (2011): 200 kg (NMC), 240 kg (LFP)
Ou et al (2010): 290 kg (NMC)
Zackrisson et al (2010): 440 kg (LFP)

Dunn et al. and Hao et al. are based on the GREET model developed by Argonne National Laboratory, which sums up the steps in the process and is based on the estimated energy consumption for each step. In contrast, Ellingsen et al. and Zackrisson et al. are based on the total energy consumption used by a working battery factory, which better captures all the energy in the processing steps, but the data is old and the battery factory was not very energy efficient, nor was it operating at full capacity. Battery manufacturing is getting more energy efficient over time and the energy density of the batteries is increasing by roughly 7% a year, so less material is needed per kWh of battery. It is also worth noting that no LCA studies have been conducted on the NCA chemistry used by Tesla. NCA has very high emissions per kg due to the large amount of nickel in the cathode, but is very energy dense, so less total material is needed per kWh, meaning it is probably similar in emissions to NMC.
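To put those per-kWh figures in vehicle terms, here is a rough sketch (my own arithmetic, assuming the 55 kWh pack size estimated for the Model 3 later in this article) of the manufacturing emissions each study would imply for a single battery pack:

```python
# Rough pack-level manufacturing emissions implied by the per-kWh LCA figures
# quoted above, assuming a 55 kWh pack (the size later estimated for the Model 3).
pack_kwh = 55

lca_estimates_kg_co2_per_kwh = {
    "Hao et al. (2017), LFP": 110,
    "Ellingsen et al. (2014), NMC": 170,
    "Dunn et al. (2012), LMO": 40,
    "Majeau-Bettez et al. (2011), LFP": 240,
    "Zackrisson et al. (2010), LFP": 440,
}

for study, kg_per_kwh in lca_estimates_kg_co2_per_kwh.items():
    tonnes_per_pack = kg_per_kwh * pack_kwh / 1000
    print(f"{study}: {tonnes_per_pack:.1f} tonnes CO2-eq per pack")
```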

The big debate in the LCA studies of battery manufacturing is how much energy is consumed per kWh of battery in the battery factory. In terms of MJ per kWh of battery, Ellingsen et al. estimate 586 MJ, Zackrisson et al. estimate 451 MJ and Majeau-Bettez et al. estimate 371-473 MJ. However, the energy for the drying rooms and factory equipment is generally fixed, regardless of the throughput. Ellingsen et al (2014) found that the energy expended to manufacture a kWh of battery could vary as much as 4 times, depending on whether the factory is operating at full capacity or partial capacity. Since the Gigafactory will probably be operating at full capacity and energy efficiency is improving, let’s assume between 100 MJ and 150 MJ per kWh of battery in the Gigafactory (which converts to 28 – 42 kWh per kWh of battery). It is unlikely to be significantly less, because it is more energy efficient to burn natural gas for the drying rooms than use electric heaters, but the Gigafactory will have to use electric heaters to meet Musk’s goal of 100% renewable energy.

If the Gigafactory produces 105 GWh of batteries per year at 100 – 150 MJ per kWh, and packages another 45 GWh of packs with batteries from other factories at 25 MJ per kWh, it will consume between 3,229 and 4,688 GWh per year, which is between 8.3% and 12.0% of the total electrical generation in Nevada in 2016. I calculate that 285 MW of solar panels can be placed on the roof of the Gigafactory and that they will only generate 600 GWh per year, assuming a yearly average of 7.16 kWh/m2/day of solar radiation, that 85% (1.3 million m2) of the roof will be covered, 20% efficiency in the panels and a 10% system loss.
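The arithmetic behind those figures can be reproduced in a few lines; a sketch using only the assumptions stated above:

```python
# Reproducing the Gigafactory energy arithmetic from the assumptions above.
MJ_PER_KWH = 3.6

# Factory consumption: 105 GWh/yr of cells at 100-150 MJ/kWh, plus 45 GWh/yr
# of packs (with cells made elsewhere) at 25 MJ/kWh.
for mj_per_kwh_cells in (100, 150):
    consumption_gwh = (105e6 * mj_per_kwh_cells + 45e6 * 25) / MJ_PER_KWH / 1e6
    print(f"{consumption_gwh:,.0f} GWh/yr")            # ~3,229 and ~4,688 GWh/yr

# Rooftop solar yield: 1.3 million m2 of panels, 7.16 kWh/m2/day of insolation,
# 20% panel efficiency, 10% system losses.
roof_area_m2 = 1.3e6
insolation_kwh_m2_day = 7.16
yield_gwh = roof_area_m2 * insolation_kwh_m2_day * 365 * 0.20 * 0.90 / 1e6
print(f"{yield_gwh:,.0f} GWh/yr of rooftop solar")      # ~611 GWh/yr, roughly 600
```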

Solar panels in dusty locations such as Nevada lose roughly 25% of their output if they are not regularly cleaned. Although robots have been developed to clean panels with brushes, water will most likely be used to clean the Gigafactory’s panels. A study by Sandia National Laboratory found that photovoltaic energy plants in Nevada consume 0.0520 acre-feet of water per MW of nameplate capacity per year. The solar panels at the Gigafactory will probably have 25% less area per MW than the solar panels in the Sandia study, so we can guesstimate that the solar panels on the Gigafactory roof will consume 11.1 acre-feet or 13,700 cubic meters of water per year.
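That water estimate follows the same back-of-envelope style; a sketch under the assumptions just stated:

```python
# Sketch of the panel-washing water estimate above.
CUBIC_METERS_PER_ACRE_FOOT = 1233.5

nameplate_mw = 285
water_acre_ft_per_mw_yr = 0.0520   # Sandia figure for Nevada PV plants
area_correction = 0.75             # rooftop panels assumed ~25% less area per MW

water_acre_ft = nameplate_mw * water_acre_ft_per_mw_yr * area_correction
print(f"{water_acre_ft:.1f} acre-feet per year")                 # ~11.1
print(f"{water_acre_ft * CUBIC_METERS_PER_ACRE_FOOT:,.0f} m3")   # ~13,700
```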

Solar panels can also be placed on the ground around the factory, but consider the fact that the Gigafactory will only receive 4.23 kWh/m2/day in December, compared to 9.81 kWh/m2/day in July. With less than half the energy from the panels during the winter, the Gigafactory will need other sources of energy during the times when it is cloudy and the sun’s rays are more indirect. Even during the summer, the Gigafactory will probably have to use temporary battery storage to smooth out the solar output or get additional energy from electric utilities, which use gas peaking, battery storage or purchases from the regional grid to give the Gigafactory a stable supply of electricity.

The original mockup of the Gigafactory showed wind turbines on the hillsides around the plant, but wind energy will not work onsite, because the area has such low wind speed. A weather station in the Truckee River valley along I-80, near the Gigafactory, measures an average wind speed of 3.3 m/s at a height of 6 meters, although the wind speed is probably higher at the site of the Gigafactory. Between 4 and 5 m/s is the minimum wind speed to start generating any energy, and between 5 and 6 m/s is generally considered the minimum for wind turbines to be economically viable. It might be possible to erect viable wind turbines onsite with 150 m towers to capture better wind, but the high costs make it likely that Tesla will forgo that option.
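The reason those thresholds matter so much is that the power available in the wind scales with the cube of wind speed; a rough sketch, assuming standard sea-level air density and ignoring turbine efficiency and capacity factor:

```python
# Power density of the wind scales with the cube of wind speed, which is why
# a 3.3 m/s site is so much worse than it sounds. Rough sketch assuming
# standard sea-level air density; real turbine output would be lower still.
AIR_DENSITY = 1.225  # kg/m3

def wind_power_density_w_per_m2(wind_speed_m_s):
    return 0.5 * AIR_DENSITY * wind_speed_m_s ** 3

for v in (3.3, 5.0, 6.0):
    print(f"{v} m/s -> {wind_power_density_w_per_m2(v):.0f} W per m2 of swept area")
# 3.3 m/s -> ~22 W/m2, 5.0 m/s -> ~77 W/m2, 6.0 m/s -> ~132 W/m2
```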

The region has good geothermal energy at depths of 4000 to 6000 feet and this energy is not variable like solar and wind. However, there is a great deal of risk in geothermal exploration, since drilling a test well costs $10 million. It is more likely that Tesla will try to buy geothermal energy from nearby producers, but geothermal energy in the region is already in heavy demand, due to the clean energy mandates from California, so it won’t be cheap.

Despite Musk’s rhetoric about producing 100% of the Gigafactory’s energy onsite from renewable sources, Tesla knows that it is highly unrealistic, which is why it negotiated to get $8 million in electricity rebates from the state of Nevada over an 8 year period. It is possible that the Gigafactory will buy hydroelectric energy from Washington or Oregon, but California already competes for that electricity. If Tesla wants a diversified supply of renewable energy to balance out the variability of its solar panels, it will probably have to provide guaranteed returns for third parties to build new geothermal plants or wind farms in the region.

I would guesstimate that roughly 2/3 of the electricity consumed by the Gigafactory will come from the standard Nevada grid, whereas 1/3 will be generated onsite or be bought from clean sources. In 2016, utility-scale electricity generation in Nevada was 72.8% natural gas, 5.5% coal, 4.5% hydroelectric, 0.9% wind, 5.7% PV solar, 0.6% concentrated solar, 9.8% geothermal, 0.14% biomass and 0.03% petroleum coke. If we use the grams of CO2-eq per kWh estimated by IPCC AR5 WGIII and Bruckner et al (2014), then natural gas emits 595 g, coal emits 1027 g, petroleum emits 880 g, hydroelectric emits 24 g, terrestrial wind emits 11 g, utility PV solar emits 48 g, residential PV solar emits 41 g, concentrated solar emits 27 g, geothermal emits 38 g and biomass emits 230 g. Based on those emission rates, grid electricity in Nevada emits 499 g CO2-eq per kWh. If 2/3 comes from the grid and 1/3 comes from rooftop PV solar or a similar clean source, then the electricity used in the Gigafactory will emit 346 g CO2 per kWh. If consuming between 3,229 and 4,688 GWh per year, the Gigafactory will emit between 1.12 and 1.62 megatonnes of CO2-eq per year, which represents between 3.1% and 4.5% of the greenhouse gas emissions that the state of Nevada produced in 2014 according to the World Resources Institute.
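Those intensity figures can be reproduced from the generation mix and per-source emission factors quoted above; a sketch of the weighted average and the resulting annual totals:

```python
# Reproducing the Nevada grid intensity and Gigafactory emission totals above.
generation_share = {      # 2016 Nevada utility-scale generation mix
    "natural gas": 0.728, "coal": 0.055, "hydro": 0.045, "wind": 0.009,
    "utility PV": 0.057, "concentrated solar": 0.006, "geothermal": 0.098,
    "biomass": 0.0014, "petroleum coke": 0.0003,
}
g_co2_per_kwh = {         # IPCC AR5 WGIII / Bruckner et al. (2014) factors
    "natural gas": 595, "coal": 1027, "hydro": 24, "wind": 11,
    "utility PV": 48, "concentrated solar": 27, "geothermal": 38,
    "biomass": 230, "petroleum coke": 880,
}

grid = sum(generation_share[s] * g_co2_per_kwh[s] for s in generation_share)
print(round(grid))        # ~498 g CO2-eq/kWh (the article rounds to 499)

# Two-thirds grid electricity plus one-third rooftop-PV-class electricity (41 g/kWh).
factory = (2 / 3) * grid + (1 / 3) * 41
print(round(factory))     # ~346 g CO2-eq/kWh

for consumption_gwh in (3229, 4688):
    megatonnes = consumption_gwh * 1e6 * factory / 1e12   # kWh * g/kWh -> Mt
    print(f"{megatonnes:.2f} Mt CO2-eq per year")          # ~1.12 and ~1.62
```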

Aside from the GHG emissions from the Gigafactory, it is necessary to consider the greenhouse gas emissions from mining, refining and processing the materials used in the Gigafactory. The materials used in batteries consume a tremendous amount of energy and resources to produce. The various estimates of the energy to produce the materials in batteries and their greenhouse gas emissions show the high impact that battery manufacturing has on the planet.

[Image: Impact per kg of battery materials]

To get some idea of how much material will be used in the NCA cells produced by the Gigafactory, I attempted to do a rough calculation of the weight of materials in 1 kWh of cells. Taking the weight breakdown of an NMC battery cell in Olofsson and Romare (2013), I used the same weight percentages for the cathode, electrolyte, anode and packaging, but scaled the energy density up from 233 Wh per kg in the NCA cells in 2014 to 263 Wh per kg, which is a 13% increase, since Tesla claims a 10% to 15% increase in energy density in the Gigafactory’s cells. Then, I estimated the weight of the components in the cathode, using 76% nickel, 14% cobalt, and 10% aluminum and some stoichiometry to calculate the lithium and oxygen compared to the rest of the cathode materials. The 2170 cells produced by the Gigafactory will probably have different weight ratios between their components, and they will have more packaging materials than the pouch cells studied by Olofsson and Romare, but this provides a basic idea of how much material will be consumed in the Tesla cells.

[Table: Estimated battery materials in 1 kWh of Gigafactory cells]

The estimates of the energy, the emissions of carbon dioxide equivalent, sulfur dioxide equivalent, phosphorus equivalent and human toxicity to produce the metals are taken from Nuss and Eckelman (2014), which are process-sum estimates based on the EcoInvent database. These are estimates to produce generic metals, not the highly purified metals used in batteries, and the process-sum methodology generally underestimates the emissions, so the estimates should be taken with a grain of salt, but they do give some idea about the relative impact of the different components in battery cells since they use the same methodology in their calculations.

At this point we still don’t know how large the battery will be in the forthcoming Model 3, but it has been estimated to have a capacity of 55 kWh based on a range of 215 miles for the base model and a 20% reduction in the size of the car compared to the Model S. At that battery size, the cells in the Model 3 will contain 6.3 kg of lithium, 26.4 kg of nickel, 4.9 kg of cobalt, 27.9 kg of aluminum, 56.6 kg of copper and 21.0 kg of graphite.

Even more concerning is the total impact of the Gigafactory when it ramps up to its planned capacity of 150 GWh per year. Originally, the Gigafactory was scheduled to produce 35 GWh of lithium ion batteries by 2020, plus package an additional 15 GWh of cells produced in other factories. After Tesla received 325,000 preorders for the Model 3 within a week of being announced on March 31, 2016, the company ambitiously announced that it would triple its planned battery production and be able to produce 500,000 cars a year by 2018, two years earlier than initially planned. Now Elon Musk is talking about building 2 to 4 additional Gigafactories, and Tesla is rumored to have signed a deal to build one of them in Shanghai.

If the estimate of the components in 1 kWh of Gigafactory batteries is correct and the Nevada plant manages to produce as much as Musk predicts, then the Gigafactory and the cells it packages from other battery factories will consume 17,119 tonnes of lithium, 71,860 tonnes of nickel, 13,292 tonnes of cobalt, 154,468 tonnes of copper and 75,961 tonnes of aluminum per year. All of these metals except aluminum have limited global reserves, and North America doesn’t have enough production capacity to supply all of the Gigafactory’s demand, except in the case of aluminum and possibly copper.
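Those totals follow from scaling the per-vehicle estimates above up to 150 GWh of annual cell output; a sketch of the arithmetic, which reproduces the quoted tonnages to within rounding of the underlying per-kWh estimates:

```python
# Scaling the estimated materials in a 55 kWh Model 3 pack up to the
# Gigafactory's planned 150 GWh/yr of cells.
pack_kwh = 55
annual_output_kwh = 150e6        # 150 GWh expressed in kWh

kg_per_pack = {"lithium": 6.3, "nickel": 26.4, "cobalt": 4.9,
               "aluminum": 27.9, "copper": 56.6, "graphite": 21.0}

for material, kg in kg_per_pack.items():
    tonnes_per_year = kg / pack_kwh * annual_output_kwh / 1000
    print(f"{material}: {tonnes_per_year:,.0f} tonnes per year")
# lithium ~17,200 t, nickel ~72,000 t, cobalt ~13,400 t, aluminum ~76,100 t,
# copper ~154,400 t, graphite ~57,300 t
```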

[Figure: Materials consumed by 150 GWh of Gigafactory batteries]

When the Gigafactory was originally announced, Tesla made statements about sourcing the battery materials from North America, which would both reduce its costs and lower the environmental impact of its batteries. These claims should be treated with skepticism. The Gigafactory will reduce the transportation emissions in battery manufacturing, since it will be shipping directly from the refineries and processors, but the transportation emissions will still be very high because North America simply doesn't produce enough of the metals the Gigafactory needs. If the Gigafactory manufactures 150 GWh of batteries per year, then it will consume almost 200 times more lithium than North America produced in 2013. It will also consume 166% of the cobalt, 133% of the natural graphite, 25.7% of the nickel, and 5.6% of the copper produced by North American mines in 2016. Presumably synthetic graphite will be used instead of natural graphite because it has a higher purity of carbon and more uniform spheroidal particles, which allow electrons to flow more easily in the anode, but most synthetic graphite comes from Asia. Only in the case of aluminum does it seem likely that the metal will come entirely from North America, since the Gigafactory will consume just 1.9% of North American mine production and the US has excess aluminum refining capacity and no shortage of bauxite. Even considering that roughly 45 GWh of the battery cells will come from external battery factories, which are presumably located in Asia, the Gigafactory will overwhelm the lithium and cobalt markets in North America and strain the local supplies of nickel and copper.

[Figure: Gigafactory metal consumption versus North American mine production]

Shipping from overseas contributes to greenhouse gases, but shipping over water is very energy efficient. The Gigafactory is located at a nexus of railroad lines, so it can efficiently ship the battery materials coming from Asia through the port of Oakland. The bigger problem is that most ships on international waters use dirty bunker fuels that contain 2.7% sulfur on average, so they release large quantities of sulfur dioxide into the atmosphere that cause acid rain and respiratory diseases.

A larger concern than the emissions from shipping is the fact that the production of most of these battery materials is an energy-intensive process that consumes between 100 and 200 megajoules per kg. The aluminum, copper, nickel and cobalt produced in North America are likely to come from operations powered by hydroelectric dams in Canada and natural gas in the US, so they are comparatively clean. Most of the metal refining and graphite production in Asia and Australia, however, is done by burning coal. Most of the places that produce battery materials either lack strong pollution controls, as is the case in Russia, the Democratic Republic of Congo (DRC), Zambia, the Philippines and New Caledonia, or use dirty sources of energy, as is the case in China, India, Australia, the DRC, Zambia, Brazil and Madagascar.

[Figure: Mine production by country]

Most of the world's lithium has traditionally come from pumping lithium-rich subsurface water out of the salt flats of Tibet, northeast Chile, northwest Argentina and Nevada, but the places with concentrated lithium brines are rapidly being exhausted. The US Geological Survey estimates that China's annual production of lithium, which mostly comes from salt flats in Tibet, has fallen from 4500 tonnes in 2012 to just 2000 tonnes in 2016. Silver Peak, Nevada, the only place in North America where lithium is currently extracted, may be experiencing similar production problems due to the exhaustion of its lithium, but its annual production numbers are confidential.

Since 1966, when brine extraction began in Silver Peak, the concentration of lithium in the water has fallen from 360 to 230 ppm (parts per million), and it is probably around 200 ppm today. At that concentration, 14,300 liters of water need to be extracted to produce 1 kg of battery-grade lithium metal. This subsurface water is critical in a state that receives an average of only 9 inches of rain per year. Parts of Nevada are already suffering from water rationing, so a massive expansion of lithium extraction is an added stress, but the biggest risk is that brine operations may contaminate the ground water. 30% of Nevada's water is pumped from underground aquifers, so protecting this resource is vitally important. Lithium-rich water is passed through a series of 4 or 5 evaporation ponds over a period of 12 to 18 months, where it is concentrated into lithium chloride, which is toxic to plants and aquatic life and can contaminate the ground water. Adams-Kszos and Stewart (2003) measured the effect of lithium chloride contamination on aquatic species 150 miles away from brine operations in Nevada.

As the lithium concentrations fall in the water, more energy is expended in pumping water and evaporating it to concentrate the lithium for processing. Argonne National Laboratory estimates that it takes 3 times as much energy to extract a tonne of lithium in Silver Peak, Nevada as in the Atacama Salt Flats of Chile, where the lithium is 7 times more concentrated.  Most of the lithium in Chile and Argentina is produced with electricity from diesel generators, but in China and Australia it comes from burning coal, which is even worse.

For every kg of battery-grade lithium, 4.4 kg of slaked lime is consumed to remove magnesium and calcium from the brine in Silver Peak. The process of producing this lime from limestone releases 0.713 kg of CO2 for every kg of lime. In addition, 5 kg of soda ash (Na2CO3) is added for each kilo of battery-grade lithium to precipitate it as lithium carbonate. Production of soda ash is also an energy-intensive process which produces greenhouse gases.
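Using only the two factors just quoted, the CO2 from the lime alone works out as follows (a minimal sketch, ignoring the soda ash and the process energy):

```python
# CO2 from the slaked lime consumed per kg of battery-grade lithium at Silver Peak,
# using only the two factors quoted above.

lime_per_kg_li = 4.4          # kg of slaked lime per kg of battery-grade lithium
co2_per_kg_lime = 0.713       # kg of CO2 released per kg of lime produced

co2_from_lime = lime_per_kg_li * co2_per_kg_lime
print(f"CO2 from lime alone: {co2_from_lime:.1f} kg per kg of lithium")   # ~3.1 kg
# The soda ash (5 kg per kg of lithium) adds further emissions on top of this.
```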

Although lithium is an abundant element that can be found in ocean water and salty lakes, there are only 4 places on the planet where it is concentrated enough, and free enough of contaminants, to be economically extracted from the water, and those few places are rapidly being exploited. In 2008, Meridian International estimated that 2 decades of mining had extracted 20% of the lithium from the epicenter of the Atacama Salt Flats, where lithium concentrations are above 3000 ppm. According to Meridian's calculations, the world had only 4 million tonnes of high-concentration lithium brine reserves remaining in 2008.

As the best concentrations of lithium brine are exhausted, extraction is increasingly moving to the mining of pegmatites, such as spodumene. North Carolina, Russia and Canada shut down their pegmatite operations because they couldn't compete with the cheap cost of lithium from the salt flats of Chile and Argentina, but Australia and Zimbabwe have dramatically increased their production of lithium from pegmatites in recent years. Between 2004 and 2016, the percentage of global lithium from pegmatites increased from 39% to 44%.

[Figure: Share of global lithium production from pegmatites]

In 2016, Australia produced 40.9% of the global lithium supply by processing spodumene, which is an extremely energy-intensive process. It takes 125 MJ of energy to extract a kilo of lithium from Chile's salt flats, whereas 850 MJ is consumed to extract the same amount of lithium from spodumene in Australia. The spodumene is crushed so it can be passed through a flotation beneficiation process to produce a concentrate. That concentrate is then heated to 1100°C to change the crystal structure of the mineral. The spodumene is then ground and mixed with sulfuric acid and heated to 250°C to form lithium sulfate. Water is added to dissolve the lithium sulfate, and the solution is filtered before soda ash is added, causing the lithium to precipitate as lithium carbonate. As lithium extraction increasingly moves to pegmatites and to salt flats with lower lithium concentrations, the energy consumed to produce lithium will dramatically increase.

Likewise, the energy to extract nickel and cobalt will also increase in the future. The nickel and cobalt from Canada and the copper from the United States generally come from sulfide ores, which require much less energy to refine, but these sulfide reserves are limited. The majority of the nickel and cobalt, and a sizable proportion of the copper, used by the Gigafactory will likely come from places which present ethical challenges. Nickel from sulfide ores generally consumes less than 100 MJ of energy per kg, whereas nickel produced from laterite ores consumes between 252 and 572 MJ per kg. All the sulfide sources emit less than 10 kg of CO2 per kg of nickel, whereas the greenhouse gas emissions from laterite sources range from 25 to 46 kg of CO2 per kg of nickel. It is generally better to acquire metals from sulfide ores, since they emit fewer greenhouse gases and generally come from deeper in the ground, whereas laterite ores are usually extracted by open-pit and strip mining, which causes greater disruption of the local ecology. Between 2004 and 2016, the percentage of global primary nickel production from laterite ores increased from 40% to 60%, and that percentage will continue to grow, since 72% of global nickel “resources” are laterites according to the US Geological Survey.

[Figure: Global nickel production by ore type]

Cobalt is a byproduct of copper or nickel mining. The majority of the sulfide ores containing copper/cobalt are located in places like Norilsk, Russia, Zambia and the Katanga Province of the Democratic Republic of Congo, where there are no pollution controls to capture the large amounts of sulfur dioxide and heavy metals released by smelting. The refineries in Norilsk, Russia, which produce 11% of the world's nickel and 5% of its cobalt, are so polluting that nothing grows within a 20 kilometer radius, and Norilsk is reported to have the highest rates of lung cancer in the world.

The Democratic Republic of Congo currently produces 54% of the world's cobalt and 5% of its copper. Buying cobalt from the DRC helps fuel a civil war in the Katanga Province, where the use of child soldiers and systematic rape are commonplace. Zambia, which lies just over the border from Katanga Province, produces 4% of the world's cobalt and copper, and it also has very lax pollution controls for metal refining.

Most of the cobalt and nickel produced by the DRC and Zambia is shipped to China for refining with coal-fired energy. China has cracked down on sulfur dioxide and heavy metal emissions in recent years, and the DRC is now attempting to do more of the refining within its own borders. The problem is that the DRC produces most of its energy from hydroelectric dams in tropical rainforests, which is the dirtiest energy on the planet. According to the IPCC (AR5 WGIII 2014), hydroelectric dams typically emit a median of 24 g of CO2-eq per kWh, but tropical dams accumulate large amounts of vegetation which collect at the bottom of the reservoir, where bacteria feeding on the decaying matter release methane (CH4) in the absence of oxygen. There have been no measurements of the methane released by dams in the DRC, but studies of 3 Amazonian hydroelectric dams found that they emit an average of 2556 g CO2-eq per kWh. Presumably the CO2 from these dams would have been emitted regardless of whether the vegetation decayed on the forest floor or in a reservoir, but rainforest reservoirs create oxygen-poor environments that produce methane. If we count only the methane emissions, then Amazonian hydroelectric dams emit an average of 2044 g CO2-eq per kWh. Any refining of copper/cobalt in the DRC and Zambia or nickel/cobalt in Brazil will likely use this type of energy, which emits twice as much greenhouse gas as coal.

To avoid the ethical problems with obtaining nickel and cobalt from Russia and cobalt and copper from the DRC and Zambia, the Gigafactory will have to consume metals from laterite ores in places like Cuba, New Caledonia, the Philippines, Indonesia and Madagascar, which dramatically increases the greenhouse gas emissions of these metals. The nickel/cobalt ore from Moa, Cuba is shipped to Sherritt's refineries in Canada, so presumably it will be produced with pollution controls in Cuba and Canada and relatively clean sources of energy. In contrast, the nickel/cobalt mining in the Philippines and New Caledonia has generated protracted protests by local populations affected by the contamination of their water, soil and air. When Vale's $6 billion high-pressure acid leaching plant in Goro, New Caledonia leaked 100,000 liters of acid-tainted effluent into a local river in May 2014, protesters frustrated by the unaccountability of the mining giant burned a third of its trucks and one of its buildings, causing between $20 and $30 million in damages. The mining companies extracting nickel and cobalt in the Philippines have shown so little regard for the health of the local people that the public outcry induced the Duterte administration to recently announce that it will prohibit all open-pit mining of nickel. If this pronouncement is enforced, the operations of 28 of the 41 companies mining nickel/cobalt in the country will be shut down, and the global supply of nickel will be reduced by between 8% and 10%.

Most refining of laterite ores in the world is done with dirty energy, which is problematic because these ores require so much more energy to process than sulfide ores. Much of the copper/cobalt from the DRC and Zambia and the nickel/cobalt from the Philippines is shipped to China, where it is refined with coal. The largest nickel/cobalt laterite mine and refinery in the world is the Ambatovy Project in Madagascar. Although the majority of the electricity on the island comes from hydroelectric dams, the supply is so limited that Ambatovy constructed three 30 MW coal-fired generators, plus 30 MW of diesel-powered generators.

It is highly likely that many of the LCA studies of lithium-ion batteries have underestimated the energy and greenhouse gas emissions to produce their metals, because they assume that the lithium comes from brine operations and the copper, nickel and cobalt come from sulfide ores with high metal concentrations. As lithium extraction increasingly shifts to spodumene mining and nickel and cobalt mining shifts to laterite ores, the greenhouse gas emissions to produce these metals will dramatically increase.

As the global production of lithium-ion batteries ramps up, the most concentrated ores for these metals will become exhausted, so mining will move to less concentrated sources, which require more energy and resources for extraction and processing. In 1910, copper ore in the US contained 1.9% copper. By 1950, this percentage had fallen to 0.9%, and by 1980 it was at 0.5%. As the concentration of copper in the ore has fallen, the environmental impact of extraction has risen. In a study of the smelting and refining of copper and nickel, Norgate and Rankin (2000) found that the energy consumption, greenhouse gas emissions and sulfur dioxide emissions per kg of metal rose gradually when moving from ore with 3% or 2% metal to 1% metal, but below 1% the environmental impacts increased dramatically. MJ/kg, CO2/kg and SO2/kg doubled when moving from ore with 1% metal to ore with 0.5% metal, and they doubled again when moving to 0.25% metal. Producing a kilo of copper in the US today has double the environmental impact of a kilo of copper half a century ago, and it will probably have 4 times the impact in the future.
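Treating Norgate and Rankin's finding as a simple rule of thumb (impacts roughly double each time the ore grade halves below 1%), the pattern looks like this; the sketch is an illustration of that rule, not a reproduction of their actual data:

```python
# Rough rule of thumb from the paragraph above: below ~1% ore grade, energy use,
# CO2 and SO2 per kg of metal roughly double each time the grade halves.

def relative_impact(grade_percent, reference_grade=1.0):
    """Impact per kg of metal relative to ore at the reference grade (1% metal)."""
    if grade_percent >= reference_grade:
        return 1.0                          # impacts rise only gradually above ~1%
    return reference_grade / grade_percent  # doubling for each halving below 1%

for grade in (2.0, 1.0, 0.5, 0.25):
    print(f"{grade}% ore: ~{relative_impact(grade):.0f}x the impact of 1% ore")
```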

The enormous demand for metals by battery manufacturers will force mining companies to switch to less and less concentrated ores and consume more energy in their extraction. If the Nevada Gigafactory produces 150 GWh of batteries per year, then it will dramatically reduce the current global reserves listed by the US Geological Survey. The Nevada Gigafactory alone will cut the current global lithium reserves from 400 to 270 years, assuming that current global consumption in other sectors does not change (which is highly unlikely). If the Gigafactory consumes metals whose recycled content matches the average US recycling rate, then the current global copper reserves will be reduced from 37.1 to 36.9 years, the nickel reserves from 34.7 to 33.9 years, and the cobalt reserves from 56.9 to 52.5 years.
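The reserve-lifetime arithmetic can be reconstructed as follows. The roughly 14 million tonne lithium reserve figure is my own assumption (it is close to the USGS estimate of the period); the 400-year baseline and the 17,119 tonnes per year of Gigafactory demand come from the text above:

```python
# Sketch of the reserve-lifetime arithmetic for lithium. The ~14 million tonne
# reserve figure is an assumption (roughly the USGS estimate of the period);
# the 400-year baseline and the 17,119 t/yr Gigafactory demand come from the text.

lithium_reserves_t = 14e6              # tonnes (assumed)
baseline_years = 400.0                 # reserve lifetime at current consumption (from text)
gigafactory_demand_t = 17_119          # tonnes/year consumed by a 150 GWh Gigafactory (from text)

current_consumption_t = lithium_reserves_t / baseline_years          # ~35,000 t/yr
new_lifetime = lithium_reserves_t / (current_consumption_t + gigafactory_demand_t)
print(f"Reserve lifetime with the Gigafactory: ~{new_lifetime:.0f} years")
# ~269 years, i.e. the roughly 270 years cited above.
```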

Recycling at the Gigafactory will not dramatically reduce its demand for metals. If we assume that 80% of the metal consumed by the Gigafactory will come from recycled content starting in 15 years, when batteries begin to be returned for recycling, then current global reserves will be extended by 0.04 years for copper, 0.09 years for nickel and 0.9 years for cobalt. Only in the case of lithium will recycling make a dramatic difference, extending the current reserves by 82 years.

The prospects for global shortages of these metals become even more dire if the 95.0 million vehicles that the world produced in 2016 were all long-range electrics, as Elon Musk advocates in the name of “sustainable transport.” If the average vehicle (including all trucks and buses) has a 50 kWh battery, then the world would need to produce 4750 GWh of batteries per year just for electric vehicles. With energy storage for the electrical grid, that total will probably double, so 64 Gigafactories would be needed. Even that might not be enough. In Leonardo DiCaprio's documentary Before the Flood, Elon Musk states, “We actually did the calculations to figure out what it would take to transition the whole world to sustainable energy… and you'd need 100 Gigafactories.”
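The arithmetic behind the 4750 GWh figure and the Gigafactory count is straightforward; here is a minimal sketch using only the numbers quoted above:

```python
# Battery demand if all 95 million vehicles built in 2016 were long-range EVs,
# using the 50 kWh average pack size assumed in the text.
import math

vehicles_per_year = 95.0e6       # vehicles built in 2016 (from the text)
avg_pack_kwh = 50.0              # average battery size assumed in the text
gigafactory_gwh = 150.0          # annual output of one Gigafactory

ev_demand_gwh = vehicles_per_year * avg_pack_kwh / 1e6      # kWh -> GWh
total_demand_gwh = 2 * ev_demand_gwh                        # doubled for grid storage, per the text

print(f"EV batteries alone: {ev_demand_gwh:,.0f} GWh per year")        # 4,750 GWh
print(f"With grid storage: {total_demand_gwh:,.0f} GWh per year")      # 9,500 GWh
print(f"Gigafactories needed: {math.ceil(total_demand_gwh / gigafactory_gwh)}")  # 64
```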

Lithium-ion batteries will get more energy dense in the future, but LMO or LFP chemistries are unlikely to reach the high energy density of the NCA cells produced in the Gigafactory. For that kind of energy density, batteries will probably need either an NCA or an altered NMC chemistry that is 70%-80% nickel, so the proportions of lithium, nickel, cobalt and copper in most future EV batteries are likely to be similar to those in the Gigafactory's NCA cells. If 4750 GWh of these batteries are produced every year at an energy density of 263 Wh/kg, then the current global reserves will be used up in 24.5 years for lithium, 31.2 years for copper, 20.2 years for nickel, and 15.4 years for cobalt. Even if those batteries are produced with 80% recycled metals, starting in 15 years' time, the current global lithium reserves would be extended by only 6.6 years, or 7.4 years if all sectors switch to using 80% recycled lithium. Using 80% recycled metal in the batteries would extend current copper, nickel and cobalt reserves by 0.7, 0.5 and 0.1 years, respectively. An 80% recycling rate in all sectors would make a difference for copper, extending its reserves by 11.5 years, but only 2.8 years for nickel and 0.2 years for cobalt. In other words, recycling will not significantly reduce the enormous stresses that lithium-ion batteries will place on global metal supplies, because they represent so much new demand for metals.

As the demand for these metals increases, prices will rise and new sources will be found, but they will either be in places like the DRC, with its ethical challenges, or in places with lower quality ores that require more energy and resources to extract and refine. We can expect more energy-intensive mining of spodumene and more strip mining of laterite ores, which cause greater ecological disruption. The ocean floor holds enormous quantities of manganese, nickel, copper and cobalt, but the energy and resources needed to scrape the bottom of the ocean will dramatically increase the economic and ecological costs. If battery manufacturing dramatically raises the prices of lithium, nickel, cobalt and copper (and manganese for NMC cells), then it will be doubly difficult to transition to a sustainable civilization in other areas. For example, nickel and cobalt are essential for making carbide blades, tool dies and high-temperature turbine blades, and copper is vital for wiring, electronics and electric motors. It is hard to imagine how the whole world will transition to a low-carbon economy if these metals are made prohibitively expensive by the manufacturing of over a billion lithium-ion batteries for EVs.

Future batteries will probably be able to halve their weight by switching to a solid electrolyte and using an anode made of lithium metal, lithiated silicon or carbon nanotubes (graphene), but that will only eliminate the copper, while doing little to reduce the demand for the other metals. Switching the anode to spongy silicon or graphene will allow batteries to hold more charge per kilogram, but those materials also dramatically increase the cost and the energy and resources that are consumed in battery manufacturing.

In the near future, lithium-ion batteries are likely to continue their historical trend of using 7% less material each year to hold the same amount of charge. That rate of improvement, however, is unlikely to last. An NCA cathode currently holds a maximum of 200 mAh of charge per gram, against a theoretical maximum of 279 mAh/g. It has already achieved 72% of what is theoretically possible, so there is little scope for further improvement. NMC, at 170 mAh/g, is currently further from its theoretical limit of 280 mAh/g, but the rate of improvement is likely to slow as these battery chemistries bump up against their theoretical limits.

Clearly the planet doesn't have the resources to build 95 million long-range electric vehicles each year that run on lithium-ion batteries. Possibly a new type of battery will be invented that uses only common materials, such as aluminum, zinc, sodium and sulfur, but all the batteries conceived so far with these sorts of materials still have significant drawbacks. Maybe a new battery chemistry suitable for vehicles will be invented, or the membranes in fuel cells will become cheap enough to make hydrogen a viable competitor, but at this point lithium-ion batteries appear likely to dominate electric vehicles for the foreseeable future. The only way EVs based on lithium-ion batteries can become a sustainable solution for transport is if the world learns to live with far fewer vehicles.

Currently 3% more vehicles are being built each year, and there is huge demand for vehicles in the developing world. While demand for cars has plateaued in the developed world, vehicle manufacturing since 1999 has grown 17.4% and 10.5% per year in China and India, respectively. If the developing world follows the unsustainable model of vehicle ownership found in the developed world, then the transition to electrified transport will cause severe metal shortages. Based on current trends, Navigant Research predicts that 129.9 million vehicles will be built in the year 2035, when there will be 2 billion vehicles on the road.

[Figure: Global auto production]

On the other hand, James Arbib and Tony Seba believe that autonomous vehicles and Transport as a Service (TaaS) such as Uber and Lyft will dramatically reduce demand for vehicles, lowering the number of passenger vehicles on American roads from 247 million to 44 million by 2030. If 95% of passenger miles are autonomous TaaS by 2030 and the lifespan of electric vehicles grows to 500,000 miles, as Arbib and Seba predict, then far fewer vehicles will be needed. Manufacturing fewer electric vehicles reduces the pressure to extract metals from laterite ores, pegmatites, the ocean floor, and lower-grade ores in general, all of which carry higher ecological costs.

Ellingsen et al (2016) estimate that the energy consumed by battery factories per kWh of batteries has halved since 2012; however, that has to be balanced against the growing use of lithium from spodumene, of nickel and cobalt from laterite ores, and of ores with lower metal concentrations, all of which require more energy and produce more pollution. Given the increased energy efficiency of battery manufacturing plants and the growing economies of scale, I would guesstimate that lithium-ion battery emissions are currently roughly 150 kg CO2-eq per kWh of battery, and that the Gigafactory will lower those emissions by a third, to roughly 100 kg CO2-eq per kWh. If the Model 3 uses a 55 kWh battery, then its battery emissions would be roughly 5500 kg CO2-eq.

Manufacturing a medium-sized EV without the battery emits 6.5 tonnes of CO2-eq, according to Ellingsen et al (2016). Electric cars don't have the huge engine block of an ICE car, but they have large amounts of copper in the motor's rotor and windings, and the Model 3 will have far more electronics than a standard EV. The Model S has 23 kg of electronics, and I would guesstimate that the Model 3 will have roughly 15 kg of electronics if it contains nVidia's Drive PX or a custom processor based on the K-1 graphics processor. If the GHG emissions are roughly 150 kg CO2-eq per kg of electronics, we can guesstimate that 2.2 tonnes of CO2-eq will be emitted to manufacture the electronics in the Model 3. Given the large amount of copper, electronics and sensors in the Model 3, add an additional tonne to the 6.5-tonne base, plus 5.5 tonnes for its 55 kWh battery, for a total of roughly 13 tonnes of CO2-eq to manufacture the entire car.

Manufacturing a medium-sized ICE car emits between 5 and 6 tonnes, so there is roughly a 7.5 tonne difference in GHG emissions between manufacturing the Model 3 and a comparable ICE car. A new ICE car the size of the Model 3 will get roughly 30 mpg. In the US, a gallon of gasoline emits 19.64 lbs of CO2 when burned, but 24.3 lbs of CO2e when the methane and nitrous oxide are included, plus the emissions from extraction, refining and transportation, according to the Argonne National Laboratory. Therefore, we would need to burn 680 gallons of gasoline, or drive 20,413 miles at 30 mpg, to equal the 7.5 extra tonnes emitted in manufacturing the Model 3.
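The break-even mileage follows directly from those figures; a minimal sketch:

```python
# Break-even mileage for the extra manufacturing emissions of the Model 3,
# using the figures quoted above.

extra_manufacturing_t = 7.5            # extra tonnes CO2-eq for the Model 3 vs a comparable ICE car
lbs_per_tonne = 2204.6
gasoline_co2e_lbs_per_gal = 24.3       # well-to-wheel CO2e per gallon (Argonne figure cited above)
ice_mpg = 30.0

gallons = extra_manufacturing_t * lbs_per_tonne / gasoline_co2e_lbs_per_gal
print(f"Gasoline equivalent: {gallons:.0f} gallons")              # ~680 gallons
print(f"Break-even distance: {gallons * ice_mpg:,.0f} miles")     # ~20,400 miles
```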

At this point, whether the Model 3 makes ecological sense depends on where the electricity comes from. Let's assume that the Model 3 will consume 0.30 kWh of electricity per mile, which is what the EPA estimates the Nissan Leaf to consume. The Model 3 will be a smaller and more aerodynamic car than the Leaf, but it will also weigh significantly more due to its larger battery. If we also include the US national average of 4.7% transmission losses in the grid, then the Model 3 will consume 0.315 kWh per mile. After driving the Model 3 100,000 miles, the total greenhouse gas emissions (including the production emissions) will range between 14.1 and 45.3 tonnes, depending on the energy source used to charge the battery.
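To see how the 100,000-mile total depends on the grid, here is a sketch using the consumption and production figures above; the 528 g/kWh US average appears later in the text, while the other two grid intensities are purely illustrative assumptions of mine:

```python
# Lifetime (100,000 mile) emissions of the Model 3 as a function of grid carbon intensity.
# The 528 g/kWh US average comes from the text; the other intensities are illustrative assumptions.

production_t = 13.0                   # tonnes CO2-eq to manufacture the car (from the text)
kwh_per_mile = 0.315                  # includes 4.7% transmission losses
miles = 100_000

grid_g_per_kwh = {
    "hydro/nuclear-heavy grid (assumed)": 50,
    "US average (from text)": 528,
    "coal-heavy grid (assumed)": 1000,
}

for grid, intensity in grid_g_per_kwh.items():
    driving_t = kwh_per_mile * miles * intensity / 1e6     # grams -> tonnes
    print(f"{grid}: {production_t + driving_t:.1f} tonnes CO2-eq")
# ~14.6, ~29.6 and ~44.5 tonnes respectively, broadly spanning the
# 14.1-45.3 tonne range cited above.
```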

[Figure: Vehicle emissions over 100,000 miles by energy source]

In comparison, driving a 30 mpg ICE car (with 5.5 tonnes in production emissions) will emit 42.2 tonnes of CO2-eq after 100,000 miles. If we guesstimate that manufacturing a Toyota Prius emits 7 tonnes, then driving it 100,000 miles at 52 mpg will emit 28.2 tonnes. Only in places like Kentucky, which gets almost all of its electricity from coal, is an ICE car the better environmental choice. The Model 3, however, will have worse emissions than most of its competitors in the green car market if it runs on average US electricity, which emits 528 grams of CO2-eq per kWh. It will emit slightly more than a plug-in hybrid like the Chevy Volt or an efficient hybrid like the Toyota Prius, and substantially more than a short-range electric like the Nissan Leaf.

Most previous comparisons between electric cars and ICE cars were based on short-range electrics with smaller batteries, such as the Nissan Leaf, which is why environmental advocates are so enthusiastic about EVs. However, comparing the Model S and Model 3 to the Nissan Leaf, Chevy Volt and Toyota Prius hybrid shows that the environmental benefits of long-range EVs are questionable when compared to short-range EVs, plug-in hybrids and hybrids. Only when running on cleaner sources of electricity does the Model 3 emit less greenhouse gas than hybrids and plug-in hybrids; in the majority of the United States it will emit slightly more. Many of the early adopters of EVs also owned solar panels, so buying a Model 3 will reduce their carbon footprint, but the proportion of EV owners with solar panels on their roofs is falling. According to CleanTechnica's PlugInsights annual survey, 25% of EV buyers before 2012 had solar panels on their roofs, compared to just 12% in 2014-2015. Most people who own solar panels do not have a home battery system, so they cannot use their clean energy all day, and most EV charging will happen at night using dirtier grid electricity.

Another factor to consider is the effect of methane leakage in the extraction and transport of natural gas. There is a raging scientific debate about what percentage of natural gas leaks into the atmosphere without being burned. A number of studies have concluded that the leakage of methane causes electricity from natural gas to have GHG emissions similar to coal, but there is still no consensus on the matter.  If the leakage rate is as high as some researchers believe, then EVs will emit more greenhouse gases than hybrids and efficient ICE cars in places like California which burn large amounts of natural gas.

On the other hand, many people believe that EVs will last 300,000 or even 500,000 miles, since they have so few moving parts, so their high manufacturing emissions will be justified. However, the EV battery will probably have to be replaced, and the manufacturing emissions for a long-range EV battery can be as high as those for building a whole new ICE car. Another factor that could inhibit the long life of Tesla's cars is that the company builds cars described as “computers on wheels,” which are extremely difficult for third parties to fix and upgrade over time. Tesla only sells its parts to authorized repair shops, and much of the functionality of the car is locked up with proprietary code and secret security measures, as many do-it-yourselfers have discovered to their chagrin. When Tesla cars are damaged and sold as salvage, Tesla remotely disables them, so that they will no longer work even if repaired. The $600 inspection fee to reactivate the car, plus the towing fees, discourages Teslas from being fixed by third parties. These policies make it less likely that old Teslas will be repaired and their lifespans extended to counterbalance the high environmental costs of producing the cars.

Although the Model 3 has high greenhouse gas emissions in its production, and driving it is also problematic in parts of the world that currently use dirty energy, those emissions could be significantly reduced in the future if they are accompanied by a shift to renewable energy, more recycling and the electrification of mining equipment, refining and transport. The car's ecological benefits will increase if the emissions from producing battery materials can be decreased and if the greater energy density of batteries is used to reduce the total material in batteries rather than to keep extending the range of EVs. Producing millions of Model 3s will strain the supply of vital metals and shift extraction to reserves with higher ecological costs. However, the Model 3 could become a more sustainable option if millions of them are deployed in autonomous Transport as a Service fleets, which Arbib and Seba predict will be widespread by 2030, since TaaS will cost a tenth of the price of owning a private vehicle. If the Model 3 and future autonomous EVs become a means to reduce the global demand for private vehicles, and that in turn brings the demand for lithium, nickel, cobalt and copper down to sustainable levels, then the high environmental costs of manufacturing the Model 3 would be justified.

Nonetheless, the Model 3 and the NCA 2170 cells currently being produced by Tesla offer few of those possible future ecological benefits. Most of the metal and graphite in the battery is being produced with energy from fossil fuels. In the short term at least, Tesla batteries will keep growing in capacity to offer more range, rather than reducing the total consumption of metals per battery. The extra sensors, processing power and electronics in the current Model 3 will increase its ecological costs without providing the Level 4 or 5 autonomy that would make it possible to convince people to give up their private vehicles. In the here and now, the Model 3 is generally not the best ecological choice, but it might become a better choice in the future.

The Model 3 promises to transform the market not only for EVs, but for cars in general. If the unprecedented 500,000 pre-orders for the Model 3 are any indication of future demand, then long-range electrics with some degree of autonomous driving, like the Model 3, will capture most of the EV market. Tesla's stunning success will induce the rest of the auto-makers to start making long-range EVs with large batteries, advanced sensors, powerful image processors, advanced AI, cellular networking, driving data collection and large multimedia touchscreens. These features will dramatically increase the environmental costs of car manufacturing. Whether they will be balanced by other factors that reduce those costs remains to be seen.

Much of this analysis is guesswork, so it should be taken with a grain of salt, but it points out the problems with automatically assuming that EVs are always better for the environment. If we consider sulfate emissions, EVs are significantly worse for the environment, and when we consider the depletion of critical metal reserves, they compare significantly worse than ICE vehicles.

The conclusion should be that switching to long-range EVs with large batteries and advanced electronics poses significant environmental challenges. The high manufacturing emissions of these types of EVs make their ecological benefits questionable for private vehicles, which are only used on average 4% of the time. However, they are a very good option for vehicles that are used a higher percentage of the time, such as taxis, buses and heavy trucks, because they will be driven enough miles to counterbalance their high manufacturing emissions. Companies such as BYD and Proterra provide a model of the kinds of electric vehicles that Tesla should be designing to promote “sustainable transport.” Tesla has a few ideas on the drawing board that are promising from an ecological perspective, such as its long-haul semi, the renting out of Teslas to an autonomous TaaS fleet, and a new vehicle that sounds like a cross between a sedan and a minibus for public transport. The current Model 3, however, is still a vehicle that promotes private vehicle ownership, bears the high ecological costs of long-range lithium batteries, and contributes to the growing shortage of critical metals.

Clearly, EVs alone are not enough to reduce greenhouse gas emissions or attain sustainable transport in general. The first step is to work on switching the electric grid to cleaner renewable energy and installing more residential solar, so that driving an EV emits less CO2. However, another important step is redesigning cities and changing policies so that people aren’t induced to drive so many private vehicles. Instead of millions of private vehicles on the road, we should be aiming for walkable cities and millions of bikes and electric buses, which are far better not only for human health, but also for the environment.

A further step where future Model 3s may help is in providing autonomous TaaS that convinces people to give up their private vehicles. However, autonomous EVs need to be matched by public policies that disincentivize the kind of needless driving that is likely to occur in the future. The total number of vehicle miles will probably increase, due to autonomous electric cars driving around looking for passengers to pick up, and to people spending more time in the car because they can surf the web, watch movies and enjoy the scenery without doing the steering. In addition, the cost of the electricity to charge the battery is so low compared to burning gasoline that people will be induced to drive more, not less.





Chris Martenson on insanity

5 08 2017

Published on 4 Aug 2017

Read the latest articles at Peak Prosperity: https://www.peakprosperity.com/

Our Brave New "Markets"
https://www.peakprosperity.com/blog/1…

The Inevitability Of DeGrowth
https://www.peakprosperity.com/blog/1…

Suicide By Pesticide
https://www.peakprosperity.com/inside…

View the “Accelerated” Crash Course Here: https://www.youtube.com/watch?v=pYyugz5wcrI





What’s really driving the global economic crisis is net energy decline

3 08 2017

And there’s no going back. So let’s step into the future.

By Jonathan Rutherford

Source: Doug Menuez

Published by INSURGE INTELLIGENCE, a crowdfunded investigative journalism project for people and planet. Support us to keep digging where others fear to tread.

In the fifth contribution to our symposium, ‘Pathways to the Post-Carbon Economy’, Jonathan Rutherford explores the fundamental driver of global economic malaise: not debt; not banks; but a protracted, slow-burn crisis of ‘net energy decline.’

Cutting through the somewhat stale debate between advocates and critics of ‘peak oil’, Rutherford highlights some of the most interesting and yet little-known scientific literature on the intimate relationship between the global economy and energy.

Whatever happens with the shift to renewables, he argues, we are moving into an era in which fossil fuels will become increasingly defunct, especially after mid-century.

The implications for the future of the global economy will not be pretty — but if we face up to it, the transition to more sustainable societies will be all the better for facing reality, rather than continuing with our heads in the sand (or, as per the image above, stuck up the bull’s behind).


As argued in more detail by Ted Trainer in this symposium, the best hope for transition to a 'post-carbon' — or, better, a sustainable society (a much broader goal) — lies in a process of radical societal reconstruction, focused on building, in the here and now, self-governing and self-reliant settlements, starting at the micro-local level.

The ‘Simpler Way’ vision we promote, in my view, is an inspiring alternative that we can and should work for. The hope is that these local movements — which have already begun to emerge — will network, educate and scale up, as the global crisis intensifies.

In what follows, I want to complement this view, by sketching why I think the global economy will inevitably face a terminal crisis of net energy in coming years. In making this prediction, I am assuming that global transnational elites (i.e. G7 elites), as well as subordinate national elites — who manage the globalised neoliberal economy — will pursue economic growth at all costs, as elites have done since the birth of the capitalist system in Britain 300+ years ago.

That is, they will not voluntarily pursue a process of organised ‘degrowth’. In my view, at best, they will vigorously pursue ‘green’ growth, i.e. via the rapid scaling up of renewable energy and promoting efficiency etc., but with no intention of actively reducing the overall level of energy consumption — indeed, most of the mainstream ‘green growth’ scenarios assume a doubling of global energy demand by 2050 (for a critical review of one report, see here).

I am focusing on energy but, of course, we can and should add to this picture the wider multidimensional ecological crisis (climate change impacts, soil depletion, water stress, biodiversity loss etc.) which, among other things, means that an ever-increasing proportion of GDP growth takes the form of "compensatory and defensive costs" (see e.g. Sarkar, The Crisis of Capitalism, p.267–275) to deal with past and expected future ecological damage.

Energy and GDP Growth

Axiom 1: As the biophysical economists have shown, global economic growth is closely correlated with growth in energy consumption.

Professor Minqi Li of the University of Utah's Department of Economics, for example, shows that between 2005 and 2016:

‘an increase in economic growth rate by one percentage point is associated with an increase in primary energy consumption by 0.96 percent.’

GDP growth also depends on improvements in energy efficiency — Li reports that over the last decade energy efficiency improved by an average of 1.7% per annum.

One of the future uncertainties is how rapidly we are likely to improve energy efficiency — future supply constraints are likely to incentivise this strongly, and there will be scope for significant efficiency improvements, but there are also likely to be diminishing returns once the low-hanging fruit has been picked.

Axiom 2: Economic growth depends not just on increases in gross energy consumption and energy efficiency, but the availability of net energy. Net energy can be defined as the energy left over after subtracting the energy used to attain energy — i.e. the energy used during the process of extraction, harvesting and transportation of energy. Net energy is critical because it alone powers the non-energy sectors of the global economy.

Without net energy all non-energy related economic activity would cease to function.

Insight: An important implication is that net energy can be in decline, even while gross primary energy supply is constant or even increasing.
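A minimal numerical illustration of this insight, with made-up numbers: if EROI falls fast enough, net energy shrinks even as gross energy grows.

```python
# Illustration of the Insight above: net energy can fall even while gross energy rises,
# if the energy cost of obtaining energy (i.e. falling EROI) rises faster.
# All numbers here are illustrative, not forecasts.

def net_energy(gross, eroi):
    """Net energy = gross energy minus the energy spent obtaining it (gross / EROI)."""
    return gross * (1.0 - 1.0 / eroi)

print(net_energy(gross=100.0, eroi=18.0))   # 94.4 units of net energy
print(net_energy(gross=105.0, eroi=10.0))   # 94.5 -> gross up 5%, net roughly flat
print(net_energy(gross=110.0, eroi=6.0))    # 91.7 -> gross up 10%, net now falling
```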

Below I will make my case for a probable and intensifying global net energy contraction by discussing, first, the broad factors shaping the likely trajectory of global primary energy growth, followed by a discussion of overall net energy. Most of the statistics are drawn from Minqi Li's latest report, which in turn draws on the latest BP Statistical Review of World Energy.

Prospects for Gross Energy Consumption

Over the last decade, world primary energy consumption grew at an average annual rate of 1.8 percent. It's important to note, however, as Jean-Marc Jancovici shows, that in per-capita terms the rate of energy growth has slowed significantly since the 1980s, increasing at an average annual rate of 0.4% since that time, compared to 1.2% in the century prior. This is mainly due to the slowing growth in world oil supply since the two oil shocks of the 1970s.

There are strong reasons for thinking that the rate of increase in gross energy availability will slow further in coming decades. Recently a peer-reviewed paper estimated the maximum rate at which humanity could exploit all ultimately recoverable fossil fuel resources. It found that, depending on assumptions, the peak in all fossil fuels would be reached somewhere between 2025 and 2050 (a finding that aligns with several other studies, e.g. Maggio and Cacciola 2012; Laherrère 2015).

This is highly significant because today fossil fuels make up about 86% of global primary energy use — a figure that, notwithstanding all global efforts to date, has barely changed in three decades. This surprising early peak estimate is substantially associated with the recent radical down-scaling of estimated economically and technically recoverable coal reserves.

The situation for oil is particularly critical, especially given that it is by far the world's major source of liquid fuel, powering 95% of all transport. A recent HSBC report found that, already today, somewhere between 60% and 80% of conventional oil fields are in terminal decline. It estimated that by 2040 the world would need to find the equivalent of four Saudi Arabias (the largest oil supplier) worth of additional oil just to maintain current rates of supply, and more than double that to meet projected 2040 demand.

And yet, as the same report showed, new oil discoveries have been in long-term decline — lately reaching record lows, notwithstanding record investment between 2001 and 2014. Moreover, new discoveries are invariably smaller fields with more rapid peak and decline rates. The recent boom in US tight oil — a bubble fueled by low interest rates and record oil industry debts — has been responsible for most additional supply since the peak in conventional oil in 2005, but it is likely to be in terminal decline within the next 5–10 years, if it has not already peaked.

All this, as Nafeez Ahmed has argued, is creating the conditions (once the current oil glut has been drawn down) for an oil supply crunch and price spike within the next few years that has the potential to send the debt-ridden global economy into an even bigger global financial crisis tailspin. It may well be a seminal event that future historians look back on as marking the beginning of the end of the oil age.

An alternative currently fashionable view is that peak oil will be effectively trumped by a near-term voluntary decline in oil demand (so called ‘peak demand’), mainly due to the predicted rise of electric vehicles. One reason (among several), however, to be skeptical of such forecasts is that currently there is absolutely no evidence that oil demand is in decline — on the contrary, it continues to increase every year, and since the oil price drop in 2014, at an accelerating rate.

When peak oil does arrive, there are likely to be powerful incentives to implement coal-to-liquids or gas-to-liquids but, apart from the huge logistical and infrastructure problems involved, a move in this direction will only accelerate the near-term peaking of coal and gas supply, especially given the energetic inefficiencies involved in fuel conversion. Peak oil will also likely incentivise the acceleration towards electrification of transport and renewable energy, to which I will now turn.

Given peak fossil fuels, the prospects for increasing, or even just maintaining, gross energy supply depend heavily on how fast renewable energy and nuclear power can be scaled up. Nuclear energy currently accounts for 4.5% of global energy supply, but it is in decline globally, and there are good reasons for thinking that it will not — and should not — play a major role in the future energy mix (see e.g. Our Renewable Future, Heinberg & Fridley, 2016, p.132–135).

In 2016, all forms of renewable electricity (i.e. excluding bio-fuels) accounted for about 10% of global energy consumption, but a large portion of this was hydroelectricity, which has limited potential for expansion. Wind, solar PV and concentrated solar power (CSP) are generally agreed to be the major renewable technologies capable of a large increase in capacity but, notwithstanding rapid growth in recent years, in 2016 they still accounted for just 2.2% of world primary energy consumption.

Insight: In recent years many ‘green-growth’ reports have been published with optimistic renewable energy forecasts — one even claiming that renewables could supply all world energy (not just electricity) by 2050. But, it should be recognised that this would require a very dramatic increase in the rate of growth in renewable capacity.

In the last six years, new investment (including government, private sector etc.) in all forms of renewable energy has leveled off at around $300 billion a year. Heinberg and Fridley (p.123) estimate that this rate of investment would have to be multiplied by more than a factor of ten, and sustained each year for several decades, if renewable energy were to meet current global energy demand, let alone the projected doubling of demand in most mainstream energy scenarios.

In other words, it would require an upfront annual investment of US$3 trillion a year (and more over the entire life cycle). By comparison, in 2014 the IEA estimated that global investment in all energy supply (i.e. fossil fuels and renewables etc.) in 2035 would be US$2 trillion per year. In addition, if fossil fuel capacity is to be phased out entirely by 2050, much existing capital would have to be scrapped prematurely — depriving investors of full returns on their capital — which can be expected to trigger fierce resistance from large sections of, if not the entire, transnational capitalist class.

Currently both oil and gas supply, if not coal, are growing much faster than all renewables, at least in absolute if not percentage terms. No wonder that the most ambitious IPCC emission reduction scenarios assume continued large scale use of fossil fuels through to 2050, and rely instead on highly uncertain and problematic ‘net emission’ technologies (i.e Carbon Capture and Storage, massive planting of trees etc).

Based on current trends, Minqi Li’s recent energy forecast predicts that the growth of renewable energy will, at best, offset the inevitable decline in fossil fuel energy over coming decades. He forecasts that a peak in gross global energy supply (including fossil fuels and renewables) will be reached by about 2050.

This of course does not include the very real possibility of serious energy ‘bottlenecks,’ resulting, for example, from the peak in oil — for which no government is adequately preparing — and with no alternative liquid fuel source, on the scale required, readily available.

The Net Energy Equation

The foregoing has just been about gross energy, but as mentioned above, the real prospects for the growth-industrial economy depend on net energy, which alone fuels the non-energy sectors of the economy. This is where the picture gets really challenging.

With regard to fossil fuels, EROI is on a downward trajectory. The current estimate (as of 2014) for global oil & gas is an EROI of about 18:1. And while it's true that technological innovation can improve the efficiency of oil extraction, in general this is being overwhelmed by the increasing global reliance on lower-EROI unconventional oil & gas sources — a trend which will continue from now until the end of the fossil fuel age.

Axiom 3: What is often overlooked, is that declining EROI will exacerbate the problem of peak fossil fuels.

As Charles Hall explains, declining EROI will accelerate the advent of peak fossil fuels, because more gross energy has to be extracted just to deliver the same amount of net energy to the economy. And when, inevitably, we begin to move down the other side of Hubbert's peak, things will get even more challenging. At that point, decreasing gross supply will be combined with ever greater reliance on lower-EROI supplies, rapidly reducing the amount of net energy available to society.
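A sketch of this combined effect, using a purely illustrative trajectory of gross supply and EROI (these are not forecasts):

```python
# Sketch of the combined effect described above: once gross supply is flat or
# declining AND EROI keeps falling, net energy falls much faster than gross.
# The trajectory below is purely illustrative.

years = [2020, 2030, 2040, 2050]
gross = [100,  100,   95,   85]    # gross fossil energy, arbitrary units
eroi  = [18,   12,     8,    5]    # declining EROI over the same period

for y, g, r in zip(years, gross, eroi):
    net = g * (1 - 1 / r)
    print(f"{y}: gross {g:>3}, EROI {r:>2}:1, net {net:5.1f}")
# In this illustration gross supply falls 15% over the period,
# while net energy falls by roughly 28%.
```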

The situation would be improved if the main renewables could provide an additional source of high net energy (i.e EROI). But, while this question is the subject of much current scholarly debate, and is quite unsettled, it seems highly likely that any future 100% renewable energy system (as opposed to individual technology) will provide far less net-energy than humanity — or at least, the minority of us in the energy rich affluent regions — has enjoyed during the fossil fuel epoch. This is for the following theoretical reasons outlined by energy experts Moriarty and Honnery in a recent paper:

  • Due to the more energy diffuse nature of renewable energy flows (sun and wind), harvesting this energy to produce electricity, requires the construction of complex industrial technologies. Currently, this requires the ‘hidden subsidy’ of fossil fuels, which are involved in the entire process of resource extraction, manufacturing and maintenance of these industrial technologies. As fossil fuels deplete, this subsidy will become costlier in both financial and energy terms, reducing the net-energy of renewable technologies.
  • The non-renewable resources (often rare) needed for construction of renewable technologies will deplete over time, and will thus take more energy to extract, again, reducing net energy.
  • Due to the intermittency of solar and wind, a 100% renewable energy system (or even a large portion of renewable energy within the overall mix) requires investment in either large amounts of redundant capacity (to ensure security of supply during calm and cloudy weather) or large amounts of storage capacity, which is currently unavailable at the scale needed — or both. Either option will require additional energy investment for the total system.
  • Because the main renewable technologies generate electricity, a large amount of energy will be lost through conversion (e.g. via hydrogen) for the many current energy functions that cannot easily be electrified (e.g. trucks, industrial heating processes etc.). In fairness, the conversion of fossil fuels to electricity also involves substantial energy loss (about 2/3 on average), but given that about 80% of global primary energy is currently consumed in a non-electrical form, this appears to be a far bigger problem for a future 100% renewable system.
  • As renewable energy capacity expands, it will inevitably have to be built in less ideal locations, reducing gross energy yield.

Axiom 4: Regardless of the net energy that a future 100% renewable energy system would provide, it is important to recognize that attempts to ramp up renewable energy at very fast rates — far from adding to the overall energy output of the global economy — will inevitably come at a net energy cost.

This is because there would need to be a dramatic increase in energy demand associated with the transitional process itself.

Modelling done by Josh Floyd has found that in their ‘baseline scenario’ (described here) — which looks to phase out fossil fuels in 50 years — net energy services for the global economy would decline during that transition period by more than 15% before recovering.

This would be true of any rapid energy transition, but the problem is particularly acute for a transition to renewable technologies due to their much higher upfront capital (and therefore energy) costs, compared to fossil fuel technologies.

Conclusion

The implication of the above arguments is that over the coming decades, the global economy will very likely face an increasing deterioration in net energy supply that will increasingly choke off economic growth. What will this look like for people in real life?

Economically, it will likely be revealed in terms of stagnating (or falling) real wages, rising costs of living, decreasing discretionary income and decreasing employment opportunities — symptoms, as Tim Morgan argues, we are already beginning to see, albeit, to varying extents across the globe — but which will intensify in coming years.

How slow or fast this happens nobody knows. But given capitalism is a system which absolutely depends on endless capital accumulation for its effective economic functioning and social legitimacy, this will prove to be a terminal crisis, from which the system cannot ultimately escape.

We therefore have no choice but to prepare for a future economy in which net energy is far lower than what we have been used to in the industrial era.

Insight: To be clear, crisis by itself, will not lead to desirable outcomes — far from it. Our collective fate, as Trainer explains, depends largely on the rapid emergence of currently small scale new society movements — building examples of the sane alternative in the shell of the old — and rapidly multiplying and scaling up, as the legitimacy of the system declines.


Jonathan Rutherford is coordinator of the New International Bookshop, Melbourne, Australia. He is involved in various local sustainability projects where he lives in Belgrave.