Paris, climate and surrealism

27 07 2017

Speaker: Prof. Kevin Anderson, Professor of energy and climate change

Title: Paris, climate and surrealism: how numbers reveal an alternate reality

The Paris Agreement’s inclusion of “well below 2°C” and “pursue … 1.5°C” has catalysed fervent activity amongst many within the scientific community keen to understand what this more ambitious objective implies for mitigation. However, this activity has demonstrated little in the way of plurality of responses. Instead there remains an almost exclusive focus on how future ‘negative emissions technologies’ (NETs) may offer a beguiling and almost free “get out of jail free” card.
This presentation argues that such a dominant focus reveals an endemic bias across much of the academic climate change community determined to voice a politically palatable framing of the mitigation landscape – almost regardless of scientific credibility. The inclusion of carbon budgets within the IPCC’s latest report reveals just how few years remain within which to meet even the “well below 2°C” objective.

Making optimistic assumptions on the rapid cessation of deforestation and the uptake of carbon capture technologies in cement and steel production reveals an urgent need to accelerate the transformation of the energy system away from fossil fuels by the mid-2030s in the wealthier nations and by 2050 globally. To put this in context, the national mitigation pledges submitted at Paris see an ongoing rise in emissions until 2030 and are not scheduled to undergo major review until 2023 – eight years, or 300 billion tonnes of CO2, after the Paris Agreement.
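As a rough sanity check on that 300-billion-tonne figure, the arithmetic is simple. The annual emissions rate below is an illustrative round number close to recent global estimates, not a figure quoted in the talk:

```python
# Rough sanity check on "300 billion tonnes of CO2 in eight years".
# Assumes global CO2 emissions of roughly 37 Gt per year (an illustrative
# round number, not a figure from the presentation).
annual_emissions_gt = 37          # Gt CO2 per year (assumption)
years_until_review = 2023 - 2015  # Paris Agreement (2015) to first major review

total_gt = annual_emissions_gt * years_until_review
print(total_gt)  # 296 Gt, i.e. roughly 300 billion tonnes
```

At current rates, in other words, the period between signing and first review consumes a large slice of the remaining carbon budget.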

Despite the enormity and urgency of the 1.5°C and “well below 2°C” mitigation challenge, the academic community has barely considered delivering deep and early reductions in emissions through the rapid penetration of existing end-use technologies and profound social change. At best it dismisses such options as too expensive compared to the discounted future costs of a technology that does not yet exist. At worst, it has simply been unprepared to countenance approaches that risk destabilising the political hegemony.

Ignoring such sensibilities, the presentation concludes with a draft vision of what an alternative mitigation agenda may comprise.

Reading The News On America Should Scare Everyone, Every Day… But It Doesn’t

22 07 2017

Whilst this is Amero-centric, make no mistake, it also applies to Australia in bucket loads…

Authored by Raul Ilargi Meijer via The Automatic Earth blog,

Reading the news on America should scare everyone, and every day, but it doesn’t. We’re immune, largely. Take this morning. The US Republican party can’t get its healthcare plan through the Senate. And they apparently don’t want to be seen working with the Democrats on a plan either. Or is that the other way around? You’d think if these people realize they were elected to represent the interests of their voters, they could get together and hammer out a single payer plan that is cheaper than anything they’ve managed so far. But they’re all in the pockets of so many sponsors and lobbyists they can’t really move anymore, or risk growing a conscience. Or a pair.

What we’re witnessing is the demise of the American political system, in real time. We just don’t know it. Actually, we’re witnessing the downfall of the entire western system. And it turns out the media are an integral part of that system. The reason we’re seeing it happen now is that although the narratives and memes emanating from both politics and the press point to economic recovery and a future full of hope and technological solutions to all our problems, people are not buying the memes anymore. And the people are right.

Tyler Durden ran a Credit Suisse graph overnight that should give everyone a heart attack, or something in that order. It shows that nobody’s buying stocks anymore, other than the companies who issue them. They use ultra-cheap leveraged loans to make it look like they’re doing fine. Instead of using the money/credit to invest in, well, anything, really. You can be a successful US/European company these days just by purchasing your own shares. How long for, you ask?

There Has Been Just One Buyer Of Stocks Since The Financial Crisis

 As CS’ strategist Andrew Garthwaite writes, “one of the major features of the US equity market since the low in 2009 is that the US corporate sector has bought 18% of market cap, while institutions have sold 7% of market cap.” What this means is that since the financial crisis, there has been only one buyer of stock: the companies themselves, who have engaged in the greatest debt-funded buyback spree in history.


Why this rush by companies to buy back their own stock, and in the process artificially boost their Earnings per Share? There is one very simple reason: as Reuters explained some time ago, “Stock buybacks enrich the bosses even when business sags.” And since bond investors are falling over themselves to fund these buyback plans with “yielding” paper at a time when central banks have eliminated risk, who is to fault them?

More concerning than the unprecedented coordinated buybacks, however, is the combination of relentless selling by institutions and the persistent unwillingness of “households” to put any new money into the market, which suggests that the financial crisis has left an entire generation of investors scarred with “crash” PTSD: no matter what the market does, they will simply not put any further capital at risk.

So that’s your stock markets. Let’s call it bubble no.1. Another effect of ultra low rates has been the surge in housing bubbles across the western world and into China. But not everything looks as rosy as the voices claim who wish to insist there is no bubble in [inject favorite location] because of [inject rich Chinese]. You’d better get lots of those Chinese swimming in monopoly money over to your location, because your own younger people will not be buying. Says none other than the New York Fed.

Student Debt Is a Major Reason Millennials Aren’t Buying Homes

 College tuition hikes and the resulting increase in student debt burdens in recent years have caused a significant drop in homeownership among young Americans, according to new research by the Federal Reserve Bank of New York. The study is the first to quantify the impact of the recent and significant rise in college-related borrowing—student debt has doubled since 2009 to more than $1.4 trillion—on the decline in homeownership among Americans ages 28 to 30. The news has negative implications for local economies where debt loads have swelled and workers’ paychecks aren’t big enough to counter the impact. Homebuying typically leads to additional spending—on furniture, and gardening equipment, and repairs—so the drop is likely affecting the economy in other ways.

As much as 35% of the decline in young American homeownership from 2007 to 2015 is due to higher student debt loads, the researchers estimate. The study looked at all 28- to 30-year-olds, regardless of whether they pursued higher education, suggesting that the fall in homeownership among college-goers is likely even greater (close to half of young Americans never attend college). Had tuition stayed at 2001 levels, the New York Fed paper suggests, about 360,000 additional young Americans would’ve owned a home in 2015, bringing the total to roughly 2.9 million 28- to 30-year-old homeowners. The estimate doesn’t include younger or older millennials, who presumably have also been affected by rising tuition and greater student debt levels.

Young Americans -and Brits, Dutch etc.- get out of school with much higher debt levels than previous generations, but land in jobs that pay them much less. Ergo, at current price levels they can’t afford anything other than perhaps a tiny house. Which is fine in and of itself, but who’s going to buy the existing McMansions? Nobody but the Chinese. How many of them would you like to move in? And that’s not all. Another fine report from Lance Roberts, with more excellent graphs, puts the finger where it hurts, and then twists it around in the wound a bit more:

People Buy Payments –Not Houses- & Why Rates Can’t Rise

Over the last 30 years, a big driver of home prices has been the unabated decline of interest rates. When declining interest rates were combined with lax lending standards, home prices soared off the chart. No money down, ultra-low interest rates and easy qualification gave individuals the ability to buy much more home for their money. The problem, however, is shown below. There is a LIMIT to how much of a family’s disposable personal income the monthly payment can consume.


In 1968 the average American family maintained a mortgage payment, as a percent of real disposable personal income (DPI), of about 7%. Back then, in order to buy a home, you were required to have skin in the game with a 20% down payment. Today, assuming that an individual puts down 20% for a house, their mortgage payment would consume more than 23% of real DPI. In reality, since many of the mortgages written over the last decade required little or no money down, that number is actually substantially higher. You get the point. With real disposable incomes stagnant, a rise in interest rates and inflation makes that 23% of the budget much harder to sustain.



In 1968 Americans paid 7% of their disposable income for a house. Today that’s 23%. That’s as scary as that first graph above on the stock markets. It’s hard to say where the eventual peak will be, but it should be clear that it can’t be too far off. And Yellen and Draghi and Carney are talking about raising those rates.
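The sensitivity of that payment share to rates falls out of the standard fixed-rate amortization formula. Here is a minimal sketch; the house price, down payment and income figures below are invented for illustration, not taken from Roberts’ data:

```python
# Illustrative amortization arithmetic showing why the payment share of
# income is so sensitive to rates. All dollar figures are assumptions
# chosen for the example, not data from the article.

def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-rate mortgage payment formula."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

principal = 280_000               # $350k house, 20% down (assumed)
monthly_income = 5_000            # monthly disposable income (assumed)

for rate in (0.04, 0.05, 0.06):
    pay = monthly_payment(principal, rate)
    print(f"{rate:.0%}: payment ${pay:,.0f} = {pay / monthly_income:.1%} of income")
```

Even a two-point rise in rates pushes the payment share of income up by several percentage points, which is the squeeze the post is describing.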

What Lance is warning about, as should be obvious, is that if rates were to go up at this particular point in time, far fewer people could afford a home. If you ask me, that would not be so bad, since they grossly overpay right now – they pay full-throttle bubble prices – but the effect could be monstrous. Because not only would a lot of people be left with a lot of mortgage debt, and we’d go through the whole jingle mail circus again, yada yada, but the economy’s main source of ‘money’ would come under great pressure.

Let’s not forget that by far most of our ‘money’ is created when private banks issue loans to their customers with nothing but thin air and keyboard strokes. Mortgages are the largest of these loans. Sink the housing industry and what do you think will happen to the money supply? And since the price level is tied to money supply times money velocity (the old equation of exchange, MV = PQ), what would become of central banks’ inflation targets? May I make a bold suggestion? Get someone a lot smarter than Janet Yellen into the Fed, on the double. Or, alternatively, audit and close the whole house of shame.
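For what it’s worth, the relationship being gestured at here is the equation of exchange, MV = PQ: money supply times velocity equals price level times real output. A toy illustration, with all numbers invented:

```python
# Toy illustration of the equation of exchange, M * V = P * Q.
# If a housing bust shrinks the money supply while velocity and real
# output hold steady, the implied price level falls. All numbers here
# are invented for the sake of the example.

def price_level(money_supply, velocity, real_output):
    return money_supply * velocity / real_output

before = price_level(money_supply=100, velocity=1.5, real_output=150)
after = price_level(money_supply=90, velocity=1.5, real_output=150)

print(before, after)  # 1.0 0.9 -- a 10% smaller money supply implies deflation
```

That deflationary pull is exactly what central banks chasing a 2% inflation target would be fighting against.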

We’ve had bubbles 1, 2 and 3. Stocks, student debt and housing. Which, it turns out, interact, and a lot.

An interaction that leads seamlessly to bubble 4: subprime car loans. Mind you, don’t stare too much at the size of the bubbles, of course stocks and housing are much bigger issues, but focus instead on how they work together. As for the subprime car loans, and the subprime used car loans, it’s the similarity to the subprime housing that stands out. Like we learned nothing. Like the US has no regulators at all.

Fears Mount Over a New US Subprime Boom – Cars

It’s classic subprime: hasty loans, rapid defaults, and, at times, outright fraud. Only this isn’t the U.S. housing market circa 2007. It’s the U.S. auto industry circa 2017. A decade after the mortgage debacle, the financial industry has embraced another type of subprime debt: auto loans. And, like last time, the risks are spreading as they’re bundled into securities for investors worldwide. Subprime car loans have been around for ages, and no one is suggesting they’ll unleash the next crisis.

 But since the Great Recession, business has exploded. In 2009, $2.5 billion of new subprime auto bonds were sold. In 2016, $26 billion were, topping average pre-crisis levels, according to Wells Fargo. Few things capture this phenomenon like the partnership between Fiat Chrysler and Banco Santander. [..] Santander recently vetted incomes on fewer than one out of every 10 loans packaged into $1 billion of bonds, according to Moody’s.

If it’s alright with you, we’ll deal with the other main bubble, no.5 if you will, another time. Yeah, that would be bonds. Sovereign, corporate, junk, you name it.

The 4 bubbles we’ve seen so far are more than enough to create a huge crisis in America. Don’t want to scare you too much all at once. Just you read the news again tomorrow. There’ll be more. And the US Senate is not going to do a thing about it. They’re too busy not getting enough votes for other things.

Our Aversion to Doom and Gloom Is Dooming Us

20 07 2017

Reproduced from Commondreams.

I worked for over 35 years in the environmental field, and one of the central debates I encountered was whether to “tell it like it is,” and risk spreading doom and gloom, or to focus on a more optimistic message, even when optimism wasn’t necessarily warranted.

The optimists nearly always won this debate. For the record, I was—and am—a doom and gloomer.  Actually, I like to think I’m a realist. I believe that understating the problems we face leads to understated—and inadequate responses.  I also believe that people, when dealt with honestly, have responded magnificently, and will do so again, if and when called. Witness World War II, for example, when Churchill told the Brits, “I have nothing to offer but blood, toil, tears, and sweat.” In those words, he helped ignite one of the most noble and dedicated periods of unity and resistance in all the annals of human endeavor.

Finally, I believe that the principles of risk management dictate that when the consequences of our actions —or our inactions—are pervasive, long lasting, irreversible and potentially devastating, we should assume worst-case outcomes.  That’s why people get health insurance; it’s why they purchase insurance for their homes; it’s why they get life insurance. No one assumes they’ll get sick, that their house will burn down, or that they’re about to die, but it makes sense to hedge against these events.  It’s why we build in huge margins of safety when we design bridges or airplanes. You can’t undo an airplane crash, or reverse a bridge failure.

And you can’t restore a livable climate once it’s been compromised.  Not in anything other than geologic timeframes.

Yet we routinely understate the threat that climate change poses, and reject attempts to characterize the full extent of the catastrophe it could bring. And it’s killing us.

David Wallace-Wells’ recent article in New York magazine, The Uninhabitable Earth, is a case in point. It was an attempt to describe the worst-case scenario for climate change. Here are the opening sentences, to give you an idea of what Mr. Wallace-Wells had to say:

It is, I promise, worse than you think. If your anxiety about global warming is dominated by fears of sea-level rise, you are barely scratching the surface of what terrors are possible, even within the lifetime of a teenager today. 

Predictably, a large part of the scientific community reacted with hostility, and environmentalists were essentially silent. For example, Climate Feedback published a critique of Wallace-Wells’ article by sixteen climate scientists, led by Michael Mann, originator of the famous hockey stick graph, which showed how rapidly the Earth is warming. Here’s part of what Dr. Mann had to say:

The evidence that climate change is a serious problem that we must contend with now, is overwhelming on its own. There is no need to overstate the evidence, particularly when it feeds a paralyzing narrative of doom and hopelessness.

The last part of Dr. Mann’s statement may explain the real reason the environmental and scientific communities reacted so hostilely to Wallace-Wells’ article, and why they generally avoid gloom and doom even when the news is gloomy—the notion that presenting information detailing just how bad climate change could be leads to “paralysis.”

This, together with scientists’ tendency to stick to the most defensible positions and the scenarios that are accepted by the mainstream—what climate scientist James Hansen calls dangerous scientific reticence—probably explain why the scientific community has tended to understate the threat of climate change, although few would describe Dr. Mann as reticent.

And it should be noted that Mr. Wallace-Wells did overstate some of the science. For example, given our current understanding of methane and carbon releases from permafrost, it appears as though they would take much longer to play out than Wallace-Wells suggested, although they likely would add as much as 2°C to projected warming by 2100. But for the most part, he simply took worst-case forecasts and used them. As Dr. Benjamin Horton—one of the scientists commenting on the Wallace-Wells article—put it, “Most statements in the article are based on peer-reviewed literature.”

One of the reasons worst-case projections seem so dire is that the scientific community—and especially the IPCC—has been loath to use them. For the record, comparisons of previous forecasts with actual changes show a trend that is nearer to—or worse than—the worst-case forecasts than it is to the mid-range.

The article also forecast some of the social, demographic, and security consequences of climate change that can’t be scientifically verified, but which comport with projections made by our own national security experts.

For example, in this year’s Worldwide Threat Assessment of the US Intelligence Community, climate change was identified as a “threat multiplier” and Dan Coats, Director of National Intelligence, said in testimony presented to the Senate Select Committee on Intelligence in May of this year:

Climate change influences the entire geostrategic landscape. In that sense, one could walk through the entire threat assessment report and identify ways in which climate change will intersect with nearly every risk identified, and in most cases, make them worse.

Director Coats specifically highlighted health security, terrorism and nuclear proliferation as threats that climate change would exacerbate. This is coming from the Trump administration, which has been censoring climate-related information coming out of NOAA and EPA.  It’s a measure of how seriously the national security community takes the threat of climate change that they fought to keep the issue above the political fray.

Yet here again, the scientific community took issue with these claims, because they were conjecture.  Never mind that those whose job it is to assess these kinds of risks found the forecasts likely and actionable. Scientists want data and the certainty it brings, not extrapolation.

So what’s the gap between future worst-case and the more typically used mid-range projections the media and scientists favor?  It’s huge, and consequential.  I’ve pointed out some of the risky—if not absurd—assumptions  underlying the Paris Agreement in the past, but let’s briefly outline some numbers that highlight the difference between what’s typically discussed in the media, with projections based on worst-case—but entirely plausible—forecasts.

After Paris, there was a lot of attention paid to two targets: a limit of less than 2°C warming, and a more aggressive limit of no more than 1.5°C warming. What was less well known and discussed was the fact that the Agreement would only have limited warming to about 3.5°C by 2100, using the IPCC’s somewhat optimistic assumptions.

What is virtually unknown by most of the public and undiscussed by scientists and the media is that even before the US dropped out of the Treaty, the worst-case temperature increase under the Treaty could have been nearly twice that.

Here’s why.

As noted, the 3.5°C figure had a number of conservative assumptions built into it, including the fact that there is a 34 percent chance that warming will exceed that, and the idea that we could pass on the problem to our children and their children by assuming that they would create an as yet unknown technology that would extract massive amounts of carbon from the atmosphere in a cost-effective way, and safely and permanently sequester it, thus allowing us to exceed the targets for a limited amount of time.

But the fact is, some projections found that temperature increase resulting from meeting the Paris targets would exceed 4°C by 2100, even if we continued to make modest progress after meeting them – something the Treaty doesn’t require. The IPCC forecasts also ignored feedbacks, and research shows that just 3 of these will add another 2.5°C of warming by 2100, bringing the total to more than 6.5°C (or nearly 12°F). At this point, we’re talking about trying to live on an essentially alien planet.
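The Celsius-to-Fahrenheit conversion in that total is easy to verify. Note that a temperature *change* converts by the 9/5 factor alone, without the +32 offset used for absolute temperatures:

```python
# Checking the degrees-C to degrees-F arithmetic in the paragraph above.
paris_worst_case_c = 4.0   # worst-case warming under Paris pledges (from the text)
feedbacks_c = 2.5          # additional warming from three feedbacks (from the text)

total_c = paris_worst_case_c + feedbacks_c
total_f = total_c * 9 / 5  # a temperature change converts without the +32 offset
print(total_c, round(total_f, 1))  # 6.5 11.7 -- i.e. "nearly 12 degrees F"
```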

Finally, there’s evidence that the Earth’s natural sinks are being compromised by the warming that’s happened so far, and this means that more of what we emit will remain in the atmosphere, causing it to warm much more than the IPCC models have forecast. This could (not would) make Wallace-Wells’ thesis not only plausible, but likely.

But rather than discussing these entirely plausible forecasts, the media, environmentalists and too many scientists, would rather focus on a more optimistic message, and avoid “doom and gloom.”

What they’re actually doing is tantamount to playing Russian Roulette with our children’s future with two bullets in the chamber. Yes, the odds are that it won’t go off, but is this the kind of risk we should be taking with our progeny’s future?

There is something paternalistic and elitist about this desire to spare the poor ignorant masses the gory details. It is condescending at best, self-defeating at worst. After all, if the full nature of the challenge we face is not known, we cannot expect people to take the measures needed to meet it.

I believe now, and I have always believed, that humans are possessed with an inherent wisdom, and that, given the right information, they will make the right choices.

As an aside, Trump is now President because the Democrats followed the elitist and paternalistic path of not trusting the people – that and their decision to put corporate interests above the interests of citizens.

Watching Sanders stump against the Republicans’ immoral tax cut for the rich disguised as a health care bill shows the power of a little honest doom and gloom.

We could use a lot more of it across the political spectrum.

John Atcheson

John Atcheson is author of the novel, A Being Darkly Wise, and he has just completed a book on the 2016 elections titled, WTF, America? How the US Went Off the Rails and How to Get It Back on Track. It is available in hardcover now, and the ebook will be available shortly. Follow him on Twitter:@john_atcheson

How an obscure Austrian philosopher saw through our empty rhetoric about ‘sustainability’

5 07 2017

Hot Mess

Marc Hudson, University of Manchester

“Sustainability” is, ironically, a growth industry. Ever since the term “sustainable development” burst onto the scene in 1987 with the release of Our Common Future (also known as the Brundtland report), there has been a dizzying increase in rhetoric about humanity’s relationship with our planet’s resources. Glossy reports – often featuring blonde children in front of solar panels or wind turbines – abound, and are slapped down on desks as proof of responsibility and stewardship.

Every few years a new term is thrown into the mix – usually preceded by adjectives like “participatory” or “community-led”. The fashionability of “resilience” as a mot du jour seems to have peaked, while more recently the “circular economy” has become the trendy term to put on grant applications, conference notices and journal special editions. Over time journals are established, careers are built, and library shelves groan.

Meanwhile, the planetary “overshoot”, to borrow the title of a terrifying 1980 book, goes on – exemplified by rising concentrations of atmospheric carbon dioxide, warmer oceans, Arctic melting, and other signs of the times.

With all this ink being spilled (or, more sustainably, electrons being pressed into service), is there anything new to say about sustainability? My colleagues and I think so.

Three of us (lead author Ulrike Ehgartner, second author Patrick Gould and myself) recently published an article called “On the obsolescence of human beings in sustainable development”.

In it we explore the big questions of sustainability, drawing on some of the work of an unjustly obscure Austrian political philosopher called Günther Anders.

Who was Günther Anders?

He was born Günther Siegmund Stern in 1902. While he was working as a journalist in Berlin, an editor wanted to reduce the number of Jewish-sounding bylines. Stern plumped for “Anders” (meaning “other” or “different”) and used that nom de plume for the rest of his life.

Anders knew lots of the big philosophical names of the day. He studied under Edmund Husserl and Martin Heidegger. He was briefly married to Hannah Arendt, and Walter Benjamin was a cousin.

But despite his stellar list of friends and family, Anders himself was not well known. Harold Marcuse points out that the name “Stern” was pretty apt, writing:

His unsparingly critical pessimism may explain why his pathbreaking works have seldom sparked sustained public discussion.

While Hiroshima and the nuclear threat were the most obvious influences on Anders’ writing, he was also crucially influenced by the events at Auschwitz, the Vietnam War, and his periods in exile in France and the United States. But why should we care, and how can his ideas be applied to modern-day ideas about sustainability?

Space precludes a blow-by-blow account of what my colleagues and I wrote, but two ideas are worth exploring: the “Promethean gap” and “apocalyptic blindness”.

Anders suggested that the societal changes wrought by the industrial age – chief among them the division of labour – opened a gap between individuals’ capability to produce machines, and their capability to imagine and deal with the consequences.

So, riffing on the Greek myth of Prometheus (the chap who stole fire from Mount Olympus and gave it to humans), Anders proposed the existence of a “Promethean gap” which manifests in academic and scientific thinking and leads to the extensive trivialisation of societal issues.

The second idea is that of “apocalyptic blindness” – which is, according to Anders, the mindset of humans in the Age of the Third Industrial Revolution. This, as we write in our paper:

…determines a notion of time and future that renders human beings incapable of facing the possibility of a bad end to their history. The belief in progress, persistently ingrained since the Industrial Revolution, causes the incapability of humans to understand that their existence is threatened, and that this could lead to the end of their history.

Put simply, we don’t want to look an apocalypse in the eye, even if it’s heading straight towards us.

The climate connection

“So what?” you might ask. Why listen to yet another obscure philosopher railing about technology, in the vein of Lewis Mumford and Jacques Ellul? But I think a passing knowledge of Anders and his work reminds us of several important things.

Recently, the very notion of ‘progress’ has come under renewed assault, with books questioning our assumptions about it. This is nothing new, of course – in a 1967 short story collection about life at the United Nations, Shirley Hazzard had written:

About this development process there appeared to be no half-measures: once a country had admitted its backwardness, it could hope for no quarter in the matter of improvement. It could not accept a box of pills without accepting, in principle, an atomic reactor. Progress was a draught that must be drained to the last bitter drop.

The time – if ever there was one – for tinkering around the edges is over. We need to take stronger action than simply pursuing our feelgood preoccupation with sustainability.

This raises the question of who is supposed to shift us from the current course (or rather, multiple collision courses). That’s a difficult one to answer.

The hope that techno-fixes (including 100% renewable energy) will sort out our problems is a dangerous delusion (please note, I’m not against 100% renewables – I’m just saying that green energy is “necessary but not sufficient” for repairing the planet).

Similarly, the “circular economy” has a rather circular feeling to it – in the sense that we’ve seen all this before. It seems (to me anyway) to be the last gasp of the “ecological modernist” belief that with a bit more efficiency, everything can simply keep on progressing.

Our problems go far deeper. We are going to need a rapid and fundamental shift in our values, habits, behaviours, and outlooks. Put in Anders’ terms, we need to stop being blind to the possibility of apocalypse. But then again, people have been saying that for a century or more.

Marc Hudson, PhD Candidate, Sustainable Consumption Institute, University of Manchester

This article was originally published on The Conversation. Read the original article.

Latest Arctic Sea Ice Data

29 06 2017


Mark Cochrane

Another year of low ice cover in the Arctic. So what’s new? Few know about this and fewer care. The decline has been going on so long that we fail to be shocked anymore. In the graph below the gray area is where 95% of years should fall. We are well below that area, yet again, about where we were last year. The dashed line is 2012 when we experienced the lowest sea ice cover (in September). Depending on the vagaries of the weather, this year may or may not be the lowest on record but just looking at the area of cover is misleading, since it tells you nothing about the thickness of the ice.

As the ice cover expands in the cold Arctic winter it covers the ocean and traps the heat it contains. This allows the air temperatures to drop very low above the ice. Think of the ice as the covers on your bed. If your covers are thick your body heat stays contained even on a cold night. If you have just a thin sheet you don’t stay quite so comfortable.

In the Arctic, sea ice gets thicker the older it gets as it goes through successive winters. As recently as the 80s, 30% or more of the ice cover was 5+ years old and first year ice was not much different at about 35% of the area. Now, older ice area has been reduced to <5% while first year ice makes up nearly 70% of the area.

Thin ice breaks more easily during Arctic storms and, much like crushed ice in your drinks, melts faster. Open water in the Arctic summer enjoys 24 hours a day of sunlight. Ice reflects most of the incoming heat, but open water absorbs almost all of it. This makes the Arctic Ocean warm more and more year after year, which in turn makes the formation of new ice in the winter harder and harder, delaying it until later in the year, after enough heat has escaped the surface waters. That heat plays havoc with the regional weather in the Arctic. The Polar Vortex is weaker and slower to form, making it more likely that cold Arctic air will spill out in bursts across North America and Europe.

The ‘death spiral’ map shows how sea ice volume is circling the drain that will one day, in the not too distant future, end with an ice-free Arctic summer. How much ice have we lost in the last 4 decades? Comparing April 2017 to April 1979, the reduced volume of Arctic sea ice would be nearly enough to cover the entire combined land area of both Canada and the United States with 1 meter of solid ice.
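That comparison is straightforward volume-over-area arithmetic. In the sketch below, the land areas are rounded public figures, and the ice-volume loss is an assumed round number in the spirit of the post, not actual PIOMAS data:

```python
# Back-of-the-envelope check of the "1 metre of ice over Canada plus the
# US" comparison. Land areas are rounded public figures; the ice-volume
# loss is an assumed round number, not the PIOMAS data the post refers to.
canada_km2 = 9.98e6
usa_km2 = 9.83e6
combined_km2 = canada_km2 + usa_km2   # roughly 19.8 million square km

ice_loss_km3 = 19_000                 # assumed April 1979 vs April 2017 loss

# Volume divided by area gives the equivalent uniform thickness.
thickness_m = ice_loss_km3 / combined_km2 * 1000  # convert km to m
print(round(thickness_m, 2))  # 0.96 -- i.e. nearly 1 metre of ice
```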

Alas, the only thing poorer than the human race’s ability to understand the exponential function and large numbers is its grasp of geography…


The green car myth

28 06 2017

How government subsidies make the white elephant on your driveway look sustainable

And this comes on top of an article describing how merely manufacturing an electric car's battery pack produces emissions equivalent to eight years of conventional happy motoring.

I have written before about the problems with bright green environmentalism. Bright greens suggest that various technological innovations will serve to reduce carbon dioxide emissions enough to avoid catastrophic global warming and other environmental problems. There are a variety of practical problems that I outlined there, including the fact that most of our economic activities are hitting physical limits to energy efficiency.

The solution lies in accepting that we cannot continue to expand our economies indefinitely without catastrophic consequences. In fact, catastrophic consequences are in all likelihood already unavoidable, if we believe the warnings of prominent climatologists who claim that a two degree temperature increase is sufficient to cause significant global problems.

It's easy to be deceived, however, and to assume that we are in the midst of a transition towards sustainable green technologies. The problem with most green technologies is that although their implementation on a limited scale is affordable, they lack the scalability needed to enable a transition away from fossil fuels.

Part of the reason for this limited scalability is that users of "green" technology receive subsidies and do not pay certain costs which users of "grey" technology have to shoulder as a result. As an example, the Netherlands, Norway and many other nations waive a variety of taxes for green cars, taxes that are used to maintain the network of roads that these cars use. As the share of green cars rises, grey cars will be forced to shoulder ever higher costs to pay for the maintenance of the road network.

It’s inevitable that these subsidies will be phased out. The idea of course is that after providing an initial gentle push, the transition towards more green driving will have reached critical mass and prove itself sustainable without any further government subsidies. Unfortunately, that’s unlikely to occur. We’ve seen a case study of what happens when subsidies for green technologies are phased out in Germany. After 2011, the exponential growth in solar capacity rapidly came to a stop, as new installs started to drop. By 2014, solar capacity in Germany had effectively stabilized.1 Peak capacity of solar is now impressively high, but the amount of solar energy produced varies significantly from day to day. On bad days, solar and wind hardly contribute anything to the electricity grid.

Which brings us to the subject of today's essay: the green car. The green car has managed to hide its enormous price tag behind a variety of subsidies, dodged taxes and externalities imposed upon the rest of society. Let us start with the externalities. Plug-in cars put significant strain on the electrical grid. These are costs that owners of such cars don't pay themselves. Rather, power companies are forced to incur costs upgrading their grids to avoid the risk of blackouts, costs that are then passed on to all of us.

When it comes to the subsidies that companies receive to develop green cars, it's important not to look only at the companies that are around today. Doing so is what is called survivorship bias: we focus on those who succeeded and conclude that their choices were sound. Everyone knows about the man who became a billionaire by developing Minecraft. As a result, there are droves of indie developers out there hoping to produce the next big game. In reality, most of them earn less than $500 a year from sales.2

Everyone has heard of Tesla or of Toyota's Prius. Nobody hears of the manufacturers who failed and went bankrupt. They incurred costs too, costs that were often passed on to investors or to governments. Who remembers Vehicle Production Group, or Fisker Automotive? These are companies that were handed 193 million and 50 million dollars in loans respectively by the US Federal government, money the government won't see again because the companies went bankrupt.3 That leaves only three of the car manufacturers that received such federal loans still in business.

To make matters worse, we don't just subsidize green car manufacturers. We subsidize just about the entire production chain that ultimately leads to a green car on your driveway. Part of the reason Fisker Automotive got in trouble was that its battery manufacturer, A123 Systems, declared bankruptcy. A123 Systems went bankrupt in 2012, but not before raising 380 million dollars from investors in 2009 and receiving a 249 million dollar grant from the U.S. Department of Energy back in 2010.

Which brings us to a de facto subsidy that affects not just green cars but other unsustainable projects as well: central bank policies. When interest rates are low, investors have to search for yield, and they tend to end up in risky ventures that may or may not pay off. Examples are the many shale companies on the edge of bankruptcy today. This could have been anticipated, but the current financial climate leaves investors with little choice but to invest in such risky ventures. This doesn't just enable the growth of a phenomenon like the shale oil industry; it affects green car companies as well. Would investors have poured their money into A123 Systems if it weren't for central bank policies? Many might have looked at safer alternatives.

One company that has benefited enormously from these policies is Tesla. In 2008, Tesla applied for a 465 million dollar loan from the Federal government. This allowed Tesla to produce its car, which in turn allowed it to raise 226 million dollars in an IPO in June 2010, collecting cash from investors willing to back risky ventures as a result of central bank policies. A $7,500 tax credit then encouraged sales of Tesla's Model S, which, in combination with the money raised in the IPO, allowed Tesla to pay off its loan early.

In 2013, Tesla announced that it had made an 11 million dollar profit. Stock prices went through the roof, as apparently it had succeeded at the daunting task of making green cars economically viable. In reality, Tesla made 68 million dollars that year selling its emission credits to other car companies, without which it would have made a loss.

Tesla in fact receives $35,000 in clean air credits for every Model S it sells to customers, which in total was estimated to amount to 250 million dollars in 2013.4 To put these numbers in perspective, a Model S can cost around $70,000, so if that $35,000 were passed on to the customer, prices would rise by about 50%, not including whatever sales tax applies when purchasing the car.

We can add to all of this the 1.2 billion of subsidy in the form of tax exemptions and reduced electricity rates that Tesla receives for its battery factory in Nevada.5 The story gets even better when we arrive at green cars sold to Europe, where we find the practice of “subsidy stacking”. The Netherlands exempts green cars from a variety of taxes normally paid upon purchase. These cars are then exported to countries like Norway, where green cars don’t have to pay toll and are allowed to drive on bus lanes.6

For freelancers in the Netherlands, subsidies for electric cars have reached an extraordinarily high level. Without the various subsidies the Dutch government created as incentives to drive an electric car, a Tesla Model S would cost 94,010 euros, a figure that would be even higher, of course, if Dutch consumers had to pay for the various subsidies that Tesla receives in the United States. After the various subsidies the Dutch government provides for freelance workers, Dutch consumers can acquire a Tesla Model S for just 25,059 euros.7
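A back-of-envelope sketch of the subsidy arithmetic quoted above; all figures come from the text itself and are not independently verified:

```python
# US: ~$35,000 in clean-air credits per Model S sold at ~$70,000 (figures from the text).
us_credit, us_price = 35_000, 70_000
print(f"US credit as share of sticker price: {us_credit / us_price:.0%}")  # 50%

# Netherlands (freelancers): ~94,010 EUR unsubsidised vs ~25,059 EUR after subsidies.
nl_full, nl_subsidised = 94_010, 25_059
discount = 1 - nl_subsidised / nl_full
print(f"Dutch subsidies cover about {discount:.0%} of the car's cost")  # ~73%
```

On these numbers, roughly three quarters of the Dutch sticker price is carried by subsidies before any of the American subsidies are even counted.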

The various subsidies our governments provide are subsidies we all end up paying for in one form or another. What’s clear from all these numbers however is that an electric car is currently nowhere near a state where it could compete with a gasoline powered car in a free unregulated market, on the basis of its own merit.

The image that emerges here is not one of a technology receiving a gentle nudge to help it replace an outdated but culturally entrenched technology. Rather, it is one of a number of private companies competing for a variety of subsidies handed out by governments that seek to plan in advance what future technology will look like, willfully ignorant of whatever effect physical limits might have on determining which technologies are economically viable to sustain and which aren't.

After all, if governments were willing to throw enough subsidies at it, we could see NGOs attempt to solve world hunger using caviar and truffles. It wouldn't be sustainable in the long run, but in the short term it would prove a viable solution to hunger for a significant minority of the world's poorest. There are no physical laws that render such a solution impossible on a small scale; rather, there are economic laws related to scalability that render it impossible.

Similarly, inventing an electric car was never the problem. In 1900, 38% of American cars ran on electricity. The reason the electric car died out back then was that it could not compete with gasoline. Today the problem is how to render it economically viable and able to replace our fossil-fuel-based transportation system without detrimentally affecting our standard of living.

This brings us to the other elephant, the one in our room rather than our driveway. The real problem here is that we wish to sustain a standard of living that was built with cheap natural resources that are no longer here today. Coping with looming oil shortages will mean having to take a step back. The era where every middle class family could afford to have a car is over. Governments would be better off investing in public transport and safe bicycle lanes.

The problem America faces however, is that there are cultural factors that prohibit such a transition. Ownership of a car is seen as a marker of adulthood and the type of car tells us something about a man’s social status. This is an image car manufacturers are of course all too happy to reinforce through advertising. Hence, we find a tragic example of a society that wastes its remaining resources on false solutions to the crisis it faces.

1 – Page 12

2 –

3 –

4 –

5 –

6 –

7 –

The Dynamics of Depletion

27 06 2017

Originally published on the Automatic Earth, this further article on EROEI and resource depletion ties together everything you need to understand about Limits to Growth in one neat package.

Over the years, I have written many articles on the topic of EROEI (Energy Return on Energy Invested); there's a whole chapter on it in the Automatic Earth Primer Guide 2017 that Nicole Foss assembled recently, which contains 17 articles well worth reading.

Since EROEI, and not the price of oil or some new gas find or a set of windmills or solar panels or thorium as the media would lead you to believe, is still the most important energy issue there is, it can't hurt to repeat it once again. Brian Davey wrote this item on his site CredoEconomics; it is part of his book "Credo".

The reason I believe it can't hurt to repeat this is that not nearly enough people understand that in the end everything, the survival of our world and our way of life, comes down to the 'quality' of energy: what we get in return when we drill and pump and build infrastructure. What remains when we subtract all the energy used to 'generate' energy is all that's left.


Nicole Foss

Nicole Foss: Energy is the master resource – the capacity to do work. Our modern society is the result of the enormous energy subsidy we have enjoyed in the form of fossil fuels, specifically fossil fuels with a very high energy profit ratio (EROEI). Energy surplus drove expansion, intensification, and the development of socioeconomic complexity, but now we stand on the edge of the net energy cliff. The surplus energy, beyond that which has to be reinvested in future energy production, is rapidly diminishing.

We would have to greatly increase gross production to make up for reduced energy profit ratio, but production is flat to falling so this is no longer an option. As both gross production and the energy profit ratio fall, the net energy available for all society’s other purposes will fall even more quickly than gross production declines would suggest. Every society rests on a minimum energy profit ratio. The implication of falling below that minimum for industrial society, as we are now poised to do, is that society will be forced to simplify.
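The "net energy cliff" Foss describes follows from simple arithmetic: if the energy profit ratio (EROEI) is r, a fraction 1/r of gross energy must be reinvested in energy production, leaving 1 - 1/r for everything else society does. A minimal sketch (the sample ratios are illustrative; Hall's 15:1 threshold appears later in the piece):

```python
# The 'net energy cliff': the share of gross energy left for society
# falls slowly at high EROEI but collapses as the ratio approaches 1:1.
def net_share(eroei: float) -> float:
    """Fraction of gross energy available after reinvestment in energy production."""
    return 1.0 - 1.0 / eroei

for r in (50, 20, 15, 10, 5, 2, 1):
    print(f"EROEI {r:>2}:1 -> {net_share(r):6.1%} net energy for society")
```

Note how little difference there is between 50:1 (98% net) and 15:1 (about 93%), and how quickly the surplus evaporates below that: at 2:1 half of all gross production goes back into getting energy, and at 1:1 nothing is left.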

A plethora of energy fantasies is making the rounds at the moment. Whether based on unconventional oil and gas or renewables (that are not actually renewable), these are stories we tell ourselves in order to deny that we are facing any kind of future energy scarcity, or that supply could be in any way a concern. They are an attempt to maintain the fiction that our society can continue in its current form, or even increase in complexity. This is a vain attempt to deny the existence of non-negotiable limits to growth. The touted alternatives are not energy sources for our current society, because low EROEI energy sources cannot sustain a society complex enough to produce them.



Using Energy to Extract Energy – The Dynamics of Depletion



Brian Davey

Brian Davey: The "Limits to Growth" study of 1972 was deeply controversial and criticised by many economists. Over 40 years later, it seems remarkably prophetic and on track in its predictions. Below, the crucial concept of Energy Return on Energy Invested is explained, along with the flaws in neoclassical reasoning that EROI highlights.

The continued functioning of the energy system is a "hub interdependency" that has become essential to the management of the increasing complexity of our society. The energy input into the UK economy is about 50 to 70 times as great as what the labour force could generate if working full time only with the power of their muscles, fuelled by food. It is fossil fuels, refined for use in vehicles and motors or converted into electricity, that have created power inputs making possible the multiple roundabout arrangements of a highly complex economy. The other "hub interdependency" is a money and transaction system for exchange, which has to continue to function to keep vast production and trade networks viable. Without payment systems nothing functions.

Yet, as I will show, both types of hub interdependencies could conceivably fail. The smooth running of the energy system is dependent on ample supplies of cheaply available fossil fuels. However, there has been a rising cost of extracting and refining oil, gas and coal. Quite soon there is likely to be an absolute decline in their availability. To this should be added the climatic consequences of burning more carbon based fuels. To make the situation even worse, if the economy gets into difficulty because of rising energy costs then so too will the financial system – which can then have a knock-on consequence for the money system. The two hub interdependencies could break down together.

"Solutions" put forward by the techno-optimists almost always assume growing complexity and new uses for energy with an increased energy cost. But this begs the question, because the problem is the growing cost of energy and its polluting, climate-changing consequences.


The “Limits to Growth” study of 1972 – and its 40 year after evaluation

It was a view similar to this that underpinned the methodology of a famous study from the early 1970s. A group called the Club of Rome decided to commission a group of system scientists at the Massachusetts Institute of Technology to explore how far economic growth would continue to be possible. Their research used a series of computer model runs based on various scenarios of the future. It was published in 1972 and produced an instant storm. Most economists were up in arms that their shibboleth, economic growth, had been challenged. (Meadows, Meadows, Randers, & Behrens III, 1972)

This was because its message was that growth could continue for some time by running down “natural capital” (depletion) and degrading “ecological system services” (pollution) but that it could not go on forever. An analogy would be spending more than one earns. This is possible as long as one has savings to run down, or by running up debts payable in the future. However, a day of reckoning inevitably occurs. The MIT scientists ran a number of computer generated scenarios of the future including a “business as usual” projection, called the “standard run” which hit a global crisis in 2030.

It is now over 40 years since the original Limits to Growth study was published, so it is legitimate to compare what was predicted in 1972 against what actually happened. This has now been done twice by Graham Turner, who works at the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO). Turner did this first with 30 years of data and then with 40. His conclusion is as follows:

The Limits to Growth standard run scenario produced 40 years ago continues to align well with historical data that has been updated in this paper following a 30-year comparison by the author. The scenario results in collapse of the global economy and environment and subsequently, the population. Although the modelled fall in population occurs after about 2030 – with death rates reversing contemporary trends and rising from 2020 onward – the general onset of collapse first appears at about 2015 when per capita industrial output begins a sharp decline. (Turner, 2012)

So what brings about the collapse? In the Limits to Growth model there are essentially two kinds of limiting restraints. On the one hand, limitations on resource inputs (materials and energy). On the other hand, waste/pollution restraints which degrade the ecological system and human society (particularly climate change).

Turner finds that, so far, it is the former rather than the latter that is more important. What happens is that, as resources like fossil fuels deplete, they become more expensive to extract. More industrial output has to be set aside for the extraction process and less industrial output is available for other purposes.

With significant capital subsequently going into resource extraction, there is insufficient capital available to fully replace degrading capital within the industrial sector itself. Consequently, despite heightened industrial activity attempting to satisfy multiple demands from all sectors and the population, actual industrial output per capita begins to fall precipitously from about 2015, while pollution from the industrial activity continues to grow. Inputs produced per capita are reduced in the same way. Similarly, services (e.g., health and education) are not maintained due to insufficient capital and inputs.

Diminishing per capita supply of services and food cause a rise in the death rate from about 2020 (and somewhat lower rise in the birth rate, due to reduced birth control options). The global population therefore falls, at about half a billion per decade, starting at about 2030. Following the collapse, the output of the World3 model for the standard run (figure 1 to figure 3) shows that average living standards for the aggregate population (material wealth, food and services per capita) resemble those of the early 20th century. (Turner, 2012, p. 121)


Energy Return on Energy Invested

A similar analysis has been made by Hall and Klitgaard. They argue that to run a modern society it is necessary that the energy return on energy invested must be at least 15 to 1. To understand why this should be so consider the following diagram from a lecture by Hall. (Hall, 2012)


The diagram illustrates the idea of energy return on energy invested. For every 100 megajoules (MJ) of energy in the oil flowing from a well, 10 MJ are needed to tap the well, leaving 90 MJ. A narrow measure of energy returned on energy invested at the wellhead in this example would therefore be 100 to 10, or 10 to 1.

However, to get a fuller picture we have to extend this kind of analysis. Of the 90 MJ of net energy at the wellhead, some has to be used to refine the oil and produce the by-products, leaving only 63 MJ.

Then, to transport the refined product to its point of use takes another 5 MJ, leaving 58 MJ. But of course, the infrastructure of roads and transport also requires energy for construction and maintenance before any of the refined oil can be used to power a vehicle from A to B. By this final stage only 20.5 MJ of the original 100 MJ are left.

We now have to take into account that depletion means that, at wellheads around the world, the energy needed to produce energy is increasing. It takes energy to prospect for oil and gas, and if the wells are smaller and more difficult to tap, because, for example, they are out at sea under a huge amount of rock, then it will take more energy to get the oil out in the first place.

So, instead of requiring 10 MJ to produce the 100 MJ, let us imagine that it now takes 20 MJ. At the other end of the chain there would then be only 10.5 MJ, a dramatic reduction in the petroleum available to society.
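The chain above can be reproduced directly from the MJ figures in the text. Holding the downstream costs fixed (refining, transport, infrastructure) shows why a doubling of the extraction cost nearly halves the energy finally delivered; this is a sketch using the example's numbers, not a general model:

```python
# Hall's delivered-energy chain, using the MJ figures from the text.
# Downstream costs stay fixed, so any rise in extraction cost comes
# straight off the final net energy available to society.
GROSS = 100.0                                           # MJ at the well
REFINING, TRANSPORT, INFRASTRUCTURE = 27.0, 5.0, 37.5   # MJ, implied by the example

def delivered(extraction_cost: float) -> float:
    """Net MJ reaching the final user for a given extraction cost."""
    return GROSS - extraction_cost - REFINING - TRANSPORT - INFRASTRUCTURE

print(delivered(10.0))  # 20.5 MJ, the original example
print(delivered(20.0))  # 10.5 MJ: extraction cost doubles, delivered energy nearly halves
```

The leverage comes from the fixed downstream overhead: extraction rises by only 10 MJ out of 100, but that 10 MJ is nearly half of the 20.5 MJ that used to survive the whole chain.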

The concept of Energy Return on Energy Invested is a ratio in physical quantities and it helps us to understand the flaw in neoclassical economic reasoning that draws on the idea of “the invisible hand” and the price mechanism. In simplistic economic thinking, markets should have no problems coping with depletion because a depleting resource will become more expensive. As its price rises, so the argument goes, the search for new sources of energy and substitutes will be incentivised while people and companies will adapt their purchases to rising prices. For example, if it is the price of energy that is rising then this will incentivise greater energy efficiency. Basta! Problem solved…

Except the problem is not solved… there are two flaws in the reasoning. Firstly, if the price of energy rises then so too does the cost of extracting energy, because energy is needed to extract energy. There will be gas and oil wells in favourable locations which are relatively cheap to tap, and the rising energy price will mean that the companies that own these wells make a lot of money. This is what economists call "rent". However, there will be some wells that are "marginal" because the underlying geology and location are not so favourable. At these locations, rising energy prices also push up the energy costs of production. Indeed, when the energy returned on energy invested falls as low as 1 to 1, the increase in the cost of energy inputs cancels out any gain in revenue from higher priced energy outputs. And once the EROI is less than one, energy extraction will not be profitable at any price.

Secondly, energy prices cannot in any case rise beyond a certain point without crashing the economy. The market for energy is not like the market for cans of baked beans. Energy is necessary for virtually every activity in the economy, for all production and all services. The price of energy is a big deal – energy prices going up and down have a similar significance to interest rates going up or down. There are “macro-economic” consequences for the level of activity in the economy. Thus, in the words of one analyst, Chris Skrebowski, there is a rise in the price of oil, gas and coal at which:

the cost of incremental supply exceeds the price economies can pay without destroying growth at a given point in time.(Skrebowski, 2011)

This kind of analysis has been further developed by Steven Kopits of the Douglas-Westwood consultancy. In a lecture to the Columbia University Center on Global Energy Policy in February of 2014, he explained how conventional “legacy” oil production peaked in 2005 and has not increased since. All the increase in oil production since that date has been from unconventional sources like the Alberta Tar sands, from shale oil or natural gas liquids that are a by-product of shale gas production. This is despite a massive increase in investment by the oil industry that has not yielded any increase in “conventional oil” production but has merely served to slow what would otherwise have been a faster decline.

More specifically, the total spend on upstream oil and gas exploration and production from 2005 to 2013 was $4 trillion. Of that amount, $3.5 trillion was spent on the “legacy” oil and gas system. This is a sum of money equal to the GDP of Germany. Despite all that investment in conventional oil production, it fell by 1 million barrels a day. By way of comparison, investment of $1.5 trillion between 1998 and 2005 yielded an increase in oil production of 8.6 million barrels a day.
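Using only the figures quoted above, a rough comparison of the two investment periods can be sketched. This is a back-of-envelope calculation from the text's numbers, not from Kopits's own slides:

```python
# Comparing the two upstream-investment periods described in the text.
early_capex, early_gain_bpd = 1.5e12, 8.6e6   # 1998-2005: $1.5T bought +8.6 million b/d
late_capex, late_change_bpd = 3.5e12, -1.0e6  # 2005-2013: $3.5T on legacy, output FELL 1 mb/d

cost_per_bpd = early_capex / early_gain_bpd
print(f"1998-2005: ${cost_per_bpd:,.0f} of capex per added barrel/day of capacity")
# 2005-2013 bought no added conventional capacity at all: more than double
# the spending merely slowed what would otherwise have been a faster decline.
```

The asymmetry is the whole point: in the later period, no amount of the quoted capex translates into a positive cost-per-added-barrel figure, because legacy production went down rather than up.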

Further to this, unfortunately for the oil industry, it has not been possible for oil prices to rise high enough to cover the increasing capital expenditure and operating costs. This is because high oil prices lead to recessionary conditions and slow or no growth in the economy. Because prices are not rising fast enough and costs are increasing, the costs of the independent oil majors are rising at 2 to 3% a year more than their revenues. Overall profitability is falling and some oil majors have had to borrow and sell assets to pay dividends. The next stage in this crisis has then been that investment projects are being cancelled – which suggests that oil production will soon begin to fall more rapidly.

The situation can be understood by reference to the nursery story of Goldilocks and the Three Bears. Goldilocks tries three kinds of porridge – some that is too hot, some that is too cold and some where the temperature is somewhere in the middle and therefore just right. The working assumption of mainstream economists is that there is an oil price that is not so high that it undermines economic growth, but also not so low that the oil companies cannot cover their extraction costs – a price that is just right. The problem is that the Goldilocks situation no longer describes what is happening. Another story provides a better metaphor – that story is "Catch 22". According to Kopits, the vast majority of the publicly quoted oil majors require oil prices of over $100 a barrel to achieve positive cash flow, and nearly half need more than $120 a barrel.

But it is these oil prices that drag down the OECD economies. For several years, however, there have been some countries able to afford the higher prices. The countries that have coped best with high energy prices are the so-called "emerging non-OECD countries", above all China. China has been bidding away an increasing part of world oil production and continuing to grow, while higher energy prices have led to stagnation in the OECD economies. (Kopits, 2014)

Since the oil price is never “just right” it follows that it must oscillate between a price that is too high for macro-economic stability or too low to make it a paying proposition for high cost producers of oil (or gas) to invest in expanding production. In late 2014 we can see this drama at work. The faltering global economy has a lower demand for oil but OPEC, under the leadership of Saudi Arabia, have decided not to reduce oil production in order to keep oil prices from falling. On the contrary they want prices to fall. This is because they want to drive US shale oil and gas producers out of business.

The shale industry is described elsewhere in this book; suffice it here to refer to the claim of many commentators that the shale oil and gas boom in the United States is a bubble. A lot of money borrowed from Wall Street has been invested in the industry in anticipation of high profits, but given the speed at which wells deplete it is doubtful whether many of the companies will be able to cover their debts. What has been possible so far has been possible largely because quantitative easing has made capital available to this industry at very low interest rates.

There is a range of extraction costs for different oil and gas wells and fields, depending on the differing geology of different places. In some "sweet spots" the yield compared to cost is high, but in a large number of cases production costs have been high, and it is said that it will be impossible to make money at the price to which oil has fallen ($65 in late 2014). This in turn could mean that companies funding their operations with junk bonds will find it difficult to service their debt; if interest rates rise, the difficulty becomes greater. Because the shale oil and gas sector has been so crucial to expansion in the USA, a large number of bankruptcies could have repercussions throughout the wider US and world economy.


Renewable Energy systems to the rescue?

Although it seems obvious that the depletion of fossil fuels can and should lead to the expansion of renewable energy systems like wind and solar power, we should beware of believing that renewable energy systems are a panacea that can rescue consumer society and its continued growth path. A very similar net energy analysis can, and ought to, be done for renewable energy, matching the one already done for fossil fuels.


Before we get over-enthusiastic about the potential of renewable energy, we have to be aware of the need to subtract the energy costs particular to renewable energy systems from the gross energy they generate. Not only must energy be used to manufacture and install the wind turbines, solar panels and so on, but for a renewables-based economy to function, energy must also be devoted to energy storage, to allow for the fact that the times when the wind and sun are generating energy are not necessarily the times when energy is wanted.

Furthermore, the places where solar and wind potential are at their best (offshore for wind, or in deserts without dust storms near the equator for solar) are usually a long distance from centres of use. Once again, a great deal of energy, materials and money must be spent getting the energy from where it is generated to where it will be used. For example, the "Energiewende" (Energy Transition) in Germany involves huge effort and financial and energy costs in creating a transmission corridor to carry electricity from North Sea wind turbines down to Bavaria, where demand is greatest. Similarly, there are plans to develop concentrated solar power in North Africa for use in northern Europe which, if they ever come to anything, will require major investments in energy transmission. A further issue, connected to the requirement for energy storage, is the need for energy carriers that are not based on electricity. As before, conversions to put a current energy flux into a stored form involve an energy cost.

Just as with fossil fuels, sources of renewable energy are of variable yield depending on local conditions: offshore wind is better than onshore for wind speed and reliability; there is more solar energy nearer the equator; some areas have less cloud cover; wave energy on the Atlantic coasts of the UK is much better than on other coastlines like those of the Irish Sea or North Sea. If we make a Ricardian assumption that the best net-yielding resources are developed first, then subsequent yields will be progressively inferior. In more conventional jargon, just as there are diminishing returns for fossil energy as fossil energy resources deplete, so there will eventually be diminishing returns for renewable energy systems. No doubt new technologies will partly buck this trend, but the trend is there nonetheless.

It is for reasons such as these that some energy experts are sceptical about the global potential of renewable energy to meet the energy demand of a growing economy. For example, two Australian academics at Monash University argue that world energy demand would grow to 1,000 EJ (1 EJ = 10^18 J) or more by 2050 if growth continued on the course of recent decades. Their analysis then looks at each renewable energy resource in turn, bearing in mind the energy costs of developing wind, solar, hydropower, biomass etc., taking into account diminishing returns, and bearing in mind too that climate change may limit the potential of renewable energy (for example, river flow rates may change, affecting hydropower). Their conclusion: "We find that when the energy costs of energy are considered, it is unlikely that renewable energy can provide anywhere near a 1000 EJ by 2050." (Moriarty & Honnery, 2012)

Now let’s put these insights back into a bigger picture of the future of the economy. In a presentation to the All Party Parliamentary Group on Peak Oil and Gas, Charles Hall showed a number of diagrams to express the consequences of depletion and rising energy costs of energy. I have taken just two of these diagrams here – comparing 1970 with what might be the case in 2030. (Hall C. , 2012) What they show is how the economy produces different sorts of stuff. Some of the production is consumer goods, either staples (essentials) or discretionary (luxury) goods. The rest of production is devoted to goods that are used in production, i.e. investment goods in the form of machinery, equipment, buildings, roads, infrastructure and their maintenance. Some of these investment goods must take the form of energy acquisition equipment. As a society runs up against energy depletion and other problems, more and more production must go into energy acquisition, infrastructure and maintenance. Less and less is available for consumption, and particularly for discretionary consumption.


Whether the economy would evolve in this way can be questioned. As we have seen, the increasing needs of the oil and gas sector imply a transfer of resources from elsewhere through rising prices. However, the rest of the economy cannot actually pay this extra without crashing. That is what the above diagrams show – a transfer of resources from discretionary consumption to investment in energy infrastructure. But such a transfer would be crushing for the other sectors, and their decline would likely drag down the whole economy.
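The squeeze that Hall's diagrams depict can be sketched in a few lines. Treat total output as fixed, let the energy-acquisition sector's claim on output be roughly the inverse of society-wide EROI, and protect staples and maintenance; discretionary consumption then absorbs the whole loss. The shares used are assumptions for illustration, not Hall's own figures:

```python
# A minimal sketch of Hall-style resource reallocation. The staples and
# maintenance shares are assumed; the energy sector's share of output is
# approximated as 1/EROI (energy reinvested to obtain energy).

def discretionary_share(eroi: float, staples: float = 0.35,
                        maintenance: float = 0.25) -> float:
    energy_sector = 1.0 / eroi  # share of output claimed by energy acquisition
    return max(0.0, 1.0 - staples - maintenance - energy_sector)

for eroi in (20, 10, 5, 3):
    print(eroi, round(discretionary_share(eroi), 2))
```

Under these assumptions a fall in EROI from 20 to 3 cuts the discretionary share from about a third of output to almost nothing – which is the "crushing" transfer described above, expressed as arithmetic.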

Over the last few years, central banks have had a policy of quantitative easing to try to keep interest rates low. The economy cannot pay high energy prices AND high interest rates, so, in effect, the policy has been to bring interest rates down as low as possible to counter the stagnation. However, this has not really created production growth; it has instead created a succession of asset price bubbles. The underlying trend continues to be one of stagnation, decline and crisis, and it will get a lot worse when oil production starts to fall more rapidly as a result of investment cutbacks. The severity of the recessions may vary between countries, because competitive strength in this model goes to those countries where energy is used most efficiently and which can afford to pay somewhat higher prices for energy. Such countries are likely to do better but will not escape the general decline if they stay wedded to the conventional growth model. Whatever the variability, this is still a dead end and, at some point, people will see that entirely different ways of thinking about economy and ecology are needed – unless they get drawn into conflicts and wars over energy by psychopathic policy idiots. There is no way out of the Catch-22 within the growth economy model. That’s why degrowth is needed.

Further ideas can be extrapolated from Hall’s way of presenting the end of the road for the growth economy. The only real option as a source for extra resources to be ploughed into changing the energy sector is what Hall calls “discretionary consumption”, aka luxury consumption. It would not be possible to take from “staples” without undermining the ability of ordinary people to survive day to day. Implicit here is a social justice agenda for the post-growth, post-carbon economy. Transferring resources out of the luxury consumption of the rich is a necessary part of the process of finding the wherewithal for energy conservation work and for developing renewable energy resources. These will be expensive, and the resources cannot come from anywhere else than out of the consumption of the rich. It should be remembered too that the problems of depletion do not just apply to fossil energy extraction (coal, oil and gas) but apply across all forms of mineral extraction. All minerals are depleted by use, and that means the grade of ore declines over time. Projecting the consequences into the future ought to frighten the growth enthusiasts. To take in how industrial production can hit a brick wall of steeply rising costs, consider the following graph, which shows the declining quality of ore grades mined in Australia.


As ores deplete there is a deterioration of ore grades. That means that more rock has to be shifted and processed to extract and refine the desired raw material, requiring more energy and leaving more wastes. This is occurring in parallel with the depletion of energy sources, which means that more energy has to be used to extract a given quantity of energy and therefore, in turn, to extract from a given quantity of ore. Thus, the energy requirements to extract energy are rising at the very same time as the amount of energy required to extract given quantities of minerals is rising. More energy is needed just at the time that energy is itself becoming more expensive.
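This compounding effect is easy to see with illustrative numbers: energy per tonne of refined metal scales roughly with the inverse of the ore grade (more rock per tonne of metal), while the gross energy that must be produced scales with 1/(1 − 1/EROI), because extracting energy itself consumes energy. All parameters below are assumed for illustration:

```python
# Illustrative compounding of ore-grade decline with a falling EROI.
# base_energy is an assumed net energy requirement (GJ per tonne of metal)
# at the reference ore grade; none of these figures are from mining data.

def gross_energy_per_tonne(base_energy: float, base_grade: float,
                           grade: float, eroi: float) -> float:
    mining_energy = base_energy * base_grade / grade   # net energy at the mine
    return mining_energy / (1.0 - 1.0 / eroi)          # gross, incl. energy cost of energy

# Halving the ore grade while society-wide EROI falls from 20 to 5:
before = gross_energy_per_tonne(50.0, 0.02, 0.02, 20)  # ≈ 52.6
after = gross_energy_per_tonne(50.0, 0.02, 0.01, 5)    # = 125.0
print(before, after)
```

In this toy case the energy bill per tonne of metal more than doubles: the grade decline doubles the mining energy, and the falling EROI adds a further multiplier on top – the two trends multiply rather than add.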

Now, on top of that, add to the picture the growing demand for minerals and materials if the economy is to grow.

At least there has been a recognition and acknowledgement in recent years that environmental problems exist. The problem is now somewhat different – it is the incredibly naive faith that markets and technology can solve all problems and keep growth going. The main criticism of the Limits to Growth study was the claim that problems would be anticipated in forward markets and would then be made the subject of high-tech innovation. In the next chapter, the destructive effects of these innovations are examined in more depth.