No, I don’t hate “renewables”

20 07 2019

Another masterpiece from Tim who keeps churning out great stuff on his website……

During a conversation with a friend yesterday I was asked why I was so hostile toward “renewables” – or as I prefer to call them, non-renewable renewable energy-harvesting technologies.  My answer was that I am not opposed to these technologies, but rather to the role afforded to them by the Bright Green techno-utopian crowd, who continue to churn out propaganda to the effect that humankind can continue to metastasise across the universe without stopping for breath simply by replacing the energy we derive from fossil fuels with energy we harvest with wind and tide turbines, solar panels and geothermal pumps.  These, I explained to my friend, will unquestionably play a role in our future; but to nowhere near the extent claimed by the proponents of green capitalism, ecosocialism or the green new deal.

It would seem that I was not alone in being asked why I was so disapproving of “renewables.”  On the same day, American essayist John Michael Greer addressed the same question on his Ecosophia blog:

“Don’t get me wrong, I’m wholly in favor of renewables; they’re what we’ll have left when fossil fuels are gone; but anyone who thinks that the absurdly extravagant energy use that props up a modern lifestyle can be powered by PV cells simply hasn’t done the math. Yet you’ll hear plenty of well-intentioned people these days insisting that if we only invest in solar PV we can stop using fossil fuels and still keep our current lifestyles.”

Greer also explains why so many techno-utopians have such a starry-eyed view of “renewables” like solar panels:

“The result of [decades of development] can be summed up quite readily: the only people who think that an energy-intensive modern lifestyle can be supported entirely on solar PV are those who’ve never tried it. You can get a modest amount of electrical power intermittently from PV cells; if you cover your roof with PV cells and have a grid tie-in that credits you at a subsidized rate, you can have all the benefits of fossil fuel-generated electricity and still convince yourself that you’re not dependent on fossil fuels; but if you go off-grid, you’ll quickly learn the hard limits of solar PV.”

Greer is not alone in having to spell this out.  The first article I read yesterday morning was a new post from Tim Morgan on his Surplus Energy Economics blog, where he makes the case that even if we were not facing a climate emergency, our dependence upon fossil fuels still dooms our civilisation to an imminent collapse:

“Far from ensuring ‘business as usual’, continued reliance on fossil fuel energy would have devastating economic consequences. As is explained here, the world economy is already suffering from these effects, and these have prompted the adoption of successively riskier forms of financial manipulation in a failed effort to sustain economic ‘normality’.”

The reason is what Morgan refers to as the rapidly-rising “energy cost of energy” (ECoE) – a calculation related to Net Energy and Energy Return on Energy Invested (EROI).  Put simply, industrial civilisation has devoured each fossil fuel beginning with the cheapest and easiest deposits and then falling back on ever harder and more expensive deposits as these run out.  The result is that the amount of surplus energy left over to grow the economy after we have invested in energy for the future and in the maintenance and repair of the infrastructure we have already developed gets smaller and harder to obtain with each passing month.
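Morgan's point can be put as simple arithmetic: if ECoE is the fraction of gross energy consumed in obtaining energy, surplus energy is whatever remains. A minimal sketch, using the ECoE figures that appear later in this piece:

```python
# Surplus energy as a function of the Energy Cost of Energy (ECoE).
# ECoE is the fraction of gross energy consumed in securing energy;
# what is left over funds everything else in the economy.

def surplus_energy(gross: float, ecoe: float) -> float:
    """Energy left over for the non-energy economy."""
    return gross * (1.0 - ecoe)

# Per 100 units of gross energy:
for ecoe in (0.02, 0.041, 0.08):  # post-1945 'golden age', year-2000, renewables estimate
    print(f"ECoE {ecoe:.1%}: surplus = {surplus_energy(100, ecoe):.1f} units")
```

The point of the sketch is that the difference between 2% and 8% looks small in percentage terms, but it is the shrinking surplus, not the gross figure, that prosperity is built on.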

Morgan sets out four factors which determine the Energy Cost of Energy:

  • Geographical reach – as local deposits are exhausted, we are obliged to go further afield for replacements.
  • Economies of scale – as our infrastructure develops, we rationalise it in order to keep costs to a minimum; for example, having a handful of giant oil refineries rather than a large number of small ones. Unfortunately, this is a one-off gain, after which the cost of maintenance and repair results in diminishing returns.
  • Depletion – most of the world’s oil and coal deposits are now in decline, after providing the basis for the development of industrial civilisation. Without replacement, depletion dooms us to some form of degrowth.
  • Technology – the development of technologies that provide a greater return for the energy invested can offset some of the rising ECoE, but like economies of scale, they come with diminishing returns and are ultimately limited by the laws of thermodynamics:

“To be sure, advances in technology can mitigate the rise in ECoEs, but technology is limited by the physical properties of the resource. Advances in techniques have reduced the cost of shale liquids extraction to levels well below the past cost of extracting those same resources, but have not turned America’s tight sands into the economic equivalent of Saudi Arabia’s al Ghawar, or other giant discoveries of the past.

“Physics does tend to have the last word.”

Morgan argues that by focusing solely on financial matters, mainstream economics misses the central role of surplus energy in the economy:

“According to SEEDS – the Surplus Energy Economics Data System – world trend ECoE rose from 2.9% in 1990 to 4.1% in 2000. This increase was more than enough to stop Western prosperity growth in its tracks.

“Unfortunately, a policy establishment accustomed to seeing all economic developments in purely financial terms was at a loss to explain this phenomenon, though it did give it a name – “secular stagnation”.

“Predictably, in the absence of an understanding of the energy basis of the economy, recourse was made to financial policies in order to ‘fix’ this slowdown in growth.

“The first such initiative was credit adventurism. It involved making debt easier to obtain than ever before. This approach was congenial to a contemporary mind-set which saw ‘deregulation’ as a cure for all ills.”

The inevitable result was the financial crash in 2008, when unrepayable debt threatened to unwind the entire global financial system.  And while the financial crisis has been temporarily offset by more of the same medicine – quantitative easing and interest rate cuts – it has been the continued expansion of emerging markets that has actually kept the system limping along:

“World average prosperity per capita has declined only marginally since 2007, essentially because deterioration in the West has been offset by continued progress in the emerging market (EM) economies. This, though, is nearing its point of inflexion, with clear evidence now showing that the Chinese economy, in particular, is in very big trouble.

“As you’d expect, these trends in underlying prosperity have started showing up in ‘real world’ indicators, with trade in goods, and sales of everything from cars and smartphones to computer chips and industrial components, now turning down. As the economy of ‘stuff’ weakens, a logical consequence is likely to be a deterioration in demand for the energy and other commodities used in the supply of “stuff”.

“Simply stated, the economy has now started to shrink, and there are limits to how long we can hide this from ourselves by spending ever larger amounts of borrowed money.”

The question this raises is not simply, can we replace fossil fuels with non-renewable renewable energy-harvesting technologies (Morgan refers to them as “secondary applications of primary energy from fossil fuels”) but can we deploy them at an ECoE that allows us to avoid the collapse of industrial civilisation?  Morgan argues not.  The techno-utopian bad habit of applying Moore’s Law to every technology has allowed economists and politicians to assume that the cost of non-renewable renewable energy-harvesting technologies will keep halving even as the energy they generate continues to double.  However:

“[W]e need to guard against the extrapolatory fallacy which says that, because the ECoE of renewables has declined by x% over y number of years, it will fall by a further x% over the next y. The problem with this is that it ignores the limits imposed by the laws of physics.”

More alarming, however, is the high ECoE of non-renewable renewable energy-harvesting technologies, despite their becoming cheaper than some fossil fuel deposits:

“…there can be no assurance that the ECoE of a renewables-based energy system can ever be low enough to sustain prosperity. Back in the ‘golden age’ of prosperity growth (in the decades immediately following 1945), global ECoE was between 1% and 2%. With renewables, the best that we can hope for might be an ECoE stable at perhaps 8%, far above the levels at which prosperity deteriorates in the West, and ceases growing in the emerging economies.”

At this point, no doubt, some readers at least will be asking Morgan why he dislikes “renewables” so much.  And his answer is the same as Greer’s and my own:

“These cautions do not, it must be stressed, undermine the case for transitioning from fossil fuels to renewables. After all, once we understand the energy processes which drive the economy, we know where continued dependency on ever-costlier fossil fuels would lead.

“There can, of course, be no guarantees around a successful transition to renewable forms of energy. The slogan “sustainable development” has been adopted by the policy establishment because it seems to promise the public that we can tackle environmental risk without inflicting economic hardship, or even significant inconvenience.”

Morgan’s broad point here is that there is a false dichotomy between addressing environmental concerns and maintaining economic growth.  The economy is toast irrespective of whether we address environmental crises or not.  There is not enough fossil fuel energy to prevent the system from imploding – the only real question to be answered is whether we continue with business as usual until we crash and burn or whether we take at least some mitigating actions to preserve a few of the beneficial aspects of the last 250 years of economic development.  After all, having clean drinking water, enough food to ward off starvation and some basic health care would make the coming collapse easier than it otherwise might be.

The problem, however, is that even with the Herculean efforts to deploy non-renewable renewable energy-harvesting technologies in the decades since the oil crisis in 1973, they still only account for four percent of our primary energy.  As Morgan cautions, it is too easy for westerners to assume that our total energy consumption is entirely in the gas and electricity we use at home and in the fuel we put in the tanks of our vehicles.  In reality this is but a tiny fraction of our energy use (and carbon footprint), with most of our energy embodied within all of the goods and services we consume.  Not only does fossil fuel account for more than 85 percent of the world’s primary energy, but both BP’s and the International Energy Agency’s 2018 reports show that fossil fuel consumption is growing at a faster rate than non-renewable renewable energy-harvesting technologies are being installed.

Nor is there a green new deal route out of this problem.  As a recent letter to the UK’s Committee on Climate Change, authored by Natural History Museum Head of Earth Sciences Prof Richard Herrington et al., warns:

“To replace all UK-based vehicles today with electric vehicles (not including the LGV and HGV fleets), assuming they use the most resource-frugal next-generation NMC 811 batteries, would take 207,900 tonnes cobalt, 264,600 tonnes of lithium carbonate (LCE), at least 7,200 tonnes of neodymium and dysprosium, in addition to 2,362,500 tonnes copper. This represents, just under two times the total annual world cobalt production, nearly the entire world production of neodymium, three quarters the world’s lithium production and at least half of the world’s copper production during 2018. Even ensuring the annual supply of electric vehicles only, from 2035 as pledged, will require the UK to annually import the equivalent of the entire annual cobalt needs of European industry…

“There are serious implications for the electrical power generation in the UK needed to recharge these vehicles. Using figures published for current EVs (Nissan Leaf, Renault Zoe), driving 252.5 billion miles uses at least 63 TWh of power. This will demand a 20% increase in UK generated electricity.

“Challenges of using ‘green energy’ to power electric cars: If wind farms are chosen to generate the power for the projected two billion cars at UK average usage, this requires the equivalent of a further years’ worth of total global copper supply and 10 years’ worth of global neodymium and dysprosium production to build the windfarms.

“Solar power is also problematic – it is also resource hungry; all the photovoltaic systems currently on the market are reliant on one or more raw materials classed as “critical” or “near critical” by the EU and/ or US Department of Energy (high purity silicon, indium, tellurium, gallium) because of their natural scarcity or their recovery as minor-by-products of other commodities. With a capacity factor of only ~10%, the UK would require ~72GW of photovoltaic input to fuel the EV fleet; over five times the current installed capacity. If CdTe-type photovoltaic power is used, that would consume over thirty years of current annual tellurium supply.

“Both these wind turbine and solar generation options for the added electrical power generation capacity have substantial demands for steel, aluminium, cement and glass.”
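The letter's electricity figures are easy to sanity-check. The sketch below is not from the letter itself: the per-mile consumption and the UK generation total are my own round-number assumptions, chosen to see whether its 63 TWh and "20% increase" claims hang together.

```python
# Rough check of the Herrington letter's electricity figures.
# Assumptions (not from the letter): ~0.25 kWh per mile for a small EV
# (Nissan Leaf / Renault Zoe class) and ~300 TWh/year of UK generation.

MILES_PER_YEAR = 252.5e9        # UK fleet mileage, from the letter
KWH_PER_MILE = 0.25             # assumed EV consumption
UK_GENERATION_TWH = 300.0       # assumed annual UK electricity output

demand_twh = MILES_PER_YEAR * KWH_PER_MILE / 1e9   # kWh -> TWh
increase = demand_twh / UK_GENERATION_TWH

print(f"EV demand ≈ {demand_twh:.0f} TWh/yr, "
      f"a {increase:.0%} increase in UK generation")
# → EV demand ≈ 63 TWh/yr, a 21% increase in UK generation
```

Under those assumptions the letter's numbers are internally consistent: roughly 63 TWh of extra demand, a rise of about a fifth in UK generation.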

Put simply, there is not enough Planet Earth left for us to grow our way to sustainability.  The only option open to us is to rapidly shrink our activities and our population back to something that can be sustained without further depleting the planet we depend upon.  Continue with business as usual and Mother Nature is going to do to us what we did to the dodo and the passenger pigeon.  Begin taking some radical action – which still allows the use of some resources and fossil fuels – to switch from an economy of desires to one of needs and at least a few humans might survive what is coming.

The final problem, though, is that very few people – including many of those who protest government inaction on the environment – are prepared to make the sacrifices required.  Nor are our corporations and institutions prepared to forego their power and profits for the greater good.  And that leaves us with political structures that will inevitably favour business as usual.

So no, I don’t hate “renewables” – I just regard those who blithely claim that we can deploy and use them to replace fossil fuels without breaking a sweat to be as morally bankrupt as any climate change denying politician you care to mention.  There is a crash on the horizon, the likes of which we haven’t seen since the fourteenth century.  When the energy cost of securing energy – whether fossil fuel, nuclear or renewable – exceeds the energy cost of sustaining the system, our ability to take mitigating action will be over.  Exactly when this is going to happen is a matter of speculation (we should avoid mistaking inevitability for imminence).  Nevertheless, the window for taking action is closing fast; and promising Bright Green utopias as we slide over the cliff edge is not helping anybody.





The physics of energy and resulting effects on economics

10 07 2018

Hat tip to one of the many commenters on DTM for pointing me to this excellent video…. I have featured Jean-Marc Jancovici’s work here before, but this one’s shorter, and even though it’s in French, English subtitles are available from the settings section on the YouTube screen. Speaking of screens, one of the outstanding statements made in this video is that all electronics in the world that use screens in one way or another consume one third of the world’s electricity…….. Remember how the growth in renewables could not even keep up with the Internet’s growth?

If this doesn’t convince viewers that we have to change the way we do EVERYTHING, then nothing will….. and seeing as he’s presenting to politicians, let’s hope at least some of them will come out of this better informed……

Jean-Marc Jancovici, a French engineer schools politicians with a sobering lecture on the physics of energy and the effects on economics and climate change





We Need Courage, Not Hope, to Face Climate Change

11 03 2018

Originally posted at onbeing…… I hope this article resonates with you as much as it did with me.

KATE MARVEL (@DRKATEMARVEL), CONTRIBUTING EDITOR

As a climate scientist, I am often asked to talk about hope. Particularly in the current political climate, audiences want to be told that everything will be all right in the end. And, unfortunately, I have a deep-seated need to be liked and a natural tendency to optimism that leads me to accept more speaking invitations than is good for me. Climate change is bleak, the organizers always say. Tell us a happy story. Give us hope. The problem is, I don’t have any.

I used to believe there was hope in science. The fact that we know anything at all is a miracle. For some reason, the whole world is hung on a skeleton made of physics. I found comfort in this structure, in the knowledge that buried under layers of greenery and dirt lies something universal. It is something to know how to cut away the flesh of existence and see the clean white bones underneath. All of us obey the same laws, whether we know them or not.

Look closely, however, and the structure of physics dissolves into uncertainty. We live in a statistical world, in a limit where we experience only one of many possible outcomes. Our clumsy senses perceive only gross aggregates, blind to the roiling chaos underneath. We are limited in our ability to see the underlying stimuli that, en masse, create an event. Temperature, for example, is a state created by the random motions of millions of tiny molecules. We feel heat or cold, not the motion of any individual molecule. When something is heated up, its tiny constituent parts move faster, increasing its internal energy. They do not move at the same speed; some are quick, others slow. But there are billions of them, and in the aggregate their speed dictates their temperature.

The internal energy of molecular motion is turned outward in the form of electromagnetic radiation. Light comes in different flavors. The stuff we see occupies only a tiny portion of a vast electromagnetic spectrum. Light is a wave, of sorts, and the distance between its successive peaks determines the energy it carries. Cold, low-energy objects emit stretched waves with long, lazy intervals between peaks. Hot objects radiate at shorter wavelengths.

To have a temperature is to shed light into your surroundings. You have one. The light you give off is invisible to the naked eye. You are shining all the same, incandescent with the power of a hundred-watt bulb. The planet on which you live is illuminated by the visible light of the sun and radiates infrared light to the blackness of space. There is nothing that does not have a temperature. Cold space itself is illuminated by the afterglow of the Big Bang. Even black holes radiate, lit by the strangeness of quantum mechanics. There is nowhere from which light cannot escape.

The same laws that flood the world with light dictate the behavior of a carbon dioxide molecule in the atmosphere. CO2 is transparent to the Sun’s rays. But the planet’s infrared outflow hits a molecule in just such a way as to set it in motion. Carbon dioxide dances when hit by a quantum of such light, arresting the light on its path to space. When the dance stops, the quantum is released back to the atmosphere from which it came. No one feels the consequences of this individual catch-and-release, but the net result of many little dances is an increase in the temperature of the planet. More CO2 molecules mean a warmer atmosphere and a warmer planet. Warm seas fuel hurricanes, warm air bloats with water vapor, the rising sea encroaches on the land. The consequences of tiny random acts echo throughout the world.

I understand the physical world because, at some level, I understand the behavior of every small thing. I know how to assemble a coarse aggregate from the sum of multiple tiny motions. Individual molecules, water droplets, parcels of air, quanta of light: their random movements merge to yield a predictable and understandable whole. But physics is unable to explain the whole of the world in which I live. The planet teems with other people: seven billion fellow damaged creatures. We come together and break apart, seldom adding up to a coherent, predictable whole.

I have lived a fortunate, charmed, loved life. This means I have infinite, gullible faith in the goodness of the individual. But I have none whatsoever in the collective. How else can it be that the sum total of so many tiny acts of kindness is a world incapable of stopping something so eminently stoppable? California burns. Islands and coastlines are smashed by hurricanes. At night the stars are washed out by city lights and the world is illuminated by the flickering ugliness of reality television. We burn coal and oil and gas, heedless of the consequences.

Our laws are changeable and shifting; the laws of physics are fixed. Change is already underway; individual worries and sacrifices have not slowed it. Hope is a creature of privilege: we know that things will be lost, but it is comforting to believe that others will bear the brunt of it.

We are the lucky ones who suffer little tragedies unmoored from the brutality of history. Our loved ones are taken from us one by one through accident or illness, not wholesale by war or natural disaster. But the scale of climate change engulfs even the most fortunate. There is now no weather we haven’t touched, no wilderness immune from our encroaching pressure. The world we once knew is never coming back.

I have no hope that these changes can be reversed. We are inevitably sending our children to live on an unfamiliar planet. But the opposite of hope is not despair. It is grief. Even while resolving to limit the damage, we can mourn. And here, the sheer scale of the problem provides a perverse comfort: we are in this together. The swiftness of the change, its scale and inevitability, binds us into one, broken hearts trapped together under a warming atmosphere.

We need courage, not hope. Grief, after all, is the cost of being alive. We are all fated to live lives shot through with sadness, and are not worth less for it. Courage is the resolve to do well without the assurance of a happy ending. Little molecules, random in their movement, add together to a coherent whole. Little lives do not. But here we are, together on a planet radiating ever more into space where there is no darkness, only light we cannot see.





Who killed the electric car…….

28 11 2017

Anyone who’s seen the film (I still have a DVD of it lying around somewhere…) by the name “Who killed the electric car” will remember the outrage of the ‘owners’ (they were all only leasing the vehicles) when GM destroyed the cars they thought were working perfectly well.  The problem was, the EV1 was an experiment. It was an experiment in technology and economics, and by the time the leases ran out, all the batteries needed replacing, and GM weren’t about to do that, because the replacement cost was higher than the value of the vehicles. Never let economics get in the way of a good story…. nor profit!

Anyhow, here is another well-researched article Alice Friedemann pointed me to regarding the senseless travesty of the big switch to EVs…..  It’s just too little too late, and we have the laws of physics to contend with.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The battery did it.  Batteries are far too expensive for the average consumer, $600-$1700 per kWh (Service). And they aren’t likely to get better any time soon.  Sorry to ruin the suspense so quickly, guess I’ll never be a mystery writer.

“The big advances in battery technology happen rarely. It’s been more than 200 years and we have maybe 5 different successful rechargeable batteries,” said George Blomgren, a former senior technology researcher at Eveready (Borenstein).

And yet hope springs eternal. A better battery is always just around the corner:

  • 1901: “A large number of people … are looking forward to a revolution in the generating power of storage batteries, and it is the opinion of many that the long-looked-for, light weight, high capacity battery will soon be discovered.” (Hiscox)
  • 1901: “Demand for a proper automobile storage battery is so crying that it soon must result in the appearance of the desired accumulator [battery]. Everywhere in the history of industrial progress, invention has followed close in the wake of necessity” (Electrical Review #38. May 11, 1901. McGraw-Hill)
  • 1974: “The consensus among EV proponents and major battery manufacturers is that a high-energy, high power-density battery – a true breakthrough in electrochemistry – could be accomplished in just 5 years” (Machine Design).
  • 2014 internet search “battery breakthrough” gets 7,710,000 results, including:  Secretive Company Claims Battery Breakthrough, ‘Holy Grail’ of Battery Design Achieved, Stanford breakthrough might triple battery life, A Battery That ‘Breathes’ Could Power Next-Gen Electric Vehicles, 8 Potential EV and Hybrid Battery Breakthroughs.

So is an electric car:

  • 1911: The New York Times declares that the electric car “has long been recognized as the ideal solution” because it “is cleaner and quieter” and “much more economical.”(NYT 1911)
  • 1915: The Washington Post writes that “prices on electric cars will continue to drop until they are within reach of the average family.”(WP 1915)
  • 1959: The New York Times reports that the “Old electric may be the car of tomorrow.” The story said that electric cars were making a comeback because “gasoline is expensive today, principally because it is so heavily taxed, while electricity is far cheaper” than it was back in the 1920s (Ingraham 1959)
  • 1967: The Los Angeles Times says that American Motors Corporation is on the verge of producing an electric car, the Amitron, to be powered by lithium batteries capable of holding 330 watt-hours per kilogram. (That’s more than two times as much as the energy density of modern lithium-ion batteries.) Backers of the Amitron said, “We don’t see a major obstacle in technology. It’s just a matter of time.” (Thomas 1967)
  • 1979: The Washington Post reports that General Motors has found “a breakthrough in batteries” that “now makes electric cars commercially practical.” The new zinc-nickel oxide batteries will provide the “100-mile range that General Motors executives believe is necessary to successfully sell electric vehicles to the public.” (Knight, J. September 26, 1979. GM Unveils electric car, New battery. Washington Post, D7.)
  • 1980: In an opinion piece, the Washington Post avers that “practical electric cars can be built in the near future.” By 2000, the average family would own cars, predicted the Post, “tailored for the purpose for which they are most often used.” It went on to say that “in this new kind of car fleet, the electric vehicle could play a big role—especially as delivery trucks and two-passenger urban commuter cars. With an aggressive production effort, they might save 1 million barrels of oil a day by the turn of the century.” (WP 1980)

Lithium-ion batteries appear to be the winner for all-electric cars given Elon Musk’s new $5 billion li-ion battery factory in Nevada. Yet li-ion batteries have a very short cycling life of 5 to 10 years (depending on how the car is driven), after which they’re at just 70% of initial capacity, which is too low for practical driving range; and if a driver persists despite the degraded performance, the batteries will eventually fall to 50% of capacity, a certain end-of-life for li-ion (ADEME).
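A simple way to see how cycling life reaches the 70% threshold Friedemann cites is to model a fixed fractional capacity loss per charge cycle. The fade rate below is an illustrative assumption, not a measured figure:

```python
import math

# Illustrative li-ion capacity fade: a fixed fractional loss per full
# charge cycle. The rate is an assumption chosen for illustration only.

FADE_PER_CYCLE = 0.0002   # assumed 0.02% of capacity lost per cycle

def capacity_after(cycles: int) -> float:
    """Remaining fraction of initial capacity after n full cycles."""
    return (1.0 - FADE_PER_CYCLE) ** cycles

# Cycles until the 70% end-of-life threshold:
cycles_to_eol = math.log(0.7) / math.log(1.0 - FADE_PER_CYCLE)
print(f"~{cycles_to_eol:.0f} cycles to 70% capacity "
      f"(~{cycles_to_eol / 365:.1f} years at one full cycle per day)")
```

At that assumed rate a daily-cycled pack hits end-of-life in roughly five years, which is consistent with the 5-to-10-year range quoted above; gentler use stretches it toward the upper end.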

One reason people are so keen on electric cars is that they cost less to fuel.  But if electricity were $0.10 per kWh, filling a 53 kWh Tesla battery would take about 4 hours and cost $5.30; a full charge every day for 30 days comes to $159. I can fill up my gas tank in a few minutes for under $40.  I drive about 15 miles a day and can go 400 miles per fill-up, so I only get gas about once a month; I’d have to drive 60 miles a day to run the cost up to $159. And if your electricity costs less than ten cents now, it won’t always: shale gas is a one-time-only temporary boom that probably ends around 2020.  Got a dinkier battery than the Tesla that goes 80 miles or less at most?  Most people won’t consider buying an electric car until it goes 200 miles or more.
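Friedemann's running-cost comparison can be laid out as arithmetic. The electricity price and pack size are hers; the petrol figures are taken at face value from her own usage (under $40 per 400-mile tank, 15 miles a day):

```python
# Friedemann's monthly fuel-cost comparison, as arithmetic.

ELEC_PRICE = 0.10          # $/kWh, her assumed electricity price
BATTERY_KWH = 53           # Tesla pack size in her example

cost_per_charge = BATTERY_KWH * ELEC_PRICE        # $5.30 per full charge
monthly_ev_cost = 30 * cost_per_charge            # one full charge per day

miles_per_month = 15 * 30                         # her 15 miles/day
gas_cost_per_month = 40 * miles_per_month / 400   # one ~$40 tank per 400 miles

print(f"EV at a full charge per day: ${monthly_ev_cost:.2f}/month")
print(f"Petrol at her actual usage:  ${gas_cost_per_month:.2f}/month")
```

The comparison only favours the EV, of course, if you actually drive enough to need a full charge every day; at her 15 miles a day the petrol car is the cheaper of the two, which is her point.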

So why isn’t there a better battery yet?

The lead-acid battery hasn’t changed much since it was invented in 1859. It’s hard to invent new kinds of batteries or even improve existing ones, because although a battery looks simple, inside it’s a churning chaos of complex electrochemistry as the battery goes between being charged and discharged many times.

Charging and recharging are hard on a battery. Recharging is supposed to put Humpty Dumpty back together again, but over time the metals, liquids, gels, chemicals, and solids inside clog, corrode, crack, crystallize, become impure, leak, and break down.

A battery is like a football player, with increasing injuries and concussions over the season. An ideal battery would be alive, able to self-heal, secrete impurities, and recover from abuse.

The number of elements in the periodic table (118) is limited. Only a few have the best electron properties (like lithium), and others can be ruled out because they’re radioactive (39), rare earth and platinum group metals (23), inert noble gases (6), or should be ruled out: toxic (i.e. cadmium, cobalt, mercury, arsenic), hard to recycle, scarce, or expensive.
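Her process of elimination is simple arithmetic. The sketch below uses her category counts and ignores any overlap between categories, which the real periodic table does not guarantee:

```python
# Friedemann's elimination count over the periodic table.
# Category counts are hers; overlaps between categories are ignored.

TOTAL_ELEMENTS = 118
ruled_out = {
    "radioactive": 39,
    "rare earth / platinum group": 23,
    "inert noble gases": 6,
}

remaining = TOTAL_ELEMENTS - sum(ruled_out.values())
print(f"{remaining} candidate elements left, before also excluding the "
      f"toxic, scarce, hard-to-recycle and expensive ones")
```

Even before the "should be ruled out" categories, fewer than half the elements survive; after toxicity, scarcity and cost, the short list of practical battery chemistries is shorter still.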

There are many properties an ideal Energy Storage device would have:

  1. Small and light-weight to give vehicles a longer range
  2. High energy density like oil (energy stored per unit of weight)
  3. Recharge fast, tolerant of overcharge, undercharging, and over-discharge
  4. Store a lot of energy
  5. High power density, deliver a lot of power quickly
  6. Be rechargeable thousands of times while retaining 80% of their storage capacity
  7. Reliable and robust
  8. A long life, at least 10 years for a vehicle battery
  9. Made from very inexpensive, common, sustainable, recyclable materials
  10. Deliver power for a long time
  11. Won’t explode or catch on fire
  12. Long shelf life for times when not being used
  13. Perform well in low and high temperatures
  14. Able to tolerate vibration, shaking, and shocks
  15. Not use toxic materials during manufacture or in the battery itself
  16. Take very little energy to make from cradle-to-grave
  17. Need minimal to no maintenance

For example, in the real world, these are the priorities for heavy-duty hybrid trucks (NRC 2008):

  1. High Volumetric Energy Density (energy per unit volume)
  2. High Gravimetric Energy Density (energy per unit of weight, Specific Energy)
  3. High Volumetric Power Density (power per unit of volume)
  4. High Gravimetric Power Density (power per unit of weight, Specific Power)
  5. Low purchase cost
  6. Low operating cost
  7. Low recycling cost
  8. Long useful life
  9. Long shelf life
  10. Minimal maintenance
  11. High level of safety in collisions and rollover accidents
  12. High level of safety during charging
  13. Ease of charging method
  14. Minimal charging time
  15. Storable and operable at normal and extreme ambient temperatures
  16. High number of charge-discharge cycles, regardless of the depth of discharge
  17. Minimal environmental concerns during manufacturing, useful life, and recycling or disposal

Pick Any Two

In the real world, you can’t have all of the above. It’s like the sign “Pick any two: Fast (expensive), Cheap (crappy), or Good (slow)”.

So many different properties are demanded that “This is like wanting a car that has the power of a Corvette, the fuel efficiency of a Chevy Malibu, and the price tag of a Chevy Spark. This is hard to do. No one battery delivers both high power and high energy, at least not very well or for very long,” according to Dr. Jud Virden at the Pacific Northwest National Laboratory (House 114-18 2015).

You always give up something. Battery chemistry is complex. Anode, cathode, electrolyte, and membrane-separator materials must all work together; tweak any one of them and the battery may no longer work. Higher energy densities come from reactive, less stable chemicals that often result in non-rechargeable batteries, are susceptible to impurities, catch fire, and so on. Storing more energy might lower the voltage; a fast recharge might shorten the lifespan.

“You have to optimize many different things at the same time,” says Venkat Srinivasan, a transportation battery expert at Lawrence Berkeley National Laboratory in California. “It’s a hard, hard problem” (Service).

Conflicting demands. The main job of a battery is to store energy. Trying to make them discharge a lot of power quickly may be impossible. “If you want high storage, you can’t get high power,” said M. Stanley Whittingham, director of the Northeast Center for Chemical Energy Storage. “People are expecting more than what’s possible.”

Battery testing takes time. Every time a change is made, the individual cells, then the modules, then the overall pack are tested for one cycle and again for 50 cycles: voltage, current, cycle life (number of recharges), Ragone plot (energy and power density), charge and discharge time, self-discharge, safety (heat, vibration, external short circuit, overcharge, forced discharge, etc.), and many other parameters.

Batteries deteriorate. The more deeply you discharge a battery, the more often you charge and recharge it (cycles), or the more the car is exposed to temperatures below freezing or above 77°F (25°C), the shorter the battery’s life will be. Even doing nothing shortens battery life: Li-ion batteries lose charge when idle, so an old, unused battery will not last as long as a new one. Tesla engineers expect the power of the car’s battery pack to degrade by as much as 30% in five years (Smil). [ED. the exception of course being Nickel Iron batteries….. but they are not really suitable for EVs, even if that’s what they were originally invented for]

Batteries are limited by the physical laws of the universe, and lithium-ion batteries are getting close to theirs. According to materials scientist George Crabtree of Argonne National Laboratory, li-ion batteries are approaching the basic electrochemical limit on how much energy they can store. “If you really want electric cars to compete with gasoline, you’re going to need the next generation of batteries.” Rachid Yazami of Nanyang Technological University in Singapore says that this will require finding a new chemical basis for them. Although engineers have achieved a lot with lithium-ion batteries, it hasn’t been enough to charge electric cars very fast or to go 500 miles (Hodson 2015).

Be skeptical of battery breakthroughs. It takes ten years to improve an existing type of battery, and it’s expensive, since you need chemists, materials scientists, chemical and mechanical engineers, electrochemists, and computer and nanotechnology scientists. The United States isn’t training enough engineers to support a large battery industry, and within 5 years, 40% of full-time senior engineering faculty will be eligible for retirement.

Dr. Virden says that “you see all kinds of press releases about a new anode material that’s five times better than anything out there, and it probably is, but when you put that in with an electrolyte and a cathode, and put it together and then try to scale it, all kinds of things don’t work. Materials start to fall apart, the chemistry isn’t well known, there’s side reactions, and usually what that leads to is loss of performance, loss of safety. And we as fundamental scientists don’t understand those basic mechanisms. And we do really undervalue the challenge of scale-up. In every materials process I see, in an experiment in a lab like this big, it works perfectly. Then when you want to make thousands of them, it doesn’t” (House 114-18).

We need a revolutionary new battery that takes less than 10 years to develop

“We need to leapfrog the engineering of making of batteries,” said Lawrence Berkeley National Lab battery scientist Vince Battaglia. “We’ve got to find the next big thing.”

Dr. Virden testified at a U.S. House hearing that “despite many advances, we still have fundamental gaps in our understanding of the basic processes that influence battery operation, performance, limitations, and failures” (House 114-18 2015).

But none of the 10 experts who talked to The Associated Press said they know what that big thing will be yet, or when it will come (Borenstein).

The Department of Energy (DOE) says that incremental improvements won’t electrify cars and energy storage fast enough. Scientists need to understand the laws of battery physics better. To do that, we need to be able to observe what’s going on inside the battery at an atomic scale in femtoseconds (.000000000000001 second), build nanoscale materials/tubes/wires to improve ion flow etc., and write complex models and computer programs that use this data to better predict what might happen every time some aspect of the battery is meddled with to zero in on the best materials to use.

Are you kidding? Laws of Physics? Femtoseconds? Atomic Scale? Nanoscale technology — that doesn’t exist yet?

Extremely energy-dense batteries for autos are impossible because of the laws of Physics and the “Pick any Two” problem

There’s only so much energy you can force into a black box, and it’s a lot less than the energy contained in oil: pound for pound, the most energy a battery could ever contain is only around 6 percent of oil’s. The energy density of oil is about 500 times higher than that of a lead-acid battery (House), which is why it takes 1,200 pounds of lead-acid batteries to move a car 50 miles.
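A back-of-the-envelope check of that 500× figure, using typical handbook values (~46 MJ/kg for oil and ~25 Wh/kg for lead-acid cells; both figures are my assumptions, not from House):

```python
# Rough energy-density comparison: oil vs. lead-acid batteries.
# Assumed handbook values, not from the article:
OIL_MJ_PER_KG = 46.0          # crude oil / diesel, ~46 MJ/kg
LEAD_ACID_WH_PER_KG = 25.0    # lead-acid cell, ~25 Wh/kg

oil_wh_per_kg = OIL_MJ_PER_KG * 1e6 / 3600   # convert MJ/kg -> Wh/kg
ratio = oil_wh_per_kg / LEAD_ACID_WH_PER_KG

print(f"Oil: {oil_wh_per_kg:.0f} Wh/kg; ratio vs lead-acid: ~{ratio:.0f}x")
```

The ratio lands right around 500, consistent with the claim.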

Even though an electric vehicle needs only a quarter of the energy of a gasoline vehicle to deliver the same energy to the wheels, this efficiency advantage is more than offset by a battery’s much lower energy density compared to gasoline, as can be seen in the much greater weight and space a battery requires. For example, the 85 kWh battery in a Tesla Model S weighs 1,500 pounds (Tesla 2014), while the gasoline containing the equivalent energy, about 9 gallons, weighs 54 pounds. The 1,500-pound weight of a Tesla battery is equal to 7 extra passengers, and reduces the acceleration and range that could otherwise be realized (NRC 2015).
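The Tesla comparison can be reproduced with a few assumed figures (gasoline at 33.7 kWh/gal and ~6.1 lb/gal, an ICE drivetrain at ~28% efficiency; these numbers are mine, not from the NRC report):

```python
# Reproducing the Tesla Model S battery-vs-gasoline weight comparison.
# Assumed values (not from the source):
BATTERY_KWH = 85.0        # Tesla Model S pack
BATTERY_LB = 1500.0
GAS_KWH_PER_GAL = 33.7    # energy content of gasoline
GAS_LB_PER_GAL = 6.1      # weight of gasoline
ICE_EFFICIENCY = 0.28     # assumed engine efficiency, so we compare
                          # useful energy delivered to the wheels

# Gallons of gasoline whose useful (wheel) energy matches the pack:
equiv_gallons = BATTERY_KWH / (GAS_KWH_PER_GAL * ICE_EFFICIENCY)
equiv_gas_lb = equiv_gallons * GAS_LB_PER_GAL

print(f"~{equiv_gallons:.0f} gal (~{equiv_gas_lb:.0f} lb) of gasoline "
      f"vs {BATTERY_LB:.0f} lb of battery")
```

That works out to roughly 9 gallons weighing about 55 pounds, matching the text's figures.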

Lithium batteries are more powerful, but even so, oil has 120 times the energy density of a lithium battery pack. Increased driving ranges of electric cars have come more from weight reduction, drag reduction, and decreased rolling resistance than improved battery performance.

The amount of energy that can be stored in a battery depends on the potential chemical energy of its materials, set by their electron properties. The most you could ever get is about 6 volts, from pairing lithium (the most negative standard electrode potential) with fluorine (the most positive). But for many reasons a lithium-fluoride or fluoride battery is not in sight and may never work out (not rechargeable, unstable, unsafe, inefficient, solvents and electrolytes can’t handle the voltages generated, lithium fluoride crystallizes and doesn’t conduct electricity, etc.).
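The "about 6 volts" ceiling falls straight out of the standard electrode potential tables (the values below are textbook figures, not from the article):

```python
# Theoretical cell voltage of the extreme lithium-fluorine couple,
# from standard electrode potentials (volts vs. the standard hydrogen
# electrode; textbook values):
E_LITHIUM = -3.04    # Li+ + e-  -> Li   (most negative in the table)
E_FLUORINE = +2.87   # F2 + 2e- -> 2F-  (most positive in the table)

cell_voltage = E_FLUORINE - E_LITHIUM   # cathode minus anode
print(f"Theoretical Li-F cell: {cell_voltage:.2f} V")
```

No other pair of elements gives a larger difference, which is why ~6 V is the hard ceiling for any chemical battery.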

The DOE has found that lithium-ion batteries are the only chemistry promising enough to use in electric cars. There are “several Li-ion chemistries being investigated… but none offers an ideal combination of energy density, power capability, durability, safety, and cost” (NAS 2013).

Lithium batteries can generate up to 3.8 volts but have to use non-aqueous electrolytes (because water breaks down beyond a roughly 2-volt window), which gives them a relatively high internal impedance.

They can be unsafe. A thermal runaway in one cell can reach temperatures of 932°F (500°C) and spread to other cells in the module or pack.

There are many other problems with all-electric cars

It will take decades or more to replace the existing fleet with electric cars, if batteries ever do get cheap and powerful enough. Even if all 16 million vehicles purchased every year were electric, the U.S. fleet of 250 million passenger vehicles would take over 15 years to replace. But only about 120,000 electric cars were sold in 2014; at that rate, replacing the whole fleet would take roughly 2,000 years.
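The turnover arithmetic is easy to check:

```python
# Fleet-turnover arithmetic for replacing the U.S. passenger fleet with EVs.
FLEET = 250_000_000        # U.S. passenger vehicles on the road
ALL_SALES = 16_000_000     # total vehicles sold per year
EV_SALES_2014 = 120_000    # electric cars sold in 2014

years_if_all_ev = FLEET / ALL_SALES        # if every sale were an EV
years_at_2014_rate = FLEET / EV_SALES_2014 # at the actual 2014 EV rate

print(f"~{years_if_all_ev:.0f} years if every sale were electric; "
      f"~{years_at_2014_rate:.0f} years at the 2014 EV sales rate")
```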

Electric cars are too expensive. The median household income of an electric car buyer is $148,158, versus $83,166 for a gasoline car buyer; the U.S. median household income was only $51,939 in 2014. The Tesla Model S tends to be bought by relatively wealthy individuals, primarily men with higher incomes who paid cash and did not seriously consider purchasing another vehicle (NRC 2015).

And when gasoline prices began to drop in 2014, people stopped buying EVs and started buying gas guzzlers again.

Autos aren’t the game-changer for the climate or for saving energy that they’re claimed to be. They account for just 20% of the oil wrung out of a barrel; trucks, ships, manufacturing, rail, airplanes, and buildings use the other 80%.

And the cost of electric cars is expected to be greater than internal combustion engine and hybrid electric autos for the next two decades (NRC 2013).

The average car buyer wants a low-cost, long-range vehicle. A car that gets 30 mpg would require a “prohibitively long-to-charge, expensive, heavy, and bulky” 78 kWh battery to go 300 miles, which costs about $35,000 now. Future battery costs are hard to estimate, and right now some “battery companies sell batteries below cost to gain market share” (NAS 2013). Most new cathode materials are high-cost nickel and cobalt materials.

Rapid charging and discharging can shorten the lifetime of the cell. This is particularly important because the goal is 10 to 15 years of service for automotive applications, the average lifetime of a car. Replacing the battery would be a very expensive repair, even as costs decline (NAS 2013).

It is unclear that consumer demand will be sufficient to sustain the U.S. advanced battery industry. It takes up to $300 million to build one lithium-ion plant to supply batteries for 20,000 to 30,000 plug-in or electric vehicles (NAE 2012).

Almost all electric cars use up to 3.3 pounds of rare-earth elements in interior permanent magnet motors. China currently has a near monopoly on the production of rare-earth materials, which has led DOE to search for technologies that eliminate or reduce rare-earth magnets in motors (NAS 2013).

Natural-gas-generated electricity is likely to become far more expensive when the fracking boom peaks (projected 2015–2019), and coal-generated electricity after coal supplies peak sometime between now and 2030.

100 million electric cars would require ninety 1,000-MWe power plants, plus transmission and distribution infrastructure, costing at least $400 billion. A plant can take from several years to over a decade to build (NAS 2013).

By the time the electricity reaches a car, roughly two-thirds of the primary energy has been lost: generation plants are only about 40% efficient, and another 10% goes to in-plant consumption and transmission losses. So 11 MWh of primary energy is needed to supply the 4 MWh the average car consumes, equivalent to about 38 mpg, lower than many gasoline or hybrid cars (Smil).

Two-thirds of the electricity generated comes from fossil fuels (coal 39%, natural gas 27%), and coal power continues to gain market share (Birnbaum). Six percent of electricity is lost over transmission lines, and power plants are only 40% efficient on average; it would be more efficient for cars to burn natural gas directly than to run on electricity generated from natural gas once the losses of delivering electricity to the car are added in (proponents say electric cars are more efficient because they leave this out of the equation). Drought is reducing hydropower across the West, where most of the hydropower is, and it will take decades to scale up wind, solar, and other alternative energy resources.

The additional energy demand from 100 million PEVs in 2050 is about 286 billion kWh which would require new generating capacity of ninety 1,000 MW plants costing $360 billion, plus another $40 billion for high-voltage transmission and other additions (NAS 2013).
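A sanity check of the "ninety 1,000 MW plants" figure; the ~36% average capacity factor is my assumption, chosen to see whether the NAS numbers hang together:

```python
# How many 1,000-MW plants does 286 billion kWh/year of new demand imply?
EXTRA_DEMAND_KWH = 286e9     # added demand from 100 million PEVs (NAS 2013)
PLANT_MW = 1000.0
HOURS_PER_YEAR = 8760
CAPACITY_FACTOR = 0.36       # assumed fleet-average capacity factor (mine)

annual_kwh_per_plant = PLANT_MW * 1000 * HOURS_PER_YEAR * CAPACITY_FACTOR
plants_needed = EXTRA_DEMAND_KWH / annual_kwh_per_plant

print(f"~{plants_needed:.0f} plants at a {CAPACITY_FACTOR:.0%} capacity factor")
```

At a realistic capacity factor the answer comes out right around ninety plants.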

An even larger problem is recharge time. Unless batteries can be developed that can be recharged in 10 minutes or less, cars will be limited largely to local travel in an urban or suburban environment (NAS 2013). Long distance travel would require at least as many charging stations as gas stations (120,000).

Level 1 charging takes too long; level 2 chargers add to overall purchase costs. Level 1 is the basic amount delivered at home. A fully discharged 85 kWh Tesla Model S battery would take more than 61 hours to recharge, and a 21 kWh Nissan Leaf battery over 17 hours. So the total cost of electric cars should also include the cost of a level 2 charger, not just the cost of the car itself (NRC 2015).
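The Level 1 times quoted above follow from simple division (assuming a ~1.4 kW household outlet, my figure; real-world times run somewhat longer because charging is not lossless, which is why the NRC's Leaf number is a bit higher than this ideal estimate):

```python
# Level 1 charging time estimate: hours = pack energy / outlet power.
LEVEL1_KW = 1.4   # assumed power from a standard 120 V outlet

def level1_hours(pack_kwh, outlet_kw=LEVEL1_KW):
    """Hours to recharge a fully discharged pack, ignoring charging losses."""
    return pack_kwh / outlet_kw

tesla_hours = level1_hours(85)   # Tesla Model S pack
leaf_hours = level1_hours(21)    # Nissan Leaf figure used in the text

print(f"Tesla: ~{tesla_hours:.0f} h; Leaf: ~{leaf_hours:.0f} h (ideal, lossless)")
```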

Fast charging is expensive, with level 3 chargers running $15,000 to $60,000. At a recharging station, a $15,000 level 3 charger would return a profit of only about $60 per year, with the electricity costing more than gasoline (Hillebrand 2012). Level 3 fast charging is also bad for batteries, requires expensive infrastructure, and is likely to draw peak-load electricity, with higher cost, lower efficiency, and higher GHG emissions.

Battery swapping has many problems: battery packs would need to be standardized, an expensive inventory of different types and sizes of battery packs would need to be kept, the swapping station needs to start charging right away during daytime peak electricity, batteries deteriorate over time, customers won’t like older batteries not knowing how far they can go on them, and seasonal travel could empty swapping stations of batteries.

Argonne National Laboratory looked at the economics of battery swapping (Hillebrand 2012), which would require standardized batteries and enough light-duty vehicles to justify the infrastructure. They assumed that a current EV battery pack costs $12,000 to replace (a figure they considered wildly optimistic). A 5% annual return on that $12,000 is $600, and a 3-year battery life means an amortization cost of $4,000 per year, so each pack must earn more than $4,600 per year. They concluded that to make a profit on battery swapping, each car would have to drive 1,300 miles per day per battery pack! An EV battery is therefore roughly 20 times too expensive for the swap mode.
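Hillebrand's cost figures can be reproduced directly:

```python
# Hillebrand's battery-swap economics: the annual cost a swap pack must recover.
PACK_COST = 12_000       # assumed EV pack replacement cost (called optimistic)
ROI_RATE = 0.05          # required 5% annual return on investment
PACK_LIFE_YEARS = 3      # assumed battery life

annual_roi = PACK_COST * ROI_RATE                   # $600/year
annual_amortization = PACK_COST / PACK_LIFE_YEARS   # $4,000/year
required_annual_return = annual_roi + annual_amortization

print(f"Each swap pack must earn ${required_annual_return:,.0f} per year")
```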

Lack of domestic supply base. To be competitive in electrified vehicles, the United States also requires a domestic supply base of key materials and components such as special motors, transmissions, brakes, chargers, conductive materials, foils, electrolytes, and so on, most of which come from China, Japan, or Europe. The supply chain adds significant costs to making batteries, but it’s not easy to shift production to America because electric and hybrid car sales are too few, and each auto maker has its own specifications (NAE 2012).

The embodied energy (“oiliness”) of batteries is enormous. The energy needed to make Tesla’s lithium-ion batteries is huge, substantially subtracting from the energy returned on energy invested (EROEI) (Batto 2017).

Ecological damage. Mining, and the toxic chemicals used to make batteries, pollute water and soil and harm health and wildlife.

The energy required to charge them (Smil)

An electric version of a car typical of today’s typical American vehicle (a composite of passenger cars, SUVs, vans, and light trucks) would require at least 150 Wh/km; and the distance of 20,000 km driven annually by an average vehicle would translate to 3 MWh of electricity consumption. In 2010, the United States had about 245 million passenger cars, SUVs, vans, and light trucks; hence, an all-electric fleet would call for a theoretical minimum of about 750 TWh/year. This approximation allows for the rather heroic assumption that all-electric vehicles could be routinely used for long journeys, including one-way commutes of more than 100 km. And the theoretical total of 3 MWh/car (or 750 TWh/year) needs several adjustments to make it more realistic. The charging and recharging cycle of the Li-ion batteries is about 85 percent efficient, and about 10 percent must be subtracted for self-discharge losses; consequently, the actual need would be close to 4 MWh/car, or about 980 TWh of electricity per year. This is a very conservative calculation, as the overall demand of a midsize electric vehicle would be more likely around 300 Wh/km, or 6 MWh/year. But even this conservative total would be equivalent to roughly 25% of the U.S. electricity generation in 2008, and the country’s utilities needed 15 years (1993–2008) to add this amount of new production.
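Smil's chain of numbers can be reproduced step by step (the raw multiplication gives 735 TWh, which rounds to the "about 750 TWh" in the text):

```python
# Smil's fleet-electrification estimate, step by step.
WH_PER_KM = 150            # minimum consumption of a typical U.S. vehicle
KM_PER_YEAR = 20_000       # average annual distance driven
FLEET = 245e6              # U.S. light-duty vehicles, 2010
CHARGE_EFFICIENCY = 0.85   # Li-ion charge/recharge cycle efficiency
SELF_DISCHARGE = 0.10      # additional self-discharge losses

mwh_per_car = WH_PER_KM * KM_PER_YEAR / 1e6            # 3 MWh/car theoretical
fleet_twh = mwh_per_car * FLEET / 1e6                  # ~735 TWh theoretical
actual_mwh_per_car = mwh_per_car / CHARGE_EFFICIENCY / (1 - SELF_DISCHARGE)
actual_fleet_twh = actual_mwh_per_car * FLEET / 1e6    # ~960-980 TWh realistic

print(f"{mwh_per_car:.1f} MWh/car -> ~{fleet_twh:.0f} TWh theoretical; "
      f"~{actual_fleet_twh:.0f} TWh after charging and self-discharge losses")
```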

The average source-to-outlet efficiency of U.S. electricity generation is about 40 percent and, adding 10 percent for internal power plant consumption and transmission losses, this means that 11 MWh (nearly 40 GJ) of primary energy would be needed to generate electricity for a car with an average annual consumption of about 4 MWh.

This would translate to 2 MJ for every kilometer of travel, a performance equivalent to about 38 mpg (6.25 L/100 km)—a rate much lower than that offered by scores of new pure gasoline-engine car models, and inferior to advanced hybrid drive designs.
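The primary-energy and mpg-equivalent figures follow from the same inputs (the 32 MJ/L gasoline energy content is my assumption, back-solved from the 6.25 L/100 km figure):

```python
# From 4 MWh at the wall to primary energy and an mpg-equivalent (after Smil).
MWH_AT_CAR = 4.0              # annual electricity consumed by the car
GENERATION_EFFICIENCY = 0.40  # average power-plant efficiency
PLANT_AND_GRID_LOSS = 0.10    # in-plant consumption + transmission losses
KM_PER_YEAR = 20_000
GAS_MJ_PER_L = 32.0           # assumed gasoline energy content

primary_mwh = MWH_AT_CAR / (GENERATION_EFFICIENCY * (1 - PLANT_AND_GRID_LOSS))
primary_gj = primary_mwh * 3.6                 # MWh -> GJ
mj_per_km = primary_gj * 1000 / KM_PER_YEAR    # MJ of primary energy per km
l_per_100km = mj_per_km / GAS_MJ_PER_L * 100
mpg = 235.215 / l_per_100km                    # L/100 km <-> mpg conversion

print(f"~{primary_mwh:.0f} MWh primary ({primary_gj:.0f} GJ), "
      f"{mj_per_km:.1f} MJ/km, {l_per_100km:.2f} L/100 km ≈ {mpg:.0f} mpg")
```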

The latest European report on electric cars—appropriately entitled How to Avoid an Electric Shock—offers analogous conclusions. A complete shift to electric vehicles would require a 15% increase in the European Union’s electricity consumption, and electric cars would not reduce CO2 emissions unless all that new electricity came from renewable sources.

Inherently low load factors of wind or solar generation, typically around 25 percent, mean that adding nearly 1 PWh of renewable electricity generation would require installing about 450 GW in wind turbines and PV cells, an equivalent of nearly half of the total U.S. capability in 2007.
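The 450 GW figure checks out:

```python
# Nameplate capacity needed to add ~1 PWh/year of wind/solar generation
# at a 25% load factor.
TARGET_TWH = 980        # the ~980 TWh (nearly 1 PWh) figure above
LOAD_FACTOR = 0.25      # typical wind/solar load factor
HOURS_PER_YEAR = 8760

required_gw = TARGET_TWH * 1000 / (HOURS_PER_YEAR * LOAD_FACTOR)
print(f"~{required_gw:.0f} GW of nameplate wind/solar capacity")
```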

The National Research Council found that for electric vehicles to become mainstream, significant battery breakthroughs are required to lower cost, extend driving range, cut refueling time, and improve safety. Battery life is not known for the first generation of PEVs. Performance degradation in hybrid car batteries is hardly noticed, since the gasoline combustion engine kicks in, but with a PEV there is no hiding reduced performance. If this happens within the roughly 15-year lifespan of a vehicle, that will be a problem. PEVs already cost thousands more than an ICE vehicle, and their batteries carry a limited warranty of 5–8 years. A Nissan Leaf battery replacement is $5,500, which Nissan admits is below cost (NAS 2015).

Cold weather increases energy consumption

[Figure: cold weather increases energy consumption. Source: Argonne National Laboratory]

On a cold day an electric car consumes its stored electric energy quickly because of the extra electricity needed to heat the car.  For example, the range of a Nissan Leaf is 84 miles on the EPA test cycle, but if the owner drives 90% of the time over 70 mph and lives in a cold climate, the range could be as low as 50 miles (NRC 2015).

 

References

ADEME. 2011. Study on the second life batteries for electric and plug-in hybrid vehicles.

Batto, A. B. 2017. The ecological challenges of Tesla’s Gigafactory and the Model 3. AmosBatto.wordpress.com

Birnbaum, M. November 23, 2015. Electric cars and the coal that runs them. Washington Post.

Borenstein, S. Jan 22, 2013. What holds energy tech back? The infernal battery. Associated Press.

Hillebrand, D. October 8, 2012. Advanced Vehicle Technologies; Outlook for Electrics, Internal Combustion, and Alternate Fuels. Argonne National Laboratory.

Hiscox, G. 1901. Horseless Vehicles, Automobiles, Motor Cycles. Norman Henley & Co.

Hodson, H. July 25, 2015. Power to the people. NewScientist.

House, Kurt Zenz. 20 Jan 2009. The limits of energy storage technology. Bulletin of the Atomic Scientists.

House 114-18. May 1, 2015. Innovations in battery storage for renewable energy. U.S. House of Representatives. 88 pages.

NAE. 2012. National Academy of Engineering. Building the U.S. Battery Industry for Electric Drive Vehicles: Summary of a Symposium. National Research Council

NAS 2013. National Academy of Sciences. Transitions to Alternative Vehicles and Fuels. Committee on Transitions to Alternative Vehicles and Fuels; Board on Energy and Environmental Systems; Division on Engineering and Physical Sciences; National Research Council

NAS. 2015. Cost, effectiveness and deployment of fuel economy tech for Light-Duty vehicles. National Academy of Sciences. 613 pages.

NRC. 2008. Review of the 21st Century Truck Partnership. National Research Council, National Academy of Sciences.

NRC. 2013. Overcoming Barriers to Electric-Vehicle Deployment, Interim Report. Washington, DC: National Academies Press.

NRC. 2015. Overcoming Barriers to Deployment of Plug-in Electric Vehicles. National Research Council, National Academies Press.

NYT. November 12, 1911. Foreign trade in Electric vehicles. New York Times, C8.

Service, R. 24 Jun 2011. Getting there. Better Batteries. Science Vol 332 1494-96.

Smil, V. 2010. Energy Myths and Realities: Bringing Science to the Energy Policy Debate. AEI Press.

Tesla. 2014. “Increasing Energy Density Means Increasing Range.”
http://www.teslamotors.com/roadster/technology/battery.

Thomas, B. December 17, 1967. AMC does a turnabout: starts running in black. Los Angeles Times, K10.

WP. October 31, 1915. Prophecies come true. Washington Post, E18.

WP. June 7, 1980. Plug ’Er In? Washington Post, A10.





Human domination of the biosphere: Rapid discharge of the earth-space battery foretells the future of humankind

27 07 2015

Chris Harries, a follower of this blog, has found an amazing pdf file on XRayMike’s blog that explains civilisation’s predicaments so well I just had to write it up for you all to share around.  I think that the concept of the Earth as a chemical battery is simply stunning…….. the importance of this paper, I think, is epic.

The paper, written by John R. Schramski, David K. Gattie, and James H. Brown, begins with clarity…

Earth is a chemical battery where, over evolutionary time with a trickle-charge of photosynthesis using solar energy, billions of tons of living biomass were stored in forests and other ecosystems and in vast reserves of fossil fuels. In just the last few hundred years, humans extracted exploitable energy from these living and fossilized biomass fuels to build the modern industrial-technological-informational economy, to grow our population to more than 7 billion, and to transform the biogeochemical cycles and biodiversity of the earth. This rapid discharge of the earth’s store of organic energy fuels the human domination of the biosphere, including conversion of natural habitats to agricultural fields and the resulting loss of native species, emission of carbon dioxide and the resulting climate and sea level change, and use of supplemental nuclear, hydro, wind, and solar energy sources. The laws of thermodynamics governing the trickle-charge and rapid discharge of the earth’s battery are universal and absolute; the earth is only temporarily poised a quantifiable distance from the thermodynamic equilibrium of outer space. Although this distance from equilibrium is comprised of all energy types, most critical for humans is the store of living biomass. With the rapid depletion of this chemical energy, the earth is shifting back toward the inhospitable equilibrium of outer space with fundamental ramifications for the biosphere and humanity. Because there is no substitute or replacement energy for living biomass, the remaining distance from equilibrium that will be required to support human life is unknown.

To illustrate this stunning concept of the Earth as a battery, this clever illustration is used:

[Figure 1]

That just makes so much sense, and makes such mockery of those who believe ‘innovation’ can replace this extraordinary system.

It took hundreds of millions of years for photosynthetic plants to trickle-charge the battery, gradually converting diffuse low-quality solar energy to high-quality chemical energy stored temporarily in the form of living biomass and more lastingly in the form of fossil fuels: oil, gas, and coal. In just the last few centuries—an evolutionary blink of an eye—human energy use to fuel the rise of civilization and the modern industrial-technological-informational society has discharged the earth-space battery.

So then, how long have we got before the battery’s flat?

The laws of thermodynamics dictate that the difference in rate and timescale between the slow trickle-charge and rapid depletion is unsustainable. The current massive discharge is rapidly driving the earth from a biosphere teeming with life and supporting a highly developed human civilization toward a barren moonscape.

The truly surprising thing is how much I’ve been feeling this was the case, and for how long…..  the ever-lowering ERoEI of the energy sources we insist on using is merely a signal of entropy, and it doesn’t matter how clever or how innovative we are: entropy rules.  People with green dreams of renewables-powered EVs and houses and businesses simply do not understand entropy.

Energy in Physics and Biology

The laws of thermodynamics are incontrovertible; they have inescapable ramifications for the future of the biosphere and humankind. We begin by explaining the thermodynamic concepts necessary to understand the energetics of the biosphere and humans within the earth-space system. The laws of thermodynamics and the many forms of energy can be difficult for non-experts. However, the earth’s flows and stores of energy can be explained in straightforward terms to understand why the biosphere and human civilization are in energy imbalance. These physical laws are universal and absolute; they apply to all human activities, and they are the universal key to sustainability.

The Paradigm of the Earth-Space Battery

By definition, the quantity of chemical energy concentrated in the carbon stores of planet Earth (positive cathode) represents the distance from the harsh thermodynamic equilibrium of nearby outer space (negative anode). This energy gradient sustains the biosphere and human life. It can be modeled as a once-charged battery. This earth-space chemical battery (Fig. 1) trickle charged very slowly over 4.5 billion years of solar influx and accumulation of living biomass and fossil fuels. It is now discharging rapidly due to human activities. As we burn organic chemical energy, we generate work to grow our population and economy. In the process, the high-quality chemical energy is transformed into heat and lost from the planet by radiation into outer space. The flow of energy from cathode to anode is moving the planet rapidly and irrevocably closer to the sterile chemical equilibrium of space.

[Figure 2]

Fig. 2 depicts the earth’s primary higher-quality chemical and nuclear energy storages as their respective distances from the equilibrium of outer space. We follow the energy industry in focusing on the higher-quality pools and using “recoverable energy” as our point of reference, because many deposits of fossil fuels and nuclear ores are dispersed or inaccessible and cannot be currently harvested to yield net energy gain and economic profit (4). The very large lower-quality pools of organic energy including carbon compounds in soils and oceanic sediments (5, 6) are not shown, but these are not currently economically extractable and usable, so they are typically not included in either recoverable or nonrecoverable categories. Although the energy gradients attributed to geothermal cooling, ocean thermal gradients, greenhouse air temperatures, etc., contribute to Earth’s thermodynamic distance from the equilibrium of space, they are also not included as they are not chemical energies and presumably would still exist in some form on a planet devoid of living things, including humans. Fig. 2 shows that humans are currently discharging all of the recoverable stores of organic chemical energy to the anode of the earth-space battery as heat.

Most people who argue about the viability of their [insert favorite technology] only see that viability in terms of money.  Energy, to most people is such a nebulous concept that they do not see the failures of their techno Utopian solutions…….

[Figure 3]

Living Biomass Is Depleting Rapidly

At the time of the Roman Empire and the birth of Christ, the earth contained ∼1,000 billion tons of carbon in living biomass (10), equivalent to 35 ZJ of chemical energy, mostly in the form of trees in forests. In just the last 2,000 y, humans have reduced this by about 45% to ∼550 billion tons of carbon in biomass, equivalent to 19.2 ZJ. The loss has accelerated over time, with 11% depleted just since 1900 (Fig. 3) (11, 12). Over recent years, on average, we are harvesting—and releasing as heat and carbon dioxide—the remaining 550 billion tons of carbon in living biomass at a net rate of ∼1.5 billion tons carbon per year (13, 14). The cause and measurement of biomass depletion are complicated issues, and the numbers are almost constantly being reevaluated (14). The depletion is due primarily to changes in land use, including deforestation, desertification, and conversion of vegetated landscapes into barren surfaces, but also secondarily to other causes such as pollution and unsustainable forestry and fisheries. Although the above quantitative estimates have considerable uncertainty, the overall trend and magnitude are inescapable facts with dire thermodynamic consequences.
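The paper's energy figures are internally consistent with a heating value of about 35 MJ per kilogram of carbon (my inference from the numbers given, not a value stated in the text):

```python
# Converting the paper's biomass carbon stocks into chemical energy.
MJ_PER_KG_CARBON = 35.0   # implied energy content per kg of carbon (inferred)
TONS_YEAR_0 = 1000e9      # tons of carbon in living biomass ~2,000 years ago
TONS_NOW = 550e9          # tons of carbon in living biomass today

def to_zj(tons_carbon):
    """Metric tons of carbon -> zettajoules of chemical energy."""
    return tons_carbon * 1000 * MJ_PER_KG_CARBON * 1e6 / 1e21

depletion = 1 - TONS_NOW / TONS_YEAR_0

print(f"{to_zj(TONS_YEAR_0):.0f} ZJ then, {to_zj(TONS_NOW):.2f} ZJ now "
      f"({depletion:.0%} depleted)")
```

That reproduces the 35 ZJ, ~19.2 ZJ, and ~45% figures quoted above.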

The Dominant Role of Humans

Homo sapiens Is a Unique Species.

The history of humankind—starting with hunter-gatherers, who learned to obtain useful heat energy by burning wood and dung, and continuing to contemporary humans, who apply the latest technologies, such as fracking, solar panels, and wind turbines—is one of innovating to use all economically exploitable energy sources at an ever increasing rate (12, 15). Together, the biological imperative of the Malthusian-Darwinian dynamic to use all available resources and the social imperative to innovate and improve human welfare have resulted in at least 10,000 years of virtually uninterrupted population and economic growth: from a few million hunter-gatherers to more than 7 billion modern humans and from a subsistence economy based on sustainable use of plants and animals (i.e., in equilibrium with photosynthetic energy production) to the modern industrial-technological-informational economy (i.e., out of equilibrium due to the unsustainable unidirectional discharge of the biomass battery).

Fig. 4 depicts the multiplier effect of two large numbers that determine the rapid discharge rate of the earth‐space battery. Energy use per person multiplied by population gives total global energy consumption by humans. According to British Petroleum’s numbers (16), which most experts accept, in 2013, average per capita energy use was 74.6 × 10⁹ J/person per year (equivalent to ∼2,370 W if plotted in green in Fig. 4). Multiplying this by the world population of 7.1 billion in 2013 gives a total consumption of ∼0.53 ZJ/y (equivalent to 16.8 TW if plotted in red in Fig. 4), which is greater than 1% of the total recoverable fossil fuel energy stored in the planet (i.e., 0.53 ZJ/40 ZJ = 1.3%). As time progresses, the population increases, and the economy grows, the outcome of multiplying these two very large numbers is that the total rate of global energy consumption is growing at a near-exponential rate.

[Fig. 4: per capita energy use (green) and total global energy consumption (red) over time]

ANY follower of this blog should recognise the peak in the green line as a sure sign of Limits to Growth… while everything else – population and total energy consumption – skyrockets exponentially, fooling the techno-utopians into a feeling of security like the one you might enjoy in a nice new modern car on its way to a fatal accident with no survivors… everything is going just fine, until it isn’t.

Ironically, powerful political and market forces, rather than acting to conserve the remaining charge in the battery, actually push in the opposite direction, because the pervasive efforts to increase economic growth will require increased energy consumption (4, 8). Much of the above information has been presented elsewhere, but in different forms (e.g., in the references cited). Our synthesis differs from most of these treatments in two respects: (i) it introduces the paradigm of the earth‐space battery to provide a new perspective, and (ii) it emphasizes the critical importance of living biomass for global sustainability of both the biosphere and human civilization.

Humans and Phytomass

We can be more quantitative and put this into context by introducing a new sustainability metric,

Ω = P / (B·N), [1]

which purposefully combines perhaps the two most critical variables affecting the energy status of the planet: total phytomass and human population. Eq. 1 accomplishes this combination by dividing the stored phytomass chemical energy P (in joules) by the energy needed to feed the global population for 1 y (joules per year; Fig. 5). The denominator represents the basic (metabolic) energy need of the human population; it is obtained by multiplying the global population N by the per capita metabolic need for 1 y (B = 3.06 × 10⁹ joules/person per year, as calculated from a diet of 8.4 × 10⁶ joules/person per day). The simple expression for Ω gives the number of years that the global phytomass store could feed the human race at current rates of consumption. By making the conservative but totally unrealistic assumption that all phytomass could be harvested to feed humans (i.e., that all of it is edible), we get an absolute maximum estimate of the number of years of food remaining for humankind. Fig. 5 shows that over the years 0–2000, Ω has decreased predictably and dramatically from 67,000 y to 1,029 y (for example, in the year 2000, P = 19.3 × 10²¹ joules, B = 3.06 × 10⁹ joules/person per year, and N = 6.13 × 10⁹ persons; thus, Ω = 1,029 y). In just 2,000 y, our single species has reduced Ω by 98.5%.

The above is a drastic underestimate for four reasons. First, we obviously cannot consume all phytomass stores for food; the preponderance of phytomass runs the biosphere. Second, basing our estimate on human biological metabolism does not include the high rate of extrametabolic energy expenditure currently being used to feed the population and fuel the economy. Third, the above estimate does not account for the fact that both the global human population and the per capita rate of energy use are not constant, but increasing at near-exponential rates.
We do not attempt to extrapolate to predict future trajectories, which must ultimately turn downward as essential energy stocks are depleted. Finally, we emphasize that not only has the global store of phytomass energy decreased rapidly, but, more importantly, human dominance over the remaining portion has increased rapidly. Long before the hypothetical deadline when the global phytomass store is completely exhausted, the energetics of the biosphere and all its inhabitant species will have been drastically altered, with profound changes in biogeochemical function and remaining biodiversity. The very conservative Ω index shows how rapidly land use changes, NPP appropriation, pollution, and other activities are depleting phytomass stores to fuel the current near-exponential trajectories of population and economic growth. Because the Ω index is conservative, it also emphasizes how very little time is left to make changes and achieve a sustainable future for the biosphere and humanity. We are already firmly within the zone of scientific uncertainty where some perturbation could trigger a catastrophic state shift in the biosphere and in the human population and economy (31). As we rapidly approach the chemical equilibrium of outer space, the laws of thermodynamics offer little room for negotiation.
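The Ω calculation in Eq. 1 can be reproduced from the quoted values; here is a minimal sketch (the function name is mine, and B is taken as the quoted 3.06 × 10⁹ J/person per year):

```python
# Sustainability metric from Eq. 1: Ω = P / (B·N),
# the number of years the phytomass store could feed the population
# under the (unrealistic) assumption that all of it is edible.
def omega(phytomass_J, population, B=3.06e9):
    """phytomass_J: stored phytomass chemical energy P (J);
    B: per capita metabolic need (J/person per year);
    population: N (persons). Returns years of food remaining."""
    return phytomass_J / (B * population)

# Year-2000 values quoted in the text:
omega_2000 = omega(phytomass_J=19.3e21, population=6.13e9)
print(f"Ω(2000) ≈ {omega_2000:,.0f} years")    # ≈ 1,029 y, as in the paper
```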

THIS is the really scary bit… collapse, anyone?

[Fig. 5: the sustainability metric Ω over the years 0–2000]

Discussion

The trajectory of Ω shown in Fig. 5 has at least three implications for the future of humankind. First, there is no reason to expect a different trajectory in the near future. Something like the present level of biomass energy destruction will be required to sustain the present global population with its fossil fuel‐subsidized food production and economy. Second, as the earth‐space battery is discharged ever faster (Fig. 3) to support an ever larger population, the capacity to buffer changes will diminish and the remaining energy gradients will experience increasing perturbations. As more people depend on fewer available energy options, their standard of living and very survival will become increasingly vulnerable to fluctuations, such as droughts, disease epidemics, social unrest, and warfare. Third, there is considerable uncertainty in how the biosphere will function as Ω decreases from the present Ω = ∼1,029 y into an uncharted thermodynamic operating region. The global biosphere, human population, and economy will obviously crash long before Ω = 1 y. If H. sapiens does not go extinct, the human population will decline drastically as we are forced to return to making a living as hunter-gatherers or simple horticulturalists.

The laws of thermodynamics take no prisoners. Equilibrium is inhospitable, sterile, and final. I just wish we could get through to the people running the planet. To say this paper blew me away is the understatement of the year, and parsing the ‘good bits’ for this post doesn’t really do it justice. It needs to be read at least twice, in fact, and if you can handle the weight, I’d urge you to read the entire thing at its source: https://collapseofindustrialcivilization.files.wordpress.com/2015/07/pnas-2015-schramski-1508353112.pdf

How many of us will “return to making a living as hunter-gatherers or simple horticulturalists”, I wonder… We are fast running out of time.





Climate Change: The 40 Year Delay Between Cause and Effect

18 04 2014

Climate Change: The 40 Year Delay Between Cause and Effect (via Skeptical Science)

Posted on 22 September 2010 by Alan Marshall

Guest post by Alan Marshall from climatechangeanswers.org

Following the failure to reach a strong agreement at the Copenhagen conference, climate skeptics have had a good run in the Australian media, continuing their campaigns of disinformation. In such an atmosphere it is vital that we articulate the basic science of climate change, the principles of physics and chemistry which the skeptics ignore.


Alan Marshall

The purpose of this article is to clearly explain, in everyday language, the two key principles which together determine the rate at which temperatures rise. The first principle is the greenhouse effect of carbon dioxide and other gases. The second principle is the thermal inertia of the oceans, sometimes referred to as climate lag. Few people have any feel for the numbers involved with the latter, so I will deal with it in more depth.

The Greenhouse Effect

The greenhouse effect takes its name from the glass greenhouse, which farmers have used for centuries, trapping heat to grow tomatoes and other plants that could not otherwise be grown in the colder regions of the world. Like glass greenhouses, greenhouse gases allow sunlight to pass through unhindered, but trap heat radiation on its way out. The molecular structure of CO2 is such that it is “tuned” to the wavelengths of infrared (heat) radiation emitted by the Earth’s surface back into space, in particular to the 15 micrometer band. The molecules resonate, their vibrations absorbing the energy of the infrared radiation. It is vibrating molecules that give us the sensation of heat, and it is by this mechanism that heat energy is trapped by the atmosphere and re-radiated to the surface. The extent to which temperatures will rise due to a given change in the concentration of greenhouse gases is known as the “climate sensitivity,” and you may find it useful to search for this term when doing your own research.
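As a rough cross-check (not from the article), Wien’s displacement law puts the peak of Earth’s surface emission near 10 µm, squarely in the thermal infrared region that contains the 15 µm CO2 band:

```python
# Wien's displacement law: lambda_max = b / T.
# Values below are standard textbook figures, not from the article.
WIEN_B_UM_K = 2898.0        # Wien's constant, in µm·K
T_SURFACE_K = 288.0         # Earth's mean surface temperature (~15 °C)

peak_um = WIEN_B_UM_K / T_SURFACE_K
print(f"peak emission wavelength ≈ {peak_um:.1f} µm")   # ≈ 10 µm
```

So the surface radiates mostly between roughly 5 and 20 µm, which is why a strong absorption band at 15 µm matters so much.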

Most principles of physics are beyond question because both cause and effect are well understood. A relationship between cause and effect is proved by repeatable experiments. This is the essence of the scientific method, and the source of knowledge on which we have built our technological civilization. We do not question Newton’s laws of motion because we can demonstrate them in the laboratory. We no longer question that light and infrared radiation are electromagnetic waves because we can measure their wavelengths and other properties in the laboratory. Likewise, there should be no dissent that CO2 absorbs infrared radiation, because that too has been demonstrated in the laboratory. In fact, it was first measured 150 years ago by John Tyndall [i] using a spectrophotometer. In line with the scientific method, his results have been confirmed and more precisely quantified by Herzberg in 1953, Burch in 1962 and 1970, and others since then.

Given that the radiative properties of CO2 have been proven in the laboratory, you would expect them to be same in the atmosphere, given that they are dependent on CO2’s unchanging molecular structure. You would think that the onus would be on the climate skeptics to demonstrate that CO2 behaves differently in the atmosphere than it does in the laboratory. Of course they have not done so. In fact, since 1970 satellites have measured infrared spectra emitted by the Earth and confirmed not only that CO2 traps heat, but that it has trapped more heat as concentrations of CO2 have risen.

[Graph: change in outgoing infrared spectra measured by satellite, 1970 vs. 1996]

The above graph clearly shows that at the major wavelength for absorption by CO2, and also at the wavelength for absorption by methane, less infrared was escaping into space in 1996 than in 1970.

After 150 years of scientific investigation, the impact of CO2 on the climate is well understood. Anyone who tells you differently is selling snake oil.

The Thermal Inertia of the Oceans

If we accept that greenhouse gases are warming the planet, the next concept that needs to be grasped is that it takes time, and we have not yet seen the full rise in temperature that will occur as a result of the CO2 we have already emitted. The Earth’s average surface temperature has already risen by 0.8 degrees C since 1900. The concentration of CO2 in the atmosphere is increasing at the rate of 2 ppm per year. Scientists tell us that even if CO2 were stabilized at its current level of 390 ppm, there is at least another 0.6 degrees “in the pipeline”. If findings from a recent study of Antarctic ice cores are confirmed, that figure will prove to be conservative [ii]. The delayed response is known as climate lag.
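For illustration only, a back-of-envelope consistency check reproduces a pipeline figure of about 0.6 °C. The logarithmic forcing formula and the sensitivity value below are standard approximations I have assumed, not figures taken from the article:

```python
import math

# Equilibrium warming from CO2 forcing, minus the warming realised so
# far, gives the warming still "in the pipeline".
F = 5.35 * math.log(390.0 / 280.0)   # CO2 forcing vs. pre-industrial, W/m²
SENSITIVITY = 0.8                    # °C per W/m² (assumed mid-range value)

equilibrium_warming = SENSITIVITY * F     # ≈ 1.4 °C at equilibrium
realised_warming = 0.8                    # °C since 1900, from the text
in_pipeline = equilibrium_warming - realised_warming

print(f"forcing ≈ {F:.2f} W/m²; still in the pipeline ≈ {in_pipeline:.1f} °C")
```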

The reason the planet takes several decades to respond to increased CO2 is the thermal inertia of the oceans. Consider a saucepan of water placed on a gas stove. Although the flame has a temperature measured in hundreds of degrees C, the water takes a few minutes to reach boiling point. This simple analogy explains climate lag. The mass of the oceans is around 500 times that of the atmosphere, and the time it takes them to warm up is measured in decades. Because of the difficulty in quantifying the rate at which the warm upper layers of the ocean mix with the cooler deeper waters, there is significant variation in estimates of climate lag. A paper by James Hansen and others [iii] estimates the time required for 60% of global warming to take place in response to increased emissions to be in the range of 25 to 50 years. The mid-point of this range is 37.5 years, which I have rounded to 40 years.
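One way to see what a “60% of the response in ~40 years” figure implies is a one-box exponential lag model. This is my own sketch, not Hansen’s actual calculation:

```python
import math

# One-box toy model: temperature approaches equilibrium as
#   T(t) = dT_eq * (1 - exp(-t / tau)).
# If 60% of the response takes ~40 years (the article's rounded figure),
# the implied e-folding time tau follows from 0.6 = 1 - exp(-40 / tau).
t60 = 40.0                              # years for 60% of the response
tau = -t60 / math.log(1 - 0.6)          # e-folding time, years
print(f"tau ≈ {tau:.0f} years")         # ≈ 44 years

# Fraction of the eventual warming realised after 10 and 25 years:
for t in (10, 25):
    frac = 1 - math.exp(-t / tau)
    print(f"after {t:2d} y: {frac:.0%} of eventual warming")
```

On these assumptions only about a fifth of the eventual warming from a pulse of emissions shows up within the first decade, which is the whole point of the saucepan analogy.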

In recent times, climate skeptics have been peddling a lot of nonsense about average temperatures actually cooling over the last decade. There was a brief dip around the year 2000 following the extreme El Niño event of 1998, but with greenhouse emissions causing a planetary energy imbalance of 0.85 watts per square metre [iv], there is inevitably a continuing rising trend in global temperatures. It should then be no surprise to anyone that the 12-month period from June 2009 to May 2010 was the hottest on record [v].

The graph below from Australia’s CSIRO [vi] shows a clear rising trend in temperatures as well as a rising trend in sea-level.

[Graph: rising ocean heat content (0–700 m) and sea level, CSIRO]

Implications of the 40 Year Delay

The estimate of 40 years for climate lag, the time between the cause (increased greenhouse gas emissions) and the effect (increased temperatures), has profound negative consequences for humanity. However, if governments can find the will to act, there are positive consequences as well.

With 40 years between cause and effect, the average temperatures of the last decade are a result of what we were thoughtlessly putting into the air in the 1960s. It also means that the true impact of our emissions over the last decade will not be felt until the 2040s. This thought should send a chill down your spine!

Conservative elements in both politics and the media have been playing up uncertainties in some of the more difficult-to-model effects of climate change, while ignoring the solid scientific understanding of the cause. If past governments had troubled themselves to understand the cause, and acted in a timely way, climate change would have been contained with minimal disruption. By refusing to acknowledge the cause, and demanding to see the effects before action is taken, past governments have brought on the current crisis. By the time they see those effects, it will be too late to deal with the cause.

The positive consequence of climate lag is the opportunity for remedial action before the ocean warms to its full extent. We need not only to work towards reducing our carbon emissions to near zero by 2050, but, well before then, to begin removing excess CO2 from the atmosphere on an industrial scale. Biochar is one promising technology that can have an impact here. Synthetic trees with carbon capture and storage are another. If an international agreement can be forged to provide a framework for not only limiting new emissions but sequestering old emissions, then the full horror of the climate crisis may yet be averted.

Spreading the Word

The clock is ticking. All of us who understand clearly the science of climate change, and its implications for humanity, should do what we can to inform the public debate. I wrote the original version of this article in February 2010 to help inform the Parliament of Australia. The letter was sent to 40 MPs and senators, and has received positive feedback from members of the three largest parties. To find out more about this information campaign, and for extensive coverage of the science of climate change and its technological, economic and political solutions, please visit my web site at www.climatechangeanswers.org.

References

i Gulf Times, “A Last Chance to Avert Disaster”, available at
http://www.gulf-times.com/site/topics/article.asp?cu_no=2&item_no=330396&version=1&template_id=46&parent_id=26

ii Institute of Science in Society, “350 ppm CO2 The Target”,
http://www.i-sis.org.uk/350ppm_CO2_the_Target.php, p. 4

iii Science (AAAS), “Earth’s Energy Imbalance: Confirmation and Implications”, available (after free registration) at http://www.scienceonline.org/cgi/reprint/1110252v1.pdf, p. 1

iv NASA, “The Ocean Heat Trap”, available at http://www.ocean.com, p. 3

v NASA GISS temperature record (see http://climateprogress.org/2010/06/03/nasa-giss-james-hansen-study-global-warming-record-hottest-year/)

vi CSIRO, “Sea Level Rise”, available at http://www.cmar.csiro.au/sealevel/sl_drives_longer.html





Can The Matrix Be Tested?

3 12 2013

As most of my readers would surely know or realise, I only use the notion of the Matrix as a metaphor for the unsustainable world “out there”… but have you ever wondered whether the vision laid out in the Matrix movies could possibly be real?

Could we actually be living in a computer simulation? A research project at the University of Washington, Seattle, went a step beyond the Matrix and looked at the possibility that we are living not merely in a sim world created around us here on Earth, but in a simulated universe run by our descendants… sounds crazy? Read on… A team of physicists claims to have come up with a test to determine whether such an assumption could be true. They based their work on a claim published in 2003, which states that at least one of three possibilities must be true:

1) The human species is likely to go extinct before reaching a “posthuman” stage.
2) Any posthuman civilization is very unlikely to run a significant number of simulations of its evolutionary history.
3) We are almost certainly living in a computer simulation.

Nick Bostrom, who published those propositions ten years ago, also argued that “the belief that there is a significant chance that we will one day become posthumans who run ancestor simulations is false, unless we are currently living in a simulation.”

The UW researchers said that we would ultimately have to be able to simulate the relationship between energy and momentum in special relativity at the scale of the universe to “understand the constraints on physical processes that would indicate we are living in a computer model.” The problem is that we are not even close to being able to simulate the universe. The largest supercomputers can only simulate nature “on the scale of one 100-trillionth of a meter, a little larger than the nucleus of an atom”, the researchers said. Eventually we would have to simulate a “large enough chunk” of the universe to figure out whether we live in a simulation or not.

In the movie, the action really begins when Neo is given a fateful choice: Take the blue pill and return to his oblivious, virtual existence, or take the red pill to learn the truth about the Matrix and find out “how deep the rabbit hole goes.”

Physicists can now offer us the same choice, the ability to test whether we live in our own virtual Matrix, by studying radiation from space. Cosmic rays are the fastest particles that exist, and they originate in far-flung galaxies. They always arrive at Earth with a specific maximum energy of 10²⁰ electron volts. If there is a specific maximum energy for particles, then this gives rise to the idea that energy levels are defined, specific, and constrained by an outside force… Therefore, according to this research, if the energy levels of particles could be simulated, so too could the rest of the universe.

Even operating with the world’s most powerful supercomputers, simulations can only be done on a vanishingly small scale, which makes the maths pretty difficult. So, physicists as yet have only managed to simulate regions of space on the femto-scale.

Never heard of the prefix femto? Me neither… To put it in context, a femtometre is 10⁻¹⁵ metres – that’s a quadrillionth of a metre, or 0.000000000001 mm.

However, the main problem with all such simulations is that the laws of physics have to be superimposed onto a discrete three-dimensional lattice which advances in time. And that’s where the real test comes in.

So if it were true that we lived in a sim world at Universe scale rather than femtometre scale, then the very laws of physics that allow us to devise such reality-checking technology may not have much in common (to say the least!) with the fundamental rules that govern the meta-universe inhabited by ‘our simulators’. To us, these programmers would be gods, able to twist reality on a whim.

So should we say yes to the offer to take the red pill and learn the truth — or are the implications too disturbing……?  I wonder what the simulators have in store for us……