The physics of energy and resulting effects on economics

10 07 2018

Hat tip to one of the many commenters on DTM for pointing me to this excellent video…. I have featured Jean-Marc Jancovici’s work here before, but this one’s shorter, and even though it’s in French, English subtitles are available from the settings section on the YouTube screen. Speaking of screens, one of the outstanding statements made in this video is that all electronics in the world that use screens in one way or another consume one third of the world’s electricity…….. Remember how the growth in renewables could not even keep up with the Internet’s growth?

If this doesn’t convince viewers that we have to change the way we do EVERYTHING, then nothing will….. and seeing as he’s presenting to politicians, let’s hope at least some of them will come out of this better informed……

Jean-Marc Jancovici, a French engineer schools politicians with a sobering lecture on the physics of energy and the effects on economics and climate change





We Need Courage, Not Hope, to Face Climate Change

11 03 2018

Originally posted at onbeing…… I hope this article resonates with you as much as it did with me.

KATE MARVEL (@DRKATEMARVEL), CONTRIBUTING EDITOR

As a climate scientist, I am often asked to talk about hope. Particularly in the current political climate, audiences want to be told that everything will be all right in the end. And, unfortunately, I have a deep-seated need to be liked and a natural tendency to optimism that leads me to accept more speaking invitations than is good for me. Climate change is bleak, the organizers always say. Tell us a happy story. Give us hope. The problem is, I don’t have any.

I used to believe there was hope in science. The fact that we know anything at all is a miracle. For some reason, the whole world is hung on a skeleton made of physics. I found comfort in this structure, in the knowledge that buried under layers of greenery and dirt lies something universal. It is something to know how to cut away the flesh of existence and see the clean white bones underneath. All of us obey the same laws, whether we know them or not.

Look closely, however, and the structure of physics dissolves into uncertainty. We live in a statistical world, in a limit where we experience only one of many possible outcomes. Our clumsy senses perceive only gross aggregates, blind to the roiling chaos underneath. We are limited in our ability to see the underlying stimuli that, en masse, create an event. Temperature, for example, is a state created by the random motions of millions of tiny molecules. We feel heat or cold, not the motion of any individual molecule. When something is heated up, its tiny constituent parts move faster, increasing its internal energy. They do not move at the same speed; some are quick, others slow. But there are billions of them, and in the aggregate their speed dictates their temperature.

The internal energy of molecule motion is turned outward in the form of electromagnetic radiation. Light comes in different flavors. The stuff we see occupies only a tiny portion of a vast electromagnetic spectrum. Light is a wave, of sorts, and the distance between its peaks and troughs determines the energy it carries. Cold, low-energy objects emit stretched waves with long, lazy intervals between peaks. Hot objects radiate at shorter wavelengths.

To have a temperature is to shed light into your surroundings. You have one. The light you give off is invisible to the naked eye. You are shining all the same, incandescent with the power of a hundred-watt bulb. The planet on which you live is illuminated by the visible light of the sun and radiates infrared light to the blackness of space. There is nothing that does not have a temperature. Cold space itself is illuminated by the afterglow of the Big Bang. Even black holes radiate, lit by the strangeness of quantum mechanics. There is nowhere from which light cannot escape.

The same laws that flood the world with light dictate the behavior of a carbon dioxide molecule in the atmosphere. CO2 is transparent to the Sun’s rays. But the planet’s infrared outflow hits a molecule in just such a way as to set it in motion. Carbon dioxide dances when hit by a quantum of such light, arresting the light on its path to space. When the dance stops, the quantum is released back to the atmosphere from which it came. No one feels the consequences of this individual catch-and-release, but the net result of many little dances is an increase in the temperature of the planet. More CO2 molecules mean a warmer atmosphere and a warmer planet. Warm seas fuel hurricanes, warm air bloats with water vapor, the rising sea encroaches on the land. The consequences of tiny random acts echo throughout the world.

I understand the physical world because, at some level, I understand the behavior of every small thing. I know how to assemble a coarse aggregate from the sum of multiple tiny motions. Individual molecules, water droplets, parcels of air, quanta of light: their random movements merge to yield a predictable and understandable whole. But physics is unable to explain the whole of the world in which I live. The planet teems with other people: seven billion fellow damaged creatures. We come together and break apart, seldom adding up to a coherent, predictable whole.

I have lived a fortunate, charmed, loved life. This means I have infinite, gullible faith in the goodness of the individual. But I have none whatsoever in the collective. How else can it be that the sum total of so many tiny acts of kindness is a world incapable of stopping something so eminently stoppable? California burns. Islands and coastlines are smashed by hurricanes. At night the stars are washed out by city lights and the world is illuminated by the flickering ugliness of reality television. We burn coal and oil and gas, heedless of the consequences.

Our laws are changeable and shifting; the laws of physics are fixed. Change is already underway; individual worries and sacrifices have not slowed it. Hope is a creature of privilege: we know that things will be lost, but it is comforting to believe that others will bear the brunt of it.

We are the lucky ones who suffer little tragedies unmoored from the brutality of history. Our loved ones are taken from us one by one through accident or illness, not wholesale by war or natural disaster. But the scale of climate change engulfs even the most fortunate. There is now no weather we haven’t touched, no wilderness immune from our encroaching pressure. The world we once knew is never coming back.

I have no hope that these changes can be reversed. We are inevitably sending our children to live on an unfamiliar planet. But the opposite of hope is not despair. It is grief. Even while resolving to limit the damage, we can mourn. And here, the sheer scale of the problem provides a perverse comfort: we are in this together. The swiftness of the change, its scale and inevitability, binds us into one, broken hearts trapped together under a warming atmosphere.

We need courage, not hope. Grief, after all, is the cost of being alive. We are all fated to live lives shot through with sadness, and are not worth less for it. Courage is the resolve to do well without the assurance of a happy ending. Little molecules, random in their movement, add together to a coherent whole. Little lives do not. But here we are, together on a planet radiating ever more into space where there is no darkness, only light we cannot see.





Who killed the electric car…….

28 11 2017

Anyone who’s seen the film (I still have a DVD of it lying around somewhere…) by the name “Who killed the electric car” will remember the outrage of the ‘owners’ (they were all only leasing the vehicles) when GM destroyed the cars they thought were working perfectly well.  The problem was, the EV1 was an experiment. It was an experiment in technology and economics, and by the time the leases ran out, all the batteries needed replacing, and GM weren’t about to do that, because the replacement cost was higher than the value of the vehicles. Never let economics get in the way of a good story…. nor profit!

Anyhow, here is another well researched article Alice Friedemann pointed me to regarding the senseless travesty of the big switch to EVs…..  It’s just too little too late, and we have the laws of physics to contend with.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The battery did it.  Batteries are far too expensive for the average consumer, $600-$1,700 per kWh (Service). And they aren’t likely to get better any time soon.  Sorry to ruin the suspense so quickly, guess I’ll never be a mystery writer.

“The big advances in battery technology happen rarely. It’s been more than 200 years and we have maybe 5 different successful rechargeable batteries,” said George Blomgren, a former senior technology researcher at Eveready (Borenstein).

And yet hope springs eternal. A better battery is always just around the corner:

  • 1901: “A large number of people … are looking forward to a revolution in the generating power of storage batteries, and it is the opinion of many that the long-looked-for, light weight, high capacity battery will soon be discovered.” (Hiscox)
  • 1901: “Demand for a proper automobile storage battery is so crying that it soon must result in the appearance of the desired accumulator [battery]. Everywhere in the history of industrial progress, invention has followed close in the wake of necessity” (Electrical Review #38. May 11, 1901. McGraw-Hill)
  • 1974: “The consensus among EV proponents and major battery manufacturers is that a high-energy, high power-density battery – a true breakthrough in electrochemistry – could be accomplished in just 5 years” (Machine Design).
  • 2014 internet search “battery breakthrough” gets 7,710,000 results, including:  Secretive Company Claims Battery Breakthrough, ‘Holy Grail’ of Battery Design Achieved, Stanford breakthrough might triple battery life, A Battery That ‘Breathes’ Could Power Next-Gen Electric Vehicles, 8 Potential EV and Hybrid Battery Breakthroughs.

So is an electric car:

  • 1911: The New York Times declares that the electric car “has long been recognized as the ideal solution” because it “is cleaner and quieter” and “much more economical.”(NYT 1911)
  • 1915: The Washington Post writes that “prices on electric cars will continue to drop until they are within reach of the average family.”(WP 1915)
  • 1959: The New York Times reports that the “Old electric may be the car of tomorrow.” The story said that electric cars were making a comeback because “gasoline is expensive today, principally because it is so heavily taxed, while electricity is far cheaper” than it was back in the 1920s (Ingraham 1959)
  • 1967: The Los Angeles Times says that American Motors Corporation is on the verge of producing an electric car, the Amitron, to be powered by lithium batteries capable of holding 330 watt-hours per kilogram. (That’s more than two times as much as the energy density of modern lithium-ion batteries.) Backers of the Amitron said, “We don’t see a major obstacle in technology. It’s just a matter of time.” (Thomas 1967)
  • 1979: The Washington Post reports that General Motors has found “a breakthrough in batteries” that “now makes electric cars commercially practical.” The new zinc-nickel oxide batteries will provide the “100-mile range that General Motors executives believe is necessary to successfully sell electric vehicles to the public.” (Knight, J. September 26, 1979. GM Unveils electric car, New battery. Washington Post, D7.)
  • 1980: In an opinion piece, the Washington Post avers that “practical electric cars can be built in the near future.” By 2000, the average family would own cars, predicted the Post, “tailored for the purpose for which they are most often used.” It went on to say that “in this new kind of car fleet, the electric vehicle could play a big role—especially as delivery trucks and two-passenger urban commuter cars. With an aggressive production effort, they might save 1 million barrels of oil a day by the turn of the century.” (WP 1980)

Lithium-ion batteries appear to be the winner for all-electric cars given Elon Musk’s new $5 billion li-ion battery factory in Nevada. Yet Li-ion batteries have a very short cycling life of 5 to 10 years (depending on how the car is driven), and then they’re at just 70% of initial capacity, generally taken as too degraded for driving; if a driver persists despite the degraded performance, the batteries will eventually fall to 50% of capacity, a certain end-of-life for li-ion (ADEME).

One reason people are so keen on electric cars is that they cost less to fuel.  But if electricity were $0.10 per kWh, filling up a 53 kWh Tesla battery would take about 4 hours and cost $5.30; 30 days times $5.30 is $159. I can fill up my gas tank in a few minutes for under $40.  I drive about 15 miles a day and can go 400 miles per fill-up, so I only get gas about once a month; I’d have to drive 60 miles a day on gasoline to run the cost up to $159. And if your electricity costs less than ten cents now, it won’t always: shale gas is a one-time-only temporary boom that probably ends around 2020.  Got a smaller battery than the Tesla that goes at most 80 miles?  Most people won’t consider buying an electric car until it can go 200 miles or more.
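[ED: to make the arithmetic above easier to follow, here’s a minimal sketch using the article’s own figures (a 53 kWh pack, $0.10/kWh electricity, a $40 tank good for 400 miles). The daily-full-charge assumption and the rounding are mine, and the result lands near the article’s 60-miles-a-day break-even.]

```python
# Rough cost-comparison sketch using the figures quoted above (all assumed, not measured).
elec_price = 0.10        # $/kWh, assumed residential rate
battery_kwh = 53         # Tesla pack size used in the article
gas_cost_per_fill = 40   # $ per tank, from the article
miles_per_tank = 400     # miles per fill-up, from the article

per_charge = battery_kwh * elec_price            # ~$5.30 per full charge
ev_monthly = 30 * per_charge                     # ~$159/month if fully charged every day

gas_cost_per_mile = gas_cost_per_fill / miles_per_tank          # ~$0.10 per mile
# Daily gasoline mileage that would cost the same ~$159/month:
equivalent_gas_miles_per_day = ev_monthly / gas_cost_per_mile / 30

print(f"Full EV charge: ${per_charge:.2f}; monthly cost if charged daily: ${ev_monthly:.0f}")
print(f"Gasoline miles/day with the same monthly cost: ~{equivalent_gas_miles_per_day:.0f}")
```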

So why isn’t there a better battery yet?

The lead-acid battery hasn’t changed much since it was invented in 1859. It’s hard to invent new kinds of batteries or even improve existing ones, because although a battery looks simple, inside it’s a churning chaos of complex electrochemistry as the battery goes between being charged and discharged many times.

Charging and recharging are hard on a battery. Recharging is supposed to put Humpty Dumpty back together again, but over time the metals, liquids, gels, chemicals, and solids inside clog, corrode, crack, crystallize, become impure, leak, and break down.

A battery is like a football player, with increasing injuries and concussions over the season. An ideal battery would be alive, able to self-heal, secrete impurities, and recover from abuse.

The number of elements in the periodic table (118) is limited. Only a few have the best electron properties (like lithium), and others can be ruled out because they’re radioactive (39), rare earth and platinum group metals (23), inert noble gases (6), or should be ruled out: toxic (i.e. cadmium, cobalt, mercury, arsenic), hard to recycle, scarce, or expensive.

There are many properties an ideal energy storage device would have:

  1. Small and light-weight to give vehicles a longer range
  2. High energy density like oil (energy stored per unit of weight)
  3. Recharge fast, tolerant of overcharge, undercharging, and over-discharge
  4. Store a lot of energy
  5. High power density, deliver a lot of power quickly
  6. Be rechargeable thousands of times while retaining 80% of their storage capacity
  7. Reliable and robust
  8. A long life, at least 10 years for a vehicle battery
  9. Made from very inexpensive, common, sustainable, recyclable materials
  10. Deliver power for a long time
  11. Won’t explode or catch on fire
  12. Long shelf life for times when not being used
  13. Perform well in low and high temperatures
  14. Able to tolerate vibration, shaking, and shocks
  15. Not use toxic materials during manufacture or in the battery itself
  16. Take very little energy to make from cradle-to-grave
  17. Need minimal to no maintenance

For example, in the real world, these are the priorities for heavy-duty hybrid trucks (NRC 2008):

  1. High Volumetric Energy Density (energy per unit volume)
  2. High Gravimetric Energy Density (energy per unit of weight, Specific Energy)
  3. High Volumetric Power Density (power per unit of volume)
  4. High Gravimetric Power Density (power per unit of weight, Specific Power)
  5. Low purchase cost
  6. Low operating cost
  7. Low recycling cost
  8. Long useful life
  9. Long shelf life
  10. Minimal maintenance
  11. High level of safety in collisions and rollover accidents
  12. High level of safety during charging
  13. Ease of charging method
  14. Minimal charging time
  15. Storable and operable at normal and extreme ambient temperatures
  16. High number of charge-discharge cycles, regardless of the depth of discharge
  17. Minimal environmental concerns during manufacturing, useful life, and recycling or disposal

Pick Any Two

In the real world, you can’t have all of the above. It’s like the sign “Pick any two: Fast (expensive), Cheap (crappy), or Good (slow)”.

So many different properties are demanded that “This is like wanting a car that has the power of a Corvette, the fuel efficiency of a Chevy Malibu, and the price tag of a Chevy Spark. This is hard to do. No one battery delivers both high power and high energy, at least not very well or for very long,” according to Dr. Jud Virden at the Pacific Northwest National Laboratory (House 114-18 2015).

You always give up something. Battery chemistry is complex. Anode, cathode, electrolyte, and membrane-separator materials must all work together. Tweak any one of these materials and the battery might not work anymore. You get higher energy densities from reactive, less stable chemicals that often result in non-rechargeable batteries, are susceptible to impurities, catch fire, and so on. Storing more energy might lower the voltage; a fast recharge might shorten the lifespan.

“You have to optimize many different things at the same time,” says Venkat Srinivasan, a transportation battery expert at Lawrence Berkeley National Laboratory in California. “It’s a hard, hard problem” (Service).

Conflicting demands. The main job of a battery is to store energy. Trying to make them discharge a lot of power quickly may be impossible. “If you want high storage, you can’t get high power,” said M. Stanley Whittingham, director of the Northeast Center for Chemical Energy Storage. “People are expecting more than what’s possible.”

Battery testing takes time. Every time a change is made, the individual cells, then the modules, then the overall pack are tested for one cycle and again for 50 cycles: voltage, current, cycle life (number of recharges), Ragone plot (energy and power density), charge and discharge time, self-discharge, safety (heat, vibration, external short circuit, overcharge, forced discharge, etc.), and many other parameters.

Batteries deteriorate.  The more deeply you discharge a battery, the more often you charge and recharge it (cycles), and the more the car is exposed to below-freezing or above-77°F temperatures, the shorter the battery’s life will be. Even doing nothing shortens battery life: Li-ion batteries lose charge when idle, so an old, unused battery will not last as long as a new one.  Tesla engineers expect the power of the car’s battery pack to degrade by as much as 30% in five years (Smil). [ED. the exception of course being Nickel Iron batteries….. but they are not really suitable for EVs, even if that’s what they were originally invented for]

Batteries are limited by the physical laws of the universe, and lithium-ion batteries are getting close to theirs.  According to materials scientist George Crabtree of Argonne National Laboratory, li-ion batteries are approaching their basic electrochemical limits on the density of energy they can store. “If you really want electric cars to compete with gasoline, you’re going to need the next generation of batteries.” Rachid Yazami of Nanyang Technological University in Singapore says that this will require finding a new chemical basis for them. Although engineers have achieved a lot with lithium-ion batteries, it hasn’t been enough to charge electric cars very fast, or go 500 miles (Hodson 2015).

Be skeptical of battery breakthroughs. It takes ten years to improve an existing type of battery, and it’s expensive since you need chemists, material scientists, chemical and mechanical engineers, electrochemists, computer and nanotechnology scientists. The United States isn’t training enough engineers to support a large battery industry, and within 5 years, 40% of full-time senior engineering faculty will be eligible for retirement.

Dr. Virden says that “you see all kinds of press releases about a new anode material that’s five times better than anything out there, and it probably is, but when you put that in with an electrolyte and a cathode, and put it together and then try to scale it, all kinds of things don’t work. Materials start to fall apart, the chemistry isn’t well known, there’s side reactions, and usually what that leads to is loss of performance, loss of safety. And we as fundamental scientists don’t understand those basic mechanisms. And we do really undervalue the challenge of scale-up. In every materials process I see, in an experiment in a lab like this big, it works perfectly. Then when you want to make thousands of them-it doesn’t.” (House 114-18).

We need a revolutionary new battery that takes less than 10 years to develop

“We need to leapfrog the engineering of making of batteries,” said Lawrence Berkeley National Lab battery scientist Vince Battaglia. “We’ve got to find the next big thing.”

Dr. Virden testified at a U.S. House hearing that “despite many advances, we still have fundamental gaps in our understanding of the basic processes that influence battery operation, performance, limitations, and failures” (House 114-18 2015).

But none of the 10 experts who talked to The Associated Press said they know what that big thing will be yet, or when it will come (Borenstein).

The Department of Energy (DOE) says that incremental improvements won’t electrify cars and energy storage fast enough. Scientists need to understand the laws of battery physics better. To do that, we need to be able to observe what’s going on inside the battery at an atomic scale in femtoseconds (.000000000000001 second), build nanoscale materials/tubes/wires to improve ion flow etc., and write complex models and computer programs that use this data to better predict what might happen every time some aspect of the battery is meddled with to zero in on the best materials to use.

Are you kidding? Laws of Physics? Femtoseconds? Atomic Scale? Nanoscale technology — that doesn’t exist yet?

Extremely energy-dense batteries for autos are impossible because of the laws of Physics and the “Pick any Two” problem

There’s only so much energy you can force into a black box, and it’s a lot less than the energy contained in oil – pound for pound, the most energy a battery could ever contain is only around 6 percent of that of oil. The energy density of oil is 500 times higher than that of a lead-acid battery (House), which is why it takes 1,200 pounds of lead-acid batteries to move a car 50 miles.

Even though an electric vehicle needs only a quarter of the energy a gasoline vehicle needs to deliver the same work to the wheels, this efficiency is more than overcome by the much lower energy density of a battery compared with gasoline.  This shows up in the much greater weight and space a battery requires.  For example, the 85 kWh battery in a Tesla Model S weighs 1,500 pounds (Tesla 2014), while the gasoline containing the equivalent energy, about 9 gallons, weighs 54 pounds.  The 1,500 pound weight of a Tesla battery is equal to 7 extra passengers, and reduces the acceleration and range that could otherwise be realized (NRC 2015).
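[ED: a rough cross-check on those weight figures. The sketch below assumes standard round numbers for gasoline (about 33.7 kWh of heat per gallon, about 6.1 lb per gallon) and illustrative engine and motor efficiencies of 25% and 90%; none of these exact values come from the article, but they reproduce its “about 9 gallons, 54 pounds” comparison.]

```python
# Sanity check on the weight comparison above (efficiencies and gasoline figures are assumptions).
battery_kwh = 85            # Tesla Model S pack
battery_lb = 1500           # pack weight from the article
kwh_per_gallon = 33.7       # thermal energy in a gallon of gasoline (standard figure)
lb_per_gallon = 6.1         # approximate weight of a gallon of gasoline
engine_eff = 0.25           # assumed tank-to-wheels efficiency of a gasoline car
motor_eff = 0.90            # assumed battery-to-wheels efficiency of an EV

work_at_wheels_kwh = battery_kwh * motor_eff                              # ~76 kWh of useful work
gallons_equivalent = work_at_wheels_kwh / (kwh_per_gallon * engine_eff)   # ~9 gallons
gasoline_lb = gallons_equivalent * lb_per_gallon                          # ~55 lb

print(f"Gasoline doing the same work: ~{gallons_equivalent:.1f} gal, ~{gasoline_lb:.0f} lb")
print(f"Battery pack weight: {battery_lb} lb  (~{battery_lb / gasoline_lb:.0f}x heavier)")
```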

Lithium batteries are more powerful, but even so, oil has 120 times the energy density of a lithium battery pack. Increased driving ranges of electric cars have come more from weight reduction, drag reduction, and decreased rolling resistance than improved battery performance.

The amount of energy that can be stored in a battery depends on the potential chemical energy set by the electron properties of its materials. The most you could ever get is about 6 volts, from a lithium (highest reduction potential) and fluorine (highest oxidation potential) pair.  But for many reasons a lithium-fluoride or fluoride battery is not in sight and may never work out (not rechargeable, unstable, unsafe, inefficient; solvents and electrolytes don’t handle the voltages generated; lithium fluoride crystallizes and doesn’t conduct electricity; etc.).

The DOE has found that lithium-ion batteries are the only chemistry promising enough to use in electric cars. There are “several Li-ion chemistries being investigated… but none offers an ideal combination of energy density, power capability, durability, safety, and cost” (NAS 2013).

Lithium batteries can generate up to 3.8 volts but have to use non-aqueous electrolytes (because water has a 2 volt maximum) which gives a relatively high internal impedance.

They can be unsafe. A thermal runaway in one cell can reach 932°F (500°C) and spread to other cells in the module or pack.

There are many other problems with all-electric cars

It will take decades or more to replace the existing fleet with electric cars if batteries ever do get cheap and powerful enough.  Even if all 16 million vehicles purchased every year were only electric autos, the U.S. car fleet has 250 million passenger vehicles and would take over 15 years to replace.  But only 120,000 electric cars were sold in 2014. At that rate it would take 133 years.

Electric cars are too expensive. The median household income of an electric car buyer is $148,158, versus $83,166 for a gasoline car buyer. But the U.S. median household income was only $51,939 in 2014. The Tesla Model S tends to be bought by relatively wealthy individuals, primarily men who have higher incomes, paid cash, and did not seriously consider purchasing another vehicle (NRC 2015).

And when gasoline prices began to drop in 2014, people stopped buying EVs and started buying gas guzzlers again.

Autos aren’t the game-changer for the climate or saving energy that they’re claimed to be.  They account for just 20% of the oil wrung out of a barrel; trucks, ships, manufacturing, rail, airplanes, and buildings use the other 80%.

And the cost of electric cars is expected to be greater than internal combustion engine and hybrid electric autos for the next two decades (NRC 2013).

The average car buyer wants a low-cost, long range vehicle. A car that gets 30 mpg would require a “prohibitively long-to-charge, expensive, heavy, and bulky” 78 kWh battery to go 300 miles, which costs about $35,000 now. Future battery costs are hard to estimate, and right now, some “battery companies sell batteries below cost to gain market share” (NAS 2013). Most new cathode materials are high-cost nickel and cobalt materials.

Rapid charging and discharging can shorten the lifetime of the cell. This is particularly important because the goal is 10 to 15 years of service for automotive applications, the average lifetime of a car. Replacing the battery would be a very expensive repair, even as costs decline (NAS 2013).

It is unclear that consumer demand will be sufficient to sustain the U.S. advanced battery industry. It takes up to $300 million to build one lithium-ion plant to supply batteries for 20,000 to 30,000 plug-in or electric vehicles (NAE 2012).

Almost all electric cars use up to 3.3 pounds of rare-earth elements in interior permanent magnet motors. China currently has a near monopoly on the production of rare-earth materials, which has led DOE to search for technologies that eliminate or reduce rare-earth magnets in motors (NAS 2013).

Natural gas generated electricity is likely to be far more expensive after the fracking boom peaks between 2015 and 2019, as will coal-generated electricity after coal supplies reach their peak somewhere between now and 2030.

100 million electric cars require ninety 1,000-MWe power plants, transmission, and distribution infrastructure that would cost at least $400 billion dollars. A plant can take years to over a decade to build (NAS 2013).

By the time the electricity reaches a car, most of the primary energy has been lost: generating plants are only about 40% efficient, and roughly another 10% goes to power plant self-use and transmission losses, so about 11 MWh of primary energy would be needed to generate the 4 MWh the average electric car would consume in a year. That works out to about 38 mpg equivalent, much lower than many gasoline or hybrid cars (Smil).

Two-thirds of the electricity generated comes from fossil fuels (coal 39%, natural gas 27%), and coal power continues to gain market share (Birnbaum). Six percent of electricity is lost over transmission lines, and power plants are only 40% efficient on average; once you add in the energy lost getting electricity to the car, it would be more efficient for cars to burn natural gas directly than to run on electricity generated from natural gas (proponents say electric cars are more efficient because they leave this out of the equation). Drought is reducing hydropower across the west, where most of the hydropower is, and it will take decades to scale up wind, solar, and other alternative energy resources.

The additional energy demand from 100 million PEVs in 2050 is about 286 billion kWh which would require new generating capacity of ninety 1,000 MW plants costing $360 billion, plus another $40 billion for high-voltage transmission and other additions (NAS 2013).
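[ED: a quick check of how ninety 1,000 MW plants line up with roughly 286 billion kWh per year. The average capacity factor below is back-calculated by me, not a figure stated in the NAS report.]

```python
# Back-of-envelope check: ninety 1,000 MW plants vs ~286 billion kWh/year of new demand.
# The ~36% average capacity factor is inferred here, not stated in the source.
added_demand_twh = 286            # additional annual demand for 100 million PEVs, TWh
plants = 90
plant_mw = 1000
hours_per_year = 8760

nameplate_twh = plants * plant_mw * hours_per_year / 1e6      # ~788 TWh if run flat out
implied_capacity_factor = added_demand_twh / nameplate_twh    # ~0.36

print(f"Nameplate output: {nameplate_twh:.0f} TWh/yr")
print(f"Implied average capacity factor: {implied_capacity_factor:.0%}")
```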

An even larger problem is recharge time. Unless batteries can be developed that can be recharged in 10 minutes or less, cars will be limited largely to local travel in an urban or suburban environment (NAS 2013). Long distance travel would require at least as many charging stations as gas stations (120,000).

Level 1 charging takes too long, and level 2 chargers add to the overall purchase cost.  Level 1 is the basic rate delivered by a standard household outlet.  A Tesla Model S 85 kWh battery that was fully discharged would take more than 61 hours to recharge, and a 21 kWh Nissan Leaf battery over 17 hours.  So the total cost of electric cars should also include the cost of a level 2 charger, not just the cost of the car itself (NRC 2015).
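[ED: for a sense of where those charging times come from, here’s a sketch assuming a Level 1 outlet delivers about 1.4 kW; that rate is my illustrative assumption for a standard 120 V household circuit, and charging losses would push the results somewhat higher, toward the 61-hour and 17-hour figures quoted above.]

```python
# Rough Level 1 charging-time check; the ~1.4 kW outlet rate is an illustrative assumption.
level1_kw = 1.4   # approximate power available from a standard household outlet

for name, pack_kwh in [("Tesla Model S 85 kWh", 85), ("Nissan Leaf 21 kWh", 21)]:
    hours = pack_kwh / level1_kw          # ignores charging losses, so a lower bound
    print(f"{name}: roughly {hours:.0f} hours from empty on Level 1")
```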

Fast charging is expensive, with level 3 chargers running $15,000 to $60,000.  At a recharging station, a $15,000 level 3 charger would return a profit of about $60 per year, with the electricity costing more than gasoline (Hillebrand 2012). Level 3 fast charging is bad for batteries, requires expensive infrastructure, and is likely to use peak-load electricity with higher cost, lower efficiency, and higher GHG emissions.

Battery swapping has many problems: battery packs would need to be standardized, an expensive inventory of different types and sizes of battery packs would need to be kept, the swapping station needs to start charging right away during daytime peak electricity, batteries deteriorate over time, customers won’t like older batteries not knowing how far they can go on them, and seasonal travel could empty swapping stations of batteries.

Argonne National Laboratory looked at the economics of battery swapping (Hillebrand 2012), which would require standardized batteries and enough light-duty vehicles to justify the infrastructure. They assumed that a current EV battery pack costs $12,000 to replace (a figure they considered wildly optimistic), that a 5% annual return on that $12,000 investment is $600, and that a 3-year battery life means an amortization cost of $4,000 per year, so the annual return for each pack must surpass $4,600. They concluded that to make a profit on battery swapping, each car would have to drive 1,300 miles per day per battery pack!  And therefore an EV battery is 20 times too expensive for the swap model.
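[ED: the Argonne swap arithmetic retraced in a few lines. The per-mile margin at the end is my own back-calculation from the quoted 1,300 miles/day, not a number stated in the source.]

```python
# Sketch of the Argonne battery-swap arithmetic quoted above, using their assumed figures.
pack_cost = 12_000                               # $ per swappable pack ("wildly optimistic")
annual_roi = 0.05 * pack_cost                    # $600/year required return on investment
amortization = pack_cost / 3                     # $4,000/year over an assumed 3-year life
required_per_pack = annual_roi + amortization    # $4,600/year per pack

miles_per_day_quoted = 1300                      # the break-even figure quoted above
implied_margin_per_mile = required_per_pack / (miles_per_day_quoted * 365)   # my inference

print(f"Required annual return per pack: ${required_per_pack:,.0f}")
print(f"Implied margin: ~${implied_margin_per_mile:.3f} per mile driven")
```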

Lack of domestic supply base. To be competitive in electrified vehicles, the United States also requires a domestic supply base of key materials and components such as special motors, transmissions, brakes, chargers, conductive materials, foils, electrolytes, and so on, most of which come from China, Japan, or Europe. The supply chain adds significant costs to making batteries, but it’s not easy to shift production to America because electric and hybrid car sales are too few, and each auto maker has its own specifications (NAE 2012).

The embodied energy (“oiliness”) of batteries is enormous.  The energy needed to make Tesla’s lithium-ion batteries is huge, substantially subtracting from the energy returned on energy invested (EROEI) (Batto 2017).

Ecological damage. Mining, and the toxic chemicals used to make batteries, pollute water and soil and harm human health and wildlife.

The energy required to charge them (Smil)

An electric version of today’s typical American vehicle (a composite of passenger cars, SUVs, vans, and light trucks) would require at least 150 Wh/km; and the distance of 20,000 km driven annually by an average vehicle would translate to 3 MWh of electricity consumption. In 2010, the United States had about 245 million passenger cars, SUVs, vans, and light trucks; hence, an all-electric fleet would call for a theoretical minimum of about 750 TWh/year. This approximation allows for the rather heroic assumption that all-electric vehicles could be routinely used for long journeys, including one-way commutes of more than 100 km. And the theoretical total of 3 MWh/car (or 750 TWh/year) needs several adjustments to make it more realistic. The charging and recharging cycle of the Li-ion batteries is about 85 percent efficient, and about 10 percent must be subtracted for self-discharge losses; consequently, the actual need would be close to 4 MWh/car, or about 980 TWh of electricity per year. This is a very conservative calculation, as the overall demand of a midsize electric vehicle would be more likely around 300 Wh/km or 6 MWh/year. But even this conservative total would be equivalent to roughly 25% of the U.S. electricity generation in 2008, and the country’s utilities needed 15 years (1993–2008) to add this amount of new production.

The average source-to-outlet efficiency of U.S. electricity generation is about 40 percent and, adding 10 percent for internal power plant consumption and transmission losses, this means that 11 MWh (nearly 40 GJ) of primary energy would be needed to generate electricity for a car with an average annual consumption of about 4 MWh.

This would translate to 2 MJ for every kilometer of travel, a performance equivalent to about 38 mpg (6.25 L/100 km)—a rate much lower than that offered by scores of new pure gasoline-engine car models, and inferior to advanced hybrid drive designs.
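[ED: the sketch below retraces Smil’s arithmetic from the excerpt above. Every input is one of his stated assumptions except the 32 MJ per litre used for the gasoline-equivalence step, which is my assumption; small rounding differences explain the gap from his 4 MWh, 11 MWh, and 38 mpg figures.]

```python
# Reproduction of the Smil calculation quoted above; inputs are his stated assumptions,
# except the 32 MJ/L gasoline energy content assumed for the final mpg conversion.
wh_per_km = 150                   # minimum electricity use of a typical US vehicle, at the battery
km_per_year = 20_000              # average annual distance driven
fleet = 245e6                     # US light-duty fleet, ~2010
charge_eff = 0.85                 # Li-ion charge/discharge efficiency
self_discharge = 0.10             # extra losses while the car sits idle
grid_eff = 0.40                   # source-to-outlet efficiency of US generation
plant_tx_losses = 0.10            # internal plant use plus transmission losses

at_battery_mwh = wh_per_km * km_per_year / 1e6                       # 3 MWh per car per year
at_outlet_mwh = at_battery_mwh / charge_eff * (1 + self_discharge)   # ~3.9, "close to 4 MWh"
fleet_twh = at_outlet_mwh * fleet / 1e6                              # ~950-980 TWh/year
primary_mwh = at_outlet_mwh / grid_eff * (1 + plant_tx_losses)       # ~10.7, "about 11 MWh"

mj_per_km = primary_mwh * 3600 / km_per_year                         # ~2 MJ of primary energy per km
litres_per_100km = mj_per_km / 32 * 100                              # assuming 32 MJ/L of gasoline
mpg_equivalent = 235.2 / litres_per_100km                            # L/100 km to US mpg

print(f"Per car at the outlet: {at_outlet_mwh:.1f} MWh/yr; fleet: {fleet_twh:.0f} TWh/yr")
print(f"Primary energy: {primary_mwh:.1f} MWh/car/yr  ->  ~{mpg_equivalent:.0f} mpg equivalent")
```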

The latest European report on electric cars—appropriately entitled How to Avoid an Electric Shock—offers analogous conclusions. A complete shift to electric vehicles would require a 15% increase in the European Union’s electricity consumption, and electric cars would not reduce CO2 emissions unless all that new electricity came from renewable sources.

Inherently low load factors of wind or solar generation, typically around 25 percent, mean that adding nearly 1 PWh of renewable electricity generation would require installing about 450 GW in wind turbines and PV cells, an equivalent of nearly half of the total U.S. capability in 2007.
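[ED: the 450 GW figure follows directly from the 25 percent load factor; the few lines below simply spell the conversion out.]

```python
# Spelling out the ~450 GW estimate above: ~1 PWh/year of new generation at a 25% load factor.
pwh_per_year = 1.0
load_factor = 0.25
hours_per_year = 8760

gwh_per_year = pwh_per_year * 1e6                 # 1 PWh = 1,000,000 GWh
average_gw = gwh_per_year / hours_per_year        # ~114 GW of average output
nameplate_gw = average_gw / load_factor           # ~457 GW of installed capacity

print(f"Average output: {average_gw:.0f} GW; nameplate needed at 25%: ~{nameplate_gw:.0f} GW")
```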

The National Research Council found that for electric vehicles to become mainstream, significant battery breakthroughs are required to lower cost, extend driving range, reduce refueling time, and improve safety. Battery life is not known for the first generation of PEVs. Performance degradation in hybrid car batteries is hardly noticed since the gasoline combustion engine kicks in, but with a PEV there is no hiding reduced performance. If this happens in less than the 15-year lifespan of a vehicle, that will be a problem. PEVs already cost thousands more than an ICE vehicle, and their batteries have a limited warranty of 5-8 years. A Nissan Leaf battery replacement is $5,500, which Nissan admits is sold at a loss (NAS 2015).

Cold weather increases energy consumption


 Source: Argonne National Laboratory

On a cold day an electric car consumes its stored electric energy quickly because of the extra electricity needed to heat the car.  For example, the range of a Nissan Leaf is 84 miles on the EPA test cycle, but if the owner drives 90% of the time over 70 mph and lives in a cold climate, the range could be as low as 50 miles (NRC 2015).

 

References

ADEME. 2011. Study on the second life batteries for electric and plug-in hybrid vehicles.

Batto, A. B. 2017. The ecological challenges of Tesla’s Gigafactory and the Model 3. AmosBatto.wordpress.com

Birnbaum, M. November 23, 2015. Electric cars and the coal that runs them. Washington Post.

Borenstein, S. Jan 22, 2013. What holds energy tech back? The infernal battery. Associated Press.

Hillebrand, D. October 8, 2012. Advanced Vehicle Technologies; Outlook for Electrics, Internal Combustion, and Alternate Fuels. Argonne National Laboratory.

Hiscox, G. 1901. Horseless Vehicles, Automobiles, Motor Cycles. Norman Henley & Co.

Hodson, H. July 25, 2015. Power to the people. NewScientist.

House, Kurt Zenz. 20 Jan 2009. The limits of energy storage technology. Bulletin of the Atomic Scientists.

House 114-18. May 1, 2015. Innovations in battery storage for renewable energy. U.S. House of Representatives. 88 pages.

NAE. 2012. National Academy of Engineering. Building the U.S. Battery Industry for Electric Drive Vehicles: Summary of a Symposium. National Research Council

NAS 2013. National Academy of Sciences. Transitions to Alternative Vehicles and Fuels. Committee on Transitions to Alternative Vehicles and Fuels; Board on Energy and Environmental Systems; Division on Engineering and Physical Sciences; National Research Council

NAS. 2015. Cost, effectiveness and deployment of fuel economy tech for Light-Duty vehicles.   National Academy of Sciences. 613 pages.

NRC. 2008. Review of the 21st Century Truck Partnership. National Research Council, National Academy of Sciences.

NRC. 2013. Overcoming Barriers to Electric-Vehicle Deployment, Interim Report. Washington, DC: National Academies Press.

NRC. 2015. Overcoming Barriers to Deployment of Plug-in Electric Vehicles. National  Research Council, National Academies Press.

NYT. November 12, 1911. Foreign trade in Electric vehicles. New York Times, C8.

Service, R. 24 Jun 2011. Getting there. Better Batteries. Science Vol 332 1494-96.

Smil, V. 2010. Energy Myths and Realities: Bringing Science to the Energy Policy Debate. AEI Press.

Tesla. 2014. “Increasing Energy Density Means Increasing Range.”
http://www.teslamotors.com/roadster/technology/battery.

Thomas, B. December 17, 1967. AMC does a turnabout: starts running in black. Los Angeles Times, K10.

WP. October 31, 1915. Prophecies come true. Washington Post, E18.

WP. June 7, 1980. Plug ’Er In? Washington Post, A10.





Human domination of the biosphere: Rapid discharge of the earth-space battery foretells the future of humankind

27 07 2015

Chris Harries, a follower of this blog, has found a pdf file on XRayMike’s blog that is so amazing, and explains civilisation’s predicaments so well, I just had to write it up for you all to share around.  I think that the concept of the Earth as a chemical battery is simply stunning…….. the importance of this paper, I think, is epic.

The paper, written by John R. Schramski, David K. Gattie, and James H. Brown, begins with clarity…

Earth is a chemical battery where, over evolutionary time with a trickle-charge of photosynthesis using solar energy, billions of tons of living biomass were stored in forests and other ecosystems and in vast reserves of fossil fuels. In just the last few hundred years, humans extracted exploitable energy from these living and fossilized biomass fuels to build the modern industrial-technological-informational economy, to grow our population to more than 7 billion, and to transform the biogeochemical cycles and biodiversity of the earth. This rapid discharge of the earth’s store of organic energy fuels the human domination of the biosphere, including conversion of natural habitats to agricultural fields and the resulting loss of native species, emission of carbon dioxide and the resulting climate and sea level change, and use of supplemental nuclear, hydro, wind, and solar energy sources. The laws of thermodynamics governing the trickle-charge and rapid discharge of the earth’s battery are universal and absolute; the earth is only temporarily poised a quantifiable distance from the thermodynamic equilibrium of outer space. Although this distance from equilibrium is comprised of all energy types, most critical for humans is the store of living biomass. With the rapid depletion of this chemical energy, the earth is shifting back toward the inhospitable equilibrium of outer space with fundamental ramifications for the biosphere and humanity. Because there is no substitute or replacement energy for living biomass, the remaining distance from equilibrium that will be required to support human life is unknown.

To illustrate this stunning concept of the Earth as a battery, this clever illustration is used:

[Figure 1: The earth-space chemical battery, trickle-charged by photosynthesis and rapidly discharged by human energy use]

That just makes so much sense, and makes such mockery of those who believe ‘innovation’ can replace this extraordinary system.

It took hundreds of millions of years for photosynthetic plants to trickle-charge the battery, gradually converting diffuse low-quality solar energy to high-quality chemical energy stored temporarily in the form of living biomass and more lastingly in the form of fossil fuels: oil, gas, and coal. In just the last few centuries—an evolutionary blink of an eye—human energy use to fuel the rise of civilization and the modern industrial-technological-informational society has discharged the earth-space battery.

So then, how long have we got before the battery’s flat?

The laws of thermodynamics dictate that the difference in rate and timescale between the slow trickle-charge and rapid depletion is unsustainable. The current massive discharge is rapidly driving the earth from a biosphere teeming with life and supporting a highly developed human civilization toward a barren moonscape.

The truly surprising thing is how much I’ve been feeling this was the case, and for how long…..  the ever lowering ERoEI of the energy sources we insist on using is merely a signal of entropy, and it doesn’t matter how clever we are, or how innovative, entropy rules.  People with green dreams of renewables-powered EVs and houses and businesses simply do not understand entropy.

Energy in Physics and Biology

The laws of thermodynamics are incontrovertible; they have inescapable ramifications for the future of the biosphere and humankind. We begin by explaining the thermodynamic concepts necessary to understand the energetics of the biosphere and humans within the earth-space system. The laws of thermodynamics and the many forms of energy can be difficult for non-experts to grasp. However, the earth’s flows and stores of energy can be explained in straightforward terms to understand why the biosphere and human civilization are in energy imbalance. These physical laws are universal and absolute, they apply to all human activities, and they are the universal key to sustainability.

The Paradigm of the Earth-Space Battery

By definition, the quantity of chemical energy concentrated in the carbon stores of planet Earth (positive cathode) represents the distance from the harsh thermodynamic equilibrium of nearby outer space (negative anode). This energy gradient sustains the biosphere and human life. It can be modeled as a once-charged battery. This earth-space chemical battery (Fig. 1) trickle charged very slowly over 4.5 billion years of solar influx and accumulation of living biomass and fossil fuels. It is now discharging rapidly due to human activities. As we burn organic chemical energy, we generate work to grow our population and economy. In the process, the high-quality chemical energy is transformed into heat and lost from the planet by radiation into outer space. The flow of energy from cathode to anode is moving the planet rapidly and irrevocably closer to the sterile chemical equilibrium of space.

[Figure 2: The earth’s primary recoverable chemical and nuclear energy stores, shown as their respective distances from the thermodynamic equilibrium of outer space]

Fig. 2 depicts the earth’s primary higher-quality chemical and nuclear energy storages as their respective distances from the equilibrium of outer space. We follow the energy industry in focusing on the higher-quality pools and using “recoverable energy” as our point of reference, because many deposits of fossil fuels and nuclear ores are dispersed or inaccessible and cannot be currently harvested to yield net energy gain and economic profit (4). The very large lower-quality pools of organic energy including carbon compounds in soils and oceanic sediments (5, 6) are not shown, but these are not currently economically extractable and usable, so they are typically not included in either recoverable or nonrecoverable categories. Although the energy gradients attributed to geothermal cooling, ocean thermal gradients, greenhouse air temperatures, etc., contribute to Earth’s thermodynamic distance from the equilibrium of space, they are also not included as they are not chemical energies and presumably would still exist in some form on a planet devoid of living things, including humans. Fig. 2 shows that humans are currently discharging all of the recoverable stores of organic chemical energy to the anode of the earth-space battery as heat.

Most people who argue about the viability of their [insert favorite technology] only see that viability in terms of money.  Energy, to most people is such a nebulous concept that they do not see the failures of their techno Utopian solutions…….

[Figure 3: The decline of carbon stored in living biomass over the last 2,000 years]

Living Biomass Is Depleting Rapidly

At the time of the Roman Empire and the birth of Christ, the earth contained ∼1,000 billion tons of carbon in living biomass (10), equivalent to 35 ZJ of chemical energy, mostly in the form of trees in forests. In just the last 2,000 y, humans have reduced this by about 45% to ∼550 billion tons of carbon in biomass, equivalent to 19.2 ZJ. The loss has accelerated over time, with 11% depleted just since 1900 (Fig. 3) (11, 12). Over recent years, on average, we are harvesting—and releasing as heat and carbon dioxide—the remaining 550 billion tons of carbon in living biomass at a net rate of ∼1.5 billion tons carbon per year (13, 14). The cause and measurement of biomass depletion are complicated issues, and the numbers are almost constantly being reevaluated (14). The depletion is due primarily to changes in land use, including deforestation, desertification, and conversion of vegetated landscapes into barren surfaces, but also secondarily to other causes such as pollution and unsustainable forestry and fisheries. Although the above quantitative estimates have considerable uncertainty, the overall trend and magnitude are inescapable facts with dire thermodynamic consequences.

The Dominant Role of Humans

Homo sapiens Is a Unique Species

The history of humankind—starting with hunter-gatherers, who learned to obtain useful heat energy by burning wood and dung, and continuing to contemporary humans, who apply the latest technologies, such as fracking, solar panels, and wind turbines—is one of innovating to use all economically exploitable energy sources at an ever increasing rate (12, 15). Together, the biological imperative of the Malthusian-Darwinian dynamic to use all available resources and the social imperative to innovate and improve human welfare have resulted in at least 10,000 years of virtually uninterrupted population and economic growth: from a few million hunter-gatherers to more than 7 billion modern humans and from a subsistence economy based on sustainable use of plants and animals (i.e., in equilibrium with photosynthetic energy production) to the modern industrial-technological-informational economy (i.e., out of equilibrium due to the unsustainable unidirectional discharge of the biomass battery).

Fig. 4 depicts the multiplier effect of two large numbers that determine the rapid discharge rate of the earth‐space battery. Energy use per person multiplied by population gives total global energy consumption by humans. According to British Petroleum’s numbers (16), which most experts accept, in 2013, average per capita energy use was 74.6 × 10^9 J/person per year (equivalent to ∼2,370 W if plotted in green in Fig. 4). Multiplying this by the world population of 7.1 billion in 2013 gives a total consumption of ∼0.53 ZJ/y (equivalent to 16.8 TW if plotted in red in Fig. 4), which is greater than 1% of the total recoverable fossil fuel energy stored in the planet (i.e., 0.53 ZJ/40 ZJ = 1.3%). As time progresses, the population increases, and the economy grows, the outcome of multiplying these two very large numbers is that the total rate of global energy consumption is growing at a near-exponential rate.
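[ED: the two-big-numbers multiplication described above can be retraced directly. The inputs are the figures quoted in the excerpt; only the seconds-per-year conversion is added here, and small rounding differences aside it reproduces the ∼2,370 W, 16.8 TW and 1.3% values.]

```python
# Retracing the per-capita x population arithmetic quoted above (BP-derived figures for 2013).
per_capita_j_per_year = 74.6e9          # J per person per year, 2013
population = 7.1e9                      # people, 2013
seconds_per_year = 365.25 * 24 * 3600
recoverable_fossil_zj = 40              # ZJ, recoverable fossil energy store quoted above

per_capita_watts = per_capita_j_per_year / seconds_per_year      # ~2,360-2,370 W per person
total_zj_per_year = per_capita_j_per_year * population / 1e21    # ~0.53 ZJ per year
total_tw = total_zj_per_year * 1e21 / seconds_per_year / 1e12    # ~16.8 TW
share_of_store = total_zj_per_year / recoverable_fossil_zj       # ~1.3% of the store per year

print(f"{per_capita_watts:.0f} W/person, {total_tw:.1f} TW total, "
      f"{share_of_store:.1%} of the recoverable fossil store per year")
```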

[Figure 4: Per-capita energy use (green) multiplied by global population gives total global energy consumption (red)]

ANY follower of this blog should recognise the peak in the green line as a sure sign of Limits to Growth…. while everything else – population and energy consumption – is skyrocketing exponentially, fooling the techno Utopians into a feeling of security that’s equivalent to what one might feel in their nice new modern car on its way to a fatal accident with no survivors……. everything is going just fine, until it isn’t.

Ironically, powerful political and market forces, rather than acting to conserve the remaining charge in the battery, actually push in the opposite direction, because the pervasive efforts to increase economic growth will require increased energy consumption (4, 8). Much of the above information has been presented elsewhere, but in different forms (e.g., in the references cited). Our synthesis differs from most of these treatments in two respects: (i) it introduces the paradigm of the earth‐space battery to provide a new perspective, and (ii) it emphasizes the critical importance of living biomass for global sustainability of both the biosphere and human civilization.

Humans and Phytomass

We can be more quantitative and put this into context by introducing a new sustainability metric Ω,

Ω = P / (B·N)   [1]

which purposefully combines perhaps the two critical variables affecting the energy status of the planet: total phytomass and human population. Eq. 1 accomplishes this combination by dividing the stored phytomass chemical energy P (in joules) by the energy needed to feed the global population for 1 y (joules per year; Fig. 5). The denominator represents the basic (metabolic) energy need of the human population; it is obtained by multiplying the global population N by their per capita metabolic needs for 1 y (B = 3.06 × 10^9 joules/person per year, as calculated from an 8.4 × 10^6 joules/person per day diet). The simple expression for Ω gives the number of years at current rates of consumption that the global phytomass storage could feed the human race. By making the conservative but totally unrealistic assumption that all phytomass could be harvested to feed humans (i.e., all of it is edible), we get an absolute maximum estimate of the number of years of food remaining for humankind. Fig. 5 shows that over the years 0–2000, Ω has decreased predictably and dramatically from 67,000 to 1,029 y (for example, in the year 2000, P = 19.3 × 10^21 joules, B = 3.06 × 10^9 joules/person per year, and N = 6.13 × 10^9 persons; thus, Ω = 1,029 y). In just 2,000 y, our single species has reduced Ω by 98.5%.

The above is a drastic underestimate for four reasons. First, we obviously cannot consume all phytomass stores for food; the preponderance of phytomass runs the biosphere. Second, basing our estimate on human biological metabolism does not include the high rate of extrametabolic energy expenditure currently being used to feed the population and fuel the economy. Third, the above estimate does not account for the fact that both the global human population and the per-capita rate of energy use are not constant, but increasing at near-exponential rates. We do not attempt to extrapolate to predict the future trajectories, which must ultimately turn downward as essential energy stocks are depleted. Finally, we emphasize that not only has the global store of phytomass energy decreased rapidly, but more importantly human dominance over the remaining portion has also increased rapidly.

Long before the hypothetical deadline when the global phytomass store is completely exhausted, the energetics of the biosphere and all its inhabitant species will have been drastically altered, with profound changes in biogeochemical function and remaining biodiversity. The very conservative Ω index shows how rapidly land use changes, NPP appropriation, pollution, and other activities are depleting phytomass stores to fuel the current near-exponential trajectories of population and economic growth. Because the Ω index is conservative, it also emphasizes how very little time is left to make changes and achieve a sustainable future for the biosphere and humanity. We are already firmly within the zone of scientific uncertainty where some perturbation could trigger a catastrophic state shift in the biosphere and in the human population and economy (31). As we rapidly approach the chemical equilibrium of outer space, the laws of thermodynamics offer little room for negotiation.
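[ED: the year-2000 value of Ω can be reproduced in a couple of lines from the numbers given in the excerpt above.]

```python
# The Omega calculation for the year 2000, using the paper's stated values.
P = 19.3e21          # J, chemical energy stored in global phytomass, year 2000
B = 3.06e9           # J per person per year, basic metabolic need (8.4e6 J/person/day diet)
N = 6.13e9           # persons, global population in 2000

omega = P / (B * N)  # years the phytomass store could feed humanity at metabolic rates
print(f"Omega (year 2000): {omega:.0f} years")   # ~1,029 years, versus ~67,000 at year 0
```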

THIS, is the really scary bit………..  collapse, anyone?

[Figure 5: The decline of the sustainability metric Ω, from about 67,000 years to about 1,029 years over the years 0–2000]

Discussion

The trajectory of Ω shown in Fig. 5 has at least three implications for the future of humankind. First, there is no reason to expect a different trajectory in the near future. Something like the present level of biomass energy destruction will be required to sustain the present global population with its fossil fuel‐subsidized food production and economy. Second, as the earth‐space battery is being discharged ever faster (Fig. 3) to support an ever larger population, the capacity to buffer changes will diminish and the remaining energy gradients will experience increasing perturbations. As more people depend on fewer available energy options, their standard of living and very survival will become increasingly vulnerable to fluctuations, such as droughts, disease epidemics, social unrest, and warfare. Third, there is considerable uncertainty in how the biosphere will function as Ω decreases from the present Ω = ∼1,029 y into an uncharted thermodynamic operating region. The global biosphere, human population, and economy will obviously crash long before Ω = 1 y. If H. sapiens does not go extinct, the human population will decline drastically as we will be forced to return to making a living as hunter-gatherers or simple horticulturalists.

The laws of thermodynamics take no prisoners. Equilibrium is inhospitable, sterile, and final.  I just wish we could get through to the people running the planet.  To say this paper blew me away is the understatement of the year, and parsing the ‘good bits’ for this post doesn’t really do it justice.  It needs to be read at least twice in fact, and if you can handle the weight, I’d urge you to read the entire thing at its source https://collapseofindustrialcivilization.files.wordpress.com/2015/07/pnas-2015-schramski-1508353112.pdf

How many of us will “return to making a living as hunter-gatherers or simple horticulturalists” I wonder……. We are fast running out of time.





Climate Change: The 40 Year Delay Between Cause and Effect

18 04 2014

Climate Change: The 40 Year Delay Between Cause and Effect (via Skeptical Science)

Posted on 22 September 2010 by Alan Marshall

Guest post by Alan Marshall from climatechangeanswers.org

Following the failure to reach a strong agreement at the Copenhagen conference, climate skeptics have had a good run in the Australian media, continuing their campaigns of disinformation. In such an atmosphere it is vital that we articulate the basic science of climate change, the principles of physics and chemistry which the skeptics ignore.


Alan Marshall

The purpose of this article is to clearly explain, in everyday language, the two key principles which together determine the rate at which temperatures rise. The first principle is the greenhouse effect of carbon dioxide and other gases. The second principle is the thermal inertia of the oceans, sometimes referred to as climate lag. Few people have any feel for the numbers involved with the latter, so I will deal with it in more depth.

The Greenhouse Effect

The greenhouse effect takes its name from the glass greenhouse, which farmers have used for centuries, trapping heat to grow tomatoes and other plants that could not otherwise be grown in the colder regions of the world. Like glass greenhouses, greenhouse gases allow sunlight to pass through unhindered, but trap heat radiation on its way out. The molecular structure of CO2 is such that it is “tuned” to the wavelengths of infrared (heat) radiation emitted by the Earth’s surface back into space, in particular to the 15 micrometer band. The molecules resonate, their vibrations absorbing the energy of the infra-red radiation. It is vibrating molecules that give us the sensation of heat, and it is by this mechanism that heat energy is trapped by the atmosphere and re-radiated to the surface. The extent to which temperatures will rise due to a given change in the concentration of greenhouse gases is known as the “climate sensitivity,” and you may find it useful to search for this term when doing your own research.

Most principles of physics are beyond question because both cause and effect are well understood. A relationship between cause and effect is proved by repeatable experiments. This is the essence of the scientific method, and the source of knowledge on which we have built our technological civilization. We do not question Newton’s laws of motion because we can demonstrate them in the laboratory. We no longer question that light and infrared radiation are electromagnetic waves because we can measure their wavelengths and other properties in the laboratory. Likewise, there should be no dissent that CO2 absorbs infrared radiation, because that too has been demonstrated in the laboratory. In fact, it was first measured 150 years ago by John Tyndall [i] using a spectrophotometer. In line with the scientific method, his results have been confirmed and more precisely quantified by Herzberg in 1953, Burch in 1962 and 1970, and others since then.

Given that the radiative properties of CO2 have been proven in the laboratory, you would expect them to be the same in the atmosphere, given that they depend on CO2’s unchanging molecular structure. You would think that the onus would be on the climate skeptics to demonstrate that CO2 behaves differently in the atmosphere than it does in the laboratory. Of course they have not done so. In fact, since 1970 satellites have measured infrared spectra emitted by the Earth and confirmed not only that CO2 traps heat, but that it has trapped more heat as concentrations of CO2 have risen.

[Graph: satellite measurements of outgoing infrared radiation, 1970 vs 1996 (Harries et al.)]

The above graph clearly shows that, at the major absorption wavelengths of CO2 and of methane, less infrared was escaping into space in 1996 than in 1970.

After 150 years of scientific investigation, the impact of CO2 on the climate is well understood. Anyone who tells you otherwise is selling snake oil.

The Thermal Inertia of the Oceans

If we accept that greenhouse gases are warming the planet, the next concept that needs to be grasped is that it takes time, and we have not yet seen the full rise in temperature that will occur as a result of the CO2 we have already emitted. The Earth’s average surface temperature has already risen by 0.8 degrees C since 1900. The concentration of CO2 in the atmosphere is increasing at the rate of 2 ppm per year. Scientists tell us that even if CO2 were stabilized at its current level of 390 ppm, there is at least another 0.6 degrees “in the pipeline”. If the findings from a recent study of Antarctic ice cores are confirmed, that last figure will prove to be conservative [ii]. The delayed response is known as climate lag.

The reason the planet takes several decades to respond to increased CO2 is the thermal inertia of the oceans. Consider a saucepan of water placed on a gas stove. Although the flame has a temperature measured in hundreds of degrees C, the water takes a few minutes to reach boiling point. This simple analogy explains climate lag. The mass of the oceans is around 500 times that of the atmosphere, and the time they take to warm up is measured in decades. Because of the difficulty in quantifying the rate at which the warm upper layers of the ocean mix with the cooler deeper waters, there is significant variation in estimates of climate lag. A paper by James Hansen and others [iii] estimates the time required for 60% of global warming to take place in response to increased emissions to be in the range of 25 to 50 years. The mid-point of this range is 37.5 years, which I have rounded to 40.
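
One way to picture the lag is to treat the ocean as a single box whose temperature relaxes exponentially towards its new equilibrium; with a time constant of about 40 years, roughly 60% of the committed warming has appeared after 40 years. The following is only a toy illustration of that idea (a first-order lag with an assumed 40-year time constant, not Hansen’s calculation):

```python
import math

def realised_fraction(years, time_constant=40.0):
    """Fraction of the eventual (equilibrium) warming already realised after `years`,
    for a single-box ocean responding as a first-order lag.
    The 40-year time constant is illustrative only."""
    return 1.0 - math.exp(-years / time_constant)

for t in (10, 20, 40, 80):
    print(f"after {t:>2} years: {realised_fraction(t):.0%} of the eventual warming")
```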

In recent times, climate skeptics have been peddling a lot of nonsense about average temperatures actually cooling over the last decade. There was a brief dip around the year 2000 following the extreme El Nino event of 1998, but with greenhouse emissions causing a planetary energy imbalance of 0.85 watts per square metre [iv], there is inevitably a continual rising trend in global temperatures. It should then be no surprise to anyone that the 12 month period June 2009 to May 2010 was the hottest on record [v].

The graph below from Australia’s CSIRO [vi] shows a clear rising trend in temperatures as well as a rising trend in sea-level.

[Graph: ocean heat content, upper 700 m, and global sea level (CSIRO), both trending upwards]

Implications of the 40 Year Delay

The estimate of 40 years for climate lag, the time between the cause (increased greenhouse gas emissions) and the effect (increased temperatures), has profound negative consequences for humanity. However, if governments can find the will to act, there are positive consequences as well.

With 40 years between cause and effect, the average temperatures of the last decade are a result of what we were thoughtlessly putting into the air in the 1960s. It also means that the true impact of our emissions over the last decade will not be felt until the 2040s. This thought should send a chill down your spine!

Conservative elements in both politics and the media have been playing up uncertainties in some of the more difficult-to-model effects of climate change, while ignoring the solid scientific understanding of the cause. If past governments had troubled themselves to understand the cause, and acted in a timely way, climate change would have been contained with minimal disruption. By refusing to acknowledge the cause, and demanding to see the effects before action is taken, past governments have brought on the current crisis. By the time they see those effects, it will be too late to deal with the cause.

The positive consequence of climate lag is the opportunity for remedial action before the ocean warms to its full extent. We need not only to work towards reducing our carbon emissions to near zero by 2050, but also, well before then, to begin removing excess CO2 from the atmosphere on an industrial scale. Biochar is one promising technology that can have an impact here. Synthetic trees, with carbon capture and storage, are another. If an international agreement can be forged to provide a framework for not only limiting new emissions, but sequestering old emissions, then the full horror of the climate crisis may yet be averted.

Spreading the Word

The clock is ticking. All of us who understand clearly the science of climate change, and its implications for humanity, should do what we can to inform the public debate. I wrote the original version of this article in February 2010 to help inform the Parliament of Australia. The letter was sent to 40 MPs and senators, and has received positive feedback from members of the three largest parties. To find out more about this information campaign, and for extensive coverage of the science of climate change and its technological, economic and political solutions, please visit my web site at www.climatechangeanswers.org.

References

i Gulf Times, “A Last Chance to Avert Disaster”, available at
http://www.gulf-times.com/site/topics/article.asp?cu_no=2&item_no=330396&version=1&template_id=46&parent_id=26

ii Institute of Science in Society, “350 ppm CO2 The Target”,
http://www.i-sis.org.uk/350ppm_CO2_the_Target.php, p.4

iii Science AAAS, ”Earth’s Energy Imbalance: Confirmation and Implications”, available (after free registration) at http://www.scienceonline.org/cgi/reprint/1110252v1.pdf, p.1

iv NASA, “The Ocean Heat Trap”, available at http://www.ocean.com, p.3

v NASA GISS temperature record (see http://climateprogress.org/2010/06/03/nasa-giss-james-hansen-study-global-warming-record-hottest-year/)

vi CSIRO, “Sea Level Rise”, available at http://www.cmar.csiro.au/sealevel/sl_drives_longer.html





Can The Matrix Be Tested?

3 12 2013

As surely most of my readers would know or realise, I only use the notion of the Matrix as a metaphor for the unsustainable world “out there”…….  but have you ever wondered whether there was some possibility that the vision laid out in the Matrix movies could be real?

Could we actually be living in a computer simulation? A research project at the University of Washington, Seattle, went a step beyond the Matrix and looked at the possibility that we are not only living in a sim world created around us here on Earth, but in a simulated universe run by our descendants…..  sounds crazy?  Read on…. A team of physicists claims to have come up with a test to determine whether such an assumption could be true. They based their work on an argument published in 2003, which states that at least one of three possibilities must be true:

1) The human species is likely to go extinct before reaching a “posthuman” stage.
2) Any posthuman civilization is very unlikely to run a significant number of simulations of its evolutionary history.
3) We are almost certainly living in a computer simulation.

Nick Bostrom, who published that argument ten years ago, also wrote that “the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.”

The UW researchers said that we ultimately would have to be able to simulate the relationship between energy and momentum in special relativity at the scale of the universe to “understand the constraints on physical processes that would indicate we are living in a computer model.” The problem is that we are not even close to being able to simulate the universe. The largest supercomputers can only simulate nature “on the scale of one 100-trillionth of a meter, a little larger than the nucleus of an atom”, the researchers said. Eventually we would have to simulate a “large enough chunk” of the universe to figure out whether we live in a simulation or not.

In the movie, the action really begins when Neo is given a fateful choice: Take the blue pill and return to his oblivious, virtual existence, or take the red pill to learn the truth about the Matrix and find out “how deep the rabbit hole goes.”

Physicists can now offer us the same choice, the ability to test whether we live in our own virtual Matrix, by studying radiation from space.  Cosmic rays are the fastest particles that exist, and they originate in far-flung galaxies.  They always arrive at Earth with a specific maximum energy of 10^20 electron volts.  If there is a specific maximum energy for particles, then this gives rise to the idea that energy levels are defined, specific, and constrained by an outside force…….   Therefore, according to this research, if the energy levels of particles could be simulated, so too could the rest of the universe.

Even operating with the world’s most powerful supercomputers, simulations can only be done on a vanishingly small scale, which makes the maths pretty difficult. So, physicists as yet have only managed to simulate regions of space on the femto-scale.

Never heard of the prefix femto?  Me neither….  To put it in context, a femtometre is 10^-15 metres – that’s a quadrillionth of a metre, or 0.000000000001 mm.

However, the main problem with all such simulations is that the laws of physics have to be superimposed onto a discrete three-dimensional lattice which advances in time.  And that’s where the real test comes in.

So if it were true that we lived in a sim world, at Universe scale rather than femtometre scale, then the very laws of physics that allow us to devise such reality-checking technology may not have much in common (to say the least!) with the fundamental rules that govern the meta-universe inhabited by ‘our simulators’.  To us, these programmers would be gods, able to twist reality on a whim.

So should we say yes to the offer to take the red pill and learn the truth — or are the implications too disturbing……?  I wonder what the simulators have in store for us……





Mineral resources and the limits to growth

29 09 2013

I thought long and hard about reproducing this remarkable article here…….  It’s rather longer than anything I usually put up, and I was concerned about copyright, but found nothing on the original website where this was published that says I can’t do it…… and I expect no one at resilience.org objects to ensuring the spread of this important message.

Five years ago, I published a very short item on roughly the same concept.  But I’m no Ugo Bardi…..  So make yourself a good cuppa your favourite poison, and enjoy….

 

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 

So, ladies and gentlemen, let me start with this recent book of mine. It is titled “The Plundered Planet.”  You can surely notice that it is not titled “The Developed Planet” or “The Improved Planet.”  My co-authors and I chose to emphasize the concept of “Plundering”, of the fact that we are exploiting the resources of our planet as if they were free for the taking; that is, without thinking of the consequences.   And the main consequence with which we are concerned here is called “depletion,” even though we have to keep in mind the problem of pollution as well.

Now, there have been many studies on the question of depletion, but “The Plundered Planet” has a specific origin, and I can show it to you. Here it is.

It is the rather famous study that was published in 1972 with the title “The Limits to Growth”.  It was one of the first studies that attempted to quantify depletion and its effects on the world’s economic system.  It was a complex study, based on the best data available at the time, that used the most sophisticated computers then available to study how the interaction of various factors would affect parameters such as industrial production, agricultural production, population and the like.  Here are the main results of the 1972 study, the run that was called the “base case” (or “standard run”).  The calculations were redone in 2004, finding similar results.

As you can see, the results were not exactly pleasant to behold.  In 1972, the study saw a slowdown of the world’s main economic parameters that would take place within the first two decades of the 21st century.  I am sure that you are comparing, in your minds, these curves with the present economic situation and you may wonder whether these old calculations may be turning out to be incredibly good.  But I would also like to say that these curves are not – and never were – meant to be taken as specific predictions.  No one can predict the future, what we can do is to study tendencies and where these tendencies are leading us.  So, the main result of the Limits to Growth study was to show that the economic system was headed towards a collapse at some moment in the future owing to the combined effect of depletion, pollution, and overpopulation.  Maybe the economic problems we are seeing nowadays are a prelude to the collapse seen by this model, maybe not – maybe the predicted collapse is still far away in the future.  We can’t say right now. 

In any case, the results of the study can be seen as at least worrisome.  And a reasonable reaction when the book came out in 1972 would have been to study the problem in greater depth – nobody wants the economy to collapse, of course.  But, as you surely know, the Limits to Growth study was not well received.  It was strongly criticized, accused of having made “mistakes” of all kinds and at times of being part of a worldwide conspiracy to take control of the world and to exterminate most of humankind.  Of course, most of this criticism had political origins.  It was mostly a gut reaction: people didn’t like these results and sought to find ways to demonstrate that the model was wrong (or the data, or the approach, or something else).  If they couldn’t do that, they resorted to demonizing the authors – but that’s nothing new; I described it in a book of mine, “Revisiting the Limits to Growth”.

Nevertheless, there was a basic criticism of the “Limits” study that made sense.  Why should one believe in this model?  What exactly are the factors that generate the expected collapse?  Here, I must say, the answer often given in the early days by the authors and by their supporters wasn’t so good.  What the creators of the models said was that the model made sense according to their views, and they could show a scheme that looked like this (from the 1972 Italian edition of the book):

Now, I don’t know what you think of it; to me it looks more or less like a map of the Tokyo subway, complete with signs in kanji characters.  Not easy to navigate, to say the least.  So, why did the authors create this spaghetti model?  What was the logic in it?  It turns out that the Limits to Growth model has an internal logic and that it can be explained in thermodynamic terms.  However, it takes some work to describe the whole story.  So, let me start with the ultimate origin of these models:

If you have studied engineering, you surely recognize this object.  It is called a “governor” and it is a device developed in the 19th century to regulate the speed of steam engines.  It turns with the engine, and its arms open or close depending on the speed.  In so doing, the governor closes or opens the valve that sends steam into the engine.  It is interesting because it is the first self-regulating device of this kind and, in its time, it generated a lot of interest.  James Clerk Maxwell himself studied the behaviour of the governor and, in 1868, he came up with a set of equations describing it. Here is a page from his original article

I am showing you these equations just to let you note how these systems can be described by a set of coupled differential equations.  It is an approach that is still used today, and we can now solve this kind of equation in real time and control much more complex systems than steam engines.  For instance, drones.

You see here that a drone can be controlled so perfectly that it can hold a glass without spilling its contents. And you can have drones playing table tennis with each other, and much more.  Of course they are also machines designed for killing people, but let’s not go into that.  The point is that if you can solve a set of differential equations, you can describe – and also control – the behaviour of quite complex systems.

The work of Maxwell so impressed Norbert Wiener that it led him to develop the concept of “cybernetics”.

We don’t use the term cybernetics so much today.  But the ideas that started from Maxwell’s study of the governor were extremely fecund and gave rise to a whole new field of science.  When you use these equations for controlling mechanical systems, you use the term “control theory.”  But when you use the equations to study the behaviour of socio-economic systems, you use the term “system dynamics”.

System dynamics is something that was developed mainly by Jay Wright Forrester in the 1950s and 1960s, when computers powerful enough to solve sets of coupled differential equations in reasonable times first became available.  That generated a lot of studies, including “The Limits to Growth” of 1972, and today the field is alive and well in many areas.

A point I think is important to make is that these equations describe real-world systems, and real-world systems must obey the laws of thermodynamics.  So, system dynamics must be consistent with thermodynamics.  And it is.  Let me show you a common example of a system described by system dynamics: practitioners in this field are fond of using a bathtub as an example:

On the right you have a representation of the real system, a bathtub partly filled with water.  On the left, its representation using system dynamics.  These models are called “stock and flow”, because you use boxes to represent stocks (the quantity of water in the tub) and double-edged arrows to indicate flows.  The little butterfly-like symbols indicate valves, and single-edged arrows indicate relationships.

Note that I used a graphic convention that I like to use for my “mind sized” models.  That is, I have stocks flowing “down”, following the dissipation of thermodynamic potential.  In this case what moves the model is the gravitational potential; it is what makes water flow down, of course.  Ultimately, the process is driven by an increase in entropy, and I usually ask my students where the entropy increase occurs in this system.  They usually can’t give the right answer.  It is not that easy, indeed – I leave it to you as a little exercise.
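
If you prefer code to diagrams, the same bathtub can be sketched in a few lines of Python rather than Vensim (the constants here are arbitrary, chosen only to show the structure): one stock, a constant inflow, and an outflow that depends on the water level.

```python
def bathtub(steps=200, dt=0.1, inflow=1.0, drain_coeff=0.2):
    """One stock (water), one inflow, one outflow that depends on the stock.
    All constants are arbitrary; the point is the structure."""
    water = 0.0
    levels = []
    for _ in range(steps):
        outflow = drain_coeff * water        # the flow out depends on the stock
        water += (inflow - outflow) * dt     # integrate the stock over time
        levels.append(water)
    return levels

levels = bathtub()
print(f"water level after the run: {levels[-1]:.2f} "
      f"(steady state = inflow / drain_coeff = {1.0 / 0.2:.1f})")
```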

The model on the left is not simply a drawing of boxes and arrows; it is made with a software package called “Vensim”, which actually brings the model “alive” by building the equations and solving them in real time.  And, as you may imagine, it is not so difficult to make a model that describes a bathtub being filled from one side and emptied from the other. But, of course, you can do much more with these models.  So, let me show you a model made with Vensim that describes the operation of a governor and a steam engine.

Before we go on, let me introduce a disclaimer.  This is just a model that I put together for this presentation. It seems to work, in the sense that it describes a behaviour that I think is correct for a governor (you can see the results plotted inside the boxes).  But it doesn’t claim to be a complete model, and surely it is not the only possible way to make a system dynamics model of a governor.  This said, you can take a look at it and notice a few things.  The main one is that we have two “stocks” of energy: one for the large wheel of the steam engine, the other for the small wheel, which is the governor.  In order to provide some visual sense of this difference in size, I made the two boxes of different sizes, but that doesn’t change the equations underlying the model.  Note the “feedback”, the arrows that connect flows and stock sizes.  The concept of feedback is fundamental in these models.

Of course, this is also a model that is compatible with thermodynamics.  Only, in this case we don’t have a gravitational potential that moves the system, but a potential based on temperature differences.  The steam engine works because you have this temperature difference, and you know the work of Carnot and the others who described it.  So, I used the same convention here as before: thermodynamic potentials are dissipated going “down” in the model’s graphical representation.
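
As a rough illustration of what such a model does (a toy version with invented constants, not the Vensim model in the slide): steam feeds the engine stock through a valve, part of the engine’s energy drives the governor, and the valve closes as the governor spins faster, so the engine settles at a steady speed instead of running away.

```python
def governor_run(steps=500, dt=0.05):
    """Two energy stocks (engine and governor) with a negative feedback:
    the faster the governor spins, the more it closes the steam valve.
    All constants are invented for illustration."""
    engine, governor = 0.0, 0.0
    for _ in range(steps):
        valve = max(0.0, 1.0 - 0.5 * governor)             # feedback closes the valve
        d_engine = 2.0 * valve - 0.3 * engine - 0.2 * engine  # steam in, friction, drive to governor
        d_governor = 0.2 * engine - 0.4 * governor            # driven by the engine, loses to friction
        engine += d_engine * dt
        governor += d_governor * dt
    return engine, governor

engine, governor = governor_run()
print(f"engine settles near {engine:.2f}, governor near {governor:.2f}")
```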

Now, let me show you another simple model, the simplest version I can think of for a model that describes the exploitation of non-renewable resources:

It is, again, a model based on thermodynamics and, this time, driven by chemical potentials.  The idea is that the “resources” stock has a high chemical potential, in the sense that it may be thought of as, for instance, crude oil, which spontaneously combines with oxygen to release energy.  This energy is used by human beings to create what I call “capital” – the sum of everything you can do with oil; from industries to bureaucracies.

On the right, you can see the results that the model provides in terms of the behaviour as a function of time of the stock of the resources, their production, and the capital stock.  You may easily notice how similar these curves are to those provided by the more complex model of “The Limits to Growth.”  So, we are probably doing something right, even with this simple model.
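
In code, this simplest version amounts to two coupled equations of the predator-prey type (a sketch of the idea with invented constants, not the exact model in the slide): production needs both the resource and the capital, the resource only declines, capital grows from production and decays over time, and production traces a bell-shaped curve.

```python
def exploit(steps=3000, dt=0.1, resource=100.0, capital=1.0,
            k_extract=0.001, k_build=3.0, k_decay=0.05):
    """Two stocks: a non-renewable resource and the capital built by exploiting it.
    Production needs both stocks; the resource only goes down.  Constants are invented."""
    production = []
    for _ in range(steps):
        flow = k_extract * resource * capital          # production
        resource -= flow * dt
        capital += (k_build * flow - k_decay * capital) * dt
        production.append(flow)
    return production

production = exploit()
peak_step = max(range(len(production)), key=production.__getitem__)
print(f"production rises, peaks at step {peak_step}, then declines: a bell-shaped curve")
```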

But the point is that the model works!  When you apply it to real world cases, you see that its results can fit the historical data.  Let me show you an example:

This is the case of whaling in the 19th century, when whale oil was used as fuel for lamps, before it became common to use kerosene.  I am showing you this image because it is the first attempt I made to use the model, and I was surprised to see that it worked – and it worked remarkably well.  You see, here you have two stocks: one is whales, the other is the capital of the whaling industry, which can be measured by means of a proxy: the total tonnage of the whaling fleet.  And, as I said, the model describes very well how the industry grew on the profit of killing whales, but they killed way too many of them.  Whales are, of course, a renewable resource, in principle.  But if too many whales are killed, then they don’t have enough time to reproduce, and they behave as a non-renewable resource.  Biologists have determined that at the end of this fishing cycle there were only about 50 females of the species being hunted at that time.  Non-renewable, indeed!

So, that is, of course, one of the several cases where we found that the model can work.  Together with my co-workers, we found that it can work also for petroleum extraction, as we describe in a paper published in 2009 (Bardi and Lavacchi).  But let me skip that – the important thing is that the model works in some cases but, as you would expect, not in all. And that is good – because what you don’t want is a “fit-all” model that doesn’t tell you anything about the system you are studying.  Let’s say that the model reproduces what’s called the “Hubbert model” of resource exploitation, which is a purely empirical model that was proposed more than 50 years ago and that remains a basic one in this kind of study: it is the model that proposes that extraction goes through a “bell-shaped” curve, and the peak of that curve, the “Hubbert peak”, is the origin of the concept of “peak oil”, which you’ve surely heard about.  Here is the original Hubbert model, and you see that it has described reasonably well the production of crude oil in the lower 48 US states.

Now, let’s move on a little.  What I have presented to you is a very simple model that reproduces some of the key elements of the model used for “The Limits to Growth” study but it is of course a very simplified version.  You may have noted that the curves for industrial production of the Limits to Growth tend to be skewed forward and this simple model can’t reproduce that.  So, we must move one step forward and let me show you how it can be done while maintaining the basic idea of a “thermodynamic cascade” that goes from higher potentials to lower potentials.  Here is what I’ve called the “Seneca model”


You see that I added a third stock to the system.   In this case I called it “pollution”; but you might also call it, for instance, “bureaucracy” or maybe even “war”.  It is any stock that draws resources from the “Capital” (aka “the economy”) stock.  And the result is that the capital stock and production collapse rather rapidly; this is what I called “the Seneca effect”, from the Roman philosopher Lucius Annaeus Seneca, who noted that “Fortune is slow, but ruin is rapid”.
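
A toy three-stock version of the same idea (invented constants, just to show the shape, not the model in the figure): pollution is fed by the capital stock and drains it, and because pollution lags behind capital, the drain is largest exactly when growth can no longer compensate, so the decline comes out steeper than the growth.

```python
def seneca(steps=4000, dt=0.1):
    """Three stocks: a resource feeding a capital stock, plus pollution, which is
    produced by capital and drains it.  All constants are invented for illustration."""
    resource, capital, pollution = 100.0, 1.0, 0.0
    history = []
    for _ in range(steps):
        flow = 0.001 * resource * capital                    # extraction
        resource -= flow * dt
        capital += (3.0 * flow - 0.05 * capital
                    - 0.02 * capital * pollution) * dt       # pollution damages capital
        pollution += (0.02 * capital - 0.03 * pollution) * dt
        history.append(capital)
    return history

cap = seneca()
peak = max(range(len(cap)), key=cap.__getitem__)
half_up = next(i for i, c in enumerate(cap) if c >= 0.5 * cap[peak])
half_down = next((i for i in range(peak, len(cap)) if cap[i] <= 0.5 * cap[peak]),
                 len(cap) - 1)
print(f"rise from half-peak to peak: {peak - half_up} steps; "
      f"fall from peak back to half-peak: {half_down - peak} steps")
```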

For this model, I can’t show you specific historical cases – we are still working on this idea, but it is not easy to make quantitative fittings because the model is complicated.  But there are cases of simple systems where you see this specific behaviour, highly forward skewed curves – caviar fishing is an example.  But let’s not go there right now.

What I would like to say is that you can move onward with this idea of cascading thermodynamic potentials and build up something that may be considered as a simplified version of the five main stocks taken into account in the “Limits to Growth” calculations.  Here it is

Now, another disclaimer: I am not saying that this model is equivalent to that of the Limits to Growth, nor that it is the only way to arrange stocks and flows in order to produce results similar to those obtained by the Limits to Growth model.  It is here just to show you the logic of the model.  And I think you can agree, now, that there is one.  The “Limits” model is not just randomly arranged spaghetti; it is something that has a deep logic based on thermodynamics.  It describes the dissipation of a cascade of thermodynamic potentials.

In the end, all these models, no matter how you arrange their elements, tend to generate similar basic results: the bell-shaped curve, the one that Hubbert had already proposed in 1956.
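
For reference, the bell-shaped curve is commonly written as the derivative of a logistic function; this is the standard textbook form of the Hubbert curve (the numbers below are purely illustrative):

```python
import math

def hubbert(year, q_total=200.0, steepness=0.1, peak_year=1970.0):
    """Hubbert curve: yearly production as the derivative of a logistic curve,
    so that cumulative production approaches q_total.  Numbers are illustrative."""
    x = math.exp(-steepness * (year - peak_year))
    return q_total * steepness * x / (1.0 + x) ** 2

for year in (1930, 1950, 1970, 1990, 2010):
    print(f"{year}: production {hubbert(year):5.2f} units/year")
```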

The curve may be skewed forward or not, but that changes little about the fact that the downward slope is not so pleasant for those who live through it.

Don’t expect this curve to be a physical law; after all, it depends on human choices, and human choices may be changed.  But, in normal conditions, human beings tend to follow rather predictable patterns, for instance exploiting the “easy” resources first (those at the highest thermodynamic potential) and then moving down to the more difficult ones.  That is what generates the curve.

Now, I could show you many examples of the tendency of real-world systems to follow the bell-shaped curve.  Let me show you just one: a graph recently made by Jean Laherrere.

These are data for the world’s oil production.  As you can see, there are irregularities and oscillations.  But note how, from 2004 to 2013, we have been following the curve: we move on a predictable path.  Already in 2004 we could have predicted what today’s oil production would be.  But, of course, there are other elements in this system.  In the figure on the right, you can also see the appearance of the so-called “non-conventional” oil resources, which are following their own curve and which are keeping the production of combustible liquids (a concept slightly different from that of “crude oil”) rather stable or slightly increasing.  But, you see, the picture is clear and the predictive ability of these models is rather good, even though, of course, approximate.

Now, there is another important point I’d like to make.  You see, these models are ultimately based on thermodynamics, and there is a thermodynamic parameter embedded in them that is called EROI (or ERoEI), the energy return on the energy invested. It is basically the decline in this parameter that makes, for instance, the extraction of oil gradually produce less net energy and, ultimately, become pointless when the value of the ERoEI goes below one.  Let me show you an illustration of this concept:

You see?  The data you usually read for petroleum production are just that: how much petroleum is being produced in terms of volume.  There is already a problem with the fact that not all petroleum is the same in terms of energy per unit volume, but the real question is the NET energy you get by subtracting the energy invested from the energy produced.  And that, as you see, goes down rapidly as you move to more expensive and difficult resources.  For EROEIs under about 20 the problem is significant, and below about 10 it becomes serious.  And, as you see, there are many energy resources that have this kind of low EROEI.  So, don’t be impressed by the fact that oil production continues, slowly, to grow.  Net energy is the problem, and many things that are happening today in the world seem to be related to the fact that we are producing less and less net energy.  In other words, we are paying more to produce the same.  This appears in terms of high prices in the world market.
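
The arithmetic behind this is simple: the net fraction of gross energy output is 1 - 1/EROEI, which falls off slowly at first and then collapses as the EROEI approaches one. A quick illustration:

```python
def net_energy_fraction(eroei):
    """Fraction of gross energy output left after paying back the energy invested."""
    return 1.0 - 1.0 / eroei

for eroei in (50, 20, 10, 5, 2, 1.1):
    print(f"EROEI {eroei:>4}: {net_energy_fraction(eroei):.0%} of gross output is net energy")
```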

Here is an illustration of how prices and production have varied during the past decades from the blog “Early Warning” kept by Stuart Staniford.

And you see that, although we are able to manage a slightly growing production, we can do so only at increasingly high prices.  This is an effect of increasing energy investments in extracting difficult resources – energy costs money, after all.

So, let me show you some data for resources that are not petroleum.  Of course, in this case you can’t speak in terms of ERoEI; because you are not producing energy.  But the problem is the same, since you are using fossil fuels to produce most of the commodities that enter the industrial system, and that is valid also for agriculture. Here are some data.

Food production worldwide is still increasing, but the high cost of fossil fuels is driving this increase in prices.  And that’s a big problem, because we all know that food demand is highly inelastic – in plain words, you need to eat or you die.  Several recent events in the world, such as wars and revolutions in North Africa and the Middle East, have been related to these increases in food prices.

Now, let me go to the general question of mineral production.  Here, we have the same behaviour: most mineral commodities are still growing in terms of extracted quantities, as you can see here (from a paper by Krausmann et al, 2009 http://dx.doi.org/10.1016/j.ecolecon.2009.05.007)

These data go up to 2005 – more recent data show signs of plateauing production, but we don’t see clear evidence of a peak yet. This is bad, because we are creating a climate disaster. As you see from the most recent data, CO2 is still increasing in a nearly exponential manner.

 

But the system is clearly under strain. Here are some data relative to the average price index for aluminium, copper, gold, iron ore, lead, nickel, silver, tin and zinc (adapted from a graphic reported by Bertram et al., Resource Policy, 36(2011)315)

So, you see, there has been this remarkable “bump” in the prices of everything and that correlates well with what I was arguing before: energy costs more and, at the same time, energy requirements are increasing because of ore depletion.  At present, we are still able to keep production stable or even slowly increasing, but this is costing society tremendous sacrifices in terms of reducing social services, health care, pensions and all the rest.  And, in addition, we risk destroying the planetary ecosystem because of climate change.

Now I can summarize what I’ve been saying and get to the take-home point, which I think can be expressed in a single sentence: “Mining takes energy”.

Of course, many people say that we are so smart that we can invent new ways of mining that don’t require so much energy.  Fine, but look at that giant wheel, above, used to extract coal in the Garzweiler mine in Germany.  Think of how much energy you need to make that wheel; do you think you could use an iPad instead?

In the end, energy is the key to everything, and if we want to keep mining, and we need to keep mining, we need to be able to keep producing energy.  And we need to obtain that energy without fossil fuels. That’s the concept of the “Energy Transition”.

Here, I use the German term “Energiewende”, which stands for “Energy Transition”. And I have also slightly modified the words of Stanley Jevons; he was talking about coal, but the general concept of energy is the same.  We need to go through the transition, otherwise, as Jevons said long ago, we’ll be forced to return to the “laborious poverty” of older times.

That doesn’t mean that the times of low-cost mineral commodities will ever return, but we should be able to maintain a reasonable flux of mineral commodities into the industrial system and keep it going.  We will, however, have to adapt to a less opulent and wasteful life than the societies of “developed” countries have been accustomed to so far.  I think it is not impossible, if we don’t ask too much:

h/t ms. Ruza Jankovich – the car shown here is an old Fiat “500” that was produced in the 1960s, and it would move people around without the need for SUVs

____________________________________________

Acknowledgement:

The Club of Rome team

Daphne Davies
Ian Johnson
Linda Schenk
Alexander Stefes
Joséphine von Mitschke-Collande
Karl Wagner

And the coauthors of the book “Plundering the Planet”

Philippe Bihouix
Colin Campbell
Stefano Caporali
Patrick Dery
Luis De Souza
Michael Dittmar
Ian Dunlop
Toufic El Asmar
Rolf Jakobi
Jutta Gutberlet
Rui Rosa
Iorg Schindler
Emilia Suomalainen
Marco Pagani
Karl Wagner
Werner Zittel
