Economics for the future – Beyond the superorganism

7 12 2019


Nate Hagens has written a substantial paper: four months in the writing and, he tells me, ten years in the making….


1. Overview
Despite decades of warnings, agreements, and activism, human energy consumption, emissions, and atmospheric CO2 concentrations all hit new records in 2018 (Le Quéré et al., 2018). If the global economy continues to grow at about 3.0% per year, we will consume as much energy and materials in the next ∼30 years as we did cumulatively in the past 10,000. Is such a scenario inevitable? Is such a scenario possible?
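The arithmetic behind that claim is just compound growth: under steady exponential growth, each doubling period consumes roughly as much as all prior history combined. A minimal sketch of the calculation, taking the 3% rate from the text and assuming (simplistically) that all past consumption grew along the same 3% curve, which is why this idealization gives about 24 years rather than the paper's ~30:

    # Sketch: under steady exponential growth, consumption over the next
    # doubling period roughly equals consumption over all prior history.
    growth = 0.03  # ~3% annual growth, as stated in the text

    # Treat consumption in year t (relative units) as (1 + growth) ** t.
    # Cumulative past consumption (years 0, -1, -2, ...) is a geometric series:
    past_total = (1 + growth) / growth   # about 34 "current-year" units

    # Count how many future years it takes to consume that much again.
    future_total, years = 0.0, 0
    while future_total < past_total:
        years += 1
        future_total += (1 + growth) ** years

    print(f"past = {past_total:.1f} current-year units; matched after {years} years")
    # prints ~34.3 units, matched after 24 years of 3% growth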
Simultaneously, we get daily reminders that the global economy isn't working as it used to (Stokes, 2017): rising wealth and income inequality, heavy reliance on debt and government guarantees, populist political movements, increasing apathy, tension and violence, and ecological decay. To avoid facing the consequences of our biophysical reality, we're now obtaining growth in increasingly unsustainable ways. The developed world is using finance to enable the extraction of things we couldn't otherwise afford to extract in order to produce things we otherwise couldn't afford to consume.

    With this backdrop, what sort of future economic systems are now
    feasible? What choreography would allow them to come about? In the
    fullness of the Anthropocene, what does a hard look at the relationships between ecosystems and economic systems in the broadest sense suggest about our collective future? Ecological economics was ahead of its time in recognizing the fundamental importance of nature’s services and the biophysical underpinnings of human economies. Can it now assemble a blueprint for a ‘reconstruction’ to guide a way forward?

    Before articulating prescriptions, we first need a comprehensive
    diagnosis of the patient. In 2019, we are beyond a piecemeal listing of
    what’s wrong. A coherent description of the global economy requires a
    systems view: describing the parts, the processes, how the parts and
    processes interact, and what these interactions imply about future
    possibilities. This paper provides a brief overview of the relationships
    between human behavior, the economy and Earth’s environment. It
    articulates how a social species self-organizing around surplus has
    metabolically morphed into a single, mindless, energy-hungry
    “Superorganism.” Lastly, it provides an assessment of our constraints
    and opportunities, and suggests how a more sapient economic system
    might develop.
2. Introduction
    For most of the past 300,000 years, humans lived in sustainable,
    egalitarian, roaming bands where climate instability and low CO2 levels made success in agriculture unlikely (Richerson et al., 2001).
    Around 11,000 years ago the climate began to warm, eventually plateauing at warmer levels than the previous 100,000 years (Fig. 1).

This stability allowed agriculture to develop in at least seven separate locations around the world. For the first time, groups of humans began to organize around physical surplus – production exceeding the group's immediate caloric needs. Since some of the population no longer had to devote their time to hunting and gathering, this surplus allowed the development of new jobs, hierarchies, and complexity (Gowdy and Krall, 2013). This novel dynamic led to widespread agriculture and large-scale state societies over the next few thousand years (Gowdy and Krall, 2014).

    In the 19th century, this process was accelerated by the large-scale
    discovery of fossil carbon and the invention of technologies to use it as
    fuel. Fossil carbon provided humans with an extremely dense (but finite) source of energy extractable at a rate of their choosing, unlike the highly diffuse and fixed flow of sunlight of prior eras.

This energy bounty enabled the 20th century to be a unique period in human history: more (and cheaper) resources led to sharp productivity increases and unprecedented economic growth, and a debt-based financial system cut free from physical tethers allowed expansive credit and related consumption to accelerate, all of which fueled resource surpluses enabling diverse and richer societies. The 21st century is diverging from that trajectory: 1) energy and resources are again becoming constraining factors on economic and societal development, 2) physical expansion predicated on credit is becoming riskier and will eventually reach a limit, 3) societies are becoming polarized and losing trust in governments, media, and science, and 4) ecosystems are being degraded as they absorb large quantities of energy and material waste from human systems.

Where do we go from here?
3. Human behavior
    Humans are unique, but in the same ways tree frogs or hippos are
    unique. We are still mammals, specifically primates. Our physical
characteristics (sclera in eyes, small mouth, lack of canines etc.) are the products of our formative social past in small bands (Bullet et al., 2011; Kobayashi and Kohshima, 2008). However, our brains and behaviors too are products of what worked in our past. We don't consciously go through life maximizing biological fitness, but instead act as 'adaptation executors' seeking to replicate the daily emotional states of our successful ancestors (Barkow et al., 1992). Humans have an impressive ability to process information, cooperate, and discover things, which is what brought us to the state of organization and wealth we experience today. But our stone-age minds are responding to modern technology, resource abundance and large, fluid social groups in emergent ways. These behaviors – summarized below – underpin many of our current planetary and cultural predicaments (Whybrow, 2013).

3.1. Status and relative comparison
Humans are a social species. Each of us is in competition for status and resources. As biological organisms we care about relative status. Historically, status was linked to providing resources for the clan, leadership, respect, storytelling, ethics, sharing, and community (Gowdy, 1998; von Rueden and Jaeggi, 2016). But in modern culture we compete for status with resource-intensive goods (cars, homes, vacations, gadgets), using money as an intermediary driver (Erk et al., 2002). Although most of the poorest 20% in advanced economies live materially richer lives than the middle class of the 1900s, one's income rank, as opposed to absolute income, is what predicts life satisfaction (Boyce et al., 2010). For those who don't 'win', a lack of perceived status leads to depression, drinking, stockpiling of guns and other adverse behaviors (Katikireddi et al., 2017; Mencken and Froese, 2019). Once basic needs are satisfied, we are primed to respond to the comparison of "better vs. worse" more than to "a little" vs. "a lot."

3.2. Supernormal stimuli and addiction
In our ancestral environment, the mesolimbic dopamine pathways were linked to motivation, action and (calorific) reward. Modern technology and abundance can hijack this same reward circuitry. The brain of a stock trader making a winning trade lights up in an fMRI the same way a chimpanzee's (and presumably our distant ancestors') does when finding a nut or berry. But when trading stocks, playing video games or building shopping centers, there is no instinctual 'full' signal in modern brains – so we become addicted to the 'unexpected reward' of the next encounter, episode, or email, at an ever-increasing pace (Hagens, 2011; Schultz et al., 1997). Our brains require flows (feelings) that we satisfy today mostly using non-renewable stocks. In modern resource-rich culture, the 'wanting' becomes a stronger emotion than the 'having'.

    3.3. Cognitive biases
    We didn’t evolve to have a veridical view of our world (Mark et al.,
    2010). We think in words and images disconnected from physical reality. This imagined reality commonly seems more real than science, logic and common sense. Beliefs that arise from this virtual interface become religion, nationalism, or quixotic goals such as terraforming Mars (Harari, 2018). For most of history, we maintained groups by sharing social myths like these. Failure to believe those myths led to ostracism and death. Beliefs usually precede the reasons we use to explain them, and thus are far more powerful than facts (Gazzaniga, 2012).

    Psychologists have identified hundreds of cognitive biases whereby
    common human behaviors depart from economic rationality. These
    include: motivated reasoning, groupthink, authority bias, bystander
    effect, etc. Rationality is from a newer part of our brain that is still
    dominated by the more primitive, intuitive, and emotional brain
    structures of the limbic system. Modern economics assumes the rational brain is in charge, but it’s not. Combined with our tribal, in-group nature, it’s understandable that fake news works, and that people resist uncomfortable notions involving limits to growth, energy descent, and climate change. Evolution selects for fitness, not truth (Hoffman, 2019).

    We typically only value truth if it rewards us in the short term. Rationality is the exception, not the rule.

    3.4. Time bias (steep discount rates)
    For good evolutionary reasons (short life spans, risk of food expropriation, unstable environment, etc.) we disproportionately care
    about the present more than the future, measured by economists via a
'discount rate' (Hagens and Kunz, 2010). The steeper the discount rate,
the more the person is 'addicted to the present' (Laibson et al., 2007).
    Drug users and drinkers, risk takers, people with low I.Q. scores, people who have heavy cognitive workloads, and men (vs. women) tend to more steeply discount events or issues in the future (Chabris et al., 2010).
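The idea can be made concrete with the standard exponential form: a reward R received t years from now is worth R/(1+d)^t today, where d is the discount rate. A minimal sketch (the specific rates and the $100 figure are illustrative assumptions, not from the paper):

    # Exponential discounting: present value of a reward t years away.
    def present_value(reward, years_ahead, discount_rate):
        return reward / (1 + discount_rate) ** years_ahead

    # A patient discounter (3%/yr) versus a steep, present-biased one (30%/yr),
    # both valuing a $100 benefit that arrives 20 years from now.
    for d in (0.03, 0.30):
        print(f"discount rate {d:.0%}: ${present_value(100, 20, d):.2f} today")
    # ~$55 at 3%, but only ~$0.53 at 30% -- the steeper the rate,
    # the more the future effectively disappears from today's decisions.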

    Unfortunately, most of our modern challenges are ‘in the future’.
Recognition that the future exists, and that we are part of it, springs from a relatively new brain structure, the neocortex. It has no direct connection to the deep-brain motivational centers that communicate urgency. When asked to plan a snack for next week and given a choice between chocolate and fruit, people choose fruit 75% of the time; when choosing a snack for today, 70% select chocolate. When choosing a movie to watch next week, 63% choose an educational documentary, but when choosing a film for tonight, 66% pick a comedy or sci-fi (Read et al., 1999). We have great intentions for the future, until the future becomes today. Our neocortex can imagine long-term issues like climate change or energy depletion, but we are emotionally blind to them. Emotionally, the future isn't real.

3.5. Cooperation and group behavior
Group behavior has shaped us as much as individual behavior (Wilson and Wilson, 2008). Humans are strongly 'groupish' (Haidt, 2013), and before agriculture were aggressively egalitarian (Pennisi, 2014; Boehm, 1993). Those historic tribes that could act as a cohesive unit facing a common threat outcompeted tribes without such social cohesion. Because of this, today we easily and quickly form in-groups and out-groups, and behave favorably towards the former and antagonistically towards the latter. We are also primed to cooperate with our in-group, whether that is a small business, large corporation, or even a nation-state, to obtain monetary (or in earlier times, physical) surplus. Me over Us, Us over Them.

    3.6. Cultural evolution, Ultrasociality and the Superorganism
    “What took place in the early 1500s was truly exceptional, something
    that had never happened before and never will again. Two cultural experiments, running in isolation for 15,000 years or more, at last came face to face. Amazingly, after all that time, each could recognize the other’s institutions. When Cortés landed in Mexico he found roads, canals, cities, palaces, schools, law courts, markets, irrigation works, kings, priests, temples, peasants, artisans, armies, astronomers, merchants, sports, theatre, art, music, and books. High civilization, differing in detail but alike in essentials, had evolved independently on both sides of the earth.” Ronald Wright, A
    Short History of Progress (2004, pp50-51)

    “Ultrasociality refers to the most social of animal organizations, with full time division of labor, specialists who gather no food but are fed by others, effective sharing of information about sources of food and danger, self-sacrificial effort in collective defense.” (Campbell, 1974; Gowdy and Krall, 2013).

    Humans are among a small handful of species that are extremely
    social. Phenotypically we are primates, but behaviorally we’re more
    akin to the social insects (Haidt, 2013). Our ultrasociality allows us to
function at much larger scales than as individuals. At the largest scales, cultural evolution occurs far more rapidly than genetic evolution (Richerson and Boyd, 2005). Via the cultural evolution that began with agriculture, humans have evolved into a globally interconnected civilization, 'outcompeting' other human economic models along the way to becoming a de facto 'superorganism' (Hölldobler and Wilson, 2008).

A superorganism can be defined as "a collection of agents which can act in concert to produce phenomena governed by the collective" (Kelly, 1994). Via cooperation (and coordination), fitness transfers from lower levels to higher levels of organization (Michod and Nedelcu, 2003). The needs of this higher-level entity (today, for humans, the global economy) mold the behavior, organization and functions of lower-level entities (individual human behavior) (Kesebir, 2011). Human behavior is thus constrained and modified by 'downward causation' from the higher level of organization present in society (Campbell, 1974).

    All the ‘irrationalities’ previously outlined have kept our species
    flourishing for 300,000 years. What has changed is not ‘us’ but rather
    the economic organization of our societies in tandem with technology,
    scale and impact. Since the Neolithic, human society has organized
around growth of surplus, initially measured physically (e.g. grain), now measured by digital claims on physical surplus, or money (Gowdy and Krall, 2014). Positive human attributes like cooperation have been co-opted to become coordination towards surplus production. Increasingly, the "purpose" of a modern human in the ultrasocial global economy is to contribute to surplus for the market (e.g. the economic value of a human life based on discounted lifetime income, the marginal productivity theory of labor value, etc.) (Gowdy, 2019, in press).

    3.7. Human behavior – summary
Our behavioral repertoire is wide, yet it is informed and constrained by our neurological heritage and the higher level of organization exhibited by our economic system. We are born with heritable modules prepared to react to context in predictable ways. "Who we are" as a species is highly relevant to issues of ecological overshoot, sustainability and our related cultural responses.





A Green New Deal Must Not Be Tied to Economic Growth

7 07 2019

By Giorgos Kallis, originally published by TruthOut on March 12, 2019

The Green New Deal bill is an audacious 10-year mobilization plan to move the U.S. to a zero-carbon economy. Bold and ambitious interventions like it are necessary, in the U.S. and elsewhere, if we are to unsettle the current complacency with climate breakdown. Academics like economist Robert Pollin, who kept alive the idea of a Green New Deal in the past years and provided the science to back it up, are to be congratulated for their efforts.

Pollin has for years now proposed his simplified version of a Green New Deal — an investment of between 1.5 to 2 percent of global GDP every year to raise energy efficiency and expand clean renewable energy. This would be the moment for him to celebrate that his cause has been taken up, and contribute to working out the specifics. Instead though, he chooses to focus on the differences between his proposal and a “degrowth agenda,” which he finds “utterly unrealistic” — a waste of time for the Left at best and dangerously anti-social at worst. Whereas this is not the moment to split hairs, Pollin’s insistence on degrowth is inadvertently productive. It lets us see a sore point in the Green New Deal narrative, and this is that it risks reproducing — unless carefully framed — the hegemonic ideology of capitalist growth, which has created the problem of climate change in the first place.

To begin with, Pollin never explains why growth is a necessary ingredient for his proposal. It is not clear why he has to argue that a Green New Deal will be good for growth instead of simply advocating cutting carbon while meeting needs and fostering wellbeing. The only reason he provides for his preference for growth is that “higher levels of GDP will correspondingly mean a higher level of investment being channeled into clean energy projects.” If Pollin seriously means that he shares “the values and concerns of degrowth advocates,” then he could simply tweak his model and come up with a fixed amount of investment (independent of GDP) that would produce the same decarbonization. Higher levels of GDP will not only lead to higher levels of clean investment, but also higher levels of dirty investment — and the majority of investment is dirty. One percent growth in GDP leads to a 0.5 to 0.8 percent increase in carbon emissions, and this is as statistically robust a relation as it gets (clean energy investment has no statistically significant effect on emissions yet, though, of course, this could and should change in the future). If we continue to grow at 3 percent per year, by 2043, the global economy will be two times larger than it is now. It is difficult to imagine creating a renewable energy infrastructure for our existing economy in a short time span, much less doing so for an economy that is two times bigger. The smaller our economic output is, the easier the transition will be.
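The two numbers doing the work in that paragraph, the doubling by roughly 2043 and the extra emissions implied by growth, can be checked in a few lines. A minimal sketch, taking the 3 percent growth rate and the 0.5 to 0.8 elasticity from the text and, as a simplifying assumption, holding them constant:

    from math import log

    growth = 0.03                              # GDP growth rate assumed in the text
    doubling_time = log(2) / log(1 + growth)   # ~23.4 years, i.e. roughly 2043 from 2019
    print(f"doubling time at 3% growth: {doubling_time:.1f} years")

    # Emissions growth implied by an elasticity of 0.5-0.8 (1% GDP growth ->
    # 0.5-0.8% more emissions), compounded over one economic doubling.
    for elasticity in (0.5, 0.8):
        emissions_growth = growth * elasticity
        over_doubling = (1 + emissions_growth) ** 24 - 1
        print(f"elasticity {elasticity}: ~{over_doubling:.0%} more emissions after 24 years")
    # roughly 43% to 77% more emissions, absent a change in the energy mix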

Pollin may well have chosen to emphasize growth because new deals are about growth. But a Green New Deal does not have to be like the old New Deal. Pollin does not suggest that his investment program should be financed by deficit spending, nor that it should be a short-lived stimulus, repaid by growth. An investment at the level of 2 percent of GDP does not need deficit spending — assuming there is the political will for such a program, it could be financed by replacing dirty or socially useless investments (and there are many, starting with armaments). If there is no extra spending and debt, then there is no need to stimulate growth to pay it back.

Now, at some points in his article for the New Left Review, Pollin seems to suggest that growth is an outcome of his proposal, not a goal or pre-condition. He claims that “for accounting purposes,” growth in renewable energy investments “will contribute towards increasing GDP.” But even in accounting terms, without deficit spending, there is no reason why a clean investment program will cause growth, since the 2 percent that will go to renewables would go to some other investment instead.

The economy moreover is not an accounting convention. We could just as well imagine spending lots of money on digging and filling in holes — this could serve as a temporary stimulus in a period of low liquidity and low demand, but is obviously not a recipe for sustained growth. Pollin writes in his text that “building a green economy entails more labor-intensive activities” and that the private sector does not invest in renewables because they have low profit margins. Shifting financial resources from high-productivity and high-profit sectors to low-productivity ones is not a recipe for growth. The energy productivity of renewables is also lower than that of fossil fuels. An economy of low productivity, low profits and low energy returns is unlikely to be a bigger economy that grows. And this is fine, since our priority right now should be to decarbonize, not grow the economy. But Pollin unnecessarily links the former to the latter.

Maybe Pollin is right, and I am wrong. Maybe a massive clean energy program would end up stimulating growth. However, it would be wrong to sell a program for stabilizing the climate with the promise of growth. What happens if it doesn’t produce growth? Do we abandon decarbonization? And since climate change is not the only problem with growth, there are good reasons why we can’t afford more growth even if it were powered by the sun.

Economists typically justify growth in terms of poverty or stability. Pollin innovates by justifying it in the name of climate change. And this is coming from someone who otherwise sees the irrationality of perpetual growth.

Compound growth is what Marxist scholar David Harvey calls a “bad infinity.” For Harvey, capitalism’s requirement for compound growth is the deadliest of its contradictions. Harvey points to the irrationality of expecting that demand, investment and profits will double every 24 years (this is what a 3 percent growth each year amounts to), quadruple every 48, grow eight-fold every 72, ad infinitum and ad absurdum.

Consider the following: 65 percent of anthropogenic emissions come from fossil fuels. The remaining 35 percent come from things like land-use change, soil depletion, landfills, industrial meat farming, cement and plastic production. Even if the energy mix were to become 100 percent clean and we continued to double the economy every 24 years, we would be back up to our existing emissions levels in short order. This is how irrational the pursuit of compound growth is.
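That "short order" can be given a rough number. A minimal sketch, assuming (as the paragraph does) that the remaining 35 percent of emissions scale in proportion to an economy that doubles every 24 years:

    from math import log2

    residual_share = 0.35   # non-fossil-fuel share of today's emissions (from the text)
    doubling_period = 24    # years per economic doubling at ~3% growth

    # Years until the residual emissions alone climb back to 100% of today's
    # total, if they scale with the size of the economy.
    years = doubling_period * log2(1 / residual_share)
    print(f"back to today's emission levels in ~{years:.0f} years")   # ~36 years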

Climate breakdown now threatens to bring this absurdity to an end. But it is not only the climate — biodiversity loss through mass extinction, land-use change and resource extraction are all directly linked to economic growth. Despite his claims to the contrary, there is no prospect of what Pollin calls “absolute decoupling,” or a reduction of these impacts while the economy grows.

It is fanciful to think that there is one type of neoliberal growth that is bad, and another type of growth that could be inclusive, progressive, clean, etc. Growth is an integrated process, and no matter what the ideologues of growth claim, there is no proof that we can grow the economy by selectively growing the “goods” while decreasing the “bads.” Armaments, advertising, fossil fuels, planned obsolescence and waste of all kinds are integral to capitalist growth. Since its beginnings in colonial Britain, growth has been fueled by unequal exchange of labor and resources between imperial centers and internal and external peripheries. Growth requires the investment of surplus for the creation of more surplus. And this surplus is created by exploiting wage-workers and appropriating the unpaid work of women, migrant workers and nature. Shifting of costs in space and time has also been central. Access to low-cost labor and resources is vital for economic growth; if inputs become expensive, the economy slows down.

Pollin claims that growth stalled because neoliberalism prioritized the interests of the rich. The brutal cuts of structural adjustment policies and neoliberal austerity, however, were always made in the name of growth. The promise of growth bought the social peace the neoliberal project needed. Even if the real outcome was the concentration of wealth amidst anemic growth rates, this tells us something useful about the dangers of a “growth politics.”

Pollin argues that we can’t afford to dream that another world is possible, not now, because climate change is urgent and “we do not have the luxury to waste time on huge global efforts fighting for unattainable goals.” We are asked to accept that the only game in town is capitalism, and that questioning capitalism and its destructive pursuit of growth is a luxurious waste of time. If not now, then when, one might wonder?

Erik Swyngedouw has warned against the depoliticizing tendency of carbon reductionism — that is, reducing all politics down to a question of their effect on carbon emissions, especially when coupled with claims of urgency. Granted, climate change is a huge problem, but it is not the only problem in whose service we should pause other aspirations. And climate change is not a stand-alone problem with a technical solution — it is symptomatic of the broader system that is producing it. Pollin’s reduction of climate change to a question of an investment fix is appealing because it makes the problem seem manageable. But climate change is not a technical problem. Climate change is a political problem, in the real sense of the word political, meaning a problem involving competing visions of the kind of world we want to live in.

Now, Pollin has a valid concern in that a degrowth agenda would involve a reduction of GDP, which has many problems — not least, rising poverty, inequality, debts, austerity, etc. We would be fools if we were oblivious to those risks. In a capitalist economy bound to grow or collapse, growth is fundamental for the stability of the system. But growth is also exploitative and self-destructive. Should we support capitalism forever, just because a collapsing capitalism is worse for workers than a capitalism that does well?

Those of us who write about degrowth do not advocate an intentional reduction of GDP (we are the first to criticize GDP as it mixes “goods” with “bads” and doesn’t count unpaid work). Perhaps Pollin is confused because we do claim that doing the right things, ecologically and socially, will in all likelihood slow down the economy as measured by GDP. Or because we argue that certain sectors of the current economy that are central to its expansion — armament, advertising, unnecessary consumer goods, speculative financing, etc. — should contract. Given how coupled the capitalist economy is to growth, this raises the question of how, or under what conditions, we could secure human wellbeing and equality without growth. This is a huge research question, involving economic models, historical and ethnographic studies, and an assessment of potential institutional reforms, such as work-sharing, a guaranteed basic income or a maximum income tax. It is also a political agenda for the Left, to build the capacities to decouple wellbeing from growth.

Pollin claims that those of us who write about degrowth do not offer a specific program to combat climate change. Speaking for myself, I do not feel I have to add more to the excellent proposals already made by Pollin himself, Naomi Klein and many, many others. The problem with climate change is not that we are short of ideas on what is to be done. The problem is that we are not doing it. What we offer from a degrowth perspective is a different diagnosis of why we are not doing it. We argue that this is because there is a fundamental clash between capitalism’s pursuit of growth and climate mitigation. Good climate policies are not adopted because of their impact on growth, and growth is outstripping the gains made from renewable energy. Our contribution is to open up the debate about alternatives to growth.

In the climate community, people have their pet ideas. Some want a carbon tax, and others want a carbon dividend (a tax returned as basic income). Some want green bonds, others a Green New Deal. It is safe to say that if we are to decarbonize the economy at the unprecedented rate required, all of these ideas will be necessary. But decarbonization is not just a matter of adding solar and wind to the energy mix — it is also a matter of taking fossil fuels out. This requires legislation and political commitment alongside struggle to stop fossil fuel projects and coal mines, and to divest from oil companies.

Pollin suggests that a 2 percent investment in clean energy and efficiency will be sufficient on its own, but there are reasons to be skeptical about such a claim. I would like Pollin to be right, but I’ve read other reputable climate scientists and engineers who are much more reserved than Pollin about the prospect of 100 percent renewables. There are the problems with the intermittency of solar and wind, and their huge storage requirements (one of the principal solutions envisaged, storage as hydroelectric energy, requires a dramatic damming of remaining rivers: an environmental nightmare). There are the emissions involved in fueling a renewable energy transition, which might be enough on their own to overshoot the remaining carbon budget. There are the rare earth minerals necessary for constructing solar panels and batteries, minerals that are scarce and extracted from areas and communities already suffering from our unquenchable hunger for raw materials. There is the question of land use and impact on landscapes. As is common in these technical debates, Pollin prefers data favorable to his argument. But he would agree, I think, that the picture is very complicated and uncertain, to say the least.

I do not like to be a skeptic in the current political context where renewables face an uphill battle against the fossil fuel and nuclear power lobbies. I wish that a 100 percent renewable future were possible and would be as harmless as Pollin thinks. But our experience with previous technological fixes suggests we should be on the side of caution, both because of unfulfilled promises, and because there are always side effects and unforeseen costs. Even if the environmental and social costs of renewable energy are not as high as some skeptics think, they are not insignificant either — and with compound growth, even an insignificant impact quickly grows toward infinity. The lower the level of energy use, and the smaller the economy, the easier it is to decarbonize, and the fewer impacts that will be caused along the way. There is no reason for someone concerned with climate and the environment to advocate economic growth.

Furthermore, Pollin provides no evidence that the scale of investment he proposes will do the job. Granted, there has been no such massive investment in the past, so it is hard to assess its potential effect. On the campaign trail, candidate Obama promised $150 billion over a period of 10 years. In 2009, the American Recovery and Reinvestment Act provided stimulus funding of $90 billion in strategic clean-energy investments and tax incentives to promote job creation and the deployment of low-carbon technologies, promising to leverage approximately $150 billion in private and other non-federal capital for clean energy investments. Fossil fuel emissions decreased 11 percent from 2007 to 2013, but this was not a result of growth in renewables (despite a tripling of wind power and a 30-fold increase in solar power during Obama’s presidency), but mostly an after-effect of the recession, high gasoline prices and to a lesser extent, a shift from coal to natural gas.

In 2009, South Korea announced a Green New Deal Job Creation Plan: $38.1 billion invested over a period of four years dedicated to environmental projects to spur slumping economic growth and create a million jobs. Korea’s emissions were 15 percent higher in 2014 than in 2008. Pollin refers to Germany as “the most successful advanced economy in developing its clean-energy economy.” German emissions in 2014 were almost unchanged since 2009. They had fallen 20 percent since 1992, largely following the collapse of industry in East Germany. And even so, in per capita terms, they are 80 percent higher than the world average. If the whole world were to consume as much as the “successful” case of Germany, not only would global carbon emissions not fall, they would almost double.

Naomi Klein wrote that climate change “changes everything.” Pollin tells us that it does not have to change anything, other than 2 percent of GDP. We will keep flying, eating beef, driving cars to suburban homes, flying helicopters and jets — with the only difference being that all this will be powered by clean electricity. I won’t debate the facts and the feasibility of this vision again, so instead I’ll just point out that intuitively this doesn’t make sense to people, and it doesn’t because you don’t have to be a scientist to understand how much our current lifestyle depends on fossil fuels. Those who deny climate change know it and those who fight for climate justice know it, too. To stop climate change, we not only need to clean production, but also to reduce and transform consumption. We need free public transport, new diets, denser modes of living, affordable housing close to where the jobs are, food grown closer to where it is consumed, reduction of working time and commuting, low-energy ways of living and finding satisfaction, curbs on excessive incomes and on ostentatious consumption. It is not as though the Green New Deal is an agenda designed to fight climate change alone — it is a green Left agenda that we should pursue even if there were no climate change. And we have to pursue it independently of whether or not it is “good for the economy,” because we put people before the economy.

The Green New Deal bill goes in the right direction and its differences from Pollin’s narrower proposal are informative and much closer to what I am arguing here. The bill does not only commit funds to renewable energies, but also to health, housing and environmental infrastructures. It has provisions for economic security, akin to job guarantee and basic income schemes — provisions that will be vital if we are to secure wellbeing without growth. Granted, the bill does not talk explicitly about post- or de-growth, and does not challenge head-on prevalent patterns of consumption as much as one like me sitting in an academic chair and not involved in parliamentary politics would have liked — but consumption would surely change too if public services were expanded to the extent foreseen in the bill. Importantly, unlike Pollin, the bill does not emphasize growth or justify the plan in terms of growth.

Pollin’s insistence, then, on accentuating the differences between degrowth and the Green New Deal is outdated and unnecessary. Pollin’s article was titled “Degrowth vs. a Green New Deal.” Maybe it is time to stop inventing more internal “versus” and do the hard work of constructing some new “ands.” What about degrowth and a Green New Deal? The opponent is formidable and what we need are alliances, not divisions.

The author thanks Jason Hickel and David Ravensbergen for their comments and suggestions to an earlier draft of this essay.





Time to rethink monetary policy

3 05 2018

“But another crisis is brewing; and there are signs that it will be bigger than 2008.  And when that crisis bursts over us, this time around we need to put these changes in place before the economists rally round and persuade our craven politicians that there is no alternative… because there is.”

Lifted from the excellent Consciousness of Sheep blog….

When the first stuffed platypus was presented to European scientists, they dismissed it.  “What we have here,” they opined, “is some unfortunate lutrinae onto which some scoundrel has attached various anatidae parts.”  And so the innocent little platypus, which had been minding its own business until the European explorers arrived, was placed on the same zoological shelf as the Yeti.

The European scientists, you see, had a model.  A map of how the world’s animal species were ordered.  At the apex, predictably, were humans themselves.  Beneath them were anatomically similar apes and monkeys; followed by cats, dogs, pigs, etc.  What all of these “higher” species had in common, however, was that they were all mammals – creatures that carry their young in an internal womb, and that suckle them with milk.  This distinguished them from other, dissimilar species like birds, reptiles, amphibians and insects.

Then along comes this upstart platypus, not just looking like it possesses bird parts, but having the audacity to lay eggs!  For several decades, despite growing evidence that platypuses were real, European scientists continued to dismiss these reported sightings as fake news.  The platypus was an unfortunate intrusion into the scientists’ neatly ordered model of how the world worked.  Despite the philosophy of science demanding that a fact – like the existence of a platypus – that disproves a model is the very essence of falsifiability, the scientists chose to reject the fact rather than deconstruct and rebuild their model.

The same European scientists later – and infamously – rejected evidence for the existence of one of the platypus’s neighbours… the black swan… which brings us to a modern pseudoscience that also famously rejects reality in order to preserve the models that it has spent decades finessing.

Economic models have already proved their – very negative – worth in the worst possible way in the shape of the 2008 financial crash and the ensuing global depression.  This ought to have been enough for the entire economics profession to be given their marching orders and afforded their true place alongside aromatherapists, astrologers and homeopaths.  However, in 2008, governments lacked any acceptable alternative.  So despite knowing that an economic forecast was of equal value to flipping a coin, they put the same economists who had broken the system in charge of fixing it.

The economists did no such thing, of course.  The financial crisis of 2008 was the platypus of our age; something so out of step with the models that it could not reasonably be incorporated into them.  They even used the term “black swan” to describe it.

Any examination of the real economy over centuries, however, demonstrates that cyclical periods of boom and bust – frequently punctuated by major financial crashes – are in fact the norm.  It is the so-called “Great Moderation” in the economists’ model that is the aberration… the thing so out of step with reality that it can reasonably be dismissed as fake news.

This, however, is merely the most obvious flaw in an economic model that is based on anomalies.  Most importantly, almost everything that economists are taught about how the economy works is based on what happened in the course of the two-decade-long mother of all anomalies: the post-war boom of 1953-1973.  As historian Paul Kennedy explains:

“The accumulated world industrial output between 1953 and 1973 was comparable in volume to that of the entire century and a half which separated 1953 from 1800.  The recovery of war-damaged economies, the development of new technologies, the continued shift from agriculture to industry, the harnessing of national resources within ‘planned economies,’ and the spread of industrialization to the Third World all helped to effect this dramatic change.  In an even more emphatic way, and for much the same reasons, the volume of world trade also grew spectacularly after 1945…”

In other words, economic modelling based on how the economy operated in the decades prior to the First World War might provide a closer fit to the real world in 2018.  The same is true for interest rates. As political economist Mark Blyth has shown, economists have modelled interest rates on the two decades around the historical high point in 1981.  However, for the entire period following the introduction of derivatives by the Dutch in the sixteenth century, the average interest rate is below four percent.

This is no trifling academic issue.  Interest rates have become the primary means by which economists – to whom our politicians have handed the levers of power – seek to manage the economy.  The aim of “monetary policy” is to raise interest rates sufficiently high to prevent a recurrence of the inflation of the 1970s, while keeping them sufficiently low that they do not trigger or exacerbate a repeat of the 2008 crash.

The problem with this as of 2018 is that despite close to zero percent interest rates – and trillions of dollars, euros, pounds and yen in stimulus packages – the rate of inflation has barely moved.  Indeed, with growth rates stalling in the US, UK and Eurozone, deflation is more likely than inflation.  Despite this, the Federal Reserve, Bank of England and European Central Bank remain committed to raising interest rates and reversing quantitative easing… because that is what their model tells them that they should do.

Central to the model is a belief – based on those anomalous decades when we had growth on steroids and interest rates to match – that employment causes inflation.  So with the official rate of unemployment in the USA standing at 4.1 percent and the UK at 4.2 percent, the model is telling the economists at the central banks that inflation is already running out of control… even though it isn’t.  As Constance Bevitt, quoted in the New York Times puts it:

“When they talk about full employment, that ignores almost all of the people who have dropped out of the economy entirely. I think that they are examining the problem with assumptions from a different economic era. And they don’t know how to assess where we are now.”

Larry Elliott at the Guardian draws a similar conclusion about the UK:

“Britain’s flexible labour market has resulted in the development of a particular sort of economy over the past decade: low productivity, low investment and low wage. Since the turn of the millennium, business investment has grown by about 1% a year on average because companies have substituted cheap workers for capital. Labour has become a commodity to be bought as cheaply as possible, which might be good for individual firms, but means people have less money to buy goods and services – a shortfall in demand only partly filled by rising levels of debt. The idea that everyone is happy with a zero-hour contract is for the birds.

“Workers are cowed to an extent that has surprised the Bank of England. For years, the members of Threadneedle Street’s monetary policy committee (MPC) have been expecting falling unemployment to lead to rising wage pressure, but it hasn’t happened. When the financial crisis erupted in August 2007, the unemployment rate was 5.3% and annual wage growth was running at 4.7%. Today unemployment is 4.2% and earnings are growing at 2.8%.”

This is a very different economy to the one that operated between 1953 and 1973; a time when the workers’ share of productivity rose consistently.  In those days a semi-skilled manual worker had a sufficient wage to buy a home, support a family, run a car and afford a holiday.  In 2018, a semi-skilled manual worker living in the UK depends upon foodbanks and tax credits to remain solvent.

In short, despite mountains of evidence that the economists’ model bears no relation to the real world, like their nineteenth-century zoological counterparts, they continue to reject any evidence that disproves the model as fake news.  One obvious reason for this is that all of us – whatever our specialisms – get a sinking feeling of despondency when some inconvenient fact comes along to tell us that it is time to go back to the drawing board.  Understandably, we test the inconvenient fact to destruction before deconstructing our models.  But even when the fact proves sufficiently resilient to be considered true, there remains the temptation to sweep it under the proverbial carpet and pretend that nothing is amiss.

There is, however, another reason why so many economists spend so many of their waking hours studiously ignoring reality when it whacks them over the head with the force of a steam hammer.  They simply do not see it.  That is, if you are on the kind of salary enjoyed by a member of one or other monetary policy committee, your lived experience will be so removed from the experience of ordinary working (and not working) people that you simply refuse to believe them when – either by anecdote or statistic – they inform you of just how bad things are down on Main Street.

The two proposed solutions to this latter problem involve the question of diversity.  Among its other work, the campaign group Positive Money has highlighted the race and gender disparity at the Bank of England.  However, simply swapping some white male mainstream economists for equivalent BME and female mainstream economists is unlikely to have much impact.  A second approach to diversity, from radical economists such as Ann Pettifor, is to break up the neoclassical economists’ monopoly by bringing in economists from different schools of economics.

Arguably, however, neither of these proposed solutions would be sufficient to solve the problem of economists’ refusal to allow facts to stand in the way of their models.  For this, something even more radical is required – a complete rethink of the way monetary policy is made.  The 2008 crash and the decade of near stagnation for 80 percent of us that followed have demonstrated that the approach of handing economic policy to technocrats has failed.  The unelected Bank of England or Federal Reserve Chairman can no longer be allowed to be the final authority.  Policy must ultimately reside with elected representatives whose jobs are on the line if they mess up.

Of course it is entirely reasonable that our representatives base their decisions on the advice and recommendations of experts.  It is here that real diversity is required.  Not merely swapping white male economists for black female ones, or opening the door just wide enough for some token contrarian economists.  Rather, what is needed is for monetary policy committees to encompass a range of specialisms far beyond economics and the social sciences, together with representatives from trades unions, charities and business organisations that are more in touch with the realities of life in the real economy.

None of this is about to happen any time soon; not least because nobody voluntarily relinquishes power and privilege.  But another crisis is brewing; and there are signs that it will be bigger than 2008.  And when that crisis bursts over us, this time around we need to put these changes in place before the economists rally round and persuade our craven politicians that there is no alternative… because there is.





The end of work….

28 11 2016

Written by James Livingston, professor of history at Rutgers University in New Jersey, this essay challenges everything we think we know about employment and work… Livingston is the author of many books, the latest being No More Work: Why Full Employment is a Bad Idea (2016). As someone who hasn’t ‘worked’ since age 42 (and I’m almost at ‘retiring age’ now!), I found this piece inspiring and refreshing…… my only criticism is that he doesn’t seem to realise all work is unsustainable.

Originally published here……

Work means everything to us Americans (and Australians. Ed). For centuries – since, say, 1650 – we’ve believed that it builds character (punctuality, initiative, honesty, self-discipline, and so forth). We’ve also believed that the market in labour, where we go to find work, has been relatively efficient in allocating opportunities and incomes. And we’ve believed that, even if it sucks, a job gives meaning, purpose and structure to our everyday lives – at any rate, we’re pretty sure that it gets us out of bed, pays the bills, makes us feel responsible, and keeps us away from daytime TV.

These beliefs are no longer plausible. In fact, they’ve become ridiculous, because there’s not enough work to go around, and what there is of it won’t pay the bills – unless of course you’ve landed a job as a drug dealer or a Wall Street banker, becoming a gangster either way.

These days, everybody from Left to Right – from the economist Dean Baker to the social scientist Arthur C Brooks, from Bernie Sanders to Donald Trump – addresses this breakdown of the labour market by advocating ‘full employment’, as if having a job is self-evidently a good thing, no matter how dangerous, demanding or demeaning it is. But ‘full employment’ is not the way to restore our faith in hard work, or in playing by the rules, or in whatever else sounds good. The official unemployment rate in the United States is already below 6 per cent, which is pretty close to what economists used to call ‘full employment’, but income inequality hasn’t changed a bit. Shitty jobs for everyone won’t solve any social problems we now face.

Don’t take my word for it, look at the numbers. Already a fourth of the adults actually employed in the US are paid wages too low to lift them above the official poverty line – and so a fifth of American children live in poverty. Almost half of employed adults in this country are eligible for food stamps (most of those who are eligible don’t apply). The market in labour has broken down, along with most others.

Those jobs that disappeared in the Great Recession just aren’t coming back, regardless of what the unemployment rate tells you – the net gain in jobs since 2000 still stands at zero – and if they do return from the dead, they’ll be zombies, those contingent, part-time or minimum-wage jobs where the bosses shuffle your shift from week to week: welcome to Wal-Mart, where food stamps are a benefit.

And don’t tell me that raising the minimum wage to $15 an hour solves the problem. No one can doubt the moral significance of the movement. But at this rate of pay, you pass the official poverty line only after working 29 hours a week. The current federal minimum wage is $7.25. Working a 40-hour week, you would have to make $10 an hour to reach the official poverty line. What, exactly, is the point of earning a paycheck that isn’t a living wage, except to prove that you have a work ethic?
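The wage arithmetic is easy to verify. A minimal sketch (the 52 paid weeks per year and the comparison to the poverty guideline for a small family are my assumptions; the wages and hours are the essay's):

    def annual_income(hourly_wage, hours_per_week, weeks_per_year=52):
        """Gross annual pay, before taxes and transfers."""
        return hourly_wage * hours_per_week * weeks_per_year

    print(f"${annual_income(15.00, 29):,.0f}")   # $22,620 -- $15/hr at 29 hours a week
    print(f"${annual_income(10.00, 40):,.0f}")   # $20,800 -- $10/hr full-time
    print(f"${annual_income(7.25, 40):,.0f}")    # $15,080 -- federal minimum wage, full-time
    # The first two land in the low $20,000s, roughly the federal poverty
    # guideline for a small family; full-time minimum-wage work falls far short.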

But, wait, isn’t our present dilemma just a passing phase of the business cycle? What about the job market of the future? Haven’t the doomsayers, those damn Malthusians, always been proved wrong by rising productivity, new fields of enterprise, new economic opportunities? Well, yeah – until now, these times. The measurable trends of the past half-century, and the plausible projections for the next half-century, are just too empirically grounded to dismiss as dismal science or ideological hokum. They look like the data on climate change – you can deny them if you like, but you’ll sound like a moron when you do.

For example, the Oxford economists who study employment trends tell us that almost half of existing jobs, including those involving ‘non-routine cognitive tasks’ – you know, like thinking – are at risk of death by computerisation within 20 years. They’re elaborating on conclusions reached by two MIT economists in the book Race Against the Machine (2011). Meanwhile, the Silicon Valley types who give TED talks have started speaking of ‘surplus humans’ as a result of the same process – cybernated production. Rise of the Robots, a new book that cites these very sources, is social science, not science fiction.

So this Great Recession of ours – don’t kid yourself, it ain’t over – is a moral crisis as well as an economic catastrophe. You might even say it’s a spiritual impasse, because it makes us ask what social scaffolding other than work will permit the construction of character – or whether character itself is something we must aspire to. But that is why it’s also an intellectual opportunity: it forces us to imagine a world in which the job no longer builds our character, determines our incomes or dominates our daily lives.

What would you do if you didn’t have to work to receive an income?

In short, it lets us say: enough already. Fuck work.

Certainly this crisis makes us ask: what comes after work? What would you do without your job as the external discipline that organises your waking life – as the social imperative that gets you up and on your way to the factory, the office, the store, the warehouse, the restaurant, wherever you work and, no matter how much you hate it, keeps you coming back? What would you do if you didn’t have to work to receive an income?

And what would society and civilisation be like if we didn’t have to ‘earn’ a living – if leisure was not our choice but our lot? Would we hang out at the local Starbucks, laptops open? Or volunteer to teach children in less-developed places, such as Mississippi? Or smoke weed and watch reality TV all day?

I’m not proposing a fancy thought experiment here. By now these are practical questions because there aren’t enough jobs. So it’s time we asked even more practical questions. How do you make a living without a job – can you receive income without working for it? Is it possible, to begin with and then, the hard part, is it ethical? If you were raised to believe that work is the index of your value to society – as most of us were – would it feel like cheating to get something for nothing?

We already have some provisional answers because we’re all on the dole, more or less. The fastest growing component of household income since 1959 has been ‘transfer payments’ from government. By the turn of the 21st century, 20 per cent of all household income came from this source – from what is otherwise known as welfare or ‘entitlements’. Without this income supplement, half of the adults with full-time jobs would live below the poverty line, and most working Americans would be eligible for food stamps.

But are these transfer payments and ‘entitlements’ affordable, in either economic or moral terms? By continuing and enlarging them, do we subsidise sloth, or do we enrich a debate on the rudiments of the good life?

Transfer payments or ‘entitlements’, not to mention Wall Street bonuses (talk about getting something for nothing) have taught us how to detach the receipt of income from the production of goods, but now, in plain view of the end of work, the lesson needs rethinking. No matter how you calculate the federal budget, we can afford to be our brother’s keeper. The real question is not whether but how we choose to be.

I know what you’re thinking – we can’t afford this! But yeah, we can, very easily. We raise the arbitrary lid on the Social Security contribution, which now stands at $127,200, and we raise taxes on corporate income, reversing the Reagan Revolution. These two steps solve a fake fiscal problem and create an economic surplus where we now can measure a moral deficit.

Of course, you will say – along with every economist from Dean Baker to Greg Mankiw, Left to Right – that raising taxes on corporate income is a disincentive to investment and thus job creation. Or that it will drive corporations overseas, where taxes are lower.

But in fact raising taxes on corporate income can’t have these effects.

Let’s work backward. Corporations have been ‘multinational’ for quite some time. In the 1970s and ’80s, before Ronald Reagan’s signature tax cuts took effect, approximately 60 per cent of manufactured imported goods were produced offshore, overseas, by US companies. That percentage has risen since then, but not by much.

Chinese workers aren’t the problem – the homeless, aimless idiocy of corporate accounting is. That is why the Citizens United decision of 2010, which applied freedom-of-speech protections to campaign spending, is hilarious. Money isn’t speech, any more than noise is. The Supreme Court has conjured a living being, a new person, from the remains of the common law, creating a real world more frightening than its cinematic equivalent: say, Frankenstein, Blade Runner or, more recently, Transformers.

But the bottom line is this. Most jobs aren’t created by private, corporate investment, so raising taxes on corporate income won’t affect employment. You heard me right. Since the 1920s, economic growth has happened even though net private investment has atrophied. What does that mean? It means that profits are pointless except as a way of announcing to your stockholders (and hostile takeover specialists) that your company is a going concern, a thriving business. You don’t need profits to ‘reinvest’, to finance the expansion of your company’s workforce or output, as the recent history of Apple and most other corporations has amply demonstrated.

I know that building my character through work is stupid because crime pays. I might as well become a gangster

So investment decisions by CEOs have only a marginal effect on employment. Taxing the profits of corporations to finance a welfare state that permits us to love our neighbours and to be our brothers’ keeper is not an economic problem. It’s something else – it’s an intellectual issue, a moral conundrum.

When we place our faith in hard work, we’re wishing for the creation of character; but we’re also hoping, or expecting, that the labour market will allocate incomes fairly and rationally. And there’s the rub, they do go together. Character can be created on the job only when we can see that there’s an intelligible, justifiable relation between past effort, learned skills and present reward. When I see that your income is completely out of proportion to your production of real value, of durable goods the rest of us can use and appreciate (and by ‘durable’ I don’t mean just material things), I begin to doubt that character is a consequence of hard work.

When I see, for example, that you’re making millions by laundering drug-cartel money (HSBC), or pushing bad paper on mutual fund managers (AIG, Bear Stearns, Morgan Stanley, Citibank), or preying on low-income borrowers (Bank of America), or buying votes in Congress (all of the above) – just business as usual on Wall Street – while I’m barely making ends meet from the earnings of my full-time job, I realise that my participation in the labour market is irrational. I know that building my character through work is stupid because crime pays. I might as well become a gangster like you.

That’s why an economic crisis such as the Great Recession is also a moral problem, a spiritual impasse – and an intellectual opportunity. We’ve placed so many bets on the social, cultural and ethical import of work that when the labour market fails, as it so spectacularly has, we’re at a loss to explain what happened, or to orient ourselves to a different set of meanings for work and for markets.

And by ‘we’ I mean pretty much all of us, Left to Right, because everybody wants to put Americans back to work, one way or another – ‘full employment’ is the goal of Right-wing politicians no less than Left-wing economists. The differences between them are over means, not ends, and those ends include intangibles such as the acquisition of character.

Which is to say that everybody has doubled down on the benefits of work just as it reaches a vanishing point. Securing ‘full employment’ has become a bipartisan goal at the very moment it has become both impossible and unnecessary. Sort of like securing slavery in the 1850s or segregation in the 1950s.

Why?

Because work means everything to us inhabitants of modern market societies – regardless of whether it still produces solid character and allocates incomes rationally, and quite apart from the need to make a living. It’s been the medium of most of our thinking about the good life since Plato correlated craftsmanship and the possibility of ideas as such. It’s been our way of defying death, by making and repairing the durable things, the significant things we know will last beyond our allotted time on earth because they teach us, as we make or repair them, that the world beyond us – the world before and after us – has its own reality principles.

Think about the scope of this idea. Work has been a way of demonstrating differences between males and females, for example by merging the meanings of fatherhood and ‘breadwinner’, and then, more recently, prying them apart. Since the 17th century, masculinity and femininity have been defined – not necessarily achieved – by their places in a moral economy, as working men who got paid wages for their production of value on the job, or as working women who got paid nothing for their production and maintenance of families. Of course, these definitions are now changing, as the meaning of ‘family’ changes, along with profound and parallel changes in the labour market – the entry of women is just one of those – and in attitudes toward sexuality.

When work disappears, the genders produced by the labour market are blurred. When socially necessary labour declines, what we once called women’s work – education, healthcare, service – becomes our basic industry, not a ‘tertiary’ dimension of the measurable economy. The labour of love, caring for one another and learning how to be our brother’s keeper – socially beneficial labour – becomes not merely possible but eminently necessary, and not just within families, where affection is routinely available. No, I mean out there, in the wide, wide world.

Work has also been the American way of producing ‘racial capitalism’, as the historians now call it, by means of slave labour, convict labour, sharecropping, then segregated labour markets – in other words, a ‘free enterprise system’ built on the ruins of black bodies, an economic edifice animated, saturated and determined by racism. There never was a free market in labour in these united states. Like every other market, it was always hedged by lawful, systematic discrimination against black folk. You might even say that this hedged market produced the still-deployed stereotypes of African-American laziness, by excluding black workers from remunerative employment, confining them to the ghettos of the eight-hour day.

And yet, and yet. Though work has often entailed subjugation, obedience and hierarchy (see above), it’s also where many of us, probably most of us, have consistently expressed our deepest human desire, to be free of externally imposed authority or obligation, to be self-sufficient. We have defined ourselves for centuries by what we do, by what we produce.

But by now we must know that this definition of ourselves entails the principle of productivity – from each according to his abilities, to each according to his creation of real value through work – and commits us to the inane idea that we’re worth only as much as the labour market can register, as a price. By now we must also know that this principle plots a certain course to endless growth and its faithful attendant, environmental degradation.

Until now, the principle of productivity has functioned as the reality principle that made the American Dream seem plausible. ‘Work hard, play by the rules, get ahead’, or, ‘You get what you pay for, you make your own way, you rightly receive what you’ve honestly earned’ – such homilies and exhortations used to make sense of the world. At any rate they didn’t sound delusional. By now they do.

Adherence to the principle of productivity therefore threatens public health as well as the planet (actually, these are the same thing). By committing us to what is impossible, it makes for madness. The Nobel Prize-winning economist Angus Deaton said something like this when he explained anomalous mortality rates among white people in the Bible Belt by claiming that they’ve ‘lost the narrative of their lives’ – by suggesting that they’ve lost faith in the American Dream. For them, the work ethic is a death sentence because they can’t live by it.

So the impending end of work raises the most fundamental questions about what it means to be human. To begin with, what purposes could we choose if the job – economic necessity – didn’t consume most of our waking hours and creative energies? What evident yet unknown possibilities would then appear? How would human nature itself change as the ancient, aristocratic privilege of leisure becomes the birthright of human beings as such?

Sigmund Freud insisted that love and work were the essential ingredients of healthy human being. Of course he was right. But can love survive the end of work as the willing partner of the good life? Can we let people get something for nothing and still treat them as our brothers and sisters – as members of a beloved community? Can you imagine the moment when you’ve just met an attractive stranger at a party, or you’re online looking for someone, anyone, but you don’t ask: ‘So, what do you do?’

We won’t have any answers until we acknowledge that work now means everything to us – and that hereafter it can’t.





What It Means To Be A Modern Day Slave

3 10 2013

Reblogged from what-it-means-to-be-a-modern-day-slave/

Do you ever wonder about the purpose of life? Why are we all here and what are we doing with our lives? We live in an unprecedented time, full of amazing opportunities on the one hand and terrible catastrophes on the other. But for the typical person working a 9-to-5 job (or more likely a 12-hour shift these days), it is likely that neither of these possibilities even registers. So many people are simply concerned with the business of surviving: finding a job, paying the mortgage, raising their children, and finding what little time there is left to de-stress from it all. Despite all of the labor-saving devices that were supposed to usher in an Age of Leisure, people seem to be working harder today than ever before.

Economist Richard Wolff points to the decoupling of productivity gains from income gains that began in the late 1970s and has accelerated ever since. The world has never been more productive, yet the average worker is getting poorer as most of the income gains in the economy flow directly to the top. It is obvious that something is seriously wrong on a systemic level, which begs the questions: Why are the majority of people on the planet focused on getting a job, and why – when or if they get one – do they find themselves working longer hours for less and less reward? If the average person isn’t really benefiting from their hard work, who is?

To answer these questions, we need to understand that there are really only two ways people can derive an income from society. Martin Adams, author of Sharing the Earth: A Proposal for a Tax Free and Prosperous World, describes these ways:

“Broadly speaking, there are only two ways a human being can make an income: he can either make an income by contributing to society, or he can extract an income from society. A person can contribute to society by providing valuable goods and services. When a person contributes a valuable service and gets paid for it, he collects a wage; and when he gets paid for providing a valuable product, he collects what economists call a capital yield or capital return. When a mechanic gets paid for repairing a car, he collects a wage. But when a company receives money for leasing out a car, a capital good, the company receives a capital return. Each entity contributed a useful good or service.

The only other way people can make an income is by receiving what economists call economic rent. They do this not by adding wealth to society, but by extracting an income from society without providing any real wealth in return. When a person owns a natural resource such as land and charges other people for their use of it, he receives economic rent because the money he gets does not pay for any man-made goods or services.”  (Source: http://sharingtheearth.com/the-production-of-wealth)

On the most basic economic level, there are two distinct classes of people: a select few who are able to live truly free lives with absolute sovereignty over their time, and everyone else, who must trade their labor or time – which is to say their life – for the means to survive. Isn’t your labor nothing more than your life’s energy? Isn’t it the same life from which you hope to fulfill your dreams, raise your family, and explore the fantastic experience of being alive? When we trade our life, what we get in return ought to be worth something.

There is another name for a person who doesn’t have sovereignty over his own time. We call them slaves. Up until just a few centuries ago, the elite were actually allowed to legally own other people. For example, in ancient Rome it was estimated that 35 to 40 percent of the population were slaves. Today, involuntary slavery has, for the most part, been legally abolished, but life on earth may actually be worse for the vast majority of the working class: they are subject to voluntary slavery.

In a recent post on the Sustainable Man Facebook page, I asked the following question:

What is the difference between the following two scenarios?

  1. Being in a situation where you’ve been kicked off the land that sustained your family for generations and the only option for providing any sustenance is taking a job making iPhones at 10 cents an hour; or
  2. Slavery

One of the top responses was: “Slaves have an investment by their owners and are generally provided with at least minimal care, housing and food. In general, they are slightly better off.” Indeed, this is true. Today, workers are generally interchangeable. If they get sick or injured, they can be immediately let go, replaced, and forgotten. If that means they can no longer pay the mortgage on their home, the banks will repossess it and sell it to the replacement worker, leaving the former in the position of begging for the resources to stay alive.

Johann Wolfgang von Goethe, a famous German writer, once said that “none are more hopelessly enslaved than those who falsely believe they are free.” Even though most people today believe they are free, is this really true? You are certainly not free to “trespass” on land owned by others or free to take food produced on privately owned land. In some cities, you aren’t even free to sleep on the street. The entire world is now fully owned. The truth is that people are free only insofar as they have money to buy a limited amount of freedom on an ongoing basis. Without money and without the ability to provide for themselves in nature, people have no choice but to submit their labor in exchange for the “market” wage. We have to trade in our time (and our life) in order to survive.

This brings us back to the first question posed in this article. What is all of this for? Is the purpose of our lives simply to submit to working at a job that helps fulfill the goals of the elite while our wages deteriorate? What about our desires for our own life? What do each of us have to give to the world? Sadly, many people still do not have a realistic ability to entertain this question.

Lest I be accused of being a spoiled “hippie” who doesn’t want to “get a job”, let me say that I don’t believe there is a human being alive who is not interested in performing work that is meaningful to them. The reason most people hate their jobs is that those jobs generally don’t represent their true desires for what they wish to create in the world. People really don’t want to spend the majority of their waking hours selling chemical dispersants, mining coal, or taking orders at McDonald’s. The select few who derive their income by extracting it from society are not the ones who must submit to such dispiriting work. Shouldn’t the purpose of our economy be to make it possible for every human being to be truly free to find and discover their true calling?

Martin Adams offers one potential solution. From Sharing the Earth:

“What would happen to us if our entire tax system were replaced with public revenues exclusively generated from natural resource values? We already know that all profits from natural resources are unearned. We also know that anyone who profits from natural resources takes wealth that belongs to society. Given these considerations alone, doesn’t it make sense to stop taxing wages, capital gains, incomes, and sales, and start charging people for their uses of land and of other natural resources?

The United States has a landmass of approximately 2.3 billion acres, of which nearly 60 percent, or 1.35 billion acres, is privately owned. The sheer value of this land is nearly incomprehensible: according to one study, in the United States the value of residential land alone was estimated at more than $6 trillion in 2010 and this figure does not account for lucrative commercial land….the total potential revenues of land fees alone would provide about 60 percent of current U.S. federal, state, and local government revenue – 60 percent of an arguably bloated government budget is substantial.

We need to rethink our current economic model from the ground up, and find within ourselves the tremendous willpower necessary to implement this change. The hurdles we have to overcome are immense, yet we must share the Earth with one another if we are to ever create a world that works for everyone – which is only possible if we create a world that works for anyone.”  (http://sharingtheearth.com/keep-what-you-earn-pay-for-what-you-use)
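To make the structure of that estimate concrete, here is a minimal sketch of the arithmetic behind a land-fee revenue share. All inputs are hypothetical placeholders, not figures from Adams’ study: the function simply multiplies an assumed aggregate land value by an assumed annual fee rate and compares the result with an assumed total of government revenue.

```python
# Illustrative arithmetic only: the shape of a land-value-fee revenue estimate.
# Every input below is a hypothetical placeholder, not a figure from the quoted study.

def land_fee_share(total_land_value, annual_fee_rate, total_gov_revenue):
    """Return (fee revenue, share of total government revenue it would cover)."""
    revenue = total_land_value * annual_fee_rate
    return revenue, revenue / total_gov_revenue

# Hypothetical inputs: $20 trillion of privately held land value, a 5% annual
# charge on that value, and $6 trillion of combined government revenue.
revenue, share = land_fee_share(total_land_value=20e12,
                                annual_fee_rate=0.05,
                                total_gov_revenue=6e12)
print(f"Land fees: ${revenue/1e12:.1f} trillion per year, covering {share:.0%} of revenue")
```

Plugging in different assumptions for land value and the fee rate is what moves the resulting share toward or away from the 60 percent Adams cites.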

Whether it is reforming our tax system, developing alternative interest-free currencies and exchanges, creating local and sustainable economies, joining the emerging sharing and gift economies, or just bringing more kindness, compassion, and awareness to our everyday lives, we must begin to challenge the outdated and seriously flawed economic, cultural and social systems that make it literally impossible for us to share the earth. We owe it not only to ourselves, but also to our children and our children’s children to leave behind a world that is stronger, safer, more sustainable, and more beautiful than we found it.