We are confronted daily with a barrage of information and opinion about climate change and the human factor. Here are some thoughts on cutting through the noise, and a couple of simple questions for everyone to answer. [13 November 2009 | Peter Boyer]
Address to National Conference, Local Government Grants Commissions, Hobart, 12 November 2009
Let me introduce to you Edgar Mitchell. He was born in a small Texas city called Hereford in 1930. Graduating in 1952 with a degree in industrial management, he became a US Navy pilot and then a teacher of pilots before earning further academic degrees in aeronautics.
When he was 40, this scientist and technocrat had a personal experience which he later described in words of a poetic quality we don’t normally associate with engineers:
Suddenly, from behind the rim of the moon, in long, slow-motion moments of immense majesty, there emerges a sparkling blue and white jewel, a light, delicate sky-blue sphere laced with slowly swirling veils of white, rising gradually like a small pearl in a thick sea of black mystery… It takes more than a moment to fully realise that this is Earth… home.
Mitchell was the lunar module pilot of Apollo 14. In February 1971 he spent more than nine hours walking on the moon. He was one of a handful of people who have viewed Earth from another heavenly body. For him, and for all the other astronauts who had this privilege, it was a life-changing experience. It was an opportunity to see, as if for the first time, this place we all call home.
The rest of us saw photographs of the lunar astronauts’ experience, but we missed out on the real thing, this life-changing vision splendid. Besides the flat photo-images, all we’ve had are the mind-pictures, the school lessons about the solar system and, down the years, those attempts to focus our thoughts on the Whole Earth – critiques of modern humanity like the landmark study The Limits to Growth, first published a year after Mitchell walked on the moon. We carried on regardless, growing our salaries and our budgets and our economies, and, as if in defiance of The Limits to Growth, those mountains of waste they spawned.
Imagine that we had been able to see Spaceship Earth as Edgar Mitchell had done, to view that lonely, beautiful blue sphere in a sea of black and have that vision imprinted on our brains as it was on his. We might have understood more clearly that we live on a finite planet, with finite atmosphere and oceans and lands. We might have seen more clearly that thinking we can continue indefinitely to draw down Earth’s resources without regard to our planet’s physical constraints is more than faintly ridiculous.
Look hard at Planet Earth. What can you see? Swirls of cloud above the reflected blue sunlight that signifies an airy, watery planet – the kind of planet on which life is likely to be found. As Mitchell and his fellow astronauts approached more closely, they would have discerned some colour variations: first the white ice of the polar regions, then the browns and greens of the continents and larger islands, and the larger features within the continents — deserts, forests, our largest lakes and river systems. It is only towards the end of their journey home that they would have seen the tell-tale signs of humanity, starting with the larger cities. If they managed to discern a familiar city, they might have made a remark about their own people – the society in which they lived. But at what point, amid all the wonderful sights of this long journey, would they have said to each other, “Oh look, there’s the Economy!”?
They might have commented on evidence of an economy, such as our major highways and the lights of the big cities, or exposed earth amid forested land, or the shaved hills and gaping holes of open-cut mines, or the gas flares of the Arabian peninsula or Texas or Siberia. If they had made their journey closer to our time they would have seen vast acreages of boreal forest removed for access to the tar sands beneath, and thousands of square kilometres of floating waste from human activity that moves slowly around an eddy of the eastern Pacific. But without the knowledge we have today about our planet’s systems, they wouldn’t have thought to connect any of that with the economy — which as we’re all told, repeatedly, day after day, is all-important in our lives: something to be nurtured, and nourished… and grown.
If we want to understand how it was possible for us to create so rapidly this enormous human footprint on Earth, both in numbers of people (now over 6.8 billion) and in the impact of their activities, we need look no further than our relentlessly growing economy. A growing economy needs both growing populations and a growing inclination among those populations to spend their wealth; hence the endlessly varied (and clever) inducements to spend that wealth, from national and international programs down to a new shirt or gizmo.
Until recently it has been considered heresy to suggest that economic growth might not be a good idea any more. But there’s a growing feeling of unease even among some mainstream economists, a feeling that the time of demand-driven economies is past, and that we need to look for new paradigms. Professor Tim Jackson, Economics Commissioner for the UK’s Sustainable Development Commission, in his landmark 2009 study Prosperity without Growth, noted that since the 1950s “the pursuit of growth has been the single most important policy goal across the world.” He continued: “The global economy is almost five times the size it was half a century ago. If it continues to grow at the same rate the economy will be 80 times that size by the year 2100.” This is not, as he says, sustainable. We must look for prosperity without growth.
The idea that there might be a flip side to endless growth has grown on us slowly. Blake’s dark satanic mills and Malthus’s vision of death and disease from uncontrolled population growth were early manifestations of unease about industrialisation which have become clichés in more modern times. But such concerns were restricted to the fate and well-being of humankind. Now we are looking at the whole planet.
There’s a different paradigm about the planetary impact of economic activity, no less powerful, that has arisen from the sciences investigating global processes. It begins with the argument put in 1896 by Svante Arrhenius, a Nobel prizewinning Swedish chemist, that carbon dioxide from human coal-burning could cause our atmosphere to heat up. Almost simultaneously, Thomas Chamberlin described the way carbon moves through different media, changing its levels in atmosphere, ocean and earth. These two developments marked the beginning of modern climate science.
It was a stuttering beginning. For half a century, with the odd isolated demurral, orthodox science dismissed the notion that humans could change climate, arguing that planetary systems contained natural correcting mechanisms that dealt with such minor shifts as a rise of a few parts per million of carbon dioxide levels in the air. But in the mid-1950s, in the space of two years, the pendulum began a slow but steady movement in the opposite direction. The shift was the result of the research efforts of three people working in the United States: the physicist Gilbert Plass, the chemist Hans Suess, and the oceanographer Roger Revelle.
By 1956, Arrhenius’s hypothesis that greater concentrations of carbon dioxide in the air raised the atmosphere’s capacity to hold heat had been confirmed. In that year, Plass published calculations showing that, at the rate human activity was putting carbon into the atmosphere, warming of the planet would be clearly apparent within about half a century. Around the same time, Suess found a way of identifying, by its isotopic signature, the carbon in the air that had come from the burning of fossil fuels. Revelle used Plass’s and Suess’s findings to show that at the rate at which human-produced carbon in the atmosphere was increasing, the oceans would not be able to absorb sufficient carbon to prevent marked warming. One of Revelle’s students, Charles Keeling, set up an atmospheric monitoring station in Hawaii that confirmed the rising trajectory of atmospheric carbon concentrations. Anthropogenic global warming (AGW) was finally on the agenda.
It was another thirty years before a critical mass of the broader scientific community accepted the Revelle-Suess-Plass argument that we had a problem. With increasing agreement that a tiny change in carbon dioxide levels can change temperatures and that rising carbon dioxide levels resulted from human activity, the main debate has focused on whether or not temperatures are actually rising.
A criticism that temperature records could be skewed by the “heat island effect”, whereby city temperatures will always be warmer than those of the surrounding countryside, had been dealt with by science early in the 20th century, although the notion lingered long enough for Michael Crichton to write a novel about it, State of Fear, nearly a century later. Then there was the “Medieval Warm Period” and the “Little Ice Age” argument, in which it was said that global temperatures had fluctuated much more in earlier times than in this industrial era. Meticulous examination of the ice-core and tree-ring temperature records showed a much greater temperature shift in the 20th century than during either of these anomalies. Another criticism said that the global cooling which happened in three stages between 1940 and 1975 was confirmation that there was no general warming trend. The three short stages of cooling turned out to be linked to a lower level of sunspot activity, or solar minimum, and local increases in pollution in the Northern Hemisphere. (Southern Hemisphere records showed no cooling.)
More recently came the claim that satellite data disproved the warming theory by showing a cooling in middle layers of the atmosphere. That was finally scotched in 2004 by the discovery of a mistake in the analysis of the satellite data: the middle layers had in fact warmed, as projected by the models, over the preceding 25 years. Currently the most common line of attack on AGW is that we have been cooling since 1998. The problem with that is that while one of the two most widely-cited global records shows the hottest year to be 1998, the other shows it to be 2005, while all records show that at least seven of the ten hottest years on record have been since 2000. There will always be noise, or spikes, in weather records, but the trend remains a warming one.
Another criticism of the idea of man-made climate change is that while we may be warming, we have no proof that humans caused it. We are, after all, mere animals on a vast planet with known correctional processes inbuilt in its systems, and we know that temperature and carbon dioxide levels have fluctuated wildly in past epochs. Plass and Revelle and Suess looked hard at this in the 1950s and believed they had proven that humans were implicated. Since then we have seen countless efforts to find a natural cause for the sharp upward trend in temperatures since 1970, all of which have failed. We are right now in a deep solar minimum (a period of low sunspot activity), one of the deepest ever known, which ought to see the global mean temperature plummet, yet the temperature remains stubbornly high — well above the average since records began. The twenty-seven independently calibrated climate models used by the Intergovernmental Panel on Climate Change could isolate no natural cause for this warming anomaly. All of them determined that the only explanation was rising levels of carbon dioxide from fossil-fuel burning.
There are two other outcomes of fossil-fuel burning I ask you to consider. Neither has anything to do with temperature, so put aside all the conjecture over whether or not the planet is warming. The first is a phenomenon, confirmed only in the past few years, known as ocean acidification, whereby the absorption of increasing amounts of carbon dioxide causes ocean waters to become steadily more acidic. Our ocean waters have remained within a very narrow acid-alkaline band for millions of years, but are now changing discernibly, to such an extent that some organisms that build calcium deposits in their bodies, like shellfish, will be drastically reduced in numbers, or become extinct, by the end of this century. It is a sobering thought that among these species are some of the oceanic phytoplankton which produce over half of the oxygen we breathe.
The second fossil-fuel burning outcome that is unrelated to global warming is peak oil. Marion King Hubbert was a rather clever oil geologist who in the 1950s worked out that US oil production would peak around 1970 after which it would inevitably fall away as extraction became ever more expensive. He was right on the money. His theory, now universally accepted, has gone global. It is now estimated that we have either recently passed peak oil or will pass it within the next decade, after which we will see a steady underlying rise in the difficulty of extracting oil and, as a result, a rising global oil price. Human ingenuity may find work-arounds for this when it happens, but such work-arounds, it is acknowledged, can only delay the inevitable.
We don’t need to look far to find where the impact of these two phenomena will be keenly felt. Tasmania is surrounded by ocean and derives a large proportion of its export dollars from the sea, and living on an island without its own oil reserves we are entirely dependent on imported oil – two tonnes of it per person per year — to power transport and industry. As for the rest of Australia, what will it mean to lose the Great Barrier Reef, or to be paying many times today’s price for petrol?
Last year about 2000 peer-reviewed scientific papers were published on the subject of climate change. Of these papers, not a single one disputed that man-made climate change was real. Yet in the face of overwhelming scientific evidence, many laypeople around the world today, including many political and business leaders, continue to express doubt as to whether humans have altered our climate.
Influenced by writings and presentations by Professors Ian Plimer and Bob Carter (both geologists) and Professor Garth Paltridge (an atmospheric physicist), a bevy of Australian politicians expressed their doubts on Four Corners (the Australian ABC’s television current affairs program) last week. “It seems that the world has cooled slightly since the late 1990s,” said Tony Abbott. Senator Cory Bernardi felt bold enough to tell Four Corners that the evidence for human-induced global warming was “increasingly discredited”. Nick Minchin argued that greenhouse warming was a leftist plot “to sort of de-industrialise the western world”: “the collapse of communism was a disaster for the left,” he said, “and they embraced environmentalism as their new religion.” Minchin said it was appalling that politicians and others were “trying to terrify 12-year-old girls that their planet’s about to melt.”
Now I’m not into scaring 12-year-old girls. I want them to feel happy and secure, as I want all of us to. I feel sensitive about the tag “evangelical” used by Tony Abbott to describe people like me. And while I can’t speak for the scientists who influenced these politicians, I have some sympathy for the politicians themselves. This is tough to deal with. Anthropogenic climate change is hard to grasp at the best of times, and its implications are very uncomfortable. It took specialist scientists many decades to get their heads around it, so how can we expect these politicians to understand it? Let alone the rest of us.
But I take comfort from a paradox identified by the British sociologist Stanley Cohen in his 2001 book States of Denial: Knowing About Atrocities and Suffering. He found that where people deny the existence of a problem despite overwhelming contrary evidence, this necessarily involves some kind of recognition that the problem exists and that it carries moral implications. So people’s professed scepticism of man-made climate change is in a sense an acknowledgment of the problem.
Why do people persist with such beliefs? George Marshall, the English social and environmental observer, identified two main reasons. The first is that we don’t have a cultural mechanism for dealing with utterly new, monstrously big problems like climate change. The second is that we are naturally inclined to subsume our personal responsibility in our group’s collective responsibility, and wait for someone else to act. Providing information, Marshall asserted, is ineffectual in countering denial and may even strengthen it, as might the passive bystander effect. People need to be confronted by emotionally charged, meaningful, visible alternatives before they will be moved to act. I would add that such confrontation needs to happen collectively, not to isolated individuals. Coming together was never more important than now.
So it’s likely that those among us today who believe that humans cannot and do not influence climate will be unmoved by any of my arguments to this point. That being so, let me express my position in a different way.
Let’s look at the people who started all this. I’ve known a lot of scientists in my life. They tend to be a particular kind of person. They are generally not very good at expressing themselves. When it comes to selling themselves (or anything else) they are very bad. They tend to be cautious, conservative people. They also take a lot of convincing about anything. This all sounds a bit problematic, but when it comes to credibility, it works very much in their favour. The profession of science needs cautious and conservative and sceptical people who speak bluntly and very precisely, because caution, scepticism, precision and so on are the hallmarks of good science. Scientists aren’t easily moved, so if a scientist says there’s a problem you should never dismiss them out of hand. If a lot of scientists are looking especially anxious and saying there’s a really big problem, you should really be sitting up and taking notice.
Now let’s ask ourselves some questions. The best questions I’ve seen were posed by a school science teacher in Oregon, US, named Greg Craven. In a series of YouTube videos and a book published earlier this year, called What’s the Worst That Could Happen?, Craven looked at the whole problem of what to believe about climate and how ordinary people might reach a decision. He decided that all the information flying round out there was making people uncomfortable and switching them off. Instead, he posed these questions:
(1) Who or what are the most credible sources of information, and what do they say?
(2) Given the risks and circumstances, what’s the wisest thing to do?
Deciding who to believe and who to ignore sounds a difficult task for people unfamiliar with the science of climate, but Craven suggests we use a few simple categories. At the top of the pile of credibility are statements from professional societies, or from organisations contradicting their normal bias. Next highest on the credibility spectrum are government reports, followed by a large grey area taking in the views of organisations of various kinds. Then come individual professionals, and, at the bottom of the credibility heap, individual lay people. Armed with these headings, we then start to fill in the gaps. The CSIRO, for instance, would rank highly on this credibility spectrum, while people like me would be at the bottom. I’d be in good company; Al Gore would be near the bottom too. No offence, but he’s just an individual lay person.
Craven proposes that we ignore the question of who’s right and who’s not, instead cutting straight to the real question, which is all about risk. Given that human-induced global warming is either true or not true, he says, what would be the worst that could happen if (a) we decided to act, or (b) we decided not to act? That, I think, is how we should decide our position on climate action.
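For those who think in such terms, Craven’s approach can be sketched as a simple worst-case grid. This is a minimal illustration only; the outcome labels in each cell are my own shorthand, not Craven’s wording:

```python
# Craven's 2x2 decision grid: two possible states of the world
# (AGW true / AGW false) crossed with two possible choices
# (act / don't act). Each entry is an illustrative label for the
# worst plausible outcome in that cell.
grid = {
    ("AGW true", "act"): "cost of a transition we needed anyway",
    ("AGW true", "don't act"): "unchecked warming and acidification",
    ("AGW false", "act"): "money spent on cleaner energy we didn't strictly need",
    ("AGW false", "don't act"): "no cost at all",
}

def worst_cases(choice):
    """Collect the outcomes of one choice across both states of the world."""
    return {state: outcome
            for (state, c), outcome in grid.items()
            if c == choice}

# Compare what each choice risks, whichever way the science turns out.
for choice in ("act", "don't act"):
    print(choice, "->", worst_cases(choice))
```

The point of the grid is that we choose a column, not a cell: nature chooses the row, so a prudent decision weighs the worst outcome each choice could deliver.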
This is such a complex issue. It not only poses the biggest question we have ever had to ask ourselves, with enormous implications for our future and how we should live our lives, but it also puts us in uncharted waters when it comes to dealing with our all-too-human responses. The physical ramifications are mind-boggling, but so too are the psychological ones. To take up Nick Minchin’s concern, what do we tell a 12-year-old girl? (Or boy?) How do we deal with people’s fears? How do we deal with questions from a waterfront property owner, or a Murray Valley rice irrigator? How do we put our personal concerns to our superior at work who doesn’t want to talk about it? Is it acceptable or desirable that we shield certain people from what science is saying?
And then there are public policy and governance issues. What are our responsibilities, individually and collectively? How should our politicians respond? Our bureaucrats? Our business leaders? Is it acceptable for any of these leading figures to take a neutral position, or is it necessary to engage everyone in this challenge? How important is the transparency of our emissions accounting processes? Should we focus on adapting to change or, acknowledging there’s no upper limit to the damage caused by high carbon dioxide levels, on mitigating our emissions? What are the implications for economic and fiscal management? For road, rail, air and sea transport, or infrastructure development, or town planning? For food production and home management?
What does it mean for public policy if we are to incorporate the warnings from science that the carbon we have already put into the atmosphere will bring our global mean temperature perilously close to the danger level of two degrees above pre-industrial temperatures, and that our global emissions trajectory must start dropping five years from now? Are we prepared to risk the chance that the science may be wrong? If we think the science is probably right, why aren’t we putting emergency measures in place now?
While I and others who take public stances in this debate might pose answers to these questions, in the end they must be answered by each of us, individually and with those others we most connect with in our lives.
I wish you all the very best of luck.