
Lucidity and Science: The Deepest Connections

(Pre-publication copy of e-book)
Michael Edgeworth McIntyre
Centre for Atmospheric Science at the
Department of Applied Mathematics and Theoretical Physics
Cambridge University

This small e-book is aimed at scientifically-minded readers including young scientists. It's a book for anyone interested in understanding how things work. Comments welcome!  (Email at the bottom of my home page.) The contents are as follows:

  Prelude: the unconscious brain

  Chapter 1: What is lucidity? What is understanding?

  Chapter 2: Mindsets, evolution, and language

  Chapter 3: Acausality illusions, and the way perception works

  Chapter 4: What is science?

  Chapter 5: Music, mathematics, and the Platonic

  Postlude: the amplifier metaphor for climate

One of the most worthwhile things anyone can do, it seems to me, is to contribute to good science while understanding both its power and its limitations. I'm one of those who argue that good science has to be open science. The forces ranged against good science today are more formidable than ever; but one thing I find inspirational is the continued success of science despite all those forces. And alongside science I want to discuss the connections with other great human endeavours. Those connections are sometimes found surprising. The book draws on a lifetime's personal experience as a scientist, and as a musician. Much of what I have to say stems from knowing how music works.

Key phrases: music, mathematics, human evolution, nature-nurture myth and other dichotomizations, social-media weaponization, time and consciousness, acausality illusions, communication skills, lucidity principles, climate change, weather extremes

Acknowledgements: Many kind friends and colleagues have helped me with advice, encouragement, information, and critical comments over the years. I am especially grateful for pointers to the latest developments in systems biology and palaeoclimatology. In addition to those mentioned in the acknowledgements sections of my Lucidity and Science papers of 1997-8, I'd like to thank especially David Andrews, Pat Bateson, Kerry Emanuel, Matthew Huber, James Jackson, Sue Jackson, Kevin Laland, James Maas, Niall Mansfield, Nick McCave, Richard McIntyre, Ruth McIntyre, Steve Merrick, Gos Micklem, Simon Mitton, Alice Oven, Antony Pay, Sam Pegler, Ray Pierrehumbert, Mark Salter, Nick Shackleton, Emily Shuckburgh, Luke Skinner, Marilyn Strathern, Daphne Sulston, John Sulston, and Paul Valdes.

Prelude: the unconscious brain

Consider, if you will, the following questions.

Good answers are important to our hopes of a civilized future; and many of the answers are surprisingly simple. But a quest to find them will soon encounter a conceptual and linguistic minefield, some of it around ideas like `innateness' and `instinct'. Nevertheless, I think I can put us within reach of some good answers (with a small `a') by recalling, first, some points about how our pre-human ancestors must have evolved -- in a way that differed, in specific and crucial respects, from what popular culture says -- and, second, some points about how we perceive and understand the world.

One reason for considering evolution is the prevalence of misconceptions about it. Chief among them is Herbert Spencer's idea that natural selection works entirely by competition between individuals. This ignores the many examples of cooperative behaviour among social animals, which Charles Darwin himself was at pains to point out.[1] Spencer's idea that competition between individuals is all there is, and all there ought to be, not only flies in the face of the evidence but has also, in my humble opinion, done great damage to human societies. (As I write, in April 2020, it's impeding our attempts to fight the COVID-19 pandemic.)

On perception and understanding, and on our language ability, it hardly needs saying that they must have been shaped by our ancestors' evolution. Less widely understood, however, is that the evolution depended not only on cooperation but also, according to the best evidence, on a strong feedback between genomic evolution and cultural evolution. In particular, our language ability could not have been a purely cultural invention as is sometimes said. Rather, as I'll discuss in chapter 2, drawing on clinching evidence from Nicaragua, it must have developed through the co-evolution of genome and culture, with each affecting the other over millions of years.

Such co-evolution is necessarily a multi-timescale process. Multi-timescale processes are ubiquitous in the natural world. They depend on strong feedbacks between different mechanisms over a large range of timescales, including very slow, and very fast, mechanisms. Here we have slow and fast mechanisms in the form of genomic evolution and cultural evolution. (Of course I'm using the word `cultural' in a broad sense, to include anything passed on by social learning, however primitive or rudimentary.) Genome-culture co-evolution has often been neglected in the literature on evolutionary biology even though its likely importance for pre-human evolution, and its multi-timescale character, were understood and pointed out long ago by, for instance, the great biologist Jacques Monod[2] and the great palaeoanthropologist Phillip Tobias.[3]

The way perception works -- and language -- will be another central theme in this book alongside science. Included will be insights from the way music works.

It seems to me that one of the points missed in the sometimes narrow-minded debates about `nature versus nurture', `instinct versus learning', `genomic evolution versus cultural evolution', and so on is that most of what's involved in perception and understanding, and in our general functioning, takes place well beyond the reach of conscious thought. Some people find this hard to accept. Perhaps they feel offended, in a personal way, to be told that the slightest aspect of their existence might, just possibly, not be under full and rigorous conscious control. A brilliant scientist whom I know personally as a colleague took offence in exactly that way, in a recent discussion of unconscious assumptions in science -- even though the exposure of such assumptions is the usual way in which scientific knowledge improves, as history tells us again and again, and even though I offered clear examples from our shared field of expertise.[4]

Some of our unconscious assumptions seem to come from the dichotomization instinct. The tendency to see evolution as either genomic or cultural, with each excluding the other, is typical. Natural selection is another example, as regards the level at which it operates. There's been a tendency to see it as operating either at the level of individual organisms or at the level of cooperating groups of organisms -- with each excluding the other. The point is missed that you can have both simultaneously. I'll return to these important issues in chapter 2.

It's easy to show that plenty of things in our brains take place involuntarily, that is, entirely beyond the reach of conscious thought and conscious control. There are many examples in the book by Daniel Kahneman,[5] drawing on his famous work with Amos Tversky on psychology and economics. My own favourite example is a very simple one, Gunnar Johansson's `walking lights' animation. Twelve moving dots in a two-dimensional plane are unconsciously assumed to represent a particular three-dimensional motion. When the dots are moving, everyone with normal vision sees -- has no choice but to see -- a person walking (Figure 1):

[Animation: Gunnar Johansson's `walking lights' demo, courtesy James Maas. At right, a QR code linking to a smartphone video of the demo.]

Figure 1: Gunnar Johansson's `walking lights' animation. The walking-lights phenomenon is a well-studied classic in experimental psychology and is one of the most robust perceptual phenomena known. In case you can't see the animation, the QR code on the right will display it on a smartphone with a QR reader.

And again, anyone who has driven cars or flown aircraft will probably remember experiences in which accidents were narrowly avoided, ahead of conscious thought. The typical experience is often described as witnessing oneself taking, for instance, evasive action when faced with a head-on collision. It is all over by the time conscious thinking has begun. It has happened to me. I think such experiences are quite common.

Many years ago, the anthropologist-philosopher Gregory Bateson put the essential point rather well, in classic evolutionary cost-benefit terms:[6]

No organism can afford to be conscious of matters with which it could deal at unconscious levels.

Gregory Bateson's point applies to us as well as to other living organisms. Why? There's a mathematical reason, combinatorial largeness. Every living organism has to deal all the time with a combinatorial tree, a combinatorially large number, of present and future possibilities. Being conscious of all those possibilities would be almost infinitely costly.

Combinatorially large means exponentially large, like compound interest over millennia, or the number of ways to shuffle a pack of cards. Each branching of possibilities multiplies, rather than adds to, the number of possibilities. Such numbers are unimaginably large. No-one can feel their magnitudes intuitively. For instance the number of ways to shuffle a pack of 53 cards is just over 4 × 10^69, or four thousand million trillion trillion trillion trillion trillion.
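The shuffling number is easy to check for yourself. This little computation (my own sketch, not part of the text) counts the orderings of a 53-card pack exactly:

```python
import math

# Number of distinct orderings of a 53-card pack (52 cards plus a joker):
# each successive card multiplies, rather than adds to, the possibilities,
# giving 53 factorial = 53 x 52 x 51 x ... x 2 x 1.
orderings = math.factorial(53)

print(f"{orderings:.3e}")   # about 4.275e+69, i.e. just over 4 x 10^69
print(len(str(orderings)))  # a 70-digit number
```

Even doubling the pack size would not merely double this number; it would multiply it by a further astronomically large factor, which is exactly the point about combinatorial largeness.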

The `instinctive' avoidance of head-on collision in a car -- the action taken ahead of conscious thought -- was not, of course, something that came exclusively from genetic memory. Learning was involved as well. The same goes for the way we see the `walking lights' animation. But much of that learning was itself unconscious, stretching back to the (instinctive) infantile groping that discovers the world and helps normal vision to develop. At a biologically fundamental level, nurture is intimately part of nature. That intimacy stretches even further back, to the genome within the embryo `discovering', and then interacting with, its maternal environment both within and outside the embryo.[7] Jurassic Park is a great story, but scientifically wrong because you need dinosaur eggs as well as dinosaur DNA. (Who knows, though -- since birds are dinosaurs someone might manage it, one day, with reconstructed DNA and birds' eggs.)

Normal vision, by the way, is known not to develop in people who start life with congenital cataracts or opaque corneas. That fact has surprised many who've supposed that surgical removal of the opaque elements in later life would `make the blind to see'.[8]

My approach to questions like the foregoing comes from long experience as a scientist. Science was my main profession for fifty years or so. Although many branches of science interest me, my professional career was focused mainly on mathematical research to understand the highly complex, multi-timescale fluid dynamics of the Earth's atmosphere and oceans. Included are phenomena such as the great jetstreams and the air motion that shapes the Antarctic ozone hole. There are associated phenomena sometimes called the `world's largest breaking waves'. Imagine a sideways breaker in the stratosphere, the mere tip of which is almost as big as the USA. That in turn has helped us to understand the complex fluid dynamics and magnetic fields of something even bigger, the Sun's interior, in an unexpected way. But long ago I almost became a musician. Or rather, in my youth I was, in fact, a part-time professional musician and could have made it into a full-time career. So I've had artistic preoccupations too, and artistic aspirations. This book tries to get at the deepest connections between all these things.

It's obvious, isn't it, that science, mathematics, and the arts are all of them bound up with the way perception works. And common to all these human activities, including science -- whatever popular culture may say to the contrary -- is the creativity that leads to new understanding, the thrill of lateral thinking, and sheer wonder at the whole phenomenon of life itself and at the astonishing Universe we live in.

One of the greatest of those wonders is our own adaptability, our versatility. Who knows, it might even get us through today's troubles, hopeless though that might sometimes seem. That's despite our being genetically similar to our hunter-gatherer ancestors, tribes of people driven again and again to migration and warfare in increasingly clever ways by, among other things, rapid climate fluctuations -- the legendary years of famine and years of plenty.  (How else did our species -- a single, genetically-compatible species with its single human genome -- spread around the globe during the past hundred millennia or so?)  Chapter 2 will point to recent hard evidence for the sheer rapidity, and magnitude, of some of those climate fluctuations.

Chapter 2 will also point to recent advances in our understanding of evolutionary biology and natural selection -- advances that are hard to find in the popular-science literature, and harder still in popular culture, as already hinted. One thing implied by those advances is that not only the nastiest but also the most compassionate, cooperative parts of human nature are `biological' and deep-seated.[9-11] There's a popular misconception that in human nature the nastiest parts are exclusively biological and the nicest parts exclusively cultural, a dichotomization echoing Spencer's idea.

Here, by the way, as in most of this book, I lay no claim to originality. For instance the evidence on past climates comes from the painstaking work of scientists at the cutting edge of palaeoclimatology, including great scientists such as the late Nick Shackleton whom I had the privilege of knowing personally. And the points I'll make about biological evolution rely on insights gleaned from colleagues at the cutting edge of biology, including the late John Sulston whom I also knew personally.

Our ancestors must have had not only language and lateral thinking -- and music, dance, poetry, and storytelling -- but also, no doubt, the mechanisms of greed, power games, blame games, genocide, ecstatic suicide and the rest. To survive, they must have had love and compassion too. The precise timescales and evolutionary pathways for these things are uncertain. But the timescales for at least some of them, including the beginnings of our language ability, must have been very many hundreds of millennia because, as already suggested, they must have depended on the co-evolution of genome and culture -- a long-drawn-out process whose multi-timescale character has been neglected in the literature on evolutionary biology, despite the ubiquity of multi-timescale processes in the natural world. (Of these there's a huge variety. To take one of the simplest examples, consider air pressure, as when pumping up a bicycle tyre. Fast molecular collisions mediate slow changes in air pressure, and air temperature, while pressure and temperature react back on collision rates and strengths. That's a strong and crucial feedback across arbitrarily disparate timescales. I'll give further examples in chapter 2.)
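The flavour of such a fast-slow feedback can be captured in a few lines of code. The toy system below is my own illustration, not anything from the text: a fast variable x relaxes toward a slow variable s a thousand times faster than s itself drifts, yet the slow drift is in turn driven by x -- a strong feedback across disparate timescales, loosely analogous to fast molecular collisions mediating slow pressure changes.

```python
# Toy two-timescale feedback (illustrative only): a fast mechanism and a
# slow mechanism, each driving the other.
FAST, SLOW = 100.0, 0.1   # rates differing by a factor of a thousand
x, s = 0.0, 0.5           # fast variable x, slow variable s
dt = 1e-4
for _ in range(200_000):  # integrate 20 time units by forward Euler
    x, s = (x + dt * FAST * (s - x),   # fast mechanism: x tracks s almost instantly
            s + dt * SLOW * (1.0 - x)) # slow mechanism: s drifts, driven by x
# After a brief transient x locks onto s; thereafter the coupled pair
# creeps slowly toward the equilibrium x = s = 1, where the feedback balances.
```

Neither timescale can be ignored: remove the fast mechanism and the slow drift loses its driver; remove the slow drift and the fast variable has nothing to track.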

So it never made sense to me to say that long and short timescales can't interact. It never made sense to say that genomic evolution has to be separate from cultural evolution. And in particular it never made sense to say, as is sometimes said, that language started less than a hundred millennia ago, as a purely cultural invention -- the invention of a single mother tongue from which all today's languages are descended, purely by cultural transmission.[12,13] At a biologically fundamental level, that idea makes no more sense than does the underlying false dichotomy nature `versus' nurture. I'll return to these points in chapter 2 and will try to argue them very carefully.

It's sometimes forgotten that language and culture can be mediated purely by sound waves and light waves and held in individuals' memories -- the epic saga phenomenon, if you will, as in the Odyssey or in a multitude of other oral traditions, including Australian aboriginal songlines and the `immense wealth' of the unwritten literature of Africa.[14] That's a very convenient, an eminently portable, form of culture for a tribe on the move. And sound waves and light waves are such ephemeral things. They have the annoying property of leaving no archaeological trace. But absence of evidence isn't evidence of absence.

And now, in a mere flash of evolutionary time, a mere few centuries, we've shown our versatility in ways that seem to me more astonishing than ever. We no longer panic at the sight of a comet. Demons in the air have shrunk to a small minority of alien abductors. We don't burn witches and heretics, at least not literally. The Pope dares to apologize for past misdeeds. Genocide was avoided in South Africa. We even dare, sometimes, to tolerate individual propensities and lifestyles if they don't harm others. We dare to talk about astonishing new things called personal freedom, social justice, women's rights, transgender rights, and human rights. We even dare to argue that tyrants needn't always win. Indeed, headline bias and recent events notwithstanding, governments have become less tyrannical and more democratic, on average, over the past two centuries, in what Samuel Huntington has called three waves of democratization.[15] And most astonishing of all, since 1945 we've even had the good sense so far -- and very much against the odds -- to avoid the use of nuclear weapons.

We have space-based observing instruments, super-accurate clocks, and the super-accurate global positioning that cross-checks Einstein's gravitational theory -- yet again -- and now a further and completely different cross-check of consummate beauty, detection of the lightspeed spacetime ripples predicted by the theory.[16] We have marvelled at the sight of our beautiful home planet in space, poised above the lunar horizon. We have the Internet, bringing us new degrees of freedom and profligacy of information and disinformation. It presents us with new challenges to exercise critical judgement and to build computational systems and artificial intelligences of unprecedented power, and to use them for civilized purposes -- exploiting the robustness and reliability growing out of the open-source software movement, `the collective IQ of thousands of individuals'.[17] We can read and write genetic codes, and thanks to our collective IQ are beginning, just beginning, to understand them.[18] On large and small scales we've been carrying out extraordinary new social experiments with labels like `free-market democracy', `free-market autocracy', `children's democracy',[19] `microlending' conducive to population control,[20] `citizen science', and now the burgeoning social media. With the weaponization of the social media now upon us there's a huge downside, as with any new technology. But there's also a huge upside, and everything to play for.

When the Mahatma Gandhi visited England in 1930 he is said to have been asked by a journalist, `Mr Gandhi, what do you think of modern civilization?' The Mahatma is said to have replied, `That would be a good idea.' The optimist in me hopes you agree. Part of it would be not only a clearer recognition of the power and limitations of science -- including its power, and its limitations, in helping us to understand our own nature -- but also a further healing of the estrangement between science and the arts and humanities. It might even -- dare I hope for this? -- further reconcile science with the more compassionate, less dogmatic, less politicized, less violent forms of religion and other belief systems that are important for the mental and spiritual health of so many people. And we need better self-understanding in any case, if only to be aware of its use, and its would-be monopolization, by the demagogues and the social-media technocrats.

I have dared to hint at the `deepest' connections amongst all these things. In a peculiar way, some of the connections can be seen not only as deep but also as simple -- provided that one is willing to maintain a certain humility, and willing to think on more than one level.

Multi-level thinking is nothing new. It has long been recognized, unconsciously at least, as being essential to science. It goes back in time beyond Newton, Galileo, Ibn Sina and Archimedes. What's complex at one level can be simple at another. Newton treated the Earth as a point mass. It led to his breakthrough in understanding planetary dynamics -- despite the enormous complexity of the real Earth.

Today we have a new conceptual framework, complexity theory or complex-systems theory, working in tandem with the new Bayesian causality theory -- the powerful mathematics now crucial to science and also to the commercial and political use of big-data analytics by the social-media technocrats. Its use within science is helping to clarify what's involved in multi-level thinking and to develop it more systematically, more generally, and more consciously. Key ideas include self-organization, self-assembling components or building blocks, and the use of the Bayesian probabilistic `do' operator[21] to distinguish correlation from causality. (Its use by the social-media technocrats includes behavioural experiments on a vast scale, such as Pokémon Go -- all disguised as fun and games, but creating a secret `large hadron collider' of experimental psychology of which we'd do well to be aware.[22])
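The distinction the `do' operator draws can be seen in a ten-line simulation. The following is my own toy example, not anything from the text: a hidden variable z drives both x and y, so x and y are strongly correlated even though x has no effect whatever on y; intervening on x, rather than merely observing it, exposes the difference.

```python
import random

# Toy structural model: hidden z drives both x and y; x does NOT cause y.
def sample(do_x=None):
    z = random.random() < 0.5
    x = do_x if do_x is not None else (random.random() < (0.9 if z else 0.1))
    y = random.random() < (0.8 if z else 0.2)
    return x, y

random.seed(0)
obs = [sample() for _ in range(100_000)]
p_obs = sum(y for x, y in obs if x) / sum(x for x, _ in obs)  # P(y | x=1): high, ~0.74

dos = [sample(do_x=True) for _ in range(100_000)]
p_do = sum(y for _, y in dos) / len(dos)                      # P(y | do(x=1)): ~0.5

# Conditioning says y is likely whenever x is observed; forcing x leaves
# y's chances untouched -- correlation without causation, laid bare.
```

Observing x=1 makes y likely only because both are symptoms of z; setting x by intervention breaks that back-door path, which is precisely what the `do' notation formalizes.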

Another key idea is that of emergent properties -- at different levels of description within complex systems and hierarchies of systems, not least the human brain itself. `Emergent property' is a specialist term for something that looks simple at one level even though caused by the interplay of complex, chaotic events at lower levels. A related idea is that of `order emerging from chaos'. Self-assembling building blocks are also called autonomous components, or `automata' for brevity.

We'll see that the ideas of multi-level thinking, automata, and self-organization are all crucial to making sense of many basic phenomena, such as the way genetic memory works and what instincts are -- instincts, that is, in the everyday sense relating to things we do, and perceive, and feel automatically, ahead of conscious thought. (And without those ideas, including multi-level thinking, there's no chance of making sense of such things as `consciousness' and `free will' -- aspects of which I'll touch on in chapters 3 and 4.)

Scientific progress has always been about finding a level of description and a viewpoint, or viewpoints, from which something at first sight hopelessly complex can be seen as simple enough to be understandable. The Antarctic ozone hole is a case in point. I myself made a contribution by spotting some simplifying features in the fluid dynamics, in the way the air moves and transports chemicals. And, by the way, so high is our scientific confidence in today's understanding of the ozone hole -- with many observational and theoretical cross-checks -- that the professional disinformers who tried to discredit that understanding in a well known campaign,[24] which I myself encountered at close quarters, are no longer taken seriously. That's despite the enormous complexity of the problem, involving spatial scales from the planetary down to the atomic, and multiple timescales from centuries down to thousand-trillionths of a second.

We now have practical certainty, and wide acceptance, that man-made chemicals are the main cause of the ozone hole. We now understand in detail why the deepest ozone hole appears in the south, even though the chemicals causing it are emitted mostly in the north. The chemicals are transported from north to south by global-scale air motion, in a pattern of horizontal and vertical motion whose fluid dynamics was a complete mystery when I began my research career, but is now well understood. Waves and turbulence are involved, in a complex set of multi-timescale processes. And now, through what's called the Montreal Protocol, we have internationally-agreed regulations to restrict emissions of the chemicals despite the disinformers' aim of stopping any such regulation. We have a new symbiosis between market forces and regulation.[25]

What makes life as a scientist worth living? For me, part of the answer is the joy of being honest. There's a scientific ideal and a scientific ethic that power good science. And they depend crucially on honesty. If you stand up in front of a large conference and say of your favourite theory `I was wrong', you gain respect rather than losing it. I've seen it happen. Your reputation increases. Why? The scientific ideal says that respect for the evidence, for theoretical coherence and self-consistency, for cross-checking, finding mistakes, and improving our collective knowledge is more important than personal ego or financial gain. And if someone else has found evidence that refutes your theory, then the scientific ethic requires you to say so. The ethic says that you must not only be factually honest but must also give due credit to others, by name, whenever their contributions are relevant.

The scientific ideal and ethic are powerful because, even when imperfectly followed, they encourage not only a healthy scepticism but also a healthy mixture of competition and cooperation. Just as in the open-source software community, the ideal and ethic harness the collective IQ, the collective brainpower, of large research communities in ways that can transcend even the power of short-term greed and financial gain. The ozone-hole story is a case in point. So is the human-genome story with its promise of future scientific breakthroughs, including medical breakthroughs, calling for years of collective research effort. The scientific ideal and ethic were powerful enough to keep the genomic information in the public domain -- as open-source information available for use in open research communities -- despite an attempt to monopolize it commercially that very nearly succeeded.[26] Our collective brainpower will be crucial to solving the problems posed by the genome and the molecular-biological systems of which it forms a part, and the interplay with diseases such as COVID-19. Like so many other problems now confronting us, they are problems of the most formidable complexity.

In the Postlude I'll return to some of these problems with particular reference to climate change, a problem of still greater complexity. Again, there's no claim to originality here. I merely aim to pick out, from the morass of confusion and disinformation surrounding the topic,[24] a few simple points clarifying where the uncertainties lie as well as the near-certainties. These points are far simpler than many people think.

Chapter 1: What is lucidity? What is understanding?

This book reflects my own journey toward the frontiers of human self-understanding. Of course many others have made such journeys. But in my case the journey began in a slightly unusual way.

Music and the arts were always part of my life. Music was pure magic to me as a small child. But the conscious journey began with a puzzle. While reading my students' doctoral thesis drafts, and working as a scientific journal editor, I began to wonder why lucidity, or clarity -- in writing and speaking, as well as in thinking -- is often found difficult to achieve; and I wondered why some of my scientific and mathematical colleagues are such surprisingly bad communicators, even within their own research communities, let alone on issues of public concern. Then I began to wonder what lucidity is, in a functional or operational sense. And then I began to suspect a deep connection with the way music works. Music is, after all, not only part of our culture but also part of our unconscious human nature.

I now like to understand the word `lucidity' in a more general sense than usual. It's not only about what you can find in style manuals and in books on how to write, excellent and useful though many of them are. (Strunk and White[27] is a little gem.) It's also about deeper connections not only with music but also with mathematics, pattern perception, biological evolution, and science in general. A common thread is the `organic-change principle'. It's familiar, I think, to most artists, at least unconsciously.

The principle says that we're perceptually sensitive to, and have an unconscious interest in, patterns exhibiting `organic change'. These are patterns in which some things change, continuously or by small amounts, while others stay the same. So an organically-changing pattern has invariant elements.

The walking-lights animation is an example. The invariant elements include the number of dots, always twelve dots in the example of Figure 1. Musical harmony is another -- chord progressions if you will. Musical harmony is an interesting case because `small amounts' applies not in one but in two different senses, leading to the idea of `musical hyperspace'. Chord progressions can take us somewhere that's both nearby and far away. That's how some of the magic is done, in many styles of Western music. An octave leap is a large change in one sense, but small in the other, indeed so small that musicians use the same name for the two pitches. The invariant elements in a chord progression can be pitches or chord shapes.

Music consists of organically-changing sound patterns not just in its harmony or chord progressions, but also in its melodic shapes and counterpoint and in the overall form, or architecture, of an entire piece of music. Mathematics involves organic change too. In mathematics there are beautiful results about `invariants' or `conserved quantities', things that stay the same while other things change, often continuously through a vast space of possibilities. The great mathematician Emmy Noether discovered a common origin for many such results, through a profound and original piece of mathematical thinking. It is called Noether's Theorem and is recognized today as a foundation-stone of theoretical physics.
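For readers who want the formal statement, a standard textbook form of Noether's theorem (my summary, not the book's) runs as follows: whenever a system's Lagrangian is unchanged by a continuous family of transformations, some quantity stays constant throughout the motion.

```latex
% Noether's theorem, point-transformation form:
\text{If } L(q,\dot q,t) \text{ is invariant under } q \mapsto q + \epsilon\,K(q),
\text{ then } Q = \frac{\partial L}{\partial \dot q}\,K(q)
\text{ is constant along every solution of }
\frac{d}{dt}\frac{\partial L}{\partial \dot q} = \frac{\partial L}{\partial q}.
```

Invariance under spatial translation gives conservation of momentum, and invariance under rotation gives conservation of angular momentum: things that stay the same while other things change, exactly in the organic-change sense.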

Our perceptual sensitivity to organic change exists for strong biological reasons. One reason is the survival value of sensing the difference between living things and dead or inanimate things. To see a flower opening, or a cat stalking a bird, is to see organic change.

So I'd dare to describe our sensitivity to it as instinctive. Many years ago I saw a pet kitten suddenly die of some mysterious but acute disease. I had never seen death before, but I remember feeling instantly sure of what had happened -- ahead of conscious thought. And the ability to see the difference between living and dead has been shown to be well developed in human infants a few months old.

Notice how intimately involved, in all this, are ideas of a very abstract kind. The idea of some things changing while others stay invariant is itself highly abstract, as well as simple. It's abstract in the sense that vast numbers of possibilities are included. There are vast numbers -- combinatorially large numbers -- of organically-changing patterns. Here again we're glimpsing the fact already hinted at, that the unconscious brain can handle many possibilities at once. We have an unconscious power of abstraction. That's almost the same as saying that we have unconscious mathematics. Mathematics is a precise means of handling many possibilities, many patterns, at once -- and in a logically self-consistent way. For instance the walking-lights animation shows that we have unconscious Euclidean geometry, the mathematics of angles and distances. There are combinatorially large numbers of arrangements of objects, at various angles and distances from one another. The roots of mathematics and logic, and of abstract cognitive symbolism, lie far deeper and are evolutionarily far more ancient than they're usually thought to be. They're millions of years more ancient than archaeology suggests. In chapter 5 I'll show that our unconscious mathematics includes, also, the mathematics underlying Noether's theorem, and how all this is related to Plato's world of perfect forms.

So I've been interested in lucidity, `lucidity principles', and related matters in a sense that cuts deeper than, and goes far beyond, the niceties and pedantries of style manuals. But before anyone starts thinking that it's all about Plato and ivory-tower philosophy, let's remind ourselves of some harsh practical realities -- as Plato would have done had he lived today. What I'm talking about is relevant not only to thinking and communication but also, for instance, to the ergonomic design of machinery, of software and user-friendly IT systems (information technology), of user interfaces in general, friendly and unfriendly, and of technological systems of any kind including the emerging artificial-intelligence systems, where the stakes are so incalculably high.

The organic-change principle -- that we're perceptually sensitive to organically-changing patterns -- shows why good practice in any of these endeavours involves not only variation but also invariant elements, i.e., repeated elements, just as music does. Good control-panel design might use, for instance, repeated shapes for control knobs or buttons. And in writing and speaking one needn't be afraid of repetition, if it forms the invariant element within an organically-changing word pattern. `If you are serious, then I'll be serious' is a clearer and stronger sentence than `If you are serious, then I will be too.' Loss of the invariant element `serious' weakens the sentence. Still weaker are versions such as `If you are serious, then I'll be earnest.' Such pointless or gratuitous variation in place of repetition is what H. W. Fowler ironically called `elegant' variation, an `incurable vice' of `the minor novelists and the reporters'.28 Its opposite can be called lucid repetition, as in `If you are serious, then I'll be serious.' Lucid repetition is not the same as being repetitious. The pattern as a whole is changing, organically. It works the same way in every language I've looked at, including Chinese.29

Two other `lucidity principles' are worth noting briefly, while I'm at it. There's an `explicitness principle' -- the need to be more explicit than you feel necessary -- because, obviously, you're communicating with someone whose head isn't full of what your own head is full of. As the great mathematician J. E. Littlewood once put it,30 `Two trivialities omitted can add up to an impasse.' Again, this applies to design in general, as well as to any form of writing or speaking that aims at lucidity. And of course there's the more obvious `coherent-ordering principle', the need to build context before new points are introduced. It applies not only to writing and speaking but also to the design of any sequential process on, for instance, a website or a ticket-vending machine.

There's another reason for attending to the explicitness principle. Human language is surprisingly weak on logic-checking. That's one of the reasons why language is a conceptual minefield -- something that's long kept philosophers in business. And beyond everyday misunderstandings we have, of course, the workings of professional camouflage and deception, as in the ozone and other disinformation campaigns.

The logic-checking weakness shows up in the misnomers and self-contradictory terms encountered not only in everyday dealings but also -- to my continual surprise -- in the technical language used by my scientific and engineering colleagues. You'd think we should know better. You'd laugh if, echoing Spike Milligan, I said that someone has a `hairy bald head'. But consider for example the scientific term `solar constant'. It's a precisely-defined measure of the mean solar power per unit area reaching the Earth. Well, the solar constant isn't constant. It's variable. The Sun's output is variable, though fortunately by not too much. We have a `variable solar constant'.

Another such term is `slow manifold', used in my research field of atmospheric and oceanic fluid dynamics. Well, the slow manifold is a kind of `hairy bald head'. I'm not kidding.31

In air-ticket booking systems there's a `reference number' that isn't a number. In finance there's a term `securitization' that means, among other things, making an investment less secure -- yes, less secure -- by camouflaging what it's based on. And then there's the famous `heteroactive barber'. That's the barber who shaves only those who don't shave themselves. `Heteroactive barber' may sound impressive. Some think it philosophically profound. But it's no more than just another self-contradictory term. Seeing that fact does, however, take a conscious effort. There's no instinctive logic-checking whatever. There are clear biological reasons for this state of things, to which I'll return in chapter 2. I'll leave it to you, dear reader, if need be, to go through the logical steps showing that `heteroactive barber' is indeed a self-contradictory term. (If he doesn't shave himself, then it follows that he does, etc.)
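The logical steps can even be spelled out mechanically. Here's a minimal sketch of my own (purely illustrative, not from any logic textbook's notation) that checks both possibilities for the barber in a few lines of Python:

```python
# The rule defining the `heteroactive barber': he shaves exactly those
# who don't shave themselves. Apply the rule to the barber himself and
# test both possible truth values of "the barber shaves himself".

def rule_holds(barber_shaves_himself):
    # "he shaves himself if and only if he does not shave himself"
    return barber_shaves_himself == (not barber_shaves_himself)

consistent = [v for v in (True, False) if rule_holds(v)]
print(consistent)  # -> []  (neither possibility is consistent)
```

The empty list is the whole point: there's no consistent state of affairs at all, which is just what `self-contradictory term' means.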

Being more explicit than you feel necessary helps you to navigate the minefield. It clarifies your own thinking. Also helpful is to get rid of gratuitous variations and replace them by lucid repetitions, maintaining the sometimes tricky discipline of calling the same thing by the same name, as in good control-panel design using repeated control-knob shapes. It's even better to be cautious about choosing which shape, or which name or term, to use. You might even want to define a technical term carefully at its first occurrence, if only because meanings keep changing, even in science.  `I'll use the idea of whatsit in the sense of such-and-such, not to be confused with whatsit in the sense of so-and-so.'  `I'll denote the so-called solar constant by S, remembering that it's actually variable.'  Another example is `the climate sensitivity'. It has multiple meanings, as I'll explain in the Postlude. In his 1959 Reith Lectures, the great biologist Peter Medawar remarks on the `appalling confusion and waste of time' caused by the `innocent belief' that a single word should have a single meaning.32

A fourth `lucidity principle' -- again applying to good visual and technical design as well as to good writing and speaking -- is of course pruning, the elimination of anything superfluous. On your control panel, or web page, or ticket-vending machine, or in your software code and documentation, it's helpful to omit visual and verbal distractions. In writing and speaking, it's helpful to `omit needless words', as Strunk and White puts it.27 You may have noticed lucidity principles in action in the meteoric rise of some businesses. Apple and Google were clear examples. There's a tendency to regard lucidity principles as trade secrets, or proprietary possessions. I recall some expensive litigation by another fast-rising business, Amazon, claiming proprietary ownership of `omit needless clicks'.

Websites, ticket-vending machines, and other user interfaces that, by contrast, violate lucidity principles -- making them `unfriendly' -- are still remarkably common, together with all those unfriendly technical manuals, and financial instruments securitized and unsecuritized. Needless complexity is mixed up with inexplicitness and gratuitous variation. The pre-Google search engines were typical examples. But in case you think this is getting trivial, let me remind you of Three Mile Island Reactor TMI-2, and the nuclear accident for which it became well known in 1979.

You don't need to be a professional psychologist to appreciate the point. Before the nuclear accident, the control panels were like a gratuitously-varied set of traffic lights in which stop is sometimes denoted by red and sometimes by green, and vice versa. Thus, at Three Mile Island, a particular colour on one control panel meant normal functioning while the same colour on another panel meant `malfunction, watch out'.33 Well, the operators got confused and the nuclear accident happened, costing billions of dollars.

As I walk around Cambridge and other parts of the UK, I continually encounter the `postmodernist traffic rules' followed by pedestrians here. Postmodernism says that `anything goes'. So you keep left or keep right just as you fancy. All for the sake of interest and variety. How boring, how pedantic, to keep left all the time. Just like those boring traffic lights where red always means stop. To be fair, the UK Highway Code quite reasonably tells us to face oncoming traffic on narrow country lanes except, of course, on right-hand bends, and on unsegregated pedestrian-plus-cycle tracks where the Code does indeed say, implicitly, that `anything goes'. I always feel a sense of relief when I visit the USA, where everyone keeps right most of the time.

There's a quasi-bureaucratic mindset that seems ignorant, or uncaring, about examples like Three Mile Island. It says `User-friendliness is a luxury we can't afford.' (Yes, afford.) `Go away and read the technical manual. Look, it says on page 342 that red means `stop' on one-way streets, `go' on two-way streets, and `caution' on right-hand bends. And of course it's the other way round on Sundays and public holidays, except for Christmas which is obviously an exception. What could be clearer? Just read it carefully and do exactly what it says,' etc.

With complicated systems like nuclear power plants, or large IT systems, or space-based observation systems -- such as the Earth observation systems created by my scientific colleagues -- there's a combinatorially large number of ways for the system to go wrong even with good design, and even with communication failures kept to a minimum. I'm always amazed when any of these systems work. I'm also amazed at how our governing politicians overlook this point again and again, it seems, when commissioning the large IT systems that they hope will save money.
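The phrase `combinatorially large' can be made concrete with a toy calculation (my own illustration, not a real reliability analysis): if a system has n independent components, each either working or faulty, then the number of distinct system states is 2 to the power n, all but one of which contain at least one fault.

```python
# Toy illustration of combinatorial largeness: n two-state components
# give 2**n distinct system states; only one of them is fault-free.
for n in (10, 100, 1000):
    faulty_states = 2**n - 1
    print(n, faulty_states)
```

Even at n = 100 the count dwarfs the number of atoms in the Earth; no manual, and no amount of testing, can enumerate every way such a system might misbehave.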

What then is lucidity, in the sense I'm talking about? Let me try to draw a few threads together. In the words of an earlier essay,34 which was mostly about writing and speaking, `Lucidity... exploits natural, biologically ancient perceptual sensitivities, such as the sensitivities to organic change and to coherent ordering, which reflect our instinctive, unconscious interest in the living world in which our ancestors survived. Lucidity exploits, for instance, the fact that organically changing patterns contain invariant or repeated elements. Lucid writing and speaking are highly explicit, and where possible use the same word or phrase for the same thing, similar word-patterns for similar or comparable things, and different words, phrases, and word-patterns for different things... Context is built before new points are introduced...'

I also argued that `Lucidity is something that satisfies our unconscious, as well as our conscious, interest in coherence and self-consistency' -- in things that make sense -- and that it's about `making superficial patterns consistent with deeper patterns'. It can be useful to think of our perceptual apparatus as a multi-level pattern recognition system, with many unconscious levels.

To summarize, then, four `lucidity principles' seem especially useful in practice. They amount to saying that skilful communicators and designers give attention (1) to organic change, (2) to explicitness, (3) to coherent ordering, and (4) to pruning superfluous material. The four principles apply not only to writing and speaking but also, for instance, to website and user-interface design and to the safety systems of nuclear power plants, with stakes measured in billions of dollars.

Of course a mastery of lucidity principles can also serve an interest in camouflage and deception, with even higher stakes. Such mastery was conspicuous in the ozone disinformation campaign, as well as in more recent campaigns using weaponized postmodernism.35 It is, and always was, conspicuous in the dichotomizing speeches of demagogues, as in `You're either with us or against us'. That's another case of making superficial patterns consistent with deeper patterns -- deeper patterns of an unpleasant and dangerous kind, now perilously amplified by the social media.23

Enough of that! What of my other question? What is this subtle and elusive thing we call understanding, or insight? What does it mean to think clearly about a problem? Of course there are many answers, depending on one's purpose and viewpoint. As far as science is concerned, however, let me try to counter some of the popular myths. What I've always found in my own research, and have always tried to suggest to my students, is that developing a good scientific understanding of something -- even something in the inanimate physical world, let alone a biological system -- requires looking at it, and testing it, from as many different viewpoints as possible as well as maintaining a healthy scepticism. It's sometimes called `diversity of thought', and because it respects the evidence it's to be sharply distinguished from the postmodernist `anything goes'. It's an important part of the creativity that goes into good science.

For instance, the multi-timescale fluid dynamics I've studied is far too complex to be understandable at all from a single viewpoint, such as the viewpoint provided by a particular set of mathematical equations. One needs equations, words, pictures, and feelings all working together, as far as possible, to form a self-consistent whole with experiments and observations. And the equations themselves take different forms embodying different viewpoints, with technical names such as `variational', `Eulerian', `Lagrangian', and so on. They're mathematically equivalent but, as the great physicist Richard Feynman used to say, `psychologically very different'.  Bringing in words, in a lucid way, is a critically important part of the whole but needs to be related to, and made consistent with, equations, pictures, and feelings.

Such multi-modal thinking and healthy scepticism have been the only ways I've known of escaping from the mindsets or unconscious assumptions that tend to entrap us, and of avoiding dichotomization in particular. The history of science shows that escaping from mindsets has always been a key part of progress, as already remarked.4 And an important aid to cultivating a multi-modal view of any scientific problem is the habit of performing what Albert Einstein famously called thought-experiments, and mentally viewing those from as many angles as possible.

Einstein certainly talked about feeling things, in one's imagination -- forces, motion, colliding particles, light waves -- and was always doing thought-experiments, `what-if experiments' if you prefer. The same thread runs through the testimonies of Feynman and of other great scientists, such as Peter Medawar and Jacques Monod. It all goes back to juvenile play, that deadly serious rehearsal for real life -- young children pushing and pulling things (and people!) to see, and feel, how they work. And now Bayesian causality theory21 has given us a mathematical framework for dealing more systematically with thought-experiments, as well as with real experiments.

In my own research community I've often noticed colleagues having futile arguments about `the' cause of some observed phenomenon. `It's driven by such-and-such', says one. `No, it's driven by so-and-so', says another. Sometimes the argument gets quite acrimonious. Often, though, they're at cross-purposes because, perhaps unconsciously, they have two different thought-experiments in mind.

And notice how the verb `to drive' illustrates what I mean by language as a conceptual minefield. `Drive' sounds incisive and clearcut, but is nonetheless dangerously ambiguous. I sometimes think that our word-processors should make it flash red for danger, as soon as it's typed, along with some other dangerously ambiguous words such as the pronoun `this'.

Quite often, the verb `to drive' is used when a better verb would be `to mediate', as often used in the biological literature to signify an important part of some mechanism. By contrast, `to drive' can mean `to control', as when driving a car. That's like controlling an audio amplifier via its input signal. `To drive' can also mean `to supply the energy needed' via the fuel tank or the amplifier's power supply. Well, there are two quite different thought-experiments here, on the amplifier let's say. One is to change the input signal. The other is to pull the power plug. A viewpoint that focused on the power plug alone might miss important aspects of the problem.
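The two thought-experiments can be made concrete with a minimal sketch (my own idealized model, with a made-up gain figure): an amplifier whose output depends both on the input signal and on whether the power supply is connected.

```python
# Idealized amplifier: the output follows the input signal (the
# `control'), but only while the power supply (the `energy source')
# is connected. Gain of 100 is an arbitrary illustrative figure.
def amplifier(input_signal, power_on=True, gain=100.0):
    return gain * input_signal if power_on else 0.0

# Thought-experiment 1: change the input signal -- the output follows.
print(amplifier(0.5), amplifier(1.0))        # -> 50.0 100.0
# Thought-experiment 2: pull the power plug -- the output dies,
# no matter what the input signal is doing.
print(amplifier(1.0, power_on=False))        # -> 0.0
```

Both experiments `drive' the output in some sense, yet they probe entirely different aspects of the system -- which is why arguments about `the' cause so often go round in circles.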

You may laugh, but there's been a mindset in my research community that has, or used to have, precisely such a focus. It said that the way to understand our atmosphere and oceans is through their intricate `energy budgets', disregarding questions of what they're sensitive to. Yes, energy budgets are interesting and important, but no, they're not the Answer to Everything. Energy budgets focus attention on the power supply and tempt us to ignore the input signal.

The topic of mindsets has been illuminated not only through the work of Kahneman and Tversky but also through, for instance, that of Iain McGilchrist and Vilayanur Ramachandran. They bring in the workings of the brain's left and right hemispheres. That's a point to which I'll return in chapter 3. In brief, the left hemisphere typically has great analytical power but is more prone to mindsets.36,37 And the sort of scientific understanding I'm talking about -- in-depth understanding if you will -- seems to involve an intricate collaboration between the two hemispheres, with each playing to its own very different strengths. If that collaboration is disrupted, extreme forms of mindset can result.

Clinical neurologists are familiar with a condition called anosognosia. Damage to the right hemisphere paralyses, for instance, a patient's left arm, yet the patient vehemently denies that the arm is paralysed -- a sort of unconscious wilful blindness, if I may use another self-contradictory term. The patient will make up all sorts of excuses as to why he or she doesn't fancy moving the arm when asked.

Back in the 1920s, the great physicist Max Born was immersed in the mind-blowing experience of developing quantum theory. Born once commented that engagement with science and its healthy scepticism can give us an escape route from mindsets. With the more dangerous kinds of zealotry or fundamentalism in mind, he wrote38

I believe that ideas such as absolute certitude, absolute exactness, final truth, etc., are figments of the imagination which should not be admissible in any field of science... This loosening of thinking (Lockerung des Denkens) seems to me to be the greatest blessing which modern science has given to us. For the belief in a single truth and in being the possessor thereof is the root cause of all evil in the world.

Further wisdom on these topics is recorded in the classic study of cults by Conway and Siegelman,39 echoing religious wars across the centuries, as well as today's polarizations and their amplification by the social media.23 Time will tell, perhaps, how the dangers from the fundamentalist religions compare with those from the fundamentalist atheisms. Among today's fundamentalist atheisms we have not only Science-Is-the-Answer-to-Everything-And-Religion-Must-Be-Destroyed -- provoking a needless backlash against science, sometimes violent -- but also free-market fundamentalism, in some ways the most dangerous of all because of its vast financial resources. I don't mean Adam Smith's idea that market forces are useful, in symbiosis with suitable regulation, written or unwritten. Smith was clear about the need for regulation.25,40 I don't mean the business entrepreneurship that can provide us with useful goods and services. By free-market fundamentalism I mean a hypercredulous belief, a taking-for-granted, an incoherent mindset that Spencer was right and that personal greed is in itself the Answer to Everything and the Only Way Forward -- regardless of evidence like the 2008 financial crash. Surprisingly, to me at least, free-market fundamentalism takes quasi-Christian as well as atheist forms.41

Common to all forms of fundamentalism, or puritanism, or extremism, is that they forbid the loosening of thinking that allows freedom to view things from more than one angle. The 2008 financial crash seems to have made only a small dent in free-market fundamentalism, so far, though perhaps reducing the numbers of its adherents. Perhaps COVID-19 will make a bigger dent. It's too early to say. And what's called `science versus religion' is not, it seems to me, about scientific insight versus religious, or spiritual, insight. Rather, it's about scientific fundamentalism versus religious fundamentalism, which of course are irreconcilable.

Such futile dichotomizations cry out for more loosening of thinking. How can such loosening work? As Ramachandran or McGilchrist might say, it's almost as if the right hemisphere nudges the left with a wordless message to the effect that `You might be sure, but I smell a rat: could you, just possibly, be missing something?' It's well known that in 1983 a Russian officer, Stanislav Petrov, saved us from likely nuclear war. At great personal cost, he disobeyed standing orders when a malfunctioning weapons system said `nuclear attack imminent'. We had a narrow escape. It was probably thanks to Petrov's right hemisphere. There have been other such escapes.

Chapter 2: Mindsets, evolution, and language

Let's fast-rewind to a few million years ago, and further consider our ancestors' evolution. Where did we, our insights, and our mindsets come from? And how on Earth did we acquire our language ability -- that vast conceptual minefield -- so powerful, so versatile, yet so weak on logic-checking? These questions are more than just tantalizing. Clearly they're germane to past and current conflicts, and to future risks including existential risks.

The first obstacle to understanding is what I'll dare -- following the suggestion of a friend of mine, the late, great John Sulston -- to call simplistic evolutionary theory. The theory is still deeply entrenched in popular culture. Many biologists would now agree with John that it's no more than a caricature. But it's a remarkably persistent caricature. It includes Spencer's idea that competition between individuals is all there is.

More precisely, simplistic evolutionary theory says that evolution has just three aspects. First, the structure of an organism is governed entirely by its genome, acting as a deterministic `blueprint' made of all-powerful `selfish genes'. Second, contrary to what Charles Darwin thought,18 natural selection is the only significant evolutionary force. And third, natural selection works solely through `survival of the fittest'.

Survival of the fittest would be a reasonable proposition were it not that an oversimplified notion of fitness is used. Following Spencer, fitness is presumed to apply solely to individual organisms. And it's presumed to mean nothing more than the individual's ability to pass on its genes, by competing with other individuals to produce more offspring. Admittedly this purely competitive, purely individualistic view does explain much of what happens in our astonishing biosphere. But it also misses many important points. It is not the evolutionary Answer to Everything.

There's a slightly more sophisticated view called `inclusive fitness' or `kin selection', which replaces individuals by families whose members share enough genes to count as closely related. But it misses the same points.

For one thing, as Charles Darwin recognized, our species and other social species, such as baboons, could not have survived without cooperation within large groups. Without such cooperation, alongside competition, our ground-dwelling ancestors would have been easy meals for the large, swift predators all around them, including the big cats -- gobbled up in no time at all! Cooperation restricted to closely related individuals would not have been enough to survive those dangers. And Darwin gives clear examples in which cooperation within large non-human groups is, in fact, observed to take place, as for instance with the geladas and the hamadryas baboons of Ethiopia.1

Even bacteria cooperate. That's well known. One way they do it is by sharing packages of genetic information called plasmids or DNA cassettes. A plasmid might for instance contain information on how to survive antibiotic attack. Don't get me wrong. I'm not saying that bacteria `think' like us, or like baboons or dolphins or other social mammals, or like social insects. And I'm not saying that bacteria never compete. They often do. But for instance it's a hard fact -- a practical certainty, and now an urgent problem in medicine -- that vast numbers of individual bacteria cooperate among themselves to develop resistance to antibiotics. Yes, selective pressures are at work, but at group level as well as at individual and kin level, and at cellular and molecular level,18 in heterogeneous populations living in heterogeneous, and ever-changing, ecological environments.

So it's plain that natural selection operates at very many levels in the biosphere, and that cooperation is widespread alongside competition. Indeed the word `symbiosis' in its standard meaning denotes a variety of intimate, and well studied, forms of cooperation not between individuals of one species but between those of entirely different species. Different species of bacteria share plasmids.42 The trouble is the sheer complexity of it all -- again a matter of combinatorial largeness as well as of population heterogeneity, and of the complexities of mutual fitness in and around various ecological niches. We're far from having comprehensive mathematical models of how it all works, despite recent breakthroughs at molecular level.18 However, the models have made great progress over the past decade or so, aided by the power of today's computers, and the detailed evidence has been accumulating for what's now called multi-level selection.43-55

The persistence of simplistic evolutionary theory, oblivious to all these considerations, seems to be bound up with a particular pair of mindsets. The first is that the genes' eye view -- or, more fundamentally, the replicators' or DNA's eye view -- gives us the only useful angle from which to view evolution. The second is that selective pressures are strictly Spencerian, operating at the level of individual organisms only. The first mindset misses the value of viewing the problem from more than one angle. The second misses most of the real complexity. And that complexity includes the intense group-level selective pressures on our ancestors noted by Monod,2 and by palaeoanthropologists such as Tobias,3 Robin Dunbar,46 and Matt Rossano48 to name but a few.

Both mindsets, along with the idea of kin selection, seem to have come from mathematical models that are grossly oversimplified by today's standards -- valuable though they were in their time. They are the old population-genetics models that were first formulated in the early twentieth century and then further developed in the 1960s and 1970s, notably by the great biologists William D. Hamilton and John Maynard Smith among others. For the sake of mathematical simplicity and computability these models exclude, by assumption, all the aforementioned complexities as well as multi-timescale processes and, in particular, realistic group-level selection scenarios. And the hypothetical `genes' in those models are themselves grossly oversimplified. They correspond to the popular idea of a `gene for' this or that trait -- nothing like actual genes, the protein-coding templates within the genomic DNA. (Very many actual genes are involved, usually, in the development of a recognizable trait, along with non-coding parts of the DNA and the associated regulatory networks.7,18)

The mindset against group-level selection, built into the old models and still strongly held in some circles,12,56,57 seems in part to have been a reaction against the sloppiness of some old arguments for such selection, for instance ignoring the complex game-theoretic aspects such as multi-agent `reciprocal altruism', and conflating altruism as an anthropomorphic sentiment `for the good of the group' with altruism as actual behaviour, including its deeply unconscious aspects.9-11,43 Among the arguments against that sloppiness one can find phrases such as `mathematical proofs from population genetics' (my italics);56,57 but such `proofs' rely on the equations of the old models. More recent mathematical models do, by contrast, incorporate group-level selection and identify examples in which it is effective.52,54

On the multi-timescale aspects there's some fascinating history. Despite these aspects having been pointed out back in 1971, both by Monod and by Tobias, and noted again in some recent reviews,47,51,53 it's hard to find mention of them elsewhere in the literature on genome-culture co-evolution. They may have been lost in the turmoil of other disputes about evolutionary dynamics. Some of those disputes are described in the remarkable book by Christopher Wills,44 and many more of them in the vast sociological study by Ullica Segerstråle.58

Two of the most acrimonious disputes, going back to the 1970s, are germane to our discussion. The first surrounded a famous attempt by Edward O. Wilson and Charles J. Lumsden to model genome-culture co-evolution without multiple timescales. Instead, a mysterious `multiplier effect' was proposed, which drastically shortens genomic timescales. Then genome and culture can co-evolve in a simple, quasi-synchronous way. Such scenarios are now recognized as unrealistic. They're inconsistent with the genomic and other evidence, including evidence from linguistics, and are no longer taken seriously.

The second was a dispute about adaptive mutations -- those that give an organism an immediate advantage -- versus neutral mutations, which have no immediate effect and are therefore invisible to natural selection. Dichotomization kicked in straight away, with protagonists assuming one thing to the exclusion of the other.44 But we now know that both are important. For instance the breakthrough described by Andreas Wagner18 shows in detail at molecular level why both kinds of mutation are practically speaking inevitable, and how neutral mutations generate genetic diversity. Genetic diversity is required to let natural selection work in ever-changing environments. What started as neutral can become adaptive. I'll argue in chapter 3 that brain-hemisphere differentiation is a likely example.

And on the palaeoanthropological side there's been a tendency to see the technology of stone tools as the only important aspect of `culture', giving an impression that culture stood still for a million years or more, just because the stone tools didn't change very much. But it's worth stressing again, I think, that tool shapes and an absence of beads and bracelets, and of other such archaeological durables, are no evidence at all for an absence, or a standing-still, of culture and of language or proto-language, whether gestural, or vocal, or both. It hardly needs saying, but apparently does need saying, again, that language and culture can be mediated purely by sound waves and light waves, leaving no archaeological trace.

And what of that other feature of simplistic evolutionary theory, the genome seen as a blueprint dictating how an organism develops? Well, that's been thoroughly superseded, and discredited, by detailed studies of the workings of the genome that recognize the systems-biological aspects.7,18,57 Multiple layers of molecular-biological complexity -- of regulatory networks -- overlie the genome. Causal arrows point downward as well as upward. Far from dictating everything, genes are switched on and off according to need. That's why brain cells differ from liver cells. The genomic DNA is influential but not dictatorial. It would be more accurate to say that the DNA provides a kind of toolkit for building an organism. Or better still, the DNA and its surrounding regulatory networks, or allosteric circuits, provide a set of self-assembling building blocks available for use according to developmental and environmental need. Echoing the language of complexity theory touched on in the Prelude, I'll call these building blocks genetically-enabled automata.

How, then, might group-level selection have worked for our ancestors, over the past million years or more? I want to avoid good-of-the-group sloppiness but I think it reasonable, following Monod, Tobias, Dunbar, and Rossano -- and for instance an illuminating survey by Leslie Aiello59 -- to suggest that not only language but also the gradually-developing ability to create what eventually became music, dance, poetry, rhetoric, and storytelling, held in the memories of gifted individuals and in a group's collective, cultural memory -- and leaving no archaeological trace -- would have produced selective pressures for further brain development and for generations of individuals still more gifted. Not only the smartest hunters, fighters, tacticians, and social manipulators but also, in due course, the best singers and storytellers -- or more accurately, perhaps, the best singer-storytellers -- would have had the best mating opportunities. Intimately part of all this would have been the kind of social intelligence we call `theory of mind' -- the ability to feel what others are thinking and, in due course, what others think I'm thinking, and so on.46 Those developments would have built group solidarity and with it the power of a group to ward off predators, to compete against other groups for resources, and at some stage to become the top predators -- as groups, though, and not as individuals.

We need not argue about the timescales and the precise stages of development except to say that their beginnings must have gone back to times far more ancient than the past tens of millennia that we call the Upper Palaeolithic, with its variety of archaeological durables such as beads, bracelets, statuettes, cave paintings and other beautiful art objects. Such archaeological evidence is sometimes taken to mark the beginning of language.12,13 However, much longer timescales are dictated by the rates at which genomic changes occur. That's shown by the workings of what are called molecular-genetic clocks.44 In particular, there had to be enough time to allow the self-assembling building blocks for language to seed themselves within genetic memory -- the genetically-enabled automata for language -- from rudimentary beginnings or proto-language as echoed, perhaps, in the speech and gestural signing of a two-year-old human today. And the existence of such automata has been conclusively demonstrated by recent events in Nicaragua, of which we'll be reminded shortly. The proposition that language is purely cultural12,13 and began in, or shortly before, the Upper Palaeolithic is now completely untenable.
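For readers who like to see the arithmetic, the logic of a molecular-genetic clock can be sketched as a toy calculation. The substitution rate and divergence figures below are round, illustrative assumptions chosen for simplicity, not measurements from the literature:

```python
# Toy molecular-clock estimate: time since two lineages diverged,
# inferred from how many DNA sites now differ between them.
# Both numbers below are illustrative assumptions, not data.

SUBSTITUTION_RATE = 1.0e-9   # neutral substitutions per site per year (assumed)
OBSERVED_DIVERGENCE = 0.004  # fraction of sites differing between lineages (assumed)

# Substitutions accumulate independently along BOTH lineages since the
# common ancestor, so the total divergence grows at twice the per-lineage
# rate -- hence the factor of 2 in the denominator.
divergence_time_years = OBSERVED_DIVERGENCE / (2 * SUBSTITUTION_RATE)

print(f"Estimated time since common ancestor: "
      f"{divergence_time_years / 1e6:.1f} million years")
```

The point of the sketch is only that, for realistic substitution rates, even modest genomic differences imply timespans of order a million years -- far longer than the few tens of millennia of the Upper Palaeolithic.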

So it's reasonable to suggest that the earliest stages of all this, as far back as one or two million years ago, would have seen proto-language barriers beginning to be increasingly significant, gradually sharpening the separation of one group from another. The groups, regarded as evolutionary `survival vehicles', would have been developing increasingly tight outer boundaries. Such boundaries would have enhanced the efficiency of those vehicles as carriers of replicators into future generations within each group. And, contrary to what is sometimes argued,12 the replicators would have been genomic as well as cultural. Little by little, this channelling of both kinds of replicator within groups down the generations must have increasingly strengthened the feedback -- the multi-timescale, two-way coupling -- between genomes and cultures. This would have intensified the selective pressures exerted at group-versus-group level, just as Monod argued.2 The end result of this intensification would have been the runaway brain evolution, as Wills44 called it, corresponding to the left-hand extremity of Figure 2:

Human and pre-human brain sizes over the past 3,500 millennia

Figure 2: Skull capacities of some of our ancestors and their close relatives, from the fossil record. The figure is taken from ref. 59 with kind permission. Time runs from right to left. Pluses denote our early australopithecine ancestors or near-ancestors -- Monod2 calls them australanthropes. Asterisks denote early Homo (H. habilis, H. rudolfensis), black squares H. erectus including H. ergaster, and open squares archaic and modern H. sapiens.

The final, spectacular acceleration in brain-size increase seen in Figure 2 must have been associated with cultures and languages of ever-increasing complexity, setting the stage around 100 millennia ago for the Upper Palaeolithic with its explosion of imaginative artwork, suddenly visible in the archaeological record. Although that explosion cannot have marked the invention of language, as such, it could well have marked important developments in the use of language and in associated cognitive abilities including the theory of mind. Paramount among those abilities would have been mythmaking -- storytelling about imagined worlds -- opening up radical possibilities for versatile cooperative behaviour within much larger, hence more powerful, groups than ever before.48,69

Looking further into the past, Aiello59 pictures the whole sequence of events echoed in Figure 2 as having come from a long-drawn-out `evolutionary arms race' between competing groups or tribes. Such an arms race would have put a premium on increasing group size46 and on proto-language-mediated solidarity and coordination within a group. To be sure, any inter-group or inter-tribal warfare that took place could have diluted the within-group channelling of genomic information, especially in the late stages through, for instance, the enslavement of enemy females. But then again, that's a further evolutionary advantage. It's a one-way gene flow into the biggest, strongest, smartest groups or tribes, adding to their genetic diversity and adaptive potential as soon as it spills across internal caste boundaries and slave boundaries.

And all this is, of course, completely invisible to simplistic evolutionary theory. More realistic views of evolution can be found in the recent research literature,43-55 sharpening our understanding of the multifarious dynamical mechanisms and feedbacks that can come into play. Some of those mechanisms have been discussed under headings such as `evo-devo' (evolutionary developmental biology) and the `extended evolutionary synthesis', which includes new mechanisms of `epigenetic heritability' that operate outside the DNA sequences themselves -- all of which says that genetic memory and genetically-enabled automata are even more versatile and flexible than previously thought.

Some researchers today question the very use of the words `genetic' and `genetic memory' in this context. However, I think the words remain useful as a pointer to the important contribution from the DNA-mediated information flow, alongside many other forms of information flow into future generations including flows via culture and `niche construction'. And the idea of genetically-enabled automata seems to me so important -- not least as an antidote to the simplistic genetic-blueprint idea -- that I propose to use it without further apology.

The Nicaraguan evidence shows that human children are born with an innate potential for language, as we'll see shortly. What was observed is impossible to explain without genetically-enabled automata. Dear reader, please note that this is quite different from saying that language is innately `hard-wired' or `blueprinted'. The automata -- the building blocks -- are not the same thing as the assembled product, assembled of course in a particular environment, physical and cultural, from versatile and flexible components. Recognition of this distinction between building blocks and assembled product might even, I dare to hope, get us away from the silly quarrels about `all in the genes' versus `all down to culture'.

(Yes, language is in the genes and regulatory DNA and culturally constructed, where of course we must understand the construction as being largely unconscious, as great artists, great writers, and great scientists have always recognized -- consciously or unconsciously. And there's no conflict with the many painstaking studies of comparative linguistics, showing the likely pathways and relatively short timescales for the cultural ancestry of today's languages. Particular linguistic patterns, such as Indo-European, are one thing, while the innate potential for language is another.)

But first, what about those multi-timescale aspects? How on Earth can genome, language and culture co-evolve, and interact dynamically, when their timescales are so very, very different? And above all, how can the latest cultural whim or flash in the pan influence so slow a process as genomic evolution? Isn't the comparison with air pressure far and away too simplistic?

Well, as already mentioned there are many other examples of multi-timescale processes in the natural world. Many are far more complex than air pressure, even when falling short of biological complexity. They can be even more extreme in their range of timescales. The ozone hole is one such example that I happen to know about in detail. One might equally well ask, how can the very fast and very slow processes involved in the ozone hole have any significant interplay? How can the seemingly random turbulence that makes us fasten our seat belts have any role in a stratospheric phenomenon on a spatial scale bigger than the Antarctic continent and dependent on global-scale air motion, over timescales out to a century or more?

As I was forced to recognize in my own research there is, however, a significant and systematic interplay between atmospheric turbulence and the ozone hole. That interplay is now well understood. Among other things it involves a sort of fluid-dynamical jigsaw puzzle made up of waves and turbulence. Despite differences of detail, and greater complexity, it's a bit like what happens in the surf zone near an ocean beach. There, tiny, fleeting eddies within the foamy turbulent wavecrests not only modify, but are also shaped by, the wave dynamics in an intimate interplay that, in turn, generates and interacts with mean currents, including rip currents, and with sand and sediment transport over far, far longer timescales.

The ozone hole is even more complex, and involves two very different kinds of turbulence. One is the familiar small-scale, seat-belt-fastening turbulence, on timescales of seconds. The other is a much slower, larger-scale phenomenon involving a chaotic interplay between jetstreams, cyclones, and anticyclones. Several kinds of waves are involved, including jetstream meanders of the kind familiar from weather forecasts. (The stratospheric counterparts of such meanders can develop into the `world's largest breaking waves'.) And interwoven with all that fluid-dynamical complexity we have regions with different chemical compositions, and an interplay between the transport of chemicals, on the one hand, and a large set of fast and slow chemical reactions on the other. The chemistry interacts with solar and terrestrial radiation, from the ultraviolet to the infrared, over a vast range of timescales from thousand-trillionths of a second as photons hit air molecules out to days, weeks, months, years, and longer as chemicals are moved around by the global-scale air motions. The key point about all this, though, is that what looks like a panoply of chaotic, flash-in-the-pan, fleeting and almost random processes on the shorter timescales has systematic mean effects over far, far longer timescales. The observed ozone hole is just one of those mean effects, among many others.
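The key point -- fast, seemingly random fluctuations producing systematic effects on far slower timescales -- can be sketched with a toy two-timescale system. This is an illustrative cartoon only, not an atmospheric model; the logistic map, the coupling strength, and the offset 0.4 are all arbitrary choices made for the demonstration:

```python
# Toy two-timescale system (illustrative cartoon, not an atmospheric model):
# a fast chaotic variable x, updated every step by the logistic map, drives
# a slow variable S through a very weak coupling. Step by step the forcing
# looks erratic and changes sign, yet its nonzero time-average pushes S
# systematically in one direction -- a 'mean effect' of fast chaos.

x = 0.2          # fast chaotic state
S = 0.0          # slow accumulator
EPS = 1e-4       # weak coupling: the slow timescale is ~1/EPS fast steps

negative_steps = 0
for _ in range(200_000):
    x = 3.9 * x * (1.0 - x)       # chaotic logistic map on (0, 1)
    increment = EPS * (x - 0.4)   # fast forcing, fluctuating in sign
    if increment < 0:
        negative_steps += 1
    S += increment

print(f"slow variable after 200,000 fast steps: S = {S:.2f}")
print(f"fraction of negative increments: {negative_steps / 200_000:.2f}")
```

A substantial fraction of the individual increments are negative, yet S drifts steadily upward, because the chaotic fluctuations have a small systematic bias. In crude caricature, that is how fleeting turbulent eddies can shape something as large and slow as the ozone hole.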

In a similar way, then, our latest cultural whims and catch-phrases may seem capricious and fleeting and sometimes almost random -- almost a `cultural turbulence' -- while nevertheless exerting long-term selective pressures that, as already suggested, systematically favour the talents of gifted and versatile individuals who can grasp, exploit, build on, and reshape traditions and zeitgeists in the arts of communication, storytelling, imagery, politics, technology, music, dance, and comedy, with storytelling the most basic and powerful of these arts. As Plato is often said to have put it, `Those who tell the stories rule society.' The feeling that it's `all down to culture' surely reflects the near-impossibility of imagining the vast overall timespans, out to millions of years, over which the genetically-enabled automata that mediate language and culture must have evolved under those turbulent selective pressures, all the way from rudimentary beginnings. And as already said the existence of genetically-enabled automata for language has been verified, conclusively, by recent events in Nicaragua.

Starting in the late 1970s, Nicaragua saw the creation of a new Deaf community and a fully-fledged, syntactically powerful new sign language, Nicaraguan Sign Language (NSL). Beyond superficial borrowings, NSL is considered by sign-language experts to be entirely distinct from any pre-existing sign language, such as American Sign Language or British Sign Language. It's clear moreover that NSL was created by, or emerged from, communities of schoolchildren with essentially no external linguistic input.

Nicaragua had no Deaf communities before the late 1970s, a time of drastic political change. It was then that dozens, then hundreds, of deaf children first came into social contact, through a new educational programme that included schools for the deaf. Today, full NSL fluency at native-speaker level -- or rather native-signer level -- is found in just one group of people: those, and only those, who were young deaf children in the late 1970s or thereafter. That's a simple fact on the ground. It's therefore indisputable that NSL was somehow created by the children.

The evidence60 clearly shows that the youngest children aged about 7 or less played a crucial role in creating NSL, with no linguistic input from their Spanish-speaking surroundings. NSL has no substantial resemblance to Spanish. A key aspect must have been a small child's unconscious urge to impose syntactic function and syntactic regularity on whatever language is being acquired or created. After all, it's commonly observed that a small child learning English will say things like `I keeped mouses in a box' instead of `I kept mice in a box'. It's the syntactic irregularities that need to be taught by older people, not the syntactic function itself.

This last point was made long ago by Noam Chomsky among others. But the way it fits in with natural selection was unclear at the time. We didn't then have today's insights into multi-level selection, multi-timescale genome-culture feedback, and genetically-enabled automata.

Regarding the detailed evidence from Nicaragua, the extensive account in ref. 60 is a landmark. It describes careful and systematic studies using video and transcription techniques developed by sign-language experts. Those studies brought to light, for instance, what are called the pidgin and creole stages in the collective creation of NSL by, respectively, the older and the younger children, with full syntactic functionality arising at the creole stage only, coming from children aged 7 or less. Pinker61 gives an excellent popular account. More recent work62 shows how the repertoire of syntactic functions in NSL is being filled out, and increasingly standardized, by successive generations of young signers.

And what of the changing climate that our ancestors had to cope with? Over the timespan of Figure 2, the climate system underwent increasingly large fluctuations some of which were very sudden, as will be illustrated shortly, drastically affecting our ancestors' living conditions and food supplies. In the later stages, culminating in runaway brain evolution and global-scale migration, the increasing climate fluctuations would have increased still further the pressure to develop social, cultural and linguistic skills and versatility, central among which would have been skills conducive to tribal solidarity. Those skills would have included the ability to create ever more elaborate rituals, belief systems, songs, and stories passed from generation to generation.

And what stories they must have been!  Great sagas etched into a tribe's collective memory. It can hardly be accidental that the sagas known today tell of years of famine and years of plenty, of battles, of epic journeys, of great floods, and of terrifying deities that are both fickle benefactors and devouring monsters63 -- just as the surrounding large predators must have appeared to our ancestors, scavenging on leftover carcasses before becoming top predators themselves. And epic journeys and great floods must have been increasingly part of our ancestors' struggle to survive, as they migrated under the increasingly changeable climatic conditions.

Figure 3 is a palaeoclimatic record giving a coarse-grain overview of climate variability going back 800 millennia, roughly corresponding to the leftmost quarter of Figure 2. Again, time runs from right to left:

Overview of the past 800 millennia, from Lüthi et al

Figure 3: Antarctic ice-core data from ref. 64 showing estimated temperature (upper graph) and measured atmospheric carbon dioxide (lower graph). Time, in millennia, runs from right to left up to the present day. The significance of the lower graph is discussed in the Postlude. The upper graph estimates air temperature changes over Antarctica, indicative of worldwide changes. The temperature changes are estimated from the amount of deuterium (hydrogen-2 isotope) in the ice, which can be reliably measured and is temperature-sensitive because of fractionation effects as water evaporates, transpires, precipitates, and redistributes itself between oceans, atmosphere, and ice sheets. The shaded bar corresponds to the relatively short time interval covered in Figure 4 below. The `MIS' numbers denote the `marine isotope stages' whose signatures are recognized in many deep-ocean mud cores, and `T' means `termination' or `major deglaciation'. The thin vertical line at around 70 millennia marks the time of the Lake Toba supervolcanic eruption.

The upper graph shows an estimate of temperature changes. The label `Holocene' marks the relatively warm, stable climate of the past ten millennia or so. The temperature changes are estimated from a reliable record in Antarctic ice (see figure caption) and are indicative of worldwide, not just Antarctic, temperature changes. There are questions of detail and precise magnitudes, but little doubt as to the order of magnitude of the estimated changes. The changes were huge, especially during the past four hundred millennia, with peak-to-peak excursions of the order of ten degrees Celsius or more. A good cross-check is that the associated global mean sea-level excursions were also huge, up and down by well over a hundred metres, as temperatures went up and down and the great land-based ice sheets shrank and expanded. There are two clear and independent lines of evidence on sea levels, further discussed in the Postlude below. Also discussed there is the significance of the lower graph, which shows concentrations of carbon dioxide in the atmosphere. Carbon dioxide as a gas is extremely stable chemically, allowing it to be reliably measured from the air trapped in Antarctic ice. The extremes of cold, of warmth, and of sea levels mark what are called the geologically recent glacial-interglacial cycles.

When we zoom in to much shorter timescales, we see that some climate changes were not only severe but also abrupt, over time intervals comparable to, or even shorter than, an individual human lifetime. We know this thanks to patient and meticulous work on the records in ice cores and oceanic mud cores and in many other palaeoclimatic records.65,66 The sheer skill and hard labour of fine-sampling, assaying, and carefully decoding such material to increase the time resolution, and to cross-check the interpretation, is a remarkable story of high scientific endeavour.

Not only were there occasional nuclear-winter-like events from volcanic eruptions, including the Lake Toba supervolcanic eruption around 70 millennia ago (thin vertical line in Figure 3) -- a far more massive eruption than any in recorded history -- but there was large-amplitude internal variability within the climate system itself. Even without volcanoes the system has so-called chaotic dynamics, with scope for rapid changes in, for instance, sea-ice cover and in the meanderings of the great atmospheric jetstreams and their oceanic cousins, such as the Gulf Stream and the Kuroshio and Agulhas currents.

The chaotic dynamics sometimes produced sudden and drastic climate change over time intervals as short as a few years or even less -- practically instantaneous by geological, palaeoclimatic and evolutionary standards. Such events are called `tipping points' of the dynamics. Much of the drastic variability now known -- in its finest detail for the last fifty millennia or so -- takes the form of complex and irregular `Dansgaard-Oeschger cycles'. They're conspicuous in the records over much of the northern hemisphere. Their timescales range from millennia down to years or less.
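The flavour of such tipping behaviour can be captured in a toy stochastic model -- again an illustrative cartoon, not a climate model, with the potential, the noise strength, and the thresholds all chosen arbitrarily for the demonstration:

```python
import math
import random

# Toy 'tipping point' demonstration (an illustrative cartoon, not a climate
# model): a state variable T wanders in a double-well potential
# V(T) = T**4/4 - T**2/2, which has stable equilibria at T = -1 ('cold')
# and T = +1 ('warm'), nudged by weak random forcing. T lingers near one
# equilibrium for a long stretch and then flips to the other almost
# instantly: abrupt change without any abrupt external cause.

random.seed(42)
DT = 0.01                 # time step
NOISE = 0.4               # strength of the fast random forcing (assumed)
T = -1.0                  # start in the 'cold' well
state, flips = -1, 0      # which well we're in, and how often we've jumped

for _ in range(400_000):  # 4000 time units in all
    drift = T - T**3      # = -dV/dT, pulls T toward the nearest well
    T += drift * DT + NOISE * math.sqrt(DT) * random.gauss(0.0, 1.0)
    if state == -1 and T > 0.5:    # clear arrival in the 'warm' well
        state, flips = 1, flips + 1
    elif state == 1 and T < -0.5:  # clear arrival in the 'cold' well
        state, flips = -1, flips + 1

print(f"abrupt well-to-well transitions seen: {flips}")
```

The residence times in each well are long and irregular, while the transitions themselves are fast -- a crude caricature of the abrupt warmings in the Dansgaard-Oeschger record.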

Figure 4 expands the time interval marked by the shaded bar near the left-hand edge of Figure 3. Again, time runs from right to left, from about 42 to 25 millennia ago:

Dansgaard-Oeschger events 3-10, from Dokken et al          

Figure 4: Greenland ice-core data from Dokken et al.,67 for the time interval corresponding to the shaded bar in Figure 3. Time in millennia runs from right to left. The graph shows variations in the amount of the oxygen-18 isotope in the ice, from which temperature changes can be estimated in much the same way as in Figure 3. The abrupt warmings marked by the thin vertical lines are mostly of the order of 10°C or more. The thicker vertical lines show timing checks from layers of tephra or volcanic debris. The shaded areas refer to geomagnetic excursions.

The graph is a record from Greenland ice with enough time resolution to show details for some of the Dansgaard-Oeschger cycles, those conventionally numbered from 3 to 10. The graph estimates air temperature changes over Greenland (see caption). The thin vertical lines mark the times of major warming events in the North Atlantic region, which by convention define the end of one cycle and the start of the next. Those regional warmings were huge, typically of the order of ten degrees Celsius, in round numbers, as well as very abrupt. Indeed they were far more abrupt than the graph can show. In some cases they've been shown to take only a few years or less.65,67

Between the major warming events we see an incessant variability at more modest amplitudes -- more like a few degrees Celsius -- nevertheless more than enough to have affected our ancestors' food supplies and living conditions.

To survive all this, our ancestors must have had strong leaders and willing followers. The stronger and the more willing, the better the chance of surviving hardship, migration, and warfare. Hypercredulity and weak logic-checking must have become important. They must have been strongly selected for, as genome and culture co-evolved and as language gradually became more and more sophisticated, and more and more fluent and imaginative with stories of the natural and the supernatural.

How do you make leadership work? Do you make a reasoned case? Do you ask your followers to check your logic? Do you check it yourself? Of course not! You're a leader because, with your people starving, or faced with a hostile tribe speaking a different language, you've emerged as a charismatic visionary. You're divinely inspired. You know you're right, and it doesn't need checking! `O my people, I've been shown the True Path that we must follow. Come with me! Let's take back control! Let's make our tribe great again! Beyond those mountains, over that horizon, that's where we'll win through to find our Promised Land. It is our destiny to find that Land and overcome all enemies because we, and only we, are the True Believers. Our stories are the only true stories.'  How else, in the incessantly-fluctuating climate, I ask again, did our one species -- our single human genome -- spread all around the globe during the past hundred millennia?

And what of dichotomization -- that ever-present, ever-potent source of unconscious assumptions? Well, it's even more ancient, isn't it. Hundreds of millions of years more ancient. Ever since the Cambrian, half a billion years ago, individuals have teetered on the brink of fight or flight, edible or inedible, male or female, friend or foe. Dichotomization was key to survival.

But with language and hypercredulity in place, the dichotomization instinct -- deep in the most ancient, the most primitive, the most reptilian parts of our brains -- could take the perilous new forms we still see today. Not just fight or flight but also We are right and they are wrong.  It's the Absolute Truth of our tribe's belief system versus the absolute falsehood of theirs. It's the evidence-blindness and obstacle to clear thinking so hugely, and profitably, amplified by the social media.23 It's the narrowing of focus down to `click or don't click'. It's the force behind what's been called the downward `purity spiral', in which reasoned debates turn into shouting matches between polarized factions.

And in case of temptation to dismiss all this as a mere `just so story' -- speculation unsupported by evidence -- let's take note not only of today's amplified extremisms but also of the wealth of supporting evidence and careful thinking, and mathematical modelling, summarized for instance in the book by D. S. Wilson,43 chapters 3, 6 and 7. These include detailed case studies of fundamentalist belief systems -- both religious and atheist -- illustrating their characteristic dichotomizations, `We are right and they are wrong' or `Our ideas good, their ideas bad'. For instance Ayn Rand, one of the prophets of atheist free-market fundamentalism, claimed what Adam Smith did not: that unrestrained selfishness is absolutely good and altruism absolutely bad. Personal greed is the Answer to Everything. It seems that some well-intentioned believers such as Rand's disciple Alan Greenspan were devastated when the 2008 financial crash took them by surprise, shortly after Greenspan's long reign at the US Federal Reserve Bank. By a supreme irony Rand's credo also says, or takes for granted, that `We are rational and they are irrational.' Any logic-checking that supports an alternative viewpoint is `irrational', something to be dismissed out of hand. And unless you dismiss it immediately, without stopping to think, you're a lily-livered moral weakling, aren't you. Such is the purity spiral.

Dichotomization makes us stupid, doesn't it. But thankfully many other traits must have been selected for, underpinning our species' remarkable versatility and social sophistication. People do stop to think. People can get smart when they want to. Recent advances in palaeoarchaeology have added much detail to the story of the past hundred millennia, based on evidence that includes the size and structure of our ancestors' campsites. Some of the evidence now points to inter-group trading as early as 70 millennia ago, perhaps accelerated by extreme climate stress from the Toba super-eruption around then48 -- suggesting not only warfare between groups but also wheeling and dealing, all of it favouring high levels of social sophistication and organization, and of versatility.

Indeed, group identity and the we-versus-they dichotomy can be very versatile and flexible in humans, as so brilliantly shown in the behavioural experiments of social psychologist Stephen Reicher and his co-workers. The same flexibility was illustrated by a recent (23 June 2019) electoral success in Turkey where political traction came not from demagoguery and binary-button-pressing but rather, to some people's surprise, from a politician's colourful handbook called the `Book of Radical Love', switching the focus away from personal abuse toward pluralistic core values and `caring for each other' -- even for one's political opponents! Caring is another deeply unconscious, deeply instinctive, powerful part of human nature, invisible to simplistic evolutionary theory but crucial to personal happiness.9-11

Today we must live with the genetic inheritance from all this. In our overcrowded world, awash with powerful military, financial, cyberspatial and disinformational weaponry,23,24,35 the dangers are self-evident. And yet our versatility can give us great hope, despite what a Spencerian human-nature cynic might say. Thanks to our improved understanding of natural selection -- far transcending Spencer's ideas68 -- we now understand much better how genetic memory works. Genetic memory and human nature are not as rigid, as hard-wired, and as uniformly nasty as simplistic evolutionary theory suggests. Wilson43 points out that this improved understanding suggests new ways of coping. We do, believe it or not, have the potential to go deeper and get smarter!

For instance practical belief systems have been rediscovered that can avert the `tragedy of the commons', the classic devastation of resources that comes from unrestrained selfishness. That tragedy, as ancient as life itself,54 now threatens our entire planet. And the push toward it by further prioritizing selfishness is increasingly recognized as -- well -- insane. Not the Answer to Everything, after all. There are signs, now, of saner and more stable compromises, or rather symbioses, between market forces and regulation, more like Adam Smith's original idea. The Montreal Protocol on the ozone hole is an inspiring example.

And the idea of genetically-enabled automata or self-assembling building blocks becomes more important than ever, displacing the older, narrower ideas of genetic blueprint, innate hard wiring, selfish genes, and rigid biological determinism that underlie simplistic evolutionary theory. The epigenetic flexibility of these automata is now seen as a significant aspect of biological evolution, not least that of our ancestors.

I really do hope that these advances in our understanding of evolution might free us from the old cliché that the nastier parts of human nature are purely biological, while the nicer parts are purely cultural, or purely religious, whatever that might mean. Yes, it's clear that our genetic memory can spawn powerful, self-assembling automata for the worst kinds of human behaviour. Yet, as history shows, those automata don't always have to assemble themselves in the same way.

Everyone needs some kind of faith or hope; but personal beliefs don't have to be fundamentalist and exclusional. We don't have to carry on screaming `We are right and they are wrong'. We can pause and be mindful instead. Group identity can be flexible and multifarious. We can get smart about escaping from mindsets. We can get smart about finding face-saving formulae, and space for reasoned dialogue. We can learn about the dangers of the purity spiral. Compassion and generosity can come into play, welling up from unconscious levels and transcending the mere game-playing of reciprocal altruism. There are such things as loneliness, friendship, forgiveness, and unconditional love.9-11 They too have their self-assembling automata, deep within our unconscious being -- our unconscious, epigenetic being. Though invisible to simplistic evolutionary theory they are part of our human nature, and its potential to get smarter and wiser. Their ubiquity -- their very ordinariness -- is attested to, in a peculiar way, by the very fact that they're seldom considered newsworthy.

Love and redemption are forces strongly felt in some of the great epics, such as the story of Parsifal. And of course even they have their dark side, within the more dangerous fundamentalisms. Insights into all this go back to the great psychologist Carl Gustav Jung and before that, as the great novelist Ursula K. Le Guin has reminded us, back to ancient wisdoms such as Taoism -- exploring what Jung called the dark and light sides of our `collective unconscious', which are so inextricably intertwined:

I would know my shadow and my light; so shall I at last be whole.

In his great oratorio for peace, A Child of Our Time, the composer Michael Tippett set those words to some of the most achingly beautiful music ever written.

Chapter 3: Acausality illusions, and the way perception works

Picture a typical domestic scene. `You interrupted me!'  `No, you interrupted me!'

Such stalemates can arise from the fact that perceived timings differ from actual timings in the outside world. I once tested this experimentally by secretly tape-recording a dinner-table conversation. At one point I was quite sure that my wife had interrupted me, and she was equally sure it had been the other way round. When I listened afterwards to the tape, I discovered to my chagrin that she was right. She had started to speak a few hundred milliseconds before I did.

Musical training includes learning to cope with the discrepancies between perceived timings and actual timings. For example, musicians often check themselves with a metronome, a small machine that emits precisely regular clicks. The final performance won't necessarily be metronomic, but practising with a metronome helps to remove inadvertent errors in the fine control of rhythm. `It don't mean a thing if it ain't got that swing...'

There are many other examples. I once heard a radio interviewee recalling how he'd suddenly got into a gunfight:  `It all went intuh slowww... motion.'

(A scientist who claims to know that eternal life is impossible has failed to notice that perceived timespans at death might stretch to infinity. That, by the way, is a simple example of the limitations of science. What might or might not happen to perceived time at death is a question outside the scope of science, because it's outside the scope of experiment and observation. It's here that ancient religious teachings show more wisdom, I think, when they say that deathbed compassion and reconciliation are important to us. Perhaps I should add that I'm not myself conventionally religious. I'm an agnostic whose closest approach to the numinous -- to things transcendental, to the divine if you will -- has been through music.)

Some properties of perceived time are very counterintuitive indeed. They've caused much conceptual and philosophical confusion, especially in the literature on free will. For instance, the perceived times of outside-world events can precede the arrival of the sensory data defining those events, sometimes by as much as several hundred milliseconds. At first sight this seems crazy, and in conflict with the laws of physics. Those laws include the principle that cause precedes effect. But the causality principle refers to time in the outside world, not to perceived time. The apparent conflict is a perceptual illusion. I'll call it an `acausality illusion'.

The existence of acausality illusions -- of which music provides outstandingly clear examples, as we'll see shortly -- is a built-in consequence of the way perception works. And the way perception works is well illustrated by the `walking lights'.

Consider for a moment what the walking lights tell us. The sensory data are twelve moving dots in a two-dimensional plane. But they're seen by anyone with normal vision as a person walking -- a particular three-dimensional motion exhibiting organic change.  (The invariant elements include the number of dots, and the distances, in three-dimensional space, between particular pairs of locations corresponding to particular pairs of dots.)  There's no way to make sense of this except to say that the unconscious brain fits to the data an organically-changing internal model that represents the three-dimensional motion, using an unconscious knowledge of Euclidean geometry.

This by the way is what Kahneman (2011) calls a `fast' process, something that happens ahead of conscious thought, and outside our volition. Despite knowing that it's only twelve moving dots, we immediately see a person walking.

Such model-fitting has long been recognized by psychologists as an active process involving unconscious prior probabilities, and therefore top-down as well as bottom-up flows of information (e.g. Gregory 1970, Hoffman 1998, Ramachandran and Blakeslee 1998). For the walking lights the greatest prior probabilities are assigned to a particular class of three-dimensional motions, privileging them over other ways of creating the same two-dimensional dot motion. The active, top-down aspects show up in neurophysiological studies as well (e.g. Gilbert and Li 2013).

The term pattern-seeking is sometimes used to suggest the active nature of the unconscious model-fitting process. For the walking lights the significant pattern is four-dimensional, involving as it does the time dimension as well as all three space dimensions. Without the animation, one tends to see no more than a bunch of dots. So active is our unconscious pattern-seeking that we are prone to what psychologists call pareidolia, seeing patterns in random images.

And what is a `model'? In the sense I'm using the word, it's a partial and approximate representation of reality, or presumed reality. As the famous aphorism says, `All models are wrong, but some are useful'. Models are made in a variety of ways.

The internal model evoked by the walking lights is made by activating some neural circuitry. The objects appearing in video games and virtual-reality simulations are models made of electronic circuitry and computer code. Children's model boats and houses are made of real materials but are, indeed, models as well as real objects -- partial and approximate representations of real boats and houses. Population-genetics models are made of mathematical equations and, usually, computer code. So too are models of photons, of black holes, of lightspeed spacetime ripples, and of jetstreams and the ozone hole. Any of these models can be more or less accurate, and more or less detailed. But they're all partial and approximate.

So ordinary perception, in particular, works by model-fitting. Paradoxical and counterintuitive though it may seem, the thing we perceive is -- and can only be -- the unconsciously-fitted internal model. And the model has to be partial and approximate because our neural processing power is finite. The whole thing is counterintuitive because it goes against our visual experience of outside-world reality -- as not just self-evidently external, but also as direct, clearcut, unambiguous, and seemingly exact in many cases. Indeed, that experience is sometimes called `veridical' perception, as if it were perfectly accurate. One often has an impression of sharply-outlined exactness -- for instance with such things as the delicate shape of a bee's wing or a flower petal, the precise geometrical curve of a hanging dewdrop, the sharp edge of the sea on a clear day, and the magnificence, the sharply-defined jaggedness, of snowy mountain peaks against a clear blue sky.

(Right now I'm using the word `reality' to mean the outside world. Also, I'm assuming that the outside world exists. I'm making that assumption consciously as well as, of course, unconsciously. Notice by the way that `reality' is another dangerously ambiguous word. It's another source of conceptual and philosophical confusion. To start with, the thing we perceive is often called `the perceived reality', whether it's a mountain peak, a person walking, a charging rhinoceros or a car on a collision course or anything else. Straight away we blur the distinction drawn long ago by Plato, Kant, and other great thinkers -- the distinction between the thing we perceive and the thing-in-itself in the outside world. And is music real? Is mathematics real? Is our sense of `self' real? Is religious experience real? Are love and redemption real? There are different kinds of `reality' belonging to different levels of description. And some of them are very different for different individuals. To me, music is very real and I have an excellent ear for it. When it comes to conventional religion, I'm nearly tone-deaf.)

The walking lights remind us that the unconscious model-fitting takes place in time as well as in space. Perceived times are -- and can only be -- internal model properties. And they must make allowance for the brain's finite information-processing rates. That's why, in particular, the existence of acausality illusions is to be expected.

In order for the brain to produce a conscious percept from visual or auditory data, many stages and levels of processing are involved -- top-down as well as bottom-up. The overall timespans of such processing are well known from experiments using high-speed electrical and magnetic recording such as electroencephalography and magnetoencephalography, to detect episodes of brain activity. Overall timespans are typically of the order of hundreds of milliseconds. Yet, just as with visual perception, the perceived times of outside-world events have the same `veridical' character of being clearcut, unambiguous, and seemingly exact, like the time pips on the radio. It's clear at least that perceived times are often far more accurate than hundreds of milliseconds.

That accuracy is a consequence of biological evolution. In hunting and survival situations, eye-hand-body coordination needs to be as accurate as natural selection can make it. Perceived times need not -- and do not -- await completion of the brain activity that mediates their perception. Our ancestors survived. We've inherited their timing abilities. World-class tennis players time their strokes to a few milliseconds or thereabouts. World-class musicians work to similar accuracies, in the fine control of rhythm and in the most precise ensemble playing. It's more than being metronomic; it's being `on the crest of the rhythm'.

You don't need to be a musician or sportsperson to appreciate the point I'm making. If you and I each tap a plate with a spoon or chopstick, we can easily synchronize a regular rhythm with each other, or synchronize with a metronome, to accuracies far, far better than hundreds of milliseconds. Accuracies more like tens of milliseconds can be achieved without much difficulty. So it's plain that perceived times -- internal model properties -- are one thing, while the timings of associated brain-activity events, spread over hundreds of milliseconds, are another thing altogether.

This simple point has been missed again and again in the philosophical and cognitive-sciences literature. In particular, it has caused endless confusion in the debates about consciousness and free will. The interested reader will find further discussion in Part II of Lucidity and Science but, in brief, the confusion seems to stem from an unconscious assumption -- which I hope I've shown to be nonsensical -- an assumption that the perceived `when' of hitting a ball or taking a decision should be synchronous with the `when' of some particular brain-activity event.

As soon as that nonsense is blown away, it becomes clear that acausality illusions should occur. And they do occur. The simplest and clearest examples come from music -- `the art that is made out of time', as Ursula Le Guin put it in her great novel The Dispossessed. Let's suppose that we refrain from dancing to the music, and that we keep our eyes closed. Then, when we simply listen, the data to which our musical internal models are fitted are the auditory data alone.

I'll focus on Western music. Nearly everyone with normal hearing is familiar, at least unconsciously, with the way Western music works. The unconscious familiarity goes back to infancy or even earlier. Regardless of genre, whether it be commercial jingles, or jazz or folk or pop or classical or whatever -- and, by the way, the classical genre includes much film music, for instance Star Wars -- regardless of genre, the music depends on precisely timed events called harmony changes. That's why children learn guitar chords. That's how the Star Wars music suddenly goes spooky, at a precise moment after the heroic opening.

The musical internal model being fitted to the incoming auditory data keeps track of the times of musical events, including harmony changes. And those times are -- can only be -- perceived times, that is, internal model properties.

Figure 5 shows one of the simplest and clearest examples I can find. Playback is available from a link in the figure caption. It's from a well known classical piano piece that's simple, slow, and serene, rather than warlike. There are five harmony changes, the third of which is perceived to occur midway through the example, at the time shown by the arrow. Yet if you stop the playback just after that time, say a quarter of a second after, you don't hear any harmony change. You can't, because that harmony change depends entirely on the next two notes, which come a third and two-thirds of a second after the time of the arrow. So in normal playback the perceived time of the harmony change, at the time of the arrow, precedes by hundreds of milliseconds the arrival of the auditory data defining the change.

From Mozart's piano sonata K 545

Figure 5: Opening of the slow movement of the piano sonata K 545 by Wolfgang Amadeus Mozart. Here's an audio clip giving playback at the speed indicated. Here's the same with orchestral accompaniment. (Mozart would have done it more subtly -- with only one flute, I suspect -- but that's not the point here.)

That's a clear example of an acausality illusion. It's essential to the way the music works. Almost like the `veridical' perception of a sharp edge, the harmony change has the subjective force of perceived reality -- the perceived `reality' of what `happens' at the time of the arrow.

When I present this example in a lecture, it's sometimes put to me that the perceived harmony change relies on the listener being familiar with the particular piece of music. Having been written by Mozart, the piece is indeed familiar to many classical music lovers. My reply is to present a variant that's unfamiliar, with a new harmony change. It starts diverging from Mozart's original just after the time of the arrow (Figure 6):

Variant on Mozart's piano sonata K 545

Figure 6: This version is the same as Mozart's until the second note after the arrow. Here's the playback. Here's the same with orchestral accompaniment.

As before, the harmony change depends entirely on the next two notes but, as before, the perceived time of the harmony change -- the new and unfamiliar harmony change -- is at, not after, the time of the arrow. The point is underlined by the way any competent composer or arranger would add an orchestral accompaniment, to either example -- an accompaniment of the usual kind found in classical piano concertos. Listen to the second clip in each figure caption. The accompaniments change harmony at, not after, the time of the arrow.

I discussed those examples in greater detail in Part II of Lucidity and Science, with attention to some subtleties in how the two harmony changes work and with reference to the philosophical literature, including Dennett's `multiple-drafts' theory of consciousness, which is a way of thinking about perceptual model-fitting in the time dimension.

Just how the brain manages its model-fitting processes is still largely unknown, even though the cleverness, complexity and versatility of these processes can be appreciated from a huge range of examples including those just given. Interactions between many brain regions are involved and, in many cases, more than one sensory data stream.

An example is the McGurk effect in speech perception. Visual data from lip-reading can cause changes in the perceived sounds of phonemes. For instance the sound `baa' is often perceived as `daa' when watching someone say `gaa'. The phoneme-model is being fitted multi-modally -- simultaneously to more than one sensory data stream, in this case visual and auditory. The brain often takes `daa' as the best fit to the slightly-conflicting data.

The Ramachandran-Hirstein `phantom nose illusion' -- which can be demonstrated without special equipment -- produces a striking distortion of one's perceived body image, a nose elongation well beyond Pinocchio's or Cyrano de Bergerac's (Ramachandran and Blakeslee, p. 59). It's produced by a simple manipulation of tactile and proprioceptive data. They're the data feeding into the internal model that mediates the body image, including the proprioceptive data from receptors such as muscle spindles sensing limb positions.

What's this so-called body image? Well, the brain's unconscious internal models must include a self-model -- a partial and approximate representation of one's self, and one's body, in one's surroundings. Plainly one needs a self-model, if only to be well oriented in one's surroundings and to distinguish oneself from others. `Hey -- you're treading on my toe.'

There's been philosophical confusion on this point, too. Such a self-model must be possessed by any animal. Without it, neither a leopard nor its prey would have a chance of surviving. Nor would a bird, or a bee, or a fish. Any animal needs to be well-oriented in its surroundings, and to be able to distinguish itself from others. Yet the biological, philosophical, and cognitive-science literature sometimes conflates `having a self-model', on the one hand, with `being conscious' on the other.

Compounding the confusion is another misconception, the `archaeological fallacy' that symbolic representation came into existence only recently, at the start of the Upper Palaeolithic with its beads, bracelets, flutes, and cave paintings, completely missing the point that leopards and their prey can perceive things and therefore need internal models. So do birds, bees, and fish. Their internal models, like ours, are -- can only be -- unconscious symbolic representations. Patterns of neural activity are symbols. Again, symbolic representation is one thing, and consciousness is another. Symbolic representation is far more ancient -- by hundreds of millions of years -- than is commonly supposed.

The use of echolocation by bats, whales and dolphins is a variation on the same theme. For bats, too, the perceived reality must be the internal model -- not the echoes themselves, but a symbolic representation of the bat's surroundings. It must work in much the same way as our vision except that the bat provides its own illumination, with refinements such as motion detection by Doppler shifting. To start answering the famous question `what is it like to be a bat' we could do worse than imagine seeing in the dark with a stroboscopic floodlight, whose strobe frequency can be increased at will.

And what of the brain's two hemispheres? Here I must defer to McGilchrist36 and Ramachandran and Blakeslee,37 who in their different ways offer a rich depth of understanding coming from neuroscience and neuropsychiatry, far transcending the superficialities of popular culture. For present purposes, McGilchrist's key point is that having two hemispheres is evolutionarily ancient. Even fish have them. The two hemispheres may have originated from the bilaterality of primitive vertebrates but then evolved in different directions. If so, it would be a good example of how a neutral genomic change can later become adaptive.

A good reason to expect such bilateral differentiation, McGilchrist argues, is that survival is helped by having two styles of perception. They might be called holistic on the one hand, and detailed, focused, analytic, and fragmented on the other. The evidence shows that the first, holistic style is a speciality of the right hemisphere, and the second a speciality of the left, or vice versa in a minority of people.

If you're a pigeon who spots some small objects lying on the ground, then you want to know whether they are, for instance, inedible grains of sand or edible seeds. That's the left hemisphere's job. It has a style of model-fitting, and a repertoire of models, that's suited to a fragmented, dissected view of the environment, focusing on a few chosen details while ignoring the vast majority of others. The left hemisphere can't see the wood for the trees. Or, more accurately, it can't even see a single tree but only, at best, leaves, twigs or buds (which, by the way, might be good to eat). One can begin to see why the left hemisphere is more prone to mindsets.

But suppose that you, the pigeon, are busy sorting out seeds from sand grains and that there's a peculiar flicker in your peripheral vision. Suddenly there's a feeling that something is amiss. You glance upward just in time to see a bird of prey descending and you abandon your seeds in a flash! That kind of perception is the right hemisphere's job. The right hemisphere has a very different repertoire of internal models, holistic rather than dissected. They're fuzzier and vaguer, but with a surer sense of overall spatial relations, such as your body in its surroundings. They're capable of superfast deployment. The fuzziness, ignoring fine detail, makes for speed when coping with the unexpected.

Ramachandran and Blakeslee point out that another of the right hemisphere's jobs is to watch out for inconsistencies between incoming data and internal models, including any model that's currently active in the left hemisphere. When the data contradict the model, the left hemisphere has a tendency to reject the data and cling to the model -- to be trapped in a mindset. `Don't distract me; I'm trying to concentrate!' Brain scans show a small part of the right hemisphere that detects such inconsistencies or discrepancies. If the discrepancy is acute, the right hemisphere bursts in with `Look out, you're making a mistake!' If the right hemisphere's discrepancy detector is damaged, severe mindsets such as anosognosia can result.

McGilchrist points out that the right hemisphere is involved in many subtle and sophisticated games, such as playing with the metaphors that permeate language or, one might even say, that mediate language. So the popular-cultural mindset that language is all in the left hemisphere misses many of the deeper aspects of language.

And what of combinatorial largeness? Perhaps the point is obvious. For instance there's a combinatorially large number of possible visual scenes, and of possible assemblages of internal models to fit them. Even so simple a thing as a chain with 10 different links can be assembled in 3,628,800 different ways, and with 100 different links in approximately 10^158 different ways, 1 followed by 158 zeros. Neither we nor any other organism can afford to deal with all the possibilities. Visual-system processes such as early-stage edge detection (e.g. Hoffman 1998) and the unconscious perceptual grouping studied by the Gestalt psychologists, as with the two groups in dot patterns like   ••   •••   (e.g. Gregory 1970), give us glimpses of how the vast combinatorial tree of possibilities is pruned by our extraordinary model-fitting apparatus -- the number of possibilities cut down at lightning speed and ahead of conscious thought.
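The chain-of-links numbers are just the factorial function -- the number of orderings of n distinct items -- and anyone with a computer can check them directly. Here is a quick illustrative calculation (my own, not part of the original text):

```python
import math

# Number of distinct orderings of a chain of 10 different links: 10!
print(math.factorial(10))            # prints 3628800

# For 100 links the count is 100!; counting its decimal digits shows
# it is roughly 10 to the power 158 -- 1 followed by 158 zeros.
print(len(str(math.factorial(100)))) # prints 158
```

The point of the calculation is how explosively the count grows: going from 10 items to 100 multiplies the possibilities by over 150 orders of magnitude.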

Perceptual grouping works in time as well as in space, as for instance with the four-note groups starting at the arrows in Figures 5 and 6. This grouping in subjective time was adumbrated long ago in the thinking of the philosopher Henri Bergson, predating the work of the Gestalt psychologists. Such grouping is part of what gives rise to acausality illusions.

And what of science itself? What about all those mathematical and computer-coded models of population genetics and of photons, of molecules, of black holes, of lightspeed spacetime ripples, of jetstreams and the ozone hole, and of the myriad other entities we deal with in science? Could it be that science itself is always about finding useful models that fit data from the outside world, and never about finding Veridical Absolute Truth? Can science be a quest for truth even if the truth is never Absolute?

The next chapter will argue that the answer to both questions is an emphatic yes. One of the key points will be that, even if one were to find a candidate `Theory of Everything', one could never test it at infinite accuracy, in an infinite number of cases, and in all parts of the Universe or Universes. One might achieve superlative scientific confidence, with many accurate cross-checks, within a very wide domain of applicability. The theory might be described by equations of consummate beauty. And that would be wonderful. But in principle there'd be no way to be Absolutely Certain that it's Absolutely Correct, Absolutely Accurate, and Applicable to Everything. That's kind of obvious, isn't it?

Chapter 4: What is science?

So I'd like to replace all those books on philosophy of science by one simple, yet profound and far-reaching, statement. It not only says what science is, in the most fundamental possible way, but it also clarifies the power and limitations of science. It says that science is an extension of ordinary perception, meaning perception of outside-world reality. Like ordinary perception, science fits models to data.

If that sounds glib and superficial to you, dear reader, then all I ask is that you think again about the sheer wonder of so-called ordinary perception. It too has its power and its limitations, and its fathomless subtleties, agonized over by generations of philosophers. Both science and ordinary perception work by fitting models -- symbolic representations -- to data from the outside world. Both science and ordinary perception must assume that the outside world exists, because it can't be proven absolutely. Models, and assemblages and hierarchies of models -- schemas or schemata as they're sometimes called -- are partial and approximate representations, or candidate representations, of outside-world reality. Those representations can be anything from superlatively accurate to completely erroneous.

Notice that the walking-lights animation points to the tip of a vast iceberg, a hierarchy of unconscious internal models starting with the three-dimensional motion itself but extending all the way to the precise manner of walking and the associated psychological and emotional subtleties. The main difference between science and so-called ordinary perception is that, in science, the set of available models is even more extensive, and the model-fitting process to some extent more conscious, as well as being far slower, and dependent on vastly extended data acquisition, computation, and cross-checking. Making the process more systematic in our big-data era is one of today's grand challenges -- and the mathematical means to do so is now available (e.g. Pearl and Mackenzie 2018) and is being put to use in, for instance, artificial intelligence systems based on machine learning. These systems fit models to experimental data in a logically self-consistent way, the experimenter's actions being represented by the Bayesian probabilistic `do' operator. Such systems learn by `artificial juvenile play', by trying things out. They do so of course within some prescribed universe of discourse, which might be that of social-media profitability but might instead, by contrast, be that of a particular scientific problem, such as understanding the complex biomolecular circuitry that switches genes on and off.

And yes, all our modes of observation of the outside world are, of course, theory-laden, or prior-probability-laden. That's a necessary aspect of the model-fitting process. But that doesn't mean that `science is mere opinion' as some postmodernists say. Some models fit much better than others. And some are a priori more plausible than others, with more cross-checks to boost their prior probabilities. And some are simpler and more widely applicable than others, for example Newton's and Einstein's theories of gravity. These are both, of course, partial and approximate representations of reality even though superlatively accurate, superlatively simple, and repeatedly cross-checked in countless ways within their very wide domains of applicability -- Einstein's still wider than Newton's because it includes, for instance, the orbital decay and merging of pairs of black holes or neutron stars and the resulting spacetime ripples, or gravitational waves, which were first observed on 14 September 2015 (by a detector called LIGO, Abbott et al. 2016) and which provided yet another cross-check on the theory and opened a new window on the Universe. And both theories are not only simple but also mathematically beautiful.
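The claim that `some models fit much better than others' can be made quantitative in the Bayesian spirit mentioned above. Here is a toy numerical sketch (my own illustration; the data and the two candidate models are invented for the example): each model gets a prior probability, is scored by how well it predicts some noisy measurements, and the posterior balance of probabilities then favours the better fit -- without ever delivering absolute proof.

```python
import math

# Five noisy measurements of some outside-world quantity (invented data)
data = [2.1, 1.9, 2.2, 2.0, 1.8]

def log_likelihood(values, mean, sigma=0.2):
    # Log-probability of the data under a model predicting `mean`,
    # assuming Gaussian measurement noise of width sigma
    return sum(
        -0.5 * ((x - mean) / sigma) ** 2
        - math.log(sigma * math.sqrt(2 * math.pi))
        for x in values
    )

# Model A predicts a value of 2.0; model B predicts 3.0.
# Give them equal prior probabilities: no prejudice either way.
log_post_A = math.log(0.5) + log_likelihood(data, 2.0)
log_post_B = math.log(0.5) + log_likelihood(data, 3.0)

# The posterior balance of probabilities overwhelmingly favours A --
# a better fit, not an Absolute Truth.
print(log_post_A > log_post_B)  # prints True
```

Theory-ladenness shows up explicitly here: the noise width and the priors are assumptions built into the comparison, just as prior probabilities are built into ordinary perception.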

Notice that all this has to do with cross-checking, data quality, goodness of fit, and beauty and economy of modelling, never with Absolute Truth and Absolute Proof, nor even with uniqueness of model choice. Currently Einstein's theory has no serious competitors in its domain of applicability, but in general the choice of model needn't be unique. There might be two or more alternative models that work equally well. They might have comparable simplicity and accuracy and offer complementary, and equally powerful, insights into the workings of outside-world reality. The possibility of such nonuniqueness has been emphasized in parts of the literature on evolutionary biology, where it is called the `principle of equivalence' (e.g. Wilson 2015) and is being used in efforts to view things from more than one angle and to get beyond simplistic evolutionary theory.

The possibility of non-uniqueness is troublesome for believers in Absolute Truth, and is much agonized over in the philosophy-of-science literature, under headings such as `incommensurability'. However, as I keep saying, even the existence of the outside world can't be proven absolutely. It has to be assumed. Both science and ordinary perception proceed on that assumption. The justification is no more and no less than our experience that the model-fitting process works, again and again -- never perfectly, but often well enough to gain our respect.

If you observe a rhinoceros charging toward you, then it's probably a good idea to jump out of the way even though your observations are, unconsciously, theory-laden and even though there's no absolute proof that the rhinoceros exists. Even a postmodernist might jump out of the way, I'd dare to suggest. And Einstein's spacetime ripples gain our respect not only for the technical triumph of observing them but also because the merging black holes emit a very specific wave pattern, closely matching the details of what's computed from Einstein's equations when the black holes have particular masses and spins.

So beauty and economy of modelling can be wonderful and inspirational. Yet the same cautions apply. Indeed, Unger and Smolin (2015) argue that the current crisis in physics and cosmology has its roots in a tendency to conflate outside-world reality with mathematical models of it. The mathematical models tend to be viewed as the same thing as the outside-world reality. Jaynes (2003) aptly calls this conflation the `mind projection fallacy'. (The late Edwin T. Jaynes was one of the great thinkers about model-fitting, prior probabilities, and Bayesian analytics, where the mind projection fallacy used to be a major impediment to understanding. Probability distribution functions are model components, not things in the outside world.) The mind projection fallacy seems to be bound up with the hypercredulity instinct. In physics and cosmology, it generates a transcendental vision of Absolute Truth in which the entire Universe is seen as a single mathematical object of supreme beauty, a Theory of Everything -- an Answer to Everything -- residing within that ultimate `reality', the Platonic world of perfect forms. Alleluia!

Because the model-fitting works better in some cases than in others, there are always considerations of just how well it is working, involving a balance of probabilities. We must always consider how many independent cross-checks have been done and to what accuracies. For Einstein's equations, the spacetime ripples from merging black holes provide a new independent cross-check, adding to the half dozen or so earlier kinds of cross-check that include an astonishingly accurate one from the orbital decay of a binary pulsar -- accurate to about 14 significant figures, or one part in a hundred million million. So the detection of spacetime ripples didn't suddenly `prove' Einstein's theory, as journalists had it, but instead just added another cross-check, and a very beautiful one.

If you can both hear and see the charging rhinoceros and if your feet feel the ground shaking in synchrony, then you have some independent cross-checks. You're checking a single internal model, unconsciously of course, against three independent sensory data streams. With so much cross-checking, it's a good idea to accept the perceived reality as a practical certainty. We do it all the time. Think what's involved in riding a bicycle, or in playing tennis, or in pouring a glass of wine. But the perceived reality is still the internal model within your unconscious brain, paradoxical though that may seem. It is still theory-laden -- unconsciously theory-laden from hundreds of millions of years of evolution. And, again, the outside world is something whose existence must be assumed.

One reason I keep banging on about these issues is the quagmire of philosophical confusion that has long surrounded them (e.g. Smythies 2009). The Vienna Circle thought that there were such things as direct, or absolute, or veridical, observations -- sharply distinct from theories or models. That's what I called the `veridical perception fallacy' in Part II of Lucidity and Science. Others have argued that all mental constructs are illusions. Yet others have argued that the entire outside world is an illusion, subjective experience being the only reality. But none of this helps! Like the obsession with absolute proof and absolute truth, it just gets us into a muddle, often revolving around the ambiguity of the words `real' and `reality'.

Journalists, in particular, often seem hung up on the idea of absolute proof, unconsciously at least. They often press us to say whether something is scientifically `proven' or not. But as Karl Popper emphasized long ago, that's a false dichotomy and an unattainable mirage. I have a dream that professional codes of conduct for scientists will clearly say that, especially in public, we should talk instead about the balance of probabilities and the degree of scientific confidence. Many scientists do that already, but others still talk about The Truth, as if it were absolute (e.g. Segerstråle 2000).

Let me come clean. I admit to having had my own epiphanies, my eurekas and alleluias, from time to time. But as a professional scientist I wouldn't exhibit them in public, at least not as absolute truths. They should be for consenting adults in private -- an emotional resource to power our research efforts -- not something for scientists to air in public. I think most of my colleagues would agree. We don't want to be lumped with all those cranks and zealots who believe, in Max Born's words, `in a single truth and in being the possessor thereof'. And again, even if a candidate Theory of Everything, so called, were to be discovered one day, the most that science could ever say is that it fits a large but finite dataset to within a small but finite experimental error. Unger and Smolin (2015) are exceptionally clear on this point and on its implications for cosmology.

Consider again the walking-lights animation. Instinctively, we feel sure that we're looking at a person walking. `Hey, that's a person walking. What could be more obvious?' Yet the animation might not come from a person walking at all. The prior probabilities, the unconscious choice of model, might be wrong. The twelve moving dots might have been produced in some other way -- such as luminous pixels on a screen! The dots might `really' be moving in a two-dimensional plane, or three-dimensionally in any number of ways. Even our charging rhinoceros might, just might, be a hallucination. As professional scientists we always have to consider the balance of probabilities, trying to get as many cross-checks as possible and trying to reach well-informed judgements about the level of scientific confidence. That's what was done with the ozone-hole work in which I was involved, which eventually defeated the ozone disinformers. That's what was done with the discovery and testing of quantum theory, where nothing is obvious!

There is of course a serious difficulty here, on the level of lucidity principles and communication skills. We do need quick ways to express extremely high confidence, such as confidence in the sun rising tomorrow. We don't want to waste time on such things when confronted with far greater uncertainties. Scientific research is like driving in the fog, straining to see ahead. Sometimes the fog is very thick. So there's a tendency to use terms like `proof' and `proven' as a shorthand to indicate things to which we attribute practical certainty, things that we shouldn't be worrying about when trying to see through the fog. But because of all the philosophical confusion, and because of the hypercredulity instinct, and the dichotomization instinct, I think it preferable in public to avoid terms like `proof' or `proven', or even `settled', and instead try to use a more nuanced range of terms like `practically certain', `indisputable', `hard fact', `well established', `highly probable', and so on, when we feel that strong statements are justifiable in the current state of knowledge. Such terms sound less final and less absolutist, especially when we're explicit about the strength of the evidence and the variety of cross-checks. I try to set a good example in the Postlude on climate.

And I think we should avoid the cliché `fact versus theory'. It's another false dichotomy and it perpetuates the veridical perception fallacy. Even worse, it plays straight into the hands of the professional disinformers, those well-resourced masters of information warfare who work to discredit good science when they think it threatens profits, or political power, or any other vested interest. The `fact versus theory' mindset gives them a ready-made framing tactic, paralleling `good versus bad' (e.g. Lakoff 2014).

I want to return to the fact -- the indisputable practical certainty, I mean -- that what's hopelessly complex at one level can be simple, or at least understandable, at another. And multiple levels of description are not only basic to science but also, unconsciously, basic to ordinary perception. They're basic to how our brains work. Straight away, our brains' left and right hemispheres give us at least two levels of description, respectively a lower level that dissects fine details, and a more holistic higher level. And neuroscience has revealed a variety of specialized internal models or model components that symbolically represent different aspects of outside-world reality. In the case of vision there are separate model components representing not only fine detail on the one hand, and overall spatial relations on the other but also, for instance, motion and colour (e.g. Sacks 1995, chapter 1; Smythies 2009). For instance damage to a part of the brain dealing with motion can produce visual experiences like successions of snapshots or frozen scenes -- very dangerous if you're trying to cross the road.

In science, as recalled in the Prelude, progress has always been about finding levels of description and a viewpoint, or viewpoints, from which something at first sight hopelessly complex becomes simple enough to be understandable. And different levels of description can seem incompatible with each other, if only because of emergent properties or emergent phenomena -- phenomena that are recognizable at a particular level of description but unrecognizable amidst the chaos and complexity of lower levels.

The need to consider multiple levels of description is especially conspicuous in the biological sciences, contrary to what simplistic evolutionary theory might suggest. For instance molecular-biological circuits, or regulatory networks, are now well recognized entities. They involve patterns of highly specific interactions between molecules of DNA, of RNA, and of proteins as well as many other large and small molecules (e.g. Noble 2006, Danchin and Pocheville 2014, Wagner 2014). Some protein molecules have long been known to be allosteric enzymes. That is, they behave somewhat like the transistors within electronic circuits (e.g. Monod 1971). Causal arrows point downward as well as upward. Genes are switched on and off by the action of molecular-biological circuits. Such circuits and actions are impossible to recognize from lower levels such as the level of genes alone, still less from the levels of chemical bonds and bond strengths within thermally-agitated molecules, jiggling back and forth on timescales of thousand-billionths of a second, and the still lower levels of atoms, atomic nuclei, electrons, and quarks. And again, there are of course very many higher levels of description, in the hierarchy of models -- level upon level, with causal arrows pointing both downward and upward. There are molecular-biological circuits and assemblies of such circuits, going up to the levels of archaea, bacteria and their communities, of yeasts, of multicellular organisms, of niche construction and whole ecosystems, and of ourselves and our families, our communities, our nations, our globalized plutocracies, and the entire planet -- which Newton treated as a point mass.

None of this would need saying were it not for the persistence, even today, of an extreme-reductionist view saying, or assuming, that looking for the lowest possible level and for atomistic `units' such as quarks, or atoms, or genes, or so-called memes, gives us the Answer to Everything and is therefore the only useful angle from which to view a problem. Some of the disputes about biological evolution seem to be disputes about `the' unit of selection -- as if such a thing could be uniquely identified. Yes, in many cases reductionism can be enormously useful; but no, it isn't the Answer to Everything! In some scientific problems, including those I've worked on myself, the most useful models aren't at all atomistic. In fluid dynamics we use accurate `continuum-mechanics' models in which highly nonlocal, indeed long-range, interactions are crucial. They're mediated by the pressure field. They're a crucial part of, for instance, how birds, bees and aircraft stay aloft, how a jetstream can circumscribe and contain the ozone hole, and how waves and vortices interact.

McGilchrist tells us that extreme reductionism comes from our left hemispheres. It is indeed a highly dissected view of things. His book can be read as a passionate appeal for more pluralism -- for more of Max Born's `loosening of thinking', for the deeper understanding that can come from looking at things on more than one level and from more than one viewpoint, while respecting the evidence. Such understanding requires a better collaboration, says McGilchrist, between our garrulous and domineering left hemispheres and our quieter, indeed wordless, but also passionate, right hemispheres.

Surely, then, professional codes of conduct for scientists -- to say nothing of lucidity principles as such -- should encourage us to be explicit, in particular, about which level or levels of explanation we're talking about. And when even the level of explanation isn't clear, or when the questions asked are `wicked questions' having no clear meaning at all, still less any clear answer, it would help to be explicit in acknowledging such difficulties. It would help to be more explicit than we feel necessary.

Such an approach might also be helpful when confronted with the confusion about consciousness and free will. I want to stay off this subject -- having already had a go at it in Part II of Lucidity and Science -- except to say that some of the confusion seems to come not only from being unaware of acausality illusions, but also from conflating different levels of description. I like the aphorism that free will is a biologically indispensable illusion, but a socially indispensable reality. There's no conflict between the two statements. They belong to different, incompatible levels of description.

And they sharply remind us of the ambiguity, and the context-dependence, of the word `reality'. I ask again, is music real? Is mathematics real? Is our sense of self real? Is the outside world real? For me, at least, they're all vividly real but in four different senses. And one of life's realities is that pragmatic social functioning depends on accepting our sense of self -- our internal self-model -- as an entity having, or seeing itself as having, free will or volition or agency as it's variously called. It wouldn't do, would it, to be able to commit murder and then, like a modern-day Hamlet, to say to the jury `it wasn't me, it was my genes wot dunnit.'

Chapter 5: Music, mathematics, and the Platonic

The walking lights show that we have unconscious Euclidean geometry. We also have unconscious calculus.

Calculus is the mathematics of continuous change, as with a person walking. Calculus also deals, for instance, with objects like those shown in Figure 7. They are made of smooth curves -- pathways whose direction changes continuously, the curves that everyone calls `mathematical':

circle, ellipse, crescent, liquid drop

Figure 7: Some Platonic objects, including the outline of a liquid drop.

Such curves include perfect circles, ellipses, and portions thereof, among countless other examples. A straight line is the special case having zero rate of change of direction. A circle has a constant rate of change of direction, and an ellipse has a rate of change that's itself changing, and so on.
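For readers who like to experiment, these rates of change of direction can be checked numerically. The sketch below (Python with NumPy, purely illustrative) estimates the turning rate per unit arc length along a sampled circle and a sampled ellipse, confirming that the first is constant while the second is itself changing:

```python
import numpy as np

def turning_rate(x, y):
    """Rate of change of direction per unit arc length along a sampled curve."""
    theta = np.unwrap(np.arctan2(np.diff(y), np.diff(x)))  # direction of travel
    s = np.cumsum(np.hypot(np.diff(x), np.diff(y)))        # arc length so far
    return np.diff(theta) / np.diff(s)

t = np.linspace(0.0, 2.0 * np.pi, 2001)
circle = turning_rate(np.cos(t), np.sin(t))        # unit circle
ellipse = turning_rate(2.0 * np.cos(t), np.sin(t)) # ellipse, axes 2 and 1

assert np.ptp(circle) < 1e-3 * abs(circle.mean())  # essentially constant
assert np.ptp(ellipse) > abs(ellipse.mean())       # varies strongly
```

For the unit circle the turning rate comes out very close to 1 everywhere; for the ellipse it swings between about 0.25 at the flat ends and 2 at the sharp ends.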

Experience suggests that such `Platonic objects', as I'll call them, are of special interest to the unconscious brain. Whenever we see natural phenomena exhibiting what look like straight lines or smooth curves, such as the edge of the sea on a clear day, or the edge of the full moon, or the shape of a hanging dewdrop, they tend to excite our sense of something special, and beautiful. So do the great pillars of the Parthenon, and the smooth curves of the Sydney Opera House. We feel their shapes as resonating with something `already there'. Plato felt that the world of such shapes, or forms, and the many other beautiful entities found in mathematics, is in some mysterious sense a world more real than the outside world with its commonplace messiness. He felt his `world of perfect forms' to be something eternal -- something that is already there, and will always be there.

My heart is with Plato here. When the shapes, or forms, look truly perfect, they can excite a sense of great wonder and mystery. So can the mathematical equations describing them. How can such immutable perfection exist at all?

Indeed, so powerful is our unconscious interest in such perfection that we see smooth curves even when they're not actually present in the incoming visual data. For instance we see them in the form of what psychologists call `illusory contours'.  Figure 8 is an example. If you stare at the inner edges of the black marks for several seconds, and if you have normal vision, you will begin to see an exquisitely smooth curve joining them:

black marks suggesting an illusory smooth contour

Figure 8: An illusory contour. To see it, stare at the inner edges of the black marks.

That curve is not present on the screen or on the paper. It is constructed by your visual system. To construct it, the system unconsciously solves a problem in calculus -- in the branch of it called the calculus of variations. The problem is to consider all the possible curves that can be fitted to the inner edges of the black marks, and to pick out the curve that's as smooth as possible, in a sense to be specified. The smoothness is specified using some combination of rates of change of direction, and rates of change of rates of change, and so on, averaged along each curve. So we have not only unconscious Euclidean geometry, but also an unconscious calculus of variations. And that in turn, by the way, gets us closer to some of the deepest parts of theoretical physics as we'll see shortly.
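A toy version of that unconscious computation can be written down explicitly. The sketch below (Python with NumPy; the anchor points are made-up numbers standing in for the inner edges of the black marks) fills in the curve that minimizes the mean-square rate of change of rate of change -- one simple measure of roughness -- subject to passing through the anchors:

```python
import numpy as np

N = 21
# Hypothetical anchor points, standing in for the inner edges of the marks
anchors = {0: 0.0, 5: 0.8, 10: 1.0, 15: 0.8, 20: 0.0}

# Minimize the sum of squared second differences of the curve y[0..N-1],
# with the anchored values held fixed
free = [i for i in range(N) if i not in anchors]
D2 = np.zeros((N - 2, N))
for r in range(N - 2):
    D2[r, r:r + 3] = [1.0, -2.0, 1.0]         # discrete second difference

A = D2[:, free]
b = -D2[:, list(anchors)] @ np.array(list(anchors.values()))
y_free, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares minimizer

y = np.zeros(N)
y[list(anchors)] = list(anchors.values())
y[free] = y_free                               # the 'smoothest' completed curve
```

The result interpolates the anchors with a smooth, spline-like curve, symmetric because the anchors are; the brain's version presumably uses a more sophisticated roughness measure, but the principle is the same.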

Sportspeople are good at unconscious calculus. The velocity of a tennis ball is the rate of change of its position. When the ball is in flight, the pathway it follows is a smooth curve.

The existence of the Platonic world is no surprise from an evolutionary perspective. It is, indeed, `already there' in the sense of being evolutionarily ancient -- something that comes to us through genetic memory and the automata that it enables -- self-assembling into, among many other things, the special kinds of symbolic representation that correspond to Platonic objects. That's again because of combinatorial largeness. Over vast stretches of time, natural selection has put the unconscious brain under pressure to make its model-fitting processes as simple as the data allow. That requires a repertoire of internal model components that are as simple as possible. Many of these components are Platonic objects, smooth curves or portions of smooth curves or, rather, their internal symbolic representations. Please remember that actual or latent patterns of neural activity are symbols -- and we are now talking about mathematical symbols -- even though we don't yet have the ability to read them directly from the brain's neural networks.

A perfect circle is a Platonic object simply because it's simple. The illusory contour in Figure 8 shows that the brain's model-fitting process assigns the highest prior probabilities to models representing objects with the simplest possible outlines consistent with the data, in this case an object with a smooth outline sitting in front of some smaller black objects. That is part of how the visual system separates an object from its background, an important part of making sense of the visual scene. Making sense of the scene has been crucial to survival for hundreds of millions of years -- crucial to navigation, crucial to finding mates, and crucial to eating and not being eaten. Many of the objects to be distinguished have outlines that are more or less smooth. They range from distant hills down to fruit and leaves, tusks and antlers, and teeth and claws.

`We see smooth curves even when they're not actually present.' Look again at Figure 7. None of the Platonic objects we see are actually present in the figure. Take the circle, or the ellipse as it may appear on some screens. It's actually more complex. With a magnifying glass, one can see staircases of pixels. Zooming in more and more, one begins to see more and more detail, such as irregular or blurry pixel edges. One can imagine zooming in to the atomic, nuclear and subnuclear scales. Long before that, one encounters the finite scales of the retinal cells in our eyes. Model-fitting is partial and approximate. What's complex at one level can be simple at another. And perfectly smooth curves are things belonging not to any part of the incoming sensory data but rather -- I emphasize once more -- to the unconscious brain's repertoire of model components. I think Plato would have found this interesting.  (I wonder if he knew about illusory contours -- perhaps a Plato scholar can tell me.)

The calculus of variations is a gateway to some of the deepest parts of theoretical physics. That's because it leads to Noether's theorem. The theorem depends on writing the laws of physics -- our most basic model of the outside world -- in what's called `variational' form. That's a form allowing the calculus of variations to be used. It is Feynman's own example of things that are mathematically equivalent but `psychologically very different'.

Think of playing tennis on the Moon. The tennis ball feels no air resistance, and moves solely under the Moon's gravity. One way to model such motion, familiar to scientists for over three centuries, is to use Newton's equations giving the moment-to-moment rates of change of quantities like the position of the tennis ball. In our lunar example, solving those equations produces a pathway for the tennis ball in the form of a smooth curve, approximately what's called a parabola. But the same smooth curve can also be derived as the solution to a variational problem, a problem more like that of Figure 8 because it treats the path of the tennis ball as a single entity. It deals with all parts of the curve simultaneously. That's psychologically very different indeed.

One considers all possible paths beginning and ending at a given pair of points. Instead of finding the smoothest of those paths, however, the problem is to find the path having the smallest value of another property, quite different from any measure of roughness or smoothness. That property is the time-average, along the whole path, of the velocity squared minus twice the gravitational altitude, or gravitational energy per unit mass. In order for the problem to make sense one has to specify a fixed travel time as well as fixed end points. Otherwise the velocity squared could be anything at all. The time-averaged quantity to be minimized is proportional to what physicists call the `action integral' for the problem, or `the action' for brevity.

If one solves this variational problem, minimizing the action, then one gets exactly the same smooth curve as one gets from solving Newton's equations. Indeed, even though psychologically different, the variational problem is mathematically equivalent to Newton's equations. Using the standard methods of the calculus of variations -- the same methods as for Figure 8 -- one can make a single calculation showing that the equivalence holds in all possible cases. Mathematics does indeed handle many possibilities at once. For the reader wanting more detail I'd recommend the marvellous discussion in Feynman et al. (1964).
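The minimizing property is easy to check numerically in a particular case. The sketch below (Python with NumPy; the lunar gravity value and travel time are illustrative numbers) compares the action of the Newtonian parabola with the action of perturbed paths sharing the same endpoints and the same fixed travel time:

```python
import numpy as np

g = 1.62                       # Moon's surface gravity, m/s^2 (illustrative)
T = 10.0                       # fixed travel time (illustrative)
t = np.linspace(0.0, T, 201)
dt = t[1] - t[0]

def action(z):
    """Time-average of velocity squared minus twice the gravitational
    energy per unit mass, as in the text (proportional to the action)."""
    v = np.diff(z) / dt
    return np.mean(v**2) - 2.0 * g * np.mean((z[:-1] + z[1:]) / 2.0)

# The path from Newton's equations: a parabola starting and ending at height 0
z_newton = 0.5 * g * t * (T - t)

# Any other path with the same endpoints and travel time has a larger action
for amp in (0.3, -0.7, 1.2, 0.05, -2.0):
    z_other = z_newton + amp * np.sin(np.pi * t / T)  # vanishes at endpoints
    assert action(z_other) > action(z_newton)
```

However the perturbation is chosen, the Newtonian parabola always wins.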

And once one has the problem in variational form, one can apply Noether's theorem. The theorem tells us that, in each case of tennis-ball motion, we have organic change in the abstract sense I've defined it. There are invariant quantities, including an invariant called the total energy, that stay the same while the tennis ball changes its position. Invariant quantities become more and more important as we deal with problems that are more and more complex. Invariant total energies show why it's a waste of time to try building a complicated perpetual-motion machine. And invariants are crucial to theoretical physics at its deepest levels, including all of electrodynamics, and all of quantum mechanics and particle physics. The invariants in all these cases become accessible through Noether's theorem, which in turn connects them, in a very general way, to another powerful branch of mathematics called group theory. The only things that need changing from case to case are the formula for the action, and the kind of space in which it is calculated. The mathematical framework stays the same.
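In the tennis-ball case the invariance of the total energy is itself easy to verify. A minimal sketch (Python with NumPy; the lunar gravity value and travel time are again illustrative numbers):

```python
import numpy as np

g = 1.62                          # Moon's surface gravity, m/s^2 (illustrative)
T = 10.0
t = np.linspace(0.0, T, 10001)
z = 0.5 * g * t * (T - t)         # the path from Newton's equations
v = np.gradient(z, t)             # vertical velocity along the path
energy = 0.5 * v**2 + g * z       # total energy per unit mass

# Away from the endpoints, the energy is constant to rounding error,
# even though position and velocity are changing throughout the flight
assert np.ptp(energy[1:-1]) < 1e-9 * np.max(energy)
```

The position and velocity change from moment to moment, but the particular combination picked out by Noether's theorem does not.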

Also in the unconscious brain's repertoire of model components are the special sets of musical pitches called harmonic series. An example is shown in Figure 9.

Harmonic series on C (65.4Hz)

Figure 9: A musical harmonic series. You can hear the pitches in this audio clip. In the case shown here the first note, called the `fundamental' or `first harmonic', corresponds to a vibration frequency of 65.4Hz (65.4 cycles per second), the second note or harmonic to twice this, 130.8Hz, and the third to three times, 196.2Hz, and so on. The fundamental note and its octave harmonics, the 2nd, 4th, 8th and so on all have the same musical name C or Doh. If you happen to have a tunable electronic keyboard and would like to tune it to agree with the harmonic series, then you need to sharpen the 3rd, 6th and 12th harmonics by 2 cents (2/100 of a semitone) and the 9th by 4 cents -- these differences are barely audible -- but also to flatten the 5th and 10th by 14 cents (easily audible to a good musical ear), the 7th by 31 cents and the 11th by 49 cents, relative to B flat and F sharp. The last two changes are plainly audible to just about anyone. It's worth going through this exercise if only to play the so-called `Tristan chord' (6th + 7th + 9th + 10th), to hear what it sounds like when thus tuned. The differences arise from the fact that the standard tuning, called `equal temperament', divides the octave into twelve exactly equal `semitones' with frequency ratio 2^(1/12) = 1.059463. Equal temperament is musically useful because of a peculiar accident of arithmetic. This is the tiny, practically inaudible 2-cent difference between the 3rd harmonic and its equal-tempered approximation, whose frequency is 2^(7/12) = 1.49831, very nearly 3/2, times the frequency of the 2nd harmonic.

The defining property is that the pitches correspond to vibration frequencies equal to the lowest frequency, in this case 65.4Hz (65.4 cycles per second), multiplied by a whole number such as 1, 2, 3, etc.
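The tuning differences quoted in the caption to Figure 9 can be checked directly, since a cent is just 1/1200 of an octave on a logarithmic frequency scale. A few lines of Python (purely illustrative):

```python
import math

def cents_off_equal_temperament(n):
    """Deviation, in cents, of the nth harmonic from the nearest
    equal-tempered pitch (100 cents = one equal-tempered semitone)."""
    interval = 1200.0 * math.log2(n)   # cents above the fundamental
    return interval - 100.0 * round(interval / 100.0)

# The differences quoted in the caption to Figure 9:
assert round(cents_off_equal_temperament(3)) == 2     # sharpen by 2 cents
assert round(cents_off_equal_temperament(9)) == 4     # sharpen by 4 cents
assert round(cents_off_equal_temperament(5)) == -14   # flatten by 14 cents
assert round(cents_off_equal_temperament(7)) == -31   # flatten by 31 cents
assert round(cents_off_equal_temperament(11)) == -49  # flatten by 49 cents
```

The octave harmonics (2nd, 4th, 8th and so on) come out with zero deviation, which is why equal temperament leaves them untouched.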

A harmonic series is a `Platonic object' in just the same sense as before. How can that be? The answer will emerge shortly, when we consider how hearing works. And it will expose yet more connections between music and mathematics. But first, dear reader, please take a moment to listen to the musical pitches themselves. Do they hint at something special and beautiful? Something that could divert you, and Plato, from commonplace messiness? Echoes of fairy horn calls, perhaps? However they strike you, these sounds are special to the musical brain, again for reasons that are evolutionarily ancient as we'll see.

Also special are combinations of these pitches played together. For instance if the pitches numbered 4, 5 and 6 are played together -- they are called the 4th, 5th, and 6th harmonics -- we hear the familiar sound of what musicians call a `common chord' or `major triad'. If we add to that chord the 1st, 2nd, 3rd, 8th, 10th, 12th, and 16th, then it sounds like a more spacious version of the same chord -- more like the grand, thunderous chord that opens the Star Wars music. If on the other hand we play the 6th, 7th, 9th and 10th together then we get what has famously been called the `Tristan chord', the first chord to be heard in Richard Wagner's opera Tristan und Isolde. (Some people think that Wagner invented this chord, even though it was actually invented -- I'd rather say discovered -- long before that. For instance the chord occurs over twenty times in another famous piece of music, Dido's Lament, written by Henry Purcell about two centuries before Tristan.)

It seems that Claude Debussy was the first great composer to exploit the fact that any subset of pitches from a harmonic series is special to the musical brain. Notice that all the chords just mentioned are harmonic-series subsets whether or not each pitch is played together with its own higher harmonics, because whole numbers multiply together to give whole numbers. Debussy made extraordinary use of these insights, together with the organic-change principle, to open up what he called a new frontier in musical harmony (Platt 1995). Extending far beyond Wagner, it has been exploited across a vast range of twentieth-century genres including, for instance, the bebop jazz of Charlie Parker.

The organic-change principle for harmony involves small pitch changes, which as mentioned in chapter 1 can be small in two different senses. These can now be stated more clearly. One sense is the obvious one, closeness on the keyboard or guitar fingerboard. The second is closeness of position within a harmonic series. So the 1st and 2nd harmonics are closest in this second sense. They are so close that musicians give them the same name, C or Doh in the case of Figure 9, even though they're far apart in the first sense, by a whole octave. The 2nd and 3rd are the next closest in the second sense, then the 3rd and 4th, and so on. This plus the organic-change principle is almost all one needs to know in order to master musical harmony, if one has a good ear -- though admittedly the big harmony-counterpoint textbooks offer many useful examples, as well as showing the importance of the way melodic lines or `voices' move against each other. The organically-changing patterns can get quite complicated!

The musical brain is good at recognizing subsets from more than one harmonic series simultaneously, even when superposed in complicated combinations. Without this, we wouldn't have the sounds of Star Wars and symphonies and jazz bands and prog rock. In particular, many powerful chords are made by superposing subsets from more than one harmonic series. A point often missed is that the ordinary minor common chord is a polychord in this sense. For instance the 3-note chord called E minor is made up of the 5th and 6th pitches from Figure 9, the harmonic series based on the note C, overlapping with the 4th and 5th from another harmonic series, that based on the note G, 98.1Hz, an octave below the 3rd harmonic in Figure 9. The first spooky chord in Star Wars is another 3-note chord, similarly made up of the 4th and 5th from one series overlapping with the 4th and 5th from another.
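The arithmetic behind the E minor example is easily checked. In the sketch below (Python; the frequencies are those given in the text) the shared note G appears once in each harmonic series, which is what makes the two overlapping subsets a single 3-note chord:

```python
import math

c = 65.4   # Hz, the fundamental of the series in Figure 9
g = 98.1   # Hz, an octave below the 3rd harmonic of C (196.2 / 2)

# Harmonics 5 and 6 of C overlap with harmonics 4 and 5 of G:
assert math.isclose(6 * c, 4 * g)   # both give the note G, 392.4Hz

# After removing the duplicate, three distinct notes remain: E, G, B
notes = sorted({round(f, 1) for f in (5 * c, 6 * c, 4 * g, 5 * g)})
assert notes == [327.0, 392.4, 490.5]
```

The rounding to 0.1Hz is only there to merge the two floating-point versions of the shared G.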

If you listened to the audio clip in the caption to Figure 9, you probably noticed that there are slight differences in pitch relative to the pitches on a standard keyboard. The differences are detailed in the figure caption. These differences are easily audible for the 7th and 11th harmonics and also, more subtly, for the 5th and 10th. Such differences give us a valuable artistic resource, contrary to an impression one might get from the fuss about theoretical tuning differences -- Pythagorean commas and so on. Musicians playing non-keyboard instruments develop great skill in slightly varying the pitch from moment to moment as the music unfolds, exploiting the tension between harmonics and keyboard pitches for expressive purposes as with, for instance, `blue notes' in jazz. Blue notes flirt with the pitch of the 7th harmonic. A pianist can't sound a blue note, but a singer or saxophonist can, while the piano plays other notes. The 11th harmonic is sounded -- to magical effect, I think -- by the French horn player in Benjamin Britten's Serenade for tenor, horn and strings. (This performance by horn player Radovan Vlatković respects the composer's instructions. Accurate 11th, 7th, and 14th harmonic pitches are heard in the first minute or so.)

But -- I hear you ask -- why are these particular sets and subsets of pitches so special to the brain, and what has all this to do with evolution and survival? The answer begins with the defining property of a harmonic series, namely that its frequencies are whole-number multiples of the fundamental frequency or first harmonic. It follows that sound waves made up of any set of pitches taken from a harmonic series, played together in any combination, with any relative strengths, take a very simple form. The waveform precisely repeats itself at a single frequency. That's 65.4 times per second in the case of Figure 9. The Tristan chord, tuned as in Figure 9, produces just such a repeating waveform. A famous theorem of Joseph Fourier tells us that any repeating waveform corresponds to some combination of harmonic-series pitches, as long as you allow an arbitrary number of harmonics. Repeating waveforms, then, are mathematically equivalent to sets or subsets of harmonic-series pitches -- mathematically equivalent, even if psychologically very different.
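The repetition is easy to verify numerically. The sketch below (Python with NumPy; equal-strength sine components, chosen purely for illustration) builds a Tristan-chord waveform from harmonics 6, 7, 9 and 10 of 65.4Hz and confirms that it repeats 65.4 times per second:

```python
import numpy as np

f0 = 65.4                          # fundamental frequency, Hz
harmonics = [6, 7, 9, 10]          # the Tristan chord, tuned to the series
t = np.linspace(0.0, 0.1, 20000)   # a tenth of a second of signal

def wave(t):
    # Equal-strength sine components, purely for illustration
    return sum(np.sin(2.0 * np.pi * n * f0 * t) for n in harmonics)

# Shifting by one period of the fundamental reproduces the waveform exactly,
# even though the fundamental itself is not sounded (gcd of 6,7,9,10 is 1)
assert np.allclose(wave(t), wave(t + 1.0 / f0))

# Shifting by a fraction of that period does not
assert not np.allclose(wave(t), wave(t + 0.5 / f0))
```

The same check works for any subset of harmonics with any relative strengths, exactly as Fourier's theorem says.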

Our neural circuitry is good at timing things. It has evolved to give special attention to repeating waveforms because they're important for survival in the natural world. Many animal sounds are produced by vibrating elements in a larynx, or a syrinx in the case of birds. Such vibrations will often repeat themselves, to good accuracy, for many cycles, as the vibrating element oscillates back and forth like the reed of a saxophone or clarinet. So repeating waveforms at audio frequencies are important for survival because it's important, for survival, to be able to pick out individual sound sources from a jungleful of animal sounds. This rather astonishing feat of model-fitting is similar to that of a musician skilled in picking out sounds from individual instruments, when an orchestra is playing. It depends on having a repertoire of model components that include repeating waveforms. That is, exactly repeating waveforms are among the simplest model components needed by the hearing brain to help identify sound sources, just as smooth curves are among the simplest model components needed by the visual brain to help identify objects. That's why repeating waveforms have the status of Platonic objects, or forms, for the hearing brain, just as smooth curves do for the visual brain. Both contribute to making sense of a complex visual scene, or of a complex auditory scene, as the case may be, while being as simple as possible.

In summary, then, for survival's sake the hearing brain has to be able to carry out auditory scene analysis, and therefore has to know about repeating waveforms -- has to include them in its repertoire of unconscious model components available for fitting to the incoming acoustic signals, in all their complexity. And that's mathematically equivalent to saying that the unconscious brain has to know about the harmonic series.

The accuracy with which our neural circuitry can measure the frequency of a repeating waveform reveals itself via musicians' pitch discrimination. Experience shows that the musical ear can judge pitch to accuracies of the order of a few cents, that is, to a few hundredths of what musicians call a semitone, the interval between adjacent pitches on a keyboard or guitar fingerboard. It used to be thought, incidentally, that our pitch discrimination is mediated by the inner ear's basilar membrane. That's wrong because, although the basilar membrane does carry out some frequency filtering, that filtering is far too crude to account for accurate pitch discrimination.
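The cent, by the way, has a simple definition: the interval between two frequencies, measured in cents, is 1200 times the base-2 logarithm of their ratio, so that an octave is 1200 cents and an equal-tempered semitone is 100. Here's a small Python sketch, with an illustrative mistuning of my own choosing:

```python
import math

def cents(f_high, f_low):
    """Size of the interval between two frequencies in cents
    (100 cents = one equal-tempered semitone)."""
    return 1200 * math.log2(f_high / f_low)

# One semitone on a keyboard is a frequency ratio of 2**(1/12):
assert round(cents(440 * 2 ** (1 / 12), 440)) == 100

# A mistuning of 442 Hz against 440 Hz comes to about 8 cents --
# around the limit of what a good musical ear can detect:
print(round(cents(442, 440), 1))
```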

Auditory scene analysis isn't exclusive to humans. So it should be no surprise to find that other creatures can perceive pitch to similar accuracies. The European cuckoo comes to mind. I've heard two versions of its eponymous two-note call in the English countryside. One of them matched the 6th and 5th harmonics with moderate accuracy, and the other the 5th and 4th. The composer Frederick Delius used both versions in his famous piece On Hearing the First Cuckoo in Spring. They are woven with exquisite subtlety into the gentle, lyrical music, from just past two minutes into the piece. Among other pieces of music quoting cuckoo calls, the most famous, perhaps, are Beethoven's Pastoral Symphony and Saint-Saëns' Carnival of the Animals. Both use only the second version, the version matching the 5th and 4th harmonics.


In New Zealand, where I grew up, I heard even clearer examples of accurate avian pitch perception -- much to my youthful astonishment. Two of them came from a wonderfully feisty native bird called the tui, also called the parson bird because of its white bib worn against dark plumage (Figure 10 below). Tuis have a vast repertoire of complex and virtuosic calls, but as a schoolboy on summer holidays in the Southern Alps of New Zealand I encountered a particular bird that liked to sing exceptionally simple tunes, using accurate harmonic-series pitches. The bird sounded these pitches with an accuracy well up to the standards of a skilled human musician, and distinctly better than the average cuckoo.

That was in the 1950s, before the days of cheap tape recorders, but I'd like to put on record what it sounded like. I can recall two of the tunes with complete clarity. Imagine a small, accurately tuned xylophone echoing through the trees of a beech forest in the Southern Alps. Here's an audio clip reconstructing one of the tunes. It uses the 4th, 5th and 10th harmonics from Figure 9, though two octaves higher. The second tune, in this audio clip, uses the 5th, 6th, 8th and 10th together with two notes from another harmonic series. To a human musician, both tunes are in C major. The bird always sang in C major, sounding the notes as accurately as any human musician. In the second tune, the third and fourth notes are the 6th and 5th harmonics of B flat. The rhythms are exceptionally simple and regular. Each tune is terminated by a complicated burst of sound, which I could imitate only crudely in the reconstructions. Such complicated sounds, with no definite, single pitch, are more typical of tui utterances. The bird sang just one tune, or slight variants of it, in the summer of one year, and the other tune in another year, sometime in the 1950s.

Even though tuis are famous for their skills in mimicry, I think we can discount the possibility that this bird learned its tunes by listening to a human musician. The location was miles away from the nearest human habitation, other than our holiday camp; and we had no radio or musical instrument apart from a guitar and a descant recorder. The recorder is a small blockflute with a pitch range overlapping the bird's. When the bird was around, I used to play various tunes on the recorder, in the same key, C major, but never got any response. It was as if the bird felt that my efforts were unworthy of notice. It usually fell silent, then later on started up with its own tune again.

An internet search turns up many examples of tui song, but I have yet to find an example remotely as simple and tuneful as the C major songs I heard. And my impression at the time was that such songs are exceptional among tuis. I did, however, find a more complex tui song that again demonstrates supremely accurate pitch perception. It is interesting in another way because one hears two notes at once, accurately tuned against each other. As is well known, the avian syrinx can be used like a pair of larynxes, to sound two notes at once. This song can be notated fairly accurately in standard musical notation, as shown in Figure 10, where audio clips are provided in the caption:

Tui song recorded by Les McPherson

Figure 10: A more complex tui song, recording courtesy of Les McPherson. Here are two links to the recording, one at actual speed, with one partial and three complete repetitions of the song -- tuis often repeat fragments of songs -- and the other slowed to half speed to make the detail more accessible to human hearing.

At the first occurrence of two notes together, they are accurately tuned to the spacing of the 3rd and 4th harmonics (of B natural), the interval that musicians call a perfect fourth, notoriously sensitive to mistuning. To my ear, when I listen to the half-speed version, the bird hits the perfect fourth very accurately before sliding up to the next smallest harmonic-series spacing, that of the 4th and 5th harmonics (of G), what is called a major third, again tuned very accurately. At half speed this musical fragment is playable on the violin, complete with the upward slide, as I've occasionally done in lectures.
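The harmonic-series spacings just mentioned are easy to quantify in cents, using the standard formula for the size of a frequency ratio. A short Python sketch, comparing the pure intervals with their equal-tempered keyboard approximations:

```python
import math

def interval_cents(ratio):
    # size of a frequency ratio in cents; a semitone is 100 cents
    return 1200 * math.log2(ratio)

# Spacing of the 3rd and 4th harmonics -- the pure perfect fourth:
print(round(interval_cents(4 / 3)))   # 498 cents; the keyboard fourth is 500

# Spacing of the 4th and 5th harmonics -- the pure major third:
print(round(interval_cents(5 / 4)))   # 386 cents; the keyboard third is 400
```

The pure fourth sits only 2 cents away from its keyboard approximation, which is one way of seeing why any mistuning of a fourth is so noticeable.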

Another New Zealand bird that's known to sing simple, accurately-pitched tunes is the North Island kokako, a crow-sized near-ground dweller, shown in Figure 11:

North Island Kokako

Figure 11: North Island kokako. The transcription corresponds only to the start of its song in this audio recording, again courtesy of Les McPherson, which continues in a slightly more complicated way ending with the 8th harmonic of 110Hz A, to my ear creating a clear key-sense of A major. The first three notes, those shown in the transcription, are close to the 6th, 7th, and 5th harmonics of the same A.

I want to mention one more connection between music and mathematics -- yet another connection that's not mentioned in the standard accounts, confined as they are to games with numbers. (Of course composers have always played with numbers to get ideas, but that's beside the point.)

The point is that there are musical counterparts to illusory contours. Listen to this audio clip from the first movement of Mozart's piano sonata K 545, whose second movement was quoted in Figure 5. After the first eight seconds or so one hears a smooth, flowing passage of fast notes that convey a sense of continuous motion, a kind of musical smooth curve, bending upward and downward in this case. Mozart himself used to remark on this smoothness. In his famous letters he would describe such passages as flowing "like oil" when played well enough. But as with the black segments in Figure 8, there is no smoothness in the actual sounds. The actual sounds are abrupt, percussive sounds, distinct and separate from each other. Of course hearing works differently from vision, and the analogy is imperfect. To give the impression of smoothness in the musical case the notes have to be spaced evenly in time, with adjacent notes similar in loudness. Mozart once admitted that he'd had to practise hard to get the music flowing like oil. When the notes are not spaced evenly, as in this clip, the smoothness disappears -- a bit like the vestiges of an illusory contour that some of us can see joining the outer edges of the black segments in Figure 8.

Coming back to musical pitch perception for a moment, if you're interested in perceived pitch then you may have wondered how it is that violinists, singers and others can use what's called `vibrato' while maintaining a clear and stable sense of pitch. Vibrato can be shaped to serve many expressive purposes and is an important part of performance in many Western and other musical genres. The performer modulates the frequency by variable amounts far greater than the pitch discrimination threshold of a few cents, up and down over a range that's often a hundred cents or even more, and at a variable rate that's typically within the range of 4 to 7 complete cycles per second depending on the expressive purpose. There is no corresponding fluctuation in perceived pitch. That, however, depends on the fluctuation being rapid enough. A recording played back at half speed or less tends to elicit surprise when heard for the first time, the perception then being a gross wobble in the pitch. Here's a 26-second audio clip in which a violin, playing alone, begins a quiet little fugue before a piano joins in. The use of vibrato is rather restrained. Yet when played at half speed the pitch-wobble becomes surprisingly gross and unpleasant, to my ear at least.

It appears that in order to judge pitch the musical brain does not carry out Fourier analysis but, rather, counts repetitions of neural firings over timespans up to two hundred milliseconds or thereabouts, long enough to span a vibrato cycle. This idea was called the `long-pattern hypothesis' by Boomsliter and Creel (1961) in a classic discussion. It accounts not only for the vibrato phenomenon but also for several other phenomena familiar to musicians, including degrees of tolerance to slightly mistuned chords. Another musically significant aspect is that vibrato can influence the perceived tone quality. For instance quality can be perceived as greatly enriched when the strengths of different harmonics fluctuate out of step with each other, as happens with the sound of violins and other bowed-stringed instruments. The interested reader is referred to Figure 3 of the review by McIntyre and Woodhouse (1978). It seems that the unconscious brain has a special interest in waveforms that repeat themselves while slightly varying their shapes, as well as their periods, continuously over a long-pattern timescale. Probably that's because, by contrast with trills and tremolos, the continuity in a vibrato pattern makes it an organically-changing pattern.

Let's return for a moment to theoretical-physics fundamentals. Regarding models that are made of mathematical equations, there's an essay that every physicist knows of, I think, by the famous physicist Eugene Wigner, about the `unreasonable effectiveness of mathematics' in representing the real world. But what's unreasonable is not the fact that mathematics comes in. As I keep saying, mathematics is just a means of handling many possibilities at once, in a precise and self-consistent way. What's unreasonable is that very simple mathematics comes in when you build accurate models of sub-atomic Nature. It's not the mathematics that's unreasonable; it's the simplicity.

So I think Wigner should have talked about the `unreasonable simplicity of sub-atomic Nature'. It just happens that at the level of electrons and other sub-atomic particles things look astonishingly simple. That's just the way nature seems to be at that level. And of course it means that the corresponding mathematics is simple too. One of the greatest unanswered questions in physics is whether things stay simple, or not, when we zoom in to the far smaller scales at which quantum mechanics and gravity mesh together.

As is well known, and widely discussed under headings such as `Planck length', `proton charge radius', and `Bohr radius', we are now talking about scales of the order of a hundred billion billion times smaller than the diameter of a proton, and ten million billion billion times smaller than the diameter of a hydrogen atom -- well beyond the range accessible to observation and experimentation. At those small scales, things might for instance be complex and chaotic, like turbulent fluid flow, with order emerging out of chaos and making things simple only at much larger scales. Such possibilities have been suggested for instance by my colleague Tim Palmer, who has thought deeply about these issues -- and about their relation to the vexed questions at the foundations of quantum mechanics (e.g. Palmer 2019) -- alongside his better-known work on the chaotic dynamics of weather and climate.

Postlude: the amplifier metaphor for climate

Journalist to scientist during a firestorm, flash flood, or other weather extreme such as Cyclone Idai or Hurricane Dorian: `Tell me, Professor So-and-So, is this a one-off extreme -- pure chance -- or is it due to climate change?' Well -- once again -- dichotomization does make us stupid, doesn't it. The professor needs to say `Hey, this isn't an either-or. It's both of course. Climate change produces long-term upward trends in the probabilities of extreme weather events, and in their intensities.' This point is, at long last, gaining traction as devastating weather extremes become more frequent and more intense.

How significant are these upward trends? Here's one way to look at the long-established scientific consensus. Chapter 1 mentioned audio amplifiers and two different questions one might ask about them: firstly what powers them, and secondly what they're sensitive to. Pulling the amplifier's power plug corresponds to switching off the Sun. But in the climate system is there anything corresponding to an amplifier's sensitive input circuitry? For many years now, we've had a clear answer yes. And we have practical certainty that the upward trends are highly significant, that they're mostly caused by humans, and that weather extremes will become more frequent and more intense. They can be expected to become more intense in both directions -- cold extremes as well as hot, wet extremes as well as dry. What's uncertain is how long it will take and how far it will go, though unfortunately the extremes now appear to be ramping up sooner rather than later. They've been underestimated by the climate prediction models, for reasons I'll come to.

The climate system is -- with certain qualifications to be discussed below -- a powerful but slowly-responding amplifier with sensitive inputs. Among the amplifier's sensitive inputs are small changes in the Earth's tilt and orbit. They have repeatedly triggered large climate changes, with global mean sea levels going up and down by well over 100 metres. Those were the glacial-interglacial cycles encountered in chapter 2, `glacial cycles' for brevity, with overall timespans of about a hundred millennia per cycle. And `large' is a bit of an understatement. As is clear from the sea levels and the corresponding ice-sheet changes, those climate changes were huge by comparison with the much smaller changes projected for the twenty-first century. I'll discuss the sea-level evidence below.

Another sensitive input is the injection of carbon dioxide into the atmosphere, for instance by burning fossil fuels. Carbon dioxide, whether injected naturally or artificially, has a central role in the climate-system amplifier not only as a plant nutrient but also as our atmosphere's most important non-condensing greenhouse gas. Without recognizing that central role it's impossible to make sense of climate behaviour in general, and of the huge magnitudes of the glacial cycles in particular. Those cycles depended not only on the small orbital changes, and on the sensitive dynamics of the great land-based ice sheets, but also on natural injections of carbon dioxide into the atmosphere from the deep oceans. Of course to call such natural injections `inputs' is strictly speaking incorrect, except as a thought-experiment, but along with the great ice sheets they're part of the amplifier's sensitive input circuitry as I'll try to make clear.

The physical and chemical properties of so-called greenhouse gases are well established and uncontentious, with very many cross-checks. Greenhouse gases in the atmosphere make the Earth's surface roughly 30°C warmer than it would otherwise be. For reasons connected with the properties of heat radiation, any gas whose molecules have three or more atoms can act as a greenhouse gas. (More precisely, to interact strongly with heat radiation the gas molecules must have a structure that supports a fluctuating electric `dipole moment' at the frequency of the heat radiation, of the order of tens of terahertz or tens of millions of millions of cycles per second.) Examples include carbon dioxide, water vapour, methane, and nitrous oxide. By contrast, the atmosphere's oxygen and nitrogen molecules have only two atoms and are very nearly transparent to heat radiation.
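That `tens of terahertz' figure follows from Wien's displacement law, taking the Earth's mean surface temperature as roughly 288 kelvin. A minimal sketch in Python, using the standard frequency form of the law:

```python
# Wien's displacement law in its frequency form: the peak of
# blackbody emission sits at about 58.79 GHz per kelvin.
WIEN_FREQ_PER_K = 5.879e10  # Hz per kelvin

T_SURFACE = 288  # K, roughly the Earth's mean surface temperature
nu_peak = WIEN_FREQ_PER_K * T_SURFACE

# Peak of the Earth's heat radiation, in terahertz -- about 17,
# squarely in the 'tens of terahertz':
print(nu_peak / 1e12)
```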

One reason for the special importance of carbon dioxide is its great chemical stability as a gas. Other carbon-containing, non-condensing greenhouse gases such as methane tend to be converted fairly quickly into carbon dioxide. Fairly quickly means within a decade or two, for methane. And of all the non-condensing greenhouse gases, carbon dioxide has always had the most important long-term heating effect, because of its chemical stability, not only today but also during the glacial cycles. That's clear from ice-core data, to be discussed below, along with the well established heat-radiation physics.

Water vapour has a central but entirely different role. It too is chemically stable but, unlike carbon dioxide, can and does condense or freeze, in vast amounts, as well as being copiously supplied by evaporation from the tropical oceans. This solar-powered supply of water vapour -- sometimes called `weather fuel' because of the `latent' thermal energy released on condensing or freezing -- makes it part of the climate-system amplifier's power-supply or power-output circuitry rather than its input circuitry. Global warming is also global fuelling, because air can hold between six and seven percent more weather fuel for every degree Celsius rise in temperature. The power output includes tropical and extratropical thunderstorms and cyclonic storms, including those that produce the most extreme rainfall, flooding, and wind damage. The energy released dwarfs the energies of thermonuclear bombs. Cyclone Idai, which caused such devastation in Mozambique, and Hurricane Dorian, which flattened large areas of the Bahamas, and other recent examples including typhoons impacting the Philippines, remind us what those huge energies mean in reality.
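The six-to-seven-percent figure comes from the Clausius-Clapeyron relation governing saturation vapour pressure. Here's a small Python sketch using a Magnus-type empirical approximation to that relation (the particular coefficients are one common choice; others exist):

```python
import math

def saturation_vapour_pressure(T_celsius):
    """Saturation vapour pressure of water in hPa, from a Magnus-type
    empirical approximation to the Clausius-Clapeyron relation."""
    return 6.112 * math.exp(17.67 * T_celsius / (T_celsius + 243.5))

# Fractional gain in moisture-holding capacity per degree Celsius,
# at a few representative surface temperatures:
for T in (0, 15, 30):
    gain = saturation_vapour_pressure(T + 1) / saturation_vapour_pressure(T) - 1
    print(f"{T:2d} degC: {100 * gain:.1f} % more weather fuel per degC")
```

The gain comes out at roughly six to seven percent per degree across ordinary surface temperatures, slightly higher at the cold end and slightly lower at the warm end.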

Extremes tend to become more extreme because, for instance, a thunderstorm surrounded by more weather fuel becomes more vigorous and can suck the fuel up faster -- a positive feedback in the runup to peak intensity. The peak intensity is greater as well as sooner. This point is sometimes missed in statistical studies that focus on average intensities, rather than on peak intensities and their consequences such as flash floods. And extremes go in both directions. More weather fuel makes the whole climate system more active and vigorous, and its fluctuations larger, including fluctuations coming from the amplified meandering of jetstreams. An example was the `Beast from the East' that brought extreme winter conditions to the UK in late February 2018.

A century or two ago, the artificial injection of carbon dioxide into the atmosphere was only a thought-experiment. It interested only a few scientists, such as Joseph Fourier, John Tyndall, and Svante Arrhenius. Tyndall did simple but ingenious laboratory experiments to show how heat radiation interacts with carbon dioxide. For more history and technical detail I strongly recommend the textbook by Pierrehumbert (2010). Today, inadvertently, we're doing such an injection experiment for real. And we now know that the consequences will be very large indeed.

How can I say that? As with the ozone-hole problem, it's a matter of spotting what's simple about a problem at first sight hopelessly complex. But I also want to sound a note of humility. All I'm claiming is that the climate-science community now has enough insight, enough in-depth understanding, and enough cross-checks, to say that the climate system is sensitive to carbon dioxide injections by humans, and that the consequences will be very large. The main hope now is that, because of the slow response of the climate-system amplifier, there might still be enough time to ward off the worst consequences over the coming decades and centuries, if the political will can be found (e.g. Farmer et al. 2019) -- and, thankfully, signs that it can be found have now begun to emerge.

The sensitivity of the climate system is not, by the way, what's meant by the term `climate sensitivity' encountered in many of the community's technical writings. There are various technical definitions, in all of which atmospheric carbon dioxide values are increased by some given amount but, in all of which, artificial constraints are imposed. The constraints are often left unstated.

Imposing the constraints often corresponds to a thought-experiment in which important parts of the system -- such as the deep oceans, polar ice, methane-containing permafrost, and the biosphere -- are all held fixed in an artificial and unrealistic way. Adding to the confusion, the state reached under some set of artificial constraints is sometimes called `the equilibrium climate', as if it represented some conceivable reality. And, to make matters even worse, attention is often confined to global-mean temperatures -- to `global warming', not even taking note of `global fuelling' -- concealing all the many other aspects of climate change including ocean heat content, and the probabilities of weather extremes. Dear reader, I did warn you, didn't I, that human language is a conceptual minefield. Worst of all, the artificial constraints and talk of climate `equilibrium' have concealed the possibility of what's now called `runaway climate change', another topic that I'll try to discuss seriously.

Many climate scientists try to minimize confusion by spelling out which thought-experiment they have in mind. That's an important example of the explicitness principle in action. And the thought-experiments and the computer model experiments are improving year by year. As I'll try to clarify further, the climate system has many different `sensitivities' depending on the choice of thought-experiment. That's one reason why the amplifier metaphor needs qualification. In technical language, we're dealing with a system that's highly `nonlinear'. The response isn't simply proportional to the input. In audio-amplifier language, there's massive distortion and internal noise -- random fluctuations on many timescales. In these respects the climate-system amplifier is very unlike an audio amplifier. We still, however, need some way of talking about climate that recognizes some parts of the system as being more sensitive than others.

And there are still many serious uncertainties on top of all the communication difficulties. But over the past twenty or thirty years our understanding has become good enough, deep enough, and sufficiently cross-checked to show that the uncertainties are mainly about the precise timings and sequence of events over the coming decades and centuries. These coming events will include nonlinear step changes or `tipping points', perhaps in succession like dominoes falling, as now seems increasingly likely, and perhaps leading to runaway climate change toward an ice-free, extremely stormy planet with sea levels hundreds of feet higher than today. The details remain highly uncertain. But in my judgement there's no significant uncertainty about the response being very large, sooner or later, and practically speaking permanent and irreversible -- with recovery timescales of the order of a hundred millennia (e.g. Archer 2009), practically speaking infinite from a human perspective. `Runaway climate change' is yet another ambiguous term in the technical literature, as I'll try to explain below in connection with the hothouse early Eocene, roughly fifty million years ago, whose extreme storminess may have led to what became the first whales and dolphins.

Science is one thing and politics is another. I'm only a humble scientist. I've spent most of my life trying to understand how things work. My aim here is to get the most robust and reliable aspects of climate science stated clearly, simply, accessibly, and dispassionately, along with the implications, under various assumptions about the politics. I'll draw on the wonderfully meticulous work of many scientific colleagues including the late Nick Shackleton and his predecessors and successors, who have laboured so hard, and so carefully, to tease out information about past climates. Past climates, especially those of the past several hundred millennia, are our main source of information about the workings of the real system, taking full account of its vast complexity all the way down to the details of thunderstorms, forest canopies, soil ecology, ocean plankton, and the tiniest of eddies in the oceans and in the atmosphere.

Is such an exercise useful at all? The optimist in me says it is. And I hope, dear reader, that you might agree because, after all, we're talking about the Earth's life-support system and the possibilities for some kind of future civilization. Life on Earth will almost certainly survive -- even if we go into a new Eocene -- but unless we're careful human civilization might not.

In recent decades there's been a powerful disinformation campaign about climate. Significant aspects of the problem are ignored or camouflaged, including the different roles of carbon dioxide and water vapour. Scientists' efforts to get closer to an in-depth understanding of the problem, in all its fearsome complexity -- daring to look at it from more than one viewpoint -- are portrayed as duplicitous and dishonest. The `climategate' episode of 2009 was an example. The campaign used out-of-context quotes from emails among scientists trying to estimate global-mean temperatures over the past millennium. Such estimates have formidable technical difficulties, such as allowing for the ill-understood effects of industrial chemicals on tree rings. The scientists were not being dishonest; they were struggling to work out how to allow for such effects. For me it's a case of déjà vu, because the earlier ozone disinformation campaign -- which I encountered at close quarters during my professional work -- was strikingly similar.

We now know that that similarity was no accident. According to extensive documentation cited in Oreskes and Conway (2010) -- including formerly secret documents now exposed through anti-tobacco litigation -- the current climate-disinformation campaign was seeded, originally, by the same professional disinformers who masterminded the ozone-hole disinformation campaign and, before that, the tobacco companies' lung-cancer campaigns. The secret documents describe how to manipulate the newsmedia and sow confusion in place of understanding. For climate the confusion has spread into some parts of the scientific community, including a few influential senior scientists who are not, to my knowledge, among the professional disinformers and their political allies but who have tended to focus too narrowly on the shortcomings of the climate prediction models, ignoring the many other lines of evidence. And such campaigns and their political fallout are, of course, threats to other branches of science as well, and indeed to the very foundations of good science. The more intense the politicization, the harder it becomes to live up to the scientific ideal and ethic.

One reason why the amplifier metaphor is important despite its limitations is what's already been mentioned, that the climate disinformers ignore it when comparing water vapour with carbon dioxide. They use the copious supply of water vapour from the tropical oceans and elsewhere as a way of suggesting that the relatively small amounts of carbon dioxide are unimportant for climate. That's like focusing on an amplifier's power-output circuitry and ignoring the input circuitry, exactly the `energy budget' mindset mentioned in chapter 1. Again, I get a sense of déjà vu. The disinformers used exactly the same tactic with the ozone hole, saying that the pollutants alleged to cause it were present in such tiny amounts that they couldn't possibly be important. Well, for ozone there's an amplifier mechanism too. It's called chemical catalysis. One molecule of pollutant can destroy thousands of ozone molecules.

And now, as I write, in 2020, climate politics suddenly seems to have reached the point that ozone politics reached in 1985, when the ozone hole was discovered. Both cases show the same pattern -- sober scientific discourse being rubbished by the disinformers, only to be overtaken by a wave of public concern powered by real events. And in both cases the real effects of pollutants have turned out to be worse than predicted, rather than better. The shortcomings of the prediction models, especially their earliest versions, led to underprediction rather than overprediction. That's the exact opposite of what the disinformers have always claimed about the models, namely that the models are bad (true, especially the earliest versions, both for ozone and for climate) and `therefore' that they overpredict (false). And on top of the climate models' underprediction -- especially as regards the rate of sea-level rise and the probabilities of weather extremes -- scientific caution and fear of being accused of scaremongering led, for many years, to yet further understatement of the dangers.

In all humility, I think I can fairly claim to be qualified as a dispassionate observer of the climate-science scene. My own professional work has never been funded for climate science as such. However, my professional work on the ozone hole and the fluid dynamics of the great jetstreams has taken me quite close to research issues in the climate-science community. Those of its members whom I know personally are ordinary, honest scientists, respectful of the scientific ideal and ethic. They include many brilliant thinkers and innovators. They've tried hard to cope with daunting complexity. Again and again, I've heard members of the community giving careful conference talks on the latest findings, and taking questions. They are concerned about the shortcomings of climate prediction models, about the difficulty of weeding out data errors, and about the need to avoid superficial viewpoints and exaggerated claims. Those concerns are reflected in the restrained and cautious tone of the vast reports published by the Intergovernmental Panel on Climate Change (IPCC). The reports make heavy reading but contain reliable technical information about the basic physics and chemistry I'm talking about such as, for instance, the magnitude of greenhouse-gas heating as compared with variations in the Sun's output -- the slightly variable solar `constant'. Another source of reliable technical information is the textbook by Pierrehumbert (2010) already mentioned.

Another of the disinformers' ploys has been to say that climate change is due mainly to solar variation. As it happens, my own professional work has involved me in solar physics as well; and my judgement on that aspect is that recent IPCC assessments of solar variation are substantially correct, namely that solar variation is too small to compete with past and present carbon-dioxide injections. That's based on very recent improvements in our understanding of solar physics, to be mentioned below.

*   *   *

Let's pause for a moment to draw breath. At the risk of trying your patience, dear reader, I want to be more specific on how past climates have informed us about these issues, using the latest advances in our understanding. I'll try to discuss the leading implications and the reasoning behind them, and the different senses of the technical term `runaway'. The focus will be on implications that are extremely clear and extremely robust. They are independent of fine details within the climate system, and independent of the shortcomings of the climate prediction models.

*   *   *

The first point to note is that human activities are increasing the carbon dioxide in the atmosphere by amounts that will be large.

They will be large in the only relevant sense, that is, large by comparison with the natural range of variation of atmospheric carbon dioxide with the Earth system close to its present state. The natural range is well determined from Antarctic ice-core data, recording how atmospheric carbon dioxide varied over the hundred-millennium glacial cycles. That's one of the hardest, clearest, most unequivocal pieces of evidence we have. It comes from the ability of ice to trap air, beginning with compacted snowfall, giving us clean air samples from the past 800 millennia from which carbon dioxide concentrations can be reliably measured.

In round numbers the natural range of variation of atmospheric carbon dioxide is close to 100 ppmv, 100 parts per million by volume.

The increase since pre-industrial times now exceeds 120 ppmv. In round numbers we've gone from a glacial 180 ppmv through a pre-industrial 280 ppmv up to today's values, well over 400 ppmv. And on current trends -- which the climate disinformers aim to continue -- values will have increased to 800 ppmv or more by the end of this century. An increase from 180 to 800 ppmv is an increase of the order of six times the natural range of variation. Whatever happens, therefore, the climate system will be like a sensitive amplifier subject to a large new input signal, the only question being just how large -- just how many times larger than the natural range.
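The arithmetic behind that "six times" can be checked in a few lines. This is an illustrative sketch only, using the round ppmv numbers quoted above:

```python
# Round numbers from the text -- illustrative arithmetic, not measurements.
glacial_min = 180       # ppmv, glacial minimum
preindustrial = 280     # ppmv, pre-industrial value
end_century = 800       # ppmv, plausible end-of-century value on current trends

natural_range = preindustrial - glacial_min   # the ~100 ppmv natural range
injection = end_century - glacial_min         # total buildup from glacial minimum

multiples = injection / natural_range
print(multiples)  # 6.2 -- of the order of six times the natural range
```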

For comparison with, say, 800 ppmv, the natural variation across recent glacial cycles has been roughly from minima around 180-190 ppmv to maxima around 280-290 ppmv but then back again, i.e., in round numbers, over the aforementioned natural range of about 100 ppmv -- repeatedly and consistently back and forth over several hundreds of millennia (recall Figure 3 in chapter 2). The range appears to have been determined largely by deep-ocean storage and leakage rates. Storage of carbon in the land-based biosphere, and input from volcanic eruptions, appear to have played only secondary roles in the glacial cycles, though wetland biogenic methane emissions are probably among the significant amplifier mechanisms or positive feedbacks.

Recent work (e.g. Shakun et al. 2012, Skinner et al. 2014) has been clarifying how the natural 100 ppmv carbon-dioxide injections involved in `deglaciations', the huge transitions from the coldest to the warmest extremes of the glacial cycles, arose mainly by release of carbon dioxide from the deep oceans through a complex interplay of ice-sheet, sea-ice, and ocean-circulation changes, and through many other events in a complicated sequence triggering positive feedbacks -- the whole sequence having been initiated then reinforced by small orbital changes.

The deglaciations show us just how sensitive the climate-system amplifier can be. What I'm calling its input circuitry includes ice-sheet dynamics and what's called the natural `carbon cycle'. It depends on deep-ocean storage of carbon dioxide (mostly as bicarbonate ions), on chemical and biochemical transformations on land and in the oceans, on complex groundwater, atmospheric and oceanic flows down to the finest scales of turbulence, on sea-ice cover and upper-ocean layering and indeed on biological and ecological adaptation and evolution -- nearly all of which is outside the scope of the climate prediction models. Much of it is also outside the scope of specialist carbon-cycle models, if only because such models grossly oversimplify the transports of carbon and biological nutrients by fluid flows, within and across the layers of the sunlit upper ocean for instance. But we know that the input circuitry was sensitive during deglaciations without knowing all the details of the circuit diagram. It's the only way to make sense of the records in ice cores, in caves, and in the sediments under lakes and oceans, which tell us many things about the climate system's actual past behaviour.65, 66

The records showing the greatest detail are those covering the last deglaciation. Around 18 millennia ago, just after the onset of an initiating orbital change, atmospheric carbon dioxide started to build up from a near-minimum glacial value around 190 ppmv toward the pre-industrial 280 ppmv. Around 11 millennia ago, it was already close to 265 ppmv. That 75 ppmv increase was the main part of what I'm calling a natural injection of carbon dioxide into the atmosphere. It must have come from deep within the oceans since, in the absence of artificial injections by humans, it's only the deep oceans that have the ability to store the required amounts of carbon, in suitable chemical forms. Indeed, land-based storage worked mostly in the opposite direction as ice retreated and forests spread.

The oceans not only have more than enough storage capacity, as such, but also mechanisms to store and release extra carbon dioxide, involving limestone-sludge chemistry as well as carbonate and bicarbonate ions in the water (e.g. Marchitto et al. 2006). How much carbon dioxide is actually stored or released is determined by a delicate competition between storage rates and leakage rates. For instance one has storage via dead phytoplankton sinking from the sunlit upper ocean into the deepest waters. That storage process is strongly influenced, it's now clear, by details of the ocean circulation, especially the highly variable Atlantic `overturning circulation' linking the Greenland area to Antarctica and the circumpolar Southern Ocean. The overturning circulation has deep and shallow branches and has strong effects on interhemispheric heat transport, on gas exchange with the atmosphere, and on phytoplankton nutrient supply and uptake all of which is under scrutiny in current research (e.g. Le Quéré et al. 2007; Burke et al. 2015; Watson et al. 2015, & refs.).

In addition to the ice-core record of atmospheric carbon-dioxide buildup starting 18 millennia ago, we have hard evidence for what happened to sea levels. There are two independent lines of evidence, a point to which I'll return. The fastest rate of sea level rise was between about 16 and 7 millennia ago, during which time the level rose by more than 100 metres. The total sea level rise over the whole deglaciation was well over 100 metres, perhaps as much as 140. It required the melting of huge volumes of land-based ice.

Our understanding of how the ice melted is incomplete, but it involved a complex interplay between the orbital changes and ice flow, meltwater flow, and ocean-circulation and sea-ice changes releasing carbon dioxide. The main carbon-dioxide injection starting 18 millennia ago must have significantly amplified the warming and melting. I'll return to this scenario with a closer look at some of the evidence, to be summarized in Figure 13 below.

The orbital changes are well known and have been calculated very precisely over far greater, multi-million-year timespans. Such calculations are possible because of the remarkable stability of the solar system's planetary motions. The orbital changes include a 2° oscillation in the tilt of the Earth's axis (between about 22° and 24°) and a precession that keeps reorienting the axis relative to the stars. These orbital changes redistribute solar heating in latitude and time while hardly changing its average over the globe and over seasons. Figure 12, taken from Shackleton (2000), shows one of the resulting effects, the way in which the midsummer peak in solar heating at 65°N has varied over the past 400 millennia. The time between peaks is the precession period, just over 20 millennia. Time in millennia runs from right to left:

[Figure 12 image: from Fig. 1 of Shackleton (2000)]

Figure 12: Midsummer diurnally-averaged insolation at 65°N, in W m-2, from Shackleton (2000), using orbital calculations carried out by André Berger and co-workers. They assume constant solar output but take careful account of variations in the Earth's orbital parameters in the manner pioneered by Milutin Milanković. Time in millennia runs from right to left.

The vertical scale on the right is the local, diurnally-averaged midsummer heating rate from incoming solar radiation at 65°N, in watts per square metre. It is these local peaks that are best placed to boost summertime melting on the northern ice sheets. One gets a peak when closest to the Sun with the North Pole tilted toward the Sun. However, such melting is not in itself enough to produce a full deglaciation. Only one peak in every five or so is associated with anything like a full deglaciation. They are the peaks marked with vertical bars. The timings can be checked from Figure 3 in chapter 2. The marked peaks were accompanied by the biggest carbon-dioxide injections, as measured by atmospheric concentrations reaching 280 ppmv or more. It's noteworthy that, of the two peaks at around 220 and 240 millennia ago, it's the smaller peak around 240 millennia that's associated with the bigger carbon-dioxide and temperature response. The bigger peak around 220 millennia is associated with a somewhat smaller response.

In terms of the amplifier metaphor, therefore, we have an input circuit whose sensitivity varies over time. In particular, the sensitivity to high-latitude solar heating must have been greater at 240 than at 220 millennia ago. That's another thing we can say independently of the climate prediction models.

There are well known reasons to expect such variations in sensitivity. One is that the system became more sensitive when it was fully primed for the next big carbon-dioxide injection. To become fully primed it needed to store enough extra carbon dioxide in the deep oceans. Extra storage was favoured in the coldest conditions, which tended to prevail during the millennia preceding full deglaciations. How this came about is now beginning to be understood, with changes in ocean circulation near Antarctica playing a key role, alongside limestone-sludge chemistry and phytoplankton fertilization from iron in airborne dust (e.g. Watson et al. 2015, & refs.).

Also important was a different priming mechanism, the slow buildup and areal expansion of the great northern land-based ice sheets. The ice sheets slowly became more vulnerable to melting in two ways, first by expanding equatorward into warmer latitudes, and second by bearing down on the Earth's crust, taking the upper surface of the ice down to warmer altitudes. This ice-sheet-mediated priming mechanism would have made the system more sensitive still.

Specialized model studies (e.g. Abe-Ouchi et al. 2013, & refs.) have long supported the view that both priming mechanisms are important precursors to deglaciation. It appears that both are needed to account for the full magnitudes of deglaciations like the last. It must be cautioned, however, that our ability to model the details of ice flow and snow deposition is still very limited. That's also related to some of the uncertainties now facing us about the future. For one thing, there are signs that parts of the Greenland ice sheet are becoming more sensitive today, as well as parts of the Antarctic ice sheet, especially the part known as West Antarctica where increasingly warm seawater is intruding sideways underneath the ice, much of which is grounded below sea level.

Ice-flow modelling is peculiarly difficult because of the complex, highly variable fracture patterns and friction in glacier-like `ice streams' within the ice sheets, over areas whose sizes, shapes, and frictional properties are hard to predict and which involve complex `hydrological networks' of subglacial flow (e.g. Schoof 2010), for instance fed by meltwater chiselling downward by `hydro-fracture', forcing crevasses to open all the way to the bottom of the ice (e.g. Krawczynski et al. 2009).

As regards the deglaciations and the roles of the abovementioned priming mechanisms -- ice-sheet buildup and deep-ocean carbon dioxide storage -- two separate questions must be distinguished. One concerns the magnitudes of deglaciations. The other concerns their timings, every 100 millennia or so over the last few glacial cycles. For instance, why aren't they just timed by the strongest peaks in the orbital curve above?

It's hard to assess the timescale for ocean priming because, here, our modelling ability is also very limited, not least regarding the details of sunlit upper-ocean circulation and layering where the phytoplankton live (see for instance Marchitto et al. 2006, and my notes thereto). We need differences between storage rates and leakage rates; and neither is modelled, nor observationally constrained, with anything like sufficient accuracy. However, Abe-Ouchi et al. make a strong case that the timings of deglaciations, as distinct from their magnitudes, must be largely determined not by ocean storage but by ice-sheet buildup. That conclusion depends not on a small difference between ill-determined quantities but, rather, on a single gross order of magnitude, namely the extreme slowness of ice-sheet buildup by snow accumulation, which is crucial to their model results. And ocean priming is unlikely to be slow enough to account for the full 100-millennia timespan. But the results also reinforce the view that the two priming mechanisms are both important for explaining the huge magnitudes of deglaciations.

Today, in the year 2020, with atmospheric carbon dioxide well over 400 ppmv and far above the pre-industrial 280 ppmv, we have already had a total, natural plus artificial, carbon-dioxide injection more than twice as large as the preceding natural injection, as measured by atmospheric buildup. Even though the system's sensitivity may be less extreme than just before a deglaciation, the climate response would be large even if the artificial injections were to stop tomorrow. That's despite the way the injected carbon dioxide is repartitioned between the atmosphere, the oceans and the land-based biosphere, and despite what's technically called carbon-dioxide opacity, producing what's called a logarithmic dependence in its greenhouse heating effect (e.g. Pierrehumbert 2010, sec. 4.4.2). Logarithmic dependence means that the magnitude of the heating effect is described by a graph that continues to increase as atmospheric carbon dioxide increases, but progressively less steeply. That's well known and was pointed out long ago by Arrhenius.
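The logarithmic dependence can be illustrated numerically. The sketch below uses the widely quoted simplified fit for carbon-dioxide forcing, roughly 5.35 ln(C/C0) watts per square metre; that coefficient is an assumption drawn from the standard simplified expression in the technical literature, not a figure from this book:

```python
import math

def co2_forcing(c_ppmv, c0_ppmv=280.0):
    """Approximate greenhouse forcing (W/m^2) relative to c0_ppmv,
    using the commonly quoted logarithmic fit 5.35 * ln(C/C0).
    An illustrative assumption, not a result from this book."""
    return 5.35 * math.log(c_ppmv / c0_ppmv)

# The heating keeps increasing as carbon dioxide increases, but
# progressively less steeply: each *doubling* adds the same increment.
first_doubling = co2_forcing(560.0) - co2_forcing(280.0)    # 280 -> 560 ppmv
second_doubling = co2_forcing(1120.0) - co2_forcing(560.0)  # 560 -> 1120 ppmv
print(round(first_doubling, 2))   # ~3.7 W/m^2 per doubling
```

The point Arrhenius made survives the logarithm: the graph flattens but never stops rising, so ever-larger injections still mean ever-larger heating.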

A consideration of sea levels puts all this in perspective. A metre of sea level rise is only a tiny fraction of the 100 metres or more by which sea levels rose between 20 millennia ago and today, and the additional 70 metres or so by which they'd rise if all the land-based ice sheets were to melt. It's overwhelmingly improbable that an atmospheric carbon-dioxide buildup twice as large as the natural range, let alone six times as large, or more, as advocated by the climate disinformers and their political allies, would leave sea levels clamped precisely at today's values. There is no known, or conceivable, mechanism for such clamping -- it would be Canute-like to suppose that there is -- and there's great scope for substantial further sea level rise within another century. For instance a metre of global-average sea level rise corresponds to the melting of only 5% of today's Greenland ice plus 1% of today's Antarctic. That's nothing at all by comparison with past deglaciation scenarios, but is already very large from a human perspective. And it could easily be several metres or more, over the coming decades and centuries, with devastating geopolitical and economic consequences, unless we curb greenhouse-gas emissions very soon.
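The "5% of Greenland plus 1% of Antarctica" arithmetic can be made concrete using round sea-level-equivalent figures commonly quoted for the two ice sheets; those two figures are assumptions added here for illustration, since the text gives only the percentages:

```python
# Round sea-level-equivalent (SLE) values often quoted for the ice sheets
# (illustrative assumptions, not figures from the book):
greenland_sle = 7.4    # metres of global sea level locked in Greenland ice
antarctic_sle = 58.0   # metres of global sea level locked in Antarctic ice

# Melting 5% of Greenland's ice plus 1% of Antarctica's:
rise = 0.05 * greenland_sle + 0.01 * antarctic_sle
print(round(rise, 2))  # ~0.95 m -- about a metre of global-average rise
```

A tiny fraction of the stored ice, in other words, is already a metre of sea level from a human perspective.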

The carbon-dioxide injections have cumulative and essentially permanent effects on our life-support system, including the oceans. Among the many effects are the destruction of ocean ecosystems and food chains, by rising temperatures and by the ocean acidification that results from repartitioning. The natural processes that can take the carbon dioxide back out of the system as a whole have timescales up to a hundred millennia (e.g. Archer 2009). To be sure, the carbon dioxide could be taken back out artificially, using known technologies -- that's by far the safest form of geoengineering, so-called -- but the expense has made it politically impossible so far.

Cumulativeness means that the effect of our carbon-dioxide injections on the climate system depends mainly on the total amount injected, and hardly at all on the rate of injection. That is why climate scientists now talk about a carbon `budget', meaning the amount remaining to be injected before the risk is considered unacceptable. Today this `budget' is well below the amounts of carbon in fossil-fuel reserves already located (e.g. Valero et al. 2011, Budinis et al. 2017), let alone in future reserves from fossil-fuel prospecting.

From a risk-management perspective it would be wise to assume that the climate-system amplifier is already more sensitive than in the pre-industrial past. The risk from ill-understood factors increases as the climate system moves further and further away from its best-known states, those of the past few hundred millennia. There are several reasons to expect increasing sensitivity, among them the ice-flow behaviour already mentioned. A second reason is the loss of sea ice in the Arctic, increasing the area of open ocean exposed to the summer sun. The dark open ocean absorbs solar heat faster than the white sea ice. This is a strong positive feedback, contributing to the Arctic's tendency to warm faster than the rest of the planet -- the so-called `polar amplification' of warming that has been one of the most robust predictions from several generations of climate models. In northern midsummer the Arctic is the sunniest place on earth, because of the Earth's 23.4° tilt. A third reason is the existence of what are called methane clathrates, or frozen methane hydrates, large amounts of which are stored underground in high latitudes. They add yet another positive feedback, increasing the sensitivity yet further.

Methane clathrates consist of natural gas trapped in ice instead of in shale. There are large amounts buried in permafrosts, probably dwarfing conventional fossil-fuel reserves although the precise amounts are uncertain (Valero et al. 2011). As the climate system moves further beyond pre-industrial conditions, increasing amounts of clathrates will melt and release methane gas. It's well documented that such release is happening today, at a rate that isn't well quantified but is almost certainly increasing (e.g. Shakhova et al. 2014, Andreassen et al. 2017). Permafrost has become another self-contradictory term. What used to be permanent is now temporary. This is another positive feedback whose magnitude is highly uncertain but which does increase the probability, already far from negligible, that the Earth system might go into runaway climate change, in the sense of going all the way into a very hot, very humid state like that of the early Eocene around fifty million years ago (e.g. Cramwinckel et al. 2018). Methane jolts the system toward hotter states because it's a more powerful greenhouse gas. In today's conditions its greenhouse-warming contribution per molecule is far greater than that of the carbon dioxide to which it's subsequently converted, within a decade or two (e.g. Pierrehumbert 2010, sec. 4.5.4). Such a jolt brings the next tipping point closer; and the Sun is about half a percent stronger today than it was in the Eocene, a point to which I'll return shortly.

Going into a new early Eocene or `hothouse Earth' would mean first that there'd be no great ice sheets at all, even in Antarctica, second that sea levels would be about 70 metres higher than today -- some hundreds of feet higher -- and third that the most intense thunderstorms would probably be far more powerful than today. A piece of robust and well established physics, called the Clausius-Clapeyron relation, says that air can hold increasing amounts of weather fuel, in the form of water vapour, as temperatures increase. As already mentioned, the air can hold between six and seven percent more weather fuel for each degree Celsius; and a thunderstorm surrounded by more weather fuel can suck the fuel up faster and reach a greater peak intensity more quickly. The geology of the early Eocene shows clear evidence of `storm flood events' and massive soil erosion (e.g. Giusberti et al. 2016). And it was just then -- surely no accident -- that some land-based mammals began to migrate into the oceans. Within several million years, some of them had evolved into mammals that were fully aquatic, precursors to today's whales and dolphins. Selective pressures from extremes of surface storminess can help to make sense of those extraordinary evolutionary events -- perhaps starting with hippo-like behaviour in which the water, initially, served only as a refuge (e.g. Thewissen et al. 2011).
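The "six to seven percent per degree Celsius" figure follows from the Clausius-Clapeyron relation itself. A minimal sketch, using standard physical constants for water vapour (assumptions added here; the book quotes only the percentage):

```python
# Clausius-Clapeyron: the fractional increase in saturation vapour
# pressure per kelvin is approximately L / (R_v * T^2).
L_vap = 2.5e6    # J/kg, latent heat of vaporization of water (standard value)
R_v = 461.5      # J/(kg K), specific gas constant for water vapour
T = 288.0        # K, a typical surface temperature (~15 degrees Celsius)

percent_per_degree = 100.0 * L_vap / (R_v * T * T)
print(round(percent_per_degree, 1))  # ~6.5% more `weather fuel' per degree
```

So each degree of warming lets the air hold roughly six to seven percent more water vapour, the "weather fuel" of the text.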

All this took place despite the Sun being about half a percent weaker than today. We don't have accurate records of atmospheric carbon dioxide at that time, only rough estimates. But extremely high values, perhaps thousands of ppmv, are within the range of estimates and are to be expected from large-scale volcanic activity. Past volcanic activity was sometimes far greater and more extensive than anything within human experience, as with the pre-Eocene lava flows that covered large portions of India, whose remnants form what are called the Deccan Traps, and -- actually overlapping the time of the early Eocene and even more extensive -- the so-called North Atlantic Igneous Province. Thanks to the climate-system amplifier, sufficiently high carbon dioxide can explain the high temperatures and humidities despite the logarithmic dependence of greenhouse heating, and despite the weaker Sun.

The weakness of the Eocene Sun counts as something else that we know about with extremely high scientific confidence. The Sun's total power output gets stronger by roughly one percent every hundred million years. The solar models describing the power-output increase have become extremely secure -- very tightly cross-checked -- especially now that the so-called neutrino puzzle has been resolved. Even before that puzzle was resolved a few years ago, state-of-the-art solar models were tightly constrained by a formidable array of observational data, including very precise data on the Sun's acoustic vibration frequencies, called helioseismic data. The same solar models are now known to be consistent, also, with the measured fluxes of different kinds of neutrino, subatomic particles streaming outward from the nuclear reactions inside the Sun. The neutrino measurements cross-check the models in a completely independent way.

These solar models, plus recent high-precision observations, plus recent advances in understanding the details of radiation from the Sun's surface and atmosphere, point strongly to another significant conclusion. Variability in the Sun's output on timescales much less than millions of years comes from variability in sunspots and other magnetic phenomena. These phenomena are by-products of the turbulent fluid motion caused by thermal convection in the Sun's outer layers. That variability is now known to have climatic effects distinctly smaller than the effects of carbon dioxide injections to date, and very much smaller than those to come. The climatic effects from solar magnetism include not only the direct response to a slight variability in the Sun's total power output, but also some small and subtle effects from a greater variability in the Sun's ultraviolet radiation, which is absorbed mainly at stratospheric and higher altitudes. The main points are well covered in reviews by Foukal et al. (2006) and Solanki et al. (2013). Controversially, there might be an even more subtle effect from cloud modulation by cosmic-ray shielding. But to propose that any of these effects predominate over greenhouse-gas heating and even more that their timings should coincide with, for instance, the timings of full deglaciations -- the timings of the marked peaks in the orbital curve above -- would be to propose something that's again overwhelmingly improbable.

With the Sun half a percent stronger today, and a new Eocene or hothouse Earth in prospect -- one might call it the Eocene syndrome -- we must also consider what might similarly be called the Venus syndrome. Not only might we get runaway climate change into an Eocene-like state, but possibly an even more drastic runaway into a state like that of the planet Venus, with its molten-lead surface temperatures and, of course, no oceans at all, and no life. If Venus ever had oceans they've long since boiled away. This more drastic sense of the word `runaway' is still the usual sense encountered in the technical literature.

Here, however, we can be a bit more optimistic. Even if the Earth does go into a new Eocene, perhaps after a few centuries, the Venus syndrome seems unlikely to follow, on today's best estimates. Such estimates are technically tricky -- for one thing depending on poorly known cloud-radiation interactions -- but they point fairly clearly toward sub-Venus-syndrome conditions even when storminess is neglected (e.g. Pierrehumbert 2010, sec. 4.6). The storms increase the safety margin, so to speak, by transporting heat and weather fuel away from the tropical oceans into high winter latitudes. So, whatever happens to storm-devastated human societies over the next few centuries, life on Earth will probably survive in some form, just as it did in the Eocene.

Coming back to our time in the twenty-first century, let's take a closer look at the storminess issue for the near future. Once again, the Clausius-Clapeyron relation is basic: global warming equals global fuelling, and stronger storms suck up weather fuel faster. In particular, we can expect more storminess in the so-called `temperate' climates of middle latitudes. There, the Earth's rotation rate, which we can take to be a constant for this purpose, has an especially strong influence on the large-scale fluid dynamics. It will tend to preserve the characteristic spatial scales and morphologies of the extratropical jetstreams and cyclones that are familiar today, suggesting in turn that the peak intensities of the cyclones and concentrated rainfall events will increase as they're fed with, on average, more and more weather fuel from the tropics and subtropics -- more and more weather fuel going into similar-sized, similar-shaped regions.

So the transition toward a hotter, more humid climate is likely to show extremes in both directions at first: wet and dry, hot and cold, heatwaves and severe cold outbreaks. In a more energetic climate system the fluctuations are likely to intensify, on average, with increasingly large excursions in both directions. They're all tied together by the fluid dynamics. Again thanks to the Earth's rotation, fluid-dynamical influences operate all the way out to planetary scales. In the technical literature such long-range influences are called `teleconnections'. They form a global-scale jigsaw of mutual influences, a complex web of cause and effect operating over a range of timescales out to decades. They have a role not only in amplifying jetstream meanders, for instance, but also in the variability of El Niño and other phenomena involving large-scale, decadal-timescale fluctuations in tropical sea-surface temperature and thunderstorm statistics, transferring vast amounts of heat between atmosphere and ocean.

None of these phenomena come anywhere near being adequately represented in the climate prediction models. Although the models are important as part of our hypothesis-testing toolkit -- and several generations of them have robustly predicted the Arctic maximum in warming trends, now plainly evident from sea-ice observations -- the models cannot yet accurately give us, in particular, the probabilities of weather extremes. That remains one of the toughest challenges for climate science.

Today's operational weather-forecasting models are getting better at simulating extratropical storms and precipitation, including extremes. That's mainly because of their far finer spatial resolution, obtained at great computational cost per day's simulation.

The computational cost still makes it impossible to run such operational models out to many centuries. However, in a groundbreaking study Elizabeth Kendon and co-workers ran an operational forecasting model on a UK Meteorological Office supercomputer long enough, for the first time, to support the expectation that climate change and global fuelling will increase the magnitudes and frequencies of extreme summertime thunderstorms and rainfall events (Kendon et al. 2014). Their results, focusing on the southern UK, point to greater extremes than would be predicted by the Clausius-Clapeyron relation alone, but in line with expectations from the positive feedback already mentioned. Stronger storms suck up weather fuel faster in the runup to peak intensity, explaining the sudden torrential rain and flash flooding that are now becoming all too familiar.

Such summertime rainfall extremes are spatially compact and among the most difficult to simulate. As computer power increases, though, there will be many more such studies, beginning to describe more accurately the statistics of extreme rainstorms, snowstorms and windstorms as well as those of droughts, heatwaves and firestorms, in all seasons and locations. Destructive winter storms are spatially more extensive and are better simulated, but again only by the operational weather-forecasting models and not by the climate prediction models. Tropical cyclones like Idai and Dorian pose their own peculiar modelling difficulties but, once again, the evidence suggests that the most destructive storms have tended to be underpredicted rather than overpredicted (Emanuel 2005). This at least is a strong possibility, which needs to be taken seriously for risk-assessment purposes.

*   *   *

Dear reader, before taking my leave I owe you a bit more explanation of the amplifier metaphor. As should already be clear, it's an imperfect metaphor at best. To portray the climate system as an amplifier we need to recognize not only its highly variable input sensitivity but also its many intricately-linked components operating over a huge range of timescales -- some of them out to multi-decadal, multi-century, multi-millennial and even longer. And the climate-system amplifier would be pretty terrible as an audio amplifier if only because it has so much internal noise and variability, on so many timescales, manifesting the `nonlinearity' already mentioned. An audio aficionado would call it a nasty mixture of gross distortions and feedback instabilities -- as when placing a microphone too close to the loudspeakers -- except that the instabilities have many timescales. Among the longer-timescale components there's the deep-ocean storage of carbon dioxide and the land-based processes including the waxing and waning of forests, wetlands, grasslands, and deserts, as well as ice-sheet buildup and ice-flow dynamics operating on timescales all the way out to the mind-blowing 100 millennia -- the timescale for a major ice sheet to build up.

Some of the system's noisy internal fluctuations are relatively sudden, for instance showing up as the Dansgaard-Oeschger warming events encountered in chapter 2, subjecting tribes of our ancestors to major climatic change well within an individual's lifetime and probably associated with a collapse of upper-ocean layering and sea-ice cover in the Nordic Seas.67 A similar tipping point might well occur in the Arctic Ocean in the next few decades, possibly with drastic consequences for the methane clathrates, the Greenland ice sheet, and sea-level rise.

All these complexities have helped the climate disinformers because, from all the many signals within the system, showing its variability in time, one can always cherry-pick some that seem to support practically any view one wants -- especially if one replaces insights into the workings of the system, as seen from several viewpoints, by simplistic arguments that conflate timing with cause and effect. Natural variability and noise in the data provide many ways to cherry-pick data segments, showing what looks like one or another trend or phase lag and adding to the confusion about causal relations on different timescales. To gain what I'd call understanding, or insight, one needs to include good thought-experiments in one's conceptual arsenal, backed up by suitably-judged computer simulations. Such thought-experiments are involved, for instance, when considering injections of carbon dioxide and methane into the atmosphere, whether natural or artificial or both.
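The cherry-picker's opportunity is easy to demonstrate with a toy calculation -- purely illustrative, with made-up numbers and no connection to any real climate record. A series with a genuine upward trend, once enough noise is added, contains short segments whose apparent trends point every which way:

```python
import random

# Toy series (not climate data): a slow rise of +0.02 per step, buried in noise.
random.seed(42)
series = [0.02 * t + random.gauss(0, 1.0) for t in range(200)]

def trend(segment):
    """Least-squares slope of a list of values against their index."""
    n = len(segment)
    mean_t = (n - 1) / 2
    mean_x = sum(segment) / n
    cov = sum((t - x_mean_diff) * 0 for t, x_mean_diff in [])  # placeholder removed
    cov = sum((t - mean_t) * (x - mean_x) for t, x in enumerate(segment))
    var = sum((t - mean_t) ** 2 for t in range(n))
    return cov / var

full_slope = trend(series)  # recovers something close to the true +0.02
segment_slopes = [trend(series[i:i + 20]) for i in range(0, 200, 20)]

print(f"full-record slope: {full_slope:+.4f}")
print("20-point segment slopes:",
      ", ".join(f"{s:+.2f}" for s in segment_slopes))
# With enough short segments to choose from, some slopes typically come out
# negative even though the underlying trend is positive -- which is exactly
# the segment a cherry-picker would show you.
```

The full record pins the trend down well; the short segments scatter widely around it. The lesson is that segment choice, not the data, can determine the apparent conclusion.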

If you see two signals within a complex system, with one of them looking like a delayed version of the other, then nothing can be deduced about causality from that information alone. The reason is both simple and obvious. Since there are vast numbers of other signals within the complex web of cause and effect, it is equally possible that the two signals you are looking at are caused by something going on elsewhere in the system -- that neither one of them causes the other. Correlation is not causality.
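A toy simulation makes the point concrete -- again with invented numbers, nothing to do with any real climate series. A hidden driver z pushes both x and y, with different delays, so y looks like a lagged copy of x even though neither causes the other:

```python
import random

# Hidden driver z: a slowly wandering signal (an autoregressive random walk).
random.seed(1)
z = [0.0]
for _ in range(499):
    z.append(0.95 * z[-1] + random.gauss(0, 1))

def delayed_response(driver, lag, noise):
    """Follow the driver after `lag` steps, plus small independent noise."""
    return [driver[max(t - lag, 0)] + random.gauss(0, noise)
            for t in range(len(driver))]

x = delayed_response(z, lag=2, noise=0.5)   # responds to z quickly
y = delayed_response(z, lag=8, noise=0.5)   # responds to z slowly

def corr(a, b):
    """Ordinary (Pearson) correlation coefficient of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / (va * vb) ** 0.5

# y correlates strongly with x shifted by the lag difference (6 steps),
# yet x does not cause y: both merely follow the hidden driver z.
print("corr(x[t], y[t+6]) =", round(corr(x[:-6], y[6:]), 2))
```

The correlation at the matching lag comes out close to 1, and an unwary observer would conclude that x drives y with a six-step delay. The true cause never appears in either signal alone.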

This point is so important that I'd like to illustrate it by returning to what happened between 10 and 20 millennia ago, during the last deglaciation. A compilation by my colleague Dr Luke Skinner, who is an expert on the observational evidence, gives us a glimpse of the actual complexity we're dealing with. He presents graphs showing six of the signals for the last twenty-five millennia, reproduced here as Figure 13:

Last-deglaciation details, courtesy Luke Skinner

Figure 13: Graphs recording some of the major events of the last deglaciation. Time, in millennia, runs from right to left up to the present day. `EDML' and `EDC' refer to data from ice cores taken at high altitude on the East Antarctic ice sheet, respectively in Dronning Maud Land and at a location called Dome C; `NGRIP' refers to an ice core from a high-altitude location in Greenland. The smooth dashed graph shows the midsummer insolation at 65°N, reproducing the last peak and trough on the left of Figure 12. Atmospheric carbon dioxide (CO2) from EDC is shown as the black graph at the top. The blue and red-orange graphs at the bottom show oxygen-isotope ratios believed to be rough indicators, or proxies, for air temperature at the time when snow fell on to the ice; the corresponding temperature ranges are of the order of several degrees Celsius. The light orange graph at the top marked `Antarctic sea ice' is a measured quantity believed to be a proxy for the variability of sea ice extent around the continent (Abram et al. 2013), with more sea ice when the graph dips down, as on the right. For more information on events during the deglaciation, see for instance Le Quéré et al. (2007), Shakun et al. (2012), Skinner et al. (2014) and Watson et al. (2015). Compilation kindly supplied by Luke Skinner.

In particular, the topmost black graph shows atmospheric carbon dioxide from an Antarctic ice core, while the two graphs at the bottom (blue and red-orange) show isotope data giving an indication of air temperatures for Antarctica and Greenland respectively. The typical disinformers' ploy -- cherry-picking data segments -- would in this case start by restricting attention to the time interval between 21 and 17½ millennia ago. Then attention would be focused on the carbon dioxide and Greenland temperature graphs. The disinformer would then say `Look, the temperature started rising well before carbon dioxide started rising; so temperature rise causes carbon dioxide rise rather than vice versa.' (Notice the dichotomization, shutting off further thinking.)

A better commentary on the six graphs -- informed by our best, albeit imperfect, understanding of how the climate system works, including ice flow -- would be to say that the graphs point to a complex sequence of events in which the initial increase in midsummer insolation at 65°N (smooth dashed graph) caused Greenland temperatures to rise slightly, increasing the flow of land ice and meltwater into the sea and beginning a long-drawn-out rise in sea level (second black graph). Also contributing to the early rise in sea level would have been the Antarctic midsummer insolation, which peaked around 24 millennia ago. The insolation graph for the Antarctic looks like the smooth dashed graph turned upside down. And the insolation and the inflow of land ice and meltwater would have induced changes not only in sea level but also, under the influence of the Earth's rotation and the buoyancy of meltwater, changes in the deep Atlantic overturning and Southern Ocean circulations and Antarctic sea-ice extent (light orange graph near top) such that carbon dioxide began to be released from its deep-ocean reservoir, allowing it to kick in as a positive feedback that added to the ever-increasing warming and ice-melting effects of the northern insolation, continuing until about ten millennia ago.

Of course today's situation is far simpler. A large and sudden artificial injection of carbon dioxide from human activities, mostly over the past two centuries -- a mere twinkling of an eye on palaeoclimatic timescales -- has been followed, not preceded, by rising global-mean temperatures and ocean heat content.

I also need to say more about why we can trust the ice-core records of atmospheric carbon dioxide. Along with today's direct atmospheric measurements the ice-core records count as hard evidence, by virtue of the simplicity of the chemistry and the meticulous cross-checking that's been done -- for instance by comparing results from different methods of extracting the carbon dioxide trapped in ice, by comparing results between different ice cores having different accumulation rates, and by comparing with the direct atmospheric measurements that have been available since 1958. We really do know with practical certainty the past as well as the present atmospheric carbon-dioxide concentrations, with accuracies of the order of a few percent, as far back as about eight hundred millennia even though not nearly as far back as the Eocene. Thanks to the chemical stability of carbon dioxide as a gas -- in unmelted ice as well as in the atmosphere -- it's well preserved in long-frozen Antarctic ice cores, and well mixed throughout most of the atmosphere. Its atmospheric concentration in ppmv hardly varies from north to south. The Antarctic values can be read as accurate global values.

And throughout the past four hundred millennia, atmospheric carbon dioxide concentrations varied mostly within the range 180 to 280 ppmv, as already noted. As seen in Figure 3, all values were within that range except for a very few outliers; and all values without exception were far below today's 400-plus ppmv, let alone the 800 ppmv or more that the climate disinformers would like us to reach by the end of this century -- about eight times the natural range of variation.

And why do I trust the geological record of past sea levels, going up and down by more than a hundred metres? We know about sea levels from several hard lines of geological evidence, including direct evidence from old shoreline markings and coral deposits. It's difficult to allow accurately for such effects as the deformation of the Earth's crust and mantle by changes in ice and ocean-mass loading, and tectonic effects generally. But the errors from such effects are likely to be of the order of metres, not many tens of metres, over the last deglaciation at least. And, as is well known, an independent cross-check comes from oxygen isotope records (e.g. Shackleton 2000), reflecting in part the fractionation between light and heavy oxygen isotopes when water is evaporated from the oceans and deposited as snow on the great ice sheets. That cross-check is consistent with the geological shoreline estimates.

*   *   *

So -- in summary -- we may be driving in the fog, but the fog is clearing. The disinformers urge us to shut our eyes and step on the gas. The current US president wants us to burn lots of `beautiful clean coal', pushing us hard toward a new Eocene. But the disinformation campaigns now seem, at last, to be meeting the same fate for climate as they did for the ozone hole and for tobacco and lung cancer.

Earth observation and modelling will continue to improve, helped by the new techniques of Bayesian causality theory and artificial intelligence (e.g. Bolton and Zanna 2019, & refs.). The link between global fuelling and weather extremes will become increasingly clear as computer power increases and case studies accumulate. Younger generations of scientists, engineers and business entrepreneurs will see more and more clearly through the real fog of scientific uncertainty, as well as through the artificial fog of disinformation. Scientists will continue to become more skilful as communicators.

Of course climate isn't the only huge challenge ahead. There's the long-studied ecology and evolution of pandemic viruses, and of antibiotic resistance in bacteria, and of their interplay with agricultural practices. There's the threat of asteroid strikes. There's the enormous potential for good or ill in new nanostructures and materials, and in gene editing, information technology, social media, cyberwarfare, teachable artificial intelligence, and automated warfare -- `Petrov's nightmare', one might call it -- all of which demand clear thinking and risk management (e.g. Rees 2014). On teachable artificial intelligence, for instance, clear thinking requires escape from yet another version of the dichotomized, us-versus-them mindset with either us, or them, the machines, ending up `in charge' or `taking control' -- whatever that might mean -- and completely missing the complexity, the plurality, and the far greater power of human-machine interaction when it's cooperative, with each playing to its strengths.

On risk management, the number of ways for things to go wrong is combinatorially large, some of them with low probability but enormous cost -- unintended consequences that can easily be overlooked. So I come back to my hope that good science -- which in practice means open science with its powerful ideal and ethic, its openness to the unexpected, and its humility -- will continue to survive and prosper despite all the forces ranged against it.

After all, there are plenty of daring and inspirational examples. One of them is the work of the open-source software community, and another is Peter Piot's work on HIV/AIDS and other viral diseases such as Ebola. Another is the huge cooperative effort by many scientists on the COVID-19 pandemic. And yet another is the human genome story. There, the scientific ideal and ethic prevailed against corporate might,26 keeping the genomic data available to open science. When one contemplates not only human weakness but also the vast resources devoted to short-term profit by fair means or foul, and to disinformation, one can't fail to be impressed that good science gets anywhere at all. That it has done so again and again is to me, at least, very remarkable and indeed inspirational.

The ozone story, in which I myself was involved professionally, is another such example. The disinformers tried to discredit everything we did, using the full power of their commercial and political weapons. What we did was seen as heresy -- as with tobacco and lung cancer -- a threat to share prices and profits. And yet the science, including all the cross-checks between different lines of evidence both observational and theoretical, became strong enough, adding up to enough in-depth understanding, despite the complexity of the problem, to defeat the disinformers in the end. The result was the Montreal Protocol on ozone-depleting chemicals -- a new symbiosis between market forces and regulation. That too was inspirational. Surely Adam Smith would have approved. And it has bought us a bit more time to deal with climate, because the ozone-depleting chemicals are also potent greenhouse gases. If left unregulated, they would have accelerated climate change still further.

And on climate itself we now seem, at long last, as said earlier, to have reached what looks like a similar turning point. The risk-management side of the problem has been increasingly recognized (e.g. King et al. 2015). The Paris climate agreement of December 2015 and now the schoolchildren's and other protest movements prompt a dawning hope that the politics is changing enough to allow another new, and similarly heretical, symbiosis.

My colleague Professor Myles Allen, who was instrumental in pointing out the cumulativeness of carbon-dioxide injections and the central relevance of the `carbon budget' -- the need to limit total future injections -- has suggested that part of that symbiosis could emerge from recognizing the parallel between the fossil-fuel power industry and the nuclear power industry. For a long time now, it has been unthinkable to plan a nuclear power plant without including the waste-disposal engineering and its costs. Progress, then, could come from recognizing the same thing for the fossil-fuel industry. Then carbon capture and storage (Oxburgh 2016), allowing safe fossil-fuel burning as well as, for instance, safe methane-to-hydrogen conversion and safe aircraft fuelling, could become a reality at scale by drawing on the industry's engineering skills and economic might.

The disinformers are still intent on blocking such progress and are still immensely powerful, within the newsmedia and now the weaponized social media. But free-market fundamentalism, with its prioritization of short-term profit above all else, no longer looks omnipotent. Its erstwhile triumphalism was eroded by the 2008 financial crash. It's been further eroded by the COVID-19 pandemic, driving home the point that `greed is good' is not, after all, the Answer to Everything. And on top of that, the limitless burning of fossil fuels without carbon capture and storage is increasingly seen as risky even in purely financial terms. It's seen as heading toward stranded assets and what's now called the bursting of the shareholders' `carbon bubble'. Low-carbon `renewable' energy is fast becoming cheaper than fossil-fuel energy (e.g. Farmer et al. 2019), turning us toward the path to prosperity noted over a decade ago by economist Nicholas Stern (2009). More and more ideas are coming to maturity about diversifying and driving down the cost of low-carbon energy, for instance with the Rolls Royce consortium on `small modular reactors' for nuclear power produced by advanced manufacturing techniques (e.g. Stein 2020). And now, at last, we are seeing the electrification of personal transport, a beautiful and elegant technology and another step in the right direction.

As regards good science in general, an important factor in the genome story, as well as in the ozone story, was a policy of open access to experimental and observational data. That policy was one of the keys to success. The climate-science community was not always so clear on that point, giving the disinformers further opportunities. However, the lesson now seems to have been learnt.

I don't think, by the way, that everyone contributing to climate disinformation has been consciously dishonest. Honest scepticism is crucial to science; and I wouldn't question the sincerity of colleagues I know personally who feel, or used to feel, that the climate-science community got things wrong. Indeed I'd be the last to suggest that that community, or any other scientific community, has never got anything wrong, even though my own sceptical judgement is that, as already argued, the climate-science consensus in the IPCC reports is mostly right apart from a tendency to underestimate the problems ahead -- partly because the climate prediction models underestimate them, even today, and partly for fear of being accused of scaremongering.

It has to be remembered that unconscious assumptions and mindsets are always involved, in everything we do and think about. The anosognosic patient is perfectly sincere in saying that a paralysed left arm isn't paralysed. There's no dishonesty. It's just an unconscious thing, an extreme form of mindset. Of course the professional art of disinformation includes the deliberate use of what sales and public-relations people call `positioning' -- the skilful manipulation of other people's unconscious assumptions, related to what cognitive scientists call `framing' (e.g. Kahneman 2011, Lakoff 2014, & refs.).

As used by professional disinformers the framing technique exploits, for instance, the dichotomization instinct -- evoking the mindset that there are just two sides to an argument. The disinformers then insist that their `side' merits equal weight, as with `flat earth' versus `round earth'. This is sometimes called `false balance'. Such techniques illustrate what I called the dark arts of camouflage and deception so thoroughly exploited, now, by the globalized plutocracies and their political allies, drawing on their vast financial resources and their deep knowledge of the way perception works. One of the greatest such deceptions has been the mindset, so widely and skilfully promoted, that low-carbon energy is `impractical' and `uneconomic', despite all the demonstrations to the contrary. It's inspirational, therefore, to see the disinformers looking foolish and facing defeat once again, as the smart renewable-energy technologies emerge and take hold.

It often takes a younger generation to achieve Max Born's `loosening of thinking', exposing mindsets and making progress. Science, for instance, has always progressed in fits and starts, always against the odds, and always involving human weakness alongside a collective struggle with mindsets exposed, usually, through the efforts of a younger generation. The great population geneticist J. B. S. Haldane famously caricatured it in four stages: (1) This is worthless nonsense; (2) This is an interesting, but perverse, point of view; (3) This is true, but quite unimportant; (4) I always said so. The push beyond simplistic evolutionary theory is a case in point.

So here's my farewell message to young scientists. You have the gifts of intense curiosity and open-mindedness. You have the best chance of avoiding mindsets, appreciating different viewpoints, and making progress. You have enormous computing power at your disposal. You have brilliant programming tools, and observational and experimental data far beyond my own youthful dreams of long ago. You have a powerful new mathematical tool, the probabilistic `do' operator, for distinguishing correlation from causality in complex systems (e.g. Pearl and Mackenzie 2018, Bolton and Zanna 2019, & refs.). You'll have seen how new insights from systems biology -- transcending simplistic evolutionary theory -- have opened astonishing new pathways to technological innovation (e.g. Wagner 2014, chapter 7, Contera 2019) as well as deeper insights into our own human nature. And above all you know how to disagree without hating -- how to argue over the evidence not to score personal or political points, but to reach toward an improved understanding.
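For readers curious how the `do' operator differs from ordinary conditioning, here is a minimal sketch on a toy discrete model. The probabilities are invented purely for illustration: a hidden cause u influences both a variable x and an outcome y, so *observing* x tells you something different from *intervening* on x.

```python
# Toy model: hidden cause u drives both x and y. Invented probabilities.
p_u = {0: 0.5, 1: 0.5}                      # distribution of the hidden cause
p_x_given_u = {0: 0.2, 1: 0.9}              # p(x=1 | u)
p_y_given_xu = {(0, 0): 0.1, (0, 1): 0.5,   # p(y=1 | x, u)
                (1, 0): 0.2, (1, 1): 0.6}

# Observational: p(y=1 | x=1) by Bayes' rule -- confounded, because seeing
# x=1 makes u=1 more likely, and u=1 also raises y.
p_x1 = sum(p_u[u] * p_x_given_u[u] for u in p_u)
p_y_obs = sum(p_u[u] * p_x_given_u[u] * p_y_given_xu[(1, u)]
              for u in p_u) / p_x1

# Interventional: p(y=1 | do(x=1)) by the backdoor adjustment formula --
# setting x by fiat cuts the link u -> x, so u keeps its natural distribution.
p_y_do = sum(p_u[u] * p_y_given_xu[(1, u)] for u in p_u)

print(f"p(y=1 | x=1)     = {p_y_obs:.3f}")   # about 0.527, inflated by u
print(f"p(y=1 | do(x=1)) = {p_y_do:.3f}")    # 0.400, the true causal effect
```

The two numbers differ, and only the second measures what forcing x would actually do to y. In a complex system full of hidden drivers, that distinction is the whole game.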

Your generation will see the future more and more clearly. Whatever your fields of expertise, you know that it's fun to be curious and to find out how things work. It's fun to do thought-experiments and computer experiments. It's fun to develop and test your in-depth understanding, the illumination that can come from looking at a problem from more than one angle. You know that it's worth trying to convey that understanding to a wide audience, if you get the chance. You know that in dealing with complexity you'll need to hone your communication skills in any case, if only to develop cross-disciplinary collaboration, the usual first stage of which is jargon-busting -- as far as possible converting turgid technical in-talk into plain, lucid speaking.

So hang in there. Your collective brainpower will be needed as never before. Science isn't the Answer to Everything, but we're sure as hell going to need it.


(A switch from Harvard style to decimal superscript style is under construction)

1.  Many of Darwin's examples of cooperative behaviour, within and even across different social species, can be found in chapters II and III of his famous 1871 book The Descent of Man. Darwin's greatness as a scientist shows clearly in the meticulous attention he pays to observations of actual animal behaviour. The animals observed include primates, birds, dogs, and ruminants such as cattle. See especially pages 74-84 in the searchable full text of the book, which is available online. The contrary idea advocated by Darwin's contemporary, the philosopher Herbert Spencer -- that competition between individuals is the evolutionary Answer to Everything, and that nothing else is involved -- was summarized in Spencer's catch-phrase `survival of the fittest' and is often evoked by the older phrase `Nature red in tooth and claw', from the poet Alfred, Lord Tennyson. That view still seems to dominate popular culture.

2.  Monod, J., 1971: Chance and Necessity. Glasgow, William Collins, 187 pp., beautifully translated from the French by Austryn Wainhouse. This classic by the great molecular biologist Jacques Monod -- one of the sharpest and clearest thinkers that science has ever seen -- highlights the `more than two million years of directed and sustained selective pressure' (chapter 7, p. 124) arising from the co-evolution of our ancestors' genomes and cultures, entailing the gradual emergence of proto-language and then language because (p. 126) `once having made its appearance, language, however primitive, could not fail... to create a formidable and oriented selective pressure in favour of the development of the brain', in new ways -- to the advantage of `the groups best able to use it'. The possibility of such group-level selective pressure is still controversial4 and I'll return to it in my chapter 2.

3.  Tobias, P. V., 1971: The Brain in Hominid Evolution. New York, Columbia University Press, 170 pp. Phillip Tobias' famous work on palaeoanthropology led him to recognize the likely interplay of genomic and cultural evolution, despite the vast disparity of timescales. He writes that `the brain-culture relationship was not confined to one special moment in time. Long-continuing increase in size and complexity of the brain was paralleled for probably a couple of millions of years [my emphasis] by long-continuing elaboration... of the culture. The feedback relationship between the 2 sets of events is as indubitable as it was prolonged in time.'

4.  I began with examples of unconscious assumptions encountered in my own scientific research. Their unconscious nature was evident because making them conscious exposed them as self-evidently wrong, indeed silly. They're discussed in a recent paper of mine, `On multi-level thinking and scientific understanding', Adv. Atmos. Sci., 34, 1150-1158 (2017).

5.  Kahneman, D., 2011: Thinking, Fast and Slow. London, Penguin, 499 pp. Kahneman's book provides deep insight into many unconscious processes of great social importance.

6.  Bateson, G., 1972: Steps to an Ecology of Mind: Collected Essays on Anthropology, Psychiatry, Evolution and Epistemology. Republished 2000 by the University of Chicago Press. The quoted sentence is on p. 143, in the section on `Quantitative Limits of Consciousness'.

7.  See for instance chapter 4 of Noble, D., 2006: The Music of Life: Biology Beyond Genes. Oxford University Press. This short and lucid book by an eminent biologist clearly brings out the complexity, versatility, and multi-level aspects of biological systems, and the need to avoid extreme reductionism and single-viewpoint thinking, such as saying that the genome `causes' everything. A helpful metaphor for the genome, it's argued, is a digital music recording. Yes, reading the digital data in one sense `causes' a musical and possibly emotional experience but, if that's all you say, you miss the other things on which the experience depends so strongly, many of them coming from past experience as well as present circumstance. Reading the data into a playback device mediates or enables the listener's experience, rather than solely causing it.

8.  A discussion of typical case histories can be found in Sacks, O., 1995: An Anthropologist on Mars. New York, Alfred Knopf, 340 pp., Chapter 4, To See and Not See. The two subjects `Virgil' and `S.B.' studied most thoroughly, by Sacks and by Richard Gregory and Jean Wallace, were both 50 years old when the opaque elements were surgically removed to allow light into their eyes. The vision they achieved was very far from normal. An important update is in the 2016 book In the Bonesetter's Waiting Room by Aarathi Prasad, in which chapter 7 mentions recent evidence from Project Prakash, led by Pawan Sinha, providing case studies of much younger individuals blind from birth. There is much variation from individual to individual but it seems that teenagers, for instance, can often learn to see better after surgery, or adjust better to whatever visual functionality they achieve, than did the two 50-year-olds. But there is no suggestion that they achieved normal vision.

9.  A recent book by social commentator David Brooks contains a wealth of evidence that human compassion is just as natural and instinctive as human nastiness. Further evidence is offered in refs. 10 and 11. Brooks regards the phenomenon as mysterious and beyond science because he takes for granted the narrow, Spencerian or selfish-gene view of biological evolution that still permeates popular culture. But a more complete view of evolution beginning with Charles Darwin's observations,1 as summarized in my chapter 2, easily explains what Brooks describes as `the fierceness and fullness of love, as we all experience it', and the observed fact that many hundreds of people whom he's met -- he calls them social `weavers' -- live for others with little material reward. Rather, Brooks tells us, they are rewarded by something deep in their inner nature, giving them joy in caring for others and in the spontaneous upwelling of emotions such as love and compassion. They're at peace with themselves because this inner life is more important to them than the Spencerian competition with other individuals for power, money and status. They radiate joy. They `seem to glow with an inner light... they have a serenity about them, a settled resolve. They are interested in you, make you feel cherished and known, and take delight in your good.' All this comes naturally and without conscious effort. It does not come from being preached at moralistically. See Brooks, D., 2019: The Second Mountain: The Quest for a Moral Life. Random House, Penguin.

10.  Bregman, R., 2020: Humankind: A Hopeful History. Bloomsbury. This book assembles many examples of human behaviour that's instinctively kind and compassionate, outside the immediate kin or family.

11.  Santos, L., The Happiness Lab, a podcast by Yale psychology professor Laurie Santos, based on systematic studies of what improves personal wellbeing. These studies underline the point that caring for others and warding off loneliness works instinctively, even on a superficial level -- as with small acts of kindness, even simple thank-yous and other small courtesies to strangers. For most people, these are simply things that make you feel good -- you who offer the courtesies, that is -- rather than things someone said you ought to do.

12.  Pagel, M., 2012: Wired for Culture. London and New York, Allen Lane, Penguin, Norton, 416 pp. In this book the author, a well known evolutionary biologist, describes many interesting aspects of cultural evolution. On human languages, he claims that they are all descended from a single `mother tongue'. That claim is seen most clearly on page 299, midway through a section headed `words, languages, adaptation, and social identity'. The author suggests that language and the mother tongue were invented, as a single cultural event, at some time after `our invention of culture' around 160 to 200 millennia ago (page 2). This hypothetical picture is diametrically opposed to that of Monod,2 who along with Tobias3 insists on a strong feedback between genomic evolution and cultural evolution over a far longer timespan, millions of years, under strong selective pressures at group level (Monod,2 pp. 126-7). Pagel's arguments assume that Herbert Spencer was right and that Charles Darwin and Jacques Monod were wrong,12 and in particular that group-level selection is never important. I myself find Darwin and Monod more persuasive. And on top of that we now know that their arguments have been further, and decisively, vindicated by recent events in Nicaragua. I'll return to all this in chapter 2.

13.  A typical selection of views and controversies over the origins of language can be found in the collection of short essays by Trask, L., Tobias, P. V., Wynn, T., Davidson, I., Noble, W., and Mellars, P., 1998: The origins of speech. Cambridge Archaeological J., 8, 69-94. There are contributions from linguists, palaeoanthropologists, and archaeologists. Some of the views are consistent with Monod's,2  while others (see especially the essay by Davidson and Noble) are more consistent with the idea espoused by Pagel12 that language was a very recent, purely cultural invention. See also the thoughtful discussion in ref. 59.

14.  van der Post, L., 1972: A Story Like the Wind. London, Penguin. Laurens van der Post celebrates the influence he felt from his childhood contact with some of Africa's `immense wealth of unwritten literature', including the magical stories of the San or Kalahari-Desert Bushmen, stories that come `like the wind... from a far-off place.' See also 1961, The Heart of the Hunter (Penguin), page 28, on how a Bushman told what had happened to his small group: "They came from a plain... as they put it in their tongue, `far, far, far away'... It was lovely how the `far' came out of their mouths. At each `far' a musician's instinct made the voices themselves more elongated with distance, the pitch higher with remoteness, until the last `far' of the series vanished on a needle-point of sound into the silence beyond the reach of the human scale. They left... because the rains just would not come..."

15.  Pinker, S., 2018: Enlightenment Now: The Case for Science, Reason, Humanism and Progress, Penguin Random House. In a thoughtful chapter on democracy Pinker discusses why, in a long-term view -- and contrary to what headlines often suggest -- democracy has survived against the odds and spread around parts of the globe in three great waves over the past two centuries. It seems that this is not so much because of what happens in elections (in which the average voter isn't especially engaged or well informed) but, as pointed out by political scientist John Mueller, more because it gives ordinary people the freedom to complain publicly at any time without being imprisoned, tortured, or killed. (Of course I'm assuming that `democracy' means more than merely allowing popular voting. To be effective it must include reasonably strong counter-tyrannical institutions including the separation of executive and judiciary powers -- institutions currently under attack in, for instance, Poland and Hungary.)

16.  Abbott, B. P., et al., 2016: Observation of gravitational waves from a binary black hole merger. Physical Review Letters, 116, 061102. This was a huge team effort at the cutting edge of high technology, decades in the making, to cope with the tiny amplitude of Einstein's spacetime ripples. The `et al.' stands for the names of over a thousand other team members. The first event was observed on 14 September 2015. Another such event, observed on 26 December 2015, and cross-checking Einstein's theory even more stringently, was reported in a second paper, Abbott, B. P., et al., 2016, Physical Review Letters, 116, 241103. This second paper reports the first observational constraint on the spins of the black holes, with one of the spins almost certainly nonzero.

17.  Valloppillil, V., and co-authors, 1998: The Halloween Documents: Halloween I, with commentary by Eric S. Raymond. On the Internet and mirrored here. This leaked document from the Microsoft Corporation recorded Microsoft's secret recognition that software far more reliable than its own was being produced by the open-source community, a major example being Linux.  Halloween I states, for instance, that the open-source community's ability `to collect and harness the collective IQ of thousands of individuals across the Internet is simply amazing.'  Linux, it goes on to say, is an operating system in which `robustness is present at every level' making it `great, long term, for overall stability'. I well remember the non-robustness and instability, and user-unfriendliness, of Microsoft's own secret-source software during its near-monopoly in the 1990s. Recent improvements may well owe something to the competition from the open-source community.

18.  Recent advances in understanding genetic codes include insights into how they influence, and are influenced by, the lowermost layers of complexity in the molecular-biological systems we call living organisms, forming metabolic and regulatory networks. Some of these advances are beautifully described in the book by Andreas Wagner, 2014: Arrival of the Fittest: Solving Evolution's Greatest Puzzle. London, Oneworld. A detailed view emerges of the interplay between genes and functionality, such as the synthesis of chemicals needed by an organism, and the switching on of large sets of genes to make the large sets of enzymes and other protein molecules required by a particular functionality. And these insights throw into sharp relief the naivety and absurdity of thinking that genetic codes are blueprints governing everything, and that there is a single gene `for' a single trait.7 Also thrown into sharp relief is the role of natural selection and, importantly, other mechanisms, in the evolution of functionality. The importance of other mechanisms vindicates one of Charles Darwin's key insights, namely that natural selection is only part of how evolution works. (In Darwin's own words from page 421 of the sixth, 1872 edition of The Origin of Species, `As my conclusions have lately been much misrepresented, and it has been stated that I attribute the modification of species exclusively to natural selection, I may be permitted to remark that in the first edition of this work, and subsequently, I placed in a most conspicuous position -- namely, at the close of the Introduction -- the following words: "I am convinced that natural selection has been the main but not the exclusive means of modification."') And the research by Wagner and co-workers shows clearly and in great detail the role of, for instance, vast numbers of neutral mutations. 
These are mutations that have no immediate adaptive advantage and are therefore invisible to natural selection, yet have crucial long-term importance.

19.  Vaughan, Mark (ed.), 2006: Summerhill and A. S. Neill, with contributions by Mark Vaughan, Tim Brighouse, A. S. Neill, Zoë Neill Readhead and Ian Stronach. Maidenhead, New York, Open University Press/McGraw-Hill.

20.  Yunus, M., 1998: Banker to the Poor. London, Aurum Press. This is the story of the founding of the Grameen Bank of Bangladesh, which pioneered microlending and the emancipation of women against all expectation.

21.  The probabilistic `do' operator is explained in Pearl, J. and Mackenzie, D., 2018: The Book of Why: The New Science of Cause and Effect. London, Penguin. This lucid, powerful, and very readable book describes recently-developed forms of Bayesian probability theory that make causal arrows explicit, clearly distinguishing correlation from causation. It is a precise mathematical framework for dealing with real experiments and thought-experiments. It underpins the most powerful forms of artificial intelligence and big-data analytics, and is applicable to complex problems such as the folding of proteins and a virus infecting a cell. Its power lies not only behind today's cutting-edge open science but also, for instance, behind the weaponization of the social media,23 and the computer-assisted `quant' trading in the world of investment and finance -- maximizing short-term profits through a close cooperation between human and artificial intelligence.
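The gist of the `do' operator can be conveyed by a toy simulation of my own (an illustrative sketch, not taken from Pearl's book): a hidden common cause Z makes two variables X and Y strongly correlate even though neither causes the other, and an intervention -- Pearl's do(X), which severs the arrow from Z into X -- exposes the absence of causation.

```python
import random
import statistics

random.seed(42)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

N = 10_000

# Observational regime: a hidden confounder Z drives both X and Y;
# there is NO causal arrow from X to Y.
obs_x, obs_y = [], []
for _ in range(N):
    z = random.gauss(0, 1)
    obs_x.append(z + random.gauss(0, 0.5))
    obs_y.append(z + random.gauss(0, 0.5))

# Interventional regime, do(X = x): the intervention severs Z -> X,
# setting X from outside; Y still depends only on Z.
do_x, do_y = [], []
for _ in range(N):
    z = random.gauss(0, 1)
    do_x.append(random.gauss(0, 1))       # X set by the experimenter
    do_y.append(z + random.gauss(0, 0.5))

print(corr(obs_x, obs_y))  # strongly positive: correlation without causation
print(corr(do_x, do_y))    # near zero: the intervention reveals no causal effect
```

The contrast between the two correlations is exactly the contrast between seeing and doing that the book formalizes.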

22.  Pokémon Go is a behavioural experiment that gathers data from hundreds of millions of subjects. This is experimental psychology on an unprecedented scale. Pokémon Go is only one of many such experiments being continually performed by Google, Facebook, and other social-media `platforms', as well as by vendors such as Amazon. Experiments to elicit, for example, `like' button presses test the behaviour of still larger numbers of subjects. Unlike the giant hadron-collider experiments, whose data are released into the public domain, the giant behavioural experiments in question are done behind a cloak of commercial secrecy. And from them, using the power of the Bayesian-analytics toolkit,21 the social-media technocrats have built the artificial intelligences now in use for commercial and political gain, incorporating mathematical models of unconscious human behaviour. In this way they exploit and manipulate our unconscious assumptions. The sheer power and scale of this innovation is hard to grasp, but if only for civilization's sake it needs to be widely understood. For more detail see ref. 23.

23.  McNamee, R., 2019: Zucked: Waking Up to the Facebook Catastrophe, HarperCollins. As a venture capitalist who helped to launch Facebook and who knows how it works, Roger McNamee shows how Facebook, in particular, has used its behavioural experiments22 to create what I'd call `artificial intelligences that are not yet very intelligent'. They are not yet very intelligent because as currently designed they pose an existential threat to Facebook's own survival. Having started innocently with like versus dislike and other forms of `click-bait', they're now fuelling the vicious binary politics that once again threatens to replace democracy by brutal autocracy, as happened in the 1930s. And autocracy destroys the private autonomy of enterprises like Facebook. The like-or-dislike, friend-or-unfriend, follow-or-don't-follow buttons are not only data-gathering machines but also, in one sense, `perilous binary buttons'. Press or don't press, and don't stop to think. In this and in other ways of shutting off thinking, the social-media technocrats have risked destroying the freewheeling, freedom-loving, democratic business environment to which they owe their wealth, their private autonomy, and their current immunity from regulation. That risk of destruction has recently been worsened by the advent of troll farms, and robotic `users', that take advantage of the other great weakness in the design of the artificial intelligences. That weakness is simply the fact that users are allowed to be anonymous. Who knows, though -- maybe the technocrats have now seen the risk and are putting resources into becoming, at last, more intelligent and less socially destabilizing. There's optimism for you!

24.  Oreskes, N. and Conway, E. M., 2010: Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury. Using formerly secret documents brought to light through anti-tobacco litigation, the authors document how the well-funded ozone and climate disinformation campaigns were masterminded by the same professionals who'd already honed their skills in the tobacco companies' lung-cancer campaigns.

25.  Mazzucato, M., 2018: The Value of Everything: Making and Taking in the Global Economy. London, Penguin. An economist makes the case for what I call a `symbiosis' between market forces and regulation -- much closer to Adam Smith's ideas40 than popular mythology allows -- and goes into detail about how such a symbiosis might be better promoted.

26.  Sulston, J., and Ferry, G., 2003: The Common Thread: A Story of Science, Politics, Ethics and the Human Genome, Corgi edn. This important book by John Sulston and Georgina Ferry gives a first-hand account of how the scientific ideal and ethic prevailed against corporate might -- powerful commercial interests that came close to monopolizing the genomic data for private profit. By shutting off open access to the data, the monopoly would have done incalculable long-term damage to biological and medical research.

27.  Strunk, W., and White, E. B., 1979: The Elements of Style, 3rd edn. New York, Macmillan. I love the passage addressed to practitioners of the literary arts who have ambitions transcending mere lucidity: `Even to a writer who is being intentionally obscure or wild of tongue we can say, "Be obscure clearly! Be wild of tongue in a way we can understand!"'

28.  See the article headed "elegant variation" in Fowler, H. W., 1983: A Dictionary of Modern English Usage, 2nd edn., revised by Sir Ernest Gowers, Oxford University Press. Fowler's lucid and incisive article is not to be confused with the vague and muddy article under the same heading in the so-called New Fowler's of 1996.

29.  My first two examples, `If you are serious, then I'll be serious' and `If you are serious, then I'll be also', roughly translate to

[Chinese text]  Rough translation of `If you are serious, then I'll be serious.'
[Chinese text]  Rough translation of `If you are serious, then I'll be also.'

I don't myself know any of the Chinese languages, but even I can see that the first sentence has an invariant element while the second does not. My Chinese colleagues assure me that the first sentence is clearer and stronger than the second.

30.  Littlewood, J. E., 1953: A Mathematician's Miscellany. London, Methuen. Republished with extra material by B. Bollobás in 1986, as Littlewood's Miscellany, Cambridge University Press.

31.  More precisely, the so-called `slow manifold' is a multidimensional geometrical object that's important in fluid dynamics. It's important for understanding not only the dynamics of jetstreams and ocean currents but also, for instance, vortices and the flow around an aircraft. And the `slow manifold' is like something hairy (more precisely, it's fractal), whereas a true manifold is like something bald (non-fractal). I've tried hard to persuade my fluid-dynamical colleagues to switch to `slow quasimanifold', but with scant success so far. For practical purposes the thing often behaves as if it were a manifold, being `thinly hairy'.

32.  Medawar, P. B., 1960: The Future of Man: The BBC Reith Lectures 1959. London, Methuen. Near the start of Lecture 2, Medawar remarks that scientists are sometimes said to give the meanings of words `a new precision and refinement', which would be fair enough, he says, were it not for a tendency then to believe that there's such a thing as `the' unique meaning of a word -- `the' pre-existing `true', or `pure', or `essential', or `inward' meaning. Then comes his remark that this `innocent belief... can lead to an appalling confusion and waste of time' -- something that very much resonates with my own experience in thesis-correcting, peer-reviewing, and scholarly journal editing.

33.  Hunt, M., 1993: The Story of Psychology. Doubleday, Anchor Books, 763 pp. The remarks about the Three Mile Island control panels and their colour coding are on p. 606.

34.  McIntyre, M. E., 1997: Lucidity and science, I: Writing skills and the pattern perception hypothesis. Interdisc. Sci. Revs. 22, 199-216.

35.  See for instance Pomerantsev, P., 2015: Nothing is True And Everything is Possible -- Adventures in Modern Russia. Faber & Faber. The title incorporates the postmodernist tenet that there's just one Absolute Truth, namely that nothing is true except the belief that nothing is true. (How lucid! How profound! How ineffable! How Derridian!) Peter Pomerantsev is a television producer who worked in Moscow with programme makers in the state's television monopoly, during the first decade of the 2000s. He presents an inside view of how the state controlled all the programming through an extraordinarily skilful operator, Vladislav Surkov, sometimes called the `Puppet Master'. Surkov's genius lay in exploiting postmodernist ideas to build an illusion of democratic pluralism in Russia, using what might be called a weaponized relativism, or a postmodernism of alternative facts. Some Western politicians and campaign advisers have taken up the same ideas. Postmodernism, as expounded by philosophers such as Jacques Derrida and Michel Foucault, provides not only a source of ideas but also a veneer of intellectual respectability, over what would otherwise be described as ordinary fictitious propaganda and binary-button-pressing.

36.  McGilchrist, I., 2009: The Master and his Emissary: the Divided Brain and the Making of the Western World. Yale University Press, 597 pp. Iain McGilchrist has worked both as a literary scholar and as a psychiatrist. In this vast and thoughtful book, `world' often means the perceived world consisting of the brain's unconscious internal models, to be discussed in my chapter 3. In most people, the Master is the right hemisphere with its holistic `world', open to multiple viewpoints. The Emissary is the left hemisphere with its dissected, atomistic and fragmented `world' and its ability to speak and thereby, all too often, to dominate proceedings with its strongly-held mindsets.

37.  Ramachandran, V. S. and Blakeslee, S., 1998: Phantoms in the Brain. London, Fourth Estate. Ramachandran is a neuroscientist known for his ingenious behavioural experiments and their interpretation. The phantoms are the brain's unconscious internal models that mediate perception and understanding (to be discussed in my chapter 3), and this book provides one of the most detailed and penetrating discussions I've seen of the nature and workings of those models, including insights into left-right hemisphere differences.

38.  The quote can be found in an essay by Max Born's son Gustav Born, 2002: The wide-ranging family history of Max Born. Notes and Records of the Royal Society (London), 56, 219-262 and Corrigendum, 56, 403. Max Born was awarded the Nobel Prize in physics, belatedly in 1954; and the quotation comes from a lecture he gave at a meeting of Nobel laureates in 1964, at Lindau on Lake Constance. The lecture was entitled Symbol and Reality (Symbol und Wirklichkeit).

39.  Conway, F. and Siegelman, J., 1978: Snapping. New York, Lippincott, 254 pp.

40.  Tribe, K., 2008: `Das Adam Smith Problem' and the origins of modern Smith scholarship. History of European Ideas, 34, 514-525. This paper provides a forensic overview of Adam Smith's writings and of the many subsequent misunderstandings of them that accumulated in the German, French, and English academic literature of the following centuries -- albeit clarified as improved editions, translations, and commentaries became available. Smith dared to view the problems of ethics, economics, politics and human nature from more than one angle, and saw his two great works The Theory of Moral Sentiments (1759) and An Inquiry into the Nature and Causes of the Wealth of Nations (1776) as complementing each other. Yes, market forces are useful, but only in symbiosis with written and unwritten regulation.

41.  Lakoff, G., 2014: Don't Think of an Elephant: Know Your Values and Frame the Debate. Vermont, Chelsea Green Publishing. Following Kahneman and Tversky, Lakoff shows in detail how those who exploit free-market fundamentalism -- including its quasi-Christian version advocated by, for instance, the writer James Dobson -- combine their mastery of lucidity principles with the technique called `framing', in order to perpetuate the unconscious assumptions that underpin their political power. Kahneman5 provides a more general, in-depth discussion of framing, and of related concepts such as anchoring and priming.

42.  See for instance Skippington, E., and Ragan, M. A., 2011: Lateral genetic transfer and the construction of genetic exchange communities. FEMS Microbiol Rev., 35, 707-735. This review article opens with the sentence `It has long been known that phenotypic features can be transmitted between unrelated strains of bacteria.' The article goes on to show among other things how `antibiotic resistance and other adaptive traits can spread rapidly, particularly by conjugative plasmids'. Conjugative means that the plasmid, which is a piece of genetic code (DNA), is passed directly from one bacterium to another through a tiny tube called a pilus, even if the two bacteria belong to different species.

43.  Wilson, D. S., 2015: Does Altruism Exist?: Culture, Genes, and the Welfare of Others. Yale University Press. See also Wilson's recent short article on multi-level selection and the scientific history of the idea. And of course `altruism' is a dangerously ambiguous word and a source of confusion -- as Wilson points out -- deflecting attention from what matters most, which is actual behaviour. The explanatory power of models allowing multi-level selection in heterogeneous populations is further illustrated in, for instance, ref. 54.

44.  Wills, C., 1994: The Runaway Brain: The Evolution of Human Uniqueness. London, HarperCollins. This wise and powerful synthesis, years ahead of its time, builds on the author's intimate working knowledge of genes, palaeoanthropology, and population genetics. The starting point is the quote from Tobias given above, suggesting the crucial role of genome-culture feedback over millions of years.3 The author goes on to offer many far-reaching insights, not only into the science itself but also into its turbulent history. Sewall Wright's discovery of genetic drift is lucidly described in chapter 8 -- this was one of the successes of the old population-genetics models -- along with Motoo Kimura's work showing the importance of neutral mutations.18 As well as advancing our fundamental understanding of evolutionary dynamics, Kimura's work led to the discovery of the molecular-genetic `clocks' now used to estimate, from genomic sequencing data, the rates of genomic evolution and the times of genetic bottlenecks.
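Kimura's result on neutral mutations lends itself to a toy simulation (my own illustrative sketch, not taken from Wills' book): in a Wright-Fisher model with 2N gene copies and pure drift, a single new neutral mutant -- invisible to natural selection -- nevertheless fixes in the population with probability 1/2N, the fact underlying the molecular-genetic `clocks'.

```python
import random

random.seed(7)

def neutral_fixation_fraction(two_n, replicates):
    """Fraction of single neutral mutants that drift all the way to
    fixation in a Wright-Fisher population of two_n gene copies.
    Kimura's theory predicts this fraction to be close to 1/two_n."""
    fixed = 0
    for _ in range(replicates):
        copies = 1                      # one new neutral mutant
        while 0 < copies < two_n:       # drift until loss or fixation
            p = copies / two_n
            # Each of the two_n copies in the next generation is drawn
            # at random from the current generation (binomial sampling).
            copies = sum(random.random() < p for _ in range(two_n))
        fixed += (copies == two_n)
    return fixed / replicates

frac = neutral_fixation_fraction(two_n=20, replicates=2000)
print(frac)  # close to 1/20 = 0.05
```

Most mutants are lost, but the few that fix do so at a steady average rate, which is what makes neutral substitutions usable as a clock.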

45.  Rose, H. and Rose, S. (eds), 2000: Alas, Poor Darwin: Arguments against Evolutionary Psychology. London, Jonathan Cape. This thoughtful compendium offers a variety of perspectives on the extreme reductionism or so-called `Darwinian fundamentalism' of recent decades, what I'm calling `simplistic evolutionary theory', as distinct from Charles Darwin's more complete, more pluralistic view. See also chapter 10 by an eminent expert on animal behaviour, the late Sir Patrick Bateson, on the word instinct and its multiple technical meanings.

46.  Dunbar, R. I. M., 2003: The social brain: mind, language, and society in evolutionary perspective. Annu. Rev. Anthropol., 32, 163-181. This review offers important insights into the group-level selective pressures on our ancestors, drawing on the primatological, palaeoanthropological and palaeoarchaeological evidence. See especially the discussion on pages 172-179. Data are summarized that reflect the growth of brain size and neocortex size over the past three million years, including its extraordinary acceleration beginning half a million years ago -- by which time, as suggested on page 175, `language, at least in some form, would have had to have evolved', in part to expand the size of the `grooming cliques' or friendship circles within larger social groups. What we would call an infantile level of language, such as `Love you, love you', would have been enough to expand such circles, by verbally grooming several individuals at once. The subsequent brain-size acceleration corresponds to what Wills44 calls runaway brain evolution, under growing selective pressures for ever-increasing linguistic and societal sophistication and group size culminating -- perhaps in the early Upper Palaeolithic around a hundred millennia ago -- in our species' ability to tell elaborate fictional as well as factual stories. See also ref. 59.

47.  Thierry, B., 2005: Integrating proximate and ultimate causation: just one more go! Current Science, 89, 1180-1183. A thoughtful commentary on the history of biological thinking, in particular tracing the tendency to neglect multi-timescale processes, with fast and slow mechanisms referred to as `proximate causes' and `ultimate causes', assumed independent solely because `they belong to different time scales' (p. 1182a) -- respectively individual-organism and genomic timescales.

48.  Rossano, M. J., 2009: The African Interregnum: the "where," "when," and "why" of the evolution of religion. In: Voland, E., Schiefenhövel, W. (eds), The Biological Evolution of Religious Mind and Behaviour, pp. 127-141. Heidelberg, Springer-Verlag, The Frontiers Collection, doi:10.1007/978-3-642-00128-4_9, ISBN 978-3-642-00127-7.   The `African Interregnum' refers to the time between the failure of our ancestors' first migration out of Africa, roughly 100 millennia ago, and the second such migration around 60 millennia ago. Rossano's brief but penetrating survey argues that the emergence of belief systems having a `supernatural layer' boosted the size, sophistication, adaptability, and hence competitiveness of human groups. As regards the Toba eruption around 70 millennia ago, the extent to which it caused a human genetic bottleneck is controversial, but not the severity of the disturbance to the climate system, like a multi-year nuclear winter. The resulting resource depletion must have severely stress-tested our ancestors' adaptability -- giving large, tightly-knit and socially sophisticated groups an important advantage. In Rossano's words, they were `collectively more fit and this made all the difference.'

49.  Laland, K., Odling-Smee, J., and Myles, S., 2010: How culture shaped the human genome: bringing genetics and the human sciences together. Nature Reviews: Genetics, 11, 137-148. This review notes the likely importance, in genome-culture co-evolution, of more than one timescale. It draws on several lines of evidence. The evidence includes data on genomic sequences, showing the range of gene variants (alleles) in different sub-populations. As the authors put it, in the standard mathematical-modelling terminology, `... cultural selection pressures may frequently arise and cease to exist faster than the time required for the fixation of the associated beneficial allele(s). In this case, culture may drive alleles only to intermediate frequency, generating an abundance of partial selective sweeps... adaptations over the past 70,000 years may be primarily the result of partial selective sweeps at many loci' -- that is, locations within the genome. `Partial selective sweeps' are patterns of genomic change responding to selective pressures yet retaining some genetic diversity, hence potential for future adaptability and versatility. The authors confine attention to very recent co-evolution, for which the direct lines of evidence are now strong in some cases -- leaving aside the earlier co-evolution of, for instance, proto-language. There, we can expect multi-timescale coupled dynamics over a far greater range of timescales, for which direct evidence is much harder to obtain, as discussed also in ref. 50.
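The notion of a partial selective sweep can be illustrated with a toy Wright-Fisher trajectory of my own (a sketch under simple haploid assumptions, not taken from the review): a beneficial allele is driven upward while a cultural selection pressure lasts, then merely drifts at intermediate frequency after the pressure ceases, retaining diversity at the locus.

```python
import random

random.seed(1)

def partial_sweep(two_n, p0, s, gens_selected, gens_total):
    """Haploid Wright-Fisher allele-frequency trajectory: selection of
    strength s acts for gens_selected generations, then ceases, as when
    a cultural selection pressure disappears; drift acts throughout."""
    p, history = p0, [p0]
    for g in range(gens_total):
        s_now = s if g < gens_selected else 0.0
        # Deterministic selection step, then binomial drift sampling.
        p_sel = p * (1 + s_now) / (1 + p * s_now)
        p = sum(random.random() < p_sel for _ in range(two_n)) / two_n
        history.append(p)
        if p in (0.0, 1.0):   # allele lost or fixed: sweep complete
            break
    return history

hist = partial_sweep(two_n=1000, p0=0.05, s=0.05, gens_selected=40,
                     gens_total=200)
# While selection lasts the frequency climbs; afterwards it wanders at an
# intermediate level -- a partial, not a complete, sweep.
```

In such runs the allele typically ends well short of fixation, which is the pattern the genomic data show at many loci.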

50.  Richerson, P. J., Boyd, R., and Henrich, J., 2010: Gene-culture coevolution in the age of genomics. Proc. Nat. Acad. Sci. 107, 8985-8992. This review takes up the scientific story as it has developed after Wills' book,44 usefully complementing ref. 49. The discussion comes close to recognizing two-way, multi-timescale dynamical coupling but doesn't quite break free of asking whether culture is `the leading rather than the lagging variable' in the co-evolutionary system (my italics, to emphasize the false dichotomy).

51.  Laland, K., Sterelny, K., Odling-Smee, J., Hoppitt, W., and Uller, T., 2011: Cause and effect in biology revisited: is Mayr's proximate-ultimate dichotomy still useful? Science, 334, 1512-1516. The dichotomy, between `proximate causation' around individual organisms and `ultimate causation' on evolutionary timescales, entails a belief that the fast and slow mechanisms are dynamically independent. This review argues that they are not, even though the dichotomy is still taken by many biologists to be unassailable. The review also emphasizes that the interactions between the fast and slow mechanisms are often two-way interactions, or feedbacks, labelling them as `reciprocal causation' and citing many lines of supporting evidence. This recognition of feedbacks is part of what's now called the `extended evolutionary synthesis'. See also refs. 47 and 53.

52.  See for instance Schonmann, R. H., Vicente, R., and Caticha, N., 2013: Altruism can proliferate through population viscosity despite high random gene flow. Public Library of Science, PLoS One, 8, e72043. Improvements in model sophistication, and a willingness to view a problem from more than one angle, show that group-selective pressures can be effective.

53.  Danchin, E. and Pocheville, A., 2014: Inheritance is where physiology meets evolution. Journal of Physiology, 592, 2307-2317. This complex but very interesting review is one of two that I've seen -- the other being ref. 51 -- that go beyond refs. 49 and 50 in recognizing the importance of multi-timescale dynamical processes in biological evolution. It seems that such recognition is still a bit unusual, even today, thanks to a widespread assumption, perhaps unconscious, that timescale separation implies dynamical decoupling (see also ref. 47). In reality there is strong dynamical coupling, the authors show, involving an intricate interplay between different timescales. It's mediated in a rich variety of ways including not only niche construction and genome-culture co-evolution but also, at the physiological level, developmental plasticity along with the non-genomic heritability now called epigenetic heritability. One consequence is the creation of hitherto unrecognized sources of heritable variability, the crucial `raw material' that allows natural selection to function. See also the Nature Commentary by Laland, K. et al., 2014: Does evolutionary theory need a rethink? Nature, 514, 161-164. (In the Commentary, for `gene' read `replicator' including regulatory DNA.)

54.  Werfel, J., Ingber, D. E., and Bar-Yam, Y., 2015: Programed death is favored by natural selection in spatial systems. Phys. Rev. Lett., 114, 238103. This detailed modelling study illustrates yet again how various `altruistic' traits are often selected for, in models that include population heterogeneity and group-level selection. The paper focuses on the ultimate unconscious altruism, mortality -- the finite lifespans of most organisms. Finite lifespan is robustly selected for, across a wide range of model assumptions, simply because excessive lifespan is a form of selfishness leading to local resource depletion. The tragedy of the commons, in other words, is as ancient as life itself. The authors leave unsaid the implications for our own species.

55.  Contera, S., 2019: Nano Comes to Life: How Nanotechnology Is Transforming Medicine and the Future of Biology. Princeton University Press. In this brilliant and lucid account, Sonia Contera brings a large weight of evidence to bear on the need to get beyond simplistic evolutionary theory and to recognize the power of the systems-biology approach.7,18 Simplistic evolutionary theory, with its presumption that genes govern everything, has long been a serious obstacle to medical innovation in, for instance, the treatment of cancer.

56.  Pinker, S., 1997: How the Mind Works. London, Allen Lane. The author invokes `mathematical proofs from population genetics' in support of what amounts to simplistic evolutionary theory (chapter 3, page 163, section on `Life's Designer'). The author is silent on which population-genetics equations were used in these `proofs'. However, as the book proceeds, it becomes fairly clear that the author is referring to the equations of the old population-genetics models, which do not `prove' simplistic evolutionary theory but, rather, assume it in writing down the equations. In particular, the models exclude group-level selection by confining attention to averages over whole populations, conceived of as statistically homogeneous and as living in a fixed, homogeneous environment. Notice the telltale phrase `on average' in the section `I and Thou' in chapter 6, on page 398. Not even the famous Price equation, perhaps the first attempt to allow for population heterogeneity, is mentioned, nor multiple timescales, nor Monod's arguments.2 Almost exactly the same critique can be made of Richard Dawkins' famous book The Selfish Gene, which again and again makes passionate assertions that simplistic evolutionary theory and its game-theoretic extensions, such as reciprocal altruism, have been rigorously and finally established by mathematical analysis -- again meaning the old population-genetics models -- and that any other view is wrong or muddled. See also ref. 57.

57.  Dawkins, R., 2009: The Greatest Show On Earth. London, Bantam Press. I am citing this book alongside ref. 56 for two reasons. First, chapter 8 beautifully illustrates why emergent properties and self-assembling building blocks (automata) are such crucial ideas in biology. In particular, the `genetic blueprint' idea is shown to be profoundly misleading. Chapter 8 illustrates the point by describing the Nobel-prize-winning work of my late friend John Sulston on the detailed development of a particular nematode worm, Caenorhabditis elegans, from its embryo. All this gets us significantly beyond simplistic evolutionary theory -- as John suggested I call it. Second, however, the book persists with the prohibition against considering group-level selection. Any other view is, it says, an amateurish `fallacy' (footnote to chapter 3, p. 62). No hint is given as to the basis for this harsh verdict; but as in The Selfish Gene the basis seems to be an unquestioning faith in particular sets of mathematical equations, namely those defining the old population-genetics models. Those equations exclude group-level selection by assumption. And Jacques Monod, who argued that group-level selection was crucial to our ancestors' evolution,2 was by no means an amateur. He was a great scientist and a very sharp thinker who, as it happens, was also a Nobel laureate.

58.  Segerstråle, U., 2000: Defenders of the Truth: The Battle for Science in the Sociobiology Debate and Beyond. Oxford University Press. This important book gives insight into the disputes about natural selection over past decades. It's striking how dichotomization, and the Answer to Everything mindset, kept muddying those disputes even amongst serious and respected scientists. Some of the disputes involved misplaced pressure for `parsimony of explanation', forgetting Einstein's famous warning not to push Occam's Razor too far. Again and again the disputants seemed to be saying that `we are right and they are wrong' and that there's one and only one `truth'. Again and again, understanding was impeded by a failure to recognize complexity, multidirectional causality, different levels of description, and multi-timescale dynamics. And the confusion was sometimes made even worse, it seems, by failures to disentangle science from politics.

59.  Aiello, L. C., 1996: Terrestriality, bipedalism and the origin of language. Proc. Brit. Acad., 88, 269-289. Reprinted in: Runciman, W. G., Maynard Smith, J., and Dunbar, R. I. M. (eds.), 1996: Evolution of social behaviour patterns in primates and man. Oxford, University Press, and British Academy. The paper presents a closely-argued discussion of brain-size changes in our ancestors over the past several million years, alongside several other lines of palaeo-anatomical evidence. Taken together, these lines of evidence suggest an `evolutionary arms race' between competing tribes whose final stages led to the full complexity of language as we know it. Such a picture is consistent with Monod's2 and Wills's44 arguments, with the final stages corresponding to what Wills called runaway brain evolution. Aiello cites evidence for a date `earlier than 100,000 years' for the first occurrence of art objects in the archaeological record, presaging the Upper Palaeolithic and suggesting that symbolic language was already in place, and already well developed, by then.

60.  Kegl, J., Senghas, A., Coppola, M., 1999: Creation through contact: sign language emergence and sign language change in Nicaragua. In: Language Creation and Language Change: Creolization, Diachrony, and Development, ed. Michel DeGraff, 179-237. Cambridge, Massachusetts, MIT Press. This is a detailed compilation of the main lines of evidence. Included are studies of the children's sign-language descriptions of videos they watched. Also, there are careful discussions of the controversies amongst linguists, including those who cannot accept the idea of genetically-enabled automata for language.

61.  Pinker, S., 1994: The Language Instinct. London, Allen Lane, 494 pp. The Nicaraguan case is briefly described in chapter 2, as far as it had progressed by the early 1990s.

62.  See for instance Senghas, A., 2010: The Emergence of Two Functions for Spatial Devices in Nicaraguan Sign Language. Human Development (Karger), 53, 287-302. This later study uses video techniques as in ref. 60 to trace the development, by successive generations of young children, of syntactic devices in signing space.

63.  See for instance Ehrenreich, B., 1997: Blood Rites: Origins and History of the Passions of War. London, Virago and New York, Metropolitan Books. Barbara Ehrenreich's insightful and penetrating discussion contains much wisdom, it seems to me, not only about war but also about the nature of mythical deities and about human sacrifice, ecstatic suicide, and so on -- echoing Stravinsky's Rite of Spring and long pre-dating Nine-Eleven and IS/Daesh. (Talk about ignorance being expensive!)

64.  Lüthi, D., et al., 2008: High-resolution carbon dioxide concentration record 650,000-800,000 years before present. Nature, 453, 379-382. Further detail on the deuterium isotope method is given in the supporting online material for a preceding paper on the temperature record.

65.  See for instance Alley, R. B., 2000: Ice-core evidence of abrupt climate changes. Proc. Nat. Acad. Sci., 97, 1331-1334. This brief Perspective is a readable summary, from a respected expert in the field, of the way in which measurements from Greenland ice have demonstrated the astonishingly short timescales of Dansgaard-Oeschger warmings, typically less than a decade and only a year or two in at least some cases, including that of the most recent or `zeroth' such warming about 11.7 millennia ago. The warmings had magnitudes typically (ref. 67) `of 10±5°C in annual average temperature'.

66.  Alley, R. B., 2007: Wally was right: predictive ability of the North Atlantic `conveyor belt' hypothesis for abrupt climate change. Annual Review of Earth and Planetary Sciences 35, 241-272. This paper incorporates a very readable, useful, and informative survey of the relevant palaeoclimatic records and recent thinking about them. Wally Broecker's famous `conveyor belt' is a metaphor for the ocean's global-scale meridional overturning circulation that has greatly helped efforts to understand the variability observed during the glacial cycles. Despite its evident usefulness, the metaphor embodies a fluid-dynamically unrealistic assumption, namely that shutting off North Atlantic deep-water formation also shuts off the global-scale return flow. (If you jam a real conveyor belt somewhere, then the rest of it stops too.) In this respect the metaphor needs refinements such as those argued for by Trond Dokken and co-workers,67 recognizing that parts of the `conveyor' can shut down while other parts continue to move and transport heat and salt at significant rates. As they point out, such refinements are likely to be important for understanding the most abrupt65 of the observed changes, the Dansgaard-Oeschger warmings, and the Arctic Ocean tipping point that may now be imminent.

67.  Dokken, T. M., Nisancioglu, K. H., Li, C., Battisti, D. S., and Kissel, C., 2013: Dansgaard-Oeschger cycles: interactions between ocean and sea ice intrinsic to the Nordic seas. Paleoceanography, 28, 491-502. This is the first fluid-dynamically credible explanation of the extreme rapidity and large magnitude65 of the Dansgaard-Oeschger warming events. Those events left clear imprints in ice-core and sedimentary records all over the Northern Hemisphere and were so sudden, and so large in magnitude, that a tipping-point mechanism must have been involved. The proposed explanation represents the only such mechanism suggested so far that could be fast enough, involving the sudden disappearance of sea ice covering the Nordic seas.

68.  In fairness I should avoid any impression that Herbert Spencer is solely to blame for human-nature cynicism, as such. He did, to be sure, adopt the essence of what I'm calling simplistic evolutionary theory as the Answer to Everything and as central to the grand philosophical system that he built. But regarding human-nature cynicism, as such, he must surely have been influenced by earlier philosophers such as Thomas Hobbes who famously declared that human nature is vicious and that only authoritarian dictatorship can save us from a dog-eat-dog life that's `nasty, brutish and short'. Hobbes evidently knew nothing about actual hunter-gatherer societies, whose individuals have all kinds of compassionate and cooperative modes of behaviour.

69.  Harari, Y. N., 2014: Sapiens: A Brief History of Humankind. Yuval Noah Harari's famous book explores what we know and don't know about our ancestors' evolution, paying careful attention to the palaeogenetic, palaeoarchaeological and archaeological records and to various attempts at explanatory theories, while being careful to note where evidence is lacking. The book aptly puts emphasis on what it calls the `cognitive revolution' near the start of the Upper Palaeolithic. This is the stage of runaway brain evolution discussed also by Rossano.48 It is the stage at which our ancestors developed the ability to imagine nonexistent worlds, and to create stories about them as a new way of promoting group solidarity, and of promoting it in far larger and more powerful groups than ever before. A prerequisite must have been a language ability already well developed.

Abe-Ouchi, A., Saito, F., Kawamura, K., Raymo, M. E., Okuno, J., Takahashi, K., and Blatter, H., 2013: Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume. Nature, 500, 190-194.

Abram, N. J., Wolff, E. W., and Curran, M. A. J., 2013: Review of sea ice proxy information from polar ice cores, Quaternary Science Reviews 79, 168-183. The light orange graph near the top of Figure 13 comes from measuring the concentration of sea salt in the Dronning Maud Land ice core from East Antarctica. The review carefully discusses why this measurement should correlate with the area of sea ice surrounding the continent, as a result of long-range transport of airborne, sea-salt-bearing powder snow blown off the surface of the sea ice. This curve is actually sea salt concentration divided by the estimated time of ice-core-forming snow accumulation in the ice-core layer measured -- hence the label `flux' or rate of arrival, rather than `concentration'.

Andreassen, K., Hubbard, A., Winsborrow, M., Patton, H., Vadakkepuliyambatta, S., Plaza-Faverola, A., Gudlaugsson, E., Serov, P., Deryabin, A., Mattingsdal, R., Mienert, J., and Bünz, S., 2017: Massive blow-out craters formed by hydrate-controlled methane expulsion from the Arctic seafloor, Science, 356, 948-953. It seems that the clathrates in high latitudes have been melting ever since the later part of the last deglaciation, probably contributing yet another positive feedback, both then and now. Today, the melting rate is accelerating to an extent that hasn't yet been well quantified but is related to ocean warming and to the accelerated melting of the Greenland and West Antarctic ice sheets, progressively unloading the permafrosts beneath. Reduced pressures lower the clathrate melting point.

Archer, D., 2009: The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth's Climate. Princeton University Press, 180 pp.

Blackburn, E. and Epel, E., 2017: The Telomere Effect: A Revolutionary Approach to Living Younger, Healthier, Longer. London, Orion Spring. Elizabeth Blackburn is the molecular biologist who won the Nobel Prize in 2009 for her co-discovery of telomerase. Elissa Epel is a leading health psychologist. Their book explains the powerful influences of environment and lifestyle on health and ageing, via the role of the enzyme telomerase in renewing telomeres -- the end-caps that protect our strands of DNA and increase the number of pre-senescent cell divisions. Today's knowledge of telomere dynamics well illustrates the error in thinking of genes as the `ultimate causation' of everything in biology, which among other things misses the importance of multi-timescale processes, as I'll explain.

Bolton, T. and Zanna, L., 2019: Applications of Deep Learning to Ocean Data Inference and Subgrid Parameterization. J. Advances in Modeling Earth Systems, 11, 376-399. This work takes early steps toward applying deep-learning algorithms to the problem of better representing fine-scale patterns of eddies in the ocean and their systematic effects on weather and climate. A further step will be to bring in Bayesian causality theory to improve the input from fluid-dynamical knowledge.

Boomsliter, P. C. and Creel, W., 1961: The long pattern hypothesis in harmony and hearing. J. Mus. Theory (Yale School of Music), 5(1), 2-31. This wide-ranging and penetrating discussion was well ahead of its time and is supported by the authors' ingenious psychophysical experiments, which clearly demonstrate repetition-counting as distinct from Fourier analysis. On the purely musical issues there is only one slight lapse, in which the authors omit to notice the context dependence of tonal major-minor distinctions. On the other hand the authors clearly recognize, for instance, the relevance of what's now called auditory scene analysis (pp. 13-14).
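The paper's central distinction, pitch perception as repetition-counting rather than as Fourier analysis, can be illustrated with a toy computation (my own sketch, not from the paper). A tone built from harmonics at 400, 600, and 800 Hz contains no energy at 200 Hz, yet its waveform repeats 200 times per second, and a lag-matching search finds that repetition rate even though no Fourier component sits there:

```python
import math

def repetition_pitch(signal, fs, min_f=50.0, max_f=1000.0):
    # Repetition counting: find the lag at which the signal best
    # matches a shifted copy of itself (an autocorrelation search).
    best_lag, best_score = None, float("-inf")
    for lag in range(int(fs / max_f), int(fs / min_f) + 1):
        score = sum(signal[i] * signal[i + lag]
                    for i in range(len(signal) - lag))
        if score > best_score:
            best_score, best_lag = score, lag
    return fs / best_lag

fs = 8000.0
# Harmonics at 400, 600, and 800 Hz: the 200 Hz fundamental is
# absent, yet the waveform repeats every 1/200 of a second.
signal = [sum(math.sin(2 * math.pi * f * i / fs)
              for f in (400.0, 600.0, 800.0))
          for i in range(2000)]

pitch = repetition_pitch(signal, fs)  # -> 200.0
```

This is the `missing fundamental' effect: a Fourier spectrum of the same signal shows peaks only at 400, 600, and 800 Hz, yet the repetition-based estimate, like the ear, reports the 200 Hz periodicity.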

Budinis, S., MacDowell, N, Krevor, S., Dixon, T., Kemper, J., and Hawkes, A., 2017: Can carbon capture and storage unlock `unburnable carbon'? Energy Procedia, 114, 7504-7515.

Burke, A., Stewart, A. L., Adkins, J. F., Ferrari, R., Jansen, M. F., and Thompson, A. F., 2015: The glacial mid-depth radiocarbon bulge and its implications for the overturning circulation. Paleoceanography, 30, 1021-1039.

Carmichael, M. J., Pancost, R. D, and Lunt, D. J., 2018: Changes in the occurrence of extreme precipitation events at the Palaeocene-Eocene Thermal Maximum. Earth Planet. Sci. Lett., 501, 24-36. Despite model shortcomings, this study provides some cross-checking of the expectation that hothouse climates like the Eocene should produce extremes of storminess.

Cramwinckel, M., Huber, M., Kocken, I. J., Agnini, C., Bijl, P. K., Bohaty, S. M., Frieling, J., Goldner, A., Hilgen, F. J., Kip, E. L., Peterse, F., van der Ploeg, R., Röhl, U., Schouten, S., and Sluijs, A., 2018: Synchronous tropical and polar temperature evolution in the Eocene. Nature, 559, 382-386. This recent data study from ocean sediment cores brings many lines of evidence together, confirming earlier conclusions that the hottest prolonged period was roughly between 50 and 53 million years ago, peaking around 51-52 million years ago except for a relatively brief Palaeocene-Eocene Thermal Maximum (PETM) marking the start of the Eocene 56 million years ago. This paper also presents improved estimates of tropical sea surface temperatures, with maxima of the order of 35°C around 52 million years ago and nearly 38°C at the PETM (which many scientists attribute to a natural injection of carbon dioxide or methane comparable in magnitude to today's industrial injections). Tropical sea surface temperatures today are still mostly below 30°C.

Doolittle, W. F., 2013: Is junk DNA bunk? A critique of ENCODE. Proc. Nat. Acad. Sci., 110, 5294-5300. ENCODE is a large data-analytical project to look for signatures of biological functionality in genomic sequences. The word `functionality' well illustrates human language as a conceptual minefield. For instance the word is often, it seems, read to mean `known functionality having an adaptive advantage', excluding the many neutral variants, redundancies, and multiplicities revealed by studies such as those of Wagner (2014).

Emanuel, K. A., 2005: Increasing destructiveness of tropical cyclones over the past 30 years. Nature, 436, 686-688. For tropical cyclones, the simple picture of a fast runup to peak intensity does not apply. Intensification is a relatively slow and complex process. Tropical cyclones scoop up most of their weather fuel directly and continually from the sea surface, making them sensitive to ocean heat content and sea-surface temperature. Less obviously, however, the laws of thermodynamics -- and the fact that these cyclones evolve much more slowly than individual thunderstorms -- make them sensitive also to conditions in their large-scale surroundings including temperatures at high altitudes, which in turn are affected by complex cloud-radiation interactions, and atmospheric haze or aerosol. Our current modelling capabilities fall far short of giving us a complete picture. Professor Emanuel, who is the leading expert on these matters, tells me that his conclusions in this Nature paper need modification for the Atlantic because of atmospheric aerosol issues not then taken into account. However, at the time of writing (2020) the conclusions for the Pacific still appear valid, he tells me. Those conclusions still suggest a tendency for the models to err on the side of underprediction, rather than overprediction, of future extremes, as discussed in the paper.

Farmer, J. D. et al., 2019: Sensitive intervention points in the post-carbon transition. Science, 364, 132-134. History shows, they point out, that there is hope of reaching sociological tipping points since, despite the continued political pressures to maintain fossil-fuel subsidies and kill renewables, not only are countervailing political pressures now building up but, also, `renewable energy sources such as solar photovoltaics (PV) and wind have experienced rapid, persistent cost declines' whereas, despite `far greater investment and subsidies, fossil fuel costs have stayed within an order of magnitude for a century.'

Feynman, R. P., Leighton, R. B., and Sands, M., 1964: The Feynman Lectures on Physics, chapter 19 of vol. II, Mainly Electromagnetism and Matter. Addison-Wesley.

Foukal, P., Fröhlich, C., Spruit, H., and Wigley, T. M. L., 2006: Variations in solar luminosity and their effect on the Earth's climate. Nature, 443, 161-166, © Macmillan. An extremely clear review of some robust and penetrating insights into the relevant solar physics, based on a long pedigree of work going back to 1977. For a sample of the high sophistication that's been reached in constraining solar models, see also Rosenthal, C. S. et al., 1999: Convective contributions to the frequency of solar oscillations, Astronomy and Astrophysics 351, 689-700.

Gelbspan, R., 1997: The Heat is On: The High Stakes Battle over Earth's Threatened Climate. Addison-Wesley, 278 pp. See especially chapter 2.

Gilbert, C. D. and Li, W. 2013: Top-down influences on visual processing. Nature Reviews (Neuroscience), 14, 350-363. This review presents anatomical and neuronal evidence for the active, prior-probability-dependent nature of perceptual model-fitting, e.g. `Top-down influences are conveyed across... descending pathways covering the entire neocortex... The feedforward connections... ascending... For every feedforward connection, there is a reciprocal [descending] feedback connection that carries information about the behavioural context... Even when attending to the same location and receiving an identical stimulus, the tuning of neurons can change according to the perceptual task that is being performed...', etc.

Giusberti, L., Boscolo Galazzo, F., and Thomas, E., 2016: Variability in climate and productivity during the Paleocene-Eocene Thermal Maximum in the western Tethys (Forada section). Climate of the Past, 12, 213-240. doi:10.5194/cp-12-213-2016. The early Eocene began around 56 million years ago with the so-called Paleocene-Eocene Thermal Maximum (PETM), a huge global-warming episode with accompanying mass extinctions now under intensive study by geologists and paleoclimatologists. The PETM was probably caused by carbon-dioxide injections comparable in size to those from current fossil-fuel burning. The injections almost certainly came from massive volcanism as well as, possibly, from methane release and peatland burning. The western Tethys Ocean was a deep-ocean site at the time and so provides biological and isotopic evidence both from surface and from deep-water organisms, such as foraminifera with their sub-millimetre-sized carbonate shells. The accompanying increase in the extremes of storminess is beginning to be captured in recent model studies (e.g. Carmichael et al. 2018).

Gregory, R. L., 1970: The Intelligent Eye. London, Weidenfeld and Nicolson, 191 pp. This great classic is still well worth reading. It's replete with beautiful and telling illustrations of how vision works. Included is a rich collection of stereoscopic images viewable with red-green spectacles. The brain's unconscious internal models that mediate visual perception are called `object hypotheses', and the active nature of the processes whereby they're selected is clearly recognized, along with the role of prior probabilities. There's a thorough discussion of the standard visual illusions as well as such basics as the perceptual grouping studied in Gestalt psychology, whose significance for word-patterns I discussed in Part I of Lucidity and Science.34 In a section on language and language perception, Chomsky's `deep structure' is identified with the repertoire of unconscious internal models used in decoding sentences. The only points needing revision are speculations that the first fully-developed languages arose only in very recent millennia and that they depended on the invention of writing. That's now refuted by the evidence from Nicaraguan Sign Language (e.g. Kegl et al. 1999), showing that there are genetically-enabled automata for language and syntactic function.

Gray, John, 2018: Seven Types of Atheism. Allen Lane. Chapter 2 includes what seems to me a shrewd assessment of Ayn Rand, as well as the transhumanists with their singularity, echoing de Chardin's "Omega Point", the imagined culmination of all evolution in a single Supreme Being -- yet more versions of the Answer to Everything.

Hoffman, D. D., 1998: Visual Intelligence. Norton, 294 pp. Essentially an update on Gregory (1970), with many more illustrations and some powerful theoretical insights into the way visual perception works.

Jaynes, E. T., 2003: Probability Theory: The Logic of Science. Edited by G. Larry Bretthorst. Cambridge, University Press, 727 pp. This great posthumous work blows away the conceptual confusion surrounding probability theory and statistical inference, with a clear focus on the foundations of the subject established by the theorems of Richard Threlkeld Cox. The theory goes back three centuries to James Bernoulli and Pierre-Simon de Laplace, and it underpins today's state of the art in model-fitting and data compression (MacKay 2003). Much of the book digs deep into the technical detail, but there are instructive journeys into history as well, especially in chapter 16. There were many acrimonious disputes. They were uncannily similar to the disputes over biological evolution. Again and again, especially around the middle of the twentieth century, unconscious assumptions impeded progress. They involved dichotomization and what Jaynes calls the mind-projection fallacy, conflating outside-world reality with our conscious and unconscious internal models thereof. There's more about this in my chapter 5 on music, mathematics, and the Platonic.
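As a minimal illustration of probability theory as extended logic (a toy example of my own, not taken from the book): Bayes' rule updates prior degrees of belief in competing hypotheses in the light of data, and makes the role of the prior explicit rather than hiding it:

```python
from math import prod

def posterior(priors, likelihoods):
    # Bayes' rule: posterior is proportional to prior times
    # likelihood, normalized so the posteriors sum to one.
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: joint[h] / total for h in joint}

# Two hypotheses about a coin: fair, or biased to give 80% heads.
flips = "HHTHHHHH"  # observed data: 7 heads, 1 tail

def likelihood(p_heads):
    return prod(p_heads if f == "H" else 1.0 - p_heads for f in flips)

post = posterior({"fair": 0.5, "biased": 0.5},
                 {"fair": likelihood(0.5), "biased": likelihood(0.8)})
# The data shift belief strongly toward "biased"; a different prior
# would lead the same data to a different degree of belief.
```

The prior is simply part of the model being fitted, stated openly; nothing subjective is being smuggled in.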

Kendon, E. J., Roberts, N. M., Fowler, H. J., Roberts, M. J., Chan, S. C., and Senior, C. A., 2014: Heavier summer downpours with climate change revealed by weather forecast resolution model. Nature Climate Change, doi:10.1038/nclimate2258 (advance online publication).

King, D., Schrag, D., Zhou, D., Qi, Y., Ghosh, A., and co-authors, 2015: Climate Change: A Risk Assessment. Cambridge Centre for Science and Policy, 154 pp. The original link is broken but I've mirrored a copy here under the appropriate Creative Commons licence. Included in this very careful and sober discussion of the risks confronting us is the possibility that methane clathrates, also called methane hydrates or `shale gas in ice', will be added to the fossil-fuel industry's extraction plans (§7, p. 42). The implied carbon-dioxide injection would be very far indeed above IPCC's highest emissions scenario. This would be the `Business As Usual II' scenario of Valero et al. 2011.

Krawczynski, M. J., Behn, M. D., Das, S. B., Joughin, I., 2009: Constraints on the lake volume required for hydro-fracture through ice sheets. Geophys. Res. Lett., 36, L10501, doi:10.1029/2008GL036765. The standard elastic crack-propagation equations are used to describe the downward `chiselling' of meltwater, which is denser than the surrounding ice, forcing a crevasse to open all the way to the bottom. This mechanism is key to the sudden drainage of small lakes of meltwater that accumulate on the top surface. Such drainage has been observed to happen within hours, for instance on the ice sheet in northwest Greenland. The same `chiselling' mechanism was key to the sudden breakup of large parts of the Larsen ice shelf next to the Antarctic Peninsula, starting in the mid-1990s.
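The physical crux, that meltwater is denser than ice, can be seen with a back-of-the-envelope estimate (my own rough numbers, not taken from the paper, which uses the full elastic crack-propagation equations). In a water-filled crevasse the water pressure at the crack tip exceeds the ice overburden pressure by an amount that grows with depth, so a deep water-filled crack keeps wedging itself downward:

```python
# Illustrative round-number densities, not the paper's values.
G = 9.81            # gravitational acceleration, m s^-2
RHO_WATER = 1000.0  # density of meltwater, kg m^-3
RHO_ICE = 917.0     # density of glacier ice, kg m^-3

def excess_pressure_pa(depth_m):
    # Water pressure minus ice overburden pressure at the tip of a
    # water-filled crack: (rho_water - rho_ice) * g * depth.
    return (RHO_WATER - RHO_ICE) * G * depth_m

# At 1 km depth the excess is about 0.8 MPa, ample to keep
# fracturing ice, which is weak in tension.
mpa_at_1km = excess_pressure_pa(1000.0) / 1e6
```

The deeper the crack goes, the harder the water pushes, which is why the drainage, once started, runs away to completion within hours.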

Lynch, M., 2007: The frailty of adaptive hypotheses for the origins of organismal complexity. Proc. Nat. Acad. Sci., 104, 8597-8604. A lucid and penetrating overview of what was known in 2007 about non-human evolution mechanisms, as seen by experts in population genetics and in molecular and cell biology and bringing out the important role of neutral, as well as adaptive, genomic changes, now independently confirmed in Wagner (2014).

MacKay, D. J. C., 2003: Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 628 pp. On-screen viewing is permitted via the author's website. This book by the late David MacKay is a brilliant, lucid and authoritative analysis of the topics with which it deals, at the most fundamental level. It builds on the foundation provided by Cox's theorems (Jaynes 2003) to clarify (a) the implications for optimizing model-fitting to noisy data, usually discussed under the heading `Bayesian inference', and (b) the implications for optimal data compression. And from the resulting advances and clarifications we can now say that `data compression and data modelling are one and the same' (p. 31).
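MacKay's slogan that data compression and data modelling are one and the same can be illustrated in a few lines (a toy sketch of my own, not from the book). An optimal code assigns each symbol a length of -log2 p bits under some probabilistic model p, so a model that fits the data better yields a shorter encoding:

```python
import math
from collections import Counter

def shannon_bits(data, model):
    # Ideal code length under a probabilistic model: -log2 p(x)
    # bits for each symbol x, summed over the whole message.
    return sum(-math.log2(model[x]) for x in data)

data = "abracadabra alakazam"
alphabet = set(data)

# Model 1: uniform over the observed alphabet (no real modelling).
uniform = {s: 1.0 / len(alphabet) for s in alphabet}

# Model 2: empirical symbol frequencies (a better model of the data).
counts = Counter(data)
empirical = {s: counts[s] / len(data) for s in alphabet}

# Gibbs' inequality guarantees the better model compresses better.
saving = shannon_bits(data, uniform) - shannon_bits(data, empirical)
```

Real compressors such as arithmetic coders get within a fraction of a bit of these ideal lengths, so improving the model is improving the compressor, and vice versa.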

Marchitto, T. M., Lynch-Stieglitz, J., and Hemming, S. R., 2006: Deep Pacific CaCO3 compensation and glacial-interglacial atmospheric CO2. Earth and Planetary Science Letters, 231, 317-336. This technical paper contains an unusually clear explanation of the probable role of limestone sludge (CaCO3) and seawater chemistry in the way carbon dioxide (CO2) was stored in the oceans during the recent ice ages. The paper gives a useful impression of our current understanding, and of the observational evidence that supports it. The evidence comes from meticulous and laborious measurements of tiny variations in trace chemicals that are important in the oceans' food chains, and in isotope ratios of various elements including oxygen and carbon, laid down in layer after layer of ocean sediments over very many tens of millennia. Another reason for citing the paper, which requires the reader to have some specialist knowledge, is to highlight just how formidable are the obstacles to building accurate models of the carbon sub-system, including the sinking phytoplankton. Such models try to represent oceanic carbon-dioxide storage along with observable carbon isotope ratios, which are affected by the way in which carbon isotopes are taken up by living organisms via processes of great complexity and variability. Not only are we far from modelling oceanic fluid-dynamical transport processes with sufficient accuracy, including turbulent eddies over a vast range of spatial scales, but we are even further from accurately modelling the vast array of biogeochemical processes involved throughout the oceanic and terrestrial biosphere -- including for instance the biological adaptation and evolution of entire ecosystems and the rates at which the oceans receive trace chemicals from rivers and airborne dust. 
The oceanic upper layers where plankton live have yet to be modelled in fine enough detail to represent the recycling of nutrient chemicals simultaneously with the gas exchange rates governing leakage. It's fortunate indeed that we have the hard evidence, from ice cores, for the atmospheric carbon dioxide concentrations that actually resulted from all this complexity.

McIntyre, M. E., and Woodhouse, J., 1978: The acoustics of stringed musical instruments. Interdisc. Sci. Rev., 3, 157-173. We carried out our own psychophysical experiments to check the point made in connection with Figure 3, about the intensities of different harmonics fluctuating out of step with each other during vibrato.

NAS-RS, 2014 (US National Academy of Sciences and UK Royal Society): Climate Change: Evidence & Causes. A brief, readable, and very careful summary from a high-powered team of climate scientists, supplementing the vast IPCC reports and emphasizing the many cross-checks that have been done.

Oxburgh, R., 2016: Lowest Cost Decarbonisation for the UK: The Critical Role of CCS. Report to the Secretary of State for Business, Energy and Industrial Strategy from the Parliamentary Advisory Group on Carbon Capture and Storage, September 2016. Available from

Palmer, T. N., 2019: Discretisation of the Bloch sphere, invariant set theory and the Bell theorem. Preprint available at

Pierrehumbert, R. T., 2010: Principles of Planetary Climate. Cambridge University Press, 652 pp.

Platt, P., 1995: Debussy and the harmonic series. In: Essays in honour of David Evatt Tunley, ed. Frank Callaway, pp. 35-59. Perth, Callaway International Resource Centre for Music Education, School of Music, University of Western Australia. ISBN 086422409 5.

Le Quéré, C., et al., 2007: Saturation of the Southern Ocean CO2 sink due to recent climate change. Science, 316, 1735-1738. This work, based on careful observation, reveals yet another positive feedback that's increasing climate sensitivity to carbon dioxide emissions.

Rees, M., 2014: Can we prevent the end of the world? This seven-minute TED talk, by Astronomer Royal Martin Rees, makes the key points very succinctly. The talk is available here, along with a transcript. Two recently-established focal points for exploring future risk are the Cambridge Centre for the Study of Existential Risk and the Future of Life Institute.

Schoof, C., 2010: Ice-sheet acceleration driven by melt supply variability. Nature 468, 803-806, doi:10.1038/nature09618. This modelling study, motivated and cross-checked by recent observations of the accelerating ice streams in northwest Greenland, is highly simplified but captures some of the complexity of subglacial flow and its effects on the motion of the whole ice sheet.

Shackleton, N. J., 2000: The 100,000-year ice-age cycle identified and found to lag temperature, carbon dioxide, and orbital eccentricity. Science, 289, 1897-1902.

Shakhova, N., Semiletov, I., Leifer, I., Sergienko, V., Salyuk, A., Kosmach, D., Chernykh, D., Stubbs, C., Nicolsky, D., Tumskoy, V., and Gustafsson, O., 2014: Ebullition and storm-induced methane release from the East Siberian Arctic Shelf. Nature Geosci., 7, 64-70, doi:10.1038/ngeo2007. This is hard observational evidence.

Shakun, J. D., Clark, P. U., He, F., Marcott, S. A., Mix, A. C., Liu, Z., Otto-Bliesner, B., Schmittner, A., and Bard, E., 2012: Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation. Nature, 484, 49-55.

Skinner, L. C., Waelbroeck, C., Scrivner, A. C., and Fallon, S. J., 2014: Radiocarbon evidence for alternating northern and southern sources of ventilation of the deep Atlantic carbon pool during the last deglaciation. Proc. Nat. Acad. Sci. Early Edition (online).

Smythies, J., 2009: Philosophy, perception, and neuroscience. Perception, 38, 638-651. On neuronal detail this discussion should be compared with that in Gilbert and Li (2013). For present purposes the discussion is of interest in two respects, the first being that it documents parts of what I called the `quagmire of philosophical confusion', about the way perception works and about conflating different levels of description. The discussion begins by noting, among other things, the persistence of the fallacy that perception is what it seems to be subjectively, namely veridical in the sense of being `direct', i.e., independent of any model-fitting process, a simple mapping between appearance and reality. This is still taken as self-evident, it seems, by some professional philosophers despite the evidence from experimental psychology, as summarized for instance in Gregory (1970), in Hoffman (1998), and in Ramachandran and Blakeslee (1998). Then a peculiar compromise is advocated, in which perception is partly direct, and partly works by model-fitting, so that `what we actually see is always a mixture of reality and virtual reality' [sic; p. 641]. (Such a mixture is claimed also to characterize some of the early video-compression technologies used in television engineering -- as distinct from the most advanced such technologies, which work entirely by model-fitting, e.g. MacKay 2003.) The second respect, perhaps of greater interest here, lies in a summary of some old clinical evidence, from the 1930s, that gave early insights into the brain's different model components. Patients described their experiences of vision returning after brain injury, implying that different model components recovered at different rates and were detached from one another at first. On pp. 641-642 we read about recovery from a particular injury to the occipital lobe: `The first thing to return is the perception of movement.
On looking at a scene the patient sees no objects, but only pure movement... Then luminance is experienced but... formless... a uniform white... Later... colors appear that float about unattached to objects (which are not yet visible as such). Then parts of objects appear -- such as the handle of a teacup -- that gradually coalesce to form fully constituted... objects, into which the... colors then enter.'

Solanki, S. K., Krivova, N. A., and Haigh, J. D., 2013: Solar Irradiance Variability and Climate. Annual Review of Astronomy and Astrophysics, 51, 311-351. This review summarizes and clearly explains the recent major advances in our understanding of radiation from the Sun's surface, showing in particular that its magnetically-induced variation cannot compete with the carbon-dioxide injections I'm talking about. To be sure, that conclusion depends on the long-term persistence of the Sun's magnetic activity cycle, whose detailed dynamics is not well understood. (A complete shutdown of the magnetic activity would make the Sun significantly dimmer during the shutdown, out to times of the order of a hundred millennia.) However, the evidence for persistence of the magnetic activity cycle is now extremely strong (see the review's Figure 9). It comes from a long line of research on cosmogenic isotope deposits showing a clear footprint of persistent solar magnetic activity throughout the past 10 millennia or so, waxing and waning over a range of timescales out to millennial. The timing of these changes, coming from the Sun's internal dynamics, can have no connection with the timing of the Earth's orbital changes that trigger terrestrial deglaciations.

Stein, P., 2020: Interview with BBC News on 24 January 2020 on the Rolls Royce consortium, as advertised also on the Rolls Royce website. Paul Stein, the Chief Technology Officer at Rolls Royce, claimed that the design and costing are already complete and that both have been scrutinized by the Royal Academy of Engineering and by the Treasury department of the UK government. He argued persuasively that basing the design on conservative engineering and advanced manufacturing techniques will drive costs down -- by contrast with a `large civil construction project' such as the Hinkley Point C power station, for which `history shows that costs go up with time' (and which, one might add, is more vulnerable to natural disasters and acts of war and terrorism).

Stern, N., 2009: A Blueprint for a Safer Planet: How to Manage Climate Change and Create a New Era of Progress and Prosperity, London, Bodley Head, 246 pp. See also Oxburgh (2016).

Thewissen, J. G. M., Sensor, J. D., Clementz, M. T., and Bajpai, S., 2011: Evolution of dental wear and diet during the origin of whales. Paleobiology, 37, 655-669.

Unger, R. M., and Smolin, L., 2015: The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy. Cambridge, Cambridge University Press, 543 pp. A profound and wide-ranging discussion of how progress might be made in fundamental physics and cosmology. The authors -- two highly respected thinkers in their fields, philosophy and physics -- make a strong case that the current logjam has to do with our tendency to conflate the outside world with our mathematical models thereof, what Jaynes (2003) calls the `mind-projection fallacy'. Unger and Smolin point out that `part of the task is to distinguish what science has actually found out about the world from the metaphysical commitments for which the findings of science are often mistaken.'

Valero, A., Agudelo, A., and Valero, A., 2011: The crepuscular planet. A model for the exhausted atmosphere and hydrosphere. Energy, 36, 3745-3753. This careful discussion includes up-to-date estimates of proven and estimated fossil-fuel reserves, including coal, oil, gas, tar sands, and clathrates. There is an interactive website tracking the estimated cumulative emissions of carbon dioxide since industrialization began, measured in tonnes of carbon. By some estimates, the trillionth tonne of carbon emitted is the largest cumulative total possible if the impacts of climate change are to be kept within nominally acceptable levels.

Wagner, A., 2014: Arrival of the Fittest: Solving Evolution's Greatest Puzzle. London, Oneworld. There is a combinatorially large number of viable metabolisms, that is, possible sets of enzymes hence sets of chemical reactions, that can perform some biological function such as manufacturing cellular building blocks from a fuel like sunlight, or glucose, or hydrogen sulphide -- or, by a supreme irony, even from the antibiotics now serving as fuel for some bacteria. Andreas Wagner and co-workers have shown in recent years that within the unimaginably vast space of possible metabolisms, which has around 5000 dimensions, the viable metabolisms, astonishingly, form a joined-up `genotype network' of closely adjacent metabolisms. This adjacency means that single-gene, hence single-enzyme, additions or deletions can produce combinatorially large sets of new viable metabolisms, including metabolisms that are adaptively neutral or spandrel-like but advantageous in new environments, as seen in the classic experiments of C. H. Waddington on environmentally-stressed fruit flies (e.g. Wills 1994, p. 241). Such neutral changes can survive and spread within a population because, being harmless, they are not deleted by natural selection. Moreover, they promote massive functional duplication or redundancy within metabolisms, creating a tendency toward robustness, and graceful degradation, of functionality. And the same properties of adjacency, robustness, evolvability and adaptability are found within the similarly vast spaces of, for instance, possible protein molecules and possible DNA-RNA-protein circuits and other molecular-biological circuits. Such discoveries may help to resolve controversies about functionality within so-called junk DNA (e.g. Doolittle 2013). 
These breakthroughs, in what is now called `systems biology', add to insights like those reviewed in Lynch (2007) and may also lead to new ways of designing, or rather discovering, robust electronic circuits and computer codes. Further such insights come from recent studies of artificial self-assembling structures in, for instance, crowds of `swarm-bots'. For a general advocacy of systems-biological thinking as an antidote to extreme reductionism, see Noble's book.
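Wagner's central finding -- that viable genotypes form a single joined-up network under single-step mutations -- can be illustrated with a deliberately tiny toy model. This sketch is my own construction, not Wagner's: genomes are 8-bit vectors (real metabolic spaces have around 5000 dimensions), and a genotype counts as `viable' under the made-up rule that it carries at least 3 enzymes.

```python
from itertools import product
from collections import deque

N = 8            # toy genome length: each bit marks one enzyme present/absent
MIN_ENZYMES = 3  # toy viability rule, a stand-in for real biochemical constraints

def viable(genotype):
    """A genotype is viable if it carries at least MIN_ENZYMES enzymes."""
    return sum(genotype) >= MIN_ENZYMES

def neighbours(genotype):
    """All genotypes one enzyme addition or deletion away (single-bit flips)."""
    for i in range(len(genotype)):
        g = list(genotype)
        g[i] ^= 1
        yield tuple(g)

# Enumerate every viable genotype in the toy space of 2**N genomes.
viable_set = {g for g in product((0, 1), repeat=N) if viable(g)}

# Breadth-first search: count connected components of the genotype network,
# whose edges join viable genotypes differing by a single mutation.
unvisited = set(viable_set)
components = 0
while unvisited:
    components += 1
    queue = deque([unvisited.pop()])
    while queue:
        g = queue.popleft()
        for nb in neighbours(g):
            if nb in unvisited:
                unvisited.remove(nb)
                queue.append(nb)

print(f"{len(viable_set)} viable genotypes form {components} connected component(s)")
# prints: 219 viable genotypes form 1 connected component(s)
```

In this toy space the 219 viable genotypes form one component: any two viable genomes, even with entirely different enzyme sets, are linked by a chain of single mutations that never passes through a non-viable intermediate -- a miniature analogue of the adjacency property that makes Wagner's genotype networks evolvable.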

Watson, A. J., Vallis, G. K., and Nikurashin, M., 2015: Southern Ocean buoyancy forcing of ocean ventilation and glacial atmospheric CO2. Nature Geosci, 8, 861-864.


Copyright © Michael Edgeworth McIntyre 2013. Last updated 28 May 2020 and (from 23 June 2014 onward) incorporating a sharper understanding of the last deglaciation and of the abrupt `Dansgaard-Oeschger warmings', thanks to generous advice from several colleagues including Dr Luke Skinner.