OLD DRAFT PRELUDE:
Shown below is the original draft prelude, or extended preface, for a book (now published; preview at http://www.damtp.cam.ac.uk/user/mem/papers/LHCE/mcintyre-book-preview.html). It builds on the three Lucidity and Science articles published in Interdisciplinary Science Reviews 22, 199-216 and 285-303 (1997) and 23, 29-70 (1998), together with my keynote lecture to the 4th Kobe Symposium on Human Development, published in Bull. Faculty Human Devel. (Kobe University, Japan), 7(3), 1-52 (2000). Related issues in probability and statistics are discussed here. Corrected and updated copies of the Interdisciplinary Science Reviews articles can be downloaded via this index. See also the CORRIGENDUM, a slightly corrupted version of which was published in the December 1998 issue of Interdisciplinary Science Reviews. Further material is in the book version, showing for instance why scientific model-building involves mathematics.
By riffling the pages of this book you can animate the dots at top right. A similar animation should be visible on my home page, http://www.damtp.cam.ac.uk/user/mem/, and on the web version of this prelude, if you are viewing things through a web browser that can display animated .gif files.
What are you looking at? If the animation is done smoothly, and if you have normal vision, your eyes will tell you that you are looking at a person walking. Yet there are only twelve moving dots -- or, more precisely, twelve dots that appear to move.
This demonstration makes an important point about how we perceive and know things. Perception is an unconscious `science in miniature'. In other words perception, like science, works by model-fitting. The brain fits models -- partial, approximate representations of reality -- to data from the outside world. In the case of the twelve moving dots, the significant data are the changing positions of the dots as imaged on the retinas of your eyes. These are positions in a two-dimensional space. The internal model in your brain represents, among other things, a particular kind of motion in three-dimensional space.
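The idea that perception fits a three-dimensional model to two-dimensional retinal data can be sketched numerically. The following toy example is my own illustration, not the book's, and the dot positions in it are hypothetical: it `observes' the 2-D projection of a few 3-D dots seen at an unknown viewing angle, then recovers that angle by minimizing the misfit between model and data -- a crude stand-in for the brain's unconscious model-fitting.

```python
# Toy model-fitting: recover a 3-D viewing angle from 2-D "retinal" data.
# (An illustrative sketch only; the brain's actual algorithm is unknown.)
import math

# Hypothetical 3-D dot positions (a stick-figure fragment).
MODEL = [(0.0, 1.7, 0.0), (0.3, 1.0, 0.1), (-0.3, 1.0, -0.1), (0.1, 0.0, 0.4)]

def project(points, angle):
    """Rotate about the vertical (y) axis, then drop depth (orthographic)."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x + s * z, y) for x, y, z in points]

# "Retinal" data: the model seen at a viewing angle the fitter doesn't know.
true_angle = 0.7
data = project(MODEL, true_angle)

def misfit(angle):
    """Sum of squared differences between predicted and observed 2-D dots."""
    return sum((u - u2) ** 2 + (v - v2) ** 2
               for (u, v), (u2, v2) in zip(project(MODEL, angle), data))

# Model-fitting by brute force: try many candidate angles, keep the best fit.
best = min((i * 2 * math.pi / 1000 for i in range(1000)), key=misfit)
print(round(best, 2))  # → 0.7, the hidden viewing angle
```

The point of the sketch is only this: a low-dimensional internal model plus a goodness-of-fit criterion suffices, in principle, to infer three-dimensional structure from two-dimensional data.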
The model-fitting process involves unconscious assumptions. It takes place beyond the reach of conscious awareness -- automatically, involuntarily, and at prodigious speed. The impression of a person walking has formed in your mind before you have time to think about it: experiments indicate timespans around a fifth of a second. But in other respects the process resembles the far slower, more conscious model-fitting process on which all scientific knowledge is based.
In science, one is always dealing, explicitly or implicitly, with some theory or model -- with some partial, approximate representation of reality -- and one is concerned, among other things, about its goodness-of-fit to data from the outside world. The data are usually obtained through measuring or observing instruments, but the principle is the same. Conscious and unconscious assumptions are always involved. To speak of science as an extension of ordinary perception is therefore less superficial than it sounds. Indeed this is arguably the most fundamental and accurate way of saying what science is, all the way from quantum cosmology to molecular genetics. In a simple yet profound way, it highlights both the power and the limitations of science.
It is also related to our visions of other worlds. I am thinking especially of the Platonic vision, of a timeless world beyond the everyday, a world of perfect forms and ideas, a world of truth and beauty, where perfect circles are truly perfect and where infinitely greater wonders await discovery. It may be fashionable, these days, to scoff at such visions; but we can learn from them. And we can learn new respect for them. For they provide glimpses of the marvellous way in which our unconscious mental models of reality are made, in turn suggesting not only how perception works -- and how it copes with combinatorial largeness, with enormous numbers of possibilities -- but also why there are such things as mathematics and music, to say nothing of science itself.
This book revolves around the foregoing ideas and their implications. I hope its arguments will interest young professional scientists and their mentors, as well as educators, science policymakers, and anyone else concerned about the crisis in science and in the public understanding of science. For a long time to come, the power and limitations of science will continue to be a crucial public issue, increasingly part of the human predicament. This is a matter not of ivory-tower philosophy but of hard, urgent, practical realities -- present and future realities -- all the way from the safety of computer software and genetic engineering through to the uses of quantum cryptography and the preservation of our planetary life support system.
What we glibly call `the problem' of the public understanding of science is of course not one problem but a vast web of problems and unconscious assumptions, as is now increasingly recognized. Problem areas include not only the understanding of science by the public and by journalists and politicians, but also the understanding of the public by scientists, and the understanding of science by scientists, all of it entangled with other problems of a more social or political kind. One of these is the confusion surrounding the scientific ethic. Another is the invisibility of that ethic to today's audit culture. Yet another is the ancient quarrel about Science as Absolute Truth versus Science as Mere Opinion, one side forgetting that models are involved and the other forgetting about goodness-of-fit. That such quarrels should still be with us is strange but significant. Today they are called `science wars'. They have been worsened by market-driven journalism and short-sighted policymaking, but there are deeper causes. A wider understanding of those causes -- which I will discuss -- could help us toward the much-needed new covenant between science and society.
Understanding in its fullest sense involves -- among other things -- the quality called lucidity. This is a quality not only of writing and of speaking and of communication in general but also, I will argue, of thinking and of design. It relates to mathematics and music. It cuts far deeper than, and goes far beyond, the niceties and pedantries of style manuals. I will dare to begin the book with a discussion of what lucidity is, in this wider and deeper sense, and the principles involved in achieving it. This too can be connected with the twelve moving dots. Indeed, my point is that lucidity principles can be made scientifically interesting. And they are deeply involved in the hard practical realities.
This book is divided into four main parts or extended chapters, interleaved by three short interludes. `Hyperlinks' in the form of small numerical superscripts are keyed to an extensive set of endnotes. Anyone who dislikes such things can ignore the superscripts and the endnotes. But I think the backup is essential to any serious discussion. Important, for instance, are some primary references to the recent history of computer software design, including the so-called `Halloween documents'. These were secret internal memoranda of the world's largest commercial software corporation. There are clear implications for the safety and reliability not only of today's vast software systems, many of them critical to business and finance, but also of tomorrow's systems of genetic engineering, and of other complex systems. Understanding these issues of safety and reliability is very much tied up with understanding the power and limitations of science.
A key point, to which I will return in the final part of this book, is that advanced human societies are going to depend on the continued survival of open, independent (and lucid) scientific thinking and evidence-gathering and their availability as a public good, as distinct from science done in secret, or science done purely for profit. The survival of open science in the teeth of commercial, legal, and political pressures for secrecy can no longer be taken for granted. But the need for it to survive is increasingly well understood by thoughtful people, within as well as outside the world of government and commerce. That need is illustrated by the continuing story of bovine spongiform encephalopathy (BSE), in which the pressures for secrecy compromised food safety, and nearly wiped out independent research that could have given credible scientific advice at an early stage. I will argue that openness and independence are not only preconditions to scientific competence in general, but also -- as the Halloween documents testify with rare cogency -- preconditions to the best possible engineering, to maximizing the safety, reliability, and security of the complex technological systems on which we depend, whether connected with food safety, with computer science and electronic commerce, with quantum physics, with molecular genetics and genetic engineering, or with any other field.
All this may sound paradoxical to the man in the street. What has the survival of open science got to do with commercial success and reliable technology? Isn't open science some kind of cultural luxury? Why shouldn't the most powerful software be developed in conditions of commercial secrecy, rewarding those who create it? Why shouldn't the same principle apply to life-saving genetic medicine? How can secrecy not be the best form of security? Why should open, independent, dangerous, expensive scientific thinking have anything to do with such practicalities, and with hard commercial realities? I will discuss this carefully; but, in a soundbite, the reason is combinatorial largeness.
The basic point about complexity and complex systems is that there is always a combinatorially large -- an exponentially large, an unimaginably large -- number of possibilities, most of which are unforeseeable, and most of which are ways for things to go wrong. It is despite this that there are such things as reliable scientific knowledge, and complex yet reliable computer software. Such reliability has always depended on the collective model-fitting, the `massively parallel problem-solving', made possible by the existence of open international scientific communities. These are cross-cultural groups of people driven more by the urge to understand or improve something than by commercial pressures, political agendas, or tribal loyalties -- people whose interest in a problem has been ignited somehow, and whose reward is the prestige of helping to solve it. The chances of noticing the unforeseeable are then enormously improved.
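The phrase `combinatorially large' can be made concrete with a little arithmetic. This back-of-envelope sketch (my own illustration, with an assumed checking rate) counts the states of a system of n independent two-state components and estimates how long exhaustive checking would take even at a billion states per second:

```python
# Combinatorial largeness: the state space of n two-state components is
# 2**n, so exhaustive checking becomes impossible almost immediately.
SECONDS_PER_YEAR = 3.15e7  # rough figure

for n in (20, 50, 100):
    states = 2 ** n
    # Assumed rate: one billion states checked per second.
    years = states / 1e9 / SECONDS_PER_YEAR
    print(f"n={n}: {states:.3e} states, ~{years:.1e} years to enumerate")
```

Already at n = 100 the enumeration time dwarfs the age of the universe; hence reliability cannot come from exhaustive testing, but only from the collective, massively parallel noticing of what goes wrong.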
That lesson was first learnt during the Renaissance, after centuries of scant progress with alchemy and the like. It is being learnt again in the world of commerce, as the problems to be solved become ever more complex and difficult. Open communities with their interest ignited have problem-solving abilities of an order described by the Halloween documents as `simply amazing', when seen from the viewpoint of an organization powered by huge financial incentives but constrained by commercial pressures and commercial secrecy. That such an organization should have recognized, contrary to its own in-house culture, that openness and independence can be more powerful than financial incentive may yet be seen as a turning point in the development of advanced human societies -- though it will be a close-run thing. One need hardly add that open, independent communities are the only source of credible scientific advice when things go wrong.
Genetic engineering is going to be a far more subtle and complex matter than computer programming. And January 2001 brought the first genetic-engineering equivalent of a computer software crash, with the Australian mousepox virus affair. The insertion of a single gene unexpectedly caused an entire biological system, mouse plus virus, to fail catastrophically. This at any rate was the first example to be reported in the open literature, not subject to commercial or military secrecy or to legal constraints on independent investigation.
Safety and reliability will be of public concern for the foreseeable future. If present trends continue -- in patent law for instance -- today's unreliable commercial software could well be followed by tomorrow's unreliable genetic engineering. We might even have unreliable Earth systems engineering, either by design or by accident. For these reasons alone the survival of open, independent science as a distinct entity alongside commerce, but independent of commerce, can no longer be described as some kind of luxury. The survival of open science is a basic and urgent necessity, as the Halloween documents have inadvertently illustrated. That, above all, is why the public understanding of science is important; and if this book contributes to such understanding, directly or indirectly, in the slightest way, then it will not have been written in vain.
In my youth I nearly became a full-time professional musician. I chose science instead not only because I had some talent for it but also because of the vicious and destructive `hypercompetitiveness', as it is now called, found in parts of the music business: the idea of competition as warfare, the dirty tricks and the winner-take-all culture, the shameless exploitation of young artists, as documented for instance in the book When the Music Stops by Norman Lebrecht. I cannot vouch for all the details in that book, but the nature of the beast comes through recognizably. Science as a profession attracted me and many others because, by contrast, it had a remarkable `gift culture' one of whose key attributes is the scientific ethic -- a kind of chivalry or code of honour between competing colleagues, aspired to if not always attained, and able to keep our natural competitiveness, and alchemical secretiveness, sufficiently within bounds to permit a degree of openness, cooperation, and massively parallel problem-solving.
Good, credible, reliable science has always depended on the existence of that same culture and ethic, whose effectiveness first became apparent during the Renaissance. Over decades and centuries, the effectiveness has been demonstrated in countless ways, including the ignition of interest and the recruitment, again and again, of prodigiously talented people for modest remuneration. The building of complex yet reliable computer software is only one illustration among many. But the culture, the ethic, and the recruitment are now in great peril. This in turn imperils our future problem-solving abilities, and the availability of independent thinking and advice on matters of public concern. Parts of the science enterprise are becoming more and more like the professional music business writ large, with commercial and political forces taking over, spurring hypercompetitive behaviour. An example is gene patenting, the attempt to claim ownership of naturally-occurring patterns and all their future commercial uses, logically no different from patenting the Earth, or patenting the periodic table of the elements. This is not only an unprecedented kind of patenting, but also a new hypercompetitive weapon of mass destruction -- the destruction, that is, of all competition, and the weakening of independent research and enterprise. It also acts to destroy the public understanding of science, indeed to destroy coherent thinking of any kind, by arguing, astonishingly, that there is no distinction between discovery and invention. As I write, the weapon is being tested in the law courts and elsewhere. I'm among those who hope it won't work.
The politically influential book by Terence Kealey, The Economic Laws of Scientific Research, has gone so far as to urge that all science and engineering should be done under commercial pressure, giving openness, independence, and the best ideas for innovation scant chance of survival. This bears thinking about each time your computer system crashes and your customers are turned away, or a nuclear power plant goes out of control, or another disease agent jumps a species barrier in an unforeseen way. The belief, largely unconscious, that More Competition is Always Better is arguably one of the most dangerous fundamentalist beliefs of our time. Conversely, those advanced societies that find new and effective ways to value and protect the scientific ethic and other, similar professional ethics -- notwithstanding their pricelessness, their invisibility to the audit culture, their unquantifiability as `performance measures' -- will have a powerful long-term advantage through balancing competition with cooperation.
This then is the central paradox of the crisis in science and of the larger crises in democratization, commercialization, globalization, and auditing. It is a paradox increasingly recognized within the commercial world itself, reflected in the current talk about `work satisfaction', `sustainability', `partnerships', the `third way', `coopetition', and `alliances not takeovers'. By limiting hypercompetition, a society can become more competitive.
Perhaps it can become more civilized as well. When, in 1930, in front of the newsreel cameras, a journalist asked the Mahatma Gandhi what he thought of modern civilization, the Mahatma replied, `That would be a good idea.' The optimist in me hopes you agree.
Backup material, including video and audio demonstrations, as well as the current updates of the three ISR articles, can be downloaded via this link.