AMS-SIAM Summer Seminar in Applied Mathematics

Park City, July 17-August 11, 1995

Panel discussion
"Does Numerical Analysis Need a Model of Computation?"
1 August 1995

Panel Participants: Shmuel Winograd, Steve Smale, Alexandre Chorin, Arieh Iserles and Beresford Parlett.

Organized and moderated by Mike Shub.

Mike Shub: This is a panel discussion on "Does numerical analysis need a model of computation?". The participants as they are seated on the stage are: Shmuel Winograd, Steve Smale, Alexandre Chorin, Arieh Iserles and Beresford Parlett.

The panel's title is not really a well-posed sentence in the English language. It can mean a lot of things. It's actually purposely vague so that people can choose what they want to talk about and not be too constrained. I did have a few things in mind when I thought about it myself. There are some things that come obviously to mind with a question like that. First, can a model capture the salient features of current-day computation, or might it be irrelevant? Second, an agreed-upon model of computation can ensure that everyone is talking about the same thing instead of having a lot of confusion, and it can also help facilitate rigor and a sure-footed approach to deep problems. But, on the other hand, a model might be too confining to do creative work. Finally, it may just be the case that numerical analysis already has all the models it needs. I didn't tell any of these thoughts to any of the participants. We'll see what happens. We don't even know if any two of us will talk about the same thing. The structure of the discussion will be that each panelist will have up to 10 minutes for a first statement followed by up to 5 minutes for a second statement, and then perhaps brief exchanges between the panelists if there are some unresolved issues. Then we will throw the floor open; there are 2 microphones, one on each side. The only thing I ask is that the statements are short and brief and that people identify themselves when they speak. The order of the speakers is guided by 2 principles. The first is, you should always let your thesis advisor have the first word, and the other is, you should always let your boss have the last word. So the order will be Smale, Chorin, Iserles, Parlett and Winograd.

From the left: Shmuel Winograd, Steve Smale, Alexandre Chorin, Mike Shub, Arieh Iserles and Beresford Parlett.

Steve Smale: Let me first say that we have a manifesto on the subject, a 30 page manifesto, with Lenore Blum, Felipe Cucker and Mike Shub here. Maybe Lenore will even bring it to her talk tomorrow. OK, let me put that aside and address the question: does numerical analysis need a model of computation? Maybe I'll broaden it a little bit. I could say, does it need foundations? Does it need geometry? And so on. And my answer is no. I can best explain that by a couple of historical examples. Maybe I'm closest to the situation of ordinary differential equations, which I compare sometimes to the situation in numerical analysis. In 1960 there was a flourishing subject of differential equations; I felt that it was a little eclectic. Some of us at that time wanted to give it a more global foundation, put the subject in the framework of modern mathematics. Differential equations, especially non-linear, was something global; therefore it should exist on manifolds and should be thought of more in the terms of what we think of today as dynamical systems. But there was a flourishing school of very classical ordinary differential equations as represented, say, by the book of Coddington and Levinson. Well, what happened was that that school of differential equations didn't need dynamical systems. Dynamical systems did develop. It caught on. Dynamical systems, I think, has given a lot of insight and breadth and depth and understanding to the whole subject of ordinary differential equations. But did the people working in ordinary differential equations need it? They kept working, and they still work and they still do good things. There is some overlap now between people in the school of ordinary differential equations and dynamical systems. There is some overlap and there is a large separation. I don't think that the people who didn't know manifolds or global questions of dynamics learned them, and they still do good work.
I think dynamical systems is a plus, but it was not exactly what people working in ordinary differential equations needed. Another example is represented maybe by physicists in quantum theory and maybe by mathematicians working in PDE. When I was a student, I think there were a number of people, especially represented at NYU in PDE, who had scorn for things like Hilbert space. They said anything that could be done in Hilbert space they could do just as well without Hilbert space. The same with quantum mechanics and operators in Hilbert space. The physicists didn't really need these developments in functional analysis; they didn't need Sobolev spaces. And so the subject of Hilbert spaces, Sobolev spaces and functional analysis developed. It wasn't developed by the people working in PDE at that time. Most of those, except for some of the younger ones, never learned these things, and they did what they were doing just as well without them. If they could do a PDE without Hilbert space, fine. For them to worry about learning Hilbert space would probably have been obstructive. On the other hand, all these modern methods have added a lot of insight, depth and breadth to the whole subject of partial differential equations. And that's a big plus. I do say that numerical analysis doesn't need these things. It doesn't need a model of computation. But on the other hand, I think that will develop. It's going to develop anyway, and it is going to develop probably more in parallel with existing numerical analysis. Numerical analysis will do very fine without it. But in the long run, these ideas from geometry and foundations will give a lot of insights and understanding to numerical analysis.

Alexandre Chorin: First, I disagree with your history of partial differential equations. Let me point out that the Courant Institute was actually a pioneer in the application of Hilbert space methods to partial differential equations. In fact the famous book on Hilbert space is Courant-Hilbert. But that has nothing to do with the subject today. Last week I was looking at a collection of short stories. I found a story about a young socialist from Petrograd before the revolution discussing the subject of whether the revolution needs art. The answer that he came to is, no it doesn't, because the revolution is art. This is very much like the subject today, in some ways. It's very hard to discuss this subject because it is very unclear what numerical analysis is, and in fact what the theory of computation is or what it should be. Would you count, for example, computational field theory or computational thermodynamics as numerical analysis? You should, because numerical analysis covers anything which you can do on a computer and for which you can do some mathematical development. The menu of the topics which are of interest in numerical analysis, what the algorithms will be and what the problems are, is so remote from the foundations of complexity theory, which I guess is what we are talking about, that it is very hard to relate what is being done to any kind of fundamental theory. It's simply much too early. Your picture that eventually there may be some impact of the theory of computation on numerical analysis may be true, but the day is very distant. Many of the fields of numerical analysis are just at their beginning. What is being done today is very different from what was done 10 years ago and clearly very different from what will happen 10 years from now. In many of them there are interesting problems for which the first idea of how to do them has not arisen.
There are topics which have been revolutionized by the computer in the last 10 years by ideas that come from who knows where, often from physical analogies and from mathematical notions. My point is simply that numerical analysis at this point is extraordinarily ill defined and is very much at its beginnings, and the idea that you can figure out what is actually going to be useful to it, that you can try to rigorize it and try to give it some logical and permanent foundation, is I think very much premature. Also, some of the rigorous ideas of complexity miss ideas which are current and which are very useful. One example of that is the work of Kreiss and Oliger on the complexity of partial differential equations, which I have been trying to get people interested in. It's not rigorous work. It's work in which you simply make estimates of orders of magnitude: how much data you need to represent the problem, how much labor you need to advance the problem in time, and out of that you can get estimates on how much labor is necessary in principle to solve a PDE. You get extraordinarily good estimates which cannot be rigorized in any way, heuristic estimates. The distance between those estimates and what in fact comes out of a rigorous theory of computation is extraordinarily vast. To the extent that numerical analysis is an empirical subject, what is needed to be able to answer the question is one example in which the theory of computation helps numerical analysis. I'm not sure that such an example has arisen. It has not arisen within my experience. Until it has arisen within my experience, I think that my answer to that question is no.

Steve Smale: Then we agree.

(Audience laughter)

Alexandre Chorin: Yes.

Arieh Iserles: In the spirit of modern philosophy, instead of answering the question, I'll rephrase it. My problem is not whether numerical analysis needs a model of computation but what are the conditions that I expect from a model of computation before I'll be ready to consider it seriously to be of any use in numerical analysis. There is a classical paradigm of numerical analysis: that we have a small number of, one can call them, master problems; numerical solution of linear algebraic equations, ODE's, numerical eigenvalue problems and so on. As long as we agree we have five, ten, fifty master problems, we can try to phrase them in mathematical terms very exactly, and then establish some sort of complexity theory and formal model. Except that I believe this is a retrograde step, for the following reason. What one can call the new development, or the new spirit of numerical analysis, or the new paradigm, at least it should be the new paradigm, is actually to take the master problems and divide them into small subsets relying on small common denominators of structure. An example, a trivial example, would be to take an n by n system of linear algebraic equations. The moment you phrase it as such, you have Ax=b, A an n by n real matrix and b an n vector and so on, and we can establish a very convenient formal model. But of course in most systems of interest the matrix A has a structure. It might be sparse. It might originate from a discrete Fourier transform, and so on. And, in this case, the whole trick is to exploit the structure. So suddenly, instead of one master problem, we have a very large collection of small problems. Moreover, the origin of the problem sometimes makes the solution very different. For example, if your linear system originates from some sort of grid, then you might attempt multi-grid techniques. If your linear system originates from something that has space structure, you might try domain decomposition.
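The payoff from exploiting structure can be seen in the simplest case. The sketch below is only an illustration added here, not anything from the discussion: for a tridiagonal matrix, the Thomas algorithm solves Ax=b in O(n) operations, where a solver that ignores the structure spends O(n^3).

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm: solve a tridiagonal system in O(n) operations.

    sub: sub-diagonal (length n-1), diag: diagonal (length n),
    sup: super-diagonal (length n-1), rhs: right-hand side (length n).
    """
    n = len(diag)
    cp = [0.0] * (n - 1)   # modified super-diagonal
    dp = [0.0] * n         # modified right-hand side
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# The matrix [[2,1,0],[1,2,1],[0,1,2]] applied to [1,2,3] gives [4,8,8],
# so the solver should recover [1,2,3].
x = solve_tridiagonal([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [4.0, 8.0, 8.0])
```

The dense formulation Ax=b never sees this saving; only a model that knows about the sparsity pattern can express it, which is exactly Iserles's point.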
So, suddenly, instead of being faced with a small collection of problems, there is a huge range of small problems, and each problem requires different numerical analysis and different numerical treatment. If the model of computation can answer this question, we have passed the first hurdle. But this is only the first hurdle, because from my point of view, there is a much more ambitious hurdle that it is expected to pass. When you look at complexity theory and theoretical computer science, the heart of the matter there is to produce algorithms with minimal cost. Now an algorithm with minimal cost makes sense the moment you have a discrete framework, a finite framework, and everything is expressible in integers, which almost never happens in numerical analysis. There is a gradual progression here. There is one range of problems that we can call algebraic problems: linear algebra, non-linear algebraic systems, eigenvalues, local optimization and so on. These problems are infinite in the sense that we are dealing with real numbers. In that case the Blum-Shub-Smale model can work perfectly well. Moreover, in this sort of system, the moment that I give you a candidate solution, I am telling you this vector or this number is the solution of your problem, at least you have some means of verifying with a high degree of confidence whether this candidate solution did solve the problem up to a given tolerance. However, as soon as you reach differential equations, or for that matter integral equations, everything changes. Suppose you have an ODE and I am telling you this is a candidate solution of the ODE; the only way you can check it is by using mathematical analysis. By the way, I didn't say I was giving you a string of numbers. In principle it doesn't matter if I use computation, or perturbation analysis, or something else to derive the solution. What matters is that I have a candidate solution and, to check whether it makes any sense, I need mathematical analysis.
There is a popular misconception that there is some wide fire break, some explicit borderline, between mathematical analysis and computation. The moment you have a PDE, you start analyzing it. You prove existence, you prove uniqueness and then you derive features. You decide whether it is integrable, whether it obeys conservation laws. Then you try to find an explicit solution. After a while you give up, you go to the nearest numerical analyst and beg him or her for a solution. That's not how it should work. We are all in the business of understanding differential equations, not of numerically solving them or exactly solving them. As soon as we're in the business of understanding differential equations we have to combine analysis and computation and perhaps even laboratory experiments. At its best, we are using analysis to tell us how to compute and we are using computation to tell us what to prove. So these two are living in a symbiotic relationship and they are not divisible. The point is, suppose I tell you I have an ODE and an exact solution to this ODE. Suppose I tell you the solution is a Macdonald function. By the way, the Macdonald function has nothing to do with the Hamburger moment problem. It is a distant cousin of the Bessel function. So we have a Macdonald function. So what? If you want to plot the solution, if you want something in phase space or something like this, your best bet is to use numerical analysis to plot it. On the other hand, the fact that you know the solution is a Macdonald function gives us some kind of qualitative knowledge. We have an integral representation, we know where the zeros are, we know the asymptotics and so on. As soon as we want more than this, we need to use computation. But it also works the other way around. As soon as we have a really complicated problem to compute, we need to exploit analytic knowledge. We need somehow to embed what we know from analysis into the numerical algorithm.
For example, think about front tracking and front following methods for non-linear hyperbolic conservation laws, or symplectic integration of Hamiltonian ODE's. It's something that combines analysis and computation, and that is what we need to do. Given that I am interested in numerically solving differential equations, I definitely want to have a bridge with pure mathematics. But the sort of bridge I want is not necessarily a bridge with theoretical computer science or with discrete mathematics, although I am very happy to learn from them. The sort of bridge I want is with analysis. If we can establish a theoretical model of computation that can answer all this, hallelujah! If we can't, then probably we will have a very nice theory, a beautiful intellectual edifice, but I don't believe it will have much use.

Beresford Parlett: I think perhaps I will change what I was going to say. I feel we need some voice on the other side. (Audience laughter.) For me, at least, an important part of the meaning of the word model is that it is not the real thing. I think you can't have any scientific work without a model. Then there is that little word 'a' in the question: does it mean one model, one agreed-upon uniform model of computation, or does it mean do we need any model of computation? I'm not going to say whether it's going to be one or the other, but it seems to me that we can't avoid them. It is absolutely obvious that there are lots of little models being used all the time, and the interesting question is how explicit we want to make the model. People are working within a model whether they know it or not. The question is, should we try to formalize it more or not worry? I'm not sure about the answer to that. I can see arguments both ways. Perhaps a lot of you know this, but it's rather amazing if circumstances ever drive you to look at a really ancient piece of work, 80 or 100 years old or more. The work that was done in computing in those days was incredibly informal by our standards of today. One instance I happen to know about, I'd like to relate. Lord Rayleigh was a genuine aristocrat. He inherited his title, he didn't earn it. He was informed at one time that he ought to get married. So he got married and took a six month honeymoon. He and his wife and their retinue went to see the pyramids. They had a huge barge on the Nile. Lord Rayleigh hated sight-seeing. He let his wife and others go and look at the pyramids. He sat in the bottom of the barge and began to write a book called 'The Theory of Sound'. In that book he needed to get some approximations to eigenvectors of large, six by six, matrices.
He used a technique in which he would take an initial guess, and he would compute what today we would call the Rayleigh quotient of that guess with respect to the matrix (it was a symmetric matrix). Then he would shift by that: he would subtract that Rayleigh quotient from his matrix. He would then solve his system of equations, the right-hand side being his initial guess, his shifted matrix being his coefficients. On one occasion that wasn't good enough, so he iterated once more, but he didn't change his shift. Why am I telling you all this? There is an iteration called the Rayleigh quotient iteration, but I don't think he ever used it. He did use a Rayleigh quotient and he did do inverse iteration with a Rayleigh quotient shift for the first time. But the point is, he just described his calculations on a piece of paper and just did this one instance. It was pretty obvious to anyone who was reasonably well educated how you would extend this to the general case. You didn't need to be very formal about it. It was exactly the same with the calculations of Jacobi and others. Even advancing more in time, in the field of matrix computations, to the sort of bible written by Wilkinson in the 1950s and 60's: he hardly proves a theorem in that book. I've heard people in this audience criticizing the book, because they say it is very inconvenient to use as a reference. The subject really isn't organized very neatly. I think actually that is a true criticism. You sort of have to swallow it whole. I think if you sit back and look historically, things are definitely going in one direction. They are going from very, very little formality towards more and more of it. There are clearly advantages in doing that. I think that if work is ever to be elegant and perhaps memorable you have to get rid of a lot of extraneous stuff, and you need a model in order to be elegant.
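The procedure Parlett describes, compute the Rayleigh quotient of a guess, shift the matrix by it, and solve with the guess as right-hand side, can be sketched in a few lines. This is only an added illustration: the 2 by 2 matrix and starting vector are invented for the example, and the small Gaussian-elimination routine stands in for whatever hand calculation Rayleigh actually did.

```python
def rayleigh_quotient(A, x):
    """rho = (x^T A x) / (x^T x) for a real symmetric matrix A."""
    n = len(x)
    ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * ax[i] for i in range(n)) / sum(xi * xi for xi in x)

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented copy
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rayleigh_step(A, x):
    """One step as described: shift by the Rayleigh quotient of the guess,
    solve with the guess as right-hand side, then normalize."""
    n = len(x)
    shift = rayleigh_quotient(A, x)
    shifted = [[A[i][j] - (shift if i == j else 0.0) for j in range(n)]
               for i in range(n)]
    y = solve(shifted, x)
    norm = sum(yi * yi for yi in y) ** 0.5
    return shift, [yi / norm for yi in y]

A = [[2.0, 1.0], [1.0, 3.0]]       # symmetric; eigenvalues (5 +/- sqrt(5))/2
shift, x1 = rayleigh_step(A, [1.0, 0.0])
```

One such step already moves the Rayleigh quotient of the iterate much closer to an eigenvalue, which is why the single hand iteration sufficed for Rayleigh's purpose.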
The only danger we all face is that the desire to do things that are memorable and elegant is actually just going to get a little too strong. That is when model theory or models can be considered by outsiders to be going a little bit toward decadence. My own view is that I am all for models, I don't think we can live without them. They just shouldn't be taken too seriously.

Let me tell one other true story. In the little bit I have seen of complexity theory, all the results that I have seen would lead any reasonable person to suppose that if you want greater accuracy in the solution of any simple problem, linear equations, eigenvalues, you ought to pay more money. It seems to me the expressions for the cost or complexity of solving a system of equations to a certain accuracy go up when you ask for more accuracy, and everyone thinks that is perfectly reasonable. Surely if you are going to build a model you ought to have that as a consequence, if not an axiom. There is a company that builds airplanes in Seattle. They were interested in finite element analyses. They had subcontracted to a software firm to get code that would do part of this analysis, which was concerned with finding the eigenvalues and eigenvectors of symmetric matrices. Three or four months later the product was delivered. The company had set up a testbed, an ordered test of problems, to see whether they were going to accept the software or not. Well, on the very first, easiest, simplest problem they tried the software on, the code bombed. They immediately said, we reject this. There was tremendous alarm in the software house. They said this can't be, etc. There was a lot of concerned work by people both from this company and the software house for 2-3 weeks, I think it was. At the end of a lot of investigation, it was realized that the easiest problem that they had given the software was to compute eigenvectors correct to 2 decimals; that is, their measure of the norm of the residual was to be 10 to the -2 times the norm. Of course, you can see what was going to happen. The poor code computed the first eigenvalue, it computed the first eigenvector. It stopped when it got the eigenvector correct to this rather low tolerance. From that moment onwards it orthogonalized all the vectors it got against the first approximate eigenvector it had computed, which wasn't very good.
And then, as the calculation went deeper into the spectrum, the results began getting worse and worse, because the approximation initially wasn't good enough and then that was being used in all the rest. When they finally understood what was going on, they said it's very simple. If you had demanded at least ten to the minus six instead of ten to the minus two, the software would breeze through these problems. And it did. The software was eventually accepted and it was very good. It solved the whole testbed, except for this first example, and why? Because only a low tolerance was required. That doesn't prove anything. But I think it is a very instructive story about the relation between, shall we call it, success or accuracy, and money. In other words, it is often the best policy to demand far more accuracy than you think you could possibly need. Usually when a physicist or an engineer is put to the test and asked how much accuracy they want, they don't like that question. If they do give you an answer, it is usually a wrong answer and you iterate several times. And then they say, I must ask for more, or, I must ask for less. All I'm saying is, I think models are necessary. I don't know that we need a unique model of computation. I can't think of that ever happening. I don't think we should take the models too seriously, but we could use them.
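The mechanism behind the story can be seen in a toy calculation; the 2 by 2 matrix and the angles below are invented for illustration, not taken from the anecdote. If the first computed eigenvector is off by an angle of roughly 10 to the minus 2, any later vector forced by deflation to be orthogonal to it inherits an error of the same size, no matter how accurately the later stages are done.

```python
import math

# A = Q diag(1, 2) Q^T with Q a rotation by 45 degrees, so the exact unit
# eigenvectors point along 45 degrees (eigenvalue 1) and 135 degrees
# (eigenvalue 2).
A = [[1.5, -0.5], [-0.5, 1.5]]

def residual(A, v):
    """|| A v - rho v || for a unit vector v, rho its Rayleigh quotient."""
    av = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
    rho = v[0] * av[0] + v[1] * av[1]
    return math.hypot(av[0] - rho * v[0], av[1] - rho * v[1])

def deflated_second_vector(angle_error):
    """The second vector, forced orthogonal to a first eigenvector that
    was computed with the given angular error."""
    t = math.pi / 4 + angle_error
    return [-math.sin(t), math.cos(t)]

res_loose = residual(A, deflated_second_vector(1e-2))  # first vector ~2 decimals
res_tight = residual(A, deflated_second_vector(1e-6))  # first vector ~6 decimals
```

With the loose tolerance the residual of the second vector is stuck near 10 to the minus 2, about ten thousand times worse than with the tight tolerance, which is exactly what the testbed saw as the code went deeper into the spectrum.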

Shmuel Winograd: Let me start by saying that I don't know what numerical analysis is. I was hoping, being the last, that I would hear a definition of what numerical analysis is from my colleagues on the panel. But I am still in a state of ignorance. So I will make up my own loose definition, or at least imply some loose definition later on in what I am saying, which will probably not agree with some of your views of numerical analysis. Before addressing the question, I thought about, "Why does complexity need a model of computation?" Here I agree completely with Beresford. There is more than one model of computation. All of them are equally good for their purposes; all of them are equally bad for purposes for which they are not suited. The reason that in complexity we can't live without a model of computation is the subject matter. What we try to do is to understand something about a potentially infinite collection of algorithms. We want to be able to say what it is that we are talking about. We also want to have the tools to be able to reason about them. That is the origin of why a model of computation, implicit or explicit, more likely explicit because we deal with it, is the beginning without which we can't do much in complexity. Because that is the subject matter. I will give examples perhaps more from what is called algebraic complexity, an area with which I am slightly more familiar. When one has an algorithm that comes out, an algorithm for computing the discrete Fourier transform or multiplying two polynomials or multiplying complex numbers in three multiplications or what have you, at that point, if I want to understand that specific algorithm, I can do it without any of the models that people in complexity use. I need some other models, though. And I think Beresford also alluded to that.
Many people who work in that thing that people accept as numerical analysis (notice how I go around trying to define what numerical analysis is), in trying to understand an algorithm, have a certain model in the back of their mind. What do I mean by multiplying two numbers? If I think about it on the computer, it's the product times one plus delta. What do I mean by adding two numbers? Are they fixed point numbers, for those of you old enough to remember what fixed point numbers are, or are they floating point numbers? You have a certain kind of model, and if you want to talk about it, sometimes you make it explicit, because without it you cannot do your work.
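Winograd's example of multiplying complex numbers in three multiplications has a short realization, often attributed to Gauss; the operands below are made up for the sketch. The trick trades one real multiplication for extra additions, precisely the kind of operation count that algebraic complexity makes rigorous.

```python
def complex_mult_3(a, b, c, d):
    """(a + bi)(c + di) using three real multiplications instead of four."""
    t1 = c * (a + b)          # multiplication 1
    t2 = a * (d - c)          # multiplication 2
    t3 = b * (c + d)          # multiplication 3
    return t1 - t3, t1 + t2   # (real part, imaginary part)

re_part, im_part = complex_mult_3(1.0, 2.0, 3.0, 4.0)  # (1+2i)(3+4i) = -5+10i
```

Whether three is optimal, and under what cost measure, is exactly the sort of question that cannot even be posed without fixing a model of the admissible algorithms.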

So, we understand why people in complexity need, in the sense that they are working on a collection of algorithms, a model that enables them to define that collection and to reason about it. In those areas that deal with the numerical behavior of algorithms, to the extent that you want to ask questions about an infinite or a large collection of such algorithms, you will have to specify something about how you generate them. To the extent that you do not, and want to look at a specific algorithm, most likely the models that are there but are implicit will be sufficient. So perhaps the question that we should ask ourselves is, are there interesting questions about the numerical behavior of algorithms in which you deal with a collection of algorithms? My feeling is the answer is yes; my observation is that not enough is known, or that not enough results have appeared on it. But there are a few. I don't know if some of you would call them complexity or numerical analysis. I remember being very excited about reading a paper by, I believe, Gary Miller (but definitely it was a Miller), who showed that for an algorithm for the product of two matrices, stability can exist only in the regular way: let Strassen be damned. That is, the regular way of multiplying two matrices has that kind of stability and no other algorithm can have it. Notice that in order to prove it, or even state it, he has to talk about every other algorithm. Therefore he has to deal with a whole collection of algorithms and therefore has to use a model of computation. He used the kind of model people use in algebraic complexity, as opposed to BSS, maybe because he didn't know BSS at the time. So there will probably be, in that which people call numerical analysis, for a very very long time, very interesting questions which deal with one algorithm, in which any model of computation will probably not be needed beyond the implicit one.
I do believe that there are interesting hidden questions to which not enough attention has been paid and which deal with the behavior of a collection of algorithms. In order to be able to reason about it, one will need some kind of a thing to define it. The minute you do that you have used a model of computation.

Steve Smale: As a little aside, I disagree with Alexandre about Courant pioneering Hilbert space. I'd like to especially reinforce what Shmuel has to say. A lot of my motivation for spending so much time trying to understand numerical analysis is to help my own ideas about how to define an algorithm. It seems to me it's important to understand the subject of numerical analysis to make a definition of algorithm, as Shmuel was suggesting. The algorithm is the main object of study of numerical analysis, and to have a definition of it, so someone can look at all algorithms or a class of algorithms, is an important line of understanding. I would like to take a little bit of exception to something almost everybody believes, sometimes explicitly, as with Arieh and Alexandre: that the criterion for the importance of work in modeling numerical analysis lies in its use for numerical analysts. To me that is, I used to say, an anti-scholarly point of view. At least it's not my point of view. Again, history from ordinary differential equations. People working in ordinary differential equations didn't have any use for the developments in dynamical systems like chaos. Chaos was not what they were looking for to solve their problems. Chaos was not useful to people working in ordinary differential equations in the 50's. But chaos is useful to an awful lot of people. Perhaps to those in ordinary differential equations the least, maybe, but certainly among physicists and engineers you find that the things that came out of looking at ordinary differential equations as a subject from a quite different point of view, from this kind of foundational or geometric point of view, led to things which could not satisfy the criterion of what people working in that subject were looking for. They weren't looking for chaos. They weren't looking for things like that.
But that was what I think was an extremely important development that came out of the foundational, geometrical point of view of ordinary differential equations. I think that is why we have to be careful about trying to justify the developments in the foundations of numerical analysis in terms of what people are finding useful in the subject.

Alexandre Chorin: First let me point out that the title of the discussion is, does numerical analysis need a model of computation? It's natural to try to ask the question: can numerical analysts, or do numerical analysts, use a model of computation? I think that's the question we are trying to ask. It's perfectly reasonable that that is the question we are trying to answer. I think that you are completely right that numerical analysis needs new ideas and new paradigms. I think I tried to mention the fact that, in my view, most of what is being done now in computation is temporary. In fact the subject has not even jelled as well as partial differential equations and ordinary differential equations had fifty years ago. It's moving so quickly it's very hard to define it and to say what the algorithms are. But you make an a priori assumption that progress will come from a model of computation. You make an a priori assumption that a model of computation plays the same role in numerical analysis as chaos or dynamical systems in ordinary differential equations. That is a very strong assumption, which may or may not be true. It's very unclear. I don't know on what basis we could decide so. I think that the practical side of the situation is that the questions that we are normally trying to answer, and I think that Shmuel pointed to a very strong exception, but normally the questions that we are trying to answer are so far removed from the basic working of a model, that it is hard to see at this time that this is the thing we need to push.

Arieh Iserles: If any formal models could sell coffee table albums and make numerical analysis incredibly sexy, have books written about it, like chaos vis-à-vis ODEs, I suppose it would be worth it. However, my worry is, in a sense, a motherhood-and-apple-pie situation: that we will all say yes, of course it's nice to have models, but actually these models will not help much. I am all in favor of models from a very pragmatic point of view. I want them to help me in my work. Now, take the axioms of mathematics. The axiom of choice, constructibility, etc. Many people have spent their working life on axioms, but what is the percentage of mathematicians in everyday mathematical life who are really bothered by the axioms of mathematics? People are working within their own frame of reference and in their own environment, and the axioms of mathematics, the Peano axioms and so on, don't help them, don't hinder them. They exist somewhere. It is very comforting to know that they exist. Full stop. Now I don't want this situation. The moment I start talking about a model of computation I want something that can really help us to do numerical analysis. Which brings me to the third point: what is numerical analysis? Short story. When my older daughter was quite young, we once took her to a doctor for some reason. She was old enough to be frightened of a doctor. So to fortify herself she told the doctor, "You know, my daddy is a doctor. He's not a doctor of people, he's a doctor of numbers". Until a few years ago, my definition of numerical analysis would have been that we are all doctors of numbers. However, this is a definition that I don't like. I'm quite happy not to define numerical analysis, because I see numerical analysis as being in some sort of unity with other mathematical techniques. We have a whole range of techniques, we have problems. We have, say, differential equations. We have other problems in mathematics. We are trying to understand them.
To understand them, we have a range of techniques. We have analysis. We have perturbation theory. We have computation. We have to do the computation as best we can. Then we have algorithms. We have to analyze the cost. We have to analyze the well-posedness, the stability and so on. But I don't want to define numerical analysis. I don't want numerical analysis to be a sort of stand-alone subject. I want numerical analysis to co-exist with the subjects from which its problems originate.

Beresford Parlett: I suspect any of you who work in fairly large universities have probably witnessed what I have: you can no longer find a Department of Engineering. Instead you have a College of Engineering, with six or seven departments in it. Of course, the people in the departments don't speak to each other. It is true that you don't find departments of engineering anymore. That's not because engineering has failed; it's because it has succeeded. I think numerical analysis is going the same way. I don't think it will exist as a subject for more than another twenty years or so. Not because it has failed, but because computers have become so ubiquitous and so useful that people working in an incredible range of engineering and scientific and even mathematical disciplines are doing computation. Most of these people have very little to say to each other. As you all know, there is computational physics, computational chemistry, there is computational biology. The point is, OK, all right, you're a physicist. All right, you're not going to be a great theorist, you go into computational physics. Do you need to take a numerical analysis course? Well, whether you do or don't need to, you probably don't. Physicists very rarely take the courses, because they say: we're pretty smart, it will take us at least an hour to learn what they are going to teach us in the course. I was very lucky to take part in a workshop many, many years ago in Santa Cruz where a small bunch of computational chemists got together with a small number of people who did matrix computations. The computational chemists were very sophisticated. They were a very impressive bunch. Almost all subjects in mathematics are changing and expanding. But it seems to me that scientific computing is just becoming so universal that there is not going to be a call for it.
I was once slightly laughed at for saying that I thought that few numerical analysts were interested in arithmetic. But I think it's absolutely true. I have found very, very few professors of numerical analysis who know how a square root is computed in the computer today, and they probably don't know how multiplication is done, and they almost certainly don't know how division is done, with one or two exceptions sitting in the front row. The point is that we are all finite in our capacities. We've learned some little field. We're going to work in that. I don't know to what extent you feel numerical analysis is different from scientific computing. I think the subject is just going to burst out in all directions and people are going to learn what they need. There may be quite a lot of redundancy. It doesn't hurt. In that sense I don't think Shmuel ever has to bother to know what numerical analysis is. It's not an important question. But, as I said before, I think models are extremely important, and there are obviously going to be lots of them. They are just part of the ball game.

Shmuel Winograd: I have the pleasure of being the last stop before the audience can speak. I'll try to be brief. Somehow, and I'm not sure if it's connected or relevant or not, the discussion reminds me of one sentence in the introduction to Landau's beautiful little book about the definitions of numbers, which, by the way, if there are one or two among you who haven't looked at it, let me urge you to pick it up and look at it. Beautiful. In the introduction, he says that he is dedicating the book to his daughter, who is a chemical engineer, who thinks she knows everything that has to be known about differential equations but still doesn't know why A+B has to be equal to B+A. I think, again without knowing what numerical analysis is, and again I agree a lot with Beresford, that a lot of people are going to do computation in all areas of science. A lot of them are going to do it well, beautifully. A lot will probably make every single error that's in the book and then invent a few. If we broaden the question from just a model to the whole issue of rigor and foundations, if you wish: somewhere along the line in the nineteenth century, we needed the rigor of analysis, because too many things that we were doing intuitively just went wrong. So we needed it. We all use it now. Not all of it. But every now and then we use it or some of its consequences, knowing what the pathologies are. We talk about continuity, we talk about degrees of continuity, all of those kinds of things which came about, that people did not worry about before, and they help us avoid doing some of the things that are wrong. I think also that the demand to understand numerical analysis, because everybody in all of the sciences will be doing some scientific computation, as Beresford mentioned, will put an onus on whoever it is who calls himself a numerical analyst to be able to transmit the knowledge, to teach it.
You will need a certain amount of framework in order to be able to codify the knowledge, make it concise, and transmit it, as to what one should do and what one should not do. Somewhere in there, that will require, let's call it a model for lack of a better word, a framework. That phenomenon that Beresford mentioned will, I think, be another reason why some kind of codification will have to take place, because there will be pressure for it. There will be a need for it. It is not the need of numerical analysts, but the need of all the other people who are using numerical computation.


Gene Golub, Stanford University: I want to comment on some of the comments made by the people on the platform, in particular some of the comments made by Beresford, with whom I strongly disagree. First of all, I think that pure numerical analysis is reaching the end of its rope. The kinds of things that many of us grew up with, the Wilkinson kind of analysis for those of us who are interested in linear algebra, that sort of thing is not going to be pursued to the extent that it has been in the past. There are probably a few people who will be doing such things. My own feeling is that scientific computing is the field, and that's more than numerical analysis. It combines numerical analysis, applied mathematics and computing. All these ingredients can be brought together in a way that there can be cross-talk between disciplines. There are a lot of people who do mathematics. There is mathematics done in physics departments. Electrical engineering departments do mathematics. We don't say that mathematics is going to go out of business. In the same way, I don't think scientific computing will go out of business. If you look around, a book like Parlett's is found not only in computer science departments, but in electrical engineering departments. Many, many people want to know that information. It really does cut across disciplines, and therefore perhaps it should be a discipline of its own. Now another thing: I think scientific computing is technology driven in two ways. First of all, technology changes and new problems arise. We need to study those new problems, and mathematical tools for solving them are constantly being developed. Furthermore, the computing instruments are changing all the time too. The kinds of problems we work on today and the techniques we use today may be irrelevant in ten years. So one constantly has to keep abreast of all these developments. Our methods are going to change.
The ideas that we have, methods like GMRES, who would have thought of using such methods twenty years ago? They take up a lot of storage so we rejected those methods. So I think we should think of our topic as a technology driven topic. Finally, I'd like to say something about floating point arithmetic. I think this is a topic for the anals-anal driven people. It is important to know, a few people should know it perhaps. But I don't consider it part of the mainstream of numerical analysis any longer. Perhaps one needs to know the model. But along with the Wilkinson error analysis it isn't the mainstream of what we call scientific computing today. So I think we have a more general topic at hand. Is there going to be a subject like numerical analysis? Perhaps not. But there will be scientific computing for many years to come.

Herb Keller, Caltech: I agree very much with what Beresford said and with what Gene said. (Audience laughter.) I don't know how he could say Beresford was wrong. First I want to recall to Steve something that Friedrichs once told me about Courant and Hilbert. In fact he told me that Hilbert said to Courant, 'Richard, was ist das, Hilbertraum?' ('Richard, what is that, Hilbert space?') So even in the early days the guys that invented it didn't particularly care or know about it. Another example is the fact that at the Courant Institute, the one thing that we never really played around with was the finite element method, invented by Courant many years before. So it's true those things move on. I don't think numerical analysis needs a model, but I think it benefits very much from such models. Numerical analysts, as a consequence of such models, work on different problems from what they did before, from a different point of view. They look at old problems again, and the subject is enlivened in that way. But what is numerical analysis, as several people have asked? I was going to say it is what numerical analysts do. So that means you take a vote. Right? Who is a numerical analyst? What do you do? But that is a snide remark. In a certain broad sense, it has to do with the design of algorithms to approximate solutions of problems when the answer can be given in terms of numbers, tables or graphs. But that is not adequate at all. A few years ago I had the pleasure of spending several weeks with Steve Smale at Cambridge, and as a consequence I tried to use techniques of what I thought was numerical analysis to attack Hilbert's sixteenth problem. It had nothing to do with numbers, and so on. It had to do with counting. I considered that numerical analysis. So it doesn't just have to do with numbers. I think, as Beresford said, the historical view is very important in assessing the future of our field. But I'm not sure an old model helps. What some of you people are doing now will very soon be old models, I think.
The new models will be important but they won't necessarily be models of computation. I don't know what they'll be models of. I don't know what we'll call it then. With quantum computers on the way, I don't know if complexity will remain something that we'll pay very much attention to in the future. An analogy with that is exactly what Gene just talked about, floating point arithmetic. In the old days we used to worry about overflow and underflow. Nowadays, I'm sure that 99.44 percent of people that work on computers and write code never even think about overflow or underflow, because the technology has taken care of that. I think the same will be true of big parts of complexity in the future, including P=NP.

Marion Pour-El, University of Minnesota, Minneapolis: One of the things that interests me is how the people who are speaking are not sure of their models. They are working with how complex their models are. They're working with something called complexity, and although they use P=NP, because this is the only complexity that seems reasonable now, they're not sure of it. That contrasts quite a bit with what Turing did with the Turing machine. Everyone has the feeling that every machine that can compute is a special case of the Turing machine. What we are saying now is that we really don't understand complexity. We can work with models. We can fix our little models here. We can fix our little models there. We can do a little thing. We'd like to know more about models. We'd like to relate them to our theory. Yet we have to be able to use these models, so we have to be able to understand complexity. But we don't understand complexity, not the way we understand what a proof is.

Alexandre Chorin: I really like what Gene said. It seems to me that Gene said what I should have said. I'd like to commend him for an extraordinarily well put statement.

Jim Demmel, University of California, Berkeley: I want to agree with Gene on at least one thing, which is that a lot of what we do is technology driven and that the models we are going to be using are driven by technology. I just want to give two examples. With the first one I will take the risk of sounding anal; the second one I think is much more intrinsic to the technology. The first one is that when people write programs nowadays, they are running them on many different computers connected by networks, and the computers do have different models of arithmetic on them. We are now trying to build algorithms that are correct even though they are running and sending messages and so forth, and communicating between machines with different models. It sounds like a floating point problem, and it partially is. We don't know how to write correct programs on those kinds of machines. These two machines can't even agree on a number because they have different models of computation. So that's something where we have too many models. I'd really rather just have one. There are certain examples where we simply don't know how to write a correct algorithm, even though if we have one model of floating point it's very easy. So maybe that problem is ephemeral. But the second one is not. I would claim that if you take most of the programs you have to solve your problems, think of them as long strings of assembly language instructions, and throw out all the arithmetic, they will run in the same amount of time. Those programs are not spending their time doing arithmetic instructions, but moving data from the memory into the place where you can do the arithmetic. That's the only thing that counts anymore. The only thing that counts for the time in anybody's real algorithm is data motion. There is some data that is close, and it's cheap. There is some data that is far away, and it's more expensive. There are hierarchies that way. That is what is intrinsically expensive about computing.
It's not the arithmetic anymore. There are some people who are working on formal models of that and there are some people who aren't. I haven't heard a whole lot of discussion about that at this meeting. But I think that is going to be the most important complexity model.

Shmuel Winograd: I'd like to agree very much with what Jim just said, especially the second example, and to tell him, as he probably knows, that there are several papers by Chandra in which he attempted to use complexity techniques to look at computation exactly from the point of view of the movement of data. So I take that comment not as a criticism but as a challenge to people who are working in complexity: build that model, because you'll have to do it in order to work on a collection of algorithms that takes the movement of data into account. I think that's a very valid point.

Arieh Iserles: It is a valid point, but this is precisely an example of a finite situation. Essentially, the movement of data on a parallel computer, how to optimize it, and so on, is a combinatorial model on which theoretical computer science has done splendidly well. And indeed on these sorts of models, on these sorts of questions, even standard theoretical computer science models and combinatorial models should do very, very well. Probably better than we as numerical analysts can do in finite time.

Pascal Koiran, École Normale Supérieure de Lyon: I'd like to comment on the argument that technology will take care of complexity and of P=NP. I think actually the converse is true. As technology progresses, complexity issues are going to be more and more important. When you work out an algorithm that is efficient in theory, maybe you are going to find out that many of the constants are too big and in practice it doesn't work well. But if you have two curves, say for one efficient algorithm and one more efficient algorithm, then as machines get bigger you can move along these curves, and you may actually be able to stand in the asymptotic part of the curves once you can deal with bigger problems. If you can deal with bigger problems, the asymptotic analysis is more relevant, and so the theoretical analysis becomes more relevant. And it is much more so as the technology progresses.

Mike Shub: I have a question Alex. I heard you and Gene say opposite things, and then you said you agreed with one another. I understood Gene to say that scientific computing was a very large subject and it needed an intellectual core, the way mathematics is an intellectual core for engineering and physics. This intellectual core should be machine independent because the machine might change in ten years. That sounds to me like foundations. And I heard you say that the subject was too broad and can't be structured. I want to understand how you agree with one another.

Alexandre Chorin: What I understood Gene to say was that there is a core of numerical techniques which may be changing, but which is common to all computational subjects; ways of sorting matrices, calculating eigenvalues, solving differential equations, which physicists and chemists and so on do not have to reinvent, but which can be taught as a single discipline which is numerical analysis or scientific computing, which is a much better word for it. That is something that is changing in time, evolving, developing, not yet fully determined. And I don't think a model of computation is a major component of that joint model which all the sciences can use. That's what I understood from Gene and Gene can complain if he feels like it.

Mike Shub: Do you agree Gene?

Gene Golub: Yes.

Yosef Yomdin, Weizmann Institute, Israel: I would like to remind you of a panel discussion which took place about a year ago at Mt. Holyoke College at a conference similar to this one. One of the questions discussed there concerned Von Neumann's criticism of the computing paradigm which was largely created by Von Neumann himself. The point was that this paradigm was completely based on combinatorics and was too far from analysis. Maybe this point can unify, though not answer of course, some of the remarks given here. Would it be possible to construct a computational model which, in a natural way, would include analysis, not only arithmetic but analysis? It could probably answer a good part of the questions which appeared here, at least some of the questions Arieh proposed. Were this model available, it could help to combine numerical methods with analysis, and maybe even to find out that some numerical methods ignore the analytic structure of the problem. Maybe it could answer the last remark concerning information flow, and move that question from the realm of discrete analysis to numerical analysis.

Michael Hirsh, Emory University: In one of my professional hats I'm a topologist, and thus a very pure mathematician. I'd like to take exception to the idea that I don't use foundations in my work. Every day I use set theory; the idea of measurable functions is rather important. Now it's true that a lot of people worked on set theory and did a lot of things that I don't use, but you need to do all that before you can distill what it is possible to do without getting yourself in trouble. I think it's a mistake to ask the numerical analysis modelers to do only things that are appropriate and helpful to you right now. Steve's point is very well taken that what they're doing probably won't help you right now. But your mathematical grandchildren may use their work quite a bit and may be very thankful for the work that they've done. Current numerical analysis probably doesn't need an explicit model, but I've been to more than one talk that ground to a halt when there was an argument from the audience and ultimately it was realized that the parties had different models in mind. But since nobody was explicit about it, they didn't realize it until there had been a lot of animosity. I think that being explicit about your model can help even today, and even more so in the future.

Jorge Tierno, Caltech: I'm an engineer, so I think I'm more of an end user of all this theory. I don't really know what all these terms mean, but I do know what we expect as users from complexity theory and from numerical analysis theory. We face problems every day that we have to solve. We have no choice in saying I don't want to work with this problem because the theory behind it isn't interesting enough. I have to solve the problem. In very simple terms, I know how big a problem I can solve today with today's technology, and I have a more or less good guess at what sort of technology I'm going to have in ten years. The question I want answered is how big a problem am I going to be able to solve in ten years. That is the kind of answer we expect from computational complexity, and I don't think it's the kind of answer we are getting from the theory derived from present day models. If there is a model needed, at least for us, it is a model that will be able to predict an answer correctly.

Arieh Iserles: A few points. First of all, all of us are using set theory in the background and are using the Peano axioms in the background. No Peano axioms, no integers; we're all out of work. But we really don't debate them every day. We take the various constructs for granted. Insofar as this is concerned, OK. So set theory axioms are axioms for all of us. End of story. But we are talking about more than this. Another point is that numerical talks grind to a halt because people are not using rigor, which is different from having or not having a model. We should all be rigorous. We should all define things exactly. I agree that one beneficial result of having models is that they force us into a straitjacket of very accurate thinking and really defining what we are doing. As far as this is concerned, this is an immediate benefit of a model. This I accept. I'm not trying to take any position in favor of or against models. I'm trying really to take a very pragmatic point of view. The other point is about the incorporation of analysis into numerical analysis, and so on. It is not only analysis. Let us think how we are going to teach computation. There is a beautiful book by Gil Strang on teaching linear algebra, not numerical algebra, linear algebra. But it teaches numerical algebra as part of linear algebra. This is the way computation should be taught, at least to mathematicians. I am sick and tired of first-year university linear algebra courses in which the students are told they are solving linear equations using Cramer's rule. A year later they go into numerical analysis and they are told never to use Cramer's rule. And rightly so. In a sense, we have to see the whole arsenal of computational techniques as something belonging to mathematics and to the subject matter from which they emerge. This is true not only with regard to analysis but also with regard to algebra.

Maurice Rojas, MIT: I'd like to comment on two strange discrepancies between mathematics and computation, and after that I'd like to pose a question to the panel. The two discrepancies I'd like to point out are the following. Let's take the first one. There's numerical linear algebra and there's computational algebraic geometry. The strange thing I'd like to point out is that, at least syntactically, according to the words appearing in those phrases, numerical linear algebra should be a part of computational algebraic geometry. And yet it's not. There are an awful lot of numerical issues which are ignored in computational algebraic geometry. So that is one strange discrepancy. The other discrepancy is the following. Consider Diophantine approximation; this is a field within number theory. It's a very beautiful field, but it's gotten very abstract lately. What you might have noticed in some of the talks this week is that there are some nice applications, or at least there are some very nice Diophantine problems, arising out of these computational models. The strange thing is that present Diophantine approximation theory doesn't say anything about these questions. There is a great gap between computational models and what is known in Diophantine approximation. That's another strange discrepancy. The question I'd like to pose to the panel is whether algebraic geometry needs a model of computation.

Mike Shub: With that I'd like to thank the panel, I don't expect anyone to answer, and close the evening. Thank you very much.

Footnote: This transcription was initially prepared by Mary Beth Miller. I have edited it to smooth it out. I take responsibility for all the errors which may have crept in. I hope that I have not seriously altered anyone's meaning. - Mike Shub