Computations in infinite dimensions: Challenges in a continuous world

30 June - 1 July, 2014

Kavli Royal Society International Centre at Chicheley Hall

Participants:

Ben Adcock (Purdue)
Albrecht Böttcher (Chemnitz)
Simon Chandler-Wilde (Reading)
Albert Cohen (Paris VI)
Wolfgang Dahmen (RWTH Aachen)
Karlheinz Gröchenig (Vienna)
Sinan Güntürk (Courant)
Anders Hansen (Cambridge)
Michael Hintermüller (TU Berlin)
Gitta Kutyniok (TU Berlin)
Marco Marletta (Cardiff)
Olavi Nevanlinna (Aalto)
Houman Owhadi (Caltech)
Klaus Ritter (TU Kaiserslautern)
Gareth Roberts (Warwick)
Carola Schönlieb (Cambridge)
Christoph Schwab (ETH)
Andrew Stuart (Warwick)
Endre Süli (Oxford)
Pierre Vandergheynst (EPFL)
Martin Vetterli (EPFL)

Programme, Day 1

Breakfast: 7:00 - 8:30
Opening remarks: 8:45 - 9:00
Session 1: 9:00 - 11:00
  1-1. Ben Adcock (Purdue University)
  1-2. Wolfgang Dahmen (RWTH Aachen)
  1-3. Gitta Kutyniok (TU Berlin)
Coffee break: 11:00 - 11:30
Session 2: 11:30 - 13:00
  1-4. Albrecht Böttcher (TU Chemnitz)
  1-5. Marco Marletta (Cardiff University)
Lunch: 13:00 - 14:00
Session 3: 14:00 - 16:00
  1-6. Olavi Nevanlinna (Aalto University)
  1-7. Karlheinz Gröchenig (University of Vienna)
  1-8. Albert Cohen (Paris VI)
Tea break: 16:00 - 16:30
Session 4: 16:30 - 17:15
  1-9. Martin Vetterli (EPFL)
Dinner: 18:30

Programme, Day 2

Breakfast: 7:00 - 8:30
Session 1: 8:45 - 11:00
  2-1. Carola Schönlieb (University of Cambridge)
  2-2. Michael Hintermüller (TU Berlin)
  2-3. Klaus Ritter (TU Kaiserslautern)
Coffee break: 11:00 - 11:30
Session 2: 11:30 - 13:00
  2-4. Christoph Schwab (ETH)
  2-5. Gareth Roberts (University of Warwick)
Lunch: 13:00 - 14:00
Session 3: 14:00 - 16:00
  2-6. Houman Owhadi (Caltech)
  2-7. Sinan Güntürk (Courant)
  2-8. Simon Chandler-Wilde (University of Reading)
Tea break: 16:00 - 16:30
Session 4: 16:30 - 17:15
  2-9. Anders Hansen (Cambridge)
Dinner: 18:30

Abstracts, Day 1

1-1. Ben Adcock (Purdue University)

Infinite-dimensional compressed sensing

Since its introduction a decade ago, compressed sensing has become an intense area of research in applied mathematics, engineering and computer science. However, the majority of compressed sensing is based on a finite-dimensional, vector space model. On the other hand, many problems one encounters in applications are analog; in other words, they are best modelled in infinite-dimensional function spaces. In this talk, I will present a theory and set of techniques for compressed sensing in infinite dimensions. I shall first motivate the need for this new approach by showing how existing finite-dimensional techniques fail for simple problems, due to a mismatch between the data and the model. Next I will argue that any theory in infinite dimensions requires new assumptions, which extend the standard principles of compressed sensing - namely, sparsity and random sampling with incoherent bases - by taking into account the asymptotic decay and ordering that are present in infinite dimensions. Using these, I will introduce a new framework for infinite-dimensional compressed sensing, and show how this bridges a substantial gap in the existing literature. Finally, I will end with a surprising conclusion: this new insight is not just pertinent to infinite dimensions, but it also leads to substantially improved approaches for many finite-dimensional compressed sensing problems.

This is joint work with Anders C. Hansen, Clarice Poon and Bogdan Roman (University of Cambridge).
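The multilevel sampling idea lends itself to a small numerical illustration. The following is a hedged sketch of my own (not the authors' framework): Fourier rows are kept densely at low frequencies and ever more sparsely at high ones, reflecting the asymptotic coefficient decay, and a signal that is sparse in a DCT basis (a stand-in for the wavelet bases of the talk) is recovered greedily.

```python
# Minimal sketch of multilevel Fourier subsampling; illustrative only.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
N = 256

# Signal with a few decaying DCT coefficients (an "asymptotically sparse" stand-in).
x_dct = np.zeros(N)
x_dct[[1, 3, 10, 25, 40]] = [5.0, 3.0, 2.0, 1.0, 0.5]
x = idct(x_dct, norm="ortho")

# Multilevel sampling: keep all low frequencies, subsample the higher levels.
rows = []
for lo, hi, frac in [(0, 16, 1.0), (16, 64, 0.6), (64, 256, 0.15)]:
    k = int(frac * (hi - lo))
    rows += sorted(rng.choice(np.arange(lo, hi), size=k, replace=False))

F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary DFT matrix
Psi = dct(np.eye(N), norm="ortho")            # DCT synthesis matrix
A = F[rows] @ Psi                             # measurement map in coefficient space
y = F[rows] @ x                               # subsampled Fourier data

def omp(A, y, k):
    """Orthogonal matching pursuit: a greedy stand-in for an l1 solver."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.conj().T @ r))))
        sol, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ sol
    c = np.zeros(A.shape[1], dtype=complex)
    c[idx] = sol
    return c

c = omp(A, y, k=5)
print("relative error:", np.linalg.norm(Psi @ c.real - x) / np.linalg.norm(x))
```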

Slides

1-2. Wolfgang Dahmen (RWTH Aachen)

Tensor Sparsity of Solutions to High Dimensional PDEs

Functions of a possibly very large number of variables arise, for instance, as solutions to parametric families of PDEs, or to PDEs in a large number of variables such as the Schrödinger equation in quantum chemistry or Fokker-Planck equations modelling dilute polymers. The numerical solution of such equations is severely hampered by the "curse of dimensionality", which, roughly speaking, means that the computational cost required to achieve a desired target accuracy increases exponentially in the spatial dimension. The central theme of this talk is to break this curse by unveiling a typically hidden sparsity of the solutions to such problems. Depending on the nature of the operators involved, a possible sparsity model is approximability by relatively short sums of solution-dependent rank-one tensors. Corresponding "regularity theorems" are presented that quantify the (near) tensor sparsity of solutions to certain high-dimensional diffusion equations, provided that the data are tensor sparse. Moreover, bounds for the computational complexity of such problems are derived from a concrete numerical scheme, confirming polynomial tractability.

Joint work with R. DeVore, L. Grasedyck and Endre Süli.
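To make "tensor sparsity" concrete, here is a small self-contained sketch (my illustration, not the talk's numerical scheme): a three-way tensor that is exactly a short sum of rank-one terms is recovered by alternating least squares, the kind of short rank-one expansion whose length the regularity theorems control.

```python
# Rank-one sum (CP) approximation by alternating least squares; toy example.
import numpy as np

rng = np.random.default_rng(1)
n, r = 20, 3                                   # mode size, target rank

# Build a tensor that is exactly a sum of 3 rank-one terms (tensor sparse).
A0, B0, C0 = rng.standard_normal((3, n, r))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

A, B, C = rng.standard_normal((3, n, r))       # random initial factors
for _ in range(50):                            # ALS sweeps
    # Solve for each factor with the other two fixed (linear least squares).
    KA = np.einsum('jr,kr->jkr', B, C).reshape(n * n, r)
    A = np.linalg.lstsq(KA, T.transpose(1, 2, 0).reshape(n * n, n), rcond=None)[0].T
    KB = np.einsum('ir,kr->ikr', A, C).reshape(n * n, r)
    B = np.linalg.lstsq(KB, T.transpose(0, 2, 1).reshape(n * n, n), rcond=None)[0].T
    KC = np.einsum('ir,jr->ijr', A, B).reshape(n * n, r)
    C = np.linalg.lstsq(KC, T.reshape(n * n, n), rcond=None)[0].T

err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
print("relative error of rank-3 approximation:", err)
```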

1-3. Gitta Kutyniok (TU Berlin)

Optimal Compressive Imaging using Fourier Samples of Continuous Multivariate Data

"One fundamental problem in applied mathematics is the issue of recovery of continuous data from specific samples. Of particular importance is the case of pointwise samples of the associated Fourier transform, which are, for instance, collected in Magnetic Resonance Imaging (MRI). Strategies to reduce the number of samples required for reconstruction with a prescribed accuracy have thus a direct impact on such applications -- which in the case of MRI will, for instance, shorten the time a patient is forced to lie in the scanner. In this talk, we will present a sparse subsampling strategy of Fourier samples which can be shown to perform optimally for multivariate functions, which are typically governed by anisotropic features. For this, we will introduce a dualizable shearlet frame for reconstruction, which provides provably optimally sparse approximations of cartoon-like images, a class typically regarded as a suitable model for images. The sampling scheme will be based on compressed sensing ideas combined with a coherence-adaptive sampling density considering the coherence between the Fourier basis and the shearlet frame.

This is joint work with W.-Q Lim (TU Berlin).
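As a toy illustration of a coherence-adaptive sampling density (my own construction; the exponent and constants are assumptions, not those of the paper), one can draw 2D Fourier samples with probability decaying in the frequency magnitude, so that the low frequencies, where coherence with directional frames concentrates, are fully sampled:

```python
# Variable-density 2D Fourier sampling mask; illustrative toy.
import numpy as np

rng = np.random.default_rng(2)
N = 128                                       # image is N x N

kx, ky = np.meshgrid(np.fft.fftfreq(N) * N, np.fft.fftfreq(N) * N)
radius = np.sqrt(kx**2 + ky**2)

# Power-law density (exponent chosen for illustration), clipped to 1 near
# the origin so the lowest frequencies are sampled with certainty.
density = np.minimum(1.0, 40.0 * (1.0 + radius) ** -1.5)
mask = rng.random((N, N)) < density

print(f"fraction of Fourier samples kept: {mask.mean():.3f}")
# To subsample an image's spectrum:  y = np.fft.fft2(img)[mask]
```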

1-4. Albrecht Böttcher (TU Chemnitz)

Isn't the continuum in fact simpler than the discrete world?

Perhaps the greatest achievement in mathematics is Newton and Leibniz's invention of the differential and integral calculus, and thus the passage from the discrete world to the continuum. We are here at Chicheley Hall because we all know that unfortunately many continuous or infinite-dimensional problems are too complicated to be solved directly, and that one should therefore step back and replace them by discrete approximations, which can nowadays be tackled by having recourse to the computer. I want to recall that there are nevertheless also lots of problems where Newton and Leibniz's original strategy remains the magic wand that leads to success. I present a couple of examples which show that certain phenomena concerning large matrices can be understood much better by passing from matrices to integral operators.

Slides

1-5. Marco Marletta (Cardiff University)

The finite section method for dissipative Jacobi matrices and Schrödinger operators

We consider how the spectra (eigenvalues and essential spectrum) of certain classes of infinite matrices and differential operators can be approximated by the eigenvalues of finite matrices. We are particularly interested in the case of the essential spectrum; an example of E.B. Davies shows that it may fail to be approximated. We review some recent results and some slightly overlooked results of Pokrzywa from the 1970s, and present some new theorems which are joint work of the author with Sergey Naboko (St Petersburg) and Rob Scheichl (Kent). An application is given to the dissipative barrier method for resonances, and we explain why this method is also good for calculating eigenvalues.
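A small experiment (my sketch, not the authors' results) illustrates both ingredients: finite sections of a discrete Schrödinger operator on $\ell^2(\mathbb{N})$, plus a dissipative barrier $+i\gamma$ on a fixed finite set of sites, which shifts genuine eigenvalues up to roughly $\lambda + i\gamma$ while finite-section artefacts of the essential spectrum stay near the real axis:

```python
# Finite sections + dissipative barrier; illustrative toy, not the paper's method.
import numpy as np

n, gamma, R = 400, 1.0, 20
v = np.zeros(n)
v[0] = -3.0                                   # deep well: one bound state below -2

# n x n finite section of the Jacobi matrix J = shift + shift^T + diag(v).
J = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1) + np.diag(v)

# Dissipative barrier i*gamma on the first R sites (a finite-rank perturbation).
barrier = np.zeros(n, dtype=complex)
barrier[:R] = 1j * gamma
eigs = np.linalg.eigvals(J + np.diag(barrier))

# Eigenvectors of spectral-pollution candidates barely touch the barrier, so
# their eigenvalues keep a small imaginary part; the bound state, localized
# inside the barrier, moves to approximately lambda + i*gamma.
recovered = eigs[eigs.imag > 0.8 * gamma]
print("candidate eigenvalue(s) of J:", np.round(recovered.real, 4))  # about -10/3
```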

Slides

1-6. Olavi Nevanlinna (Aalto University)

Computability concepts in spectral computations

- What do we mean when we ask questions like: Can the spectrum of any operator be computed?
- Or the spectrum of an abstract Banach algebra element?
- Can you write down the resolvent in the complement of the spectrum?
We look at what the existing theories of computability say about the possibility of computing the spectrum. Starting with matrices, we mention the barrier results on computing polynomial roots by generally convergent rational iterations (Doyle, McMullen). We recall the "semidecidability" property of Julia sets (Blum, Cucker, Shub, Smale) and point out a parallel with spectral computations in the framework of Banach algebras, leading to an effective functional calculus (ON). Computability in the sense of constructivism (computable reals) is mentioned, together with some basic results, positive and negative, on the possibility of computing within this framework the spectrum of "effectively defined operators" (Pour-El, Richards). With this brief glimpse of existing theories we intend to highlight the landscape into which the Solvability Complexity Index is entering. The rest of the talk tries to explain in what rather essential way this new concept differs from the others. We mention some results (Ben-Artzi, Hansen, ON, Seidel), but Anders Hansen, who initiated this line of research, will discuss this in more detail.

Slides

1-7. Karlheinz Gröchenig (University of Vienna)

Approximation of bandlimited functions with finitely many samples.

We discuss the approximation of a bandlimited function from finitely many (non-uniformly spaced) samples. Whereas the mathematical theory usually addresses the question of when such a function in $L^2(\mathbb{R})$ can be recovered completely, numerical methods operate with a finite-dimensional model. The numerical reconstruction or approximation of the original function amounts to the solution of a large linear system. We show that the solutions of a particularly efficient discrete model, in which the data are fit by trigonometric polynomials, converge to the solution of the original infinite-dimensional reconstruction problem. This legitimizes the numerical computations and explains why the algorithms produce reasonable results. This method is an early example of generalized sampling.
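The discrete model can be written down in a few lines. Below is a hedged sketch of the idea (my reconstruction, not the speaker's code): fit a trigonometric polynomial to nonuniform samples by least squares and compare with the underlying bandlimited function.

```python
# Trigonometric least squares fit to nonuniform samples; illustrative sketch.
import numpy as np

rng = np.random.default_rng(3)
M, deg = 200, 15                                 # samples, polynomial degree

f = lambda t: np.sinc(4 * (t - 0.3)) + 0.5 * np.sinc(2 * t)  # bandlimited test signal
t = np.sort(rng.uniform(-0.5, 0.5, M))                       # nonuniform sample points
y = f(t)

# Design matrix of complex exponentials exp(2*pi*i*k*t), |k| <= deg.
k = np.arange(-deg, deg + 1)
V = np.exp(2j * np.pi * np.outer(t, k))
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

t_fine = np.linspace(-0.5, 0.5, 1000)
approx = (np.exp(2j * np.pi * np.outer(t_fine, k)) @ coef).real
print("max deviation on [-1/2, 1/2]:", np.abs(approx - f(t_fine)).max())
```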

Slides

1-8. Albert Cohen (Paris VI)

Estimating the n-width of solution manifolds of parametric PDEs

Numerous computational model reduction methods, such as reduced bases and their generalizations, proper orthogonal decomposition, and generalized empirical interpolation, rely on the implicit assumption that the solution manifold, which gathers the solutions to a PDE as the parameters vary, can be well approximated by low-dimensional spaces. This is made more precise by assuming that the Kolmogorov n-width of the manifold has a certain decay. In this talk, I shall discuss strategies that allow one to rigorously establish rates of decay for the Kolmogorov n-width of solution manifolds associated with relevant parametric PDEs, where the parameters are infinite dimensional. One key result is that the rate of decay of n-widths of sets in a Banach space is almost preserved under the action of holomorphic maps.
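For orientation, the Kolmogorov n-width referred to here is the standard one (textbook material, not specific to the talk): for a compact set $\mathcal{M}$ in a Banach space $X$,

```latex
d_n(\mathcal{M})_X \;=\; \inf_{\substack{V_n \subset X \\ \dim V_n = n}} \;\sup_{u \in \mathcal{M}} \;\inf_{v \in V_n} \| u - v \|_X .
```

Fast decay of $d_n$ as $n \to \infty$ is precisely what justifies approximating the solution manifold by the n-dimensional spaces that reduced basis methods construct.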

Slides

1-9. Martin Vetterli (EPFL)

Inverse problems in acoustics regularized by sparsity

Many inverse problems can be solved using regularization based on sparsity, as for example in compressed sensing. However, some problems have no clear discrete structure and thus require continuous modelling, for example using finite rate of innovation (FRI) signal models. One area where continuous models and sparsity with respect to singularities are well adapted is acoustics and electromagnetics. Point sources are Diracs in space, and reflectors like walls create image sources. Equipped with this model, we attack classic problems in acoustics. First is the question “Can one hear the shape of a room?”; that is, do room impulse responses between a source and a few microphones uniquely specify the shape of a room? The answer is positive, and involves an algorithm that performs echo sorting using the rank property of Euclidean distance matrices. Based on the insight of using sources and their images (or reflections), we can attack calibration problems, the most general case being “blind calibration”: given N microphones, M sources and an unknown room, it is possible, under certain conditions, to recover the positions of receivers and sources as well as the shape of the room. Further applications of the basic echo sorting technique can be found in acoustic beamforming and indoor localisation. Being mildly combinatorial, the echo sorting algorithm represents a computational challenge, but in many cases of interest side information is available to make the problem tractable.

This is joint work with Ivan Dokmanić.
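The rank property that drives echo sorting is easy to demonstrate. In the toy sketch below (my own, with made-up geometry), a Euclidean distance matrix (EDM) of points in 3D has rank at most 5, so a candidate set of echo distances belongs to a single image source exactly when appending it to the microphones' EDM preserves that rank deficiency:

```python
# EDM rank test behind echo sorting; toy geometry, illustrative only.
import numpy as np

rng = np.random.default_rng(4)
mics = rng.uniform(0, 3, (5, 3))                  # five microphones in a room
source = np.array([1.0, 2.0, 1.5])                # one (image) source

def edm(P):
    """Matrix of squared pairwise distances."""
    G = P @ P.T
    d = np.diag(G)
    return d[:, None] - 2 * G + d[None, :]

def edm_score(D):
    # Singular values beyond rank 5 measure deviation from an EDM of 3D points.
    s = np.linalg.svd(D, compute_uv=False)
    return s[5:].sum() / s[0]

d_true = np.sum((mics - source) ** 2, axis=1)     # squared echo distances
d_wrong = rng.permutation(d_true) * rng.uniform(0.8, 1.2, 5)  # mis-sorted echoes

for label, d in [("correct assignment", d_true), ("wrong assignment", d_wrong)]:
    D = np.zeros((6, 6))
    D[:5, :5] = edm(mics)
    D[5, :5] = D[:5, 5] = d
    print(label, "score:", edm_score(D))          # near zero only when correct
```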

Abstracts, Day 2

2-1. Carola Schönlieb (University of Cambridge)

Optimizing the optimizers - an adaptive approach to image reconstruction.

When assigned the task of reconstructing an image from given data, the first challenge one faces is the derivation of a truthful image and data model. Such a model can be determined by a priori knowledge about the image, the data and their relation to each other. The source of this knowledge is either our understanding of the type of images we want to reconstruct and of the physics behind the acquisition of the data, or we can strive to learn parametric models from the data itself. The common question arises: how can we optimise our model choice? Starting from the first modelling strategy, this talk will lead us from total variation as the most successful image regularisation model today to non-smooth second- and third-order regularisers, with data models for Gaussian and Poisson distributed data as well as impulse noise. Applications to image denoising, inpainting and surface reconstruction are given. After a critical discussion of these different image and data models, we will turn towards the second modelling strategy and propose to combine it with the first one using a bilevel optimization method. In particular, we will consider optimal parameter derivation for total variation denoising with multiple noise distributions, and optimising total generalised variation regularisation for its application in photography.

Joint work with Luca Calatroni, Jan Lellmann, Juan Carlos De Los Reyes and Tuomo Valkonen.
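The bilevel idea can be miniaturized as follows (a hedged 1D sketch of mine; the talk uses derivative-based bilevel optimization rather than the brute-force upper level shown here): the lower level solves TV denoising for a given regularisation weight, and the upper level picks the weight that best matches a ground truth.

```python
# Lower level: dual projected gradient for 1D TV denoising.
# Upper level: brute-force parameter choice against a training pair.
import numpy as np

rng = np.random.default_rng(5)
n = 200
truth = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])   # piecewise constant
noisy = truth + 0.2 * rng.standard_normal(n)

def tv_denoise(f, lam, iters=300):
    # Chambolle-style dual iteration for min_u 0.5*||u-f||^2 + lam*sum|u[i+1]-u[i]|.
    Dt = lambda p: -np.diff(np.concatenate([[0.0], p, [0.0]]))  # D^T with zero padding
    p = np.zeros(f.size - 1)
    sigma = 1.0 / (4.0 * lam)                 # safe step size for the dual ascent
    for _ in range(iters):
        u = f - lam * Dt(p)
        p = np.clip(p + sigma * np.diff(u), -1.0, 1.0)
    return f - lam * Dt(p)

lams = np.linspace(0.02, 1.0, 25)
errs = [np.linalg.norm(tv_denoise(noisy, l) - truth) for l in lams]
print("best TV weight on this training pair:", round(lams[int(np.argmin(errs))], 3))
```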

Slides

2-2. Michael Hintermüller (TU Berlin)

Functional-analytic and numerical issues in splitting methods for total variation-based image reconstruction.

Variable splitting schemes for the function space version of the image reconstruction problem with total variation regularization (TV-problem) in its primal and predual formulations are considered. In the primal splitting formulation, while existence of a solution cannot be guaranteed, it is shown that quasi-minimizers of the penalized problem are asymptotically related to the solution of the original TV-problem. On the other hand, for the predual formulation, a family of parametrized problems is introduced and a parameter dependent contraction of an associated fixed point iteration is established. Moreover, the theory is validated by numerical tests. Additionally, the augmented Lagrangian approach is studied, details on an implementation on a staggered grid are provided and numerical tests are shown.
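For orientation, here is a minimal sketch (in 1D, discrete, not the paper's function space analysis) of what variable splitting with an augmented Lagrangian looks like for the TV problem: split $d = Du$ and alternate a linear solve in $u$, a soft shrinkage in $d$, and a multiplier update.

```python
# Split Bregman / ADMM-style splitting for 1D TV denoising; illustrative toy.
import numpy as np

def tv_admm(f, lam, rho=1.0, iters=200):
    n = f.size
    D = np.diff(np.eye(n), axis=0)                 # forward-difference matrix
    M = np.eye(n) + rho * D.T @ D                  # normal matrix of the u-subproblem
    d = np.zeros(n - 1); b = np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(M, f + rho * D.T @ (d - b))   # quadratic u-update
        w = D @ u + b
        d = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft shrinkage
        b = b + D @ u - d                                         # multiplier update
    return u

rng = np.random.default_rng(6)
truth = np.repeat([0.0, 1.0, 0.3], 60)
noisy = truth + 0.15 * rng.standard_normal(truth.size)
u = tv_admm(noisy, lam=0.5)
print("relative error vs. ground truth:",
      np.linalg.norm(u - truth) / np.linalg.norm(truth))
```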

2-3. Klaus Ritter (TU Kaiserslautern)

Weighted Hilbert Spaces and Integration of Functions of Infinitely Many Variables

We consider numerical integration for functions of infinitely many variables, which has only recently been studied in the literature. This line of research may be considered as the limiting case of high-dimensional integration, and it is motivated, e.g., by computational problems arising for stochastic processes. We discuss the structure of the underlying function spaces, namely Hilbert spaces with reproducing kernels given as weighted superpositions of tensor product kernels, and we present a typical result for integration in this setting. Additionally, we discuss a new kind of embedding result, which is employed in the analysis.

Partially supported by the DFG within Priority Program 1324 and by the Center for Mathematical and Computational Modelling.
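A toy computation (my construction, not from the talk) conveys the flavour of integrating a function of infinitely many variables: for a product-form integrand with rapidly decaying weights $\gamma_j = j^{-3}$, truncating to finitely many active coordinates already captures the integral, here normalised to be exactly 1.

```python
# Truncation-dimension experiment for an integrand of infinitely many variables.
import numpy as np

rng = np.random.default_rng(7)

def f_truncated(x):                      # x has shape (samples, s)
    j = np.arange(1, x.shape[1] + 1)
    # Product of factors 1 + gamma_j * (x_j - 1/2); each factor integrates to 1.
    return np.prod(1.0 + j**-3.0 * (x - 0.5), axis=1)

for s in [1, 2, 4, 8, 16]:
    x = rng.uniform(size=(100_000, s))
    print(f"truncation dimension {s:2d}: estimate = {f_truncated(x).mean():.6f}")
```

Here the truncation error decays so fast that plain Monte Carlo noise dominates; weighted-space theory makes this trade-off precise.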

Slides

2-4. Christoph Schwab (ETH)

Sparse, Adaptive Quadrature Methods for Bayesian Inverse Problems of Parametric Operator Equations

We present and analyze a class of sparse Smolyak quadrature methods for Bayesian inverse problems of parametric operator equations. The proposed method relies on a parametric, deterministic reformulation of Bayesian inverse problems with distributed parameter uncertainty from possibly infinite dimensional, separable Banach spaces, with known prior probability measure on the uncertain distributed parameter. The goal of computation is to evaluate moments or quantities of interest under the Bayesian posterior, conditional on given noisy observational data. For forward problems belonging to a certain sparsity class, the parametric, deterministic density of the Bayesian posterior belongs to the same sparsity class. Instances of the considered class of forward problems are definite or indefinite elliptic and parabolic evolution problems, with scalar or tensorial unknowns and with possible uncertainty in domains, and high-dimensional initial value problems with uncertain coefficients. The results suggest, in particular, dimension-independent convergence rates for data-adaptive Smolyak integration algorithms, so that the proposed approach is suitable for application in the infinite dimensional setting. We address asymptotic scaling limits of the posterior in the limit of vanishing observation noise variance. These asymptotics motivate a novel "variable metric" preconditioner for the adaptive Smolyak algorithm. Results from selected numerical experiments confirming the theoretical findings will be presented.

This work is supported by the European Research Council under FP7 Grant AdG247277, and is joint with Robert Gantner and Claudia Schillings (ETH), Andrew Stuart (Warwick).
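The non-adaptive, scalar-valued core of such a method fits in a few lines. The sketch below (isotropic and two-dimensional, unlike the adaptive, infinite-dimensional algorithm of the talk) builds Smolyak quadrature from nested trapezoidal rules via the combination formula:

```python
# Smolyak quadrature in d = 2 via the combination formula; illustrative sketch.
import numpy as np
from math import log

def trap_rule(level):                       # nodes/weights on [0,1], 2^level + 1 points
    n = 2**level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1)); w[[0, -1]] *= 0.5
    return x, w

def tensor_quad(f, l1, l2):
    x1, w1 = trap_rule(l1); x2, w2 = trap_rule(l2)
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    return float(np.einsum("i,j,ij->", w1, w2, f(X1, X2)))

def smolyak_2d(f, q):
    # Combination formula for d = 2: sum over |l| = q minus sum over |l| = q - 1.
    s = sum(tensor_quad(f, l1, q - l1) for l1 in range(0, q + 1))
    s -= sum(tensor_quad(f, l1, q - 1 - l1) for l1 in range(0, q))
    return s

f = lambda x, y: 1.0 / (1.0 + x + y)
exact = 3 * log(3) - 4 * log(2)             # exact integral over the unit square
for q in range(1, 7):
    print(f"q = {q}: error = {abs(smolyak_2d(f, q) - exact):.2e}")
```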

2-5. Gareth Roberts (University of Warwick)

Retrospective simulation for infinite-dimensional simulation

This presentation will survey retrospective simulation techniques and their uses in stochastic simulation and Monte Carlo for infinite-dimensional systems. Examples may include simulation for stochastic processes, Bayesian inference problems for stochastic processes, and Bayesian non-parametrics.
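To convey the retrospective idea in miniature (my toy example, not from the talk): one can sample an event whose probability is an infinite series without ever evaluating the series, by refining a bracket of partial sums only as far as the random draw requires.

```python
# Retrospective sampling of a Bernoulli(exp(-1)) event; only finitely many
# terms of the alternating series exp(-1) = sum_k (-1)^k / k! are ever used.
import numpy as np
from math import factorial

def bernoulli_exp_minus_one(rng):
    u = rng.random()
    s, k = 1.0, 1                                # partial sum S_1 = 1
    while True:
        s_next = s + (-1) ** k / factorial(k)    # S_{k+1}
        lo, hi = min(s, s_next), max(s, s_next)  # consecutive sums bracket exp(-1)
        if u < lo:
            return 1
        if u > hi:
            return 0
        s, k = s_next, k + 1                     # refine only as far as needed

rng = np.random.default_rng(8)
draws = [bernoulli_exp_minus_one(rng) for _ in range(50_000)]
print("empirical:", np.mean(draws), " target:", np.exp(-1))
```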

Slides

2-6. Houman Owhadi (Caltech)

A calculus for the optimal quantification of uncertainties

The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical estimators (or models), these estimators are still designed "by humans" because there is currently no known recipe or algorithm for dividing the design of a statistical estimator into a sequence of arithmetic operations. Indeed, enabling computers to "think" as "humans" can when faced with uncertainty is challenging in two major ways: (1) finding optimal statistical estimators remains to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions and/or beliefs, tends to be infinite dimensional, whereas calculus on a computer is necessarily discrete and finite. With this in mind, this talk will describe the development of a form of calculus allowing for the manipulation of infinite dimensional information structures (based on reductions of optimization problems over variables such as measures over measures and functions) and its application to the optimal quantification of uncertainties in complex systems, the investigation of robust Bayesian inference under finite information (finite co-dimensional classes of priors), the identification of new Selberg integral formulae, and the scientific computation of optimal statistical estimators.
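A minimal instance of the reduction idea (my illustration): the worst-case probability $\sup \{\, P(X \ge a) : X \in [0,1],\ E[X] = m \,\}$ over all probability measures is attained at a two-point measure, collapsing an infinite-dimensional optimization over measures to a search over three scalars and recovering the sharp Markov bound $m/a$.

```python
# Worst-case probability over all measures reduces to two-point measures.
import numpy as np

a, m = 0.8, 0.4
rng = np.random.default_rng(9)

best = 0.0
for _ in range(100_000):
    x1, x2 = np.sort(rng.uniform(0, 1, 2))     # two atoms, x1 <= x2
    if not (x1 <= m <= x2) or x1 == x2:
        continue                                # mean constraint must be feasible
    w2 = (m - x1) / (x2 - x1)                  # weight on x2 enforcing E[X] = m
    p = w2 * (x2 >= a) + (1 - w2) * (x1 >= a)  # P(X >= a) for this measure
    best = max(best, p)

print("two-point search:", round(best, 4), " sharp Markov bound m/a:", m / a)
```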

2-7. Sinan Güntürk (Courant)

Near-Optimal Quantization of Redundant and Compressive Measurements

We will present a new noise-shaping based quantization and reconstruction algorithm for frame measurements. For any given quantization alphabet size, this algorithm provides near-optimal accuracy for random frames as well as some classical deterministic frames. With a suitably adapted reconstruction method, we will show that this algorithm also yields near-optimal quantization for compressive sampling systems. The methods are applicable to both finite and infinite dimensions. Additional features of the proposed algorithm include low computational cost and parallel implementability.

Joint work with Evan Chou.
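A first-order noise-shaping quantizer, the simplest member of this family, can be sketched as follows (my toy; the talk's schemes and their near-optimality analysis go well beyond it). Reconstruction uses a difference-weighted least squares fit, a stand-in for the adapted reconstruction mentioned above.

```python
# First-order sigma-delta quantization of random frame measurements; toy sketch.
import numpy as np

rng = np.random.default_rng(10)
k, N, delta = 4, 200, 0.5                       # dimension, frame size, step
E = rng.standard_normal((N, k))
E /= np.linalg.norm(E, axis=1, keepdims=True)   # unit-norm random frame
x = rng.standard_normal(k) / np.sqrt(k)
y = E @ x                                       # redundant measurements

def sigma_delta(y):
    """Greedy first-order noise shaping: errors become differences of a bounded state."""
    q, u = np.zeros_like(y), 0.0
    for i, yi in enumerate(y):
        q[i] = delta * np.round((u + yi) / delta)
        u += yi - q[i]                          # |u| stays <= delta/2
    return q

# Memoryless rounding with a plain least squares reconstruction...
x_pcm = np.linalg.lstsq(E, delta * np.round(y / delta), rcond=None)[0]
# ...versus sigma-delta with a difference-weighted fit (adapted-dual stand-in).
Dinv = np.tril(np.ones((N, N)))                 # inverse of the difference operator
x_sd = np.linalg.lstsq(Dinv @ E, Dinv @ sigma_delta(y), rcond=None)[0]
print("rounding error:   ", np.linalg.norm(x_pcm - x))
print("sigma-delta error:", np.linalg.norm(x_sd - x))
```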

2-8. Simon Chandler-Wilde (University of Reading)

On Spectral Inclusion Sets and Computing the Spectra and Pseudospectra of Bounded Linear Operators

We derive inclusion sets for the (pseudo)spectrum of an infinite Jacobi matrix $A$, understood as a bounded linear operator on $\ell^2(\mathbb{Z})$. The first inclusion set is the union of certain pseudospectra of $n\times n$ principal submatrices of $A$. In a second version we work with lower bounds on $n \times \infty$ and $\infty \times n$ submatrices of $A-\lambda I$, which effectively leads to the study of related $n \times n$ matrices. The latter procedure is motivated by work on second order spectra of Davies, and Davies & Plum, and on higher-order versions by Hansen. Our third set not only yields an upper bound but also an approximation of the (pseudo)spectrum of $A$ as $n \to\infty$. We discuss extensions of our results to band operators of higher bandwidth and band-dominated operators, and to matrices $A$ with operator entries. We illustrate the results by computing spectral inclusion sets for a particular random operator studied by Feinberg and Zee.

This is joint work with Marko Lindner (Hamburg-Harburg) and Ratchanikorn Chonchaiya.
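As a crude numerical companion (a simplified heuristic of mine; the paper's inclusion sets rest on rigorous one-sided bounds for rectangular submatrices), one can map out pseudospectra of a finite section of the Feinberg-Zee random hopping matrix via smallest singular values on a grid:

```python
# Pseudospectrum estimate for a finite section of the Feinberg-Zee matrix.
import numpy as np

rng = np.random.default_rng(11)
n = 100
sub = rng.choice([-1.0, 1.0], size=n - 1)        # random +-1 lower diagonal
A = np.diag(np.ones(n - 1), 1) + np.diag(sub, -1)

def smin(z):
    """Resolvent-norm proxy: smallest singular value of A - z*I."""
    return np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]

pts = np.linspace(-2.2, 2.2, 35)
grid = np.array([[smin(x + 1j * y) for x in pts] for y in pts])
eps = 0.1
print(f"grid points inside the {eps}-pseudospectrum estimate:",
      int((grid < eps).sum()), "of", grid.size)
```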

Slides

2-9. Anders Hansen (University of Cambridge)

Can everything be computed? - On the Solvability Complexity Index and Towers of Algorithms

This talk addresses some of the fundamental barriers in the theory of computations. Many computational problems can be solved as follows: a sequence of approximations is created by an algorithm, and the solution to the problem is the limit of this sequence (think of computing eigenvalues of a matrix, for example). However, as we demonstrate, for several basic problems in computations (computing spectra of infinite-dimensional operators, solutions to linear equations, or roots of polynomials using rational maps) such a procedure based on one limit is impossible. Yet one can compute solutions to these problems, only by using several limits. This may come as a surprise, but it touches on the boundaries of computational mathematics. To analyze this phenomenon we use the Solvability Complexity Index (SCI). The SCI is the smallest number of limits needed in order to compute a desired quantity. In several cases (spectral problems, inverse problems) we provide sharp results on the SCI, thus establishing the absolute barriers for what can be achieved computationally. For example, we show that the SCI of spectra of self-adjoint infinite matrices is equal to two, thus providing the first algorithms to compute such spectra in two limits. Moreover, we establish barriers on error control. We prove that no algorithm can provide error control on the computational spectral problem or on solutions to infinite-dimensional linear systems. In particular, one can get arbitrarily close to the solution, but without ever knowing when one is "epsilon" away. In addition, we provide bounds for the SCI of spectra of classes of Schrödinger operators, thus answering affirmatively the long-standing question of whether these spectra can actually be computed. Finally, we show how the SCI provides a natural framework for understanding barriers in computations. In particular, we demonstrate how the impossibility result of McMullen on polynomial root finding with rational maps in one limit, and the framework of Doyle and McMullen on solving the quintic in several limits, can be put in the SCI framework.

This is joint work with J. Ben-Artzi, Olavi Nevanlinna and Markus Seidel.
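Schematically (my paraphrase of the notion described above), a tower of algorithms of height $k$ computes the desired quantity $\Gamma(A)$ via $k$ nested limits of computations $\Gamma_{n_k,\dots,n_1}$, each of which uses only finitely many entries of $A$:

```latex
\Gamma(A) \;=\; \lim_{n_k \to \infty} \cdots \lim_{n_1 \to \infty} \Gamma_{n_k, \dots, n_1}(A).
```

The SCI of a problem is the smallest height $k$ for which such a tower exists; the result quoted above states that for spectra of self-adjoint infinite matrices no tower of height one suffices, while a tower of height two does.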

Slides