
Cambridge-Imperial Computational PhD Seminar

Wednesday 19th February 2020, Cambridge

Plenary speaker: Carola Schönlieb (University of Cambridge)

12.00-12.30

Sam Thomas

Randomised Approximation Algorithms

In this talk, I introduce the concept of using Markov chains to approximate difficult-to-compute combinatorial quantities. These include graph quantities such as the number of proper colourings or hardcore configurations. The method has recently been used in the stochastic block model, and I suggest a new target for it. I also outline how a similar approach (related to hardcore configurations) can be used to create a decentralised random-access scheme for fibre-optic routing, giving provable results rather than heuristic solvers of NP-hard problems. Only introductory Markov chain theory will be required, and all models will be defined explicitly.
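
A minimal Python sketch (illustrative only, not part of the talk; the graph and parameters are made up) of the kind of chain involved: Glauber dynamics for sampling proper q-colourings, the basic engine behind MCMC approximate counting of colourings.

```python
# Illustrative sketch: Glauber dynamics for sampling proper q-colourings of a
# graph, the kind of Markov chain used in MCMC-based approximate counting.
import random

def glauber_colouring(adj, q, steps, seed=0):
    """adj: dict vertex -> list of neighbours; q: number of colours."""
    rng = random.Random(seed)
    vertices = list(adj)
    # Start from an arbitrary proper colouring (greedy); assumes q > max degree.
    colouring = {}
    for v in vertices:
        used = {colouring[u] for u in adj[v] if u in colouring}
        colouring[v] = next(c for c in range(q) if c not in used)
    for _ in range(steps):
        v = rng.choice(vertices)                    # pick a uniformly random vertex
        forbidden = {colouring[u] for u in adj[v]}  # colours used by its neighbours
        allowed = [c for c in range(q) if c not in forbidden]
        colouring[v] = rng.choice(allowed)          # resample its colour uniformly
    return colouring

# 4-cycle with q = 3 colours; after many steps the state is approximately
# uniform over proper colourings (for q large enough relative to the degree).
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(glauber_colouring(cycle, q=3, steps=10_000))
```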

12.30-13.00

Jan Povala

Burglary in London: Insights from Statistical Heterogeneous Spatial Point Processes

To obtain operational insights regarding the crime of burglary in London, we consider the estimation of the effects of covariates on the intensity of spatial point patterns. By taking into account localised properties of criminal behaviour, we propose a spatial extension to model-based clustering methods from the mixture modelling literature. The proposed Bayesian model is a finite mixture of Poisson generalised linear models in which each location is probabilistically assigned to one of the clusters. Each cluster is characterised by its regression coefficients, which we subsequently use to interpret the localised effects of the covariates. Using a blocking structure of the study region, our approach allows spatial dependence between nearby locations to be specified. We estimate the proposed model using Markov chain Monte Carlo methods and provide a Python implementation.
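
A small hypothetical sketch of the Poisson-GLM building block described above, with made-up covariates, counts and cluster assignments; the actual model, priors and MCMC scheme are those of the talk and its Python implementation.

```python
# Illustrative sketch (hypothetical data): the Poisson-GLM building block of a
# mixture model in which each grid cell j has covariates x_j, a cluster
# assignment z_j, and cluster-specific coefficients beta_k.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_cov, n_clusters = 200, 3, 2
X = rng.normal(size=(n_cells, n_cov))          # covariates per grid cell
z = rng.integers(0, n_clusters, size=n_cells)  # cluster assignment per cell
beta = rng.normal(size=(n_clusters, n_cov))    # coefficients per cluster
counts = rng.poisson(np.exp(np.einsum('jc,jc->j', X, beta[z])))  # synthetic counts

def log_likelihood(beta, X, counts, z):
    """Poisson log-likelihood with log link, up to the constant -log(y!)."""
    eta = np.einsum('jc,jc->j', X, beta[z])   # linear predictor per cell
    lam = np.exp(eta)                         # intensity per cell
    return np.sum(counts * eta - lam)

print(log_likelihood(beta, X, counts, z))
```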

14.00-15.00

Carola Schönlieb

From differential equations to deep learning for inverse imaging problems

Images are a rich source of beautiful mathematical formalism and analysis. Associated mathematical problems arise in functional and non-smooth analysis, the theory and numerical analysis of partial differential equations, harmonic, stochastic and statistical analysis, and optimisation. Starting with a discussion of the intrinsic structure of images and their mathematical representation, in this talk we will learn about some of these mathematical problems, about variational models for inverse problems and image analysis, and about their connection to partial differential equations and deep learning. The talk is furnished with applications to biomedical image analysis, art restoration and forest conservation.

15.30-16.00

William Earley

Engines of Parsimony: How to take over the universe on a budget

Despite their short history, computers have become pervasive in our everyday lives, but these everyday computers are all of the same ilk: they use classical physics, are thermodynamically irreversible, and are made of precisely patterned semiconductors at macroscopic scales. This irreversibility turns out to have a significant impact on performance. I will discuss what these performance impacts are and how they arise from fundamental laws of physics. Furthermore, I will show how novel forms of computing (reversible, quantum, molecular/DNA, nanoscale) offer the possibility of surpassing these performance limitations and enable operating in the limit of negligible free energy. Even these unconventional computers are still limited, however, and these limitations also depend on the scale of the computational system in question, from microscopic to macroscopic to planetary/stellar to (the cusp of) black hole formation.
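
As a back-of-the-envelope illustration of the thermodynamic cost of irreversibility (not taken from the talk), Landauer's bound gives the minimum energy dissipated per bit erased:

```python
# Illustrative calculation: Landauer's bound, the minimum energy dissipated
# per bit erased by a thermodynamically irreversible operation.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K
E_bit = k_B * T * math.log(2)
print(f"Landauer limit at {T} K: {E_bit:.3e} J per erased bit")  # ~2.9e-21 J
```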

16.00-16.30

Harrison Zhu

Bayesian probabilistic numerical integration with tree-based models

Bayesian quadrature (BQ) is a method for solving the numerical integration problems found at the heart of most machine learning and statistical methodologies. The standard approach to BQ is based on a Gaussian process (GP) approximation of the integrand. As a result, standard BQ is inherently limited to cases where GP approximation can be done efficiently, often prohibiting high-dimensional or non-smooth target functions. This paper proposes to tackle this issue with a new Bayesian numerical integration algorithm based on Bayesian Additive Regression Trees (BART) priors, which we call BART-Int. BART priors are easy to tune and automatically handle a mixture of discrete and continuous variables. We demonstrate that they also lend themselves naturally to a sequential design setting and that explicit convergence rates can be obtained in a variety of settings. The advantages and disadvantages of this new methodology are highlighted on a set of benchmark tests (the Genz functions) and on a survey design problem.
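
A rough sketch of the idea (not the authors' BART-Int code): fit a tree ensemble to design-point evaluations and integrate the fitted surrogate against the target measure. A scikit-learn gradient-boosted ensemble stands in for a BART posterior here, so only a point estimate of the integral is produced, without the Bayesian uncertainty quantification.

```python
# Illustrative sketch: tree-based numerical integration in the spirit of
# BART-Int, with a gradient-boosted tree ensemble as a stand-in surrogate.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def tree_quadrature(f, dim, n_design=200, n_mc=100_000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n_design, dim))       # design points in [0,1]^d
    y = f(X)                                    # evaluate the integrand
    surrogate = GradientBoostingRegressor().fit(X, y)
    # Integrate the fitted surrogate against the uniform measure by Monte Carlo.
    X_mc = rng.uniform(size=(n_mc, dim))
    return surrogate.predict(X_mc).mean()

# Genz-style oscillatory test integrand on [0,1]^3.
f = lambda X: np.cos(2 * np.pi * 0.5 + X.sum(axis=1))
print(tree_quadrature(f, dim=3))
```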

16.30-17.00

Karen Luong

An efficient method to approximate wave packets on the whole real line

Highly oscillatory wave packets play an important role in quantum mechanics, especially in the semi-classically scaled time-dependent Schrödinger equation. We compare three different orthogonal systems in L_2(R) which can be used to build a spectral method for solving such problems. These systems all have banded skew-Hermitian differentiation matrices, which implies that they respect the Born interpretation of quantum mechanics and that the linear algebra of the spectral method is simplified. Our contention is that the Malmquist-Takenaka basis is superior to the more commonly used Hermite functions and stretched Fourier expansions for approximating wave packets. We prove that in the high-frequency regime, Malmquist-Takenaka expansions converge exponentially fast to wave packets, which goes against the established theory for this basis. The main body of the talk will look at our proof, which uses the method of steepest descent in the complex plane. This is joint work with Arieh Iserles (Cambridge) and Marcus Webb (Manchester).

17.00-17.30

Antigoni Kleanthous

Accelerated Calderón Preconditioning for electromagnetic scattering by multiple absorbing dielectric objects

The PMCHWT formulation suffers from ill-conditioning upon discretisation, leading to slow convergence of iterative solvers. A popular technique used to remedy this is Calderón preconditioning (CP), where the preconditioner is taken to be the PMCHWT operator itself. This reduces the number of iterations required by iterative solvers, but at the expense of increased computational cost due to the additional matrix-vector products performed. A recent study (Kleanthous et al., 2019) showed that for single-particle dielectric scattering, standard CP is no more efficient than a simple mass-matrix preconditioner in terms of total matrix-vector products. For multi-particle problems, a block-diagonal CP was found to outperform the standard CP. Here, we further accelerate assembly and solve time using a bi-parametric implementation, similar to that employed in (Escapil-Inchauspé et al., 2019) for the perfectly conducting case. This involves using a less expensive set of quadrature orders, $H$-matrix tolerances and near-/far-field cutoff parameters for the preconditioner than for the original operator. The efficiency of the approach is demonstrated using the Bempp software package (Śmigaj et al., 2015). Applications include large-scale simulations of light scattering by ice crystals found in cirrus clouds, used for numerical climate modelling.

 

Monday 2nd December 2019, Imperial

Plenary speaker: Scott McCue (Queensland University of Technology)

12.00-12.30

Julian Marcon

Using a guiding field to generate naturally curved quadrilateral meshes

Traditional methods to generate high-order meshes have typically relied on an a posteriori approach: a linear mesh is first generated, then enhanced by adding high-order nodes and projecting them onto the geometry. However, this approach is prone to the creation of invalid elements due to self-intersection. To avoid this problem, we devise a method based on an a priori approach. We formulate a guiding field, inspired by cross fields, as a boundary value problem that we solve with a (high-order) Laplace solver. This guiding field is aligned with the boundaries and gives us the locations of irregular nodes and separatrices for a quadrilateral block decomposition of the domain. These blocks are naturally curved and valid. From these, we finally generate a finer high-order mesh that is itself valid. We demonstrate this new technique on a set of two-dimensional examples taken from, or inspired by, the cross field literature.

12.30-13.00

Georg Maierhofer

Highly oscillatory quadrature for integrals with singularities

Highly oscillatory integrals play an important role in computational wave scattering, and their efficient evaluation is consequently often crucial for the success of boundary integral methods. Intensive research over recent decades has succeeded in developing powerful methods for the efficient evaluation of oscillatory integrals without singularities. Yet, in many applications, the Green’s function results in both oscillatory and singular behaviour. In this talk, we will present an extension of the classical Filon method to the singular oscillatory case, for two classes of integrals that are motivated by applications. This construction is based on asymptotic methods for highly oscillatory integrals and combines the strengths of asymptotic methods with those of classical quadrature. We will understand the appealing properties of Filon methods in this setting using bounds on the quality of approximation and provide an overview of the stable computation of the relevant moments.
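
A minimal sketch of a Filon-type rule in the simplest setting (linear phase, no singularity; not the construction from the talk): interpolate the slowly varying factor by a polynomial whose moments against the oscillator are known in closed form, so the cost does not grow with the frequency ω.

```python
# Illustrative sketch: a basic Filon-type rule for I = ∫_0^1 f(x) e^{iωx} dx.
import numpy as np

def filon(f, omega, degree=8):
    # Chebyshev interpolation nodes mapped to [0, 1].
    nodes = 0.5 * (1 + np.cos(np.pi * np.arange(degree + 1) / degree))
    coeffs = np.polyfit(nodes, f(nodes), degree)[::-1]   # monomial coefficients c_k
    # Moments m_k = ∫_0^1 x^k e^{iωx} dx via the recurrence
    # m_0 = (e^{iω} - 1)/(iω),  m_k = (e^{iω} - k m_{k-1})/(iω).
    iw = 1j * omega
    m = [(np.exp(iw) - 1) / iw]
    for k in range(1, degree + 1):
        m.append((np.exp(iw) - k * m[-1]) / iw)
    return sum(c * mk for c, mk in zip(coeffs, m))

f = lambda x: 1.0 / (1.0 + x)          # smooth, non-oscillatory factor
for omega in (10.0, 100.0, 1000.0):
    print(omega, filon(f, omega))      # accuracy does not degrade as omega grows
```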

14.00-15.00

Scott McCue

Steady nonlinear ship wave patterns

Computing the wave pattern that forms at the stern of a steadily moving ship is a challenging problem for a number of reasons. Even if we replace the ship with a simple surface disturbance, the mathematical formulation still gives rise to a highly nonlinear free-surface problem. One common approach is to linearise the free surface under the assumption that the amplitude of the waves is small. Alternatively, as I will discuss, if we are to solve the fully nonlinear problem then we must resort to numerical computation. I will summarise the sort of numerical scheme used for this problem, which involves reformulating it as an integral equation enforced at rectangular mesh points. The resulting system of equations is solved using a Jacobian-free Newton-Krylov method together with a banded preconditioner that is constructed by analysing the linearised problem. Further, I will discuss an application in which we study how properties of a ship wake can be extracted from surface height data collected at a single point as the ship travels past. This is joint work with Ravi Pethiyagoda and Tim Moroney.
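
A toy example (not the ship-wave code) of the Jacobian-free Newton-Krylov approach using SciPy's built-in solver, applied to a small discretised nonlinear boundary-value problem:

```python
# Illustrative sketch: a Jacobian-free Newton-Krylov solve for the discretised
# 1D boundary-value problem u'' = exp(u), u(0) = u(1) = 0.
import numpy as np
from scipy.optimize import newton_krylov

n = 100
h = 1.0 / (n + 1)

def residual(u):
    # Second-difference approximation of u'' minus the nonlinear term,
    # with homogeneous Dirichlet values at both ends.
    d2u = np.empty_like(u)
    d2u[0] = u[1] - 2 * u[0]
    d2u[-1] = u[-2] - 2 * u[-1]
    d2u[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:]
    return d2u / h**2 - np.exp(u)

# The Krylov method only needs residual evaluations: the Jacobian is never
# assembled, only its action is approximated by finite differences.
u = newton_krylov(residual, np.zeros(n), method='lgmres')
print(u.min(), u.max())
```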

15.30-16.00

Rob Tovey

Continuous optimisation of Gaussian mixture models in inverse problems

In solving inverse problems it is common to include a convex optimisation problem which enforces some sparsity on the reconstruction, e.g. in atomic-scale electron tomography. One drawback, in the era of big data, is that we are still required to store and compute with each of the coefficients we expect to be zero. In this talk, we address this by parametrising the reconstruction as a sparse Gaussian mixture model and fitting it, on a continuous grid, to the raw data. We will discuss the pros (computational complexity) of such an approach and how to overcome its cons (the introduction of non-convexity).
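
A small hypothetical 1D illustration of the parametrisation idea: represent the reconstruction by a handful of Gaussian parameters and fit them to the raw data by continuous (non-convex) optimisation, instead of optimising over every pixel.

```python
# Illustrative sketch (synthetic 1D data): fitting a sparse Gaussian mixture
# directly to raw data by continuous optimisation over weights, centres, widths.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0, 1, 400)
truth = 2.0 * np.exp(-0.5 * ((x - 0.3) / 0.02) ** 2) + \
        1.0 * np.exp(-0.5 * ((x - 0.7) / 0.05) ** 2)
data = truth + 0.05 * np.random.default_rng(0).normal(size=x.size)

def model(params):
    # params = [a_1, mu_1, sigma_1, a_2, mu_2, sigma_2, ...]
    a, mu, sigma = params.reshape(-1, 3).T
    gauss = np.exp(-0.5 * ((x[None, :] - mu[:, None]) / sigma[:, None]) ** 2)
    return np.sum(a[:, None] * gauss, axis=0)

def loss(params):
    return np.sum((model(params) - data) ** 2)   # least-squares data fit

# Two Gaussians: only 6 unknowns instead of 400 pixel values.
p0 = np.array([1.0, 0.25, 0.05, 1.0, 0.75, 0.05])
fit = minimize(loss, p0, method='Nelder-Mead', options={'maxiter': 5000})
print(fit.x.reshape(-1, 3))
```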

16.00-16.30

Xiaoshu Sun

Numerical Framework for approximating the log determinant via low-rank representations with applications to Casimir energy computations

Since the Casimir effect was proposed in the late 1940s, the study of vacuum effects in quantum electrodynamics has developed rapidly and in great depth. The key step in computing the Casimir energy is to evaluate the BEM matrix elements between localised basis functions and to compute the log determinant of the resulting matrix. In this talk, I will present our new fast direct algorithms for computing the log determinant of the matrix corresponding to the discretised Maxwell electric field boundary operator via low-rank approximations. Moreover, numerical experiments with the new log-determinant approach, and the performance of this algorithm for computing the Casimir energy using the Bempp software, will also be presented.
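
A generic linear-algebra illustration (not the Bempp implementation) of why low-rank structure helps with log determinants: when the discretised operator is a low-rank perturbation of the identity, the matrix determinant lemma reduces the computation to a small k x k determinant.

```python
# Illustrative sketch: log det(I - U V^T) via the matrix determinant lemma,
# det(I_n - U V^T) = det(I_k - V^T U), with U, V of shape (n, k) and k << n.
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 15
U = 0.1 * rng.normal(size=(n, k))
V = 0.1 * rng.normal(size=(n, k))

# Direct: O(n^3) work on the full n x n matrix.
direct = np.linalg.slogdet(np.eye(n) - U @ V.T)[1]

# Low-rank: O(n k^2) work on a k x k matrix.
lowrank = np.linalg.slogdet(np.eye(k) - V.T @ U)[1]

print(direct, lowrank)   # agree to rounding error
```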

16.30-17.00

Sam Power

Optimal Monte Carlo Sampling and a Multi-Core Metropolis-Hastings Algorithm

When tasked with drawing approximate samples from a distribution of interest, perhaps the most commonly used approach is the Metropolis-Hastings algorithm, a Markov chain Monte Carlo (MCMC) technique. In this method, one proceeds by i) proposing moves to new locations, and then ii) deciding whether or not to accept these proposals. A key tradeoff with such algorithms concerns how ambitious to be with these proposal moves; one prefers to make large moves when possible, but one also wants to have a reasonable proportion of proposal moves accepted. In this talk, I will review a framework which has provided practically useful answers on how to navigate this tradeoff, the theory of 'optimal scaling' for MCMC. I will then discuss recent work on how this theory can be adapted to the scenario in which parallel computing resources are available, and describe how one can use them to improve the efficiency of standard MCMC algorithms. This leads to concrete practical recommendations, as well as providing some quantitative estimates for how much benefit one can asymptotically expect from parallelism.
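
A textbook sketch of the single-chain baseline (not the multi-core variant from the talk): random-walk Metropolis with the proposal scale chosen according to optimal-scaling theory, which for a d-dimensional product-form target suggests a step size of about 2.38/sqrt(d) and an acceptance rate near 0.234.

```python
# Illustrative sketch: random-walk Metropolis on a d-dimensional Gaussian
# target, with the proposal scale set by the optimal-scaling prescription.
import numpy as np

def rw_metropolis(log_target, x0, n_steps, scale, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = np.array(x0, float), log_target(x0)
    accepted = 0
    samples = np.empty((n_steps, len(x0)))
    for t in range(n_steps):
        prop = x + scale * rng.normal(size=x.shape)   # symmetric Gaussian proposal
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
            x, lp = prop, lp_prop
            accepted += 1
        samples[t] = x
    return samples, accepted / n_steps

d = 50
log_target = lambda x: -0.5 * np.sum(x ** 2)          # standard Gaussian target
samples, acc = rw_metropolis(log_target, np.zeros(d), 20_000,
                             scale=2.38 / np.sqrt(d))
print("acceptance rate:", acc)                        # close to 0.234
```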

 

Monday 4th November 2019, Cambridge

Plenary speaker: Nick Trefethen (University of Oxford)

 

12.00-12.30

Derek Driggs

Accelerating Variance-Reduced Stochastic Gradient Methods

Variance reduction is a crucial tool for improving the slow convergence of stochastic gradient descent. Only a few variance-reduced methods, however, have yet been shown to directly benefit from Nesterov's acceleration techniques to match the convergence rates of accelerated gradient methods. In this work, we develop a universal acceleration framework that allows all popular variance-reduced methods to achieve accelerated convergence rates. The constants appearing in these rates, including their dependence on the dimension $n$, scale with the mean-squared error and bias of the gradient estimator. In a series of numerical experiments, we demonstrate that versions of the popular gradient estimators SAGA, SVRG, SARAH, and SARGE using our framework significantly outperform non-accelerated versions.
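
For concreteness, a plain (non-accelerated) SVRG sketch on a least-squares problem, illustrating the variance-reduced gradient estimator that the acceleration framework builds on; the problem data here are synthetic.

```python
# Illustrative sketch: SVRG for min_x (1/n) sum_i (a_i^T x - b_i)^2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

grad_i = lambda x, i: 2 * A[i] * (A[i] @ x - b[i])     # per-sample gradient
full_grad = lambda x: 2 * A.T @ (A @ x - b) / n        # full-batch gradient

x = np.zeros(d)
step = 0.01
for epoch in range(30):
    x_snap = x.copy()
    g_snap = full_grad(x_snap)          # full gradient at the snapshot point
    for _ in range(n):
        i = rng.integers(n)
        # Control-variate estimator: unbiased, with variance shrinking
        # as the iterate approaches the snapshot.
        g = grad_i(x, i) - grad_i(x_snap, i) + g_snap
        x -= step * g
print("objective:", np.mean((A @ x - b) ** 2))
```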

12.30-13.00

Nacime Bouziani

External operators in Firedrake

In many practical cases, PDEs are not enough to accurately describe the physical problem of interest. For example, it may be necessary to include features not represented in the differential equations, or to include closures for unresolved scales. While the PDEs themselves are often based on fundamental physical laws, these additional terms are often more heuristic in nature. They may be represented by a neural network and learned entirely from observed data, or they might be based on looser physical arguments about the system.

Here we present extensions to Firedrake which enable the inclusion of arbitrary external operators. These operators are not limited to vector calculus expressions written in the UFL language, but can instead be anything that can be computed from the state of the system. We also present the differentiation and assembly mechanisms that enable variational problems involving these new operators to be solved. We illustrate the new capabilities with two new classes of operator:

- Pointwise solve operator: This operator enables the inclusion of pointwise nonlinear relationships. This facilitates the modelling of implicit constitutive relations, i.e. nonlinear solids or fluids where the stress is computed by solving a pointwise nonlinear equation every time the operator is evaluated (a generic sketch follows this list).

- Neural network: The pointwise neural network operator provides values by evaluating a neural network whose inputs are other state variables. This presents the opportunity to create new models of phenomena by learning from experimental or computed data. This may be applicable in a wide range of areas where the theoretical basis for models is less strong. Potential examples include turbulence closures and regularisation terms in inverse problems.
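
A generic sketch of the pointwise-solve idea (plain NumPy with a made-up constitutive law, not the Firedrake external-operator API): at each point the stress is defined implicitly and recovered by a small Newton iteration whenever the operator is evaluated.

```python
# Illustrative sketch: at every quadrature/grid point, the stress s is defined
# implicitly by g(s, e) = 0 in terms of the strain e, and is recovered by a
# vectorised Newton iteration each time the operator is evaluated.
import numpy as np

def g(s, e):
    return s + 0.1 * s ** 3 - e           # hypothetical implicit constitutive law

def dg_ds(s, e):
    return 1.0 + 0.3 * s ** 2

def pointwise_solve(e, newton_iters=20):
    s = np.zeros_like(e)                  # one unknown per point, solved in parallel
    for _ in range(newton_iters):
        s = s - g(s, e) / dg_ds(s, e)     # Newton update at every point at once
    return s

strain = np.linspace(-2.0, 2.0, 9)
stress = pointwise_solve(strain)
print(np.max(np.abs(g(stress, strain))))  # residual at machine-precision level
```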

14.00-15.00

Nick Trefethen

Polynomials and Rational Functions in Numerical Computation

Polynomials are as central a topic as any in mathematics, both pure and applied, and they are the basis of many numerical algorithms. The first half of the talk will review this area and highlight Galois' unfortunate contribution to the split between pure and applied maths as well as the unexpected observation that in the multivariate case, whereas polynomials are of crucial importance to pure mathematics, they are little used for applications. It will also demonstrate Chebfun, a software system based on the power of higher-order univariate polynomials for numerical work. The second half will turn to rational functions, which can be spectacularly more effective than polynomials for certain problems with singularities if one properly exploits (i) partial fractions and (ii) barycentric representations. We shall demonstrate fast new numerical methods based on rational functions for Laplace's equation on a polygon, complex minimax approximation, and conformal mapping. Key collaborators in this work have been Abi Gopal and Yuji Nakatsukasa.
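
A small illustration of the barycentric representation mentioned above (textbook formula, not Chebfun or AAA itself): evaluating an interpolant in barycentric form r(x) = sum_j w_j f_j/(x - x_j) / sum_j w_j/(x - x_j), shown here in the polynomial special case of Chebyshev points.

```python
# Illustrative sketch: barycentric evaluation, the representation that makes
# polynomial and rational approximations numerically robust.
import numpy as np

def bary_eval(x, nodes, values, weights):
    diff = x[:, None] - nodes[None, :]
    exact = np.isclose(diff, 0.0)          # evaluation points hitting a node
    diff[exact] = 1.0                      # avoid division by zero
    num = (weights * values / diff).sum(axis=1)
    den = (weights / diff).sum(axis=1)
    r = num / den
    hit = exact.any(axis=1)
    r[hit] = values[exact.argmax(axis=1)[hit]]   # return the node value exactly
    return r

# Chebyshev points of the second kind: weights (-1)^j, halved at the endpoints.
n = 20
nodes = np.cos(np.pi * np.arange(n + 1) / n)
values = np.exp(nodes)
weights = (-1.0) ** np.arange(n + 1)
weights[0] *= 0.5
weights[-1] *= 0.5
x = np.linspace(-1, 1, 7)
print(np.max(np.abs(bary_eval(x, nodes, values, weights) - np.exp(x))))
```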

15.00-15.30

Edward Ayers

Towards proof assistants for working mathematicians

In this talk, I will discuss my work on user interfaces and automation techniques for proving and verifying theorems on a computer.
I will also give a brief pitch on why working mathematicians should care and an overview of the current state of the field.

16.00-16.30

Fagin Hales

On the simulation of quantum systems

Quantum systems are in general very difficult to simulate due to their exponential increase in complexity as the system grows. With the introduction of quantum computers, new complexity classes have arisen to describe the computation power of quantum computers such as the quantum versions of P and NP, BQP and QMA respectively. In this talk, I shall discuss the reasons why quantum systems can be so hard to simulate along with some surprising exceptions, and I will talk about a method using graph theory to benchmark the efficiency of simulating quantum circuits that I am currently working on in my research.

16.30-17.00

Peter Baddoo

Computing periodic conformal mappings

Conformal mappings are used to model a range of phenomena in the physical sciences. Although the Riemann mapping theorem guarantees the existence of a mapping between two conformally equivalent domains, actually constructing these mappings is extremely challenging. Moreover, even when the mapping is known in principle, an efficient representation is not always available. Accordingly, we present techniques for rapidly computing the conformal mapping from a multiply connected canonical circular domain to a periodic array of polygons. The boundary correspondence function is found by solving the parameter problem for a new periodic Schwarz--Christoffel formula. We then represent the mapping using rational function approximation. To this end, we modify the adaptive Antoulas--Anderson (AAA) algorithm to obtain the relevant support points, data values, and weights. Finally, we apply the new mapping techniques to numerically solve the advection-diffusion equation in a periodic polygonal domain.

17.00-17.30

Kalle Timperi

Transitions in Dynamical Systems with Bounded Uncertainty

As an alternative to stochastic differential equations, the assumption of bounded noise can offer a flexible and transparent paradigm for modelling systems with uncertainty. In particular, it offers an avenue for carrying out stability and bifurcation analysis in random dynamical systems, using a set-valued dynamics approach. The system under study is assumed to be a discrete-time dynamical system in a low-dimensional Euclidean space, with bounded random kicks at each time step. The collective behaviour of all future trajectories is then represented by a set-valued map, whose minimal invariant sets represent the stable state-space regions of the system, including the effect of the noise/randomness.

We describe the geometric properties of minimal invariant sets in the two-dimensional case and provide a classification result for the singularity points on their boundaries. The geometry is quite well understood in this case, but becomes increasingly complicated in higher dimensions. The geometric picture obtained serves as a stepping stone for further dynamical analysis of these systems.
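
A toy simulation (hypothetical system, not from the talk) of the setting described: a contracting planar map with uniformly bounded random kicks, where accumulating iterates from many initial conditions gives a crude picture of a minimal invariant set.

```python
# Illustrative sketch: a discrete-time planar system with bounded random kicks.
import numpy as np

rng = np.random.default_rng(0)
ROT = np.array([[np.cos(0.3), -np.sin(0.3)],
                [np.sin(0.3),  np.cos(0.3)]])

def kicked_map(x, eps=0.2):
    """Contracting rotation plus a random kick of norm at most eps (bounded noise)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    r = eps * np.sqrt(rng.uniform())                       # uniform on the eps-disc
    kick = r * np.array([np.cos(theta), np.sin(theta)])
    return 0.8 * ROT @ x + kick

points = []
for _ in range(200):                       # many initial conditions
    x = rng.uniform(-1.0, 1.0, size=2)
    for _ in range(300):                   # iterate past the transient
        x = kicked_map(x)
    points.append(x)
points = np.asarray(points)
print("max radius of sampled states:", np.linalg.norm(points, axis=1).max())
```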