# PhD Projects

A small subset of suggested projects for incoming PhD students. There are many more, and projects can be tailored to the interests of the student. Drop me an email if you are interested.

## Are AI generated hallucinations avoidable? If so, how?

*Description:* *AI techniques may be the future of computational imaging, given the unprecedented developments in AI-based acquisition speedup in areas including electron and fluorescence microscopy, seismic imaging, Nuclear Magnetic Resonance (NMR), Magnetic Resonance Imaging (MRI) and X-ray CT, described by Nature as 'transformative'. However, there is a serious Achilles heel: AI-generated hallucinations: "The most serious issue when applying deep learning for discovery is that of hallucination. [...] These hallucinations are deceptive artifacts that appear highly plausible in the absence of contradictory information and can be challenging, if not impossible, to detect" -- Nature Methods (2019).*

Most of the applications mentioned above can be considered (in a slightly simplified form) as the following finite-dimensional, underdetermined recovery problem: given measurements y = Ax + e, recover x. Here A is the sampling operator (or measurement matrix) - with a non-trivial kernel - that depends on the imaging modality (variations of the Fourier transform in the MRI case, for example), y is the vector of measurements, x is the object to recover (typically a vectorised discrete image), and e represents the noise.
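The non-trivial kernel of A is exactly what makes the problem ill-posed, and is where hallucinations can enter: many different images explain the same measurements. A minimal sketch (our own toy dimensions and sampling pattern, with A a subsampled DFT matrix as a stand-in for the MRI operator):

```python
import numpy as np

# Toy underdetermined problem y = A x + e, with A a subsampled DFT matrix
# (hypothetical sizes; a stand-in for an MRI-style sampling operator).
rng = np.random.default_rng(0)
N = 64                                    # size of the vectorised image x
m = 32                                    # number of measurements, m < N
F = np.fft.fft(np.eye(N)) / np.sqrt(N)    # unitary DFT matrix
rows = rng.choice(N, size=m, replace=False)
A = F[rows, :]                            # sampling operator with a non-trivial kernel

x_true = rng.standard_normal(N)
y = A @ x_true                            # noiseless measurements (e = 0)

# Any vector in the kernel of A can be added to a solution without
# changing the measurements -- this ambiguity is where hallucinations live.
_, _, Vh = np.linalg.svd(A)
kernel_vec = Vh[-1].conj()                # a direction A cannot see
x_alt = x_true + 5.0 * kernel_vec
print(np.linalg.norm(A @ x_alt - y))      # ~0: x_true and x_alt explain y equally well
```

Both `x_true` and the very different `x_alt` are consistent with the data, so any reconstruction method must implicitly choose between them.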

*Main objective:* *Develop the mathematics of AI-generated hallucinations. Establish a mathematical methodology for characterising the reasons behind hallucinations (and fixing them when possible), and the associated methodological barriers.*

## Generalised hardness of approximation: From optimisation, inverse problems and spectra to AI and PDEs

*Description:* *Alchemists wanted to create gold, Hilbert wanted an algorithm to solve Diophantine equations, researchers want to make deep learning robust in AI, MATLAB wants (but fails) to detect when it provides wrong solutions to linear programs, etc. Why do we fail in so many of these fundamental cases? The reason is typically methodological barriers. The history of science is full of methodological barriers - reasons why we never succeed in reaching certain goals. In many cases, this is due to barriers in the foundations of mathematics. This project is about new such barriers from foundations: the phenomenon of generalised hardness of approximation (GHA). GHA grows out of our solution to the extended 9th problem from Smale's list of mathematical problems for the 21st century. This phenomenon is not a rare issue - it happens on a daily basis in computational mathematics - and causes modern software such as MATLAB to fail on basic problems, and even to certify nonsensical solutions as correct.*

GHA is close in spirit to hardness of approximation (HA) in computer science. Assuming P is not equal to NP, HA is the phenomenon that one can easily compute an epsilon-approximation to the solution of a discrete computational problem for any epsilon > epsilon_0 > 0, but for epsilon < epsilon_0 the problem suddenly becomes intractable. HA was discovered decades ago and has been transformative, being the subject of several Gödel, Nevanlinna and ACM prizes. GHA is a similar but distinct mathematical phenomenon that requires a new set of mathematical tools from computational mathematics rather than computer science (NB: GHA is independent of P vs. NP). The GHA phenomenon has so far been detected in optimisation, inverse problems, deep learning and AI, as well as computer-assisted proofs. It is essential in the Solvability Complexity Index (SCI) hierarchy and for understanding "why things don't work", as well as a key tool for understanding "why things sometimes work".
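Schematically, the phase transition can be stated as follows (the notation here is ours, not taken verbatim from the literature: \(\Xi\) denotes the solution map of the computational problem and \(\Gamma\) a candidate algorithm):

```latex
% Approximation phase transition at a breakdown threshold \epsilon_0 > 0:
\exists\,\epsilon_0 > 0 \ \text{such that}\quad
\begin{cases}
\epsilon > \epsilon_0: & \text{some algorithm } \Gamma \text{ computes } \Xi \text{ to accuracy } \epsilon\ \text{(tractably)},\\
\epsilon < \epsilon_0: & \text{no algorithm computes } \Xi \text{ to accuracy } \epsilon.
\end{cases}
```

In HA the obstruction below the threshold is computational intractability (assuming P ≠ NP); in GHA it is non-existence of any algorithm, which is why the two phenomena require different tools.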

*Main objective:* *Study the GHA phenomenon in whichever discipline one wants: optimisation, inverse problems, deep learning and AI, spectra and quantum mechanics, or PDEs (both linear and non-linear).*

## Can everything be computed? - On the Solvability Complexity Index hierarchy and Towers of Algorithms

*Description:* *One of the most famous computing problems is finding the zeros of a polynomial using only finitely many arithmetic operations and radicals of the coefficients. The negative answer to this problem for the quintic has had an enormous impact on the theory of computations: one had to resort to approximations and then limits. This was the birth of computational mathematics. This project addresses some of the fundamental barriers in the theory of computations via the concepts of towers of algorithms and the Solvability Complexity Index (SCI) hierarchy, the latter being the smallest number of limits needed to compute a desired quantity. Fascinatingly, several fundamental problems in computational mathematics are still open, e.g. computing spectra of Schrödinger operators. This rather theoretical project, relying on functional analysis and operator theory, will be about such fundamental problems in computational mathematics.*

*Main objective:* *Develop the mathematics behind the SCI hierarchy in whichever discipline one wants: optimisation, inverse problems, deep learning and AI, spectra and quantum mechanics, or PDEs (both linear and non-linear).*
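A toy illustration of computing through a limit (the operator and discretisation are our own choices, not part of the project text): the spectrum of the discrete free Laplacian on l^2(Z) is the interval [-2, 2], and the spectra of its finite sections fill that interval as the truncation size grows - a one-limit computation in the spirit of the SCI hierarchy.

```python
import numpy as np

# Finite sections of the discrete free Laplacian on l^2(Z):
# the bi-infinite tridiagonal matrix with 1 on the off-diagonals,
# whose spectrum is the interval [-2, 2].
def finite_section_spectrum(n):
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.linalg.eigvalsh(A)

# One limit: the truncated spectra converge to [-2, 2] as n -> infinity.
for n in (10, 100, 1000):
    ev = finite_section_spectrum(n)
    print(n, ev.min(), ev.max())   # endpoints approach -2 and 2
```

For this self-adjoint example the finite sections behave well; for more general operators truncation can produce spurious eigenvalues (spectral pollution) or require several nested limits, which is precisely what the SCI classification quantifies.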

## Foundations of computational quantum mechanics: Spectra and PDEs

*Description:* *Despite an almost century-long research programme in quantum mechanics - initiated by Schrödinger, Heisenberg, Dirac etc. - the foundations of computational quantum mechanics are not firmly established. For example, only recently have long-standing questions on how to compute spectra of wide classes of Schrödinger operators been solved. Moreover, it is unknown which Schrödinger PDEs can actually be computed. This project will focus on some of the remaining (and potentially difficult) open problems in computational quantum mechanics, regarding both spectra and PDEs.*

*Main objective:* *Understand the boundaries and foundations of computational quantum mechanics. Key questions include: Which Schrödinger PDEs can actually be computed? Which Schrödinger or Dirac operators allow for algorithms that compute their spectra and never make mistakes? (In precise mathematical terms: which problems are in the Sigma_1 class of the SCI hierarchy?)*
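As a concrete entry point (our own illustrative discretisation, not a method prescribed by the project): the harmonic-oscillator Schrödinger operator H = -d²/dx² + x² has exact eigenvalues 2n + 1, so a naive finite-difference truncation can be checked against the truth - the hard questions above concern operators where no such ground truth, or even no error-controlled algorithm, is available.

```python
import numpy as np

# Finite-difference approximation of H = -d^2/dx^2 + x^2 on [-L, L]
# with Dirichlet boundary conditions; exact eigenvalues are 2n + 1.
L_box, N = 8.0, 1000
x = np.linspace(-L_box, L_box, N)
h = x[1] - x[0]

# Tridiagonal discrete Laplacian plus the potential on the diagonal.
main = 2.0 / h**2 + x**2
off = -np.ones(N - 1) / h**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

ev = np.linalg.eigvalsh(H)[:4]
print(ev)   # close to the exact values [1, 3, 5, 7]
```

Here convergence can be verified a posteriori; the project concerns precisely the regimes (e.g. general potentials, Dirac operators) where guaranteeing such error control is the open problem.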

## Functional analysis and infinite-dimensional computations in medical imaging

*Description:* *In Magnetic Resonance Imaging (MRI) the mathematical reconstruction problem is to reconstruct a function g as accurately as possible given only a limited number of point samples of its continuous Fourier spectrum. This description holds, to a large degree, for many inverse problems. This project aims to combine infinite-dimensional theory and computation in order to substantially improve the reconstruction of the continuous function g - rather than the traditional discrete version - and our results so far have shown excellent potential. Given that an infinite-dimensional (i.e. continuous) model is used, both the analysis and the computations are more difficult than in the finite-dimensional (i.e. discrete) setting. The project can be steered towards the pure or the applied side to one's liking, and there are many open questions in both areas. It can be approached with new AI methods as well as with more classical tools.*

*Main objective:* *Use infinite-dimensional analysis to develop new generations of methods in medical imaging. Infinite-dimensional compressed sensing is a special case of such methodology.*
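A 1D caricature of the continuous model (our own toy example, not the project's method): sample finitely many continuous Fourier coefficients of a discontinuous g and reconstruct with the naive truncated Fourier series. The reconstruction is accurate away from the jumps but fails near them (the Gibbs phenomenon), which is one motivation for the more sophisticated infinite-dimensional methods the project targets.

```python
import numpy as np

# Toy MRI model in 1D: recover g on [0, 1] from finitely many samples of
# its continuous Fourier spectrum  hat{g}(k) = int_0^1 g(t) e^{-2 pi i k t} dt.
def fourier_sample(g, k, n_quad=4096):
    t = (np.arange(n_quad) + 0.5) / n_quad          # midpoint quadrature
    return np.mean(g(t) * np.exp(-2j * np.pi * k * t))

g = lambda t: (t > 0.25) & (t < 0.75)               # discontinuous "image"
K = 32                                              # frequencies kept: |k| <= K
ks = np.arange(-K, K + 1)
coeffs = np.array([fourier_sample(g, k) for k in ks])

# Naive reconstruction: truncated Fourier series (the "discrete" baseline).
t = np.linspace(0, 1, 512, endpoint=False)
g_rec = np.real(sum(c * np.exp(2j * np.pi * k * t) for c, k in zip(coeffs, ks)))
err = np.max(np.abs(g_rec - g(t)))
print(err)   # large error near the jumps (Gibbs phenomenon)
```

Away from the discontinuities the truncated series is already accurate; the open questions concern methods that recover the continuous g with guarantees near such features.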

## Designing checking algorithms - Can one determine when AI makes mistakes?

*Description:* *AI methods based on deep learning have a substantial weakness: instability. The phenomenon is widespread, and as pointed out by Nature (2019): "There are no fixes for the fundamental brittleness of deep neural networks". This raises the issue of trust, particularly as NNs may report nonsensical confidence in their predictions, as demonstrated in Figure 4. It also raises the question of whether one can test and detect when a NN produces a reliable result. This question is strongly connected to the trustworthiness and explainability of AI, and the literature is vast. However, despite the substantial effort, testing and checking the reliability of AI is recognised as a fundamental challenge in academia and beyond, with Forbes Magazine headlining the following: 'How Do You Test AI Systems? [...] Testing AI Systems can be difficult.' - Forbes Magazine (Jan. 2020).*

It should be pointed out that mechanisms for testing and detecting failures of algorithms are common in numerical analysis. MATLAB has several routines, such as linprog, lasso and mldivide (solving linear systems), with a built-in 'exit' flag or a 'warning' designed to verify whether a computed solution is correct, or to notify the user that the output is not trustworthy. An essential question concerning the trustworthiness of AI is therefore: When can one design a 'checker' algorithm that identifies when a neural network has failed?
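For readers less familiar with MATLAB's exit flags, SciPy's linear-programming routine offers the same kind of built-in checker (this is a Python analogue of the mechanism described above, not part of the project text): the solver reports a status code alongside the solution.

```python
from scipy.optimize import linprog

# Analogue of MATLAB's exitflag: SciPy's linprog returns a status code
# that acts as a built-in "checker" for the computed solution.
# Minimise x1 + 2*x2 subject to x1 + x2 >= 1, x >= 0 (a feasible instance).
res = linprog(c=[1.0, 2.0], A_ub=[[-1.0, -1.0]], b_ub=[-1.0],
              bounds=[(0, None), (0, None)])
print(res.status, res.success)   # 0, True: solution certified as optimal

# An infeasible instance: x >= 0 and x <= -1 cannot both hold.
bad = linprog(c=[1.0], A_ub=[[1.0]], b_ub=[-1.0], bounds=[(0, None)])
print(bad.status, bad.success)   # 2, False: the flag reports infeasibility
```

The project asks whether an analogous certified flag can exist for neural networks, where no such general-purpose checker is currently known.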

*Main objective:* *Investigate the problem: When can one design a 'checker' algorithm that identifies when a neural network has failed?*

## On non-computable problems in computer-assisted proofs - Why foundations of computations are essential for pure mathematicians

*Description:* *Computer-assisted proofs have become increasingly popular over the last decades and have turned out to be instrumental in proving many long-standing conjectures. The recent computer-assisted proof, led by T. Hales, of the more than four-century-old conjecture of Kepler (Hilbert's 18th problem) on optimal packing of 3-spheres is a striking example. Another fascinating case is the computer-assisted proof of the Dirac-Schwinger conjecture, by C. Fefferman and L. Seco, on the asymptotic behaviour of the ground-state energy of certain Schrödinger operators. What may be surprising is that the computational problems used in these proofs are non-computable according to Turing. This project will examine this paradoxical phenomenon: not only can non-computable problems be used in computer-assisted proofs, they are crucial for proving important conjectures. A key tool for understanding this phenomenon is the Solvability Complexity Index (SCI) hierarchy, which allows for a classification theory of all types of computational problems. This classification theory may be of use to pure mathematicians when determining which computational problems may be used in computer-assisted proofs. In particular, there are non-computable problems that can be used, and there are non-computable problems that are so difficult that they can never be used in computer-assisted proofs. The question is: which ones are safe to utilise?*

*Main objective:* *Provide a theoretical framework describing which computational problems can be used in computer-assisted proofs. Ideally, this could be done in collaboration with groups in pure mathematics trying to prove conjectures with numerical computations. Examples of interest include optimisation and mathematical physics.*

## Which neural networks (NNs) can be computed? - Precision and impossibility of approximation

*Description:* *A mainstay of deep learning theory is the universal approximation theorem, which says that continuous functions can be approximated arbitrarily well by NNs. This means that - theoretically, from the perspective of existence - there are very few limitations on what can be done with NNs. Indeed, in many applications there exist NNs that are both accurate and stable. Yet modern algorithms do not compute them. This yields the fundamental question: Why do modern DL algorithms lead to unstable methods, even in scenarios where one can prove that stable and accurate neural networks exist? Recently, we developed new techniques describing methodological barriers that address this question. Indeed, provable existence results may provide false optimism, as there may be no algorithm that can compute the desired network. Existence results for NNs may have little to say about practical computation.*
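To make the existence/computation distinction concrete (a trivial toy example of ours, not from the project text): for simple targets an existence proof can be fully explicit, as with a two-neuron ReLU network representing |x| exactly. The project concerns the opposite regime, where networks provably exist but no algorithm can find them.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# A two-neuron ReLU network representing f(x) = |x| exactly,
# via the identity |x| = relu(x) + relu(-x).
W1 = np.array([[1.0], [-1.0]])   # first-layer weights
w2 = np.array([1.0, 1.0])        # second-layer weights

def net(x):
    return w2 @ relu(W1 @ np.atleast_1d(x))

xs = np.linspace(-2, 2, 9)
print([float(net(x)) for x in xs])   # agrees with |x| at every point
```

Here existence comes with an explicit construction; the methodological barriers above show that in general no training or search algorithm is guaranteed to produce the networks whose existence the approximation theorems assert.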

Existence results for NNs will in general not imply that the desired NN can be computed, even when using arbitrarily small precision. Hence one can ask: which class of NNs can be computed by algorithms using fixed precision?

*Main objective:* *Build a new approximation theory and numerical analysis for NNs computable with finite-precision algorithms.*