Trends in mathematics of information -- Deep learning, artificial intelligence and compressed sensing

Lecture series given at the University of Oslo, May 14-16, 2018, together with Vegard Antun.


In the last decade, two major breakthroughs in the mathematics of information and data science stand out: the introduction of compressed sensing in the mid 2000s and the documented success of deep learning from 2012 onwards. In this series of talks we will give an overview of both techniques, provide the mathematical background, and present plenty of practical examples.

Deep learning is now the state-of-the-art method for classification and recognition. Its success is unmatched by a considerable margin, and its performance is now referred to as super-human. This opens up endless applications where automated recognition and classification are important, such as driverless cars, surveillance, and image and speech recognition. Given that deep learning outperforms humans on many tasks, one faces the question: have we reached artificial intelligence? I will consider this question in view of Smale's 18th problem and Turing's 1950 paper, in which he introduces the imitation game. When considering this question, a rather fascinating issue is revealed: deep learning turns out to be completely unstable, in the sense that a tiny perturbation of the input can completely change the output. This phenomenon has both philosophical and practical consequences.
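As a minimal, self-contained sketch of this instability, consider a toy linear classifier (standing in for a trained network; all weights and data below are illustrative, not from any model discussed in the talks). Because the gradient of the score with respect to the input is known, one can construct a small worst-case perturbation that flips the predicted class:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "classifier": sign of a linear score w @ x (illustrative only).
w = rng.normal(size=100)
x = rng.normal(size=100)
x = x * np.sign(w @ x)             # arrange that w @ x > 0, i.e. class +1

def predict(v):
    return 1 if w @ v > 0 else -1

# The score's gradient w.r.t. the input is w, so stepping against
# sign(w) decreases the score as fast as possible per unit of
# max-norm perturbation (the "fast gradient sign" direction).
score = w @ x                       # positive by construction
eps = 1.01 * score / np.abs(w).sum()  # just enough to flip the sign
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # prints: 1 -1  (the class flips)
print(np.linalg.norm(x_adv - x) / np.linalg.norm(x))  # small relative change
```

The same mechanism, applied to deep networks via backpropagation, produces the imperceptible adversarial perturbations behind the instability phenomenon described above.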

Compressed sensing and sparse regularisation have in many ways changed the way one approaches medical imaging and inverse problems. Interestingly, deep learning can also be used in these settings. The use of deep learning in inverse problems is a rather new idea, and the community has only just begun investigating the many possibilities. We will discuss several of the deep learning approaches and compare the results with compressed sensing. In doing so, one discovers an intriguing phenomenon: deep learning may produce completely unstable reconstruction methods for inverse problems.
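To make the compressed-sensing side concrete, here is a small sketch of sparse recovery: a k-sparse vector is recovered from m < n Gaussian measurements by iterative soft-thresholding (ISTA) applied to the l1-regularised least-squares problem. The problem sizes, the regularisation weight, and the iteration count are illustrative choices, not values from the lectures:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5                # signal length, measurements, sparsity

# Ground-truth k-sparse signal and Gaussian sensing matrix.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                      # m underdetermined measurements

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink v towards 0 by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.01                          # l1 regularisation weight
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L the gradient's Lipschitz constant

# ISTA: gradient step on ||A x - y||^2 / 2, then soft-thresholding.
x = np.zeros(n)
for _ in range(1000):
    x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative recovery error
```

Despite having far fewer measurements than unknowns, the l1 penalty drives the iterates to the sparse solution; this is the recovery guarantee, with its accompanying stability estimates, that the deep-learning reconstructions will be compared against.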


May 14-15: Slides

May 16: Slides