Learning and Imaging: A Trip through Modern Data Science

20.12.2017, 14:00 – Building 9, Room 2.22
Institute Colloquium

Carola Schönlieb (University of Cambridge), Gitta Kutyniok (TU Berlin)

The colloquium begins at 14:00 with the talk

Model-based learning in imaging

by Carola-Bibiane Schönlieb (DAMTP Cambridge)

Abstract:

One of the most successful approaches to solving inverse problems in imaging is to cast the problem as a variational model. The key to the success of the variational approach is to define the variational energy such that its minimiser reflects the structural properties of the imaging problem in terms of regularisation and data consistency.
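As a concrete illustration of this setup (a standard example, not necessarily the specific model of the talk), total variation regularised reconstruction determines the image as

\[
u^\ast \in \operatorname*{arg\,min}_{u} \; \frac{1}{2}\,\| A u - f \|_2^2 \;+\; \alpha \, \mathrm{TV}(u),
\]

where $A$ is the forward operator of the imaging problem, $f$ the measured data, the first term enforces data consistency, the total variation term encodes the structural prior that images are piecewise smooth with sharp edges, and $\alpha > 0$ balances the two.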
Variational models constitute mathematically rigorous inversion models with stability and approximation guarantees, as well as control over qualitative and physical properties of the solution. On the negative side, these methods are rigid in the sense that they can be adapted to data only to a certain extent.
Hence, researchers have started to apply machine learning techniques to "learn" more expressive variational models. In this talk we discuss two approaches: bilevel optimisation (which we have investigated over the last couple of years and which aims to find an optimal model by learning from a set of desirable training examples) and quotient minimisation (which we proposed only recently as a way to incorporate negative examples in regularisation learning). Time permitting, we will review the analysis of these approaches and their numerical treatment, and show applications to learning sparse transforms, regularisation learning, and the learning of noise models and of sampling patterns in MRI.
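Schematically, and with illustrative notation rather than the precise formulations of the referenced work: bilevel optimisation learns, say, a regularisation parameter $\alpha$ from training pairs of data $f_k$ and desirable reconstructions $u_k^\dagger$ via

\[
\min_{\alpha \ge 0} \; \sum_k \big\| u_\alpha(f_k) - u_k^\dagger \big\|_2^2
\qquad \text{subject to} \qquad
u_\alpha(f_k) \in \operatorname*{arg\,min}_{u} \; \frac{1}{2}\,\| u - f_k \|_2^2 + \alpha \, \mathrm{TV}(u),
\]

so the lower-level problem is the variational model itself, while the upper level measures how well its minimisers match the desirable training examples. Quotient minimisation additionally exploits negative examples: roughly, a parametrised regulariser (for instance a sparsifying transform $W$) is chosen to be small on positive examples $u^{+}$ and large on negative examples $u^{-}$, e.g.

\[
\min_{W} \; \frac{\| W u^{+} \|_1}{\| W u^{-} \|_1}.
\]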
This talk will potentially include joint work with S. Arridge, M. Benning, L. Calatroni, C. Chung, J. C. De Los Reyes, M. Ehrhardt, G. Gilboa, J. Grah, A. Hauptmann, S. Lunz, G. Maierhofer, O. Öktem, F. Sherry, and T. Valkonen.

At 15:00 there will be a coffee break followed by the second talk


Applied Harmonic Analysis meets Deep Learning

by Gitta Kutyniok (TU Berlin)

Abstract:

Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven, and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which for instance encodes a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a neural network with sparse connectivity. Using methods from approximation theory and applied harmonic analysis, we will derive a fundamental lower bound on the sparsity of a neural network. By explicitly constructing neural networks based on certain representation systems, so-called shearlets, we will then demonstrate that this lower bound can in fact be attained. Finally, we present numerical experiments which, surprisingly, show that the standard training algorithm already generates deep neural networks obeying these optimal approximation rates.
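The flavour of such a result, stated only schematically here with constants, log factors, and technical conditions (such as bounds on the network weights) suppressed: for a function class $\mathcal{C} \subset L^2(\mathbb{R}^d)$, for example cartoon-like images, there is an optimal exponent $\gamma^\ast(\mathcal{C}) > 0$ such that

\[
\sup_{f \in \mathcal{C}} \; \inf_{\Phi \in \mathcal{NN}_M} \; \| f - \Phi \|_{L^2} \;\gtrsim\; M^{-\gamma^\ast(\mathcal{C})},
\]

where $\mathcal{NN}_M$ denotes the neural networks with at most $M$ nonzero weights, while explicit constructions whose weights encode a shearlet system attain the matching upper bound $\lesssim M^{-\gamma^\ast(\mathcal{C})}$.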
