PhD defence by Chiara Mauri: "An interpretable generative model for image-based prediction"

On Wednesday 25 January 2023, Chiara Mauri will defend her PhD thesis "An interpretable generative model for image-based prediction".

Time: 13:00
Place: Bldg. 341, aud. 22 & Zoom:
https://dtudk.zoom.us/meeting/register/u5IqcemgqTMtGt1xogG9pYpKFKUF5jErAnH1

Supervisor: Professor Koen Van Leemput
Co-supervisor: Professor Mark Mühlau

Assessment committee:
Professor Aasa Feragen, DTU Compute
Professor John Ashburner, University College London
Professor Jussi Tohka, University of Eastern Finland

Chairperson:
Associate Professor Mathilde Hauge Lerche, DTU Health Tech

Summary:
The last decades have seen significant development of computational methods that automatically predict variables of interest, such as a subject's diagnosis or prognosis, from brain Magnetic Resonance Imaging (MRI) scans.

Since MRI can detect subtle effects in brain anatomy that elude clinical assessment, these methods have great potential in clinical tasks such as early diagnosis, therapy planning and monitoring, paving the way to personalized treatments. Many different image-based prediction methods have been proposed in the literature, with a special focus on discriminative deep learning techniques in recent years. While these methods can produce accurate results, especially when trained on large amounts of data, they have proven difficult to interpret. This can be problematic because, in many neuroimaging tasks, it is important not only to predict well, but also to interpret the morphological changes underlying the predictions.

In this thesis, we therefore propose a generative model for image-based prediction that yields interpretable results without sacrificing prediction accuracy. We first demonstrate that the proposed method achieves performance competitive with state-of-the-art benchmarks in age and gender prediction tasks, especially when the sample size is at most a few thousand subjects, as is typical in many neuroimaging applications. We then give insight into the interpretability properties of the proposed method: it automatically yields spatial maps that display morphological effects of the variable of interest and are straightforward to interpret.

Time:
Wed 25 Jan 2023, 13:00 - 16:00

Organizer:
DTU Sundhedsteknologi
