Announcement


M2 / final-year (PFE) internship: Explainable Artificial Intelligence (XAI) for Medical Image Segmentation

6 December 2023


Category: Intern


Deep learning has demonstrated remarkable performance in the medical field, with accuracy that can exceed that of medical experts. However, these models have a major problem: they are "black boxes", meaning they are opaque and difficult for humans to understand, which makes them hard to trust. This lack of interpretability is a barrier to their use in clinical practice. To overcome this problem, several interpretability methods have been proposed within the field of Explainable Artificial Intelligence (XAI). The first objective of this proposal is to explore and survey existing XAI methods, starting with Gradient-Weighted Class Activation Mapping (Grad-CAM), which provides explainability by visualising, as a heat map, the area the model focuses on when making predictions, intuitively showing the evidence the model uses to make decisions. These methods will then be integrated into medical image segmentation applied to various medical datasets available at http://medicaldecathlon.com/. The second objective is to integrate uncertainty-based measures into the neural network to estimate and produce an uncertainty map; this part of the work is already in progress with the current intern. The final objective is to use both the XAI heat map and the uncertainty map to better interpret, as well as improve, the performance of the network.
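To make the Grad-CAM idea concrete, here is a minimal NumPy sketch of its combination step only: the channel weights are the global-average-pooled gradients, and the heat map is the ReLU of the weighted sum of the feature maps. It assumes the activations and gradients of a chosen convolutional layer have already been extracted (e.g. with PyTorch hooks); the function name and the synthetic inputs are illustrative, not part of any existing codebase.

```python
import numpy as np

def grad_cam_map(activations, gradients):
    """Combine one image's feature maps (K, H, W) and their gradients
    (K, H, W) into a Grad-CAM heat map (H, W)."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP of gradients, shape (K,)
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k, shape (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1] for display
    return cam

# Synthetic example: 4 feature maps of size 8x8
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.random((4, 8, 8))
heat = grad_cam_map(acts, grads)
```

In practice the resulting map is upsampled to the input resolution and overlaid on the image; for segmentation models the gradients are typically taken with respect to a class score aggregated over the predicted region.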

 

Workflow of the internship:

1. Review papers using Explainable Artificial Intelligence (XAI) in deep-learning-based medical image analysis: Selvaraju et al. (2016); Teng et al. (2022); Van der Velden et al. (2022); Dao & Ly (2023).
2. Examine the Grad-CAM-based method, one of the XAI methods that can be integrated into segmentation models.
3. Reuse U-Net, already implemented for medical segmentation purposes, on a dataset from http://medicaldecathlon.com/.
4. Integrate the Grad-CAM-based method with U-Net to visualise, as a heat map, the area the model focuses on when making predictions.
5. Integrate an uncertainty measure to produce the uncertainty map (this work builds on a previous internship; some material already exists).
6. Use the uncertainty map and heat map to improve the model by providing a feedback loop.
7. Perform some experimental analysis.
8. Validate the approach on several datasets (if time permits).
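For step 5, one common way to obtain an uncertainty map is the per-pixel predictive entropy over several stochastic forward passes (e.g. Monte Carlo dropout); the actual measure used is up to the ongoing work. The sketch below uses NumPy with synthetic probabilities in place of real network outputs; all names are illustrative.

```python
import numpy as np

def uncertainty_map(prob_maps):
    """Per-pixel predictive entropy over T stochastic forward passes.
    prob_maps: (T, C, H, W) class probabilities; returns (H, W),
    where higher entropy means higher uncertainty."""
    mean_p = prob_maps.mean(axis=0)                           # average over passes: (C, H, W)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=0)     # entropy per pixel: (H, W)

# Synthetic example: T=10 passes, 3 classes, 8x8 image
rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 3, 8, 8))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax over classes
u = uncertainty_map(probs)
```

Such a map could then feed step 6, e.g. by down-weighting confidently wrong regions in the loss or by flagging pixels where the Grad-CAM heat map and the uncertainty map disagree.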
 
 
 
Computing resources:
The student will use GRICAD (https://gricad-doc.univ-grenoble-alpes.fr/), a cluster with many nodes, each of which can have multiple GPUs. All the commands to connect, run a job, and create an isolated environment to train the DL model will be provided and can be set up with the supervisor if needed.
 
 
Student profile:
Final-year engineering student or Master 2 level in biomedical engineering, image processing, machine learning, or computer vision. The student must be dynamic, motivated, and autonomous.
 
Expected theoretical knowledge in:
1. Image processing techniques
2. Machine learning models
3. Deep learning
4. Information Theory (preferred but not mandatory)
 
Expected computing skills:
1. Coding experience in Python
2. Familiarity with a deep learning framework (PyTorch)
3. Linux skills
 
Salary: ~600 euros/month
Duration of the internship: 5 to 6 months, starting in February 2024
 
 
References
 
Loan Dao and Ngoc Quoc Ly. A comprehensive study on medical image segmentation using deep neural networks. International Journal of Advanced Computer Science and Applications, 14(3), 2023.
 
Ramprasaath R Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Batra. Grad-CAM: Why did you say that? arXiv preprint arXiv:1611.07450, 2016.
 
Qiaoying Teng, Zhe Liu, Yuqing Song, Kai Han, and Yang Lu. A survey on the interpretability of deep learning in medical diagnosis. Multimedia Systems, 28(6):2335–2355, 2022.
 
Bas HM Van der Velden, Hugo J Kuijf, Kenneth GA Gilhuijs, and Max A Viergever. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis, 79:102470, 2022.
 
 
Location: Gipsa-lab, Grenoble
 
Contact:
Dawood AL CHANTI (MCF, Grenoble-INP, GIPSA-Lab):
dawood.al-chanti@grenoble-inp.fr
 
 
Dates are flexible and can be adjusted.