

[PhD] Improving Facial Expression Recognition Using Explainable AI Techniques

18 April 2025


Category: PhD positions


Thesis Title:

Improving Facial Expression Recognition Using Explainable AI Techniques

Thesis Supervisor:

Domitile Lourdeaux, Heudiasyc Laboratory (HEUristics and DIAgnostics of Complex SYStems), UMR CNRS 7253, University of Technology of Compiègne (UTC)

Thesis Co-Supervisor:

Insaf Setitra, Heudiasyc Laboratory (HEUristics and DIAgnostics of Complex SYStems), UMR CNRS 7253, University of Technology of Compiègne (UTC)

Thesis Purpose:

Facial Expression Recognition (FER) has become a crucial element in fields such as security, defense, autonomous driving, and human-computer interaction. FER systems, often based on deep learning models such as convolutional neural networks (CNNs), are effective at recognizing emotions from facial expressions (Johnson et al., 2024). However, despite their high accuracy, these models typically operate as “black box” systems, offering no transparency in their decision-making. This lack of interpretability is a major obstacle, especially for mission-critical applications where understanding the model’s decisions is essential. This thesis aims to address the need for explainability in FER by integrating explainable AI (XAI) techniques. The research will explore methods such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) (Ribeiro et al., 2016; Lundberg & Lee, 2017), as well as advanced deep learning architectures, to improve the interpretability and robustness of FER systems. By integrating XAI into FER, this research aims to improve transparency, trust, and real-world applicability in high-stakes areas such as security and defense.
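To make the intended style of explanation concrete, the sketch below applies LIME’s image explainer to a single prediction. It is a minimal illustration assuming the Python `lime` and `scikit-image` packages; the classifier is a stub with a placeholder face crop and a hypothetical 7-emotion label set, since the announcement does not fix a model.

    # Minimal LIME sketch: which superpixels of a face crop support the
    # predicted emotion. The predictor below is a stand-in, not a real model.
    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

    def predict_proba(batch):
        # Stub for a trained CNN: maps a batch of RGB face crops of shape
        # (N, H, W, 3) to per-class probabilities.
        rng = np.random.default_rng(0)
        scores = rng.random((len(batch), len(EMOTIONS)))
        return scores / scores.sum(axis=1, keepdims=True)

    face = np.random.rand(96, 96, 3)  # placeholder for an aligned face crop

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        face, predict_proba, top_labels=1, num_samples=1000)

    # Superpixels that most support the top predicted emotion.
    label = explanation.top_labels[0]
    img, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=5, hide_rest=False)
    print("Top class:", EMOTIONS[label])
    overlay = mark_boundaries(img, mask)  # influential regions outlined

With a real FER model, `predict_proba` would wrap the network’s softmax output, and the overlay would show which facial regions (eyes, mouth, brow) drove the classification.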

Description (objectives, innovative aspects):

The thesis aims to:

– Develop explainable FER models: integrate XAI techniques such as SHAP, LIME, and Grad-CAM into deep learning-based FER systems so that emotion classifications come with clear, interpretable explanations (a minimal Grad-CAM sketch follows the sub-objectives below).

Several sub-objectives are targeted:

– Improving classification accuracy and generalization: implementing clustering algorithms and nonparametric approaches (e.g., Deep Nearest Centroid) to improve emotion classification performance and model generalization across domains (see the prototype-based sketch after this list).

– Using explainability for model improvement: leveraging XAI to analyze misclassifications and iteratively refine the model. When a prediction is wrong, explainability methods will be used to diagnose the cause of the error and guide retraining with improved data representations.

– Testing the FER system in multiple domains: Evaluating explainable FER models in real-world applications to ensure their robustness and accuracy in various scenarios.

– Improving model transparency: Developing interactive interfaces that allow users to explore and understand how facial features contribute to model predictions, thereby increasing their confidence and engagement in sensitive applications.

– Optimizing for real-time deployment: Ensuring the smooth operation of the FER system in real-time environments, where fast and accurate emotion detection is essential.
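As referenced in the first objective, here is a minimal Grad-CAM sketch in PyTorch, illustrating how a saliency map over a face crop can be derived from a CNN’s gradients. The ResNet-18 backbone, the 7-class emotion head, and the 224x224 input are illustrative assumptions; the thesis does not prescribe an architecture.

    # Grad-CAM sketch: gradients of the top class score w.r.t. the last
    # conv stage give per-channel weights; their weighted activation sum,
    # after ReLU and upsampling, is a facial saliency heatmap.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(num_classes=7).eval()  # untrained stub; 7 basic emotions
    target_layer = model.layer4             # last convolutional stage

    acts, grads = {}, {}
    target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

    face = torch.rand(1, 3, 224, 224)  # placeholder for a normalized face crop
    scores = model(face)
    cls = scores.argmax(dim=1)
    model.zero_grad()
    scores[0, cls].sum().backward()    # gradient of the predicted class score

    w = grads["v"].mean(dim=(2, 3), keepdim=True)            # (1, C, 1, 1)
    cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))   # (1, 1, h, w)
    cam = F.interpolate(cam, size=face.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    heatmap = cam[0, 0]  # which facial regions drove the predicted emotion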
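Likewise, a small sketch of the prototype-based idea behind nonparametric approaches such as Deep Nearest Centroid: classify by distance to per-class centroids in a learned embedding space, so the decision rule itself stays inspectable. All sizes and names below are illustrative assumptions.

    # Nearest-centroid classification over learned embeddings. Embedding
    # dimension, class count, and the random data are placeholders.
    import torch

    def nearest_centroid_logits(embeddings, centroids):
        # Negative squared Euclidean distance to each class centroid acts
        # as a logit: the closest prototype wins, and the prototypes remain
        # inspectable (unlike the weights of a dense softmax head).
        d = torch.cdist(embeddings, centroids)  # (N, num_classes)
        return -d.pow(2)

    num_classes, dim = 7, 128
    centroids = torch.randn(num_classes, dim)   # per-emotion prototypes
    z = torch.randn(4, dim)                     # embeddings of 4 face crops
    pred = nearest_centroid_logits(z, centroids).argmax(dim=1)
    print(pred)  # predicted emotion index per face

Because each prediction reduces to “closest prototype”, misclassifications can be traced back to specific centroids, which connects directly to the error-analysis sub-objective above.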

Expected results:

– Explainable FER models: The integration of XAI techniques will yield FER models that are not only accurate but also provide clear, interpretable explanations of their predictions, helping users understand the reasoning behind each classification.

– Improved accuracy and robustness: The use of clustering and nonparametric models will improve the accuracy and robustness of the FER system, making it more reliable in various real-world scenarios.

– Real-time deployment: The FER system will be optimized for real-time use.

– User confidence: By providing interactive explanations, this research aims to build user confidence. The ability to understand how the model reaches its decisions will be crucial for applications where users must rely on the system for high-stakes tasks.

Potential Defense Application(s):

In military and defense applications, maintaining operators’ focus and situational awareness is crucial, especially in high-pressure environments where exposure to numerous connected devices can lead to cognitive overload. This research can be applied to combining FER with XAI to monitor and support operators’ cognitive focus by detecting distraction levels from facial expressions.

– Improved Human-Machine Collaboration: Explainable FER models can enhance trust in AI-assisted defense systems by providing transparent feedback on the mental state of soldiers interacting with automated systems, ensuring that decisions made by AI systems take the operator’s actual condition into account.

– Training and Simulation Applications: The system could be integrated into military training programs to assess trainees’ emotional responses to various scenarios, helping to refine training protocols and improve stress resilience.

– Error analysis and model refinement: If the system misclassifies emotional states, explainability techniques will help identify the sources of error, leading to iterative improvements and better adaptation of the model to specific defense conditions.

– Real-time cognitive load monitoring: The FER system will be able to assess operators’ emotional and cognitive states in real time, providing information on levels of stress, fatigue, and distraction caused by excessive interaction with connected devices.

– Adaptive alert and intervention systems: Using explainability methods, the system will identify patterns leading to cognitive overload and alert command centers or autonomous systems to suggest adaptive interventions. For example, if a soldier shows signs of stress or distraction, the system could trigger alerts or adjust the flow of information to reduce cognitive strain.

– Critical Decision Support: In defense operations, where situational awareness is essential, the system could help commanders assess troops’ mental state and make informed decisions regarding task assignments, rotations, or intervention strategies.

By integrating explainability into FER models for defense applications, this research aims to improve decision-making, soldier performance, and mission success by maintaining cognitive clarity in high-stakes environments.

Keywords:

Facial expression recognition, explainable AI, XAI, deep learning, convolutional neural networks, SHAP, LIME, Grad-CAM, cognitive load, defense applications, human-computer interaction, autonomous systems

References:

(Johnson et al., 2024) D. S. Johnson, O. Hakobyan, J. Paletschek, and H. Drimalla, “Explainable AI for Audio and Visual Affective Computing: A Scoping Review,” IEEE Transactions on Affective Computing, 2024, doi: 10.1109/TAFFC.2024.3505269.

(Lundberg & Lee, 2017) S. M. Lundberg and S.-I. Lee, “A Unified Approach to Interpreting Model Predictions,” Advances in Neural Information Processing Systems (NeurIPS), 2017.

(Ribeiro et al., 2016) M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier,” Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.

PhD Start Date:
October 2025

PhD Work Location:
Université de Technologie de Compiègne
Heudiasyc Laboratory UMR-CNRS 7253

Special Conditions:
The candidate must be of French or European nationality.

Application Instructions:
To apply, please send a CV, a motivation letter, copies of all academic records and degrees (preferably with rankings), and optionally a letter of recommendation or a referee contact.

Contact Emails:
Domitile Lourdeaux, Insaf Setitra
firstname (dot) lastname (at) hds (dot) utc (dot) fr

Only complete applications will be considered. All documents must be in either French or English.
