Announcement


TWO Postdoctoral Researcher positions for Developing Emotionally Intelligent Agents through Large Language Models for Personalized User Interaction

27 December 2023


Category: Postdoctoral researcher



 

Context and Objectives: Recent advances in natural language processing and machine learning have produced powerful large language models (LLMs), such as GPT-3. These models exhibit exceptional text-generation capabilities and have shown potential across diverse applications. However, the integration of emotional intelligence into these models remains largely unexplored, limiting their ability to engage users on a deeper, more human level. The objective of this postdoctoral research project is to design and implement a sophisticated LLM-based agent capable of recognizing and responding to users' emotional states during interactions. The primary goal is to create emotionally intelligent agents that provide a more personalized and adaptive user experience.
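As a rough illustration of the adaptive-interaction loop described above, here is a minimal sketch. All names are hypothetical: the keyword classifier stands in for a real (likely multimodal) emotion-recognition model, and the prompt construction stands in for conditioning an actual LLM call on the detected state.

```python
# Hypothetical sketch of an emotion-conditioned agent loop.
# The keyword lookup below is a toy stand-in for a learned emotion
# recognizer; build_prompt stands in for conditioning a real LLM call.

FRUSTRATED = {"angry", "annoyed", "frustrated", "useless", "broken"}
SAD = {"sad", "unhappy", "depressed", "lonely"}

def detect_emotion(utterance: str) -> str:
    """Toy stand-in for a learned emotion-recognition model."""
    words = set(utterance.lower().split())
    if words & FRUSTRATED:
        return "frustration"
    if words & SAD:
        return "sadness"
    return "neutral"

def build_prompt(utterance: str, emotion: str) -> str:
    """Condition the (hypothetical) LLM prompt on the detected state."""
    style = {
        "frustration": "Acknowledge the user's frustration and offer concrete help.",
        "sadness": "Respond with empathy and a supportive tone.",
        "neutral": "Respond concisely and informatively.",
    }[emotion]
    return f"[style: {style}]\nUser: {utterance}\nAssistant:"

utterance = "This app is broken and I am annoyed"
print(build_prompt(utterance, detect_emotion(utterance)))
```

A production system would replace `detect_emotion` with an audio-visual or text-based affect model and feed `build_prompt`'s output to an LLM; the point is only that the agent's response policy branches on the recognized emotional state.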

 

Project Partners and Supervision: This project is a collaboration between Toyota Belgium and the Images, Signaux et Systèmes Intelligents laboratory (LiSSi) at Université Paris-Est Créteil (UPEC), France.

 

General requirements:

  • Self-motivated scientist/Ph.D. graduate seeking to pursue a scientific career; independent and passionate about computer vision and artificial intelligence projects;
  • Good team player, able to undertake independent research projects under the direction of the PI together with other team members;
  • Hold a Ph.D. in computer vision, image processing, machine vision, or another relevant field;
  • Excellent scientific/technical writing and communication skills;
  • Prior advanced experience with computer vision, affective computing, and deep learning projects.

 

Specific technical requirements:

  • Excellent experience, knowledge, and skills in programming languages, particularly Python;
  • Excellent knowledge and skills in digital image and signal processing; previous video-analysis experience is a strong plus;
  • Deep understanding of machine learning and data science, with hands-on skills and experience; advanced understanding of and experience with deep learning;
  • Skills in numerical and computational algorithms; strong grounding in mathematical theory;
  • Rigorous and logical thinking about scientific problems;
  • High-ranking publications in Q1 journals (IEEE Transactions, Pattern Recognition, Computer Vision and Image Understanding, ...) and conferences (MICCAI, IEEE CBMS, ISBI, CVPR, ICPR, ICIP) are mandatory.

 

DURATION

1 to 2 years, starting March 2024 or at the earliest possible date.

 

Location: Université Paris-Est Créteil, Laboratoire Images, Signaux et Systèmes Intelligents (LISSI), 122 rue Paul Armangot, 94400 Vitry-sur-Seine

 

APPLICATION

Please send your CV, a cover letter, a list of publications, and recommendation letters to Alice.othmani@u-pec.fr, marleen.de.weser@toyota-europe.com, and hazem.abdelkawy@toyota-europe.com.

 

N.B. Only shortlisted applicants will be notified. This postdoc position can lead to a permanent academic or industrial position.

Selected references related to the project:

-Xi, Z., Chen, W., Guo, X., He, W., Ding, Y., Hong, B., ... & Gui, T. (2023). The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864.

-Ivanović, M., Radovanović, M., Budimac, Z., Mitrović, D., Kurbalija, V., Dai, W., & Zhao, W. (2014, June). Emotional intelligence and agents: Survey and possible applications. In Proceedings of the 4th International Conference on Web Intelligence, Mining and Semantics (WIMS14) (pp. 1-7).

-Ivanović, M., Budimac, Z., Radovanović, M., Kurbalija, V., Dai, W., Bădică, C., ... & Mitrović, D. (2015). Emotional agents: State of the art and applications. Computer Science and Information Systems, 12(4), 1121-1148.

-Chang, Y., Wang, X., Wang, J., Wu, Y., Zhu, K., Chen, H., ... & Xie, X. (2023). A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109.

-Schoneveld, L., Othmani, A., & Abdelkawy, H. (2021). Leveraging recent advances in deep learning for audio-visual emotion recognition. Pattern Recognition Letters, 146, 1-7.

-Gupta, S., Kumar, P., & Tekchandani, R. K. (2023). Facial emotion recognition based real-time learner engagement detection system in online learning context using deep learning models. Multimedia Tools and Applications, 82(8), 11365-11394.

-Song, I., Kim, H. J., & Jeon, P. B. (2014, January). Deep learning for real-time robust facial expression recognition on a smartphone. In 2014 IEEE International Conference on Consumer Electronics (ICCE) (pp. 564-567). IEEE.

-Zhang, S., Yang, Y., Chen, C., Zhang, X., Leng, Q., & Zhao, X. (2023). Deep learning-based multimodal emotion recognition from audio, visual, and text modalities: A systematic review of recent advancements and future prospects. Expert Systems with Applications, 121692.