French & English versions on https://adum.fr/as/ed/voirproposition.pl?matricule_prop=64126
Keywords
Neural network, Image processing, Computer vision, Edge AI, Federated AI, Dysgraphia
Project description
The AVIAREPTE project (Video Analysis and Artificial Intelligence for Early Detection of Handwriting Disorders), led by IMT Mines Alès and the University of Montpellier, aims to develop an innovative, affordable, and easily deployable system for the early detection of handwriting disorders (dysgraphia) in children with Developmental Coordination Disorder (DCD).
Using a simple camera, the system analyzes handwriting movements, posture, and facial expressions through a hybrid Artificial Intelligence (AI) approach that combines machine learning with expert knowledge of fine motor skills. Thanks to federated and edge AI, data are processed locally on school devices, ensuring confidentiality and resource efficiency in compliance with the GDPR and the objectives of the Occitanie Green Pact.
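The federated set-up described above can be sketched as follows: a minimal illustration of federated averaging (FedAvg) in which each "school device" trains on its own private data and only model weights are shared with the server. This is an assumption-laden sketch, not the project's actual pipeline; the linear model, synthetic data, and function names are hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training round: plain gradient descent on a linear
    model, run locally so the raw data never leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step of FedAvg: average the clients' models,
    weighting each by its local sample count."""
    return np.average(client_weights, axis=0, weights=client_sizes)

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])

# Three hypothetical school devices, each with its own private dataset.
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=n)
    clients.append((X, y))

# Communication rounds: only weights travel, never the data.
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

In this sketch the server never sees any raw sample, which is the property that makes the approach attractive for GDPR-sensitive data such as children's handwriting recordings.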
The project is structured in several phases: it will first design a real-time video analysis system, then develop and train supervised AI algorithms on a cohort of children. The multimodal approach combines motor, cognitive, and emotional indicators to improve diagnostic accuracy.
Explainable AI is a key component, offering understandable insights into AI predictions for teachers and clinicians, thus facilitating adoption in educational and clinical environments. Rigorous laboratory validation and testing in real-world school and clinical settings will ensure reliability, efficiency, and usability.
Aligned with EuroMov Digital Health in Motion’s research strategy, the project addresses major issues of educational inclusion and equal opportunities, positioning the Occitanie region as a leader in responsible AI applied to neurodevelopmental disorders. Embracing Open Science, AVIAREPTE shares anonymized data, methods, and results, promoting collaboration and innovation in education and healthcare.
Profile and skills required
We are looking for a motivated PhD candidate interested in interdisciplinary research at the intersection of artificial intelligence, video analysis, and cognitive sciences. Candidates should have strong skills in machine learning, image processing, and/or modeling of motor behaviors. Familiarity with challenges in educational and clinical settings, as well as the ability to collaborate with researchers in AI, neuroscience, and education, will be valuable assets.
Application: please submit your application at https://adum.fr/as/ed/voirproposition.pl?matricule_prop=64126
Enquiry: Gérard Dray (gerard.dray@mines-ales.fr) and Binbin Xu (binbin.xu@mines-ales.fr)
References
Danna, J., Puyjarinet, F., & Jolly, C. (2023). Tools and Methods for Diagnosing Developmental Dysgraphia in the Digital Age: A State of the Art. Children, 10(12), 1925. https://doi.org/10.3390/children10121925.
Kunhoth, J., Al-Maadeed, S., Saleh, M., & Akbari, Y. (2024). Multimodal Ensemble with Conditional Feature Fusion for Dysgraphia Diagnosis in Children from Handwriting Samples. arXiv preprint arXiv:2408.13754.
Mekyska, J., Galaz, Z., Safarova, K., Zvoncak, V., Cunek, L., Urbanek, T., Havigerova, J. M., Bednarova, J., Mucha, J., Gavenciak, M., Smekal, Z., & Faundez-Zanuy, M. (2024). Assessment of Developmental Dysgraphia Utilising a Display Tablet. arXiv preprint arXiv:2410.18230.
D, V., Bhandari, D., Patil, P. P., & Kulkarni, A. A. (2024). Towards Accessible Learning: Deep Learning-Based Potential Dysgraphia Detection and OCR for Potentially Dysgraphic Handwriting. arXiv preprint arXiv:2411.13595.
Lu, Y., Quinton, J.-C., Jolly, C., & Brault, V. (2024). A statistical procedure to assist dysgraphia detection through dynamic modelling of handwriting. arXiv preprint arXiv:2408.02099.
Iyer, L. S., Chakraborty, T., Reddy, K. N., Jyothish, K., & Krishnaswami, M. (2023). AI-Assisted Models for Dyslexia and Dysgraphia: Revolutionizing Language Learning for Children. In: Artificial Intelligence and Inclusive Education: Speculative Futures and Emerging Practices. Springer.