Meeting


Frugality and compression of deep learning models

Date: 16 April 2026
Time: 09h30 – 17h30
Venue: Salle du conseil, Espace Turing, LIPADE, Université Paris Cité (7th floor), 45 rue des Saints-Pères, 75006 Paris

Scientific themes:
  • Algorithm-architecture matching, embedded processing
  • Audio, Vision and Perception

Organizers:
  • Antoine Gourru (LaHC)
  • Ayoub Karine (LIPADE)
  • Aladine Chetouani (L2TI)
  • Virginie Fresse (LaHC)

Please note that, in order to guarantee room access for all registered participants, registration for meetings is free but mandatory.

Registration

14 members of the GdR IASIS and 33 non-members of the GdR are registered for this meeting.

Room capacity: 60 people. 13 places remaining.

Announcement

Description
Deep neural networks, despite their impressive abilities and increasing use in both the private and public sectors, suffer from high resource consumption. This workshop is dedicated to frugality and, more broadly, compression of deep learning models. It brings together researchers, engineers, and practitioners to discuss advances that make neural networks lighter, particularly through pruning, distillation, and quantization approaches. As models grow in size and complexity, compression is becoming a major challenge for their efficient deployment, whether on large-scale servers or embedded devices. This event provides a unique platform for exchanging ideas on emerging methods, current challenges, and the perspectives that will shape the future of efficient deep learning models. The topics of this meeting include, but are not limited to:
– Quantization
– Pruning
– Knowledge Distillation
– Structured Compression and Matrix Factorization
– Optimization for Embedded Inference
– Industrial Applications and Feedback on Compression Methods in Real-World Use Cases

Invited speakers
– Smail NIAR, LAMIH, Université Polytechnique Hauts-de-France
– Diane Larlus, NAVER LABS Europe

Call for contributions
Would you like to present your research or technology-transfer work on the frugality and compression of deep learning models? Send a title and an abstract by email to the organizers before March 10, 2026.

Organizers:
Aladine Chetouani (L2TI, Université Sorbonne Paris Nord) <aladine.chetouani@univ-paris13.fr>
Virginie Fresse (LaHC, Université Jean Monnet) <virginie.fresse@univ-st-etienne.fr>
Antoine Gourru (LaHC, Université Jean Monnet) <antoine.gourru@univ-st-etienne.fr>
Ayoub Karine (LIPADE, Université Paris Cité) <ayoub.karine@u-paris.fr>

Program (tentative)

9h30 – 9h45: Welcome

9h45 – 10h00: Introduction

10h – 11h: Keynote 1: Smail NIAR (LAMIH, Université Polytechnique Hauts-de-France)

11h – 12h: Presentations

12h – 13h30: Lunch break

13h30 – 14h30: Keynote 2: Diane Larlus (NAVER LABS Europe)

14h30 – 15h30: Presentations

15h30 – 16h: Break

16h – 17h: Presentations

17h – 17h15: Closing

Abstracts of the contributions




