Meeting

Due to the temporary absence of a dedicated administrative manager for the GdR IASIS, we are unfortunately unable to cover travel expenses (for GdR members wishing to attend this day) from the GdR budget.


Privacy and multimedia data security

Date: June 11, 2024
Location: IRISA, INRIA amphitheatre, Rennes

Scientific themes:
  • A - Methods and models in signal processing
  • D - Telecommunications: compression, protection, transmission

We remind you that, in order to guarantee access to the meeting rooms for all registered participants, registration for meetings is free but mandatory.


Register for the meeting.

Registrations

7 GdR ISIS members and 0 non-members are registered for this meeting.

Room capacity: 50 people.

Announcement

Joint day of the "Privacy Protection" and "Multimedia Data Security" working groups

As part of a joint day between the GdR IASIS and the GdR Sécurité Informatique, we propose a day shared between the two working groups, namely "Privacy Protection" and "Multimedia Data Security".

The multimedia security activities will revolve around the protection of multimedia data against identity theft and the creation of fake identities, visual secret sharing, and biometrics combined with cryptography.

Organizers:

William PUECH, LIRMM, Université de Montpellier

Iuliia TKACHENKO, LIRIS, Université Lyon 2

Contact:

william.puech@lirmm.fr, iuliia.tkachenko@liris.cnrs.fr

Program

9:30-10:30: Kai Wang, GIPSA-lab, CNRS

Title: Digital image forensics: Different approaches and some recent focuses

14:00-14:50: Slava Voloshynovskiy, Université de Genève, Switzerland

Title: Security of foundation models: implications for downstream tasks, content protection and tracking

14:50-15:30: Teddy Furon, IRISA, INRIA, Rennes

Title: Confidentiality of AI models: what is inside the black box?

15:30-16:00: COFFEE BREAK

16:00-16:30: Jean-François Bonastre

Title: TBD

16:30-17:00: Mohamed Maouche and Carole Frindel

Title: TBD

17:00-17:30: Vincent Thouvenot, THALES

Title: AI Friendly Hacker: when an AI reveals more than it should?

Abstracts of the contributions

Digital image forensics: Different approaches and some recent focuses

Kai Wang, GIPSA-lab, CNRS

Abstract:

Nowadays, the wide variety and availability of powerful image editing and generation tools have made it easy to tamper with a digital image without leaving an obvious visual clue. Fake images, which do not reflect what happens in reality, can have serious negative impacts on society. In this talk, we provide a brief introduction to the research field of image forensics, whose main objective is to detect and locate different types of image forgeries. Technically, we present two case studies on two different image forensic problems. In the first case study, we show how different approaches, either traditional or deep-learning-based, can be used to solve the same image forgery detection problem. In the second case study, we present some recent trends within the research community, with a special focus on improving forensic performance in certain challenging application scenarios.
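
As a purely illustrative companion to the deep-learning approach mentioned above (not the method presented in the talk), the sketch below trains a small convolutional classifier to label image patches as pristine or tampered; the 3x64x64 patch size and the data loader are assumptions made for the example.

# Illustrative only: a tiny patch-level forgery classifier
# (assumed 3x64x64 inputs, label 0 = pristine, 1 = tampered).
import torch
import torch.nn as nn

class PatchForgeryNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # 64x64 input -> 16x16 feature maps

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PatchForgeryNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Hypothetical training loop over a DataLoader yielding (patch, label) batches:
# for patches, labels in loader:
#     optimizer.zero_grad()
#     criterion(model(patches), labels).backward()
#     optimizer.step()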

Security of foundation models: implications for downstream tasks, content protection and tracking

Slava Voloshynovskiy, Université de Genève, Switzerland

Abstract:

The emergence of a vast amount of content is reshaping our digital landscape. This content comes from two main sources: it is either captured directly from the real world, i.e., physically produced, or created via digital algorithms, i.e., synthetically generated. Various tools and creators produce this content for diverse purposes. This content spans a wide array of media including images, videos, audio, and text, necessitating robust methods for its protection and tracking.

Central to this evolving digital ecosystem are Foundation Models (FMs) and notably Vision Foundation Models (VFMs), which represent a significant advancement in machine learning (ML) capabilities. These large, pre-trained neural networks, refined on extensive and diverse datasets, are versatile tools employed in many downstream applications, ranging from image classification and semantic segmentation to object detection, content retrieval, and tracking. Moreover, their ability to power generative ML technologies has been particularly transformative.

However, the provenance of data used to train these models, as well as the content they generate, poses significant challenges. There is a pressing need to ensure the integrity, authenticity, and security of this content to maintain trust in information, prevent misinformation, protect individuals and organizations from adversarial attacks, preserve the integrity of legal evidence, and uphold ethical standards. Notably, the EU AI Act recognizes the risks linked with recent ML models and the content they generate.

To address these challenges, the multimedia security community has developed two fundamental pillars: digital watermarking (content protection) and content fingerprinting (content tracking), also known as robust perceptual hashing. Digital watermarking and content fingerprinting, when integrated with Digital Rights Management (DRM) systems, enhance their ability to safeguard digital assets across a variety of platforms. Recently, these techniques have begun to leverage FMs, using them as the backbone of content protection and tracking systems. Despite their widespread use, the security of FMs and, by extension, of the systems based on them remains a critically underexplored area, exposing potential vulnerabilities to unknown threats.

In this talk, we focus on the particularities of modern VFMs, digital watermarking, and content fingerprinting systems that are based on these VFMs, and investigate their robustness in the face of adversarial threats.
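
As a rough, hedged illustration of the content fingerprinting (robust perceptual hashing) pillar built on a pre-trained vision backbone, and not the systems discussed in the talk, one can binarize a fixed random projection of the image embedding into a compact hash; the choice of ResNet-18 and 64-bit codes below is an assumption made for the example.

# Illustrative fingerprint: embed an image with a pre-trained ResNet-18
# (an assumed backbone) and binarize a fixed random projection of the
# 512-d embedding into a 64-bit perceptual hash.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # expose the 512-d embedding
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

torch.manual_seed(0)                      # fixed projection shared by all parties
projection = torch.randn(512, 64)

def fingerprint(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        emb = backbone(x)
    return (emb @ projection > 0).squeeze(0)   # 64 boolean hash bits

def hamming(h1, h2):
    return (h1 != h2).sum().item()

Under this scheme, near-duplicate images (e.g., recompressed or slightly cropped copies) should yield a small Hamming distance between hashes, whereas unrelated images should not.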

Confidentiality of AI models: what is inside the black box?

Teddy Furon, INRIA, Rennes

Abstract:

Consider an AI model locked inside a black box (accessed through an API - MLaaS - or through an integrated circuit - ML on chip). Is it possible to identify this model? This talk presents the reasons why an attacker or a defender would want to solve this task. It brings together our work on fingerprinting (passive identification) and watermarking (active identification) for decision models (classifiers) as well as for generative AI models (images or text).
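
As a minimal, hypothetical sketch of passive identification (fingerprinting) of a black-box classifier, and not the methods presented in the talk, one can submit a fixed probe set to the API and measure how closely its decisions match those of known candidate models; all names below are placeholders and the probe selection strategy is deliberately left abstract.

# Illustrative passive fingerprinting: compare a black-box API's decisions
# on a fixed probe set with those of known candidate models.
import numpy as np

def decisions(model_fn, probes):
    """model_fn maps one input to a predicted class label."""
    return np.array([model_fn(x) for x in probes])

def match_score(black_box_fn, candidate_fn, probes):
    """Fraction of probes on which the black box and the candidate agree."""
    return float(np.mean(decisions(black_box_fn, probes)
                         == decisions(candidate_fn, probes)))

# Probes are typically chosen near decision boundaries so that unrelated
# models rarely agree on all of them by chance:
# identified = max(candidates, key=lambda m: match_score(api_call, m, probes))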

AI Friendly Hacker: when an AI reveals more than it should?

Vincent Thouvenot, THALES

(The presentation will be given in French.)

Abstract:

The aim of AI based on machine learning is to generalize information about individuals to an entire population. And yet...

  • Can an AI leak information about its training data?
  • Since the answer to the first question is yes, what kind of information can it leak?
  • How can it be attacked to retrieve this information?

To highlight AI vulnerability issues, the Direction Générale de l'Armement (DGA, part of the French Ministry of Defence) proposed a challenge on confidentiality attacks based on two tasks:

  • Membership attack: an image classification model has been trained on part of the FGVC-Aircraft open-access dataset. The aim of this task is to find, among a set of 1,600 images, which ones were used to train the model and which ones were used for testing (a toy sketch of such an attack is given after this list).
  • Forgetting attack: the model supplied, also known as the "export" model, was fine-tuned from a so-called "sovereign" model. The sovereign model contained certain sensitive aircraft classes (families) that were removed and replaced by new classes. The aim is to determine, from a given set of classes, which ones were used to train the sovereign model, using only the weights of the export model.
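
As announced in the first task above, here is a minimal sketch of a confidence-based membership inference attack; it is not the winning solution and simply assumes that training images tend to receive more confident predictions than unseen images.

# Illustrative membership inference by confidence thresholding.
import torch
import torch.nn.functional as F

def membership_scores(model, images):
    """Higher score = more likely the image was in the training set."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)
        return probs.max(dim=1).values        # peak softmax confidence per image

# scores = membership_scores(model, image_batch)
# predicted_members = scores > scores.median()   # naive split of the candidate images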

The Friendly Hackers team of ThereSIS won both tasks. At the seminar, we will present how we did it and what lessons we learned during this fascinating challenge.