8 December 2023
Category: Intern
Deepfakes are becoming more prevalent and easier to generate (with diffusion models or GANs), and their malicious use may be a vector for false information. This abundance of artificial content must be accompanied by evolutions in verification techniques. We propose to evaluate and implement active methods, i.e. methods that actively modify the content by embedding a mark in it. The goal of such methods is to assess whether an image has been generated by a given generator.
Context
In this project, we focus on a specific kind of deepfake: fully artificial generated images.
This can be done using Variational Auto-Encoders (VAE) [1], Generative Adversarial Networks (GAN) [2], or diffusion models [3].
Goals and Challenges
The intern will study diffusion models and implement one on a toy example. Based on this, they will investigate the impact of watermarking the input noise on the generated output and on watermark retrieval.
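As a rough illustration of the "watermarking the input noise" idea, the sketch below embeds a key-dependent spread-spectrum pattern into the initial Gaussian noise and detects it by correlation. All function names, the strength parameter `alpha`, and the threshold are hypothetical choices for this toy; in a real diffusion pipeline the mark would have to survive sampling and be retrieved from the generated image (e.g. via latent inversion), which this sketch does not model.

```python
import numpy as np

def embed_watermark(noise, key, alpha=0.1):
    """Add a secret key-dependent pattern to the initial noise (toy sketch)."""
    pattern = np.random.default_rng(key).standard_normal(noise.shape)
    marked = noise + alpha * pattern
    # Renormalise so the marked latent stays approximately N(0, 1).
    return marked / np.sqrt(1.0 + alpha**2)

def detect_watermark(latent, key, threshold=0.05):
    """Correlate the (recovered) latent with the key's pattern."""
    pattern = np.random.default_rng(key).standard_normal(latent.shape)
    score = float(latent.ravel() @ pattern.ravel()) / pattern.size
    return score, score > threshold

rng = np.random.default_rng(0)
noise = rng.standard_normal((128, 128))          # toy "initial noise" latent
marked = embed_watermark(noise, key=42)

score_marked, found = detect_watermark(marked, key=42)   # should detect
score_clean, found_clean = detect_watermark(noise, key=42)  # should not
```

Here detection is run directly on the latent, so it only shows that the embedded correlation is statistically separable from unmarked noise; studying how this separation degrades after the diffusion sampling process is precisely the kind of question the internship targets.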
Candidate profile
The candidate should have a background in machine learning, a strong motivation for research, and coding skills in Python (PyTorch, ...). Knowledge of multimedia security is not mandatory but will be appreciated.
A PhD thesis can be pursued afterwards, with secured funding, as part of an ANR project on artificial intelligence and cyber-security.
More information: www.linkedin.com/posts/vincent-itier-a45011170_m2-internship-activity-7130803638091800576-oT6D
Feel free to contact me by email:
Vincent Itier, IMT Nord Europe: vincent.itier@imt-nord-europe.fr
Jérémie Boulanger, Univ. Lille: jeremie.boulanger@univ-lille.fr
Patrick Bas, DR, CNRS: patrick.bas@cnrs.fr