Meeting


Implicit Neural Representations: from NeRFs to PINNs

Date: 4 November 2025
Time: 9:30 am - 5:00 pm
Venue: Institut Henri Poincaré, 11 rue Pierre et Marie Curie, 75005 Paris

Scientific areas:
  • Theory and methods


Please note that, in order to guarantee access to the meeting rooms for all registered participants, registration for the meetings is free of charge but mandatory.

Registration

45 members of the GdR IASIS and 76 non-members of the GdR are registered for this meeting.

Room capacity: 120 people. No places remaining.

Announcement

This workshop is co-organized with the RT MAIAGES.

Many scientific and engineering problems require the estimation of multidimensional fields (2D, 2D+t, 3D, etc.), that is, of continuous functions.

Discretization on a grid has traditionally been considered a crucial step, owing to the numerical nature of the data and the need for efficient manipulation.
However, this approach has several drawbacks: memory requirements proportional to the grid resolution, grids fixed at a specific resolution, and poor suitability for irregular or complex structures.

To overcome these limitations, recent concepts have emerged, such as implicit neural representations (INRs) and NeRFs (Neural Radiance Fields) in image processing, as well as physics-informed neural networks (PINNs) for spatio-temporal fields.

These two approaches share a common principle: encoding a complex signal (for example, an image) as a continuous function (for example, of 2D coordinates) through a neural network. The network f maps spatial coordinates (e.g., x = (x, y) for a 2D image) to intensity values (such as the color of a pixel), i.e., u = f(x). The network f is then optimized by minimizing a loss function that may incorporate physical principles, often expressed as partial differential equations in the case of PINNs, or that is built from sparse measurements in the case of NeRFs.
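To make this shared principle concrete, the following sketch (in PyTorch, using hypothetical toy data) fits a small coordinate network f to sparse samples and optionally adds a physics term penalizing a toy PDE residual, in the spirit of PINNs. It illustrates the general recipe only and does not reproduce any of the specific methods presented during the day.

import torch
import torch.nn as nn

# Small coordinate network f: x = (x, y) -> u, as described above.
class CoordinateMLP(nn.Module):
    def __init__(self, in_dim=2, hidden=64, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

f = CoordinateMLP()
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

# Hypothetical sparse measurements: coordinates x_i and observed intensities u_i.
x_data = torch.rand(256, 2)
u_data = torch.sin(4 * x_data[:, :1]) * torch.cos(4 * x_data[:, 1:])

for step in range(2000):
    opt.zero_grad()
    # Data-fidelity term (INR / NeRF side: fit the sparse observations).
    loss = ((f(x_data) - u_data) ** 2).mean()

    # Optional physics term (PINN side): penalize a PDE residual at random
    # collocation points, here a toy Laplacian obtained by automatic differentiation.
    x_col = torch.rand(256, 2, requires_grad=True)
    u = f(x_col)
    grad_u = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    lap = sum(
        torch.autograd.grad(grad_u[:, d].sum(), x_col, create_graph=True)[0][:, d]
        for d in range(2)
    )
    loss = loss + 1e-3 * (lap ** 2).mean()

    loss.backward()
    opt.step()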

The goal of this workshop is to present the state of the art in these distinct fields and to identify the commonalities and differences between these approaches, as well as the associated challenges.

Call for contributions: This event invites participants to present fundamental research, novel algorithms, or innovative applications that exploit PINNs, NeRFs, or implicit neural representations. Contribution proposals (20-minute talks) should be sent to valentin.debarnot@creatis.insa-lyon.fr and nelly.pustelnik@ens-lyon.fr before 10 October. Contributions from young researchers are particularly encouraged.
Requests for travel funding must reach the GdR's administrative manager before 20 October, 5 pm.

Venue and date: 4 November 2025, Institut Henri Poincaré, Paris

Invited speakers:

  • Claire Boyer, Institut de Mathématiques d’Orsay, Université Paris-Saclay
  • Laurent Jacques, UCLouvain, Belgium
  • Thomas Leimkuehler, Max Planck Institute for Informatics, Germany
  • Mathilde Mougeot, ENSIIE, ENS Paris-Saclay

Organizers:

  • Valentin Debarnot, CREATIS, INSA Lyon
  • Nelly Pustelnik, CNRS, Laboratoire de Physique, ENS Lyon

——

Programme

9h15: Welcome of the participants

9h45 : Mathilde Mougeot, ENSIIE, ENS Paris-Saclay

Physics-informed Machine learning geared towards industrial applications

10h30 : Joachim Bona-Pellissier, MaLGa Machine Learning Genoa Center

Physics-informed machine learning with kernels

10h50 : Elisa Riccietti, LIP, ENS Lyon, INRIA

Frequency-aware multigrid training in PINNs 

11h10: Break

11h25 : Claire Boyer, Institut de Mathématiques d’Orsay, Université Paris-Saclay

A statistical tour of physics-informed learning: from PINNs to kernels

12h10 : Baptiste Chatelier, INSA Rennes, CNRS, IETR-UMR 6164 / Mitsubishi Electric R&D Centre Europe

Learning the location-to-channel mapping

12h30: Lunch

13h45: Laurent Jacques, UCLouvain, Belgium

Herglotz-NET: Implicit Neural Representation of Spherical Data with Harmonic Positional Encoding

14h30: Dawa Derksen, Centre National d’Etudes Spatiales (CNES), Service Traitement Plateformes et Hybridation Aval

Neural Radiance Fields for 3D Earth Observation

14h50: Chloe Thenoz, Magellium, Ramonville-Saint-Agne

LuNeRF: Towards Automatic Very High Resolution Lunar Terrain Reconstruction from LRO Data with Neural Radiance Fields

15h10: Break

15h25: Thomas Leimkuehler, Max Planck Institute for Informatics, Germany

From Neural Fields through 3D Gaussian Splatting to Neural Splatting 

16h10: Camille Buonomo, CNRS, Laboratoire LIRIS Lyon

Volume preserving neural shape morphing

16h30: Diana Mateus, Nantes Université, Centrale Nantes, Laboratoire LS2N

Implicit reconstruction and representation of medical volumes: the PET and Ultrasound cases.

16h50: Louise Piecuch, Nantes Université, Ecole Centrale de Nantes

Neural Implicit Representations as Shape Priors for Efficient Segmentation and Anomaly Detection

17h10: Discussion

Abstracts of the contributions

Mathilde Mougeot, ENSIIE & ENS Paris-Saclay

Title: Physics-informed Machine learning geared towards industrial applications

Abstract: In recent years, considerable progress has been made in implementing decision support procedures based on machine learning methods through the use of very large databases and learning algorithms. 

In many application areas, the available databases are modest in size, raising the question of whether it is reasonable, in this context, to seek to develop powerful tools based on machine learning techniques. This presentation describes hybrid models that use knowledge from physics to implement effective machine learning models with an economy of data.

Joachim Bona-Pellissier, MaLGa Machine Learning Genoa Center

Title: Physics-informed machine learning with kernels

Abstract: In this presentation, I will address the challenge of Physics-Informed Machine Learning (PIML), where the goal is to learn an unknown function from both limited observational data and known physical laws described by a Partial Differential Equation (PDE).

First, I will briefly review the popular Physics-Informed Neural Networks (PINNs) approach, which uses a composite loss function to enforce both data fidelity and PDE consistency, highlighting its strengths and limitations.

I will then focus on a physics-informed kernel method that, unlike common neural network approaches, provides a closed-form solution. The main advantage of this framework is its strong theoretical foundation; I will show that the estimator is guaranteed to converge to the true solution as more information is provided. I will conclude by presenting experimental results that validate the theory and demonstrate the method's practical effectiveness.
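As a complement, here is a generic sketch (NumPy, toy 1D Poisson problem) of one way a physics-informed kernel estimator can admit a closed-form solution: the estimator is a kernel expansion, so data constraints and linear PDE constraints are both linear in the coefficients and can be found by solving a single linear system. The kernel, centers, and regularization weights are illustrative choices, not the speaker's specific construction.

import numpy as np

# Gaussian kernel and its second derivative in the first argument
# (needed to apply the operator L u = u'' to the kernel expansion).
def k(x, y, sig=0.2):
    return np.exp(-(x - y) ** 2 / (2 * sig ** 2))

def d2k(x, y, sig=0.2):
    return ((x - y) ** 2 / sig ** 4 - 1.0 / sig ** 2) * k(x, y, sig)

# Toy 1D problem: recover u from a few noisy samples plus the PDE u'' = g,
# with ground truth u(x) = sin(2*pi*x), hence g(x) = -(2*pi)^2 sin(2*pi*x).
rng = np.random.default_rng(0)
x_data = rng.uniform(0, 1, 10)
u_data = np.sin(2 * np.pi * x_data) + 0.01 * rng.normal(size=10)
x_col = np.linspace(0, 1, 50)                         # PDE collocation points
g_col = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * x_col)

# Kernel expansion u(x) = sum_j alpha_j k(x, c_j) with centers c_j.
centers = np.concatenate([x_data, x_col])
K_d = k(x_data[:, None], centers[None, :])            # data constraints
K_p = d2k(x_col[:, None], centers[None, :])           # PDE constraints
K_c = k(centers[:, None], centers[None, :])           # RKHS-norm regularizer

lam_pde, lam_reg = 1e-4, 1e-8
A = K_d.T @ K_d + lam_pde * K_p.T @ K_p + lam_reg * K_c
b = K_d.T @ u_data + lam_pde * K_p.T @ g_col
alpha = np.linalg.solve(A, b)                         # closed-form solution

x_test = np.linspace(0, 1, 200)
u_hat = k(x_test[:, None], centers[None, :]) @ alpha  # estimated field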

Elisa Riccietti, LIP, ENS Lyon, INRIA

Title: Frequency-aware multigrid training in PINNs 

Abstract: Multigrid methods are widely used for the solution of partial differential equations because of their computational advantages and exploitation of the complementarity between the involved sub-problems. In this talk, we propose a possible extension of the classical multigrid paradigm to the training of physics-informed neural networks (PINNs). Thanks to a re-interpretation of multi-level methods from a block-coordinate point of view, we show that block-coordinate descent, if applied to frequency-aware neural network architectures, can take advantage of the different frequency content in the solution just like classical multigrid and achieve better solutions and computational savings with respect to classical training. 
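As an illustration of the general idea only (not the authors' algorithm), the toy sketch below builds a "frequency-aware" model as a sum of branches with Fourier features at different scales and trains it by block-coordinate descent, sweeping from the coarse block to the fine one; in an actual PINN the placeholder data loss would be replaced by a PDE residual computed with automatic differentiation.

import torch
import torch.nn as nn

# Toy frequency-aware model: a sum of branches, each seeing random Fourier
# features at a different scale (low vs. high frequencies).
class Branch(nn.Module):
    def __init__(self, scale, hidden=32):
        super().__init__()
        self.register_buffer("B", scale * torch.randn(1, 16))
        self.net = nn.Sequential(nn.Linear(32, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):                  # x: (N, 1)
        z = x @ self.B
        return self.net(torch.cat([torch.sin(z), torch.cos(z)], dim=-1))

branches = nn.ModuleList([Branch(scale=1.0), Branch(scale=10.0)])
model = lambda x: sum(b(x) for b in branches)

def loss_fn(x):
    # Placeholder data loss; a PINN would use the PDE residual here instead.
    target = torch.sin(x) + 0.1 * torch.sin(20 * x)
    return ((model(x) - target) ** 2).mean()

# Block-coordinate descent: update one frequency block at a time, sweeping
# from coarse to fine, loosely in the spirit of a multigrid cycle.
optims = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in branches]
x = torch.linspace(0.0, 6.28, 512).unsqueeze(1)
for cycle in range(100):
    for opt in optims:                     # coarse block first, then fine
        for _ in range(10):                # a few inner iterations per block
            for b in branches:
                b.zero_grad()
            loss = loss_fn(x)
            loss.backward()
            opt.step()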

Claire Boyer, Institut de Mathématiques d’Orsay, Université Paris-Saclay

Title: A statistical tour of physics-informed learning: from PINNs to kernels

Abstract: We will begin by discussing the limitations inherent in the training of Physics-Informed Neural Networks (PINNs), which, despite their conceptual appeal, often face practical challenges (such as convergence issues, sensitivity to hyperparameters, and the need for large data volumes). In a second step, we will recast and characterize the problem of physics-informed learning as a kernel method. This reformulation allows us to draw upon the rich body of work in statistical learning theory, particularly kernel methods, to gain deeper theoretical insights into favorable mechanisms in physics-informed learning. Furthermore, it opens the door to the development of alternative approaches. In particular, it motivates the Physics-Informed Kernel Learning (PIKL) algorithm, which integrates PDE priors into the learning process in a more principled, theoretically grounded, and potentially more robust manner. Finally, we have also developed a GPU-compatible implementation of PIKL, enabling large-scale learning and making the method practical for real-world scientific applications.

Baptiste Chatelier, INSA Rennes, CNRS, IETR-UMR 6164 / Mitsubishi Electric R&D Centre Europe

Title: Learning the location-to-channel mapping

Abstract: In wireless communication systems, performance is closely tied to the knowledge of the propagation channel, which varies rapidly with the receiver’s location. This talk presents how theoretical developments based on a physical channel model can guide the design of a neural architecture capable of learning the location-to-channel mapping, and how such a model can be leveraged to achieve precise radio localization.

Laurent Jacques, UCLouvain, Belgium

Title: Herglotz-NET: Implicit Neural Representation of Spherical Data with Harmonic Positional Encoding

Abstract: Representing and processing data in spherical domains presents unique challenges, primarily due to the curvature of the domain, which complicates the application of classical Euclidean techniques. Implicit neural representations (INRs) have emerged as a promising alternative for high-fidelity data representation; however, to effectively handle spherical domains, these methods must be adapted to the inherent geometry of the sphere to maintain both accuracy and stability. In this context, we propose Herglotz-NET (HNET), a novel INR architecture that employs a harmonic positional encoding based on complex Herglotz mappings. This encoding yields a well-posed representation on the sphere with interpretable and robust spectral properties. Moreover, we present a unified expressivity analysis showing that any spherical-based INR satisfying a mild condition exhibits a predictable spectral expansion that scales with network depth. Our results establish HNET as a scalable and flexible framework for accurate modeling of spherical data.

Dawa Derksen, Centre National d’Etudes Spatiales (CNES), Service Traitement Plateformes et Hybridation Aval

Title: Neural Radiance Fields for 3D Earth Observation

Abstract: This presentation aims to trace the evolution of Neural Radiance Fields (NeRF) applied to satellite imagery, from their introduction in 2021 (Shadow-NeRF) to recent work that has surpassed traditional stereophotogrammetry algorithms. With the rise of Earth observation constellations, it is now common to have multiple images of the same scene, making 3D reconstruction via inverse rendering pertinent. Unlike classical photogrammetry, this method involves learning a function that, from coordinates (x, y, z), returns information such as object density and colorimetry. 3D reconstruction via inverse rendering requires complex and differentiable functions. In 2020, Mildenhall et al. introduced NeRF, a method based on neural networks, which garnered interest from the computer vision community due to the photorealism of the rendered images. Since 2021, NeRF has seen numerous applications in remote sensing, particularly on time-series images with Very High Spatial Resolution (>1m) containing 10-20 images. Their success is due to the flexibility of the NeRF model, adapted to various local phenomena (specularity, shadows, moving objects) and global phenomena (radiometric changes, seasonal variations, multi-modality). We will trace the history of successive works that have overcome the unique challenges of applying these methods to remote sensing data, highlighting SAT-NGP (Billouard 2024), which addressed the slowness of NeRF models while improving reconstruction quality. We will also present recent work by CNES aimed at extending the approach to other data sources, such as SAR (Ehret 2024), lunar imaging, or radar altimetry. The focus will be on the versatility of NeRF models, which are both synthetic and generative AI models. During inference, they generate photorealistic images with control over variables such as sun position or radiometric effects and can process raw data without requiring pan-sharpening. Finally, we will list the ingredients needed to fuel new 3D reconstruction algorithms for the future of remote sensing, hoping that multi-modality will improve reconstruction quality, computation time, and generative power.
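For background, the inverse-rendering step mentioned above can be summarized by the standard NeRF volume-rendering quadrature (Mildenhall et al., 2020): densities and colors queried along a ray are alpha-composited into a pixel color, which is then compared with the observed images during training. The sketch below (PyTorch) uses a hypothetical toy field in place of the learned network.

import torch

def render_ray(field, origin, direction, near=2.0, far=6.0, n_samples=64):
    # Standard NeRF-style quadrature: query a field (x, y, z) -> (sigma, color)
    # at samples along a ray and alpha-composite the colors front to back.
    t = torch.linspace(near, far, n_samples)                    # sample depths
    pts = origin + t[:, None] * direction                       # (n_samples, 3)
    sigma, color = field(pts)                                    # (n,), (n, 3)
    delta = torch.cat([t[1:] - t[:-1], torch.tensor([1e10])])    # interval lengths
    alpha = 1.0 - torch.exp(-sigma * delta)                      # opacity per sample
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = trans * alpha                                      # contribution of each sample
    return (weights[:, None] * color).sum(dim=0)                 # rendered pixel color

# Hypothetical toy field standing in for the learned network.
def toy_field(pts):
    sigma = torch.relu(1.0 - pts.norm(dim=-1))    # a soft ball of density at the origin
    color = torch.sigmoid(pts)                     # arbitrary per-point color
    return sigma, color

pixel = render_ray(toy_field, torch.tensor([0.0, 0.0, -4.0]), torch.tensor([0.0, 0.0, 1.0]))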

Chloe Thenoz, Magellium, Ramonville-Saint-Agne

Title: LuNeRF: Towards Automatic Very High Resolution Lunar Terrain Reconstruction from LRO Data with Neural Radiance Fields

Abstract: Despite the availability of a large number of high-resolution satellite images of the Moon from the LRO mission, automatically generating high-resolution digital elevation models remains a challenging task due to the specificities of the lunar environment. This study aims to advance the state-of-the-art in lunar terrain reconstruction, providing a foundation for future applications in mission planning, resource mapping, and establishing a sustainable human presence on the Moon. We leverage Neural Radiance Fields to perform 3D reconstruction and novel view synthesis based on real lunar images, particularly in the challenging polar regions. Following recent advancements in NeRFs for satellite imagery, we adapt the NeRF model and rendering equations to take into account the changes in lighting conditions that exist in real-world lunar images. We demonstrate the potential of NeRF to learn the 3D shape of the lunar surface and to perform novel view synthesis and relighting tasks.

Thomas Leimkuehler, Max-Planck-Institut für Informatik

Title: From Neural Fields through 3D Gaussian Splatting to Neural Splatting

Abstract: Neural Fields, or Neural Implicit Representations, have become a cornerstone of visual computing and beyond, offering compact and expressive continuous representations. Among them, Neural Radiance Fields (NeRFs) have transformed how we model and reconstruct 3D scenes. Yet their expressivity comes at a high computational cost, which still hinders real-time rendering at high quality.

To overcome this limitation, we introduced 3D Gaussian Splatting, which takes a complementary, explicit approach: scenes are represented as mixtures of millions of Gaussian primitives. This enables efficient, splatting-based rendering with high image quality – but at the expense of flexibility and memory efficiency, due to the rigid analytic form of the primitives.

In this talk, I will present a new scene representation based on Splattable Neural Primitives, which merges the strengths of both worlds: the expressivity and compactness of neural fields with the rendering efficiency of primitive splatting. This approach achieves a highly favorable trade-off among quality, performance, and memory, and demonstrates that the two paradigms – often thought to be irreconcilable – can in fact be unified.
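To illustrate the explicit representation contrasted with neural fields in this talk, the toy 2D sketch below (NumPy) renders a handful of Gaussian primitives by compositing their footprints front to back; it only conveys the flavor of the representation and omits everything that makes real 3D Gaussian Splatting fast and accurate (projection of 3D covariances, tiling, differentiable rasterization).

import numpy as np

# Toy 2D "splatting": the scene is a set of Gaussian primitives (mean, scale,
# color, opacity, depth); pixels are shaded by compositing them front to back.
H, W, N = 64, 64, 5
rng = np.random.default_rng(0)
means = rng.uniform(10, 54, size=(N, 2))
scales = rng.uniform(2, 6, size=N)               # isotropic std-dev per primitive
colors = rng.uniform(0, 1, size=(N, 3))
opacity = rng.uniform(0.4, 0.9, size=N)
depth = rng.uniform(0, 1, size=N)

ys, xs = np.mgrid[0:H, 0:W]
image = np.zeros((H, W, 3))
transmittance = np.ones((H, W))

for i in np.argsort(depth):                      # nearest primitive first
    d2 = (xs - means[i, 0]) ** 2 + (ys - means[i, 1]) ** 2
    alpha = opacity[i] * np.exp(-0.5 * d2 / scales[i] ** 2)   # Gaussian footprint
    image += (transmittance * alpha)[..., None] * colors[i]
    transmittance *= (1.0 - alpha)               # standard front-to-back compositing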

Camille Buonomo, CNRS, Laboratoire LIRIS Lyon

Title: Volume preserving neural shape morphing

Abstract: Shape interpolation is a long-standing challenge in geometry processing. As it is ill-posed, shape interpolation methods always work under some hypotheses. Among such constraints, volume preservation is one of the traditional animation principles. In this paper we propose a method to interpolate between shapes favoring volume and topology preservation. To do so, we rely on a level set representation of the shape and its advection by a velocity field through the level set equation (both parameterized as neural networks). While divergence-free velocity fields ensure volume and topology preservation, they are incompatible with the Eikonal constraint of signed distance functions. This leads us to introduce the notion of an adaptive divergence velocity field, a construction compatible with the Eikonal equation, with theoretical guarantees on the preservation of the shape's volume.
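For reference, the standard relations underlying this construction can be written as follows (generic form; the adaptive divergence velocity field introduced in the talk is designed to reconcile the last two constraints):

\begin{align}
  \partial_t \phi(x,t) + v(x,t)\cdot\nabla\phi(x,t) &= 0 && \text{(level set advection of the shape)}\\
  \nabla\cdot v &= 0 && \text{(divergence-free velocity: volume preservation)}\\
  \|\nabla\phi(\cdot,t)\| &= 1 && \text{(Eikonal constraint of a signed distance function)}
\end{align}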

Diana Mateus, Nantes Université, Centrale Nantes, Laboratoire LS2N

Title: Implicit reconstruction and representation of medical volumes: the PET and Ultrasound cases.

Abstract: While INRs have been studied in the context of representing and reconstructing Computer Tomography (CT) and Magnetic Resonance (MR) images, they have been less explored for Positron Emission Tomography (PET) and ultrasound (US). To address this gap, we have recently proposed INRs as alternatives for PET and US data volumetric representation. In the context of PET, we demonstrate the feasibility of integrating an INR in an iterative reconstruction framework and investigate the effect of different types of activation functions. In the context of US, we have recently leveraged an INR to optimize for both volume consistency and probe tracking error reduction, aiming to improve freehand 3D volume reconstructions. We also investigate INRs to represent raw signed US data from plane-wave and matrix acquisitions, useful for tasks such as novel view synthesis, compounding, and compression. 

Louise Piecuch, Nantes Université, Ecole Centrale de Nantes

Title: Neural Implicit Representations as Shape Priors for Efficient Segmentation and Anomaly Detection

Abstract: Neural implicit representations (INRs) have recently emerged as powerful coordinate-based models for continuously representing 3D shapes through signed distance functions (SDFs) [1]. Their continuous and compact nature makes them particularly appealing for medical imaging, where data are often limited and structural variability is high. This property makes INRs well-suited to capture complex shape variations, serving both as strong priors to guide 3D segmentation and reduce annotation time, and as flexible models capable of representing inter-population differences or generalizing across imaging modalities. In our first contribution [3], building upon the conditional auto-decoder architecture introduced by Amiranashvili et al. [2], we proposed an INR-based method to optimize slice selection for 3D segmentation from sparse 2D annotations. This strategy significantly reduces the manual effort required from clinical experts while maintaining segmentation accuracy. The method demonstrated strong generalization on two medical applications: segmentation of organs-at-risk in brain cancer radiotherapy and adaptation to new datasets for muscle segmentation in sarcopenic patients.

Our second contribution [4] extends the use of INRs toward unsupervised anomaly detection. By modeling normal muscle shapes, the proposed method identifies sarcopenic muscles based on reconstruction errors. Using the same conditional INR backbone, trained on healthy subjects, we further show that the learned latent space enables a clear separation between muscle populations via Linear Discriminant Analysis (LDA). Together, these two studies [3, 4] demonstrate the versatility and robustness of neural implicit representations for medical imaging, serving both as powerful priors for guided segmentation and as shape-aware models for unsupervised pathology detection. These results emphasize the potential of INR-based methods to bridge the gap between efficient data annotation and clinically meaningful interpretation.

[1] A. Gropp, L. Yariv, N. Haim, M. Atzmon, and Y. Lipman, "Implicit geometric regularization for learning shapes," in Int. Conf. on Machine Learning (ICML), 2020.

[2] T. Amiranashvili, D. Lüdke, H. Bran Li, S. Zachow, and B. H. Menze, "Learning continuous shape priors from sparse data with neural implicit functions," Medical Image Analysis, vol. 94, p. 103099, 2024.

[3] M. Monvoisin, L. Piecuch, B. Texier, C. Hémon, A. Barateau, J. Huet, A. Nordez, A.-S. Boureau, J.-C. Nunes, and D. Mateus, "Implicit Shape-Prior for Few-Shot Assisted 3D Segmentation," ShapeMI workshop, MICCAI 2025.



