Meeting
Apprentissage et Graphes (Learning and Graphs)
Scientific areas:
- Machine learning
GdRs involved:
- IASIS
- RADIA
Organizers:
- Nicolas Keriven (IRISA)
- Pierre-Henri Paris (Paris Saclay University)
Please note that, in order to guarantee access to the meeting rooms for all registrants, registration for meetings is free but mandatory.
Registration
37 members of the GdR IASIS and 65 non-members are registered for this meeting.
Room capacity: 103 people. 1 place remaining.
Registration is closed for this day.
Announcement
Graphs are today a powerful conceptual framework used in many fields to model and solve complex problems. The thematic day "Apprentissage et Graphes" (Learning and Graphs), co-organized by the GdRs IASIS and RADIA, sets out to explore recent advances in learning on and exploiting graph structures, at the interface between signal processing, artificial intelligence, and knowledge modeling.
The day aims to bring together researchers and practitioners around the challenges of learning on graphs, whether that means building, refining, or exploiting graphs for applications such as massive data processing, prediction, decision-making, or the interpretation of complex models.
Topics:
– Supervised and unsupervised learning on graphs, graph neural networks (GNNs), and reinforcement learning.
– Learning theory on graphs, graph statistics, and graph models.
– Signal processing and modeling on graph structures.
– Knowledge graphs: ontology-based data linking, semantic annotation, knowledge graph refinement and completion, link prediction, entity typing, identity management in graphs.
– Representation and learning on knowledge graphs, including embedding techniques, temporal representation, etc.
– Graph applications in domains such as healthcare, biology, astronomy, robotics, telecommunications, and computational social science.
– Digital frugality and eco-design in graph learning, addressing the challenges of the energy and digital transition.
Objectives:
This day offers a unique opportunity to explore synergies between the various academic and industrial communities concerned with graphs and learning. Keynote talks, shorter oral presentations, and posters will highlight the latest work in these areas.
Invited speakers:
- Oana Balalu (INRIA)
- Mehwish Alam (Institut Polytechnique de Paris)
- Simon Barthelmé (CNRS, Gipsa-lab)
- Johannes Lutzeyer (Institut Polytechnique de Paris)
Call for contributions:
Please send, before Wednesday, March 5, 2025, an abstract of at most one page describing the presented work in detail along with the list of authors, possibly including a link to a paper. You may indicate whether an oral or poster presentation is preferred; we will try to accommodate everyone's wishes as far as possible.
Contacts:
Pierre-Henri Paris pierre-henri.paris@universite-paris-saclay.fr
Nicolas Keriven nicolas.keriven@cnrs.fr
Acknowledgements:
This day is supported by the GdRs IASIS and RADIA, as well as by the AFIA and the PEPR IA project "Sharp".


Programme
- 8h55 - Welcome
- 9h00 - Keynote: Simon Barthelmé
- 9h45 - Contribution: Naji Mouncef
- 10h10 - Coffee break
- 10h25 - Keynote: Mehwish Alam
- 11h10 - Contribution: Lucie Arts
- 11h35 - Contribution: Jong Ho Jhee
- 12h00 - Lunch
- 13h45 - Keynote: Oana Balalu
- 14h30 - Poster session & coffee
- 15h45 - Keynote: Johannes Lutzeyer
- 16h30 - Contribution: Saloua Naama
- 16h55 - Contribution: Maxence Morin
- 17h20 - Contribution: Lucas Chatelain
Abstracts of the contributions
Keynotes
- Simon Barthelmé: Kirchhoff forests for Graph Linear Algebra
- Many operations on graphs involve the graph Laplacian, and their computational cost scales as O(n^3), where n is the number of nodes. This includes operations like filtering signals or computing the spectrum of the graph. In large graphs, approximate algorithms are used, typically methods based on Krylov subspaces or Chebyshev polynomials. In this talk, I will introduce a line of work that formulates fast Monte Carlo estimators for various operations, including filtering and spectral estimation. These estimators are based on Kirchhoff forests, which are simple stochastic processes on graphs that have a natural connection with the graph Laplacian. They have optimal asymptotic complexity and are in some cases competitive with the state of the art. Joint work with Nicolas Tremblay, Pierre-Olivier Amblard, Fabienne Castell, Alexandre Gaudillière, Clothilde Melot, Yigit Pilavci, and Hugo Jaquard.
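A minimal sketch of the forest idea (an illustration of the general technique, not the speaker's exact estimators): for an unweighted graph, a Wilson-style loop-erased random walk that is absorbed at each node with probability q/(q + degree) samples a random spanning forest, and a known property of such forests is that copying the signal value at each node's tree root gives an unbiased Monte Carlo estimate of the Tikhonov smoother q(qI + L)^{-1} y.

```python
import numpy as np

def sample_forest_roots(adj, q, rng):
    """Sample a random spanning forest via Wilson-style loop-erased walks:
    from each node the walk is absorbed (creating a root) with probability
    q/(q + degree), otherwise it moves to a uniformly chosen neighbour."""
    n = len(adj)
    in_forest = [False] * n
    nxt = [-1] * n
    for i in range(n):
        u = i
        while not in_forest[u]:
            deg = len(adj[u])
            if rng.random() < q / (q + deg):
                in_forest[u] = True          # u is absorbed: new root
                nxt[u] = -1
            else:
                nxt[u] = adj[u][rng.integers(deg)]  # implicit loop erasure
                u = nxt[u]
        u = i                                 # retrace the loop-erased path
        while not in_forest[u]:
            in_forest[u] = True
            u = nxt[u]
    roots = np.empty(n, dtype=int)
    for i in range(n):                        # follow pointers to each root
        u = i
        while nxt[u] != -1:
            u = nxt[u]
        roots[i] = u
    return roots

def forest_smoother(adj, y, q, n_mc=3000, seed=0):
    """Monte Carlo estimate of q(qI + L)^{-1} y averaged over n_mc forests."""
    rng = np.random.default_rng(seed)
    est = np.zeros(len(adj))
    for _ in range(n_mc):
        est += y[sample_forest_roots(adj, q, rng)]
    return est / n_mc
```

Each forest costs roughly one random walk over the graph, which is where the favourable asymptotic complexity mentioned in the abstract comes from.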
- Mehwish Alam: Towards Semantically Enriched Embeddings for Knowledge Graph Completion
- Embedding-based Knowledge Graph (KG) completion has gained much attention over the past few years. Most current algorithms consider a KG as a multidirectional labeled graph and lack the ability to capture the semantics underlying the schematic information. This talk gives a progressive overview of the various algorithms for KG completion according to the level of expressivity of the semantics they utilize: algorithms that consider only factual information (including transductive and inductive link prediction), methods using shallow schematic information, approaches exploiting semantics represented in different description logic axioms, and large language models. It concludes with the advantages and limitations of existing approaches and recommendations for future work.
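As a concrete reference point for the purely factual end of this spectrum, the classic TransE model scores a triple (h, r, t) by how well the relation acts as a translation in embedding space; all schematic information (class hierarchies, axioms) is ignored. A minimal sketch with randomly initialized embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 100, 10, 32
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def transe_score(h, r, t):
    """TransE plausibility: a true triple should satisfy e_h + e_r ~ e_t,
    so the score is minus the norm of the translation error."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def margin_loss(pos, neg, gamma=1.0):
    """Margin ranking loss over a positive triple and a corrupted one;
    training pushes true triples to score above corrupted ones by gamma."""
    return max(0.0, gamma - transe_score(*pos) + transe_score(*neg))
```

Semantically enriched approaches, the subject of the talk, go beyond this by injecting schema-level constraints or textual knowledge into the scoring.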
- Oana Balalu: Hallucinations in textual generation: structured information extraction and constrained generation
- In my work, I have studied two mutually inverse problems: extracting facts in the form (subject, predicate, object) from text, and, given such facts, transforming them back into text. These problems are important in themselves, but they also touch upon the issue of hallucinations in large language models. They open the question of when two sentences describe the same facts and how we can measure textual similarity in an informative and explainable manner. In this presentation, I will talk about information extraction, constrained generation, and their link to hallucination detection and prevention.
- Johannes Lutzeyer: Understanding Virtual Nodes in Graph Neural Networks: Oversmoothing, Oversquashing and Node Heterogeneity
- In this talk, I will give an accessible introduction to Graph Neural Networks (GNNs), Graph Transformers and how virtual nodes can be incorporated into GNNs. I will then share recent work (ICLR'25), in which we attempt to understand the impact of the addition of virtual nodes to GNNs. I will introduce the concepts of 1) oversmoothing, 2) oversquashing and 3) node representation sensitivity analysis and we shall then observe GNNs with virtual nodes through these three lenses. This will allow us to compare GNNs with virtual nodes to both standard GNNs and Graph Transformers. Finally, I will briefly react to a recent position paper calling out poor benchmarks in graph learning, by introducing work we have been doing with the ANR HistoGraph consortium to apply graph learning in the context of digital pathology.
Oral Contributions
- Naji Mouncef: Knowledge Graphs and Large Language Models for Informed Medical Consent
- In this work, we aim to improve and personalize medical e-consent by deploying large language models (LLMs), knowledge graphs (KGs), and ontologies. The proposed approach provides two key functionalities: (1) legal validation of consent documents to ensure content clarity and comprehension, based on machine learning and a knowledge graph, and (2) content generation and interaction personalized to patient preferences and medical history, achieved through an ontology-driven LLM. By embedding knowledge graphs, we ensure structured representation and improved interpretability, contributing to a more adaptable and legally sound consent process.
- Lucie Arts: Consistent model selection in a collection of stochastic block models
- We introduce the penalized Krichevsky-Trofimov (KT) estimator as a convergent method for estimating the number of node clusters when observing multiple networks within both multi-layer and dynamic Stochastic Block Models. We establish the consistency of the KT estimator, showing that it converges to the correct number of clusters in both types of models when the number of nodes in the networks increases. Our estimator does not require a known upper bound on this number to be consistent. Furthermore, we show that these consistency results hold in both dense and sparse regimes, making the penalized KT estimator robust across various network configurations. We illustrate its performance on synthetic datasets.
- Jong Ho Jhee: Predicting Clinical Outcomes From Patient Care Pathways Represented With Temporal Knowledge Graphs
- Background: With the increasing availability of healthcare data, predictive modeling finds many applications in the biomedical domain, such as the evaluation of the level of risk for various conditions, which in turn can guide clinical decision-making. However, it is unclear how knowledge graph data representations and their embeddings, which are competitive in some settings, could be of interest in biomedical predictive modeling. Method: We simulated synthetic but realistic data of patients with intracranial aneurysm and experimented on the task of predicting their clinical outcome. We compared the performance of various classification approaches on tabular data versus a graph-based representation of the same data. Next, we investigated how the adopted schema for representing, first, individual data and, second, temporal data impacts predictive performance. Results: Our study illustrates that in our case, a graph representation and Graph Convolutional Network (GCN) embeddings reach the best performance for a predictive task from observational data. We emphasize the importance of the adopted schema and of the consideration of literal values in the representation of individual data. Our study also moderates the relative impact of various time encodings on GCN performance.
- Saloua Naama: On Geometrization of Graphs: ironing the graph for correct geometric interpretations with applications to RNA-Seq data analysis
- Graphs are a fundamental tool to capture complex interactions through a relatively simple logical framework of node-to-node relations. They are used in a wide range of applications and scientific domains. Yet, understanding the structure and global properties of a graph remains challenging. Geometric interpretations are widely used to represent complex problems and help develop intuitions that lead to solutions. Such interpretations are at the core of classical machine learning techniques like k-means. Attempts to define geometric interpretations generally consider vertices as "points" sitting on a low-dimensional Riemannian manifold and link weights as geodesic "distances" between these points. More recently, Graph Neural Networks (GNNs) have used embeddings of nodes and links into a space defined by the structure of a neural network. However, the choice of the embedding manifold is critical. In this talk, we argue that classical embedding techniques cannot lead to a correct geometric interpretation, as the microscopic details of the manifold (e.g., the curvature at each point) that are needed to derive geometric properties with Riemannian geometry methods are not available. We explain that, for a correct geometric interpretation, the embedding of a graph should be done over regular constant-curvature manifolds. To this end, we present an embedding approach, the discrete Ricci flow graph embedding (dRfge), based on the discrete Ricci flow, which adapts the distances between nodes in a graph so that the graph can be embedded onto a constant-curvature manifold that is homogeneous and isotropic. One of our major contributions is the proof of the convergence of the discrete Ricci flow to a constant curvature and a stable distance metric over the edges. A drawback of using the discrete Ricci flow is its high computational complexity, which has prevented its use in large-scale graph analysis. We describe new algorithmic solutions that make it feasible to calculate the Ricci flow for graphs of up to 50k nodes and beyond. The intuitions behind the discrete Ricci flow make it possible to obtain new insights into the structure of large-scale graphs. We demonstrate this through a case study on analysing single-cell RNA-sequencing time-series data.
- Maxence Morin: Methodology for Identifying Social Groups Within a Transactional Graph
- Social network analysis is pivotal for organizations aiming to leverage the vast amounts of data generated from user interactions on social media and other digital platforms. These interactions often reveal complex social structures, such as tightly-knit groups based on common interests, which are crucial for enhancing service personalization or fraud detection. Traditional methods like community detection and graph matching, while useful, often fall short of accurately identifying specific groups of users. This paper introduces a novel framework specifically designed to identify groups of users within transactional graphs by focusing on the contextual and structural nuances that define these groups.
- Lucas Chatelain: Cellular porosity in dentin: a complex spatial graph
- Graph analysis is fundamental to model brain connectivity at various scales and for different types of input signals. At the microscale, graphs are used to understand how the topology of neuronal connectivity locally determines brain function. Inspired by neuroscience, Weinkamer et al. adapted these tools to characterize the cellular organization within bone. Here, we extend this work to dentin, a similar type of mineralized tissue. This work is based on 3D confocal fluorescence microscopy acquisitions, which efficiently reveal the cellular porosity. We describe a robust pipeline to extract a graph representation of the cellular porosity and analyze this spatial network using various graph metrics.
Poster Contributions
- Philippe Ciblat: Graph-assisted Bayesian node classifiers
- Many datasets can be represented by attributed graphs on which classification methods may be of interest. The problem of node classification has attracted the attention of scholars due to its wide range of applications. The problem consists of predicting nodes' labels based on their intrinsic features, features of their neighboring nodes, and the graph structure. Graph Neural Networks (GNNs) have been widely used to tackle this task. Thanks to the graph structure and the node features, they are able to propagate information over the graph and aggregate it to improve the classification performance. Their performance is however sensitive to the graph topology, especially its degree of impurity, a measure of the proportion of connected nodes belonging to different classes. Here, we propose a new Graph-Assisted Bayesian (GAB) classifier, which is designed for the problem of node classification. By using Bayes' theorem, GAB takes into consideration the degree of impurity of the graph when classifying the nodes. We show that the proposed classifier is less sensitive to graph impurity, and less complex than GNN-based classifiers.
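A hedged sketch of the kind of Bayes rule involved (the exact GAB formulation is in the paper; the function names and compatibility matrix below are illustrative assumptions): the node's feature likelihood is combined with its neighbours' observed labels through a class-compatibility matrix that encodes the graph's impurity.

```python
import numpy as np

def gab_predict(feat_loglik, neighbor_labels, compat, prior):
    """Graph-assisted Bayes rule (illustrative sketch, not the paper's GAB).
    feat_loglik[c]  : log-likelihood of the node's features under class c
    neighbor_labels : observed labels of the node's neighbours
    compat[c, c']   : estimated P(neighbour labelled c' | node in class c),
                      i.e. the graph's impurity structure
    prior[c]        : class prior probability
    """
    logp = np.log(prior) + feat_loglik
    for y_nb in neighbor_labels:
        logp = logp + np.log(compat[:, y_nb])  # neighbour evidence
    return int(np.argmax(logp))
```

With a near-pure graph (compat close to the identity) the neighbour term dominates; as impurity grows the feature term takes over, which is one way the robustness to impurity described in the abstract can arise.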
- Lionel Gil: Generalization emerges from local optimization in a self-organised learning network
- We design and analyze a new paradigm for building supervised learning networks, driven only by local optimization rules without relying on a global error function. Traditional neural networks with a fixed topology are made up of identical nodes and derive their expressiveness from an appropriate adjustment of connection weights. In contrast, our network stores new knowledge in the nodes accurately and instantaneously, in the form of a lookup table. Only then is some of this information structured and incorporated into the network geometry. The training error is initially zero by construction and remains so throughout the network topology transformation phase. The latter involves a small number of local topological transformations, such as splitting or merging of nodes and adding binary connections between them. The choice of operations to be carried out is only driven by optimization of expressivity at the local scale. What we’re primarily looking for in a learning network is its ability to generalize, i.e. its capacity to correctly answer questions for which it has never learned the answers. We show on numerous examples of classification tasks that the networks generated by our algorithm systematically reach such a state of perfect generalization when the number of learned examples becomes sufficiently large. We report on the dynamics of the change of state and show that it is abrupt and has the distinctive characteristics of a first order phase transition, a phenomenon already observed for traditional learning networks and known as grokking. In addition to proposing a non-potential approach for the construction of learning networks, our algorithm makes it possible to rethink the grokking transition in a new light, under which acquisition of training data and topological structuring of data are completely decoupled phenomena.
- Can Pouliquen: Schur's Positive-Definite Network: Deep Learning in the SPD cone with structure
- Estimating matrices in the symmetric positive-definite (SPD) cone is of interest for many applications ranging from computer vision to graph learning. While there exist various convex optimization-based estimators, they remain limited in expressivity due to their model-based approach. The success of deep learning motivates the use of learning-based approaches to estimate SPD matrices with neural networks in a data-driven fashion. However, designing effective neural architectures for SPD learning is challenging, particularly when the task requires additional structural constraints, such as element-wise sparsity. Current approaches either do not ensure that the output meets all desired properties or lack expressivity. In this paper, we introduce SpodNet, a novel and generic learning module that guarantees SPD outputs and supports additional structural constraints. Notably, it solves the challenging task of jointly learning SPD and sparse matrices. Our experiments illustrate the versatility and relevance of SpodNet layers for such applications.
- Yannis Karmim: Supra-Laplacian Encoding for Transformer on Dynamic Graphs
- Fully connected Graph Transformers (GTs) have rapidly become prominent in the static graph community as an alternative to message-passing models, which suffer from a lack of expressivity, oversquashing, and under-reaching. However, in a dynamic context, by interconnecting all nodes at multiple snapshots with self-attention, GTs lose both structural and temporal information. In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new spatio-temporal encoding that leverages the GT architecture while keeping spatio-temporal information. Specifically, we transform discrete-time dynamic graphs into multi-layer graphs and take advantage of the spectral properties of their associated supra-Laplacian matrix. Our second contribution explicitly models nodes' pairwise relationships with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction. SLATE outperforms numerous state-of-the-art methods based on message-passing graph neural networks combined with recurrent models (e.g., LSTM), as well as dynamic graph transformers, on 9 datasets.
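The supra-Laplacian construction itself is simple to sketch (a minimal version assuming a uniform inter-layer coupling weight omega; SLATE's actual encoding has more ingredients): snapshots sit on the block diagonal of a supra-adjacency matrix, each node is linked to its own copy in the next snapshot, and the low-frequency eigenvectors of the resulting Laplacian serve as a spatio-temporal positional encoding.

```python
import numpy as np

def supra_laplacian_encoding(adjs, omega=1.0, k=4):
    """Spectral encoding from the supra-Laplacian of a discrete-time
    dynamic graph given as a list of T (n x n) adjacency snapshots."""
    T, n = len(adjs), adjs[0].shape[0]
    A = np.zeros((n * T, n * T))
    I = np.eye(n)
    for t, At in enumerate(adjs):
        A[t*n:(t+1)*n, t*n:(t+1)*n] = At            # intra-layer edges
        if t + 1 < T:                               # inter-layer couplings
            A[t*n:(t+1)*n, (t+1)*n:(t+2)*n] = omega * I
            A[(t+1)*n:(t+2)*n, t*n:(t+1)*n] = omega * I
    L = np.diag(A.sum(axis=1)) - A                  # supra-Laplacian
    vals, vecs = np.linalg.eigh(L)                  # ascending eigenvalues
    return vecs[:, 1:k+1]                           # first non-trivial modes
```

Each of the n*T node-time copies then carries a k-dimensional encoding that mixes structural and temporal position, which is the information a naive fully connected attention over snapshots would discard.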
- Jason Piquenot: Context-Free Grammars and Graph Neural Networks: Bridging Language Theory and Structured Machine Learning
- The intersection of formal language theory and machine learning has led to novel advancements in graph representation learning. This presentation explores three recent works that leverage Context-Free Grammars (CFGs) to enhance the expressiveness and efficiency of Graph Neural Networks (GNNs). First, we introduce a framework that establishes a formal connection between algebraic languages and GNNs, using CFGs to structure algebraic operations into generative rules. A grammar reduction scheme is proposed to optimize the CFG, yielding a GNN model, G2N2, that conforms to the third-order Weisfeiler-Lehman (3-WL) test. Our experiments demonstrate that G2N2 outperforms other 3-WL-compliant GNNs across multiple graph learning tasks. Next, we present the Grammatical Path Network (GPN), a GNN model designed to efficiently capture cycles in graphs. By leveraging CFGs to count cycles through precomputed paths, GPN achieves comparable performance to Graph Substructure Networks (GSN) while maintaining the computational efficiency of standard Message Passing Neural Networks (MPNNs). This approach circumvents explicit cycle precomputation, offering a scalable solution for substructure-based graph learning. Finally, we introduce Grammar Reinforcement Learning (GRL), a reinforcement learning paradigm that integrates Monte Carlo Tree Search (MCTS) and a transformer-based Pushdown Automaton (PDA) within a CFG framework. GRL optimizes matrix-based formulas for path and cycle counting in graphs, discovering novel formulations that improve computational efficiency by factors of two to six compared to state-of-the-art techniques. Together, these works illustrate the power of CFGs in structuring and optimizing GNN architectures, advancing both theoretical foundations and practical applications in graph learning. Our findings highlight promising directions for future research in the synergy between formal grammars and machine learning on structured data.
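As a flavour of the matrix-based counting formulas that GRL searches over, the simplest member of that family counts triangles from powers of the adjacency matrix (a textbook identity, not one of GRL's discovered formulas):

```python
import numpy as np

def triangle_count(A):
    """Number of 3-cycles in an undirected simple graph: trace(A^3)
    counts each triangle 6 times (3 starting nodes x 2 directions)."""
    return int(round(np.trace(A @ A @ A) / 6))
```

Formulas for longer cycles involve correction terms for degenerate closed walks, which is exactly where an automated search over grammar-generated expressions becomes valuable.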
- Martin Gjorgjevski: Node Regression on Latent Position Random Graphs via Local Averaging
- Node regression consists in predicting the value of a graph label at a node, given observations at the other nodes. To gain some insight into the performance of various estimators for this task, we perform a theoretical study in a context where the graph is random. Specifically, we assume that the graph is generated by a Latent Position Model, where each node of the graph has a latent position, and the probability that two nodes are connected depends on the distance between the latent positions of the two nodes. In this context, we begin by studying the simplest possible estimator for graph regression, which consists in averaging the value of the label at all neighboring nodes. We show that in Latent Position Models this estimator tends to a Nadaraya-Watson estimator in the latent space, and that its rate of convergence is in fact the same. One issue with this standard estimator is that it averages over a region consisting of all neighbors of a node, and depending on the graph model this may be too much or too little. An alternative consists in first estimating the true distances between the latent positions, then injecting these estimated distances into a classical Nadaraya-Watson estimator. This enables averaging over regions either smaller or larger than the typical graph neighborhood. We show that this method can achieve standard nonparametric rates in certain instances even when the graph neighborhood is too large or too small.
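The simple local-averaging estimator is easy to reproduce on a synthetic latent position graph (a minimal sketch with a 1-D latent space and a hard connection threshold; all names and constants are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 1.0, n)                 # latent positions (unobserved)
f = lambda t: np.sin(2 * np.pi * t)          # target regression function
y = f(x) + 0.1 * rng.standard_normal(n)      # noisy node labels

# Latent position graph: connect nodes whose latent positions are close.
r = 0.05
A = (np.abs(x[:, None] - x[None, :]) < r) & ~np.eye(n, dtype=bool)

# Local-averaging estimator: each node averages its neighbours' labels,
# implicitly performing Nadaraya-Watson smoothing in the latent space.
deg = A.sum(axis=1)
estimate = (A @ y) / np.maximum(deg, 1)
```

Here the averaging region is fixed by the graph model (width r); the abstract's point is precisely that this region may be too wide or too narrow, motivating the distance-estimation variant.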
- Fatia Lekbour: HybEA: Hybrid Attention Models for Entity Alignment
- The proliferation of knowledge graphs (KGs) across various domains, and the need to integrate them, has made entity alignment a crucial task: identifying the nodes that describe the same real-world entity in different KGs. This task faces two major challenges. On the one hand, semantic heterogeneity, where the same entities may be described with different attributes and names. On the other hand, structural heterogeneity, where relations between entities vary from one KG to another, making alignment difficult since the graphs to be aligned are generally not isomorphic. We analyzed these heterogeneities on several datasets using metrics such as Levenshtein similarity to assess the semantic aspect of the graphs and the Jaccard index to measure their structure. This analysis reveals significant variations across datasets, which influence the performance of existing approaches. Indeed, current methods generally focus on only one of these challenges, limiting their ability to adapt to the diversity of KGs. To overcome these limitations, we propose HybEA, a semi-supervised model combining two attention models: one exploiting the structure of the graphs and the other their attributes. Our experiments on various types of datasets, covering a wide spectrum of semantic and structural heterogeneity, show that HybEA outperforms the state of the art, with an improvement of up to 26% on Hits@1 and an average of +8% over the best existing methods. HybEA achieves near-perfect scores on several datasets, showing that it effectively handles the different levels of heterogeneity.
- Salvish Goomanee: 3D simulations of embryo mechanics using graph neural networks
- In this work, we develop a framework that emulates computationally expensive 3D simulations of embryo mechanics using graph neural networks (GNNs). We construct a surrogate graph model where nodes represent cells, and edges correspond to interfacial areas between them. This representation enables the integration of key biomechanical properties: cell volume and pressure as node features, and intracellular surface tensions along with interfacial areas as edge features. To accurately predict equilibrium states and topological transitions, we design a 3D rotationally and translationally equivariant edge-prediction GNN. Additionally, we implement a version of the Difformer GNN for link prediction, enhancing our ability to capture the Markovian nature of the simulated processes. Finally, we validate our approach on unseen data for tension inference and the prediction of cell signaling dynamics.
- Assaad Zeghina: Frequent pattern mining in multi-graphs: contributions of deep learning and neural architectures
- The use of machine learning and deep learning methods to solve problems previously tackled with classical techniques is attracting growing interest, particularly for massive and heterogeneous data. Among these problems, frequent pattern mining in spatio-temporal graphs raises many challenges: the multiplicity of edge types, the joint consideration of spatial and temporal factors, and the combinatorial complexity of enumerating recurring subgraphs. Traditional methods, often based on exhaustive exploration, struggle to scale and to maintain satisfactory performance as the graph grows or its structure becomes richer. To address these limits in complexity and scalability on large graphs, we proposed two deep learning methods, Multi-SPMiner and Deep-QMiner. Multi-SPMiner simultaneously projects nodes and their neighborhoods into a latent space while preserving subgraph structure; by iteratively adding adjacent nodes, it progressively builds a frequent pattern. However, Multi-SPMiner does not allow end-to-end training and struggles to explain the absence of certain patterns. To overcome these limitations, we developed Deep-QMiner, which extends Multi-SPMiner with a fully trainable sequential reinforcement learning approach. By reformulating pattern exploration as a sequence of decisions, in which agents progressively build a frequent pattern from random nodes, this method offers greater flexibility through an adjustable reward system. Compared with classical approaches, our methods improve execution time, though with slightly lower precision. Our models were validated on various types of graphs (simple, multigraphs, labeled, directed) and varied applications (environmental, textual, fMRI), demonstrating good generalization even when initially trained on synthetic data.
- Ikenna Oluigbo: Bijective Graph Learning Architecture with Multi-Level Attributes Interaction
- Techniques for representing graphs and preserving different graph features in low-dimensional embeddings have gained much popularity and application recently. The resulting embeddings can be applied to a wide variety of downstream machine learning tasks such as node classification and community detection. Despite the successes of Graph Neural Networks (GNNs) in graph learning, this classical architecture is plagued with limitations, including but not limited to oversmoothing from message passing at increasing layer depth, dependence on scarce labeled data, long-range dependencies, and a large execution time resulting from long graph updates. As an alternative solution to these limitations, we propose an improved graph representation technique using a bijective mapping function with log smoothing to aggregate neighborhood properties and update node representations in a message-passing process, similar to that of a classical GNN. For each update of a node's representation, we consider the previous and current states of the node's neighbors, as well as the node's attributes. The resulting state for each node at every update level is a simple encoded integer which captures in itself the properties of the neighbor nodes. To reduce complexity and preserve rich and meaningful features in the embedding, we pass the final states for all nodes through a feed-forward embedding layer to generate low-dimensional vector embeddings for the nodes. We validate our technique and show its usability through experiments on real-life datasets for node classification and community detection tasks, where our method shows higher performance than existing models.
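The integer-state update described is close in spirit to Weisfeiler-Lehman colour refinement (the paper's bijective mapping and log smoothing are not reproduced here; this is only a related minimal sketch): each round, a node's new state is an integer that uniquely encodes its own state together with the multiset of its neighbours' states.

```python
def refine_states(adj, labels, rounds=2):
    """WL-style refinement: each node's new state is an integer that
    bijectively encodes (own state, sorted multiset of neighbour states),
    so structurally distinct neighbourhoods get distinct integers."""
    state = list(labels)
    for _ in range(rounds):
        sigs = [(state[i], tuple(sorted(state[j] for j in adj[i])))
                for i in range(len(adj))]
        lut = {s: k for k, s in enumerate(sorted(set(sigs)))}  # compress
        state = [lut[s] for s in sigs]
    return state
```

In the approach described above, such compact integer states would then be fed to a feed-forward embedding layer instead of running many message-passing layers, which is where the claimed savings in execution time come from.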