mOeX bibliography sorted by authors (2024-02-10)
Abbas and others
Nacira Abbas, Jérôme David, Amedeo Napoli, Linkex: A tool for link key discovery based on pattern structures, in: Proc. ICFCA workshop on Applications and tools of formal concept analysis, Frankfurt (DE), pp33-38, 2019
Links constitute the core of the Linked Data philosophy. With the high growth of data published on the web, many frameworks have been proposed to deal with the link discovery problem, and particularly with identity links. Finding such links between different RDF datasets is a critical task. In this position paper, we focus on link keys, which consist of sets of pairs of properties identifying the same entities across heterogeneous datasets. We also propose to formalize the problem of link key discovery using Pattern Structures (PS), the generalization of Formal Concept Analysis dealing with non-binary datasets. After providing the proper definitions of link keys and setting the problem in terms of PS, we show that the intents of the pattern concepts correspond to link keys and their extents to the sets of identity links generated by their intents. Finally, we discuss an implementation of this framework and show the applicability and scalability of the proposed method.
Nacira Abbas, Jérôme David, Amedeo Napoli, Discovery of link keys in RDF data based on pattern structures: preliminary steps, in: Francisco José Valverde-Albacete, Martin Trnecka (eds), Proc. 15th International conference on Concept Lattices and their Applications (CLA), Tallinn (EE), pp235-246, 2020
In this paper, we are interested in the discovery of link keys between two different RDF datasets based on FCA and pattern structures. A link key identifies individuals which represent the same real-world entity. Two main strategies are used to automatically discover link keys: either ignoring the classes to which the individuals belong, or taking them into account. Indeed, a link key may be relevant for one pair of classes and not for another. Discovering link keys for one pair of classes at a time may then be computationally expensive if every pair has to be considered. To overcome these limitations, we introduce a specific and original pattern structure in which link keys can be discovered in one pass while specifying the pair of classes associated with each link key, focusing on the discovery process and allowing more flexibility.
Nacira Abbas, Alexandre Bazin, Jérôme David, Amedeo Napoli, Sandwich: an algorithm for discovering relevant link keys in an LKPS concept lattice, in: Proc. 16th international conference on formal concept analysis (ICFCA), Strasbourg (FR), (Agnès Braud, Aleksey Buzmakov, Tom Hanika, Florence Le Ber (eds), Proc. 16th international conference on formal concept analysis (ICFCA), Lecture notes in computer science 12733, 2021), pp243-251, 2021
The discovery of link keys between two RDF datasets allows the identification of individuals which share common key characteristics. Actually, link keys correspond to closed sets of a specific Galois connection and can be discovered thanks to an FCA-based algorithm. In this paper, given a pattern concept lattice where each concept intent is a link key candidate, we aim at identifying the most relevant candidates w.r.t. adapted quality measures. To achieve this task, we introduce the "Sandwich" algorithm, which combines two dual bottom-up and top-down strategies for traversing the pattern concept lattice. The output of the Sandwich algorithm is a poset of the most relevant link key candidates. We provide details about the quality measures applicable to the selection of link keys and the Sandwich algorithm, as well as a discussion of the benefits of our approach.
Nacira Abbas, Alexandre Bazin, Jérôme David, Amedeo Napoli, Non-redundant link keys in RDF data: preliminary steps, in: Proc. 9th IJCAI workshop on What can FCA do for Artificial Intelligence? (FCA4AI), Montréal (CA), pp125-130, 2021
A link key between two RDF datasets D1 and D2 is a set of pairs of properties allowing the identification of pairs of individuals, say x1 in D1 and x2 in D2, which can be materialized as an x1 owl:sameAs x2 identity link. There exist several ways to mine such link keys, but none takes into account the fact that owl:sameAs is an equivalence relation, which leads to the discovery of non-redundant link keys. Accordingly, in this paper, we present link key discovery based on Pattern Structures (PS). PS output a pattern concept lattice where every concept has an extent representing a set of pairs of individuals and an intent representing the related link key candidate. Then, we discuss the equivalence relation induced by a link key and we introduce the notion of non-redundant link key candidate.
Nacira Abbas, Alexandre Bazin, Jérôme David, Amedeo Napoli, A study of the discovery and redundancy of link keys between two RDF datasets based on partition pattern structures, in: Pablo Cordero, Pavol Jozef Šafárik (eds), Proc. 16th International conference on Concept Lattices and their Applications (CLA), Tallinn (EE), pp175-189, 2022
A link key between two RDF datasets D1 and D2 is a set of pairs of properties allowing the identification of pairs of individuals x1 and x2 through an identity link such as x1 owl:sameAs x2. In this paper, relying on and extending previous work, we introduce an original formalization of link key discovery based on the framework of Partition Pattern Structures (pps). Our objective is to study and evaluate the redundancy of link keys based on the fact that owl:sameAs is an equivalence relation. In the pps concept lattice, every concept has an extent representing a link key candidate and an intent representing a partition of instances into sets of equivalent instances. Experiments show three main results. First, the redundancy of link keys is not that significant in real-world datasets. Nevertheless, the link key discovery approach based on pps returns a reduced number of non-redundant link key candidates compared to a standard approach. Finally, the pps-based approach is efficient and returns link keys of high quality.
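For illustration, the redundancy studied in this line of work can be rendered schematically as follows (notation ours, not the paper's): a candidate k generates a link set L(k), and since owl:sameAs is an equivalence relation, two candidates count as redundant when the equivalence closures of their link sets induce the same partition of the instances:

```latex
% Schematic rendering (notation ours): \overline{L(k)} is the equivalence
% closure of the link set generated by candidate k.
\[
k \sim k' \quad\text{iff}\quad
\mathrm{Part}\big(\overline{L(k)}\big) = \mathrm{Part}\big(\overline{L(k')}\big)
\]
```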
Nacira Abbas, Alexandre Bazin, Jérôme David, Amedeo Napoli, Contributions to link key discovery in RDF datasets, in: Pascal Préa (ed), Proc. 27th conference on rencontres de la Société Française de Classification (SFC), Lyon (FR), 2022
A link key between two RDF datasets D1 and D2 is a set of pairs of properties allowing the identification of pairs of individuals, say x1 in D1 and x2 in D2, which can be materialized as an x1 owl:sameAs x2 identity link. There exist several ways to mine such link keys, but none takes into account the fact that owl:sameAs is an equivalence relation, which leads to the discovery of non-redundant link keys. Accordingly, in this paper, we present link key discovery based on Pattern Structures (PS). PS output a pattern concept lattice where every concept has an extent representing a set of pairs of individuals and an intent representing the related link key candidate. Then, we discuss the equivalence relation induced by a link key and we introduce the notion of non-redundant link key candidate.
Nacira Abbas, Alexandre Bazin, Jérôme David, Amedeo Napoli, Discovery of link keys in resource description framework datasets based on pattern structures, International Journal of Approximate Reasoning 161:108978, 2023
In this paper, we present a detailed and complete study on data interlinking and the discovery of identity links between two RDF (Resource Description Framework) datasets over the web of data. Data interlinking is the task of discovering identity links between individuals across datasets. Link keys are constructions based on pairs of properties and classes that can be considered as rules allowing the inference of identity links between subjects in two RDF datasets. Here we investigate how FCA (Formal Concept Analysis) and its extensions are well adapted to support the discovery of link keys. Indeed, plain FCA allows the discovery of so-called link key candidates, while a specific pattern structure allows a pair of classes to be associated with every candidate. Different link key candidates can generate sets of identity links between individuals that may be considered equal when regarded as partitions of the identity relation, thus involving a kind of redundancy. In this paper, such redundancy is studied in depth thanks to partition pattern structures. In particular, experiments show that the redundancy of link key candidates, while not significant when based on identity of partitions, appears to be much more significant when based on similarity.
Achichi and others
Manel Achichi, Michelle Cheatham, Zlatan Dragisic, Jérôme Euzenat, Daniel Faria, Alfio Ferrara, Giorgos Flouris, Irini Fundulaki, Ian Harrow, Valentina Ivanova, Ernesto Jiménez-Ruiz, Kristian Kolthoff, Elena Kuss, Patrick Lambrix, Henrik Leopold, Huanyu Li, Christian Meilicke, Majid Mohammadi, Stefano Montanelli, Catia Pesquita, Tzanina Saveta, Pavel Shvaiko, Andrea Splendiani, Heiner Stuckenschmidt, Élodie Thiéblin, Konstantin Todorov, Cássia Trojahn dos Santos, Ondřej Zamazal, Results of the Ontology Alignment Evaluation Initiative 2017, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 12th ISWC workshop on ontology matching (OM), Wien (AT), pp61-113, 2017
Ontology matching consists of finding correspondences between semantically related entities of different ontologies. The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity (from simple thesauri to expressive OWL ontologies) and use different evaluation modalities (e.g., blind evaluation, open evaluation, or consensus). The OAEI 2017 campaign offered 9 tracks with 23 test cases, and was attended by 21 participants. This paper is an overall presentation of that campaign.
Adrian and others
Kemo Adrian, Jérôme Euzenat, Dagmar Gromann (eds), Proc. 1st JOWO workshop on Interaction-Based Knowledge Sharing (WINKS), Bozen-Bolzano (IT), 42p., 2018
Kemo Adrian, Jérôme Euzenat, Dagmar Gromann, Ernesto Jiménez-Ruiz, Marco Schorlemmer, Valentina Tamma (eds), Proc. 2nd JOWO workshop on Interaction-Based Knowledge Sharing (WINKS), Graz (AT), 48p., 2019
Atencia and others
Manuel Atencia, Jérôme David, Jérôme Euzenat, Amedeo Napoli, Jérémy Vizzini, A guided walk into link key candidate extraction with relational concept analysis, in: Claudia d'Amato, Lalana Kagal (eds), Proc. journal track of the International semantic web conference, Auckland (NZ), 2019
Data interlinking is an important task for linked data interoperability. One of the possible techniques for finding links is the use of link keys, which generalise relational keys to pairs of RDF models. We show how link key candidates may be directly extracted from RDF datasets by encoding the extraction problem in relational concept analysis. This method deals with non-functional properties and circularly dependent link key expressions. As such, it generalises those presented for non-dependent link keys and link keys over the relational model. The proposed method is able to return link key candidates involving several classes at once.
Manuel Atencia, Jérôme David, Jérôme Euzenat, Several link keys are better than one, or extracting disjunctions of link key candidates, in: Proc. 10th ACM international conference on knowledge capture (K-Cap), Marina del Rey (CA US), pp61-68, 2019
Link keys express conditions under which instances of two classes of different RDF data sets may be considered as equal. As such, they can be used for data interlinking. There exist algorithms to extract link key candidates from RDF data sets and different measures have been defined to evaluate the quality of link key candidates individually. For certain data sets, however, it may be necessary to use more than one link key on a pair of classes to retrieve a more complete set of links. To this end, in this paper, we define disjunction of link keys, propose strategies to extract disjunctions of link key candidates from RDF data, and apply existing quality measures to evaluate them. We also report on experiments with these strategies.
Manuel Atencia, Jérôme Euzenat, Khadija Jradeh, Chan Le Duc, Tableau methods for reasoning with link keys, Deliverable 2.1, ELKER, 32p., 2019
Data interlinking is a critical task for widening and enhancing linked open data. One way to tackle data interlinking is to use link keys, which generalise keys to the case of two RDF datasets described using different ontologies. Link keys specify pairs of properties to compare for finding same-as links between instances of two classes of two different datasets. Hence, they can be used for finding links. Link keys can also be considered as logical axioms just like keys, ontologies and ontology alignments. We introduce the logic ALC+LK extending the description logic ALC with link keys. It may be used to reason and infer entailed link keys that may be more useful for a particular data interlinking task. We show that link key entailment can be reduced to consistency checking without introducing the negation of link keys. For deciding the consistency of an ALC+LK ontology, we introduce a new tableau-based algorithm. Contrary to the classical ones, the completion rules concerning link keys apply to pairs of individuals not directly related. We show that this algorithm is sound, complete and always terminates.
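As a sketch of how such a reduction typically works (our schematic rendering, with fresh individuals a, b, v1, ..., vn; see the deliverable for the actual construction): to decide whether an ontology entails a link key, one asserts a would-be counterexample and checks for inconsistency, without ever negating the link key itself:

```latex
% Link key entailment reduced to inconsistency checking (rendering ours);
% a, b and the v_i are fresh individual names.
\[
\mathcal{O} \models (\{\langle p_i,q_i\rangle\}_{i=1}^{n}\ \mathrm{linkkey}\ \langle C,D\rangle)
\ \text{iff}\
\mathcal{O} \cup \{C(a), D(b)\} \cup \{p_i(a,v_i),\, q_i(b,v_i)\}_{i=1}^{n} \cup \{a \not\approx b\}
\ \text{is inconsistent.}
\]
```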
Manuel Atencia, Jérôme David, Jérôme Euzenat, Amedeo Napoli, Jérémy Vizzini, Candidate link key extraction with formal concept analysis, Deliverable 1.1, ELKER, 29p., October 2019
A link key extraction procedure using formal concept analysis is described. It is shown to extract all link key candidates.
Manuel Atencia, Jérôme David, Jérôme Euzenat, Liliana Ibanescu, Nathalie Pernelle, Fatiha Saïs, Élodie Thiéblin, Cássia Trojahn dos Santos, Discovering expressive rules for complex ontology matching and data interlinking, in: Pavel Shvaiko, Jérôme Euzenat, Oktie Hassanzadeh, Ernesto Jiménez-Ruiz, Cássia Trojahn dos Santos (eds), Proc. 14th ISWC workshop on ontology matching (OM), Auckland (NZ), pp199-200, 2020
Ontology matching and data interlinking are distinct tasks that aim at facilitating interoperability between different knowledge bases. Although the field has developed considerably in recent years, most works still focus on generating simple correspondences between entities. These correspondences are however insufficient to fully cover the different types of heterogeneity between knowledge bases, and complex correspondences are therefore required. Compared to simple matching, few approaches for complex matching have been proposed, focusing on correspondence patterns or exploiting common instances between the ontologies. Similarly, unsupervised data interlinking approaches (which do not require labelled data samples) have recently been developed. One approach consists in discovering linking rules such as simple keys or conditional keys on unlabelled data. The results have shown that the more expressive the rules, the higher the recall. Even more expressive rules (referential expressions, graph keys, etc.) are thus required; however, naive approaches to the discovery of such rules cannot be envisaged on large datasets. Existing approaches presuppose either that the data conform to the same ontology or that all possible pairs of properties be examined. Complementarily, link keys are sets of pairs of properties that identify the instances of two classes of two RDF datasets. Such link keys may be directly extracted without the need for an alignment. We introduce here an approach that aims at evaluating the impact of complex correspondences on the task of data interlinking established from the application of keys.
Manuel Atencia, Jérôme David, Jérôme Euzenat, Amedeo Napoli, Jérémy Vizzini, Link key candidate extraction with relational concept analysis, Discrete applied mathematics 273:2-20, 2020
Linked data aims at publishing data expressed in RDF (Resource Description Framework) at the scale of the worldwide web. These datasets interoperate by publishing links which identify individuals across heterogeneous datasets. Such links may be found by using a generalisation of keys in databases, called link keys, which apply across datasets. They specify the pairs of properties to compare for linking individuals belonging to different classes of the datasets. Here, we show how to recast the proposed link key extraction techniques for RDF datasets in the framework of formal concept analysis. We define a formal context, where objects are pairs of resources and attributes are pairs of properties, and show that formal concepts correspond to link key candidates. We extend this characterisation to the full RDF model including non functional properties and interdependent link keys. We show how to use relational concept analysis for dealing with cyclic dependencies across classes and hence link keys. Finally, we discuss an implementation of this framework.
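A minimal sketch of the encoding described in this abstract, on toy dict-based data (names and representation ours, not the authors' implementation; only the existential condition is shown, and only object-generated concepts are enumerated):

```python
# A toy sketch (ours, not the authors' implementation) of the encoding above:
# objects are pairs of resources, attributes are pairs of properties.
# Datasets are dicts: resource -> property -> set of values.
d1 = {"a1": {"name": {"Anna"}, "born": {"1990"}},
      "a2": {"name": {"Bob"}, "born": {"1985"}}}
d2 = {"b1": {"nom": {"Anna"}, "naissance": {"1990"}},
      "b2": {"nom": {"Bob"}, "naissance": {"1985"}}}

def attributes(x, y):
    # Pair (x, y) carries attribute (p, q) iff p(x) and q(y) share a value
    # (the existential condition; the full method also handles universal ones).
    return {(p, q) for p in d1[x] for q in d2[y] if d1[x][p] & d2[y][q]}

pairs = [(x, y) for x in d1 for y in d2]
# Close each object pair into a formal concept; intents are link key candidates.
# (This enumerates object concepts only; a full lattice needs e.g. NextClosure.)
candidates = set()
for x, y in pairs:
    intent = frozenset(attributes(x, y))
    extent = frozenset((u, v) for u, v in pairs if attributes(u, v) >= intent)
    intent = frozenset.intersection(*(frozenset(attributes(u, v)) for u, v in extent))
    candidates.add((intent, extent))
for intent, extent in sorted(candidates, key=lambda c: -len(c[0])):
    print(sorted(intent), "->", sorted(extent))
```

On this toy input, the non-trivial intent {(born, naissance), (name, nom)} covers exactly the matching pairs (a1, b1) and (a2, b2), i.e. it is a link key candidate linking them.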
Manuel Atencia, Jérôme David, Jérôme Euzenat, On the relation between keys and link keys for data interlinking, Semantic web journal 12(4):547-567, 2021
Both keys and their generalisation, link keys, may be used to perform data interlinking, i.e. finding identical resources in different RDF datasets. However, the precise relationship between keys and link keys has not been fully determined yet. A common formal framework encompassing both keys and link keys is necessary to ensure the correctness of data interlinking tools based on them, and to determine their scope and possible overlapping. In this paper, we provide a semantics for keys and link keys within description logics. We determine under which conditions they are legitimate to generate links. We provide conditions under which link keys are logically equivalent to keys. In particular, we show that data interlinking with keys and ontology alignments can be reduced to data interlinking with link keys, but not the other way around.
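For illustration, one common way of writing a link key axiom in this line of work, together with its existential ("at least one shared value per property pair") reading, is sketched below (rendering ours; the paper distinguishes several such interpretations):

```latex
% A link key axiom and its existential linking semantics (rendering ours).
\[
(\{\langle p_1,q_1\rangle,\ldots,\langle p_n,q_n\rangle\}\ \mathrm{linkkey}\ \langle C,D\rangle)
\]
\[
\text{for all } x \in C,\ y \in D:\quad
\Big(\bigwedge_{i=1}^{n} \exists v.\; p_i(x,v) \wedge q_i(y,v)\Big)
\;\Rightarrow\; x \approx y
\]
```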
Manuel Atencia, Jérôme David, Jérôme Euzenat, Amedeo Napoli, Jérémy Vizzini, Relational concept analysis for circular link key extraction, Deliverable 1.2, ELKER, 57p., December 2021
A link key extraction procedure in case of circular dependencies is presented. It uses relational concept analysis and extends the procedure of Deliverable 1.1. This leads us to investigate more closely the semantics of relational concept analysis, which is given in terms of fixed points. Extracting all fixed points may offer more link key candidates to consider.
Bloch and others
Isabelle Bloch, Jérôme Euzenat, Jérôme Lang, François Schwarzentruber (éds), Post-actes de la Conférence Nationale en Intelligence Artificielle (CNIA 2018-2020), Revue ouverte d'intelligence artificielle (numéro spécial) 3(3-4):193-413, 2022
Isabelle Bloch, Jérôme Euzenat, Jérôme Lang, François Schwarzentruber, Introduction, Revue ouverte d'intelligence artificielle 3(3-4):193-199, 2022
Bourahla and others
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Knowledge improvement and diversity under interaction-driven adaptation of learned ontologies, in: Ulle Endriss, Ann Nowé, Frank Dignum, Alessio Lomuscio (eds), Proc. 20th ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), London (UK), pp242-250, 2021
When agents independently learn knowledge, such as ontologies, about their environment, it may be diverse, incorrect or incomplete. This knowledge heterogeneity could lead agents to disagree, thus hindering their cooperation. Existing approaches usually deal with this interaction problem by relating ontologies, without modifying them, or, on the contrary, by focusing on building common knowledge. Here, we consider agents adapting ontologies learned from the environment in order to agree with each other when cooperating. In this scenario, fundamental questions arise: Do they achieve successful interaction? Can this process improve knowledge correctness? Do all agents end up with the same ontology? To answer these questions, we design a two-stage experiment. First, agents learn to take decisions about the environment by classifying objects and the learned classifiers are turned into ontologies. In the second stage, agents interact with each other to agree on the decisions to take and modify their ontologies accordingly. We show that agents indeed reduce interaction failure, most of the time they improve the accuracy of their knowledge about the environment, and they do not necessarily opt for the same ontology.
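A bare-bones sketch of one interaction of the second stage as described above (the decide and adapt interfaces are our assumptions, not the experiment's code):

```python
# A bare-bones sketch (ours) of one interaction of the second stage:
# two agents confront their decisions about an object and adapt on failure.
import random

def interaction_round(agents, environment, adapt):
    a, b = random.sample(agents, 2)      # draw two agents from the population
    obj = random.choice(environment)     # draw an object to decide about
    if a.decide(obj) != b.decide(obj):   # disagreement = interaction failure
        adapt(a, b, obj)                 # one agent modifies its ontology
        return False
    return True                          # success: the decisions agree
```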
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Knowledge transmission and improvement across generations do not need strong selection, in: Piotr Faliszewski, Viviana Mascardi, Catherine Pelachaud, Matthew Taylor (eds), Proc. 21st ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), (Online), pp163-171, 2022
Agents have been used for simulating cultural evolution and cultural evolution can be used as a model for artificial agents. Previous results have shown that horizontal, or intra-generation, knowledge transmission allows agents to improve the quality of their knowledge to a certain level. Moreover, variation generated through vertical, or inter-generation, transmission allows agents to exceed that level. Such results were obtained under specific conditions such as the drastic selection of agents allowed to transmit their knowledge, seeding the process with correct knowledge or introducing artificial noise during transmission. Here, we question the necessity of such measures and study their impact on the quality of transmitted knowledge. For that purpose, we combine the settings of two previous experiments and relax these conditions (no strong selection of teachers, no fully correct seed, no introduction of artificial noise). The rationale is that if interactions lead agents to improve their overall knowledge quality, this should be sufficient to ensure correct knowledge transmission, and that transmission mechanisms are sufficiently imperfect to produce variation. In this setting, we confirm that vertical transmission improves on horizontal transmission even without drastic selection and oriented learning. We also show that horizontal transmission is able to compensate for the lack of parent selection if it is maintained for long enough. This means that it is not necessary to take the most successful agents as teachers, neither in vertical nor horizontal transmission, to cumulatively improve knowledge.
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Transmission de connaissances et sélection, in: Valérie Camps (éd), Actes 30e journées francophones sur Systèmes multi-agent (JFSMA), Saint-Étienne (FR), pp63-72, 2022
Agents may be used to simulate cultural evolution, and cultural evolution may be used as a model for artificial agents. Experiments have shown that intra-generation knowledge transmission allows agents to improve the quality of their knowledge. Moreover, inter-generation transmission allows them to exceed that level. These results were obtained under particular conditions: drastic selection of the agents transmitting their knowledge, initialisation with correct knowledge, or introduction of noise during transmission. In order to study the impact of these measures on the quality of the transmitted knowledge, we combine the settings of two previous experiments and relax these conditions. This setting confirms that vertical transmission improves the quality of the knowledge obtained by horizontal transmission, even without drastic selection and oriented learning. It also shows that sufficient intra-generation transmission can compensate for the absence of parent selection.
Yasser Bourahla, Jérôme David, Jérôme Euzenat, Meryem Naciri, Measuring and controlling knowledge diversity, in: Tiago Prince Sales, Maria Hedblom, He Tan, Lucía Gómez Álvarez, Rafael Peñaloza, Srdjan Vesic (eds), Proc. 1st JOWO workshop on formal models of knowledge diversity (FMKD), Jönköping (SE), 2022
Assessing knowledge diversity may be useful for many purposes. In particular, it is necessary to measure diversity in order to understand how it arises or is preserved; it is also necessary to control it in order to measure its effects. Here we consider measuring knowledge diversity using two components: (a) a diversity measure taking advantage of (b) a knowledge difference measure. We present the general principles and various candidates for such components. We discuss how these measures may be used to generate populations of agents with controlled levels of knowledge diversity.
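A small sketch of the two-component scheme described above, with a toy difference measure (the component names and the Jaccard-style difference are our illustration, not the paper's definitions):

```python
# A toy sketch (ours, not the paper's definitions) of the two components:
# (b) a knowledge difference measure, here a Jaccard-style distance on
#     axiom sets, and (a) a diversity measure built on top of it.
from itertools import combinations

def ontology_difference(o1, o2):
    union = o1 | o2
    return len(o1 ^ o2) / len(union) if union else 0.0

def knowledge_diversity(population):
    # Diversity as the mean pairwise difference across the population.
    pairs = list(combinations(population, 2))
    return sum(ontology_difference(a, b) for a, b in pairs) / len(pairs)

pop = [{"A sub B", "B sub C"}, {"A sub B"}, {"A sub C", "B sub C"}]
print(knowledge_diversity(pop))  # 0.0 = uniform population, higher = more diverse
```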
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Inter-generation knowledge transmission without individual selection, in: Proc. 4th conference of the Cultural evolution society, Aarhus (DK), 2022
Yasser Bourahla, Multi-agent simulation of cultural ontology evolution through interaction, Thèse d'informatique, Université de Grenoble, Grenoble (FR), 2023
Artificial agents, like humans, use their knowledge to behave in an environment and within a society. Humans evolve their knowledge by adapting it in response to interactions with their environment and society. The question raised in this thesis is: "can knowledge evolve in a society of artificial agents, as it does in a human society?" In particular, if agents adapt to improve their social interactions, how can this affect the quality of the population's knowledge about the environment? And how does it affect knowledge diversity? To address these questions, ontology evolution is simulated, based on principles from experimental cultural evolution, through an experimental framework in which agents initially learn ontologies from object samples, which they later adapt by interacting with each other about objects in the environment. Using this experimental framework, we show that (1) agents reach a state of agreement in their interactions, (2) they improve the quality of their knowledge about the environment, and (3) they preserve the diversity of their knowledge. In order to characterise knowledge evolution through multiple generations, experiments are conducted with agents endowed with reproduction capabilities. Results show that (1) the variation provided by inter-generation transmission allows agents to further improve the quality of their ontologies; (2) agents select the knowledge to be preserved through intra-generation transmission, which compensates for the lack of teacher selection in inter-generation transmission; and finally, (3) diversity remains stable from one generation to another. This work not only provides a basis for implementing agents capable of culturally evolving their knowledge, but also suggests that simulating such behaviour can serve as a valuable tool for testing hypotheses about human cultural knowledge evolution.
Cheatham and others
Michelle Cheatham, Isabel Cruz, Jérôme Euzenat, Catia Pesquita (eds), Special issue on ontology and linked data matching, Semantic web journal (special issue) 8(2):183-251, 2017
Michelle Cheatham, Isabel Cruz, Jérôme Euzenat, Catia Pesquita, Special issue on ontology and linked data matching, Semantic web journal 8(2):183-184, 2017
Chekol and others
Melisachew Wudage Chekol, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, SPARQL query containment under schema, Journal on data semantics 7(3):133-154, 2018
Query containment is defined as the problem of determining if the result of a query is included in the result of another query for any dataset. It has major applications in query optimization and knowledge base verification. The main objective of this work is to provide sound and complete procedures to determine containment of SPARQL queries under expressive description logic schema axioms. Beyond that, these procedures are experimentally evaluated. To date, testing query containment has been performed using different techniques: containment mapping, canonical databases, automata theory techniques and through a reduction to the validity problem in logic. In this work, we use the latter technique to test containment of SPARQL queries using an expressive modal logic called mu-calculus. For that purpose, we define an RDF graph encoding as a transition system which preserves its characteristics. In addition, queries and schema axioms are encoded as mu-calculus formulae. Thereby, query containment can be reduced to testing validity in the logic. We identify various fragments of SPARQL and description logic schema languages for which containment is decidable. Additionally, we provide theoretically and experimentally proven procedures to check containment of these decidable fragments. Finally, we propose a benchmark for containment solvers which is used to test and compare the current state-of-the-art containment solvers.
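Schematically (notation ours), with η(S) the µ-calculus encoding of the schema axioms and A(·) the encoding of queries, containment under a schema S reduces to validity in the logic:

```latex
% Containment under schema S reduced to mu-calculus validity (notation ours).
\[
q \sqsubseteq_{\mathcal{S}} q'
\quad\text{iff}\quad
\models\; \big(\eta(\mathcal{S}) \wedge \mathcal{A}(q)\big) \Rightarrow \mathcal{A}(q')
\]
```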
da Silva and others
Jomar da Silva, Fernanda Araujo Baião, Kate Revoredo, Jérôme Euzenat, Semantic interactive ontology matching: synergistic combination of techniques to improve the set of candidate correspondences, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 12th ISWC workshop on ontology matching (OM), Wien (AT), pp13-24, 2017
Ontology matching is the task of finding a set of entity correspondences between a pair of ontologies, i.e. an alignment. It has been receiving a lot of attention due to its broad applications. Many techniques have been proposed, among which those applying interactive strategies. An interactive ontology matching strategy uses expert knowledge to improve the quality of the final alignment. When these strategies rely on expert feedback to validate correspondences, it is important to establish criteria for selecting the set of correspondences to be shown to the expert. A bad definition of this set can prevent the algorithm from finding the right alignment or can delay convergence. In this work we present techniques which, when used simultaneously, improve the set of candidate correspondences. These techniques are incorporated in an interactive ontology matching approach, called ALINSyn. Experiments successfully show the potential of our proposal.
Jomar da Silva, Kate Revoredo, Fernanda Araujo Baião, Jérôme Euzenat, Interactive ontology matching: using expert feedback to select attribute mappings, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 13th ISWC workshop on ontology matching (OM), Monterey (CA US), pp25-36, 2018
Interactive ontology matching considers the participation of domain experts during the matching process of two ontologies. An important step of this process is the selection of mappings to submit to the expert. These mappings can be between concepts, attributes or relationships of the ontologies. Existing approaches define the set of mapping suggestions only at the beginning of the process, before expert involvement. In previous work, we proposed an approach to refine the set of mapping suggestions after each expert feedback, benefiting from this feedback to form a set of mapping suggestions of better quality. In that approach, only concept mappings were considered during the refinement. In this paper, we present a new approach that evaluates the benefit of also considering attribute mappings during the interactive phase of the process. The approach was evaluated using the OAEI conference dataset, showing an increase in recall without sacrificing precision. It was also compared with the state of the art, showing that it generates alignments of state-of-the-art quality.
Jomar da Silva, Kate Revoredo, Fernanda Araujo Baião, Jérôme Euzenat, Alin: improving interactive ontology matching by interactively revising mapping suggestions, Knowledge engineering review 35:e1, 2020
Ontology matching aims at discovering mappings between the entities of two ontologies. It plays an important role in the integration of heterogeneous data sources that are described by ontologies. Interactive ontology matching involves domain experts in the matching process. In some approaches, the expert provides feedback about mappings between ontology entities, i.e., these approaches select mappings to present to the expert, who replies which of them should be accepted or rejected, thus taking advantage of the knowledge of domain experts towards finding an alignment. In this paper, we present Alin, an interactive ontology matching approach which uses expert feedback not only to approve or reject selected mappings, but also to dynamically improve the set of selected mappings, i.e., to interactively include and exclude mappings from it. This additional use of expert answers aims at increasing the benefit brought by each answer. For this purpose, Alin uses four techniques. Two of them were used in previous versions of Alin to dynamically select concept and attribute mappings. Two new techniques are introduced in this paper: one to dynamically select relationship mappings and another to dynamically reject inconsistent selected mappings using anti-patterns. We compared Alin with state-of-the-art tools, showing that it generates alignments of comparable quality.
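A schematic sketch of the interactive loop described above (ours, not the Alin implementation; expert_accepts and revise stand for the expert oracle and the dynamic revision techniques):

```python
# A schematic sketch (ours, not the Alin implementation) of the loop above:
# each expert answer settles one mapping and revises the remaining suggestions.
def interactive_match(suggestions, expert_accepts, revise):
    alignment = []
    while suggestions:
        mapping = suggestions.pop()         # select a mapping to show the expert
        accepted = expert_accepts(mapping)  # expert approves or rejects it
        if accepted:
            alignment.append(mapping)
        # Dynamically include/exclude suggestions in light of the answer, e.g.
        # add related attribute/relationship mappings, drop now-inconsistent
        # ones (anti-patterns).
        suggestions = revise(suggestions, mapping, accepted)
    return alignment
```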
David and others
Jérôme David, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, Evaluation of query transformations without data, in: Proc. WWW workshop on Reasoning on Data (RoD), Lyon (FR), pp1599-1602, 2018
Query transformations are ubiquitous in semantic web query processing. For any situation in which transformations are not proved correct by construction, the quality of these transformations has to be evaluated. Usual evaluation measures are either overly syntactic and not very informative (the result being: correct or incorrect) or dependent on the evaluation sources. Moreover, the two approaches do not necessarily yield the same result. We suggest that grounding the evaluation on query containment allows for a data-independent evaluation that is more informative than the usual syntactic evaluation. In addition, such evaluation modalities may take into account ontologies, alignments or different query languages as soon as they are relevant to query evaluation.
Jérôme David, Jérôme Euzenat, Jérémy Vizzini, Linkky: Extraction de clés de liage par une adaptation de l'analyse relationnelle de concepts, in: Actes 29e journées francophones sur Ingénierie des connaissances (IC), Nancy (FR), pp271-274, 2018
Jérôme David, Measures for knowledge – with applications to ontology matching and data interlinking, Habilitation à diriger des recherches, Université Grenoble Alpes, Grenoble (FR), May 2023
The Semantic Web is an extension of the web that enables people to express knowledge in a way that machines can reason with it. At the web scale, this knowledge may be described using different ontologies, and alignments have been defined to express these differences. Furthermore, the same individual may be represented by different instances in different datasets. Dealing with knowledge heterogeneity in the Semantic Web requires comparing these knowledge structures. Our objective is to understand heterogeneity and benefit from this understanding, not to reduce diversity. In this context, we have studied and contributed to techniques and measures for comparing knowledge structures on the Semantic Web along three dimensions: ontologies, alignments, and instances. At the ontology level, we propose measures for the ontology space and alignment space. The first family of measures relies solely on the content of ontologies, while the second one takes advantage of alignments between ontologies. At the alignment level, we investigate how to assess the quality of alignments. First, we study how to extend classical controlled evaluation measures by considering the semantics of aligned ontologies while relaxing the all-or-nothing nature of logical entailment. We also propose estimating the quality of alignments when no reference alignment is available. At the instance level, we tackle the challenge of identifying resources from different knowledge graphs that represent the same entity. We follow an approach based on keys and alignments. Specifically, we propose the notion of a link key, algorithms for extracting them, and measures to assess their quality. Finally, we recast this work in the perspective of the dynamics and evolution of knowledge.
Euzenat and others
Jérôme Euzenat, Interaction-based ontology alignment repair with expansion and relaxation, in: Proc. 26th International Joint Conference on Artificial Intelligence (IJCAI), Melbourne (VIC AU), pp185-191, 2017
Agents may use ontology alignments to communicate when they represent knowledge with different ontologies: alignments help reclassify objects from one ontology to the other. These alignments may not be perfectly correct, yet agents have to proceed. They can take advantage of their experience in order to evolve alignments: upon communication failure, they will adapt the alignments to avoid reproducing the same mistake. Such repair experiments had been performed in the framework of networks of ontologies related by alignments. They revealed that, by playing simple interaction games, agents can effectively repair random networks of ontologies. Here we repeat these experiments and, using new measures, show that previous results were underestimated. We introduce new adaptation operators that improve those previously considered. We also allow agents to go beyond the initial operators in two ways: they can generate new correspondences when they discard incorrect ones, and they can provide less precise answers. The combination of these modalities satisfies the following properties: (1) Agents still converge to a state in which no mistake occurs. (2) They achieve results far closer to the correct alignments than previously found. (3) They reach again 100% precision and coherent alignments.
The results reported in this paper for operators addjoin and refadd are not accurate, due to a software error: they were worse than they should have been. Updated results can be found in [20180308-NOOR], [20180311-NOOR] and [20180529-NOOR].
Jérôme Euzenat, Crafting ontology alignments from scratch through agent communication, in: Proc. 20th International Conference on Principles and practice of multi-agent systems (PRIMA), Nice (FR), (Bo An, Ana Bazzan, João Leite, Serena Villata, Leendert van der Torre (eds), Proc. 20th International Conference on Principles and practice of multi-agent systems (PRIMA), Lecture notes in computer science 10621, 2017), pp245-262, 2017
Agents may use different ontologies for representing knowledge and take advantage of alignments between ontologies in order to communicate. Such alignments may be provided by dedicated algorithms, but their accuracy is far from satisfying. We already explored operators allowing agents to repair such alignments while using them for communicating. The question remained of the capability of agents to craft alignments from scratch in the same way. Here we explore the use of expanding repair operators for that purpose. When starting from empty alignments, agents fail to create them as they have nothing to repair. Hence, we introduce the capability for agents to risk adding new correspondences when no existing one is useful. We compare and discuss the results provided by this modality and show that, due to this generative capability, agents reach better results than without it in terms of the accuracy of their alignments. When starting with empty alignments, alignments reach the same quality level as when starting with random alignments, thus providing a reliable way for agents to build alignments from scratch through communication.
Jérôme Euzenat, Knowledge diversity under socio-environmental pressure, in: Michael Rovatsos (ed), Investigating diversity in AI: the ESSENCE project, 2013-2017, Deliverable, ESSENCE, 62p., 2017, pp28-30
Experimental cultural evolution has been convincingly applied to the evolution of natural language and we aim at applying it to knowledge. Indeed, knowledge can be thought of as a shared artefact among a population influenced through communication with others. It can be seen as resulting from contradictory forces: internal consistency, i.e., pressure exerted by logical constraints, against environmental and social pressure, i.e., the pressure exerted by the world and the society agents live in. However, adapting to environmental and social pressure may lead agents to adopt the same knowledge. From an ecological perspective, this is not particularly appealing: species can resist changes in their environment because of the diversity of the solutions that they can offer. This problem may be approached by involving diversity as an internal constraint resisting external pressure towards uniformity.
Jérôme Euzenat, De la langue à la connaissance: approche expérimentale de l'évolution culturelle, Bulletin de l'AFIA 100:9-12, 2018
Jérôme Euzenat, François Schwarzentruber (éds), Actes Conférence Nationale en Intelligence Artificielle et Rencontres Jeunes Chercheurs en Intelligence Artificielle (CNIA+RJCIA), Nancy (FR), 133p., 2018
Jérôme Euzenat, Replicator-interactor in experimental cultural knowledge evolution, in: Proc. 2nd JOWO workshop on Interaction-Based Knowledge Sharing (WINKS), Graz (AT), 2019
Cultural evolution may be studied at a `macro' level, inspired by population dynamics, or at a `micro' level, inspired by genetics. The replicator-interactor model generalises the genotype-phenotype distinction of genetic evolution. Here, we consider how it can be applied to cultural knowledge evolution experiments. In particular, we consider knowledge as replicator and the behaviour it induces as interactor. We show that this requires addressing problems concerning transmission. We discuss the introduction of horizontal transmission within the replicator-interactor model and/or differential reproduction within cultural evolution experiments.
Jérôme Euzenat, A map without a legend: the semantic web and knowledge evolution, Semantic web journal 11(1):63-68, 2020
The current state of the semantic web is focused on data. This is worthwhile progress in web content processing and interoperability. However, it only marginally contributes to knowledge improvement and evolution. Understanding the world, and interpreting data, requires knowledge. Not knowledge cast in stone for ever, but knowledge that can seamlessly evolve; not knowledge from one single authority, but diverse knowledge sources which stimulate confrontation and robustness; not consistent knowledge at web scale, but local theories that can be combined. We discuss two different ways in which semantic web technologies can greatly contribute to the advancement of knowledge: semantic eScience and cultural knowledge evolution.
Jérôme Euzenat, Marie-Christine Rousset, Semantic web, in: Pierre Marquis, Odile Papini, Henri Prade (eds), A guided tour of artificial intelligence research, Springer, Berlin (DE), 575p., 2020, pp181-207
The semantic web aims at making web content interpretable. It is no less than offering knowledge representation at web scale. The main ingredients used in this context are the representation of assertional knowledge through graphs, the definition of the vocabularies used in graphs through ontologies, and the connection of these representations through the web. Artificial intelligence techniques and, more specifically, knowledge representation techniques, are put to use and to the test by the semantic web. Indeed, they have to face typical problems of the web: scale, heterogeneity, incompleteness, and dynamics. This chapter provides a short presentation of the state of the semantic web and refers to other chapters concerning those techniques at work in the semantic web.
Jérôme Euzenat, Fixed-point semantics for barebone relational concept analysis, in: Proc. 16th international conference on formal concept analysis (ICFCA), Strasbourg (FR), (Agnès Braud, Aleksey Buzmakov, Tom Hanika, Florence Le Ber (eds), Proc. 16th international conference on formal concept analysis (ICFCA), Lecture notes in computer science 12733, 2021), pp20-37, 2021
Relational concept analysis (RCA) extends formal concept analysis (FCA) by taking into account binary relations between formal contexts. It has been designed for inducing description logic TBoxes from ABoxes, but can be used more generally. It is especially useful when there exist circular dependencies between objects. In this case, it extracts a unique stable concept lattice family grounded on the initial formal contexts. However, other stable families may exist whose structure depends on the same relational context. These may be useful in applications that need to extract a richer structure than the minimal grounded one. This issue is first illustrated in a reduced version of RCA, which only retains the relational structure. We then redefine the semantics of RCA on this reduced version in terms of concept lattice families closed by a fixed-point operation induced by this relational structure. We show that these families admit a least and greatest fixed point and that the well-grounded RCA semantics is characterised by the least fixed point. We then study the structure of other fixed points and characterise the interesting lattices as the self-supported fixed points.
Jérôme Euzenat, The web as a culture broth for agents and people to grow knowledge, in: Proc. Dagstuhl seminar on Autonomous agents on the web, Wadern (DE), (Olivier Boissier, Andrei Ciortea, Andreas Harth, Alessandro Ricci (eds), Autonomous agents on the web (seminar 21072), Dagstuhl reports 11(1), 2021), pp40-41, 2021
Jérôme Euzenat, Beyond reproduction, experiments want to be understood, in: Proc. 2nd workshop on Scientific knowledge: representation, discovery, and assessment (SciK), Lyon (FR), pp774-778, 2022
The content of experiments must be semantically described. This topic has already been largely covered. However, some neglected benefits of such an approach provide more arguments in favour of scientific knowledge graphs. Beyond being searchable through flat metadata, a knowledge graph of experiment descriptions may be able to provide answers to scientific and methodological questions. This includes identifying non-experimented conditions or retrieving specific techniques used in experiments. In turn, this is useful for researchers as this information can be used for repurposing experiments, checking claimed results or performing meta-analyses.
Jérôme Euzenat, Can AI systems culturally evolve their knowledge?, in: Proc. 4th conference of the Cultural evolution society, Aarhus (DK), 2022
Jérôme Euzenat, Stepwise functional refoundation of relational concept analysis, Research report 9518, INRIA, Grenoble (FR), 68p., October 2023
Relational concept analysis (RCA) is an extension of formal concept analysis allowing one to deal with several related contexts simultaneously. It has been designed for learning description logic theories from data and used within various applications. A puzzling observation about RCA is that it returns a single family of concept lattices although, when the data feature circular dependencies, other solutions may be considered acceptable. The semantics of RCA, provided in an operational way, does not shed light on this issue. In this report, we define these acceptable solutions as those families of concept lattices which belong to the space determined by the initial contexts (well-formed), cannot scale new attributes (saturated), and refer only to concepts of the family (self-supported). We adopt a functional view on the RCA process by defining the space of well-formed solutions and two functions on that space: one expansive and the other contractive. We show that the acceptable solutions are the common fixed points of both functions. This is achieved step-by-step by starting from a minimal version of RCA that considers only one single context defined on a space of contexts and a space of lattices. These spaces are then joined into a single space of context-lattice pairs, which is further extended to a space of indexed families of context-lattice pairs representing the objects manipulated by RCA. We show that RCA returns the least element of the set of acceptable solutions. In addition, it is possible to build dually an operation that generates its greatest element. The set of acceptable solutions is a complete sublattice of the interval between these two elements. Its structure and how the defined functions traverse it are studied in detail.
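In schematic form (notation ours), writing E for the expansive function and C for the contractive one on the space of well-formed solutions, the report's characterisation reads:

```latex
% Acceptable solutions as common fixed points (notation ours):
\[
\mathit{Acceptable} = \{\, S \mid E(S) = S \text{ and } C(S) = S \,\},
\qquad
\mathrm{RCA}(K_0) = \min(\mathit{Acceptable}),
\]
% and the set of acceptable solutions is a complete sublattice of the
% interval between this least element and the dually-built greatest one.
```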
Jérôme Euzenat, Society = Autonomy + Adaptation, in: Proc. Dagstuhl seminar on Agents on the web, Wadern (DE), (Olivier Boissier, Andrei Ciortea, Andreas Harth, Alessandro Ricci, Danai Vachtsevanou (eds), Agents on the web (seminar 21072), Dagstuhl reports 13(2), 2023), pp86, 2023
What makes a true lively society is the capability of its members to autonomously adapt to others. It is not a set of norms cast in iron, be they programming norms or 'legal norms'. It is a set of beings trying to behave with others. This behaviour may lead to explicit norms that make explicit what does not have to be reinvented, but they may well remain implicit, hence continuously adapted. We should design software agents so that they are able to elaborate what drives their (social, but not only) behaviours. They should be allowed to try, to make mistakes, and to transmit what they know. This is the ground on which evolution may happen. This capacity is what should be built into agents in order for them to behave without breaking too many things. The goal is not to reach a static equilibrium: in an open-ended agent space there are always opportunities to learn new things, meet new people and visit new places. Hence, rather than the state reached by agents, it is their ability to surf a dynamic disequilibrium that must be sought. This statement is somewhat made to trigger reactions within the seminar. It reacts to the apparent loss of autonomy of agents. It also extends the one I made for the previous seminar.
Inants and others
Armen Inants, Jérôme Euzenat, So, what exactly is a qualitative calculus?, Artificial intelligence 289:103385, 2020
The paradigm of algebraic constraint-based reasoning, embodied in the notion of a qualitative calculus, is studied within two alternative frameworks. One framework defines a qualitative calculus as "a non-associative relation algebra (NA) with a qualitative representation", the other as "an algebra generated by jointly exhaustive and pairwise disjoint (JEPD) relations". These frameworks provide complementary perspectives: the first is intensional (axiom-based), whereas the second one is extensional (based on semantic structures). However, each definition admits calculi that lie beyond the scope of the other. Thus, a qualitatively representable NA may be incomplete or non-atomic, whereas an algebra generated by JEPD relations may have non-involutive converse and no identity element. The divergence of definitions creates a confusion around the notion of a qualitative calculus and makes the "what" question posed by Ligozat and Renz topical once again. Here we define the relation-type qualitative calculus unifying the intensional and extensional approaches. By introducing the notions of weak identity, inference completeness and Q-homomorphism, we give equivalent definitions of qualitative calculi both intensionally and extensionally. We show that "algebras generated by JEPD relations" and "qualitatively representable NAs" are embedded into the class of relation-type qualitative algebras.
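For reference, the standard extensional definitions at play here (not specific to this paper): base relations are JEPD when they partition U × U, and reasoning uses weak composition, the smallest union of base relations containing the set-theoretic composition:

```latex
% JEPD base relations and weak composition (standard definitions).
\[
\bigcup_{i=1}^{k} R_i = U \times U,
\qquad
R_i \cap R_j = \emptyset \ \ (i \neq j),
\]
\[
R \diamond S \;=\; \bigcap \{\, T \mid T \text{ a union of base relations},\ R \circ S \subseteq T \,\}.
\]
```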
Jradeh and others
Khadija Jradeh, Optimised tableau algorithms for reasoning in the description logic ALC extended with link keys, Thèse d'informatique, Université de Grenoble, Grenoble (FR), 2022
Knowledge graphs (KGs) are increasingly used by different organisations to represent real-world entities in the form of a graph. They may use an ontological layer for describing the classes and properties of the represented entities. RDF knowledge graphs are knowledge graphs that conform to the RDF model. RDF knowledge graph interlinking is the task of identifying different IRIs belonging to different RDF knowledge graphs and referring to the same real-world entity. This facilitates data integration and interoperability by combining the different entity descriptions present in different knowledge graphs. There exist different methods for addressing the task of interlinking RDF knowledge graphs. Link keys are among these methods. They are used for interlinking RDF knowledge graphs described using different ontologies. Link keys specify the properties to be compared to decide whether two entities belonging to different classes and present in different knowledge graphs are the same. Link keys can be expressed as logical axioms and, thus, it is possible to combine them with ontologies and ontology alignments to perform logical reasoning. In this thesis, we study the problem of reasoning with link keys. To formally investigate this problem, we model RDF knowledge graphs, ontologies, and ontology alignments using the description logic ALC, which we choose as a base language for reasoning: ALC covers many modeling capabilities used for knowledge representation and allows for an easier extension to more expressive description logics. We extend ALC with link keys and individual equalities; the resulting description logic is called ALC+LK. We show that link key entailment can be reduced to consistency checking without the need to introduce the negation of link keys. We then design an algorithm for deciding the consistency of an ALC+LK ontology. We prove that the algorithm is sound and complete, and always terminates. This algorithm runs in 2EXPTIME; however, there exist EXPTIME algorithms for reasoning in ALC, and the completion rules added for handling link keys and equalities require no more computational power than that of ALC. In the light of the above, we design a sound, complete, worst-case optimal algorithm for reasoning in ALC+LK. This algorithm is inspired by the compressed tableau algorithm, which allows obtaining the EXPTIME optimal complexity result. However, it has a non-directed behaviour which obstructs its implementation. Last and most importantly, we propose a sound, complete, and worst-case optimal tableau algorithm for reasoning in the description logic ALC with individuals and link keys. This algorithm, in contrast to the non-directed one, is directed by the application of completion rules. This avoids the generation of useless structures and facilitates its implementation. We implement this algorithm and provide a number of proof-of-concept experiments that demonstrate the importance of reasoning with link keys for the data interlinking task.
Chloé Khadija Jradeh, Jérôme David, Olivier Teste, Cássia Trojahn dos Santos, L'apport mutuel de la combinaison des tâches d'interconnexion de données et d'alignement d'ontologies pour l'alignement expressifs, in: Actes 34e journées francophones sur Ingénierie des connaissances (IC), Strasbourg (FR), pp59-68, 2023
Several methods have been proposed to address the tasks of data interlinking and ontology alignment, which are generally treated separately. In this article, we present DICAP, an algorithm enabling their mutual collaboration. The experiments carried out show that adding owl:sameAs relations resulting from data interlinking makes it possible to discover additional ontological correspondences. Moreover, the presence of ontological correspondences enables the extraction of additional, discriminating linkage rules.
Kalaitzakis and others
Andreas Kalaitzakis, Jérôme Euzenat, À quoi sert la spécialisation en évolution culturelle de la connaissance?, in: Maxime Morge (éd), Actes 31e journées francophones sur Systèmes multi-agent (JFSMA), Strasbourg (FR), pp76-85, 2023
Agents may evolve their ontologies by jointly achieving a task. We consider a set of tasks of which each agent only considers a part. We hypothesise that the fewer tasks an agent considers, the higher the accuracy of its best task will be. To test this, we simulate different populations considering an increasing number of tasks. Counter-intuitively, the hypothesis is not verified. On the one hand, when agents have unlimited memory, the more tasks an agent considers, the more accurate it is. On the other hand, when agents have limited memory, the objectives of maximising the accuracy of their best task and of agreeing with each other are mutually exclusive. When societies favour specialisation, agents do not improve their accuracy. However, these agents will decide more often according to their best tasks, thereby improving the performance of their society.
Andreas Kalaitzakis, Jérôme Euzenat, Multi-tasking resource-constrained agents reach higher accuracy when tasks overlap, in: Proc. 20th European conference on multi-agent systems (EUMAS), Napoli (IT), (Vadim Malvone, Aniello Murano (eds), Proc. 20th European conference on multi-agent systems (EUMAS), Lecture notes in computer science 14282, 2023), pp425-434, 2023
Agents have been previously shown to evolve their ontologies while interacting over a single task. However, little is known about how interacting over several tasks affects the accuracy of agent ontologies. Is knowledge learned by tackling one task beneficial for another task? We hypothesize that multi-tasking agents tackling tasks that rely on the same properties are more accurate than multi-tasking agents tackling tasks that rely on different properties. We test this hypothesis by varying two parameters. The first parameter is the number of tasks assigned to the agents. The second parameter is the number of common properties among these tasks. Results show that when deciding for different tasks relies on the same properties, multi-tasking agents reach higher accuracy. This suggests that when agents tackle several tasks, it is possible to transfer knowledge from one task to another.
Luntraru and others
Adriana Luntraru, Value-sensitive knowledge evolution, Master's thesis, Université Grenoble Alpes, Grenoble (FR), 2023
Cultural values are cognitive representations of general objectives, such as independence or mastery, that people use to distinguish whether something is "good" or "bad". More specifically, people may use their values to evaluate alternatives and pick the most compatible one. Cultural values have been previously used in artificial societies of agents with the purpose of replicating and predicting human behavior. However, to the best of our knowledge, they have never been used in the context of cultural knowledge evolution. We consider cooperating agents which adapt their individually learned ontologies by interacting with each other to agree. When two agents disagree during an interaction, one of them needs to adapt its ontology. We use the cultural values of independence, novelty, authority and mastery to influence the choice of which agent adapts in a population of agents sharing the same values. We investigate the effects the choice of cultural values has on the knowledge obtained. Our results show that agents do not improve the accuracy of their knowledge without using the mastery value. Under certain conditions, independence causes the agents to converge to successful interactions faster, and novelty increases knowledge diversity, but both effects come with a large reduction in accuracy. We however did not find any significant effects of authority.
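A minimal sketch of the kind of mechanism described, with all weights and names invented: each agent scores how costly adapting would be against the population's cultural values, and the agent for which adapting is least costly is the one that adapts. This only illustrates how values can bias the choice; the thesis defines the actual operators.

    import random

    # Invented value weights shared by the population (higher = cares more).
    VALUES = {"independence": 0.2, "novelty": 0.1, "authority": 0.4, "mastery": 0.3}

    def adaptation_cost(agent, values=VALUES):
        # Adapting conflicts with independence (keeping one's own ontology)
        # and with authority when the agent outranks its partner; mastery
        # lowers the cost for the less accurate agent (it has more to learn);
        # novelty adds a random taste for change.
        return (values["independence"]
                + values["authority"] * agent["rank"]
                - values["mastery"] * (1.0 - agent["accuracy"])
                - values["novelty"] * random.random())

    def who_adapts(a, b):
        # The agent with the lower cost of adapting is the one that adapts.
        return a if adaptation_cost(a) <= adaptation_cost(b) else b

    a = {"name": "a", "rank": 0.9, "accuracy": 0.7}
    b = {"name": "b", "rank": 0.3, "accuracy": 0.5}
    print(who_adapts(a, b)["name"], "adapts its ontology")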
Pauwels and others
Pieter Pauwels, María Poveda Villalón, Alvaro Sicilia, Jérôme Euzenat, Semantic technologies and interoperability in the built environment, Semantic web journal 9(6):731-734, 2018
The built environment consists of plenty of physical assets with which we interact on a daily basis. In order to improve not only our built environment, but also our interaction with that environment, we would benefit greatly from semantic representations of this environment. This not only includes buildings, but also large infrastructure (bridges, tunnels, waterways, underground systems) and geospatial data. With this special issue, an insight is given into the current state of the art in terms of semantic technologies and interoperability in this built environment. This editorial not only summarizes the content of the Special Issue on Semantic Technologies and Interoperability in the Built Environment, it also provides a brief overview of the current state of the art in general in terms of standardisation and community efforts.
Rousset and others
Marie-Christine Rousset, Manuel Atencia, Jérôme David, Fabrice Jouanot, Olivier Palombi, Federico Ulliana, Datalog revisited for reasoning in linked data, in: Giovambattista Ianni, Domenico Lembo, Leopoldo Bertossi, Wolfgang Faber, Birte Glimm, Georg Gottlob, Steffen Staab (eds), Proc. 13th International summer school on reasoning web (RW), Lecture notes in computer science 10370, 2017, pp121-166
Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links and properties on those entities. Equipping Linked Data with inference paves the way to making the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists in equipping RDF triplestores with Datalog inference rules. This rule language makes it possible to capture in a uniform manner OWL constraints that are useful in practice, such as property transitivity or symmetry, but also domain-specific rules with practical relevance for users in many domains of interest. The expressivity and the genericity of this framework are illustrated for modeling Linked Data applications and for developing inference algorithms. In particular, we show how it allows modeling the problem of data linkage in Linked Data as a reasoning problem on possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, whilst effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction.
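To make the rule language concrete, here are rules of the kind mentioned above, written as Datalog over binary predicates standing for RDF properties; the predicate names are illustrative and not taken from the paper:

    % Property transitivity, an OWL constraint expressible as a rule:
    \[ p(x, z) \leftarrow p(x, y) \wedge p(y, z) \]
    % Property symmetry:
    \[ q(y, x) \leftarrow q(x, y) \]
    % A domain-specific data-linkage rule in the same language:
    % records sharing an ISBN denote the same entity.
    \[ \mathit{sameAs}(x, y) \leftarrow \mathit{isbn}(x, z) \wedge \mathit{isbn}(y, z) \]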
Shvaiko and others
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 12th ISWC workshop on ontology matching (OM), Wien (AT), 225p., 2017
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 13th ISWC workshop on ontology matching (OM), Monterey (CA US), 227p., 2018
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn dos Santos (eds), Proc. 14th ISWC workshop on ontology matching (OM), Auckland (NZ), 210p., 2020
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn dos Santos (eds), Proc. 15th ISWC workshop on ontology matching (OM), Athens (GR), 253p., 2020
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn dos Santos (eds), Proc. 16th ISWC workshop on ontology matching (OM), (online), 218p., 2021
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn dos Santos (eds), Proc. 17th ISWC workshop on ontology matching (OM), (online), 230p., 2022
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn dos Santos (eds), Proc. 18th ISWC workshop on ontology matching (OM), Athens (GR), 202p., 2023
Sicilia and others
Alvaro Sicilia, Pieter Pauwels, Leandro Madrazo, María Poveda Villalón, Jérôme Euzenat (eds), Special Issue on Semantic Technologies and Interoperability in the Built Environment, Semantic web journal (special issue) 9(6):729-855, 2018
Siebers and others
Anaïs Siebers, Intrinsic exploration-motivation in cultural knowledge evolution, Master's thesis, Ruhr Universität, Bochum (DE), 2023
In cultural knowledge evolution simulated by multi-agent simulations, agents can improve the accuracy of their knowledge by interacting with other agents and adapting their knowledge with the aim of agreeing. But their knowledge might be confined to specific areas because they do not have the capacity to explore the world around them. Since intrinsic motivation to explore has already been shown to increase exploration in artificial agents, this thesis investigates whether and how agents in simulations of cultural knowledge evolution can be motivated to explore, and how far this improves and changes their knowledge. Three different kinds of motivation were investigated: curiosity, creativity and non-exploration. Moreover, intrinsic motivation was modelled with and without reinforcement learning. Agents either explored on their own or picked specific interaction partners. It has been shown that it is possible to model agents with intrinsic motivation to explore in cultural knowledge evolution, and that this has a significant effect on the agents' knowledge. Contrary to expectations and other studies, this did not lead to an increase in knowledge completeness. Of all intrinsic motivations, curiosity yielded the highest accuracy and completeness. Models with reinforcement learning performed similarly to direct models. As expected, intrinsic motivation led to faster convergence of the agents' knowledge, especially with social agents. Heterogeneously motivated agents had a higher accuracy and completeness than homogeneously motivated agents only in specific cases. This thesis can be regarded as a foundation for further investigation into the role of intrinsic motivation in cultural knowledge evolution. Different forms of intrinsic motivation or different reinforcement learning techniques could be tested. Additionally, intrinsic motivation at different stages of the experiment, or in different ratios (for example, curious agents mixed with unmotivated agents), could be investigated in more detail. Lastly, agents could teach other agents what they have explored most.
van den Berg and others
Line van den Berg, Epistemic alignment repair, in: Proc. 31st ESSLLI student session, Riga (LV), 2019
Ontology alignments enable interoperability between heterogeneous information resources. The Alignment Repair Game (ARG) specifically provides a way for agents to simultaneously communicate and improve the alignment when a communication failure occurs. This is achieved by applying adaptation operators that provide a revision strategy for agents to resolve failures with minimum information loss. In this paper, we explore how closely these operators resemble logical dynamics. We develop a variant of Dynamic Epistemic Logic called DEOL to capture the dynamics of ARG by modeling ontologies as knowledge and alignments as belief with respect to the plausibility relation. The dynamics of ARG are then achieved through announcements and conservative upgrades. With the representation of ARG in DEOL, we formally establish the limitations and the redundancy of the adaptation operators. More precisely, we show that, for a complete logical reasoner, replace, addjoin and refine are redundant for one or both agents in the game, and that add would be replaced by addjoin in all cases.
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Agent ontology alignment repair through dynamic epistemic logic, in: Bo An, Neil Yorke-Smith, Amal El Fallah Seghrouchni, Gita Sukthankar (eds), Proc. 19th ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Auckland (NZ), pp1422-1430, 2020
Ontology alignments enable agents to communicate while preserving heterogeneity in their information. Alignments may not be provided as input and should be able to evolve when communication fails or when new information contradicting the alignment is acquired. In the Alignment Repair Game (ARG) this evolution is achieved via adaptation operators. ARG was evaluated experimentally and the experiments showed that agents converge towards successful communication and improve their alignments. However, whether the adaptation operators are formally correct, complete or redundant is still an open question. In this paper, we introduce a formal framework based on Dynamic Epistemic Logic that allows us to answer this question. This framework allows us (1) to express the ontologies and alignments used, (2) to model the ARG adaptation operators through announcements and conservative upgrades and (3) to formally establish the correctness, partial redundancy and incompleteness of the adaptation operators in ARG.
The refine operator is not partially redundant with respect to Agent b (because it has no way to detect the incoherence from the announcement alone).
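For orientation, the two dynamics through which these works capture the ARG operators are standard in Dynamic Epistemic Logic; schematically, following common DEL conventions rather than the exact definitions of the papers:

    % Public announcement of phi: restrict the model to phi-worlds.
    \[ M, w \models [!\varphi]\,\psi \iff
       \big( M, w \models \varphi \Rightarrow M|_{\varphi}, w \models \psi \big) \]
    % Conservative upgrade with phi: the most plausible phi-worlds are
    % promoted above all others; the rest of the plausibility order is
    % left unchanged.
    \[ M, w \models [{\uparrow}\varphi]\,\psi \iff M_{\uparrow\varphi}, w \models \psi \]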
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Unawareness in multi-agent systems with partial valuations, in: Proc. 10th AAMAS workshop on Logical Aspects of Multi-Agent Systems (LAMAS), Auckland (NZ), 2020
Public signature awareness is satisfied if agents are aware of the vocabulary, i.e. the propositions, used by other agents to think and talk about the world. However, assuming that agents are fully aware of each other's signatures prevents them from adapting their vocabularies to newly gained information, whether from the environment or learned through agent communication. This assumption is therefore not realistic for open multi-agent systems. We propose a novel way to model awareness with partial valuations that drops public signature awareness and can model agent signature unawareness, and we give a first view on defining the dynamics of raising and forgetting awareness in this framework.
Line van den Berg, Malvin Gattinger, Dealing with unreliable agents in dynamic gossip, in: Proc. 3rd International workshop on dynamic logic (DaLi), Praha (CZ), (Manuel Martins, Igor Sedlár (eds), Proc. 3rd International workshop on dynamic logic (DaLi), Lecture notes in computer science 12569, 2020), pp51-67, 2020
Gossip describes the spread of information throughout a network of agents. It investigates how agents, each starting with a unique secret, can efficiently make peer-to-peer calls so that ultimately everyone knows all secrets. In Dynamic Gossip, agents share phone numbers in addition to secrets, which allows the network to grow at run-time. Most gossip protocols assume that all agents are reliable, but this assumption does not hold in many practical applications. We drop this assumption and study Dynamic Gossip with unreliable agents. The aim is then for agents to learn all secrets of the reliable agents and to identify the unreliable agents. We show that with unreliable agents classic results on Dynamic Gossip no longer hold. Specifically, the Learn New Secrets protocol is no longer characterised by the same class of graphs, so-called sun graphs. In addition, we show that unreliable agents that do not initiate communication are harder to identify than agents that do. This has paradoxical consequences for measures against unreliability, for example to combat the spread of fake news in social networks.
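The following self-contained Python sketch shows the classical Learn New Secrets (LNS) protocol on a dynamic gossip graph: an agent may call any agent whose number it knows but whose secret it does not, and a call merges the secrets and numbers of both participants. All agents here are reliable; the paper studies what breaks when some are not, and the initial graph is invented for the example.

    import random

    # Dynamic gossip state: each agent knows a set of phone numbers
    # (including its own) and a set of secrets (initially its own).
    numbers = {0: {0, 1}, 1: {1, 2}, 2: {2, 0}}
    secrets = {a: {a} for a in numbers}

    def lns_calls(numbers, secrets, rng=random.Random(1)):
        """Run Learn New Secrets: x may call y if x knows y's number
        but not yet y's secret; a call merges numbers and secrets."""
        history = []
        while True:
            allowed = [(x, y) for x in numbers for y in numbers[x]
                       if y != x and y not in secrets[x]]
            if not allowed:
                return history
            x, y = rng.choice(allowed)
            numbers[x] = numbers[y] = numbers[x] | numbers[y]
            secrets[x] = secrets[y] = secrets[x] | secrets[y]
            history.append((x, y))

    calls = lns_calls(numbers, secrets)
    print("calls made:", calls)
    print("everyone an expert:",
          all(s == set(numbers) for s in secrets.values()))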
Line van den Berg, Forgetting agent awareness: a partial semantics approach, in: Proc. 4th conference on Women in Logic workshop (WiL), Paris (FR), (Sandra Alves, Sandra Kiefer, Ana Sokolova (eds), Proc. 4th conference on Women in Logic workshop (WiL), Paris (FR), 2020), pp18-21, 2020
Partial Dynamic Epistemic Logic allows agents to have different knowledge representations about the world through agent awareness. Agents use their own vocabularies to reason and talk about the world and raise their awareness when confronted with new vocabulary. Through raising awareness the vocabularies of agents are extended, suggesting there is a dual, inverse operator for forgetting awareness that decreases vocabularies. In this paper, we discuss such an operator. Unlike raising awareness, this operator may induce an abstraction on models that removes evidence while preserving conclusions. This is useful to better understand how agents with different knowledge representations communicate with each other, as they may forget the justifications that led them to their conclusions.
Line van den Berg, Manuel Atencia, Jérôme Euzenat, A logical model for the ontology alignment repair game, Autonomous agents and multi-agent systems 35(2):32, 2021
Ontology alignments enable agents to communicate while preserving heterogeneity in their knowledge. Alignments may not be provided as input and should be able to evolve when communication fails or when new information contradicting the alignment is acquired. The Alignment Repair Game (ARG) has been proposed for agents to simultaneously communicate and repair their alignments through adaptation operators when communication failures occur. ARG has been evaluated experimentally and the experiments showed that agents converge towards successful communication and improve their alignments. However, whether the adaptation operators are formally correct, complete or redundant could not be established by experiments. We introduce a logical model, Dynamic Epistemic Ontology Logic (DEOL), that enables us to answer these questions. This framework allows us (1) to express the ontologies and alignments used via a faithful translation from ARG to DEOL, (2) to model the ARG adaptation operators as dynamic modalities and (3) to formally define and establish the correctness, partial redundancy and incompleteness of the adaptation operators in ARG.
The refine operator is not partially redundant with respect to Agent b (because it has no way to detect the incoherence from the announcement alone).
Line van den Berg, Cultural knowledge evolution in dynamic epistemic logic, PhD thesis in mathematics and computer science, Université de Grenoble, Grenoble (FR), October 2021
To reason and talk about the world, agents may use their own distinct vocabularies, structured into knowledge representations, also called ontologies. In order to communicate, they use alignments: translations between terms of their ontologies. However, ontologies may change, requiring their alignments to evolve accordingly. Experimental cultural evolution offers a framework to study the mechanisms of their knowledge evolution. It has been applied to the evolution of alignments in the Alignment Repair Game (ARG). Experiments have shown that, through ARG, agents improve their alignments and reach successful communication. Yet, these experiments are not sufficient to understand the formal properties of cultural knowledge evolution. This thesis bridges experimental cultural knowledge evolution with a theoretical model of cultural knowledge evolution in logic. This is achieved by introducing Dynamic Epistemic Ontology Logic (DEOL) and defining a faithful translation of ARG into DEOL that (a) encodes the ontologies, (b) maps agents' ontologies and alignments to knowledge and beliefs, and (c) captures the adaptation operators through announcements and conservative upgrades. This model shows that all but one of the adaptation operators are correct, that they are incomplete, and that some are partially redundant. Three differences between the ARG agents and their logical model explain these results, leading to an independent model of awareness based on partial valuations and weakly reflexive relations. An alternative model of ARG is then defined, under which the formal properties are re-examined, showing that this model is closer to the original game. This is a first step towards defining a theoretical model of cultural knowledge evolution.
Line van den Berg, Jérôme Euzenat, The small Class? gamebook, Pedagogical material, 2022
Class? is an enjoyable card game aiming at grouping colourful cards into meaningful classes. It illustrates facets of reasoning with classifications. In order to introduce Class? progressively, this small gamebook provides a sequence of games before getting to the Class? game itself and beyond. The games are presented in increasing order of difficulty, so that each game benefits from the mastery of the previous ones.
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Raising awareness without disclosing truth, Annals of mathematics and artificial intelligence 91(4):431-464, 2023
Agents use their own vocabularies to reason and talk about the world. Public signature awareness is satisfied if agents are aware of the vocabularies, or signatures, used by all agents they may, eventually, interact with. Multi-agent modal logics and in particular Dynamic Epistemic Logic rely on public signature awareness for modeling information flow in multi-agent systems. However, this assumption is not desirable for dynamic and open multi-agent systems because (1) it prevents agents from using unique signatures other agents are unaware of, (2) it prevents agents from openly extending their signatures when encountering new information, and (3) it requires that all future knowledge and beliefs of agents are bounded by the current state. We propose a new semantics for awareness that enables us to drop public signature awareness. This semantics is based on partial valuation functions and weakly reflexive relations. Dynamics for raising public and private awareness are then defined in such a way as to differentiate between becoming aware of a proposition and learning its truth value. With this, we show that knowledge and beliefs are not affected through the raising operations.
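One way to picture the central idea, schematically and without the paper's full definitions: valuations are partial functions, and raising awareness of an atom extends where it is defined without committing to a truth value.

    % Valuations may leave atoms undefined at a world:
    \[ V : W \times P \rightharpoonup \{0, 1\} \]
    % Raising awareness of p replaces each world w at which p is undefined
    % by two copies agreeing with w elsewhere, one making p true and one
    % making p false: the agent becomes aware of p yet learns nothing
    % about its truth value, so knowledge and beliefs are unaffected.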
Line van den Berg, Jérôme Euzenat, Class? en classe: jouer avec des classifications pour combiner mathématiques et informatique, Recherches et recherches-actions en didactique de l'informatique 1(1), 2024
Class? is a game in which players must organise their cards according to a hidden classification. The cards played by the other players allow them to guess where to put their own. It was designed so that schoolchildren grasp that the same objects can be classified in different ways and that it is possible to transmit a classification without making it explicit. The game draws on notions that are easily presented with playing cards, such as sets defined by necessary and sufficient conditions (classes). This makes it possible to introduce hierarchical classifications and algorithmic notions (condition testing, recursion) for manipulating them. Finally, it requires reasoning logically about these notions. Class? has been successfully played by pupils from CM2 (5th grade) to seconde (10th grade). We therefore asked ourselves how to position it as a pedagogical resource. It appears, first of all, that it does not seem to illustrate concepts particularly emphasised by the official curricula of the French national education system. Rather, it offers an alternative way of reinforcing transversal notions that are very important in computer science. We characterise Class? with respect to computer-science-unplugged efforts and to other games usable for this purpose. Finally, we discuss a decomposition of Class? into a succession of simpler games allowing the notions involved to be introduced one after the other.
Vizzini and others
Jérémy Vizzini, Data interlinking with relational concept analysis, Master's thesis, Université Grenoble Alpes, Grenoble (FR), 2017
Vast amounts of RDF data are made available on the web by various institutions providing overlapping information. To be fully exploited, different representations of the same object across various data sets have to be identified. This is what is called data interlinking. One novel way to generate such links is to use link keys. Link keys generalise database keys by applying them across two data sets. The structure of RDF makes this problem much more complex than for relational databases, for several reasons: an instance can have multiple values for a given attribute, and values of properties are not necessarily datatypes but may be instances of the graph. A first method has been designed to extract and select link keys from two classes of objects; it deals with multiple values but not with object values. Moreover, the extraction step has been rephrased in formal concept analysis (FCA), making it possible to generate link keys across relational tables. Our aim is to extend this work so that it can deal with multiple values. We then show how to use it to deal with object values when the data set is cycle-free. This encoding does not necessarily generate the optimal link keys. Hence, we use relational concept analysis (RCA), an extension of FCA taking relations between concepts into account. We show that a new expression of this problem is able to extract the optimal link keys even in the presence of circularities. Moreover, the elaborated process does not require information about the alignments of the ontologies to determine for which pairs of classes to extract link keys. We implemented these methods and evaluated them by reproducing the experiments made in previous studies. This shows that the method produces the expected results, as well as the (also expected) scalability issues.
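To fix ideas on what is being extracted, here is a deliberately naive Python sketch, not the RCA-based method of the thesis: for every pair of instances across two datasets, collect the property pairs on which they share at least one value; each distinct agreement set is a link key candidate, and the instance pairs yielding it are the links it would generate. All data and names are invented.

    from collections import defaultdict

    # Toy multi-valued descriptions of instances in two datasets.
    d1 = {"a1": {"title": {"Alice"}, "author": {"Carroll"}},
          "a2": {"title": {"Zadig"}, "author": {"Voltaire"}}}
    d2 = {"b1": {"titre": {"Alice"}, "auteur": {"Carroll"}},
          "b2": {"titre": {"Zadig"}, "auteur": {"Arouet", "Voltaire"}}}

    candidates = defaultdict(set)
    for x, px in d1.items():
        for y, py in d2.items():
            # Property pairs on which x and y share at least one value.
            agree = frozenset((p, q) for p in px for q in py
                              if px[p] & py[q])
            if agree:
                candidates[agree].add((x, y))

    for link_key, links in candidates.items():
        print(sorted(link_key), "->", sorted(links))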
Werner and others
Luisa Werner, Pierre Genevès, Nabil Layaïda, Jérôme Euzenat, Damien Graux, Reproduce, replicate, reevaluate: the long but safe way to extend machine learning methods, in: Proc. 38th AAAI Conference on Artificial Intelligence (AAAI), Vancouver (CA), pp15850-15858, 2024
Reproducibility is a desirable property of scientific research. On the one hand, it increases confidence in results. On the other hand, reproducible results can be extended on a solid basis. In rapidly developing fields such as machine learning, the latter is particularly important to ensure the reliability of research. In this paper, we present a systematic approach to reproducing (using the available implementation), replicating (using an alternative implementation) and reevaluating (using different datasets) state-of-the-art experiments. This approach enables the early detection and correction of deficiencies and thus the development of more robust and transparent machine learning methods. We detail the independent reproduction, replication, and reevaluation of the initially published experiments with a method that we want to extend. For each step, we identify issues and draw lessons learned. We further discuss solutions that have proven effective in overcoming the encountered problems. This work can serve as a guide for further reproducibility studies and generally improve reproducibility in machine learning.