
Landmark Explanation: An Explainer for Entity Matching Models

Guerra F.
2021

Abstract

State-of-the-art approaches model Entity Matching (EM) as a binary classification problem, where Machine Learning (ML) or Deep Learning (DL) techniques are applied to evaluate whether descriptions of pairs of entities refer to the same real-world instance. Although these approaches have been experimentally shown to achieve high effectiveness, their adoption in real scenarios is limited by the lack of interpretability of their behavior. This paper showcases Landmark Explanation, a tool that enables generic post-hoc (model-agnostic) perturbation-based explanation systems to explain the behavior of EM models. In particular, Landmark Explanation computes local interpretations, i.e., given a description of a pair of entities and an EM model, it computes the contribution of each term to the prediction. The demonstration shows that the explanations generated by Landmark Explanation are effective even for non-matching pairs of entities, a challenge for explanation systems.
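As an illustration only, the sketch below shows what a generic perturbation-based local explanation of an EM prediction could look like, using LIME's text explainer on a serialized pair of entity descriptions. The em_predict_proba scorer and the serialization format are hypothetical stand-ins introduced for the example; they are not the actual API of Landmark Explanation or of any specific EM model.

# Minimal sketch, assuming a black-box EM classifier exposed as a
# probability function; only the LIME calls reflect a real library API.
import numpy as np
from lime.lime_text import LimeTextExplainer

def em_predict_proba(texts):
    # Hypothetical stand-in for an EM classifier: maps a list of serialized
    # entity-pair descriptions to rows of [P(non-match), P(match)].
    rng = np.random.default_rng(0)
    scores = rng.random(len(texts))              # placeholder match scores
    return np.column_stack([1.0 - scores, scores])

# Serialize the two entity descriptions into one text so the explainer can
# perturb (drop) individual terms and observe how the prediction changes.
pair_text = ("name: iphone 7 32gb black price: 389 || "
             "name: apple iphone 7 (32 gb) price: 395")

explainer = LimeTextExplainer(class_names=["non-match", "match"])
explanation = explainer.explain_instance(pair_text, em_predict_proba,
                                         num_features=8, num_samples=500)

# Per-term contributions to the "match" prediction (positive weights push
# toward match, negative weights toward non-match).
for term, weight in explanation.as_list():
    print(f"{term:15s} {weight:+.3f}")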
Year: 2021
Conference: 30th ACM International Conference on Information and Knowledge Management (CIKM 2021)
Location: Australia
Dates: Nov 01-05, 2021
Pages: 4680-4684
Authors: Baraldi, A.; Del Buono, F.; Paganelli, M.; Guerra, F.
Landmark Explanation: An Explainer for Entity Matching Models / Baraldi, A.; Del Buono, F.; Paganelli, M.; Guerra, F. - (2021), pp. 4680-4684. (Paper presented at the 30th ACM International Conference on Information and Knowledge Management, CIKM 2021, held in Australia, Nov 01-05, 2021) [10.1145/3459637.3481981].
Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1264814