Quantifying and Advancing Search System Explainability

SIGIR 2023

As information retrieval (IR) systems, such as search engines and conversational agents, become ubiquitous across domains, the need for transparent and explainable systems grows to ensure accountability, fairness, and unbiased results. Despite many recent advances in explainable AI and IR techniques, there is no consensus on what it means for a system to be explainable. Although a growing body of literature suggests that explainability comprises multiple subfactors, virtually all existing approaches treat it as a singular notion. Additionally, while neural retrieval models (NRMs) have become popular for their high performance, their explainability remained largely unexplored until recent years. Many questions remain open about how best to understand how these complex models arrive at their decisions, and about how well such methods serve both developers and end users. This research aims to develop effective methods to evaluate and advance explainable retrieval systems, contributing to the broader research goal of making potential biases more identifiable.

Download paper here