2022 IEEE International Conference on Big Data (Big Data)

Abstract

In the legal field, Technology-Assisted Review (TAR) systems for e-discovery are typically perceived as "black boxes" by practitioners, providing little to no insight into how the system makes its classification predictions. This lack of explainability renders TAR decisions opaque, making it difficult for attorneys to trust the system's recommendations and thus to discharge their ethical obligations to clients. In addition, litigants cannot fully participate in the process if they cannot understand the relevance judgments, and jurists cannot make well-informed rulings on discovery matters. The Fuzzy ARTMAP algorithm is an explainable neural network architecture that permits the extraction of fuzzy If-Then rules from the model at any point in its training; the model is also geometrically interpretable, allowing a researcher or practitioner to understand what the model has learned up to that point.

This paper evaluates the explainable Fuzzy ARTMAP algorithm for use in the TAR domain. Not only does it achieve suitable document classification performance for a TAR system, as measured by recall and recall-at-effort, but it also enables direct insight into how the algorithm decides relevance. This is in contrast to existing approaches to explainable TAR, which rely on extracting document snippets as post hoc explanations of why a document is relevant.

In addition, the effect of different document features (tf-idf, word2vec, and GloVe) on recall performance is evaluated. Performance is compared to AutoTAR, the state-of-the-art TAR algorithm, which makes relevance predictions but cannot explain them. Experiments on the Reuters-21578 and 20 Newsgroups corpora indicate robust recall performance overall, and metrics comparable to or better than AutoTAR in some circumstances.
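To make the rule-extraction claim concrete, the following is a minimal Python sketch (illustrative, not code from the paper) of how fuzzy If-Then rules can be read off a trained Fuzzy ARTMAP model. It assumes the standard complement-coding convention, under which each learned category weight vector w_j = (u_j, v_j^c) encodes a hyperbox with lower corner u_j and upper corner v_j; all function and variable names here are hypothetical.

```python
import numpy as np

def extract_rules(weights, labels, feature_names):
    """Turn trained Fuzzy ARTMAP category weights into fuzzy If-Then rules.

    Under complement coding, each category weight w_j = (u_j, (v_j)^c)
    encodes a hyperbox with lower corner u_j and upper corner v_j, so the
    rule reads: IF every feature x_i lies in [u_j[i], v_j[i]]
    THEN predict the class label associated with that category.

    weights:       (n_categories, 2*d) array of learned category weights
    labels:        class label associated with each category (map field)
    feature_names: d human-readable feature names
    """
    d = weights.shape[1] // 2
    rules = []
    for w, label in zip(weights, labels):
        lower = w[:d]            # u_j: hyperbox lower corner
        upper = 1.0 - w[d:]      # v_j: recovered from the complement part
        conds = [f"{name} in [{lo:.2f}, {hi:.2f}]"
                 for name, lo, hi in zip(feature_names, lower, upper)]
        rules.append("IF " + " AND ".join(conds) + f" THEN {label}")
    return rules
```

In a TAR setting, the labels would be relevance classes (e.g., "relevant" / "not relevant") and the feature names would correspond to vocabulary terms or embedding dimensions, so each rule states the feature ranges under which the model deems a document relevant.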
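Similarly, the recall-at-effort metric used in the evaluation can be computed directly from a ranked review list. The helper below is a hypothetical illustration of the standard definition (recall achieved after a reviewer examines a fixed number of top-ranked documents), not the paper's evaluation code.

```python
def recall_at_effort(ranking, relevant, effort):
    """Fraction of all relevant documents found after reviewing the
    top-`effort` documents of a ranked list.

    ranking:  document ids ordered by predicted relevance (best first)
    relevant: set of ids of the truly relevant documents
    effort:   number of documents the reviewer examines
    """
    found = sum(1 for doc in ranking[:effort] if doc in relevant)
    return found / len(relevant) if relevant else 0.0
```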
