Deep neural networks have achieved remarkable performance in document image classification; yet, there has been limited research into the explainability of these models. In this paper, we present a comprehensive study in which we analyze 9 different explainability methods across 10 different state-of-the-art document classification models and 2 popular benchmark datasets, making three major contributions. First, through an exhaustive qualitative and quantitative analysis of various explainability approaches, we demonstrate that the majority of them perform poorly in generating useful explanations for document images, with only two techniques, namely Occlusion and DeepSHAP, providing relatively adequate, human-interpretable, and faithful explanations. Second, to identify the features most relevant to the models’ predictions, we present an approach to generate counterfactual explanations. An analysis of these explanations reveals that many document classification models can be highly susceptible to minor perturbations in the input. Moreover, they may easily fall victim to biases in the document data and end up relying on seemingly irrelevant features to make their decisions, with 25-50% of predictions overall, and up to 60% for some classes, depending strongly on these features. Lastly, our analysis reveals that the popular document benchmark datasets, RVL-CDIP and Tobacco3482, are inherently biased, with document identification (ID) numbers of specific styles consistently appearing in certain document regions. If unaddressed, this bias allows the models to predict document classes solely by looking at the ID numbers and prevents them from learning more complex document features. Overall, by unveiling the strengths and weaknesses of various explainability methods, document datasets, and deep learning models, our work presents a major step towards creating more transparent and robust document image classification systems.
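As a minimal illustration of how the two best-performing attribution methods can be applied to a document classifier, the sketch below uses PyTorch and the Captum library (with DeepSHAP approximated by Captum's DeepLiftShap); neither library is prescribed by our study, and the stand-in model and tensors are placeholders for a trained classifier, the documents to explain, and a set of reference documents.

import torch
from captum.attr import Occlusion, DeepLiftShap

# Stand-in classifier and data; in practice these would be a trained document
# model, real document images, and representative reference documents.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(32, 16),
).eval()
doc_batch = torch.rand(2, 3, 224, 224)        # documents to explain
background_docs = torch.rand(8, 3, 224, 224)  # reference documents for DeepSHAP
pred_classes = model(doc_batch).argmax(dim=1)

# Occlusion: slide a patch over the document and measure the drop in the
# predicted class score; larger drops mark more important regions.
occ_attr = Occlusion(model).attribute(
    doc_batch,
    target=pred_classes,
    sliding_window_shapes=(3, 32, 32),  # occlusion patch size in pixels
    strides=(3, 16, 16),
    baselines=0,                        # value used to fill the occluded patch
)

# DeepSHAP (Captum's DeepLiftShap): SHAP values estimated against the
# reference documents used as a baseline distribution.
shap_attr = DeepLiftShap(model).attribute(
    doc_batch,
    baselines=background_docs,
    target=pred_classes,
)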
Model interpretability and robustness are becoming increasingly critical for the safe and practical deployment of deep learning (DL) models in industrial settings. As DL-backed automated document processing systems become increasingly common in business workflows, there is a pressing need to enhance the interpretability and robustness of document image classification, an integral component of such systems. Surprisingly, while much research has been devoted to improving the performance of deep models for this task, little attention has been given to their interpretability and robustness. In this paper, we aim to improve upon both aspects and introduce DocXClassifier, an inherently interpretable deep document classifier that not only achieves significant performance improvements over existing approaches in image-based document classification, but is also capable of generating feature importance maps alongside its predictions. Our approach attains state-of-the-art performance in image-based classification on two popular document datasets, RVL-CDIP and Tobacco3482, with top-1 classification accuracies of 94.17% and 95.57%, respectively. It also sets a new record for the highest image-based classification accuracy on Tobacco3482 without transfer learning from RVL-CDIP, at 90.14%. Furthermore, our proposed training strategy demonstrates superior robustness compared to existing approaches, significantly outperforming them on 19 out of 21 different types of novel data distortions while achieving comparable results on the remaining two. By combining robustness with interpretability, DocXClassifier represents a promising step towards the practical deployment of DL models for document classification tasks.
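To make the notion of generating feature importance maps alongside predictions concrete, the sketch below shows one generic pattern: an attention-weighted pooling head whose per-location weights double as a coarse importance map produced in the same forward pass as the class logits. This is an illustrative example only, not the actual DocXClassifier architecture; the backbone choice (torchvision's ConvNeXt) and all names are assumptions made for the example.

import torch
import torch.nn as nn
from torchvision.models import convnext_base

class InterpretableDocClassifier(nn.Module):
    """Toy classifier that returns class logits together with a spatial
    importance map derived from its attention-pooling head."""

    def __init__(self, num_classes: int = 16):
        super().__init__()
        self.backbone = convnext_base(weights=None).features  # (N, 1024, h, w) feature maps
        self.attn = nn.Conv2d(1024, 1, kernel_size=1)         # per-location attention logits
        self.head = nn.Linear(1024, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                                       # (N, 1024, h, w)
        weights = torch.softmax(self.attn(feats).flatten(2), dim=-1)   # (N, 1, h*w)
        pooled = (feats.flatten(2) * weights).sum(dim=-1)              # attention-weighted pooling
        logits = self.head(pooled)
        # The attention weights double as a coarse feature importance map,
        # emitted in the same forward pass as the prediction.
        importance = weights.reshape(x.size(0), 1, *feats.shape[-2:])
        return logits, importance

model = InterpretableDocClassifier().eval()
logits, importance_map = model(torch.rand(1, 3, 224, 224))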