Abstract

As text classifiers become increasingly used in real-time applications, it is critical to consider not only their accuracy but also their robustness to changes in the data distribution. In this paper, we consider the case where there is a confounding variable $Z$ that influences both the text features $X$ and the class variable $Y$. For example, a classifier trained to predict the health status of a user based on their online communications may be confounded by socioeconomic variables. When the influence of $Z$ changes from training to testing data, we find that classifier accuracy can degrade rapidly. Our approach, based on Pearl’s back-door adjustment, estimates the underlying effect of a text variable on the class variable while controlling for the confounding variable. Although our goal is prediction, not causal inference, we find that such adjustments are essential to building text classifiers that are robust to confounding variables. On three diverse text classification tasks, we find that covariate adjustment results in higher accuracy than competing baselines over a range of confounding relationships (e.g., in one setting, accuracy improves from 60% to 81%).
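
To make the approach concrete, below is a minimal sketch of back-door adjustment applied to a discriminative text classifier: fit P(y | x, z) with the confounder appended to the feature vector, then marginalize Z out at prediction time via P(y | x) = sum_z P(y | x, z) P(z). The synthetic data, binary confounder, and choice of logistic regression are illustrative assumptions, not the paper's exact implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: X are text count features, y the class labels,
# and z a binary confounder observed only at training time.
rng = np.random.default_rng(0)
n, d = 1000, 50
z = rng.integers(0, 2, size=n)                          # confounder Z
X = rng.poisson(0.2 + 0.3 * z[:, None], size=(n, d))    # Z influences text X
y = (0.5 * X[:, 0] + 1.5 * z                            # Z also influences Y
     + rng.normal(size=n) > 1.2).astype(int)

# Fit P(y | x, z) by appending the confounder to the feature vector.
Xz = np.hstack([X, z[:, None]])
model = LogisticRegression(max_iter=1000).fit(Xz, y)

# Back-door adjustment at prediction time: Z is unobserved for test
# documents, so marginalize it out: P(y|x) = sum_z P(y|x,z) P(z).
p_z1 = z.mean()                                         # empirical P(Z = 1)

def predict_adjusted(X_test):
    X0 = np.hstack([X_test, np.zeros((len(X_test), 1))])
    X1 = np.hstack([X_test, np.ones((len(X_test), 1))])
    p0 = model.predict_proba(X0)[:, 1]                  # P(y=1 | x, z=0)
    p1 = model.predict_proba(X1)[:, 1]                  # P(y=1 | x, z=1)
    return (1 - p_z1) * p0 + p_z1 * p1                  # sum over Z

X_test = rng.poisson(0.35, size=(5, d))
print(predict_adjusted(X_test))

Because the adjusted prediction weights P(y | x, z) by the marginal P(z) rather than by whatever z happens to accompany a test document, the classifier's reliance on the confounder is reduced, which is what makes it robust when the Z–Y relationship shifts between training and testing.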

Citation

@InProceedings{landeiro2016robust,
  author =       {Virgile Landeiro and Aron Culotta},
  title =        {Robust Text Classification in the Presence of Confounding Bias},
  booktitle =    {Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI)},
  year =         2016
}