Out-of-Distribution Detection

Machine learning-based NLP systems tend to fail when applied to out-of-distribution (OOD) data (i.e., data drawn from a distribution different from the model's training data). Current models are also ineffective at differentiating between in-distribution and OOD data. This makes it difficult to deploy these systems in critical applications where mistakes could be very costly. The ability to effectively detect OOD data would allow a system to redirect such inputs to another system (e.g., in production, OOD data could be flagged for human intervention). This project will study the different types of out-of-distribution data in NLP tasks and develop techniques to detect them effectively.
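As an illustration of what "detecting OOD data and flagging it for human intervention" can look like in practice, here is a minimal sketch of one common baseline: thresholding the maximum softmax probability (MSP) of a classifier, where low-confidence inputs are routed elsewhere. The function names and the threshold value are illustrative assumptions, not part of this project's method.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher suggests in-distribution."""
    return softmax(logits).max(axis=-1)

def flag_ood(logits, threshold=0.7):
    """Flag inputs whose confidence falls below the threshold as OOD.

    The threshold is a hypothetical value; in practice it would be
    tuned on held-out data for a target false-positive rate.
    """
    return msp_score(logits) < threshold

# A peaked (confident) prediction vs. a flat (uncertain) one.
logits = np.array([[6.0, 0.5, 0.2],   # peaked  -> treated as in-distribution
                   [1.0, 0.9, 1.1]])  # flat    -> flagged as likely OOD
print(flag_ood(logits))  # -> [False  True]
```

The flagged inputs would then be redirected, for example to a human reviewer, rather than acted on by the model.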