TL;DR: Today’s NLP models, for all their recent successes, have certain limitations. Case in point: their performance degrades on sentences containing non-standard inflections, such as those commonly produced by second-language English speakers. Our new approach addresses this…
Morpheus exposes the potential allocative harms of popular pretrained NLP models by simulating inflectional variation in their inputs. To mitigate the effects of training only on error-free Standard English data, we propose adversarial fine-tuning.
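To make the idea concrete, here is a minimal sketch of inflectional perturbation in the spirit of Morpheus. It is not the authors’ implementation: Morpheus itself performs a greedy search over inflected forms to maximize the model’s loss, whereas this sketch simply enumerates one-word reinflections (via the `lemminflect` package, an assumed dependency) and checks whether the model’s prediction flips. The `predict` callable is a hypothetical stand-in for whatever model is under test.

```python
# A minimal sketch of inflectional perturbation, assuming the
# `lemminflect` package. Not the authors' Morpheus implementation:
# Morpheus greedily maximizes model loss; here we just enumerate
# single-word reinflections and look for flipped predictions.
from itertools import chain
from lemminflect import getAllInflections, getAllLemmas

def inflectional_variants(word):
    """Return alternative inflected forms of `word` across all its lemmas."""
    lemmas = set(chain.from_iterable(getAllLemmas(word).values()))
    forms = set()
    for lemma in lemmas:
        for tag_forms in getAllInflections(lemma).values():
            forms.update(tag_forms)
    forms.discard(word)  # keep only forms that differ from the original
    return forms

def perturb(sentence):
    """Yield copies of `sentence` with one word reinflected at a time.

    Uses naive whitespace tokenization; a real implementation would
    tokenize and detokenize properly.
    """
    tokens = sentence.split()
    for i in range(len(tokens)):
        for variant in inflectional_variants(tokens[i]):
            yield " ".join(tokens[:i] + [variant] + tokens[i + 1:])

def find_adversaries(sentence, predict):
    """Collect perturbations that change the prediction of a model.

    `predict` is a hypothetical callable mapping a sentence to a label.
    """
    original = predict(sentence)
    return [p for p in perturb(sentence) if predict(p) != original]
```

The same perturbation generator suggests the mitigation: adversarially fine-tuning the model on such inflectionally varied examples, so it no longer relies on seeing only error-free Standard English.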