Out-of-Distribution Failure through the Lens of Labeling Mechanisms: An Information Theoretic Approach

Introducing additional labels can improve out-of-distribution (OOD) generalization performance.

Abstract

Machine learning models typically fail in deployment environments where the data distribution does not perfectly match that of the training domains. This phenomenon is commonly attributed to networks' failure to capture the invariant features that generalize to unseen domains. We instead attribute it to the limitations that the labeling mechanism employed by humans imposes on the learning algorithm. We conjecture that providing multiple labels for each data point, where each label describes the presence of a particular object or concept in that data point, decreases the risk of the model capturing non-generalizable correlations. We theoretically show that learning in a multi-label regime, where K labels are available for each data point, tightens the expected generalization gap by a factor of 1/√K compared to the case where only one label per data point is available. We also show that learning under this regime is considerably more sample efficient, requiring only a fraction of the training data to achieve competitive results.
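
As a concrete illustration of the multi-label regime described in the abstract, the sketch below trains a classifier where each data point carries K binary concept labels and the loss sums over all K labels. This is a hypothetical minimal example: the synthetic data, network architecture, hyperparameters, and the choice of a binary cross-entropy objective are assumptions for illustration, not the paper's actual setup.

```python
# Minimal sketch of the multi-label training regime: each data point carries
# K binary labels, one per object/concept, instead of a single class label.
# Data, model, and hyperparameters below are illustrative placeholders only.
import torch
import torch.nn as nn

K = 8            # number of concept labels per data point (assumption)
D = 32           # input feature dimension (assumption)
N = 256          # number of synthetic training examples (assumption)

# Synthetic data: x in R^D, y in {0,1}^K indicating presence of each concept.
x = torch.randn(N, D)
y = (torch.rand(N, K) > 0.5).float()

# A small network with K outputs, one logit per concept.
model = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, K))
criterion = nn.BCEWithLogitsLoss()   # multi-label objective over the K concepts
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()
    logits = model(x)                # shape (N, K): one logit per concept
    loss = criterion(logits, y)      # averages the K binary losses per example
    loss.backward()
    optimizer.step()
```

In contrast, the single-label baseline discussed in the abstract would supervise the same inputs with only one label each; the multi-label setup above supplies K supervision signals per example, which is the regime the 1/√K result refers to.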

Slides