Multiaccuracy and multicalibration are multigroup fairness notions for prediction that have found numerous applications in learning and computational complexity. They can be achieved from a single learning primitive: weak agnostic learning. Here we investigate the power of multiaccuracy as a learning primitive, both with and without the additional assumption of calibration. We find that multiaccuracy on its own is rather weak, but that the addition of global calibration (this notion is called calibrated multiaccuracy) boosts its power substantially, enough to recover implications that were previously known only assuming the stronger notion of multicalibration.
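For concreteness, the notions at play can be sketched as follows; these are the standard definitions from the literature (the exact parameterization used in the paper may differ), stated for a predictor $p$ and a class $\mathcal{C}$ of real-valued tests:

```latex
% Sketch of the standard definitions (assumed from the literature; not stated
% explicitly in the abstract). A predictor p : X -> [0,1] is alpha-multiaccurate
% with respect to a class C of tests c : X -> [-1,1] if, over labeled examples (x, y),
\[
  \bigl|\, \mathbb{E}\bigl[\, c(x)\,\bigl(y - p(x)\bigr) \,\bigr] \bigr| \;\le\; \alpha
  \qquad \text{for every } c \in \mathcal{C}.
\]
% Calibrated multiaccuracy additionally imposes (global) calibration,
% i.e., the predictions are approximately unbiased at each predicted value:
\[
  \mathbb{E}\bigl[\, y - p(x) \;\big|\; p(x) = v \,\bigr] \;\approx\; 0
  \qquad \text{for every } v \text{ in the range of } p.
\]
```

Multicalibration, the stronger notion referenced above, requires the multiaccuracy condition to hold conditionally on each predicted value $v$, not just on average.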
We give evidence that multiaccuracy may not be as powerful as standard weak agnostic learning, by showing that there is no way to post-process a multiaccurate predictor to obtain a weak learner, even assuming the best hypothesis has correlation 1/2. Rather, we show that it yields a restricted form of weak agnostic learning, which requires some concept in the class to have correlation greater than 1/2 with the labels. However, by additionally requiring the predictor to be calibrated, we recover not just weak, but strong agnostic learning.
A similar picture emerges when we consider the derivation of hardcore measures from predictors satisfying multigroup fairness notions. Whereas multiaccuracy only yields hardcore measures of density half the optimum, we show that (a weighted version of) calibrated multiaccuracy achieves optimal density. Our results yield new insights into the complementary roles played by multiaccuracy and calibration in each setting. They clarify why multiaccuracy and global calibration, though not particularly powerful on their own, together yield considerably stronger notions.
- † University of Oxford
- ‡ Stanford University