A decision-theoretic characterization of perfect calibration is that an agent seeking to minimize a proper loss in expectation cannot improve their outcome by post-processing a perfectly calibrated predictor. Hu and Wu (FOCS'24) use this to define an approximate calibration measure called the calibration decision loss (CDL), which measures the maximal improvement achievable by any post-processing over any proper loss. Unfortunately, CDL turns out to be intractable to even weakly approximate in the offline setting, given black-box access to the predictions and labels. We propose circumventing this by restricting attention to structured families of post-processing functions K. We define the calibration decision loss relative to K, denoted CDL_K, where we consider all proper losses but restrict post-processings to a structured family K. We develop a comprehensive theory of when CDL_K is information-theoretically and computationally tractable, and use it to prove both upper and lower bounds for natural classes K. In addition to introducing new definitions and algorithmic techniques to the theory of calibration for decision making, our results give rigorous guarantees for some widely used recalibration procedures in machine learning.
- † University of Texas at Austin
- ‡ Harvard University
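
For intuition, one natural way to formalize the two quantities described in the abstract is sketched below in our own notation (a hedged reconstruction from the prose; the paper's exact definitions may differ).

```latex
% Sketch (notation is ours, not necessarily the paper's): for a predictor f,
% labels y, proper losses \ell \in \mathcal{L}, and post-processings \kappa,
% CDL takes the best improvement over ALL post-processings, while CDL_K
% restricts the post-processings to a structured family \mathcal{K}.
\[
  \mathrm{CDL}(f) \;=\; \sup_{\ell \in \mathcal{L}}
    \Big( \mathbb{E}\big[\ell(f(x), y)\big]
      \;-\; \inf_{\kappa} \, \mathbb{E}\big[\ell(\kappa(f(x)), y)\big] \Big),
\]
\[
  \mathrm{CDL}_{\mathcal{K}}(f) \;=\; \sup_{\ell \in \mathcal{L}}
    \Big( \mathbb{E}\big[\ell(f(x), y)\big]
      \;-\; \inf_{\kappa \in \mathcal{K}} \, \mathbb{E}\big[\ell(\kappa(f(x)), y)\big] \Big).
\]
```

Under this reading, CDL_K lower-bounds CDL for any family K, and restricting K is what makes estimation from black-box access to predictions and labels potentially tractable.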

