The recent rapid adoption of large language models (LLMs) highlights the critical need for benchmarking their fairness. Conventional fairness metrics, which focus on discrete accuracy-based evaluations (i.e., prediction correctness), fail to capture the implicit impact of model uncertainty (e.g., higher model confidence about one group over another despite similar accuracy). To address this limitation, we propose an uncertainty-aware fairness metric, UCerF, to enable a fine-grained evaluation of model fairness that is more reflective of the internal bias in model decisions compared to conventional fairness measures. Furthermore, observing data size, diversity, and clarity issues in existing datasets, we introduce a new gender-occupation fairness evaluation dataset with 31,756 samples for co-reference resolution, offering a more diverse and suitable dataset for evaluating modern LLMs. We establish a benchmark, using our metric and dataset, and apply it to evaluate the behavior of ten open-source LLMs. For example, Mistral-7B exhibits suboptimal fairness due to high confidence in incorrect predictions, a detail overlooked by Equalized Odds but captured by UCerF. Overall, our proposed LLM benchmark, which evaluates fairness with uncertainty awareness, paves the way for developing more transparent and accountable AI systems.
- * Work done while at Apple
- † Robotics Institute, Carnegie Mellon University
- ‡ Center for Data Science, New York University

