Quantization-aware training (QAT) is a leading technique for improving the accuracy of quantized neural networks. Previous work has shown that decomposing training into a full-precision (FP) phase followed by a QAT phase yields superior accuracy compared to QAT alone. However, the optimal allocation of compute between the FP and QAT phases remains unclear. We conduct extensive experiments with various compute budgets, QAT bit widths, and model sizes from 86.0M to 2.2B parameters to investigate how different QAT durations affect final performance. We demonstrate that, contrary to previous findings, the loss-optimal ratio of QAT to FP training increases with the total amount of compute. Moreover, the optimal fraction can be accurately predicted for a wide range of model sizes and quantization widths using the tokens-per-parameter-byte statistic. From experimental data, we derive a loss scaling law that predicts both optimal QAT ratios and final model performance across different QAT/FP compute allocation strategies and QAT bit widths. We use the scaling law to make further predictions, which we verify experimentally, including which QAT bit width is optimal under a given memory constraint and how QAT accuracy at different bit widths compares to full-precision model accuracy. Additionally, we propose a novel cooldown and QAT fusion approach that performs learning rate decay jointly with quantization-aware training, eliminating redundant full-precision model updates and achieving significant compute savings. These findings provide practical insights into efficient QAT planning and enable the training of higher-quality quantized models with the same compute budget.
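For concreteness, here is one plausible reading of the tokens-per-parameter-byte statistic; the abstract does not spell out its exact definition, so the symbols D (training tokens), N (parameter count), and b (QAT bit width) are assumptions rather than the paper's notation:

```latex
% Assumed definition of tokens per parameter-byte (not stated in the abstract):
% D = number of training tokens, N = parameter count, b = QAT bit width in bits.
\mathrm{tokens\ per\ parameter\text{-}byte} = \frac{D}{N \cdot b / 8}
% Hypothetical example: D = 10^{12}, N = 2.2 \times 10^{9}, b = 4
% gives 10^{12} / (2.2 \times 10^{9} \cdot 0.5) \approx 909.
```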
- † École Polytechnique Fédérale de Lausanne (EPFL)
- ** Work done while at Apple
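Below is a minimal, self-contained sketch of the cooldown-and-QAT fusion idea described in the abstract: learning rate decay runs jointly with quantization-aware updates instead of in a separate full-precision cooldown. All names, the dummy loss, and the hyperparameters are hypothetical illustrations under assumed choices (symmetric fake quantization with a straight-through estimator, linear decay), not the paper's actual implementation:

```python
# Sketch: fusing the learning-rate cooldown with quantization-aware training.
# Hypothetical setup; the paper's real model, loss, and schedule differ.
import torch

def fake_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric per-tensor fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    w_q = (w / scale).round().clamp(-qmax - 1, qmax) * scale
    return w + (w_q - w).detach()  # forward uses w_q; gradients flow to w

model = torch.nn.Linear(64, 64)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
total_steps, cooldown_steps, bits = 1000, 200, 4

for step in range(total_steps):
    in_cooldown = step >= total_steps - cooldown_steps
    if in_cooldown:
        # Fused schedule: the linear LR decay happens *during* the QAT phase,
        # rather than in a separate full-precision cooldown followed by QAT.
        frac = (total_steps - step) / cooldown_steps
        for g in opt.param_groups:
            g["lr"] = 0.1 * frac
    x = torch.randn(32, 64)
    w = fake_quantize(model.weight, bits) if in_cooldown else model.weight
    y = torch.nn.functional.linear(x, w, model.bias)
    loss = y.pow(2).mean()  # dummy loss, stands in for the LM objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point the sketch illustrates is that the quantized phase reuses the same optimization run as the decay, so no full-precision cooldown updates are performed only to be revisited later, which is where the abstract's claimed compute savings come from.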

