We suggest that reinforcement learning (RL) from partial expert demonstrations is not merely a training heuristic, but a promising framework for solving complex sequence generation tasks. Supervised fine-tuning (SFT) relies on dense ground-truth labels, which become increasingly costly as sequence length grows. RL, on the other hand, struggles with sparse rewards and a combinatorially large output space. We address this by introducing adaptive backtracking (AdaBack), a per-sample curriculum learning algorithm that reveals only a partial prefix of the target output during training. The supervision length is adjusted dynamically for each sample based on the model's past reward signal, allowing it to incrementally learn to complete reasoning chains by conditioning on correct partial solutions. We study this intermediate regime between SFT and RL and argue that per-sample curriculum learning is more than a trade-off between efficiency and generality: it can succeed on tasks with long sequences of latent dependencies where both SFT and RL fail to generalize. Using a synthetic task with latent parity constraints, we show that our adaptive curriculum over partial answers reliably solves problems that are otherwise intractable. On mathematical reasoning benchmarks (MATH, GSM8k), we find that curriculum learning enables models to solve problems that RL alone cannot, acquiring new reasoning capabilities through incremental exposure to partial solutions.
- † École Polytechnique Fédérale de Lausanne (EPFL)
- * Equal supervision
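To make the per-sample curriculum concrete, below is a minimal sketch of one AdaBack-style update step. It assumes a binary reward from a task checker and a multiplicative-free, additive update of the revealed-prefix fraction; the interface (`model.generate`, `sample["check"]`, the `step` constant, and the direction of the update) is illustrative and not taken from the paper.

```python
def adaback_step(sample, model, supervision_ratio, step=0.1):
    """One illustrative AdaBack-style update for a single training sample.

    sample: dict with 'prompt', 'solution' (gold output), and 'check' (reward fn).
    model: any object exposing generate(prompt) -> completion string.
    supervision_ratio: fraction of the gold solution revealed as a prefix, in [0, 1].
    step: how much to shrink or grow the revealed prefix after this rollout.
    """
    solution = sample["solution"]
    cutoff = int(len(solution) * supervision_ratio)
    revealed_prefix = solution[:cutoff]

    # Condition the model on the prompt plus the revealed partial solution,
    # and let it complete the remainder on its own.
    completion = model.generate(sample["prompt"] + revealed_prefix)

    # Binary reward: does the completed answer pass the task's checker?
    reward = sample["check"](revealed_prefix + completion)

    # Per-sample curriculum: on success, reveal less next time (harder);
    # on failure, reveal more next time (easier).
    if reward > 0:
        supervision_ratio = max(0.0, supervision_ratio - step)
    else:
        supervision_ratio = min(1.0, supervision_ratio + step)

    return reward, supervision_ratio
```

Under this reading, a sample that the model can already finish from a short prefix drifts toward pure RL (no revealed prefix), while a sample it repeatedly fails on drifts toward full supervision, matching the intermediate SFT-RL regime described in the abstract.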

