Large language models excel with reinforcement learning (RL), but fully unlocking this potential requires a mid-training stage. An effective mid-training phase should identify a compact set of useful actions and enable fast selection among them through online RL. We formalize this intuition by presenting the first theoretical result on how mid-training shapes post-training: it characterizes an action subspace that minimizes both the value approximation error from pruning and the RL error during subsequent planning. Our analysis reveals two key determinants of mid-training effectiveness: pruning efficiency, which shapes the prior of the initial RL policy, and its impact on RL convergence, which governs the extent to which that policy can be improved via online interactions. These results suggest that mid-training is most effective when the decision space is compact and the effective horizon is short, highlighting the importance of operating in the space of action abstractions rather than primitive actions. Building on these insights, we propose Reasoning as Action Abstractions (RA3), a scalable mid-training algorithm. Specifically, we derive a sequential variational lower bound and optimize it by iteratively discovering temporally-consistent latent structures via RL, followed by fine-tuning on the bootstrapped data. Experiments on code generation tasks demonstrate the effectiveness of our approach. Across multiple base models, RA3 improves the average performance on HumanEval and MBPP by 8 and 4 points over the base model and the next-token prediction baseline. Furthermore, RA3 achieves faster convergence and higher asymptotic performance in RLVR on HumanEval+, MBPP+, LiveCodeBench, and Codeforces.
- † Northwestern University
- ‡ University of Illinois Urbana–Champaign (UIUC)
- ** Work done while at Apple
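The abstract's mid-training procedure alternates two steps: discovering temporally-consistent latent structures via RL, then fine-tuning on the bootstrapped data, jointly optimizing a sequential variational lower bound. A minimal schematic of that alternation is sketched below; every function name and data structure here is an illustrative placeholder, not the paper's actual implementation.

```python
# Schematic sketch of the RA3 mid-training loop described in the abstract.
# discover_abstractions and finetune are hypothetical stand-ins: in the paper,
# the former is an RL procedure that proposes latent action abstractions and
# the latter fine-tunes the model on the resulting bootstrapped data.

def discover_abstractions(policy, prompts):
    """Stand-in for the RL step: propose latent action abstractions
    for each prompt under the current policy (here, just a tag)."""
    return [(p, f"abstraction<{policy}>") for p in prompts]

def finetune(policy, bootstrapped):
    """Stand-in for the fine-tuning step: update the policy on the
    bootstrapped data (here, just record how many examples were seen)."""
    return f"{policy}+ft{len(bootstrapped)}"

def ra3_midtraining(policy, prompts, iterations=3):
    """Alternate latent-structure discovery with fine-tuning on the
    bootstrapped data, the two steps that jointly optimize the
    sequential variational lower bound in the abstract."""
    for _ in range(iterations):
        bootstrapped = discover_abstractions(policy, prompts)
        policy = finetune(policy, bootstrapped)
    return policy
```

The point of the sketch is only the control flow: each round's discoveries feed the next round's fine-tuning, so the policy that proposes abstractions improves across iterations.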

