Long chain-of-thought (CoT) significantly enhances large language models' (LLM) reasoning capabilities. However, the extensive reasoning traces lead to inefficiencies and an increased time-to-first-token (TTFT). We propose a novel training paradigm that uses reinforcement learning (RL) to guide reasoning LLMs to interleave thinking and answering for multi-hop questions. We observe that models inherently possess the ability to perform interleaved reasoning, which can be further enhanced through RL. We introduce a simple yet effective rule-based reward to incentivize correct intermediate steps, which guides the policy model toward correct reasoning paths by leveraging intermediate signals generated during interleaved reasoning. Extensive experiments conducted across five diverse datasets and three RL algorithms (PPO, GRPO, and REINFORCE++) demonstrate consistent improvements over traditional think-answer reasoning, without requiring external tools. Specifically, our approach reduces TTFT by over 80% on average and improves Pass@1 accuracy by up to 19.3%. Furthermore, our method, trained solely on question answering and logical reasoning datasets, exhibits strong generalization to complex reasoning benchmarks such as MATH, GPQA, and MMLU. Additionally, we conduct in-depth analysis to reveal several valuable insights into conditional reward modeling.
- † Duke University
- ‡ Work done while at Apple
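To make the idea of a rule-based, conditional reward concrete, the sketch below shows one possible form it could take: intermediate steps earn a bonus only when the final answer is also correct. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name `conditional_reward`, the exact-match scoring, and the `intermediate_weight` parameter are all hypothetical.

```python
# Minimal sketch (not the authors' implementation) of a conditional
# rule-based reward: final-answer correctness plus a bonus for correct
# intermediate answers produced during interleaved reasoning. The bonus
# is granted only when the final answer is correct (one possible
# "conditional" scheme). All names and weights are illustrative.
from typing import List


def conditional_reward(
    final_answer: str,
    gold_final: str,
    intermediate_answers: List[str],
    gold_intermediates: List[str],
    intermediate_weight: float = 0.5,
) -> float:
    # Base reward: exact-match correctness of the final answer.
    final_correct = float(final_answer.strip() == gold_final.strip())

    # Conditional part: only add credit for intermediate steps when the
    # final answer is already correct, so the policy is not rewarded for
    # reasoning paths that look partially right but end wrong.
    if not final_correct or not gold_intermediates:
        return final_correct

    matches = sum(
        pred.strip() == gold.strip()
        for pred, gold in zip(intermediate_answers, gold_intermediates)
    )
    intermediate_score = matches / len(gold_intermediates)
    return final_correct + intermediate_weight * intermediate_score


# Example usage with made-up multi-hop sub-answers:
if __name__ == "__main__":
    r = conditional_reward(
        final_answer="Paris",
        gold_final="Paris",
        intermediate_answers=["France", "Paris"],
        gold_intermediates=["France", "Paris"],
    )
    print(r)  # 1.5 under these illustrative settings
```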