Despite their impressive general-purpose capabilities, Large Language Models (LLMs) often fail to align with diverse individual preferences because standard post-training methods, such as Reinforcement Learning from Human Feedback (RLHF), optimize for a single, global objective. While Group Relative Policy Optimization (GRPO) is a widely adopted on-policy reinforcement learning framework, its group-based normalization implicitly assumes that all samples are exchangeable, so it inherits this limitation in personalized settings. This assumption conflates distinct user reward distributions and systematically biases learning toward dominant preferences while suppressing minority signals. To address this, we introduce Personalized GRPO (P-GRPO), a novel alignment framework that decouples advantage estimation from immediate batch statistics. By normalizing advantages against preference-group-specific reward histories rather than the concurrent generation group, P-GRPO preserves the contrastive signal necessary for learning distinct preferences. We evaluate P-GRPO across diverse tasks and find that it consistently achieves faster convergence and higher rewards than standard GRPO, improving its ability to recover and align with heterogeneous preference signals. Our results demonstrate that accounting for reward heterogeneity at the optimization level is essential for building models that faithfully align with diverse human preferences without sacrificing general capabilities.
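To make the contrast concrete, below is a minimal sketch (not the authors' implementation) of the two advantage-estimation schemes the abstract describes: standard GRPO normalizes each reward against its concurrent generation group, whereas a P-GRPO-style estimator normalizes against a running reward history maintained per preference group. The class name, the `group_ids` argument, and the running-history bookkeeping are illustrative assumptions.

```python
# Sketch contrasting batch-group normalization (standard GRPO) with
# preference-group-history normalization (P-GRPO-style), as described above.
import numpy as np
from collections import defaultdict


def grpo_advantages(rewards):
    """Standard GRPO: normalize each reward against the statistics of the
    concurrent generation group, treating all samples as exchangeable."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)


class PGRPOAdvantages:
    """Hypothetical P-GRPO-style estimator: keep a running reward history per
    preference group and normalize each sample against its own group's
    history rather than the current batch."""

    def __init__(self):
        self.history = defaultdict(list)  # preference-group id -> past rewards

    def __call__(self, rewards, group_ids):
        advantages = []
        for reward, group in zip(rewards, group_ids):
            self.history[group].append(reward)
            hist = np.asarray(self.history[group], dtype=float)
            advantages.append((reward - hist.mean()) / (hist.std() + 1e-8))
        return np.asarray(advantages)


# Example: two preference groups whose reward scales differ sharply.
rewards = [0.9, 0.8, 0.2, 0.1]      # first two from group "A", last two from "B"
groups = ["A", "A", "B", "B"]
print(grpo_advantages(rewards))      # group B is pushed strongly negative by batch stats
print(PGRPOAdvantages()(rewards, groups))  # each sample judged against its own group's history
```

Under batch-level normalization, the minority group's samples always receive negative advantages whenever the dominant group's rewards are higher, which is the suppression effect the abstract attributes to the exchangeability assumption; per-group normalization removes that cross-group comparison.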

