PipelineRL: Faster On-policy Reinforcement Learning for Long Sequence Generation
Original title: [2509.19128] PipelineRL: Faster On-policy Reinforcement Learning for Long Sequence Generation
Analysis
- Category: AI
- Importance: 72
- Trend score: 36
- Summary
- This paper proposes PipelineRL, a new reinforcement learning method for long sequence generation. The method aims to improve the efficiency of on-policy reinforcement learning and trains substantially faster than conventional approaches. Experimental results show that PipelineRL performs well even on complex tasks, suggesting new possibilities for the field of sequence generation.
- Keywords
[2509.19128] PipelineRL: Faster On-policy Reinforcement Learning for Long Sequence Generation
Computer Science > Machine Learning (cs.LG)
Authors: Alexandre Piché, Ehsan Kamalloo, Rafael Pardinas, Xiaoyin Chen, Dzmitry Bahdanau
Submitted on 23 Sep 2025 (v1); last revised 26 Sep 2025 (v2)
Abstract: Reinforcement Learning (RL) is increasingly utilized to enhance the reasoning capabilities of Large Language Models (LLMs). However, effectively scaling these RL methods presents significant challenges, primarily due to the difficulty in maintaining high AI accelerator utilization without generating stale, off-policy data that harms common RL algorithms. This paper introduces PipelineRL, an approach designed to achieve a superior trade-off between hardware efficiency and data on-policyness for LLM training. PipelineRL employs concurrent asynchronous data generation and model training, distinguished by the novel in-flight weight updates. This mechanism allows the LLM generation engine to receive updated model weights with minimal interruption during the generation of token sequences, thereby maximizing both the accelerator utilization and the freshness of training data. Experiments conducted on long-form reasoning tasks using 128 H100 GPUs demonstrate that PipelineRL achieves approximately 2x faster learning compared to conventional RL baselines while maintaining highly on-policy training data. A scalable and modular open-source implementation of PipelineRL is also released as a key contribution.
DOI: https://doi.org/10.48550/arXiv.2509.19128
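The abstract's key mechanism is the in-flight weight update: instead of draining or restarting generation whenever the trainer produces new weights, the generation engine swaps in fresh weights between decoding steps, so a long sequence can begin under one policy version and finish under a newer one. Below is a minimal Python sketch of that control flow; WeightStore, sample_token, the scalar weights, and the toy reward are all hypothetical stand-ins, not the paper's released implementation.

```python
# Minimal conceptual sketch of PipelineRL-style in-flight weight updates.
# Everything here (WeightStore, the scalar "weights", sample_token, the toy
# reward and update rule) is an illustrative stand-in, not the authors'
# released implementation or API.
import queue
import random
import threading

class WeightStore:
    """Latest policy weights plus a version counter, shared across threads."""
    def __init__(self, weights):
        self._lock = threading.Lock()
        self._weights = weights
        self._version = 0

    def publish(self, weights):
        with self._lock:
            self._weights = weights
            self._version += 1

    def snapshot(self):
        with self._lock:
            return self._weights, self._version

def sample_token(weights, rng):
    # Toy stand-in for one decoding step of the LLM generation engine.
    return 1 if rng.random() < weights else 0

def generator(store, samples, n_sequences, seq_len):
    rng = random.Random(0)
    for _ in range(n_sequences):
        weights, version = store.snapshot()
        tokens = []
        for _ in range(seq_len):
            # In-flight update: between decoding steps, pick up newer weights
            # instead of finishing the whole sequence on a stale policy.
            new_weights, new_version = store.snapshot()
            if new_version != version:
                weights, version = new_weights, new_version
            tokens.append(sample_token(weights, rng))
        samples.put((tokens, version))

def trainer(store, samples, n_updates):
    for _ in range(n_updates):
        tokens, rollout_version = samples.get()
        weights, version = store.snapshot()
        reward = sum(tokens) / len(tokens)                 # toy reward signal
        store.publish(weights + 0.1 * (reward - weights))  # toy policy update
        print(f"step on rollout from v{rollout_version}; weights now v{version + 1}")

store = WeightStore(weights=0.5)
samples = queue.Queue(maxsize=4)
threads = [
    threading.Thread(target=generator, args=(store, samples, 16, 32)),
    threading.Thread(target=trainer, args=(store, samples, 16)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In a real system the swap would be a bulk weight transfer into the inference engine's GPU memory and the trainer would batch many rollouts per optimizer step; the sketch only illustrates the control flow the abstract describes: generation is never paused, and data staleness is bounded by the tokens emitted since the last publish.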