Global Trend Radar
arXiv cs.LG (Machine Learning) INT ai 2026-04-29 13:00

Compute Aligned Training: Optimizing for Test Time Inference

Original title: Compute Aligned Training: Optimizing for Test Time Inference


Analysis

Category
AI
Importance
69
Trend Score
28
Summary
Scaling test-time compute has emerged as a powerful means of improving Large Language Model (LLM) performance. However, standard post-training paradigms present a challenge.
Abstract
arXiv:2604.24957v1 Announce Type: new Abstract: Scaling test-time compute has emerged as a powerful mechanism for enhancing Large Language Model (LLM) performance. However, standard post-training paradigms, Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), optimize the likelihood of individual samples under a base policy, creating a misalignment with test-time procedures that rely on aggregated or filtered outputs. In this work, we propose Compute Aligned Training, which aligns training objectives with test-time strategies. By conceptualizing inference strategies as operators on the base policy, we derive new loss functions that maximize performance when said strategies are applied. We instantiate such loss functions for SFT and RL across common test-time strategies. Finally, we provide empirical evidence that this training method substantially improves test-time scaling over standard training.
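The abstract does not spell out the derived loss functions, but the core idea, treating an inference strategy as an operator on the base policy, can be illustrated with a minimal sketch. Assume a best-of-n strategy with a perfect verifier: if the base policy solves a prompt with probability p per sample, the effective success probability after the operator is pass@n = 1 - (1 - p)^n, and differentiating that objective with respect to p shows how a compute-aligned objective reweights prompts relative to plain per-sample likelihood. All function names below are illustrative, not from the paper.

```python
def pass_at_n(p: float, n: int) -> float:
    """Success probability under a best-of-n operator with a perfect verifier:
    at least one of n independent samples is correct."""
    return 1.0 - (1.0 - p) ** n

def aligned_weight(p: float, n: int) -> float:
    """d(pass@n)/dp = n * (1 - p)^(n - 1).

    A compute-aligned objective scales the per-sample gradient by this factor,
    so hard prompts (low p) are upweighted and near-solved prompts (high p,
    where extra samples add little) are downweighted."""
    return n * (1.0 - p) ** (n - 1)

if __name__ == "__main__":
    n = 4
    for p in (0.1, 0.5, 0.9):
        print(f"p={p:.1f}  pass@{n}={pass_at_n(p, n):.4f}  weight={aligned_weight(p, n):.4f}")
```

With n = 4, a prompt at p = 0.1 gets gradient weight 2.916 while one at p = 0.9 gets 0.004: standard per-sample training treats both equally, whereas the aligned objective concentrates learning where additional test-time samples can still change the outcome.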

Similar articles (vector neighbors)