Global Trend Radar
Web: arxiv.org (US, web_search), 2026-05-06 05:52

A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications

Original title: [2503.07137] A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications

Open the original article →

Analysis

Category
AI
Importance
66
Trend score
30
Summary
This paper presents a comprehensive survey of Mixture-of-Experts (MoE), covering its algorithms, theory, and applications in detail. MoE is an important machine-learning technique that combines multiple specialized expert models, routing each input to the most relevant ones, to achieve higher predictive accuracy. The authors organize the latest research results and also discuss directions for future work.
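
To make the mechanism concrete: in the standard sparse MoE formulation (the common generic form, not necessarily the notation this survey uses), a gating function scores the experts and only the top-k of them are evaluated, with their outputs mixed by the renormalized gate weights:

```latex
% Common sparse MoE forward pass (a generic formulation, assumed here;
% the survey's own notation may differ). W_g parametrizes the gate,
% f_i are the expert networks, and k experts are activated per input.
\[
  g(x) = \operatorname{softmax}(W_g x), \qquad
  y = \sum_{i \in \operatorname{TopK}(g(x),\, k)}
        \frac{g_i(x)}{\sum_{j \in \operatorname{TopK}(g(x),\, k)} g_j(x)}\; f_i(x).
\]
```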
Keywords
Mixture-of-Experts (MoE), gating functions, expert networks, routing mechanisms, continual learning, meta-learning, multi-task learning, reinforcement learning, computer vision, natural language processing

Paper details
Title: A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications
Authors: Siyuan Mu, Sen Lin
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
arXiv: arXiv:2503.07137 [cs.LG] (DOI: https://doi.org/10.48550/arXiv.2503.07137)
Comments: 29 pages, 3 figures
Submission history: v1 Mon, 10 Mar 2025; v2 Fri, 11 Apr 2025; v3 Fri, 18 Apr 2025; v4 (current) Sat, 24 Jan 2026

Abstract
Artificial intelligence (AI) has achieved astonishing successes in many domains, especially with the recent breakthroughs in the development of foundational large models. These large models, leveraging their extensive training data, provide versatile solutions for a wide range of downstream tasks. However, as modern datasets become increasingly diverse and complex, the development of large AI models faces two major challenges: (1) the enormous consumption of computational resources and deployment difficulties, and (2) the difficulty in fitting heterogeneous and complex data, which limits the usability of the models. Mixture-of-Experts (MoE) models have recently attracted much attention in addressing these challenges by dynamically selecting and activating the most relevant sub-models to process input data. It has been shown that MoEs can significantly improve model performance and efficiency with fewer resources, particularly excelling in handling large-scale, multimodal data. Given the tremendous potential MoE has demonstrated across various domains, it is urgent to provide a comprehensive summary of recent advancements of MoEs in many important fields. Existing surveys on MoE have their limitations, e.g., being outdated or lacking discussion of certain key areas, and we aim to address these gaps. In this paper, we first introduce the basic design of MoE, including gating functions, expert networks, routing mechanisms, training strategies, and system design. We then explore the algorithm design of MoE in important machine learning paradigms such as continual learning, meta-learning, multi-task learning, and reinforcement learning. Additionally, we summarize theoretical studies aimed at understanding MoE and review its applications in computer vision and natural language processing. Finally, we discuss promising future research directions.
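
Below is a minimal runnable sketch of the gating-and-routing idea described in the abstract: a softmax gate scores the experts, only the top-k are evaluated, and their outputs are combined with renormalized gate weights. The dimensions, expert count, and linear experts here are illustrative assumptions, not the survey's reference design.

```python
# Minimal sparse Mixture-of-Experts sketch (illustrative assumptions:
# sizes, linear experts, and names like NUM_EXPERTS/TOP_K are ours,
# not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)

DIM = 8          # input/output feature dimension (assumed)
NUM_EXPERTS = 4  # number of expert networks (assumed)
TOP_K = 2        # number of experts activated per input (assumed)

# Each expert is a simple linear map; real systems typically use FFN blocks.
experts = [rng.normal(scale=0.1, size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
# The gating function is a linear map followed by a softmax.
gate_w = rng.normal(scale=0.1, size=(DIM, NUM_EXPERTS))

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_forward(x):
    """Route input x to the TOP_K most relevant experts and mix their outputs."""
    scores = softmax(x @ gate_w)                # gating distribution over experts
    top = np.argsort(scores)[-TOP_K:]           # indices of the top-k experts
    weights = scores[top] / scores[top].sum()   # renormalize over selected experts
    # Only the selected sub-models are evaluated: this sparse activation
    # is what gives MoE its efficiency.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=DIM)
print(moe_forward(x))
```

Because only TOP_K of the NUM_EXPERTS experts run per input, compute scales with the number of activated experts rather than the total parameter count, which is the efficiency property the abstract highlights.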

Similar articles (vector nearest neighbors)