Global Trend Radar
Web: github.com US web_search 2026-05-05 11:36

vLLM - A high-throughput, memory-efficient LLM inference engine

Original title: GitHub - vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs

Open original article →

Analysis results

Category: AI
Importance: 78
Trend score: 42
Summary
vLLM is an inference and serving engine for large language models (LLMs) that combines high throughput with memory efficiency. It is developed openly on GitHub and supports serving a wide range of LLMs. vLLM is aimed specifically at high-performance LLM serving and delivers strong performance compared with other frameworks.
Keywords
Easy, fast, and cheap LLM serving for everyone | Documentation | Blog | Paper | Twitter/X | User Forum | Developer Slack

🔥 We have built a vLLM website to help you get started with vLLM. Please visit vllm.ai to learn more. For events, please visit vllm.ai/events to join us.

About
vLLM is a fast and easy-to-use library for LLM inference and serving. Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has grown into one of the most active open-source AI projects, built and maintained by a diverse community of many dozens of academic institutions and companies and over 2,000 contributors.
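For context, the library's core offline entry point is the LLM class together with SamplingParams. The following is a minimal sketch based on the project's documented Python API; the tiny facebook/opt-125m checkpoint is only a placeholder, and any supported Hugging Face model id works:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]
# Sampling settings applied to every prompt in the batch.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a (placeholder) Hugging Face model and run batched generation.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(f"{output.prompt!r} -> {output.outputs[0].text!r}")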
vLLM is fast with:
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with PagedAttention
- Continuous batching of incoming requests, chunked prefill, prefix caching
- Fast and flexible model execution with piecewise and full CUDA/HIP graphs
- Quantization: FP8, MXFP8/MXFP4, NVFP4, INT8, INT4, GPTQ/AWQ, GGUF, compressed-tensors, ModelOpt, TorchAO, and more
- Optimized attention kernels including FlashAttention, FlashInfer, TRTLLM-GEN, FlashMLA, and Triton
- Optimized GEMM/MoE kernels for various precisions using CUTLASS, TRTLLM-GEN, CuTeDSL
- Speculative decoding including n-gram, suffix, EAGLE, DFlash
- Automatic kernel generation and graph-level transformations using torch.compile
- Disaggregated prefill, decode, and encode

vLLM is flexible and easy to use with:
- Seamless integration with popular Hugging Face models
- High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
- Tensor, pipeline, data, expert, and context parallelism for distributed inference
- Streaming outputs
- Generation of structured outputs using xgrammar or guidance
- Tool calling and reasoning parsers
- OpenAI-compatible API server, plus Anthropic Messages API and gRPC support (see the client sketch after the Getting Started section below)
- Efficient multi-LoRA support for dense and MoE layers
- Support for NVIDIA GPUs, AMD GPUs, and x86/ARM/PowerPC CPUs. Additionally, diverse hardware plugins such as Google TPUs, Intel Gaudi, IBM Spyre, Huawei Ascend, Rebellions NPU, Apple Silicon, MetaX GPU, and more.

vLLM seamlessly supports 200+ model architectures on Hugging Face, including:
- Decoder-only LLMs (e.g., Llama, Qwen, Gemma)
- Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V3, Qwen-MoE, GPT-OSS)
- Hybrid attention and state-space models (e.g., Mamba, Qwen3.5)
- Multi-modal models (e.g., LLaVA, Qwen-VL, Pixtral)
- Embedding and retrieval models (e.g., E5-Mistral, GTE, ColBERT)
- Reward and classification models (e.g., Qwen-Math)
Find the full list of supported models here.

Getting Started
Install vLLM with uv (recommended) or pip:
uv pip install vllm
Or build from source for development. Visit our documentation to learn more: Installation, Quickstart, List of Supported Models.
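The OpenAI-compatible API server listed above can be exercised with the stock openai Python client. A minimal sketch, assuming a server has already been started locally with `vllm serve <model>` on vLLM's default port 8000; the model id and prompt are illustrative, not taken from the README:

```python
# Assumes a server started beforehand, e.g.:  vllm serve Qwen/Qwen2.5-1.5B-Instruct
from openai import OpenAI

# vLLM's server speaks the OpenAI protocol; point the client at it.
# The server does not require an API key by default, but the client needs a non-empty value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # must match the served model
    messages=[{"role": "user", "content": "Summarize PagedAttention in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```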
Contributing
We welcome and value any contributions and collaborations. Please check out Contributing to vLLM for how to get involved.

Citation
If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title     = {Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author    = {Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle = {Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year      = {2023}
}

Contact Us
- For technical questions and feature requests, please use GitHub Issues
- For discussing with fellow users, please use the vLLM Forum
- For coordinating contributions and development, please use Slack
- For security disclosures, please use GitHub's Security Advisories feature
- For collaborations and partnerships, please contact us at [email protected]

Media Kit
If you wish to use vLLM's logo, please refer to our media kit repo.

Topics: amd, cuda, inference, pytorch, transformer, openai, moe, llama, gpt, model-serving, tpu, kimi, blackwell, llm, llm-serving, qwen, deepseek, deepseek-v3, qwen3, gpt-oss
License: Apache-2.0
Stars: 79k, Forks: 16.4k, Watchers: 538
Releases: 92 (latest: v0.20.1, May 4, 2026)
Languages: Python 88.5%, Cuda 6.2%, C++ 3.7%, Shell 0.9%, CMake 0.3%, C 0.3%, Other 0.1%
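The quantization and multi-GPU parallelism features listed earlier are surfaced as constructor arguments on the same LLM entry point. A hedged sketch, assuming a machine with two GPUs and an AWQ-quantized checkpoint; the model id and the specific values below are illustrative, not recommendations from the repository:

```python
from vllm import LLM, SamplingParams

# Illustrative configuration; adjust to your hardware and checkpoint.
llm = LLM(
    model="TheBloke/Llama-2-13B-chat-AWQ",  # example AWQ-quantized checkpoint
    quantization="awq",                     # match the checkpoint's quantization format
    tensor_parallel_size=2,                 # shard weights across 2 GPUs
    gpu_memory_utilization=0.90,            # fraction of GPU memory vLLM may claim
    max_model_len=4096,                     # cap context length to bound KV-cache size
)

out = llm.generate(
    ["Explain continuous batching in one sentence."],
    SamplingParams(temperature=0.7, max_tokens=64),
)
print(out[0].outputs[0].text)
```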

Similar articles (vector nearest neighbors)