Global Trend Radar
Web: huggingface.co US web_search 2026-04-30 04:44

Z.ai · Hugging Face


Analysis

Category
AI
Importance
60
Trend score
24
Summary
Documentation for Z.ai as a Hugging Face Inference Provider. Z.ai offers GLM-series large language models built on a Mixture-of-Experts (MoE) architecture with reasoning, coding, and agentic capabilities. The page lists the supported tasks (chat completion for LLMs and VLMs, and text-to-image) with code examples that call the Hugging Face router.
Keywords
Z.ai, Hugging Face, Inference Providers, GLM, Mixture-of-Experts, Chat Completion (LLM), Chat Completion (VLM), Text To Image

Article excerpt

Z.ai

All supported Z.ai models can be found here. Z.ai is an AI platform that provides cutting-edge large language models powered by the GLM series. Their flagship models feature a Mixture-of-Experts (MoE) architecture with advanced reasoning, coding, and agentic capabilities. For the latest pricing, visit the pricing page.

Resources
- Website: https://z.ai/
- Documentation: https://docs.z.ai/
- API Documentation: https://docs.z.ai/api-reference/introduction
- GitHub: https://github.com/zai-org
- Hugging Face: https://huggingface.co/zai-org

Supported tasks

The examples below are the Python variants; the original page also provides JavaScript and cURL versions.

Chat Completion (LLM)

Find out more about Chat Completion (LLM) here.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="zai-org/GLM-5.1:zai-org",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ],
)

print(completion.choices[0].message)
```

Chat Completion (VLM)

Find out more about Chat Completion (VLM) here.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="zai-org/GLM-4.5V:zai-org",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)
```

Text To Image

Find out more about Text To Image here.

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="zai-org",
    api_key=os.environ["HF_TOKEN"],
)

# output is a PIL.Image object
image = client.text_to_image(
    "Astronaut riding a horse",
    model="zai-org/GLM-Image",
)
```
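The VLM example above mixes text and an image reference inside a single user message. A minimal offline sketch of that payload shape, so it can be inspected without an API key (the `build_vlm_message` helper is ours, not part of any SDK):

```python
# Sketch of the OpenAI-style multimodal message payload used by the
# Chat Completion (VLM) example above. No network call is made;
# build_vlm_message is a hypothetical helper for illustration only.

def build_vlm_message(text: str, image_url: str) -> dict:
    """Return a single user message combining text and an image URL part."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = build_vlm_message(
    "Describe this image in one sentence.",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
)
print(msg["role"])                                 # user
print([part["type"] for part in msg["content"]])   # ['text', 'image_url']
```

The resulting dict is exactly what goes into the `messages` list of `client.chat.completions.create(...)` in the VLM snippet.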

Similar articles (vector nearest neighbors)