Global Trend Radar
Dev.to US tech 2026-05-09 00:06

OpenClaw without the Node v22 install hell — I put it on Telegram

Analysis

- Category: AI
- Importance: 83
- Trend score: 45
- Summary: How to use OpenClaw without the Node v22 installation problems, shared via Telegram, so users can adopt OpenClaw smoothly.
I'll be honest. I tried to install OpenClaw three times before I gave up and shipped a hosted version on Telegram for $20/mo. If you've stared at the OpenClaw README and felt the dread settle in, this post is for you. I'm going to walk through:

- The exact friction that kills 90% of installs (with the receipts)
- What I built to skip it
- The architecture, including the parts I'm not proud of
- What you lose when you don't run it locally
- Why I think the hosted angle is the right answer for most people

If you'd rather just try it: voltagegpu.com/confidential-agent. Same price as ChatGPT Plus. Sealed in Intel TDX in the EU. The operator (me) literally cannot read your messages. More on that later.

## The install nobody finishes

OpenClaw is a beast of a project. Hundreds of thousands of stars on GitHub. A plugin ecosystem that makes LangChain look anaemic. A maintainer with an actual point of view about what an agent should be.

It's also unusable for ~99% of the humans who star it. Here's what the README asks of you, in order:

```bash
# 1. Install nvm (you've heard of it, never installed it)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.0/install.sh | bash

# 2. Install Node v22 specifically — not v20, not v21, not v22.0.0,
#    not the v22 already in your homebrew. v22.16.0 or it segfaults on plugin load.
nvm install 22.16.0
nvm use 22.16.0

# 3. Global npm install of a 380MB package
npm install -g openclaw

# 4. Get an API key from a model provider you've never heard of,
#    OR from OpenAI but configure the right base URL,
#    OR from Chutes/Targon/etc. — README lists 14 options without ranking them

# 5. Edit ~/.openclaw/openclaw.json — JSON, no schema validation, fails silently
#    {
#      "providers": [...],
#      "agents": [...],
#      "gateway": { "mode": "local" }  // miss this, exit code 78, no error message
#    }

# 6. Install the gateway as a systemd user service
openclaw daemon install
openclaw daemon start

# 7. Run your first agent
openclaw agent main --local --prompt "hello world"
# (waits 100 seconds — yes really, plugin load — returns three lines of JSON)
```

Each step is fine on its own. Stack them and you've got a 30-minute setup that fails on step 5 for half the people who start it, because the JSON config rejects fields the README example shows. I lost an hour on `meta.lastTouchedBy` alone — turns out the schema rejects it, even though it appears in three of the demo configs.

The maintainer has been blunt about this. Paraphrasing a recent issue thread: "if you don't know how to use a terminal, this project is too dangerous for you to run." Fair. But that filter throws out a lot of people who actually need an agent.

## So I paid myself to host it

The shortcut, once you've eaten enough of these errors, is this: what if the install just... wasn't your problem?

That's the entire pitch. Run OpenClaw on a server I control. Wire the input/output to a surface every adult already has on their phone. Charge for it.

The surface I picked: Telegram. Not Slack (work). Not WhatsApp (no bot API worth using). Not iMessage (Apple won't let you). Telegram's bot API is mature, the UX is identical to texting, and people already have it.

The flow ends up being four steps:

1. Subscribe at voltagegpu.com/confidential-agent — Stripe, $20/mo
2. Dashboard shows you a one-time link token
3. Open Telegram, message @VoltageGPUPersonalBot, send `/start <token>`
4. Start texting it like you'd text a person

Total time, sign-up to first reply: about four minutes, most of which is Stripe checkout.
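The step-5 config failure is cheap to catch before you ever start the daemon. Below is a minimal pre-flight sketch; the required fields come from the README excerpt quoted in this post, and everything else (the helper name, the exact validation rules) is my assumption, not OpenClaw's real validator:

```typescript
// Pre-flight check for ~/.openclaw/openclaw.json, run before the daemon.
// NOTE: the required fields below come from the README excerpt in this post;
// the rest of this "schema" is a hypothetical sketch, not OpenClaw's real one.
type OpenClawConfig = {
  providers?: unknown[];
  agents?: unknown[];
  gateway?: { mode?: string };
  [key: string]: unknown; // allow (and flag) unexpected keys
};

const KNOWN_KEYS = new Set(["providers", "agents", "gateway"]);

export function preflight(config: OpenClawConfig): string[] {
  const problems: string[] = [];
  if (!Array.isArray(config.providers) || config.providers.length === 0)
    problems.push("providers: missing or empty");
  if (!Array.isArray(config.agents) || config.agents.length === 0)
    problems.push("agents: missing or empty");
  // The silent failure from step 5: gateway.mode must be present.
  if (config.gateway?.mode !== "local")
    problems.push('gateway.mode: must be "local" (missing => exit 78, no message)');
  // The meta.lastTouchedBy class of bug: extra keys get rejected silently.
  for (const key of Object.keys(config))
    if (!KNOWN_KEYS.has(key)) problems.push(`unexpected key (schema may reject it): ${key}`);
  return problems;
}
```

Running this against your config and printing the problem list turns "exit code 78, no output" into an actionable error message.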
## What's actually running

Here's the architecture, no glossing:

```
Telegram client
          │
          ▼
┌────────────────────────────────────────────┐
│ Next.js app on Vercel                      │
│   /api/telegram/webhook                    │
│    ├─ verifies bot token                   │
│    ├─ resolves chatId → userId             │
│    └─ inserts AgentJob row in Postgres     │
└────────────────────────────────────────────┘
          │
          ▼
┌────────────────────────────────────────────┐
│ Postgres (Neon)                            │
│   AgentJob: { userId, chatId, prompt,      │
│               status, result }             │
└────────────────────────────────────────────┘
          │ polled
          ▼
┌────────────────────────────────────────────┐
│ Worker on OVH VPS (systemd unit)           │
│   voltage-personal-agent.service           │
│    ├─ pulls pending AgentJob               │
│    ├─ spawns: openclaw agent main --local  │
│    │    --prompt <user message>            │
│    ├─ openclaw loads 92 plugins (~90s      │
│    │    cold, the part I'm not proud of)   │
│    ├─ extracts payloads[0].text            │
│    ├─ writes result back to AgentJob       │
│    └─ sends to Telegram via bot API        │
└────────────────────────────────────────────┘
          │ inference
          ▼
┌────────────────────────────────────────────┐
│ Chutes TEE inference                       │
│   https://llm.chutes.ai/v1                 │
│   model: Qwen/Qwen3-32B-TEE                │
│   Intel TDX-sealed, EU-hosted              │
└────────────────────────────────────────────┘
```

A few things to call out from the diagram, because they hurt to debug:

- **OpenClaw 2026.5.x changed the response shape without bumping major.** It used to return `{output: "..."}`. It now returns `{payloads: [{text, mediaUrl}], meta: {...}}`. If you grep for `.output` in your worker code, you'll get empty replies forever, and the JSON will look fine in your logs because `meta` is populated.
- **`--local` mode loads 92 plugins on every cold call.** That's the ~90-100 second floor I keep hitting. The gateway daemon (`openclaw daemon start`) keeps plugins warm on port 18789, but the worker right now spawns fresh per job because I haven't figured out a clean way to multiplex jobs through a single warm gateway without leaking state between users. So users wait 100 seconds.
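That response-shape change is worth defending against in code rather than memory. Here's a minimal sketch of a reply extractor that accepts both shapes and fails loudly on anything else; the field names are the ones quoted in this post, so treat the types as assumptions rather than OpenClaw's documented contract:

```typescript
// OpenClaw 2026.5.x moved from {output} to {payloads: [{text, mediaUrl}], meta}.
// Accept both so a version bump can't silently turn every reply into "".
// (Shape names are taken from this post; treat them as assumptions.)
type AgentResponse = {
  output?: string;                                    // pre-2026.5 shape
  payloads?: { text?: string; mediaUrl?: string }[];  // 2026.5.x shape
  meta?: Record<string, unknown>;
};

export function extractReply(res: AgentResponse): string {
  const text = res.payloads?.[0]?.text ?? res.output;
  if (text === undefined || text === "") {
    // Fail loudly instead of delivering empty Telegram messages forever.
    throw new Error(
      `unrecognised agent response shape, keys: ${Object.keys(res).join(", ")}`,
    );
  }
  return text;
}
```

The throw is the important part: an exception shows up in the worker logs, while an empty string quietly reaches the user.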
  I have `JOB_TIMEOUT_MS=240_000` to absorb this, and a "thinking..." Telegram message at t+2s so it doesn't feel dead.
- **The Telegram `sendMessage` returns `{ok: false}` on a bad chatId instead of throwing.** So a typo in the chat resolution path silently swallows the agent's reply. I learned this by inserting an AgentJob with chatId `999999999`, watching the worker complete successfully, and finding the answer in the database but never on my phone. Lesson: assert `ok === true` and re-queue if not.

## What you lose by not running it locally

Be honest with yourself. The hosted version is not strictly equivalent to running OpenClaw on your laptop. Specifically:

- **No custom plugins** — you get the 92 that ship by default. Want to add the GitHub plugin with your PAT? Local only.
- **No local file access** — OpenClaw on your laptop can read `~/Documents/`. The hosted bot cannot reach into your filesystem (and shouldn't).
- **Single agent identity** — I configure `--agent main` only. You can't define `--agent code-reviewer` and `--agent legal-research` with different system prompts (yet).
- **Inference model is fixed** — Qwen/Qwen3-32B-TEE. You don't get to swap in GPT-5 or Claude. This is a deliberate choice for the hardware-sealed story (more on that), but it's still a constraint.

If any of those are dealbreakers, install OpenClaw locally. Genuinely. The README is hostile but the project is good.

## What you gain

The reasons people actually use the hosted version, ranked by what I see in support emails:

1. **Memory persistence across devices.** Local OpenClaw stores conversation memory on disk. The hosted version stores it server-side, so the bot remembers your context whether you message from your phone, your laptop browser (Telegram Web), or your tablet.
2. **Mobile.** OpenClaw locally is laptop-only unless you SSH from your phone, which nobody does.
3. **No installation entropy.** No nvm conflicts when you upgrade macOS. No "works on my machine, fails on yours" when teaching a colleague.
4. **EU + TDX privacy posture.**
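Back to the `sendMessage` gotcha from earlier in this section: the "assert `ok === true` and re-queue" fix can be sketched as a small delivery wrapper. The function and parameter names here are hypothetical stand-ins for the real bot API call and the real AgentJob update, not code from the service:

```typescript
// Telegram's sendMessage resolves with {ok: false} on a bad chat_id rather
// than throwing, so a completed job can vanish. Wrap delivery and re-queue.
// sendFn and requeue are hypothetical stand-ins for the bot call and DB write.
type SendResult = { ok: boolean; description?: string };

export async function deliverReply(
  chatId: number,
  text: string,
  sendFn: (chatId: number, text: string) => Promise<SendResult>,
  requeue: (reason: string) => Promise<void>,
): Promise<boolean> {
  const res = await sendFn(chatId, text);
  if (res.ok !== true) {
    // Don't mark the AgentJob done: the reply never reached the user.
    await requeue(res.description ?? "sendMessage returned ok: false");
    return false;
  }
  return true;
}
```

With this in place, the chatId-999999999 failure mode becomes a re-queued job with a reason string instead of an answer stranded in the database.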
The privacy posture point needs a paragraph.

## The privacy angle, briefly

OpenClaw locally is private to you in the sense that the agent runs on your laptop. But the moment you point it at OpenAI or Anthropic, your prompts go to a US-hosted commercial provider that holds plaintext logs and can be subpoenaed.

The hosted version routes inference to an Intel TDX-sealed VM in France. TDX is a hardware confidentiality feature: the VM's memory is encrypted with a per-VM key the host (us) cannot extract. Our SREs can't read your prompts. A subpoena to us yields ciphertext we can't decrypt. The inference model never sees plaintext outside the enclave.

This is the "GDPR Article 28(3)(b) confidentiality, hardware-enforced" story, and it's why a couple of solo lawyers and notaries have started using it for client-sensitive drafting that they used to handle in ChatGPT and quietly regret. If you want the long version, there's a comparison page — same $20/mo as ChatGPT Plus, different threat model.

## The price anchor

I picked $20/mo for a reason. ChatGPT Plus is $20. Claude Pro is $20. There's an unwritten consumer expectation that "premium AI = $20/mo," and I'm not interested in fighting it.

What's included:

- 2,000 inference requests / month (covers normal daily use comfortably)
- Persistent conversation memory
- All 92 default OpenClaw plugins (web search, summarisation, file analysis on Telegram-attached docs, etc.)
- Telegram delivery on @VoltageGPUPersonalBot

If you blow past 2,000, the dashboard offers metered top-ups. If you don't, you don't pay extra.

## Try it or fork the bridge

If you just want to use it: voltagegpu.com/confidential-agent.

If you want to host your own Telegram bridge to OpenClaw on your own VPS, the architecture above is roughly all of it.
The painful bits are:

- Handle the `payloads[0].text` extraction shape change
- Don't trust `sendMessage` ok-status
- Cold plugin load is ~90s; either keep a warm gateway or set user expectations
- The `gateway.mode=local` config field is required, and the failure mode is exit code 78 with no message

Whichever you pick: stop trying to install OpenClaw cold on a fresh machine and expecting it to work first try. It won't. The maintainer was right about the terminal warning. The fix is either commit to the install pain, or pay someone else to wear it. I picked option 3: become the someone else.

If this saved you an evening, the bot is at voltagegpu.com/confidential-agent. If it didn't, the architecture diagram above is yours to copy.
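If you do fork the bridge, the worker loop from the diagram reduces to roughly this skeleton, with the OpenClaw spawn, Postgres access, and Telegram call injected as functions. Every name here is mine, a sketch of the described architecture rather than the real service code:

```typescript
// Skeleton of the bridge worker loop: pull a pending AgentJob, run the agent,
// deliver the reply, record the outcome. Dependencies are injected so the
// OpenClaw spawn, Postgres, and Telegram calls (not implemented here) stay out.
type Job = { id: number; chatId: number; prompt: string };

type Deps = {
  nextPending: () => Promise<Job | null>;            // SELECT ... WHERE status='pending'
  runAgent: (prompt: string) => Promise<string>;     // spawn openclaw, extract text
  send: (chatId: number, text: string) => Promise<boolean>; // Telegram, true iff ok
  markDone: (id: number, result: string) => Promise<void>;
  markFailed: (id: number, error: string) => Promise<void>;
};

export async function drainOnce(deps: Deps): Promise<number> {
  let handled = 0;
  for (let job = await deps.nextPending(); job; job = await deps.nextPending()) {
    try {
      const reply = await deps.runAgent(job.prompt);
      if (await deps.send(job.chatId, reply)) await deps.markDone(job.id, reply);
      else await deps.markFailed(job.id, "telegram delivery failed");
    } catch (err) {
      await deps.markFailed(job.id, String(err));
    }
    handled++;
  }
  return handled;
}
```

A systemd unit calling `drainOnce` on an interval, plus a timeout around `runAgent`, covers the shape of what the post describes.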