Global Trend Radar
Dev.to US tech 2026-05-09 02:54

How I built a Discord 'ship-tracker' bot in a weekend (and the 3-process architecture that keeps it alive 24/7)


Analysis

Category
AI
Importance
65
Trend score
27
Summary
This article explains how the author built a ship-tracking Discord bot over a weekend, and the three-process architecture that keeps it running 24/7. It covers the bot's design, the development process, the technologies used, and the operational tricks involved, making it a useful reference for other developers.
Disclosure: I'm a senior backend tech lead and I run HostingGuru. This bot runs on HostingGuru's Pro tier — but the architecture (web service + worker + scheduled job) works on any platform that supports those three primitives. I'll point out where each piece runs.

I co-run a small Discord community for indie founders building dev tools. About 220 members, mostly early-stage SaaS people, lots of Claude Code / Cursor enthusiasts. Every Monday I used to manually scroll through the previous week's #i-shipped channel and write a digest message: "this week we shipped X, Y, Z." It took 30 minutes every Monday morning. After 5 weeks of it I did the math — 30 min × 52 weeks = 26 hours a year of me doing what a bot could do better.

So one Saturday I built ShipTrack, the bot that's been keeping my Mondays free for 6 months now. This is the build log. It's mostly about an architecture decision (3 separate processes instead of 1) that turned out to be the difference between "bot keeps crashing" and "bot just works."

## What the bot does

Three things, in order of complexity:

1. **Listens for the `/ship` slash command.** When a member runs `/ship "Launched my AI todo app — feedback welcome: link.com"`, the bot logs the launch into a database and reacts with 🚀 in the channel.
2. **Tracks #i-shipped channel messages.** When anyone posts in that channel (without a slash command), the bot detects launch-shaped content (heuristic: contains a URL + at least one of "shipped", "launched", "live"), logs it, and reacts.
3. **Posts a weekly digest every Monday at 9am UTC.** The bot pulls all launches from the last 7 days, formats them into a nice list, and posts it to #announcements with @-mentions of the founders.

That's it. Three things. But they map to three completely different kinds of computation, which is where v1 went wrong.

## v1: the naive setup that crashed in 15 minutes

I started simple. One Node.js file. `node bot.js`. Deploy to a Render free web service. Done in 30 minutes. It worked on my laptop.
It worked for the first 14 minutes after deploy. Then Render's free tier put the service to sleep due to no incoming HTTP traffic — and a Discord bot doesn't get HTTP traffic by default. It maintains a long-lived WebSocket connection to Discord's gateway. Render couldn't see that traffic. To Render, my bot was idle. So Render killed it.

When the bot came back from sleep 30 seconds later, it tried to reconnect to Discord's gateway. Discord saw two sessions for the same bot. The old session got disconnected with a Reconnect (opcode 7) and the new one inherited some weird state. Members started seeing the bot react to messages twice. Slash commands timed out.

This is the kind of bug that takes a long time to diagnose if you've never seen it before, because everything looks fine in your logs. There's no error, just slightly wrong behavior. I wasted 4 hours.

## Why Discord bots are weirder than they look

The thing nobody tells you when you start: a Discord bot has two completely different communication channels with Discord's servers, and they have totally different operational requirements.

**Channel 1: the gateway (WebSocket, persistent).**

- The bot opens a WebSocket to `wss://gateway.discord.gg`
- Stays open forever
- Receives every event in real time (member joined, message posted, reaction added)
- Sends heartbeats every 41.25 seconds
- If the connection drops for >60 seconds, you have to fully re-authenticate and resync state

**Channel 2: slash commands (HTTP, on-demand).**

- Discord POSTs to YOUR endpoint when a user runs a slash command
- You have 3 seconds to respond or Discord shows "interaction failed" to the user
- Public HTTP endpoint with signed payload verification

These two channels don't fit on the same kind of host. The gateway needs **always-on**. The slash command webhook needs **public HTTPS that wakes up fast**. Most "deploy your Node app" flows assume one or the other, not both.
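To make the gateway side concrete, here's a small sketch of the Hello-to-heartbeat handshake logic, factored into pure functions. The helper names are mine, not from the article; the payload shapes (op 10 Hello carrying `d.heartbeat_interval`, op 1 heartbeats carrying the last sequence number) follow Discord's gateway documentation:

```javascript
// Sketch: gateway heartbeat bookkeeping as pure functions
// (hypothetical helpers — the article's bot lets discord.js do this).

// Op 10 (Hello) carries the server-chosen heartbeat interval in ms.
function heartbeatIntervalMs(payload) {
  if (payload.op !== 10) return null;
  return payload.d.heartbeat_interval;
}

// Discord asks clients to wait interval * jitter (jitter in [0, 1))
// before the first heartbeat, to spread reconnect load.
function firstHeartbeatDelayMs(intervalMs, jitter = Math.random()) {
  return Math.floor(intervalMs * jitter);
}

// A heartbeat frame: op 1, d = last event sequence number seen (or null).
function heartbeatFrame(lastSeq) {
  return JSON.stringify({ op: 1, d: lastSeq });
}

// Example with the interval the article mentions (41.25 s):
const hello = { op: 10, d: { heartbeat_interval: 41250 } };
heartbeatIntervalMs(hello); // 41250
```

The point of the sketch: this loop only works if the process holding the socket never sleeps, which is exactly what the free-tier web service broke.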
## v2: three processes, three responsibilities

The architecture I landed on has three pieces:

```
┌──────────────────────┐  ┌──────────────────────┐  ┌──────────────────────┐
│ WEB SERVICE          │  │ WORKER               │  │ SCHEDULED SCRIPT     │
│ HTTPS endpoint       │  │ Always-on process    │  │ Runs Monday 9am UTC  │
│ Slash command webhook│  │ Discord gateway      │  │ Generates weekly     │
│ /api/discord/interact│  │ WebSocket connection │  │ digest               │
└──────────────────────┘  └──────────────────────┘  └──────────────────────┘
           │                         │                         │
           └─────────────────────────┴─────────────────────────┘
                                     │
                             ┌──────────────┐
                             │ Postgres     │
                             │ (launches)   │
                             └──────────────┘
```

Three separate deployments, one shared database. Each process does what it's good at and nothing else.

### Process 1: the web service (slash commands)

This is a tiny Express app. One endpoint. Returns under 1 second.

```javascript
// web-service/server.js
import express from 'express';
import { verifyKey } from 'discord-interactions';
import { db } from './db.js';

const app = express();
app.use(express.json({
  verify: (req, _res, buf) => { req.rawBody = buf; },
}));

app.post('/api/discord/interact', async (req, res) => {
  // 1. Verify Discord signed the request
  const signature = req.get('X-Signature-Ed25519');
  const timestamp = req.get('X-Signature-Timestamp');
  const valid = verifyKey(req.rawBody, signature, timestamp, process.env.DISCORD_PUBLIC_KEY);
  if (!valid) return res.status(401).send('invalid signature');

  // 2. Discord sometimes pings to check liveness
  if (req.body.type === 1) return res.json({ type: 1 });

  // 3. Slash command — log the launch and respond fast
  if (req.body.type === 2 && req.body.data?.name === 'ship') {
    const userId = req.body.member.user.id;
    const username = req.body.member.user.username;
    const text = req.body.data.options?.[0]?.value || '';

    await db.launches.insert({
      user_id: userId,
      username,
      text,
      channel_id: req.body.channel_id,
      created_at: new Date(),
    });

    return res.json({
      type: 4,
      data: { content: `🚀 Logged your ship, ${username}!` },
    });
  }

  res.json({ type: 4, data: { content: 'Unknown command' } });
});

app.listen(process.env.PORT || 3000);
```

Deploy this as a normal **web service**. It can sleep on free tiers — Discord sends a request only when someone runs `/ship`, and 1 second of cold start before responding is fine. (For HostingGuru, I picked the Hobby tier with the always-on free guarantee anyway, but the architecture works either way.)

### Process 2: the worker (gateway + reactions)

This is the long-running part. It opens the WebSocket connection to Discord and listens for messages. It can't sleep. Ever.

```javascript
// worker/bot.js
import { Client, GatewayIntentBits, Events } from 'discord.js';
import { db } from './db.js';

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent,
  ],
});

const SHIPPED_CHANNEL_ID = process.env.SHIPPED_CHANNEL_ID;

client.on(Events.MessageCreate, async (msg) => {
  if (msg.author.bot) return;
  if (msg.channel.id !== SHIPPED_CHANNEL_ID) return;

  // Heuristic: contains a URL + a "shipped"-ish word
  const hasUrl = /https?:\/\/\S+/.test(msg.content);
  const hasShipWord = /\b(shipped|launched|live|released)\b/i.test(msg.content);
  if (!hasUrl || !hasShipWord) return;

  await db.launches.insert({
    user_id: msg.author.id,
    username: msg.author.username,
    text: msg.content,
    channel_id: msg.channel.id,
    message_id: msg.id,
    created_at: new Date(),
  });

  await msg.react('🚀');
});

client.on(Events.ClientReady, () => {
  console.log(`ShipTrack online as ${client.user.tag}`);
});

client.login(process.env.DISCORD_BOT_TOKEN);
```

Deploy this as a **background worker**.
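The launch-detection heuristic is the part most worth unit-testing, and it's easy to pull out of the event handler into a pure function. The extraction and the function name are mine; the regexes are the same ones the worker uses:

```javascript
// Sketch: the worker's launch heuristic as a standalone, testable function
// (hypothetical refactor — the article keeps this inline in the handler).
const URL_RE = /https?:\/\/\S+/;
const SHIP_WORD_RE = /\b(shipped|launched|live|released)\b/i;

function looksLikeLaunch(content) {
  // A message counts as a launch if it has a URL AND a "shipped"-ish word.
  return URL_RE.test(content) && SHIP_WORD_RE.test(content);
}

looksLikeLaunch('Just shipped v2! https://example.com'); // true
looksLikeLaunch('anyone up for lunch?');                 // false
```

Keeping the heuristic pure means you can tighten it later (more trigger words, URL allowlists) with a quick test run instead of posting test messages in a live channel.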
On HostingGuru, this worker runs as the Pro tier `worker` process type — the same Procfile-style declaration as Heroku's old `worker:` line:

```yaml
# hostingguru.yml (or similar config)
processes:
  bot:
    type: worker
    command: node worker/bot.js
    always_on: true
```

The platform keeps it running. If it crashes, it restarts. If you push new code, it gracefully reconnects. No HTTP traffic required to keep it alive — that's the whole point of a worker process type vs a web service.

### Process 3: the scheduled script (weekly digest)

This one runs once a week. It's an "on-demand" script — runs, finishes, exits. Costs almost nothing.

```javascript
// scripts/weekly-digest.js
import { Client, GatewayIntentBits } from 'discord.js';
import { db } from './db.js';

const client = new Client({ intents: [GatewayIntentBits.Guilds] });
await client.login(process.env.DISCORD_BOT_TOKEN);

const since = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
const launches = await db.launches.find({
  created_at: { $gte: since },
});

if (launches.length === 0) {
  await client.destroy();
  process.exit(0);
}

const formatted = launches
  .map(l => `• <@${l.user_id}> shipped: ${l.text.slice(0, 200)}`)
  .join('\n');

const channel = await client.channels.fetch(process.env.ANNOUNCEMENTS_CHANNEL_ID);
await channel.send({
  content: `**📦 This week we shipped (${launches.length} launches):**\n\n${formatted}`,
  allowedMentions: { users: [] }, // mention in formatting only, don't ping
});

await client.destroy();
process.exit(0);
```

On HostingGuru, this runs as an on-demand script triggered by a schedule:

```yaml
processes:
  weekly-digest:
    type: script
    command: node scripts/weekly-digest.js
    schedule: "0 9 * * 1"  # every Monday at 9:00 UTC
```

The platform spins up an ephemeral container at the scheduled time, runs the script, captures the output, exits. You pay for ~3 seconds of compute per week.
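The digest body itself is just string assembly, so it can also be factored into a pure function and tested without touching Discord or the database. The function name below is mine; it assumes the same launch-row shape (`user_id`, `text`) the script reads:

```javascript
// Sketch: digest formatting as a pure function over launch rows
// (hypothetical helper mirroring the script's template).
function formatDigest(launches) {
  const lines = launches
    .map(l => `• <@${l.user_id}> shipped: ${l.text.slice(0, 200)}`)
    .join('\n');
  return `**📦 This week we shipped (${launches.length} launches):**\n\n${lines}`;
}

const sample = [
  { user_id: '111', text: 'Launched my AI todo app https://example.com' },
  { user_id: '222', text: 'v2 is live https://example.org' },
];
const digest = formatDigest(sample); // header line + blank line + 2 bullets
```

Factoring this out keeps the scheduled script down to query, format, send — the shape that makes a run-and-exit process trivially debuggable.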
If you've ever fought with Heroku Scheduler, you'll appreciate that the script lives in your repo, version-controlled, with the same env vars as the rest of your app.

## Why this architecture matters

The naive temptation is to put all three in one Node process: HTTP server + Discord client + a