The whole platform in small, learnable pieces. Point your AI at it or read it yourself — it's the same either way.
From an empty terminal to a live app in five minutes. Every step, no assumptions.
public/, api/, migrations/, seed.sql, hatchable.toml. File placement is the API.
The entire hatchable module: seven methods across db, auth, email, storage.
Enable [auth] in hatchable.toml and get signup, login, password reset, and OAuth — all auto-mounted.
Connect any MCP client, the five-step build loop, prompting tips, tool safety, and anti-patterns to avoid.
What happens during hatchable deploy: migrations, seed, static copy, function registration.
Hatchable projects follow a small, fixed layout. File placement determines behavior — there's no routing config.
public/              static files, served at their file path
api/                 backend functions — each file is one endpoint
  hello.js           → /api/hello
  users/list.js      → /api/users/list
  users/[id].js      → /api/users/:id (req.params.id)
  _lib/              shared code, not routed
migrations/*.sql     SQL files, run in filename order on every deploy
seed.sql             optional — runs on first deploy, once per project
hatchable.toml       optional overrides (cron, auth, project name)
package.json         dependencies (no build scripts yet)
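As a concrete starting point, here's what a minimal api/hello.js could look like under this layout. The handler shape follows the Express-style req/res the platform passes in; the greeting payload is just an illustration, and the function is left un-exported here so the sketch runs standalone.

```javascript
// Sketch of api/hello.js, which the layout above serves at /api/hello.
// On Hatchable the file would `export default` this handler; it's a plain
// function here so the snippet runs on its own.
async function handler(req, res) {
  // req and res are Express-shaped; echo a greeting from the query string.
  res.json({ hello: req.query.name ?? "world" });
}

// Exercise it with minimal mock req/res objects:
const res = { body: null, json(value) { this.body = value; } };
handler({ query: {} }, res).then(() => console.log(res.body)); // prints { hello: 'world' }
```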
Every API file exports a default async function that receives Express-shaped req and res objects. The hatchable module gives you seven methods:
db.query(sql, params)              → { rows, rowCount }
db.transaction([{sql, params}])    → { results }
auth.getUser(req)                  → { id, email, name } | null
email.send({ to, subject, html })
storage.put(key, buffer, type)     → url
storage.get(key)                   → { buffer, contentType }
storage.del(key)
That's the whole surface. Raw SQL is the primary database interface — agents are great at it, and the skills port anywhere. Use RETURNING to get inserted ids in the same round trip:
INSERT INTO users (email) VALUES ($1) RETURNING id
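Here's a sketch of how that looks from inside a handler. db is stubbed locally so the snippet runs standalone; on Hatchable it would come from the hatchable module, and the users table and id 42 are illustrative.

```javascript
// Sketch: an api/users/create.js-style handler using RETURNING.
// `db` is a local stand-in with the db.query shape listed above.
const db = {
  async query(sql, params) {
    // Stub: pretend Postgres ran the INSERT and handed back the new id.
    return { rows: [{ id: 42 }], rowCount: 1 };
  },
};

async function handler(req, res) {
  const { rows } = await db.query(
    "INSERT INTO users (email) VALUES ($1) RETURNING id",
    [req.body.email] // $1 placeholder, never string-interpolated
  );
  // The new id arrives in the same round trip; no follow-up SELECT needed.
  res.json({ id: rows[0].id });
}
```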
Drop this into hatchable.toml and your app gets signup, login, logout, password reset, session management, and OAuth:
[auth]
enabled = true
providers = ["email", "google"]
Hatchable auto-mounts the standard endpoints under /api/auth/* and provisions users, sessions, and accounts tables in your project's own database. You extend users with your own columns via a normal migration. In your function code, auth.getUser(req) returns the current user — same call whether auth is on or off.
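In function code, gating an endpoint on the current user could look like this sketch. auth is stubbed locally so it runs standalone; the cookie check and greeting are illustrative, not Hatchable's actual session logic.

```javascript
// Sketch: require a signed-in user before answering.
// `auth` is a stand-in with the auth.getUser shape: user object or null.
const auth = {
  async getUser(req) {
    // Stub: treat any cookie as a signed-in user, otherwise null.
    return req.headers.cookie
      ? { id: 1, email: "me@example.com", name: "Me" }
      : null;
  },
};

async function handler(req, res) {
  const user = await auth.getUser(req); // same call whether [auth] is on or off
  if (!user) return res.status(401).json({ error: "signed-in users only" });
  res.json({ message: `hello ${user.name}` });
}
```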
The Hatchable MCP server lives at https://hatchable.com/mcp. It speaks the Streamable HTTP transport (protocol version 2025-03-26) and authenticates via OAuth 2.1 with dynamic client registration, or a static bearer token you paste. Add it once and every prompt that ends in "…on hatchable" ships through the same build loop.
Four popular clients, one URL:
# Claude Code — OAuth, no token needed
claude mcp add --transport http hatchable https://hatchable.com/mcp

# Claude.ai — Settings → Connectors → Add custom connector
https://hatchable.com/mcp

# Cursor — paste into ~/.cursor/mcp.json
{ "mcpServers": { "hatchable": {
    "url": "https://hatchable.com/mcp",
    "headers": { "Authorization": "Bearer $HATCHABLE_TOKEN" } }}}

# Codex — ~/.codex/config.toml
[mcp_servers.hatchable]
url = "https://hatchable.com/mcp"
bearer_token_env_var = "HATCHABLE_API_KEY"
OAuth-capable clients (Claude Code, Claude.ai, Antigravity) skip the token step — sign in on the browser tab that pops up and come back. For static-token clients, grab $HATCHABLE_TOKEN from your API keys page.
Every successful session follows the same five steps. Get your agent into this rhythm and most of what follows becomes automatic.
1. Create the project. You get back project_id and the live URL. Use the URL as the final deliverable you hand to your user.
2. Write files in one batch. A single call with {path, content} entries is faster than ten write_files and is atomic: an invalid path in the batch rejects the whole batch.
3. Deploy. This copies public/ to the CDN, registers API endpoints, bumps the version.
4. Patch what's broken. patch_file is a find-and-replace; cheaper than rewriting the whole file for small edits.
5. Then deploy again and loop.
What we've seen work:
- Paste the project id ("proj_a8Kq7fR2xZ") early in the conversation — the agent remembers and stops asking which project.
- Ask for a backend test first: "run_function it with a realistic body and show me the response." This catches half of all bugs before the frontend is written.

Every project gets a dedicated PostgreSQL database. Treat it like any other Postgres — you have the full language.
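For multi-statement writes, db.transaction takes the {sql, params} shape from the module surface above. A sketch with db stubbed locally so it runs standalone; the accounts table and transfer amounts are made up, and on Hatchable the real module executes the batch against your project's own Postgres.

```javascript
// Sketch: db.transaction takes an array of {sql, params} and returns { results }.
// This stub only records what would run; the real call is all-or-nothing.
const db = {
  async transaction(statements) {
    return { results: statements.map((s) => ({ rowCount: 1, sql: s.sql })) };
  },
};

(async () => {
  const { results } = await db.transaction([
    { sql: "UPDATE accounts SET balance = balance - $1 WHERE id = $2", params: [50, 1] },
    { sql: "UPDATE accounts SET balance = balance + $1 WHERE id = $2", params: [50, 2] },
  ]);
  // Both statements succeed or neither does.
  console.log(results.length); // prints 2
})();
```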
- migrations/*.sql is the schema source of truth. Each migration runs exactly once, in filename order, tracked in __hatchable_migrations. Use numbered prefixes: 0001_init.sql, 0002_add_tags.sql.
- execute_sql is for queries and one-off data ops: SELECT, UPDATE, INSERT. Don't use it for CREATE TABLE or ALTER TABLE; changes made there aren't in your migration history and won't reproduce for forks.
- Parameterize with $1, $2, etc. Never string-interpolate untrusted values.
- Use RETURNING to get inserted ids in the same round trip instead of a follow-up SELECT.
- seed.sql runs once per project (and on every fresh fork). Good for demo data; keep it idempotent-ish so re-running doesn't duplicate rows if you trigger it manually.

Three visibility levels, set via set_visibility:
- personal: where you build.
- public: a your-slug.hatchable.site URL, or a custom domain you point with a CNAME.
- the third level requires [auth] enabled: same as public, plus room for user accounts, higher request budgets, and burst rate limits.

Build on personal. Promote when you're ready.
Every Hatchable tool carries MCP annotations so your AI (and the client surface it runs in) can reason about what a call will do before making it:
- Read-only (readOnlyHint: true) — get_project, list_projects, read_file, list_files, get_schema, view_logs, search_directory. Safe to call freely.
- Destructive (destructiveHint: true) — everything that writes or deletes, plus execute_sql and run_function. Clients that prompt before destructive calls will surface these.
- execute_sql is the sharpest tool. It can DROP TABLE as easily as SELECT. Agents should prefer the structured tools (get_schema, migrations) when possible.

Things we regularly see that end in tears:
- Creating tables with execute_sql — bypasses migration tracking, doesn't reproduce on fork. Always go through migrations/*.sql.
- Build scripts in package.json — deploy refuses projects with scripts.build (no remote build yet). Commit built output to public/.
- Secrets anywhere but set_env. Keys matching SECRET, PASSWORD, TOKEN, API_KEY, or PRIVATE are auto-flagged as secrets in the console.
- Sharing the URL before run_function returns 200 — write, deploy, test, then share. The whole loop takes seconds; users discovering the bug for you costs more.
- Rewriting whole files for small edits — patch_file exists precisely for this. Less context burned, fewer AI hallucinations sneaking in as "incidental rewrites."

One hatchable deploy (or the deploy MCP tool) does the following in order:
1. Rejects the deploy if package.json has a build script (commit built output to public/ for now).
2. Reads hatchable.toml and validates auth providers if [auth] is enabled.
3. Runs migrations/*.sql (filename order), tracked in __hatchable_migrations so each runs once.
4. Runs seed.sql if this is a fresh install (including forks).
5. Copies public/** to the CDN, injecting window.__HATCHABLE__ into HTML files.
6. Registers api/**.js as live endpoints (excluding api/_lib/**).

Want the details? Every MCP tool includes its full spec in the tool description when your AI lists them — so your AI always has an up-to-date copy, even if this page lags.