Documentation

Build on Hatchable.

The whole platform in small, learnable pieces. Point your AI at it or read it yourself — it's the same either way.

Quickstart

From an empty terminal to a live app in five minutes. Every step, no assumptions.

Project structure

public/, api/, migrations/, seed.sql, hatchable.toml. File placement is the API.

SDK reference

The entire hatchable module: seven methods across db, auth, email, storage.

Auth

Enable [auth] in hatchable.toml and get signup, login, password reset, and OAuth — all auto-mounted.

MCP + best practices

Connect any MCP client, the five-step build loop, prompting tips, tool safety, and anti-patterns to avoid.

Deploy

What happens during hatchable deploy: migrations, seed, static copy, function registration.

Project structure

Hatchable projects follow a small, fixed layout. File placement determines behavior — there's no routing config.

public/              static files, served at their file path
api/                 backend functions — each file is one endpoint
  hello.js           → /api/hello
  users/list.js      → /api/users/list
  users/[id].js      → /api/users/:id  (req.params.id)
  _lib/              shared code, not routed
migrations/*.sql     SQL files, run in filename order on every deploy
seed.sql             optional — runs on first deploy, once per project
hatchable.toml       optional overrides (cron, auth, project name)
package.json         dependencies (no build scripts yet)
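Since file placement is the API, a single file under api/ is a complete endpoint. A minimal sketch of what api/hello.js might contain — the handler body and the stubbed req/res in the usage note are illustrative, not platform code:

```javascript
// Sketch of api/hello.js: the file's path alone maps it to /api/hello.
// On Hatchable this function would be the file's default export and the
// platform supplies Express-shaped req and res objects.
async function handler(req, res) {
  const name = (req.query && req.query.name) || "world";
  res.json({ message: `hello, ${name}` });
}
```

Deploying that one file makes it live at /api/hello — no route registration anywhere else.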

The SDK

Every API file exports a default async function that receives Express-shaped req and res objects. The hatchable module gives you seven methods:

db.query(sql, params)             → { rows, rowCount }
db.transaction([{sql, params}])   → { results }

auth.getUser(req)                 → { id, email, name } | null

email.send({ to, subject, html })

storage.put(key, buffer, type)    → url
storage.get(key)                  → { buffer, contentType }
storage.del(key)

That's the whole surface. Raw SQL is the primary database interface — agents are great at it, and the skills port anywhere. Use RETURNING to get inserted ids in the same round trip:

INSERT INTO users (email) VALUES ($1) RETURNING id
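A sketch of that pattern inside a function file. The in-memory db stub below only mimics the documented { rows, rowCount } shape so the example runs standalone; on Hatchable you would use the db object from the hatchable module instead:

```javascript
// Illustrative stub standing in for the real SDK's db object.
const db = {
  _nextId: 1,
  async query(sql, params) {
    // Pretend every INSERT ... RETURNING id succeeds with a fresh id.
    if (/RETURNING id/i.test(sql)) {
      return { rows: [{ id: this._nextId++ }], rowCount: 1 };
    }
    return { rows: [], rowCount: 0 };
  },
};

async function createUser(email) {
  const { rows } = await db.query(
    "INSERT INTO users (email) VALUES ($1) RETURNING id",
    [email]
  );
  return rows[0].id; // the new id, no follow-up SELECT needed
}
```

One round trip instead of an INSERT followed by a SELECT — worth drilling into your agent's habits.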

Auth in two lines of config

Drop this into hatchable.toml and your app gets signup, login, logout, password reset, session management, and OAuth:

[auth]
enabled = true
providers = ["email", "google"]

Hatchable auto-mounts the standard endpoints under /api/auth/* and provisions users, sessions, and accounts tables in your project's own database. You extend users with your own columns via a normal migration. In your function code, auth.getUser(req) returns the current user — same call whether auth is on or off.
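Gating an endpoint on the current user then looks like the sketch below. The getUser stub here reads a made-up x-fake-session header so the example is self-contained; the real SDK resolves the session for you and returns { id, email, name } or null:

```javascript
// Illustrative stub for auth.getUser — not the real SDK implementation.
const auth = {
  async getUser(req) {
    return req.headers["x-fake-session"]
      ? { id: 7, email: "dev@example.com", name: "Dev" }
      : null;
  },
};

async function handler(req, res) {
  const user = await auth.getUser(req); // same call whether auth is on or off
  if (!user) return res.status(401).json({ error: "not signed in" });
  res.json({ hello: user.name });
}
```

Because getUser returns null rather than throwing when auth is off or the visitor is anonymous, the same handler works in every configuration.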

MCP

The Hatchable MCP server lives at https://hatchable.com/mcp. It speaks the Streamable HTTP transport (protocol version 2025-03-26) and authenticates via OAuth 2.1 with dynamic client registration, or a static bearer token you paste. Add it once and every prompt that ends in "…on hatchable" ships through the same build loop.

Connecting your AI

Four popular clients, one URL:

# Claude Code — OAuth, no token needed
claude mcp add --transport http hatchable https://hatchable.com/mcp

# Claude.ai — Settings → Connectors → Add custom connector
https://hatchable.com/mcp

# Cursor — paste into ~/.cursor/mcp.json
{ "mcpServers": { "hatchable": {
  "url": "https://hatchable.com/mcp",
  "headers": { "Authorization": "Bearer $HATCHABLE_TOKEN" }
}}}

# Codex — ~/.codex/config.toml
[mcp_servers.hatchable]
url = "https://hatchable.com/mcp"
bearer_token_env_var = "HATCHABLE_API_KEY"

OAuth-capable clients (Claude Code, Claude.ai, Antigravity) skip the token step — sign in on the browser tab that pops up and come back. For static-token clients, grab $HATCHABLE_TOKEN from your API keys page.

The build loop

Every successful session follows the same five steps. Get your agent into this rhythm and most of what follows becomes automatic.

  1. create_project — once. Returns a project_id and the live URL. Use the URL as the final deliverable you hand to your user.
  2. write_files (plural) — bulk over per-file. One call with an array of {path, content} entries beats ten single-file calls, and it is atomic: one invalid path rejects the entire batch.
  3. deploy — runs new migrations, copies public/ to the CDN, registers API endpoints, bumps the version.
  4. run_function — execute the endpoints you just wrote through your authenticated session. Works on personal projects before you've made them public, so you can verify response shapes before a real user hits them.
  5. patch_file or another write_files — iterate. patch_file is a find-and-replace; cheaper than rewriting the whole file for small edits.

Then deploy again and loop.
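The write_files batch in step 2 might look like the sketch below. The payload shape, project id, and the validatePaths helper are assumptions for illustration, not the tool's actual contract:

```javascript
// Hypothetical write_files payload: every file in one batch.
const batch = {
  project_id: "proj_123", // placeholder for the id create_project returned
  files: [
    { path: "public/index.html", content: "<h1>hi</h1>" },
    { path: "api/hello.js", content: "export default async (req, res) => res.json({ ok: true });" },
  ],
};

// Sketch of the atomicity rule: one bad path rejects the whole batch,
// so nothing is written partially.
function validatePaths(files) {
  const bad = files.filter((f) => f.path.startsWith("/") || f.path.includes(".."));
  if (bad.length) throw new Error("invalid path: " + bad[0].path);
  return files.length;
}
```

The practical consequence: have your agent assemble the whole file set first, then send it in one call, rather than dribbling files out one write at a time.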

Prompting

What we've seen work:

Database

Every project gets a dedicated PostgreSQL database. Treat it like any other Postgres — you have the full language.

Going live

Three visibility levels, set via set_visibility:

Build on personal. Promote when you're ready.

Tool safety

Every Hatchable tool carries MCP annotations so your AI (and the client surface it runs in) can reason about what a call will do before making it:

Anti-patterns

Things we regularly see that end in tears:

Deploy

One hatchable deploy (or the deploy MCP tool) does the following in order:

  1. Rejects the deploy if package.json has a build script (commit built output to public/ for now).
  2. Parses hatchable.toml and validates auth providers if [auth] is enabled.
  3. Runs every new migration in migrations/*.sql (filename order), tracked in __hatchable_migrations so each runs once.
  4. Runs seed.sql if this is a fresh install (including forks).
  5. Copies public/** to the CDN, injecting window.__HATCHABLE__ into HTML files.
  6. Registers api/**.js as live endpoints (excluding api/_lib/**).
  7. Increments the project version and supersedes the previous deploy.
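Step 3's ordering rule can be sketched as a pure function — illustrative only, not the deployer's actual code: migration filenames sort lexicographically, and anything already recorded in __hatchable_migrations is skipped.

```javascript
// Which migrations would run on this deploy, given what has already
// been applied. Lexicographic sort is why 001_, 002_ prefixes work.
function pendingMigrations(allFiles, applied) {
  const done = new Set(applied);
  return allFiles
    .filter((f) => f.endsWith(".sql") && !done.has(f))
    .sort();
}
```

This is also why zero-padded numeric prefixes matter: without padding, 10_foo.sql sorts before 2_bar.sql.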

Want the details? Every MCP tool includes its full spec in the tool description when your AI lists them — so your AI always has an up-to-date copy, even if this page lags.