TL;DR. MCP (Model Context Protocol) is an open standard for how AI assistants call external tools — databases, APIs, deployment targets, anything. Anthropic introduced it in November 2024. Claude Code, Cursor, ChatGPT, Codex, and most major AI tools now support it. The practical effect: any MCP-compatible AI can use any MCP-compatible tool, without custom integrations.
The one-sentence definition
MCP is to AI tools what USB is to hardware — a single common interface that lets any client talk to any server without custom wiring per pair.
In more technical terms: MCP is a JSON-RPC protocol that lets an AI assistant (the client) invoke tools exposed by an external service (the server). The assistant sees the tools as natural extensions of what it can do; the service defines which tools it offers and what they take as input.
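A minimal sketch of what this looks like on the wire, following the spec's `tools/call` shape (the tool name and arguments here are hypothetical):

```python
import json

# JSON-RPC 2.0 request the client sends to invoke a tool on a server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query",  # the tool the assistant chose
        "arguments": {"sql": "SELECT count(*) FROM users"},
    },
}

# The server's response carries the tool's output back to the assistant.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {"content": [{"type": "text", "text": "42"}]},
}

print(json.dumps(request, indent=2))
```

The assistant never sees the transport details; it only sees the tool's name, its input schema, and the result content.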
Why MCP exists
Before MCP, every AI tool integration was custom work. If you wanted Claude to be able to deploy to AWS, someone had to build a specific "Claude-to-AWS" integration. To also let Claude query your database, someone built a "Claude-to-Postgres" integration. Then GPT-4 needed its own version. Then Cursor needed them all too.
The math was obvious: n AI tools × m services = n × m integrations to build. That doesn't scale, and the integrations were shallow and brittle because nobody had the resources to maintain them all well.
MCP flips it to n + m: each AI tool speaks MCP, each service speaks MCP, they connect through a common interface. A server built once works with every MCP-compatible client.
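The scaling argument is easy to check with toy numbers (the counts are illustrative):

```python
n_ai_tools, m_services = 5, 20

# One custom integration per (AI tool, service) pair.
pairwise = n_ai_tools * m_services

# Each side implements MCP once and interoperates with everything.
via_mcp = n_ai_tools + m_services

print(pairwise, via_mcp)  # 100 vs. 25
```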
How it actually works
Three components, in plain English:
- The host. The AI tool itself (Claude Code, Cursor, etc.). It manages the conversation with the user and decides when to call tools.
- The client. Lives inside the host. Speaks MCP to external servers.
- The server. An external process or web endpoint that exposes tools the AI can call. Examples: a database with "query" and "insert" tools; a deployment platform with "create project" and "deploy" tools; a file system with "read file" and "write file" tools.
When you ask Claude Code to "create a new project called foo and deploy it," here's what happens:
- Claude Code (the host) reads your request.
- Claude has a list of every tool every connected MCP server has advertised — `create_project` from Hatchable, `write_file` from the file system, etc.
- Claude decides which tools to call, in what order.
- The client (inside Claude Code) sends JSON-RPC requests to each server.
- Each server runs the tool and returns the result.
- Claude reads the results and decides what to do next.
All of this happens in seconds, without you having to specify which tools to use. Claude figures out the tool chain from your natural-language request.
What MCP servers look like in practice
An MCP server is a process or web endpoint that exposes a list of tools, each with a name, description, and argument schema. When an AI client connects, the server sends back its tool manifest. Example tools a hypothetical database MCP server might expose:
- `query(sql: string) → rows` — run a read query
- `list_tables() → tables` — list the available tables
- `describe_table(name: string) → columns` — get a table's schema
The AI assistant reads the manifest and knows "oh, this server can query databases." When a conversation needs database access, it calls those tools directly — no prompting, no hallucinated SQL endpoints, no translation layer.
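A `tools/list` response for that hypothetical database server might look like this. The field names follow the MCP spec's tool-definition shape; the descriptions and schemas are illustrative.

```python
import json

manifest = {
    "tools": [
        {
            "name": "query",
            "description": "Run a read-only SQL query",
            "inputSchema": {
                "type": "object",
                "properties": {"sql": {"type": "string"}},
                "required": ["sql"],
            },
        },
        {
            "name": "list_tables",
            "description": "List the database's tables",
            "inputSchema": {"type": "object", "properties": {}},
        },
    ]
}

# The client hands these schemas to the model, which is how it
# "knows" what it can call and what arguments each tool expects.
print(json.dumps([t["name"] for t in manifest["tools"]]))
```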
The transport layers
MCP servers can be reached three ways:
- stdio. The AI client launches a local process and speaks to it over standard input/output. Best for local-only integrations (a file system, a git repo).
- HTTP (Streamable HTTP). The server is a web endpoint. The client POSTs JSON-RPC requests and gets streaming responses. Best for hosted services (databases, APIs, deploy platforms).
- SSE (Server-Sent Events). An older streaming model. Being phased out in favor of Streamable HTTP.
Most tools that run on hosted platforms (like Hatchable's /mcp endpoint) use HTTP transport. Most local-dev tools (file systems, git) use stdio.
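A client-side config mixing both transports might look like this. The exact file name and schema vary by client; this follows the common `mcpServers` shape used by Claude Desktop and Cursor. The server names and the Hatchable URL are placeholders — check your tool's docs for the real values.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    },
    "hatchable": {
      "url": "https://hatchable.example/mcp"
    }
  }
}
```

The `command` entry is a stdio server the client launches itself; the `url` entry is a hosted HTTP server the client connects to.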
Who supports MCP
As of early 2026, MCP is supported by:
- Claude — Claude Code (terminal) and Claude Desktop (GUI) both speak MCP natively.
- Cursor — MCP support added in early 2025; configured via JSON in Settings.
- Codex — OpenAI's terminal agent reads MCP server configs from `~/.codex/config.toml`.
- ChatGPT Desktop — MCP support via the "Connectors" feature.
- Antigravity — Google's agentic AI coding IDE.
- OpenClaw — an independent MCP-compatible agent.
- Continue — open-source VS Code / JetBrains agent.
The protocol is open and governed by Anthropic through a public spec. New clients and servers keep appearing. The list above is current but not exhaustive.
What can MCP servers do?
Effectively anything a program can do. Common categories:
- File system access — read/write files, list directories, search contents.
- Version control — git operations, GitHub/GitLab APIs.
- Databases — Postgres, MySQL, SQLite, Redis, key-value stores.
- Deployment — Hatchable, and other hosts increasingly adopting MCP.
- Search — web search via specific providers, vector search over documents.
- Company data — Slack, Notion, Google Drive, Linear, Jira, etc.
- Testing and monitoring — run tests, fetch logs, analyze errors.
Each one is just a JSON schema describing a capability. The AI calls the tool; the tool does the work; the AI reads the result.
Why this matters for building apps
The practical effect for app-builders: you can now have a conversation like "build a habit tracker with login and save the data to a database" and the AI has real tools for each part of that. It calls a deployment MCP server to create the project, a database MCP server to run migrations, a file-system tool to write the code. None of it is pretend.
Before MCP, AI tools could write the code for you but couldn't run it anywhere. Now they can do both in the same conversation. This is the biggest shift in AI-assisted coding since the original GPT-3.5 demos.
Security
Because MCP servers can actually do things (write files, run database queries, deploy code), security is a real consideration. A few practical notes:
- MCP clients prompt for permission the first time they call each tool. You approve or deny. Good clients let you pre-approve specific servers or tools.
- Servers authenticate clients — typically via a bearer token. Don't paste your MCP server tokens into places you wouldn't paste a production API key.
- Servers define their own scope — Hatchable's server only exposes tools related to your Hatchable projects; it doesn't touch anything outside that. Reputable MCP servers are similarly scoped.
- You should still review what an AI is about to do when it's working in production or with sensitive data. The prompt-for-permission default exists because "AI acts on your behalf" has real consequences.
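One concrete habit worth adopting: keep tokens out of the config file itself. Some clients (Claude Code's `.mcp.json`, for example) expand environment variables in config values; whether yours does is worth checking in its docs. A hypothetical hosted-server entry might look like:

```json
{
  "mcpServers": {
    "hatchable": {
      "url": "https://hatchable.example/mcp",
      "headers": {
        "Authorization": "Bearer ${HATCHABLE_TOKEN}"
      }
    }
  }
}
```

The token lives in your shell environment, so the config file can be committed without leaking credentials.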
MCP vs. tool use in general
MCP isn't the first way AI assistants have called external tools. OpenAI has had "function calling" since 2023, Anthropic had tool use before MCP, and various agent frameworks (LangChain, OpenAI Assistants) defined their own tool interfaces.
What's new with MCP is the open standard piece. Those earlier systems were per-vendor. MCP's contribution isn't the idea of AI-calling-tools (that predates it); it's that the same interface works across vendors. A server you build for Claude works with Cursor, ChatGPT, and Codex unchanged.
Try MCP with Hatchable.
Our /mcp endpoint works with any MCP-compatible AI tool. Free forever.
Get started free →

Frequently asked questions
What does MCP stand for?
Model Context Protocol. Introduced by Anthropic in November 2024 as an open standard for connecting AI assistants to external tools and data sources.
Do I need to know how MCP works to use it?
No. If you use Claude Code, Cursor, ChatGPT, or similar tools, MCP is the plumbing that makes external-tool support work. You configure a server URL and token; the tools it exposes appear in the AI's capabilities. You don't need to understand the protocol to use it, only how to add a server in your specific AI tool.
Is MCP secure?
As secure as you treat the tokens and servers. The protocol itself uses standard bearer-token auth over HTTPS. The risks are the normal API-token risks — don't share or commit tokens, use reputable MCP servers, review what the AI is about to do when it matters. Most MCP clients prompt for permission the first time each tool is called.
Can I build my own MCP server?
Yes. The spec is public (modelcontextprotocol.io) and there are SDKs for TypeScript, Python, and other languages. A basic server is ~50 lines of code exposing a handful of JSON-RPC endpoints. Many internal teams build MCP servers as a wrapper around their existing APIs to give their AI tools structured access.
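To make the shape concrete, here is a stdlib-only sketch of a stdio-style server's dispatch logic — no SDK, just the two tool methods. A real server would use one of the official SDKs and also handle `initialize` and capability negotiation; the `echo` tool is hypothetical.

```python
import json
import sys

# Tool registry: schema for the client, plus the function that runs it.
TOOLS = {
    "echo": {
        "description": "Return the input text unchanged",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
        "fn": lambda args: args["text"],
    },
}

def handle(request: dict) -> dict:
    """Answer one JSON-RPC request (tools/list or tools/call)."""
    if request["method"] == "tools/list":
        result = {
            "tools": [
                {k: v for k, v in t.items() if k != "fn"} | {"name": name}
                for name, t in TOOLS.items()
            ]
        }
    elif request["method"] == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        text = tool["fn"](request["params"]["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

if __name__ == "__main__":
    # stdio transport: one JSON-RPC message per line on stdin/stdout.
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

Wrapping an existing internal API is usually just this pattern with each tool's `fn` calling the API.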
What's the difference between MCP and "function calling"?
OpenAI's function calling and Anthropic's original tool use were per-vendor ways to let AI models call external code. They worked within one AI provider's ecosystem. MCP is an open standard that works across providers — a server you build once works with Claude, Cursor, ChatGPT, Codex, and more, no per-vendor rewrites. Same underlying idea (AI calls tools), wider interoperability.
What are some real MCP servers I can try?
Hatchable's /mcp endpoint for deploying apps. Anthropic's reference implementations for filesystem, Postgres, Slack, and GitHub integration. Community servers exist for Linear, Jira, Notion, Google Drive, and more. Search for "MCP server [thing]" on GitHub — the ecosystem is growing quickly.