TL;DR. MCP (Model Context Protocol) is an open standard for how AI assistants call external tools — databases, APIs, deployment targets, anything. Anthropic introduced it in November 2024. Claude Code, Cursor, ChatGPT, Codex, and most major AI tools now support it. The practical effect: any MCP-compatible AI can use any MCP-compatible tool, without custom integrations.

The one-sentence definition

MCP is to AI tools what USB is to hardware — a single common interface that lets any client talk to any server without custom wiring per pair.

In more technical terms: MCP is a JSON-RPC protocol that lets an AI assistant (the client) invoke tools exposed by an external service (the server). The assistant sees the tools as natural extensions of what it can do; the service defines which tools it offers and what they take as input.
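Concretely, a single tool invocation is one JSON-RPC request and one response. A sketch of what that looks like on the wire — the create_project tool and its arguments here are hypothetical, not a real server's API:

```python
import json

# JSON-RPC request the client sends to invoke a tool.
# "create_project" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_project",
        "arguments": {"name": "foo"},
    },
}

# A typical success response from the server.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {
        "content": [{"type": "text", "text": "Created project foo"}],
    },
}

print(json.dumps(request))
```

The id ties the response back to its request; the content list in the result is how a server hands text (or other media) back to the model.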

Why MCP exists

Before MCP, every AI tool integration was custom work. If you wanted Claude to be able to deploy to AWS, someone had to build a specific "Claude-to-AWS" integration. To also let Claude query your database, someone built a "Claude-to-Postgres" integration. Then GPT-4 needed its own version. Then Cursor needed them all too.

The math was obvious: n AI tools × m services = n × m integrations to build. That doesn't scale, and the integrations were shallow and brittle because nobody had the resources to maintain them all well.

MCP flips it to n + m: each AI tool speaks MCP, each service speaks MCP, they connect through a common interface. A server built once works with every MCP-compatible client.

How it actually works

Three components, in plain English:

  - The host: the AI application you talk to (Claude Code, Cursor, ChatGPT).
  - The client: the piece inside the host that speaks MCP to each connected server.
  - The server: the external service that exposes tools (a database, a file system, a deployment platform).

When you ask Claude Code to "create a new project called foo and deploy it," here's what happens:

  1. Claude Code (the host) reads your request.
  2. Claude has a list of every tool every connected MCP server has advertised — create_project from Hatchable, write_file from the file system, etc.
  3. Claude decides which tools to call, in what order.
  4. The client (inside Claude Code) sends JSON-RPC requests to each server.
  5. Each server runs the tool and returns the result.
  6. Claude reads the results and decides what to do next.

All of this happens in seconds, without you having to specify which tools to use. Claude figures out the tool chain from your natural-language request.
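The steps above can be sketched as a loop. Everything here is a stand-in — servers, model_pick_next_step, and the method names are illustrative, not a real host's API:

```python
# Hypothetical sketch of a host's tool loop. `servers` are objects with
# list_tools() and call_tool() methods; `model_pick_next_step` stands in
# for the model deciding which tool to call next (or None to finish).
def run_request(user_request, servers, model_pick_next_step):
    # Steps 1-2: gather every tool advertised by every connected server.
    tools = {name: srv for srv in servers for name in srv.list_tools()}
    transcript = [user_request]
    while True:
        # Step 3: the model picks the next tool call, or stops.
        step = model_pick_next_step(transcript, list(tools))
        if step is None:
            break
        name, args = step
        # Steps 4-5: send the call to the owning server, get the result.
        result = tools[name].call_tool(name, args)
        # Step 6: feed the result back so the model can decide what's next.
        transcript.append((name, result))
    return transcript
```

Real hosts do this with the model's native tool-calling interface rather than a hand-written loop, but the control flow is the same.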

What MCP servers look like in practice

An MCP server is a process or web endpoint that exposes a list of tools, each with a name, description, and argument schema. When an AI client connects, the server sends back its tool manifest. Example tools a hypothetical database MCP server might expose:

  - query: run a read-only SQL query and return the rows
  - list_tables: list the tables and their column schemas
  - insert_row: add a record to a table

The AI assistant reads the manifest and knows "oh, this server can query databases." When a conversation needs database access, it calls those tools directly — no prompting, no hallucinated SQL endpoints, no translation layer.
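A manifest entry is just a name, a description, and a JSON Schema for the arguments. A hypothetical query tool from that database server might be advertised roughly like this:

```python
# Illustrative shape of a tool manifest (what a tools/list response
# carries); the tool name and schema here are hypothetical.
manifest = {
    "tools": [
        {
            "name": "query",
            "description": "Run a read-only SQL query and return the rows.",
            "inputSchema": {
                "type": "object",
                "properties": {"sql": {"type": "string"}},
                "required": ["sql"],
            },
        }
    ]
}
```

The schema is what lets the model construct valid arguments without guessing — it knows this tool takes a single required string called sql.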

The transport layers

MCP servers can be reached three ways:

  - stdio: the client launches the server as a local subprocess and exchanges JSON-RPC messages over stdin/stdout.
  - Streamable HTTP: the client POSTs JSON-RPC messages to a web endpoint, with responses optionally streamed back.
  - SSE: an older HTTP transport built on server-sent events, kept around for backwards compatibility.

Most tools that run on hosted platforms (like Hatchable's /mcp endpoint) use HTTP transport. Most local-dev tools (file systems, git) use stdio.
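Over the HTTP transport, a tool call is an authenticated POST of a JSON-RPC message. A stdlib-only Python sketch that builds (but does not send) such a request — the URL and token are placeholders:

```python
import json
import urllib.request

# Placeholder endpoint and token; substitute your server's values.
url = "https://example.com/mcp"
token = "YOUR_TOKEN"

# A JSON-RPC request asking the server for its tool manifest.
body = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}).encode()

req = urllib.request.Request(
    url,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    },
)
# urllib.request.urlopen(req) would send it; the response body is a
# JSON-RPC message carrying the server's tool list.
```

The stdio transport frames the same JSON-RPC messages as lines on stdin/stdout instead of HTTP bodies; the payloads are identical.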

Who supports MCP

As of early 2026, MCP is supported by:

  - Claude Code
  - Cursor
  - ChatGPT
  - Codex
  - most other major AI coding tools

The protocol is open and governed by Anthropic through a public spec. New clients and servers keep appearing. The list above is current but not exhaustive.

What can MCP servers do?

Effectively anything a program can do. Common categories:

  - Databases: run queries, inspect schemas, apply migrations
  - File systems and version control: read and write files, run git operations
  - Deployment: create projects, push and host code
  - Team and SaaS tools: Slack, GitHub, Linear, Jira, Notion, Google Drive

Each one is just a JSON schema describing a capability. The AI calls the tool; the tool does the work; the AI reads the result.

Why this matters for building apps

The practical effect for app-builders: you can now have a conversation like "build a habit tracker with login and save the data to a database" and the AI has real tools for each part of that. It calls a deployment MCP server to create the project, a database MCP server to run migrations, a file-system tool to write the code. None of it is pretend.

Before MCP, AI tools could write the code for you but couldn't run it anywhere. Now they can do both in the same conversation. This is arguably the biggest shift in AI-assisted coding since the original GPT-3.5 demos.

Security

Because MCP servers can actually do things (write files, run database queries, deploy code), security is a real consideration. A few practical notes:

  - Treat MCP tokens like any other API credential: don't share them and don't commit them to a repo.
  - Only connect servers from sources you trust; you're handing the AI real capabilities on your behalf.
  - Most MCP clients ask permission the first time each tool is called. Review what the AI is about to do when the action is consequential.

MCP vs. tool use in general

MCP isn't the first way AI tools have called external tools. OpenAI has had "function calling" since 2023, Anthropic had tool use before MCP, various agent frameworks (LangChain, OpenAI Assistants) defined their own tool interfaces.

What's new with MCP is the open standard piece. Those earlier systems were per-vendor. MCP's contribution isn't the idea of AI-calling-tools (that predates it); it's that the same interface works across vendors. A server you build for Claude works with Cursor, ChatGPT, and Codex unchanged.

Try MCP with Hatchable.

Our /mcp endpoint works with any MCP-compatible AI tool. Free forever.

Get started free →

Frequently asked questions

What does MCP stand for?

Model Context Protocol. Introduced by Anthropic in November 2024 as an open standard for connecting AI assistants to external tools and data sources.

Do I need to know how MCP works to use it?

No. If you use Claude Code, Cursor, ChatGPT, or similar tools, MCP is the plumbing that makes external-tool support work. You configure a server URL and token; the tools it exposes appear in the AI's capabilities. You don't need to understand the protocol to use it, only how to add a server in your specific AI tool.

Is MCP secure?

As secure as you treat the tokens and servers. The protocol itself uses standard bearer-token auth over HTTPS. The risks are the normal API-token risks — don't share or commit tokens, use reputable MCP servers, review what the AI is about to do when it matters. Most MCP clients prompt for permission the first time each tool is called.

Can I build my own MCP server?

Yes. The spec is public (modelcontextprotocol.io) and there are SDKs for TypeScript, Python, and other languages. A basic server is ~50 lines of code handling a handful of JSON-RPC methods. Many internal teams build MCP servers as wrappers around their existing APIs to give their AI tools structured access.
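For a feel of the shape, here is a toy stdlib-only sketch that speaks newline-delimited JSON-RPC the way the stdio transport frames messages. The echo tool and serve helper are illustrative; a real server would use an official SDK and implement the full initialization handshake from the spec:

```python
import json
import sys

# The one hypothetical tool this toy server advertises.
TOOLS = [{
    "name": "echo",
    "description": "Return the text it was given.",
    "inputSchema": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}]

def handle(request):
    """Dispatch one JSON-RPC request to a result payload."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        text = request["params"]["arguments"]["text"]
        result = {"content": [{"type": "text", "text": text}]}
    else:
        result = {}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

def serve(stdin=sys.stdin, stdout=sys.stdout):
    # stdio transport framing: one JSON-RPC message per line.
    for line in stdin:
        stdout.write(json.dumps(handle(json.loads(line))) + "\n")
        stdout.flush()
```

An MCP client configured to launch this script as a subprocess would discover echo via tools/list and could start calling it.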

What's the difference between MCP and "function calling"?

OpenAI's function calling and Anthropic's original tool use were per-vendor ways to let AI models call external code. They worked within one AI provider's ecosystem. MCP is an open standard that works across providers — a server you build once works with Claude, Cursor, ChatGPT, Codex, and more, no per-vendor rewrites. Same underlying idea (AI calls tools), wider interoperability.

What are some real MCP servers I can try?

Hatchable's /mcp endpoint for deploying apps. Anthropic's reference implementations for filesystem, Postgres, Slack, and GitHub integration. Community servers exist for Linear, Jira, Notion, Google Drive, and more. Search for "MCP server [thing]" on GitHub — the ecosystem is growing quickly.