Introducing the Model Context Protocol (MCP)
An open standard we've been working on at Anthropic that solves a core challenge with LLM apps - connecting them to your data.
No more building custom integrations for every data source. MCP provides one protocol to connect them all.
Here's a quick demo using the Claude Desktop app, where we've configured MCP:
Watch Claude connect directly to GitHub, create a new repo, and make a PR through a simple MCP integration.
Once MCP was set up in Claude Desktop, building this integration took less than an hour.
Getting LLMs to interact with external systems isn't usually that easy.
Today, every developer needs to write custom code to connect their LLM apps with data sources. It's messy, repetitive work.
MCP fixes this with a standard protocol for sharing resources, tools, and prompts.
At its core, MCP follows a client-server architecture where multiple servers can connect to any compatible client.
Clients are applications like Claude Desktop, IDEs, or AI tools. Servers are light adapters that expose data sources.
Part of what makes MCP powerful is that it handles both local resources (your databases, files, services) and remote ones (APIs like Slack's or GitHub's) through the same protocol.
MCP servers share more than just data. In addition to resources (files, docs, data), they can also expose:
- Tools (API integrations, actions)
- Prompts (templated interactions)
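Under the hood, clients discover these capabilities over JSON-RPC 2.0, which MCP is built on, using methods like "resources/list", "tools/list", and "prompts/list". A minimal sketch of a "tools/list" exchange (the tool name and schema shown are hypothetical, for illustration only):

```python
import json

# A client asks a server which tools it exposes:
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A plausible server response, advertising one hypothetical tool
# with a JSON Schema describing its input:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_repository",
                "description": "Create a new GitHub repository",
                "inputSchema": {
                    "type": "object",
                    "properties": {"name": {"type": "string"}},
                    "required": ["name"],
                },
            }
        ]
    },
}

# On the wire, both sides exchange these objects as JSON text.
wire = json.dumps(request)
print(json.loads(wire)["method"])  # tools/list
```

Because every server speaks this same shape of message, a client like Claude Desktop only has to implement the protocol once to talk to all of them.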
Security is built into the protocol - servers control their own resources, there's no need to share API keys with LLM providers, and there are clear system boundaries.
Right now, MCP is only supported locally - servers must run on your own machine.
But we're building remote server support with enterprise-grade auth, so teams can securely share their context sources across their organization.
We're building a world where AI connects to any data source through a single, elegant protocol—MCP is the universal translator.
Integrate MCP once into your client and connect to data sources anywhere.
Get started with MCP in <5 minutes - we've built servers for GitHub, Slack, SQL databases, local files, search engines, and more.
Install the Claude Desktop app and follow our step-by-step guide to connect your first server: https://t.co/WF34ZQxHhv
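For reference, connecting a prebuilt server in Claude Desktop comes down to a small JSON config change. A sketch of what a GitHub server entry can look like (the token is a placeholder you supply; exact keys may differ by version, so follow the guide above):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```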
Like LSP did for IDEs, we're building MCP as an open standard for LLM integrations.
Build your own servers, contribute to the protocol, and help shape the future of AI integrations: https://t.co/SqJSBneYSW
And check out the blog post for even more info: https://t.co/TyHWhuS4fO