This is the second of four parts in this series. Part 1 can be found here.
4. The Architecture of MCP: Clients, Protocol, Servers, and Services
How does MCP actually work under the hood? At its core, MCP follows a client–server architecture, with a twist tailored for AI-to-software communication. Let’s break down the roles:
MCP servers
These are lightweight adapters that run alongside a specific application or service. An MCP server exposes that application’s functionality (its “services”) in a standardized way. Think of the server as a translator embedded in the app—it knows how to take a natural-language request (from an AI) and perform the equivalent action in the app. For example, a Blender MCP server knows how to map “create a cube and apply a wood texture” onto Blender’s Python API calls. Similarly, a GitHub MCP server can take “list my open pull requests” and fetch that via the GitHub API. MCP servers typically implement a few key things:
- Tool discovery: They can describe what actions/capabilities the application offers (so the AI knows what it can ask for).
- Command parsing: They interpret incoming instructions from the AI into precise application commands or API calls.
- Response formatting: They take the output from the app (data, confirmation messages, etc.) and format it back in a way the AI model can understand (usually as text or structured data).
- Error handling: They catch exceptions or invalid requests and return useful error messages for the AI to adjust.
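The four responsibilities above can be sketched in a few dozen lines. This is an illustrative toy, not a real MCP server: the tool name (`list_open_prs`), its schema, and the canned data are hypothetical, and a production server would use an official SDK and speak JSON-RPC 2.0 over stdio or HTTP rather than exposing plain functions.

```python
import json

# Hypothetical tool registry for a GitHub-style MCP server.
TOOLS = {
    "list_open_prs": {
        "description": "List open pull requests in a repository.",
        "inputSchema": {"type": "object",
                        "properties": {"repo": {"type": "string"}},
                        "required": ["repo"]},
    },
}

def list_tools() -> dict:
    """Tool discovery: describe what this server can do."""
    return {"tools": [{"name": name, **spec} for name, spec in TOOLS.items()]}

def call_tool(name: str, arguments: dict) -> dict:
    """Command parsing, response formatting, and error handling."""
    if name not in TOOLS:  # error handling: unknown tool
        return {"isError": True,
                "content": [{"type": "text", "text": f"Unknown tool: {name}"}]}
    if name == "list_open_prs":
        repo = arguments.get("repo")
        if not repo:       # error handling: invalid arguments
            return {"isError": True,
                    "content": [{"type": "text", "text": "Missing 'repo'"}]}
        # A real server would call the GitHub API here; we return canned data.
        prs = [{"number": 42, "title": "Fix login bug"}]
        # Response formatting: wrap app output as text the model can read.
        return {"content": [{"type": "text", "text": json.dumps(prs)}]}
```

The key design point is the split between `list_tools` (what the AI can ask for) and `call_tool` (doing it): the model never needs to know how the GitHub API works, only the tool's name and schema.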
MCP clients
On the other side, an AI assistant (or the platform hosting it) includes an MCP client component. This client maintains a 1:1 connection to an MCP server. In simpler terms, if the AI wants to use a particular tool, it will connect through an MCP client to that tool’s MCP server. The client’s job is to handle the communication (open a socket, send/receive messages) and present the server’s responses to the AI model. Many AI “host” programs act as an MCP client manager—e.g., Cursor (an AI IDE) can spin up an MCP client to talk to Figma’s server or Ableton’s server, as configured. The MCP client and server speak the same protocol, exchanging messages back and forth.
The MCP protocol
This is the language and rules that the clients and servers use to communicate. It defines things like message formats, how a server advertises its available commands, how an AI asks a question or issues a command, and how results are returned. The protocol is transport agnostic: It can work over HTTP/WebSocket for remote or stand-alone servers, or even standard I/O streams (stdin/stdout) for local integrations. The content of the messages might be JSON or another structured schema. (The spec uses JSON Schema for definitions.) Essentially, the protocol ensures that whether an AI is talking to a design tool or a database, the handshake and query formats are consistent. This consistency is why an AI can switch from one MCP server to another without custom coding—the “grammar” of interaction remains the same.
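Concretely, MCP messages follow JSON-RPC 2.0 framing, with methods such as `tools/list` and `tools/call`. The envelope below reflects that framing; the tool name and SQL argument are made up for illustration.

```python
import json

# A discovery request: "what can this server do?"
list_request = {
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/list", "params": {},
}

# An invocation: run a (hypothetical) query tool with arguments.
call_request = {
    "jsonrpc": "2.0", "id": 2,
    "method": "tools/call",
    "params": {"name": "run_query",
               "arguments": {"sql": "SELECT count(*) FROM users"}},
}

# The server's reply, matched to the request by id.
call_response = {
    "jsonrpc": "2.0", "id": 2,
    "result": {"content": [{"type": "text", "text": "1042"}]},
}
```

Note that only `params.name` and `params.arguments` change from tool to tool; the envelope is identical whether the server fronts a database, a design tool, or a filesystem. That uniformity is exactly what lets an AI switch servers without custom code.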
Services (applications/data sources)
These are the actual apps, databases, or systems that the MCP servers interface with. We call them “services” or data sources—they are the end target the AI ultimately wants to utilize. They can be local (e.g., your filesystem, an Excel file on your computer, a running Blender instance) or remote (e.g., a SaaS app like Slack or GitHub accessed via API). The MCP server is responsible for securely accessing these services on behalf of the AI. For example, a local service might be a directory of documents (served via a Filesystem MCP), whereas a remote service could be a third-party API (like Zapier’s web API for thousands of apps, which we’ll discuss later). In MCP’s architecture diagrams, you’ll often see both local data sources and remote services—MCP is designed to handle both, meaning an AI can pull from your local context (files, apps) and online context seamlessly.
To illustrate the flow, imagine you tell your AI assistant (in Cursor), “Hey, gather the user stats from our product’s database and generate a bar chart.” Cursor (as an MCP host) might have an MCP client for the database (say a Postgres MCP server) and another for a visualization tool. The query goes to the Postgres MCP server, which runs the actual SQL and returns the data. Then the AI might send that data to the visualization tool’s MCP server to create a chart image. Each of these steps is mediated by the MCP protocol, which handles discovering what the AI can do (“this server offers a run_query action”), invoking it, and returning results. All the while, the AI model doesn’t have to know SQL or the plotting library’s API—it just uses natural language and the MCP servers translate its intent into action.
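The two-server chain above can be sketched with stubs. Both servers are fakes, and the tool names (`run_query`, `render_chart`) are hypothetical; the point is the host-side routing, where the output of one server becomes the input of the next.

```python
def postgres_server(tool: str, args: dict) -> dict:
    """Stub for a Postgres MCP server."""
    assert tool == "run_query"
    # A real server would execute args["sql"] against the database.
    return {"rows": [("free", 812), ("pro", 230)]}

def chart_server(tool: str, args: dict) -> dict:
    """Stub for a visualization MCP server."""
    assert tool == "render_chart"
    # A real server would render an image; we return a fake file path.
    return {"image": f"bar_chart_{len(args['rows'])}_bars.png"}

# The host (e.g., Cursor) mediates, one MCP client per server:
data = postgres_server(
    "run_query",
    {"sql": "SELECT plan, count(*) FROM users GROUP BY plan"})
chart = chart_server("render_chart", {"rows": data["rows"]})
```

The model's only job is deciding which tool to call next and with what arguments; each server handles its own domain's details.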
It’s worth noting that security and control are part of architecture considerations. MCP servers run with certain permissions—for instance, a GitHub MCP server might have a token that grants read access to certain repos. Currently, configuration is manual, but the architecture anticipates adding standardized authentication in the future for robustness (more on that later). Also, communication channels are flexible: Some integrations run the MCP server inside the application process (e.g., a Unity plug-in that opens a local port), while others run as separate processes. In all cases, the architecture cleanly separates the concerns: The application side (server) and the AI side (client) meet through the protocol “in the middle.”
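Today's manual configuration typically looks like the snippet below, shown here in the format used by hosts such as the Claude desktop app (exact fields vary by host). The server runs as a local process, and its permissions come from the token you supply:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

Scoping that token (for example, read-only access to specific repos) is currently the main lever for limiting what the AI can do through the server.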
5. Why MCP Is a Game Changer for AI Agents and Developer Tooling
MCP is a fundamental shift that could reshape how we build software and use AI. For AI agents, MCP is transformative because it dramatically expands their reach while simplifying their design. Instead of hardcoding capabilities, an AI agent can now dynamically discover and use new tools via MCP. This means we can easily give an AI assistant new powers by spinning up an MCP server, without retraining the model or altering the core system. It’s analogous to how adding a new app to your smartphone suddenly gives you new functionality—here, adding a new MCP server instantly teaches your AI a new skill set.
From a developer tooling perspective, the implications are huge. Developer workflows often span dozens of tools: coding in an IDE, using GitHub for code, Jira for tickets, Figma for design, CI pipelines, browsers for testing, etc. With MCP, an AI codeveloper can hop between all these seamlessly, acting as the glue. This unlocks “composable” workflows where complex tasks are automated by the AI chaining actions across tools. For example, consider integrating design with code: With an MCP connection, your AI IDE can pull design specs from Figma and generate code, eliminating manual steps and potential miscommunications.
No more context switching, no more manual translations, no more design-to-code friction—the AI can directly read design files, create UI components, and even export assets, all without leaving the coding environment.
This kind of friction reduction is a game changer for productivity.
Another reason MCP is pivotal: It enables vendor-agnostic development. You’re not locked into one AI provider’s ecosystem or a single toolchain. Since MCP is an open standard, any AI client (Claude, other LLM chatbots, or open source LLMs) can use any MCP server. This means developers and companies can mix and match—e.g., use Anthropic’s Claude for some tasks, switch to an open source LLM later—and their MCP-based integrations remain intact. That flexibility derisks adopting AI: You’re not writing one-off code for, say, OpenAI’s plug-in format that becomes useless elsewhere. It’s more like building a standard API that any future AI can call. In fact, we’re already seeing multiple IDEs and tools embrace MCP (Cursor, Windsurf, Cline, the Claude desktop app, etc.), and even model-agnostic frameworks like LangChain provide adapters for MCP. This momentum suggests MCP could become the de facto interoperability layer for AI agents. As one observer put it, what’s to stop MCP from evolving into a “true interoperability layer for agents” connecting everything?
MCP is also a boon for tool developers. If you’re building a new developer tool today, making it MCP-capable vastly increases its power. Instead of only having a GUI or API that humans use, you get an AI interface “for free.” This idea has led to the concept of “MCP-first development,” where you build the MCP server for your app before or alongside the GUI. By doing so, you ensure from day one that AI can drive your app. Early adopters have found this extremely beneficial. “With MCP, we can test complex game development workflows by simply asking Claude to execute them,” says Miguel Tomas, creator of the Unity MCP server. This not only speeds up testing (the AI can rapidly try sequences of actions in Unity) but also indicates a future where AI is a first-class user of software, not an afterthought.
Finally, consider the efficiency and capability boost for AI agents. Before MCP, if an AI agent needed some info from a third-party app, it was stuck unless a developer had foreseen that need and built a custom plug-in. Now, as the ecosystem of MCP servers grows, AI agents can tackle a much wider array of tasks out of the box by leveraging existing servers. Need to schedule a meeting? There might be a Google Calendar MCP. Analyze customer tickets? Perhaps a Zendesk MCP. The barrier to multistep, multisystem automation drops dramatically. This is why many in the AI community are excited: MCP could unlock a new wave of AI orchestration across our tools. We’re already seeing demos where a single AI agent moves fluidly from emailing someone to updating a spreadsheet to creating a Jira ticket, all through MCP connectors. The potential to compose these actions into sophisticated workflows (with the AI handling the logic) could usher in a “new era” of intelligent automation, as Siddharth Ahuja described after connecting Blender via MCP.
In summary, MCP matters because it turns the dream of a universal AI assistant for developers into a practical reality. It’s the missing piece that makes our tools context aware and interoperable with AI, with immediate productivity wins (less manual glue work) and strategic advantages (future-proof, flexible integrations). The next sections will make this concrete by walking through some eye-opening demos and use cases made possible by MCP.