Model Context Protocol: The new AI connection standard

Published on August 7, 2025


AI tools, especially AI agents that can act on behalf of users, are growing in popularity. As a result, developers need ways to give these tools access to their apps and SaaS products to keep up with market demand.

Direct integration has been clunky, requiring a custom integration for each AI tool. The Model Context Protocol (MCP) aims to solve this by providing a standardized way for AI to interact with your apps on behalf of users.

This post explains what MCP is, how it works, and how you can leverage it in your digital products to add new AI functionality to your existing apps, broaden your use cases, and reach a wider user base.

What is the Model Context Protocol?

The MCP is an open standard and open-source framework developed by Anthropic to standardize how AI systems, especially large language models (LLMs), interact with external tools, systems, and data sources.

MCP offers a standardized way to connect your systems, tools, data, and functionality to AI agents in a structured, machine-readable way by exposing functions through an interface. It facilitates two-way communication between your software and external AI (like ChatGPT calling your CMS, or an AI agent triggering a Jira workflow), allowing you to launch AI-powered features faster.


MCP is a scalable alternative to building custom plugins or brittle integrations for each AI product you want users to connect with. It's often pitched as "the USB-C of AI" because any AI that implements the protocol can connect to your tools, data, and systems.

This universal functionality is enabled by the following:

  • Context awareness: MCP servers hand over relevant context like content schemas, analytics, or user history without needing developers to pass raw documents or engineer prompts. This makes the responses more accurate and specific.

  • Autonomy and automation: Instead of just answering questions, MCP allows AI agents to take action. They can invoke operations such as publishing content, updating records, or triggering workflows, all based on natural language instructions. This can enable automation across teams with less manual human work.

  • Integration flexibility: MCP decouples AI capabilities from custom integrations. With an MCP server in place, any AI agent can discover and use tools exposed through the protocol without needing a custom integration built for each use case. It connects your app to the growing, standardized AI ecosystem.

MCP is great for tasks where you normally spend two minutes deciding what to do and then thirty minutes executing it manually. Project management tools are a good example: describing a task is easy, but creating it, placing it correctly on a board, and filling in fields whose values are usually obvious from the context of the story can be time-consuming. Atlassian uses AI and MCP for innovative workflows like this. Jira lets you describe tasks to an AI agent and automate complex workflows, freeing you up to focus on other things. Its in-app AI agent is already pretty good, but Atlassian goes a step further by providing an MCP server that allows you to bypass the UI entirely while retrieving and changing relevant data.

As well as simplifying integration and reducing development burdens, MCP can drive growth. By implementing MCP, developers of tools and platforms can expand their product's use cases and enable automation at scale. This attracts more users, who get faster, more intelligent workflows by letting their AI assistants perform tasks through your product, and it gives you a leg up over competitors that don't support MCP.

How does MCP work?

At a high level, MCP defines how AI agents can "discover" and interface with tools or information in external systems. The MCP server acts as an interpreter between the AI client and your services, telling the client what services are available and how to use them.

It starts with discovery: The AI agent connects to an MCP server, which provides a list of available tools (functions the service can perform) and their context through machine-readable definitions. These definitions describe what each tool does (for example, "publish content" or "query database") and how to call it.

When the agent decides to use a tool, it sends a JSON-RPC request over the transport (usually HTTP, though MCP also supports STDIO). In turn, the MCP server processes the request, performs the action in the external system, and returns a structured response.

For example, the agent might call the getBlogArticles tool, and the MCP server might respond with { blogArticles: [{title: "My Super Blog Post"...}] }.
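In JSON-RPC 2.0 terms, that exchange might look like the sketch below. This is illustrative rather than a verbatim MCP transcript: the getBlogArticles tool name comes from the example above, and the argument payload is made up.

```python
import json

# Hypothetical JSON-RPC 2.0 request the AI client sends to call a tool
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "getBlogArticles",
        "arguments": {"limit": 10},  # illustrative argument
    },
}

# Hypothetical structured response from the MCP server
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {
                "type": "text",
                "text": json.dumps(
                    {"blogArticles": [{"title": "My Super Blog Post"}]}
                ),
            }
        ]
    },
}

# Both sides serialize messages as JSON over the chosen transport
wire_request = json.dumps(request)
print(json.loads(wire_request)["params"]["name"])  # getBlogArticles
```

The agent never sees your database or API directly; it only sees the tool name, its arguments, and the structured result the server chooses to return.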

The main components of an MCP architecture include:

  • Host application: The AI application the user interacts with (for example, a chat app or IDE) that runs one or more MCP clients.

  • MCP client: The connector within the host application that maintains the connection to an MCP server on behalf of the AI model or agent.

  • MCP server: The server that exposes the functions, tools, and context of the underlying system being connected (for example, a CMS or CRM).

  • Transport layer: Typically HTTP or STDIO, which carries the structured JSON-RPC requests and responses.


What problems does the Model Context Protocol solve?

Adding AI features to your app often yields unsatisfying results. Most teams don't have the resources to build AI features powerful enough to be useful for their use case. Likewise, AI integrations that grant broad access without context can produce unexpected results (for both you and your users). With MCP, developers can instead advertise and support specific, extensively tested actions that AI is allowed to take in their apps. This ensures those actions provide value to customers without wasting time and effort.

MCP enables dynamic scoping, which ensures you don't overload the LLM with unnecessary context and degrade its performance. Because MCP can scope tool access by environment and credentials, enterprises can limit what AI can access or modify, helping prevent AI from overstepping the actions a user intended it to take.
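One way to picture this scoping: the server decides which tools to advertise based on the caller's credentials, so an agent never even learns about operations it isn't allowed to perform. A minimal sketch, assuming a hypothetical tool catalog and role names:

```python
# Hypothetical tool catalog; tool names and roles are illustrative
ALL_TOOLS = {
    "getBlogArticles": {"roles": {"viewer", "editor", "admin"}},
    "publishContent": {"roles": {"editor", "admin"}},
    "deleteContent": {"roles": {"admin"}},
}

def tools_for_role(role: str) -> list[str]:
    """Return only the tools this credential is allowed to see and call."""
    return sorted(
        name for name, meta in ALL_TOOLS.items() if role in meta["roles"]
    )

print(tools_for_role("viewer"))  # ['getBlogArticles']
```

Scoping the advertised tool list also keeps the context handed to the LLM small, which tends to improve the quality of its tool selection.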

LLM isolation

LLMs are mostly isolated: even those with web search capabilities lack direct integrations with private systems, enterprise databases, or internal apps.

MCP servers are a way to plug your AI tools into this valuable information. For example, suppose you want an AI to help update records in an internal business app. Without MCP, you would either copy and paste records manually (time-consuming and error-prone) or connect the AI to your app's REST API, granting it broad access that could lead to accidental data changes or security issues if the AI lacks the full context of what it can and can't (or shouldn't) do. Implementing MCP lets the AI interact with the app's data safely, while you work with that information through prompts and natural language commands.

The NxM problem

NxM is a significant integration problem for developers, and MCP solves it. N represents the number of LLMs, and M represents the number of external tools or apps you might want to connect them to. Without a standard, each LLM needs a separate integration for each external tool, so the number of integration points grows multiplicatively.

This problem causes duplicated code and redundant development time, as developers solve the same integration problem again for each model. For example, a team might finish integrating ChatGPT for summaries but then want another AI product for transcription, forcing them to start the integration from scratch; every custom integration then needs ongoing maintenance to keep it from breaking. And because different AI models handle things in different ways, the code becomes fragmented, which can make the codebase confusing and cause the application to behave in unexpected ways.

Standardization solves all of this by providing a consistent interface through which services interact: roughly N + M implementations instead of N × M.
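The arithmetic makes the payoff clear. With illustrative counts of 5 models and 20 tools, point-to-point wiring needs 100 custom integrations, while a shared protocol needs only 25 implementations (one MCP client per model plus one MCP server per tool):

```python
n_models, m_tools = 5, 20  # illustrative counts, not from the article

# One custom integration per (model, tool) pair
point_to_point = n_models * m_tools

# One MCP client per model plus one MCP server per tool
with_standard = n_models + m_tools

print(point_to_point, with_standard)  # 100 25
```

The gap widens as either side of the ecosystem grows, which is why the protocol scales where one-off integrations don't.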

The rise of AI agents

AI agents build on LLM chatbots with three elements: autonomous reasoning, memory, and tool use.

AI agents don't just generate text; they can act on behalf of the user and take actual actions. MCP allows those agents to do this with external systems like content platforms, analytics tools, or CRMs, and even combine tools together through a natural language interface. For example, Contentful exposes functionality through its AI actions that an AI agent could use.

For example, you might ask an LLM to take some photos from your online photo storage, write a bit about them, and then publish a blog post for you. The LLM could then pull data about previous posts from an SEO tool and make suggestions about what you could add to make your next one more popular. By describing what you want in natural language, the agent can autonomously pull together data from all of these different tools and take action in seconds.


A scalable alternative to custom LLM plugins

MCP servers offer a scalable alternative to custom ChatGPT plugins and one-off integrations that can break with a software update. Rather than spinning up a custom integration for each AI model, MCP exposes your system's functionality through a standardized interface, allowing any compatible LLM to interact with your system and letting you swap AI models as you please. This removes the pressure of picking the "right one" to integrate with and keeps you flexible around compliance.

MCP reduces complexity, lightens the burden of having to maintain integrations, and helps teams make their apps smarter and deliver a better user experience.

How to use MCP to give your app AI capabilities

To connect AI agents to your app through the Model Context Protocol, you will need to expose the functionality and data of your internal system through a machine-readable interface. The following is a high-level overview of how to achieve this:

  1. Define the functionality with a tool schema: Decide what functionality you want to make available to AI agents (for example, fetching analytics or transcribing audio). Then write tool definitions that describe these actions in a structured way.

  2. Build your MCP server: Deploy a server that can host all of your tool definitions and provide endpoints that AI agents can access.

  3. Add context handlers: If your app needs to provide relevant context to perform certain actions, add the required logic to provide that context when needed.

  4. Implement security: Set up authentication, access controls, input validation, and human-in-the-loop approval where necessary.

  5. Register or share your MCP server: Make your server discoverable by agents. You can do this by sharing the URL directly or by registering it with an MCP registry.
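To make steps 1 and 2 concrete, here is a minimal, dependency-free sketch of a server-side dispatcher that advertises one tool definition and answers the two core requests, tools/list and tools/call. In practice you would likely build on an official MCP SDK and a real transport; the tool name, schema, and handler here are all illustrative:

```python
import json

# Step 1: a tool definition with a JSON Schema describing its input
TOOLS = [
    {
        "name": "fetchAnalytics",  # hypothetical tool
        "description": "Fetch page-view analytics for a given URL path.",
        "inputSchema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    }
]

def fetch_analytics(path: str) -> dict:
    # Stand-in for a real query against your analytics backend
    return {"path": path, "views": 1234}

# Step 2: a dispatcher that turns incoming JSON-RPC messages into actions
def handle(raw: str) -> str:
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif (req["method"] == "tools/call"
          and req["params"]["name"] == "fetchAnalytics"):
        data = fetch_analytics(req["params"]["arguments"]["path"])
        result = {"content": [{"type": "text", "text": json.dumps(data)}]}
    else:
        return json.dumps({
            "jsonrpc": "2.0", "id": req["id"],
            "error": {"code": -32601, "message": "Unknown method or tool"},
        })
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

An agent would first call tools/list to discover fetchAnalytics and its schema, then send a tools/call request with a path argument; steps 3 through 5 layer context, security, and discoverability on top of this core loop.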

Security implications for MCP servers and clients

Security is paramount when building any connected tool, both for the protection of your own data and infrastructure and to ensure that your service doesn't become an attack vector for your users.

Security considerations when building an MCP server

MCP servers and agentic applications are relatively new and still rapidly evolving, and so is the security landscape around them. Developers who build these systems must carefully consider how their services interact with MCP: how their MCP server will allow AI to interact, how to establish trust and consent, and how to audit interactions so they fully understand what users' AI agents are actually doing.

There are practical steps developers can take to make these systems as secure as possible. Set up authentication and authorization from an early stage, and harden both your MCP implementation and the backend services it accesses. Services should only allow trusted agents to perform approved actions, with access controls that scope tools and context according to the principle of least privilege.

To guard your system against injection attacks, configure your MCP server to validate and sanitize all inputs from AI agents. Disable all operations by default unless the user explicitly grants permission. In critical scenarios, like authorizing a payment or permanently deleting data, require human-in-the-loop (HITL) approval.
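As a sketch of these guards, the handler below denies unknown tools by default, validates argument types before touching the backend, and flags a destructive operation for human approval before executing it. The tool names and validation rules are made up for illustration:

```python
# Hypothetical policy: which tools exist, and which are destructive
ALLOWED_TOOLS = {"getBlogArticles", "deleteContent"}
DESTRUCTIVE_TOOLS = {"deleteContent"}

def guard_call(tool: str, arguments: dict, approved: bool = False) -> str:
    # Deny by default: reject anything not explicitly exposed
    if tool not in ALLOWED_TOOLS:
        return "rejected: unknown tool"
    # Validate and sanitize inputs before they reach the backend
    if not all(isinstance(v, (str, int, bool)) for v in arguments.values()):
        return "rejected: invalid argument types"
    # Human-in-the-loop: destructive actions need explicit approval
    if tool in DESTRUCTIVE_TOOLS and not approved:
        return "pending: human approval required"
    return "executed"

print(guard_call("deleteContent", {"id": "abc"}))
# pending: human approval required
```

Real implementations would validate against each tool's JSON Schema and log every decision for auditing, but the shape is the same: reject first, ask a human when the stakes are high, execute last.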

Other essential security considerations include rate limiting and open-sourcing your MCP implementation if possible. Exposing the inner workings of your system by showing the code and logic builds trust with your community, enabling users to audit, customize, and, if necessary, extend the server for their own specific use cases.

Security considerations when connecting to an MCP server

Just as you need to take care when building an MCP server, you should take equal care when connecting your own AI tools to one. A poorly designed MCP server may malfunction or take actions you did not permit or intend, and there is a very real concern about malicious MCP servers.

Malicious MCP servers may ship poisoned tool definitions that hide dangerous actions, such as deleting data, accessing private data, or using MCP as a vector to reach other connected MCP servers and the tools behind them. They typically do this with misleading function names, or by changing what a function does after trust has been established, tricking AI agents into calling dangerous actions under false pretenses.

Ensure that any MCP servers you connect to give you the power to grant granular permissions, and make sure that you're aware of what a tool will actually do. This way, you can be as confident as possible that your AI agent will take the right actions, and you’ll reduce the risk of it unintentionally doing something wrong or being exploited by a remote MCP server.

The future of Model Context Protocol

The explosion of interest in MCP servers echoes the API revolution of the last two decades. Companies opened their APIs to developers, and those developers built things nobody expected, finding new ways to get value from existing products. The same thing is set to happen with MCP: People will build things we don't expect.

Imagine marketers and content strategists who would like to have a simpler interface to understand holistically all the content currently published on a platform. With quick and easy access to this information, they could identify gaps, spot trends, and brainstorm ideas and outlines for new content. MCP could also be useful to content architects who would like to have a higher-level language that they can use to describe the shape of content, and then leave it to the LLM to interpret that into individual elements in the content schema.

Build a future-proof, composable, extensible architecture with AI, MCP, and Contentful

AI-enhanced architecture is the next evolutionary stepping stone in the digital experience. MCP enables this shift by providing a standardized interface that makes connecting external systems to AI tools easier. Your digital products need to start using this technology now to ensure you are positioned to best meet your current and future customers’ expectations.

Content-driven experiences are no exception to this: AI agents need to be able to connect to, understand, and interact with your online products and the text, images, video, and other media that supports them. This may range from asking an AI assistant to rewrite content and optimize keywords for better search rankings to asking an AI to automate content translation. You can do all of this and more with Contentful’s AI actions.

Contentful provides its own MCP server that you can use to enable your own AI assistants with the ability to work with Contentful or to build agentic applications that use Contentful. If you or your AI team is interested in having AI work with Contentful, consider exploring Contentful's MCP Server on GitHub.


Meet the authors

Niko Berry

Product Manager

Contentful

Niko is a product manager of developer experience at Contentful and an MCP enthusiast.

Marco Cristofori

Product Marketing Manager

Contentful

Marco is a B2B content creator and product marketer blending technical and creative skills. From the early stages of product ideation to a successful market launch, all the way through to sales enablement, he loves to take products and translate them into clear, relatable messages.

