
Last November, Anthropic released the Model Context Protocol (MCP), an open standard for communication between the components of an AI application and external systems, data sources, and tools. The developer community quickly adopted the protocol, implementing hundreds of MCP servers. Now, with leading companies like AWS, GitHub, and even Anthropic "rival" OpenAI officially adopting MCP, it is gaining traction on the business side as well.
MCP standardizes how data and tools are integrated with AI agents, which is proving incredibly valuable for building AI applications faster. This explains why MCP is quickly becoming the standard for communicating context in agent-based AI systems.
For AI models to deliver reliable value in production environments like coding assistants, manufacturing controls, or financial reporting, they require appropriate context. Effective AI systems balance the model's capabilities with access to relevant, accurate information (whether that's proprietary data from various enterprise systems or the latest insights from web searches), as well as agentic tools that can further process data and automate enterprise workflows.
Previously, this was done in an ad-hoc, non-standardized way. Now MCP provides a consistent, structured format for interacting with large language models (LLMs) and other AI models, making it much easier to build customized AI applications. It's similar to how REST APIs once standardized how web services communicate, enabling seamless integration and interoperability across different systems and platforms.
MCP defines clear patterns for providing context to models, managing tool use, and handling responses, enabling developers to build more maintainable AI applications faster without reinventing implementation patterns for each new use case.
MCP uses a simple client-server model. AI applications like Cursor, Claude, or a Haystack Agent act as clients that connect to MCP servers, each of which provides access to a specific tool or data source through a standardized interface.
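Desktop clients such as Claude Desktop, for example, register MCP servers through a short JSON configuration. The snippet below uses the official filesystem reference server from the MCP project; the directory path is illustrative:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

Once the entry is in place, the client launches the server process itself and every tool the server exposes becomes available to the model.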
When the AI application needs information or wants to perform an action, it sends a request to the appropriate MCP server, which handles the interaction with the underlying data source or tool and returns the results. This standardization means that any MCP-compatible client can work with any MCP-compatible server without any custom integration work.
While the actual documentation distinguishes between hosts (the AI application) and clients (protocol adapters on the host side that connect 1-to-1 to servers), in most practical discussions of MCP, the AI application itself is simply referred to as the "MCP Client" that can connect to multiple servers.
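Under the hood, client and server exchange JSON-RPC 2.0 messages. The sketch below shows the request envelope a client sends to invoke a tool via the protocol's `tools/call` method, together with a minimal, hypothetical server-side dispatcher (the `add` tool is invented for illustration; real servers also implement methods such as `initialize` and `tools/list`):

```python
import json

# Hypothetical tool registry on the server side.
TOOLS = {"add": lambda args: args["a"] + args["b"]}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC 2.0 'tools/call' request to a registered tool."""
    req = json.loads(raw)
    name = req["params"]["name"]
    result = TOOLS[name](req["params"]["arguments"])
    # MCP tool results are returned as a list of content blocks.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": str(result)}]},
    })

# The client side: a standard JSON-RPC 2.0 request envelope.
request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "add", "arguments": {"a": 2, "b": 3}},
})

response = json.loads(handle_request(request))
print(response["result"]["content"][0]["text"])  # prints "5"
```

Because every server speaks this same envelope, the client-side code never changes when a new server is plugged in — only the configuration does.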
The true power of MCP becomes clear in real-world applications. With faster and broader ways to prototype and iterate, the deepset AI Platform helps teams keep track of their various projects and releases. It provides built-in best practices for AI product development and includes many ready-made, yet easily customizable templates to jumpstart any project. Product teams can validate multiple use cases before committing resources to full development, reducing time-to-market and development costs.
Compound AI consists of multiple, self-contained components that can include AI models, non-AI business logic, and additional data sources in a cohesive system. Components can be swapped out and updated, providing flexibility and modularity. This modular approach has become the standard for sophisticated AI applications, allowing for components to be evaluated and replaced independently.
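In code, the Compound AI idea amounts to programming against interfaces rather than concrete implementations, so any component can be swapped without touching the rest of the system. A minimal sketch (all names here are hypothetical, for illustration only):

```python
from typing import Protocol

class Retriever(Protocol):
    """Any component that can fetch context for a query."""
    def retrieve(self, query: str) -> list[str]: ...

class KeywordRetriever:
    """Naive keyword match; stands in for BM25, a vector store, or an MCP server."""
    def __init__(self, docs: list[str]):
        self.docs = docs

    def retrieve(self, query: str) -> list[str]:
        return [d for d in self.docs if query.lower() in d.lower()]

def answer(query: str, retriever: Retriever) -> str:
    """The pipeline depends only on the Retriever interface, not a concrete class."""
    context = retriever.retrieve(query)
    return f"{len(context)} relevant document(s) found for {query!r}"

docs = ["MCP standardizes tool access", "Compound AI favors modularity"]
print(answer("MCP", KeywordRetriever(docs)))  # prints "1 relevant document(s) found for 'MCP'"
```

Swapping `KeywordRetriever` for a different implementation requires no change to `answer` — the same decoupling MCP provides between AI applications and their tools.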
MCP fits perfectly into the modularity concept that is central to the Compound AI approach. The standardization it offers is particularly valuable for AI agent-based systems, which rely on accessing and orchestrating tools based on context and goals. MCP gives these agents a single, consistent interface for discovering and invoking the tools and data sources they need.
As this new technology gains adoption, it promises to streamline the design of modular AI systems that can flexibly incorporate the tools and data they need to operate effectively.
deepset solutions are built using the Haystack open source framework for custom, production-grade AI. Haystack provides limitless flexibility to build with the best components, allowing users to choose from a large library of integrations (e.g., vector databases, LLMs, embedding, retrieval, and ranking models) from across the industry for their use cases. In addition to the pre-built components, users can build their own custom components to incorporate business logic or niche tools and data sources. Thanks to Haystack's open source nature, users of all deepset products retain full ownership of their solutions, as they are never locked into a proprietary "black box" format. Now, with MCP, adding a custom data source or tool integration is even easier and faster.
Because connecting to an existing MCP server is much faster than writing a new integration from scratch, MCP opens up new horizons for building custom AI applications with Haystack by deepset.
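As a sketch of what this looks like in practice, the Haystack MCP integration exposes an MCP server's tools as regular Haystack tools. The example below assumes the integration package is installed and that `uvx` can launch the MCP reference time server locally; class, tool, and parameter names follow the integration's published examples and may have evolved, so consult the current documentation:

```python
from haystack_integrations.tools.mcp import MCPTool, StdioServerInfo

# Launch the reference time server as a subprocess, speaking MCP over stdio.
server_info = StdioServerInfo(command="uvx", args=["mcp-server-time"])

# Expose one of the server's tools as a regular Haystack tool.
time_tool = MCPTool(name="get_current_time", server_info=server_info)

# The tool can now be passed to a Haystack Agent, or invoked directly:
result = time_tool.invoke(timezone="Europe/Berlin")
```

The same `MCPTool` can then be dropped into a pipeline alongside any other Haystack component.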
The ease and flexibility with which MCP enables ad hoc integration of data sources, along with faster prototyping and iteration cycles across a variety of tools, make it likely to become an integral part of enterprise AI systems in the coming years. However, there are still areas where the MCP ecosystem needs to mature.
In our upcoming post, we’ll take a closer look at MCP’s role in the enterprise and how organizations can prepare their infrastructure for enterprise-grade MCP implementations. Stay tuned!