Model Context Protocol
By Satish Gupta • 12/21/2025
Introduction
Model Context Protocol (MCP) is generating a lot of buzz in generative AI, and it points to where enterprise AI solutions are heading.
Model Context Protocol is the next step in the evolution of integration. Earlier, legacy systems such as SOAP services, mainframes, and SAP were typically integrated through an Enterprise Service Bus (ESB), which involved very complex message transformations. Many banks still run ESBs in production today. Over time, many ESB-based integrations were replaced with Kafka- or MQ-based pub/sub integrations, which are more scalable and fault-tolerant.
Legacy to Modern way of Integration
Five to seven years ago, microservices came into the limelight, and APIs became the backbone of that architecture. An API acts as an abstraction: the consuming system only needs to know the schema and structure, while the API owner manages the full lifecycle. This became the foundation of microservice-based architecture, and anyone involved in solution design knows how important APIs and integrations are. This is where most organizations stand today, so as LLMs and generative AI solutions become mainstream, there will be frequent requirements to integrate with APIs or with messaging systems like MQ and Kafka.
Problem With Hard Coded LLM Integration
One way to implement this is to hard-code the orchestration logic around the LLM: call an API, take the output, and pass it to another LLM call. However, this is not the right approach. Model Context Protocol sits on top of APIs, Kafka, and MQ, and defines a contract so that the LLM can discover all the available tools and call them whenever needed. Combined with the model's own reasoning, this enables an agentic loop in which the LLM can keep trying until it gets the correct result. This is a game changer in generative AI solutions.
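To make the contract idea concrete, here is a minimal sketch of tool discovery in plain Python. Every name in it (the registry, the decorator, the tool) is illustrative and not part of the real MCP specification; the point is that each tool publishes a name, a description, and an input schema, and the model is shown only that metadata rather than hard-coded call sites.

```python
# Hedged sketch: a tool registry that exposes metadata for discovery.
# TOOL_REGISTRY, tool(), and find_customer are made-up names for illustration.

TOOL_REGISTRY = {}

def tool(name, description, input_schema):
    """Register a callable along with the metadata an LLM would discover."""
    def decorator(fn):
        TOOL_REGISTRY[name] = {
            "description": description,
            "input_schema": input_schema,
            "handler": fn,
        }
        return fn
    return decorator

@tool(
    name="find_customer",
    description="Find a customer ID by customer name.",
    input_schema={"type": "object", "properties": {"name": {"type": "string"}}},
)
def find_customer(name):
    # Stand-in for a real API call.
    return {"customer_id": "C-1001"} if name == "Alice" else {"customer_id": None}

def list_tools():
    """What a client would show the model: metadata only, no code."""
    return {
        name: {"description": t["description"], "input_schema": t["input_schema"]}
        for name, t in TOOL_REGISTRY.items()
    }
```

The key design point is the separation: the handler stays on the server side, while the model only ever sees the output of `list_tools()` and asks for a tool to be invoked by name.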
A Simple Example to Understand MCP
For example, assume there are two APIs:
One API finds a customer based on the customer name.
Another API fetches detailed customer information using the customer ID.
With MCP in place, the LLM can discover these two APIs on the fly. It can first call the API to find the customer using the name, extract the customer ID from the response, and then automatically call the second API with that customer ID to fetch the remaining details, all without writing a single line of orchestration code.
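The chaining described above can be sketched in a few lines. The two functions stand in for the two APIs, and the data is made up; in a real setup the LLM would discover both tools and decide the call order itself, rather than following explicit code like `answer_question` below.

```python
# Hedged sketch of the two-API example: tool names and data are illustrative.

CUSTOMERS = {"Alice": "C-1001"}
DETAILS = {"C-1001": {"email": "alice@example.com", "tier": "gold"}}

def find_customer_by_name(name):
    """First tool: resolve a customer name to a customer ID."""
    return {"customer_id": CUSTOMERS.get(name)}

def get_customer_details(customer_id):
    """Second tool: fetch full details for a customer ID."""
    return DETAILS.get(customer_id, {})

def answer_question(name):
    """What the LLM does implicitly: feed tool 1's output into tool 2."""
    customer_id = find_customer_by_name(name)["customer_id"]
    if customer_id is None:
        return None
    return get_customer_details(customer_id)
```

The value of MCP is that this chaining logic never has to be written down: the model reads each tool's schema, notices that the second tool needs the `customer_id` the first tool returned, and wires them together at run time.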
Isn’t this fascinating?
MCP Way of Integration
A typical API transaction is: call → response → done
Whereas with MCP, the flow becomes: think → check available tools → observe → think again → call another tool
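The think → check tools → observe → think again flow can be sketched as a simple loop. The "model" here is a hard-coded stub (`demo_decide`), and all names are illustrative; a real agent would replace the stub with an actual LLM call that reads the tool metadata and the observations so far.

```python
# Hedged sketch of the agent loop: think, pick a tool, observe, repeat.
# run_agent_loop, demo_decide, and DEMO_TOOLS are made-up illustrative names.

def run_agent_loop(question, tools, decide, max_steps=5):
    """Generic think/act/observe loop; `decide` plays the role of the LLM."""
    observations = []
    for _ in range(max_steps):
        action = decide(question, observations)   # "think"
        if action["type"] == "final":
            return action["answer"]
        handler = tools[action["tool"]]           # "check available tools"
        result = handler(**action["args"])        # call the chosen tool
        observations.append(result)               # "observe", then think again
    return None  # give up after max_steps rather than loop forever

def demo_decide(question, observations):
    """Stub LLM: request the customer lookup first, then finish."""
    if not observations:
        return {"type": "tool", "tool": "find_customer", "args": {"name": question}}
    return {"type": "final", "answer": observations[-1]}

DEMO_TOOLS = {"find_customer": lambda name: {"customer_id": "C-1001"}}
```

Note the `max_steps` guard: because the model is free to keep retrying, a production loop needs a budget so a confused model cannot call tools indefinitely.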
This decouples the integration logic from the LLM and lets the LLM dynamically decide which tool to call.
Isn’t this a game changer? Comment if you agree.