MCPs – the context whisperers for GenAI

GenAI Landscape


Generative AI has become one of the most debated technologies of recent years, not because it is the sci-fi dream of ‘true AI,’ but because of what it can already do today. Socio-economic and moral implications aside, most people still have only a surface-level understanding of how GenAI actually works.


Understanding Machine Learning

To establish the basis for what we’ll discuss, let’s first break down machine learning and review where LLMs and other AI models fit into the AI pipeline.

An LLM, or large language model, is a key piece of an AI pipeline that is designed to understand and generate human language by analyzing vast amounts of text data, often serving as the front end or intake system for other AI models.

Other models learn patterns from corpora of datasets that may include images, text, code, and other information; the model parses this data, is trained to distinguish patterns within it, and memorizes and adapts those patterns to generate content and answers. In the case of something like image generation, an LLM such as ChatGPT will interpret your instructions, clean them up, and translate them into a form an image model (like Stable Diffusion) can parse and understand in order to generate images from plain-language instructions.

These systems are human-designed constructs built on contemporary learning algorithms that form predictions from patterns parsed out of their training datasets.

However, LLMs and other AI models do not intrinsically operate like traditional databases with direct query retrieval. Instead, they store and access information implicitly through learned patterns. The model “learns” to associate word and phrase patterns with the context they appear in, which lets it surface the relevant information when generating text and content.

While machine learning and LLMs can generate remarkably human-like outputs, they struggle without proper contextual framing. Raw models tend to overgeneralize, hallucinate, or miss subtle intent because they lack structured context.

These limitations eventually lead to the real challenge: how do we build and guide GenAI systems toward finely tuned accuracy and focus?

Challenges and Applications

LLM-based chatbots can be paired with other AI models and used for writing code, more comprehensive internet-style search, meal planning, building presentations, writing articles, creating assets, and even building complex applications.

Beyond accuracy, the core challenge with generative AI is context management: connecting LLMs to tools, structured data, and user workflows can be difficult, which is where MCPs come in.



What Are MCPs?

So how do MCPs fit into this equation, and what exactly is their purpose in a GenAI system? At a basic level, an MCP can be thought of like an Agile user story: it defines the context for a specific, purpose-driven task.

At a technical level, MCP, or “Model Context Protocol,” is a standard typically implemented as a server that acts as an interpretive context bridge: a set of keys that ties human intent and technical context together into something an LLM can parse, helping to ground and structure the model’s context.

Think of an MCP as an operator mode for an LLM: it enables the model to generate better-defined results, with fewer hallucinations, that are more relevant to the intent of both the system and the person using it.
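To make that concrete, here is a minimal, hedged sketch of what an MCP server can look like, using the FastMCP helper from the official Python SDK (the server name, tool, and canned summary are illustrative placeholders, not a real integration):

```python
# A minimal MCP server sketch using the FastMCP helper from the official
# Python SDK. The server name, tool, and return text are illustrative
# placeholders, not a real integration.
from mcp.server.fastmcp import FastMCP

# The name is what an MCP-aware client (e.g. Claude Desktop) sees when it
# discovers this context bridge.
mcp = FastMCP("release-notes-helper")

@mcp.tool()
def summarize_release(version: str) -> str:
    """Return a short, structured summary for a given release version."""
    # A real implementation would pull curated release data from your own systems.
    return f"Release {version}: 3 features shipped, 2 bugs fixed, no known regressions."

if __name__ == "__main__":
    # Runs over stdio so a connected client can discover and call the tool above.
    mcp.run()
```

Once a client is pointed at this server, the model can discover the summarize_release tool and call it when a prompt warrants it, rather than guessing at release details from its training data.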

What Makes a Good MCP?

MCPs themselves can be built with generative AI. I have created several myself, and I have even created programs that build templates for making programs and MCPs with generative AI tools such as Claude or Roo Code.

While useful, MCPs carry a caution: they’re quick to build, especially when generated with LLMs, which also makes them easy to over-engineer. Overloading them with raw context can dilute their purpose. The real challenge is designing each MCP with clear, focused intent.

Let’s first talk about the qualities inherent to a well-constructed MCP before turning to best practices for designing them.

  • Universal Connector – Instead of a single platform integration, MCPs provide a shared standard that can connect multiple APIs and data to a context or purpose
  • Guardian of Trust and Safety – Access control, audit trails, and permissions can be baked into the protocol
  • Context Provider – Real-time access to relevant data can make models more accurate, and less prone to hallucination
  • Action Enabler – Beyond retrieval, MCP empowers models to execute and automate tasks: sending emails, updating records, or orchestrating workflows (a gated, audited example follows this list)
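As a rough illustration of the “Guardian of Trust and Safety” and “Action Enabler” qualities above, the sketch below gates a record-update action behind a permission check and writes an audit entry for every call; the caller names, audit file, and update logic are all hypothetical, but the same pattern can sit inside any MCP tool:

```python
# A rough sketch of the "Guardian of Trust and Safety" and "Action Enabler"
# qualities: the action is gated by a permission check and every call is
# written to an audit log. The caller names, log file, and update logic are
# all hypothetical placeholders.
import json
import time

ALLOWED_CALLERS = {"release-manager", "ops-lead"}  # hypothetical access-control list
AUDIT_LOG = "audit.jsonl"

def audit(caller: str, action: str, detail: str) -> None:
    """Append a structured audit entry so every tool call is traceable."""
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"ts": time.time(), "caller": caller,
                            "action": action, "detail": detail}) + "\n")

def update_record(caller: str, record_id: str, status: str) -> str:
    """An action-enabling tool body: permission-gated and always audited."""
    if caller not in ALLOWED_CALLERS:
        audit(caller, "update_record", "denied")
        return "Permission denied."
    audit(caller, "update_record", f"{record_id} -> {status}")
    # ...call the real system of record here...
    return f"Record {record_id} updated to '{status}'."
```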

MCP Overview: GenerateBetter.ai – Understanding MCPs

Well-designed MCPs will also help take some of the guesswork out at both the technical and context-driven level, as well as allow an LLM to interact with external tools via REST APIs.

For example, a well-designed MCP tailored for a Release Manager or engineer could embed the relevant tools, processes, and intended purpose for that role, while also exposing REST API links so the LLM knows when and how to interact with external systems.
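Here is a hedged sketch of what such a role-scoped server might look like, again using the FastMCP helper; the release API URL, its response fields, and the checklist resource are placeholders standing in for whatever systems a Release Manager actually relies on:

```python
# Sketch of a role-scoped MCP server for a Release Manager, built with the
# FastMCP helper from the official Python SDK plus httpx for REST calls.
# The release API URL, its response fields, and the checklist text are
# hypothetical placeholders for whatever systems your team actually uses.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("release-manager-assistant")

# Hypothetical internal endpoint; set RELEASE_API in the environment to override.
RELEASE_API = os.environ.get("RELEASE_API", "https://example.internal/api")

@mcp.tool()
def get_release_status(release_id: str) -> str:
    """Fetch one release's status so the model can answer 'is 24.3 ready to ship?'."""
    resp = httpx.get(f"{RELEASE_API}/releases/{release_id}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return f"Release {release_id}: {data.get('status', 'unknown')} ({data.get('blockers', 0)} open blockers)"

@mcp.resource("release://checklist")
def release_checklist() -> str:
    """Expose the team's release checklist as read-only context for the model."""
    return "1. All blockers closed\n2. Release notes drafted\n3. Rollback plan reviewed"

if __name__ == "__main__":
    mcp.run()
```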



MCP Driven Results in GenAI

These are some practical applications where well-designed model context protocols are already being used to build better GenAI agents and tools:

  • IoT & Multimodal Agents: Real-time context from sensors, images, and speech layered into workflows
  • Software Development: AI code assistants that query repos, read documentation, and suggest fixes or optimizations to code
  • Enterprise Automation: Connecting CRMs, ERPs, and communication tools through one standardized layer
  • Customer Support: Bots with real-time access to order histories and ticketing systems
  • Healthcare & Engineering: Agents working with domain-specific tools like CAD software or medical records

MCP – Intentional Design and The Future

In terms of best practices, an MCP should fit a specific purpose and not be overloaded with information and context that casts too wide a net.

MCP Focus and Best Practices

A strong theme across the community is what not to do:

  • Don’t just wrap REST APIs: REST is resource-centric, while MCP is action-centric. Dumping endpoints like get_user or list_events overwhelms agents, inflates token costs, and causes tool confusion that leads to worse results
  • Design MCP servers with intent: API-to-MCP converters often produce dozens of near-duplicate tools (create_invoice, generate_invoice), confusing models and creating hallucination risks. If you’re using generative AI to help build MCP servers, do the work of creating specific context specs for the AI to parse, and read through the generated MCP output yourself before trusting it, especially in a work environment
  • Don’t feed it a firehose of raw data: Use curated context. For example, don’t hand an AI the entire Google Calendar API; instead, provide a single reschedule tool aligned with workflows and framed with context (see the sketch after this list)
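To show the difference, here is a sketch of the curated, action-centric approach: a single reschedule tool shaped around how people actually phrase the request, instead of a pile of raw calendar endpoints (the tool body is a stub; a real version would call your own calendar backend):

```python
# Contrast sketch: rather than exposing every raw calendar endpoint
# (list_events, get_event, patch_event, ...), expose one workflow-shaped tool.
# The tool body is a stub; a real version would call your own calendar backend,
# not the full Google Calendar API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-assistant")

@mcp.tool()
def reschedule_meeting(meeting_title: str, new_start_iso: str) -> str:
    """Move a meeting to a new start time in one step.

    One curated, action-centric tool that matches how people actually ask
    ("push the sprint review to Thursday at 2pm"), instead of a pile of
    resource-centric endpoints the model has to stitch together itself.
    """
    # ...look up the event by title and update it via your calendar backend...
    return f"'{meeting_title}' rescheduled to {new_start_iso}."
```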

As one Redditor put it: “APIs are raw materials, MCPs are the cooked dishes.”

Community Best Practices: Reddit r/mcp – Wrong Way to Build MCPs
Anti-Patterns: The Signal Path – Stop Generating MCP Servers from REST APIs

Current Ecosystem Momentum

  • MCPs have moved beyond the theoretical and are quickly becoming an industry standard for high-quality LLMs
  • Anthropic pioneered MCP, open-sourced its toolkits, and released reference servers for Google Drive, Slack, GitHub, and more
  • OpenAI integrates MCP into its Agents SDK, enabling models to dynamically discover tools
  • Microsoft added MCP into Copilot Studio with Azure governance support
  • Hugging Face and community developers are building adapters and LangChain integrations
  • Enterprises like MindsDB use MCP for federated data querying across multiple systems

Ecosystem Momentum: Ahmed Tokyo – MCP State & Future Impact

What’s in Store? – The Future of MCPs

Looking ahead, MCPs are poised to become a transformative, foundational layer and a context bridge for AI systems. Their success depends on intentional design: not a bloated info-and-API dump, but carefully scoped context bridges that align with human workflows.

Thoughtfully architected, data-curated MCPs have vast potential to terraform the machine learning landscape as trusted collaborators that understand, act, and orchestrate across digital environments.

If engineered without purpose, MCPs may conversely add noise, cost, and confusion to a system, with a cumulative negative impact on GenAI efficiency as a whole. The future of AI will be built on protocols like MCP. There is a balancing act in designing them not as bloated context wrappers, but as intentional, meaningful context bridges between human intelligence and the digital world.

