How to reliably connect LLMs to real-world data and systems
How to reliably connect large language models (LLMs) to real-world data and systems, and why approaches like Graph RAG can help. (Image credit: Getty Images)
The model context protocol (MCP), an open-source standard introduced by Anthropic in late 2024, standardizes how large language models (LLMs) connect to external tools, databases, and data sources.
It enables clients such as Claude or Cursor to interact with files, APIs, and databases without needing bespoke, hard-coded integrations.
This is why MCP is often compared to a “USB-C port for AI.”
Unlike USB-C, however, it carries risks, especially around whether the model truly understands what it’s querying and what the underlying data represents.
Without careful planning around this aspect of the protocol, AI projects can fall short of expectations. Let’s consider why.
MCP provides a far more effective way for LLMs to interact with external tools, APIs, and data sources. Instead of manually orchestrating multi-step pipelines—retrieving data, formatting it, injecting it into prompts, and parsing outputs—AI teams can expose capabilities directly to the model, allowing the LLM to decide what to use, when, and how to combine results.
That moves AI from static prompt-response loops toward more dynamic, agent-like behavior. Instead of hardcoding workflows, teams define tools and the model orchestrates them—querying databases, triggering workflows like sending emails or updating systems—choosing the right tools and sequencing them based on the task.
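The shift from hard-coded pipelines to model-driven orchestration can be illustrated with a minimal sketch. All names here (the tool registry, the example tools, the structured tool call) are hypothetical and stand in for what an MCP server and client would provide:

```python
# Minimal sketch of MCP-style tool exposure: tools are registered with
# descriptions, and the model (simulated here as a structured tool call)
# picks one by name instead of following a hard-coded pipeline.
from typing import Callable

TOOLS: dict[str, dict] = {}

def tool(description: str):
    """Register a function as a callable tool with a description the model sees."""
    def decorator(fn: Callable):
        TOOLS[fn.__name__] = {"fn": fn, "description": description}
        return fn
    return decorator

@tool("Look up a customer record by email address")
def get_customer(email: str) -> dict:
    # A real server would query a database here.
    return {"email": email, "status": "active"}

@tool("Send an email to a recipient")
def send_email(to: str, body: str) -> str:
    return f"sent to {to}"

def dispatch(tool_call: dict):
    """Execute a tool call chosen by the model: {'name': ..., 'args': {...}}."""
    entry = TOOLS[tool_call["name"]]
    return entry["fn"](**tool_call["args"])

# The model emits a structured call; the runtime executes it.
result = dispatch({"name": "get_customer", "args": {"email": "a@example.com"}})
```

The key design point is that the team defines *capabilities*, while sequencing decisions move to the model at runtime.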
Both in theory and in practice, this approach reduces engineering overhead and increases flexibility. Model providers, frameworks, and platforms are increasingly supporting MCP-style interactions, and many developer tools now assume tool-based orchestration.
However, MCP introduces new challenges, notably tool overload. The temptation is to give the model access to as many tools as possible, assuming more capability should mean better outcomes, but that’s not always the case.
Just as giving an LLM too many options can increase the risk of hallucination, the same dynamic appears in deployment: as the number of available tools grows, the model's ability to reliably select the correct one decreases, and it becomes more likely to pick the wrong tool or misuse one, producing unintended results.
In this setting, instead of hallucinating text, the model now hallucinates actions.
So, what’s the path forward? Our experience shows progress comes from a minimal, tightly scoped set of tools tailored to the specific task. Complex workflows should also be decomposed into smaller steps, each with a clearly defined set of capabilities.
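This scoping can be made concrete with a short sketch (all step and tool names are hypothetical): each workflow step exposes only the small toolset it needs, rather than the full registry.

```python
# Sketch: decompose a workflow into steps, each scoped to a minimal toolset.
# A model working on one step never sees tools belonging to other steps.
WORKFLOW_STEPS = {
    "triage":  ["classify_ticket"],             # decide what the request is
    "lookup":  ["get_customer", "get_orders"],  # read-only data access
    "resolve": ["issue_refund", "send_email"],  # actions with side effects
}

def tools_for_step(step: str) -> list[str]:
    """Return only the tools the current step may use."""
    return WORKFLOW_STEPS[step]

# The model at the "lookup" step cannot misuse "issue_refund",
# because that tool is simply not exposed to it.
assert "issue_refund" not in tools_for_step("lookup")
```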
Next is context. An LLM may know how to use a tool, but not what to do with it. For example, a model might generate syntactically correct queries, but without understanding the underlying schema or data relationships, those queries may end up meaningless or incomplete.
That’s the equivalent of handing someone access to a vast filing system without an index. Without the structured knowledge needed to interpret data correctly or to decide which tool is appropriate, even well-designed MCP systems can behave unpredictably.
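One practical mitigation, sketched below with a hypothetical schema and tool name, is to embed schema context directly in the tool description the model sees, so generated queries reflect what the columns actually mean:

```python
# Sketch: attach schema context to a query tool's description, so the model
# knows what the data represents -- not just that a query tool exists.
SCHEMA_CONTEXT = """
Table: orders
  id          INTEGER  -- order identifier
  customer_id INTEGER  -- foreign key -> customers.id
  total_cents INTEGER  -- order total in cents, not dollars
"""

def describe_query_tool() -> str:
    """Build the tool description the model sees, schema included."""
    return "Run a read-only SQL query against the orders database.\n" + SCHEMA_CONTEXT
```

Without the note that `total_cents` is in cents, a syntactically correct query could still report totals off by a factor of one hundred: correct syntax, meaningless answer.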
I am deliberately not discussing security issues, but they are far from trivial: think unauthorized data access, prompt injections that trigger unintended actions, and more. The bottom line is you need to know which tools your AI app used, why it chose them, and what actions were executed as a result.
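That audit requirement can be met with a thin wrapper around tool execution. The sketch below (hypothetical names throughout) records which tool ran, with what arguments, and what it returned:

```python
# Sketch: wrap tool execution so every call is recorded -- which tool,
# with what arguments, and what result -- for audit and debugging.
import time

AUDIT_LOG: list[dict] = []

def audited(fn):
    """Decorator that logs each tool invocation before returning its result."""
    def wrapper(**kwargs):
        result = fn(**kwargs)
        AUDIT_LOG.append({
            "tool": fn.__name__,
            "args": kwargs,
            "result": repr(result),
            "ts": time.time(),
        })
        return result
    return wrapper

@audited
def send_email(to: str, body: str) -> str:
    return f"sent to {to}"

send_email(to="a@example.com", body="hi")
```

In production this log would feed whatever observability stack the team already runs; the point is that no tool executes outside the wrapper.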
It’s clear that as helpful as it is, MCP is not a complete solution. It serves as an enabling layer, but does not structure knowledge, provide reliable context, or guide decision-making. Architectural choices are critical: improving the quality and structure of the context provided to the model makes a significant difference.
This is where approaches like retrieval-augmented generation (RAG) help, and increasingly where graph-based approaches are gaining attention, giving rise to Graph RAG, as first suggested by Microsoft.
Traditional RAG systems use vector search to retrieve relevant information. This approach helps reduce hallucinations but can struggle with complex relationships or implicit structures in the data. Graph RAG, however, extends this idea by introducing a knowledge graph layer, which encodes entities, relationships, and rules in a structured form.
This gives the model a clearer understanding of how data is connected and what it represents. In the context of MCP, this improves tool selection. When the LLM has access to structured knowledge, it can determine which tools are relevant to a given task, and it enables more controlled execution, guiding the model toward valid actions and away from risky or nonsensical ones.
For example, a graph can encode constraints such as permissions, dependencies, or business logic. This provides a form of guardrail that complements MCP’s action layer. The result is a more balanced system: MCP handles interaction and execution, RAG supplies relevant context, and the knowledge graph adds clarity, constraints, and reasoning support.
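As a minimal illustration of that guardrail idea (the triples and roles below are entirely hypothetical), permissions can be encoded as graph edges and consulted before any tool executes:

```python
# Sketch: a tiny knowledge graph encoding permissions as (subject,
# relation, object) triples, checked before the model may run a tool.
GRAPH = {
    ("support_agent", "may_call", "get_customer"),
    ("support_agent", "may_call", "send_email"),
    ("billing_agent", "may_call", "issue_refund"),
}

def allowed(role: str, tool: str) -> bool:
    """Consult the graph before letting the model execute a tool."""
    return (role, "may_call", tool) in GRAPH

assert allowed("support_agent", "send_email")
assert not allowed("support_agent", "issue_refund")  # blocked by the graph
```

A production knowledge graph would of course be far richer, encoding entity relationships and business rules as well as permissions, but the pattern is the same: structured knowledge constrains the action layer.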
This combination helps reduce hallucination and misuse, two of the biggest risks in MCP-driven systems. By integrating Graph RAG with MCP-based workflows, developers can create systems in which models are not only capable of acting but are also better informed about when and how to act—bringing us closer to practical AI that is both powerful and reliable.
This article was produced as part of Tech Radar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of Tech Radar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/pro/perspectives-how-to-submit