MCP Fundamentals
Difficulty Level: Difficult
Reading Time: 15 minutes
Author: Rob Vettor
Last updated on: January 17, 2026

MCP has mixed value for this use case.

Where MCP makes sense:
- When you want multiple agents/clients to share the same tool definitions (sketched below)
- When you're integrating with MCP-native hosts (Claude Desktop, GitHub Copilot, etc.)
- When you want hot-swappable tools without redeploying your agents
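To make the "shared tool definitions" point concrete, here is a minimal sketch of an MCP server exposing one tool. It assumes the prerelease ModelContextProtocol C# SDK and its attribute-based registration, so the exact API surface may differ by version; SearchTools and SearchIndex are illustrative names, not part of any existing codebase.

```csharp
// Minimal sketch, assuming the prerelease ModelContextProtocol NuGet package
// plus Microsoft.Extensions.Hosting. Names below are illustrative.
using System.ComponentModel;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Server;

var builder = Host.CreateApplicationBuilder(args);

// One server process owns the tool definition; any MCP-aware host
// (Claude Desktop, GitHub Copilot, a custom agent) can discover and call it.
builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithToolsFromAssembly();

await builder.Build().RunAsync();

[McpServerToolType]
public static class SearchTools
{
    [McpServerTool, Description("Searches the document index for a query.")]
    public static string SearchIndex(string query)
    {
        // Placeholder: a real implementation would call Azure AI Search here.
        return $"Results for '{query}'";
    }
}
```

The appeal is that the tool definition lives in one place and can be swapped or updated without touching any of the agents that consume it.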
Where it adds overhead for your use case:
- Your pipeline is batch/background processing, not interactive
- You control both the agents and the tools, so there's no need for a protocol abstraction
- Direct SDK calls (Azure.Storage.Blobs, Azure.Search.Documents) are simpler, faster, and better documented (see the sketch after this list)
- MCP adds a network hop and serialization overhead per call
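For contrast, here is a minimal sketch of the direct-SDK path for the ingest pipeline using Azure.Storage.Blobs and Azure.Search.Documents. The container name, index name, field names, and configuration keys are assumptions for illustration, not a prescribed setup.

```csharp
// Minimal sketch of the direct-SDK ingest path. Container name, index name,
// field names, and configuration keys are illustrative assumptions.
using System;
using System.Text;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;
using Azure.Storage.Blobs;

var blobContainer = new BlobContainerClient(
    Environment.GetEnvironmentVariable("STORAGE_CONNECTION"), "docs");

var searchClient = new SearchClient(
    new Uri("https://<your-search-service>.search.windows.net"),
    "docs-index",
    new AzureKeyCredential(Environment.GetEnvironmentVariable("SEARCH_KEY")!));

// One hop to Storage, one hop to Search; no protocol layer in between.
await foreach (var blobItem in blobContainer.GetBlobsAsync())
{
    var content = await blobContainer
        .GetBlobClient(blobItem.Name)
        .DownloadContentAsync();

    // Azure AI Search document keys must be URL-safe, so derive one from the blob name.
    var key = Convert.ToBase64String(Encoding.UTF8.GetBytes(blobItem.Name))
        .Replace('+', '-').Replace('/', '_').TrimEnd('=');

    var document = new SearchDocument
    {
        ["id"] = key,
        ["name"] = blobItem.Name,
        ["content"] = content.Value.Content.ToString()
    };

    // Single-document uploads keep the sketch simple; a real pipeline would batch.
    await searchClient.UploadDocumentsAsync(new[] { document });
}
```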
My take: For the ingest pipeline, skip MCP and use SDKs directly. You're building infrastructure, not an interactive agent. Keep it simple. For a future query/chat interface where users ask questions against your indexed content, MCP could be useful, especially if you want Claude or other MCP clients to search your index directly.
Practical middle ground: Build your tools as clean, reusable functions. If you later want MCP, wrapping those functions as MCP tools is straightforward. Don't design around MCP now unless you have a concrete consumer that needs it.
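As a sketch of that middle ground, here is what an MCP-agnostic, reusable tool function could look like. IndexQueryTools, the "content" field, and the constructor wiring are hypothetical; the point is that the logic has no MCP dependency.

```csharp
// Hypothetical, MCP-agnostic tool class: plain C#, plain dependencies.
// The ingest pipeline, a future chat interface, or an MCP server wrapper
// can all call the same method; no protocol abstraction is baked in.
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

public sealed class IndexQueryTools
{
    private readonly SearchClient _searchClient;

    public IndexQueryTools(SearchClient searchClient) => _searchClient = searchClient;

    // Assumes the index exposes a string "content" field.
    public async Task<IReadOnlyList<string>> SearchAsync(string query, int top = 5)
    {
        var response = await _searchClient.SearchAsync<SearchDocument>(
            query, new SearchOptions { Size = top });

        var results = new List<string>();
        await foreach (var result in response.Value.GetResultsAsync())
        {
            results.Add(result.Document["content"]?.ToString() ?? string.Empty);
        }
        return results;
    }
}
```

If an MCP consumer shows up later, exposing SearchAsync through whichever MCP server SDK you adopt is a thin wrapper; the search logic itself doesn't move.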
Rob: Good argument. One question: even with a direct call to a tool that encapsulates Azure Storage in the same process, is there still not serialization involved?

Response: You're right; I was sloppy with that argument. Compare the two paths:

Direct SDK call: your code → SDK → HTTP request (serialized JSON/XML) → Azure Storage → HTTP response → SDK deserializes → your code

MCP call: your code → MCP client → serialize to the MCP protocol → MCP server → SDK → HTTP request (serialized) → Azure Storage → HTTP response → SDK deserializes → MCP server → serialize the MCP response → MCP client → your code

The direct path crosses one serialization boundary (to Azure); the MCP path crosses three (to the MCP server, to Azure, and back through MCP). Serialization happens either way to reach Azure, so the overhead MCP adds is the extra hop (your process → the MCP server process) and the extra layer, not serialization itself. If the MCP server runs in-process (same runtime), that hop is negligible. If it's remote (a separate container or service), you're adding latency and a failure point. So my real objections are:
- An extra process/network boundary (if the MCP server is remote)
- An extra abstraction layer to debug when things break
- More moving parts for a batch pipeline that doesn't need the flexibility MCP provides
But you're correct: the serialization argument was weak. Good catch.

Rob: All good. Nobody knows everything. Great insights.

Response: Thanks, Robvet. Good discussion. When you're ready to dig into implementation (schema design, agent prompts, Azure setup), just holler.

Rob: And, astute point about adding more network hops (network calls) when not needed. Like you say, network calls increase latency and add risk (transient faults, busy servers, etc.).

Response: Exactly. And in a pipeline processing potentially hundreds or thousands of documents, those latencies compound. Every unnecessary hop is another thing to monitor, retry, and explain when someone asks why ingest is slow. Keep the network boundaries intentional: cross them when you need to (reaching Azure services), not because an abstraction pattern says to.
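To put rough, hypothetical numbers on that compounding: if a remote MCP hop added 25 ms per call and the pipeline touched 5,000 documents serially, that is 5,000 × 25 ms = 125 s, roughly two minutes of extra wall-clock time per run before any retries or transient faults enter the picture.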