MCP Changed How I Think About AI Tools
Model Context Protocol makes AI tools talk to everything else. I didn't expect to care this much.
What MCP is
Model Context Protocol is a standard for connecting AI models to external tools and data sources. Anthropic released it, but it’s open. The idea: instead of every AI app building its own integration with GitHub, Slack, databases, etc., you write one MCP server and any AI client can use it.
That sounds like plumbing. It is plumbing. But it’s the kind of plumbing that changes what you can build.
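To make the plumbing concrete: under the hood, MCP messages are JSON-RPC 2.0. A client invoking a server's tool sends a `tools/call` request. Here's a minimal sketch of that shape; the tool name and arguments are made up for illustration, not from any real server.

```python
import json

# A client calling a tool on an MCP server sends a JSON-RPC request
# like this. "get_page" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_page",                  # a tool the server advertised via tools/list
        "arguments": {"page_id": "abc123"},  # checked against the tool's input schema
    },
}
print(json.dumps(request, indent=2))
```

Because it's just JSON-RPC over a transport (stdio or HTTP), any client that speaks the protocol can talk to any server, which is the whole point.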
Why I care
I was building a side project that needed to pull data from Notion, check a GitHub repo, and post to Slack. Without MCP, that’s three separate integrations I’d have to wire up manually, each with its own auth flow and API quirks.
With MCP, I pointed Claude Code at MCP servers for each service. It could read my Notion pages, check PR status, and draft Slack messages in one conversation. I didn’t write any integration code.
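"Pointing at a server" is just client configuration. Claude Desktop uses an `mcpServers` map in its config file, and Claude Code's `.mcp.json` follows the same general shape, assuming that format; the server names, npm packages, and token variables below are illustrative, so check the docs for the servers you actually use.

```json
{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["-y", "@notionhq/notion-mcp-server"],
      "env": { "NOTION_TOKEN": "..." }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "..." }
    }
  }
}
```

Each entry tells the client how to launch a server as a subprocess; the client then discovers the server's tools over the protocol, no integration code on your end.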
That’s the part that got me. Not writing integration code. For a PM who codes on the side, integration code is the worst part. It’s never interesting and it always takes longer than you think.
What’s rough
The ecosystem is young. Some MCP servers are solid; others break in weird ways. Error messages are often unhelpful, and documentation quality varies a lot from server to server.
Auth is still awkward. Every server handles it differently. Some want API keys in environment variables, some have OAuth flows, some just assume you’ve already authenticated somewhere else.
Where this goes
I think MCP, or something like it, becomes standard. The alternative is every AI tool reimplementing the same integrations, which is what's happening now, and it's wasteful. Whether MCP specifically wins or gets replaced by something better, the idea of a shared protocol for AI tool integrations is right.
I’m building my next project with MCP servers from the start. Writing a custom MCP server for my own API is on my list.
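For a sense of what that custom server involves: at its core it's a dispatcher that answers `tools/list` and `tools/call` requests. Real servers should use the official SDKs, which handle the initialization handshake, capabilities, and transports; this toy sketch only shows the shape, and the single `echo` tool is made up for illustration.

```python
import json

# A minimal, illustrative MCP-style tool registry: one fake "echo" tool
# described with JSON Schema, the way servers advertise tools.
TOOLS = [{
    "name": "echo",
    "description": "Echo back the input text.",
    "inputSchema": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}]

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request and return the response dict."""
    method = request.get("method")
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        args = request["params"]["arguments"]
        # Tool results carry a list of content blocks.
        result = {"content": [{"type": "text", "text": args["text"]}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": f"unknown method: {method}"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
               "params": {"name": "echo", "arguments": {"text": "hi"}}})
print(json.dumps(resp))
```

Wrap something like this around your own API's endpoints, speak it over stdio, and any MCP client can use your service.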