Anthropic's MCP Launch: Five Takeaways from a Skeptic
I doubted MCP at first. A year in, I was wrong.
When Anthropic announced the Model Context Protocol (MCP) in late 2024, I read the blog post and thought "neat, but proprietary": a Claude-specific play. A year of using it daily for client work changed my mind.
What I got wrong
I thought MCP would stay Anthropic-only. It hasn't. Major IDE-class tools have adopted it, dozens of community servers exist, and the protocol itself is open.
I thought "tools as APIs" was already a solved problem. It wasn't. Every AI client had a different way of integrating tools, and integrations didn't compose. MCP fixed that.
I thought the abstraction overhead would matter. It doesn't. The protocol is thin enough that it doesn't get in the way.
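Part of why the overhead stays low is that the protocol is just JSON-RPC 2.0 underneath. A `tools/call` request, for example, is a few lines of JSON on the wire; the tool name and arguments below are made up for illustration, not from any real server:

```python
import json

# An MCP tools/call request as it appears on the wire (JSON-RPC 2.0).
# "get_issue" and its arguments are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_issue",
        "arguments": {"issue_id": "LIN-42"},
    },
}

# Serialize and parse it back, as a client and server would.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])   # tools/call
```

That's the entire framing: a method name, an id, and a params object. There isn't much abstraction left to get in the way.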
Five takeaways
- MCP is the USB-C moment for AI tooling. Boring, standard, plug-anywhere.
- Build MCP servers, not bespoke integrations. Future-proofs your tooling investment.
- The ecosystem will fragment around quality, not protocol. The good MCP servers (well-tested, well-documented, well-scoped) will pull ahead.
- Auth is still rough. Most production MCP servers I've shipped need bespoke auth wrappers. The spec is improving here, but not done.
- It changes how I architect agents. Instead of one big agent with bespoke tools, I now build small agents that compose MCP-provided capabilities.
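That last shift is easier to see in code. Here's a minimal, hypothetical sketch of the pattern: MCP-provided capabilities modeled as a flat name-to-callable registry, with each "small agent" just a view over a subset of it. Real MCP tools are discovered over the protocol at runtime; the stand-in registry and tool names below are invented for illustration.

```python
from typing import Callable, Dict

# Stand-ins for capabilities an MCP client would discover from servers.
# The names and return values are illustrative, not a real server's API.
capabilities: Dict[str, Callable[..., str]] = {
    "linear.get_issue": lambda issue_id: f"issue {issue_id}",
    "github.open_pr": lambda title: f"PR: {title}",
}

def make_agent(tool_names: list[str]):
    """A 'small agent' is just a scoped view over shared capabilities."""
    tools = {name: capabilities[name] for name in tool_names}
    def call(name: str, **kwargs) -> str:
        return tools[name](**kwargs)
    return call

# Two narrow agents composed from the same capability pool.
triage_agent = make_agent(["linear.get_issue"])
release_agent = make_agent(["github.open_pr"])

print(triage_agent("linear.get_issue", issue_id="LIN-42"))
```

The point of the sketch: because every capability speaks the same protocol, scoping an agent down is a filtering decision, not an integration project.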
What I run
- A personal MCP server wrapping my Linear, Notion, and Cal.com accounts
- Filesystem MCP for code review
- A Postgres read-only MCP for client work where I need to ask data questions naturally
- The official GitHub MCP for repo operations
That's my agent toolkit. Composable. Portable across clients.
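Wiring a toolkit like this up is a single config file in most MCP clients. The shape below follows the common `mcpServers` convention (used by Claude Desktop, among others); the package names, path, and token placeholder are illustrative, so check your client's docs for the exact entries:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<token>" }
    }
  }
}
```

Swap clients and the same block comes with you, which is what "portable" means in practice.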
Where it goes
MCP as a shared protocol gives us a chance to build real interoperability between AI assistants. We've never had that before. It's worth investing the time to understand.