How to Build Your Own MCP Server

April 16, 2025

Written By Alice Moore

I don’t know about you, but I find myself switching AI models (surprise Gemini release anybody?) and clients (Cursor, Windsurf, Cursor again—no, wait!) pretty often.

What frustrates me more than anything is loss of context. I’m constantly explaining to the AI what it needs to know about my problem and trying to get it to act in “my style” of doing things.

But what if that context were portable? What if you could ask a question in Claude Desktop, get an answer, and then recall the conversation later in Cursor when coding?

In this article, we’ll do just that, building out the tooling together in just a few quick steps. Here’s how the final product will look:

Here’s the complete code for this example project so you can clone it. I recommend following along; the goal is that by the end of this tutorial, you’ll be able to create your own lil’ dream server.

Why bother with MCP?

What you’re seeing above is, as you may have guessed from the 48px title and borderline-absurd keyword optimization of this post, a Model Context Protocol (MCP) server.

If you already know all about MCP and want to get to building, feel free to skip this section and head on down to the “Quick Start.” Otherwise, set your stopwatch—here’s the 3-minute primer.

If you want autonomous AI agents, you’re gonna need tools that enable them to see and interact with the world around them. Unfortunately, connecting AI assistants directly to tools makes for fragile integrations; update the AI model or the API on either side of the tool, and you get broken code.

So, how can we build more robust, reusable AI capabilities?

One route is through Anthropic’s Model Context Protocol (MCP). It’s a standardized communication layer (based on JSON-RPC) that allows AI clients (like Cursor) to discover and use external capabilities provided by MCP servers.

These capabilities include accessing persistent data (Resources), performing various actions in the outside world (Tools), and receiving specific instructions on how to use those resources and tools (Prompts).

For a full exploration of MCP's goals, architecture, and potential, you can read my deep dive.

That’s great, but…

Clients like Cursor and Windsurf already have great AI agents without MCP. So, why do we need more tools?

Put simply, client developers can’t build everything. They don’t want to spend all their development hours tweaking web search for every new model, and they’re definitely not out here trying to roll their own Jira integration.

MCP lets service providers like GitHub and Notion maintain their own AI integrations, which means higher-quality interactions and less duplicated effort.

So, when you opt into using an MCP server, the main benefits you get are future-proofing and portability. You get an enormous ecosystem of plug-and-play tools that you can bring to any chat window that implements the standard.

Okay, but…

Even if you’re not a developer who needs to wire up your own service’s API to MCP, there are a lot of benefits to having the know-how.

For me, I’ve noticed that the more time I spend building servers, the less I feel like my entire job is just copy/pasting huge swaths of text between input boxes. I’m automating context, and it makes AI models more personally useful to me.

Plus, it feels like a way to put a stake in the ground with the ever-shifting landscape of AI. Tools I build today should keep working even as new models, clients, and services come around.

But enough waxing poetic. Time to roll up our sleeves.

I’m not gonna lie: If you want to just hand the AI agent the MCP docs and tell it what functionalities you want… well, it’s probably gonna work. This is the kind of code AI is especially good at—it’s boilerplatey.

Use the MCP Inspector as you go, and keep feeding errors back to the AI. And check out our best Cursor tips to get the most out of the AI agent.

Otherwise, here’s the breakdown for those who want to learn how the architecture works so they can build scalable AI tools.

A meme of Gru from "Despicable Me" where he is presenting a plan of "Vibe code with Cursor", "Don't read the code", and "Expose keys in prod". He is excited about the plan until the last step where he realizes his mistake.

Let's get the code base ready with these three steps. We won't worry about API keys or client setup yet.

  1. Clone the Repository: Get the example code onto your local machine.
  2. Install Dependencies: We need the MCP SDK and a few other libraries.
  3. Build the Code: Compile the TypeScript source code into runnable JavaScript.
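
If you’re following along locally, those three steps look something like this (the repository URL is a placeholder; use the example project linked above):

git clone https://github.com/<your-account>/css-tutor-mcp-server.git
cd css-tutor-mcp-server
npm install
npm run build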

You now have the compiled code in the build/ directory.

If you want to grab an OpenRouter API key and head on down to “Running the server with real clients,” you’re more than welcome to. The server will work as is.

Before we dive into the specific features of this CSS Tutor example, let's nail down the fundamental structure of any MCP server built with the TypeScript SDK and get a minimal version running.

Open the main server file: src/index.ts. You'll see these key parts:

  • Imports: The file brings in McpServer (the core server class) and StdioServerTransport (for communication) from the @modelcontextprotocol/sdk.
  • Registration imports: We import registerPrompts, registerResources, and registerTools from other files in the src/ directory. These functions (which we'll explore later) are responsible for telling the server about the specific capabilities we want to give it.
  • Server instantiation: We create the server instance, setting the server's name and version, and initializing empty placeholders for its capabilities.
  • Calling registrations: The imported register* functions are called; these calls populate the server instance with the actual tools, resources, and prompts defined elsewhere.
  • The main function: This async function sets up the communication transport and connects the server (sketched just after this list).
  • Execution: Finally, main() is called with basic error handling.
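
For orientation, that transport setup at the bottom of src/index.ts looks roughly like this (a sketch; the repo’s exact log wording may differ):

import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// ...server instantiation and the register* calls go here (shown later in this post)...

async function main() {
  // Stdio transport: the client launches this process and speaks JSON-RPC over stdin/stdout
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("CSS Tutor MCP server running on stdio"); // log to stderr so stdout stays protocol-only
}

main().catch((error) => {
  console.error("Fatal error in main():", error);
  process.exit(1);
});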

This structure is the heart of the server. It initializes, registers capabilities, and connects for communication.

To make sure this core loop works without needing any external APIs or complex logic yet, let's temporarily modify src/index.ts:

  1. Comment out the three capability registration calls.
  2. Add a simple "hello" tool right before the main function definition (both edits are sketched below).
  3. Re-build the code with npm run build.
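
Here’s roughly what those temporary edits to src/index.ts look like (the exact greeting text is yours to pick):

// Temporarily disable the real capabilities:
// registerPrompts();
// registerResources();
// registerTools();

// Temporary tool so the server exposes one basic capability
server.tool(
  "hello_world",
  "Replies with a greeting (and an Empire Strikes Back spoiler)",
  {}, // no input arguments
  async () => ({
    content: [{ type: "text", text: "Hello! Also: Vader is Luke's father." }]
  })
);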

With just these changes in src/index.ts, we now have a runnable MCP server that offers only one basic tool. It doesn't do much yet besides offer Empire Strikes Back spoilers, but it confirms the core structure and communication setup is working.

Now that we have a minimal, runnable server, how do we check if it's actually speaking MCP correctly? We use Anthropic’s MCP Inspector.

This command-line utility acts as a basic MCP client. It launches your server process, connects to it via standard I/O (just like Claude Desktop or Cursor would), and shows you the JSON-RPC messages being exchanged.

From your project's root directory, run:

npx @modelcontextprotocol/inspector node ./build/index.js
  • npx ...inspector: Downloads and runs the inspector package.
  • node: The command to execute your server.
  • ./build/index.js: The path to your compiled server entry point.

The inspector will start, connect to your server, and begin exchanging messages. If you go to the localhost URL it prints, you can interact with it:

  1. Connection: You'll see initialize messages confirming the connection.
  2. List tools: Use the inspector's interface to ask the server what tools it offers. You should see only our hello_world tool listed.
  3. List resources/prompts: If you try to go to the resources or prompts tabs, they should be unclickable, since we commented out their registrations.
  4. Call the tool: Use the inspector to call the hello_world tool. You should see the server respond with our custom message.

The MCP Inspector is your best friend during development. After each step where you add or modify a capability (tool, resource, or prompt), verify that the server registers it correctly and responds as expected. The Inspector lets you test server functionality without involving a full AI client.

Use the buddy system: anywhere you go, the MCP Inspector goes.

A first person view of the protagonist from the video game "Portal" carrying the Companion Cube.

(^ Live footage of you and the MCP Inspector.)

Now that we have the basic server running and know how to debug it with the Inspector, let's 1) grab some snacks, and 2) incrementally add the actual CSS Tutor features.

Feel free to tweak the capabilities as we go along—all coding skills are welcome!

First, let's activate and understand the tool that fetches external information.

In src/index.ts, remove the dummy hello_world tool definition we added earlier, and uncomment the line registerTools();. This line calls the function in src/tools/index.ts that registers all our tools.

export const server = new McpServer({
  name: "css-tutor",
  version: "0.0.1",
  capabilities: {
    prompts: {},
    resources: {},
    tools: {}
  }
});

// registerPrompts();
// registerResources();
registerTools();

// (dummy hello_world tool deleted)

async function main() { /* ...rest of the file unchanged */ }

Now, open src/tools/index.ts and find the registerGetLatestUpdatesTool function. This is where the get_latest_updates tool is defined and registered with our server.

Inside this file, you'll see a few key things happening:

  1. Configuration & safety check: It uses dotenv to load environment variables, specifically looking for OPENROUTER_API_KEY. If the key is missing, it logs a warning and skips registration, preventing the server from offering a tool that can't function.
  2. Tool registration: It uses server.tool() to register the get_latest_updates tool. This includes giving it a name, a description for the AI client, and defining its input schema (in this case, {} because it takes no arguments).
  3. Core logic (Handler): The core logic is in the asynchronous handler function passed to server.tool(). This handler is responsible for calling the OpenRouter API with your key, asking a model for recent CSS updates, and returning the response to the client as text content (sketched after this list).
  4. Activation: Finally, the main registerTools function (at the bottom of the file) ensures that registerGetLatestUpdatesTool() gets called when the server starts up.
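
Condensed, the registration looks roughly like this. It’s a sketch rather than the repo’s exact code: the model name, prompt wording, and error handling are assumptions.

import dotenv from "dotenv";
import { server } from "../index.js";

dotenv.config();
const apiKey = process.env.OPENROUTER_API_KEY;

function registerGetLatestUpdatesTool() {
  // Safety check: skip registration if the key is missing
  if (!apiKey) {
    console.error("OPENROUTER_API_KEY not set; skipping get_latest_updates");
    return;
  }

  server.tool(
    "get_latest_updates",
    "Fetches a short summary of recent CSS features and news",
    {}, // no input arguments
    async () => {
      // Ask a web-search-capable model via OpenRouter (model choice is an assumption)
      const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${apiKey}`,
          "Content-Type": "application/json"
        },
        body: JSON.stringify({
          model: "perplexity/sonar",
          messages: [{ role: "user", content: "Summarize notable CSS updates from the past few weeks." }]
        })
      });
      const data = await response.json();
      return { content: [{ type: "text", text: data.choices[0].message.content }] };
    }
  );
}

export function registerTools() {
  registerGetLatestUpdatesTool();
  // The memory tools from the next step get registered here too
}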

Compile the changes.

npm run build

To test this tool with the Inspector, the server process needs the API key. Prefix the inspector command:

# Example on Linux/macOS
OPENROUTER_API_KEY="sk-or-..." npx @modelcontextprotocol/inspector node ./build/index.js

(See the project's README.md for Windows examples).

Run the MCP Inspector. Use tools/list. You should now see get_latest_updates registered. Try calling the tool via the Inspector—it should return recent CSS news! (As long as you have ~$0.04 in credits from OpenRouter available.)

Architecture diagram of an AI system. An AI client interacts with an MCP server core, which registers Prompts, Resources, and Tools modules. Resources connect to Persistent Storage, and Tools connect to an OpenRouter API.

Now, let's activate the components that allow our server to remember information across interactions: the css_knowledge_memory resource and the tools to interact with it.

Back in our main file (src/index.ts) uncomment the line registerResources();.

Open up src/resources/index.ts and find the registerCssKnowledgeMemoryResource function.

  • Registration: It uses server.resource() to define the css_knowledge_memory resource. This gives it a name, a unique URI (memory://...), read/write permissions, and an asynchronous handler function.
  • Core logic (handler & helpers): The handler function is called when a client wants to read the resource's current state. It uses helper functions (readMemory, writeMemory also defined in this file) which handle the actual file system operations: reading, parsing, validating (with Zod), stringifying, and writing data to the data/memory.json file. This file acts as our persistent memory store.
  • Activation: The main registerResources function (at the bottom of the file) ensures that registerCssKnowledgeMemoryResource() gets called when the server starts.
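
Stripped down, the resource side looks something like the sketch below. The memory:// URI and the schema are stand-ins; the repo’s actual values may differ.

import fs from "node:fs/promises";
import path from "node:path";
import { z } from "zod";
import { server } from "../index.js";

// Persisted shape: a map of CSS concept -> whether the user knows it (a sketch)
const memorySchema = z.record(z.string(), z.boolean());
type Memory = z.infer<typeof memorySchema>;

const memoryPath = path.resolve("data/memory.json");

export async function readMemory(): Promise<Memory> {
  const raw = await fs.readFile(memoryPath, "utf-8");
  return memorySchema.parse(JSON.parse(raw)); // validate before handing it to the AI
}

export async function writeMemory(memory: Memory): Promise<void> {
  await fs.writeFile(memoryPath, JSON.stringify(memory, null, 2));
}

function registerCssKnowledgeMemoryResource() {
  server.resource(
    "css_knowledge_memory",
    "memory://css-knowledge", // stand-in URI
    async (uri) => ({
      contents: [
        { uri: uri.href, mimeType: "application/json", text: JSON.stringify(await readMemory()) }
      ]
    })
  );
}

export function registerResources() {
  registerCssKnowledgeMemoryResource();
}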

Next, head on over to src/tools/index.ts and look at the registerReadFromMemoryTool and registerWriteToMemoryTool functions. These provide the actions clients can take related to the memory resource.

  • Registration: Both tools are registered using server.tool(). read_from_memory has no specific input schema, while write_to_memory defines an input schema using Zod ({ concept: z.string(), known: z.boolean() }) to ensure clients send the correct data format for updates.
  • Core logic (handlers): The read_from_memory tool's handler simply calls the imported readMemory() helper from src/resources/index.ts and returns the current state. The write_to_memory tool's handler receives validated arguments ({ concept, known }), then uses both readMemory() and writeMemory() helpers to load the current state, update it based on the input, and save the modified state back to data/memory.json.
  • Activation: The main registerTools function ensures these tool registration functions are called.
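
And the matching tools, reusing those helpers (again a sketch, not the repo verbatim):

import { z } from "zod";
import { server } from "../index.js";
import { readMemory, writeMemory } from "../resources/index.js";

function registerReadFromMemoryTool() {
  server.tool(
    "read_from_memory",
    "Returns the user's current CSS knowledge state",
    {}, // no input arguments
    async () => ({
      content: [{ type: "text", text: JSON.stringify(await readMemory()) }]
    })
  );
}

function registerWriteToMemoryTool() {
  server.tool(
    "write_to_memory",
    "Records whether the user knows a given CSS concept",
    { concept: z.string(), known: z.boolean() }, // validated by Zod before the handler runs
    async ({ concept, known }) => {
      const memory = await readMemory();   // load current state
      memory[concept] = known;             // update it
      await writeMemory(memory);           // persist back to data/memory.json
      return { content: [{ type: "text", text: `Recorded: ${concept} = ${known}` }] };
    }
  );
}

Both get called from the same registerTools() function shown earlier.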

Compile the changes.

npm run build

Run the MCP Inspector.

  • In the Resources tab, you should now see css_knowledge_memory registered.
  • In the Tools tab, you should see get_latest_updates (from Step 1) plus the new read_from_memory and write_to_memory tools.
  • Verify the statefulness: Use the Inspector to call read_from_memory, then write_to_memory with some test data (e.g., { "concept": "Grid", "known": true }), and finally call read_from_memory again. Confirm that the data returned by the second read reflects the change you wrote, and check the data/memory.json file directly to see the persisted update.

Last step! Time to tell the AI model how to use the tools and resource we’ve provided.

In src/index.ts, uncomment the last commented-out line, registerPrompts();.

Open src/prompts/index.ts.

  • Registration: The registerCssTutorPrompt function uses server.prompt() to define the css-tutor-guidance prompt, giving it a name and description for the client. It specifies no input schema ({}) because calling this prompt doesn't require any arguments from the client. (We could pass dynamic data here, which can get pretty spicy.)
  • Core Logic (Handler & Content): The handler for this prompt is very simple. It just returns the content of the cssTutorPromptText constant (defined in the same file), which contains the detailed instructions for the AI on how to behave like a CSS tutor using the available tools and memory.
  • Activation: The main registerPrompts function (at the bottom of the file) makes sure registerCssTutorPrompt() gets called when the server starts.
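
In SDK terms, that boils down to something like this (the guidance text is heavily abbreviated here):

import { server } from "../index.js";

// The real cssTutorPromptText in the repo is much longer and more detailed
const cssTutorPromptText = `You are a CSS tutor. Check what the user already knows with read_from_memory,
fetch recent CSS news with get_latest_updates, and record newly learned concepts with write_to_memory.`;

function registerCssTutorPrompt() {
  server.prompt(
    "css-tutor-guidance",
    "Guidance for acting as a personalized CSS tutor",
    async () => ({
      messages: [{ role: "user", content: { type: "text", text: cssTutorPromptText } }]
    })
  );
}

export function registerPrompts() {
  registerCssTutorPrompt();
}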

Compile the changes.

npm run build

Run the MCP Inspector.

  • In the Prompts tab, you should now see css-tutor-guidance registered.
  • Try calling the prompt from the Inspector. It should display the full guidance text defined in cssTutorPromptText.

Pretty cool, right? Well, here’s the thing: Even though the server now offers the prompt via MCP, most clients can’t automatically use MCP prompts. Claude will need you to pick it manually, and Cursor can’t even access MCP prompts yet.

So, for now, rely on features like Cursor rules to provide instructions on how to use certain MCP servers. Hopefully, we’ll see more MCP adoption soon.

With our server fully built and debugged using the Inspector, it's time to connect it to actual AI clients.

If you use the Claude desktop application:

  1. Go to Settings. Not the settings near your profile (for some reason?), but the app settings in the menu bar at the top of your screen.
  2. Go to “Developer” → “Edit Config”
  3. Add an entry for the css-tutor server (see the example config after these steps).
  4. Replace the absolute path (not relative!) and API key with your actual values.
  5. Restart Claude Desktop, and connect to the css-tutor server. (See video up top for where to press.)
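
The entry uses Claude Desktop’s standard mcpServers config format; the path and key below are placeholders:

{
  "mcpServers": {
    "css-tutor": {
      "command": "node",
      "args": ["/absolute/path/to/your/clone/build/index.js"],
      "env": {
        "OPENROUTER_API_KEY": "sk-or-..."
      }
    }
  }
}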

If you use the Cursor editor:

  1. Go to Cursor Settings > MCP > Add new global MCP server.
  2. Configure the server the exact same as in the Claude steps above.
  3. Create a prompt rule: Cursor doesn't automatically use the server's MCP prompt, so go to Cursor Settings > Rules, add a new Project Rule, and paste in the prompt text from the server.
  4. Activate the rule: When chatting or generating code (e.g., Cmd+K) in Cursor within this project, you need to @mention the rule, and then Cursor’s agent can use the server as intended without further guidance.

You should now be able to recreate the demo video scenario, chatting with one client and then moving to the other whenever you want.

First, pat yourself on the back. New skills are awesome.

Second, think about the implications.

This CSS Tutor example is simple by design, to help you learn the power of MCP as quickly as possible—but imagine what you could do with some real tools.

Maybe you want:

  • More sophisticated state: Replace the JSON file with a proper database (like SQLite or PostgreSQL) for multi-user support or larger datasets.
  • Additional tools: Add tools to search specific documentation sites (like MDN), fetch CSS examples from Codepen, or even analyze a user's local CSS file.
  • Dynamic prompts: Instead of a static prompt, make the guidance adapt based on the user's known concepts stored in the resource.
  • Error handling and rerouting: Add more granular error handling, especially for external API calls, and reroute logic when one service is down.
  • Different transports: Explore other transport options besides StdioServerTransport if you need network-based communication—e.g., Server-Sent Events (SSE) for streaming.

MCP provides a pretty powerful framework to make whatever you want. By building MCP servers, you can make tailored, stateful, and future-proof integrations that connect to any new assistant that speaks the protocol.

Happy building!
