MCP Server Tutorial — Build Your First Server in 30 Minutes
This MCP server tutorial teaches you how to build a production-ready Model Context Protocol server from scratch. In 30 minutes, you’ll go from zero to a working server that Claude Code can connect to — with no prior MCP experience required.
Who this is for:
- Senior engineers — You want to understand MCP protocol internals and build internal tooling that your whole team can use
- Junior engineers — You’ve written Python async code before and want to add MCP to your toolkit
- Teams — You need to connect internal APIs to AI agents without writing custom integrations for each tool
1. What You’ll Build
You’ll create a Task Manager MCP Server with four tools that demonstrate the full capability set:
| Tool | Purpose | When to Use |
|---|---|---|
| add_task | Create a new task | User wants to track something |
| list_tasks | View all tasks | User asks “what do I have to do?” |
| complete_task | Mark done | User finishes something |
| get_task_stats | See completion metrics | User wants productivity insights |
Think of it like this: your team has a custom task tracker. Instead of building separate integrations for Claude Code, Cursor, and your internal agent framework, you build one MCP server, and all three hosts connect automatically. That’s the N+M integration benefit: with N hosts and M tools, you write M servers plus N host configurations instead of N×M custom integrations — here, one server replaces three separate integrations.
2. Real-World Problem Context
Before MCP, every AI tool required a separate custom integration — the cost compounds quickly across a real engineering team.
The Pre-MCP Integration Nightmare
A real scenario from a mid-size AI startup in 2024: they had an internal task API, a customer database, and a deployment system. They wanted Claude Code to interact with all three.
Without MCP, they built:
- Custom Claude Code extension (2 weeks, 1 engineer)
- Custom Cursor extension (1.5 weeks, fixing different API shapes)
- Custom agent framework integration (2 weeks, different auth model)
- Total: 5.5 weeks, 3 different codebases to maintain
The maintenance burden was worse. When the task API added a new required field, they updated three integrations. When Cursor changed its extension format, they rewrote that integration. After 6 months, the integration code outnumbered the actual business logic.
Why This MCP Server Tutorial Matters
MCP changes the math. You write the server once. Claude Code connects via configuration. Cursor connects via configuration. Your custom agent connects via the same Python SDK you used to build the server.
The 30-minute investment in this tutorial pays off when:
- Your team adopts a new AI tool (zero integration work)
- You update your internal API (change one server, not N clients)
- A new engineer joins (they learn one protocol, not N custom integrations)
3. MCP Server Architecture and Concepts
Think of MCP like a universal remote control for AI tools:
Your TV, soundbar, and streaming box all speak different protocols. The universal remote translates one standard button press into whatever each device needs. MCP does the same for AI agents — one protocol, any tool.
The Three Components
MCP Host (Claude Code, Cursor, your agent)
- Embeds an LLM and wants to give it tools
- Manages MCP connections, routes tool calls
- You don’t build this — you use existing hosts
MCP Client (inside the host)
- Speaks the MCP protocol
- Discovers tools, handles serialization
- The SDK provides this — you configure it
MCP Server (what you’re building)
- Exposes tools, resources, prompts
- Runs as stdio subprocess or HTTP service
- Zero knowledge of which host connects
The Three Capability Types
Tools — Actions the LLM can take (create task, query database, send message)
Resources — Data the LLM can read (task list, file contents, API response)
Prompts — Reusable prompt templates the server provides
4. Build an MCP Server Step by Step
Follow these five steps to build, connect, and test a working MCP server from scratch.
Step 1: Project Setup
Create your project directory and activate a virtual environment:
```bash
mkdir task-manager-mcp
cd task-manager-mcp
python -m venv venv
source venv/bin/activate
```

Install the MCP SDK:

```bash
pip install mcp
```

Verify your setup:

```bash
python -c "import mcp; print(mcp.__version__)"
```

You need version 1.6.0 or higher. The SDK handles all JSON-RPC protocol details so you can focus on your tools.
Step 2: Create the Server Skeleton
Create server.py:
```python
#!/usr/bin/env python3
"""Task Manager MCP Server — Tutorial Implementation"""

import asyncio

from mcp.server import Server
from mcp.server.models import InitializationOptions
from mcp.types import Tool, TextContent
import mcp.server.stdio

# Initialize the MCP server
app = Server("task-manager")

# In-memory task store (replace with SQLite in production)
tasks = {}
task_counter = 0


@app.list_tools()
async def list_tools() -> list[Tool]:
    """Tell the host what tools this server provides."""
    return [
        Tool(
            name="add_task",
            description="Create a new task with title, description, and priority.",
            inputSchema={
                "type": "object",
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "Short task title (3-50 characters)"
                    },
                    "description": {
                        "type": "string",
                        "description": "Detailed task description"
                    },
                    "priority": {
                        "type": "string",
                        "enum": ["low", "medium", "high"],
                        "description": "Task priority level"
                    }
                },
                "required": ["title"]
            }
        ),
        Tool(
            name="list_tasks",
            description="List all tasks, optionally filtered by priority.",
            inputSchema={
                "type": "object",
                "properties": {
                    "priority_filter": {
                        "type": "string",
                        "enum": ["low", "medium", "high", "all"],
                        "description": "Filter by priority (default: all)"
                    }
                }
            }
        ),
        Tool(
            name="complete_task",
            description="Mark a task as completed by ID.",
            inputSchema={
                "type": "object",
                "properties": {
                    "task_id": {
                        "type": "integer",
                        "description": "The task ID to complete"
                    }
                },
                "required": ["task_id"]
            }
        ),
        Tool(
            name="get_task_stats",
            description="Get task completion statistics.",
            inputSchema={"type": "object", "properties": {}}
        )
    ]


@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    """Handle actual tool execution."""
    global task_counter

    if name == "add_task":
        task_counter += 1
        task = {
            "id": task_counter,
            "title": arguments["title"],
            "description": arguments.get("description", ""),
            "priority": arguments.get("priority", "medium"),
            "completed": False
        }
        tasks[task_counter] = task
        return [TextContent(
            type="text",
            text=f"Created task #{task_counter}: {task['title']} ({task['priority']})"
        )]

    elif name == "list_tasks":
        priority_filter = arguments.get("priority_filter", "all")
        filtered = [t for t in tasks.values()
                    if priority_filter == "all" or t["priority"] == priority_filter]

        if not filtered:
            return [TextContent(type="text", text="No tasks found.")]

        lines = [f"{'✓' if t['completed'] else '○'} #{t['id']}: {t['title']} [{t['priority']}]"
                 for t in sorted(filtered, key=lambda x: x["id"])]
        return [TextContent(type="text", text="\n".join(lines))]

    elif name == "complete_task":
        task_id = arguments["task_id"]
        if task_id not in tasks:
            return [TextContent(type="text", text=f"Task #{task_id} not found.")]
        tasks[task_id]["completed"] = True
        return [TextContent(type="text", text=f"Task #{task_id} completed.")]

    elif name == "get_task_stats":
        total = len(tasks)
        completed = sum(1 for t in tasks.values() if t["completed"])
        by_priority = {"low": 0, "medium": 0, "high": 0}
        for t in tasks.values():
            by_priority[t["priority"]] += 1

        return [TextContent(type="text", text=
            f"Tasks: {total} total, {completed} done, {total - completed} pending. "
            f"By priority: low={by_priority['low']}, medium={by_priority['medium']}, "
            f"high={by_priority['high']}"
        )]

    return [TextContent(type="text", text=f"Unknown tool: {name}")]


async def main():
    """Start the MCP server using stdio transport."""
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="task-manager",
                server_version="1.0.0"
            )
        )


if __name__ == "__main__":
    asyncio.run(main())
```

Key parts explained:
@app.list_tools() — Called when the host connects. Returns the tool catalog with JSON schemas. The host uses this to tell the LLM what tools are available.
@app.call_tool() — Called when the LLM decides to use a tool. Receives the tool name and arguments (validated against your schema by the host).
stdio_server() — Creates the transport layer. Reads JSON-RPC messages from stdin, writes responses to stdout. The SDK handles all protocol framing.
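To build intuition for what the transport does, here is a deliberately simplified sketch of a line-delimited JSON-RPC loop — not the SDK’s actual implementation (which also handles message framing, notifications, and concurrent requests), just the core request/response shape:

```python
import json


def serve(read_stream, write_stream, handle):
    """Toy stdio-style JSON-RPC loop: one request per line, one response per line.

    `handle` is any callable taking (method, params) and returning a result dict.
    """
    for line in read_stream:
        if not line.strip():
            continue
        request = json.loads(line)
        result = handle(request["method"], request.get("params", {}))
        response = {"jsonrpc": "2.0", "id": request["id"], "result": result}
        write_stream.write(json.dumps(response) + "\n")
```

This is also why a stdio MCP server must never print to stdout: stray output corrupts the protocol stream. Log to stderr instead.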
Step 3: Test the Server
Verify your server starts without errors:
```bash
python server.py
```

It will appear to hang — that’s correct. It’s waiting for JSON-RPC messages on stdin. Press Ctrl+C to exit.
Step 4: Connect to Claude Code
Section titled “Step 4: Connect to Claude Code”Create .mcp.json in your project root:
```json
{
  "mcpServers": {
    "task-manager": {
      "command": "python",
      "args": ["/absolute/path/to/task-manager-mcp/server.py"]
    }
  }
}
```

Important: Use the absolute path. Claude Code launches the server from its working directory.
Start Claude Code in the same directory:
```bash
claude
```

Claude Code will read .mcp.json, start your server, and discover the tools automatically.
Step 5: Test Your Tools
Try these commands in Claude Code:
- Add a high priority task called "Deploy to production"
- List all my pending tasks
- Mark task 1 as complete

Claude will call your tools and display the results. You just built a working MCP server.
5. MCP Server Protocol and System View
MCP uses JSON-RPC 2.0 messages over stdio — understanding the wire format makes debugging significantly easier.
How the Protocol Works
MCP uses JSON-RPC 2.0 over stdio. Here’s what actually travels across the wire:
Host → Server (discover tools):
```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {} }
```

Server → Host (tool catalog):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "add_task",
        "description": "Create a new task...",
        "inputSchema": { "type": "object", ... }
      }
    ]
  }
}
```

Host → Server (execute tool):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "add_task",
    "arguments": { "title": "Review PR", "priority": "high" }
  }
}
```

The Python SDK handles all of this, but understanding the wire format helps with debugging.
[Diagram: MCP Communication Flow — how a tool call travels from the LLM to your server and back]

Server Lifecycle

[Diagram: Server Lifecycle — from startup to shutdown]
6. MCP Server Tutorial Code Examples
Two patterns extend the tutorial server into a production-ready implementation: adding resources and replacing in-memory storage.
Example 1: Extending with Resources
Resources expose read-only data. Add this to your server:
```python
from mcp.types import Resource


@app.list_resources()
async def list_resources() -> list[Resource]:
    return [
        Resource(
            uri="tasks://all",
            name="All Tasks",
            description="Complete task list as JSON",
            mimeType="application/json"
        )
    ]


@app.read_resource()
async def read_resource(uri: str) -> str:
    if uri == "tasks://all":
        import json
        return json.dumps(list(tasks.values()), indent=2)
    raise ValueError(f"Unknown resource: {uri}")
```

Tools are for actions. Resources are for data. Hosts can cache resources — they never cache tool results.
Example 2: Production-Ready SQLite Storage
Replace the in-memory dict with SQLite:
```python
import sqlite3


class TaskStore:
    def __init__(self, db_path: str = "tasks.db"):
        self.db_path = db_path
        self._init_db()

    def _init_db(self):
        with sqlite3.connect(self.db_path) as conn:
            conn.execute("""
                CREATE TABLE IF NOT EXISTS tasks (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    title TEXT NOT NULL,
                    description TEXT,
                    priority TEXT DEFAULT 'medium',
                    completed BOOLEAN DEFAULT 0
                )
            """)

    def add(self, title: str, description: str = "", priority: str = "medium") -> int:
        with sqlite3.connect(self.db_path) as conn:
            cursor = conn.execute(
                "INSERT INTO tasks (title, description, priority) VALUES (?, ?, ?)",
                (title, description, priority)
            )
            conn.commit()
            return cursor.lastrowid
```

7. MCP Server Trade-offs and Pitfalls
Most MCP server failures fall into three categories: path misconfiguration, blocking async code, and silent schema validation errors.
Common MCP Server Issues and Fixes
Section titled “Common MCP Server Issues and Fixes”📊 Visual Explanation
Section titled “📊 Visual Explanation”Common MCP Server Issues and Fixes
- Server starts but tools don't appear in Claude
- Tool calls hang or timeout
- Schema validation errors
- Changes not reflected after restart
- Server crashes on bad input
- Use absolute paths in .mcp.json; check Python is in PATH
- Ensure async functions don't block; check for infinite loops
- Validate JSON Schema at jsonschemavalidator.net
- Run /refresh in Claude Code to clear tool cache
- Add try/except in call_tool; return errors as TextContent
This Is Where Engineers Get Burned
Section titled “This Is Where Engineers Get Burned”Silent schema failures. If your inputSchema doesn’t match what the LLM sends, the tool call fails silently. Claude gets an error but may not show it clearly.
Defense: Always validate inputs explicitly and return clear error messages:
```python
@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    try:
        if name == "add_task":
            if "title" not in arguments:
                return [TextContent(type="text", text="Error: 'title' is required")]
            if len(arguments["title"]) < 3:
                return [TextContent(type="text", text="Error: title must be 3+ characters")]
            # ... execute
    except Exception as e:
        return [TextContent(type="text", text=f"Error: {str(e)}")]
```

Process overhead. Each stdio MCP server is a separate process. Five servers = five subprocesses. For lightweight tools this is fine. For high-frequency calls, the IPC overhead adds up.
Mitigation: For high-frequency production use, consider HTTP+SSE transport or direct function calling for hot paths.
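Returning to silent schema failures: you can catch mismatches before they reach your tool logic by checking arguments against the declared schema yourself. A minimal hand-rolled validator for the JSON Schema subset this tutorial uses (required keys, primitive types, enums) — in production you would likely use the jsonschema package instead:

```python
def validate_args(schema: dict, args: dict) -> list[str]:
    """Return a list of human-readable errors; empty list means the args are valid.

    Covers only the schema features used in this tutorial, not full JSON Schema.
    """
    errors = []
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"'{key}' is required")
    type_map = {"string": str, "integer": int, "number": (int, float),
                "boolean": bool, "object": dict, "array": list}
    for key, spec in schema.get("properties", {}).items():
        if key not in args:
            continue
        expected = type_map.get(spec.get("type"))
        if expected is not None and not isinstance(args[key], expected):
            errors.append(f"'{key}' must be of type {spec['type']}")
        if "enum" in spec and args[key] not in spec["enum"]:
            errors.append(f"'{key}' must be one of {spec['enum']}")
    return errors
```

Call it at the top of call_tool and return the errors as TextContent — the LLM then gets an actionable message instead of a silent failure.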
8. MCP Server Interview Questions
Section titled “8. MCP Server Interview Questions”MCP appears in senior GenAI engineer interviews. Here’s what to expect:
“Walk me through building an MCP server”
Weak answer: “I’d use the Python SDK and define some tools.”
Strong answer: “I’d define the capability contract — tools, resources, and prompts — with JSON Schema validation. I’d implement @list_tools() to expose capabilities and @call_tool() to handle execution. For transport, stdio works for local development, HTTP+SSE for remote. I’d validate inputs defensively and return clear error messages. Testing involves both unit tests for tool logic and integration tests with an actual host.”
“When would you use MCP versus direct function calling?”
Strong answer: “MCP wins when tools are shared across multiple hosts — Claude Code, Cursor, and custom agents all connecting to the same capability. The N+M integration benefit only materializes with multiple clients. For a single agent with internal tools that won’t be reused, direct function calling has less overhead and is simpler to debug.”
“How do you handle security in MCP servers?”
Strong answer: “Never hardcode credentials. Inject them via environment variables at server startup. The .mcp.json can reference env vars that the host passes through. For remote HTTP+SSE servers, authentication happens at the HTTP layer. I also validate all inputs and treat tool results as untrusted — they get injected into LLM context, so a compromised server could attempt prompt injection.”
“What happens if an MCP server crashes mid-execution?”
Strong answer: “With stdio transport, the host detects the subprocess exit and surfaces an error to the LLM. With HTTP+SSE, the connection drops and the client retries or fails. For critical workflows, I implement idempotent tools — calling complete_task twice shouldn’t break anything.”
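Idempotency is cheap to get with SQL. A self-contained sketch (the table and function names are illustrative, not the tutorial’s TaskStore) where calling complete twice returns the same answer and leaves the row unchanged:

```python
import sqlite3


def make_store() -> sqlite3.Connection:
    # Illustrative in-memory schema in the spirit of the tutorial's task table
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT, completed INTEGER DEFAULT 0)")
    conn.execute("INSERT INTO tasks (title) VALUES ('Ship release')")
    return conn


def complete_task(conn: sqlite3.Connection, task_id: int) -> str:
    """Idempotent completion: a repeated call is a harmless no-op, not an error."""
    row = conn.execute("SELECT 1 FROM tasks WHERE id = ?", (task_id,)).fetchone()
    if row is None:
        return f"Task #{task_id} not found."
    conn.execute("UPDATE tasks SET completed = 1 WHERE id = ?", (task_id,))
    return f"Task #{task_id} completed."
```

If the host retries after a crash mid-execution, the second call converges to the same state instead of corrupting it.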
9. MCP Server in Production
Deploying an MCP server beyond a local stdio process requires choosing between Docker sidecar and HTTP+SSE transport based on your sharing requirements.
Deployment Patterns
Local stdio (development):
- Server runs as subprocess
- Fast iteration, easy debugging
- State is local to machine
Docker sidecar:
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY server.py .
CMD ["python", "server.py"]
```

HTTP+SSE (shared services):
- Deploy as stateless HTTP service
- Multiple agents connect to one instance
- Requires authentication at edge
Observability
Add structured logging:
```python
import logging
import time

logger = logging.getLogger("mcp.task-manager")


@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    start = time.time()
    logger.info(f"Tool called: {name}", extra={"args": arguments})

    try:
        result = await _execute(name, arguments)
        logger.info(f"Tool success: {name}", extra={"duration_ms": (time.time() - start) * 1000})
        return result
    except Exception as e:
        logger.error(f"Tool failed: {name}", extra={"error": str(e)})
        raise
```

Credentials Management
Never put secrets in .mcp.json. Use environment variables:
```json
{
  "mcpServers": {
    "task-manager": {
      "command": "python",
      "args": ["/path/to/server.py"],
      "env": {
        "DATABASE_URL": "${DATABASE_URL}"
      }
    }
  }
}
```

The host substitutes ${DATABASE_URL} from its environment.
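The substitution itself is simple string templating. A sketch of what the host does with those values (illustrative — check your host’s documentation for the exact placeholder syntax it supports):

```python
import os
import re


def substitute_env(value: str, env=os.environ) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unset variables become empty strings in this sketch; a real host
    might instead raise or leave the placeholder intact.
    """
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), value)
```

Because placeholders are resolved at launch, the secret lives only in the environment — the .mcp.json file itself can be committed safely.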
10. Summary and Key Takeaways
What you learned:
- MCP servers expose three capabilities: Tools (actions), Resources (data), Prompts (templates)
- The protocol is JSON-RPC over stdio or HTTP+SSE — the SDK handles details
- Server lifecycle: Initialize → Discover → Execute loop → Shutdown
- Tool schemas use JSON Schema — validate inputs defensively
- Resources are for data, Tools are for actions — keep them separate
Production checklist:
- Replace in-memory storage with SQLite/Postgres
- Add structured logging and metrics
- Validate all inputs with clear error messages
- Use environment variables for configuration
- Add tests for each tool handler
- Document .mcp.json configuration for your team
Next steps:
- Extend your server with update_task, delete_task, search_tasks
- Add webhook resources for real-time notifications
- Package your server for PyPI distribution
- Explore HTTP+SSE transport for remote deployment
Related
- Model Context Protocol (MCP) Overview — Deep dive into MCP architecture and when to use it
- AI Agents and Agentic Systems — How agents use MCP tools in reasoning loops
- Agentic Design Patterns — ReAct, tool use patterns, multi-agent coordination
- LangChain vs LangGraph — Frameworks for building agents that connect to MCP servers
- Python for GenAI Engineers — Async Python patterns used in MCP servers
- GenAI System Design — Production architecture for agent systems
Last updated: February 2026. Tutorial tested with mcp Python SDK 1.6.0 and Claude Code 0.2.
Frequently Asked Questions
How do you build an MCP server?
Install the MCP Python SDK and create a server instance. With the low-level API used in this tutorial, you register tools via the @app.list_tools() and @app.call_tool() decorators; the SDK's higher-level FastMCP API instead lets you define each tool as an async function with type-annotated parameters, a docstring, and the @mcp.tool() decorator. Either way, the SDK handles protocol negotiation, message serialization, and transport. You can test locally by connecting Claude Code to your server via the .mcp.json configuration file.
What is an MCP server used for?
An MCP server exposes tools, resources, and prompts to AI clients like Claude Code, Cursor, and other MCP-compatible applications through the Model Context Protocol. It lets AI agents connect to your internal APIs, databases, and services through a single standardized protocol instead of writing custom integrations for each client.
How do I connect an MCP server to Claude Code?
Add your server to the .mcp.json configuration file in your project root. Specify the server command (e.g., python server.py), any required environment variables, and the transport type (stdio for local, SSE for remote). Claude Code reads this configuration at startup and connects to all configured MCP servers automatically.
What can MCP tools do that regular function calling cannot?
MCP tools are discovered dynamically — the client queries the server for available tools at connection time rather than hardcoding tool definitions. MCP also supports resources (read-only data like files and database records) and prompts (reusable prompt templates) alongside tools. The protocol handles versioning, capability negotiation, and transport abstraction, making MCP servers portable across different AI clients.
What is the difference between MCP tools and MCP resources?
Tools are actions the LLM can take, like creating a task or querying a database. Resources are read-only data the LLM can access, like a task list or file contents. Hosts can cache resources but never cache tool results. Use tools for operations with side effects and resources for exposing data.
How does MCP compare to traditional API integrations?
Traditional API integrations require a separate custom integration for each AI client, creating an N×M problem. MCP provides a single standardized protocol so you build one server and any MCP-compatible client can connect. This reduces integration work from N×M to N+M, and API changes only require updating one server.
Can you use MCP servers with Claude, GPT, and other LLMs?
MCP servers work with any MCP-compatible host application regardless of the underlying LLM. Claude Code, Cursor, and other MCP clients can all connect to the same server. The server has zero knowledge of which host or LLM is connecting, making MCP servers portable across different AI applications.
How do you build an MCP server in Python?
Install the mcp Python package, create a Server instance, and register tools with the @app.list_tools() and @app.call_tool() decorators. Each tool uses a JSON Schema input definition and returns TextContent results. Run the server using the stdio_server() transport for local development. See the Python for GenAI guide for async patterns used in MCP servers.
What transport options are available for MCP servers?
MCP supports two transport types: stdio and HTTP+SSE. Stdio runs the server as a subprocess that communicates via stdin and stdout, ideal for local development. HTTP+SSE deploys the server as a stateless HTTP service that multiple agents can connect to remotely, suitable for shared production services that require authentication at the edge.
How do you debug an MCP server that is not working?
The most common issues are incorrect absolute paths in .mcp.json, Python not being in PATH, and blocking async functions. Verify the server starts by running python server.py directly. Use the --verbose flag to see JSON-RPC traffic. Add try-except blocks in call_tool to return clear error messages as TextContent instead of letting exceptions crash the server.