Model Context Protocol (MCP) Communication
Model Context Protocol (MCP) is an open‑source standard for connecting AI applications to external systems. MCP lets applications such as ChatGPT or Claude access data sources (like local files or databases), tools (calculators, search engines) and workflows. MCP is akin to a USB‑C port for AI: it standardizes how AI models connect to outside context.
What does MCP enable?
According to the MCP documentation, the protocol allows agents to access calendars, design applications, databases and even 3D printers. Anthropic notes that MCP replaces fragmented integrations with a single universal protocol, making it easier to give AI systems access to the data they need.
The MCP specification explains that the protocol establishes a JSON‑RPC 2.0 communication channel between hosts (LLM applications), clients (connectors within the host) and servers (services that provide context and capabilities). MCP servers can provide three types of features—resources, prompts and tools—while clients can support sampling, roots and elicitation. Security and trust principles emphasize user consent, data privacy and tool safety.
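The JSON‑RPC 2.0 framing the specification builds on can be sketched as plain dictionaries. This illustrative example (the method name and result payload are hypothetical placeholders, not taken from the spec verbatim) shows the general request/response shape: every message carries a jsonrpc version, requests carry an id and a method, and the matching response echoes the id with either a result or an error.

```python
import json

# A JSON-RPC 2.0 request: the client names a method and supplies params.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

# The matching response carries the same id and either a result or an error,
# never both.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": []},
}

print(json.dumps(request))
```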
Install the mcp package
The Python SDK implements the MCP specification and simplifies building servers and clients. To install the package and its command‑line interface, run:
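Assuming the SDK is published on PyPI under the package name mcp, a typical installation looks like this (the cli extra pulls in the command‑line tools):

```shell
pip install "mcp[cli]"
```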
Simple MCP server with FastMCP
The FastMCP class automatically creates MCP servers from annotated functions. The following example exposes a tool that adds two numbers, a resource that returns a personalized greeting, and a prompt that generates a greeting message. When run with the streamable-http transport, the server listens for JSON‑RPC requests on an HTTP endpoint.
from mcp.server.fastmcp import FastMCP

# Create an MCP server named "Demo"; host and port are passed as
# server settings in the constructor
mcp = FastMCP("Demo", host="localhost", port=8000, json_response=True)

# Define an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

# Define a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Return a personalized greeting for the given name."""
    return f"Hello, {name}!"

# Define a prompt
@mcp.prompt()
def greet_user(name: str, style: str = "friendly") -> str:
    """Generate a greeting prompt in a specified style."""
    styles = {
        "friendly": "Please write a warm, friendly greeting",
        "formal": "Please write a formal, professional greeting",
        "casual": "Please write a casual, relaxed greeting",
    }
    return f"{styles.get(style, styles['friendly'])} for someone named {name}."

# Run the server using streamable HTTP transport
if __name__ == '__main__':
    mcp.run(transport="streamable-http")
The server listens on localhost:8000 and exposes its capabilities via JSON‑RPC at the /mcp endpoint. Note that tools are not exposed as top‑level JSON‑RPC methods: a client first performs the initialize handshake, then invokes tools through the standard tools/call method, passing the tool name and its arguments in params. A tools/call request for the add tool looks like this (a raw HTTP client such as requests must also send the Accept header below and carry the session headers negotiated during initialization):
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {"name": "add", "arguments": {"a": 5, "b": 7}},
    "id": 1,
}
response = requests.post(
    "http://localhost:8000/mcp",
    json=payload,
    headers={"Accept": "application/json, text/event-stream"},
)
print(response.json())
# The response wraps the tool output (here, 12) inside result.content.
In practice, the SDK's ClientSession together with the streamablehttp_client helper handles initialization, session management and result parsing for you.
The mcp CLI tool can also be used to connect servers to supported LLM clients.
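Assuming the server code above is saved as server.py, the CLI provides commands like the following for local development (exact flags may vary between SDK versions):

```shell
# Launch the server with the MCP Inspector for interactive testing
mcp dev server.py

# Install the server into a supported client such as Claude Desktop
mcp install server.py --name "Demo"
```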
✅ Activity
Design an MCP server that computes the Body Mass Index (BMI) of a user. The server should provide a tool named calculate_bmi that accepts weight (kg) and height (m) and returns the BMI (weight / height²). You should also create a resource that returns a description of what BMI means, and a prompt that instructs an assistant to explain the result to the user.
- Server: Create an MCP server using FastMCP. Define a tool calculate_bmi(weight: float, height: float) -> float. Define a resource at "definition://bmi" that returns a string describing BMI. Define a prompt explain_bmi(name: str, weight: float, height: float) that uses the tool and resource to generate a natural-language explanation.
from mcp.server.fastmcp import FastMCP

# Initialize the server; host and port are server settings
mcp = FastMCP("BMI", host="localhost", port=8001, json_response=True)

@mcp.tool()
def calculate_bmi(weight: float, height: float) -> float:
    """Compute the body mass index given weight (kg) and height (m)."""
    return weight / (height ** 2)

@mcp.resource("definition://bmi")
def bmi_definition() -> str:
    """Return a human‑readable definition of BMI."""
    return (
        "Body Mass Index (BMI) is a measure of body fat based on weight and height. "
        "BMI = weight (kg) / (height (m))^2."
    )

@mcp.prompt()
def explain_bmi(name: str, weight: float, height: float) -> str:
    """Generate a prompt for an assistant to explain the BMI result."""
    bmi_value = calculate_bmi(weight, height)
    return (
        f"{name}'s BMI is {bmi_value:.2f}. Use the BMI definition resource "
        f"and give advice on maintaining a healthy weight."
    )

if __name__ == '__main__':
    mcp.run(transport="streamable-http")
- Client: Write a Python script that sends a JSON‑RPC request to the server’s /mcp endpoint to call the calculate_bmi tool for a person weighing 70 kg and 1.75 m tall. Print the returned BMI value.
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {"name": "calculate_bmi", "arguments": {"weight": 70.0, "height": 1.75}},
    "id": 1,
}
resp = requests.post(
    "http://localhost:8001/mcp",
    json=payload,
    headers={"Accept": "application/json, text/event-stream"},
)
print("BMI result:", resp.json()["result"])
(A raw HTTP client must complete the initialize handshake before calling tools/call; the SDK's ClientSession handles this automatically.)
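The tool's arithmetic can also be checked locally, without a running server. This standalone sketch recomputes the expected value for the 70 kg, 1.75 m example:

```python
# Standalone check of the BMI formula used by the calculate_bmi tool.
def calculate_bmi(weight: float, height: float) -> float:
    """Compute BMI as weight (kg) divided by height (m) squared."""
    return weight / (height ** 2)

bmi = calculate_bmi(70.0, 1.75)
print(f"{bmi:.2f}")  # 70 / 1.75^2 = 22.86
```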
✅ Knowledge Check
1. What is the primary purpose of the Model Context Protocol (MCP)?
- Incorrect. MCP is an open standard for connecting AI applications to external data sources, tools and workflows; it does not specify how to store models.
- Correct. MCP provides a standardized JSON‑RPC 2.0 protocol that allows hosts, clients and servers to share context and tools.
- Incorrect. MCP builds on JSON‑RPC and does not replace HTTP for RESTful services.
- Incorrect. MCP focuses on connecting applications and does not define programming languages.
2. Which type of features can an MCP server provide?
- Incorrect. MCP servers can expose resources, prompts and tools.
- Correct. The MCP specification lists resources, prompts and tools as server‑provided features.
- Incorrect. Servers can also provide resources and tools, not just prompts.
- Incorrect. Sampling and roots are features that clients may offer to servers.