Quickstart Guide
This guide will get you started with mcp_use in under 5 minutes. We'll cover installation, basic configuration, and running your first agent.
Installation
mcp_use can be installed from the package registry (pip for Python, npm for TypeScript) or directly from source. Installing from source gives you access to the latest features and examples.
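The install commands for each option look roughly like the following. The package name (`mcp-use` on both PyPI and npm) and the repository URL are assumptions here — check the project's README for the authoritative commands:

```shell
# Python, from PyPI (package name assumed: mcp-use)
pip install mcp-use

# TypeScript, from npm (package name assumed: mcp-use)
npm install mcp-use

# From source (repository URL is illustrative)
git clone https://github.com/mcp-use/mcp-use.git
cd mcp-use
pip install -e .
```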
Installing LangChain Providers
mcp_use works with various LLM providers through LangChain. You’ll need to install the appropriate LangChain provider package for your chosen LLM:
```bash
# Python
pip install langchain-openai        # OpenAI
pip install langchain-anthropic     # Anthropic
pip install langchain-google-genai  # Google
pip install langchain-groq          # Groq

# TypeScript
npm install @langchain/openai        # OpenAI
npm install @langchain/anthropic     # Anthropic
npm install @langchain/google-genai  # Google
npm install @langchain/groq          # Groq
```
Tool Calling Required: Only models with tool-calling capabilities can be used with mcp_use. Make sure your chosen model supports function calling or tool use.
Environment Setup
Set up your environment variables in a .env file for secure API key management:
```bash
OPENAI_API_KEY=your_api_key_here
ANTHROPIC_API_KEY=your_api_key_here
GROQ_API_KEY=your_api_key_here
GOOGLE_API_KEY=your_api_key_here
```
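Once `load_dotenv()` has populated the environment (as the first-agent example below does), it helps to fail fast with a clear message when a key is missing. `require_key` is a hypothetical helper shown for illustration, not part of mcp_use:

```python
import os


def require_key(name: str) -> str:
    """Return the named environment variable, or fail with a clear message."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value


# Usage (after load_dotenv() has run):
# api_key = require_key("OPENAI_API_KEY")
```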
Your First Agent
Here’s a simple example to get you started:
```python
import asyncio
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

from mcp_use import MCPAgent, MCPClient


async def main():
    # Load environment variables
    load_dotenv()

    # Create configuration dictionary
    config = {
        "mcpServers": {
            "playwright": {
                "command": "npx",
                "args": ["@playwright/mcp@latest"],
                "env": {"DISPLAY": ":1"},
            }
        }
    }

    # Create MCPClient from configuration dictionary
    client = MCPClient(config)

    # Create LLM
    llm = ChatOpenAI(model="gpt-4o")

    # Create agent with the client
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    # Run the query
    result = await agent.run(
        "Find the best restaurant in San Francisco USING GOOGLE SEARCH",
    )
    print(f"\nResult: {result}")


if __name__ == "__main__":
    asyncio.run(main())
```
Configuration Options
You can also load the server configuration from a config file:

```python
client = MCPClient.from_config_file("browser_mcp.json")
```
Example configuration file (browser_mcp.json):
```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": {
        "DISPLAY": ":1"
      }
    }
  }
}
```
For multi-server setups, tool restrictions, and advanced configuration options, see the Configuration Overview .
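As a taste of a multi-server setup, a configuration can list several servers under `mcpServers`, and the agent then has access to tools from all of them. The second server's name and package below are illustrative assumptions, not part of this guide's setup:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
```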
Available MCP Servers
mcp_use supports any MCP server. Check out the Awesome MCP Servers list for available options.
Streaming Agent Output
Stream agent responses as they’re generated:
```python
# Inside an async function, with an agent set up as above:
async for chunk in agent.stream("your query here"):
    print(chunk, end="", flush=True)
```
Next Steps
Need Help? Join our community discussions on GitHub or check out the comprehensive examples in our repository!