
Release 1.4.0
This release migrates to LangChain 1.0.0, unlocking better performance and simplified agent execution patterns, and adds direct provider integration adapters for OpenAI, Anthropic, and Google.
Major Changes
LangChain 1.0.0 Migration
We’ve successfully migrated to LangChain 1.0.0, unlocking better performance and simplified agent execution patterns.
Key improvements:
- Upgraded from LangChain 0.3.27 → 1.0.0
- Simplified agent instantiation using create_agent()
- Reduced MCPAgent codebase by ~440 lines while adding functionality
- Enhanced internal agent loop handling
Usage example:
```python
from mcp_use import MCPAgent

agent = MCPAgent(config={"mcpServers": {...}})

# Run method now uses LangChain 1.0.0 internally
result = await agent.run("Your query")

# Stream method leverages create_agent() and astream()
async for chunk in agent.stream("Your query"):
    print(chunk)
```
Provider Adapters
Direct integration adapters for major AI providers - no MCPAgent required!
Use MCP tools directly with your preferred provider’s SDK:
```python
import json

from openai import OpenAI
from mcp_use import MCPClient
from mcp_use.agents.adapters import OpenAIMCPAdapter

client = MCPClient(config={"mcpServers": {...}})

# Create adapter for OpenAI format
adapter = OpenAIMCPAdapter()
await adapter.create_all(client)

# Use with OpenAI SDK
openai = OpenAI()
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Find hotels in Trapani"}],
    tools=adapter.tools + adapter.resources + adapter.prompts
)

# Execute tool calls
for tool_call in response.choices[0].message.tool_calls:
    executor = adapter.tool_executors[tool_call.function.name]
    result = await executor(**json.loads(tool_call.function.arguments))
    content = adapter.parse_result(result)
```
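To let the model see a tool's output, the standard Chat Completions pattern is to append each result to the conversation as a `"role": "tool"` message (keyed by `tool_call_id`) and call the API again. The message shape can be sketched with a stubbed tool call — `SimpleNamespace` stands in for the SDK's tool-call object, and the content string stands in for the parsed adapter result:

```python
import json
from types import SimpleNamespace

# Stubbed tool call, mimicking the shape of an OpenAI SDK tool_call object
tool_call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(name="search_hotels",
                             arguments='{"city": "Trapani"}'),
)

# Decode the JSON-encoded arguments the model produced
args = json.loads(tool_call.function.arguments)

# Follow-up message the Chat Completions API expects for a tool result;
# the content string here stands in for adapter.parse_result(result)
tool_message = {
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": f"hotels in {args['city']}",
}
```

Appending `tool_message` to `messages` and calling `chat.completions.create` again gives the model the tool output to answer with.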
```python
from anthropic import Anthropic
from mcp_use import MCPClient
from mcp_use.agents.adapters import AnthropicMCPAdapter

client = MCPClient(config={"mcpServers": {...}})

# Create adapter for Anthropic format
adapter = AnthropicMCPAdapter()
await adapter.create_all(client)

# Use with Anthropic SDK
anthropic = Anthropic()
response = anthropic.messages.create(
    model="claude-sonnet-4-5",
    messages=[{"role": "user", "content": "Find hotels in Trapani"}],
    tools=adapter.tools + adapter.resources + adapter.prompts,
    max_tokens=1024
)

# Execute tool calls
for content in response.content:
    if content.type == "tool_use":
        executor = adapter.tool_executors[content.name]
        result = await executor(**content.input)
        parsed = adapter.parse_result(result)
```
```python
from google import genai
from google.genai import types
from mcp_use import MCPClient
from mcp_use.agents.adapters import GoogleMCPAdapter

client = MCPClient(config={"mcpServers": {...}})

# Create adapter for Google format
adapter = GoogleMCPAdapter()
await adapter.create_all(client)

# Use with Google SDK
gemini = genai.Client()
google_tools = [types.Tool(function_declarations=adapter.tools)]
response = gemini.models.generate_content(
    model="gemini-flash-lite-latest",
    contents=[types.Content(role="user",
                            parts=[types.Part.from_text(text="Find hotels")])],
    config=types.GenerateContentConfig(tools=google_tools)
)

# Execute function calls
for function_call in response.function_calls:
    executor = adapter.tool_executors[function_call.name]
    result = await executor(**function_call.args)
    content = adapter.parse_result(result)
```
Available adapters:
- OpenAIMCPAdapter - OpenAI & compatible APIs
- AnthropicMCPAdapter - Claude & Anthropic APIs
- GoogleMCPAdapter - Gemini & Google AI APIs
- LangChainMCPAdapter - LangChain integration (existing)
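All three provider examples share the same dispatch step: the model returns a tool name plus arguments, and the caller looks up the matching async executor by name in a tool_executors mapping. A minimal stdlib sketch of that pattern (illustrative names, not mcp-use code):

```python
import asyncio
import json

# A fake async tool, standing in for an MCP tool executor
async def search_hotels(city: str) -> str:
    return f"hotels in {city}"

# Name-to-coroutine mapping, mirroring the adapters' tool_executors dict
tool_executors = {"search_hotels": search_hotels}

async def dispatch(name: str, arguments: str) -> str:
    executor = tool_executors[name]                  # look up executor by tool name
    return await executor(**json.loads(arguments))   # decode JSON args and await the call

# The model would supply the name and JSON arguments; here they are hardcoded
result = asyncio.run(dispatch("search_hotels", '{"city": "Trapani"}'))
print(result)  # → hotels in Trapani
```

Only the shape of the tool-call object differs between providers; the lookup-and-await step is identical.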
Enhancements
Agent Improvements
- Dynamic tool updates: Tools can now be refreshed during agent execution
- Enhanced step tracking: Better visibility into agent execution steps with middleware support
- Streaming improvements: More reliable streaming output with proper null checking
- History filtering: Automatic filtering of ToolMessage from conversation history
- Remote mode delegation: Query streaming properly delegated to remote agents
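The history-filtering behavior can be pictured with plain dicts — a sketch of the idea only, not the library's internals, which operate on LangChain ToolMessage objects:

```python
# A conversation history containing a raw tool-output message
history = [
    {"role": "user", "content": "Find hotels in Trapani"},
    {"role": "tool", "content": "...raw tool output..."},
    {"role": "assistant", "content": "Here are some hotels."},
]

# Drop tool messages so only user/assistant turns are carried forward
filtered = [m for m in history if m["role"] != "tool"]
```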
Quality & Reliability
- Stricter streaming tests for better reliability
- Improved observability callback handling
- Better null checks for node output message extraction
- Fixed max_steps assignment issues
Breaking Changes
None - all existing methods have been updated internally to use LangChain 1.0.0 while maintaining the same API.
Staying on LangChain 0.3.x
If you need to remain on LangChain 0.3.x, pin mcp-use to version 1.3.13:
```bash
# Using pip
pip install mcp-use==1.3.13

# Using uv
uv pip install mcp-use==1.3.13
```
Or in your requirements.txt or pyproject.toml:
```text
# requirements.txt
mcp-use==1.3.13
```

```toml
# pyproject.toml
[project]
dependencies = [
    "mcp-use==1.3.13",
]
```
Version 1.3.13 is the last release compatible with LangChain 0.3.27.
Dependencies
```text
langchain>=1.0.0
langchain-core>=1.0.0
```
See Also