Quickstart
Get started with mcp_use in minutes
This guide will get you started with mcp_use in under 5 minutes. We’ll cover installation, basic configuration, and running your first agent.
Installation
Installing from source gives you access to the latest features and examples!
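Install the released package from PyPI, or install from source for the latest features. The repository URL below is assumed from the project name:

```shell
# Install the released package from PyPI
pip install mcp-use

# Or install from source to get the latest features and examples
git clone https://github.com/mcp-use/mcp-use.git
cd mcp-use
pip install -e .
```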
Installing LangChain Providers
mcp_use works with various LLM providers through LangChain. You’ll need to install the appropriate LangChain provider package for your chosen LLM:
Tool Calling Required: Only models with tool calling capabilities can be used with mcp_use. Make sure your chosen model supports function calling or tool use.
For other providers, check the LangChain chat models documentation.
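For example, to use OpenAI, Anthropic, or Groq models, install the corresponding LangChain package (only the ones you plan to use):

```shell
# Install only the provider package(s) you need
pip install langchain-openai      # OpenAI models (e.g. gpt-4o)
pip install langchain-anthropic   # Anthropic models (Claude)
pip install langchain-groq        # Groq-hosted models
```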
Environment Setup
Set up your environment variables in a `.env` file for secure API key management:
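A minimal `.env` file might look like this; the variable names depend on which provider you installed, and the values shown are placeholders:

```
# .env — keep this file out of version control
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
```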
Your First Agent
Here’s a simple example to get you started:
Configuration Options
You can also load servers configuration from a config file:
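Assuming `MCPClient` exposes a `from_config_file` constructor, loading from a file looks like this:

```python
from mcp_use import MCPClient

# Load server definitions from a JSON config file instead of an inline dict
client = MCPClient.from_config_file("browser_mcp.json")
```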
Example configuration file (`browser_mcp.json`):
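The file uses the standard MCP `mcpServers` layout; the Playwright server shown here is just an example entry:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```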
For multi-server setups, tool restrictions, and advanced configuration options, see the Configuration Overview.
Available MCP Servers
mcp_use supports any MCP server. Check out the Awesome MCP Servers list for available options.
Streaming Agent Output
Stream agent responses as they’re generated:
Uses LangChain’s streaming API. See streaming documentation for more details.
Next Steps
Configuration
Complete configuration guide covering client setup and agent customization
LLM Integration
Discover all supported LLM providers and optimization tips
Examples
Explore real-world examples and use cases
Need Help? Join our community discussions on GitHub or check out the comprehensive examples in our repository!