Quickstart Guide

This guide will get you started with mcp_use in under 5 minutes. We’ll cover installation, basic configuration, and running your first agent.

Installation

pip install mcp-use

Alternatively, installing from source gives you access to the latest features and examples.
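
A typical from-source install looks like this (the repository URL is an assumption; check the project's GitHub page):

git clone https://github.com/mcp-use/mcp-use.git
cd mcp-use
pip install -e .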

Installing LangChain Providers

mcp_use works with various LLM providers through LangChain. You’ll need to install the appropriate LangChain provider package for your chosen LLM:

pip install langchain-openai
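
The other providers referenced in the environment setup below have analogous packages, for example:

pip install langchain-anthropic
pip install langchain-groq
pip install langchain-google-genai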

Tool Calling Required: Only models with tool calling capabilities can be used with mcp_use. Make sure your chosen model supports function calling or tool use.

For other providers, check the LangChain chat models documentation.
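
Any tool-calling chat model can be dropped into the example below. For instance, with langchain-anthropic installed (the model name here is illustrative):

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")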

Environment Setup

Set up your environment variables in a .env file for secure API key management. Only the key for the provider you plan to use is required:

.env
OPENAI_API_KEY=your_api_key_here
ANTHROPIC_API_KEY=your_api_key_here
GROQ_API_KEY=your_api_key_here
GOOGLE_API_KEY=your_api_key_here

Your First Agent

Here’s a simple example to get you started:

import asyncio
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    # Load environment variables
    load_dotenv()

    # Create configuration dictionary
    config = {
      "mcpServers": {
        "playwright": {
          "command": "npx",
          "args": ["@playwright/mcp@latest"],
          "env": {
            "DISPLAY": ":1"
          }
        }
      }
    }

    # Create MCPClient from configuration dictionary
    client = MCPClient.from_dict(config)

    # Create LLM
    llm = ChatOpenAI(model="gpt-4o")

    # Create agent with the client
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    # Run the query
    result = await agent.run(
        "Find the best restaurant in San Francisco using Google Search",
    )
    print(f"\nResult: {result}")

if __name__ == "__main__":
    asyncio.run(main())
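
Save the script and run it (the filename here is illustrative). Note that the Playwright server is launched via npx, so Node.js must be available on your PATH:

python quickstart.py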

Configuration Options

You can also load the server configuration from a config file:

client = MCPClient.from_config_file("browser_mcp.json")

Example configuration file (browser_mcp.json):

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": {
        "DISPLAY": ":1"
      }
    }
  }
}

For multi-server setups, tool restrictions, and advanced configuration options, see the Configuration Overview.
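
As a sketch, a two-server configuration follows the same shape. The second entry here uses the reference filesystem server as an illustrative example (adjust the allowed directory path for your setup):

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": {
        "DISPLAY": ":1"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}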

Available MCP Servers

mcp_use supports any MCP server. Check out the Awesome MCP Servers list for available options.

Streaming Agent Output

Stream agent responses as they’re generated:

# Inside an async function, with `agent` created as above
async for chunk in agent.astream("your query here"):
    print(chunk, end="", flush=True)

This uses LangChain's streaming API; see the streaming documentation for more details.

Next Steps

Need Help? Join our community discussions on GitHub or check out the comprehensive examples in our repository!