This guide shows you how to configure and use multiple MCP servers simultaneously with mcp-use, enabling complex workflows that span different domains.

Overview

Using multiple MCP servers gives your agent access to a diverse set of tools from different sources. For example, you might want to combine:
  • Web scraping with Playwright + file operations with the filesystem server
  • Database queries with SQLite + API calls with an HTTP server
  • Code execution with a Python server + Git operations with the GitHub server
The MCPClient can manage multiple servers, and the optional ServerManager can dynamically select the appropriate server for each task.

Basic Multi-Server Configuration

Create a configuration file that defines multiple servers:
multi_server_config.json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": {
        "DISPLAY": ":1",
        "PLAYWRIGHT_HEADLESS": "true"
      }
    },
    "filesystem": {
      "command": "mcp-server-filesystem",
      "args": ["/safe/workspace/directory"],
      "env": {
        "FILESYSTEM_READONLY": "false"
      }
    },
    "sqlite": {
      "command": "mcp-server-sqlite",
      "args": ["--db", "/path/to/database.db"],
      "env": {
        "SQLITE_READONLY": "false"
      }
    },
    "github": {
      "command": "mcp-server-github",
      "args": ["--token", "${GITHUB_TOKEN}"],
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}

Using Multiple Servers

Basic Approach (Manual Server Selection)

import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient, loadConfigFile } from 'mcp-use'

async function main() {
    // Load multi-server configuration
    const config = await loadConfigFile('multi_server_config.json')
    const client = new MCPClient(config)

    // Create agent (all servers will be connected)
    const llm = new ChatOpenAI({ model: 'gpt-4' })
    const agent = new MCPAgent({ llm, client })

    // Agent has access to tools from all servers
    const result = await agent.run(
        'Search for Python tutorials online, save the best ones to a file, ' +
        'then create a database table to track my learning progress'
    )
    console.log(result)

    await client.closeAllSessions()
}

main().catch(console.error)

Advanced Approach (Server Manager)

Enable the server manager for more efficient resource usage:
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient, loadConfigFile } from 'mcp-use'

async function main() {
    const config = await loadConfigFile('multi_server_config.json')
    const client = new MCPClient(config)
    const llm = new ChatOpenAI({ model: 'gpt-4' })

    // Enable server manager for dynamic server selection
    const agent = new MCPAgent({
        llm,
        client,
        useServerManager: true,  // Only connects to servers as needed
        maxSteps: 30
    })

    // The agent will automatically choose appropriate servers
    const result = await agent.run(
        'Research the latest AI papers, summarize them in a markdown file, ' +
        'and commit the file to my research repository on GitHub'
    )
    console.log(result)

    await client.closeAllSessions()
}

main().catch(console.error)

Configuration Patterns

Web Scraping + Data Processing

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": {
        "PLAYWRIGHT_HEADLESS": "true"
      }
    },
    "pandas": {
      "command": "mcp-server-pandas",
      "args": ["--allow-file-access"],
      "env": {
        "PANDAS_SAFE_MODE": "true"
      }
    },
    "filesystem": {
      "command": "mcp-server-filesystem",
      "args": ["/data/workspace"]
    }
  }
}
Usage example:
const result = await agent.run(
    'Scrape product data from example-store.com, ' +
    'clean and analyze it with pandas, ' +
    'then save the results as CSV and Excel files'
)

Development Workflow

{
  "mcpServers": {
    "filesystem": {
      "command": "mcp-server-filesystem",
      "args": ["/home/user/projects"]
    },
    "github": {
      "command": "mcp-server-github",
      "args": ["--token", "${GITHUB_TOKEN}"]
    },
    "python": {
      "command": "mcp-server-python",
      "args": ["--safe-mode"]
    },
    "git": {
      "command": "mcp-server-git",
      "args": ["--repo-path", "/home/user/projects"]
    }
  }
}
Usage example:
const result = await agent.run(
    'Create a new Python function to calculate fibonacci numbers, ' +
    'write unit tests for it, run the tests, ' +
    'and if they pass, commit the changes to the current git branch'
)

Research and Documentation

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "arxiv": {
      "command": "mcp-server-arxiv",
      "args": ["--max-results", "10"]
    },
    "wikipedia": {
      "command": "mcp-server-wikipedia",
      "args": ["--language", "en"]
    },
    "filesystem": {
      "command": "mcp-server-filesystem",
      "args": ["/research/notes"]
    }
  }
}
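Usage example (the prompt is illustrative; what the agent can actually do depends on the tools exposed by the arxiv, wikipedia, and filesystem servers you have installed):
const result = await agent.run(
    'Find recent arXiv papers on retrieval-augmented generation, ' +
    'cross-check the key terms on Wikipedia, ' +
    'and save an annotated summary to my research notes directory'
)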

Managing Server Dependencies

Environment Variables

Use environment variables for sensitive information:
.env
GITHUB_TOKEN=ghp_...
DATABASE_URL=postgresql://user:pass@localhost/db
API_KEY=sk-...
WORKSPACE_PATH=/safe/workspace
Reference them in your configuration:
{
  "mcpServers": {
    "github": {
      "command": "mcp-server-github",
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    },
    "filesystem": {
      "command": "mcp-server-filesystem",
      "args": ["${WORKSPACE_PATH}"]
    }
  }
}
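If these variables live in a .env file, load them into process.env before the client is created. A minimal sketch, assuming the dotenv package is installed and that the ${VAR} placeholders are expanded from process.env as shown above:
import 'dotenv/config'  // populate process.env from .env (assumes the dotenv package)
import { MCPClient, loadConfigFile } from 'mcp-use'

// ${GITHUB_TOKEN} and ${WORKSPACE_PATH} in the config are resolved from process.env
const config = await loadConfigFile('multi_server_config.json')
const client = new MCPClient(config)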

Conditional Server Loading

You can conditionally include servers based on availability:
import { ChatOpenAI } from '@langchain/openai'
import { MCPClient, MCPAgent } from 'mcp-use'

async function createAgentWithAvailableServers() {
    const config: any = { mcpServers: {} }

    // Always include filesystem
    config.mcpServers.filesystem = {
        command: 'mcp-server-filesystem',
        args: ['/workspace']
    }

    // Include GitHub server if token is available
    if (process.env.GITHUB_TOKEN) {
        config.mcpServers.github = {
            command: 'mcp-server-github',
            env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN }
        }
    }

    // Include database server if URL is available
    if (process.env.DATABASE_URL) {
        config.mcpServers.postgres = {
            command: 'mcp-server-postgres',
            env: { DATABASE_URL: process.env.DATABASE_URL }
        }
    }

    const client = new MCPClient(config)
    return new MCPAgent({
        llm: new ChatOpenAI({ model: 'gpt-4' }),
        client
    })
}
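A usage sketch for the helper above (session cleanup omitted for brevity):
async function main() {
    const agent = await createAgentWithAvailableServers()
    const result = await agent.run('List the files in the workspace and summarize them')
    console.log(result)
}

main().catch(console.error)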

Performance Optimization

Server Manager Benefits

The server manager provides several performance benefits:
  • Lazy Loading: servers start only when a task actually needs their tools
  • Resource Management: idle servers are never launched, so they consume no memory or processes
  • Error Isolation: a failing server does not block the tools provided by the others
    // Without server manager - all servers start immediately
    const agent = new MCPAgent({ llm, client, useServerManager: false })
    // Result: all configured servers start immediately, consuming resources

    // With server manager - servers start only when needed
    const agentOptimized = new MCPAgent({ llm, client, useServerManager: true })
    // Result: Only the required servers start for each task

Tool Filtering

Control which tools are available to prevent confusion:
// Restrict to specific tool types
const agent = new MCPAgent({
    llm,
    client,
    allowedTools: ['file_read', 'file_write', 'web_search'],
    disallowedTools: ['system_exec', 'network_request']
})

// Or filter by server
const agentFiltered = new MCPAgent({
    llm,
    client,
    allowedServers: ['filesystem', 'playwright'],
    useServerManager: true
})

Troubleshooting Multi-Server Setups

Common Issues

If a server fails to start, check its logs and ensure all of its dependencies are installed:
    import { MCPClient, loadConfigFile, logger } from 'mcp-use'

    // Enable detailed logging
    logger.level = 'debug'
    const config = await loadConfigFile('config.json')
    const client = new MCPClient(config)
Different servers might provide tools with the same name:
    // Use server prefixes to avoid conflicts
    const agent = new MCPAgent({
        llm,
        client,
        useToolPrefixes: true  // Tools become "server_name.tool_name"
    })
Too many servers can slow down the agent:
    // Limit concurrent servers
    const agent = new MCPAgent({
        llm,
        client,
        useServerManager: true,
        maxConcurrentServers: 3
    })

Debug Configuration

Enable comprehensive debugging:
import { logger, MCPAgent, MCPClient, loadConfigFile } from 'mcp-use'
import { ChatOpenAI } from '@langchain/openai'

// Enable debug logging
logger.level = 'debug'

// Create client with debug mode
const config = await loadConfigFile('multi_server_config.json')
const client = new MCPClient(config)

const llm = new ChatOpenAI({ model: 'gpt-4' })

// Create agent with verbose output
const agent = new MCPAgent({
    llm,
    client,
    useServerManager: true,
    debug: true,
    verbose: true
})

Best Practices

Start Simple

Begin with 2-3 servers and add more as needed. Too many servers can overwhelm the LLM.

Use Server Manager

Enable useServerManager: true for better performance and resource management.

Environment Variables

Store sensitive configuration like API keys in environment variables, not config files.

Error Handling

Implement graceful degradation when servers are unavailable or fail.
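A minimal sketch of one way to do this, assuming you retry with a reduced, locally available configuration when the full setup fails (the helper name and fallback paths are illustrative):
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient, loadConfigFile } from 'mcp-use'

async function runWithFallback(prompt: string) {
    const llm = new ChatOpenAI({ model: 'gpt-4' })

    // First attempt: the full multi-server configuration
    const fullConfig = await loadConfigFile('multi_server_config.json')
    const fullClient = new MCPClient(fullConfig)
    try {
        const agent = new MCPAgent({ llm, client: fullClient, useServerManager: true })
        return await agent.run(prompt)
    } catch (error) {
        console.warn('Multi-server run failed, falling back to filesystem only:', error)
    } finally {
        await fullClient.closeAllSessions()
    }

    // Fallback: a single server that is most likely to be available locally
    const fallbackClient = new MCPClient({
        mcpServers: {
            filesystem: { command: 'mcp-server-filesystem', args: ['/workspace'] }
        }
    })
    try {
        const agent = new MCPAgent({ llm, client: fallbackClient })
        return await agent.run(prompt)
    } finally {
        await fallbackClient.closeAllSessions()
    }
}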

Next Steps