Agent Configuration

This guide covers MCPAgent configuration options for customizing agent behavior and LLM integration. For client configuration, see the Client Configuration guide.

API Keys

Since agents use LLM providers that require API keys, you need to configure them properly. Never hardcode API keys in your source code; always load them from environment variables:
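For example, a small helper that fails fast when a key is missing. This is plain Node/TypeScript with no mcp-use APIs involved; `requireEnv` is an illustrative helper, not part of the library:

```typescript
// Read an API key from the environment instead of hardcoding it.
// requireEnv is a hypothetical helper shown for illustration.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

// Usage in your entry point (after loading .env with dotenv, if you use one):
// const llm = new ChatOpenAI({ model: 'gpt-4o', apiKey: requireEnv('OPENAI_API_KEY') })
```

Failing fast at startup gives a clear error instead of an opaque authentication failure deep inside an agent run.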

Agent Parameters

When creating an MCPAgent, you can configure several parameters to customize its behavior:
import { MCPAgent, MCPClient, loadConfigFile } from 'mcp-use'
import { ChatOpenAI } from '@langchain/openai'

// Basic configuration
const config = await loadConfigFile('config.json')
const agent = new MCPAgent({
    llm: new ChatOpenAI({ model: 'gpt-4o', temperature: 0.7 }),
    client: new MCPClient(config),
    maxSteps: 30
})

// Advanced configuration
const advancedAgent = new MCPAgent({
    llm: new ChatOpenAI({ model: 'gpt-4o', temperature: 0.7 }),
    client: new MCPClient(config),
    maxSteps: 30,
    serverName: undefined,
    autoInitialize: true,
    memoryEnabled: true,
    systemPrompt: 'Custom instructions for the agent',
    additionalInstructions: 'Additional guidelines for specific tasks',
    disallowedTools: ['file_system', 'network', 'shell']  // Restrict potentially dangerous tools
})

Available Parameters

  • llm: Any LangChain-compatible language model (required)
  • client: The MCPClient instance (optional if connectors are provided)
  • connectors: List of connectors to use instead of a client (optional)
  • serverName: Name of the server to use (optional)
  • maxSteps: Maximum number of steps the agent can take (default: 5)
  • autoInitialize: Whether to initialize automatically (default: false)
  • memoryEnabled: Whether to enable conversation memory (default: true)
  • systemPrompt: Custom system prompt (optional)
  • systemPromptTemplate: Custom system prompt template (optional)
  • additionalInstructions: Additional instructions for the agent (optional)
  • disallowedTools: List of tool names that should not be available to the agent (optional)
  • useServerManager: Enable dynamic server selection (default: false)

Tool Access Control

You can restrict which tools are available to the agent for security or to limit its capabilities. Here’s a complete example showing how to set up an agent with restricted tool access:
import { config } from 'dotenv'
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'

async function main() {
    // Load environment variables
    config()

    // Create configuration object
    const configuration = {
        mcpServers: {
            playwright: {
                command: 'npx',
                args: ['@playwright/mcp@latest'],
                env: {
                    DISPLAY: ':1'
                }
            }
        }
    }

    // Create MCPClient from configuration object
    const client = new MCPClient(configuration)

    // Create LLM
    const llm = new ChatOpenAI({ model: 'gpt-4o' })

    // Create agent with restricted tools
    const agent = new MCPAgent({
        llm,
        client,
        maxSteps: 30,
        disallowedTools: ['file_system', 'network']  // Restrict potentially dangerous tools
    })

    // Run the query
    const result = await agent.run(
        'Find the best restaurant in San Francisco USING GOOGLE SEARCH'
    )
    console.log(`\nResult: ${result}`)

    await client.closeAllSessions()
}

main().catch(console.error)
You can also manage tool restrictions dynamically:
// Update restrictions after initialization
agent.setDisallowedTools(['file_system', 'network', 'shell', 'database'])
await agent.initialize()  // Reinitialize to apply changes

// Check current restrictions
const restrictedTools = agent.getDisallowedTools()
console.log(`Restricted tools: ${restrictedTools}`)
This feature is useful for:
  • Restricting access to sensitive operations
  • Limiting agent capabilities for specific tasks
  • Preventing the agent from using potentially dangerous tools
  • Focusing the agent on specific functionality

Working with Adapters Directly

If you want more control over how tools are created, you can work with the adapters directly. The BaseAdapter class provides a unified interface for converting MCP tools to various framework formats, with LangChainAdapter being the most commonly used implementation. The adapter pattern makes it easy to:
  1. Create tools directly from an MCPClient
  2. Filter or customize which tools are available
  3. Integrate with different agent frameworks
Benefits of Direct Adapter Usage:
  • Flexibility: More control over tool creation and management
  • Custom Integration: Easier to integrate with existing LangChain workflows
  • Advanced Filtering: Apply custom logic to tool selection and configuration
  • Framework Agnostic: Potential for future adapters to other frameworks
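As a sketch of direct adapter usage: the example below assumes mcp-use exports a LangChainAdapter with a createTools(client) method, mirroring the pattern described above; check the adapter API surface in your installed version before relying on these exact names.

```typescript
import { MCPClient, LangChainAdapter, loadConfigFile } from 'mcp-use'

// Assumed API: LangChainAdapter#createTools(client) returns LangChain tools.
const config = await loadConfigFile('config.json')
const client = new MCPClient(config)

const adapter = new LangChainAdapter()
const tools = await adapter.createTools(client)

// Apply custom filtering logic before handing tools to an agent framework.
const safeTools = tools.filter(tool => !['shell', 'file_system'].includes(tool.name))
```

The filtered tool list can then be passed to any LangChain agent constructor, which is what makes this path useful for integrating MCP tools into existing LangChain workflows.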

Server Manager

The Server Manager is an agent-level feature that enables dynamic server selection for improved performance with multi-server setups.

Enabling Server Manager

To improve efficiency and reduce agent confusion when many tools are available, you can enable the Server Manager by setting useServerManager: true when creating the MCPAgent.
// Enable server manager for automatic server selection
const agent = new MCPAgent({
    llm,
    client,
    useServerManager: true  // Enable dynamic server selection
})

How It Works

When enabled, the agent will automatically select the appropriate server based on the tool chosen by the LLM for each step. This avoids connecting to unnecessary servers and can improve performance with large numbers of available servers.
// Multi-server setup with server manager
const config = await loadConfigFile('multi_server_config.json')
const client = new MCPClient(config)
const agent = new MCPAgent({
    llm,
    client,
    useServerManager: true
})

// The agent automatically selects servers based on tool usage
const result = await agent.run(
    'Search for a place in Barcelona on Airbnb, then Google nearby restaurants.'
)

Benefits

  • Performance: Only connects to servers when their tools are actually needed
  • Reduced Confusion: Agents work better with focused tool sets rather than many tools at once
  • Resource Efficiency: Saves memory and connection overhead
  • Automatic Selection: No need to manually specify serverName for most use cases
  • Scalability: Better performance with large numbers of servers

When to Use

  • Multi-server environments: Essential for setups with 3+ servers
  • Resource-constrained environments: When memory or connection limits are a concern
  • Complex workflows: When agents need to dynamically choose between different tool categories
  • Production deployments: For better resource management and performance
For more details on server manager implementation, see the Server Manager guide.

Memory Configuration

MCPAgent supports conversation memory to maintain context across interactions:
// Enable memory (default)
const agent = new MCPAgent({
    llm,
    client,
    memoryEnabled: true
})

// Disable memory for stateless interactions
const statelessAgent = new MCPAgent({
    llm,
    client,
    memoryEnabled: false
})

System Prompt Customization

You can customize the agent’s behavior through system prompts:

Custom System Prompt

const customPrompt = `
You are a helpful assistant specialized in data analysis.
Always provide detailed explanations for your reasoning.
When working with data, prioritize accuracy over speed.
`

const agent = new MCPAgent({
    llm,
    client,
    systemPrompt: customPrompt
})

Additional Instructions

Add task-specific instructions without replacing the base system prompt:
const agent = new MCPAgent({
    llm,
    client,
    additionalInstructions: 'Focus on finding recent information from the last 6 months.'
})

System Prompt Templates

For more advanced customization, you can provide a custom system prompt template:
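The exact template format depends on your mcp-use version; as a hedged sketch, assuming systemPromptTemplate accepts a string containing a placeholder that the agent fills with generated tool descriptions:

```typescript
// The {tool_descriptions} placeholder name is an assumption —
// consult the mcp-use source for the exact variables supported.
const template = `
You are an expert research assistant.
You have access to the following tools:
{tool_descriptions}
Always state which tool produced each finding.
`

const agent = new MCPAgent({
    llm,
    client,
    systemPromptTemplate: template
})
```

Unlike systemPrompt, which replaces the prompt wholesale, a template lets you keep the agent's generated tool context while reshaping the surrounding instructions.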

Performance Configuration

Configure agent performance characteristics:
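The parameters with the largest performance impact are maxSteps, useServerManager, and memoryEnabled; a minimal sketch combining them:

```typescript
const agent = new MCPAgent({
    llm,
    client,
    maxSteps: 15,           // lower step budget = faster, cheaper runs
    useServerManager: true, // connect to servers lazily, only when needed
    memoryEnabled: false    // skip history bookkeeping for one-shot queries
})
```

Tune maxSteps to the complexity of your tasks: too low and multi-step workflows get cut off, too high and a confused agent can loop expensively.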

Debugging Configuration

Enable debugging features during development:
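The sketch below assumes the TypeScript MCPAgent accepts a verbose option, as the Python version of mcp-use does; verify this flag exists in your installed version.

```typescript
// 'verbose' is an assumption carried over from the Python API —
// check your mcp-use version's MCPAgent options.
const agent = new MCPAgent({
    llm,
    client,
    verbose: true  // log each step, tool call, and tool result
})
```

Verbose step-by-step output is the quickest way to see which tool the LLM chose and what arguments it passed when a run misbehaves.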

Agent Initialization

Control when and how the agent initializes:
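Using the autoInitialize parameter and the initialize() method shown earlier, the options look like this:

```typescript
// Option 1: initialize automatically when the agent is created
const eagerAgent = new MCPAgent({ llm, client, autoInitialize: true })

// Option 2: initialize explicitly, so startup failures surface
// where you can handle them, before the first run()
const agent = new MCPAgent({ llm, client })
await agent.initialize()
```

Explicit initialization is useful in servers and long-running processes, where you want connection errors at startup rather than on the first user request.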

Error Handling

Configure how the agent handles errors:
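mcp-use does not need a special mechanism here: agent.run() returns a promise, so standard try/catch plus a timeout wrapper works. The withTimeout helper below is plain TypeScript written for this guide, not part of the library:

```typescript
// Generic timeout wrapper (illustrative helper, not an mcp-use API).
async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms)
  })
  try {
    // Whichever settles first wins; the timer is cleared either way.
    return await Promise.race([promise, timeout])
  } finally {
    clearTimeout(timer)
  }
}

// Usage:
// try {
//   const result = await withTimeout(agent.run('Summarize today\'s news'), 60_000)
// } catch (err) {
//   console.error('Agent run failed or timed out:', err)
// } finally {
//   await client.closeAllSessions()
// }
```

Closing sessions in a finally block ensures server processes are cleaned up even when a run fails.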

Best Practices

  1. LLM Selection: Use models with tool calling capabilities
  2. Step Limits: Set a reasonable maxSteps value to prevent runaway execution
  3. Tool Restrictions: Use disallowedTools for security
  4. Memory Management: Disable memory for stateless use cases
  5. Server Manager: Enable for multi-server setups
  6. System Prompts: Customize for domain-specific tasks
  7. Error Handling: Implement proper timeout and retry logic
  8. Testing: Test agent configurations in development environments

Common Issues

  1. No Tools Available: Check client configuration and server connections
  2. Tool Execution Failures: Enable verbose logging and check tool arguments
  3. Memory Issues: Disable memory or limit concurrent servers
  4. Timeout Errors: Increase maxSteps or agent timeout values
For detailed troubleshooting, see the Common Issues guide.