LLM Integration Guide

mcp_use supports integration with any Large Language Model (LLM) that is compatible with LangChain. This guide covers how to configure different LLM providers with mcp_use; any LangChain-supported model can be used.
Key Requirement: Your chosen LLM must support tool calling (also known as function calling) to work with MCP tools. Most modern LLMs support this feature.

Universal LLM Support

mcp_use leverages LangChain’s architecture to support any LLM that implements the LangChain interface. This means you can use virtually any model from any provider, including:

  • OpenAI: GPT-4, GPT-4o, GPT-3.5 Turbo
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  • Google: Gemini Pro, Gemini Flash, PaLM
  • Open Source: Llama, Mistral, CodeLlama via various providers

The sections below show a basic setup for several popular providers.

OpenAI
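
A minimal setup sketch using LangChain's @langchain/openai package; it assumes OPENAI_API_KEY is set in the environment and that client is an already-configured MCP client, as in the other snippets in this guide:

import { ChatOpenAI } from "@langchain/openai"

// Reads OPENAI_API_KEY from the environment
const llm = new ChatOpenAI({
    model: "gpt-4o",
    temperature: 0,
})

const agent = new MCPAgent({ llm, client })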

Anthropic Claude
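
The same pattern with the @langchain/anthropic package (the model id is an example; ANTHROPIC_API_KEY must be set):

import { ChatAnthropic } from "@langchain/anthropic"

// Reads ANTHROPIC_API_KEY from the environment
const llm = new ChatAnthropic({
    model: "claude-3-5-sonnet-latest",
    temperature: 0,
})

const agent = new MCPAgent({ llm, client })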

Google Gemini
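
A sketch using the @langchain/google-genai package (the model id is an example; GOOGLE_API_KEY must be set):

import { ChatGoogleGenerativeAI } from "@langchain/google-genai"

// Reads GOOGLE_API_KEY from the environment
const llm = new ChatGoogleGenerativeAI({
    model: "gemini-1.5-flash",
})

const agent = new MCPAgent({ llm, client })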

Groq (Fast Inference)
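
A sketch using the @langchain/groq package (the model id is an example; GROQ_API_KEY must be set):

import { ChatGroq } from "@langchain/groq"

// Reads GROQ_API_KEY from the environment
const llm = new ChatGroq({
    model: "llama-3.1-70b-versatile",
})

const agent = new MCPAgent({ llm, client })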

Local Models with Ollama
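
A sketch using the @langchain/ollama package against a locally running Ollama server; no API key is needed:

import { ChatOllama } from "@langchain/ollama"

const llm = new ChatOllama({
    model: "llama3.1",                  // any tool-calling model you have pulled
    baseUrl: "http://localhost:11434",  // Ollama's default endpoint
})

const agent = new MCPAgent({ llm, client })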

Model Requirements

Tool Calling Support

For MCP tools to work properly, your chosen model must support tool calling; most modern LLMs do.

Supported Models:
  • OpenAI: GPT-4, GPT-4o, GPT-3.5 Turbo
  • Anthropic: Claude 3+ series
  • Google: Gemini Pro, Gemini Flash
  • Groq: Llama 3.1, Mixtral models
  • Most recent open-source models
Not Supported:
  • Basic completion models without tool calling
  • Very old model versions
  • Models without function calling capabilities

Checking Tool Support

You can verify if a model supports tools:
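
In LangChain.js, chat models that support tool calling expose a bindTools method, so its presence is a reasonable runtime check. supportsToolCalling below is a hypothetical helper, not part of the mcp_use API:

// Heuristic: tool-capable LangChain.js chat models implement bindTools()
function supportsToolCalling(llm: unknown): boolean {
    return typeof (llm as { bindTools?: unknown }).bindTools === "function"
}

if (!supportsToolCalling(llm)) {
    throw new Error("Selected model does not appear to support tool calling")
}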

Model Configuration Tips

Temperature Settings

Different tasks benefit from different temperature settings:
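
As a rough guide, use low temperatures for precise, repeatable tool calls and higher ones for open-ended generation. The values below are illustrative, not hard rules:

// Illustrative settings; tune for your own workload
const toolLlm     = new ChatOpenAI({ model: "gpt-4o", temperature: 0 })    // deterministic tool calls
const summaryLlm  = new ChatOpenAI({ model: "gpt-4o", temperature: 0.3 })  // factual summaries
const creativeLlm = new ChatOpenAI({ model: "gpt-4o", temperature: 0.8 })  // brainstorming, drafting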

Model-Specific Parameters

Each provider has unique parameters you can configure:
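
A sketch of a few provider-specific options; the parameter names follow the corresponding LangChain.js integrations:

// OpenAI-specific knobs
const openaiLlm = new ChatOpenAI({
    model: "gpt-4o",
    maxTokens: 1024,  // cap completion length
    topP: 0.9,        // nucleus sampling
})

// Anthropic-specific knobs
const anthropicLlm = new ChatAnthropic({
    model: "claude-3-5-sonnet-latest",
    maxTokens: 1024,  // explicit output cap
})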

Cost Optimization

Choosing Cost-Effective Models

Consider your use case when selecting models:
Use Case              Recommended Models            Reason
Development/Testing   GPT-3.5 Turbo, Claude Haiku   Lower cost, good performance
Production/Complex    GPT-4o, Claude Sonnet         Best performance
High Volume           Groq models                   Fast inference, competitive pricing
Privacy/Local         Ollama models                 No API costs, data stays local

Token Management
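
The simplest lever here is capping output tokens per call, which bounds both cost and latency. A minimal sketch (maxTokens is the LangChain.js parameter name):

const llm = new ChatOpenAI({
    model: "gpt-4o-mini",
    maxTokens: 512,  // hard cap on each completion
})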

Environment Setup

Always use environment variables for API keys:
# .env file
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AI...
GROQ_API_KEY=gsk_...
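
In a Node.js project, one common approach is the dotenv package, loaded before any model is constructed (shown as a sketch):

import "dotenv/config"  // loads .env into process.env

// The LangChain integrations pick up their API keys from the environment
const llm = new ChatOpenAI({ model: "gpt-4o" })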

Advanced Integration

Custom Model Wrappers

You can create custom wrappers for specialized models:
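
One lightweight pattern, sketched below, is pointing the OpenAI integration at any OpenAI-compatible endpoint (vLLM, LM Studio, a private gateway); the URL and model id are placeholders:

import { ChatOpenAI } from "@langchain/openai"

const customLlm = new ChatOpenAI({
    model: "my-finetuned-model",               // placeholder model id
    apiKey: process.env.CUSTOM_API_KEY,
    configuration: {
        baseURL: "https://llm.example.com/v1", // placeholder OpenAI-compatible endpoint
    },
})

const agent = new MCPAgent({ llm: customLlm, client })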

Model Switching

Switch between models dynamically:
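
A sketch of one way to do it; createLlm and the tier names are illustrative, not part of mcp_use:

function createLlm(tier: string) {
    switch (tier) {
        case "fast":  return new ChatGroq({ model: "llama-3.1-8b-instant" })
        case "local": return new ChatOllama({ model: "llama3.1" })
        default:      return new ChatOpenAI({ model: "gpt-4o" })
    }
}

const agent = new MCPAgent({
    llm: createLlm(process.env.MODEL_TIER ?? "default"),
    client,
})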

Troubleshooting

Common Issues

  1. “Model doesn’t support tools”: Ensure your model supports function calling
  2. API key errors: Check environment variables and API key validity
  3. Rate limiting: Implement retry logic or use different models
  4. Token limits: Adjust maxTokens or use models with larger context windows

Debug Model Behavior

// Enable verbose logging to see model interactions
const agent = new MCPAgent({
    llm,
    client,
    verbose: true  // Shows detailed model interactions
})
For more LLM providers and detailed integration examples, visit the LangChain Chat Models documentation.