
Using mcp-use with OpenAI

The OpenAI adapter allows you to seamlessly integrate tools, resources, and prompts from any MCP server with the OpenAI Python SDK. This enables you to use mcp-use as a comprehensive tool provider for your OpenAI-powered agents.

How it Works

The OpenAIMCPAdapter converts not only tools but also resources and prompts from your active MCP servers into a format compatible with OpenAI’s tool-calling feature. It maps each of these MCP constructs to a callable function that the OpenAI model can request.
  • Tools are converted directly to OpenAI functions.
  • Resources are converted into functions that take no arguments and read the resource’s content.
  • Prompts are converted into functions that accept the prompt’s arguments.
The adapter maintains a mapping of these generated functions to their actual execution logic, allowing you to easily call them when requested by the model.
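To make the mapping concrete, here is roughly what two generated entries might look like. The tool and resource names and their parameter fields below are illustrative, not taken from a real server; only the outer shape (OpenAI's function-calling format, with a JSON-Schema `parameters` object) is fixed by the OpenAI API.

```python
# Illustrative shape of one entry the adapter might produce for an MCP tool.
# The name, description, and parameters here are hypothetical.
airbnb_search_tool = {
    "type": "function",
    "function": {
        "name": "airbnb_search",
        "description": "Search Airbnb listings for a location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City to search in."},
                "adults": {"type": "integer", "description": "Number of guests."},
            },
            "required": ["location"],
        },
    },
}

# A resource, by contrast, surfaces as a zero-argument function:
listing_policies_resource = {
    "type": "function",
    "function": {
        "name": "read_listing_policies",
        "description": "Read the server's listing-policies resource.",
        "parameters": {"type": "object", "properties": {}},
    },
}
```

Prompts follow the same pattern as tools, with the prompt's declared arguments appearing in the `parameters` schema.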

Step-by-Step Guide

Here’s how to use the adapter to provide MCP tools, resources, and prompts to an OpenAI Chat Completion.
Before starting, install the OpenAI SDK:
uv pip install openai
Step 1

First, set up your MCPClient with the desired MCP servers. This part of the process is the same as any other mcp-use application.
from mcp_use import MCPClient

config = {
    "mcpServers": {
        "airbnb": {"command": "npx", "args": ["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"]},
    }
}

client = MCPClient(config=config)
Step 2

Next, instantiate the OpenAIMCPAdapter. This adapter will be responsible for converting MCP constructs into a format OpenAI can understand.
from mcp_use.adapters import OpenAIMCPAdapter

# Creates the adapter for OpenAI's format
adapter = OpenAIMCPAdapter()
You can pass a disallowed_tools list to the adapter’s constructor to prevent specific tools, resources, or prompts from being exposed to the model.
Step 3

Use the create_all method on the adapter to inspect all connected MCP servers and generate a list of tools, resources, and prompts in the OpenAI function-calling format.
# Convert tools from the active connectors to OpenAI's format.
# This populates the adapter's lists of tools, resources, and prompts.
await adapter.create_all(client)

# If you created all three, combine them with simple list concatenation
openai_tools = adapter.tools + adapter.resources + adapter.prompts
This list will include functions generated from your MCP tools, resources, and prompts.
If you don't want to create everything, you can call the individual creation methods instead. For example, to use only tools and resources:
await adapter.create_tools(client)
await adapter.create_resources(client)

# Then, you can decide which ones to use:
openai_tools = adapter.tools + adapter.resources
Step 4

Now, you can use the generated openai_tools in a call to the OpenAI API. The model will use the descriptions of these tools to decide if it needs to call any of them to answer the user’s query.
from openai import OpenAI

openai = OpenAI()
messages = [
    {"role": "user", "content": "Please tell me the cheapest hotel for two people in Trapani."}
]

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=openai_tools
)

response_message = response.choices[0].message
messages.append(response_message)
Step 5

If the model decides to use one or more tools, the response_message will contain tool_calls. (If tool_calls is empty, the model answered directly and response_message.content is your final answer.) You need to iterate through these calls, execute the corresponding functions, and append the results to your message history. The OpenAIMCPAdapter makes this easy by providing a tool_executors dictionary and a parse_result method.
import json

# Handle the tool calls (Tools, Resources, Prompts...)
for tool_call in response_message.tool_calls:
    function_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)

    # 1. Use the adapter's map to get the correct executor
    executor = adapter.tool_executors.get(function_name)

    if not executor:
        content = f"Error: Tool '{function_name}' not found."
    else:
        try:
            # 2. Execute the tool using the retrieved function
            print(f"Executing tool: {function_name}({arguments})")
            tool_result = await executor(**arguments)

            # 3. Use the adapter's universal parser
            content = adapter.parse_result(tool_result)
        except Exception as e:
            content = f"Error executing tool: {e}"

    # 4. Append the result for this specific tool call
    messages.append(
        {
            "tool_call_id": tool_call.id,
            "role": "tool",
            "name": function_name,
            "content": content
        }
    )
The adapter.parse_result(tool_result) method simplifies the process by correctly formatting the output, whether it’s from a standard tool, a resource, or a prompt.
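For intuition, a simplified sketch of what such a universal parser might do, assuming MCP-style results expose a `content` list of parts with a `text` attribute. This is not the adapter's actual implementation; `parse_result_sketch` and the stand-in objects are hypothetical.

```python
from types import SimpleNamespace


def parse_result_sketch(result):
    """Illustrative only: flatten an MCP-style result whose `content`
    attribute is a list of parts, keeping the text of each text part."""
    parts = getattr(result, "content", None)
    if parts is None:
        # Plain values (e.g. a string) pass through as text.
        return str(result)
    texts = [p.text for p in parts if getattr(p, "text", None) is not None]
    return "\n".join(texts) if texts else str(parts)


# A stand-in for a CallToolResult-like object:
fake_result = SimpleNamespace(content=[SimpleNamespace(text="Hotel A: $80/night")])
print(parse_result_sketch(fake_result))
```

Whatever the adapter does internally, the point is the same: the model only ever sees a string in the tool message's `content` field.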
Step 6

Finally, send the updated message history, which now includes the tool-call results, back to the model. This allows the model to use the information gathered from the tools to formulate its final answer.
second_response = openai.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=openai_tools
)

final_message = second_response.choices[0].message
print("\n--- Final response from the model ---")
print(final_message.content)
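Steps 4 through 6 cover a single round of tool calls. If the model chains tools across several turns, the same call/execute cycle can simply be repeated until no tool_calls remain. A minimal sketch of that control flow, with a `call_model` callable standing in for the OpenAI client (the function and parameter names here are hypothetical):

```python
import asyncio
import json


async def run_tool_loop(call_model, executors, parse_result, messages, max_rounds=5):
    """Repeat the model-call / tool-execution cycle until the model
    stops requesting tools (or max_rounds is reached)."""
    message = None
    for _ in range(max_rounds):
        message = call_model(messages)
        messages.append(message)
        if not getattr(message, "tool_calls", None):
            return message  # the model answered directly
        for tool_call in message.tool_calls:
            name = tool_call.function.name
            args = json.loads(tool_call.function.arguments)
            executor = executors.get(name)
            if executor is None:
                content = f"Error: Tool '{name}' not found."
            else:
                content = parse_result(await executor(**args))
            messages.append({"tool_call_id": tool_call.id, "role": "tool",
                             "name": name, "content": content})
    return message
```

With the real SDK, `call_model` would wrap `openai.chat.completions.create(...)` and return `response.choices[0].message`, while `executors` and `parse_result` would be `adapter.tool_executors` and `adapter.parse_result`.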

Complete Example

For reference, here is the complete, runnable code for integrating mcp-use with the OpenAI SDK.
import asyncio

from dotenv import load_dotenv
from openai import OpenAI

from mcp_use import MCPClient
from mcp_use.adapters import OpenAIMCPAdapter

# This example demonstrates how to use the integration adapters
# to convert MCP tools into the format a given SDK expects.
# In particular, it uses the OpenAIMCPAdapter.

load_dotenv()


async def main():
    config = {
        "mcpServers": {
            "airbnb": {"command": "npx", "args": ["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"]},
        }
    }

    try:
        client = MCPClient(config=config)

        # Creates the adapter for OpenAI's format
        adapter = OpenAIMCPAdapter()

        # Convert tools from the active connectors to OpenAI's format.
        # This populates the adapter's lists of tools, resources, and prompts.
        await adapter.create_all(client)

        # If you don't want to create everything, call the individual methods instead
        # await adapter.create_tools(client)
        # await adapter.create_resources(client)
        # await adapter.create_prompts(client)

        # If you created all three, combine them with simple list concatenation
        openai_tools = adapter.tools + adapter.resources + adapter.prompts

        # Use the tools with OpenAI's SDK directly (no agent in this case)
        openai = OpenAI()
        messages = [{"role": "user", "content": "Please tell me the cheapest hotel for two people in Trapani."}]
        response = openai.chat.completions.create(model="gpt-4o", messages=messages, tools=openai_tools)

        response_message = response.choices[0].message
        messages.append(response_message)
        if not response_message.tool_calls:
            print("No tool call requested by the model")
            print(response_message.content)
            return

        import json

        # Handle the tool calls (Tools, Resources, Prompts...)
        for tool_call in response_message.tool_calls:
            function_name = tool_call.function.name
            arguments = json.loads(tool_call.function.arguments)

            # Use the adapter's map to get the correct executor
            executor = adapter.tool_executors.get(function_name)

            if not executor:
                print(f"Error: Unknown tool '{function_name}' requested by model.")
                content = f"Error: Tool '{function_name}' not found."
            else:
                try:
                    # Execute the tool using the retrieved function
                    print(f"Executing tool: {function_name}({arguments})")
                    tool_result = await executor(**arguments)

                    # Use the adapter's universal parser
                    content = adapter.parse_result(tool_result)
                except Exception as e:
                    print(f"An unexpected error occurred while executing tool {function_name}: {e}")
                    content = f"Error executing tool: {e}"

            # Append the result for this specific tool call
            messages.append({"tool_call_id": tool_call.id, "role": "tool", "name": function_name, "content": content})

        # Send the tool result back to the model
        second_response = openai.chat.completions.create(model="gpt-4o", messages=messages, tools=openai_tools)
        final_message = second_response.choices[0].message
        print("\n--- Final response from the model ---")
        print(final_message.content)

    except Exception as e:
        print(f"Error: {e}")


if __name__ == "__main__":
    asyncio.run(main())