mcp-use supports multiple approaches for streaming agent output, allowing you to receive incremental results, tool actions, and intermediate steps as they are generated by the agent.

Step-by-Step Streaming

The stream method provides a clean interface for receiving intermediate steps during agent execution. Each step represents a tool call and its result.
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'

async function stepStreamingExample() {
    // Setup agent
    const config = {
        mcpServers: {
            playwright: {
                command: 'npx',
                args: ['@playwright/mcp@latest']
            }
        }
    }

    const client = new MCPClient(config)
    const llm = new ChatOpenAI({ model: 'gpt-4' })
    const agent = new MCPAgent({ llm, client })

    // Stream the agent's steps
    console.log('🤖 Agent is working...')
    console.log('-'.repeat(50))

    for await (const step of agent.stream('Search for the latest Python news and summarize it')) {
        console.log(`\n🔧 Tool: ${step.action.tool}`)
        console.log(`📝 Input: ${JSON.stringify(step.action.toolInput)}`)
        const result = step.observation.substring(0, 100)
        console.log(`📄 Result: ${result}${step.observation.length > 100 ? '...' : ''}`)
    }

    console.log('\n🎉 Done!')
    await client.closeAllSessions()
}

stepStreamingExample().catch(console.error)

Low-Level Event Streaming

For more granular control, use the streamEvents method to get real-time output events:
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'

async function basicStreamingExample() {
    // Setup agent
    const config = {
        mcpServers: {
            playwright: {
                command: 'npx',
                args: ['@playwright/mcp@latest']
            }
        }
    }

    const client = new MCPClient(config)
    const llm = new ChatOpenAI({ model: 'gpt-4' })
    const agent = new MCPAgent({ llm, client })

    // Stream the agent's response
    console.log('Agent is working...')

    for await (const event of agent.streamEvents('Search for the latest Python news and summarize it')) {
        if (event.event === 'on_chat_model_stream') {
            // Stream LLM output token by token
            const text = event.data?.chunk?.text
            if (text) {
                process.stdout.write(text)
            }
        }
    }

    console.log('\n\nDone!')
    await client.closeAllSessions()
}

basicStreamingExample().catch(console.error)

The streaming API is based on LangChain’s streamEvents method. For more details on event types and data structures, see the LangChain streaming documentation.
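Beyond token chunks, you can filter for other event types. The sketch below assumes the standard LangChain v2 event names (on_tool_start, on_tool_end) and an agent constructed as in the examples above; the exact shape of event.data may vary by LangChain version.

for await (const event of agent.streamEvents('Search for the latest Python news and summarize it')) {
    switch (event.event) {
        case 'on_chat_model_stream': {
            // LLM output, token by token
            const text = event.data?.chunk?.text
            if (text) process.stdout.write(text)
            break
        }
        case 'on_tool_start':
            // A tool call is about to run; event.name is the tool name
            console.log(`\n🔧 Starting tool: ${event.name}`)
            break
        case 'on_tool_end':
            // The tool finished; its result is in event.data
            console.log(`\n✅ Finished tool: ${event.name}`)
            break
    }
}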

Choosing the Right Streaming Method

Use stream() when:

• You want to show step-by-step progress
• You need to process each tool call individually
• You’re building a workflow UI
• You want simple, clean step tracking

Use streamEvents() when:

• You need fine-grained control over events
• You’re building real-time chat interfaces
• You want to stream LLM reasoning text
• You need custom event filtering

Examples

Building a Streaming UI

Here’s an example of how you might build a simple console UI for streaming:
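The following is a minimal sketch that reuses the stream() interface from above; the numbered-step formatting and the query string are illustrative, not part of mcp-use.

import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'

async function streamingConsoleUI() {
    const config = {
        mcpServers: {
            playwright: {
                command: 'npx',
                args: ['@playwright/mcp@latest']
            }
        }
    }

    const client = new MCPClient(config)
    const llm = new ChatOpenAI({ model: 'gpt-4' })
    const agent = new MCPAgent({ llm, client })

    let stepNumber = 1
    console.log('🤖 Agent is working...\n')

    for await (const step of agent.stream('Search for the latest Python news and summarize it')) {
        // Render each tool call as a numbered step
        console.log(`Step ${stepNumber++}: ${step.action.tool}`)
        console.log(`  input:  ${JSON.stringify(step.action.toolInput)}`)
        const preview = step.observation.substring(0, 80).replace(/\n/g, ' ')
        console.log(`  result: ${preview}${step.observation.length > 80 ? '...' : ''}\n`)
    }

    console.log('🎉 Done!')
    await client.closeAllSessions()
}

streamingConsoleUI().catch(console.error)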

Web Streaming with Express

For web applications, you can stream agent output using Server-Sent Events:
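A minimal sketch, assuming Express as the web framework (not part of mcp-use); the /stream route, the q query parameter, and port 3000 are illustrative. Each token chunk from streamEvents is forwarded to the browser as an SSE data: line.

import express from 'express'
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'

const app = express()

app.get('/stream', async (req, res) => {
    // Standard Server-Sent Events headers
    res.setHeader('Content-Type', 'text/event-stream')
    res.setHeader('Cache-Control', 'no-cache')
    res.setHeader('Connection', 'keep-alive')

    const client = new MCPClient({
        mcpServers: {
            playwright: { command: 'npx', args: ['@playwright/mcp@latest'] }
        }
    })
    const llm = new ChatOpenAI({ model: 'gpt-4' })
    const agent = new MCPAgent({ llm, client })

    const query = String(req.query.q ?? 'Search for the latest Python news and summarize it')

    try {
        for await (const event of agent.streamEvents(query)) {
            if (event.event === 'on_chat_model_stream') {
                const text = event.data?.chunk?.text
                if (text) {
                    // One SSE message per token chunk
                    res.write(`data: ${JSON.stringify({ text })}\n\n`)
                }
            }
        }
        res.write('data: [DONE]\n\n')
    } finally {
        await client.closeAllSessions()
        res.end()
    }
})

app.listen(3000, () => console.log('Listening on http://localhost:3000'))

On the client, an EventSource pointed at /stream receives each data: payload as it arrives.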

Next Steps