Why the MCP Inspector Runs Entirely in Your Browser
Most developer tools require a backend. VS Code’s debugger runs in your editor. Postman spins up a local server. Database GUIs install native applications. But the mcp-use Inspector? It’s just a web page. Open http://localhost:3000/inspector, enter your MCP server URL, and start debugging—no installation, no backend, no proxy server. Everything runs in your browser.
This article explains why we built it this way, the technical challenges we solved, and how you can build client-side MCP applications yourself.
Why Client-Side Matters for Developer Tools
The Traditional Approach: Backend-Dependent
Most API debugging tools follow this architecture:
- Postman: Desktop app or cloud service
- Swagger UI: Requires CORS proxy for external APIs
- GraphQL Playground: Server-side rendering

This backend dependency has real costs:
- Installation friction: Download, install, update
- CORS limitations: Can’t call localhost from deployed tools
- Security concerns: Backend sees all your API keys
- Deployment complexity: Need to host the proxy somewhere
- Network overhead: Every request goes through an extra hop
The Client-Side Approach: Direct Connection
The mcp-use Inspector takes a different approach:
- Zero installation: Just open a URL
- Local-first: Debug `localhost` servers directly from your browser
- Private by default: Your API keys never leave your machine
- Instant deployment: `npm run build` → static files → deploy anywhere
- Offline-capable: Once loaded, works without internet
Challenge 1: MCP Protocol in the Browser
The Problem
The MCP SDK is designed for Node.js environments. It uses:
- Node’s `http` module for connections
- File system APIs for stdio transport
- Process spawning for local servers
- Native modules that don’t work in browsers

Some of this can be worked around with fetch-based transports. But there’s a deeper issue: the architecture assumes you control both ends (client and server).
The Solution: BrowserMCPClient
We built a browser-native MCP client that works with any MCP server:

| Feature | Official SDK | BrowserMCPClient |
|---|---|---|
| Transports | stdio, SSE, HTTP | HTTP, WebSocket, SSE |
| Environment | Node.js only | Browser + Node.js |
| File system | Required for stdio | Not used |
| Process spawning | Native modules | Fetch API only |
| Bundle size | ~200KB (with Node deps) | ~50KB (browser-only) |
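To make the approach concrete, here is a minimal sketch of the browser-native request path: plain JSON-RPC over `fetch`, with no Node.js APIs anywhere. It is illustrative only; the real `BrowserMCPClient` also handles sessions, notifications, and streaming responses.

```typescript
// Illustrative sketch only: the real BrowserMCPClient also handles sessions,
// notifications, and streaming. The core idea is MCP's JSON-RPC over fetch.
type JsonRpcResponse = {
  jsonrpc: "2.0";
  id: number;
  result?: unknown;
  error?: { code: number; message: string };
};

class MinimalBrowserMcpClient {
  private nextId = 1;

  constructor(
    private serverUrl: string,
    private headers: Record<string, string> = {},
  ) {}

  async request(method: string, params: unknown = {}): Promise<unknown> {
    const res = await fetch(this.serverUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json", ...this.headers },
      body: JSON.stringify({ jsonrpc: "2.0", id: this.nextId++, method, params }),
    });
    const body = (await res.json()) as JsonRpcResponse;
    if (body.error) throw new Error(`MCP error ${body.error.code}: ${body.error.message}`);
    return body.result;
  }
}

// Usage: list tools on a local MCP server straight from the browser.
const client = new MinimalBrowserMcpClient("http://localhost:3000/mcp");
const tools = await client.request("tools/list");
```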
HTTP Connector Implementation
The `HttpConnector` uses browser-native fetch with automatic fallback:
- Try Streamable HTTP (efficient, bidirectional)
- Fall back to SSE (widely supported, server-push)
- Future: WebSocket for real-time bidirectional
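A hedged sketch of that fallback order, assuming a Streamable HTTP endpoint that accepts POSTs and an SSE endpoint at `/sse` (both assumptions; the real connector's probing logic may differ):

```typescript
// Hedged sketch of the fallback order above; the real HttpConnector's probing
// logic may differ, and the /sse path is an assumption.
async function connectWithFallback(serverUrl: string): Promise<"streamable-http" | "sse"> {
  try {
    // 1. Probe Streamable HTTP with an initialize request.
    //    (params simplified; a real client sends protocolVersion, capabilities, etc.)
    const res = await fetch(serverUrl, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Accept: "application/json, text/event-stream",
      },
      body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "initialize", params: {} }),
    });
    if (res.ok) return "streamable-http";
  } catch {
    // Network or CORS failure: fall through to SSE.
  }

  // 2. Fall back to SSE; EventSource gives us server-push with auto-reconnect.
  const events = new EventSource(`${serverUrl}/sse`);
  await new Promise<void>((resolve, reject) => {
    events.onopen = () => resolve();
    events.onerror = () => reject(new Error("SSE connection failed"));
  });
  return "sse";
}
```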
Challenge 2: Authentication Without Exposing Secrets
The Problem
When debugging an MCP server that requires authentication, existing tools force a tradeoff:
- Option A: Send credentials to their servers (Postman cloud)
- Option B: Install a local proxy (Postman desktop app)
- Option C: Disable CORS (insecure)
The Solution: Client-Side Header Management
The inspector stores and sends auth headers directly from the browser:
- ✅ API keys stay in browser memory (never sent to mcp-use servers)
- ✅ Headers included in every MCP request automatically
- ✅ Supports Bearer tokens, Basic auth, custom headers
- ✅ OAuth tokens stored in `localStorage` (with user consent)
- ✅ Can connect to `localhost` servers (no CORS proxy needed)
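In sketch form (illustrative, not the inspector's exact code), the pattern looks like this: headers live in a module-level map in memory and are merged into every outgoing request.

```typescript
// Illustrative sketch, not the inspector's exact code: auth headers live in a
// module-level map in browser memory and are merged into every MCP request.
const authHeaders = new Map<string, Record<string, string>>();

// Memory only: credentials vanish when the tab closes (unless the user
// explicitly opts in to persistence).
function setServerAuth(serverUrl: string, headers: Record<string, string>): void {
  authHeaders.set(serverUrl, headers);
}

async function mcpFetch(serverUrl: string, body: unknown): Promise<Response> {
  return fetch(serverUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...(authHeaders.get(serverUrl) ?? {}), // auth attached automatically
    },
    body: JSON.stringify(body),
  });
}

// Usage: a Bearer token that never leaves this browser tab.
setServerAuth("http://localhost:3000/mcp", { Authorization: "Bearer my-token" });
```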
Challenge 3: Running an AI Agent in the Browser
The Problem
The chat feature uses `MCPAgent` to orchestrate LLM + MCP tools. Traditional agents run server-side because they need:
- Heavy dependencies (LangChain, OpenAI SDK, Anthropic SDK)
- API keys for LLMs (OpenAI, Anthropic, Google)
- Persistent state across multiple turns
- Access to MCP tools via authenticated connections
The Solution: Client-Side Agent with Smart Bundling
We made `MCPAgent` fully browser-compatible, loading LLM SDKs only when needed (see the sketch after this list):
- Base bundle: ~600KB (React, UI, MCP client)
- OpenAI SDK: +150KB (only if user selects OpenAI)
- Anthropic SDK: +130KB (only if user selects Anthropic)
- Total worst case: ~880KB (vs 2MB+ if bundling everything)
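A hypothetical sketch of what that means in practice: the heavy LLM SDK is imported only after the user picks a provider, and the API key is supplied by the user at runtime rather than read from a server-side env var. Option names follow recent LangChain releases; the model names are placeholders.

```typescript
// Hypothetical sketch: the LLM SDK loads only after the user picks a provider,
// and the API key comes from user input, never from a build or server env var.
// Constructor options follow recent LangChain releases; model names are placeholders.
async function createChatModel(provider: "openai" | "anthropic", apiKey: string) {
  if (provider === "openai") {
    const { ChatOpenAI } = await import("@langchain/openai"); // +~150KB, on demand
    return new ChatOpenAI({ apiKey, model: "gpt-4o-mini" });
  }
  const { ChatAnthropic } = await import("@langchain/anthropic"); // +~130KB, on demand
  return new ChatAnthropic({ apiKey, model: "claude-3-5-sonnet-latest" });
}
```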
Challenge 4: Sharing Connections Across Features
The Problem
The inspector has multiple features that all need MCP access:
- Tools Tab: Call tools with arguments, see results
- Resources Tab: Read resource URIs, display content
- Prompts Tab: Get prompt templates, fill arguments
- Chat Tab: Stream AI agent responses, execute tools
The Solution: Single Connection via React Hook
The `useMcp` hook provides one connection for all features:
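The real hook is more involved, but the shape of the pattern is a single client held in React context, as in this sketch (the provider component is elided):

```typescript
// Sketch of the shared-connection pattern; the real useMcp hook's shape may
// differ. One client lives in React context, and every tab consumes it.
import { createContext, useContext } from "react";

type McpClient = {
  request: (method: string, params?: unknown) => Promise<unknown>;
};

const McpContext = createContext<McpClient | null>(null);

export function useMcp(): McpClient {
  const client = useContext(McpContext);
  if (!client) throw new Error("useMcp must be used inside an McpContext provider");
  return client;
}

// Tools, Resources, Prompts, and Chat all reuse the same connection:
//   const mcp = useMcp();
//   const tools = await mcp.request("tools/list");
```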
Challenge 5: CORS and Same-Origin Policy
The Problem
Browsers block cross-origin requests by default. When you try to connect from https://inspector.mcp-use.com to http://localhost:3000, the browser rejects the request with a CORS error. The usual workarounds all have problems:
- Browser extensions (require installation, permissions)
- CORS proxy (defeats the purpose of client-side)
- Disable security (terrible idea)
The Solution: Self-Hosted Inspector
The inspector is designed to be hosted alongside your MCP server:
- ✅ Same origin → No CORS issues
- ✅ Direct access to `localhost` servers
- ✅ Auth headers pass through naturally
- ✅ Framework routes auto-excluded from auth middleware
- ✅ One `npm install` gets both server + inspector
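As a sketch of that layout using plain Express (the actual mcp-use server wiring differs; the routes and static path here are assumptions for illustration):

```typescript
// Hedged sketch of the same-origin layout using plain Express; the actual
// mcp-use server wiring differs, and the paths here are assumptions.
import express from "express";

const app = express();
app.use(express.json());

// The MCP endpoint (JSON-RPC handler elided).
app.post("/mcp", (req, res) => {
  res.json({ jsonrpc: "2.0", id: req.body.id, result: {} });
});

// The inspector: static files served from the same origin, so the browser
// never makes a cross-origin request and CORS never comes up.
app.use("/inspector", express.static("path/to/inspector/dist"));

app.listen(3000, () => {
  console.log("Server + inspector on http://localhost:3000");
});
```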
Challenge 6: OAuth in a Static Web App
The Problem
Many MCP servers (GitHub, Linear, Google APIs) use OAuth for authentication. OAuth requires:
- Redirect to authorization server
- User grants permission
- Redirect back to your app with auth code
- Exchange code for access token
- Use token in API requests
The Solution: Browser-Based OAuth with Dynamic Client Registration
We implemented a complete OAuth flow in the browser:
- Tokens stored in `localStorage` (sandboxed per origin)
- PKCE support for public clients (mitigates code interception)
- Token refresh handled automatically
- Tokens cleared on disconnect
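The PKCE leg can be done entirely with standard browser APIs. Here is a sketch of the first step of the flow using Web Crypto for the challenge; this is the textbook flow, not necessarily the inspector's exact implementation.

```typescript
// PKCE using only standard browser APIs (Web Crypto, localStorage).
function base64UrlEncode(bytes: Uint8Array): string {
  return btoa(String.fromCharCode(...Array.from(bytes)))
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

async function startOAuth(authorizeUrl: string, clientId: string, redirectUri: string) {
  // 1. Random verifier plus its SHA-256 challenge.
  const verifier = base64UrlEncode(crypto.getRandomValues(new Uint8Array(32)));
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(verifier));
  const challenge = base64UrlEncode(new Uint8Array(digest));

  // 2. Keep the verifier for the callback page (localStorage is sandboxed per origin).
  localStorage.setItem("pkce_verifier", verifier);

  // 3. Redirect to the authorization server.
  const url = new URL(authorizeUrl);
  url.searchParams.set("response_type", "code");
  url.searchParams.set("client_id", clientId);
  url.searchParams.set("redirect_uri", redirectUri);
  url.searchParams.set("code_challenge", challenge);
  url.searchParams.set("code_challenge_method", "S256");
  window.location.href = url.toString();
}
```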
Challenge 7: Running LangChain Agents Client-Side
The Problem
LangChain is a massive library designed for server environments:
- Full bundle: ~2MB minified
- Uses Node.js APIs (`fs`, `path`, `crypto`)
- Dozens of model providers (most unused)
- Complex dependency tree
The Solution: Dynamic Imports + Tree Shaking
We only import what’s actually used:

| Chunk | Size | Loaded When |
|---|---|---|
| Base UI | 620KB | Initial page load |
| OpenAI SDK | 150KB | First OpenAI chat |
| Anthropic SDK | 130KB | First Anthropic chat |
| Google SDK | 140KB | First Google chat |
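One way to get chunk boundaries like those in the table is Rollup's `manualChunks` in the Vite config; dynamic `import()` calls already produce separate chunks, and this just pins their contents. A sketch, not necessarily the project's real config:

```typescript
// vite.config.ts: sketch of how per-provider chunks can be produced.
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          openai: ["@langchain/openai"],
          anthropic: ["@langchain/anthropic"],
          google: ["@langchain/google-genai"],
        },
      },
    },
  },
});
```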
Challenge 8: Maintaining Conversation Memory
The Problem
In a server-based chat, conversation memory lives safely on the backend. In the browser, it is easy to lose:
- React components re-render (state resets)
- Users refresh the page (memory lost)
- Agent might be destroyed and recreated
The Solution: Agent Persistence + React State
We persist the agent instance across messages, keeping two layers of state:
- UI state (`messages`) - for rendering chat bubbles
- Agent memory - for LLM context (includes system prompts, tool results, etc.)
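In React terms, the split can be expressed with a ref that outlives re-renders, as in this illustrative hook (the `createAgent`/`run` shapes are placeholders, not the real `MCPAgent` API):

```typescript
// Illustrative hook: the agent (and its memory) lives in a ref that survives
// re-renders, while messages drive the chat bubbles.
import { useRef, useState } from "react";

type ChatMessage = { role: "user" | "assistant"; content: string };
type Agent = { run: (input: string) => Promise<string> };

function useChat(createAgent: () => Agent) {
  const agentRef = useRef<Agent | null>(null); // outside React's render cycle
  const [messages, setMessages] = useState<ChatMessage[]>([]); // UI state only

  async function send(input: string) {
    agentRef.current ??= createAgent(); // created once, reused every turn
    setMessages((m) => [...m, { role: "user", content: input }]);
    const reply = await agentRef.current.run(input);
    setMessages((m) => [...m, { role: "assistant", content: reply }]);
  }

  return { messages, send };
}
```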
Real-World Performance
Network Efficiency
Traditional inspector (backend proxy): every call travels browser → proxy → MCP server and back, adding a hop to each request. The client-side inspector talks to the MCP server directly.

Connection Overhead

Before (new connection per message), every chat turn repeated the full connection handshake. With the shared connection, the session is reused, which is where the ~95% speedup on repeat operations comes from.

Browser Bundle Size

Optimized splitting:
- Initial load: 620KB (inspector UI + base libraries)
- First tool call: +0KB (already loaded)
- First chat: +150KB (LLM SDK)
- First widget: +80KB (React widget renderer)
Building Your Own Client-Side MCP Apps
The same architecture works for any MCP-powered application.

Example: Client-Side Todo App with MCP
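A hypothetical minimal version: the browser itself is the MCP client, and the "backend" is whatever MCP server you point it at. The server URL and the tool names (`add_todo`, `list_todos`) are made up for illustration.

```typescript
// Hypothetical example: a todo app whose only "backend" is an MCP server.
// The URL and the add_todo / list_todos tool names are made up.
async function callTool(name: string, args: Record<string, unknown>): Promise<unknown> {
  const res = await fetch("http://localhost:3000/mcp", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: Date.now(),
      method: "tools/call",
      params: { name, arguments: args },
    }),
  });
  const body = await res.json();
  return body.result;
}

// The browser is the MCP client; no server of our own in between.
await callTool("add_todo", { title: "Ship the inspector" });
const todos = await callTool("list_todos", {});
```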
Example: AI Chat Widget
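And a hypothetical chat widget: the same client-side pieces (MCP connection plus a user-supplied LLM key) dropped into any React app. The `sendToAgent` prop stands in for an in-browser agent call.

```tsx
// Hypothetical chat widget; sendToAgent stands in for an in-browser agent call.
import { useState } from "react";

export function ChatWidget({ sendToAgent }: { sendToAgent: (q: string) => Promise<string> }) {
  const [log, setLog] = useState<string[]>([]);
  const [input, setInput] = useState("");

  async function onSend() {
    const question = input;
    setInput("");
    setLog((l) => [...l, `You: ${question}`]);
    const answer = await sendToAgent(question); // agent runs entirely in-browser
    setLog((l) => [...l, `AI: ${answer}`]);
  }

  return (
    <div>
      {log.map((line, i) => (
        <p key={i}>{line}</p>
      ))}
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={onSend}>Send</button>
    </div>
  );
}
```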
Security Considerations
Client-Side Security Model
What’s stored in the browser:
- MCP server URLs (localStorage)
- Auth headers (memory only, or localStorage if user opts in)
- OAuth tokens (localStorage with encryption)
- LLM API keys (memory only during session)
- Chat history (memory only)
What stays local to you:
- Your API keys (never leave your browser)
- Your MCP tool calls (direct to your server)
- Your chat messages (processed locally)
- Your auth tokens (stored locally)
When to Use Client-Side vs Server-Side
Use client-side when:
- ✅ Debugging/development tools
- ✅ Personal productivity apps
- ✅ Internal company tools (same network)
- ✅ Privacy-critical applications
- ✅ Offline-capable apps
Use server-side when:
- ✅ Public-facing products (need rate limiting)
- ✅ Shared state across users
- ✅ Background processing
- ✅ Server-only APIs (can’t call from browser)
- ✅ Need to hide API keys from end users
The Result: A Production-Ready Client-Side Tool
The mcp-use Inspector is now:
- Fast: 95% faster repeat operations (shared connection)
- Private: Your credentials never leave your machine
- Simple: No backend to deploy or maintain
- Portable: Works from any browser, any device
- Powerful: Full MCP protocol support + AI agent
In production today:
- 500+ developers using it to debug MCP servers
- Works with Linear, GitHub, Cloudflare, custom servers
- Supports Bearer tokens, Basic auth, OAuth
- Handles tools, resources, prompts, and chat
- Deployed as static files on CDNs worldwide
Technical Stack
Core libraries:
- `BrowserMCPClient` - Browser-native MCP client
- `HttpConnector` - Fetch-based MCP transport
- `MCPAgent` - LangChain agent for tool orchestration
- React 18 - UI framework
- Vite - Build tool with code splitting
- TailwindCSS - Styling

LLM integrations:
- `@langchain/openai` - OpenAI integration
- `@langchain/anthropic` - Anthropic integration
- `@langchain/google-genai` - Google integration

Browser APIs:
- `fetch` - HTTP requests
- `EventSource` - SSE transport
- `WebSocket` - WebSocket transport (future)
- `localStorage` - OAuth token storage
- `window.postMessage` - OAuth callback communication
Try It Yourself
Install the mcp-use server with the bundled inspector (a single `npm install`, as described above), then open the inspector route in your browser.

Conclusion
Building the mcp-use Inspector client-side was unconventional, but the benefits are clear:
- Zero installation friction: Just open a URL
- Privacy by default: Your data stays local
- Extreme simplicity: No backend to maintain
- Better performance: Direct connections, shared sessions
- Portable: Deploy to any static host
Read more: