MCP Server Guide
TONL-MCP Bridge provides two MCP server modes:
- stdio - Direct integration with Claude Desktop (recommended for local use)
- HTTP/SSE - Remote server for team collaboration, Docker, Kubernetes
Choose Your Mode
stdio Mode (Local)
Use when:
- Running Claude Desktop locally
- Single user
- No network needed
- Fastest performance
Pros:
- ✅ No authentication needed
- ✅ No network ports
- ✅ Fastest (<1ms latency)
- ✅ Auto-starts with Claude
Cons:
- ❌ Local only (no remote access)
- ❌ One user at a time
HTTP/SSE Mode (Remote)
Use when:
- Team collaboration needed
- Docker/Kubernetes deployment
- Remote access required
- Multiple clients
Pros:
- ✅ Remote access
- ✅ Multiple clients
- ✅ Production monitoring
- ✅ Load balancing ready
Cons:
- ❌ Requires authentication
- ❌ Network dependency
- ❌ More complex setup
stdio Mode Setup
Quick Start
Install:
npm install -g tonl-mcp-bridge
Configure Claude Desktop:
macOS:
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
Windows:
notepad %APPDATA%\Claude\claude_desktop_config.json
Configuration:
{
"mcpServers": {
"tonl": {
"command": "npx",
"args": ["-y", "tonl-mcp-stdio"]
}
}
}
Restart Claude Desktop and you're done!
Development Setup
For local development:
{
"mcpServers": {
"tonl": {
"command": "node",
"args": [
"/absolute/path/to/tonl-mcp-bridge/dist/mcp/stdio.js"
]
}
}
}
Build first:
cd /path/to/tonl-mcp-bridge
npm run build
Environment Variables
{
"mcpServers": {
"tonl": {
"command": "node",
"args": ["/path/to/stdio.js"],
"env": {
"DEBUG": "tonl:*",
"NODE_ENV": "development"
}
}
}
}
HTTP/SSE Mode Setup
Quick Start
1. Generate Token:
openssl rand -base64 32
# Output: kJ8mN2pQ4rS6tU8vW0xY2zA4bC6dE8fG0hI2jK4lM6n=
2. Start Server:
export TONL_AUTH_TOKEN=kJ8mN2pQ4rS6tU8vW0xY2zA4bC6dE8fG0hI2jK4lM6n=
npx tonl-mcp-server
Output:
🚀 TONL MCP Server listening on port 3000
- SSE Stream: http://localhost:3000/mcp
- Health: http://localhost:3000/health
🔒 Security: Enabled (Bearer Token required)
3. Configure Client:
{
"mcpServers": {
"tonl": {
"url": "http://localhost:3000/mcp",
"transport": {
"type": "sse"
},
"headers": {
"Authorization": "Bearer kJ8mN2pQ4rS6tU8vW0xY2zA4bC6dE8fG0hI2jK4lM6n="
}
}
}
}
Environment Variables
PORT=3000 # Server port
TONL_AUTH_TOKEN=<token> # Required for production
NODE_ENV=production # Environment
DEBUG=tonl:* # Debug logging
Auto-Generated Tokens (Development)
For development, tokens are auto-generated if not set:
npx tonl-mcp-server
# ⚠️ Security: Development mode (Auto-generated session tokens)
# Token: 3c92afcc-ee69-4d1d-b7e0-39ba92c0443a
# 💡 Set TONL_AUTH_TOKEN for production use
⚠️ Auto-generated tokens are valid ONLY for the current session!
Available Tools
Both modes expose the same three MCP tools:
1. convert_to_tonl
Convert JSON data to TONL format.
Parameters:
{
data: any[]; // Data to convert
name: string; // Collection name
options?: {
optimize?: boolean; // Type optimization
flattenNested?: boolean; // Flatten objects
includeStats?: boolean; // Include statistics
anonymize?: string[]; // Fields to anonymize
}
}
Example:
{
"data": [
{"id": 1, "name": "Alice", "age": 25},
{"id": 2, "name": "Bob", "age": 30}
],
"name": "users",
"options": {
"includeStats": true
}
}
Response:
{
"tonl": "users[2]{id:i32,name:str,age:i32}:\n 1, Alice, 25\n 2, Bob, 30",
"stats": {
"originalTokens": 118,
"compressedTokens": 75,
"savedTokens": 43,
"savingsPercent": 36.4
}
}
2. parse_tonl
Convert TONL back to JSON.
Parameters:
{
tonl: string; // TONL formatted string
validateSchema?: boolean; // Validate schema
}
Example:
{
"tonl": "users[2]{id:i32,name:str}:\n 1, Alice\n 2, Bob"
}
Response:
{
"json": [
{"id": 1, "name": "Alice"},
{"id": 2, "name": "Bob"}
]
}
3. calculate_savings
Calculate token savings between JSON and TONL.
Parameters:
{
jsonData: string; // JSON formatted data
tonlData: string; // TONL formatted data
model?: string; // Tokenizer model
}
Supported models:
- gpt-5 (default)
- gpt-4o, gpt-4o-mini
- claude-opus-4.5, claude-sonnet-4.5
- gemini-2.5-pro, gemini-2.5-flash
Example:
{
"jsonData": "[{\"id\":1,\"name\":\"Alice\"}]",
"tonlData": "users[1]{id:i32,name:str}:\n 1, Alice",
"model": "gpt-4o"
}
Response:
{
"originalTokens": 38,
"compressedTokens": 25,
"savedTokens": 13,
"savingsPercent": 34.2,
"model": "gpt-4o"
}
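These tools are normally invoked by an MCP client such as Claude Desktop, but they can also be called from a script. Below is a minimal sketch assuming the official @modelcontextprotocol/sdk TypeScript client and the stdio command shown earlier; the client name and the result handling are illustrative, not part of this project.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the stdio server the same way the Claude Desktop config does.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "tonl-mcp-stdio"],
});

const client = new Client({ name: "tonl-example", version: "1.0.0" });
await client.connect(transport);

// Call convert_to_tonl with the parameters documented above.
const result = await client.callTool({
  name: "convert_to_tonl",
  arguments: {
    data: [
      { id: 1, name: "Alice", age: 25 },
      { id: 2, name: "Bob", age: 30 },
    ],
    name: "users",
    options: { includeStats: true },
  },
});

console.log(result);
await client.close();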
HTTP Endpoints
The HTTP server provides additional endpoints:
Health Checks
GET /health - Liveness probe
curl http://localhost:3000/health
# {"status":"healthy","uptime":3600,"timestamp":"2024-12-07T18:00:00.000Z"}
GET /ready - Readiness probe
curl http://localhost:3000/ready
# {"status":"ready","timestamp":"2024-12-07T18:00:00.000Z"}
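For scripted deployments, the same probes can be polled before sending traffic. A small sketch using the built-in fetch in Node.js 18+; the base URL and retry count are placeholders:

// Poll /ready until the server reports ready (or give up).
async function waitUntilReady(baseUrl = "http://localhost:3000", attempts = 10): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(`${baseUrl}/ready`);
      if (res.ok) {
        console.log("server ready:", await res.json());
        return;
      }
    } catch {
      // Server not reachable yet; fall through to the retry delay.
    }
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  throw new Error("server did not become ready in time");
}

await waitUntilReady();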
Metrics
GET /metrics - Prometheus metrics
curl http://localhost:3000/metrics
# HELP tonl_conversions_total Total number of conversions
# TYPE tonl_conversions_total counter
# tonl_conversions_total{direction="json_to_tonl"} 42
GET /metrics/live - Live metrics stream (requires auth)
curl -N -H "Authorization: Bearer $TOKEN" \
http://localhost:3000/metrics/live
# data: {"type":"metrics","timestamp":1701972000000,"data":"..."}
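The browser EventSource API cannot attach an Authorization header, so scripted consumers can read the stream with fetch instead. A minimal Node.js 18+ sketch; the token comes from the TONL_AUTH_TOKEN environment variable used throughout this guide:

// Consume the authenticated SSE stream from /metrics/live and print raw events.
const res = await fetch("http://localhost:3000/metrics/live", {
  headers: { Authorization: `Bearer ${process.env.TONL_AUTH_TOKEN}` },
});
if (!res.ok || !res.body) throw new Error(`stream request failed: ${res.status}`);

const reader = res.body.getReader();
const decoder = new TextDecoder();
for (;;) {
  const { value, done } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value, { stream: true }));
}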
Streaming
POST /stream/convert - Stream NDJSON to TONL
curl -X POST http://localhost:3000/stream/convert \
-H "Content-Type: application/x-ndjson" \
--data-binary @logs.ndjson
Query parameters:
- collection - Collection name (default: "data")
- skipInvalid - Skip invalid JSON lines (default: true)
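The same upload can be done from Node.js instead of curl. A sketch using fetch and the query parameters listed above; the file name and collection name are examples, and the Authorization header assumes the server was started with a token:

import { readFile } from "node:fs/promises";

// Read an NDJSON file and stream-convert it to TONL via the HTTP endpoint.
const ndjson = await readFile("logs.ndjson", "utf8");

const res = await fetch(
  "http://localhost:3000/stream/convert?collection=logs&skipInvalid=true",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/x-ndjson",
      Authorization: `Bearer ${process.env.TONL_AUTH_TOKEN}`,
    },
    body: ndjson,
  },
);

console.log(await res.text());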
Docker Deployment
Basic Docker
docker run -d \
-p 3000:3000 \
-e TONL_AUTH_TOKEN=$(openssl rand -hex 32) \
-e NODE_ENV=production \
ghcr.io/kryptomrx/tonl-mcp-bridge:latest
Docker Compose
version: '3.8'
services:
  tonl-server:
    image: ghcr.io/kryptomrx/tonl-mcp-bridge:latest
    ports:
      - "3000:3000"
    environment:
      - TONL_AUTH_TOKEN=${TONL_AUTH_TOKEN}
      - NODE_ENV=production
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
    restart: unless-stopped
Start:
export TONL_AUTH_TOKEN=$(openssl rand -hex 32)
docker-compose up -d
Kubernetes Deployment
Basic Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tonl-mcp-bridge
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tonl-mcp-bridge
  template:
    metadata:
      labels:
        app: tonl-mcp-bridge
    spec:
      containers:
        - name: tonl-server
          image: ghcr.io/kryptomrx/tonl-mcp-bridge:latest
          ports:
            - containerPort: 3000
          env:
            - name: TONL_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: tonl-secrets
                  key: auth-token
            - name: NODE_ENV
              value: "production"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: tonl-mcp-bridge
spec:
  selector:
    app: tonl-mcp-bridge
  ports:
    - port: 3000
      targetPort: 3000
  type: ClusterIP
Create secret:
kubectl create secret generic tonl-secrets \
--from-literal=auth-token=$(openssl rand -hex 32)
Deploy:
kubectl apply -f deployment.yaml
See Kubernetes Guide for complete setup.
Security
Authentication
Production (required):
export TONL_AUTH_TOKEN=$(openssl rand -hex 32)
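If openssl is not available, an equivalent token can be generated with Node's crypto module; a minimal sketch:

import { randomBytes } from "node:crypto";

// Same entropy as `openssl rand -hex 32`: 32 random bytes, hex-encoded.
console.log(`export TONL_AUTH_TOKEN=${randomBytes(32).toString("hex")}`);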
Development (optional):
# Auto-generates session tokens
npx tonl-mcp-server
Rate Limiting
- 100 requests per 15 minutes per IP
- Applies to the /stream/convert endpoint
- Returns 429 when exceeded
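Clients that may hit this limit can retry with a backoff. A generic client-side sketch; the retry policy is illustrative, not something the server prescribes:

// Retry a request with exponential backoff whenever the server answers 429.
async function fetchWithBackoff(url: string, init: RequestInit, retries = 3): Promise<Response> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429) return res;
    // Wait 1s, 2s, 4s, ... before the next attempt.
    await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 1000));
  }
  throw new Error("rate limit still exceeded after retries");
}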
Security Headers
Helmet middleware enabled by default:
- X-Content-Type-Options: nosniff
- X-Frame-Options: SAMEORIGIN
- X-XSS-Protection: 0
- Strict-Transport-Security (HTTPS only)
Best Practices
- Always use tokens in production
- Rotate tokens monthly
- Use HTTPS for remote access
- Monitor failed auth attempts
- Set resource limits (Docker/K8s)
- Enable Prometheus metrics
- Configure health checks
Monitoring
Prometheus Metrics
Business metrics:
- tonl_token_savings_total - Total tokens saved
- tonl_compression_ratio - Compression efficiency
- tonl_conversions_total - Total conversions
Operational metrics:
- tonl_http_requests_total - Request count
- tonl_http_request_duration_seconds - Request latency
- tonl_active_connections - Active SSE connections
- tonl_errors_total - Error count
System metrics:
- process_cpu_user_seconds_total - CPU usage
- process_resident_memory_bytes - Memory usage
- nodejs_heap_size_used_bytes - Heap usage
Grafana Dashboard
Import dashboard from:
https://grafana.com/dashboards/tonl-mcp-bridge
Or create a custom dashboard with:
- Request rate graph
- Token savings graph
- Error rate graph
- Response time histogram
Troubleshooting
stdio Mode
Tools not appearing:
- Check config.json syntax
- Verify absolute paths
- Restart Claude Desktop
- Check logs:
~/Library/Logs/Claude/mcp*.log
Command not found:
# Global install
npm install -g tonl-mcp-bridge
# Verify
which tonl-mcp-stdio
HTTP Mode
Connection refused:
# Check server is running
curl http://localhost:3000/health
# Check port
lsof -i :3000
Authentication failed:
# Verify token
curl -H "Authorization: Bearer $TOKEN" \
http://localhost:3000/mcp
High latency:
- Check network connectivity
- Monitor with the /metrics endpoint
- Review Prometheus graphs
- Check Docker/K8s resource limits
Performance
stdio Mode:
- Startup: <1 second
- Latency: <1ms
- Memory: ~50MB
- No network overhead
HTTP Mode:
- Startup: <2 seconds
- Latency: 5-20ms (local), 50-200ms (remote)
- Memory: ~100MB
- Network dependent
Recommendations:
- Use stdio for local Claude Desktop
- Use HTTP for team/production deployments
- Scale horizontally (K8s replicas)
- Enable Prometheus monitoring
Next Steps
Integration:
- Claude Desktop - Quick start
- Docker Deployment - Container setup
- Kubernetes - Production scaling
Features:
- Streaming - Real-time processing
- Privacy - Data anonymization
- Monitoring - Live dashboard
API:
- Core API - Library usage
- Streaming API - Stream processing
- Server API - HTTP endpoints