40-60% Token Reduction
Eliminate JSON verbosity while preserving data structure. Savings scale with dataset size, reaching 60% at 100+ records.
A production-ready TypeScript library that optimizes JSON for LLMs. Built for RAG systems, vector databases, and real-time streaming.
The Problem: LLMs charge per token, and JSON is verbose:

```json
[
  {"id": 1, "name": "Alice", "age": 25},
  {"id": 2, "name": "Bob", "age": 30}
]
```

Cost: 118 tokens

The Solution: TONL removes the redundancy:

```
users[2]{id:i32,name:str,age:i32}:
1, Alice, 25
2, Bob, 30
```

Cost: 75 tokens → 36% savings
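The header line declares the key names and types once; each row then carries only values. Purely as an illustration (this is not the library's API), a minimal parser for a block shaped like the one above could look like:

```typescript
// Illustrative only: parse a TONL block back into objects to show how the
// header carries the schema and the rows carry only values.
function parseTonlBlock(tonl: string): Record<string, string | number>[] {
  const [header, ...rows] = tonl.trim().split('\n');
  // Header shape: name[count]{field:type,field:type,...}:
  const fields = header
    .slice(header.indexOf('{') + 1, header.indexOf('}'))
    .split(',')
    .map((f) => f.split(':') as [string, string]);
  return rows.map((row) => {
    const values = row.split(',').map((v) => v.trim());
    return Object.fromEntries(
      fields.map(([name, type], i) => [
        name,
        type === 'str' ? values[i] : Number(values[i]),
      ]),
    );
  });
}
```
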
| Records | JSON Tokens | TONL Tokens | Savings |
|---|---|---|---|
| 2 | 118 | 75 | 36% |
| 10 | 247 | 134 | 46% |
| 100 | 2,470 | 987 | 60% |
| 1,000 | 24,700 | 9,870 | 60% |
💡 Savings increase with more data.
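To check these numbers against your own data, a quick sketch using the third-party js-tiktoken tokenizer (an assumption here, not a dependency of this library) might look like:

```typescript
// Sanity-check the savings table on synthetic data. js-tiktoken is a
// separate package used only to count tokens.
import { getEncoding } from 'js-tiktoken';
import { jsonToTonl } from 'tonl-mcp-bridge';

const enc = getEncoding('cl100k_base'); // tokenizer family of GPT-4-class models

// 100 synthetic records, roughly matching the table's third row
const records = Array.from({ length: 100 }, (_, i) => ({
  id: i,
  name: `user${i}`,
  age: 20 + (i % 40),
}));

const jsonTokens = enc.encode(JSON.stringify(records)).length;
const tonlTokens = enc.encode(jsonToTonl(records, 'users')).length;
const savings = (100 * (1 - tonlTokens / jsonTokens)).toFixed(1);
console.log(`JSON: ${jsonTokens} tokens, TONL: ${tonlTokens} tokens (${savings}% saved)`);
```
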
```bash
npm install tonl-mcp-bridge
```

```typescript
import { jsonToTonl } from 'tonl-mcp-bridge';

const users = [
  { id: 1, name: "Alice", email: "alice@example.com" },
  { id: 2, name: "Bob", email: "bob@example.com" }
];

const tonl = jsonToTonl(users, "users");
console.log(tonl);
```

Output:

```
users[2]{id:i32,name:str,email:str}:
1, Alice, alice@example.com
2, Bob, bob@example.com
```

Result: JSON costs 118 tokens, TONL costs 75 tokens, a 36% saving.
Use the TONL output directly in an LLM prompt:

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    {
      role: "system",
      content: `Here is user data:\n${tonl}` // ✅ 36% fewer tokens
    },
    {
      role: "user",
      content: "Who is the oldest user?"
    }
  ]
});
```

Vector database search with the Milvus adapter:

```typescript
import { MilvusAdapter } from 'tonl-mcp-bridge/sdk/vector';

const milvus = new MilvusAdapter({ address: 'localhost:19530' });
await milvus.connect();

// Search and get TONL results (automatic conversion)
const result = await milvus.searchToTonl(
  'documents',
  queryEmbedding,
  { limit: 10 }
);

// Use the TONL result in an LLM prompt
const prompt = `Context:\n${result.tonl}\n\nQuestion: ${userQuestion}`;
console.log(`✅ Saved ${result.stats.savingsPercent}% tokens`);
```

```bash
# Stream 1GB log file with constant memory
curl -X POST http://localhost:3000/stream/convert \
  -H "Content-Type: application/x-ndjson" \
  --data-binary @app-logs.ndjson \
  -o logs.tonl

# 250,000 lines/second, 47% compression
```
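The same endpoint can be driven from Node. A rough client sketch, assuming Node 18+ fetch (which requires `duplex: 'half'` for streamed request bodies) and the endpoint behavior shown in the curl example:

```typescript
// Stream an NDJSON file to /stream/convert and write the TONL response to
// disk as it arrives, without buffering either side in memory.
import { createReadStream, createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';

const res = await fetch('http://localhost:3000/stream/convert', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-ndjson' },
  body: Readable.toWeb(createReadStream('app-logs.ndjson')),
  duplex: 'half', // required by Node's fetch when the body is a stream
} as RequestInit);

await pipeline(Readable.fromWeb(res.body as any), createWriteStream('logs.tonl'));
```
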
```typescript
// Anonymize sensitive fields before sending to an LLM
const masked = jsonToTonl(users, 'users', {
  anonymize: ['email', 'ssn', 'creditCard'],
  mask: true // preserves format: a***@example.com
});
// Safe to use in LLM prompts: PII is masked
```
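For illustration only, "preserves format" means something like the following hypothetical helper; the library's actual masking logic may differ:

```typescript
// Hypothetical sketch of format-preserving masking, not the library's code:
// keep the first character and the domain so the value still reads as an email.
function maskEmail(email: string): string {
  const [local, domain] = email.split('@');
  return `${local[0]}***@${domain}`;
}

maskEmail('alice@example.com'); // "a***@example.com"
```
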
Docker:

```bash
docker run -d -p 3000:3000 \
  -e TONL_AUTH_TOKEN=your-token \
  ghcr.io/kryptomrx/tonl-mcp-bridge:latest
```

Kubernetes:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tonl-mcp-bridge
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tonl-mcp-bridge
  template:
    metadata:
      labels:
        app: tonl-mcp-bridge
    spec:
      containers:
        - name: tonl-server
          image: ghcr.io/kryptomrx/tonl-mcp-bridge:latest
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
```

```bash
# Real-time dashboard
tonl top
# Prometheus metrics
curl http://localhost:3000/metrics
```

```bash
# Convert files
tonl convert data.json
# With token statistics
tonl convert data.json -s
# Calculate ROI
tonl roi --savings 45 --queries-per-day 1000
# Analyze multiple files
tonl analyze data/*.json --visual
# Start MCP server
tonl-mcp-server
```
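The `tonl roi` command above does this estimate for you; the underlying arithmetic is simple. A sketch with placeholder numbers (the prompt size and price below are assumptions, not real rates or library defaults):

```typescript
// Back-of-the-envelope version of an ROI estimate.
const savingsPercent = 45;      // e.g. measured via `tonl convert data.json -s`
const queriesPerDay = 1000;
const tokensPerQuery = 2000;    // assumed average prompt size
const usdPerMillionTokens = 10; // assumed model input price

const tokensSavedPerDay = queriesPerDay * tokensPerQuery * (savingsPercent / 100);
const usdSavedPerMonth = (tokensSavedPerDay / 1e6) * usdPerMillionTokens * 30;
console.log(`~${tokensSavedPerDay.toLocaleString()} tokens/day, ~$${usdSavedPerMonth.toFixed(2)}/month saved`);
// With these assumptions: 900,000 tokens/day, about $270/month.
```
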
Native adapters are available for popular databases. All adapters include automatic TONL conversion and token statistics.
Enterprise RAG Platform:
✅ Perfect for:
❌ Not ideal for:
📚 Learn
🔧 Integrate
🚀 Deploy
💡 Calculate
TONL format by Ersin Koç. This library adds production features: streaming, privacy, monitoring, and enterprise integrations.