Observability
Full visibility into your AI applications. Tracing, metrics, logging, and error tracking with OpenTelemetry support.
npm install @rana/observability
Request Tracing
Distributed tracing for all AI requests with full context
import { createTracer, withTracing, traced } from '@rana/observability';
const tracer = createTracer({
serviceName: 'my-ai-app',
exporters: ['console', 'otlp'],
otlpEndpoint: 'http://localhost:4318'
});
// Automatic tracing
const result = await withTracing(
'summarize-document',
async (span) => {
span.setAttribute('document.length', doc.length);
return await summarize(doc);
}
);
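// Hedged sketch (behavior assumed, not documented above): nested withTracing
// calls, assuming inner calls attach as child spans of the enclosing trace
const answer = await withTracing('answer-question', async (span) => {
  const summary = await withTracing('summarize', () => summarize(doc));
  span.setAttribute('summary.length', summary.length);
  return await chat(`Answer using this summary: ${summary}`);
});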
// Or use decorators
class MyService {
  // `this.ai` is assumed to be an injected client exposing chat()
  @traced('processRequest')
async process(input: string) {
return await this.ai.chat(input);
}
}
Token Analytics
Track token usage, costs, and efficiency across all requests
import { TokenAnalytics } from '@rana/observability';
const analytics = new TokenAnalytics();
// Automatic tracking
analytics.track({
model: 'gpt-4',
inputTokens: 1500,
outputTokens: 500,
cost: 0.045,
latency: 2300,
metadata: { userId: '123', feature: 'chat' }
});
// Get insights
const daily = await analytics.getDailyUsage();
const byModel = await analytics.getUsageByModel();
const byFeature = await analytics.getUsageByMetadata('feature');
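// Hedged sketch: roll the daily rows up into a running total
// (assumes each row from getDailyUsage() exposes a numeric `cost` field)
const totalCost = daily.reduce((sum, day) => sum + day.cost, 0);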
// Cost breakdown
const costs = await analytics.getCostBreakdown({
period: 'month',
groupBy: 'model'
});
Performance Monitoring
Track latency percentiles, throughput, and error rates
import { PerformanceMonitor } from '@rana/observability';
const monitor = new PerformanceMonitor({
sloLatencyP99: 3000, // 3s P99 target
sloErrorRate: 0.01, // 1% error rate target
alertOnBreach: true
});
// Automatic metrics collection
monitor.recordRequest({
name: 'chat',
duration: 1234,
success: true
});
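// Hedged sketch (helper not part of the library): time a call and report
// both successes and failures so the error-rate SLO sees every request
async function recordTimed<T>(name: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    const value = await fn();
    monitor.recordRequest({ name, duration: Date.now() - start, success: true });
    return value;
  } catch (error) {
    monitor.recordRequest({ name, duration: Date.now() - start, success: false });
    throw error;
  }
}
const reply = await recordTimed('chat', () => chat(userMessage));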
// Get performance stats
const stats = await monitor.getStats('1h');
console.log(stats.latency.p50); // 800ms
console.log(stats.latency.p99); // 2100ms
console.log(stats.throughput); // 150 req/min
console.log(stats.errorRate); // 0.005
Request/Response Logging
Structured logging with PII redaction and search
import { AILogger } from '@rana/observability';
const logger = new AILogger({
redactPII: true, // Auto-redact emails, phones, etc.
redactPatterns: [/api_key=\w+/], // Custom patterns
storage: 'elasticsearch',
retention: '30d'
});
// Automatic logging
const result = await logger.wrap('chat', async () => {
return await chat(userMessage);
});
// Search logs
const logs = await logger.search({
query: 'error',
timeRange: { from: '1h ago' },
filters: { model: 'gpt-4' }
});
Error Tracking
AI-specific error tracking with context and grouping
import { ErrorTracker } from '@rana/observability';
const errorTracker = new ErrorTracker({
sentry: { dsn: process.env.SENTRY_DSN },
groupSimilarErrors: true,
captureContext: true
});
// Automatic error capture
try {
await chat(message);
} catch (error) {
errorTracker.capture(error, {
model: 'gpt-4',
prompt: message,
userId: user.id
});
}
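// Hedged sketch (helper not part of the library): capture and rethrow so
// callers still see the failure while every error reaches the tracker
async function withErrorTracking<T>(
  fn: () => Promise<T>,
  context: Record<string, unknown>
): Promise<T> {
  try {
    return await fn();
  } catch (error) {
    errorTracker.capture(error, context);
    throw error;
  }
}
const response = await withErrorTracking(() => chat(message), {
  model: 'gpt-4',
  userId: user.id
});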
// Get error insights
const insights = await errorTracker.getInsights('24h');
console.log(insights.topErrors);
console.log(insights.errorsByModel);
console.log(insights.errorTrend);
OpenTelemetry Export
Export to any OTel-compatible backend
import { setupOTel } from '@rana/observability';
setupOTel({
serviceName: 'my-ai-app',
exporters: {
traces: {
type: 'otlp',
endpoint: 'http://jaeger:4318'
},
metrics: {
type: 'prometheus',
port: 9090
},
logs: {
type: 'otlp',
endpoint: 'http://loki:3100'
}
}
});
// All RANA operations are now traced
const result = await chat('Hello');
// Automatically creates spans, records metrics, and emits logs
Supported Backends
Datadog
New Relic
Grafana
Jaeger
Prometheus
Sentry
Elasticsearch
CloudWatch
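Most of these backends ingest OTLP directly or through a local agent or collector, so they plug into the exporter config shown above; the exact endpoints, ports, and auth headers depend on your deployment. A minimal sketch, assuming an OTLP-capable collector (for example, the Datadog Agent with OTLP ingestion enabled) on localhost:4318 and Prometheus scraping port 9090:
import { setupOTel } from '@rana/observability';

setupOTel({
  serviceName: 'my-ai-app',
  exporters: {
    traces: {
      type: 'otlp',
      endpoint: 'http://localhost:4318' // assumed collector/agent OTLP HTTP port
    },
    metrics: {
      type: 'prometheus',
      port: 9090 // Prometheus (and Grafana dashboards on top) scrape this port
    }
  }
});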