AI-Native Features
Built-in capabilities for production AI applications: detect hallucinations, verify facts, score confidence, and optimize prompts automatically.
Prompt Optimization
Automatically optimize prompts for better results and lower costs
import { optimizePrompt, compressPrompt } from '@rana/core';
// Compress a verbose prompt
const compressed = compressPrompt(longPrompt);
// Result: ~40% fewer tokens, same meaning
// Optimize for a specific goal
const optimized = await optimizePrompt(prompt, {
goal: 'quality',
strategy: 'chain-of-thought'
});
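A minimal sketch of combining the two steps, assuming compressPrompt returns the compressed string synchronously and optimizePrompt resolves to the rewritten prompt (it may in fact return a richer result object):

import { compressPrompt, optimizePrompt } from '@rana/core';

// Sketch: compress first, then optimize the compressed prompt.
// Assumes optimizePrompt resolves to a string, as used above.
async function preparePrompt(raw: string): Promise<string> {
  const compressed = compressPrompt(raw);

  // Rough savings estimate using character counts as a token proxy
  const savings = 1 - compressed.length / raw.length;
  console.log(`Compressed by ~${Math.round(savings * 100)}%`);

  return optimizePrompt(compressed, {
    goal: 'quality',
    strategy: 'chain-of-thought'
  });
}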
Hallucination Detection
Detect fabricated facts, fake citations, and logical inconsistencies
import { detectHallucinations } from '@rana/core';
const result = detectHallucinations(response, {
knownFacts: [...],
context: sourceDocument
});
if (result.hasHallucinations) {
console.log('Issues found:', result.instances);
// Types: fabricated_citation, overconfidence,
// temporal_error, logical_inconsistency
}
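One way to wire detection into a response pipeline, as a hedged sketch: the regenerate callback below is a placeholder for your own model call, not part of @rana/core.

import { detectHallucinations } from '@rana/core';

// Sketch: return the response only if it passes the check;
// otherwise fall back to one regeneration attempt.
// `regenerate` is hypothetical, supplied by the caller.
async function safeResponse(
  response: string,
  sourceDocument: string,
  regenerate: () => Promise<string>
): Promise<string> {
  const result = detectHallucinations(response, { context: sourceDocument });
  if (!result.hasHallucinations) return response;

  console.warn('Hallucinations flagged:', result.instances);
  return regenerate();
}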
Confidence Scoring
Measure response confidence through linguistic and consistency analysis
import { scoreConfidence, isConfident } from '@rana/core';
const score = scoreConfidence(response, {
context: originalQuery,
samples: [response1, response2, response3] // for consistency
});
console.log(score.overall); // 0.85
console.log(score.level); // 'high'
console.log(score.breakdown); // { linguistic, consistency, specificity, grounding }
console.log(score.recommendations);
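isConfident is imported above but not demonstrated. A plausible use, assuming it takes a response and returns a boolean (the real signature may also accept a threshold):

import { isConfident, scoreConfidence } from '@rana/core';

// Assumption: isConfident(response) returns a boolean; check the
// actual signature before relying on this.
function renderAnswer(response: string): string {
  if (isConfident(response)) return response;

  // Low confidence: attach an uncertainty indicator for the UI
  const score = scoreConfidence(response);
  return `${response}\n\n(Confidence: ${score.level} - verify before relying on this.)`;
}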
Fact Verification
Extract claims and verify them against knowledge bases
import { verifyFacts, extractClaims } from '@rana/core';
// Extract factual claims from text
const claims = await extractClaims(response);
// [{ text: "Paris is the capital of France", type: "factual" }]
// Verify claims
const result = await verifyFacts(response);
console.log(result.verifiedClaims); // Claims with evidence
console.log(result.falseClaims); // Number of false claims
console.log(result.overallReliability); // 0-1 score
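A hedged sketch of gating output on the verification result; the 0.8 cutoff is an arbitrary example threshold, not a library default, and field names follow the snippet above:

import { verifyFacts } from '@rana/core';

// Sketch: only release responses with no false claims and a
// reliability score above an example cutoff of 0.8.
async function gateOnFacts(response: string): Promise<string> {
  const result = await verifyFacts(response);
  if (result.falseClaims === 0 && result.overallReliability >= 0.8) {
    return response;
  }
  return 'This answer could not be fully verified; please double-check the facts.';
}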
Quality Scoring
Multi-dimensional quality evaluation for LLM responses
import { scoreQuality, getQualityLevel } from '@rana/core';
const quality = scoreQuality(response, query);
console.log(quality.overall); // 0.87
console.log(quality.dimensions); // { relevance, completeness, clarity,
// accuracy, helpfulness, conciseness }
console.log(quality.suggestions); // Improvement recommendations
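getQualityLevel is imported above but not shown. Assuming it maps a 0-1 score to a label, logging scores for monitoring (see the best practices below) might look like this:

import { scoreQuality, getQualityLevel } from '@rana/core';

// Assumption: getQualityLevel maps a 0-1 score to a label such as
// 'high' or 'low'; the exact labels are not documented here.
function logQuality(response: string, query: string): void {
  const quality = scoreQuality(response, query);
  console.log({
    level: getQualityLevel(quality.overall),
    dimensions: quality.dimensions,
    timestamp: new Date().toISOString() // for tracking over time
  });
}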
Comprehensive Analysis
Run all checks at once for complete response validation
import { analyzeResponse, isTrustworthy } from '@rana/core';
// Quick check
if (isTrustworthy(response)) {
// Safe to use
}
// Full analysis
const analysis = await analyzeResponse(response, {
query: originalQuery,
context: { knownFacts: [...] }
});
console.log(analysis.overallScore); // 0.82
console.log(analysis.hallucinations); // Hallucination results
console.log(analysis.confidence); // Confidence results
console.log(analysis.verification); // Fact verification
console.log(analysis.quality); // Quality scores
console.log(analysis.recommendations); // Combined recommendations
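A hedged sketch of using the combined score to trigger a single retry; `generate` is a placeholder for your own model call, and the 0.7 cutoff is an arbitrary example, not a library default:

import { analyzeResponse } from '@rana/core';

// Sketch: regenerate once when the combined score is weak.
async function answerWithValidation(
  query: string,
  generate: (q: string) => Promise<string>
): Promise<string> {
  let response = await generate(query);
  const analysis = await analyzeResponse(response, { query });
  if (analysis.overallScore < 0.7) {
    response = await generate(query); // one retry, then accept
  }
  return response;
}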
Best Practices
- Always verify critical information before presenting to users
- Use confidence scoring to add uncertainty indicators in UI
- Compress prompts in production to reduce costs by 30-50%
- Log quality scores for monitoring and improvement tracking
- Combine hallucination detection with fact verification for critical apps, as in the sketch below
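For the last point, a minimal sketch of combining both checks; field names follow the examples above, and the 0.9 reliability cutoff is an example, not a library default:

import { detectHallucinations, verifyFacts } from '@rana/core';

// Sketch for critical apps: a response must pass both checks.
async function isSafeForCriticalUse(
  response: string,
  sourceDocument: string
): Promise<boolean> {
  const h = detectHallucinations(response, { context: sourceDocument });
  if (h.hasHallucinations) return false;

  const v = await verifyFacts(response);
  return v.falseClaims === 0 && v.overallReliability >= 0.9;
}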