Core Architecture Overview
RANA is built on a layered architecture that separates concerns and enables flexibility. Understanding this architecture will help you build better applications and extend the framework when needed.
Architecture Layers
┌─────────────────────────────────────────┐
│ Your Application │
├─────────────────────────────────────────┤
│ @rana/react (Hooks & Components) │
├─────────────────────────────────────────┤
│ @rana/prompts (Prompt Management) │
├─────────────────────────────────────────┤
│ @rana/core (LLM Client & Agents) │
├─────────────────────────────────────────┤
│ LLM Providers (OpenAI, etc) │
└─────────────────────────────────────────┘

@rana/core - The Foundation
The core package provides the fundamental building blocks:
Agent Class
The Agent class is the primary way to interact with LLMs. It handles communication, streaming, retries, and tool execution.
import { Agent } from '@rana/core';

const agent = new Agent({
  name: 'Assistant',
  model: 'claude-sonnet-4-20250514',
  systemPrompt: 'You are a helpful assistant.',
  tools: [searchTool, calculatorTool],
  memory: { type: 'conversation', maxMessages: 50 }
});

// Simple execution
const result = await agent.run('Hello!');

// Streaming execution
for await (const chunk of agent.stream('Tell me a story')) {
  process.stdout.write(chunk.content);
}

Provider Abstraction
RANA abstracts away provider-specific details, allowing you to switch between OpenAI, Anthropic, Google, and other providers seamlessly.
import { configureProviders } from '@rana/core';

configureProviders({
  anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
  openai: { apiKey: process.env.OPENAI_API_KEY },
  google: { apiKey: process.env.GOOGLE_API_KEY }
});

// Works with any configured provider
const agent = new Agent({ model: 'gpt-4' }); // OpenAI
const agent2 = new Agent({ model: 'claude-sonnet-4-20250514' }); // Anthropic
const agent3 = new Agent({ model: 'gemini-pro' }); // Google

@rana/react - React Integration
The React package provides hooks that manage state and side effects for AI interactions.
useChat Hook
import { useChat } from '@rana/react';

function ChatComponent() {
  const {
    messages,
    input,
    setInput,
    send,
    isLoading,
    error
  } = useChat({
    api: '/api/chat',
    onFinish: (message) => console.log('Done:', message)
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>{m.content}</div>
      ))}
      <input value={input} onChange={e => setInput(e.target.value)} />
      <button onClick={send} disabled={isLoading}>Send</button>
    </div>
  );
}

useAgent Hook
import { useAgent } from '@rana/react';

function AgentComponent() {
  const {
    run,
    result,
    isRunning,
    toolCalls,
    stop
  } = useAgent({
    name: 'ResearchAgent',
    tools: [searchTool]
  });

  return (
    <div>
      <button onClick={() => run('Research AI trends')}>
        Start Research
      </button>
      {toolCalls.map(tc => (
        <div key={tc.id}>Using: {tc.tool}</div>
      ))}
      <div>{result}</div>
    </div>
  );
}

@rana/prompts - Prompt Management
Enterprise-grade prompt management with versioning, A/B testing, and analytics.
import { PromptManager } from '@rana/prompts';

const pm = new PromptManager({ workspace: 'my-app' });

// Register prompts with versioning
await pm.register('greeting', {
  template: 'Hello {{name}}, how can I help you today?',
  variables: ['name'],
  version: '1.0.0'
});

// Execute with automatic tracking
const result = await pm.execute('greeting', {
  variables: { name: 'John' }
});

// Get analytics
const stats = await pm.getAnalytics('greeting');

Data Flow
Understanding how data flows through RANA:
1. User Input - The user sends a message via the UI
2. React Hook - The hook captures the input and manages state
3. API Route - The request is sent to your API endpoint
4. Agent Processing - The agent processes it with tools and memory
5. LLM Request - The agent sends a request to the LLM provider
6. Streaming Response - The response streams back through all layers
7. UI Update - The React hook updates state and the UI re-renders
Key Design Patterns
Composition over Inheritance
RANA favors composition. You build complex agents by combining simple, focused components.
Convention over Configuration
Sensible defaults everywhere. You can customize anything, but you rarely need to.
Type-Safe by Default
Everything is fully typed. TypeScript inference works throughout the entire stack.
What's Next?
Now that you understand the architecture, let's set up your development environment in the next lesson.