RANA vs LangChain
LangChain is powerful but complex. RANA gives you the same capabilities with 90% less code and a gentler learning curve.
90% Less Code · 10x Faster Setup · 100% TypeScript
Simple Chat
LangChain (15 lines)
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
const model = new ChatOpenAI({
  modelName: "gpt-4",
  temperature: 0.7,
});
const messages = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("Hello!"),
];
const response = await model.invoke(messages);
console.log(response.content);

RANA (6 lines)
import { createRana } from '@rana/core';
const rana = createRana();
const response = await rana.chat('Hello!');
console.log(response.content);

With Tools
LangChain (23 lines)
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const weatherTool = tool(
  async ({ location }) => {
    return `Weather in ${location}: Sunny, 72°F`;
  },
  {
    name: "get_weather",
    description: "Get weather for a location",
    schema: z.object({
      location: z.string(),
    }),
  }
);
const model = new ChatOpenAI({
  modelName: "gpt-4",
}).bindTools([weatherTool]);
const response = await model.invoke("What's the weather in SF?");
// Handle tool calls manually...
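"Handle tool calls manually" means reading the tool calls off the response, running the matching tool yourself, and sending the results back before the model can produce a final answer. A minimal sketch of that round trip, assuming a recent @langchain/core where AIMessage exposes tool_calls and tools accept the call arguments via invoke:

import { HumanMessage, ToolMessage } from "@langchain/core/messages";

// Rebuild the conversation: the question, the model's tool-call message, then the tool results.
const followUp = [new HumanMessage("What's the weather in SF?"), response];
for (const toolCall of response.tool_calls ?? []) {
  const result = await weatherTool.invoke(toolCall.args); // run the requested tool
  followUp.push(new ToolMessage({ content: String(result), tool_call_id: toolCall.id ?? "" }));
}
const finalResponse = await model.invoke(followUp);
console.log(finalResponse.content);

This round trip is the boilerplate the RANA version below takes care of for you.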
RANA (14 lines)
import { createRana, createTool } from '@rana/core';
const rana = createRana();
const weather = createTool({
  name: 'get_weather',
  description: 'Get weather for a location',
  parameters: { location: { type: 'string' } },
  handler: ({ location }) => `Weather in ${location}: Sunny, 72°F`,
});
const response = await rana
  .tools([weather])
  .chat('What\'s the weather in SF?');

RAG Pipeline
LangChain (33 lines)
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
const embeddings = new OpenAIEmbeddings();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const docs = await splitter.createDocuments([text]);
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);
const retriever = vectorStore.asRetriever();
const prompt = ChatPromptTemplate.fromTemplate(`
Answer based on context: {context}
Question: {input}
`);
const documentChain = await createStuffDocumentsChain({
  llm: new ChatOpenAI(),
  prompt,
});
const retrievalChain = await createRetrievalChain({
  combineDocsChain: documentChain,
  retriever,
});
const response = await retrievalChain.invoke({ input: "question" });
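Unlike the chat examples above, the retrieval chain resolves to a plain result object rather than a message. Assuming the output shape of current createRetrievalChain versions, reading the answer and the retrieved documents looks like:

// The result bundles the generated answer with the documents used as context.
console.log(response.answer);
console.log(`${response.context.length} documents retrieved`);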
RANA (9 lines)
import { createRana } from '@rana/core';
import { createRAG } from '@rana/rag';
const rana = createRana();
const rag = createRAG({ rana });
await rag.ingest(text);
const response = await rag.query('question');

When to Choose RANA
Choose RANA if you:
- ✓ Want to ship fast without a steep learning curve
- ✓ Prefer TypeScript-first development
- ✓ Need built-in testing and cost tracking
- ✓ Value convention over configuration
Choose LangChain if you:
- Use Python as your primary language
- Have existing LangChain investments
- Need fine-grained control over chain configuration