At Mem0, we are building the long-term memory for LLMs to enable personalization for the GenAI stack. This enables LLMs to remember past interactions and provide more personalized responses.
Taranjeet Singh is the co-founder & CEO of Mem0. He joined Khatabook (YC S18) as its first growth engineer & rapidly transitioned to Senior PM. He began his software engineering career at Paytm (India's PayPal) & witnessed Paytm's meteoric rise to become a household name. He built an AI-powered tutoring app that was featured at Google I/O. He co-created EvalAI, an open-source Kaggle alternative, with Deshraj & grew it to 1.6K GitHub stars.
Deshraj Yadav is the co-founder and CTO of Mem0. He is broadly interested in the field of Artificial Intelligence and Machine Learning Infrastructure. He led the AI Platform at Tesla Autopilot, which enabled large-scale training, model evaluation, monitoring, and observability for Tesla's full self-driving development. Prior to that, Deshraj created EvalAI, an open source ML platform, as his master's thesis at Georgia Tech.
TL;DR
Mem0 is an open-source memory layer for AI applications. It solves the problem of stateless LLMs by efficiently storing and retrieving user interactions, enabling personalized AI experiences that improve over time. Our hybrid datastore architecture combines graph, vector, and key-value stores to make AI apps personalized and cost-efficient. Watch our explainer video here.
—
Hey everyone! We're Taranjeet and Deshraj, and we built Mem0 to solve a big problem we faced with LLMs while building Embedchain (an open-source RAG framework with 2M+ downloads). LLMs don’t have memory, so they forget everything after each session. This leads to inefficient and repetitive interactions, making it hard to create personalized AI experiences. Think about having to repeat your preferences over and over again, and how frustrating that gets! Mem0 changes that.
❌ The Problem
LLMs are stateless—they don’t remember anything between sessions. Every time you interact with them, you have to provide the same context, which gets repetitive and wastes computational resources. This makes AI apps less useful and personalized over time.
✨ Our Solution
Mem0 adds a memory layer to AI applications, making them stateful: they can store and recall user interactions, preferences, and relevant context. This way, AI apps evolve with every interaction, delivering more personalized and relevant responses without stuffing large context blocks into each prompt.
To make this possible, we needed to create a system that could efficiently manage and retrieve all the relevant information AI apps collect over time. That’s where Mem0’s hybrid datastore architecture comes in, making AI smarter and more efficient as it learns.
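To make the idea concrete, here is a rough illustration of what a memory layer does (a toy sketch, not Mem0's actual implementation): it saves each interaction per user, retrieves the memories most relevant to a new query, and prepends them to the prompt. The class name and keyword-overlap scoring below are simplifications we chose for illustration; a real system would use embeddings and a vector index.

```python
from collections import defaultdict

class MemoryLayer:
    """Toy memory layer: stores per-user memories and retrieves
    the ones most relevant to a new query by keyword overlap."""

    def __init__(self):
        self.memories = defaultdict(list)  # user_id -> list of memory strings

    def add(self, user_id, text):
        self.memories[user_id].append(text)

    def search(self, user_id, query, top_k=3):
        # Score each memory by word overlap with the query
        # (a real system would use embeddings and a vector index).
        q = set(query.lower().split())
        scored = [(len(q & set(m.lower().split())), m)
                  for m in self.memories[user_id]]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [m for score, m in scored[:top_k] if score > 0]

    def build_prompt(self, user_id, query):
        # Prepend relevant memories so the LLM sees personal context
        # without resending the entire conversation history.
        context = "\n".join(self.search(user_id, query))
        return f"Relevant memories:\n{context}\n\nUser: {query}"

memory = MemoryLayer()
memory.add("alice", "Alice is vegetarian and allergic to nuts")
memory.add("alice", "Alice prefers short answers")
print(memory.build_prompt("alice", "suggest a dinner recipe for alice"))
```

Because only the retrieved memories are injected, the prompt stays small even as the user's history grows.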
⚙️ How it works
Mem0 employs a hybrid datastore architecture that combines graph, vector, and key-value stores to store and manage memories effectively. Here's the breakdown:
- Vector store: captures the semantic meaning of memories, so relevant context can be retrieved by similarity rather than exact match.
- Graph store: tracks relationships between the entities (people, preferences, events) that appear across interactions.
- Key-value store: holds discrete facts and preferences for fast, direct lookup.
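To give a feel for why each of the three stores earns its place, here is a deliberately tiny sketch (our own illustration with hypothetical names, not Mem0's code): a dict plays the key-value store, bag-of-words cosine similarity stands in for the vector store, and an adjacency list stands in for the graph store.

```python
import math
from collections import defaultdict

class HybridStore:
    """Toy hybrid datastore: key-value for discrete facts, a
    bag-of-words 'vector' index for similarity search, and an
    adjacency-list graph for entity relationships."""

    def __init__(self):
        self.kv = {}                   # key-value: direct fact lookup
        self.vectors = []              # vector: (word counts, original text)
        self.graph = defaultdict(set)  # graph: entity -> related entities

    # --- key-value store: O(1) retrieval of known facts ---
    def set_fact(self, key, value):
        self.kv[key] = value

    def get_fact(self, key):
        return self.kv.get(key)

    # --- vector store: cosine similarity over word counts ---
    def add_text(self, text):
        bow = defaultdict(int)
        for w in text.lower().split():
            bow[w] += 1
        self.vectors.append((bow, text))

    def similar(self, query):
        qbow = defaultdict(int)
        for w in query.lower().split():
            qbow[w] += 1

        def cosine(a, b):
            dot = sum(a[w] * b[w] for w in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        return max(self.vectors, key=lambda p: cosine(qbow, p[0]))[1]

    # --- graph store: relationships between entities ---
    def relate(self, a, b):
        self.graph[a].add(b)
        self.graph[b].add(a)

    def related(self, entity):
        return self.graph[entity]
```

Each store answers a different kind of question: "what is Alice's diet?" (key-value), "which memory is most like this query?" (vector), and "what is Alice connected to?" (graph). A production system swaps in real embeddings and dedicated databases, but the division of labor is the same.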
Watch this video for a demo of our playground in action.
🙏 Our Asks
We are building self-improving memory for LLM apps that enables seamless personalization for end-users. Our product offers APIs that allow developers to store and manage individual user preferences in a centralized layer.
This smart memory continuously learns from user interactions, ensuring preferences are consistently applied no matter which LLM the user engages with. It provides a personalized experience across different AI platforms and applications.
By offering our smart memory as a service, we empower developers to integrate advanced personalization capabilities into their products, significantly reducing complexity.
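To sketch what "preferences applied no matter which LLM" means in practice, here is a minimal illustration (hypothetical names and structure, not the Mem0 API): preferences live in one centralized layer and are injected into the prompt for any backend.

```python
class PreferenceLayer:
    """Toy centralized preference layer: preferences are stored once
    per user and injected into prompts for any LLM provider."""

    def __init__(self):
        self.prefs = {}  # user_id -> dict of preferences

    def update(self, user_id, **prefs):
        self.prefs.setdefault(user_id, {}).update(prefs)

    def system_prompt(self, user_id):
        prefs = self.prefs.get(user_id, {})
        lines = [f"- {k}: {v}" for k, v in sorted(prefs.items())]
        return "User preferences:\n" + "\n".join(lines)

def ask(provider, layer, user_id, question):
    # The same stored preferences are applied regardless of
    # which LLM backend handles the request.
    prompt = layer.system_prompt(user_id) + "\n\n" + question
    return f"[{provider}] {prompt}"

layer = PreferenceLayer()
layer.update("alice", tone="concise", language="English")
print(ask("provider-a", layer, "alice", "Summarize this article."))
print(ask("provider-b", layer, "alice", "Summarize this article."))
```

The developer updates preferences in one place, and every provider's prompt reflects them, which is what keeps the personalization consistent across platforms.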