
Mano AI

An end-to-end LLM platform hosted in your cloud

With us, you get an entire LLM stack in your cloud:

  • The ETL data pipelines that maintain your "Second Source of Truth"
  • Managed open-source vector stores like Qdrant and Weaviate that automatically scale as your data grows.
  • A managed monitoring solution to track your pipelines and vector stores.
  • A simple UI that is ready to plug and play with role-based access controls.

We then power all your enterprise LLM use cases: semantic search, document Q&A, internal ChatGPT, and more.

Mano AI
Founded: 2023
Team Size: 2
Location: New York
Group Partner: Dalton Caldwell

Active Founders

Nicolas Raga

I'm the CEO and co-founder of Mano AI. Before starting Mano, I worked on the distributed training infrastructure for Amazon's LLMs. I also spent some time building core features for Kinesis Data Analytics, AWS's real-time data streaming service. When I'm not working, I'm all about espresso, adrenaline, fitness, and Venezuelan rap 🇻🇪


Omar Mihilmy

I’m the CTO and co-founder of Mano AI. Prior to starting Mano, I helped Amazon scale their last-mile delivery infrastructure to 4 billion packages. Along the way, I made significant contributions to renowned OSS projects such as Node.js and the AWS SDK, which earned me one of the fastest promotions to senior engineer. I was the fastest swimmer in Egypt for multiple years, so when I’m not building you can find me in lane 1 😉


Company Launches

Mano lets you run LLMs on your own data without leaving your cloud. We provide the data pipelines and enterprise controls you need to move fast and stay safe. Try it out now!

Background

Enterprises are eager to use LLMs on their data. However, their only two options are to either expose all their data to a third-party service or spend the next year trying to hire an ML team. This is where Mano comes in.

Over the next few years, enterprises that are able to properly leverage LLMs on their business data will have a significant competitive advantage. To do so, the following has to be true:

  1. You’ll need a strong data layer that converts your business data into a format LLMs can interact with. We call this your “Second Source of Truth”: data pipelines that extract, embed, index, and retrieve data for your Retrieval-Augmented Generation (RAG) workflows.
  2. You’ll need to fine-tune LLMs on your business data and develop specialized distilled models for specific use cases such as internal search, enterprise question-answering, and customer support.
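Conceptually, the extract → embed → index → retrieve flow behind a "Second Source of Truth" looks like the sketch below. This is a toy illustration, not Mano's implementation: it uses a stand-in hash-based embedding and naive fixed-size chunking where a real pipeline would call an embedding model and a smarter splitter.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy deterministic embedding: hash character trigrams into a
    # fixed-size vector. A real pipeline would call an embedding model.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class SecondSourceOfTruth:
    """Minimal extract -> embed -> index -> retrieve flow."""

    def __init__(self):
        self.index = []  # list of (chunk, vector) pairs

    def ingest(self, document: str, chunk_size: int = 80) -> None:
        # Extract: naive fixed-size chunking of the raw document.
        chunks = [document[i:i + chunk_size]
                  for i in range(0, len(document), chunk_size)]
        # Embed + index each chunk.
        for chunk in chunks:
            self.index.append((chunk, embed(chunk)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Retrieve: rank chunks by cosine similarity to the query.
        q = embed(query)
        ranked = sorted(self.index, key=lambda cv: cosine(q, cv[1]),
                        reverse=True)
        return [chunk for chunk, _ in ranked[:k]]

store = SecondSourceOfTruth()
store.ingest("Invoices are processed nightly by the billing service. "
             "Refunds require manager approval within 30 days.")
results = store.retrieve("refund approval policy", k=1)
```

The retrieved chunks are what get injected into the LLM prompt in a RAG workflow; swapping the toy `embed` for a real model changes nothing structurally.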

We’re excited to provide you the data layer, and we’ve partnered with @GigaML to provide the fine-tuned models. That way you own your infrastructure and have unbounded customizability.

Solution

We provide the following stack:

  • The ETL data pipelines that maintain your "Second Source of Truth"
  • Managed open-source vector stores like Qdrant and Weaviate that automatically scale as your data grows.
  • Managed monitoring solution to track your pipelines and vector stores.
  • A simple UI that is ready to plug and play with role-based access controls.

Why Trust Us?

  • Nico helped build Amazon's largest GPU cluster, which trains their proprietary foundation model. That work led us to submit 2 patents.
  • I (Omar) built the tools behind Prime Video and Last Mile. This means that every time a package is late or a frame is dropped from your favorite show, our anomaly detection systems immediately remediate it.
  • We've also been working together for quite a while. We started collaborating 2 years ago, contributing to projects for the FBI and CMS.

What Makes Us Different?

We focus on three main things:

  1. Private Cloud: We believe in the importance of owning your infrastructure, so we've built everything using infrastructure as code. This means we deploy a single infrastructure template via CloudFormation or similar services.
  2. Customizable: We provide access to your "Second Source of Truth", a feature that no other company offers. This means you don't have to rely on us for fully customized solutions.
  3. Comprehensive: Our end-to-end solution includes everything from chunking, encoding, and embedding, to monitoring and scaling.

These come with the following benefits:

  • Expansive: You can easily use @GigaML's fine-tuned LLMs and embedding models, all with the same interface, data, and access controls.
  • Petabyte-Scale: Built by two former AWS engineers, we have a battle-tested playbook for launching scalable services that power more than 50% of the internet.
  • Secure: Your data. Your cloud. Our engine.
  • Synchronized: At first glance, a one-time data dump might appear straightforward. However, keeping data in sync requires tracking change history in order to implement fine-grained vector updates. We've developed proprietary technology to keep your costs low.
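The idea behind fine-grained vector updates can be illustrated with a content-hash change tracker. This is a simplified sketch under our editorial assumptions (not Mano's proprietary technology): chunks whose content hash already exists are never re-embedded, and vectors no longer referenced by any document are dropped, so embedding cost scales with what changed rather than with total data size.

```python
import hashlib

def chunk(document: str, size: int = 80) -> list[str]:
    # Naive fixed-size chunking; stands in for a real splitter.
    return [document[i:i + size] for i in range(0, len(document), size)]

class SyncIndex:
    """Track content hashes so only changed chunks are re-embedded."""

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn
        self.vectors = {}  # chunk hash -> vector
        self.docs = {}     # doc id -> ordered list of chunk hashes

    def sync(self, doc_id: str, document: str) -> int:
        """Re-embed only unseen chunks; return how many were embedded."""
        chunks = chunk(document)
        hashes = [hashlib.sha256(c.encode()).hexdigest() for c in chunks]
        embedded = 0
        for h, c in zip(hashes, chunks):
            if h not in self.vectors:  # unchanged chunks are skipped
                self.vectors[h] = self.embed_fn(c)
                embedded += 1
        self.docs[doc_id] = hashes
        # Garbage-collect vectors no longer referenced by any document.
        live = {h for hs in self.docs.values() for h in hs}
        for h in list(self.vectors):
            if h not in live:
                del self.vectors[h]
        return embedded

# Stub embedder for demonstration; a real one would call a model.
index = SyncIndex(embed_fn=lambda text: [float(len(text))])
first = index.sync("wiki/billing", "Refunds require manager approval. " * 3)
second = index.sync("wiki/billing", "Refunds require manager approval. " * 3)
# first embeds every chunk; second embeds nothing, since no hash changed.
```

Re-syncing an unchanged document costs zero embedding calls, which is the property that keeps incremental updates cheap at petabyte scale.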

Our Asks

If you are a CTO or engineering leader who wants to own your LLM data retrieval stack, or a security-conscious startup past Series B/C that wants to start adopting AI, join our waitlist at usemano.com and reach out to founders@usemano.com. We are selecting companies based on data lake size and contract value.