
AI Support Agent for Enterprises.

Active Founders
Varun Vummadi
Founder
Prompt Engineer at GigaML
Esha Dinne
Founder
Hi, I'm Esha. I'm currently CTO of Giga ML and an IIT Kharagpur CS '23 grad, where I ranked 3rd institute-wide. Before Giga ML, I was a systems engineering intern at Quadeye Securities and led the Math Club at IIT. Outside of work, I watch anime and read novels.
Company Launches
Giga ML's X1 Large 32k - The most powerful on-prem deployable model

We are announcing our first lineup of on-premise LLMs, X1 Large 8k and 32k: pre-trained and fine-tuned versions of Llama 2 70B that outperform Claude 2 on MT-Bench with a score of 8.1 vs 8.0. (A white paper with results on all benchmarks is coming soon.)

X1 Large is available for further fine-tuning and pre-training. Try it out here and let us know what you think!
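
For a rough sense of what querying an on-prem deployment looks like from the caller's side, here is a minimal Python sketch using Hugging Face Transformers. The checkpoint path, prompt, and sampling parameters are placeholders, not Giga ML's actual interface.

```python
# Minimal sketch: prompting a locally hosted X1-Large-style checkpoint with
# Hugging Face Transformers. The model path below is hypothetical and should
# be replaced with the local path of your deployed weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/models/x1-large-32k"  # hypothetical on-prem checkpoint path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

prompt = "Summarize the key obligations in this contract clause: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a response; sampling parameters are illustrative defaults.
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```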

The problems today:

  1. Pre-training: Existing large language models (LLMs) cannot be further pre-trained on an organization's own text, which limits their effectiveness in specialized domains like healthcare, legal, and finance (see the sketch after this list).
  2. Fine-tuning: The inability to fine-tune LLMs for specific output structures or formats restricts their adaptability in areas that require tailored responses.
  3. Privacy: Organizations handling sensitive customer data face trust and compliance challenges when routing it through third-party providers such as OpenAI and Anthropic.
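
To give context on what continued pre-training on domain text typically involves, here is a minimal sketch using Hugging Face Transformers with a LoRA adapter via PEFT. It is a generic illustration under assumed file paths and hyperparameters, not Giga ML's actual training pipeline.

```python
# Minimal sketch: continued pre-training of a Llama-2-style model on
# domain-specific text with a LoRA adapter (generic illustration only).
# Paths and hyperparameters below are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "/models/x1-large-32k"   # hypothetical local checkpoint
CORPUS = "domain_corpus.txt"          # e.g. one document per line

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# Attach a small LoRA adapter so only a fraction of the weights are trained.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Tokenize the raw domain corpus for causal language modeling.
dataset = load_dataset("text", data_files=CORPUS)["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="x1-domain-adapter",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```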

X1 Large:

  • Performance: Achieves an MT-Bench score of 8.1 after fine-tuning and pre-training, surpassing Claude 2.
  • Customization: Our unique pre-training and fine-tuning capabilities provide unrivaled performance for industry-specific use cases.
  • Security: Offers secure on-premise deployment, ensuring data privacy for enterprises.
  • State-of-the-art RAG: We're partnering with Mano AI to bring state-of-the-art retrieval-augmented generation (RAG) to on-prem deployments over petabytes of your data (a generic sketch of the pattern follows this list).
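
To illustrate what a RAG layer does at a high level, here is a minimal sketch that embeds documents, retrieves the closest matches for a query, and builds a grounded prompt. It uses sentence-transformers with an in-memory nearest-neighbor lookup as a stand-in; it is not the Mano AI integration itself, and the documents and model name are illustrative.

```python
# Minimal RAG sketch: embed documents, retrieve the most similar ones for a
# query, and stuff them into the prompt of a locally hosted model.
# Generic illustration only; not the Mano AI integration.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Refunds are processed within 5 business days.",
    "Enterprise plans include a dedicated support engineer.",
    "Customer data is stored inside the customer's own VPC.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(-scores)[:k]]

query = "How long do refunds take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
# `prompt` would then be sent to the on-prem X1 Large deployment for generation.
print(prompt)
```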

Our ask:

Try out the demo here with your prompt and let us know how it performs! Email us at founders@gigaml.com if you want fine-tuning and pre-training access, in-cloud or on-premise.

Coming soon:

X1 Large Med: Continued pre-training on medical data.

X1 Large Law: Continued pre-training on legal databases across countries.

Previous Launches
We supercharge open-source LLMs to outperform GPT-4 for your use case.
Giga
Founded: 2024
Batch: Summer 2023
Team Size: 30
Status: Active
Location: San Francisco
Primary Partner: Harj Taggar