Kobalt Labs

AI copilot to automate risk ops for fintechs and banks

Kobalt has built an AI copilot that automates manual risk and compliance operations for fintechs and banks. Given a vendor contract, Kobalt's reasoning engine ingests internal policies and procedures, legal commitments, and past privacy and compliance assessments, syncs with external legislation, and instantly surfaces and tracks risks. We're helping community, regional, and sponsor banks increase revenue and scale their partnership and vendor volume, and we're helping fintechs conduct partner diligence in 1/10 the time, all without increasing headcount.

Founded: 2023
Team Size: 2
Location: New York
Group Partner: Aaron Epstein

Active Founders

Kalyani Ramadurgam

Kalyani is the CEO of Kobalt Labs. She previously conducted AI research at Stanford, built financial security products at Apple, and built health data analysis tools at Zenysis used by the governments of 13 countries. She graduated from Stanford with an MS in Computer Science (concentration in AI) and a BS in Computer Science (AI) with a minor in Human Rights.

Ashi Agrawal

Ashi is the CTO of Kobalt Labs. She previously worked on infra tooling as a senior software engineer at Affirm, helped launch Meaningful Matches at Nuna as a KPCB Fellow, and worked on reliability of internal latency tracing at Meta. She graduated from Stanford with a BS in Computer Science (Theory) and a minor in Dance.

Company Launches

TL;DR: It's risky to let GPT access private data or take actions (DB writes, API calls, chatting with users, etc.). Kobalt Labs enables companies to securely use GPT or other LLMs without being blocked by data privacy issues.

Hi! We’re Ashi Agrawal and Kalyani Ramadurgam, the founders of Kobalt Labs.

❌ What’s the problem?

  1. Data privacy is one of the most significant blockers to deep LLM adoption. We've worked at companies that struggle to use LLMs due to security concerns; healthcare companies are especially vulnerable.
  2. Companies need a way to use cloud-based models without putting their PII, PHI, MNPI, or any other private information at risk of exposure. BAAs don’t actually enforce security at the API layer.
  3. Companies with sensitive data are acutely at risk when using an LLM. Prompt injection, malicious inputs, and data leakage are just the tip of the iceberg as LLM usage becomes more sophisticated.

✨ What do we do?

Our model-agnostic API (usage sketch after the list):

  • Anonymizes and replaces PII and other sensitive data – including custom entity types – from structured and unstructured input
  • Can also replace PII with synthetic “twin” data that behaves consistently with the original content
  • Continuously monitors model output for potential sensitive data leakage
  • Flags user inputs for prompt injection or malicious activity
  • Aligns model usage with compliance frameworks and data privacy standards
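
To make the flow concrete, here's a minimal sketch of what sitting behind a privacy proxy like this could look like. The proxy base URL, the `/anonymize` and `/deanonymize` endpoints, the JSON fields, and the `session_id` handshake are all assumptions for illustration, not Kobalt's published API; the provider call shown is OpenAI's standard chat completions endpoint.

```python
# Minimal sketch of a privacy proxy in front of a cloud LLM. All proxy
# endpoints, JSON fields, and the session_id handshake are illustrative
# assumptions, not Kobalt's published API.
import requests

PROXY = "https://privacy-proxy.example.com/v1"  # hypothetical base URL
PROXY_HEADERS = {"Authorization": "Bearer YOUR_PROXY_KEY"}


def complete_safely(prompt: str) -> str:
    """Send a prompt to an LLM without exposing raw PII to the provider."""
    # 1. Swap PII (including custom entity types) for synthetic "twins"
    #    before the prompt ever leaves your infrastructure.
    anon = requests.post(
        f"{PROXY}/anonymize",
        headers=PROXY_HEADERS,
        json={"text": prompt, "entities": ["PERSON", "EMAIL", "SSN"]},
        timeout=10,
    ).json()

    # 2. Forward the sanitized prompt to any provider (OpenAI shown here,
    #    via its standard chat completions endpoint).
    llm = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": "Bearer YOUR_OPENAI_KEY"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": anon["text"]}],
        },
        timeout=30,
    ).json()
    output = llm["choices"][0]["message"]["content"]

    # 3. Screen the output for leaked sensitive data, then map the
    #    synthetic twins back to the original values.
    restored = requests.post(
        f"{PROXY}/deanonymize",
        headers=PROXY_HEADERS,
        json={"text": output, "session_id": anon["session_id"]},
        timeout=10,
    ).json()
    return restored["text"]
```

The key property is that raw sensitive values never reach the model provider: it only ever sees the synthetic twins, and the mapping back to the originals stays on your side of the boundary.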

👉 Why are we different?

  • Our sole focus is optimizing security and data privacy while minimizing latency. All traffic is encrypted, we don’t hold any user inputs, and we score highly on prompt protection and PII detection benchmarks.
  • On the backend, we're using multiple models of varying performance and speed, filtering inputs through a model cascade to keep latency as low as possible (illustrated below).
  • We’re compatible with OpenAI, Anthropic, and more, including self-hosted models.

🙏 Our ask:

Do you work with lots of sensitive data or know someone who does? Ping us at hi@kobaltlabs.com :)