The safest way to use AI at the enterprise

Credal protects your sensitive data across all AI apps. We ensure everyone at your company can get the benefits of AI like ChatGPT, without risking business secrets or protected data like PII making it into AI training. Think "Okta for AI". Credal gives you four layers of security:

1. Automatic redaction, anonymization, or user warnings when sensitive data is about to be sent off network.
2. Our Secure Chat UI and APIs provide audit logs on every interaction with external AI, so you can see what data has been shared with which providers and which Terms of Service or MSAs govern each one.
3. We provide a wrapper around all the major providers of Large Language Models, so we can automatically direct each request to the most appropriate AI (based on data sensitivity, context window, cost, etc.).
4. We put sensible MSAs in place with all major external providers, so your data cannot be retained for more than 30 days or used to train models.

Jobs at Credal.ai

- New York, NY, US · $140K-$200K · 0.75%-1.50% equity · 3+ years experience
- New York, NY, US · $160K-$220K · 1.00%-1.75% equity · 6+ years experience

Team Size: 4
Location: New York
Group Partner: Brad Flora

Active Founders

Ravin Thambapillai

I hanker after an honorable Goddam skull. I worked at Google, TheHutGroup (now THG) at Series A, GoCardless at Series A, and most recently at Palantir for 7 years, where I led the building of significant parts of America's emergency-public-health infrastructure, including what The Atlantic called "America's most reliable Pandemic data." My principles are listed here: https://docs.google.com/document/d/11-yFsiwOgYDUmHSWe8J0B2RU2a0JbO2yYUwqM2fjDbE


Jack Fischer

Five years at Palantir in various technical capacities on projects ranging from airline reliability to defense computer vision. Palantir's US commercial hiring manager 2020-2022. Worked on several of the precursors to H1 (YC W20) and developed their early entity resolution algorithms. Now enabling secure AI!


Company Launches

Hi there! Ravin and Jack here from Credal.ai


Credal protects your sensitive data across all AI apps. We give everyone at your company access to the latest AI models via our secure Chat UI, integrated with your data, while management gets granular control over what data is shared with each provider.

⚔️ The Problem:

CEOs of every company from Microsoft to our YC W23 batchmates understand the massive productivity gains available through AI applications. But huge concerns exist over adopting these tools at the enterprise: what’s happening to our data once we hand it over to AI providers? The landscape of providers is already large and fast-growing. Many have terms of service that permit training general models on your data. As the usage of these tools explodes, businesses risk losing control over what data has been shared with whom, when, and what technical and contractual safeguards are in place to protect that data from inadvertently ending up in the hands of competitors via an AI model’s training dataset.


Credal is the gatekeeper for your data, ensuring that all data sent to AI is protected by 3 layers of security:

  1. Credal identifies sensitive data using our (self-hosted) detection models alongside your existing data classifications, and lets you automatically redact, anonymize, block, or simply warn the user if sensitive data is about to be sent to a provider.
  2. For the major endpoints like chat or text completion, Credal allows you to seamlessly switch between providers: letting you take advantage of the latest models, or highest privacy models as they are released, without locking your company into a single vendor.
  3. Finally, Credal protects you with standardized Terms of Service and MSAs that ensure you have total control over your data. Credal links your audit logs to the specific provider associated with each request, allowing full transparency into both the technical and contractual safeguards that govern how any piece of your data could be used.
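To make the first layer concrete, here is a minimal toy sketch of the detect-then-redact/block/warn pattern. The regex patterns, policy names, and function are our own placeholders for this post, not Credal's actual detection models or API:

```python
import re

# Toy detectors standing in for real (self-hosted) detection models.
# Patterns and names are illustrative only.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def apply_policy(text: str, policy: str = "redact"):
    """Check outgoing text before it leaves the network.

    Returns (outgoing_text, findings). Policy can be "redact",
    "block", or "warn" (warn passes the text through but still
    reports what it found, so the event can be audit-logged).
    """
    findings = [name for name, rx in PATTERNS.items() if rx.search(text)]
    if not findings:
        return text, findings
    if policy == "block":
        raise PermissionError(f"Blocked outbound request: found {findings}")
    if policy == "redact":
        for name, rx in PATTERNS.items():
            text = rx.sub(f"[{name.upper()} REDACTED]", text)
    return text, findings

safe, found = apply_policy("Contact jane@acme.com about SSN 123-45-6789")
# safe -> "Contact [EMAIL REDACTED] about SSN [SSN REDACTED]"
```

The key design point is that the check runs before any provider call, so the same gate can serve every policy: redaction rewrites the text, blocking stops the request, and warning simply records the findings for the audit log.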

Our secure chat UI can automatically route your request to the best AI for your question (or you can specify manually)
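As a rough picture of what "route to the best AI" means, consider this toy decision function. The thresholds and model names are placeholders we made up for illustration, not Credal's real routing logic:

```python
# Toy sketch of request routing: pick a destination model based on
# data sensitivity, prompt length, and cost. Names and thresholds
# are placeholders, not Credal's actual rules.
def route_request(prompt: str, contains_sensitive: bool) -> str:
    if contains_sensitive:
        return "self-hosted-model"    # sensitive data never leaves the network
    if len(prompt) > 8000:
        return "large-context-model"  # long documents need a big context window
    return "general-chat-model"       # cheapest reasonable default
```

A real router would weigh more signals (cost per token, provider ToS, latency), but the shape is the same: a policy function in front of every provider, with the user free to override it manually.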

With granular audit logs on each request, per data source, including what was redacted:

Who needs this, and why did we build it?

Our current customers are security-conscious companies trying to use AI to get an edge in their operations and product. We noticed early on building AI applications that customers were really nervous about handing data over to AI models they didn’t fully understand. Many enterprises were turning ChatGPT off entirely, or banning its use on anything remotely sensitive. Today, our customers use Credal for a range of problems, from basic things like using ChatGPT to summarize meeting notes, to using complicated combinations of AI (like Claude + GPT-4) to automatically structure company documents. Either way, Credal is giving our customers the confidence they need to use cutting-edge AI on sensitive data.

Meet the Team:

Jack and I met in 2019 when we co-led Palantir’s engagement with a multibillion-dollar Life Sciences conglomerate, steering it from an initial pilot to an enterprise deal so critical to their business that Palantir going bankrupt was mentioned as a risk on the customer’s S-1 filing.

I started my career at Google, and since then have been both in the trenches and leading teams at THG at Series A, GoCardless at Series A, and Palantir. Jack started his career at H1 (at pre-seed), before joining Palantir. We’ve both led highly acclaimed teams at Palantir: The Atlantic called the output of work I led for 18 months “America’s most reliable pandemic data,”[1] and the Washington Post said of a team Jack led for a year: “With these systems aiding brave Ukrainian troops, the Russians probably cannot win.”[2]

We’re bringing a wealth of expertise in handling the most sensitive data imaginable to what we see as one of the central problems of our time: building trust in AI.


If you are an enterprise with sensitive data, but you want to be able to use that data with AI, grab time to chat with us. Enterprises that sign up before Demo Day (April 5th) are eligible for 50% off!


[1] https://www.theatlantic.com/health/archive/2021/01/hhs-hospitalization-pandemic-data/617725/ (The article discusses whether the Biden administration would keep paying for the ‘vital system’ that HHS had procured under Trump. In case you’re wondering what happened, it did, and even expanded the contract.)

[2] https://www.washingtonpost.com/opinions/2022/12/19/palantir-algorithm-data-ukraine-war/
