
Tinfoil

Tinfoil makes your AI workflows secure, verifiable, and private.

Tinfoil makes it easy to make your AI workloads provably private, without changing your code. You get the privacy of on-prem deployments while running in the cloud. We've built a full-stack platform on top of the latest NVIDIA GPUs with confidential computing capabilities, so you don't have to trade off performance for privacy. Under the hood, we integrate a suite of recent advances in secure hardware technologies, in particular NVIDIA's confidential compute mode, available on Hopper and Blackwell. Combined with Tinfoil's software stack, companies can prove their security claims rather than relying on unenforceable statements like "trust us, we don't log your queries." Tinfoil guarantees that all data stays private and cannot be accessed by anyone other than the end user, not even Tinfoil or the cloud provider it's processed on.

The founding team combines deep academic and industry experience in security and Internet protocols. Tanya was previously a systems engineer at Cloudflare, where she built Internet security protocols used by billions and contributed to the Workers AI platform. Jules and Sacha recently completed their PhDs at MIT, where they worked on secure hardware and privacy-preserving technologies; Jules has also worked on NVIDIA's confidential computing team. Nate has deep infrastructure experience, starting in high school when he built a mini-CDN that earned him an internship at Cloudflare.

Tinfoil was born from our personal frustration with the false choice between access to powerful AI and data privacy. As AI becomes tightly integrated into our lives and business workflows, it will touch an ever-increasing amount of sensitive personal and proprietary data. Right now, the only solutions are DPAs, band-aids like PII redaction, or the nuclear option of running everything on-prem. None of these scale.
By making it easy to deploy provable privacy, Tinfoil promises to unlock significantly deeper AI adoption and integration, just as TLS enabled e-commerce to flourish by securing credit cards on the network. If OpenAI is creating "God in a box", we are putting God in a black box, so Satan can't spy.
Active Founders
Tanya Verma
Founder
Co-founder @ Tinfoil | Systems, Cryptography and AI @ Cloudflare | CubeSats & CS @ UIUC
Jules Drean
Founder
Jules holds an MIT PhD in secure hardware and confidential computing, with industry experience at Microsoft Research and NVIDIA working on the technologies underlying Tinfoil. During his PhD, Jules built secure enclaves from the ground up and studied how to attack and defend these systems against advanced microarchitectural attacks. His work also explored using trusted hardware to securely deploy advanced cryptographic primitives such as FHE and MPC.
Nate Sales
Founder
Confidential AI @ Tinfoil. Hacking on systems, security, and applied physics.
Sacha Servan-Schreiber
Founder
Co-founder of Tinfoil | PhD at MIT working on cryptography and privacy.
Tinfoil
Founded: 2024
Batch: Spring 2025
Team Size: 4
Status: Active
Location: San Francisco
Primary Partner: Diana Hu
Company Launches
Tinfoil: Verifiable Privacy for Cloud AI
See original launch post

Sending your most sensitive or proprietary data to OpenAI et al?

Sick of deals getting stalled during enterprise or government security reviews?

Hate deploying to every customer's bespoke on-prem infrastructure?

Say less ✋

We're Tanya, Jules, Sacha and Nate from Tinfoil. What we're building sidesteps these concerns: we host models while guaranteeing zero data access and retention.

We let you run powerful models like Llama and DeepSeek R1 on cloud GPUs without having to trust us, or the cloud provider, with your private data.

Since AI performs better the more context you give it, we think solving AI privacy will unlock more valuable AI applications, just as TLS enabled e-commerce to flourish: people could send their credit card info knowing it wouldn't be stolen by someone sniffing packets.

https://youtu.be/wnwI22qT-QQ

Who we are

We come from backgrounds in cryptography, security, and infrastructure. Jules did his PhD in trusted hardware and confidential computing at MIT, and worked with NVIDIA and Microsoft Research on the same, Sacha did his PhD in privacy-preserving cryptography at MIT, Nate worked on privacy tech like Tor, and I (Tanya) was on Cloudflare's cryptography team.

We were unsatisfied with band-aid techniques like PII redaction (which is actually undesirable in some cases like AI personal assistants) or “pinky promise” security through legal contracts like DPAs. We wanted a real solution that replaced trust with provable security, and addressed our personal frustration with the false choice between exploring cutting-edge AI capabilities and personal privacy.

The magic sauce

Running models locally or on-prem is an option, but it can be expensive and inconvenient. Fully Homomorphic Encryption (FHE) is not practical for LLM inference for the foreseeable future. The next best option is secure enclaves: isolated environments on the chip that no other software running on the host machine can access. This lets us perform LLM inference in the cloud while proving that no one, not even Tinfoil or the cloud provider, can access the data. And because these security mechanisms are implemented in hardware, there is minimal performance overhead.
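To make the trust model concrete, here is a toy sketch of remote attestation, the mechanism enclaves use to prove what code they are running. This is an illustration of the concept, not Tinfoil's actual protocol: a real enclave's measurement is computed and signed by the hardware itself, simulated here with a plain HMAC key.

```python
import hashlib
import hmac

# Stand-in for the chip's hardware root of trust (simulated, not real).
HARDWARE_KEY = b"simulated-hardware-root-key"

def measure(enclave_code: bytes) -> bytes:
    """Digest of the code loaded into the enclave (its 'measurement')."""
    return hashlib.sha256(enclave_code).digest()

def attest(enclave_code: bytes) -> tuple[bytes, bytes]:
    """The enclave reports its measurement, signed by the hardware key."""
    m = measure(enclave_code)
    sig = hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest()
    return m, sig

def client_verifies(measurement: bytes, sig: bytes, expected_code: bytes) -> bool:
    """Before sending any private data, the client checks the signature
    AND that the measured code is exactly what it expects."""
    expected_sig = hmac.new(HARDWARE_KEY, measurement, hashlib.sha256).digest()
    return (hmac.compare_digest(sig, expected_sig)
            and measurement == measure(expected_code))

code = b"llm-inference-server-v1"
m, sig = attest(code)
assert client_verifies(m, sig, code)                    # expected code: accepted
assert not client_verifies(m, sig, b"tampered-server")  # tampered code: rejected
```

The key property: if the host swaps in different code, the measurement changes, the check fails, and the client never sends its data.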

Until recently, secure enclaves were only available on CPUs. But NVIDIA confidential computing has now brought these hardware-based capabilities to their latest GPUs, making it possible to run GPU-based workloads in a secure enclave.

Don't just take our word for it: Tinfoil has negligible performance overhead despite its strong security and privacy guarantees.

Getting Started

The best part is you can start using confidential AI chat and inference with Tinfoil today!

Try our private chat: tinfoil.sh/chat.

https://www.youtube.com/watch?v=FuFszT0_vXk

Build with our API: a drop-in replacement for the OpenAI chat-completions API.

We support several popular open-source models like DeepSeek R1 70B, Llama 3.3 70B, and Mistral 3.1 24B. Missing your favorite model? Just ask us to add it!
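Because the API is OpenAI-compatible, calling a Tinfoil-hosted model looks like any chat-completions request with a different base URL. The sketch below builds such a request with only the standard library; the endpoint URL and model identifier are placeholders, so check Tinfoil's docs for the real values.

```python
import json
import urllib.request

# Placeholder values — substitute the real endpoint, key, and model id
# from Tinfoil's documentation.
BASE_URL = "https://inference.example.tinfoil.sh/v1"
API_KEY = "your-api-key"

payload = {
    "model": "llama3.3-70b",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize this contract clause: ..."},
    ],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# resp = urllib.request.urlopen(req)  # uncomment with a real endpoint and key
```

Existing code that uses an OpenAI client library should work the same way by pointing its base URL at the Tinfoil endpoint.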

Try out our free version running Llama 3.3 70B in Colab.

See how Nate psychoanalyzes himself by sending all his iMessage chats to Tinfoil!

https://www.loom.com/share/2381c02ba51043528c6d3bdf61270bc0?sid=9ea0e955-ae68-4ce4-9c86-d06cab18d25c

Deploy your AI Application: Make your AI application enterprise-ready in a matter of minutes. Here's a demo of how you can use Tinfoil to run a deepfake detection service that could run securely on people's private videos:

https://www.youtube.com/watch?v=_8hLmqoutyk

We encourage you to be skeptical of our privacy claims. Verifiability is our answer. It’s not just us saying it’s private; the hardware and cryptography let you check. Here’s a guide that walks you through the verification process: https://docs.tinfoil.sh/verification/attestation-architecture.

The Ask

If you think this sounds interesting, or too good to be true, contact us at founders@tinfoil.sh!

We’d love to hear from you, whether you’re a:

  • startup looking to sell into regulated industries
  • government agency that does everything on-prem but doesn’t trust its AI team to handle sensitive data
  • neocloud trying to win enterprise customers in regulated industries who won’t migrate from the big clouds to you
  • company providing your employees a dead-simple confidential LLM chat app

You can also learn more on our blog, and subscribe to our LinkedIn and Twitter to follow updates!