
LLM-native context compression

Compresr provides an API that compresses LLM context without losing what matters. It’s a drop-in for agents and RAG that cuts token costs and improves accuracy.
Active Founders
Berke Argin
Founder
CAIO @ compresr.ai. On a mission to make every token count | EPFL CS | prev UBS
Kamel Charaf
Founder
Co-founder & COO @ Compresr (YC W26) | EPFL Data Science Masters | ex-Bell Labs
Oussama Gabouj
Founder
CTO @ compresr.ai. Previously worked in research at EPFL’s DLab and AXA, focusing on efficient ML systems and prompt compression.
Ivan Zakazov
Founder
CEO @ Compresr. Previously researched LLM context compression as an EPFL PhD (Switzerland). Former Microsoft and Philips Research.
Company Launches
Compresr – context compression for LLM pipelines and agents 🗜️

TL;DR

We help you streamline context management with 100x compression. Use the Compresr SDK to add compression to your LLM pipelines, and improve agentic workflows with our open-source compression proxy (it works with Claude Code and OpenClaw).

https://youtu.be/QAk4MBMHWQQ?si=3DrkVwhK_JrClSU1

Have you ever struggled to fit all the relevant context into the model’s window, only to watch the model fail to grasp it? Waited for Claude Code to finish yet another 3-minute compaction? Been surprised by your OpenClaw bill?

Compresr is on a mission to let you forget about managing long model and agent context. With our API, you can reduce long context to the essentials the model needs for any given request, making generation better, faster, and cheaper.

We also care deeply about seamless, simple integration into agentic workflows. Our open-source proxy takes a minute to set up and starts optimizing your Claude Code / OpenClaw context the moment installation finishes. You will notice no change apart from better performance and smaller bills.
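As a rough sketch of how a compression proxy typically slots in front of Claude Code (the proxy address below is an assumption for illustration, not Compresr’s documented setup): Claude Code honors the `ANTHROPIC_BASE_URL` environment variable, so pointing it at a locally running proxy routes every request through the compressor before it reaches the API.

```shell
# Hypothetical integration sketch — the port and address are assumptions,
# not Compresr's documented interface.

# Point Claude Code at a local compression proxy instead of the API directly.
export ANTHROPIC_BASE_URL="http://localhost:8080"

# Launch Claude Code as usual; requests now pass through the proxy,
# which compresses the context before forwarding upstream.
claude
```

The appeal of this pattern is that the tool itself is untouched: unsetting the variable restores the default behavior.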


Our ask:

  1. If you use Claude Code, Codex, or OpenClaw, try out our compression-enabling proxy and let us know what you think!
  2. If you build LLM pipelines involving long context – let’s chat!

Make every token count.

Compresr
Founded: 2026
Batch: Winter 2026
Team Size: 4
Status: Active
Location: San Francisco
Primary Partner: Jared Friedman