Laminar

Understand why your AI agent breaks. Iterate fast to fix it.

Laminar is open-source observability for AI agents. Trace complex workflows, replay and debug agent runs, and detect anomalies across trajectories at scale.
Active Founders
Robert Kim
Founder
Co-founder and CEO @ Laminar (lmnr.ai). Previously, I interned at Palantir, where I built a semantic search package that now powers many internal AI teams and worked on a resource allocation engine on the core infrastructure team. I also interned at Bloomberg, where I scaled a market tick processing pipeline 10x to 10M ticks/s.
Din Mailibay
Founder
Co-founder and CTO at Laminar (lmnr.ai). Previously, I worked at Amazon for two years building and scaling critical payments infrastructure. Before that, I spent a year building ML infrastructure for a drug discovery biotech startup in Korea.
Company Launches
Laminar – Understand why your agent failed. Iterate fast to fix it.

Hey everyone, Robert from Laminar (YC S24) here. The entire observability space was built for request-response LLM apps. Nobody was building for agents that run for 40 minutes and fail in ways you can't reproduce. So we did.

Tracing that actually helps you understand what happened

Most observability platforms just collect data and wait until you go look at it. And when you do, you get a tree of spans. You click into one, read it, go back, click the next one, try to hold the context in your head. For a simple LLM call, that's fine. For an agent that ran for 30 minutes and made 200 decisions, it's practically useless.

We built Laminar's tracing to give you as much information as quickly as possible.

Our trace timeline and reader mode lay out the agent's reasoning and actions as a clean, readable feed.
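The idea behind reader mode can be sketched in a few lines: instead of a tree of spans you click through one by one, flatten the whole run into a single chronological feed. This is an illustrative sketch, not Laminar's implementation; the `Span` shape and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    start: float                      # seconds from trace start
    output: str = ""
    children: list["Span"] = field(default_factory=list)

def _collect(span):
    # Depth-first walk emitting (start_time, line) pairs for every span.
    yield (span.start, f"{span.name}: {span.output}")
    for child in span.children:
        yield from _collect(child)

def to_feed(root):
    # Sort by start time so the whole run reads top-to-bottom
    # as one feed, instead of click-into-a-span, read, click back.
    return [line for _, line in sorted(_collect(root))]
```

For a 200-step agent run, this is the difference between holding 200 spans in your head and scrolling one timeline.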


You open a trace and immediately see what the agent did, what it was thinking, and where things went wrong (we also capture and show application-level exceptions). For browser agents, we record full browser sessions synced with traces. You literally see what the agent saw at every step → https://laminar.sh/shared/traces/16fcd540-9583-8bcb-f8e0-8b15ec92c58d.

If a trace is too complex to parse visually, you can chat with it: ask questions about what happened in natural language, instead of manually digging through hundreds of steps. We take into account the whole context of the trace, not just a single span.


We can confidently say that we have the best tracing DX on the market. It’s one line of code to integrate. The Laminar SDK auto-patches the vast majority of AI frameworks and SDKs, including Claude Agent SDK, AI SDK, LiteLLM, Browser Use, Stagehand, OpenHands SDK, and many more. We are the only platform that traces Claude Agent SDK sub-agents. When Claude delegates work to sub-agents, you get full visibility into that entire chain, not just the top-level call. Here’s an example of a trace: https://laminar.sh/shared/traces/3f9219d0-8691-daf7-836a-8874ca4c1d9f.
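"Auto-patching" generally means the SDK wraps a framework's call sites at import time so every call becomes a span, with no changes to your agent code. Here is a minimal sketch of that mechanism; `Tracer` and `FakeLLMClient` are hypothetical stand-ins, not Laminar's actual internals.

```python
import functools
import time

class Tracer:
    def __init__(self):
        self.spans = []

    def patch(self, cls, method_name):
        # Replace cls.method_name with a wrapper that records a span
        # around every call — the caller's code stays untouched.
        original = getattr(cls, method_name)

        @functools.wraps(original)
        def wrapped(obj, *args, **kwargs):
            t0 = time.perf_counter()
            result = original(obj, *args, **kwargs)
            self.spans.append({
                "name": f"{cls.__name__}.{method_name}",
                "duration_s": time.perf_counter() - t0,
                "output": result,
            })
            return result

        setattr(cls, method_name, wrapped)

# Hypothetical client standing in for a real LLM SDK.
class FakeLLMClient:
    def complete(self, prompt):
        return f"echo: {prompt}"

tracer = Tracer()
tracer.patch(FakeLLMClient, "complete")
FakeLLMClient().complete("hello")     # traced transparently
```

Doing this once per supported framework is what makes the "one line of code" integration possible.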

The debugger

This is the feature we wished existed when we were building agents ourselves. When your agent fails 15 minutes into a run, the normal workflow is: restart from scratch, wait for it to reach the same state, hope it reproduces the failure. That's insane.

Laminar starts a local dev server that connects to our platform. You can run your agent directly from our UI, and when a run fails, you go to the exact step where it went wrong. You tweak your prompt or tool definitions right in the UI. And you rerun from that step — with full context preserved.

Here's how it works under the hood: our tracing SDK sits right before the LLM call boundary. When you rerun from a step, we mock all the LLM calls that happened before that point, replaying their original responses. This means the agent walks through its prior steps instantly without actually calling any LLMs and spending any tokens — and crucially, it properly restores external state along the way. If your agent was controlling a browser, the browser gets back to the exact page and DOM state. If it was working in a sandbox, the sandbox is restored. By the time execution reaches your breakpoint, everything — conversation history, tool state, external environment — is exactly as it was. You tweak your prompt, hit rerun, and the agent picks up from there with the real world intact.
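The replay mechanism described above can be sketched as a wrapper around the LLM call boundary: before the breakpoint, return the recorded responses instantly; from the breakpoint on, hit the real model. This is a simplified illustration (real replay also restores external state like the browser or sandbox); the class and field names are assumptions.

```python
class ReplayingLLM:
    """Replay-from-step sketch: calls before the breakpoint return
    responses recorded from the failed run; calls after it go live."""

    def __init__(self, recorded, real_llm, breakpoint_step):
        self.recorded = recorded            # responses captured from the original run
        self.real_llm = real_llm            # callable that actually queries a model
        self.breakpoint_step = breakpoint_step
        self.step = 0

    def complete(self, prompt):
        self.step += 1
        if self.step <= self.breakpoint_step:
            # Mocked: instant, no tokens spent.
            return self.recorded[self.step - 1]
        # Past the breakpoint: execute for real with full context intact.
        return self.real_llm(prompt)

llm = ReplayingLLM(
    recorded=["open page", "click login"],
    real_llm=lambda p: f"live answer to: {p}",
    breakpoint_step=2,
)
```

The agent walks through its first two steps instantly against the recording, and only the step you are actually iterating on costs a real LLM call.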

Here’s a demo of the Laminar debugger.

https://www.youtube.com/watch?v=iSw8MM6tRvY

Signals


In development, you care about individual runs. In production, you have thousands of runs and the question changes — it's no longer "why did this run fail," it's "what's going wrong across all my runs and how often."

That's the problem Signals solves. You write a short natural language description of what you want to detect — something like "agent gets stuck in a retry loop" or "user gets frustrated and rephrases their request." Laminar runs this against every trace, extracts matching events, and then clusters them into patterns. So instead of manually sampling traces hoping to spot trends, you get a structured view of what's actually happening. We use this ourselves to monitor our own agents, and it catches things we'd never find by skimming through traces.
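The detect-then-cluster pipeline looks roughly like this. In Laminar the matcher is driven by your natural-language description; here a simple keyword check stands in for it so the sketch stays runnable, and the signal names are made up for illustration.

```python
from collections import Counter

def matches_signal(trace_text):
    # Hypothetical stand-in for the natural-language event matcher.
    if trace_text.count("retry") >= 3:
        return "retry loop"
    if "rephrase" in trace_text:
        return "user rephrased request"
    return None

def cluster_signals(traces):
    # Run the matcher over every trace, keep the hits,
    # and cluster them into pattern counts.
    events = [m for t in traces if (m := matches_signal(t))]
    return Counter(events)
```

The output answers the production question directly: not "why did this run fail," but "which failure patterns occur, and how often."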

SQL editor, evals, and more

All of your data is accessible through a built-in SQL editor (both in the UI and via the API). You can run arbitrary queries against your traces, spans, and events: build custom dashboards, do ad-hoc analysis, or bulk-create datasets from production traces. Those datasets plug directly into our evals pipeline, so you can run evaluations on real production data instead of synthetic test cases.
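The kind of ad-hoc query this enables can be shown with a toy in-memory table. The `spans` schema and column names below are assumptions for illustration, not Laminar's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE spans (trace_id TEXT, name TEXT, duration_ms REAL, status TEXT)"
)
conn.executemany("INSERT INTO spans VALUES (?, ?, ?, ?)", [
    ("t1", "llm_call",      820.0,  "ok"),
    ("t1", "browser.click",  45.0,  "error"),
    ("t2", "llm_call",     1310.0,  "ok"),
])

# Ad-hoc analysis: which span types are slowest on average?
rows = conn.execute(
    "SELECT name, AVG(duration_ms) AS avg_ms "
    "FROM spans GROUP BY name ORDER BY avg_ms DESC"
).fetchall()
```

The same style of query can filter production traces by status or content and feed the matching rows straight into an eval dataset.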

Laminar

Leading agent companies like Browser Use, OpenHands, Rye.com, Alai, and many more use Laminar in production. Laminar is fully open source and extremely fast (written in Rust). Self-host it anywhere or use our managed platform (https://laminar.sh).

Our ask:

🔗 https://laminar.sh

Previous Launches
Laminar captures browser session recordings and syncs them with agent traces.
Observability and analytics that help you track LLM apps in production.
Combining orchestration, evals, data, and observability into a single platform.
Laminar
Founded: 2024
Batch: Summer 2024
Team Size: 6
Status: Active
Location: San Francisco
Primary Partner: Jared Friedman