AI agents can now execute financial transactions, access sensitive data, and take real-world actions, but there is no insurance, no certification, and no clear liability framework for when they fail.
The same gap existed with cyber risk in the early 2000s, before a $15B+ insurance market emerged. AI is at that inflection point now.
The signals are already here: AI-related incidents are being carved out of traditional liability coverage, AI-related lawsuits are multiplying, and enterprises remain under pressure to deploy AI anyway. The result is a fast-growing, unaddressed risk layer.
We saw this firsthand. We were building AI agents for claims processing, handling payments and decisions inside regulated environments. Every customer asked the same question: who is liable when the agent gets it wrong? There was no answer.
That’s when it became clear: this infrastructure has to exist before enterprise AI can scale.
Ines had already built insurance products from zero at SafetyWing, including carrier relationships and underwriting frameworks. We realized we were uniquely positioned to build the certification and insurance layer for AI agents, so we did.
Every transformative technology needs a trust layer before it can scale.
The internet needed cybersecurity. Cloud computing needed compliance and audit standards.
AI is next.
Today, AI agents can act, but they can't be trusted at scale. There is no standard for how they're evaluated, no clear liability when they fail, and no infrastructure to make them safe to deploy in critical environments.
If we succeed, the liability and certification framework we build won't just protect enterprises; it will make AI itself safer, more accountable, and more auditable at global scale. We are not just selling insurance. We are building the infrastructure that allows the agentic world to exist.