
Sero AI

AI-powered moderation platform for Trust and Safety

Easily build bespoke safety workflows that integrate AI and human expertise, ensuring end-user safety and regulatory compliance. Our solutions range from measuring harm prevalence to resolving abuse reports with AI and handling complex cases that require in-depth investigation.

Founded: 2023
Team Size: 2
Location: San Jose, CA
Group Partner: Michael Seibel

Active Founders

Ankur Rustagi

Ankur is a seasoned leader with over a decade of experience in applying machine learning to enhance products and ensure user safety. He has led teams at Facebook and Roblox, tackling issues like child endangerment and hate speech. Motivated by this experience, Ankur founded Sero AI to assist other companies in effectively addressing similar trust and safety challenges.


Company Launches

tl;dr: Sero AI streamlines the management of safety workflows such as abuse reports, content moderation, and user appeals. It's a workflow manager, augmented with an AI copilot, designed to improve the efficiency and accuracy of human moderators.

Hi everyone, we are Talha and Ankur, and we are building Sero AI. Our mission is to empower platforms to keep their users safe.


We led safety machine learning teams at Meta and Roblox, where we deployed hundreds of AI models to safeguard users from threats like human trafficking and child endangerment. In the process, we learned the often overlooked yet critical importance of accurate human moderation for effective machine learning enforcement. That experience inspired us to found Sero AI, with the goal of supporting other companies navigating similar Trust and Safety challenges.

🧨 The Problem

Platforms hosting user-generated content establish community guidelines and employ a substantial workforce of human moderators. Moderators play a crucial role in enforcing these policies, particularly high-risk policies such as child endangerment, hate speech, and harassment. At scale, Trust and Safety operations managers struggle to ensure moderator quality and consistency, so platforms either build bespoke in-house solutions or rely on non-opinionated tools such as Zendesk. Both paths lead to high costs, user frustration, and safety risks: tens of millions of dollars spent, over-enforced account bans, and real-world harm.

✨ Our Solution

We provide an end-to-end solution for T&S operations management and moderation. Sero AI is the fastest way to set up tailored safety workflows and customize review user interfaces, augmented by a self-learning AI copilot that assists human moderators.

  • Easily set up moderation workflows: Create tailored moderation workflows, such as tiered child safety reviews or a user appeals queue, in just a few clicks.
  • Customize the moderation UI: Build custom views with all relevant information in one place. No more jumping across several screens to review a single user.
  • Moderate with an AI copilot: Accelerate reviews of users, audio, and video by highlighting harmful sections.

Just as Salesforce streamlines customer relations, Sero AI streamlines Trust and Safety operations.

🙏 Our Asks

Introduce us to your Trust and Safety friends.

* Introduction blurb: Former Meta and Roblox team members are building Sero AI, an end-to-end T&S platform that simplifies human moderation workflows. Check out their website and reach out at founders@getsero.ai to learn more.