Hey everyone, Moe here.
We're building Webhound — AI research agents where you set a budget and the agent keeps researching until it runs out. More budget = more sources, more depth.
How we got here: We started with a product (now called Datasets) that builds structured tables from the web. It worked, but we hit a wall: as datasets got bigger, costs exploded and context became unmanageable. And for open-ended datasets, we never had a clean answer for "when should the agent stop?"
So we rebuilt everything around two ideas: costs that stay linear with time, and giving users direct control over depth.
The new architecture is a Planner-Executor-Verifier loop. Executors get a fresh context on each run, while the Planner and Verifier work from summaries and can search over what's accumulated so far. The loop runs until the budget is exhausted.
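To make the control flow concrete, here's a minimal sketch of that loop. All the function bodies are hypothetical stand-ins (the real planner, executor, and verifier are LLM calls); the point is the shape: the executor gets only its task (fresh context), the planner and verifier see only accumulated summaries, and the loop terminates on budget, which is why cost stays linear with run time.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchState:
    summaries: list = field(default_factory=list)  # accumulated findings, not raw context
    spent: float = 0.0                             # dollars spent so far

def plan(state):
    # Hypothetical planner: decides the next task from summaries only.
    return f"task-{len(state.summaries) + 1}"

def execute(task):
    # Hypothetical executor: runs with a fresh context each time,
    # seeing only its task. Returns (summary, cost_in_dollars).
    return (f"findings for {task}", 1.0)

def verify(state, summary):
    # Hypothetical verifier: checks new findings against accumulated summaries.
    return summary not in state.summaries

def research_loop(budget):
    state = ResearchState()
    while state.spent < budget:
        task = plan(state)
        summary, cost = execute(task)  # executor never sees prior history
        state.spent += cost
        if verify(state, summary):
            state.summaries.append(summary)
    return state
```

Because each executor run starts from a clean context, per-step cost doesn't grow with dataset size; only the summary store does, and the planner/verifier search over it rather than holding it all in context.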
We tested it on traditional deep research as a sanity check. At $3-5 per run, it beat the top result on DeepResearch Bench. At $15, it outperformed it by a wide margin, even running on an older model (gemini-2.5-flash). So we shipped it as Reports and put the same architecture into Datasets.
We now have two products: Datasets (structured tables built from the web) and Reports (budget-driven deep research).
Asks:
– Moe
(Demo Video for Reports: https://www.youtube.com/watch?v=eEGJBudvmsE)