{"id":88443,"title":"Hydra: Serverless Realtime Analytics on Postgres","tagline":"Unlock realtime analytics on live data in seconds","body":"![uploaded image](/media/?type=post\u0026id=88443\u0026key=user_uploads/54309/fc2a33f3-1607-4213-bee9-7fb9693b0f5f)\n\n**Hydra is Serverless Realtime Analytics on Postgres.**\n\nBy separating compute from storage, Hydra enables compute-isolated analytics and bottomless storage. It is designed for low latency applications built on time series and event data.\n\n# [**Try Now! (free)**](https://start.hydra.so/get-started)\n\n\u003chttps://youtu.be/mib8ehnMmC8\u003e\n\n# **Enable in seconds**\n\n![uploaded image](/media/?type=post\u0026id=88443\u0026key=user_uploads/54309/42f17c01-89e7-47ae-ba01-f76bc23c864e)\n\nSet up is simple. To unlock serverless realtime analytics on Postgres, run:\n\n```\npip install hydra-cli\n```\n\nFor setup\n\n```\nhydra\n```\n\n1. fetch your hydra token, paste in to enable Hydra\n2. create analytics schema / tables and insert data ([**quickstart docs**](https://docs.hydra.so/intro/quickstart))\n3. _voila_ - run queries, get insights in milliseconds\n\n# **Problem**\n\nFor decades, there’ve been two core problems with analytics on Postgres:\n\n**Slow** - aggregates and complex queries can take minutes to return results from large data sets.\n\n**Resource Contention** - Expensive analytics queries hog Postgres’ RAM / CPU resources and impair transactional performance. In other words, the entire app slows down, which makes users unhappy.\n\n# **Solution**\n\n### **Fast**\n\nHydra returns analytics queries 400X faster than standard Postgres. Hydra uses duckdb to perform isolated serverless processing on these tables in Postgres. In fact, Hydra is faster than most specialized analytics databases.\n\nIf you’re running AWS RDS, AWS Aurora, Heroku Postgres, Supabase, Fly Postgres, Render Postgres, GCP Cloud SQL, etc.. 
you can speed up expensive analytics queries by 400X using Hydra.\n\n![uploaded image](/media/?type=post\u0026id=88443\u0026key=user_uploads/54309/3fb7dc1e-622a-4303-ba7a-f729f18ab019)\n\nfun fact: the Snowflake instance (128x4XL) in the benchmark costs $100k / month.\n\n![uploaded image](/media/?type=post\u0026id=88443\u0026key=user_uploads/54309/5db5d184-2a69-4811-b14e-5ad433f4038c)\n\nThese are results from ClickBench, which represents typical workloads in the following areas: clickstream and traffic analysis, web analytics, machine-generated data, structured logs, and events data. The table consists of exactly 99,997,497 records: rather small by modern standards, but enough to let tests run in a reasonable time.\n\n### **Isolated, serverless processing**\n\nWith Hydra, there is no impact on Postgres’ RAM / CPU resources when reading from or writing to analytics tables.\n\nAs a result, here are more cool things Hydra can do: zero-copy clones for scaling read replicas, automatic caching, write isolation, bottomless storage with high data compression, and more.\n\n# **How it works**\n\n![uploaded image](/media/?type=post\u0026id=88443\u0026key=user_uploads/54309/8d8999aa-a370-4f0f-9d99-badfe4621c97)\n\nHydra integrates DuckDB execution and features with Postgres using [**pg_duckdb**](https://github.com/duckdb/pg_duckdb), an open-source project we co-developed with the creators of DuckDB.\n\n**FAQ:**\n\nyes, you can join between an analytics table and a standard row-based table.\n\nyes, you can write (insert \u0026 update) to analytics tables.\n\nyes, using Hydra is identical to using standard Postgres.\n\nyes, data inserted into an analytics table is automatically converted into analytics-optimized columnar format.\n\ntell Hydra you are creating an analytics table by adding the ‘using duckdb’ clause to your create table statement - that’s it.\n\n# **Hydra is best for**\n\nHydra is both a rowstore and a columnstore, so you can use it for standard Postgres work, like multitenant apps, as well as 
analytical work, like monthly reporting. Here are several cool Hydra use cases that blend both transactional \u0026 analytics — more in our [**use case docs**](https://docs.hydra.so/guides/usecase).\n\n* time series, logs, events, traces\n* monitoring \u0026 telemetry\n* financial services\n* observability \u0026 metrics\n* cybersecurity \u0026 fraud detection\n* web \u0026 product analytics\n* IoT\n* realtime ml / ai\n\n# **ok, but why use postgres instead of … \\[insert favorite tech\\]?**\n\nIt’s true, there are many specialized analytics databases. Reviewing the benchmark above, though, the majority of them are actually _slower_ than Hydra. Regardless, isolated analytics databases produce their own set of challenges and costs.\n\n### **data pipelines (etl)**\n\nTraditionally, an analytics database is isolated, and in most cases, engineers must set up data pipelines for data movement and transformation between Postgres, S3, and the analytics db. Pipelines aren’t cheap. Pipeline latency also caps how fresh the analytics can be. And when pipelines break — because they do — you’re stuck with downtime and wrong results.\n\nHydra side-steps the latency and costs of data pipelines entirely with full support for inserts and updates on columnar files in analytics tables.\n\n### **overweight design**\n\nMany use cases don’t justify a heavy setup. From our time working at Heroku, we saw many transactional apps that only need a couple of high-level aggregates and a few complex analytical queries to return quickly. 
Hardly OLAP (online analytical processing) and not really HTAP (hybrid transactional and analytical processing) - just apps in need of a speed boost.\n\nThe most heavyweight options, data warehouses like BigQuery, are good at “big queries”, but not great at the smaller, rapid analytics queries that are more common in applications than in monthly reporting.\n\nThese traditional approaches are too heavy, can be brittle, and introduce extra costs and latency your startup doesn’t need.\n\n# **do we have a hosted cloud offering?**\n\nYes, and [it’s awesome](https://start.hydra.so/get-started).\n\n# [**Try now!**](https://start.hydra.so/get-started)\n\n## **useful links**\n\n[**website**](https://www.hydra.so/), [**docs**](https://docs.hydra.so/overview), [**local dev guide**](https://docs.hydra.so/guides/local_development), [**analytics guide**](https://docs.hydra.so/guides/analytics), [**architecture**](https://docs.hydra.so/intro/architecture), [**changelog**](https://docs.hydra.so/changelog/changelog)\n\nIf you’ve made it this far, here’s a pic of me (joe) and my cofounder (jd) in Hood River, Oregon. 
Building Hydra has been a wild ride, but we found a way to have fun.\n\n![uploaded image](/media/?type=post\u0026id=88443\u0026key=user_uploads/54309/69da46c2-bfc2-48c6-8ed3-2364e71c34fd)\n\n","slug":"N0V-hydra-serverless-realtime-analytics-on-postgres","created_at":"2025-03-11T12:27:09.916Z","updated_at":"2026-04-25T00:25:20.479Z","total_vote_count":8,"url":"https://www.ycombinator.com/launches/N0V-hydra-serverless-realtime-analytics-on-postgres","share_image_url":"https://www.ycombinator.com/media/?type=post\u0026id=88443\u0026key=user_uploads/54309/fc2a33f3-1607-4213-bee9-7fb9693b0f5f","company":{"id":25738,"name":"Hydra","slug":"hydra","url":"https://hydra.so","logo":"https://bookface-images.s3.amazonaws.com/small_logos/87c0954c151aa7067e034d8de33b3174d43f5ff0.png","batch":"Winter 2022","industry":"B2B","tags":["Developer Tools","Analytics","Open Source","Data Engineering"],"search_path":"https://bookface.ycombinator.com/company/25738"}}