FiddleCube enables developers to create high-quality datasets using AI. Our data platform enables users to:
1. Create datasets from just a prompt, a few seed examples, or a knowledge base of documents.
2. Manage and annotate the datasets, and apply quality metrics and evals.
3. Export the data in a structured format that connects to any GPT or open-source fine-tuning API.
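As a rough illustration of step 3, here is a minimal sketch of what a structured export might look like, assuming the JSONL chat layout accepted by OpenAI's fine-tuning API and most open-source trainers; the field names and example content are illustrative, not FiddleCube's actual output.

```python
import json

# Each row is one training example in the common JSONL chat format
# (a "messages" list of role/content turns). Content is illustrative.
rows = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and click 'Reset password'."},
        ]
    },
]

# Write one JSON object per line, ready to upload to a fine-tuning API.
with open("dataset.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```

Because each line is an independent JSON object, files in this shape stream cleanly into most training pipelines without custom parsing.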
Creating high-quality datasets at FiddleCube. Fascinated by AI alignment. Curious about health-tech, design, and fitness. Full-stack engineer, part-time illustrator.
Obsessed with improving LLMs through high-quality synthetic data. In a previous life, I built products at companies like Google, Uber, and LinkedIn for nearly 10 years.
Llama 3.1 405B has just dropped, and it's already outperforming GPT-4o. As we help our customers fine-tune domain-specific LLMs, we see firsthand that it's no small feat: it takes an extensive, diverse, high-quality dataset and multiple training iterations to get right.
Identifying the right data in the knowledge base is a manual, challenging process.
Data cleaning and filtering demands significant manual effort and is error-prone.
Training and eval costs skyrocket when bad datasets force multiple rounds of retraining.
FiddleCube’s data platform converts your data corpus into a high-quality fine-tuning dataset. Generate thousands of rows of multi-turn chat, function calling, and QnA. You can also synthetically augment your datasets from unstructured data to improve your model's performance.
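To make the corpus-to-dataset idea concrete, here is a hedged sketch of generating QnA rows from an unstructured passage. The prompt template, the `build_prompt` and `parse_rows` helpers, and the canned model response are all hypothetical illustrations, not FiddleCube's internal pipeline; in practice the prompt would be sent to your LLM provider of choice.

```python
import json

# Hypothetical prompt template asking a model for grounded QnA pairs
# as machine-readable JSON. Doubled braces escape literal braces.
PROMPT_TEMPLATE = (
    "Write {n} question-answer pairs grounded in the passage below.\n"
    'Return a JSON list of {{"question": ..., "answer": ...}} objects.\n\n'
    "Passage:\n{passage}"
)

def build_prompt(passage: str, n: int = 3) -> str:
    """Fill the template for one document chunk (illustrative helper)."""
    return PROMPT_TEMPLATE.format(n=n, passage=passage)

def parse_rows(llm_output: str) -> list[dict]:
    """Parse the model's JSON reply and check each row has both keys."""
    rows = json.loads(llm_output)
    assert all({"question", "answer"} <= row.keys() for row in rows)
    return rows

# Canned response standing in for a real model call (no API used here):
canned = '[{"question": "What does FiddleCube export?", "answer": "JSONL datasets."}]'
print(parse_rows(canned)[0]["question"])  # prints: What does FiddleCube export?
```

Requesting JSON and validating the keys on the way in is what keeps synthetic rows filterable and auditable before they ever reach training.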
Our users have used FiddleCube to:
Sign up here to generate your first dataset, or book a call with us for help getting started.