Automate 90% of manual prompt engineering using our self-improving prompt optimizer.
👋 Sumanyu and Marius here from Hamming AI; we're part of the upcoming S24 batch!
TL;DR: Spending a lot of time optimizing prompts by hand? We're launching our Prompt Optimizer (a new feature, now in beta) to automate prompt engineering. It's completely free for 7 days!
Click here to try our Prompt Optimizer
Thought experiment: What if we used LLMs to optimize prompts for other LLMs?
Writing high-quality, performant prompts by hand requires enormous trial and error: write a prompt, eyeball the outputs, tweak, and repeat.
What's worse, new model versions often break previously working prompts. And if you want to switch models, say from OpenAI GPT-3.5 Turbo to Llama 3, you have to re-optimize your prompts by hand.
Describe your task, add examples, or let us synthetically create some, and click run.
Behind the scenes, we use LLMs to generate different prompt variants. Our LLM judge measures how well a particular prompt solves the task. We capture outlier examples and use them to improve the few-shot examples in the prompt. We run several "trials" to refine the prompts iteratively.
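The loop above can be sketched in a few lines. This is a minimal illustration, not our actual implementation: `propose_variants` and `judge` are hypothetical stand-ins for LLM calls (a variant generator and an LLM judge), and the scoring is stubbed out with a deterministic pseudo-random value.

```python
import random

def propose_variants(prompt, n=3):
    # Stand-in for an LLM that rewrites the prompt n different ways.
    return [f"{prompt} (variant {i})" for i in range(n)]

def judge(prompt, example):
    # Stand-in for an LLM judge scoring how well `prompt` solves `example`.
    rng = random.Random(hash((prompt, example)) % 2**32)
    return rng.random()  # score in [0, 1]

def optimize(seed_prompt, examples, trials=5):
    best_prompt, best_score = seed_prompt, -1.0
    few_shots = []
    for _ in range(trials):
        for candidate in propose_variants(best_prompt):
            scores = {ex: judge(candidate, ex) for ex in examples}
            avg = sum(scores.values()) / len(scores)
            # Capture the worst-scoring (outlier) example as few-shot material.
            worst = min(scores, key=scores.get)
            if worst not in few_shots:
                few_shots.append(worst)
            if avg > best_score:
                best_prompt, best_score = candidate, avg
    return best_prompt, best_score, few_shots
```

Each trial mutates the current best prompt, scores every candidate against the example set, and folds the hardest examples back into the few-shot pool, which is the iterative refinement described above.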
Sumanyu previously helped Citizen (a safety app backed by Founders Fund, Sequoia, and 8VC) grow its user base 4x, and at Tesla grew an AI-powered sales program to hundreds of millions of dollars in revenue per year.
Marius previously ran data infrastructure @ Anduril, drove user growth at Citizen with Sumanyu, and was a founding engineer @ Spell (an MLOps startup acquired by Reddit).
In this launch, we showed how we help teams optimize each prompt. In our next launch, we'll walk through how teams use Hamming to optimize their entire AI app.