
🚅 LiteLLM - Call all LLM APIs using the OpenAI format [Llama2, Anthropic, Huggingface, etc.]

Add 100+ new LLMs to your application, with a drop-in replacement for the openai chat completion call.

Hello, I'm Ishaan - one of the maintainers of LiteLLM.

TL;DR: LiteLLM lets you call all LLM APIs (Azure, Anthropic, Replicate, etc.) using the OpenAI format. We translate the inputs, standardize exceptions, and guarantee consistent outputs for completion() and embedding() calls.
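
Here's roughly what that looks like (a minimal sketch; the model names and keys are illustrative):

```python
import os
from litellm import completion

os.environ["OPENAI_API_KEY"] = "sk-..."        # illustrative keys
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# Same function, same arguments, for every provider
response = completion(model="gpt-3.5-turbo", messages=messages)  # OpenAI
response = completion(model="claude-2", messages=messages)       # Anthropic

# Outputs always follow the OpenAI format
print(response["choices"][0]["message"]["content"])
```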

Problem ❌: Multiple LLM APIs - hard to debug

Calling LLM APIs meant maintaining multiple ~100-line if/else statements, which made our debugging problems explode.

I remember when we added Azure and Cohere to our chatbot. Azure's API calls would fail, so we implemented model fallbacks (e.g. if Azure fails, try Cohere, then OpenAI). However, the provider-specific logic made our code increasingly complex and hard to debug.
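
With one consistent interface and standardized exceptions, that fallback logic collapses into a simple loop. A sketch (not our exact production code; model names are illustrative):

```python
from litellm import completion

def completion_with_fallbacks(messages, models):
    """Try each model in order; return the first successful response."""
    last_err = None
    for model in models:  # e.g. ["azure/my-deployment", "command-nightly", "gpt-3.5-turbo"]
        try:
            return completion(model=model, messages=messages)
        except Exception as err:  # provider exceptions are standardized by LiteLLM
            last_err = err
    raise last_err  # every model failed
```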

Solution 💡

1ļøāƒ£Ā Simplify calling existing LLM APIs

That's when we decided to abstract our LLM calls behind a single package - LiteLLM. We needed I/O that just worked, so we could spend time improving other parts of our system (error-handling/model-fallback logic, etc.).

LiteLLM does 3 things really well:

  • Consistent I/O: It removes the need for multiple if/else statements.
  • Reliable: Extensively tested with 50+ test cases and used in our production environment.
  • Observable: Integrations with Sentry, PostHog, Helicone, etc. (see the sketch after this list).
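
For example, logging can be switched on with callbacks instead of per-provider code. A minimal sketch; it assumes the relevant integration keys (e.g. HELICONE_API_KEY, SENTRY_DSN) are already set as environment variables:

```python
import litellm
from litellm import completion

# Send success logs to PostHog + Helicone, failures to Sentry
litellm.success_callback = ["posthog", "helicone"]
litellm.failure_callback = ["sentry"]

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
```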

2ļøāƒ£Ā Easily add new LLM APIS - LiteLLM UI

The next big challenge was adding new LLM APIs. Each addition involved 3 changes (sketched below):

  • Updating the list of available models users can call
  • Adding the key to our secret manager / .env file
  • Mapping the model name - e.g. replicate/llama2-chat-... to a user-facing alias like llama2
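
Concretely, the bookkeeping looked something like this (a sketch; the names and aliases are illustrative, not our real config):

```python
# 1. List of available models users can call
AVAILABLE_MODELS = ["gpt-3.5-turbo", "claude-2", "llama2"]

# 2. Key added to the secret manager / .env file, e.g.:
#    REPLICATE_API_KEY=...

# 3. User-facing alias -> provider-specific model name
MODEL_ALIASES = {
    "llama2": "replicate/llama2-chat-...",
}
```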

Since LiteLLM integrates with every LLM API, we provide all of this out of the box, with zero configuration. With a single environment variable, LITELLM_EMAIL, you can automatically add 100+ new LLM API integrations to your production server without modifying code or redeploying 👉 LiteLLM UI
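
In other words, instead of the config above, a single variable (a sketch; the email is a placeholder):

```python
import os

# Set before starting your server; enables the hosted LiteLLM UI
os.environ["LITELLM_EMAIL"] = "you@company.com"
```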

Ask 👀:

  • Adding new LLM providers? Contact us at krrish@berri.ai if you need help!
  • ⭐️ us on GitHub to keep up with releases and news.
  • Join our Discord!