LiteLLM allows you to interact with 100+ LLMs seamlessly through a consistent OpenAI-compatible format, either via its Python SDK or its proxy server. Starting from LiteLLM v1.48.12, you can:
- Log LLM calls to Literal AI and evaluate your LLM or prompt performance
- Create multi-step traces with Literal AI decorators
This integration is compatible with the Literal AI SDK decorators, enabling conversation and agent tracing.
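The example below configures the LiteLLM callbacks and logs a completion made inside a decorated agent function: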
```python
import os

import litellm
from literalai import LiteralClient

os.environ["LITERAL_API_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""
os.environ["LITERAL_BATCH_SIZE"] = "1"  # You won't see logs appear until the batch is full and sent

litellm.input_callback = ["literalai"]    # Support other Literal AI decorators and prompt templates
litellm.success_callback = ["literalai"]  # Log Input/Output to Literal AI
litellm.failure_callback = ["literalai"]  # Log Errors to Literal AI

literalai_client = LiteralClient()

@literalai_client.run
def my_agent(question: str):
    # agent logic here
    response = litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        metadata={"literalai_parent_id": literalai_client.get_current_step().id},
    )
    return response

my_agent("Hello world")

# Wait for all logs to be sent before exiting; not needed in a production server
literalai_client.flush()
```
This integration works out of the box with prompts managed on Literal AI, which means a specific LLM generation is bound to its prompt template. Learn more about Prompt Management on Literal AI.
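As a minimal sketch of how a managed prompt might be combined with this integration: it assumes a prompt named "RAG prompt" already exists in your Literal AI project and uses the Literal AI SDK's `get_prompt` and `format_messages` helpers; check the Literal AI documentation for the exact calls your SDK version exposes.

```python
import os

import litellm
from literalai import LiteralClient

os.environ["LITERAL_API_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""

litellm.input_callback = ["literalai"]    # support prompt template binding
litellm.success_callback = ["literalai"]  # log the generation to Literal AI

literalai_client = LiteralClient()

# Fetch a prompt managed on Literal AI.
# "RAG prompt" is a placeholder name; replace it with a prompt from your own project.
prompt = literalai_client.api.get_prompt(name="RAG prompt")

# Render the template's messages with the variables it expects
# ("question" is assumed to be a variable defined in the template).
messages = prompt.format_messages(question="What is LiteLLM?")

# The resulting generation is logged to Literal AI and bound to its template.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=messages,
)

literalai_client.flush()
```

Keeping the template on Literal AI rather than hard-coding the messages lets you iterate on the prompt without redeploying the calling code, while each logged generation stays linked to the template version that produced it.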