Agent Run Monitoring
The full source code for this guide can be found in our cookbooks repository.
This guide integrates an asynchronous OpenAI client with a Literal AI client to build a conversational agent. It uses Literal AI's step decorators for structured logging and for orchestrating tool calls within the conversation. The agent processes user messages, decides when to invoke tools from a predefined set, and generates responses, with a maximum iteration limit to prevent infinite tool loops.
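The control flow described above can be sketched in plain Python, stripped of the actual OpenAI and Literal AI calls. The names here (`run_agent`, `decide_next_action`, `TOOLS`) are illustrative stand-ins, not SDK identifiers; in the real cookbook the decision step is an LLM call and each function would carry a Literal AI step decorator.

```python
MAX_ITERATIONS = 5

# A minimal tool registry standing in for the guide's predefined tool set.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def decide_next_action(message, history):
    """Stand-in for the LLM call that picks a tool or answers directly."""
    if "weather" in message and not history:
        return {"tool": "get_weather", "args": ["Paris"]}
    return {"answer": f"Done after {len(history)} tool call(s)."}

def run_agent(message):
    history = []
    for _ in range(MAX_ITERATIONS):
        action = decide_next_action(message, history)
        if "answer" in action:
            # The model chose to respond directly; end the run.
            return action["answer"]
        # The model chose a tool: execute it and feed the result back.
        result = TOOLS[action["tool"]](*action["args"])
        history.append(result)
    # Iteration cap reached: bail out instead of looping forever.
    return "Stopped: max iterations reached."
```

The key design point is the hard cap on the decide-then-act loop: without it, a model that keeps requesting tools would never terminate.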
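Both clients read their credentials from environment variables, so the guide's `.env` file would plausibly look like the following (variable values are placeholders; `LITERAL_API_KEY` and `OPENAI_API_KEY` are the defaults each SDK checks):

```shell
# .env — API keys for the Literal AI and OpenAI clients
LITERAL_API_KEY=your-literal-api-key
OPENAI_API_KEY=your-openai-api-key
```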
With the integration of Literal AI, you can now visualize runs and LLM calls directly on the Literal AI platform, enhancing transparency and debuggability of your AI-driven applications.