You should create a new instance of the callback handler for each invocation, and call `literalai_client.flush()` at the end of your script to ensure all generations are logged.

The LangChain integration already supports LLM tracing, so you should not use it in conjunction with other LLM provider integrations such as OpenAI.
Multiple LangChain calls in a single thread
You can combine the LangChain callback handler with the concept of Thread to monitor multiple LangChain calls in a single thread.

Adding tags, metadata or a Step ID
If you use LangChain’s built-in tags and metadata, they will be added to the Literal AI generations. Additionally, you can specify a Step ID to ensure a generation is logged with this Step ID.

LangGraph
LangGraph works similarly to LangChain when it comes to using the Literal AI callback handler.

Python
Check out this Python LangGraph example.
TypeScript
Check out this TypeScript LangGraph example.