If you use LangChain’s built-in tags and metadata, they will be added to the Literal AI generations.
Additionally, you can specify a Step ID to make sure the generation is logged under that Step ID.
```typescript
import { v4 as uuidv4 } from 'uuid';
import { ChatOpenAI } from '@langchain/openai';
import { LiteralClient } from '@literalai/client';

const client = new LiteralClient({ apiKey, apiUrl });
const cb = client.instrumentation.langchain.literalCallback();

const model = new ChatOpenAI({});
const literalaiStepId = uuidv4();

await model.invoke('Hello, how are you?', {
  callbacks: [cb],
  metadata: {
    key: 'value',
    // use literalaiStepId in the metadata to specify a Step ID
    literalaiStepId,
  },
  tags: ['tag1', 'tag2'],
});
```
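Since you generate the Step ID yourself, you can keep a reference to it and use it later, for instance to attach scores or feedback to that specific generation.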
LangGraph works the same way as LangChain here: pass the Literal AI callback handler through the run configuration.
```python
from langchain_core.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, MessagesState
from literalai import LiteralClient

literalai_client = LiteralClient()

workflow = StateGraph(MessagesState)
# define graph ...

checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)

# run the app with the LangChain callback handler
cb = literalai_client.langchain_callback()
final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config=RunnableConfig(callbacks=[cb]),
)
```
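The graph definition itself is elided above. As a point of reference, a minimal version could be a single node that calls a chat model; the node name (`agent`) and the model choice below are illustrative assumptions, not part of the original example:

```python
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END

model = ChatOpenAI()  # hypothetical model choice for illustration

def call_model(state: MessagesState):
    # Append the model's reply to the accumulated message history
    response = model.invoke(state["messages"])
    return {"messages": [response]}

workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_edge(START, "agent")
workflow.add_edge("agent", END)
```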
Link a Literal AI Prompt to a LangChain/LangGraph Run
```python
prompt = literalai_client.api.get_prompt(name="RAG prompt")
langchain_prompt = prompt.to_langchain_chat_prompt_template()

# Use langchain_prompt as any other LangChain prompt
```
The Literal AI SDK will not only log the generations but also track which prompt versions were used to generate them.
This is especially useful for tracking the performance of each prompt version and for debugging in context.
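For instance, invoking a chain built from the converted template, with the Literal AI callback attached, logs a run that is linked to the prompt version. The `question` variable below is a hypothetical placeholder for whatever input variables your template declares:

```python
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI

cb = literalai_client.langchain_callback()
chain = langchain_prompt | ChatOpenAI()

# The callback logs the generation and links it to the
# prompt version behind langchain_prompt
chain.invoke(
    {"question": "What is Literal AI?"},
    config=RunnableConfig(callbacks=[cb]),
)
```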