The LangChain integration lets you monitor your LangChain agents and chains with a single line of code.

You should create a new instance of the callback handler for each invocation.
The LangChain integration already supports LLM tracing, so do not use it in conjunction with other LLM provider integrations such as OpenAI.
import os
from literalai import LiteralClient

from langchain_openai import ChatOpenAI
from langchain.schema.runnable.config import RunnableConfig
from langchain.schema import StrOutputParser
from langchain.prompts import ChatPromptTemplate

literalai_client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))
cb = literalai_client.langchain_callback()

# optional: attach to a prompt template on Literal AI
# prompt = literalai_client.api.get_or_create_prompt(...)
# prompt_template = prompt.to_langchain_chat_prompt_template()

prompt_template = ChatPromptTemplate.from_messages(
    [("human", "Tell me a short joke about {topic}")]
)

model = ChatOpenAI(streaming=True)
runnable = prompt_template | model | StrOutputParser()

res = runnable.invoke(
    {"topic": "ice cream"},
    config=RunnableConfig(callbacks=[cb], run_name="joke"),
)
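
Because the model was created with streaming=True, the callback also captures streamed runs. A minimal sketch, reusing the chain above and creating a fresh handler for the new invocation:

# stream the chain with a fresh callback handler attached
for chunk in runnable.stream(
    {"topic": "ice cream"},
    config=RunnableConfig(
        callbacks=[literalai_client.langchain_callback()], run_name="joke"
    ),
):
    print(chunk, end="", flush=True)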

Multiple LangChain calls in a single thread

You can combine the LangChain callback handler with the concept of Thread to monitor multiple LangChain calls in a single thread.

import os
from literalai import LiteralClient

literalai_client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))

with literalai_client.thread(name="LangChain example") as thread:
    cb = literalai_client.langchain_callback()
    # Call your LangChain agent here
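
For instance, a minimal sketch, assuming the runnable chain from the first example is in scope; both generations are grouped under the same thread on Literal AI:

with literalai_client.thread(name="LangChain example") as thread:
    # per the note above, create a fresh callback handler per invocation
    joke_1 = runnable.invoke(
        {"topic": "ice cream"},
        config=RunnableConfig(callbacks=[literalai_client.langchain_callback()]),
    )
    joke_2 = runnable.invoke(
        {"topic": "pizza"},
        config=RunnableConfig(callbacks=[literalai_client.langchain_callback()]),
    )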

Adding tags, metadata, or a Step ID

If you use LangChain's built-in tags and metadata, they are added to the Literal AI generations. Additionally, you can pass a Step ID in the metadata to ensure the generation is logged with that Step ID.

import { v4 as uuidv4 } from 'uuid';
import { ChatOpenAI } from '@langchain/openai';
import { LiteralClient } from '@literalai/client';

const client = new LiteralClient({ apiKey, apiUrl });

const cb = client.instrumentation.langchain.literalCallback();

const model = new ChatOpenAI({});

const literalaiStepId = uuidv4();

await model.invoke('Hello, how are you?', {
  callbacks: [cb],
  metadata: {
    key: 'value',
    // use literalaiStepId in the metadata to specify a Step ID
    literalaiStepId,
  },
  tags: ['tag1', 'tag2'],
});
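
The same can be done with the Python SDK through LangChain's RunnableConfig. A hedged sketch, reusing the first example's chain and assuming the Python handler reads the same literalaiStepId metadata key:

from uuid import uuid4

literalai_step_id = str(uuid4())

res = runnable.invoke(
    {"topic": "ice cream"},
    config=RunnableConfig(
        callbacks=[literalai_client.langchain_callback()],
        tags=["tag1", "tag2"],
        # assumption: literalaiStepId in metadata pins the generation's Step ID
        metadata={"key": "value", "literalaiStepId": literalai_step_id},
    ),
)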

LangGraph

LangGraph works similarly to LangChain when it comes to using the Literal AI callback handler.

import os

from literalai import LiteralClient
from langchain.schema import HumanMessage
from langchain.schema.runnable.config import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, MessagesState

literalai_client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))

workflow = StateGraph(MessagesState)
# define graph nodes and edges ...

checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)

# run the app with the LangChain callback handler
cb = literalai_client.langchain_callback()
final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config=RunnableConfig(callbacks=[cb]),
)

Using a Literal AI prompt with LangChain

You can pull a prompt managed on Literal AI and convert it to a LangChain chat prompt template:

prompt = literalai_client.api.get_prompt(name="RAG prompt")
langchain_prompt = prompt.to_langchain_chat_prompt_template()
# Use langchain_prompt as any other LangChain prompt

The Literal AI SDK will not only log the generations but also track which prompt versions were used to generate them.

This is especially useful for tracking the performance of your prompt versions and for debugging in context.
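
For example, a minimal sketch that runs the pulled prompt through a chain so the logged generation is tied to its prompt version (the {question} input variable is hypothetical and depends on your prompt's template):

from langchain_openai import ChatOpenAI
from langchain.schema import StrOutputParser
from langchain.schema.runnable.config import RunnableConfig

model = ChatOpenAI()
runnable = langchain_prompt | model | StrOutputParser()

res = runnable.invoke(
    {"question": "What is Literal AI?"},  # hypothetical input variable
    config=RunnableConfig(callbacks=[literalai_client.langchain_callback()]),
)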