The LangChain integration lets you monitor your LangChain agents and chains with a single line of code.

You should create a new instance of the callback handler for each invocation.
The LangChain integration already supports LLM tracing, so you should not use it in conjunction with other LLM provider integrations such as OpenAI.
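For instance, instrumenting a single chat model call boils down to creating the callback handler and passing it to the invocation. In this minimal sketch, apiKey and apiUrl are assumed to hold your Literal AI credentials:

import { ChatOpenAI } from '@langchain/openai';
import { LiteralClient } from '@literalai/client';

const client = new LiteralClient({ apiKey, apiUrl });
const model = new ChatOpenAI({});

// The single line: create the Literal AI callback handler.
const cb = client.instrumentation.langchain.literalCallback();

// Pass the handler through LangChain's callbacks option.
await model.invoke('Hello, how are you?', { callbacks: [cb] });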

Multiple LangChain calls in a single thread

You can combine the LangChain callback handler with the concept of Thread to monitor multiple LangChain calls in a single thread, as sketched below.
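The following sketch assumes the TypeScript SDK's thread wrapping API (client.thread(...).wrap(...)); every call made inside the wrapped function is logged to the same thread:

import { ChatOpenAI } from '@langchain/openai';
import { LiteralClient } from '@literalai/client';

const client = new LiteralClient({ apiKey, apiUrl });
const model = new ChatOpenAI({});

await client.thread({ name: 'User conversation' }).wrap(async () => {
  // Create a fresh callback handler for each invocation.
  await model.invoke('Hello, how are you?', {
    callbacks: [client.instrumentation.langchain.literalCallback()],
  });

  await model.invoke('What can you help me with?', {
    callbacks: [client.instrumentation.langchain.literalCallback()],
  });
});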

Adding tags, metadata or a Step ID

If you use LangChain's built-in tags and metadata, they will be added to the Literal AI generations. Additionally, you can pass a Step ID in the metadata to ensure the generation is logged under that Step ID.

import { v4 as uuidv4 } from 'uuid';
import { ChatOpenAI } from '@langchain/openai';
import { LiteralClient } from '@literalai/client';

const client = new LiteralClient({ apiKey, apiUrl });

const cb = client.instrumentation.langchain.literalCallback();

const model = new ChatOpenAI({});

// Generate a UUID to use as the Step ID for this generation.
const literalaiStepId = uuidv4();

await model.invoke('Hello, how are you?', {
  callbacks: [cb],
  metadata: {
    key: 'value',
    // Use literalaiStepId in the metadata to specify a Step ID.
    literalaiStepId,
  },
  tags: ['tag1', 'tag2'],
});

LangGraph

LangGraph works similarly to LangChain when it comes to using the Literal AI callback handler.
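As a minimal sketch, assuming LangGraph's JavaScript API (StateGraph with the prebuilt MessagesAnnotation state), you pass the callback handler to the compiled graph's invocation exactly as you would with a chain:

import { HumanMessage } from '@langchain/core/messages';
import { ChatOpenAI } from '@langchain/openai';
import { StateGraph, MessagesAnnotation, START, END } from '@langchain/langgraph';
import { LiteralClient } from '@literalai/client';

const client = new LiteralClient({ apiKey, apiUrl });
const model = new ChatOpenAI({});

// A single-node graph whose node calls the chat model.
const callModel = async (state: typeof MessagesAnnotation.State) => {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
};

const graph = new StateGraph(MessagesAnnotation)
  .addNode('agent', callModel)
  .addEdge(START, 'agent')
  .addEdge('agent', END)
  .compile();

// The Literal AI callback is passed like any LangChain callback.
await graph.invoke(
  { messages: [new HumanMessage('Hello, how are you?')] },
  { callbacks: [client.instrumentation.langchain.literalCallback()] }
);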

The Literal AI SDK will not only log the generations but also track which prompt versions were used to generate them.

This is especially useful for tracking the performance of your prompt versions and for debugging in context.
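One way this fits together, sketched here under the assumption that the TypeScript SDK exposes prompt management helpers (client.api.getPrompt and the prompt's toLangchainChatPromptTemplate method; 'my-prompt' and the topic variable are placeholders), is to pull a managed prompt and run it through the instrumented chain:

import { ChatOpenAI } from '@langchain/openai';
import { LiteralClient } from '@literalai/client';

const client = new LiteralClient({ apiKey, apiUrl });

// Pull the latest version of a prompt managed in Literal AI.
const prompt = await client.api.getPrompt('my-prompt');

// Convert it to a LangChain prompt template and chain it with a model.
const chain = prompt
  .toLangchainChatPromptTemplate()
  .pipe(new ChatOpenAI({}));

// The callback links the logged generation to the prompt version used.
await chain.invoke(
  { topic: 'observability' },
  { callbacks: [client.instrumentation.langchain.literalCallback()] }
);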