This integration lets you add observability and monitoring to an LLM application built on Vercel’s AI SDK with just a few lines of code. The instrumentation is available for the four main methods of the Vercel AI SDK: streamText, generateText, streamObject and generateObject.

The Vercel AI SDK integration already supports LLM tracing, so you should not use it in conjunction with other LLM provider integrations such as OpenAI.
import { LiteralClient } from '@literalai/client';
import { openai } from '@ai-sdk/openai';

const literalAiClient = new LiteralClient();

// Here we use the wrapped `streamText` from the Literal AI client. This allows us to automatically capture
// both the response from the LLM API and other information present in the response (latency, token counts, etc.).
const streamText = literalAiClient.instrumentation.vercel.streamText;

export async function POST(req: Request) {
  const { messages } = await req.json();

  // The wrapped version of `streamText` accepts the same parameters as the original function.
  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}
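
The same pattern applies to the other instrumented methods. As a minimal sketch, assuming the wrapped generateObject is exposed under the same instrumentation.vercel namespace as streamText above:

import { z } from 'zod';

const generateObject = literalAiClient.instrumentation.vercel.generateObject;

// As with `streamText`, the wrapped `generateObject` accepts the same
// parameters as the original function, including the output schema.
const { object } = await generateObject({
  model: openai('gpt-4o-mini'),
  schema: z.object({
    answer: z.string(),
  }),
  prompt: 'What is the capital of France?',
});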

With Threads and Runs

In most cases, you will want to keep track of the different generations from your application by grouping them into Threads or Runs. This is especially useful when you want to understand the context in which a generation was made, or when you want to compare different generations.

const streamText = literalAiClient.instrumentation.vercel.streamText;

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Wrapping the call groups this generation (and any others made inside
  // the callback) under a single thread on Literal AI.
  const result = await literalAiClient
    .thread({ name: "Example" })
    .wrap(async () => {
      return streamText({
        model: openai("gpt-4o"),
        messages,
      });
    });

  return result.toDataStreamResponse();
}
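
Runs work the same way. A minimal sketch, assuming the client exposes a run() helper that mirrors thread() above (check the client reference for the exact API):

const generateText = literalAiClient.instrumentation.vercel.generateText;

// Wrapping in a Run instead of a Thread; `run({ name })` is assumed to
// behave like `thread({ name })` above.
const { text } = await literalAiClient
  .run({ name: "My Run" })
  .wrap(async () => {
    return generateText({
      model: openai("gpt-4o"),
      prompt: "What is the capital of France?",
    });
  });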

With Metadata, Tags and Step IDs

Using our Vercel AI SDK integration, you can pass metadata, tags and a step ID at the generation level. These values will be automatically added to the generation when it is logged on Literal AI.

import { v4 as uuidv4 } from 'uuid';

const literalaiStepId = uuidv4();

const generateText = literalAiClient.instrumentation.vercel.generateText;

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: question,
  literalaiStepId,
  literalaiTags: ['tag1', 'tag2'],
  literalaiMetadata: { otherKey: 'otherValue' }
});
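
A common reason to set the step ID yourself is so you can reference the generation later, for example to attach user feedback. A minimal sketch, assuming a createScore method on the client’s API object (the exact signature is an assumption; check the Literal AI client reference):

// Attach a score to the logged generation using the same step ID.
// `createScore` and its parameter names are assumptions here.
await literalAiClient.api.createScore({
  stepId: literalaiStepId,
  name: 'user-feedback',
  type: 'HUMAN',
  value: 1,
});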

Cookbooks

You can find more involved examples in our Cookbooks repository:

  • This chatbot uses Vercel AI SDK’s useChat hook in the frontend
  • This example uses the Vercel AI SDK integration in the backend