0.0.517 (September 16th, 2024)

Improvements

  • Added link to prompt as URL in the get_prompt API
  • Introduced decorators with metadata, tags and stepid fields
  • Adjusted integrations to account for metadata and tags logging

Bug fixes

  • Made third-party dependencies optional when importing the Literal AI client.

0.0.515

Improvements

This version improves compatibility with the LangChain ecosystem, notably with complex LangGraph flows.

0.0.514

Improvements

Simplified the Vercel AI SDK integration by leveraging the internal context: threadId and parentId are now handled automatically, without the need to pass them as a literalAiParent argument.
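
A minimal sketch of the simplified flow, assuming the instrumentation.vercel.instrument helper and the thread wrapper introduced in 0.0.509; the model, prompt and thread name are placeholders:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { LiteralClient } from '@literalai/client';

const literalClient = new LiteralClient();

// Instrument the Vercel AI SDK function once; the wrapped function then logs
// its calls to Literal AI.
const generateTextWithLiteral = literalClient.instrumentation.vercel.instrument(generateText);

await literalClient.thread({ name: 'Recipe chat' }).wrap(async () => {
  // threadId and parentId are resolved from the surrounding context,
  // so no literalAiParent argument is needed anymore.
  const { text } = await generateTextWithLiteral({
    model: openai('gpt-4o'),
    prompt: 'Write a vegetarian lasagna recipe.'
  });
  return text;
});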

0.0.513

Improvements

  • Added option to send a root run ID upon creation of a Step
  • The update_prompt_ab_testing API replaces the old promote_prompt and extends it to allow for A/B testing

Starting with 0.0.513, you need to have Literal AI version 0.0.617-beta or above.

0.0.512

Breaking Changes

Instantiation of the TypeScript client was changed to take a single options object as argument.
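
A minimal sketch of the new constructor call; the option names and environment variable below are assumptions used for illustration:

import { LiteralClient } from '@literalai/client';

// A single options object replaces the previous positional arguments.
const client = new LiteralClient({
  apiKey: process.env.LITERAL_API_KEY,
  apiUrl: 'https://cloud.getliteral.ai'
});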

New Features

This release introduces a new, wrapper-aware method to create attachments: client.api.createAttachment. This method wraps the uploadFile method and the Attachment constructor, letting you create an attachment in a single call and attach it to the thread/step in the current context.

The old approach of calling uploadFile and then new Attachment is considered obsolete, although it remains in the SDK for backward compatibility.
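
A hedged sketch of the new method used inside a thread wrapper; the parameter names (content, mime, name) and the file itself are illustrative assumptions:

import { promises as fs } from 'fs';
import { LiteralClient } from '@literalai/client';

const client = new LiteralClient();

await client.thread({ name: 'Billing support' }).wrap(async () => {
  const content = await fs.readFile('./invoice.pdf');

  // Uploads the file and attaches it to the step/thread in the current
  // context, in a single call.
  const attachment = await client.api.createAttachment({
    content,
    mime: 'application/pdf',
    name: 'invoice.pdf'
  });

  return attachment;
});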

Improvements

  • threadId is now optional when creating an attachment or uploading a file
  • createAttachment and uploadFile now accept file data in the following formats: ReadableStream, ReadStream, Buffer, File, Blob and ArrayBuffer
  • Generation creation now supports passing tags and metadata
  • Experiments can now optionally be linked to a Dataset

Starting with 0.0.512, you need to have Literal AI version 0.0.615-beta or above.

0.0.510

Breaking Changes

Revamp of the OpenAI instrumentation: instead of manually instrumenting each call result, you can now instrument OpenAI globally. Each subsequent call will be logged with no additional code.

This new implementation is context-aware and will log the OpenAI calls in the right steps and threads if you use the new wrapper syntax.

If you were using the instrumentation.openai method in your code and upgrade the Literal AI SDK to 0.0.510, you will get the following error at compile time: error TS2554: Expected 0-1 arguments, but got 2.

To fix this, move the instrumentation.openai call to the top of your code and remove all arguments apart from the optional { tags: [] } argument.
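
A minimal sketch of the fix; the model, messages and tag values are placeholders:

import OpenAI from 'openai';
import { LiteralClient } from '@literalai/client';

const literalClient = new LiteralClient();
const openai = new OpenAI();

// Instrument OpenAI once, globally, at the top of your code. The only
// remaining argument is the optional { tags: [] } object.
literalClient.instrumentation.openai({ tags: ['production'] });

// Subsequent calls are logged automatically, in the current step/thread context.
const completion = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }]
});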

You can find code examples in the OpenAI Instrumentation guide.

0.0.509

New Features

  • New syntax with wrappers for the step and thread methods (see the sketch below).
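
A hedged sketch of the wrapper syntax; the thread name, step name and step type are illustrative:

import { LiteralClient } from '@literalai/client';

const client = new LiteralClient();

// Wrap a thread, then nest a step inside it; whatever runs inside the
// callbacks is attached to the surrounding thread/step automatically.
await client.thread({ name: 'User session' }).wrap(async () => {
  return client.step({ name: 'Answer', type: 'llm' }).wrap(async () => {
    return 'Hello!';
  });
});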

0.0.504

Deprecations

  • format is deprecated. formatMessages should now be used (see the sketch below).
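
A hedged sketch of the replacement call; the prompt name, variable bindings and the getPrompt lookup are assumptions used for illustration:

import { LiteralClient } from '@literalai/client';

const client = new LiteralClient();

const prompt = await client.api.getPrompt('rag-answer');

if (prompt) {
  // formatMessages resolves the template variables into chat messages;
  // it replaces the deprecated format method.
  const messages = prompt.formatMessages({ question: 'What is Literal AI?' });
}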

New Features

  • Add support for tags with OpenAI Instrumentation
  • Allow thread upsert with a single object argument (see the sketch below)
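
A hedged sketch, assuming an upsertThread method on the API client; all field values are illustrative:

import { LiteralClient } from '@literalai/client';

const client = new LiteralClient();

// A single object argument instead of several positional parameters.
const thread = await client.api.upsertThread({
  threadId: '00000000-0000-0000-0000-000000000000',
  name: 'User session',
  metadata: { plan: 'free' }
});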

0.0.503

Deprecations

  • createPrompt() is deprecated. getOrCreatePrompt() should now be used.

New Features

  • getOrCreatePrompt(): to create a new Prompt (or retrieve an existing one), use getOrCreatePrompt().

A Prompt is fully defined by its name, template_messages, settings and tools. If a prompt already exists for the given arguments, it is returned. Otherwise, a new prompt is created.

/**
 * @param name The name of the prompt to retrieve or create.
 * @param templateMessages A list of template messages for the prompt.
 * @param settings Optional settings for the prompt.
 * @param tools Optional tools for the prompt.
 * @returns The prompt that was retrieved or created.
 */
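
A hedged usage sketch following the parameter order documented above; the prompt name, template message and settings are placeholders:

import { LiteralClient } from '@literalai/client';

const client = new LiteralClient();

// Returns the existing prompt if one matches these arguments, otherwise creates it.
const prompt = await client.api.getOrCreatePrompt(
  'rag-answer',
  [{ role: 'system', content: 'Answer the question using the provided {{context}}.' }],
  { temperature: 0 }
);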