
Prompt Management

Prompts can be stored either in Literal AI or in your application code.

What is a Prompt Template?

A Prompt or Prompt Template in Literal AI consists of:

  • the prompt, i.e. the messages and their variables
  • the LLM provider, i.e. OpenAI, Anthropic, etc.
  • the LLM settings, i.e. temperature, top P, etc.

Prompts in Literal AI allow you to:

  • Enable product teams and domain experts to draft and iterate on prompts.
  • Deploy new prompt versions without redeploying your code.
  • Track which prompt version was used for a specific generation.
  • Improve prompts by debugging logged LLM generations with context.
  • Compare prompt versions, LLM models, settings and providers to determine which one performs better.
  • Collaborate effectively with your team.

Create a Prompt

Go to the Prompt Playground to create your first prompt template. On the left panel, define the template messages. In each template message, you can define variables following the Mustache syntax.
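
For example, a pair of template messages with two hypothetical Mustache variables, {{context}} and {{question}}, could look like this (shown as Python data so it can be reused with the SDK later on):

```python
# Template messages using Mustache variables. "{{context}}" and
# "{{question}}" are hypothetical placeholders filled in at generation time.
template_messages = [
    {
        "role": "system",
        "content": "Answer the question using the following context:\n{{context}}",
    },
    {"role": "user", "content": "{{question}}"},
]
```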


Create a Prompt Template

You can try out the prompt template by setting values for the variables, picking an LLM provider/model and clicking on Submit.

The Prompt Playground supports multiple LLM providers/models and you can switch between them to see how the prompt behaves.

Once you are happy with the prompt, click Save.

Create a new version

Whether it is to fix a bug or to add a new feature, you can create a new version of a prompt template.

Create a New Version

To do so, go to the Prompts page and select the prompt template you want to iterate on. Then select the version you want to use as the starting point for the new one.

Edit the template messages and click Save when you are done. A diff of the changes will be displayed, and you can optionally provide a changelog to keep track of the changes.

Programmatically

If you prefer to keep your Prompt Template in your code, you can still version it on Literal AI. Note that A/B testing is not available if you manage prompt templates in your code.

See the installation guide to get your API key and instantiate the SDK.

Literal AI will check if the prompt changed compared to the last version and create a new version if needed.
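
A minimal sketch with the Python SDK; the prompt name "rag-prompt" and the settings values are illustrative assumptions, not prescribed by Literal AI:

```python
from literalai import LiteralClient

client = LiteralClient(api_key="YOUR_LITERAL_API_KEY")

# Template messages with Mustache variables, kept in your code.
template_messages = [
    {"role": "system", "content": "Answer using this context:\n{{context}}"},
    {"role": "user", "content": "{{question}}"},
]

# Registers the template on Literal AI; a new version is only created
# if the template differs from the latest stored version.
prompt = client.api.get_or_create_prompt(
    name="rag-prompt",  # hypothetical prompt name
    template_messages=template_messages,
    settings={"provider": "openai", "model": "gpt-4o", "temperature": 0},
)
```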

Pull a Prompt Template from Literal AI

If your prompt templates live on Literal AI, you will have to pull them in your app before using them.

Prompt templates are cached to ensure fast access.
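
A minimal sketch with the Python SDK (the prompt name is a hypothetical example):

```python
from literalai import LiteralClient

client = LiteralClient(api_key="YOUR_LITERAL_API_KEY")

# Pulls the latest version of the prompt template;
# pass version=... to pin a specific one.
prompt = client.api.get_prompt(name="rag-prompt")
```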

Format a Prompt Template

Once you have your prompt instance, you can format it with the relevant variables.

Format to the OpenAI message format
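
Assuming the hypothetical {{context}} and {{question}} variables from earlier, a sketch of formatting the prompt for the OpenAI chat API:

```python
from literalai import LiteralClient

client = LiteralClient(api_key="YOUR_LITERAL_API_KEY")
prompt = client.api.get_prompt(name="rag-prompt")  # hypothetical prompt name

# Substitute the Mustache variables; the result is a list of messages
# in the OpenAI chat format, ready to pass to the OpenAI client.
messages = prompt.format_messages(
    context="Literal AI stores prompt templates and their versions.",
    question="Where can prompts be stored?",
)
```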

Convert to LangChain Chat Prompt
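
Assuming the same prompt instance, a sketch of the LangChain conversion:

```python
from literalai import LiteralClient

client = LiteralClient(api_key="YOUR_LITERAL_API_KEY")
prompt = client.api.get_prompt(name="rag-prompt")  # hypothetical prompt name

# Convert the Literal AI prompt into a LangChain ChatPromptTemplate,
# which can then be composed into a chain, e.g. langchain_prompt | chat_model.
langchain_prompt = prompt.to_langchain_chat_prompt_template()
```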

Coupled with integrations like OpenAI or LangChain, the Literal AI SDK will not only log the generations but also track which prompt versions were used to generate them.

This is especially useful for tracking the performance of your prompt versions and debugging them in context.
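
A sketch combining the OpenAI integration with a pulled prompt; instrument_openai() enables automatic logging of OpenAI generations, and the prompt and variable names are hypothetical:

```python
from literalai import LiteralClient
from openai import OpenAI

literalai_client = LiteralClient(api_key="YOUR_LITERAL_API_KEY")
openai_client = OpenAI()

# Instrument the OpenAI client so generations are logged to Literal AI
# along with the prompt version used to produce them.
literalai_client.instrument_openai()

prompt = literalai_client.api.get_prompt(name="rag-prompt")  # hypothetical name
messages = prompt.format_messages(context="...", question="...")

completion = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
)
```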

A/B Test a Prompt

Progressively roll out new prompt versions or LLMs in production using A/B testing.

  • Increases confidence in deployments by allowing gradual rollout and comparison of different versions.
  • Empowers product teams to implement and test prompt improvements independently, reducing reliance on engineering teams.
  • Enables data-driven decision making by comparing performance metrics between versions.
  • Facilitates rapid iteration and optimization of prompts in a production environment.
  • Minimizes risk by allowing easy rollback if a new version underperforms.

A/B Testing prompt versions in production

On Literal AI

When you pull a prompt template without specifying a version, Literal AI uses the A/B testing rollout probabilities to select which version to serve.


Setting Prompt Template A/B Testing

By default, version v0 has a rollout of 100%. You can change the rollout probabilities on the Prompts page.

From your code

You can also update the A/B testing rollout probabilities from your code:
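
A sketch, assuming the Python SDK's update_prompt_ab_testing method and a hypothetical prompt with two versions:

```python
from literalai import LiteralClient

client = LiteralClient(api_key="YOUR_LITERAL_API_KEY")

# Split production traffic 50/50 between versions 0 and 1.
client.api.update_prompt_ab_testing(
    name="rag-prompt",  # hypothetical prompt name
    rollouts=[
        {"version": 0, "rollout": 50},
        {"version": 1, "rollout": 50},
    ],
)
```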

Add LLM provider credentials

Local credentials

Local credentials are stored in your browser's local storage.

To interact with the various providers in the prompt playground, you need to use your own credentials. You can set up the credentials for the current provider directly from the prompt playground, and you can also manage the credentials you defined on the “Settings > LLMs” page.

Shared credentials

Shared credentials are stored online, and their keys are encrypted.
They are managed by admins but can be used by AI engineers.

To encourage collaboration, you can create shared credentials on the “Settings > LLMs” page. Once created, members of the project can use them in the prompt playground. Shared credentials’ keys are never exposed to the client.

Add a custom LLM provider

If you are hosting your own LLM, you can add it as a custom provider in Literal AI.

Configure an LLM provider

Experiments

You can test a specific prompt template version against a dataset and a set of evaluators to measure the performance of the prompt and avoid regressions.

🚧 Work in progress, coming soon 🚧