With Literal AI you can create, debug and manage prompts. If you use LangChain in your LLM application, however, prompts must be expressed in LangChain's own format. In this guide, you will learn how to convert a Literal AI prompt to a LangChain Chat Prompt Template.

You can combine the converted prompt with the LangChain integration to log the generations and to track which prompt version was used to produce each of them.

How to convert a Literal AI Prompt to LangChain

First, pull the prompt from the Literal AI platform. Then, convert it to LangChain's format.

import os
from literalai import LiteralClient

literal_client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))

# pull the prompt from Literal AI
prompt = literal_client.api.get_prompt(name="RAG prompt")

# convert to LangChain prompt
langchain_prompt = prompt.to_langchain_chat_prompt_template()
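To see what this conversion does conceptually: a Literal AI prompt is a list of role/content messages whose contents use Mustache-style {{variable}} placeholders, and the helper turns them into a LangChain chat prompt template that can be rendered with a dictionary of variables. The sketch below is purely illustrative (it is not the library's actual implementation, and the message shape and variable names are assumptions); it renders such a template by hand so you can see the shape of the data involved.

```python
import re

def render_messages(template_messages, variables):
    # Illustrative sketch only: substitute Mustache-style {{variable}}
    # placeholders in each message and return (role, content) pairs,
    # the message shape LangChain chat templates work with.
    # The real to_langchain_chat_prompt_template() returns a
    # ChatPromptTemplate instead of rendering directly.
    rendered = []
    for msg in template_messages:
        content = re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda m: str(variables.get(m.group(1), m.group(0))),
            msg["content"],
        )
        rendered.append((msg["role"], content))
    return rendered

# Hypothetical RAG-style prompt template
template = [
    {"role": "system", "content": "Answer using this context: {{context}}"},
    {"role": "user", "content": "{{question}}"},
]

print(render_messages(template, {"context": "the docs", "question": "How?"}))
# → [('system', 'Answer using this context: the docs'), ('user', 'How?')]
```

In the real workflow you never render by hand: you pass the variables to the LangChain runnable (as shown in the full example below) and the template fills them in for you.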

Here is an example application:

import os
from literalai import LiteralClient

from langchain_openai import ChatOpenAI
from langchain.schema.runnable.config import RunnableConfig
from langchain.schema import StrOutputParser

literal_client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))

def main():
    # pull the prompt from Literal AI
    prompt = literal_client.api.get_prompt(name="RAG prompt")

    # convert to LangChain prompt
    lc_prompt = prompt.to_langchain_chat_prompt_template()

    model = ChatOpenAI(streaming=True)
    runnable = lc_prompt | model | StrOutputParser()
    
    cb = literal_client.langchain_callback()
    # The keys must match the variables declared in the prompt template
    variables = {"foo": "bar"}

    res = runnable.invoke(
        variables,
        config=RunnableConfig(callbacks=[cb], run_name="Test run"),
    )
    print(res)

main()