Monitor any supported LLM provider through LangChain's simple unified interface, init_chat_model:

from langchain.chat_models import init_chat_model
from langchain_core.runnables import RunnableConfig
from literalai import LiteralClient

literalai_client = LiteralClient()
gpt_4o = init_chat_model("gpt-4o", model_provider="openai", temperature=0)
claude_sonnet = init_chat_model("claude-3-5-sonnet-20241022", model_provider="anthropic", temperature=0)

# Literal AI callback
cb = literalai_client.langchain_callback()

# Invoke each model, passing the Literal AI callback via the run config
gpt_4o.invoke("what's your name", config=RunnableConfig(callbacks=[cb]))
claude_sonnet.invoke("what's your name", config=RunnableConfig(callbacks=[cb]))

# Ensure all pending events are sent before the program exits
literalai_client.flush()

You can thus monitor Anthropic, Mistral AI, Cohere, and many other LLM providers; the full list of supported providers is in the LangChain init_chat_model documentation.