You can use the Literal AI platform to instrument Mistral AI API calls. This lets you track and monitor your application's Mistral AI usage and replay calls in the Prompt Playground.

The Mistral AI instrumentation supports sync and async calls, with both streamed and regular responses!

Instrumenting Mistral AI API calls

import os

from literalai import LiteralClient
from mistralai import Mistral

# Read API keys from the environment rather than hardcoding them
literalai_client = LiteralClient(api_key=os.getenv("LITERAL_API_KEY"))
literalai_client.instrument_mistralai()

client = Mistral(api_key=os.getenv("MISTRAL_API_KEY"))

model = "mistral-large-latest"

messages = [
    {
        "role": "user",
        "content": "What is the best French cheese?",
    }
]

# With streaming
stream_response = client.chat.stream(
    model=model,
    messages=messages,
)

for chunk in stream_response:
    print(chunk.data.choices[0].delta.content)
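
# Regular (non-streamed) response: the instrumentation captures this too.
# A minimal sketch using the v1 mistralai SDK's non-streaming chat call.
chat_response = client.chat.complete(
    model=model,
    messages=messages,
)
print(chat_response.choices[0].message.content)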

# Now you can use the Mistral AI API as you normally would
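
The instrumentation also covers async usage. Here is a minimal sketch, assuming the complete_async and stream_async helpers from the v1 mistralai SDK:

import asyncio

async def main():
    # Async regular response, captured the same way as the sync call
    response = await client.chat.complete_async(
        model=model,
        messages=messages,
    )
    print(response.choices[0].message.content)

    # Async streamed response
    stream = await client.chat.stream_async(
        model=model,
        messages=messages,
    )
    async for chunk in stream:
        print(chunk.data.choices[0].delta.content)

asyncio.run(main())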