Client-side A/B testing

This Python notebook shows how to build two prompts, randomly assign one of them to each new conversation, and A/B test the results on a metric.
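Since the full notebook is not reproduced on this page, the sketch below only illustrates the general pattern under stated assumptions: two hypothetical prompt variants, a sticky random assignment per conversation, a stubbed `call_llm` helper standing in for the real model call (and for logging to Literal AI), and a simple per-variant average of a metric. None of these names are taken from the Literal AI SDK.

```python
import random
from collections import defaultdict
from statistics import mean

# Two prompt variants under test (hypothetical wording, for illustration only).
VARIANTS = {
    "A": "You are a concise assistant. Answer in one short paragraph.",
    "B": "You are a friendly assistant. Answer step by step, with examples.",
}


def assign_variant(conversation_id: str) -> str:
    """Randomly assign a variant to a conversation.

    Seeding with the conversation id makes the assignment sticky:
    the same conversation always gets the same prompt.
    """
    rng = random.Random(conversation_id)
    return rng.choice(list(VARIANTS))


def call_llm(system_prompt: str, user_message: str) -> str:
    # Stub so the sketch runs without credentials; replace with a real
    # chat-completion call logged to Literal AI.
    return f"[{system_prompt[:20]}...] response to: {user_message}"


def run_conversation(conversation_id: str, user_message: str) -> dict:
    """Run one turn with the assigned prompt and record which variant was used."""
    variant = assign_variant(conversation_id)
    answer = call_llm(VARIANTS[variant], user_message)
    return {"conversation_id": conversation_id, "variant": variant, "answer": answer}


def ab_report(records: list[dict], scores: dict[str, float]) -> dict[str, float]:
    """Average a per-conversation metric (e.g. a human or LLM-judge score) per variant."""
    per_variant = defaultdict(list)
    for record in records:
        score = scores.get(record["conversation_id"])
        if score is not None:
            per_variant[record["variant"]].append(score)
    return {variant: mean(values) for variant, values in per_variant.items() if values}


if __name__ == "__main__":
    records = [run_conversation(f"conv-{i}", "How do I reset my password?") for i in range(10)]
    # Fake per-conversation scores standing in for real evaluation results.
    scores = {f"conv-{i}": random.random() for i in range(10)}
    print(ab_report(records, scores))
```

Comparing the per-variant averages (and, with enough conversations, running a significance test on them) is what ultimately decides which prompt wins the A/B test.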
