0.1.1 (November 13th, 2024)

  • Handled serialization of lists in JSON objects

0.1.0 (November 12th, 2024)

  • Added create_prompt_variant API to create Prompt variants
  • Modified create_experiment to take as input a Prompt variant ID instead of a Prompt ID
Starting with 0.1.0, you need to have Literal AI version 0.1.0-beta or above to create experiments.

0.0.627 (October 28th, 2024)

  • Added the fields label, status, and tags to Prompt query retrieval
  • Fixed serialization/deserialization in LangChain integration
Starting with 0.0.627, you need to have Literal AI version 0.0.629-beta or above.

0.0.626 (October 21st, 2024)

  • Exposed label in the create_scores API to support scores that are not solely numerical
  • Added release parameter to the LiteralClient constructor
  • Fixed the Prompt object's string representation

0.0.625 (October 1st, 2024)

  • LlamaIndex instrumentation support extended to (in addition to engine.query):
    • llm.chat and llm.predict_and_call
    • agent.chat for custom FunctionCallingAgent and LlamaIndex’s OpenAIAgent
    • stream versions of the above methods
  • Prevented duplicate LlamaIndex instrumentation

0.0.624 (September 27th, 2024)

  • Ensured each experiment run has its own ID
  • LangChain integration:
    • Allowed additional variables in prompt templates
    • Made the callback handler serializable

0.0.623 (September 24th, 2024)

  • Added token usage to the LangChain integration
  • Improved the LangChain integration to support all types of LangChain messages
  • Fixed a missing tool call ID in the LangChain integration

0.0.612

  • Removed the runtime dependency on llama-index; it is now only required during instrumentation

0.0.611

  • Adapted the Mistral AI integration to account for Mistral AI Python client changes
  • Added an option to send a root run ID upon creation of a Step
  • The update_prompt_ab_testing API replaces promote_prompt and extends it to allow for A/B testing
  • Tags and metadata arguments passed to LangChain now show in logs
  • Fixed the event processor when the batch is not filled within the batching interval
Starting with 0.0.611, you need to have Literal AI version 0.0.617-beta or above.
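
The update_prompt_ab_testing payload is not documented here, but the A/B testing it enables boils down to serving different prompt versions to weighted shares of traffic. A minimal local sketch of that selection logic (the rollouts mapping and pick_prompt_version helper are illustrative, not part of the SDK's API):

```python
import random

def pick_prompt_version(rollouts: dict[int, float], rng: random.Random) -> int:
    """Pick a prompt version given rollout shares, e.g. {1: 0.8, 2: 0.2}."""
    versions = list(rollouts)
    weights = [rollouts[v] for v in versions]
    return rng.choices(versions, weights=weights, k=1)[0]

# Deterministic illustration: version 1 serves roughly 80% of requests.
rng = random.Random(42)
picks = [pick_prompt_version({1: 0.8, 2: 0.2}, rng) for _ in range(1000)]
share_v1 = picks.count(1) / len(picks)
```

In the SDK, the rollout configuration lives server-side; this sketch only shows the weighted-choice behavior a rollout implies.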

0.0.610

  • Adapted the LlamaIndex integration to the new instrumentation API
  • Added an option to send metadata on Generation creation
Starting with 0.0.610, you need to have Literal AI version 0.0.615-beta or above.
The LlamaIndex integration is compatible with llama-index versions 0.10.58 and above.

0.0.608

  • LangChain integration improvements:
    • run logs now have a properly nested structure
    • variable serialization now handles complex objects

0.0.607

  • Added Mistral AI Instrumentation

0.0.606

Fixes

  • JSON parse errors are now logged

0.0.605

Fixes

  • HTTP calls now follow redirects

0.0.604

Improvements

  • Enhanced error handling

0.0.603

Fixes

  • Defaulted the prompt version to None

0.0.602

Fixes

  • The flush operation no longer waits for the internal batch to be empty
  • Added repr to classes
  • Fixed an error with the participant identifier

0.0.601

Improvements

  • Stripped bytes from steps

New Features

  • Added the get_steps API

0.0.600

Improvements

  • Changed the default batch size from 1 to 5

New Features

  • Renamed literal_ to literalai_

Fixes

  • Made params optional

0.0.509

Deprecations

  • format() is deprecated. format_messages() should now be used.
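
format_messages() substitutes variables into a prompt's template messages. A minimal local sketch of that substitution, assuming simple {{name}}-style placeholders (the SDK's actual templating rules may differ; this format_messages function is a stand-in, not the SDK's implementation):

```python
import re

def format_messages(template_messages, variables):
    """Return messages with {{placeholder}} occurrences replaced by variable values.

    Unknown placeholders are left untouched rather than raising.
    """
    def fill(text):
        return re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda m: str(variables.get(m.group(1), m.group(0))),
            text,
        )
    return [{**msg, "content": fill(msg["content"])} for msg in template_messages]

messages = format_messages(
    [
        {"role": "system", "content": "You are a {{tone}} assistant."},
        {"role": "user", "content": "Summarize: {{text}}"},
    ],
    {"tone": "concise", "text": "hello world"},
)
```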

New Features

  • Add support for tags with OpenAI Instrumentation

0.0.508

Deprecations

  • create_prompt() is deprecated. get_or_create_prompt() should now be used.

New Features

  • Added get_or_create_prompt() for creating a new Prompt.

    A `Prompt` is fully defined by its `name`, `template_messages`, `settings`, and tools.
    If a prompt already exists for the given arguments, it is returned.
    Otherwise, a new prompt is created.
    
    Args:
        name (str): The name of the prompt to retrieve or create.
        template_messages (List[GenerationMessage]): A list of template messages for the prompt.
        settings (Optional[Dict]): Optional settings for the prompt.
    
    Returns:
        Prompt: The prompt that was retrieved or created.
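
The get-or-create semantics described above can be illustrated with an in-memory sketch. The PromptStore class below is a hypothetical stand-in for the server-side lookup, not the SDK's implementation; it only shows that an identical definition returns the existing prompt while any change produces a new one:

```python
import json

class PromptStore:
    """In-memory illustration of get_or_create_prompt semantics."""

    def __init__(self):
        self._prompts = {}

    def get_or_create_prompt(self, name, template_messages, settings=None):
        # A prompt is identified by its full definition, so serialize
        # the definition into a deterministic lookup key.
        key = json.dumps(
            {"name": name, "template_messages": template_messages, "settings": settings},
            sort_keys=True,
        )
        if key not in self._prompts:
            self._prompts[key] = {
                "name": name,
                "template_messages": template_messages,
                "settings": settings,
            }
        return self._prompts[key]

store = PromptStore()
msgs = [{"role": "system", "content": "You are helpful."}]
first = store.get_or_create_prompt("greeter", msgs, {"temperature": 0.7})
again = store.get_or_create_prompt("greeter", msgs, {"temperature": 0.7})
changed = store.get_or_create_prompt("greeter", msgs, {"temperature": 0.2})
```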