# Changelog
## 0.1.1 (November 13th, 2024)

- Handle serialization of lists in JSON objects
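A fix like this usually amounts to recursing into lists (and other containers) when converting objects to JSON-safe values, instead of handling only top-level fields. A minimal sketch of the idea in plain Python — not the SDK's actual code, and `to_json_safe` is a hypothetical name:

```python
import json
from datetime import datetime

def to_json_safe(value):
    """Recursively convert a value into something json.dumps accepts.

    Lists and dicts are walked element by element, so non-JSON objects
    nested inside lists get converted instead of raising TypeError.
    """
    if isinstance(value, dict):
        return {k: to_json_safe(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [to_json_safe(v) for v in value]
    if isinstance(value, (str, int, float, bool)) or value is None:
        return value
    if isinstance(value, datetime):
        return value.isoformat()
    return str(value)  # fallback for arbitrary objects

payload = {"scores": [{"value": 0.9, "at": datetime(2024, 11, 13)}]}
print(json.dumps(to_json_safe(payload)))
# {"scores": [{"value": 0.9, "at": "2024-11-13T00:00:00"}]}
```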
## 0.1.0 (November 12th, 2024)

- Added `create_prompt_variant` API to create `Prompt` variants
- Modified `create_experiment` to take a `Prompt` variant ID as input instead of a `Prompt` ID

Starting with 0.1.0, you need to have Literal AI version 0.1.0-beta or above to create experiments.

## 0.0.627 (October 28th, 2024)
- Added fields `label`, `status` and `tags` to `Prompt` query retrieval
- Fixed serialization/deserialization in LangChain integration

Starting with 0.0.627, you need to have Literal AI version 0.0.629-beta or above.

## 0.0.626 (October 21st, 2024)
- Exposed `label` to `create_scores` API for non-solely numerical scores
- Added `release` parameter to `LiteralClient` constructor
- Fixed `Prompt` object string representation
## 0.0.625 (October 1st, 2024)

- LlamaIndex instrumentation support extended to (in addition to `engine.query`):
  - `llm.chat` and `llm.predict_and_call`
  - `agent.chat` for custom `FunctionCallingAgent` and LlamaIndex's `OpenAIAgent`
  - stream versions of the above methods
- Prevent duplicate LlamaIndex instrumentation
## 0.0.624 (September 27th, 2024)
- Make sure each experiment run has its own ID
- LangChain integration:
  - Allow additional variables in prompt template
  - Make callback handler serializable
## 0.0.623 (September 24th, 2024)
- Add token usage to LangChain integration
- Improved LangChain integration to support all types of LangChain messages
- Fixed missing tool call ID in LangChain integration
## 0.0.612
- Removed runtime dependency on `llama-index`; it is now only required during instrumentation
## 0.0.611
- Adapt the Mistral AI integration to account for Mistral AI Python client changes
- Added option to send a root run ID upon creation of a `Step`
- The API `update_prompt_ab_testing` replaces the old `promote_prompt` and extends it to allow for A/B testing
- Tags and metadata arguments passed to LangChain will now show in logs
- Fixed the event processor when the batch is not filled every X seconds

Starting with 0.0.611, you need to have Literal AI version 0.0.617-beta or above.

## 0.0.610
- Adapt the LlamaIndex integration to the new instrumentation API
- Add option to send metadata on `Generation` creation

Starting with 0.0.610, you need to have Literal AI version 0.0.615-beta or above. The LlamaIndex integration is compatible with `llama-index` versions 0.10.58 and above.

## 0.0.608
- LangChain integration improvements:
  - Run logs have a properly nested structure
  - Serialization of variables for complex objects
## 0.0.607
- Added Mistral AI instrumentation
## 0.0.606
Fixes
- JSON parse errors are now logged
## 0.0.605
Fixes
- HTTP calls now follow redirects
## 0.0.604
Improvements
- Enhance error handling
## 0.0.603
Fixes
- Default prompt version to `None`
## 0.0.602
Fixes
- Flush doesn’t wait for the internal batch to be empty
- Add `repr` to classes
- Fix error with participant identifier
## 0.0.601
Improvements
- Strip bytes from steps
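"Stripping bytes" here plausibly means dropping raw `bytes` values from a step's payload before it is serialized and sent. A hedged, stdlib-only sketch of that idea — `strip_bytes` is a hypothetical name, not the SDK's internal function:

```python
def strip_bytes(value):
    """Recursively remove bytes values from a step-like structure
    so the resulting payload stays JSON-serializable."""
    if isinstance(value, dict):
        return {k: strip_bytes(v) for k, v in value.items()
                if not isinstance(v, (bytes, bytearray))}
    if isinstance(value, list):
        return [strip_bytes(v) for v in value
                if not isinstance(v, (bytes, bytearray))]
    return value

# hypothetical step payload mixing text fields with raw image bytes
step = {"name": "ocr", "input": {"image": b"\x89PNG...", "lang": "en"}}
print(strip_bytes(step))  # {'name': 'ocr', 'input': {'lang': 'en'}}
```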
New Features
- Add `get_steps` API
## 0.0.600
Improvements
- Change default batch size from 1 to 5
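A batch size of 5 means events are buffered and shipped in groups rather than one network call per event. A hypothetical illustration of the pattern (not the SDK's implementation; `EventBatcher` is a stand-in name):

```python
class EventBatcher:
    """Buffer events and hand them to a sender in fixed-size batches."""

    def __init__(self, send, batch_size=5):
        self.send = send          # callable invoked with a list of events
        self.batch_size = batch_size
        self.buffer = []

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send whatever is buffered, even a partial batch."""
        if self.buffer:
            self.send(self.buffer)
            self.buffer = []

sent = []
batcher = EventBatcher(sent.append, batch_size=5)
for i in range(7):
    batcher.add(i)
# one full batch of 5 was sent automatically; 2 events remain buffered
print(sent)      # [[0, 1, 2, 3, 4]]
batcher.flush()
print(sent)      # [[0, 1, 2, 3, 4], [5, 6]]
```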
New Features
- Rename `literal_` to `literalai_`
Fixes
- Make params optional
## 0.0.509
Deprecations
- `format()` is deprecated. `format_messages()` should now be used.
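The rename reflects that a prompt template renders to a list of chat messages rather than a single string. A migration sketch using a stand-in `Prompt` class (the real SDK class and its signatures may differ):

```python
class Prompt:
    """Stand-in for the SDK's Prompt, only to illustrate the rename."""

    def __init__(self, template_messages):
        self.template_messages = template_messages

    def format_messages(self, **variables):
        # substitute {variables} into each message's content
        return [
            {"role": m["role"], "content": m["content"].format(**variables)}
            for m in self.template_messages
        ]

prompt = Prompt([{"role": "user", "content": "Summarize: {text}"}])
messages = prompt.format_messages(text="hello world")
print(messages)  # [{'role': 'user', 'content': 'Summarize: hello world'}]
```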
New Features
- Add support for tags with OpenAI Instrumentation
## 0.0.508
Deprecations
- `create_prompt()` is deprecated. `get_or_create_prompt()` should now be used.
New Features
- Added `get_or_create_prompt()`. For creating a new `Prompt`, use `get_or_create_prompt()`.