We are continuously developing Literal AI. Our focus is on rolling out new features as well as improving the Developer Experience. Check out our release notes to see what has been released recently. We release a new guide or cookbook for every new feature in Literal AI.

Upcoming new features:
Literal AI currently supports evaluation from your code and LLM-as-a-Judge evaluations run on the platform. We will soon support custom evaluations directly in Literal AI.
This feature will allow you to register your own evaluation code on Literal AI and use it to monitor your LLM system in production.
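In the meantime, evaluation from your code simply means writing scoring functions in your own codebase. The sketch below is an illustration only, not the upcoming registration API (which has not been published yet); the function names and scoring rules are assumptions chosen for the example.

```python
# Minimal sketch of code-based evaluators. The names and scoring rules are
# illustrative assumptions; the upcoming "register code on Literal AI"
# feature's actual API has not been announced yet.

def exact_match_score(expected: str, generated: str) -> float:
    """Return 1.0 if the generated answer matches the expected answer,
    ignoring case and surrounding whitespace, else 0.0."""
    return 1.0 if expected.strip().lower() == generated.strip().lower() else 0.0


def keyword_coverage_score(keywords: list[str], generated: str) -> float:
    """Return the fraction of required keywords found in the generated answer."""
    if not keywords:
        return 1.0
    text = generated.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords)


if __name__ == "__main__":
    answer = "The Eiffel Tower is in Paris, France."
    print(exact_match_score("the eiffel tower is in paris, france.", answer))  # 1.0
    print(keyword_coverage_score(["Eiffel", "Paris"], answer))                 # 1.0
```

Once custom evaluations ship, the idea is that functions like these could run directly on Literal AI against your production traces instead of only in your own pipeline.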
We are continuously improving our Playground and Prompt management features:
- **Compare view**: compare different models side by side in the Playground, so you can easily evaluate and contrast the outputs of various models for the same input.
- **Enhanced versioning**: versioning will cover not only prompt templates and tool definitions but also the tool code itself, giving you more comprehensive version control and traceability for your LLM applications.
These enhancements will enable more robust testing, easier debugging, and better management of your LLM workflows.
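For reference, prompt versioning is already available today. Below is a short sketch of pulling a pinned prompt version with the Python SDK; the prompt name `rag-answer`, the version number, and the template variables are assumptions for illustration, and the exact `get_prompt` / `format_messages` signatures should be checked against the current SDK reference.

```python
# Sketch: fetching a versioned prompt template from Literal AI (Python SDK).
# Assumptions: the prompt name, version number, and template variables below
# are illustrative; verify method signatures against the SDK reference.
from literalai import LiteralClient

client = LiteralClient()  # reads LITERAL_API_KEY from the environment

# Pin an explicit version so behaviour stays reproducible across deployments.
prompt = client.api.get_prompt(name="rag-answer", version=3)

messages = prompt.format_messages(
    context="Literal AI is an observability and evaluation platform for LLM apps.",
    question="What is Literal AI?",
)
print(messages)
```

The upcoming enhancements extend this same versioning model to tool definitions and tool code.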
This feature will bring support for continuous data streams to your LLM applications, such as live voice conversations and video streams. Contact us if you are interested in this functionality.