Roadmap
We are continuously developing Literal AI, focusing on rolling out new features and improving the Developer Experience. Check out our release notes to see what has been released recently.
We release a new guide or cookbook for every new feature in Literal AI.
Upcoming features:
Roles and Orgs
Literal AI currently supports Roles and Projects for collaboration.
We plan to enhance project management with:
- Customizable roles and permissions for diverse organizational needs
- Improved security through granular access controls
Evaluation
Literal AI currently supports evaluation from your code and LLM-as-a-Judge on Literal AI. We will soon support custom evaluations directly in Literal AI: you will be able to register evaluation code on Literal AI and use it to monitor your LLM system in production.
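Since the registration API is not released yet, the sketch below is only illustrative. The evaluator function `contains_citation` is a hypothetical example, and attaching its result as a score assumes the current Python SDK's `client.api.create_score` helper:

```python
# Hypothetical sketch of a code-based evaluation, assuming the current
# Python SDK. The evaluator name and step ID are illustrative; the
# upcoming registration API may look different.
from literalai import LiteralClient

client = LiteralClient(api_key="YOUR_API_KEY")

def contains_citation(output: str) -> float:
    """Code-based evaluator: score 1.0 if the answer cites a source."""
    return 1.0 if "[source]" in output.lower() else 0.0

# Today, evaluation results from your code can be attached to a step
# as scores; type "CODE" distinguishes them from "AI" or "HUMAN" scores.
client.api.create_score(
    step_id="step-uuid",  # ID of the monitored generation step
    name="contains-citation",
    type="CODE",
    value=contains_citation("The answer is 42. [source]"),
)
```

A registered custom evaluation would run logic like this directly on Literal AI, scoring production generations without a round-trip through your own infrastructure.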
Playground and Prompts
We are continuously improving our Playground and Prompt management features:
- Compare view: We’re adding a compare view to run different models side by side in the Playground, so developers can easily evaluate and contrast the outputs of various models for the same input.
- Enhanced versioning: We’re expanding versioning beyond prompt templates and tool definitions to cover the tool code itself, providing more comprehensive version control and traceability for your LLM applications. These enhancements will enable more robust testing, easier debugging, and better management of your LLM workflows. Prompt templates are already versioned and can be pulled from code today, as the sketch below shows.
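The sketch below assumes the current Python SDK's `get_prompt` and `format_messages`; the prompt name, version number, and variables are illustrative:

```python
# Pulling a versioned prompt template from Literal AI, assuming the
# current Python SDK. Prompt name and variables are illustrative.
from literalai import LiteralClient

client = LiteralClient(api_key="YOUR_API_KEY")

# Fetch a specific version of a prompt template (latest if omitted).
prompt = client.api.get_prompt(name="rag-answer", version=3)

# Render the template into chat messages ready for an LLM call.
messages = prompt.format_messages(question="Which features are upcoming?")
```

Enhanced versioning would extend this flow so that the tool code referenced by a prompt's tool definitions is versioned and retrievable the same way.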
Dashboard
The Dashboard in Literal AI will be enhanced for better data visualization and analysis. Key improvements include:
- Interactive data exploration
- Minimal changes to the existing codebase and database structure
Streams
This feature will add support for continuous data streams in your LLM applications, such as live voice conversations and video streams.
Contact us if you are interested in this functionality.