Annotation Queues
The Literal AI platform helps you organise and streamline the human evaluation process by making it easy to set up annotation queues. Each queue lets you add data for review and provides an accessible, efficient UI for domain experts to score items, tag them, and add crucial insights to datasets.
Creating an annotation queue
You can access your queues on the “Annotation Queues” page and create a new one by pressing the “+” button in the top-right corner of the table.
Annotation Queues Page
This opens a modal form where you can enter a name to identify the annotation queue and a description to convey its purpose.
Create Annotation Queue Form
Populating a queue with data
Once your queue is created, you can manually add items for review by opening your run/generation logs, selecting the relevant entries, and clicking “Add to Annotation Queue”.
Generation Logs Page
To pick the relevant data more precisely, you can also select specific steps.
Generation Detail Page
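The runs and generations you can select here are the ones your application has already logged to Literal AI. As a rough sketch of where that data comes from, the example below assumes the Python SDK (`literalai`) with OpenAI instrumentation; method names such as `instrument_openai`, `step` and `flush_and_stop` reflect the SDK at the time of writing and may differ in your version, so check the SDK reference.

```python
import os

from literalai import LiteralClient
from openai import OpenAI

# Assumed setup: LITERAL_API_KEY and OPENAI_API_KEY are set in the environment.
literal_client = LiteralClient(api_key=os.environ["LITERAL_API_KEY"])
openai_client = OpenAI()

# Instrument OpenAI so each completion is logged as a generation in Literal AI.
literal_client.instrument_openai()


@literal_client.step(type="run")
def answer(question: str) -> str:
    # This call is captured as a generation nested under the "answer" run,
    # and both will show up in the logs you can send to an annotation queue.
    completion = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return completion.choices[0].message.content


answer("What is an annotation queue?")

# Make sure buffered steps are sent before the process exits.
literal_client.flush_and_stop()
```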
Reviewing queues
On the “Annotation Queues” page, a reviewer can open a specific queue and start reviewing the items added to it. They provide human feedback by scoring and tagging each item.
Annotation Queue Item Page
A reviewer can also add a specific step to an existing dataset for further processing.
Finally, once the work is done, they can mark the item as Reviewed.
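What reviewers produce, human scores attached to steps and steps appended to datasets, can then be consumed programmatically. The sketch below assumes the Python SDK’s `client.api` helpers (`get_dataset`, `create_score`); the exact names, parameters and the dataset name `reviewed-generations` are assumptions for illustration, so verify them against the SDK reference.

```python
import os

from literalai import LiteralClient

client = LiteralClient(api_key=os.environ["LITERAL_API_KEY"])

# Hypothetical dataset name: "reviewed-generations" is only an example.
dataset = client.api.get_dataset(name="reviewed-generations")
if dataset:
    for item in dataset.items:
        # Each item keeps the step input and the expected output curated during review.
        print(item.input, item.expected_output)

# Human feedback can also be attached to a step programmatically,
# mirroring the scores reviewers create from the queue UI.
client.api.create_score(
    step_id="<step-id>",   # id of the step being scored
    name="accuracy",       # example score name
    type="HUMAN",
    value=1,
)
```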
More granular control
To keep track of what is happening across queues, super users also have access to a list of all Annotation Queue items.
Annotation Queue Items List Page