# Literal AI Documentation

## Docs

- [Overview](https://docs.literalai.com/get-started/overview.md): Literal AI is the collaborative platform for building **production-grade LLM apps**.
- [Quick Start](https://docs.literalai.com/get-started/quick-start.md): Install the Literal AI client and get your API key.
- [Annotation Queues](https://docs.literalai.com/guides/annotation-queue.md): Annotation Queues help you organise and streamline the human evaluation process. You can easily set up queues, add data for review, and provide an accessible, efficient UI for domain experts to score and tag data and add crucial insights to datasets.
- [Continuous Improvement](https://docs.literalai.com/guides/continuous-improvement.md): Improve your LLM applications over time with continuous improvement.
- [Dashboard](https://docs.literalai.com/guides/dashboard.md): Monitor your AI application usage, and track performance and reliability.
- [Datasets](https://docs.literalai.com/guides/dataset.md): Datasets are collections of input/expected-output samples for conducting experiments and non-regression tests.
- [Fine-tuning](https://docs.literalai.com/guides/distillation.md): Fine-tune your LLMs to improve performance and reduce costs.
- [Evaluation](https://docs.literalai.com/guides/evaluation.md): Learn how to evaluate your LLM applications and agents.
- [Experiments](https://docs.literalai.com/guides/experiment.md): Experiments enable continuous improvement of your prompts and agents, i.e. they guarantee net improvements.
- [Logs](https://docs.literalai.com/guides/logs.md): Logs are essential to monitor and improve your LLM app in production. Literal AI provides flexible, composable SDKs to log your LLM app at different levels of granularity.
- [Online Evals](https://docs.literalai.com/guides/online-evals.md): Automatically evaluate your LLM logs in production, monitor performance, and detect issues.
- [Playground](https://docs.literalai.com/guides/playground.md): The playground is a place where you can create, test, and debug prompts.
- [Prompts](https://docs.literalai.com/guides/prompts.md): Prompt Management enables collaboration between engineering and product teams to create, version, A/B test, debug, and monitor prompts directly from Literal AI.
- [Scorers](https://docs.literalai.com/guides/scorers.md): Scorers are functions (code or LLM-as-a-judge) that take an input, an output, and an expected output, and return a score.
- [General](https://docs.literalai.com/guides/settings/general.md)
- [LLM](https://docs.literalai.com/guides/settings/llm.md): Manage your LLM credentials and costs.
- [Scoring](https://docs.literalai.com/guides/settings/scoring.md)
- [Team](https://docs.literalai.com/guides/settings/team.md)
- [Users](https://docs.literalai.com/guides/users.md): Understand your end users and how they interact with your application.
- [Chainlit](https://docs.literalai.com/integrations/chainlit.md)
- [LangChain/LangGraph](https://docs.literalai.com/integrations/langchain.md)
- [LiteLLM](https://docs.literalai.com/integrations/litellm.md)
- [Llama Index](https://docs.literalai.com/integrations/llama-index.md)
- [Other LLM Providers (Anthropic, etc.)](https://docs.literalai.com/integrations/llm-providers.md)
- [LLM Inference servers (vLLM, etc.)](https://docs.literalai.com/integrations/messages-server.md)
- [Mistral AI](https://docs.literalai.com/integrations/mistralai.md)
- [OpenAI](https://docs.literalai.com/integrations/openai.md)
- [OpenLLMetry](https://docs.literalai.com/integrations/openllmetry.md)
- [Vercel AI SDK](https://docs.literalai.com/integrations/vercel-ai-sdk.md)
- [Export Data](https://docs.literalai.com/more/export-data.md)
- [Migration Guide](https://docs.literalai.com/more/migration-guide.md)
- [Release Notes](https://docs.literalai.com/more/release-notes.md)
- [Roadmap](https://docs.literalai.com/more/roadmap.md)
- [Privacy & Security](https://docs.literalai.com/more/security.md)
- [API](https://docs.literalai.com/python-client/api-reference/api.md)
- [Client](https://docs.literalai.com/python-client/api-reference/client.md)
- [Dataset](https://docs.literalai.com/python-client/api-reference/dataset.md)
- [Generation](https://docs.literalai.com/python-client/api-reference/generation.md)
- [Step](https://docs.literalai.com/python-client/api-reference/step.md)
- [Thread](https://docs.literalai.com/python-client/api-reference/thread.md)
- [Changelog](https://docs.literalai.com/python-client/development/changelog.md)
- [Introduction & Installation](https://docs.literalai.com/python-client/get-started/introduction.md)
- [Manual Deployment](https://docs.literalai.com/self-hosting/deployment.md)
- [Get Started](https://docs.literalai.com/self-hosting/get-started.md)
- [API](https://docs.literalai.com/typescript-client/api-reference/api.md)
- [Dataset](https://docs.literalai.com/typescript-client/api-reference/dataset.md)
- [Changelog](https://docs.literalai.com/typescript-client/development/changelog.md)
- [Installation](https://docs.literalai.com/typescript-client/introduction.md)
- [With wrappers](https://docs.literalai.com/typescript-client/with-wrappers.md)
- [Without wrappers](https://docs.literalai.com/typescript-client/without-wrappers.md)

## OpenAPI Specs

- [openapi](https://docs.literalai.com/api-reference/openapi.json)

## Optional

- [Cookbooks](https://github.com/Chainlit/literalai-cookbooks)