From d2d2f5f8088b9cfec5b6bd958d168e7924c08ccc Mon Sep 17 00:00:00 2001
From: Joschka Braun <47435119+joschkabraun@users.noreply.github.com>
Date: Wed, 11 Oct 2023 23:51:33 +0200
Subject: [PATCH] add parea to related resources (#759)

---
 related_resources.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/related_resources.md b/related_resources.md
index 6ce8ff1..d9707ef 100644
--- a/related_resources.md
+++ b/related_resources.md
@@ -16,6 +16,7 @@ People are writing great tools and papers for improving outputs from GPT. Here a
 - [LMQL](https://lmql.ai): A programming language for LLM interaction with support for typed prompting, control flow, constraints, and tools.
 - [OpenAI Evals](https://github.com/openai/evals): An open-source library for evaluating task performance of language models and prompts.
 - [Outlines](https://github.com/normal-computing/outlines): A Python library that provides a domain-specific language to simplify prompting and constrain generation.
+- [Parea AI](https://www.parea.ai): A platform for debugging, testing, and monitoring LLM apps.
 - [Promptify](https://github.com/promptslab/Promptify): A small Python library for using language models to perform NLP tasks.
 - [PromptPerfect](https://promptperfect.jina.ai/prompts): A paid product for testing and improving prompts.
 - [Prompttools](https://github.com/hegelai/prompttools): Open-source Python tools for testing and evaluating models, vector DBs, and prompts.