AgentList

AgentLabs

Stale
GitHub TypeScript Apache-2.0

Description

AgentLabs is a toolkit for agent development and testing, focused on experimentation, replay, and workflow support to improve iteration speed.

Tags

testing developer-tools evaluation python

Categories

⚡ Agent Tools 📊 Observability

Project Metrics

Stars 546
Forks 53
Watchers 546
Issues 7
Created September 7, 2023
Last commit February 6, 2025

Deployment

Local

Related Projects

DeepEval

14.8k · Python
Active

DeepEval is an open-source evaluation framework for LLM applications. It provides a rich set of evaluation metrics and tooling, supporting both unit and integration testing to help developers build reliable LLM applications.

llm evaluation testing +1

Harbor

1.5k · Python
Active

A framework for running agent evaluations and creating RL environments to measure and improve agent performance.

evaluation benchmark rl-environments +2

Prompt Ops

800 · Python
Active

An open-source tool from Meta for LLM prompt optimization. It automates the process of continuously improving and refining LLM prompts.

prompt-engineering llm tools +2

Giskard

5.3k · Python
Active

An open-source evaluation and testing library for LLM agents, providing automated model scanning, bias detection, performance benchmarking, and compliance checks.

evaluation testing llm-safety +3

Curated directory of open-source AI agent projects

© 2026 AgentList. All rights reserved.

Made with ❤️ for the open source community