Why OpenJudge?

OpenJudge is an open-source evaluation framework for AI applications (e.g., AI agents or chatbots) that measures output quality and drives continuous application improvement.

In practice, application excellence depends on a trustworthy evaluation workflow: Collect test data → Define graders → Run evaluation at scale → Analyze weaknesses → Iterate quickly.

OpenJudge provides ready-to-use graders and supports generating scenario-specific rubrics (as graders), making this loop simpler, more rigorous, and easy to integrate into your existing pipeline.

It can also convert grading results into reward signals to help you fine-tune and optimize your application.
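The loop above can be sketched in plain Python. The `ExactMatchGrader`, `run_evaluation`, and `to_reward` names below are illustrative assumptions for this sketch, not OpenJudge's actual API:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical grader: scores a model answer against a reference in [0, 1].
# OpenJudge's real grader interfaces may differ; this only illustrates the loop.
@dataclass
class ExactMatchGrader:
    def grade(self, answer: str, reference: str) -> float:
        return 1.0 if answer.strip().lower() == reference.strip().lower() else 0.0

def run_evaluation(grader, dataset):
    """Run the grader over (answer, reference) pairs and collect scores."""
    return [grader.grade(ans, ref) for ans, ref in dataset]

def to_reward(score: float) -> float:
    """Map a [0, 1] grade to a [-1, 1] reward signal for RL fine-tuning."""
    return 2.0 * score - 1.0

dataset = [("Paris", "paris"), ("4", "5")]
scores = run_evaluation(ExactMatchGrader(), dataset)
print(mean(scores))                    # aggregate quality -> where to iterate
print([to_reward(s) for s in scores])  # per-sample rewards for training
```

The point of the sketch is the shape of the loop: graders produce scores at scale, aggregate scores reveal weaknesses, and the same scores can be remapped into reward signals for optimization.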

Key Features

  • Systematic & Quality-Assured Grader Library: Access 50+ production-ready graders featuring a comprehensive taxonomy, rigorously validated for reliable performance.

    • Multi-Scenario Coverage: Extensive support for diverse domains including Agent, text, code, math, and multimodal tasks via specialized graders. Explore Supported Scenarios
    • Holistic Agent Evaluation: Beyond final outcomes, we assess the entire lifecycle—including trajectories and specific components (Memory, Reflection, Tool Use). Agent Lifecycle Evaluation
    • Quality Assurance: Built for reliability. Every grader comes with benchmark datasets and pytest integration for immediate quality validation. View Benchmark Datasets
  • Flexible Grader Building: Choose the build method that fits your requirements:

    • Customization: Clear requirements, but no existing grader? If you have explicit rules or logic, use our Python interfaces or prompt templates to quickly define your own grader. Custom Grader Development Guide
    • Zero-shot Rubrics Generation: Not sure what criteria to use, and no labeled data yet? Just provide a task description and optional sample queries—the LLM will automatically generate evaluation rubrics for you. Ideal for rapid prototyping. Zero-shot Rubrics Generation Guide
    • Data-driven Rubrics Generation: Ambiguous requirements, but have a few examples? Use the GraderGenerator to automatically summarize evaluation rubrics from your annotated data and generate an LLM-based grader. Data-driven Rubrics Generation Guide
    • Training Judge Models: Massive data and need peak performance? Use our training pipeline to train a dedicated Judge Model. This is ideal for complex scenarios where prompt-based grading falls short. Train Judge Models
  • Easy Integration: Using mainstream observability platforms like LangSmith or Langfuse? We offer seamless integration to enhance their evaluators and automated evaluation capabilities. We also provide integrations with training frameworks like VERL for RL training.
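For the customization path, a rule-based grader can be as small as a class with a `grade` method that encodes your explicit rules. The `FinalAnswerGrader` below is a hypothetical sketch of that pattern; OpenJudge's real Python interfaces and base classes may look different:

```python
import re

# Hypothetical custom grader enforcing two explicit rules:
# 1) the response must end with an "Answer: <number>" line, and
# 2) it must stay under a length budget.
# OpenJudge's actual grader interface may differ; this shows the pattern only.
class FinalAnswerGrader:
    def __init__(self, max_chars: int = 500):
        self.max_chars = max_chars

    def grade(self, response: str) -> dict:
        has_answer = bool(re.search(r"Answer:\s*-?\d+(\.\d+)?\s*$", response))
        within_budget = len(response) <= self.max_chars
        score = (0.5 if has_answer else 0.0) + (0.5 if within_budget else 0.0)
        return {"score": score, "has_answer": has_answer,
                "within_budget": within_budget}

grader = FinalAnswerGrader()
result = grader.grade("2 + 2 = 4.\nAnswer: 4")
```

Returning per-rule fields alongside the score keeps grading results interpretable, which also makes the grader easy to validate against a small labeled set before deploying it at scale.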

Quick Tutorials

More Tutorials

  • Built-in Graders
  • Build Graders
  • Integrations
  • Applications
  • Running Graders
  • Validating Graders