
Eval Cases

Eval cases are individual test cases within an evaluation file. Each case defines input messages, expected outcomes, and optional evaluator overrides.

evalcases:
  - id: addition
    expected_outcome: Correctly calculates 15 + 27 = 42
    input_messages:
      - role: user
        content: What is 15 + 27?
    expected_messages:
      - role: assistant
        content: "42"
Field               Required  Description
id                  Yes       Unique identifier for the eval case
expected_outcome    Yes       Description of what a correct response should contain
input_messages      Yes       Array of input messages sent to the target
expected_messages   No        Expected response messages for comparison
execution           No        Per-case execution overrides (target, evaluators)
rubrics             No        Structured evaluation criteria
sidecar             No        Additional metadata passed to evaluators
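The rubrics field has no example on this page; as a rough sketch, assuming each rubric is a named criterion with a short description (the exact keys may differ), a case using it might look like:

evalcases:
  - id: essay-feedback
    expected_outcome: Gives constructive, specific feedback
    rubrics:
      # Entry keys (name, description) are an assumed structure, not confirmed here.
      - name: specificity
        description: Feedback points to concrete passages rather than generalities
      - name: tone
        description: Feedback stays constructive and respectful
    input_messages:
      - role: user
        content: Review the opening paragraph of my essay.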

Messages follow the standard chat format:

input_messages:
  - role: system
    content: You are a helpful math tutor.
  - role: user
    content: What is 15 + 27?
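Because input messages use the standard chat roles, a case could in principle carry a longer conversation history. The sketch below assumes multi-turn histories are accepted, which this page does not state explicitly:

input_messages:
  - role: system
    content: You are a helpful math tutor.
  - role: user
    content: What is 15 + 27?
  - role: assistant
    content: 15 + 27 = 42.
  - role: user
    content: And what is 42 divided by 6?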

The optional expected_messages field provides reference responses that evaluators can compare against:

expected_messages:
  - role: assistant
    content: "42"

Use the execution field to override the default target or evaluators for specific cases:

evalcases:
  - id: complex-case
    expected_outcome: Provides detailed explanation
    input_messages:
      - role: user
        content: Explain quicksort algorithm
    execution:
      target: gpt4_target
      evaluators:
        - name: depth_check
          type: llm_judge
          prompt: ./judges/depth.md
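The example above overrides both the target and the evaluators. Presumably a case can also override just one of them and fall back to the file-level default for the other; a sketch under that assumption (the evaluator name and judge prompt path are made up for illustration):

evalcases:
  - id: style-check
    expected_outcome: Answers in formal English
    input_messages:
      - role: user
        content: Summarize the report in one paragraph.
    execution:
      evaluators:
        - name: formality_judge  # hypothetical evaluator name
          type: llm_judge
          prompt: ./judges/formality.md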

Pass additional context to evaluators via the sidecar field:

evalcases:
  - id: code-gen
    expected_outcome: Generates valid Python
    sidecar:
      language: python
      difficulty: medium
    input_messages:
      - role: user
        content: Write a function to sort a list