
Validation Loop Pattern

Ensure quality by verifying results and retrying when they do not meet the criteria. This pattern prevents low-quality outputs from proceeding to the next step.

[action] → [check] → success → [next]
                     failure → [fix] → [increment-iteration] → [action]

The loop is built from an action node (do-work), a quality check (check-quality), and a fix step (fix-issues):

{
  "type": "agent-directive",
  "id": "do-work",
  "directive": "Complete the task. Iteration: {{current_iteration}}",
  "completionCondition": "Task completed with quality standards met",
  "inputSchema": {
    "type": "object",
    "properties": {
      "result": { "type": "string" },
      "quality_check_passed": { "type": "string", "enum": ["yes", "no"] }
    },
    "required": ["result", "quality_check_passed"]
  },
  "connections": { "success": "check-quality" }
}

{
  "type": "condition",
  "id": "check-quality",
  "condition": {
    "operator": "eq",
    "left": { "contextPath": "quality_check_passed" },
    "right": "yes"
  },
  "connections": {
    "true": "next-step",
    "false": "fix-issues"
  }
}

{
  "type": "agent-directive",
  "id": "fix-issues",
  "directive": "Fix issues found in iteration {{current_iteration}}. Previous result: {{result}}",
  "connections": { "success": "increment-iteration" }
}

Use an expression node to increment the iteration counter:

{
  "type": "expression",
  "id": "increment-iteration",
  "expressions": ["current_iteration = current_iteration + 1"],
  "connections": { "default": "do-work" }
}
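
The loop references current_iteration before it is ever incremented, so the counter needs to be seeded ahead of the first do-work pass. One way to do that, assuming the expression syntax also accepts a literal assignment (the node id and its placement at the start of the flow are illustrative):

{
  "type": "expression",
  "id": "init-iteration",
  "expressions": ["current_iteration = 1"],
  "connections": { "default": "do-work" }
}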

Add a maximum-iterations check before retrying, so the loop cannot run indefinitely:

{
  "type": "condition",
  "id": "check-max-iterations",
  "condition": {
    "operator": "lt",
    "left": { "contextPath": "current_iteration" },
    "right": 5
  },
  "connections": {
    "true": "do-work",
    "false": "escalate-to-user"
  }
}
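
The escalate-to-user target is not defined in this pattern. A minimal sketch of what it could look like; the directive wording and schema are assumptions rather than part of the original flow, and connections are omitted because they depend on the surrounding workflow:

{
  "type": "agent-directive",
  "id": "escalate-to-user",
  "directive": "Quality criteria were not met after {{current_iteration}} iterations. Summarize what was attempted and ask the user how to proceed.",
  "completionCondition": "User has decided how to proceed",
  "inputSchema": {
    "type": "object",
    "properties": {
      "user_decision": { "type": "string" }
    },
    "required": ["user_decision"]
  }
}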

For measurable quality, replace the yes/no flag with a numeric score and a threshold condition:

{
  "inputSchema": {
    "properties": {
      "quality_score": { "type": "number", "minimum": 0, "maximum": 10 }
    },
    "required": ["quality_score"]
  }
}

{
  "condition": {
    "operator": "gte",
    "left": { "contextPath": "quality_score" },
    "right": 8
  }
}
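
Putting the two fragments together, a complete score-based gate might look like this (the node id and connection targets are illustrative, not taken from an existing flow):

{
  "type": "condition",
  "id": "check-quality-score",
  "condition": {
    "operator": "gte",
    "left": { "contextPath": "quality_score" },
    "right": 8
  },
  "connections": {
    "true": "next-step",
    "false": "fix-issues"
  }
}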

From development-flow.json:

{
  "id": "verify-step-implementation",
  "directive": "Verify step {{current_step_name}} implementation:\n- Expected: {{expected_outcome}}\n- Check actual matches expected",
  "inputSchema": {
    "properties": {
      "step_verified": { "type": "string", "enum": ["yes", "no"] },
      "verification_evidence": { "type": "string" }
    },
    "required": ["step_verified", "verification_evidence"]
  }
}
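
On the failure branch, the captured verification_evidence can be fed straight back into the fix directive. A hypothetical follow-up node (its id, wording, and connection target are not from development-flow.json):

{
  "type": "agent-directive",
  "id": "fix-step-issues",
  "directive": "Fix step {{current_step_name}}. The verification found: {{verification_evidence}}",
  "connections": { "success": "increment-iteration" }
}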

When a workflow includes validation gates (nodes that ask the user for approval), the agent must follow strict rules:

Always include this instruction in validation gate directives:

CRITICALLY IMPORTANT - REACTION TO FEEDBACK:
- If user said "yes" → approval = "yes"
- If user gave ANY feedback or said "no" → approval = "no"
- DO NOT fix yourself!
- Reply approval = "no" and write user_feedback with feedback
- Workflow will direct to fix branch itself
- All fixes are done only through workflow, not independently

Without explicit instructions, agents tend to:

  1. Interpret user feedback as minor corrections and self-fix
  2. Report approval = "yes" even when user gave feedback
  3. Skip the workflow’s fix branch entirely

This breaks the workflow’s iteration loop and prevents proper quality control.

{
  "type": "agent-directive",
  "id": "approve-plan",
  "directive": "Show user the plan and ask for confirmation.\n\nPlan: {{plan_summary}}\n\n**CRITICALLY IMPORTANT - REACTION TO FEEDBACK:**\n- If user said \"yes\" → approval = \"yes\"\n- If user gave ANY feedback or said \"no\" → approval = \"no\"\n- DO NOT fix yourself!\n- Reply approval = \"no\" and write user_feedback with feedback\n- Workflow will direct to fix branch itself",
  "completionCondition": "User confirmed or rejected plan",
  "inputSchema": {
    "type": "object",
    "properties": {
      "plan_approved": { "type": "string", "enum": ["yes", "no"] },
      "user_feedback": { "type": "string" }
    },
    "required": ["plan_approved"]
  },
  "connections": { "success": "route-plan-approval" }
}
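
The route-plan-approval node referenced above is not shown in this pattern. A sketch of how it could branch, with the connection targets being assumptions:

{
  "type": "condition",
  "id": "route-plan-approval",
  "condition": {
    "operator": "eq",
    "left": { "contextPath": "plan_approved" },
    "right": "yes"
  },
  "connections": {
    "true": "execute-plan",
    "false": "revise-plan"
  }
}

The hypothetical revise-plan branch would receive {{user_feedback}} and loop back to approve-plan, so every fix happens inside the workflow, as the rules above require.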