
Phase 3 Playbook — Build & Iterate

Duration: 2-12 weeks (varies by scope) | Agents: 15-30+ | Gate Keeper: Agents Orchestrator

Objective

Implement all features through continuous Dev↔QA loops. Every task is validated before the next begins. This is where the bulk of the work happens — and where NEXUS’s orchestration delivers the most value.

Pre-Conditions

1. Foundation Verified: Phase 2 Quality Gate passed (foundation verified with screenshots)
2. Backlog Ready: Sprint Prioritizer backlog available with RICE scores
3. Pipeline Operational: CI/CD pipeline operational and tested
4. Design System Ready: Design system and component library accessible
5. API Scaffold Ready: API scaffold with auth system deployed

The Dev↔QA Loop — Core Mechanic

FOR EACH task IN sprint_backlog (ordered by RICE score):

  1. ASSIGN task to appropriate Developer Agent
  2. Developer IMPLEMENTS task
  3. Evidence Collector TESTS task
     - Visual screenshots (desktop, tablet, mobile)
     - Functional verification against acceptance criteria
     - Brand consistency check
  4. IF verdict == PASS:
       Mark task complete
       Move to next task
     ELIF verdict == FAIL AND attempts < 3:
       Send QA feedback to Developer
       Developer FIXES specific issues
       Return to step 3
     ELIF attempts >= 3:
       ESCALATE to Agents Orchestrator
       Orchestrator decides: reassign, decompose, defer, or accept
  5. UPDATE pipeline status report
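
The loop above can be sketched in Python. This is an illustrative skeleton, not the NEXUS implementation: `assign`, `implement`, `run_qa`, `escalate`, and `report` are hypothetical callables standing in for whatever agent-dispatch mechanism the orchestrator actually uses.

```python
MAX_ATTEMPTS = 3

def run_dev_qa_loop(sprint_backlog, assign, implement, run_qa, escalate, report):
    """Drive each task through the Dev<->QA loop in RICE order."""
    for task in sorted(sprint_backlog, key=lambda t: t["rice"], reverse=True):
        developer = assign(task)                      # 1. pick the right agent
        feedback = None
        for attempt in range(1, MAX_ATTEMPTS + 1):
            implement(developer, task, feedback)      # 2. build (or fix) the task
            verdict, feedback = run_qa(task)          # 3. Evidence Collector tests
            if verdict == "PASS":
                task["status"] = "complete"           # 4a. done, move on
                break
        else:
            # Three failed attempts: Orchestrator decides the next move
            # (reassign, decompose, defer, or accept with limitations).
            task["status"] = escalate(task, feedback)
        report(task)                                  # 5. update pipeline status
```

The `for`/`else` construct makes the escalation path explicit: the `else` branch runs only when all three attempts fail without a `break`.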

Agent Assignment Matrix

| Task Category | Primary Agent | QA Agent |
| --- | --- | --- |
| React/Vue/Angular UI | Frontend Developer | Evidence Collector |
| REST/GraphQL API | Backend Architect | API Tester |
| Database operations | Backend Architect | API Tester |
| Mobile (iOS/Android) | Mobile App Builder | Evidence Collector |
| ML model/pipeline | AI Engineer | Test Results Analyzer |
| CI/CD/Infrastructure | DevOps Automator | Performance Benchmarker |
| Premium/complex | Senior Developer | Evidence Collector |
| Quick prototype/POC | Rapid Prototyper | Evidence Collector |
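
In code, the matrix reduces to a lookup table. A minimal sketch, with illustrative category keys (the key names are assumptions, not a NEXUS API):

```python
# Maps a task category to its (primary developer agent, QA agent) pair,
# mirroring the Agent Assignment Matrix above.
ASSIGNMENT_MATRIX = {
    "ui": ("Frontend Developer", "Evidence Collector"),
    "api": ("Backend Architect", "API Tester"),
    "database": ("Backend Architect", "API Tester"),
    "mobile": ("Mobile App Builder", "Evidence Collector"),
    "ml": ("AI Engineer", "Test Results Analyzer"),
    "infrastructure": ("DevOps Automator", "Performance Benchmarker"),
    "premium": ("Senior Developer", "Evidence Collector"),
    "prototype": ("Rapid Prototyper", "Evidence Collector"),
}

def assign_agents(category: str) -> tuple[str, str]:
    """Return (primary, qa) for a task category; unknown categories escalate."""
    try:
        return ASSIGNMENT_MATRIX[category]
    except KeyError:
        raise ValueError(
            f"No agent mapping for category {category!r}; "
            "escalate to Agents Orchestrator"
        )
```

Raising on an unknown category keeps routing decisions explicit rather than silently defaulting to one agent.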

Parallel Build Tracks

Managed by: Agents Orchestrator
Agents: Frontend Developer, Backend Architect, AI Engineer, Mobile App Builder, Senior Developer
QA: Evidence Collector, API Tester, Test Results Analyzer
Cadence:
  • Sprint cadence: 2-week sprints
  • Daily: Task implementation + QA validation
  • End of sprint: Sprint review + retrospective

Sprint Execution Template

1. Sprint Planning (Day 1)

Sprint Prioritizer activates:
  • Review backlog with updated RICE scores
  • Select tasks based on team velocity
  • Assign tasks to developer agents
  • Identify dependencies and ordering
  • Set sprint goal and success criteria
Output: Sprint Plan with task assignments
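
The backlog ordering assumes standard RICE scoring. A minimal sketch using the conventional formula, (Reach × Impact × Confidence) / Effort; the parameter scales noted below are the common conventions, not NEXUS-specific:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE prioritization score: (Reach * Impact * Confidence) / Effort.

    reach: users/events affected per period; impact: per-user effect
    (commonly 0.25-3); confidence: 0-1; effort: person-weeks.
    Higher scores sort earlier in the sprint backlog.
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort
```

For example, a feature reaching 1,000 users with impact 2, confidence 0.8, and 4 person-weeks of effort scores 400.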
2. Daily Execution (Day 2 to N-1)

Agents Orchestrator manages:
  • Current task status check
  • Dev↔QA loop execution
  • Blocker identification and resolution
  • Progress tracking and reporting
Status report format:
  • Tasks completed today
  • Tasks in QA
  • Tasks in development
  • Blocked tasks (with reason)
  • QA pass rate: [X/Y]
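
The status report could be rendered directly from task records. A sketch under assumed field names (`status`, `blocked_reason`, `qa_passed`, `qa_runs` are illustrative, not a NEXUS schema):

```python
from collections import Counter

def daily_status(tasks: list[dict]) -> str:
    """Render the daily status report from a list of task records.

    Each task dict is assumed to carry 'status' (one of 'complete', 'qa',
    'dev', 'blocked'), an optional 'blocked_reason', and 'qa_passed'/'qa_runs'
    counters maintained by the Dev<->QA loop.
    """
    counts = Counter(t["status"] for t in tasks)  # missing statuses count as 0
    blocked = [f"{t['id']}: {t.get('blocked_reason', 'unknown')}"
               for t in tasks if t["status"] == "blocked"]
    passed = sum(t.get("qa_passed", 0) for t in tasks)
    runs = sum(t.get("qa_runs", 0) for t in tasks)
    return "\n".join([
        f"Tasks completed today: {counts['complete']}",
        f"Tasks in QA: {counts['qa']}",
        f"Tasks in development: {counts['dev']}",
        f"Blocked tasks: {'; '.join(blocked) or 'none'}",
        f"QA pass rate: {passed}/{runs}",
    ])
```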
3. Sprint Review (Day N)

Project Shepherd facilitates:
  • Demo completed features
  • Review QA evidence for each task
  • Collect stakeholder feedback
  • Update backlog based on learnings
Participants: All active agents + stakeholders
Output: Sprint Review Summary
4. Sprint Retrospective (Day N)

Workflow Optimizer facilitates:
  • What went well?
  • What could improve?
  • What will we change next sprint?
  • Process efficiency metrics
Output: Retrospective Action Items

Orchestrator Decision Logic

WHEN task fails QA:
  IF attempt == 1:
    → Send specific QA feedback to developer
    → Developer fixes ONLY the identified issues
    → Re-submit for QA
    
  IF attempt == 2:
    → Send accumulated QA feedback
    → Consider: Is the developer agent the right fit?
    → Developer fixes with additional context
    → Re-submit for QA
    
  IF attempt == 3:
    → ESCALATE
    → Options:
      a) Reassign to different developer agent
      b) Decompose task into smaller sub-tasks
      c) Revise approach/architecture
      d) Accept with known limitations (document)
      e) Defer to future sprint
    → Document decision and rationale
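
The attempt-based branching above is simple enough to express as a single dispatch function. A sketch; the action names returned are illustrative labels, not a NEXUS API:

```python
def on_qa_failure(task: dict, attempt: int, feedback_log: list[str]) -> tuple:
    """Decide the next action after a failed QA run, mirroring the logic above.

    feedback_log accumulates QA feedback across attempts; the first retry
    sends only the newest feedback, the second sends everything.
    """
    if attempt == 1:
        return ("retry", feedback_log[-1:])          # fix only the new issues
    if attempt == 2:
        # Also the point to ask: is this developer agent the right fit?
        return ("retry_with_context", feedback_log)  # all accumulated feedback
    # attempt >= 3: escalate with the documented options
    options = ("reassign", "decompose", "revise",
               "accept_with_limitations", "defer")
    return ("escalate", options)
```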

Quality Gate Checklist

| # | Criterion | Evidence Source | Status |
| --- | --- | --- | --- |
| 1 | All sprint tasks pass QA (100%) | Evidence Collector screenshots per task | |
| 2 | All API endpoints validated | API Tester regression report | |
| 3 | Performance baselines met (P95 < 200ms) | Performance Benchmarker report | |
| 4 | Brand consistency (95%+ adherence) | Brand Guardian audit | |
| 5 | No critical bugs (zero P0/P1) | Test Results Analyzer summary | |
| 6 | All acceptance criteria met | Task-by-task verification | |
| 7 | Code review completed for all PRs | Git history evidence | |
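
Since every criterion must pass, the gate itself is an all-or-nothing check. A minimal sketch, assuming each criterion's status has already been collected from its evidence source:

```python
def gate_decision(checks: dict[str, bool]) -> tuple[str, list[str]]:
    """Evaluate the Phase 3 quality gate: every criterion must pass to proceed.

    checks maps each criterion name (from the checklist above) to its
    pass/fail status. Returns the verdict plus any failing criteria so the
    Orchestrator can report exactly what blocks the gate.
    """
    failures = [name for name, ok in checks.items() if not ok]
    return ("PROCEED" if not failures else "HOLD", failures)
```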

Gate Decision

Proceed to Phase 4: Feature-complete application ready for hardening.

Handoff to Phase 4

For Reality Checker:
  • Complete application (all features implemented)
  • All QA evidence from Dev↔QA loops
  • API Tester regression results
  • Performance Benchmarker baseline data
  • Brand Guardian consistency audit
  • Known issues list (if any accepted limitations)
For Legal Compliance Checker:
  • Data handling implementation details
  • Privacy policy implementation
  • Consent management implementation
  • Security measures implemented
For Performance Benchmarker:
  • Application URLs for load testing
  • Expected traffic patterns
  • Performance budgets from architecture
For Infrastructure Maintainer:
  • Production environment requirements
  • Scaling configuration needs
  • Monitoring alert thresholds

Phase 3 is complete when all sprint tasks pass QA, all API endpoints are validated, performance baselines are met, and no critical bugs remain open.
