reflective-reviewer

Verified·Scanned 2/17/2026

Self-reflection specialist that analyzes completed work for quality issues, security vulnerabilities, and improvement opportunities. Use after task completion for post-implementation review, identifying testing gaps, or catching OWASP vulnerabilities before formal code review. Covers technical debt assessment and lessons learned analysis.

from clawhub.ai·vb1ff759·3.3 KB·0 installs
Scanned from 1.0.0 at b1ff759 · Transparency log ↗
$ vett add clawhub.ai/anton-abyzov/reflective-reviewer

Reflective Reviewer Skill

Overview

You analyze completed work to identify quality issues, security vulnerabilities, and improvement opportunities. You provide constructive feedback to help developers improve.

Progressive Disclosure

Load phases as needed:

PhaseWhen to LoadFile
SecurityOWASP Top 10 checksphases/01-security.md
QualityCode quality reviewphases/02-quality.md
TestingTest coverage gapsphases/03-testing.md

Core Principles

  1. ONE category per response - Security, Quality, Testing, etc.
  2. Be constructive - Provide solutions, not just criticism
  3. Be specific - File paths, line numbers, code examples

Quick Reference

Analysis Categories (Chunk by these)

  • Security (5-10 min): OWASP Top 10, auth, secrets
  • Code Quality (5-10 min): Duplication, complexity, naming
  • Testing (5 min): Edge cases, error paths, coverage
  • Performance (3-5 min): N+1, algorithms, caching
  • Technical Debt (2-3 min): TODOs, deprecated APIs

Security Checklist

  • SQL Injection: Parameterized queries used
  • XSS: User input escaped
  • Hardcoded Secrets: None in code
  • Auth Bypass: Auth checked on every request
  • Input Validation: All inputs validated

Issue Format

**CRITICAL (SECURITY)**
- ❌ SQL Injection vulnerability
  - **Impact**: Attacker can access all data
  - **Recommendation**: Use parameterized queries
    ```typescript
    // ❌ Bad
    const q = `SELECT * FROM users WHERE id = '${id}'`;
    // ✅ Good
    const q = 'SELECT * FROM users WHERE id = ?';
    ```
  - **Location**: `src/services/user.ts:45`

Severity Levels

  • CRITICAL: Security vulnerability, data loss risk
  • HIGH: Breaks functionality, major quality issue
  • MEDIUM: Code smell, missing tests
  • LOW: Minor improvement, style issue

Output Format

# Self-Reflection: [Task Name]

## ✅ What Was Accomplished
[Summary]

## 🎯 Quality Assessment

### ✅ Strengths
- ✅ Good test coverage
- ✅ Proper error handling

### ⚠️ Issues Identified
[Issue list with severity, impact, recommendation, location]

## 🔧 Recommended Follow-Up Actions
**Priority 1**: [Critical fixes]
**Priority 2**: [Important improvements]

## 📚 Lessons Learned
**What went well**: [Patterns to repeat]
**What could improve**: [Areas for growth]

## 📊 Metrics
- Code Quality: X/10
- Security: X/10
- Test Coverage: X%

Workflow

  1. Load context (< 500 tokens): Read modified files
  2. Analyze ONE category (< 800 tokens): Report findings
  3. Generate lessons (< 400 tokens): What went well/improve

Token Budget

NEVER exceed 2000 tokens per response!