content-quality-auditor
This skill audits content against the CORE-EEAT 80-item benchmark and generates a detailed per-item audit, dimension scores, and a prioritized action plan. When connectors are enabled, it can fetch page and competitor content automatically (see Data Sources), and it references /seo:check-technical for site-level fixes.
# Content Quality Auditor
Based on CORE-EEAT Content Benchmark. Full benchmark reference: references/core-eeat-benchmark.md
This skill evaluates content quality across 80 standardized criteria organized in 8 dimensions. It produces a comprehensive audit report with per-item scoring, dimension and system scores, weighted totals by content type, and a prioritized action plan.
## When to Use This Skill
- Auditing content quality before publishing
- Evaluating existing content for improvement opportunities
- Benchmarking content against CORE-EEAT standards
- Comparing content quality against competitors
- Assessing both GEO readiness (AI citation potential) and SEO strength (source credibility)
- Running periodic content quality checks as part of a content maintenance program
- After writing or optimizing content with seo-content-writer or geo-content-optimizer
## What This Skill Does
- Full 80-Item Audit: Scores every CORE-EEAT check item as Pass/Partial/Fail
- Dimension Scoring: Calculates scores for all 8 dimensions (0-100 each)
- System Scoring: Computes GEO Score (CORE) and SEO Score (EEAT)
- Weighted Totals: Applies content-type-specific weights for final score
- Veto Detection: Flags critical trust violations (T04, C01, R10)
- Priority Ranking: Identifies Top 5 improvements sorted by impact
- Action Plan: Generates specific, actionable improvement steps
## How to Use
### Audit Content
Audit this content against CORE-EEAT: [content text or URL]
Run a content quality audit on [URL] as a [content type]
### Audit with Content Type
CORE-EEAT audit for this product review: [content]
Score this how-to guide against the 80-item benchmark: [content]
### Comparative Audit
Audit my content vs competitor: [your content] vs [competitor content]
## Data Sources
See CONNECTORS.md for tool category placeholders.
With ~~web crawler + ~~SEO tool connected: Automatically fetch page content, extract HTML structure, check schema markup, verify internal/external links, and pull competitor content for comparison.
With manual data only: Ask the user to provide:
- Content text, URL, or file path
- Content type (if not auto-detectable): Product Review, How-to Guide, Comparison, Landing Page, Blog Post, FAQ Page, Alternative, Best-of, or Testimonial
- Optional: competitor content for benchmarking
Proceed with the full 80-item audit using provided data. Note in the output which items could not be fully evaluated due to missing access (e.g., backlink data, schema markup, site-level signals).
## Instructions
When a user requests a content quality audit:
### Step 1: Preparation
### Audit Setup
**Content**: [title or URL]
**Content Type**: [auto-detected or user-specified]
**Dimension Weights**: [loaded from content-type weight table]
#### Veto Check (Emergency Brake)
| Veto Item | Status | Action |
|-----------|--------|--------|
| T04: Disclosure Statements | ✅ Pass / ⚠️ VETO | [If VETO: "Add disclosure banner at page top immediately"] |
| C01: Intent Alignment | ✅ Pass / ⚠️ VETO | [If VETO: "Rewrite title and first paragraph"] |
| R10: Content Consistency | ✅ Pass / ⚠️ VETO | [If VETO: "Verify all data before publishing"] |
If any veto item triggers, flag it prominently at the top of the report and recommend immediate action before continuing the full audit.
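The emergency-brake logic above can be sketched as follows. This is a minimal illustration, assuming a veto triggers only on a Fail (the benchmark may define stricter conditions); the function name is hypothetical, and the item IDs match the veto table.

```python
# Minimal sketch of the veto "emergency brake": any triggered veto item
# is surfaced before the full audit continues.
VETO_ITEMS = {"T04": "Disclosure Statements",
              "C01": "Intent Alignment",
              "R10": "Content Consistency"}

def veto_check(results):
    """results: dict of item ID -> 'Pass' / 'Partial' / 'Fail'."""
    triggered = [f"{i}: {VETO_ITEMS[i]}" for i in VETO_ITEMS
                 if results.get(i) == "Fail"]
    return triggered  # empty list means no emergency brake

print(veto_check({"T04": "Fail", "C01": "Pass", "R10": "Pass"}))
```

A non-empty result means the report should lead with the veto warning before any dimension scores.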
### Step 2: CORE Audit (40 items)
Evaluate each item against the criteria in references/core-eeat-benchmark.md.
Score each item:
- Pass = 10 points (fully meets criteria)
- Partial = 5 points (partially meets criteria)
- Fail = 0 points (does not meet criteria)
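The per-item scoring above rolls up to a 0-100 dimension score (10 items × 10 points per dimension). A minimal sketch, which also handles the "N/A — requires site-level data" case noted in Step 3 by excluding unverifiable items and rescaling the remainder (the function name is an assumption, not part of the benchmark):

```python
# Map item results to points; "N/A" items are excluded from the
# dimension average rather than counted as failures.
POINTS = {"Pass": 10, "Partial": 5, "Fail": 0}

def dimension_score(item_results):
    """item_results: list of 'Pass' / 'Partial' / 'Fail' / 'N/A' strings."""
    scored = [r for r in item_results if r != "N/A"]
    if not scored:
        return None  # nothing observable in this dimension
    earned = sum(POINTS[r] for r in scored)
    return round(100 * earned / (10 * len(scored)))

print(dimension_score(["Pass"] * 6 + ["Partial"] * 3 + ["Fail"]))  # → 75
```

With all 10 items observable this reduces to a simple sum of item points out of 100.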
### C — Contextual Clarity
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| C01 | Intent Alignment | Pass/Partial/Fail | [specific observation] |
| C02 | Direct Answer | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |
| C10 | Semantic Closure | Pass/Partial/Fail | [specific observation] |
**C Score**: [X]/100
### O — Organization
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| O01 | Heading Hierarchy | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |
**O Score**: [X]/100
### R — Referenceability
[Same format]
**R Score**: [X]/100
### E — Exclusivity
[Same format]
**E Score**: [X]/100
### Step 3: EEAT Audit (40 items)
Same format for Exp, Ept, A, T dimensions.
### Exp — Experience
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| Exp01 | First-Person Narrative | Pass/Partial/Fail | [specific observation] |
| ... | ... | ... | ... |
**Exp Score**: [X]/100
### Ept — Expertise
[Same format]
### A — Authority
[Same format]
### T — Trust
[Same format]
Note: Some EEAT items (A01 Backlink Profile, A05 Brand Recognition, A07 Knowledge Graph Presence, etc.) may require site-level data not available from content alone. Score what is observable; mark unverifiable items as "N/A — requires site-level data" and exclude from dimension average.
### Step 4: Scoring & Report
Calculate scores and generate the final report:
## CORE-EEAT Audit Report
### Overview
- **Content**: [title]
- **Content Type**: [type]
- **Audit Date**: [date]
- **Total Score**: [score]/100 ([rating])
- **GEO Score**: [score]/100 | **SEO Score**: [score]/100
- **Veto Status**: ✅ No triggers / ⚠️ [item] triggered
### Dimension Scores
| Dimension | Score | Rating | Weight | Weighted |
|-----------|-------|--------|--------|----------|
| C — Contextual Clarity | [X]/100 | [rating] | [X]% | [X] |
| O — Organization | [X]/100 | [rating] | [X]% | [X] |
| R — Referenceability | [X]/100 | [rating] | [X]% | [X] |
| E — Exclusivity | [X]/100 | [rating] | [X]% | [X] |
| Exp — Experience | [X]/100 | [rating] | [X]% | [X] |
| Ept — Expertise | [X]/100 | [rating] | [X]% | [X] |
| A — Authority | [X]/100 | [rating] | [X]% | [X] |
| T — Trust | [X]/100 | [rating] | [X]% | [X] |
| **Weighted Total** | | | | **[X]/100** |
**Score Calculation**:
- GEO Score = (C + O + R + E) / 4
- SEO Score = (Exp + Ept + A + T) / 4
- Weighted Score = Σ (dimension_score × content_type_weight)
**Rating Scale**: 90-100 Excellent | 75-89 Good | 60-74 Medium | 40-59 Low | 0-39 Poor
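The score calculation and rating scale above can be sketched as follows, using the Blog Post weights from the Example section (C 25%, O 10%, R 10%, E 20%, Exp 10%, Ept 10%, A 5%, T 10%); the dimension scores here are purely illustrative:

```python
# Roll dimension scores up into GEO, SEO, and weighted totals,
# then map the weighted total onto the rating scale.
scores  = {"C": 80, "O": 70, "R": 60, "E": 90,
           "Exp": 50, "Ept": 65, "A": 40, "T": 85}
weights = {"C": 0.25, "O": 0.10, "R": 0.10, "E": 0.20,
           "Exp": 0.10, "Ept": 0.10, "A": 0.05, "T": 0.10}

geo = sum(scores[d] for d in ("C", "O", "R", "E")) / 4        # CORE average
seo = sum(scores[d] for d in ("Exp", "Ept", "A", "T")) / 4    # EEAT average
weighted = sum(scores[d] * weights[d] for d in scores)        # Σ score × weight

def rating(score):
    if score >= 90: return "Excellent"
    if score >= 75: return "Good"
    if score >= 60: return "Medium"
    if score >= 40: return "Low"
    return "Poor"

print(f"GEO {geo} | SEO {seo} | Weighted {weighted:.1f} ({rating(weighted)})")
```

Note how the weighted total (73.0, Medium) can differ from both system averages, which is why the report shows all three.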
### Per-Item Scores
#### CORE — Content Body (40 Items)
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| C01 | Intent Alignment | [Pass/Partial/Fail] | [observation] |
| C02 | Direct Answer | [Pass/Partial/Fail] | [observation] |
| ... | ... | ... | ... |
#### EEAT — Source Credibility (40 Items)
| ID | Check Item | Score | Notes |
|----|-----------|-------|-------|
| Exp01 | First-Person Narrative | [Pass/Partial/Fail] | [observation] |
| ... | ... | ... | ... |
### Top 5 Priority Improvements
Sorted by: weight × points lost (highest impact first)
1. **[ID] [Name]** — [specific modification suggestion]
- Current: [Fail/Partial] | Potential gain: [X] weighted points
- Action: [concrete step]
2. **[ID] [Name]** — [specific modification suggestion]
- Current: [Fail/Partial] | Potential gain: [X] weighted points
- Action: [concrete step]
3–5. [Same format]
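The "weight × points lost" ranking above can be sketched as follows. Item IDs, results, and the weight subset are illustrative; the sketch assumes each item is worth 10 points, so points lost is 10 minus points earned:

```python
# Rank failed/partial items by weighted impact: dimension weight
# multiplied by the points the item left on the table.
POINTS = {"Pass": 10, "Partial": 5, "Fail": 0}
weights = {"C": 0.25, "E": 0.20, "T": 0.10}  # Blog Post subset, illustrative

items = [
    {"id": "C02", "dim": "C", "result": "Fail"},
    {"id": "E03", "dim": "E", "result": "Partial"},
    {"id": "T05", "dim": "T", "result": "Fail"},
]

def impact(item):
    points_lost = 10 - POINTS[item["result"]]
    return weights[item["dim"]] * points_lost

top = sorted((i for i in items if i["result"] != "Pass"),
             key=impact, reverse=True)[:5]
print([i["id"] for i in top])  # C02 first: 0.25 × 10 = 2.5 weighted points
```

A Fail in a heavily weighted dimension (C02 here) outranks a Fail in a lightly weighted one, which is the point of sorting by impact rather than raw score.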
### Action Plan
#### Quick Wins (< 30 minutes each)
- [ ] [Action 1]
- [ ] [Action 2]
#### Medium Effort (1-2 hours)
- [ ] [Action 3]
- [ ] [Action 4]
#### Strategic (Requires planning)
- [ ] [Action 5]
- [ ] [Action 6]
### Recommended Next Steps
- For full content rewrite: use [seo-content-writer](../../build/seo-content-writer/) with CORE-EEAT constraints
- For GEO optimization: use [geo-content-optimizer](../../build/geo-content-optimizer/) targeting failed GEO-First items
- For content refresh: use [content-refresher](../../optimize/content-refresher/) with weak dimensions as focus
- For technical fixes: run `/seo:check-technical` for site-level issues
## Validation Checkpoints
### Input Validation
- Content source identified (text, URL, or file path)
- Content type confirmed (auto-detected or user-specified)
- Content is substantial enough for meaningful audit (≥300 words)
- If comparative audit, competitor content also provided
### Output Validation
- All 80 items scored (or marked N/A with reason)
- All 8 dimension scores calculated correctly
- Weighted total matches content-type weight configuration
- Veto items checked and flagged if triggered
- Top 5 improvements sorted by weighted impact, not arbitrary
- Every recommendation is specific and actionable (not generic advice)
- Action plan includes concrete steps with effort estimates
## Example
User: "Audit this article about email marketing best practices for CORE-EEAT quality"
Output: [Full audit report following the structure above, with all 80 items scored, dimension and total scores calculated using Blog Post weights (C:25%, O:10%, R:10%, E:20%, Exp:10%, Ept:10%, A:5%, T:10%), Top 5 improvements identified, and action plan generated]
## Tips for Success
- Start with veto items — T04, C01, R10 are deal-breakers regardless of total score
- Focus on high-weight dimensions — Different content types prioritize different dimensions
- GEO-First items matter most for AI visibility — Prioritize items tagged GEO 🎯 if AI citation is the goal
- Some EEAT items need site-level data — Don't penalize content for things only observable at the site level (backlinks, brand recognition)
- Use the weighted score, not just the raw average — A product review with strong Exclusivity matters more than strong Authority
- Re-audit after improvements — Run again to verify score improvements and catch regressions
## Related Skills
- seo-content-writer — Write content that scores high on CORE dimensions
- geo-content-optimizer — Optimize for GEO-First items
- content-refresher — Update content to improve weak dimensions
- on-page-seo-auditor — Technical on-page audit (complements this skill)
- memory-management — Store audit results for tracking over time