Level 3 — Maximum Depth

skill.md Structure

The 3000-line brain of the Learning Skill, line by line. YAML frontmatter, mode detection, helpfulness scoring, design system, agent dispatch.

On this page
  • YAML Frontmatter & Identity Rule
  • Mode Detection & Gate Checks
  • Level System & Helpfulness Scoring
  • Design System Definition
  • Agent Dispatch Templates
01
YAML Frontmatter & Identity Rule
The first 50 lines: name, description, then the IDENTITY RULE that prevents external project names from appearing

skill.md begins with a YAML frontmatter block (the block between two --- markers at the file start; Claude reads this block to identify the skill name and description, and without it the file is not recognized as a skill). This block defines the skill name and a compact description that Claude sees first when loading the skill.

# Lines 1-4: YAML Frontmatter
---
name: learning-skill
description: Combines two superpowers: (1) Transforms codebases into interactive Multi-Level HTML courses (L0-L3) with audience segmentation and energy-company design, AND (2) generates interactive Knowledge Graphs with dashboard for code exploration. THREE MODES: "Course" for HTML courses, "Understand" for Knowledge Graphs, "Combined" for both.
---

name: The slug learning-skill is used internally as an identifier. It appears in folder names and references.

description: The description is deliberately long and keyword-rich. Claude uses it to decide if this skill matches the user prompt. Each keyword ("HTML courses", "Knowledge Graphs", "Dashboard") increases match probability.

No model: field: Unlike agent definitions (which have model: inherit), skill.md has no model field -- the skill always runs in the context of the current model.

Immediately after the frontmatter comes the Identity Rule (a table of FORBIDDEN terms like "Understand Anything" or "Understanding Skill"; this rule prevents the original project name from appearing in any output) -- the most important compliance block of the entire skill.

## IDENTITY RULE — NO EXTERNAL PROJECT NAMES (HARD BLOCK)

FORBIDDEN — in ANY generated output:
| Forbidden | Why |
| "Understand Anything" | External project name |
| "Understand-Anything" | External project name |
| "Understanding Skill" | Wrong name |
| "Claude Learning & Understanding Skill" | Wrong name |
| "claude-learning-understanding" | Wrong slug |
| ".understand-anything/" | Wrong folder name |

CORRECT designations:
| Context | Correct Name |
| Skill Name | "Learning Skill" |
| Skill Slug | "learning-skill" |
| Output Folder | ".claude-learning/" |
| Mode Names | "Course Mode", "Understand Mode" |

Why does this rule exist? The Learning Skill evolved from an earlier project ("Understand Anything"). Since the old name must never appear in any output, the self-check runs 5 regex matches before EVERY generated output.

Self-check: Each of the 5 checks is a regex match against the generated text. If a forbidden string is found, it must be removed or replaced before the output is emitted.

# The Self-Check: 5 checks before EVERY output
Self-check before EVERY output:
[ ] Does the output contain "Understand Anything"? -> REMOVE
[ ] Does the output contain "Understanding"? -> Only OK as generic English word (e.g. "for understanding the code"), NOT OK as part of a name
[ ] Does the output contain ".understand-anything/"? -> REPLACE with ".claude-learning/"
[ ] Does a commit message reference external projects? -> REMOVE
[ ] Does the output contain "from Understand-Anything"? -> REMOVE
Edge Case: The string "Understanding" may appear in generic English ("for understanding the architecture"). It is ONLY forbidden when used as part of a name ("Understanding Skill", "Learning & Understanding"). The distinction requires semantic analysis, not just string matching.
Edge Case: The rule also applies to commit messages, debug logs, and internal intermediate results. A git commit -m "merge from understand-anything" would violate the rule. This is often overlooked because developers do not consider debug output as "generated output".
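As an illustration, the regex portion of this self-check could look like the following minimal Python sketch. The pattern list and function name are hypothetical, not taken from skill.md, and the "Understanding"-as-generic-word edge case above still needs semantic judgment beyond plain patterns:

```python
import re

# Hypothetical sketch: the five identity checks as regex patterns.
# Names and structure are illustrative, not from skill.md.
FORBIDDEN_PATTERNS = [
    re.compile(r"Understand[- ]Anything", re.IGNORECASE),
    re.compile(r"Understanding Skill"),
    re.compile(r"Learning & Understanding"),
    re.compile(r"claude-learning-understanding"),
    re.compile(r"\.understand-anything/"),
]

def identity_violations(text: str) -> list[str]:
    """Return every forbidden string found in a generated output."""
    hits: list[str] = []
    for pattern in FORBIDDEN_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

The generic word "Understanding" passes these patterns; only name-like usages such as "Understanding Skill" are caught, matching the edge case described above.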
02
Mode Detection & Gate Checks
Phase 0: keyword matching for Course/Understand/Combined. The HARD-BLOCK query before Phase 1.

Mode detection is the first decision point. It reads the user prompt (the user's first message that activates the skill, typically containing a path/URL and keywords like "course", "understand", or "both") and matches keywords against three mode tables. If no clear match: ask the user.

# Mode Detection: Keyword Matching
| Mode | Keywords |
| Course Mode | "Kurs", "Tutorial", "Walkthrough", "interaktiv", "course", "teach", "lernen", "erklaeren", "HTML", "Kursseiten" |
| Understand Mode | "verstehen", "understand", "knowledge graph", "dashboard", "explore", "graph", "analyse", "analyze" |
| Combined | "beides", "komplett", "full", "alles", "combined", "kurs + graph", "everything" |
| Unclear | None of the above -> Ask the user! |

Course Mode: Activates the HTML course pipeline (phases 0-6). Generates self-contained HTML files with level hierarchy.

Understand Mode: Activates the Knowledge Graph pipeline (phases 0-7). Generates knowledge-graph.json + React dashboard.

Combined: First KG pipeline, then KG-powered course building. The KG data improves helpfulness scoring.

Unclear: The safest option -- better to ask once too often than to build the wrong thing.
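A minimal Python sketch of this keyword matching (the precedence order Combined > Understand > Course is an assumption; the table above does not specify one, and the function name is hypothetical):

```python
# Hypothetical sketch of Phase 0 keyword-based mode detection.
# Keyword lists mirror the mode table; matching is case-insensitive.
MODE_KEYWORDS = {
    "Course": ["kurs", "tutorial", "walkthrough", "interaktiv", "course",
               "teach", "lernen", "erklaeren", "html", "kursseiten"],
    "Understand": ["verstehen", "understand", "knowledge graph", "dashboard",
                   "explore", "graph", "analyse", "analyze"],
    "Combined": ["beides", "komplett", "full", "alles", "combined",
                 "kurs + graph", "everything"],
}

def detect_mode(prompt: str) -> str:
    text = prompt.lower()
    # Assumed precedence: Combined first, so a "course + graph" prompt
    # is not misread as Course Mode.
    for mode in ("Combined", "Understand", "Course"):
        if any(kw in text for kw in MODE_KEYWORDS[mode]):
            return mode
    return "Unclear"  # -> ask the user
```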

After mode detection comes the HARD BLOCK (an impassable gate: Phase 1 MUST NOT start until the user has explicitly answered both audience and integration mode; no exceptions, not even for "test the skill" or "just go"): two gates must be answered BY THE USER.

# HARD BLOCK: Self-check before Phase 1
# Executed AFTER mode detection,
# BEFORE any HTML is generated.
Self-check before Phase 1:
[ ] Has the user EXPLICITLY stated which audiences? -> If no: ASK
[ ] Has the user EXPLICITLY stated whether Standalone or Embedded? -> If no: ASK
[ ] If Embedded: Has the user provided GitHub URL, Imprint, Privacy, Copyright? -> If no: ASK
[ ] All three checks passed? -> Only now start Phase 1

The query itself is a predefined template showing three audience options with their default depths:

# Query Template (sent to the user)
Before I start, two questions:
1. Who is this course for? (multi-select)
   * End Users -- People who USE the app (Default: up to L2, workflow focus)
   * Developers -- People who UNDERSTAND the code (Default: up to L3, full depth)
   * Executives -- Management, stakeholders (Default: up to L1, compact + KPIs)
   * Custom -- Define your own audience (Default: up to L2, adjustable)
2. How will the courses be used?
   * Standalone -- Files for sharing/offline use
   * Embedded -- Part of a website (then I need: GitHub link? Imprint URL? Copyright?)
Edge Case: "Test the skill on itself" is NOT a sufficient answer. Even though "developer" and "this repo" may be implied, nothing may be assumed. The query is asked regardless. The same goes for "just do it", "go ahead", "it's just a test".
Edge Case: "For developers" answers only Gate 1, not Gate 2 (integration). Likewise "standalone please" answers only Gate 2. Both must be explicitly answered. Partial answers lead to a reduced query asking only the missing gate.
| User says | Gate 1 (Audience) | Gate 2 (Integration) | Action |
| "For devs, standalone" | Clear | Clear | Start Phase 1 |
| "Make a course" | Unclear | Unclear | Full query |
| "For developers" | Clear | Unclear | Ask Gate 2 only |
| "Standalone please" | Unclear | Clear | Ask Gate 1 only |
| "Test this" | Unclear | Unclear | Full query |
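The gate resolution in this matrix reduces to a small decision function; a sketch (function and return strings are my own naming, not from skill.md):

```python
# Hypothetical sketch of the two-gate HARD BLOCK resolution.
def gate_action(audience_clear: bool, integration_clear: bool) -> str:
    if audience_clear and integration_clear:
        return "Start Phase 1"      # both gates answered explicitly
    if audience_clear:
        return "Ask Gate 2 only"    # partial answer: reduced query
    if integration_clear:
        return "Ask Gate 1 only"    # partial answer: reduced query
    return "Full query"             # nothing may be assumed
```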
03
Level System & Helpfulness Scoring
HS formula in detail: Complexity(0-3) + Relevance(0-3) + Learning Value(0-2) + Independence(0-2). Audience-specific thresholds.

Not every topic deserves 3 more levels. The Helpfulness Score (HS) -- a number from 1-10 that evaluates whether a deeper page REALLY helps the learner, calculated from 4 dimensions: Complexity, Relevance, Learning Value, Independence -- autonomously decides whether a topic gets its own subpage or remains as a paragraph on the parent page.

# Helpfulness Score Formula
HS = Complexity(0-3) + Relevance(0-3) + LearningValue(0-2) + Independence(0-2)
# Maximum achievable: 3+3+2+2 = 10

| Dimension | 0 | 1 | 2 | 3 |
| Complexity | Trivial, 1 sentence | Needs 1 paragraph | Needs multiple sections | Needs own page with visuals |
| Relevance | Niche detail | Nice-to-know | Important for understanding | Core concept, nothing works without |
| LearningValue | Facts only | Explains a "why" | Enables own action | n/a (max 2) |
| Independence | Repeats parent | 50% new information | 80% new information | n/a (max 2) |

Complexity (0-3): How much explanation effort does the topic need? A trivial fact (0) vs. a concept requiring visualizations (3).

Relevance (0-3): How central is the topic for understanding the overall system? A niche detail (0) vs. an indispensable core concept (3).

Learning Value (0-2): What CAN the learner DO after reading? Just know facts (0) vs. act independently (2).

Independence (0-2): How much NEW content does a separate page bring? If 80%+ repeats the parent page, a separate page is not worth it.

Crucially, thresholds are not the same for all audiences. Each audience has its own depth profile (defining max level, HS threshold for an own page, and HS threshold for "go deeper"; executives rarely need more than L1, developers go up to L3):

# Audience-specific thresholds
| Audience | Max Level | HS own page | HS go deeper |
| End Users | L2 | >= 7 (strict) | >= 9 |
| Developers | L3 | >= 6 (standard) | >= 8 |
| Executives | L1 | >= 8 (strictest) | >= 10 (never) |

# Decision tree per topic T, audience A, level L:
1. L >= max_level[A]? -> STOP. No deeper level.
2. HS(T,A) < hs_threshold[A]? -> No own page. Paragraph/accordion on parent.
3. HS(T,A) >= hs_threshold[A]? -> Own page.
4. HS(T,A) >= hs_deeper[A] AND L+1 < max_level[A]? -> If yes: plan L+1.
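A minimal sketch of this decision tree, assuming `level` denotes the level the candidate page would live at and "plan deeper" means L+1 candidates get evaluated (otherwise they are not). Function and return-value names are hypothetical:

```python
# Hypothetical sketch of the per-topic depth decision.
PROFILES = {
    # audience: (max_level, hs_own_page, hs_go_deeper)
    "End Users":  (2, 7, 9),
    "Developers": (3, 6, 8),
    "Executives": (1, 8, 10),
}

def decide(hs: int, audience: str, level: int) -> str:
    max_level, own_page, go_deeper = PROFILES[audience]
    if level > max_level:
        return "stop"                    # never beyond the audience's max level
    if hs < own_page:
        return "paragraph on parent"     # below threshold: no own page
    if hs >= go_deeper and level + 1 <= max_level:
        return "own page, plan deeper"   # also evaluate topics at L+1
    return "own page"
```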

The same HS can produce completely different results for different audiences. Here is the audience weighting per topic type:

# Audience weighting: Same topic, different HS
| Topic Type | End User | Developer | Executive |
| Implementation detail | 1-3 | 7-10 | 1-3 |
| UI workflow | 7-10 | 3-5 | 2-4 |
| Cost/ROI | 2-4 | 1-3 | 8-10 |
| Error handling | 6-8 | 8-10 | 3-5 |
| Architecture decision | 1-3 | 7-9 | 6-8 |
| Security/Compliance | 3-5 | 7-9 | 8-10 |

# Example: Topic "Authentication"
End User:  HS=7 -> L1 own page, no L2 (7 < 9)
Developer: HS=9 -> L1 + L2 + L3 (9 >= 8, up to Max L3)
Executive: HS=8 -> L1 only (Max Level L1 reached)
Edge Case: KG-powered scoring. In Combined Mode, the HS is enriched with Knowledge Graph data. Nodes with complexity: "complex" receive +1 bonus to Complexity. Nodes with fan-in > 5 get +1 to Relevance. Core-layer nodes get +1 to Learning Value. The final HS is capped at max(10): final_HS = min(10, base_HS + kg_bonus).
Edge Case: Depth profile override. The user can say "For executives, but with full depth" during the gate check. Max level is then raised to L3. The override is documented in the depth map so it is traceable why an executive course suddenly has L3 pages.
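The KG-powered scoring edge case above is a straightforward capped sum; a sketch (the function name is hypothetical):

```python
# Hypothetical sketch of the Combined-Mode KG bonus:
# final_HS = min(10, base_HS + kg_bonus)
def kg_adjusted_hs(base_hs: int, complexity: str, fan_in: int, layer: str) -> int:
    bonus = 0
    if complexity == "complex":
        bonus += 1   # +1 to Complexity
    if fan_in > 5:
        bonus += 1   # +1 to Relevance
    if layer == "core":
        bonus += 1   # +1 to Learning Value
    return min(10, base_hs + bonus)
```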
04
Design System Definition
CSS Custom Properties (:root block), color space definitions, contrast rules (WCAG AA)

Phase 3 (FOUNDATION) defines the design system as inline CSS (there is NO separate CSS file; everything is inline in every HTML file, following the self-contained principle, and the CSS is defined once and copied into each file to ensure consistency). The :root block is the heart -- it defines 60+ custom properties that are identical in every page.

:root {
  /* === PRIMARY BRAND COLORS === */
  --color-deep-blue: #000099;
  --color-impulse-orange: #FE8F11;
  --color-warm-gray: #E4DAD4;

  /* === BACKGROUNDS === */
  --color-bg: #FFFFFF;
  --color-bg-warm: #F3EFEB;
  --color-bg-code: #000066;

  /* === TEXT === */
  --color-text: #1A1A2E;
  --color-text-secondary: #6B6560;
  --color-text-muted: #7A7570; /* WCAG AA: 5.1:1 on white */

  /* === SEMANTIC COLORS === */
  --color-success: #84C041;
  --color-error: #CC0000;
  --color-info: #1195EB;
  --color-warning: #FDC83A;

  /* === ACTOR COLORS (6 colors) === */
  --color-actor-1: #000099; /* Deep Blue */
  --color-actor-2: #FE8F11; /* Impulse Orange */
  --color-actor-3: #84C041; /* Success Green */
  --color-actor-4: #1195EB; /* Info Blue */
  --color-actor-5: #5BE3D6; /* Teal */
  --color-actor-6: #FDC83A; /* Warning Yellow */

  /* === FONTS === */
  --font-display: 'Bricolage Grotesque', Georgia, serif;
  --font-body: 'DM Sans', -apple-system, sans-serif;
  --font-mono: 'JetBrains Mono', 'Fira Code', monospace;

  /* === TYPOGRAPHY === */
  --text-xs: .75rem;
  --text-sm: .875rem;
  --text-base: 1rem;
  --text-lg: 1.125rem;
  /* ... up to --text-6xl: 3.75rem */

  /* === LAYOUT === */
  --content-width: 820px;
  --content-width-wide: 1000px;
  --nav-height: 50px;
}

Primary brand colors: Three core colors form the "energy company palette". Deep Blue (#000099) for trust and expertise, Impulse Orange (#FE8F11) for attention and CTAs, Warm Gray (#E4DAD4) for borders and subtle separators.

Code background: #000066 (darker than Deep Blue) ensures sufficient contrast with light code text (ratio >7:1, WCAG AAA).

Text-muted: #7A7570 has a contrast ratio of 5.1:1 on white -- just barely WCAG AA compliant. For longer texts, --color-text (#1A1A2E, 15.7:1) should be preferred.

Actor colors: 6 distinct colors for flow diagrams and architecture visualizations. Chosen to remain distinguishable even with color blindness (deuteranomaly).

Content width: 820px is the sweet spot for readability -- approximately 70 characters per line at 1rem font size.

WCAG contrast rules are embedded as hard constraints in the design system:

# WCAG AA Contrast Matrix (Minimum 4.5:1 for text)
| Foreground | Background | Ratio | Status |
| --color-text (#1A1A2E) | #FFFFFF | 15.7:1 | AAA |
| --color-text (#1A1A2E) | --bg-warm | 13.2:1 | AAA |
| --color-text-muted | #FFFFFF | 5.1:1 | AA |
| rgba(255,255,255,.9) | --bg-code | 12.8:1 | AAA |
| --impulse-orange | --deep-blue | 4.7:1 | AA |
| --impulse-orange | #FFFFFF | 2.9:1 | Fail* |

* Orange on white fails WCAG AA for normal text. Therefore orange is only used for large headlines (>=18px) and icons, NOT for body text.
Edge Case: Impulse Orange (#FE8F11) on white background has only a contrast ratio of 2.9:1 -- fails WCAG AA for normal text. Therefore orange is exclusively used for badges, icons, buttons (with white text on them), and large headings. For body text on white, --color-text (#1A1A2E) is always used.
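Contrast ratios like these can be spot-checked with the standard WCAG 2.x relative-luminance formula; a short Python sketch (helper names are my own):

```python
# WCAG 2.x contrast ratio for two hex colors like "#1A1A2E".
def srgb_channel(c8: int) -> float:
    c = c8 / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hexcolor: str) -> float:
    r, g, b = (int(hexcolor[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * srgb_channel(r) + 0.7152 * srgb_channel(g) + 0.0722 * srgb_channel(b)

def contrast(fg: str, bg: str) -> float:
    # Lighter luminance over darker, each offset by 0.05.
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

WCAG AA for normal text requires a ratio of at least 4.5:1; dark body text on white passes comfortably, Impulse Orange on white does not.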
Edge Case: Responsive overrides. On mobile devices, typography variables are overridden via media query:
@media (max-width: 768px) { :root { --text-4xl: 1.875rem; --text-5xl: 2.25rem; } }
@media (max-width: 480px) { :root { --text-4xl: 1.5rem; --text-5xl: 1.875rem; } }
Breakpoints are 768px (tablet) and 480px (smartphone).
05
Agent Dispatch Templates
How skill.md assembles agent prompts: pipeline agent template, file agent template, context injection patterns

skill.md dispatches two kinds of sub-agents (Claude instances launched via the Task tool; each agent receives its own prompt template and works in isolation, and results flow back to the main context): pipeline agents (1 per audience) and file agents (1 per HTML file). Both are configured via template strings.

# Pipeline Agent Prompt Template
# 1 pipeline agent per audience

You are the pipeline agent for audience [EMOJI + NAME].

Depth profile:
- Max level: [L1/L2/L3]
- HS threshold own page: [>=6/>=7/>=8]
- HS threshold go deeper: [>=8/>=9/>=10]

Integration mode: [Standalone/Embedded]
Output directory: [PATH]

Curriculum for your audience:
[TOPIC TREE WITH HS SCORES FOR THIS AUDIENCE ONLY]

CSS/JS Foundation: [REFERENCE TO PHASE 3 OUTPUT]

Naming convention:
- L0: index[_SUFFIX]_[de|en].html
- L1: l1/[slug][_SUFFIX]_[de|en].html
- L2: l2/[slug][_SUFFIX]_[de|en].html
- L3: l3/[slug][_SUFFIX]_[de|en].html

Your task:
1. Build L0 (DE + EN parallel as file agents)
2. WAIT until L0 complete
3. Build L1 — ONLY topics with HS >= [THRESHOLD]
4. WAIT until L1 complete
5. [If max level >= L2:] Build L2
6. WAIT until L2 complete
7. [If max level = L3:] Build L3
8. Return pipeline summary

Lines 1-2: The agent receives its audience identity -- including emoji for internal log attribution.

Depth profile: The three critical parameters determining how deep the pipeline goes. The agent CANNOT build beyond its max level.

Curriculum: The filtered topic tree is injected here -- ONLY topics for THIS audience. An end-user pipeline agent never sees developer topics.

Steps 1-8: The strictly sequential flow WITHIN the pipeline. Between levels, it WAITS. Within a level, file agents run in parallel.
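This rhythm -- sequential between levels, parallel within a level -- can be sketched in a few lines. This is an illustration of the control flow only, assuming a hypothetical `dispatch_file_agent` callable; it is not skill.md's actual dispatch mechanism:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: levels run strictly in sequence, file agents within
# a level run in parallel. `levels` is indexed from L0.
def run_pipeline(levels, max_level, dispatch_file_agent):
    summaries = []
    for level, files in enumerate(levels):
        if level > max_level:
            break                               # never beyond the max level
        with ThreadPoolExecutor() as pool:      # parallel file agents ...
            results = list(pool.map(dispatch_file_agent, files))
        summaries.append(results)               # ... but WAIT before next level
    return summaries
```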

Important: Audience switch links on L0 remain EMPTY -- they are set in Phase 5 (Polish) once ALL pipelines are complete.

# File Agent Prompt Template
# 1 file agent per HTML file

Create the HTML file [FILENAME] in directory [PATH].

Context:
- Audience: [EMOJI + NAME]
- Language: [DE/EN]
- Level: [L0/L1/L2/L3]
- Topic: [TOPIC]
- Integration mode: [Standalone/Embedded]
- Helpfulness Score: [SCORE]

Content: [CONTENT DESCRIPTION FROM CURRICULUM]

CSS/JS: Use exactly the following foundation:
[FOUNDATION INSERT OR REFERENCE]

Linking:
- Breadcrumb path: [PATH]
- Deep-dive links: [LIST OF TARGET FILES]
- Sibling pages: [LIST WITHIN THIS AUDIENCE]
- Language counterpart: [FILENAME]
- Audience switch: PLACEHOLDER (populated in Phase 5)

Quality gates: [ALL RELEVANT GATES]

FILENAME + PATH: Computed from the naming convention: [topic-slug][_audience-suffix]_[language].html in the correct level folder.
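A sketch of that computation, assuming the audience suffix includes its leading underscore (e.g. "_dev"); the function name is hypothetical:

```python
# Sketch of the naming convention:
# L0: index[_SUFFIX]_[de|en].html at the root
# L1-L3: l<level>/[slug][_SUFFIX]_[de|en].html
def build_path(level: int, slug: str, suffix: str, lang: str) -> str:
    if level == 0:
        return f"index{suffix}_{lang}.html"     # L0 ignores the topic slug
    return f"l{level}/{slug}{suffix}_{lang}.html"
```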

Context block: The file agent receives EVERYTHING it needs to build the page autonomously -- audience, language, level, topic, integration, and HS.

Foundation: Either the complete CSS/JS is inserted inline, or a reference to the Phase 3 output. In practice, inline insertion is most common (self-contained).

Linking: All links are explicitly specified -- the agent does not have to guess paths. This prevents dead links.

Audience switch = PLACEHOLDER: The file agent CANNOT populate the switch because it does not know which other pipelines exist. Phase 5 replaces the placeholder retroactively.
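The retroactive replacement in Phase 5 amounts to a simple substitution pass over every file. A sketch, where the placeholder marker and function name are invented for illustration (skill.md does not specify the exact marker):

```python
# Hypothetical placeholder marker; the real skill may use a different one.
AUDIENCE_SWITCH_PLACEHOLDER = "<!-- AUDIENCE_SWITCH -->"

def populate_switch(html: str, links: dict[str, str]) -> str:
    """Replace the placeholder with links to the other audiences' L0 pages."""
    markup = " | ".join(f'<a href="{href}">{label}</a>'
                        for label, href in links.items())
    return html.replace(AUDIENCE_SWITCH_PLACEHOLDER, markup)
```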

The context injection pattern works similarly in the Understand pipeline: language addenda and framework addenda are dynamically appended to the agent prompt. For each detected language, the matching .md file from the languages/ folder is loaded:

# Context Injection for Understand Pipeline Agents
# Example: file-analyzer agent

1. Base template: agents/file-analyzer.md is loaded
2. Language Context Injection:
   FOR EACH detected language (e.g. python, typescript):
     Load ./languages/<language-id>.md
     Append under ## Language Context header
     If file does not exist: skip silently
3. Framework Addendum Injection:
   FOR EACH detected framework (e.g. Django, React):
     Load ./frameworks/<framework-id>.md
     Append after Language Context
     If file does not exist: skip silently
4. Batch-specific context:
   - Project root path
   - Batch index number
   - Pre-resolved import data (batchImportData)
   - List of files to analyze
5. Final prompt = Base template
               + Language context(s)
               + Framework addendum(s)
               + Batch context
Edge Case: Missing language file. If a language is detected for which no languages/*.md exists (e.g. an obscure format), it is silently skipped. The agent then works without language-specific hints. This can lead to more generic node summaries. Currently 23 language files and 10 framework files exist.
Alternative implementation: Instead of context injection, one could have a central "knowledge agent" that other agents query via tool calls. The Learning Skill deliberately chose against this pattern: injection is deterministic (same input = same prompt), while tool calls are non-deterministic and can lead to inconsistent results.