Score Calculation and Weighting

Four dimensions, audience-specific weighting, threshold logic, and the depth map — the complete scoring system

01

Dimensions in Detail

The Helpfulness Score (HS) is composed of four dimensions, each rated from 0 to 3. The sum yields the raw score (0–12), which is then weighted per audience.

🧩 Complexity (K)
How much explanation does the topic require to be understood?
0 — Trivial. One sentence suffices. “The project uses MIT license.”
1 — One paragraph. “Installation: npm install, create .env, npm start.”
2 — Multiple sections. API design with conventions, error handling, pagination.
3 — Own page with diagrams. Auth system with OAuth2, JWT, RBAC, session management.
🎯 Relevance (R)
How important is the topic for understanding the overall system?
0 — Peripheral. System is understood without it.
1 — Helpful but not essential. Logging configuration.
2 — Important for the big picture. Database schema.
3 — Core topic. System cannot be understood without it.
📖 Learning Value (L)
How much does the audience learn from a detailed explanation?
0 — No learning effect. Standard boilerplate.
1 — Slight learning effect. Familiar, but details are useful.
2 — Good learning effect. Transferable patterns and techniques.
3 — High learning effect. Complex concepts inaccessible without explanation.
🔐 Independence (E)
Can the topic be explained in isolation, or does it depend on many others?
0 — Fully dependent. Only makes sense in the context of other topics.
1 — Some dependencies. Requires background from 2–3 other topics.
2 — Largely independent. Brief intro suffices as context.
3 — Fully independent. Can be understood with zero prior knowledge.

Formula: HS = K + R + L + E (Raw score: 0–12). This raw score is then transformed into an audience-specific score via the weighting matrix.
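The raw-score formula can be sketched as a small function. This is an illustrative sketch, not the system's actual implementation; the function name and the validation are assumptions, while the dimension names and the 0–3 range come from the text above.

```python
def helpfulness_score(k: int, r: int, l: int, e: int) -> int:
    """Raw Helpfulness Score: the sum of the four dimensions, each rated 0-3."""
    for name, value in {"K": k, "R": r, "L": l, "E": e}.items():
        if not 0 <= value <= 3:
            raise ValueError(f"dimension {name} must be in 0..3, got {value}")
    return k + r + l + e

# "Error Handling" as rated for developers in section 02: K=2, R=3, L=2, E=1
print(helpfulness_score(2, 3, 2, 1))  # → 8
```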

02

Audience Weighting Matrix

The same topic receives different scores depending on the audience. A topic essential for developers (HS=10) may be irrelevant for executives (HS=2). The weighting matrix captures this difference.

Example heatmap — typical score ranges by topic and audience:

Topic                   🔧 Developers   👤 Users   📊 Executives
Implementation Detail   7–10            1–3        0–2
UI Workflow             3–5             7–10       4–6
Cost / ROI              1–3             2–4        8–10
API Design              8–10            4–6        1–3
Onboarding Process      4–6             8–10       5–7
Security Architecture   8–10            5–7        7–9
License                 0–1             0–1        3–5
Error Handling          7–10            5–7        1–3

Green = high (7–10) • Orange = medium (4–6) • Gray = low (0–3)

Concrete example: Topic “Error Handling”

🔧 Developers:  K=2, R=3, L=2, E=1  →  HS = 8
👤 Users:       K=1, R=2, L=2, E=2  →  HS = 7
📊 Executives:  K=2, R=3, L=2, E=1  →  HS = 8

Even though the raw scores are similar across audiences, the per-audience threshold determines whether a dedicated page gets created. More in section 03.

03

Threshold Logic

The HS does not directly determine what gets built. That is the job of the decision tree: for each combination of topic, audience, and level, it checks whether the score meets the threshold — and whether the audience is even allowed that level.

Thresholds per level and audience:

Level           🔧 Developers   👤 Users      📊 Executives
L0 Mention      ≥ 1             ≥ 1           ≥ 1
L1 Own Module   ≥ 4             ≥ 5           ≥ 6
L2 Own Page     ≥ 6             ≥ 7           — (max L1)
L3 Deep-Dive    ≥ 8             — (max L2)    — (max L1)

Decision tree as flowchart:

Topic + Audience + Level
  │
  ├─ Level ≤ max_level(audience)?
  │    NO  → STOP: audience_max_level
  │    YES ↓
  ├─ HS ≥ threshold[audience][level]?
  │    NO  → STOP: hs_below_threshold
  │    YES ↓
  └─ Deeper level possible?
       NO  → BUILD + STOP
       YES → BUILD + check next level

The decision process for each topic/audience pair:

1. Start at Level 0 and work upward.

2. Gate 1: Is the current level still within the audience maximum? Executives: max L1. Users: max L2. Developers: max L3. If not → STOP.

3. Gate 2: Does the HS meet the threshold for this level with this audience? Developers need HS≥6 for L2, users need HS≥7. If not → STOP.

4. If both gates pass → level is planned. Move to next level.

5. The loop stops at the first gate failure. The stop reason is stored so the depth map can later explain why a topic does not go deeper.

Important: A topic with HS=5 for developers gets L0+L1 (threshold 4 met) but no L2 (threshold 6 not met). The stop reason is “hs_below_threshold”.
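The two-gate loop can be sketched in code. The thresholds and audience maxima are taken from the tables above; the function name, the audience keys, and the data-structure layout are illustrative assumptions, not the system's actual API.

```python
# Maximum level per audience (Gate 1), from section 03.
MAX_LEVEL = {"dev": 3, "user": 2, "exec": 1}

# Minimum HS per level and audience (Gate 2). Missing keys are
# unreachable because Gate 1 stops the loop first.
THRESHOLD = {
    0: {"dev": 1, "user": 1, "exec": 1},
    1: {"dev": 4, "user": 5, "exec": 6},
    2: {"dev": 6, "user": 7},
    3: {"dev": 8},
}

def plan_levels(hs: int, audience: str) -> tuple[list[int], str]:
    """Return (levels to build, stop reason) for one topic/audience pair."""
    levels: list[int] = []
    for level in range(4):
        if level > MAX_LEVEL[audience]:       # Gate 1: structural boundary
            return levels, "audience_max_level"
        if hs < THRESHOLD[level][audience]:   # Gate 2: score too low
            return levels, "hs_below_threshold"
        levels.append(level)                  # both gates passed: build it
    return levels, "complexity_exhausted"     # all four levels built

print(plan_levels(5, "dev"))   # → ([0, 1], 'hs_below_threshold')
print(plan_levels(9, "exec"))  # → ([0, 1], 'audience_max_level')
```

The first call reproduces the "Important" example above: HS=5 for developers yields L0+L1, stopping with "hs_below_threshold" at L2.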

04

Transparency — The Depth Map

The depth map is the transparency artifact of the scoring system. It shows for every topic: what score it received, which levels were created, which files were generated — and why it stopped at a particular point.

Depth map output format:

Topic     Audience   HS   Levels            Files                      Stop Reason
Auth      🔧 Dev     10   L0, L1, L2, L3    auth_dev_en.html (L1)      complexity_exhausted
                                            auth_dev_en.html (L2)
                                            auth_dev_en.html (L3)
Auth      👤 User     7   L0, L1, L2        auth_user_en.html (L1)     audience_max_level
                                            auth_user_en.html (L2)
Auth      📊 Exec     9   L0, L1            auth_exec_en.html (L1)     audience_max_level
Setup     🔧 Dev      5   L0, L1            setup_dev_en.html (L1)     hs_below_threshold
Setup     👤 User     8   L0, L1, L2        setup_user_en.html (L1)    audience_max_level
                                            setup_user_en.html (L2)
License   🔧 Dev      1   L0                —                          hs_below_threshold

Why the depth map matters:

Without the depth map, scoring is a black box. Users see that a topic has no L3 page but not why. The depth map makes the decision traceable:

“audience_max_level” — The audience cannot go deeper. Not a score problem, but a structural boundary.
“hs_below_threshold” — The score was too low. The topic was not important enough for this audience.
“complexity_exhausted” — All available levels were built. There is simply nothing more to explain.
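A depth-map row could be assembled roughly as follows. The file-naming pattern ({topic}_{audience}_{lang}.html) follows the table above; the field names, the function, and the explanation strings are illustrative assumptions.

```python
# Human-readable explanations for the three stop reasons from section 04.
STOP_REASON_TEXT = {
    "audience_max_level": "The audience cannot go deeper; a structural "
                          "boundary, not a score problem.",
    "hs_below_threshold": "The score was too low for this audience at "
                          "the next level.",
    "complexity_exhausted": "All available levels were built; nothing "
                            "more to explain.",
}

def depth_map_row(topic: str, audience: str, hs: int,
                  levels: list[int], stop_reason: str) -> dict:
    """One transparency record: what was built, and why it stopped there."""
    # L0 is only a mention in existing pages, so it produces no file.
    files = [f"{topic.lower()}_{audience}_en.html (L{lvl})"
             for lvl in levels if lvl >= 1]
    return {
        "topic": topic, "audience": audience, "hs": hs,
        "levels": [f"L{lvl}" for lvl in levels],
        "files": files,
        "stop_reason": stop_reason,
        "explanation": STOP_REASON_TEXT[stop_reason],
    }

# The "Setup / Dev" row from the table above:
row = depth_map_row("Setup", "dev", 5, [0, 1], "hs_below_threshold")
print(row["levels"], row["files"])  # → ['L0', 'L1'] ['setup_dev_en.html (L1)']
```

Storing the explanation alongside the machine-readable reason is what turns the depth map from a log into a transparency artifact: a reader can see both the code that stopped the loop and a plain-language account of why.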

✏️ Knowledge Check

Topic “Error Handling” has K=2, R=3, L=2, E=1 (HS=8). For which audiences does it get its own page (at least L1)?

Only 🔧 Developers — threshold ≥4 met
🔧 Developers and 👤 Users — both have thresholds below 8
All three — 🔧≥4 ✅, 👤≥5 ✅, 📊≥6 ✅
🧪 Deep-Dive: Thresholds in Detail →