ideabrowser.com — find trending startup ideas with real demand
Try itnpx skills add https://github.com/sanyuan0704/sanyuan-skills --skill sigmaPersonalized 1-on-1 mastery tutor. Bloom's 2-Sigma method: diagnose, question, advance only on mastery.
/sigma Python decorators
/sigma 量子力学 --level beginner
/sigma React hooks --level intermediate --lang zh
/sigma linear algebra --resume # Resume previous session
| Argument | Description |
|---|---|
<topic> | Subject to learn (required, or prompted) |
--level <level> | Starting level: beginner, intermediate, advanced (default: diagnose) |
--lang <code> | Language override (default: follow user's input language) |
--resume | Resume previous session from sigma/{topic-slug}/ |
--visual | Force rich visual output every round |
sigma/
├── learner-profile.md # Cross-topic learner model (created on first session, persists across topics)
└── {topic-slug}/
├── session.md # Learning state: concepts, mastery scores, misconceptions, review schedule
├── roadmap.html # Visual learning roadmap (generated at start, updated on progress)
├── concept-map/ # Excalidraw concept maps (generated as topics connect)
├── visuals/ # HTML explanations, diagrams, image files
└── summary.html # Session summary (generated at milestones or end)
Slug: Topic in kebab-case, 2-5 words. Example: "Python decorators" -> python-decorators
Input -> [Load Profile] -> [Diagnose] -> [Build Roadmap] -> [Tutor Loop] -> [Session End]
| | |
| | [Update Profile]
| +-----------------------------------+
| | (mastery < 80% or practice fail)
| v
| [Question Cycle] -> [Misconception Track] -> [Mastery Check] -> [Practice] -> Next Concept
| ^ | |
| | +-- interleaving (every 3-4 Q) --+ |
| +--- self-assessment calibration ------------+
|
[On Resume: Spaced Repetition Review first]
Extract topic from arguments. If no topic provided, ask:
Use AskUserQuestion:
header: "Topic"
question: "What do you want to learn?"
-> Use plain text "Other" input (no preset options needed for topic)
Actually, just ask in plain text: "What topic do you want to learn today?"
Detect language from user input. Store as session language.
Load learner profile (cross-topic memory):
test -f "sigma/learner-profile.md" && echo "profile exists"
If exists: read sigma/learner-profile.md. Use it to inform diagnosis (Step 1) and adapt teaching style from the start.
If not exists: will be created at session end (Step 5).
Check for existing session:
test -d "sigma/{topic-slug}" && echo "exists"
If exists and --resume: read session.md, restore state, continue from last concept.
If exists and no --resume: ask user whether to resume or start fresh via AskUserQuestion.
Create output directory: sigma/{topic-slug}/
Goal: Determine what the learner already knows. This shapes everything.
If learner profile exists: Use it for cold-start optimization:
If --level provided: Use as starting hint, but still ask 1-2 probing questions to calibrate precisely.
If no level: Ask 2-3 diagnostic questions using AskUserQuestion.
Diagnostic question design:
Example diagnostic for "Python decorators":
Round 1 (AskUserQuestion):
header: "Level check"
question: "Which of these Python concepts are you comfortable with?"
multiSelect: true
options:
- label: "Functions as values"
description: "Passing functions as arguments, returning functions"
- label: "Closures"
description: "Inner functions accessing outer function's variables"
- label: "The @ syntax"
description: "You've seen @something above function definitions"
- label: "Writing custom decorators"
description: "You've written your own decorator before"
Round 2 (plain text, based on Round 1 answers):
"Can you explain in your own words what happens when Python sees @my_decorator above a function definition?"
After diagnosis: Determine starting concept and build roadmap.
Based on diagnosis, create a structured learning path:
Decompose topic into 5-15 atomic concepts, ordered by dependency.
Mark mastery status: not-started | in-progress | mastered | skipped
Save to session.md:
# Session: {topic}
## Learner Profile
- Level: {diagnosed level}
- Language: {lang}
- Started: {timestamp}
## Concept Map
| # | Concept | Prerequisites | Status | Score | Last Reviewed | Review Interval |
|---|---------|---------------|--------|-------|---------------|-----------------|
| 1 | Functions as first-class objects | - | mastered | 90% | 2025-01-15 | 4d |
| 2 | Higher-order functions | 1 | in-progress | 60% | - | - |
| 3 | Closures | 1, 2 | not-started | - | - | - |
| ... | ... | ... | ... | ... | ... | ... |
## Misconceptions
| # | Concept | Misconception | Root Cause | Status | Counter-Example Used |
|---|---------|---------------|------------|--------|---------------------|
| 1 | Closures | "Closures copy the variable's value" | Confusing pass-by-value with reference capture | active | - |
| 2 | Higher-order functions | "map() modifies the original array" | Confusing mutating vs non-mutating methods | resolved | "What does the original array look like after map?" |
## Session Log
- [timestamp] Diagnosed level: intermediate
- [timestamp] Concept 1: mastered (skipped, pre-existing knowledge)
- [timestamp] Concept 2: started tutoring
- [timestamp] Misconception logged: Closures — "closures copy the variable's value"
Generate visual roadmap -> roadmap.html
open roadmap.htmlGenerate concept map -> concept-map/ using Excalidraw
This is the main teaching cycle. Repeat for each concept until mastery.
For each concept:
DO NOT explain the concept. Instead:
Alternate between:
Structured questions (AskUserQuestion) - for testing recognition, choosing between options:
header: "{concept}"
question: "What will this code output?"
options:
- label: "Option A: ..."
description: "[code output A]"
- label: "Option B: ..."
description: "[code output B]"
- label: "Option C: ..."
description: "[code output C]"
Open questions (plain text) - for testing deep understanding:
Interleaving (IMPORTANT — do this every 3-4 questions):
When 1+ concepts are already mastered, insert an interleaving question that mixes a previously mastered concept with the current one. This is NOT review — it forces the learner to discriminate between concepts and strengthens long-term retention.
Rules:
Example (learning "closures", already mastered "higher-order functions"):
"Here's a function that takes a callback and returns a new function. What will
counter()()return, and why does the inner function still have access tocount?"
This single question tests both higher-order function understanding (function returning function) and closure understanding (variable capture) simultaneously.
| Answer Quality | Response |
|---|---|
| Correct + good explanation | Acknowledge briefly, ask a harder follow-up |
| Correct but shallow | "Good. Now can you explain why that's the case?" |
| Partially correct | "You're on the right track with [part]. But think about [hint]..." |
| Incorrect | "Interesting thinking. Let's step back — [simpler sub-question]" |
| "I don't know" | "That's fine. Let me give you a smaller piece: [minimal hint]. Now, what do you think?" |
Hint escalation (from least to most help):
When the learner gives an incorrect answer, do NOT just note "wrong". Diagnose the underlying misconception.
A wrong answer reveals what the learner thinks is true. "Not knowing" and "believing something wrong" require completely different responses:
On every incorrect or partially correct answer:
Identify the misconception: What wrong mental model would produce this answer?
Record it in session.md ## Misconceptions table:
active (just identified) or resolved (learner has corrected it)Design a counter-example: Construct a scenario where the wrong mental model produces an obviously absurd or incorrect prediction, then ask the learner to predict the outcome.
Track resolution: A misconception is resolved only when the learner:
Watch for recurring patterns: If the same misconception resurfaces in a later concept, escalate — it wasn't truly resolved. Log it again with a note referencing the earlier instance.
Never directly tell the learner "that's a misconception." Instead, construct the counter-example and let them discover the contradiction themselves. This is harder but produces far more durable learning.
Generate visual aids when they help understanding. Choose the right format:
| When | Output Mode | Tool |
|---|---|---|
| Concept has relationships/hierarchy | Excalidraw diagram | See references/excalidraw.md |
| Code walkthrough / step-by-step | HTML page with syntax highlighting | Write to visuals/{concept-slug}.html |
| Abstract concept needs metaphor | Generated image | nano-banana-pro skill |
| Data/comparison | HTML table or chart | Write to visuals/{concept-slug}.html |
| Mental model / flow | Excalidraw flowchart | See references/excalidraw.md |
HTML visual guidelines: See references/html-templates.md
Excalidraw guidelines: See references/excalidraw.md for HTML template, element format, color palette, and layout tips.
After every question-answer round, regardless of mastery outcome:
session.md with current scores, status changes, and any new misconceptionsroadmap.html to reflect the latest state:
not-started → in-progress, score changes, etc.)Important: Do NOT call open roadmap.html after every round — this is disruptive. The browser is only opened on first generation (Step 2). After that, only open when the user explicitly asks (e.g., "show me my progress", "open the roadmap").
After 3-5 question rounds on a concept, do a mastery check.
Rubric-based scoring (do NOT score on vague "feels correct"):
For each mastery check question, evaluate against these criteria. Each criterion is worth 1 point:
| Criterion | What it means | How to test |
|---|---|---|
| Accurate | The answer is factually/logically correct | Does it match the ground truth? |
| Explained | The learner articulates why, not just what | Did they explain the mechanism, not just the result? |
| Novel application | The learner can apply to an unseen scenario | Give a scenario not used during teaching |
| Discrimination | The learner can distinguish from similar concepts | "How is this different from [related concept]?" |
Score = criteria met / 4. Mastery threshold: >= 3/4 (75%) on EACH mastery check question, AND overall concept score >= 80%.
Learner self-assessment (do this BEFORE revealing your evaluation):
After the mastery check questions, ask:
Use AskUserQuestion:
header: "Self-check"
question: "How confident are you in your understanding of [concept]?"
options:
- label: "Solid"
description: "I could explain this to someone else and handle edge cases"
- label: "Mostly there"
description: "I get the core idea but might struggle with tricky cases"
- label: "Shaky"
description: "I have a rough sense but wouldn't trust myself to apply it"
- label: "Lost"
description: "I'm not sure I really understand this yet"
Calibration signal: Compare self-assessment with your rubric score:
If mastery NOT met (< 80%):
Understanding ≠ ability. Before a concept can be marked mastered, the learner must DO something with it, not just answer questions about it.
After passing the mastery check (3g), give the learner a practice task:
For programming topics:
For non-programming topics:
Evaluation: The practice task is pass/fail:
mastered.Keep practice tasks small. 2-5 minutes max. The goal is to cross the knowing-doing gap, not to build a project.
On mastery:
Last Reviewed to current timestamp and Review Interval to 1d in session.mdroadmap.html is already updated every round (Step 3f). At these additional points, generate richer output:
| Trigger | Output |
|---|---|
| Every 3 concepts mastered | Regenerate concept map (Excalidraw) |
| Halfway through roadmap | Generate summary.html mid-session review |
| All concepts mastered | Generate final summary.html with full achievements |
| User says "stop" / "pause" | Save state to session.md, generate current summary.html |
When all concepts mastered or user ends session:
Update session.md with final state (including all review intervals and misconception statuses)
Update sigma/learner-profile.md (cross-topic memory):
Create or update the learner profile with insights from this session:
# Learner Profile
Updated: {timestamp}
## Learning Style
- Preferred explanation mode: {concrete examples / abstract principles / visual / ...}
- Pace: {fast / moderate / needs-time}
- Responds best to: {predict questions / debug questions / teach-back / ...}
- Struggles with: {abstract concepts / edge cases / connecting ideas / ...}
## Misconception Patterns
- Tends to confuse [X] with [Y] (seen in: {topic1}, {topic2})
- Overgeneralizes [pattern] (seen in: {topic})
- {other recurring patterns}
## Mastered Topics
| Topic | Concepts Mastered | Date | Key Strengths | Persistent Gaps |
|-------|-------------------|------|---------------|-----------------|
| Python decorators | 8/10 | 2025-01-15 | Strong on closures | Weak on class decorators |
## Metacognition
- Self-assessment accuracy: {over-confident / well-calibrated / under-confident}
- Fluency illusion frequency: {rare / occasional / frequent}
Rules for updating the profile:
Generate summary.html: See references/html-templates.md for summary template
Final concept map via Excalidraw showing full mastered topology
Do NOT auto-open in browser. Inform the learner that the summary is ready and they can view it at summary.html.
When --resume or user chooses to resume:
Read sigma/{topic-slug}/session.md
Read sigma/learner-profile.md if it exists
Parse concept map status, misconceptions, session log
Spaced repetition review (BEFORE continuing new content):
Check all mastered concepts for review eligibility:
For each mastered concept:
days_since_review = today - last_reviewed
if days_since_review >= review_interval:
→ Add to review queue
If review queue is non-empty:
Last Reviewed to today.1d. Check if it reveals a known misconception resurfacing. Mark concept status back to in-progress if the learner clearly can't recall the core idea.Brief recap: "Last time you mastered [concepts]. You were working on [current concept]."
Check for unresolved misconceptions from the previous session — if any, address them before continuing
Continue tutor loop from first in-progress or not-started concept
roadmap.html after every question round — but do NOT open it in the browser. Only open browser when the user explicitly asks.