Chapter 9 · Meta · 4 min read
A short chapter at the end of a guide that taught you Claude Code — about what it took to build this very guide with Claude Code.
Over April 23–24, 2026, across six sessions spanning roughly fourteen hours of session time, one developer and one agent shipped everything you just read: the scaffold, nine chapters in two languages, the sample projects, the companion skill, the hero illustration, the four diagrams, the tests, the deploy. A lot of that fourteen hours was the agent working unattended on approved plans — hands-on human attention was closer to a normal afternoon's worth. These are the numbers underneath, pulled straight from Claude Code's local session transcripts, priced at current Anthropic API rates with the proper cache multipliers.
Cache read dominates at 97.4 %. Claude Code's prompt cache means most of the input is re-read from short-lived cache, not freshly processed. That's the efficiency that makes long agentic sessions affordable.
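To see why the cache multiplier matters so much, here is a sketch of the cost arithmetic. The multipliers reflect Anthropic's published prompt-caching pricing (cache reads bill at roughly 0.1× the base input rate, cache writes at 1.25×), but the per-million rates and token counts below are illustrative assumptions, not this guide's actual data:

```python
# Sketch of API-rate cost math with prompt-cache multipliers.
# Rates and token counts are assumed for illustration only.

MTOK = 1_000_000
BASE_INPUT_PER_MTOK = 15.00   # assumed Opus-class base input rate, $/Mtok
OUTPUT_PER_MTOK = 75.00       # assumed output rate, $/Mtok

def session_cost(fresh_in, cache_write, cache_read, out):
    """API-rate cost for one session, with cache multipliers applied."""
    return (
        fresh_in    / MTOK * BASE_INPUT_PER_MTOK
        + cache_write / MTOK * BASE_INPUT_PER_MTOK * 1.25  # write premium
        + cache_read  / MTOK * BASE_INPUT_PER_MTOK * 0.10  # read discount
        + out         / MTOK * OUTPUT_PER_MTOK
    )

# When ~97 % of input arrives as cache reads, the same token volume
# costs a fraction of what fully fresh input would.
cached = session_cost(1 * MTOK, 2 * MTOK, 100 * MTOK, 2_400_000)
uncached = session_cost(103 * MTOK, 0, 0, 2_400_000)
print(f"cached: ${cached:.0f}  uncached: ${uncached:.0f}")
```

The gap between the two numbers is the whole economics of long agentic sessions: the agent re-reads an enormous context on every turn, but only pays full price for it once.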
Main sessions ran on Claude Opus 4.7. Subagents picked lighter models for scoped tasks — Haiku 4.5 for lookups, Sonnet 4.6 for medium-depth work, and Opus when they needed main-thread reasoning.
- claude-opus-4-7 — 2,959 messages
- claude-haiku-4-5 — 202 messages
- claude-sonnet-4-6 — 109 messages
- claude-opus-4-7 — 101 messages

Build #2 carried 55 % of total cost and spawned 41 subagents while parsing the workshop briefing and running the V2 agentic rewrite. The other sessions cluster around 10–15 %.
| Session | Cost share | Duration | Subagents | Tool calls | Output tok | Cost (API-rate) |
|---|---|---|---|---|---|---|
| 20:03 build #1 | 14 % | 2 h 17 m | 14 | 351 | 941 K | $122 |
| 22:02 build #2 | 55 % | ~27 h wall | 41 | 729 | 2.4 M | $462 |
| 22:04 parallel | 11 % | 1 h 47 m | 15 | 223 | 790 K | $89 |
| 22:21 small | 1 % | 16 m | 3 | 26 | 54 K | $7 |
| 23:49 final push | 9 % | 1 h 27 m | 7 | 304 | 455 K | $74 |
| 09:16 current | 11 % | ~11 h wall | 27 | 186 | 606 K | $90 |
The fraction of user messages containing pushback language (no, wait, actually, that is not) dropped from 45 % in the heaviest build session to 26 % in the most recent — a quantified learning curve across the arc.
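A metric like this is cheap to compute yourself. Here is a sketch, assuming a transcript format where each jsonl line carries a `type` field and user turns hold text under `message.content` — the field names and the phrase list are assumptions to check against a real Claude Code transcript:

```python
import json

# Assumed pushback markers; tune to your own editing vocabulary.
PUSHBACK = ("no,", "wait", "actually", "that is not", "that's not")

def pushback_fraction(lines):
    """Fraction of user messages containing pushback language."""
    user_msgs = []
    for line in lines:
        rec = json.loads(line)
        if rec.get("type") != "user":
            continue
        content = rec.get("message", {}).get("content", "")
        if isinstance(content, list):  # content blocks: join the text parts
            content = " ".join(
                b.get("text", "") for b in content if isinstance(b, dict)
            )
        user_msgs.append(content.lower())
    if not user_msgs:
        return 0.0
    hits = sum(any(p in m for p in PUSHBACK) for m in user_msgs)
    return hits / len(user_msgs)
```

Run it over each session transcript in date order and you get the same learning-curve line: a falling fraction means fewer corrections per approved plan.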
Why publish these numbers at all? Two reasons.
Transparency. When someone reads a workshop guide and wonders "how long did this actually take and what did it cost," the honest answer deserves to live in the guide itself, not a follow-up email. Fourteen hours of session time, much of it unattended. $845 at API rates. Zero incremental spend on the Max subscription the work actually ran on.
Demonstration. The strongest evidence that the patterns in Chapter 4 and Chapter 6 — plan mode, proof slices, the compound loop, the harness-first posture — actually scale is a lab built using exactly those patterns. This chapter is the receipt.
A good final exercise: turn what you just read into a dashboard of your own usage. The patterns from Chapters 4 and 6 carry over: narrow scope, plan before work, proof slice before batch.
Try this plan prompt — one scoped task, one concrete output:
/plan: Write me a Python script that reads the last 30 days of my Claude Code session jsonl files from ~/.claude/projects/ and renders a single-page HTML dashboard. Per session: duration, turn count, subagent count, token totals, API-rate cost. Use the published pricing at platform.claude.com for current rates. One HTML file. No dependencies beyond the Python stdlib.
Let Claude write the plan, read it, tighten the scope if it drifted, then /work against the approved plan. A week of your own sessions rendered as a dashboard will tell you more about how you work with Claude Code than any case study someone else published.
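If you want a feel for what the approved plan might produce, here is a minimal sketch of the core of that script. The transcript field names (`type`, `message.usage` token counts) and the pricing constants are assumptions about the local jsonl format; it also skips duration and subagent counts, which the full exercise adds. Check a real transcript and the published rates before trusting its numbers:

```python
import html
import json
from pathlib import Path

RATES = {"input": 15.0, "output": 75.0}  # assumed $/Mtok, Opus-class

def summarize(path):
    """Per-session turn and token totals from one jsonl transcript."""
    turns = in_tok = out_tok = 0
    for line in path.read_text().splitlines():
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than abort
        if rec.get("type") in ("user", "assistant"):
            turns += 1
        usage = rec.get("message", {}).get("usage", {})
        in_tok += usage.get("input_tokens", 0)
        out_tok += usage.get("output_tokens", 0)
    cost = in_tok / 1e6 * RATES["input"] + out_tok / 1e6 * RATES["output"]
    return {"session": path.stem, "turns": turns,
            "in_tok": in_tok, "out_tok": out_tok, "cost": cost}

def render(rows):
    """One self-contained HTML page, stdlib only."""
    cells = "".join(
        f"<tr><td>{html.escape(r['session'])}</td><td>{r['turns']}</td>"
        f"<td>{r['in_tok']:,}</td><td>{r['out_tok']:,}</td>"
        f"<td>${r['cost']:.2f}</td></tr>" for r in rows)
    return ("<html><body><table><tr><th>Session</th><th>Turns</th>"
            "<th>Input tok</th><th>Output tok</th><th>Cost</th></tr>"
            f"{cells}</table></body></html>")

if __name__ == "__main__":
    root = Path.home() / ".claude" / "projects"
    rows = [summarize(p) for p in sorted(root.glob("*/*.jsonl"))]
    Path("dashboard.html").write_text(render(rows))
```

Even this stripped-down version demonstrates the chapter's point: the data for the receipt is already sitting on your disk; the only work is reading it.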