What I'm exploring this week
Small experiments, half-thoughts, and links worth your time.
A short, low-stakes note. Things I'm reading, building, or thinking about right now.
Reading
A re-read of The Bitter Lesson. Every time I think I've moved past it, it turns out I haven't. It remains the compass for almost any AI design decision: the methods that win are the ones that scale with compute.
Building
A small CLI that watches a folder, summarizes new files, and writes the summaries back as sidecar notes. It's the kind of unfinished thing that exists mostly to teach me something — in this case, that streaming summaries feel meaningfully different from waiting for a full response, even when the total time is the same.
Watching
A recent talk on Software 3.0. Worth the hour, especially the section on how prompts and weights become the new programs.
Open questions
- What's the right primitive for "agent memory" that doesn't just become "embedding everything forever"?
- Is there a clean way to version a prompt the way you version a function?
- Why do I keep enjoying Markdown more than every editor that tries to replace it?
If you're noodling on any of these, I'd love to hear from you.
More writing
Shift-left, but for AI coding assistants
Notes on what changes (and what doesn't) when ~14K developers start writing code with an LLM in the loop.
Threat modeling, faster — with an LLM in the loop
A practical pattern for using an LLM to bootstrap STRIDE without giving up the parts that need a human.