Simon Willison opens ChatGPT, pastes 1,000 comments from a stranger's Hacker News history, and types three words: "Profile this user." Within seconds, the model returns a detailed breakdown of that person's professional identity, core beliefs, communication patterns, and likely income sources. No search engine involved. No manual reading required. Just a conversation with a collaborator that actually understands context.
This moment, documented on Willison's blog in March 2026, crystallizes a shift that developers have been fumbling toward for two years. Most people still treat LLMs like souped-up search—ask a question, get an answer, done. But Willison's profiling trick reveals something different: when you treat these models as collaborators with working memory and contextual reasoning, they unlock capabilities that search never had.
The technical setup is straightforward. The Algolia Hacker News API exposes comments sorted by date with author usernames as tags. Willison built a simple tool that hits that API, fetches up to 1,000 comments for any user, and formats them for the clipboard. He feeds that into Claude Opus 4.6 (his current preference for this task) and prompts "profile this user." The model synthesizes patterns across thousands of data points: writing style, topics of interest, argumentative tendencies, even monetization strategies.
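Willison's actual tool isn't reproduced in the post, but the API call it depends on fits in a few lines. A minimal sketch, using only the standard library; the function names and the joined-text output format are illustrative choices, not his implementation:

```python
import json
import urllib.request

ALGOLIA_SEARCH = "https://hn.algolia.com/api/v1/search_by_date"

def comments_url(username, limit=1000):
    # Algolia's HN API filters by tags; author_<name> scopes results
    # to one user, and hitsPerPage caps a single response at 1000.
    return f"{ALGOLIA_SEARCH}?tags=comment,author_{username}&hitsPerPage={limit}"

def fetch_comments(username, limit=1000):
    """Fetch up to `limit` recent comments for one HN user, newest
    first, joined into a single block ready to paste into a prompt."""
    with urllib.request.urlopen(comments_url(username, limit)) as resp:
        hits = json.load(resp)["hits"]
    return "\n\n".join(h.get("comment_text") or "" for h in hits)
```

From there, the "tool" is just a clipboard copy of `fetch_comments("someuser")` followed by the three-word prompt.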
What's striking is what the model infers that no search result would surface. It identifies someone's core thesis on AI coding by tracking how they discuss the topic across months of comments. It notices that a developer monetizes through GitHub sponsors rather than corporate payroll by detecting how they describe their own work. These aren't facts you find with search—they're conclusions you reach through synthesis.
The same collaborator mindset powers Willison's other recent experiment. When a developer pointed out that a PC Gamer article had ballooned to 37MB due to auto-playing video ads and web bloat, Willison deployed Rodney, his custom Claude Code agent for web investigation, to audit the page. The agent fetched the content, analyzed the performance characteristics, and synthesized findings without the usual manual switching between browser tabs and developer tools.
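Rodney's internals aren't described here, but the simplest version of such an audit is easy to sketch. The function names are hypothetical, and note the limitation stated in the docstring: a single fetch measures only the HTML document, while the 37MB figure came from subresources (video ads, scripts) that a full audit would need a headless browser to capture:

```python
import urllib.request

def page_weight(url):
    """Download a URL and return its raw size in bytes. This measures
    only the HTML document itself; subresources like video ads need a
    headless browser to audit."""
    req = urllib.request.Request(url, headers={"User-Agent": "audit-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        return len(resp.read())

def human(n_bytes):
    # Format a byte count as B/KB/MB/GB for readable reporting.
    for unit in ("B", "KB", "MB", "GB"):
        if n_bytes < 1024:
            return f"{n_bytes:.1f} {unit}"
        n_bytes /= 1024
    return f"{n_bytes:.1f} TB"
```

The point of the agent approach is that this kind of boilerplate, plus the interpretation of its output, gets delegated in one prompt.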
Both experiments share a pattern: the human sets the goal, the LLM handles the interpretation, synthesis, and reasoning across messy real-world data. This isn't about LLMs replacing developer tools. It's about LLMs becoming the layer that coordinates those tools.
The implications for developer tooling are concrete. Debugging becomes a conversation rather than a ritual of grep and stack traces. Profiling becomes synthesis rather than dashboard-gazing. The skill shifts from knowing how to use each tool to knowing how to ask a collaborator the right question.
Willison himself frames this as "agentic engineering"—using coding agents as productivity multipliers for skilled developers. The qualifier matters: these tools amplify expertise, they don't replace it. Someone who doesn't understand what a profiler does won't know what to ask the LLM to profile. But for developers who already know their craft, the collaboration model transforms tedious tasks into conversational ones.
The 1,000-comment threshold isn't arbitrary. It's the rough limit of what fits in a context window while leaving room for the model to think. Below that, you get surface-level analysis. Above it, you risk the model hallucinating connections that aren't there. Finding that sweet spot—feeding an LLM enough context to be useful but not so much it confabulates—is becoming a core developer skill.
Willison's blog now hosts a running guide on this approach, updated as he discovers new patterns. The most recent entry, published March 21, includes a detailed breakdown of what the profiling prompt captures and what it misses. It's practical knowledge, shared openly: exactly the kind of information that spreads through the developer community via links, not algorithms.