There is a specific kind of waste that every developer using AI coding agents has felt: you hand Claude Code a reasonably complex task, it works quietly for eight minutes, and then you see the result and realize it made a sensible but wrong assumption somewhere around minute two. The whole branch goes in the trash.
The failure is not the model's capability. It is the approval loop. The model had to choose between two plausible interpretations and it picked one without telling you it had a choice to make.
Anthropic shipped a fix for this, and it is already getting traction on X: Claude Code can now render interactive pickers inline during a task -- option selectors, date pickers, design variant choosers -- and pause until you click. The model does not guess. It surfaces the decision, hands it to you, and then proceeds with your explicit input.
The Old Loop vs. the New Loop
Before:
- Write a prompt describing what you want
- Claude Code executes for 5-15 minutes across multiple files
- Review the output
- Notice it misread an ambiguous requirement ("use the existing auth middleware" vs. "scaffold a new one")
- Revert, rewrite the prompt with more constraints, restart
The feedback cycle is long and expensive. Every rewrite costs context, time, and momentum.
After:
- Write a prompt describing what you want
- Claude Code hits an ambiguous decision -- say, "three viable approaches to the data model"
- An interactive picker appears in the terminal (or browser if you are using Claude Code on the web)
- You click the option you want
- The agent proceeds with your choice baked in
The key shift: the model is now surfacing decisions as decisions rather than collapsing them silently into assumptions. This is architecturally important. It means the agent is modeling its own uncertainty and treating you as a participant rather than a spec document.
What the AskUserQuestion Tool Actually Does
Claude Code exposes a built-in tool called AskUserQuestion that the model can call during any task. When invoked, it pauses execution and presents a structured question with a set of options. You respond, and the agent continues.
This is not the same as Claude asking "Are you sure?" at the end. The tool fires at the moment the agent hits a decision branch -- before it commits to a direction. It is the difference between a contractor asking which tile you want before cutting versus showing you the finished backsplash.
On the Claude.ai web interface, the same behavior renders as visual pickers: a calendar for date selection, a card layout for design options. Hamel Husain noted this week that he got a calendar picker while doing a scheduling task and a three-option design picker while working in Claude Code -- same underlying mechanism, surface-appropriate rendering.
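To make the shape of this interaction concrete, here is a small sketch of the kind of structured question the tool carries and how a terminal picker might lay it out. The field names and the render function are illustrative assumptions, not Claude Code's internal schema:

```python
# Hypothetical sketch of an AskUserQuestion-style structured question.
# Field names here are assumptions for illustration only.
question = {
    "question": "Three viable approaches to the data model. Which should I use?",
    "header": "Data model",
    "options": [
        {"label": "Single table", "description": "One table with a type column; simplest migration."},
        {"label": "Joined tables", "description": "Separate tables per entity, joined by foreign keys."},
        {"label": "JSONB column", "description": "Flexible schema in one column; weakest constraints."},
    ],
}

def render(q):
    """Lay the question out the way a terminal picker might."""
    lines = [f"[{q['header']}] {q['question']}"]
    for i, opt in enumerate(q["options"], 1):
        lines.append(f"  {i}. {opt['label']} -- {opt['description']}")
    return "\n".join(lines)

print(render(question))
```

The point is the structure: a question, a short header, and a closed set of options the agent commits to honoring, rather than free-form "any thoughts?" chatter.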
How to Make It Trigger Reliably: The AGENTS.md Approach
The model does not always call AskUserQuestion unprompted. Left to default behavior, it will often make a choice and proceed. To increase the rate at which it surfaces decisions for your input, you can configure this in your global AGENTS.md file.
Eleanor Berger shared her configuration this week. The pattern works across Claude Code, opencode, GitHub Copilot CLI, and other agents that respect AGENTS.md:
```markdown
## Decision Protocol

Before committing to any approach that involves a meaningful trade-off,
use the AskUserQuestion tool to present options. This includes:

- Architecture choices (new component vs. extend existing)
- Data model decisions (schema design, relation types)
- Auth and security patterns (where multiple valid approaches exist)
- UI structure when design options differ substantially

Do NOT ask for confirmation on: file naming, minor style choices,
obvious error handling, or anything covered by existing project conventions.
```
The instruction trains the model on when to escalate versus when to proceed. Without it, Claude Code tends to ask too rarely or too often depending on the task. The AGENTS.md constraint tunes the signal.
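If you maintain the global file from scripts rather than by hand, a small idempotent installer keeps repeated runs from duplicating the block. The helper below is a sketch, not an official tool; only the ~/.claude/AGENTS.md path comes from the workflow described here:

```python
from pathlib import Path

# Abbreviated copy of the Decision Protocol block shown above.
PROTOCOL = """\
## Decision Protocol

Before committing to any approach that involves a meaningful trade-off,
use the AskUserQuestion tool to present options.
"""

def install_protocol(agents_md: Path, block: str = PROTOCOL) -> bool:
    """Append the protocol block to AGENTS.md unless it is already
    present. Returns True if the file was modified."""
    existing = agents_md.read_text() if agents_md.exists() else ""
    if "## Decision Protocol" in existing:
        return False  # already installed; leave the file alone
    agents_md.parent.mkdir(parents=True, exist_ok=True)
    with agents_md.open("a") as f:
        if existing and not existing.endswith("\n"):
            f.write("\n")
        f.write("\n" + block)
    return True

# Typical invocation against the global file:
# install_protocol(Path.home() / ".claude" / "AGENTS.md")
```

The idempotence check matters because agents that read AGENTS.md will happily follow the same instruction twice, and duplicate blocks make later edits error-prone.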
A Concrete Workflow for a 30-Minute Development Session
If you want to test this yourself, here is a session structure that demonstrates the pattern clearly:
Setup (5 minutes)
- Add the Decision Protocol block above to your global ~/.claude/AGENTS.md
- Pick a task with at least one real architecture decision embedded in it (adding a new feature to an existing app works well)
Run the task (15 minutes)
- Hand Claude Code the task with minimal constraint on approach
- Watch for picker prompts when they appear -- respond quickly, do not over-explain
- Note which decisions the model escalated vs. resolved itself
Review (10 minutes)
- Compare the output to what you would have specified upfront
- Check whether the escalated decisions were the right ones to escalate
- Adjust your AGENTS.md instruction based on what you saw
After two or three sessions, you will have a calibrated instruction that matches your working style. The model learns what your "meaningful trade-off" threshold is from the context you set.
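One way to keep the review step honest is to log each decision the agent faced and whether its escalate-or-proceed call matched the one you would have made. This is a hypothetical bookkeeping sketch; the record type and the sample session data are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    escalated: bool        # did the agent surface it via a picker?
    should_escalate: bool  # would you have wanted to be asked?

def alignment_rate(decisions):
    """Fraction of decisions where the agent's escalate/proceed call
    matched the call you would have made yourself."""
    if not decisions:
        return None
    matched = sum(d.escalated == d.should_escalate for d in decisions)
    return matched / len(decisions)

# A made-up session log for illustration.
session = [
    Decision("schema: single vs. joined tables", escalated=True, should_escalate=True),
    Decision("reuse auth middleware vs. scaffold new", escalated=False, should_escalate=True),
    Decision("helper file naming", escalated=False, should_escalate=False),
    Decision("error handling for missing config", escalated=False, should_escalate=False),
]

print(f"alignment rate: {alignment_rate(session):.0%}")  # 3 of 4 calls matched: 75%
```

A rising rate across sessions is the signal that your Decision Protocol wording is converging on your actual trade-off threshold.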
Where This Fits in the Broader Shift
The interactive picker is a small feature with outsized reach. It represents one of the more important architectural moves in coding agents: explicit uncertainty surfacing instead of confident hallucination.
Most of the time wasted in AI-assisted development is not in the generation phase. It is in the review-and-rewrite cycle that follows wrong assumptions. Anything that moves decisions earlier in the loop -- before the model has already committed eight minutes of compute and 400 lines of code to one interpretation -- compresses that waste significantly.
The /copy command in Claude Code's January changelog also uses a picker to let you select individual code blocks from a response rather than copying the full output. Same mechanism, different context. Interactive selection as a pattern is clearly getting broader application.
This is not a replacement for clear prompting. A well-specified task still produces better results than an ambiguous one with pickers. But for complex multi-step work where requirements are genuinely underspecified, the interactive loop is a meaningful improvement over the alternative.
Add the AGENTS.md config, run a session that involves an actual architectural choice, and see whether the decisions Claude Code escalates to you match the ones you would have wanted to make yourself. That alignment rate is the metric worth tracking.
