
10 Claude Code Workflow Tips from Creator Boris Cherny
Boris Cherny, creator of Claude Code, shared his personal workflow for building software with AI. Here are 10 practical tips to transform your dev loop.
Sean McLellan
Lead Architect & Founder
February 1, 2026
If you've been following the AI development space, you likely saw Boris Cherny (@bcherny), the creator of Claude Code, trending on Twitter today. He pulled back the curtain on how he and the team at Anthropic use their own tool to build software.
For many developers, Claude Code (the CLI) is a powerful assistant. But for Boris, it's something more: a teammate that requires a specific workflow to unlock its full potential.
After digging through his thread and the resulting discussions, I've distilled his advice into 10 practical tips that can transform how you pair-program with AI.
1. The Plan Is the Product
Boris's number one rule? Don't rush to code. "If my goal is to write a Pull Request, I will use Plan mode, and go back and forth with Claude until I like its plan," he explains. Most developers jump straight to "fix this bug." Boris spends time iterating on the architectural plan before a single line of code is written. Once the plan is solid, he switches to auto-accept edits mode, where Claude can often "one-shot" the implementation.
The Takeaway: Treat the plan as the most important artifact. If the plan is right, the code follows.
2. Build Verification Loops
This is the "secret sauce" of agentic coding. Don't just ask Claude to write code; give it the tools to prove the code works. Boris emphasizes giving Claude a feedback loop—access to a browser test, a server log, or a simulator. "Claude tests every single change I land to claude.ai/code using the Claude Chrome extension. It opens a browser, tests the UI, and iterates until the code works and the UX feels good." The Takeaway: If Claude can't run the code, it's just guessing. Give it eyes.
3. Team Knowledge in CLAUDE.md
How does your team share coding standards? A wiki no one reads?
At Anthropic, they use a CLAUDE.md file in the root of the repo. This isn't just documentation; it's context that is injected into every session.
Boris uses the @.claude tag on coworkers' PRs to add learnings directly to this file.
The Takeaway: Turn code review into a system update. When you catch a recurring mistake, document it in CLAUDE.md so the agent never makes it again.
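If your team hasn't adopted one yet, CLAUDE.md is simply a Markdown file at the repository root that Claude Code pulls into context at the start of every session. Here is a minimal sketch; the sections and rules below are invented for illustration, not Anthropic's actual file:

```markdown
# CLAUDE.md

## Build & test
- Install dependencies with `bun install`; run `bun test` before declaring a task done.

## Conventions
- TypeScript strict mode everywhere; no `any` without a justifying comment.
- Keep business logic out of route handlers; it belongs in `src/services/`.

## Learnings from code review
- Debounce all search inputs (a regression shipped when this was missed).
```

Because the file lives in version control, a review comment that adds one line here improves every future session for the whole team, not just the person who made the mistake.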
4. Parallelize Your Work
Why wait for one task to finish? Boris runs roughly 5 local sessions and 5-10 remote sessions simultaneously. To make this work locally without git conflicts, he uses a separate git checkout for each session, rather than just branching.
The Takeaway: Treat agent sessions like async background jobs. Fire them off, let them work, and check back later.
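The thread doesn't spell out how those checkouts are created, but one low-friction way to get fully isolated working directories from a single repo is git worktree (separate clones work just as well). The paths and branch names here are only examples:

```bash
# One working directory (and branch) per Claude session, so parallel
# agents never edit the same files on disk.
git worktree add -b session-1 ../myrepo-session-1
git worktree add -b session-2 ../myrepo-session-2

# Launch Claude Code in each directory, e.g. in separate terminal tabs.
cd ../myrepo-session-1 && claude
```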
5. Automate the Mundane with Slash Commands
If you find yourself typing the same prompts repeatedly, script them. Boris uses custom slash commands stored in .claude/commands/.
For example, /commit-push-pr isn't just a git alias; it's a prompt that instructs Claude to run git status, generate a commit message, push, and open a PR—all in one go.
The Takeaway: Don't be a prompt typist. Be a prompt engineer.
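Under the hood, a custom command is just a Markdown file in .claude/commands/: the filename becomes the command name and the file body becomes the prompt. A sketch of what a commit-push-pr.md might contain; the exact wording below is invented:

```markdown
<!-- .claude/commands/commit-push-pr.md -->
Ship the current changes:

1. Run `git status` and `git diff` to review what changed.
2. Write a concise commit message and commit.
3. Push the branch and open a pull request with `gh pr create`,
   using the commit message as the title and a short summary as the body.
```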
6. Quality Over Speed (Opus vs. Sonnet)
In a world obsessed with speed, Boris chooses the slow lane. He uses Opus 4.5 (with thinking enabled) for all coding tasks, preferring it over the faster Sonnet. "I find Opus better at tool use and ultimately faster overall because it makes fewer mistakes," he notes.
The Takeaway: A fast model that writes buggy code is slower than a slow model that gets it right the first time.
7. Auto-Format on Post-Process
Nothing breaks an agent's flow like a linting error. To prevent this, Boris runs a "PostToolUse" hook that automatically runs bun run format after every file edit.
The Takeaway: Don't ask the AI to worry about whitespace. Let the linter handle the style so the AI can focus on the logic.
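Hooks live in Claude Code's settings (for example, .claude/settings.json). A PostToolUse hook that formats after every edit looks roughly like the sketch below; verify the exact schema against the current hooks documentation and substitute your own formatter command:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "bun run format" }]
      }
    ]
  }
}
```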
8. Security: Allowlist, Don't Skip
It is tempting to use --dangerously-skip-permissions to stop the constant approval prompts. Boris advises against it.
Instead, he uses /permissions to allowlist safe commands (like builds and tests) while keeping a gate on potentially destructive ones.
The Takeaway: Safety is friction, but it's necessary friction. Configure your safe zone rather than removing the fence.
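An allowlist like this can also be checked into the project's settings file so every teammate and every session starts from the same safe zone, while anything not listed still prompts for approval. A sketch, assuming the rule syntax below; the commands are examples, and the exact pattern format is worth confirming in the Claude Code permissions docs:

```json
{
  "permissions": {
    "allow": [
      "Bash(bun run build)",
      "Bash(bun test:*)",
      "Bash(git status)"
    ]
  }
}
```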
9. Discovery Over Indexing (Agentic Search vs. RAG)
One subtle but powerful point is how the team prefers agentic search (grep, ls, find) over traditional RAG (Retrieval-Augmented Generation). Static vector indexes go stale the moment you edit a file. An agent that can "look around" the codebase using terminal tools sees the current state of reality.
The Takeaway: Let the agent explore the file system. It's slower than a database query, but it's always accurate.
10. The Social Agent
Perhaps the most futuristic tip is how the team views agents socially. They aren't just tools; they are participants in the PR process. Agents clarify requirements, update documentation, and even comment on reviews.
The Takeaway: Stop thinking of AI as a text editor. Start thinking of it as a junior developer who needs guidance, context, and feedback.
Final Thoughts
What's striking about Boris's workflow is how "vanilla" it is. He isn't using complex custom scaffolding; he's mastering the fundamentals of the tool. For developers looking to level up their AI workflow, the lesson is clear: Success isn't about the prompt. It's about the process. Start with a plan, enforce verification, and share knowledge. The code will take care of itself.

Sean McLellan
Lead Architect & Founder
Sean is the visionary behind BaristaLabs, combining deep technical expertise with a passion for making AI accessible to small businesses. With over two decades of experience in software architecture and AI implementation, he specializes in creating practical, scalable solutions that drive real business value. Sean believes in the power of thoughtful design and ethical AI practices to transform how small businesses operate and grow.