How Anthropic engineering teams use Claude Code every day
Read Time 5 mins | Written by: Cole
Anthropic documented how its internal teams use Claude Code across ten departments. The results reveal practical patterns engineering leaders can apply immediately – and honest insights about what doesn't work.
The reality: it's not just about writing code faster
The most striking finding isn't about code generation speed. Instead, teams report that AI coding assistants fundamentally change how engineers work across three key dimensions:
Confidence in unfamiliar territory. The API Knowledge team now tackles bugs in codebases they've never seen before, something that previously required pulling in senior engineers or spending days building context. One team member explained they can now ask "Do you think you can fix this bug?" and get immediate progress on issues they would have previously delegated.
Elimination of context-switching overhead. Security Engineering cut their infrastructure debugging time in half – from 10-15 minutes down to 5 minutes – by feeding stack traces directly into Claude Code instead of manually scanning through code. This wasn't about writing new code; it was about understanding existing systems faster.
Cross-functional capability expansion. Perhaps most surprisingly, non-technical teams are building production tools. Growth Marketing created automated workflows that generate hundreds of ad variations in minutes, while Legal built accessibility tools and department coordination systems – all without traditional engineering support.
Real-world Claude Code use cases across departments
Kubernetes crisis resolution without specialists: When the Data Infrastructure team's Kubernetes clusters went down and weren't scheduling new pods, they fed screenshots of dashboards into Claude Code. It guided them through Google Cloud's UI menu by menu until they found pod IP address exhaustion, then provided exact commands to create a new IP pool – bypassing the need to involve networking specialists.
Design-to-code velocity: Product Design pastes mockup images into Claude Code to generate fully functional prototypes that engineers can immediately understand and iterate on. This replaces the traditional cycle of static Figma designs requiring extensive explanation. For tasks like removing "research preview" messaging across the entire codebase, they coordinated with legal and implemented updates in two 30-minute calls instead of a week of back-and-forth.
Cross-team enablement: The Data Infrastructure team showed finance members how to write plain text files describing data workflows. Employees with no coding experience could describe steps like "query this dashboard, get information, run these queries, produce Excel output," and Claude Code would execute the entire workflow, asking for required inputs like dates.
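A workflow file of this kind might look like the sketch below. This is a hypothetical illustration – the dashboard, queries, and filename are invented, not the finance team's actual workflow:

```text
# weekly-revenue-report.txt — hypothetical workflow description
1. Query the revenue dashboard for the date range I give you.
2. Run the follow-up queries for each region in the results.
3. Combine the results into a single table.
4. Produce an Excel file named weekly-report.xlsx with one sheet per region.
Before starting, ask me for the start and end dates.
```

The point is that the "program" is plain English: Claude Code interprets the steps, prompts for the missing inputs (here, the dates), and runs the tools itself.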
Marketing automation at scale: Growth Marketing built an agentic workflow that processes CSV files containing hundreds of existing ads with performance metrics, identifies underperforming ads, and generates new variations meeting strict character limits (30 characters for headlines, 90 for descriptions). Using two specialized sub-agents, the system generates hundreds of new ads in minutes.
They also developed a Figma plugin that programmatically generates up to 100 ad variations by swapping headlines and descriptions – reducing hours of copy-pasting to half a second per batch.
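The deterministic parts of a pipeline like this are small. As a minimal sketch – the column names (`headline`, `description`, `ctr`) and the performance threshold are assumptions, not Growth Marketing's actual schema – the filtering and character-limit checks might look like:

```python
import csv
import io

# Platform limits cited above: 30 characters for headlines, 90 for descriptions.
MAX_HEADLINE = 30
MAX_DESCRIPTION = 90

def underperforming(rows, ctr_threshold=0.01):
    """Return ads whose click-through rate falls below the threshold.

    Assumes columns named 'headline', 'description', and 'ctr';
    the real pipeline's schema may differ.
    """
    return [r for r in rows if float(r["ctr"]) < ctr_threshold]

def within_limits(headline, description):
    """Check a generated variation against the character limits."""
    return len(headline) <= MAX_HEADLINE and len(description) <= MAX_DESCRIPTION

# Tiny demo with inline CSV data standing in for the exported ad report.
data = io.StringIO(
    "headline,description,ctr\n"
    "Fast cloud backups,Back up your data in minutes,0.004\n"
    "Secure file sync,Sync files across all your devices,0.021\n"
)
rows = list(csv.DictReader(data))
weak = underperforming(rows)
print([r["headline"] for r in weak])  # ads flagged for regeneration
```

In the workflow the article describes, the generation itself is delegated to sub-agents; checks like `within_limits` are what keep the generated copy inside the platform's hard constraints.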
Three adoption patterns that actually work
Across Anthropic's teams, three distinct usage patterns emerged that delivered measurable impact:
Pattern 1: Autonomous execution for well-defined tasks
The Claude Code team itself uses "auto-accept mode" for rapid prototyping on peripheral features. They start from a clean git state, set Claude to work autonomously, and review the roughly 80%-complete solution before taking over for final refinements. About 70% of their Vim mode implementation came from autonomous work.
The key insight: this pattern works best for tasks on the product's edges, not core business logic. Teams emphasize frequent checkpointing so they can easily revert if Claude goes off track. The RL Engineering team reports this approach succeeds on the first attempt about one-third of the time – but when it works, it saves significant development time.
Pattern 2: Synchronous collaboration for critical features
For features touching core business logic, teams work synchronously with Claude Code, providing detailed prompts with specific implementation instructions. They monitor the process in real-time to ensure code quality and architectural alignment while letting Claude handle repetitive coding work.
The Product Development team uses this approach for critical features, maintaining oversight while delegating the mechanical aspects of implementation. This hybrid model preserves engineering judgment while accelerating execution.
Pattern 3: Knowledge extraction and codebase navigation
Multiple teams cite this as their most valuable use case. New hires use Claude Code to navigate massive codebases by reading documentation, identifying relevant files, and explaining data pipeline dependencies. This replaces traditional data catalogs and significantly accelerates onboarding.
The Data Infrastructure team directs new data scientists to Claude Code as their first stop for understanding the codebase. The Inference team, many of whom lack ML backgrounds, reduced research time by 80% – what took an hour of Google searching now takes 10-20 minutes.
The infrastructure decisions that enable adoption
Teams that saw the most success made specific infrastructure choices:
Documentation in CLAUDE.md files. The better teams documented their workflows, tools, and expectations in CLAUDE.md files, the better Claude Code performed. The Data Infrastructure team emphasizes this made Claude Code excel at routine tasks like setting up new data pipelines when existing patterns were documented.
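A CLAUDE.md file is just markdown that Claude Code reads at the start of a session. As a hypothetical example for a data pipeline repo (the commands and paths here are invented for illustration):

```markdown
# CLAUDE.md — hypothetical example for a data pipeline repo

## Common commands
- `make test` runs the unit tests; run it before proposing any change.
- New pipelines live under `pipelines/` and follow the pattern in
  `pipelines/example_pipeline/`.

## Conventions
- All SQL goes through the query templates in `sql/templates/`.
- Never write directly to production tables; use the staging schema.
```

The payoff the teams describe comes from exactly this kind of content: documented patterns Claude Code can imitate instead of guessing.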
Custom slash commands for repeated workflows. Security Engineering accounts for 50% of all custom slash commands in the entire monorepo. These commands streamline specific workflows and dramatically speed up repeated tasks.
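In Claude Code, a custom slash command is a markdown prompt file placed in `.claude/commands/`; its filename becomes the command name, and `$ARGUMENTS` is replaced with whatever the user types after it. A hypothetical triage command (the workflow and wording are invented, not Security Engineering's actual commands) might look like:

```markdown
<!-- .claude/commands/triage-alert.md — hypothetical example -->
Investigate the security alert pasted below. Pull the relevant stack trace,
identify the affected service, and summarize likely root causes with the
files involved.

Alert: $ARGUMENTS
```

An engineer would then run `/triage-alert <pasted alert>` and get the same structured investigation every time, which is what makes repeated workflows fast.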
MCP servers for sensitive data. Rather than using CLI tools directly, teams recommend MCP servers to maintain security control over what Claude Code can access, especially for sensitive data requiring logging or privacy controls.
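Project-scoped MCP servers can be declared in a checked-in `.mcp.json` file so the whole team shares the same gated access. A minimal sketch, assuming a hypothetical internal server module and token name:

```json
{
  "mcpServers": {
    "internal-data": {
      "command": "python",
      "args": ["-m", "internal_data_mcp"],
      "env": { "DATA_API_TOKEN": "${DATA_API_TOKEN}" }
    }
  }
}
```

Because Claude Code only reaches the sensitive data through the server, the server is where teams can enforce logging, scoping, and privacy controls – which is the point of preferring this over raw CLI access.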
What doesn't work (and how to avoid it)
Teams were candid about limitations:
The RL Engineering team notes Claude Code succeeds on the first autonomous attempt only about one-third of the time. Their advice: try one-shot first, but be ready to switch to collaborative guidance when it doesn't work. Starting over often has a higher success rate than trying to fix mistakes.
The Data Science team treats it "like a slot machine" – save your state, let Claude run for 30 minutes, then either accept the result or start fresh rather than wrestling with corrections.
Product Design emphasizes treating Claude as "an iterative partner, not a one-shot solution." Rather than expecting perfect solutions immediately, approach it as a collaborator you iterate with.
Measuring the impact: beyond velocity metrics
Teams report measurable improvements across multiple dimensions:
- Development velocity: Product Design executed visual and state management changes 2-3x faster
- Cycle time compression: Teams now complete complex projects in hours instead of a week of coordination
- Team capability expansion: Growth Marketing operates "like a larger team," handling tasks that traditionally required dedicated engineering resources
- Quality improvements: Inference team reports comprehensive test coverage with edge cases they would have missed manually
But perhaps more important than speed metrics, teams report qualitative changes in how they work. The API Knowledge team notes "enhanced developer happiness" and reduced friction in daily workflows. Product Design finds that designers understanding system constraints upfront improves design-engineering collaboration.
The bottom line for engineering leaders
AI coding assistants are delivering real productivity gains, but not in the way most engineering leaders expect. The value doesn't come primarily from generating code faster – it comes from eliminating friction, enabling confident exploration of unfamiliar codebases, and expanding team capabilities into areas that previously required specialists.
The teams seeing the most success treat these tools as collaborators rather than magic solutions. They invest in infrastructure like documentation and custom commands. They develop intuition about which tasks work autonomously versus which need supervision. And they're transparent about limitations while actively exploring new use cases.
For engineering leaders considering adoption, the evidence suggests starting with specific, well-documented use cases rather than broad deployment. Focus on reducing context-switching overhead and accelerating codebase navigation before optimizing for autonomous code generation.
And perhaps most importantly, create space for teams to experiment and share learnings – the most innovative uses often come from unexpected places, like Legal building accessibility tools or Growth Marketing creating agentic workflows.
The future of software development doesn't show engineers being replaced by AI – it shows engineers augmented by AI, working on problems they couldn't have tackled before.
Cole
Cole is Codingscape's Content Marketing Strategist & Copywriter.
