AI Coding Assistants: An Honest Review After 12 Months
I’ve used every major AI coding assistant daily for the past year. Here’s what actually works, what doesn’t, and what senior leaders need to know before mandating these tools.
The Tools I Tested
Over twelve months, I used GitHub Copilot, Cursor, Claude Code, and several others across real projects — not toy demos, not tutorials, but production work with deadlines and consequences.
What Actually Works
Boilerplate elimination. This is where AI coding tools genuinely shine. Repetitive code, standard patterns, configuration files — these tools handle them well. Time savings are real and measurable.
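To make that concrete, here’s the sort of repetitive code I mean: a config-loading helper of the kind I’d otherwise type out by hand. This is my own illustration, not the verbatim output of any particular tool.

```python
# Typical boilerplate an assistant completes from a one-line comment:
# "load settings from environment variables with sensible defaults"
import os
from dataclasses import dataclass


@dataclass
class Settings:
    database_url: str
    cache_ttl_seconds: int
    debug: bool


def load_settings() -> Settings:
    """Read settings from the environment, falling back to defaults."""
    return Settings(
        database_url=os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        cache_ttl_seconds=int(os.environ.get("CACHE_TTL_SECONDS", "300")),
        debug=os.environ.get("DEBUG", "false").lower() == "true",
    )
```

Nothing clever is happening there, which is exactly the point: it’s the kind of code that eats an afternoon in aggregate.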
Code explanation. Point an AI at an unfamiliar codebase and ask it to explain what’s happening. This is genuinely useful, especially for teams inheriting legacy systems.
Test generation. AI-generated tests aren’t perfect, but they’re a solid starting point. They catch the obvious cases you might miss when you’re moving fast.
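To show what “solid starting point” looks like in practice, here’s roughly the shape of what you get when you ask for tests against a small string helper. The slugify function and the tests are my own sketch, not any tool’s actual output.

```python
import re


def slugify(text: str) -> str:
    """Lowercase, trim, and collapse runs of non-alphanumerics into hyphens."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")


# The obvious cases an assistant will cover without being asked (pytest style).
def test_basic_phrase():
    assert slugify("Hello World") == "hello-world"


def test_collapses_punctuation_and_whitespace():
    assert slugify("  Hello,   World!  ") == "hello-world"


def test_empty_string():
    assert slugify("") == ""
```

The edge cases that actually matter for your domain are still on you.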
What Doesn’t Work
Architecture decisions. AI tools will confidently suggest architectures that look reasonable but miss crucial context about your team’s capabilities, your deployment constraints, or your regulatory requirements.
Novel problem-solving. For problems that don’t look like anything in the training data, these tools struggle. They’ll generate plausible-looking code that subtly misses the point.
The Bottom Line for Leaders
AI coding tools are productivity boosters for experienced developers. They are not replacements for engineering judgement. If someone tells you AI will let you halve your engineering team, they’re selling you something.
Invest in the tools. But invest more in the people using them.