Reddit Experience · Apr 2026

Usability tested AI code assistants on real dev tasks - speed gains but debugging skills tanked hard [Experienced]

9 upvotes 5 replies


Look, I'm a UX researcher, and last month we ran usability sessions with a handful of mid-level devs (3-5 years of experience) on their daily workflows. We threw in AI code helpers for common tasks like refactoring messy APIs or building simple features.

Honestly, the results pissed me off a bit. With the assistant, participants produced working code about 3x faster on average: what used to take 25-30 minutes dropped to under 10. Great for shipping quick prototypes, right? But then we pulled the AI and had them debug the same buggy output manually. Completion time roughly doubled compared to their pre-AI baseline, and half of them straight-up missed basic logic flaws they'd normally catch in seconds. One participant said it felt like his brain went on autopilot.

Tbh, it reminded me of running: you can smash intervals with a pacer app calling every stride, but hit a trail run without it and your pacing discipline crumbles after mile 5. All that tool reliance atrophies the fundamentals.

Not saying ditch AI - it's a damn good sprint coach. But how do you keep the long-run endurance for debugging and problem-solving? Do you enforce AI-free coding days? Mix in LeetCode without hints? Or is this just the new normal, and schools/bootcamps need to adapt? Curious what you're seeing in interviews or on the job.
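For anyone wondering what I mean by "basic logic flaws": here's my own illustrative sketch (not actual code from the sessions) of the kind of bug that reads as working code. A hypothetical pagination helper with an off-by-one in the index math runs fine, returns plausible output, and is exactly the class of thing participants normally catch in seconds:

```python
# Illustrative example only -- a hypothetical helper, not code from the study.
# Both versions run without errors; only one is correct.

def paginate(items, page, page_size):
    """Return the items for a 1-indexed page (correct version)."""
    start = (page - 1) * page_size
    return items[start:start + page_size]

def paginate_buggy(items, page, page_size):
    """AI-style variant: treats the 1-indexed page as 0-indexed,
    silently skipping the entire first page of results."""
    start = page * page_size
    return items[start:start + page_size]

items = list(range(10))
print(paginate(items, 1, 3))        # [0, 1, 2]
print(paginate_buggy(items, 1, 3))  # [3, 4, 5] -- wrong, but no crash, no warning
```

The buggy version never throws, which is why it slips past a dev on autopilot: the failure only shows up in the output, and only if you're actually reading it.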
