Five Reasons It’s Still Worth Being a Developer in the Age of AI Coding

I’ve been writing software since the dark ages—before the internet and the web existed. There were fewer programming languages back then, and if you weren’t programming in BASIC, you probably had a shelf full of well-thumbed user guides and reference manuals. You had to write all your own code, share it with friends on floppy disks, or buy it from a catalog.

Programming then was mostly desktop- and terminal-based. There was no web, GUI, or mobile development, and everything ran either on a single PC platform (DOS) or on proprietary hardware with its own operating system. Then, in the mid-to-late ’90s, the web appeared, triggering a Cambrian explosion of programming languages, platforms, and software architectures that transformed the lives of developers everywhere.

Suddenly, you could find code online with search engines and ask for advice on sites like Experts Exchange and Stack Overflow; now you can even ask an AI to generate code for you. If almost anyone can produce software, is it still worth being a developer?

Here are five reasons I think so.

There’s more to creating software than writing code. You need to know what to build, not just how to build it. Do you understand the application’s architecture? How to use version control? Should you store data in a database or JSON files? Why might one programming language be better for a specific task than another? AI can help, but only experience can guide you to the best choice.
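
To make the “database or JSON files?” question concrete, here is a small, standard-library-only Rust sketch; the file name and record layout are invented for illustration. Appending records to a JSON-lines file is trivial, but even the simplest lookup means scanning the whole file, which is exactly where a real database, with indexes and queries, starts to earn its keep.

```rust
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, Write};

fn main() -> std::io::Result<()> {
    // "Insert": append one JSON object per line (hypothetical file and fields).
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("players.jsonl")?;
    writeln!(file, r#"{{"name":"alice","chips":1500}}"#)?;
    writeln!(file, r#"{{"name":"bob","chips":900}}"#)?;

    // "Query": with no index, every lookup is a full scan of the file.
    let reader = BufReader::new(File::open("players.jsonl")?);
    for line in reader.lines() {
        let line = line?;
        if line.contains(r#""name":"bob""#) {
            println!("found: {line}");
        }
    }
    Ok(())
}
```

For a handful of records the flat file wins on simplicity; once you need concurrent writers, indexed lookups, or partial updates, the database is the better choice, and knowing where that line falls is experience, not something you can prompt for.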

Over time, code evolves as clients report bugs or request enhancements. At work, I’m part of a team that supports a 1.4 million-line application written in Delphi and C#/WPF. I doubt any AI could handle something that large without astronomical costs for the tokens needed to process it. Other teams in the company have similarly large applications.

Earlier program generators suffered from the same problem that AI-generated code faces now: the code worked fine for the initial task, but making changes was a nightmare.

As a recreational poker player, I recently wrote a poker simulator in Rust that plays a million games against two to eight computer players in just 1.5 seconds, calculating the probability of winning for a particular starting hand. The program simulates a deck of cards, draws for each player, and evaluates the best hand from seven cards. Optimizing it for precious nanoseconds was part of the fun—the current version can evaluate the best hand in 125 nanoseconds, though I’m sure I can make it faster.
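
For readers curious what the bones of such a simulator look like, here is a minimal sketch of that Monte Carlo structure, not the actual program: every name is a placeholder, the RNG is a toy xorshift so the example has no dependencies, and the hand “evaluator” is deliberately naive (a real one detects pairs, straights, flushes, and so on, and is where those nanoseconds are won or lost).

```rust
// Toy xorshift RNG so the sketch needs no external crates.
struct XorShift64(u64);

impl XorShift64 {
    fn next_u64(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
    // Random index in 0..n (the small modulo bias is fine for a sketch).
    fn below(&mut self, n: usize) -> usize {
        (self.next_u64() % n as u64) as usize
    }
}

// Cards are 0..52; rank = card % 13, suit = card / 13.
fn rank(card: u8) -> u8 {
    card % 13
}

// Placeholder evaluator: scores a seven-card hand by its highest rank only.
fn naive_score(cards: &[u8]) -> u8 {
    cards.iter().map(|&c| rank(c)).max().unwrap()
}

fn main() {
    let players = 4; // hero plus three opponents
    let games = 1_000_000;
    let mut rng = XorShift64(0x9E37_79B9_7F4A_7C15);
    let mut hero_wins = 0u32;

    for _ in 0..games {
        // Fisher-Yates shuffle of a fresh deck.
        let mut deck: Vec<u8> = (0..52).collect();
        for i in (1..deck.len()).rev() {
            deck.swap(i, rng.below(i + 1));
        }

        // Two hole cards per player, then five community cards.
        let board = &deck[players * 2..players * 2 + 5];
        let mut best = 0u8;
        let mut hero_score = 0u8;
        for p in 0..players {
            let mut seven: Vec<u8> = deck[p * 2..p * 2 + 2].to_vec();
            seven.extend_from_slice(board);
            let score = naive_score(&seven);
            if p == 0 {
                hero_score = score;
            }
            best = best.max(score);
        }
        // Ties count as wins here, purely to keep the sketch short.
        if hero_score == best {
            hero_wins += 1;
        }
    }

    println!("hero win rate ≈ {:.3}", hero_wins as f64 / games as f64);
}
```

In a tuned version, most of the speed typically comes from avoiding per-game allocations and from a lookup- or bit-twiddling-based evaluator; the sketch above deliberately ignores all of that.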

If you work in a specialized field such as finance, you’ll have domain knowledge—like pricing futures and options—that helps you understand and validate AI-generated code. If I showed you a Black-Scholes function, would you know what it’s for or how to use it? (It’s for pricing options.)
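
For the curious, this is roughly what such a function looks like: a sketch of the Black-Scholes price of a European call with no dividends, where s is the spot price, k the strike, r the risk-free rate, sigma the volatility, and t the time to expiry in years. The normal CDF is approximated with the classic Abramowitz and Stegun polynomial, which is plenty accurate for illustration.

```rust
// Standard normal CDF via the Abramowitz & Stegun erf approximation
// (absolute error around 1.5e-7).
fn norm_cdf(x: f64) -> f64 {
    let z = x / std::f64::consts::SQRT_2;
    let sign = if z < 0.0 { -1.0 } else { 1.0 };
    let z = z.abs();
    let t = 1.0 / (1.0 + 0.3275911 * z);
    let poly = t * (0.254829592
        + t * (-0.284496736
            + t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
    let erf = 1.0 - poly * (-z * z).exp();
    0.5 * (1.0 + sign * erf)
}

// Black-Scholes price of a European call option (no dividends).
fn black_scholes_call(s: f64, k: f64, r: f64, sigma: f64, t: f64) -> f64 {
    let d1 = ((s / k).ln() + (r + 0.5 * sigma * sigma) * t) / (sigma * t.sqrt());
    let d2 = d1 - sigma * t.sqrt();
    s * norm_cdf(d1) - k * (-r * t).exp() * norm_cdf(d2)
}

fn main() {
    // A $100 stock, $105 strike, 5% rate, 20% volatility, six months to expiry.
    let price = black_scholes_call(100.0, 105.0, 0.05, 0.20, 0.5);
    println!("call price ≈ {price:.2}");
}
```

Validating code like this is exactly where domain knowledge pays off: the formula is easy to generate, but knowing whether the inputs, units, and assumptions make sense for your market is not.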

For now, humans are still the best at debugging. It’s part jigsaw puzzle, part detective work: reading logs, setting breakpoints, stepping through code, or reviewing recent changes. Debugging is a dark art, and over time you develop a feel for the best approach.

Software is full of bugs. The time-honored metric is one bug per thousand lines of code, which puts that 1.4 million-line application at roughly 1,400 bugs. Even if AI-generated code were 100 times less buggy, it would still harbor around 14 of them. Who will fix them?

AI can also mess up spectacularly. I once asked an AI to make a major change, and it left my code with over a dozen compile errors, deleted several important functions, and broke a working version I hadn’t backed up.

AI can make programming more productive, and I’ve found that about 90% of AI-generated code works correctly. But that other 10% still needs fixing—often because the AI uses deprecated functions or hasn’t been updated for changes in the language.

A non-developer might assemble working code with AI, but they’ll inevitably hit speed bumps. When the AI-generated code fails or needs changes, someone still has to know how to fix it.