The Beginning of Programming as We’ll Know It

In the wake of AI coding assistants like Claude and Codex, which can seemingly perform the equivalent of a day’s work in a matter of minutes, many of us are wondering if the human role of “computer programmer” is coming to an end. Will the AI bots one day do all the programming for us?

Maybe so, but not yet. At this particular moment, human developers are especially valuable, because of the transitional period we’re living through. Just a few years ago, AI essentially could not program at all. In the future, a given AI instance may “program better” than any single human in history. But for now, real programmers still win. Why? Because we are uniquely positioned to harness most of the power of AI while augmenting it with human taste, wisdom, and caution, among other qualities that an AI is thus far incapable of possessing.

There are many examples of stunned programmers who describe how they asked an AI to create an app from scratch and it “just did it.” They wrote a few paragraphs clearly defining the functionality and user interface, and let the AI run with it. A few minutes, hours, or days later, and tada! The app is complete. It runs, it performs the tasks required, and the interface “isn’t even that bad.”

If you interpret these examples to mean that any person can write down any list of requirements along with any user interface specs, and the AI will consistently produce a satisfactory product, then I’d agree programmers are toast. But in my experience that is not what’s happening.

There is a confirmation bias at work here: every developer who has experienced such a remarkable outcome is delighted to share it. It contributes to a mass (human) hallucination that computers really are capable of anything, and really are taking over the world. It’s exciting! But people are less likely to share all the times the AI failed in some ridiculous way: when it produced thousands of lines of inscrutable code, betrayed a complete lack of knowledge in some field, or spiraled into a loop of deeper and deeper “stupidity.” In the same way social networks are filled with photographs that portray a false reality of endlessly joyful vacations, flawless families, and universal good cheer, the AI victory stories we read are not a trustworthy reflection of reality.

Why am I so confident about this? Because I work with AI every day. I patiently hold its hand, and pull it back when it follows the wrong impulses. I correct its mistakes. I rewrite its code. I sometimes speak to it sternly. I play one AI off another, asking ChatGPT to criticize Claude’s work, and vice-versa. In my opinion, the majority of code generated by AI systems is not great, but it’s the great quantity it can create in such a short period of time that makes it so powerful. And that’s why I go to the trouble to work with it at all. Because it’s so good at what it’s good at.

Speaking of goodness, I share the majority opinion that AI is generally good. That is to say that I believe it will prove to have a positive impact on humanity. It will accelerate productivity in virtually every field, lead to insights in science and medicine, and offer accessibility advantages to millions of people. And yes, it will inevitably “take the jobs” of many unsuspecting victims. But as I hinted earlier, the suspecting victims all stand to gain. So be … suspectful? That doesn’t sound right. But be wary.

A mantra I’ve been repeating to myself lately is that an AI’s code cannot be counted as “work” until a human has reviewed it and fixed any problems. If we’re going to talk about computers replacing humans, then the “work” that is done has to meet or exceed the standard that humans have set. We have these standards not just because we’re fussy, but because they lead to less buggy, better-performing, and more maintainable code. They’re not going to take our jobs by writing unreadable functions that are four times as long and defy platform conventions. Once they’ve completely taken over, they can write the code however they like. But for now, they need to abide by human standards.

And so I repeat that mantra, because I don’t want to fall into the same trap that I’m sure many programmers already have: committing AI-generated code without review. And when I say I don’t want to fall into that trap, I mean I don’t want to fall into that trap again. Or at least not too many more times. Or not too often.

The truth is, it’s hard to avoid falling into that trap because of the illusion of perfection that AI so often projects. People used to talk all the time about Steve Jobs’s “reality distortion field.” It seemed that when he asserted some truth about a technology or product, people would eat it up in the moment, perceiving it all to be both inevitable and true. Only later, after taking a breath and pondering what was claimed, would they determine he might have been completely bullshitting. He had a real knack for doing that, and AI has it too.

When I catch myself falling for one of an AI’s bullshit ideas, I have to pull myself out of that reality distortion zone, apply my own wisdom to the task at hand, and set it back on course. Many technologies that seem like magic are, in fact, only useful or practical when a human plays a pivotal role. If, in horse-drawn buggy days, you had loaded a car full of people, pointed it in the direction of a destination, and cued the horse to start moving, there’s a chance they would end up where they wanted to go. In that case, they would rejoice at the miracle. The self-driving car is here! Alas, it turns out that as amazing as horses are, they cannot be relied upon without the attentive management of a human.

The time may come, perhaps even soon, when AI takes over programming completely. But in the meantime, a programmer who embraces AI, yet is skeptical about everything it creates, is better equipped than any comparably skilled human in programming history. I’ve written specifically about programmers, but I think this also applies to writers, artists, musicians, and people in every other profession whose products can be described by any stretch as “creative work.” Anybody who maintains strict control over the final product may find that AI enhances, rather than replaces, their creativity. The computers will come for all of our jobs eventually, but those of us who refuse to embrace the most powerful creative tools we’ve ever been given will be the first to fall.