'AI' Sucks the Joy Out of Programming

| 3 minutes

Talking with my dogs brings me greater pleasure, and I think it helps me get more work done.

I now think that AI/LLMs suck the joy out of programming. I’ve used spicy auto-complete, as well as agents running in my IDE, in my CLI, or on GitHub’s server-side. I’ve been experimenting enough with LLM/AI-driven programming to have an opinion on it. And it kind of sucks.

I started programming when I was around 15 years old, and I’ve had about 28 years of pleasure from doing it. What I like most is when things finally end up working, in the way I want them to work, and that happens after gradually building an understanding of the problem space and of the algorithms used. It’s not just the destination, but the journey of getting there. There is nothing more gratifying than mastering some algorithm or technique, or finally understanding the problem that I need to solve.

What I don’t like about programming is when things fail for stupid reasons, such as a small detail I missed. That nearly always happens either because I’m not paying attention, or because my understanding is incomplete and I copied some solution from somewhere that was a bad solution to begin with. Concurrency and performance problems, for example, are the worst, especially because they tend to be non-deterministic and the documentation isn’t easily accessible, so you piece together bits from the Internet. I hate those parts because I end up in a trial-and-error workflow where I desperately try things out, in what can only be described as “shooting bullets in the dark”.

When I tell an LLM-driven agent to do something, it may take care of the easy parts. Sometimes those parts are boring, such as interacting with some third-party, proprietary API, but they are easy nonetheless. And the hard parts? It fails, and it fails hard. My first impulse is to tell the LLM that it made mistakes, giving it high-level details about what to do next, in a very slow and error-prone feedback loop, and then I watch it fail again. And because it got the easy parts right, I feel compelled to give it another chance, again telling it to fix its crap, and then I watch it fail again, harder this time.

Already very frustrated, I check out the code from the repository and give it a shot myself, because at least I know how to investigate and come up with a reasonable solution. But then I discover that the code is … hard-to-maintain crap. The problem is that it doesn’t look like crap during the initial PR(s); the crap builds up (with crappy comments, too), because I’m too afraid to tell the agent to make the code more maintainable, given the high probability of mistakes.

Even when things end up working, I’m left with the feeling that I could’ve done a better job, and this feeling permeates everything but insignificant scripts.

With an LLM-driven workflow, I get all the bad parts of programming: the stress of not being in control when things aren’t working, desperately shooting bullets in the dark, fixing unmaintainable code. And I get none of the gratification, which comes from the journey itself. We’re replacing that journey, and all the learning, with a dialogue with an inconsistent idiot.
