GenAI and papering over inefficiency

I'm not a fan of the current crop of generative AI products, as the folks I work with are no doubt already sick of hearing. I don't discount them completely, and for something an LLM is very likely to get right, I'm not above asking ChatGPT, especially given how god-awful Google results seem to be these days. But I don't feel good about doing it, mostly for the common reasons:

  • unprecedented-scale theft of the creative output of groups big and small to make established tech titans (or smaller companies bankrolled by a titan) ever more powerful
  • the inability to truly trust anything they (very confidently) tell you
  • the enormous energy use and associated environmental effects

But something else has been bugging me, from a programmer's mindset: many of the "best" uses for LLMs just vastly speed up crapping out something to satisfy an inefficient process, rather than doing anything to actually solve the inefficiency itself.

Coding assistants make it very fast to generate boilerplate. Can't we spend even some of that unfathomably large pool of money and human effort on reducing the need for boilerplate in the first place? Boilerplate is, by definition, code we must write for our programs to work but that contributes nothing to solving the problem at hand; are we really happy to accept that as an unsolvable fact of life for software developers?
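To make that concrete, here's a minimal Python sketch, entirely my own illustration rather than anything from a real codebase: the same trivial value type written by hand, and then using dataclasses, a language feature that removed the need for the boilerplate instead of helping anyone type it faster.

    from dataclasses import dataclass

    # The hand-written version: every method below is boilerplate that must
    # exist for the class to behave sensibly, yet none of it says anything
    # about the problem being solved.
    class PointManual:
        def __init__(self, x: float, y: float):
            self.x = x
            self.y = y

        def __repr__(self) -> str:
            return f"PointManual(x={self.x!r}, y={self.y!r})"

        def __eq__(self, other) -> bool:
            if not isinstance(other, PointManual):
                return NotImplemented
            return (self.x, self.y) == (other.x, other.y)

    # The same type after the language grew a feature (Python 3.7's
    # dataclasses) that eliminates the boilerplate at the source.
    @dataclass
    class Point:
        x: float
        y: float

    assert Point(1, 2) == Point(1, 2)              # __eq__ generated for us
    assert repr(Point(1, 2)) == "Point(x=1, y=2)"  # __repr__ too

Generating the first version faster is a workaround; the second version is the fix.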

AI writing tools will turn your bullet points for a performance review or an email to your boss into professional-sounding prose. Your boss will then probably run it through the same tool to turn it back into bullet points. Why are we bothering to keep up this farce? Especially when the apparatus for doing so consumes as much energy as a small nation.

It all feels like implementing arithmetic functions by running a random number generator until we get something that looks about right. In the programming world especially, I worry we're heading towards a future where we stop bothering to actually solve problems because some model will give us "good enough," enough of the time. That probably sounds great to a lot of folks, but it makes me pretty sad.
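For the literal-minded, that analogy in code: a deliberately absurd Python sketch of my own, where the function name and tolerance are arbitrary choices.

    import random

    # "Compute" a + b by sampling random integers until one looks about
    # right. It returns something plausible most of the time, which is
    # exactly the problem: plausible is not the same as correct.
    def add_ish(a: int, b: int, tolerance: int = 1) -> int:
        while True:
            guess = random.randint(0, a + b + tolerance)
            if abs(guess - (a + b)) <= tolerance:
                return guess

    print(add_ish(2, 2))  # prints 3, 4, or 5; good enough, enough of the time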