So now we have LLMs. We faced all this complexity, and some smart folks thought: “what if we invested a lot of resources into this complex system to manage some of that complexity for us?”
LLMs are going to create art for us. They’re going to write our cover letters and identify cancer cells and manage our schedules for us. They are more complexity, deployed to solve pre-existing complexity.
They are enormously resource intensive, in terms of electricity, water, and human labor—especially the armies of poorly paid workers in the global south doing the actual training of these fancy autocompletes.
And I can’t help but wonder: have we crossed the threshold of declining marginal returns, from stagnation to decline? Is this the straw that breaks the proverbial camel’s back? Because if it’s not this particular investment of vast resources into some new complexity, then surely it will be the next one. It sure feels like we stopped treading water a while ago and are now sinking.
I don’t want this to read like a screed against LLMs, even though I do think they’re over-hyped. This isn’t just about LLMs; if they hadn’t come along, something else would have. As long as we keep solving problems by piling layers of new complexity on top of old, we risk reaching a point where it’s easier and cheaper to collapse, at enormous cost in human life, than to build yet another system for processing and controlling information.
10/10