You might have noticed that contemporary society is very complex.

There are a lot of people in the world—a staggering 8.1 billion people—which means a lot of potential relationships between people and a lot of potential interactions.

One consequence of this complexity is the need for lots and lots of information to manage it all. The world produces, processes, consumes, and stores an unimaginable amount of information every single day.

And, perversely, the production of all that information generates the need for yet more information, more management—and, thus, more complexity.

More information? You probably need more digital storage, which means more production, more serial numbers, more inventories. You might need more archivists, more documentation, more indexing. You can see how this snowballs.

1/10

explodingtopics.com/blog/data-

Exploding Topics · Amount of Data Created Daily (2024): Statistics related to data creation per day and over time.

Let’s consider a fairly mundane example. Amazon is a gigantic network of buyers and sellers. Its operation involves the use of huge quantities of information—about prices and quantities, supply chains and delivery dates, bank accounts and interest rates.

It is, in short, a tool our society created to help manage some of that enormous complexity and make sense of that vast quantity of information (Jeff Bezos’ parasitic rentier ownership aside).

Some enterprising individuals figured out they could exploit Amazon to sell fake products. This poses legitimate problems: some of those knock-offs might genuinely hurt people.

So: society had a lot of complexity and created a new system for making sense of that complexity, but now there’s even *more* complexity as a result. Rather than trusting the system to produce reliable results with minimal effort, customers have to expend additional effort to figure out if they’re actually paying for what they set out to buy.

Since Amazon doesn’t care—they get paid either way—perhaps we’ll see professional Amazon validators emerge, whom the public can hire to evaluate and aggregate Amazon information.

Complexity on top of complexity on top of complexity.

2/10

nytimes.com/wirecutter/blog/am

The New York Times · Welcome to the Era of Fake Products, by Ganda Suthivarakom

Like I said, fairly mundane. But it adds up when you consider that this process is playing out in every single aspect of our lives. It’s everywhere. We are drowning in information, overloaded and overwhelmed by it.

So some people specialize in curating that information for customers. Or aggregating it. Governments around the world spend billions of dollars each year on intelligence apparatuses to collect, process, and convey information in a desperate attempt to make the world more legible to state leaders. And all of these processes create yet more complexity.

It is unsustainable.

Consider bitcoin, the consummate solution in search of a problem. Despite the promises of its boosters, bitcoin and crypto generally have not displaced any pre-existing currencies. They have just massively piled yet more complexity on top of the global economy, virtually all of it bullshit. Bitcoin “mining” now consumes two whole percent of electricity in the US, all to perform math problems that are entirely pointless by design.

None of that touches on the vast architecture that has accreted around cryptocurrency—the forums and the exchanges and the NFTs—like some cancerous tumor.

Complexity on top of complexity on top of complexity.

3/10

arstechnica.com/science/2024/0

Ars Technica · Over 2 percent of the US’s electricity generation now goes to bitcoin: US government tracking the energy implications of booming bitcoin mining in US.

And then someone had the great idea of creating LLMs, which some gullible fools have taken to calling “AI,” which are vast and expensive engines designed to produce plausible-sounding and convincing bullshit at inhumanly fast rates. The result is the flooding of our already crowded information space with enormous quantities of junk, at a rate that we can’t possibly keep up with.

Lawyers have started inserting references to fictitious cases in their filings, generated by LLMs. A couple of them have been caught. How many more have gone undetected? What about other fields that rely on trustworthy information?

I once needed a fairly uncommon vaccine in response to a pretty rare infection. It was rare enough that the doctors at the one nearby hospital that stocked the vaccine had to consult references to determine proper dosage and procedures—they had to look it up. For now, they could reference authoritative and reliable documents. But how much longer will that last, before trustworthy and reliable information is lost in an avalanche of plausible bullshit?

verdict.co.uk/ai-under-fire-in

4/10

Verdict · AI under fire in legal sector after ChatGPT used to cite fictitious case, by Kurt Robson

The more complexity, the more information we have, the more resources we must expend to locate, access, and process the information we actually need. We have to spend additional resources—time, attention, money—to confirm that the product we’re ordering online really is the thing we want and not fake junk that could unexpectedly hurt us. This is nothing new, of course; people have been dealing with misinformation and information overload for as long as there have been people. But we’ve reached the point where we can automate the mass production of bullshit that can easily fool almost everyone.

Will LLMs bring us to the point where life-saving medical information is buried in masses of bullshit, requiring additional resources to parse the information first?

I was inspired down this line of thought by the op-ed below, on the threat posed by LLMs to the study of history. LLMs can generate plausible bullshit versions of old photographs and historical documents. Will we start losing access to the past as a result?

I am, by academic training, an historian. I also rely heavily on historical and archeological information for my understanding of hidden mechanisms of coercion, the space of possible human social forms, and methods of resistance and liberation. If the historical record is flooded with plausible bullshit, we’ll lose so much more than just some sense of the past.

nytimes.com/2024/01/28/opinion

5/10

The New York Times · Opinion | A.I. Is Endangering Our History, by Jacob N. Shapiro

In 1988, anthropologist Joseph Tainter published a book called “The Collapse of Complex Societies.” In it, Tainter argues that societies collapse—experience a sudden, unplanned loss of complexity—not because of what we colloquially assume would be the causes: invasion, famine, plague, climate catastrophe, etc. Rather, Tainter argues that collapse happens when returns on complexity plateau and start to decline, and simplification becomes a cheaper option than yet more complexity.

Tainter posits the following:

1. human societies are problem-solving organizations;

2. sociopolitical systems require energy for their maintenance;

3. increased complexity carries with it increased costs per capita; and

4. investment in sociopolitical complexity as a problem-solving response often reaches a point of declining marginal returns.

In other words, it’s not the threat itself that causes collapse. All societies constantly face challenges and threats. Instead, collapse happens when societies get so complex that they have to invest more than they can afford just in maintaining their current level of complexity, much less additional complexity.

6/10

archive.org/details/TheCollaps

Internet Archive · The Collapse of Complex Societies: Free Download, Borrow, and Streaming

The classic example that Tainter offers is the Roman Empire. Facing a growing population and declining agricultural yields, the Roman Republic (and later Empire) began a process of territorial conquest. It invested enormously in additional complexity—raising armies, equipping and training them, administering conquered territories, etc.

But these conquests increased the resources available to Rome—in terms of human labor, food, metals and minerals, etc. All that complexity paid for itself, for a time. Eventually, though, the costs of additional complexity became higher as distances from the imperial core increased. And the benefits of that additional complexity decreased as the Romans conquered more marginal territories at the extreme periphery of the Mediterranean world.

At first, the returns on those investments grew more slowly, and then stopped growing at all. Eventually, they began to decline. The Romans had to work harder and harder just to stay in the same place, and then eventually it all stopped working entirely.

This is how, Tainter argued, the Roman Empire collapsed under the onslaught of barbarian invasions that were proportionally smaller than attacks Rome had faced centuries earlier when it was much smaller and much less wealthy.

7/10

Does it feel like we’re getting more return for our investment in complexity these days? It doesn’t to me. I don’t think it’s a coincidence that the two candidates for the US presidency have slogans that convey a sense of loss and diminishment. Make America Great Again. Build Back Better. Are these not tacit admissions that things haven’t been going great for a while, that we are—at best—just treading water?

A paper published in Nature last January observed that scientific research productivity is down, papers and patent applications are less novel, and research is less interdisciplinary—despite (because of?) an exponential growth in the volume of research being published. Science is less innovative, less disruptive, and yet we continue to pump out ever more papers about science.

Maybe a connection?

nature.com/articles/s41586-022

8/10

Nature · Papers and patents are becoming less disruptive over time: A decline in disruptive science and technology over time is reported, representing a substantive shift in science and technology, which is attributed in part to the reliance on a narrower set of existing knowledge.

David Graeber made a closely related point when he asked where the flying cars are—i.e., why has the modern world stopped delivering on the promises of technological progress and advancement that we once took for granted?

Graeber notes that, when he was a kid, science fiction imagined future technologies that science eventually delivered—space travel and nuclear-powered ships and lasers and what-not. But, at a certain point, science stopped delivering. We never got our flying cars, our colonies on the moon. We have viagra but not a cure for the common cold.

What we did get was an increasingly sophisticated facsimile of scientific progress in the form of special effects. We’re very good at creating the illusion of a more sophisticated future. And we got a lot of investment in technologies of surveillance and control. But not so much the stuff that was supposed to make our lives easier or better or more fantastical:

“For most of human history, the top speed at which human beings could travel had been around 25 miles per hour. By 1900 it had increased to 100 miles per hour, and for the next seventy years it did seem to be increasing exponentially. By the time Toffler was writing, in 1970, the record for the fastest speed at which any human had traveled stood at roughly 25,000 mph, achieved by the crew of Apollo 10 in 1969, just one year before. At such an exponential rate, it must have seemed reasonable to assume that within a matter of decades, humanity would be exploring other solar systems.

Since 1970, no further increase has occurred. The record for the fastest a human has ever traveled remains with the crew of Apollo 10.”

thebaffler.com/salvos/of-flyin

9/10

The Baffler · Of Flying Cars and the Declining Rate of Profit: A secret question hovers over us, a sense of disappointment, a broken promise we were given as children about what our adult world was supposed to be like. I am referring not to the standard false…

So now we have LLMs. We faced all this complexity, and some smart folks thought “what if we invested a lot of resources into this complex system to manage some of that complexity for us?”

LLMs are going to create art for us. They’re going to write our cover letters and identify cancer cells and manage our schedules for us. They are more complexity to solve pre-existing complexity.

They are enormously resource intensive, in terms of electricity, water, and human labor—especially the armies of poorly paid workers in the global south doing the actual training of these fancy autocompletes.

And I can’t help but wonder: have we crossed the threshold of declining marginal returns, from stagnation to decline? Is this the straw that breaks the proverbial camel’s back? Because if it’s not this particular investment of vast resources into some new complexity, then surely it will be the next one. It sure feels like we stopped treading water a while ago and are now sinking.

I don’t want this to read like a screed against LLMs just because I think they’re over-hyped. It’s not just LLMs; if LLMs hadn’t come along, something else would have. As long as we continue solving problems by adding layers of new complexity on top of old, we risk reaching a point when it’s just easier and cheaper to collapse, at enormous cost in life, than to build yet another system for processing and controlling information.

theverge.com/features/23764584

10/10

The Verge · AI Is a Lot of Work, by Josh Dzieza