
Normal Technology at Scale


The widely read and discussed article “AI as Normal Technology” is a reaction against claims of “superintelligence,” as its headline suggests. I’m substantially in agreement with it. AGI and superintelligence can mean whatever you want—the terms are ill-defined and next to useless. AI is better at most things than most people, but what does that mean in practice if an AI doesn’t have volition? If an AI can’t recognize the existence of a problem that needs a solution, and want to create that solution? The use of AI looks like it’s exploding everywhere, particularly if you’re in the technology industry. But outside of technology, AI adoption isn’t likely to be faster than the adoption of any other new technology. Manufacturing is already heavily automated, and upgrading that automation would require significant investments of money and time. Factories aren’t rebuilt overnight; neither are farms, railways, or construction companies. Adoption is further slowed by the difficulty of getting from a good demo to an application running in production. AI certainly has risks, but those risks have more to do with real harms arising from issues like bias and data quality than with the apocalyptic scenarios that many in the AI community worry about; those scenarios owe more to science fiction than to reality. (If you notice an AI manufacturing paper clips, pull the plug, please.)

Still, there’s one kind of risk that I can’t avoid thinking about, and that the authors of “AI as Normal Technology” only touch on, though they are good on the real, nonimaginary risks: the risks of scale. AI provides the means to do things at volumes and speeds greater than we have ever had before. The ability to operate at scale is a huge advantage, but it’s also a risk all its own. In the past, we rejected qualified female and minority job applicants one at a time; maybe we rejected all of them, but a human still had to be burdened with each individual decision. Now we can reject them en masse, even with supposedly race- and gender-blind applications. In the past, police departments guessed who was likely to commit a crime one at a time, a highly biased practice commonly known as “profiling.”1 Most likely, most of the supposed criminals belong to the same profiled group, and most of those guesses are wrong. Now we can be wrong about entire populations in an instant—and our wrongness is justified because “an AI said so,” a defense that’s even more specious than “I was just obeying orders.”

We have to think about this kind of risk carefully, though, because it’s not just about AI. It depends on other changes that have little to do with AI and everything to do with economics. Back in the early 2000s, Target outed a pregnant teenage girl to her parents by analyzing her purchases, determining that she was likely to be pregnant, and sending advertising circulars targeted at pregnant women to her home. This example is an excellent lens for thinking through the risks. First, Target’s systems determined that the girl was pregnant using automated data analysis. No humans were involved. Data analysis isn’t quite AI, but it’s a very clear precursor (and could easily have been called AI at the time). Second, exposing a single teenage pregnancy is only a small part of a much bigger problem. In the past, a human pharmacist might have noticed a teenager’s purchases and had a quiet word with her parents. That’s certainly an ethical issue, though I don’t intend to write on the ethics of pharmacy. We all know that people make poor decisions and that those decisions affect others. We also have ways to deal with those decisions and their effects, however inadequately. It’s a much bigger issue that Target’s systems have the potential for outing pregnant women at scale—and in an era when abortion is illegal or near-illegal in many states, that’s important. In 2025, it’s unfortunately easy to imagine a state attorney general subpoenaing data from any source, including retail purchases, that might help them identify pregnant women.

We can’t chalk this up to AI, though it’s a factor. We need to account for the disappearance of human pharmacists, working in independent pharmacies where they could get to know their customers. We had the technology to do Target’s data analysis in the 1980s: We had mainframes that could process data at scale, we understood statistics, and we had algorithms. We didn’t have big disk drives, but we had magtape—so many miles of magtape! What we didn’t have was the data; the sales took place at thousands of independent businesses scattered throughout the world. Few of those independent pharmacies survive, at least in the US—in my town, the last one disappeared in 1996. When nationwide chains replaced independent drugstores, the data became consolidated: It was held and analyzed by chains that aggregated data from thousands of retail locations. In 2025, even the chains are consolidating; CVS may end up being the last drugstore standing.

Whatever you may think about the transition from independent druggists to chains, in this context it’s important to understand that what enabled Target to identify pregnancies wasn’t a technological change; it was economics, glibly called “economies of scale.” That economic shift may have been rooted in technology—specifically, the ability to manage supply chains across thousands of retail outlets—but it’s not just about technology. It’s about the ethics of scale. This kind of consolidation took place in nearly every industry, from auto manufacturing to transportation to farming—and, of course, in just about all forms of retail sales. The collapse of small record labels, small publishers, small booksellers, small farms, small anything has everything to do with managing supply chains and distribution. (Distribution is really just supply chains in reverse.) The economics of scale enabled data at scale, not the other way around.

[Image] Douden’s Drugstore (Guilford, CT) on its closing day.2

We can’t think about the ethical use of AI without also thinking about the economics of scale. Indeed, the first generation of “modern” AI—something now condescendingly referred to as “classifying cat and dog photos”—happened because the widespread use of digital cameras enabled photo sharing sites like Flickr, which could be scraped for training data. Digital cameras didn’t penetrate the market because of AI but because they were small, cheap, and convenient and could be integrated into cell phones. They created the data that made AI possible.

Data at scale is the necessary precondition for AI. But AI also accelerates the vicious circle that turns data against the humans who generate it. How do we break out of that circle? Whether AI is normal or apocalyptic technology really isn’t the issue. Whether AI can do things better than individuals isn’t the issue either. AI makes mistakes; humans make mistakes. AI often makes different kinds of mistakes, but that doesn’t seem important. What’s important is that, whether mistaken or not, AI amplifies scale.3 It enables the drowning out of voices that certain groups don’t want to be heard. It enables the swamping of creative spaces with dull sludge (now christened “slop”). It enables mass surveillance, not of a few people limited by human labor but of entire populations.

Once we realize that the problems we face are rooted in economics and scale, not superhuman AI, the question becomes: How do we change the systems in which we work and live in ways that preserve human initiative and human voices? How do we build systems that build in economic incentives for privacy and fairness? We don’t want to resurrect the nosey local druggist, but we prefer harms that are limited in scope to harms at scale. We don’t want to depend on local boutique farms for our vegetables—that’s only a solution for those who can afford to pay a premium—but we don’t want massive corporate farms implementing economies of scale by cutting corners on cleanliness.4 “Big enough to fight regulators in court” is a kind of scale we can do without, along with “penalties are just a cost of doing business.” We can’t deny that AI has a role in scaling risks and abuses, but we also need to realize that the risks we need to fear aren’t the existential risks, the apocalyptic nightmares of science fiction.

The right thing to be afraid of is that individual humans are dwarfed by the scale of modern institutions. The risks are the same human risks and harms we’ve faced all along, usually without addressing them appropriately. Now they’re magnified.

So, let’s end with a provocation. We can certainly imagine AI that makes us 10x better programmers and software developers, though it remains to be seen whether that’s really true. Can we imagine AI that helps us to build better institutions, institutions that work on a human scale? Can we imagine AI that enhances human creativity rather than proliferating slop? To do so, we’ll need to take advantage of things we can do that AI can’t—specifically, the ability to want and the ability to enjoy. AI can certainly play Go, chess, and many other games better than a human, but it can’t want to play chess, nor can it enjoy a good game. Maybe an AI can create art or music (as opposed to just recombining clichés), but I don’t know what it would mean to say that AI enjoys listening to music or looking at paintings. Can it help us be creative? Can AI help us build institutions that foster creativity, frameworks within which we can enjoy being human?

Michael Lopp (aka @Rands) recently wrote:

I think we’re screwed, not because of the power and potential of the tools. It starts with the greed of humans and how their machinations (and success) prey on the ignorant. We’re screwed because these nefarious humans were already wildly successful before AI matured and now we’ve given them even better tools to manufacture hate that leads to helplessness.

Note the similarities to my argument: The problem we face isn’t AI; it’s human and it preexisted AI. But “screwed” isn’t the last word. Rands also talks about being blessed:

I think we’re blessed. We live at a time when the tools we build can empower those who want to create. The barriers to creating have never been lower; all you need is a mindset. Curiosity. How does it work? Where did you come from? What does this mean? What rules does it follow? How does it fail? Who benefits most from this existing? Who benefits least? Why does it feel like magic? What is magic, anyway? It’s an endless set of situationally dependent questions requiring dedicated focus and infectious curiosity.

We’re both screwed and blessed. The important question, then, is how to use AI in ways that are constructive and creative, and how to disable its ability to manufacture hate—an ability recently demonstrated by xAI’s Grok spouting about “white genocide.” It starts with disabusing ourselves of the notion that AI is an apocalyptic technology. It is, ultimately, just another “normal” technology. The best way to disarm a monster is to realize that it isn’t a monster—and that responsibility for the monster inevitably lies with a human, a human coming from a specific complex of beliefs and superstitions.

A critical step in avoiding “screwed” is to act human. Tom Lehrer’s song “The Folk Song Army” says, “We had all the good songs” in the war against Franco, one of the 20th century’s great losing causes. In 1969, during the struggle against the Vietnam War, we also had “all the good songs”—and that struggle eventually succeeded in stopping the war. The protest music of the 1960s came about because of a certain historical moment in which the music industry wasn’t in control; as Frank Zappa said, “These were cigar-chomping old guys who looked at the product that came and said, ‘I don’t know. Who knows what it is. Record it. Stick it out. If it sells, alright.’” The problem with contemporary music in 2025 is that the music industry is very much in control; to become successful, you have to be vetted, be marketable, and fall within a limited range of tastes and opinions. But there are alternatives: Bandcamp may not be as good an alternative as it once was, but it is an alternative. Make music and share it. Use AI to help you make music. Let AI help you be creative; don’t let it replace your creativity. One of the great cultural tragedies of the 20th century was the professionalization of music. In the 19th century, you’d have been embarrassed not to be able to sing, and you’d likely have played an instrument. In the 21st, many people won’t admit that they can sing, and instrumentalists are few. That’s a problem we can address. By building spaces, online or otherwise, around our music, we can do an end run around the music industry, which has always been more about “industry” than “music.” Music has always been a communal activity; it’s time to rebuild those communities at human scale.

Is that just warmed-over 1970s thinking, Birkenstocks and granola and all that? Yes, but there’s also some reality there. It doesn’t minimize or mitigate the risks associated with AI, but it recognizes some things that are important. AIs can’t want to do anything, nor can they enjoy doing anything. They don’t care whether they’re playing Go or deciphering DNA. Humans can want to do things, and we can take joy in what we do. Remembering that will be increasingly important as the spaces we inhabit are increasingly shared with AI. Do what we do best—with the help of AI. AI is not going to go away, but we can make it play our tune.

Being human means building communities around what we do. We need to build new communities that are designed for human participation, communities in which we share the joy in things we love to do. Is it possible to view YouTube as a tool that has enabled many people to share video and, in some cases, even to earn a living from it? And is it possible to view AI as a tool that has helped people produce those videos? I don’t know, but I’m open to the idea. YouTube is subject to what Cory Doctorow calls enshittification, as is enshittification’s poster child, TikTok: Both use AI to monetize attention, and TikTok may have shared data with foreign governments. But it would be unwise to discount the creativity that has come about through YouTube. It would also be unwise to discount the number of people who are earning at least part of their living through it. Can we make a similar argument about Substack, which allows writers to build communities around their work, inverting the paradigm that drove the 20th-century news business by putting the reporter, rather than the institution, at the center? We don’t yet know whether Substack’s subscription model will enable it to resist the forces that have devalued other media; we’ll find out in the coming years. We can certainly argue that services like Mastodon, a decentralized collection of federated services, are a new form of social media that can nurture communities at human scale. (Possibly also Bluesky, though right now Bluesky is decentralized only in theory.) Signal provides secure group messaging, if used properly—and it’s easy to forget how important messaging has been to the development of social media. Anil Dash’s call for an “Internet of Consent,” in which humans get to choose how their data is used, is another step in the right direction.

In the long run, what’s important won’t be the applications. It will be “having the good songs.” It will be creating the protocols that allow us to share those songs safely. We need to build and nurture our own gardens; we need to build new institutions at human scale more than we need to disrupt the existing walled gardens. AI can help with that building, if we let it. As Rands said, the barriers to creativity and curiosity have never been lower.


Footnotes

  1. A study in Connecticut showed that, during traffic stops, members of non-profiled groups were actually more likely to be carrying contraband (i.e., illegal drugs) than members of profiled groups.
  2. Digital image © Guilford Free Library.
  3. Nicholas Carlini’s Machines of Ruthless Efficiency makes a similar argument.
  4. And we have no real guarantee that local farms are any more hygienic.
