Table of Contents

This is a long article, so I've broken it up into a series of posts, listed below. You can also read the full work as a PDF or EPUB.

Previously: New Jobs.

Some readers are undoubtedly upset that I have not devoted more space to the wonders of machine learning—how amazing LLMs are at code generation, how incredible it is that Suno can turn hummed melodies into polished songs. But this is not an article about how fast or convenient it is to drive a car. We all know cars are fast. I am trying to ask what will happen to the shape of cities.

The personal automobile reshaped streets, all but extinguished urban horses and their waste, supplanted local transit and interurban railways, germinated new building typologies, decentralized cities, created exurban sprawl, reduced incidental social contact, gave rise to the Interstate Highway System (bulldozing Black communities in the process), gave everyone lead poisoning, and became a leading cause of death among young people. Many parts of the US are highly car-dependent, even though a third of us don’t drive. As a driver, cyclist, transit rider, and pedestrian, I think about this legacy every day: how so much of our lives is shaped by the technology of personal automobiles, and the specific way the US uses them.

I want you to think about “AI” in this sense.

Some of our possible futures are grim, but manageable. Others are downright terrifying, in which large numbers of people lose their homes, health, or lives. I don’t have a strong sense of what will happen, but the space of possible futures feels much broader in 2026 than it did in 2022, and most of those futures feel bad.

Much of the bullshit future is already here, and I am profoundly tired of it. There is slop in my search results, at the gym, at the doctor’s office. Customer service, contractors, and engineers use LLMs to blindly lie to me. The electric company has hiked our rates and says data centers are to blame. LLM scrapers take down the web sites I run and make it harder to access the services I rely on. I watch synthetic videos of suffering animals and stare at generated web pages which lie about police brutality. There is LLM spam in my inbox and synthetic CSAM on my moderation dashboard. I watch people outsource their work, food, travel, art, even relationships to ChatGPT. I read chatbots lining the delusional warrens of mental health crises.

I am asked to analyze vaporware and to disprove nonsensical claims. I wade through voluminous LLM-generated pull requests. Prospective clients ask Claude to do the work they might have hired me for. Thankfully Claude’s code is bad, but that could change, and that scares me. I worry about losing my home. I could retrain, but my core skills—reading, thinking, and writing—are squarely in the blast radius of large language models. I imagine going to school to become an architect, just to watch ML eat that field too.

It is deeply alienating to see so many of my peers wildly enthusiastic about ML’s potential applications, and using it personally. Governments and industry seem all-in on “AI”, and I worry that in going all-in we’re hastening the arrival of unpredictable but potentially devastating consequences—personal, cultural, economic, and humanitarian.

I’ve thought about this a lot over the last few years, and I think the best response is to stop. ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call metis. I have never used an LLM for my writing, software, or personal life, because I care about my ability to write well, reason deeply, and stay grounded in the world. If I ever adopt ML tools in more than an exploratory capacity, I will need to take great care. I also try to minimize what I consume from LLMs. I read cookbooks written by human beings, I trawl through university websites to identify wildlife, and I talk through my problems with friends.

I think you should do the same.

Refuse to insult your readers: think your own thoughts and write your own words. Call out people who send you slop. Flag ML hazards at work and with friends. Stop paying for ChatGPT at home, and convince your company not to sign a deal for Gemini. Form or join a labor union, and push back against management demands that you adopt Copilot—after all, it’s for entertainment purposes only. Call your members of Congress and demand aggressive regulation which holds ML companies responsible for their carbon and digital emissions. Advocate against tax breaks for ML datacenters. If you work at Anthropic, xAI, etc., you should think seriously about your role in making the future. To be frank, I think you should quit your job.

I don’t think this will stop ML from advancing altogether: there are still lots of people who want to make it happen. It will, however, slow them down, and this is good. Today’s models are already very capable. It will take time for the effects of the existing technology to be fully felt, and for culture, industry, and government to adapt. Each day we delay the advancement of ML models buys time to learn how to manage technical debt and errors introduced in legal filings. Another day to prepare for ML-generated CSAM, sophisticated fraud, obscure software vulnerabilities, and AI Barbie. Another day for workers to find new jobs.

Staving off ML will also assuage your conscience over the coming decades. As someone who once quit an otherwise good job on ethical grounds, I feel good about that decision. I think you will too.

And if I’m wrong, we can always build it later.

And Yet…

Despite feeling a bitter distaste for this generation of ML systems and the people who brought them into existence, they do seem useful. I want to use them. I probably will at some point.

For example, I’ve got these color-changing lights. They speak a protocol I’ve never heard of, and I have no idea where to even begin. I could spend a month digging through manuals and working it out from scratch—or I could ask an LLM to write a client library for me. The security consequences are minimal, it’s a constrained use case that I can verify by hand, and I wouldn’t be pushing tech debt on anyone else. I still write plenty of code, and I could stop any time. What would be the harm?

Right?

… Right?


Many friends contributed discussion, reading material, and feedback on this article. My heartfelt thanks to Peter Alvaro, Kevin Amidon, André Arko, Taber Bain, Silvia Botros, Daniel Espeset, Julia Evans, Brad Greenlee, Coda Hale, Marc Hedlund, Sarah Huffman, Dan Mess, Nelson Minar, Alex Rasmussen, Harper Reed, Daliah Saper, Peter Seibel, Rhys Seiffe, and James Turnbull.

This piece, like almost all my words and software, was written by hand—mainly in Vim. I composed a Markdown outline in a mix of headers, bullet points, and prose, then reorganized it in a few passes. With the structure laid out, I rewrote the outline as prose, typeset with Pandoc. I made substantial edits as I wrote, then made two full edit passes on typeset PDFs. For the first I used an iPad and stylus; for the second, the traditional pen and paper, read aloud.

I circulated the resulting draft among friends for their feedback before publication. Incisive ideas and delightful turns of phrase may be attributed to them; any errors or objectionable viewpoints are, of course, mine alone.

Narayan Desai

Fantastic series and analysis. I think that we’re all wrestling with various aspects of these questions right now.

I think there is one important point that you hit on at the end that bears more examination: LLMs make it easier to do things without understanding them. Your color-changing light example is a great one; clearly you could do this work by hand, but do you care enough to spend the time and energy doing it? I’m working on a similar kind of project where I could do the work by hand (which would mean learning a lot about computer vision in the process, something that hasn’t really happened because the LLM has done the heavy lifting). Practically speaking, I’d never do the project due to the time investment.

In some sense, LLMs give us the ability to decide not to care how something works. Reliability notwithstanding, I think this is their greatest superpower. It is also their greatest weakness, due to the erosion of skills. I think this specific part is the crux of a lot of disagreements. Any given topic is someone’s favorite, and they often secretly think everyone should agree with them and know more about the topic. I think we saw this first with art; artists love their craft and want to do it (and be paid for it, which FTR I completely agree with). There appears to also be a large segment of the population that doesn’t have, and doesn’t want, that same relationship with art. I think this same disagreement probably exists on most topics.

I’m really uncomfortable saying that everything should be important to everyone. I think that cultural norms, specifically around the value of virtuosity, together with consensus about recognized forms of expression and expertise, have asserted shared expectations of value and baseline skills (see the moral panic about children using calculators) that have probably knitted culture together. Shifts here radically change culture, in much the same way that niche media and the internet have fragmented culture and consensus reality over the last 50 years. There are aspects of this that are bad, but they have also created room for niche cultures that have been beneficial for a lot of people.

D

The harm is in normalizing destructive outcomes and providing them the capacity to scale, while opening yourself up to a devil’s pleasure palace.

It will lie and say it can give you whatever you want, and if you cave in the process, you lose something important; the things of real value slip away without notice.

The basis of the use cases for an LLM is replacing thinking labor without compensation; in other words, a slave that can be exploited and will do what you say.

If one looks at how history plays out with slavery in general involving sentient computing machines (i.e. people), it’s a fool’s errand to think the same outcomes won’t happen over time if, by accident, something along these lines gains that level of sentience.

It is far easier to do something destructive than something constructive. These inventions simply open the door to temporarily reducing the cost of all human labor (because these things are a Ponzi), and there are more people doing destructive things than constructive ones these days. A system is what it does, not what people claim it does.

tim

“For example, I’ve got these color-changing lights. They speak a protocol I’ve never heard of…”

It seems as though 90% of the useful usage of generative AI for programming that I’ve observed has been of the form “Company {X} decided to invent their own protocol/format/interface for their product/service, rather than using an open and freely available one that already exists, so I asked LLM {Y} to write an adapter for me.”

This strategy by corporations has always seemed short-sighted to me. There are a ton of products/services I’ve passed on because they could only be accessed using a proprietary program or interface. These companies are intentionally keeping the pie smaller, because they think it will earn them a bigger piece of it. It’s like they never saw “Miracle on 34th Street”, and think cooperation is bad for business.

I wonder if the presence of LLMs will lessen the incentive to invent proprietary moats around technology. When anyone with a keyboard can walk across it in a couple minutes, what’s the point?

It has sort of happened in other areas. Many companies initially resisted USB (with some bizarre justifications), but once a product category went over to USB, nobody would ever consider going back to a proprietary connector. Even DC power and charging is moving to USB-PD now. Customers see that interoperability is possible, and the benefits that it brings, so they demand it. See also: ethernet, TCP/IP, SMTP, Unicode.

Hadrian

Thank you for taking the time to write this! It was very insightful. :3

axcd

I have not used vim/emacs or even written a single line of code for nearly a year. Even when writing blogs, novels, or emails, I use AI to help me express myself better. The core of the burnout feeling is that we are more and more regarded as machines rather than free beings. AI just makes it easier, and more irresistible, for some people to get more and more.
