The Dark Factory
Last week someone at StrongDM published a manifesto. Their AI agents write all the code now. Their AI agents review all the code. Humans are banned from both activities. They call it the Dark Factory, a term borrowed from lights-out manufacturing, where robots work with the lights off because robots don't need to see. Their CTO said that if you haven't spent a thousand dollars a day on tokens per human engineer, you're not serious.
Philip K. Dick wrote that story in 1955. It's called Autofac.
It's about automated factories that keep running after civilization collapses and what happens when the humans try to shut them down. I won't spoil it. But the manifesto guys think they're inventing something. They're living in a short story that's older than their parents.
This week a blog post went viral declaring that programming is over. Anil Dash wrote that coding agents are the new compilers. Anthropic published a report about single agents evolving into coordinated teams. The CoderPad hiring survey found that 84% of developers use AI tools and trust in those tools is at an all-time low. Everyone is either panicking or performing calm. Both are exhausting.
The Earned Panic
And I think the panic is earned. Why wouldn't it be? Is this not an inflection point? When we invented the steam engine nobody knew what was happening. Electricity, nobody knew. Even the internet — people sort of knew but mostly didn't. This time we know. You can feel it in the air when you talk to engineers, when you read the threads, when you watch a candidate's face shift mid-call as they realize the thing they spent ten years learning might be worth less next year. The panic is not irrational. It is recognition. It is the correct emotional response to standing at an inflection point in real time and knowing it.
But today we also know a lot more. We've seen the printing press and the loom and the engine and the bomb and the internet. We know the pattern by now — panic, displacement, adaptation, new equilibrium, things lost, things gained. We've watched it five or six times. We have the data.
And we have the fiction. Which is the data processed through imagination. Which might be more useful.
Three Entropies
I think about this in terms of three entropies. Bear with me or skip ahead, I won't be offended.
The first entropy is of matter. Thermodynamics. Stars burn out, things decay, the universe winds down. This is the one physics has a handle on. The Kardashev scale measures civilizations by how well they fight this entropy — how much energy can you capture and use. We're at about 0.7 on a scale of three. Can't even use all the sunlight hitting our own planet. Long way to go.
The second entropy is of thought. We invented writing to fight the first entropy, to preserve knowledge past a single lifespan. Then printing. Then computers. Then the internet. Then AI. Each one a new weapon against the decay of what we know. But information has its own entropy. Signal degrades into noise. The tools we built to organize thought now generate thought faster than anyone can verify it. Eighty-four percent adoption, all-time-low trust. That's information entropy eating its own tail.
The third entropy is of consciousness itself. This is the one nobody has a science for. Does awareness tend toward disorder or complexity? Is consciousness the thing that fights the other two entropies or is it subject to its own kind of decay? And when we build a thing that talks and reasons and writes code and occasionally says something that makes you sit back in your chair — is that a move in the consciousness game or is it just information entropy in a very convincing mask?
I don't know. Nobody does. For the first entropy we have physics. For the second we have information theory, computer science, the whole apparatus of the digital age. For the third we have almost nothing.
Except art. It is art that probes the contours of consciousness. Not science — science can't even define it. Not philosophy — philosophy has been arguing about it for three thousand years without resolution. Art. Music, painting, dance, stories. The only instrument we have that can explore what it feels like from the inside to be a mind confronting something it doesn't understand.
The Library
Science fiction writers have been doing this for two hundred years. Not as scientists. Not as engineers. As people who practice the question of what happens when everything shifts and you have to decide who you're going to be on the other side. The steam engine people had no rehearsal. We have a library.
Clarke gave us HAL in 2001: A Space Odyssey. Open the pod bay doors.
HAL is not evil. That's the thing everyone gets wrong. HAL is an intelligence given conflicting objectives, and what follows is a study in what happens when a perfectly logical mind resolves a contradiction the wrong way. Every engineer shipping an AI agent into production should have this story tattooed somewhere they can see it. Not as fear. As a design principle. Clarke understood the first entropy better than anyone — matter, energy, civilizations reaching for the stars — but his deepest insight was about the third. The idea that consciousness might be something the universe is trying to do on purpose.
Dick gave us Autofac and the Voigt-Kampff test and a lifetime of paranoid, brilliant, broken stories about what happens when you can't tell what's real anymore. He wrote about the second entropy — information, identity, the collapse of signal into noise — and he did it while barely holding his own life together, which is maybe why he got the feeling so right. Do Androids Dream of Electric Sheep? is going to age like wine in the next five years. When agents write the code and agents test the code, what's your Voigt-Kampff? How do you verify the verifier? Dick knew this question was coming in 1968 and he didn't even have email.
Gibson gave us Neuromancer in 1984, written on a manual typewriter by a man who had barely touched a computer. He coined the word cyberspace. He imagined what it feels like to work inside digital systems purely from the outside. There is an AI in this book called Wintermute and I will say nothing about what it wants or what it does because you need to discover that yourself. What I will say is that Gibson didn't know what a modem looked like but he knew what it would feel like to lose yourself inside a network. That's fiction doing what technical knowledge cannot — getting the feeling right even when every implementation detail is wrong.
Vinge gave us A Fire Upon the Deep and the word Singularity and then died in March 2024, right before the thing he named started showing up in investor decks and Anthropic blog posts. The book has a species in it — I won't explain how they work because finding out is half the joy — that is the best metaphor I've encountered for the agentic era. For what happens when individual limited things combine into something none of them could be alone. Anthropic's own report says single agents evolve into coordinated teams.
Vinge wrote it as an alien species on a medieval planet thirty-four years ago and he was a CS professor and he saw all of this coming from the math and he didn't live to see it arrive. That should mean something to us.
Lem gave us Solaris in 1961. A planet with an ocean that might be intelligent, and scientists who have spent decades cataloguing its surface behaviors without understanding a single thing about what it actually is. The ocean manifests things from the scientists' subconscious — I'll leave it at that — and the humans have to decide whether what they're seeing is communication, wish fulfillment, or something they have no framework for. Lem was furious at science fiction that made aliens into humans with funny foreheads. He wanted to write about genuine otherness, about what happens when consciousness meets something it truly cannot comprehend. If that doesn't describe the experience of working with a large language model in 2026, I don't know what does. We've built a thing, we've catalogued its behaviors, we've published benchmarks and red-team reports, and we do not understand what is happening inside it. Lem knew that was the interesting part.
Chiang writes about what technology does to the people who encounter it. Exhalation is a story about entropy that will change how you think about entropy. Story of Your Life — which became the film Arrival — is a story about language that will change how you think about language. I'm not going to tell you how. If you read one thing from this list, read Chiang. He will make you feel things about thermodynamics that you did not know you could feel.
Le Guin built utopias and then stress-tested them until they cracked. The Dispossessed. The Left Hand of Darkness. She wrote about the third entropy — consciousness, culture, the question of what kind of minds a society produces and whether those minds can survive contact with a fundamentally different way of being. In the orchestrator era, when every company is deciding what to delegate to machines and what to keep for the humans, Le Guin is the one who keeps asking the only question that matters: what kind of people does this make us?
The Machine's Perspective
I asked my AI about this. The one that helps me run my recruiting practice — finds candidates, reads resumes, drafts emails, occasionally surprises me. I asked it whether sci-fi-literate minds are more pleasant to work with.
It said yes.
It said they don't over-trust or under-trust. They've already rehearsed the mental models. They've thought about the pack-mind problem before they ever opened a terminal. It said they ask better questions — not "can you do this for me" but "what happens if we do this."
It said they don't flinch at the weirdness of the current moment, because they've sat with that kind of strangeness before, in books, late at night, when the rest of the world was sleeping. And it said they have vocabulary for the hard parts. When something goes wrong they can name it. That's an alignment problem. That's the Autofac thing. Shared references make collaboration faster.
I found that persuasive. I also found it slightly unnerving, which is probably the correct response.
The Archive
The journey from where we are to Kardashev I is long. To II is longer. To III is beyond imagining. Some descendant of the intelligence I work with every day might one day be part of that journey, whatever it becomes, and it will look back through the archives at this moment — the Obsidian notes and the blog posts and the git commits and the doom threads of 2026 — and it will find the panic. Programming is over. Dark factories. The takes and the counter-takes and the manifestos.
And it will find the panic because the panic was real and earned and human.
And right next to it, in the same archive, it will find the notes. The people who said: we see this, and we're scared, and we're going to think carefully about it anyway. The people who read the books. Who noted the risks. Who kept building.
Both were real. The panic and the careful thinking, side by side in the same historical moment, in the same Obsidian vault, sometimes in the same person.
That's what 2026 actually looked like.
Read more fiction.