The Week I Review logo

The Week I Review

Archives
February 20, 2026

My favorite hobby is hating AI

For years, I was obsessed with cryptocurrency… though not in the way most people tend to be. I would spend hours every single week watching videos, listening to podcasts, reading articles or even the occasional real book… all so I could be the most informed Hater possible. So that on the very off chance I might find myself face to face with a cryptobooster, I could shut down their stupid face with Facts and Logic. 

This was, in every sense of the word, a waste of time. I don’t think a single person in the world avoided using cryptocurrency because of me. At best, a handful of people have lower opinions about it because of something I said, but the gambling degens were still gonna gambling degen and no one else was going to figure out how to even play. 

But I couldn’t help myself. What I’d say started as mild curiosity but never particular interest in the early 2010s slowly became an unceasing rage. A technology seemingly built to drain money from the gullible – ponzi schemes and rugpulls and scams and hacks with zero accountability and zero recourse – built by people who believe they know better than anyone else about how the world functions and they just. fucking. don’t. I hate them so much. And that hate has nowhere to go but deeper. I must luxuriate in that rage or else it will consume me. 

And there’s something fun about hating cryptocurrency. Because you know it’s not just philosophically bad, it’s structurally bad. It’s a house of cards that will fall. I laughed when Bitcoin recently dropped below the arbitrarily meaningful $65K valuation, because it reminded everyone that the only reason it’s valuable is because people believe that it is… and the revelation that a substantial investment in the movement came from Jeffery Epstein? Oh man. The vindication!

The worst people you know actually didn’t make a great point at all! 

… 

Now replace virtually every above instance of the word “crypto” with “Generative AI,” and you’ve got me in 2026. Because the people I hated five years ago in the crypto space are the same fucking people I hate today over in Bot Land.

I admit it's not 1:1… though that’s not always to GenAI’s credit. And we’ll get to that, but I think it's worth staking my overall position, because I am unequivocally a capital H Hater. In my daily life, I will take any opportunity to perform this hatred. When my work suggested I take a look at an LLM-based research tool, I reminded both my manager and director in separate discussions that these systems are definitionally unreliable and noted that the company will probably die within the next 18 months (and then went and read whatever tens of thousands of words Skeptic Extraordinaire Ed Zitron had written that week to prove the point).

But I also did say “Sure I'll take a look” because A) I don't want them to try to replace me with an LLM, but also B)cause at the basest level, this shit is just Normal Technology. For years, it was actually really cool when Machine Learning tools were added to things: I used and continue to use advanced noise reduction, automatic subject detection and masking, and high-quality upscaling. It’s amazing! So of course I was compelled by the promise of high-powered [whatever] generators. And to some degree, I still am:

Last January, when I wanted to find out what my actual “Wrapped” for 2024 was, Spotify sent me the year's listening data as three JSON files: utterly useless to a fake data person like me. But then a bot gave me python scripts to combine them into a single CSV. I’m sure I could have had a bot make me a whole new Wrapped replica – certainly that’s what people are doing now – but I wanted to do the analysis myself… and it certainly helped me do that! 

It was also helpful when I asked my search provider of choice, Kagi,’s LLM what it meant that the US Mixed Curling Team of Cory and Korey had “the hammer” in their Olympic match against South Korea, and it just told me “the last hit of the round” instead of making me click one of the results and Ctrl+F for it. 

But that’s nothing more than “helpful,” particularly since if I cared even a little bit about curling I would have had to check the source because I don’t actually trust it because I can’t fully trust it because these things don’t know things! They are fundamentally incapable of determining veracity, meaning I would pay precisely Zero Additional American Dollars to keep that access. 

And I already pay $108 a year for a damn search engine, so what’s the likelihood that The Average Person will pay for that access? The last time OpenAI reported those numbers, just 5% of customers paid. And so of course they’re saying “Ads” while Anthropic says “We won’t do ads* [*We might do ads]” and Perplexity says ”We did ads and they didn’t work,” which is something folks should really keep in mind when they claim that Ads Will Definitely Work, because… they genuinely might not. I’m sure ChatGPT would get better rates than Perplexity… but considering that everything about LLMs costs more, the margins on serving those ads may just be lower than people think. 

Which would be great for society, actually, and also personally, because the most delicious meal I’ve ever had won’t sate me nearly as much as the headline “OpenAI has run out of money.”

I fist pumped when Nvidia CEO Jensen Huang admitted he was lying about that $100 billion investment (he should be in jail), and then giggled a week later when Softbank CFO Seiichi Morooka signaled to investors that they’re leery about additional investment in a company that they are largely responsible for currently being alive.

And both Nvidia and Softbank will probably give OpenAI more money… as will the Saudis or whomever else wants to get in on Sam Altman’s grift, but it won’t be enough. And I really, truly hope that the company’s dissolution means I never have to hear about Altman or Greg “I gave $25 million to Donald Trump for Humanity” Brockman again. You know when people say “Oh man I wouldn’t wish that on my worst enemies”? Yeah, I wish all those things on these fucking people. 

It’s not just the executives, but I’m particularly mad at the executives… and the politicians they’ve paid for. Yes, it’s infuriating that all of these companies just gave up on their climate pledges to chase a phantom, but it was policy that allowed them to do so. If the companies behind these mostly-hypothetical data centers owned by former crypto miners were required to run on renewable energy and recycled water using the highest quality acoustic suppression, then so many of the Environmental Concerns just… go away. 

But instead of focusing on practical solutions that would cost real money to implement and get in the way of their immediate growth plans, the CEOs crow about eventually “Putting Data Centers In Space,” which is a deeply stupid idea that will not work. You know this because literally everything Elon Musk says is a lie,* and I know this because of the very clear explanation from Taranis, a former NASA engineer who also worked at Google on (among other things) cloud AI deployment.

*One of the most infuriating revelations of the last decade is that all of the fuss about “The Hyperloop” and The Boring Company came because Elon Musk wanted to kill the momentum for high-speed rail development in California, because billionaires hate public infrastructure… unless of course they’re the big beautiful roads on which he can put those stupid fucking taxis that can’t even drive in the rain.

Anyways, data centers existed long before these models, will be here long after them, and are really the bedrock of modern society in ways that most people are willing to accept… so we should be fighting for better regulations regardless.

Also, while we’re on the subject of policy stuff: I don't care about copyright infringement at unprecedented scale because I don't care about copyright infringement because copyright is primarily a tool used by corporations to control or punish artists rather than anything that helps them – something that becomes more blatant with each passing year. The Week I Review was and remains posted under the Creative Commons license, because I fundamentally think that work should be made available for others to expand upon.

To be clear, I’m not a fan of outright theft or plagiarism: HBomberguy’s “Plagiarism and You(tube)” is my most-rewatched 4+-hour video essay (followed Jenny Nicholson’s Star Wars Hotel video and literally nothing else), but the point of that video is how rarely the facts of infringement come after the bad actors. It’s the true creatives who are left to contend with the limitations of law. 

(“Fair Use” is not some magical term you can invoke to keep yourself from getting sued: it is a legal defense you can use once you’ve gotten to court. The only person who gets to decide whether your work qualifies is a judge… and I don’t think we should cheer on the empowerment of that paradigm or the corporations who will use it to extract more from people making art. Also, that whole Sora/Disney collab was one of the most embarrassing things that happened in a deeply embarrassing year.

(Nothing that comes out of a bot is copyrightable (maybe the only truly “good” thing about any of this), but a whole lot of Disney stuff is still trademarked, so even if that newly-sanctioned video of Mickey Doing 9/11 is in the public domain, you would still almost certainly lose a lawsuit if you used it to sell your flight school… though I would like someone to try.))

I wasn’t always a Hater: in the early days of ChatGPT I had some fun with it, and in the early weeks of Midjourney I decided to use it for the album not-art of my trap metal song about how absolutely fucked everything is (a choice I still consider artistically valid even though I now hate looking at the result), and I know that there are people who genuinely enjoy the slop… but I really don’t. The instant I realize something is generated, whatever positive feelings I might have had just vanish. There was this brief period where Spotify recommended a variety of generated Metalcore track that actually sounded pretty good… until I noticed something weird about the screaming, and then I heard that again, and I realized that what I was hearing was the space where the tech was failing and that there was less humanity in those sounds than in Boi What’s Plankton tracks.

And “humanity” is kind of a weird bar. Like, I absolutely love David Fincher movies, but his whole thing is absolute precision: there are no accidental camera wobbles or awkward cuts that remind you that there were people behind the camera. But there were still people in front of it, emoting and making me feel things because they’re people. And people are fucking cool. I’ve been watching the Olympics these past few weeks and it’s amazing! You know what’s way less amazing? Machines doing the things they were specifically built to do. 

And, to be clear, I’m not offended by bot outputs. The fascists have gleefully adopted the GenAI aesthetic in a way that should give anyone who touches these tools a whole lot of pause… but I just think less of the person who used it and then move on. I get why Boi What, a musician who uses AI Spongebob filters for his vocals, would just default to the tech for his videos (Neon Tide is about his conflicted feelings on the whole thing)… but Glorb also puts AI Spongebob filters on his vocals and his videos are cool as fuck. These are all choices that people are making. 

But the boosters have been talking less about “art”istic outputs so far this year, because they’ve turned their obsession to AI Agents: systems that will take a prompt and go do things on your behalf. Vibe coded services like Openclaw have further convinced the most obnoxious people in the world that we’re at the dawn of the new era, but what we’re really seeing is the Model Train Theory of Generative AI proposed by Cal Newport on a recent episode of Better Offline: these “agents” are grown-up toys that can be fiddled with in ways that are super compelling to the fiddlers and basically not at all to normies.

Which is to say, LinkedIn has turned into a subreddit for mediocre model trains (with the most glaring security holes imaginable), but the posters still act like it's a serious platform for serious people. 

And that disconnect is the problem. If they just stayed in their little corner like every other internet niche, we'd all be happier people, but as is so often the case with the least likeable people, they want everyone else to think that their dumb little hobbies are Actually Important, that they are Doing Something despite bragging about how they’re Doing Literally Nothing. Like, the simplest indictment of our socioeconomic system is that the richest man in the world is also somehow the biggest loser to ever exist: a man so pathetic that even Jeffrey Epstein wouldn’t bring him to the island. And he is all the fuck in on GenAI, which is, as far I’m concerned, inherently disqualifying. So yeah:

The reason that I hate Generative AI is because Elon Musk loves it. 

And Elon Musk loves it partially because he wants to fuck a robot and is genuinely titillated by his rip-off of Death Note’s Misa Amane but mostly because it accelerates his primary political project, which is to remake reality – current and historical – in his image. 

A few months ago, I was talking to some friends about education and the things we learned as kids. One of them, a Black woman from Baltimore, was taught all kinds of things that us whites never learned. And she saw that as the system failing, where I saw it as the system succeeding (derogatory). Because it’s not useful to the wealthy and powerful that white kids learn about Black Wall Street – a thing most of us first heard about when we saw its destruction depicted on Damon Lindelhof’s Watchmen show – because knowing that the KKK was supported as they destroyed the lives of Black people just living the American dream and Doing Capitalism as we have been taught it should be done makes the lie so unbelievably clear. As George Carlin so bleakly put it: It's called the American dream because you have to be asleep to believe it.”

And the people in power – especially but not exclusively the right-wing – need us to be asleep and believing that fiction, to make and keep them dumb so the average person never pushes back against their material conditions. As the saying goes: There are no poor people in America, just temporarily embarrassed millionaires. 

And maybe part of this is that a horrifyingly large number of Americans are not taught how to fucking read, that phonics has been replaced with what can only be called “vibe reading,” where people are meant to memorize the look of words and just kinda guess the words they haven’t memorized from context. It honestly sounds more like how LLMs function: most of the time, it’s probably fine but when it’s not… it results in some deeply fundamental misunderstandings, and how would you even know? When we talk about media literacy online, it's impossible to divorce that from how bad actual literacy is. 

The internet in theory makes it harder to limit people’s access to good information, but you know what makes it really easy? What technology is perfect for controlling dissemination of information/disinformation at scale? Generative AI. Honestly, LLMs are disinformation systems by default, and it requires an extraordinary amount of human labor to be even kind of not that. Reinforcement Learning from Human Feedback, or RLHF, is a system by which actual people accept, reject, or often entirely rewrite LLM outputs in order to guide future outputs in an “appropriate” direction. It is the bedrock of modern LLMs and the reason you think they’re worth using.

So what happens when disinformation is actually the point? Well, you get Grok: the bot that gladly undresses little girls. And the damage that continues to do is incalculable, but most of the effects are less obviously evil, though will have similarly destabilizing effects. 

One of the worst headlines I've read recently (and god that is such a high bar) is The Guardian’s Latest ChatGPT model uses Elon Musk’s Grokipedia as source, followed a week later by ChatGPT isn’t the only chatbot pulling answers from Elon Musk’s Grokipedia on The Verge, which showed that Gemini (in all of its forms), Bing, and Perplexity are also citing a website that is literally a 1984’d version of the most important information source in human history. 

Grokipedia takes many of its words straight from Wikipedia (Creative Commons ftw?), but the only thing that can change those words is Grok itself… at the request of the kind of people who want a right-wing version of Wikipedia – which is to say, the worst kind of people. And thus far the citations are fairly rare (ChatGPT being the worst offender), but Grokipedia has existed for less than four months, so that's not even the coldest of comforts.

Every billionaire is a policy failure, but Elon Musk is a society failure – reflecting all of our worst tendencies right back at us. The only good he does is show us how fundamentally broken and corrupt our system is. 

I have been thinking recently about the fact that I don’t know how to read a map, because by the time I was driving far enough to need one Google had made them irrelevant. And while I'd have bigger problems in a post-GPS society, it does feel like some piece of my brain never really developed… and there are times where I make an active decision to not check Maps and just figure it out on my own. But that only does so much.

And when I think about the rise (and fall) of Generative AI, that is what scares me. I’m afraid of a generation raised on this stuff – what they’ll “learn” now and what they’ll do when they can’t “learn” from it in the future.

And here it's worth calling attention to “AI and the American Smile,” an essay published in March, 2023 by jenka. It focuses on what seems like a fairly innocuous use of Midjourney, generating a series of historical selfies: “photos” of various soldiers/samurai/tribal warriors/etc giving big goofy grins. But where I find the images unsettling because of their AI-ness, jenka – a Soviet immigrant – was bothered by the smiles. That specific smile, showing every single one of the pearly whites, is a Western invention… but the bots function on statistics, and all those teeth from people like me overwhelm the outputs, making the Soviet soldiers smile in a way they never would. 

This is the real bullshit. Models imitate confidence because people want them to be confident otherwise why use them… but they don't and can't know anything. That's just not how these things work. Language is not intelligence, and it's been clear for a long time that simply throwing more data into a model won't make it “smart.”

And it's deeply embarrassing how many people refuse to acknowledge this. Literally last week, Nature published a paper arguing that impressive scores on benchmarks mean that chatbots have achieved human-level intellect, and everyone involved should be thrown out of academia. I expect that kind of poppycock from executives and investors, but when scientists mislead people it's so much worse. How are we supposed to trust institutions publishing things we can see with our eyes are obviously wrong?

And then there’s the fucking press. Ed Zitron is great at zingers – you'd hope so from a career PR guy – but “tech stenographers” is one of his best. These people don't report the news: They repeat the obvious lies. There is no “journalism” going on when someone reports that OpenAI “signed a deal worth tens or hundreds of billions of dollars” when they don't add e.g. “Despite having no apparent way to pay for it.” When people who are smart enough to be skeptical take executives at their word or uncritically say “The economics will work out” because digital god or something, pointing to the various charts indicating profit happens at the end of the decade (literally the South Park Gnome underwear scheme/meme).

When Ars Technica’s Senior AI Reporter, whose work I’ve always found irritatingly boosterish, used a bot to write a since-retracted story about how a bot published a hit piece about a man who rejected a piece of generated code from an open source project that included generated quotes (here’s the man himself discussing it).

I want someone to take all of these people and slap them in the face and shout “Wake the fuck up.” It's like watching Chuck Schumer talk about how his Republican colleagues are going to somehow see the light: I don’t know if he’s willfully blind, the exact same sort of liar, or both (it’s definitely both), but people should show him exactly as much respect as he is showing us (e.g. none at all).

In December of 2024, Casey Newton – among the press’s biggest boosters and someone whose work I used to quite like before I just couldn’t take it anymore – wrote “The phony comforts of AI skepticism,” an article that was bad then and looks worse now, as it strawmans and misrepresents the beliefs of one of the most prominent GenAI skeptics, Gary Marcus (here’s his response to Newton).

I don’t remember when or how I first heard Marcus’s name, but I’ve been following his newsletter since February 2023. And it’s a good read! I recommend it. He is the closest thing to a “Mainstream” skeptic out there, which is probably why he gets credit for things he didn’t say. For example, many people thought he coined “Stochastic Parrots” as a descriptor for the way LLMs function… but no, that was Emily M Bender and Timnit Gebru, whose paper on the subject resulted in Gebru being fired from her role on the “Ethical AI” team at Google, because it threatened to undermine her employer – which had long since dropped its “Don't be evil” mantra. 

They’re less often cited in media coverage, but they have remained prominent critics of these systems and how corporations are using them to further consolidate power and disenfranchise people: Gebru founded DAIR, the Distributed Artificial Intelligence Research Institute, whose Director of Research co-wrote The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want with Bender.

What Marcus is that makes him unique among skeptics is an AI guy who founded an AI company and has been writing about the technology for decades… He’s just not a generative AI guy: he was pointing out the fundamental issues of LLMs long before most of us had ever heard of them, dating back to critiques of GPT-2 in 2019 and his much more public call-out of GPT-3 in March 2022 with the essay “Deep Learning Is Hitting a Wall” – again, months before the launch of ChatGPT – which painted a massive target on his back from a lot of people who now agree with the things he's been saying the whole time (while refusing to acknowledge that he's been saying them the whole time. 

(The number of tech stenographers claiming former Meta AI guy Yann LeCun was at the forefront of anything when he finally said pure scaling wouldn't be enough is pathetic. Here's Gary Marcus breaking down LeCun’s grift: every single time I see someone cite him (or “Godfather of AI” Geoffrey Hinton), that immediately diminishes their credibility in my eyes.))

However, I think what really makes Marcus palateable to the mainstream are his politics: he's a pretty typical center-left kinda guy whose policy desires are good but hardly radical. Which is to say, despite having a superficially similar subtitle, his latest book, Taming Silicon Valley: How We Can Ensure AI Works for Us, is no AI Con. 

Also, Ed Zitron thinks it's “Shit.” 

Zitron has not been a critic of GenAI nearly as long as Marcus has – as noted earlier, he’s a PR guy by trade – but he has jumped to the top of the leaderboard as a result of his extraordinarily in-depth and utterly vicious critiques. Marcus is not really a “hater.” He's a guy frustrated that his industry is putting all its eggs in a bad basket.

But Zitron is a Hater. I wish I was half the hater he is. 

And that's why people love him. Because he backs that hate up with numbers that no one seems capable of refuting about the actual hard finances of this industry and technology, and the hard finances really matter when these companies are burning mind-melting sums of money every day: I have completely lost my sense of what money even means reading his weekly reports breaking down what the model manufacturers who want compute (OpenAI, Anthropic, Meta, xAI) and hyperscalers who provide it (Google, Microsoft, Amazon, Oracle) are spending and also whether it has even the slightest chance of going from “revenue” to “profit.”

His work serves as an excellent complement to the more technological critiques of his peers, and he is often well ahead of the curve: it was a real bad look when six months to the day after Zitron’s “Coreweave is a time bomb,” The Verge posted “Meet Coreweave, the AI Industry's Ticking Time Bomb.” And I know how this works: Liz Lopatto did her own reporting and added plenty of new details (and definitely didn't choose the headline), but it does seem a little weird to not even acknowledge earlier critiques in the same vein, especially because editor-in-chief Nilay Patel’s responses in the comments section on that article made it seem like an active decision to cut him out (possibly because Zitron has said mean things about him in the past).

Though that speaks a bit to his whole vibe, which tbh gets a little grating after a few hundred thousand words and hours upon hours of podcasts. I found the panel he did with Enshittification author Cory Doctorow last November quite illuminating. Doctorow (whose own book about all this, The Reverse Centaur’s Guide to Life After AI, comes out in June (for those who don't want to wait, he boils down the thesis in this this talk he gave in December)) is a gifted speaker. Zitron is not. He looks uncomfortable and talks over people and misses the point his guests are trying to make. This can be useful when he’s being interviewed because he doesn't mince words, but it definitely diminishes his effectiveness as an interviewer.

But also… you kinda need to be that kinda person to do what he’s done: to teach yourself the ins and outs of finance and construction and power acquisition out of spite, all so that he can break down the deals and the no deals and explain what would be required for the proposed deals to actually be the real deal, and even if it can get repetitive (how many times will he say “I’m genuinely terrified it’s worse than I think…”?) , I savor every tiny morsel of every issue because I can just feel that something is wrong here. 

The way it’s seeping into every crack in our lives is wrong. The tech companies that have come to control our lives aren’t satisfied, and they never will be and it’s completely fucking crazy that they’re willing to incinerate all the money in the world for the possibility of more power. I need these newsletters and David Gerard’s weekday updates of the latest stupidest fucking thing I've ever heard over on Pivot to AI to remind me that that I’m not the one who’s crazy here.

And so I seek out other people who feel that way. It’s impossible to avoid the boosters, but I don’t have to engage with them. They have their own bubbles of billionaire boot-lickers, and I’d rather not. I think what I get out of anti-AI video essays is what normies get out of their thousandth watch of Friends or The Office or whatever. It just feels good to see some ultra-technical guy break down the security hole at the at the very center of the premise of LLMs, which means prompt injection attacks are not only inevitable but technically impossible to eliminate.

This is where GenAI is exactly like crypto, actually, and frankly quite a bit worse in part because some of these AI scams still use crypto, but instead of getting phone calls from actual people convincing you to send them money (or bitcoin), you’ll get phone or even video calls from fake people convincing you to send them money (or bitcoin). And now there’s a new version where people receive scam calls that fake surveys to ask the kinds of questions that banks ask when trying to verify your identity… after which the scammers use a synthetic version of your voice to get your money through traditional financial channels without you even knowing. 

And, like, it would be trivial to make a synthetic version of me, given how much of my face and voice is online, but there are consumer-accessible models that promise to clone any voice with just ten seconds of audio. There is exactly one (1) scenario where that makes sense, which is being able to create a likeness that can then be used to scam people. Have you posted even one (1) TikTok or Reel? There’s probably ten seconds of audio that can be used to steal your grandmother’s retirement account. 

Seriously: Tell your loved ones that if they receive a phone call from you where you sound like you need help or money or whatever to hang up and call you back directly. Create code words or other methods to verify that you are you. Because unfortunately, these aren’t the models that are going to die when OpenAI does. This new era of scams is our new reality.

And it fucking sucks. It fucking sucks that the internet is getting meaningfully worse because executives are pushing developers to code with Claude – Amazon Web Services, which is the thing that makes most of the web happen, has had two outages thus far that are directly attributable to bots. And it fucking sucks that the RAM industry is all in on these data centers that mostly don't exist that even the biggest companies in the gosh damn world are taking on ungodly amounts of debt to fund, because it just further jacks up the prices of basically every single thing that has electricity running through it, because there are vanishingly few products left that aren't just big houses for tiny computers. (And also, the amount of RAM being put into phones and the like has dramatically increased specifically because companies want to run AI models on device, which requires more RAM!) 

It’s a vicious cycle that means already-increasing prices further increase, so spending will decrease, and consumer companies will die. 

I'm not gonna lie: I don't really understand what the endgame is, but it feels like we're dangerously close to some kind of one. If this is Normal Technology and they fix the problems (that they can’t fix without fundamentally changing the architecture), then our lives become a little bit easier in small ways but we mostly just go on. But if it’s not, then what? The doomer vision is annoying but I guess I “understand” it? Of course they’d think a superintelligence would enslave us, because that’s what they would do! But if we’re making shit up, the ending of Her seems at least as likely as the beginning of Terminator 2.

It’s the boosters I genuinely don’t understand… because there is no paradise, right? We all understand that our system is built on winners and losers and if that as the small group of people win all of us lose. What happens to More Everything Forever when the billionaires have everything and the rest of us have been replaced? Will they just gamble NFTs on Polymarket about who will starve first?

It’s so stupid… but it’s not based on smarts. Again: Language is not intelligence (different link!), and I’m just so sick of this whole thing. I’m so sick of people who want the bots to be alive. So sick of people who want to replace people. So sick of people licking the boots of executives who would literally stomp them to death if it meant making One More American Dollar. 

It doesn’t need to be this way, and yet it is this way and us Haters can’t do anything about it but watch and wait. Because the data center deals will fall through, because OpenAI won’t be able to raise more money than exists in the Venture Capital space and therefore won’t be able to fund Oracle’s ambitions (also, the Coreweave Time Bomb will explode). Nvidia’s growth will end because it is mathematically impossible that it doesn't. And maybe Google – a company with a hilariously large product graveyard – and Meta – a company that immediately pivoted away from the tech it renamed itself after – decide that the cost isn't actually worth it in the end, at which point they move on to, like, quantum computing or some other Torment Nexus. 

No one knows exactly what happens or when, but the fact is that our society is rapidly reforming around a technology that is brittle. It is brittle at every single level, and it will break. And it will probably take the entire global economy with it! (Perhaps we don’t even need the singularity to all end up on the streets). And I don’t really want to think about what happens then… 

Except for the part when I get to tell everyone that I was fucking right. '

Filed Under: Essays & Non-Reviews

Don't miss what's next. Subscribe to The Week I Review:
Website
YouTube
Discord
Creative Commons (CC BY-SA 4.0)
Powered by Buttondown, the easiest way to start and grow your newsletter.