Why AI Isn’t the Problem. And Why They Need You to Think It Is

A data-driven investigation into the most effective corporate liability shield of the 21st century

I’ll admit it. I’ve done it too.

Not maliciously. Not even consciously most of the time. But somewhere in the course of covering corporate extraction patterns across a dozen industries, I started saying “AI” when I meant Amazon. Google. Microsoft. Anthropic. I started writing “AI is displacing workers” when I meant executives chose to deploy automation rather than share productivity gains. The groove was already there. It was the path of least resistance. Mentally, linguistically, culturally.

It took listening to Andrew Yang discuss AI and taxation to hear the rut clearly. He’s a smart, well-intentioned analyst. And he was doing it too. The subject of his sentences about consequence was the software. The humans making the decisions were grammatically absent.

That’s when the instinct kicked in. The one that has guided every investigation I’ve published: when accountability disappears from a sentence, find out who removed it and why.

An acquaintance recently told me she believes AI is the anti-Christ. A researcher I respect talks about it the way previous generations talked about nuclear power: as a force of nature requiring containment. A small business owner told me he’s afraid to use it because of what it does to the water supply.

The gap between those reactions and what AI actually is – technically, legally, functionally – isn’t an accident. It was built. Deliberately, profitably, and with considerable sophistication.

This is an article about who built it, and why.

ACT 1: THEY BUILT A SCAPEGOAT AND CALLED IT A REVOLUTION

There is a reason OpenAI named its product ChatGPT instead of “Large Language Model Interface Version 3.5.” There is a reason these systems are given names. Claude, Gemini, Copilot, Grok. There is a reason the industry press describes AI as “thinking,” “deciding,” “hallucinating,” “believing,” “feeling.” None of those words are technically precise. Every single one of them is strategically useful.

The “anthropomorphization” of artificial intelligence – the deliberate attribution of human characteristics to software – is not a quirk of popular journalism. It is a calculated marketing and legal strategy, and it is working precisely as intended.

Before examining why, it is worth being honest about what AI actually is. Not the corporate version. Not the apocalyptic version. The accurate version. Because the argument that follows depends on it.

What AI Actually Is

Modern AI systems, specifically the large language models that power the tools most people interact with, are genuinely remarkable. That needs to be said plainly, because the accountability argument in this article does not require underselling the technology. In fact it requires the opposite.

These systems can generalize across domains they were never explicitly trained for. They can synthesize connections across bodies of knowledge that would take a human researcher months to traverse. The most capable current models demonstrate measurable reasoning ability, working through multi-step problems with accuracy that improves as the reasoning is made explicit. In controlled studies, they perform at expert level in medicine, law, and mathematics. In 2025, GPT-4.5 was identified as human 73% of the time in a rigorous Turing test, outperforming the actual human participants in the same evaluation. These are not trivial capabilities. They represent something genuinely new.

But here is what the same research establishes with equal clarity, and what the industry’s marketing apparatus works very hard to obscure: these systems have no self-monitoring, no subjective experience, and no internal model of themselves as beings in the world. They have no intentions. They have no desires. They have no agenda. They cannot want anything. They cannot decide anything in any meaningful sense of the word. They generate outputs. Extraordinarily sophisticated outputs based on patterns learned from training data, without any awareness of what those outputs are or why they were produced.
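To make that concrete, here is a deliberately tiny sketch of the mechanic, in Python. It is a toy, not how production systems work – real models replace these word counts with neural networks holding billions of parameters – but the principle is the same: output is sampled from patterns in the training data, and nothing in the loop wants, knows, or decides anything.

```python
import random
from collections import defaultdict

# Toy "training": count which word follows which in a tiny corpus.
corpus = "the tool has no goals the tool has no plans the tool predicts the next word".split()
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

# Toy "generation": repeatedly sample the next word from the learned patterns.
word, output = "the", ["the"]
for _ in range(8):
    options = transitions.get(word)
    if not options:  # no learned continuation; stop
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))  # fluent-looking output, zero intention behind it
```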

The gap between these two realities – genuinely powerful capability on one side, complete absence of consciousness or intention on the other – is not a paradox. It is simply an accurate description of a tool. A very powerful tool, of a kind that has never existed before, with no more moral agency than a hammer and considerably more capability than any hammer ever created outside of Norse mythology.

That distinction – powerful but not conscious, capable but not intentional, useful but not responsible – is the most important thing to understand about AI. And it is precisely the distinction the industry has spent billions of dollars to blur.

The Marketing Function

On the marketing side, the anthropomorphic framing does straightforward commercial work. A product that “thinks” is magical. A product that “understands you” creates emotional attachment and loyalty. A product with a name and a personality generates the kind of press coverage that advertising budgets cannot buy. When a company describes its AI as having “character” or “values,” it is not being philosophical. It is differentiating a product in a crowded market by making software feel like a relationship.

That is a legitimate, if manipulative, marketing choice. The more consequential function is the one that operates in courtrooms, regulatory hearings, and congressional testimony.

The Liability Shield

When an algorithm systematically denies health insurance claims? That gets reported as “AI making decisions.” When a hiring tool screens out qualified candidates based on their zip code? “The AI has a bias problem.” When children are served increasingly extreme content through recommendation systems? “AI is radicalizing kids.” When a financial model crashes a pension fund? “AI risk.”

Notice what disappears from every one of those sentences: the corporation that built the system, the executives who set the optimization targets, the product managers who approved the deployment timeline, the board that signed off on the cost-cutting that eliminated the safety review, the lobbyists who spent years ensuring no regulation would require one.

The AI didn’t decide to prioritize engagement over child safety. A product team did. The AI didn’t choose a discriminatory training dataset. Engineers and their supervisors did. The AI didn’t negotiate a $400 million tax abatement from a state government. A corporate real estate division did, with the enthusiastic cooperation of elected officials who will never appear in a headline that reads “Microsoft takes county’s water supply.”

This misdirection is not accidental. It is architecture. And it is elegant in a way that deserves to be acknowledged as such. Because understanding how it works is the first step toward dismantling it.

The scapegoat is particularly effective because it cannot testify. It cannot be subpoenaed. It does not have a compensation package that can be published or a board seat that can be voted out. It cannot be criminally liable. “AI did this” is a sentence that contains no accountable human being. And that is the entire point.

There is a rich irony at the center of this strategy worth stating plainly: the same executives who describe AI as an autonomous, transformative, almost sentient force simultaneously claim zero responsibility for what it does. They want the awe without the accountability. They want regulators to approach AI with the reverence you’d extend to a natural phenomenon – weather, gravity, tidal forces – rather than as what it actually is: a product, built by people, deployed by people, governed by decisions made by people, for profit.

The capability is real. The consciousness is invented. And the invention is profitable.

The Stakes Made Real: When the Scapegoat Goes to War

The accountability problem this article describes stopped being theoretical on February 24, 2026.

The Pentagon, holding a $200 million contract with AI company Anthropic, demanded the company lift its restrictions on its Claude model for “all lawful use.” At issue were two specific guardrails Anthropic had placed on its technology: prohibitions on AI-controlled weapons and mass domestic surveillance of American citizens.

Anthropic refused. CEO Dario Amodei stated the company “cannot in good conscience” accede to the Pentagon’s request, and that in a narrow but critical set of cases, AI can undermine rather than defend democratic values. He argued that “frontier AI systems are simply not reliable enough to power fully autonomous weapons,” and that AI systems could pose a surveillance risk by piecing together “scattered, individually innocuous data into a comprehensive picture of any person’s life.”

The Pentagon’s response illustrated exactly how the accountability deflection described in this article operates at the highest levels of government. Pentagon chief technology officer Emil Michael called Amodei a “liar” with a “God-complex” when Anthropic rejected the compromise language. President Trump ordered federal agencies to stop using Claude. Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, a classification normally reserved for foreign adversaries like Russia and China. Anthropic subsequently filed suit against the federal government in two jurisdictions, calling the actions “unprecedented and unlawful.” The case is scheduled to be heard on March 24, 2026.

Notice the structure of the Pentagon’s argument throughout these negotiations. The military’s position was not that its intended uses were benign and the guardrails unnecessary. Its position was that legality was the Pentagon’s responsibility. Not the software developer’s. “At some level, you have to trust your military to do the right thing,” the Pentagon’s technology chief said. In other words: remove the restrictions, and accountability moves entirely to the human institution. The AI becomes, once again, a neutral tool bearing no responsibility, and the humans wielding it bear all of it, accountable only to themselves.

Why the Guardrails Are Necessary, and Why That Doesn’t Contradict Anything

At this point a careful reader might raise an objection: if AI is just software – a tool with no agency, no intentions, no moral standing – why do guardrails matter at all? Doesn’t the argument for restrictions implicitly concede that AI is dangerous in its own right?

The answer resolves cleanly, and it is more important than the question.

The guardrails are not needed because AI has agency. They are needed precisely because it doesn’t.

A human soldier in a kill chain has judgment, hesitation, conscience, the capacity for moral injury, and legal accountability. More practically, they have biological limits on the speed at which they can act and the volume of decisions they can process. Those are all friction points, and that friction is not a bug. It is the mechanism through which accountability, proportionality, and the laws of war actually function in practice. It is what makes oversight possible.

AI has none of that friction. It has no hesitation. No conscience. No fatigue. No moral injury. It can process targeting decisions, execute action sequences, and cycle through operational iterations at a speed and volume that exceeds any human’s ability to monitor, review, or override in real time. Not because it is smarter than the humans overseeing it. Because it is faster, and because speed at sufficient scale makes oversight functionally impossible.

The danger is not that AI decides to do something terrible. AI decides nothing. The danger is that it executes what it was instructed to do – at a pace and scale that eliminates the human capacity to intervene, correct, or stop it before consequences become irreversible. In a military context, those consequences can include the deaths of people who should not have died, violations of international law that no individual human consciously chose to commit, and the complete collapse of the accountability chain that both military justice and democratic oversight depend on.

The guardrails are the engineered substitute for the friction that human consciousness and accountability normally provide. They are the structural mechanism that preserves the possibility of human oversight when the tool operates faster than human oversight can naturally function. Remove them and you don’t have a more capable weapon. You have a weapon with no functional check on its operation. Not because the AI rebelled, but because the humans who deployed it deliberately engineered away their own ability to intervene.
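What that engineered friction looks like in software is simple enough to sketch. The following is a hypothetical illustration – the function names are invented for this article, not drawn from any real military or vendor system – of the difference between a gated and an ungated action loop:

```python
def model_propose_actions(n):
    # Stand-in for a model emitting recommendations at machine speed.
    return [f"action-{i}" for i in range(n)]

def human_approves(action):
    # The deliberate bottleneck: a person must review each action.
    return input(f"Execute {action}? [y/N] ").strip().lower() == "y"

def execute(action):
    print(f"executed {action}")

for action in model_propose_actions(1000):
    # Without this gate, all 1,000 actions run in milliseconds.
    # With it, throughput collapses to human reading speed, which is
    # exactly the point: the gate is what makes oversight possible.
    if human_approves(action):
        execute(action)
```

Remove the `if human_approves(action)` check and the loop still runs, only faster. That is, structurally, the entire content of a demand to lift the guardrails.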

This is entirely consistent with everything this article has argued. The tool has no agency. The humans deploying it made a choice. In this case, the choice to demand the removal of the only mechanism that keeps human judgment in the loop. That choice belongs to identifiable people. And so do its consequences.

This is the logical endpoint of the anthropomorphization strategy run in reverse. When AI is praised, it gets the credit. When AI causes harm, it takes the blame. And when humans want to use AI for something the developer considers dangerous? The AI’s restrictions become the obstacle, and the humans demanding their removal insist they alone should judge what is lawful.

The disconnect between policy and practice adds a final, clarifying detail: despite the official blacklisting and supply chain risk designation, Anthropic’s models were reportedly still being used in active military operations. Deployed for intelligence analysis, target selection, and battlefield simulations.

The software was in the kill chain. Nobody was quite sure who was responsible. And a dispute about accountability guardrails was headed to court on March 24, 2026, while the technology continued operating in the field.

This is not a hypothetical about where AI accountability failures might lead. It is a documented, ongoing case study in exactly the dynamic this article describes, and it is happening right now.

It is also worth noting what Anthropic’s stand demonstrates about the broader argument: the accountability problem has real answers. A company choosing to hold an ethical line against a $200 million contract and the full weight of the Department of Defense, at significant financial cost, is proof that deployment decisions are made by people, and that people can choose differently. The guardrails exist because humans built them. The pressure to remove them comes from humans. The decision to hold or fold belongs to humans.

The tool didn’t do any of this. It never does.

They built a scapegoat and called it a revolution. The question is whether we keep letting them.

ACT 2: WHAT IT’S ACTUALLY BEING USED FOR AND WHO PAYS

Let’s set aside what AI could be and look honestly at what it predominantly is right now. At scale, with the bulk of the industry’s $600 billion in projected annual investment flowing through it.

There are genuine, defensible advances happening. Medical imaging AI is detecting cancers radiologists miss. Climate models are running at resolutions that were computationally impossible five years ago. Researchers are compressing drug discovery timelines from decades to years. Accessibility tools are giving people with disabilities capabilities that simply didn’t exist before. These are real. They matter. They are also, by investment volume and deployment scale, a fraction of what AI is actually doing.

The bulk of it is extraction. And the extraction follows a pattern that should be familiar to anyone who has watched the private equity playbook, the credit card processing duopoly, or the consolidation of American agriculture. The technology is new. The tactic is not.

Labor Arbitrage

The largest single category of enterprise AI investment is not innovation. It is headcount reduction. When a corporation announces an AI initiative, the most reliable way to understand it is to look at the workforce announcements that follow. IBM. UPS. Dropbox. Salesforce. Google. The pattern is consistent: AI deployment announced, followed months later by layoffs in the thousands, followed by earnings calls where executives describe “efficiency gains” and “margin improvement” to applauding analysts.

The revenue from that efficiency does not flow to the workers displaced. It flows to shareholders. The workers navigate a job market where the entry-level and mid-level positions that once served as career ladders are disappearing faster than the promised “new AI economy jobs” that were supposed to replace them. Who benefits? The answer is precise: people who own significant equity stakes in large corporations.

Working people pay the taxes. The taxes fund the subsidies. The subsidies build the infrastructure. The infrastructure eliminates the jobs. The productivity gains go to shareholders. The tax burden stays with the workforce.

Surveillance and Behavioral Prediction

The advertising technology industry, which funds the majority of the “free” internet, has found in AI the most powerful behavioral manipulation engine ever built. Systems analyze thousands of data points per user to predict psychological vulnerabilities, purchasing susceptibility, and “political persuadability” with uncomfortable accuracy. This is not a side effect of the technology. It is the product. The business model of surveillance capitalism has been supercharged by AI’s ability to process behavioral data at a scale no human analyst could approach.

Who benefits? Advertisers, platforms, and the political operatives who have discovered that micro-targeted emotional manipulation is substantially more effective than persuasion.

Financial Extraction

High-frequency trading algorithms and algorithmic lending systems now dominate significant portions of financial markets. These systems execute trades in microseconds, arbitraging price differences that exist for milliseconds, capturing value that adds nothing to productive economic activity. Algorithmic credit scoring has been documented to perpetuate and in some cases worsen discrimination in lending. Not because the AI is prejudiced, but because it was trained on historical data generated by a prejudiced lending system, deployed by institutions that did not prioritize catching the problem. Who benefits? Financial institutions and the traders with the fastest infrastructure.
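The mechanism is easy to demonstrate. Here is a simplified sketch with invented numbers – real scoring systems are vastly more complex – showing how a model that merely imitates historical approval decisions reproduces the discrimination baked into them, even when the repayment data points the other way:

```python
from collections import Counter

# Invented historical records: (zip_code, repaid_loan, was_approved).
# The old process approved zip 10001 freely and redlined zip 10456,
# regardless of how applicants actually repaid.
history = [
    ("10001", True,  True), ("10001", False, True),
    ("10001", True,  True), ("10001", False, True),
    ("10456", True,  False), ("10456", True,  False),
    ("10456", False, False), ("10456", True,  False),
]

# "Training": learn the historical approval rate per zip code.
approved, total = Counter(), Counter()
for zip_code, _repaid, was_approved in history:
    total[zip_code] += 1
    approved[zip_code] += was_approved

def model_approves(zip_code):
    # The "model" imitates past decisions, not actual creditworthiness.
    return approved[zip_code] / total[zip_code] > 0.5

print(model_approves("10001"))  # True: historically favored (50% repaid)
print(model_approves("10456"))  # False: redlined, despite 75% repayment
```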

Content Farming and Information Pollution

AI-generated content is flooding the internet at a volume that is genuinely difficult to comprehend. SEO farms are producing millions of articles designed not to inform but to capture search traffic and serve ads. Fake reviews, synthetic social media engagement, AI-generated misinformation: the information ecosystem is being deliberately degraded by actors using AI as a cost-reduction tool for influence operations and revenue extraction. This is making it measurably harder for everyone, including journalists, to find reliable information. Who benefits? The operators of these systems, the advertising networks that monetize the traffic, and political actors who find an information-polluted environment tactically useful.

Infrastructure Concentration, Subsidy Extraction, and the Obsolescence Racket

This is where the extraction pattern reaches its most brazen expression and where the AI branding does its most important work.

Even a mid-sized data center consumes as much water as a small town, while larger ones require up to five million gallons of water every day. Meta’s Hyperion data center in Louisiana is expected to draw more than twice the power of the entire city of New Orleans once completed. Another Meta data center planned in Wyoming will use more electricity than every home in the state combined. In 2024 alone, Amazon, Microsoft, and Google spent more than $200 billion on capital expenditures, most of it to construct data centers.

To fund this buildout, corporations have become extraordinarily sophisticated at extracting public subsidies. The numbers are not marginal. Texas revised its annual data center tax exemption cost projection from $130 million to $1 billion in the space of just 23 months. Illinois went from $10 million in subsidies in 2020 to $370 million in 2024. A 3,600% increase. Virginia’s estimated annual subsidy cost ballooned five-fold within a single fiscal year.

Good Jobs First, the nonpartisan corporate subsidy watchdog, put it plainly: these profitable companies, led by some of the world’s richest men, don’t need public financial support, yet they have extracted billions in economic development tax breaks. Their researchers noted they know of no other form of state spending that is so out of control.

The secrecy surrounding these deals is not incidental. Projects frequently employ nondisclosure agreements, project code names, and subsidiary names that hide the firms behind the new server farms. Virginia, the largest data center market in the world, forgoes nearly $1 billion in state and local tax revenue each year without telling the public which companies benefit or how much.

Now add the dimension that makes the entire enterprise particularly difficult to justify: the infrastructure being built at this scale, with this public subsidy, is likely to be obsolete before it’s even completed. Given the two-to-three-year timeframe for building new enterprise-level infrastructure, a data center’s design can be obsolete by the time construction finishes. Industry analysts are saying this out loud. One infrastructure expert framed it with uncomfortable clarity: “There are just so many moving parts here it’s unclear how you would build a data center and not have it be obsolete when you’re completed.”

Even “state-of-the-art” builds risk obsolescence by completion, given lead times and fast-moving technical requirements. Obsolescent centers mean stranded capital. Billions invested in hardware, real estate, and labor that can’t evolve with the next generation of AI workloads.

This creates a cycle that benefits the builders regardless of the outcome. The subsidies and tax abatements are captured at announcement and during construction. The ribbon-cutting happens. The politicians take their photographs. And whether the facility becomes a thriving AI hub or, in the industry’s own term, a “digital ruin,” the public has already paid. The extraction was never contingent on the technology delivering. It was contingent only on the announcement.

This is not AI policy. It is real estate and tax policy with an AI press release attached.

The Infrastructure Scale Question

There is a technical argument that deserves to be made explicitly, because it directly challenges the premise that this buildout is even necessary at the scale being proposed.

Research shows that local AI models can successfully handle 88.7% of everyday chat and reasoning queries and that hybrid local-cloud systems achieve 40 to 65% reductions in energy, compute, and cost while maintaining answer quality. A peer-reviewed study found that hybrid edge-cloud processing, as opposed to pure cloud, can achieve energy savings of up to 75% and cost reductions exceeding 80%.

The industry’s own infrastructure experts confirm that the bulk of any workload happily sits in 10 to 15 kilowatt racks. In standard data centers or colocation spaces. Ultra-dense AI setups remain in the minority when it comes to actual data center builds.

This raises a question worth asking directly: when you open an AI app on your phone or laptop, where is the computation actually happening? The answer, for most consumer products, is a data center. ChatGPT, Claude, Gemini? The apps are interfaces. Your device is a window. The work happens on their servers, which is where your data goes, where your behavioral patterns are logged, and where subscription revenue is justified.

But this is a product design choice, not a hardware limitation. Tools like Ollama, LM Studio, and Jan already allow capable AI models to run entirely on a personal laptop, with nothing leaving the device. Apple has embedded dedicated AI processing chips into its hardware specifically for on-device inference. Microsoft’s Copilot+ PC initiative mandates a local AI chip as baseline hardware. The local alternative exists, works, and for the vast majority of everyday tasks – drafting, summarizing, answering questions, writing code – performs adequately.

It simply does not serve the data collection and behavioral profiling model that makes hyperscale AI commercially valuable to its owners. The dependency is manufactured. The architecture that routes your queries through a data center instead of processing them on the device already in your pocket was chosen, by identifiable people, for identifiable reasons.
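The claim is easy to verify firsthand. Here is a minimal sketch of a fully local query using Ollama’s documented HTTP API, assuming Ollama is installed and a model has been pulled locally (the model name below is illustrative; any pulled model works). The prompt never leaves the machine:

```python
import json
import urllib.request

# A local query via Ollama's /api/generate endpoint. The server runs
# on your own machine at port 11434; no cloud service is involved.
payload = json.dumps({
    "model": "llama3",  # assumes `ollama pull llama3` was run beforehand
    "prompt": "Explain what a security deposit clause in a lease means.",
    "stream": False,    # return one complete response instead of chunks
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```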

The hyperscale monster is not being built because it is the most efficient way to deliver AI services. Efficiency improvements are real and documented. But efficiency gains may simply enable deployment of larger models and more applications, increasing total consumption even as per-task efficiency improves. Every watt saved is immediately reinvested in expansion. This is not engineering optimization. It is the growth imperative of publicly traded corporations in a capital-rich environment, dressed in the language of technological necessity.

The Environmental Guilt Trap

Here is where the accountability misdirection becomes most precise. And most damaging to ordinary people.

You have almost certainly encountered some version of the claim that asking an AI a question consumes a household’s worth of water or electricity. The numbers circulate on social media, appear in well-meaning environmental coverage, and land on individual users as a weight of personal culpability.

If you’ve been given a specific number, treat it with serious skepticism. Not because the environmental costs aren’t real. They are. But because experts say isolating the environmental impact at the level of a single user or prompt is nearly impossible, and understanding the impacts of specific models is almost entirely dependent on voluntary sustainability disclosures from Big Tech. When OpenAI’s CEO published a per-query water figure, he didn’t clarify key details like what constitutes an “average” query or whether the figure includes the energy and water cost of training the model. The number was precise. The methodology was invisible. That combination should make anyone uncomfortable. And any journalist suspicious.

This is the paper straw problem, applied to the digital economy. In the 1990s, the fossil fuel and plastics industries spent millions promoting personal recycling responsibility precisely to redirect public attention away from industrial production decisions that were generating the actual problem. The mechanism is identical here. Make individuals feel responsible for a systemic problem, keep them focused on their personal behavior, and they stop asking about the corporate and regulatory decisions driving the actual scale.

The person asking AI to help them understand a medical bill is not the environmental crisis. The crisis is a Meta data center consuming ten percent of a Georgia county’s water supply. Water that local residents have no vote over, negotiated by corporate real estate teams with officials who announced the “jobs” at a ribbon cutting and never mentioned the aquifer. And that infrastructure is being built whether that person uses AI or not. The investment is already committed. The extraction of subsidies is already complete. The individual’s choice is irrelevant to the trajectory.

The Hidden Cost: Marginalization from the Tool Itself

There is a casualty in all of this that rarely gets counted: the ordinary person who has been so thoroughly frightened, confused, or cynically misled about what AI is that they never discover what it could actually do for them.

The dominant public conversation about AI oscillates between utopian corporate marketing and apocalyptic fear. Neither serves the average person. What gets lost between those poles is genuinely significant: that this technology, stripped of the corporate extraction layer, represents potentially the most democratizing tool since the internet.

But millions of people have been effectively marginalized from that possibility. They’ve absorbed enough alarm to feel vaguely threatened without absorbing enough practical understanding to use the technology. The result is a population that has reduced a remarkably powerful tool to a sophisticated search engine. A way to quickly find a recipe or rephrase an email. While that same technology is being used by corporations to restructure entire industries, automate entire job categories, and concentrate wealth at a pace the public is not equipped to evaluate or resist.

That gap between what AI is doing to ordinary people at institutional scale and what it could be doing for them at a personal scale is itself a form of extraction. A public that understands the technology well enough to use it effectively is also a public that understands it well enough to demand accountability for it and recognize when it’s being deployed against their interests.

The confusion is not a byproduct. It is an environment. And it was built by the same people currently extracting billions from state treasuries at ribbon-cutting ceremonies while their facilities quietly become tomorrow’s digital ruins.

ACT 3: WHAT IT COULD ACTUALLY DO FOR US

Strip away the corporate extraction layer. Set aside the hyperscale buildout, the surveillance apparatus, the algorithmic labor arbitrage. Look at the technology itself. The actual capability. Then ask a different question: what happens when this tool is in the hands of the people it’s currently being used against?

The answer is not theoretical. It is already happening, unevenly and insufficiently, in the margins. And the margins, it turns out, are exactly where the most important story is.

Closing the Gatekeeping Gap

There is a class of knowledge in America that functions as a private tax on being poor. Legal advice. Medical navigation. Financial planning. Understanding a contract before you sign it. Knowing your rights before an eviction hearing. Knowing what questions to ask a doctor when the appointment is twelve minutes long and the diagnosis is terrifying. Knowing whether the insurance denial you received is legitimate or a standard pressure tactic designed to make you give up.

Wealthy people access this knowledge routinely, through professionals they can afford to retain. Working people either go without, pay rates that represent a significant portion of their income for a single consultation, or make consequential decisions without adequate information. That information asymmetry is not accidental. It is structural, and it has been profitable for a long time.

AI does not replace a lawyer, a doctor, or a financial advisor. But it closes the information gap in ways that are immediate, practical, and – critically – available at two in the morning when the eviction notice arrives and no attorney’s office is open. It can explain what a clause in a lease actually means. It can help someone understand a diagnosis, prepare questions for a specialist, or identify whether a medical bill contains the errors that studies suggest appear in the majority of hospital invoices. It can walk a small business owner through the tax implications of a decision that an accountant would charge several hundred dollars to explain.

This is not magic. It is access. And access, distributed equitably, is one of the most powerful forces for reducing the structural disadvantages that extractive systems depend on to function.

The Small Business and Cooperative Case

The same corporations using AI to eliminate jobs and concentrate market power are making the tool available (often freely or at minimal cost) to the competitors they assume will never figure out how to use it effectively. That assumption is worth challenging.

The independent farmer navigating commodity markets, crop insurance complexities, and USDA program eligibility is operating in a system deliberately designed to favor consolidation. AI doesn’t level that playing field entirely. But it provides the kind of analytical support that corporate agriculture has always had through dedicated teams and that the family operation has always done without. Market analysis. Regulatory navigation. Grant identification. Supply chain alternatives.

The cooperative model – communities pooling resources, sharing infrastructure, distributing both the costs and the benefits – becomes significantly more viable when the coordination and analytical overhead that once required expensive professional services becomes accessible to anyone with an internet connection. For organizations operating on thin margins with limited administrative capacity, that is not a marginal advantage. It can be the difference between sustainability and collapse.

The worker trying to understand their rights under a collective bargaining agreement, the tenant organization preparing for a city council hearing, the community group analyzing whether a proposed development actually delivers the tax benefits being promised. All of them gain something real when the information asymmetry that has always favored the well-resourced side of those negotiations begins to close.

A Personal Example, and a Universal One

This article exists because of cooperative human-AI work. That fact belongs in the argument rather than in a methodology footnote. But journalism is one example. A personal one, chosen because it is directly visible here. The same dynamic applies to nearly any work humans do.

The research across these three acts – the subsidy extraction numbers, the obsolescence data, the efficiency comparisons, the corporate accountability trail – was assembled through a combination of human editorial judgment and AI research capacity working in genuine collaboration. The analysis is human. The conclusions are human. The sourcing decisions, the ethical framework, the investigative instincts that identified which questions mattered. Those are human. The capacity to cross-reference dozens of sources and surface connections across disparate research domains was augmented in ways that would have taken weeks of solo work to replicate.

The same collaborative logic holds for a contractor reviewing building codes, a nurse cross-referencing medication interactions, a small business owner drafting a contract, a community organizer mapping a neighborhood’s zoning history, a first-generation college student navigating a financial aid appeal. In each case the human brings judgment, context, relationships, accountability, and the lived understanding of what actually matters. The tool handles the volume, the retrieval, the cross-referencing, the drafting. Together they accomplish something neither could alone, or something the human could accomplish only with institutional resources that have historically been inaccessible to most people.

The framing of AI as existential threat and the framing of AI as utopian replacement share a common flaw: both erase the human from the equation. One by casting the human as victim of the machine. The other by casting the machine as superior to the human it replaces. Neither is accurate. The more honest and more useful framing is collaborative tool. One that, deployed well, extends what people can do rather than substituting for them. That framing is not naive. It is the version that already exists, right now, in the hands of people who chose to use it that way.

The question is not whether AI affects human work. It is who gets access to it as a partner, and who gets displaced by it as a cost-reduction line item. Those are not technological outcomes. They are policy choices.

The Distributed Model Already Exists

The hyperscale data center narrative presents centralization as inevitable. The only way to deliver AI capability at meaningful scale. The research documented in Act 2 challenges that technically. But the challenge isn’t only technical. It’s already being demonstrated in practice.

Cooperative AI infrastructure – distributed networks where communities share computational resources, where the benefits flow to participants rather than to shareholders, where the architecture is designed for resilience and equity rather than extraction – is not a utopian proposal. It is an engineering choice. One that requires different incentive structures than publicly traded corporations operating under quarterly earnings pressure will naturally make. But a choice, made by people, that produces demonstrably different outcomes.

The contrast matters because it proves the hyperscale model is not a technological necessity. It is a business model. And business models can be changed, regulated, competed against, and in some cases simply replaced by something that works better for more people.

The Question That Remains

The corporations currently building tomorrow’s digital ruins on public subsidies, while their marketing departments name their products after human qualities they don’t possess and their legal teams ensure that no human being can be held responsible for what those products do? Those corporations made choices. Identifiable people made identifiable decisions that produced identifiable outcomes. And identifiable people are continuing to make those decisions today.

The technology did not do this. The technology cannot stop it. But people can. Through regulation, through accountability journalism, through the competitive pressure of cooperative alternatives that prove a different model works, and through the simple refusal to accept a framing in which a piece of software is the villain and the humans holding the controls are nowhere to be found.

AI is a tool. The question? The only question that has ever mattered? Is who it’s for.

Methodology Note

This article was produced through transparent human-AI collaboration. All research, analysis, editorial judgment, sourcing decisions, and conclusions are the work of the author. AI tools were used to accelerate research retrieval, cross-reference sources, and assist with structural drafting. Every factual claim has been independently verified. This methodology is disclosed in keeping with The Open Record’s commitment to transparency in reporting.

Sources document available at theopenrecord.org