THE EXTRACTION PROTOCOL: Why AI Infrastructure Is About Control, Not Compute

How platform consolidation and technical obfuscation are building infrastructure for extraction instead of adaptation.

Part of our “AI Isn’t a Bubble” series.

I. THE PARADOX

Google announced a protocol on January 11, 2026, that lets AI agents buy products on your behalf. Twenty major companies signed on: Shopify, Walmart, Target, Etsy, Wayfair, along with payment giants Mastercard, Visa, PayPal, and Stripe. The technology, called the Universal Commerce Protocol, is so lightweight it runs on retailers’ existing servers. No new infrastructure required.

The next day, Apple (with $3 trillion in market capitalization, the world’s best chip designers, and custom silicon built specifically for AI) announced a multi-year partnership making Google’s Gemini fundamental to Siri and Apple’s AI features. That same afternoon, Alphabet’s market value crossed $4 trillion for the first time. That evening, Defense Secretary Pete Hegseth announced that both Google’s AI and Elon Musk’s Grok would operate inside Pentagon networks.

All within 48 hours. Banks surrendering proprietary AI to Microsoft. Apple abandoning its build-it-yourself philosophy for Google’s platform. The Pentagon integrating private AI companies into defense systems. Markets rewarding consolidation with trillion-dollar valuations.

Something doesn’t add up.

If agentic commerce (AI agents shopping on your behalf) runs on Walmart’s existing servers, why are companies building $1 trillion in new centralized infrastructure? If Anthropic’s Claude can interpret your medical records using queries so lightweight they run on your phone, why does OpenAI’s CFO say “the real bottleneck isn’t money, it’s power”? If distributed AI is technically feasible, why is every major announcement about centralized platforms?

Between January 2 and January 12, 2026—ten days—the architecture of AI deployment snapped into focus: OpenAI launched ChatGPT for Health. Anthropic unveiled Claude for Healthcare with partnerships across major pharmaceutical companies. Google announced its commerce protocol with twenty industry partners. Nvidia committed $1 billion over five years to build AI drug discovery labs with Eli Lilly. Société Générale, one of Europe’s largest banks with $1.5 trillion in assets, announced it would phase out its proprietary AI tools in favor of Microsoft’s Copilot. Apple signed a multi-year pact with Google rather than compete. And the U.S. Department of Defense chose platform integration over sovereign infrastructure.

All within ten days. All pointing to the same pattern.

These announcements reveal something communities negotiating data center deals need to understand: the choice of where AI runs (on centralized platforms or on distributed infrastructure) is a business decision, not a technical requirement. And that business decision determines whether AI serves extraction or adaptation.

The technical capability for distributed AI exists. Research from organizations like Anyway Systems demonstrates that 80-90% of AI inference workloads can run on modest distributed infrastructure, as few as 4-10 computers working together. Edge computing is often faster and cheaper than centralized facilities for most tasks. Small language models (1-7 billion parameters) perform remarkably well for specific applications like shopping assistance, medical record interpretation, and customer service.

But distributed AI doesn’t generate $1 trillion in capital deployment. It doesn’t create platform dependency. It doesn’t enable the extraction of data and transaction fees at scale. Centralization does.

This article examines why companies are building massive centralized AI infrastructure for workloads that don’t technically require it, who profits from that choice, and what communities need to understand before approving deals that subsidize extraction infrastructure instead of supporting adaptation alternatives.


II. CONSOLIDATION, NOT BUBBLE

Earlier in this series, I argued that dismissing AI as speculative mania would leave workers and communities unprepared for real transformation. The investment is real. The technology works. The economic disruption is coming. But I also predicted inevitable consolidation: platform scale would prove insurmountable, smaller players would be absorbed or surrender, and the survivors would be few but dominant.

The week of January 6-12, 2026, proved that thesis correct faster than even I anticipated.

Consider Société Générale’s surrender to Microsoft Copilot. This is one of Europe’s largest banks with $1.5 trillion in assets, thousands of engineers, massive technology budgets, and sophisticated AI capabilities. They built proprietary AI tools. They invested heavily. They had every resource a company could want to compete.

They failed. They’re phasing out their custom AI infrastructure in favor of Microsoft’s platform.

If a $1.5 trillion bank can’t build competitive AI infrastructure, what does that tell us about market consolidation? If an organization with that scale and sophistication surrenders to platform dependency, who can compete?

This isn’t a bubble where everyone loses. It’s consolidation where a few win massively and everyone else becomes dependent. Within that consolidation, we see clear winners:

  • Microsoft: Capturing enterprise with Copilot (even banks surrender)
  • Google: Capturing commerce with Universal Commerce Protocol
  • OpenAI: Capturing consumer market with ChatGPT
  • Anthropic: Capturing specialized enterprise with Claude
  • Nvidia: Controlling infrastructure layer with chips and systems

The consolidation happens in layers. At the platform level, a handful of AI companies control the models everyone uses. At the infrastructure level, a few vendors control the computing power. At the application level, enterprises integrate these platforms instead of building their own. And at each layer, the companies that control the bottleneck can extract value.

This isn’t speculation about future consolidation. This is observable market consolidation happening in real-time. Smaller AI labs raised billions but can’t keep pace with GPT, Claude, and Gemini development cycles. Enterprise custom solutions prove too expensive to maintain. Companies switch to platform APIs. Even open-source alternatives, while free, lack the support infrastructure enterprises require.

The bubble isn’t in AI technology or investment. The bubble is in the belief that anyone can compete with platform scale.

But here’s what makes this consolidation particularly concerning for communities: concentrated market power makes extraction easier, not harder. In competitive markets, multiple platforms compete for users, switching costs stay manageable, and competition limits extraction. In consolidated markets, few platforms control infrastructure, switching becomes nearly impossible, and extraction becomes structural.

The Week Apple Blinked

On January 12, 2026, the same day Alphabet’s market capitalization crossed $4 trillion for the first time, Apple announced a multi-year partnership under which Google’s Gemini will power AI-enhanced Siri features and other Apple AI capabilities.

This represents the most significant platform consolidation signal yet. Apple doesn’t partner. Apple builds. Their entire business strategy revolves around vertical integration and controlling the full technology stack. When they couldn’t make Intel chips fast enough, they built their own. When they needed better displays, they developed their own technology. When they wanted services revenue, they built their own payment system, streaming service, and cloud storage.

Apple has unlimited resources to build competitive AI. They have:

  • Custom silicon (Apple Neural Engine) designed specifically for on-device AI
  • Three trillion dollars in market capitalization
  • The world’s best chip design team
  • Privacy-first brand positioning that on-device AI would reinforce
  • Vertical integration from hardware to services
  • 2+ billion active devices that could run distributed AI

And they chose Google’s centralized platform instead. Not for a single product cycle. Not as a temporary solution while they build alternatives. A multi-year strategic commitment formalized in a joint statement from both companies.

Multi-year means Apple evaluated the cost of building competitive AI in-house and determined that surrendering to Google’s platform was more economically viable. Multi-year means whatever Apple is paying Google, it’s cheaper than competing. Multi-year means the platform advantage isn’t just significant, it’s economically insurmountable even for a $3 trillion company with every possible advantage.

The same day, Google’s market value crossed $4 trillion. The market is rewarding centralization. Companies that control platforms get trillion-dollar valuations. Companies that compete with platforms – even companies as powerful as Apple – eventually surrender.

When Even the Pentagon Surrenders

If the consolidation thesis needed validation beyond Apple’s multi-year surrender, it arrived the same evening from an unexpected source: the United States Department of Defense.

Defense Secretary Pete Hegseth announced that both Google’s generative AI and Elon Musk’s Grok chatbot will operate inside the Pentagon network. Not building Pentagon-owned AI infrastructure with appropriate security controls and data sovereignty. Not developing military-grade AI with proper oversight. Integrating private platforms. Two of them. Into the nation’s most sensitive defense systems.

Consider what this means. The Pentagon has:

  • Effectively unlimited budget for national security priorities
  • The strongest possible incentive to maintain data sovereignty
  • Access to top AI researchers and engineers
  • Ability to classify and protect critical infrastructure
  • Legal authority to mandate domestic development
  • National security imperative to avoid platform dependency
  • Existential reasons to avoid conflicts of interest

And they’re integrating Google and Elon Musk’s xAI into Pentagon networks instead.

The conflicts of interest are remarkable. Elon Musk controls xAI (whose Grok AI now operates inside Pentagon networks), SpaceX (major DoD contracts), Starlink (military communications infrastructure), and Tesla (potential defense applications). His companies hold billions in defense contracts. Now his AI platform gets access to Pentagon systems. Who owns the data that flows through Grok? What happens when Pentagon decisions affect Musk’s other companies? These questions apparently don’t override platform advantages.

If the Department of Defense, with unlimited resources and existential reasons to maintain independence, chooses platform dependency over building sovereign AI infrastructure, what does that tell communities negotiating data center deals? What does that tell banks like Société Générale? What does that tell Apple?

The platform advantages aren’t just economically insurmountable. They’re strategically insurmountable. Even when national security demands independence, even when conflicts of interest are obvious, even when data sovereignty should be non-negotiable – platforms win.

This isn’t just consolidation. This is capitulation at every level. Tech companies, banks, device manufacturers, and now the military itself. All surrendering to private platforms rather than building alternatives.

Communities evaluating data center proposals should understand: If the Pentagon won’t build its own AI infrastructure despite having every possible reason to do so, the platform advantage isn’t just significant. It’s absolute.

The Evidence Cascade

The evidence arrived all in one week:

  • Société Générale couldn’t compete, surrendered to Microsoft
  • Google consolidated commerce through UCP with 20 partners in one announcement
  • Healthcare consolidated around three major platforms (OpenAI, Anthropic, plus enterprise players)
  • Pharmaceutical R&D consolidated through Nvidia/Lilly partnership
  • Infrastructure financing consolidated through Brookfield’s $100 billion program
  • Apple surrendered despite trillion-dollar resources and vertical integration strategy
  • Pentagon surrendered despite national security imperatives and sovereignty concerns

Two things are simultaneously true: AI investment is justified because the technology is real and transformation is happening, and the consolidation of that investment is creating extraction infrastructure at every layer (platform, infrastructure, and capital).

Understanding this distinction between bubble (false value) and consolidation (concentrated value) is essential for communities negotiating data center deals right now. The investment is real. But real investment can still build infrastructure designed for extraction rather than adaptation. And once that infrastructure is in place, extraction becomes the only available model.

II.B – The “AI Isn’t a Bubble” Series Context

This analysis builds on earlier work in “AI Isn’t a Bubble,” a series examining why AI investment is real rather than speculative, but warning that consolidation would prove inevitable and create concentrated platform power.

Previous articles in the series have addressed this as well.

The events of January 2-12, 2026 validated these predictions faster and more completely than anticipated. This article documents that validation in real-time and examines implications for communities negotiating infrastructure deals during platform consolidation.

The series argued: AI is real, investment is justified, but consolidation creates extraction opportunities. Communities unprepared for consolidation will subsidize extraction infrastructure without understanding their alternatives.

The difference between earlier articles and this one: Those were predictive analysis. This is documentation of consolidation happening in real-time, with specific examples of even the most powerful entities (Apple, Pentagon) surrendering to platforms within days of each other.


III. THE COMPLEXITY SHIELD

When developers present data center proposals to township boards, they arrive with thick binders full of technical specifications: gigawatts, teraflops, GPU clusters, cooling requirements, fiber optic capacity. The presentations include complex diagrams, technical jargon, and confident assertions about what AI “requires.” The message is clear: This is too complex for you to understand. Trust us.

This complexity serves a purpose. It prevents scrutiny. It conflates “AI” with “massive data centers.” It presents centralization as technical necessity rather than business choice. It makes distributed alternatives sound experimental or unrealistic when they’re neither.

What they’re hiding is simple: The choice of where to run AI is a business decision, NOT a technical requirement. But if communities understood that, they’d ask harder questions.

The conflation strategy works because most people think AI equals data centers. They imagine that artificial intelligence inherently requires massive facilities with thousands of computers. But AI is software. Software can run in many places. Your phone runs AI when it recognizes your face or transcribes your voice. Your laptop runs AI when it autocompletes your sentences or filters your photos. Small tasks run on small devices. Medium tasks run on local servers. Only the largest training operations or the most complex real-time processing actually requires data center scale.

And most commercial AI applications aren’t complex real-time processing.

When Google announced its shopping agent protocol, it didn’t announce new data centers. Why? Because Walmart already has servers. The technology is lightweight. Product searches are database queries, payment processing happens through existing APIs, and personalization uses rules engines plus modest machine learning models. The same companies announcing they need billions in new infrastructure are simultaneously demonstrating that most AI workloads run on existing systems.

These two claims can’t both be true. Either AI requires massive new infrastructure, or it runs on current systems with modest upgrades. The contradiction reveals the strategy.

Notice they never explain why centralization is technically necessary. They explain how it will work. Power requirements, cooling systems, fiber connections. But never why distributed alternatives won’t work for their specific use case. That’s not an accident. The business model depends on you not asking why.

Think of it this way: When someone sells you car insurance, they explain coverage options, deductibles, and premium costs. They don’t explain internal combustion engines or transmission mechanics. The complexity isn’t relevant to your decision. But with data centers, developers bury you in technical details precisely because those details aren’t relevant to the actual decision you’re making.

The decision you’re making is political and economic: Does your community want to subsidize someone else’s infrastructure for their business model? The technical specifications are designed to make that decision seem inevitable rather than political.

Consider the language they use. They say “AI requires massive computing power.” Technically accurate but misleadingly broad. Training large AI models requires significant computing power. But training happens once, often over weeks or months. Running AI models (what’s called inference) is far less demanding. That’s why your phone can run speech recognition or photo editing AI locally. The distinction matters enormously, but presentations rarely make it clear.

They say “data centers are necessary for the AI revolution.” But necessary for what, exactly? For training foundation models at companies like OpenAI or Anthropic? Sure. For running the shopping assistants and medical record queries they’re announcing? No. Those run on infrastructure that already exists.

The obfuscation extends to power requirements. When OpenAI’s CFO says “the real bottleneck isn’t money, it’s power,” she’s creating urgency around infrastructure spending. But what she doesn’t say is that distributed AI requires far less power than centralized facilities. Individual devices drawing watts versus data centers drawing megawatts. The power bottleneck exists only if you choose centralization.
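The watts-versus-megawatts point can be made with back-of-envelope arithmetic. The figures below are illustrative assumptions for the sake of the comparison, not measured values for any specific facility or device:

```python
# Back-of-envelope comparison: serving lightweight AI queries on edge
# devices versus a centralized data center. All numbers below are
# illustrative assumptions, not measurements of any real facility.

EDGE_DEVICE_WATTS = 10          # assumed draw of a phone or small server under load
DATA_CENTER_MEGAWATTS = 100     # assumed draw of a large AI facility
DATA_CENTER_WATTS = DATA_CENTER_MEGAWATTS * 1_000_000

# How many edge devices fit inside one data center's power budget?
equivalent_devices = DATA_CENTER_WATTS // EDGE_DEVICE_WATTS
print(f"One {DATA_CENTER_MEGAWATTS} MW facility draws as much power as "
      f"{equivalent_devices:,} edge devices at {EDGE_DEVICE_WATTS} W each.")
```

The exact numbers matter less than the ratio: the gap between a device drawing watts and a facility drawing megawatts is six to seven orders of magnitude, which is why the power bottleneck only appears once centralization is assumed.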

This is the complexity shield: make the technology sound so sophisticated that questioning the business model seems naive. Make distributed alternatives sound unrealistic so centralization seems inevitable. Make the infrastructure spending sound technically necessary so communities feel they’re choosing between progress and stagnation.

But communities aren’t choosing between AI and no AI. They’re choosing between extraction infrastructure and adaptation alternatives. And the complexity shield exists specifically to hide that choice.

III.B – THE MANUFACTURED URGENCY

There’s a reason data center proposals are accelerating now. There’s a reason developers demand fast approvals. There’s a reason complexity shields get deployed with such force.

They’re racing against awareness.

Every data center approved before communities understand distributed alternatives is infrastructure locked in for extraction. Every fast-track approval that bypasses careful evaluation is a deal communities can’t undo. Every “approve now or we’ll go elsewhere” ultimatum is pressure to commit before you understand what you’re committing to.

The urgency isn’t technical. It’s strategic.

The Window They’re Trying to Close:

Right now, most communities don’t know distributed AI infrastructure is viable. They don’t know that 80-90% of AI workloads could run on smaller, local systems. They don’t know that “grid-independent” means “avoiding oversight.” They don’t understand that platform consolidation makes even the Pentagon surrender.

Developers are pushing approvals through before this knowledge spreads.

Once enough centralized infrastructure is built, once enough communities have committed tax incentives and subsidies, once enough platforms have established dependency, the conversation shifts. It’s no longer “should we build centralized extraction infrastructure?” It becomes “everyone else already did, we have to compete.”

That’s the mechanism. Build fast enough, in enough places, and you create inevitability. The question stops being “what kind of infrastructure?” and becomes “how do we get our share?”

The Misdirected Outrage:

Community resistance to data centers exists. People organize. They show up to planning commission meetings. They express concerns about water usage, power consumption, environmental impact, traffic, property values.

And developers are perfectly comfortable with that outrage – because it’s focused on the wrong question.

The debate becomes: “Data center or no data center?” That’s a losing frame for communities. AI transformation is real. Opposing all infrastructure looks like opposing progress. Developers can position themselves as bringing economic development while opponents look obstructionist.

But the real question isn’t “data center or no data center?” The real question is: “Centralized extraction infrastructure or distributed adaptation infrastructure?”

That’s the question developers don’t want asked. Because once communities understand they have alternatives? That distributed infrastructure could serve the same technical needs while retaining local control and value? The negotiating position shifts entirely.

The outrage about environmental impact and water usage is legitimate. But it’s insufficient. Even if communities win concessions on those issues, they’re still approving extraction infrastructure. They’re still subsidizing platform dependency. They’re still accepting architectural choices that concentrate risk locally while extracting value to distant platforms.

Why Speed Matters to Them:

The faster developers get approvals, the less time communities have to:

  • Research distributed alternatives
  • Consult with independent technical experts
  • Coordinate with other communities facing similar decisions
  • Understand that Société Générale, Apple, and the Pentagon all surrendered to platforms
  • Realize that “technical necessity” claims are actually business model preferences

They are trying to overwhelm communities because, with every month that passes, more information becomes available. Academic research on distributed AI. Examples of platform consolidation. Evidence that infrastructure projections exceed demonstrated need. Communities sharing experiences and strategies.

Developers need approvals before that information reaches critical mass. Before communities realize they’re negotiating from a position of strength, not desperation. Before distributed alternatives become part of the standard conversation.

The Coordination Is Visible:

January 2-12, 2026 saw massive platform announcements across sectors. This created media momentum, market validation (Google hitting $4T), and an inevitability narrative. Communities negotiating deals during this period face “everyone’s doing it” pressure.

That’s not accidental timing. That’s strategic positioning. Build the narrative that AI requires centralized infrastructure. Get major announcements coordinated to reinforce that narrative. Create urgency through market momentum. Push approvals through before communities can evaluate alternatives.

The speed isn’t about technical requirements. It’s about closing the window for informed decisions.

What This Means for Communities:

When developers say “we need fast approval,” they’re really saying “we need approval before you understand your alternatives.”

When they say “everyone else is doing it,” they’re really saying “don’t be the first to ask hard questions.”

When they deploy complexity shields and technical jargon, they’re really saying “don’t examine whether distributed infrastructure would work better.”

The urgency is artificial. The inevitability is manufactured. The technical necessity is overstated.

Communities that slow down, ask hard questions, demand alternatives analysis, and coordinate with other communities facing similar decisions can change the trajectory. But only if they recognize that the speed itself is part of the extraction strategy.

The question isn’t whether AI infrastructure gets built. The question is whether communities approve extraction infrastructure before understanding they had adaptation alternatives.


IV. THE TECHNICAL REALITY, TRANSLATED

Understanding what AI actually requires—without the complexity shield—starts with knowing what AI actually is.

AI is software that recognizes patterns and makes predictions based on those patterns. When you ask Google “What’s the weather?”, it doesn’t fire up a massive data center. It queries a database and sends you an answer. That takes milliseconds and minimal computing power. When Netflix recommends a show, it’s comparing your viewing history against patterns from other users. Database lookups and simple math. When your phone transcribes your speech, it’s running a small model locally that converts audio patterns into text.

The “agentic AI” that companies are hyping (AI agents that shop for you, summarize your medical records, or answer customer service questions) works the same way. These agents are doing:

  • Product searches: Database lookups comparing your query against product catalogs
  • Price comparisons: Simple arithmetic across retailer databases
  • Payment processing: API calls to existing payment systems (Visa, PayPal, Stripe)
  • Medical record interpretation: Text parsing and terminology translation
  • Customer service triage: Pattern matching against common question databases

None of this requires a $20 billion facility. None of this needs 1.8 gigawatts of power. These are lightweight operations that run efficiently on distributed infrastructure.
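The operations in the list above can be sketched in a few lines of ordinary application code. This is a minimal illustration under stated assumptions: the catalog is a made-up dictionary and the payment call is a stub, standing in for the retailer databases and existing payment APIs the text describes. None of the names correspond to any real protocol or library.

```python
# A minimal sketch of an "agentic" shopping flow: a catalog lookup, a price
# comparison, and a payment call. Every name here is hypothetical; a real
# system would query retailer databases and call existing payment APIs.

CATALOG = {  # stand-in for retailers' product databases
    "running shoes": [
        {"retailer": "StoreA", "price": 89.99},
        {"retailer": "StoreB", "price": 74.50},
        {"retailer": "StoreC", "price": 92.00},
    ],
}

def product_search(query: str) -> list[dict]:
    """Product search: a database lookup (here, a dict lookup)."""
    return CATALOG.get(query, [])

def best_offer(offers: list[dict]) -> dict:
    """Price comparison: simple arithmetic across retailer listings."""
    return min(offers, key=lambda o: o["price"])

def process_payment(offer: dict) -> str:
    """Payment processing: in production, an API call to an existing
    payment system; stubbed here."""
    return f"charged ${offer['price']:.2f} via {offer['retailer']}"

offers = product_search("running shoes")
receipt = process_payment(best_offer(offers))
print(receipt)
```

Nothing in this flow is computationally heavy: the search is a lookup, the comparison is a `min()`, and the payment is a network call to infrastructure that already exists.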

Think of it like music streaming. You used to need a large stereo system with powerful amplifiers. Now your phone handles it. The technology got smaller and more efficient, not larger and more centralized. The same thing is happening with AI. Small devices can increasingly do what used to require massive computers.

The difference is in training versus deployment. Training a large language model, teaching it patterns from billions of text examples, requires significant computing power. That’s like pressing a vinyl record master. It takes specialized equipment and substantial resources. But once trained, running that model (inference) is like playing the music. Your phone can do it.

The research backs this up. Anyway Systems has documented that 80-90% of AI inference workloads run effectively on modest distributed infrastructure. Edge computing (processing on devices or local servers rather than distant data centers) is often faster because data doesn’t travel as far, and cheaper because you’re not paying for data center overhead.

Small language models (in the 1-7 billion parameter range) perform remarkably well for specific tasks. A medical records assistant doesn’t need to know everything about everything, it needs to know medical terminology and how to parse health data. A shopping agent doesn’t need to answer philosophical questions, it needs to search products and process transactions. These specialized models run on individual servers or even high-end phones.
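Why models in this size range fit on consumer hardware follows from simple arithmetic on parameter counts. The bytes-per-parameter figures below are the standard rule of thumb for common numeric precisions; the function is a sketch and ignores activation memory and runtime overhead:

```python
# Rough memory footprint of small language models at different precisions.
# Bytes-per-parameter values are the usual rule of thumb; this estimates
# weight storage only, ignoring activation memory and runtime overhead.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def model_size_gb(params_billions: float, precision: str) -> float:
    """Approximate weight storage in gigabytes."""
    return params_billions * 1e9 * BYTES_PER_PARAM[precision] / 1e9

for params in (1, 7):
    for precision in ("fp16", "int4"):
        size = model_size_gb(params, precision)
        print(f"{params}B parameters @ {precision}: ~{size:.1f} GB")
```

By this estimate, a 7-billion-parameter model quantized to 4-bit precision needs roughly 3.5 GB for its weights, within reach of a modern phone or any laptop, which is the arithmetic behind the claim that specialized models run on individual servers or high-end handsets.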

So why build massive centralized infrastructure?

Here’s the honest answer: because centralization enables control and extraction. If AI runs on your device or local servers, companies can’t easily collect your data. They can’t charge transaction fees on every purchase. They can’t analyze your behavior patterns to train better models. They can’t build platform businesses that make everyone else dependent.

But if AI runs on their infrastructure? Their data centers, their platforms, their protocols? They control the system. They see the transactions. They collect the data. They set the terms. They extract the value.

This is why Google’s Universal Commerce Protocol runs on existing Walmart servers for the demo but requires integration with Google’s systems for actual deployment. This is why Anthropic’s Claude can interpret medical records with lightweight queries but those queries flow through Anthropic’s platform. This is why Bloom Energy can sell $5 billion in fuel cells despite AI workloads not actually requiring that much centralized power.

The technical capability for distributed AI exists. Companies are choosing centralization for business reasons, then using technical complexity to make that choice seem inevitable.

Understanding this distinction, between technical necessity and business model requirements, is essential. Because communities negotiating data center deals aren’t being asked to enable AI technology. They’re being asked to subsidize someone else’s extraction infrastructure.

A bakery analogy makes the choice clear. Your community needs bread. You have two options:

Option A – Distributed (local bakeries): Small bakeries in multiple neighborhoods. Each bakes fresh daily. If one closes, others remain. Community members can open competing bakeries. Value stays local.

Option B – Centralized (factory): One giant factory 50 miles away. All bread comes from there. If it closes, you’re stuck. You can’t compete. Scale requirements are too high. Profits leave the community.

Both options produce bread. But Option B means dependency, extraction, and risk concentration. Data centers are the same choice dressed up in technical language.

The question isn’t whether AI will transform commerce, healthcare, and customer service. It will. The question is whether that transformation happens through distributed infrastructure that communities can adapt to local needs, or centralized platforms designed for extraction. And that question is being answered right now, in planning commission meetings and township halls, by people who deserve clear information instead of complexity shields.


V. THE EXTRACTION ECONOMY

Understanding extraction requires understanding timing. Between January 2 and January 12, 2026—ten days—the architecture of AI extraction became unmistakably clear.

Week One (January 2-5):

  • OpenAI launched ChatGPT for Health, positioning their platform as an intermediary between patients and medical information
  • Infrastructure announcements continued as data center proposals and power deals moved through approval processes

Week Two (January 6-12):

  • Wednesday: Bloom Energy stock surged on news of AI power contracts worth billions
  • Friday: Anthropic announced Claude for Healthcare plus partnerships with AstraZeneca, Sanofi, Genmab, Banner Health, and others
  • Sunday: Google launched Universal Commerce Protocol with 20+ partners including Shopify, Walmart, Target, Mastercard, Visa, and PayPal
  • Monday: Nvidia committed $1 billion over five years to build AI drug discovery labs with pharmaceutical giant Eli Lilly
  • Monday: Société Générale announced it would phase out proprietary AI tools for Microsoft’s Copilot
  • Monday: Apple announced multi-year partnership with Google for Gemini
  • Monday: Alphabet crossed $4 trillion market capitalization
  • Monday: Pentagon announced Google and xAI integration into defense networks

This isn’t coincidence. This is coordinated infrastructure capture across multiple sectors. Commerce, healthcare, enterprise software, pharmaceuticals, defense. All within ten days. The extraction protocol isn’t coming. It’s here.

The extraction happens in three distinct layers, each reinforcing the others.

Layer One: Platform Extraction

At the platform layer, AI companies position themselves as intermediaries between users and services. They make themselves necessary for accessing your own data, completing your own transactions, understanding your own medical records.

In Commerce: Google’s Universal Commerce Protocol sounds democratic. An open standard, endorsed by twenty companies, compatible with existing industry protocols. But examine how it actually works. Merchants must integrate Google’s system. Shopping data flows through Google’s infrastructure. When you ask a shopping question, Google decides which products to show you, in what order, with which merchants featured.

The merchant remains the “merchant of record” (legally responsible for the sale) but loses the relationship with the customer. As Richard Crone, CEO of Crone Consulting, explained to American Banker: “The other side of this is that if the checkout goes to Gemini, the merchant loses the last touch point.” That last touch point (when a customer is ready to buy) accounts for 33% to 76% of upsell and cross-sell opportunities. Google captures the relationship. The merchant becomes a fulfillment center.

OpenAI is already extracting transaction fees. When you buy something through ChatGPT, OpenAI takes a percentage. Not because they manufacture products or provide warehousing or handle logistics. Because they control the interface. Because they positioned their AI as the intermediary. That’s pure extraction. Value taken without value added.

In Healthcare: Anthropic’s Claude for Healthcare connects to HealthEx, which aggregates medical records from over 50,000 health systems. Users connect their patient portal logins to HealthEx. HealthEx unifies records across providers. When users ask Claude health questions, the platform decides which categories of information to retrieve: medications, allergies, lab reports, doctor notes.

The privacy protections sound reassuring: user consent required, data never used for model training, users can revoke access. But examine the structure. Your medical records are data you created, about your own body, for your own healthcare. Yet you need an AI intermediary to understand them. And that intermediary decides which data to access based on its interpretation of your question.

This week, while Anthropic announced consumer health features, they also announced partnerships with AstraZeneca, Sanofi, and Genmab for drug discovery. Nvidia simultaneously committed $1 billion over five years with Eli Lilly for AI-powered drug labs. The platform layer (AI models), the infrastructure layer (Nvidia chips), and the enterprise layer (pharmaceutical companies) are converging on the same extraction opportunity: healthcare data and drug development.

The patient provides the data. The AI intermediary accesses it. The pharmaceutical company benefits from the analysis. Where in that chain does the patient capture value?

In Enterprise Software: Société Générale’s surrender to Microsoft Copilot reveals how platform extraction works at enterprise scale. Even sophisticated organizations with massive resources can’t compete with platform advantages. Building custom AI requires:

  • Large engineering teams (expensive)
  • Continuous model updates (expensive)
  • Computing infrastructure (expensive)
  • Support systems (expensive)
  • Integration maintenance (expensive)

Or you can pay Microsoft a monthly fee per user for Copilot. The integration is easier. The updates are automatic. The support is included. But you become dependent on Microsoft’s roadmap, Microsoft’s pricing, Microsoft’s terms.

SocGen tried to maintain independence. They failed. They surrendered. If a $1.5 trillion bank with thousands of engineers can’t build competitive AI infrastructure, who can? The answer is almost nobody. Which means almost everyone becomes platform-dependent. Which means platforms can extract value through subscription fees, data access, feature control, and pricing power.

In Defense: The Pentagon’s integration of Google and xAI represents platform extraction at the highest level. Defense data flows through private platforms. Military decision-making is potentially influenced by systems controlled by companies with other government contracts and commercial interests. The ultimate extraction: even the national security apparatus becomes platform-dependent.

The pattern repeats across sectors: Platforms position themselves as necessary intermediaries. They make switching costs high. They consolidate market power. Then they extract.

Layer Two: Infrastructure Extraction

While platforms extract at the software layer, infrastructure vendors extract by selling the hardware and power systems that platforms supposedly need.

Bloom Energy exemplifies this extraction. Their stock price increased roughly 400% in one year, from around $25 per share in early 2025 to over $130 by January 2026. Market capitalization: $24-26 billion. The catalyst? Deals to provide on-site power generation for AI data centers.

The pitch is compelling: AI data centers need massive power. Traditional utilities can’t upgrade infrastructure fast enough (grid bottleneck). Bloom’s solid oxide fuel cells provide “grid-independent” power on-site. Problem solved.

The $5 billion Brookfield Asset Management partnership announced in October 2025 provided validation. Brookfield manages over $900 billion in assets. If they’re betting on Bloom, the technology must be legitimate. The Wyoming project – 1.8 gigawatts for an “AI Factory” powered by 900 megawatts of Bloom Energy fuel cells – demonstrates scale. These aren’t experiments. These are multi-billion-dollar commitments.

But examine the assumptions underlying Bloom’s $24 billion valuation:

  1. AI requires hyperscale centralized computing (narrative driven by platforms)
  2. Hyperscale computing requires massive power (true if centralized)
  3. Traditional grids can’t deliver power fast enough (creates urgency)
  4. On-site fuel cells solve the bottleneck (their product)

Remove assumption one (that AI requires hyperscale centralization) and the entire thesis collapses. If 80-90% of AI inference can run distributed on existing infrastructure, you don’t need 1.8 gigawatts in Wyoming. You don’t need $5 billion in fuel cell capacity. You don’t need “grid-independent” facilities.

Bloom Energy’s valuation is a $24 billion bet that centralization is technically necessary. But the same week Bloom stock surged on power infrastructure deals, Google announced a shopping protocol that runs on existing Walmart servers. Both can’t be technically necessary.

The power narrative serves business purposes. When OpenAI’s CFO Sara Friar tells CNBC “the real bottleneck isn’t money, it’s power,” she’s not making a technical statement. She’s creating urgency around infrastructure spending. But that bottleneck exists only if you choose centralization. Distributed AI (inference running on edge devices and local servers) requires far less power. Individual devices draw watts. Data centers draw megawatts. The power bottleneck is a consequence of architectural choice, not technical inevitability.
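The watts-versus-megawatts gap is easy to check with back-of-envelope arithmetic. In this sketch both wattage figures are illustrative assumptions, not measurements of any specific deployment (the 900 MW echoes the Wyoming fuel-cell figure cited above; the per-device draw is a round number):

```python
# Illustrative arithmetic only: both figures below are assumptions,
# not measurements of any specific deployment.

EDGE_DEVICE_WATTS = 15      # assumed draw of a laptop/phone running local inference
FACILITY_WATTS = 900e6      # 900 MW, the fuel-cell figure cited above

# Number of simultaneously active edge devices that would draw
# the same total power as one such facility:
equivalent_devices = FACILITY_WATTS / EDGE_DEVICE_WATTS
print(f"{equivalent_devices:,.0f}")  # 60,000,000
```

Whatever the exact per-device number, the shape of the result doesn’t change: the power bottleneck follows from where inference runs, not from inference itself.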

The “grid-independent” framing deserves scrutiny. Bloom and developers present this as solving utility bottlenecks. But grid-independent also means bypassing utility regulation and public oversight. The Wyoming project got approval at “a fraction of the 5-to-10-year timeline typically required” according to FinancialContent. Fast approvals sound efficient. They also mean minimal environmental review, limited public input, and reduced regulatory accountability.

Brookfield’s involvement reveals the scale of infrastructure extraction. Their $100 billion AI infrastructure program, developed with Nvidia and the Kuwait Investment Authority, positions the Bloom deal as an “early seed investment.” This isn’t one facility. This is a template for “grid-independent” developments across multiple jurisdictions, each bypassing traditional utility oversight, each extracting value through power infrastructure that may prove unnecessary.

Other infrastructure vendors follow similar patterns. Digital Realty, Equinix, and other data center REITs pitch municipalities on the necessity of AI infrastructure. CoreWeave (a cloud computing company whose stock has risen over 90% since its 2025 IPO) partners with Bloom on power. Construction firms, cooling system vendors, fiber optic providers – an entire ecosystem extracts value by building centralized infrastructure for workloads that may not require it.

The Speculative Infrastructure Layer: Not every infrastructure buildout attracts McKinsey forecasts and Bloom Energy valuations. Some reveal the speculative nature of AI infrastructure claims more transparently.

DataVault AI announced on January 12, 2026, plans to deploy 100+ edge computing nodes across 33 U.S. cities, projecting $400-500 million in revenue for 2026 and $2-3 billion by 2027. The company emphasized “AI-powered edge infrastructure” and “nationwide rollout strategy.”

Financial analysts forecast 2026 revenue of just $45 million, roughly a tenth of the company’s own guidance. The stock, already down 45% over 12 months, fell further on the announcement despite the ambitious billion-dollar revenue goals.

This reveals the speculation underlying infrastructure buildouts. DataVault’s projections assume massive demand for edge AI infrastructure. Analysts examining actual market conditions forecast revenues that make the infrastructure economically unviable. The market, voting with capital, agrees with analysts over company projections.

DataVault is building “edge infrastructure”, ostensibly the distributed alternative this article advocates. Yet their buildout faces the same problem centralized facilities face: projections based on optimistic assumptions about AI infrastructure demand rather than demonstrated need.

The lesson isn’t that edge computing fails while centralization succeeds. The lesson is that speculative infrastructure, whether centralized mega-facilities or distributed edge networks, faces fundamental questions about actual demand versus projected demand. Communities evaluating any infrastructure proposal should examine whether revenue projections match analyst consensus, whether stock markets reward or punish the announcements, and whether the infrastructure serves demonstrated demand or speculative forecasts.

The difference in market response is telling:

  • Bloom Energy (+400%, $24B market cap): Betting on centralized extraction = market rewards
  • DataVault AI (-45%, stock falling): Betting on distributed edge without extraction model = market punishes

This proves the point: The market rewards extraction infrastructure (centralized) even when speculative, but punishes non-extraction infrastructure (distributed) even when technically superior. Because extraction creates predictable revenue streams through platform dependency. Distribution doesn’t.

Layer Three: Capital Extraction

At the capital layer, financial forecasts and market valuations create a speculative cascade where each layer depends on assumptions from the layer below.

McKinsey & Co. forecasts roughly $7 trillion in data center capital outlays by 2030. That’s the number cited in corporate presentations, investor pitches, and media coverage. It sounds authoritative. McKinsey is a respected consultancy. Seven trillion is specific enough to seem researched.

But examine the assumptions underlying that forecast:

Assumption 1: AI transformation will continue (TRUE—technology is real)
Assumption 2: AI transformation requires hyperscale data centers (QUESTIONABLE—if distributed works)
Assumption 3: Data centers require massive power infrastructure (FALSE—if Assumption 2 is wrong)

The forecast assumes centralized architecture. If distributed AI proves viable for 80-90% of workloads, what happens to the $7 trillion forecast? What happens to Bloom Energy’s $24 billion valuation? What happens to the data center REITs, the construction contracts, the power infrastructure deals?

This creates what I call the bubble within the bubble. AI isn’t a bubble. The technology is real, investment is justified, transformation is happening. But the infrastructure buildout betting on centralization is speculative. And the power infrastructure serving that speculative buildout is doubly speculative.

It’s a three-layer cascade:

Foundation: AI will transform commerce and healthcare (TRUE)
Middle: Therefore we need $7 trillion in data centers (QUESTIONABLE)
Top: Therefore we need massive power infrastructure (SPECULATIVE)

Bloom Energy sits at the top layer. Their business model depends on the middle layer assumption being correct. If distributed AI scales and centralization proves unnecessary, Bloom’s multi-billion-dollar fuel cell orders evaporate.

Who holds the bag when that happens?

Communities hold the bag. They provide tax incentives, infrastructure subsidies, expedited approvals. They accept the promises of permanent jobs and economic transformation. They approve deals based on technical necessity claims that may prove false. When centralized facilities become stranded assets, communities bear the cost.

Investors might hold the bag. Bloom’s 400% stock gain attracts momentum investors betting on continued AI infrastructure growth. If that growth proves to be in distributed rather than centralized systems, valuations collapse. Retail investors who bought at peak lose substantially.

Workers definitely hold the bag. Infrastructure jobs promised during construction are temporary: 2-3 years, then gone. Operations jobs in highly automated facilities number in the dozens, not hundreds. Meanwhile, the AI deployed through that infrastructure automates retail positions, healthcare administration, customer service – millions of jobs over the same timeframe. Workers lose twice: displacement accelerates while infrastructure promises evaporate.

The extraction cascade concentrates risk downward while channeling value upward. Platforms extract transaction fees and data. Infrastructure vendors extract through hardware and power sales. Financial markets extract through valuations and speculation. Communities, workers, and late investors bear the risk.

The Week Everything Accelerated

The timing of announcements during January 6-12, 2026 reveals coordination rather than coincidence. Platform vendors raced to establish partnerships before competitors. Infrastructure vendors leveraged platform momentum to justify capacity expansion. Financial markets rewarded both with surging valuations.

This coordination creates an inevitability narrative: everyone’s doing it, resistance is futile, you have no choice. But communities negotiating now still have choices. Once infrastructure is built, deals are signed, and platforms are entrenched, adaptation becomes impossible. Extraction becomes the only model available.

The question isn’t whether AI will transform these sectors. That’s happening. The question is whether transformation serves extraction or adaptation. And that question is being answered right now, in corporate boardrooms and partnership announcements, before communities understand what’s at stake.


VI. WHAT COMMUNITIES NEED TO KNOW

Communities across the country are negotiating data center deals right now. Township supervisors, city council members, planning commissioners – people without computer science degrees – are being asked to approve billion-dollar projects based on technical claims they can’t fully evaluate.

This section translates what’s really happening. Not to stop AI development, but to ensure communities understand what they’re actually agreeing to. Because the choice between extraction and adaptation is being made right now, in planning commission meetings and township halls, by people who deserve clear information.

You don’t need to be a technologist to understand this. You just need to know the right questions to ask.

The Simple Truth They’re Not Telling You

The pitch you’ll hear: “AI is the future. AI needs massive computing power. We need to build a giant data center in your community. You’ll benefit from construction jobs, permanent operations positions, increased tax revenue, and designation as a technology hub.”

What they’re leaving out: Most of what AI actually does can run on regular computers distributed across existing infrastructure. Building a massive centralized facility is a choice, not a technical necessity. And that choice benefits them financially while putting risk on you.

What AI Actually Is (In Plain English)

AI is software that recognizes patterns and makes predictions:

  • Answering questions (like Google or Alexa)
  • Making recommendations (like Netflix suggestions)
  • Recognizing patterns (like photo tagging)
  • Automating routine tasks (like email autocomplete)

Most of these tasks are lightweight. They don’t need supercomputers.

When you ask Google “What’s the weather?”, it doesn’t fire up a massive data center. It queries a database and sends you an answer. That takes milliseconds and minimal computing power.

The agentic AI they’re hyping? AI agents that shop for you, summarize medical records, answer customer service questions? That works similarly:

  • Product searches: Database lookups (like Amazon search)
  • Price comparisons: Simple arithmetic across stores
  • Payment processing: API calls to existing credit card systems
  • Medical record interpretation: Text parsing and terminology translation
  • Customer service triage: Pattern matching against common questions

None of this requires a $20 billion facility. None needs 1.8 gigawatts of power.
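To make “lightweight” concrete, here is a minimal sketch of the price-comparison step from the list above. The store names and prices are hypothetical; the point is that the whole operation is a dictionary lookup plus a `min()`, the kind of work retailers’ existing servers handle every day:

```python
# Minimal sketch of an agentic price-comparison step.
# All store data here is hypothetical; a real agent would make
# ordinary API calls to retailers' existing servers.

def cheapest_offer(product, catalogs):
    """Compare one product's price across stores: simple arithmetic."""
    offers = [
        (store, items[product])
        for store, items in catalogs.items()
        if product in items
    ]
    return min(offers, key=lambda offer: offer[1])

catalogs = {
    "store_a": {"usb-c cable": 9.99, "desk lamp": 24.50},
    "store_b": {"usb-c cable": 7.49},
    "store_c": {"desk lamp": 19.99, "usb-c cable": 8.25},
}

store, price = cheapest_offer("usb-c cable", catalogs)
print(store, price)  # store_b 7.49
```

This runs in microseconds on commodity hardware. The expensive part of agentic commerce is the language model that parses your request, and even that is an inference query, not a training run.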

The Questions Your Community Must Ask

These are simple questions that cut through technical obfuscation. You don’t need a computer science degree. You need clarity about what you’re approving.

Question 1: “What percentage of your workload technically REQUIRES massive centralized facilities rather than distributed infrastructure?”

Don’t let them talk in generalities. Demand specifics:

  • Which specific tasks can’t run on smaller, distributed systems?
  • Why is centralization more cost-effective than distributed alternatives?
  • Show us the technical requirements documentation, not just the business plan.

If they can’t answer clearly with specific technical justifications: Red flag. Technical necessity should be demonstrable, not asserted.

Question 2: “What happens if distributed AI becomes standard in 3-5 years?”

Technology changes rapidly. Ask:

  • Who bears the financial risk if this facility becomes obsolete or underutilized?
  • What happens to our tax incentives and infrastructure subsidies?
  • Can you guarantee the facility won’t become a stranded asset?
  • What’s your plan if inference workloads move to edge computing?

If they claim “that won’t happen”: Ask why Apple spent billions developing on-device AI capabilities, why Qualcomm and AMD are investing in edge computing processors, why Microsoft is developing small language models. Are they all wrong? Why?

Question 3: “Why ‘grid-independent’ rather than grid-connected?”

This distinction is crucial:

Grid-connected infrastructure:

  • Subject to utility regulation and oversight
  • Public service commission jurisdiction
  • Environmental review requirements
  • Rate-setting transparency
  • Community accountability mechanisms

Grid-independent infrastructure:

  • Bypasses utility oversight
  • Minimal regulatory accountability
  • Expedited approval timelines
  • Less environmental scrutiny
  • Private operational decisions

They’ll say grid-independent is faster to build. That’s true. They’ll say it avoids utility bottlenecks. Also true. But ask: Why is avoiding public oversight desirable? Whose interests does that serve?

Your community deserves oversight and accountability. Grid-independent means you have less of both.

Question 4: “Show us the jobs math in detail”

Get specifics. Demand documentation:

  • Construction jobs: How many? What duration? (Usually 2-3 years maximum)
  • Permanent operations jobs: How many non-automated positions? (Often under 50 for highly automated facilities)
  • Timeline: How long before automation reduces staffing?
  • Comparison: How many retail, service, and administrative jobs will AI deployed through this facility automate in your region?

The real math usually reveals that job losses from AI-driven automation outpace job gains from facility operations by factors of 10 or more. They’re asking you to subsidize infrastructure that eliminates far more local jobs than it creates.
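The “jobs math” above can be run in a few lines. Every number below is a placeholder assumption drawn from the ranges discussed in this article, not data from any actual proposal; a community should substitute the figures from its own deal documents:

```python
# Back-of-envelope jobs math. All three inputs are assumptions for
# illustration; replace them with your community's actual figures.

construction_jobs = 1500         # temporary, ~2-3 years, then gone
permanent_ops_jobs = 75          # typical for a highly automated facility
regional_jobs_automated = 20000  # retail/admin/service roles displaced regionally

net_permanent_impact = permanent_ops_jobs - regional_jobs_automated
ratio = regional_jobs_automated / permanent_ops_jobs

print(net_permanent_impact)  # -19925
print(round(ratio))          # 267
```

If the developer’s own numbers, plugged into the same arithmetic, produce a ratio anywhere near this, the “jobs” pitch is a construction pitch, not an employment pitch.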

Question 5: “How does this compare to distributed alternatives?”

Require them to present alternatives analysis:

  • What would distributed edge computing infrastructure cost?
  • How many distributed nodes could serve the same workload?
  • What are the performance differences?
  • Why is centralization technically superior for your specific use case?

If they haven’t analyzed alternatives, or refuse to share that analysis, ask why. Responsible infrastructure planning always evaluates options. Refusing to discuss alternatives suggests the choice is about business model (extraction) rather than technical necessity (requirements).

The Red Flags to Watch For

🚩 “You wouldn’t understand the technical details”

Translation: Don’t ask questions, don’t scrutinize claims, trust us to know better.

Response: “Explain it to us simply. If the technology is legitimate and the need is real, you should be able to make the case in plain language. Technical complexity isn’t an excuse for avoiding scrutiny.”

🚩 “AI requires massive computing power—everyone knows this”

Translation: Accept centralization as inevitable, don’t question architectural choices.

Response: “Which specific AI tasks require centralization versus distributed processing? Show us the workload analysis. Explain why your use case can’t use edge computing that’s faster and often cheaper.”

🚩 “We need fast approval or we’ll go elsewhere”

Translation: Don’t do due diligence, don’t evaluate alternatives, approve before you understand what you’re approving.

Response: “Then go elsewhere. Our community isn’t so desperate for development that we’ll skip responsible planning. If this deal benefits you so substantially, you can wait for proper evaluation.”

🚩 “Grid-independent solves the power bottleneck”

Translation: Let us bypass utility regulation and public oversight for faster, less accountable development.

Response: “Why can’t you work with our local utility? What specific power requirements make grid connection infeasible? How do we ensure accountability without utility oversight?”

🚩 “This will create hundreds of high-paying jobs”

Translation: We’re emphasizing temporary construction jobs while understating how few permanent positions highly automated facilities require.

Response: “Separate construction jobs from permanent operations jobs. How many permanent, non-automated positions will exist five years after opening? What happens when AI deployed through this facility automates jobs in our community?”

What Extraction Looks Like in Practice

Watch for this pattern as deals progress:

Phase 1 – Big Promises: Hundreds of jobs, substantial tax revenue, “technology hub” designation, economic transformation, community partnership

Phase 2 – Urgency Pressure: Fast-tracked approvals, “approve now or lose out,” complexity shields preventing scrutiny, minimal public input

Phase 3 – Incentives Demanded: Tax abatements, infrastructure subsidies, utility rate concessions, expedited permitting, relaxed environmental review

Phase 4 – Light Oversight: Grid-independent operation, special district status, minimal reporting requirements, private operational decisions

Phase 5 – Reality (5-10 years later): Mostly automated facility (few jobs), technology potentially obsolete (stranded asset risk), company seeks more concessions, community can’t reverse the deal

The value gets extracted upward to platforms and investors. The risk remains with your community. You subsidized their infrastructure. You relaxed your standards. You can’t undo those decisions.

The Alternative Exists

Adaptation infrastructure looks different from extraction infrastructure:

Distributed Computing:

  • Multiple smaller facilities rather than one giant center
  • Edge computing nodes serving local needs
  • Community-owned where feasible
  • Grid-connected (public oversight)
  • Sized for actual demonstrated demand rather than speculative projections

Open Protocols:

  • Technology standards that any vendor can implement
  • No platform dependency or lock-in
  • Community retains infrastructure control
  • Competition remains possible

Local Value Retention:

  • Jobs that can’t be easily automated or off-shored
  • Infrastructure serving community needs
  • Economic value that stays local
  • Transparent accountability

This isn’t theoretical. Communities are building distributed infrastructure. Municipal fiber networks, community-owned edge computing hubs, cooperative data services. Smaller scale, local control, value retention.

But you won’t hear about these alternatives from developers pitching $20 billion centralized extraction facilities. Their business model requires you not to know these alternatives exist.

Your Negotiating Position Is Stronger Than You Think

Remember these facts as you negotiate:

They need approval from you. Your community controls land use, zoning, permits. Without your approval, they can’t build. They need you more than you need them.

You don’t need them. AI transformation is happening regardless. The question is whether it happens through infrastructure that serves your community or extracts from it. You can say no to extraction and still benefit from AI advancement.

Other communities are watching. Your decisions create precedents. Approve extraction infrastructure, and other communities face pressure to match your terms. Demand adaptation alternatives, and you strengthen negotiating positions elsewhere.

You can say no. This is perhaps the most important realization. You can reject proposals that don’t serve community interests. You can demand better terms. You can require genuine alternatives analysis. You can insist on distributed infrastructure over centralized extraction.

Even the most powerful entities surrender to platforms. Société Générale couldn’t compete. Apple couldn’t compete. The Pentagon chose platform dependency over sovereign infrastructure. You’re negotiating with entities that have proven they can make trillion-dollar companies and military departments capitulate. Understanding that power imbalance is essential.

The questions to keep asking:

  1. Show us why centralization is technically necessary for your specific workload
  2. Show us risk analysis if distributed AI scales
  3. Show us real jobs numbers. Permanent, non-automated, long-term
  4. Show us why grid-independent is better than accountable utility oversight
  5. Show us what happens if you leave in 5 years

If they can’t answer these questions clearly, with documentation and technical justification, that tells you everything you need to know.

You’re not being asked to enable AI technology. You’re being asked to subsidize someone else’s extraction infrastructure. Understanding that distinction is the first step toward negotiating from strength instead of desperation.


VII. THE WORKER TRAP

While communities negotiate infrastructure deals, workers face a different but related threat. The same AI systems that supposedly require massive data centers will automate millions of jobs—likely before the infrastructure promises materialize.

The Displacement Timeline

Retail (4.6 million jobs at risk): Google’s Universal Commerce Protocol announced January 11, 2026, enables AI agents to shop on behalf of users. When customers interact with AI instead of salespeople, retail employment contracts. Timeline: 2-3 years for substantial impact.

The protocol already has 20 major partners. Shopify, Walmart, Target, Etsy, Wayfair. Plus payment processors Mastercard, Visa, PayPal, Stripe. This isn’t experimental. This is commercial deployment at scale.

Healthcare Administration (1+ million jobs at risk): Anthropic’s Claude for Healthcare and OpenAI’s ChatGPT for Health, announced within days of each other (January 2 and 11, 2026), automate:

  • Medical records specialists (142,000 jobs)
  • Patient service representatives (320,000+ jobs)
  • Medical secretaries (551,000 jobs)
  • Triage nurses (partial automation)
  • Medical billing specialists (prior authorization automation)

Timeline: 2-3 years for significant displacement as healthcare systems integrate these platforms.

Customer Service (millions at risk): AI agents handle common questions, route complex issues, provide 24/7 availability without human staff. Every company announcing “AI customer service” is announcing headcount reduction.

Timeline: Already happening, accelerating over next 2-3 years.

The Infrastructure Promise

Against this displacement, communities are offered infrastructure jobs:

Construction Positions: Large data center projects employ substantial construction crews. Hundreds or even thousands of workers. But these are temporary positions, typically lasting 2-3 years during facility construction. When construction completes, those jobs disappear.

Operations Positions: Modern data centers are highly automated. A $20 billion facility might employ 50-100 permanent operations staff. Contrast that with the thousands of construction workers or the millions of retail and service workers being displaced.

The math doesn’t work. Displacement happens faster and at larger scale than infrastructure job creation. Even if every data center proposal gets approved and every construction job materializes, the net employment impact is deeply negative.

The Double Loss

Workers lose twice under extraction infrastructure:

First Loss – Current Jobs Automated: AI deployed through centralized platforms automates existing employment. Retail salespeople lose positions to shopping agents. Healthcare administrators lose jobs to medical record AI. Customer service representatives lose work to chatbots.

Second Loss – False Infrastructure Promises: The infrastructure jobs offered as consolation prove to be temporary (construction) or minimal (operations). Worse, if distributed AI scales and centralized facilities become stranded assets, even those limited infrastructure jobs disappear faster than anticipated.

Workers who dismissed AI as a bubble face unemployment without preparation. Workers who trusted infrastructure promises discover those promises were based on false technical necessity. Workers who planned careers in data center operations find positions automated or eliminated when distributed alternatives prove cheaper.

The Automotive Parallel

Michigan’s automotive transformation provides instructive precedent. Workers who recognized transformation early had options. Retraining, relocation, career pivots. Workers who denied change, trusting that manufacturing would always need human labor at scale, lost everything when automation and offshoring accelerated.

AI transformation follows similar dynamics. The technology is real. Displacement is inevitable. But the timeline and magnitude depend partly on architectural choices happening now.

If AI infrastructure builds toward centralized extraction, displacement accelerates because platforms can deploy automation at scale with minimal labor. If infrastructure builds toward distributed adaptation, displacement may slow because distributed systems often require more human oversight and maintenance.

Workers can’t prevent AI transformation. But understanding that centralization serves extraction while distribution might serve adaptation could inform career planning, retraining decisions, and political advocacy.

What Workers Can Do

Understand the timeline: 2-3 years for substantial displacement in retail and healthcare administration. This isn’t distant future. This is immediate.

Don’t trust infrastructure promises: Construction jobs are temporary. Operations jobs are minimal. Net employment impact is negative even in optimistic scenarios.

Advocate for adaptation: Push for distributed infrastructure, open protocols, community ownership. These models may create more sustained employment because they require more human involvement than fully automated platforms.

Prepare now: Retraining, skill development, career pivots. Waiting until displacement accelerates means competing with millions of other workers for limited positions.

Recognize the pattern: Companies promising jobs are the same companies deploying automation. They’re not lying, exactly; they’re emphasizing temporary construction jobs while understating permanent automation. Understanding this pattern prevents false hope.

The worker trap is structural: lose your job to automation enabled by infrastructure you were told would create jobs. Escape requires understanding that the infrastructure promises are based on extraction economics that inherently minimize labor costs.


VIII. THE ALTERNATIVE PATH

Distributed infrastructure exists. Communities just aren’t hearing about it from developers pitching centralized extraction facilities.

What Distributed AI Infrastructure Looks Like

Technical Architecture:

  • Edge computing hubs: Smaller facilities (10-50 servers) distributed across regions rather than massive centralized campuses
  • Local inference: AI models running on community-owned servers, devices, and infrastructure
  • Federated learning: Models that train across distributed data without centralizing information
  • Mesh networks: Community-owned connectivity that isn’t dependent on single corporate providers

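The federated learning bullet above is a concrete algorithm, not just a slogan. Here is a minimal toy sketch of federated averaging (FedAvg) under stated assumptions: three hypothetical community nodes, a one-parameter linear model, and made-up local data. It is meant only to show the core idea that nodes share model weights, never raw data.

```python
# Toy federated averaging (FedAvg) sketch. The node data and the
# simple linear model (predict y = w * x) are illustrative
# assumptions, not drawn from any real deployment.

def local_train(weights, data, lr=0.1):
    """One gradient-descent step on a node's private data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(node_weights):
    """Aggregate step: average the weights returned by each node."""
    return sum(node_weights) / len(node_weights)

# Three hypothetical community nodes. Each node's data never leaves
# the node; only the updated weight is shared. True relation: y = 2x.
nodes = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
    [(0.5, 1.0), (1.5, 3.0)],
]

global_w = 0.0
for _ in range(50):  # communication rounds
    updates = [local_train(global_w, data) for data in nodes]
    global_w = federated_average(updates)

print(round(global_w, 2))  # converges toward 2.0
```

Real systems (multi-layer models, secure aggregation, differential privacy) are far more involved, but the architectural point is the same: the training signal travels, the information stays local.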
Economic Model:

  • Community ownership: Infrastructure controlled by local entities rather than distant corporations
  • Value retention: Economic benefits stay in the community rather than extracting to platforms
  • Transparent operation: Public oversight of publicly-financed infrastructure
  • Adaptable scale: Capacity grows with demonstrated demand rather than speculative forecasts

Real-World Examples

Apple’s Strategy (Before Surrender): Until its Google partnership, Apple bet on on-device AI rather than cloud-based processing. Its chips include neural engines designed for local AI inference. Why? Privacy protection and user control. The side effect was demonstrating that sophisticated AI runs efficiently on distributed devices. Apple’s surrender to Google’s platform shows how insurmountable platform advantages have become, even for a company that had already built the distributed infrastructure.

Qualcomm’s Approach: Qualcomm’s latest Snapdragon processors include AI capabilities that enable smartphones and PCs to run substantial AI workloads locally. They’re not doing this to be nice; they’re doing it because edge computing is faster (no round trip to data centers), more private (data stays on device), and often cheaper (no cloud fees).

Microsoft’s Small Models: Microsoft’s Phi series of small language models (less than 7 billion parameters) perform remarkably well for specific tasks while running on consumer hardware. They’re proving that specialized AI doesn’t need massive scale.

Academic Research: Research institutions including Stanford, MIT, and University of Washington are documenting that distributed inference is not only feasible but often superior for real-world applications where latency, privacy, and cost matter.

Why Distributed Infrastructure Isn’t Being Built

If distributed AI is technically feasible and often superior, why aren’t communities being offered that alternative?

Simple answer: Distributed infrastructure doesn’t generate $1 trillion in capital deployment. It doesn’t create platform dependency. It doesn’t enable extraction at scale. Companies can’t monetize AI running on your device or community-owned servers the way they can extract value through centralized platforms.

Decentralized models don’t make billionaires. They create diffuse value that stays local. That makes them better for communities but worse for capital concentration. So you won’t hear about them from developers pitching mega-facilities.

Even when companies try to build distributed infrastructure speculatively (like DataVault AI’s edge network announcement), the market punishes them because distributed architecture doesn’t enable the extraction model that creates predictable platform revenue.

What Communities Can Demand

Municipal fiber networks: Community-owned high-speed internet that enables distributed computing without corporate control

Edge computing cooperatives: Shared infrastructure owned by local businesses and institutions rather than distant corporations

Open protocol requirements: If a company wants public subsidies for infrastructure, require they use open protocols that prevent vendor lock-in

Distributed-first analysis: Before approving any centralized facility, require developers to document why distributed alternatives won’t work for their specific use case

Community ownership options: Reserve the right to purchase infrastructure if companies want to abandon facilities or sell to distant investors

None of this prevents AI transformation. But it shapes transformation toward adaptation (communities benefit and retain control) rather than extraction (platforms profit and retain control).

The Choice That’s Actually Being Made

Communities aren’t choosing between AI progress and stagnation. They’re choosing between:

Extraction Infrastructure:

  • Centralized facilities designed for platform control
  • Grid-independent operation with minimal oversight
  • Corporate ownership and decision-making
  • Value extraction through data and transactions
  • Risk concentration in communities
  • Job automation at scale

Adaptation Infrastructure:

  • Distributed facilities designed for community benefit
  • Grid-connected with public accountability
  • Community ownership where feasible
  • Value retention locally
  • Risk management through diversification
  • Jobs that maintain human roles where appropriate

Both involve AI transformation. Both require infrastructure investment. But the distinction between extraction and adaptation determines who benefits and who bears risk.

Right now, extraction infrastructure is being built because communities don’t understand they have alternatives. The complexity shield makes centralization seem technically necessary. The platform consolidation makes distributed alternatives seem unrealistic. The timing pressure makes careful evaluation seem like obstruction.

But communities that pause, ask hard questions, and demand genuine alternatives analysis might discover that adaptation infrastructure serves their interests better than extraction facilities even if adaptation generates less spectacular capital deployment numbers for developers to cite.

The fact that even Apple, after investing billions in on-device AI capability, surrendered to Google’s centralized platform shows how powerful extraction economics have become. But it also shows why communities must demand alternatives explicitly. No one will offer them voluntarily.


IX. CONCLUSION

On January 11, 2026, Google announced a protocol enabling AI agents to shop on your behalf using lightweight technology that runs on existing retail servers. The next day, Apple—with three trillion dollars and custom AI chips—announced a multi-year partnership making Google’s AI fundamental to Siri. That evening, the Pentagon announced Google’s and Elon Musk’s AI would operate inside defense networks. And Alphabet’s market value crossed $4 trillion for the first time.

All within 48 hours. All describing the same technological transformation. But recommending radically different infrastructure.

Google’s shopping protocol suggests AI works fine on current distributed systems. Apple’s surrender suggests even trillion-dollar companies can’t compete with platforms. The Pentagon’s integration suggests even national security imperatives don’t overcome platform advantages. The market’s trillion-dollar reward suggests consolidation is complete.

Between January 2 and 12, 2026—ten days—the pattern became unmistakable. OpenAI and Anthropic raced to capture healthcare. Google consolidated commerce. Nvidia partnered with pharmaceutical giants. Microsoft absorbed another bank that tried to build alternatives. Apple signed away years of potential independence. The Pentagon integrated private platforms into classified systems. Bloom Energy stock surged on power deals. All within ten days. All building infrastructure for extraction.

This isn’t a bubble. The investment is real. The technology works. The transformation is inevitable. But it’s consolidation, not bubble, and that consolidation is creating extraction infrastructure at every layer: platforms extracting transaction fees and data, infrastructure vendors extracting through hardware and power sales, capital markets extracting through speculation and valuation.

The consolidation predicted in the “AI Isn’t a Bubble” series proved correct faster than anticipated. When Société Générale, with $1.5 trillion in assets, thousands of engineers, and sophisticated capabilities, surrenders its proprietary AI tools to Microsoft Copilot, platform advantages are real. When Apple, with $3 trillion in market cap, a vertical integration strategy, and custom AI silicon, signs a multi-year pact with Google rather than compete, platform advantages are insurmountable. When the Pentagon, with an unlimited budget, national security imperatives, and data sovereignty requirements, integrates private platforms instead of building sovereign infrastructure, platform advantages are absolute.

On January 12, 2026, three entities surrendered to AI platforms: Société Générale, one of Europe’s largest banks; Apple, the world’s most valuable technology company; and the United States Department of Defense. Banks, tech giants, and military. All choosing platform dependency over building alternatives.

Communities negotiating data center deals right now are negotiating during this consolidation. They’re being told AI requires massive centralized facilities, that distributed alternatives won’t work, that grid-independent operation is necessary, that job creation will be substantial. These claims depend on technical complexity shields that prevent scrutiny.

But the technical reality is simpler than complexity shields suggest: most AI workloads run efficiently on distributed infrastructure; centralization is an architectural choice, not a technical requirement; that choice serves extraction business models rather than community adaptation needs.

And the urgency with which developers push approvals is strategic, not technical. They’re racing to build centralized infrastructure before communities understand distributed alternatives exist. Before awareness spreads that Société Générale, Apple, and the Pentagon all surrendered to platforms. Before communities realize they’re negotiating from strength, not desperation.

The questions communities must ask cut through obfuscation: Show us why centralization is technically necessary for your specific workload. Show us risk analysis if distributed AI scales. Show us real jobs numbers. Permanent, non-automated, documented. Show us why grid-independent is better than accountable oversight. Show us what happens if you leave in five years.

If developers can’t answer these questions clearly, that tells communities everything they need to know. They’re not being asked to enable AI technology. They’re being asked to subsidize extraction infrastructure before understanding they had adaptation alternatives.

Workers face parallel dynamics. Retail, healthcare administration, customer service. Millions of jobs automating over 2-3 years. The same infrastructure promising limited construction and operations jobs is deploying AI that eliminates far more positions. Workers lose twice: current jobs automated, infrastructure promises proving hollow. The automotive precedent is instructive. Workers who recognized transformation early had options; workers who denied change lost everything.

The alternative path exists. Distributed infrastructure, edge computing, community ownership, open protocols. These models create more diffuse value that stays local rather than concentrating in distant platforms. They enable adaptation rather than just extraction. But communities won’t hear about these alternatives from developers pitching $20 billion facilities, because distributed models don’t generate the capital deployment numbers that attract major investors.

The choice being made right now, in planning commissions and township halls, by all of us, determines whether AI transformation serves extraction or adaptation. Once infrastructure is built, deals are signed, and platforms are entrenched, the choice is made. Extraction becomes the only available model.

Communities hold more power than they realize. Developers need approvals. Communities control zoning, permits, land use. They can say no. They can demand alternatives analysis. They can require distributed infrastructure over centralized extraction. They can insist on genuine accountability rather than grid-independent operation.

But that power only exists while negotiations continue. Once approved, once built, the leverage shifts permanently. And as Société Générale, Apple, and the Pentagon have proven, even the most powerful entities ultimately surrender to platforms.

The technical capability for distributed AI exists. The business model requires centralization. Understanding that distinction between technical necessity and business model requirements is the only way to negotiate from strength instead of desperation.

AI will transform commerce, healthcare, customer service, defense, and every sector it touches. That’s inevitable. The question isn’t whether transformation happens. The question is whether transformation serves extraction or empowerment. Right now, we’re building infrastructure for the former while pretending it’s technically necessary for the latter.

Communities that understand this distinction – that pause, ask hard questions, demand genuine alternatives – might shape AI transformation toward adaptation instead of just extraction. Communities that accept complexity shields, trust inevitability narratives, and approve deals before understanding them will subsidize extraction infrastructure and bear the risk when consolidation proves that distributed alternatives were feasible all along.

The extraction protocol isn’t coming. It’s operational. Platforms have consolidated. Even banks, tech giants, and military departments have surrendered. The question for communities is whether they recognize the extraction model before they subsidize it. Whether they demand adaptation alternatives before infrastructure is built. Whether they negotiate from understanding rather than desperation.

The choice is being made now. In township halls and planning commission meetings. By community leaders who deserve clear information instead of technical obfuscation. By workers who need honest assessment instead of false promises. By policymakers who should understand that centralization serves extraction while distribution enables adaptation.

The window is closing. But it hasn’t closed yet.

Sources available upon request. Will update after they process through the Internet Archive (Wayback).
