Agentic AI Foundation: The AIAG Playbook at Scale

What 1982 Auto Standards Teach Us About AI Oligopoly Governance


The Bottom Line Up Front

When Six Companies Coordinate

On December 9, 2025, six companies — Anthropic, Amazon, Google, Meta, Microsoft, and OpenAI — announced coordinated action on AI agent governance.

Together they’re:

  1. Open-sourcing the Model Context Protocol (technical standard for how AI agents access data and tools)
  2. Founding the Agentic AI Foundation under Linux Foundation (governance body for agent deployment standards)

If you’re a Michigan worker over 40, this should feel very familiar.

In 1982, three companies — GM, Ford, and Chrysler — created the Automotive Industry Action Group to set automotive supply chain standards during industry crisis.

The playbook is identical. The scale is exponentially larger.

AIAG governed one industry’s supply chain.

Agentic AI Foundation will govern AI deployment across the entire economy.

And just like AIAG, workers aren’t at the table deciding how transformation happens.

The lesson from AIAG isn’t “industry standards bad.”

The lesson is: Technical governance without social governance leaves workers bearing transformation costs by default.

Here in Michigan, we learned that lesson watching standards succeed while communities struggled, because communities were never part of the equation.

We’re about to learn it again — at larger scale, faster pace, affecting more workers.

Unless something changes about who governs transformation and whose interests are represented.

The Pattern Repeats

In 1982, Detroit’s Big Three automakers created the Automotive Industry Action Group (AIAG) to set supply chain standards during a crisis. Japanese competition was devastating US manufacturers. Quality problems plagued American cars. The industry needed help.

AIAG delivered what it promised: Better standards. Improved efficiency. Enhanced competitiveness.

The jobs? Still left.

Not because AIAG caused displacement. It didn’t. Technical standards address technical problems. They don’t address who benefits from transformation, who loses, or how costs get distributed.

Now six AI companies are creating similar governance for AI agents. The Agentic AI Foundation will likely deliver what it promises: Safety standards. Deployment frameworks. Technical excellence.

The question isn’t whether standards will work. It’s whether anyone is governing for worker outcomes.

What AIAG Actually Was (And Wasn’t)

The 1982 crisis was real:

  • 1970s oil shocks devastated US automakers
  • Japanese cars had better quality, lower cost
  • US manufacturers faced potential collapse
  • Industry needed supply chain improvements fast

AIAG’s response:

  • Created quality standards (before ISO 9000 existed)
  • Established shipping/logistics protocols
  • Developed electronic data interchange systems
  • Built frameworks for supplier collaboration

Did it work? Yes.

  • Standards improved product quality
  • Supply chain efficiency increased
  • US automakers became more competitive
  • AIAG standards still used globally today

I saw AIAG projects from multiple perspectives while working at different Tier 1 auto suppliers. The quality improvements were often painful and, from the supplier side, customer mandated and customer driven, but they were genuine improvements that solved real problems and increased efficiency. APQP, PPAP, barcode standardization, containerization: the full range of supplier materials, logistics, and quality requirements. They worked as designed.

But during the same period (1980s-2000s):

  • Michigan lost 300,000+ manufacturing jobs
  • Automation transformed assembly lines
  • NAFTA enabled offshoring to Mexico
  • Supplier consolidation eliminated smaller firms
  • Import market share continued growing

AIAG didn’t cause these losses. Globalization, automation, trade policy, consumer preferences, and macroeconomic forces all converged. The transformation would have happened regardless.

But here’s what I learned after many years watching it unfold: AIAG governed technical standards brilliantly. No equivalent body governed worker outcomes during transformation. Technical problems got solved. Human problems got externalized as the cost of doing business.

The Governance Structure That Excluded Workers

AIAG founding members (1982):

  • General Motors
  • Ford Motor Company
  • Chrysler Corporation

Three companies. One industry. Domestic US market.

Who participated in governance:

  • ✅ Big Three automakers (setting direction)
  • ✅ Tier 1 suppliers (major participants)
  • ✅ Technical experts (quality/logistics specialists)

Who was excluded:

  • ❌ United Auto Workers (union voice)
  • ❌ Small suppliers (had to comply, couldn’t influence)
  • ❌ Communities (plant closure impacts not in scope)
  • ❌ Displaced workers (transition support not addressed)

The result: Standards that worked technically. Transformation that hurt workers economically.

Not because anyone intended harm, but because:

  • Technical excellence was in scope
  • Worker outcomes were out of scope
  • No parallel body existed to govern social impacts
  • Industry transformation proceeded with only technical governance

The Agentic AI Foundation: AIAG Playbook at Oligopoly Scale

Founding members (December 9, 2025):

  • Anthropic
  • Amazon
  • Google
  • Meta
  • Microsoft
  • OpenAI

Six companies. Every industry. Global market.

What These Six Companies Control

Cloud Infrastructure (where AI runs):

  • Amazon: AWS (37% market share)
  • Microsoft: Azure (21% market share)
  • Google: Google Cloud (11% market share)
  • Combined: 69% of global cloud infrastructure

AI Models (the intelligence layer):

  • OpenAI: GPT-4, o1 (enterprise standard)
  • Anthropic: Claude (enterprise, government)
  • Google: Gemini (consumer, enterprise, DoD)
  • Meta: Llama (open source, widely deployed)

Enterprise Deployment:

  • Microsoft: Copilot (365 million users), Azure AI
  • Google: Workspace AI, Vertex AI
  • Amazon: Bedrock (model hosting), Q (assistant)
  • Meta: Business integrations, open ecosystem

What they’re now coordinating on:

  • Model Context Protocol (MCP): Technical standard for how all AI agents access data/tools
  • Agentic AI Foundation: Governance body for safety standards, deployment frameworks, agent behavior rules

Translation: Six companies that control infrastructure, models, and deployment are now coordinating on both technical standards AND governance rules everyone else must follow.

The Two-Layer Control: MCP + Foundation

Model Context Protocol (technical layer):

How Anthropic describes it: “An open protocol that standardizes how AI systems connect to different data sources and tools… a universal ‘language’ that allows AI assistants to securely access the information they need.”

What this actually means:

  • Standardized way for agents to access databases, APIs, applications
  • Like USB for AI — one protocol connects everything
  • Already developed by Anthropic, now being imposed as the standard
  • Everyone deploying agents must adopt this protocol for interoperability (a minimal code sketch follows below)
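
To make the “universal language” claim concrete, here is a minimal sketch of what exposing a tool over MCP looks like, using Anthropic’s official Python SDK (the `mcp` package). The server name, tool, and inventory data are hypothetical, invented for illustration; the point is that any MCP-compatible agent, from any vendor, can discover and call a tool exposed this way.

```python
# Minimal MCP server sketch using the official Python SDK ("pip install mcp").
# The tool below is hypothetical; real servers wrap databases, APIs, or apps.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")  # hypothetical server name

@mcp.tool()
def check_stock(part_number: str) -> str:
    """Return on-hand quantity for a part number (stubbed data for illustration)."""
    fake_inventory = {"A-100": 42, "B-200": 0}  # stand-in for a real database
    qty = fake_inventory.get(part_number)
    if qty is None:
        return f"Unknown part: {part_number}"
    return f"{part_number}: {qty} units on hand"

if __name__ == "__main__":
    # Serves the protocol over stdio by default; any MCP client can connect,
    # list the tools this server exposes, and call them.
    mcp.run()
```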

Agentic AI Foundation (governance layer):

Purpose (from announcement):

  • Govern development of MCP
  • Set standards for agentic AI systems
  • Ensure interoperability
  • Set safety and ethical guidelines

What this actually means:

  • Six companies decide how MCP evolves
  • Six companies define “safe” and “ethical” agent behavior
  • Six companies set deployment standards
  • Everyone else must comply or face market exclusion

Combined effect:

  • Control the technical standard (MCP = how agents work)
  • Control the governance rules (Foundation = what agents can do)
  • Control the infrastructure (their clouds run the agents)
  • Control the models (their AI powers the agents)

This is vertical integration of an entire technology stack at industry-wide scale.
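
And here is the other side of that stack: what “everyone else must comply” looks like in practice. This is a minimal client sketch, again using the official Python SDK, that connects to a server like the hypothetical one above, performs the standard protocol handshake, discovers tools, and calls one. The server command and tool name carry over from that earlier sketch; the handshake sequence is the part every adopter implements the same way.

```python
# Minimal MCP client sketch (official Python SDK). Launches the hypothetical
# server from the earlier sketch as a subprocess and talks to it over stdio.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # "server.py" is assumed to contain the server sketch shown earlier.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # protocol handshake
            tools = await session.list_tools()  # discover exposed tools
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(
                "check_stock", {"part_number": "A-100"}
            )
            print(result.content)

asyncio.run(main())
```

Whoever controls how that handshake evolves controls what every client and server in the ecosystem must implement.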

Why This Is More Concerning Than AIAG

Market concentration:

AIAG era:

  • Big Three had competition (Toyota, Honda, European makers)
  • Market share: ~70% but declining
  • Alternative standards possible (Japanese manufacturers had own methods)

AI Foundation era:

  • Big Six face minimal meaningful competition
  • Market share: ~80%+ of enterprise AI deployment
  • Alternative standards unlikely (network effects favor adoption)
  • Startups must adopt their protocol or face incompatibility

Scope of impact:

AIAG:

  • Automotive supply chain (large but bounded)
  • Manufacturing sector primarily
  • Geographic concentration (Midwest, South)

Agentic AI Foundation:

  • All industries deploying agents
  • White-collar and blue-collar work
  • Global reach, instant deployment

Speed of transformation:

AIAG:

  • Standards developed over years
  • Adoption over decades
  • Time for adjustment, retraining
  • Geographic spread of impact

Agentic AI Foundation:

  • MCP already exists (being imposed)
  • December 2–3: 24,600 companies got push-button agents
  • December 9: Six companies coordinate governance
  • Transformation measured in months, not decades

Worker power differential:

AIAG era:

  • UAW existed (1.5 million members peak)
  • Strike leverage (could stop production)
  • Geographic concentration (union density)
  • Political influence (Midwest swing states)

AI era:

  • No tech worker union (organizing minimal)
  • No strike leverage (agents don’t stop)
  • Geographic dispersal (workers everywhere affected)
  • Weakened labor political power

Regulatory environment:

AIAG era:

  • Government could intervene (antitrust, labor law)
  • OSHA, EPA eventually regulated
  • Union rights protected by law
  • Democratic input through established channels

AI era:

  • Trump EO coming (eliminate oversight for “AI leadership”)
  • Government as customer (DoD Gemini, AWS $50B)
  • Labor protections weakening
  • Democratic input being preempted

Who’s Not at the Table (Again)

Notice who’s NOT listed as founding members:

❌ Labor unions (AFL-CIO, SEIU, any tech worker organization)
❌ Worker advocacy groups
❌ Consumer protection organizations
❌ Independent AI safety researchers (not funded by these six)
❌ Civil rights groups
❌ Environmental organizations
❌ Community representatives
❌ International governments (just US Big Tech)
❌ Academic institutions (independent research)
❌ Small AI companies/startups
❌ Displaced workers from any industry

Who IS at the table:

  • Six companies that profit from AI deployment
  • Six companies whose business model is automation
  • Six companies competing with each other but aligned on governance

This is industry self-regulation by the industry that most benefits from minimal regulation.

Just like AIAG: Industry sets standards. Workers absorb transformation costs.

The Questions No Standards Body Answers

Technical questions (Agentic AI Foundation will address):

  • How should agents handle errors?
  • What safety protocols apply to agent behavior?
  • How do agents from different vendors interact via MCP?
  • What audit/transparency requirements for agent actions?
  • What certification processes for agent deployment?

Social questions (Agentic AI Foundation likely won’t address):

  • Which jobs are appropriate for agent replacement?
  • What transition support for workers displaced by agents?
  • How should productivity gains from agents be distributed?
  • What community impacts from rapid agent deployment?
  • Who bears costs when agents make mistakes affecting workers?
  • What democratic input on deployment pace and scope?

Not because these questions are unimportant, but because they fall outside the scope of a technical standards body governed by companies that profit from deployment.

From my AIAG experience: Technical standards bodies solve technical problems excellently. They don’t solve — and aren’t designed to solve — social distribution problems. That requires separate governance with different representation.

We didn’t have that parallel governance in the AIAG era. We don’t have it now.

What History Suggests About Oligopoly Governance

Our earlier analysis (What the 1980s-90s IT Revolution Taught Us) documented how technology transformations promised worker benefits but delivered different outcomes:

1980s-90s IT Revolution:

  • Technology genuinely improved productivity (computers worked as promised)
  • New tools created genuine business value
  • But productivity gains → corporate profits, not proportional wage growth
  • New jobs required more education, often paid less
  • Retraining programs inadequate for scale/speed of disruption

The AIAG parallel:

  • AIAG set excellent technical standards (quality improved)
  • No equivalent body governed worker outcomes (displacement happened)
  • Transformation proceeded with partial governance
  • Workers absorbed costs; companies captured benefits

The pattern: Technical governance without social governance leaves workers bearing transformation costs by default.

Why Six-Company Coordination Matters

When competitors coordinate on standards:

Historical examples:

  • Standard Oil trusts (broken up 1911)
  • AT&T Bell System (broken up 1984)
  • Microsoft antitrust (2001)

Modern tech:

  • Apple/Google/Intel wage-fixing (2010, settled)
  • Tech recruiting no-poach agreements (illegal)
  • Now: Six AI companies coordinating on deployment standards

The concern: When industry leaders coordinate — even on “voluntary” technical standards — it creates:

  • Market barriers for competitors
  • Compliance requirements for everyone else
  • Oligopoly control over technology evolution
  • Standards that favor coordinating companies

AIAG was three companies, one industry, pre-internet era coordination.

This is six companies, every industry, real-time global coordination.

The antitrust implications are significant. Whether regulators will act is another question — especially with the Trump administration treating AI companies as strategic partners rather than monopoly concerns.

What My AIAG Experience at a Tier 1 Supplier Taught Me

Working with AIAG standards, I saw:

What worked:

  • Quality metrics improved (measurable, significant)
  • Supply chain efficiency increased (real cost savings)
  • Communication standardized (easier collaboration)
  • Problem-solving accelerated (common frameworks)

What didn’t:

  • Supplier consolidation (smaller firms couldn’t afford compliance)
  • Geographic concentration (standards favored scale)
  • Worker displacement (efficiency gains = fewer people needed)
  • Community impacts (plant closures followed efficiency improvements)

The disconnect I witnessed: Technical standards achieved stated objectives brilliantly. Economic outcomes for workers deteriorated simultaneously. Both were true.

The lesson: Technical success ≠ worker success. Solving technical problems doesn’t automatically solve human problems. Standards bodies optimize for what they measure (efficiency, quality, cost). They don’t measure — and therefore don’t optimize for — worker outcomes.

What Would Different Governance Look Like?

What AIAG taught us works:

  • Clear technical standards
  • Industry collaboration on best practices
  • Measurable outcomes
  • Continuous improvement processes

What AIAG taught us is missing:

  • Worker voice in transformation decisions
  • Transition support governance
  • Community impact assessment
  • Democratic input on deployment pace
  • Distribution mechanisms for productivity gains

For AI, we need both:

Technical governance (Agentic AI Foundation provides):

  • Safety standards for agent behavior
  • Interoperability via Model Context Protocol
  • Deployment frameworks and best practices
  • Audit/certification processes
  • Risk assessment methodologies

Social governance (currently absent):

  • Worker transition requirements
  • Job quality standards for AI-augmented roles
  • Geographic/community impact assessments
  • Benefit-sharing mechanisms from productivity gains
  • Democratic input on deployment decisions
  • Labor representation in governance

Not either/or — both/and.

Technical standards bodies can’t solve social problems. But social problems don’t solve themselves just because technical standards work.

The problem: Six companies coordinating on technical governance. Zero entities coordinating on social governance.

The Uncomfortable Questions About Oligopoly

If six-company coordination serves public interest:

  • Why are workers excluded from governance?
  • Why no labor union representation on foundation board?
  • Why announce in the same week as Trump’s “one rulebook” EO?
  • Why coordinate after 24,600 companies already deployed agents?
  • What prevents this from becoming cartel behavior?

For Agentic AI Foundation:

  • Will technical standards prioritize worker-protective deployment?
  • Will foundation recommend deployment pace limits?
  • Will it address job displacement mitigation?
  • What happens if small AI companies challenge the MCP monopoly?
  • Who enforces standards against founding members themselves?

History suggests: Oligopolies govern in oligopoly interests. Technical standards will work. Social costs will be externalized. Workers will absorb transformation costs by default.

The December 9 Convergence

All announced the same day:

Morning:

  • US-India defense/energy partnership (offshoring strategy)

Afternoon:

  • DoD selects Google Gemini for GenAI.Mil (federal deployment)

Evening:

  • Six AI companies announce MCP + Agentic AI Foundation (industry coordination)

While simultaneously:

  • Labor market collapsing (JOLTS 7.7M openings, lowest since 2021)
  • Trump EO coming this week (override local control)
  • Developer withdrawing tactically (Howell Township, Michigan)
  • November showed -32,000 jobs (ADP)

The pattern: Infrastructure accelerating, deployment scaling, governance consolidating, labor market collapsing.

All coordinated. All while workers have no seat at any table.

What This Means Going Forward

December 9, 2025: Six companies coordinate on AI agent governance.

What this will likely accomplish technically:

  • Effective standards for AI agent behavior
  • Interoperability via Model Context Protocol
  • Safety protocols that function as designed
  • Deployment frameworks that work

What this likely won’t accomplish socially:

  • Worker transition support requirements
  • Job quality standards for AI-augmented work
  • Community impact governance
  • Democratic input on transformation pace
  • Distribution of productivity gains

Not because the foundation will fail technically. Because social outcomes aren’t in scope.

The Bottom Line

AIAG worked technically. Three companies set standards that improved automotive quality and efficiency. Those standards still function globally today.

Workers were still hurt. Not because AIAG caused displacement, but because transformation was governed only technically, not socially.

Agentic AI Foundation will likely work technically. Six companies will set standards that enable safe, interoperable agent deployment. Those standards will probably function as designed.

Workers will likely still be hurt. Not because the foundation causes displacement, but because AI transformation is being governed only technically, not socially.

But this time the scale is exponentially larger:

AIAG: Three companies, one industry, one region
Agentic AI Foundation: Six companies, every industry, global reach

AIAG: Years for standards adoption
Agentic AI Foundation: Already deploying (MCP exists, agents active)

AIAG: Strong unions existed
Agentic AI Foundation: Minimal worker organizing

AIAG: Government could intervene
Agentic AI Foundation: Government is customer and enabler

The lesson from my AIAG experience isn’t “industry standards bad.”

The lesson is: Technical governance without social governance leaves workers bearing transformation costs by default.

I worked with AIAG standards for my entire career in automotive IT, and they achieved exactly what they were designed to achieve. That didn’t mean workers were okay.

The Agentic AI Foundation will likely achieve exactly what it’s designed to achieve. That won’t mean workers are okay either.

Michigan lived through AIAG coordinating three companies’ technical standards while 300,000+ manufacturing jobs disappeared.

We’re about to live through six companies coordinating technical standards while agents automate work across every industry.

Unless something changes about who governs transformation and whose interests are represented.

This time, we should recognize the playbook before the damage is done — not after.

