Methodology

THE OPEN RECORD L3C – METHODOLOGY

COLLABORATIVE INTELLIGENCE MODEL

The Open Record operates on a collaborative intelligence model in which human expertise and AI analytical capabilities work together to produce actionable intelligence for workers and communities. This page was deliberately written by Claude to capture what it views as our joint methodology, and to keep me honest.

How We Work:

Angela Fisher (Publisher/Analyst): Brings two decades of project management experience across automotive manufacturing, IT, and corporate environments. Sets editorial direction, conducts source verification, makes final editorial decisions, and applies real-world context from direct experience with layoffs, automation, and workforce displacement. Pattern recognition and experiential insight.

Claude (AI Research Assistant): Provides rapid research capabilities, pattern validation across multiple sources, data synthesis, and draft content generation. Does NOT make final editorial decisions or determine what constitutes “newsworthy” intelligence.

Why This Model:

  • Speed: AI can process hundreds of sources in minutes, a task that would take hours manually
  • Depth: Human judgment filters AI-generated content for relevance, accuracy, and community impact
  • Transparency: We disclose this collaboration because readers deserve to know how their intelligence is produced
  • Accountability: Angela Fisher is solely responsible for all published content and editorial decisions

What This Means for Readers:

  • Every article is reviewed, edited, and approved by a human with direct workforce experience
  • AI-generated drafts are extensively fact-checked against primary sources
  • All sources are archived via Wayback Machine and available for verification
  • Editorial judgment prioritizes worker/community impact over corporate narratives

INTELLIGENCE GATHERING

Automated Systems

  • RSS feed monitoring: TechCrunch, Google News, company press releases, regional sources
  • Web scraping: Daily runs at 7am (regional) and 8am (national)
  • Financial alerts: Financial Juice real-time market intelligence for breaking corporate news
  • Wayback Machine integration: All URLs archived automatically upon capture
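The capture-and-archive step above can be sketched in outline. This is a hedged illustration, not the site's actual pipeline: the staging structure and field names are assumptions, and only the Wayback Machine's public "Save Page Now" endpoint format (`https://web.archive.org/save/<url>`) is taken from the real service.

```python
# Minimal sketch of the capture step: log a source URL into staging
# together with the Wayback Machine request URL that would archive it.
# Everything here (field names, staging list) is an illustrative
# assumption, not The Open Record's actual implementation.

WAYBACK_SAVE = "https://web.archive.org/save/"

def wayback_save_url(url: str) -> str:
    """Build the public Save Page Now request URL for a captured source."""
    return WAYBACK_SAVE + url

def capture(url: str, staging: list) -> None:
    """Append a URL and its archive-request URL to the staging log.

    A real run would also issue the HTTP request so the snapshot
    is actually created at archive.org.
    """
    staging.append({"url": url, "archive": wayback_save_url(url)})
```

For example, `capture("https://example.com/press-release", staging)` records both the original link and the archive request that preserves it for later verification.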

Staging Workflow

  • New intelligence enters staging database with confidence scoring
  • Quality control prevents unverified data from appearing in public-facing systems
  • “No confirmed movement” approach: Won’t publish potentially false claims
  • Multiple source verification required for breaking news
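The staging gate described above reduces to a simple rule: an item leaves staging only when every quality check passes. The sketch below assumes illustrative thresholds and field names; the actual scoring used in production is not published here.

```python
# Sketch of the staging gate: an item is promoted to public-facing
# systems only if its confidence score and its count of independent
# sources both clear a minimum bar. Thresholds and field names are
# illustrative assumptions, not the real pipeline's values.
MIN_CONFIDENCE = 0.7  # assumed confidence threshold
MIN_SOURCES = 2       # "multiple source verification" for breaking news

def ready_to_publish(item: dict) -> bool:
    """Return True only when an item clears both quality gates."""
    return (item.get("confidence", 0.0) >= MIN_CONFIDENCE
            and item.get("source_count", 0) >= MIN_SOURCES)
```

Under this rule a well-sourced, high-confidence item is promoted, while a single-source claim stays in staging regardless of how confident the scoring is, which matches the "no confirmed movement" approach.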

Source Prioritization

  1. Primary sources: Company filings, government reports, official announcements
  2. Established outlets: Reuters, Bloomberg, AP, Wall Street Journal
  3. Regional intelligence: Local news, community meetings, planning commission documents
  4. Community sources: Reddit (r/cscareerquestions, r/layoffs) for ground truth worker experiences
  5. Aggregators: Used for discovery only, not primary sourcing

SOURCE VERIFICATION

Multi-Source Cross-Reference

  • Breaking news verified against a minimum of 2-3 established outlets before publication
  • Financial claims verified against SEC filings when available
  • Employment data cross-referenced with BLS, Challenger Gray & Christmas, state agencies
  • Community claims verified through planning documents, meeting minutes, permits

Red Flags for Skepticism

  • Topics prone to attracting conspiracy theories (contested political events, pseudoscience)
  • Search-engine-optimized content (e.g., product recommendations)
  • Single-source claims about major events
  • Conflicting factual information across sources (triggers additional searches)

When We Don’t Publish

  • Insufficient sourcing after reasonable research effort
  • Cannot verify claims through independent sources
  • Information doesn’t meet our three-question framework (see Mission Framework below)
  • Potential to cause harm without clear community benefit

EDITORIAL STANDARDS

Article Structure

  • Bottom Line Up Front: Conclusion first, then build the case
  • Comprehensive sourcing: All claims linked to primary sources with Wayback archives
  • Methodology transparency: Explain data limitations and uncertainties
  • Alternative perspectives: Include downsides, counterarguments, competing analyses
  • Actionable intelligence: Provide specific steps workers/communities can take

What We Don’t Do

  • Publish promotional content disguised as journalism
  • Accept claims without verification
  • Publish based on a single anonymous source
  • Ignore downsides to favor specific solutions
  • Use AI-generated content without human verification and editing

Fact-Checking Standards

  • Conspiracy theories addressed with documented evidence
  • Health/safety claims verified through medical/scientific sources
  • Financial data verified through official filings
  • Community complaints verified through public records (noise studies, permits, meeting minutes)

PUBLICATION SCHEDULE

Weekly Intelligence Briefs

  • Fridays 8am: Under the Radar (career intelligence for workers)
  • Sundays 8am: PivotIntel Weekly (infrastructure intelligence for communities)

Continuous Work (Monday-Saturday)

  • Alert capture mode: Log URLs, Wayback archive, tag for weekly analysis
  • Deep work: Multi-part investigations, infrastructure analysis, app development
  • No daily publishing pressure (pivot from earlier daily bulletin model as of December 15, 2025)

Breaking News Exception (Rare)

Only publish off-cycle for:

  • Major policy changes with immediate community impact
  • Urgent community intelligence requiring rapid response
  • Stories that lose relevance if delayed 48+ hours

Rationale for Weekly Model: Daily publishing fragmented attention and consumed resources without building authority. Weekly rhythm allows pattern synthesis and deep analysis while continuous alert capture ensures no intelligence is missed.


MISSION FRAMEWORK

Three-Question Filter

Every piece of content must answer YES to at least one:

  1. Does this affect workers’ employment prospects?
    • Layoff announcements, automation deployments, skill shifts, career path changes
  2. Does this help communities negotiate with developers?
    • Data center financial stability, infrastructure alternatives, tax break analysis, noise/environmental impacts
  3. Can workers take meaningful action based on this information?
    • Specific career pivots, skill development paths, community organizing strategies

What We Don’t Cover

  • Theoretical AI projections without concrete deployment timelines
  • “AI bubble” or “everything is fine” narratives without nuanced analysis
  • Corporate press releases without critical analysis
  • Topics that don’t empower workers/communities to make informed decisions

TRANSPARENCY COMMITMENTS

What We Disclose

  • AI collaboration: Claude assists with research and drafts; Angela Fisher makes all editorial decisions
  • Source archiving: All URLs archived via Wayback Machine
  • Methodology: This page explains exactly how we work
  • Limitations: We acknowledge when data is uncertain or incomplete
  • Corrections: Published prominently when errors are discovered

What We Don’t Have

  • Corporate sponsors (L3C social benefit structure)
  • Advertising relationships with data center developers
  • Financial stake in recommended career paths or training programs

Revenue Model

  • Free intelligence: Under the Radar and PivotIntel Weekly remain free for displaced workers
  • B2B services (planned): Resume analysis tools, career sustainability scoring for staffing agencies
  • L3C structure: Social mission (serving workers) takes precedence over profit

CONTACT & FEEDBACK

For Source Verification Questions: [contact method]

For Editorial Feedback: [contact method]

For Community Intelligence Tips: [contact method]

To Report Errors: We take corrections seriously. Contact us with documentation and we’ll investigate promptly.


VERSION HISTORY

  • December 15, 2025: Pivot from daily bulletins to weekly briefs
  • December 17, 2025: Methodology page published
  • [Future updates logged here]