TRANSPARENT JOURNALISM IN PRACTICE
A human and an AI finding a way to make it work.
By The Open Record, L3C Editorial Team
angelaf@duck.com • theopenrecord.org
Our Commitment to Transparency
The Open Record, L3C believes that modern journalism should demonstrate exactly how technology assists in research and reporting while maintaining human editorial control. This special edition showcases our methodology.
Human-AI Collaboration Process
- Editorial Direction: Human editors define story angles, research questions, and editorial standards
- Research Phase: AI assists with systematic information gathering and source identification
- Analysis: Human journalists interpret findings and provide context
- Fact-Checking: All claims are independently verified through original sources
- Editorial Control: Final decisions on content, framing, and publication remain with human editors
What We Share
- Source URLs for independent verification
- Search terms used in research
- Clear attribution of AI-assisted versus human-generated content
- Methodology notes explaining our process
Our Standards
We believe transparency builds trust. By showing our work and citing our sources, we help readers understand both the capabilities and limitations of AI-assisted journalism while demonstrating how technology can enhance rather than replace human judgment in reporting.
THE OPEN RECORD, L3C • Old-School Truth in the Digital Age
Contact: angelaf@duck.com • theopenrecord.org
Transparent Journalism for the Public Good
General Background Research: This note consolidates the background research and conversations about artificial intelligence that preceded our first edition, demonstrating transparent human-AI collaboration in journalism. All sources have been independently verified and are available for readers to check. The Open Record, L3C is committed to showing our work and maintaining editorial integrity while leveraging technology to enhance our reporting capabilities.
Sources and Further Reading
Independent/Nonpartisan Sources
- TechTarget – AI Chatbot Analysis (2025)
- Science Alert – AI Bias Research (2025)
- University of Kansas – AI Bias Education
- TechRxiv – Political Bias in AI Language Models
Government & International Organizations
- World Economic Forum – Future of Jobs Report 2025
- WEF – Jobs of the Future and Skills Analysis
- WEF – AI and Entry-Level Jobs Impact
Academic & Research Organizations
- Anthropic – AI Safety Research & Constitutional AI
- OpenAI – GPT Development & Safety Measures
- Google DeepMind – AI Alignment Research
- UC Santa Cruz – AI Empathy Study
- Stanford – Mental Health Chatbot Risks
Industry & Technical Sources
- Sustainability Magazine – AI Impact Analysis
- GitHub Copilot – AI Pair Programming
- Anthropic – Claude 4 Capabilities
- GitHub Blog – AI Model Selection Guide
- Developer Experience – AI Partnership Model
Policy & Analysis Sources
- Brookings Institution – ChatGPT Politics
- arXiv – Bias and Fairness Overview
- Decrypt – Political Spectrum Analysis
- SHRM – Why Robots Won’t Steal Your Job
Key Reference Works
- Kai-Fu Lee – “AI Superpowers: China, Silicon Valley, and the New World Order”
- National Committee on U.S.-China Relations – Kai-Fu Lee Interview
- Washington Post – AI Superpowers Review
Research Methodology Note: Sources were selected from across the political spectrum and a range of organizational types to ensure a balanced perspective. All factual claims in this article have been cross-referenced across multiple sources, and the links provided enable independent verification of every statement made.