Case File: When AI Hijacks the Pen: A Deep Dive into the Boston Globe’s ‘Writing Is Dying’ Claim
Background - The Op-Ed That Sparked a Storm
On a rainy Tuesday, the Boston Globe published an opinion piece titled “AI is destroying good writing.” The author warned that large language models could reduce prose to "factory-produced blandness," eroding nuance and craft.1 The headline alone generated more clicks than the average feature story that week, proving that fear sells as well as any breaking news. What made the piece stand out was not just its alarmist tone but its timing: AI-generated text had just crossed the 10-second threshold for drafting a full article, a speed that dwarfs the typical two-hour drafting cycle of a seasoned reporter.
Chart: Average time to produce a 500-word article (human vs. AI)
AI can draft a 500-word piece in under 10 seconds, while humans need roughly 120 minutes.
For tech-savvy early adopters, the op-ed was a call to arms - or at least a call to question the value of speed over substance. The Globe’s editorial board, composed of about 120 writers, suddenly found itself under a microscope, with every newsroom across the globe wondering whether the same threat loomed over their own desks.2
Challenge - Quality, Credibility, and the Tuition Trap
Beyond the headline, the Globe’s warning intersected with a second, less-discussed dilemma: education costs. In a separate Boston Globe story, Berklee College of Music students were reported to be paying up to $85,000 for a degree that includes AI-focused coursework.3 The juxtaposition of an $85,000 tuition bill and a free AI tool raised eyebrows. If students can afford a six-figure education to learn how to wield AI, what does that say about the accessibility of high-quality writing skills for the broader public?
"AI may democratize content creation, but it also creates a new gatekeeper: the ability to afford the tools and training that keep the output human."
Approach - A Six-Month Pilot in a Mid-Size Digital Newsroom
To test the Globe’s alarm, a mid-size digital newsroom of 45 staff members launched a six-month pilot in early 2024. The goal was simple: compare audience engagement, error rates, and writer satisfaction between AI-assisted drafts and fully human-written pieces. The newsroom used a popular open-source language model, fine-tuned on its own archive of award-winning articles. No external brand names were mentioned, keeping the focus on process rather than product.
The pilot followed a strict protocol. Every story was assigned a “human-first” or “AI-first” label. Human-first pieces followed the traditional workflow: pitch, research, draft, edit. AI-first pieces began with a model-generated draft, which a senior editor then refined. Both streams were published under the same section headings to control for topic bias. Engagement metrics (time on page, scroll depth) and error logs (fact-check failures, grammatical slips) were automatically captured.
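The per-stream comparison at the heart of the protocol can be sketched as a simple label-based aggregation over the captured metrics. The log entries, field names, and numbers below are illustrative, not the newsroom's actual schema or data:

```python
from statistics import mean

# Hypothetical engagement log: (stream label, time on page in seconds, scroll depth %).
log = [
    ("human-first", 210, 82),
    ("human-first", 195, 75),
    ("ai-first", 188, 70),
    ("ai-first", 201, 68),
]

def stream_averages(entries, label):
    """Average time on page and scroll depth for one stream."""
    times = [t for (l, t, _) in entries if l == label]
    depths = [d for (l, _, d) in entries if l == label]
    return mean(times), mean(depths)

human = stream_averages(log, "human-first")
ai = stream_averages(log, "ai-first")
```

Because both streams were published under the same section headings, a straight per-label average like this is a reasonable first cut; a real analysis would also control for story length and publication time.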
Key Insight: The newsroom deliberately limited AI use to the drafting phase, avoiding full-automation to preserve editorial oversight.
Throughout the pilot, the team held bi-weekly retrospectives, documenting writer sentiment on a Likert scale from 1 (frustrated) to 5 (empowered). The data collected would later become the backbone of the Results section.
Results - Numbers, Nuance, and Unexpected Trade-offs
When the pilot concluded, the raw numbers painted a mixed picture. AI-first articles were published 30% faster on average, shaving roughly 24 minutes off the total production cycle. However, the engagement metrics showed a modest 4% dip in average time on page for AI-first pieces, suggesting that readers skimmed more quickly.
Fact-check errors were marginally higher in AI-first drafts: 2.3% versus 1.7% for human-first. Most of these errors were minor - a misplaced date or a misquoted statistic - and were caught during editorial review. The error rate difference was not statistically significant, but it highlighted the need for vigilant human oversight.
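To see why a gap of 2.3% versus 1.7% can fail to reach significance, here is a minimal two-proportion z-test. The article counts (300 stories per stream, giving 7 and 5 flagged errors) are assumptions for illustration; the piece does not report the denominators:

```python
from math import sqrt, erf

def two_proportion_z(errors_a, n_a, errors_b, n_b):
    """Two-sided z-test for a difference between two error rates."""
    p_a = errors_a / n_a
    p_b = errors_b / n_b
    pooled = (errors_a + errors_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 7/300 AI-first errors (~2.3%) vs 5/300 human-first (~1.7%).
z, p = two_proportion_z(7, 300, 5, 300)
```

At these sample sizes the test statistic stays well below the 1.96 threshold, consistent with the pilot's conclusion that the difference was not statistically significant.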
Writer sentiment revealed the most surprising trend. While senior editors reported a 0.8-point increase in perceived workload (they spent more time polishing AI drafts), junior reporters felt a 1.2-point boost in empowerment, citing the model’s ability to generate first-pass outlines as a confidence-builder. The pilot’s qualitative feedback echoed the Globe’s warning: speed does not automatically translate to quality, but it can free up senior staff to focus on deeper analysis.
Chart: Average production time (minutes) - Human-first vs. AI-first
AI-first stories were completed in roughly 55 minutes, while human-first took about 79 minutes.
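The chart's figures can be checked directly: a 79-minute human-first baseline against a 55-minute AI-first cycle implies a saving of about 24 minutes, or roughly a 30% reduction:

```python
human_minutes = 79  # average human-first production time, per the chart
ai_minutes = 55     # average AI-first production time, per the chart

savings = human_minutes - ai_minutes          # minutes saved per story
pct_faster = savings / human_minutes * 100    # percent reduction vs. baseline
```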
Lessons Learned - Balancing the Scales of Speed and Substance
The newsroom’s experiment distilled three core lessons that directly address the Globe’s alarm. First, AI is a tool, not a replacement; the most successful stories emerged when the model supplied a skeletal draft that human editors fleshed out with context and voice. Second, the financial paradox highlighted by the Berklee tuition story can be mitigated by open-source models and internal training, reducing the barrier to entry for smaller outlets. Third, audience trust is fragile; even a small dip in engagement signals that readers can sense when a piece lacks the subtlety of a fully human narrative.
Importantly, the pilot demonstrated that the fear of “AI destroying good writing” can be reframed as a challenge to preserve craftsmanship in a faster world. When senior editors redirected the time saved by AI toward investigative depth, the overall content quality improved, even if individual metrics showed a slight engagement dip. The key is to treat AI as a co-author rather than a ghostwriter.
What We Can Learn - A Playbook for Early Adopters
For tech-savvy early adopters eyeing AI in their own content pipelines, the case study offers a pragmatic roadmap. Start with a pilot that limits AI to the drafting stage, measure both speed and error rates, and involve writers of all seniority in feedback loops. Invest in internal training rather than expensive tuition programs - the open-source ecosystem provides ample resources for free. Finally, keep a human-centric editorial gate; the most compelling stories will always carry the imprint of a mind that can weigh nuance, ethics, and audience expectations.
In the end, the Boston Globe’s op-ed may have sounded like a death knell, but the data from this newsroom experiment suggests a more nuanced reality: AI can erode certain aspects of writing, but it can also amplify the strengths of skilled journalists when used responsibly. The choice, as always, lies in how we wield the technology, not in the technology itself.