
The credibility of journalism has come under renewed scrutiny after the Chicago Sun-Times published a summer reading list containing multiple fictitious, AI-generated book titles. The article, which recommended books for the 2025 summer season, attributed several fabricated titles to real, well-known authors. Social media users quickly spotted the fabrications and accused the publication of relying on ChatGPT or a similar AI tool without proper verification. As the backlash intensified, the newspaper acknowledged that the content had been created by a freelancer working with one of its content partners, not by its in-house editorial team.

I. Discovery of the Fake Book Titles
1. Social Media Sparks Outrage
On Tuesday, readers began circulating screenshots of the Sun-Times article titled “Summer Reading List for 2025,” noting that many of the book titles simply didn’t exist. Kelly Jensen, an editor at Book Riot, checked library newspaper archives to confirm the article was genuine, then criticized the publication on Bluesky: “Why the hell are you using ChatGPT to make up book titles? You used to have a books staff. Absolutely no fact checking?”
Her post quickly went viral, garnering over 1,000 likes and nearly 500 reposts in just a few hours. This sparked broader concern over AI’s role in content creation and the diminishing emphasis on fact-checking in modern journalism.
2. Real Authors, Imaginary Books
Among the nonexistent titles were Hurricane Season by Brit Bennett, Nightshade Market by Min Jin Lee, and Boiling Point by Rebecca Makkai. Though these are respected, real-life authors, the listed books have never been published. Strikingly, each fake entry included detailed descriptions and personalized reader recommendations, lending the article an air of legitimacy—further misleading readers.
The article did list a few authentic titles, such as Atonement by Ian McEwan. However, a handful of genuine entries did little to offset the entirely fabricated works presented alongside them.
II. Deeper Issues Within the Publication
1. AI-Generated Content Appears Elsewhere
The issue didn’t end with the reading list. Readers also spotted similar red flags elsewhere in the Chicago Sun-Times’ summer coverage. One article on seasonal food trends cited a supposed Cornell University food anthropologist named Catherine Furst, who appears to be fictitious. Another piece offering backyard renovation tips quoted someone from FirepitBase.com, a website that does not appear to exist.
These revelations have fueled criticism that AI-generated stories may be more widespread than initially believed, especially in content sourced from third-party providers.
2. Newspaper’s Response and Damage Control
Facing increasing scrutiny, the Chicago Sun-Times issued a public response on Bluesky. “We are looking into how this made it into print as we speak,” the publication stated. It clarified that the content was not produced by its newsroom and had not been approved by its editorial team. By Tuesday evening, the paper confirmed that the material was part of a special section provided by a nationally recognized content partner.
“This should be a learning moment for all of journalism,” the paper wrote in a follow-up article. “Our work is valued because of the relationship our very real, human reporters and editors have with our audiences.”
III. Broader Repercussions Across the Industry
1. Syndicated Misinformation
The controversy extended beyond Chicago. AngelaReadsBooks, a user on Threads, pointed out that the same fabricated book list was published by the Philadelphia Inquirer, suggesting the AI-generated material was syndicated to multiple outlets.
This revelation has raised alarms about how unchecked content can spread across trusted news platforms, threatening reader trust and journalistic integrity.
2. Backlash from the Literary Community
The reaction from the book world was swift and cutting. Bestselling author Jasmine Guillory expressed her disbelief on Threads, writing: “Holy shit. Just imaginary books and they printed it.” Many others in the literary field condemned the article for undermining the importance of genuine literature, especially at a time when libraries and publishing budgets face ongoing cuts.
As one post stated: “At a time when libraries and library budgets are being threatened every day, this is a slap in the face to anyone in the profession. Where is the journalistic integrity???”
IV. The Ethics of AI in Journalism
1. Risks of “AI Hallucinations”
The incident underscores the phenomenon known as “AI hallucination,” where chatbots generate plausible-sounding but false information. While this is a known limitation of generative AI like ChatGPT, the problem becomes especially serious when such content makes its way into published journalism without verification.
Once presented to the public, even subtly inaccurate content can damage the credibility of the media outlet and misinform readers who trust these platforms.
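Editors do not have to take such lists on faith; public bibliographic databases make a basic existence check straightforward. The Python snippet below is a minimal, illustrative sketch, not a workflow the Sun-Times or its content partner is known to use: it queries the public Open Library search API for each title-author pair and flags entries with no match. The title_exists helper and the choice of Open Library as the reference catalog are illustrative assumptions, and a missing match would still call for human review rather than automatic rejection.

```python
import requests

def title_exists(title: str, author: str) -> bool:
    """Return True if the Open Library search API finds at least one
    record matching both the given title and author."""
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("numFound", 0) > 0

# One genuine title from the Sun-Times list alongside two of the fabricated ones.
reading_list = [
    ("Atonement", "Ian McEwan"),
    ("Nightshade Market", "Min Jin Lee"),
    ("Boiling Point", "Rebecca Makkai"),
]

for title, author in reading_list:
    verdict = "found" if title_exists(title, author) else "NO MATCH - flag for review"
    print(f"{title} by {author}: {verdict}")
```

Even a check this crude would likely have flagged most of the invented entries before the section went to print, though catalog gaps and attribution quirks mean a human editor still has to make the final call.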
2. Newsrooms Grapple With AI Integration
The rise of AI-generated content is forcing news organizations to reconsider how they gather, verify, and present information. While some publications have embraced AI to boost productivity—openly posting job ads for “AI-assisted” reporters—others are clearly struggling with the balance between innovation and accuracy.
The Chicago Sun-Times now faces a reputational challenge that may impact its long-standing credibility. Founded in 1948, the paper has historically held the second-highest circulation in Chicago, trailing only the Chicago Tribune. Its future reputation may depend on how it handles this situation and safeguards editorial standards moving forward.
Conclusion: A Cautionary Tale for the Media Industry
The use of AI to create a reading list filled with nonexistent books serves as a stark reminder of the responsibilities that come with using advanced technology in journalism. While artificial intelligence can streamline content production, it cannot—and should not—replace human oversight. The Chicago Sun-Times incident illustrates what happens when those checks fail: fabricated stories are presented as truth, misleading readers and harming the publication’s credibility.
Moving forward, media outlets must reinforce fact-checking protocols and establish strict guidelines for using AI-generated material. Audiences trust human reporters not just to inform, but to tell the truth. As the boundaries between human and machine-written content blur, that trust becomes more valuable—and more fragile—than ever.
