A former journalist at Australian Community Media (ACM) expressed concern after discovering that an internal generative artificial intelligence (AI) model had created a potentially defamatory headline for his article. The journalist, referred to as Tim to protect his identity, caught the error just before publication. He stated, "It had generated something false from the story. With my knowledge about the story, I knew that could have potentially defamed someone who could have been wrongly identified from what was generated." Tim felt anxious about the possibility of other unchecked errors being published.
Another ACM reporter, identified as Terri, also raised alarms about the AI's legal advice regarding a news story. She described the guidance as troubling, stating it overstated the legal risks involved. "The AI returned a lot of information saying that the story posed a defamation risk, and going through what it returned, I don't think it was correct," she said. Terri chose not to follow the AI's advice, likening it to self-diagnosing medical symptoms instead of consulting a qualified professional.
In a leaked email dated October 3, ACM management informed staff about ongoing "AI experiments and testing" in their newsrooms. These tests include editing stories, writing headlines, and generating story ideas. Reports indicate that the generative AI model in use is Google's Gemini, which has been customized for ACM to ensure that its data remains private.
Cassie Derrick, a director at the Media, Entertainment and Arts Alliance (MEAA), claimed that some ACM newspapers have been directed to use Gemini for all reporting tasks. She noted that the AI has made significant errors, such as misattributing court charges. "Gemini has attributed charges to the wrong person," Derrick said. "That journalist caught it, by doing the fact-checking, but had they not, it obviously would have been a disaster."
Concerns about job security have also emerged among ACM employees. An employee named Sam expressed fears that the technology could lead to job cuts. "Some people will lose jobs and the ones who are left behind will be left picking up the pieces," he said. Amid these concerns, some reporters have opted not to use the AI tools. ACM had previously cut 35 jobs, citing a loss of funding from Meta, the parent company of Facebook.
ACM said it had received no reports of factual errors or legal issues stemming from AI-generated content, and a spokesperson described claims about the use of generative AI in its newsrooms as "flawed." The spokesperson emphasized that humans make all decisions regarding published content and that AI is not a substitute for journalists or legal professionals. "Integrity and accuracy are not negotiable," the spokesperson added.
Experts in AI and media, such as RMIT University's TJ Thomson, noted that the use of AI in journalism is becoming increasingly common. However, he cautioned against relying on AI for legal advice because of the geographic bias in the models' training data. "These models have been trained primarily with information scraped from the World Wide Web, a lot of it from North America, which is a very different legal context," he explained.
Other media organizations, including the ABC and The New York Times, have implemented their own AI tools while maintaining strict editorial oversight. The ABC has introduced an internal generative AI called ABC Assist, which aids journalists in various tasks. The New York Times has clarified that it does not use AI to write articles, emphasizing that journalists are responsible for all published content. Meanwhile, News Corp in Australia has been hiring AI engineers to enhance its editorial team. Google has not provided any comments on the matter.