Google Tests A.I. Tool That Is Able to Write News Articles - Looks like journalism will be automated before trucking

The product, pitched as a helpmate for journalists, has been demonstrated for executives at The New York Times, The Washington Post and News Corp, which owns The Wall Street Journal.


Google is testing a product, known internally as Genesis, that can take in information and produce news stories. Alastair Grant/Associated Press

By Benjamin Mullin and Nico Grant
Published July 19, 2023; updated July 20, 2023, 12:47 a.m. ET

Google is testing a product that uses artificial intelligence technology to produce news stories, pitching it to news organizations including The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp, according to three people familiar with the matter.

The tool, known internally by the working title Genesis, can take in information — details of current events, for example — and generate news content, the people said, speaking on the condition of anonymity to discuss the product.

One of the three people familiar with the product said that Google believed it could serve as a kind of personal assistant for journalists, automating some tasks to free up time for others, and that the company saw it as responsible technology that could help steer the publishing industry away from the pitfalls of generative A.I.

Some executives who saw Google’s pitch described it as unsettling, asking not to be identified discussing a confidential matter. Two people said it seemed to take for granted the effort that went into producing accurate and artful news stories.

Jenn Crider, a Google spokeswoman, said in a statement that “in partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide A.I.-enabled tools to help their journalists with their work.”

“Quite simply, these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles,” she added. Instead, they could provide options for headlines and other writing styles.

A News Corp spokesman said in a statement, “We have an excellent relationship with Google, and we appreciate Sundar Pichai’s long-term commitment to journalism.”

The Times and The Post declined to comment.

Jeff Jarvis, a journalism professor and media commentator, said Google’s new tool, as described, had potential upsides and downsides.

“If this technology can deliver factual information reliably, journalists should use the tool,” said Mr. Jarvis, director of the Tow-Knight Center for Entrepreneurial Journalism at the Craig Newmark Graduate School of Journalism at the City University of New York.

“If, on the other hand, it is misused by journalists and news organizations on topics that require nuance and cultural understanding,” he continued, “then it could damage the credibility not only of the tool, but of the news organizations that use it.”

News organizations around the world are grappling with whether to use artificial intelligence tools in their newsrooms. Many, including The Times, NPR and Insider, have notified employees that they intend to explore potential uses of A.I. to see how it might be responsibly applied to the high-stakes realm of news, where seconds count and accuracy is paramount.

But Google’s new tool is sure to spur anxiety, too, among journalists who have been writing their own articles for decades. Some news organizations, including The Associated Press, have long used A.I. to generate stories about matters including corporate earnings reports, but they remain a small fraction of the service’s articles compared with those generated by journalists.

Artificial intelligence could change that, enabling users to generate articles on a wider scale that, if not edited and checked carefully, could spread misinformation and affect how traditionally written stories are perceived.

While Google has moved at a breakneck pace to develop and deploy generative A.I., the technology has also presented some challenges to the advertising juggernaut. While Google has traditionally played the role of curating information and sending users to publishers’ websites to read more, tools like its chatbot, Bard, present factual assertions that are sometimes incorrect and do not send traffic to more authoritative sources, such as news publishers.

The technology has been introduced as governments around the world have called on Google to give news outlets a larger slice of its advertising revenue. After the Australian government tried to force Google to negotiate with publishers over payments in 2021, the company forged more partnerships with news organizations in various countries, under its News Showcase program.

Publishers and other content creators have already criticized Google and other major A.I. companies for using decades of their articles and posts to help train these A.I. systems, without compensating the publishers. News organizations including NBC News and The Times have taken a position against A.I.’s sucking up their data without permission.

Source (Archive)
 
Not surprising. 90% of journalism is nothing more than content aggregation with whatever political slant thrown onto it. How many articles do you read these days that are just re-releases of a government press release with some "expert" tacked on to spew the party line? In this age, when you can find primary source information all on your own, the only journalism worth anything is investigative journalism. The rest just regurgitates Twitter.
 
This is nothing new. They won't admit it openly, but Google and other news organisations have had 'bots' writing news articles, complete with fake, AI-generated pictures and journalist names, since 2014.

If you want to go down a rabbit hole, look at the amount of bot-created content that has been published over the last decade, who used it and for what reasons. Combine that with the tens of millions of bots that flooded social media sites, reposting bot-written news, and this new wave of AI scaremongering starts to look a little laughable.
 
Hallucination and Libel Laws. It's not a stretch for an AI to go on a journo-rant against a private figure in an article and hallucinate that they're also a pedophile or have a criminal record. Unless the editor is checking every detail, which they won't ever do, the AI will have just libeled a private individual. Public figures are harder to nail with those laws, but private individuals? You'll be in legal hell. And "well, the AI got it wrong" doesn't protect them from publishing lies.

Mouthbreathing journalists aren't much better, but at least in that situation, the outlet can try to shove the legal matter off onto the writer, especially if they're freelance. If it came from an AI run by the publication, and run through a publication editor, it's all on them.
 
That's a good point, actually. Journos are disposable. They can be hidden behind whenever lawsuits come knocking. AI, not so much.

I mean, the easy option is to just proofread and fact-check what the AI says, but these are MSM outlets we're talking about. They get paid by ShareBlue to pump out propaganda by the truckload, not to waste time making sure what they say is correct.
 