Microsoft’s Use Of ‘AI’ In Journalism Has Been An Irresponsible Mess
from the I'm-sorry-I-can't-do-that,-Dave dept
We’ve noted repeatedly how early attempts to integrate “AI” into journalism have proven to be a comical mess, resulting in no shortage of shoddy product, dangerous falsehoods, and plagiarism. It’s thanks in large part to the incompetent executives at many large media companies, who see AI primarily as a way to cut corners, assault unionized labor, and automate lazy and mindless ad engagement clickbait.
The folks rushing to implement half-cooked AI at places like Red Ventures (CNET) and G/O Media (Gizmodo) aren’t competent managers to begin with. Now they’re integrating “AI” with zero interest in whether it actually works or if it undermines product quality. They’re also often doing it without telling staffers what’s happening, revealing a widespread disdain for their own employees.
Things aren’t much better over at Microsoft, where the company’s MSN website had already been drifting toward low-quality clickbait and engagement gibberish for years. They’re now busy automating a lot of the content at MSN with half-baked large language models, and it’s… not going great.
The company recently came under fire after MSN reprinted a Guardian story about the murder of a young Australian woman, including a tone-deaf AI-generated poll some felt made light of the death. But as CNN notes, MSN has also been rife with a flood of “news” that’s either weirdly heartless or just false, even in instances where it’s simply republishing human-written content from other outlets:
“In August, MSN featured a story on its homepage that falsely claimed President Joe Biden had fallen asleep during a moment of silence for victims of the catastrophic Maui wildfire.
The next month, Microsoft republished a story about Brandon Hunter, a former NBA player who died unexpectedly at the age of 42, under the headline, “Brandon Hunter useless at 42.”
Then, in October, Microsoft republished an article that claimed that San Francisco Supervisor Dean Preston had resigned from his position after criticism from Elon Musk.”
It’s a pretty deep well of dysfunction. One of my personal favorites was when an automated article on Ottawa tourism recommended that tourists prioritize a trip to a local food bank. When caught, Microsoft often tries to pretend the problem isn’t lazily implemented automation, deletes the article, then just continues churning out automated clickbait gibberish.
While Microsoft executives have posted endlessly about the responsible use of AI, that apparently doesn’t include their own news website. MSN is routinely embedded as the unavoidable default launch page at a lot of enterprises and companies, ensuring this automated bullshit sees fairly widespread distribution even if users don’t actually want to read any of it.
Microsoft, for its part, says it will try to do better:
“As with any product or service, we continue to adjust our processes and are constantly updating our existing policies and defining new ones to handle emerging trends. We are committed to addressing the recent issue of low quality articles contributed to the feed and are working closely with our content partners to identify and address issues to ensure they are meeting our standards.”
Again though, MSN, like so many outlets, had been drifting toward garbage clickbait long before large language models came around. AI has just supercharged existing bad tendencies. Most of these execs see AI as a money-saving shortcut to creating automated ad-engagement machines that effectively shit money, without the pesky need to pay human editors or reporters a living wage.
With an army of well-funded authoritarian hacks keen on using propaganda to befuddle the masses at unprecedented scale, quality ethical journalism is more important than ever. But instead of fixing the sector’s key shortcomings or paying our best reporters and editors a living wage, we’re seemingly dead set on ignoring their input and doubling down on, and automating, all of the sector’s worst habits.
While the AI will certainly improve, there’s little indication the executives making key decisions will. U.S. journalism has been on a very unhealthy trajectory for a long while thanks to these same execs, who will dictate most of what happens next without really consulting (or in many instances even telling) any of the employees who actually understand how the industry works.
What could possibly go wrong?
Filed Under: ai, artificial intelligence, disinformation, failures, journalism, large language models, misinformation, propaganda
Companies: microsoft