Monday morning, as news of the mass shooting in Las Vegas greeted the awakening East Coast, something almost as ugly reared its head: gross misinformation about the attack pouring out of the nastiest corners of the internet, rising straight to the top of algorithmically defined trending lists on Google, Facebook and YouTube.
For society to function well, this has to stop. I applaud the media critics, scholars, activists and politicians who are taking platforms to task and forcing them to make tough decisions (or at least release awkward statements) about their role in spreading misinformation. But I also think that relying on platforms to clean up their mess will, at minimum, take a very long time and a lot of sustained effort. And I don’t believe that it’s responsible for media companies to wait for them.
Yesterday, as I watched horrific bile bubble up to the top of every single platform, I started thinking about whether there was a simple way folks in the media could act, now, to stem the tide and use the power of platform trending algorithms to direct more people to high-quality information during breaking news events, especially in the initial hours.
It occurred to me that there might be a simple way for media companies to come together and, well, game the algorithms. Even if it fails, at the very least it would speak to the problems caused by algorithmically compiled trending sections in a language platforms will understand.
There’s just one catch. For the idea to work, folks in the media will have to cooperate, at least minimally. Here’s why.
Propaganda sites deliberately spread misinformation by taking advantage of the way breaking news algorithms work. These algorithms favor sites that publish stories quickly, update them frequently, use words likely to lead to engagement throughout the content (“terrorist,” “shooting,” “communist,” etc.) and, of course, get a lot of early traction in the form of clicks, shares and links from other sources. Often, this early traction comes from a devoted network of core followers, some of whom are likely bots, who have been primed to share a particular narrative and page on social media and blogging platforms.
Platforms typically catch up and remove these stories and social media posts from trending modules after a few hours, but not before the damage is done: the stories have already sat at the top of the rankings for hours.
I propose that news organizations counter this misinformation by using the combined power of their algorithmically authoritative websites and their reporters on social media, much as those propaganda networks use theirs. With any luck, this coordinated effort would push high-quality news to the top of algorithmically compiled trending sections during breaking news events.
I know that getting the media to coordinate on anything of this scale isn’t exactly easy. But it’s not entirely unprecedented, either. News organizations have historically cooperated on efforts like pool reports, election coverage and, of course, the Associated Press (hat tip to Joe Murphy). And they’re increasingly working together on larger impact efforts like the International Consortium of Investigative Journalists and Climate Desk.
Moreover, the coordination I’m proposing could be achieved with minimal investment and disruption to current workflows. Here’s how it could work.
Media organizations that cover breaking news should create a cooperative site that publishes automatically updating topic pages that link to high-quality coverage of breaking news events. Becoming a member of the cooperative would require adhering to a set of fact-checking, journalistic, and security standards that the founding members would jointly agree to.
Once the cooperative is set up, its website would publish “topic pages” during breaking news events. These topic pages would automatically aggregate all coverage of the event from member news organizations. A member organization could populate a topic page simply by linking to it in any article it files about the event and notifying the page of the link using a widely supported protocol such as Trackback or Webmention.
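As a rough sketch of that notification step (all URLs here are hypothetical), a member CMS could implement the sender side of the W3C Webmention protocol: discover the topic page’s advertised endpoint, then POST a form-encoded `source`/`target` pair to it. This simplified version only checks for an endpoint advertised in the page’s HTML; the full spec also allows an HTTP `Link` header.

```python
import re
from urllib.parse import urlencode, urljoin

def discover_webmention_endpoint(page_url, html):
    """Find the Webmention endpoint advertised in a topic page's HTML,
    e.g. <link rel="webmention" href="/webmention">.
    (Simplified: assumes rel appears before href, and ignores the
    HTTP Link header form that the spec also permits.)"""
    match = re.search(
        r'<link[^>]+rel=["\']webmention["\'][^>]+href=["\']([^"\']*)["\']',
        html)
    if not match:
        return None
    # Endpoints may be relative; resolve against the page URL.
    return urljoin(page_url, match.group(1))

def build_webmention(source, target):
    """Form-encoded body for the POST to the endpoint: the member's
    article (source) says it links to the topic page (target)."""
    return urlencode({"source": source, "target": target})
```

On the receiving side, the topic page would fetch `source`, verify it really links to `target`, and only then add the article to its aggregation, which is what keeps the mechanism from being spammable.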
In addition to auto-population, linking to the topic page serves an important purpose: these links would quickly signal to search engines that the page is highly authoritative about the breaking news event, causing it to quickly rise to the top of results. And because the pages would be updated frequently as organizations file new articles, they would remain high in rankings throughout the news lifetime of the event.
To achieve the same effect with social media algorithms, human editors (perhaps loaned by member organizations) would create optimized headline, image and description packages for these topic pages. Member organizations and their employees would be encouraged to share links to the topic pages in addition to their organization’s coverage, particularly in the early stages of the event. To ensure that member organizations don’t lose out on traffic, the page could support custom links that put a sharing organization’s stories at the top of the page.
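In practice, those headline/image/description packages would be delivered as Open Graph meta tags, which is what most platforms read when a link is shared. A minimal sketch (the helper name and example values are mine, not from the proposal):

```python
from html import escape

def share_card_meta(title, description, image_url):
    """Render the Open Graph <meta> tags a platform reads when a
    topic-page link is shared. Property names (og:title, og:description,
    og:image) are from the Open Graph protocol; editors would swap in
    the curated package for each breaking event."""
    props = {
        "og:title": title,
        "og:description": description,
        "og:image": image_url,
    }
    return "\n".join(
        f'<meta property="{p}" content="{escape(v, quote=True)}">'
        for p, v in props.items()
    )
```

The topic page would serve these tags in its `<head>`, so every share of the page, by any member or reader, presents the same editor-approved package.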
In theory, participating organizations would make up for lost traffic because other organizations and savvy news readers would link to topic pages, too, driving more traffic to their articles. And it’s possible that, over time, readers would develop the habit of visiting the cooperative site during breaking events and share topic pages. The cooperative would thus function as a sort of “reverse wire” (coined by my friend Jason Bade), collecting verified information that could be safely shared by people who care about quality and veracity.
If a large enough group of companies commit to the project, I think it has a shot at achieving its technical aim: to game algorithms to display high-quality news at the top of searches and trending sections during breaking news events. It’s true that, if the effort succeeded, platforms could quickly alter their algorithms to counter it. But I doubt they would. Imagine the headlines: “Google knocks down a consortium of journalists in favor of 4chan trolls.”
(Update, 10/22: several folks in the position to know have said that far from trying to counteract a cooperative like this, platforms might want to participate and even partially fund it. I’m told they increasingly see misinformation as a risk, but lack the teams/will to solve for it internally. I’m now imagining a scenario where platforms even supply the cooperative with trending topics to create topic pages for, and link to those pages at the top of their trending sections. More tk on this soon.)
And the cooperative could even take on a life of its own, becoming a new destination for readers during breaking news events that’s owned and managed by media companies, not platforms.
But even if it fails, the primary purpose of the cooperative wouldn’t be to become a destination news site, to provide verification services, or to educate readers. The primary purpose would be to provide media companies with a simple, quick, low-effort and low-investment way to start counteracting misinformation on platforms using language and techniques the platforms understand. If nothing else, the unprecedented media coordination on a technical solution could help finally bring platforms to the table to talk with media companies about how to better serve society. That would be a win, whether or not anyone clicks.