Twitter has said it plans to put misleading tweets from official accounts about the Ukraine war behind warning notices.

The change follows heightened scrutiny of the social media platform after the war unleashed a new surge of misinformation, sometimes from government sources.

Twitter has already limited content from more than 300 Russian government accounts, including that of President Putin.

But it also faces free speech concerns.

Under the company’s new “crisis” policies, Twitter will prioritise labelling false posts from accounts with wide reach, like state media or official government accounts, while preserving them for “accountability” reasons.

Users will be required to click through the warning notice to view the post and Twitter will disable the ability to like, retweet or share the content.

Twitter said it would also change its search and explore features to avoid amplifying false tweets.

“While this first iteration is focused on international armed conflict, starting with the war in Ukraine, we plan to update and expand the policy to include additional forms of crisis,” Yoel Roth, Twitter’s head of security and safety, wrote in a blog post announcing the changes.

Twitter said examples of problematic posts included false or misleading allegations of war crimes, false information regarding the international response and false allegations regarding use of force.

The company said it would rely on multiple sources to determine when claims are misleading. Strong commentary and first-person accounts are among the types of tweets that would not be challenged under the policy, it said.

The new policies come just weeks after Twitter’s board agreed to a $44bn (£34.5bn) takeover offer from billionaire businessman Elon Musk, who has called for less moderated speech on the platform.

He has said he would revoke Twitter’s controversial ban of former US President Donald Trump, whom Twitter suspended citing the risk that he would incite further violence.

Analysis

by Mike Wendling, BBC Disinformation Unit

Social media companies don’t want to act as referees of the truth, but they now have so much power over what we see that they feel increasingly compelled to act as information judge and jury.

Twitter’s new crisis policies are geared towards war and conflict. It’s hard not to see them through the lens of war in Ukraine, where there’s been an intense information war.

But of course there are other conflicts where these rules might apply – take Myanmar, where a civil war rages and where social media played a key role in a deadly slaughter, according to the UN.

Announcing a rule is one thing, but implementing it is another. The company uses automated systems, but user reporting is rather limited. Twitter has only rolled out its misinformation reporting tool to the US and a handful of other countries.

And of course, there’s another event on the horizon that could easily change things.

You may have heard Twitter might have a new owner soon – one whose views tend to be more laissez-faire than the current management.

We don’t yet have a very clear idea of what Elon Musk will do about moderation – if he does eventually take control.

Mr Roth said Twitter had started working on the new policies for crisis situations before the invasion of Ukraine, though the conflict had highlighted their importance.

Early in the war, the company took steps to limit the reach of Russian media accounts. But it did not have a clear approach to disinformation spread by political figures or government accounts.

While moderators did remove some posts, experts called the lack of strategy regarding government propaganda a “critical gap” in the firm’s moderation policies.

Last month Twitter said it had identified more than 300 Russian government accounts that it would stop recommending in timelines, notifications or elsewhere on the site.

On a call with reporters, Mr Roth said the firm had seen “both sides share information that may be misleading and/or deceptive”.

“Our policy doesn’t draw a distinction between the different combatants,” he said. “Instead, we’re focusing on misinformation that could be dangerous, regardless of where it comes from.”
