What you need to know
- Twitter has revealed the findings of its investigation into racist abuse on the platform following the Euro 2020 final in July.
- England players were subjected to abuse on multiple platforms following the game.
- Twitter says ID verification would not have stopped the abuse from happening, as 99% of suspended account owners were identifiable.
Twitter says that an investigation into racist abuse on its platform following the Euro 2020 final shows that ID verification of users would not have prevented abuse, because 99% of accounts suspended were not anonymous.
Twitter undertook research into the abuse following last month's game, but was first keen to note that much of it had been proactively removed in the aftermath of the final:
Following the appalling abuse targeting members of the England team on the night of the Final, our automated tools, which had been in place throughout Euro 2020, kicked in immediately to identify and remove 1622 Tweets during the Final and in the 24 hours that followed.
Twitter says that about 90% of the tweets it removed were detected proactively and that it continued to remove content in the wake of the final. By July 14 it had removed 1,961 Tweets, 126 of which came from user reports.
Whilst ID verification has been floated as a potential tool to help reduce online abuse on social media, Twitter says this case suggests otherwise. In a Twitter thread, it noted:
"Our data suggests that ID verification would have been unlikely to prevent the abuse from happening - as of the permanently suspended accounts, 99% of account owners were identifiable."
A blog post from Twitter states:
While we have always welcomed the opportunity to hear ideas from partners on what will help, including from within the football community, our data suggests that ID verification would have been unlikely to prevent the abuse from happening - as the accounts we suspended themselves were not anonymous. Of the permanently suspended accounts from the Tournament, 99% of account owners were identifiable.
Twitter says "we fully acknowledge our responsibility to ensure the service is safe - not just for the football community, but for all users." The company is planning to trial a new tool that will automatically block accounts using harmful language, and to continue rolling out reply prompts that warn people their responses are offensive, which Twitter says prompts 34% of users to amend their replies or not send them at all. Twitter also called on the government and football authorities for a "collective approach to combat this deep societal issue."