What you need to know
- Twitter will begin taking action against COVID-19 misinformation with a new labeling system.
- It will apply labels to tweets sharing disputed or misleading information and may remove tweets in severe cases.
- The system will work retroactively, applying to tweets made before the announcement as well.
As the COVID-19 pandemic continues to fuel the rise of conspiracy theories and fake news, Twitter will now warn users if a tweet they are viewing contains health misinformation. Posts and tweets have been circulating on social media, espousing miracle cures, warning about microchipped vaccines, and linking 5G to the virus — among other conspiracies. While some of these have been benign, others have the ability to cause real harm, and have already done so.
Twitter's Site Integrity and Public Policy team heads made the announcement today, saying:
Earlier this year, we introduced a new label for Tweets containing synthetic and manipulated media. Similar labels will now appear on Tweets containing potentially harmful, misleading information related to COVID-19. This will also apply to Tweets sent before today.
These labels will link to a Twitter-curated page or external trusted source containing additional information on the claims made within the Tweet.
Depending on the propensity for harm and type of misleading information, warnings may also be applied to a Tweet. These warnings will inform people that the information in the Tweet conflicts with public health experts' guidance before they view it.
Twitter will classify tweets under three levels. The first and most severe is "Misleading information," for tweets that have been confirmed as false; the company may opt to remove those tweets outright. For claims that are contested or whose truth-value is unknown, Twitter will issue a warning and label them as "disputed information." The least severe level, "unverified information," covers claims that have not yet been confirmed or debunked by health experts or public authorities; simply sharing unverified information on its own will not lead to action being taken.
Twitter isn't the only platform to tackle COVID-19 misinformation in recent weeks. YouTube expanded its fact-check panels to the U.S. to address COVID-19 misinformation. In the same vein, WhatsApp limited how widely users can forward messages to curb the spread of COVID-19 hoaxes.
It's not a new problem. Misinformation always springs up around the conversation of the day as bad actors act on a range of motives. Now, as always, it falls to platform owners to determine what is acceptable.