The changes include an expanded definition in Twitter's violent threats policy, along with more options for the company's support team to lock out accounts. The policy now covers "threats of violence against others or promot[ing] violence against others," in addition to "direct, specific threats of violence against others". The company says:
Twitter's support team now has the option to lock accounts that post abusive comments for specific periods of time, which the company believes will help when many users pile derogatory comments onto one person or group. Those users will be contacted to delete the offending posts before their accounts can be reactivated.
Finally, Twitter is testing a new feature that will allow it to spot abusive posts. The company said:
These changes come one day after Twitter gave its users the option to receive direct messages from any other user, even those who don't follow them.
I have been writing professionally about technology and gaming news for 14 years.
Accounts don't generate abusive comments, people do. Seriously, though, that paragraph could be read to say that if I compose a tweet that then causes (or "generates") abusive comments, I would be the one to have my account suspended, even though my tweet wasn't abusive. That would be punishing the victim. Please tell me that is not the case.