On Tuesday, Twitter announced it will restrict the visibility of content from users exhibiting what it deems to be negative behavioral patterns.
It also stated the new measures are optional but will be enabled by default.
This approach contrasts with that of other tech organizations like Google and Facebook, which have decided to focus predominantly on monitoring content rather than user behavior. Both Google and Facebook have recently announced plans to increase the size of their content monitoring teams.
Twitter stated it will consider restricting content published by a user if that user:
- Has signed up for multiple accounts from the same IP address;
- Has tweeted to a large number of accounts that don’t follow them;
- Has been blocked by other users; and
- Maintains an account Twitter deems similar to other accounts that have violated its terms of service.
The last point is likely to be the most contentious, and some may argue that a “will of the crowd” approach isn’t always the most appropriate solution where combative conversations are concerned.
Liberals and conservatives frequently express concerns over Twitter’s approach to moderation, and prominent conservative commentators accused Twitter of disproportionately targeting conservative voices when it decided to remove a large number of automated accounts from its platform earlier this year.
Twitter responded by stating it remains “apolitical”.
The issue came to the fore again later in the year during a live Periscope Q&A when Twitter CEO Jack Dorsey was asked whether the platform has a “progressive liberal bias”.
Dorsey handed the question over to Twitter’s Legal, Policy and Safety Lead, Vijaya Gadde, who stated, “I’m not going to be able to deny the fact that we…have a lot of employees who have particular political views at the company. That is probably true not just of Twitter but of…Silicon Valley.”
“… we’ve always tried to…be as impartial as we can be when making decisions and not look at content or political viewpoints when we’re assessing behaviour on the platform”.
The live Q&A was held to address users’ questions on the steps Twitter plans to take in implementing what it refers to as a new “health metric,” which will address the problems of “safety abuse, misinformation and manipulation” on the platform.
Dorsey described the process as Twitter’s “number one priority as a company right now”.
The measures announced on Tuesday were the first step in this process. Twitter also stated the new measures have led to an 8 percent drop in complaints about replies to tweets and a 4 percent drop in complaints about search results.
In response to the implementation of the new measures, Dorsey stated, “Directionally, it does point to probably our biggest impact change…this is a step, but we can see this going quite far”.