YouTube has published a blog post claiming machine learning helped it remove 6.7 million of the 8 million videos taken down from its platform between October and December 2017.
It claims adult content and spam accounted for most deleted videos. Of the 6.7 million flagged for review "by machines," the video sharing platform also claims 76 percent hadn't received a single view before deletion.
It also claims that in 2017, only 8 percent of videos flagged for violent extremism were taken down with fewer than 10 views, whereas today more than half of such videos are removed before reaching that threshold.
It attributes this improvement to the introduction of machine learning in June 2017 to automatically review video content.
YouTube also claims that, contrary to conventional wisdom, introducing machine learning means more people reviewing content, not fewer.
In its video outlining the processes behind flagging content, YouTube claims “subject matter experts help [them] flag content related to the most sensitive subjects like ‘violent extremism’ and ‘hate speech’”.
It also states “we’ve developed powerful machine learning that detects content that may violate our policies and sends it for human review.” The video also states “in some cases that same machine learning automatically takes an action, like removing spam videos”.
The video sharing platform has come under fierce criticism in recent months after numerous complaints were voiced in relation to inappropriate content published on its platform, including the “Tide Pod challenge” and a Logan Paul video taken in a “Japanese suicide forest”.
In response, it has since revealed it will add disclaimers to content published by organizations that receive some level of government funding, and that it plans to expand its content review team to 10,000 people across all Google properties.