YouTube has been under fire lately, with critics calling the company out for allowing “bad actors” to get away for years with posting videos that appear to target children but are actually harmful to them. In response, YouTube has spent the past month or so cleaning house: terminating channels and videos that feature children in harmful situations and strictly enforcing new policies to flag inappropriate content. Now the company is “expanding its work” in this fight against abuse of the platform, and that includes adding more human moderators to help its machine learning systems improve.
While YouTube relies heavily on machine learning technology to flag videos that violate its community guidelines, it recognizes the need for more human reviewers to help train these systems. YouTube CEO Susan Wojcicki said in her blog post that the company will hire more than 10,000 people in 2018 for its review team, with the goal of improving the machine learning technology that flags similar videos in the future.
YouTube is currently focusing on violent extremism: since June, it has removed more than 150,000 videos, 98% of which were flagged by algorithms. It will train the technology on other problematic content areas like child safety and hate speech, which is why the context that human moderators bring is so important. The company is also becoming more transparent about how and why videos are flagged and removed, and will publish a regular report starting in 2018.
As for the business side of things, YouTube will also take steps to protect both advertisers and creators. It will review which channels and videos are eligible for advertising, and it will ensure that ads are shown to the appropriate audience the advertiser is looking for. Read more about all these steps in the source link.
December 5, 2017 at 05:05PM