
YouTube places fresh scrutiny on inappropriate content and advertising

Fi Bendall

It’s estimated YouTube plays host to more than 5 billion videos. That is an unfathomable amount of content, so it’s no wonder the video-sharing site has been having trouble monitoring all of it. But events over the past year have led YouTube to take seriously concerns not only about brand safety and reputation, but also about the troubling content targeted at children on the site.

Earlier this year YouTube came under fire from major advertisers who threatened a boycott of the site unless it did more to ensure their ads did not end up placed alongside objectionable content.

An investigation by The Times in London revealed ads from brands such as Sainsbury’s, Argos, and the Royal Air Force Charitable Trust had been running alongside extremist videos, ranging from Islamist propaganda to pick-up artist videos. Ads for the BBC and for UK government bodies and departments have also suffered reputational damage caused by YouTube’s automated ad placement process.

More recently, YouTube has been taken to task for its failure to adequately monitor content uploaded to the site that purported to be child-friendly. Disturbing content masquerading as suitable for children has been slipping through YouTube’s moderation procedures, angering parents and once again bringing the site’s reputation into question. The New York Times reported on the problem and cited one such video with the title ‘PAW Patrol Babies Pretend to Die Suicide by Annabelle Hypnotized’.

YouTube’s CEO Susan Wojcicki has acknowledged the site, owned by Google’s parent company Alphabet, has to do a better job at moderating its content.

“I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm,” Wojcicki said in a statement.

“In the last year, we took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats. We tightened our policies on what content can appear on our platform, or earn revenue for creators. We increased our enforcement teams. And we invested in powerful new machine learning technology to scale the efforts of our human moderators to take down videos and comments that violate our policies.”

Wojcicki said in the statement that up to 10,000 human moderators will be given the job of scouring videos and comments for objectionable material. This will come as a relief not only to parents, but also to good-faith content creators and advertisers who don’t want their work and reputations damaged by malicious elements.

“We’re also taking actions to protect advertisers and creators from inappropriate content. We want advertisers to have peace of mind that their ads are running alongside content that reflects their brand’s values. Equally, we want to give creators confidence that their revenue won’t be hurt by the actions of bad actors,” said Wojcicki.

This is no easy task for YouTube, but it is a necessary one. Viewers, content creators and advertisers all need to trust that the site is doing all it can to clean up the more unsavoury material found on the platform. If it doesn’t succeed, viewers and advertisers may well start to look elsewhere for their video fix.
