Humans Still Irreplaceable in Monitoring Offensive YouTube Content: Portland SEO's Augusto Beato

If the recent spate of YouTube brand scandals proved anything, it is that human monitors remain indispensable for policing offensive content, according to Augusto Beato, CEO of Portland SEO.

"The algorithm that YouTube uses to weed out inappropriate content clearly hasn’t been working," says Beato. "While YouTube says it also provides dedicated human teams to review flagged videos 24/7, this is an area where YouTube still has room for improvement."

An investigation by the UK newspaper The Times late last year found clips of scantily clad children appearing alongside ads from major brands. The paper alleged that YouTube does not proactively check for inappropriate images of children, relying instead on software algorithms, external non-government organizations and police forces to flag such content.

The incident prompted German discount retailer Lidl; Diageo, the maker of Smirnoff vodka and Johnnie Walker whisky; confectioners Mondelez and Mars; and other companies to withdraw their advertising from YouTube.

Businesses seeking to increase their brand awareness through YouTube may tap the services of Portland SEO through this link.

Last April, YouTube countered with a report claiming that its anti-abuse machine learning algorithm, which it relies on to monitor and handle potential violations at scale, is paying off across high-risk, low-volume areas, like violent extremism, and in high-volume areas, like spam.

It added that it wanted to increase the number of people "working to address violative content" to 10,000 across Google by the end of 2018. YouTube now says it has almost reached that goal, has hired more full-time anti-abuse experts and has expanded its regional teams. It also claims that its machine-learning algorithms let human reviewers handle a greater volume of videos.

Its report added that YouTube removed 8.2 million videos during the last quarter of 2017, most of which were spam or contained adult content. Of that number, 6.7 million were automatically flagged by its anti-abuse algorithms first.

Of the videos reported by a person, 1.1 million were flagged by a member of YouTube’s Trusted Flagger program, which includes individuals, government agencies, and NGOs that have received training from the platform’s Trust & Safety and Public Policy teams.

However, early this month the confectionery giant Mars withdrew all advertising from YouTube in the United Kingdom after its Starburst ad appeared alongside a drill music video; drill is a rap genre whose lyrics often reference violence and gang symbols.

There was also the issue of bizarre and disturbing videos aimed at young children using keywords and popular children’s characters that appeared on YouTube Kids.

Cisco pulled its ads from YouTube a few months ago after a CNN investigative report revealed that ads from more than 300 companies and organizations had run on YouTube channels carrying sensitive content, including the promotion of white nationalism, Nazism, pedophilia, conspiracy theories and North Korean propaganda.

British telecom company BT said it had manually tested Google's brand safety measures 20,000 times to check that they work, but acknowledged it was possible that "a small number of ads slip through and appear next to inappropriate content or content with inappropriate comments".

On the other hand, IT equipment and services company HP blamed the problem on a “content misclassification” by Google.

###

Contact Portland SEO:

Augusto Beato
(503) 278-5580
info@portlandseo.net
111 SW 5TH AVE Suite 3100,#3102 Portland, OR 97204
