Google’s hate speech policy is undergoing major revisions, with new areas being added to address concerns that ads are promoting, and even financially funding, inappropriate content online.
Several media outlets in the U.S. joined the clamour, first raised by the British government, complaining about advertisements placed over videos that are offensive or promote hate speech.
The changes fulfil the plans Google announced a month ago in response to the YouTube controversy, which arose suddenly and nearly spiralled out of control.
This was not Google’s first such controversy – only a little while earlier, the company was caught up in the furore over fake news, specifically reports that Google’s ad network was supporting fake-news sites.
The policy additions should, over time, address an increasingly toxic online environment that currently harbours content bordering on hate speech.
The policy has now been expanded to cover more groups, such as immigrants and refugees, and also applies to discriminatory pages: its language covers pages that deny the Holocaust or promote the exclusion of certain groups. Previously, the policy had a much narrower scope, addressing only threatening speech against defined groups (including religious and ethnic groups, LGBT groups and individuals).
The definition of protected groups and individuals has been expanded as well, to include those who share “any characteristic that is associated with systemic discrimination and marginalization”.
Under this definition, harassing or disparaging speech against immigrants or refugees violates the policy.
Summers, who oversees the development and implementation of Google policies affecting publishers, said in a statement that immigrant or refugee status was being used as a proxy for attacking people belonging to what are commonly known as protected groups.
The revamped policy will also apply to individual pages whose content violates the policy, meaning ads will not need to be removed from an entire site or account.
This means, for example, that an article on Breitbart using a derogatory term for transgender people would lose its ad money, while the site’s other pages would still receive ads.
Google declined to comment on whether the parent company would be affected. Given the size of the revamp and the fact that the change is global, its effects won’t be noticeable immediately.
In March, Philipp Schindler, Google’s top business executive, wrote in a blog post that the company was taking a tougher stance on hateful, offensive and derogatory content in order to remove ads from inappropriate content more effectively.
The policy lets companies block their advertisements from appearing on content that might have “offensive or malicious intent”, as judged by YouTube’s own standards.
Although it sounds like a decent step, YouTubers across the spectrum say the move is reducing their ad revenues, sometimes for reasons that “don’t make sense”.
Since a majority of YouTubers use curse words in their videos, companies can now pull advertisements from such videos too. YouTubers like PewDiePie have turned against YouTube outright for “pandering” to outlets like the Wall Street Journal and other websites, which initially caused the stir by, the stars allege, doctoring videos to portray them as supporting the Third Reich.
It remains to be seen how YouTube fares with these decisions and policy changes, and how much of them survives beyond the first few months. Revenue, after all, is the lifeblood of all online forums.