Alex Jones, brand safety, and how brands can protect themselves in a murky online world
To paraphrase Game of Thrones, the internet is dark and full of terrors. We’re all aware of issues on the web: the proliferation of hate speech, fake news, offensive content, easy access to pornography or violent images, even terrorist propaganda.
For individuals wanting to navigate the online world safely, it’s easy enough to avoid the darker fringes. For brands, however, it’s a major concern: automated ad technology can place adverts against inappropriate content, which can harm a brand’s reputation – and its bottom line.
The issue of brand safety has come into sharp focus recently, after articles and videos created by Alex Jones, the right-wing US conspiracy theorist and owner of Infowars, were banned from Facebook, Apple, Spotify, YouTube, and Twitter on the grounds that he had violated rules on hate speech.
These platforms have been trying to combat offensive content for some time, mainly to improve the experience for users, but also to appease advertisers, especially after brand safety concerns came to a head last year.
Several UK brands, including Marks & Spencer, pulled adverts from YouTube when it emerged that they were running against extremist videos. YouTube’s owner Google subsequently made several big changes to its ad rules to improve brand safety.
Of course, brand safety isn’t a problem unique to the digital world. In the past, even in carefully curated spaces like newspapers, adverts could be misplaced with embarrassing consequences: in 2012, a Swedish paper ran cruise adverts next to a story about the Costa Concordia, the cruise ship which sank in the Mediterranean Sea.
However, in the online world, misalignment can be much more damaging, with the potential for an ad to appear alongside content that is not just inappropriate, but actively offensive or even illegal.
This problem hasn’t gone unnoticed – brand safety concerns are widespread. Computer vision company GumGum surveyed US marketers earlier this year and found that 75 per cent of brands had suffered at least one brand-unsafe exposure in the past year.
The survey also found that the consequences of a brand safety incident included social media backlash, negative press, and lost revenue.
“Most responded to say hate speech is the riskiest kind of content, but also pornography and violence, as prominent risk factors,” says Ed Preedy, GumGum’s managing director of Europe.
And despite YouTube’s repeated brand safety issues, respondents actually cited Facebook as the riskiest platform, due to fake news and user-created hate speech groups. LinkedIn was, unsurprisingly, rated the safest.
Obviously, publishers like Google and Facebook must do more to tighten up their systems to prevent misalignment and police their platforms for fake news and hate speech, and they have belatedly started making an effort to do so. But what can brands do to protect themselves?
One solution is to hire brand safety officers to inspect where ads appear, implement safeguarding strategies when an issue occurs, and liaise with the brand’s partners to make sure that they know what is and isn’t appropriate for their ad.
“Given the damage that an ad placed next to fake news or offensive content can have on brand image, we’re going to see a rise in in-house brand safety officers,” predicts Nicky Palamarczuk, editorial director at content creator VCCP Kin.
Sairah Ashman, chief executive of brand consultancy Wolff Olins, suggests that they could also help keep a brand tied to its core values. “In an ideal world, we wouldn’t need brand safety officers. But technology is changing the rules of engagement for businesses at a rate few are able to keep pace with.”
However, others scoff at the idea. Zoheb Raza, social media director at creative agency Isobel, doesn’t see brand safety officers becoming a necessary cost of doing business any time soon.
“Unless there is a cataclysmic error, the cost of hiring will quite quickly outweigh the potential losses. Also, what a grim job to be a brand safety officer in advertising land. By the time an ad fail has done the rounds and had its 15 seconds of viral fame, there will be another controversy, fail, or presidential tweet to grab the attention of consumers.”
Fergus Hay, chief executive of global creative agency Leagas Delaney, also pours scorn on the concept.
"You don’t pour on the aftersun to prevent sunburn, and a Brand Safety Officer is a reactive, not proactive solution to a problem. As such, marketers should concern themselves not solely with damage limitation, but how to secure an unfair advantage by really integrating media placement into creative development. In other words what you say should be at equal measure to where you appear," he says.
Alternatively, better technology could help. Ad platforms already use semantic analysis to check whether content is appropriate for an ad, scanning the text and metadata on a page for problematic keywords.
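As a rough illustration only, a keyword-based check of this kind might look something like the sketch below – the blocklist, threshold, and function names are invented for the example rather than drawn from any real ad platform.

```python
# Minimal sketch of a keyword-based brand safety check (illustrative only:
# the blocklist, threshold and function names are not from any real ad platform).

BLOCKLIST = {"terror", "extremist", "hate speech", "shooting", "porn"}

def is_brand_safe(page_text: str, page_metadata: str, max_hits: int = 0) -> bool:
    """Return True if the page text and metadata contain no more than
    max_hits blocklisted terms."""
    content = f"{page_text} {page_metadata}".lower()
    hits = sum(content.count(term) for term in BLOCKLIST)
    return hits <= max_hits

# A cruise advert could safely run on the first page, but not the second.
print(is_brand_safe("Summer cruise deals in the Mediterranean", "travel, holidays"))  # True
print(is_brand_safe("Watch the latest videos", "extremist propaganda, hate speech"))  # False
```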
The problem with these contextual tools is that they struggle with images and video, which make up an ever-growing share of online content.
Instead, computer vision can be used to “see” the contents of an image. GumGum has technology that analyses the pixels in images and videos for problems: too much flesh colour in an image could indicate pornography, for example. This, combined with other contextual analysis tools, could help brands avoid safety issues.
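As a rough sketch of that flesh-colour idea, the snippet below measures the share of pixels falling inside a crude RGB skin-tone range; the RGB rule and the 40 per cent threshold are illustrative assumptions, not GumGum’s actual model.

```python
# Illustrative sketch of a flesh-colour heuristic, not GumGum's actual technology.
# Assumes Pillow and NumPy are installed; the RGB rule and threshold are rough guesses.
import numpy as np
from PIL import Image

def skin_pixel_ratio(path: str) -> float:
    """Return the fraction of pixels that fall inside a crude RGB skin-tone range."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (np.abs(r - g) > 15)
    return float(skin.mean())

def flag_for_review(path: str, threshold: float = 0.4) -> bool:
    """Flag an image for human review if a large share of it looks like skin."""
    return skin_pixel_ratio(path) > threshold
```

In practice, a signal like this would be one weak indicator among many, combined with contextual checks and human review rather than used on its own.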
“Some of the major incidents over the last year or two have involved video advertising and placement around videos, where the metadata has not pointed to the real content of the video, so we would say an increase in the deployment of computer vision should be at the top of the agenda for a lot of advertisers,” says Preedy.
Whatever systems are in place, mistakes will happen. But by combining human input, in the form of brand safety officers, with analytical tools that can flag up problems, brands can more reliably avoid ad misalignment and keep their reputations safe.
The internet remains dark and full of terrors, but hopefully it is getting safer too.