It’s Time For The Industry to Combat Brand Safety Issues

Brand safety is keeping CMOs up at night – eight in 10 say they are more concerned about brand safety than ever before, according to a study by Teads. One of the biggest brand safety issues marketers face is the sheer volume of user-generated content online. Unlike with traditional content publishers, there is very little to stop people from publishing content that promotes extremist views, fake news, or explicit material. It is exactly this content that brands do not want their adverts to appear against.

Today, more than $628 billion in total digital ad spend worldwide is at stake, and if brand safety issues are not tamed, the online advertising industry will crumble.

The extent of the problem

Social media is one of the areas suffering the most, and marketers are voting with their budgets. Facebook has reacted by devaluing media content within its news feed, part of a bigger change to its algorithm and video strategy that is widely thought to stem from its own well-publicised brand safety woes. According to an Advertiser Perceptions and Oath study, 45% of advertisers think social media sites do a bad job on brand safety, which is now a top concern for the vast majority (94%) of advertising decision-makers.

For example, in November 2017, The Times carried out an investigation that revealed: “some of the world’s biggest brands were advertising on YouTube videos showing scantily clad children that attracted comments from hundreds of paedophiles”.

Advertisers’ concerns about YouTube’s ability to protect brand safety have been so great that its ad revenue has plateaued during 2018 (+0.2% year-on-year), a time when video advertising across other platforms has thrived.

The brand safety trust crisis has opened up an opportunity for Instagram to monetise its new IGTV app, which features high-quality, long-form video, and to take a large slice of the $10 million annual digital and mobile video ad spend.

A united front is needed

The brand safety issue cannot be solved by any one entity; it requires collaboration between brands, media agencies, ad tech companies, and industry bodies.

This year, the 4A’s and company executives from top global ad firms announced the formation of a new Advertiser Protection Bureau to tackle escalating brand safety issues. The end goal is to broaden the discussion around brand safety into a “more holistic view of what our responsibility is to consumers, to brands and each other because advertising assurance can’t happen if we’re not communicating and working together,” according to a statement by 4A’s president and CEO, Marla Kaplowitz.

Another global ad industry leader, GroupM, has specialist brand safety teams that “regularly contribute to industry committees, research and debates” and “actively promote and participate in industry standard-setting and self-regulation to create and uphold integrity.” Key actions include creating quality media environments, curbing ad misplacements alongside “fake news”, and taking a proactive approach when ads are inappropriately placed.

Is AI really the answer?

Internet giant Google (and others) is using advanced artificial intelligence (AI) techniques such as deep learning and computer vision to avoid ad misplacements next to inappropriate or disturbing content. This approach is already helping car brands avoid placing ads next to news about a road crash, and helping companies keep their ads off media sites with extreme points of view.

For example, Google unveiled a new ad unit for AdSense, Auto Ads, that helps to protect brand safety. It uses machine learning to automatically read a page, instantly detect its content and context, and place only the most suitable ads. According to Google, publishers can earn up to 10% more by using it.
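
To make the contextual idea concrete, here is a minimal sketch in Python of keyword-based page classification. It is purely illustrative and is not Google’s Auto Ads implementation: the category names, keyword lists, and scoring are assumptions chosen for the example, and a production system would rely on trained models rather than word lists.

# Toy contextual matcher - illustrative only, not Google's Auto Ads code.
from collections import Counter
import re

AD_CATEGORIES = {
    "automotive": {"car", "engine", "vehicle", "suv", "tyres"},
    "travel": {"flight", "hotel", "beach", "holiday", "destination"},
    "finance": {"loan", "savings", "mortgage", "interest", "investment"},
}

def tokenize(text):
    # Lower-case the page text and count word occurrences.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def best_ad_category(page_text):
    # Score each ad category by keyword overlap with the page and return
    # the best match, or None if nothing in the page context fits.
    counts = tokenize(page_text)
    scores = {
        category: sum(counts[word] for word in keywords)
        for category, keywords in AD_CATEGORIES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(best_ad_category("Review: the new SUV has a quiet engine and a smooth ride."))
# Prints "automotive" - the page's own content decides which ad is eligible.

The keyword scoring stands in for the machine-learning step; the point is that the content of the page, not just the site it sits on, drives which ads are suitable.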

RTB House uses natural language processing in its proprietary multi-layer brand safety platform. The algorithm provides rapid, comprehensive, and precise page-level inspection and ad blocking. This includes scraping the URL, article content, and metadata before allowing ads to be placed on a page.

This enables brands to exclude specific content that is inappropriate for them: for example, an automotive brand may want to avoid ad placement alongside articles about drinking alcohol, but not necessarily alongside articles about windshield washer fluid that mention it contains alcohol.
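
The sketch below illustrates the shape of such a multi-layer, context-aware check. It is an assumption-laden toy, not RTB House’s proprietary algorithm: the blocked URL terms, topic keyword lists, and density threshold are invented for the example. The point it demonstrates is that an article dominated by a sensitive topic is treated differently from one that merely mentions it.

# Toy multi-layer brand safety gate - illustrative only.
import re
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    metadata: str   # e.g. title and description tags
    body: str       # extracted article text

# Invented lists - a real platform would use trained NLP models.
BLOCKED_URL_TERMS = {"extremist", "adult"}
SENSITIVE_TOPICS = {"alcohol": {"alcohol", "beer", "wine", "drunk", "drinking"}}

def topic_density(text, keywords):
    # Fraction of words in the text that belong to the sensitive topic.
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in keywords for w in words) / len(words) if words else 0.0

def allow_placement(page, excluded_topics, threshold=0.1):
    # Inspect the URL, metadata, and article body before allowing an ad.
    # A topic only blocks placement when it dominates the page, so a
    # passing mention (washer fluid "contains alcohol") is tolerated.
    if any(term in page.url.lower() for term in BLOCKED_URL_TERMS):
        return False
    for topic in excluded_topics:
        keywords = SENSITIVE_TOPICS[topic]
        if topic_density(page.metadata, keywords) > threshold:
            return False
        if topic_density(page.body, keywords) > threshold:
            return False
    return True

car_care = Page(
    url="https://example.com/winter-car-care",
    metadata="Winter car care tips",
    body="Top up your windshield washer fluid; it contains alcohol so it "
         "will not freeze. Check the tyres and battery before long trips.",
)
bar_guide = Page(
    url="https://example.com/best-bars",
    metadata="The best bars for beer and wine lovers",
    body="Our guide to a night of drinking: where to find great beer and wine.",
)
print(allow_placement(car_care, excluded_topics=["alcohol"]))   # True: passing mention
print(allow_placement(bar_guide, excluded_topics=["alcohol"]))  # False: topic dominates

In practice the density threshold stands in for a classifier’s confidence score, and the order of checks (URL, then metadata, then article body) mirrors the layered inspection described above.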

Many experts believe that advances in AI are the ad industry’s best bet to help solve the escalating brand safety crisis. While AI developments have been shown to help identify inappropriate content and keep ads off websites that carry it, it remains to be seen whether this is a viable, scalable, long-term solution. One of the biggest threats to the success of AI will be the ability of content creators to continually evolve how they mask inappropriate content.

The consumer effects

Seven in 10 (70%) consumers expect brands to curb the spread of fake news, and 68% say that brands should shield social media users from offensive content, according to the Trust Barometer social media study by Edelman. Consumers are already showing that they will not tolerate brands’ support of fake news and inappropriate content through advertising, illustrated by the Stop Funding Hate movement on Twitter.

Technology to detect inappropriate content and keep advertising away from it continues to develop, but the key is collaboration between all sides to ensure a consistent approach.
