We’re not even a quarter of the way through 2019 and there have already been enough digital scandals to last us a decade! Social media improprieties, illegal data sharing, poor algorithmic decisions, review-bombing and accusations of censorship pervade the headlines until it feels like there’s no way for your brand to exist online without courting a PR disaster. Whether you’re looking to launch a new website, conduct a successful rebrand or wade into the ever-murkier waters of digital advertising for the first time, it can feel increasingly difficult to build a digital marketing strategy that works in the modern era. Let’s take a look at some of the controversies that have rocked the web in the last few months, and what they’ve meant for publishers and consumers alike:
Instagram’s Self-Harm Content
Though this is not the first time Instagram’s content standards have been called into question (their standards for breastfeeding images cause a kerfuffle at least once a year), the issue of self-harm content on the platform was thrust into the limelight in January of this year, after the family of a teenager who committed suicide demanded stricter regulation of self-harm-related content across Instagram. After some hemming and hawing, Instagram implemented ‘sensitivity screens’ that require a user to click through a blurred layer before reaching content related to self-harm or suicide. The difficulty for the site, though, remains that censoring all self-harm content would also censor recovery-related content, something they don’t want to do. Will the sensitivity screens reduce the number of searches for this content? Unlikely, but by making the images harder to spot, they may increase the chances of a user scrolling right past rather than being confronted with disturbing content.
Pinterest’s Anti-Vax Battle
Over the past few years, anti-vaccination propaganda has taken over search results for vaccine information on most social media platforms. Three years ago, a study found that 75% of vaccine-related posts on Pinterest promoted anti-vax ideology, and the picture on Facebook is similar. However, while Facebook has dragged its feet and refused to take any measures to combat this, Pinterest found a solution: remove the content. Of course, it isn’t quite that simple in practice, but through a combination of ‘hashing’ removed images to prevent their reappearance, blocking new pins from objectionable websites and removing search results that violate Pinterest’s T&Cs, the platform has made anti-vax propaganda much harder to find.
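Pinterest hasn’t published its exact pipeline, but the idea behind ‘hashing’ removed images is simple enough to sketch: fingerprint every image that moderators take down, then check new uploads against that fingerprint list before they go live. The minimal Python sketch below uses an exact SHA-256 hash of the file bytes; in practice, platforms tend to use perceptual hashes so that resized or re-compressed copies still match. The function names and in-memory blocklist are illustrative, not Pinterest’s actual system.

```python
import hashlib

# Illustrative in-memory blocklist of fingerprints for removed images.
# A real platform would persist this in a datastore and use perceptual
# hashing so near-duplicates (crops, re-encodes) still match.
removed_image_hashes = set()

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint of an image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def record_removal(image_bytes: bytes) -> None:
    """Called when a moderator removes an image: remember its fingerprint."""
    removed_image_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any new pin whose image matches a previously removed one."""
    return fingerprint(image_bytes) not in removed_image_hashes
```

The same blocklist pattern extends to the other measures described above: a domain denylist applied to new pins, and a filter applied to search results before they’re returned.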
As the measures were rolled out earlier this year, the science and medical communities praised the social media platform for taking a stand. Anti-vaxxers, though, called the measures fascist, Orwellian and a form of censorship, pitching Pinterest back into the fray between science and pseudoscience. Whether other social media platforms take Pinterest’s cue and work to remove this content remains to be seen, but so far Pinterest hasn’t suffered much for taking a stance.
YouTube’s Algorithm Foibles
YouTube has made headlines over paedophile scandals before, but this may be the first time the site has been in the spotlight for its algorithm working too well. Let’s back up: in late February, controversy arose when it was noticed that many videos of children were generating millions of views and comments, with the comments often consisting of nothing but a timestamp. On further exploration, journalists discovered that the timestamps marked specific moments in the videos where children inadvertently flashed the camera. As Wired explained, these thinly veiled comments allowed paedophiles to hide in plain sight.
That wasn’t the end of it, though: because YouTube’s algorithm is designed to recommend videos similar to ones you’ve already watched, a viewer who watched one of these videos kept being served more of them, leading down a rabbit hole of objectionable content. And still not the worst part: the majority of these videos were monetised, meaning brands were inadvertently funding this malicious behaviour while their ads appeared alongside the videos. The problem was clearly accidental: YouTube never intended to promote these videos this way, but in a vicious cycle, the more popular they became, the more the algorithm recommended them to additional viewers, making them more popular still and thus even more likely to be promoted. In a flurry of brand-safety damage control, and after cornerstone brands like Disney and Nestle halted ad buys on the platform, YouTube issued a statement to brands and major holding companies saying it was addressing the situation.
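The vicious cycle described above is a textbook rich-get-richer feedback loop: engagement drives recommendations, which drive more engagement. The toy simulation below is purely illustrative (it is not YouTube’s actual ranking logic), but it shows how a recommender that weights candidates by past view counts hands a lasting advantage to videos that receive a coordinated early burst of attention.

```python
import random

def simulate(num_videos=100, boosted=3, steps=100_000, seed=0):
    """Toy engagement-weighted recommender (illustrative only).

    Each step, one video is recommended with probability proportional to
    its current view count, and that recommendation adds another view --
    a rich-get-richer loop. A handful of videos start with an artificial
    burst of early engagement, standing in for the coordinated attention
    described above.
    """
    rng = random.Random(seed)
    views = [50] * boosted + [1] * (num_videos - boosted)
    for _ in range(steps):
        chosen = rng.choices(range(num_videos), weights=views, k=1)[0]
        views[chosen] += 1
    return views

views = simulate()
boosted_share = sum(views[:3]) / sum(views)  # share going to the 3 boosted videos
print(f"3 of 100 videos, given an early burst, end up with "
      f"{boosted_share:.0%} of all recommended views")
```

In runs of this toy model, the handful of boosted videos routinely ends up with well over half of all traffic, which is exactly why a burst of malicious engagement was enough to keep these videos circulating.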
Since then, YouTube has disabled comments on nearly all videos featuring children, but this is likely just a plaster over a larger issue that will require continued attention over the coming years. It remains to be seen how the new policy will work in practice, but in the interim, some of the world’s largest brands are making their opinions clear by keeping their ads off YouTube until further notice.
DrainerBot’s Illicit Data
DrainerBot, as Oracle has named the virus that has infiltrated potentially millions of Android phones, is a bot that invisibly plays video ads inside infected apps, draining users’ data without their knowledge. The users never see the video ads that run, but brands are charged for them anyway. As one Oracle official put it, ‘this is a crime with three layers of victims’: the users who are charged for data overages, the brands that pay for ads no one ever sees, and the app developers whose reputations are tarnished even though they had nothing to do with the virus. So what are brands to do when even Google didn’t see this one coming?
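To put the first layer of victims in perspective, a rough back-of-the-envelope calculation shows how quickly invisible video playback can eat through a mobile data plan. The bitrate, daily playback time and plan size below are assumptions chosen for illustration, not figures from Oracle’s report.

```python
# Back-of-the-envelope estimate of hidden video playback vs. a data plan.
# All three inputs are illustrative assumptions, not Oracle's figures.
VIDEO_BITRATE_MBPS = 1.0     # assumed bitrate of the invisibly played ads
HIDDEN_HOURS_PER_DAY = 1.0   # assumed daily time the bot spends playing ads
PLAN_GB_PER_MONTH = 5.0      # assumed size of the user's mobile data plan

mb_per_day = VIDEO_BITRATE_MBPS / 8 * 3600 * HIDDEN_HOURS_PER_DAY  # megabytes
gb_per_month = mb_per_day * 30 / 1024

print(f"Hidden playback uses roughly {gb_per_month:.1f} GB per month, "
      f"about {gb_per_month / PLAN_GB_PER_MONTH:.0%} of a "
      f"{PLAN_GB_PER_MONTH:.0f} GB plan")
```

Even with these modest assumptions, an hour a day of unseen video blows straight past a typical data allowance, which is how infected users end up footing the bill for ads they never watched.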
The short answer is this: find a trusted marketing agency to handle your ad buys. An agency has the leverage to combat these kinds of missteps on the part of publishers, and marketers are likely to see scandals coming down the pipeline long before consumers do. With ten months still to go in 2019, there are sure to be more scandals on platforms new and old alike. No platform can ever be 100% brand-safe these days, but understanding your brand’s audience, analytics and ad campaigns goes a long way toward keeping your brand out of controversy, and acting quickly when you’re caught in a maelstrom goes a long way toward restoring customer confidence. Need some help navigating the digital landscape? Contact Elastic today!