Content is gaining ground online as a way to build brand awareness and engage with customers. But along with more informative content come disreputable stories and comments that spread across the internet.
In the wake of a rise in fake news sites and anonymous negative posts, companies are reassessing their interaction strategies to avoid negative brand associations.
Brand perception heavily influences consumer decisions. Customers form opinions about brands when they read a news story or online review, hear about a friend's experience, or visit the brand's website. Simply put, brand perception matters.
So how can companies monitor and foster positive perceptions of their brand? It begins with content moderation. Moderators today face substantial challenges in making sure that user-generated content and ad placements meet the company's standards. But human moderators can't do the job alone. An effective content moderation strategy needs both human moderators and technology solutions.
The need for content moderation
"Brand safety" has historically involved making sure ads didn't appear on questionable websites. However, the rise of programmatic media buying, where ads are distributed across thousands of websites within milliseconds, complicates the moderator's task. It can be difficult even for advertisers to know where their ads may appear.
Programmatic ads are placed on sites based on factors such as audience demographics, not specific site buys. For example, a consumer may be tagged under "fashion" or "beauty" and will see these types of ads regardless of the site he or she is viewing. And the practice continues to grow. U.S. programmatic display ad spending is expected to reach $37.9 billion, or 82 percent of total spending on display ads in 2018, up from $25.2 billion in 2016, according to eMarketer.
"Given the scale of the internet, it's impossible for a human being to sort through all that content," says Harikesh Nair, a professor of marketing at the Stanford Graduate School of Business. "Companies want to use digital tools to help them tackle this problem, but even then they need to hire and/or train employees to do it, which is why some companies have put it off or failed to adequately address the problem."
These are some of the factors that have caused brands to find themselves suddenly mired in controversy. Kellogg's, for example, was hit with a social media storm when its ads appeared on a website known for publishing anti-Semitic, sexist, and racist articles. Kellogg's announced that it was pulling all advertising from the website on the grounds that the site did not align with its values.
A significant part of the online criticism has been driven by a Twitter account called Sleeping Giants, which encourages people to tweet at brands whose ads appear on the contentious site. As of this writing, 818 brands have pulled ads from the site, according to Sleeping Giants.
Besides external pressure, advertisers expressed fears about inventory transparency even before the fake news and hate speech phenomenon gained attention. According to a survey from the Association of National Advertisers (ANA) and Forrester Consulting, 59 percent of U.S. marketers who responded said they felt "significantly challenged by the lack of inventory transparency" in 2016, up from 43 percent in 2014.
Tools such as whitelists, blacklists, and semantic technologies are designed to help advertisers filter out objectionable sites and placements. Whitelists allow advertisers to select the sites where they want their ads to appear, while blacklists allow advertisers to do the opposite: indicate the sites on which they don't want their ads to appear. Semantic technologies allow advertisers to prevent their ads from appearing on certain sites or next to certain types of content by filtering for language.
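To illustrate how these filters work in principle, the following Python sketch combines a whitelist, a blacklist, and a crude keyword check. The domains and banned terms are invented placeholders, and the logic is a simplified stand-in for what commercial brand-safety tools do, not any vendor's actual API.

```python
# Minimal sketch of whitelist/blacklist/keyword filtering for ad placements.
# Domains and banned terms are illustrative placeholders only.

WHITELIST = {"trustednews.example", "hobbyforum.example"}    # serve only here, if enabled
BLACKLIST = {"extremist-site.example", "clickbait.example"}  # never serve here
BANNED_TERMS = {"hate speech", "terror attack"}              # crude "semantic" language filter

def allow_placement(domain: str, page_text: str, use_whitelist: bool = False) -> bool:
    """Return True if an ad may be served on this page."""
    if domain in BLACKLIST:
        return False
    if use_whitelist and domain not in WHITELIST:
        return False
    lowered = page_text.lower()
    # Block placement next to pages containing banned language.
    return not any(term in lowered for term in BANNED_TERMS)

if __name__ == "__main__":
    print(allow_placement("clickbait.example", "celebrity gossip"))        # False
    print(allow_placement("hobbyforum.example", "weekend project ideas"))  # True
```

In practice the keyword check would be far more sophisticated, but the decision structure, block first, then allow, is the same one the tools above automate at scale.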
But research suggests that few marketers are actively using these safeguarding tools. Only 51 percent of U.S. marketers aggressively update blacklists, while 45 percent use whitelists, according to research from ANA and Forrester Consulting.
Even when these tools are used, they are not foolproof. Advertisers can't just launch an automated solution and forget about it. They must regularly check their ad placements and the sites they appear on to make sure nothing slips through the cracks. Facebook's algorithm, for example, mistakenly blocked bsdetector.tech, the website for a popular browser extension that detects and tags fake news sources.
Companies need a comprehensive strategy that enables them to proactively weed out problematic content quickly and efficiently. Enter content moderation.
What does a content moderator do?
Content moderators play a critical role in monitoring submitted content and ensuring that the information fits within a brand's guidelines of acceptable content. A content moderator may handle any of the following jobs:
Pre-moderation: Moderators review content submitted to a brand's website or community forum before making it visible to all members. This approach is best suited for content that is not time sensitive and/or includes legal risks where a moderator is needed to block libelous content. It could be helpful, for example, in online communities targeted at children, as a way to prevent bullying and other inappropriate behavior.
Post-moderation: User-generated content appears instantly online and is simultaneously placed in a virtual queue for moderation. This allows users to engage in real-time discussions without impeding the flow of conversation. Objectionable content on a Facebook ad or post, for instance, is flagged for the assigned team to review and potentially remove quickly after it's been posted.
Reactive moderation: Reactive moderators rely on users to report inappropriate content when they see it. This means posts are usually checked only if a complaint has been made, such as via a "report this" button. When alerted, an internal team of moderators will review the post and remove it if necessary. Reactive moderation puts power in the hands of the user. It shares the responsibility for blocking objectionable content with members. On the other hand, reactive moderation can result in a lot of false positives, where users flag content for no good reason.
User-only moderation: In this approach, users are responsible for deciding how useful or appropriate the user-generated content is. If a post has been flagged a certain number of times, it will be automatically hidden. And similar to reactive moderation, this type of moderation can be easily scaled as the online community grows, with minimal costs. However, trusting a community to moderate itself carries obvious risks. A company should consider appointing staff members to review the flagged content and/or use an average score to determine whether the content should remain visible or be reviewed.
Automated moderation: This approach uses various technical tools to process user-generated content and apply defined rules to reject or approve submissions. The most commonly used tool is the word filter, in which a list of banned words is entered and the tool either replaces them with an alternative word or blocks them out. A similar tool is the IP ban list. And some of the more sophisticated tools being developed include engines that generate automated conversational pattern analytics. Of course, the drawback is that an automated solution could miss nuanced phrases or other irregularities that a human content moderator would catch.
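As a rough illustration of the word-filter and IP-ban approach, here is a minimal Python sketch. The banned terms, the example IP address, and the masking token are made up for demonstration; a production system would apply far richer rules.

```python
# Illustrative word filter and IP ban check for automated moderation.
# Banned words, the replacement token, and the IP list are invented examples.

import ipaddress
from typing import Optional

BANNED_WORDS = {"badword1", "badword2"}   # placeholder terms
BANNED_IPS = {"198.51.100.23"}            # documentation-range example address
REPLACEMENT = "***"

def moderate(post_text: str, poster_ip: str) -> Optional[str]:
    """Mask banned words, or reject the post entirely if the poster's IP is banned."""
    # Normalize the address; raises ValueError on malformed input.
    if str(ipaddress.ip_address(poster_ip)) in BANNED_IPS:
        return None
    return " ".join(
        REPLACEMENT if word.lower().strip(".,!?") in BANNED_WORDS else word
        for word in post_text.split()
    )

print(moderate("This badword1 should be masked.", "203.0.113.7"))
# -> "This *** should be masked."
```

The gap the article notes is visible even here: a misspelled slur or a coded phrase sails straight past a simple lookup like this, which is where human reviewers come in.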
Balancing automation with human insight
To be effective, content moderators must be well-acquainted with the site's content and its audience's tastes, in addition to being knowledgeable about the topics discussed on the site. They must also be steeped in the relevant laws governing the site's country of origin and be experts in the user guidelines and other platform-level specifics concerning what is or is not allowed. And most important, they must be able to make quick decisions.
However, it is impossible for even the fastest moderator to keep up with the constant stream of content that floods the internet. Increasingly, companies are using a combination of technology and human intervention to flag objectionable content.
Facebook, for example, uses algorithms and human moderators to help it flag fake news. Algorithms tally fake news signals and prioritize what's sent to the fact checkers. If an article is flagged a certain number of times, it is directed to a coalition of fact-checking organizations including PolitiFact, Snopes, and ABC News. Those groups will review the article and offer recommendations on whether it should be marked as a "disputed" piece, a designation that will be visible to Facebook users.
Additionally, disputed articles will appear lower in the news feed. If users still decide to share a disputed article, they will receive a pop-up message reminding them that the accuracy of the piece is questionable.
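The pattern described here, tallying user flags and routing an article for review once a threshold is crossed, can be sketched in a few lines of Python. The threshold, data structures, and demotion penalty below are hypothetical illustrations, not Facebook's actual implementation.

```python
# Simplified sketch of the "tally flags, then route for review" pattern.
# Threshold, field names, and the feed penalty are invented for illustration.

from collections import Counter

FLAG_THRESHOLD = 25          # hypothetical number of user reports before review
flag_counts = Counter()      # article URL -> number of fake-news reports
review_queue = []            # articles awaiting third-party fact checking
disputed = set()             # articles that fact checkers marked as disputed

def report_article(url: str) -> None:
    """Record a user report; queue the article once it crosses the threshold."""
    flag_counts[url] += 1
    if flag_counts[url] == FLAG_THRESHOLD:
        review_queue.append(url)

def rank_score(url: str, base_score: float) -> float:
    """Demote disputed articles in the feed by applying a penalty."""
    return base_score * 0.5 if url in disputed else base_score
```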
Nair, though, thinks it is unlikely that human moderators will ever be completely replaced by digital technology. "Machine learning and artificial intelligence are only as useful as the data that they receive," he says. "But objectionable content doesn't have a fixed set of attributes; it's always changing, which means we need people to continuously 'teach' the machines to recognize the right content and keep an eye on the results."
How to measure results
Measuring the effectiveness of content moderation requires more than adding up the number of blocked posts. It's possible, for example, that only a few people had published objectionable posts that day, thereby skewing the results.
A more effective approach is to examine a range of factors, suggests Chris Brown, chief of staff at TeleTech. "Response times, as well as comparing the number and types of objectionable content that is being flagged, are important factors," he says. "Furthermore, what can the company learn about the content that is being removed or blocked? Is there a system in place to update a knowledge management database with these learnings?"
Ultimately, he adds, the performance metrics applied to the content moderation team must be aligned with the company's objectives and goals. If the objective is to increase positive brand perception, for instance, then the company may want to survey or mine the opinions of its customers as part of its analysis. "Measuring the effectiveness of content moderation is complicated, especially since the space changes quickly," Brown notes. "Success must be determined on a case-by-case basis."
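As a simple illustration of the kind of rollup Brown describes, the Python snippet below computes average response time and flag counts by category from a moderation log. The log fields, time units, and category labels are hypothetical.

```python
# Rough sketch of rolling up moderation metrics: response time and volume by
# category. The log format and category labels are hypothetical examples.

from collections import Counter
from statistics import mean

moderation_log = [
    {"category": "hate_speech", "flagged_at": 0.0,  "resolved_at": 12.5},
    {"category": "spam",        "flagged_at": 3.0,  "resolved_at": 4.0},
    {"category": "hate_speech", "flagged_at": 10.0, "resolved_at": 45.0},
]

def summarize(log):
    """Return average response time (minutes) and flag counts per category."""
    avg_response = mean(item["resolved_at"] - item["flagged_at"] for item in log)
    by_category = Counter(item["category"] for item in log)
    return {"avg_response_minutes": avg_response, "flags_by_category": dict(by_category)}

print(summarize(moderation_log))
```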
Content moderation is at a turning point; the rapid and constant nature of digital media requires companies to take a more active role in monitoring content that affects their brand. But this is not something marketers can set and forget. Effective content moderation strategies need technology to quickly comb the internet for problematic content as well as trained human associates to separate the false positives from the actual problems.
Though fake news and other objectionable content are here to stay, brands prepared to handle the accompanying challenges are the ones who will win positive mindshare and customer loyalty.