Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us:

  • Nandini will be on Resource Alliance’s panel “The ethics of Facebook for nonprofits” on September 3rd (tomorrow) at 10am EST/3pm GMT. Sign up here!
  • Nandini will also be on Media Rumble’s panel on September 4th discussing hate speech in Indian media along with Stop Funding Hate’s Richard Wilson and News Laundry’s Manisha Pande. Sign up here!

For a brief and glorious window of time last week, we received a rare glimpse into the secret algorithm that determines who on the internet gets to receive advertising dollars and who gets blocked.

Every year, brand safety technology companies block about $3 billion in ad spend from reaching news organizations, based on a single myth: that placing your ads on negative news stories is unsafe for your brand.

There is not a shred of evidence that this is true, but that hasn’t stopped them from offering at least two half-baked ways to steer your ads away from it:

  1. Keyword blocklists — a list of “bad words” brands don’t want their ads placed near (which we’ve previously covered here)
  2. Contextual intelligence — AI that “reads” a page, classifies its topic and category, and assigns an overall sentiment rating of positive, negative, or neutral.

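To make concrete why both approaches are so blunt, here’s a minimal, purely hypothetical sketch of how a blocklist-plus-sentiment gate might work. The blocklisted words, the is_monetizable function, and the sentiment scores below are all invented for illustration; IAS’s real model is a black box, and this is not it.

```python
# Hypothetical sketch only -- NOT IAS's actual algorithm.
# It illustrates why keyword blocklists and sentiment thresholds are blunt:
# the "bad words" below show up constantly in responsible news coverage.

BLOCKLIST = {"shooting", "death", "coronavirus", "lesbian"}  # illustrative "bad words"

def is_monetizable(page_text: str, sentiment_score: float) -> bool:
    """Return True if ads are allowed on this page.

    sentiment_score: pretend output of a sentiment model,
    from -1.0 (most negative) to +1.0 (most positive).
    """
    words = {w.strip('.,!?"').lower() for w in page_text.split()}
    if words & BLOCKLIST:          # any "bad word" on the page -> blocked
        return False
    return sentiment_score >= 0.0  # "negative" pages are blocked too

# A straight news report on a police shooting fails both tests, while an
# extremist essay with no blocklisted words and a "neutral" tone sails through.
print(is_monetizable("Kenosha police shooting coverage ...", -0.4))  # False
print(is_monetizable("Our heritage and our people ...", 0.1))        # True
```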
Thanks to the war on “negative” news, the sentiment rating has become one of the most critical factors behind whether a news article gets monetized. It’s also one of the shadiest, because no one knows how the machine works.

But last week, Integral Ad Science launched its “Context Control” demo, inviting users to test its version of the tool, which allegedly reads “like a human” and spits out a real-time verdict on whether a page leaves readers feeling overall good (positive) or bad (negative).

We took the tool for a spin, testing it on (what else?) hate outlets. The next day, the demo had vanished.

We can guess why. The demo reveals a machine that is actually alarmingly and dangerously mixed up. It does not know North from South or up from down. It is a college student’s computer science project, which she turns in to the professor at the end of the semester and says, “Good thing no one’s ever going to use this in real life, haha.”

Except IAS does use this machine in real life and at scale, to control billions of impressions and, ultimately, the fate of newsrooms around the world.

IAS’s ‘context control’ demo: The results

Here’s what we found:

First, we looked up a handful of extremist and extremist-adjacent sites. The sentiment machine didn’t seem to catch on:

  • American Renaissance (amren[dot]org) is a white nationalist site run by Jared Taylor. It was rated neutral.
  • Liberty Hangout (libertyhangout[dot]org) is the website of Kaitlin Bennett, the Kent State grad who harasses students on college campuses about homophobia and guns. It was rated positive.
  • Drudge Report (drudgereport[dot]com) is the aggregation and disinformation site run by Matt Drudge. It had no rating.

Then we tested an article about “lesbian bed death.” We’ve long hypothesized that this socially accepted topic would be unfairly dinged because it contains two “bad words” (guess which ones?). It turns out we were right.

We also looked at Kenosha’s local newspaper covering Jacob Blake’s story.

Then we looked at coverage of Jacob Blake’s story from The Root, which provides an unflinching analysis of issues in the Black community:

  • The Root’s coverage was categorized as “sensitive social issues.” It was rated negative.

IAS’s demo was removed before we could investigate further. But we saw enough to confirm that if you’re taking IAS’s advice to avoid negative content and using their technology to do it, you’re actually keeping your brand dollars away from some of the most responsible media coverage out there today. And probably still funding white nationalism.

They’re not against the news, just the negative news

Why are we measuring negative sentiment to begin with? If you’re not in the adtech bubble, let’s catch you up. For years, the adtech industry has coalesced around the myth that placing ads on negative news could harm your brand.

It’s a fabrication that is as bold as it is bonkers. No brand has ever faced a brand safety crisis for placing its ads on the news. Meanwhile, any brand that takes this advice is forfeiting its spot on the most highly trafficked, highly reputable domains in the world.

It’s ridiculous and it’s coming from the top. At the start of the global pandemic, the CEO of IAS urged clients to use its technology to target “positive hero-related content.” What she didn’t mention was that there is precious little good news to go around in 2020, and that following this advice means withholding revenue from the news organizations producing the essential coronavirus reporting we all depend on.

The anti-bad-news campaign worked, too. Buzzfeed reported in March that one major brand blocked 2.2 million ads from appearing next to “coronavirus-related keywords,” which resulted in up to 56% of impressions being blocked from the Washington Post, New York Times, and Buzzfeed News.

“But Nandini and Claire, they’re still getting the money”

Do publishers still receive the revenues if the ad is blocked? No one can say for sure. Ad tech folks will tell you that blocking the news happens so quickly on a page-by-page basis that it often has to happen after the “bid” has taken place on ad exchanges — after the budget has already been spent. This is somehow meant to defend this type of blocking, as if to say “it’s harmless anyway, so why complain?” Here are some reasons:

  • Because no one outside the brand safety companies can tell whether the block is pre-bid or post-bid.
  • Because publishers are definitely getting blocked from collecting revenues, but we don’t know by how much.
  • Because post-bid blocking would mean that marketers are spending money on ads that don’t even appear.

If the block happens pre-bid, it’s bad for publishers. If it happens post-bid, it’s bad for marketers. Folks, this would all be a lot easier if we just didn’t block the news.
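If it helps to see those two failure modes side by side, here’s a toy sketch. The blocked_impression helper, the stage names, and the $2.50 bid are all made up for illustration; as noted above, nobody outside the brand safety vendors knows exactly how settlement really works.

```python
# Toy model of the two failure modes described above. Numbers are invented
# and the settlement logic is a guess -- the vendors don't disclose the real flow.

def blocked_impression(stage: str, bid_price: float) -> str:
    """Describe who bears the cost when one impression is blocked."""
    if stage == "pre-bid":
        # The page never enters the auction at all.
        return f"publisher loses the ~${bid_price:.2f} it would have earned"
    if stage == "post-bid":
        # The auction already cleared and budget was spent, then the ad was pulled.
        return f"marketer already paid ~${bid_price:.2f} for an ad that never appears"
    raise ValueError(f"unknown stage: {stage}")

for stage in ("pre-bid", "post-bid"):
    print(stage, "->", blocked_impression(stage, bid_price=2.50))
```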

Who built this program anyway?

Our Twitter thread made its way to the adops subreddit, where one user had a message for Nandini (“leftist psycho”) and all the other idiots:

“If you think any ‘machine learning’ or other bullshit is going to fit into your subjective interpretation of all pages, think again.”

They nailed it. How is a machine supposed to be subjective?

Interpreting what we read is a human thing. We decide how we feel about an article based on our knowledge, values, cultural identity, and the sum of our experiences as sentient creatures. The only thing that matters in brand safety is what humans think.

Machines don’t have any of those. They only know what the humans who built them taught them. And that’s where we hit a wall. The algorithms are built by a team of people that Integral Ad Science (and DoubleVerify and Oracle) prefer to keep under wraps.

We don’t know who made it. We don’t know their backgrounds, their cultural experiences, or whether they are a diverse group. We do not know whether the developers understand that racism and white nationalism are bad, or whether they have a baseline understanding of how to identify hate speech and disinformation.

And that means we are handing over one of our most critical brand safety decisions — where our brand appears — to a group of unknown people and the black box they built. Both publishers and brands are left in the dark.

If you’re uncomfortable with censorship, how about these advertising decisions that we never even see? You couldn’t find a more dystopian way to kill the free press.

A product looking for a problem to solve

We’ve covered this before, but let’s reiterate: Brand safety is not about a page-by-page analysis. No social media crisis will come from an awkward ad placement. What will create a brand safety crisis, though, is funding organizations that peddle dangerous rhetoric (hate speech, conspiracy theories, dangerous disinformation).

Brand safety is about ensuring that your ad spend aligns with your brand values. When the technology doesn’t fit the goal, it does more harm than good. You don’t need to scan every page on The Boston Globe. It should be on your inclusion list.

What can you do?

If you use brand safety technology, here’s what you should do:

  • Review your keyword blocklist. Ask for a copy of it and send it back with heavy edits. This list should be short.
  • Review your inclusion list. When Chase Bank cut its inclusion list from 400,000 to 5,000 websites, its performance stayed the same. What websites would you include in your list of 5,000?
  • Finally, and you probably saw this coming… check your ads! If you’re curious about what’s happening in your ad spend, the very first place to start is your site list of placements.

Thanks for reading!

Nandini and Claire


Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin.