Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Two weeks ago, the U.S. Treasury made a very rare move. They announced new sanctions against a Russian disinformation operation — and they named names. They don’t usually do that.

In an April 15 press release, they called out four outlets operated by Russian Intelligence Services for attempting to interfere in the 2020 U.S. presidential election: Newsfront, SouthFront, InfoRos, and Strategic Culture Foundation.

The press release goes on to explain that the Russia-backed effort has been operating “a network of websites that obscure their Russian origin to appeal to Western audiences.” The outlets focus on amplifying divisive issues in the United States, denigrating U.S. political candidates, and disseminating false and misleading information.

You’ll never believe what happened next. All the adtech platforms got together and immediately pulled the websites from their inventory so they could ensure a continued brand-safe environment for their clients.

Haha just kidding. They either don’t know or don’t care, and are now illegally transacting with a Russian influence operation on the U.S. sanctions list. Technically, this puts them on the hook for civil and criminal penalties of up to twenty years’ imprisonment and $1,000,000 in fines per violation.

This BRANDED, we’re going to explore the most extreme brand crisis we’ve seen yet: one that not only undermines our brands, but our democratic process — and could result in the opening of a federal investigation.

Let’s check out the companies propping up SouthFront.

Wow, this is really bad

In our research, we found that over a dozen adtech companies are still serving ads on SouthFront, a disinformation outlet described by the U.S. Treasury as follows:

SouthFront is an online disinformation site registered in Russia that receives taskings from the FSB. It attempts to appeal to military enthusiasts, veterans, and conspiracy theorists, all while going to great lengths to hide its connections to Russian intelligence. In the aftermath of the 2020 U.S. presidential election, SouthFront sought to promote perceptions of voter fraud by publishing content alleging that such activity took place during the 2020 U.S. presidential election cycle.

Using Adalytics, we have been able to confirm the following companies are actively working with SouthFront (a rough way to spot-check this yourself follows the list):

  • Google
  • RhythmOne
  • OpenX (they dropped the site after we contacted them for comment)
  • Taboola via Disqus Reveal
  • Sovrn
  • AcuityAds
  • Criteo
  • The Trade Desk
  • Xandr (formerly AppNexus)
  • NextRoll (formerly AdRoll)
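
For our research we used Adalytics, but anyone can run a rough spot-check of who is authorized to sell a site’s inventory by reading its public ads.txt file, an IAB standard. Below is a minimal Python sketch; it assumes the site still serves an ads.txt at the conventional path, which a sanctioned outlet may change or remove at any time.

```python
# Minimal sketch: read a publisher's public ads.txt file (an IAB standard)
# to list the exchanges authorized to sell its inventory. Assumes the site
# still serves ads.txt at the conventional path; a sanctioned outlet may
# change or remove it at any time.
import urllib.request

def authorized_sellers(domain):
    """Return the set of exchange domains listed in https://<domain>/ads.txt."""
    url = f"https://{domain}/ads.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    sellers = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        fields = [field.strip() for field in line.split(",")]
        if len(fields) >= 3:  # exchange domain, seller ID, relationship, [cert]
            sellers.add(fields[0].lower())
    return sellers

if __name__ == "__main__":
    for exchange in sorted(authorized_sellers("southfront.org")):
        print(exchange)
```

Keep in mind that ads.txt only shows who is authorized to sell the inventory; our confirmation above comes from observing ads actually served.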

What’s the excuse this time? Maybe they “just didn’t catch this one”? Maybe it slipped through their “rigorous” vetting process? Those responses may not work this time. This isn’t just another brand safety mishap. These platforms are performing illegal transactions on behalf of their clients — all while dragging their clients’ brands through the mud.

The Department of State already identified some of these media outlets as disinformation last August. And this isn’t the first time that brands have unwittingly funded Russian propaganda.

Since the Sleeping Giants campaign, advertisers have invested heavily in brand safety solutions to make sure they wouldn’t be caught out again. That makes these ad placements on a sanctioned Russian disinformation operation a monumental business and legal cluster*&^k.

It also presents an opportunity for us to talk about an uncomfortable truth: the advertising industry has no blueprint, plan, or even the slightest idea of how to stop the flow of millions of ad dollars towards disinformation.

How did all these companies miss this?

The answer is straightforward. What do you do when you know there is a multi-billion-dollar market opportunity for a problem but you don’t have a solution for it? You fake it till you make it. You make things up as you go along. And you project a hell of a lot of confidence in a solution you know doesn’t work.

That’s what’s happening here.

Brand safety technology can’t actually identify disinformation

Brand safety vendors talk endlessly about the importance of protecting your brand from “fake news.” They release study after study about it. All of this distracts you from how limited their capabilities really are.

Even after years of product development, the only thing their technology can actually do is identify hateful or inflammatory content — which can intersect with disinformation. In a blog post, Integral Ad Science explains that their tech can “effectively tackle the controversial side of fake news, for example hate speech and offensive language.” They confirm this in their Media Quality Report.

(Sidebar: Wtf is the “controversial side” of fake news?)

They appear to be powerless to surface disinformation that doesn’t use foul language — like SouthFront.
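
To make that limitation concrete, here is a deliberately crude sketch of keyword-style flagging: the kind of check that catches slurs and profanity but waves through calmly worded false claims. The blocklist and the sample sentence are invented for illustration; this is not any vendor’s actual model.

```python
# Deliberately crude illustration of keyword-based "brand safety" flagging.
# The blocklist and sample sentence are invented for illustration and do not
# represent any vendor's actual technology.
OFFENSIVE_TERMS = {"<slur>", "<profanity>", "<epithet>"}  # placeholder tokens

def is_flagged(text):
    """Flag content only if it contains a blocklisted term."""
    words = {word.strip(".,!?\"'").lower() for word in text.split()}
    return bool(words & OFFENSIVE_TERMS)

# Calmly worded disinformation sails straight through:
claim = "Analysts confirm that widespread fraud decided the 2020 election."
print(is_flagged(claim))  # False: no foul language, so no flag
```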

IAS’s competitor, DoubleVerify, doesn’t appear to fare much better. Their technology can identify “inflammatory, politically-charged stories,” but their sales pitch shies away from claiming they can identify false stories and disinformation efforts that use normal, everyday language.

For all their fearmongering, Oracle Data Cloud doesn’t even mention disinformation as one of the biggest ongoing sources of reputational harm.

Oh well, at least they’re finding the hate speech? No, not that either. In the press release, the Director of National Intelligence describes a series of tactics that should have been flagged by even the most basic brand safety technology.

These publishers:

  • Undermine public confidence in elections
  • Erode faith in COVID vaccines
  • Inflame racial tensions and incite violence

It really makes you wonder: What exactly are you paying them for?

Adtech platforms don’t have a clue what disinformation is

The folks tasked with protecting your brand are spitballing.

“The truth is that fake news is a very difficult thing to classify, because what one person sees as fake news, another person may see as legitimate,” says Integral Ad Science in a blog post titled “Everything You Wanted To Know About Fake News.”

No. One man’s fake news is not another man’s real news, Brad. Disinformation is not just a bad opinion; it’s a well-defined and documented tactic used by bad actors to accelerate antisocial outcomes.

This is just not the big, messy dilemma adtech leaders want it to be. Journalistic standards exist. Working definitions of disinformation developed by respected researchers exist. The Media Manipulation Casebook literally has a glossary of case studies and definitions available, free of charge.

There are answers and solutions here. But it seems adtech companies would rather default to whatever is more profitable for them, at the expense of their customers.

Magnite (formerly Rubicon Project) bizarrely admits that disinformation doesn’t violate their brand safety standards: “Though we’re committed to preventing the monetization of content that’s clearly extreme, we don’t see it as our role to remove content that falls short of that mark but is simply untrue or even offensive,” writes Magnite CTO Tom Kershaw in a July 2020 post.

Tell that to the Director of National Intelligence, Tom.

The industry doesn’t have a working definition of disinformation

There are only a few associations in the industry that have any real clout in the room. The World Federation of Advertisers (WFA) is one of them.

Last summer, a WFA initiative called GARM (Global Alliance for Responsible Media) launched an updated set of definitions for harmful content that it intended Facebook, Twitter, and the wider adtech industry to adopt. They left disinformation out entirely. As Nandini commented on the business TV show MoneyTalks, “we thought they’d be further along by now.”

We don’t know what happened on those GARM Zoom calls. But we do know that their new standards are toothless, particularly if they’ve left advertisers with brand safety standards that don’t account for COVID disinformation, anti-vaccination content, election disinformation and conspiracy theories.

There is one small bright spot, though: a 4As whitepaper with a fleshed-out definition of disinformation and misinformation that advertisers could actually use. It provides us with a never-before-seen level of clarity:

Misinformation and Disinformation are defined as the presentation of verifiably false or misleading claims that are likely to cause harm:

  • Misleading content: misleading use of information to frame an issue or individual
  • Imposter content: genuine sources that are impersonated
  • Fabricated content: new false content
  • False connection: headlines, visuals, or captions that don’t support the content
  • False context: genuine content that is shared with false contextual information
  • Manipulated content: genuine information or imagery that is manipulated to deceive

It’s still sitting there, not yet adopted by GARM. Without a definition like this, adtech people will continue to hem and haw about bias, opinions, and free speech. With one, we stand a real chance of stopping ad-funded disinformation.
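
The taxonomy is also specific enough to be machine-readable. As a purely hypothetical sketch, here is how a brand safety pipeline could encode the 4As categories as shared labels; the enum and its names are our own mapping of the whitepaper, not an existing industry standard.

```python
# Hypothetical encoding of the 4As misinformation/disinformation taxonomy
# as shared, machine-readable labels. The enum is our own mapping of the
# whitepaper's categories, not an existing industry standard.
from enum import Enum

class DisinfoCategory(Enum):
    MISLEADING_CONTENT = "misleading use of information to frame an issue or individual"
    IMPOSTER_CONTENT = "genuine sources that are impersonated"
    FABRICATED_CONTENT = "new false content"
    FALSE_CONNECTION = "headlines, visuals, or captions that don't support the content"
    FALSE_CONTEXT = "genuine content shared with false contextual information"
    MANIPULATED_CONTENT = "genuine information or imagery manipulated to deceive"

# A reviewer or classifier could then tag a flagged page with one shared label:
verdict = DisinfoCategory.FALSE_CONTEXT
print(verdict.name, "->", verdict.value)
```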

We’re paying a disinformation tax

This is not the first time we’ve said this: the $300 billion digital advertising industry is a national security threat.

There is no system in the world that is more opaque, less regulated, or more actively working against our interests as businesses and as a nation. No other system makes turning against our own country the default setting.

As advertisers, we are all now locked into paying a disinformation tax. We’re in a system that tricked the Biden campaign into paying for Breitbart ads. We’re in a system that enables Breitbart to keep collecting ad dollars long after advertisers thought they had blocked them. We’re in a system where we have to pay to protect our brands, and have no recourse when those protections fail us.

How much have these adtech companies already funneled to Russian operatives? How many of us have participated in an illegal transaction we had no idea about? And how much longer would it have gone on if we weren’t publishing this article today?

We should not have to choose between advertising and upholding the pillars of our democracy. The sooner we admit that there is no plan, the sooner we can start working towards real solutions.

For now, we have two action items for you:

  • Check your ads. Find out whether your ads have been placed on any of the four sites above (a starter script follows this list).
  • If you find your ads on these sites, ask for a refund. It’s the American way.
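
For the first item, here’s a minimal starting-point script that scans a campaign placement report for the sanctioned outlets’ domains. The CSV filename, the “domain” column, and the domain spellings are all assumptions; substitute your own DSP’s export format and confirm the domains against OFAC’s listing before acting.

```python
# Minimal sketch: scan an ad placement report for the four sanctioned outlets.
# "placements.csv" and its "domain" column are assumptions; substitute the
# export format your DSP or ad server actually produces. Confirm the domain
# spellings against OFAC's listing before taking action.
import csv

SANCTIONED_DOMAINS = {
    "southfront.org",         # SouthFront
    "news-front.info",        # Newsfront
    "inforos.ru",             # InfoRos
    "strategic-culture.org",  # Strategic Culture Foundation
}

with open("placements.csv", newline="", encoding="utf-8") as report:
    for row in csv.DictReader(report):
        domain = row["domain"].strip().lower().removeprefix("www.")
        if domain in SANCTIONED_DOMAINS:
            print("Sanctioned placement found:", row)
```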

As always, thanks for reading,

Nandini and Claire


Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin.