It's wild how these adtech platforms are violating U.S. sanctions against Russia

Placing ads with the enemy

Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us: 


Two weeks ago, the U.S. Treasury made a very rare move. They announced new sanctions against a Russian disinformation operation — and they named names. They don’t usually do that.

In an April 15 press release, they called out four outlets operated by Russian Intelligence Services for attempting to interfere in the 2020 U.S. presidential elections: Newsfront, SouthFront, InfoRos, and Strategic Culture Foundation.

The press release goes on to explain that the Russia-backed effort has been operating “a network of websites that obscure their Russian origin to appeal to Western audiences.” The outlets focus on amplifying divisive issues in the United States, denigrating U.S. political candidates, and disseminating false and misleading information.

You’ll never believe what happened next. All the adtech platforms got together and immediately pulled the websites from their inventory so they could ensure a continued brand safe environment for their clients.

Haha just kidding. They either don’t know or don’t care, and are now illegally transacting with a Russian influence operation on the U.S. sanctions list. Technically, this puts them on the hook for civil and criminal penalties of up to twenty years imprisonment and $1,000,000 in fines per violation.

This BRANDED, we’re going to explore the most extreme brand crisis we’ve seen yet: one that not only undermines our brands, but our democratic process — and could result in the opening of a federal investigation.

Let’s check out the companies propping up SouthFront.

Wow, this is really bad

In our research, we found that over a dozen adtech companies are still serving SouthFront, a disinformation outlet described by the U.S. Treasury as follows:

SouthFront is an online disinformation site registered in Russia that receives taskings from the FSB. It attempts to appeal to military enthusiasts, veterans, and conspiracy theorists, all while going to great lengths to hide its connections to Russian intelligence. In the aftermath of the 2020 U.S. presidential election, SouthFront sought to promote perceptions of voter fraud by publishing content alleging that such activity took place during the 2020 U.S. presidential election cycle.

Using Adalytics, we have been able to confirm that the following companies are actively working with SouthFront: 

  • Google 

  • RhythmOne 

  • OpenX (they dropped the site after we contacted them for comment)

  • Taboola via Disqus Reveal

  • Sovrn

  • AcuityAds

  • Criteo

  • The Trade Desk

  • Xandr (formerly AppNexus)

  • NextRoll (formerly AdRoll) 

What’s the excuse this time? Maybe they “just didn’t catch this one”? Maybe it slipped through their “rigorous” vetting process? Those responses may not work this time. This isn’t just another brand safety mishap. These platforms are performing illegal transactions on behalf of their clients — all while dragging their clients’ brands through the mud. 

The Department of State already identified some of these media outlets as disinformation last August. And this isn't the first time that brands have unwittingly funded Russian propaganda.

Since the Sleeping Giants campaign, advertisers have invested heavily in brand safety solutions to make sure they wouldn’t be caught out again. That makes this ad-placements-on-an-illegal-Russian-disinformation-operation both a monumental business and legal cluster*&^k.

It also presents an opportunity for us to talk about an uncomfortable truth: the advertising industry has no blueprint, plan, or even the slightest idea of how to stop the flow of millions of ad dollars towards disinformation.

How did all these companies miss this? 

The answer is straightforward. What do you do when you know there is a multi-billion dollar market opportunity for a problem but you don’t have a solution for it? You fake it till you make it. You make things up as you go along. And you project a hell of a lot of confidence in a solution you know doesn’t work.

That’s what’s happening here.

Brand safety technology can’t actually identify disinformation

Brand safety vendors talk endlessly about the importance of protecting your brand from “fake news.” They release endless studies about it. This is to distract you from realizing how limited their capabilities really are.

Even after years of product development, the only thing their technology can technically do is identify hateful or inflammatory content — which can intersect with disinformation. In a blog post, Integral Ad Science explains that their tech can “effectively tackle the controversial side of fake news, for example hate speech and offensive language.” They confirm this in their Media Quality Report. 

(Sidebar: Wtf is the “controversial side” of fake news?) 

They appear to be powerless to surface disinformation that doesn’t use foul language — like SouthFront. 

IAS’s competitor, DoubleVerify, doesn’t appear to fare much better. Their technology can identify “inflammatory, politically-charged stories,” but their sales pitch shies away from claiming they can identify false stories and disinformation efforts that use normal, everyday language. 

For all their fearmongering, Oracle Data Cloud doesn’t even mention disinformation as one of the biggest ongoing sources of reputational harm.

Oh well, at least they’re finding the hate speech? No, not that either. In their press release, the Director of National Intelligence describes a series of tactics that should have been flagged by even the most basic brand safety technology. 

These publishers:

  • Undermine public confidence in elections

  • Erode faith in COVID vaccines 

  • Inflame racial tensions and violence 

It really makes you wonder: What exactly are you paying them for?

Adtech platforms don’t have a clue what disinformation is

The folks tasked with protecting your brand are spitballing.

“The truth is that fake news is a very difficult thing to classify, because what one person sees as fake news, another person may see as legitimate,” says Integral Ad Science in a blog post titled “Everything You Wanted To Know About Fake News.”

No. One man’s fake news is not another man’s real news, Brad. Disinformation is not just a bad opinion; it’s a well-defined and documented tactic used by bad actors to accelerate antisocial outcomes.

This is just not the big, messy dilemma adtech leaders want it to be. Journalistic standards exist. Working definitions of disinformation developed by respected researchers exist. The Media Manipulation Casebook literally has a glossary of case studies and definitions available, free of charge. 

There are answers and solutions here. But it seems adtech companies would rather default to whatever is more profitable for them, at the expense of their customers. 

Magnite (formerly Rubicon Project) bizarrely admits that disinformation doesn’t violate their brand safety standards: “Though we’re committed to preventing the monetization of content that’s clearly extreme, we don’t see it as our role to remove content that falls short of that mark but is simply untrue or even offensive,” writes Magnite CTO Tom Kershaw in a July 2020 post.

Tell that to the Director of National Intelligence, Tom.

The industry doesn’t have a working definition of disinformation

There are only a few associations in the industry that have any real clout in the room. The World Federation of Advertisers (WFA) is one of them. 

Last summer, a WFA initiative called GARM (Global Alliance for Responsible Media) launched an updated set of definitions for harmful content that they intended Facebook, Twitter and the wider adtech industry to adopt. They left disinformation out entirely. As Nandini commented on the business TV show MoneyTalks, “we thought they’d be further along by now.” 

We don’t know what happened on those GARM Zoom calls. But we do know that their new standards are toothless, particularly if they’ve left advertisers with brand safety standards that don’t account for COVID disinformation, anti-vaccination content, election disinformation and conspiracy theories.

There is one small bright spot, though: a 4As whitepaper with a fleshed-out definition of disinformation and misinformation that advertisers could actually use. It provides us with a never-before-seen level of clarity:

Misinformation and Disinformation are defined as the presentation of verifiably false or misleading claims that are likely to cause harm:

  • Misleading content: misleading use of information to frame an issue or individual 

  • Imposter content: impersonation of genuine sources 

  • Fabricated content: new, wholly false content 

  • False connection: headlines, visuals, or captions that don’t support the content 

  • False context: genuine content shared with false contextual information 

  • Manipulated content: genuine information or imagery manipulated to deceive

It’s still sitting there, not yet adopted by GARM. Without the inclusion of this kind of definition, adtech people will continue to hem and haw about bias, opinions, and free speech. But with a working definition, we stand a real chance of stopping ad-funded disinformation.

We’re paying a disinformation tax

This is not the first time we’ve said this: the $300 billion digital advertising industry is a national security threat. 

There is no system in the world more opaque, more unregulated, or more actively working against our interests as businesses and as a nation. There is no system in the world that makes turning against our own country the default setting.

As advertisers, we are all now locked into paying a disinformation tax. We’re in a system that tricked the Biden campaign into paying for Breitbart ads. We’re in a system that enables Breitbart to keep collecting ad dollars long after advertisers thought they blocked them. We’re in a system where we have to pay to protect our brands, and have no recourse when they fail us.

How much have these adtech companies already funneled to Russian operatives? How many of us have participated in an illegal transaction we had no idea about? And how much longer would it have gone on if we weren’t publishing this article today? 

We should not have to choose between advertising and not undermining the pillars of our democracy. The sooner we admit that there is no plan, the sooner we can start working towards real solutions. 

For now, we have two action items for you: 

  • Check your ads. Find out if you have had your ads placed on any of the four sites above.

  • If you find your ads on these sites, ask for a refund. It’s the American way. 

As always, thanks for reading,

Nandini and Claire 

Leave a comment

Oh no, they let us testify at the EU Parliament!

And you better believe we named names and called out Google.

Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us: 

🔥 We also learned that we’ve been nominated for Innovator of the Year in AdWeek’s Reader Choice Awards. If you have a moment, please vote for us! We appreciate your support.🔥


We’re bringing you a special edition of BRANDED this week.

Yesterday, Nandini testified before the Special Committee on Foreign Interference in all Democratic Processes in the European Union, including Disinformation, alongside Clare Melford, Co-Founder of the Global Disinformation Index, and Ghita Harris-Newton, Google’s Director of Government Affairs & Public Policy.

That’s a mouthful! But what it means is that she got to virtually take the floor at the European Parliament in Brussels and loudly advocate for — who else? — brands and advertisers.

Each expert had 10 minutes to explain ad tech before they were pummeled with two rounds of questions. You can watch the recording here (Nandini starts at 16:24).

Or read the full statement here:

The statement

My name is Nandini Jammi. I run Check My Ads, a consultancy that helps companies keep their ads off hate and disinformation. 

Before the 2016 U.S. elections I was working as a marketer for a tech startup. After the 2016 U.S. elections, I took on a second voluntary role: I began co-running Sleeping Giants, an anonymous social media campaign that centered around making one website, Breitbart.com, unprofitable.

Breitbart had become one of the most influential propaganda outlets in the U.S. by November 2016, by putting out false and intentionally misleading “news” stories designed to misinform and inflame the public. 

These tactics worked. They worked so well that Breitbart is widely credited with helping Trump win the election through the use of false narratives, the chaos and confusion they sowed in the traditional media ecosystem and the environment of fear and hatred they cultivated across our country.

Their disinformation operation was so successful that they announced plans to expand into Germany and France ahead of their national elections. 

At the time, they looked unstoppable. 

But what I saw when I visited this website was that it was full of ads — and that in fact, advertising revenues were how they were sustaining their growth. I also knew that, given how ads are placed at scale across the internet today, it was very unlikely that the advertisers were even aware their ads were running on — let alone funding the growth and success of — this website.

Under the pseudonym Sleeping Giants, my campaign co-founder and I began taking screenshots of ads on Breitbart and alerting advertisers on Twitter — and found that we were correct. It didn’t matter who you were — small businesses, multinational corporations, non-profits, government agencies — even the largest and most sophisticated marketing organizations in the world were unaware that their ads were being automatically placed alongside bigoted, hateful and racist content. 

During the course of our campaign, over 4000 advertisers and over 30 ad exchanges blocked Breitbart from their media buys. Breitbart was projected to make $8 million in revenue in 2017. Instead, they lost 90% of their projected revenue within just 3 months as a direct result of our campaign.

Just as importantly, they abandoned their plans to open outlets in Germany and France. At Sleeping Giants, we successfully curbed the growth trajectory of one of the largest vectors of disinformation in our country. 

Our campaign also established a precedent: that advertisers overwhelmingly do not want their budgets to fund hate, disinformation or election interference. 

But despite universal consensus across the advertising industry, the flow of ad dollars towards disinformation continues every single day. In fact, every advertiser running an ad campaign on the open web today - whether it’s your neighborhood yoga studio or a multinational corporation - is funding disinformation by default.

Think of it as a tax that advertisers are being forced to pay because the digital advertising supply chain refuses to do its job. No reputable advertiser wants anything to do with disinformation. But their ad budgets have been hijacked by a handful of adtech companies who are forcing brands to take the hit for their negligence.

Stopping the flow of ad dollars towards disinformation has now become both a personal and professional mission for me. 

As an activist, I view disinformation as a threat to our collective public safety and our democratic process. And as a brand safety consultant to some of the biggest brands in the world, I view this as a solvable business problem. 

How programmatic advertising works

Let me take a moment to explain programmatic advertising. It has a lot of moving parts, but I’m going to keep it simple today and focus on what matters.

Advertising is really just about connecting two players: Publishers & Advertisers. Publishers have the audience. Advertisers want to pay to get in front of that audience. 

Now, before the internet, advertisers would make media buys: direct deals with newspapers, magazines, billboards, wherever they wanted to be. It was manual and it was time-consuming, but they did know where their ads were being placed.

Ad exchanges

What ad exchanges brought to the table was the promise of efficiency. 

They said: “You know what? Let us make this efficient for you. We have a pool of publishers in our inventory and we can place those ads for you. You can reach anyone, anywhere in the world at a fraction of the cost. You just tell us your budget and we’ll take care of it.” This was revolutionary. 

They made these tools available to everybody. And by they, I mean mostly Google. Your average small business could now place their ads not just in hundreds of places but across hundreds of thousands of webpages all at once.

You can think of it as going shopping in the supermarket. You walk into a supermarket assuming they adhere to basic food safety standards. You assume their employees regularly check on inventory, you assume you won’t accidentally go home with a bag of moldy bread. It’s the supermarket’s job to provide you with safe, fresh food, right?

Now what if you went to a supermarket that doesn’t check their inventory? And also stocks the shelves with expired and recalled foods because they think it’s a “gray area” and they don’t want to stop you from buying it? How is the average shopper supposed to know what’s good and what shouldn’t be eaten?

Google and other ad exchanges have not been vetting their inventory adequately. Instead, they’ve pushed the responsibility to manage their growing inventory onto their customers.

As Google accepts more and more disinformation into their inventory, they want advertisers to find and block it themselves.

“Here, have more filters, more settings, more page-by-page granular control.”

This has become a core strategy in an industry that realizes they can continue to invite disinformation and extremism into their inventory (in the name of choice) while putting the onus on advertisers to sniff it out.

I can speak from experience: No marketing team in the world has the time, resources or expertise to seek out and block individual disinformation websites on the internet. Google does.

And today, Google is the biggest sponsor of disinformation in the world. And they’re doing it against the consent of their own customers.

Brand safety vendors 

So how do you get control over where your ads are placed? That’s where brand safety vendors have swooped in. Today, ad verification companies like Integral Ad Science, DoubleVerify and Oracle wield an enormous amount of power over how ad budgets are distributed across the internet.

We don’t see these decisions take place, but their brand safety algorithms scan every page and every piece of content we look at to decide whether it’s “safe” before serving an ad. These millions of little verdicts add up. They determine who on the web gets monetized — and who gets blocked. 

There’s just one problem: these are black box algorithms. They distribute billions of ad dollars across the web, and we don’t know how they work or whether they work accurately.

In fact, it doesn’t appear that they can tell the difference between “the promotion of disinformation” and “legitimate news coverage.”

These companies claim that they can keep their clients’ ads away from disinformation, but we have evidence that suggests these algorithms are fundamentally broken. With research provided to us by adtech researcher Dr. Krzysztof Franaszek, we recently reported that:

  • Oracle marked nearly one-third (30.3%) of New York Times articles as unsafe, including 98% of articles by Marilyn Stasio, who reviews crime fiction for the New York Times Book Review.

  • Oracle marked one-fifth (21.4%) of The Economist’s articles as unsafe, including an article about cell biology that was likely classified under “Death, Injury, or Military Conflict” because it happened to mention “programmed cell death.”

An algorithm that can’t tell the difference between actual violence and news coverage? That’s not a very smart algorithm, is it?

Now, if these are the kinds of numbers we’re seeing for English language outlets, imagine how dismal the numbers must be in German, French, Italian, Spanish and so on. 

Additionally, none of these companies have a public disinformation policy, which is significant because, according to Dr. Franaszek’s research, they are funneling their clients’ ad budgets towards disinformation at higher rates than towards news: 

  • One America News Network (OANN.com), a critical vector of election disinformation, was rated 88.5% safe.

  • Hannity.com, whose figurehead has denied and downplayed the pandemic since spring, was rated 60% safe.

  • TownHall.com, whose coverage of what they call the “Wuhan Virus” has been racist at best, was rated only 69.5% safe.

This faulty, broken technology is doing immeasurable damage to our free press. According to research published in The Guardian, brand safety technology has cost news publishers in the UK, US, Japan, and Australia around $3.2 billion in digital revenue.

Advertisers are not in control

We often think of disinformation as a societal or political problem — and it is. But as I said before, I have come to see disinformation as a business problem. Advertisers have no idea how their money is being spent and enormous amounts of money are completely unaccounted for.

I’ll leave you with the story of one of our clients: We performed an ad audit for Headphones.com, a small business in Canada that sells high-end headphones. We identified disinformation in their ad spend, which prompted them to check their ads. After implementing our recommendations, their ad spend went down from $1,200/day to $40/day with no change in performance. That means roughly 97% of their ad spend had nothing to do with their success.
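A quick sanity check on those daily figures (the $1200/day and $40/day numbers come from the audit described above; the derived percentages and annualized savings are our own back-of-envelope arithmetic):

```python
# Back-of-envelope check of the Headphones.com audit figures cited above.
# The before/after daily spend is from the newsletter; everything else is derived.
before = 1200.0  # daily ad spend before the audit, in dollars
after = 40.0     # daily ad spend after implementing recommendations

wasted_share = 1 - after / before        # share of spend with no effect on performance
annual_savings = (before - after) * 365  # dollars saved per year at those daily rates

print(f"wasted share of spend: {wasted_share:.1%}")  # → 96.7%
print(f"annual savings: ${annual_savings:,.0f}")     # → $423,400
```

In other words, at those daily rates a small business was on track to spend over $400,000 a year on placements that did nothing for it.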

At every step of the way, the advertising supply chain is protecting its own interests while throwing advertisers under the bus. But advertisers care about their brands, they care about their customers, they care about their communities.

If and when advertisers get back control of their ads, I’m confident they’ll do the right thing.

Thank you for your time.

Leave a comment

Thanks for reading!

Nandini & Claire


Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin

Share BRANDED

Steve Bannon is getting by with a little help from the world’s biggest ad agencies

A working theory of how major brands ended up in bed with War Room: Pandemic

Here’s what’s new with us: 


How do your ads end up on Steve Bannon’s TV show? 

Hi folks, Nandini here. Last week, I was checking in on Steve Bannon’s latest media venture War Room: Pandemic when an odd commercial break caught my attention: four high-quality ads running on an identical loop every 10 minutes or so. It wasn’t programmatic — these ads looked like they had been placed directly.

I have been consumed by the mystery ever since. Who greenlighted these placements? Do these brands know they are sponsoring a show whose host called for the beheading of Dr. Fauci and later urged his followers to join the insurrection at the Capitol?

In the following days, trusted sources reached out to me to confirm that multiple brands were not aware of these ad placements and were working to investigate how they happened. Three of the advertisers - GoDaddy, GoodRx and Norton Lifelock - also vanished from the lineup.

We couldn’t resist. We started our own investigation. It didn’t take us long to find a direct line between Real America’s Voice (AVN), the media outlet that hosts Steve Bannon’s show, and two of the more recognizable names in advertising: Havas Media and Horizon Media.

We also learned that other consumer brands like Casper, Peloton, Purple, Stanley Steemer, UnitedHealthcare, Vistaprint and Chewy, may have been unwittingly financially supporting a major disinformation operation hidden within a legitimate-looking media network.

How? We reached out to all the agencies and brands mentioned in this piece hoping for some answers. Of the brands, only GoDaddy responded. 

So in today’s issue of BRANDED, we’re sharing our working theory of how unsuspecting advertisers ended up funding a Steve Bannon production — yet again.

Brands (usually) don’t place ads themselves

Here’s what we already know: major brands don’t generally place ads themselves. They hire agencies to do it for them. Here’s how it typically works: 

Brands → Agencies

Brands (or advertisers) lean on agencies to help them reach their desired target audiences and demographics. For most brands, this is the most efficient way to place ads. It’s not unusual for brand teams to be hands-off during this process and sign off on an Excel spreadsheet presented to them by their agency. 

Agencies → Media Networks

Agencies (or “media buyers”) provide the expertise to facilitate media buys on behalf of clients. They develop relationships across a range of media networks, make recommendations and develop media plans for clients.

Media Networks → Audiences

Media networks (or “media owners”) develop the audiences. They provide commercial breaks on their outlets, and package & pitch their audiences to agencies in order to fill those ad spots. 

So given what we know, we’ll work off the assumption that none of these brands directly placed ads on this show. What’s more likely is that they were placed by their agencies who weren’t quite keeping their eyes on the ball.

So here’s what we think happened

This is no Cambridge Analytica, so this isn’t going to be hard to keep up with. Here’s the rundown.

  1. War Room: Pandemic runs on Real America’s Voice (AVN)

Bannon needed a new hub for his bullshit after leaving Breitbart and being kicked off his SiriusXM show. He found his home base with Real America’s Voice.

  2. AVN was founded by a media network called Performance One Media LLC

AVN’s website tells us that the outlet “is a wholly owned subsidiary of Performance One Media LLC.” That is, the *media network* and the *media outlet* are owned by the same person.

And the pitch for the outlet is really something:

“Real America’s Voice is a media solutions firm that enables Content Providers, Agencies and Advertisers to leverage our 130 years of combined media expertise to deliver the country’s first audience-driven news platform!

Our Creative Services, Video Production, Content Delivery, Media Buying and Broadcast Studio teams have been delivering impactful messaging to multi-screen, multi-cultural and multi-platform audiences for over 15 years.”

Sure, they’re missing a few key details — for example, it’s a major disinformation hub employing the likes of Steve Gruber and Raheem Kassam — but who’s counting?

  3. Performance One Media’s owner is a career criminal & fraudster

It gets sketchier. The Daily Beast reported last year that Performance One Media is owned and operated by Colorado-based Robert Sigg, who has a track record of white-collar crime dating back to the ’80s.

“In 2006, he was convicted for his role in a $19 million mortgage-fraud scheme. At the time of his arrest, the FBI announced that Sigg and his fellow defendants were charged because of their ‘alleged roles in a scheme to obtain loans employing stolen identities, and then utilizing these loan proceeds to purchase substandard houses.’” 

Now, we’re not trying to say that people with a criminal past are not capable of change, but we would perhaps proceed with a little caution when it comes to a guy who perpetrated a scheme as odious and low-brow as this one.  

  4. Performance One Media seems to be running a very intentional strategy

Performance One Media boasts a total of four outlets, but one of these things is not like the others. If we didn’t know any better, we would say that someone snuck a major disinformation outlet in there between “the weather” and “fishing.”

  1. Weather Nation — “24-7 Weather-News Delivering National \ Local \ Regional Severe Weather Coverage”

  2. America’s Voice — “A Powerful New Voice In American Politics” 

  3. Pursuit — “The Most Widely Distributed Fishing and Hunting television network in America”

  4. ICTV — “A television platform for content creators, infomercial marketers and television shoppers”

So…is this a scam or what?

Is it a scam if it takes about ten minutes of googling to put all the pieces together? To us, it looks like a couple of grifters managed to con a system that is ripe for a con. How hard do you think they laughed after they wrote this up?

If it’s this easy to get scammed, then we have only ourselves to blame. 

In the meantime, the mystery deepens: Performance One Media suddenly scrubbed its website of all its partner and client logos yesterday (Tuesday) afternoon. This was one day after we started sending out emails.

This leaves us with more questions:

  • Were Havas and Horizon ever actually working with Performance One Media?

  • If so, how did they vet this network?

  • How long were they serving ads on this network before we pointed it out? 

Share

Brand safety is bigger than adjacency — it’s about who’s getting your money

All this points us to the possibility that the way we currently think about brand safety has outlived its usefulness: it’s not just about what content your ads show up next to. It’s about assessing the real-world impact of your brand’s financial ties — something that cannot be automated.

One of Bannon’s advertisers, GoDaddy, has pulled its ads. They told us: 

“We don’t advertise on the site. It was an unintentional oversight related to automated advertising and the ad was immediately pulled when discovered. We always aim to advertise on sites that are aligned with GoDaddy’s mission and values and we apologize if this upset any of our customers.”

But those brands that did not appear directly on Bannon’s show still have a big question to answer: they now have to decide whether they want to have a relationship with a business that feeds into — and could possibly even be a strategic ploy to fund — a disinformation network.

We’re sure there’s more

There is more to this story — and there are more national brands supporting Steve Bannon’s show. Next, we will be updating you on AVN’s advertiser line-up on Pluto TV, a ViacomCBS company.

But for now, these are the agencies that — according to Performance One Media’s now-scrubbed website — have a relationship with Performance One Media. If your brand is working with any of the following, reach out to ask them whether they are running your ads on this network. And then tell us about it.  

We have pasted their responses (if any) beside their name:

As always, thanks for reading,

Nandini and Claire

P.S. We’ve shared a screenshot of Performance One Media’s website before they scrubbed it Tuesday afternoon.

Note 6:30p EST Wednesday March 31: Havas Media got back to us after publication. Thank you to this team!

Havas Media is not partner of Performance One Media and has not purchased media on America’s Voice. A nominal amount of budget has run on Performance One Media owned Weather Nation and Havas is working with distribution partners to block all affiliated outlets.

We are fully committed as a business to ensuring safe, appropriate and ethical media investment for our clients and will further address the link between Performance One Media and America’s Voice – we have committed to the Conscious Advertising Network’s manifestos to ensure advertising does not fund hate speech or misinformation.


Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin

Share


Performance One Media’s website, p1.media, on Monday March 29, 2021:

Here’s what you should do about your Fox News ads

We can't tell you what to do, but we have some advice.

Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us:

— 

Hey folks, Nandini here. Claire is on vacation this week, so I have free rein on the newsletter. So I’m going to talk about Fox News. (Sorry for any grammar mistakes - she usually catches those.) 

Last week, I obtained a copy of a secret sales deck Fox News has been using to hang on to their increasingly skittish advertisers. It’s a lot of fun, not just because we’re not supposed to see it…but also because I’m featured in it!

At Sleeping Giants, I helped lead an effort that led to the cancellation of The O’Reilly Factor - the highest rated news show on cable at the time - and went on to cost his successor, Tucker Carlson, nearly every single one of his advertisers. If you connect the dots, you might conclude (as I have) that they are running Tucker Carlson Tonight at this point purely out of spite.

Our coalition work has had such a chilling effect at Fox News HQ that the New York Times reported last year that Fox News has a 3-person SWAT team dedicated to tracking and discrediting us.

There is one thing to note here: Fox News does not answer to advertisers because Fox News does not need advertiser dollars to survive. They enjoy fixed revenues no matter what they do because they have successfully negotiated cable fees at 2-3x the rate of networks like CNN and MSNBC. That means every household with cable in America pays them a $2/month “tax” whether or not they watch Fox News, which amounts to $1.8 billion in guaranteed revenue.

So no, they don’t actually need advertisers.

However, they do need legitimacy… and advertisers represent legitimacy. When advertisers flee a toxic TV show to protect their reputations, new advertisers don’t come in to replace them. And once you lose legitimacy, you are NewsMax. 

Fox News depends on the legitimacy of their advertisers to maintain their own position as a “conservative” news channel — rather than an outright disinformation outlet. It is with the support of global brands like Procter & Gamble, Pfizer, AstraZeneca and General Motors that they’re able to retain that legitimacy.

Which begs the question: Fox News needs advertisers, but do advertisers need Fox News?

Should we advertise on Fox News?: A questionnaire

March is a critical time for Fox News, as their annual upfronts - where they sell ~80% of their advertising spots for the year - begin. And nothing says “we’re a healthy and safe place to advertise” like a sales deck pleading with advertisers to ignore three (3) Twitter accounts. 

But if you’re still on the fence, here are the questions I’d ask.

1. Can we establish a difference between Fox News and Breitbart? 

For advertisers around the world, having Breitbart on the universal blocklist is a no-brainer. Dozens of ad exchanges have dropped the site and ad agencies have it blocked across their clients by default.

The reason is obvious and uncontested: Breitbart’s bigoted, misogynist and white supremacist rhetoric is globally understood to be unsafe and inappropriate for brands. So if that’s the standard we’ve set, here’s what’s happened at Fox News…

Over the last 12 months, Fox News hosts and their guests have suggested the COVID-19 death toll would be less than the 2018 flu season, claimed the virus is a fraud invented by China, claimed that Dr. Fauci is responsible for COVID-19, discouraged wearing masks, called BLM protestors “poison”, defended QAnon conspiracy theorists, fueled vaccine skepticism, claimed that Democrats are behind a “chilling, Orwellian” effort to silence opposition, accused voting systems company Smartmatic of election rigging, claimed that “antifa” was behind the Capitol Hill insurrection, claimed the “corrupt, stolen” election was financed by George Soros, and cast doubt on the election results nearly 800 times.

Last week alone Tucker Carlson: 

I’m squinting real hard and I can’t tell the difference. Can you?

2. How long will you be able to differentiate between Fox “News” and “Opinion”? 

If all of the above could happen in 12 months, think about what could happen over the next 12. We have learned in off-the-record conversations with marketing leaders and agency execs that advertisers feel uncomfortable leaving Fox News — and so they have struck a compromise. They’re moving their ads from the controversial evening “opinion” shows and pushing them up to the less scrutinized daytime slots.

The differentiation has worked in the past. But as Tucker Carlson and co.’s advertisers dry up, it won’t be long before the public turns its eye towards advertisers supporting Fox News as an entity. And speaking from experience, it does not take too long to turn an outlet into a toxic hotspot.

What will you do then?

3. How will advertising on Fox News affect your other brand investments?

The louder you are about your brand values, the harder the fall. For a multinational corporation like Procter & Gamble to lean this heavily on brand while also being one of Fox News’s biggest advertisers?

I just don’t recommend it. From the perspective of an activist, all I see here is fodder for a campaign that undermines Procter & Gamble’s own brand. A crisis of their own making, if you ask me.

Maybe it’s a risk you’re willing to take?

I know that we live in an unpredictable media landscape, but your advertising choices shouldn’t be a “risk” you’re willing to take. You should not be secretly hoping that your ads are seen enough for your product to sell but *not so much* that your logo ends up going viral on Twitter atop Tucker Carlson’s quizzical face.

Your brand association is an association, for better and for worse. It should be a decision you approach intentionally and strategically. It should be something you’re able to own up to and justify to the public if and when that association is called into question. For those of you building long-term and legacy brands, you should be gaming out scenarios at least ten steps down the road.

When it comes to the Fox News Dilemma, I’m not going to tell you what to do here. Your brand is yours to protect. All I can do is show you the difference between having an advertising strategy and living on a prayer.

Thanks for reading!

Nandini



This kid is not flying a plane and we're not in control of our ads

We just think we are.

A few days after the January 6th insurrection, shareholders at two major companies filed a resolution asking that Home Depot and Omnicom undertake audits to prove that their ad budgets haven’t been funding disinformation and extremism online. 

This, as far as we know, is the first time the link between programmatic advertising budgets and disinformation has been flagged at the shareholder level. The lowly and mundane ad audit is suddenly in the spotlight!

Investor scrutiny is probably not a welcome development for the $300 billion advertising industry — particularly the brand safety wing — which has for years coalesced around the claim that ad-funded disinformation is a problem they mostly have under control. 

This isn’t the first time adtech has been scrutinized, of course. For years, the adtech industry has been responding to complaints about questionable placements with the same answer: “Oh THAT? That was just a little oversight. Hang on a second, we’re going to give you more control over your ad placements. We’re going to help you get even more granular.”

More control over our ad placements? Who can argue with that? Well, we’re about to. 

Sure little man, you can fly the plane!

The adtech industry has good reason for wanting advertisers to feel like we’re in control: it takes the heat off them and puts advertisers to work instead.

Google was the first to catch on to the appeal of introducing new brand safety features that absolve them of further responsibility. When UK agencies discovered that their ads were funding extremists, Google was summoned to Parliament and asked to come back with a fix. Two months later, they rolled out their Big Idea: “Page-level enforcements for more granular policy actions.”

“To allow more precise enforcements, and provide you with feedback about policy issues as we identify them, we’re introducing page-level enforcements. A page-level enforcement affects individual pages where violations of the AdSense Program Policies are found. As a result, ad serving is restricted or disabled on those pages. Ads will continue to serve where no policy violations have been found, either at the page- or site-level.” (emphasis ours).

It was a solidly underhanded maneuver. On the one hand, page-level enforcement offered more granular control. On the other hand, no marketing team in the world has time or resources to seek out and block individual pages on every website on the internet.

But that hasn’t stopped adtech from running with the marketing ploy that more controls means you’re in control.

Granularity, or the concept of giving advertisers ever more filters and settings, has since become a core strategy in an industry that realizes they can continue to invite disinformation and extremism into their inventory (in the name of choice) while putting the onus on advertisers to sniff it out and block it themselves (in the name of control).

You have the power to choose exactly where you want your ads to go, they say. It’s just functionally impossible to use.

It sure looks like we’re not in charge

That kid is not actually flying a plane and you are not calling the shots on your ad placements. Ad tech has locked us into a system that makes us believe that marketers are in control. Sure, they let us push the buttons…

  • What topics or categories do we want to avoid?

  • What is our risk tolerance? 

  • Do we want to only be on “positive sentiment” content?

But we don’t know what the buttons do. A handful of product and engineering teams do — and they are making the real strategic decisions behind the scenes: They decide what content is deemed positive and negative. They decide what risky content looks like. They decide what a piece of content is about and apply this across billions of articles. And from what we’ve learned through their leaked data, it doesn’t look like it even works.

But let’s pretend for a moment that it did. Have you ever tried to rate an article the way that brand safety technology does? Try it now. Here are some articles:

  • A CNN article about a Black Lives Matter protest

  • A Daily Mail article about Elliot Page, who recently came out as trans

  • The Root’s obituary of Mary Wilson, a co-founding member of the musical group The Supremes

  • Al Jazeera’s coverage of multiple attacks in Afghanistan

  • One of the many Epoch Times articles about bunnies

How would you rate these articles for brand safety?

  1. What topic or category would you put each article in? (Use the IAB Tech Lab’s Content Taxonomy for reference.)

  2. How would you rate each article for “risk”? (Low, medium, or high)

  3. How would you rate each article for “sentiment?” (Negative, neutral, or positive?)

We couldn’t even apply this rubric by hand to one article without scratching our heads.

And if you were to send this exercise to your friends, we’re certain that each of you would come up with different answers, and none of you would be sure that you were right. As readers, we all interpret the same articles differently.

What’s the end game here? We’re not in control - we’ve given it all up to someone else.

We need control, not more controls

If adtech gets any more granular, it may disappear up its own butt. Now, we don’t want that. That leaves us with one option: start zooming out.

As marketers, it is our job to connect with consumers. Every day, marketers add or subtract from our brand equity ‘bank accounts’ by associating and disassociating with ideas, causes, people, and products.

When we advertise on respected publishers, we gain a little bit of their brand equity. When we advertise on disreputable publishers, we lend them a bit of ours. Ad placements are a powerful way to live out our brand values.

Our clients — mostly marketers and people who work in brand and reputation — get this right away. Here’s how they might break down the above list of articles with ease:

  • CNN is brand SAFE for most brands because it adheres to journalistic standards.

  • The Daily Mail is brand UNSAFE for many brands because it can be racist and misogynist and it doesn’t consistently adhere to journalistic standards.

  • The Root is consistently brand SAFE for most brands because it adheres to journalistic standards. 

  • Al Jazeera is brand SAFE for most brands because it adheres to journalistic standards.

  • The Epoch Times is consistently brand UNSAFE because it regularly publishes disinformation.

The ultimate decision depends on each team’s brand values. Do they need any more granular controls for this? No.
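To see how simple the domain-level approach really is, here is a purely illustrative sketch in Python. The ratings just restate the example publishers discussed above; a real team would maintain its own list against its own brand values:

```python
# Illustrative domain-level brand safety list, using the example
# publishers from this issue. Not a real or recommended blocklist.
BRAND_SAFETY = {
    "cnn.com": "SAFE",               # adheres to journalistic standards
    "dailymail.co.uk": "UNSAFE",     # inconsistent standards
    "theroot.com": "SAFE",           # adheres to journalistic standards
    "aljazeera.com": "SAFE",         # adheres to journalistic standards
    "theepochtimes.com": "UNSAFE",   # regularly publishes disinformation
}


def allow_placement(domain: str) -> bool:
    """Allow an ad placement only on domains a human has rated SAFE.

    Unknown domains default to blocked until someone reviews them --
    the opposite of page-level "granularity", which defaults to allow.
    """
    return BRAND_SAFETY.get(domain, "UNSAFE") == "SAFE"


allow_placement("cnn.com")            # returns True
allow_placement("unreviewed-site.example")  # returns False: not yet reviewed
```

The point of the sketch is the default: a domain-level list fails closed, so new and unreviewed sites get no ad spend until a person decides, which is exactly the kind of control no granular filter panel offers.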

Stay safe folks. Check your ads. 

Thanks for reading!

Nandini & Claire


