Oh no, they let us testify at the EU Parliament!
And you better believe we named names and called out Google.
Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).
Here’s what’s new with us:
Nandini was quoted in The Markup’s groundbreaking investigation on secret YouTube blocklists [READ: Google Has a Secret Blocklist that Hides YouTube Hate Videos from Advertisers—But It’s Full of Holes]
Claire spoke to Ad Age about how brands should deal with rogue influencers [What Brands Can Learn From YouTube Star David Dobrik’s Fall From Grace]
Nandini wrote an op-ed for NBC News about why deplatforming extremists works [READ: ‘White Lives Matter’ protests are failing across America. Here’s one big reason why.]
🔥 We also learned that we’ve been nominated for Innovator of the Year in AdWeek’s Reader Choice Awards. If you have a moment, please vote for us! We appreciate your support.🔥
We’re bringing you a special edition of BRANDED this week.
Yesterday, Nandini testified before the Special Committee on Foreign Interference in all Democratic Processes in the European Union, including Disinformation, alongside Clare Melford, Co-Founder of the Global Disinformation Index, and Ghita Harris-Newton, Director of Government Affairs & Public Policy at Google.
That’s a mouthful! But what it means is that she got to virtually take the floor at the European Parliament in Brussels and loudly advocate for — who else? — brands and advertisers.
Each expert had 10 minutes to explain ad tech before they were pummeled with two rounds of questions. You can watch the recording here (Nandini starts at 16:24).
Or read the full statement here:
My name is Nandini Jammi. I run Check My Ads, a consultancy that helps companies keep their ads off hate and disinformation.
Before the 2016 U.S. elections, I was working as a marketer for a tech startup. After the 2016 U.S. elections, I took on a second, voluntary role: I began co-running Sleeping Giants, an anonymous social media campaign centered on making one website, Breitbart.com, unprofitable.
By November 2016, Breitbart had become one of the most influential propaganda outlets in the U.S., putting out false and intentionally misleading “news” stories designed to misinform and inflame the public.
These tactics worked. They worked so well that Breitbart is widely credited with helping Trump win the election through the use of false narratives, the chaos and confusion they sowed in the traditional media ecosystem and the environment of fear and hatred they cultivated across our country.
Their disinformation operation was so successful that they announced plans to expand into Germany and France ahead of their national elections.
At the time, they looked unstoppable.
But what I saw when I visited this website was that it was full of ads — and that, in fact, advertising revenue was how they were sustaining their growth. I also knew that, given how ads are placed at scale across the internet today, it was very unlikely the advertisers were even aware their ads were running on this website — let alone funding its growth and success.
Under the pseudonym Sleeping Giants, my campaign co-founder and I began taking screenshots of ads on Breitbart and alerting advertisers on Twitter — and found that we were correct. It didn’t matter who you were — small businesses, multinational corporations, non-profits, government agencies — even the largest and most sophisticated marketing organizations in the world were unaware that their ads were being automatically placed alongside bigoted, hateful and racist content.
During the course of our campaign, over 4,000 advertisers and over 30 ad exchanges blocked Breitbart from their media buys. Breitbart was projected to make $8 million in revenue in 2017. Instead, they lost 90% of their projected revenue within just three months as a direct result of our campaign.
Just as importantly, they abandoned their plans to open outlets in Germany and France. At Sleeping Giants, we successfully curbed the growth trajectory of one of the largest vectors of disinformation in our country.
Our campaign also established a precedent: that advertisers overwhelmingly do not want their budgets to fund hate, disinformation or election interference.
But despite universal consensus across the advertising industry, the flow of ad dollars towards disinformation continues every single day. In fact, every advertiser running an ad campaign on the open web today - whether it’s your neighborhood yoga studio or a multinational corporation - is funding disinformation by default.
Think of it as a tax that advertisers are being forced to pay because the digital advertising supply chain refuses to do its job. No reputable advertiser wants anything to do with disinformation. But their ad budgets have been hijacked by a handful of adtech companies who are forcing brands to take the hit for their negligence.
Stopping the flow of ad dollars towards disinformation has now become both a personal and professional mission for me.
As an activist, I view disinformation as a threat to our collective public safety and our democratic process. And as a brand safety consultant to some of the biggest brands in the world, I view this as a solvable business problem.
How programmatic advertising works
Let me take a moment to explain programmatic advertising. It has a lot of moving parts, but I’m going to keep it simple today and focus on what matters.
Advertising is really just about connecting two players: Publishers & Advertisers. Publishers have the audience. Advertisers want to pay to get in front of that audience.
Now, before the internet, advertisers would make media buys — direct deals with newspapers, magazines, billboards, wherever they wanted to be. It was manual and it was time-consuming, but they did know where their ads were being placed.
What ad exchanges brought to the table was the promise of efficiency.
They said: “You know what? Let us make this efficient for you. We have a pool of publishers in our inventory and we can place those ads for you. You can reach anyone, anywhere in the world at a fraction of the cost. You just tell us your budget and we’ll take care of it.” This was revolutionary.
They made these tools available to everybody. And by they, I mean mostly Google. Your average small business could now place their ads not just in hundreds of places but across hundreds of thousands of webpages all at once.
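That hands-off placement model can be sketched in a few lines of code. This is a purely illustrative toy model — not any real exchange’s API — and every domain name and price below is invented:

```python
# Toy model of programmatic ad buying: the advertiser hands over a budget,
# and the exchange decides placement. All names and numbers are invented.

INVENTORY = [  # (domain, cost per 1,000 impressions in dollars)
    ("local-news.example", 2.50),
    ("recipe-blog.example", 1.75),
    ("disinfo-site.example", 0.90),  # cheap, never-vetted inventory
]

def place_campaign(budget):
    """Split the budget evenly across all inventory, the way an exchange
    optimizing for reach might. Returns impressions bought per domain."""
    share = budget / len(INVENTORY)
    return {domain: int(share / cpm * 1000) for domain, cpm in INVENTORY}

# The advertiser typically sees only aggregate reach, not the domain list.
placements = place_campaign(300.0)
total_impressions = sum(placements.values())
```

Note what the toy model shows: because the cheapest inventory buys the most impressions, the unvetted site quietly ends up with the largest share of the budget — and the advertiser never chose it.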
You can think of it as going shopping in the supermarket. You walk into a supermarket assuming they adhere to basic food safety standards. You assume their employees regularly check on inventory, you assume you won’t accidentally go home with a bag of moldy bread. It’s the supermarket’s job to provide you with safe, fresh food, right?
Now what if you went to a supermarket that doesn’t check their inventory? And also stocks the shelves with expired and recalled foods because they think it’s a “gray area” and they don’t want to stop you from buying it? How is the average shopper supposed to know what’s good and what shouldn’t be eaten?
Google and other ad exchanges have not been vetting their inventory adequately. Instead, they’ve pushed the responsibility to manage their growing inventory onto their customers.
As Google accepts more and more disinformation into their inventory, they want advertisers to find and block it themselves.
“Here, have more filters, more settings, more page-by-page granular control.”
This has become a core strategy in an industry that realizes it can continue to invite disinformation and extremism into its inventory (in the name of choice) while putting the onus on advertisers to sniff it out.
I can speak from experience: No marketing team in the world has the time, resources or expertise to seek out and block individual disinformation websites on the internet. Google does.
And today, Google is the biggest sponsor of disinformation in the world. And they’re doing it against the consent of their own customers.
Brand safety vendors
So how do you get control over where your ads are placed? That’s where brand safety vendors have swooped in. Today, ad verification companies like Integral Ad Science, DoubleVerify and Oracle wield an enormous amount of power over how ad budgets are distributed across the internet.
We don’t see these decisions take place, but their brand safety algorithms scan every page and every piece of content we look at to decide whether it’s “safe” before serving an ad. These millions of little verdicts add up. They determine who on the web gets monetized — and who gets blocked.
There’s just one problem: black box algorithms. These systems distribute billions of ad dollars across the web, yet we don’t know how they work or whether they work accurately.
In fact, it doesn’t appear that they can tell the difference between “the promotion of disinformation” and “legitimate news coverage.”
These companies claim that they can keep their clients’ ads away from disinformation, but we have evidence that suggests these algorithms are fundamentally broken. With research provided to us by adtech researcher Dr. Krzysztof Franaszek, we recently reported that:
Oracle marked nearly one-third (30.3%) of New York Times articles as unsafe, including 98% of articles by Marilyn Stasio, who reviews crime fiction for the New York Times Book Review.
Oracle marked over one-fifth (21.4%) of The Economist’s articles as unsafe, including an article about molecular cells that was likely classified under “Death, Injury, or Military Conflict” because it happened to mention “programmed cell death.”
An algorithm that can’t tell the difference between actual violence and news coverage? That’s not a very smart algorithm, is it?
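To see how verdicts like these come about, consider a deliberately naive keyword matcher. The real vendors’ algorithms are proprietary black boxes, so this is not how any of them actually works — it is only a sketch of context-blind classification, with an invented blocklist:

```python
# Naive keyword-based "brand safety" check: flag a page as unsafe if it
# contains any word from a blocklist, with no regard for context.
# Purely illustrative; real vendors' algorithms are proprietary.

UNSAFE_KEYWORDS = {"death", "killed", "war", "violence"}

def is_brand_safe(text):
    words = {w.strip(".,").lower() for w in text.split()}
    return words.isdisjoint(UNSAFE_KEYWORDS)

# A biology article and a crime-fiction review both get blocked, because
# the matcher cannot tell science or news coverage from actual harm.
biology = "The study examines programmed cell death in molecular biology."
review = "In this crime novel, the detective investigates who killed the victim."
promo = "Buy our new headphones today."
```

Run against these three strings, the biology article and the book review are both flagged unsafe while only the ad copy passes — exactly the failure mode the Oracle numbers above suggest.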
Now, if these are the kinds of numbers we’re seeing for English language outlets, imagine how dismal the numbers must be in German, French, Italian, Spanish and so on.
Additionally, none of these companies have a public disinformation policy, which is significant because according to Dr. Franaszek’s research, they are funneling their clients’ ad budgets towards disinformation at higher rates than the news:
One America News Network (OANN.com), a critical vector of election disinformation, was rated 88.5% safe.
Hannity.com, whose figurehead has denied and downplayed the pandemic since this spring, was rated 60% safe.
TownHall.com, whose coverage of what they call the “Wuhan Virus” has been racist at best, was rated 69.5% safe.
This faulty, broken technology is doing immeasurable damage to our free press. According to research published in The Guardian, brand safety technology cost news publishers in the UK, US, Japan, and Australia around $3.2 billion in digital revenue.
Advertisers are not in control
We often think of disinformation as a societal or political problem — and it is. But as I said before, I have come to see disinformation as a business problem. Advertisers have no idea how their money is being spent and enormous amounts of money are completely unaccounted for.
I’ll leave you with the story of one of our clients: We performed an ad audit for Headphones.com, a small business in Canada that sells high-end headphones. We identified disinformation in their ad spend, which prompted them to check their ads. After implementing our recommendations, their ad spend went down from $1,200/day to $40/day with no change in performance. That means over 95% of their ad spend had nothing to do with their success.
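Working through the audit numbers makes the point concrete — with spend cut from $1,200/day to $40/day and performance unchanged, the wasted share of the original budget is:

```python
# Wasted share of Headphones.com's daily ad spend, per the audit figures.
before, after = 1200, 40  # dollars per day
wasted_share = (before - after) / before
# With performance unchanged, roughly 96.7% of the original daily spend
# was buying nothing of value.
```

That is, more than 19 out of every 20 dollars were doing no measurable work for the business.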
At every step of the way, the advertising supply chain is protecting its own interests while throwing advertisers under the bus. But advertisers care about their brands, they care about their customers, they care about their communities.
If and when advertisers get back control of their ads, I’m confident they’ll do the right thing.
Thank you for your time.
Thanks for reading!
Nandini & Claire