Hitting the far right where it hurts

The digital advertising industry is worth billions, but many companies don’t really know where their advertising dollars end up. Programmatic ad exchanges and other third-party platforms let companies buy ads without having to deal with each seller individually – and in doing so, have opened the door to fake news, misinformation and hyper-partisan sites.

On this week’s CANADALAND show, journalist Cherise Seucharan explores the financial incentives behind bad news and talks to activists working to stop mainstream advertising money from funding the far right:

Some of the biggest companies in the world directly contribute to the flow of disinformation on the Internet. And many are not even aware that this is happening.

For years, campaigners and advertising experts have been sounding the alarm about how online advertising helps fund hate and misinformation, due to the way it is sold through third-party platforms.

In March, the Global Disinformation Index began reporting that ads from mainstream organizations were appearing right next to stories containing false claims about Russia’s invasion of Ukraine. Even ads from a global non-profit seeking humanitarian aid for Ukraine ran alongside articles echoing Russian propaganda about “biological weapons” and “neo-Nazis”.

“We are in a misinformation crisis, and there are some publishers on the internet who are making huge sums of money from the advertising industry,” says Claire Atkin, Canadian co-founder of Check My Ads, a non-profit watchdog and consultancy that helps advertisers understand where their advertising money is going.

Atkin and her US-based co-founder, Nandini Jammi, have been at the forefront of the movement to defund such sites by helping advertisers see where their ads are placed.

In 2017, Jammi was part of an activist group called Sleeping Giants, which took on the monetization of the far-right American website Breitbart. Sleeping Giants monitored the advertisements appearing on Breitbart, took screenshots of them, and tweeted them at the companies being advertised. As a result, many pulled their ads, and Breitbart lost a lot of money.

So how did advertisers end up paying to have their ads appear in places they didn’t want them to be?

The answer lies in so-called programmatic ad exchanges, which began to form in the very early days of the internet, when blogging platforms started making it easy for anyone to build a website.

“It got to a point where big advertisers couldn’t go to 10,000 small websites and say, ‘Can we buy you ads’, right? It just wasn’t practical,” says Augustine Fou, a New York-based advertising consultant.

Third-party platforms began to appear where businesses and web publishers could buy and sell ads at scale, even putting ad space up for auction. But this also allowed bad actors to take advantage of the system.
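To make the mechanics concrete, here is a minimal sketch (not from the article; the names and numbers are illustrative) of how an exchange might auction off a single ad impression. The advertiser submits a bid, but the exchange decides which of thousands of sites the ad actually lands on:

```python
# Hypothetical sketch of a programmatic ad auction (all names are illustrative).
from dataclasses import dataclass
from typing import List, Optional, Set


@dataclass
class Bid:
    advertiser: str
    price_cpm: float  # price per thousand impressions


def run_auction(site_domain: str, bids: List[Bid], blocklist: Set[str]) -> Optional[Bid]:
    """Pick the highest bid for one impression on site_domain.

    The exchange, not the advertiser, decides which sites are eligible;
    advertisers typically only see aggregate reports afterwards.
    """
    if site_domain in blocklist:
        return None  # site screened out by the exchange
    return max(bids, key=lambda b: b.price_cpm, default=None)


# A big brand's ads can end up on any of thousands of sites it never reviewed.
winner = run_auction(
    "obscure-news-site.example",
    [Bid("BigBrand", 2.40), Bid("OtherBrand", 1.10)],
    blocklist={"known-fraud-site.example"},
)
print(winner)  # Bid(advertiser='BigBrand', price_cpm=2.4)
```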

Most exchanges, like Google, have policies prohibiting misinformation, the promotion of hatred, or incitement to harassment. But Fou says bad actors can easily slip through an exchange’s screening because of the sheer number of websites it includes. Often these “fraudsters,” as Fou calls them, are sites that post fake news, promote hatred against minority groups, or publish plagiarized information and articles for the sole purpose of collecting ad revenue.

“If you’re mixed in with hundreds of thousands of other sites, the big advertiser just doesn’t know about it,” Fou says. “So their money ends up unknowingly flowing to both fraudulent websites and disinformation websites.”

He says it’s the exchanges’ fault for not meeting their own standards.

“In this kind of escalating arms race, the good guys will always be at a disadvantage, because the bad guys can innovate faster, they can move faster. They don’t play by the rules,” he says.

In 2019, BuzzFeed News reported on two fake news sites called “The Albany Daily News” and “City of Edmonton News,” disguised to look like local outlets. They featured content copied from around the web, including celebrity content unrelated to the cities they were supposed to serve. The City of Edmonton News had drawn more page views than the websites of real outlets like the Edmonton Journal and Edmonton Sun.

In 2020, CNBC reporter Megan Graham decided to test how easy it would be to run ads on a low-quality website, creating a fake site and filling it with copies of her own CNBC articles. In the end, three advertising platforms approved her new site for monetization, and ads for legitimate businesses appeared on it. If she had kept the site running, she could have started collecting ad revenue.

The major ad platforms say they are working to address these issues. Google Canada tells us in an email that they “have strict ad policies and publisher policies that govern the types of ads and advertisers we allow on our platform.”

These policies prohibit content that makes unreliable claims, such as content that may undermine trust in a democratic process, claims harmful to health, and content that denies the existence of climate change.

To enforce the policies, Google says it uses “a mixture of automated systems and human review” and can disable ads on specific pages or remove ads from a site entirely. They say that in 2020, they took action against more than 1.3 billion publisher pages and removed ads from several prominent right-wing sites, including The Gateway Pundit, Bongino Report and MyMilitia.

Many ad exchanges have started relying on AI and other technologies to help determine whether a site is safe. One such technique is keyword blocking, which blocks any site containing words from a list deemed unsafe. Another is sentiment analysis, which scans a page for its tone and the feelings it might evoke in a reader.

But these approaches have had unintended effects on legitimate news sites, whose articles often contain “negative” words and sentiments.
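As a rough illustration of why that happens (a toy sketch with made-up word lists and thresholds, not any vendor’s actual system), keyword blocking and naive sentiment scoring can flag a straightforward pandemic news story just as readily as a junk site:

```python
# Toy illustration of keyword blocking and crude "sentiment" filtering.
# The word lists and threshold below are invented for this example.
BLOCKED_KEYWORDS = {"coronavirus", "death", "war"}
NEGATIVE_WORDS = {"outbreak", "crisis", "dies", "attack"}


def is_brand_safe(article_text: str) -> bool:
    words = {w.strip(".,!?\"'").lower() for w in article_text.split()}
    if words & BLOCKED_KEYWORDS:           # keyword blocking
        return False
    negativity = len(words & NEGATIVE_WORDS) / max(len(words), 1)
    return negativity < 0.05               # crude sentiment-style threshold


# A legitimate pandemic news story gets demonetized just like a junk site would.
print(is_brand_safe("Local hospital responds to coronavirus outbreak"))    # False
print(is_brand_safe("Ten celebrity gossip stories you missed this week"))  # True
```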

In April 2020, The Guardian reported that UK newspapers stood to lose over £50 million because advertisers were blocking words related to the coronavirus pandemic. A spokesperson for Newsworks, an organization representing the UK newspaper industry, told the paper that the keyword blocklists “threaten our ability to fund quality journalism”. (A more recent Newsworks campaign is pushing the ad industry to stop blocking words related to climate change, saying the practice could choke off funding for much-needed reporting on the subject.)

In April 2020, Postmedia, Canada’s largest newspaper publisher, held a virtual town hall for its employees, during which the company described how Covid-related ad blocking was affecting it. CANADALAND obtained a recording from a person who was present.

“Even though we have more users, more page views than in the past, many online advertisers don’t want their content, their ads, to be associated with content that deals with illness and death,” Lucinda Chodan, then Postmedia’s senior vice-president of editorial and editor-in-chief of the Montreal Gazette, told those present.

She said as a result, record web traffic coincided with a “disastrous drop in revenue”.

Postmedia and several other major Canadian news outlets declined requests for comment on the matter.

“You have to be human to understand when something is being posted in bad faith,” says Atkin, who thinks these technologies just won’t work.

Google says it is working to solve the problem of monetizing disinformation about Ukraine. At the end of February, the company suspended the monetization of Russian state-funded media. A month later, they suspended monetization of “content that exploits, dismisses, or condones the war.”

However, as Atkin discovered, many Google ads were still appearing on Russian disinformation sites, the kind that, she said, “lie to the Russian people and lie to the whole world about what’s going on. And American and Canadian companies are funding this, and they’re funding it against their interests, against their knowledge, and they’re doing it because Google basically forced them to do it.”

Danny Rogers, executive director of the Global Disinformation Index, says they’ve been trying to draw attention to the issue for years, noting an occasion when ads for the US Department of Veterans Affairs appeared on a Kremlin-funded site.

“In our minds, the responsibility lies squarely with the platforms, given their outsized market power, to do whatever they can,” he says.

Lauren Skelly, spokeswoman for Google Canada, said in an email that the company is closely monitoring the situation in Ukraine and Russia. She says the specific sites we inquired about were part of a larger group that was being reviewed and that they will take action if they violate Google’s policies.

Rogers says, “When a company that builds quantum computers and self-driving cars and launches satellites says they’re doing their best, and it’s still not happening, that doesn’t seem genuine to me.”

More recently, Jammi and Atkin alerted advertisers to the running of their ads on The Post Millennial (TPM), a Montreal-based site that describes itself as a news and investigative journalism team but has come under fire for posting false statements about the Covid pandemic and negative portrayals of immigrants and the LGBTQ community.

Chad Loder, a computer software developer who worked with Jammi to better understand how programmatic ads work, counted more than 20 platforms and ad exchanges from which TPM has been delisted.

“We know our work has an impact because they do everything they can to slander us,” says Jammi.

Beginning in the fall of 2021, a series of TPM articles called Jammi a “deranged activist” and claimed that she attacked a Jewish journalist and that Atkin was trying to buy material that was inappropriate for minors.

Libby Emmons, editor-in-chief of TPM, declined an interview with CANADALAND but said, “We stand by our reporting and will not be silenced by Nandini Jammi’s massive intimidation of our journalists.”

According to Skelly, Google has taken action against The Post Millennial in the past over specific pages that violated their policies.

When CANADALAND checked the site last week to see what kinds of ads were still running there, we came across two: one for MyPillow, a company whose CEO has been among the leading proponents of false claims about the 2020 US election, and another, served by Google, encouraging users to subscribe to The Globe and Mail.
