[Image: Facebook and Google pressured EU experts. Credit: Alexia Barakou]

EU experts pressured to soften fake news regulations, say insiders

Google and Facebook pressured and “arm-wrestled” a group of experts to soften European guidelines on online disinformation and fake news, according to new testimony from insiders released to journalists at Investigate Europe today.

The EU’s expert group met last year in response to the wildfire spread of fake news and disinformation seen in the Brexit referendum and in the US election of President Donald Trump. Its task was to help prevent the spread of disinformation, particularly at pivotal moments such as this week’s hotly contested European parliamentary elections.

However, some of these experts say that representatives of Facebook
and Google undermined the work of the group, which was convened by the
European Commission and comprised leading European researchers, media
entrepreneurs and activists.

In particular, the platforms opposed proposals that would have forced
them to be more transparent about their business models. And a number
of insiders have raised concerns about how the tech platforms’ funding
relationships with experts on the panel may have helped to water down
the recommendations.

In the wake of numerous reports of massive disinformation campaigns
targeting the European elections, many linked to Russia and to
far-right groups, EU politicians and transparency campaigners have
called these fresh allegations about the tech platforms’ behaviour a
“scandal”.

One of those politicians, German MEP Ska Keller, has demanded that Facebook, in particular, “puts its cards on the table”.

“Disinformation threatens democracy,” she said. “The commission
should not have given in [to the platforms] but instead should have
insisted on further discussing competition tools to limit the power of
the platforms and the disinformation spread through them.”

“We were blackmailed”

It had all sounded too good to be true: a set of rules that would
guide the EU to take effective action against online disinformation and
ensure the giant tech platforms that spread it were held accountable.
The research by Investigate Europe has found that ‘too good to be true’
is exactly what it was.

The EU Code of Practice on Disinformation
was euphorically announced last September as a world first: the first
time that the platforms had agreed to self-regulate following common
standards. The code of practice was drawn up in response to the expert group’s report,
which had been published in March 2018. Announcing it, the EU digital
affairs commissioner, Mariya Gabriel, said: “I am very happy that the
report reflects all our principles: transparency, diversity, credibility
and inclusion.”

That wasn’t how some of the experts themselves saw it. In March 2018,
during the group’s third meeting, “there was heavy arm-wrestling in the
corridors from the platforms to conditionalise the other experts”, says
a member of the group, one of two who spoke with Investigate Europe
under the condition of anonymity, referring to a confidentiality clause
signed by all group members.

Another member, Monique Goyens – director-general of BEUC, also
known as The European Consumer Organisation – is blunter. “We were
blackmailed,” she says.

The group of 39
met for the first time in January 2018. Among them were representatives
of major media houses such as Bertelsmann and Sky, important
non-governmental organisations such as Reporters Without Borders and
Wikimedia, and scientists, as well as Facebook and Google employees.

Matters came to a head when Goyens and other members of the group
suggested looking into whether European policy on commercial competition
could have a role in limiting fake news. Such a move would have allowed
the EU competition commissioner to examine the platforms’ business
models to see whether they helped misinformation to spread. “We wanted
to know whether the platforms were abusing their market power,” says
Goyens.

She recalls that in a subsequent break Facebook’s chief lobbyist,
Richard Allan – another member of the expert group – said to her: “We
are happy to make our contribution, but if you go in that direction, we
will be controversial.”

Allan spelled out more clearly what this meant to another group
member: “He threatened that if we did not stop talking about competition
tools, Facebook would stop its support for journalistic and academic
projects.”

Facebook declined to comment on these incidents. In the end, the proposed vote on competition policy tools never took place.

Update, 22 May 2019: In a statement issued to BuzzFeed News,
Facebook says: “This is a deliberate misrepresentation of a technical
discussion about the best way to bring a cross-industry group together
to address the issues around false news. We believe real progress has
been made through the code of conduct process and we are looking forward
to working with the European institutions to implement it.”

Conflicts of interest?

The platforms had influence over the group’s decisions in other ways,
too. “It was not made transparent [to some members of the group] that
some members had a conflict of interest. Because they worked for
organisations that received money from the platforms,” says Goyens.

“The Google people did not have to fight too hard for their
position,” says another group member, speaking on condition of
anonymity. “It quickly became clear that they had some allies at the
table.”

At least 10 organisations with representatives in the expert group
received money from Google. One of them is the Reuters Institute for the
Study of Journalism, at the University of Oxford. By 2020, the
institute will have received almost €10 million from Google to pay for its
annual Digital News Report. Google is one of 14 funders of this major
project, which began in 2015. The institute declared this funding
relationship to the European Commission in its application to be part of
the expert group.

A number of other organisations represented on the group have also
received funding from the Google Digital News Initiative, including the
Poynter Institute and First Draft News.

When asked about the selection process and potential conflicts of
interest within the group, Commissioner Gabriel’s cabinet insisted they
had carefully examined applications from organisations that said they
had received funding from Google. This was done in order “to exclude
situations where applicants would have an interest that could compromise
or be reasonably perceived to compromise their capacity to act
independently”, a cabinet member states.

Divina Frau-Meigs, who teaches media sociology at the Sorbonne
Nouvelle University in Paris and was also a member of the expert group,
stresses the integrity of the academics in the group, who she believes
were working honestly and diligently.

However, she still found the financial relationship between the
platforms and some other group members troublesome: “It’s a very subtle
dependency relationship: it is difficult to focus on the platform that
supports you, even without strings attached. Having a financial
relationship lays you suspect to some kind of bias of self-censorship.”

“It’s been known for some time that Google, Facebook and other tech
companies give money to academics and journalists,” said Ska Keller, the
German MEP. “There is a problem because they can use the threat of
stopping this funding if these academics or journalists criticise them
in any reporting they do.”

According to Frau-Meigs, independent funding for academics as well as
journalists is extremely important. “Google and Facebook are paying
these partnerships from their direct marketing arm, not through more
neutral foundations,” she says.

The fact that platforms support both academics and journalists is in
itself not a problem for Frau-Meigs as long as it is transparent, but
the lack of clear criteria and a separate funding entity is. This can
lead to a complicated and opaque situation, as the expert group’s
example demonstrates. Frau-Meigs concludes: “The platforms should not
exert influence the way they do now.”

A hobbled code

When the ‘high-level group’ on fake news and online disinformation
was created it had grand ambitions. Commissioner Gabriel – alarmed by
the Brexit vote and Trump’s election – announced in autumn 2017: “The
fake news phenomenon is not new, but the reach and speed with which
disinformation has spread online is unprecedented.” She said she had
decided to call together a group of experts “in order to find a common
solution to this growing phenomenon”.

The code of practice that the European Commission adopted last
September certainly included some valuable guidelines. It acknowledged
the need to improve the scrutiny of advertisement placements; to ensure
the transparency of political and issue-based advertising; to establish
clear labelling and rules for bots so that they would not be confused
with human interaction; and to reduce the visibility of disinformation
by making trustworthy content easier to find and by ensuring users have
access to different news sources with a range of viewpoints.

A significant omission, however, was the set of mechanisms proposed by
Goyens and others that would have forced the platforms to be more
transparent about their business models and, in turn, would have helped
policymakers assess whether these models might enable or promote
disinformation. This was something the platforms tried hard to prevent,
according to group members who spoke to Investigate Europe.

A year later, the consequences of this obstruction are clear. The
code of practice agreed with the platforms is merely voluntary. No laws
were created. Aside from strong words, there is no way to pressure
the platforms to fulfil their obligations. The platforms agreed to take
stronger action against fake accounts, to give preference to trustworthy
sources and to make it transparent to their users why they see
political advertising and who pays for it – but progress has been
limited.

Damning criticism
of the code of practice came from a ‘Sounding Board’ that was convened
by the European Commission to track the proposals drawn up in response
to the expert group’s report. The Sounding Board, which included
representatives from media, civil society and academia, said that the
code of practice “contains no common approach, no clear and meaningful
commitments, no measurable objectives or KPIs, hence no possibility to
monitor progress, and no compliance or enforcement tool.

“It is by no means self-regulation, and therefore the platforms, despite their efforts, have not delivered a code of practice”.

The European Commission itself, meanwhile, signalled its
dissatisfaction with the results of its long-running engagement with the
big tech companies. “More systematic information is needed for the
Commission to assess the efforts deployed by the online platforms to
scrutinise the placement of ads and to better understand the
effectiveness of the actions taken against bots and fake accounts,” four
commissioners said in a statement issued in March.

As Investigate Europe revealed this spring,
the EU’s instruments against disinformation remain largely ineffective.
In March, when campaigns for the European parliamentary elections had
already started, both Facebook and Google were struggling to get their
ad transparency initiatives running.

Money no object

Facebook and Google have been robustly defending their interests in
Brussels for years. According to the EU lobby register, Facebook spent
at least €3.5 million on its employees there in 2018, while Google spent
more than €6 million.

When it comes to certain topics, the companies reach even deeper into
their pockets. Last year, for example, the British music industry
association UK Music calculated that Google had spent almost €31 million
to lobby against a stricter copyright law.

Goyens sums up: “The code of conduct was total crap. It was just a
fig leaf. The whole thing was a rotten exercise. It was just taking more
time, extending more time.”

In the expert group’s final report, which would lead to the code of
practice, there was barely any mention of the competition tools that
were so widely discussed during the group’s meetings. The possibility of
a relationship between the platforms’ business models and the
unprecedented reach and speed of disinformation campaigns is mentioned
in the 44-page report – but only within an 80-word footnote.

We asked Google to comment on this article as well, but it declined to do so.

This article is part of our in-depth investigation into disinformation: The disinformation machine. Find out more and read the story in one of our partner publications.