
The algorithm is an influencer like any other




Guillaume Chevillon


The banning of Facebook and Twitter in Russia, which follows the banning of RT and Sputnik on these platforms, confirms once again that social media are not simple agoras where citizens express themselves freely. Instead, they are centers of influence subject to manipulation. The upsurge in such hijackings during electoral campaigns in recent years should serve as a warning in the run-up to the presidential election.


Indeed, the risks do not come only from cybercrime networks based in Russia: manipulations have multiple origins. For example, in an investigation published on February 2 on the website of Le Monde, the person in charge of Éric Zemmour's strategy acknowledged that his team had organized massive campaigns of automated retweets in order to artificially inflate the candidate's presence. The objective was achieved: several pro-Zemmour messages obtained "trend" status and the associated visibility on Twitter's homepage. Le Monde's journalists point out that these operations "violate several rules of the social network, officially committed to the fight against disinformation and political manipulation."


The agreement reached on Saturday, April 23 by the European institutions to adopt the Digital Services Act complements the Digital Markets Act agreed on March 24. Together, they aim to ensure that recommendation algorithms are safe and transparent, to promote fair competition and to encourage innovation. To do this, they strengthen oversight of "gatekeepers", the large platforms that play a lasting and systemic role in connecting individuals and businesses.


One of the key measures obliges platforms to make their recommendations more explainable, holding them responsible for the content they promote and treating them more as media than as simple agoras. In doing so, these regulations also encourage us to change our ways of thinking.


Who’s to blame?


To understand what is at stake in terms of responsibility and the reversal of the "burden of proof", let us return to the example above: the massive campaigns of automated retweets organized on behalf of Éric Zemmour, carried out in open violation of Twitter's stated rules against disinformation and political manipulation.


Should we primarily blame the one who subverts the stated rules, or the one who is too easily fooled? The question obviously applies in many contexts, in particular when states proclaim strong principles (no to tax havens, to corporate and inheritance tax optimization, to oil spills, to those who take advantage of migrants and asylum seekers...) but do not put in place sufficient rules and controls.


Does this logic apply to the case of algorithms? The moral fault undoubtedly lies with both parties, the deceiver and the deceived, but what about legal responsibility? Academic research has indeed shown that algorithms have many effects that their designers did not anticipate.


Blind spots and errors


An article by Emilio Calvano of the University of Rome and his co-authors, published in the American Economic Review, looks at algorithms that set prices online - think of the auctions run by Google - and shows that they can gradually learn to collude in order to increase their joint revenues. Such collusion is obviously unlawful, but are the designers of these algorithms guilty of these unintended consequences?


In fact, we must question the very notion of "unexpected", because it often reflects a design flaw. In the case of algorithmic collusion, Xavier Lambin, a colleague of mine at ESSEC, and Ibrahim Abada show, in a recent paper to be published in Management Science, that collusion between algorithms can disappear when the latter are allowed to experiment more. This is a general principle: consequences take time to evaluate and are difficult to anticipate, because these algorithms are meant to interact with humans whose behavior evolves in response to the influence exerted on them.
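
To make the mechanism concrete, here is a minimal sketch, in the spirit of (but much simpler than) the settings studied by Calvano and co-authors and by Lambin and Abada: two Q-learning agents that repeatedly set prices. The price grid, demand model, learning rate and exploration rates are illustrative assumptions, not those of the papers.

```python
# A minimal sketch of two Q-learning pricing agents in a repeated game. This is NOT the
# setup of Calvano et al. or Lambin and Abada: the price grid, demand model, learning
# rate and exploration rates below are illustrative assumptions chosen for brevity.
import numpy as np

rng = np.random.default_rng(0)
prices = np.linspace(1.0, 2.0, 6)   # discrete price grid; marginal cost is 1.0
n = len(prices)

def profits(i, j):
    """Bertrand-style demand: the lower price takes the whole market, ties split it."""
    pi, pj = prices[i], prices[j]
    if pi < pj:
        return pi - 1.0, 0.0
    if pi > pj:
        return 0.0, pj - 1.0
    return (pi - 1.0) / 2, (pj - 1.0) / 2

def simulate(epsilon, periods=100_000, alpha=0.1, gamma=0.95, window=10_000):
    """Run the repeated game and return the average price over the last `window` periods."""
    # State = the pair of prices chosen last period; one Q-table per firm.
    Q = [np.zeros((n, n, n)) for _ in range(2)]
    state = (0, 0)
    avg_price = 0.0
    for t in range(periods):
        acts = []
        for f in range(2):
            if rng.random() < epsilon:            # explore: try a random price
                acts.append(int(rng.integers(n)))
            else:                                 # exploit: current best-looking price
                acts.append(int(np.argmax(Q[f][state])))
        rewards = profits(*acts)
        nxt = (acts[0], acts[1])
        for f in range(2):                        # standard Q-learning update
            best_next = Q[f][nxt].max()
            Q[f][state][acts[f]] += alpha * (rewards[f] + gamma * best_next - Q[f][state][acts[f]])
        state = nxt
        if t >= periods - window:
            avg_price += (prices[acts[0]] + prices[acts[1]]) / 2 / window
    return avg_price

# The static (one-shot) equilibrium price on this grid is close to cost (about 1.0-1.2);
# the joint-profit-maximizing price is 2.0. How close the learned prices end up to either
# benchmark depends, among other design choices, on the exploration rate.
for eps in (0.01, 0.2):
    print(f"exploration rate {eps}: average learned price = {simulate(eps):.2f}")
```

Nothing in this toy guarantees that the agents will collude; the point is that a parameter as innocuous-looking as the exploration rate shapes the market outcome, and its consequences only reveal themselves once the algorithms start interacting.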


Some, like Tristan Harris in the recent Netflix documentary "The Social Dilemma", suggest that an independent agency could analyze algorithms ex ante, using social impact assessment criteria. Such an agency would then authorize and license algorithms, much as the French Agence de Sécurité du Médicament does for health products.


Would such an a priori evaluation of algorithms be effective? There would inevitably be blind spots and systematic errors, given the unpredictability and changing nature of human reactions. It would, moreover, be wrong to think that platforms are indifferent to the consequences of their tools.


Leave the control to Twitter


However, algorithms are designed for specific purposes and achieve them: "likes" on social networks, for instance, work very well to generate engagement. But since notions of truth and quality are absent from today's algorithms, the misinformation that results goes unchecked.


Thus, the algorithm achieves its short-term goal, but its medium-term effects - polarization of information and communities, lack of contradiction and of prioritization - are beyond its scope. In economics, this is called an externality: companies internalize some of the benefits but externalize the negative consequences, such as pollution in an industrial context, leaving society as a whole to deal with the fallout.
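
A toy example may help fix ideas. The sketch below, in which all scores, field names and the penalty value are invented, contrasts a ranker that maximizes predicted engagement alone with one forced to internalize an estimated misinformation cost, the algorithmic analogue of a pollution charge.

```python
# A toy illustration of the externality argument. All scores, field names and the
# penalty value are invented; they do not describe any platform's actual ranking system.
posts = [
    {"id": "A", "predicted_engagement": 0.90, "misinformation_risk": 0.80},
    {"id": "B", "predicted_engagement": 0.75, "misinformation_risk": 0.05},
    {"id": "C", "predicted_engagement": 0.60, "misinformation_risk": 0.10},
]

def rank(posts, penalty=0.0):
    """Score each post as engagement minus penalty * misinformation risk.

    penalty = 0 corresponds to today's logic (the cost is left to society);
    penalty > 0 forces the platform to internalize part of that cost.
    """
    score = lambda p: p["predicted_engagement"] - penalty * p["misinformation_risk"]
    return sorted(posts, key=score, reverse=True)

print([p["id"] for p in rank(posts, penalty=0.0)])  # engagement only   -> ['A', 'B', 'C']
print([p["id"] for p in rank(posts, penalty=1.0)])  # cost internalized -> ['B', 'C', 'A']
```

With a zero penalty, the most engaging but most dubious post comes first; once the cost is internalized, the ordering changes.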


Isn't the most effective solution to force digital platforms to internalize the costs that today fall on the rest of society? In the case of the hijacking of Twitter's "trends" by Éric Zemmour's campaign, a simple solution would be to make Twitter responsible for the content and recommendations it puts forward.


The European Digital Services Act obliges platforms to fight misinformation and to explain the measures they take to that end, but shouldn't all recommendations be covered? The hijacked "trend" could, in this case, be singled out by Twitter as sponsored content or advertorial. Just as influencers on Instagram indicate their paid partnerships, the onus would be on Twitter - and its competitors - to monitor their algorithms and disclose the advertising (or other) benefits they receive.
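
As a purely hypothetical illustration of what such a labelling obligation could look like, the sketch below flags a "trend" as sponsored-style content when too large a share of its volume appears to come from automated amplification. The fields and the threshold are invented for the example and do not reflect Twitter's actual systems.

```python
# A purely hypothetical sketch of a labelling obligation: a "trend" whose volume relies
# too heavily on suspected automated amplification is displayed as sponsored-style
# content. The fields and the threshold are invented and do not reflect Twitter's systems.
from dataclasses import dataclass

@dataclass
class Trend:
    topic: str
    total_tweets: int
    suspected_automated_tweets: int

def display_label(trend: Trend, threshold: float = 0.30) -> str:
    """Return the label a platform could attach to a trend on its homepage."""
    automated_share = trend.suspected_automated_tweets / trend.total_tweets
    if automated_share >= threshold:
        return f"#{trend.topic} [amplified - displayed as sponsored content]"
    return f"#{trend.topic}"

print(display_label(Trend("CandidateX", total_tweets=50_000, suspected_automated_tweets=30_000)))
print(display_label(Trend("LocalNews", total_tweets=8_000, suspected_automated_tweets=200)))
```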


