The power of persuading: Can and should we regulate AI algorithms?

In short:

Adding to the General Data Protection Regulation (GDPR), the European Commission has now proposed two legislative initiatives: the Digital Services Act (DSA) and the Digital Markets Act (DMA). Together, they aim to ensure safety and transparency while promoting fair competition and fostering innovation. As with the GDPR, a stated objective is to regulate not only the European Single Market but also to have a global impact.

A key principle of the DSA and DMA is to reinforce the oversight of the “gatekeepers”, i.e., the very large online platforms that play an entrenched, systemic role by durably linking many individual users and businesses. For years now, Margrethe Vestager, the Vice President of the European Commission in charge of Competition and the program Europe Fit for the Digital Age, has voiced support for policies that combine the promotion of competition with regulatory constraints, rather than an approach that would break up these systemic players.

With these new legislative tools in place, the next question is how national regulatory authorities should be structured to implement these policies. Several issues are at stake:

  1. To assess the impact of an algorithm, we need a measure of the quality of its recommendations. Such a measure cannot be based on an individual's immediate response; it must be assessed at the population level and over a longer time horizon.

  2. When it comes to influencing a person, the difficulty lies in the fact that the person's behavior evolves in response to the influences he or she receives. We humans are like machines that change function, shape, or mode of operation as soon as something tries to nudge us. The response to algorithmic recommendations therefore evolves over time.

  3. In turn, policies cannot control people's beliefs and reactions ex ante, so social algorithms need continuous oversight rather than a one-off a priori assessment.

  4. In a decentralized economy where individuals make their own decisions, the purposes of the agency in charge of the required oversight must be clearly stated and understood. The key is to generate the trust and coordination that enable it to monitor the impact of algorithms and thus fulfill its mission.
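Points 1 and 2 above can be made concrete with a toy simulation. The model below is purely illustrative and not from the article: it assumes each user's receptiveness to a nudge decays slightly every time the nudge succeeds (habituation). Under that assumption, the immediate response rate systematically overstates the long-run, population-level response, which is why evaluation must be longitudinal.

```python
import random

def simulate(nudge_strength, steps=200, n_users=1000, seed=0):
    """Toy habituation model (illustrative assumption, not the article's model).

    Each user responds to a nudge with probability
    nudge_strength * receptiveness; every successful nudge shrinks that
    user's future receptiveness by 1%. Returns the immediate (first-step)
    response rate and the average response rate over the whole horizon.
    """
    rng = random.Random(seed)
    receptiveness = [1.0] * n_users
    immediate = 0.0
    total = 0.0
    for t in range(steps):
        step_responses = 0
        for i in range(n_users):
            p = min(1.0, nudge_strength * receptiveness[i])
            if rng.random() < p:
                step_responses += 1
                # Habituation: a successful nudge dulls future response.
                receptiveness[i] *= 0.99
        rate = step_responses / n_users
        if t == 0:
            immediate = rate
        total += rate
    return immediate, total / steps

if __name__ == "__main__":
    imm, long_run = simulate(nudge_strength=0.8)
    print(f"immediate response rate: {imm:.2f}")
    print(f"long-run average rate:   {long_run:.2f}")
```

In this sketch, a regulator that only measured the first-step response would overestimate the algorithm's sustained influence; the gap between the two numbers is exactly what a population-level, longer-horizon measure is meant to capture.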

Here is a short video introduction to my article.

