Prior to the development of the Internet, it was difficult to widely distribute content without it being validated by the gatekeepers of the traditional press, i.e., editorial boards that decided whether or not to publish a piece of information. The web revolutionized this process by allowing us to bypass this filter: now, anyone can produce content and make it accessible to the entire world by publishing it on a site or a social network.
This undeniably constitutes a democratic breakthrough. For example, it has become more difficult for a ruling power to cover up an embarrassing affair by pressuring the editorial boards of mainstream media outlets. Indeed, there is no guarantee that a journalist or “whistleblower” will not leak the information onto social networks. However, the Internet has also facilitated the rapid and massive dissemination of rumors, misinformation, information manipulation, conspiracy theories and hate speech.
Given this situation, it is tempting to demand that digital platforms (Facebook, Twitter, YouTube, Google, etc.) clean up their pages. Like other countries, France is exerting increasing legislative pressure on the operators of these platforms, enjoining them to “fight against the dissemination of fake news likely to disturb public order...” (Law of December 22, 2018 on the fight against the manipulation of information), or even to remove “within a maximum period of 24 hours [...] any content that manifestly incites hatred or discriminatory insults” (draft law of March 20, 2019 aiming to fight against hatred on the Internet, almost entirely censored by the Constitutional Council).
States are not the only entities putting pressure on digital platforms. In the context of the anti-racism movement that emerged after the death of George Floyd, large brands such as Coca-Cola and Unilever suspended their advertising campaigns on Facebook, accusing the company of not doing enough to combat racist comments on its platform. The social network was quick to respond: Facebook has already announced a strengthening of its policy on hate speech. YouTube, for its part, claims to be leading the charge, having closed 25,000 channels accused of broadcasting hateful content. These include the channels operated by Alain Soral and Dieudonné M'Bala M'Bala, French conspiracy theorists who have been convicted several times for their anti-Semitic remarks.
While it is easy to welcome the fact that social network operators are taking action against flagrant misinformation and harmful content, one can nonetheless wonder whether it is desirable for them to become the guardians of what is considered true and good on the Internet. By requiring them to decide what can and cannot be said, do we not run the risk of restricting freedom of expression on an arbitrary basis? There is certainly reason for concern. Calling on digital platforms to fight false information and hate speech, all the while expecting them to unilaterally determine which content falls into these categories, could well lead to a form of digital precautionism that restricts individual freedoms.
Social networks possess two mechanisms by which they can identify and remove fake news or hateful messages: user reports, which are subsequently evaluated by moderators, and algorithms that detect undesirable content. The issue with user reports is that they are subjective and not necessarily disinterested – online communities sometimes try to remove the publications of their ideological opponents by asking their members to massively report their content. Moreover, because of the quantity of data that must be processed, moderators’ analysis of the reported content remains superficial. If, for fear of judicial or economic repercussions, digital platforms begin moderating content too aggressively, there is a risk that any widely reported content will be removed without further consideration.
This same logic of digital precautionism could also be applied to algorithms, which would be made so sensitive that no controversial content could conceivably pass through them. This would evidently come at the cost of removing content that should not have been removed. Ultimately, the ability to freely express oneself on topics that have been deemed sensitive could be altogether obstructed.
The Internet too often becomes a lawless zone where hateful or misleading statements can be made without consequence. Transforming it into a place ruled by the arbitrary, precautionary moderation of the web's major actors is probably not a desirable alternative. There are no simple solutions that fight online hate and misinformation while simultaneously preserving freedom of expression. However, developing legal procedures that allow for rapid responses to the dissemination of hate speech online, and reinforcing education on the mechanisms of disinformation, would perhaps be more promising, and less damaging to freedom, than asking the major platforms to become the judges and policemen of the web.
This article was originally published in l’Opinion of August 20, 2020, p.7.
Sources: https://www.lemonde.fr/societe/article/2019/11/27/dieudonne-condamne-a-9-000-euros-d-amende-pour-antisemitisme_6020751_3224.html ; https://www.lepoint.fr/justice/alain-soral-condamne-a-un-an-ferme-pour-ses-propos-sur-le-pantheon-02-10-2019-2339049_2386.php