COVID infodemic and the lure of censorship
13 May 2020: posted by the editor - General, Health, Human Rights, Journalism, Technology, Internet news
Article by Chloé Berthélémy, EDRi Policy Advisor

We already knew that social media companies perform poorly when it comes to moderating content on their platforms. Regardless of the measures they deploy, whether automated processes or human moderators, they make discriminatory and arbitrary decisions, fail to understand context and cultural and linguistic nuances, and provide no effective access to remedies. In a global health crisis, where accessing vital health information, keeping social contact and building solidarity networks are so important, online communications, including social media and other content hosting services, have become even more essential tools. Unfortunately, they are also vectors of the disinformation and misinformation that erupt in such exceptional situations and threaten public safety and governmental responses. Even so, private companies, whether acting voluntarily or under pressure from governments, should not impose over-strict, vague or unpredictable restrictions on people's conversations about important topics.

Automated tools don't work: what a surprise!

Facebook's "anti-spam" system, for example, was striking down quality COVID-19 content from trustworthy sources as violations of the platform's community guidelines. Sharing newspaper articles, linking to official governmental websites or simply mentioning the term "coronavirus" in a post could result in the content being pre-emptively blocked. This trend demonstrates why relying on automated processes can only be detrimental to freedom of expression and to the freedom to receive and impart information. The current context led even the Alan Turing Institute to suggest that content moderators should be considered "key workers" during the COVID-19 pandemic. Content filters show high error rates and are prone to over-censoring (a toy sketch below illustrates why). Yet the European Parliament adopted a resolution on the EU's response to the pandemic which calls on social network companies to proactively monitor and "stop disinformation and hate speech". In the meantime, the European Commission continues its "voluntary approach" with the social media platforms while contemplating a regulatory proposal in the near future.

Criminalising misinformation: a step too far

The risk that such measures will be abused, and that they will unjustifiably interfere with the right to freedom of expression, directly impairs the media's ability to provide objective and critical information to the public, which is crucial for individuals' well-being in times of a national health crisis. While extraordinary situations may require extraordinary measures, those measures must remain proportionate, necessary and legitimate. Both the EU and its Member States must refrain from undue interference and censorship, and instead focus on measures that promote media literacy and that protect and support diverse media, both online and offline. None of the approaches taken so far shows a comprehensive understanding of how curation algorithms and online advertising models enable the creation, amplification and dissemination of disinformation. It is extremely risky for a democratic society to rely on only a handful of communication channels, owned by private actors whose business model feeds on sensationalism and shock. The emergency measures adopted in the fight against the COVID-19 health crisis will determine what European democracies look like in its aftermath.
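As flagged above, here is a minimal, purely illustrative sketch of the kind of naive keyword filter that produces such over-blocking. It is an assumption for demonstration: the trigger list, function and example posts are invented and reflect no platform's actual code.

```python
# Toy illustration only: a naive keyword-based "anti-spam" filter.
# The trigger list and logic are invented for demonstration and do not
# reflect any platform's actual system.

BLOCKED_TERMS = {"coronavirus", "covid-19"}  # hypothetical trigger list

def is_blocked(post: str) -> bool:
    """Flag any post containing a trigger term, with no notion of context."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

# A link to official health guidance is treated exactly like a scam post:
print(is_blocked("Official coronavirus advice: https://www.who.int"))  # True
print(is_blocked("Miracle coronavirus cure, order now!!!"))            # True
```

Both posts are flagged alike: a pure keyword match has no notion of source, context or intent, which is precisely the failure mode that calls for proactive monitoring would scale up.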
The upcoming Digital Services Act (DSA) is a great opportunity for the EU to address the monopolisation of our online communication space. Further action is needed specifically in relation to the micro-targeting practices of the online advertising industry (AdTech). This crisis has also shown that the DSA needs to create meaningful transparency obligations, both to improve understanding of the use of automation and to enable future research, starting with transparency reports that include information about content blocking and removal. What a healthy public debate online needs is not gatekeepers empowered by governments to restrict content in a non-transparent and arbitrary manner, but diversified, community-led and user-empowering initiatives that allow everyone to contribute and participate.
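On the transparency obligations mentioned above: as a rough illustration of what a per-decision, machine-readable record could contain, here is a hedged sketch. The field names are assumptions made for the sake of the example, not anything prescribed by the DSA or used by any platform.

```python
# Hedged sketch of a machine-readable transparency record, one per
# moderation decision. Field names are assumptions for illustration,
# not anything prescribed by the DSA or used by any platform.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModerationRecord:
    decided_on: date
    action: str                     # e.g. "removed", "blocked", "demoted"
    rule_invoked: str               # guideline or legal basis cited
    automated: bool                 # True if decided by a filter, not a human
    appeal_available: bool          # whether the user was offered a remedy
    appeal_outcome: Optional[str] = None  # filled in once an appeal resolves

# Example: an automated removal with no remedy offered.
record = ModerationRecord(
    decided_on=date(2020, 3, 17),
    action="removed",
    rule_invoked="anti-spam policy",
    automated=True,
    appeal_available=False,
)
print(record)
```

Aggregated across all decisions, records like this would let researchers measure, for instance, how often automated filters remove content without offering users any remedy.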