This mechanism, designed to prevent alleged copyright infringements, leaves it to algorithms to decide how to handle potentially illegal content uploaded by users. What was supposed to be an efficient way to safeguard authors’ rights has in reality turned out to be a “censorship machine” that does not even address the so-called “value gap” between platforms and rightsholders.
Handing algorithms the power to decide what can and cannot be expressed online, without any human involvement, poses serious risks to our societies. Particularly sensitive issues touching on our fundamental right to freedom of expression should be decided by a court, not by a machine. There are already many real-life examples of how automated upload filters fail, censoring a broad range of content from innocent videos to human rights activism.
A kitten purring infringes copyright: YouTube’s Content ID system, which filters its users’ uploads, flagged a cat’s purring as a copyright infringement. The purring was matched to a musical composition owned by a company, turning it into a “pirate” product. This perfectly illustrates how arbitrary the content caught by the filter can be.
Content used for educational purposes: Harvard Professor Lawrence Lessig had one of his lectures flagged by the platform’s copyright filter because he used excerpts from several well-known songs. Even though the music was part of his teaching material and its use for educational purposes was therefore legal, YouTube muted the entire lecture. This is a telling example of how filters can restrict access to culture and education without taking into account copyright exceptions for the use of protected content.
Human rights activism censored: There are several examples of automated upload filters censoring human rights activists. Filters used to classify content as “offensive”, “extremist” or simply “inappropriate for minors” have ended up censoring videos that sought to denounce injustices. For instance, thousands of videos documenting atrocities in the Syrian war were removed, resulting in the loss of extremely valuable material for prosecuting war crimes. Another example of this censorship is the removal of videos by LGBT activists.
The examples above show that automated upload filters can lead to the illegitimate removal of material from the internet. They can also encourage internet users to self-censor and limit their uploads “voluntarily” for fear of being censored. These practices deeply affect human rights such as freedom of expression and access to information, culture and education. If even copyright experts struggle to determine when cultural works may be freely used in the EU, algorithms are far less likely to understand the context and purpose of using protected material, let alone be trusted to decide whether content is “offensive” or unsuitable for its audience.
European policy-makers must take this reality into account and seriously reconsider the use of upload filters and their impact on democratic societies.