The impact of digital platforms and social media on freedom of expression and pluralism

date: 27 April 2021

location: Webinar (University of Belgrade Faculty of Law)

presenters: Dr. Dušan V. Popović

moderators: -

themes: (1) the loss of equal access to and participation in speech on these platforms, and (2) the problem of accountability

Event Summary

The speaker opened the webinar with an explanation of the foundations of freedom of speech on the Internet and their treatment in US and European case law. The three main lines of thought regarding freedom of expression are (1) the recognition of truth as a prerequisite to social development, (2) freedom of expression as an instrument of democratic self-government, and (3) freedom of speech as a value in itself. The speaker then elaborated on the concept of ‘speech’, especially in the digital/online context. The notion of ‘speech’ is interpreted broadly to include both direct speech (words) and symbolic speech (actions). For example, a simple ‘like’ on Facebook is considered ‘speech’, as established in the US case Bland v. Roberts. The speaker concluded that the US and European approaches differ somewhat: US law differentiates among several categories of speech, some of which are given little or no protection under the First Amendment, whereas the European approach prescribes different limitations on freedom of speech.

The webinar continued with a discussion on whether social networks should be perceived as private spaces or public forums. The public forum doctrine was articulated in US law in Perry Education Association v. Perry Local Educators’ Association. The speaker noted the increasing difficulty of distinguishing between private and public use of social network profiles, citing as an example the private accounts used by politicians to communicate with citizens. The issue was further analysed in the US case Knight First Amendment Inst. at Columbia University v. Trump. It is of utmost importance to determine whether social networks should be treated as technology companies (neutral players) or as media companies (those making editorial choices).

Two sets of rules govern content removal: (1) state-adopted regulations and (2) internal content removal rules adopted by social networks. The webinar further explored the legal basis for content censorship in comparative law by referring to two main models: the US and the EU legislative approaches. The speaker began his analysis of US law with Section 230 of the Communications Decency Act of 1996, specifically ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’ Section 230 has four statutory exclusions: (1) prosecutions of federal crimes (e.g. obscenity and sexual exploitation of children), (2) claims based on the Electronic Communications Privacy Act or state law equivalents, (3) claims based on the Fight Online Sex Trafficking Act, and (4) intellectual property claims. Furthermore, the speaker analysed the notice and takedown mechanism of the Digital Millennium Copyright Act (DMCA), which is applicable to copyright infringement complaints.

The speaker’s analysis of EU law began with the Directive on Electronic Commerce (2000), which lays down safe harbours for mere conduits, caching, and hosting, and was inspired by the US DMCA. The speaker further analysed the decision of the Court of Justice of the European Union in the Promusicae case, in which the Court concluded that in transposing the directives and implementing the measures ‘the Member States must (…) take care to rely on an interpretation of the directives which allows a fair balance to be struck between the various fundamental rights protected by the Community legal order’. Additionally, the speaker reviewed the EU Directive on Copyright in the Digital Single Market (2019), which represents a shift from a horizontal to a vertical approach in EU law. Another example of the shift towards vertical legislative intervention can be found in the proposal for a Regulation on Preventing the Dissemination of Terrorist Content Online (2018).

The speaker continued with an examination of the latest version of the Audiovisual Media Services (AVMS) Directive (2018), which defines a ‘video-sharing platform service’ as follows: ‘the principal purpose of the service or of a dissociable section thereof or the essential functionality of the service, is devoted to providing programmes, user-generated videos, or both, to the general public, for which the video-sharing platform provider does not have editorial responsibility’. Consequently, social media services can constitute ‘video-sharing platform services’ and fall within the scope of the AVMS Directive. This Directive requires Member States to ensure that video-sharing platform providers under their jurisdiction take appropriate measures to protect: (1) minors from programmes, user-generated videos and audio-visual commercial communications which may impair their physical, mental or moral development, (2) the general public from programmes, user-generated videos and audio-visual commercial communications containing incitement to violence or hatred directed against a group of persons or a member of a group, and (3) the general public from programmes, user-generated videos and audio-visual commercial communications containing content which constitutes an activity which is a criminal offence under Union law, namely public provocation to commit a terrorist offence, offences concerning child pornography, racism and xenophobia.

Following the analysis of legislative acts related to content removal, the speaker reviewed the internal rules adopted by social networks. Types of internal rules and regulations include terms of service, privacy policies, IP policies, and community standards. The speaker observed that these rules suffer from a lack of transparency. Another issue highlighted was the use of automatic detection and filtering technologies, which involve little or no human intervention and thereby introduce a risk of bias at various technological stages, including algorithm design. Numerous examples of obvious mistakes or questionable content removal decisions (e.g. Facebook blocking the sharing of the ‘Napalm Girl’ photo) were discussed. The speaker then turned to Facebook’s response to this criticism: the establishment of the Facebook Oversight Board.

The presentation concluded with an assessment of the impact of digital platforms on freedom of expression and pluralism. Two major downsides of social networks were identified: (1) the loss of equal access to and participation in speech on these platforms, and (2) the problem of accountability.
