

Mere Messengers No More
The idea that internet platforms are just conduits in the information age is withering away. But at what cost?
The original Communications Decency Act of 1996 did not last long. Soon after it came into force, its compatibility with the US Constitution was challenged in the courts, and in 1997 the Supreme Court struck down its indecency provisions in Reno v ACLU. But one significant provision survived.
Section 230 of the CDA stipulates that providers of an “interactive computer service” are not to be treated as the publisher or speaker of third-party content, shielding them from liability for what their users post. Such a provision “was a massive win for the giant social media platforms yet to come.”1 By this point, “[s]earch engines were buzzing, sites proliferated, and surfing the Internet had become a daily ritual.”2
In the EU, a similar approach was taken with the E-Commerce Directive in 2000. Under Article 14, providers of “information society services” that host user-generated content (UGC) are not liable for that content. The exception is where a provider has “actual knowledge” of illegal content, in which case it must act “expeditiously” to remove or disable access to it.
Should internet platforms be liable for the actions of their users and the content on their platforms? In the late 90s and early 2000s, the answer seemed to be no. Such intermediaries lacked effective control over UGC, and the sheer daily volume of new content made the imposition of liability inequitable.
How times have changed. The idea that internet platforms like Facebook and YouTube are mere conduits in the information age was once the organising assumption of the legal framework. Now that idea is being chipped away. In fact, as Jeff Kosseff points out in his book The Twenty-Six Words That Created the Internet, by 2009 “[u]ser content-focused platforms such as Facebook and Yelp had evolved from start-ups into large businesses” and even judges were questioning why the law seemed to give them a free ride.3
Now the first earnest efforts to reform platform liability are taking shape in the US. In September, the Department of Justice submitted to Congress draft legislation to weaken the protections of Section 230. “For too long Section 230 has provided a shield for online platforms to operate with impunity,” said William Barr, the US Attorney General. Instead, they should be held “accountable both when they unlawfully censor speech and when they knowingly facilitate criminal activity online.”
Action is being taken on the other side of the Atlantic too. The EU’s Digital Copyright Directive, which Member States must transpose by June 2021, makes internet platforms liable for UGC that infringes copyright, unless they can show that they have made their “best efforts” to detect infringing content, remove it and prevent its re-upload. #Article13 (now Article 17 of the final text) and content filters will once again be rearing their ugly heads.
In December, the European Commission put forward its proposal for a Digital Services Act (DSA). The legislation would impose new rules on how internet platforms deal with illegal content, placing stricter transparency and due diligence obligations on larger platforms such as Twitter and TikTok. The Commission made the proposal on the basis that the use of such platforms “has become the source of new risks and challenges, both for society as a whole and individuals using such services.”
There is also the online harms policy in the UK: internet companies will become liable for UGC on their platforms, and that liability will not be limited to illegal content. So-called “harmful” content, namely content that poses a reasonably foreseeable risk of “causing physical or psychological harm to adults”, will also be in scope. This marks a contrast with the DSA proposals, which describe the issue of harmful content as “a delicate area with severe implications for protection of freedom of expression.”
The problem that all these proposed reforms share is that they shift yet more power to the internet platforms. By making them responsible for UGC, politicians are simultaneously ceding to them the authority to decide what is and is not allowed on the world wide web.
It could be argued that this is, in fact, a reasonable delegation of responsibility. These companies have shown themselves capable of content moderation for various lucrative commercial ends (surveillance capitalism and the like), so why not use those same techniques to curb content that does society no good?
But herein lies the issue. Internet platforms are ultimately being encouraged to decide what is acceptable for society according to their own sense of morality. Big Tech already suffers from cognitive dissonance in how it views its impact on the world. The imposition of intermediary liability, especially in the form envisaged by the UK, would only add fuel to the fire.
In particular, if such companies face the prospect of fines as high as 10% of annual turnover for failing to meet their obligations, many could be tempted to take an aggressive approach to content moderation. “Free speech encompasses the right to offend”,4 unless the platforms rule otherwise.
In essence, our speech and online activity “will be mediated and moderated by private technology firms” according to the legal and commercial risks that they are willing to stomach.5 While internet platforms might be more than conduits, they should not be crowned lordly arbiters of cyberspace either.
[1] Margaret O’Mara, The Code: Silicon Valley and the Remaking of America (Penguin Press 2019), 330.
[2] Ibid.
[3] Jeff Kosseff, The Twenty-Six Words That Created the Internet (Cornell University Press 2019), 188.
[4] Scottow v CPS [2020] EWHC 3421 (Admin), [45].
[5] Jamie Susskind, Future Politics: Living Together in a World Transformed by Tech (OUP 2018), 190.
Other Sources:
Joe Rogan Experience #1258 - Jack Dorsey, Vijaya Gadde & Tim Pool