In April 2019, the UK Government published the Online Harms White Paper.¹ This paper set out a new policy proposal designed to make the UK 'the safest place in the world to go online'.²
The Government plans to achieve this by imposing a new 'statutory duty of care'³ on 'companies that allow users to share or discover user-generated content, or interact with each other online'.⁴ This will include social media companies as well as non-profit organisations, cloud hosting providers and retailers allowing users to review products online.⁵
This new duty of care consists of taking 'reasonable steps to keep…users safe and tackle illegal and harmful activity' online.⁶ Compliance with this duty will be monitored and enforced by a new independent regulator.⁷ In practice, companies charged with this responsibility will need to take action to 'tackle harmful content or activity on their services' by preventing it from being shared or made public or, in some cases, by removing the harmful content completely.⁸
The kind of harmful content that will have to be dealt with is not limited to that which is already recognised as unlawful in the UK, such as terrorist content and activity. It also extends to 'content that may directly or indirectly cause harm to other users…[including] offensive material'.⁹
The policies contained in the White Paper form part of the Government's wider aim to make the UK an ideal location in which to start a digital business. Nicky Morgan, the Secretary of State for Digital, Culture, Media and Sport, says that she will continue to tackle the internet's 'dark side' by addressing the serious issue of 'some users seizing on social media to bully, intimidate or promote terrorism'. The Government confirmed this policy objective earlier in October by declaring its intention to publish draft legislation based on the White Paper.¹⁰
A New Approach
The debate with which the White Paper is concerned relates to the extent to which internet platforms should be liable 'for the content and activities of their users'.¹¹ An ancillary question is, if these platforms are to be liable, what steps they should be required to take to address the identified harmful content.
During the early 2000s, the answers to both of these questions were more favourable to internet platforms. This was the case for three main reasons. Firstly, these intermediaries were considered to lack the effective control over the content generated on their websites that liability would require. Secondly, even if such control existed, there was a perceived 'inequality in imposing liability upon a mere intermediary'.¹² Thirdly, the consequences of potentially unlimited liability appeared to be unjustified.
The first argument, concerning the lack of control, stemmed from the fact that internet platforms could not possibly check all of the content or activity taking place. More specifically, there was no practical method for conducting such monitoring 'without impossible amounts of delay and expense'.¹³ It was also thought that to do so would invade user privacy and confidentiality.¹⁴
However, these concerns are not as convincing as they may once have been. This is largely because 'automated content curation has become steadily more sophisticated and prevalent'.¹⁵ Especially with the rise of machine learning, 'automated blocking has begun to look more feasible', hence the proactive obligations suggested in the White Paper: companies must identify the risks associated with their services and implement measures to guard against those risks.¹⁶ The duty of care is therefore not limited to responding to complaints from users.
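The contrast between a complaint-driven model and the proactive model contemplated by the White Paper can be sketched in a few lines of code. This is purely an illustrative toy, not anything prescribed by the White Paper; the blocklist and the is_harmful() heuristic are hypothetical stand-ins for the machine-learning classifiers that real platforms would deploy.

```python
# Toy illustration of reactive versus proactive content moderation.
# BLOCKLIST and is_harmful() are hypothetical stand-ins for a trained
# machine-learning classifier; real systems are far more complex.

BLOCKLIST = {"incitement", "terrorist propaganda"}  # hypothetical categories

def is_harmful(post: str) -> bool:
    """Stand-in for an automated classifier's verdict on a post."""
    return any(term in post.lower() for term in BLOCKLIST)

def reactive_moderation(posts: list[str], complaints: set[int]) -> list[str]:
    """Pre-duty-of-care model: a post is reviewed only if someone complains."""
    return [post for i, post in enumerate(posts)
            if i not in complaints or not is_harmful(post)]

def proactive_moderation(posts: list[str]) -> list[str]:
    """Duty-of-care model: every post is screened before publication."""
    return [post for post in posts if not is_harmful(post)]
```

Under the reactive model, harmful posts that attract no complaint are never reviewed; under the proactive model, the platform itself bears the cost of screening everything, which is precisely the expansion of responsibility the White Paper contemplates.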
The second argument amounts to one of 'don't shoot the messenger'. It rests on the idea that it would be inequitable for internet companies to be liable for merely hosting the content generated by their users. An important underpinning of this notion, however, was that companies were impartial to the variety of content that would surface on their platforms, giving the impression that they were in fact mere messengers.
This was the view held by Mr Justice Eady in Tamiz v Google (2012).¹⁷ In that case, a claim was brought against Google, which at the time operated a blogging service called Blogger.com. The claim concerned a defamatory comment that appeared on one of the blogs hosted on the service.
In finding against the claimant in the High Court proceedings, Mr Justice Eady summarised the issue as Google being the owner of a wall 'which various people had chosen to inscribe graffiti on', and that Google did not 'regard itself as being more responsible for the content of these graffiti than would be the owner of such a wall'.¹⁸ Accordingly, the judge made the following observation:
The fact that an entity in Google Inc's position may have been notified of a complaint does not immediately convert its status or role into that of a publisher. It is not easy to see that its role, if confined to that of a provider or facilitator beforehand, should be automatically expanded thereafter into that of a person who authorises or acquiesces in publication. It claims to remain as neutral in that process after notification as it was before. It takes no position on the appropriateness of publication one way or the other.¹⁹
Lord Justice Richards echoed these remarks to some degree when the case reached the Court of Appeal. It was held that, although Google did exercise limited control by having the ability to remove content that breached its terms of service, it did not 'seek to exercise prior control over the content of blogs or comments posted on them'.²⁰ In other words, Google was not deemed a publisher, nor was its role 'comparable to that of [an] author or editor of a defamatory article'.²¹
Today, however, the likes of Google and Facebook have proven themselves to be more than just hosts. The most compelling evidence of this includes their use of targeted advertising and their filtering of content on users' feeds based on personal preferences.²²
The weakening of the first and second arguments of the early 2000s is thus closely connected. The first, concerning the lack of control, has been undermined by the fact that an increasing number of internet intermediaries possess the technical capability to monitor and manipulate the user-generated content on their platforms. The second, concerning the perceived inequity of imposing liability for such content, has been undermined by the fact that such intermediaries not only have these capabilities at their disposal but also make use of them in accordance with their own terms of service.
The White Paper changes this landscape by requiring internet companies to expand the use of their technical capabilities in order to meet legal responsibilities imposed by the State, in accordance with a prescribed list of non-permissible content and regardless of their own terms of service.
This leads into the third of the early arguments: that it would not be economically sensible to impose unlimited liability on internet companies. Given the immense revenues generated by many social media companies, such an argument would be difficult to sustain today. In September, Facebook announced plans to invest in a new oversight board responsible for managing the company's 'content decisions'.
Treacherous Territory
The statutory duty of care detailed in the White Paper departs from the conventional approach to the duty of care that was recently reiterated by the Supreme Court in Robinson v Chief Constable of West Yorkshire (2018).²³
This case concerned a pedestrian who was injured after being knocked over by police officers struggling to arrest a suspect. The pedestrian sought damages for personal injury as a result of the officers' alleged negligence. The Supreme Court found in favour of the claimant, holding that the officers owed her a duty of care because her injury was caused by their own positive conduct. In doing so, Lord Reed made the important point that private bodies 'generally owe no duty of care towards individuals to prevent them from being harmed by the conduct of a third party'.²⁴
By contrast, the White Paper departs from these principles by proposing that companies take responsibility for the actions of third parties. In effect, it does away with the ordinary duty to avoid inflicting injury and replaces it with a duty to prevent injury being inflicted by another person.
Graham Smith, a solicitor and author of the Cyberleagle blog, argues that the Robinson case reflects 'carefully considered limits on the existence and extent of existing duties of care'²⁵ and that the White Paper disregards these norms without acknowledging 'its radical departure from existing principles'.²⁶ In particular, by disregarding the distinction between one's own conduct and third-party conduct, the Government is proposing to create 'a generic basis for liability [that] has the potential to spread like spilt milk, with negative impacts on society at large as well as unjust consequences for the person subject to the liability'.²⁷
The policy exacerbates the problem further by subjecting internet companies to a generic liability for 'offensive content'. The White Paper does not make clear how such a term would be defined, nor what standard would be applied to determine when content qualifies as offensive. The logical consequence of this omission is that the term must be treated as subjective in nature, meaning that whether content is offensive depends on the views of the different people who might access it.
As such, there cannot necessarily be an objective identification of the foreseeable risk arising from particular content. Smith illustrates the problem by comparing the issue to a fictional nail in the floor:
A tweet is not a projecting nail to be hammered back into place, to the benefit of all who may be at risk of tripping over it. Removing a perceived speech risk for some people also removes benefits to others. Treating lawful speech as if it were a tripping hazard is wrong in principle and highly problematic in practice. It verges on equating speech with violence.²⁸
The White Paper therefore appears to approach the issue of offensive content by imposing a very low threshold for the risk of harm, under which a wide range of content could be penalised. As a result, it would take only a minimal number of complaints to eliminate other users' access to allegedly offensive content, evidencing a seemingly disproportionate approach.
Just the Beginning
It could be argued that both the unconventional widening of the duty of care and the significant lowering of the risk-of-harm threshold evidence a policy that is, on the face of it at least, unjustifiably broad and intrusive. It may culminate in a gross intrusion by the State on freedom of commerce and speech.
At the same time, though, the White Paper does represent a growing recognition of the power held by the likes of Facebook and Google as the dominant controllers of modern information flows in society; these are the entities increasingly responsible for controlling what we can and cannot see. The Online Harms White Paper is thus part of a wider effort to regulate the previously unregulated. But at what cost?
Sources:
[1] HM Government, Online Harms White Paper (CP 57, 2019).
[2] Ibid, [1].
[3] Ibid, [3.1].
[4] Ibid, [4.1].
[5] Ibid, [4.3].
[6] Ibid, [3.1].
[7] Ibid, [3.2].
[8] Ibid, [7.1].
[9] Ibid, [7.19].
[10] HM Government, The Queen's Speech and Associated Background Briefing, on the Occasion of the Opening of Parliament on Monday 14 October 2019 (October 2019), 61.
[11] L Edwards, 'With Great Power Comes Great Responsibility?: The Rise of Platform Liability' in Lilian Edwards (ed), Law, Policy and the Internet (Hart Publishing 2018) 253.
[12] Ibid, 257.
[13] Ibid.
[14] Ibid.
[15] Ibid, 258.
[16] White Paper (n 1), [7.3].
[17] Payam Tamiz v Google Inc [2012] EWHC 449 (QB).
[18] Ibid, [10].
[19] Ibid, [38].
[20] Payam Tamiz v Google Inc [2013] EWCA Civ 68 (CA), [24].
[21] Ibid, [25].
[22] Edwards (n 11), 259.
[23] Robinson v Chief Constable of West Yorkshire [2018] UKSC 4.
[24] Ibid, [37].
[25] Graham Smith, Online Harms White Paper - Response to Consultation (June 2019), [para. 1.5].
[26] Ibid, [para. 1.7].
[27] Ibid, [para. 1.4].
[28] Ibid, [para. 5.9].