The aberrant rise of platform liability in the UK
In April 2019, the UK Government published the Online Harms White Paper.¹ This paper set out a new policy proposal designed to make the UK “the safest place in the world to go online”.²
The Government plans to achieve this by imposing a new “statutory duty of care”³ on “companies that allow users to share or discover user-generated content, or interact with each other online”.⁴ This will include social media companies as well as non-profit organisations, cloud hosting providers and retailers allowing users to review products online.⁵
This new duty of care consists of taking “reasonable steps to keep…users safe and tackle illegal and harmful activity” online.⁶ Compliance with this duty will be monitored and enforced by a new independent regulator.⁷ In practice, companies charged with this responsibility will need to take action to “tackle harmful content or activity on their services” by preventing it from being shared or made public or, in some cases, by removing the harmful content altogether.⁸
The kind of harmful content which will have to be dealt with is not limited to that which is already recognised as unlawful in the UK, such as terrorist content and activity. It also extends to “content that may directly or indirectly cause harm to other users…[including] offensive material”.⁹
The policies contained in the White Paper are part of the Government’s wider aim to make the UK an ideal location to start a digital business. Nicky Morgan, the Secretary of State for Digital, Culture, Media and Sport, has said that she will continue to tackle the internet’s “dark side” by addressing the serious issue of “some users seizing on social media to bully, intimidate or promote terrorism”. The Government confirmed this policy objective by declaring, earlier in October, its intention to publish draft legislation based on the White Paper.¹⁰
A New Approach
The debate to which the White Paper relates concerns the extent to which internet platforms should be liable “for the content and activities of their users”.¹¹ An ancillary question is, if these platforms are to be liable, what steps they should be required to take to address the identified harmful content.
During the early 2000s, the answers to both of these questions were more favourable to internet platforms. This was the case for three main reasons. Firstly, these intermediaries were considered to lack the effective control over the content generated on their websites that liability would presuppose. Secondly, even if such control existed, there was a perceived “inequality in imposing liability upon a mere intermediary”.¹² Thirdly, the consequences of potentially unlimited liability appeared to be unjustified.
The first argument, concerning the lack of control, stemmed from the fact that internet platforms could not possibly check all of the content or activity taking place. More specifically, there was no practical method for conducting such monitoring “without impossible amounts of delay and expense”.¹³ It was also thought that to do so would invade user privacy and confidentiality.¹⁴
However, these concerns are not as convincing as they once were. This is largely because “automated content curation has become steadily more sophisticated and prevalent”.¹⁵ Especially with the rise of machine learning, “automated blocking has begun to look more feasible”, hence the proactive obligations suggested in the White Paper: companies must identify the risks associated with their services and implement measures to guard against those risks.¹⁶ The duty of care is therefore not limited to responding to complaints from users.
The second argument amounts to one of ‘don’t shoot the messenger’. It rests on the idea that it would be inequitable for internet companies to be liable merely for hosting the content generated by their users. An important underpinning of this notion, however, was that companies were neutral towards the variety of content that surfaced on their platforms, giving the impression that they were indeed mere messengers.
This was the view held by Mr Justice Eady in Tamiz v Google (2012).¹⁷ In that case, a claim was brought against Google, which at the time operated a blogging service called Blogger.com. The claim concerned a defamatory comment which appeared on one of the blogs hosted on the service.
In finding against the claimant in the High Court proceedings, Mr Justice Eady summarised the issue as Google being the owner of a wall “which various people had chosen to inscribe graffiti on”, noting that Google did not “regard itself as being more responsible for the content of these graffiti than would be the owner of such a wall”.¹⁸ He accordingly made the following observation:
The fact that an entity in Google Inc’s position may have been notified of a complaint does not immediately convert its status or role into that of a publisher. It is not easy to see that its role, if confined to that of a provider or facilitator beforehand, should be automatically expanded thereafter into that of a person who authorises or acquiesces in publication. It claims to remain as neutral in that process after notification as it was before. It takes no position on the appropriateness of publication one way or the other.¹⁹
Lord Justice Richards echoed these remarks to some degree when the case reached the Court of Appeal. It was held that, although Google exercised limited control through its ability to remove content that breached its terms of service, it did not “seek to exercise prior control over the content of blogs or comments posted on them”.²⁰ In other words, Google was not deemed a publisher, nor was its role “comparable to that of [an] author or editor of a defamatory article”.²¹
Today, however, the likes of Google and Facebook have proven themselves to be more than mere hosts. The most compelling evidence of this includes the use of targeted advertising and the filtering of content on users’ feeds based on their personal preferences.²²
The weakening of the first and second arguments of the early 2000s is thus closely connected. The first, concerning the lack of control, has been undermined because an increasing number of internet intermediaries have shown themselves to possess the technical capability to monitor and manipulate the user-generated content on their platforms. The second, concerning the perceived inequity of imposing liability for such content, has been undermined because such intermediaries not only have these capabilities at their disposal but also make use of them in accordance with their own terms of service.
The White Paper changes this landscape by requiring internet companies to deploy their technical capabilities in response to legal responsibilities imposed by the State, in accordance with a prescribed list of non-permissible content and regardless of their own terms of service.
This leads to the third of the older arguments: that it would not be economically sensible to impose unlimited liability on internet companies. Given the immense revenues generated by many social media companies, such an argument would be difficult to sustain today. In September, Facebook announced plans to invest in a new oversight board responsible for managing the company’s “content decisions”.
The statutory duty of care detailed in the White Paper departs from the conventional approach to the duty of care recently reiterated by the Supreme Court in Robinson v Chief Constable of West Yorkshire (2018).²³
This case concerned a pedestrian who was injured after being knocked over by police officers struggling to arrest a suspect. The pedestrian sought damages for personal injury as a result of the alleged negligence committed by the officers. However, the Supreme Court found against the claimant. In doing so, Lord Reed made the important point that private bodies “generally owe no duty of care towards individuals to prevent them from being harmed by the conduct of a third party”.²⁴
The White Paper departs from these principles by proposing that companies take responsibility for the actions of third parties. In effect, it does away with the ordinary duty to avoid inflicting injury and replaces it with a duty to prevent injury being inflicted by another person.
Graham Smith, a solicitor and writer of the Cyberleagle blog, argues that the Robinson case reflects “carefully considered limits on the existence and extent of existing duties of care”²⁵ and that the White Paper disregards these norms without acknowledging “its radical departure from existing principles”.²⁶ In particular, by disregarding the distinction between one’s own conduct and third-party conduct, the Government is proposing to create “a generic basis for liability [that] has the potential to spread like spilt milk, with negative impacts on society at large as well as unjust consequences for the person subject to the liability”.²⁷
The policy further exacerbates the problem by subjecting internet companies to a generic liability for “offensive content”. The White Paper does not define this term, nor does it clarify what standard would be applied to determine when content is offensive. The logical consequence of this omission is that the term must be treated as subjective in nature, meaning that whether content is offensive depends on the views of the different people who might access it.
As such, there cannot necessarily be an objective identification of the foreseeable risk arising from particular content. Smith illustrates this problem by comparing the issue to a fictional nail in the floor:
A tweet is not a projecting nail to be hammered back into place, to the benefit of all who may be at risk of tripping over it. Removing a perceived speech risk for some people also removes benefits to others. Treating lawful speech as if it were a tripping hazard is wrong in principle and highly problematic in practice. It verges on equating speech with violence.²⁸
The White Paper therefore appears to impose a very low threshold for the risk of harm from offensive content, under which a wide range of material could be penalised. As a result, it would take few complaints to eliminate access to allegedly offensive content for other internet users, evidencing a seemingly disproportionate approach.
Just the Beginning
It could be argued that both the unconventional widening of the duty of care and the significant lowering of the risk-of-harm threshold evidence a policy that is, on its face at least, unjustifiably broad and intrusive. It may culminate in a gross intrusion by the State on the freedoms of commerce and speech.
At the same time though, the White Paper does represent a growing recognition of the power held by the likes of Facebook and Google as the dominant controllers of modern information flows in society; these are the entities increasingly responsible for controlling what we can and cannot see. The Online Harms White Paper is thus part of a wider effort to regulate the previously unregulated. But at what cost?
 HM Government, Online Harms White Paper (CP 57, 2019).
Ibid.
 Ibid, [3.1].
 Ibid, [4.1].
 Ibid, [4.3].
 Ibid, [3.1].
 Ibid, [3.2].
 Ibid, [7.1].
 Ibid, [7.19].
HM Government, The Queen’s Speech and Associated Background Briefing, On the Occasion of the Opening of Parliament on Monday 14 October 2019 (October 2019), 61.
 L Edwards, ‘With Great Power Comes Great Responsibility?: The Rise of Platform Liability’ in Lillian Edwards (ed) Law, Policy and the Internet (Hart Publishing 2018) 253.
 Ibid, 257.
 Ibid, 258.
 White Paper (n 1), [7.3].
Payam Tamiz v Google Inc [2012] EWHC 449 (QB).
Ibid.
Ibid.
Payam Tamiz v Google Inc [2013] EWCA Civ 68 (CA).
Ibid.
 Edwards (n 10), 259.
Robinson v Chief Constable of West Yorkshire [2018] UKSC 4.
Ibid.
Graham Smith, Online Harms White Paper - Response to Consultation (June 2019), [1.5].
Ibid, [1.7].
Ibid, [1.4].
Ibid, [5.9].