Legal Knots, VLOPs and a White Box
The Digital Services Act and its implications for the EU’s digital economy
On 15 December 2020, the European Commission announced and published two legislative proposals focused on the digital single market. These proposals form part of the Commission’s Digital Single Market Strategy, which it launched in 2015.
Among the proposals was the Digital Services Act (DSA), the aim of which is to ensure “a safe, predictable and trusted online environment”.[1] The proposal states that “the use of [internet intermediary services] has…become the source of new risks and challenges, both for society as a whole and individuals using such services”.[2] The DSA is thus the Commission’s latest attempt at addressing the perceived risks arising from internet platforms by augmenting the existing regulatory framework, in particular the E-Commerce Directive (ECD).
Before the introduction of the ECD, Member States regulated online intermediaries in their own ways, at a time when such intermediaries formed only a novel and relatively small sector of the economy. Eventually, in an attempt to harmonise these rules, the Commission introduced the ECD, which included liability protections for internet platforms.
The original rationale for the ECD echoed the legal developments taking place in the US in the 1990s. Both the Directive and section 230 of the Communications Decency Act of 1996 were based on the idea that internet platforms were mere conduits in the information age. So long as they remained in their passive roles, there was no equitable basis for making such platforms liable for the activity of their users.
It was this kind of thinking that led to the creation of the ‘safe harbour’ limited liability regime contained in the ECD. Under Article 14, an internet platform is not liable for illegal content uploaded to its platform by its users, as long as the platform does not have actual knowledge of the illegality and, when that illegality is brought to its attention, it acts expeditiously to remove or disable access to the content.
In addition, Article 15 states that even where an internet platform is required to remove or disable access to illegal content, it cannot be subject to a general obligation to monitor the content uploaded to its platform by users. In other words, there is no requirement to proactively detect infringing content and ensure its removal.
Articles 14 and 15 of the ECD together make up the safe harbour: a regime of limited regulatory interference that has afforded internet platforms broad commercial freedom and facilitated the growth of today’s tech giants.
Yet, under this regime, such platforms have not only achieved market dominance and remarkable commercial success, but also amassed a tremendous amount of political power and influence, sometimes even surpassing that of a State. Never before have so many of our conversations and cognitive exchanges been “mediated and moderated by private technology firms”.[3] But even in the midst of this grand transformation, two views could be had. On the one hand, internet platforms, in order to protect the individual right to freedom of expression online, “should be protected from measures requiring entries to be erased from indexes, choke-points to be inserted in pipes, and filters to be installed in hosts”.[4]
On the other hand, “they profit from the creative endeavours of others, ignore wider social responsibilities, wilfully turn a blind eye to unlawful content coursing through their systems and refuse to install or close gates that would staunch the flow”.[5] It is this view that would seem to have captured the imaginations of those in Brussels and led to the creation of the Digital Single Market Strategy.
Thus, the DSA marks a significant turning point for the regulation of internet platforms in the EU. The proposal “seeks to ensure the best conditions for the provision of innovative digital services in the internal market, to contribute to online safety and the protection of fundamental rights, and to set a robust and durable governance structure for the effective supervision of providers of intermediary services”.[6]
A Layered Approach
The rules contained in the DSA apply in a layered fashion based on the type and size of the internet intermediary in question. Accordingly, different obligations are placed on different intermediaries.
The starting point for this differentiation is Article 2, which contains the definitions under the Regulation. Under paragraph (f) of that Article, an “intermediary service” could refer to one of three types of entities. A “mere conduit” is a service that involves the transmission of information through a communication network or the provision of access to a communication network. A “caching” service means a service involving the transmission of information via a communication network entailing the automatic, intermediate and temporary storage of that information. Examples of entities falling within these categories include internet service providers or domain name registrars.
The third type of entity is a “hosting” service. This is a service that, simply put, involves the storage of user-generated content (UGC). This would include, for instance, cloud or web hosting services like Amazon Web Services or WordPress.com (on which this website is hosted). Article 2(h) develops this concept further with the definition of an “online platform”: a provider of a hosting service that stores and disseminates UGC to the public. This is unless the service is ancillary to the provision of another service, such as the comments section of a news website. Social media platforms would thus certainly fall within the definition of an online platform for the purposes of the DSA.
Article 25 then adds a further layer with its definition of “very large online platforms” (VLOPs). A VLOP is an online platform that provides its services to a number of average monthly active users in the Union equal to or higher than 45 million. An obvious example of this would be Facebook which, in the second quarter of 2020, had over 400 million monthly active users in the EU.
The obligations under the Regulation, for the most part, are thus predicated on these definitions, with VLOPs being subjected to the most stringent rules.
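To illustrate how these tiers fit together, the following is a minimal sketch in Python of the classification logic described above; the class, field names and `dsa_tier` function are purely illustrative assumptions, with only the tier labels and the 45 million user threshold taken from the proposal.

```python
from dataclasses import dataclass

# Hypothetical sketch of the DSA's layered classification (Articles 2 and 25).
# Only the tier labels and the 45 million threshold come from the proposal;
# the class and field names are assumptions for illustration.

VLOP_THRESHOLD = 45_000_000  # average monthly active users in the Union (Article 25)

@dataclass
class Service:
    stores_user_content: bool         # a "hosting" service under Article 2(f)
    disseminates_to_public: bool      # an "online platform" under Article 2(h)
    dissemination_is_ancillary: bool  # e.g. the comments section of a news website
    avg_monthly_eu_users: int

def dsa_tier(s: Service) -> str:
    """Return the tier that determines which set of obligations applies."""
    if not s.stores_user_content:
        return "mere conduit / caching service"
    if not s.disseminates_to_public or s.dissemination_is_ancillary:
        return "hosting service"
    if s.avg_monthly_eu_users >= VLOP_THRESHOLD:
        return "very large online platform (VLOP)"
    return "online platform"

# A social network with over 400 million monthly active EU users lands in the top tier.
print(dsa_tier(Service(True, True, False, 400_000_000)))
```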
Territorial Application
The application of the DSA is determined by Article 1(3). That provision states that the Regulation applies to intermediary services provided to users residing in the EU. This is regardless of where the service provider may be based. This echoes the extraterritorial reach of the GDPR and once again signals the EU’s desire to regulate the digital realm beyond its own borders.
The question of whether an intermediary can be considered to be providing its service to EU users is determined by the “substantial connection” test.[7] A substantial connection between the intermediary and EU users can be demonstrated by either the existence of a significant number of EU users or the targeting of activities towards the EU.[8] Such targeting activities could include, for example, the availability of an application in the relevant national application store.[9] However, the mere accessibility of a website from the EU cannot, on this ground alone, constitute a substantial connection with the EU.[10] Thus, the DSA is capable of applying to the many US intermediaries based outside of the EU.
In terms of the allocation of enforcement responsibilities among the Member States, this depends on whether a Member State has the necessary jurisdiction. Under Article 40, there are three ways to determine whether a Member State has jurisdiction over an intermediary.
Firstly, and most straightforwardly, a Member State has jurisdiction over intermediaries with a main establishment located in that Member State.[11] A main establishment is the place where an intermediary has its head office or registered office where the principal financial functions and operational control are exercised.[12]
Secondly, where an intermediary is not based in the EU, but offers its services in the EU, the Member State where its legal representative resides or is established has jurisdiction.[13] Intermediaries outside of the EU without a main establishment are responsible for designating a legal representative who deals with compliance issues directly with either Member State authorities or the other authorities who can enforce the Regulation.[14]
Thirdly, where an intermediary is not based in the EU and has not appointed a legal representative, then all Member States in the EU have jurisdiction to enforce the Regulation against that intermediary.[15] However, the public international law principle of ne bis in idem applies, which means that nobody should be judged twice for the same offence. Thus, Member States must cooperate with each other when enforcing the DSA in this context.[16]
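As a rough sketch of how these three jurisdiction rules cascade, consider the following Python outline; the function and parameter names are illustrative assumptions rather than terms used in the Regulation.

```python
from typing import List, Optional

# Hedged sketch of the three jurisdiction rules in Article 40 of the proposed DSA.

def competent_member_states(main_establishment_ms: Optional[str],
                            legal_representative_ms: Optional[str],
                            all_member_states: List[str]) -> List[str]:
    """Return the Member State(s) that may enforce the DSA against an intermediary."""
    if main_establishment_ms:
        # Article 40(1): the Member State of the main establishment has jurisdiction.
        return [main_establishment_ms]
    if legal_representative_ms:
        # Article 40(2): otherwise, the Member State where the legal representative resides.
        return [legal_representative_ms]
    # Article 40(3): no EU establishment and no representative appointed --
    # every Member State has jurisdiction, subject to ne bis in idem and cooperation.
    return list(all_member_states)

# Example: an intermediary established in Ireland answers to the Irish authorities.
print(competent_member_states("Ireland", None, ["Ireland", "France", "Germany"]))
```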
Get Your House in Order
One of the most notable aspects of the proposed DSA is the set of content moderation rules. Under the Regulation, “content moderation” means the detection, identification and addressing of illegal content or content that infringes an intermediary service’s terms and conditions (T&Cs).[17] This includes adjusting the availability, visibility or accessibility of content.[18] The DSA therefore contemplates content moderation to involve the demotion, disabling of access or removal of content and even the termination of a user’s account.[19]
Under the DSA, such content moderation can, essentially, be carried out by intermediaries on two different bases. The first basis is voluntary; under Article 6, intermediaries may engage in voluntary own-initiative investigations to detect, identify and remove or disable access to illegal content. However, when dealing with UGC that infringes the T&Cs of the service, intermediaries must act in a diligent, objective and proportionate manner in applying and enforcing such policies. This includes having due regard to the rights and legitimate interests of all parties involved, together with the fundamental rights of users under the EU Charter of Fundamental Rights.
To support such efforts, intermediaries engaging in voluntary content moderation do not lose the limited conditional liability codified under the DSA, which closely resembles the safe harbour under the ECD. Thus, under Article 5(1), a hosting service (which includes online platforms and VLOPs) will not be liable for illegal UGC if it does not have actual knowledge of the illegal content or, upon obtaining such knowledge, it acts expeditiously to remove it. In addition, Article 7 provides that no intermediary service is required to monitor content on its platform or actively to seek facts or circumstances indicating illegal activity.
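Reduced to its bare logic, the Article 5(1) condition might be sketched as follows; this is only an illustration of the rule as described above, not of how any platform actually implements it.

```python
def hosting_provider_liable(has_actual_knowledge: bool,
                            acted_expeditiously_once_aware: bool) -> bool:
    """Sketch of the Article 5(1) safe harbour: a hosting service is exposed to
    liability for illegal user content only where it has actual knowledge of the
    illegality and then fails to act expeditiously to remove or disable access."""
    return has_actual_knowledge and not acted_expeditiously_once_aware

# A platform that is unaware of the content, or that removes it promptly upon
# obtaining knowledge (e.g. via an Article 14 notice), stays within the safe harbour.
print(hosting_provider_liable(False, False))  # -> False
print(hosting_provider_liable(True, True))    # -> False
print(hosting_provider_liable(True, False))   # -> True
```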
The application of limited conditional liability with regard to voluntary content moderation has been dubbed the ‘Good Samaritan principle’. This is where “online intermediaries are not penalized for good faith measures against illegal or other forms of inappropriate content”.
The second basis on which content moderation can take place is where such moderation is mandatory under the Regulation. Under Article 8, intermediaries must comply with orders from judicial or national authorities to take down illegal content. In doing so, the intermediary must, without undue delay, inform the authority giving the order of how it has given effect to the order, specifying the action taken and when the action was taken.
Under Article 14, hosting service providers must implement mechanisms allowing any of their users to notify the service provider of content on the platform that may be illegal. This notice-and-action mechanism (N&A) must be easy to access, user-friendly, and allow for the submission of notices exclusively by electronic means.
The notice submitted by a user must include, among other things, an explanation as to why they believe the content to be illegal and a statement confirming their good faith belief that the information and allegations contained in the notice are accurate and complete. Such notices constitute actual knowledge for the purposes of Article 5(1) and the hosting service provider must inform the user making the submission of the decision taken without undue delay. In addition, such notices must be processed in a timely, diligent and objective manner.
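To make the shape of such a notice concrete, here is a minimal sketch of the elements Article 14 requires; the `IllegalContentNotice` class and its field names are illustrative assumptions, not terms prescribed by the Regulation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch of an Article 14 notice; the class and field names are
# assumptions, although the elements mirror those described above.

@dataclass
class IllegalContentNotice:
    content_url: str                # exact electronic location of the content
    explanation_of_illegality: str  # why the notifier considers the content illegal
    notifier_name: str
    notifier_email: str
    good_faith_statement: bool      # confirmation that the notice is accurate and complete
    submitted_at: datetime

def gives_actual_knowledge(notice: IllegalContentNotice) -> bool:
    """A sufficiently precise and substantiated notice gives rise to actual
    knowledge for the purposes of Article 5(1)."""
    return bool(notice.content_url
                and notice.explanation_of_illegality
                and notice.good_faith_statement)

notice = IllegalContentNotice(
    content_url="https://example.com/post/123",
    explanation_of_illegality="The post offers counterfeit goods for sale.",
    notifier_name="Jane Doe",
    notifier_email="jane@example.com",
    good_faith_statement=True,
    submitted_at=datetime.now(timezone.utc),
)
print(gives_actual_knowledge(notice))  # the provider must then decide and inform the notifier
```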
The N&A mechanism is a clear augmentation of the ECD provisions, prescribing in greater detail how intermediaries are to deal with illegal content that is flagged to them by users. However, it is interesting to note that the N&A mechanism stipulated under Article 14 does not apply to content that may infringe an intermediary service’s T&Cs and only applies to illegal content as defined under the DSA. This suggests that intermediaries are free to enforce their T&Cs as they see fit under the guise of voluntary content moderation, thereby benefitting from the limited conditional liability applied to such activity.
This leads into one of the problems with the content moderation rules in the Regulation, in particular the Good Samaritan principle under Article 6. As the Center for Democracy and Technology (CDT) points out, the DSA “shields intermediaries from liability for their own efforts to remove illegal content but exposes them to liability based on mere assertions by anyone”. Under the proposal, the light-touch approach to liability embodied by the ECD is applied when intermediaries carry out content moderation of their own accord. This gives them a broad discretion as to how they deal with content infringing their own T&Cs, or even illegal content that they detect independently.
However, a greater problem with the content moderation rules under the DSA is that, ultimately, private actors will be assuming State-like responsibilities in policing their platforms. In particular, intermediaries will be playing three distinct roles: they will be acting like a legislature when “defining what constitutes legitimate content on their platforms”; they will be acting like judges “who determine the legitimacy of content in particular instances”; and they will be acting like administrative agencies “who act on [their own] adjudications to block illegitimate content”.[20]
The problem with intermediaries taking on these roles, especially the judicial role, is that there exists a conflict of interest. On the one hand, internet intermediaries “are commercial players which compete in data capitalists markets for users, business partners, and data-driven innovation”.[21] On the other hand, they are required to use their technical capabilities to moderate activity on their platforms, which for social media platforms inevitably involves the regulation of people’s speech. As such, the DSA potentially “blurs the distinction between private interests and public responsibilities”.[22]
Furthermore, intermediaries are becoming increasingly reliant on AI-powered content filtering systems to moderate their platforms at scale. However, such systems “effectively blend norm setting, law enforcement, and adjudication powers”.[23] In particular, content filters are not always successful at detecting the nuances that may mean a given piece of UGC is not in fact illegal. A common example of this is in relation to copyright, where consideration must be given to whether certain UGC benefits from ‘fair use’ or another lawful exception under the relevant copyright law. Content filters may not always detect when these exceptions apply, and thus such “errors in algorithmic content moderation may result in censoring legitimate content, and sometimes also in disproportionately censoring some groups”.[24]
The controversies surrounding the delegation of public responsibilities to private actors are more heightened in the context of online speech. In its proposal, the Commission states that ‘harmful’ content, while not necessarily constituting illegal content, will not be defined by the DSA nor be subject to removal obligations since “this is a delicate area with severe implications for the protection of freedom of expression”.[25] However, such regulation may nevertheless come through the backdoor due to the definition of “illegal content” provided in the DSA; it includes any information (either in itself or by reference to an activity, including the sale of goods or the provision of services) which is not in compliance with Union law or the law of a Member State.[26]
The potential problem here is that such a definition plugs the DSA into a body of caselaw from the European Court of Human Rights (ECtHR) that has, so far, lacked clarity on the question of so-called “hate speech”. More specifically, the Court has somewhat struggled on “the demarcation line between types of harmful expression that ordinarily are entitled to protection and the most harmful types of expression that attack the values of the [European Convention on Human Rights] and therefore do not enjoy protection”.[27]
Although the EU itself has not yet acceded to the European Convention on Human Rights (the Convention), all of its Member States are bound by it. Furthermore, the meaning and scope of the rights contained in the EU Charter must be the same as those laid down by the Convention in so far as the rights contained in either text correspond with each other.[28]
Article 10 of the Convention, as well as Article 11 of the Charter, states that everyone has the right to freedom of expression. This includes the freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. However, such a right is not without limitations as it carries with it duties and responsibilities. Thus, free expression may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society on various legitimate grounds. For example, restrictions may be placed on free expression for the protection of health or morals or for the protection of the reputation or rights of others (eg defamation).
The ECtHR “has by and large interpreted Article 10 expansively and in a way that is faithful to the broad principles of freedom of expression”. In other words, free expression is the default rule, whereas its limitations are exceptions which must be examined on a case-by-case basis. This has been applied even to offending, shocking or disturbing ideas, for such ideas must be allowed to circulate to ensure “pluralism, tolerance and broadmindedness without which there is no democratic society”.[29]
Hate speech, while not appearing anywhere in the text of the Convention, is a term referring to speech that is so vulgar and offensive that it cannot possibly warrant protection under Article 10. However, the regulation of hate speech by the ECtHR has not historically been carried out on the basis of Article 10. Rather, Article 17 of the Convention has been the source of the Court’s jurisprudence on such speech. That provision states that nothing in the Convention shall be interpreted as allowing anyone to engage in any activity or perform any act aimed at the destruction of any of the rights and freedoms contained in the Convention or at their limitation to a greater extent than is provided for in the Convention. Accordingly, Article 17 acts as a “safety valve that denies protection to acts that seek to undermine the Convention and go against its letter and spirit”.[30]
However, the criteria for using Article 17 of the Convention as a basis for suppressing hate speech have been somewhat ambiguous. In Delfi AS v Estonia, the ECtHR held that an online news portal can be liable for unlawful hate speech posted on its platform.[31] However, while the Court was clear on the question of liability, it avoided the preliminary question of what constitutes hate speech. The content in that case directly advocated acts of violence and thus constituted hate speech deemed unlawful under Article 17. Even so, the lack of any analysis of the criteria for determining hate speech under that provision leaves the question open in relation to speech that does not directly advocate violence but may be considered ‘borderline’ or otherwise offensive. The balance that should be struck between Articles 10 and 17 of the Convention therefore remains to be clarified.
Yet, the DSA proposes delegating that difficult question to internet intermediaries that are not necessarily focused on upholding the rule of law. A resulting concern is “the risk of over-censorship and the removal of content ‘to be on the safe side’ and to thereby avoid incurring liability for such content”.[32] The actions of various tech companies in the aftermath of the Capitol Hill riots in January 2021 could be cited as an example of this. One could question whether internet platforms removed former President Trump’s accounts after the riots on the basis of the illegality, or at least the immorality, of his actions or rather on the basis that it was commercially expedient to do so given that other platforms were doing the same.
There are some provisions in the proposed DSA that could help to mitigate these problems. Firstly, Article 17 of the DSA states that online platforms must provide users with access to an effective internal complaints handling system. It is through this system that aggrieved users should be able to, by electronic means and free of charge, lodge complaints in relation to content moderation decisions made by online platforms on the basis that the content in question was deemed to be illegal or to infringe the platform’s T&Cs. This pertains to decisions for the removal or disabling of access to content, the suspension or termination of the service, or the suspension or termination of a user’s account. Online platforms must then reverse the decision made if the complaining user has presented grounds for doing so. Action must also be taken against users that abuse this complaint mechanism or the N&A mechanism under Article 14.[33]
Secondly, Article 18 provides for the possibility of out-of-court dispute settlement. Under this provision, certain certified bodies can resolve disputes relating to content moderation decisions made by online platforms. Such bodies must be impartial and independent, equipped with the necessary expertise, easily accessible through electronic communication technology, capable of settling disputes swiftly, effectively and in a cost-effective manner, and have clear and fair rules of procedure. Such a system is rather unprecedented, and it will be interesting to see how it works in practice.
Thirdly, Article 19 makes provision for so-called ‘trusted-flaggers’. These are essentially entities that can demonstrate the necessary competence, expertise and independence in tackling illegal content. For example, organisations committed to notifying illegal, racist and xenophobic expressions online may be capable of being trusted flaggers.[34] Online platforms must take the necessary technical and organisational measures to ensure that notices submitted by trusted flaggers relating to allegedly illegal content are processed and decided upon with priority and without delay.
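One way a platform might honour the requirement that such notices be handled with priority is a simple priority queue, as in the sketch below; this is merely one possible, assumed implementation approach, not anything mandated by the Regulation.

```python
import heapq
from dataclasses import dataclass, field

# One possible (assumed) way to process trusted-flagger notices "with priority
# and without delay" under Article 19: a priority queue.

@dataclass(order=True)
class QueuedNotice:
    priority: int                               # 0 = trusted flagger, 1 = ordinary user
    notice_id: str = field(compare=False, default="")

queue: list = []

def submit(notice_id: str, from_trusted_flagger: bool) -> None:
    heapq.heappush(queue, QueuedNotice(0 if from_trusted_flagger else 1, notice_id))

def next_notice() -> str:
    """Trusted-flagger notices always reach the moderation team first."""
    return heapq.heappop(queue).notice_id

submit("n-1", from_trusted_flagger=False)
submit("n-2", from_trusted_flagger=True)
print(next_notice())  # -> "n-2"
```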
Black Boxes No More?
In addition to the complaint mechanisms, the DSA also proposes rules for greater transparency by intermediaries. Such rules may also help to mitigate the problems arising from the judicial role assumed by intermediaries carrying out content moderation on their platforms.
The starting point for this is Article 12. This provision states that intermediary service providers must include in their T&Cs information on any restrictions that they impose in relation to the use of their service in respect of UGC. This includes information on any policies, procedures, measures and tools for the purpose of content moderation, including algorithmic decision-making and human review. The use of unambiguous language is required and the T&Cs must be publicly available. One criticism that could be raised, however, is that the DSA does not articulate specific information that must be included in the T&Cs similar to the way that the GDPR standardises the content of privacy notices.[35]
Specific to decisions made by intermediaries when carrying out content moderation is Article 13, which mandates the publication of content moderation reports. Such reports must be published at least once a year and provide a detailed, clear and easily accessible account of the content moderation activities carried out by the intermediary. A wide range of information must be included in such reports, including the number of orders received from Member State authorities, the number of notices received from users, the number and type of measures taken, and the number of complaints received in respect of those measures (including the average time taken to process these complaints and whether any decisions were reversed).
Under Article 15, where a provider of a hosting service takes a content moderation measure against a user, it must convey to the user the measure taken and a specific statement of the reasons for taking the measure. In particular, that statement must contain, inter alia, the facts and circumstances relied on in taking the decision, the legal provisions relied on if the UGC was considered to be illegal content, and the provisions of the T&Cs relied on if the UGC was in violation of those T&Cs. This information must be conveyed in a clear and easily comprehensible manner and be as precise and specific as reasonably possible under the given circumstances.
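A statement of reasons under Article 15 could be modelled roughly as follows; the class and field names are assumptions for illustration, and the automation and redress fields reflect the other elements listed in Article 15(2) of the proposal rather than anything described above.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Sketch of an Article 15 statement of reasons; class and field names are assumptions.

@dataclass
class StatementOfReasons:
    measure_taken: str                   # e.g. "removal" or "disabling of access"
    facts_and_circumstances: str         # what the decision relied on
    automated_means_used: bool           # whether detection or the decision was automated
    legal_ground: Optional[str] = None   # the legal provision, if the content was illegal
    terms_ground: Optional[str] = None   # the T&C provision, if the terms were breached
    redress_options: Tuple[str, ...] = (
        "internal complaint handling (Article 17)",
        "out-of-court dispute settlement (Article 18)",
        "judicial redress",
    )

    def is_specific_enough(self) -> bool:
        """At least one ground must be cited alongside the facts relied upon."""
        return bool(self.facts_and_circumstances) and bool(self.legal_ground or self.terms_ground)
```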
The other provisions on transparency are specifically focused on online platforms as opposed to intermediary service providers in general. To begin with, Article 23 states that online platforms must produce transparency reports that contain further information in addition to that required for content moderation reports under Article 13. This further information includes the number of disputes submitted to the out-of-court dispute settlement bodies (Article 18) and their outcomes, the number of suspensions imposed on those abusing the complaints or N&A mechanisms (as per Article 20), and any use of automatic means for content moderation activities.
The proposed DSA also contains specific provisions on online advertising in relation to online platforms and VLOPs respectively. Firstly, Article 24 stipulates that, whenever a user is shown an advertisement on an online platform, certain information must be made accessible to that user by the platform. That information includes the confirmation that the content being displayed is in fact an advertisement, the person on whose behalf the advertisement is displayed, and meaningful information about the main parameters used to determine the user to whom the advertisement is displayed (ie why the user was shown the particular advertisement on display).
Separately, for VLOPs displaying advertising on their platform, Article 30 states that they must compile and make publicly available through application programming interfaces (APIs) specific information. This includes the content of the advertisement, the person on whose behalf the advertisement is displayed, the period during which the advertisement was displayed, whether the advertisement was intended to be displayed specifically to one or more particular groups of users and if so the main parameters used for that purpose, and the total number of users reached with the advertisement and, where applicable, aggregate numbers for the group or groups of users to whom the advertisement was targeted specifically. Such information must be contained in a repository accessible through the API up until one year after the advertisement was displayed.
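The kind of record an Article 30 ad repository would have to expose through its API might look something like the following sketch; the field names and the one-year retention check are illustrative assumptions based on the elements listed above.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Dict, Optional

# Sketch of a single record in an Article 30 ad repository exposed via an API.

@dataclass
class AdRepositoryRecord:
    ad_content: str                                 # the content of the advertisement
    advertiser: str                                 # the person on whose behalf it was displayed
    display_start: date
    display_end: date
    targeted: bool                                  # aimed at one or more specific groups?
    targeting_parameters: Optional[Dict[str, str]]  # main parameters used, if targeted
    total_users_reached: int
    users_reached_per_group: Optional[Dict[str, int]]

def must_remain_available(record: AdRepositoryRecord, today: date) -> bool:
    """Records stay in the repository until one year after the ad was last displayed."""
    return today <= record.display_end + timedelta(days=365)
```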
Article 29 provides rules on recommender systems used by VLOPs. The DSA defines a “recommender system” as a fully or partially automated system used by an online platform to suggest in its online interface specific information to users of the service. This could be as a result of a search initiated by a user or other ways of determining the relative order or prominence of the information displayed to a user. VLOPs must set out in their T&Cs, in a clear, accessible and easily comprehensible manner, the main parameters used in their recommender systems, as well as any options for users to modify or influence those main parameters. Users must be provided with an option that exempts them from profiling, the definition of which is borrowed from Article 4(4) of the GDPR. The interface for this must be easily accessible where multiple options are offered to users for adjusting the recommender system. One could argue here that, on the basis of Articles 13(2)(f), 14(2)(g) and 22, the GDPR already requires controllers to provide such information and user control in relation to automated decision-making systems. Thus, there is a question as to how these clashing provisions under the DSA and the GDPR could be reconciled, if at all.
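To give a flavour of what Article 29 contemplates, the sketch below shows a toy recommender whose ‘main parameters’ are disclosed and which offers a non-profiling, purely chronological alternative; the parameter names, weights and chronological fallback are all assumptions for illustration, not requirements of the Regulation.

```python
# Toy sketch of a VLOP recommender with disclosed "main parameters" and a
# non-profiling alternative. Everything below is assumed for illustration.

MAIN_PARAMETERS = {          # the kind of disclosure Article 29(1) contemplates in the T&Cs
    "history_affinity": 0.6,
    "topic_affinity": 0.3,
    "freshness": 0.1,
}

def rank_items(items, use_profiling=True):
    """Return items in the order they would be displayed to the user."""
    if not use_profiling:
        # The option not based on profiling (Article 29(1), GDPR Article 4(4)):
        # here, a simple reverse-chronological feed.
        return sorted(items, key=lambda i: i["published_at"], reverse=True)
    return sorted(
        items,
        key=lambda i: sum(weight * i[signal] for signal, weight in MAIN_PARAMETERS.items()),
        reverse=True,
    )

posts = [
    {"published_at": 2, "history_affinity": 0.9, "topic_affinity": 0.2, "freshness": 0.1},
    {"published_at": 5, "history_affinity": 0.1, "topic_affinity": 0.3, "freshness": 0.9},
]
print(rank_items(posts))                       # personalised order
print(rank_items(posts, use_profiling=False))  # chronological order, no profiling
```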
Furthermore, these rules on recommender systems raise two other issues. Firstly, there may be barriers to explainability. For one, there is a question of whether recommender systems can be explained to users in an accessible manner, let alone whether users can be given meaningful options to modify how such systems work. Some of the recommendation engines deployed by the likes of TikTok or Instagram utilise deep learning algorithms, the interpretability of which may not always be straightforward (a difficulty that has come to be known as the ‘black box’ problem, although it can be tackled to a certain extent). Also, recommendation engines often form a precious part of the intellectual property of online platforms. Accordingly, such platforms may be reluctant to explain the decision-making process of their algorithms in any great detail to users.
Additionally, the rules on recommender systems in the DSA may be missing a trick in relation to internet content creators. These stakeholders engage in a novel type of economic activity that exists on the internet: generating revenue by displaying advertisements alongside the content they create. For example, content creators on YouTube, if they meet the required criteria,[36] have the option to place advertisements in or around their videos. These advertisements are generated by YouTube and derive from various companies or brands that the platform may have branding arrangements with. Where the advertisement placed by the content creator is viewed by a user, that creator receives a portion of the generated revenue. Generally, the more views a creator can attain, the greater the revenue they can generate.
However, changes that YouTube makes to its content moderation algorithms, in particular any recommendation engines, can affect the views obtained by a creator and thus the revenue that they can generate. As a result, many creators “feel that their livelihoods hang at the whims of mysterious algorithms”. It is acknowledged in Recital (62) of the proposed DSA that such recommendation engines can have a significant impact on the ability of users to retrieve and interact with information. However, while the DSA does require transparency on the use of such recommendation engines, this transparency is only focused on content that is illegal or infringes the T&Cs. The rules do not pertain to the modification of recommendation engines at the potential expense of content creators. This could end up being a significant oversight in the future.
Know Your Platform
Apart from the transparency requirements imposed by the DSA, the proposed Regulation also contains a number of risk management provisions exclusively aimed at VLOPs. Recital (56) states that VLOPs are used in a way that strongly influences safety online, the shaping of public opinion and discourse, as well as online trade. Accordingly, without effective regulation and enforcement, such platforms may not identify and mitigate the risks and the societal and economic harms that can arise on their platforms. The risk management provisions under the DSA thus focus on obligations around both risk identification and mitigation.
This starts with Article 26; VLOPs are required to carry out risk assessments on their platforms at least once a year. Such assessments must identify, analyse and assess any significant systemic risks stemming from the functioning and use made of their services in the EU. Article 26 lists the systemic risks that should be identified in the assessment, which extend beyond the dissemination of illegal content to include two other broadly-worded systemic risks. Firstly, there are negative effects on the exercise of fundamental rights under the EU Charter, including Articles 7 (right to privacy), 11 (freedom of expression and information), 21 (non-discrimination) and 24 (the rights of the child). Secondly, there is the intentional manipulation of the service, including by means of inauthentic use or automated exploitation of the service, with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, or actual or foreseeable effects related to electoral processes and public security.
There is thus a wide variety of systemic risks that VLOPs must identify on their platforms. These include illegal hate speech, counterfeit products, methods for silencing speech or hampering competition, fake accounts and the use of bots.[37] In conducting these risk assessments, VLOPs must consider how their content moderation systems, recommender systems and systems for selecting and displaying advertisements influence any of the systemic risks, including the potentially rapid and wide dissemination of illegal content and of information that is incompatible with their T&Cs.
After identifying these systemic risks, Article 27 states that VLOPs must implement reasonable, proportionate and effective mitigation measures to address those specific risks. Such measures may include adapting content moderation or recommender systems, targeted measures aimed at limiting the display of advertisements, or reinforcing the internal processes or supervision used to detect systemic risks. The measures listed in Article 27 are not exhaustive, but any other mitigation measures implemented by VLOPs must be effective and appropriate for the specific risks identified on the platform and be proportionate in light of the platform’s economic capacity and the need to avoid unnecessary restrictions on the use of their service.[38]
Article 28 imposes a particularly onerous obligation on VLOPs that completes the risk management regime under the proposed DSA. That Article states that VLOPs shall be subject, at their own expense and at least once a year, to audits assessing compliance with the Regulation, including its transparency and due diligence obligations. The audit must be carried out by organisations that are independent from the VLOP, have proven expertise in the area of risk management, technical competence and capabilities, and have proven objectivity and professional ethics, based in particular on adherence to codes of practice or appropriate standards.
The required contents of the audit report are also stipulated in Article 28. This includes the main findings, either a positive or negative opinion on whether the VLOP complied with its obligations under the DSA, and where the opinion is not positive, operational recommendations on specific measures to achieve compliance. In addition, within one month of receiving those recommendations, the VLOP must adopt an audit implementation report setting out the necessary measures to implement the recommendations. Reasons and alternative measures must be contained in the implementation report where the VLOP does not adopt measures to implement the recommendations from the audit report.
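The audit cycle under Article 28 could be modelled in outline as follows; the class names and the 30-day approximation of ‘one month’ are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List

# Sketch of the Article 28 audit cycle; class names are illustrative assumptions.

@dataclass
class AuditReport:
    main_findings: str
    opinion: str                                       # "positive" or "negative"
    operational_recommendations: List[str] = field(default_factory=list)
    received_on: date = field(default_factory=date.today)

def implementation_report_due(report: AuditReport) -> date:
    """The VLOP must adopt an audit implementation report within one month
    of receiving the operational recommendations (approximated here as 30 days)."""
    return report.received_on + timedelta(days=30)

audit = AuditReport(
    main_findings="Ad repository lacks aggregate reach figures for targeted groups.",
    opinion="negative",
    operational_recommendations=["Publish aggregate reach per targeted group via the API."],
)
print(implementation_report_due(audit))
```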
These risk management provisions would mark a fairly drastic change in the regulation of internet intermediaries in the EU going well beyond what was previously envisaged by the ECD. However, it is not clear how the risk management provisions are consistent with the prohibition against general monitoring under Article 7. Recital (28) provides that nothing in the DSA should be construed as an imposition of a general monitoring obligation or active fact-finding obligation, or as a general obligation for providers to take proactive measures in relation to illegal content. This is despite the fact that certain risk mitigation measures, such as reinforcing the internal processes or supervision to detect systemic risks (under Article 27), may implicitly require, or at least encourage, a form of general monitoring to implement such measures.
Conclusion
Overall, the DSA represents a stark evolution in the regulation of internet platforms in the EU. It does so in particular by mandating certain procedures and transparency regarding content moderation as well as imposing detailed risk management obligations. Thus, the DSA moves away from the self-regulation approach of the past 20 years and embraces more granular legislation accompanied by aggressive sanctions; fines for non-compliance for VLOPs can be as high as 6 per cent of total turnover.[39]
Given its significance, the question that many will be asking is when the proposed DSA will turn into binding law. The next step will be for the European Parliament and the Council of the European Union to scrutinise the proposals, after which a final text will need to be agreed. This could take several years, much like the GDPR, which was first proposed in January 2012 but only agreed in December 2015 and formally adopted in April 2016. That said, given the importance and urgency of the DSA’s subject matter, the process for adoption could be quicker. This will ultimately depend on the level of agreement between the different EU institutions and the Member States.
However, the longer the adoption process takes, the more inclined other Member States may be to take matters into their own hands. France, for instance, is working on its own legislation similar to the DSA. As such, the Commission has warned the tech platforms that, unless they want to be hit with a fragmented regulatory landscape in Europe, they should work with the EU to ensure the passage of the DSA. But given the disruptive lobbying known to be deployed by some of these platforms, with the proposed e-Privacy Regulation being the most recent example of this, it remains to be seen how collaborative such companies will be this time around.
[1] Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC (15 December 2020), Article 1(2)(b).
[2] Ibid, p.1.
[3] Jamie Susskind, Future Politics: Living Together in a World Transformed by Tech (OUP 2018), 190.
[4] Graham Smith, Internet Law and Regulation (5th edn Sweet and Maxwell 2020), [5-063].
[5] Ibid.
[6] DSA (n 1), p.2.
[7] Ibid, Recital (7).
[8] Ibid, Recital (8).
[9] Ibid.
[10] Ibid.
[11] Ibid, Article 40(1).
[12] Ibid, Recital (76).
[13] Ibid, Article 40(2).
[14] Ibid, Article 11(2).
[15] Ibid, Article 40(3).
[16] Ibid.
[17] Ibid, Article 2(p).
[18] Ibid.
[19] Ibid.
[20] Giancarlo Frosio (ed), The Oxford Handbook of Online Intermediary Liability (OUP 2020), 669.
[21] Ibid, 671.
[22] Ibid, 670.
[23] Ibid, 671.
[24] Ibid, 672.
[25] DSA (n 1), p.9.
[26] Ibid, Article 2(g).
[27] Frosio (n 20), 484.
[28] European Charter of Fundamental Rights, Article 52(3).
[29] Handyside v UK, App no. 5493/72 (ECHR, 7 December 1976), [49].
[30] Frosio (n 20), 469.
[31] Delfi AS v Estonia, App no. 64569/09 (ECHR, 16 June 2015).
[32] Frosio (n 20), 483.
[33] DSA (n 1), Article 20.
[34] DSA (n 1), Recital (46).
[35] See Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), Articles 13 and 14.
[36] In order to generate ad revenue from videos, the creator has to turn on video monetisation for their YouTube channel. In order to turn on video monetisation, the creator has to be part of the YouTube Partner Programme. Once admitted into this programme, the creator will have the option to monetise their videos. However, if choosing to place ads on their videos, the video must meet the advertiser-friendly content guidelines.
[37] DSA (n 1), Recital (57).
[38] Ibid, Recital (58).
[39] Ibid, Article 59.
Other Sources:
Sacha Green and Inge Govaere, The Internal Market 2.0 (Hart Publishing 2020)
France pushes for big changes to proposed EU tech regulation
‘I can’t trust YouTube anymore’: creators speak out in Google advertising row
European Commission Proposes New Rules for Digital Platforms | Wilson Sonsini
Europe’s Digital Services Act (Stanford University Webinar)
Further Exploration of Europe’s Digital Services Act (Stanford University Webinar)