The Cyber Solicitor

Data Rights

Why Substack is asking for your age now

A primer on the UK Online Safety Act and age assurance

Mahdi Assan
Feb 06, 2026

Late last year, I received this email from Substack:

And then when trying to access the direct messaging feature on the Substack app, I was presented with this:

In fact, if you are in the UK, you may have noticed a number of websites requiring your age to be verified before you can access them. Or you may have seen this in the news.

  • Article from The Financial Times

  • Article from BBC News

  • Article from Sky News

Now this was not just limited to adult websites. Even platforms like X, Reddit and TikTok have been introducing some form of age checks. And lots of people have been wondering why.

The main reason is a law in the UK called the Online Safety Act 2023 (OSA). The main purpose of this legislation is to impose obligations on online platforms operating in the UK to ensure that they are safe for UK users, especially children.

In this newsletter, I attempt to explain, in simple terms, why OSA has led to certain websites like Substack asking for your age. I will cover:

  • What OSA is and what it requires

  • The specific requirements regarding age checks

  • The data protection issues with age checks

  • What this all really means

What is the Online Safety Act and what does it require of Substack?

The main aim of OSA is to provide a regulatory framework for making the use of internet services safer for people in the UK. It does this by:

  • Imposing duties on certain service providers to identify, mitigate and manage risks of harm from illegal content and activity and content and activity that is harmful to children

  • Conferring certain functions and powers to Ofcom, the UK regulator for broadcasting, internet and telecommunications

Among the service providers caught within the scope of OSA are what are called ‘user-to-user services’, or U2U. Under the Act, a U2U service is an internet service that hosts content generated, uploaded or shared by users, which other users of the service can also access.1

In its guidance, Ofcom states that U2U services include the following types of internet services:

  • Social media sites or apps

  • Photo- or video-sharing services

  • Chat or instant messaging services, including dating apps

  • Online or mobile gaming services

Substack describes itself in the following way:

...a new media app that connects you with the creators, ideas, and communities you care about most. Here, you can discover world-class video, podcasts, and writing from a diverse set of creators who cover politics, pop culture, food, philosophy, tech, travel, and so much more.

It is pretty clear then that Substack constitutes a U2U service under OSA, which is also implicit in the communications it has been sending out regarding compliance with this law.

If Substack is a U2U service provider under OSA, then what is it required to do?

The obligations falling on Substack can be found in Chapter 2 of the legislation, which in the main include the following:

  • Carry out an assessment that evaluates how likely users are to encounter illegal content on the service2

  • Take measures to mitigate and manage the risk of illegal content existing on the service, including through, among other things, the design of certain functionalities or other features, drafting policies on terms of use, content moderation or other measures3

  • Enable users to report illegal content or content that is harmful to children4

What are the specific requirements regarding child safety and age checks?

So how do the OSA requirements apply to Substack in such a way that it needs to check the ages of its users?

If you read Substack’s 18+ content policy on its support page, you will see the following:

Substack supports a wide range of writing and creative expression, including material intended for adult audiences. In compliance with the UK Online Safety Act (“OSA”), we are required to provide UK users with a way to report potentially illegal or age-restricted content.

Let’s unpack this a bit.

As suggested in Substack’s policy, the platform allows material to be published for adult audiences and therefore material that may not be suitable for children. Where this is the case, OSA imposes child safety duties on service providers where that service is “likely to be accessed by children.”5

The likelihood of a child accessing a service depends on the outcome of a children’s access assessment.6 This is an assessment that determines whether it is possible for a child to access the service and whether either a significant number of children use the service or the service is likely to attract a significant number of child users.7 All U2U service providers, including Substack, are required to carry out this assessment.8 Interestingly, OSA explicitly states that if a service has not already implemented age checks, it cannot conclude that the service is not likely to be accessed by children.9
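The two-stage test in the children’s access assessment can be sketched as a simple decision procedure. This is an illustrative simplification of the statutory test, with function and parameter names of my own choosing, not legal advice:

```python
def likely_accessed_by_children(
    age_checks_in_place: bool,
    child_can_access: bool,
    significant_child_user_base: bool,
    likely_to_attract_children: bool,
) -> bool:
    """Illustrative sketch of the children's access assessment under OSA."""
    # Stage 1: is it possible for a child to access the service?
    # A service without age checks cannot conclude that children
    # are unable to access it.
    if not age_checks_in_place:
        child_can_access = True
    if not child_can_access:
        return False
    # Stage 2: does a significant number of children use the service,
    # or is the service likely to attract a significant number of child users?
    return significant_child_user_base or likely_to_attract_children
```

On this logic, a service with no age checks and a significant child user base is likely to be accessed by children, whereas a service whose age checks make child access impossible is not.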

If the service is likely to be accessed by children, then a U2U service provider like Substack will also need to carry out a child risk assessment for its service.10 This assessment needs to evaluate how likely child users are to encounter what OSA calls ‘primary priority content’ that is harmful to children. In its guidance, Ofcom has stated that content relating to the following falls in this category:11

  • Pornography

  • Suicide

  • Self-harm

  • Eating disorders

  • Abuse and hate speech

  • Bullying

  • Violence

  • Harmful substances

  • Dangerous stunts and challenges

In carrying out these risk assessments, Ofcom has specified the characteristics of the service that should be taken into account. These characteristics include the ability to send direct messages on the service, on which Ofcom states the following in its guidance:

Direct messaging can allow users to share content harmful to children in a closed and more targeted manner. While direct messaging can enable users to protect their privacy, our evidence shows direct messaging can enable abuse and hate content, and bullying content behaviours, particularly between two users, that are more likely to go unnoticed by others. This risk may increase when users are able to message other users without the recipient’s permission. Children can also receive direct messages containing pornographic content, often in the form of hyperlinks and frequently by users they do not know or suspect to be ‘bots’.

As part of the child safety duties under OSA, U2U service providers are required to prevent children from encountering primary priority content as well as protect children from the risk of harm of other content on the service.12

To comply with this duty, OSA requires U2U service providers to use age verification or age estimation (or both) to prevent children encountering harmful content.13 In doing so, the service provider must ensure that the age checks are “highly effective at correctly determining whether or not a particular user is a child.”14

OSA is quite specific regarding what counts as age verification or age estimation:

  • ‘Age verification’ means “any measure designed to verify the exact age of users of a regulated service.”15

  • ‘Age estimation’ means “any measure designed to estimate the age or age-range of users of a regulated service.”16

Accordingly, any measure where a user simply self-declares that they are of a certain age (for example, by ticking a box to say that they are 18 or above) is considered neither age verification nor age estimation for the purposes of OSA.17 Such mechanisms are not enough to comply with the law.

So putting this all together:

  • Substack allows material suitable for adults and therefore not suitable for children

  • Substack has probably concluded that its service is likely to be accessed by children

  • Substack’s child risk assessment likely included the risks of having direct messaging features as part of its service

  • Substack considers itself to be subject to child safety duties under OSA

  • To comply with these duties, Substack has implemented age checks for its service, including for its direct messaging feature

Data protection and age checks

If you have gone ahead and completed the age checks mandated by Substack, you may be wondering how your data is handled as part of this process.

There are different ways for an internet service to carry out age checks on users:

  • Verifying ID documents. This requires uploading a picture of a government-issued ID such as a driving licence or passport. The document is then checked for authenticity and to verify the age of the user presented in the document.

  • Computer-vision approach. This is where the service predicts the age of a given user based on an image of their face. With this, users are required to take a selfie using their device from which their age is estimated.

  • Analysing account information. This involves using data from or generated about a user on the service to predict their age. For example, YouTube applies machine learning for age estimation of its users by relying on various signals such as search history and how long an account has existed.

Third-party providers are usually preferred by internet services for carrying out the checks. Substack uses Persona, an online identity verification service based in San Francisco. Substack’s support page describes how an age check is conducted based initially on a selfie or, failing this, a copy of a government-issued ID document.
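The flow Substack describes (selfie first, ID document as a fallback) amounts to a simple escalation pattern. A minimal sketch, using hypothetical function names and thresholds of my own, not Persona’s actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeCheckResult:
    passed: bool
    method: str  # "selfie", "id_document" or "none"

ADULT_AGE = 18
CONFIDENCE_MARGIN = 2  # hypothetical buffer: borderline estimates escalate to an ID check

def check_age(estimated_age: Optional[float],
              id_document_age: Optional[int]) -> AgeCheckResult:
    """Sketch of a selfie-first, ID-fallback age check flow."""
    # Step 1: facial age estimation from a selfie, if one was provided.
    # Only accept clearly-adult estimates; anything borderline escalates.
    if estimated_age is not None and estimated_age >= ADULT_AGE + CONFIDENCE_MARGIN:
        return AgeCheckResult(passed=True, method="selfie")
    # Step 2: fall back to the age stated on a government-issued ID document.
    if id_document_age is not None:
        return AgeCheckResult(passed=id_document_age >= ADULT_AGE,
                              method="id_document")
    # Neither check succeeded: deny access.
    return AgeCheckResult(passed=False, method="none")
```

The confidence margin reflects why estimation-only systems deliberately err on the side of escalation: an estimate near the threshold is exactly where the error rates discussed below matter most.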

The handling of user data for age checks is something that OSA does anticipate, as it states that the following duty applies to internet services subject to the legislation:18

When deciding on, and implementing, safety measures and policies, a duty to have particular regard to the importance of protecting users from a breach of any statutory provision or rule of law concerning privacy that is relevant to the use or operation of a user-to-user service (including, but not limited to, any such provision or rule concerning the processing of personal data). (Emphasis added)

OSA does not contain much by way of specific data protection and privacy rules. For this, Ofcom relies on the enforcement activity of the Information Commissioner’s Office (ICO), as the UK’s data protection regulator, stating:

We work closely with the ICO and where we have concerns that a provider has not complied with data protection law, we may refer the matter to the ICO.

To that effect, the ICO has published guidance on the use of age checking technology whilst ensuring compliance with relevant data protection rules, including the UK GDPR. In that guidance, the ICO sets out their expectations for age assurance and compliance with each of the data protection principles.

There are a few issues here that I think are worth highlighting.

The first one concerns accuracy. U2U service providers that estimate a user’s age using computer-vision mechanisms or account information are relying on approaches where the likelihood of error is much higher. These errors result either in genuine adults being denied access to the service (or part of it) or in children being granted access when they should not be.

Sometimes the source of this inaccuracy is bias ingrained in the facial age estimation systems themselves. These systems use machine learning classifiers trained on large datasets of facial images. However, these datasets are not always demographically diverse, resulting in varying performance levels across different groups of people, including by skin colour and gender.

From Yoti’s Facial Age Estimation Fact Sheet. Mean absolute error (MAE) measures how far off an estimate is from the true age. For example, an MAE of 1.6 means estimates of a person’s age are, on average, off by 1.6 years.
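To make the MAE figure concrete: it is the average of the absolute differences between estimated and true ages. A quick illustration with made-up numbers (not Yoti’s data):

```python
def mean_absolute_error(true_ages, estimated_ages):
    """MAE: the average absolute difference between estimated and true ages."""
    errors = [abs(t - e) for t, e in zip(true_ages, estimated_ages)]
    return sum(errors) / len(errors)

# Illustrative example: three users with known ages and model estimates.
true_ages = [16, 21, 30]
estimated_ages = [17.5, 20.0, 28.9]
mae = mean_absolute_error(true_ages, estimated_ages)  # (1.5 + 1.0 + 1.1) / 3 ≈ 1.2
```

An MAE in this range matters most right at the 18-year threshold: a 16-year-old could plausibly be estimated as an adult, and a 19-year-old could be wrongly turned away.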

Additionally, as with any system, there are security risks. Back in July 2024, AU10TIX, an Israeli ID verification company, suffered a data breach which exposed its logging platform containing images of identity documents like driving licences and passports. This provider was used by TikTok, Uber, X and other well-known internet services at the time.

By relying on third-party age checking systems, which has become the standard approach, another entity is added to the data processing chain, which in turn reduces the control users have over their data. These providers may state that they only use the data to verify identity, but from a user perspective it is difficult to confirm whether this is really true. Once the ID documents or selfies have been sent to their servers, the fate of that sensitive information is essentially in the hands of the provider. This is especially concerning if the provider is based in another country where data protection laws may be weaker or non-existent.
