The SCHUFA case explained
The legal implications of relying on algorithmic outputs
TL;DR
This newsletter is about the SCHUFA case on automated individual decision-making under the GDPR. It looks at the facts of the case, the Court's verdict and its implications for AI governance.
Here are the key takeaways:
The SCHUFA case is about a data subject who had her loan application rejected by a bank based on a credit score produced by a credit agency using her personal data. The case brought before the Court of Justice of the European Union (CJEU) concerned whether the credit scoring constituted automated individual decision-making under the GDPR.
The GDPR provides a definition of 'automated individual decision-making' and specifies when such processing may be carried out. Data subjects whose personal data are used for automated individual decision-making have the right to be informed of such processing and to receive further information about how it is carried out.
The CJEU held in the SCHUFA case that a credit score generated by a credit agency is an automated individual decision if a third party (like a bank) "draws strongly" on it to make a decision. Accordingly, if one entity makes a decision that "draws strongly" on an algorithmic output produced by another, then the entity that produced the algorithmic output is effectively carrying out automated individual decision-making.
The case highlights the risk of automation bias in the context of AI. This is where the users of AI systems over-rely on the outputs of such systems to make decisions or act in certain situations.
Certain parts of the AI Act complement the GDPR in terms of mitigating the risk of automation bias. These include provisions on AI literacy and human oversight for high-risk AI systems.
Facts of the Case
The SCHUFA case (OQ v Land Hessen) is about a German data subject, named OQ in the case report, who applied for a loan at a bank and was rejected. The rejection was based on an assessment by SCHUFA, a credit agency that provides banks with information on individuals' creditworthiness.
SCHUFA uses "mathematical and statistical procedures" to determine the probability of a person's future behaviour, such as repaying a loan. The output of this process (the 'score') is then used to assign the person to a group of individuals with similar characteristics who have behaved in a similar way.
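To make the mechanics concrete, here is a minimal, purely illustrative sketch of how such a probability value and group assignment might be produced. The feature names, weights and score bands are all invented; SCHUFA's actual procedure is a trade secret and certainly far more sophisticated.

```python
import math

# Invented feature weights for a toy logistic scoring model.
# Nothing here reflects SCHUFA's actual (secret) procedure.
WEIGHTS = {
    "years_at_current_address": 0.3,
    "existing_credit_lines": -0.4,
    "missed_payments_last_year": -1.2,
}
BIAS = 1.0

def repayment_probability(person: dict[str, float]) -> float:
    """Estimate the probability that a person will repay a loan."""
    z = BIAS + sum(w * person.get(k, 0.0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))  # logistic function maps z into (0, 1)

def score_band(p: float) -> str:
    """Assign the probability to a group of similarly-scored individuals."""
    if p >= 0.8:
        return "A (low risk)"
    if p >= 0.5:
        return "B (medium risk)"
    return "C (high risk)"

applicant = {
    "years_at_current_address": 2,
    "existing_credit_lines": 3,
    "missed_payments_last_year": 1,
}
p = repayment_probability(applicant)
print(f"probability of repayment: {p:.2f}, band: {score_band(p)}")
```

The structural point matters more than the arithmetic: what travels to the bank is a single number and a group label, stripped of the reasoning that produced them.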
OQ was denied her loan after SCHUFA produced a negative assessment of her creditworthiness and shared this with the bank. OQ then asked SCHUFA to provide her with information on the personal data used and to erase any of it that was incorrect.
SCHUFA provided OQ with her credit score and a broad explanation of how the score was calculated. However, the agency did not share specific details on the algorithm it used, as this constituted a trade secret. It also contended that it was the bank that made the final decision on loans; SCHUFA's role was merely limited to making the assessment and passing it on to the bank.
OQ then lodged a complaint with her data protection supervisory authority, the HBDI (Hessischer Beauftragter für Datenschutz und Informationsfreiheit), asking the regulator to order SCHUFA to grant her requests for further information and for erasure of the incorrect personal data.
The HBDI rejected her application for an order, which OQ then appealed before the Administrative Court in Wiesbaden, Germany. One of the main legal questions at issue was whether the credit scoring carried out by SCHUFA constituted 'automated individual decision-making' under the GDPR. The Court referred this specific question to the Court of Justice of the European Union (CJEU) for a preliminary ruling.
Automated individual decision-making under the GDPR
The GDPR provides the following definition of 'automated individual decision-making':1
...a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
This definition can be broken down into three cumulative conditions:
A decision has been made. This means an action, stance or measure2 taken regarding a data subject that is binding on them.3 Steps that merely complement or support the decision are not included.
That decision has been based solely on automated processing or profiling. This means that the decision is made without human involvement. It should therefore be distinguished from the use of automated tools that merely aid decisions ultimately made by a human.
That automated decision produces either legal effects or similarly significant effects on the data subject. A decision producing legal effects is one that impacts a person's legal rights or status.4 A decision that produces ‘similarly significant effects’ is one that does not necessarily affect a person’s legal rights or status but has an impact that is similar. An example of this could be e-recruiting practices that do not involve any human intervention.5
The GDPR's definition of automated individual decision-making expressly includes 'profiling', which itself has a specific definition under the GDPR:
...any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.6
Automated decision-making is only permitted under the GDPR if one of the following applies:7
It is necessary for the performance of a contract
It is authorised by law
The data subject has given explicit consent to the processing
If a data subject's personal data is subject to automated decision-making, they have a right to obtain from the data controller, as part of a subject access request (SAR), "meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject."8
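The GDPR does not prescribe what form this "meaningful information about the logic involved" must take. For a simple linear scorer like the toy model sketched above (same invented weights), one conceivable form is a per-feature breakdown of how much each input pushed the score up or down:

```python
# Same invented weights as the toy model above; not a real scoring system.
WEIGHTS = {
    "years_at_current_address": 0.3,
    "existing_credit_lines": -0.4,
    "missed_payments_last_year": -1.2,
}

def explain_score(person: dict[str, float]) -> dict[str, float]:
    """Per-feature contributions to the score: one conceivable basis for the
    'meaningful information about the logic involved' that Article 15.1(h)
    requires (an illustration, not a legal standard)."""
    return {k: round(w * person.get(k, 0.0), 2) for k, w in WEIGHTS.items()}

applicant = {
    "years_at_current_address": 2,
    "existing_credit_lines": 3,
    "missed_payments_last_year": 1,
}
print(explain_score(applicant))
# {'years_at_current_address': 0.6, 'existing_credit_lines': -1.2,
#  'missed_payments_last_year': -1.2}
```

A breakdown like this explains the model's arithmetic without disclosing the kind of detail SCHUFA treated as a trade secret, which is precisely where the tension in the case lies.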
The CJEU's verdict in SCHUFA
In simple terms, according to the CJEU, a credit score generated by a credit agency is an automated individual decision if a third party (like a bank) "draws strongly" on it to make a decision.
In coming to this decision, the Court first recognised that an automated individual decision, as per the GDPR definition, had taken place. The decision here was "the result of calculating a person’s creditworthiness in the form of a probability value concerning that person’s ability to meet payment commitments in the future."9 This calculation was produced by algorithmically profiling OQ, which constitutes automated profiling under the GDPR.10 The resulting decision also significantly affected OQ.11
The Court identified that, in this case, there were two main stakeholders at play: the credit agency and the bank. Accordingly, the decision-making process was distributed among these stakeholders.12 SCHUFA carried out the profiling of OQ, and the bank used the resulting score to reject her loan application.
However, the fact that different stakeholders are involved in a decision-making process does not necessarily mean that no automated individual decision-making takes place. SCHUFA's determination of the probability value was therefore not merely a preparatory act, nor was the bank's rejection of OQ's application the only decision that took place.13
Were the GDPR provisions on automated individual decision-making interpreted in that narrower way, the CJEU contended, there would be a risk of circumventing them and creating "a lacuna in legal protection":14
In that situation, the establishment of a probability value such as that at issue in the main proceedings would escape the specific requirements provided for in Article 22(2) to (4) of the GDPR, even though that procedure is based on automated processing and that it produces effects significantly affecting the data subject to the extent that the action of the third party to whom that probability value is transmitted draws strongly on it.15
Accordingly, if one entity makes a decision that "draws strongly" on an algorithmic output produced by another, then the entity that produced the algorithmic output is effectively carrying out automated individual decision-making. This is supported by the fact that if a data subject wanted to submit a SAR to obtain further information about the analysis behind the decision, they would need to submit it to the entity that carried out the analysis, since that is the entity holding the requisite information.16
Thoughts on the case
What exactly constitutes "drawing strongly"? This is something the CJEU does not elaborate on in its judgment in the SCHUFA case. In that case, the bank seemed to base its decision to reject OQ's loan application at least predominantly on the scoring produced by SCHUFA. Presumably, then, if entities place significant weight on algorithmic outputs when making decisions that significantly affect data subjects, the GDPR rules on automated individual decision-making will apply. This may not be the case, however, if the algorithmic output is just one of several factors taken into account and is not weighted more heavily than the others.
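As a thought experiment only (the Court endorses no such formula), one could picture the bank's decision as a weighted mix of inputs and ask what share of the total weight the external score carries. The factor names and weights below are invented:

```python
# A thought experiment, not a legal test.
decision_factors = {
    "external_credit_score": 0.70,  # the algorithmic output at issue
    "in_house_income_check": 0.20,
    "branch_interview": 0.10,
}

share = decision_factors["external_credit_score"] / sum(decision_factors.values())
print(f"external score carries {share:.0%} of the decision weight")
# At 70% of the weight, the bank plausibly 'draws strongly' on the score;
# were it one factor among many at, say, 15%, arguably it would not.
```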
An important aspect of the SCHUFA case is that it highlights a prominent AI risk: automation bias. This is where the users of AI systems over-rely on the outputs of such systems to make decisions or act in certain situations.
In her book Code Dependent: How AI Is Changing Our Lives, Madhumita Murgia explores the risk of automation bias in the medical field. This is in the context of medical AI systems developed and deployed in India to help diagnose tuberculosis and other diseases. Murgia questions whether the increased use of such systems may cause doctors to "become less vigilant and more complacent about second-guessing diagnostic AI assumptions" and whether it might diminish their skills and agency. To quote one Indian doctor Murgia interviewed for her book:
'In a new generation of medical students looking for quick fixes and early gratification, it's hard to help them to see the value of engaging deeply with patients, with histories and [physical] exams...I see a risk of it becoming your master only if you're not grounded in good medicine.'17
It is risks like automation bias that the GDPR rules on automated individual decision-making are designed to protect data subjects from, as explained by the CJEU in the SCHUFA case. The purpose of these rules is "to provide suitable safeguards and to ensure fair and transparent processing in respect of the data subject, in particular through the use of appropriate mathematical or statistical procedures for the profiling and the implementation of technical and organisational measures appropriate to ensure that the risk of errors is minimised."18
This is complemented by certain provisions in the AI Act pertaining to high-risk AI systems:
AI literacy. Those dealing with the operation of high-risk AI systems must have a sufficient level of AI literacy,19 which includes understanding "the suitable ways in which to interpret the AI system's output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them."20
Human oversight. High-risk AI systems must be "designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use."21 This includes enabling users of the system to be "aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias)."22
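On the engineering side, one way (among many) of operationalising these duties is to ensure that no branch of the decision pipeline acts on a model's output automatically. A minimal sketch, with invented thresholds and routing labels:

```python
from dataclasses import dataclass

@dataclass
class ScoredApplication:
    applicant_id: str
    probability: float  # model's estimated repayment probability

# Invented thresholds. The structural point is that every path ends with a
# human decision, which is one guard against automation bias.
HIGH, LOW = 0.80, 0.40

def route(app: ScoredApplication) -> str:
    if app.probability >= HIGH:
        return "suggest approval; human must confirm"
    if app.probability <= LOW:
        return "suggest decline; human must review the full file"
    return "no suggestion; human decides on the merits"

print(route(ScoredApplication("applicant-1", 0.42)))
# -> 'no suggestion; human decides on the merits'
```

A gate like this addresses only part of Article 14, which also demands, among other things, appropriate interface tools and the ability to intervene in or interrupt the system.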
What the SCHUFA case and the AI Act require of AI system developers is to understand how their systems work and the contexts in which they may be used. This is imperative for grasping the risks arising from the development and deployment of such systems.
1. GDPR, Article 22.1.
2. GDPR, Recital (71).
3. Kuner et al (eds), The EU General Data Protection Regulation (GDPR): A Commentary (OUP 2020), p.532.
4. Article 29 Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (2018), p.21.
5. Article 29 Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (2018), p.21.
6. GDPR, Article 4.4.
7. GDPR, Article 22.2.
8. GDPR, Article 15.1(h).
9. Case C‑634/21, OQ v Land Hessen (7 December 2023), para. 46.
10. Case C‑634/21, OQ v Land Hessen (7 December 2023), para. 47.
11. Case C‑634/21, OQ v Land Hessen (7 December 2023), para. 49.
12. Case C‑634/21, OQ v Land Hessen (7 December 2023), para. 61.
13. Case C‑634/21, OQ v Land Hessen (7 December 2023), para. 61.
14. Case C‑634/21, OQ v Land Hessen (7 December 2023), para. 61.
15. Case C‑634/21, OQ v Land Hessen (7 December 2023), para. 62.
16. Case C‑634/21, OQ v Land Hessen (7 December 2023), para. 63.
17. Madhumita Murgia, Code Dependent: How AI Is Changing Our Lives (Picador 2024), p.116.
18. Case C‑634/21, OQ v Land Hessen (7 December 2023), para. 59.
19. EU AI Act, Article 4.
20. EU AI Act, Recital (20).
21. EU AI Act, Article 14.1.
22. EU AI Act, Article 14.4(b).