TL;DR
This newsletter is about the powerful digital systems that data rights are designed to protect you from. It looks at the system that underpins Facebook, how it works and its negative externalities.
Here are the key takeaways:
During the mid-to-late 2000s, Facebook launched three features/projects that significantly contributed to how the platform works today:
The news feed focused the Facebook UX on a 'personalised newspaper' where user activity would be most concentrated.
Pandemic implemented a range of features oriented around targeted advertising and provided the foundation for Facebook's business model.
The Like button accelerated Facebook's pursuit of data maximisation to fuel its targeted advertising operations and generate revenue from the processing of personal data.
A good way to think about how Facebook works is by looking at it through a systems thinking lens. Facebook can be viewed as an ecosystem with elements and interconnections that fulfil a particular function.
The system in place at Facebook is one based on the surveillance capitalist model. Such a system is essentially one big reinforcing feedback loop designed to generate revenue from personal data.
With this model, user activity is turned into behavioural insights, which in turn are used to build detailed profiles of users. These profiles then form the basis of the adverts targeted at those users, and it is the selling of these ads that generates billions in revenue for Meta.
The platform's recommendation engine, and the reinforcement feedback loop it executes, seeks to provide users with content that they are most likely to engage with, regardless of the nature of that content (i.e., whether that content is good or bad). Whatever content engages the user the most is the content that is presented in their feed, to keep them hooked and coming back for more.
It encourages users to be trapped in a cycle of consumption as a way to achieve higher engagement, and therefore continue producing more revenue from targeted advertising. In doing so, the system takes advantage of the fact that people are generally poor at conceptualising risks, are susceptible to the social media slot machine and are easily persuaded by the convenience of network effects.
Data rights provide a means for confronting the surveillance capitalist machine. But in the first place, as an individual, if you can help it, do not feed the machine!
Intro
As is evident from the title, this newsletter is a follow up to a piece I wrote back in December 2023 titled What data rights are for. That piece was my attempt at distilling what I think is the main purpose of your data rights: they help you to take control of your journey through the various evolutions of the digital age.
In the many conversations I have had with people outside of the data rights space, I have found that they tend to undervalue their data rights. Sometimes they do not even know that these rights exist, and even if they do, they are doubtful of their importance.
I find this worrying, though not that surprising. I have the feeling that most people are unaware not just of the data rights they are entitled to, but of the very harms those rights are designed to prevent or mitigate. And even when they are aware, they often express some sort of defeatist mindset: 'I have nothing to hide, so why do I need privacy', or 'so what if my social media platform knows everything about me', or 'my information is already out there so what is the point of caring about it now', or 'nothing bad has ever happened to me, what's the big deal?'.
So many aspects of our lives are dictated by technological developments, which often happen at a very fast pace. In fact, the pace is usually so fast that it is incredibly difficult to really comprehend what is going on. Even I can find it hard to keep up, and I am someone who is not only supposed to understand technology but also to help others navigate its risks and benefits. So I can sympathise with others who may feel overwhelmed or otherwise unmotivated to understand their data rights, how to use them and, most importantly, why they should use them.
However, as a society, I think this is untenable.
Technological developments, and everything that comes with them, do not just happen by accident. They are the result of decisions and work carried out by humans, driven towards certain goals and influenced by certain incentives. Various risks and benefits arise from such activity, and we ought to equip our society with mechanisms that enable us to control these developments in a way that maximises the benefits and minimises the risks.
But if we as a society are largely unwilling to exercise these controls, then the fate of technology's risks and benefits is left to the whims of its creators, which, as history has shown, is not always a good thing.
And so this is the question I would use to respond to those who are skeptical about data rights: do you want your journey in the digital world to be dictated not by you but by other entities driven by their own goals and influenced by their own incentives?
Or, as Jamie Susskind puts it more succinctly in his book Future Politics:1
...to what extent should our lives be directed and controlled by powerful digital systems - and on what terms?
Maybe this sparks further questions. What are these powerful digital systems? Why are they powerful? Who builds and operates them? How do they control and direct our lives?
I touch on some of these questions in What data rights are for, though not in detail. This follow-up aims to explore these first-order questions, which will hopefully help people understand exactly why data rights are so important.
The focus here is on social media platforms, and in What data rights are for I explain, at a very high level, the risks posed by these powerful digital systems:
Think about your social media feed. As you scroll through it, you might see all sorts of content that appeals to you, which you like, comment on or share.
You might see funny memes, the latest online trends, and other weird quirks of internet pop culture. All fairly innocuous, all quite normal.
But as you scroll, like, comment, share and so on, you are constantly being monitored. All this activity is being recorded.
What happens to this information? It feeds a recommendation engine, a system that powers your social media app.
This engine contains algorithms that are designed to keep your attention for as long as possible. It does so by watching what you do and figuring out what you like.
Once it figures out what you like, it shows more of it on your feed. It does so over and over again, keeping you scrolling for longer and longer.
But this recommendation engine is not limited to learning what you like. It will learn what makes you scared, what makes you angry and what makes you sad, too.
This is because the engine is optimised to keep your attention, regardless of the content that achieves this. It is therefore capable of showing you the good stuff and the bad stuff.
Inevitably, this will influence the experience you have on the app. And maybe even for the worse.
Perhaps this takes you down a dark rabbit hole that seems impossible to escape from. Your digital existence becomes clouded with a pervasive negativity, emanating from a pane of glass in the palm of your hand.
And perhaps this impacts your mental health, your work, your relationships, or a host of other things in your life. The online has interfered with the offline, infringing on your peace.
This deep dive starts with Facebook. Steven Levy's book Facebook: The Inside Story provides detailed coverage of the company's evolution from a social network used exclusively by American college students to a global billion-dollar business, and the various data rights-related perils along the way. This post will include excerpts from the book that describe how Facebook eventually embraced surveillance capitalism and made this the basis of its business model, which remains critical to its success today.
But it is important to zoom out from this story and look at the bigger picture. The Facebook story reveals something important about incentives within organisations and how these influence the way they pursue their goals. Looking at Facebook and others through the lens of systems thinking is a helpful way of illuminating this. So in this post, I attempt to provide a system model for online platforms like Facebook that shows exactly what they are built to do and the implications this has for data rights.
The main point I am trying to convey is this: if you look at the way that these companies work, whereby the collection of people's data is central to their business model, you will see why having data rights is so critical. We cannot just allow these organisations to pursue their profit-seeking ends whilst producing the various negative externalities that come at the expense of society. Our data rights are there to protect us from this, to set the terms on which these powerful systems and their creators can operate.
In my long post on the potential national security issues with TikTok, I cited a passage from Privacy is Power, a book by Carissa Véliz, an associate professor of philosophy and ethics at the University of Oxford, and I think it is worth including it here to re-emphasise the message:2
Not everyone will use access to your privacy in your interest...Fraudsters might use your date of birth to impersonate you while they commit a crime; companies might use your taste to lure you into a bad deal; enemies might use your darkest fears to threaten and blackmail you. People who do not have your best interest at heart will exploit your data to further their own agenda. And most people and companies you interact with do not have your best interests as their priority. Privacy matters because the lack of it gives others power over you.
You might think you have nothing to hide, nothing to fear. You are wrong - unless you are an exhibitionist with masochistic desires about suffering identity theft, discrimination, joblessness, public humiliation, and totalitarianism, among other misfortunes. You have plenty to hide, plenty to fear, and the fact that you don't go around publishing your passwords or giving copies of your keys to strangers attests to that.
You might think your privacy is safe because you are a nobody - nothing special, interesting or important to see here. Don't short-change yourself. If you weren't that important, businesses and governments wouldn't be going to so much trouble to spy on you. (Emphasis added)
The Facebook story
The key parts of the Facebook story from a data rights perspective include the launch of three features/projects in the mid-to-late 2000s:
The news feed (launched in 2006)
Targeted advertising (launched in 2007)
The Like button (launched in 2009)
The News Feed
Levy notes how the news feed "would become Facebook's biggest boon and also the source of its future woes."3
Zuckerberg's idea for the news feed was for it to function as a "personalized newspaper" for Facebook users.4 It would surface content and activities from the social network in a manner more convenient than visiting individual profiles one by one.
This was the standard way to navigate social media in the mid 2000s. In fact, "Facebook logs showed that a huge number of people actually went through their friend list in alphabetical order to make sure they were on top of new activities."5
Work on the news feed feature started in late 2005, but the feature would not launch until the following year (it took over six months to build). The team consisted of Zuckerberg and others at Facebook at the time, including Adam D'Angelo (a current OpenAI board member).
Zuckerberg's role in the project was to think about what content should appear on the feed and how it should be ranked:
He wanted to make it easier for people to see what was important in the world of the friends they had consciously connected to. He had one word in mind as a yardstick for inclusion: "interesting-ness." It sounded innocent at the time. He had no idea then how important such a ranking would be, that democracies could fall and minds could deaden by the wrong stories appearing on one's News Feed.6
Eventually, Zuckerberg would identify three categories of stories that should appear on the feed:7
"Stories about you." This would consist of things like people posting on your wall, tagging you in photo or commenting on your post.
"People you cared about." This would include for example relationship statuses of friends and life events of others.
"Stories about things you care about and other interesting things." This would consist of information that augmented or supplemented traditional news and entertainment.
This last category was unique in that it introduced to a user's feed information that did not originate from their social network. Instead, Zuckerberg envisaged a stream of content consisting of:8
Trends in media, interest groups, etc.
Events that might be interesting
External content
Platform applications
Paid content
Bubbled-up content
In thinking through these design choices for the news feed, Zuckerberg would conceptualise his vision for Facebook: "The Information Engine." To quote from Zuckerberg's Book of Change (a personal notebook from the early days of Facebook in which he recorded his ideas, strategies, and philosophies for the future of the company):
Using Facebook needs to feel like you're using a futuristic government-style interface to access a database full of information linked to every person.9
This vision seemed to fuel Zuckerberg's idea for what he called 'dark profiles', which he also wrote about in Book of Change:
He envisioned users creating these Dark Profiles of their friends - or just about anyone who didn't have a Facebook account. By giving the name and email address of the person, you could start such a profile - you'd be informed if one already exists - and you could add information to it, like a person's biographical details or interests. The owner of one of those dark accounts would be part of the Facebook conversation. Every so often email alerts might pop up in a dark-profiled person's inbox about activity involving them on Facebook. Presumably that would motivate them to sign up. Zuckerberg was aware that opening profiles for people who had no desire to be on Facebook might stir up some privacy concerns. He spent some time pondering how this could avoid being "creepy." Maybe, he wondered, dark accounts might not be searchable? It's not clear how much of this idea was implemented. Kate Losse, an employee at the time, would later write that she worked on a dark-profile project around September 2006. "The product created hidden profiles for people who were not yet Facebook users but whose photographs had been tagged on the site," she wrote in her 2012 memoir. She now explains that when those nonusers responded to an email - provided by the person tagging them in the picture - the tagged photos would be waiting for them. "It was kind of peer-to-peer marketing at Facebook directed at people who had friends on the site but hadn't signed up yet," she says. Ezra Callahan confirms this, adding that though the idea that users would be able to create and edit dark profiles of friends Wikipedia-style was brainstormed, it was not executed. (Facebook has always insisted that dark profiles do not exist.)10
The news feed would become part of a major revamping of Facebook's UI as it would essentially become the front page of the site.
The concerns about privacy were acknowledged, but downplayed:
...the thinking was that since Facebook users were already looking at one another's profiles all the time - it was the key activity on the site - it was no big deal to have the news of your friends delivered proactively. It was all information that people had chosen, right?11
The customer support team, which saw the feature not long before its launch and "understood right away that people were going to freak out" given that "most users had no concept of what Facebook did or didn't know about them", warned about a backlash. But these privacy concerns were just "brushed off":
Whatever, people are looking at each other's profiles all the time - what's the big deal?12
And for Zuckerberg specifically, user objections were just a "distraction":
If you just keep your head down and ignored the noise, people would get over it, and in a couple of weeks it would be like the outcry never happened.13
But when the news feed was launched in September 2006, many people vehemently opposed it. In fact, it was the first time that Facebook had received major backlash.
So while Zuckerberg and others were eager to brush off the concerns around privacy, Levy explains that such concerns were central to the initial rejection of the news feed:
What Facebook simply hadn't realized about the News Feed was that pushing information to people was qualitatively different from publishing it on someone's home page. (More accurately, it had shrugged off the early warnings to this effect.) One case in particular stood out as a symbol of the difference: the "relationship status" that Facebook encouraged users to append to their profiles, kind of a mood ring for the state of their romantic life. At any time, it could signify married, single, in a relationship, or the weirdly fraught "it's complicated." When someone changed the status on their profile page, visitors would encounter it as a straightforward self-description of someone's love life. But when instantly broadcast to all of one's friends if changed, it hit your social graph like a stack of tabloid newspapers crashing on the sidewalk. Your girlfriend dumped you, and suddenly your buddy list would explode with lookie-loos demanding the lurid details. All because of Facebook! The corporate inbox overflowed with howls from people whose relationship status and other "news" had become the unwelcome content of a brand-new media channel.14
Yet, despite the objections to the new feature, Facebook did not turn it off (even when its major investors encouraged it to do so). Instead, the company stuck with the news feed, citing the interesting data observed around its launch:
[The Facebook team] were looking at the logs and finding something amazing. Even as hundreds of thousands of users expressed their disapproval of News Feed, their behaviour indicated that they felt otherwise. Users were spending more time on Facebook than ever before. It was a validation of the entire concept.
[...]
The protest's massive traction was actually a vindication of the very product some wanted to smother in its crib. The anger against the News Feed was being fueled by...the News Feed...[Facebook] had ginned up an algorithmic amplifier.15
Facebook would eventually augment the news feed to include options for users to control who could see their activity on the platform. Some in the company thought that nobody would use these settings, but having them in place seemed to calm the storm.
Essentially, in the end, people simply acquiesced to the news feed and the way it changed their UX on Facebook.
Accordingly, the lesson that Facebook took away regarding privacy was the following:
Though people might complain in the abstract about it, they loved even more to share things with their friends, and especially to see what their friends were up to. What's more, they moved a step closer to Zuckerberg's vision of a new standard of privacy, where people shared more and more with one another.16
The introduction of the news feed was therefore a significant moment in Facebook's journey toward surveillance capitalism.
Pandemic
It was in mid-2007 that Facebook decided to "make its big push for revenue growth."17
The work on this was given the codename 'Panda' (a combination of the words 'Pages' and 'ads'), which would later change to 'Pandemic'. The project focused on a form of advertising that would go further than what Google had done previously.
As Levy explains:
Facebook was going to change its current ad system to be less about how many people saw the ads and more about targeting them to the right people. As Google had, Facebook would create an auction-based system where advertisers would bid against one another to place ads in the sidebars alongside the News Feed or - and this was controversial within the company - in the News Feed itself...The metric that people paid for would be engagement-based rather than exposure-based: the advertiser would pay for each click as opposed to how many eyeballs grazed on the ad.18
So while Google used keywords included in a search query as the criteria for the ad bidding, Facebook would opt to use demographic information about its users. Facebook was already using this information for recruiting by "directing ads to engineers whose profiles identified them as working for rival companies."19
The project also involved Facebook introducing a range of features for businesses relating to this new advertising model, much of which remains today. For example, Facebook introduced Pages which allowed organisations to have their own profiles, a departure from the previous policy of only allowing individual people to have accounts on the platform.
But a more significant part of Pandemic was an initiative called Beacon:
Facebook struck deals with forty-four partners to put invisible monitors on their web pages, called beacons. The pitch: Add three lines of code and reach millions of users. The beacons flagged activity to Facebook. When a user made a purchase on the site, the good news would be shared on the News Feed of friends.20
Essentially, the Facebook beacons would "stealthily track people as they bought things on the web and then - by default - circulate the news of their private purchases."21
Users were shown a one-time pop-up with instructions on how to disable this feature. But it operated on an opt-out basis, meaning that if a user did not respond to the prompt, this was taken as the user consenting to the feature.
The rationale for this opt-out approach to Beacon was inspired by what happened with the launch of the news feed: once implemented, people will eventually like it. Accordingly, the opt-in option, whereby the feature would be off by default and only activated if a user proactively switched it on, was disregarded.
Prior to the launch of Pandemic, there was fierce debate internally about whether to make Beacon opt-in or opt-out. In the end, Zuckerberg made the final decision just before Pandemic's launch:
While the headlines from the Pandemic launch focused on micro-targeting and social ads, attention soon shifted to the Beacon component. As Kelly [Facebook's counsel and privacy chief at the time] and others had warned, automatically spreading news of purchases made on the designated websites could result in some unfortunate outcomes. If one were to pick an extreme hypothetical example, you might imagine someone buying a diamond engagement ring on the partner site, and having the intended recipient learn about the purchase not by bended knee but via her Facebook News Feed. And that is exactly what happened. People started complaining when their purchases started appearing on people's News Feeds.22
Like with the news feed, Zuckerberg hoped that people would eventually acquiesce to Beacon. But this did not happen, and people remained furious with the new feature.
Zuckerberg would eventually have to yield and change to the opt-in approach for Beacon. But even this ended up being insufficient:
Beacon was transmitting data even when the user opted out, as well as giving Facebook a lot of other information about what its users did on those outside websites. Beacon even gave Facebook information about people who weren't signed up for Facebook.23
The partners involved in Beacon also got cold feet, with Coca-Cola and Overstock, among others, suspending their arrangements with Facebook.
Despite this, Beacon would remain in place for almost two years (shutting down in September 2009). During this time, Facebook would be sued for the feature in a class-action lawsuit filed by its users, which it had to settle for $9.5 million.24
Nevertheless, Pandemic established two important things for the future of Facebook:
Targeted advertising would become the business model of choice for the company. These ads would be displayed to users based on their demographic information via a bidding process similar to the one Google used for keywords on its search engine.
Measures designed to enhance privacy would be seen as frustrating the essence of this business model. The launch of the news feed and Beacon demonstrated that Facebook was more committed to data maximisation than data minimisation.
The Like button
The news feed and Beacon had exposed Facebook to an important reality about its platform:
People were now asking tough questions about Facebook, and about the privacy trade-off involved in social networking, especially when social networks were funded by advertising.25
People around Zuckerberg insisted that he needed "an experienced leader alongside him at Facebook" to navigate the challenges that this would bring. This person ended up being Sheryl Sandberg.
For Zuckerberg, Sandberg's role was to deal with the things that he did not like doing himself. This included sales, policy, communication, lobbying, legal and "anything else with low geek quotient" so that he could be free to spend time on the Facebook product itself.
Accordingly, Sandberg was tasked with the job of expanding the business model established with the launch of Pandemic. In doing so, Sandberg believed that Facebook could have an even bigger business than Google "because it had the potential to create and monetize demand"26:
People come every day to Facebook to learn what's new and share their interests. So advertisers would be able to sell to Facebook users things that they wanted even before they thought to ask for them.27
As a result of Pandemic, targeted ads were already working on the platform, generating $500 million for the company around the time Sandberg was hired in 2008. So for her, it was clear that targeted advertising would be critical to the financial success of Facebook, and that everything else was "a rounding error."28
Not everyone at Facebook was happy with this development though. Levy notes that some of the younger people "thought that ads sucked and Facebook should do something less...smarmy."29
But this would not stop Sandberg as she carried on with the mission of making Facebook (more) money.
The key to targeted advertising was that Facebook collected so much data about its users, well beyond mere demographic information. Such information provided it with the ability to see when users were "prone to pitches for specific products, or even political candidates."30
In essence, it was using people's personal data to make inferences about their preferences and behaviour, and using this as a basis to decide what ads to target them with.
With this model in place, Facebook would eventually create perhaps one of the most influential features in the world of social media, the Like button.
The power of this feature is that it provided a "subtle way of helping to identify a user's interests without the user conspicuously sharing them with Facebook."31 It was a way to fuel the targeted advertising machine.
Facebook had tried a similar feature previously, which it called the Awesome button. But this did not take off.
In 2007, before the Awesome button, FriendFeed, a startup that aggregated messages and posts from different social media sites into one feed, had developed a feature very similar to the Like button we know today. Facebook would go on to acquire this startup and rebrand the feature for its own platform.
Initially, Zuckerberg was not keen on the Like button. And there were fears that "installing a one-click way to comment on a News Feed post would inhibit actual comments, and instead of interesting conversational threads there would only be a mindless accumulation of positive clicks."32
However, real-world testing of the Like button would eventually convince Zuckerberg to go ahead with it:
In late December 2008, product manager Jared Morgenstern inherited the Like-button albatross and tried to figure out how to lift the curse. The big hurdle was proving that a Like button would not cannibalize commenting, a much higher-quality form of sharing. He built in some tricks, like moving the cursor to the comment box after someone hit the Like button. But ultimately Facebook would only know if Likes depressed commenting by trying the button out and measuring the responses. Instead of another Zuck review, Morgenstern sent an email to Zuckerberg, casually mentioning that he was going to launch the Like button in the Scandinavian countries. He interpreted Zuckerberg's non-response as an implicit go-ahead. After giving some Nordics the use of a Like button and comparing their behaviour to those who didn't have one, Facebook's researchers discovered that a Like button would increase commenting.
Zuckerberg ruled that the project go forward. "It's going to be Like with a thumbs up, just build it and ship it," he said. "We're done with this."33
When the Like button was launched fully in February 2009, it was a great success:
The Like button exceeded all expectations. People took to it immediately. As originally intended, it provided a crucial signal to help rank News Feed posts. What could be a clearer indication that people liked a post than an explicit action that expressed that very sentiment? Since the goal of the News Feed was showing people what they wanted to see, Facebook's job became easier.34
But the Like button would not be limited to the Facebook platform. The company would also expand its feature across the web, as Levy explains:
The company essentially made a deal with the World Wide Web: if you put our Like button on your page, whatever you're selling, promoting, or just saying in public could be boosted by implied (though unwitting) approval from millions of users. It was as if the entire web were posting to News Feed. And it was an unbelievable source of data for Facebook.35
Once again though, privacy concerns were raised. Arnold Roosendaal, for example, a doctoral candidate from the Netherlands, published a paper in 2010 titled 'Facebook Tracks and Traces Everyone: Like This!' with two important findings:
Regardless of whether a user actually pressed the Like button on the website, Facebook would plant cookies on the visitor's browser and track their web browsing activity.
Even those without a Facebook account were tracked, and if they created an account later on, the data collected beforehand could be combined with that new account.
In addition to this, the Like button also ended up creating a so-called 'attention war' for both individuals and businesses:
At an individual level, there was a "very subtle incentive for people to tailor what they posted to court those clicks and people would feel bad when they didn't get Likes for posts that meant something to them."36
For businesses, the Like button represented how much reach they had across the Facebook audience. So if a post received lots of likes, "the News Feed algorithm would distribute it more widely, giving it "organic" traffic that went to the News Feeds of the friends of those people. It was free advertising."37
The Like button would become "a gateway drug for Facebook's data gathering to extend beyond its borders."38
It was the "revenge of Beacon", as Levy puts it:
While Beacon shared personal data it received from websites with other users on Facebook, the Like button let Facebook use that data for its own purposes, largely to build its profiles of users and power its advertising. Facebook had learned that going beyond its borders to augment its monetization would be transformative. Later, Facebook would take the further step of buying information from data brokers. What was once "blasphemous" to its chief privacy officer was now Facebook's business as usual.39
This went beyond even what Sandberg had initially envisaged:
From the day she arrived at Facebook, she thought that the company would be limited to advertising that created demand for a product, a huge market to be sure. But by gathering information about what people were doing on the web - what they were shopping for and fantasizing about - it could also capture the precious information involving people's intent.40
Such data processing enabled Facebook to offer an advertising service that businesses were willing to pay huge sums for. It ended up putting the company "in a much more powerful position to capture the lion's share of online advertising revenue."41
To quote Sandberg herself:
"The idea that we could move up the chain and do more intent rather than demand fulfilment...was pretty fundamental."42
The takeaway from the Facebook story
The news feed, Pandemic and the Like button all helped push Facebook toward surveillance capitalism:
The news feed focused the Facebook UX on a 'personalised newspaper' where user activity would be most concentrated.
Pandemic implemented a range of features oriented around targeted advertising and provided the foundation for Facebook's business model.
The Like button accelerated Facebook's pursuit of data maximisation to fuel its targeted advertising operations and generate revenue from the processing of personal data.
It's all about systems
The Facebook story shows that ensuring the responsible and ethical use of people's data is not at the forefront of the company's priorities. There is simply little-to-no incentive to do so.
Data rights are not, and cannot be, a priority.
The eco-system imposed by Facebook's business model is such that compliance with data rights law frustrates the function of that very eco-system. This is something that I explained in The system in the system:
Internet platforms are systems. A system represents interrelated elements of identifiable activity. They therefore do not exist in a vacuum and can be created and curated.
If you can identify a system, and observe its behaviour over time, you may be able to identify the incentives that drive that system. In other words, you can see how the interrelated elements work together to perform a particular function, i.e., you can figure out the purpose of the system.
Systems can exist within systems, and where this is the case there is an eco-system. Being able to draw the appropriate boundaries to highlight the different systems that make up an eco-system can sometimes be difficult since these are often complex structures that are interrelated. But at least in an abstract sense, one can imagine how an eco-system can be made up of multiple systems operating in the same environment.
[...]
Ultimately, the identifiable behaviour of a platform's eco-system will largely stem from those systems that are more dominant. The other systems within the same environment will therefore essentially serve the needs of those more dominant systems. I would argue that the systems that dominate the eco-system operated by internet platforms are ones based on surveillance capitalism. In other words, the primary function of platforms is to generate revenue via the collection and processing of user data. These platforms are for-profit corporations after all, and this is something that Google had to realise early on in its existence; after the dot com crash in 2000, many of its investors cast doubt about the company's growth and even "their top venture capitalists, John Doerr from Kleiner Perkins and Michael Moritz from Sequoia, were frustrated." (Zuboff 2019, 72) This was because, up to that point, Google was a free service that generated little revenue. The crash thus created a sense of emergency at Google which forced the company to start looking for a way to generate profits and make the business viable in the advent of difficult times. A business model based on surveillance capitalism is what Page and Brin eventually turned to, and this model has formed the foundation of its shareholders' expectations ever since. Facebook/Meta and Twitter have also endured similar fates, and so all these social media companies are welded to this model in a rather irreversible way.
In very simple terms, with a surveillance capitalist system, the more data that can be collected and processed, the more revenue the platform can generate. Thus, there is an incentive built into the system to facilitate mass data collection. Anything that frustrates this is therefore, in the end, not going to be prioritised...
Looking at how Facebook works through the prism of systems thinking demonstrates well the incentives that are at play and where data rights fit within this.
What is systems thinking?
In simple terms, systems thinking is about interpreting the way the world works by looking at its constituent systems. As defined by Donella Meadows, author of Thinking in Systems: A Primer, a system "is a set of things — people, cells, molecules, or whatever — interconnected in such a way that they produce their own pattern of behaviour over time."43
These systems are made up of three distinct parts: (i) elements, (ii) interconnections and (iii) a purpose:44
The elements of a system include its stocks (which are the parts that can most easily be quantified, e.g. the amount of water in a bathtub) and flows (the filling and draining of the stock, e.g. the tap and the drain of the bathtub).
The interconnections are the way that those stocks and flows are connected and interact with each other. Continuing with the bathtub example, a tap acts as an inflow that allows water into the bathtub (increases the stock) whereas the drain acts as an outflow that takes water out of the bathtub (decreases the stock).
The purpose of the system refers to the function that its interconnected elements fulfil.
By understanding the different elements of a system and how they interconnect, three relatively simple principles of system behaviour can be deduced (illustrated in the sketch after this list):
Principle 1: "As long as the sum of all inflows exceeds the sum of all outflows, the level of the stock will rise."45
Principle 2: "As long as the sum of all outflows exceeds the sum of all inflows, the level of the stock will fall."46
Principle 3: "If the sum of all outflows equals the sum of all inflows, the stock level will not change; it will be held in dynamic equilibrium at whatever level it happened to be when the two sets of flows become equal."47
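These principles can be made concrete with a few lines of code. The sketch below is a minimal toy model of my own (not taken from Meadows' book): it steps a single stock forward in time under a constant inflow and outflow, and the three calls at the bottom reproduce each of the three principles in turn.

```python
def simulate_stock(initial_stock, inflow, outflow, steps=10):
    """Step a single stock forward in time under constant flows.

    Each step applies: stock = stock + inflow - outflow,
    floored at zero (a bathtub cannot hold negative water).
    """
    stock = initial_stock
    levels = [stock]
    for _ in range(steps):
        stock = max(stock + inflow - outflow, 0)
        levels.append(stock)
    return levels

# Principle 1: inflows exceed outflows, so the stock rises.
print(simulate_stock(initial_stock=10, inflow=3, outflow=1))
# Principle 2: outflows exceed inflows, so the stock falls.
print(simulate_stock(initial_stock=10, inflow=1, outflow=3))
# Principle 3: inflows equal outflows, so the stock sits in dynamic equilibrium.
print(simulate_stock(initial_stock=10, inflow=2, outflow=2))
```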
Feedback loops are an important component of many systems. In fact, "the consistent behaviour patterns over a long period of time is the first hint of the existence of a feedback loop."48
So what are feedback loops? According to Meadows, a feedback loop "is formed when changes in a stock affect the flows into or out of that same stock."49
Feedback loops therefore provide a mechanism for reacting to the level of a stock by manipulating the inflow and outflow rates. However, not all systems have feedback loops, as stocks and flows can change due to external factors without the flows necessarily reacting to the changing stock levels.
Think about the money held in a bank account as a simple example of a feedback loop:
The total amount of money in the account (the stock) affects how much money comes into the account as interest. That is because the bank has a rule that the account earns a certain percent interest each year [which would be the system goal or function]. The total dollars of interest paid into the account each year (the flow in) is not a fixed amount, but varies with the size of the total in the account.50
There are two different types of feedback loops: balancing feedback loops and reinforcing feedback loops.
Balancing feedback loops try to keep the stock within a particular range. If the stock level is too high, then the feedback loop will ensure that the inflow rate is decreased and/or the outflow rate is increased. If the stock is too low, then the feedback loop will ensure that the inflow rate is increased and/or the outflow rate is decreased.
Balancing feedback loops are therefore 'goal-seeking' or 'stability seeking'.
An important component of systems with feedback loops is the discrepancy between the actual stock level and the desired stock level. It is this discrepancy that determines how the feedback loop reacts to the stock level.
In the diagram above, if the energy available for work is lower than the desired energy level, then the system will increase coffee intake in order to increase the metabolic mobilization of energy (the inflow for the energy available for work). Conversely, if the energy available for work is equal to or greater than the desired energy level, then the system will hold the coffee intake steady or decrease it in order to stabilise the metabolic mobilization of energy.
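Here is a minimal sketch of that balancing loop in code, using invented numbers rather than anything from Meadows' book. The key line is the one where the coffee intake (the inflow) is set in proportion to the discrepancy between the desired and actual energy level; that reaction to the gap is what makes the loop goal-seeking.

```python
def balancing_loop(desired_energy=100, energy=40, drain_per_hour=10,
                   responsiveness=0.5, hours=12):
    """Toy balancing feedback loop: coffee intake reacts to the gap
    between desired and actual energy, pulling the stock toward the goal.
    All numbers are invented for illustration."""
    history = []
    for _ in range(hours):
        discrepancy = desired_energy - energy                 # how far below the goal we are
        coffee_boost = max(discrepancy * responsiveness, 0)   # inflow reacts to the gap
        energy = energy + coffee_boost - drain_per_hour       # stock update
        history.append(round(energy, 1))
    return history

print(balancing_loop())
```

Running it, the energy level climbs towards the goal and then settles at the level where the coffee boost exactly offsets the hourly drain - the stability-seeking behaviour described above.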
Reinforcing feedback loops enhance "whatever direction of change is imposed on it."51 Systems that possess this type of dynamic can cause the stock levels to grow exponentially or decline exponentially.
Referring back to the example of the money held in a bank account earning interest: the more money held in the bank, the more interest is earned. The more interest is earned, the more money there is in the bank, and the process repeats itself.
Accordingly, reinforcing feedback loops "are self-enhancing, leading to exponential growth or runaway collapses over time."52 In other words, reinforcing feedback loops can be either positive (increase the stock) or negative (decrease the stock).
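The bank account example translates directly into a reinforcing loop sketch (again with invented numbers): the interest flow is a fixed fraction of the balance itself, so each step's growth enlarges the next step's growth.

```python
def reinforcing_loop(balance=1000.0, interest_rate=0.05, years=10):
    """Toy reinforcing feedback loop: the inflow (interest) scales with
    the stock (balance), so the stock grows exponentially."""
    history = [balance]
    for _ in range(years):
        interest = balance * interest_rate  # the flow depends on the stock...
        balance += interest                 # ...and feeds back into the stock
        history.append(round(balance, 2))
    return history

print(reinforcing_loop())  # 1000.0, 1050.0, 1102.5, ... exponential growth
```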
The surveillance capitalist system model
The ecosystem employed by Facebook, and which generates a substantial amount of its revenue, is one based on surveillance capitalism, as described by Shoshana Zuboff in The Age of Surveillance Capitalism (2019). In that book, Zuboff provides an overview of what surveillance capitalism is:
Surveillance capitalism claims human experience as free raw material for translation into behavioural data. Although some of these data are applied to product or service improvement, the rest are declared as a proprietary behavioural surplus, fed into advanced manufacturing processes known as “machine intelligence,” and fabricated into prediction products that anticipate what you will do now, soon, and later. Finally, these prediction products are traded in a new kind of marketplace for behavioural predictions that I call behavioural future markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are eager to lay bets on our future behaviour.53
Accordingly, surveillance capitalism can be seen as a system whereby entities like Facebook collect people’s information to extract insights about their behaviour from which predictions about their behaviour can be made. It is those predictions which then can be used for various other means that generate revenue for those entities, such as targeted advertising.
Surveillance capitalism consists of four main constituent elements: the capturing of human experience; the extraction of behavioural data from that human experience; the creation of prediction products from that behavioural data; and the selling of those prediction products to generate revenue.
Below is a system model based on the description of surveillance capitalism provided by Zuboff with some additional elements and interconnections to provide a fuller picture of what is likely happening under the hood of social media platforms like Facebook.
It is important to note that this system model may be an over-simplification of the actual activity taking place at Facebook (and indeed other social media platforms). However, it hopefully captures the essence of what takes place.
The whole system is essentially one big reinforcing feedback loop designed to generate revenue from personal data. The more user activity that occurs on the platform, the more personal data about that activity can be collected.
This then allows more behavioural insights to be garnered, leading to more prediction products being created. Those prediction products become more effective as more users engage with the content, in turn allowing them to increase in value and generate more revenue for the platform when they are sold to advertisers.
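To make the loop structure explicit, below is a deliberately crude toy version of that reinforcing loop in code. The variable names, coefficients and linear relationships are all my own assumptions for illustration, not anything disclosed by Meta: activity yields behavioural data, data improves targeting, better targeting raises ad revenue, and a slice of that gain is 'reinvested' in a stickier feed, which drives yet more activity.

```python
def surveillance_loop(user_activity=100.0, rounds=8,
                      data_per_activity=2.0, targeting_gain=0.01,
                      engagement_reinvestment=0.05):
    """Crude toy model of the surveillance capitalist reinforcing loop.
    All coefficients are invented for illustration only."""
    revenue_history = []
    for _ in range(rounds):
        behavioural_data = user_activity * data_per_activity   # activity -> data
        targeting_quality = behavioural_data * targeting_gain  # data -> better predictions
        ad_revenue = user_activity * targeting_quality         # predictions sold against attention
        user_activity *= (1 + engagement_reinvestment)         # revenue funds a 'stickier' feed
        revenue_history.append(round(ad_revenue, 1))
    return revenue_history

print(surveillance_loop())  # revenue grows every round as the loop feeds itself
```

The exact numbers are meaningless; the point is structural - every flow feeds the next, so revenue compounds round after round.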
Surveillance capitalism is therefore central to Meta's existence. To quote its annual report for 2023:
We report financial results for two segments: Family of Apps (FoA) [which includes Facebook, Instagram, Messenger, Threads and WhatsApp] and Reality Labs (RL). Currently, we generate substantially all of our revenue from selling advertising placements on our family of apps to marketers, which is reflected in FoA. Ads on our platform enable marketers to reach people across a range of marketing objectives, such as generating leads or driving awareness. Marketers purchase ads that can appear in multiple places including on Facebook, Instagram, Messenger, and third-party applications and websites. (Emphasis added)54
The incentives behind the surveillance capitalist model
The incentives dominant within an organisation are downstream of the goals of the system operated by that organisation. This means that, in the surveillance capitalist system model, any activity that detracts from the system function is deprioritised, watered down or resisted.
Any event that increases any of the outflows of the big reinforcing feedback loop (content moderation, data protection compliance/enforcement, user churn, costs or competition) depletes the stocks in the system that help generate revenue, which ultimately impacts growth. Equally, a tightening of the other flows of the system (data collection, data analytics and AI, prediction product sales, marketing or onboarding) will have a similar impact by slowing the system down.
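Extending the toy model above (still with invented numbers), the sketch below adds a single 'outflow' knob - the fraction of behavioural data that compliance measures take out of the loop each round - and compares cumulative revenue with the knob set low and high. The only point being made is structural: increasing any outflow drains the stocks the reinforcing loop depends on and slows the whole system down.

```python
def loop_revenue(rounds=8, activity=100.0, compliance_outflow=0.0,
                 data_per_activity=2.0, targeting_gain=0.01,
                 engagement_reinvestment=0.05):
    """Toy loop with an outflow: compliance removes a fraction of the
    behavioural data each round before it can fuel targeting.
    Coefficients are invented for illustration only."""
    total_revenue = 0.0
    for _ in range(rounds):
        data = activity * data_per_activity * (1 - compliance_outflow)
        total_revenue += activity * data * targeting_gain
        activity *= (1 + engagement_reinvestment)
    return round(total_revenue, 1)

print(loop_revenue(compliance_outflow=0.0))   # no friction: the loop runs freely
print(loop_revenue(compliance_outflow=0.3))   # a 30% data 'outflow' cuts cumulative revenue
```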
There is therefore an incentive to ensure that the net effect of the work done by companies operating the surveillance capitalist model is conducive to the bottom line. There is an incentive to continue the processing of personal data to generate revenue from targeted advertising - this is the system goal.
This is evident from the key parts of Facebook's development during the 2000s. The news feed, Pandemic and the Like button all contributed to the surveillance capitalist model:
The news feed, powered by its recommendation engine, established a way to keep users engaged on the platform, resulting in more user activity to quantify and measure.
Pandemic provided Facebook with a means to monetise this user activity by making inferences about their behaviour and using this to provide a targeted advertising system to businesses.
The Like button provided further means to analyse user activity, in turn enhancing the targeted advertising system, and attracting more businesses to pay to advertise on the platform.
Even though doubts were raised prior to the launch of these different features/projects (including from a privacy/data protection perspective), they ended up being launched nevertheless. They supported the system goal, so why would they not be launched?
If Facebook had taken a different decision and prioritised the privacy of its users, this would ultimately not have supported the system goal. It would have increased the outflows of the big reinforcing feedback loop, with the effect of reducing revenue.
You can see this in the specific part of the surveillance capitalist model shown below, where activities like content moderation and data protection compliance/enforcement are represented as outflows. When these outflows increase, they reduce the stock levels for user activity and personal data.
More content moderation likely means less user activity, since it involves measures such as removing content or users from the platform. Equally, more data protection means less personal data is processed, since such measures typically run counter to data maximisation and reduce the volume of data Meta can actually collect and use.
Accordingly, introducing measures that enhance data protection is not incentivised. This is viewed within Meta as frustrating the system.
Meta explicitly acknowledges this in its own annual report:
Substantially all of our revenue is currently generated from advertising on Facebook and Instagram...Our advertising revenue has been, and we expect will continue to be, adversely affected by reduced marketer spending as a result of limitations on our ad targeting and measurement tools arising from changes to the regulatory environment and third-party mobile operating systems and browsers.
In particular, legislative and regulatory developments such as the General Data Protection Regulation, including its evolving interpretation through decisions of the Court of Justice of the European Union, ePrivacy Directive, the European Digital Services Act, and U.S. state privacy laws including the California Consumer Privacy Act, as amended by the California Privacy Rights Act, have impacted our ability to use data signals in our ad products, and we expect these and other developments such as the Digital Markets Act will have further impact in the future. As a result, we have implemented, and we will continue to implement, whether voluntarily or otherwise, changes to our products and user data practices, which reduce our ability to effectively target and measure ads. (Emphasis added)55
This is why the recent developments in the EU regarding the (il)legality of surveillance capitalism are a big threat to Meta. I have a series covering this topic in more detail, but the gist is that these legal developments have made it increasingly difficult for Meta to sustain its surveillance capitalist model in Europe.
This is so much the case that Meta has realised the need to balance data protection with its business model:
To mitigate these developments, we are continually working to evolve our advertising systems to improve the performance of our ad products. We are developing privacy enhancing technologies to deliver relevant ads and measurement capabilities while reducing the amount of personal information we process, including by relying more on anonymized or aggregated third-party data
[...]
We are also engaging with others across our industry to explore the possibility of new open standards for the private and secure processing of data for advertising purposes. We believe our ongoing improvements to ad targeting and measurement are continuing to drive improved results for advertisers. However, we expect that some of these efforts will be long-term initiatives, and that the legislative, regulatory and platform developments described above will continue to adversely impact our advertising revenue for the foreseeable future.56
But even so, the important point here is that Meta is not doing this of its own volition. It is doing this because it is being legally required to do so.
The surveillance capitalist model is still at play here. In the same annual report that mentions the development of 'privacy enhancing technologies' (whatever this means), it reported $134.9 billion in revenue, 98% ($131.95 billion) of which came from targeted advertising.57
The negative externalities of the surveillance capitalist model
In September 2021, the Wall Street Journal (WSJ) published a series of articles called The Facebook Files "based on a review of internal Facebook documents, including research reports, online employee discussions and drafts of presentations to senior management." These documents were supplied by Frances Haugen, a former employee at Facebook between 2019 and 2021 working in the Civic Misinformation and Counter-Espionage teams at the company.
Among the tens of thousands of documents provided to the WSJ was an internal report called 'Problematic Use of Facebook: User Journey, Personas & Opportunity Mapping'. In its article on the report, the WSJ noted the following:
Facebook researchers have found that 1 in 8 of its users report engaging in compulsive use of social media that impacts their sleep, work, parenting or relationships, according to documents reviewed by The Wall Street Journal.
These patterns of what the company calls problematic use mirror what is popularly known as internet addiction. They were perceived by users to be worse on Facebook than any other major social-media platform, which all seek to keep users coming back, the documents show.
[...]
The researchers on the well-being team said some users lack control over the time they spend on Facebook and have problems in their lives as a result.
[...]
Those problems, according to the documents, include a loss of productivity when people stop completing tasks in their lives to check Facebook frequently, a loss of sleep when they stay up late scrolling through the app and the degradation of in-person relationships when people replace time together with time online. In some cases, “parents focused more on FB than caring for or bonding with their children,” the researchers wrote.
[...]
The researchers estimated these issues affect about 12.5% of the flagship app’s more than 2.9 billion users, or more than 360 million people. About 10% of users in the U.S., one of Facebook’s most lucrative markets, exhibit this behavior. In the Philippines and in India, which is the company’s largest market, the employees put the figure higher, at around 25%.
Looking at the surveillance capitalist model, it is easy to see how these problematic uses of Facebook can occur. The platform's recommendation engine, and the reinforcement feedback loop it executes, seeks to provide users with content that they are most likely to engage with, regardless of the nature of that content (i.e., whether that content is good or bad).
The goal here is engagement. Whatever content engages the user the most is the content that is presented in their feed, to keep them hooked and coming back for more.
Facebook's internal report shows that the company itself is aware of this. And as is consistent with the incentives behind the surveillance capitalist model, despite identifying the issue, the work to deal with it was never prioritised:
A Facebook team focused on user well-being suggested a range of fixes, and the company implemented some, building in optional features to encourage breaks from social media and to dial back the notifications that can serve as a lure to bring people back to the platform.
Facebook shut down the team in late 2019.
The Facebook Files expose how the surveillance capitalist model preys on the inherent weaknesses of people. And it makes them worse.
This is because of the reinforcing feedback loop executed by the platform. It encourages users to be trapped in a cycle of consumption as a way to achieve higher engagement, and therefore continue producing more revenue from targeted advertising.
In doing so, the system takes advantage of the fact that people are generally poor at conceptualising risks, are susceptible to the social media slot machine and are easily persuaded by the convenience of network effects.
Such vulnerabilities are key to the arguments against data rights law based on the individual control model. Under this model, individuals are empowered by laws giving them control of their data:
Privacy and data protection laws sprouted up in the United States, Europe, and around the world, and most embraced the Individual Control Model in significant part. These laws relied heavily on providing individual privacy rights so that people could manage their data. In the United States, these rights generally included a right to information about data collected about a person, a right to access that data, and a right to correct errors or omissions in the data. European laws provided additional rights such as a right to delete (or erase) data from records, a right to object to the processing of data, a right to not be subject to automated decisions, and more.58
The problem with this model is that it empowers people to take control over their data when those very people often make choices that are detrimental to themselves. People are given a means of control when they are often not minded to exercise that control.
These tendencies of human behaviour are starkly illustrated in the work of Franz Kafka. This prominent 20th-century Austrian-Czech author (and lawyer) is known for several famous novels, among them The Trial (published posthumously in 1925).
The Trial is about a man called Josef K. who is prosecuted for unknown crimes by an enigmatic, unidentified and bureaucratic agency under an opaque legal system. It tells the story of how Josef attempts to navigate this absurdity and the various strange happenings it brings into his life.
There are many parallels that could be drawn between The Trial and the state of data rights today under the individual control model. Kafka reveals in his novel how people end up making decisions that are actually harmful to themselves, as Solove and Hartzog describe in their article on the individual control model:
The most challenging and deeply disturbing dimension to Kafka’s depiction of human nature is that people are often not passive victims; they willingly participate in their peril. They rush toward it and embrace it. In some cases, they even crave it. Surveillance isn’t just hoisted upon people; many people eagerly sign up for it. People embrace and normalize the fruits of the digital age, no matter how poisonous they might be. People will often make choices that are not in their own best interest.59
Solove and Hartzog explain further:
Kafka shows us that it is profoundly difficult to empower people, not just because the forces arrayed against them are overpowering, but also because people willingly surrender to those forces.
[...]
...in The Trial, Josef K. believes in the legitimacy of the court system despite countless signs it is illegitimate—the offices are in attics in rundown buildings; court proceedings are held in decrepit living rooms; what appear to be law books are not. At every turn, the system is unprofessional and even ramshackle. Yet, Josef K. accepts its authority and willingly submits to its power—even his own execution. In each piece, people acquiesce to authority without being forced to do so.
Kafka’s works defy simple explanations as to why people make these ruinous decisions to submit. Kafka invites us to contemplate the bewildering complexity and absurdity of human psychology with all its restless emotions, sudden impulses, inexplicable irrationality, contradictory dimensions, and subconscious forces. Kafka shows us that we must reckon with this side of human nature.
As with Kafka’s characters, in the real world, people frequently make detrimental and submissive privacy decisions. People often trust companies without much basis (and sometimes contrary to the previous actions by these entities). People readily click the “accept” button or share their data without even trying to exercise their choices. For the most part, people just follow along and do what companies want them to do.60
And so herein lies the problem with the individual control model in the face of systems built on surveillance capitalism:
The Individual Control Model assumes that if people are given the tools to manage their privacy, they will effectively do so, or at least have a meaningful opportunity to try. But the task of privacy self-management is an impossible one—people can’t exercise their privacy rights at scale; nor can people learn enough to effectively determine the risks when sharing their data or make appropriate cost-benefit decisions. Kafka also shows us that even if privacy self management were somehow possible at scale, many people might not behave as the Individual Control Model envisions. Instead, if bestowed with control over their data, people will willingly cede it to the large entities that are collecting and using their data. And they will do so even when it harms them.61
Meta have shown that they have enough insight into their platforms to understand this reality. But the idea that they would suddenly change tack, given the systems and incentives at play, is laughable.
Companies like Meta rely on the flaws of the individual control model. They are underpinned by systems that depend on people not exercising their data rights.
In the end, most people simply acquiesce to the surveillance capitalist model.
This is something that has long been recognised in the data rights space. The so-called 'privacy paradox' highlights how people will say they care about their privacy whilst acting in a manner that is inconsistent with this sentiment.62
When it comes to making choices about privacy and exercising data rights, people are often confronted with three problems:
Incomplete information. People may not always be aware of the scope of the data processing taking place, and are therefore unaware of the risks or implications of such processing. Long and jargon-filled privacy notices/policies, which sometimes do not reflect the reality of how personal data is actually used, can be partly blamed for this.
Bounded rationality. People do not always have the time to read and understand the information presented to them about how their data are used. Hyperbolic discounting ('I need this service now'/the need for immediate gratification), optimism bias ('Surely the likelihood of me being affected by a negative event is really low') and status quo bias (the general affinity for default choices) all play a role here. A small worked example of hyperbolic discounting follows this list.
Context dependence. Exercising data rights sometimes depends on the context. This reflects the fact that people have dynamic boundaries in terms of what data they are willing to share and not share in different situations.
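To make the bounded rationality point concrete, here is a small worked example of hyperbolic discounting. The formula V = A / (1 + kD) is the standard hyperbolic discount function; the benefit, harm and discount rate figures below are entirely made up for illustration.

```python
# Hypothetical numbers illustrating why 'I need this service now' usually wins.
# Hyperbolic discounting: the present value V of an outcome A felt after a
# delay D is V = A / (1 + k * D), where k is the discount rate.

k = 1.0                    # assumed discount rate (per day)
benefit_now = 10           # perceived value of using the service today (no delay)
harm_later = 1000          # perceived cost of a privacy harm felt in ~3 years
delay_days = 3 * 365

discounted_harm = harm_later / (1 + k * delay_days)   # roughly 0.9
print(benefit_now > discounted_harm)                   # True: sign up now
```

Even a harm perceived as a hundred times larger than today's convenience shrinks to near nothing once it sits far enough in the future, which is exactly the asymmetry that sign-up flows rely on.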
But a big contributor to the privacy paradox, and to people acquiescing to the surveillance capitalist model, is dark patterns: design choices that exploit cognitive biases to nudge people into behaving in a manner that may be inconsistent with their preferences. A hypothetical sketch of what this can look like follows the list below.
Some typical dark patterns include:
Default settings (which exploits status quo bias)
Cumbersome privacy choices
Framing (how a choice is described and presented)
Rewards and punishments (exploits hyperbolic discounting)
Forced action (e.g., consent that is not freely given, again linked to hyperbolic discounting)
Norm-shaping (the information-sharing behaviour of other people)
Distractions and delays
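As a purely hypothetical illustration of the first few patterns above (nothing here is drawn from any real platform's settings or code), a consent flow built around dark patterns might be configured along these lines:

```python
# Invented consent-flow configuration illustrating several dark patterns.
# Every key, label and number here is hypothetical.
signup_consent_flow = {
    # Default settings (status quo bias): sharing is on unless the user
    # goes looking for the toggle.
    "share_activity_with_advertisers": {"default": True},
    # Cumbersome privacy choices: accepting everything is one click,
    # rejecting everything takes several screens.
    "steps_to_accept_all": 1,
    "steps_to_reject_all": 4,
    # Framing: the opt-in is worded as a benefit, the opt-out as a loss.
    "labels": {
        "opt_in": "Get a more personalised experience",
        "opt_out": "See less relevant content",
    },
}
```

None of this forces anyone to accept anything; it simply stacks the path of least resistance in the platform's favour.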
Don't feed the machine!
Data rights provide a means for confronting the surveillance capitalist machine.
But in the first place, as an individual, if you can help it, do not feed the machine!
Do not give your data to entities that are not incentivised to use it in your best interests. Do not give your data to those who are not incentivised to use it ethically or fairly.
Do not give your data to those who will not look after it.
If you understand the system at play, as set out in this post, then you will understand what you are up against:
A system developed over the years by those driven by a formidable desire to change society in profound ways through rapacious entrepreneurialism and groundbreaking technology without consideration for all the consequences of their creations.
A system that prioritises data maximisation at the expense of responsible and ethical uses of personal data, all driven by profit-seeking motives encouraged by financiers ultimately concerned with ensuring a return on investment.
A system that is represented by mantras like 'move fast and break things', whereby the supposedly beneficial ends always, somehow, justify the means and anyone who argues otherwise is simply a luddite determined to stifle progress and growth.
A system whose incentives are in many ways no different from those behind previous corporate leviathans which, when allowed to pursue their goals unconstrained, caused devastating negative externalities.
Cf. this passage from Is TikTok a national security threat?:
The main point here is this: it may not be today, or tomorrow, or next week, or next month, or next year, or in the next two, five or even ten years. But one day, because you have given away your data to an entity with ulterior motives...it could be used against you, and by the time that happens it may be too late to do anything about it. So be very careful about who you decide to give your data to...
Jamie Susskind, Future Politics: Living Together in a World Transformed by Tech (OUP 2018), p.2.
Carissa Véliz, Privacy Is Power: Why and How You Should Take Back Control of Your Data (Melville House Publishing 2021).
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.123.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.123.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.123.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), pp.127-128.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.128.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.128.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.129.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), pp.129-130.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), pp.137-138.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.138.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.139.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.141.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.142.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.143.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.180.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.181.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.181.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.181.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.182.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.186.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.188.
It is important to note that this lawsuit was filed prior to Facebook changing to the opt-in model.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.189.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.194.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), pp.195-196.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), pp.198-199.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.199.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.199.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.202.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.204.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.204.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.204.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), pp.204-205.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.205.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.205.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.206.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.206.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.207.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.207.
Steven Levy, Facebook: The Inside Story (Penguin Random House 2020), p.207.
Donella Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing 2008), p.2.
Donella Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing 2008), p.14.
Donella Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing 2008), p.22.
Donella Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing 2008), p.22.
Donella Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing 2008), p.22.
Donella Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing 2008), p.25.
Donella Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing 2008), p.25.
Donella Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing 2008), p.25.
Donella Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing 2008), p.30.
Donella Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing 2008), p.32.
Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Profile Books 2019), p.8.
Meta Annual Report 2023, pp.61-62.
Meta Annual Report 2023, p.62.
Meta Annual Report 2023, p.59.
Daniel J. Solove and Woodrow Hartzog, ‘Kafka in the Age of AI and the Futility of Privacy As Control’ (2024) 104 Boston University Law Review 1021, 1025.
Daniel J. Solove and Woodrow Hartzog, ‘Kafka in the Age of AI and the Futility of Privacy As Control’ (2024) 104 Boston University Law Review 1021, 1032.
Daniel J. Solove and Woodrow Hartzog, ‘Kafka in the Age of AI and the Futility of Privacy As Control’ (2024) 104 Boston University Law Review 1021, 1032-1033.
Daniel J. Solove and Woodrow Hartzog, ‘Kafka in the Age of AI and the Futility of Privacy As Control’ (2024) 104 Boston University Law Review 1021, 1035.
Travis D. Breaux (ed), An Introduction to Privacy for Technology Professionals (IAPP 2020), p.180.