Recommender Systems: Automated decisions, AI and Human Freedom

(Below is a pre-print shared for educational purposes. The full article is available here, with credit to my co-author Jan Blockx: https://doi.org/10.1145/3597512.3599712)

My latest academic work looks at recommender systems, the algorithms behind the recommendations that shape our digital lives on YouTube, Amazon, Netflix, and so on.

Abstract

Recommender systems form the backbone of modern e-commerce, suggesting items to users based on algorithmically collected data about a user’s preferences. Companies that use recommender systems claim that they can give users what they want, or more precisely, what they desire. Netflix, for example, gives users recommended movies based on the user’s behaviour on the platform, thereby listing new movies that the user may want to watch. This article explores whether there is a difference between what engages us, on the one hand, and what we truly want to want, on the other. This builds on the hierarchical structure of desires, as posed by Harry Frankfurt and Gerald Dworkin. Recommender systems, to use Frankfurt’s terminology, may not allow for the formation of second-order desires, or for users to consider what they want to want. Indeed, recommender systems may rely on a narrow form of human engagement, a voyeuristic mode, rather than an active wanting. In bypassing second-order desires, there is a risk that recommender systems can start to control the user, rather than the user controlling the algorithm. This raises important questions concerning human autonomy, trustworthiness, and Byung-Chul Han’s conception of an information regime, where the owners of the data make decisions about what users consume online, and ultimately, how they live their lives.

Introduction

In the digital age, the relationship between human autonomy and recommender systems cannot be easily ignored. The rise of recommender systems, or algorithms that suggest items and products to users, has brought about a new level of convenience to our everyday lives [1]. However, the use of recommender systems poses important questions about the extent to which algorithms are shaping our desires and influencing our decisions [2]. As Walter Benjamin wrote in his famous essay The Work of Art in the Age of Mechanical Reproduction, the introduction of technology into art and culture has the potential to both democratize and commodify it [3]. Similarly, the introduction of recommender systems into our daily lives has the potential to both empower and manipulate us.

On the one hand, recommender systems provide a valuable service by sifting through vast amounts of information online and suggesting the most relevant items for our immediate needs [1]. In this way they save us both time and money and expose us to new and exciting items that we may not otherwise discover [4, 5, 6]. On the other hand, recommender systems collect vast amounts of personal data concerning our individual preferences, behaviours, and identities [4, 5, 6]. They use this data to influence our decisions in ways that we may be unaware of. The algorithms also often act as black boxes, with very little transparency, oversight, or readability for the end user [1, 7]. The more we use recommender systems, the more they understand us; the more they understand us, the more useful they become; the more useful they become, the more they become part of our everyday decision-making; and therefore, the more they determine what decisions we take. Eventually, we may come to trust the system to make decisions on our behalf.

Recommender systems are created with specific goals and biases in mind, including, significantly, the profit motive [8]. These goals and biases may not always align with a user’s wants and desires [8], and thereby open the user up to forms of manipulation and control. If a company wants a user to buy a product, for example, that does not mean that the user wants to buy that product, even if it is superficially engaging to the user, or someone of the user’s profile. Some recommender systems are biased towards lucrative items, for example, regardless of whether a user may actually want to buy them [8].

However, in many instances, the profit motive of the owner of the recommender system will mean that the algorithm aligns with our wants and desires. If the system recommends an item that the user wants, it is simply more likely that the user will buy that item than if the system recommends another item that the user does not want. This paper points out that, even in those instances, we should be wary of recommender systems. The main argument in this paper is that even an algorithm that recommends a product or service that we desire can be detrimental to us, since it impacts on our (attempts at) autonomous decision making.

1. Human Autonomy

The “I” has long been a subject of philosophical inquiry. What makes a person into a person? What does it mean to say that a person holds a certain opinion, has certain desires, takes certain actions? These are perennial questions about the existence of an individual mind and will. The singular “I” suggests an identity looming underneath every opinion, desire or action, but upon closer scrutiny, I hold many contradictory opinions, have many incompatible desires, and take many inconsistent actions. I may believe that everyone should pay taxes to fund public services, but at the same time fail to pay my own full contribution. I may want to lose weight but at the same time eat that delicious piece of chocolate pie.

There are multiple descriptive and normative theories about how persons can maintain a true or apparent identity despite their internal contradictions. Some of these theories argue that identity or personality is somehow innate beneath our multiple opinions and desires; others that it is shaped by confronting our own internal contradictions and/or by the world around us. Regardless of its origin, we all experience the discovery or shaping of our identity as a process which often resembles a struggle.

In this context, Harry Frankfurt, in his seminal work Freedom of the Will and the Concept of a Person, has proposed the insightful concept of ‘second-order desires’ [9]. First-order desires are our immediate wants, such as hunger or thirst [9]. Second-order desires are our desires about our desires, or what we want to want [9]. For example, we may want to be less impulsive or want to go to the gym more often, rather than sitting on the couch. According to Frankfurt, human autonomy comes from our capacity to form such second-order desires, that is, to exercise a degree of control that surpasses our first-order impulses: “The autonomous person is one who has, in some sense, mastery over their desires” [10]. He draws a distinction here between animals, who act on impulse, and humans, who act on contemplation, rationalization, and reflection on what they want [9]. The autonomous person is one who decides who they want to be by deciding what they want to want [9]. They shape their preferences to determine their future.

Surrendering this rationalization process to an algorithm runs the risk of turning us, in Frankfurt’s terminology, into a “wanton,” someone who is less than human, more equivalent to an animal or a machine [9]. Recommender systems shape our first-order desires by pre-empting decisions and presenting pre-ordered lists for consumption. Some systems do not recommend a specific item to the user but merely rank items or list ratings to help a user make a decision [11]. Others go further, by making decisions on the user’s behalf, presuming what the user wants to do [11]. Spotify, for example, automatically plays recommended songs to users without the user needing to decide anything at all [12]. This saves the user time, avoiding the browsing of millions of songs, but it also bypasses the user’s self-reflection and autonomy [12]. A music recommender may therefore give users what they want (first-order desires), but not necessarily what they want to want (second-order desires), and it is in this loss of second-order preferencing that they risk losing autonomy.

This article explores the difference between what merely engages us, on the one hand, and what we truly want to want, on the other. This builds upon the hierarchical structure of desires, posed by Frankfurt and Dworkin. Recommender systems rely on a narrow form of human engagement, a voyeuristic mode, rather than an active wanting. In bypassing our second-order desires, there is a risk that recommender systems come to control the user, rather than the user controlling the algorithm. This raises important questions concerning free will, human autonomy, and Byung-Chul Han’s conception of an information regime, where it is the owners of the data, rather than users, who are making the real decisions about what users consume online, and ultimately, how users live their lives [13].

2. Protect Me from What I Want

In 1982, Times Square was lit up with a giant digital billboard reading: Protect me from what I want [14]. The artwork, by conceptual artist Jenny Holzer, was part of her “truism” series, which raised important questions at the time about power, feminism, and society. Protect me from what I want, as a phrase, emerged from the early-1980s American counterculture, where instant gratification, addiction, and uncontrolled desire were topical in the news. But the artwork also drew a telling distinction between what we want, on the one hand, and human autonomy on the other. If someone needs to be protected from what they want, then does that mean that they do not actually want what they want? Holzer’s artwork asks us to imagine someone who desires to be free from their desires, and therefore raises the question: what would that kind of freedom look like?

Harry Frankfurt rejects the hedonistic idea that being free means to do whatever we want to do [9, 10].  Instead, he suggests that being free means to decide what we want to want [9, 10].  Instead of merely wanting and choosing to do an act on impulse, humans are capable of wanting to be “different to what they are” [9]. Second-order desires are created by self-reflection on first-order desires, and are emblematic of the rationality of human thought [10].

Gerald Dworkin extends this thought, emphasizing that human autonomy comes from our “capacity to change first-order preferences” through rationality, to reflect on what we want, and therefore adopt the new want “as one’s own” [10, 15]. Dworkin points to the example of Odysseus, who has a first-order desire to sail towards the sirens but who recognizes this desire as “alien to him” [10, 15]. Something we want can therefore, upon reflection, be something we do not want at all. In this context, Jenny Holzer’s Protect me from what I want comes into focus. If someone wants an item (first-order desire) and also wants to be protected from it (first-order desire), then it is up to their second-order desire – their reflection and introspection – to decide between the first two, and therefore decide a course of action. The reconciliation between two desires is where we find human agency; our will and capacity to decide.

If we do not decide between our first-order desires we risk becoming, in Frankfurt’s somewhat dated terminology, a “wanton” [9]. Frankfurt, like Holzer, imagines a situation where a person wants to be protected from what they want, and uses the example of a reluctant drug addict [9]. The reluctant drug addict is trapped between two competing first-order desires [9]. On the one hand, they want to take the drug, and on the other hand, they want to be protected from taking the drug, or choose not to take it [9]. What distinguishes the wanton drug addict from the reluctant drug addict is that, instead of evaluating between the two first-order desires, the wanton succumbs to his first-order desires without any contemplation [9]. He not only pursues the “course of action he is most strongly inclined to pursue, but he does not care which of his inclinations is the strongest” [9]. The wanton drug addict will take the drug or seek recovery, but he does not evaluate between these desires. In other words, he becomes ruled by his first-order desires, and in the process, loses autonomy [9]. A reluctant drug addict, by contrast, has the capacity to choose between his first-order desires, and thus can seek recovery as the better, rational option [9]. It is his reflection upon his first-order desires that allows the reluctant drug addict to assert agency over his life.

3. Recommender Systems Framed as Helpful Technology

In the best-case scenario, recommender systems empower users to get what they want more than ever before [16]. An RS, from this perspective, allows for everything you want at the touch of a button. In the worst-case scenario, however, an RS may imprison users into choices they did not necessarily want to make. An effective RS may create wants that did not previously exist in the user, influencing the user’s future decisions, both at an individual and at a collective level [1, 2]. An RS may also pigeon-hole users into filter bubbles, where they only receive certain types of information and not others [2]. This can shape the kind of content they are exposed to, and therefore, what they are capable of wanting on a platform.

Recommender systems may influence a user’s desires directly, as users have a strong disposition to click on recommended items [11]. When flooded with information online, users tend to favour prominent items and the most popular items [11]. Users also reduce searches for non-advertised products when given personalized advertising on a particular item [17]. An RS can therefore reduce the amount of time a user spends searching for what they want, and in doing so, guide users towards certain items that the system presumes that they want [17]. Far from giving users what they actually want, however, this suggests that the system can actually shape a user’s desires, either intentionally or not, by structuring their online behaviour. The user, over time, begins to learn what the system wants them to want – even if this reflects some part of the user’s identity, it is not necessarily reflective of their entire self.

In the literature, RS are often represented as helpful new technologies that help users combat information overload on the internet and the paralysis of choice, and that help increase the speed of a user’s decision-making [1]. RS are designed to help users find products quickly [1, 11]. They are even said to help users learn about themselves [11]. If, for example, a user does not know what they want, the system can show them new wants and desires, in list form, empowering the user to understand themselves better [11]. However, users may come to trust that a recommender system knows them better than they know themselves. In a survey of social media users (N = 368), Engelmann et al. found that most respondents thought the social media algorithm “could make accurate and correct judgments about them in general (78.2%),” correctly tell “what is valuable to them” (66.2%) and “reflect who they are” (51.1%) [18]. Furthermore, the participants believed that social media algorithms could “accurately infer their interests (89.9%), their past (81.3%) and future purchasing behaviors (64.5%)” [18]. The survey participants therefore expressed a high degree of trust in the predictions made by the algorithm. However, this makes them vulnerable to the recommendations made by the algorithm, should these somehow not be accurate. This is particularly problematic since it is often difficult to assess whether the algorithm really knows the users better than they know themselves. What criterion should be used to make such an assessment? Is the fact that users are likely to continue to engage with or click on certain content really a sign that the algorithm that feeds the users this content gives them “valuable” content?

In instances where the quality of algorithmic predictions can be accurately measured, the trust placed by humans in the algorithms sometimes appears to be misguided. For example, a study by the University of Michigan showed that an early warning system used to predict sepsis in hospital patients based on algorithmic analysis of patient data was deeply flawed: the system “missed two-thirds of sepsis cases, rarely found cases medical staff did not notice, and frequently issued false alarms” [19]. The trust placed in the system was therefore misplaced. One reason for this was that the proprietary algorithms used in the system were not properly scrutinized. But at least in the instance of algorithmic predictions for medical conditions, academic researchers can conduct ex-post assessments to measure the accuracy of the system. In the case of recommender systems used on internet platforms, such ex-post assessment is much more difficult, since there is no appropriate criterion to measure accuracy.

Research on recommender systems has traditionally focused on accuracy, but new threats to RS, including cyber-attacks, system noise, user noise and system bias, as in the medical case above, have shifted the focus towards trust and trustworthiness [20]. What makes a user trust an RS? The answer is complicated and multi-faceted. As a starting point, the literature suggests that a trustworthy RS is transparent [1]. This means that it gives an explanation for how it reached a recommendation [1], or has a transparent system logic that is apparent to the user [21]. Salesforce, for example, argues that the transparency of its enterprise app recommender system, which offers explanations, is crucial for building consumer trust in the system [22]. There is at least one study that contradicts this argument: trust and transparency were not found to be correlated; instead, users were more likely to accept recommendations when an explanation was given, but did not necessarily trust the system more [23]. Furthermore, highly detailed explanations for a recommendation do not necessarily lead to higher trust [24]. Detailed explanations can even reduce trust, if aimed at expert users, who do not require such detail [24].
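
To make the notion of an explained recommendation concrete, the toy sketch below shows a hypothetical content-based recommender that attaches a human-readable reason to each suggestion. The user data, catalogue and matching rules are invented for illustration and do not describe any particular platform’s system.

```python
# A minimal, hypothetical content-based recommender that attaches a
# human-readable reason to each suggestion. All names and data are
# invented for illustration; no real platform's API is implied.

user_history = {"liked_genres": ["action"], "liked_directors": ["K. Bigelow"]}

catalogue = [
    {"title": "Point Break",   "genre": "action", "director": "K. Bigelow"},
    {"title": "Quiet Meadows", "genre": "drama",  "director": "A. N. Other"},
]

def recommend_with_explanation(history, catalogue):
    """Return (title, explanation) pairs for items matching the user's history."""
    results = []
    for item in catalogue:
        reasons = []
        if item["genre"] in history["liked_genres"]:
            reasons.append(f"you often watch {item['genre']} films")
        if item["director"] in history["liked_directors"]:
            reasons.append(f"you liked other films by {item['director']}")
        if reasons:
            results.append((item["title"], "Recommended because " + " and ".join(reasons)))
    return results

for title, why in recommend_with_explanation(user_history, catalogue):
    print(f"{title}: {why}")
```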

The European Union (EU) has laid out a much wider set of ethical principles for Trustworthy Artificial Intelligence, which goes beyond transparency to include other considerations, such as the prevention of harm and the fairness of the system [25]. Fan et al. suggest further criteria, including safety and robustness, privacy, environmental wellbeing, accountability, and auditability [26]. The latter two offer some way for RS to be corrected if they are veering into untrustworthy territory, by internal or external audits. In summary, a trustworthy RS “should not only be accurate, but also transparent” [27], unbiased and fair, as well as “robust to noise or attacks” [20]. This is a much higher bar to pass than mere transparency. Nevertheless, a robust RS that matches these criteria will not necessarily give users exactly what they want, if for example, what they want breaches one of these principles.

If a user is given exactly what they want, does that mean that they will automatically trust the RS? This too seems false. Consider the common phrase told to children: don’t take candy from a stranger. Someone who gives you something that you want can still be untrustworthy. One of the hallmarks of “word of mouth” recommendations that predate RS, for example, is that the person making the recommendation was already a trusted part of an existing friend network [2]. An RS has to build trust from scratch, making the recommendation process itself a trust-building exercise.

The difference between what a user wants and what a user trusts becomes clear when applying the trustworthy principles listed above by the EU [25] and Fan et al. [26]. Consider the basic principle of transparency: a user could be given exactly what they want without knowing why the system has made that recommendation, leading to a potential loss of trust and acceptance [28]. Consider the prevention of harm principle: a user could be given exactly what they want on a food ordering app, with upsized burger meals, but rarely if ever, be shown healthy eating options, leading to long-term preventable harm. Finally, consider the privacy principle: a user could be given exactly what they want; however, their data could still be syphoned off and sold to third parties without their consent. Getting what you want does not therefore necessarily ensure that the system is robust, fair, private, secure, and so on. One final aspect to consider is responsibility. Wang et al. suggest that an RS can only be trustworthy if it is developed and deployed in a “responsible manner,” meaning that the RS does not harm anybody, either explicitly or implicitly [20]. All stakeholders of an RS, they also suggest, should benefit from the generated recommendation result [20]. The authors will contend in more detail below that an RS that revokes someone’s autonomy is not a responsible RS, because of the implicit harm caused by a loss of human agency. Furthermore, if all stakeholders are considered, then it might not be enough for a system to merely work well or give good results. A system that biases the interests of the platform over the desires of the user would not be responsible, under this framework, as the two parties would be unequal. To have a responsible RS, the user would need to have a seat at the table.

4. Speeding Up Our Lives

Technology companies are keen to present the ubiquity of RS in our lives as a positive development – a way to speed up our lives, automate our chores and decisions, and give us a better user experience. Google, for example, presents the apartment of the future as a place of innumerable recommender systems operating simultaneously, in concert, constantly fulfilling a user’s wants through pre-empted decisions, via fully automated service and delivery [29]. The apartment of the future is an “electronic orchestra, and you are the conductor” [29]:

With simple flicks of the wrist and spoken instructions, you can control temperature, humidity, ambient music and lighting… While a freshly cleaned suit is retrieved from your automated closet because your calendar indicates an important meeting today… Your central computer system suggests a list of chores your housekeeping robot should tackle today, all of which you approve.

There’s a bit of time left before you need to leave for work – which you’ll get to by driverless car, of course. Your car knows what time you need to be in the office each morning based on your calendar… it communicates with your wristwatch to give you a sixty-minute countdown.

Google’s apartment is a techno-optimist portrayal of what recommender systems can do for our everyday lives. The user is presented as fully in control of an “orchestra” of technology, including technologies that recommend housework and a time to leave for work. Human autonomy, it is heavily implied, is amplified by recommender systems. New technology can augment life to create lifestyle. Users relax on the way to work in their driverless cars, eating an apple that the system likely recommended to them. This is automated luxury; users getting what they want at all times, without lifting a finger or doing any work. Their desires, it would seem, are constantly being fulfilled. The machine already knows what they want; therefore, they don’t have to say anything.

Look more closely at the quote above, however, and a different picture emerges. Notice how the “freshly cleaned suit” is delivered without the user choosing which suit to wear; the chores are all agreed to by the user without any objection or dissent; and the car signals the user’s wristwatch without the user’s consent, setting up an automated countdown with which the user must comply to get to work on time. The user at first looks like they are getting everything that they want, but on second glance, looks like they are getting everything that the system tells them that they want. They no longer assert agency over their environment; their environment asserts agency over them. The user may even come to trust the system, but this does not mean that the system will suddenly align with their desires.

Byung-Chul Han suggests that Google’s home of the future is not a smart utopia, but “a smart prison” [30]. Instead of being the conductor of our home, “we are conducted by various actors, even invisible actors that dictate the rhythm” [30]. This is reinforced by surveillance: “a smart bed fitted with various sensors continues the surveillance even during sleep” [30]. The user getting everything they want becomes a constraint, where they begin to get controlled by pre-empted desires that they may or may not really want at all. Perhaps yesterday, they wanted that fresh suit, but today, when it is delivered, it’s not exactly to their taste. In a fully automated system, dissent becomes difficult and time-consuming. “In a world controlled by algorithms,” Han writes, “the human being gradually loses the power to act, loses autonomy… he or she obeys algorithmic decisions, which lack transparency. Algorithms become black boxes. The world is lost in deep layers of neural networks” [30]. The recommender system, unknown and unknowable, makes decisions for the user, about the kinds of things the user should want, what to eat and what to wear and when it’s sensible to leave the house, and the user starts following these orders, obeying recommended instructions, from their new algorithmic overlords.

Of course, human autonomy is an aspiration rather than a given fact. Also, in a world without recommender systems, we often act upon our first-order desires without actually considering whether we want them or not. After a long day in the office, we might want some mindless entertainment at home, or to quickly get some fast food, without having to think about whether this makes us the person we want to be. In other contexts, however, not considering our second-order desire can be critical. In a newsroom, a first-order desire to break the story first, is very different from a second-order desire to be accurate and not mislead the public. In a healthcare setting, a patient might have a first-order desire to get the test results immediately, but they might also want to want to have the patience to wait for the doctor to perform the tests properly and accurately. In this manner, following first-order desires can lead to uninhibited or reckless behaviour. As RS extend into new domains in our lives, so too does this risk of shortcutting our decision-making.

Admittedly, in the absence of recommender systems, our first-order desires are already shaped by the world around us. I may put on a specific suit because my partner says that it will look good on me, without actually considering whether I want to wear it for that reason. I may wear a suit to work because my parents told me to always dress up for work, and I may not consider whether this is what I want to want. Recommendations are nothing new: they were always something to factor into our choices. This is a point made in the literature, that RS are just the latest form of an old idea [2, 7]: we trust our neighbours and those like us to give us recommendations, so why not trust an RS?

However, recommender systems go further than human recommendations, as we will see below. In most instances, they leave little room for explanation and little room for discussion – both of which are generally available when dealing with human recommendations. I can ask my partner why they think I look good in this suit. I can argue with my parents about whether suits are old-fashioned at work. Put simply, human recommendations can easily be dismissed as merely that: recommendations. Algorithmic recommendations, on the other hand, are much more difficult to dislodge.

5. Autonomy and Recommenders

    The problem gets worse if the algorithm is making decisions based on a user’s first-order desires alone. If a recommender system uses implicit feedback, for example, considering what users click on, what they look at, what engages them [1, 11] – then it only references who a user is now, not necessarily who the user wants themselves to be. The user may get a list of films that they currently want (say, action flicks), but not necessarily a list of films they may want to want (say, historical documentaries). In Frankfurt’s formulation, a user who watches the action films risks becoming a “wanton” – someone led by their first-order desires alone, and less than human.
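
    To make this concrete, the following toy sketch (with invented items and genres) shows how a purely implicit-feedback recommender might rank a catalogue: it simply counts past engagement by genre, so the documentary that the user wants to want never surfaces. This is an illustrative sketch, not a description of any particular platform’s system.

```python
from collections import Counter

# Toy sketch of implicit-feedback ranking: the system only counts past
# engagement, so it can only echo first-order desires. Item and genre
# names are invented for illustration.

click_log = [                      # what the user has already clicked on
    ("film_01", "action"), ("film_02", "action"),
    ("film_03", "action"), ("film_04", "comedy"),
]

catalogue = {                      # what the platform could recommend next
    "film_05": "action", "film_06": "action",
    "film_07": "documentary", "film_08": "comedy",
}

def recommend_from_implicit_feedback(click_log, catalogue, k=2):
    """Rank catalogue items by how often the user already engaged with that genre."""
    genre_counts = Counter(genre for _, genre in click_log)
    ranked = sorted(catalogue.items(),
                    key=lambda item: genre_counts[item[1]],
                    reverse=True)
    return [item_id for item_id, _ in ranked[:k]]

print(recommend_from_implicit_feedback(click_log, catalogue))
# -> ['film_05', 'film_06']: the action-heavy history crowds out the
#    documentary the user may want to *want* to watch.
```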

    The user who chooses an action film on Netflix is still making a choice between the available recommendations, admittedly, but this is undermined by the impact of their personal data on the listing itself. The user’s choice is constrained by the list only containing action films, and therefore by their first-order desires from the past. A user who wants to want to watch historical documentaries will be unable to assert agency over this list containing only action films. The user’s past desires are reinforced by the RS; even though the user could be changing their wants or desires over time, through second-order re-preferencing. The user, metaphorically speaking, might be a new person entirely, but the system has no way of knowing, nor does the user have a way of telling it so.

    Byung-Chul Han writes [13]:

    The longer I surf the internet, the more my filter bubble becomes filled with information that I like and that reinforces my convictions. I am shown only those views of the world that conform to my own. All other information is kept outside the bubble. In the filter bubble, I am caught in an endless ‘you loop’.

    The problem is, in fact, starker than Han suggests. The user is not only trapped in a ‘you loop,’ but in a ‘you loop’ that may no longer reflect who they are. Frankfurt acknowledges that what we care about is only evident “for extended periods of time” [18]. Desires, by contrast, “typically last for moments only: if one cared about something for only a moment one could not be distinguished from a person that acted out of impulse,” or a wanton [18]. The user of an RS is shown a list as if that list is a “truthful interpretation of their wishes, wants and desires,” when really, it is a less-than-ideal conceptualization of the user themselves [18]. It is a list that reflects a single origin point, without the intervention of the user’s self-reflection. The user cannot say, “no, the system is wrong,” “I no longer want that,” or “show me something radically different because I am not that person anymore” [31].[1] The user is trapped in time.

    Recommender systems that rely on collaborative-filtering techniques may face a similar, but distinct, line of criticism. Recall how Frankfurt says that a person must choose for themselves what they want to want. Collaborative-filtering, neighbourhood-based and “lookalike” models, by contrast, rely on similar users, rather than on the user themselves making the choice (even implicitly) [18]. Collaborative-Filtering (CF) works by determining “the desirability of one’s desires as equal to or at least similar to the desirability of other, already ‘known’ users” [18]. Recommendations from CF are a product of other users’ wants, rather than one’s own. In Frankfurt’s framework, giving up choosing what we want for what the crowd wants would be abandoning one’s will for the will of the crowd – an untenable position for human autonomy. To borrow a phrase from Virginia Woolf, Frankfurt wants A Want of One’s Own. Collaborative-filtering is simply unable to provide one.
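
    A minimal, hypothetical sketch of user-based collaborative filtering illustrates the point: the predicted rating for an unseen film is assembled entirely from the ratings of similar “neighbour” users, weighted by similarity. The ratings data and the similarity measure (cosine over co-rated items) are illustrative assumptions, not a description of any deployed system.

```python
import math

# Toy user-based collaborative filtering. The ratings and the similarity
# measure (cosine over co-rated items) are illustrative assumptions.

ratings = {
    "me":     {"film_A": 5, "film_B": 4},               # has not seen film_C
    "user_2": {"film_A": 5, "film_B": 4, "film_C": 5},
    "user_3": {"film_A": 1, "film_B": 2, "film_C": 1},
}

def cosine_similarity(u, v):
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in common))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def predict_rating(target, item, ratings):
    """Predict the target user's rating of `item` as a similarity-weighted
    average of what neighbouring users gave it."""
    num = den = 0.0
    for other, other_ratings in ratings.items():
        if other == target or item not in other_ratings:
            continue
        sim = cosine_similarity(ratings[target], other_ratings)
        num += sim * other_ratings[item]
        den += abs(sim)
    return num / den if den else None

print(round(predict_rating("me", "film_C", ratings), 2))
# The prediction is assembled entirely from other users' ratings --
# Frankfurt's "want of one's own" never enters the calculation.
```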

    Neighbourhood models are problematic for another reason, similar to that discussed above, because they merely reflect who we currently are, or more specifically, who we currently relate to, rather than who we may want to relate to in the future. It is difficult to imagine that two identical users exist who simultaneously want to watch action films, but equally want to want to watch historical documentaries. It is even more difficult to imagine that, in that latter context, the recommender system could identify this want to want from implicit feedback from each user. A user would have to show their engagement with historical documentaries in some new way, either explicitly or implicitly, for the system to pick up on it – or to move them into a new neighbourhood. This makes it hard for users to assert agency, or to extend the metaphor, to control the moving van.

    Frankfurt’s objection might be avoided if an RS uses explicit feedback alone. However, explicit feedback raises a different objection. Explicit feedback refers to a user explicitly stating a preference for a particular item, for example, giving a high rating or product review [1, 11]. Explicit feedback, even among developers, is seen as the more reliable form of feedback because it directly gathers insights from the user about their own preferences, therefore avoiding the need for observational learning and/or neighbourhood systems [1]. Explicit feedback essentially asks the user: Tell us what you want, and we will recommend it back to you.

    The most popular form of explicit feedback is a rating, where users decide numerically or through words, how much they like a particular item [1, 32]. This lets users tell the system what they want. It does not necessarily, however, let users tell the system what they want to want. The user who only watches action films will only rate action films and will therefore not necessarily be able to tell the system that they want to want to watch historical documentaries. This is similar to the “exposure effect”, where users only engage with, and rate, items that are already shown to them in the recommendations [11].
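
    The exposure effect can be illustrated with a small toy example: in the sketch below, the user can only rate items the system has already shown them, so the aspirational genre never enters the explicit profile. Item names are invented for illustration.

```python
# Toy explicit-feedback loop illustrating the "exposure effect": the user
# can only rate what the system has already shown them, so an aspirational
# genre never enters their profile. Item names are invented.

exposed = ["action_1", "action_2", "action_3"]   # what the RS has shown so far
explicit_ratings = {}                            # what the user tells the RS

def rate(item, stars):
    if item not in exposed:
        raise ValueError(f"{item} was never shown, so it cannot be rated")
    explicit_ratings[item] = stars

rate("action_1", 5)
rate("action_2", 4)
# rate("documentary_1", 5)  # would raise: the want-to-want genre is unreachable

print(explicit_ratings)  # {'action_1': 5, 'action_2': 4}
```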

    For a user to reach their second-order desires, there needs to be a place and time for self-reflection on the items they have already clicked upon. They need to be able to question their wants, their first-order desires, within the context of the system itself. Neither CF nor explicit-feedback models necessarily allow for this second-order reflection as such – that is, the user being given a chance to step back and ask: do I really want that, or do I want something else? Or to update their preferences according to the formation of a newly differentiated desire: I am no longer that person anymore.

    Frankfurt’s second-order objection may be dealt with better by knowledge-based systems. Knowledge-based systems are built off explicit data but allow users to interactively input their preferences [11]. Pinterest, for example, asks users to select which areas they are interested in, when they first load the app [11]. This then dictates what they get shown. Knowledge-based methods allow users to choose, up front, the kind of content they want to be shown. In this way, they avoid the lag time of ranking systems, or the cold start problem. The user is confronted with a choice about what they want immediately. What topics are you interested in? This can easily become a choice of what they want to want. The user can be asked: what kind of movie do you want to see, action or historical documentary? – and they might pick both options, upon reflection. In this way, a user can engage with their second-order desires, not merely their first-order desires. This suggests that knowledge-based systems allow for a higher degree of human autonomy.
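
    As an illustration, a hypothetical knowledge-based onboarding flow might look like the sketch below: the user is asked up front which topics to include and could, upon reflection, select both the action films they currently watch and the documentaries they want to want to watch. Topics and titles are invented for the example.

```python
# Speculative sketch of knowledge-based onboarding: the user is asked up
# front which topics to include, which can double as a prompt about what
# they want to *want*. Topics and titles are invented for illustration.

CATALOGUE = {
    "action":      ["Fast Cars 7", "Explosion City"],
    "documentary": ["The Fall of Rome", "Ocean Depths"],
    "comedy":      ["Office Antics"],
}

def onboard(available_topics):
    """Interactively ask the user which topics they want to see."""
    chosen = []
    for topic in available_topics:
        answer = input(f"Include '{topic}'? [y/n] ")
        if answer.strip().lower().startswith("y"):
            chosen.append(topic)
    return chosen

def recommend(interests, catalogue, k=3):
    pool = [item for topic in interests for item in catalogue.get(topic, [])]
    return pool[:k]

# interests = onboard(CATALOGUE)        # in an interactive session
interests = ["action", "documentary"]   # e.g. the user picks both, upon reflection
print(recommend(interests, CATALOGUE))
```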

    Knowledge-based systems have limitations, however. The system may have systematically biased outcomes, since selecting themes may not narrow down the recommendation to a specific item [11]. As a result, the most popular item of that category might be recommended more often to the user (popularity bias) or similar users may be recommended the same item, creating “within-group conformity” (conformity bias), or in some cases, “echo chambers” and “filter bubbles” [11]. This may create the same objections raised about neighbourhood systems, where users are trapped within neighbourhoods that they no longer want to belong to.
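
    A small illustration of the popularity bias described above: once a user has selected a theme, ranking within the theme by global popularity means that every user in the group sees the same best-seller first. The titles and play counts are invented for the example.

```python
from collections import Counter

# Illustrative popularity bias: when a theme choice does not pin down a
# single item, ranking within the theme by global popularity means every
# user who picks "action" sees the same best-seller first. Numbers invented.

play_counts = Counter({"Fast Cars 7": 900_000,
                       "Explosion City": 40_000,
                       "Quiet Heist": 5_000})

def recommend_by_theme(theme_items, play_counts, k=2):
    return sorted(theme_items, key=lambda i: play_counts[i], reverse=True)[:k]

action_items = ["Quiet Heist", "Explosion City", "Fast Cars 7"]
print(recommend_by_theme(action_items, play_counts))
# -> ['Fast Cars 7', 'Explosion City'] for every user in the "action" group.
```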

    To facilitate human autonomy, users need to be able to update their preferences whenever they want to, rather than just the first time they log onto the app. In some circumstances, it could even be argued that the mere possibility of updating preferences might itself not be enough, and that users should be prompted to reflect on their preferences at routine intervals, to sufficiently accommodate Frankfurt’s objection and to give them time to consider their wants: Do you still like action films? Do you want to add a new category to your interests? Have you considered this new, radically different genre of film? A fine balance will need to be struck here to not overburden the user along the lines of what happens with cookie pop-ups today, which can trigger information overload [33].[2] A possible tool could be to allow users to choose the interval for such re-evaluations of their preferences.
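
    One way such routine prompting might be implemented is sketched below: a preference profile stores a user-chosen review interval and, once the interval has elapsed, invites the user to confirm, add or drop interests. This is a speculative sketch of the design suggestion above, not a feature of any existing platform; the class and field names are invented.

```python
from datetime import datetime, timedelta

# Speculative sketch of routine preference re-evaluation with a
# user-chosen interval: the profile nudges reflection instead of assuming
# yesterday's wants still hold. Class and field names are invented.

class PreferenceProfile:
    def __init__(self, interests, review_every_days=90):
        self.interests = set(interests)
        self.review_every = timedelta(days=review_every_days)
        self.last_reviewed = datetime.now()

    def due_for_review(self, now=None):
        now = now or datetime.now()
        return now - self.last_reviewed >= self.review_every

    def review(self, keep, add=(), remove=()):
        """Let the user confirm, add or drop interests -- a small window
        for second-order reflection on what they want to want."""
        self.interests = (set(keep) | set(add)) - set(remove)
        self.last_reviewed = datetime.now()

profile = PreferenceProfile({"action"}, review_every_days=30)
later = datetime.now() + timedelta(days=45)
if profile.due_for_review(now=later):
    profile.review(keep=["action"], add=["documentary"])
print(sorted(profile.interests))  # ['action', 'documentary']
```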

    On the extreme end of things, a user could increase their autonomy by opting out of recommender systems altogether. This is likely unrealistic, given the ubiquity of RS in our current society, and in particular, online and digital society. Instead of opting out, users may instead wish to gain greater control over the recommender systems, the user profiling and their responses to that profiling [34]. It is not necessary to have full control, oversight, or transparency over the algorithm to remain in control [34]. Human autonomy, in the context of an RS, appears to involve “what aspects of the algorithm are controllable, that allow users to reflect on, and possibly reconsider, their own preferences” [34]. The preferences are what are at issue. Users should be able to reconsider their preferences, to allow them to decide what they want to want from the system.

    Instead of an RS simply showing users what they want, a user could have greater control over how the recommendation system works, resetting it back to default, or changing variables over time. Krajnovic makes a similar case for change: “Safeguarding personal autonomy could include being able to reset to default, choose between different recommendation logics, make a complaint about a tool, provide feedback or trigger people to reset their personal choices at regular… intervals” [34]. Instead of allowing an algorithm to make decisions, these changes allow users to assert agency over the algorithm, restructuring the recommendation process. This kind of agency does not require full transparency of the algorithm per se, but rather autonomy over the output. Personal autonomy can be enhanced without opening the black box [34]. There is no need to open the black box when variables, user resets and so on remain changeable on an ongoing basis. Instead of explaining AI, users can merely be given greater control over its functioning apparatus. The recommender system can remain opaque, so long as users can influence its recommendations.
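
    The kind of control surface described here could, speculatively, look something like the following sketch: the underlying model stays opaque, but the user can switch recommendation logics, mark recommendations as unwanted, and reset everything to default. The class, the available logics and the stand-in model are invented for illustration.

```python
# Speculative control surface over an otherwise opaque recommender, along
# the lines suggested above [34]: reset to default, switch between
# recommendation logics, or flag a recommendation as unwanted.

class ControllableRecommender:
    LOGICS = ("personalised", "popular", "editorial")

    def __init__(self, black_box_model):
        self.model = black_box_model       # stays opaque: no explanation needed
        self.logic = "personalised"
        self.suppressed = set()

    def set_logic(self, logic):
        if logic not in self.LOGICS:
            raise ValueError(f"unknown logic: {logic}")
        self.logic = logic

    def reset_to_default(self):
        self.logic = "personalised"
        self.suppressed.clear()

    def not_interested(self, item_id):
        self.suppressed.add(item_id)       # the user overrides the output

    def recommend(self, user_id, k=5):
        items = self.model(user_id, self.logic)   # opaque call
        return [i for i in items if i not in self.suppressed][:k]

# Example with a trivial stand-in for the opaque model:
rs = ControllableRecommender(lambda user, logic: [f"{logic}_{n}" for n in range(10)])
rs.not_interested("personalised_0")
print(rs.recommend("me"))      # personalised list minus the rejected item
rs.set_logic("editorial")
print(rs.recommend("me"))      # a different recommendation logic, user-chosen
```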

    6. Responsible Recommender Systems

    Design choices for RS should reflect the principles of Responsible Research and Innovation. This includes a commitment to ethical principles and fundamental rights [35]. Autonomy, or the ability to make decisions about one’s life, can be viewed as one of these rights, or even the precursor to other rights. James Griffin argues that all human rights rest on the bedrock of human agency and autonomy [36]. The right to autonomy can here be defined as “the capacity to assess options and form some… conception of a worthwhile life” [37]. Humans need autonomy to plan and make decisions, for when decision-making is taken away from us, we are no longer free. As Griffin puts it [36]:

    Human life is different from the life of other animals. We human beings have a conception of ourselves and of our past and future. We reflect and assess. We form pictures of what a good life would be – often, it is true, only on a small scale, but occasionally also on a large scale. And we try to realize these pictures. This is what we mean by a distinctively human experience… Human rights can then be seen as protections of our human standing or… our personhood.

    Griffin’s argument is similar to Harry Frankfurt’s, in that humans require a form of dignity beyond that of animals, namely, the ability to determine our futures by reflecting on what we want [9]. Bringing it full circle, one could argue that a responsible RS must allow for a person to retain their autonomy, by allowing them to reflect on what they want, and asserting that want over the system. The key is for users to remain in control, rather than the algorithms controlling users. There are possible exceptions here. For example, a person could choose to outsource their decision-making to a person or machine for a single choice, a few choices, or even a short period of time. But to outsource decisions throughout our entire lives runs the risk of losing control over the person we may become, or who we want to be. At that point, we become mere products of instinct, or worse, merely do what machines tell us to do.

    To ensure autonomy, an RS should be designed to never trap, mislead, deceive, or coerce the user into doing something that they do not want to do – whether that be trapping them in feedback loops or echo chambers [11], biasing responses based on gender, race, or ethnicity [38], or deploying addictive mechanisms that prevent the user from choosing when they wish to engage. An RS designer should aspire to keep the user free: free to change their choices, free to make new choices or free to turn off the system entirely. Therefore, responsible designers should avoid the use of dark patterns, or misleading graphical user interfaces (GUI) which may deceive users into staying on a platform for longer than they intended, or clicking on a recommendation that they did not actually intend to click on [8]. For the same reason, designers should be very careful of implementing addictive features such as infinite scrolling, lootboxes or other pseudo-gambling techniques, which keep a user ‘hooked’ onto a platform in a manner that appears, at times, at odds with individual autonomy [39]. While we admit that no one can ever be fully autonomous, we can still design systems to enhance our autonomy, and to empower people to take control over their own lives.

    However, designers of RS are seemingly already pushing us towards a less autonomous future. A survey by Pew Research, for example, found that 56% of experts surveyed agreed that by 2035 smart technologies “will not be designed to allow humans to easily be in control of most tech-aided decision-making” [40]. Many of the experts said that this will be a positive development. Humans have long outsourced their decisions to institutions, whether that be governments, religious or tribal leaders, so why not machines? [40]. Information overload will only increase, and so, therefore, will our need to outsource our decisions to RS [40]. Still others have framed the conversation around “important decision-making” in our lives, saying that we already consult others about whether to take a new job or who to date [40]. This final argument, however, seems to miss the more nuanced point that RS are aimed not only at the big decisions but also at all of the smaller choices in our lives. It is one thing to say that people should occasionally rely on machines for certain choices in their life, and quite another to say that people should always rely on machines for every choice in their life. This latter possibility is where human autonomy is most at risk, when we stop asking for advice, and we start asking for a machine to assume control over all our decisions. The machine is no longer an advisor, but an instructor in the type of life it wants us to lead. If designers are not making systems that guarantee human autonomy, as the survey suggests [40], then this potential future of decreased human autonomy is a real possibility. RS designers who encourage this trajectory may be seen to be acting irresponsibly, by risking the autonomy of the average user, who may not understand the full extent of the risk.

    If an RS designer is irresponsible in their system design, then this could lead to system bias, and a decrease in trust [20]. By contrast, a system that takes account of potential harms, such as system bias, and mitigates them, would increase the trust of the user [20]. RS designers who wish to see the widespread uptake of RS should consider implementing ideas that make the system trustworthy both in principle and in practice. This includes not just human agency, but also transparency, the prevention of harm and the fairness of the system [25], along with safety and robustness, privacy, environmental wellbeing, accountability, and auditability [26]. For users to trust the system, the RS designer needs to make sure the system works for users, and not just in the interests of the platform or service provider.

    Responsible Research and Innovation (RRI) Statement:

    Gender and diversity play an inherent role in how consumers engage with online marketplaces. The profiling of users, using personal data filtered through machine learning, may be contributing to a cyclical feedback loop, resulting in customers being treated differently according to their demographic features. This paper considers various biases in recommender systems and places an emphasis on human autonomy, questioning whether profiling tools lead to positive outcomes for individuals. System designers may gain valuable insight from this work on how to empower the user to make their own decisions, thereby mitigating systematic bias in their software.

    The European Science Foundation (ESF) outlines a framework for Responsible and Open Science, including the promotion of free access to all research findings, to allow researchers from a diversity of backgrounds to access and engage with ongoing scientific debate [41]. As part of a commitment to open science, this research will be published in open-access journals and made available to colleagues in the field. Findings will be shared at conferences and faculty seminars. If possible, recordings of presentations will be shared on publicly accessible platforms, such as YouTube and Vimeo. The research findings will also be shared more widely in publicly accessible formats, such as academic news sites, blog posts and an innovative academic video essay that will clearly and succinctly describe the research with accompanying visuals.

    Conclusion

    Recommender systems play a fundamental role in the purchasing and browsing of products online and will likely shape the online space for many years to come. As such, the relationship between human autonomy and recommender systems cannot be easily ignored. An RS can be seen as both a blessing and a curse, giving users unprecedented access to product recommendations, in a manner which can save them time and money, while at the same time, abrogating some human autonomy. An RS can pre-empt desires, and reflect who someone used to be, rather than who they are now, and in this way deviate from what they actually want. In the worst-case scenario, an RS could completely take over the decision-making function of a person’s life, leading to something of a lesser life, where people are disempowered from asserting agency over their own environments.

    Harry Frankfurt’s hierarchy of desires is a useful template to use when considering whether recommender systems reduce human autonomy, and what can be done to rectify this state. An RS may be set up to give users what they want, but this does not necessarily mean that the system gives users what they want to want. The difference can be teased out using Jenny Holzer’s clever phrase Protect me from what I want – which reveals the inherent difficulties users face when confronted with wants that they do not, in some sense, actually want. Desire is complicated, and human desire is changeable and responsive, which means that it should be treated with great delicacy.

    It is important to be aware of the potential for manipulation and control by companies that have a profit motive when using and selling recommender systems. It is also important to take a proactive approach to the use of these systems, to increase human autonomy and users’ ability to make their own choices about their own desires. Choosing what we want to want has a direct implication for who we want to be. By understanding the difference between what we want and what we want to want, and by being aware of our own desires and the influence of recommender systems, we can ensure that autonomy is enhanced, and that we are making choices that align with our own goals and values. Laws and regulations can play their part, but this paper has also suggested ways in which recommender systems can allow for human autonomy by-design. Further research may consider the methods by which Frankfurt’s second-order desires could be implemented directly into a recommender system, and/or different measures of testing human autonomy in the recommendation process.

    REFERENCES

    • [1] Francesco Ricci, Lior Rokach and Bracha Shapira (Eds.). 2022. Recommender Systems Handbook (3rd ed.) Springer, New York, NY.
    • [2] Silvia Milano, Mariarosaria Taddeo and Luciano Floridi. 2020. Recommender systems and their ethical challenges. AI & Society 35, 957-967.
    • [3] Walter Benjamin. The Work of Art in the Age of Mechanical Reproduction, in Hannah Arendt (ed.) Harry Zohn (trans.) 1969. Illuminations. Schocken Books, New York, NY.
    • [4] Dimitris Paraschakis. 2016. Recommender systems from an industrial and ethical perspective. In: Proceedings of the 10th ACM conference on recommender systems—RecSys. 16, 463–466.
    • [5] Dimitris Paraschakis. 2017. Towards an ethical recommendation framework. In: 2017 11th International Conference on Research Challenges in Information Science (RCIS), 211–220.
    • [6] Dimitris Paraschakis. 2018. Algorithmic and ethical aspects of recommender systems in e-commerce. Malmö University, Malmö.
    • [7] Jonathan L. Herlocker, Joseph A. Konstan, John Riedl. 2000. Explaining collaborative filtering recommendations. In: ACM conference on Computer supported cooperative work, 241–250.
    • [8] Mireille Hildebrandt. 2022. The Issue of Proxies and Choice Architectures: Why EU Law Matters for Recommender Systems. Frontiers in Artificial Intelligence 5. https://doi.org/10.3389/frai.2022.789076.
    • [9] Harry G. Frankfurt. 1971. Freedom of the Will and the Concept of a Person. The Journal of Philosophy 68, 1. 6 – 14.
    • [10] Dennis Loughrey. 1998. Second-order Desire Accounts of Autonomy. International Journal of Philosophical Studies. 6, 2.
    • [11] Amelia Fletcher, Peter L. Ormosi and Rahul Savani. 2022. Recommender Systems and Supplier Competition on Platforms. SSRN Working Paper. 1-37.
    • [12] Markus Schedl, Hamed Zamani, Ching-Wei Chen, Yashar Deldjoo and Mehdi Elahi. 2018. Current challenges and visions in music recommender systems research. International Journal of Multimedia Information Retrieval 7, 95–116.
    • [13] Byung-Chul Han. 2022. Infocracy: Digitization and the Crisis of Democracy. (1st ed.) Polity. Cambridge, UK.
    • [14] Jenny Holzer. 1982. Protect Me from What I Want. Times Square, New York, NY.
    • [15] Gerald Dworkin. 1988. The Theory and Practice of Autonomy. Cambridge University Press. Cambridge, UK.
    • [16] Gérald Kembellec, Ghislaine Chartron and Imad Saleh (Eds.). 2014. Recommender Systems. ISTE Ltd and John Wiley & Sons, Inc. Hoboken, NJ.
    • [17] Nathan M Fong. 2017. How Targeting Affects Customer Search: A Field Experiment. Management Science. 63, 7, 2353–64.
    • [18] Severin Engelmann, Valentin Scheibe, Fiorella Battaglia, and Jens Grossklags. 2022. Social Media Profiling Continues to Partake in the Development of Formalistic Self-Concepts. Social Media Users Think So, Too. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES), 238–252.
    • [19] Tom Simonite. 2021. An algorithm that predicts deadly infections is often flawed. Wired, 21 June 2021 (accessed 23 February 2023). https://www.wired.com/story/algorithm-predicts-deadly-infections-often-flawed/.
    • [20] Shoujin Wang, Xiuzhen Zhang, Yan Wang, Huan Liu and Francesco Ricci. 2022. Trustworthy Recommender Systems. ACM Computing Surveys 1, 1.
    • [21] Kirsten Swearingen and Rashmi Sinha. 2001. Beyond algorithms: An HCI perspective on recommender systems. In Papers from the 2001 ACM SIGIR Workshop on Recommender Systems, New Orleans, LA.
    • [22] Wenzhuo Yang, Jia Li, Chenxi Li, Latrice Barnett, Markus Anderle, Simo Arajarvi, Harshavardhan Utharavalli, Caiming Xiong, and Steven Hoi. 2021. On the Diversity and Explainability of Recommender Systems: A Practical Framework for Enterprise App Recommendation. In Proceedings of the 30th ACM Int’l Conf. on Information and Knowledge Management, 4302–4311.
    • [23] Henriette Cramer, Vanessa Evers, Satyan Ramlal, Maarten van Someren, Lloyd Rutledge, Natalia Stash, Lora Aroyo and Bob Wielinga. 2008. The effects of transparency on trust in and acceptance of a content-based art recommender. User Model. User-Adapt. Interact 18, 5, 455–496.
    • [24] Mohamed Amine Chatti, Mouadh Guesmi, Laura Vorgerd, Thao Ngo, Shoeb Joarder, Qurat Ul Ain and Arham Muslim. 2022. Is more always better? The effects of personal characteristics and level of detail on the perception of explanations in a recommender system. In Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization. 254–264. https://doi.org/10.1145/3503252.3531304.
    • [25] Nathalie A Smuha. 2019. The EU approach to ethics guidelines for trustworthy artificial intelligence. Computer Law Review International 20, 4, 97–106.
    • [26] Wenqi Fan, Xiangyu Zhao, Xiao Chen, Jingran Su, Jingtong Gao, Lin Wang, Qidong Liu, Yiqi Wang, Han Xu, Lei Chen and Qing Li. 2022. A Comprehensive Survey on Trustworthy Recommender Systems. arXiv. 1, 1.
    • [27] Taha Hassan and D. Scott McCrickard. 2019. Trust and Trustworthiness in Social Recommender Systems. In Companion Proceedings of the 2019 World Wide Web Conference. https://doi.org/10.1145/3308560.3317596.
    • [28] Amina Samih, Amina Adadi and Mohammed Berrada. 2019. Towards a knowledge based Explainable Recommender Systems. BDIoT’19, October 23–24. http://dx.doi.org/10.1145/3372938.3372959.
    • [29] Eric Schmidt and Jared Cohen. 2013. The New Digital Age: Reshaping the Future of People, Nations and Business. Alfred A Knopf. New York, NY. 29 – 31.
    • [30] Byung-Chul Han. 2022. Non-things: Upheaval in the Lifeworld. (1st ed.) Polity. Cambridge, UK.
    • [31] Krisztian Balog, Filip Radlinski and Shushan Arakelyan. 2019. Transparent, Scrutable and Explainable User Models for Personalized Recommendation. In SIGIR ’19. https://doi.org/10.1145/3331184.3331211.
    • [32] Michael Reusens, Wilfried Lemahieu, Bart Baesens and Luc Sels. 2017. A note on explicit versus implicit information for job recommendation. Decis. Support Syst. 98, 26–35. https://doi.org/10.1016/j.dss.2017.04.002.
    • [33] Arno R. Lodder and Jorge Morais Carvalho. 2022. Online Platforms: Towards an Information Tsunami with New Requirements on Moderation, Ranking, and Traceability. European Business Law Review.
    • [34] Tihana Krajnovic. 2017. Freedom of Expression in the Digital Age – a provocation on autonomy, algorithms and curiosity. Centre for Media Pluralism and Media Freedom. https://cmpf.eui.eu/freedom-of-expression-in-the-digital-age-a-provocation-on-autonomy-algorithms-and-curiosity/
    • [35] TetRRIS: Territorial Responsible Research and Innovation and Smart Specialization. 2020. Responsible Research & Innovation (RRI) – what exactly is it? https://tetrris.eu/what-is-responsible-research-and-innovation-rri/.
    • [36] James Griffin. 2008. On Human Rights. Oxford University Press, Oxford, UK.
    • [37] Rowan Cruft. 2010. Two Approaches to Human Rights. The Philosophical Quarterly. 60, 238. 176-182.
    • [38] Behnoush Abdollahi and Olfa Nasraoui. 2018. Chapter 1: Transparency in Fair Machine Learning: The Case of Explainable Recommender Systems in Human and Machine Learning. Springer, New York, NY.
    • [39] Andrew Brady and Garry Prentice. 2019. Are Loot Boxes Addictive? Analyzing Participant’s Physiological Arousal While Opening a Loot Box. Games and Culture. 16, 4. https://doi.org/10.1177/1555412019895359.
    • [40] Janna Anderson and Lee Rainie. 2023. The Future of Human Agency. Pew Research Center. https://www.pewresearch.org/internet/2023/02/24/the-future-of-human-agency/
    • [41] The European Science Foundation (ESF). 2022. Responsible and Open Science. https://www.esf.org/responsible-and-open-science/

    * This research is in part funded by the UKRI Trustworthy Autonomous Systems Hub, EP/V00784X/1.

    [1] In the literature, this is referred to as the problem of scrutability. For example, see: [31].

    [2] For example, see the related argument made by Lodder and Carvalho about online platform regulation and the difficulties of information overload for consumers. [33].