Munich Center for Mathematical Philosophy (MCMP)

Workshop on Recommender Systems and Digital Nudging

26-27 May 2025, Munich Center for Mathematical Philosophy (MCMP), LMU Munich


Idea and Motivation

Recommender systems influence many aspects of our daily life by shaping what information, products, or people we encounter online. At the same time, digital nudging—using recommender system design to steer user behavior—raises important questions about autonomy, responsibility and welfare. This workshop brings together researchers from computer science, economics, and philosophy to explore how recommender systems act as digital nudges and what this means for users and society. The aim is to open space for interdisciplinary exchange and to identify research questions at the intersection of these fields.

Speakers

Yashar Deldjoo, Malte Dold, Holly Dykstra, Caterina Giannetti, Dietmar Jannach, Akshat Jitendranath, Silvia Milano, Ignacio Ojea, Francesco Ricci, Martijn Willemsen

Organiser

Dr. Silvia Milano (s.milano@exeter.ac.uk)

Registration

Registration is free but required. Please send an email to s.milano@exeter.ac.uk, specifying your name and affiliation.

Venue

Day 1 / 26 May: MCMP, Ludwigstraße 31, Room 028
Day 2 / 27 May: LMU, Richard-Wagner Straße 10, Room D 018

Programme

Day 1 / 26 May MCMP, Ludwigstraße 31, Room 028
16:00–16:30 Registration and welcome
16:30–17:15 Silvia Milano: Algorithmic Recommendations: What’s the Problem?                                                                
17:15–17:45 Ignacio Ojea: The Reward Puzzle in Recommender Systems
17:45–18:00 overview of the activities and practical arrangements
18:30 optional dinner: Kun Tuk, Amalienstrasse 81

 

Day 2 / 27 May LMU, Richard-Wagner Straße 10, Room D 018
09:00–09:15 Coffee and welcome
09:15–10:00 Dietmar Jannach: Recommender Systems and Digital Nudging – An Introduction
10:00–10:30 break
10:30–11:00 Caterina Giannetti: Ethical AI in Recommendations: A Public Good Dilemma           
11:00–11:30 Yashar Deldjoo: Toward Holistic Evaluation of Recommender Systems Powered by Generative Models
11:30–12:00 break
12:00–12:45 Malte Dold: Algorithms and Autonomy: Liberal Response to the Preference-Shaping Power of Recommender Systems
12:45–14:00 lunch
14:00–14:30 Akshat Jitendranath: Hard Choices and the Choice Architecture
14:30–15:00 Holly Dykstra: Ongoing work on nudging in behavioural economics
15:00–15:30 break
15:30–16:00 Martijn Willemsen: Nudging in recommender systems: the effectiveness of nudging in (personalized) recommendations to help people achieve their goals
16:00–16:15 break
16:15–17:00 Francesco Ricci: Simulation-Based Trustworthy Recommender Systems Evaluation
17:00–17:15 break
17:15–18:00 panel discussion
19:15 Workshop dinner: Steinheil 16, Steinheilstraße 16

Abstracts

Yashar Deldjoo: “Toward Holistic Evaluation of Recommender Systems Powered by Generative Models”

Recently, the AI community has witnessed a paradigm shift with the emergence of generative models. Regardless of the underlying technique, a remarkable aspect of these models is their ability to generate large volumes of data and responses that align closely with human intelligence. Notable examples include large language models, such as ChatGPT, which have demonstrated great success in various conversational tasks.

Today, the recommender systems and broader information retrieval communities are rapidly adopting this trend, exploring novel applications and systems powered by generative AI. These systems offer several advantages, including fine-grained personalization, the ability to generate meaningful textual and visual content, effective solutions to the cold-start problem (where items lack metadata) via data augmentation, and, most recently, the rise of agentic AI—autonomous agents capable of self-improvement and adaptive behavior.

Notwithstanding their great success, generative models also introduce significant risks and ethical or societal challenges, many arising from their training on vast and often unregulated internet-scale data. In this talk, I will first compare and contrast classical (non-generative) recommender system models with generative ones across various dimensions of system design and capability. Next, we will classify the risks associated with these new models, distinguishing between "known but potentially exacerbated challenges" (such as bias and privacy issues) and "emerging, less understood risks" (including hallucinations, forgetfulness, and other unforeseen behaviors). Finally, I will advocate for the necessity of "holistic evaluation" frameworks to comprehensively assess recommender systems powered by generative models. A detailed version of this talk and my perspectives can be found in our upcoming paper at SIGIR’25.

https://arxiv.org/abs/2504.06667

Malte Dold: “Algorithms and Autonomy: Liberal Response to the Preference-Shaping Power of Recommender Systems”

A common critique of behavioral public policies is their one-size-fits-all approach, both in terms of assumed goals (e.g., encouraging saving) and methods (e.g., automatic enrollment in savings plans). However, individuals vary—not only in their preferences but also in the nature of their behavioral constraints. In response, some scholars have advocated for the personalization of behavioral policies, such as nudges and sin taxes, to better align interventions with individual differences. Personalization consists of two key components: Choice Personalization (CP), which aims to respect individuals' diverse goals and preferences, and Delivery Personalization (DP), which seeks to tailor the method of intervention. Proponents of personalized behavioral policies argue that CP and DP can be effectively grounded in behavioral data (e.g., past choices to infer preferences) and psychometric profiling (e.g., personality traits, demographics). While personalized recommendations in the marketplace might have both advantages and risks, we focus on the implications of personalization within behavioral public policy, where we identify conceptual and ethical challenges in current proposals. We highlight two key concerns: (1) The Optimality Problem – How can an external entity reliably determine what is truly in an individual’s best interest, given the exploratory and creative nature of human choice? (2) The Universality Problem – How can personalized interventions preserve the universality of law and regulation, which serve as cornerstones of liberal societies, ensuring social cohesion and equal treatment before the law? These challenges raise questions about the feasibility and desirability of personalized behavioral public policies.

Holly Dykstra: “Ongoing work on nudging in behavioural economics”

This talk will present work from three projects. In the first project, I present evidence from a series of large-scale lab experiments showing that people display a strong preference for agency but are also very willing to forgo agency in order to avoid making a decision themselves; when presented with a menu of investment options, decision-makers are much more willing to forgo agency if choosing an investment option for themselves requires even a cursory consideration of the options. In the second, I present evidence of a “buy-in effect,” whereby slightly increasing the upfront effort during a sign-up process makes people more likely to follow through on their intended action, including carpooling to work. Finally, in the third project, I present pilot results from an upcoming study on using generative AI to help people apply for unemployment benefits.

Caterina Giannetti: “Ethical AI in Recommendations: A Public Good Dilemma”

We explore the public good dilemma underlying ethical AI: while users value transparency, fairness, and alignment with personal values, the developers who deploy these systems face private costs in implementing them. We model this interaction as a delegation game with heterogeneous users (principals) and a profit-driven developer (agent), where ethical alignment enhances user engagement but imposes rising development costs. Through both analytical modeling and experimental design, we show that decentralized incentives lead to systematic underprovision of ethics, especially in multi-principal environments where each user hopes others will bear the cost.
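
To make the free-rider logic concrete, here is a minimal numerical sketch of a voluntary-contribution game; the functional forms (a concave benefit sqrt(E) and a linear private contribution cost) and the numbers are illustrative assumptions, not the model presented in the talk.

# Toy public good game: n users each contribute g_i to fund an "ethics level"
# E = sum(g_i); each user enjoys the concave benefit sqrt(E) but privately
# bears her own contribution. Assumptions are illustrative, not the authors' model.

def nash_ethics_level(n: int) -> float:
    # Each user equates her private marginal benefit 1/(2*sqrt(E)) with her
    # marginal cost of 1, so E = 1/4 regardless of how many users there are.
    return 0.25

def optimal_ethics_level(n: int) -> float:
    # A planner maximises total surplus n*sqrt(E) - E, giving E = n^2 / 4.
    return n ** 2 / 4

for n in (1, 2, 5, 10):
    # The gap widens with n: each extra user hopes the others will bear the cost.
    print(f"n={n:2d}  voluntary E={nash_ethics_level(n):5.2f}  optimal E={optimal_ethics_level(n):6.2f}")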

Dietmar Jannach: “Recommender Systems and Digital Nudging – An Introduction”

This talk provides a lightweight introduction to recommender systems and a discussion of the relationship between recommender systems and concepts of digital nudging. We start with a review of the value of recommender systems for different stakeholders, briefly discuss algorithmic strategies for making personalized recommendations, and outline how recommender systems can be evaluated. Afterwards, we introduce the idea of digital nudging and discuss how recommender systems can be seen as a mechanism to steer user behavior in certain directions.

Literature: https://www.sciencedirect.com/science/article/pii/S245195882030052X
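
As a rough illustration of the kind of algorithmic strategy the talk surveys, the following sketch implements user-based collaborative filtering on an invented toy rating matrix; it is a generic textbook technique, not material from the talk itself.

import numpy as np

ratings = np.array([   # toy matrix: rows = users, columns = items, 0 = unrated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def cosine(a, b):
    # Cosine similarity between two users' rating vectors.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(user: int, item: int, k: int = 2) -> float:
    """Estimate a missing rating from the k most similar users who rated the item."""
    raters = [u for u in range(len(ratings)) if u != user and ratings[u, item] > 0]
    raters.sort(key=lambda u: cosine(ratings[user], ratings[u]), reverse=True)
    neighbours = raters[:k]
    weights = np.array([cosine(ratings[user], ratings[u]) for u in neighbours])
    return float(weights @ ratings[neighbours, item] / (weights.sum() + 1e-9))

print(round(predict(user=0, item=2), 2))   # user 0's predicted rating for item 2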

Akshat Jitendranath: “Hard Choices and the Choice Architecture”

This paper challenges the prevailing philosophical accounts of hard choices, which typically characterize them as involving incomparability, inconsistency, vagueness, or parity between alternatives. Observe that these accounts focus exclusively on the ranking of pairs of options. Consequently, they neglect a crucial dimension: the role of the choice menu itself as an argument in rational deliberation. I argue that frameworks for adjudicating justified choices must consider not just the binary relation but the opportunity set from which these options emerge. The first part of this talk demonstrates why existing philosophical models—whether focused on incomparability, vagueness, or parity—fall short by overlooking the menu. Even when some alternatives may be incomparable or vaguely ranked when considered in isolation, there may still exist a clearly justified "best" option when the full menu is considered. The second part shows how this approach yields an important practical implication: rather than forcing individuals to navigate impossible trade-offs, institutions should avoid constructing decision environments that require making hard choices in the first place.
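
As a purely hypothetical illustration of the menu-dependence point, the snippet below constructs two options that are incomparable under pairwise dominance, while a menu-relative rule (here minimax regret, chosen only for illustration and not taken from the paper) singles out a unique option once the whole opportunity set is in view; the options and scores are invented.

# Toy choice problem: options scored on two criteria (income, meaning).
options = {
    "law":        (9, 3),
    "philosophy": (3, 9),
    "teaching":   (6, 7),
}

def dominates(a, b):
    """Pairwise dominance: weakly better on every criterion, strictly better on one."""
    return all(x >= y for x, y in zip(options[a], options[b])) and options[a] != options[b]

# Neither option dominates the other: a "hard" pair when considered in isolation.
print(dominates("law", "philosophy"), dominates("philosophy", "law"))   # False False

def max_regret(a, menu):
    """Worst shortfall of option a relative to the best alternative on each criterion."""
    return max(
        max(options[b][i] for b in menu) - options[a][i]
        for i in range(len(options[a]))
    )

# A menu-relative rule still identifies a unique, justified choice.
print(min(options, key=lambda a: max_regret(a, options)))   # 'teaching'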

Francesco Ricci: “Simulation-Based Trustworthy Recommender Systems Evaluation”

Building more trustworthy recommender systems (RSs) is a societal issue. This goal has motivated the development of new types of RSs and the introduction of European regulations (Digital Services Act). A key step in the design and audit of such multistakeholder systems is their multidimensional evaluation. In fact, a system's effect on users' behaviour is hard to estimate offline, and it is risky to assess it online. A new line of research is revamping the usage of choice simulation techniques, which have the advantage of enabling offline measurement of the effect of novel recommendations and nudging strategies on users' behaviour. The talk will introduce these topics and illustrate the application of simulation-based evaluation methods to promote sustainable tourism.
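
To illustrate what simulation-based offline evaluation can look like, here is a toy sketch; the choice model, parameters, and the sustainability nudge are assumptions made for illustration, not the speaker's framework. Synthetic users pick from a ranked list via a logit choice model with position bias, so the effect of a nudged ranking on choices can be estimated without an online test.

import numpy as np

rng = np.random.default_rng(0)
n_items = 20
relevance = rng.normal(size=n_items)            # taste match per item (synthetic)
sustainable = rng.random(n_items) < 0.3         # flag for eco-friendly items (synthetic)

def simulate_choices(ranking, n_users=10_000, beta_pos=0.3):
    """Return the share of simulated choices that land on sustainable items."""
    util = relevance[ranking] - beta_pos * np.arange(len(ranking))   # position bias
    probs = np.exp(util) / np.exp(util).sum()                        # logit choice model
    picks = rng.choice(ranking, size=n_users, p=probs)
    return sustainable[picks].mean()

plain = np.argsort(-relevance)                             # rank by relevance only
nudged = np.argsort(-(relevance + 0.5 * sustainable))      # small boost for eco items

print(f"sustainable share, plain ranking:  {simulate_choices(plain):.2%}")
print(f"sustainable share, nudged ranking: {simulate_choices(nudged):.2%}")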

Martijn Willemsen: “Nudging in recommender systems: the effectiveness of nudging in (personalized) recommendations to help people achieve their goals”

Behavioral economics and decision-making research have been discussing for decades how the choice environment shapes decisions. Concepts such as preference construction, nudges, and choice architectures have brought a clearer understanding of how decision makers discover what they want and how the environment influences their choices. However, many of these techniques (such as the use of nudges or defaults) are often ‘one size fits all’. With the increased use of AI technologies such as recommender systems, digital nudging has been proposed as a way to make choice architecture tools more personalized, which makes some scholars worried about ‘hyper-nudging’: people being manipulated via subtle but very effective targeted nudges.

In this talk I will discuss my work on recommender systems and (digital) nudging. As recommender systems personalize the recommendations provided to the user, the question arises how much nudging is still needed if the personalization is effective. I will argue that we might still need nudges in domains where we help users go beyond their current preferences and behavior. I will review our work on energy-saving recommendations and music genre exploration, two domains in which personalized nudges might help people achieve new goals. We found that in these domains nudges have (some) value in helping people make more effective choices and explore.
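
As a purely illustrative sketch of what a personalized exploration nudge might look like in the music-genre case (the blending rule, genre names, and numbers are invented and not the speaker's method):

import numpy as np

genres = ["pop", "rock", "jazz", "classical"]
user_taste = np.array([0.7, 0.25, 0.05, 0.0])   # current listening profile (invented)
goal = np.array([0.0, 0.0, 1.0, 0.0])           # user's stated goal: explore jazz

def nudged_profile(taste, goal, strength=0.3):
    """Shift the recommendation target a step from current taste toward the goal."""
    blended = (1 - strength) * taste + strength * goal
    return blended / blended.sum()

# Larger "strength" means a stronger nudge away from current behaviour.
for strength in (0.0, 0.3, 0.6):
    profile = nudged_profile(user_taste, goal, strength)
    print(strength, dict(zip(genres, profile.round(2))))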