
A glimpse into our work-in-progress

12 Feb 2024

Reflections on the research presented at Etmaal

On Thursday 8th and Friday 9th February 2024, Hanne Vandenbroucke and Ulysse Maes attended the 26th edition of Etmaal van Communicatiewetenschap in Rotterdam. At this two-day conference, researchers in Communication Sciences from Belgium and the Netherlands come together to present their projects, get feedback from peers, and be inspired by the work of others.

 

They were invited to present their work-in-progress during the Research Escalator. In this blog post we highlight the key takeaways from their presentations.


Multi-stakeholder approach to news personalization – Hanne Vandenbroucke

What lies behind the "For you", "Read more" or "See also" sections on the website or mobile app of your favorite news brand? By conducting stakeholder interviews with professionals at the commercial news organizations operating in Flanders (DPG Media, Mediahuis, and Mediafin), we aim to map the development and implementation of recommender systems. The key internal stakeholder groups involved in and impacted by news recommender systems are (1) the newsroom, (2) the technical development team, and (3) the commercial business unit.


Based on the stakeholder interviews, we are able to build on the multi-stakeholder framework of Smets et al. (2022). The preliminary results give insight into the actual decision-making process behind recommender development. News organisations started experimenting with a news recommender system on average three years ago, and the initial process of trial and error has since transformed into an ongoing cycle of adjusting the RS design. In practice, both the newsroom and the business unit express their objectives, preconditions, and concerns to the product owner, who aligns the different perspectives and formulates a concrete set of goals. The technology development team then operationalizes these objectives into computational metrics and adjusts the recommender system design. Performance data is continuously collected and monitored, and a feedback loop is set up to communicate the results of the adjusted recommender to the product owner, who – together with the data analytics team – derives insights from the data and reports back to the business unit and the newsroom.
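
To make this cycle a little more concrete, the sketch below models how qualitative stakeholder objectives might be operationalized into computational metrics and checked in an ongoing feedback loop. It is a minimal illustration only: the stakeholder goals, metric names, and target values are assumptions made for the example, not findings from the interviews.

```python
# Hypothetical sketch of the objective-to-metric feedback cycle described above.
# Stakeholder goals, metric keys, and targets are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Objective:
    stakeholder: str   # e.g. "newsroom" or "business unit"
    description: str   # qualitative goal expressed to the product owner
    metric: str        # computational proxy chosen by the tech team
    target: float      # threshold the adjusted recommender should reach


# The product owner aligns stakeholder input into a concrete set of goals.
objectives = [
    Objective("newsroom", "expose readers to diverse sections", "section_diversity", 0.60),
    Objective("business unit", "increase engagement with recommendations", "click_through_rate", 0.08),
]


def evaluate(objectives: list[Objective], performance: dict[str, float]) -> list[str]:
    """One pass of the feedback loop: compare monitored performance data
    against the targets and report which objectives still need adjustment."""
    return [
        f"{o.stakeholder}: '{o.description}' below target "
        f"({performance.get(o.metric, 0.0):.2f} < {o.target:.2f})"
        for o in objectives
        if performance.get(o.metric, 0.0) < o.target
    ]


# Continuously collected performance data feeds the next adjustment cycle.
print(evaluate(objectives, {"section_diversity": 0.55, "click_through_rate": 0.09}))
```

In practice, of course, the monitoring and reporting described by the interviewees involves dedicated data analytics teams and dashboards rather than a single script; the sketch only captures the shape of the loop.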


Exploring the influence of misleading explanations on the perceived quality of recommender systems – Ulysse Maes

Nowadays, recommender systems are everywhere: you find them on Amazon, Netflix, and Spotify, for example. These algorithmic curation systems help internet users navigate efficiently through vast amounts of content. While they hold clear advantages in terms of user experience, they also come with limitations and normative concerns. One of these concerns stems from the limited transparency they provide, which may lead to distrust and frustration.


Adding explanations may yield different results, depending on your objectives. Tintarev and Masthoff (2007) suggest seven goals of explainable recommendations: effectiveness, efficiency, satisfaction, scrutability, transparency, trust, and persuasiveness. Note that optimizing for one goal might be beneficial for another (e.g. improving scrutability – giving users the ability to change the outcomes to their liking – might also improve satisfaction). However, optimizing for one goal might also harm other goals. This research specifically dives into the possible conflict between optimizing for persuasiveness and its effects on transparency and trust. By "optimizing for persuasiveness", we mean creating compelling narratives to persuade users to consume recommended content. For example, when you buy a pair of jeans online, the shop can try to upsell by recommending some white t-shirts and explain the recommendations as "Style advice for the perfect shirt to wear with your new jeans." A more neutral explanation could be: "Customers also bought."
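
To illustrate the contrast between the two framings, the toy snippet below attaches a neutral and a persuasive explanation to the same recommendation. The wording and item names follow the jeans/t-shirt example above and are invented for illustration; they are not taken from any real shop.

```python
# Minimal sketch of the two explanation styles discussed above,
# attached to the same recommendation. Wording is invented for illustration.

def neutral_explanation(recommended_item: str) -> str:
    # System-centric framing: states the statistical basis of the recommendation.
    return f"Customers who bought this item also bought: {recommended_item}."


def persuasive_explanation(purchased_item: str, recommended_item: str) -> str:
    # Benefit-centric framing: a compelling narrative meant to upsell.
    return (f"Style advice: a {recommended_item} is the perfect match "
            f"to wear with your new {purchased_item}.")


purchase, recommendation = "jeans", "white t-shirt"
print(neutral_explanation(recommendation))
print(persuasive_explanation(purchase, recommendation))
```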


Persuasion in itself is not problematic, but it might become so once it turns misleading: hiding important information, or even lying about how the system works or what drives a decision. Because explanations are often associated with transparency, both by end users and in academia, their mere presence can already increase trust in the system. By crafting compelling but incorrect explanations, it might therefore be possible to manipulate users into consuming certain content while still giving them a feeling of agency.

There are clearly incentives to create misleading explanations. But do they work? The literature is still inconclusive. While some research points to the effectiveness of personalized persuasion (Burtell & Woodside, 2023), other work highlights the detrimental effects on long-term trust: would platforms really jeopardize this? Another argument against the effectiveness of misleading explanations is that the benefits do not outweigh the costs – a critique sometimes levelled at explanations in general as well. The field of explainable AI recognizes the immense potential of Large Language Models (LLMs), such as ChatGPT, to generate personalized, dynamic explanations at scale, and it has already been shown that these explanations can be more persuasive than human-generated texts. One of the reasons we do not yet see LLM-generated explanations popping up everywhere is their tendency to "hallucinate": to make up plausible but incorrect narratives.
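
As a rough sketch of what "personalized, dynamic explanations at scale" could look like, the snippet below assembles a prompt for a generic language model. The call_llm placeholder and the prompt wording are assumptions for illustration, not tied to any specific model or API; the hallucination problem mentioned above is precisely why the generated text would still need to be checked against the system's actual recommendation logic.

```python
# Hypothetical prompt template for an LLM-generated, personalized explanation.
# `call_llm` is a placeholder, not a real library function.

def build_explanation_prompt(user_history: list[str], recommended_item: str) -> str:
    history = ", ".join(user_history)
    return (
        "You are writing a one-sentence explanation for a product recommendation.\n"
        f"The user recently viewed: {history}.\n"
        f"Recommended item: {recommended_item}.\n"
        "Write a compelling, personalized reason to check out the recommended item, "
        "without inventing facts about the user or the product."
    )


def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a hosted or local language model.
    return "(model-generated explanation goes here)"


print(call_llm(build_explanation_prompt(["jeans", "sneakers"], "white t-shirt")))
```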

