
Looking back at IUI 2025

31 Mar 2025

Ulysse attended the conference and presented at the AXAI workshop

Pristine beaches, a bright turquoise sea, impressive rock formations and pink flamingos… Cagliari has it all. Yet despite the undeniable beauty of Sardinia’s capital, the most memorable element of my stay was related to Intelligent User Interfaces. You read that right: I was lucky enough to attend the yearly conference on Intelligent User Interfaces (IUI), which turned out to be an exceptionally inspirational and instructive experience. It shouldn’t come as a surprise that my interest was sparked by the central theme: Artificial Intelligence (AI), and how User Interfaces (UI) can optimally accommodate it. But the real experience went above and beyond my initial expectations. In this article, I break it down into three key elements.


1. The overarching theme: AI should benefit the user

Believe it or not, the word “user” was mentioned even more often than “LLM” (see section 3, “There is no escaping LLMs”). That shouldn’t be too surprising for a conference focusing on user interfaces, right? Still, it struck me that this community really tries to model user preferences and needs accurately, and to that end draws far more often on cognitive and behavioral research than is common in computer science. As I’m also considering conducting user experiments to evaluate the effects of explanations on how users experience a system, this focus meant that I learned a lot about suitable methodologies.


Over the course of the week, my belief in the importance of putting the user at the center was reinforced as well. Now more than ever, it is important to guard user agency and autonomy, and to prevent over-reliance on increasingly capable and connected AI systems. The current generative AI boom raises a lot of concerns among the various affected stakeholders. Sometimes it feels like these tools are at least partly exploitative in nature, a concern that was vividly expressed by professor Giulio Jacucci in his opening keynote at the HAI-GEN workshop.


2. The AXAI workshop: Explanations should be adaptive to be impactful

Throughout the presentations in the AXAI (Adaptive eXplainable AI) workshop session, I learned that adaptivity can be geared towards multiple users and use cases, and is therefore often interpreted differently by different researchers and fields. Some focus on adapting explanations to the user at hand, in terms of content or complexity. Others consider adaptation to context. In any case, LLMs seem to offer a great avenue for increasing the level of adaptation in many cases, although this may spark concerns regarding trustworthiness.


Related to that topic, I presented our paper (link), co-authored with Lien and supervised by Annelien. What set it apart was mainly its ridiculously long title: “Mitigating Misleadingness in LLM-Generated Natural Language Explanations for Recommender Systems: Ensuring Broad Truthfulness Through Factuality and Faithfulness”. Not exactly the title of Sabrina Carpenter’s next hit, but it certainly sparked some lively discussion.


I opened my presentation with a straightforward example of gender bias in a job recommender: it generated the exact same “high-quality” explanation for two different users, a female designer who was recommended design jobs and a male designer who was steered toward management roles. This example illustrated the benefit of incorporating uncertainty and interactive counterfactuals into explanations, to enable greater transparency and scrutability. I recognized a similar call for communicating uncertainty as a means to achieve transparency in Prof. Q. Vera Liao’s keynote on Thursday. I was also pleased to see Prof. Turchi present “Talking Back - human input and explanations to interactive AI systems”, an inspiring study on interactive counterfactuals using SHAP values as sliders. It sparked my interest in further exploring the integration of interactive explanations with SHAP in content-based recommender systems.
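
To make the “SHAP values as sliders” idea concrete for myself, here is a minimal sketch of how such an interaction could look in a content-based recommender. The model, feature names and data are toy assumptions of mine, not the setup from the “Talking Back” paper: SHAP attributes the recommendation score to item features, and a slider simply re-queries the model with a user-adjusted feature value.

```python
# Minimal sketch: SHAP attributions plus a slider-style "what-if" counterfactual
# for a toy content-based recommender. All features and data are illustrative.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["genre_match", "novelty", "popularity", "price"]

# Toy training data: item features -> relevance score for one user
X = rng.random((500, len(feature_names)))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 3] + rng.normal(0, 0.05, 500)
model = GradientBoostingRegressor().fit(X, y)

# SHAP attribution for one recommended item: which features drove the score?
item = X[:1]
explainer = shap.Explainer(model)
attribution = explainer(item)
for name, value in zip(feature_names, attribution.values[0]):
    print(f"{name:>12}: {value:+.3f}")

# "Slider": the user drags one feature and immediately sees the counterfactual score.
def what_if(item_features, feature, new_value):
    counterfactual = item_features.copy()
    counterfactual[0, feature_names.index(feature)] = new_value
    return model.predict(counterfactual)[0]

print("original score:", model.predict(item)[0])
print("if novelty were 0.9:", what_if(item, "novelty", 0.9))
```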


My presentation then turned to the difference between explanations and justifications, and how LLMs enable the generation of plausible justifications at scale. This can pose a problem of unfounded trust in recommender systems, especially since earlier research has shown that the mere presence of explanations already enhances trust and item acceptance. This underscores the importance of truthful explanations, which brought me to the core of my presentation: to assess “truthfulness”, we should agree on a definition and an operationalization. I propose framing truthfulness as “providing accurate information”, consisting of both factuality and faithfulness. Unfortunately, different research disciplines often consider different aspects of truthfulness: while computer science work tends to focus on factuality, social science work mainly investigates faithfulness. Luckily, as Krzysztof Gajos mentioned in Wednesday’s morning panel, the field of IUI is well-positioned to bridge these different perspectives into truly useful interfaces.


Slide from AXAI presentation: "Defining truthfulness"
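
To make the distinction tangible, here is a tiny, purely illustrative sketch (my own toy setup, not the operationalization from our paper): factuality checks the explanation’s claims against the item’s ground-truth data, while faithfulness checks whether the features the explanation cites are the ones that actually drove the recommendation.

```python
# Toy illustration (hypothetical data, not the paper's operationalization) of
# separating factuality from faithfulness for one recommendation explanation.
item_metadata = {"genre": "jazz", "year": 1959, "artist": "Miles Davis"}  # ground truth
model_top_features = {"genre", "artist"}  # features that actually drove the recommendation

explanation = {
    "claims": {"genre": "jazz", "year": 1959},   # statements the explanation makes about the item
    "cited_features": {"genre", "year"},         # features it says the recommendation is based on
}

# Factuality: do the explanation's claims match the ground-truth item data?
is_factual = all(item_metadata.get(k) == v for k, v in explanation["claims"].items())

# Faithfulness: are the cited features among the model's real drivers?
is_faithful = explanation["cited_features"] <= model_top_features

print(f"factual: {is_factual}, faithful: {is_faithful}")  # -> factual: True, faithful: False
```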

I then discussed the four evaluation perspectives for assessing explanation quality in recommender systems proposed by Ge et al., as well as the seven explanation goals defined by Tintarev in 2007. I was curious whether the audience would deem this evaluation method useful outside of recommender systems as well, and it indeed proved to be fertile ground for discussion. While most workshop participants agreed that the dimensions could be useful, the explanation goal of “persuasion” was contested, as it should be seen as a side effect or external result, not something to optimize for. To end my presentation, I went over some possible mitigation strategies involving prompting-, interface- and model-based approaches (all discussed further in the paper).


Slide from AXAI presentation: "Mitigation strategies for LLM-generated misleading explanations"
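
As an example of what a prompting-based mitigation could look like in practice, here is a hedged sketch: the explanation prompt only exposes verified item facts and the recommender’s actual decisive features, and instructs the model not to add anything beyond them. The function name, prompt wording and data are my own illustrative assumptions, not the exact strategies from the paper.

```python
# Sketch of a prompting-based mitigation: ground the explanation prompt in verified
# item facts and the recommender's decisive features, and forbid invented details.
def build_grounded_prompt(user_interests, item_facts, decisive_features):
    """Compose an explanation prompt that limits the LLM to verified information."""
    fact_lines = "\n".join(f"- {key}: {value}" for key, value in item_facts.items())
    return (
        "You explain recommendations to users.\n"
        "Only use the verified item facts and the decisive features listed below.\n"
        "If something is not listed, say you do not know; do not invent details.\n\n"
        f"User interests: {', '.join(user_interests)}\n"
        f"Verified item facts:\n{fact_lines}\n"
        f"Decisive features (from the recommender): {', '.join(decisive_features)}\n\n"
        "Write a short, honest explanation of why this item was recommended."
    )

prompt = build_grounded_prompt(
    user_interests=["jazz", "vinyl"],
    item_facts={"title": "Kind of Blue", "genre": "jazz", "year": 1959},
    decisive_features=["genre match", "listening-history similarity"],
)
print(prompt)  # pass the resulting prompt to any LLM of your choice
```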

Besides the "Talking back" paper, the presentations most relevant to my area of interest in this session were "Toward a Human-Centered Metric for Evaluating Trust in Artificial Intelligence Systems" and "'Loss in Value': What it reveals about WHO an explanation serves well and WHEN".


Earlier that morning, I attended another great workshop: HAI-GEN, where the shift to an intent-based paradigm was stressed multiple times. I was very happy to hear this, as I’m also convinced that we are moving to a new way of interacting with our systems, where we shift from command-based interaction towards a more natural way of communicating our goals to digital systems. As far as I’m aware, the term “intent-based interaction” was coined by Jakob Nielsen in a now-famous blog post. I find this very inspiring and consider it one of the main guiding threads for my research.


3. There is no escaping LLMs


As announced in the opening talk, I indeed noticed that LLMs were ubiquitous. From self-improving LLM agents that learn to play Minecraft to LLMs that optimize meeting schedules: many authors (myself included) reported on the promises and perils of incorporating language models to enhance interactivity, accessibility or automation.


The popularity of LLMs should, however, not be mistaken for devotion. Many times I realized that, as a scientific community, we have the privilege of being critical and of looking beyond the hype, so that we can report on both the opportunities and the limitations of LLMs. That doesn’t mean we shouldn’t be excited about the seemingly endless possibilities of this new technology, of course. But as Prof. Burnett rightly highlighted, we should consider it in a responsible way. By doing so, we can provide valuable insights that help steer the development and implementation of AI models beyond purely profit-driven goals.


Conclusion: An inspiring conference at the heart of my research interests

This post could have been much longer if I had included all my notes. I'll end it here for now, but in the coming weeks I hope to frequently revisit these notes to reignite the inspiration I felt from attending the conference, listening to speakers, admiring impressive research projects, and meeting some of the most influential HCI researchers. Two memorable encounters will forever remain in my mind: the first was a lunch conversation with Professor Ted Selker, the creator of the well-known red pointing stick on my ThinkPad (though there's a more familiar term for it; if you know, you know). The second? A close encounter with a startled yet adorably cute beaver during a run around Cagliari’s stunning salt pans. I hope to reconnect with many of the inspiring individuals I met at the conference. In the meantime, let’s continue our ongoing pursuit of the perfect intelligent interface.
