Conversational Health Interfaces in the Era of LLMs: Designing for Engagement, Privacy, and Wellbeing

Shashank Ahire, Melissa Guyre, Bradley Rey, Minha Lee, Heloisa Candello

Published in Conversational User Interfaces, 2025

In Summary

At CUI 2025, our workshop “Conversational Health Interfaces in the Era of LLMs: Designing for Engagement, Privacy, and Wellbeing” brought together researchers, designers, and practitioners to explore the promises and pitfalls of using conversational user interfaces (CUIs) for health and wellbeing. Across group activities, we unpacked challenges around bias and fairness, user agency in stress interventions, and proactivity in exercise support.

Abstract

As Large Language Models (LLMs) revolutionize Conversational User Interfaces (CUIs) in health and wellbeing, these technologies offer unprecedented potential to enhance user wellbeing by improving physical health, psychological resilience, and social connectivity. However, the integration of such advanced AI into everyday CUI health applications brings substantial challenges, including privacy, user agency, and the psychological impacts of AI interactions. This workshop will provide a platform for collaborative dialogue to explore leveraging these advancements to improve health outcomes while addressing the ethical challenges and risks. Through presentations, breakout sessions, and collaborative discussions, participants will delve into themes such as designing multimodal CUI interventions, structuring conversational interventions for privacy and engagement, personalizing user experiences, and developing proactive and context-adaptive CUI strategies. These discussions aim to develop effective, user-centered CUI strategies that ensure the benefits of LLM-driven innovations are realized without compromising user wellbeing.

Proactive Health Interventions

In our group activity, we explored the design space of proactive health interventions through conversational user interfaces (CUIs). Participants were asked to consider a scenario in which a person receives proactive, in-the-moment health interventions while exercising, delivered via a CUI integrated into their smart device and/or headphones. The task: identify design challenges, open research questions, and early design directions for CUIs to be truly proactive in this context.

One of the first points raised was the fine line between helpful and overbearing. For example, if a user’s metrics drop mid-workout, should a CUI enthusiastically chime in with encouragement? While well intended, such interventions could feel patronizing or discouraging, especially for users with different fitness baselines, motivations, or personality traits. Not everyone is a “gym rat”, and a one-size-fits-all approach can alienate or demotivate users. These issues raise core questions: How should CUIs adapt their proactiveness across diverse emotional, physical, and goal-based states? And which personality traits shape how well users receive a given proactive CUI response structure?

The group also emphasized the importance of context. Factors like illness, location, long-term trends, current activity, or even the user’s mood could drastically change what kind of proactive intervention is appropriate. A user who wants to slow down may feel uneasy if the system insists on speeding up. If proactive CUIs aren’t tuned carefully, users might become over-reliant on the system or, conversely, desensitized to frequent or mismatched proactive interventions.
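To make the idea of context-sensitive gating concrete, here is a minimal sketch of a check a proactive CUI might run before speaking. Everything in it (the `Context` fields, the 5-minute rate limit, the function name) is an illustrative assumption for this post, not a system discussed at the workshop:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical slice of user context relevant to a workout prompt."""
    is_ill: bool
    wants_to_slow_down: bool        # inferred or explicitly stated intent
    minutes_since_last_prompt: float

def should_intervene(ctx: Context, metric_dropped: bool) -> bool:
    """Gate a proactive prompt on context, not on metrics alone."""
    if ctx.is_ill or ctx.wants_to_slow_down:
        return False                        # respect the user's state and intent
    if ctx.minutes_since_last_prompt < 5.0: # rate-limit to avoid desensitization
        return False
    return metric_dropped                   # only then fall back to the metric trigger
```

The point of the sketch is the ordering: context checks veto the intervention before the metric trigger is even consulted, so a dropped metric alone never speaks over the user's stated intent.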

Designing for internal states, such as motivation, mood, or intention, emerged as an important research direction. Shifting from goal- and metric-based logic toward reflective, conversational prompts (“How are you feeling right now?”) could enable more expressive and proactive CUIs over time. These CUIs wouldn’t just wait for a drop in data (i.e., trigger-based reactivity) or a prompt from the user (i.e., user-initiated reactivity); they’d engage users in ongoing dialogue that evolves with them. In doing so, proactive systems could help foster long-term engagement rather than short-term compliance (in contrast to notifications, which prompt us to act on information only in the short term).
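The three interaction modes named above can be sketched side by side; the mode names, example utterances, and selection logic below are our illustrative assumptions, not a design produced at the workshop:

```python
from enum import Enum, auto
from typing import Optional

class Mode(Enum):
    TRIGGER_REACTIVE = auto()    # fires only on a drop in sensed data
    USER_INITIATED = auto()      # waits for the user to ask
    PROACTIVE_DIALOGUE = auto()  # opens a reflective check-in on its own

def next_utterance(mode: Mode, metric_dropped: bool, user_asked: bool) -> Optional[str]:
    """Return what the CUI would say next in each mode, or None to stay silent."""
    if mode is Mode.TRIGGER_REACTIVE and metric_dropped:
        return "Your pace dropped. Want a quick boost?"
    if mode is Mode.USER_INITIATED and user_asked:
        return "Here's how your session is going."
    if mode is Mode.PROACTIVE_DIALOGUE:
        return "How are you feeling right now?"
    return None
```

Note how the first two modes stay silent unless an external event arrives, whereas the proactive mode can open a dialogue unprompted; that difference is exactly what the group meant by a system that is proactive rather than merely reactive.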

The group also considered research questions around the ethics and unintended consequences of proactive intervention. Could such interactions encourage an addiction to self-optimization? What happens when baseline data is sparse, or when the user simply does not want an intervention at a given moment?

As proactive health CUIs grow more sophisticated, so too must our questions about how, when, and why they intervene. And what makes a proactive system truly proactive rather than merely reactive? This group conversation was just a starting point, but it highlighted both the complexity and the promise of designing CUIs that truly listen before they, and we, speak.

In More Detail

Please review our workshop call (linked above). You can also view findings from the other workshop activities, as well as the submitted workshop papers, via the workshop’s website.