Workshop Topic

The rapid evolution of augmented and mixed reality (AR/MR) technologies, coupled with the integration of multimedia experiences, is reshaping how users interact with their environments. While traditional handheld displays such as smartphones and tablets have democratized access to AR/MR applications, new technologies such as Apple's Vision Pro, smart wearables, and tactile interfaces are setting the stage for enhanced multimodal interactions. These advancements come with their own set of challenges, including developing intuitive interaction techniques, creating seamless cross-device user experiences, and ensuring the effective integration of multimedia elements such as sound, visuals, and haptics. Addressing these challenges, and following our successful first workshop at EICS 2024, this second edition of “Experience 2.0 and Beyond” provides a platform for researchers, developers, and practitioners to explore innovative solutions for engineering AR/MR applications that span devices and modalities from visuals to multimedia, fostering a future where immersive, interactive, and multimedia experiences are accessible to all.

The workshop will be held in synchronous mode. Each accepted paper will feature a short presentation. At the workshop, the positions will be grouped into discussion themes to be explored interactively, resulting in research agendas and a draft vision for future joint elaboration. Workshop attendees will also be encouraged to showcase their work during the main conference's poster session.
Preliminary Workshop Program
Keynotes @ XP2.0 Workshop
Anke Dittmar (University of Rostock)

Short CV: Anke is a long-time researcher and educator in human-computer interaction, interaction design, and software engineering. Her research interests include user-oriented design methods, design representations, collaborative design activities, task modelling, and empirical studies of artefact use. Anke served as President of the European Association of Cognitive Ergonomics (EACE) from 2017 to 2024. She is Full Papers and Tech Notes co-chair of EICS 2025.

Abstract: User experience has become a key concept in human-centred design approaches, one that has helped to expand the focus from task-oriented aspects to the aesthetic, emotional, and social aspects of interaction. In my talk, I argue that existing understandings of user experience and the corresponding (co-)design practices tend to take product-centered design perspectives but pay less attention to the impact of new technologies on people and their experiences. I introduce experience spaces as a complementary concept to design spaces that supports systematic co-reflection on digital artifact use.

Peter Klein (uCORE Systems GmbH)

Short CV: Peter is Chief Creative Officer (CCO) at uCORE Systems GmbH and an experienced innovator at the interface of UX, AI, and smart assistance systems. As the long-standing head of the UID Lab (uidlabs.de), he has worked intensively on the design of user-centered technologies and has driven interdisciplinary research projects in the fields of human-computer interaction, gamification, and intelligent assistance systems. His expertise spans technology, psychology, and design, with the aim of making smart systems not only technically efficient but also acceptable, intuitive, and suitable for everyday use.

Abstract: In an era where interfaces compete for attention, this talk explores the opposite path: designing systems that disappear into the fabric of daily life. Based on real-world applications in assisted living and smart housing, we introduce the concept of Silent Interfaces: ambient, context-aware, ethically guided technologies that support without surveillance, act without noise, and earn trust by being reliably invisible. We present seven dimensions of UX in living systems, from sensor fusion and semantic AI to ethical reflection, multimodal calmness, and responsible gamification. The central thesis: homes should not be smart; they should be understanding. Technology should not dominate presence; it should amplify dignity. And design must not serve control, but care. Drawing from the interdisciplinary work of uCORE Systems, the talk demonstrates how to create user experiences not for a life online but for a life well lived: unseen, but deeply felt.
Position Talks @ XP2.0 Workshop

Nils Ove Beese, Jan Spilski, Thomas Lachmann, Jan-Hendrik Sünderkamp, Jan Hendrik Plümer, Alexander Jaksties, and Kerstin Müller

Abstract: The study investigated the impact of outdoor and indoor landmarks on a wayfinding task performed in an unknown, complex office building in Virtual Reality. To investigate different performance measures of orientation, twenty-two participants had to find the office building’s conference room. The two-story office building was constructed like a maze, with dead ends and loops, and came in four variants: no landmarks, indoor landmarks, outdoor landmarks, or both types of landmarks. At the end, participants had to draw a digital sketch map showing the way to the conference room. The presence of landmarks led to more accurate sketch maps than having no landmarks, while sketching time, the number of rooms traversed, and the time needed to reach the end room showed no statistically significant differences for the same comparisons.

Maties Claesen, Kris Luyten, and Raf Ramakers

Abstract: Astronauts routinely train for spacewalks while on Earth. These spacewalks, called extravehicular activities (EVAs), are trained either in neutral-buoyancy pools or in virtual-reality (VR) environments. While standardised, neither environment conveys the chaotic micro-dynamics of how a tethered tool moves in micro-gravity. We designed and developed ZeroTraining: an encountered-type haptic training rig (ZeroArm) combined with a VR environment that simulates the physical behavior of a tethered floating object in space (ZeroPGT). The combination of virtual and physical interactions ensures the training of dexterity skills and increases the transferability of the training to real-life situations. We demonstrate the feasibility of such a setup with low-cost components and validated our initial design in a formative study with ten participants. (See the illustrative tether-dynamics sketch at the end of this section.)

Kai J. Klingshirn, Christoph Garth, and Achim Ebert

Abstract: AI is increasingly integrated into a wide array of XR applications, including sophisticated navigation systems, immersive training simulations, and educational use cases. Consequently, ensuring the transparency and interpretability of these AI-driven functionalities has become a major challenge in XR development. This paper examines the growing importance of Explainable AI (XAI) in Extended Reality (XR) environments and identifies key challenges in developing effective explanation systems. We analyze how these AI-powered XR applications particularly benefit from transparent explanations that build trust, enhance user understanding, and improve overall adoption. After summarizing the general challenges in the field of XAI, we analyze how these challenges manifest in the specific context of XR. By synthesizing current research and identifying critical open questions, this work aims to guide future XAI development towards more transparent, trustworthy systems that prioritize human needs across XR applications and beyond.

Claudia Nass Bauer

Abstract: As AI systems increasingly permeate design education, the dominant interaction paradigm, text-based chat, risks constraining cognitive engagement in complex, iterative design tasks. This work explores whether and how multimodal interaction methods (e.g., visual, auditory, haptic) enhance cognitive performance compared to traditional chat-based AI interfaces within human-centered design (HCD) education. An initial literature review suggests that multimodal interfaces offer benefits such as reduced cognitive load, increased user engagement, improved learning outcomes, and enhanced collaborative processes. The current work is situated within a broader doctoral research project investigating how generative AI reshapes cognitive design processes in novice designers. Building on an earlier case study showing that chat interfaces often lead to superficial understanding and linear thinking, this article urges a rethinking of AI-assisted design education as a multimodal, situational, and didactically coordinated experience. The workshop provides a platform to exchange frameworks and strategies for engineering multimodal, cross-device AI experiences that better serve cognitive growth and design literacy in the generative age.

Roel Vertegaal and Roderick Murray-Smith

Abstract: This review introduces a new perspective on the analysis and design of HCI systems, called Interactive Inference. Interactive Inference models any task as work that minimizes surprise (or approximations thereof). The framework is based on the Active Inference model of predictive coding developed in computational neuroscience, which has seen rapid growth in recent years. Interactive Inference provides a coherent framework for managing generative models of humans, their environments, sensors, and interface components. It can inform design and support real-time, online adaptation, with model-based explanations for behaviours observed in HCI, and new tools to measure important concepts such as agency and engagement. There are currently two approaches: Murray-Smith et al. build more directly on the canonical active inference loop, with explicit sampling of predictive models, while the approach taken in this paper builds on Vertegaal et al. and simplifies active inference to two equations: the logarithm and the square of the signal-to-noise ratio of a task.
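To make that last sentence concrete, here is a purely illustrative reading of the two quantities, assuming the familiar Shannon formulation of Fitts' law; this is an editorial sketch, not the formulation from the paper itself. Treating target distance D as the signal and target width W as the tolerable noise:

```latex
% Editorial illustration only; not the paper's own formulation.
% Shannon-style reading with target distance D as "signal" and
% target width W as "noise":
\[
  \mathrm{ID} \;=\; \log_2\!\left(1 + \frac{D}{W}\right)
  \quad \text{(index of difficulty: the logarithm of an SNR)}
\]
\[
  \mathrm{SNR}^2 \;=\; \left(\frac{D}{W}\right)^{\!2}
  \quad \text{(the squared SNR, a candidate precision measure)}
\]
```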
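Referring back to the ZeroTraining position above, the following minimal Python sketch illustrates the kind of tethered micro-gravity dynamics that make such training hard. It is a hypothetical toy model (point mass, one-sided spring tether, all constants invented for illustration), not the ZeroPGT implementation:

```python
import numpy as np

# Minimal sketch (not the ZeroTraining implementation): a point-mass tool
# tethered to a fixed anchor in micro-gravity. The tether is modeled as a
# one-sided spring: it pulls only when taut, which is what makes the
# motion hard to anticipate for trainees. All constants are illustrative.

ANCHOR = np.array([0.0, 0.0, 0.0])   # tether attachment point (m)
REST_LEN = 1.0                        # unstretched tether length (m)
STIFFNESS = 50.0                      # spring constant when taut (N/m)
DAMPING = 0.05                        # small energy loss in the tether (N*s/m)
MASS = 1.2                            # tool mass (kg)
DT = 0.001                            # integration step (s)

def tether_force(pos, vel):
    """Restoring force of a slack/taut tether (zero when slack)."""
    offset = pos - ANCHOR
    dist = np.linalg.norm(offset)
    if dist <= REST_LEN:
        return np.zeros(3)            # slack tether exerts no force
    direction = offset / dist
    stretch = dist - REST_LEN
    radial_vel = np.dot(vel, direction)
    return -(STIFFNESS * stretch + DAMPING * radial_vel) * direction

def step(pos, vel):
    """Semi-implicit Euler step; no gravity term in micro-gravity."""
    acc = tether_force(pos, vel) / MASS
    vel = vel + acc * DT
    pos = pos + vel * DT
    return pos, vel

# A small shove sends the tool drifting until the tether snaps taut.
pos = np.array([0.5, 0.0, 0.0])
vel = np.array([0.8, 0.3, 0.0])
for _ in range(5000):                 # simulate 5 seconds
    pos, vel = step(pos, vel)
print(f"tool position after 5 s: {pos.round(3)}")
```

The one-sided force is what produces the jerky, hard-to-anticipate motion: the tool drifts freely while the tether is slack, then is yanked back toward the anchor the instant it snaps taut.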
Registration

Please visit the EICS conference website for registration and conference fees!
Contact

In case of any questions, please contact us at the following email address: xp2<AT>hciv.de (please replace <AT> with @).