Workshops
CHIMIE – Challenges in Internet-scale Multi-sensorial Interactions in Virtual Environments
Organizers: Klara Nahrstedt (UIUC, USA), Prabhakaran Balakrishnan (UT Dallas, USA), Jacob Chakareski (NJIT, USA)
Abstract: VR (Virtual Reality)-based immersive and interactive environments can be a great resource for learning and training, especially for concepts that involve safety, such as interacting with flammable or breakable objects. VR environments used for learning and training need natural user interactions that facilitate grabbing and moving objects, along with multi-sensorial experiences of touch and smell. This requires a better understanding of the effects of incorporating multiple sensorial displays (such as thermal, mid-air haptic, and olfactory displays) in VR-based collaborative learning environments. Facilitating collaboration at Internet scale is challenging for such multi-sensorial interactions in realistic VR environments. The tedious nature of developing these VR experiences continues to be a limiting factor for VR becoming mainstream. In addition, VR software working with devices such as thermal displays must be tested and verified for user safety. The goal of this workshop is to understand the challenges in creating and providing Internet-scale virtual environments for learning and training with support for multi-sensorial interactions.
Website: https://chimie.github.io/
Healthcare Intelligence and eXtended Reality
Organizers: Faraz Janan (Imperial College London and Anglia Ruskin University, UK) & Imran Ahmed (Anglia Ruskin University, UK)
Abstract: As the healthcare sector continues to evolve rapidly, the integration of cutting-edge technologies is reshaping how care is delivered. Extended Reality (XR), which includes Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR), is becoming increasingly influential in healthcare, offering new opportunities for training, surgical planning, and remote collaboration. Simultaneously, Artificial Intelligence (AI) is revolutionising diagnostics, patient care, and data-driven decision-making. The convergence of these two fields is set to usher in a new era of healthcare, marked by unparalleled precision, personalisation, and patient engagement.
This workshop aims to: Facilitate in-depth discussions and showcase the latest advancements at the intersection of XR and AI in healthcare; Identify emerging research areas and collaboratively chart the future directions for these interconnected fields; and Enhance the synergy between AI's analytical capabilities and XR's immersive experiences to create next-generation healthcare solutions.
Ethics in AI & XR
Organizers: Ana-Despina Tudor (University of the Arts London - London College of Communication) and Eric Fanghanel (University of the Arts London - London College of Communication)
Abstract: The integration of Artificial Intelligence (AI) into Extended Reality (XR) platforms has introduced transformative opportunities, alongside pressing ethical challenges. We aim to address concerns surrounding privacy, agency, bias, and the moral responsibilities of users, developers, and platform owners within AI-enhanced XR environments.
We will explore topics such as AI-driven avatars, representation, and social interaction in virtual spaces. Furthermore, AI's role in content curation and creation within XR platforms poses challenges to user agency, urging the need for ethical design practices that prioritise transparency, inclusivity, and meaningful user consent.
In addition, this workshop will discuss ethical considerations related to data protection in XR, emphasizing the development of robust consent processes and data governance structures that safeguard user rights while allowing for technological advancement. By examining these issues, we seek to establish a more responsible approach to the ethical implementation of AI in XR.
Website: https://sites.google.com/view/aixr/aixvr-2025?authuser=1
Leveraging immersive technologies for sustainable Industry 5.0
Organizers: Ingrid Winkler (SENAI CIMATEC University, Brazil), Yiyu Cai (Nanyang Technological University, Singapore), Tiago Silva (NOVA University, Portugal), Fabio Alves (NVIDIA), Arlindo Galvão (Goias Federal University, Brazil), Rui Silva (Lusiada University, Portugal), Marcio Catapan (Paraná Federal University, Brazil), Paulo Ambrósio (Santa Cruz State University, Brazil)
Abstract: Our planet faces unprecedented sustainability challenges, necessitating innovative solutions aligned with the UN's Sustainable Development Goals (SDGs). This workshop highlights the transformative potential of immersive technologies in addressing SDG 12, which calls for Sustainable Consumption and Production Patterns. XR technologies have proven effective in advancing SDG 5 (Gender Equality), SDG 13 (Climate Action), and SDG 8 (Decent Work for All). Despite this potential, XR remains underutilized in addressing SDG 12 targets such as responsible management of natural resources (12.2), reducing food waste (12.3), substantial waste reduction (12.5), promoting sustainable lifestyles (12.8), supporting developing countries' capacity for sustainable practices (12.9), and promoting sustainable tourism (12.A). Generative AI for creating immersive experiences also offers significant opportunities for sustainable industrial processes. However, creating these experiences remains complex for those without expertise in 3D modeling or programming, such as digital creators, teachers, doctors, engineers, and other professionals. By making the development of immersive experiences more intuitive and accessible, generative AI can contribute to more sustainable production and consumption overall. Thus, this workshop brings together researchers, developers, technology providers, industry professionals, policymakers, and end-users to identify actionable strategies and share best practices, aiming for a sustainable future through XR solutions.
Website: https://sites.google.com/doc.senaicimatec.edu.br/aixr-for-industry50/
XRiM – XR technologies in Museums
Organizers: Agata Marta Soccini (University of Torino, Italy), Nobuyuki Umezu (Ibaraki University, Japan), Shih-Wei Sun (Taipei National University of the Arts, Taiwan)
Abstract: Museums have always been attractive and unique places where people learn from various exhibits, displays, samples, and special devices. Although today's smartphones, PCs, and game consoles are very powerful, experiences in museums full of special-purpose devices should remain clearly distinct from those offered by everyday equipment. Our workshop aims to showcase a wide range of excellent designs, implementations, experimental results, and case studies. We encourage papers that extend the ability of museums to enhance visitors' experiences with XR/AR/VR/MR and related technologies.
We also believe in artwork as the core entity of the museum experience, and support the development of art that leverages AI and XR. We therefore encourage submissions of ART papers.
Website: https://xrim.work/AIxVR2025/
Generative AI and LLMs for XR
Organizers: Bhojan Anand (National University of Singapore), Anita Hu (National Tsing Hua University), Pasquale Cascarano (University of Bologna)
Abstract: The intersection of Generative AI (GenAI), Large Language Models (LLMs), and Extended Reality (XR) environments is reshaping the landscape of digital interactions, leading to more immersive, personalized, and responsive virtual experiences. This integration enables users to engage with digital objects and environments in ways that are context-aware, dynamic, and highly interactive, paving the way for the future of human-machine collaboration. As GenAI drives the creation of real-time content and adaptive virtual worlds, LLMs enhance natural language understanding, enabling fluid, conversational interactions and rich narrative generation within XR spaces. Incorporating AI-generated content, interactive dialogue, and context-aware systems into XR applications represents a significant opportunity across various industries such as gaming, education, healthcare, and entertainment. Real-time content adaptation driven by AI allows virtual environments to respond dynamically to user inputs, behaviors, and preferences, making every experience unique and tailored. Furthermore, intelligent virtual agents powered by LLMs can simulate human-like behaviors, providing more intuitive and realistic interactions between users and XR environments. This workshop brings together leading researchers and practitioners to explore the transformative potential of integrating GenAI and LLMs into XR environments.
AI and AR/VR for Exergaming
Organizers: Sean Banerjee (Wright State University), Natasha Banerjee (Wright State University), Bhawna Shiwani (Delsys / Altec, Inc.), Serge Roy (Delsys / Altec, Inc.)
Abstract: AI-enabled AR/VR-based exergames have the potential to enable a broader spectrum of users to participate in rehabilitative and fitness activities in immersive social environments that provide continual feedback to improve performance and incentivize continued usage. However, consumer-grade AR/VR systems do not yet have the full capabilities needed to realize exergaming at scale. The challenges include, but are not limited to, developing AI feedback algorithms that can run on untethered systems, long-term discomfort caused by system bulkiness, the inability to simulate real-world attributes such as weight and resistance, novel exercise routine generation, and multi-user latency.