
Workshop XR in Games - Final Program

Workshop Program


8h30 Invited Talk: Using Wearable Devices to Interact with Virtual Agents in 3D Interactive Storytelling (Tsai-Yen Li, National Chengchi University in Taiwan)


9h00 Invited Talk: Creating a virtual reality serious game using a domain specific language for interactive 3D environments (Florent Robert, Polytech of the Côte d’Azur University in France)


9h30 Invited Talk: Investigating Users’ Natural Engagement with a 3D Design Approach in an Egocentric Vision Scenario (Eder de Oliveira, Universidade Federal Fluminense, Brazil)


10h00 Invited Talk: Designing an Adaptive Assisting Interface for Learning Virtual Filmmaking (Hui-Yin Wu, Université Côte d’Azur, Inria, France)


10h30 [Paper Presentation] Cooking in the dark: a mixed reality empathy experience for the embodiment of blindness. RENAN GUARESE (RMIT, Australia), FRANKLIN BASTIDAS (UFRGS, Brazil), JOÃO BECKER (UFRGS, Brazil), MARIANE GIAMBASTIANI (UFRGS, Brazil), YHONATAN IQUIAPAZA (UFRGS, Brazil), LENNON MACEDO (UFRGS, Brazil), LUCIANA NEDEL (UFRGS, Brazil), ANDERSON MACIEL (UFRGS, Brazil), FABIO ZAMBETTA (RMIT, Australia), RON VAN SCHYNDEL (RMIT, Australia)


10h50 [Paper Presentation] An experimental methodology to capture user and gameplay data tied to cybersickness. THIAGO PORCINO, DANIELA TREVISAN, ESTEBAN CLUA (Universidade Federal Fluminense, Brazil)


11h10 Panel: XR in Games – How to make a better game experience


12h30 End of Workshop

 

Details

 

Title: Using Wearable Devices to Interact with Virtual Agents in 3D Interactive Storytelling

Speaker: Tsai-Yen Li

Bio: Professor in the Computer Science Department of National Chengchi University in Taiwan. His research domains include interactive storytelling, virtual reality, and character animation. He has applied artificial intelligence technologies to game design, intelligent user interfaces, and computer animation. He is currently also working on the design of virtual film sets, camera control in games, drone cinematography, and virtual psychological experiments.

Abstract:

In recent years, applications of virtual reality have been booming in industries such as training and entertainment. However, most of them use traditional user interfaces, such as buttons on controllers, to interact with virtual agents. In entertainment applications such as VR games, once a player has chosen her movement, the responses from the non-player character (NPC) are usually fixed animations, voice, or text outputs triggered by events. We believe this kind of interaction cannot easily immerse a player in the virtual world and let her enjoy the story. In this work, we propose using wearable devices to capture the player’s gestures and using her full-body movements as inputs. In addition, we attempt to make the virtual character’s animation module parameterizable, so as to deliver appropriate, flexible, and diversified responses to the player. We have also designed an interactive storytelling scenario in which the player can experience different story plots and perceive responsive animation feedback through her interactions with the virtual world. We have implemented an interactive storytelling system that captures and interprets the user’s body actions through wearable devices and then decides how to perform the NPC’s animation accordingly. The storyline is adjusted through interactions with the NPC or the environment, leading to different story experiences. We conducted a user study to evaluate our system, comparing a traditional controller with the wearable devices. Participants evaluated the system by filling in questionnaires and were interviewed after the experiment. The results reveal that the interaction methods we designed are more intuitive and easier to use than the controller. In addition, users were willing to play with the system multiple times, which confirms the replay value of our interactive storytelling system.
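As a rough illustration of the interaction loop the abstract describes, the sketch below maps a recognized full-body gesture to a parameterized NPC animation response rather than a fixed clip. This is only a hypothetical sketch in Python; the names (`NPCResponse`, `respond_to_player`, the gesture labels) and the scaling scheme are our assumptions, not details from the talk.

```python
from dataclasses import dataclass

@dataclass
class NPCResponse:
    """A parameterized animation response rather than a fixed clip."""
    animation: str    # base animation to play
    intensity: float  # 0.0-1.0, scales the motion
    line: str         # spoken or text response

# Hypothetical mapping from recognized player gestures to NPC responses.
GESTURE_RESPONSES = {
    "wave":  NPCResponse("greet", 0.6, "Hello, traveler!"),
    "point": NPCResponse("look_at_target", 0.8, "Over there?"),
    "bow":   NPCResponse("bow_back", 1.0, "You honor me."),
}

def respond_to_player(gesture: str, engagement: float) -> NPCResponse:
    """Pick a response and scale its intensity by the player's engagement."""
    base = GESTURE_RESPONSES.get(gesture, NPCResponse("idle", 0.2, "..."))
    return NPCResponse(base.animation, base.intensity * engagement, base.line)
```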

----------

Title: Creating a virtual reality serious game using a domain specific language for interactive 3D environments

Speaker: Florent Robert

Bio: I am an engineering student in IT at Polytech of the Côte d’Azur University in France. Specializing in HCI, I am interested in immersive environments, complex human-computer interaction, and research in these domains.

Abstract:

Virtual reality (VR) places players in immersive environments with which they can interact, letting them experience lifelike scenarios in various contexts. However, setting up a scenario in a game can be a long and complex process for the developer, for two main reasons: (1) the large number of different objects composing a scene and their arrangement in it, which makes it challenging to take all the parameters into account when creating a scenario (what the user sees, can touch, and can interact with), and (2) the diversity of tasks, of varying complexity, that the user must be able to accomplish. A way is therefore needed to set realistic, achievable goals for users, so that they feel they are progressing through the game, and to provide help as and when they need it. This project presents a system for creating scenarios in virtual reality. A scenario is a series of tasks that the user must accomplish in a given time. The system uses a DSL (Domain Specific Language) to help the developer create scenarios, providing two functionalities for this purpose: (1) annotating the various elements of the scene according to object type and location, and (2) defining the tasks composing a scenario and the constraints and aids related to them. The developer can thus create scenarios that provide progressive assistance to the user while taking into account the different properties of the objects in the scene and their situation relative to the player. The system is implemented in Unity and features two types of 3D environments: an indoor scenario in a house and an outdoor scenario near a road with traffic.
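The abstract does not give the DSL’s concrete syntax, so the following is only a hypothetical sketch, in Python rather than the Unity/C# the actual system uses, of the two functionalities it describes: annotating scene elements by type and location, and defining timed tasks with progressive aids. All names and fields are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str      # e.g. "kettle"
    kind: str      # type annotation, e.g. "appliance"
    location: str  # location annotation, e.g. "kitchen"

@dataclass
class Task:
    goal: str            # what the user must accomplish
    target: SceneObject  # annotated scene element the task refers to
    time_limit_s: float  # a scenario is a series of timed tasks
    hints: list = field(default_factory=list)  # progressive assistance

# A scenario is an ordered series of tasks.
kettle = SceneObject("kettle", kind="appliance", location="kitchen")
scenario = [
    Task(goal="fill the kettle", target=kettle, time_limit_s=30.0,
         hints=["highlight the kettle", "play an audio cue"]),
]
```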

----------

Title: Investigating Users’ Natural Engagement with a 3D Design Approach in an Egocentric Vision Scenario

Speaker: Eder de Oliveira (Universidade Federal Fluminense, Brazil)

Abstract:

User interfaces based on modalities such as touch, gestures, or voice are often referred to as Natural User Interfaces (NUIs). In virtual and augmented reality, gestures are a very natural way to interact with digital objects. Still, there is a lack of clear guidelines for designing gesture-based interaction in AR and VR. This work presents a user-centered design approach that generated an interaction vocabulary, considering users’ preferences, seeking a set of natural and comfortable hand poses, and evaluating users’ satisfaction and performance.

----------

Title: Designing an Adaptive Assisting Interface for Learning Virtual Filmmaking

Speaker: Hui-Yin Wu

Bio: Junior researcher in computer science at Université Côte d’Azur, Inria, France. Her research domain is interactive and multimedia storytelling, notably enabling the design of personalized content for immersive 3D environments and investigating algorithms and approaches for the automated analysis and synthesis of visual media content. Her ongoing work focuses on accessible multimedia technologies for people with low vision using virtual reality.

Abstract:

For film pre-visualization, 3D virtual environments have changed the way filmmakers can plan and create visual content before investing in physical sets and actors. These same tools can also offer filmmaking students more opportunities for hands-on practice. Setting out from this motivation, we developed a “one-man movie” application using virtual reality headsets, in which users can take on and experience different roles in film production – the director, cinematographer, and editor – for the same filming scene. Over the years this application has continued to evolve with the addition of various functionalities, such as more intuitive character animation systems, smoother camera planning and movement editors, and camera positioning recommendations. At the same time, these evolutions gradually introduce and reveal the complexity of the filmmaking process itself, making it harder for novice users to familiarize themselves with all the functionalities, operate the virtual film set, and reach the envisioned learning goals. In this talk, we will introduce our VR filmmaking setup and focus on an adaptive assisting interface that helps users learn virtual filmmaking in a game-like environment. The design of the system is based on scaffolding theory: the idea is to provide timely guidance to the user, in the form of visual and audio messages, adapted to each person’s skill level, performance, and the tasks they are trying to achieve. The adaptive assisting interface was developed on the existing one-man movie virtual filmmaking setup using HTC Vive Pro headsets. To evaluate the learning interface, we conducted a study with 24 participants, who were asked to operate the film set with or without our adaptive assisting interface. Results suggest that our system provides users with a better learning experience and enables positive knowledge acquisition.
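The talk describes guidance adapted to skill level, performance, and task, in the spirit of scaffolding theory. As a purely illustrative sketch, a scaffolding-style hint selector might look like the following; the function name, thresholds, and message forms are all made up for illustration and do not come from the talk.

```python
from typing import Optional

def select_hint(skill: float, errors: int, task: str) -> Optional[str]:
    """Scaffolding-style guidance: more support for novices,
    fading out as the user's skill grows."""
    if skill > 0.8 and errors == 0:
        return None  # proficient and error-free: withdraw the scaffold
    if skill > 0.5:
        return f"Audio reminder of the next step of '{task}'."
    if errors > 2:
        return f"Step-by-step visual and audio walkthrough of '{task}'."
    return f"Visual highlight of the controls needed for '{task}'."
```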

----------

[Paper] Cooking in the dark: a mixed reality empathy experience for the embodiment of blindness

Authors: RENAN GUARESE (RMIT, Australia), FRANKLIN BASTIDAS (UFRGS, Brazil), JOÃO BECKER (UFRGS, Brazil), MARIANE GIAMBASTIANI (UFRGS, Brazil), YHONATAN IQUIAPAZA (UFRGS, Brazil), LENNON MACEDO (UFRGS, Brazil), LUCIANA NEDEL (UFRGS, Brazil), ANDERSON MACIEL (UFRGS, Brazil), FABIO ZAMBETTA (RMIT, Australia), RON VAN SCHYNDEL (RMIT, Australia)

To promote a sense of empathy for difference in people without disabilities, we propose a gaming experience that allows users to embody a visual impairment. By occluding the user’s vision and providing spatialized audio and passive haptic feedback, together with a speech-recognition digital assistant, our goal is to offer a multi-sensory experience that enhances the user’s sense of embodiment inside a mixed reality blindness simulation. In the game, while expecting a guest to arrive, the player is required to cook a meal completely in the dark. Aided solely by their remaining senses and the digital assistant, players must work through several tasks to prepare dinner in time, at the risk of losing a love interest.

----------

[Paper] An experimental methodology to capture user and gameplay data tied to cybersickness

Authors: THIAGO PORCINO, DANIELA TREVISAN, ESTEBAN CLUA (Universidade Federal Fluminense, Brazil)

Virtual reality and head-mounted displays are constantly gaining popularity in fields such as education, the military, entertainment, and bio/medical informatics. Although these technologies provide a high sense of immersion, they can also trigger symptoms of discomfort. This condition is called cybersickness (CS) and is a frequent topic in recent virtual reality publications. We created and conducted an iterative evaluation protocol and developed two VR games (a racing game and a flight game) to capture user and gameplay data tied to cybersickness. The recorded data can be used for further machine learning analysis of cybersickness.
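To make the kind of data capture described above concrete, here is a minimal, hypothetical sketch of per-frame session logging for later machine-learning analysis. The feature set, file format, and discomfort scale are our assumptions; the paper’s actual recording pipeline is not specified in the abstract.

```python
import csv
import time

# Hypothetical per-frame features that might correlate with cybersickness;
# the paper's actual feature set is not given in the abstract.
FIELDS = ["t", "speed", "acceleration", "rotation_rate", "fov_deg", "discomfort"]

def log_session(frames, path="session.csv"):
    """Write one row of user/gameplay data per frame for offline analysis."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(frames)  # each frame is a dict keyed by FIELDS

# Example: one frame with a self-reported discomfort level (0-10 scale).
log_session([{"t": time.time(), "speed": 4.2, "acceleration": 0.3,
              "rotation_rate": 12.0, "fov_deg": 90, "discomfort": 2}])
```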
