Journal on Multimodal User Interfaces (2021) 15:255–256
https://doi.org/10.1007/s12193-021-00380-0

EDITORIAL

Advanced multimodal interaction techniques and user interfaces for serious games and virtual environments

Fotis Liarokapis¹,² · Sebastian von Mammen³ · Athanasios Vourvopoulos⁴

Published online: 10 August 2021

© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2021

* Fotis Liarokapis
f.liarokapis@cyens.org.cy; fotios.liarokapis@cut.ac.cy

Sebastian von Mammen
sebastian.von.mammen@uni-wuerzburg.de

Athanasios Vourvopoulos
athanasios.vourvopoulos@tecnico.ulisboa.pt

¹ CYENS—Centre of Excellence, Nicosia, Cyprus
² Cyprus University of Technology, Limassol, Cyprus
³ Faculty of Mathematics and Computer Science, University of Würzburg, Würzburg, Germany
⁴ Institute for Systems and Robotics (ISR), Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal

Human–computer interaction and multimodal user interfaces have evolved dramatically over the last few years, offering novel ways of interaction for serious games and virtual environments.

Multimodal user interfaces have progressed both in terms of hardware, from basic I/O controls to sophisticated sensor devices (e.g. body tracking, physiological sensors), and in terms of software, from simple graphical user interfaces to advanced user interfaces. This development of new multimodal interaction paradigms has augmented the way serious games and virtual environments are utilized, rapidly expanding the breadth of opportunities for training and simulation. As a result, the user base has grown significantly, making these systems accessible to a diverse audience.

There are many factors affecting the evolution of advanced multimodal interaction techniques and user interfaces, such as the broad variety of facets of virtual worlds and computer games, which span computer graphics, software engineering, and human–computer interaction techniques. These factors include the technology-driven nature of new developments, which often neglects usability and applicability, as well as the convergence of new technologies, such as learning interfaces, brain-computer interfaces, exergames, interactive physics simulations and AI techniques, that require multimodal approaches.

For this special issue we have accepted five articles that demonstrate the breadth of advanced multimodal interaction techniques and user interfaces for serious games and virtual environments. These articles are a mix of original submissions and extended, revised versions of selected best technical papers presented at the 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games 2017), held 6–8 September 2017 in Athens, Greece.

The first article, entitled "fNIRS-based classification of mind-wandering with personalized window selection for multimodal learning interfaces" by Liu et al., presents a study that uses functional near-infrared spectroscopy (fNIRS) and investigates machine learning classifiers for detecting mind-wandering episodes from fNIRS data, at both the individual and the group level, focusing specifically on automated window selection to improve classification results. Results show that the proposed algorithm achieves a significant improvement over the previous state of the art in brain-based classification of mind-wandering, building a foundation both for the evaluation of multimodal learning interfaces and for future attention-aware systems.
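To make the idea of window selection concrete, the sketch below shows a generic sliding-window classification pipeline in Python: fixed-length windows are cut from a signal, simple summary features are extracted, and cross-validation picks the best-performing window length per participant. The feature choice, classifier and parameter values are illustrative assumptions for exposition, not the authors' actual fNIRS pipeline.

```python
# Minimal sketch of per-participant window-length selection for
# window-based classification; all names and parameters are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def windowize(signal, labels, length, step):
    """Cut a 1-D signal into fixed-length windows with summary features."""
    X, y = [], []
    for start in range(0, len(signal) - length + 1, step):
        seg = signal[start:start + length]
        X.append([seg.mean(), seg.std(), seg.max() - seg.min()])
        y.append(int(labels[start:start + length].mean() > 0.5))  # majority label
    return np.array(X), np.array(y)

def select_window_length(signal, labels, candidates, step=25):
    """Pick the window length with the highest cross-validated accuracy."""
    scores = {}
    for length in candidates:
        X, y = windowize(signal, labels, length, step)
        scores[length] = cross_val_score(
            LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    return max(scores, key=scores.get), scores

# Synthetic stand-in for one participant: 10 min of one channel at 10 Hz.
rng = np.random.default_rng(0)
signal = rng.normal(size=6000)
labels = rng.integers(0, 2, size=6000)   # 1 = mind-wandering (hypothetical)
best, scores = select_window_length(signal, labels, candidates=[50, 100, 200])
print(best, scores)
```

Running the same selection per participant, rather than fixing one global window length, is what "personalized" window selection amounts to in this generic form.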

The second article, entitled "A BCI video game using neurofeedback improves the attention of children with autism" by Mercado et al., describes the results of a 10-week deployment study in which 26 children with severe autism used FarmerKeeper as a tool to support their neurofeedback therapies. Pre- and post-assessment evaluation indicated that all of the children improved their attention, attentional control and sustained attention. Directions for future work and the potential benefits of this new generation of BCI video games in healthcare scenarios are also discussed.

The third article, entitled "Circus in Motion: a multimodal exergame supporting vestibular therapy for children with autism" by Cibrian et al., presents the design and development of Circus in Motion, a multimodal exergame that supports children with autism in practicing non-locomotor movements. A controlled experiment with 12 children with autism showed that Circus in Motion surpasses traditional vestibular therapies in increasing physical activation and the number of movement repetitions. Open challenges and opportunities for multimodal exergames to support long-term motor therapeutic interventions for children with autism are also discussed.

The fourth article, entitled "Neighborhood based decision theoretic rough set under dynamic granulation for BCI motor imagery classification" by Devi and Inbarani, analyses and classifies brain signals of motor movements to aid rehabilitation and restoration, proposing a novel optimization technique based on a neighborhood decision-theoretic rough set under dynamic granulation. Results indicate that the proposed method yields higher classification accuracy than existing approaches.
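As a rough intuition for this family of methods, the toy Python sketch below computes the classical neighborhood rough set dependency measure: the fraction of samples whose δ-neighborhood is label-pure (the positive region). This is only the textbook building block assumed here for illustration, not the authors' decision-theoretic, dynamic-granulation algorithm.

```python
# Toy illustration of the neighborhood rough set dependency measure;
# data and parameters are synthetic and hypothetical.
import numpy as np

def dependency(X, y, delta):
    """Fraction of samples whose Euclidean delta-neighborhood contains
    only samples of their own class (the positive region)."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    pure = sum(int((y[dists[i] <= delta] == y[i]).all()) for i in range(len(X)))
    return pure / len(X)

# Two well-separated synthetic classes -> dependency close to 1.0.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 3)), rng.normal(2.0, 0.3, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
print(dependency(X, y, delta=0.8))
```

Feature selection variants of such methods search for feature subsets that maximize this dependency; decision-theoretic extensions additionally weigh misclassification costs when deciding region membership.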

The final article, entitled "Identifying and evaluating conceptual representations for auditory-enhanced interactive physics simulations" by Tomlinson et al., evaluates sets of audio designs for two different, but contextually and visually similar, simulations. The authors identified key aspects of the audio representations and of the simulation content that needed to be evaluated, and compared designs across the two simulations to understand which auditory designs could generalize to other simulations. They suggest important characteristics to represent through audio in future simulations, provide sound design suggestions, and address how overlap between visual and audio representations can support learning opportunities.

The guest editors wish to thank Prof. Jean-Claude Martin for his guidance in producing this special issue, as well as the authors and reviewers for their hard work.

We hope you enjoy reading these articles and learn more about advanced multimodal interaction techniques and user interfaces for serious games and virtual environments.

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
