Since we are part of the media department at the University of Applied Sciences Düsseldorf (HSD), research is one of the most important aspects of our work. Mixed reality technology is a young field of research that is constantly expanding and changing, which is why there are many intriguing topics to explore and questions to ask. Here we collect and present a selection of our recent and previous scientific publications.

February 2021
Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMD). A significant portion of a participant’s face is hidden and facial expressions are difficult to perceive. Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware. In this paper, we propose one of the first low-cost systems for this task, which uses only free, open-source software and affordable hardware.
November 2020
is a rose – A Performative Installation in the Context of Art and Technology
Advancing technology enables new forms of contemporary artistic expression, which, however, require a large set of skills to develop and therefore involve a team with diverse backgrounds. In this paper, we present implementation details and the artistic background of the art piece is a rose, which was developed and exhibited in 2019. Based on this example and our previous experience working on different art applications, we provide an insight into the interdisciplinary work between artists and developers.
September 2020
Smart Object Segmentation to Enhance the Creation of Interactive Environments
The objective of our research is to enhance the creation of interactive environments such as in VR applications. An interactive environment can be produced from a point cloud that is acquired by a 3D scanning process of a certain scenery. Segmentation is needed to extract objects from that point cloud in order to, e.g., apply certain physical properties to them in a further step. Doing this manually takes a lot of effort, as individual objects have to be extracted and post-processed. Thus, our research aim is real-world, cross-domain, automatic, semantic segmentation without the estimation of specific object classes.
September 2020
Auf dem Weg zu Face-to-Face-Telepräsenzanwendungen in VR mit generativen neuronalen Netzen
Three-dimensional capture of faces with personal expressions for facial reconstruction under a head-mounted display.
July 2020
MotionHub: Middleware for Unification of Multiple Body Tracking Systems
There is a substantial number of body tracking systems (BTS), covering a wide variety of technologies, quality levels and price ranges for character animation, dancing or gaming. To the disadvantage of developers and artists, almost every BTS streams out a different protocol and different tracking data. Not only do they vary in terms of scale and offset, but their skeletal data also differs in the rotational offsets between joints and in the overall number of bones.
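To illustrate the kind of unification such a middleware has to perform, here is a minimal, purely illustrative sketch (not MotionHub's actual code; all names are hypothetical): a joint reported by one tracking system is mapped into a shared convention by applying a scale factor, a positional offset and a per-joint rotational offset given as a quaternion.

```python
# Illustrative sketch (not MotionHub's implementation): mapping one
# tracking system's joint data into a shared skeleton convention.

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def unify_joint(position, rotation, scale, offset, rot_offset):
    """Convert one system's joint into the shared convention.

    position:   (x, y, z) in the source system's units
    rotation:   quaternion (w, x, y, z) in the source system's frame
    scale:      unit conversion, e.g. 0.001 for millimetres to metres
    offset:     (x, y, z) translation into the shared origin
    rot_offset: quaternion correcting the joint's rotational offset
    """
    x, y, z = position
    ox, oy, oz = offset
    unified_pos = (x * scale + ox, y * scale + oy, z * scale + oz)
    unified_rot = quat_mul(rot_offset, rotation)  # pre-rotate the joint
    return unified_pos, unified_rot
```

A middleware would hold one `(scale, offset, rot_offset)` configuration per supported tracking system and per joint, so that all downstream consumers see a single, consistent skeleton format.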
July 2020
Point Cloud Segmentation: Solving a Perceptual Grouping Task with Deep Reinforcement Learning
We propose a method to segment a real-world point cloud as a perceptual grouping task (PGT) by a deep reinforcement learning (DRL) agent. A point cloud is divided into groups of points, named superpoints, for the PGT. These superpoints should be grouped into objects by a deep neural network policy that is optimised by a DRL algorithm.
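The grouping loop can be sketched as follows. This is a hypothetical toy illustration of the general idea, not the paper's method: a policy (here a hand-written stand-in for the learned network) repeatedly decides whether a candidate superpoint belongs to the object grown so far.

```python
# Toy sketch of a perceptual grouping episode (not the paper's code):
# a policy decides, per candidate superpoint, merge (True) or skip (False).

def toy_policy(segment, candidate):
    """Hypothetical stand-in for the learned deep policy: merge if the
    candidate's mean height (z) is close to the segment's mean height."""
    seg_mean = sum(p[2] for p in segment) / len(segment)
    cand_mean = sum(p[2] for p in candidate) / len(candidate)
    return abs(seg_mean - cand_mean) < 0.5

def group_superpoints(superpoints, policy):
    """Greedy episode: grow one object segment from the first superpoint
    by querying the policy for every remaining candidate."""
    segment = list(superpoints[0])
    remaining = []
    for candidate in superpoints[1:]:
        if policy(segment, candidate):
            segment.extend(candidate)    # merge action
        else:
            remaining.append(candidate)  # skip action
    return segment, remaining
```

In a DRL setting, the merge/skip decision would come from a trained neural network, and a reward reflecting segmentation quality would optimise that policy over many such episodes.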
April 2020
Point Cloud Segmentation with Deep Reinforcement Learning
In this contribution, the segmentation of point clouds is conducted with the help of deep reinforcement learning (DRL). We want to create interactive virtual reality (VR) environments from point cloud scans as fast as possible. These VR environments are used for secure and immersive training in serious real-life applications such as extinguishing a fire. It is necessary to segment the point cloud scans to create interactions in the VR environment.
September 2019
Iterative Prototyping of a Cut for a Finger Tracking Glove
Perceiving the movement of one's own fingers is important in VR. The sense of touch of the hands also provides crucial information about a touched object. This is especially noticeable when climbing in VR. While prototyping a VR climbing application, we developed the finger tracking glove g 1. Glove g 1 enables the perception of finger movements but limits the sense of touch of the hand.
September 2019
Remote Guidance for Machine Maintenance Supported by Physical LEDs and Virtual Reality
Machines that are used in industry often require dedicated technicians to fix them in case of defects. This involves travel expenses and a certain amount of time, both of which may be significantly reduced by installing small extensions on a machine, as we describe in this paper. The goal is that an authorized local worker, guided by a remote expert, can fix the problem on the real machine themselves.
September 2019
The Effects on Presence of Personalized and Generic Avatar Faces
With today’s technology it has become possible to generate and control personalized as well as authentic avatar faces in 3D for social Virtual Reality (VR) applications, as Lombardi et al. [LSSS18] have recently shown. Creating a personalized avatar with facial expressions is expensive in terms of time, computational power and hardware.
October 2018
War Children: Using AR in a Documentary Context
The goal of the project “War Children” is to tell stories of survivors of the Second World War by means of augmented reality. We want to make memories persistent, accessible and comprehensible to users who do not yet have access to these memories, e.g., digital natives. The application of immersive technologies provides us with new ways to tell stories about the past in an empathic way by augmenting the narration with audio-visual assets.
October 2018
Classification of Beyond-Natural Interaction Techniques in Spatial Human-Computer Interaction (Poster)
In the context of spatial user interfaces for virtual or augmented reality, many interaction techniques and metaphors are referred to as being (super-)natural, magical or hyper-real. However, many of these terms have not been defined properly, such that classification and distinction between those interfaces is often not possible. We propose a new classification system which can be used to identify those interaction techniques and relate them to reality-based and abstract interaction techniques.
October 2018
Examining effects of altered gravity direction in Room-Scale VR
In this paper, we present the development and results of an experiment in which virtual environments (VE) with altered gravity direction are traversed by real walking. We found similarities to redirected walking and pursued the goal of identifying thresholds for the extent to which shifted gravity can be applied in room-scale walking scenarios in VEs. From the results of our experiment, we give a first estimation of possible thresholds for this sort of application.
September 2018
The Expanded Body - Ergebnisse einer tänzerisch-künstlerischen Intervention
In this paper, we present the results of a one-week workshop aimed at evaluating the usefulness of alternative work processes for exploring artistic-scientific synergy potential in a dance context. The workshop brought together participants from the creative dance field and participants from the scientific-technical field. During the workshop, technologies such as motion capturing, virtual reality, biosensors and muscle stimulation were used, and a total of six different concepts for (and in part implementations of) installations, performances and collages were created.
September 2018
Walking on Sunshine – Dynamische Wetterparameter in Outdoor VR
The user experience of a VR application containing an Outdoor Virtual Environment (OVE) depends on criteria such as presence during interaction. When interacting with an OVE indoors, weather parameters can only be simulated. The immersion of the system is increased, however, when the user experiences the application outdoors and the natural conditions of the real environment are transferred into the virtual user experience.
