Basic Research

Publication
Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMDs). A significant portion of a participant’s face is hidden, and facial expressions are difficult to perceive. Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware. In this paper, we propose one of the first low-cost systems for this task, which uses only free, open-source software and affordable hardware.
Blog Post
Towards Face-to-Face Telepresence Applications in Virtual Reality with Generative Neural Nets
Philipp Ladwig gave a talk at the 17th GI VR/AR workshop about our research on telepresence in virtual reality applications. The associated paper, written by Philipp Ladwig and Alexander Pech under the guidance of Prof. Dr. Christian Geiger, received the "Best Paper" award.
Publication
Smart Object Segmentation to Enhance the Creation of Interactive Environments
The objective of our research is to enhance the creation of interactive environments such as those used in VR applications. An interactive environment can be produced from a point cloud that is acquired by 3D-scanning a certain scenery. Segmentation is needed to extract objects from that point cloud so that, e.g., certain physical properties can be applied to them in a further step. Doing this manually takes a lot of effort, as single objects have to be extracted and post-processed. Thus, our research aim is real-world, cross-domain, automatic semantic segmentation without the estimation of specific object classes.
Publication
Towards Face-to-Face Telepresence Applications in VR with Generative Neural Networks
Three-dimensional capture of faces with personal expressions for facial reconstruction under a head-mounted display.
Publication
Point Cloud Segmentation: Solving a Perceptual Grouping Task with Deep Reinforcement Learning
We propose a method to segment a real-world point cloud as a perceptual grouping task (PGT) by a deep reinforcement learning (DRL) agent. For the PGT, a point cloud is divided into groups of points, called superpoints. These superpoints should be grouped into objects by a deep neural network policy that is optimised by a DRL algorithm.
Publication
Point Cloud Segmentation with Deep Reinforcement Learning
In this contribution, the segmentation of point clouds is conducted with the help of deep reinforcement learning (DRL). We want to create interactive virtual reality (VR) environments from point cloud scans as quickly as possible. These VR environments are used for safe and immersive training of serious real-life applications, such as extinguishing a fire. It is necessary to segment the point cloud scans to create interactions in the VR environments.
Blog Post
Personalized Avatars Generated by a Neural Network for Telepresence and Mixed Reality
Skype, Facetime and similar apps have become an integral part of our everyday life. However, the digital meeting does not compare to the real, physical one. Both spatial perception and non-verbal communication are limited by the current digital channels.
Blog Post
3D Scanner for Mixed Reality
Being able to teleport people and objects or even entire rooms is a long-cherished dream and has been shown in many science fiction movies. Thanks to innovative technologies, it is now possible to scan rooms and people as well as to digitize and “teleport” them by sending the scans to a remote location.
Blog Post
Finger Tracking
Gesturing with arms, hands and fingers is a natural part of conversations. That is why we have developed a system that is able to track finger movement with high accuracy. This allows users of mixed reality to have a virtual conversation without having to forgo complex gestures.
Publication
Iterative Prototyping of a Cut for a Finger Tracking Glove
Perceiving the movement of one's own fingers is important in VR. The hands' sense of touch also provides crucial information about a touched object. This is especially noticeable when climbing in VR. While prototyping a VR climbing application, we developed the finger tracking glove g1. Glove g1 enables the perception of finger movements but limits the hand's sense of touch.
Publication
Remote Guidance for Machine Maintenance Supported by Physical LEDs and Virtual Reality
Machines that are used in industry often require dedicated technicians to fix them in case of defects. This involves travel expenses and a certain amount of time, both of which may be significantly reduced by installing small extensions on a machine, as we describe in this paper. The goal is that an authorized local worker, guided by a remote expert, can fix the problem on the real machine themselves.
Publication
The Effects on Presence of Personalized and Generic Avatar Faces
With today’s technology it has become possible to generate and control personalized as well as authentic avatar faces in 3D for social Virtual Reality (VR) applications, as Lombardi et al. [LSSS18] have recently shown. Creating a personalized avatar with facial expressions is expensive in terms of time, computational power and hardware.
Blog Post
Facetracking VR
When people talk to each other, they do not only use the various facets of language and voice to convey information, but also their facial expressions. The latter are an essential part of our social interaction, and communication in virtual reality (VR) has so far lacked this factor entirely. That is why we explore the possibilities of face tracking technology in this project.
Publication
Examining effects of altered gravity direction in Room-Scale VR
In this paper, we present the development and results of an experiment in which virtual environments (VEs) with altered gravity direction are traversed by real walking. We found similarities to redirected walking and pursued the goal of identifying thresholds for the extent to which shifted gravity can be applied in room-scale walking scenarios in VEs. From the results of our experiment, we give a first estimate of possible thresholds for this sort of application.
Publication
Dynamic Movement Monitoring - Algorithms for Real Time Exercise Movement Feedback
Following the implantation of an artificial knee joint, patients have to perform rehabilitation exercises at home. The motivation to exercise can be low, and if the exercises are not executed, an extended rehabilitation time or a follow-up operation may be required. Moreover, incorrect exercise executions over a long period can lead to injuries. Therefore, we present two Programming by Demonstration (PbD) algorithms, a Nearest-Neighbour (NN) model and the Alpha Algorithm (AlpAl), for measuring the quality of exercise executions, which can be used to give feedback in exergames.
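The nearest-neighbour idea can be sketched minimally as follows. This is an illustrative assumption, not the paper's NN model or the Alpha Algorithm: correct reference executions are recorded once, and a new execution is scored by its distance to the closest reference after resampling to a common length. The function names (`resample`, `nn_quality`) and the toy one-dimensional "knee angle" trajectories are hypothetical.

```python
# Illustrative sketch of a nearest-neighbour quality measure (not the
# paper's implementation): a new exercise execution is scored by its
# distance to the closest of several correct reference executions.
import numpy as np

def resample(seq, n=50):
    """Linearly resample a (frames, joints) trajectory to n frames."""
    seq = np.asarray(seq, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(seq))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, seq[:, j])
                     for j in range(seq.shape[1])], axis=1)

def nn_quality(execution, references, n=50):
    """Mean per-frame distance to the nearest reference execution;
    smaller values indicate a more correct execution."""
    x = resample(execution, n)
    dists = [np.linalg.norm(x - resample(r, n)) / n for r in references]
    return min(dists)

# Toy 1-D "knee angle" trajectories: two correct demonstrations of
# different durations, and one execution that bends only 40 % as deep.
good = [np.sin(np.linspace(0, np.pi, m))[:, None] for m in (40, 60)]
shallow = 0.4 * np.sin(np.linspace(0, np.pi, 55))[:, None]
print(nn_quality(good[0], good) < nn_quality(shallow, good))  # → True
```

In an exergame, such a score could be thresholded to give the patient immediate feedback after each repetition; the paper's algorithms operate on richer demonstration data than this sketch assumes.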