Since we are part of the media department at the University of Applied Sciences Düsseldorf (HSD), research is one of the most important aspects of our work. Mixed reality technology is a rather young field of research that is constantly expanding and changing, which is why there are a great number of intriguing topics to explore and questions to ask. Here we collect and present a selection of our recent and previous scientific publications.
By presenting the process of developing an augmented reality (AR) installation, the paper explores the transformative potential of AR as a mediation tool in the context of a multifaceted socio-cultural issue. Focused on the removal of a 120-year-old Atlas cedar from a municipal park in the German town of Ratingen, the study employs a research through design (RtD) approach, combining qualitative field research and digital documentation. The AR application serves as a platform for location-based storytelling, aiming to foster dialogue among citizens, scientists and city officials through a multi-perspective approach. As the project envisions a future where AR serves as a potent tool for mediating intricate societal challenges, this paper adds to the ongoing discourse regarding the convergence of technology, nature, and public engagement.
Patrick Kruse, Ivana Družetić-Vogel, Anja Vormann, Christian Geiger. 2024. AR Poensgenpark: Multiperspective Storytelling with Augmented Reality as an Attempt of Dialogue Facilitation in a Multifaceted Dispute. International Symposium on Electronic Art.
The creation of interactive virtual reality (VR) applications from 3D scanned content usually includes a lot of manual and repetitive work. Our research aim is to develop agents that recognize objects to enhance the creation of interactive VR applications. We trained partition agents in our superpoint growing environment that we extended with an expert function. This expert function solves the sparse reward signal problem of the previous approaches and enables the use of a variant of imitation learning and deep reinforcement learning with dense feedback. Additionally, the function allows the calculation of a performance metric for the degree of imitation for different partitions. Furthermore, we introduce an environment to optimize the superpoint generation. We trained our agents with 1182 scenes of the ScanNet data set. More specifically, we trained different neural network architectures with 1170 scenes and tested their performance with 12 scenes. Our intermediate results are promising, such that our partition system might be able to assist the VR application development from 3D scanned content in the near future.
Marcel Tiator, Anna Maria Kerkmann, Christian Geiger and Paul Grimm. 2021. US2RO: Union of Superpoints to Recognize Objects. International Journal of Semantic Computing, 15(4), 513–537. DOI: https://doi.org/10.1142/S1793351X21400146
@article{Tiator2021,
abstract = {The creation of interactive virtual reality (VR) applications from 3D scanned content usually includes a lot of manual and repetitive work. Our research aim is to develop agents that recognize objects to enhance the creation of interactive VR applications. We trained partition agents in our superpoint growing environment that we extended with an expert function. This expert function solves the sparse reward signal problem of the previous approaches and enables to use a variant of imitation learning and deep reinforcement learning with dense feedback. Additionally, the function allows to calculate a performance metric for the degree of imitation for different partitions. Furthermore, we introduce an environment to optimize the superpoint generation. We trained our agents with 1182 scenes of the ScanNet data set. More specifically, we trained different neural network architectures with 1170 scenes and tested their performance with 12 scenes. Our intermediate results are promising such that our partition system might be able to assist the VR application development from 3D scanned content in near future.},
author = {Tiator, Marcel and Kerkmann, Anna Maria and Geiger, Christian and Grimm, Paul},
doi = {10.1142/S1793351X21400146},
issn = {1793-351X},
journal = {International Journal of Semantic Computing},
month = {dec},
number = {04},
pages = {513--537},
title = {{US2RO: Union of Superpoints to Recognize Objects}},
url = {https://www.worldscientific.com/doi/abs/10.1142/S1793351X21400146},
volume = {15},
year = {2021}
}
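To give a concrete impression of what a degree-of-imitation score for partitions can look like, the following minimal Python sketch compares a predicted superpoint-to-object assignment against an expert partition using a best-match intersection-over-union. It is an illustrative stand-in, not the expert function or metric defined in the paper; the array layout and example values are assumptions.

# Illustrative sketch only: the paper's actual expert function and metric are not
# reproduced here. This stand-in scores a predicted partition against an expert
# (ground-truth) partition with a simple best-match intersection-over-union.
import numpy as np

def partition_iou(pred_labels: np.ndarray, gt_labels: np.ndarray) -> float:
    """Mean best-match IoU between predicted and ground-truth segments."""
    scores = []
    for gt_id in np.unique(gt_labels):
        gt_mask = gt_labels == gt_id
        best = 0.0
        for pred_id in np.unique(pred_labels[gt_mask]):
            pred_mask = pred_labels == pred_id
            inter = np.count_nonzero(gt_mask & pred_mask)
            union = np.count_nonzero(gt_mask | pred_mask)
            best = max(best, inter / union)
        scores.append(best)
    return float(np.mean(scores))

# Example: six points, two ground-truth objects, an imperfect prediction.
gt = np.array([0, 0, 0, 1, 1, 1])
pred = np.array([0, 0, 1, 1, 1, 1])
print(partition_iou(pred, gt))  # ~0.71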
The performative art installation "is a rose" uses plants as natural interfaces in order to establish a direct relation between natural and technological systems. The installation is used to visualize digital-physical interaction that is not necessarily explicit – triggered by touch or air movement, by direct and non-direct manipulation – depicting the sum of all interactions over the course of the day.
Charlotte Triebus, Ivana Druzetic, Bastian Dewitz, Calvin Huhn, Paul Kretschel, and Christian Geiger. 2021. Is a rose – A Performative Installation between the Tangible and the Digital. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '21). Association for Computing Machinery, New York, NY, USA, Article 80, 1–4. DOI:https://doi.org/10.1145/3430524.3444640
@inproceedings{10.1145/3430524.3444640,
author = {Triebus, Charlotte and Druzetic, Ivana and Dewitz, Bastian and Huhn, Calvin and Kretschel, Paul and Geiger, Christian},
title = {Is a Rose – A Performative Installation between the Tangible and the Digital},
year = {2021},
isbn = {9781450382137},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3430524.3444640},
doi = {10.1145/3430524.3444640},
abstract = {The performative art installation is a rose uses plants as natural interfaces in order to establish a direct relation between natural and technological systems. The installation is used to visualize digital-physical interaction that is not necessarily explicit – triggered by touch or air movement, by direct and non-direct manipulation, depicting the sum of all interactions during the course of the day. The technical realization consists of detection of the movement of plants caused by the movements in their immediate vicinity and the subsequent deformation of a computer-generated sphere. The paper is explaining several different layers of meanings the artist was motivated by when developing the artwork.},
booktitle = {Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction},
articleno = {80},
numpages = {4},
keywords = {Performative Installation, Agency, Tangible and Body-Centered Interaction, Plants},
location = {Salzburg, Austria},
series = {TEI '21}
}
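The abstract mentions that detected plant movement drives the deformation of a computer-generated sphere. The following Python sketch illustrates one possible shape of that mapping, accumulating a scalar motion signal over time and pushing sphere vertices outward; the sensing side, the deformation function and all parameter values are assumptions and not taken from the installation's actual implementation.

# A minimal sketch of the mapping described in the abstract: a scalar "plant
# motion" signal deforms a computer-generated sphere. The deformation function
# and all parameter values are placeholder assumptions.
import numpy as np

def fibonacci_sphere(n: int = 500) -> np.ndarray:
    """Roughly uniform points on a unit sphere."""
    i = np.arange(n)
    phi = np.arccos(1 - 2 * (i + 0.5) / n)
    theta = np.pi * (1 + 5**0.5) * i
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def deform(vertices: np.ndarray, motion: float, t: float) -> np.ndarray:
    """Push vertices outward along their normals, modulated by a slow spatial wave."""
    normals = vertices / np.linalg.norm(vertices, axis=1, keepdims=True)
    wave = 0.5 * (1 + np.sin(3 * vertices[:, 2] + t))
    return vertices + normals * (0.2 * motion * wave)[:, None]

sphere = fibonacci_sphere()
accumulated_motion = 0.0
for frame, detected_motion in enumerate([0.1, 0.4, 0.0, 0.8]):  # stand-in sensor values
    accumulated_motion += detected_motion                        # "sum of all interactions"
    deformed = deform(sphere, accumulated_motion, t=frame * 0.1)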
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMD). A significant portion of a participant’s face is hidden and facial expressions are difficult to perceive. Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware. In this paper, we propose one of the first low-cost systems for this task which uses only open source, free software and affordable hardware.
P. Ladwig, A. Pech, R. Dorner and C. Geiger. (2020). Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence. In: IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). DOI: 10.1109/AIVR50618.2020.00025.
@INPROCEEDINGS {9319106,
author = {P. Ladwig and A. Pech and R. Dorner and C. Geiger},
booktitle = {2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)},
title = {Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence},
year = {2020},
volume = {},
issn = {},
pages = {82-90},
keywords = {faces;avatars;resists;image reconstruction;gallium nitride;three-dimensional displays;training},
doi = {10.1109/AIVR50618.2020.00025},
url = {https://doi.ieeecomputersociety.org/10.1109/AIVR50618.2020.00025},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
month = {dec}
}
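As a rough illustration of the underlying idea of reconstructing the face region hidden by the HMD, the following PyTorch sketch feeds a masked face image plus the mask into a small encoder-decoder and penalizes the reconstruction only inside the hidden region. The architecture, loss and image sizes are placeholder assumptions; the paper's actual low-cost system and its open-source components are not reproduced here.

# Not the authors' network: a toy encoder-decoder that inpaints an assumed HMD region.
import torch
import torch.nn as nn

class HMDInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.ReLU(),   # masked RGB + mask channel
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked_rgb, mask):
        return self.net(torch.cat([masked_rgb, mask], dim=1))

model = HMDInpainter()
face = torch.rand(1, 3, 128, 128)          # stand-in face image
mask = torch.zeros(1, 1, 128, 128)
mask[:, :, 20:70, 24:104] = 1.0            # assumed HMD region (upper face)
masked = face * (1 - mask)

reconstruction = model(masked, mask)
loss = nn.functional.l1_loss(reconstruction * mask, face * mask)  # only score hidden pixels
loss.backward()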
Advancing technology allows new forms of contemporary artistic expression which, however, require a large set of skills to be developed and therefore involve a team with diverse backgrounds. In this paper, we present implementation details and the artistic background of the art piece "is a rose" that was developed and exhibited in 2019. Based on this example and our previous experience of working on different art applications, we provide an insight into the interdisciplinary work between artists and developers.
The objective of our research is to enhance the creation of interactive environments such as VR applications. An interactive environment can be produced from a point cloud that is acquired by 3D scanning a certain scenery. Segmentation is needed to extract objects from that point cloud in order to, e.g., apply certain physical properties to them in a further step. It takes a lot of effort to do this manually, as single objects have to be extracted and post-processed. Thus, our research aim is real-world, cross-domain, automatic semantic segmentation without the estimation of specific object classes.
M. Tiator. (2020). Smart Object Segmentation to Enhance the Creation of Interactive Environments. In: 1st Doctoral Consortium at the European Conference on Artificial Intelligence - DC-ECAI '20.
@inproceedings{Tiator2020c,
address = {Santiago de Compostela, Spain},
author = {Tiator, Marcel},
booktitle = {1st Doctoral Consortium at the European Conference on Artificial Intelligence - DC-ECAI '20},
editor = {{Alonso Moral}, Jos{\'{e}} Mar{\'{i}}a and Cort{\'{e}}s, Ulises},
pages = {21 -- 23},
title = {{Smart Object Segmentation to Enhance the Creation of Interactive Environments}},
url = {https://minerva.usc.es/xmlui/handle/10347/23263},
year = {2020}
}
P. Ladwig, A. Pech and C. Geiger. (2020). Auf dem Weg zu Face-to-Face-Telepräsenzanwendungen in Virtual Reality mit generativen neuronalen Netzen. In: Weyers, B., Lürig, C. & Zielasko, D. (Eds.), GI VR / AR Workshop. Gesellschaft für Informatik e.V. DOI: 10.18420/vrar2020_15.
@inproceedings{mci/Ladwig2020,
author = {Ladwig, Philipp AND Pech, Alexander AND Geiger, Christian},
title = {Auf dem Weg zu Face-to-Face-Telepräsenzanwendungen in Virtual Reality mit generativen neuronalen Netzen},
booktitle = {GI VR / AR Workshop},
year = {2020},
editor = {Weyers, Benjamin AND Lürig, Christoph AND Zielasko, Daniel} ,
doi = { 10.18420/vrar2020_15 },
publisher = {Gesellschaft für Informatik e.V.},
address = {}
}
There is a substantial number of body tracking systems (BTS), which cover a wide variety of technologies, quality levels and price ranges for character animation, dancing or gaming. To the disadvantage of developers and artists, almost every BTS streams out different protocols and tracking data. Not only do they vary in terms of scale and offset, but their skeletal data also differs in rotational offsets between joints and in the overall number of bones.
P. Ladwig, K. Evers, E. J. Jansen, B. Fischer, D. Nowottnik, and C. Geiger. (2020). MotionHub: Middleware for Unification of Multiple Body Tracking Systems. In: Proceedings of the 7th International Conference on Movement and Computing (MOCO '20). Association for Computing Machinery, New York, NY, USA, Article 1, 1–8. DOI: 10.1145/3401956.3404185.
@inproceedings{Ladwig-MotionHub-2020,
author = {Ladwig, Philipp and Evers, Kester and Jansen, Eric J. and Fischer, Ben and Nowottnik, David and Geiger, Christian},
title = {MotionHub: Middleware for Unification of Multiple Body Tracking Systems},
year = {2020},
isbn = {9781450375054},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3401956.3404185},
doi = {10.1145/3401956.3404185},
abstract = {There is a substantial number of body tracking systems (BTS), which cover a wide variety of different technology, quality and price range for character animation, dancing or gaming. To the disadvantage of developers and artists, almost every BTS streams out different protocols and tracking data. Not only do they vary in terms of scale and offset, but also their skeletal data differs in rotational offsets between joints and in the overall number of bones. Due to this circumstance, BTSs are not effortlessly interchangeable. Usually, software that makes use of a BTS is rigidly bound to it, and a change to another system can be a complex procedure. In this paper, we present our middleware solution MotionHub, which can receive and process data of different BTS technologies. It converts the spatial as well as the skeletal tracking data into a standardized format in real time and streams it to a client (e.g. a game engine). That way, MotionHub ensures that a client always receives the same skeletal-data structure, irrespective of the used BTS. As a simple interface enabling the user to easily change, set up, calibrate, operate and benchmark different tracking systems, the software targets artists and technicians. MotionHub is open source, and other developers are welcome to contribute to this project.},
booktitle = {Proceedings of the 7th International Conference on Movement and Computing},
articleno = {1},
numpages = {8},
keywords = {Azure Kinect, middleware, Body tracking, motion capture, skeletal data, OptiTrack},
location = {Jersey City/Virtual, NJ, USA},
series = {MOCO '20}
}
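MotionHub itself is an open-source C++ middleware; the following Python sketch only illustrates the kind of per-tracker normalization described above: mapping vendor-specific joint names onto one canonical skeleton and removing scale and offset differences. The joint names, scale factor and offsets below are invented for illustration and are not MotionHub's actual data model.

# Hedged sketch of body-tracking-system unification; all names and values are assumptions.
from dataclasses import dataclass

@dataclass
class Joint:
    name: str          # canonical joint name
    position: tuple    # metres, assumed right-handed, y-up convention

CANONICAL = ["hips", "spine", "head", "hand_left", "hand_right"]

class TrackerAdapter:
    """Converts one vendor's raw stream into the canonical skeleton."""
    def __init__(self, name_map, scale, offset):
        self.name_map, self.scale, self.offset = name_map, scale, offset

    def convert(self, raw: dict) -> list[Joint]:
        joints = []
        for vendor_name, canonical_name in self.name_map.items():
            x, y, z = raw[vendor_name]
            joints.append(Joint(canonical_name,
                                (x * self.scale + self.offset[0],
                                 y * self.scale + self.offset[1],
                                 z * self.scale + self.offset[2])))
        return joints

# Hypothetical tracker that reports millimetres with 'PELVIS'/'HEAD' naming.
adapter = TrackerAdapter(
    name_map={"PELVIS": "hips", "HEAD": "head"},
    scale=0.001,                   # mm -> m
    offset=(0.0, 0.0, 0.0),
)
frame = {"PELVIS": (12.0, 980.0, -40.0), "HEAD": (15.0, 1710.0, -35.0)}
print(adapter.convert(frame))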
We propose a method to segment a real-world point cloud as a perceptual grouping task (PGT) by a deep reinforcement learning (DRL) agent. A point cloud is divided into groups of points, named superpoints, for the PGT. These superpoints should be grouped into objects by a deep neural network policy that is optimised by a DRL algorithm.
M. Tiator, C. Geiger and P. Grimm. (2020). Point Cloud Segmentation: Solving a Perceptual Grouping Task with Deep Reinforcement Learning. In: Proceedings of the 17th Workshop of Virtual and Augmented Reality - GI VR/AR '20. DOI: 10.18420/vrar2020_28.
@inproceedings{Tiator2020a,
address = {Trier, Germany},
author = {Tiator, Marcel and Geiger, Christian and Grimm, Paul},
booktitle = {Proceedings of the 17th Workshop of Virtual and Augmented Reality - GI VR/AR '20},
doi = {10.18420/vrar2020_28},
publisher = {Gesellschaft f{\"{u}}r Informatik e.V.},
title = {{Point Cloud Segmentation: Solving a Perceptual Grouping Task with Deep Reinforcement Learning}},
url = {http://dx.doi.org/10.18420/vrar2020{\_}28},
year = {2020}
}
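To make the grouping loop described above more tangible, the following toy Python sketch merges superpoints into objects with a sequence of pairwise decisions. Where the paper uses a deep neural network policy trained with DRL, this sketch substitutes a simple feature-distance threshold; the features, threshold and merge order are assumptions for illustration only.

# Toy sketch: superpoints are merged into objects by pairwise decisions.
# A placeholder policy stands in for the learned DRL policy.
import numpy as np

rng = np.random.default_rng(0)
# Four superpoints, each described by a mean feature vector (e.g. colour + normal).
superpoints = {i: rng.normal(size=3) for i in range(4)}
superpoints[1] = superpoints[0] + 0.01     # make superpoints 0 and 1 clearly similar

groups = {i: {i} for i in superpoints}     # start: every superpoint is its own object

def policy(feat_a, feat_b, threshold=0.1):
    """Placeholder for the learned policy: merge if features are similar."""
    return np.linalg.norm(feat_a - feat_b) < threshold

for a in list(superpoints):
    for b in list(superpoints):
        if a < b and a in groups and b in groups and policy(superpoints[a], superpoints[b]):
            groups[a] |= groups.pop(b)      # merge b's group into a's

print(groups)   # superpoints 0 and 1 end up in one object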
The segmentation of point clouds is conducted with the help of deep reinforcement learning (DRL) in this contribution. We want to create interactive virtual reality (VR) environments from point cloud scans as fast as possible. These VR environments are used for safe and immersive training in serious real-life applications such as extinguishing a fire. It is necessary to segment the point cloud scans to create interactions in the VR environment.
M. Tiator, C. Geiger and P. Grimm. (2020). Point Cloud Segmentation with Deep Reinforcement Learning. In: Proceedings of the 24th European Conference on Artificial Intelligence - ECAI '20. DOI: 10.13140/RG.2.2.30674.50885.
@inproceedings{Tiator2020,
address = {Santiago de Compostela, Spain},
author = {Tiator, Marcel and Geiger, Christian and Grimm, Paul},
booktitle = {Proceedings of the 24th European Conference on Artificial Intelligence - ECAI '20},
publisher = {IOS Press},
title = {{Point Cloud Segmentation with Deep Reinforcement Learning}},
url = {http://ecai2020.eu/papers/1345{\_}paper.pdf},
year = {2020}
}
The perception of one's own finger movements is important in VR. The sense of touch of the hands also provides crucial information about an object that is touched. This is especially noticeable when climbing in VR. While prototyping a VR climbing application, we developed the finger tracking glove g 1. Glove g 1 enables the perception of finger movements but limits the hand's sense of touch.
E. Hanak, A.-M. Heuer, B. Fischer, M. Tiator and C. Geiger. (2019). Iterative Prototyping of a Cut for a Finger Tracking Glove. In: GI VR/AR Workshop.
@article{Hanak2019,
author = {Hanak, Eva and Heuer, Anna-Maria and Fischer, Ben and Tiator, Marcel and Geiger, Christian},
journal = {GI VR/AR Workshop},
keywords = {glove,iterative prototyping,vr},
title = {{Iterative Prototyping of a Cut for a Finger Tracking Glove}},
year = {2019}
}
Machines that are used in industry often require dedicated technicians to fix them in case of defects. This involves travel expenses and a certain amount of time, both of which may be significantly reduced by installing small extensions on a machine, as we describe in this paper. The goal is that an authorized local worker, guided by a remote expert, can fix the problem on the real machine himself.
P. Ladwig, B. Dewitz, H. Preu and M. Säger. (2019). Remote Guidance for Machine Maintenance Supported by Physical LEDs and Virtual Reality. In: MuC'19: Proceedings of Mensch und Computer 2019. DOI: 10.1145/3340764.3340780.
@inproceedings{Ladwig:2019:RGM:3340764.3340780,
author = {Ladwig, Philipp and Dewitz, Bastian and Preu, Hendrik and S\"{a}ger, Mitja},
title = {Remote Guidance for Machine Maintenance Supported by Physical LEDs and Virtual Reality},
booktitle = {Proceedings of Mensch Und Computer 2019},
series = {MuC'19},
year = {2019},
isbn = {978-1-4503-7198-8},
location = {Hamburg, Germany},
pages = {255--262},
numpages = {8},
url = {http://doi.acm.org/10.1145/3340764.3340780},
doi = {10.1145/3340764.3340780},
acmid = {3340780},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Collaboration, Industry 4.0, LED, light emitting diodes, remote machine maintenance, virtual reality},
}
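As a minimal illustration of how a remote expert's selection could switch on a physical LED on the machine, the following Python sketch sends a small UDP message to a hypothetical LED controller. The message format, host address, port and LED numbering are assumptions; the paper's actual hardware extension and protocol are not reproduced here.

# Hedged sketch only: a fire-and-forget UDP command to an assumed LED controller.
import json
import socket

LED_FOR_PART = {"main_valve": 3, "filter_housing": 7, "reset_button": 12}   # hypothetical mapping

def highlight_part(part: str, host: str = "192.168.0.42", port: int = 5005) -> None:
    """Send a command to light the LED mounted next to the selected machine part."""
    message = json.dumps({"led": LED_FOR_PART[part], "state": "on"}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (host, port))

# Called when the remote expert clicks the valve in the VR scene.
highlight_part("main_valve")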
With today’s technology it has become possible to generate and control personalized as well as authentic avatar faces in 3D for social Virtual Reality (VR) applications, as Lombardi et al. [LSSS18] have recently shown. Creating a personalized avatar with facial expressions is expensive in terms of time, computational power and hardware.
The goal of the project “War Children” is to tell stories of survivors of the Second World War by means of augmented reality. We want to make memories persistent, accessible and comprehensible to users who do not yet have access to these memories, e.g., digital natives.
The application of immersive technologies provides us with new ways to tell stories about the past in an empathic way by augmenting the narration with audio-visual assets.
17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich
Authors:
Chris Zimmer, Nanette Ratz, Michael Bertram, and Christian Geiger
C. Zimmer, N. Ratz, M. Bertram and C. Geiger. (2018). War Children: Using AR in a Documentary Context. In: 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). DOI: 10.1109/ISMAR-Adjunct.2018.00112.
@INPROCEEDINGS{8699203,
author={C. {Zimmer} and N. {Ratz} and M. {Bertram} and C. {Geiger}},
booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
title={War Children: Using AR in a Documentary Context},
year={2018},
volume={},
number={},
pages={390-394},
doi={10.1109/ISMAR-Adjunct.2018.00112}}
In the context of spatial user interfaces for virtual or augmented reality, many interaction techniques and metaphors are referred to as being (super-)natural, magical or hyper-real. However, many of these terms have not been defined properly, such that a classification and distinction between those interfaces is often not possible. We propose a new classification system which can be used to identify those interaction techniques and relate them to reality-based and abstract interaction techniques.
6th ACM Symposium on Spatial User Interaction (SUI 2018), Berlin
Authors:
Bastian Dewitz, Philipp Ladwig, Frank Steinicke, and Christian Geiger
B. Dewitz, P. Ladwig, F. Steinicke, and C. Geiger. (2018). Classification of Beyond-Reality Interaction Techniques in Spatial Human-Computer Interaction. In: Proceedings of the Symposium on Spatial User Interaction (SUI '18). DOI: 10.1145/3267782.3274680.
@inproceedings{10.1145/3267782.3274680,
author = {Dewitz, Bastian and Ladwig, Philipp and Steinicke, Frank and Geiger, Christian},
title = {Classification of Beyond-Reality Interaction Techniques in Spatial Human-Computer Interaction},
year = {2018},
isbn = {9781450357081},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3267782.3274680},
doi = {10.1145/3267782.3274680},
abstract = {In the context of spatial user interfaces for virtual or augmented reality, many interaction techniques and metaphors are referred to as being (super-)natural, magical or hyper-real. However, many of these terms have not been defined properly, such that a classification and distinction between those interfaces is often not possible. We propose a new classification system which can be used to identify those interaction techniques and relate them to reality-based and abstract interaction techniques.},
booktitle = {Proceedings of the Symposium on Spatial User Interaction},
pages = {185},
numpages = {1},
keywords = {Hyper-natural, Magical Interaction Techniques, Natural User Interfaces, Spatial Interaction, Classification, Super-natural},
location = {Berlin, Germany},
series = {SUI '18}
}