To achieve this, precise positioning and tracking of the virtual content is required. In addition, user input must be natural and easy to perform. The HoloLens is a pair of AR glasses equipped with a semitransparent display, so the user sees the virtual content as well as the real environment. Using built-in sensors such as a depth camera and an inertial measurement unit, the HoloLens can scan its environment and place virtual content in relation to real objects. Graduates of the Experience Lab are commemorated on the Victory Pillar: a vintage-style picture of each student and their examiners is taken and mounted on a pillar in the lab. Part of each thesis presentation is a short video demonstrating the topic the student worked on. This video is used to augment the picture of the student, together with the student's name and the title of the thesis.

First, the application has to be calibrated. During this process, the virtual content is mapped to the real pictures using the image target tracking of the Vuforia SDK(2). After the content has been positioned successfully, Vuforia is turned off and the spatial tracking of the HoloLens takes over, continuing to track the scene in real space. This handoff is performed because the spatial tracking of the HoloLens is much more stable and robust than continuous use of Vuforia image tracking. After calibration, the user can freely explore the AR experience and examine individual graduates and their projects. The augmented video for each picture plays once the user triggers it, either by looking at the picture and performing the air-tap gesture or by using simple voice commands. As one would expect, the virtual content is occluded by the real pillar, which is achieved with custom-aligned occlusion masks. A well-received feature is the use of spatial audio: it strengthens engagement because auditory feedback lets the user locate the digital content while walking around the pillar or turning the head.
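The calibration handoff described above can be sketched as a small state machine: the image target is used exactly once to align the content, after which the pose is frozen and image tracking is disabled. The actual application is built with Unity and the Vuforia and HoloLens SDKs, so every class and method name below is a hypothetical illustration, not the real API:

```python
from enum import Enum, auto

class TrackingState(Enum):
    CALIBRATING = auto()  # Vuforia image target tracking is active
    ANCHORED = auto()     # HoloLens spatial tracking has taken over

class CalibrationHandoff:
    """Hypothetical sketch: align virtual content to a detected image
    target once, then rely on the more stable spatial tracking."""

    def __init__(self):
        self.state = TrackingState.CALIBRATING
        self.content_pose = None  # pose of the virtual content in world space

    def on_image_target_detected(self, target_pose):
        # Only the first successful detection matters: freeze the pose
        # and switch to spatial tracking, since continuous image
        # tracking is less stable than the HoloLens's own tracking.
        if self.state is TrackingState.CALIBRATING:
            self.content_pose = target_pose
            self.state = TrackingState.ANCHORED

    def is_calibrated(self):
        return self.state is TrackingState.ANCHORED
```

Once `ANCHORED`, further image-target detections are ignored, which mirrors the paper's design of turning Vuforia off after calibration.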

Besides the stationary version of the Victory Pillar, which requires only one image target for calibration, we provide a mobile version that can be used to show the application at fairs.

For the mobile version, we designed a custom image target marker for each student, which is used to place the virtual content for each picture individually.
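Conceptually, the mobile version needs a lookup from each student's custom marker to the content shown for that picture (name, thesis title, video). A minimal sketch, where all IDs and field values are illustrative placeholders rather than real data:

```python
# Hypothetical sketch: each recognized marker ID resolves to the
# content that augments the corresponding picture. The entries below
# are placeholders, not the actual graduates' data.
GRADUATE_CONTENT = {
    "marker_01": {"name": "Student A", "thesis": "Thesis A", "video": "a.mp4"},
    "marker_02": {"name": "Student B", "thesis": "Thesis B", "video": "b.mp4"},
}

def content_for_marker(marker_id):
    """Return the augmentation content for a recognized marker,
    or None if the marker is unknown."""
    return GRADUATE_CONTENT.get(marker_id)
```

Because each marker resolves independently, the pictures can be placed in any arrangement at a fair without recalibrating the whole installation.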




Prof. Dr. Christian Geiger, Isabell Pötschke, Sonja Schmickler, Michael Bertram, Fabian Büntig