About me

My name is Martin Feick, and I am a PhD student in the Cognitive Assistants Department at the German Research Center for Artificial Intelligence (DFKI) and the Ubiquitous Media Technology Lab (UMTL) at Saarland University.

I completed my Master's thesis research in Human-Computer Interaction (HCI) at University College London, UCLIC (United Kingdom). During my Master's studies, I also worked part-time as an HCI researcher in the HCI lab at Saarland University. Before that, I spent six months in the Interactions Lab at the University of Calgary (Canada) writing my Bachelor's thesis. I hold a Master's and a Bachelor's degree in Applied Computer Science from the Saarland University of Applied Sciences.


My HCI research interests lie in designing and developing technology to support object-centred interaction and collaboration. I combine virtual and mixed reality, fabrication, and robotics to create novel interfaces that enable more efficient and natural interaction.

Recent Projects

The Virtual Reality Questionnaire Toolkit: In this work, we present the VRQuestionnaireToolkit, which enables the research community to easily collect subjective measures within virtual reality (VR). We contribute a highly customizable and reusable open-source toolkit that can be rapidly integrated into existing VR projects. The toolkit comes with a pre-installed set of standard questionnaires such as the NASA TLX, the SSQ, and the SUS presence questionnaire. Our system aims to lower the entry barrier to using questionnaires in VR and to significantly reduce the development time and cost needed to run pre-, mid-, and post-study questionnaires.
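Scoring for such standard questionnaires is well defined; for instance, the raw ("unweighted") NASA-TLX workload is simply the mean of its six subscale ratings. The sketch below illustrates that scoring step in Python; it is a generic illustration and does not reflect the VRQuestionnaireToolkit's actual API.

```python
# Raw (unweighted) NASA-TLX: the mean of six subscale ratings (0-100 each).
# Generic illustration only -- not the VRQuestionnaireToolkit's API.

SUBSCALES = ("mental", "physical", "temporal",
             "performance", "effort", "frustration")

def raw_tlx(ratings: dict) -> float:
    """Return the raw NASA-TLX workload score for one participant."""
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 30, "effort": 60, "frustration": 45}
print(round(raw_tlx(ratings), 2))  # 46.67
```

A toolkit like this one can then hand such per-participant scores directly to the researcher's analysis pipeline instead of requiring manual transcription from paper forms.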

TanGi: Tangible Proxies For Embodied Object Exploration And Manipulation In Virtual Reality: Exploring and manipulating complex virtual objects is challenging due to limitations of conventional controllers and free-hand interaction techniques. We present the TanGi toolkit, which enables novices to rapidly build physical proxy objects using Composable Shape Primitives. TanGi also provides Manipulators that allow users to build objects with movable parts, making them suitable for rich object exploration and manipulation in VR. With a set of different use cases and applications, we show the capabilities of the TanGi toolkit and evaluate its use. In a study with 16 participants, we demonstrate that novices can quickly build physical proxy objects using the Composable Shape Primitives, and we explore how different levels of object embodiment affect virtual object exploration. In a second study with 12 participants, we evaluate TanGi's Manipulators and investigate the effectiveness of embodied interaction. Findings from this study show that TanGi's proxies outperform traditional controllers and were generally favored by participants.

Tactlets: Adding Tactile Feedback to 3D Objects Using Custom Printed Controls: Rapid prototyping of haptic output on 3D objects promises to enable a more widespread use of the tactile channel for ubiquitous, tangible, and wearable computing. Existing prototyping approaches, however, have limited tactile output capabilities, require advanced skills for design and fabrication, or are incompatible with curved object geometries. In this paper, we present a novel digital fabrication approach for printing custom, high-resolution controls for electro-tactile output with integrated touch sensing on interactive objects. It supports curved geometries of everyday objects. We contribute a design tool for modeling, testing, and refining tactile input and output at a high level of abstraction, based on parameterized electro-tactile controls. We further contribute an inventory of 10 parametric Tactlet controls that integrate sensing of user input with real-time electro-tactile feedback. We present two approaches for printing Tactlets on 3D objects, using conductive inkjet printing or FDM 3D printing. Empirical results from a psychophysical study and findings from two practical application cases confirm the functionality and practical feasibility of the Tactlets approach.

Mixed-Reality for Object-Focused Remote Collaboration: In this paper, we outline the design of a mixed-reality system to support object-focused remote collaboration. Here, being able to adjust collaborators' perspectives on the object, as well as to understand one another's perspective, is essential for effective collaboration over distance. We propose a low-cost mixed-reality system that allows users to: (1) quickly align and understand each other's perspective; (2) explore objects independently of one another; and (3) render gestures in the remote user's workspace. In this work, we focus on the expert's role, and we introduce an interaction technique that allows users to quickly manipulate 3D virtual objects in space.

The Way You Move: The Effect of a Robot Surrogate Movement in Remote Collaboration: In this paper, we discuss the role of the movement trajectory and velocity enabled by our tele-robotic system (ReMa) for remote collaboration on physical tasks. Our system reproduces changes in object orientation and position at a remote location using a humanoid robotic arm. However, even minor kinematic differences between the robot and the human arm can result in awkward or exaggerated robot movement. As a result, user communication with the robotic system can become less efficient, less fluent, and more time-intensive.

Perspective on and Re-Orientation of Physical Proxies in Object-Focused Remote Collaboration: Remote collaborators working together on a physical object have difficulty building a shared understanding of what each person is talking about. Conventional video chat systems are insufficient for many situations because they present a single view of the object in a flattened image. To understand how this limited perspective affects collaboration, we designed the Remote Manipulator (ReMa), which can reproduce orientation manipulations on a proxy object at a remote site. We conducted two studies with ReMa, with two main findings. First, a shared perspective is more effective and preferred compared to the opposing perspective offered by conventional video chat systems. Second, the physical proxy and video chat complement one another in a combined system: people used the physical proxy to understand the object, and used video chat to perform gestures and confirm remote actions.