We investigate robots that can understand their environment both semantically and geometrically, enabling them to perform manipulation and other safety-critical tasks in proximity to humans. This encompasses semantic understanding under open-set conditions, map representations of the environment, active perception and planning, as well as adaptation and continual self-supervised learning.
Method for setting more precisely a position and/or orientation of a device head
US Patent 12,226,867, 2025
ARKit LabelMaker: A New Scale for Indoor 3D Scene Understanding
NeuSurfEmb: A Complete Pipeline for Dense Correspondence-based 6D Object Pose Estimation without CAD Models
IROS 2024
Method for localizing a mobile construction robot on a construction site using semantic segmentation, construction robot system and computer program product
US Patent App. 18/284,646, 2024
OptXR: Optimization of Maintenance Processes with Extended Reality and Digital Twins
Center for Sustainable Future Mobility Symposium (CSFM 2024), 2024
Lost & Found: Updating Dynamic 3D Scene Graphs from Egocentric Observations