We investigate robots that understand their environment both semantically and geometrically, in order to perform manipulation and other safety-critical tasks in proximity to humans. This encompasses semantic understanding under open-set conditions, map representations of the environment, active perception and planning, as well as adaptation and continual self-supervised learning.
Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization Using Geometrical Information
European Conference on Computer Vision, 188-205, 2025
Method for localizing a mobile construction robot on a construction site using semantic segmentation, construction robot system and computer program product
US Patent App. 18/284,646, 2024
OptXR: Optimization of Maintenance Processes with Extended Reality and Digital Twins
Center for Sustainable Future Mobility Symposium 2024 (CSFM 2024), 2024
DepthSplat: Connecting Gaussian Splatting and Depth