Inferring 3D Shape Distributions for Robot Perception and Planning
Our goal is to develop methods for a mobile robot to reason about volumetric spatial uncertainty of objects in its environment.
For a mobile robot to operate autonomously in an unknown environment, it must actively construct a representation of space. To do so, the robot needs to position its sensors to gather information and to reason about objects while coping with occlusions and sensor noise. In this work, we propose a distributional spatial representation based on 3D geometric shapes (such as cylinders and cuboids) that compactly captures the structure of volumetric information and provides a meaningful abstraction for reasoning about objects and viewpoints under uncertainty. We develop methods for inferring shape parameters from point clouds, predicting viewpoint information over shapes, and robustly grasping objects.
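To make the idea of inferring shape parameters from point clouds concrete, the following is a minimal sketch (not the authors' method) of fitting a cylinder's axis and radius to a synthetic, pre-segmented point cloud. It assumes the cylinder is longer than it is wide, so the axis can be estimated as the principal direction of the points; the data, noise level, and fitting procedure are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: recover a cylinder's axis and radius from points.
# The point cloud is synthetic; a real pipeline would first segment the
# object's points out of the full depth scan.
rng = np.random.default_rng(0)

# Sample points on a cylinder (radius 0.5, height 4, axis along z) with noise.
n = 2000
theta = rng.uniform(0.0, 2.0 * np.pi, n)
z = rng.uniform(-2.0, 2.0, n)
radius_true = 0.5
pts = np.column_stack([radius_true * np.cos(theta),
                       radius_true * np.sin(theta),
                       z])
pts += rng.normal(scale=0.01, size=pts.shape)  # simulated sensor noise

# Axis estimate: principal component of the centered points
# (valid when the cylinder's height exceeds its diameter).
centroid = pts.mean(axis=0)
centered = pts - centroid
_, _, vt = np.linalg.svd(centered, full_matrices=False)
axis = vt[0]  # unit direction of largest spread

# Radius estimate: mean distance from each point to the fitted axis line.
proj = centered @ axis
radial = centered - np.outer(proj, axis)
radius_est = np.linalg.norm(radial, axis=1).mean()
```

A probabilistic version of this fit, maintaining a distribution over the shape parameters rather than point estimates, is what supports the kind of uncertainty-aware viewpoint and grasp reasoning described above.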