Enabling Semantic Understanding for Autonomous Marine Robots
Our goal is to develop unsupervised or minimally supervised machine learning frameworks that allow autonomous underwater vehicles (AUVs) to explore unknown marine environments and communicate their findings in a semantically meaningful manner.
The oceans cover over 70% of the Earth's surface, yet less than 5% of this vital biosphere has been explored to date. Much of the marine environment is dangerous or inaccessible to human divers, so the task of exploring Earth's oceans, and one day the oceans of other worlds, will fall to marine robots. The development of exploratory marine robots, however, has been stymied by the unique challenges of the underwater environment: in particular, the lack of radio communication underwater demands comprehensive and robust onboard autonomy.

Our work in this area is split between two complementary thrusts: 1) learning an abstract representation of underwater image data that is conducive to semantic reasoning, and 2) using that abstract representation to build probabilistic models of the robot's visual environment that enable more efficient exploratory path planning, anomaly detection, and mission data summarization.

We address the problem of learning a meaningful representation of underwater images using deep learning. Our current research uses unsupervised convolutional autoencoders or minimally supervised transfer learning frameworks to learn a latent feature representation of underwater image data.

Given this abstract feature representation, we apply various probabilistic models to represent the robot's knowledge about the observable world. Topic models provide a natural probabilistic framework for both anomaly detection and data summarization. Much of our previous work has focused on extending the Hierarchical Dirichlet Process (HDP), a Bayesian nonparametric topic model, to the real-time, spatiotemporally correlated image data of a marine robot's video stream.
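As a minimal sketch of the unsupervised representation-learning component, the PyTorch model below trains a convolutional autoencoder on unlabeled frames and uses the bottleneck activations as the latent feature representation. PyTorch itself, the 128x128 input size, and the 64-dimensional latent space are illustrative assumptions, not details of our actual systems.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Unsupervised convolutional autoencoder: the bottleneck activations
    serve as a low-dimensional latent representation of each image."""
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder: 3x128x128 image -> latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1),   # 16x64x64
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # 32x32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 64x16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        # Decoder mirrors the encoder to reconstruct the input image.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Training step: minimize pixel-wise reconstruction error; no labels needed.
model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

batch = torch.rand(8, 3, 128, 128)  # stand-in for a batch of underwater frames
reconstruction, latent = model(batch)
loss = loss_fn(reconstruction, batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```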
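For the minimally supervised alternative, a standard transfer-learning recipe fine-tunes only a small classification head on top of an ImageNet-pretrained backbone, so that only a handful of labeled underwater images are required. The ResNet-18 backbone and the ten hypothetical seafloor classes below are illustrative choices, not a description of our actual pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone (torchvision >= 0.13 weights API).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the classification head; 10 seafloor classes is a hypothetical choice.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Freeze the pretrained layers so only the new head is trained,
# keeping the labeled-data requirement minimal.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(4, 3, 224, 224)   # stand-in for a small labeled batch
labels = torch.randint(0, 10, (4,))   # stand-in for sparse human labels
loss = loss_fn(backbone(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# After fine-tuning, dropping the head yields a 512-dim feature extractor.
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
features = feature_extractor(images).flatten(1)  # shape: (4, 512)
```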
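To illustrate how a topic model consumes such features, the sketch below treats each image as a bag of quantized latent features ("visual words") and fits gensim's off-the-shelf HdpModel, which infers the number of topics from the data. The toy vocabulary and the batch (offline) inference are stand-ins for illustration only; our research extends HDP to real-time, spatiotemporally correlated video, which this sketch does not capture.

```python
from gensim.corpora import Dictionary
from gensim.models import HdpModel

# Each image becomes a "document": a bag of quantized latent features.
# These toy visual words stand in for features from real AUV frames.
image_docs = [
    ["sand", "sand", "ripple", "sand"],
    ["coral", "fish", "coral", "rock"],
    ["sand", "ripple", "ripple", "sand"],
    ["coral", "fish", "anemone", "coral"],
]
dictionary = Dictionary(image_docs)
corpus = [dictionary.doc2bow(doc) for doc in image_docs]

# HDP infers the number of topics from the data (Bayesian nonparametric),
# rather than requiring it to be fixed in advance as in vanilla LDA.
hdp = HdpModel(corpus=corpus, id2word=dictionary, random_state=0)

# Per-image topic mixtures: images whose mixtures are unusual relative to
# the mission so far are candidate anomalies, while aggregate topic
# proportions provide a compact summary of the mission's visual data.
for i, bow in enumerate(corpus):
    print(f"image {i}: {hdp[bow]}")
```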