TY - GEN
T1 - Scene recognition for mobile robots by relational object search using next-best-view estimates from hierarchical implicit shape models
AU - Meißner, Pascal
AU - Schleicher, Ralf
AU - Hutmacher, Robin
AU - Schmidt-Rohr, Sven R.
AU - Dillmann, Rüdiger
PY - 2016/11/28
Y1 - 2016/11/28
N2 - We present an approach for recognizing indoor scenes from object constellations that a mobile robot cannot capture from a single viewpoint and must therefore explore through object search. In our approach, which we call Active Scene Recognition (ASR), robots predict object poses from learnt spatial relations and combine these predictions with their estimates about the scenes that are present. Our models for estimating scenes and predicting poses are Implicit Shape Model (ISM) trees from prior work [1]. ISMs model scenes as sets of objects with spatial relations between them and are learnt from observations. In prior work [2], we presented a realization of ASR that was limited to choosing orientations for a fixed robot head and used an object-search approach that relies on predicted positions while ignoring object types. In this paper, we introduce an integrated system that extends ASR to selecting both positions and orientations of camera views for a mobile robot with a pivoting head. We contribute an approach for Next-Best-View estimation in object search that operates on predicted object poses. It is defined on 6-DoF viewing frustums and jointly optimizes the next view and the objects to be searched in it, based on 6-DoF pose predictions. To prevent combinatorial explosion when searching the space of camera poses, we introduce a hierarchical approach that samples robot positions at increasing resolution.
UR - http://www.scopus.com/inward/record.url?scp=85006427743&partnerID=8YFLogxK
U2 - 10.1109/IROS.2016.7759046
DO - 10.1109/IROS.2016.7759046
M3 - Published conference contribution
AN - SCOPUS:85006427743
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 137
EP - 144
BT - IROS 2016 - 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2016
Y2 - 9 October 2016 through 14 October 2016
ER -