Abstract
Robot grasping and manipulation rely mainly on two types of sensory data: vision and tactile sensing. Localisation and recognition of the object are typically done through vision alone, while tactile sensors are commonly used for grasp control. Vision performs reliably in uncluttered environments, but its performance may deteriorate when the object is occluded, as is often the case during a manipulation task, when the object is in-hand and the robot's fingers stand between the camera and the object. This paper presents a method that uses the robot's sense of touch to refine the knowledge of a manipulated object's pose from an initial estimate provided by vision. The objective is to find a transformation of the object's location that is coherent with the current proprioceptive and tactile sensory data. The method was tested with different object geometries, and applications are proposed in which it can improve the overall performance of a robotic system. Experimental results show an improvement of around 70% in the estimate of the object's location compared to using vision alone.
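The abstract leaves the optimisation details to the paper body. As a rough illustration of the stated objective (find a pose correction coherent with proprioceptive and tactile data, starting from a vision estimate), a planar refinement could be set up as sketched below. Everything in this sketch is an assumption made for illustration and not the authors' method: the function name `refine_planar_pose`, the point-cloud object model, the regularised nearest-surface cost, and all numeric values are hypothetical.

```python
# A minimal planar sketch (not the authors' implementation): treat the
# object model as a 2-D point cloud of surface points, and search for a
# pose near the vision estimate that brings the tactile contact points
# onto the object surface.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def refine_planar_pose(model_pts, vision_pose, contacts, reg=1e-2):
    """Refine an (x, y, theta) object pose using tactile contacts.

    model_pts   : (N, 2) object surface points, in the object frame.
    vision_pose : (3,) initial (x, y, theta) estimate from vision.
    contacts    : (M, 2) fingertip contact points in the robot frame
                  (from forward kinematics plus tactile sensing).
    reg         : weight keeping the result close to the vision estimate.
    """
    tree = cKDTree(model_pts)                 # surface lookup, built once
    vision_pose = np.asarray(vision_pose, dtype=float)

    def cost(pose):
        x, y, th = pose
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        # Map contacts from the robot frame into the object frame
        # (p_obj = R^T (p_robot - t)), then penalise their distance
        # to the nearest model surface point.
        local = (contacts - np.array([x, y])) @ R
        dists, _ = tree.query(local)
        return np.sum(dists**2) + reg * np.sum((pose - vision_pose)**2)

    return minimize(cost, vision_pose, method="Nelder-Mead").x

# Toy example: a unit-square outline, a vision estimate that is off by a
# few centimetres, and four contacts generated from the "true" pose.
t = np.linspace(0.0, 1.0, 50)
o, z = np.ones_like(t), np.zeros_like(t)
square = np.concatenate([np.stack([t, z], 1), np.stack([o, t], 1),
                         np.stack([t, o], 1), np.stack([z, t], 1)])
true_pose = np.array([0.30, 0.10, 0.20])
c, s = np.cos(true_pose[2]), np.sin(true_pose[2])
contacts = square[[10, 70, 120, 180]] @ np.array([[c, s], [-s, c]]) + true_pose[:2]
print(refine_planar_pose(square, vision_pose=[0.25, 0.15, 0.0], contacts=contacts))
```

The regulariser stands in for the "initial estimate provided by vision" part of the objective: a handful of contacts rarely constrains the pose fully, so the refinement stays near the vision estimate unless the tactile evidence pulls it away.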
Original language | English |
---|---|
Title of host publication | IEEE International Conference on Intelligent Robots and Systems |
Publisher | IEEE |
Pages | 4021-4026 |
Number of pages | 6 |
ISBN (Print) | 9781467363587 |
DOIs | |
Publication status | Published - 2013 |
Event | 2013 26th IEEE/RSJ International Conference on Intelligent Robots and Systems: New Horizon, IROS 2013 - Tokyo, Japan
Duration | 3 Nov 2013 → 8 Nov 2013
Conference
Conference | 2013 26th IEEE/RSJ International Conference on Intelligent Robots and Systems: New Horizon, IROS 2013 |
---|---|
Country/Territory | Japan |
City | Tokyo |
Period | 3/11/2013 → 8/11/2013 |