Abstract
Robot grasping and manipulation require accurate and timely knowledge of the manipulated object's shape and pose to successfully perform a desired task. One of the main reasons current systems fail to carry out complex tasks in real, unstructured environments is their inability to accurately determine where on the object the fingers are touching. Most systems use vision to detect the pose of an object, but the performance of this sensing modality deteriorates as soon as the robot grasps the object: when the hand contacts the object, it partially occludes it, making it difficult for vision systems to track the object's location. This thesis presents algorithms that use the robot's available tactile sensing to correct the visually determined pose of a grasped object. The method is extended to globally estimate the pose of the object even when no initial estimate is given. Two tactile sensing strategies are employed, single-point and distributed, and measurement models for both are presented. Different optimisation algorithms are developed and tested to minimise the output of these measurement models and find one or more poses that satisfy the current tactile measurements. Results show that the method successfully estimates the pose of a grasped object with high accuracy, even for objects of considerable geometric complexity. Other applications of the method, such as determining grasp stability or identifying the grasped object, are proposed, as well as future research directions.
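To make the core idea concrete, here is a minimal sketch of the kind of single-point measurement model the abstract describes: measured fingertip contacts are transformed by a candidate object pose and penalised by their distance to the object's surface model, and a general-purpose optimiser refines the pose starting from the visual estimate. The point-cloud surface model, Euler-angle pose parameterisation, Nelder-Mead optimiser, and all names here are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

# Hypothetical object model: points sampled from the object's surface.
object_surface = np.random.rand(500, 3)
surface_tree = cKDTree(object_surface)

def pose_matrix(params):
    """Build a rigid transform from [tx, ty, tz, rx, ry, rz] (Euler angles)."""
    tx, ty, tz, rx, ry, rz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx, np.array([tx, ty, tz])

def measurement_cost(params, contact_points):
    """Sum of squared distances from pose-transformed contacts to the surface."""
    R, t = pose_matrix(params)
    transformed = contact_points @ R.T + t
    dists, _ = surface_tree.query(transformed)
    return np.sum(dists ** 2)

# Contacts reported by single-point tactile sensors, in the world frame.
contacts = np.array([[0.5, 0.5, 0.5], [0.2, 0.8, 0.4], [0.7, 0.3, 0.6]])

# Start from the (stale) visual pose estimate and refine with tactile data.
visual_guess = np.zeros(6)
result = minimize(measurement_cost, visual_guess, args=(contacts,),
                  method="Nelder-Mead")
print("refined pose parameters:", result.x)
```

A local optimiser like this captures only the pose-correction setting, where vision supplies a nearby starting point; the global estimation problem the abstract mentions would instead require searching over many pose hypotheses, since the cost has multiple minima when contacts fit several parts of the surface.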
Date of Award | 2016
---|---
Original language | English
Awarding Institution |
Supervisor | Hongbin Liu (Supervisor) & Kaspar Althoefer (Supervisor)