TY - CHAP
T1 - Leveraging Multi-modal Sensing for Robotic Insertion Tasks in R&D Laboratories
AU - Butterworth, Aaron
AU - Pizzuto, Gabriella
AU - Pecyna, Leszek
AU - Cooper, Andrew I.
AU - Luo, Shan
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Performing a large volume of experiments in chemistry labs creates repetitive actions that cost researchers time, so automating these routines is highly desirable. Previous work in robotic chemistry has performed large numbers of experiments autonomously; however, these processes rely on automated machines at every stage, from solid or liquid addition to analysis of the final product. In such systems, every transition between machines requires the robotic chemist to pick and place glass vials, which is currently done using open-loop methods that require all equipment used by the robot to be in well-defined, known locations. We seek to begin closing the loop in this vial-handling process in a way that also fosters human-robot collaboration in the chemistry lab environment. To do this, the robot must be able to detect valid placement positions for the vials it is collecting and reliably insert them into the detected locations. We create a single-modality visual method for estimating placement locations to provide a baseline before introducing two additional feedback modalities (force and tactile). Our visual method uses a combination of classic computer vision techniques and a CNN discriminator to detect possible insertion points; a vial is then grasped and positioned above an insertion point, and the multi-modal methods guide the final insertion movements using an efficient search pattern. Through our experiments we show that the baseline insertion rate of 48.78% improves to 89.55% with the addition of our 'force and vision' multi-modal feedback method.
AB - Performing a large volume of experiments in chemistry labs creates repetitive actions that cost researchers time, so automating these routines is highly desirable. Previous work in robotic chemistry has performed large numbers of experiments autonomously; however, these processes rely on automated machines at every stage, from solid or liquid addition to analysis of the final product. In such systems, every transition between machines requires the robotic chemist to pick and place glass vials, which is currently done using open-loop methods that require all equipment used by the robot to be in well-defined, known locations. We seek to begin closing the loop in this vial-handling process in a way that also fosters human-robot collaboration in the chemistry lab environment. To do this, the robot must be able to detect valid placement positions for the vials it is collecting and reliably insert them into the detected locations. We create a single-modality visual method for estimating placement locations to provide a baseline before introducing two additional feedback modalities (force and tactile). Our visual method uses a combination of classic computer vision techniques and a CNN discriminator to detect possible insertion points; a vial is then grasped and positioned above an insertion point, and the multi-modal methods guide the final insertion movements using an efficient search pattern. Through our experiments we show that the baseline insertion rate of 48.78% improves to 89.55% with the addition of our 'force and vision' multi-modal feedback method.
UR - http://www.scopus.com/inward/record.url?scp=85174005134&partnerID=8YFLogxK
U2 - 10.1109/CASE56687.2023.10260414
DO - 10.1109/CASE56687.2023.10260414
M3 - Conference paper
AN - SCOPUS:85174005134
T3 - IEEE International Conference on Automation Science and Engineering
BT - 2023 IEEE 19th International Conference on Automation Science and Engineering, CASE 2023
PB - IEEE Computer Society
T2 - 19th IEEE International Conference on Automation Science and Engineering, CASE 2023
Y2 - 26 August 2023 through 30 August 2023
ER -