Multi-modal robotic visual-tactile detection of surface cracks

Submitted, 2021

We present an approach that combines vision and tactile sensing to detect and characterise surface cracks. The proposed algorithm localises surface cracks in a remote environment from videos/photos taken by an on-board robot camera, followed by automatic tactile inspection of the surface. A Faster R-CNN deep learning-based object detector identifies the locations of potential cracks, and a random forest classifier then performs tactile identification to confirm their presence. We demonstrate offline and online experiments comparing vision-only crack detection with combined vision and tactile detection. Two experiments test the efficiency of the multi-modal approach: one measuring online detection accuracy, and one measuring the time required to explore a surface and localise a crack. Across 10 trials comparing the multi-modal approach with the vision-only method, the model correctly detects 92.85% of the cracks when both modalities are used cooperatively, versus 46.66% when using vision alone. Exploring a surface with tactile sensing alone takes around 199 seconds; using vision and tactile sensing together reduces this to 31 seconds. The approach may also be deployed in extreme environments (e.g. nuclear plants), since gamma radiation does not interfere with the basic sensing mechanism of fibre optic-based sensors.
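As a rough illustration of how such a two-stage pipeline fits together, the sketch below pairs an off-the-shelf Faster R-CNN detector (torchvision) with a scikit-learn random forest. The pretrained COCO weights, the `probe_surface` callback, and the tactile feature format are placeholder assumptions for illustration only; they are not the paper's trained models or sensor interface.

```python
# Hypothetical sketch of the two-stage visual-tactile pipeline described
# above: Faster R-CNN proposes candidate crack locations in a camera image,
# then a random forest classifies tactile readings from each candidate.
# Weights, features, and thresholds are placeholders, not the paper's setup.

import numpy as np
import torch
import torchvision
from sklearn.ensemble import RandomForestClassifier

# Stage 1: vision. A generic pretrained detector stands in here; the paper's
# detector would be fine-tuned on crack imagery.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def detect_crack_candidates(image, score_threshold=0.5):
    """Return bounding boxes of potential cracks in an HxWx3 uint8 image."""
    tensor = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = detector([tensor])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep].numpy()

# Stage 2: touch. A random forest over tactile features, assumed here to be
# a fixed-length vector per probed region (e.g. fibre-optic sensor readings).
tactile_clf = RandomForestClassifier(n_estimators=100, random_state=0)

def train_tactile_classifier(features, labels):
    """features: (n_samples, n_features) array; labels: 1 = crack, 0 = none."""
    tactile_clf.fit(features, labels)

def confirm_cracks(candidate_boxes, probe_surface):
    """Probe each vision candidate and keep those the forest confirms.

    probe_surface is a hypothetical callback: the robot touches the region
    inside the box and returns a 1-D tactile feature vector.
    """
    confirmed = []
    for box in candidate_boxes:
        reading = probe_surface(box)
        if tactile_clf.predict(reading.reshape(1, -1))[0] == 1:
            confirmed.append(box)
    return confirmed
```

The ordering is the point of the design: the vision stage prunes the search space so the slow tactile probe only visits a handful of candidate regions rather than sweeping the whole surface, which is what drives the reported drop in exploration time from roughly 199 to 31 seconds.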
