Purpose – This paper aims to improve the diversity and richness of haptic perception by recognizing multi-modal haptic images.

Design/methodology/approach – First, the multi-modal haptic data collected by BioTac sensors from different objects are pre-processed and then combined into haptic images. Second, a multi-class, multi-label deep learning model is designed that can simultaneously learn four haptic features (hardness, thermal conductivity, roughness and texture) from the haptic images and recognize objects based on these features. Haptic images of different dimensions and modalities are used to test the recognition performance of this model.

Findings – The results imply that multi-modal data fusion outperforms single-modal data for tactile understanding, and that haptic images of larger dimensions are conducive to more accurate haptic measurement.

Practical implications – The proposed method has important potential applications in unknown-environment perception, dexterous grasping manipulation and other intelligent robotics domains.

Originality/value – This paper proposes a new deep learning model for extracting multiple haptic features and recognizing objects from multi-modal haptic images.
A new multi-class, multi-label deep learning model is designed for recognizing haptic images. The model can simultaneously extract four haptic features and recognize objects from haptic images composed of multi-modal signals. The results show that, compared with single-modal haptic signals, multi-modal signal fusion improves recognition performance, and that increasing the dimensions of the haptic images is also beneficial. The proposed model provides more accurate and richer haptic information, improving the capabilities of unknown-environment perception and dexterous manipulation.
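To make the multi-class, multi-label structure concrete, the sketch below shows one common way such a model can be organized: a shared trunk over the (flattened) haptic image feeding four independent classification heads, one per haptic attribute. This is a minimal NumPy illustration of the general idea; the layer sizes, the number of classes per head, and the use of a plain feed-forward trunk are assumptions for illustration, not the authors' actual architecture.

```python
# Illustrative sketch (not the paper's implementation): a shared feature
# extractor with four per-attribute classifier heads, so one forward pass
# yields a label for each of the four haptic features.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultiLabelHapticNet:
    # in_dim, hidden and classes_per_head are hypothetical sizes.
    def __init__(self, in_dim=64, hidden=32, classes_per_head=4):
        self.heads = ["hardness", "thermal_conductivity",
                      "roughness", "texture"]
        self.W1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.Wh = {h: rng.standard_normal((hidden, classes_per_head)) * 0.1
                   for h in self.heads}

    def forward(self, x):
        # Shared trunk over the flattened multi-modal haptic image,
        # then one softmax classifier per haptic attribute.
        z = relu(x @ self.W1)
        return {h: softmax(z @ self.Wh[h]) for h in self.heads}

net = MultiLabelHapticNet()
image = rng.standard_normal(64)   # stand-in for a flattened haptic image
preds = net.forward(image)        # one class distribution per attribute
```

In a trained version of such a model, each head would be supervised with its own cross-entropy loss, and the four losses summed, which is what lets a single network learn all four haptic features simultaneously.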