Palpation, the use of touch in medical examination, is almost exclusively performed by humans. We investigate a proof of concept for an artificial palpation method based on self-supervised learning. Our key idea is that an encoder-decoder framework can learn, from a sequence of tactile measurements, a $\textit{representation}$ that contains all the relevant information about the palpated object. We conjecture that such a representation can be used for downstream tasks such as tactile imaging and change detection. With enough training data, it should capture intricate patterns in the tactile measurements that go beyond a simple map of forces -- the current state of the art. To validate our approach, we both develop a simulation environment and collect a real-world dataset of soft objects with corresponding ground-truth images obtained by magnetic resonance imaging (MRI). We collect palpation sequences using a robot equipped with a tactile sensor, and train a model that predicts sensory readings at different positions on the object. We investigate the representation learned in this process, and demonstrate its use in imaging and change detection.
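To make the self-supervised objective concrete, the following is a minimal sketch of one possible instantiation: an encoder aggregates a palpation sequence of (probe position, tactile reading) pairs into a latent representation, and a decoder predicts the tactile reading at a held-out query position. The module names, network sizes, the 2-D position and $D$-dimensional reading shapes, and the GRU/MLP choices are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of the encode-then-predict objective, under
# assumed shapes: 2-D probe positions and D-dimensional tactile readings.
# All names and sizes are illustrative, not the paper's implementation.
import torch
import torch.nn as nn

D, Z = 16, 64  # assumed tactile-reading and latent dimensions

class TactileEncoder(nn.Module):
    """Aggregates a sequence of (position, reading) pairs into one latent z."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(input_size=2 + D, hidden_size=Z, batch_first=True)

    def forward(self, pos, reading):   # pos: (B, T, 2), reading: (B, T, D)
        _, h = self.gru(torch.cat([pos, reading], dim=-1))
        return h[-1]                   # z: (B, Z)

class QueryDecoder(nn.Module):
    """Predicts the tactile reading at a query position, conditioned on z."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(Z + 2, 128), nn.ReLU(), nn.Linear(128, D))

    def forward(self, z, query_pos):   # query_pos: (B, 2)
        return self.mlp(torch.cat([z, query_pos], dim=-1))

enc, dec = TactileEncoder(), QueryDecoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

# One self-supervised training step on random stand-in data: encode a
# context sequence, then regress the reading at a held-out query position.
pos, reading = torch.rand(8, 32, 2), torch.rand(8, 32, D)
q_pos, q_reading = torch.rand(8, 2), torch.rand(8, D)
loss = nn.functional.mse_loss(dec(enc(pos, reading), q_pos), q_reading)
opt.zero_grad()
loss.backward()
opt.step()
```

Under this framing, the latent $z$ is the learned representation the abstract refers to; downstream tasks such as tactile imaging or change detection would read it out, e.g. by decoding predicted readings over a grid of query positions.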