The proliferation of wearable technology has established multi-device ecosystems comprising smartphones, smartwatches, and headphones as critical enablers for ubiquitous pedestrian localization. However, traditional pedestrian dead reckoning (PDR) struggles with diverse motion modes, while data-driven methods, despite improving accuracy, often lack robustness due to their reliance on a single-device setup. Therefore, a promising solution is to fully leverage existing wearable devices to form a flexiwear bodynet for robust and accurate pedestrian localization. This paper presents Suite-IN++, a deep learning framework for flexiwear bodynet-based pedestrian localization. Suite-IN++ integrates motion data from wearable devices on different body parts, using contrastive learning to separate global and local motion features. It fuses global features based on the data reliability of each device to capture overall motion trends and employs an attention mechanism to uncover cross-device correlations in local features, extracting motion details helpful for accurate localization. To evaluate our method, we construct a real-life flexiwear bodynet dataset, incorporating Apple Suite (iPhone, Apple Watch, and AirPods) across diverse walking modes and device configurations. Experimental results demonstrate that Suite-IN++ achieves superior localization accuracy and robustness, significantly outperforming state-of-the-art models in real-life pedestrian tracking scenarios.
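To make the fusion ideas in the abstract concrete, the following is a minimal, hypothetical sketch of reliability-weighted global-feature fusion combined with cross-device attention over local features. The module name, feature dimensions, encoder choice, and the displacement regressor are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of the two fusion mechanisms described in the abstract:
# (1) fusing per-device global features weighted by learned reliability scores,
# (2) attending across devices to correlate local motion features.
import torch
import torch.nn as nn


class FlexiwearFusionSketch(nn.Module):
    def __init__(self, num_devices=3, imu_dim=6, feat_dim=64, num_heads=4):
        super().__init__()
        # Per-device encoder: maps a window of IMU samples to one feature vector.
        self.encoder = nn.GRU(imu_dim, feat_dim, batch_first=True)
        # Project encoder output into "global" and "local" motion features.
        self.to_global = nn.Linear(feat_dim, feat_dim)
        self.to_local = nn.Linear(feat_dim, feat_dim)
        # Scalar reliability score per device, used to weight the global fusion.
        self.reliability = nn.Linear(feat_dim, 1)
        # Cross-device attention over local features (devices act as tokens).
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Regress a 2-D displacement from fused global + attended local features.
        self.head = nn.Linear(2 * feat_dim, 2)

    def forward(self, imu_windows):
        # imu_windows: (batch, num_devices, window_len, imu_dim)
        b, d, t, c = imu_windows.shape
        _, h = self.encoder(imu_windows.reshape(b * d, t, c))
        feats = h[-1].reshape(b, d, -1)                 # (b, d, feat_dim)

        g = self.to_global(feats)                       # global motion features
        l = self.to_local(feats)                        # local motion features

        # Reliability-weighted fusion of global features across devices.
        w = torch.softmax(self.reliability(g), dim=1)   # (b, d, 1)
        g_fused = (w * g).sum(dim=1)                    # (b, feat_dim)

        # Attention uncovers cross-device correlations in local features.
        l_attn, _ = self.cross_attn(l, l, l)            # (b, d, feat_dim)
        l_pooled = l_attn.mean(dim=1)                   # (b, feat_dim)

        return self.head(torch.cat([g_fused, l_pooled], dim=-1))


if __name__ == "__main__":
    model = FlexiwearFusionSketch()
    x = torch.randn(8, 3, 100, 6)  # 8 windows, 3 devices, 100 samples, 6-axis IMU
    print(model(x).shape)          # torch.Size([8, 2])
```

Note that the contrastive objective used to disentangle the global and local branches, and the handling of missing devices, are not shown here; this sketch only illustrates the fusion path.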