Human hand simulation plays a critical role in digital twin applications, requiring models that balance anatomical fidelity with computational efficiency. We present a complete pipeline for constructing multi-rigid-body approximations of human hands that preserve realistic appearance while enabling real-time physics simulation. Starting from optical motion capture of a specific human hand, we construct a personalized MANO (hand Model with Articulated and Non-rigid defOrmations) model and convert it to a URDF (Unified Robot Description Format) representation with anatomically consistent joint axes. The key technical challenge is projecting MANO's unconstrained SO(3) joint rotations onto the kinematically constrained joints of the rigid-body model. We derive closed-form solutions for single degree-of-freedom joints and introduce a Baker-Campbell-Hausdorff (BCH)-corrected iterative method for two degree-of-freedom joints that properly handles the non-commutativity of rotations. We validate our approach through digital twin experiments in which reinforcement learning policies control the multi-rigid-body hand to replay captured human demonstrations. Quantitative evaluation shows sub-centimeter reconstruction error and successful grasp execution across diverse manipulation tasks.
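To illustrate the single degree-of-freedom case mentioned above, the following minimal sketch (Python with NumPy; the function and variable names are hypothetical and not taken from the paper) projects an unconstrained rotation matrix R onto a hinge with unit axis n by maximizing tr(Rot(n, theta)^T R). Under the assumption that "closest" is measured in this geodesic/Frobenius sense, the optimum has the closed form theta* = atan2(n . vee(R - R^T), tr(R) - n^T R n); the paper's exact formulation may differ.

```python
import numpy as np

def project_to_hinge(R, axis):
    """Project a 3x3 rotation matrix R onto a single-axis (hinge) joint.

    Returns the angle theta* about the unit vector `axis` whose rotation
    is closest to R in the geodesic sense, i.e. the maximizer of
    tr(Rot(axis, theta)^T R). Hypothetical helper, not the paper's code.
    """
    n = np.asarray(axis, dtype=float)
    n /= np.linalg.norm(n)
    # vee(R - R^T): axial vector of the skew-symmetric part of R.
    a = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])
    return np.arctan2(n @ a, np.trace(R) - n @ R @ n)
```

For example, applying this to a MANO per-joint rotation with the URDF flexion axis of a finger joint yields the hinge angle that best reproduces that rotation; the residual between R and the projected rotation indicates how much motion the constrained joint cannot represent.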