Speech therapy is essential for rehabilitating speech disorders caused by neurological impairments such as stroke. However, traditional manual and computer-assisted systems offer limited real-time accessibility and articulatory motion feedback. Recent advances in multimodal large language models (MLLMs) have demonstrated significant potential in healthcare, especially through their adaptive assessment and therapeutic feedback capabilities. Nevertheless, challenges including insufficient acquisition and fusion of articulatory information, inadequate parsing of articulatory organ motion trajectories, and the scarcity of domain-specific datasets hinder the application of MLLMs in speech therapy. To address these limitations, we propose an MLLM-based speech rehabilitation assistance system that leverages ultrasound tongue imaging and speech signals to deliver precise, interactive articulatory feedback. We construct a high-quality domain-specific dataset of ultrasound-speech dialogue pairs, which facilitates fine-tuning to enhance the model's clinical adaptability. Furthermore, we develop a spatiotemporal fusion training strategy for ultrasound videos and speech signals, enabling fine-grained articulatory impairment analysis and ultimately generating actionable feedback. Experimental results demonstrate the effectiveness of our model in articulatory analysis and clinical assessment.
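To make the spatiotemporal fusion idea concrete, the sketch below shows one plausible way to combine per-frame ultrasound tongue-imaging features with speech features before passing them to an MLLM. All module names, dimensions, and the cross-attention design here are illustrative assumptions, not the architecture described in the paper.

```python
# Illustrative sketch only: a minimal spatiotemporal fusion of ultrasound video
# frames and speech features. Dimensions and the cross-attention layout are
# assumptions for exposition, not the paper's actual method.
import torch
import torch.nn as nn

class SpatioTemporalFusion(nn.Module):
    def __init__(self, video_dim=512, audio_dim=256, hidden_dim=512, num_heads=8):
        super().__init__()
        # Project per-frame ultrasound features and per-window speech features
        # into a shared hidden space.
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        # Temporal encoder over ultrasound frames (spatial features assumed
        # pre-extracted, e.g., by a frozen vision backbone).
        self.temporal_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden_dim, num_heads, batch_first=True),
            num_layers=2,
        )
        # Cross-attention: speech tokens attend to tongue-motion tokens.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, video_feats, audio_feats):
        # video_feats: (B, T_v, video_dim) per-frame ultrasound embeddings
        # audio_feats: (B, T_a, audio_dim) speech feature windows
        v = self.temporal_enc(self.video_proj(video_feats))
        a = self.audio_proj(audio_feats)
        fused, _ = self.cross_attn(query=a, key=v, value=v)
        # Fused tokens could then be fed to the MLLM, e.g., as soft prompts.
        return fused

# Example usage with dummy tensors.
model = SpatioTemporalFusion()
video = torch.randn(2, 60, 512)   # 60 ultrasound frames
audio = torch.randn(2, 100, 256)  # 100 speech feature windows
print(model(video, audio).shape)  # torch.Size([2, 100, 512])
```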