Visually grounded speech models learn from images paired with spoken captions. By tagging images with soft text labels using a trained visual classifier with a fixed vocabulary, previous work has shown that it is possible to train a model that can detect whether a given text keyword occurs in a speech utterance. Here we investigate whether visually grounded speech models can also perform keyword localisation: predicting where, within an utterance, a given textual keyword occurs, without any explicit text or alignment supervision. We specifically consider whether incorporating attention into a convolutional model is beneficial for localisation. Although absolute localisation performance with visually supervised models is still modest (compared to using unordered bag-of-words text labels for supervision), we show that attention provides a large gain in performance over previous visually grounded models. As in many other speech-image studies, we find that many of the incorrect localisations are due to semantic confusions, e.g. locating the word 'backstroke' for the query keyword 'swimming'.
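To make the attention mechanism concrete, the sketch below shows one common way attention can be used for joint keyword detection and localisation over the frame-level outputs of a convolutional speech encoder. This is a minimal numpy illustration, not the paper's actual architecture: the dot-product attention scoring, the function names, and the synthetic features are all assumptions for exposition. Detection is obtained by attention-pooling the frame features against a keyword embedding; localisation falls out for free as the frame receiving the highest attention weight.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_keyword_score(frame_feats, keyword_vec):
    """Hypothetical attention-based keyword detector/localiser.

    frame_feats: (T, D) array of per-frame features from a conv encoder.
    keyword_vec: (D,) embedding of the query keyword.
    Returns (detection probability, predicted frame index, attention weights).
    """
    scores = frame_feats @ keyword_vec      # per-frame relevance to the keyword
    alpha = softmax(scores)                 # attention weights over time (sum to 1)
    pooled = alpha @ frame_feats            # attention-pooled utterance embedding
    detect = 1.0 / (1.0 + np.exp(-(pooled @ keyword_vec)))  # sigmoid detection score
    loc = int(alpha.argmax())               # localisation: most-attended frame
    return detect, loc, alpha

# Synthetic example: frame 4 is constructed to match the keyword embedding.
rng = np.random.default_rng(0)
T, D = 10, 8
keyword_vec = rng.normal(size=D)
frame_feats = 0.1 * rng.normal(size=(T, D))
frame_feats[4] = 3.0 * keyword_vec          # plant the "keyword" at frame 4

detect, loc, alpha = attention_keyword_score(frame_feats, keyword_vec)
```

In this setup the attention weights are trained only through the utterance-level detection loss (supervised by the visual tagger's soft labels in the paper's setting), so localisation emerges without any alignment supervision.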