We study idiom-based visual puns--images that align an idiom's literal and figurative meanings--and present an iterative framework that coordinates a large language model (LLM), a text-to-image model (T2IM), and a multimodal LLM (MLLM) for automatic generation and evaluation. Given an idiom, the system iteratively (i) generates detailed visual prompts, (ii) synthesizes an image, (iii) infers the idiom from the image, and (iv) refines the prompt until recognition succeeds or a step limit is reached. Using 1,000 idioms as inputs, we synthesize a corresponding dataset of visual pun images with paired prompts, enabling benchmarking of both generation and understanding. Experiments across 10 LLMs, 10 MLLMs, and one T2IM (Qwen-Image) show that MLLM choice is the primary performance driver: GPT achieves the highest accuracies, Gemini follows, and the best open-source MLLM (Gemma) is competitive with some closed models. On the LLM side, Claude attains the strongest average performance for prompt generation.
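A minimal sketch of the iterative generate-evaluate loop described above, assuming hypothetical wrappers around the three models; the function names (generate_prompt, synthesize, infer_idiom, refine_prompt), the matching criterion, and the step limit are illustrative assumptions, not the framework's actual interface.

```python
MAX_STEPS = 5  # assumed step limit; the actual limit is not specified here

def visual_pun_loop(idiom, llm, t2im, mllm):
    """Iterate prompt generation, image synthesis, and idiom recognition
    until the MLLM recovers the idiom or the step limit is reached.
    `llm`, `t2im`, and `mllm` are hypothetical model wrappers."""
    prompt = llm.generate_prompt(idiom)           # (i) detailed visual prompt
    for step in range(MAX_STEPS):
        image = t2im.synthesize(prompt)           # (ii) text-to-image synthesis
        guess = mllm.infer_idiom(image)           # (iii) idiom recognition
        if guess.strip().lower() == idiom.strip().lower():
            return image, prompt, step + 1        # recognition succeeded
        # (iv) refine the prompt, feeding back the failed guess
        prompt = llm.refine_prompt(idiom, prompt, guess)
    return image, prompt, MAX_STEPS               # step limit reached
```

Each successful run yields an (image, prompt) pair for the idiom, which is how a dataset of this kind could be accumulated across the 1,000 input idioms.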