Answering end user security questions is challenging. While large language models (LLMs) like GPT, LLAMA, and Gemini are far from error-free, they have shown promise in answering a variety of questions outside of security. We studied LLM performance in the area of end user security by qualitatively evaluating three popular LLMs on 900 systematically collected end user security questions. While LLMs demonstrate broad generalist ``knowledge'' of end user security information, there are patterns of errors and limitations across LLMs, including stale and inaccurate answers and indirect or unresponsive communication styles, all of which impact the quality of the information received. Based on these patterns, we suggest directions for model improvement and recommend user strategies for interacting with LLMs when seeking assistance with security.