[Context] Large Language Models (LLMs) are increasingly used to assist qualitative research in Software Engineering (SE), yet the methodological implications of this usage remain underexplored. Their integration into interpretive processes such as thematic analysis raises fundamental questions about rigor, transparency, and researcher agency. [Objective] This study investigates how experienced SE researchers conceptualize the opportunities, risks, and methodological implications of integrating LLMs into thematic analysis. [Method] A reflective workshop with 25 ISERN researchers guided participants through structured discussions of LLM-assisted open coding, theme generation, and theme reviewing, using color-coded canvases to document perceived opportunities, limitations, and recommendations. [Results] Participants recognized potential efficiency and scalability gains, but highlighted risks related to bias, loss of context, reproducibility, and the rapid evolution of LLMs. They also emphasized the need for prompting literacy and continuous human oversight. [Conclusion] The findings portray LLMs as tools that can support, but not replace, interpretive analysis. The study contributes to ongoing community reflections on how LLMs can responsibly enhance qualitative research in SE.