Precisely controlling Large Language Models (LLMs) to generate efficient and concise code is a central challenge in software engineering. We introduce a framework based on Test-Driven Development (TDD) that transforms code specification into a combinatorial optimization task. The framework first prompts an LLM to generate a test suite, then formulates the Test Case Minimization (TCM) problem as a Quadratic Unconstrained Binary Optimization (QUBO) model. This QUBO paradigm is compatible with both classical solvers and emerging hardware such as quantum annealers. Experimentally, quantum annealing solves the core TCM task 16 times faster than simulated annealing. This performance underpins our end-to-end framework, which reduces total token consumption by 36.5\% and significantly improves code quality. This work demonstrates a powerful synergy between generative AI and combinatorial optimization in software engineering, highlighting the critical importance of precise model formulation.
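To make the formulation concrete, the sketch below shows one way the TCM objective can be encoded as a QUBO. It is an illustrative assumption rather than the paper's exact model: the coverage matrix, unit per-test cost, penalty weight, and exact-cover-style penalty are all hypothetical, and brute-force enumeration in Python stands in for the simulated and quantum annealers used in the experiments.

\begin{verbatim}
# Toy QUBO construction for Test Case Minimization (illustrative sketch only).
import itertools
import numpy as np

# Hypothetical coverage matrix: cover[i, r] = 1 if test i exercises requirement r.
cover = np.array([
    [1, 1, 0, 0],  # test 0
    [0, 1, 1, 0],  # test 1
    [0, 0, 1, 1],  # test 2
    [1, 0, 0, 1],  # test 3
    [1, 1, 1, 1],  # test 4
])
n_tests, n_reqs = cover.shape
penalty = 5.0  # coverage-violation weight; must dominate the per-test cost of 1

# Energy to minimize over binary x:
#   E(x) = sum_i x_i + penalty * sum_r (1 - sum_i cover[i, r] * x_i)^2
# Expanding the square (with x_i^2 = x_i) yields the QUBO matrix Q:
Q = np.zeros((n_tests, n_tests))
for i in range(n_tests):
    Q[i, i] += 1.0                       # unit cost for keeping test i
    Q[i, i] -= penalty * cover[i].sum()  # linear part of the coverage penalty
    for j in range(i + 1, n_tests):
        Q[i, j] += 2.0 * penalty * (cover[i] * cover[j]).sum()  # overlap term

def energy(x):
    # The constant penalty * n_reqs restores the "+1" dropped from each square.
    return float(x @ Q @ x + penalty * n_reqs)

# Exhaustive enumeration suffices at toy scale; on realistic suites the same Q
# would be handed to a simulated or quantum annealer instead of this loop.
best = min((np.array(bits) for bits in itertools.product((0, 1), repeat=n_tests)),
           key=energy)
print("kept tests:", np.flatnonzero(best).tolist(), "energy:", energy(best))
\end{verbatim}

In this toy instance the minimum-energy selection keeps only the test that covers every requirement, illustrating how the quadratic penalty trades suite size against coverage; the resulting matrix is exactly the input a classical or quantum annealing sampler would receive.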