Recent efforts leverage knowledge distillation techniques to develop lightweight and practical sentiment analysis models. These methods rely on human-written instructions and large-scale user texts. Despite promising results, two key challenges remain: (1) manually written instructions are limited in diversity and quantity, making them insufficient to ensure comprehensive coverage of the distilled knowledge; (2) large-scale user texts incur high computational cost, hindering the practicality of these methods. To address these challenges, we introduce CompEffDist, a comprehensive and efficient distillation framework for sentiment analysis. Our framework consists of two key modules: attribute-based automatic instruction construction and difficulty-based data filtering, which address the two challenges respectively. Applying our method across multiple model series (Llama-3, Qwen-3, and Gemma-3), we enable 3B student models to match the performance of 20x larger teacher models on most tasks. In addition, our approach greatly outperforms baseline methods in data efficiency, attaining the same performance level with only 10% of the data.
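The abstract does not specify how difficulty is measured in the difficulty-based data filtering module; the following is a minimal Python sketch of one plausible instantiation, assuming difficulty is proxied by a per-example score (e.g., the student model's loss on the teacher-generated label). The `Example` class, `score_fn`, and `keep_ratio` are hypothetical names introduced only for illustration.

```python
# Hypothetical sketch of difficulty-based data filtering, NOT the paper's exact
# algorithm: it assumes each example can be assigned a scalar difficulty score
# (e.g., student loss on the teacher label) and keeps only the hardest fraction.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    text: str
    teacher_label: str


def filter_by_difficulty(
    examples: List[Example],
    score_fn: Callable[[Example], float],  # higher score = harder example (assumed)
    keep_ratio: float = 0.10,              # 10% budget, mirroring the data-efficiency claim
) -> List[Example]:
    """Rank examples by a difficulty score and keep the hardest fraction."""
    scored = sorted(examples, key=score_fn, reverse=True)
    k = max(1, int(len(scored) * keep_ratio))
    return scored[:k]
```

Under this reading, the filtered subset is what the student is distilled on, which is how a 10% data budget could still recover full-data performance if the discarded 90% is largely easy or redundant.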