Estimating the difficulty of exam questions is essential for developing good exams, but professors are not always good at this task. We compare several Large Language Model-based methods with three professors on their ability to estimate what percentage of students will answer True/False exam questions correctly in the areas of Neural Networks and Machine Learning. Our results show that the professors have limited ability to distinguish between easy and difficult questions and that they are outperformed by directly asking Gemini 2.5 to estimate the difficulty. However, we obtained even better results by using the uncertainties of LLMs answering the questions as features in a supervised learning setting with only 42 training samples. We conclude that supervised learning based on LLM uncertainty can help professors estimate the difficulty of exam questions more accurately, improving the quality of assessment.
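As a rough illustration of the supervised setting described above (not the authors' exact pipeline), one could treat per-question LLM uncertainty scores as features and fit a small regressor on the 42 labeled questions to predict the fraction of students answering correctly. The feature definitions, the choice of a ridge regressor, and the leave-one-out evaluation below are assumptions for the sake of the sketch.

```python
# Minimal sketch: predict the share of students answering a True/False
# question correctly from LLM uncertainty features, with only 42 samples.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical features per question, e.g. the LLM's probability of its
# chosen answer and the entropy of its True/False answer distribution.
X = np.random.rand(42, 2)   # placeholder for 42 training questions
y = np.random.rand(42)      # placeholder: observed fraction of correct student answers

model = Ridge(alpha=1.0)    # a small linear model suits 42 training samples
preds = cross_val_predict(model, X, y, cv=LeaveOneOut())
print("mean absolute error:", np.mean(np.abs(preds - y)))
```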