Journalists face mounting challenges in monitoring ever-expanding digital information streams to identify newsworthy content. While traditional automation tools can gather information at scale, they struggle with the editorial judgment needed to assess newsworthiness. This paper investigates whether large language models (LLMs) can serve as effective first-pass filters for journalistic monitoring. We develop a prompt-based approach that encodes journalistic news values (timeliness, impact, controversy, and generalizability) into LLM instructions to extract and evaluate potential story leads. We validate the approach across multiple models against expert-annotated ground truth, then deploy it in a real-world monitoring pipeline that processes trade press articles daily. Our evaluation reveals strong performance in extracting relevant leads from source material ($F_1 = 0.94$) and in coarse newsworthiness assessment ($\pm 1$ accuracy up to 92%), but models consistently struggle with nuanced editorial judgments that require beat expertise. The system proves most valuable as a hybrid tool combining automated monitoring with human review, surfacing novel, high-value leads while filtering out obvious noise. We conclude with practical recommendations for integrating LLM-powered monitoring into newsroom workflows in ways that preserve editorial judgment while extending journalistic capacity.
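To make the prompt-based approach concrete, the sketch below shows one plausible way the four news values could be encoded into an LLM instruction for lead extraction. It is a minimal illustration, not the paper's actual prompt or pipeline: the API client, model name, 1-5 scoring scale, and JSON output schema are all assumptions introduced here for demonstration.

```python
# Minimal sketch of a news-value prompt for lead extraction.
# Assumptions not taken from the paper: the OpenAI chat completions API,
# the model name, the 1-5 scoring scale, and the JSON output schema are
# illustrative stand-ins for whatever the actual pipeline uses.
import json
from openai import OpenAI

NEWS_VALUES = ["timeliness", "impact", "controversy", "generalizability"]

PROMPT_TEMPLATE = """You are assisting a trade-press journalist with daily monitoring.
Extract potential story leads from the article below.
For each lead, score it from 1 (low) to 5 (high) on each news value: {values}.
Return a JSON object with a "leads" key holding a list of objects,
each with the keys "lead", "scores", and "rationale".

Article:
{article}
"""


def extract_leads(article_text: str, model: str = "gpt-4o-mini") -> list[dict]:
    """Ask an LLM for candidate leads scored against the encoded news values."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(
                values=", ".join(NEWS_VALUES), article=article_text
            ),
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content).get("leads", [])
```

In a hybrid workflow of the kind the abstract describes, a function like this would run over each day's trade press articles and the scored leads would be routed to a journalist for review rather than acted on automatically.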