Many studies in recommender systems (RecSys) adopt a general problem definition, i.e., recommending preferred items to users based on their past interactions. Such an abstraction often lacks the domain-specific nuances necessary for practical deployment. Yet models are frequently evaluated on datasets collected from online recommender platforms, which inherently reflect domain and task specificities. In this paper, we analyze RecSys task formulations, emphasizing key components such as input-output structures, temporal dynamics, and candidate item selection, all of which directly affect offline evaluation. We further examine the complexities of user-item interactions, including decision-making costs, multi-step engagements, and unobservable interactions, which may influence model design. Additionally, we explore the balance between task specificity and model generalizability, highlighting how well-defined task formulations serve as the foundation for robust evaluation and effective solution development. By clarifying task definitions and their implications, this work provides a structured perspective on RecSys research. The goal is to help researchers better navigate the field, particularly in understanding the specificities of RecSys tasks and ensuring fair and meaningful evaluations.