We study a model of subscription-based platforms where users pay a fixed fee for unlimited access to content, and creators receive a share of the revenue. Existing approaches to detecting fraud predominantly rely on machine learning methods, engaging in an ongoing arms race with bad actors. We instead explore revenue division mechanisms that inherently disincentivize manipulation. We formalize three manipulation-resistance axioms and examine which existing rules satisfy them. We show that a mechanism widely used by streaming platforms not only fails to prevent fraud but also makes detecting manipulation computationally intractable. We then introduce a novel rule, ScaledUserProp, that satisfies all three manipulation-resistance axioms. Finally, experiments with both real-world and synthetic streaming data support ScaledUserProp as a fairer alternative to existing rules.