Federated learning (FL) has gained prominence due to heightened concerns over data privacy. However, privacy restrictions limit the visibility that data consumers (DCs) have into the capabilities and efforts of data owners (DOs). Thus, for open collaborative FL markets to thrive, effective incentive mechanisms are key, as they motivate DOs to contribute to FL tasks. Contract theory is a useful technique for designing FL incentive mechanisms. Existing approaches generally assume that once the contract between a DC and a DO is signed, it remains unchanged until the FL task is finished. However, unforeseen circumstances might prevent a DO from fulfilling the current contract, resulting in inefficient utilization of the DC's budget. To address this limitation, we propose the Renegotiable Contract-Theoretic Incentive Mechanism (RC-TIM) for FL. Unlike previous approaches, it adapts to changes in DOs' behavior and budget constraints by supporting contract renegotiation, thereby providing flexible and dynamic incentives. Under RC-TIM, an FL system is more adaptive to unpredictable changes in the operating environment that can affect the quality of the service provided by DOs. Extensive experiments on three benchmark datasets demonstrate that RC-TIM significantly outperforms four state-of-the-art related methods, delivering an increase in utility of up to 45.76% on average.