Mean field equilibrium (MFE) has emerged as a computationally tractable solution concept for large dynamic games. However, computing MFE remains challenging due to nonlinearities and the absence of contraction properties, limiting its reliability for counterfactual analysis and comparative statics. This paper focuses on MFE in dynamic models where agents interact through a scalar function of the population distribution, referred to as the scalar interaction function. Such models arise naturally in a wide range of applications involving market dynamics and strategic competition. The main contribution of this paper is to introduce iterative algorithms that exploit the scalar interaction structure and are guaranteed to converge to the MFE under mild assumptions. Leveraging this structure, we also establish an MFE existence result for non-compact state spaces and derive analytical comparative statics. To the best of our knowledge, these are the first algorithms with global convergence guarantees in such settings. Unlike existing approaches, our algorithms do not rely on monotonicity or contraction properties, significantly broadening their applicability. Furthermore, we provide a model-free algorithm that learns the MFE via simulation and reinforcement learning techniques, such as Q-learning and policy gradient methods, without requiring prior knowledge of the payoff or transition functions. We apply our algorithms to classic models of dynamic competition, such as capacity competition; to competitive models motivated by online marketplaces, including ridesharing and inventory competition; and to social learning models. Through reliable comparative statics in these representative models, we show how key market parameters influence equilibrium outcomes, providing insights into the design of competitive systems.
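To make the fixed-point structure referenced above concrete, the sketch below illustrates the generic cycle of guessing a scalar interaction value, computing a best-response policy, aggregating the induced stationary distribution, and updating the scalar. This is a minimal illustration under assumed toy primitives, not the paper's algorithm: the payoff, transition kernels, and interaction function are hypothetical placeholders, and the naive damped iteration shown here carries none of the convergence guarantees developed in the paper.

```python
# Illustrative sketch only (not the paper's algorithm): an MFE-style fixed point for a
# toy model in which agents interact solely through a scalar statistic of the
# population distribution. All primitives below are hypothetical placeholders.
import numpy as np

N_STATES, N_ACTIONS, BETA = 10, 3, 0.9
rng = np.random.default_rng(0)

def payoff(theta):
    """Per-period payoff r[s, a] given the scalar interaction value theta (placeholder form)."""
    s = np.arange(N_STATES)[:, None]
    a = np.arange(N_ACTIONS)[None, :]
    return 0.1 * s - 0.05 * a - 0.2 * theta * s  # congestion-style dependence on theta

# Fixed, action-dependent transition kernels P[a][s, s'] (placeholder random kernels).
P = [rng.dirichlet(np.ones(N_STATES), size=N_STATES) for _ in range(N_ACTIONS)]

def best_response(theta, n_iter=500):
    """Value iteration against a fixed scalar theta; returns a greedy policy."""
    r = payoff(theta)
    V = np.zeros(N_STATES)
    for _ in range(n_iter):
        Q = np.stack([r[:, a] + BETA * P[a] @ V for a in range(N_ACTIONS)], axis=1)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def stationary_distribution(policy, n_iter=500):
    """Long-run state distribution induced by the policy (power iteration)."""
    T = np.array([P[policy[s]][s] for s in range(N_STATES)])
    mu = np.full(N_STATES, 1.0 / N_STATES)
    for _ in range(n_iter):
        mu = mu @ T
    return mu

def interaction(mu):
    """Scalar interaction function: here, the normalized mean state of the population."""
    return float(mu @ np.arange(N_STATES)) / (N_STATES - 1)

# Naive damped fixed-point iteration on the scalar interaction value: the
# guess -> best respond -> aggregate -> update loop that the paper's algorithms refine.
theta = 0.5
for _ in range(100):
    pi = best_response(theta)
    theta_new = interaction(stationary_distribution(pi))
    if abs(theta_new - theta) < 1e-8:
        break
    theta = 0.5 * theta + 0.5 * theta_new
print(f"approximate equilibrium interaction value: {theta:.4f}")
```

Because the interaction enters only through this one-dimensional quantity, iterating (or searching) over the scalar rather than over the full population distribution is the structural feature the abstract highlights; the specific update rules with convergence guarantees are developed in the body of the paper.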