Multi-objective optimization problems (MOPs) require the simultaneous optimization of conflicting objectives. Real-world MOPs often exhibit complex characteristics, including high-dimensional decision spaces, many objectives, or computationally expensive evaluations. While population-based evolutionary computation has shown promise in addressing diverse MOPs through problem-specific adaptations, existing approaches frequently lack generalizability across distinct problem classes. Inspired by pre-training paradigms in machine learning, we propose a Population Pre-trained Model (PPM) that leverages historical optimization knowledge to efficiently solve complex MOPs within a unified framework. PPM models evolutionary patterns via population modeling, addressing two key challenges: (1) handling diverse decision spaces across problems and (2) capturing the interdependency between objective and decision spaces during evolution. To this end, we develop a population transformer architecture that embeds decision spaces of varying scales into a common latent space, enabling knowledge transfer across diverse problems. Furthermore, our architecture integrates objective-space features through objective fusion to enhance population prediction accuracy for complex MOPs. Our approach achieves robust generalization to downstream optimization tasks with up to 5,000 dimensions, five times the training scale and 200 times greater than prior work. Extensive evaluations on standardized benchmarks and out-of-training real-world applications demonstrate the consistent superiority of our method over state-of-the-art algorithms tailored to specific problem classes, improving the performance and generalization of evolutionary computation in solving MOPs.
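The abstract gives no implementation details, so the following is only a minimal sketch of how a population transformer with per-variable tokenization and objective fusion could be wired up. The class name `PopulationTransformer`, the (value, position) tokenization, the layer sizes, and the forward interface are all illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn


class PopulationTransformer(nn.Module):
    """Hypothetical sketch: embed decision vectors of varying dimensionality
    into a shared latent space, fuse objective-space features, and predict
    the next-generation decision variables."""

    def __init__(self, latent_dim=128, n_heads=4, n_layers=2, n_objectives=3):
        super().__init__()
        # Assumption: each decision variable becomes a token of
        # (value, normalized positional index), so problems with different
        # numbers of variables share a single embedding layer.
        self.var_embed = nn.Linear(2, latent_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Assumption: "objective fusion" is modeled here as projecting the
        # objective values and adding them to every variable token.
        self.obj_embed = nn.Linear(n_objectives, latent_dim)
        self.head = nn.Linear(latent_dim, 1)

    def forward(self, x, f):
        # x: (pop_size, n_vars) decision variables
        # f: (pop_size, n_objectives) objective values
        pop_size, n_vars = x.shape
        pos = torch.linspace(0, 1, n_vars, device=x.device).expand(pop_size, n_vars)
        tokens = torch.stack([x, pos], dim=-1)       # (pop, n_vars, 2)
        h = self.var_embed(tokens)                    # (pop, n_vars, latent)
        h = h + self.obj_embed(f).unsqueeze(1)        # objective fusion
        h = self.encoder(h)
        return self.head(h).squeeze(-1)               # predicted next-gen variables


# Example call (illustrative only): a population of 50 thirty-dimensional
# solutions with 3 objectives.
model = PopulationTransformer()
next_gen = model(torch.rand(50, 30), torch.rand(50, 3))
```

Because every decision variable is encoded as its own token, the same weights can in principle be applied to problems of different dimensionalities, which is the property the abstract attributes to embedding varying decision spaces into a common latent space.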