We propose a gradient-based framework for optimizing parametric nonlinear Gaussian channels via mutual information maximization. Leveraging the score-to-Fisher bridge (SFB) methodology, we derive a computationally tractable formula for the information gradient, i.e., the gradient of the mutual information with respect to the parameters of the nonlinear front-end. Our formula expresses this gradient in terms of two key components: the score function of the marginal output distribution, which can be learned via denoising score matching (DSM), and the Jacobian of the front-end function, which is handled efficiently via the vector-Jacobian product (VJP) available in automatic differentiation frameworks. This enables practical parameter optimization through gradient ascent. Furthermore, we extend the framework to task-oriented scenarios, deriving gradients for both a task-specific mutual information, in which a task variable depends on the channel input, and the information bottleneck (IB) objective. A key advantage of our approach is that it enables end-to-end optimization of the nonlinear front-end without requiring explicit computation of the output distribution. Extensive experiments confirm the correctness of our information gradient formula against analytical solutions and demonstrate its effectiveness in optimizing both linear and nonlinear channels toward their respective objectives.
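For concreteness, under the standard additive-noise model Y = f_θ(X) + σN with N Gaussian and independent of X, h(Y|X) does not depend on θ, and differentiating h(Y) through the reparameterization Y = f_θ(X) + σN yields a gradient of the form ∇_θ I(X;Y) = −E[J_θ(X)ᵀ s(Y)], where s(y) = ∇_y log p_Y(y) is the marginal output score and J_θ(X) = ∂f_θ(X)/∂θ is the front-end Jacobian; pairing s(Y) with J_θ(X) is exactly a VJP. The sketch below is a minimal JAX illustration of this estimator under those assumptions, not the authors' implementation: the names front_end, score_fn, sigma, and info_grad are hypothetical, and the closed-form stand-in score is exact only for a linear identity front-end with a standard Gaussian input (a DSM-trained network would replace it in practice).

```python
# Minimal sketch (assumed model Y = f_theta(X) + sigma * N): Monte Carlo
# estimate of grad_theta I(X;Y) = -E[(df/dtheta)^T s(Y)] via a learned
# output score and a vector-Jacobian product. All names are illustrative.
import jax
import jax.numpy as jnp

sigma = 0.5  # assumed noise level of the additive Gaussian channel


def front_end(theta, x):
    """Hypothetical parametric nonlinear front-end f_theta."""
    return jnp.tanh(theta * x)


def info_grad(theta, x_batch, n_batch, score_fn):
    """Estimate the information gradient with respect to theta.

    score_fn approximates the marginal output score s(y) = grad_y log p_Y(y),
    e.g. a network trained with denoising score matching (DSM).
    """
    # Differentiate only through the deterministic front-end.
    y_clean, vjp_fn = jax.vjp(lambda th: front_end(th, x_batch), theta)
    y = y_clean + sigma * n_batch      # channel output samples
    s = score_fn(y)                    # learned score evaluated at the outputs
    (g,) = vjp_fn(s)                   # VJP: sum_i (df/dtheta)^T s(y_i)
    return -g / x_batch.shape[0]       # Monte Carlo mean, with the sign flip


# Usage sketch: one gradient-ascent step on the mutual information.
key = jax.random.PRNGKey(0)
kx, kn = jax.random.split(key)
x = jax.random.normal(kx, (1024,))
n = jax.random.normal(kn, (1024,))
# Stand-in score: exact only for Y = X + sigma*N with X ~ N(0, 1);
# in practice this would be a DSM-trained score network.
score_fn = lambda y: -y / (1.0 + sigma**2)
theta = 1.0
theta = theta + 1e-2 * info_grad(theta, x, n, score_fn)
```

Because the VJP contracts the score vector against the Jacobian without ever materializing J_θ(X), the per-step cost matches one backward pass through the front-end, which is what makes end-to-end gradient ascent practical even for high-dimensional outputs.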