Heterogeneity is a ubiquitous property of biological systems and has profound implications for computation. While it is conceivable to optimize neuronal and synaptic heterogeneity for a specific task, such top-down optimization is biologically implausible, prone to catastrophic forgetting, and both data- and energy-intensive. In contrast, biological organisms, with their remarkable capacity to perform numerous tasks at minimal metabolic cost, exhibit a heterogeneity that is inherent, stable during adulthood, and task-unspecific. Inspired by this intrinsic form of heterogeneity, we investigate the utility of variations in neuronal time constants for solving hundreds of distinct temporal tasks of varying complexity. Our results show that intrinsic heterogeneity significantly enhances performance and robustness in an implementation-independent manner, indicating its usefulness for both (rate-based) machine learning and (spike-coded) neuromorphic applications. Importantly, only skewed heterogeneity profiles, reminiscent of those found in biology, produce such performance gains. We further demonstrate that this computational advantage eliminates the need for large networks, allowing comparable performance at substantially lower operational, metabolic, and energetic costs, respectively in silico, in vivo, and on neuromorphic hardware. Finally, we discuss the implications of intrinsic (rather than task-induced) heterogeneity for the design of efficient artificial systems, particularly novel neuromorphic devices that exhibit similar device-to-device variability.
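To make the core manipulation concrete, the following is a minimal sketch of the kind of setup the abstract describes: a rate-based recurrent network whose units carry individually fixed (task-unspecific) membrane time constants, drawn from a skewed log-normal distribution, versus a homogeneous control where all units share one time constant. All numerical choices (network size, log-normal parameters, tanh rate units) are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # number of rate units (illustrative)
dt = 1.0  # integration step in ms (illustrative)

# Skewed (log-normal) heterogeneity in membrane time constants,
# qualitatively reminiscent of biological distributions.
# Parameters here are assumptions for illustration only.
tau_het = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=N)  # ms

# Homogeneous control: every unit shares the median time constant.
tau_hom = np.full(N, np.median(tau_het))

# Random recurrent weights with standard 1/sqrt(N) scaling.
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))

def simulate(tau, inp, steps=200):
    """Leaky rate units: tau_i * dx_i/dt = -x_i + sum_j W_ij r_j + I_i,
    integrated with forward Euler; per-unit tau sets each unit's timescale."""
    x = np.zeros(N)
    trace = []
    for _ in range(steps):
        r = np.tanh(x)
        x = x + (dt / tau) * (-x + W @ r + inp)
        trace.append(r.copy())
    return np.array(trace)

inp = rng.normal(0.0, 0.5, size=N)  # static input pattern (illustrative)
r_het = simulate(tau_het, inp)      # heterogeneous network activity
r_hom = simulate(tau_hom, inp)      # homogeneous control activity

# The heterogeneous network integrates input over a broad spread of
# timescales, which is the intrinsic property the abstract credits
# with improved performance on temporal tasks.
print(r_het.shape)
```

Note that the time constants are fixed before any task is defined and are never trained, mirroring the abstract's distinction between intrinsic heterogeneity and task-induced, top-down optimized heterogeneity.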