We introduce Neuro-Vesicles, a framework that augments conventional neural networks with a missing computational layer: a dynamic population of mobile, discrete vesicles that live alongside the network rather than inside its tensors. Each vesicle is a self-contained object v = (c, kappa, l, tau, s) carrying a vector payload, a type label, a location on the graph G = (V, E), a remaining lifetime, and an optional internal state. Vesicles are emitted in response to activity, errors, or meta-signals; migrate along learned transition kernels; probabilistically dock at nodes; locally modify activations, parameters, learning rules, or external memory through content-dependent release operators; and finally decay or are absorbed. This event-based interaction layer reshapes neuromodulation. Instead of applying the same conditioning tensors on every forward pass, modulation emerges from the stochastic evolution of a vesicle population that can accumulate, disperse, trigger cascades, carve transient pathways, and write structured traces into topological memory. Dense, short-lived vesicles approximate familiar tensor mechanisms such as FiLM, hypernetworks, or attention. Sparse, long-lived vesicles resemble a small set of mobile agents that intervene only at rare but decisive moments. We give a complete mathematical specification of the framework, including emission, migration, docking, release, decay, and their coupling to learning; a continuous density relaxation that yields differentiable reaction-diffusion dynamics on the graph; and a reinforcement-learning view in which vesicle control is treated as a policy optimized for downstream performance. We also outline how the same formalism extends to spiking networks and neuromorphic hardware such as the Darwin3 chip, enabling programmable neuromodulation on large-scale brain-inspired computers.
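To make the vesicle lifecycle concrete, the following Python sketch shows one possible discrete realization of the objects and update steps named above (emission, migration along a transition kernel, probabilistic docking with a content-dependent release, and decay). It is an illustration only, not the formal specification given in the paper; the class and parameter names (Vesicle, step, emit, dock_prob, release_scale) and the additive release operator are our own simplifying assumptions.

```python
# Minimal sketch of the vesicle lifecycle described in the abstract.
# Names, shapes, probabilities, and the additive release rule are
# illustrative assumptions, not the paper's specification.

from dataclasses import dataclass, field
import numpy as np


@dataclass
class Vesicle:
    """A vesicle v = (c, kappa, l, tau, s)."""
    c: np.ndarray          # vector payload
    kappa: int             # type label
    l: int                 # current node index on the graph G = (V, E)
    tau: int               # remaining lifetime (steps)
    s: dict = field(default_factory=dict)  # optional internal state


def emit(activations, threshold=1.0, lifetime=5):
    """Emission in response to activity: spawn a vesicle at each highly active node."""
    return [
        Vesicle(c=activations[i].copy(), kappa=0, l=i, tau=lifetime)
        for i in range(len(activations))
        if np.linalg.norm(activations[i]) > threshold
    ]


def step(vesicles, activations, P, dock_prob=0.2, release_scale=0.1, rng=None):
    """One discrete update of the vesicle population.

    vesicles    : list[Vesicle]
    activations : (num_nodes, dim) array of node activations to be modulated
    P           : (num_nodes, num_nodes) row-stochastic transition kernel
    """
    rng = rng or np.random.default_rng()
    survivors = []
    for v in vesicles:
        # Migration: move to a node sampled from the learned transition kernel.
        v.l = rng.choice(len(P), p=P[v.l])
        # Docking and release: with some probability the vesicle docks and
        # perturbs the local activation as a simple content-dependent release.
        if rng.random() < dock_prob:
            activations[v.l] += release_scale * v.c
        # Decay: decrement the lifetime and drop expired vesicles.
        v.tau -= 1
        if v.tau > 0:
            survivors.append(v)
    return survivors, activations
```

In this toy version the release operator only adds a scaled payload to a node's activation; the framework's other release modes (modifying parameters, learning rules, or external memory) would replace that single line with different operators keyed on the vesicle type.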