Spiking Neural Networks (SNNs) are a promising, energy-efficient alternative to standard Artificial Neural Networks (ANNs) and are particularly well-suited to spatio-temporal tasks such as keyword spotting and video classification. However, SNNs have a much lower arithmetic intensity than ANNs and are therefore poorly matched to standard accelerators like GPUs and TPUs. Field Programmable Gate Arrays (FPGAs) are designed for such memory-bound workloads, and here we develop a novel, fully-programmable RISC-V-based system-on-chip (FeNN-DMA), tailored to simulating SNNs on modern UltraScale+ FPGAs. We show that FeNN-DMA has comparable resource usage and energy requirements to state-of-the-art fixed-function SNN accelerators, yet it is capable of simulating much larger and more complex models. Using this functionality, we demonstrate state-of-the-art classification accuracy on the Spiking Heidelberg Digits and Neuromorphic MNIST tasks.