To Spike or Not to Spike? A Quantitative Comparison of SNN and CNN FPGA Implementations. (arXiv:2306.12742v1 [cs.AR])
By: Patrick Plagwitz, Frank Hannig, Jürgen Teich, Oliver Keszocze. Posted: June 23, 2023
Convolutional Neural Networks (CNNs) are widely employed to solve various
problems, e.g., image classification. Due to their compute- and data-intensive
nature, CNN accelerators have been developed as ASICs or on FPGAs. The increasing
complexity of applications has caused the resource costs and energy requirements of
these accelerators to grow. Spiking Neural Networks (SNNs) are an emerging
alternative to CNN implementations, promising higher resource and energy
efficiency. The main research question addressed in this paper is whether SNN
accelerators truly meet these expectations of reduced energy requirements
compared to their CNN equivalents. For this purpose, we analyze multiple SNN
hardware accelerators for FPGAs regarding performance and energy efficiency. We
present a novel encoding scheme for spike event queues and a novel memory
organization technique to further improve SNN energy efficiency. Both
techniques have been integrated into a state-of-the-art SNN architecture and
evaluated on the MNIST, SVHN, and CIFAR-10 datasets and corresponding network
architectures on two differently sized modern FPGA platforms. For small-scale
benchmarks such as MNIST, SNN designs provide little to no latency or
energy efficiency advantage over corresponding CNN implementations. For more
complex benchmarks such as SVHN and CIFAR-10, the trend reverses.
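
The abstract does not spell out the proposed spike event queue encoding, but the general idea behind such queues can be sketched: spikes are stored as (timestamp, neuron address) events in a time-sorted queue, and compressing this representation, for instance by delta-encoding the timestamps, shrinks the memory traffic that typically dominates SNN energy consumption. The following Python sketch is purely illustrative and is not the paper's actual scheme; the function names and the choice of delta encoding are assumptions.

```python
from typing import List, Tuple

# Illustrative sketch only -- NOT the encoding proposed in the paper.
# Spike events are (timestamp, neuron_id) pairs in a time-sorted queue.
# Delta-encoding the timestamps is one generic way to shrink each entry,
# since inter-spike gaps need far fewer bits than absolute times.

Event = Tuple[int, int]  # (timestamp, neuron_id)

def encode_queue(events: List[Event]) -> List[Event]:
    """Replace absolute timestamps with deltas to the previous event."""
    encoded, prev_t = [], 0
    for t, neuron in events:
        encoded.append((t - prev_t, neuron))
        prev_t = t
    return encoded

def decode_queue(encoded: List[Event]) -> List[Event]:
    """Recover absolute timestamps by accumulating the deltas."""
    events, t = [], 0
    for dt, neuron in encoded:
        t += dt
        events.append((t, neuron))
    return events

if __name__ == "__main__":
    spikes = [(3, 17), (5, 2), (5, 40), (9, 17)]
    packed = encode_queue(spikes)
    assert decode_queue(packed) == spikes
    print(packed)  # [(3, 17), (2, 2), (0, 40), (4, 17)]
```

In a hardware queue, the small deltas would let each entry occupy a narrower memory word; again, this is a generic compression technique, not the specific contribution evaluated in the paper.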