Computer Science > Neural and Evolutionary Computing
Title: Gaining the Sparse Rewards by Exploring Lottery Tickets in Spiking Neural Network
(Submitted on 23 Sep 2023 (v1), last revised 28 Mar 2024 (this version, v3))
Abstract: Deploying energy-efficient deep learning algorithms on computation-limited devices, such as robots, remains a pressing issue for real-world applications. Spiking Neural Networks (SNNs), a brain-inspired class of models, offer a promising solution thanks to their low latency and low energy consumption compared with traditional Artificial Neural Networks (ANNs). Despite these advantages, the dense structure of deep SNNs can still incur extra energy cost. The Lottery Ticket Hypothesis (LTH) posits that dense neural networks contain winning Lottery Tickets (LTs), i.e., sparse sub-networks that can match the performance of the full network. Inspired by this, this paper investigates spiking-based LTs (SLTs), examining their unique properties and potential for extreme efficiency. Through comprehensive exploration and careful experiments on SLTs across various dense structures, two significant sparse Rewards are obtained. Moreover, a sparse algorithm tailored to the spiking transformer structure, whose Patch Embedding Projection (ConvPEP) module incorporates convolution operations, is proposed to achieve Multi-level Sparsity (MultiSp). MultiSp refers to (1) patch-number sparsity; (2) ConvPEP weight sparsity and binarization; and (3) ConvPEP activation-layer binarization. Extensive experiments demonstrate that the method achieves extreme sparsity with only a slight performance decrease, paving the way for deploying energy-efficient neural networks in robotics and beyond.
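For readers unfamiliar with how Lottery Tickets are typically found, the sketch below illustrates generic iterative magnitude pruning, the standard procedure behind LTH: repeatedly remove the smallest-magnitude surviving weights, then rewind the survivors to their original initialization. This is only a minimal NumPy illustration of the general LTH idea; the training step is omitted, and it is not the paper's SLT algorithm (the names `magnitude_prune`, `w_init`, and the 20% prune rate are illustrative assumptions).

```python
import numpy as np

def magnitude_prune(weights, mask, prune_frac):
    """One pruning round (generic LTH-style step, not the paper's method):
    zero out the smallest prune_frac of the *surviving* weights."""
    alive = weights[mask]
    k = int(len(alive) * prune_frac)
    if k == 0:
        return mask
    # Threshold at the k-th smallest surviving magnitude.
    threshold = np.sort(np.abs(alive))[k - 1]
    return mask & (np.abs(weights) > threshold)

rng = np.random.default_rng(0)
w = rng.normal(size=100)
w_init = w.copy()                 # LTH: remember the original initialization
mask = np.ones(100, dtype=bool)

# Iterative magnitude pruning: prune 20% of survivors per round, then
# rewind survivors to their initial values (in a real run, retrain between rounds).
for _ in range(3):
    mask = magnitude_prune(w, mask, 0.2)
    w = np.where(mask, w_init, 0.0)  # the surviving sub-network is the "ticket"

sparsity = 1.0 - mask.mean()
print(f"sparsity after 3 rounds: {sparsity:.2f}")  # 100 -> 80 -> 64 -> 52 weights
```

Each round prunes a fixed fraction of the remaining weights, so sparsity compounds geometrically; here three 20% rounds leave 52 of 100 weights (48% sparsity).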
Submission history
From: Hao Cheng
[v1] Sat, 23 Sep 2023 08:24:36 GMT (3491kb,D)
[v2] Thu, 28 Sep 2023 15:20:37 GMT (3491kb,D)
[v3] Thu, 28 Mar 2024 02:24:38 GMT (4628kb,D)