Spiking Neural Networks: Current Trends and Future Potential
Organizers
Bernhard A. Moser *+, Michael Lunglmayr +, Robert Legenstein #
+ Institute of Signal Processing, Johannes Kepler University Linz
# Institute of Theoretical Computer Science, Graz University of Technology
* Software Competence Center Hagenberg (SCCH)
Workshop Description
Spiking neural networks (SNNs) compute in a fundamentally different, more biologically inspired manner than standard artificial neural networks (ANNs). They have recently attracted renewed interest, mainly due to their sparse information processing, larger representational capacity, and potentially much lower computational cost.
This workshop will address the related aspect of sparsity and its impact on energy-efficient (embedded edge) AI solutions.
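To make the notion of sparse, event-driven processing concrete, the following minimal sketch simulates a single leaky integrate-and-fire (LIF) neuron, the simplest common SNN neuron model. All parameter values (threshold, time constant, input statistics) are illustrative assumptions, not taken from any particular workshop contribution; the point is only that the output is a sparse binary spike train, with most timesteps silent.

```python
import numpy as np

def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, tau=20.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron; return a binary spike train."""
    v = v_reset
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        # Leaky integration of the membrane potential (forward Euler step)
        v += dt / tau * (-(v - v_reset) + i_t)
        if v >= v_thresh:       # threshold crossing -> emit a spike
            spikes[t] = 1.0
            v = v_reset         # hard reset after spiking
    return spikes

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 2.5, size=1000)   # noisy suprathreshold drive
spikes = lif_neuron(current)
sparsity = 1.0 - spikes.mean()               # fraction of silent timesteps
print(f"spikes: {int(spikes.sum())} / {len(spikes)} steps, sparsity = {sparsity:.2%}")
```

Because downstream computation is only triggered by the (rare) spike events rather than by every timestep, this sparsity is the usual starting point for arguments about SNN energy efficiency on edge hardware.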
Key Questions to Explore
- Are current approaches to information encoding for SNNs sufficient to address sparsity and energy efficiency in Edge AI, computer vision, and robotics?
- SNNs are bio-inspired, but to what extent should we stick to the biological model to realize low-power edge AI?
- What are the key mathematical differences between SNNs and ANNs or traditional signal processing? Do we need a new foundation?
- Do we need better training algorithms or better hardware support for existing ones?
- What are the hardware challenges in enabling sparse and efficient training?
- Despite recent progress, SNNs remain a niche technology. Are there any SNN-based killer applications coming soon?
- What are the current trends and future potential of SNNs?
Paper Submission
We invite contributions that highlight the current state of work, open questions, and problems. The workshop accepts submissions in the following forms:
- Extended abstract (2–4 pages)
- Full paper (approx. 8 pages)
Every contribution should be presented as a poster in the joint poster session. Submissions should use the AIRoV template.
Deadlines
- 28th of February: Submission of contributions (extended abstract or full paper)
- 13th of March: Notification of acceptance of paper contributions
📢 Announcement: Breakout session with all-hands discussions on the following topics:
- Energy Efficiency: Where do the energy savings of Spiking Neural Networks (SNNs) actually come from? How much analog computation (e.g., 10%, 90%) is needed to achieve 100× energy efficiency without compromising robustness due to noise in hybrid analog–digital chips? Does analog SNN computing make practical sense, and how do the achievable energy savings compare fairly to optimized conventional approaches (ANNs with Shannon sampling) on edge devices?
- Mathematical Foundations: Can we expect a comprehensive mathematical framework for SNNs in the near future, covering capabilities, error bounds, and the required complexity of neuron models?
- Learning Algorithms: Is there a better alternative to backpropagation for training SNNs? Do biologically inspired mechanisms such as STDP or e-prop genuinely enable more intelligent behavior, or do they distract from achieving superior ANN-level performance and temporal resolution?
- Hybrid Architectures: Do hybrid ANN/SNN architectures offer real advantages? How viable are standard-neuromorphic hybrids using non-standard, event-based sampling for reducing data load and bandwidth? Can combinations like standard imaging sensors with neuromorphic add-ons, FPGA hybrids, or quantum sensor fusion yield breakthroughs, or are they still largely speculative?
- Sparse Activations: Are the sparse activations in SNNs truly advantageous over conventional edge AI architectures (CNNs, GPUs, TPUs), or are their benefits overstated by limited lab demonstrations?
- Adoption Barriers: What currently limits large-scale SNN deployment despite the projected $61B edge AI market by 2035: high R&D costs, strong conventional competition, lack of standards, or immature software ecosystems?
- Commercial Breakthroughs: Which field is most likely to drive SNNs toward commercial success in 2026 under strict latency and power constraints (<1ms end-to-end, <100mW edge)? Autonomous vehicles, hearing aids, or another emerging domain?
Program Committee
- Claudio Gallicchio – University of Pisa (Reservoir computing, randomized neural networks)
- Robert Legenstein – TU Graz (Computational neuroscience)
- Michael Lunglmayr – JKU Linz (Hardware-software co-design, edge AI)
- Paolo Meloni – University of Cagliari (FPGAs, on-chip architectures, edge AI)
- Angeliki Pantazi – IBM Research (Principal Research Scientist; Manager, Emerging Computing and Circuits)
- Bernhard A. Moser – SCCH & JKU Linz (Mathematical foundations of event-driven computing)
- Osvaldo Simeone – King’s College London (Neuromorphic sensing and communications)
- Mihai Petrovici – University of Bern (Brain-inspired computing)
- Sebastian Otte – University of Lübeck (Efficient learning)