Workshop 1

SNNSys Workshop on Spiking Neural Networks

Bernhard A. Moser1,2, Michael Lunglmayr1, Robert Legenstein3

1 Institute of Signal Processing, Johannes Kepler University Linz

2 Software Competence Center Hagenberg (SCCH)

3 Institute of Theoretical Computer Science, Graz University of Technology

Schedule

Tuesday, March 26th

13:30 - 14:30  Invited Talk, M. Petrovici: Local learning in physical neuronal networks
14:30 - 15:00  B. A. Moser: On Leaky-Integrate-and-Fire as Spike-Train-Quantization Operator on Dirac-Superimposed Continuous-Time Signals
15:00 - 15:30  Coffee Break
15:30 - 16:00  S. Otte: Understanding the Convergence in Balanced Resonate-and-Fire Neurons
16:00 - 16:30  T. Gierlich: Weight transport through spike timing for robust local gradients
16:30 - 17:00  M. Lunglmayr: Spike-based QRS complex detection in ECG signals

Wednesday, March 27th

09:30 - 10:00  S. Otte: Flexible and Efficient Surrogate Gradient Modeling with Forward Gradient Injection
10:00 - 10:30  A. Weissmann: Concept for Dense Network of Single-Chip Radar Sensors for Anonymous Human Detection and Tracking using Neuromorphic Computing

Workshop Description

Spiking neural networks (SNNs) compute in a fundamentally different and more biologically inspired manner than standard artificial neural networks (ANNs). They have recently gained renewed interest, mainly due to their sparse information processing, larger representation capacity, and potentially much lower computational costs. For this workshop, we want to address the related aspect of sparsity and its impact on energy-efficient (embedded edge) AI solutions.
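As a minimal illustration of this event-driven, sparse style of computation, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron in Python; all parameter values (time constant, threshold, input drive) are arbitrary choices for this example and are not taken from any workshop contribution.

    import numpy as np

    # Minimal leaky integrate-and-fire (LIF) neuron, for illustration only.
    # Parameter values (tau, v_th, input drive) are arbitrary example choices.
    def lif_simulate(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
        """Integrate an input current and return a binary spike train."""
        v = 0.0
        spikes = np.zeros_like(input_current)
        for t, i_t in enumerate(input_current):
            v += dt / tau * (-v + i_t)   # leaky integration of the membrane potential
            if v >= v_th:                # threshold crossing: emit a spike ...
                spikes[t] = 1.0
                v = v_reset              # ... and reset the membrane potential
        return spikes

    # A constant drive over 100 time steps yields only a handful of spikes:
    # the signal is carried by sparse, discrete events rather than dense activations.
    print(int(lif_simulate(np.full(100, 1.5)).sum()))

Because such a neuron only communicates at the (few) spike times, computation and data movement can be skipped between events, which is the source of the potential energy savings mentioned above.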
The following is a non-exhaustive list of questions we aim to address through our invited talks, panels, and accepted papers:

  • Are current approaches to information encoding for SNNs sufficient to address sparsity and energy efficiency in Edge AI, computer vision and robotics applications?
  • SNNs are bio-inspired, but to what extent does it make sense to stick to the biological model to realize low-power edge AI solutions?
  • What are the key mathematical issues that distinguish SNNs from ANNs and standard signal processing? Is the existing mathematical foundation sufficient, or do we need something new?
  • Do we need better training algorithms for SNNs or better hardware support for the existing training algorithms?
  • What are the challenges of hardware design for sparse and efficient training of SNNs?
  • Despite recent advances, SNNs still play a niche role. Are there SNN-based “killer” applications today, or are any on the horizon?
  • What are the current trends and the future potential of SNNs?

Following these questions, the workshop invites contributions on, but not limited to, the following topics:

  • Mathematical foundation of SNNs

  • Learning algorithms for SNNs

  • SNN architectures

  • Brain-inspired computing

  • Neuromorphic integrated sensing and communications

  • Hardware/Software co-design for SNN systems

  • Analog hardware for SNNs

  • Energy efficiency of SNNs

  • Applications of SNNs

Invited Talk: Mihai Petrovici, “Local learning in physical neuronal networks”


Paper submission (for the template, see the menu item Call for Papers):

  • Full Paper (8 pages, AIRoV format), for original unpublished work
  • Extended Abstract (2–4 pages), for ongoing or partly published work

Deadlines

  • 23rd of February 2024 (extended from 16th of February 2024): Paper Submission

  • End of February 2024: Acceptance Notification

  • 26th – 27th of March 2024: AIRoV Symposium


List of Program Committee Members

  • Sander Bohte (Prof. at the University of Amsterdam/Centrum Wiskunde & Informatica (CWI); focus: bio-inspired neural networks)
  • Claudio Gallicchio (Assoc. Prof. at the Department of Computer Science of the University of Pisa; focus: reservoir computing, randomization-based neural networks and learning systems)
  • Robert Legenstein (Head of Institute of Theoretical Computer Science at TU Graz; focus: computational neuroscience)
  • Michael Lunglmayr (Assoc. Prof. at Inst. of Signal Processing at Johannes Kepler University; focus: efficient algorithms and hardware architectures, edge AI)
  • Fabian Lurz (Prof. at University of Magdeburg; focus: SNNs for radar signal processing)
  • Paolo Meloni (Assoc. Prof. at the Department of Electrical and Electronic Engineering (DIEE) at the University of Cagliari; focus: multi-core on-chip architectures and FPGAs, edge AI)
  • Bernhard A. Moser (Software Competence Center Hagenberg and Inst. of Signal Processing at Johannes Kepler University; focus: mathematical foundation of event-driven computing)
  • Osvaldo Simeone (Prof. at the Department of Informatics at King’s College London; focus: neuromorphic integrated sensing and communications)
  • Angeliki Pantazi (Principal Research Staff Member and Research Manager at IBM Research – Zurich, Switzerland; focus: neuromorphic architecture and learning)
  • Bipin Rajendran (Prof. of Intelligent Computing Systems in the Department of Engineering, King’s College London; focus: hardware-software co-design, algorithms for event-driven computing)
  • Mihai Petrovici (Group Leader of the NeuroTMA Lab at the University of Bern; focus: computational neuroscience and brain-inspired computing)
  • Robert Weigel (Prof. at the University of Erlangen–Nürnberg; focus: circuit and system techniques for the realization of microelectronic systems)

Image on top generated using Stable Diffusion with the keywords “spiking neural network” and a seed of 1680110698. SNNSys logo added by the workshop chairs.