Implementation of a spiking neuron for coincidence detection

Spiking Neural Networks: An Overview

Spiking Neural Networks (SNNs) are considered the third generation of artificial neural networks. Spiking neurons differ from classical artificial neurons in several aspects:

  • They process data in the form of spikes: binary signals made of 0s and 1s.
  • They are equipped with a membrane that carries a potential. This potential increases when input spikes arrive, then decays over time.
  • Their activation function is discontinuous: to fire and transmit a spike to the next cells, their membrane potential must reach a certain threshold. When it does, a spike is emitted to the downstream neurons, and the membrane potential resets.
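As a minimal illustration (hypothetical code, not from the article), this discontinuous activation can be written as a thresholded step function:

```python
import numpy as np

def fire(potential, threshold=1.0):
    """Discontinuous activation: emit a spike (1) only when the membrane
    potential reaches the threshold, stay silent (0) otherwise."""
    return (potential >= threshold).astype(np.uint8)

# A short membrane-potential trace that crosses the threshold once.
u = np.array([0.2, 0.6, 1.1, 0.3])
print(fire(u))  # -> [0 0 1 0]
```

Because the output jumps from 0 to 1 at the threshold, this function has no useful gradient, which is exactly the learning difficulty discussed below.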

These neurons therefore have several advantages. Firstly, they are closer to biological neurons. Additionally, they communicate using sparse binary spikes, which makes them more efficient in terms of computational cost. Moreover, through their membrane, they naturally incorporate the temporal aspect of data, which could make them suitable candidates for the study of dynamic data. However, they also have disadvantages, especially in terms of learning: since their activation function is discontinuous, applying gradient descent to these networks is challenging.

Biological neurons

A neuron consists of a cell body, an axon, and dendrites. The region of exchange between two neurons is called a synapse. Neurons located upstream of the synapse are referred to as presynaptic, while those on the output side are called postsynaptic.

Let’s focus on a synapse. Electrical information travels along the axon of a presynaptic neuron. When it reaches the end of the axon, the electrical signal stimulates vesicles that release neurotransmitters into the synapse. These neurotransmitters cross the synapse and bind to receptors on the dendrites of the postsynaptic neuron. These receptors then capture this chemical message and transform it back into an electrical signal. In the cell body, the electrical messages collected by all dendrites are summed and integrated as potential. When this potential surpasses a certain threshold, the cell membrane discharges, and an electrical impulse is generated. This impulse is transmitted along the axon until it reaches the synapses between the current neuron and the next ones.

LIF Spiking Neuron Model

A Leaky Integrate-and-Fire (LIF) spiking neuron has a membrane with a potential U at each moment t governed by the following update:

U[t+1] = β·U[t] + W·X[t+1]

S[t+1] = 1 if U[t+1] > Uthr, else 0 (and U is then reset)

Where U represents the potential, β ∈ ℝ is the membrane decay factor, X ∈ {0,1}^N is the input spike vector from N synapses, W ∈ ℝ^N is the vector of synaptic weights, S ∈ {0,1} is the activation, and Uthr is the membrane threshold.

In this model, time is discretized. At each time step, the potential is updated: it decays by a factor β and increases based on the received spikes, modulated by the synaptic weights (W·X). Then a test is performed: if the potential exceeds the threshold Uthr, a spike is emitted, and the potential is reset to its resting value.
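The update loop described above can be sketched as follows (a simplified, illustrative implementation; the values of β, Uthr, and the weights are made up, and the reset goes back to a resting potential of 0 as in the description):

```python
import numpy as np

def lif_step(u, x, w, beta=0.9, u_thr=1.0):
    """One discrete LIF time step: leak by beta, integrate the weighted
    input spikes W.X, then fire and reset if the threshold is crossed."""
    u = beta * u + np.dot(w, x)   # decay + integration of input spikes
    s = int(u >= u_thr)           # discontinuous activation
    if s:
        u = 0.0                   # reset to the resting potential
    return u, s

# Drive the neuron with 5 synapses that all spike at every time step.
w = np.full(5, 0.15)              # synaptic weights (illustrative values)
u, spikes = 0.0, []
for _ in range(10):
    x = np.ones(5, dtype=np.uint8)
    u, s = lif_step(u, x, w)
    spikes.append(s)
print(spikes)  # the neuron charges for one step, then fires: [0, 1, 0, 1, ...]
```

With a constant input of 0.75 per step and a threshold of 1, the potential needs two steps of accumulation before each spike, hence the alternating output.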

STDP as the Learning Function

To train spiking neurons, we can use the Spike-Timing-Dependent Plasticity (STDP) algorithm. This unsupervised learning mechanism is a local rule directly inspired by Hebb’s postulate, ‘Neurons that fire together, wire together.’

Consider a pair of presynaptic and postsynaptic spikes (Ipre, Ipost). Since these are temporal signals, two distinct scenarios exist. Either Ipre precedes Ipost and has contributed to generating Ipost, suggesting a legitimate connection between the two neurons; or Ipost precedes Ipre, meaning that Ipost did not need the Ipre spike to be generated, which implies that the connection between these two neurons is not necessary.

Ultimately, the STDP algorithm adjusts the synaptic weights based on these two scenarios. In the first case, it strengthens the connection (potentiation), while in the second, it weakens it (depression). The amount of potentiation (or depression) is proportional to the lead (or lag) of the Ipre spike relative to Ipost. These adjustments depend, on the one hand, on the time constants τpot and τdep, which indirectly define the size of the considered time window, and, on the other hand, on the factors Apot and Adep, which can be interpreted as learning rates.
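A pairwise version of this rule can be sketched as follows (the exponential window is the classical STDP form; the constants are illustrative, not the article's):

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_pot=0.03, a_dep=0.02, tau_pot=20.0, tau_dep=20.0):
    """Weight change for one (pre, post) spike pair, times in milliseconds.
    Pre before post -> potentiation; post before pre -> depression.
    Both effects decay exponentially with the time lag between the spikes."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_pot * np.exp(-dt / tau_pot)   # potentiation
    return -a_dep * np.exp(dt / tau_dep)       # depression (dt < 0)

# Pre spike 5 ms before post: strengthen; 5 ms after: weaken.
print(stdp_dw(10.0, 15.0) > 0)  # True
print(stdp_dw(15.0, 10.0) < 0)  # True
```

The closer the two spikes are in time, the larger the adjustment, which is what makes the rule local and sensitive to causality.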

An Example of Use: Coincidence Detection

The work presented in this section is a replication of the experiments from T. Masquelier, R. Guyonneau, and S. J. Thorpe, “Spike Timing Dependent Plasticity Finds the Start of Repeating Patterns in Continuous Spike Trains”.

Data Generation:

To simulate brain activity, we randomly generate spikes for N presynaptic neurons over 40,000 milliseconds. Half of these neurons activate purely randomly. The other half also behaves mostly randomly, but regularly we force them to activate according to a specific pattern. This pattern is chosen by copying a 50ms segment of the activity of these neurons during their random operation, which is then inserted at various locations in the data. Here is a graphical representation of this data:

[Figure: visualization of the generated spike data]

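The generation procedure described above can be sketched as follows (scaled-down, hypothetical parameters for readability; the article uses far more neurons, 40,000 ms of activity, and pattern locations chosen less regularly than here):

```python
import numpy as np

rng = np.random.default_rng(42)
n_neurons, duration, pattern_len = 20, 2000, 50   # neurons, ms, pattern length (ms)
p_spike = 0.02                                    # spike probability per 1 ms bin

# Random background activity for all neurons, in 1 ms time bins.
spikes = (rng.random((n_neurons, duration)) < p_spike).astype(np.uint8)

# The pattern is a copy of a 50 ms segment of the random activity
# of the first half of the neurons...
half = n_neurons // 2
pattern = spikes[:half, :pattern_len].copy()

# ...pasted back at several locations for those same neurons.
for start in range(200, duration - pattern_len, 400):
    spikes[:half, start:start + pattern_len] = pattern
```

The second half of the neurons remains purely random throughout, giving the postsynaptic neuron both informative and uninformative inputs.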
These data are then analyzed by a spiking neuron equipped with STDP. Let’s observe its behavior during learning. At the beginning of the learning process, the neuron is not selective and fires regardless of whether the pattern is presented or not. With each new emitted spike, the neuron adjusts its weights using the STDP rule. This allows it to become selective by the end of the learning process. It can be observed that the neuron emits a spike only when the pattern is presented and not otherwise.

If we look at the evolution of synaptic weights during learning, several behaviors can be distinguished. Firstly, we can observe that during the first 8000 milliseconds, the curves are frequently updated. Indeed, during this period, the neuron fires very often because the sum of the signals it receives as input is too strong relative to the threshold of its membrane. However, with each activation, the neuron adjusts its synaptic weights by applying STDP. This results, on average, in a decrease in synaptic weights because we have chosen the parameters Apot, Adep, τpot, τdep such that Apot·τpot < Adep·τdep.

After 8000 ms, the neuron’s activation frequency significantly decreases, and its activations become more correlated with the presence of the pattern: the neuron becomes selective. Two distinct behaviors emerge in the synaptic weight evolution curves. Firstly, the synaptic weights of neurons that activate purely randomly (blue curves) continue to decrease towards 0. For neurons involved in the pattern (red curves), some increase until reaching their maximum value of 1, while others decrease more rapidly towards 0. One might have expected the weights of all neurons involved in the pattern to increase. To understand this, one needs to look more closely at the neuron’s activation at the end of learning.

In particular, we can see that the neuron activates with a slight delay after the pattern begins. It takes a little time for the neuron to recognize that it is indeed the pattern and that the ordered activation of these neurons is not random. As a result, presynaptic neurons activating at the beginning of the pattern will be potentiated, while those activating just after the postsynaptic neuron has detected the pattern will be depressed. To visualize this, let’s classify the presynaptic neurons based on their arrival order in the pattern and observe the evolution of their weights.

In this graph, the 1000 neurons involved in the pattern are ranked from 0 to 999 based on their first appearance in the pattern. Since the pattern lasts 50 ms, this first appearance time varies from 0 to 49 ms, and a color ranging from purple to yellow is assigned to it. The size of the points is directly proportional to the synaptic weight: a clearly visible point has a weight close to 1, while an invisible point indicates a synaptic weight close to 0.
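The ranking used in this graph can be computed as follows (a hypothetical helper on stand-in random data; `np.argmax` on a binary row returns the index of the first 1 along the time axis):

```python
import numpy as np

# Toy pattern: 1000 neurons over a 50 ms window (random stand-in data).
rng = np.random.default_rng(0)
pattern = (rng.random((1000, 50)) < 0.1).astype(np.uint8)

# First spike time of each neuron inside the pattern (0..49).
# (A neuron that never spikes would also map to 0 here; fine for this sketch.)
first_t = np.argmax(pattern, axis=1)

# Rank neurons from 0 to 999 by that first appearance time.
order = np.argsort(first_t, kind="stable")
ranked_first_t = first_t[order]   # non-decreasing sequence of first-spike times
```

Plotting the weight of neuron `order[i]` at rank `i`, colored by `ranked_first_t[i]`, reproduces the layout of the graph described above.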

It can be observed that during learning, the synaptic weights of the first neurons (roughly ranks 0 to 400) increase. Their color indicates that these are neurons whose first appearance falls before about 7 ms. The neurons ranked just after them are almost completely erased. Neurons appearing towards the end of the pattern are also depressed, but to a lesser extent, as expected when applying STDP.