Tutorial 1

In this tutorial, we are going to build an SNN capable of classifying MNIST by copying the weights obtained by training the following simple ANN using TensorFlow:

[Figure: "Using GeNN for spike-based machine learning" — architecture diagram of the simple fully-connected ANN]

Clearly, this is far from a state-of-the-art architecture, but it still achieves 97.6% accuracy on MNIST.

Install

Download wheel file

[1]:
if "google.colab" in str(get_ipython()):
    !gdown 1wUeynMCgEOl2oK2LAd4E0s0iT_OiNOfl
    !pip install pygenn-5.1.0-cp311-cp311-linux_x86_64.whl
    %env CUDA_PATH=/usr/local/cuda

    !rm -rf /content/ml_genn-ml_genn_2_3_0
    !wget https://github.com/genn-team/ml_genn/archive/refs/tags/ml_genn_2_3_0.zip
    !unzip -q ml_genn_2_3_0.zip
    !pip install ./ml_genn-ml_genn_2_3_0/ml_genn
Downloading...
From: https://drive.google.com/uc?id=1wUeynMCgEOl2oK2LAd4E0s0iT_OiNOfl
To: /content/pygenn-5.1.0-cp311-cp311-linux_x86_64.whl
100% 8.49M/8.49M [00:00<00:00, 105MB/s]
Processing ./pygenn-5.1.0-cp311-cp311-linux_x86_64.whl
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.11/dist-packages (from pygenn==5.1.0) (1.26.4)
Requirement already satisfied: psutil in /usr/local/lib/python3.11/dist-packages (from pygenn==5.1.0) (5.9.5)
Requirement already satisfied: setuptools in /usr/local/lib/python3.11/dist-packages (from pygenn==5.1.0) (75.1.0)
pygenn is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel.
env: CUDA_PATH=/usr/local/cuda
--2025-01-21 10:45:19--  https://github.com/genn-team/ml_genn/archive/refs/tags/ml_genn_2_3_0.zip
Resolving github.com (github.com)... 140.82.116.4
Connecting to github.com (github.com)|140.82.116.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/genn-team/ml_genn/zip/refs/tags/ml_genn_2_3_0 [following]
--2025-01-21 10:45:20--  https://codeload.github.com/genn-team/ml_genn/zip/refs/tags/ml_genn_2_3_0
Resolving codeload.github.com (codeload.github.com)... 140.82.113.10
Connecting to codeload.github.com (codeload.github.com)|140.82.113.10|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/zip]
Saving to: ‘ml_genn_2_3_0.zip.1’

ml_genn_2_3_0.zip.1     [  <=>               ] 681.24K  1.96MB/s    in 0.3s

2025-01-21 10:45:20 (1.96 MB/s) - ‘ml_genn_2_3_0.zip.1’ saved [697592]

Processing ./ml_genn-ml_genn_2_3_0/ml_genn
  Preparing metadata (setup.py) ... done
Requirement already satisfied: pygenn<6.0.0,>=5.1.0 in /usr/local/lib/python3.11/dist-packages (from ml_genn==2.3.0) (5.1.0)
Requirement already satisfied: enum-compat in /usr/local/lib/python3.11/dist-packages (from ml_genn==2.3.0) (0.0.3)
Requirement already satisfied: tqdm>=4.27.0 in /usr/local/lib/python3.11/dist-packages (from ml_genn==2.3.0) (4.67.1)
Requirement already satisfied: deprecated in /usr/local/lib/python3.11/dist-packages (from ml_genn==2.3.0) (1.2.15)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.11/dist-packages (from pygenn<6.0.0,>=5.1.0->ml_genn==2.3.0) (1.26.4)
Requirement already satisfied: psutil in /usr/local/lib/python3.11/dist-packages (from pygenn<6.0.0,>=5.1.0->ml_genn==2.3.0) (5.9.5)
Requirement already satisfied: setuptools in /usr/local/lib/python3.11/dist-packages (from pygenn<6.0.0,>=5.1.0->ml_genn==2.3.0) (75.1.0)
Requirement already satisfied: wrapt<2,>=1.10 in /usr/local/lib/python3.11/dist-packages (from deprecated->ml_genn==2.3.0) (1.17.0)
Building wheels for collected packages: ml_genn
  Building wheel for ml_genn (setup.py) ... done
  Created wheel for ml_genn: filename=ml_genn-2.3.0-py3-none-any.whl size=131136 sha256=d03806ed44b34903a04c0c1132228e321f70b53388c57b075bd090a60b855119
  Stored in directory: /tmp/pip-ephem-wheel-cache-3nqx2nqg/wheels/e6/30/c3/d2812036f97eda07dd49782a8c8707b279525e4d30ab961677
Successfully built ml_genn
Installing collected packages: ml_genn
  Attempting uninstall: ml_genn
    Found existing installation: ml_genn 2.3.0
    Uninstalling ml_genn-2.3.0:
      Successfully uninstalled ml_genn-2.3.0
Successfully installed ml_genn-2.3.0

Download pre-trained weights

[2]:
!gdown 1cmNL8W0QZZtn3dPHiOQnVjGAYTk6Rhpc
!gdown 131lCXLEH6aTXnBZ9Nh4eJLSy5DQ6LKSF
Downloading...
From: https://drive.google.com/uc?id=1cmNL8W0QZZtn3dPHiOQnVjGAYTk6Rhpc
To: /content/weights_0_1.npy
100% 402k/402k [00:00<00:00, 86.4MB/s]
Downloading...
From: https://drive.google.com/uc?id=131lCXLEH6aTXnBZ9Nh4eJLSy5DQ6LKSF
To: /content/weights_1_2.npy
100% 5.25k/5.25k [00:00<00:00, 20.1MB/s]

Install MNIST package

[3]:
!pip install mnist
Collecting mnist
  Downloading mnist-0.2.2-py2.py3-none-any.whl.metadata (1.6 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.11/dist-packages (from mnist) (1.26.4)
Downloading mnist-0.2.2-py2.py3-none-any.whl (3.5 kB)
Installing collected packages: mnist
Successfully installed mnist-0.2.2

Build model

Import standard modules and required mlGeNN classes

[4]:
import mnist
import numpy as np

from ml_genn import InputLayer, Layer, SequentialNetwork
from ml_genn.compilers import InferenceCompiler
from ml_genn.neurons import IntegrateFire, IntegrateFireInput
from ml_genn.connectivity import Dense
from ml_genn.callbacks import SpikeRecorder

Because our network is entirely feedforward, we can define it as a SequentialNetwork, where each layer is automatically connected to the previous one. We are going to convert MNIST digits to spikes by treating the intensity of each pixel (multiplied by a scaling factor) as the input current to an integrate-and-fire neuron. For our hidden and output layers we are going to use very simple integrate-and-fire neurons, whose transfer function best matches that of the ReLU neurons our ANN was trained with. Finally, we are going to read classifications from the output layer by simply counting spikes.

[5]:
# Create sequential model
network = SequentialNetwork()
with network:
    input = InputLayer(IntegrateFireInput(v_thresh=5.0), 784)
    Layer(Dense(weight=np.load("weights_0_1.npy")),
          IntegrateFire(v_thresh=5.0))
    output = Layer(Dense(weight=np.load("weights_1_2.npy")),
                   IntegrateFire(v_thresh=5.0, readout="spike_count"))

In mlGeNN, a compiler class turns an abstract network description into something that can actually be used for training or inference. Here, as we are performing inference, we use the InferenceCompiler and specify the simulation timestep, the batch size and how many timesteps to evaluate each example for.

[6]:
compiler = InferenceCompiler(dt=1.0, batch_size=128,
                             evaluate_timesteps=100)
compiled_net = compiler.compile(network)
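As a rough sanity check on what these settings imply (assuming the standard 10,000-image MNIST test set, which is not stated until the next cell):

```python
import math

n_test = 10_000          # size of the standard MNIST test set (assumption)
batch_size = 128
evaluate_timesteps = 100
dt = 1.0                 # simulation timestep in ms

# How many batches one evaluation pass takes, and how long each
# example is simulated for in model time
n_batches = math.ceil(n_test / batch_size)
sim_time_per_example = evaluate_timesteps * dt

print(n_batches)             # 79 batches per evaluation pass
print(sim_time_per_example)  # 100.0 ms of simulated time per example
```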

We can then load the MNIST testing data and evaluate the compiled pre-trained network on it. The grayscale pixel values of the MNIST dataset, scaled by a factor (here 0.01), are passed into the input layer, where the IntegrateFireInput neurons translate them into spikes. The testing labels are passed as a simple array of integers from 0 to 9, corresponding to the default sparse categorical accuracy metric.

[7]:
# Load testing data
mnist.datasets_url = "https://storage.googleapis.com/cvdf-datasets/mnist/"
testing_images = np.reshape(mnist.test_images(), (-1, 784))
testing_labels = mnist.test_labels()

with compiled_net:
    # Evaluate model on numpy dataset
    metrics, _ = compiled_net.evaluate({input: testing_images * 0.01},
                                       {output: testing_labels})
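Under the hood, the "spike_count" readout amounts to an argmax over the per-class output spike counts accumulated during each example. A small NumPy sketch with made-up spike counts (all values here are hypothetical, not taken from the real run):

```python
import numpy as np

rng = np.random.default_rng(1234)

# Hypothetical per-example spike counts for a batch of 5 examples
# and 10 output neurons; in the real run these come from the
# simulated output layer
spike_counts = rng.integers(0, 20, size=(5, 10))
labels = np.array([3, 1, 4, 1, 5])

# Each example is classified as the output neuron that fired most,
# and accuracy is the fraction of matches with the labels
predicted = np.argmax(spike_counts, axis=1)
accuracy = np.mean(predicted == labels)
print(f"Accuracy = {100 * accuracy}%")
```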