ml_genn.compilers package
Compilation in mlGeNN refers to the process of converting
ml_genn.Network
and ml_genn.SequentialNetwork
objects into a GeNN model which can be simulated. This module contains a
variety of compiler classes which create GeNN models for inference and
training as well as several compiled model classes which can be subsequently
used to interact with the GeNN models.
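For example, a typical workflow is to build a Network, compile it with a suitable compiler and then use the resulting compiled model. The following is a minimal sketch, assuming pre-trained weights in weights.npy and MNIST-like test data in x_test/y_test (all illustrative names, not part of this API):

```python
import numpy as np

from ml_genn import Connection, Network, Population
from ml_genn.compilers import InferenceCompiler
from ml_genn.connectivity import Dense
from ml_genn.neurons import IntegrateFire, IntegrateFireInput

# Build a simple two-layer network (parameters are illustrative)
network = Network()
with network:
    input_pop = Population(IntegrateFireInput(v_thresh=5.0), 784)
    output_pop = Population(IntegrateFire(v_thresh=5.0,
                                          readout="spike_count"), 10)
    Connection(input_pop, output_pop, Dense(weight=np.load("weights.npy")))

# Compile the network into a GeNN model for inference
compiler = InferenceCompiler(evaluate_timesteps=100)
compiled_net = compiler.compile(network)

# Evaluate accuracy on the test set
with compiled_net:
    metrics, _ = compiled_net.evaluate({input_pop: x_test},
                                       {output_pop: y_test})
    print(f"Accuracy = {100.0 * metrics[output_pop].result}%")
```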
- class ml_genn.compilers.CompiledFewSpikeNetwork(genn_model, neuron_populations, connection_populations, communicator, k, pop_pipeline_depth)
Bases:
CompiledNetwork
Compiled network used for performing inference with ANNs converted to SNNs using FewSpike encoding [Stockl2021].
- Parameters:
k (int)
pop_pipeline_depth (dict)
- evaluate(x, y, metrics='sparse_categorical_accuracy', callbacks=[BatchProgressBar()])
Evaluate an input in numpy format against labels
- Parameters:
x (dict) – Dictionary of inputs to inject into input neuron populations.
y (dict) – Dictionary of labels to compare to readout from output neuron population.
metrics – Metrics to calculate.
callbacks – List of callbacks to run during evaluation.
- evaluate_batch_iter(inputs, outputs, data, num_batches=None, metrics='sparse_categorical_accuracy', callbacks=[BatchProgressBar()])
Evaluate an input in iterator format against labels.
- Parameters:
inputs – Input population(s)
outputs – Output population(s)
data (Iterator) – Iterator which produces batches of inputs and labels
num_batches (int | None) – Number of batches the iterator will produce
metrics – Metrics to calculate.
callbacks – List of callbacks to run during evaluation.
- class ml_genn.compilers.CompiledInferenceNetwork(genn_model, neuron_populations, connection_populations, communicator, evaluate_timesteps, base_callbacks, reset_time_between_batches=True)
Bases:
CompiledNetwork
- Parameters:
evaluate_timesteps (int)
base_callbacks (list)
reset_time_between_batches (bool)
- evaluate(x, y, metrics='sparse_categorical_accuracy', callbacks=[BatchProgressBar()])
Evaluate metrics on a numpy dataset
- Parameters:
x (dict) – Dictionary of testing inputs
y (dict) – Dictionary of testing labels to compare predictions against
metrics (dict | Metric | str) – Metrics to calculate.
callbacks – List of callbacks to run during inference.
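For example, evaluation can also record additional state via callbacks. A sketch, assuming compiled_net came from an InferenceCompiler and hidden_pop is a recurrent population from the original network (illustrative names):

```python
from ml_genn.callbacks import SpikeRecorder

with compiled_net:
    # Record hidden spikes while evaluating; recorded data is returned
    # in the callback data dictionary under the given key
    metrics, cb_data = compiled_net.evaluate(
        {input_pop: x_test}, {output_pop: y_test},
        callbacks=[SpikeRecorder(hidden_pop, key="hidden_spikes")])

    spike_times, spike_ids = cb_data["hidden_spikes"]
```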
- evaluate_batch(x, y, metrics='sparse_categorical_accuracy', callbacks=[])
- Parameters:
x (dict)
y (dict)
- evaluate_batch_iter(inputs, outputs, data, num_batches=None, metrics='sparse_categorical_accuracy', callbacks=[BatchProgressBar()])
Evaluate metrics on an iterator that provides batches of a dataset
- Parameters:
inputs – Input population(s)
outputs – Output population(s)
data (Iterator) – Iterator which produces batches of inputs and labels
num_batches (int | None) – Number of batches iterator will produce
metrics (dict | Metric | str) – Metrics to calculate.
callbacks – List of callbacks to run during inference.
- predict(x, outputs, callbacks=[BatchProgressBar()])
Generate predictions from a numpy dataset
- Parameters:
x (dict) – Dictionary of testing inputs
outputs (Sequence | InputLayer | Layer | Population) – Output population(s) to extract predictions from
callbacks – List of callbacks to run during inference.
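A sketch of generating raw predictions rather than metrics, assuming the returned dictionary maps each output population to an array of per-example readouts (names illustrative; check the return structure in your version):

```python
import numpy as np

with compiled_net:
    # Assumed return: dict mapping output populations to readout arrays
    outputs = compiled_net.predict({input_pop: x_test}, [output_pop])
    predicted_labels = np.argmax(outputs[output_pop], axis=1)
```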
- class ml_genn.compilers.CompiledNetwork(genn_model, neuron_populations, connection_populations, communicator, num_recording_timesteps=None)
Bases:
object
Base class for all compiled networks.
- custom_update(name)
Perform custom update.
- Parameters:
name (str) – Name of custom update
- get_readout(outputs)
Get output from population readouts
- Parameters:
outputs (Sequence | InputLayer | Layer | Population)
- Return type:
ndarray | List[ndarray]
- reset_time()
Reset the GeNN model's internal timestep to 0.
- set_input(inputs)
Copy input data to GPU
- Parameters:
inputs (dict) – Dictionary mapping input populations or layers to data to copy to them
- step_time(callback_list=None)
Simulate one timestep
- Parameters:
callback_list (CallbackList | None) – Callbacks to potentially execute at start and end of timestep
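Taken together, these low-level methods let you drive a compiled network manually instead of using the higher-level evaluate or train loops. A minimal sketch, assuming compiled_net, input_pop and output_pop from the examples above and a single batch of input data x_batch (illustrative):

```python
with compiled_net:
    compiled_net.reset_time()
    compiled_net.set_input({input_pop: x_batch})

    # Simulate 100 timesteps
    for _ in range(100):
        compiled_net.step_time()

    # Read the output population's readout (e.g. spike counts)
    readout = compiled_net.get_readout(output_pop)
```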
- class ml_genn.compilers.CompiledTrainingNetwork(genn_model, neuron_populations, connection_populations, communicator, losses, example_timesteps, base_train_callbacks, base_validate_callbacks, optimisers, checkpoint_connection_vars, checkpoint_population_vars, reset_time_between_batches=True)
Bases:
CompiledNetwork
- Parameters:
example_timesteps (int)
base_train_callbacks (list)
base_validate_callbacks (list)
optimisers (List[Tuple])
checkpoint_connection_vars (list)
checkpoint_population_vars (list)
reset_time_between_batches (bool)
- save(keys=(), serialiser='numpy')
- save_connectivity(keys=(), serialiser='numpy')
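A hedged sketch of checkpointing after training, based on the checkpoint_connection_vars and checkpoint_population_vars constructor arguments; the assumption that the keys tuple tags the saved files (e.g. with an epoch number) is not confirmed by the text above:

```python
with compiled_net:
    metrics, _ = compiled_net.train({input_pop: x_train},
                                    {output_pop: y_train},
                                    num_epochs=10)
    # Save checkpointed variables and learned connectivity,
    # tagged with a final-epoch key (assumed semantics)
    compiled_net.save((9,), serialiser="numpy")
    compiled_net.save_connectivity((9,), serialiser="numpy")
```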
- train(x, y, num_epochs, start_epoch=0, shuffle=True, metrics='sparse_categorical_accuracy', callbacks=[BatchProgressBar()], validation_callbacks=[BatchProgressBar()], validation_split=0.0, validation_x=None, validation_y=None)
Train model on an input in numpy format against labels
- Parameters:
x (dict) – Dictionary of training inputs
y (dict) – Dictionary of training labels to compare predictions against
num_epochs (int) – Number of epochs to train for
start_epoch (int) – Epoch to start training from
shuffle (bool) – Should training data be shuffled between epochs?
metrics (dict | Metric | str) – Metrics to calculate.
callbacks – List of callbacks to run during training.
validation_callbacks – List of callbacks to run during validation
validation_split (float) – Float between 0 and 1 specifying the fraction of the training data to use for validation.
validation_x (dict | None) – Dictionary of validation inputs (cannot be used at the same time as validation_split)
validation_y (dict | None) – Dictionary of validation labels (cannot be used at the same time as validation_split)
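For example, training on a numpy dataset (names illustrative; the compiled network must come from one of the training compilers below):

```python
with compiled_net:
    metrics, _ = compiled_net.train({input_pop: x_train},
                                    {output_pop: y_train},
                                    num_epochs=50, shuffle=True)
    print(f"Training accuracy = {100.0 * metrics[output_pop].result}%")
```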
- class ml_genn.compilers.Compiler(supported_matrix_type, dt=1.0, batch_size=1, rng_seed=0, kernel_profiling=False, communicator=None, **genn_kwargs)
Bases:
object
Base class for all compilers
- Parameters:
supported_matrix_type (List[int])
dt (float)
batch_size (int)
rng_seed (int)
kernel_profiling (bool)
communicator (Communicator)
- add_custom_update(genn_model, model, group, name)
Add a custom update to model.
- Parameters:
genn_model (pygenn.GeNNModel) – GeNNModel being compiled
model (CustomUpdateModel) – Custom update model to add
group (str) – Name of custom update group to associate update with
name (str) – Name of custom update
- add_out_post_zero_custom_update(genn_model, genn_syn_pop, group, name)
- Parameters:
group (str)
name (str)
- add_softmax_custom_updates(genn_model, genn_pop, input_var_name, output_var_name, custom_update_group_prefix='', temperature=1.0)
Adds a numerically stable softmax to the model:
\[\text{softmax}(x_i) = \frac{e^{x_i - \text{max}(x)}}{\sum_j e^{x_j - \text{max}(x)}}\]
This softmax can then be calculated by triggering the custom update groups “Softmax1”, “Softmax2” and “Softmax3” in sequence (with optional prefix).
- Parameters:
genn_model – GeNNModel being compiled
genn_pop – GeNN population input and output variables are associated with
input_var_name (str) – Name of variable to read x from
output_var_name (str) – Name of variable to write softmax to
custom_update_group_prefix (str) – Optional prefix to add to names of custom update groups (enabling softmax operations required by different parts of the model to be triggered separately)
temperature (float)
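For example, once these custom updates have been added during compilation, the three passes can be triggered in sequence from a compiled network (here compiled_net, illustrative) using the default group names from above:

```python
# Compute the numerically stable softmax of the configured input variable
for group in ("Softmax1", "Softmax2", "Softmax3"):
    compiled_net.custom_update(group)
```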
- apply_delay(genn_pop, conn, delay, compile_state)
Apply delay to synapse population in compiler-specific manner
- Parameters:
genn_pop – GeNN synapse population to apply delay to
conn (Connection) – Connection synapse model is associated with
delay – Base delay specified by connectivity
compile_state – Compiler-specific state created by pre_compile().
- build_neuron_model(pop, model, compile_state)
Apply compiler-specific processing to the base neuron model returned by ml_genn.neurons.Neuron.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
pop (Population) – Population neuron model is associated with
model (NeuronModel) – Base neuron model
compile_state – Compiler-specific state created by pre_compile().
- Return type:
NeuronModel
- build_synapse_model(conn, model, compile_state)
Apply compiler-specific processing to the base synapse model returned by ml_genn.synapses.Synapse.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
conn (Connection) – Connection synapse model is associated with
model (SynapseModel) – Base synapse model
compile_state – Compiler-specific state created by pre_compile().
- Return type:
SynapseModel
- build_weight_update_model(connection, connect_snippet, compile_state)
Create compiler-specific weight update model for a connection.
- Parameters:
connection (Connection) – Connection weight update model will be used for
connect_snippet (ConnectivitySnippet) – Connectivity associated with connection
compile_state – Compiler-specific state created by pre_compile().
- Return type:
WeightUpdateModel
- compile(network, name=None, **kwargs)
Compiles network
- Parameters:
network (Network) – Network to compile
name (str | None) – Optional name for model used to determine directory to generate code to. If not specified, name of module calling this function will be used.
kwargs – Keyword arguments passed to pre_compile().
- Returns:
Compiled network
- create_compiled_network(genn_model, neuron_populations, connection_populations, compile_state)
Perform any final compiler-specific modifications to the compiled GeNNModel and return an ml_genn.compilers.CompiledNetwork derived object.
- Parameters:
genn_model – GeNNModel with all neuron and synapse groups added
neuron_populations (dict) – dictionary mapping ml_genn.Population objects to the GeNN NeuronGroup objects they have been compiled into
connection_populations (dict) – dictionary mapping ml_genn.Connection objects to the GeNN SynapseGroup objects they have been compiled into
compile_state – Compiler-specific state created by pre_compile().
- pre_compile(network, genn_model, **kwargs)
If any pre-processing is required before building neuron, synapse and weight update models, compilers should implement it here. Any compiler-specific state that should be persistent across compilation should be encapsulated in an object returned from this method.
- Parameters:
network (Network) – Network to be compiled
genn_model – Empty GeNNModel created at start of compilation
- class ml_genn.compilers.EPropCompiler(example_timesteps, losses, optimiser='adam', tau_reg=500.0, c_reg=0.001, f_target=10.0, train_output_bias=True, dt=1.0, batch_size=1, rng_seed=0, kernel_profiling=False, reset_time_between_batches=True, communicator=None, **genn_kwargs)
Bases:
Compiler
Compiler for training models using e-prop [Bellec2020].
The e-prop compiler supports ml_genn.neurons.LeakyIntegrateFire and ml_genn.neurons.AdaptiveLeakyIntegrateFire hidden neuron models; and ml_genn.losses.SparseCategoricalCrossentropy loss functions for classification and ml_genn.losses.MeanSquareError for regression.
e-prop is derived from Real-Time Recurrent Learning (RTRL), so it does not require a backward pass, meaning that its memory overhead does not scale with sequence length. However, e-prop requires a per-connection eligibility trace, meaning that it is incompatible with connectivity like convolutions with shared weights. Furthermore, because each connection has to be updated every timestep, training performance is not improved by sparse activations.
- Parameters:
example_timesteps (int) – How many timesteps each example will be presented to the network for
losses – Either a dictionary mapping loss functions to output populations or a single loss function to apply to all outputs
optimiser – Optimiser to use when applying weights
tau_reg (float) – Time constant with which hidden neuron spike trains are filtered to obtain the firing rate used for regularisation [ms]
c_reg (float) – Regularisation strength
f_target (float) – Target hidden neuron firing rate used for regularisation [Hz]
train_output_bias (bool) – Should output neuron biases be trained?
dt (float) – Simulation timestep [ms]
batch_size (int) – What batch size should be used for training? In our experience, e-prop works well with very large batch sizes (512)
rng_seed (int) – What value should GeNN’s GPU RNG be seeded with? This is used for all GPU randomness e.g. weight initialisation and Poisson spike train generation
kernel_profiling (bool) – Should GeNN record the time spent in each GPU kernel? These values can be extracted directly from the GeNN model, which can be accessed via the genn_model property of the compiled model.
reset_time_between_batches (bool) – Should time be reset to zero at the start of each example or allowed to run continuously?
communicator (Communicator) – Communicator used for inter-process communications when training across multiple GPUs.
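A minimal sketch of e-prop training, assuming network contains an input population, a LeakyIntegrateFire (or adaptive LIF) hidden population and an output population; hyperparameter values and data names are illustrative:

```python
from ml_genn.compilers import EPropCompiler

compiler = EPropCompiler(
    example_timesteps=500,
    losses="sparse_categorical_crossentropy",
    optimiser="adam",
    batch_size=512)  # e-prop works well with large batches
compiled_net = compiler.compile(network)

with compiled_net:
    metrics, _ = compiled_net.train({input_pop: x_train},
                                    {output_pop: y_train},
                                    num_epochs=50, shuffle=True)
```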
- build_neuron_model(pop, model, compile_state)
Apply compiler-specific processing to the base neuron model returned by ml_genn.neurons.Neuron.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
pop (Population) – Population neuron model is associated with
model (NeuronModel) – Base neuron model
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
NeuronModel
- build_synapse_model(conn, model, compile_state)
Apply compiler-specific processing to the base synapse model returned by ml_genn.synapses.Synapse.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
conn (Connection) – Connection synapse model is associated with
model (SynapseModel) – Base synapse model
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
SynapseModel
- build_weight_update_model(conn, connect_snippet, compile_state)
Create compiler-specific weight update model for a connection.
- Parameters:
conn (Connection) – Connection weight update model will be used for
connect_snippet (ConnectivitySnippet) – Connectivity associated with connection
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
WeightUpdateModel
- create_compiled_network(genn_model, neuron_populations, connection_populations, compile_state)
Perform any final compiler-specific modifications to the compiled GeNNModel and return an ml_genn.compilers.CompiledNetwork derived object.
- Parameters:
genn_model – GeNNModel with all neuron and synapse groups added
neuron_populations (dict) – dictionary mapping ml_genn.Population objects to the GeNN NeuronGroup objects they have been compiled into
connection_populations (dict) – dictionary mapping ml_genn.Connection objects to the GeNN SynapseGroup objects they have been compiled into
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
CompiledTrainingNetwork
- pre_compile(network, genn_model, **kwargs)
If any pre-processing is required before building neuron, synapse and weight update models, compilers should implement it here. Any compiler-specific state that should be persistent across compilation should be encapsulated in an object returned from this method.
- Parameters:
network (Network) – Network to be compiled
genn_model – Empty GeNNModel created at start of compilation
- Return type:
CompileState
- class ml_genn.compilers.EventPropCompiler(example_timesteps, losses, optimiser='adam', reg_lambda_upper=0.0, reg_lambda_lower=0.0, reg_nu_upper=0.0, max_spikes=500, strict_buffer_checking=False, per_timestep_loss=False, dt=1.0, ttfs_alpha=0.01, softmax_temperature=1.0, batch_size=1, rng_seed=0, kernel_profiling=False, communicator=None, delay_optimiser=None, delay_learn_conns=[], **genn_kwargs)
Bases:
Compiler
Compiler for training models using EventProp [Wunderlich2021].
The EventProp compiler supports ml_genn.neurons.LeakyIntegrateFire hidden neuron models; and ml_genn.losses.SparseCategoricalCrossentropy loss functions for classification and ml_genn.losses.MeanSquareError for regression.
EventProp implements a fully event-driven backward pass, meaning that its memory overhead scales with the number of spikes per trial rather than with sequence length.
In the original paper, [Wunderlich2021] derived EventProp to support loss functions of the form:
\[{\cal L} = l_p(t^{\text{post}}) + \int_0^T l_V(V(t),t) dt\]
such as
\[l_V= -\frac{1}{N_{\text{batch}}} \sum_{m=1}^{N_{\text{batch}}} \log \left( \frac{\exp\left(V_{l(m)}^m(t)\right)}{\sum_{k=1}^{N_{\text{class}}} \exp\left(V_{k}^m(t) \right)} \right)\]
where a function of output neuron membrane voltage is calculated each timestep – in mlGeNN, we refer to these as per-timestep loss functions. However, [Nowotny2024] showed that tasks with more complex temporal structure cannot be learned using these loss functions and extended the framework to support loss functions of the form:
\[{\cal L}_F = F\left(\textstyle \int_0^T l_V(V(t),t) \, dt\right)\]
such as:
\[{\mathcal L_{\text{sum}}} = - \frac{1}{N_{\text{batch}}} \sum_{m=1}^{N_{\text{batch}}} \log \left( \frac{\exp\left(\int_0^T V_{l(m)}^m(t) dt\right)}{\sum_{k=1}^{N_{\text{out}}} \exp\left(\int_0^T V_{k}^m(t) dt\right)} \right)\]
where a function of the integral of voltage is calculated once per trial.
- Parameters:
example_timesteps (int) – How many timesteps each example will be presented to the network for
losses – Either a dictionary mapping loss functions to output populations or a single loss function to apply to all outputs
optimiser – Optimiser to use when applying weights
reg_lambda_upper (float) – Regularisation strength, should typically be the same as reg_lambda_lower.
reg_lambda_lower (float) – Regularisation strength, should typically be the same as reg_lambda_upper.
reg_nu_upper (float) – Target number of hidden neuron spikes used for regularisation
max_spikes (int) – What is the maximum number of spikes each neuron (input and hidden) can emit each trial? This is used to allocate memory for the backward pass.
strict_buffer_checking (bool) – For performance reasons, if neurons emit more than max_spikes they are normally ignored but, if this flag is set, this will cause an error.
per_timestep_loss (bool) – Should we use the per-timestep or per-trial loss functions described above?
dt (float) – Simulation timestep [ms]
batch_size (int) – What batch size should be used for training? In our experience, EventProp works best with modest batch sizes (32-128)
rng_seed (int) – What value should GeNN’s GPU RNG be seeded with? This is used for all GPU randomness e.g. weight initialisation and Poisson spike train generation
kernel_profiling (bool) – Should GeNN record the time spent in each GPU kernel? These values can be extracted directly from the GeNN model, which can be accessed via the genn_model property of the compiled model.
reset_time_between_batches – Should time be reset to zero at the start of each example or allowed to run continuously?
communicator (Communicator) – Communicator used for inter-process communications when training across multiple GPUs.
delay_optimiser – Optimiser to use when applying delays. If None, optimiser will be used for delays.
delay_learn_conns (Sequence) – Connections for which delays should be learned as well as weights
ttfs_alpha (float)
softmax_temperature (float)
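A minimal sketch of EventProp training; hyperparameter values and data names are illustrative and the regularisation terms are optional and problem-dependent:

```python
from ml_genn.compilers import EventPropCompiler

compiler = EventPropCompiler(
    example_timesteps=500,
    losses="sparse_categorical_crossentropy",
    optimiser="adam",
    batch_size=32,          # modest batch sizes suit EventProp
    reg_lambda_upper=1e-9,  # illustrative regularisation strength
    reg_lambda_lower=1e-9,
    reg_nu_upper=14.0)      # illustrative target spike count
compiled_net = compiler.compile(network)

with compiled_net:
    metrics, _ = compiled_net.train({input_pop: x_train},
                                    {output_pop: y_train},
                                    num_epochs=50, shuffle=True)
```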
- apply_delay(genn_pop, conn, delay, compile_state)
Apply delay to synapse population in compiler-specific manner
- Parameters:
genn_pop – GeNN synapse population to apply delay to
conn (Connection) – Connection synapse model is associated with
delay – Base delay specified by connectivity
compile_state – Compiler-specific state created by pre_compile().
- build_neuron_model(pop, model, compile_state)
Apply compiler-specific processing to the base neuron model returned by ml_genn.neurons.Neuron.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
pop (Population) – Population neuron model is associated with
model (NeuronModel) – Base neuron model
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
NeuronModel
- build_synapse_model(conn, model, compile_state)
Apply compiler-specific processing to the base synapse model returned by ml_genn.synapses.Synapse.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
conn (Connection) – Connection synapse model is associated with
model (SynapseModel) – Base synapse model
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
SynapseModel
- build_weight_update_model(conn, connect_snippet, compile_state)
Create compiler-specific weight update model for a connection.
- Parameters:
conn (Connection) – Connection weight update model will be used for
connect_snippet (ConnectivitySnippet) – Connectivity associated with connection
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
WeightUpdateModel
- create_compiled_network(genn_model, neuron_populations, connection_populations, compile_state)
Perform any final compiler-specific modifications to the compiled GeNNModel and return an ml_genn.compilers.CompiledNetwork derived object.
- Parameters:
genn_model – GeNNModel with all neuron and synapse groups added
neuron_populations (dict) – dictionary mapping ml_genn.Population objects to the GeNN NeuronGroup objects they have been compiled into
connection_populations (dict) – dictionary mapping ml_genn.Connection objects to the GeNN SynapseGroup objects they have been compiled into
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
CompiledTrainingNetwork
- pre_compile(network, genn_model, **kwargs)
If any pre-processing is required before building neuron, synapse and weight update models, compilers should implement it here. Any compiler-specific state that should be persistent across compilation should be encapsulated in an object returned from this method.
- Parameters:
network (Network) – Network to be compiled
genn_model – Empty GeNNModel created at start of compilation
- Return type:
CompileState
- property regulariser_enabled
- class ml_genn.compilers.FewSpikeCompiler(k=10, dt=1.0, batch_size=1, rng_seed=0, kernel_profiling=False, prefer_in_memory_connect=True, communicator=None, **genn_kwargs)
Bases:
Compiler
- Parameters:
k (int)
dt (float)
batch_size (int)
rng_seed (int)
kernel_profiling (bool)
prefer_in_memory_connect (bool)
communicator (Communicator)
- apply_delay(genn_pop, conn, delay, compile_state)
Apply delay to synapse population in compiler-specific manner
- Parameters:
genn_pop – GeNN synapse population to apply delay to
conn (Connection) – Connection synapse model is associated with
delay – Base delay specified by connectivity
compile_state – Compiler-specific state created by pre_compile().
- build_neuron_model(pop, model, compile_state)
Apply compiler-specific processing to the base neuron model returned by ml_genn.neurons.Neuron.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
pop (Population) – Population neuron model is associated with
model (NeuronModel) – Base neuron model
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
NeuronModel
- build_synapse_model(conn, model, compile_state)
Apply compiler-specific processing to the base synapse model returned by ml_genn.synapses.Synapse.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
conn (Connection) – Connection synapse model is associated with
model (SynapseModel) – Base synapse model
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
SynapseModel
- create_compiled_network(genn_model, neuron_populations, connection_populations, compile_state)
Perform any final compiler-specific modifications to the compiled GeNNModel and return an ml_genn.compilers.CompiledNetwork derived object.
- Parameters:
genn_model – GeNNModel with all neuron and synapse groups added
neuron_populations (dict) – dictionary mapping ml_genn.Population objects to the GeNN NeuronGroup objects they have been compiled into
connection_populations (dict) – dictionary mapping ml_genn.Connection objects to the GeNN SynapseGroup objects they have been compiled into
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
CompiledFewSpikeNetwork
- pre_compile(network, genn_model, inputs, outputs, **kwargs)
If any pre-processing is required before building neuron, synapse and weight update models, compilers should implement it here. Any compiler-specific state that should be persistent across compilation should be encapsulated in an object returned from this method.
- Parameters:
network (Network) – Network to be compiled
genn_model – Empty GeNNModel created at start of compilation
- Return type:
CompileState
- class ml_genn.compilers.InferenceCompiler(evaluate_timesteps, dt=1.0, batch_size=1, rng_seed=0, kernel_profiling=False, prefer_in_memory_connect=True, reset_time_between_batches=True, reset_vars_between_batches=True, reset_in_syn_between_batches=False, communicator=None, **genn_kwargs)
Bases:
Compiler
Compiler for performing inference on trained networks
- Parameters:
evaluate_timesteps (int) – How many timesteps each example will be presented to the network for
dt (float) – Simulation timestep [ms]
batch_size (int) – What batch size should be used for inference?
rng_seed (int) – What value should GeNN’s GPU RNG be seeded with? This is used for all GPU randomness e.g. weight initialisation and Poisson spike train generation
kernel_profiling (bool) – Should GeNN record the time spent in each GPU kernel? These values can be extracted directly from the GeNN model, which can be accessed via the genn_model property of the compiled model.
prefer_in_memory_connect – Should in-memory connectivity strategies such as TOEPLITZ be used rather than converting all connectivity into matrices?
reset_time_between_batches – Should time be reset to zero at the start of each example or allowed to run continuously?
reset_vars_between_batches – Should neuron variables be reset to their initial values at the start of each example or allowed to run continuously?
reset_in_syn_between_batches – Should synaptic input variables be reset to their initial values at the start of each example or allowed to run continuously?
communicator (Communicator) – Communicator used for inter-process communications when training across multiple GPUs.
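A minimal sketch of compiling a trained network for inference and evaluating it (timestep count and names are illustrative):

```python
from ml_genn.compilers import InferenceCompiler

compiler = InferenceCompiler(evaluate_timesteps=500,
                             reset_vars_between_batches=True)
compiled_net = compiler.compile(network)

with compiled_net:
    metrics, _ = compiled_net.evaluate({input_pop: x_test},
                                       {output_pop: y_test})
    print(f"Accuracy = {100.0 * metrics[output_pop].result}%")
```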
- build_neuron_model(pop, model, compile_state)
Apply compiler-specific processing to the base neuron model returned by ml_genn.neurons.Neuron.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
pop (Population) – Population neuron model is associated with
model (NeuronModel) – Base neuron model
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
NeuronModel
- build_synapse_model(conn, model, compile_state)
Apply compiler-specific processing to the base synapse model returned by ml_genn.synapses.Synapse.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
conn (Connection) – Connection synapse model is associated with
model (SynapseModel) – Base synapse model
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
SynapseModel
- create_compiled_network(genn_model, neuron_populations, connection_populations, compile_state)
Perform any final compiler-specific modifications to the compiled GeNNModel and return an ml_genn.compilers.CompiledNetwork derived object.
- Parameters:
genn_model – GeNNModel with all neuron and synapse groups added
neuron_populations (dict) – dictionary mapping ml_genn.Population objects to the GeNN NeuronGroup objects they have been compiled into
connection_populations (dict) – dictionary mapping ml_genn.Connection objects to the GeNN SynapseGroup objects they have been compiled into
compile_state (CompileState) – Compiler-specific state created by pre_compile().
- Return type:
CompiledInferenceNetwork
- pre_compile(network, genn_model, **kwargs)
If any pre-processing is required before building neuron, synapse and weight update models, compilers should implement it here. Any compiler-specific state that should be persistent across compilation should be encapsulated in an object returned from this method.
- Parameters:
network (Network) – Network to be compiled
genn_model – Empty GeNNModel created at start of compilation
- Return type:
CompileState