ml_genn_tf.converters package
mlGeNN TF converters take an ANN trained in TensorFlow and convert it to an mlGeNN Network for SNN inference.
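For example, a trained Keras model can be converted and evaluated as a spiking network along the following lines (a minimal sketch: tf_model, x_norm, x_test and y_test are placeholder variables, and the inputs/outputs keyword arguments and the evaluate() call assume the inference compiler returned by create_compiler() follows ml_genn's usual interface):

    from ml_genn_tf.converters import DataNorm

    # Convert the trained Keras model, using a sample of training data
    # (x_norm) to balance firing thresholds
    converter = DataNorm(evaluate_timesteps=500, norm_data=[x_norm])
    net, net_inputs, net_outputs, tf_layer_pops = converter.convert(tf_model)

    # Build a matching inference compiler and compile the network for GeNN
    compiler = converter.create_compiler()
    compiled_net = compiler.compile(net, inputs=net_inputs, outputs=net_outputs)

    # Evaluate the spiking network on held-out test data
    with compiled_net:
        metrics, _ = compiled_net.evaluate({net_inputs[0]: x_test},
                                           {net_outputs[0]: y_test})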
- class ml_genn_tf.converters.Converter
Bases:
object
Base class for all converters
- convert(tf_model)
Convert a TensorFlow model to an mlGeNN network. Returns the network, a list of input populations, a list of output populations and a dictionary mapping TF layers to populations.
- Parameters:
tf_model (tensorflow.keras.Model) – TensorFlow model to be converted
- Return type:
Tuple[Network, List[Population], List[Population], Dict[tensorflow.keras.layers.Layer, Population]]
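Continuing the example above, the returned dictionary lets you look up the population that was created from a particular Keras layer (a sketch; the layer name "dense_1" is hypothetical and it is assumed that Population exposes a shape attribute):

    # tf_model is the Keras model passed to convert()
    net, net_inputs, net_outputs, tf_layer_pops = converter.convert(tf_model)

    # Look up the mlGeNN population created from a named Keras layer
    dense_pop = tf_layer_pops[tf_model.get_layer("dense_1")]
    print(dense_pop.shape)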
- create_compiler(**kwargs)
Create suitable compiler to compile networks produced by this converter
- create_input_neurons(pre_convert_output)
Create converter-specific input neuron model
- Parameters:
pre_convert_output – Converter-specific state created by pre_convert()
- create_neurons(tf_layer, pre_convert_output, is_output)
Create converter-specific neuron model from TF layer
- Parameters:
tf_layer (tensorflow.keras.layers.Layer) – TF layer to convert
pre_convert_output – Converter-specific state created by pre_convert()
is_output (bool) – Is this an output layer?
- post_convert(mlg_network, mlg_network_inputs, mlg_model_outputs)
If any post-processing of the network is required after adding all layers, converters should implement it here.
- Parameters:
mlg_network (Network) – Populated network
mlg_network_inputs (List[Population]) – List of input populations
mlg_model_outputs (List[Population]) – List of output populations
- pre_convert(tf_model)
If any pre-processing is required before converting the TF model, converters should implement it here. Any converter-specific state that needs to persist across the conversion should be encapsulated in an object returned from this method (see the sketch after this entry).
- Parameters:
tf_model (tensorflow.keras.Model) – TensorFlow model to be converted
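The sketch below shows how this state can be threaded through the converter API in a hypothetical subclass (the FixedThreshold class and its PreConvertOutput tuple are invented for illustration; it assumes ml_genn's IntegrateFire and IntegrateFireInput neuron models and the "spike_count" readout are available):

    from collections import namedtuple

    from ml_genn.neurons import IntegrateFire, IntegrateFireInput
    from ml_genn_tf.converters import Converter

    # Hypothetical converter: a single fixed threshold is chosen in
    # pre_convert() and reused when building every neuron model
    class FixedThreshold(Converter):
        PreConvertOutput = namedtuple("PreConvertOutput", ["threshold"])

        def pre_convert(self, tf_model):
            # State returned here is passed back into the create_* methods
            return self.PreConvertOutput(threshold=1.0)

        def create_input_neurons(self, pre_convert_output):
            return IntegrateFireInput(v_thresh=pre_convert_output.threshold)

        def create_neurons(self, tf_layer, pre_convert_output, is_output):
            return IntegrateFire(
                v_thresh=pre_convert_output.threshold,
                readout="spike_count" if is_output else None)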
- validate_tf_layer(tf_layer, config)
Check that a TF layer can be converted by this converter.
- Parameters:
tf_layer (tensorflow.keras.layers.Layer)
- class ml_genn_tf.converters.DataNorm(evaluate_timesteps, signed_input=False, norm_data=None, input_type=InputType.POISSON)
Bases:
Converter
Converts ANNs to networks of integrate-and-fire neurons operating in a rate-based regime. If normalisation data is provided, thresholds are balanced using the algorithm proposed by [Diehl2015].
- Parameters:
evaluate_timesteps (int) – Number of timesteps each example is evaluated for
signed_input – Whether the input is signed, i.e. can take negative values
norm_data – Dataset used to normalise firing thresholds
input_type – Type of input neuron model to use
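For instance, assuming x_norm is a representative sample of the training inputs (a placeholder variable; norm_data is assumed to take one dataset per model input), a DataNorm converter might be configured as:

    from ml_genn_tf.converters import DataNorm, InputType

    converter = DataNorm(evaluate_timesteps=500,
                         signed_input=False,
                         norm_data=[x_norm],
                         input_type=InputType.POISSON)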
- create_compiler(**kwargs)
Create suitable compiler to compile networks produced by this converter
- create_input_neurons(pre_convert_output)
Create converter-specific input neuron model
- Parameters:
pre_convert_output (PreConvertOutput) – Converter-specific state created by pre_convert()
- create_neurons(tf_layer, pre_convert_output, is_output)
Create converter-specific neuron model from TF layer
- Parameters:
tf_layer (tensorflow.keras.layers.Layer) – TF layer to convert
pre_convert_output (PreConvertOutput) – Converter-specific state created by pre_convert()
is_output (bool) – Is this an output layer?
- pre_convert(tf_model)
If any pre-processing is required before converting the TF model, converters should implement it here. Any converter-specific state that needs to persist across the conversion should be encapsulated in an object returned from this method.
- Parameters:
tf_model – TensorFlow model to be converted
- class ml_genn_tf.converters.FewSpike(k=10, alpha=25, signed_input=False, norm_data=None)
Bases:
Converter
- Parameters:
k (int)
alpha (float)
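A typical configuration is sketched below (x_norm and tf_model are placeholders; the reading of k as the number of encoding timesteps and alpha as the scale of the encoded activations is an assumption, not taken from this page):

    from ml_genn_tf.converters import FewSpike

    converter = FewSpike(k=8, alpha=25, norm_data=[x_norm])
    net, net_inputs, net_outputs, tf_layer_pops = converter.convert(tf_model)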
- create_compiler(**kwargs)
Create suitable compiler to compile networks produced by this converter
- create_input_neurons(pre_convert_output)
Create converter-specific input neuron model
- Parameters:
pre_convert_output – Converter-specific state created by pre_convert()
- create_neurons(tf_layer, pre_convert_output, is_output)
Create converter-specific neuron model from TF layer
- Parameters:
tf_layer – TF layer to convert
pre_convert_output – Converter-specific state created by pre_convert()
is_output – Is this an output layer?
- post_convert(mlg_network, mlg_network_inputs, mlg_model_outputs)
If any post-processing of the network is required after adding all layers, converters should implement it here.
- Parameters:
mlg_network – Populated network
mlg_network_inputs – List of input populations
mlg_model_outputs – List of output populations
- pre_convert(tf_model)
If any pre-processing is required before converting the TF model, converters should implement it here. Any converter-specific state that needs to persist across the conversion should be encapsulated in an object returned from this method.
- Parameters:
tf_model – TensorFlow model to be converted
- class ml_genn_tf.converters.InputType(value)
Bases:
Enum
Types of input neuron model available to converters.
- IF = 'if'
- POISSON = 'poisson'
- SPIKE = 'spike'
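Because the enumeration values are strings, members can be constructed from their value as well as referenced by name, for example when the input type comes from a configuration file:

    from ml_genn_tf.converters import Simple, InputType

    # The two forms refer to the same member
    assert InputType("poisson") is InputType.POISSON

    converter = Simple(evaluate_timesteps=100, input_type=InputType.SPIKE)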
- class ml_genn_tf.converters.Simple(evaluate_timesteps, signed_input=False, input_type=InputType.POISSON)
Bases:
Converter
- Parameters:
evaluate_timesteps (int)
signed_input (bool)
input_type (InputType)
- create_compiler(**kwargs)
Create suitable compiler to compile networks produced by this converter
- create_input_neurons(pre_compile_output)
Create converter-specific input neuron model
- Parameters:
pre_convert_output – Converter-specific state created by pre_convert()
- create_neurons(tf_layer, pre_compile_output, is_output)
Create converter-specific neuron model from TF layer
- Parameters:
tf_layer – TF layer to convert
pre_convert_output – Converter-specific state created by pre_convert()
is_output – Is this an output layer?
- pre_compile(mlg_network)
- pre_convert(tf_model)
If any pre-processing is required before converting the TF model, converters should implement it here. Any converter-specific state that needs to persist across the conversion should be encapsulated in an object returned from this method.
- Parameters:
tf_model – TensorFlow model to be converted
Submodules
ml_genn_tf.converters.converter module
- class ml_genn_tf.converters.converter.Converter
Bases:
object
Base class for all converters
- convert(tf_model)
Convert a TensorFlow model to an mlGeNN network. Returns the network, a list of input populations, a list of output populations and a dictionary mapping TF layers to populations.
- Parameters:
tf_model (tensorflow.keras.Model) – TensorFlow model to be converted
- Return type:
Tuple[Network, List[Population], List[Population], Dict[tensorflow.keras.layers.Layer, Population]]
- create_compiler(**kwargs)
Create suitable compiler to compile networks produced by this converter
- create_input_neurons(pre_convert_output)
Create converter-specific input neuron model
- Parameters:
pre_convert_output – Converter-specific state created by pre_convert()
- create_neurons(tf_layer, pre_convert_output, is_output)
Create converter-specific neuron model from TF layer
- Parameters:
tf_layer (tensorflow.keras.layers.Layer) – TF layer to convert
pre_convert_output – Converter-specific state created by pre_convert()
is_output (bool) – Is this an output layer?
- post_convert(mlg_network, mlg_network_inputs, mlg_model_outputs)
If any post-processing of the network is required after adding all layers, converters should implement it here.
- Parameters:
mlg_network (Network) – Populated network
mlg_network_inputs (List[Population]) – List of input populations
mlg_model_outputs (List[Population]) – List of output populations
- pre_convert(tf_model)
If any pre-processing is required before converting the TF model, converters should implement it here. Any converter-specific state that needs to persist across the conversion should be encapsulated in an object returned from this method.
- Parameters:
tf_model (tensorflow.keras.Model) – TensorFlow model to be converted
- validate_tf_layer(tf_layer, config)
Check that a TF layer can be converted by this converter.
- Parameters:
tf_layer (tensorflow.keras.layers.Layer)
ml_genn_tf.converters.data_norm module
- class ml_genn_tf.converters.data_norm.DataNorm(evaluate_timesteps, signed_input=False, norm_data=None, input_type=InputType.POISSON)
Bases:
Converter
Converts ANNs to networks of integrate-and-fire neurons operating in a rate-based regime. If normalisation data is provided, thresholds are balanced using the algorithm proposed by [Diehl2015].
- Parameters:
evaluate_timesteps (int) – Number of timesteps each example is evaluated for
signed_input – Whether the input is signed, i.e. can take negative values
norm_data – Dataset used to normalise firing thresholds
input_type – Type of input neuron model to use
- create_compiler(**kwargs)
Create suitable compiler to compile networks produced by this converter
- create_input_neurons(pre_convert_output)
Create converter-specific input neuron model
- Parameters:
pre_convert_output (PreConvertOutput) – Converter-specific state created by pre_convert()
- create_neurons(tf_layer, pre_convert_output, is_output)
Create converter-specific neuron model from TF layer
- Parameters:
tf_layer (tensorflow.keras.layers.Layer) – TF layer to convert
pre_convert_output (PreConvertOutput) – Converter-specific state created by pre_convert()
is_output (bool) – Is this an output layer?
- pre_convert(tf_model)
If any pre-processing is required before converting the TF model, converters should implement it here. Any converter-specific state that needs to persist across the conversion should be encapsulated in an object returned from this method.
- Parameters:
tf_model – TensorFlow model to be converted
ml_genn_tf.converters.enum module
ml_genn_tf.converters.few_spike module
- class ml_genn_tf.converters.few_spike.FewSpike(k=10, alpha=25, signed_input=False, norm_data=None)
Bases:
Converter
- Parameters:
k (int)
alpha (float)
- create_compiler(**kwargs)
Create suitable compiler to compile networks produced by this converter
- create_input_neurons(pre_convert_output)
Create converter-specific input neuron model
- Parameters:
pre_convert_output – Converter-specific state created by pre_convert()
- create_neurons(tf_layer, pre_convert_output, is_output)
Create converter-specific neuron model from TF layer
- Parameters:
tf_layer – TF layer to convert
pre_convert_output – Converter-specific state created by pre_convert()
is_output – Is this an output layer?
- post_convert(mlg_network, mlg_network_inputs, mlg_model_outputs)
If any post-processing of the network is required after adding all layers, converters should implement it here.
- Parameters:
mlg_network – Populated network
mlg_network_inputs – List of input populations
mlg_model_outputs – List of output populations
- pre_convert(tf_model)
If any pre-processing is required before converting the TF model, converters should implement it here. Any converter-specific state that needs to persist across the conversion should be encapsulated in an object returned from this method.
- Parameters:
tf_model – TensorFlow model to be converted
ml_genn_tf.converters.simple module
- class ml_genn_tf.converters.simple.Simple(evaluate_timesteps, signed_input=False, input_type=InputType.POISSON)
Bases:
Converter
- Parameters:
evaluate_timesteps (int)
signed_input (bool)
input_type (InputType)
- create_compiler(**kwargs)
Create suitable compiler to compile networks produced by this converter
- create_input_neurons(pre_compile_output)
Create converter-specific input neuron model
- Parameters:
pre_convert_output – Converter-specific state created by pre_convert()
- create_neurons(tf_layer, pre_compile_output, is_output)
Create converter-specific neuron model from TF layer
- Parameters:
tf_layer – TF layer to convert
pre_convert_output – Converter-specific state created by pre_convert()
is_output – Is this an output layer?
- pre_compile(mlg_network)
- pre_convert(tf_model)
If any pre-processing is required before converting the TF model, converters should implement it here. Any converter-specific state that needs to persist across the conversion should be encapsulated in an object returned from this method.
- Parameters:
tf_model – TensorFlow model to be converted
ml_genn_tf.converters.spike_norm module
- class ml_genn_tf.converters.spike_norm.NormCompiler(evaluate_timesteps, dt=1.0, batch_size=1, rng_seed=0, kernel_profiling=False, prefer_in_memory_connect=True, reset_time_between_batches=True, reset_vars_between_batches=True, reset_in_syn_between_batches=False, communicator=None, **genn_kwargs)
Bases:
InferenceCompiler
- Parameters:
evaluate_timesteps (int)
dt (float)
batch_size (int)
rng_seed (int)
kernel_profiling (bool)
communicator (Communicator)
- build_neuron_model(pop, model, compile_state)
Apply compiler-specific processing to the base neuron model returned by ml_genn.neurons.Neuron.get_model(). If modifications are made, this should be done to a (deep) copy.
- Parameters:
pop – Population the neuron model is associated with
model – Base neuron model
compile_state – Compiler-specific state created by pre_compile()
- create_compiled_network(genn_model, neuron_populations, connection_populations, compile_state)
Perform any final compiler-specific modifications to the compiled GeNNModel and return a ml_genn.compilers.CompiledNetwork derived object.
- Parameters:
genn_model – GeNNModel with all neuron and synapse groups added
neuron_populations – Dictionary mapping ml_genn.Population objects to the GeNN NeuronGroup objects they have been compiled into
connection_populations – Dictionary mapping ml_genn.Connection objects to the GeNN SynapseGroup objects they have been compiled into
compile_state – Compiler-specific state created by pre_compile()
- ml_genn_tf.converters.spike_norm.spike_normalise(net, net_inputs, net_outputs, norm_data, evaluate_timesteps, num_batches=None, dt=1.0, batch_size=1, rng_seed=0, kernel_profiling=False, prefer_in_memory_connect=True, reset_time_between_batches=True, **genn_kwargs)
Normalise the firing thresholds of a converted network based on activity recorded while presenting norm_data.
- Parameters:
evaluate_timesteps (int)
num_batches (int | None)
dt (float)
batch_size (int)
rng_seed (int)
kernel_profiling (bool)
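A hedged sketch of how spike_normalise might be called on a network produced by one of the converters above (x_norm and the earlier net, net_inputs and net_outputs variables are placeholders; it is assumed that thresholds are adjusted in place):

    from ml_genn_tf.converters.spike_norm import spike_normalise

    # Normalise thresholds using a sample of training data, simulating
    # each batch for the same number of timesteps used at evaluation time
    spike_normalise(net, net_inputs, net_outputs,
                    norm_data=[x_norm],
                    evaluate_timesteps=500,
                    batch_size=128)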