API reference

Layers

Components for building spiking models in Keras.

keras_spiking.SpikingActivationCell

RNN cell for converting an arbitrary activation function to a spiking equivalent.

keras_spiking.SpikingActivation

Layer for converting an arbitrary activation function to a spiking equivalent.

keras_spiking.LowpassCell

RNN cell for a lowpass filter.

keras_spiking.Lowpass

Layer implementing a lowpass filter.

class keras_spiking.SpikingActivationCell(units, activation, dt=0.001, seed=None, spiking_aware_training=True, **kwargs)[source]

RNN cell for converting an arbitrary activation function to a spiking equivalent.

Neurons will spike at a rate proportional to the output of the base activation function. For example, if the activation function is outputting a value of 10, then this cell will output spikes at a rate of 10 Hz (i.e., 10 spikes per simulated second, where 1 simulated second is equivalent to 1/dt time steps). Each spike will have height 1/dt (so that the integral of the spiking output equals the integral of the base activation output). Note that if the base activation is outputting a negative value then the spikes will have height -1/dt. Multiple spikes per time step are also possible, in which case the output will be n/dt (where n is the number of spikes).
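The spike-generation behaviour described above can be sketched in NumPy. This is an illustration of the description only, not the library's actual implementation; the function and variable names are hypothetical:

```python
import numpy as np

def spike_step(rates, voltage, dt=0.001):
    """One time step of the spiking mechanism described above (sketch)."""
    # Accumulate the base activation output over one time step
    voltage = voltage + rates * dt
    # Emit one spike per whole unit accumulated (sign-aware, so
    # negative rates produce negative spikes)
    n_spikes = np.sign(voltage) * np.floor(np.abs(voltage))
    voltage = voltage - n_spikes
    # Each spike has height 1/dt, so n spikes give an output of n/dt
    return n_spikes / dt, voltage

rates = np.array([10.0])       # base activation outputting 10
voltage = np.zeros(1)
integral = 0.0
for _ in range(1000):          # 1 simulated second at dt=0.001
    out, voltage = spike_step(rates, voltage)
    integral += out[0] * 0.001
# the integral of the spiking output is close to 10,
# matching the integral of the base activation output
```

Running this for one simulated second produces roughly 10 spikes, consistent with the 10 Hz rate described above.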

Parameters
units : int

Dimensionality of layer.

activation : callable

Activation function to be converted to spiking equivalent.

dt : float

Length of time (in seconds) represented by one time step.

seed : int

Seed for random state initialization.

spiking_aware_training : bool

If True (default), use the spiking activation function for the forward pass and the base activation function for the backward pass. If False, use the base activation function for the forward and backward pass during training.

kwargs : dict

Passed on to tf.keras.layers.Layer.

Notes

This cell needs to be wrapped in a tf.keras.layers.RNN, like

my_layer = tf.keras.layers.RNN(
    keras_spiking.SpikingActivationCell(units=10, activation=tf.nn.relu)
)
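The spiking_aware_training behaviour described above amounts to a straight-through estimator: spikes on the forward pass, the smooth base activation's gradient on the backward pass. A hedged sketch of the idea (not the actual keras_spiking internals; spiking_forward is a hypothetical name):

```python
import tensorflow as tf

@tf.custom_gradient
def spiking_forward(rates):
    """Straight-through sketch: spiking forward pass, smooth backward pass."""
    dt = 0.001
    # Forward: quantize the rates into spikes of height 1/dt
    spikes = tf.floor(rates * dt) / dt

    def grad(upstream):
        # Backward: behave as if the forward pass were the base (rate)
        # activation, i.e. pass the gradient straight through
        return upstream

    return spikes, grad
```

This is why training can still use ordinary gradient descent even though the forward output is a discontinuous spike train.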
get_initial_state(inputs=None, batch_size=None, dtype=None)[source]

Set up initial spiking state.

Initial state is chosen from a uniform distribution, seeded based on the seed passed on construction (if one was given).

Note: state will be initialized automatically; the user does not need to call this method themselves.

call(inputs, states, training=None)[source]

Compute layer output.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

class keras_spiking.SpikingActivation(activation, dt=0.001, seed=None, spiking_aware_training=True, return_sequences=False, return_state=False, stateful=False, unroll=False, time_major=False, **kwargs)[source]

Layer for converting an arbitrary activation function to a spiking equivalent.

Neurons will spike at a rate proportional to the output of the base activation function. For example, if the activation function is outputting a value of 10, then this layer will output spikes at a rate of 10 Hz (i.e., 10 spikes per simulated second, where 1 simulated second is equivalent to 1/dt time steps). Each spike will have height 1/dt (so that the integral of the spiking output equals the integral of the base activation output). Note that if the base activation is outputting a negative value then the spikes will have height -1/dt. Multiple spikes per time step are also possible, in which case the output will be n/dt (where n is the number of spikes).

Parameters
activation : callable

Activation function to be converted to spiking equivalent.

dt : float

Length of time (in seconds) represented by one time step.

seed : int

Seed for random state initialization.

spiking_aware_training : bool

If True (default), use the spiking activation function for the forward pass and the base activation function for the backward pass. If False, use the base activation function for the forward and backward pass during training.

return_sequences : bool

Whether to return the last output in the output sequence (default), or the full sequence.

return_state : bool

Whether to return the state in addition to the output.

stateful : bool

If False (default), each time the layer is called it will begin from the same initial conditions. If True, each call will resume from the terminal state of the previous call (my_layer.reset_states() can be called to reset the state to initial conditions).

unroll : bool

If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up computations, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.

time_major : bool

The shape format of the input and output tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major=True is a bit more efficient because it avoids transposes at the beginning and end of the layer calculation. However, most TensorFlow data is batch-major, so by default this layer accepts input and emits output in batch-major form.

kwargs : dict

Passed on to tf.keras.layers.Layer.

Notes

This is equivalent to tf.keras.layers.RNN(SpikingActivationCell(...) ...); it just takes care of the RNN construction automatically.

build(input_shapes)[source]

Builds the RNN/SpikingActivationCell layers contained within this layer.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

call(inputs, training=None, initial_state=None, constants=None)[source]

Apply this layer to inputs.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

reset_states(states=None)[source]

Reset the internal state of the layer (only necessary if stateful=True).

Parameters
states : ndarray

Optional state array that can be used to override the values returned by SpikingActivationCell.get_initial_state.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

class keras_spiking.LowpassCell(units, tau, dt=0.001, apply_during_training=True, level_initializer='zeros', **kwargs)[source]

RNN cell for a lowpass filter.

The initial filter state and filter time constants are both trainable parameters. However, if apply_during_training=False then the parameters are not part of the training loop, and so will never be updated.

Parameters
units : int

Dimensionality of layer.

tau : float

Time constant of filter (in seconds).

dt : float

Length of time (in seconds) represented by one time step.

apply_during_training : bool

If False, this layer will effectively be ignored during training (this often makes sense in concert with the swappable training behaviour in, e.g., SpikingActivation, since if the activations are not spiking during training then we often don’t need to filter them either).

level_initializer : str or tf.keras.initializers.Initializer

Initializer for filter state.

kwargs : dict

Passed on to tf.keras.layers.Layer.

Notes

This cell needs to be wrapped in a tf.keras.layers.RNN, like

my_layer = tf.keras.layers.RNN(keras_spiking.LowpassCell(units=10, tau=0.01))
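The per-step filter update can be sketched in NumPy. This shows one common zero-order-hold discretization of a first-order lowpass filter; the library's exact update (and its handling of the trainable tau) may differ:

```python
import numpy as np

def lowpass_step(x, y, tau=0.01, dt=0.001):
    """One step of a first-order lowpass filter (sketch)."""
    # Smoothing factor derived from the time constant
    a = np.exp(-dt / tau)
    # Blend the previous filter state with the new input
    return a * y + (1 - a) * x

y = 0.0
for _ in range(200):          # 200 steps of dt=0.001 = 20 time constants
    y = lowpass_step(1.0, y)  # constant input of 1.0
# after many time constants the state has converged to the input value
```

Applied to a spike train, this smooths the discontinuous n/dt spikes back into an estimate of the underlying firing rate.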
build(input_shapes)[source]

Build parameters associated with this layer.

get_initial_state(inputs=None, batch_size=None, dtype=None)[source]

Get initial filter state.

call(inputs, states, training=None)[source]

Apply this layer to inputs.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

class keras_spiking.Lowpass(tau, dt=0.001, apply_during_training=True, level_initializer='zeros', return_sequences=False, return_state=False, stateful=False, unroll=False, time_major=False, **kwargs)[source]

Layer implementing a lowpass filter.

The initial filter state and filter time constants are both trainable parameters. However, if apply_during_training=False then the parameters are not part of the training loop, and so will never be updated.

Parameters
tau : float

Time constant of filter (in seconds).

dt : float

Length of time (in seconds) represented by one time step.

apply_during_training : bool

If False, this layer will effectively be ignored during training (this often makes sense in concert with the swappable training behaviour in, e.g., SpikingActivation, since if the activations are not spiking during training then we often don’t need to filter them either).

level_initializer : str or tf.keras.initializers.Initializer

Initializer for filter state.

return_sequences : bool

Whether to return the last output in the output sequence (default), or the full sequence.

return_state : bool

Whether to return the state in addition to the output.

stateful : bool

If False (default), each time the layer is called it will begin from the same initial conditions. If True, each call will resume from the terminal state of the previous call (my_layer.reset_states() can be called to reset the state to initial conditions).

unroll : bool

If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up computations, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.

time_major : bool

The shape format of the input and output tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major=True is a bit more efficient because it avoids transposes at the beginning and end of the layer calculation. However, most TensorFlow data is batch-major, so by default this layer accepts input and emits output in batch-major form.

kwargs : dict

Passed on to tf.keras.layers.Layer.

Notes

This is equivalent to tf.keras.layers.RNN(LowpassCell(...) ...); it just takes care of the RNN construction automatically.

build(input_shapes)[source]

Builds the RNN/LowpassCell layers contained within this layer.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

call(inputs, training=None, initial_state=None, constants=None)[source]

Apply this layer to inputs.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

reset_states(states=None)[source]

Reset the internal state of the layer (only necessary if stateful=True).

Parameters
states : ndarray

Optional state array that can be used to override the values returned by LowpassCell.get_initial_state.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

Regularizers

Regularization methods designed to work with spiking layers.

keras_spiking.regularizers.L1L2

A version of tf.keras.regularizers.L1L2 that allows the user to specify a nonzero target output.

keras_spiking.regularizers.L1

A version of tf.keras.regularizers.L1 that allows the user to specify a nonzero target output.

keras_spiking.regularizers.L2

A version of tf.keras.regularizers.L2 that allows the user to specify a nonzero target output.

class keras_spiking.regularizers.L1L2(l1=0.0, l2=0.0, target=0, **kwargs)[source]

A version of tf.keras.regularizers.L1L2 that allows the user to specify a nonzero target output.

Parameters
l1 : float

Weight on L1 regularization penalty.

l2 : float

Weight on L2 regularization penalty.

target : float

Target output value (values will be penalized based on their distance from this point).

get_config()[source]

Return config (for serialization during model saving/loading).

class keras_spiking.regularizers.L1(l1=0.01, target=0, **kwargs)[source]

A version of tf.keras.regularizers.L1 that allows the user to specify a nonzero target output.

Parameters
l1 : float

Weight on L1 regularization penalty.

target : float

Target output value (values will be penalized based on their distance from this point).

class keras_spiking.regularizers.L2(l2=0.01, target=0, **kwargs)[source]

A version of tf.keras.regularizers.L2 that allows the user to specify a nonzero target output.

Parameters
l2 : float

Weight on L2 regularization penalty.

target : float

Target output value (values will be penalized based on their distance from this point).