
API reference

Layers

Components for building spiking models in Keras.

keras_spiking.layers.KerasSpikingCell
    Base class for RNN cells in KerasSpiking.
keras_spiking.layers.KerasSpikingLayer
    Base class for KerasSpiking layers.
keras_spiking.SpikingActivationCell
    RNN cell for converting an arbitrary activation function to a spiking equivalent.
keras_spiking.SpikingActivation
    Layer for converting an arbitrary activation function to a spiking equivalent.
keras_spiking.LowpassCell
    RNN cell for a lowpass filter.
keras_spiking.Lowpass
    Layer implementing a lowpass filter.
keras_spiking.AlphaCell
    RNN cell for an alpha filter.
keras_spiking.Alpha
    Layer implementing an alpha filter.
class keras_spiking.layers.KerasSpikingCell(*args, **kwargs)[source]

Base class for RNN cells in KerasSpiking.

The key feature of this class is that it allows cells to define separate implementations for training and inference.

Parameters
size : int or tuple of int or tf.TensorShape

Input/output shape of the layer (not including batch/time dimensions).

state_size : int or tuple of int or tf.TensorShape

Shape of the cell state. If None, use size.

dt : float

Length of time (in seconds) represented by one time step. If None, uses keras_spiking.default.dt (which is 0.001 seconds by default).

always_use_inference : bool

If True, this layer will use its call_inference behaviour during training, rather than call_training.

kwargs : dict

Passed on to tf.keras.layers.Layer.

call(inputs, states, training=None)[source]

Call function that defines a different forward pass during training versus inference.

call_training(inputs, states)[source]

Compute layer output when training and always_use_inference=False.

call_inference(inputs, states)[source]

Compute layer output when testing or always_use_inference=True.

class keras_spiking.layers.KerasSpikingLayer(*args, **kwargs)[source]

Base class for KerasSpiking layers.

The main role of this class is to wrap a KerasSpikingCell in a tf.keras.layers.RNN.

Parameters
dt : float

Length of time (in seconds) represented by one time step. If None, uses keras_spiking.default.dt (which is 0.001 seconds by default).

return_sequences : bool

Whether to return the full sequence of output values (default), or just the values on the last timestep.

return_state : bool

Whether to return the state in addition to the output.

stateful : bool

If False (default), each time the layer is called it will begin from the same initial conditions. If True, each call will resume from the terminal state of the previous call (my_layer.reset_states() can be called to reset the state to initial conditions).

unroll : bool

If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up computations, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.

time_major : bool

The shape format of the input and output tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major=True is a bit more efficient because it avoids transposes at the beginning and end of the layer calculation. However, most TensorFlow data is batch-major, so by default this layer accepts input and emits output in batch-major form.

kwargs : dict

Passed on to tf.keras.layers.Layer.

build_cell(input_shapes)[source]

Create and return the RNN cell.

build(input_shapes)[source]

Builds the RNN/cell layers contained within this layer.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

call(inputs, training=None, initial_state=None, constants=None)[source]

Apply this layer to inputs.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

reset_states(states=None)[source]

Reset the internal state of the layer (only necessary if stateful=True).

Parameters
states : ndarray

Optional state array that can be used to override the values returned by cell.get_initial_state, where cell is returned by build_cell.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

class keras_spiking.SpikingActivationCell(*args, **kwargs)[source]

RNN cell for converting an arbitrary activation function to a spiking equivalent.

Neurons will spike at a rate proportional to the output of the base activation function. For example, if the activation function is outputting a value of 10, then the wrapped SpikingActivationCell will output spikes at a rate of 10Hz (i.e., 10 spikes per 1 simulated second, where 1 simulated second is equivalent to 1/dt time steps). Each spike will have height 1/dt (so that the integral of the spiking output will be the same as the integral of the base activation output). Note that if the base activation is outputting a negative value then the spikes will have height -1/dt. Multiple spikes per timestep are also possible, in which case the output will be n/dt (where n is the number of spikes).
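
As a concrete sketch of this rate coding (illustrative only, for positive rates; this mirrors the description above but is not necessarily the library's exact implementation):

```python
import numpy as np

def spiking_step(rate, voltage, dt):
    # Integrate the base activation output into a voltage; whenever the
    # voltage crosses an integer threshold, emit that many spikes of
    # height 1/dt and subtract them from the voltage.
    voltage = voltage + rate * dt
    n = np.floor(voltage)
    return n / dt, voltage - n

dt = 0.001
voltage = 0.0
n_spikes = 0.0
for _ in range(1000):  # 1 simulated second (1/dt timesteps)
    out, voltage = spiking_step(10.0, voltage, dt)
    n_spikes += out * dt  # out is n/dt, so out * dt recovers n

# a base activation output of 10 yields roughly 10 spikes per simulated
# second, so the integral of the spiking output matches the base output
print(n_spikes)
```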

Parameters
size : int or tuple of int or tf.TensorShape

Input/output shape of the layer (not including batch/time dimensions).

activation : callable

Activation function to be converted to spiking equivalent.

dt : float

Length of time (in seconds) represented by one time step. If None, uses keras_spiking.default.dt (which is 0.001 seconds by default).

seed : int

Seed for random state initialization.

spiking_aware_training : bool

If True (default), use the spiking activation function for the forward pass and the base activation function for the backward pass. If False, use the base activation function for the forward and backward pass during training.

kwargs : dict

Passed on to tf.keras.layers.Layer.

Notes

This cell needs to be wrapped in a tf.keras.layers.RNN, like

my_layer = tf.keras.layers.RNN(
    keras_spiking.SpikingActivationCell(size=10, activation=tf.nn.relu)
)

get_initial_state(inputs=None, batch_size=None, dtype=None)[source]

Set up initial spiking state.

Initial state is chosen from a uniform distribution, seeded based on the seed passed on construction (if one was given).

Note: the state will be initialized automatically; users do not need to call this method themselves.

call_training(inputs, states)[source]

Compute layer output when training and always_use_inference=False.

call_inference(inputs, states)

Compute spiking output, with custom gradient for spiking aware training.

Parameters
inputs : tf.Tensor

Input to the activation function.

voltage : tf.Tensor

Spiking voltage state.

Returns
spikes : tf.Tensor

Output spike values (0 or n/dt for each element in inputs, where n is the number of spikes).

voltage : tf.Tensor

Updated voltage state.

A custom gradient function is used for spiking-aware training.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

class keras_spiking.SpikingActivation(*args, **kwargs)[source]

Layer for converting an arbitrary activation function to a spiking equivalent.

Neurons will spike at a rate proportional to the output of the base activation function. For example, if the activation function is outputting a value of 10, then the wrapped SpikingActivationCell will output spikes at a rate of 10Hz (i.e., 10 spikes per 1 simulated second, where 1 simulated second is equivalent to 1/dt time steps). Each spike will have height 1/dt (so that the integral of the spiking output will be the same as the integral of the base activation output). Note that if the base activation is outputting a negative value then the spikes will have height -1/dt. Multiple spikes per timestep are also possible, in which case the output will be n/dt (where n is the number of spikes).

When applying this layer to an input, make sure that the input has a time axis (the time_major option controls whether it comes before or after the batch axis). The spiking output will be computed along the time axis. The number of simulation timesteps will depend on the length of that time axis. The number of timesteps does not need to be the same during training/evaluation/inference. In particular, it may be more efficient to use one timestep during training and multiple timesteps during inference (often with spiking_aware_training=False, and apply_during_training=False on any Lowpass layers).
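
For example, one way to set up such inputs (a NumPy sketch; the array names are illustrative):

```python
import numpy as np

x = np.random.uniform(-1, 1, size=(32, 8))  # (batch, features), no time axis yet

# train with a single timestep by adding a length-1 time axis
train_seq = x[:, None]                           # shape (32, 1, 8)

# run inference for 10 timesteps by repeating the input along the time axis
n_steps = 10
test_seq = np.tile(x[:, None], (1, n_steps, 1))  # shape (32, 10, 8)
```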

Parameters
activation : callable

Activation function to be converted to spiking equivalent.

dt : float

Length of time (in seconds) represented by one time step. If None, uses keras_spiking.default.dt (which is 0.001 seconds by default).

seed : int

Seed for random state initialization.

spiking_aware_training : bool

If True (default), use the spiking activation function for the forward pass and the base activation function for the backward pass. If False, use the base activation function for the forward and backward pass during training.

return_sequences : bool

Whether to return the full sequence of output spikes (default), or just the spikes on the last timestep.

return_state : bool

Whether to return the state in addition to the output.

stateful : bool

If False (default), each time the layer is called it will begin from the same initial conditions. If True, each call will resume from the terminal state of the previous call (my_layer.reset_states() can be called to reset the state to initial conditions).

unroll : bool

If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up computations, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.

time_major : bool

The shape format of the input and output tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major=True is a bit more efficient because it avoids transposes at the beginning and end of the layer calculation. However, most TensorFlow data is batch-major, so by default this layer accepts input and emits output in batch-major form.

kwargs : dict

Passed on to tf.keras.layers.Layer.

Notes

This is equivalent to tf.keras.layers.RNN(SpikingActivationCell(...) ...); it just takes care of the RNN construction automatically.

build_cell(input_shapes)[source]

Create and return the RNN cell.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

class keras_spiking.LowpassCell(*args, **kwargs)[source]

RNN cell for a lowpass filter.

The initial filter state and filter time constants are both trainable parameters. However, if apply_during_training=False then the parameters are not part of the training loop, and so will never be updated.

Parameters
size : int or tuple of int or tf.TensorShape

Input/output shape of the layer (not including batch/time dimensions).

tau : float

Time constant of filter (in seconds).

dt : float

Length of time (in seconds) represented by one time step. If None, uses keras_spiking.default.dt (which is 0.001 seconds by default).

apply_during_training : bool

If False, this layer will effectively be ignored during training (this often makes sense in concert with the swappable training behaviour in, e.g., SpikingActivation, since if the activations are not spiking during training then we often don't need to filter them either).

level_initializer : str or tf.keras.initializers.Initializer

Initializer for filter state.

kwargs : dict

Passed on to tf.keras.layers.Layer.

Notes

This cell needs to be wrapped in a tf.keras.layers.RNN, like

my_layer = tf.keras.layers.RNN(
    keras_spiking.LowpassCell(size=10, tau=0.01)
)

build(input_shapes)[source]

Build parameters associated with this layer.

get_initial_state(inputs=None, batch_size=None, dtype=None)[source]

Get initial filter state.

call_inference(inputs, states)[source]

Compute layer output when testing or always_use_inference=True.

call_training(inputs, states)[source]

Compute layer output when training and always_use_inference=False.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

class keras_spiking.Lowpass(*args, **kwargs)[source]

Layer implementing a lowpass filter.

The impulse-response function (time domain) and transfer function are:

$\begin{split}h(t) &= (1 / \tau) \exp(-t / \tau) \\ H(s) &= \frac{1}{\tau s + 1}\end{split}$
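
For reference, a minimal NumPy sketch of one common discretization of this filter (zero-order hold; this mirrors the equations above but is not necessarily the library's exact implementation):

```python
import numpy as np

def lowpass(x, tau, dt):
    # Zero-order-hold discretization of H(s) = 1 / (tau*s + 1):
    # level[t] = a * level[t-1] + (1 - a) * x[t], with a = exp(-dt / tau)
    a = np.exp(-dt / tau)
    level = 0.0
    out = np.empty(len(x))
    for t in range(len(x)):
        level = a * level + (1 - a) * x[t]
        out[t] = level
    return out

# the step response rises smoothly and settles at the input value
y = lowpass(np.ones(1000), tau=0.01, dt=0.001)
```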

The initial filter state and filter time constants are both trainable parameters. However, if apply_during_training=False then the parameters are not part of the training loop, and so will never be updated.

When applying this layer to an input, make sure that the input has a time axis (the time_major option controls whether it comes before or after the batch axis).

Parameters
tau : float

Time constant of filter (in seconds).

dt : float

Length of time (in seconds) represented by one time step. If None, uses keras_spiking.default.dt (which is 0.001 seconds by default).

apply_during_training : bool

If False, this layer will effectively be ignored during training (this often makes sense in concert with the swappable training behaviour in, e.g., SpikingActivation, since if the activations are not spiking during training then we often don't need to filter them either).

level_initializer : str or tf.keras.initializers.Initializer

Initializer for filter state.

return_sequences : bool

Whether to return the full sequence of filtered output (default), or just the output on the last timestep.

return_state : bool

Whether to return the state in addition to the output.

stateful : bool

If False (default), each time the layer is called it will begin from the same initial conditions. If True, each call will resume from the terminal state of the previous call (my_layer.reset_states() can be called to reset the state to initial conditions).

unroll : bool

If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up computations, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.

time_major : bool

The shape format of the input and output tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major=True is a bit more efficient because it avoids transposes at the beginning and end of the layer calculation. However, most TensorFlow data is batch-major, so by default this layer accepts input and emits output in batch-major form.

kwargs : dict

Passed on to tf.keras.layers.Layer.

Notes

This is equivalent to tf.keras.layers.RNN(LowpassCell(...) ...); it just takes care of the RNN construction automatically.

build_cell(input_shapes)[source]

Create and return the RNN cell.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

class keras_spiking.AlphaCell(*args, **kwargs)[source]

RNN cell for an alpha filter.

The initial filter state and filter time constants are both trainable parameters. However, if apply_during_training=False then the parameters are not part of the training loop, and so will never be updated.

Parameters
size : int or tuple of int or tf.TensorShape

Input/output shape of the layer (not including batch/time dimensions).

tau : float

Time constant of filter (in seconds).

dt : float

Length of time (in seconds) represented by one time step. If None, uses keras_spiking.default.dt (which is 0.001 seconds by default).

apply_during_training : bool

If False, this layer will effectively be ignored during training (this often makes sense in concert with the swappable training behaviour in, e.g., SpikingActivation, since if the activations are not spiking during training then we often don't need to filter them either).

level_initializer : str or tf.keras.initializers.Initializer

Initializer for filter state.

kwargs : dict

Passed on to tf.keras.layers.Layer.

Notes

This cell needs to be wrapped in a tf.keras.layers.RNN, like

my_layer = tf.keras.layers.RNN(keras_spiking.AlphaCell(size=10, tau=0.01))

build(input_shapes)[source]

Build parameters associated with this layer.

get_initial_state(inputs=None, batch_size=None, dtype=None)[source]

Get initial filter state.

call_inference(inputs, states)[source]

Compute layer output when testing or always_use_inference=True.

call_training(inputs, states)[source]

Compute layer output when training and always_use_inference=False.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

class keras_spiking.Alpha(*args, **kwargs)[source]

Layer implementing an alpha filter.

The impulse-response function (time domain) and transfer function are:

$\begin{split}h(t) &= (t / \tau^2) \exp(-t / \tau) \\ H(s) &= \frac{1}{(\tau s + 1)^2}\end{split}$
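
Since H(s) = 1/(τs + 1)² is two identical lowpass stages in series, the filter can be sketched as a cascade (an illustrative discretization, not necessarily the library's exact implementation):

```python
import numpy as np

def alpha(x, tau, dt):
    # Two cascaded lowpass stages realize H(s) = 1 / (tau*s + 1)^2,
    # each discretized as level = a * level + (1 - a) * input
    a = np.exp(-dt / tau)
    s1 = s2 = 0.0
    out = np.empty(len(x))
    for t in range(len(x)):
        s1 = a * s1 + (1 - a) * x[t]
        s2 = a * s2 + (1 - a) * s1
        out[t] = s2
    return out

# the step response starts with zero slope (unlike the lowpass filter)
# and settles at the input value
y = alpha(np.ones(1000), tau=0.01, dt=0.001)
```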

The initial filter state and filter time constants are both trainable parameters. However, if apply_during_training=False then the parameters are not part of the training loop, and so will never be updated.

When applying this layer to an input, make sure that the input has a time axis (the time_major option controls whether it comes before or after the batch axis).

Parameters
tau : float

Time constant of filter (in seconds).

dt : float

Length of time (in seconds) represented by one time step. If None, uses keras_spiking.default.dt (which is 0.001 seconds by default).

apply_during_training : bool

If False, this layer will effectively be ignored during training (this often makes sense in concert with the swappable training behaviour in, e.g., SpikingActivation, since if the activations are not spiking during training then we often don't need to filter them either).

level_initializer : str or tf.keras.initializers.Initializer

Initializer for filter state.

return_sequences : bool

Whether to return the full sequence of filtered output (default), or just the output on the last timestep.

return_state : bool

Whether to return the state in addition to the output.

stateful : bool

If False (default), each time the layer is called it will begin from the same initial conditions. If True, each call will resume from the terminal state of the previous call (my_layer.reset_states() can be called to reset the state to initial conditions).

unroll : bool

If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up computations, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.

time_major : bool

The shape format of the input and output tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major=True is a bit more efficient because it avoids transposes at the beginning and end of the layer calculation. However, most TensorFlow data is batch-major, so by default this layer accepts input and emits output in batch-major form.

kwargs : dict

Passed on to tf.keras.layers.Layer.

Notes

This is equivalent to tf.keras.layers.RNN(AlphaCell(...) ...); it just takes care of the RNN construction automatically.

build_cell(input_shapes)[source]

Create and return the RNN cell.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

Regularizers

Regularization methods designed to work with spiking layers.

keras_spiking.regularizers.RangedRegularizer
    A regularizer that penalizes values that fall outside a range.
keras_spiking.regularizers.L1L2
    A version of tf.keras.regularizers.L1L2 that allows the user to specify a nonzero target output.
keras_spiking.regularizers.L1
    A version of tf.keras.regularizers.L1 that allows the user to specify a nonzero target output.
keras_spiking.regularizers.L2
    A version of tf.keras.regularizers.L2 that allows the user to specify a nonzero target output.
keras_spiking.regularizers.Percentile
    A regularizer that penalizes a percentile of a tensor.
class keras_spiking.regularizers.RangedRegularizer(target=0, regularizer=<tensorflow.python.keras.regularizers.L1L2 object>)[source]

A regularizer that penalizes values that fall outside a range.

This allows regularized values to fall anywhere within the range, as opposed to standard regularizers that penalize any departure from some fixed point.

Parameters
target : float or tuple

The value that we want the regularized outputs to be driven towards. Can be a float, in which case all outputs will be driven towards that value, or a tuple specifying a range (min, max), in which case outputs outside that range will be driven towards that range (but outputs within the range will not be penalized).

regularizer : tf.keras.regularizers.Regularizer

Regularization penalty that will be applied to the outputs with respect to target.
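
The range behaviour can be illustrated in plain NumPy (an L2-style sketch; the actual class delegates to the wrapped regularizer):

```python
import numpy as np

def ranged_l2(values, target, weight=1.0):
    # distance from each value to the [min, max] target range;
    # values inside the range incur no penalty
    lo, hi = target
    dist = np.maximum(values - hi, 0.0) + np.maximum(lo - values, 0.0)
    return weight * np.sum(dist ** 2)

# 3.0 is 2 below the range, 7.0 is inside, 12.0 is 2 above:
# penalty = 2**2 + 0 + 2**2 = 8
ranged_l2(np.array([3.0, 7.0, 12.0]), target=(5.0, 10.0))
```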

get_config()[source]

Return config (for serialization during model saving/loading).

classmethod from_config(config)[source]

Create a new instance from the serialized config.

class keras_spiking.regularizers.L1L2(l1=0.0, l2=0.0, target=0, **kwargs)[source]

A version of tf.keras.regularizers.L1L2 that allows the user to specify a nonzero target output.

Parameters
l1 : float

Weight on L1 regularization penalty.

l2 : float

Weight on L2 regularization penalty.

target : float or tuple

The value that we want the regularized outputs to be driven towards. Can be a float, in which case all outputs will be driven towards that value, or a tuple specifying a range (min, max), in which case outputs outside that range will be driven towards that range (but outputs within the range will not be penalized).

get_config()[source]

Return config (for serialization during model saving/loading).

classmethod from_config(config)[source]

Create a new instance from the serialized config.

class keras_spiking.regularizers.L1(l1=0.01, target=0, **kwargs)[source]

A version of tf.keras.regularizers.L1 that allows the user to specify a nonzero target output.

Parameters
l1 : float

Weight on L1 regularization penalty.

target : float or tuple

The value that we want the regularized outputs to be driven towards. Can be a float, in which case all outputs will be driven towards that value, or a tuple specifying a range (min, max), in which case outputs outside that range will be driven towards that range (but outputs within the range will not be penalized).

class keras_spiking.regularizers.L2(l2=0.01, target=0, **kwargs)[source]

A version of tf.keras.regularizers.L2 that allows the user to specify a nonzero target output.

Parameters
l2 : float

Weight on L2 regularization penalty.

target : float or tuple

The value that we want the regularized outputs to be driven towards. Can be a float, in which case all outputs will be driven towards that value, or a tuple specifying a range (min, max), in which case outputs outside that range will be driven towards that range (but outputs within the range will not be penalized).

class keras_spiking.regularizers.Percentile(percentile=100, axis=0, target=0, l1=0, l2=0)[source]

A regularizer that penalizes a percentile of a tensor.

This regularizer finds the requested percentile of the data over the axis, and then applies a regularizer to the percentile values with respect to target. This can be useful because it makes the computed regularization penalty more invariant to outliers.

Parameters
percentile : float

Percentile to compute over the axis. Defaults to 100, which is equivalent to taking the maximum across the specified axis.

Note

For percentile != 100, requires tensorflow-probability.

axis : int or tuple of int

Axis or axes to take the percentile over.

target : float or tuple

The value that we want the regularized outputs to be driven towards. Can be a float, in which case all outputs will be driven towards that value, or a tuple specifying a range (min, max), in which case outputs outside that range will be driven towards that range (but outputs within the range will not be penalized).

l1 : float

Weight on L1 regularization penalty applied to percentiles.

l2 : float

Weight on L2 regularization penalty applied to percentiles.

Examples

In the following example, we use Percentile to ensure that the neuron activities (i.e., firing rates) fall within the desired range of 5-10 Hz when computing the product of two inputs.

import numpy as np
import tensorflow as tf
import keras_spiking

train_x = np.random.uniform(-1, 1, size=(1024 * 100, 2))
train_y = train_x[:, :1] * train_x[:, 1:]
test_x = np.random.uniform(-1, 1, size=(128, 2))
test_y = test_x[:, :1] * test_x[:, 1:]

# train using one timestep, to speed things up
train_seq = train_x[:, None]

# test using 10 timesteps
n_steps = 10
test_seq = np.tile(test_x[:, None], (1, n_steps, 1))

inp = x = tf.keras.Input((None, 2))
x = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(50))(x)
x = spikes = keras_spiking.SpikingActivation(
    "relu",
    dt=1,
    activity_regularizer=keras_spiking.regularizers.Percentile(
        target=(5, 10), l2=0.01
    ),
)(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
x = tf.keras.layers.Dense(1)(x)

model = tf.keras.Model(inp, (x, spikes))

model.compile(
    # note: we use a dict to specify loss/metrics because we only want to
    # apply these to the final dense output, not the spike layer
    optimizer="rmsprop", loss={"dense_1": "mse"}, metrics={"dense_1": "mae"}
)
model.fit(train_seq, train_y, epochs=5)

outputs, spikes = model.predict(test_seq)

# estimate rates by averaging over time
rates = spikes.mean(axis=1)
max_rates = rates.max(axis=0)
print("Max rates: %s, %s" % (max_rates.mean(), max_rates.std()))

error = np.mean(np.abs(outputs - test_y))
print("MAE: %s" % (error,))

get_config()[source]

Return config (for serialization during model saving/loading).

Callbacks

Callbacks for use with KerasSpiking models.

keras_spiking.callbacks.DtScheduler
    A callback for updating Layer dt attributes during training.
class keras_spiking.callbacks.DtScheduler(dt, scheduler, verbose=False)[source]

A callback for updating Layer dt attributes during training.

This uses the same scheduler interface as TensorFlow’s learning rate schedulers, so any of those built-in schedules can be used to adjust dt, or a custom function implementing the same interface.

When using this functionality, dt should be initialized as a tf.Variable, and that Variable should be passed as the dt parameter to any Layers that should be affected by this callback.

For example:

dt = tf.Variable(1.0)

inp = tf.keras.Input((None, 10))
x = keras_spiking.SpikingActivation("relu", dt=dt)(inp)
x = keras_spiking.Lowpass(0.1, dt=dt)(x)
model = tf.keras.Model(inp, x)

callback = keras_spiking.callbacks.DtScheduler(
    dt, tf.optimizers.schedules.ExponentialDecay(
        1.0, decay_steps=5, decay_rate=0.9
    )
)

model.compile(loss="mse", optimizer="sgd")
model.fit(
    np.ones((100, 2, 10)),
    np.ones((100, 2, 10)),
    epochs=10,
    batch_size=20,
    callbacks=[callback],
)

Parameters
dt : tf.Variable

Variable representing dt that has been passed to other Layers.

scheduler : tf.optimizers.schedules.LearningRateSchedule

A schedule class that will update dt based on the training step (one training step is one minibatch worth of training).

verbose : bool

If True, print out some information about dt updates during training.
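
The ExponentialDecay schedule used in the example above computes its value per training step; a plain-Python sketch of that formula:

```python
def exponential_decay(initial, decay_rate, decay_steps, step):
    # same formula as tf.optimizers.schedules.ExponentialDecay
    # (with staircase=False, the default)
    return initial * decay_rate ** (step / decay_steps)

# with the values from the example above (1.0, decay_steps=5, decay_rate=0.9):
exponential_decay(1.0, decay_rate=0.9, decay_steps=5, step=0)  # 1.0
exponential_decay(1.0, decay_rate=0.9, decay_steps=5, step=5)  # 0.9
```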

Notes

Because Variable values persist over time, any changes made to dt by this callback will persist after training completes. For example, if you call fit with this callback and then predict later on, that predict call will be using the last dt value set by this callback.

on_epoch_begin(epoch, logs=None)[source]

Keep track of the current epoch so we can count the total number of steps.

on_train_batch_begin(batch, logs=None)[source]

Update dt variable based on the current training step.

Configuration

Configuration options for KerasSpiking layers.

keras_spiking.config.DefaultManager
    Manages the default parameter values for KerasSpiking layers.
class keras_spiking.config.DefaultManager(dt=0.001)[source]

Manages the default parameter values for KerasSpiking layers.

Parameters
dt : float

Length of time (in seconds) represented by one time step. Defaults to 0.001s.

Notes

Do not instantiate this class directly; instead, access it through keras_spiking.default.
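
The fallback pattern this manager provides can be sketched in plain Python (illustrative only; resolve_dt is a hypothetical helper, not part of the library):

```python
class DefaultManager:
    # minimal sketch of a shared-defaults object like keras_spiking.default
    def __init__(self, dt=0.001):
        self.dt = dt

default = DefaultManager()

def resolve_dt(dt=None):
    # layers fall back to the shared default when dt is not given
    return default.dt if dt is None else dt

default.dt = 0.01   # change the default for subsequently created layers
resolve_dt()        # 0.01
resolve_dt(0.005)   # 0.005
```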