
Keras Spiking

Keras Spiking provides tools for incorporating spiking neuron activation functions directly within the Keras framework. The main feature is keras_spiking.SpikingActivation, which can be used to transform any activation function into a spiking equivalent. For example, we can translate a non-spiking model, such as

import tensorflow as tf

inp = tf.keras.Input((5,))
dense = tf.keras.layers.Dense(10)(inp)
act = tf.keras.layers.Activation("relu")(dense)
model = tf.keras.Model(inp, act)

into the spiking equivalent:

import keras_spiking

# add a time dimension to the inputs
inp = tf.keras.Input((None, 5))
# use Keras' TimeDistributed wrapper to apply Dense independently at each timestep
dense = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(10))(inp)
# replace Activation with SpikingActivation
act = keras_spiking.SpikingActivation("relu")(dense)
model = tf.keras.Model(inp, act)

Models with SpikingActivation layers can be optimized and evaluated in the same way as any other Keras model. They will automatically take advantage of Keras Spiking's "spiking aware training": the spiking activation is used on the forward pass, while the non-spiking (differentiable) activation function is used on the backward pass, so that gradients can flow through the discrete spikes.
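To build intuition for what a spiking activation does, here is a minimal NumPy sketch of the general idea: treat the underlying activation's output as a firing rate, accumulate it over time in a "voltage", and emit a discrete spike of magnitude 1/dt whenever the voltage crosses a threshold. This is an illustrative integrate-and-fire style accumulator, not Keras Spiking's actual implementation; the dt and n_steps values are arbitrary.

```python
import numpy as np

def spiking_relu(x, dt=1 / 1024, n_steps=256):
    """Emit spike trains whose time-averaged values approximate relu(x)."""
    rates = np.maximum(x, 0.0)          # firing rates from the underlying relu
    voltage = np.zeros_like(rates)      # per-unit accumulator ("membrane voltage")
    spikes = np.zeros((n_steps,) + rates.shape)
    for t in range(n_steps):
        voltage += rates * dt           # integrate the input rate over one timestep
        fired = voltage >= 1.0          # threshold crossing
        spikes[t] = fired / dt          # each spike has magnitude 1/dt
        voltage -= fired                # reset the units that fired
    return spikes

out = spiking_relu(np.array([0.0, 8.0, 32.0]))
# averaged over time, the spike trains recover the relu rates (0, 8, 32)
print(out.mean(axis=0))
```

The key property is that individual timesteps contain only zeros and spikes, but the time average converges to the non-spiking activation's output, which is why longer simulations produce smoother results.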

Keras Spiking also includes various tools to assist in the training of spiking models, such as additional regularizers and filtering layers.
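Filtering matters because raw spike trains are noisy: downstream layers see sequences of zeros punctuated by large spikes. A lowpass filter smooths these into a continuous signal. As a sketch of the basic mechanism, here is a first-order exponential filter in NumPy; this illustrates the concept only, not the library's implementation, and the tau/dt values are arbitrary.

```python
import numpy as np

def lowpass_filter(signal, tau=0.01, dt=0.001):
    """Apply a first-order lowpass filter along the time axis (axis 0)."""
    alpha = 1 - np.exp(-dt / tau)       # discretized smoothing coefficient
    filtered = np.zeros_like(signal)
    state = np.zeros(signal.shape[1:])
    for t in range(signal.shape[0]):
        # exponential moving average: blend new input into the running state
        state = (1 - alpha) * state + alpha * signal[t]
        filtered[t] = state
    return filtered

# a spike train with amplitude 10 firing every 10th step (mean value 1.0)
spikes = np.zeros((1000, 1))
spikes[::10] = 10.0
filtered = lowpass_filter(spikes)
# after the transient, the filtered signal fluctuates around the mean rate
print(filtered[-100:].mean())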

If you are interested in building and optimizing spiking neuron models, you may also be interested in NengoDL. See this page for a comparison of the different use cases supported by these two packages.