Release history


0.3.1 (unreleased)

Compatible with TensorFlow 2.3 - 2.11

0.3.0 (November 8, 2021)

Compatible with TensorFlow 2.1.0 - 2.7.0


  • LowpassCell, Lowpass, AlphaCell, and Alpha layers now accept both initial_level_constraint and tau_constraint to customize how their respective parameters are constrained during training. (#21)


  • The tau time constants for LowpassCell, Lowpass, AlphaCell, and Alpha are now always clipped to be positive in the forward pass, rather than having the underlying trainable weights constrained between gradient updates. (#21)
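
    The new behavior can be sketched in NumPy. This is a simplified illustration only, not KerasSpiking's actual implementation; the function name ``lowpass_step`` and the clipping floor are hypothetical:

    ```python
    import numpy as np

    def lowpass_step(x, state, tau_raw, dt=0.001):
        # Clip the time constant in the forward pass, so the underlying
        # trainable weight itself is never modified between gradient updates.
        tau = np.clip(tau_raw, dt * 1e-3, None)
        decay = np.exp(-dt / tau)
        return decay * state + (1 - decay) * x

    # Even non-positive raw tau values produce a valid filter step
    state = lowpass_step(np.ones(3), np.zeros(3), tau_raw=np.array([-0.1, 0.0, 0.01]))
    ```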

  • Renamed the Lowpass/Alpha tau parameter to tau_initializer, and it now accepts tf.keras.initializers.Initializer objects (in addition to floats, as before). Renamed the tau_var weight attribute to tau. (#21)


  • SpikingActivation, Lowpass, and Alpha layers will now correctly use keras_spiking.default.dt. (#20)

0.2.0 (February 18, 2021)

Compatible with TensorFlow 2.1.0 - 2.4.0


  • Added the keras_spiking.Alpha filter, which provides second-order lowpass filtering for better noise removal for spiking layers. (#4)
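
    Conceptually, an alpha filter behaves like two identical first-order lowpass filters in series, which gives a steeper rolloff and smoother output. A minimal NumPy sketch of that idea (``lowpass`` and ``alpha`` here are illustrative helpers, not the library's API):

    ```python
    import numpy as np

    def lowpass(x, tau, dt=0.001):
        # First-order exponential filter along the time axis (axis 0)
        decay = np.exp(-dt / tau)
        out = np.zeros_like(x)
        state = np.zeros(x.shape[1:])
        for t in range(len(x)):
            state = decay * state + (1 - decay) * x[t]
            out[t] = state
        return out

    def alpha(x, tau, dt=0.001):
        # An alpha filter is equivalent to two identical first-order
        # lowpass filters applied in series
        return lowpass(lowpass(x, tau, dt), tau, dt)

    # Noisy spike train: the alpha filter smooths it more aggressively
    rng = np.random.default_rng(0)
    spikes = (rng.random((100, 1)) < 0.2).astype(float)
    smoothed = alpha(spikes, tau=0.01)
    ```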

  • Added keras_spiking.callbacks.DtScheduler, which can be used to update layer dt parameters during training. (#5)
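
    The idea can be sketched as a toy callback. Everything here is a hypothetical stand-in (the class name, the linear schedule, and the mutable-cell mechanism); the real DtScheduler integrates with tf.keras training rather than this hand-rolled loop:

    ```python
    class DtSchedulerSketch:
        """Toy callback that anneals dt over training epochs."""

        def __init__(self, dt_cell, dt_start=0.1, dt_end=0.001, n_epochs=10):
            # dt_cell is a mutable container shared with the layers that read dt
            self.dt_cell = dt_cell
            self.dt_start, self.dt_end, self.n_epochs = dt_start, dt_end, n_epochs

        def on_epoch_begin(self, epoch):
            # Linearly interpolate from dt_start to dt_end across epochs
            frac = min(epoch / max(self.n_epochs - 1, 1), 1.0)
            self.dt_cell[0] = self.dt_start + frac * (self.dt_end - self.dt_start)

    dt = [0.1]
    sched = DtSchedulerSketch(dt)
    sched.on_epoch_begin(0)   # dt stays at dt_start
    sched.on_epoch_begin(9)   # dt reaches dt_end
    ```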

  • Added keras_spiking.default.dt, which can be used to set the default dt for all layers that don’t directly specify dt. (#5)

  • Added keras_spiking.regularizers.RangedRegularizer, which can be used to apply some other regularizer (e.g. tf.keras.regularizers.L2) with respect to some non-zero target point, or a range of acceptable values. This functionality has also been added to keras_spiking.regularizers.L1L2/L1/L2 (so they can now be applied with respect to a single reference point or a range). (#6)
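
    The idea behind a ranged regularizer, as a NumPy sketch: zero penalty inside the target range, quadratic penalty on the distance to the nearest edge outside it. The helper ``ranged_l2`` is hypothetical, not the library's API:

    ```python
    import numpy as np

    def ranged_l2(x, target_min=0.0, target_max=1.0, weight=0.01):
        # Distance below the range (negative or zero) and above it (positive or zero)
        below = np.minimum(x - target_min, 0.0)
        above = np.maximum(x - target_max, 0.0)
        # L2-style penalty only on the out-of-range portion
        return weight * np.sum(below**2 + above**2)

    penalty_inside = ranged_l2(np.array([0.2, 0.8]))    # within [0, 1]
    penalty_outside = ranged_l2(np.array([-1.0, 2.0]))  # outside [0, 1]
    ```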

  • Added keras_spiking.regularizers.Percentile, which computes a percentile across a number of examples and regularizes that statistic. (#6)
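
    A minimal sketch of the idea (``percentile_l2`` is a hypothetical helper): compute a percentile statistic per neuron across the batch/time dimension, then regularize its distance from a target value:

    ```python
    import numpy as np

    def percentile_l2(activities, percentile=99.9, target=1.0, weight=0.01):
        # Percentile over the example dimension, one statistic per neuron
        stat = np.percentile(activities, percentile, axis=0)
        # Penalize the squared distance of that statistic from the target
        return weight * np.sum((stat - target) ** 2)

    silent = percentile_l2(np.zeros((100, 2)))   # all-zero activity is penalized
    on_target = percentile_l2(np.ones((10, 3)))  # activity at the target is not
    ```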

  • Added keras_spiking.ModelEnergy to estimate energy usage for Keras Models. (#7)
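
    The estimation idea, as a hand-rolled sketch: count synaptic operations and multiply by an energy cost per operation. ``estimate_energy`` is not the library's API, and the per-operation figure here is an arbitrary placeholder (ModelEnergy uses per-device energy profiles):

    ```python
    def estimate_energy(connections, rate, timesteps, dt=0.001, joules_per_op=8.6e-9):
        """Rough layer energy estimate: synaptic ops per step x steps x J/op.

        connections: number of synaptic connections in the layer
        rate: average input firing rate (spikes/s) driving those connections
        """
        ops_per_step = connections * rate * dt
        return ops_per_step * timesteps * joules_per_op

    energy = estimate_energy(connections=1000, rate=10, timesteps=100)
    ```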


  • keras_spiking.SpikingActivation and keras_spiking.Lowpass now return sequences by default, meaning that these layers' outputs have the same number of timesteps as their inputs. This makes it easier to create multi-layer spiking networks in which time is preserved throughout the network. The spiking fashion-MNIST example has been updated accordingly. (#3)
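
    The shape contract can be illustrated with a toy rate-based sketch (``spiking_relu_sequence`` is hypothetical, not the library's API): because the time dimension is preserved, a second spiking layer can consume the first layer's output directly:

    ```python
    import numpy as np

    def spiking_relu_sequence(x, dt=0.001, seed=0):
        # Toy per-timestep "spiking" activation: emit a spike with
        # probability rate * dt, scaled by 1/dt so rates are preserved.
        rng = np.random.default_rng(seed)
        rates = np.maximum(x, 0.0)
        return (rng.random(x.shape) < rates * dt).astype(float) / dt

    x = np.ones((50, 4))                              # (timesteps, units)
    out = spiking_relu_sequence(x)                    # same shape as x
    stacked = spiking_relu_sequence(out * dt_scale, seed=1) if False else \
        spiking_relu_sequence(out * 0.001, seed=1)    # stack a second layer
    ```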

  • Layers now support multi-dimensional inputs (e.g., output of Conv2D layers). (#5)


  • KerasSpiking layers’ reset_state now resets to the value of get_initial_state (as documented in the docstring), rather than all zeros. (#12)

  • Fixed a bug with keras_spiking.Alpha on TensorFlow 2.1, where a symbolic tensor in the initial state shape could not be converted to a Numpy array. (#16)

0.1.0 (August 14, 2020)

Compatible with TensorFlow 2.1.0 - 2.3.0

Initial release