Release history
Changelog
0.3.1 (unreleased)
Compatible with TensorFlow 2.3 - 2.11
0.3.0 (November 8, 2021)
Compatible with TensorFlow 2.1.0 - 2.7.0
Added
- `LowpassCell`, `Lowpass`, `AlphaCell`, and `Alpha` layers now accept both `initial_level_constraint` and `tau_constraint` to customize how their respective parameters are constrained during training. (#21)
Changed
- The `tau` time constants for `LowpassCell`, `Lowpass`, `AlphaCell`, and `Alpha` are now always clipped to be positive in the forward pass, rather than constraining the underlying trainable weights in between gradient updates. (#21)
- Renamed the `Lowpass`/`Alpha` `tau` parameter to `tau_initializer`, and it now accepts `tf.keras.initializers.Initializer` objects (in addition to floats, as before). Renamed the `tau_var` weight attribute to `tau`. (#21)
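The forward-pass clipping described above can be illustrated with a minimal sketch (hypothetical code with an assumed clipping floor of `dt`; this is not KerasSpiking's actual implementation):

```python
import numpy as np

def lowpass_step(x, level, tau_raw, dt=0.001):
    """One step of a discretized first-order lowpass filter.

    The raw trainable weight ``tau_raw`` is left unconstrained and may
    drift negative during optimization; the effective time constant is
    clipped in the forward pass so the filter stays stable.
    """
    tau = np.maximum(tau_raw, dt)  # clip here, instead of constraining tau_raw
    decay = np.exp(-dt / tau)      # smoothing factor in (0, 1)
    return decay * level + (1 - decay) * x

# Even with a negative raw weight, the output remains well-behaved:
out = lowpass_step(x=1.0, level=0.0, tau_raw=-0.5)
```

Because the clip happens inside the forward computation, gradients still flow to the underlying weight whenever it is above the floor, without any post-update projection step.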
Fixed
- `SpikingActivation`, `Lowpass`, and `Alpha` layers will now correctly use `keras_spiking.default.dt`. (#20)
0.2.0 (February 18, 2021)
Compatible with TensorFlow 2.1.0 - 2.4.0
Added
- Added the `keras_spiking.Alpha` filter, which provides second-order lowpass filtering for improved noise removal on spiking layer outputs. (#4)
- Added `keras_spiking.callbacks.DtScheduler`, which can be used to update layer `dt` parameters during training. (#5)
- Added `keras_spiking.default.dt`, which can be used to set the default `dt` for all layers that don't directly specify `dt`. (#5)
- Added `keras_spiking.regularizers.RangedRegularizer`, which can be used to apply some other regularizer (e.g. `tf.keras.regularizers.L2`) with respect to some non-zero target point, or a range of acceptable values. This functionality has also been added to `keras_spiking.regularizers.L1L2`/`L1`/`L2` (so they can now be applied with respect to a single reference point or a range). (#6)
- Added `keras_spiking.regularizers.Percentile`, which computes a percentile across a number of examples and regularizes that statistic. (#6)
- Added `keras_spiking.ModelEnergy` to estimate energy usage for Keras models. (#7)
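As a rough illustration of why a second-order filter removes noise more effectively than a first-order one, an alpha filter can be sketched as two cascaded lowpass stages (a hypothetical numpy sketch, not KerasSpiking's implementation):

```python
import numpy as np

def lowpass_filter(signal, tau, dt=0.001):
    """First-order lowpass filter applied over a 1-D signal."""
    decay = np.exp(-dt / tau)
    level = 0.0
    out = np.empty_like(signal, dtype=float)
    for i, x in enumerate(signal):
        level = decay * level + (1 - decay) * x
        out[i] = level
    return out

def alpha_filter(signal, tau, dt=0.001):
    """Second-order ("alpha") filter: two cascaded lowpass stages."""
    return lowpass_filter(lowpass_filter(signal, tau, dt), tau, dt)

# A lone spike (impulse noise) is attenuated more strongly by the
# second-order filter than by a single lowpass stage:
spike = np.zeros(100)
spike[0] = 1.0
single = lowpass_filter(spike, tau=0.01)
double = alpha_filter(spike, tau=0.01)
```

The trade-off is that the cascaded filter introduces additional smoothing delay, which is why a first-order lowpass remains available as the lighter-weight option.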
Changed
- `keras_spiking.SpikingActivation` and `keras_spiking.Lowpass` now return sequences by default. This means these layers' outputs have the same number of timesteps as their inputs, which makes it easier to create multi-layer spiking networks in which time is preserved throughout the network. The spiking fashion-MNIST example has been updated accordingly. (#3)
- Layers now support multi-dimensional inputs (e.g., the output of `Conv2D` layers). (#5)
Fixed