Release history
Changelog
0.3.0 (November 8, 2021)
Compatible with TensorFlow 2.1.0 - 2.7.0
Added
LowpassCell, Lowpass, AlphaCell, and Alpha layers now accept both initial_level_constraint and tau_constraint to customize how their respective parameters are constrained during training. (#21)
Changed
The tau time constants for LowpassCell, Lowpass, AlphaCell, and Alpha are now always clipped to be positive in the forward pass rather than constraining the underlying trainable weights in between gradient updates. (#21)
Renamed the Lowpass/Alpha tau parameter to tau_initializer, and it now accepts tf.keras.initializers.Initializer objects (in addition to floats, as before). Renamed the tau_var weight attribute to tau (see the sketch below). (#21)
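A minimal sketch of the 0.3.0 parameter names described above, assuming standard tf.keras initializer and constraint classes; the specific values and constraints are illustrative, not recommendations:

```python
import tensorflow as tf
import keras_spiking

# Illustrative only: tau_initializer accepts a float or a
# tf.keras.initializers.Initializer, and the *_constraint arguments customize
# how the corresponding parameters are constrained during training.
lowpass = keras_spiking.Lowpass(
    tau_initializer=tf.keras.initializers.Constant(0.05),
    tau_constraint=tf.keras.constraints.NonNeg(),
    initial_level_constraint=tf.keras.constraints.MaxNorm(1.0),
)
# Once the layer is built, the trained time constants are exposed through the
# renamed `tau` weight attribute (formerly `tau_var`).
```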
Fixed
SpikingActivation, Lowpass, and Alpha layers will now correctly use keras_spiking.default.dt. (#20)
0.2.0 (February 18, 2021)
Compatible with TensorFlow 2.1.0 - 2.4.0
Added
Added the keras_spiking.Alpha filter, which provides second-order lowpass filtering for better noise removal for spiking layers. (#4)
Added keras_spiking.callbacks.DtScheduler, which can be used to update layer dt parameters during training. (#5)
Added keras_spiking.default.dt, which can be used to set the default dt for all layers that don’t directly specify dt. (#5)
Added keras_spiking.regularizers.RangedRegularizer, which can be used to apply some other regularizer (e.g. tf.keras.regularizers.L2) with respect to some non-zero target point, or a range of acceptable values. This functionality has also been added to keras_spiking.regularizers.L1L2/L1/L2 (so they can now be applied with respect to a single reference point or a range). (#6)
Added keras_spiking.regularizers.Percentile, which computes a percentile across a number of examples and regularizes that statistic. (#6)
Added keras_spiking.ModelEnergy to estimate energy usage for Keras Models (see the sketch after this list). (#7)
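A minimal sketch of two of the additions above (keras_spiking.default.dt and keras_spiking.ModelEnergy), assuming a toy sequential model; the layer sizes, dt value, and the resulting estimates are illustrative only:

```python
import tensorflow as tf
import keras_spiking

# Set the default simulation timestep for all layers that don't specify `dt`
# directly (the value here is arbitrary). During training, a layer's `dt`
# could instead be driven by keras_spiking.callbacks.DtScheduler.
keras_spiking.default.dt = 0.01

# A toy model with a time dimension: (batch, timesteps, features).
model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(None, 784)),
        tf.keras.layers.Dense(128),
        keras_spiking.SpikingActivation("relu"),
        tf.keras.layers.Dense(10),
    ]
)

# Estimate energy usage for the model and print a summary of the estimates.
energy = keras_spiking.ModelEnergy(model)
energy.summary()
```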
Changed
keras_spiking.SpikingActivation and keras_spiking.Lowpass now return sequences by default. This means that these layers will now have outputs with the same number of timesteps as their inputs, which makes it easier to create multi-layer spiking networks where time is preserved throughout the network (see the sketch after this list). The spiking fashion-MNIST example has been updated accordingly. (#3)
Layers now support multi-dimensional inputs (e.g., the output of Conv2D layers). (#5)
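A minimal sketch of stacking spiking layers now that sequences are returned by default; it uses the current tau_initializer name (this parameter was still called tau in 0.2.0), and the shapes and sizes are illustrative:

```python
import tensorflow as tf
import keras_spiking

# Each spiking layer's output keeps the same number of timesteps as its input,
# so layers can be stacked directly and time is preserved through the network.
inp = tf.keras.Input(shape=(None, 784))              # (batch, timesteps, features)
x = tf.keras.layers.Dense(128)(inp)                  # applied to the last axis, per timestep
x = keras_spiking.SpikingActivation("relu")(x)       # (batch, timesteps, 128)
x = tf.keras.layers.Dense(10)(x)
out = keras_spiking.Lowpass(tau_initializer=0.1)(x)  # filtered output, all timesteps kept
model = tf.keras.Model(inp, out)
```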
Fixed