Release history
0.9.1 (unreleased)
Compatible with TensorFlow 2.8 - 2.19
0.9.0 (November 21, 2025)
Compatible with TensorFlow 2.8 - 2.19
0.8.0 (July 3, 2025)
Compatible with TensorFlow 2.8 - 2.19
Added
- Added an ``input_d`` parameter to ``LMUCell``. This only needs to be specified when ``hidden_cell=None`` and ``input_to_hidden=True``; in that scenario it is required in order to accurately set ``LMUCell.output_size``. (#56)
0.7.0 (July 20, 2023)
Compatible with TensorFlow 2.4 - 2.13
Changed
- Minimum supported Python version is now 3.8 (3.7 reached end of life in June 2023). (#54)
0.6.0 (May 5, 2023)
Compatible with TensorFlow 2.4 - 2.11
Changed
- ``LMUFeedforward`` can now be used with unknown sequence lengths, and ``LMU`` will use ``LMUFeedforward`` for unknown sequence lengths (as long as the other conditions are met, as before). (#52)
- Allow ``input_to_hidden=True`` with ``hidden_cell=None``. This will act as a skip connection. (#52)
- Changed order of LMU states so that the LMU memory state always comes first, and any states from the hidden cell come afterwards. (#52)
Fixed
- Fixed errors when setting non-default dtype on LMU layers. (#52)
0.5.0 (January 26, 2023)
Compatible with TensorFlow 2.4 - 2.11
Added
- Layers are registered with the Keras serialization system (no longer need to be passed as ``custom_objects``). (#49)
0.4.2 (May 17, 2022)
Compatible with TensorFlow 2.1 - 2.9
Added
- Added support for TensorFlow 2.9. (#48)
0.4.1 (February 10, 2022)
Compatible with TensorFlow 2.1 - 2.8
Added
0.4.0 (August 16, 2021)
Compatible with TensorFlow 2.1 - 2.7
Added
- Setting ``kernel_initializer=None`` now removes the dense input kernel. (#40)
- The ``keras_lmu.LMUFFT`` layer now supports ``memory_d > 1``. ``keras_lmu.LMU`` now uses this implementation for all values of ``memory_d`` when feedforward conditions are satisfied (no hidden-to-memory or memory-to-memory connections, and the sequence length is not ``None``). (#40)
- Added ``trainable_theta`` option, which will allow the ``theta`` parameter to be learned during training. (#41)
- Added ``discretizer`` option, which controls the method used to solve for the ``A`` and ``B`` LMU matrices. This is mainly useful in combination with ``trainable_theta=True``, where setting ``discretizer="euler"`` may improve the training speed (possibly at the cost of some accuracy). (#41)
- The ``keras_lmu.LMUFFT`` layer can now use raw convolution internally (as opposed to FFT-based convolution). The new ``conv_mode`` option exposes this. The new ``truncate_ir`` option allows truncating the impulse response when running with a raw convolution mode, for efficiency. Whether FFT-based or raw convolution is faster depends on the specific model, hardware, and amount of truncation. (#42)
Changed
- The ``A`` and ``B`` matrices are now stored as constants instead of non-trainable variables. This can improve the training/inference speed, but it means that saved weights from previous versions will be incompatible. (#41)
- Renamed ``keras_lmu.LMUFFT`` to ``keras_lmu.LMUFeedforward``. (#42)
Fixed
- Fixed dropout support in TensorFlow 2.6. (#42)
0.3.1 (November 16, 2020)
Changed
- Raise a validation error if ``hidden_to_memory`` or ``input_to_hidden`` are True when ``hidden_cell=None``. (#26)
Fixed
0.3.0 (November 6, 2020)
Changed
- Renamed module from ``lmu`` to ``keras_lmu`` (so it will now be imported via ``import keras_lmu``), renamed package from ``lmu`` to ``keras-lmu`` (so it will now be installed via ``pip install keras-lmu``), and changed any references to "NengoLMU" to "KerasLMU" (since this implementation is based in the Keras framework rather than Nengo). In the future the ``lmu`` namespace will be used as a meta-package to encapsulate LMU implementations in different frameworks. (#24)
0.2.0 (November 2, 2020)
Added
- Added documentation for package description, installation, usage, API, examples, and project information. (#20)
- Added LMU FFT cell variant and auto-switching LMU class. (#21)
- LMUs can now be used with any Keras RNN cell (e.g. LSTMs or GRUs) through the ``hidden_cell`` parameter. This can take an RNN cell (like ``tf.keras.layers.SimpleRNNCell`` or ``tf.keras.layers.LSTMCell``) or a feedforward layer (like ``tf.keras.layers.Dense``) or ``None`` (to create a memory-only LMU). The output of the LMU memory component will be fed to the ``hidden_cell``. (#22)
- Added ``hidden_to_memory``, ``memory_to_memory``, and ``input_to_hidden`` parameters to ``LMUCell``, which can be used to enable/disable connections between components of the LMU. They default to disabled. (#22)
- LMUs can now be used with multi-dimensional memory components. This is controlled through a new ``memory_d`` parameter of ``LMUCell``. (#22)
- Added ``dropout`` parameter to ``LMUCell`` (which applies dropout to the input) and ``recurrent_dropout`` (which applies dropout to the ``memory_to_memory`` connection, if it is enabled). Note that dropout can be added in the hidden component through the ``hidden_cell`` object. (#22)
Changed
- Renamed ``lmu.lmu`` module to ``lmu.layers``. (#22)
- Combined the ``*_encoders_initializer`` parameters of ``LMUCell`` into a single ``kernel_initializer`` parameter. (#22)
- Combined the ``*_kernel_initializer`` parameters of ``LMUCell`` into a single ``recurrent_kernel_initializer`` parameter. (#22)
Removed
- Removed ``Legendre``, ``InputScaled``, ``LMUCellODE``, and ``LMUCellGating`` classes. (#22)
- Removed the ``method``, ``realizer``, and ``factory`` arguments from ``LMUCell`` (they will take on the same default values as before, they just cannot be changed). (#22)
- Removed the ``trainable_*`` arguments from ``LMUCell``. This functionality is largely redundant with the new functionality added for enabling/disabling internal LMU connections. These were primarily used previously for e.g. setting a connection to zero and then disabling learning, which can now be done more efficiently by disabling the connection entirely. (#22)
- Removed the ``units`` and ``hidden_activation`` parameters of ``LMUCell`` (these are now specified directly in the ``hidden_cell``). (#22)
- Removed the dependency on ``nengolib``. (#22)
- Dropped support for Python 3.5, which reached its end of life in September 2020. (#22)
0.1.0 (June 22, 2020)
Initial release of KerasLMU 0.1.0! Supports Python 3.5+.
The API is considered unstable; parts are likely to change in the future.
Thanks to all of the contributors for making this possible!