API reference

LMU Layers

Core classes for the KerasLMU package.

keras_lmu.LMUCell

Implementation of LMU cell (to be used within Keras RNN wrapper).

keras_lmu.LMU

A layer of trainable low-dimensional delay systems.

keras_lmu.LMUFFT

Layer class for the FFT variant of the LMU.

class keras_lmu.LMUCell(memory_d, order, theta, hidden_cell, hidden_to_memory=False, memory_to_memory=False, input_to_hidden=False, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', dropout=0, recurrent_dropout=0, **kwargs)[source]

Implementation of LMU cell (to be used within Keras RNN wrapper).

In general, the LMU cell consists of two parts: a memory component (decomposing the input signal using Legendre polynomials as a basis), and a hidden component (learning nonlinear mappings from the memory component). [1] [2]

This class processes one step within the whole time sequence input. Use the LMU class to create a recurrent Keras layer to process the whole sequence. Calling LMU() is equivalent to doing RNN(LMUCell()).

Parameters
memory_d : int

Dimensionality of input to memory component.

order : int

The number of degrees in the transfer function of the LTI system used to represent the sliding window of history. This parameter sets the number of Legendre polynomials used to orthogonally represent the sliding window.

theta : int

The number of timesteps in the sliding window that is represented using the LTI system. In this context, the sliding window represents a dynamic range of data, of fixed size, that will be used to predict the value at the next timestep. If this value is smaller than the length of the input sequence, only that number of steps will be represented at the time of prediction; however, the entire sequence will still be processed so that information can be projected to and from the hidden layer.

hidden_cell : tf.keras.layers.Layer

Keras Layer/RNNCell implementing the hidden component.

hidden_to_memory : bool

If True, connect the output of the hidden component back to the memory component (default False).

memory_to_memory : bool

If True, add a learnable recurrent connection (in addition to the static Legendre system) to the memory component (default False).

input_to_hidden : bool

If True, connect the input directly to the hidden component (in addition to the connection from the memory component) (default False).

kernel_initializer : tf.initializers.Initializer

Initializer for weights from input to memory/hidden component.

recurrent_initializer : tf.initializers.Initializer

Initializer for memory_to_memory weights (if that connection is enabled).

dropout : float

Dropout rate on input connections.

recurrent_dropout : float

Dropout rate on memory_to_memory connection.

References

[1] Voelker and Eliasmith (2018). Improving spiking dynamical networks: Accurate delays, higher-order synapses, and time cells. Neural Computation, 30(3): 569-609.

[2] Voelker and Eliasmith. “Methods and systems for implementing dynamic neural networks.” U.S. Patent Application No. 15/243,223. Filing date: 2016-08-22.

build(input_shape)[source]

Builds the cell.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

call(inputs, states, training=None)[source]

Apply this cell to inputs.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

reset_dropout_mask()[source]

Reset dropout mask for memory and hidden components.

reset_recurrent_dropout_mask()[source]

Reset recurrent dropout mask for memory and hidden components.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

classmethod from_config(config)[source]

Load model from serialized config.

class keras_lmu.LMU(memory_d, order, theta, hidden_cell, hidden_to_memory=False, memory_to_memory=False, input_to_hidden=False, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', dropout=0, recurrent_dropout=0, return_sequences=False, **kwargs)[source]

A layer of trainable low-dimensional delay systems.

Each unit buffers its encoded input by internally representing a low-dimensional (i.e., compressed) version of the sliding window.

Nonlinear decodings of this representation, expressed by the A and B matrices, provide computations across the window, such as its derivative, energy, median value, etc ([1], [2]). Note that these decoder matrices can span across all of the units of an input sequence.
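
For reference, the A and B matrices of the underlying delay system have a closed form, following [1]. The sketch below is an illustrative reimplementation, not part of the KerasLMU public API (KerasLMU constructs these matrices internally):

```python
import numpy as np

def ldn_matrices(order, theta):
    """Closed-form (A, B) of the Legendre Delay Network state space
    (Voelker & Eliasmith, 2018). `theta` is the window length.

    A[i, j] = (2i + 1) / theta * (-1           if i < j
                                  (-1)^(i-j+1) if i >= j)
    B[i]    = (2i + 1) * (-1)^i / theta
    """
    i = np.arange(order).reshape(-1, 1)  # row index
    j = np.arange(order).reshape(1, -1)  # column index
    A = (2 * i + 1) * np.where(i < j, -1.0, (-1.0) ** (i - j + 1))
    B = (2 * i + 1) * (-1.0) ** i
    return A / theta, B / theta
```

The memory state m then evolves as dm/dt = A m + B u, where u is the input signal; larger `theta` simply slows this dynamical system down, stretching the represented window.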

Parameters
memory_d : int

Dimensionality of input to memory component.

order : int

The number of degrees in the transfer function of the LTI system used to represent the sliding window of history. This parameter sets the number of Legendre polynomials used to orthogonally represent the sliding window.

theta : int

The number of timesteps in the sliding window that is represented using the LTI system. In this context, the sliding window represents a dynamic range of data, of fixed size, that will be used to predict the value at the next timestep. If this value is smaller than the length of the input sequence, only that number of steps will be represented at the time of prediction; however, the entire sequence will still be processed so that information can be projected to and from the hidden layer.

hidden_cell : tf.keras.layers.Layer

Keras Layer/RNNCell implementing the hidden component.

hidden_to_memory : bool

If True, connect the output of the hidden component back to the memory component (default False).

memory_to_memory : bool

If True, add a learnable recurrent connection (in addition to the static Legendre system) to the memory component (default False).

input_to_hidden : bool

If True, connect the input directly to the hidden component (in addition to the connection from the memory component) (default False).

kernel_initializer : tf.initializers.Initializer

Initializer for weights from input to memory/hidden component.

recurrent_initializer : tf.initializers.Initializer

Initializer for memory_to_memory weights (if that connection is enabled).

dropout : float

Dropout rate on input connections.

recurrent_dropout : float

Dropout rate on memory_to_memory connection.

return_sequences : bool, optional

If True, return the full output sequence. Otherwise, return just the last output in the output sequence.

References

[1] Voelker and Eliasmith (2018). Improving spiking dynamical networks: Accurate delays, higher-order synapses, and time cells. Neural Computation, 30(3): 569-609.

[2] Voelker and Eliasmith. “Methods and systems for implementing dynamic neural networks.” U.S. Patent Application No. 15/243,223. Filing date: 2016-08-22.

build(input_shapes)[source]

Builds the layer.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

call(inputs, training=None)[source]

Apply this layer to inputs.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

classmethod from_config(config)[source]

Load model from serialized config.

class keras_lmu.LMUFFT(memory_d, order, theta, hidden_cell, input_to_hidden=False, kernel_initializer='glorot_uniform', dropout=0, return_sequences=False, **kwargs)[source]

Layer class for the FFT variant of the LMU.

This class assumes no recurrent connections are desired in the memory component.

Produces the output of the delay system by evaluating the convolution of the input sequence with the impulse response from the LMU cell. The convolution operation is calculated using the fast Fourier transform (FFT).

Parameters
memory_d : int

Dimensionality of input to memory component.

order : int

The number of degrees in the transfer function of the LTI system used to represent the sliding window of history. This parameter sets the number of Legendre polynomials used to orthogonally represent the sliding window.

theta : int

The number of timesteps in the sliding window that is represented using the LTI system. In this context, the sliding window represents a dynamic range of data, of fixed size, that will be used to predict the value at the next timestep. If this value is smaller than the length of the input sequence, only that number of steps will be represented at the time of prediction; however, the entire sequence will still be processed so that information can be projected to and from the hidden layer.

hidden_cell : tf.keras.layers.Layer

Keras Layer implementing the hidden component.

input_to_hidden : bool

If True, connect the input directly to the hidden component (in addition to the connection from the memory component) (default False).

kernel_initializer : tf.initializers.Initializer

Initializer for weights from input to memory/hidden component.

dropout : float

Dropout rate on input connections.

return_sequences : bool, optional

If True, return the full output sequence. Otherwise, return just the last output in the output sequence.

build(input_shape)[source]

Builds the layer.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

call(inputs, training=None)[source]

Apply this layer to inputs.

Notes

This method should not be called manually; rather, use the implicit layer callable behaviour (like my_layer(inputs)), which will apply this method with some additional bookkeeping.

get_config()[source]

Return config of layer (for serialization during model saving/loading).

classmethod from_config(config)[source]

Load model from serialized config.