Extra Nengo objects¶
NengoDL adds some new Nengo objects that can be used during model construction.
These could be used with any Simulator, not just nengo_dl, but they tend to
be useful for deep learning applications.
Neuron types¶
Additions to the neuron types included with Nengo.
class nengo_dl.neurons.SoftLIFRate(sigma=1.0, **lif_args)¶
LIF neuron with smoothing around the firing threshold.
This is a rate version of the LIF neuron whose tuning curve has a continuous first derivative, due to the smoothing around the firing threshold. It can be used as a substitute for LIF neurons in deep networks during training, and then replaced with LIF neurons when running the network [1].
Parameters:
- sigma : float
  Amount of smoothing around the firing threshold. Larger values mean more smoothing.
- tau_rc : float
  Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).
- tau_ref : float
  Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
Notes
Adapted from https://github.com/nengo/nengo_extras/blob/master/nengo_extras/neurons.py
References
[1] Eric Hunsberger and Chris Eliasmith (2015): Spiking deep networks with LIF neurons. https://arxiv.org/abs/1510.08829.
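For instance, a model can be defined with SoftLIFRate neurons for training, and the ensemble's neuron type reassigned to spiking LIF afterwards. A minimal sketch (the ensemble size and sigma value are illustrative assumptions, not values from this documentation):

    import nengo
    import nengo_dl

    with nengo.Network() as net:
        # train with the smoothed rate approximation of LIF
        ens = nengo.Ensemble(
            50, 1, neuron_type=nengo_dl.neurons.SoftLIFRate(sigma=0.1))

    # ... train the network ...

    # swap in spiking LIF neurons (sharing the same tau_rc/tau_ref
    # defaults) before rebuilding the model for inference
    ens.neuron_type = nengo.LIF()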
Distributions¶
Additions to the distributions included with Nengo. These
distributions are usually used to initialize weight matrices, e.g.
nengo.Connection(a.neurons, b.neurons, transform=nengo_dl.dists.Glorot()).
class nengo_dl.dists.TruncatedNormal(mean=0, stddev=1, limit=None)¶
Normal distribution where any values more than some distance from the mean are resampled.
Parameters:
- mean : float, optional
  Mean of the normal distribution.
- stddev : float, optional
  Standard deviation of the normal distribution.
- limit : float, optional
  Resample any values more than this distance from the mean. If None, the limit will be set to two standard deviations.
sample(n, d=None, rng=None)¶
Samples the distribution.
Parameters:
- n : int
  Number of samples to take.
- d : int or None, optional
  The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
- rng : numpy.random.RandomState, optional
  Random number generator state (if None, the default numpy random number generator will be used).
Returns:
- samples : (n,) or (n, d) array_like
  Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
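A minimal sketch of sampling from this distribution directly (the mean, stddev, and limit values are illustrative):

    import numpy as np
    import nengo_dl

    # resample any value falling more than 0.5 from the mean
    dist = nengo_dl.dists.TruncatedNormal(mean=0.0, stddev=0.5, limit=0.5)
    weights = dist.sample(5, 3, rng=np.random.RandomState(0))

    print(weights.shape)  # (5, 3)
    assert np.all(np.abs(weights) <= 0.5)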
class nengo_dl.dists.VarianceScaling(scale=1, mode='fan_avg', distribution='uniform')¶
Variance scaling distribution for weight initialization (analogous to TensorFlow init_ops.VarianceScaling).
Parameters:
- scale : float, optional
  Overall scale on values.
- mode : "fan_in" or "fan_out" or "fan_avg", optional
  Whether to scale based on input or output dimensionality, or the average of the two.
- distribution : "uniform" or "normal", optional
  Whether to use a uniform or normal distribution for weights.
sample(n, d=None, rng=None)¶
Samples the distribution.
Parameters:
- n : int
  Number of samples to take.
- d : int or None, optional
  The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
- rng : numpy.random.RandomState, optional
  Random number generator state (if None, the default numpy random number generator will be used).
Returns:
- samples : (n,) or (n, d) array_like
  Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
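Like the other distributions, this can be passed as a connection transform. A minimal sketch (the ensemble sizes are illustrative):

    import nengo
    import nengo_dl

    with nengo.Network():
        a = nengo.Ensemble(100, 1)
        b = nengo.Ensemble(50, 1)

        # initialize the (50, 100) weight matrix, scaling the variance
        # based on the number of inputs to each post neuron
        nengo.Connection(
            a.neurons, b.neurons,
            transform=nengo_dl.dists.VarianceScaling(mode="fan_in"))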
class nengo_dl.dists.Glorot(scale=1, distribution='uniform')¶
Weight initialization method from [1] (also known as Xavier initialization).
Parameters:
- scale : float, optional
  Scale on weight distribution. For rectified linear units this should be sqrt(2); otherwise usually 1.
- distribution : "uniform" or "normal", optional
  Whether to use a uniform or normal distribution for weights.
References
[1] Xavier Glorot and Yoshua Bengio (2010): Understanding the difficulty of training deep feedforward neural networks. International Conference on Artificial Intelligence and Statistics. http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf.
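A minimal sketch of the rectified linear case mentioned above (the layer sizes are illustrative):

    import numpy as np
    import nengo
    import nengo_dl

    with nengo.Network():
        a = nengo.Ensemble(100, 1, neuron_type=nengo.RectifiedLinear())
        b = nengo.Ensemble(100, 1, neuron_type=nengo.RectifiedLinear())

        # scale by sqrt(2) to compensate for the rectified linear units
        nengo.Connection(
            a.neurons, b.neurons,
            transform=nengo_dl.dists.Glorot(scale=np.sqrt(2)))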
class nengo_dl.dists.He(scale=1, distribution='normal')¶
Weight initialization method from [1].
Parameters:
- scale : float, optional
  Scale on weight distribution. For rectified linear units this should be sqrt(2); otherwise usually 1.
- distribution : "uniform" or "normal", optional
  Whether to use a uniform or normal distribution for weights.
References
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun (2015): Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. https://arxiv.org/abs/1502.01852.
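Samples can also be drawn directly, e.g. to inspect the initialization. A minimal sketch (the matrix shape is illustrative):

    import numpy as np
    import nengo_dl

    # draw a (128, 64) weight matrix, with the sqrt(2) scaling
    # suggested above for rectified linear units
    dist = nengo_dl.dists.He(scale=np.sqrt(2))
    weights = dist.sample(128, 64, rng=np.random.RandomState(0))

    print(weights.shape)  # (128, 64)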