A network contains ensembles, nodes, connections, and other networks. 

A group of neurons that collectively represent a vector. 

An interface for making connections directly to an ensemble’s neurons. 

Provide non-neural inputs to Nengo objects and process outputs. 

Connects two objects together. 

An interface for making connections to a learning rule. 

A probe is an object that collects data from the simulation. 
nengo.
Network
(label=None, seed=None, add_to_container=None)[source]¶A network contains ensembles, nodes, connections, and other networks.
A network is primarily used for grouping together related objects and connections for visualization purposes. However, you can also use networks as a nice way to reuse network creation code.
To group together related objects that you do not need to reuse,
you can create a new Network
and add objects in a with
block.
For example:
network = nengo.Network()
with network:
    with nengo.Network(label="Vision"):
        v1 = nengo.Ensemble(100, dimensions=2)
    with nengo.Network(label="Motor"):
        sma = nengo.Ensemble(100, dimensions=2)
    nengo.Connection(v1, sma)
To reuse a group of related objects, you can create a new subclass
of Network
, and add objects in the __init__
method.
For example:
class OcularDominance(nengo.Network):
    def __init__(self):
        self.column = nengo.Ensemble(100, dimensions=2)

network = nengo.Network()
with network:
    left_eye = OcularDominance()
    right_eye = OcularDominance()
    nengo.Connection(left_eye.column, right_eye.column)
Name of the network.
Random number seed that will be fed to the random number generator. Setting the seed makes the network’s build process deterministic.
Determines if this network will be added to the current container.
If None, this network will be added to the network at the top of the
Network.context
stack unless the stack is empty.
Connection
instances in this network.
Ensemble
instances in this network.
Name of this network.
Network
instances in this network.
Node
instances in this network.
Probe
instances in this network.
Random seed used by this network.
all_objects
¶(list) All objects in this network and its subnetworks.
all_ensembles
¶(list) All ensembles in this network and its subnetworks.
all_nodes
¶(list) All nodes in this network and its subnetworks.
all_networks
¶(list) All networks in this network and its subnetworks.
all_connections
¶(list) All connections in this network and its subnetworks.
all_probes
¶(list) All probes in this network and its subnetworks.
n_neurons
¶(int) Number of neurons in this network, including subnetworks.
nengo.
Ensemble
(n_neurons, dimensions, radius=Default, encoders=Default, intercepts=Default, max_rates=Default, eval_points=Default, n_eval_points=Default, neuron_type=Default, gain=Default, bias=Default, noise=Default, normalize_encoders=Default, label=Default, seed=Default)[source]¶A group of neurons that collectively represent a vector.
The number of neurons.
The number of representational dimensions.
The representational radius of the ensemble.
The encoders used to transform from representational space to neuron space. Each row is a neuron’s encoder; each column is a representational dimension.
nengo.dists.Uniform(-1.0, 1.0)
)The point along each neuron’s encoder where its activity is zero. If e is the neuron’s encoder, then the activity will be zero when dot(x, e) <= c, where c is the given intercept.
nengo.dists.Uniform(200, 400)
)The activity of each neuron when the input signal x
is magnitude 1
and aligned with that neuron’s encoder e
;
i.e., when dot(x, e) = 1
.
nengo.dists.UniformHypersphere()
The evaluation points used for decoder solving, spanning the interval (-radius, radius) in each dimension, or a distribution from which to choose evaluation points.
The number of evaluation points to be drawn from the eval_points
distribution. If None, then a heuristic is used to determine
the number of evaluation points.
NeuronType
, optional (Default: nengo.LIF()
)The model that simulates all neurons in the ensemble
(see NeuronType
).
The gains associated with each neuron in the ensemble. If None, then
the gain will be solved for using max_rates
and intercepts
.
The biases associated with each neuron in the ensemble. If None, then
the gain will be solved for using max_rates
and intercepts
.
Random noise injected directly into each neuron in the ensemble as current. A sample is drawn for each individual neuron on every simulation step.
Indicates whether the encoders should be normalized.
A name for the ensemble. Used for debugging and visualization.
The seed used for random number generation.
The biases associated with each neuron in the ensemble.
The number of representational dimensions.
The encoders, used to transform from representational space to neuron space. Each row is a neuron’s encoder, each column is a representational dimension.
The evaluation points used for decoder solving, spanning the interval (-radius, radius) in each dimension, or a distribution from which to choose evaluation points.
The gains associated with each neuron in the ensemble.
The point along each neuron’s encoder where its activity is zero. If
e
is the neuron’s encoder, then the activity will be zero when
dot(x, e) <= c
, where c
is the given intercept.
A name for the ensemble. Used for debugging and visualization.
The activity of each neuron when dot(x, e) = 1
,
where e
is the neuron’s encoder.
The number of evaluation points to be drawn from the eval_points
distribution. If None, then a heuristic is used to determine
the number of evaluation points.
The number of neurons.
The model that simulates all neurons in the ensemble
(see nengo.neurons
).
Random noise injected directly into each neuron in the ensemble as current. A sample is drawn for each individual neuron on every simulation step.
The representational radius of the ensemble.
The seed used for random number generation.
neurons
¶A direct interface to the neurons in the ensemble.
size_in
¶The dimensionality of the ensemble.
size_out
¶The dimensionality of the ensemble.
nengo.ensemble.
Neurons
(ensemble)[source]¶An interface for making connections directly to an ensemble’s neurons.
This should only ever be accessed through the neurons
attribute of an
ensemble, as a way to signal to Connection
that the connection
should be made directly to the neurons rather than to the ensemble’s
decoded value, e.g.:
nengo.Connection(a.neurons, b.neurons)
ensemble
¶(Ensemble) The ensemble these neurons are part of.
probeable
¶(tuple) Signals that can be probed in the neuron population.
size_in
¶(int) The number of neurons in the population.
size_out
¶(int) The number of neurons in the population.
nengo.
Node
(output=Default, size_in=Default, size_out=Default, label=Default, seed=Default)[source]¶Provide non-neural inputs to Nengo objects and process outputs.
Nodes can accept input, and perform arbitrary computations for the purpose of controlling a Nengo simulation. Nodes are typically not part of a brain model per se, but serve to summarize the assumptions being made about sensory data or other environment variables that cannot be generated by a brain model alone.
Nodes can also be used to test models by providing specific input signals
to parts of the model, and can simplify the input/output interface of a
Network
when used as a relay to/from its internal
ensembles (see EnsembleArray
for an example).
Function that transforms the Node inputs into outputs, a constant output value, or None to transmit signals unchanged.
The number of dimensions of the input data parameter.
The size of the output signal. If None, it will be determined
based on the values of output
and size_in
.
A name for the node. Used for debugging and visualization.
The seed used for random number generation. Note: no aspects of the node are random, so currently setting this seed has no effect.
The name of the node.
The given output.
The number of dimensions for incoming connection.
The number of output dimensions.
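Conceptually, a Node's output is a callable of simulation time (and, when size_in > 0, of an input vector). The following is a minimal pure-Python sketch of those two calling conventions; the names sensor and relay are illustrative, and this mimics the convention only, not Nengo's actual Node machinery:

```python
import numpy as np

# A Node-like callable with size_in=0: output depends on time only.
def sensor(t):
    return np.array([np.sin(2 * np.pi * t)])

# A Node-like callable with size_in == size_out: relays and processes input.
def relay(t, x):
    return 2.0 * x

out = relay(0.0, sensor(0.25))  # sensor output peaks at t = 0.25
```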
nengo.
Connection
(pre, post, synapse=Default, function=Default, transform=Default, solver=Default, learning_rule_type=Default, eval_points=Default, scale_eval_points=Default, label=Default, seed=Default, modulatory=Unconfigurable)[source]¶Connects two objects together.
The connection between the two objects is unidirectional, transmitting information from the first argument, pre, to the second argument, post.
Almost any Nengo object can act as the pre or post side of a connection. Additionally, you can use Python slice syntax to access only some of the dimensions of the pre or post object.
For example, if node
has size_out=2
and ensemble
has
size_in=1
, we could not create the following connection:
nengo.Connection(node, ensemble)
But, we could create either of these two connections:
nengo.Connection(node[0], ensemble)
nengo.Connection(node[1], ensemble)
The source Nengo object for the connection.
The destination object for the connection.
nengo.synapses.Lowpass(tau=0.005)
)Synapse model to use for filtering (see Synapse
).
If None, no synapse will be used and information will be transmitted
without any delay (if supported by the backend—some backends may
introduce a single time step delay).
Note that at least one connection must have a synapse that is not None if components are connected in a cycle. Furthermore, a synaptic filter with a zero time constant is different from a None synapse as a synaptic filter will always add a delay of at least one time step.
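As a sketch of why a synaptic filter always adds delay, here is a simple forward-Euler lowpass update (an illustration of first-order lowpass filtering in general, not Nengo's exact discretization, which uses an exact exponential update):

```python
def lowpass_step(y, x, dt=0.001, tau=0.005):
    # One Euler step of dy/dt = (x - y) / tau: the output moves only a
    # fraction dt/tau of the way toward the input on each step, so a
    # change in x is never fully visible until later steps.
    return y + (dt / tau) * (x - y)

y = 0.0
for _ in range(10000):
    y = lowpass_step(y, 1.0)  # converges toward the constant input 1.0
```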
Function to compute across the connection. Note that pre
must be
an ensemble to apply a function across the connection.
If an array is passed, the function is implicitly defined by the points in the array and the provided eval_points, which have a one-to-one correspondence.
np.array(1.0)
)Linear transform mapping the pre output to the post input.
This transform is in terms of the sliced size; if either pre
or post is a slice, the transform must be shaped according to
the sliced dimensionality. Additionally, the function is applied
before the transform, so if a function is computed across the
connection, the transform must be of shape (size_out, size_mid)
.
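A shape sketch of the order of operations described above, with illustrative sizes (pre size_out = 2, function output size_mid = 3, post size_in = 4; the particular function and transform are arbitrary examples):

```python
import numpy as np

def f(x):
    # Function computed across the connection:
    # 2-D input -> 3-D output, so size_mid = 3.
    return np.array([x[0], x[1], x[0] * x[1]])

T = np.ones((4, 3))        # transform shaped (size_out, size_mid)
x = np.array([0.5, 2.0])   # pre output
y = T @ f(x)               # function first, then transform: shape (4,)
```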
nengo.solvers.LstsqL2()
)Solver instance to compute decoders or weights
(see Solver
). If solver.weights
is True, a full
connection weight matrix is computed instead of decoders.
Modifies the decoders or connection weights during simulation.
Points at which to evaluate function when computing decoders, spanning the interval (-pre.radius, pre.radius) in each dimension. If None, will use the eval_points associated with pre.
Indicates whether the evaluation points should be scaled by the radius of the pre Ensemble.
A descriptive label for the connection.
The seed used for random number generation.
True if and only if the connection is decoded. This will not occur
when solver.weights
is True or both pre and post are
Neurons
.
The given function.
The output dimensionality of the given function. If no function is specified, function_size will be 0.
A human-readable connection label for debugging and visualization. If not overridden, incorporates the labels of the pre and post objects.
The learning rule types.
The given post object.
The underlying post object, even if post
is an ObjView
.
The slice associated with post
if it is an ObjView, or None.
The given pre object.
The underlying pre object, even if pre is an ObjView.
The slice associated with pre
if it is an ObjView, or None.
The seed used for random number generation.
The Solver instance that will be used to compute decoders or weights
(see nengo.solvers
).
The Synapse model used for filtering across the connection
(see nengo.synapses
).
Linear transform mapping the pre function output to the post input.
learning_rule
¶(LearningRule or iterable) Connectable learning rule object(s).
size_in
¶(int) The number of output dimensions of the pre object.
Also the input size of the function, if one is specified.
size_mid
¶(int) The number of output dimensions of the function, if specified.
If the function is not specified, then size_in == size_mid
.
size_out
¶(int) The number of input dimensions of the post object.
Also the number of output dimensions of the transform.
nengo.connection.
LearningRule
(connection, learning_rule_type)[source]¶An interface for making connections to a learning rule.
Connections to a learning rule allow elements of the network to affect the learning rule. For example, learning rules that use error information can obtain that information through a connection.
Learning rule objects should only ever be accessed through the
learning_rule
attribute of a connection.
connection
¶(Connection) The connection modified by the learning rule.
modifies
¶(str) The variable modified by the learning rule.
probeable
¶(tuple) Signals that can be probed in the learning rule.
size_out
¶(int) Cannot connect from learning rules, so always 0.
nengo.
Probe
(target, attr=None, sample_every=Default, synapse=Default, solver=Default, label=Default, seed=Default)[source]¶A probe is an object that collects data from the simulation.
This is to be used in any situation where you wish to gather simulation data (spike data, represented values, neuron voltages, etc.) for analysis.
Probes do not directly affect the simulation.
All Nengo objects can be probed (except Probes themselves).
Each object has different attributes that can be probed.
To see what is probeable for each object, print its
probeable
attribute.
>>> with nengo.Network():
...     ens = nengo.Ensemble(10, 1)
>>> print(ens.probeable)
['decoded_output', 'input']
The object to probe.
The signal to probe. Refer to the target’s probeable
list for
details. If None, the first element in the probeable
list
will be used.
Sampling period in seconds. If None, the dt of the simulation will be used.
A synaptic model to filter the probed signal.
ConnectionDefault
)Solver to compute decoders for probes that require them.
A name for the probe. Used for debugging and visualization.
The seed used for random number generation.
The signal that will be probed. If None, the first element of the
target’s probeable
list will be used.
Sampling period in seconds. If None, the dt of the simulation will be used.
Solver
to compute decoders. Only used for probes
of an ensemble’s decoded output.
A synaptic model to filter the probed signal.
The object to probe.
obj
¶(Nengo object) The underlying Nengo object target.
size_in
¶(int) Dimensionality of the probed signal.
size_out
¶(int) Cannot connect from probes, so always 0.
slice
¶(slice) The slice associated with the Nengo object target.
A base class for probability distributions. 

Convenience function to sample a distribution or return samples. 

A uniform distribution. 

A Gaussian distribution. 

An exponential distribution (optionally with high values clipped). 

Uniform distribution on or in an n-dimensional unit hypersphere. 

Discrete distribution across a set of possible values. 

A set of samples. 

An arbitrary distribution from a PDF. 

Distribution of the square root of a Beta distributed random variable. 

Distribution of the length of subvectors of a unit vector. 

Distribution of the cosine of the angle between two random vectors. 
nengo.dists.
Distribution
[source]¶A base class for probability distributions.
The only thing that a probability distribution needs to define is a Distribution.sample method. This base class ensures that all distributions accept the same arguments for the sample function.
sample
(self, n, d=None, rng=np.random)[source]¶Samples the distribution.
Number of samples to take.
The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
numpy.random.RandomState, optional. Random number generator state.
Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
nengo.dists.
get_samples
(dist_or_samples, n, d=None, rng=np.random)[source]¶Convenience function to sample a distribution or return samples.
Use this function in situations where you accept an argument that could be a distribution, or could be an array_like of samples.
Distribution or (n, d) array_like. Source of the samples to be returned.
Number of samples to take.
The number of dimensions to return.
Random number generator.
Examples
>>> def mean(values, n=100):
...     samples = get_samples(values, n=n)
...     return np.mean(samples)
>>> mean([1, 2, 3, 4])
2.5
>>> mean(nengo.dists.Gaussian(0, 1))
0.057277898442269548
nengo.dists.
Uniform
(low, high, integer=False)[source]¶A uniform distribution.
It’s equally likely to get any scalar between low and high. Note that the order of low and high doesn’t matter; if low > high this will still work, and low will still be a closed interval while high is open.
The closed lower bound of the uniform distribution; samples >= low
The open upper bound of the uniform distribution; samples < high
If true, sample from a uniform distribution of integers. In this case, low and high should be integers.
sample
(self, n, d=None, rng=np.random)[source]¶Samples the distribution.
Number of samples to take.
The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
numpy.random.RandomState, optional. Random number generator state.
Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
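numpy's uniform sampler follows the same closed-low/open-high convention, so the interval behaviour can be sketched directly (pure numpy, not the Nengo class):

```python
import numpy as np

rng = np.random.RandomState(0)
# low is a closed bound (samples >= low); high is open (samples < high).
samples = rng.uniform(low=-1.0, high=1.0, size=1000)
```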
nengo.dists.
Gaussian
(mean, std)[source]¶A Gaussian distribution.
This represents a bell curve centred at mean and with spread represented by the standard deviation, std.
The mean of the Gaussian.
The standard deviation of the Gaussian.
sample
(self, n, d=None, rng=np.random)[source]¶Samples the distribution.
Number of samples to take.
The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
numpy.random.RandomState, optional. Random number generator state.
Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
nengo.dists.
Exponential
(scale, shift=0.0, high=inf)[source]¶An exponential distribution (optionally with high values clipped).
If high
is left to its default value of infinity, this is a standard
exponential distribution. If high
is set, then any sampled values at
or above high
will be clipped so they are slightly below high
.
This is useful for thresholding and, by extension,
networks.AssociativeMemory
.
The probability distribution function (PDF) is given by:

           |  0                                  if x < shift
    p(x) = |  1/scale * exp(-(x - shift)/scale)  if x >= shift and x < high
           |  n                                  if x == high - eps
           |  0                                  if x >= high

where n is such that the PDF integrates to one, and eps is an infinitesimally small number such that samples of x are strictly less than high (in practice, eps depends on floating point precision).
The scale parameter (the inverse of the rate parameter lambda). Larger values make the distribution wider (heavier tail).
Amount to shift the distribution by. There will be no values smaller than this shift when sampling from the distribution.
All values larger than or equal to this value will be clipped to slightly less than this value.
sample
(self, n, d=None, rng=np.random)[source]¶Samples the distribution.
Number of samples to take.
The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
numpy.random.RandomState, optional. Random number generator state.
Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
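A numpy sketch of the shift-and-clip behaviour described above (an illustration of the stated semantics, not Nengo's implementation; the helper name is illustrative):

```python
import numpy as np

def clipped_exponential(n, scale=1.0, shift=0.0, high=np.inf,
                        rng=np.random):
    # Standard exponential samples, shifted by `shift`...
    x = shift + scale * rng.exponential(size=n)
    # ...then clipped to just below `high`, as described for `high`.
    return np.minimum(x, np.nextafter(high, shift))

s = clipped_exponential(1000, scale=0.1, shift=0.2, high=0.5,
                        rng=np.random.RandomState(0))
```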
nengo.dists.
UniformHypersphere
(surface=False, min_magnitude=0)[source]¶Uniform distribution on or in an n-dimensional unit hypersphere.
Sample points are uniformly distributed across the volume (default) or surface of an n-dimensional unit hypersphere.
Whether sample points should be distributed uniformly over the surface of the hypersphere (True), or within the hypersphere (False).
Lower bound on the returned vector magnitudes (such that they are in
the range [min_magnitude, 1]
). Must be in the range [0, 1).
Ignored if surface
is True
.
sample
(self, n, d=None, rng=np.random)[source]¶Samples the distribution.
Number of samples to take.
The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
numpy.random.RandomState, optional. Random number generator state.
Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
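The standard construction for uniform samples on the surface of a hypersphere is to normalize isotropic Gaussian vectors; a sketch of that construction (illustrative, and not necessarily how Nengo implements it):

```python
import numpy as np

def sample_sphere_surface(n, d, rng=np.random):
    # Isotropic Gaussian vectors, normalized to unit length, are
    # uniformly distributed on the surface of the unit hypersphere.
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

pts = sample_sphere_surface(100, 3, np.random.RandomState(0))
```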
nengo.dists.
Choice
(options, weights=None)[source]¶Discrete distribution across a set of possible values.
The same as numpy.random.choice, except that it can take vector or matrix values for the choices.
The options (choices) to choose between. The choice is always done
along the first axis, so if options
is a matrix, the options are
the rows of that matrix.
Weights controlling the probability of selecting each option. Will automatically be normalized. If None, weights will be uniformly distributed.
sample
(self, n, d=None, rng=np.random)[source]¶Samples the distribution.
Number of samples to take.
The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
numpy.random.RandomState, optional. Random number generator state.
Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
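A sketch of the along-the-first-axis behaviour: sample row indices with numpy.random.choice (which by itself only handles 1-D value arrays) and index into the options matrix. The helper name is illustrative:

```python
import numpy as np

def choice_rows(options, n, weights=None, rng=np.random):
    options = np.asarray(options)
    if weights is not None:
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()  # normalize automatically
    idx = rng.choice(len(options), size=n, p=weights)
    return options[idx]   # choose along the first axis (rows)

rows = choice_rows([[1, 0], [0, 1]], n=5, weights=[1.0, 0.0],
                   rng=np.random.RandomState(0))
```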
nengo.dists.
Samples
(samples)[source]¶A set of samples.
This class is a subclass of Distribution
so that it can be used in any
situation that calls for a Distribution
. However, the call to
Distribution.sample
must match the dimensions of the samples or
a ValidationError
will be raised.
Note: n and d must match what is eventually passed to sample.
sample
(self, n, d=None, rng=np.random)[source]¶Samples the distribution.
Number of samples to take.
The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
numpy.random.RandomState, optional. Random number generator state.
Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
nengo.dists.
PDF
(x, p)[source]¶An arbitrary distribution from a PDF.
Values of the points to sample from (interpolated).
Probabilities of the x
points.
sample
(self, n, d=None, rng=np.random)[source]¶Samples the distribution.
Number of samples to take.
The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
numpy.random.RandomState, optional. Random number generator state.
Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
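One standard way to sample from a tabulated PDF is inverse-CDF sampling with linear interpolation; a sketch of that technique (illustrative, not necessarily Nengo's algorithm):

```python
import numpy as np

def sample_pdf(x, p, n, rng=np.random):
    p = np.asarray(p, dtype=float)
    cdf = np.cumsum(p)
    cdf = cdf / cdf[-1]            # normalize to a proper CDF
    u = rng.uniform(size=n)        # uniform samples in [0, 1)
    return np.interp(u, cdf, x)    # invert the CDF by interpolation

s = sample_pdf([0.0, 1.0, 2.0], [0.2, 0.6, 0.2], 1000,
               np.random.RandomState(0))
```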
nengo.dists.
SqrtBeta
(n, m=1)[source]¶Distribution of the square root of a Beta distributed random variable.
Given n + m
dimensional random unit vectors, the length of subvectors
with m
elements will be distributed according to this distribution.
Number of subvectors.
Length of each subvector.
sample
(self, num, d=None, rng=np.random)[source]¶Samples the distribution.
Number of samples to take.
The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
numpy.random.RandomState, optional. Random number generator state.
Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
cdf
(self, x)[source]¶Cumulative distribution function.
Note
Requires SciPy.
Evaluation points in [0, 1].
Probability that X <= x
.
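The relationship in the description can be checked empirically: draw random unit vectors in n + m dimensions and take the lengths of their m-element subvectors (a pure numpy sketch, not the Nengo class):

```python
import numpy as np

rng = np.random.RandomState(0)
n, m = 4, 2                                     # subvectors of length m = 2
v = rng.standard_normal((10000, n + m))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit vectors in n+m dims
sub_lengths = np.linalg.norm(v[:, :m], axis=1)  # empirical SqrtBeta(n, m)
```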
nengo.dists.
SubvectorLength
(dimensions, subdimensions=1)[source]¶Distribution of the length of subvectors of a unit vector.
Dimensionality of the complete unit vector.
Dimensionality of the subvector.
nengo.dists.
CosineSimilarity
(dimensions)[source]¶Distribution of the cosine of the angle between two random vectors.
The “cosine similarity” is the cosine of the angle between two vectors, which is equal to the dot product of the vectors, divided by the L2-norms of the individual vectors. When these vectors are unit length, this is then simply the distribution of their dot product.
This is also equivalent to the distribution of a single coefficient from a unit vector (a single dimension of UniformHypersphere(surface=True)). Furthermore, CosineSimilarity(d+2) is equivalent to the distribution of a single coordinate from points uniformly sampled from the d-dimensional unit ball (a single dimension of UniformHypersphere(surface=False).sample(n, d)). These relationships have been detailed in [Voelker2017].
This can be used to calculate an intercept c = ppf(1 - p) such that dot(u, v) >= c with probability p, for random unit vectors u and v. In other words, a neuron with intercept ppf(1 - p) will fire with probability p for a random unit length input.
Dimensionality of the complete unit vector.
sample
(self, num, d=None, rng=np.random)[source]¶Samples the distribution.
Number of samples to take.
The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).
numpy.random.RandomState, optional. Random number generator state.
Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
cdf
(self, x)[source]¶Cumulative distribution function.
Note
Requires SciPy.
Evaluation points in [0, 1].
Probability that X <= x
.
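The concentration behaviour behind the intercept calculation can be seen empirically: dot products of random unit vectors cluster near zero as dimensionality grows. A pure numpy sketch:

```python
import numpy as np

rng = np.random.RandomState(0)
d = 16
u = rng.standard_normal((5000, d))
v = rng.standard_normal((5000, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
v /= np.linalg.norm(v, axis=1, keepdims=True)
sims = np.sum(u * v, axis=1)   # cosine similarities of unit vector pairs
```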
A base class for connection transforms. 

A dense transformation between an input and output signal. 

An N-dimensional convolutional transform. 

Represents shape information with variable channel position. 
nengo.transforms.
Transform
[source]¶A base class for connection transforms.
sample
(self, rng=np.random)[source]¶Returns concrete weights to implement the specified transform.
numpy.random.RandomState, optional. Random number generator state.
Transform weights
size_in
¶Expected size of input to transform
size_out
¶Expected size of output from transform
nengo.transforms.
Dense
(shape, init=1.0)[source]¶A dense transformation between an input and output signal.
The shape of the dense matrix: (size_out, size_in)
.
Distribution
or array_like, optional (Default: 1.0)A Distribution used to initialize the transform matrix, or a concrete
instantiation for the matrix. If the matrix is square we also allow a
scalar (equivalent to np.eye(n) * init
) or a vector (equivalent to
np.diag(init)
) to represent the matrix more compactly.
sample
(self, rng=np.random)[source]¶Returns concrete weights to implement the specified transform.
numpy.random.RandomState, optional. Random number generator state.
Transform weights
init_shape
¶The shape of the initial value.
size_in
¶Expected size of input to transform
size_out
¶Expected size of output from transform
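The compact initializer semantics described above can be sketched as follows (assumed equivalences drawn from the description; the helper name is illustrative):

```python
import numpy as np

def dense_init(shape, init=1.0):
    init = np.asarray(init, dtype=float)
    if init.ndim == 0:
        return np.eye(shape[0]) * init   # scalar: scaled identity (square)
    if init.ndim == 1:
        return np.diag(init)             # vector: diagonal matrix
    return init                          # matrix: used as-is

W = dense_init((3, 3), init=2.0)
```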
nengo.transforms.
Convolution
(n_filters, input_shape, kernel_size=(3, 3), strides=(1, 1), padding='valid', channels_last=True, init=Uniform(low=-1, high=1))[source]¶An N-dimensional convolutional transform.
The dimensionality of the convolution is determined by the input shape.
The number of convolutional filters to apply
ChannelShape
Shape of the input signal to the convolution; e.g.,
(height, width, channels)
for a 2D convolution with
channels_last=True
.
Size of the convolutional kernels (1 element for a 1D convolution, 2 for a 2D convolution, etc.).
Stride of the convolution (1 element for a 1D convolution, 2 for a 2D convolution, etc.).
"same"
or "valid"
, optional (Default: “valid”)Padding method for input signal. “Valid” means no padding, and convolution will only be applied to the fully overlapping areas of the input signal (meaning the output will be smaller). “Same” means that the input signal is zero-padded so that the output is the same shape as the input.
If True
(default), the channels are the last dimension in the input
signal (e.g., a 28x28 image with 3 channels would have shape
(28, 28, 3)
). False
means that channels are the first
dimension (e.g., (3, 28, 28)
).
Distribution
or ndarray
, optional (Default: Uniform(-1, 1))A predefined kernel with shape kernel_size + (input_channels, n_filters), or a Distribution that will be used to initialize the kernel.
Notes
As is typical in neural networks, this is technically correlation rather than convolution (because the kernel is not flipped).
sample
(self, rng=np.random)[source]¶Returns concrete weights to implement the specified transform.
numpy.random.RandomState, optional. Random number generator state.
Transform weights
kernel_shape
¶Full shape of kernel.
size_in
¶Expected size of input to transform
size_out
¶Expected size of output from transform
dimensions
¶Dimensionality of convolution.
output_shape
¶Output shape after applying convolution to input.
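The usual output-shape arithmetic for “valid” and “same” padding can be sketched as follows (standard convolution arithmetic, assumed to match the semantics above; the helper name is illustrative):

```python
def conv_output_spatial(in_shape, kernel_size, strides, padding):
    out = []
    for size, k, s in zip(in_shape, kernel_size, strides):
        if padding == "valid":
            out.append((size - k) // s + 1)  # no padding: output shrinks
        else:  # "same": zero-padded so output size is ceil(size / stride)
            out.append(-(-size // s))
    return tuple(out)

shape = conv_output_spatial((28, 28), (3, 3), (1, 1), "valid")
```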
nengo.transforms.
ChannelShape
(shape, channels_last=True)[source]¶Represents shape information with variable channel position.
Signal shape
If True (default), the last item in shape
represents the channels,
and the rest are spatial dimensions. Otherwise, the first item in
shape
is the channel dimension.
spatial_shape
¶The spatial part of the shape (omitting channels).
size
¶The total number of elements in the represented signal.
n_channels
¶The number of channels in the represented signal.
dimensions
¶The spatial dimensionality of the represented signal.
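A sketch of the channel-position bookkeeping described above (assumed semantics; the helper name is illustrative, not Nengo's actual class):

```python
def split_channel_shape(shape, channels_last=True):
    # Separate the spatial dimensions from the channel dimension,
    # which sits last (default) or first.
    if channels_last:
        return shape[:-1], shape[-1]
    return shape[1:], shape[0]

spatial, channels = split_channel_shape((28, 28, 3), channels_last=True)
```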
Base class for Nengo neuron models. 

Signifies that an ensemble should simulate in direct mode. 

A rectified linear neuron model. 

A rectified integrate and fire neuron model. 

A neuron model whose response curve is a sigmoid. 

Spiking version of the leaky integrate-and-fire (LIF) neuron model. 

Non-spiking version of the leaky integrate-and-fire (LIF) neuron model. 

Adaptive spiking version of the LIF neuron model. 

Adaptive non-spiking version of the LIF neuron model. 

Izhikevich neuron model. 
nengo.neurons.
NeuronType
[source]¶Base class for Nengo neuron models.
Signals that can be probed in the neuron population.
current
(self, x, gain, bias)[source]¶Compute current injected in each neuron given input, gain and bias.
Note that x
is assumed to be already projected onto the encoders
associated with the neurons and normalized to radius 1, so the maximum
expected current for a neuron occurs when input for that neuron is 1.
Scalar inputs for which to calculate current.
Gains associated with each neuron.
Bias current associated with each neuron.
Current to be injected in each neuron.
gain_bias
(self, max_rates, intercepts)[source]¶Compute the gain and bias needed to satisfy max_rates, intercepts.
This takes the neurons, approximates their response function, and then uses that approximation to find the gain and bias value that will give the requested intercepts and max_rates.
Note that this default implementation is very slow! Whenever possible, subclasses should override this with a neuron-specific implementation.
Maximum firing rates of neurons.
X-intercepts of neurons.
Gain associated with each neuron. Sometimes denoted alpha.
Bias current associated with each neuron.
max_rates_intercepts
(self, gain, bias)[source]¶Compute the max_rates and intercepts given gain and bias.
Note that this default implementation is very slow! Whenever possible, subclasses should override this with a neuron-specific implementation.
Gain associated with each neuron. Sometimes denoted alpha.
Bias current associated with each neuron.
Maximum firing rates of neurons.
X-intercepts of neurons.
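For the LIF model specifically, the gain/bias relationship has a closed form. The following NumPy sketch (an illustration of that relationship, not Nengo's implementation) computes a gain and bias from a desired maximum rate and intercept, then checks them against the steady-state LIF rate equation:

```python
import numpy as np

tau_rc, tau_ref = 0.02, 0.002
max_rate, intercept = 100.0, 0.0

# Closed-form gain and bias for the LIF rate model (illustrative sketch).
z = 1.0 / (1 - np.exp((tau_ref - 1.0 / max_rate) / tau_rc))
gain = (1 - z) / (intercept - 1.0)
bias = 1 - gain * intercept

def lif_rate(J, tau_rc=tau_rc, tau_ref=tau_ref):
    # Steady-state LIF firing rate for input current J > 1.
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J - 1)))

# At input 1 (normalized radius), the neuron should fire at max_rate.
rate_at_one = lif_rate(gain * 1.0 + bias)
```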
rates
(self, x, gain, bias)[source]¶Compute firing rates (in Hz) for given input x
.
This default implementation takes the naive approach of running the step function for a second. This should suffice for most rate-based neuron types; for spiking neurons it will likely fail (those models should override this function).
Note that x
is assumed to be already projected onto the encoders
associated with the neurons and normalized to radius 1, so the maximum
expected rate for a neuron occurs when input for that neuron is 1.
Scalar inputs for which to calculate rates.
Gains associated with each neuron.
Bias current associated with each neuron.
The firing rates at each given value of x
.
step_math
(self, dt, J, output)[source]¶Implements the differential equation for this neuron type.
At a minimum, NeuronType subclasses must implement this method.
That implementation should modify the output
parameter rather
than returning anything, for efficiency reasons.
Simulation timestep.
Input currents associated with each neuron.
Output activities associated with each neuron.
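The in-place contract of step_math can be illustrated with a minimal rectified-linear update (a hypothetical standalone function, not one of Nengo's neuron classes):

```python
import numpy as np

def relu_step_math(dt, J, output):
    # Per the contract above: write activities into `output` in place
    # rather than returning a new array, for efficiency.
    np.maximum(J, 0, out=output)

J = np.array([-1.0, 0.5, 2.0])
out = np.zeros_like(J)
relu_step_math(0.001, J, out)
```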
nengo.
Direct
[source]¶Signifies that an ensemble should simulate in direct mode.
In direct mode, the ensemble represents and transforms signals perfectly, rather than through a neural approximation. Note that direct mode ensembles with recurrent connections can easily diverge; most other neuron types will instead saturate at a certain high firing rate.
nengo.
RectifiedLinear
(amplitude=1)[source]¶A rectified linear neuron model.
Each neuron is modeled as a rectified line. That is, the neuron’s activity scales linearly with current, unless it passes below zero, at which point the neural activity will stay at zero.
Scaling factor on the neuron output. Corresponds to the relative amplitude of the output of the neuron.
nengo.
SpikingRectifiedLinear
(amplitude=1)[source]¶A rectified integrate and fire neuron model.
Each neuron is modeled as a rectified line. That is, the neuron’s activity scales linearly with current, unless the current is less than zero, at which point the neural activity will stay at zero. This is a spiking version of the RectifiedLinear neuron model.
Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.
nengo.
Sigmoid
(tau_ref=0.0025)[source]¶A neuron model whose response curve is a sigmoid.
Since the tuning curves are strictly positive, the intercepts
correspond to the inflection point of each sigmoid. That is,
f(intercept) = 0.5
where f
is the pure sigmoid function.
nengo.
LIF
(tau_rc=0.02, tau_ref=0.002, min_voltage=0, amplitude=1)[source]¶Spiking version of the leaky integrate-and-fire (LIF) neuron model.
Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).
Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
Minimum value for the membrane voltage. If -np.inf, the voltage is never clipped.
Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.
nengo.
LIFRate
(tau_rc=0.02, tau_ref=0.002, amplitude=1)[source]¶Non-spiking version of the leaky integrate-and-fire (LIF) neuron model.
Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).
Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.
nengo.
AdaptiveLIF
(tau_n=1, inc_n=0.01, tau_rc=0.02, tau_ref=0.002, min_voltage=0, amplitude=1)[source]¶Adaptive spiking version of the LIF neuron model.
Works as the LIF model, except with adaptation state n, which is subtracted from the input current. Its dynamics are:
tau_n dn/dt = -n
where n is incremented by inc_n when the neuron spikes.
Adaptation time constant. Affects how quickly the adaptation state decays to zero in the absence of spikes (larger = slower decay).
Adaptation increment. How much the adaptation state is increased after each spike.
Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).
Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
Minimum value for the membrane voltage. If -np.inf, the voltage is never clipped.
Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.
References
Camera, Giancarlo La, et al. “Minimal models of adapted neuronal response to in Vivo-Like input currents.” Neural Computation 16.10 (2004): 2101-2124.
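A minimal Euler sketch of the adaptation dynamics above (tau_n dn/dt = -n, with n incremented by inc_n on each spike); this is an illustration with hypothetical names, not Nengo's implementation:

```python
def adapt_step(n, spiked, dt, tau_n=1.0, inc_n=0.01):
    # Decay the adaptation state toward zero (tau_n dn/dt = -n),
    # then add inc_n if the neuron spiked this step.
    n = n - (dt / tau_n) * n
    return n + (inc_n if spiked else 0.0)

n = 0.0
n = adapt_step(n, True, dt=0.001)           # spike: n jumps by inc_n
n_decayed = adapt_step(n, False, dt=0.001)  # no spike: n decays slightly
```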
nengo.
AdaptiveLIFRate
(tau_n=1, inc_n=0.01, tau_rc=0.02, tau_ref=0.002, amplitude=1)[source]¶Adaptive nonspiking version of the LIF neuron model.
Works as the LIF model, except with adaptation state n, which is subtracted from the input current. Its dynamics are:
tau_n dn/dt = -n
where n is incremented by inc_n when the neuron spikes.
Adaptation time constant. Affects how quickly the adaptation state decays to zero in the absence of spikes (larger = slower decay).
Adaptation increment. How much the adaptation state is increased after each spike.
Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).
Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.
References
Camera, Giancarlo La, et al. “Minimal models of adapted neuronal response to in Vivo-Like input currents.” Neural Computation 16.10 (2004): 2101-2124.
nengo.
Izhikevich
(tau_recovery=0.02, coupling=0.2, reset_voltage=-65.0, reset_recovery=8.0)[source]¶Izhikevich neuron model.
This implementation is based on the original paper [1]; however, we rename some variables for clarity. What was originally ‘v’ we term ‘voltage’, which represents the membrane potential of each neuron. What was originally ‘u’ we term ‘recovery’, which represents membrane recovery, “which accounts for the activation of K+ ionic currents and inactivation of Na+ ionic currents.” The ‘a’, ‘b’, ‘c’, and ‘d’ parameters are also renamed (see the parameters below).
We use default values that correspond to regular spiking (‘RS’) neurons. For other classes of neurons, set the parameters as follows.
Intrinsically bursting (IB): reset_voltage=-55, reset_recovery=4
Chattering (CH): reset_voltage=-50, reset_recovery=2
Fast spiking (FS): tau_recovery=0.1
Low-threshold spiking (LTS): coupling=0.25
Resonator (RZ): tau_recovery=0.1, coupling=0.26
(Originally ‘a’) Time scale of the recovery variable.
(Originally ‘b’) How sensitive recovery is to subthreshold fluctuations of voltage.
(Originally ‘c’) The voltage to reset to after a spike, in millivolts.
(Originally ‘d’) The recovery value to reset to after a spike.
References
E. M. Izhikevich, “Simple model of spiking neurons.” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569-1572. (http://www.izhikevich.org/publications/spikes.pdf)
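A single-neuron Euler sketch of the renamed model (voltage and recovery in the paper's millisecond units, constant input current; an illustration, not Nengo's implementation):

```python
def izhikevich_run(I, steps, tau_recovery=0.02, coupling=0.2,
                   reset_voltage=-65.0, reset_recovery=8.0):
    # Izhikevich (2003) dynamics with dt = 1 ms, using Nengo's names:
    #   dv/dt = 0.04 v^2 + 5 v + 140 - u + I
    #   du/dt = tau_recovery * (coupling * v - u)
    v, u, spikes = -65.0, -65.0 * coupling, 0
    for _ in range(steps):
        v += 0.04 * v * v + 5 * v + 140 - u + I
        u += tau_recovery * (coupling * v - u)
        if v >= 30.0:  # spike: reset the voltage, bump recovery
            v = reset_voltage
            u += reset_recovery
            spikes += 1
    return spikes

# Default parameters give a regular-spiking (RS) neuron.
n_spikes = izhikevich_run(I=10.0, steps=1000)
```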
Base class for all learning rule objects. 

Prescribed Error Sensitivity learning rule. 

Bienenstock-Cooper-Munro learning rule. 

Oja learning rule. 

Vector Oja learning rule. 
nengo.learning_rules.
LearningRuleType
(learning_rate=Default, size_in=0)[source]¶Base class for all learning rule objects.
To use a learning rule, pass it as a learning_rule_type
keyword
argument to the Connection
on which you want to do learning.
Each learning rule exposes two important pieces of metadata that the builder uses to determine what information should be stored.
The size_in
is the dimensionality of the incoming error signal. It
can either take an integer or one of the following string values:
'pre': vector error signal in pre-object space
'post': vector error signal in post-object space
'mid': vector error signal in the conn.size_mid space
'pre_state': vector error signal in presynaptic ensemble space
'post_state': vector error signal in postsynaptic ensemble space
The difference between 'post_state'
and 'post'
is that with the
former, if a Neurons
object is passed, it will use the dimensionality
of the corresponding Ensemble
, whereas the latter simply uses the
post
object size_in
. Similarly with 'pre_state'
and 'pre'
.
The modifies
attribute denotes the signal targeted by the rule.
Options are:
'encoders'
'decoders'
'weights'
A scalar indicating the rate at which modifies
will be adjusted.
Dimensionality of the error signal (see above).
A scalar indicating the rate at which modifies
will be adjusted.
Dimensionality of the error signal.
The signal targeted by the learning rule.
nengo.
PES
(learning_rate=Default, pre_synapse=Default, pre_tau=Unconfigurable)[source]¶Prescribed Error Sensitivity learning rule.
Modifies a connection’s decoders to minimize an error signal provided through a connection to the connection’s learning rule.
A scalar indicating the rate at which weights will be adjusted.
Synapse
, optional (Default: nengo.synapses.Lowpass(tau=0.005)
)Synapse model used to filter the presynaptic activities.
A scalar indicating the rate at which weights will be adjusted.
Synapse
Synapse model used to filter the presynaptic activities.
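The core decoder update that PES prescribes can be sketched as follows (a common formulation of the rule, with the scaling by the neuron count; the function name is illustrative, not Nengo's API):

```python
import numpy as np

def pes_delta(error, activities, learning_rate=1e-4, dt=0.001):
    # Delta(decoders) = -(learning_rate * dt / n_neurons) * error a^T,
    # pushing the decoded output in the direction that reduces `error`.
    n_neurons = activities.size
    return -(learning_rate * dt / n_neurons) * np.outer(error, activities)

error = np.array([0.5, -0.2])            # 2-D error signal
activities = np.array([10.0, 0.0, 5.0])  # filtered activities of 3 neurons
delta = pes_delta(error, activities)
```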
nengo.
BCM
(learning_rate=Default, pre_synapse=Default, post_synapse=Default, theta_synapse=Default, pre_tau=Unconfigurable, post_tau=Unconfigurable, theta_tau=Unconfigurable)[source]¶Bienenstock-Cooper-Munro learning rule.
Modifies connection weights as a function of the presynaptic activity and the difference between the postsynaptic activity and the average postsynaptic activity.
A scalar indicating the rate at which weights will be adjusted.
Synapse
, optional (Default: nengo.synapses.Lowpass(tau=0.005)
)Synapse model used to filter the presynaptic activities.
Synapse
, optional (Default: None
)Synapse model used to filter the postsynaptic activities.
If None, post_synapse
will be the same as pre_synapse
.
Synapse
, optional (Default: nengo.synapses.Lowpass(tau=1.0)
)Synapse model used to filter the theta signal.
Notes
The BCM rule is dependent on pre and post neural activities,
not decoded values, and so is not affected by changes in the
size of pre and post ensembles. However, if you are decoding from
the post ensemble, the BCM rule will have an increased effect on
larger post ensembles because more connection weights are changing.
In these cases, it may be advantageous to scale the learning rate
on the BCM rule by 1 / post.n_neurons
.
A scalar indicating the rate at which weights will be adjusted.
Synapse
Synapse model used to filter the postsynaptic activities.
Synapse
Synapse model used to filter the presynaptic activities.
Synapse
Synapse model used to filter the theta signal.
nengo.
Oja
(learning_rate=Default, pre_synapse=Default, post_synapse=Default, beta=Default, pre_tau=Unconfigurable, post_tau=Unconfigurable)[source]¶Oja learning rule.
Modifies connection weights according to the Hebbian Oja rule, which augments typically Hebbian coactivity with a “forgetting” term that is proportional to the weight of the connection and the square of the postsynaptic activity.
A scalar indicating the rate at which weights will be adjusted.
Synapse
, optional (Default: nengo.synapses.Lowpass(tau=0.005)
)Synapse model used to filter the presynaptic activities.
Synapse
, optional (Default: None
)Synapse model used to filter the postsynaptic activities.
If None, post_synapse
will be the same as pre_synapse
.
A scalar weight on the forgetting term.
Notes
The Oja rule is dependent on pre and post neural activities,
not decoded values, and so is not affected by changes in the
size of pre and post ensembles. However, if you are decoding from
the post ensemble, the Oja rule will have an increased effect on
larger post ensembles because more connection weights are changing.
In these cases, it may be advantageous to scale the learning rate
on the Oja rule by 1 / post.n_neurons
.
nengo.
Voja
(learning_rate=Default, post_synapse=Default, post_tau=Unconfigurable)[source]¶Vector Oja learning rule.
Modifies an ensemble’s encoders to be selective to its inputs.
A connection to the learning rule will provide a scalar weight for the learning rate, minus 1. For instance, 0 is normal learning, -1 is no learning, and less than -1 causes anti-learning or “forgetting”.
A scalar indicating the rate at which encoders will be adjusted.
Synapse
, optional (Default: nengo.synapses.Lowpass(tau=0.005)
)Synapse model used to filter the postsynaptic activities.
A scalar indicating the rate at which encoders will be adjusted.
Synapse
Synapse model used to filter the postsynaptic activities.
A general system with input, output, and state. 

Present a series of inputs, each for the same fixed length of time. 

Filtered white noise process. 

Brown noise process (aka Brownian noise, red noise, Wiener process). 

Full-spectrum white noise process. 

An ideal lowpass filtered white noise process. 

A piecewise function with different options for interpolation. 
nengo.
Process
(default_size_in=0, default_size_out=1, default_dt=0.001, seed=None)[source]¶A general system with input, output, and state.
For more details on how to use processes and make custom process subclasses, see Processes and how to use them.
Sets the default size in for nodes using this process.
Sets the default size out for nodes running this process. Also,
if d
is not specified in run
or run_steps
,
this will be used.
If dt
is not specified in run
, run_steps
,
ntrange
, or trange
, this will be used.
Random number seed. Ensures random factors will be the same each run.
If dt
is not specified in run
, run_steps
,
ntrange
, or trange
, this will be used.
The default size in for nodes using this process.
The default size out for nodes running this process. Also, if d
is
not specified in run
or run_steps
,
this will be used.
Random number seed. Ensures random factors will be the same each run.
apply
(self, x, d=None, dt=None, rng=numpy.random, copy=True, **kwargs)[source]¶Run process on a given input.
Keyword arguments that do not appear in the parameter list below
will be passed to the make_step
function of this process.
The input signal given to the process.
Output dimensionality. If None, default_size_out
will be used.
Simulation timestep. If None, default_dt
will be used.
numpy.random.RandomState
(Default: numpy.random
)Random number generator used for stochastic processes.
If True, a new output array will be created for output.
If False, the input signal x
will be overwritten.
get_rng
(self, rng)[source]¶Get a properly seeded independent RNG for the process step.
numpy.random.RandomState
The parent random number generator to use if the seed is not set.
make_step
(self, shape_in, shape_out, dt, rng)[source]¶Create function that advances the process forward one time step.
This must be implemented by all custom processes.
The shape of the input signal.
The shape of the output signal.
The simulation timestep.
numpy.random.RandomState
A random number generator.
run
(self, t, d=None, dt=None, rng=numpy.random, **kwargs)[source]¶Run process without input for given length of time.
Keyword arguments that do not appear in the parameter list below
will be passed to the make_step
function of this process.
The length of time to run.
Output dimensionality. If None, default_size_out
will be used.
Simulation timestep. If None, default_dt
will be used.
numpy.random.RandomState
(Default: numpy.random
)Random number generator used for stochastic processes.
run_steps
(self, n_steps, d=None, dt=None, rng=numpy.random, **kwargs)[source]¶Run process without input for given number of steps.
Keyword arguments that do not appear in the parameter list below
will be passed to the make_step
function of this process.
The number of steps to run.
Output dimensionality. If None, default_size_out
will be used.
Simulation timestep. If None, default_dt
will be used.
numpy.random.RandomState
(Default: numpy.random
)Random number generator used for stochastic processes.
nengo.processes.
PresentInput
(inputs, presentation_time, **kwargs)[source]¶Present a series of inputs, each for the same fixed length of time.
Inputs to present, where each row is an input. Rows will be flattened.
Show each input for this amount of time (in seconds).
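The cycling behavior can be sketched as a plain function of time (an illustration with hypothetical names, not Nengo's implementation):

```python
import numpy as np

inputs = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
presentation_time = 0.1  # seconds per input

def present(t):
    # Pick the row for time t, wrapping around after the last input.
    i = int(t / presentation_time) % len(inputs)
    return inputs[i].ravel()
```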
make_step
(self, shape_in, shape_out, dt, rng)[source]¶Create function that advances the process forward one time step.
This must be implemented by all custom processes.
The shape of the input signal.
The shape of the output signal.
The simulation timestep.
numpy.random.RandomState
A random number generator.
nengo.processes.
FilteredNoise
(synapse=Lowpass(tau=0.005), dist=Gaussian(mean=0, std=1), scale=True, synapse_kwargs=None, **kwargs)[source]¶Filtered white noise process.
This process takes white noise and filters it using the provided synapse.
Lowpass(tau=0.005)
)The synapse to use to filter the noise.
Gaussian(mean=0, std=1)
)The distribution used to generate the white noise.
Whether to scale the white noise for integration, making the output
signal invariant to dt
.
Arguments to pass to synapse.make_step
.
Random number seed. Ensures noise will be the same each run.
make_step
(self, shape_in, shape_out, dt, rng)[source]¶Create function that advances the process forward one time step.
This must be implemented by all custom processes.
The shape of the input signal.
The shape of the output signal.
The simulation timestep.
numpy.random.RandomState
A random number generator.
nengo.processes.
BrownNoise
(dist=Gaussian(mean=0, std=1), **kwargs)[source]¶Brown noise process (aka Brownian noise, red noise, Wiener process).
This process is the integral of white noise.
Gaussian(mean=0, std=1)
)The distribution used to generate the white noise.
Random number seed. Ensures noise will be the same each run.
nengo.processes.
WhiteNoise
(dist=Gaussian(mean=0, std=1), scale=True, **kwargs)[source]¶Full-spectrum white noise process.
Gaussian(mean=0, std=1)
)The distribution from which to draw samples.
Whether to scale the white noise for integration. Integrating white
noise requires using a time constant of sqrt(dt)
instead of dt
on the noise term [1], to ensure the magnitude of the integrated
noise does not change with dt
.
Random number seed. Ensures noise will be the same each run.
References
Gillespie, D.T. (1996) Exact numerical simulation of the Ornstein-Uhlenbeck process and its integral. Phys. Rev. E 54, pp. 2084-2091.
make_step
(self, shape_in, shape_out, dt, rng)[source]¶Create function that advances the process forward one time step.
This must be implemented by all custom processes.
The shape of the input signal.
The shape of the output signal.
The simulation timestep.
numpy.random.RandomState
A random number generator.
nengo.processes.
WhiteSignal
(period, high, rms=0.5, y0=None, **kwargs)[source]¶An ideal lowpass filtered white noise process.
This signal is created in the frequency domain, and designed to have exactly equal power at all frequencies below the cutoff frequency, and no power above the cutoff.
The signal is naturally periodic, so it can be used beyond its period while still being continuous with continuous derivatives.
A white noise signal with this period will be generated. Samples will repeat after this duration.
The cutoff frequency of the lowpass filter, in Hz.
Must not exceed the Nyquist frequency for the simulation
timestep, which is 0.5 / dt
.
The root mean square power of the filtered signal.
Align the phase of each output dimension to begin at the value that is closest (in absolute value) to y0.
Random number seed. Ensures noise will be the same each run.
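The frequency-domain construction described above can be sketched in NumPy (random phases at frequencies up to the cutoff, exactly zero power above it; an illustrative sketch, not Nengo's implementation):

```python
import numpy as np

def band_limited_signal(period, high, dt, rms=0.5, seed=0):
    # Build the signal in the frequency domain: unit-magnitude random-phase
    # coefficients below `high` Hz, exactly zero above, then scale to `rms`.
    rng = np.random.RandomState(seed)
    n = int(round(period / dt))
    freqs = np.fft.rfftfreq(n, d=dt)
    coeffs = np.zeros(freqs.size, dtype=complex)
    mask = (freqs > 0) & (freqs <= high)
    coeffs[mask] = np.exp(2j * np.pi * rng.rand(mask.sum()))
    x = np.fft.irfft(coeffs, n=n)
    return x * (rms / np.sqrt(np.mean(x ** 2)))

x = band_limited_signal(period=1.0, high=10.0, dt=0.001)
```

Because the signal is built from whole periods of sinusoids, it repeats seamlessly after `period` seconds, matching the periodicity noted above.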
make_step
(self, shape_in, shape_out, dt, rng)[source]¶Create function that advances the process forward one time step.
This must be implemented by all custom processes.
The shape of the input signal.
The shape of the output signal.
The simulation timestep.
numpy.random.RandomState
A random number generator.
nengo.processes.
Piecewise
(data, interpolation='zero', **kwargs)[source]¶A piecewise function with different options for interpolation.
Given an input dictionary of {0: 0, 0.5: 1, 0.75: 0.5, 1: 0}
,
this process will emit the numerical values (0, 1, 0.5, 0)
starting at the corresponding time points (0, 0.5, 0.75, 1).
The keys in the input dictionary must be times (float or int). The values in the dictionary can be floats, lists of floats, or numpy arrays. All lists or numpy arrays must be of the same length, as the output shape of the process will be determined by the shape of the values.
Interpolation on the data points using scipy.interpolate
is also
supported. The default interpolation is ‘zero’, which creates a
piecewise function whose values change at the specified time points.
So the above example would be a shortcut for:
def function(t):
    if t < 0.5:
        return 0
    elif t < 0.75:
        return 1
    elif t < 1:
        return 0.5
    else:
        return 0
For times before the first specified time, an array of zeros (of the correct length) will be emitted. This means that the above can be simplified to:
Piecewise({0.5: 1, 0.75: 0.5, 1: 0})
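That zero-order-hold behavior can be sketched directly with NumPy (an illustration, not Nengo's Piecewise implementation; the helper name is hypothetical):

```python
import numpy as np

def piecewise_zero(data):
    # Zero-order hold: emit each value starting at its time point,
    # and zeros before the first specified time.
    times = sorted(data)
    values = [data[t] for t in times]
    def f(t):
        i = np.searchsorted(times, t, side="right") - 1
        return 0.0 if i < 0 else values[i]
    return f

f = piecewise_zero({0.5: 1, 0.75: 0.5, 1: 0})
```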
A dictionary mapping times to the values that should be emitted at those times. Times must be numbers (ints or floats), while values can be numbers, lists of numbers, numpy arrays of numbers, or callables that return any of those options.
One of ‘linear’, ‘nearest’, ‘slinear’, ‘quadratic’, ‘cubic’, or ‘zero’.
Specifies how to interpolate between times with specified value.
‘zero’ creates a plain piecewise function whose values begin at
corresponding time points, while all other options interpolate
as described in scipy.interpolate
.
Examples
>>> from nengo.processes import Piecewise
>>> process = Piecewise({0.5: 1, 0.75: 1, 1: 0})
>>> with nengo.Network() as model:
... u = nengo.Node(process, size_out=process.default_size_out)
... up = nengo.Probe(u)
>>> with nengo.Simulator(model) as sim:
... sim.run(1.5)
>>> f = sim.data[up]
>>> t = sim.trange()
>>> f[t == 0.2]
array([[ 0.]])
>>> f[t == 0.58]
array([[ 1.]])
A dictionary mapping times to the values that should be emitted at those times. Times are numbers (ints or floats), while values can be numbers, lists of numbers, numpy arrays of numbers, or callables that return any of those options.
One of ‘linear’, ‘nearest’, ‘slinear’, ‘quadratic’, ‘cubic’, or ‘zero’.
Specifies how to interpolate between times with specified value.
‘zero’ creates a plain piecewise function whose values change at
corresponding time points, while all other options interpolate
as described in scipy.interpolate
.
make_step
(self, shape_in, shape_out, dt, rng)[source]¶Create function that advances the process forward one time step.
This must be implemented by all custom processes.
The shape of the input signal.
The shape of the output signal.
The simulation timestep.
numpy.random.RandomState
A random number generator.
Abstract base class for synapse models. 

Filter signal with synapse. 

Zero-phase filtering of signal using synapse. 

General linear time-invariant (LTI) system synapse. 

Standard first-order lowpass filter synapse. 

Alpha-function filter synapse. 

Triangular finite impulse response (FIR) synapse. 
nengo.synapses.
Synapse
(default_size_in=1, default_size_out=None, default_dt=0.001, seed=None)[source]¶Abstract base class for synapse models.
Conceptually, a synapse model emulates a biological synapse, taking in input in the form of released neurotransmitter and opening ion channels to allow more or less current to flow into the neuron.
In Nengo, the implementation of a synapse is as a specific case of a
Process
in which the input and output shapes are the same.
The input is the current across the synapse, and the output is the current
that will be induced in the postsynaptic neuron.
Synapses also contain the Synapse.filt
and Synapse.filtfilt
methods,
which make it easy to use Nengo’s synapse models outside of Nengo
simulations.
The size_in used if not specified.
The size_out used if not specified. If None, will be the same as default_size_in.
The simulation timestep used if not specified.
Random number seed. Ensures random factors will be the same each run.
The simulation timestep used if not specified.
The size_in used if not specified.
The size_out used if not specified.
Random number seed. Ensures random factors will be the same each run.
filt
(self, x, dt=None, axis=0, y0=None, copy=True, filtfilt=False)[source]¶Filter x
with this synapse model.
The signal to filter.
The timestep of the input signal.
If None, default_dt
will be used.
The axis along which to filter.
The starting state of the filter output. If None, the initial value of the input signal along the axis filtered will be used.
Whether to copy the input data, or simply work inplace.
If True, runs the process forward then backward on the signal,
for zerophase filtering (like Matlab’s filtfilt
).
filtfilt
(self, x, **kwargs)[source]¶Zero-phase filtering of x
using this filter.
Equivalent to filt(x, filtfilt=True, **kwargs)
.
make_step
(self, shape_in, shape_out, dt, rng, y0=None, dtype=<class 'numpy.float64'>)[source]¶Create function that advances the synapse forward one time step.
At a minimum, Synapse subclasses must implement this method. That implementation should return a callable that will perform the synaptic filtering operation.
Shape of the input signal to be filtered.
Shape of the output filtered signal.
The timestep of the simulation.
numpy.random.RandomState
Random number generator.
The starting state of the filter output. If None, each dimension of the state will start at zero.
numpy.dtype
(Default: np.float64)Type of data used by the synapse model. This is important for ensuring that certain synapses avoid or force integer division.
nengo.synapses.
filt
(signal, synapse, dt, axis=0, x0=None, copy=True)[source]¶Filter signal
with synapse
.
Note
Deprecated in Nengo 2.1.0.
Use Synapse.filt
method instead.
nengo.synapses.
filtfilt
(signal, synapse, dt, axis=0, x0=None, copy=True)[source]¶Zero-phase filtering of signal
using the synapse
filter.
Note
Deprecated in Nengo 2.1.0.
Use Synapse.filtfilt
method instead.
nengo.
LinearFilter
(num, den, analog=True, **kwargs)[source]¶General linear time-invariant (LTI) system synapse.
This class can be used to implement any linear filter, given the filter’s transfer function.
Numerator coefficients of transfer function.
Denominator coefficients of transfer function.
Whether the synapse coefficients are analog (i.e. continuoustime),
or discrete. Analog coefficients will be converted to discrete for
simulation using the simulator dt
.
Whether the synapse coefficients are analog (i.e. continuoustime),
or discrete. Analog coefficients will be converted to discrete for
simulation using the simulator dt
.
Denominator coefficients of transfer function.
Numerator coefficients of transfer function.
evaluate
(self, frequencies)[source]¶Evaluate the transfer function at the given frequencies.
Examples
Using the evaluate
function to make a Bode plot:
synapse = nengo.synapses.LinearFilter([1], [0.02, 1])
f = np.logspace(-1, 3, 100)
y = synapse.evaluate(f)
plt.subplot(211); plt.semilogx(f, 20*np.log10(np.abs(y)))
plt.xlabel('frequency [Hz]'); plt.ylabel('magnitude [dB]')
plt.subplot(212); plt.semilogx(f, np.angle(y))
plt.xlabel('frequency [Hz]'); plt.ylabel('phase [radians]')
make_step
(self, shape_in, shape_out, dt, rng, y0=None, dtype=<class 'numpy.float64'>, method='zoh')[source]¶Returns a Step
instance that implements the linear filter.
NoDen
(num, den, output)[source]¶An LTI step function for transfer functions with no denominator.
This step function should be much faster than the equivalent general step function.
Simple
(num, den, output, y0=None)[source]¶An LTI step function for transfer functions with one num and den.
This step function should be much faster than the equivalent general step function.
nengo.
Lowpass
(tau, **kwargs)[source]¶Standard first-order lowpass filter synapse.
The impulseresponse function is given by:
f(t) = (1 / tau) * exp(-t / tau)
The time constant of the filter in seconds.
The time constant of the filter in seconds.
make_step
(self, shape_in, shape_out, dt, rng, y0=None, dtype=<class 'numpy.float64'>, **kwargs)[source]¶Returns an optimized LinearFilter.Step
subclass.
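The discrete update a first-order lowpass performs each step can be sketched as follows (zero-order-hold discretization of the impulse response above; an illustration, not Nengo's optimized Step class):

```python
import numpy as np

def lowpass_steps(x, dt=0.001, tau=0.005):
    # y[k] = a * y[k-1] + (1 - a) * x[k], with a = exp(-dt / tau).
    a = np.exp(-dt / tau)
    y, out = 0.0, []
    for xk in x:
        y = a * y + (1 - a) * xk
        out.append(y)
    return np.array(out)

# The step response rises monotonically and settles toward the input.
y = lowpass_steps(np.ones(100))
```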
nengo.
Alpha
(tau, **kwargs)[source]¶Alpha-function filter synapse.
The impulseresponse function is given by:
alpha(t) = (t / tau**2) * exp(-t / tau)
and was found by [1] to be a good basic model for synapses.
The time constant of the filter in seconds.
References
Mainen, Z.F. and Sejnowski, T.J. (1995). Reliability of spike timing in neocortical neurons. Science (New York, NY), 268(5216):1503-1506.
The time constant of the filter in seconds.
make_step
(self, shape_in, shape_out, dt, rng, y0=None, dtype=<class 'numpy.float64'>, **kwargs)[source]¶Returns an optimized LinearFilter.Step
subclass.
nengo.synapses.
Triangle
(t, **kwargs)[source]¶Triangular finite impulse response (FIR) synapse.
This synapse has a triangular and finite impulse response. The length of
the triangle is t
seconds; thus the digital filter will have
t / dt + 1
taps.
Length of the triangle, in seconds.
Length of the triangle, in seconds.
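The tap count t / dt + 1 and a normalized triangular kernel can be sketched as follows (illustrative only, not Nengo's implementation):

```python
t, dt = 0.01, 0.001
n_taps = int(round(t / dt)) + 1  # t / dt + 1 taps, as described above

# Linearly decreasing (triangular) coefficients, normalized to sum to 1
# so the filter has unit DC gain.
coeffs = [float(n_taps - i) for i in range(n_taps)]
total = sum(coeffs)
coeffs = [c / total for c in coeffs]
```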
Decoder or weight solver. 

Unregularized least-squares solver. 

Least-squares solver with additive Gaussian white noise. 

Least-squares solver with multiplicative white noise. 

Least-squares solver with L2 regularization. 

Least-squares solver with L2 regularization on nonzero components. 

Least-squares solver with L1 and L2 regularization (elastic net). 

Find sparser decoders/weights by dropping small values. 

Non-negative least-squares solver without regularization. 

Non-negative least-squares solver with L2 regularization. 

Non-negative least-squares with L2 regularization on nonzero components. 

Manually pass in weights, bypassing the decoder solver. 

Linear least-squares system solver. 

Solve a least-squares system using the Cholesky decomposition. 

Solve a least-squares system using Scipy’s conjugate gradient. 

Solve a least-squares system using Scipy’s LSMR. 

Solve a least-squares system using conjugate gradient. 

Solve a multiple-RHS least-squares system using block conjugate gradient. 

Solve a least-squares system using full SVD. 

Solve a least-squares system using a randomized (partial) SVD. 
nengo.solvers.
Solver
(weights=False)[source]¶Decoder or weight solver.
A solver can be compositional or non-compositional. Non-compositional solvers must operate on the whole neuron-to-neuron weight matrix, while compositional solvers operate in the decoded state space, which is then combined with transform/encoders to generate the full weight matrix.
See the solver’s compositional
class attribute to determine if it is
compositional.
__call__
(self, A, Y, rng=numpy.random)[source]¶Call the solver.
Matrix of the neurons’ activities at the evaluation points
Matrix of the target decoded values for each of the D dimensions, at each of the evaluation points.
numpy.random.RandomState, optional (Default: numpy.random)
A random number generator to use as required.
(n_neurons, dimensions) array of decoders (if solver.weights
is False) or (n_neurons, post.n_neurons) array of weights
(if solver.weights
is True).
A dictionary of information about the solver. All dictionaries have
an 'rmses'
key that contains RMS errors of the solve.
Other keys are unique to particular solvers.
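The calling convention can be sketched with plain NumPy (a hypothetical unregularized solver, not part of Nengo's API): A has shape (n_eval_points, n_neurons), Y has shape (n_eval_points, dimensions), and the solver returns decoders together with an info dict containing the 'rmses' key:

```python
import numpy as np

def solve_unregularized(A, Y):
    """Sketch of the Solver.__call__ contract (hypothetical helper)."""
    D, *_ = np.linalg.lstsq(A, Y, rcond=None)
    # per-dimension RMS error of the decoded estimate A @ D
    rmses = np.sqrt(np.mean((A @ D - Y) ** 2, axis=0))
    return D, {"rmses": rmses}

rng = np.random.RandomState(0)
A = rng.rand(100, 20)  # activities: 100 eval points, 20 neurons
Y = rng.rand(100, 2)   # targets: 2 decoded dimensions
D, info = solve_unregularized(A, Y)
print(D.shape)  # (20, 2): (n_neurons, dimensions)
```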
nengo.solvers.
Lstsq
(weights=False, rcond=0.01)[source]¶Unregularized least-squares solver.
If False, solve for decoders. If True, solve for weights.
Cutoff ratio for small singular values (see numpy.linalg.lstsq
).
Cutoff ratio for small singular values (see numpy.linalg.lstsq
).
If False, solve for decoders. If True, solve for weights.
nengo.solvers.
LstsqNoise
(weights=False, noise=0.1, solver=Cholesky())[source]¶Least-squares solver with additive Gaussian white noise.
If False, solve for decoders. If True, solve for weights.
Amount of noise, as a fraction of the neuron activity.
LeastSquaresSolver, optional (Default: Cholesky())
Subsolver to use for solving the least-squares problem.
Amount of noise, as a fraction of the neuron activity.
LeastSquaresSolver
Subsolver to use for solving the least-squares problem.
If False, solve for decoders. If True, solve for weights.
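The idea can be sketched in NumPy (an illustrative stand-in, not Nengo's implementation): perturb the activities with Gaussian noise scaled by noise times the maximum activity, then solve the unregularized problem. Solving against noisy activities makes the decoders robust to that noise level:

```python
import numpy as np

def lstsq_noise(A, Y, noise=0.1, rng=None):
    """Sketch of additive-noise least squares (illustrative only)."""
    rng = np.random.RandomState(0) if rng is None else rng
    sigma = noise * A.max()  # noise as a fraction of peak activity
    A_noisy = A + rng.normal(scale=sigma, size=A.shape)
    D, *_ = np.linalg.lstsq(A_noisy, Y, rcond=None)
    return D

rng = np.random.RandomState(0)
A, Y = rng.rand(100, 20), rng.rand(100, 2)
D = lstsq_noise(A, Y, noise=0.1)
print(D.shape)  # (20, 2)
```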
nengo.solvers.
LstsqMultNoise
(weights=False, noise=0.1, solver=Cholesky())[source]¶Least-squares solver with multiplicative white noise.
If False, solve for decoders. If True, solve for weights.
Amount of noise, as a fraction of the neuron activity.
LeastSquaresSolver, optional (Default: Cholesky())
Subsolver to use for solving the least-squares problem.
Amount of noise, as a fraction of the neuron activity.
LeastSquaresSolver
Subsolver to use for solving the least-squares problem.
If False, solve for decoders. If True, solve for weights.
nengo.solvers.
LstsqL2
(weights=False, reg=0.1, solver=Cholesky())[source]¶Least-squares solver with L2 regularization.
If False, solve for decoders. If True, solve for weights.
Amount of regularization, as a fraction of the neuron activity.
LeastSquaresSolver, optional (Default: Cholesky())
Subsolver to use for solving the least-squares problem.
Amount of regularization, as a fraction of the neuron activity.
LeastSquaresSolver
Subsolver to use for solving the least-squares problem.
If False, solve for decoders. If True, solve for weights.
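L2 regularization here is ordinary ridge regression. A NumPy sketch (assuming reg scales the maximum activity, as in Nengo's convention; the exact scaling inside the real subsolvers may differ):

```python
import numpy as np

def lstsq_l2(A, Y, reg=0.1):
    """Ridge-regression sketch of L2-regularized least squares."""
    m, n = A.shape
    sigma = reg * A.max()  # regularization as a fraction of peak activity
    # regularized normal equations: (A^T A + m sigma^2 I) D = A^T Y
    G = A.T @ A + m * sigma ** 2 * np.eye(n)
    return np.linalg.solve(G, A.T @ Y)

rng = np.random.RandomState(0)
A, Y = rng.rand(100, 20), rng.rand(100, 2)
D = lstsq_l2(A, Y, reg=0.1)
print(D.shape)  # (20, 2)
```

With reg=0 this reduces to the unregularized normal-equations solution.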
nengo.solvers.
LstsqL2nz
(weights=False, reg=0.1, solver=Cholesky())[source]¶Least-squares solver with L2 regularization on nonzero components.
If False, solve for decoders. If True, solve for weights.
Amount of regularization, as a fraction of the neuron activity.
LeastSquaresSolver, optional (Default: Cholesky())
Subsolver to use for solving the least-squares problem.
Amount of regularization, as a fraction of the neuron activity.
LeastSquaresSolver
Subsolver to use for solving the least-squares problem.
If False, solve for decoders. If True, solve for weights.
nengo.solvers.
LstsqL1
(weights=False, l1=0.0001, l2=1e-06, max_iter=1000)[source]¶Least-squares solver with L1 and L2 regularization (elastic net).
This method is well suited for creating sparse decoders or weight matrices.
Note
Requires scikit-learn.
If False, solve for decoders. If True, solve for weights.
Amount of L1 regularization.
Amount of L2 regularization.
Maximum number of iterations for the underlying elastic net.
Amount of L1 regularization.
Amount of L2 regularization.
If False, solve for decoders. If True, solve for weights.
Maximum number of iterations for the underlying elastic net.
nengo.solvers.
LstsqDrop
(weights=False, drop=0.25, solver1=LstsqL2(reg=0.001), solver2=LstsqL2())[source]¶Find sparser decoders/weights by dropping small values.
This solver first solves for coefficients (decoders or weights) with L2 regularization, drops those nearest to zero, and then re-solves for the remaining coefficients.
If False, solve for decoders. If True, solve for weights.
Fraction of decoders or weights to set to zero.
Solver, optional (Default: LstsqL2(reg=0.001))
Solver for finding the initial decoders.
Solver, optional (Default: LstsqL2(reg=0.1))
Solver used to re-solve for the decoders after dropout.
Fraction of decoders or weights to set to zero.
Solver for finding the initial decoders.
Solver used to re-solve for the decoders after dropout.
If False, solve for decoders. If True, solve for weights.
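The drop-and-retrain procedure can be sketched with NumPy (illustrative only; the inline ridge helper stands in for the solver1/solver2 subsolvers, and the real solver uses separate regularization for each stage):

```python
import numpy as np

def lstsq_drop(A, Y, drop=0.25, reg=0.001):
    """Sketch of drop-and-retrain: solve, zero small values, re-solve."""
    m, n = A.shape
    sigma = reg * A.max()

    def ridge(A_, Y_):  # L2-regularized least squares
        G = A_.T @ A_ + m * sigma ** 2 * np.eye(A_.shape[1])
        return np.linalg.solve(G, A_.T @ Y_)

    D = ridge(A, Y)                    # initial regularized solve
    cutoff = np.percentile(np.abs(D), 100 * drop)
    D[np.abs(D) <= cutoff] = 0.0       # drop the smallest `drop` fraction
    for d in range(Y.shape[1]):        # re-solve surviving coefficients
        keep = np.abs(D[:, d]) > 0
        if keep.any():
            D[keep, d] = ridge(A[:, keep], Y[:, [d]]).ravel()
    return D

rng = np.random.RandomState(0)
A, Y = rng.rand(100, 20), rng.rand(100, 2)
D = lstsq_drop(A, Y)
print(D.shape)  # (20, 2), with at least the `drop` fraction set to zero
```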
nengo.solvers.
Nnls
(weights=False)[source]¶Non-negative least-squares solver without regularization.
Similar to Lstsq, except the output values are non-negative.
If solving for non-negative weights, it is important that the intercepts of the post population are also non-negative, since neurons with negative intercepts will never be silent, affecting output accuracy.
Note
Requires SciPy.
If False, solve for decoders. If True, solve for weights.
If False, solve for decoders. If True, solve for weights.
nengo.solvers.
NnlsL2
(weights=False, reg=0.1)[source]¶Non-negative least-squares solver with L2 regularization.
Similar to LstsqL2, except the output values are non-negative.
If solving for non-negative weights, it is important that the intercepts of the post population are also non-negative, since neurons with negative intercepts will never be silent, affecting output accuracy.
Note
Requires SciPy.
If False, solve for decoders. If True, solve for weights.
Amount of regularization, as a fraction of the neuron activity.
Amount of regularization, as a fraction of the neuron activity.
If False, solve for decoders. If True, solve for weights.
nengo.solvers.
NnlsL2nz
(weights=False, reg=0.1)[source]¶Non-negative least-squares with L2 regularization on nonzero components.
Similar to LstsqL2nz, except the output values are non-negative.
If solving for non-negative weights, it is important that the intercepts of the post population are also non-negative, since neurons with negative intercepts will never be silent, affecting output accuracy.
Note
Requires SciPy.
If False, solve for decoders. If True, solve for weights.
Amount of regularization, as a fraction of the neuron activity.
Amount of regularization, as a fraction of the neuron activity.
If False, solve for decoders. If True, solve for weights.
nengo.solvers.
NoSolver
(values=None, weights=False)[source]¶Manually pass in weights, bypassing the decoder solver.
The array of decoders to use. size_out is the dimensionality of the decoded signal (determined by the connection function). If None (the default), the solver will return an appropriately sized array of zeros.
If False, connection will use factored weights (decoders from this solver, transform, and encoders). If True, connection will use a full weight matrix (created by linearly combining decoder, transform, and encoders).
The array of decoders to use. size_out is the dimensionality of the decoded signal (determined by the connection function). If None (the default), the solver will return an appropriately sized array of zeros.
If False, connection will use factored weights (decoders from this solver, transform, and encoders). If True, connection will use a full weight matrix (created by linearly combining decoder, transform, and encoders).
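In effect, this solver is a pass-through: the activities and targets are ignored and the supplied array is returned as-is. A minimal sketch (hypothetical helper; the info dict is left empty for brevity):

```python
import numpy as np

def no_solver(A, Y, values=None):
    """Sketch of NoSolver: return user-supplied decoders, or zeros."""
    if values is None:
        return np.zeros((A.shape[1], Y.shape[1])), {}
    return np.asarray(values), {}

A = np.zeros((10, 4))   # activities are ignored
Y = np.zeros((10, 2))   # targets are ignored
D, _ = no_solver(A, Y)
print(D.shape)  # (4, 2): zeros when values is None
```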
nengo.utils.least_squares_solvers.
LeastSquaresSolver
[source]¶Linear least squares system solver.
nengo.utils.least_squares_solvers.
Cholesky
(transpose=None)[source]¶Solve a least-squares system using the Cholesky decomposition.
nengo.utils.least_squares_solvers.
ConjgradScipy
(tol=0.0001, atol=1e-08)[source]¶Solve a least-squares system using SciPy’s conjugate gradient.
References
scipy.sparse.linalg.cg documentation, https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.cg.html
nengo.utils.least_squares_solvers.
LSMRScipy
(tol=0.0001)[source]¶Solve a least-squares system using SciPy’s LSMR.
nengo.utils.least_squares_solvers.
Conjgrad
(tol=0.01, maxiters=None, X0=None)[source]¶Solve a least-squares system using conjugate gradient.
nengo.utils.least_squares_solvers.
BlockConjgrad
(tol=0.01, X0=None)[source]¶Solve a multiple-RHS least-squares system using block conjugate gradient.
nengo.utils.least_squares_solvers.
RandomizedSVD
(n_components=60, n_oversamples=10, n_iter=0)[source]¶Solve a least-squares system using a randomized (partial) SVD.
Useful for solving large matrices quickly, but non-optimally.
The number of SVD components to compute. A small survey of activity matrices suggests that the first 60 components capture almost all the variance.
The number of additional samples on the range of A.
The number of power iterations to perform (can help with noisy data).
See also
sklearn.utils.extmath.randomized_svd
Function used by this class
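The randomized-SVD idea (after Halko, Martinsson, and Tropp) can be sketched in plain NumPy: sample the range of A with a random projection of dimension n_components + n_oversamples, orthonormalize, then take an exact SVD of the small projected matrix. This is an illustrative sketch, not the sklearn implementation used by this class:

```python
import numpy as np

def randomized_svd(A, n_components=60, n_oversamples=10, n_iter=0, rng=None):
    """Illustrative randomized (partial) SVD sketch."""
    rng = np.random.RandomState(0) if rng is None else rng
    k = n_components + n_oversamples
    Y = A @ rng.standard_normal((A.shape[1], k))  # random range sample
    for _ in range(n_iter):                       # optional power iterations
        Y = A @ (A.T @ Y)                         # (help with noisy data)
    Q, _ = np.linalg.qr(Y)                        # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_small
    return U[:, :n_components], s[:n_components], Vt[:n_components]

rng = np.random.RandomState(1)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 30))  # rank 5
U, s, Vt = randomized_svd(A, n_components=5)
print(np.allclose(U @ np.diag(s) @ Vt, A))  # exact for a rank-5 matrix
```

Oversampling pads the subspace so the top components are captured reliably; power iterations sharpen the estimate when the singular spectrum decays slowly.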