Nengo frontend API

Nengo Objects

nengo.Network A network contains ensembles, nodes, connections, and other networks.
nengo.Ensemble A group of neurons that collectively represent a vector.
nengo.ensemble.Neurons An interface for making connections directly to an ensemble’s neurons.
nengo.Node Provide non-neural inputs to Nengo objects and process outputs.
nengo.Connection Connects two objects together.
nengo.connection.LearningRule An interface for making connections to a learning rule.
nengo.Probe A probe is an object that collects data from the simulation.
class nengo.Network(label=None, seed=None, add_to_container=None)[source]

A network contains ensembles, nodes, connections, and other networks.

A network is primarily used for grouping together related objects and connections for visualization purposes. However, you can also use networks as a nice way to reuse network creation code.

To group together related objects that you do not need to reuse, you can create a new Network and add objects in a with block. For example:

network = nengo.Network()
with network:
    with nengo.Network(label="Vision"):
        v1 = nengo.Ensemble(n_neurons=100, dimensions=2)
    with nengo.Network(label="Motor"):
        sma = nengo.Ensemble(n_neurons=100, dimensions=2)
    nengo.Connection(v1, sma)

To reuse a group of related objects, you can create a new subclass of Network, and add objects in the __init__ method. For example:

class OcularDominance(nengo.Network):
    def __init__(self):
        self.column = nengo.Ensemble(n_neurons=100, dimensions=2)

network = nengo.Network()
with network:
    left_eye = OcularDominance()
    right_eye = OcularDominance()
    nengo.Connection(left_eye.column, right_eye.column)
Parameters:
label : str, optional (Default: None)

Name of the network.

seed : int, optional (Default: None)

Random number seed that will be fed to the random number generator. Setting the seed makes the network’s build process deterministic.

add_to_container : bool, optional (Default: None)

Determines if this network will be added to the current container. If None, this network will be added to the network at the top of the Network.context stack unless the stack is empty.

Attributes:
connections : list

Connection instances in this network.

ensembles : list

Ensemble instances in this network.

label : str

Name of this network.

networks : list

Network instances in this network.

nodes : list

Node instances in this network.

probes : list

Probe instances in this network.

seed : int

Random seed used by this network.

static add(obj)[source]

Add the passed object to Network.context.

static default_config()[source]

Constructs a Config object for setting defaults.

all_objects

(list) All objects in this network and its subnetworks.

all_ensembles

(list) All ensembles in this network and its subnetworks.

all_nodes

(list) All nodes in this network and its subnetworks.

all_networks

(list) All networks in this network and its subnetworks.

all_connections

(list) All connections in this network and its subnetworks.

all_probes

(list) All probes in this network and its subnetworks.

config

(Config) Configuration for this network.

n_neurons

(int) Number of neurons in this network, including subnetworks.

class nengo.Ensemble(n_neurons, dimensions, radius=Default, encoders=Default, intercepts=Default, max_rates=Default, eval_points=Default, n_eval_points=Default, neuron_type=Default, gain=Default, bias=Default, noise=Default, normalize_encoders=Default, label=Default, seed=Default)[source]

A group of neurons that collectively represent a vector.

Parameters:
n_neurons : int

The number of neurons.

dimensions : int

The number of representational dimensions.

radius : int, optional (Default: 1.0)

The representational radius of the ensemble.

encoders : Distribution or (n_neurons, dimensions) array_like, optional (Default: UniformHypersphere(surface=True))

The encoders used to transform from representational space to neuron space. Each row is a neuron’s encoder; each column is a representational dimension.

intercepts : Distribution or (n_neurons,) array_like, optional (Default: nengo.dists.Uniform(-1.0, 1.0))

The point along each neuron’s encoder where its activity is zero. If e is the neuron’s encoder, then the activity will be zero when dot(x, e) <= c, where c is the given intercept.

max_rates : Distribution or (n_neurons,) array_like, optional (Default: nengo.dists.Uniform(200, 400))

The activity of each neuron when the input signal x is magnitude 1 and aligned with that neuron’s encoder e; i.e., when dot(x, e) = 1.

eval_points : Distribution or (n_eval_points, dims) array_like, optional (Default: nengo.dists.UniformHypersphere())

The evaluation points used for decoder solving, spanning the interval (-radius, radius) in each dimension, or a distribution from which to choose evaluation points.

n_eval_points : int, optional (Default: None)

The number of evaluation points to be drawn from the eval_points distribution. If None, then a heuristic is used to determine the number of evaluation points.

neuron_type : NeuronType, optional (Default: nengo.LIF())

The model that simulates all neurons in the ensemble (see NeuronType).

gain : Distribution or (n_neurons,) array_like (Default: None)

The gains associated with each neuron in the ensemble. If None, then the gain will be solved for using max_rates and intercepts.

bias : Distribution or (n_neurons,) array_like (Default: None)

The biases associated with each neuron in the ensemble. If None, then the bias will be solved for using max_rates and intercepts.

noise : Process, optional (Default: None)

Random noise injected directly into each neuron in the ensemble as current. A sample is drawn for each individual neuron on every simulation step.

normalize_encoders : bool, optional (Default: True)

Indicates whether the encoders should be normalized.

label : str, optional (Default: None)

A name for the ensemble. Used for debugging and visualization.

seed : int, optional (Default: None)

The seed used for random number generation.
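
For example (a sketch; the specific parameter values here are arbitrary):

import nengo

with nengo.Network():
    ens = nengo.Ensemble(
        n_neurons=50,
        dimensions=2,
        radius=1.5,
        max_rates=nengo.dists.Uniform(100, 200),
        intercepts=nengo.dists.Uniform(-0.5, 0.5),
        label="My ensemble",
    )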

Attributes:
bias : Distribution or (n_neurons,) array_like or None

The biases associated with each neuron in the ensemble.

dimensions : int

The number of representational dimensions.

encoders : Distribution or (n_neurons, dimensions) array_like

The encoders, used to transform from representational space to neuron space. Each row is a neuron’s encoder, each column is a representational dimension.

eval_points : Distribution or (n_eval_points, dims) array_like

The evaluation points used for decoder solving, spanning the interval (-radius, radius) in each dimension, or a distribution from which to choose evaluation points.

gain : Distribution or (n_neurons,) array_like or None

The gains associated with each neuron in the ensemble.

intercepts : Distribution or (n_neurons,) array_like or None

The point along each neuron’s encoder where its activity is zero. If e is the neuron’s encoder, then the activity will be zero when dot(x, e) <= c, where c is the given intercept.

label : str or None

A name for the ensemble. Used for debugging and visualization.

max_rates : Distribution or (n_neurons,) array_like or None

The activity of each neuron when dot(x, e) = 1, where e is the neuron’s encoder.

n_eval_points : int or None

The number of evaluation points to be drawn from the eval_points distribution. If None, then a heuristic is used to determine the number of evaluation points.

n_neurons : int or None

The number of neurons.

neuron_type : NeuronType

The model that simulates all neurons in the ensemble (see nengo.neurons).

noise : Process or None

Random noise injected directly into each neuron in the ensemble as current. A sample is drawn for each individual neuron on every simulation step.

radius : int

The representational radius of the ensemble.

seed : int or None

The seed used for random number generation.

neurons

(Neurons) A direct interface to the neurons in the ensemble.

size_in

(int) The dimensionality of the ensemble.

size_out

(int) The dimensionality of the ensemble.

class nengo.ensemble.Neurons(ensemble)[source]

An interface for making connections directly to an ensemble’s neurons.

This should only ever be accessed through the neurons attribute of an ensemble, as a way to signal to Connection that the connection should be made directly to the neurons rather than to the ensemble’s decoded value, e.g.:

nengo.Connection(a.neurons, b.neurons)
ensemble

(Ensemble) The ensemble these neurons are part of.

probeable

(tuple) Signals that can be probed in the neuron population.

size_in

(int) The number of neurons in the population.

size_out

(int) The number of neurons in the population.
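
For example (a sketch; the zero weight matrix is purely illustrative), neuron-level connections and probes can be made alongside decoded ones:

import numpy as np
import nengo

with nengo.Network():
    a = nengo.Ensemble(30, dimensions=1)
    b = nengo.Ensemble(30, dimensions=1)

    # Neuron-to-neuron connection with an explicit (30, 30) weight matrix
    nengo.Connection(a.neurons, b.neurons, transform=np.zeros((30, 30)))

    # Probe the spiking output of a's neurons directly
    spikes = nengo.Probe(a.neurons)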

class nengo.Node(output=Default, size_in=Default, size_out=Default, label=Default, seed=Default)[source]

Provide non-neural inputs to Nengo objects and process outputs.

Nodes can accept input, and perform arbitrary computations for the purpose of controlling a Nengo simulation. Nodes are typically not part of a brain model per se, but serve to summarize the assumptions being made about sensory data or other environment variables that cannot be generated by a brain model alone.

Nodes can also be used to test models by providing specific input signals to parts of the model, and can simplify the input/output interface of a Network when used as a relay to/from its internal ensembles (see EnsembleArray for an example).
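
For example (a minimal sketch; the particular functions are arbitrary):

import numpy as np
import nengo

with nengo.Network():
    # Output is a function of time only (size_in defaults to 0)
    stim = nengo.Node(output=lambda t: np.sin(2 * np.pi * t))
    ens = nengo.Ensemble(50, dimensions=1)
    nengo.Connection(stim, ens)

    # size_in=1: this node receives a 1-dimensional signal and post-processes it
    readout = nengo.Node(output=lambda t, x: x ** 2, size_in=1)
    nengo.Connection(ens, readout)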

Parameters:
output : callable, array_like, or None

Function that transforms the Node inputs into outputs, a constant output value, or None to transmit signals unchanged.

size_in : int, optional (Default: 0)

The number of dimensions of the input data parameter.

size_out : int, optional (Default: None)

The size of the output signal. If None, it will be determined based on the values of output and size_in.

label : str, optional (Default: None)

A name for the node. Used for debugging and visualization.

seed : int, optional (Default: None)

The seed used for random number generation. Note: no aspects of the node are random, so currently setting this seed has no effect.

Attributes:
label : str

The name of the node.

output : callable, array_like, or None

The given output.

size_in : int

The number of dimensions for incoming connection.

size_out : int

The number of output dimensions.

class nengo.Connection(pre, post, synapse=Default, function=Default, transform=Default, solver=Default, learning_rule_type=Default, eval_points=Default, scale_eval_points=Default, label=Default, seed=Default, modulatory=Unconfigurable)[source]

Connects two objects together.

The connection between the two object is unidirectional, transmitting information from the first argument, pre, to the second argument, post.

Almost any Nengo object can act as the pre or post side of a connection. Additionally, you can use Python slice syntax to access only some of the dimensions of the pre or post object.

For example, if node has size_out=2 and ensemble has size_in=1, we could not create the following connection:

nengo.Connection(node, ensemble)

But, we could create either of these two connections:

nengo.Connection(node[0], ensemble)
nengo.Connection(node[1], ensemble)
Parameters:
pre : Ensemble or Neurons or Node

The source Nengo object for the connection.

post : Ensemble or Neurons or Node or Probe

The destination object for the connection.

synapse : Synapse or None, optional (Default: nengo.synapses.Lowpass(tau=0.005))

Synapse model to use for filtering (see Synapse). If None, no synapse will be used and information will be transmitted without any delay (if supported by the backend—some backends may introduce a single time step delay).

Note that at least one connection must have a synapse that is not None if components are connected in a cycle. Furthermore, a synaptic filter with a zero time constant is different from a None synapse as a synaptic filter will always add a delay of at least one time step.

function : callable or (n_eval_points, size_mid) array_like, optional (Default: None)

Function to compute across the connection. Note that pre must be an ensemble to apply a function across the connection. If an array is passed, the function is implicitly defined by the points in the array and the provided eval_points, which have a one-to-one correspondence.

transform : (size_out, size_mid) array_like, optional (Default: np.array(1.0))

Linear transform mapping the pre output to the post input. This transform is in terms of the sliced size; if either pre or post is a slice, the transform must be shaped according to the sliced dimensionality. Additionally, the function is applied before the transform, so if a function is computed across the connection, the transform must be of shape (size_out, size_mid).

solver : Solver, optional (Default: nengo.solvers.LstsqL2())

Solver instance to compute decoders or weights (see Solver). If solver.weights is True, a full connection weight matrix is computed instead of decoders.

learning_rule_type : LearningRuleType or iterable of LearningRuleType, optional (Default: None)

Modifies the decoders or connection weights during simulation.

eval_points : (n_eval_points, size_in) array_like or int, optional (Default: None)

Points at which to evaluate function when computing decoders, spanning the interval (-pre.radius, pre.radius) in each dimension. If None, will use the eval_points associated with pre.

scale_eval_points : bool, optional (Default: True)

Indicates whether the evaluation points should be scaled by the radius of the pre Ensemble.

label : str, optional (Default: None)

A descriptive label for the connection.

seed : int, optional (Default: None)

The seed used for random number generation.

Attributes:
is_decoded : bool

True if and only if the connection is decoded. Decoding does not occur when solver.weights is True, or when both pre and post are Neurons objects.

function : callable

The given function.

function_size : int

The output dimensionality of the given function. If no function is specified, function_size will be 0.

label : str

A human-readable connection label for debugging and visualization. If not overridden, incorporates the labels of the pre and post objects.

learning_rule_type : instance or list or dict of LearningRuleType, optional

The learning rule types.

post : Ensemble or Neurons or Node or Probe or ObjView

The given post object.

post_obj : Ensemble or Neurons or Node or Probe

The underlying post object, even if post is an ObjView.

post_slice : slice or list or None

The slice associated with post if it is an ObjView, or None.

pre : Ensemble or Neurons or Node or ObjView

The given pre object.

pre_obj : Ensemble or Neurons or Node

The underlying pre object, even if pre is an ObjView.

pre_slice : slice or list or None

The slice associated with pre if it is an ObjView, or None.

seed : int

The seed used for random number generation.

solver : Solver

The Solver instance that will be used to compute decoders or weights (see nengo.solvers).

synapse : Synapse

The Synapse model used for filtering across the connection (see nengo.synapses).

transform : (size_out, size_mid) array_like

Linear transform mapping the pre function output to the post input.

learning_rule

(LearningRule or iterable) Connectable learning rule object(s).

size_in

(int) The number of output dimensions of the pre object.

Also the input size of the function, if one is specified.

size_mid

(int) The number of output dimensions of the function, if specified.

If the function is not specified, then size_in == size_mid.

size_out

(int) The number of input dimensions of the post object.

Also the number of output dimensions of the transform.
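
As a sketch of how function and transform interact (the particular function and scaling factor are arbitrary):

import nengo

with nengo.Network():
    a = nengo.Ensemble(100, dimensions=2)
    b = nengo.Ensemble(50, dimensions=1)

    # Decode the product of a's two dimensions and scale it by 0.5.
    # The function maps size_in=2 to size_mid=1, so the transform has
    # shape (size_out, size_mid) = (1, 1); a scalar works here.
    nengo.Connection(a, b, function=lambda x: x[0] * x[1], transform=0.5)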

class nengo.connection.LearningRule(connection, learning_rule_type)[source]

An interface for making connections to a learning rule.

Connections to a learning rule allow elements of the network to affect the learning rule. For example, learning rules that use error information can obtain that information through a connection.

Learning rule objects should only ever be accessed through the learning_rule attribute of a connection.

connection

(Connection) The connection modified by the learning rule.

modifies

(str) The variable modified by the learning rule.

probeable

(tuple) Signals that can be probed in the learning rule.

size_out

(int) Cannot connect from learning rules, so always 0.

class nengo.Probe(target, attr=None, sample_every=Default, synapse=Default, solver=Default, label=Default, seed=Default)[source]

A probe is an object that collects data from the simulation.

This is to be used in any situation where you wish to gather simulation data (spike data, represented values, neuron voltages, etc.) for analysis.

Probes do not directly affect the simulation.

All Nengo objects can be probed (except Probes themselves). Each object has different attributes that can be probed. To see what is probeable for each object, print its probeable attribute.

>>> with nengo.Network():
...     ens = nengo.Ensemble(10, 1)
>>> print(ens.probeable)
['decoded_output', 'input']
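
A typical usage sketch (the object names, run time, and synapse value are arbitrary):

import nengo

with nengo.Network() as model:
    stim = nengo.Node(0.5)
    ens = nengo.Ensemble(40, dimensions=1)
    nengo.Connection(stim, ens)
    # Probe the decoded output, filtered with a 10 ms lowpass synapse
    p = nengo.Probe(ens, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# With the default dt of 0.001, sim.trange() has shape (1000,)
# and sim.data[p] has shape (1000, 1)
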
Parameters:
target : Ensemble, Neurons, Node, or Connection

The object to probe.

attr : str, optional (Default: None)

The signal to probe. Refer to the target’s probeable list for details. If None, the first element in the probeable list will be used.

sample_every : float, optional (Default: None)

Sampling period in seconds. If None, the dt of the simulation will be used.

synapse : Synapse, optional (Default: None)

A synaptic model to filter the probed signal.

solver : Solver, optional (Default: ConnectionDefault)

Solver to compute decoders for probes that require them.

label : str, optional (Default: None)

A name for the probe. Used for debugging and visualization.

seed : int, optional (Default: None)

The seed used for random number generation.

Attributes:
attr : str or None

The signal that will be probed. If None, the first element of the target’s probeable list will be used.

sample_every : float or None

Sampling period in seconds. If None, the dt of the simulation will be used.

solver : Solver or None

Solver to compute decoders. Only used for probes of an ensemble’s decoded output.

synapse : Synapse or None

A synaptic model to filter the probed signal.

target : Ensemble, Neurons, Node, or Connection

The object to probe.

obj

(Nengo object) The underlying Nengo object target.

size_in

(int) Dimensionality of the probed signal.

size_out

(int) Cannot connect from probes, so always 0.

slice

(slice) The slice associated with the Nengo object target.

Distributions

nengo.dists.Distribution A base class for probability distributions.
nengo.dists.get_samples Convenience function to sample a distribution or return samples.
nengo.dists.Uniform A uniform distribution.
nengo.dists.Gaussian A Gaussian distribution.
nengo.dists.Exponential An exponential distribution (optionally with high values clipped).
nengo.dists.UniformHypersphere Uniform distribution on or in an n-dimensional unit hypersphere.
nengo.dists.Choice Discrete distribution across a set of possible values.
nengo.dists.Samples A set of samples.
nengo.dists.PDF An arbitrary distribution from a PDF.
nengo.dists.SqrtBeta Distribution of the square root of a Beta distributed random variable.
nengo.dists.SubvectorLength Distribution of the length of a subvector of a unit vector.
nengo.dists.CosineSimilarity Distribution of the cosine of the angle between two random vectors.
class nengo.dists.Distribution[source]

A base class for probability distributions.

The only thing that a probability distribution needs to define is a sample method. This base class ensures that all distributions accept the same arguments for the sample function.

sample(n, d=None, rng=np.random)[source]

Samples the distribution.

Parameters:
n : int

Number of samples to take.

d : int or None, optional (Default: None)

The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,).

rng : numpy.random.RandomState, optional

Random number generator state.

Returns:
samples : (n,) or (n, d) array_like

Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.
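
For example (a sketch using the Uniform distribution documented below):

import numpy as np
import nengo

rng = np.random.RandomState(seed=0)
dist = nengo.dists.Uniform(-1, 1)

print(dist.sample(n=5, rng=rng).shape)       # (5,)
print(dist.sample(n=5, d=3, rng=rng).shape)  # (5, 3)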

nengo.dists.get_samples(dist_or_samples, n, d=None, rng=np.random)[source]

Convenience function to sample a distribution or return samples.

Use this function in situations where you accept an argument that could be a distribution, or could be an array_like of samples.

Parameters:
dist_or_samples : Distribution or (n, d) array_like

Source of the samples to be returned.

n : int

Number of samples to take.

d : int or None, optional (Default: None)

The number of dimensions to return.

rng : RandomState, optional (Default: np.random)

Random number generator.

Returns:
samples : (n, d) array_like

Examples

>>> def mean(values, n=100):
...     samples = get_samples(values, n=n)
...     return np.mean(samples)
>>> mean([1, 2, 3, 4])
2.5
>>> mean(nengo.dists.Gaussian(0, 1))
0.057277898442269548
class nengo.dists.Uniform(low, high, integer=False)[source]

A uniform distribution.

It’s equally likely to get any scalar between low and high.

Note that the order of low and high doesn't matter; if low > high this will still work, and low will still be a closed interval while high is open.

Parameters:
low : Number

The closed lower bound of the uniform distribution; samples >= low

high : Number

The open upper bound of the uniform distribution; samples < high

integer : boolean, optional (Default: False)

If true, sample from a uniform distribution of integers. In this case, low and high should be integers.

class nengo.dists.Gaussian(mean, std)[source]

A Gaussian distribution.

This represents a bell-curve centred at mean and with spread represented by the standard deviation, std.

Parameters:
mean : Number

The mean of the Gaussian.

std : Number

The standard deviation of the Gaussian.

Raises:
ValidationError : if std <= 0.
class nengo.dists.Exponential(scale, shift=0.0, high=inf)[source]

An exponential distribution (optionally with high values clipped).

If high is left to its default value of infinity, this is a standard exponential distribution. If high is set, then any sampled values at or above high will be clipped so they are slightly below high. This is useful for thresholding and, by extension, networks.AssociativeMemory.

The probability distribution function (PDF) is given by:

       |  0                                 if x < shift
p(x) = | 1/scale * exp(-(x - shift)/scale)  if x >= shift and x < high
       |  n                                 if x == high - eps
       |  0                                 if x >= high

where n is such that the PDF integrates to one, and eps is an infinitesimally small number such that samples of x are strictly less than high (in practice, eps depends on the floating point precision).

Parameters:
scale : float

The scale parameter (inverse of the rate parameter lambda). Smaller values make the distribution narrower (sharper peak near shift); larger values spread it out.

shift : float, optional (Default: 0)

Amount to shift the distribution by. There will be no values smaller than this shift when sampling from the distribution.

high : float, optional (Default: np.inf)

All values larger than or equal to this value will be clipped to slightly less than this value.
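
As a sketch of the thresholding use case mentioned above (the scale, shift, and ensemble parameters are arbitrary), an ensemble can draw its intercepts from a clipped exponential so that most neurons only respond above a threshold:

import nengo

with nengo.Network():
    ens = nengo.Ensemble(
        50, dimensions=1,
        intercepts=nengo.dists.Exponential(scale=0.15, shift=0.3, high=1.0),
    )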

class nengo.dists.UniformHypersphere(surface=False, min_magnitude=0)[source]

Uniform distribution on or in an n-dimensional unit hypersphere.

Sample points are uniformly distributed across the volume (default) or surface of an n-dimensional unit hypersphere.

Parameters:
surface : bool, optional (Default: False)

Whether sample points should be distributed uniformly over the surface of the hypersphere (True), or within the hypersphere (False).

min_magnitude : Number, optional (Default: 0)

Lower bound on the returned vector magnitudes (such that they are in the range [min_magnitude, 1]). Must be in the range [0, 1). Ignored if surface is True.

class nengo.dists.Choice(options, weights=None)[source]

Discrete distribution across a set of possible values.

The same as numpy.random.choice, except that it can take vector or matrix values for the choices.

Parameters:
options : (N, …) array_like

The options (choices) to choose between. The choice is always done along the first axis, so if options is a matrix, the options are the rows of that matrix.

weights : (N,) array_like, optional (Default: None)

Weights controlling the probability of selecting each option. Will automatically be normalized. If None, weights will be uniformly distributed.

class nengo.dists.Samples(samples)[source]

A set of samples.

This class is a subclass of Distribution so that it can be used in any situation that calls for a Distribution. However, the call to sample must match the dimensions of the samples or a ValidationError will be raised.

Parameters:
samples : (n, d) array_like

n and d must match what is eventually passed to sample.

class nengo.dists.PDF(x, p)[source]

An arbitrary distribution from a PDF.

Parameters:
x : vector_like (n,)

Values of the points to sample from (interpolated).

p : vector_like (n,)

Probabilities of the x points.

class nengo.dists.SqrtBeta(n, m=1)[source]

Distribution of the square root of a Beta distributed random variable.

Given n + m dimensional random unit vectors, the length of subvectors with m elements will be distributed according to this distribution.

Parameters:
n : int

Number of subvectors.

m : int, optional (Default: 1)

Length of each subvector.

cdf(x)[source]

Cumulative distribution function.

Note

Requires SciPy.

Parameters:
x : array_like

Evaluation points in [0, 1].

Returns:
cdf : array_like

Probability that X <= x.

pdf(x)[source]

Probability distribution function.

Note

Requires SciPy.

Parameters:
x : array_like

Evaluation points in [0, 1].

Returns:
pdf : array_like

Probability density at x.

ppf(y)[source]

Percent point function (inverse cumulative distribution).

Note

Requires SciPy.

Parameters:
y : array_like

Cumulative probabilities in [0, 1].

Returns:
ppf : array_like

Evaluation points x in [0, 1] such that P(X <= x) = y.

class nengo.dists.SubvectorLength(dimensions, subdimensions=1)[source]

Distribution of the length of a subvector of a unit vector.

Parameters:
dimensions : int

Dimensionality of the complete unit vector.

subdimensions : int, optional (Default: 1)

Dimensionality of the subvector.

class nengo.dists.CosineSimilarity(dimensions)[source]

Distribution of the cosine of the angle between two random vectors.

The “cosine similarity” is the cosine of the angle between two vectors, which is equal to the dot product of the vectors, divided by the L2-norms of the individual vectors. When these vectors are unit length, this is then simply the distribution of their dot product.

This is also equivalent to the distribution of a single coefficient from a unit vector (a single dimension of UniformHypersphere(surface=True)). Furthermore, CosineSimilarity(d+2) is equivalent to the distribution of a single coordinate from points uniformly sampled from the d-dimensional unit ball (a single dimension of UniformHypersphere(surface=False).sample(n, d)). These relationships have been detailed in [Voelker2017].

This can be used to calculate an intercept c = ppf(1 - p) such that dot(u, v) >= c with probability p, for random unit vectors u and v. In other words, a neuron with intercept ppf(1 - p) will fire with probability p for a random unit length input.

[Voelker2017] Aaron R. Voelker, Jan Gosmann, and Terrence C. Stewart. Efficiently sampling vectors and coordinates from the n-sphere and n-ball. Technical Report, Centre for Theoretical Neuroscience, Waterloo, ON, 2017.
Parameters:
dimensions : int

Dimensionality of the complete unit vector.
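
As a sketch of the intercept calculation described above (requires SciPy for ppf; the dimensionality and probability are arbitrary):

import nengo

d = 64   # dimensionality of the represented space
p = 0.1  # desired probability that a neuron fires for a random unit input

# Intercept c such that dot(u, v) >= c with probability p
c = nengo.dists.CosineSimilarity(d).ppf(1 - p)

with nengo.Network():
    ens = nengo.Ensemble(100, dimensions=d, intercepts=nengo.dists.Choice([c]))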

Neuron types

nengo.neurons.NeuronType Base class for Nengo neuron models.
nengo.Direct Signifies that an ensemble should simulate in direct mode.
nengo.RectifiedLinear A rectified linear neuron model.
nengo.SpikingRectifiedLinear A rectified integrate and fire neuron model.
nengo.Sigmoid A neuron model whose response curve is a sigmoid.
nengo.LIF Spiking version of the leaky integrate-and-fire (LIF) neuron model.
nengo.LIFRate Non-spiking version of the leaky integrate-and-fire (LIF) neuron model.
nengo.AdaptiveLIF Adaptive spiking version of the LIF neuron model.
nengo.AdaptiveLIFRate Adaptive non-spiking version of the LIF neuron model.
nengo.Izhikevich Izhikevich neuron model.
class nengo.neurons.NeuronType[source]

Base class for Nengo neuron models.

Attributes:
probeable : tuple

Signals that can be probed in the neuron population.

current(x, gain, bias)[source]

Compute current injected in each neuron given input, gain and bias.

Parameters:
x : (n_neurons,) array_like

Vector-space input.

gain : (n_neurons,) array_like

Gains associated with each neuron.

bias : (n_neurons,) array_like

Bias current associated with each neuron.

gain_bias(max_rates, intercepts)[source]

Compute the gain and bias needed to satisfy max_rates, intercepts.

This takes the neurons, approximates their response function, and then uses that approximation to find the gain and bias value that will give the requested intercepts and max_rates.

Note that this default implementation is very slow! Whenever possible, subclasses should override this with a neuron-specific implementation.

Parameters:
max_rates : (n_neurons,) array_like

Maximum firing rates of neurons.

intercepts : (n_neurons,) array_like

X-intercepts of neurons.

Returns:
gain : (n_neurons,) array_like

Gain associated with each neuron. Sometimes denoted alpha.

bias : (n_neurons,) array_like

Bias current associated with each neuron.

max_rates_intercepts(gain, bias)[source]

Compute the max_rates and intercepts given gain and bias.

Note that this default implementation is very slow! Whenever possible, subclasses should override this with a neuron-specific implementation.

Parameters:
gain : (n_neurons,) array_like

Gain associated with each neuron. Sometimes denoted alpha.

bias : (n_neurons,) array_like

Bias current associated with each neuron.

Returns:
max_rates : (n_neurons,) array_like

Maximum firing rates of neurons.

intercepts : (n_neurons,) array_like

X-intercepts of neurons.

rates(x, gain, bias)[source]

Compute firing rates (in Hz) for given vector input, x.

This default implementation takes the naive approach of running the step function for a second. This should suffice for most rate-based neuron types; for spiking neurons it will likely fail (those models should override this function).

Parameters:
x : (n_neurons,) array_like

Vector-space input.

gain : (n_neurons,) array_like

Gains associated with each neuron.

bias : (n_neurons,) array_like

Bias current associated with each neuron.

Returns:
rates : (n_neurons,) ndarray

The firing rates at each given value of x.

step_math(dt, J, output)[source]

Implements the differential equation for this neuron type.

At a minimum, NeuronType subclasses must implement this method. That implementation should modify the output parameter rather than returning anything, for efficiency reasons.

Parameters:
dt : float

Simulation timestep.

J : (n_neurons,) array_like

Input currents associated with each neuron.

output : (n_neurons,) array_like

Output activities associated with each neuron.
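
For illustration, a minimal custom rate model might override step_math as follows (a hypothetical sketch, not a built-in type; it relies on the slow default gain_bias and rates implementations described above):

import numpy as np
import nengo

class SquaredRectified(nengo.neurons.NeuronType):
    """Toy rate model: output is the squared rectified input current."""

    def step_math(self, dt, J, output):
        # Write the result into `output` in place rather than returning it
        output[...] = np.square(np.maximum(J, 0.0))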

class nengo.Direct[source]

Signifies that an ensemble should simulate in direct mode.

In direct mode, the ensemble represents and transforms signals perfectly, rather than through a neural approximation. Note that direct mode ensembles with recurrent connections can easily diverge; most other neuron types will instead saturate at a certain high firing rate.

gain_bias(max_rates, intercepts)[source]

Always returns None, None.

max_rates_intercepts(gain, bias)[source]

Always returns None, None.

rates(x, gain, bias)[source]

Always returns x.

step_math(dt, J, output)[source]

Raises an error if called.

Rather than calling this function, the simulator will detect that the ensemble is in direct mode, and bypass the neural approximation.

class nengo.RectifiedLinear(amplitude=1)[source]

A rectified linear neuron model.

Each neuron is modeled as a rectified line. That is, the neuron’s activity scales linearly with current, unless it passes below zero, at which point the neural activity will stay at zero.

Parameters:
amplitude : float

Scaling factor on the neuron output. Corresponds to the relative amplitude of the output of the neuron.

gain_bias(max_rates, intercepts)[source]

Determine gain and bias by shifting and scaling the lines.

max_rates_intercepts(gain, bias)[source]

Compute the inverse of gain_bias.

step_math(dt, J, output)[source]

Implement the rectification nonlinearity.

class nengo.SpikingRectifiedLinear(amplitude=1)[source]

A rectified integrate and fire neuron model.

Each neuron is modeled as a rectified line. That is, the neuron’s activity scales linearly with current, unless the current is less than zero, at which point the neural activity will stay at zero. This is a spiking version of the RectifiedLinear neuron model.

Parameters:
amplitude : float

Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.

rates(x, gain, bias)[source]

Use RectifiedLinear to determine rates.

step_math(dt, J, spiked, voltage)[source]

Implement the integrate and fire nonlinearity.

class nengo.Sigmoid(tau_ref=0.0025)[source]

A neuron model whose response curve is a sigmoid.

Since the tuning curves are strictly positive, the intercepts correspond to the inflection point of each sigmoid. That is, f(intercept) = 0.5 where f is the pure sigmoid function.

gain_bias(max_rates, intercepts)[source]

Analytically determine gain, bias.

max_rates_intercepts(gain, bias)[source]

Compute the inverse of gain_bias.

step_math(dt, J, output)[source]

Implement the sigmoid nonlinearity.

class nengo.LIF(tau_rc=0.02, tau_ref=0.002, min_voltage=0, amplitude=1)[source]

Spiking version of the leaky integrate-and-fire (LIF) neuron model.

Parameters:
tau_rc : float

Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).

tau_ref : float

Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.

min_voltage : float

Minimum value for the membrane voltage. If -np.inf, the voltage is never clipped.

amplitude : float

Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.

class nengo.LIFRate(tau_rc=0.02, tau_ref=0.002, amplitude=1)[source]

Non-spiking version of the leaky integrate-and-fire (LIF) neuron model.

Parameters:
tau_rc : float

Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).

tau_ref : float

Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.

amplitude : float

Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.

gain_bias(max_rates, intercepts)[source]

Analytically determine gain, bias.

max_rates_intercepts(gain, bias)[source]

Compute the inverse of gain_bias.

rates(x, gain, bias)[source]

Always use LIFRate to determine rates.

step_math(dt, J, output)[source]

Implement the LIFRate nonlinearity.

class nengo.AdaptiveLIF(tau_n=1, inc_n=0.01, **lif_args)[source]

Adaptive spiking version of the LIF neuron model.

Works like the LIF model, except with an adaptation state n, which is subtracted from the input current. Its dynamics are:

tau_n dn/dt = -n

where n is incremented by inc_n when the neuron spikes.

Parameters:
tau_n : float

Adaptation time constant. Affects how quickly the adaptation state decays to zero in the absence of spikes (larger = slower decay).

inc_n : float

Adaptation increment. How much the adaptation state is increased after each spike.

tau_rc : float

Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).

tau_ref : float

Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.

References

[1] Koch, Christof. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, 1999. p. 339.
step_math(dt, J, output, voltage, ref, adaptation)[source]

Implement the AdaptiveLIF nonlinearity.

class nengo.AdaptiveLIFRate(tau_n=1, inc_n=0.01, **lif_args)[source]

Adaptive non-spiking version of the LIF neuron model.

Works like the LIF model, except with an adaptation state n, which is subtracted from the input current. Its dynamics are:

tau_n dn/dt = -n

where n is incremented by inc_n when the neuron spikes.

Parameters:
tau_n : float

Adaptation time constant. Affects how quickly the adaptation state decays to zero in the absence of spikes (larger = slower decay).

inc_n : float

Adaptation increment. How much the adaptation state is increased after each spike.

tau_rc : float

Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).

tau_ref : float

Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.

References

[1] Koch, Christof. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, 1999. p. 339.
step_math(dt, J, output, adaptation)[source]

Implement the AdaptiveLIFRate nonlinearity.

class nengo.Izhikevich(tau_recovery=0.02, coupling=0.2, reset_voltage=-65.0, reset_recovery=8.0)[source]

Izhikevich neuron model.

This implementation is based on the original paper [1]; however, we rename some variables for clarity. What was originally ‘v’ we term ‘voltage’, which represents the membrane potential of each neuron. What was originally ‘u’ we term ‘recovery’, which represents membrane recovery, “which accounts for the activation of K+ ionic currents and inactivation of Na+ ionic currents.” The ‘a’, ‘b’, ‘c’, and ‘d’ parameters are also renamed (see the parameters below).

We use default values that correspond to regular spiking (‘RS’) neurons. For other classes of neurons, set the parameters as follows.

  • Intrinsically bursting (IB): reset_voltage=-55, reset_recovery=4
  • Chattering (CH): reset_voltage=-50, reset_recovery=2
  • Fast spiking (FS): tau_recovery=0.1
  • Low-threshold spiking (LTS): coupling=0.25
  • Resonator (RZ): tau_recovery=0.1, coupling=0.26
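
For example (a sketch), a fast-spiking population can be created by passing the neuron model to an ensemble:

import nengo

with nengo.Network():
    fs = nengo.Ensemble(50, dimensions=1,
                        neuron_type=nengo.Izhikevich(tau_recovery=0.1))
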
Parameters:
tau_recovery : float, optional (Default: 0.02)

(Originally ‘a’) Time scale of the recovery variable.

coupling : float, optional (Default: 0.2)

(Originally ‘b’) How sensitive recovery is to subthreshold fluctuations of voltage.

reset_voltage : float, optional (Default: -65.)

(Originally ‘c’) The voltage to reset to after a spike, in millivolts.

reset_recovery : float, optional (Default: 8.)

(Originally ‘d’) The recovery value to reset to after a spike.

References

[1] E. M. Izhikevich, “Simple model of spiking neurons.” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569-1572. (http://www.izhikevich.org/publications/spikes.pdf)
rates(x, gain, bias)[source]

Estimates steady-state firing rate given gain and bias.

Uses the settled_firingrate helper function.

step_math(dt, J, spiked, voltage, recovery)[source]

Implement the Izhikevich nonlinearity.

Learning rule types

nengo.learning_rules.LearningRuleType Base class for all learning rule objects.
nengo.PES Prescribed Error Sensitivity learning rule.
nengo.BCM Bienenstock-Cooper-Munro learning rule.
nengo.Oja Oja learning rule.
nengo.Voja Vector Oja learning rule.
class nengo.learning_rules.LearningRuleType(learning_rate=Default, size_in=0)[source]

Base class for all learning rule objects.

To use a learning rule, pass it as a learning_rule_type keyword argument to the Connection on which you want to do learning.

Each learning rule exposes two important pieces of metadata that the builder uses to determine what information should be stored.

The size_in is the dimensionality of the incoming error signal. It can be either an integer or one of the following string values:

  • 'pre': vector error signal in pre-object space
  • 'post': vector error signal in post-object space
  • 'mid': vector error signal in the conn.size_mid space
  • 'pre_state': vector error signal in pre-synaptic ensemble space
  • 'post_state': vector error signal in post-synaptic ensemble space

The difference between 'post_state' and 'post' is that with the former, if a Neurons object is passed, it will use the dimensionality of the corresponding Ensemble, whereas the latter simply uses the post object size_in. Similarly with 'pre_state' and 'pre'.

The modifies attribute denotes the signal targeted by the rule. Options are:

  • 'encoders'
  • 'decoders'
  • 'weights'
Parameters:
learning_rate : float, optional (Default: 1e-6)

A scalar indicating the rate at which modifies will be adjusted.

size_in : int, str, optional (Default: 0)

Dimensionality of the error signal (see above).

Attributes:
learning_rate : float

A scalar indicating the rate at which modifies will be adjusted.

size_in : int, str

Dimensionality of the error signal.

modifies : str

The signal targeted by the learning rule.

class nengo.PES(learning_rate=Default, pre_synapse=Default, pre_tau=Unconfigurable)[source]

Prescribed Error Sensitivity learning rule.

Modifies a connection’s decoders to minimize an error signal provided through a connection to the connection’s learning rule.

Parameters:
learning_rate : float, optional (Default: 1e-4)

A scalar indicating the rate at which weights will be adjusted.

pre_synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=0.005))

Synapse model used to filter the pre-synaptic activities.

Attributes:
learning_rate : float

A scalar indicating the rate at which weights will be adjusted.

pre_synapse : Synapse

Synapse model used to filter the pre-synaptic activities.
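
As a usage sketch (the object names and ensemble sizes are arbitrary), the error signal is provided through a connection to the connection's learning rule:

import nengo

with nengo.Network():
    pre = nengo.Ensemble(60, dimensions=1)
    post = nengo.Ensemble(60, dimensions=1)
    error = nengo.Ensemble(60, dimensions=1)

    # Learned connection; starts out computing the zero function
    conn = nengo.Connection(pre, post, function=lambda x: [0],
                            learning_rule_type=nengo.PES())

    # error = actual (post) - target (pre)
    nengo.Connection(post, error)
    nengo.Connection(pre, error, transform=-1)

    # Feed the error into the learning rule to drive the decoder updates
    nengo.Connection(error, conn.learning_rule)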

class nengo.BCM(learning_rate=Default, pre_synapse=Default, post_synapse=Default, theta_synapse=Default, pre_tau=Unconfigurable, post_tau=Unconfigurable, theta_tau=Unconfigurable)[source]

Bienenstock-Cooper-Munro learning rule.

Modifies connection weights as a function of the presynaptic activity and the difference between the postsynaptic activity and the average postsynaptic activity.

Parameters:
learning_rate : float, optional (Default: 1e-9)

A scalar indicating the rate at which weights will be adjusted.

pre_synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=0.005))

Synapse model used to filter the pre-synaptic activities.

post_synapse : Synapse, optional (Default: None)

Synapse model used to filter the post-synaptic activities. If None, post_synapse will be the same as pre_synapse.

theta_synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=1.0))

Synapse model used to filter the theta signal.

Notes

The BCM rule is dependent on pre and post neural activities, not decoded values, and so is not affected by changes in the size of pre and post ensembles. However, if you are decoding from the post ensemble, the BCM rule will have an increased effect on larger post ensembles because more connection weights are changing. In these cases, it may be advantageous to scale the learning rate on the BCM rule by 1 / post.n_neurons.

Attributes:
learning_rate : float

A scalar indicating the rate at which weights will be adjusted.

post_synapse : Synapse

Synapse model used to filter the post-synaptic activities.

pre_synapse : Synapse

Synapse model used to filter the pre-synaptic activities.

theta_synapse : Synapse

Synapse model used to filter the theta signal.

class nengo.Oja(learning_rate=Default, pre_synapse=Default, post_synapse=Default, beta=Default, pre_tau=Unconfigurable, post_tau=Unconfigurable)[source]

Oja learning rule.

Modifies connection weights according to the Hebbian Oja rule, which augments typical Hebbian coactivity with a “forgetting” term that is proportional to the weight of the connection and the square of the postsynaptic activity.

Parameters:
learning_rate : float, optional (Default: 1e-6)

A scalar indicating the rate at which weights will be adjusted.

pre_synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=0.005))

Synapse model used to filter the pre-synaptic activities.

post_synapse : Synapse, optional (Default: None)

Synapse model used to filter the post-synaptic activities. If None, post_synapse will be the same as pre_synapse.

beta : float, optional (Default: 1.0)

A scalar weight on the forgetting term.

Notes

The Oja rule is dependent on pre and post neural activities, not decoded values, and so is not affected by changes in the size of pre and post ensembles. However, if you are decoding from the post ensemble, the Oja rule will have an increased effect on larger post ensembles because more connection weights are changing. In these cases, it may be advantageous to scale the learning rate on the Oja rule by 1 / post.n_neurons.

Attributes:
beta : float

A scalar weight on the forgetting term.

learning_rate : float

A scalar indicating the rate at which weights will be adjusted.

post_synapse : Synapse

Synapse model used to filter the post-synaptic activities.

pre_synapse : Synapse

Synapse model used to filter the pre-synaptic activities.

class nengo.Voja(learning_rate=Default, post_synapse=Default, post_tau=Unconfigurable)[source]

Vector Oja learning rule.

Modifies an ensemble’s encoders to be selective to its inputs.

A connection to the learning rule will provide a scalar weight for the learning rate, minus 1. For instance, 0 is normal learning, -1 is no learning, and less than -1 causes anti-learning or “forgetting”.

Parameters:
learning_rate : float, optional (Default: 1e-2)

A scalar indicating the rate at which encoders will be adjusted.

post_synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=0.005))

Synapse model used to filter the post-synaptic activities.

Attributes:
learning_rate : float

A scalar indicating the rate at which encoders will be adjusted.

post_synapse : Synapse

Synapse model used to filter the post-synaptic activities.

Processes

nengo.Process A general system with input, output, and state.
nengo.processes.PresentInput Present a series of inputs, each for the same fixed length of time.
nengo.processes.FilteredNoise Filtered white noise process.
nengo.processes.BrownNoise Brown noise process (aka Brownian noise, red noise, Wiener process).
nengo.processes.WhiteNoise Full-spectrum white noise process.
nengo.processes.WhiteSignal An ideal low-pass filtered white noise process.
nengo.processes.Piecewise A piecewise function with different options for interpolation.
class nengo.Process(default_size_in=0, default_size_out=1, default_dt=0.001, seed=None)[source]

A general system with input, output, and state.

For more details on how to use processes and make custom process subclasses, see Processes and how to use them.

Parameters:
default_size_in : int (Default: 0)

Sets the default size in for nodes using this process.

default_size_out : int (Default: 1)

Sets the default size out for nodes running this process. Also, if d is not specified in run or run_steps, this will be used.

default_dt : float (Default: 0.001 (1 millisecond))

If dt is not specified in run, run_steps, ntrange, or trange, this will be used.

seed : int, optional (Default: None)

Random number seed. Ensures random factors will be the same each run.

Attributes:
default_dt : float

If dt is not specified in run, run_steps, ntrange, or trange, this will be used.

default_size_in : int

The default size in for nodes using this process.

default_size_out : int

The default size out for nodes running this process. Also, if d is not specified in run or run_steps, this will be used.

seed : int or None

Random number seed. Ensures random factors will be the same each run.

apply(x, d=None, dt=None, rng=np.random, copy=True, **kwargs)[source]

Run process on a given input.

Keyword arguments that do not appear in the parameter list below will be passed to the make_step function of this process.

Parameters:
x : ndarray

The input signal given to the process.

d : int, optional (Default: None)

Output dimensionality. If None, default_size_out will be used.

dt : float, optional (Default: None)

Simulation timestep. If None, default_dt will be used.

rng : numpy.random.RandomState (Default: numpy.random)

Random number generator used for stochastic processes.

copy : bool, optional (Default: True)

If True, a new output array will be created for output. If False, the input signal x will be overwritten.

get_rng(rng)[source]

Get a properly seeded independent RNG for the process step.

Parameters:
rng : numpy.random.RandomState

The parent random number generator to use if the seed is not set.

make_step(shape_in, shape_out, dt, rng)[source]

Create function that advances the process forward one time step.

This must be implemented by all custom processes.

Parameters:
shape_in : tuple

The shape of the input signal.

shape_out : tuple

The shape of the output signal.

dt : float

The simulation timestep.

rng : numpy.random.RandomState

A random number generator.
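
For illustration, a minimal custom process (a hypothetical sketch, not a built-in one) that takes no input and outputs a decaying exponential of time might look like:

import numpy as np
import nengo

class Decay(nengo.Process):
    """Toy process with no input: outputs exp(-t / 0.1)."""

    def make_step(self, shape_in, shape_out, dt, rng):
        def step(t):
            return np.exp(-t / 0.1) * np.ones(shape_out)
        return step

process = Decay()
y = process.run(1.0)  # shape (1000, 1) with the defaults dt=0.001, size_out=1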

run(t, d=None, dt=None, rng=np.random, **kwargs)[source]

Run process without input for given length of time.

Keyword arguments that do not appear in the parameter list below will be passed to the make_step function of this process.

Parameters:
t : float

The length of time to run.

d : int, optional (Default: None)

Output dimensionality. If None, default_size_out will be used.

dt : float, optional (Default: None)

Simulation timestep. If None, default_dt will be used.

rng : numpy.random.RandomState (Default: numpy.random)

Random number generator used for stochastic processes.

run_steps(n_steps, d=None, dt=None, rng=np.random, **kwargs)[source]

Run process without input for given number of steps.

Keyword arguments that do not appear in the parameter list below will be passed to the make_step function of this process.

Parameters:
n_steps : int

The number of steps to run.

d : int, optional (Default: None)

Output dimensionality. If None, default_size_out will be used.

dt : float, optional (Default: None)

Simulation timestep. If None, default_dt will be used.

rng : numpy.random.RandomState (Default: numpy.random)

Random number generator used for stochastic processes.

ntrange(n_steps, dt=None)[source]

Create time points corresponding to a given number of steps.

Parameters:
n_steps : int

The given number of steps.

dt : float, optional (Default: None)

Simulation timestep. If None, default_dt will be used.

trange(t, dt=None)[source]

Create time points corresponding to a given length of time.

Parameters:
t : float

The given length of time.

dt : float, optional (Default: None)

Simulation timestep. If None, default_dt will be used.

class nengo.processes.PresentInput(inputs, presentation_time, **kwargs)[source]

Present a series of inputs, each for the same fixed length of time.

Parameters:
inputs : array_like

Inputs to present, where each row is an input. Rows will be flattened.

presentation_time : float

Show each input for this amount of time (in seconds).
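
A usage sketch (the input values and presentation time are arbitrary):

import nengo

# Cycle through three 2-D inputs, showing each for 100 ms
process = nengo.processes.PresentInput(
    inputs=[[0, 0], [0, 1], [1, 0]], presentation_time=0.1)

with nengo.Network():
    u = nengo.Node(process)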

class nengo.processes.FilteredNoise(synapse=Lowpass(0.005), dist=Gaussian(mean=0, std=1), scale=True, synapse_kwargs=None, **kwargs)[source]

Filtered white noise process.

This process takes white noise and filters it using the provided synapse.

Parameters:
synapse : Synapse, optional (Default: Lowpass(tau=0.005))

The synapse to use to filter the noise.

dist : Distribution, optional (Default: Gaussian(mean=0, std=1))

The distribution used to generate the white noise.

scale : bool, optional (Default: True)

Whether to scale the white noise for integration, making the output signal invariant to dt.

synapse_kwargs : dict, optional (Default: None)

Arguments to pass to synapse.make_step.

seed : int, optional (Default: None)

Random number seed. Ensures noise will be the same each run.

class nengo.processes.BrownNoise(dist=Gaussian(mean=0, std=1), **kwargs)[source]

Brown noise process (aka Brownian noise, red noise, Wiener process).

This process is the integral of white noise.

Parameters:
dist : Distribution, optional (Default: Gaussian(mean=0, std=1))

The distribution used to generate the white noise.

seed : int, optional (Default: None)

Random number seed. Ensures noise will be the same each run.

class nengo.processes.WhiteNoise(dist=Gaussian(mean=0, std=1), scale=True, **kwargs)[source]

Full-spectrum white noise process.

Parameters:
dist : Distribution, optional (Default: Gaussian(mean=0, std=1))

The distribution from which to draw samples.

scale : bool, optional (Default: True)

Whether to scale the white noise for integration. Integrating white noise requires using a time constant of sqrt(dt) instead of dt on the noise term [1], to ensure the magnitude of the integrated noise does not change with dt.

seed : int, optional (Default: None)

Random number seed. Ensures noise will be the same each run.

References

[1] Gillespie, D.T. (1996). Exact numerical simulation of the Ornstein-Uhlenbeck process and its integral. Phys. Rev. E 54, pp. 2084-91.
class nengo.processes.WhiteSignal(period, high, rms=0.5, y0=None, **kwargs)[source]

An ideal low-pass filtered white noise process.

This signal is created in the frequency domain, and designed to have exactly equal power at all frequencies below the cut-off frequency, and no power above the cut-off.

The signal is naturally periodic, so it can be used beyond its period while still being continuous with continuous derivatives.

Parameters:
period : float

A white noise signal with this period will be generated. Samples will repeat after this duration.

high : float

The cut-off frequency of the low-pass filter, in Hz. Must not exceed the Nyquist frequency for the simulation timestep, which is 0.5 / dt.

rms : float, optional (Default: 0.5)

The root mean square power of the filtered signal.

y0 : float, optional (Default: None)

Align the phase of each output dimension to begin at the value that is closest (in absolute value) to y0.

seed : int, optional (Default: None)

Random number seed. Ensures noise will be the same each run.
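
A usage sketch (the period, cutoff frequency, and dimensionality are arbitrary):

import nengo

with nengo.Network():
    # 2-D band-limited noise: 10 Hz cutoff, repeating every 5 seconds
    u = nengo.Node(nengo.processes.WhiteSignal(period=5.0, high=10), size_out=2)
    up = nengo.Probe(u)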

class nengo.processes.Piecewise(data, interpolation='zero', **kwargs)[source]

A piecewise function with different options for interpolation.

Given an input dictionary of {0: 0, 0.5: -1, 0.75: 0.5, 1: 0}, this process will emit the numerical values (0, -1, 0.5, 0) starting at the corresponding time points (0, 0.5, 0.75, 1).

The keys in the input dictionary must be times (float or int). The values in the dictionary can be floats, lists of floats, or numpy arrays. All lists or numpy arrays must be of the same length, as the output shape of the process will be determined by the shape of the values.

Interpolation on the data points using scipy.interpolate is also supported. The default interpolation is ‘zero’, which creates a piecewise function whose values change at the specified time points. So the above example would be a shortcut for:

def function(t):
    if t < 0.5:
        return 0
    elif t < 0.75:
        return -1
    elif t < 1:
        return 0.5
    else:
        return 0

For times before the first specified time, an array of zeros (of the correct length) will be emitted. This means that the above can be simplified to:

Piecewise({0.5: -1, 0.75: 0.5, 1: 0})
Parameters:
data : dict

A dictionary mapping times to the values that should be emitted at those times. Times must be numbers (ints or floats), while values can be numbers, lists of numbers, numpy arrays of numbers, or callables that return any of those options.

interpolation : str, optional (Default: ‘zero’)

One of ‘linear’, ‘nearest’, ‘slinear’, ‘quadratic’, ‘cubic’, or ‘zero’. Specifies how to interpolate between times with specified value. ‘zero’ creates a plain piecewise function whose values begin at corresponding time points, while all other options interpolate as described in scipy.interpolate.

Examples

>>> from nengo.processes import Piecewise
>>> process = Piecewise({0.5: 1, 0.75: -1, 1: 0})
>>> with nengo.Network() as model:
...     u = nengo.Node(process, size_out=process.default_size_out)
...     up = nengo.Probe(u)
>>> with nengo.Simulator(model) as sim:
...     sim.run(1.5)
>>> f = sim.data[up]
>>> t = sim.trange()
>>> f[t == 0.2]
array([[ 0.]])
>>> f[t == 0.58]
array([[ 1.]])
Attributes:
data : dict

A dictionary mapping times to the values that should be emitted at those times. Times are numbers (ints or floats), while values can be numbers, lists of numbers, numpy arrays of numbers, or callables that return any of those options.

interpolation : str

One of ‘linear’, ‘nearest’, ‘slinear’, ‘quadratic’, ‘cubic’, or ‘zero’. Specifies how to interpolate between times with specified value. ‘zero’ creates a plain piecewise function whose values change at corresponding time points, while all other options interpolate as described in scipy.interpolate.

Synapse models

nengo.synapses.Synapse Abstract base class for synapse models.
nengo.synapses.filt Filter signal with synapse.
nengo.synapses.filtfilt Zero-phase filtering of signal using the synapse filter.
nengo.LinearFilter General linear time-invariant (LTI) system synapse.
nengo.Lowpass Standard first-order lowpass filter synapse.
nengo.Alpha Alpha-function filter synapse.
nengo.synapses.Triangle Triangular finite impulse response (FIR) synapse.
class nengo.synapses.Synapse(default_size_in=1, default_size_out=None, default_dt=0.001, seed=None)[source]

Abstract base class for synapse models.

Conceptually, a synapse model emulates a biological synapse, taking in input in the form of released neurotransmitter and opening ion channels to allow more or less current to flow into the neuron.

In Nengo, a synapse is implemented as a special case of a Process in which the input and output shapes are the same. The input is the current across the synapse, and the output is the current that will be induced in the postsynaptic neuron.

Synapses also contain the Synapse.filt and Synapse.filtfilt methods, which make it easy to use Nengo’s synapse models outside of Nengo simulations.

Parameters:
default_size_in : int, optional (Default: 1)

The size_in used if not specified.

default_size_out : int (Default: None)

The size_out used if not specified. If None, will be the same as default_size_in.

default_dt : float (Default: 0.001 (1 millisecond))

The simulation timestep used if not specified.

seed : int, optional (Default: None)

Random number seed. Ensures random factors will be the same each run.

Attributes:
default_dt : float (Default: 0.001 (1 millisecond))

The simulation timestep used if not specified.

default_size_in : int (Default: 0)

The size_in used if not specified.

default_size_out : int (Default: 1)

The size_out used if not specified.

seed : int, optional (Default: None)

Random number seed. Ensures random factors will be the same each run.

filt(x, dt=None, axis=0, y0=None, copy=True, filtfilt=False)[source]

Filter x with this synapse model.

Parameters:
x : array_like

The signal to filter.

dt : float, optional (Default: None)

The timestep of the input signal. If None, default_dt will be used.

axis : int, optional (Default: 0)

The axis along which to filter.

y0 : array_like, optional (Default: None)

The starting state of the filter output. If None, the initial value of the input signal along the axis filtered will be used.

copy : bool, optional (Default: True)

Whether to copy the input data, or simply work in-place.

filtfilt : bool, optional (Default: False)

If True, runs the process forward then backward on the signal, for zero-phase filtering (like Matlab’s filtfilt).

filtfilt(x, **kwargs)[source]

Zero-phase filtering of x using this filter.

Equivalent to filt(x, filtfilt=True, **kwargs).
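
As a hedged sketch of offline use (not part of the reference), filt and filtfilt can be applied to plain NumPy arrays outside of a simulation; the synapse, signal, and time constant below are arbitrary examples.

import numpy as np
import nengo

dt = 0.001
t = np.arange(1000) * dt
x = np.sin(2 * np.pi * 2 * t) + 0.1 * np.random.randn(t.size)  # noisy 2 Hz sine

synapse = nengo.Lowpass(0.02)
y = synapse.filt(x, dt=dt)                 # causal filtering (introduces lag)
y_zero_phase = synapse.filtfilt(x, dt=dt)  # forward-backward, no phase lag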

make_step(shape_in, shape_out, dt, rng, y0=None, dtype=<class 'numpy.float64'>)[source]

Create function that advances the synapse forward one time step.

At a minimum, Synapse subclasses must implement this method. That implementation should return a callable that will perform the synaptic filtering operation.

Parameters:
shape_in : tuple

Shape of the input signal to be filtered.

shape_out : tuple

Shape of the output filtered signal.

dt : float

The timestep of the simulation.

rng : numpy.random.RandomState

Random number generator.

y0 : array_like, optional (Default: None)

The starting state of the filter output. If None, each dimension of the state will start at zero.

dtype : numpy.dtype (Default: np.float64)

Type of data used by the synapse model. This is important for ensuring that certain synapses avoid or force integer division.
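
As a minimal, hypothetical sketch of a subclass (not one of the synapses documented here), the following synapse passes its input through unchanged; a real synapse would maintain filter state inside the returned step callable.

import numpy as np
import nengo


class PassthroughSynapse(nengo.synapses.Synapse):
    # Hypothetical synapse that applies no filtering at all.

    def make_step(self, shape_in, shape_out, dt, rng, y0=None, dtype=np.float64):
        def step(t, x):
            # A real implementation would update internal filter state here.
            return x
        return step


# Because filt is built on make_step, the subclass also works offline.
y = PassthroughSynapse().filt(np.ones(10), dt=0.001)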

nengo.synapses.filt(signal, synapse, dt, axis=0, x0=None, copy=True)[source]

Filter signal with synapse.

Note

Deprecated in Nengo 2.1.0. Use Synapse.filt method instead.

nengo.synapses.filtfilt(signal, synapse, dt, axis=0, x0=None, copy=True)[source]

Zero-phase filtering of signal using the synapse filter.

Note

Deprecated in Nengo 2.1.0. Use Synapse.filtfilt method instead.

class nengo.LinearFilter(num, den, analog=True, **kwargs)[source]

General linear time-invariant (LTI) system synapse.

This class can be used to implement any linear filter, given the filter’s transfer function. [1]

Parameters:
num : array_like

Numerator coefficients of transfer function.

den : array_like

Denominator coefficients of transfer function.

analog : boolean, optional (Default: True)

Whether the synapse coefficients are analog (i.e. continuous-time), or discrete. Analog coefficients will be converted to discrete for simulation using the simulator dt.

References

[1] https://en.wikipedia.org/wiki/Filter_%28signal_processing%29
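
For orientation (not from the reference), the analog transfer function is given by its numerator and denominator coefficients; for example, [1] / [tau, 1] describes the first-order lowpass 1 / (tau*s + 1), which behaves like Lowpass(tau) and can be used wherever a synapse is accepted.

import nengo

tau = 0.02
with nengo.Network() as model:
    a = nengo.Ensemble(50, dimensions=1)
    b = nengo.Ensemble(50, dimensions=1)
    # 1 / (tau*s + 1): equivalent to nengo.Lowpass(tau).
    nengo.Connection(a, b, synapse=nengo.LinearFilter([1], [tau, 1]))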
Attributes:
analog : boolean

Whether the synapse coefficients are analog (i.e. continuous-time), or discrete. Analog coefficients will be converted to discrete for simulation using the simulator dt.

den : ndarray

Denominator coefficients of transfer function.

num : ndarray

Numerator coefficients of transfer function.

combine(obj)[source]

Combine in series with another LinearFilter.

evaluate(frequencies)[source]

Evaluate the transfer function at the given frequencies.

Examples

Using the evaluate function to make a Bode plot:

import matplotlib.pyplot as plt
import numpy as np

import nengo

synapse = nengo.synapses.LinearFilter([1], [0.02, 1])
f = np.logspace(-1, 3, 100)
y = synapse.evaluate(f)
plt.subplot(211); plt.semilogx(f, 20 * np.log10(np.abs(y)))
plt.xlabel('frequency [Hz]'); plt.ylabel('magnitude [dB]')
plt.subplot(212); plt.semilogx(f, np.angle(y))
plt.xlabel('frequency [Hz]'); plt.ylabel('phase [radians]')
make_step(shape_in, shape_out, dt, rng, y0=None, dtype=<class 'numpy.float64'>, method='zoh')[source]

Returns a Step instance that implements the linear filter.

class Step(num, den, output)[source]

Abstract base class for LTI filtering step functions.

class NoDen(num, den, output)[source]

An LTI step function for transfer functions with no denominator.

This step function should be much faster than the equivalent general step function.

class Simple(num, den, output, y0=None)[source]

An LTI step function for transfer functions with one num and den.

This step function should be much faster than the equivalent general step function.

class General(num, den, output, y0=None)[source]

An LTI step function for any given transfer function.

Implements a discrete-time LTI system using the difference equation [1] for the given transfer function (num, den).

References

[1] https://en.wikipedia.org/wiki/Digital_filter#Difference_equation
class nengo.Lowpass(tau, **kwargs)[source]

Standard first-order lowpass filter synapse.

The impulse-response function is given by:

f(t) = (1 / tau) * exp(-t / tau)
Parameters:
tau : float

The time constant of the filter in seconds.

Attributes:
tau : float

The time constant of the filter in seconds.

make_step(shape_in, shape_out, dt, rng, y0=None, dtype=<class 'numpy.float64'>, **kwargs)[source]

Returns an optimized LinearFilter.Step subclass.
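
As a brief usage sketch (not part of the reference), Lowpass instances are commonly passed as the synapse argument of connections and probes; the time constants below are arbitrary.

import nengo

with nengo.Network() as model:
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)
    # 5 ms filter on the connection, 10 ms filter to smooth the probed output.
    nengo.Connection(pre, post, synapse=nengo.Lowpass(0.005))
    probe = nengo.Probe(post, synapse=nengo.Lowpass(0.01))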

class nengo.Alpha(tau, **kwargs)[source]

Alpha-function filter synapse.

The impulse-response function is given by:

alpha(t) = (t / tau**2) * exp(-t / tau)

and was found by [1] to be a good basic model for synapses.

Parameters:
tau : float

The time constant of the filter in seconds.

References

[1] Mainen, Z.F. and Sejnowski, T.J. (1995). Reliability of spike timing in neocortical neurons. Science (New York, NY), 268(5216):1503-6.
Attributes:
tau : float

The time constant of the filter in seconds.

make_step(shape_in, shape_out, dt, rng, y0=None, dtype=<class 'numpy.float64'>, **kwargs)[source]

Returns an optimized LinearFilter.Step subclass.
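
As a rough sketch (assumptions: a unit impulse can be approximated by a single sample of height 1/dt, and y0=0 gives a zero initial filter state), the impulse responses above can be inspected numerically with filt.

import numpy as np
import nengo

dt = 0.001
impulse = np.zeros(200)
impulse[0] = 1.0 / dt  # discrete approximation of a unit impulse

# These should approximate (1/tau)*exp(-t/tau) and (t/tau**2)*exp(-t/tau).
lowpass_response = nengo.Lowpass(0.02).filt(impulse, dt=dt, y0=0)
alpha_response = nengo.Alpha(0.02).filt(impulse, dt=dt, y0=0)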

class nengo.synapses.Triangle(t, **kwargs)[source]

Triangular finite impulse response (FIR) synapse.

This synapse has a triangular and finite impulse response. The length of the triangle is t seconds; thus the digital filter will have t / dt + 1 taps.

Parameters:
t : float

Length of the triangle, in seconds.

Attributes:
t : float

Length of the triangle, in seconds.

make_step(shape_in, shape_out, dt, rng, y0=None, dtype=<class 'numpy.float64'>)[source]

Returns a custom step function.

Decoder and connection weight solvers

nengo.solvers.Solver Decoder or weight solver.
nengo.solvers.Lstsq Unregularized least-squares solver.
nengo.solvers.LstsqNoise Least-squares solver with additive Gaussian white noise.
nengo.solvers.LstsqMultNoise Least-squares solver with multiplicative white noise.
nengo.solvers.LstsqL2 Least-squares solver with L2 regularization.
nengo.solvers.LstsqL2nz Least-squares solver with L2 regularization on non-zero components.
nengo.solvers.LstsqL1 Least-squares solver with L1 and L2 regularization (elastic net).
nengo.solvers.LstsqDrop Find sparser decoders/weights by dropping small values.
nengo.solvers.Nnls Non-negative least-squares solver without regularization.
nengo.solvers.NnlsL2 Non-negative least-squares solver with L2 regularization.
nengo.solvers.NnlsL2nz Non-negative least-squares with L2 regularization on nonzero components.
nengo.solvers.NoSolver Manually pass in weights, bypassing the decoder solver.
class nengo.solvers.Solver(weights=False)[source]

Decoder or weight solver.

__call__(A, Y, rng=np.random, E=None)[source]

Call the solver.

Parameters:
A : (n_eval_points, n_neurons) array_like

Matrix of the neurons’ activities at the evaluation points.

Y : (n_eval_points, dimensions) array_like

Matrix of the target decoded values for each of the D dimensions, at each of the evaluation points.

rng : numpy.random.RandomState, optional (Default: np.random)

A random number generator to use as required.

E : (dimensions, post.n_neurons) array_like, optional (Default: None)

Array of post-population encoders. Providing this tells the solver to return an array of connection weights rather than decoders.

Returns:
X : (n_neurons, dimensions) or (n_neurons, post.n_neurons) ndarray

(n_neurons, dimensions) array of decoders (if solver.weights is False) or (n_neurons, post.n_neurons) array of weights (if solver.weights is True).

info : dict

A dictionary of information about the solver. All dictionaries have an 'rmses' key that contains RMS errors of the solve. Other keys are unique to particular solvers.

mul_encoders(Y, E, copy=False)[source]

Helper function that projects signal Y onto encoders E.

Parameters:
Y : ndarray

The signal of interest.

E : (dimensions, n_neurons) array_like or None

Array of encoders. If None, Y will be returned unchanged.

copy : bool, optional (Default: False)

Whether a copy of Y should be returned if E is None.
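
For orientation only, a hypothetical custom solver might subclass Solver, implement __call__ with its own least-squares fit, and use mul_encoders to honor the weights flag; this sketch is not one of the solvers documented below.

import numpy as np
import nengo


class LstsqPinv(nengo.solvers.Solver):
    # Hypothetical solver using the Moore-Penrose pseudoinverse.

    def __init__(self, weights=False):
        super().__init__(weights=weights)

    def __call__(self, A, Y, rng=np.random, E=None):
        X = np.dot(np.linalg.pinv(A), Y)  # (n_neurons, dimensions)
        rmses = np.sqrt(np.mean((np.dot(A, X) - Y) ** 2, axis=0))
        # Project onto post encoders if connection weights were requested.
        return self.mul_encoders(X, E), {'rmses': rmses}


# Usage sketch: nengo.Connection(a, b, solver=LstsqPinv())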

class nengo.solvers.Lstsq(weights=False, rcond=0.01)[source]

Unregularized least-squares solver.

Parameters:
weights : bool, optional (Default: False)

If False, solve for decoders. If True, solve for weights.

rcond : float, optional (Default: 0.01)

Cut-off ratio for small singular values (see numpy.linalg.lstsq).

Attributes:
rcond : float

Cut-off ratio for small singular values (see numpy.linalg.lstsq).

weights : bool

If False, solve for decoders. If True, solve for weights.

class nengo.solvers.LstsqNoise(weights=False, noise=0.1, solver=Cholesky(transpose=None))[source]

Least-squares solver with additive Gaussian white noise.

Parameters:
weights : bool, optional (Default: False)

If False, solve for decoders. If True, solve for weights.

noise : float, optional (Default: 0.1)

Amount of noise, as a fraction of the neuron activity.

solver : LeastSquaresSolver, optional (Default: Cholesky())

Subsolver to use for solving the least squares problem.

Attributes:
noise : float

Amount of noise, as a fraction of the neuron activity.

solver : LeastSquaresSolver

Subsolver to use for solving the least squares problem.

weights : bool

If False, solve for decoders. If True, solve for weights.

class nengo.solvers.LstsqMultNoise(weights=False, noise=0.1, solver=Cholesky(transpose=None))[source]

Least-squares solver with multiplicative white noise.

Parameters:
weights : bool, optional (Default: False)

If False, solve for decoders. If True, solve for weights.

noise : float, optional (Default: 0.1)

Amount of noise, as a fraction of the neuron activity.

solver : LeastSquaresSolver, optional (Default: Cholesky())

Subsolver to use for solving the least squares problem.

Attributes:
noise : float

Amount of noise, as a fraction of the neuron activity.

solver : LeastSquaresSolver

Subsolver to use for solving the least squares problem.

weights : bool

If False, solve for decoders. If True, solve for weights.

class nengo.solvers.LstsqL2(weights=False, reg=0.1, solver=Cholesky(transpose=None))[source]

Least-squares solver with L2 regularization.

Parameters:
weights : bool, optional (Default: False)

If False, solve for decoders. If True, solve for weights.

reg : float, optional (Default: 0.1)

Amount of regularization, as a fraction of the neuron activity.

solver : LeastSquaresSolver, optional (Default: Cholesky())

Subsolver to use for solving the least squares problem.

Attributes:
reg : float

Amount of regularization, as a fraction of the neuron activity.

solver : LeastSquaresSolver

Subsolver to use for solving the least squares problem.

weights : bool

If False, solve for decoders. If True, solve for weights.
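
As a usage sketch (not from the reference), a solver is selected by passing it to a Connection’s solver argument; the regularization value here is arbitrary.

import nengo
from nengo.solvers import LstsqL2

with nengo.Network() as model:
    a = nengo.Ensemble(100, dimensions=1)
    b = nengo.Ensemble(100, dimensions=1)
    # Less regularization than the default of 0.1; tune per application.
    nengo.Connection(a, b, solver=LstsqL2(reg=0.01))
    # weights=True solves for a full neuron-to-neuron weight matrix instead.
    nengo.Connection(a, b, solver=LstsqL2(weights=True))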

class nengo.solvers.LstsqL2nz(weights=False, reg=0.1, solver=Cholesky(transpose=None))[source]

Least-squares solver with L2 regularization on non-zero components.

Parameters:
weights : bool, optional (Default: False)

If False, solve for decoders. If True, solve for weights.

reg : float, optional (Default: 0.1)

Amount of regularization, as a fraction of the neuron activity.

solver : LeastSquaresSolver, optional (Default: Cholesky())

Subsolver to use for solving the least squares problem.

Attributes:
reg : float

Amount of regularization, as a fraction of the neuron activity.

solver : LeastSquaresSolver

Subsolver to use for solving the least squares problem.

weights : bool

If False, solve for decoders. If True, solve for weights.

class nengo.solvers.LstsqL1(weights=False, l1=0.0001, l2=1e-06, max_iter=1000)[source]

Least-squares solver with L1 and L2 regularization (elastic net).

This method is well suited for creating sparse decoders or weight matrices.

Note

Requires scikit-learn.

Parameters:
weights : bool, optional (Default: False)

If False, solve for decoders. If True, solve for weights.

l1 : float, optional (Default: 1e-4)

Amount of L1 regularization.

l2 : float, optional (Default: 1e-6)

Amount of L2 regularization.

max_iter : int, optional

Maximum number of iterations for the underlying elastic net.

Attributes:
l1 : float

Amount of L1 regularization.

l2 : float

Amount of L2 regularization.

weights : bool

If False, solve for decoders. If True, solve for weights.

max_iter : int

Maximum number of iterations for the underlying elastic net.

class nengo.solvers.LstsqDrop(weights=False, drop=0.25, solver1=LstsqL2(reg=0.001, solver=Cholesky(transpose=None), weights=False), solver2=LstsqL2(reg=0.1, solver=Cholesky(transpose=None), weights=False))[source]

Find sparser decoders/weights by dropping small values.

This solver first solves for coefficients (decoders/weights) with L2 regularization, drops those nearest to zero, and then re-solves for the remaining coefficients.

Parameters:
weights : bool, optional (Default: False)

If False, solve for decoders. If True, solve for weights.

drop : float, optional (Default: 0.25)

Fraction of decoders or weights to set to zero.

solver1 : Solver, optional (Default: LstsqL2(reg=0.001))

Solver for finding the initial decoders.

solver2 : Solver, optional (Default: LstsqL2(reg=0.1))

Used for re-solving for the decoders after dropout.

Attributes:
drop : float

Fraction of decoders or weights to set to zero.

solver1 : Solver

Solver for finding the initial decoders.

solver2 : Solver

Used for re-solving for the decoders after dropout.

weights : bool

If False, solve for decoders. If True, solve for weights.

class nengo.solvers.Nnls(weights=False)[source]

Non-negative least-squares solver without regularization.

Similar to Lstsq, except the output values are non-negative.

If solving for non-negative weights, it is important that the intercepts of the post-population are also non-negative, since neurons with negative intercepts will never be silent, affecting output accuracy.

Note

Requires SciPy.

Parameters:
weights : bool, optional (Default: False)

If False, solve for decoders. If True, solve for weights.

Attributes:
weights : bool

If False, solve for decoders. If True, solve for weights.

class nengo.solvers.NnlsL2(weights=False, reg=0.1)[source]

Non-negative least-squares solver with L2 regularization.

Similar to LstsqL2, except the output values are non-negative.

If solving for non-negative weights, it is important that the intercepts of the post-population are also non-negative, since neurons with negative intercepts will never be silent, affecting output accuracy.

Note

Requires SciPy.

Parameters:
weights : bool, optional (Default: False)

If False, solve for decoders. If True, solve for weights.

reg : float, optional (Default: 0.1)

Amount of regularization, as a fraction of the neuron activity.

Attributes:
reg : float

Amount of regularization, as a fraction of the neuron activity.

weights : bool

If False, solve for decoders. If True, solve for weights.

class nengo.solvers.NnlsL2nz(weights=False, reg=0.1)[source]

Non-negative least-squares with L2 regularization on nonzero components.

Similar to LstsqL2nz, except the output values are non-negative.

If solving for non-negative weights, it is important that the intercepts of the post-population are also non-negative, since neurons with negative intercepts will never be silent, affecting output accuracy.

Note

Requires SciPy.

Parameters:
weights : bool, optional (Default: False)

If False, solve for decoders. If True, solve for weights.

reg : float, optional (Default: 0.1)

Amount of regularization, as a fraction of the neuron activity.

Attributes:
reg : float

Amount of regularization, as a fraction of the neuron activity.

weights : bool

If False, solve for decoders. If True, solve for weights.

class nengo.solvers.NoSolver(values=None, weights=False)[source]

Manually pass in weights, bypassing the decoder solver.

Parameters:
values : (n_neurons, n_weights) array_like, optional (Default: None)

The array of decoders or weights to use. If weights is False, n_weights is the expected output dimensionality. If weights is True, n_weights is the number of neurons in the post ensemble. If None, which is the default, the solver will return an appropriately sized array of zeros.

weights : bool, optional (Default: False)

If False, values is interpreted as decoders. If True, values is interpreted as weights.

Attributes:
values : (n_neurons, n_weights) array_like, optional (Default: None)

The array of decoders or weights to use. If weights is False, n_weights is the expected output dimensionality. If weights is True, n_weights is the number of neurons in the post ensemble. If None, which is the default, the solver will return an appropriately sized array of zeros.

weights : bool, optional (Default: False)

If False, values is interpreted as decoders. If True, values is interpreted as weights.
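
As a final usage sketch (not from the reference), NoSolver lets precomputed decoders be supplied directly; the zero array below is only a placeholder for values obtained elsewhere.

import numpy as np
import nengo
from nengo.solvers import NoSolver

n_neurons, dimensions = 100, 2
decoders = np.zeros((n_neurons, dimensions))  # placeholder for real decoders

with nengo.Network() as model:
    a = nengo.Ensemble(n_neurons, dimensions=dimensions)
    b = nengo.Ensemble(n_neurons, dimensions=dimensions)
    nengo.Connection(a, b, solver=NoSolver(decoders))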