# NengoDL Simulator¶

This is the class that allows users to access the nengo_dl backend. This can be used as a drop-in replacement for nengo.Simulator (i.e., simply replace any instance of nengo.Simulator with nengo_dl.Simulator and everything will continue to function as normal).

In addition, the Simulator exposes features unique to the nengo_dl backend, such as Simulator.train().

## Simulator arguments¶

The nengo_dl Simulator has a number of optional arguments, beyond those in nengo.Simulator, which control features specific to the nengo_dl backend. The full class documentation can be viewed below; here we will explain the practical usage of these parameters.

### dtype¶

This specifies the floating point precision to be used for the simulator’s internal computations. It can be either tf.float32 or tf.float64, for 32- or 64-bit precision, respectively. 32-bit precision is the default, as it is faster, uses less memory, and in most cases makes no difference in the simulation results. However, if very precise outputs are required, this can be changed to tf.float64.
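For example, a 64-bit simulation could be set up as follows (a minimal sketch; `net` is assumed to be a `nengo.Network` defined elsewhere):

```python
import tensorflow as tf
import nengo_dl

# use 64-bit floats for all internal computations (slower, but more precise)
with nengo_dl.Simulator(net, dtype=tf.float64) as sim:
    sim.run(1.0)
```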

### device¶

This specifies the computational device on which the simulation will run. The default is None, which means that operations will be assigned according to TensorFlow’s internal logic (generally speaking, this means that things will be assigned to the GPU if tensorflow-gpu is installed, and to the CPU otherwise). The device can be set manually by passing a TensorFlow device specification to this parameter. For example, setting device="/cpu:0" will force everything to run on the CPU. This may be worthwhile for small models, where the extra overhead of communicating with the GPU outweighs the actual computations. On systems with multiple GPUs, passing device="/gpu:0", device="/gpu:1", and so on selects which GPU to use.
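For example, to force the whole simulation onto the CPU (again a minimal sketch, with `net` assumed to be an existing network):

```python
import nengo_dl

# run everything on the CPU, e.g. for a small model where the overhead of
# communicating with the GPU would outweigh the actual computation
with nengo_dl.Simulator(net, device="/cpu:0") as sim:
    sim.run(1.0)
```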

### unroll_simulation¶

This controls how many simulation iterations are executed each time through the outer simulation loop. That is, we could run 20 timesteps as

```
for i in range(20):
    <run 1 step>
```


or

```
for i in range(5):
    <run 1 step>
    <run 1 step>
    <run 1 step>
    <run 1 step>
```


This is an optimization process known as “loop unrolling”, and unroll_simulation controls how many simulation steps are unrolled. The first example above would correspond to unroll_simulation=1, and the second would be unroll_simulation=4.

Unrolling the simulation will result in faster simulation speed, but increased build time and memory usage.

In general, unrolling the simulation will have no impact on the output of a simulation. The only case in which unrolling may have an impact is if the number of simulation steps is not evenly divisible by unroll_simulation. In that case extra simulation steps will be executed, and then data will be truncated to the correct number of steps. However, those extra steps could still change the internal state of the simulation, which will affect any subsequent calls to sim.run. So it is recommended that the number of steps always be evenly divisible by unroll_simulation.
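As a rough sketch (with `net` assumed to be an existing network), an unrolled simulation could be set up like this, keeping the number of steps a multiple of the unroll factor:

```python
import nengo_dl

# build 4 simulation steps into each iteration of the simulation loop;
# 20 steps is evenly divisible by 4, so no extra steps are executed
with nengo_dl.Simulator(net, unroll_simulation=4) as sim:
    sim.run_steps(20)
```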

### minibatch_size¶

nengo_dl allows a model to be simulated with multiple simultaneous inputs, processing those values in parallel through the network. For example, instead of executing a model three times with three different inputs, the model can be executed once with those three inputs in parallel. minibatch_size specifies how many inputs will be processed at a time. The default is None, meaning that this feature is not used and only one input will be processed at a time (as in standard Nengo simulators).

In order to take advantage of the parallel inputs, multiple inputs need to be passed to Simulator.run() via the input_feeds argument. This is discussed in more detail below.

When using Simulator.train(), this parameter controls how many items from the training data will be used for each optimization iteration.

### tensorboard¶

If set to True, nengo_dl will save the structure of the internal simulation graph so that it can be visualized in TensorBoard. This is mainly useful to developers trying to debug the simulator. This data is stored in the <nengo_dl>/data folder, and can be loaded via

```
tensorboard --logdir <path/to/nengo_dl>
```


Data will be organized according to the Network label and run number.
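A minimal sketch of enabling this option (with `net` assumed to be an existing network):

```python
import nengo_dl

# save the structure of the simulation graph so it can be inspected in TensorBoard
with nengo_dl.Simulator(net, tensorboard=True) as sim:
    sim.run_steps(10)
```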

## Simulator.run arguments¶

Simulator.run() (and its variants Simulator.step() and Simulator.run_steps()) also has some optional parameters beyond those in the standard Nengo simulator.

### input_feeds¶

This parameter can be used to override the value of any input Node in a model (an input node is defined as a node with no incoming connections). For example

```python
import numpy as np
import nengo
import nengo_dl

n_steps = 5

with nengo.Network() as net:
    node = nengo.Node([0])
    p = nengo.Probe(node)

with nengo_dl.Simulator(net) as sim:
    sim.run_steps(n_steps)
```


will execute the model in the standard way, and if we check the output of node

```python
print(sim.data[p])
>>> [[ 0.] [ 0.] [ 0.] [ 0.] [ 0.]]
```


we see that it is all zero, as defined.

input_feeds is specified as a dictionary of {my_node: override_value} pairs, where my_node is the Node to be overridden and override_value is a numpy array with shape (minibatch_size, n_steps, my_node.size_out) that gives the Node output value on each simulation step. For example, if we instead run the model via

```python
sim.run_steps(n_steps, input_feeds={node: np.ones((1, n_steps, 1))})
print(sim.data[p])
>>> [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
```


we see that the output of node is all ones, which is the override value we specified.

input_feeds are usually used in concert with the minibatching feature of nengo_dl (see above). nengo_dl allows multiple inputs to be processed simultaneously, but when we construct a Node we can only specify one value. For example, if we use minibatching on the above network

```python
mini = 3

with nengo_dl.Simulator(net, minibatch_size=mini) as sim:
    sim.run_steps(n_steps)
    print(sim.data[p])
>>> [[[ 0.] [ 0.] [ 0.] [ 0.] [ 0.]]
     [[ 0.] [ 0.] [ 0.] [ 0.] [ 0.]]
     [[ 0.] [ 0.] [ 0.] [ 0.] [ 0.]]]
```


we see that the output is an array of zeros with size (mini, n_steps, 1). That is, we simulated 3 inputs simultaneously, but those inputs all had the same value (the one we defined when the Node was constructed) so it wasn’t very useful. To take full advantage of the minibatching we need to override the node values, so that we can specify a different value for each item in the minibatch:

```python
with nengo_dl.Simulator(net, minibatch_size=mini) as sim:
    sim.run_steps(n_steps, input_feeds={
        node: np.zeros((mini, n_steps, 1)) + np.arange(mini)[:, None, None]})
    print(sim.data[p])
>>> [[[ 0.] [ 0.] [ 0.] [ 0.] [ 0.]]
     [[ 1.] [ 1.] [ 1.] [ 1.] [ 1.]]
     [[ 2.] [ 2.] [ 2.] [ 2.] [ 2.]]]
```


Here we can see that 3 independent inputs have been processed during the simulation. In a simple network such as this, minibatching will not make much difference. But for larger models it will be much more efficient to process multiple inputs in parallel rather than one at a time.

### profile¶

If set to True, profiling data will be collected while the simulation runs. This will significantly slow down the simulation, so it should be left on False (the default) in most cases. It is mainly used by developers, in order to help identify simulation bottlenecks.

Profiling data will be saved to <nengo_dl>/data/nengo_dl_profile.json. It can be viewed by opening a Chrome browser, navigating to chrome://tracing and loading the nengo_dl_profile.json file. Alternatively, a filename can be passed to profile to specify an output location for the profiling data.
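For example, profiling data could be collected and written to a custom location like this (a sketch reusing an open simulator from the examples above; the filename is just an illustration):

```python
# collect TensorFlow profiling data during this run; passing a filename
# instead of True controls where the profile is written
sim.run_steps(100, profile="my_profile.json")
```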

## API¶

class nengo_dl.simulator.Simulator(network, dt=0.001, seed=None, model=None, dtype=tf.float32, device=None, unroll_simulation=1, minibatch_size=None, tensorboard=False)[source]

Simulate network using the nengo_dl backend.

Parameters:

- **network** (Network or None): a network object to be built and then simulated. If None, then a built model must be passed to model instead
- **dt** (float, optional): length of a simulator timestep, in seconds
- **seed** (int, optional): seed for all stochastic operators used in this simulator
- **model** (Model, optional): pre-built model object
- **dtype** (tf.DType, optional): floating point precision to use for simulation
- **device** (None or "/cpu:0" or "/gpu:[0-n]", optional): device on which to execute computations (if None then uses the default device as determined by TensorFlow)
- **unroll_simulation** (int, optional): unroll simulation loop by explicitly building the given number of iterations into the computation graph (improves simulation speed but increases build time)
- **minibatch_size** (int, optional): the number of simultaneous inputs that will be passed through the network
- **tensorboard** (bool, optional): if True, save network output in the TensorFlow summary format, which can be loaded into TensorBoard
reset(seed=None)[source]

Resets the simulator to initial conditions.

Parameters:

- **seed** (int, optional): if not None, overwrite the default simulator seed with this value (note: this becomes the new default simulator seed)
soft_reset(include_trainable=False, include_probes=False)[source]

Resets the internal state of the simulation, but doesn’t rebuild the graph.

Parameters:

- **include_trainable** (bool, optional): if True, also reset any training that has been performed on network parameters (e.g., connection weights)
- **include_probes** (bool, optional): if True, also clear probe data
step(**kwargs)[source]

Run the simulation for one time step.

Parameters:

- **kwargs** (dict)
run(time_in_seconds, **kwargs)[source]

Simulate for the given length of time.

Parameters:

- **time_in_seconds** (float): amount of time to run the simulation for
- **kwargs** (dict)
run_steps(n_steps, input_feeds=None, profile=False)[source]

Simulate for the given number of steps.

Parameters:

- **n_steps** (int): the number of simulation steps to be executed
- **input_feeds** (dict of {Node: ndarray}): override the values of input Nodes with the given data. Arrays should have shape (sim.minibatch_size, n_steps, node.size_out).
- **profile** (bool, optional): if True, collect TensorFlow profiling information while the simulation is running (this will slow down the simulation)

Notes

If unroll_simulation=x is specified and n_steps > x, this will repeatedly execute x timesteps until the number of steps executed is >= n_steps.

train(inputs, targets, optimizer, n_epochs=1, objective='mse', shuffle=True, profile=False)[source]

Optimize the trainable parameters of the network using the given optimization method, minimizing the objective value over the given inputs and targets.

Parameters:

- **inputs** (dict of {Node: ndarray}): input values for Nodes in the network; arrays should have shape (batch_size, n_steps, node.size_out)
- **targets** (dict of {Probe: ndarray}): desired output value at Probes, corresponding to each value in inputs; arrays should have shape (batch_size, n_steps, probe.size_in)
- **optimizer** (tf.train.Optimizer): TensorFlow optimizer, e.g. tf.train.GradientDescentOptimizer(learning_rate=0.1)
- **n_epochs** (int, optional): run training for the given number of epochs (complete passes through inputs)
- **objective** ("mse" or callable, optional): the objective to be minimized. Passing "mse" will train with mean squared error. A custom function f(output, target) -> loss can be passed that consumes the actual output and target output for a probe in targets and returns a tf.Tensor representing the scalar loss value for that Probe (loss will be averaged across Probes).
- **shuffle** (bool, optional): if True, randomize the data into different minibatches each epoch
- **profile** (bool, optional): if True, collect TensorFlow profiling information while training (this will slow down the training)

Notes

Most deep learning methods require the network to be differentiable, which means that trying to train a network with non-differentiable elements will result in an error. Examples of common non-differentiable elements include LIF, Direct, and processes/neurons that don’t have a custom TensorFlow implementation (see process_builders.SimProcessBuilder / neuron_builders.SimNeuronsBuilder).
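As a rough usage sketch (the network and training data here are hypothetical, with array shapes as described above; rectified-linear neurons are used since, as noted, non-differentiable elements such as LIF cannot be trained):

```python
import numpy as np
import tensorflow as tf
import nengo
import nengo_dl

# a small, differentiable network
with nengo.Network() as net:
    inp = nengo.Node([0])
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.RectifiedLinear())
    out = nengo.Node(size_in=1)
    nengo.Connection(inp, ens, synapse=None)
    nengo.Connection(ens, out, synapse=None)
    out_p = nengo.Probe(out)

# hypothetical training data: teach the network to output 2 * input
train_inputs = {inp: np.random.uniform(-1, 1, size=(64, 1, 1))}
train_targets = {out_p: 2 * train_inputs[inp]}

with nengo_dl.Simulator(net, minibatch_size=16) as sim:
    # minimize mean squared error with simple gradient descent
    sim.train(train_inputs, train_targets,
              tf.train.GradientDescentOptimizer(learning_rate=0.1),
              n_epochs=5)
```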

loss(inputs, targets, objective)[source]

Compute the loss value for the given objective and inputs/targets.

Parameters:

- **inputs** (dict of {Node: ndarray}): input values for Nodes in the network; arrays should have shape (batch_size, n_steps, node.size_out)
- **targets** (dict of {Probe: ndarray}): desired output value at Probes, corresponding to each value in inputs; arrays should have shape (batch_size, n_steps, probe.size_in)
- **objective** ("mse" or callable): the objective used to compute loss. Passing "mse" will use mean squared error. A custom function f(output, target) -> loss can be passed that consumes the actual output and target output for a probe in targets and returns a tf.Tensor representing the scalar loss value for that Probe (loss will be averaged across Probes)
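Continuing the training sketch above (while the simulator is still open), the remaining error on that hypothetical data could be checked like this:

```python
# mean squared error between the network output and the targets,
# computed on the same hypothetical data used in the training sketch
print(sim.loss(train_inputs, train_targets, "mse"))
```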
save_params(path, include_global=True, include_local=False)[source]

Save network parameters to the given path.

Parameters:

- **path** (str): filepath of parameter output file
- **include_global** (bool, optional): if True (default True), save global (trainable) network variables
- **include_local** (bool, optional): if True (default False), save local (non-trainable) network variables
load_params(path, include_global=True, include_local=False)[source]

Load network parameters from the given path.

Parameters:

- **path** (str): filepath of parameter input file
- **include_global** (bool, optional): if True (default True), load global (trainable) network variables
- **include_local** (bool, optional): if True (default False), load local (non-trainable) network variables
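As a rough sketch (the path is purely illustrative), trained parameters could be saved from an open simulator and later restored into a new one built from the same network:

```python
# save the trainable parameters after training...
sim.save_params("./my_saved_params")

# ...and load them back into a fresh simulator
with nengo_dl.Simulator(net) as new_sim:
    new_sim.load_params("./my_saved_params")
```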
close()[source]

Close the simulation, freeing resources.

Notes

The simulation cannot be restarted after it is closed. This is not a technical limitation, just a design decision made for all Nengo simulators.

trange(dt=None)[source]

Create a vector of times matching probed data.

Note that the range does not start at 0 as one might expect, but at the first timestep (i.e., dt).

Parameters:

- **dt** (float, optional): the sampling period of the probe to create a range for; if None, the simulator’s dt will be used.
check_gradients(outputs=None, atol=1e-05, rtol=0.001)[source]

Perform gradient checks for the network (used to verify that the analytic gradients are correct).

Raises a simulation error if the difference between analytic and numeric gradient is greater than atol + rtol * numeric_grad (elementwise).

Parameters:

- **outputs** (tf.Tensor or list of tf.Tensor or list of Probe): compute gradients with respect to this output (if None, computes with respect to each output probe)
- **atol** (float, optional): absolute error tolerance
- **rtol** (float, optional): relative (to numeric grad) error tolerance

Notes

Calling this function will reset all values in the network, so it should not be intermixed with calls to Simulator.run().

class nengo_dl.simulator.SimulationData(sim, minibatched)[source]

Data structure used to access simulation data from the model.

The main use case for this is to access Probe data; for example, probe_data = sim.data[my_probe]. However, it is also used to access the parameters of objects in the model; for example, after the model has been optimized via Simulator.train(), the updated encoder values for an ensemble can be accessed via trained_encoders = sim.data[my_ens].encoders.

Parameters:

- **sim** (Simulator): the simulator from which data will be drawn
- **minibatched** (bool): if False, discard the minibatch dimension on probe data

Notes

SimulationData shouldn’t be created/accessed directly by the user, but rather via sim.data (which is an instance of SimulationData).

__init__(sim, minibatched)[source]

Initialize self. See help(type(self)) for accurate signature.

__getitem__(obj)[source]

Return the data associated with obj.

Parameters:

- **obj**: the object whose simulation data is being accessed

Returns:

- numpy.ndarray or nengo.builder.ensemble.BuiltEnsemble or nengo.builder.connection.BuiltConnection: array containing probed data if obj is a Probe, otherwise the corresponding parameter object
get_param(obj, attr)[source]

Returns the current parameter value for the given object.

Parameters:

- **obj** (NengoObject): the nengo object for which we want to know the parameters
- **attr** (str): the parameter of obj to be returned

Returns:

- numpy.ndarray: current value of the parameters associated with the given object

Notes

Parameter values should be accessed through sim.data (which will call this function if necessary), rather than directly through this function.