TensorFlow graph construction

The TensorGraph class manages all the data and build processes associated with the TensorFlow graph. The TensorFlow graph is the symbolic description of the computations in the network, which will be executed by the simulator.

nengo_dl.tensor_graph.with_self(wrapped, instance, args, kwargs)[source]

A decorator that can be used to ensure that any ops created within the wrapped method will be added to the TensorGraph object’s graph.
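The (wrapped, instance, args, kwargs) signature is the wrapt-style form. Conceptually the decorator behaves like the following sketch, written with functools purely for illustration rather than reproducing the actual implementation; it assumes the instance has ``graph`` and ``device`` attributes, as TensorGraph does.

    import functools

    import tensorflow as tf

    def with_self(func):
        """Illustrative stand-in for the wrapt-based decorator."""

        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            # make self.graph the default graph (and apply the device setting)
            # so that any ops created inside ``func`` are added to it
            with self.graph.as_default(), tf.device(self.device):
                return func(self, *args, **kwargs)

        return wrapper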

class nengo_dl.tensor_graph.TensorGraph(model, dt, unroll_simulation, dtype, minibatch_size, device, progress)[source]

Manages the construction of the TensorFlow symbolic computation graph. A brief usage sketch follows the parameter list below.

Parameters:
model : Model

Pre-built Nengo model describing the network to be simulated

dt : float

Length of a simulator timestep, in seconds

unroll_simulation : int

Unroll simulation loop by explicitly building unroll_simulation iterations into the computation graph

dtype : tf.DType

Floating point precision to use for simulation

minibatch_size : int

The number of simultaneous inputs that will be passed through the network

device : None or "/cpu:0" or "/gpu:[0-n]"

Device on which to execute computations (if None then uses the default device as determined by TensorFlow)

progress : utils.ProgressBar

Progress bar for optimization stage
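TensorGraph is normally constructed internally by nengo_dl.Simulator, which maps its own arguments onto the parameters above. The following sketch shows that typical route rather than direct construction; the network and argument values are illustrative.

    import nengo
    import nengo_dl

    with nengo.Network() as net:
        stim = nengo.Node(0.5)
        ens = nengo.Ensemble(50, 1)
        nengo.Connection(stim, ens)
        probe = nengo.Probe(ens)

    # dt, unroll_simulation, minibatch_size, and device are passed through to
    # the TensorGraph constructed internally by the Simulator
    with nengo_dl.Simulator(net, dt=0.001, unroll_simulation=10,
                            minibatch_size=16, device="/cpu:0") as sim:
        sim.run_steps(50)  # the built graph is accessible as sim.tensor_graph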

build(progress)[source]

Constructs a new graph to simulate the model.

Parameters:
progress : utils.ProgressBar

Progress bar for construction stage

build_step()[source]

Build the operators that execute a single simulation timestep into the graph.

Returns:
probe_tensors : list of tf.Tensor

The Tensor objects representing the data required for each model Probe

side_effects : list of tf.Tensor

The output Tensors of computations that may have side-effects (e.g., Node functions), meaning that they must be executed each time step even if their output doesn’t appear to be used in the simulation

build_loop(progress)[source]

Build the simulation loop.

Parameters:
progress : utils.ProgressBar

Progress bar for loop construction

build_inputs(progress)[source]

Sets up the inputs in the model (which will be computed outside of TensorFlow and fed in each simulation block).

Parameters:
progress : utils.ProgressBar

Progress bar for input construction

build_optimizer(optimizer, objective)[source]

Adds elements into the graph to execute the given optimizer.

Parameters:
optimizer : tf.train.Optimizer

Instance of a TensorFlow optimizer class

objective : dict of {Probe: "mse" or callable or None}

The objective to be minimized for each Probe. Passing "mse" will train with mean squared error. Alternatively, a custom function f(output, target) -> loss can be passed, which consumes the actual output and target output for a Probe and returns a tf.Tensor representing the scalar loss value for that Probe (losses will be summed across Probes); a sketch of such a function is given below. None indicates that the error gradient is being directly specified by the user.

Returns:
``tf.Tensor``

Operator implementing the given optimizer update

``tf.Tensor`` or ``None``

Operator for initializing variables created by the optimizer (None if there is nothing to initialize, or if we’re returning a previously built optimizer that should already be initialized)
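For illustration, a custom objective of the kind described above might look like the following sketch. ``my_probe`` is a placeholder for a Probe in the model, and in practice the optimizer/objective pair is normally supplied through Simulator.train rather than by calling build_optimizer directly.

    import tensorflow as tf

    def mean_abs_error(output, target):
        # output and target hold the probe output and the target values
        # (typically shaped (minibatch_size, n_steps, probe dimensions));
        # the function must return a scalar tf.Tensor loss for this Probe
        return tf.reduce_mean(tf.abs(output - target))

    optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001)
    objective = {my_probe: mean_abs_error}  # or {my_probe: "mse"} / {my_probe: None}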

build_loss(objective)[source]

Adds elements into the graph to compute the given objective.

Parameters:
objective : dict of {Probe: "mse" or callable or None}

The objective used to compute the loss for each Probe. Passing "mse" will use mean squared error. Alternatively, a custom function f(output, target) -> loss can be passed, which consumes the actual output and target output for a Probe and returns a tf.Tensor representing the scalar loss value for that Probe (losses will be summed across Probes).

Returns:
``tf.Tensor``

Tensor representing the sum of the given objectives applied to target probes

build_post(sess, rng)[source]

Executes post-build processes for operators (after the graph has been constructed and session/variables initialized).

Note that unlike other build functions, this is called every time the simulator is reset.

Parameters:
sess : tf.Session

The TensorFlow session for the simulator

rng : RandomState

Seeded random number generator

build_summaries(summaries)[source]

Adds ops to collect summary data for the given objects.

Parameters:
summaries : list of dict or Connection or Ensemble or Neurons or tf.Tensor

List of objects about which to collect data. Each object can be a Connection (in which case data on the connection weights will be collected), an Ensemble (encoders), a Neurons object (biases), a dict of {probe: objective} indicating a loss that will be tracked, or a pre-built summary tensor (see the sketch below).

Returns:
``tf.Tensor``

Merged summary op for the given summaries
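A sketch of the kinds of entries the summaries list can contain; ``conn``, ``ens``, and ``probe`` are placeholders for objects in the model, and in practice the list is usually passed to Simulator.train rather than to this method directly.

    summaries = [
        conn,            # Connection: collect data on the connection weights
        ens,             # Ensemble: collect data on the encoders
        ens.neurons,     # Neurons: collect data on the biases
        {probe: "mse"},  # dict: track a loss value for this probe
    ]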

get_tensor(sig)[source]

Returns a Tensor corresponding to the given Signal.

Parameters:
sig : Signal

A signal in the model

Returns:
``tf.Tensor``

Tensor containing the value of the given Signal

mark_signals()[source]

Mark all the signals in self.model according to whether they represent trainable parameters of the model (parameters that can be optimized by deep learning methods).

Trainable parameters include connection weights, ensemble encoders, and neuron biases, unless one of those signals is targeted by a Nengo learning rule (in which case the learning rule update would conflict with the deep learning optimization, so the signal is not marked as trainable).

Users can manually specify whether signals are trainable using the config system (e.g., net.config[nengo.Ensemble].trainable = False); a sketch of this usage is given below.
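A sketch of that config usage; it assumes the trainable attribute is first exposed on the network's config via nengo_dl.configure_settings.

    import nengo
    import nengo_dl

    with nengo.Network() as net:
        # expose the ``trainable`` config attribute
        nengo_dl.configure_settings(trainable=None)

        node = nengo.Node(0.5)
        ens = nengo.Ensemble(50, 1)
        conn = nengo.Connection(node, ens)

        # freeze all ensemble encoders during deep learning optimization
        net.config[nengo.Ensemble].trainable = False
        # or freeze a single connection's weights
        net.config[conn].trainable = False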

create_signals(sigs)[source]

Groups signal data together into larger base arrays, and represents each individual signal as a slice into one of those arrays.

Parameters:
sigs : list of Signal

Base signals arranged into the order in which they should reside in memory (e.g., output from graph_optimizer.order_signals())