TensorFlow graph construction¶
The TensorGraph class manages all the data and build processes associated with the TensorFlow graph. The TensorFlow graph is the symbolic description of the computations in the network, which will be executed by the simulator.
class nengo_dl.tensor_graph.TensorGraph(model, dt, unroll_simulation, dtype, minibatch_size, device)[source]¶
Manages the construction of the TensorFlow symbolic computation graph.
Parameters:
- model : Model
  pre-built Nengo model describing the network to be simulated
- dt : float
  length of a simulator timestep, in seconds
- unroll_simulation : int
  unroll the simulation loop by explicitly building unroll_simulation iterations into the computation graph
- dtype : tf.DType
  floating point precision to use for simulation
- minibatch_size : int
  the number of simultaneous inputs that will be passed through the network
- device : None or "/cpu:0" or "/gpu:[0-n]"
  device on which to execute computations (if None then uses the default device as determined by TensorFlow)
build(rng)[source]¶
Constructs a new graph to simulate the model.
Parameters:
- rng : RandomState
  the Simulator's random number generator
build_step()[source]¶
Build the operators that execute a single simulation timestep into the graph.
Returns:
- probe_tensors : list of tf.Tensor
  the Tensor objects representing the data required for each model Probe
- side_effects : list of tf.Tensor
  the output Tensors of computations that may have side effects (e.g., Node functions), meaning that they must be executed each timestep even if their output doesn't appear to be used in the simulation
build_loop()[source]¶
Build the simulation loop.
The loop can be constructed using the tf.while_loop architecture, or explicitly unrolled. Unrolling increases graph construction time and memory usage, but increases simulation speed.
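The trade-off between the two loop strategies can be sketched conceptually in plain NumPy-free Python (this is an illustration, not the TensorFlow graph code): a symbolic while-loop contains one copy of the step computation that is iterated, whereas unrolling builds several copies of the step into each outer iteration.

```python
def step(x):
    # stand-in for one simulation timestep's computation
    return 0.9 * x + 1.0


def run_while_loop(x, n_steps):
    # analogous to tf.while_loop: a single copy of the step, iterated
    for _ in range(n_steps):
        x = step(x)
    return x


def run_unrolled(x, n_steps, unroll=5):
    # analogous to unrolling: ``unroll`` copies of the step are built
    # into each outer iteration (more "graph", fewer loop iterations)
    assert n_steps % unroll == 0
    for _ in range(n_steps // unroll):
        x = step(step(step(step(step(x)))))
    return x
```

Both strategies compute the same result; unrolling simply trades a larger "graph" (five inlined copies of step per iteration here) for fewer loop iterations.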
build_inputs()[source]¶
Sets up the inputs in the model (which will be computed outside of TensorFlow and fed in each simulation block).
build_optimizer(optimizer, targets, objective)[source]¶
Adds elements into the graph to execute the given optimizer.
Parameters:
- optimizer : tf.train.Optimizer
  an instance of a TensorFlow optimizer class
- targets : tuple of Probe
  the Probes corresponding to the output signals being optimized
- objective : "mse" or callable
  the objective to be minimized. Passing "mse" will train with mean squared error. A custom function f(output, target) -> loss can be passed that consumes the actual output and target output for a probe in targets and returns a tf.Tensor representing the scalar loss value for that Probe (loss will be averaged across Probes).
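A custom objective must follow the f(output, target) -> loss signature described above. The sketch below shows the shape of such functions in NumPy for illustration; in real use they would receive tf.Tensor arguments and return a scalar tf.Tensor, and weighted_mse is a hypothetical example, not part of nengo_dl.

```python
import numpy as np


def mse(output, target):
    # equivalent in spirit to passing objective="mse"
    return np.mean((output - target) ** 2)


def weighted_mse(output, target, weight=2.0):
    # hypothetical custom loss that scales the error, to show that
    # any function with this signature returning a scalar works
    return weight * np.mean((output - target) ** 2)
```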
build_loss(objective, targets)[source]¶
Adds elements into the graph to compute the given objective.
Parameters:
- objective : "mse" or callable
  the objective used to compute loss. Passing "mse" will use mean squared error. A custom function f(output, target) -> loss can be passed that consumes the actual output and target output for a probe in targets and returns a tf.Tensor representing the scalar loss value for that Probe (loss will be averaged across Probes).
- targets : tuple of Probe
  the Probes corresponding to the target values in the objective
mark_signals()[source]¶
Mark all the signals in self.model according to whether they represent trainable parameters of the model (parameters that can be optimized by deep learning methods).
Trainable parameters include connection weights, ensemble encoders, and neuron biases, unless one of those signals is targeted by a Nengo learning rule (in which case the learning rule update would conflict with the deep learning optimization).
Users can manually specify whether signals are trainable or not using the config system (e.g., net.config[nengo.Ensemble].trainable = False).
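The marking rules above can be summarized as a small decision function. This is a conceptual sketch in plain Python, not the actual mark_signals implementation, and mark_trainable is a hypothetical name.

```python
def mark_trainable(signal_kind, has_learning_rule, config_override=None):
    """Decide whether a signal should be marked trainable.

    Precedence, per the rules described above:
    1. an explicit config setting (e.g. trainable = False) wins;
    2. signals targeted by a Nengo learning rule are not trainable,
       since the learning-rule update would conflict with the
       deep learning optimization;
    3. otherwise, weights, encoders, and biases are trainable.
    """
    if config_override is not None:
        return config_override
    if has_learning_rule:
        return False
    return signal_kind in {"weights", "encoders", "biases"}
```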