Operator graph optimization tools

These functions are used to restructure the operator graph so that it can be simulated more efficiently when converted into a TensorFlow graph.
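
These functions are normally selected through the NengoDL config system rather than called directly. A minimal sketch, assuming the standard planner/sorter/simplifications options of nengo_dl.configure_settings (the specific functions chosen here are illustrative):

    import nengo
    import nengo_dl
    from nengo_dl import graph_optimizer

    with nengo.Network() as net:
        # choose which graph optimization functions are applied when this
        # network is converted into a TensorFlow graph
        nengo_dl.configure_settings(
            planner=graph_optimizer.greedy_planner,
            sorter=graph_optimizer.order_signals,
            simplifications=[
                graph_optimizer.remove_zero_incs,
                graph_optimizer.remove_unmodified_resets,
            ],
        )
        ens = nengo.Ensemble(10, 1)

    with nengo_dl.Simulator(net) as sim:
        sim.run_steps(5)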

nengo_dl.graph_optimizer.mergeable(op, chosen_ops)[source]

Check if the given op can be merged with the candidate group.

Parameters:
op : Operator

the operator to be merged

chosen_ops : list of Operator

the operator group to be merged into

Returns:
bool

True if op can be merged into chosen_ops, else False
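
For illustration, a minimal sketch of calling this directly on operators taken from a built model (whether these two particular operators can merge depends on the model):

    import nengo
    from nengo_dl import graph_optimizer

    with nengo.Network() as net:
        a = nengo.Ensemble(10, 1)
        b = nengo.Ensemble(10, 1)
        nengo.Connection(a, b)

    with nengo.Simulator(net) as sim:
        ops = sim.model.operators

    # could the first operator join a group containing only the second?
    print(graph_optimizer.mergeable(ops[0], [ops[1]]))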

nengo_dl.graph_optimizer.greedy_planner(operators)[source]

Combine mergeable operators into groups that will be executed as a single computation.

Parameters:
operators : list of Operator

all the nengo operators in a model (unordered)

Returns:
list of tuple of Operator

operators combined into mergeable groups and in execution order

Notes

Originally based on the nengo_ocl greedy planner.
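
A sketch of invoking the planner directly on a built model's operators (normally this happens internally when the NengoDL simulator is constructed):

    import nengo
    from nengo_dl import graph_optimizer

    with nengo.Network() as net:
        ens = nengo.Ensemble(20, 1)
        nengo.Connection(ens, ens)

    with nengo.Simulator(net) as sim:
        plan = graph_optimizer.greedy_planner(sim.model.operators)

    # each entry in the plan is a tuple of operators that will be
    # executed as a single computation
    print(len(sim.model.operators), "operators in", len(plan), "groups")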

nengo_dl.graph_optimizer.tree_planner(op_list, max_depth=3)[source]

Create merged execution plan through exhaustive tree search.

The max_depth parameter scales the planner between full tree search and greedy search. max_depth==1 is equivalent to greedy_planner(), and max_depth==len(op_list) is full tree search (guaranteed to find the optimal plan, but likely very slow).

Parameters:
op_list : list of Operator

all the nengo operators in a model (unordered)

max_depth : int, optional

the planner will search this many steps ahead before selecting which group to schedule next

Returns:
list of tuple of Operator

operators combined into mergeable groups and in execution order
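
To use this planner with a non-default max_depth, a partial function can be passed through the config system; a sketch, assuming the configure_settings planner option accepts any callable with the same signature as the planners in this module:

    from functools import partial

    import nengo
    import nengo_dl
    from nengo_dl import graph_optimizer

    with nengo.Network() as net:
        nengo_dl.configure_settings(
            planner=partial(graph_optimizer.tree_planner, max_depth=2))
        nengo.Ensemble(10, 1)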

nengo_dl.graph_optimizer.noop_planner(operators)[source]

Orders operators into a valid execution order, but does not perform any merging.

Parameters:
operators : list of Operator

all the nengo operators in a model (unordered)

Returns:
list of tuple of Operator

operators in execution order
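
A sketch of disabling merging entirely via the config system (useful when checking whether operator merging is the source of a problem):

    import nengo
    import nengo_dl
    from nengo_dl import graph_optimizer

    with nengo.Network() as net:
        nengo_dl.configure_settings(planner=graph_optimizer.noop_planner)
        nengo.Ensemble(10, 1)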

nengo_dl.graph_optimizer.transitive_planner(op_list)[source]

Create merged execution plan through transitive closure construction.

This is something like a middle ground between greedy_planner() and tree_planner(); it can improve simulation time over the greedy planner, but comes with potentially significant build time increases.

Parameters:
op_list : list of Operator

all the nengo operators in a model (unordered)

Returns:
list of tuple of Operator

operators combined into mergeable groups and in execution order
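
An illustrative comparison of how aggressively this planner merges operators relative to greedy_planner (the exact group counts depend on the model):

    import nengo
    from nengo_dl import graph_optimizer

    with nengo.Network() as net:
        ens = nengo.Ensemble(50, 2)
        nengo.Connection(ens, ens)

    with nengo.Simulator(net) as sim:
        ops = sim.model.operators

    print("greedy:", len(graph_optimizer.greedy_planner(ops)), "groups")
    print("transitive:", len(graph_optimizer.transitive_planner(ops)), "groups")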

nengo_dl.graph_optimizer.transitive_closure_recurse(dg, ops, trans, builder_type, builder_types, cache)[source]

Computes the transitive closure for the given graph, restricted to the operators with the given builder type.

Parameters:
dg : dict of {int: set of int}

dependency graph where dg[a] = {b, c} indicates that operators b and c are dependent on a

ops : list of int

the operators for which we want to compute the transitive closure

trans : dict of {int: set of int}

the transitive closure for the graph (will be filled in-place)

builder_type : type

one of the nengo_dl build classes (e.g., CopyBuilder), specifying the type of operators to include in the transitive closure

builder_types : list of type

the build class for each operator

cache : dict of {frozenset of int: set of int}

stores base sets which trans will reference (to reduce memory usage, since many elements in trans will have the same value)

Notes

This function uses ints to refer to operators, where the int indicates the index of the operator in the overall op list (this is done to save memory). See transitive_planner().

nengo_dl.graph_optimizer.order_signals(plan, n_passes=10)[source]

Orders signals and operators to try to structure reads in contiguous blocks.

Parameters:
plan : list of tuple of Operator

operator execution plan (e.g., output from greedy_planner)

n_passes : int, optional

number of repeated passes through the operator reordering stage

Returns:
list of Signal

signals organized into the order in which we want them arranged in memory

list of tuple of Operator

input plan with operators reordered within groups to align with order of signals
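
A sketch of applying the signal ordering to the output of a planner (normally this happens internally during simulator construction; n_passes=5 is an arbitrary illustrative value):

    import nengo
    from nengo_dl import graph_optimizer

    with nengo.Network() as net:
        ens = nengo.Ensemble(20, 1)
        nengo.Connection(ens, ens)

    with nengo.Simulator(net) as sim:
        plan = graph_optimizer.greedy_planner(sim.model.operators)

    # reorder signals/operators so that reads tend to fall in contiguous blocks
    sigs, plan = graph_optimizer.order_signals(plan, n_passes=5)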

nengo_dl.graph_optimizer.hamming_sort(blocks)[source]

Reorder signals using heuristics to try to place signals that are read by the same operators into adjacent positions (giving priority to larger blocks).

Parameters:
blocks : dict of {Signal: frozenset of int}

dictionary indicating which read blocks each signal is a part of

Returns:
dict of {Signal: int}

indices indicating where each signal should be in the sorted list
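
An illustrative sketch with three dummy signals and two hypothetical read blocks (the block indices and signals here are made up purely to show the input format):

    import numpy as np
    from nengo.builder.signal import Signal
    from nengo_dl import graph_optimizer

    # signals a and b share read block 0, signals b and c share read block 1
    a, b, c = (Signal(np.zeros(1), name=name) for name in "abc")
    blocks = {a: frozenset([0]), b: frozenset([0, 1]), c: frozenset([1])}

    sort_idxs = graph_optimizer.hamming_sort(blocks)
    print(sorted(blocks, key=lambda s: sort_idxs[s]))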

nengo_dl.graph_optimizer.sort_ops_by_signals(sorted_reads, sigs, sig_idxs, new_plan, blocks, reads)[source]

Rearrange operators to match the order of signals.

Note: the same operators can be associated with multiple read blocks if they have multiple inputs, so rearranging the operators according to one of those blocks could disrupt the order with respect to the other read blocks. We iterate through the read blocks in order of increasing size so that the largest blocks win out.

Parameters:
sorted_reads : list of tuple of (Operator, int)

the operators that form each read block, sorted by increasing size of the read block. In the case that a group of operators participates in multiple read blocks, the integer distinguishes which of those inputs this block is associated with.

sigs : list of Signal

signals that have been arranged into a given order by other parts of the algorithm

sig_idxs : dict of {Signal: int}

sorted indices of signals

new_plan : dict of {tuple of Operator: tuple of Operator}

mapping from original operator group to the sorted operators

blocks : dict of {Signal: frozenset of int}

indicates which read blocks each signal participates in

reads : dict of {Operator: list of Signal}

the signals read by each operator

Returns:
new_plan : dict of {tuple of Operator: tuple of Operator}

mapping from original operator group to the sorted operators

sig_idxs : dict of {Signal: int}

signal indices, possibly updated to match new op order

nengo_dl.graph_optimizer.sort_signals_by_ops(sorted_reads, sigs, sig_idxs, new_plan, blocks, reads)[source]

Attempts to rearrange sigs so that they are in the same order as operator reads, without changing the overall block order.

Parameters:
sorted_reads : list of tuple of (Operator, int)

the operators that form each read block, sorted by increasing size of the read block. In the case that a group of operators participates in multiple read blocks, the integer distinguishes which of those inputs this block is associated with.

sigs : list of Signal

signals to be sorted

sig_idxs : dict of {Signal: int}

sorted indices of signals

new_plan : dict of {tuple of Operator: tuple of Operator}

mapping from original operator group to the sorted operators

blocks : dict of {Signal: frozenset of int}

indicates which read blocks each signal participates in

reads : dict of {Operator: list of Signal}

the signals read by each operator

Returns:
sig_idxs : dict of {Signal: int}

sorted indices of signals

nengo_dl.graph_optimizer.noop_order_signals(plan, **kwargs)[source]

A version of graph_optimizer.order_signals() that doesn’t do any reordering, for debugging.
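
A sketch of selecting it through the config system, assuming the standard sorter option of nengo_dl.configure_settings:

    import nengo
    import nengo_dl
    from nengo_dl import graph_optimizer

    with nengo.Network() as net:
        nengo_dl.configure_settings(sorter=graph_optimizer.noop_order_signals)
        nengo.Ensemble(10, 1)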

nengo_dl.graph_optimizer.create_signals(sigs, plan, float_type, minibatch_size)[source]

Groups signal data together into larger arrays, and represents each individual signal as a slice into one of those arrays.

Parameters:
sigs : list of Signal

base signals arranged into the order in which they should reside in memory (e.g., output from order_signals)

plan : list of tuple of Operator

operator execution plan (only used to get a list of all the operators)

float_type : np.float32 or np.float64

floating point precision to use for signals

minibatch_size : int

number of items in each minibatch

Returns:
base_arrays : dict of {object: ndarray}

combined arrays, containing the initial values for all signals

sig_map : dict of {Signal: signals.TensorSignal}

mapping from nengo Signals to nengo_dl TensorSignals (views into the base arrays)
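
Continuing the order_signals sketch above (reusing the sigs and plan returned there), the combined base arrays could be built as follows; np.float32 and a minibatch size of 1 are illustrative choices:

    import numpy as np

    base_arrays, sig_map = graph_optimizer.create_signals(
        sigs, plan, float_type=np.float32, minibatch_size=1)

    # each TensorSignal in sig_map indexes into one of the combined base arrays
    print(len(base_arrays), "base arrays for", len(sig_map), "signals")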

nengo_dl.graph_optimizer.remove_unmodified_resets(operators)[source]

Remove any Reset operators that are targeting a signal that is never modified.

If a signal is reset, but never incremented or updated after that, we can just set the default signal value to the reset value and remove the reset. Note: this wouldn't normally happen, but it can happen if some of the increment operators were removed (e.g., by remove_zero_incs).

Parameters:
operators : list of Operator

operators in the model

Returns:
list of Operator

modified list of operators

nengo_dl.graph_optimizer.remove_zero_incs(operators)[source]

Remove any operators where we know the input (and therefore output) is zero.

If the input to a DotInc/ElementwiseInc/Copy is zero then we know that the output of the op will be zero, so we can just get rid of it.

Parameters:
operators : list of Operator

operators in the model

Returns:
list of Operator

modified list of operators

nengo_dl.graph_optimizer.remove_constant_copies(operators)[source]

Change Copies with constant input to Resets.

If a Copy has no dependencies, or just one Reset() dependency, then we can change it to an op that just directly sets the output signal to the Copy input value.

Parameters:
operators : list of Operator

operators in the model

Returns:
list of Operator

modified list of operators

nengo_dl.graph_optimizer.remove_identity_muls(operators)[source]

Change y=x*1 ops to y=x Copy ops.

If one of the inputs to a DotInc/ElementwiseInc is 1 then we can skip the multiplication and change it to a Copy op.

Parameters:
operators : list of Operator

operators in the model

Returns:
list of Operator

modified list of operators
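
These four simplification functions are typically applied together, either directly to the operator list or through the simplifications option of nengo_dl.configure_settings (as in the configuration sketch at the top of this section). A direct-application sketch, with the functions applied in an arbitrary illustrative order:

    import nengo
    from nengo_dl import graph_optimizer

    with nengo.Network() as net:
        ens = nengo.Ensemble(10, 1)
        nengo.Connection(ens, ens)

    with nengo.Simulator(net) as sim:
        ops = sim.model.operators

    for simplification in (graph_optimizer.remove_zero_incs,
                           graph_optimizer.remove_unmodified_resets,
                           graph_optimizer.remove_constant_copies,
                           graph_optimizer.remove_identity_muls):
        ops = simplification(ops)

    print(len(sim.model.operators), "operators reduced to", len(ops))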