0.6.1 (March 7, 2018)
- Added TensorFlow implementation for the nengo.SpikingRectifiedLinear neuron type
- Optimizer variables (e.g., momentum values) will only be initialized the first time that optimizer is passed to sim.train. Subsequent calls to sim.train will resume with the values from the previous call (see the sketch after this list).
- Low-level simulation input/output formats have been reworked to make them slightly easier to use (for users who want to bypass sim.train and access the TensorFlow session directly).
- Batch dimension will always be first (if present) when checking model parameters via sim.data
- TensorFlow ops created within the Simulator context will now default to the same device as the Simulator.
- Update minimum Nengo version to 2.7.0
- Better error message if training data has incorrect rank
- Avoid reinstalling TensorFlow if one of the nightly build packages is already installed
- Lowpass synapse can now be applied to multidimensional inputs
- TensorNodes will no longer be built into the default graph when checking their output dimensionality.
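A minimal sketch of the resumed-optimizer behavior described above, using the 0.6-era sim.train interface (the network, data shapes, and hyperparameters are illustrative, not taken from the release notes):

    import nengo
    import numpy as np
    import tensorflow as tf
    import nengo_dl

    with nengo.Network() as net:
        inp = nengo.Node([0.5])
        ens = nengo.Ensemble(10, 1, neuron_type=nengo.RectifiedLinear())
        nengo.Connection(inp, ens)
        probe = nengo.Probe(ens)

    # training data shaped (minibatch_size, n_steps, dimensions)
    inputs = {inp: np.ones((4, 5, 1)) * 0.5}
    targets = {probe: np.zeros((4, 5, 1))}

    opt = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)

    with nengo_dl.Simulator(net, minibatch_size=4) as sim:
        # first call: the optimizer's momentum slot variables are initialized
        sim.train(inputs, targets, opt, n_epochs=1)
        # second call: training resumes with the accumulated momentum values
        sim.train(inputs, targets, opt, n_epochs=1)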
0.6.0 (December 13, 2017)
- The SoftLIFRate neuron type now has an amplitude parameter, which scales the output in the same way as the new amplitude parameter in LIF/LIFRate (see Nengo PR #1325).
- Added a progress_bar=False option to sim.run, which will disable the information about the simulation status printed to standard output (#17).
- Added progress bars for the build/simulation process.
- Added truncated backpropagation option to sim.train (useful for reducing memory usage during training); see the documentation for details and the sketch after this list.
- Changed the default tensorboard argument in Simulator from False to None
- Use the new tf.profiler tool to collect profiling data when profile=True
- Minor improvements to efficiency of build process.
- Minor improvements to simulation efficiency targeting small ops
- Process inputs are now reseeded for each input when batch processing (if seed is not manually set).
- Users can pass a dict of config options for the profile argument in sim.train, which will be passed on to the TensorFlow profiler; see the tf.profiler documentation for the available options.
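A sketch of the truncated-backpropagation option, assuming it is exposed as a truncation argument to sim.train (the network and segment length are illustrative; see the documentation for the exact interface):

    import nengo
    import numpy as np
    import tensorflow as tf
    import nengo_dl

    with nengo.Network() as net:
        inp = nengo.Node([0.5])
        ens = nengo.Ensemble(10, 1, neuron_type=nengo.RectifiedLinear())
        nengo.Connection(inp, ens)
        probe = nengo.Probe(ens)

    # 20-step training sequences, shaped (minibatch_size, n_steps, dimensions)
    inputs = {inp: np.ones((4, 20, 1)) * 0.5}
    targets = {probe: np.zeros((4, 20, 1))}

    with nengo_dl.Simulator(net, minibatch_size=4) as sim:
        # backpropagate through 5-step segments instead of the full 20-step
        # sequence, reducing peak memory usage during training
        sim.train(inputs, targets, tf.train.GradientDescentOptimizer(0.01),
                  n_epochs=1, truncation=5)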
0.5.2 (October 11, 2017)
- TensorNode outputs can now define a post_build function that will be executed after the simulation is initialized (see the TensorNode documentation for details).
- Added functionality for outputting summary data during the training process that can be viewed in TensorBoard (see the sim.train documentation).
- Added some examples demonstrating how to use Nengo DL in a more complicated task using semantic pointers to encode/retrieve information
- Added a sim.training_step variable which will track the current training iteration (can be used, e.g., for TensorFlow’s variable learning rate operations).
- Users can manually create tf.summary ops and pass them to sim.train
- The Simulator context will now also set the default TensorFlow graph to the one associated with the Simulator (so any TensorFlow ops created within the Simulator context will automatically be added to the correct graph)
- Users can now specify a different objective for each output probe during training/loss calculation (see the sim.train documentation).
- Resetting the simulator now only rebuilds the necessary components in the graph (as opposed to rebuilding the whole graph)
- The default "mse" loss implementation will now automatically convert np.nan values in the target to zero error (see the sketch after this list)
- If there are multiple target probes given to sim.loss, the total error will now be summed across probes (instead of averaged)
- sim.data now implements the full collections.Mapping interface
- Fixed bug where signal order was non-deterministic for Networks containing objects with duplicate names (#9)
- Fixed bug where non-slot optimizer variables were not initialized (#11)
- Implemented a modified PES builder in order to avoid slicing encoders on non-decoded PES connections
- TensorBoard output directory will be automatically created if it doesn’t exist
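A sketch of the np.nan masking behavior in the default "mse" objective (the network and shapes are illustrative):

    import nengo
    import numpy as np
    import tensorflow as tf
    import nengo_dl

    with nengo.Network() as net:
        inp = nengo.Node([0.5])
        ens = nengo.Ensemble(10, 1, neuron_type=nengo.RectifiedLinear())
        nengo.Connection(inp, ens)
        probe = nengo.Probe(ens)

    inputs = {inp: np.ones((4, 10, 1)) * 0.5}
    targets = np.ones((4, 10, 1))
    # nan entries are converted to zero error, so the first 5 steps of each
    # sequence are effectively excluded from the loss
    targets[:, :5, :] = np.nan

    with nengo_dl.Simulator(net, minibatch_size=4) as sim:
        sim.train(inputs, {probe: targets},
                  tf.train.GradientDescentOptimizer(0.01), objective="mse")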
0.5.1 (August 28, 2017)
- sim.data[obj] will now return live parameter values from the simulation, rather than initial values from the build process. That means it can be used to get the values of object parameters after training (see the sketch after this list).
- Increased minimum Nengo version to 2.5.0.
- Increased minimum TensorFlow version to 1.3.0.
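A sketch of reading live parameter values through sim.data (the network and training call are illustrative):

    import nengo
    import numpy as np
    import tensorflow as tf
    import nengo_dl

    with nengo.Network() as net:
        inp = nengo.Node([0.5])
        a = nengo.Ensemble(10, 1, neuron_type=nengo.RectifiedLinear())
        b = nengo.Ensemble(10, 1, neuron_type=nengo.RectifiedLinear())
        nengo.Connection(inp, a)
        conn = nengo.Connection(a, b)
        probe = nengo.Probe(b)

    with nengo_dl.Simulator(net, minibatch_size=4) as sim:
        before = sim.data[conn].weights  # initial values from the build
        sim.train({inp: np.ones((4, 5, 1))}, {probe: np.zeros((4, 5, 1))},
                  tf.train.GradientDescentOptimizer(0.1), n_epochs=2)
        after = sim.data[conn].weights  # live values reflecting the training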
0.5.0 (July 11, 2017)
- Added nengo_dl.tensor_layer to help with the construction of layer-style TensorNodes (see the TensorNode documentation and the sketch after this list)
- Added an example demonstrating how to train a neural network that can run in spiking neurons
- Added some distributions for weight initialization to nengo_dl.dists
- Added a sim.train(..., profile=True) option to collect profiling information during training
- Added new methods to simplify the Nengo operation graph, resulting in faster simulation/training speed
- The default graph planner can now be modified by setting the planner attribute on the top-level Network config
- Added TensorFlow implementation for general linear synapses
- Added a backports.print_function requirement for Python 2.7 systems
- Increased minimum TensorFlow version to 1.2.0
- Improved error checking for input/target data
- Improved efficiency of stateful gradient operations, resulting in faster training speed
- The functionality for nengo_dl.configure_trainable has been subsumed into the more general nengo_dl.configure_settings(trainable=x). This has resulted in some small changes to how trainability is controlled within subnetworks; see the updated documentation for details.
- Calls to Simulator.loss no longer reset the internal state of the simulation (so they can be safely intermixed with calls to Simulator.run)
- The old unroll_simulation syntax has been fully deprecated, and will result in errors if used
- Fixed bug related to changing the output of a Node after the model is constructed (#4)
- Order of variable creation is now deterministic (helps make saving/loading parameters more reliable)
- Configuring whether or not a model element is trainable does not affect whether or not that element is minibatched
- Correctly reuse variables created inside a TensorNode when unroll_simulation > 1
- Correctly handle probes that aren’t connected to any ops
- Updated dists.VarianceScaling to align with the standard definitions
- Temporary patch to fix a memory leak in TensorFlow (see TensorFlow issue #11273)
- Fixed bug related to nodes that had matching output functions but different size_out
- Fixed bug related to probes that do not contain any data yet
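A sketch combining the new tensor_layer helper with the configure_settings(trainable=x) form described above (the layer functions and sizes are illustrative):

    import nengo
    import tensorflow as tf
    import nengo_dl

    with nengo.Network() as net:
        # set the default trainability for objects in this network
        # (replaces the old nengo_dl.configure_trainable)
        nengo_dl.configure_settings(trainable=True)

        inp = nengo.Node([0] * 4)
        # tensor_layer wraps a TensorFlow layer function in a TensorNode,
        # connecting it to the given input
        x = nengo_dl.tensor_layer(inp, tf.layers.dense, units=16)
        x = nengo_dl.tensor_layer(x, tf.nn.relu)
        out = nengo_dl.tensor_layer(x, tf.layers.dense, units=2)
        probe = nengo.Probe(out)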
0.4.0 (June 8, 2017)
- Added ability to manually specify which parts of a model are trainable (see the sim.train documentation)
- Added some code examples (see the docs/examples directory, or the pre-built examples in the documentation)
- Added the SoftLIFRate neuron type for training LIF networks (based on this paper)
- Updated TensorFuncParam to new Nengo Param syntax
- The interface for Simulator's unroll_simulation argument has been changed: unroll_simulation now takes an integer argument, which is equivalent to the old step_blocks, and unroll_simulation=1 is equivalent to the old unroll_simulation=False. For example, Simulator(..., unroll_simulation=True, step_blocks=10) is now equivalent to Simulator(..., unroll_simulation=10) (see the migration sketch after this list).
- Simulator.train/Simulator.loss no longer require step_blocks (or the new unroll_simulation) to be specified; the number of steps to train across will now be inferred from the input data.
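The unroll_simulation migration as a before/after sketch:

    import nengo
    import nengo_dl

    with nengo.Network() as net:
        inp = nengo.Node([0.5])
        nengo.Probe(inp)

    # old (pre-0.4.0) syntax:
    #   nengo_dl.Simulator(net, unroll_simulation=True, step_blocks=10)
    # new (0.4.0+) syntax: unroll_simulation takes the integer that
    # step_blocks used to take
    sim = nengo_dl.Simulator(net, unroll_simulation=10)
    sim.close()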
0.3.1 (May 12, 2017)
- Added more documentation on Simulator arguments
- Improved efficiency of tree_planner, made it the new default planner
- Correctly handle input feeds when n_steps > step_blocks
- Detect cycles in transitive planner
- Fix bug in uneven step_blocks rounding
- Fix bug in Simulator.print_params
- Fix bug related to merging of learning rule with different dimensionality
- Use tf.Session instead of tf.InteractiveSession, to avoid strange side effects if the simulator isn’t closed properly
0.3.0 (April 25, 2017)
- Use logger for debug/builder output
- Implemented TensorFlow gradients for sparse Variable update Ops, to allow models with those elements to be trained
- Added tutorial/examples on using sim.train
- Added support for training models when unroll_simulation=False
- Compatibility changes for Nengo 2.4.0
- Added a new graph planner algorithm, which can improve simulation speed at the cost of build time
- Significant improvements to simulation speed
- Use sparse Variable updates for signals.scatter/gather
- Improved graph optimizer memory organization
- Implemented sparse matrix multiplication op, to allow more aggressive merging of DotInc operators
- Significant improvements to build speed
- Added early termination to graph optimization
- Algorithmic improvements to graph optimization functions
- Reorganized documentation to more clearly direct new users to relevant material
- Fix bug where passing a built model to the Simulator more than once would result in an error
- Cache result of calls to tensor_graph.build_loss/build_optimizer, so that we don’t unnecessarily create duplicate elements in the graph on repeated calls
- Fix support for Variables on GPU when unroll_simulation=False
- SimPyFunc operators will always be assigned to CPU, even when device="/gpu:0", since there is no GPU kernel
- Fixed bug where Simulator.loss was not being computed correctly for models with internal state
- Data/targets passed to Simulator.train will be truncated if not evenly divisible by the specified minibatch size (see the sketch after this list)
- Fixed bug where in some cases Nodes with side effects would not be run if their output was not used in the simulation
- Fixed bug where strided reads that cover a full array would be interpreted as non-strided reads of the full array
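A small illustration of the minibatch truncation behavior noted above (shapes are illustrative, and the call uses the newer train signature, in which the number of steps is inferred from the data):

    import nengo
    import numpy as np
    import tensorflow as tf
    import nengo_dl

    with nengo.Network() as net:
        inp = nengo.Node([0.5])
        ens = nengo.Ensemble(10, 1, neuron_type=nengo.RectifiedLinear())
        nengo.Connection(inp, ens)
        probe = nengo.Probe(ens)

    # 10 training examples with minibatch_size=4: the last 2 examples do not
    # fill a complete minibatch, so they are dropped rather than causing an error
    inputs = {inp: np.ones((10, 5, 1))}
    targets = {probe: np.zeros((10, 5, 1))}

    with nengo_dl.Simulator(net, minibatch_size=4) as sim:
        sim.train(inputs, targets, tf.train.GradientDescentOptimizer(0.01))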
0.2.0 (March 13, 2017)
Initial release of TensorFlow-based NengoDL
0.1.0 (June 12, 2016)
Initial release of Lasagne-based NengoDL
Contributing to NengoDL
Issues and pull requests are always welcome! We appreciate help from the community to make NengoDL better.
If you find a bug in NengoDL, or think that a certain feature is missing, please consider filing an issue. Please search the currently open issues first to see if your bug or feature request already exists. If so, feel free to add a comment to the issue so that we know that multiple people are affected.
Making pull requests
If you want to fix a bug or add a feature to NengoDL, we welcome pull requests. We try to maintain 100% test coverage, so any new features should also include unit tests to cover that change. If you fix a bug it’s also a good idea to add a unit test, so that the bug doesn’t get un-fixed in the future!
We require that all contributions be covered under our contributor assignment agreement. Please see the agreement for instructions on how to sign.
Copyright (c) 2015-2018 Applied Brain Research Inc.
NengoDL is made available under a proprietary license that permits using, copying, sharing, and making derivative works from NengoDL and its source code for any non-commercial purpose, as long as the above copyright notice and this permission notice are included in all copies or substantial portions of the software.
If you would like to use NengoDL commercially, licenses can be purchased from Applied Brain Research, Inc. Please contact firstname.lastname@example.org for more information.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
NengoDL imports several open source libraries:
- NumPy - Used under BSD license
- TensorFlow - Used under Apache license
- Progressbar 2 - Used under BSD license
- backports.tempfile - Used under PSF license
To build the documentation, NengoDL uses:
- GitHub Pages Import - Used under Tumbolia Public License
- Jupyter - Used under BSD license
- matplotlib - Used under modified PSF license
- nbsphinx - Used under MIT license
- numpydoc - Used under BSD license
- Pillow - Used under PIL license
- Sphinx - Used under BSD license
- sphinx_rtd_theme - Used under MIT license
- sphinxcontrib-versioning - Used under MIT license
To run the unit tests, NengoDL uses:
- Coverage.py - Used under Apache license
- Flake8 - Used under MIT license
- matplotlib - Used under modified PSF license
- nbval - Used under BSD license
- pytest - Used under MIT license
- pytest-xdist - Used under MIT license