Documentation

Join the Nengo community and learn the ropes

The Nengo Ecosystem

The Nengo ecosystem is made up of several interacting projects. The following image groups these projects into rough categories.

Nengo Ecosystem Chart
New to Nengo?

Download Nengo and start running models today!

Getting started

Core Framework

The core of the Nengo ecosystem is the Python library nengo, which includes the five Nengo objects (Ensemble, Node, Connection, Probe, Network) and a NumPy-based simulator.
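A rough sketch of how these objects fit together, assuming only the nengo package (the particular network below is illustrative, not prescriptive):

```python
import numpy as np
import nengo

with nengo.Network() as model:
    # Node: supplies external input (here, a sine wave).
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    # Ensemble: a population of neurons representing a value.
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    # Connection: moves (and optionally transforms) signals between objects.
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=np.square)
    # Probe: records data during the simulation.
    probe = nengo.Probe(b, synapse=0.01)

# The NumPy-based reference simulator runs the model.
with nengo.Simulator(model) as sim:
    sim.run(1.0)

print(sim.trange().shape, sim.data[probe].shape)
```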

NengoGUI is a browser-based tool for interactively constructing and visualizing Nengo models.

NengoDL simulates Nengo models using the TensorFlow library, making it easy to interact with deep learning networks and to use deep learning training methods to optimize Nengo model parameters.
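A minimal sketch of that drop-in usage, assuming the nengo_dl package is installed (the toy network here is arbitrary):

```python
import numpy as np
import nengo
import nengo_dl

with nengo.Network() as net:
    stim = nengo.Node(np.sin)
    ens = nengo.Ensemble(n_neurons=50, dimensions=1)
    nengo.Connection(stim, ens)
    probe = nengo.Probe(ens, synapse=0.01)

# nengo_dl.Simulator runs the same model on TensorFlow instead of NumPy;
# it also exposes Keras-style compile/fit methods for training.
with nengo_dl.Simulator(net) as sim:
    sim.run(1.0)
    print(sim.data[probe].shape)
```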

Add-ons & Models

NengoSPA implements the Semantic Pointer Architecture (SPA), which uses Nengo to build large cognitive models.
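A hedged sketch of the idea, assuming the nengo_spa package and its operator syntax (the vocabulary size and pointer names here are arbitrary):

```python
import nengo
import nengo_spa as spa

with spa.Network() as model:
    # States hold semantic pointers (high-dimensional vectors).
    color = spa.State(vocab=32)
    shape = spa.State(vocab=32)
    bound = spa.State(vocab=32)

    # Route symbolic pointers into the states and bind them together
    # (binding is implemented with circular convolution).
    spa.sym.BLUE >> color
    spa.sym.CIRCLE >> shape
    color * shape >> bound

with nengo.Simulator(model) as sim:
    sim.run(0.5)
```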

NengoExtras provides extra utilities and add-ons for Nengo. These utilities are helpful when you need them, but they are not necessary for using Nengo, so we keep them separate to keep the Nengo core as small as possible.

NengoLib offers additional extensions for large-scale brain modelling with Nengo, including advanced dynamics networks, additional synapse models, and more.

KerasSpiking provides tools for training and running spiking neural networks directly within the Keras framework.
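For example, a sketch of swapping a standard activation for a spiking one, assuming the keras_spiking package (layer sizes are arbitrary):

```python
import tensorflow as tf
import keras_spiking

# KerasSpiking layers expect a time axis: (batch, timesteps, features).
inp = tf.keras.Input((None, 784))
x = tf.keras.layers.Dense(128)(inp)
# SpikingActivation replaces a standard "relu" with spiking neurons that
# emit spikes over the time axis.
x = keras_spiking.SpikingActivation("relu")(x)
x = tf.keras.layers.Dense(10)(x)

model = tf.keras.Model(inp, x)
model.summary()
```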

PyTorchSpiking provides tools for training and running spiking neural networks directly within the PyTorch framework.

Simulation Backends

NengoFPGA is an extension of Nengo that allows portions of a network to be run on an FPGA to improve performance and efficiency.

NengoLoihi runs Nengo models on Intel's Loihi neuromorphic hardware. NengoLoihi also includes a software simulation of Loihi’s spiking neuron cores so that models can be prototyped before running on real hardware.
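A sketch of that workflow, assuming the nengo_loihi package; when no hardware interface is present, the same script can be run against the included emulator (the toy network is arbitrary):

```python
import numpy as np
import nengo
import nengo_loihi

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, ens)
    probe = nengo.Probe(ens, synapse=0.01)

# Swap nengo.Simulator for nengo_loihi.Simulator; the model itself is
# unchanged. Other backends (e.g. nengo_ocl.Simulator) follow the same
# drop-in pattern.
with nengo_loihi.Simulator(model) as sim:
    sim.run(1.0)
```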

NengoOCL uses the OpenCL framework to run Nengo models on GPUs and other OpenCL-capable devices. Most models run significantly faster with NengoOCL than with the NumPy-based reference simulator.

NengoSpiNNaker simulates Nengo models on the SpiNNaker neuromorphic hardware architecture. Models running on SpiNNaker always execute in real time.

NengoMPI simulates Nengo models with a C++ backend that uses MPI to parallelize model execution across large numbers of heterogeneous processing units.