Deep learning

Classes and utilities for doing deep learning with Nengo.

Warning

These utilities were created before Nengo DL, which provides tighter integration between Nengo and TensorFlow. If you are new to Nengo or new to doing deep learning in Nengo, we recommend that you first check out Nengo DL to see if it fits your use case.

Datasets

The following functions make use of Nengo’s RC file to set the location for data downloads. See the Nengo configuration documentation for instructions on editing RC files. The relevant configuration setting is shown below.

[nengo_extras]
# directory to store downloaded datasets
data_dir = ~/data

nengo_extras.data.get_cifar10_tar_gz()

nengo_extras.data.get_cifar100_tar_gz()

nengo_extras.data.get_ilsvrc2012_tar_gz()

nengo_extras.data.get_mnist_pkl_gz()

nengo_extras.data.get_svhn_tar_gz()

nengo_extras.data.load_cifar10([filepath, …])

Load the CIFAR-10 dataset.

nengo_extras.data.load_cifar100([filepath, …])

Load the CIFAR-100 dataset.

nengo_extras.data.load_ilsvrc2012([…])

Load part of the ILSVRC 2012 (ImageNet) dataset.

nengo_extras.data.load_mnist([filepath, …])

Load the MNIST dataset.

nengo_extras.data.load_svhn([filepath, …])

Load the SVHN dataset.

nengo_extras.data.spasafe_name(name[, …])

Make a name safe to use as a SPA semantic pointer name.

nengo_extras.data.spasafe_names(label_names)

Make names safe to use as SPA semantic pointer names.

nengo_extras.data.one_hot_from_labels(labels)

Turn integer labels into a one-hot encoding.

nengo_extras.data.ZCAWhiten([beta, gamma])

ZCA Whitening

nengo_extras.data.get_cifar10_tar_gz()[source]
nengo_extras.data.get_cifar100_tar_gz()[source]
nengo_extras.data.get_ilsvrc2012_tar_gz()[source]
nengo_extras.data.get_mnist_pkl_gz()[source]
nengo_extras.data.get_svhn_tar_gz()[source]
nengo_extras.data.load_cifar10(filepath=None, n_train=5, n_test=1, label_names=False)[source]

Load the CIFAR-10 dataset.

Parameters
filepath : str (optional, Default: None)

Path to the previously downloaded ‘cifar-10-python.tar.gz’ file. If None, the file will be downloaded to the current directory.

n_train : int (optional, Default: 5)

The number of training batches to load (max: 5).

n_test : int (optional, Default: 1)

The number of testing batches to load (max: 1).

label_names : boolean (optional, Default: False)

Whether to provide the category label names.

Returns
train_set : (n_train, n_pixels) ndarray, (n_train,) ndarray

A tuple of the training image array and label array.

test_set : (n_test, n_pixels) ndarray, (n_test,) ndarray

A tuple of the testing image array and label array.

label_names : list

A list of the label names.
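
A minimal usage sketch, assuming the returns described above (the label names are only included when label_names=True):

from nengo_extras.data import load_cifar10

# Load all training and testing batches plus the category names
(X_train, y_train), (X_test, y_test), names = load_cifar10(label_names=True)
print(X_train.shape, y_train.shape)   # rows are flattened images
print(names[int(y_test[0])])          # assumes integer labels index the name list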

nengo_extras.data.load_cifar100(filepath=None, fine_labels=True, label_names=False)[source]

Load the CIFAR-100 dataset.

Parameters
filepath : str (optional, Default: None)

Path to the previously downloaded ‘cifar-100-python.tar.gz’ file. If None, the file will be downloaded to the current directory.

fine_labels : boolean (optional, Default: True)

Whether to provide the fine labels or coarse labels.

label_names : boolean (optional, Default: False)

Whether to provide the category label names.

Returns
train_set : (n_train, n_pixels) ndarray, (n_train,) ndarray

A tuple of the training image array and label array.

test_set : (n_test, n_pixels) ndarray, (n_test,) ndarray

A tuple of the testing image array and label array.

label_names : list

A list of the label names.

nengo_extras.data.load_ilsvrc2012(filepath=None, n_files=None)[source]

Load part of the ILSVRC 2012 (ImageNet) dataset.

This loads a small section of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 dataset. The images are from the test portion of the dataset, and can be used to test pretrained classifiers.

Parameters
filepath : str (optional, Default: None)

Path to the previously downloaded ‘ilsvrc-2012-batches-test3.tar.gz’ file. If None, the file will be downloaded to the current directory.

n_files : int (optional, Default: None)

Number of files (batches) to load from the archive. Defaults to all.

Returns
images : (n_images, nc, ny, nx) ndarray

The loaded images, where nc = number of channels, ny = height, and nx = width.

labels : (n_images,) ndarray

The labels of the images.

data_mean : (nc, ny, nx) ndarray

The mean of the images over the whole training set.

label_names : list

A list of the label names.

nengo_extras.data.load_mnist(filepath=None, validation=False)[source]

Load the MNIST dataset.

Parameters
filepath : str (optional, Default: None)

Path to the previously downloaded ‘mnist.pkl.gz’ file. If None, the file will be downloaded to the current directory.

validation : boolean (optional, Default: False)

Whether to provide the validation data as a separate set (True), or combine it into the training data (False).

Returns
train_set : (n_train, n_pixels) ndarray, (n_train,) ndarray

A tuple of the training image array and label array.

validation_set : (n_valid, n_pixels) ndarray, (n_valid,) ndarray

A tuple of the validation image array and label array (only returned if validation is True).

test_set : (n_test, n_pixels) ndarray, (n_test,) ndarray

A tuple of the testing image array and label array.
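
A minimal usage sketch, assuming the returns described above (the validation set is only returned when validation=True):

from nengo_extras.data import load_mnist

(X_train, y_train), (X_valid, y_valid), (X_test, y_test) = load_mnist(validation=True)
print(X_train.shape, X_valid.shape, X_test.shape)   # rows are flattened 28x28 images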

nengo_extras.data.load_svhn(filepath=None, n_train=9, n_test=3, data_mean=False, label_names=False)[source]

Load the SVHN dataset.

Parameters
filepath : str (optional, Default: None)

Path to the previously downloaded ‘svhn-py-colmajor.tar.gz’ file. If None, the file will be downloaded to the current directory.

n_train : int (optional, Default: 9)

The number of training batches to load (max: 9).

n_test : int (optional, Default: 3)

The number of testing batches to load (max: 3).

label_names : boolean (optional, Default: False)

Whether to provide the category label names.

Returns
train_set : (n_train, n_pixels) ndarray, (n_train,) ndarray

A tuple of the training image array and label array.

test_set : (n_test, n_pixels) ndarray, (n_test,) ndarray

A tuple of the testing image array and label array.

label_names : list

A list of the label names.

nengo_extras.data.spasafe_name(name, pre_comma_only=True)[source]

Make a name safe to use as a SPA semantic pointer name.

Ensure a name conforms with SPA name standards. Replaces hyphens and spaces with underscores, removes all other characters, and makes the first letter uppercase.

Parameters
pre_comma_only : boolean

Only use the part of the name before the first comma.

nengo_extras.data.spasafe_names(label_names, pre_comma_only=True)[source]

Make names safe to use as SPA semantic pointer names.

Format a list of names to conform with SPA name standards. In addition to running each name through spasafe_name, this function numbers duplicate names so they are unique.

Parameters
pre_comma_only : boolean

Only use the part of the name before the first comma.
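
The transformation described above can be illustrated with a short standalone sketch. This follows the documented rules only and is not the library implementation; the helper name spasafe_name_sketch is made up for illustration:

import re

def spasafe_name_sketch(name, pre_comma_only=True):
    if pre_comma_only:
        name = name.split(",")[0]                  # keep only the part before the first comma
    name = re.sub(r"[-\s]+", "_", name)            # hyphens and spaces become underscores
    name = re.sub(r"[^A-Za-z0-9_]", "", name)      # drop all other characters
    return name[:1].upper() + name[1:]             # capitalize the first letter

print(spasafe_name_sketch("great white shark, Carcharodon carcharias"))  # Great_white_shark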

nengo_extras.data.one_hot_from_labels(labels, classes=None, dtype=float)[source]

Turn integer labels into a one-hot encoding.

Parameters
labels : (n,) array

Labels to turn into a one-hot encoding.

classes : int or (n_classes,) array (optional)

Classes for the encoding. If an integer and labels.dtype is an integer type, this is the number of classes in the encoding. If iterable, this is the list of classes to place in the one-hot encoding (must be a superset of the unique elements in labels).

dtype : dtype (optional)

Data type of the returned one-hot encoding (defaults to float).
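
Conceptually, a one-hot encoding places a single 1 in the column of each row’s class. A plain NumPy sketch of the idea (not the library implementation):

import numpy as np

labels = np.array([0, 2, 1, 2])
n_classes = labels.max() + 1
one_hot = np.zeros((len(labels), n_classes), dtype=float)
one_hot[np.arange(len(labels)), labels] = 1.0
# one_hot is [[1, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 1]]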

class nengo_extras.data.ZCAWhiten(beta=0.01, gamma=1e-05)[source]

ZCA Whitening

References

1. Krizhevsky, Alex. “Learning multiple layers of features from tiny images” (2009). MSc thesis, Dept. of Comp. Science, Univ. of Toronto, pp. 48-49.

fit(X)[source]

Fit the whitening transform to the training data.

Parameters
X : array_like

Flattened data, with each row corresponding to one example.
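
For intuition, ZCA whitening decorrelates features by rotating into the eigenbasis of the data covariance, rescaling, and rotating back. Below is a NumPy sketch of the underlying math following the cited reference; it is not the class internals. The role of beta is not modeled, and gamma is assumed to act as the eigenvalue regularizer:

import numpy as np

X = np.random.randn(1000, 64)                 # rows are flattened examples
Xc = X - X.mean(axis=0)                       # center each feature
cov = Xc.T.dot(Xc) / Xc.shape[0]              # feature covariance
e, V = np.linalg.eigh(cov)                    # eigendecomposition of the covariance
gamma = 1e-5                                  # small regularizer (class default)
W = V.dot(np.diag(1.0 / np.sqrt(e + gamma))).dot(V.T)  # ZCA whitening transform
X_white = Xc.dot(W)                           # whitened data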

Keras

nengo_extras.keras.SoftLIF(*args, **kwargs)

nengo_extras.keras.load_model_pair(filepath)

nengo_extras.keras.save_model_pair(model, …)

nengo_extras.keras.LSUVinit(kmodel, X[, …])

Layer-sequential unit-variance initialization.

class nengo_extras.keras.SoftLIF(*args, **kwargs)[source]
call(x, mask=None)[source]

Compute the SoftLIF nonlinearity.

get_config()[source]

Return a config dict to reproduce this SoftLIF.

nengo_extras.keras.load_model_pair(filepath, custom_objects=None)[source]
nengo_extras.keras.save_model_pair(model, filepath, overwrite=False)[source]
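
A hedged usage sketch based only on the signatures above; it assumes kmodel is an existing Keras model, that the “pair” refers to architecture plus weights saved under one base path, and that load_model_pair returns the reloaded model:

from nengo_extras.keras import save_model_pair, load_model_pair

save_model_pair(kmodel, "my_model", overwrite=True)   # write the architecture/weights pair
kmodel2 = load_model_pair("my_model")                 # reload the same model
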
nengo_extras.keras.LSUVinit(kmodel, X, tol=0.1, t_max=50)[source]

Layer-sequential unit-variance initialization.

References

1. Mishkin, D., & Matas, J. (2016). All you need is a good init. In ICLR 2016 (pp. 1-13).
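
A hedged sketch of applying LSUV initialization to a small Keras model. The architecture and data here are placeholders; it assumes a standard Keras Sequential model and a batch of input data X, and uses only the documented LSUVinit signature:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
from nengo_extras.keras import LSUVinit

X = np.random.randn(256, 100).astype("float32")   # placeholder batch of input data

kmodel = Sequential()
kmodel.add(Dense(64, input_shape=(100,)))
kmodel.add(Activation("relu"))
kmodel.add(Dense(10))

LSUVinit(kmodel, X, tol=0.1, t_max=50)   # layer-sequential rescaling toward unit-variance activations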

Networks

nengo_extras.deepnetworks.DeepNetwork([…])

nengo_extras.deepnetworks.SequentialNetwork(…)

nengo_extras.keras.SequentialNetwork(model)

nengo_extras.deepnetworks.Layer(**kwargs)

nengo_extras.deepnetworks.NodeLayer([…])

nengo_extras.deepnetworks.NeuronLayer(n[, …])

nengo_extras.deepnetworks.DataLayer(size, …)

nengo_extras.deepnetworks.SoftmaxLayer(size, …)

nengo_extras.deepnetworks.DropoutLayer(size, …)

nengo_extras.deepnetworks.FullLayer(weights, …)

nengo_extras.deepnetworks.ProcessLayer(…)

nengo_extras.deepnetworks.LocalLayer(…[, …])

nengo_extras.deepnetworks.ConvLayer(…[, …])

nengo_extras.deepnetworks.PoolLayer(…[, …])

nengo_extras.cuda_convnet.CudaConvnetNetwork(model)

class nengo_extras.deepnetworks.DeepNetwork(label=None, seed=None, add_to_container=None)[source]
class nengo_extras.deepnetworks.SequentialNetwork(**kwargs)[source]
class nengo_extras.keras.SequentialNetwork(model, synapse=None, lif_type='lif', **kwargs)[source]
class nengo_extras.deepnetworks.Layer(**kwargs)[source]
class nengo_extras.deepnetworks.NodeLayer(output=None, size_in=None, **kwargs)[source]
class nengo_extras.deepnetworks.NeuronLayer(n, neuron_type=Default, synapse=Default, gain=1.0, bias=0.0, amplitude=1.0, **kwargs)[source]
class nengo_extras.deepnetworks.DataLayer(size, **kwargs)[source]
class nengo_extras.deepnetworks.SoftmaxLayer(size, **kwargs)[source]
class nengo_extras.deepnetworks.DropoutLayer(size, keep, **kwargs)[source]
class nengo_extras.deepnetworks.FullLayer(weights, biases, **kwargs)[source]
class nengo_extras.deepnetworks.ProcessLayer(process, **kwargs)[source]
class nengo_extras.deepnetworks.LocalLayer(input_shape, filters, biases, strides=1, padding=0, **kwargs)[source]
class nengo_extras.deepnetworks.ConvLayer(input_shape, filters, biases, strides=1, padding=0, border='ceil', **kwargs)[source]
class nengo_extras.deepnetworks.PoolLayer(input_shape, pool_size, strides=None, kind='avg', mode='full', **kwargs)[source]
class nengo_extras.cuda_convnet.CudaConvnetNetwork(model, synapse=None, lif_type='lif', **kwargs)[source]
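
A hedged construction sketch for the Keras wrapper above. It assumes kmodel is a trained Keras Sequential model and uses only the documented constructor arguments:

import nengo
from nengo_extras.keras import SequentialNetwork

with nengo.Network() as net:
    knet = SequentialNetwork(kmodel, synapse=nengo.synapses.Alpha(0.005), lif_type="lif")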

Processes

class nengo_extras.convnet.Conv2d(shape_in, filters, biases=None, strides=1, padding=0, border='ceil')[source]

Perform 2-D (image) convolution on an input.

Parameters
shape_in : 3-tuple (n_channels, height, width)

Shape of the input images: channels, height, width.

filters : array_like (n_filters, n_channels, f_height, f_width)

Static filters to convolve with the input. Shape is number of filters, number of input channels, filter height, and filter width. Shape can also be (n_filters, height, width, n_channels, f_height, f_width) to apply different filters at each point in the image, where ‘height’ and ‘width’ are the input image height and width.

biases : array_like (1,) or (n_filters,) or (n_filters, height, width)

Biases to add to outputs. Can have one bias across the entire output space, one bias per filter, or a unique bias for each output pixel.

strides : 2-tuple (vertical, horizontal) or int

Spacing between filter placements. If an integer is provided, the same spacing is used in both dimensions.

padding : 2-tuple (vertical, horizontal) or int

Amount of zero-padding around the outside of the input image. Padding is applied to both sides, e.g. padding=(1, 0) will add one pixel of padding to the top and bottom, and none to the left and right.
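
A hedged sketch of building a Conv2d process and wrapping it in a nengo.Node. The filter values are random placeholders, and it assumes nengo.Node accepts the process directly:

import numpy as np
import nengo
from nengo_extras.convnet import Conv2d

shape_in = (3, 32, 32)                                  # channels, height, width
filters = np.random.uniform(-0.1, 0.1, (8, 3, 5, 5))    # 8 filters of size 5x5
biases = np.zeros(8)

with nengo.Network() as net:
    conv = nengo.Node(Conv2d(shape_in, filters, biases, strides=1, padding=2))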

class nengo_extras.convnet.Pool2d(shape_in, pool_size, strides=None, kind='avg', mode='full')[source]

Perform 2-D (image) pooling on an input.

Parameters
shape_in : 3-tuple (channels, height, width)

Shape of the input image.

pool_size : 2-tuple (vertical, horizontal) or int

Shape of the pooling region. If an integer is provided, the shape will be square with the given side length.

strides : 2-tuple (vertical, horizontal) or int

Spacing between pooling placements. If None (default), will be equal to pool_size, resulting in non-overlapping pooling.

kind : “avg” or “max”

Type of pooling to perform: average pooling or max pooling.

mode : “full” or “valid”

If the input image does not divide into an integer number of pooling regions, whether to add partial pooling regions for the extra pixels (“full”), or discard extra input pixels (“valid”).

Attributes
shape_out : 3-tuple (channels, height, width)

Shape of the output image.
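
A hedged sketch of a Pool2d process, continuing from the Conv2d sketch above; it assumes that convolution, with padding=2 and 5x5 filters, preserves the 32x32 spatial size:

import nengo
from nengo_extras.convnet import Pool2d

with nengo.Network() as net:
    pool = nengo.Node(Pool2d((8, 32, 32), pool_size=2, kind="avg"))   # output shape (8, 16, 16)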