{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Convolution network\n", "\n", "This demo shows the usage of the convolution network to bind two Semantic Pointers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "import nengo\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "import nengo_spa\n", "from nengo_spa import Vocabulary\n", "\n", "# Change the seed of this RNG to change the vocabulary\n", "rng = np.random.RandomState(0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create and run the model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our model is going to compute the convolution (or binding) of two semantic pointers `A`\n", "and `B`. We can use the `nengo_spa.Vocabulary` class to create the two random semantic\n", "pointers, and compute their exact convolution `C = A * B`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Number of dimensions for the Semantic Pointers\n", "dimensions = 32\n", "\n", "vocab = Vocabulary(dimensions=dimensions, pointer_gen=rng)\n", "\n", "# Set `C` to equal the convolution of `A` with `B`. This will be\n", "# our ground-truth to test the accuracy of the neural network.\n", "vocab.populate(\"A; B; C=A*B\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our network will then use neurons to compute this same convolution. We use the\n", "`nengo.networks.CircularConvolution` class, which performs circular convolution by\n", "taking the Fourier transform of both vectors, performing element-wise complex-number\n", "multiplication in the Fourier domain, and finally taking the inverse Fourier transform\n", "to get the result." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = nengo.Network(seed=1)\n", "with model:\n", " # Get the raw vectors for the pointers using `vocab['A'].v`\n", " a = nengo.Node(output=vocab[\"A\"].v)\n", " b = nengo.Node(output=vocab[\"B\"].v)\n", "\n", " # Make the circular convolution network with 200 neurons\n", " cconv = nengo.networks.CircularConvolution(200, dimensions=dimensions)\n", "\n", " # Connect the input nodes to the input slots `A` and `B` on the network\n", " nengo.Connection(a, cconv.input_a)\n", " nengo.Connection(b, cconv.input_b)\n", "\n", " # Probe the output\n", " out = nengo.Probe(cconv.output, synapse=0.03)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with nengo.Simulator(model) as sim:\n", " sim.run(1.0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Analyze the results\n", "\n", "We plot the dot product between the exact convolution of `A` and `B` (given by `C = A *\n", "B`), and the result of the neural convolution (given by `sim.data[out]`).\n", "\n", "The dot product is a common measure of similarity between semantic pointers, since it\n", "approximates the cosine similarity when the semantic pointer lengths are close to one.\n", "The cosine similarity is a common similarity measure for vectors; it is simply the\n", "cosine of the angle between the vectors.\n", "\n", "Both the dot product and the exact cosine similarity can be computed with\n", "`nengo_spa.similarity`. Normally, this function will compute the dot products between\n", "each data vector and each vocabulary vector, but setting `normalize=True` normalizes all\n", "vectors so that the exact cosine similarity is computed instead." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(sim.trange(), nengo_spa.similarity(sim.data[out], vocab))\n", "plt.legend(list(vocab.keys()), loc=4)\n", "plt.xlabel(\"t [s]\")\n", "plt.ylabel(\"dot product\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above plot shows that the neural output is much closer to `C = A * B` than to either\n", "`A` or `B`, suggesting that our network is correctly computing the convolution. It also\n", "highlights an important property of circular convolution: The circular convolution of\n", "two vectors is dissimilar to both of the vectors.\n", "\n", "The dot product between the neural output and `C` is not exactly one due in large part\n", "to the fact that the length of `C` is not exactly one (see below). To actually measure\n", "the cosine similarity between the vectors (that is, the cosine of the angle between the\n", "vectors), we have to divide the dot product by the lengths of both `C` and the neural\n", "output vector approximating `C`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The length of `C` is not exactly one\n", "print(vocab[\"C\"].length())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Performing this normalization, we can see that the cosine similarity between the neural\n", "output vectors and `C` is almost exactly one, demonstrating that the neural population\n", "is quite accurate in computing the convolution." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(sim.trange(), nengo_spa.similarity(sim.data[out], vocab, normalize=True))\n", "plt.legend(list(vocab.keys()), loc=4)\n", "plt.xlabel(\"t [s]\")\n", "plt.ylabel(\"cosine similarity\")" ] } ], "metadata": { "language_info": { "name": "python", "pygments_lexer": "ipython3" } }, "nbformat": 4, "nbformat_minor": 1 }