DVS from file
This example demonstrates how to load pre-recorded Dynamic Vision Sensor (DVS) event data from a file.
[1]:
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import HTML
from matplotlib.animation import ArtistAnimation
%matplotlib inline
import nengo
import nengo_loihi
# All NengoLoihi models should call this before model construction
nengo_loihi.set_defaults()
rng = np.random.RandomState(0)
Generate synthetic data
Rather than using real DVS data, we will generate some synthetic data and save it in a .events file. In most applications, this will not be necessary, since you will already have a .events or .aedat file from a real DVS camera.
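If you already have such a recording, it can be loaded with the same DVSEvents.from_file loader used later in this notebook. The sketch below is illustrative only: the file name my-recording.events is hypothetical, and we simply inspect the loaded event fields.

# A minimal sketch (not run here): load an existing recording instead of
# generating synthetic data. "my-recording.events" is a hypothetical file name.
recorded = nengo_loihi.dvs.DVSEvents.from_file("my-recording.events")
print(len(recorded.events))  # total number of events in the recording
print(recorded.events[:5])  # each event has fields "t" (in us), "p", "x", and "y"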
[2]:
def jitter_time(n, t, jitter, rng, dtype="<u4"):
    # return n event timestamps spread uniformly within +/- jitter of time t
    assert jitter >= 0
    assert t - jitter >= 0
    tt = (t - jitter) * np.ones(n, dtype=dtype)
    if jitter > 0:
        tt += rng.randint(0, 2 * jitter + 1, size=tt.shape, dtype=dtype)
    return tt
# the height and width of the DVS sensor
dvs_height = 180
dvs_width = 240
# our timestep in microseconds (μs)
dt_us = 1000
# the maximum amount by which to jitter spikes around the timestep (in microseconds)
t_jitter_us = 100
assert t_jitter_us < dt_us // 2
# the length of time to generate data for, in seconds and in microseconds
t_length = 1.0
t_length_us = int(1e6 * t_length)
# the maximum rate of input spikes (per pixel)
max_rate = 10
max_prob = max_rate * 1e-6 * dt_us
# the period of the sine wave, in pixels
period = 120
# these functions control the angle (theta) and phase of the sine wave over time
theta_fn = lambda t: 1
phase_fn = lambda t: 10 * t
X, Y = np.meshgrid(np.linspace(-1, 1, dvs_width), np.linspace(-1, 1, dvs_height))
events = []
for t_us in range(dt_us, t_length_us + 1, dt_us):
    t = t_us * 1e-6
    theta = theta_fn(t)
    phase = phase_fn(t)

    # coordinate of each pixel along the axis at angle theta
    X1 = np.cos(theta) * X + np.sin(theta) * Y

    # signed event probability for each pixel: positive values drive "on"
    # events, negative values drive "off" events
    x = np.linspace(-1.5, 1.5, 50)
    prob = np.sin((np.pi * dvs_height / period) * x + phase) * max_prob
    prob = np.interp(X1, x, prob)

    # sample spikes for this timestep and store them as time-jittered events
    u = rng.rand(*prob.shape)
    s_on = u < prob
    s_off = u < -prob

    y, x = s_off.nonzero()
    tt = jitter_time(len(x), t_us, t_jitter_us, rng, dtype="<u4")
    events.append((tt, 0, x, y))

    y, x = s_on.nonzero()
    tt = jitter_time(len(x), t_us, t_jitter_us, rng, dtype="<u4")
    events.append((tt, 1, x, y))
dvs_events = nengo_loihi.dvs.DVSEvents()
dvs_events.init_events(n_events=sum(len(xx) for _, _, xx, _ in events))
i = 0
for tt, p, xx, yy in events:
    ee = dvs_events.events[i : i + len(xx)]
    ee["t"] = tt
    ee["p"] = p
    ee["x"] = xx
    ee["y"] = yy
    i += len(xx)
events_file_name = "dvs-from-file-events.events"
dvs_events.write_file(events_file_name)
print("Wrote %r" % events_file_name)
del dvs_events
Wrote 'dvs-from-file-events.events'
We can view the data by using the DVSEvents class to load the events, group the events into frames, and then make the frames into a video with the help of Matplotlib’s animation support.
[3]:
dvs_events = nengo_loihi.dvs.DVSEvents.from_file(events_file_name)
dt_frame_us = 20e3
t_frames = dt_frame_us * np.arange(int(round(t_length_us / dt_frame_us)))
fig = plt.figure()
imgs = []
for t_frame in t_frames:
    t0_us = t_frame
    t1_us = t_frame + dt_frame_us
    t = dvs_events.events[:]["t"]
    m = (t >= t0_us) & (t < t1_us)
    events_m = dvs_events.events[m]

    # show "off" (0) events as -1 and "on" (1) events as +1
    events_sign = 2.0 * events_m["p"] - 1

    frame_img = np.zeros((dvs_height, dvs_width))
    frame_img[events_m["y"], events_m["x"]] = events_sign

    img = plt.imshow(frame_img, vmin=-1, vmax=1, animated=True)
    imgs.append([img])
del dvs_events
ani = ArtistAnimation(fig, imgs, interval=50, blit=True)
HTML(ani.to_html5_video())
[3]: (output: an HTML5 video of the animated DVS event frames)
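If you run this code as a script outside a notebook, the HTML display above will not be available. As a sketch (assuming ffmpeg is installed, and with a hypothetical output file name), the same animation can instead be written to disk with Matplotlib's save method:

# A sketch for non-notebook use: write the animation to a video file instead
# of embedding it as HTML. Assumes ffmpeg is installed; the file name is hypothetical.
ani.save("dvs-from-file-events.mp4", writer="ffmpeg", fps=20)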