Naturalizing Neuromorphic Vision Event Streams Using GANs

14 Feb 2021 · Dennis Robey, Wesley Thio, Herbert Iu, Jason Eshraghian

Dynamic vision sensors can operate at high temporal resolutions within resource-constrained environments, though at the expense of capturing static content. The sparse nature of event streams enables efficient downstream processing, as they are well suited to power-efficient spiking neural networks. One of the challenges associated with neuromorphic vision is the lack of interpretability of event streams. While most application use-cases do not intend for the event stream to be visually interpreted by anything other than a classification network, there is a lost opportunity in integrating these sensors into spaces that conventional high-speed CMOS sensors cannot go. For example, biologically invasive sensors such as endoscopes must fit within stringent power budgets, which do not allow for MHz-rate image integration. While dynamic vision sensing can fill this void, the interpretation challenge remains and will degrade confidence in clinical diagnostics. Generative adversarial networks present a possible solution for compensating for a vision chip's poor spatial resolution and lack of interpretability. In this paper, we methodically apply the Pix2Pix network to naturalize the event stream from spike-converted CIFAR-10 and Linnaeus 5 datasets. The quality of the network is benchmarked by performing image classification on the naturalized event streams, which converges to within 2.81% of the accuracy on equivalent raw images, an improvement of 13.19% over unprocessed event streams across the CIFAR-10 and Linnaeus 5 datasets.

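Below is a minimal, self-contained PyTorch sketch of the pipeline the abstract describes: an RGB image is rate-coded into a sparse spike train, the spikes are accumulated into an event-like frame, and the frame is passed through a small Pix2Pix-style encoder-decoder generator. The coding scheme, layer sizes, and number of time steps are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def rate_code(img, num_steps=32):
    """Bernoulli rate coding: pixel intensity -> per-step spike probability."""
    # img: (C, H, W) in [0, 1]; returns (num_steps, C, H, W) binary spikes
    return torch.bernoulli(img.unsqueeze(0).expand(num_steps, *img.shape).contiguous())

def events_to_frame(spikes):
    """Accumulate spikes over time into a single event-count frame."""
    return spikes.mean(dim=0)  # (C, H, W), values in [0, 1]

class TinyPix2PixGenerator(nn.Module):
    """Downsample-upsample generator with one skip connection (U-Net flavour)."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2))
        self.up1   = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU())
        self.up2   = nn.Sequential(nn.ConvTranspose2d(128, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        d1 = self.down1(x)                            # (B, 64, H/2, W/2)
        d2 = self.down2(d1)                           # (B, 128, H/4, W/4)
        u1 = self.up1(d2)                             # (B, 64, H/2, W/2)
        return self.up2(torch.cat([u1, d1], dim=1))   # skip connection, back to (B, out_ch, H, W)

# Example: naturalize one spike-converted CIFAR-10-sized image (32x32 RGB).
img = torch.rand(3, 32, 32)                  # stand-in for a CIFAR-10 image in [0, 1]
frame = events_to_frame(rate_code(img))      # event frame fed to the generator
gen = TinyPix2PixGenerator()
naturalized = gen(frame.unsqueeze(0))        # (1, 3, 32, 32) "naturalized" image
```

In the paper's setting the generator would be trained adversarially against a discriminator on (event frame, raw image) pairs; the sketch only shows the data path from spike-converted input to reconstructed output.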