The Ikshana Hypothesis of Human Scene Understanding

21 Jan 2021 · Venkata Satya Sai Ajay Daliparthi

In recent years, deep neural networks (DNNs) have achieved state-of-the-art performance on several computer vision tasks. However, one typical drawback of these DNNs is their requirement for massive amounts of labeled data. Although few-shot learning methods address this problem, they often rely on techniques such as meta-learning and metric learning layered on top of existing methods. In this work, we address the problem from a neuroscience perspective by proposing a hypothesis named Ikshana, which is supported by several findings in neuroscience. Our hypothesis approximates the process by which the human brain refines the conceptual gist of a natural scene/image while understanding it. While the hypothesis holds no particular novelty in neuroscience, it provides a novel perspective for designing DNNs for vision tasks. Following the Ikshana hypothesis, we design a novel, neurally inspired CNN architecture named IkshanaNet. The empirical results demonstrate the effectiveness of our method, which outperforms several baselines on the full Cityscapes and CamVid semantic segmentation benchmarks as well as on subsets of them.
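
The sketch below is only an illustration of the coarse-to-fine "gist refinement" idea that the hypothesis motivates; it is not the published IkshanaNet architecture. The layer widths, the three refinement passes, and the 19-class output (the Cityscapes class count) are assumptions chosen purely for clarity.

```python
# Illustrative only: a generic coarse-to-fine "gist refinement" block in PyTorch.
# NOT the published IkshanaNet; sizes and the number of passes are assumptions.
import torch
import torch.nn as nn


class GistRefinement(nn.Module):
    """Predict a coarse per-pixel 'gist', then refine it over several passes."""

    def __init__(self, in_ch=3, feat_ch=32, num_classes=19, passes=3):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # One small head per pass; each refines the previous gist estimate.
        self.heads = nn.ModuleList(
            [nn.Conv2d(feat_ch + num_classes, num_classes, 3, padding=1)
             for _ in range(passes)]
        )
        self.num_classes = num_classes

    def forward(self, x):
        feats = self.encode(x)
        n, _, h, w = feats.shape
        gist = torch.zeros(n, self.num_classes, h, w, device=x.device)
        for head in self.heads:
            # Re-read the image features together with the current gist and
            # apply a residual update, i.e. refine the gist.
            gist = gist + head(torch.cat([feats, gist], dim=1))
        return gist


logits = GistRefinement()(torch.randn(1, 3, 128, 256))  # shape (1, 19, 128, 256)
```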


Datasets

Cityscapes · CamVid

Results

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | Cityscapes test | IkshanaNet-1 | Mean IoU (class) | 54.82% | #101 |
| Semantic Segmentation | Cityscapes test | IkshanaNet-1 | Category mIoU | 82.22% | #4 |
| Semantic Segmentation | Cityscapes test | IkshanaNet-3 | Mean IoU (class) | 42.07% | #103 |
| Semantic Segmentation | Cityscapes test | IkshanaNet-3 | Category mIoU | 75.61% | #6 |
| Semantic Segmentation | Cityscapes test | IkshanaNet-2 | Mean IoU (class) | 45.02% | #102 |
| Semantic Segmentation | Cityscapes test | IkshanaNet-2 | Category mIoU | 76.73% | #5 |
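
For context, Mean IoU (class) in the table is the standard per-class intersection-over-union averaged over classes. Below is a minimal sketch of how it is typically computed, assuming integer label maps and the Cityscapes convention of 255 for ignored pixels; it is not the official Cityscapes evaluation script.

```python
# Minimal mean-IoU sketch (assumed conventions: 19 classes, 255 = ignore).
import numpy as np


def mean_iou(pred, target, num_classes=19, ignore_index=255):
    """Compute per-class IoU from a confusion matrix and average over classes."""
    valid = target != ignore_index
    pred, target = pred[valid], target[valid]
    # Confusion matrix: rows = ground-truth class, columns = predicted class.
    conf = np.bincount(
        target * num_classes + pred, minlength=num_classes ** 2
    ).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(0) + conf.sum(1) - inter
    iou = inter / np.maximum(union, 1)
    # Average only over classes that actually appear.
    return iou[union > 0].mean(), iou


pred = np.random.randint(0, 19, size=(512, 1024))
gt = np.random.randint(0, 19, size=(512, 1024))
miou, per_class = mean_iou(pred, gt)
print(f"mIoU: {miou:.4f}")
```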
