Efficient architecture for deep neural networks with heterogeneous sensitivity

12 Oct 2018 · Hyunjoong Cho, Jinhyeok Jang, Chanhyeok Lee, Seungjoon Yang

This work presents a neural network composed of nodes with heterogeneous sensitivity. Each node is assigned a variable that determines the sensitivity with which it learns to perform a given task. The network is trained via constrained optimization that maximizes the sparsity of the sensitivity variables while preserving the network's performance. As a result, the network learns to perform the task using only a small number of sensitive nodes. Insensitive nodes, i.e., nodes with zero sensitivity, can then be removed from the trained network to obtain a computationally efficient architecture. Removing zero-sensitivity nodes does not affect performance, because the network has already learned to perform the task without them. The regularization parameter of the optimization problem is determined simultaneously with the network weights during training. To validate our approach, we design computationally efficient architectures for tasks such as autoregression, object recognition, facial expression recognition, and object detection on a variety of datasets. In our experiments, the networks designed by the proposed method achieve the same or higher performance at far lower computational cost.
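
The abstract only sketches the method, but the core idea lends itself to a short illustration. Below is a minimal PyTorch sketch under our own assumptions: each node's output is gated by a learnable sensitivity variable, an L1 penalty on the sensitivities stands in for the paper's sparsity-maximizing constrained objective, and nodes whose sensitivity reaches zero are pruned after training. The layer name `SensitiveLinear`, the fixed regularization weight `lam`, and the pruning threshold are illustrative choices, not the paper's (which determines the regularization parameter during training).

```python
import torch
import torch.nn as nn

class SensitiveLinear(nn.Module):
    """Linear layer whose output nodes are gated by learnable per-node
    sensitivity variables (hypothetical name, not from the paper)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # One sensitivity variable per output node, initialized to 1.
        self.sensitivity = nn.Parameter(torch.ones(out_features))

    def forward(self, x):
        # Each node's contribution is scaled by its sensitivity;
        # a zero sensitivity silences the node entirely.
        return self.sensitivity * self.linear(x)

def sparsity_penalty(model):
    # L1 surrogate for sparsity of the sensitivity variables; an
    # assumption standing in for the constrained formulation.
    return sum(m.sensitivity.abs().sum()
               for m in model.modules()
               if isinstance(m, SensitiveLinear))

# Training loop sketch: task loss plus weighted sparsity penalty.
model = nn.Sequential(SensitiveLinear(64, 128), nn.ReLU(),
                      SensitiveLinear(128, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-3  # fixed here for illustration; the paper finds it during training
for _ in range(100):
    x = torch.randn(32, 64)
    y = torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y) + lam * sparsity_penalty(model)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, nodes whose sensitivity is (near) zero can be removed
# without changing the network's output.
with torch.no_grad():
    for m in model.modules():
        if isinstance(m, SensitiveLinear):
            keep = m.sensitivity.abs() > 1e-3
            print(f"keeping {int(keep.sum())}/{keep.numel()} nodes")
```

Because a zero-sensitivity node contributes nothing to the forward pass, deleting it leaves the remaining computation unchanged, which matches the abstract's claim that removal costs no performance: the trained network already routes the task through the surviving sensitive nodes.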
