no code implementations • 1 Mar 2022 • Chandrashekar Lakshminarayanan, Amit Vikram Singh, Arun Rajkumar
Using the dual view, in this paper, we rethink the conventional interpretations of DNNs, thereby making explicit the implicit interpretability of DNNs.
no code implementations • 6 Oct 2021 • Chandrashekar Lakshminarayanan, Amit Vikram Singh
To address this `black box' nature, we propose a novel interpretable counterpart of DNNs with ReLUs, namely the deep linearly gated network (DLGN): the pre-activations to the gates are generated by a deep linear network, and the gates are then applied as external masks to learn the weights in a different network.
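The DLGN description above can be sketched as follows. This is a hypothetical minimal implementation based only on the abstract: the function name `dlgn_forward` and the layer dimensions are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def dlgn_forward(x, gating_weights, value_weights):
    """Sketch of a DLGN forward pass (illustrative, not the authors' code).

    gating_weights: matrices of the deep *linear* network that produces
    pre-activations for the gates; value_weights: matrices of the separate
    network whose weights are learned under the resulting masks.
    """
    g = x.copy()
    v = x.copy()
    for Wg, Wv in zip(gating_weights, value_weights):
        g = Wg @ g                      # deep linear network: no nonlinearity
        mask = (g > 0).astype(float)    # on/off gates from the pre-activations
        v = mask * (Wv @ v)             # gates applied as external masks
    return v

dims = [4, 5, 5, 3]
Wg = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]
Wv = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]
y = dlgn_forward(rng.standard_normal(4), Wg, Wv)
print(y.shape)  # (3,)
```

The key design point, per the abstract, is that the gating path is entirely linear, so the gating decisions are interpretable, while the masked value network carries the learned weights.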
no code implementations • 1 Jan 2021 • Chandrashekar Lakshminarayanan, Amit Vikram Singh
Recent works have connected deep learning and kernel methods.
no code implementations • NeurIPS 2020 • Chandrashekar Lakshminarayanan, Amit Vikram Singh
To this end, we encode the on/off state of the gates of a given input in a novel 'neural path feature' (NPF), and the weights of the DNN are encoded in a novel 'neural path value' (NPV).
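The gate-state encoding described above can be illustrated with a simplified sketch. This is an assumption-laden toy: it records the per-gate on/off indicators of a ReLU network for a given input as a flat vector, which is only a simplified stand-in for the paper's path-level NPF construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu_gate_states(x, weights):
    """Record the on/off state of every ReLU gate for input x.

    Simplified illustration of the 'neural path feature' idea: the
    concatenated indicator vector summarizes which gates fire for x.
    """
    states = []
    h = x
    for W in weights:
        pre = W @ h
        states.append((pre > 0).astype(float))  # 1 = gate on, 0 = gate off
        h = np.maximum(pre, 0.0)                # standard ReLU activation
    return np.concatenate(states)

W = [rng.standard_normal((5, 4)), rng.standard_normal((3, 5))]
npf = relu_gate_states(rng.standard_normal(4), W)
print(npf.shape)  # (8,) — 5 gates in layer 1 plus 3 in layer 2
```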
no code implementations • 10 Feb 2020 • Chandrashekar Lakshminarayanan, Amit Vikram Singh
In DGNs, a single neuronal unit has two components: the pre-activation input (the inner product of the layer's weights and the previous layer's outputs) and a gating value in $[0, 1]$; the output of the unit is the product of the pre-activation input and the gating value.
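A single DGN unit as described can be sketched in a few lines. The function name `dgn_unit` is illustrative; only the computation (inner product times a gate in $[0, 1]$) comes from the text above.

```python
import numpy as np

def dgn_unit(weights, prev_outputs, gate):
    """One DGN neuronal unit: gate in [0, 1] scales the pre-activation."""
    assert 0.0 <= gate <= 1.0
    pre_activation = np.dot(weights, prev_outputs)  # inner product component
    return gate * pre_activation                    # gated output

w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
print(dgn_unit(w, x, 1.0))  # 1.5  (gate fully open: output = pre-activation)
print(dgn_unit(w, x, 0.0))  # 0.0  (gate closed: unit contributes nothing)
```

With the gate fixed at 1 the unit reduces to a linear neuron, and with a hard 0/1 gate driven by the sign of the pre-activation it recovers a ReLU, which is what makes this decomposition useful for analysis.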