no code implementations • 16 Jun 2023 • Wen-Liang Hwang
In the current study, we demonstrate that DAG-DNNs can be used to derive all functions defined on various sub-architectures of the DNN.
no code implementations • 13 Jun 2022 • Wen-Liang Hwang, Shih-Shuo Tung
In this paper, we make a number of basic assumptions pertaining to activation functions, non-linear transformations, and DNN architectures in order to use the un-rectifying method to analyze DNNs via directed acyclic graphs (DAGs).
no code implementations • 18 Jan 2021 • Wen-Liang Hwang, Shih-Shuo Tung
However, we demonstrate that the optimal solution to a combinatorial optimization problem can be preserved by relaxing the discrete domains of activation variables to closed intervals.
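A minimal sketch of the idea the abstract refers to, under my own assumptions (NumPy, a toy 1-D signal; not the paper's actual formulation): un-rectifying rewrites ReLU(x) as a product d·x with discrete activation variables d ∈ {0, 1}, and it is these discrete domains that the paper relaxes to closed intervals [0, 1].

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def unrectify(x):
    # "Un-rectifying": express ReLU as a data-dependent diagonal map,
    # relu(x) == d * x, where each d_i is a discrete variable in {0, 1}.
    return (x > 0).astype(float)

x = np.array([1.5, -0.7, 0.0, 2.3])
d = unrectify(x)
assert np.allclose(relu(x), d * x)
# The relaxation step replaces each discrete domain {0, 1} with the
# closed interval [0, 1]; the paper's claim is that the optimum of the
# resulting optimization problem is preserved (not shown in this toy).
```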
no code implementations • 29 Mar 2019 • Andreas Heinecke, Wen-Liang Hwang
We consider deep feedforward neural networks with rectified linear units from a signal processing perspective.
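One illustrative sketch of this perspective, assuming a toy two-layer network with random weights (my own example, not taken from the paper): a ReLU feedforward network acts, on each fixed activation pattern, as an affine operator on the input signal.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def net(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # rectified linear units
    return W2 @ h + b2

# On a fixed activation pattern, the network is a single affine map:
x = np.array([0.3, -0.5])
d = (W1 @ x + b1 > 0).astype(float)    # which units are active at x
A = W2 @ (d[:, None] * W1)             # local linear operator
c = W2 @ (d * b1) + b2                 # local bias
assert np.allclose(net(x), A @ x + c)
```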
no code implementations • 6 Jan 2018 • Wen-Liang Hwang, Ping-Tzan Huang, Tai-Lang Jong
Our findings demonstrate that this type of dual frame cannot be constructed for over-complete frames, thereby precluding the use of any linear analysis operator in deriving the sparse synthesis coefficients for signal representation.
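For context, a small NumPy sketch of the standard frame machinery involved (my own toy example with an over-complete frame of R²; the specific frame and numbers are assumptions): the canonical dual frame gives a linear analysis operator with perfect reconstruction, but its coefficients are the minimum-norm ones, not in general the sparse synthesis coefficients the abstract concerns.

```python
import numpy as np

# Over-complete frame of R^2: three frame vectors as columns of Phi.
Phi = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])
# Canonical dual frame: (Phi Phi^T)^{-1} Phi.
Dual = np.linalg.inv(Phi @ Phi.T) @ Phi

x = np.array([2.0, -1.0])
c = Dual.T @ x                  # linear analysis with the dual frame
assert np.allclose(Phi @ c, x)  # synthesis reconstructs x exactly,
# but c is the minimum-norm coefficient vector, generally not sparse.
```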
no code implementations • 14 Dec 2015 • Chia-Chen Lee, Wen-Liang Hwang
Blind image restoration is a non-convex problem that involves recovering an image degraded by an unknown blur kernel.
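A hedged 1-D sketch of why the problem is ill-posed (toy signal and kernel are my own assumptions, not the paper's setup): the observation depends on the product of two unknowns, so scaled pairs (a·k, x/a) produce identical data, one of the ambiguities that makes joint estimation non-convex.

```python
import numpy as np

# Forward model of blind restoration: y = k * x + n, where both the
# sharp signal x and the blur kernel k are unknown (1-D toy example).
rng = np.random.default_rng(1)
x = rng.normal(size=64)                       # unknown sharp signal
k = np.array([0.25, 0.5, 0.25])               # unknown blur kernel
y = np.convolve(x, k, mode="same") + 0.01 * rng.normal(size=64)

# Scaling ambiguity: (a*k, x/a) yields exactly the same observation.
a = 2.0
assert np.allclose(np.convolve(x / a, a * k, mode="same"),
                   np.convolve(x, k, mode="same"))
```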