Deep ReLU Programming

27 Nov 2020 · Peter Hinz, Sara van de Geer

Feed-forward ReLU neural networks partition their input domain into finitely many "affine regions" of constant neuron activation pattern and affine behaviour. We analyze their mathematical structure and provide algorithmic primitives for the efficient application of linear-programming-related techniques to the iterative minimization of such non-convex functions. In particular, we propose an extension of the Simplex algorithm that iterates on induced vertices but, in addition, can change its feasible region to adjacent "affine regions" in a computationally efficient manner. In this way, we obtain the Barrodale-Roberts algorithm for LAD regression as a special case, and we are also able to train the first layer of neural networks with L1 training loss such that the loss decreases in every step.
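To make the opening claim concrete, here is a minimal sketch (not the authors' implementation; `local_affine_map` and all other names are illustrative) that, for a small ReLU network, computes the neuron activation pattern at a point and the affine map A z + c that the network realizes on that point's affine region.

```python
import numpy as np

def local_affine_map(Ws, bs, x):
    """Return (pattern, A, c) such that the network equals z -> A @ z + c
    on the affine region containing x, where the activation pattern is constant."""
    A = np.eye(len(x))
    c = np.zeros(len(x))
    pattern = []
    for W, b in zip(Ws, bs):
        pre_A, pre_c = W @ A, W @ c + b       # pre-activation as an affine map of the input
        active = (pre_A @ x + pre_c) > 0      # activation pattern of this layer at x
        pattern.append(active)
        D = np.diag(active.astype(float))     # on this region, ReLU is a fixed 0/1 diagonal mask
        A, c = D @ pre_A, D @ pre_c
    return pattern, A, c

def forward(Ws, bs, z):
    """Plain ReLU forward pass (ReLU applied after every layer, as in the sketch above)."""
    for W, b in zip(Ws, bs):
        z = np.maximum(W @ z + b, 0)
    return z

# Example: a random 2-layer ReLU network on R^3
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
bs = [rng.normal(size=4), rng.normal(size=1)]
x = rng.normal(size=3)
pattern, A, c = local_affine_map(Ws, bs, x)
# On x's region the network coincides with the affine map; verify at x itself:
assert np.allclose(forward(Ws, bs, x), A @ x + c)
```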
