Computing the Testing Error Without a Testing Set

Deep Neural Networks (DNNs) have revolutionized computer vision. We now have DNNs that achieve top accuracy in many problems, including object recognition, facial expression analysis, and semantic segmentation, to name but a few. The design of the DNNs that achieve top results is, however, non-trivial and mostly done by trial and error. That is, researchers typically derive many DNN architectures (i.e., topologies) and then test them on multiple datasets. However, there are no guarantees that the selected DNN will perform well in the real world. One can use a testing set to estimate the performance gap between the training and testing sets, but avoiding overfitting to the testing data is then a concern. Using sequestered testing data may address this problem, but it requires constant updates of the dataset, a very expensive venture. Here, we derive an algorithm to estimate the performance gap between training and testing without the need for a testing dataset. Specifically, we derive a set of persistent topology measures that identify when a DNN is learning to generalize to unseen samples. We provide extensive experimental validation on multiple networks and datasets to demonstrate the feasibility of the proposed approach.
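As a rough illustration of the general idea (not the authors' exact algorithm), the sketch below builds a correlation-based distance matrix over the units of a network and summarizes its persistent homology with a single statistic; such topological summaries could then be related to the observed train/test gap. The `ripser` package, the simulated activation matrix, and the helper function names are assumptions made for illustration only.

```python
"""Minimal sketch, assuming simulated activations and the `ripser` package:
summarize the topology of a network's functional (correlation) graph with
persistent homology, as a proxy signal for the train/test performance gap."""
import numpy as np
from ripser import ripser


def functional_distance_matrix(activations):
    """Distance matrix between units from activation correlations.

    activations: (n_units, n_samples) array of unit responses to a batch.
    Distance is 1 - |corr|, so strongly (anti-)correlated units are close.
    """
    corr = np.corrcoef(activations)      # (n_units, n_units)
    corr = np.nan_to_num(corr)           # guard against constant units
    dist = 1.0 - np.abs(corr)
    np.fill_diagonal(dist, 0.0)
    return dist


def persistence_summary(dist, maxdim=1):
    """Mean lifetime (death - birth) of finite features in the
    Vietoris-Rips persistence diagrams of the distance matrix."""
    dgms = ripser(dist, distance_matrix=True, maxdim=maxdim)["dgms"]
    lifetimes = []
    for dgm in dgms:
        finite = dgm[np.isfinite(dgm[:, 1])]
        lifetimes.extend(finite[:, 1] - finite[:, 0])
    return float(np.mean(lifetimes)) if lifetimes else 0.0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder for activations recorded from a trained network:
    # 64 units observed over 200 training samples.
    activations = rng.standard_normal((64, 200))
    dist = functional_distance_matrix(activations)
    # A summary like this could be tracked during training and regressed
    # against measured performance gaps on reference architectures.
    print(f"mean persistence lifetime: {persistence_summary(dist):.4f}")
```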
