Computing Testing Error without a Testing Set

We define a methodology that directly measures how well a deep neural network generalizes to unseen samples, without the need for a testing dataset. To do this, consider a network that maps an input variable x to an output variable y. For each specific input, the network uses a subset of its parameters to compute the corresponding output. We show that if this subset of parameters is similar across most inputs, the network has learned to generalize, whereas if the subset of parameters used is highly dependent on the input, the network has relied on local parameters to memorize samples. These functional differences can be readily computed using algebraic topology, yielding an approach that accurately estimates the testing error of any given network. We report results on several computer vision problems, including object recognition, facial expression analysis, and image segmentation.
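The core intuition, that a network generalizes when roughly the same sub-network handles most inputs, can be illustrated with a toy sketch. The snippet below is an assumption-laden simplification, not the paper's method: it replaces the algebraic-topology machinery with a plain Jaccard overlap of per-input active-unit masks in a single ReLU layer, and the function names (`active_units`, `usage_consistency`) are invented for illustration.

```python
import numpy as np

def active_units(weights, x):
    # Boolean mask of which hidden units fire (ReLU pre-activation > 0)
    # for a given input; this stands in for the "subset of parameters
    # used" for that input.
    return (weights @ x) > 0

def usage_consistency(weights, inputs):
    # Mean pairwise Jaccard similarity of the active-unit masks.
    # Values near 1 mean the same units serve most inputs (suggesting
    # generalization); values near 0 mean highly input-specific unit
    # use (suggesting memorization). A crude proxy, not the paper's
    # topological measure.
    masks = [active_units(weights, x) for x in inputs]
    sims = []
    for i in range(len(masks)):
        for j in range(i + 1, len(masks)):
            inter = np.logical_and(masks[i], masks[j]).sum()
            union = np.logical_or(masks[i], masks[j]).sum()
            sims.append(inter / union if union else 1.0)
    return float(np.mean(sims))

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))        # one hidden layer, 16 units
X = [rng.standard_normal(8) for _ in range(5)]  # 5 sample inputs
consistency = usage_consistency(W, X)   # a value in [0, 1]
```

The paper's contribution is precisely that such functional similarity across inputs can be quantified rigorously (via algebraic topology) and correlated with the true testing error, rather than with an ad hoc overlap score like the one above.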