Reexamining the principle of mean-variance preservation for neural network initialization

Blog Article

Before backpropagation training, it is common to randomly initialize a neural network so that the mean and variance of activity are uniform across neurons. Classically, these statistics were defined over an ensemble of random networks. Alternatively, they can be defined over a random sample of inputs to the network. We show analytically and numerically that these two formulations of the principle of mean-variance preservation are very different in deep networks using the rectification nonlinearity (ReLU).
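
To make the two formulations concrete, here is a minimal numpy sketch (an illustration, not the code behind the results) that measures activity variance both ways in a deep ReLU network with He-scaled Gaussian weights: once over a sample of inputs to one fixed network, and once over an ensemble of random networks for one fixed input.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width, n_samples, n_nets = 20, 256, 512, 512

def forward(weights, x):
    """Propagate inputs through ReLU layers; return final-layer activity."""
    h = x
    for W in weights:
        h = np.maximum(W @ h, 0.0)          # ReLU nonlinearity
    return h

def make_net():
    """He-scaled Gaussian weights: Var[w] = 2 / fan_in."""
    return [rng.normal(0.0, np.sqrt(2.0 / width), (width, width))
            for _ in range(depth)]

x = rng.normal(size=(width, n_samples))      # a fixed sample of inputs

# Sample statistics: one network, variance taken over the input sample.
h = forward(make_net(), x)
print("sample variance per neuron (one net):", h.var(axis=1)[:3])

# Ensemble statistics: one input, variance taken over random networks.
x0 = x[:, :1]
outs = np.stack([forward(make_net(), x0)[:, 0] for _ in range(n_nets)])
print("ensemble variance per neuron (one input):", outs.var(axis=0)[:3])
```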

We numerically investigate training speed after data-dependent initialization of networks that preserves sample mean and variance.
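
As a rough illustration of what such a data-dependent initialization could look like (reusing the numpy setup, make_net(), and input sample x from the sketch above; the LSUV-style layer-wise rescaling shown here is an assumption, not the procedure used in the article), each layer's weights and biases are adjusted so that its pre-activations have zero sample mean and unit sample variance on the data.

```python
def data_dependent_init(weights, x, target_var=1.0):
    """Rescale and re-center each layer so that its pre-activations have
    zero sample mean and the target sample variance on the data x."""
    h = x
    biases = []
    for i, W in enumerate(weights):
        z = W @ h
        mu = z.mean(axis=1, keepdims=True)
        sd = z.std(axis=1, keepdims=True)
        scale = np.sqrt(target_var) / (sd + 1e-8)
        weights[i] = W * scale                 # set the sample variance
        biases.append(-mu * scale)             # set the sample mean to zero
        h = np.maximum(weights[i] @ h + biases[-1], 0.0)
    return weights, biases

weights, biases = data_dependent_init(make_net(), x)
```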
