TY - CHAP
T1 - Data-Driven Priors for Hyperparameters in Regularization
AU - Keren, Daniel
AU - Werman, Michael
PY - 1995
Y1 - 1995
AB - A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also “smooth” in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a “fidelity term” and a “smoothness term”. The classical approach is to select weights that should be assigned to these two terms, and minimize the resulting error functional. However, using only these “optimal weights” does not guarantee that the chosen function will be optimal in some sense. For that, we have to consider all possible weights. The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights.
DO - 10.1007/978-94-011-5430-7_9
M3 - Chapter
VL - 79
SP - 77
EP - 85
T2 - Maximum Entropy and Bayesian Methods
ER -
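
Note: the abstract describes regularization as minimizing a weighted sum of a "fidelity term" and a "smoothness term", and contrasts committing to a single weight with using the full distribution over admissible functions. The following is a minimal numerical sketch of that contrast, assuming a discrete second-difference smoothness penalty and a uniform grid over the weight; the function names, the grid, and the simple averaging are illustrative assumptions, not the paper's construction.

# Minimal sketch (not the paper's algorithm): discrete 1-D smoothing where the
# interpolant minimizes  ||y - f||^2 + lam * ||D f||^2,  with D a second-difference
# operator standing in for the "smoothness term". The grid prior over lam and the
# plain averaging of fits are illustrative assumptions.
import numpy as np

def second_difference(n):
    """(n-2) x n second-difference matrix used as a simple smoothness penalty."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def smooth_fit(y, lam):
    """Minimizer of ||y - f||^2 + lam * ||D f||^2 (closed-form linear system)."""
    n = len(y)
    D = second_difference(n)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 2 * np.pi, 30)) + 0.2 * rng.standard_normal(30)

# Classical approach: pick one "optimal" weight and use the single resulting fit.
f_single = smooth_fit(y, lam=5.0)

# Spirit of the paper: do not commit to one weight; combine fits over a range of
# weights (here a crude uniform grid standing in for a prior on the weight).
lam_grid = np.logspace(-2, 2, 50)
f_averaged = np.mean([smooth_fit(y, lam) for lam in lam_grid], axis=0)

print("fit with fixed weight :", np.round(f_single[:5], 3))
print("weight-averaged fit   :", np.round(f_averaged[:5], 3))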