The superiorization methodology is intended to work with the input data of constrained minimization problems, that is, a target function and a constraints set. However, it is based on a way of thinking antipodal to the one that leads to constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handle constraints, it adapts feasibility-seeking algorithms to reduce (not necessarily minimize) target function values. This is done while retaining the feasibility-seeking nature of the algorithm and without paying a high computational price. Despite an ever-growing body of publications that supply evidence of the success of the superiorization method on various problems, a guarantee that the local target function reduction steps properly accumulate to a global reduction of the target function value is still missing. We propose an analysis based on the principle of concentration of measure that attempts to alleviate this guarantee question of the superiorization method.
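The interlacing of feasibility-seeking steps with target-function-reducing perturbations described above can be illustrated by a minimal sketch. This is not the algorithm analyzed in the paper; it is a generic superiorized version of sequential projections onto halfspaces, with perturbations taken along normalized negative gradients and geometrically shrinking (hence summable) step sizes. All function names and the choice of problem are illustrative assumptions.

```python
import numpy as np

def project_halfspace(x, a, beta):
    """Orthogonal projection of x onto the halfspace {y : a . y <= beta}."""
    viol = a @ x - beta
    if viol > 0:
        x = x - (viol / (a @ a)) * a
    return x

def superiorized_projections(x, halfspaces, f_grad, n_iter=100, alpha=0.5):
    """Sketch of basic superiorization (illustrative, not the paper's method):
    before each feasibility-seeking sweep, perturb x along a normalized
    nonascent direction of the target function f, with step sizes alpha**ell
    that are summable, so the feasibility-seeking behavior is retained."""
    ell = 0
    for _ in range(n_iter):
        # Perturbation phase: reduce (not necessarily minimize) f.
        g = f_grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm > 0:
            x = x - (alpha ** ell) * (g / gnorm)
        ell += 1
        # Feasibility-seeking phase: sequential halfspace projections.
        for a, beta in halfspaces:
            x = project_halfspace(x, a, beta)
    return x
```

For example, seeking a point with x1 + x2 >= 2 (encoded as -x1 - x2 <= -2) while reducing f(x) = ||x||^2 from the start point (0, 0), the iterates settle near (1, 1): the perturbations pull toward the origin and the projections restore feasibility.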
Funding Information:
We thank two anonymous reviewers for their constructive comments. This work was supported by research grant no. 2013003 of the United States-Israel Binational Science Foundation (BSF) and by the ISF-NSFC joint research program grant No. 2874/19.
© 2019, Springer Science+Business Media, LLC, part of Springer Nature.
Keywords
- Concentration of measure
- Feasibility-seeking algorithm
- Hilbert-Schmidt norm
- Linear superiorization
- Perturbation resilience
- Random matrix
- Superiorization matrix
- Target function reduction
ASJC Scopus subject areas
- Control and Optimization
- Applied Mathematics