Abstract
The superiorization methodology is intended to work with the input data of constrained minimization problems, i.e., a target function and a constraints set. However, it is based on a way of thinking antipodal to the one that leads to constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handle constraints, it adapts feasibility-seeking algorithms to reduce (not necessarily minimize) target function values. This is done while retaining the feasibility-seeking nature of the algorithm and without paying a high computational price. A guarantee that the local target function reduction steps properly accumulate to a global reduction of the target function value is still missing, in spite of an ever-growing body of publications that supply evidence of the success of the superiorization method on various problems. We propose an analysis, based on the principle of concentration of measure, that attempts to resolve this guarantee question for the superiorization method.
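The abstract describes superiorization as interleaving target-function-reducing perturbations into a feasibility-seeking algorithm while keeping its feasibility-seeking nature. The Python sketch below is a minimal illustration of that idea on a toy linear feasibility problem, assuming sequential halfspace projections as the feasibility-seeking routine and summable geometric step sizes for the perturbations; the names `superiorized_feasibility`, `project_halfspace`, and the parameter `alpha` are illustrative choices, not the paper's algorithm or notation.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {y : a·y <= b}."""
    viol = a @ x - b
    if viol > 0:
        x = x - (viol / (a @ a)) * a
    return x

def superiorized_feasibility(A, b, grad_phi, x0, n_iter=200, alpha=0.5):
    """Illustrative superiorization sketch (not the paper's exact algorithm):
    interleave target-reducing perturbations, with summable step sizes,
    into sweeps of a sequential-projection feasibility-seeking method."""
    x = x0.astype(float)
    k = 0  # perturbation counter; step sizes alpha**k form a summable series
    for _ in range(n_iter):
        # Perturbation step: move along a nonascending direction of the target.
        g = grad_phi(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            x = x - alpha**k * g / norm
        k += 1
        # Feasibility-seeking step: one sweep of halfspace projections.
        for a_i, b_i in zip(A, b):
            x = project_halfspace(x, a_i, b_i)
    return x

# Toy usage: reduce the target ||x||^2 over the halfspaces A x <= b.
A = np.array([[1.0, 1.0], [-1.0, 2.0]])
b = np.array([4.0, 2.0])
x_sup = superiorized_feasibility(A, b,
                                 grad_phi=lambda x: 2 * x,
                                 x0=np.array([5.0, 5.0]))
print(x_sup)
```

The geometric step sizes `alpha**k` keep the perturbations bounded and summable, which is the kind of condition under which perturbation resilience lets the perturbed iterates remain feasibility-seeking while the target function value is reduced, not necessarily minimized.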
| Original language | English |
|---|---|
| Pages (from-to) | 2273-2301 |
| Number of pages | 29 |
| Journal | Applied Mathematics and Optimization |
| Volume | 83 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jun 2021 |
Bibliographical note
Publisher Copyright: © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
Keywords
- Concentration of measure
- Feasibility-seeking algorithm
- Hilbert-Schmidt norm
- Linear superiorization
- Perturbation resilience
- Random matrix
- Superiorization
- Superiorization matrix
- Target function reduction
ASJC Scopus subject areas
- Control and Optimization
- Applied Mathematics