Springer International Publishing, Switzerland, 2016. — 308 p. — (Springer Optimization and Its Applications 108) — ISBN: 9783319309200
The book is devoted to the study of approximate solutions of optimization problems in the presence of computational errors. We present a number of results on the convergence behavior of algorithms in a Hilbert space; these algorithms are important tools for solving optimization problems and variational inequalities. According to results known in the literature, these algorithms converge to a solution when computations are exact. In this book we study the same algorithms while taking into account computational errors, which are always present in practice. In this case convergence to a solution no longer takes place. We show instead that the algorithms generate a good approximate solution, provided the computational errors are bounded from above by a small positive constant. In practice it is sufficient to find a good approximate solution rather than to construct a minimizing sequence; moreover, since computations induce numerical errors, optimization methods applied to minimization problems usually provide only approximate solutions anyway. Our main goal is, for a known bound on the computational error, to determine what quality of approximate solution can be obtained and how many iterations are needed to obtain it.
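As a rough, hedged illustration of this central claim (a minimal sketch, not taken from the book, with all names and constants chosen for the example): consider a gradient method for a smooth convex function in which each gradient evaluation is perturbed by an error of norm at most delta. The iterates no longer converge to the exact minimizer, but they settle into a neighborhood of it whose size shrinks with delta.

```python
import numpy as np

def noisy_gradient_descent(grad, x0, step, delta, n_iters, rng):
    """Gradient method in which every gradient call is corrupted by a
    perturbation of norm at most `delta`, modeling computational error."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad(x)
        # Model the computational error: a random perturbation of norm <= delta.
        noise = rng.standard_normal(x.shape)
        noise *= delta / max(np.linalg.norm(noise), 1e-12)
        x = x - step * (g + noise)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.diag([1.0, 10.0])            # simple strongly convex quadratic
    b = np.array([1.0, -2.0])
    x_star = np.linalg.solve(A, b)      # exact minimizer of 0.5 x^T A x - b^T x
    grad = lambda x: A @ x - b

    for delta in (0.0, 1e-3, 1e-1):
        x_final = noisy_gradient_descent(grad, x0=np.zeros(2), step=0.05,
                                         delta=delta, n_iters=500, rng=rng)
        print(f"delta={delta:g}: distance to exact minimizer = "
              f"{np.linalg.norm(x_final - x_star):.2e}")
```

Running the sketch shows the qualitative behavior the book quantifies: with delta = 0 the iterates approach the minimizer, while a positive error bound leaves the final iterate at a distance controlled by delta.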
Subgradient Projection Algorithm
The Mirror Descent Algorithm
Gradient Algorithm with a Smooth Objective Function
An Extension of the Gradient Algorithm
Weiszfeld’s Method
The Extragradient Method for Convex Optimization
A Projected Subgradient Method for Nonsmooth Problems
Proximal Point Method in Hilbert Spaces
Proximal Point Methods in Metric Spaces
Maximal Monotone Operators and the Proximal Point Algorithm
The Extragradient Method for Solving Variational Inequalities
A Common Solution of a Family of Variational Inequalities
Continuous Subgradient Method
Penalty Methods
Newton’s Method
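To give a concrete feel for one of the methods listed above, here is a hedged sketch of a projected subgradient iteration with inexact subgradients, in the spirit of the Subgradient Projection Algorithm chapter. Everything below (the ball constraint, the l1 objective, the step-size rule, the function names) is an assumption made for illustration, not the book's notation or setting.

```python
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto the ball {y : ||y|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def noisy_subgradient_projection(subgrad, project, x0, steps, delta, rng):
    """Iterations x <- P_C(x - t * (g + e)) with subgradient errors ||e|| <= delta."""
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t in steps:
        e = rng.standard_normal(x.shape)
        e *= delta / max(np.linalg.norm(e), 1e-12)   # error of norm at most delta
        x = project(x - t * (subgrad(x) + e))
        iterates.append(x.copy())
    return iterates

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = np.array([0.3, -0.4, 0.2])
    f = lambda x: np.linalg.norm(x - a, 1)                 # nonsmooth convex objective
    subgrad = lambda x: np.sign(x - a)                     # a subgradient of f at x
    steps = [1.0 / np.sqrt(k + 1) for k in range(2000)]    # diminishing step sizes

    for delta in (0.0, 1e-2, 1e-1):
        xs = noisy_subgradient_projection(subgrad,
                                          lambda z: project_ball(z, 1.0),
                                          x0=np.ones(3), steps=steps,
                                          delta=delta, rng=rng)
        print(f"delta={delta:g}: best objective value = {min(f(x) for x in xs):.3e}")
```

The sketch only shows where a bounded error term enters the projected subgradient step; the book's results make precise how the achievable accuracy and the required number of iterations depend on such an error bound.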