A Sequential Approximation Bound for Some Sample-Dependent Convex Optimization Problems With Applications in Learning
This paper studies a class of sample-dependent convex optimization problems and derives a general sequential approximation bound for their solutions. The analysis is closely related to the regret-bound framework in online learning; however, it is applied here to batch learning algorithms rather than to online stochastic gradient descent methods. Applications of this analysis to some classification and regression problems are illustrated.