Datalab Seminar

Optimization Algorithms and their Optimality for some Structured Problems

Speaker:  David Martínez Rubio (Universidad Carlos III de Madrid)
Date:  Thursday, 3 April 2025 - 12:00
Venue:  Aula Gris 1, ICMAT

Abstract:

Nesterov's accelerated gradient descent is one of the most celebrated methods in continuous optimization. It exploits the Lipschitz continuity of the gradient of a convex function to obtain the optimal convergence rate for the minimization problem, \(O(1/k^2)\) in function value after \(k\) iterations, versus the \(O(1/k)\) rate of plain gradient descent. Motivated by applications in logistic regression, optimal transport, \(\ell_p\)-regression, distributionally robust optimization, hard-margin SVM, and box-simplex games, there have been recent efforts to understand how far the acceleration phenomenon can be exploited in other settings. In particular, in some settings with structure one can obtain much faster convergence, and in many of these, exploiting the function's regularity with respect to a non-Euclidean norm is essential. I will go over a general optimization algorithm with guarantees whose convergence rate is optimal in these settings, even against randomized algorithms or parallel ones that use polynomially many (in the dimension) queries to a local oracle of the function at each iteration.
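
As a quick illustration of the classical baseline the abstract starts from (not the speaker's algorithm), below is a minimal Python sketch of Nesterov's accelerated gradient descent for an L-smooth convex function. The FISTA-style momentum schedule, the quadratic test objective, and all names (nesterov_agd, A, b) are assumptions chosen for the example.

import numpy as np

def nesterov_agd(grad, x0, L, num_iters=100):
    """Nesterov's accelerated gradient descent for an L-smooth convex f.

    Reaches the optimal O(1/k^2) rate in function value, compared with
    the O(1/k) rate of plain gradient descent.
    """
    x = y = x0.astype(float)
    t = 1.0
    for _ in range(num_iters):
        # Gradient step of size 1/L from the extrapolated point y.
        x_next = y - grad(y) / L
        # Momentum schedule (FISTA-style choice of extrapolation weights).
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Hypothetical example: minimize f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite, so f is convex
b = np.array([1.0, -1.0])
L_smooth = np.linalg.eigvalsh(A).max()   # gradient Lipschitz constant = largest eigenvalue
x_hat = nesterov_agd(lambda x: A @ x - b, np.zeros(2), L_smooth)
print(x_hat, np.linalg.solve(A, b))      # should approximately agree

The step size 1/L here comes from the Euclidean smoothness assumption; part of the point of the talk is that, in the structured settings listed above, measuring smoothness in a non-Euclidean norm leads to faster methods than this baseline.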
