Online learning, links with optimization and games
Université Paris-Saclay — 2024–2025
Thursdays 9:00–12:15 — Room 1A7
This course proposes a unified presentation of regret minimization for adversarial online learning problems, and its application to various settings such as Blackwell's approachability, optimization algorithms (GD, Nesterov, SGD, AdaGrad), variational inequalities with monotone operators (Mirror-Prox, Dual Extrapolation), fixed-point iterations (Krasnoselskii-Mann), and games. The presentation aims to be modular, so that the tools and techniques introduced can easily be reused to define and analyze new algorithms.
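As a taste of the regret-minimization viewpoint, here is a minimal sketch (not part of the course material) of online gradient descent on the interval [-1, 1] against a sequence of quadratic losses f_t(x) = (x - z_t)², with the standard O(1/√t) step size. The targets z_t, the interval, and the step-size constant are illustrative choices; the point is only that the cumulative loss stays within O(√T) of the best fixed action in hindsight.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
targets = rng.uniform(-1.0, 1.0, size=T)  # adversary's choices, fixed for the demo

# Online gradient descent on losses f_t(x) = (x - z_t)^2 over [-1, 1]
x = 0.0
total_loss = 0.0
for t, z in enumerate(targets, start=1):
    total_loss += (x - z) ** 2       # suffer the loss at the current point
    grad = 2.0 * (x - z)             # gradient of f_t at x
    eta = 1.0 / np.sqrt(t)           # standard O(1/sqrt(t)) step size
    x = np.clip(x - eta * grad, -1.0, 1.0)  # projected gradient step

# Best fixed action in hindsight: the minimizer of the sum of quadratics
best = np.clip(targets.mean(), -1.0, 1.0)
best_loss = ((best - targets) ** 2).sum()
regret = total_loss - best_loss
print(f"regret after T={T}: {regret:.2f}  (sqrt(T) = {np.sqrt(T):.1f})")
```

The classical analysis gives a regret bound of order D·G·√T here (diameter D = 2, gradient bound G = 4), and running the sketch shows the regret is indeed sublinear in T.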
Schedule
- January 23rd
- Convexity tools — Exercises
- January 30th
- UMD Theory — Exercises
- February 6th
- Online linear optimization — Exercises
- February 13th
- Online convex optimization — Exercises
- March 6th
- Blackwell's approachability — Exercises
- March 13th
- Gradient methods in optimization & AdaGrad — Exercises
- March 20th
- Regret learning in games — Exercises
- March 27th
- Extensive-form games
Evaluation
Evaluation will be 100% project-based.
Projects will become available progressively. Each project can be assigned to at most one student: choose a project below (first come, first served) and notify me by e-mail.
Each project has an estimated difficulty, indicated by a number of stars; grading will take this into account. If you run into significant difficulties, you can ask for help by e-mail.
Completed work must be sent by e-mail and may comprise several files (e.g. a PDF and a Jupyter notebook).
If you need your grade early (e.g. for an application), send your project at least one week before the date by which you need it.
Candidates for validation
- Brucato, Lorenzo. — ★★ [theory] AdaGrad with cube-root normalization
- Caudard, Joris. — ★★ [theory, code] UMD-based extension of AdaGrad-Norm and application to games
- Chahmirian, Armen. — ★★★ [theory] Composite optimization
- Davion, Corentin. — ★★★ [theory, code] Solving two-player zero-sum games with Blackwell's approachability
- Denis, Hector. — ★★★ [theory, code] Approachability algorithms based on time-dependent norms
- El Aafi, Salma. — ★ [theory] Optimistic iterations with arbitrary norms for solving games
- Glaib, Ilyas. — ★★ [theory, code] A simple market game
- Gou, Xinyi. — ★★ [theory, code] AdaGrad on the simplex and application to games
- Hollands, Anna. — ★★ [theory] AdaGrad-Full
- Hueso, Pablo. — ★★ [theory] Regret minimization with global costs
- Kashou, Layth. — ★ [theory, code] A hybrid of mirror descent and dual averaging
- Kharisov, Timur. — ★★★ [theory, code] Approachability-based optimization
- Khusainov, Marat. — ★★ [theory] AdaGrad-Diagonal: Stronger adaptivity to smoothness
- Lahmi, Elona. — ★★ [theory] Dual averaging variants of AdaGrad
- Launay, Jean-Baptiste. — ★★★★ [theory, code] Minimizing regret on treeplexes with approachability
- Le Dréau, Vincent. — ★★ [theory, code] A simple bluffing game
- Lemaire, Alicia. — ★ [theory] Convex optimization with relative smoothness
- Loret, Alexandre. — ★★ [theory] Yet another algorithm for online strongly convex optimization
- Mingam, Aurèle. — ★★★ [theory, code] Solving games with line-search
- Pradignac, Alexandre. — ★★ [theory] A finer analysis for convex optimization with step-sizes
- Riveiro, Ilian. — ★★ [theory] AdaGrad with normalization
- Salama, Saja. — ★★ [theory, code] A generalized approach for nonsmooth convex optimization
- Stussi, Raphaël. — ★ [theory, code] Dual averaging for stochastic nonsmooth convex optimization
- Tapia Riera, Guido Samuel. — ★★★ [theory] Finer approachability bounds
- Vantalon, Gautier. — ★★★ [theory] Dilated regularizers on treeplexes
- Yagouti, Redouane. — ★★★ [theory, code] Variants of AdaGrad-Norm and application to games