Olga Aryasova (Inst. of Geophysics, Nat. Acad. of Sciences of Ukraine / Friedrich–Schiller–Univ. Jena)
Pierre Gaillard (INRIA Paris)
In this presentation we will examine the framework of online prediction of arbitrary time series. In this setting, a learner sequentially predicts the values of a time series about which no stochastic assumptions are made. The learner's goal is to minimize its regret, defined as the difference between its cumulative error and the cumulative error of the best fixed parameter in hindsight. We will present a general algorithm, inspired by online empirical risk minimization, that achieves logarithmic regret for many loss functions (such as the square or logistic loss). We will then discuss the importance of improper learning for logistic regression and how our algorithm avoids the exponential constants that are unavoidable for proper algorithms.
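To make the notion of regret concrete, here is a minimal sketch (not the algorithm from the talk) that runs online gradient descent on the logistic loss and compares its cumulative loss against the best fixed parameter in hindsight. All data, step sizes, and dimensions below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_loss(w, x, y):
    # log(1 + exp(-y * w.x)) with y in {-1, +1}; clipped for numerical safety
    return np.log1p(np.exp(np.clip(-y * (w @ x), -30.0, 30.0)))

def logistic_grad(w, x, y):
    # gradient of the logistic loss at w
    z = np.clip(-y * (w @ x), -30.0, 30.0)
    return -y * x / (1.0 + np.exp(-z))

d, T = 3, 500
w_star = np.array([1.0, -2.0, 0.5])      # hidden parameter generating the labels
X = rng.normal(size=(T, d))
ys = np.sign(X @ w_star + 0.3 * rng.normal(size=T))

# Online gradient descent: predict with the current parameter, pay the loss,
# then update on the revealed example.
w = np.zeros(d)
learner_loss = 0.0
for t in range(T):
    learner_loss += logistic_loss(w, X[t], ys[t])
    w -= logistic_grad(w, X[t], ys[t]) / np.sqrt(t + 1)

# Best fixed parameter in hindsight, approximated by batch gradient descent.
u = np.zeros(d)
for _ in range(2000):
    z = np.clip(-ys * (X @ u), -30.0, 30.0)
    u -= 0.5 * (-(ys / (1.0 + np.exp(-z)))[:, None] * X).mean(axis=0)
best_loss = np.log1p(np.exp(np.clip(-ys * (X @ u), -30.0, 30.0))).sum()

regret = learner_loss - best_loss
print(f"cumulative loss: {learner_loss:.2f}, "
      f"best in hindsight: {best_loss:.2f}, regret: {regret:.2f}")
```

Online gradient descent of this kind only attains regret of order sqrt(T) in general; the talk concerns algorithms that improve this to logarithmic regret, and, for logistic regression, do so by predicting improperly (outside the comparator class).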
Reference: Rémi Jézéquel, Pierre Gaillard, Alessandro Rudi. Efficient improper learning for online logistic regression. 2020.
Zoom link: https://uni-potsdam.zoom.us/j/92514620160 (Meeting ID: 925 1462 0160)
Please contact Sara (mazzonetto AT uni-potsdam.de) to get the password.