Ensuring that a predictor is not biased against a sensitive feature is the goal of fair learning. Meanwhile, Global Sensitivity Analysis (GSA) is used in numerous contexts to measure the influence of any feature on an output variable. We bridge these two domains, Global Sensitivity Analysis and Fairness, by showing how fairness can be defined within a framework based on Global Sensitivity Analysis, and how several standard indicators are shared between the two fields.
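For orientation, the central quantity of GSA is the first-order Sobol' index, written here in standard notation (not specific to this talk); in the fairness setting one may, for instance, take X_i to be the sensitive feature:

$$ S_i \;=\; \frac{\operatorname{Var}\bigl(\mathbb{E}[Y \mid X_i]\bigr)}{\operatorname{Var}(Y)}, $$

which measures the fraction of the variance of the output Y explained by the feature X_i alone.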
In this talk we consider a stochastic bandit problem with potentially infinitely many arms, where a proportion of these arms are optimal. For both the best-arm identification and cumulative regret settings, we describe algorithms that leverage the proportion of optimal arms to their advantage. For best-arm identification, taking the proportion of optimal arms as a parameter is not necessary; for cumulative regret, however, we show that adaptation to the proportion is impossible. Our regret bounds depend on a gap Delta, defined as the distance between the optimal and suboptimal arms.
This talk is based on a joint work with Rianne de Heide, Pierre Ménard and Alexandra Carpentier.
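In standard bandit notation (illustrative, not necessarily the talk's exact definitions), the cumulative regret over a horizon T and the gap Delta read

$$ R_T \;=\; T\,\mu^{*} - \mathbb{E}\Bigl[\sum_{t=1}^{T} \mu_{A_t}\Bigr], \qquad \Delta \;=\; \mu^{*} - \sup\{\mu_a : \mu_a < \mu^{*}\}, $$

where mu* is the optimal mean reward and A_t denotes the arm pulled at round t.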
We study a network of N neurons, interacting with each other and following the FitzHugh-Nagumo SDE, as N tends to infinity. Using an adequate coupling to bound the Wasserstein distance, we prove uniform-in-time propagation of chaos.
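A minimal Euler-Maruyama simulation of mean-field-coupled FitzHugh-Nagumo neurons can illustrate the setting; the parameters and the electrical mean-field coupling below are illustrative assumptions, not the talk's exact model.

```python
import numpy as np

# Euler-Maruyama simulation of N mean-field-coupled FitzHugh-Nagumo
# neurons (illustrative parameters; the talk's coupling may differ).
rng = np.random.default_rng(3)
N, dt, steps = 100, 0.01, 1000
a, b, eps, J, sigma = 0.7, 0.8, 0.08, 1.0, 0.3

v = rng.normal(size=N)          # membrane potentials
w = np.zeros(N)                 # recovery variables
for _ in range(steps):
    coupling = -J * (v - v.mean())   # electrical mean-field coupling
    dv = (v - v**3 / 3 - w + coupling) * dt \
         + sigma * np.sqrt(dt) * rng.normal(size=N)
    dw = eps * (v + a - b * w) * dt
    v, w = v + dv, w + dw
```

As N grows, each neuron only feels the empirical mean of the population, which is the regime where propagation of chaos is studied.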
In this talk, we consider different two-time-scale stochastic approximation algorithms for quantile and superquantile estimation. We study the asymptotic behavior of these algorithms (almost sure convergence, quadratic strong law, law of the iterated logarithm, CLT). We also derive non-asymptotic controls of the mean quadratic error of the algorithms. This talk is based on joint work with B. Bercu and S. Gadat.
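As a rough sketch of the idea (not the talk's exact algorithms), one can run a fast Robbins-Monro track for the quantile and a slower averaging track for the superquantile (CVaR); the Gaussian stream and step sizes below are illustrative placeholders.

```python
import numpy as np

# Two-time-scale sketch: fast track estimates the alpha-quantile of a
# standard Gaussian stream, slow track averages a plug-in estimate of
# the superquantile (CVaR). Step sizes are illustrative choices.
rng = np.random.default_rng(0)
alpha = 0.9
theta, cvar = 0.0, 0.0

for n in range(1, 200_001):
    x = rng.normal()
    gamma = n ** -0.75                 # fast step size (quantile track)
    eta = 1.0 / n                      # slow step size (running average)
    theta -= gamma * ((x <= theta) - alpha)                 # quantile
    cvar += eta * (theta + max(x - theta, 0.0) / (1 - alpha) - cvar)
```

For the standard Gaussian, the 0.9-quantile is about 1.28 and the superquantile about 1.76, so both tracks should settle near those values.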
In this talk, I will present a model for the alignment of a population of N agents interacting according to their rank. I will first explain the main differences from other well-known alignment models (such as Cucker-Smale models) and the difficulties that come with them. Then I will present some flocking results when N is fixed. Finally, I will try to explain why it is difficult to derive a macroscopic equation when N goes to infinity.
Generalized exclusion processes (GEPs) arise naturally when studying social network models or related agent-based models. Owing to this underlying motivation, e.g., the modeling of opinions, one has to take into account heterogeneous constraints on the movement of the particles that constitute the GEP. In particular, random absorbing environments are a recurring obstacle in various models. In this talk I give an overview of my recently obtained results on the expected time to absorption, which are based on a hierarchical representation of the state space with respect to the absorbing environment.
As an illustrative example, I discuss the echo chamber model with fixed opinions under suitable assumptions, and I indicate possibilities to extend this approach to other models, pointing in particular towards the analysis of expected times to absorption for GEPs on connected graphs with random absorbing environments.
This PhD project aims to use generative deep learning models to accelerate and improve the design of turbine blades for Safran aircraft engines. Indeed, these components must be designed very precisely, which makes numerical simulations very time-consuming. We will present a method for extracting geometric features from a set of watertight 2D contours, based on the learning of distance functions by neural networks.
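The distance-function idea can be sketched on the simplest watertight 2D contour, a circle; the geometry and sampling below are placeholders (not Safran's blade profiles), and in the actual pipeline a neural network would regress these values.

```python
import numpy as np

def sdf_circle(points, radius=1.0):
    """Signed distance to a circle: negative inside, zero on the
    contour, positive outside. A network trained on such values
    implicitly encodes the watertight shape."""
    return np.linalg.norm(points, axis=-1) - radius

# Sample query points in the plane and compute supervision targets,
# as one would do to train a distance-function network.
rng = np.random.default_rng(0)
xy = rng.uniform(-2.0, 2.0, size=(1000, 2))
targets = sdf_circle(xy)
```

The zero level set of the learned function then recovers the contour, which is what makes this representation convenient for extracting geometric features.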
The so-called pick-freeze method allows efficient estimation of Sobol' indices. One of its main drawbacks is that it requires a well-designed second sample. Using the ideas proposed by Chatterjee in his recent paper [1], we have built a new estimator of Sobol' indices that uses a single sample. In [2], we have shown a CLT for this estimator. In this talk, we will discuss both the construction of the estimator and the proof of the CLT.
[1] Sourav Chatterjee (2020). A New Coefficient of Correlation. Journal of the American Statistical Association.
[2] Fabrice Gamboa, Pierre Gremaud, Thierry Klein, Agnès Lagnoux (2021). Global Sensitivity Analysis: a new generation of mighty estimators based on rank statistics. To appear in Bernoulli.
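Chatterjee's rank coefficient, which underlies the single-sample approach, is simple to compute; the following sketch implements the no-ties version of the coefficient from [1] (the test data are illustrative).

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank correlation coefficient xi_n (no-ties version):
    sort the pairs by x, take the ranks of y in that order, and sum the
    absolute differences of consecutive ranks."""
    n = len(x)
    order = np.argsort(x)                       # sort the pairs by x
    r = np.argsort(np.argsort(y[order])) + 1    # ranks of y in x-order
    return 1.0 - 3.0 * np.abs(np.diff(r)).sum() / (n**2 - 1)

rng = np.random.default_rng(1)
x = rng.uniform(size=5000)
xi_dep = chatterjee_xi(x, np.sin(2 * np.pi * x))   # close to 1: y = f(x)
xi_ind = chatterjee_xi(x, rng.uniform(size=5000))  # close to 0: independence
```

The coefficient tends to 1 when y is a noiseless function of x and to 0 under independence, which is exactly the behavior a single-sample sensitivity estimator needs.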
Wavelets are known to be a powerful tool. Various questions like approximation, compression, and denoising of functions can be addressed with them. Wavelets exist for various domains like R, R^d, the sphere, and even more general spaces. In this talk we consider the case where the domain is unknown (we only assume it to be some Euclidean submanifold) and we only have access to a finite number of randomly chosen points. We propose a method to construct a system of functions (vectors) that behave similarly to wavelets. This approach is based on the graph representation of the given data points. We look at some properties of this function system and at how it can be applied to some statistical problems.
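A common starting point for such graph-based constructions is the Laplacian of a weighted graph built on the sample; the sketch below (points on a circle, Gaussian weights, bandwidth eps) uses illustrative choices and is not the talk's construction.

```python
import numpy as np

# Sample points from a 1-D submanifold (a circle) embedded in R^2.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 2 * np.pi, 200))
pts = np.c_[np.cos(t), np.sin(t)]

# Gaussian-weighted graph on the point cloud (bandwidth eps is a choice).
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
eps = 0.05
W = np.exp(-d2 / eps)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = D - W; its eigenvectors give a
# Fourier-like basis on the data, which wavelet-type systems localize.
L = np.diag(W.sum(1)) - W
eigvals, eigvecs = np.linalg.eigh(L)
```

The smallest eigenvalue is zero (constant eigenvector) for a connected graph, and the low-frequency eigenvectors vary smoothly along the underlying submanifold.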
In this talk, I will introduce an algorithm that samples from the Gibbs measure of a branching random walk. I will show that the algorithm runs in polynomial time below a critical inverse temperature beta_c, and that no algorithm can run in polynomial time above this critical inverse temperature.
This is a joint work with Prof. Pascal Maillard.
The 1981 computer game Sokoban is a common testground for algorithmic research, as the game is NP-hard and even PSPACE-complete. The game's objective is to push boxes onto designated goal positions. A box can only be pushed if the space behind it is free. Deadlocks are irreversible actions that make winning the game impossible.
We want to investigate how to adapt the Monte Carlo Tree Search (MCTS) algorithm with reward shaping.
In the talk I will introduce the game and discuss why it is still interesting 40 years later.
This is a joint project with Clemens Woest (student in the Master's Programme in Data Science, UP).
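At the heart of MCTS is the UCT selection rule, sketched below; the node representation and the toy statistics are illustrative, and reward shaping would enter by modifying the rewards that are backed up.

```python
import math

def uct_select(children, c=math.sqrt(2)):
    """Select the child with the highest UCT score: average reward plus
    an exploration bonus that shrinks the more often a child is visited."""
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")       # unvisited moves are tried first
        return (ch["reward"] / ch["visits"]
                + c * math.sqrt(math.log(total) / ch["visits"]))
    return max(children, key=score)

children = [{"reward": 3.0, "visits": 10},   # well-explored, decent mean
            {"reward": 1.0, "visits": 2},    # barely explored
            {"reward": 0.0, "visits": 0}]    # never tried
best = uct_select(children)                  # picks the unvisited child
```

The constant c trades off exploitation against exploration; sqrt(2) is the textbook default.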
Smooth functions on graphs have wide applications in manifold and semi-supervised learning. In this talk, we present a bandit problem where the payoffs of arms are smooth on a graph. This framework is suitable for solving online learning problems that involve graphs, such as content-based recommendation. In this problem, each item we can recommend is a node of an undirected graph, with an expected rating similar to those of its neighbors. The goal is to exploit the graph structure to recommend items with high expected ratings.
Significant work has recently been dedicated to the stochastic delayed bandit setting because of its relevance in applications. The applicability of existing algorithms is, however, restricted by the strong assumptions often made on the delay distributions, such as full observability, restrictive shape constraints, or uniformity over arms. In this work, we weaken these assumptions significantly and only assume that there is a bound on the tail of the delay. In particular, we cover the important cases where the delay distributions vary across arms and where the delays are heavy-tailed. To address these difficulties, we introduce simple but efficient UCB-based algorithms called the PATIENTBANDITS. We provide both problem-dependent and problem-independent bounds on the regret, as well as performance lower bounds.
We investigate the problem of minimizing the excess generalization error with respect to the best expert prediction in a finite family in the stochastic setting, under limited access to information. We revisit the classical setting where all training data is available to the learner and develop a novel algorithm that achieves fast rates with high probability. We extend our algorithm to the budgeted setting, where there is a constraint on the number of experts' advice observed per round, and develop high-probability distribution-dependent and distribution-independent bounds.
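The classical baseline for prediction with expert advice is the exponential-weights (Hedge) distribution; the sketch below is that textbook baseline with illustrative losses, not the talk's algorithm.

```python
import numpy as np

def hedge_weights(cum_losses, eta=1.0):
    """Exponential-weights distribution over experts given their
    cumulative losses; eta is the learning rate."""
    w = np.exp(-eta * (cum_losses - cum_losses.min()))  # shift for stability
    return w / w.sum()

cum_losses = np.array([5.0, 2.0, 9.0])   # illustrative cumulative losses
p = hedge_weights(cum_losses)            # most mass on the best expert
```

In the budgeted setting described above, the learner would have to maintain such a distribution while only observing a limited subset of the experts' advice each round.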
We derive both Azuma-Hoeffding and Burkholder-type inequalities for partial sums over a rectangular grid of dimension d of a random field satisfying a weak-dependence assumption of projective type, i.e., the difference between the expectation of an element of the random field and its conditional expectation given the rest of the field at a distance more than δ is bounded, in the sense of Lᵖ distance, by a known decreasing function of δ. The analysis is based on the combination of a multi-scale approximation of random sums by martingale difference sequences, and of a careful decomposition of the domain. The obtained results improve over previously known bounds under comparable hypotheses.
This talk is based on joint work with Gilles Blanchard and Alexandra Carpentier.
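For orientation, the classical Azuma-Hoeffding inequality that this line of work extends to random fields states that for a martingale difference sequence (X_i) with |X_i| ≤ c_i almost surely,

$$ \mathbb{P}\Bigl(\Bigl|\sum_{i=1}^{n} X_i\Bigr| \ge t\Bigr) \;\le\; 2\exp\Bigl(-\frac{t^2}{2\sum_{i=1}^{n} c_i^2}\Bigr). $$

The difficulty addressed in the talk is that a random field under a projective weak-dependence assumption is not a martingale difference sequence, hence the multi-scale approximation.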