Learning the choice function: a non-parametric approach
One common assumption behind many decision models is that a choice function, i.e. a function used to rank prospects, is readily available. Specifying a choice function is, however, far from straightforward in many applications. For instance, in risk minimization problems, the function must faithfully represent an individual's preference over random variables, yet such a preference is often not directly observable and can only be partially inferred from the decisions the individual is observed to make. The question of how to infer a choice function from observed decisions leads to the study of inverse optimization, whose goal is to determine a choice function that renders the observed decisions (approximately) optimal. Most existing studies rely on parametric assumptions about the choice function to establish the tractability of the inverse optimization problem. This, unfortunately, can lead to a biased estimate of the choice function. In this talk, I will start by presenting a general inverse optimization framework and then show how the problem can be solved efficiently via convex optimization without resorting to any parametric assumption. Our method exploits the theory of conjugate duality, which provides the necessary characterization of a function from both primal and dual perspectives. Finally, we stress the "data-driven" aspect of our approach and demonstrate the convergence behavior of the learning process through an example of learning risk measures.
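To build intuition for the inverse-optimization idea above, here is a minimal toy sketch (not the talk's non-parametric method, which instead uses conjugate duality): it assumes a simple linear choice function with hypothetical weights `w` and recovers `w` from observed decisions by minimizing the total suboptimality of those decisions over their candidate sets.

```python
# Toy inverse optimization sketch (hypothetical example): recover the
# weights of a linear choice function from observed decisions.

def suboptimality_loss(w, observations):
    """Sum over observations of max_{x in S} w.x  -  w.x_chosen.

    The loss is zero exactly when every observed decision is optimal
    for the choice function defined by w.
    """
    loss = 0.0
    for candidates, chosen in observations:
        best = max(sum(wi * xi for wi, xi in zip(w, x)) for x in candidates)
        loss += best - sum(wi * xi for wi, xi in zip(w, chosen))
    return loss

# Decisions generated by an unseen decision maker with true w = (0.7, 0.3):
# each pair is (candidate set, observed choice from that set).
observations = [
    ([(1.0, 0.0), (0.0, 1.0), (0.6, 0.6)], (1.0, 0.0)),
    ([(0.2, 1.0), (0.5, 0.4), (0.9, 0.1)], (0.9, 0.1)),
    ([(0.3, 0.9), (0.8, 0.3)], (0.8, 0.3)),
]

# Grid search over normalized weights (w1, 1 - w1); in general this
# fitting problem can be posed as a convex program.
best_w = min(
    ((w1 / 100.0, 1.0 - w1 / 100.0) for w1 in range(101)),
    key=lambda w: suboptimality_loss(w, observations),
)
print(best_w, suboptimality_loss(best_w, observations))
```

The recovered weights render all observed choices optimal (zero loss); with a parametric family fixed in advance, however, a misspecified family would bias this estimate, which is the motivation for the non-parametric approach described in the abstract.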
10:00 – 11:30 AM
Jonathan Yu-Meng Li, University of Ottawa, Canada