Explaining Models

Working Papers
Published: 2024
Author(s): K. H. Yang, N. Yoder, and A. K. Zentefis

Abstract

We consider the problem of explaining models to a decision maker (DM) whose payoff depends on a state of the world described by inputs and outputs. A true model specifies the relationship between these inputs and outputs, but is not intelligible to the DM. Instead, the true model must be explained via a simpler model from a finite-dimensional set. If the DM maximizes their average payoff, then an explanation using ordinary least squares is as good as understanding the true model itself. However, if the DM maximizes their worst-case payoff, then any explanation is no better than no explanation at all. We discuss how these results apply to policy evaluation and explainable AI.
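The contrast between the average-case and worst-case results can be illustrated numerically. The sketch below is a hypothetical example, not the paper's formal model: a nonlinear "true model" is explained by a linear function fitted with ordinary least squares. Because OLS is the mean-squared-error minimizer within the simple class, no alternative linear explanation does better on average, even though the worst-case error of any such explanation can remain large.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true model" mapping inputs to outputs,
# assumed not directly intelligible to the DM.
def true_model(x):
    return np.sin(3 * x) + 0.5 * x

# Inputs drawn from the DM's decision environment.
x = rng.uniform(-1, 1, size=2000)
y = true_model(x)

# Explain the true model with a simpler, finite-dimensional model:
# here a linear function a + b*x, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_ols = y - X @ coef

# Any other linear explanation (e.g., a perturbed coefficient vector)
# has weakly higher average squared error than the OLS fit ...
alt_coef = coef + np.array([0.1, -0.1])
resid_alt = y - X @ alt_coef

mse_ols = np.mean(resid_ols ** 2)
mse_alt = np.mean(resid_alt ** 2)

# ... yet the worst-case error of the OLS explanation can still be large.
worst_ols = np.max(np.abs(resid_ols))

print(f"OLS average squared error: {mse_ols:.4f}")
print(f"alt average squared error: {mse_alt:.4f}")
print(f"OLS worst-case abs error:  {worst_ols:.4f}")
```

In this toy setting the OLS explanation is optimal on average within the linear class, mirroring the paper's average-payoff result, while its worst-case residual stays bounded away from zero, consistent with the worst-case negative result.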

Topics:
Economics