Search Results for author: Ashesh Rambachan

Found 7 papers, 2 papers with code

Evaluating the World Model Implicit in a Generative Model

no code implementations · 6 Jun 2024 · Keyon Vafa, Justin Y. Chen, Jon Kleinberg, Sendhil Mullainathan, Ashesh Rambachan

Building generative models that meaningfully capture the underlying logic of the domains they model would be immensely valuable; our results suggest new ways to assess how close a given model is to that goal.

Do Large Language Models Perform the Way People Expect? Measuring the Human Generalization Function

1 code implementation · 3 Jun 2024 · Keyon Vafa, Ashesh Rambachan, Sendhil Mullainathan

Our results show that -- especially in cases where the cost of mistakes is high -- more capable models (e.g. GPT-4) can do worse on the instances people choose to use them for, precisely because they are not aligned with the human generalization function.

From Predictive Algorithms to Automatic Generation of Anomalies

no code implementations · 15 Apr 2024 · Sendhil Mullainathan, Ashesh Rambachan

Facing a similar problem -- how to extract theoretical insights from their intuitions -- researchers have often turned to "anomalies": constructed examples that highlight flaws in an existing theory and spur the development of new ones.

Robust Design and Evaluation of Predictive Algorithms under Unobserved Confounding

no code implementations · 19 Dec 2022 · Ashesh Rambachan, Amanda Coston, Edward Kennedy

Predictive algorithms inform consequential decisions in settings where the outcome is selectively observed given choices made by human decision makers.

Decision Making · Robust Design

Characterizing Fairness Over the Set of Good Models Under Selective Labels

1 code implementation · 2 Jan 2021 · Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova

We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or "the set of good models."

Fairness

Design-Based Uncertainty for Quasi-Experiments

no code implementations · 3 Aug 2020 · Ashesh Rambachan, Jonathan Roth

An interesting feature of our framework is that conventional standard errors tend to become more conservative when treatment probabilities vary more across units, i.e. when there is more selection into treatment.

