no code implementations • 6 Jun 2024 • Keyon Vafa, Justin Y. Chen, Jon Kleinberg, Sendhil Mullainathan, Ashesh Rambachan
Building generative models that meaningfully capture the underlying logic of the domains they model would be immensely valuable; our results suggest new ways to assess how close a given model is to that goal.
1 code implementation • 3 Jun 2024 • Keyon Vafa, Ashesh Rambachan, Sendhil Mullainathan
Our results show that -- especially in cases where the cost of mistakes is high -- more capable models (e.g., GPT-4) can do worse on the instances people choose to use them for, precisely because they are not aligned with the human generalization function.
no code implementations • 15 Apr 2024 • Sendhil Mullainathan, Ashesh Rambachan
Facing a similar problem -- how to extract theoretical insights from their intuitions -- researchers often turned to "anomalies": constructed examples that highlight flaws in an existing theory and spur the development of new ones.
no code implementations • 19 Dec 2022 • Ashesh Rambachan, Amanda Coston, Edward Kennedy
Predictive algorithms inform consequential decisions in settings where the outcome is selectively observed given choices made by human decision makers.
1 code implementation • 2 Jan 2021 • Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or "the set of good models."
no code implementations • 3 Aug 2020 • Ashesh Rambachan, Jonathan Roth
An interesting feature of our framework is that conventional standard errors tend to become more conservative when treatment probabilities vary more across units, i.e., when there is more selection into treatment.
no code implementations • 18 Sep 2019 • Ashesh Rambachan, Jonathan Roth
We refer to this phenomenon as "bias reversal."