no code implementations • 4 Oct 2023 • Jiri Navratil, Benjamin Elder, Matthew Arnold, Soumya Ghosh, Prasanna Sattigeri
Accurate quantification of model uncertainty has long been recognized as a fundamental requirement for trusted AI.
1 code implementation • 1 Jun 2021 • Jiri Navratil, Benjamin Elder, Matthew Arnold, Soumya Ghosh, Prasanna Sattigeri
Accurate quantification of model uncertainty has long been recognized as a fundamental requirement for trusted AI.
no code implementations • 15 Dec 2020 • Benjamin Elder, Matthew Arnold, Anupama Murthi, Jiri Navratil
We address this core problem of performance prediction uncertainty with a method to compute prediction intervals for model performance.
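The abstract does not give the authors' interval construction, but the general idea of a prediction interval for model performance can be illustrated with a generic bootstrap over held-out test outcomes (a sketch only, not the paper's method; the function name and parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def accuracy_interval(correct, n_boot=2000, alpha=0.05):
    """Bootstrap a (1 - alpha) interval for model accuracy.

    `correct` is a 0/1 array marking which test predictions were right.
    Generic bootstrap sketch -- not the interval method from the paper.
    """
    correct = np.asarray(correct, dtype=float)
    # resample test outcomes with replacement, n_boot times
    idx = rng.integers(0, len(correct), size=(n_boot, len(correct)))
    boot_acc = correct[idx].mean(axis=1)          # accuracy per resample
    lo, hi = np.quantile(boot_acc, [alpha / 2, 1 - alpha / 2])
    return correct.mean(), (lo, hi)

# 90 of 100 predictions correct: point estimate 0.9 plus an interval around it
acc, (lo, hi) = accuracy_interval(np.r_[np.ones(90), np.zeros(10)])
```

The interval width shrinks as the test set grows, which is one way such methods communicate how much a reported performance number can be trusted.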
no code implementations • 10 Jul 2020 • Begum Taskazan, Jiri Navratil, Matthew Arnold, Anupama Murthi, Ganesh Venkataraman, Benjamin Elder
Building and maintaining high-quality test sets remains a laborious and expensive task.
no code implementations • 2 Jul 2020 • Jiri Navratil, Matthew Arnold, Benjamin Elder
Generating high-quality uncertainty estimates for sequential regression, particularly for deep recurrent networks, remains a challenging and open problem.
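One common baseline for sequential-regression uncertainty is an ensemble: fit several models on resampled data and read uncertainty off the spread of their next-step predictions. The toy below uses a linear AR(1) model rather than the deep recurrent networks the paper targets; the function and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_ensemble_uncertainty(series, n_models=50):
    """Ensemble-spread uncertainty sketch for one-step-ahead regression.

    Fits y_t = a * y_{t-1} + b by least squares on bootstrap resamples of
    the (y_{t-1}, y_t) pairs; the mean and std of the ensemble's next-step
    predictions give a crude estimate plus uncertainty. Illustrative toy,
    not the paper's recurrent-network method.
    """
    x, y = series[:-1], series[1:]
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), size=len(x))
        a, b = np.polyfit(x[idx], y[idx], deg=1)   # slope, intercept
        preds.append(a * series[-1] + b)
    preds = np.array(preds)
    return preds.mean(), preds.std()

# synthetic AR(1) series with Gaussian noise
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + rng.normal(scale=0.1)

mean, std = ar1_ensemble_uncertainty(y)
```

The ensemble standard deviation reflects how sensitive the fitted dynamics are to resampling the history, which is one proxy for predictive uncertainty.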
no code implementations • 28 Mar 2020 • Matthew Arnold, Jeffrey Boston, Michael Desmond, Evelyn Duesterwald, Benjamin Elder, Anupama Murthi, Jiri Navratil, Darrell Reimer
Today's AI deployments often require significant human involvement and skill in the operational stages of the model lifecycle, including pre-release testing, monitoring, problem diagnosis and model improvements.
no code implementations • 22 Aug 2018 • Matthew Arnold, Rachel K. E. Bellamy, Michael Hind, Stephanie Houde, Sameep Mehta, Aleksandra Mojsilovic, Ravi Nair, Karthikeyan Natesan Ramamurthy, Darrell Reimer, Alexandra Olteanu, David Piorkowski, Jason Tsay, Kush R. Varshney
We envision such documents containing purpose, performance, safety, security, and provenance information, completed by AI service providers for examination by consumers.
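A document of this kind could be represented as a simple structured record whose top-level fields follow the five categories named in the abstract; all field values below are hypothetical placeholders:

```python
# Minimal illustrative record for an AI-service supplier's declaration.
# Field names follow the categories in the abstract; values are invented.
factsheet = {
    "purpose": "Sentiment classification for English product reviews",
    "performance": {"accuracy": 0.91, "evaluated_on": "held-out test set"},
    "safety": "Not intended for medical or legal decision making",
    "security": "Served only behind an authenticated API",
    "provenance": "Trained on publicly licensed review corpora",
}

# A consumer-side check that every expected section is present
required = {"purpose", "performance", "safety", "security", "provenance"}
assert required <= set(factsheet)
```

Keeping the schema fixed is what makes such declarations comparable across providers, in the same spirit as standardized datasheets.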