1 code implementation • 14 Feb 2024 • Jessica Quaye, Alicia Parrish, Oana Inel, Charvi Rastogi, Hannah Rose Kirk, Minsuk Kahng, Erin Van Liemt, Max Bartolo, Jess Tsang, Justin White, Nathan Clement, Rafael Mosquera, Juan Ciro, Vijay Janapa Reddi, Lora Aroyo
By focusing on "implicitly adversarial" prompts (those that trigger T2I models to generate unsafe images for non-obvious reasons), we isolate a set of difficult safety issues that human creativity is well-suited to uncover.
1 code implementation • 22 Aug 2023 • Oana Inel, Tim Draws, Lora Aroyo
We argue that data collection for AI should be performed in a responsible manner where the quality of the data is thoroughly scrutinized and measured through a systematic set of appropriate metrics.
no code implementations • 22 May 2023 • Alicia Parrish, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Max Bartolo, Oana Inel, Juan Ciro, Rafael Mosquera, Addison Howard, Will Cukierski, D. Sculley, Vijay Janapa Reddi, Lora Aroyo
To address this need, we introduce the Adversarial Nibbler challenge.
1 code implementation • 28 Jul 2022 • Ombretta Strafforello, Vanathi Rajasekar, Osman S. Kayhan, Oana Inel, Jan van Gemert
Our work is the first to evaluate IoU with humans, and it makes clear that relying on IoU scores alone to evaluate localization errors may not be sufficient.
1 code implementation • NeurIPS 2023 • Mark Mazumder, Colby Banbury, Xiaozhe Yao, Bojan Karlaš, William Gaviria Rojas, Sudnya Diamos, Greg Diamos, Lynn He, Alicia Parrish, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Douwe Kiela, David Jurado, David Kanter, Rafael Mosquera, Juan Ciro, Lora Aroyo, Bilge Acun, Lingjiao Chen, Mehul Smriti Raje, Max Bartolo, Sabri Eyuboglu, Amirata Ghorbani, Emmett Goodman, Oana Inel, Tariq Kane, Christine R. Kirkpatrick, Tzu-Sheng Kuo, Jonas Mueller, Tristan Thrush, Joaquin Vanschoren, Margaret Warren, Adina Williams, Serena Yeung, Newsha Ardalani, Praveen Paritosh, Lilith Bat-Leah, Ce Zhang, James Zou, Carole-Jean Wu, Cody Coleman, Andrew Ng, Peter Mattson, Vijay Janapa Reddi
Machine learning research has long focused on models rather than datasets, and prominent datasets are used for common ML tasks without regard to the breadth, difficulty, and faithfulness of the underlying problems.
no code implementations • 15 Jan 2021 • Mats Mulder, Oana Inel, Jasper Oosterman, Nava Tintarev
We apply this notion to a re-ranking of topic-relevant recommended lists, to form the basis of a novel viewpoint diversification method.
no code implementations • 25 Feb 2019 • Silas Ørting, Andrew Doyle, Arno van Hilten, Matthias Hirth, Oana Inel, Christopher R. Madan, Panagiotis Mavridis, Helen Spiers, Veronika Cheplygina
Despite the growing popularity of this approach, there has not yet been a comprehensive literature review to provide guidance to researchers considering using crowdsourcing methodologies in their own medical imaging analysis.
2 code implementations • 18 Aug 2018 • Anca Dumitrache, Oana Inel, Lora Aroyo, Benjamin Timmermans, Chris Welty
However, in many domains there is ambiguity in the data, as well as a multitude of perspectives on the information examples.
Human-Computer Interaction • Social and Information Networks
1 code implementation • COLING 2018 • Tommaso Caselli, Oana Inel
This paper describes a crowdsourcing experiment on the annotation of plot-like structures in English news articles.
no code implementations • LREC 2016 • Oana Inel, Tommaso Caselli, Lora Aroyo
On the other hand, machines need to understand the information that is published in online data streams and generate concise and meaningful overviews.
no code implementations • LREC 2016 • Tommaso Caselli, Rachele Sprugnoli, Oana Inel
This paper describes two sets of crowdsourcing experiments on temporal information annotation conducted in two languages, i.e., English and Italian.