no code implementations • 9 Mar 2024 • Michael Toker, Hadas Orgad, Mor Ventura, Dana Arad, Yonatan Belinkov
Text-to-image (T2I) diffusion models use a latent representation of a text prompt to guide the image generation process.
1 code implementation • 25 Aug 2023 • Rohit Gandikota, Hadas Orgad, Yonatan Belinkov, Joanna Materzyńska, David Bau
Text-to-image models suffer from various safety issues that may limit their suitability for deployment.
1 code implementation • 1 Jun 2023 • Dana Arad, Hadas Orgad, Yonatan Belinkov
Our world is marked by unprecedented technological, global, and socio-political transformations, posing a significant challenge to text-to-image generative models.
1 code implementation • ICCV 2023 • Hadas Orgad, Bahjat Kawar, Yonatan Belinkov
Our Text-to-Image Model Editing method, TIME for short, receives a pair of inputs: a "source" under-specified prompt for which the model makes an implicit assumption (e.g., "a pack of roses"), and a "destination" prompt that describes the same setting, but with a specified desired attribute (e.g., "a pack of blue roses").
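Prompt-pair editing of this kind can be sketched as a closed-form update to a linear projection layer: steer the output for the source prompt's embedding toward what the destination prompt would produce, while a ridge term keeps the weights close to the original. The NumPy sketch below is a minimal, hypothetical illustration under those assumptions; the variable names, the single-vector setting, and the regularizer `lam` are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def edit_projection(W, c_src, c_dst, lam=1e-6):
    """Return an edited weight matrix W' such that W' @ c_src is close to
    W @ c_dst (the output the destination prompt would have produced),
    while a ridge penalty keeps W' near the original W.

    Closed-form minimizer of:
        ||W' c_src - W c_dst||^2 + lam * ||W' - W||_F^2
    """
    v_star = W @ c_dst                      # desired output for the source embedding
    d = c_src.shape[0]
    Wp = (lam * W + np.outer(v_star, c_src)) @ np.linalg.inv(
        lam * np.eye(d) + np.outer(c_src, c_src)
    )
    return Wp

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))                 # toy projection layer
c_src = rng.normal(size=4)                  # e.g. embedding of "a pack of roses"
c_dst = rng.normal(size=4)                  # e.g. embedding of "a pack of blue roses"
Wp = edit_projection(W, c_src, c_dst)
# The edited layer now sends the source embedding to the destination output:
print(np.allclose(Wp @ c_src, W @ c_dst, atol=1e-3))
```

Because the update is closed-form, no fine-tuning loop is needed; only one projection matrix changes, which is what makes this style of edit fast and targeted.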
1 code implementation • 20 Dec 2022 • Hadas Orgad, Yonatan Belinkov
Common methods to mitigate biases require prior information on the types of biases that should be mitigated (e.g., gender or racial bias) and the social groups associated with each data sample.
no code implementations • NAACL (GeBNLP) 2022 • Hadas Orgad, Yonatan Belinkov
In this position paper, we assess the current paradigm of gender bias evaluation and identify several flaws in it.
2 code implementations • NAACL 2022 • Hadas Orgad, Seraphina Goldfarb-Tarrant, Yonatan Belinkov
Common studies of gender bias in NLP focus either on extrinsic bias measured by model performance on a downstream task or on intrinsic bias found in models' internal representations.