no code implementations • RepL4NLP (ACL) 2022 • Edoardo Mosca, Lukas Huber, Marc Alexander Kühn, Georg Groh
State-of-the-art machine learning models are prone to adversarial attacks: maliciously crafted inputs that fool the model into making a wrong prediction, often with high confidence.
no code implementations • ACL 2022 • Edoardo Mosca, Shreyash Agarwal, Javier Rando Ramírez, Georg Groh
Adversarial attacks are a major challenge faced by current machine learning research.
no code implementations • LNLS (ACL) 2022 • Edoardo Mosca, Defne Demirtürk, Luca Mülln, Fabio Raffagnato, Georg Groh
Interpreting NLP models is fundamental for their development as it can shed light on hidden properties and unexpected behaviors.
no code implementations • NAACL (TrustNLP) 2022 • Edoardo Mosca, Katharina Harmann, Tobias Eder, Georg Groh
Large-scale surveys are a widely used instrument to collect data from a target audience.
no code implementations • NAACL (SocialNLP) 2021 • Edoardo Mosca, Maximilian Wich, Georg Groh
As hate speech spreads on social media and in online communities, researchers continue to work on its automatic detection.
no code implementations • COLING 2022 • Edoardo Mosca, Ferenc Szigeti, Stella Tragianni, Daniel Gallagher, Georg Groh
Model explanations are crucial for the transparent, safe, and trustworthy deployment of machine learning models.
2 code implementations • 10 Apr 2024 • Miriam Anschütz, Edoardo Mosca, Georg Groh
Text simplification seeks to improve readability while retaining the original content and meaning.
no code implementations • 6 Mar 2023 • Edoardo Mosca, Daryna Dementieva, Tohid Ebrahim Ajdari, Maximilian Kummeth, Kirill Gringauz, Yutong Zhou, Georg Groh
Interpretability and human oversight are fundamental pillars of deploying complex NLP models into real-world applications.