no code implementations • GermEval 2021 • Jaqueline Böck, Daria Liakhovets, Mina Schütz, Armin Kirchknopf, Djordje Slijepčević, Matthias Zeppelzauer, Alexander Schindler
Our best model is GottBERT (i.e., a BERT transformer pre-trained on German texts) fine-tuned on the GermEval 2021 data.
no code implementations • 22 Nov 2022 • Armin Kirchknopf, Djordje Slijepcevic, Ilkay Wunderlich, Michael Breiter, Johannes Traxler, Matthias Zeppelzauer
We investigate the problem of explainability for visual object detectors.
no code implementations • 9 Jun 2021 • Mina Schütz, Jaqueline Boeck, Daria Liakhovets, Djordje Slijepčević, Armin Kirchknopf, Manuel Hecht, Johannes Bogensperger, Sven Schlarb, Alexander Schindler, Matthias Zeppelzauer
For both tasks, our best model is XLM-R with unsupervised pre-training on the EXIST data plus additional datasets, followed by fine-tuning on the provided dataset.
no code implementations • 31 May 2021 • Armin Kirchknopf, Djordje Slijepcevic, Matthias Zeppelzauer
Social media is accompanied by an increasing proportion of false or misleading content, a phenomenon known as information disorder.