no code implementations • 9 Feb 2024 • Nicolas M. Müller, Piotr Kawa, Shen Hu, Matthias Neu, Jennifer Williams, Philip Sperl, Konstantin Böttinger
We argue that this binary distinction between genuine human speech and synthetically generated fake speech is oversimplified.
no code implementations • 14 Nov 2023 • Ana Răduţoiu, Jan-Philipp Schulze, Philip Sperl, Konstantin Böttinger
Neural networks form the foundation of many intelligent systems, yet they are known to be easily fooled by adversarial examples.
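The entry above only names adversarial examples; as background (not this paper's method), here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch, where `model`, `loss_fn`, `x`, and `y` are assumed placeholders:

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.03):
    """Fast Gradient Sign Method: a one-step adversarial perturbation.

    `model`, `loss_fn`, `x`, `y` are placeholders for any differentiable
    classifier, its loss function, an input batch, and the true labels.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```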
no code implementations • 30 Oct 2023 • Nicolas M. Müller, Maximilian Burgert, Pascal Debus, Jennifer Williams, Philip Sperl, Konstantin Böttinger
Machine learning (ML) shortcuts, or spurious correlations, are artifacts in datasets that lead to very good training and test performance but severely limit the model's generalization capability.
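To make the shortcut notion concrete, here is an illustrative toy demo (not the paper's dataset): an "artifact" feature encodes the label during training, so in-distribution scores look near-perfect, but accuracy collapses on data where the artifact no longer correlates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, artifact_correlated):
    """Five weak genuine features plus one 'artifact' column."""
    y = rng.integers(0, 2, n)
    signal = y[:, None] + rng.normal(0.0, 2.0, (n, 5))
    if artifact_correlated:
        artifact = y[:, None].astype(float)                   # shortcut: encodes the label
    else:
        artifact = rng.integers(0, 2, (n, 1)).astype(float)   # artifact broken at deployment
    return np.hstack([signal, artifact]), y

X_train, y_train = make_data(2000, artifact_correlated=True)
X_shift, y_shift = make_data(2000, artifact_correlated=False)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("with shortcut:   ", clf.score(X_train, y_train))   # near-perfect
print("shortcut broken: ", clf.score(X_shift, y_shift))    # drops toward chance
```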
no code implementations • 22 Aug 2023 • Nicolas M. Müller, Philip Sperl, Konstantin Böttinger
Current anti-spoofing and audio deepfake detection systems use either magnitude-spectrogram-based features (such as CQT or mel spectrograms) or raw audio processed through convolutional or sinc layers.
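As a quick illustration of these input features, a sketch using librosa (not the paper's pipeline; the example file is downloaded by librosa on first use):

```python
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))  # any mono waveform works here

# Magnitude-spectrogram features commonly fed to audio deepfake detectors:
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80)
log_mel = librosa.power_to_db(mel)           # log-mel spectrogram
cqt = np.abs(librosa.cqt(y, sr=sr))          # constant-Q transform magnitudes

print(log_mel.shape, cqt.shape)              # (n_mels, frames), (n_bins, frames)
```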
1 code implementation • 8 Feb 2023 • Nicolas M. Müller, Simon Roschmann, Shahbaz Khan, Philip Sperl, Konstantin Böttinger
For real-world applications of machine learning (ML), it is essential that models make predictions based on well-generalizing features rather than spurious correlations in the data.
no code implementations • 9 Jan 2023 • Karla Pizzi, Franziska Boenisch, Ugur Sahin, Konstantin Böttinger
To the best of our knowledge, our work is the first to extend model inversion (MI) attacks to audio data, and our results highlight the security risks resulting from the extraction of biometric data in this setup.
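The snippet does not spell out the attack itself; below is a hedged, generic sketch of gradient-based model inversion (in the spirit of Fredrikson et al., not necessarily this paper's procedure), where `model` and `shape` are placeholders, e.g. a speaker classifier and a spectrogram shape:

```python
import torch

def invert_class(model, target_class, shape, steps=500, lr=0.1):
    """Generic model inversion: gradient-ascend an input so a frozen
    classifier assigns high confidence to `target_class`."""
    x = (0.01 * torch.randn(1, *shape)).requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        loss = -logits[0, target_class]  # maximize the target class logit
        loss.backward()
        opt.step()
    # A reconstruction of what the model "associates" with the target class.
    return x.detach()
```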
no code implementations • 24 Nov 2022 • Nicolas M. Müller, Jochen Jacobs, Jennifer Williams, Konstantin Böttinger
This is often due to the existence of machine learning shortcuts: features in the data that are predictive but unrelated to the problem at hand.
1 code implementation • 21 Jun 2022 • Jan-Philipp Schulze, Philip Sperl, Ana Răduţoiu, Carla Sagebiel, Konstantin Böttinger
Neural networks follow a gradient-based learning scheme, adapting their mapping parameters by back-propagating the output loss (see the sketch below).
Tasks: Semi-supervised Anomaly Detection, Supervised Anomaly Detection
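The gradient-based learning scheme mentioned in the entry above, as a minimal PyTorch round-trip (toy model and data, purely illustrative):

```python
import torch
import torch.nn as nn

# Forward pass, loss, back-propagation, and a parameter update.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 4)           # toy input batch
y = torch.randint(0, 2, (8,))   # toy labels

opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                 # back-propagate the output loss
opt.step()                      # adapt the mapping parameters
```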
no code implementations • 30 Mar 2022 • Nicolas M. Müller, Pavel Czempin, Franziska Dieckmann, Adam Froghyar, Konstantin Böttinger
Current text-to-speech algorithms produce realistic fakes of human voices, making deepfake detection a much-needed area of research.
no code implementations • 1 Feb 2022 • Karla Markert, Donika Mirdita, Konstantin Böttinger
Automatic speech recognition (ASR) systems are ubiquitous in our everyday devices.
Tasks: Automatic Speech Recognition (ASR) +1
no code implementations • 1 Feb 2022 • Karla Markert, Romain Parracone, Mykhailo Kulakov, Philip Sperl, Ching-Yu Kao, Konstantin Böttinger
Automatic speech recognition (ASR) is getting ever better at mimicking human speech processing.
Tasks: Automatic Speech Recognition (ASR) +1
no code implementations • 17 May 2021 • Franziska Boenisch, Philip Sperl, Konstantin Böttinger
An important problem in deep learning is the privacy and security of neural networks (NNs).
no code implementations • 14 Apr 2021 • Nicolas M. Müller, Simon Roschmann, Konstantin Böttinger
Since many applications rely on untrusted training data, an attacker can easily craft malicious samples and inject them into the training dataset to degrade the performance of machine learning models.
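A toy illustration of such an injection attack (label flipping on an untrusted training set; not the paper's specific method):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# The attacker flips the labels of 20% of the untrusted training samples.
y_poisoned = y_tr.copy()
idx = np.random.default_rng(0).choice(len(y_tr), size=len(y_tr) // 5, replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
print(f"clean: {clean:.3f}  poisoned: {poisoned:.3f}")  # accuracy degrades
```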
no code implementations • 12 Feb 2021 • Pascal Debus, Nicolas Müller, Konstantin Böttinger
In this setting, the usual round-robin scheme, which always replaces the oldest backup, is no longer optimal with respect to avoidable exposure.
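The round-robin scheme referred to above, sketched in a few lines (illustrative only):

```python
from collections import deque

class RoundRobinBackups:
    """The usual round-robin scheme: keep k backup slots and always
    overwrite the oldest one. The paper argues this is not optimal
    with respect to avoidable exposure."""

    def __init__(self, k):
        self.slots = deque(maxlen=k)  # oldest backup is dropped automatically

    def store(self, snapshot):
        self.slots.append(snapshot)

backups = RoundRobinBackups(k=3)
for day in range(7):
    backups.store(f"snapshot-{day}")
print(list(backups.slots))  # ['snapshot-4', 'snapshot-5', 'snapshot-6']
```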
no code implementations • 26 Jan 2021 • Nicolas M. Müller, Konstantin Böttinger
In this paper, we share an intriguing observation: the combination of these techniques is particularly susceptible to a new kind of data poisoning attack. By adding small adversarial noise to the input, it is possible to create a collision in the output space of the transfer learner.
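A hedged sketch of the collision idea: optimize a small perturbation so a frozen feature extractor maps the poisoned input onto a chosen target's representation. `extractor` is a placeholder for any pretrained network; this is an interpretation of the sentence above, not the paper's exact algorithm:

```python
import torch

def feature_collision(extractor, x_base, x_target, eps=0.05, steps=300, lr=0.01):
    """Craft small noise so `extractor` maps x_base + delta close to
    the features of x_target, creating a collision in output space."""
    delta = torch.zeros_like(x_base, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feat = extractor(x_target)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((extractor(x_base + delta) - target_feat) ** 2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation small
    return (x_base + delta).detach()
```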
2 code implementations • 14 Oct 2020 • Tom Dörr, Karla Markert, Nicolas M. Müller, Konstantin Böttinger
We devise an approach to mitigate this flaw and find that our method improves the generation of adversarial examples with varying offsets.
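One plausible reading of "varying offsets", sketched below as an expectation-over-transformations style attack on audio; this is a hypothetical reconstruction, not the paper's published method, and `model`, `loss_fn`, and `y_target` are assumed placeholders:

```python
import torch

def offset_robust_perturbation(model, loss_fn, x, y_target, max_shift=800,
                               eps=0.01, steps=500, lr=1e-3):
    """Optimize audio adversarial noise in expectation over random time
    offsets, so the attack survives varying alignments of the waveform."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        shift = int(torch.randint(0, max_shift, (1,)))
        x_shifted = torch.roll(x + delta, shifts=shift, dims=-1)
        opt.zero_grad()
        loss = loss_fn(model(x_shifted), y_target)  # push toward the target output
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the noise small
    return delta.detach()
```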
2 code implementations • 15 Sep 2020 • Nicolas Michael Müller, Daniel Kowatsch, Konstantin Böttinger
Adversarial data poisoning is an effective attack against machine learning and threatens model integrity by introducing poisoned data into the training dataset.
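Another toy illustration (generic, not the paper's attack): a handful of injected points pull a regression fit away from the true relation:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 1))
y = 2.0 * X.ravel() + rng.normal(0, 0.5, 200)    # true slope: 2

X_poison = np.full((10, 1), 10.0)                # attacker's injected inputs
y_poison = np.full(10, -100.0)                   # adversarial targets

clean = LinearRegression().fit(X, y)
dirty = LinearRegression().fit(np.vstack([X, X_poison]),
                               np.concatenate([y, y_poison]))
print(f"clean slope: {clean.coef_[0]:.2f}  poisoned slope: {dirty.coef_[0]:.2f}")
```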
no code implementations • 7 Aug 2020 • Philip Sperl, Konstantin Böttinger
To overcome the downsides of adversarial training while still providing a high level of security, we present a new training approach we call "entropic retraining".
1 code implementation • 3 Mar 2020 • Philip Sperl, Jan-Philipp Schulze, Konstantin Böttinger
Based on the activation values in the target network, the alarm network decides if the given sample is normal (see the sketch below).
Tasks: Semi-supervised Anomaly Detection, Supervised Anomaly Detection
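A sketch of this two-network setup: a forward hook taps the target network's hidden activations, and a small alarm network scores them (toy dimensions, untrained weights; illustrative only):

```python
import torch
import torch.nn as nn

target = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
activations = {}

def hook(module, inp, out):
    activations["hidden"] = out.detach()

target[1].register_forward_hook(hook)  # observe the ReLU layer's output

alarm = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

x = torch.randn(4, 20)                    # incoming samples
_ = target(x)                             # normal forward pass; hook records activations
p_normal = alarm(activations["hidden"])   # alarm net scores each sample
print(p_normal.squeeze(-1))               # once trained: near 1 -> normal, near 0 -> anomalous
```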
no code implementations • 5 Nov 2019 • Philip Sperl, Ching-Yu Kao, Peng Chen, Konstantin Böttinger
In this paper, we present a novel end-to-end framework to detect such attacks during classification without influencing the target model's performance.
no code implementations • 14 Jan 2018 • Konstantin Böttinger, Patrice Godefroid, Rishabh Singh
Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs.
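The fuzzing loop described above, as a minimal mutation-based sketch (toy target function, purely illustrative):

```python
import random

def fuzz(target, seed: bytes, iterations=10_000):
    """Repeatedly test `target` (any input-processing function) with
    randomly modified inputs and collect those that trigger crashes."""
    crashes = []
    for _ in range(iterations):
        data = bytearray(seed)
        for _ in range(random.randint(1, 8)):            # flip a few random bytes
            data[random.randrange(len(data))] = random.randrange(256)
        try:
            target(bytes(data))
        except Exception as exc:                         # a crash: save the input
            crashes.append((bytes(data), exc))
    return crashes

# Example: fuzz a toy parser that chokes on a specific byte pattern.
def toy_parser(data: bytes):
    if data[:2] == b"\xff\xfe":
        raise ValueError("malformed header")

found = fuzz(toy_parser, seed=b"hello world!", iterations=5000)
print(f"{len(found)} crashing inputs found")
```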