no code implementations • 11 Apr 2024 • Yuwei Sun, Ippei Fujisawa, Arthur Juliani, Jun Sakuma, Ryota Kanai
Neural networks encounter the challenge of Catastrophic Forgetting (CF) in continual learning, where new task learning interferes with previously learned knowledge.
1 code implementation • 19 Oct 2023 • Joshua Butke, Noriaki Hashimoto, Ichiro Takeuchi, Hiroaki Miyoshi, Koichi Ohshima, Jun Sakuma
Whole-slide image analysis via the means of computational pathology often relies on processing tessellated gigapixel images with only slide-level labels available.
no code implementations • 27 May 2023 • Kaiwen Xu, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma
A concept-based classifier can explain the decision process of a deep learning model in terms of human-understandable concepts in image classification problems.
no code implementations • 17 Apr 2023 • Junki Mori, Ryo Furukawa, Isamu Teranishi, Jun Sakuma
To overcome this issue, we propose a novel method, predictive adversarial domain adaptation (PADA), which can predict likely positive examples from the unlabeled target data and simultaneously align the feature spaces to reduce the distribution divergence between the whole source data and the likely positive target data.
no code implementations • 2 Apr 2023 • Yuwei Sun, Hideya Ochiai, Jun Sakuma
To this end, we propose an instance-level multimodal Trojan attack on VQA that efficiently adapts to fine-tuned models through a dual-modality adversarial learning method.
no code implementations • 28 Mar 2023 • Atsuhiro Miyagi, Yoshiki Miyauchi, Atsuo Maki, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
In this study, we consider a continuous min–max optimization problem $\min_{x \in \mathbb{X}} \max_{y \in \mathbb{Y}} f(x, y)$ whose objective function is a black box.
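As a hedged illustration of the problem structure (not the paper's simulation-based method), a min–max problem of this form can be attacked by gradient descent in $x$ and ascent in $y$, with derivatives estimated purely from function evaluations, as a black-box setting requires. The toy objective and step size below are illustrative assumptions:

```python
def f(x, y):
    # Toy smooth convex-concave objective with an interaction term x*y;
    # its saddle point is at (0, 0).
    return x * x + 0.5 * x * y - y * y

def fd(g, z, eps=1e-4):
    # Central finite-difference derivative: only function values are used,
    # mimicking the black-box setting where no analytic gradient exists.
    return (g(z + eps) - g(z - eps)) / (2 * eps)

x, y = 2.0, -1.5
lr = 0.1
for _ in range(2000):
    gx = fd(lambda x_: f(x_, y), x)
    gy = fd(lambda y_: f(x, y_), y)
    x -= lr * gx  # descend in x (the min player)
    y += lr * gy  # ascend in y (the max player)

# (x, y) spirals into the saddle point (0, 0)
```

On strongly convex–concave objectives like this one the iterates contract toward the saddle point; the black-box difficulty the paper addresses is that each evaluation of $f$ may be an expensive simulation.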
1 code implementation • 31 Jan 2023 • Rei Sato, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
We investigate policy transfer using image-to-semantics translation to mitigate learning difficulties in vision-based robotics control agents.
no code implementations • 29 Nov 2022 • Atsuhiro Miyagi, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
To reduce the number of simulations required and increase the number of restarts for better local optimum solutions, we propose a new approach referred to as adaptive scenario subset selection (AS3).
1 code implementation • 7 Nov 2022 • Takumi Tanabe, Rei Sato, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
In this study, we focus on scenarios involving a simulation environment with uncertainty parameters and the set of their possible values, called the uncertainty parameter set.
no code implementations • 26 Sep 2022 • Daiki Morinaga, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
Evolution strategy (ES) is one of the promising classes of algorithms for black-box continuous optimization.
1 code implementation • 22 Sep 2022 • Daiki Nishiyama, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma
Therefore, we propose a loss function that can improve the separation of the important class by setting the margin only for the important class, called Class-sensitive Additive Angular Margin Loss (CAMRI Loss).
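As a rough sketch of the idea (the function name, margin, and scale values here are illustrative assumptions, not taken from the paper), an additive angular margin applied only to one important class can be written directly on cosine-similarity logits:

```python
import math

def camri_style_loss(cos_logits, label, important_class, margin=0.3, scale=16.0):
    """Softmax cross-entropy over scaled cosine logits, with an additive
    angular margin applied only when the target is the important class.
    A sketch of the CAMRI idea; margin/scale defaults are illustrative."""
    logits = list(cos_logits)
    if label == important_class:
        # Add the margin in angle space, which shrinks the target logit
        # and so demands a wider separation for the important class.
        theta = math.acos(max(-1.0, min(1.0, logits[label])))
        logits[label] = math.cos(theta + margin)
    exps = [math.exp(scale * z) for z in logits]
    return -math.log(exps[label] / sum(exps))

# The margin makes the important class harder to satisfy during training,
# pushing its features away from the decision boundary:
with_margin = camri_style_loss([0.8, 0.1, -0.2], 0, 0, margin=0.3)
without_margin = camri_style_loss([0.8, 0.1, -0.2], 0, 0, margin=0.0)
assert with_margin > without_margin
```

Classes other than the important one are trained with plain softmax cross-entropy, so overall accuracy on them is largely unaffected.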
no code implementations • 6 Apr 2022 • Atsuhiro Miyagi, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
(I) As the influence of the interaction term between $x$ and $y$ (e.g., $x^\mathrm{T} B y$) on the Lipschitz-smooth and strongly convex-concave function $f$ increases, the approaches converge to an optimal solution at a slower rate.
1 code implementation • 22 Mar 2022 • Yuwei Sun, Hideya Ochiai, Jun Sakuma
To overcome this challenge, we propose the Attacking Distance-aware Attack (ADA) to enhance a poisoning attack by finding the optimized target class in the feature space.
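A simplified reading of the "attacking distance" intuition (a sketch, not the paper's exact ADA procedure) is to pick as the poisoning target the class whose feature-space centroid lies nearest to the source class; the helper name and centroids below are hypothetical:

```python
import math

def closest_target_class(centroids, source_class):
    """Choose the target class whose feature-space centroid is nearest to
    the source class -- a sketch of the 'attacking distance' intuition,
    not the paper's exact ADA procedure."""
    src = centroids[source_class]
    nearest, nearest_d = None, float("inf")
    for cls, mu in centroids.items():
        if cls == source_class:
            continue
        d = math.dist(src, mu)  # Euclidean distance in feature space
        if d < nearest_d:
            nearest, nearest_d = cls, d
    return nearest

# A class that is close in feature space should be easier to flip toward:
centroids = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (5.0, 5.0)}
assert closest_target_class(centroids, 0) == 1
```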
Ranked #1 on Model Poisoning on Fashion-MNIST
no code implementations • 9 Sep 2021 • Thien Q. Tran, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma
The challenge is that we have to discover, in an unsupervised manner, a set of concepts, i.e., A, B, and C, that is useful for explaining the classifier.
no code implementations • 20 Aug 2021 • Taiga Ono, Takeshi Sugawara, Jun Sakuma, Tatsuya Mori
To the best of our knowledge, our work is the first to evaluate the proficiency of adversarial examples for ECGs in a physical setup.
1 code implementation • 13 Apr 2021 • Takumi Tanabe, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
When ML techniques are applied to game domains with non-tile-based level representation, such as Angry Birds, where objects in a level are specified by real-valued parameters, ML often fails to generate playable levels.
no code implementations • 2 Mar 2021 • Daiki Morinaga, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
The convergence rate, that is, the decrease rate of the distance from a search point $m_t$ to the optimal solution $x^*$, is proven to be in $O(\exp( - L / \mathrm{Tr}(H) ))$, where $L$ is the smallest eigenvalue of $H$ and $\mathrm{Tr}(H)$ is the trace of $H$.
1 code implementation • 11 Dec 2020 • Rei Sato, Jun Sakuma, Youhei Akimoto
In this paper, we propose a novel search strategy for one-shot and sparse propagation NAS, namely AdvantageNAS, which further reduces the time complexity of NAS by reducing the number of search iterations.
no code implementations • 22 Aug 2020 • Thien Q. Tran, Jun Sakuma
We also carefully design a feature selection method to select proper search terms to predict each component.
1 code implementation • 20 Nov 2019 • Hiromu Yakura, Youhei Akimoto, Jun Sakuma
We first show the feasibility of this approach in an attack against an image classifier by employing generative adversarial networks that produce image patches that have the appearance of a natural object to fool the target model.
no code implementations • 27 May 2019 • Kazuto Fukuchi, Chia-Mu Yu, Arashi Haishima, Jun Sakuma
Instead of considering the worst case, we aim to construct a private mechanism whose error rate is adaptive to the easiness of estimation of the minimum.
1 code implementation • 28 Nov 2018 • Tatsuki Koga, Naoki Nonaka, Jun Sakuma, Jun Seita
Deep learning has significant potential for medical imaging.
no code implementations • 1 Nov 2018 • Jiayang Liu, Weiming Zhang, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma
In this study, we propose a new methodology to control how a user's data is recognized and used by AI by exploiting the properties of adversarial examples.
1 code implementation • 28 Oct 2018 • Hiromu Yakura, Jun Sakuma
We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world.
no code implementations • 1 Mar 2018 • Hiroyuki Hanada, Toshiyuki Takada, Jun Sakuma, Ichiro Takeuchi
A drawback of this naive approach is that the uncertainty in the missing entries is not properly incorporated in the prediction.
no code implementations • ICLR 2018 • Kosuke Kusano, Jun Sakuma
In face recognition, we show that, when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of the faces of those individuals.
no code implementations • 20 Oct 2017 • Kazuto Fukuchi, Quang Khai Tran, Jun Sakuma
Existing differentially private ERM methods implicitly assume that the data contributors submit their private data to a database expecting that the database invokes a differentially private mechanism for publication of the learned model.
no code implementations • ICML 2017 • Kazuya Kakizaki, Kazuto Fukuchi, Jun Sakuma
This paper develops differentially private mechanisms for $\chi^2$ test of independence.
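A common baseline for this setting (a generic sketch, not necessarily the mechanism developed in the paper) is to perturb each cell of the contingency table with calibrated Laplace noise before computing the Pearson statistic; the sensitivity bound of 2 assumes that replacing one record changes two cells by 1:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling from Laplace(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def chi2_statistic(table):
    # Pearson chi-squared statistic for a two-way contingency table.
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    n = sum(row_sums)
    stat = 0.0
    for i, r in enumerate(row_sums):
        for j, c in enumerate(col_sums):
            expected = r * c / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

def noisy_chi2(table, epsilon):
    """'Perturb the counts' baseline: each cell gets Laplace(2/epsilon)
    noise (assumed L1 sensitivity 2), and cells are clamped to stay
    positive before the statistic is recomputed."""
    noisy = [[max(cell + laplace_noise(2.0 / epsilon), 1e-9) for cell in row]
             for row in table]
    return chi2_statistic(noisy)
```

The subtlety such papers address is that the noisy statistic no longer follows the chi-squared null distribution exactly, so the test's significance threshold must be re-derived.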
no code implementations • 6 Jun 2017 • Jun Sakuma, Tatsuya Osame
In this way, the predictive performance of recommendations based on anonymized ratings can be improved in some settings.
no code implementations • 1 Jun 2016 • Hiroyuki Hanada, Atsushi Shibagaki, Jun Sakuma, Ichiro Takeuchi
We study large-scale classification problems in changing environments where a small part of the dataset is modified, and the effect of the data modification must be quickly incorporated into the classifier.
no code implementations • 15 Feb 2016 • Toshiyuki Takada, Hiroyuki Hanada, Yoshiji Yamada, Jun Sakuma, Ichiro Takeuchi
The key property of the SAG method is that, given an arbitrary approximate solution, it can provide a non-probabilistic, assumption-free bound on the approximation quality under a cryptographically secure computation framework.
no code implementations • 6 Nov 2015 • Kazuto Fukuchi, Jun Sakuma
Currently, machine learning plays an important role in the lives and individual activities of numerous people.
no code implementations • 24 Jul 2015 • Rina Okada, Kazuto Fukuchi, Kazuya Kakizaki, Jun Sakuma
One is the query to count outliers, which reports the number of outliers that appear in a given subspace.
no code implementations • 25 Jun 2015 • Kazuto Fukuchi, Jun Sakuma
In this paper, we propose a general framework for fairness-aware learning that uses f-divergences and that covers most of the dependency measures employed in the existing methods.
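To illustrate the unifying view (a sketch of the general idea, not the paper's estimator), dependence between model outputs and a sensitive attribute can be measured as an f-divergence between the joint distribution and the product of marginals; choosing $f(t) = t \log t$ recovers mutual information, while other convex $f$ give other dependence measures:

```python
import math
from collections import Counter

def f_dependence(preds, sensitive, f=lambda t: t * math.log(t)):
    """Empirical estimate of D_f( P(y, s) || P(y) P(s) ).
    With f(t) = t*log(t) this is the mutual information between the
    prediction y and the sensitive attribute s; zero cells are skipped,
    which is exact for mutual information since t*log(t) -> 0 as t -> 0."""
    n = len(preds)
    joint = Counter(zip(preds, sensitive))
    p_y = Counter(preds)
    p_s = Counter(sensitive)
    div = 0.0
    for (y, s), count in joint.items():
        p_joint = count / n
        p_indep = (p_y[y] / n) * (p_s[s] / n)
        div += p_indep * f(p_joint / p_indep)
    return div

# Independent prediction and attribute -> zero dependence:
assert abs(f_dependence([0, 1, 0, 1], [0, 0, 1, 1])) < 1e-12
# Fully dependent -> mutual information log(2):
assert abs(f_dependence([0, 0, 1, 1], [0, 0, 1, 1]) - math.log(2)) < 1e-12
```

A fairness-aware learner would then penalize this dependence term alongside the usual training loss.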