Search Results for author: Naoyuki Terashita

Found 2 papers, 2 papers with code

Decentralized Hyper-Gradient Computation over Time-Varying Directed Networks

1 code implementation • 5 Oct 2022 • Naoyuki Terashita, Satoshi Hara

As a result, the hyper-gradient estimator derived from our optimality condition enjoys two desirable properties: (i) it only requires Push-Sum communication of vectors, and (ii) it can operate over time-varying directed networks.

Bilevel Optimization · Federated Learning
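The abstract's claim that the estimator only needs Push-Sum communication over time-varying directed graphs can be illustrated with a minimal ratio-consensus sketch. This is not the paper's released implementation; the network, values, and function names below are illustrative assumptions, showing only how Push-Sum averages per-node quantities (such as hyper-gradient entries) when the directed topology changes every round.

```python
# Minimal Push-Sum (ratio consensus) sketch in NumPy; node values and the
# time-varying ring topology are toy placeholders, not the paper's setup.
import numpy as np

def push_sum_average(values, neighbors_per_round, num_rounds=50):
    """Estimate the network-wide average of `values` with Push-Sum.

    values: one scalar per node (e.g., a single hyper-gradient coordinate).
    neighbors_per_round: callable t -> list of out-neighbor lists, so the
        directed graph may differ at every round (time-varying network).
    """
    n = len(values)
    x = np.asarray(values, dtype=float).copy()  # running numerators
    w = np.ones(n)                              # running weights

    for t in range(num_rounds):
        out = neighbors_per_round(t)
        new_x, new_w = np.zeros(n), np.zeros(n)
        for i in range(n):
            targets = [i] + list(out[i])        # node i keeps a share itself
            share = 1.0 / len(targets)          # column-stochastic split
            for j in targets:
                new_x[j] += share * x[i]
                new_w[j] += share * w[i]
        x, w = new_x, new_w

    return x / w  # each node's ratio approaches the global average


# Example: a 4-node directed ring whose direction flips every round.
ring_fwd = [[1], [2], [3], [0]]
ring_bwd = [[3], [0], [1], [2]]
est = push_sum_average([1.0, 2.0, 3.0, 4.0],
                       lambda t: ring_fwd if t % 2 == 0 else ring_bwd)
print(est)  # all entries approach 2.5
```

Each node only pushes scaled shares to its current out-neighbors, so no doubly-stochastic mixing matrix or bidirectional links are required, which is why this communication pattern suits directed, time-varying networks.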

Influence Estimation for Generative Adversarial Networks

1 code implementation • ICLR 2021 • Naoyuki Terashita, Hiroki Ohashi, Yuichi Nonaka, Takashi Kanemaru

To this end, (1) we propose an influence estimation method that uses the Jacobian of the gradient of the generator's loss with respect to the discriminator's parameters (and vice versa) to trace how the absence of an instance in the discriminator's training affects the generator's parameters, and (2) we propose a novel evaluation scheme, in which we assess the harmfulness of each training instance on the basis of how a GAN evaluation metric (e.g., Inception Score) is expected to change due to the removal of the instance.
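The key quantity in point (1) is a mixed second derivative: the Jacobian, with respect to the discriminator's parameters, of the generator-loss gradient taken with respect to the generator's parameters. The sketch below is an assumption-laden illustration of how such a cross Jacobian-vector product can be computed with standard automatic differentiation; the toy models, the non-saturating loss, and the direction vector `u` are placeholders, not the authors' code.

```python
# Sketch of the cross Jacobian-vector product u^T * d(grad_theta_G L_G)/d(theta_D)
# using PyTorch double backward; all models and losses are toy stand-ins.
import torch

gen = torch.nn.Linear(4, 4)    # stand-in generator
disc = torch.nn.Linear(4, 1)   # stand-in discriminator
z = torch.randn(8, 4)          # latent batch

# Generator loss (non-saturating GAN loss, chosen only for illustration).
fake = gen(z)
gen_loss = torch.nn.functional.softplus(-disc(fake)).mean()

g_params = list(gen.parameters())
d_params = list(disc.parameters())

# First derivative w.r.t. generator parameters, keeping the graph so we can
# differentiate the result again w.r.t. the discriminator parameters.
grad_g = torch.autograd.grad(gen_loss, g_params, create_graph=True)

# Contract with a direction u in generator-parameter space, then take the
# gradient w.r.t. discriminator parameters: the cross Jacobian-vector product.
u = [torch.randn_like(p) for p in g_params]
inner = sum((g * ui).sum() for g, ui in zip(grad_g, u))
cross_jvp = torch.autograd.grad(inner, d_params)

print([t.shape for t in cross_jvp])  # one tensor per discriminator parameter
```

Products of this form let one propagate "what if this instance were absent from the discriminator's training" perturbations through to the generator's parameters without ever materializing the full Jacobian.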
