Search Results for author: Takuro Kutsuna

Found 10 papers, 0 papers with code

One-Shot Domain Incremental Learning

no code implementations · 25 Mar 2024 · Yasushi Esaki, Satoshi Koide, Takuro Kutsuna

In domain incremental learning (DIL), we assume that samples from new domains are observed over time.

Incremental Learning

Linearly Constrained Weights: Reducing Activation Shift for Faster Training of Neural Networks

no code implementations · 8 Mar 2024 · Takuro Kutsuna

In this paper, we first identify activation shift, a simple but remarkable phenomenon in neural networks: the preactivation value of a neuron has a non-zero mean that depends on the angle between the neuron's weight vector and the mean of the activation vector in the previous layer.
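The geometric claim in that snippet is easy to check numerically. Below is a minimal NumPy sketch (a hypothetical illustration, not the paper's code; the toy activation distribution and weight vectors are assumptions): when the previous layer's mean activation is non-zero, the preactivation mean vanishes only for weight vectors orthogonal to that mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Post-ReLU-like activations from a previous layer: every component
# is non-negative, so the mean activation vector mu is non-zero.
x = np.abs(rng.normal(size=(10000, 2)))
mu = x.mean(axis=0)

# Two unit-norm weight vectors: one aligned with mu, one orthogonal to it.
w_aligned = mu / np.linalg.norm(mu)
w_ortho = np.array([-w_aligned[1], w_aligned[0]])

# E[w . x] = w . mu = |mu| * cos(angle between w and mu), so the
# preactivation mean depends only on that angle.
print((x @ w_aligned).mean())  # roughly |mu|, clearly non-zero
print((x @ w_ortho).mean())    # roughly 0
```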

Accuracy-Preserving Calibration via Statistical Modeling on Probability Simplex

no code implementations · 21 Feb 2024 · Yasushi Esaki, Akihiro Nakamura, Keisuke Kawano, Ryoko Tokuhisa, Takuro Kutsuna

We propose an accuracy-preserving calibration method using the Concrete distribution as the probabilistic model on the probability simplex.

How many views does your deep neural network use for prediction?

no code implementations · 2 Feb 2024 · Keisuke Kawano, Takuro Kutsuna, Keisuke Sano

The generalization ability of Deep Neural Networks (DNNs) is still not fully understood, despite numerous theoretical and empirical analyses.

Supervised Contrastive Learning with Heterogeneous Similarity for Distribution Shifts

no code implementations · 7 Apr 2023 · Takuro Kutsuna

Distribution shift is the problem in which the distribution of data changes between training and testing, which can significantly degrade the performance of a model deployed in the real world.

Contrastive Learning · Domain Generalization

StyleDiff: Attribute Comparison Between Unlabeled Datasets in Latent Disentangled Space

no code implementations · 9 Mar 2023 · Keisuke Kawano, Takuro Kutsuna, Ryoko Tokuhisa, Akihiro Nakamura, Yasushi Esaki

One major challenge in machine learning applications is coping with mismatches between the datasets used during development and those obtained in real-world applications.

Attribute

Neural Time Warping For Multiple Sequence Alignment

no code implementations · 29 Jun 2020 · Keisuke Kawano, Takuro Kutsuna, Satoshi Koide

Multiple sequence alignment (MSA) is a traditional and challenging task in time-series analysis.

Multiple Sequence Alignment · Time Series · +1

Flow-based Image-to-Image Translation with Feature Disentanglement

no code implementations · NeurIPS 2019 · Ruho Kondo, Keisuke Kawano, Satoshi Koide, Takuro Kutsuna

Learning non-deterministic dynamics and intrinsic factors from images obtained through physical experiments is at the intersection of machine learning and material science.

Disentanglement · Image-to-Image Translation · +1

Neural Edit Operations for Biological Sequences

no code implementations · NeurIPS 2018 · Satoshi Koide, Keisuke Kawano, Takuro Kutsuna

The evolution of biological sequences, such as proteins or DNA, is driven by three basic edit operations: substitution, insertion, and deletion.
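The three edit operations named in that snippet are the ones counted by the classic Levenshtein distance. A short dynamic-programming sketch (a standard textbook formulation, not the paper's neural method; the example sequences are illustrative):

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of substitutions, insertions, and deletions
    needed to turn sequence a into sequence b (Levenshtein distance)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all of a's first i characters
    for j in range(n + 1):
        d[0][j] = j          # insert all of b's first j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[m][n]

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```

The paper's contribution is differentiable neural analogues of these operations; the table-filling recurrence above is only the discrete baseline they generalize.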

Protein Secondary Structure Prediction

Linearly Constrained Weights: Resolving the Vanishing Gradient Problem by Reducing Angle Bias

no code implementations · ICLR 2018 · Takuro Kutsuna

In this paper, we first identify angle bias, a simple but remarkable phenomenon that causes the vanishing gradient problem in a multilayer perceptron (MLP) with sigmoid activation functions.
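Why a biased preactivation mean vanishes sigmoid gradients can be seen in a few lines (a hypothetical NumPy sketch, not the paper's experiment; the +4.0 mean shift is an assumed stand-in for the effect of angle bias):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
z_centered = rng.normal(0.0, 1.0, 10000)  # zero-mean preactivations
z_biased = z_centered + 4.0               # mean shifted away from zero

# Local gradient of the sigmoid: sigma'(z) = sigma(z) * (1 - sigma(z)),
# which peaks at z = 0 and decays exponentially as |z| grows.
g_centered = sigmoid(z_centered) * (1 - sigmoid(z_centered))
g_biased = sigmoid(z_biased) * (1 - sigmoid(z_biased))

print(g_centered.mean())  # near the 0.25 maximum
print(g_biased.mean())    # an order of magnitude smaller
```

Stacking layers multiplies these local gradients, so a persistent mean shift in every layer drives the backpropagated signal toward zero.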
