Search Results for author: Shoetsu Sato

Found 8 papers, 3 papers with code

Speculative Sampling in Variational Autoencoders for Dialogue Response Generation

1 code implementation • Findings (EMNLP) 2021 • Shoetsu Sato, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa

Our method chooses the most probable one from redundantly sampled latent variables, tying the chosen variable to a given response.

Response Generation
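The snippet above describes drawing several candidate latent variables and keeping the one under which the given response is most probable. A minimal sketch of that selection step, with a hypothetical stand-in scorer `toy_decoder_loglik` in place of the paper's actual VAE decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_decoder_loglik(z, response_vec):
    # Hypothetical decoder score: a Gaussian log-likelihood stands in for
    # the probability a real decoder assigns to the response given latent z.
    return -0.5 * np.sum((z - response_vec) ** 2)

def speculative_sample(response_vec, k=8, dim=4):
    # Redundantly sample k latent variables from the prior N(0, I) ...
    zs = rng.standard_normal((k, dim))
    # ... and keep the one under which the given response scores highest.
    scores = np.array([toy_decoder_loglik(z, response_vec) for z in zs])
    return zs[np.argmax(scores)]

response = np.zeros(4)
z_best = speculative_sample(response)
```

This is only an illustration of the "sample redundantly, pick the most probable" idea; the sampler, scorer, and dimensions here are assumptions, not the paper's implementation.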

Rethinking Response Evaluation from Interlocutor's Eye for Open-Domain Dialogue Systems

no code implementations • 4 Jan 2024 • Yuma Tsuta, Naoki Yoshinaga, Shoetsu Sato, Masashi Toyoda

Open-domain dialogue systems have started to engage in continuous conversations with humans.

Vocabulary Adaptation for Domain Adaptation in Neural Machine Translation

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Shoetsu Sato, Jin Sakuma, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa

Prior to fine-tuning, our method replaces the embedding layers of the NMT model by projecting general word embeddings induced from monolingual data in a target domain onto a source-domain embedding space.

Domain Adaptation • Machine Translation • +3
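The abstract snippet above describes projecting target-domain word embeddings onto the source-domain embedding space before fine-tuning. A minimal sketch of one common way to realize such a projection, a least-squares linear mapping fit on words shared by both vocabularies (the embedding tables here are random stand-ins, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding tables: rows are word vectors for words shared by
# both vocabularies. In the paper's setting, the source-side table would come
# from the NMT model and the target-side table from target-domain monolingual
# data; here both are random placeholders.
dim = 8
n_shared = 50
E_src_shared = rng.standard_normal((n_shared, dim))
E_tgt_shared = rng.standard_normal((n_shared, dim))

# Fit a linear map W so that E_tgt_shared @ W approximates E_src_shared.
W, *_ = np.linalg.lstsq(E_tgt_shared, E_src_shared, rcond=None)

def project(e_tgt):
    # Project a target-domain word embedding into the source-domain space;
    # projected vectors of this kind would replace the NMT model's embedding
    # layer prior to fine-tuning.
    return e_tgt @ W
```

The linear-mapping choice is an assumption for illustration; the paper's exact projection method may differ.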

Vocabulary Adaptation for Distant Domain Adaptation in Neural Machine Translation

no code implementations • 30 Apr 2020 • Shoetsu Sato, Jin Sakuma, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa

Prior to fine-tuning, our method replaces the embedding layers of the NMT model by projecting general word embeddings induced from monolingual data in a target domain onto a source-domain embedding space.

Domain Adaptation • Machine Translation • +3

Modeling Personal Biases in Language Use by Inducing Personalized Word Embeddings

no code implementations • NAACL 2019 • Daisuke Oba, Naoki Yoshinaga, Shoetsu Sato, Satoshi Akasaki, Masashi Toyoda

In this study, we propose a method of modeling such personal biases in word meanings (hereafter, semantic variations) with personalized word embeddings obtained by solving a task on subjective text while regarding words used by different individuals as different words.

Multi-class Classification • Multi-Task Learning • +2
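The snippet above describes regarding words used by different individuals as different words. A tiny sketch of one straightforward way to realize that, by prefixing each token with a user marker before embedding training (the function name and marker format are hypothetical, not from the paper):

```python
def personalize(tokens, user_id):
    # Treat the same surface word used by different individuals as distinct
    # vocabulary entries by prefixing a user marker, so an embedding trainer
    # downstream learns a separate vector per (user, word) pair.
    return [f"{user_id}:{t}" for t in tokens]

tokens = personalize(["good", "movie"], "u42")  # ["u42:good", "u42:movie"]
```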

Learning to Describe Unknown Phrases with Local and Global Contexts

no code implementations • NAACL 2019 • Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, Masaru Kitsuregawa

When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities.

Decoder

Learning to Describe Phrases with Local and Global Contexts

1 code implementation • 1 Nov 2018 • Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, Masaru Kitsuregawa

When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities.

Decoder • Reading Comprehension
