no code implementations • 21 Dec 2023 • Katsumasa Yoshikawa, Takato Yamazaki, Masaya Ohagi, Tomoya Mizumoto, Keiya Sato
In recent years, large language models (LLMs) have rapidly proliferated and have been utilized in various tasks, including research in dialogue systems.
no code implementations • 7 Aug 2023 • Tomoya Mizumoto, Takato Yamazaki, Katsumasa Yoshikawa, Masaya Ohagi, Toshiki Kawamoto, Toshinori Sato
When individuals engage in spoken discourse, various phenomena can be observed that differ from those that are apparent in text-based conversation.
no code implementations • 19 Oct 2022 • Takato Yamazaki, Katsumasa Yoshikawa, Toshiki Kawamoto, Masaya Ohagi, Tomoya Mizumoto, Shuta Ichimura, Yusuke Kida, Toshinori Sato
This paper describes our system submitted to Dialogue Robot Competition 2022.
no code implementations • 16 Jun 2022 • Hiroaki Funayama, Tasuku Sato, Yuichiroh Matsubayashi, Tomoya Mizumoto, Jun Suzuki, Kentaro Inui
To help guarantee high-quality predictions, we present the first study exploring a human-in-the-loop framework that minimizes grading cost while guaranteeing grading quality by allowing a SAS model to share the grading task with a human grader.
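The core idea of sharing grading between a SAS model and a human can be sketched as confidence-based routing: the model grades answers it is confident about, and the rest are deferred to a human. This is a minimal illustration, not the paper's actual system; `model_score`, `human_score`, and the fixed threshold are hypothetical stand-ins.

```python
def grade_with_human_fallback(answers, model_score, human_score, threshold=0.9):
    """Route each answer to the model or a human grader by confidence.

    model_score(answer) -> (score, confidence) and human_score(answer) -> score
    are hypothetical interfaces; the paper's model and thresholding scheme
    may differ. Returns the final scores and how many answers needed a human.
    """
    results = []
    human_count = 0
    for ans in answers:
        score, conf = model_score(ans)
        if conf < threshold:
            # Low confidence: defer this answer to the human grader.
            score = human_score(ans)
            human_count += 1
        results.append(score)
    return results, human_count
```

Raising the threshold trades grading cost (more human work) for quality guarantees, which is exactly the trade-off the study examines.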
1 code implementation • 23 May 2022 • Masato Mita, Keisuke Sakaguchi, Masato Hagiwara, Tomoya Mizumoto, Jun Suzuki, Kentaro Inui
Natural language processing technology has rapidly improved automatic grammatical error correction, and the community has begun to explore document-level revision as one of the next challenges.
no code implementations • ACL 2020 • Hiroaki Funayama, Shota Sasaki, Yuichiroh Matsubayashi, Tomoya Mizumoto, Jun Suzuki, Masato Mita, Kentaro Inui
We introduce a new task formulation of SAS that matches the actual usage.
no code implementations • WS 2019 • Tianqi Wang, Naoya Inoue, Hiroki Ouchi, Tomoya Mizumoto, Kentaro Inui
Most existing SAG systems predict scores based only on the answers; this includes the model used as the baseline in this paper, which gives state-of-the-art performance.
1 code implementation • IJCNLP 2019 • Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, Kentaro Inui
The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models.
Ranked #12 on Grammatical Error Correction on CoNLL-2014 Shared Task
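Pseudo data for GEC training is typically produced by corrupting clean sentences to yield synthetic (noisy source, clean target) pairs. The sketch below shows only the generic noise-injection idea, with assumed token-drop and adjacent-swap corruptions; the specific generation methods compared in the paper are not reproduced here.

```python
import random

def make_pseudo_pair(sentence, seed=0, p_drop=0.1, p_swap=0.1):
    """Create a (noisy source, clean target) pair for GEC pretraining.

    A generic noise-injection sketch: with probability p_drop a token is
    deleted (simulating a missing-word error), and with probability p_swap
    two adjacent tokens are swapped (simulating a word-order error).
    """
    rng = random.Random(seed)  # seeded for reproducible corruption
    tokens = sentence.split()
    noisy = []
    i = 0
    while i < len(tokens):
        r = rng.random()
        if r < p_drop:
            i += 1  # drop this token
            continue
        if r < p_drop + p_swap and i + 1 < len(tokens):
            noisy.extend([tokens[i + 1], tokens[i]])  # swap adjacent tokens
            i += 2
            continue
        noisy.append(tokens[i])
        i += 1
    return " ".join(noisy), sentence
```

A GEC model is then pretrained to map the noisy source back to the clean target before fine-tuning on real learner data.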
no code implementations • WS 2019 • Tomoya Mizumoto, Hiroki Ouchi, Yoriko Isobe, Paul Reisert, Ryo Nagata, Satoshi Sekine, Kentaro Inui
This paper provides an analytical assessment of student short answer responses with a view to potential benefits in pedagogical contexts.
no code implementations • WS 2019 • Hiroki Asano, Masato Mita, Tomoya Mizumoto, Jun Suzuki
We introduce the AIP-Tohoku grammatical error correction (GEC) system for the BEA-2019 shared task in Track 1 (Restricted Track) and Track 2 (Unrestricted Track) using the same system architecture.
no code implementations • NAACL 2019 • Masato Mita, Tomoya Mizumoto, Masahiro Kaneko, Ryo Nagata, Kentaro Inui
This study explores the necessity of performing cross-corpora evaluation for grammatical error correction (GEC) models.
no code implementations • WS 2018 • Ryo Nagata, Tomoya Mizumoto, Yuta Kikuchi, Yoshifumi Kawasaki, Kotaro Funakoshi
Based on a discussion of the possible causes of POS-tagging errors in learner English, we show that deep neural models are particularly suitable for this task.
no code implementations • WS 2017 • Tomoya Mizumoto, Ryo Nagata
Part-of-speech (POS) tagging and chunking have been used in tasks targeting learner English; however, to the best of our knowledge, few studies have evaluated their performance and no studies have revealed the causes of POS-tagging/chunking errors in detail.
no code implementations • IJCNLP 2017 • Hiroki Asano, Tomoya Mizumoto, Kentaro Inui
In grammatical error correction (GEC), automatically evaluating system outputs requires gold-standard references, which must be created manually and thus tend to be both expensive and limited in coverage.