Search Results for author: Aru Maekawa

Found 2 papers, 2 papers with code

DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation

1 code implementation · 30 Mar 2024 · Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura

To address this issue, we propose a novel text dataset distillation approach, called Distilling dataset into Language Model (DiLM), which trains a language model to generate informative synthetic training samples as text data, instead of directly optimizing synthetic samples.

Tasks: In-Context Learning, Language Modelling (+3 more)
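The DiLM snippet above describes the general idea of distilling a dataset into a language model that generates synthetic training text, rather than optimizing synthetic samples directly. The following is a minimal illustrative sketch of that general idea only, not the authors' implementation: the model name, prompt format, and labels are placeholder assumptions.

```python
# Illustrative sketch: use a causal LM to generate synthetic labeled text,
# which can then serve as training data for a downstream model.
# "gpt2" and the prompt template below are placeholders, not DiLM's actual setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def generate_synthetic_samples(label: str, n: int = 4) -> list[str]:
    """Sample short synthetic training sentences conditioned on a label prompt."""
    prompt = f"Write a {label} movie review:"  # hypothetical prompt format
    inputs = tok(prompt, return_tensors="pt")
    outputs = lm.generate(
        **inputs,
        do_sample=True,
        max_new_tokens=40,
        num_return_sequences=n,
        pad_token_id=tok.eos_token_id,
    )
    return [tok.decode(o, skip_special_tokens=True) for o in outputs]

# The distilled "dataset" is then plain generated text, usable with any learner.
synthetic_train = {lbl: generate_synthetic_samples(lbl) for lbl in ("positive", "negative")}
```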

Can we obtain significant success in RST discourse parsing by using Large Language Models?

1 code implementation · 8 Mar 2024 · Aru Maekawa, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura

Recently, decoder-only pre-trained large language models (LLMs), with several tens of billions of parameters, have significantly impacted a wide range of natural language processing (NLP) tasks.

Tasks: Decoder, Discourse Parsing
