no code implementations • 26 Mar 2024 • Ting-Yao Hsu, Chieh-Yang Huang, Shih-Hong Huang, Ryan Rossi, Sungchul Kim, Tong Yu, C. Lee Giles, Ting-Hao K. Huang
Crafting effective captions for figures is important.
no code implementations • 23 Oct 2023 • Ting-Yao Hsu, Chieh-Yang Huang, Ryan Rossi, Sungchul Kim, C. Lee Giles, Ting-Hao K. Huang
We first constructed SCICAP-EVAL, a human evaluation dataset that contains human judgments for 3,600 scientific figure captions, both original and machine-made, for 600 arXiv figures.
no code implementations • 23 Feb 2023 • Chieh-Yang Huang, Ting-Yao Hsu, Ryan Rossi, Ani Nenkova, Sungchul Kim, Gromit Yeuk-Yin Chan, Eunyee Koh, Clyde Lee Giles, Ting-Hao 'Kenneth' Huang
Prior work often treated figure caption generation as a vision-to-language task.
no code implementations • 17 Nov 2022 • Ting-Yao Hsu, Yoshi Suhara, Xiaolan Wang
To help users quickly digest the key information, we propose the novel CQA summarization task, which aims to create a concise summary from CQA pairs.
1 code implementation • Findings (EMNLP) 2021 • Ting-Yao Hsu, C. Lee Giles, Ting-Hao 'Kenneth' Huang
Researchers use figures to communicate rich, complex information in scientific papers.
Ranked #1 on Image Captioning on SCICAP
1 code implementation • ACL 2019 • Ting-Yao Hsu, Chieh-Yang Huang, Yen-Chia Hsu, Ting-Hao 'Kenneth' Huang
We introduce the first dataset for human edits of machine-generated visual stories and explore how these collected edits may be used for the visual story post-editing task.
no code implementations • 22 Feb 2019 • Ting-Yao Hsu, Yen-Chia Hsu, Ting-Hao 'Kenneth' Huang
A significant body of research in Artificial Intelligence (AI) has focused on generating stories automatically, either based on prior story plots or input images.