no code implementations • 18 Jan 2024 • Chu-Jen Shao, Hao-Ming Fu, Pu-Jen Cheng
However, these efforts assume that all positive signals from implicit feedback reflect a fixed preference intensity, which is not realistic.
no code implementations • 11 Jan 2024 • Hao-Ming Fu, Pu-Jen Cheng
Document representation lies at the core of many NLP tasks involving machine understanding.
no code implementations • 27 Nov 2023 • Yu-an Lin, Chen-Tao Lee, Guan-Ting Liu, Pu-Jen Cheng, Shao-Hua Sun
On the other hand, representing RL policies using state machines (Inala et al., 2020) can inductively generalize to long-horizon tasks; however, it struggles to scale up to acquire diverse and complex behaviors.
no code implementations • 19 Aug 2023 • Hao-Lun Lin, Jyun-Yu Jiang, Ming-Hao Juan, Pu-Jen Cheng
Modern recommender systems usually leverage textual and visual content as auxiliary information to predict user preferences.
no code implementations • 22 May 2023 • Ming-Hao Juan, Pu-Jen Cheng, Hui-Neng Hsu, Pin-Hsin Hsiao
Though review-based models deliver promising performance for rating prediction, we empirically find that many of them cannot perform comparably well on top-N recommendation.
no code implementations • 30 Jan 2023 • Guan-Ting Liu, En-Pei Hu, Pu-Jen Cheng, Hung-Yi Lee, Shao-Hua Sun
Aiming to produce reinforcement learning (RL) policies that are human-interpretable and can generalize better to novel scenarios, Trivedi et al. (2021) present a method (LEAPS) that first learns a program embedding space to continuously parameterize diverse programs from a pre-generated program dataset, and then searches for a task-solving program in the learned program embedding space when given a task.
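The search stage described above can be illustrated with a minimal sketch: a Cross-Entropy Method (CEM) loop over a continuous embedding space. This is not the authors' code; `reward_of` is a hypothetical stand-in for decoding an embedding into a program and evaluating that program on the task, and the dimensions and hyperparameters are illustrative only.

```python
# Hedged sketch of latent-space search with the Cross-Entropy Method (CEM),
# the kind of search LEAPS performs in a learned program embedding space.
import numpy as np

def reward_of(z):
    # Hypothetical task reward: here it peaks when the embedding matches a
    # fixed target vector, standing in for "decode z to a program and run it".
    target = np.array([0.5, -1.0, 2.0])
    return -np.sum((z - target) ** 2)

def cem_search(dim=3, pop=64, elite=8, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        # Sample candidate embeddings around the current distribution.
        samples = rng.normal(mu, sigma, size=(pop, dim))
        rewards = np.array([reward_of(z) for z in samples])
        # Refit the distribution to the top-scoring (elite) candidates.
        elites = samples[np.argsort(rewards)[-elite:]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu

best = cem_search()
```

On this toy quadratic reward the search distribution quickly concentrates near the optimum; in the actual method, each sampled embedding would be decoded by the learned program decoder and scored by executing the resulting program.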
1 code implementation • 28 Sep 2022 • Cheng-An Hsieh, Cheng-Ping Hsieh, Pu-Jen Cheng
To address this, we introduce Multimodal Retrieval on Representation of ImaGe witH Text (Mr. Right).
1 code implementation • 29 Mar 2022 • Chun-Hsien Lin, Pu-Jen Cheng
Word embedding is a modern distributed word representation approach widely used in many natural language processing tasks.
1 code implementation • 5 Jan 2021 • Hung-Ting Su, Chen-Hsi Chang, Po-Wei Shen, Yu-Siang Wang, Ya-Liang Chang, Yu-Cheng Chang, Pu-Jen Cheng, Winston H. Hsu
Furthermore, using only our generated QA pairs for the Video QA task, we can surpass some supervised baselines.
no code implementations • 1 Jan 2021 • Guan Ting Liu, Pu-Jen Cheng, GuanYu Lin
Representation learning on visual input is an important yet challenging task for deep reinforcement learning (RL).