Search Results for author: Zhang Hui

Found 2 papers, 0 papers with code

Fine-tuning Language Models with Generative Adversarial Reward Modelling

no code implementations · 9 May 2023 · Zhang Ze Yu, Lau Jia Jaw, Zhang Hui, Bryan Kian Hsiang Low

Reinforcement Learning with Human Feedback (RLHF) has been demonstrated to significantly enhance the performance of large language models (LLMs) by aligning their outputs with desired human values through instruction tuning.

reinforcement-learning

Topic Level Disambiguation for Weak Queries

no code implementations · 17 Feb 2015 · Zhang Hui, Yang Kiduk, Jacob Elin

The results not only confirm the effectiveness of the proposed topic detection and topic-based retrieval approaches but also demonstrate that query disambiguation does not improve IR as expected.

Information Retrieval · Language Modelling · +2
