Search Results for author: Yik-Cheung Tam

Found 11 papers, 5 papers with code

Arithmetic Reasoning with LLM: Prolog Generation & Permutation

no code implementations • 28 May 2024 • Xiaocheng Yang, Bingsen Chen, Yik-Cheung Tam

We hypothesize that an LLM should focus on extracting predicates and generating symbolic formulas from the math problem description so that the underlying calculation can be done via an external code interpreter.
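
A minimal sketch of the kind of Prolog program such an LLM might emit for a toy word problem, executed from Python via the pyswip bridge to SWI-Prolog; the example problem and predicate names are illustrative assumptions, not taken from the paper:

    # Toy problem: "Mary has 3 apples and buys 2 more. How many now?"
    # Under the paper's hypothesis, the LLM emits the predicates below
    # and the arithmetic itself is done by the Prolog interpreter.
    from pyswip import Prolog  # assumes SWI-Prolog and pyswip are installed

    prolog = Prolog()
    prolog.assertz("initial_apples(3)")
    prolog.assertz("bought_apples(2)")
    prolog.assertz("total(T) :- initial_apples(A), bought_apples(B), T is A + B")

    for solution in prolog.query("total(T)"):
        print(solution["T"])  # -> 5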

Exploring an LM to generate Prolog Predicates from Mathematics Questions

no code implementations • 7 Sep 2023 • Xiaocheng Yang, Yik-Cheung Tam

Consequently, we employ chain-of-thought to fine-tune LLaMA7B as a baseline model, and develop further fine-tuned LLaMA7B models that generate Prolog code, Prolog code + chain-of-thought, and chain-of-thought + Prolog code, respectively.

GSM8K, Language Modelling
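
A minimal sketch of how the four fine-tuning target formats mentioned in the abstract above (chain-of-thought only, Prolog only, and the two combinations) could be assembled from a GSM8K-style example; the field names are assumptions, not the paper's actual preprocessing:

    # Hypothetical helper: build the supervision target for each variant.
    def build_target(example, variant):
        cot = example["chain_of_thought"]   # natural-language reasoning steps
        prolog = example["prolog_code"]     # Prolog predicates for the problem
        if variant == "cot":                # baseline: chain-of-thought only
            return cot
        if variant == "prolog":             # Prolog code only
            return prolog
        if variant == "prolog+cot":         # Prolog code, then reasoning
            return prolog + "\n" + cot
        if variant == "cot+prolog":         # reasoning, then Prolog code
            return cot + "\n" + prolog
        raise ValueError(f"unknown variant: {variant}")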

Suffix Retrieval-Augmented Language Modeling

1 code implementation • 6 Nov 2022 • Zecheng Wang, Yik-Cheung Tam

SUREALM employs an embedding retriever to search a data store for training sentences that share a similar word history with the sequence being generated.

Causal Language Modeling, Language Modelling +2
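
A minimal sketch of the retrieval step described above: embed the word history generated so far and look up training sentences with a similar history in the data store. The encoder choice and brute-force cosine search are assumptions, not SUREALM's actual implementation:

    import numpy as np
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

    # Data store: training sentences and their embeddings.
    train_sentences = ["the cat sat on the mat", "the cat chased the mouse"]
    store = encoder.encode(train_sentences)            # (N, d) matrix

    def retrieve(history, k=1):
        """Return the k training sentences closest to the current word history."""
        q = encoder.encode([history])[0]
        sims = store @ q / (np.linalg.norm(store, axis=1) * np.linalg.norm(q))
        return [train_sentences[i] for i in np.argsort(-sims)[:k]]

    print(retrieve("the cat"))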

UNITER-Based Situated Coreference Resolution with Rich Multimodal Input

1 code implementation • 7 Dec 2021 • Yichen Huang, Yuchen Wang, Yik-Cheung Tam

Our model ranks second in the official evaluation on the object coreference resolution task with an F1 score of 73.3% after model ensembling.

coreference-resolution, Object +1
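
A hypothetical sketch of a score-averaging form of the model ensembling mentioned above: average each model's per-object coreference probabilities and threshold the result. The threshold value and score layout are assumptions:

    import numpy as np

    def ensemble(per_model_scores, threshold=0.5):
        """per_model_scores: list of (num_objects,) arrays of coreference
        probabilities, one array per model in the ensemble."""
        avg = np.mean(per_model_scores, axis=0)
        return avg >= threshold  # boolean mask of objects judged coreferent

    print(ensemble([np.array([0.9, 0.2]), np.array([0.7, 0.6])]))  # [ True False]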

Ontology-Enhanced Slot Filling

no code implementations • 25 Aug 2021 • Yuhao Ding, Yik-Cheung Tam

In a multi-domain task-oriented dialog system, user utterances and system responses may mention multiple named entities and attribute values.

dialog state tracking, slot-filling +1
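
A minimal sketch of the ontology-guided idea suggested by the title and abstract: match value strings mentioned in an utterance against a domain ontology to produce slot-value pairs. The ontology contents and exact-match lookup are illustrative assumptions:

    # Toy ontology mapping slots to their allowed values.
    ontology = {
        "restaurant-food": {"italian", "chinese"},
        "restaurant-area": {"north", "centre"},
    }

    def fill_slots(utterance):
        """Return slot-value pairs whose value appears in the utterance."""
        tokens = set(utterance.lower().split())
        return {slot: value
                for slot, values in ontology.items()
                for value in values if value in tokens}

    print(fill_slots("I want Italian food in the centre"))
    # -> {'restaurant-food': 'italian', 'restaurant-area': 'centre'}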

Keyword-Attentive Deep Semantic Matching

1 code implementation • 11 Mar 2020 • Changyu Miao, Zhen Cao, Yik-Cheung Tam

Deep Semantic Matching is a crucial component in various natural language processing applications such as question answering (QA), where an input query is compared to each candidate question in a QA corpus in terms of relevance.

Retrieval, Text Matching
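
A minimal sketch of the matching setup the abstract describes: score an input query against every candidate question in the corpus and rank by relevance. TF-IDF cosine similarity stands in here for the paper's keyword-attentive deep model:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    candidates = ["how do I reset my password",
                  "how can I change my email address"]
    vectorizer = TfidfVectorizer().fit(candidates)
    candidate_matrix = vectorizer.transform(candidates)

    def rank(query):
        """Rank candidate questions by relevance to the query."""
        scores = cosine_similarity(vectorizer.transform([query]),
                                   candidate_matrix)[0]
        return sorted(zip(candidates, scores), key=lambda pair: -pair[1])

    print(rank("reset password"))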

Read and Comprehend by Gated-Attention Reader with More Belief

no code implementations • NAACL 2018 • Haohui Deng, Yik-Cheung Tam

GA Reader makes two assumptions: (1) uni-directional attention, in which the input query gates the token encodings of the document; (2) the encoding at the cloze position of the input query is used for answer prediction.

Position, Reading Comprehension +1
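
A hypothetical numpy sketch of assumption (1), the query-to-document gating used by Gated-Attention readers: each document token attends over the query and is gated (element-wise multiplied) by its attention-weighted query summary. Shapes and random inputs are illustrative:

    import numpy as np

    def gated_attention(doc_enc, query_enc):
        """doc_enc: (T, d) document token encodings;
        query_enc: (Q, d) query token encodings."""
        scores = doc_enc @ query_enc.T                 # (T, Q) similarities
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)  # softmax over query tokens
        query_summary = weights @ query_enc            # (T, d) per-token summary
        return doc_enc * query_summary                 # element-wise gating

    gated = gated_attention(np.random.randn(5, 8), np.random.randn(3, 8))
    print(gated.shape)  # (5, 8)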
