no code implementations • 21 May 2024 • Neisarg Dave, Daniel Kifer, C. Lee Giles, Ankur Mali
However, most research has predominantly focused on language-based reasoning and word problems, often overlooking the potential of LLMs in handling symbol-based calculations and reasoning.
no code implementations • 4 Feb 2024 • Neisarg Dave, Daniel Kifer, C. Lee Giles, Ankur Mali
We sampled datasets from $7$ Tomita and $4$ Dyck grammars and trained $4$ RNN cells on them: LSTM, GRU, O2RNN, and MIRNN.
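As an illustrative sketch only (the paper's exact grammar variants and sampling procedure are not specified here), membership checkers for one Tomita grammar and a two-symbol Dyck language can be written as:

```python
def tomita1(s: str) -> bool:
    # Tomita grammar 1 accepts strings over {0, 1} consisting only of 1s.
    return all(c == "1" for c in s)

def dyck2(s: str) -> bool:
    # Dyck-2: balanced strings over two bracket pairs, checked with a stack.
    pairs = {")": "(", "]": "["}
    stack = []
    for c in s:
        if c in "([":
            stack.append(c)
        elif c in pairs:
            if not stack or stack.pop() != pairs[c]:
                return False
        else:
            return False  # symbol outside the alphabet
    return not stack  # accept only if every bracket was closed
```

Strings labeled by such checkers form the positive/negative training pairs a recurrent cell would be trained to classify.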
1 code implementation • WS 2018 • Chen Liang, Xiao Yang, Neisarg Dave, Drew Wham, Bart Pursel, C. Lee Giles
We investigate how machine learning models, specifically ranking models, can be used to select useful distractors for multiple choice questions.
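A minimal sketch of the general idea, with a hypothetical word-overlap score standing in for a learned ranking model (the paper's actual features and model are not specified here):

```python
def rank_distractors(question, candidates, score_fn, k=3):
    # Generic ranking step: score each candidate distractor, keep the top k.
    ranked = sorted(candidates, key=lambda c: score_fn(question, c), reverse=True)
    return ranked[:k]

def overlap_score(question, candidate):
    # Hypothetical scoring function: fraction of candidate words shared
    # with the question, a crude proxy for topical relevance.
    q = set(question.lower().split())
    c = set(candidate.lower().split())
    return len(q & c) / max(len(c), 1)
```

In a learned setting, `score_fn` would be replaced by a trained ranking model rather than a hand-written heuristic.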