Search Results for author: Amit Parekh

Found 5 papers, 2 papers with code

The Spoon Is in the Sink: Assisting Visually Impaired People in the Kitchen

1 code implementation • ReInAct 2021 • Katie Baker, Amit Parekh, Adrien Fabre, Angus Addlesee, Ruben Kruiper, Oliver Lemon

Questions about the spatial relations between these objects are particularly helpful to visually impaired people, and our system outputs more usable answers than other state-of-the-art end-to-end VQA systems.

Question Answering • Visual Question Answering

Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks

1 code implementation • 7 May 2024 • Georgios Pantazopoulos, Amit Parekh, Malvina Nikandrou, Alessandro Suglia

Augmenting Large Language Models (LLMs) with image-understanding capabilities has resulted in a boom of high-performing Vision-Language models (VLMs).

Multitask Multimodal Prompted Training for Interactive Embodied Task Completion

no code implementations • 7 Nov 2023 • Georgios Pantazopoulos, Malvina Nikandrou, Amit Parekh, Bhathiya Hemanthage, Arash Eshghi, Ioannis Konstas, Verena Rieser, Oliver Lemon, Alessandro Suglia

Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models: 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation.

Decoder • Text Generation

iLab at SemEval-2023 Task 11 Le-Wi-Di: Modelling Disagreement or Modelling Perspectives?

no code implementations • 10 May 2023 • Nikolas Vitsakis, Amit Parekh, Tanvi Dinkar, Gavin Abercrombie, Ioannis Konstas, Verena Rieser

There are two competing approaches for modelling annotator disagreement: distributional soft-labelling approaches (which aim to capture the level of disagreement), and approaches that model the perspectives of individual annotators or groups thereof.
