Search Results for author: Andrew Parry

Found 5 papers, 3 papers with code

Top-Down Partitioning for Efficient List-Wise Ranking

no code implementations • 23 May 2024 • Andrew Parry, Sean MacAvaney, Debasis Ganguly

Large Language Models (LLMs) have significantly impacted many facets of natural language processing and information retrieval.

Information Retrieval • Re-Ranking

Generative Relevance Feedback and Convergence of Adaptive Re-Ranking: University of Glasgow Terrier Team at TREC DL 2023

1 code implementation • 2 May 2024 • Andrew Parry, Thomas Jaenich, Sean MacAvaney, Iadh Ounis

For re-ranking, we investigate operating points of adaptive re-ranking with different first-stage retrievers to find the point in graph traversal at which the first stage no longer affects the performance of the overall retrieval pipeline.

Language Modelling • Large Language Model • +2
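The adaptive re-ranking the abstract above refers to interleaves two candidate pools: documents from the first-stage ranking and neighbours of high-scoring documents in a precomputed corpus graph. Below is a minimal sketch of that traversal, assuming hypothetical stand-ins `first_stage` (a ranked list of doc ids), `neighbours(doc)` (the corpus-graph lookup), and `score(query, doc)` (the re-ranker call); it illustrates the general technique, not the Glasgow team's actual pipeline:

```python
# A minimal sketch of adaptive re-ranking via corpus-graph traversal.
# `first_stage`, `neighbours`, and `score` are hypothetical stand-ins,
# not the TREC run's actual components.

def adaptive_rerank(query, first_stage, neighbours, score, budget=100):
    """Alternate between first-stage candidates and graph neighbours of
    the best document scored so far, until the scoring budget is spent."""
    pools = {"first_stage": list(first_stage), "graph": []}
    turn = "first_stage"
    scored, seen = [], set()
    while budget > 0 and (pools["first_stage"] or pools["graph"]):
        # take from the current pool, falling back to whichever is non-empty
        pool = pools[turn] if pools[turn] else pools["first_stage"] or pools["graph"]
        doc = pool.pop(0)
        turn = "graph" if turn == "first_stage" else "first_stage"
        if doc in seen:
            continue
        seen.add(doc)
        scored.append((score(query, doc), doc))
        budget -= 1
        # expand the graph frontier around the current best-scoring document
        best = max(scored)[1]
        pools["graph"].extend(d for d in neighbours(best) if d not in seen)
    return [d for _, d in sorted(scored, reverse=True)]
```

Sweeping `budget` while swapping the first stage is one way to locate the operating point the abstract mentions: once the graph pool dominates the traversal, the choice of first stage stops affecting the final ranking.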

"In-Context Learning" or: How I learned to stop worrying and love "Applied Information Retrieval"

no code implementations • 2 May 2024 • Andrew Parry, Debasis Ganguly, Manish Chandra

As large language models (LLMs) have grown more capable, in-context learning (ICL) has emerged as a new paradigm for natural language processing (NLP): instead of fine-tuning an LLM's parameters for a downstream task with labeled examples, a small number of such examples is appended to a prompt instruction to steer the decoder's generation process.

In-Context Learning • Information Retrieval • +1
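As a concrete illustration of the ICL setup described in the abstract above, the sketch below builds a prompt by prepending a handful of labelled demonstrations to a test input; the instruction wording, sentiment task, and examples are illustrative assumptions, not the paper's experimental setup:

```python
def build_icl_prompt(instruction, examples, test_input):
    """Prepend k labelled demonstrations to the test input so the LLM is
    conditioned on the task at inference time, with no fine-tuning."""
    parts = [instruction]
    for text, label in examples:                  # the in-context examples
        parts.append(f"Input: {text}\nLabel: {label}")
    parts.append(f"Input: {test_input}\nLabel:")  # model completes the label
    return "\n\n".join(parts)

# Hypothetical usage for a two-shot sentiment task:
prompt = build_icl_prompt(
    instruction="Classify the sentiment of each input as positive or negative.",
    examples=[
        ("A wonderful, heartfelt film.", "positive"),
        ("Two hours I will never get back.", "negative"),
    ],
    test_input="The plot drags, but the acting is superb.",
)
print(prompt)  # the completed prompt is sent to the LLM's decoder
```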

Exploiting Positional Bias for Query-Agnostic Generative Content in Search

1 code implementation • 1 May 2024 • Andrew Parry, Sean MacAvaney, Debasis Ganguly

We demonstrate such defects by showing that non-relevant text, such as promotional content, can be easily injected into a document without adversely affecting its position in search results.

Position • Text Retrieval
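A minimal sketch of the kind of probe the abstract above describes: splice non-relevant promotional text into a document at different relative positions and measure how a ranker's score shifts. Here `score(query, doc)` is a hypothetical relevance-model callable and the sentence-level splitting is a simplification, not the paper's attack implementation:

```python
def inject(document, spam, position):
    """Insert `spam` between sentences at a relative position in [0, 1]."""
    sentences = document.split(". ")
    idx = round(position * len(sentences))
    return ". ".join(sentences[:idx] + [spam] + sentences[idx:])

def positional_probe(query, document, spam, score, positions=(0.0, 0.5, 1.0)):
    """Score deltas after injecting `spam` at the start, middle, and end.
    Deltas near zero mean the injection did not hurt the document's rank."""
    baseline = score(query, document)
    return {p: score(query, inject(document, spam, p)) - baseline
            for p in positions}
```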

Analyzing Adversarial Attacks on Sequence-to-Sequence Relevance Models

1 code implementation • 12 Mar 2024 • Andrew Parry, Maik Fröbe, Sean MacAvaney, Martin Potthast, Matthias Hagen

Modern sequence-to-sequence relevance models like monoT5 can effectively capture complex textual interactions between queries and documents through cross-encoding.

Retrieval
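Because the claim above turns on how monoT5 cross-encodes query and document in a single input sequence, a scoring sketch may help. This is a minimal sketch assuming the public castorini/monot5-base-msmarco checkpoint and the conventional "Query: ... Document: ... Relevant:" prompt; the exact models studied in the paper may differ:

```python
# A minimal sketch of monoT5-style cross-encoding with Hugging Face
# transformers, assuming the castorini/monot5-base-msmarco checkpoint.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("castorini/monot5-base-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-base-msmarco")
true_id = tokenizer.encode("true")[0]    # first token id for "true"
false_id = tokenizer.encode("false")[0]  # first token id for "false"

def monot5_score(query: str, document: str) -> float:
    # Query and document are cross-encoded in one input sequence, so
    # attention can model their token-level interactions directly.
    inputs = tokenizer(f"Query: {query} Document: {document} Relevant:",
                       return_tensors="pt", truncation=True)
    decoder_input = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input).logits[0, 0]
    probs = torch.softmax(logits[[true_id, false_id]], dim=0)
    return probs[0].item()  # probability of "true" = relevance score
```

In a pipeline, `monot5_score` would typically re-rank the top-k documents returned by a first-stage retriever such as BM25.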
