Search Results for author: Susana Guzman

Found 3 papers, 1 paper with code

Jina CLIP: Your CLIP Model Is Also Your Text Retriever

no code implementations • 30 May 2024 • Andreas Koukounas, Georgios Mastrapas, Michael Günther, Bo Wang, Scott Martens, Isabelle Mohr, Saba Sturua, Mohammad Kalim Akram, Joan Fontanals Martínez, Saahil Ognawala, Susana Guzman, Maximilian Werk, Nan Wang, Han Xiao

Contrastive Language-Image Pretraining (CLIP) is widely used to train models to align images and texts in a common embedding space by mapping them to fixed-size vectors.
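The contrastive alignment described above can be sketched with a toy example. This is a minimal illustration of the standard symmetric CLIP objective (InfoNCE over matched image-text pairs), not the Jina CLIP training recipe; the embeddings, batch size, dimensions, and temperature below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy embeddings: 4 matched image/text pairs, 8-dim vectors.
# Real CLIP models produce these with learned encoders at much higher dims.
image_emb = rng.normal(size=(4, 8))
text_emb = rng.normal(size=(4, 8))

def l2_normalize(x):
    # Map each vector onto the unit sphere so dot products are cosines.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def clip_loss(img, txt, temperature=0.07):
    """Symmetric cross-entropy over a batch: the i-th image should be
    most similar to the i-th text, and vice versa."""
    img, txt = l2_normalize(img), l2_normalize(txt)
    logits = img @ txt.T / temperature  # pairwise cosine similarities
    labels = np.arange(len(img))        # diagonal entries are the matches

    def xent(l):
        # Numerically stable log-softmax, then pick the matched pair.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return (xent(logits) + xent(logits.T)) / 2

print(clip_loss(image_emb, text_emb))
```

Training minimizes this loss, pulling each image embedding toward its paired text embedding and pushing it away from the other texts in the batch.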