8 May 2024 • Luke Merrick, Danmei Xu, Gaurav Nuti, Daniel Campos
This report describes the creation of the training dataset and the training recipe behind the \texttt{arctic-embed} family of text embedding models (five models ranging from 22 to 334 million parameters, with weights open-sourced under an Apache-2.0 license).