no code implementations • 4 Jan 2023 • Haojie Yu, Kang Zhao, Xiaoming Xu
To alleviate this issue, we draw inspiration from the masked autoencoder (MAE), a data-efficient self-supervised learner, and propose Semi-MAE, a pure ViT-based SSL framework with a parallel MAE branch that assists visual representation learning and yields more accurate pseudo labels.
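The MAE branch mentioned above relies on random patch masking: only a small fraction of image patches is shown to the encoder, and the rest must be reconstructed. A minimal sketch of that masking step is below; the function name, the 75% mask ratio, and the 14x14 patch grid are illustrative assumptions, not details taken from the Semi-MAE paper.

```python
import random

def random_mask_patches(num_patches, mask_ratio=0.75, seed=0):
    """MAE-style random masking: return (visible, masked) patch indices.

    A high mask ratio (e.g. 0.75) is what makes MAE data-efficient:
    the encoder only processes the small visible subset.
    """
    rng = random.Random(seed)
    idx = list(range(num_patches))
    rng.shuffle(idx)
    num_keep = int(num_patches * (1 - mask_ratio))
    visible = sorted(idx[:num_keep])
    masked = sorted(idx[num_keep:])
    return visible, masked

# e.g. a 224px image split into 16px patches gives a 14x14 = 196 patch grid
visible, masked = random_mask_patches(196)
print(len(visible), len(masked))  # 49 visible, 147 masked
```

In a full pipeline the encoder would embed only the `visible` patches, while a lightweight decoder reconstructs pixels at the `masked` positions.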
no code implementations • 30 Dec 2020 • Zhuosheng Zhang, Haojie Yu, Hai Zhao, Rui Wang, Masao Utiyama
Word representation is a fundamental component in neural language understanding models.