1 code implementation • 24 Jan 2024 • Yiming Zhu, Zhizhuo Yin, Gareth Tyson, Ehsan-Ul Haq, Lik-Hang Lee, Pan Hui
To address this, there has been a flurry of research into prompt tuning -- techniques and guidelines that attempt to improve the quality of prompts.
no code implementations • 19 Dec 2023 • Jiawei Jiang, Yinwei Li, Shaowen Luo, Ping Li, Yiming Zhu
By processing the sub-beam data and mosaicking the refocused sub-images, a full image in GOCS free of distortion and defocus is obtained.
no code implementations • 5 Oct 2023 • Shunkai Shi, Yuqi Wang, Qihui Ye, Yanran Wang, Yiming Zhu, Muhammad Hassan, Aikaterini Melliou, Dongmei Yu
Extensive experiments show that the proposed model achieves 99.06% accuracy in the two-class classification task of tooth-marked tongue identification and 80.02%.
no code implementations • 20 Apr 2023 • Yiming Zhu, Peixian Zhang, Ehsan-Ul Haq, Pan Hui, Gareth Tyson
We believe this work can open up new lines of analysis and act as a basis for future research into the exploitation of ChatGPT for human annotation tasks.
no code implementations • ICCV 2023 • Ziyang Yuan, Yiming Zhu, Yu Li, Hongyu Liu, Chun Yuan
We leverage the inherent properties of EG3D's latent space to design a discriminator and a background depth regularization.
1 code implementation • 14 Oct 2022 • Yiming Zhu, Hongyu Liu, Yibing Song, Ziyang Yuan, Xintong Han, Chun Yuan, Qifeng Chen, Jue Wang
Based on the visual latent space of StyleGAN[21] and text embedding space of CLIP[34], studies focus on how to map these two latent spaces for text-driven attribute manipulations.
2 code implementations • 28 Jan 2022 • Ziyu Wang, Wenhao Jiang, Yiming Zhu, Li Yuan, Yibing Song, Wei Liu
In contrast with vision transformers and CNNs, the success of MLP-like models shows that simple information fusion operations among tokens and channels can yield a good representation power for deep recognition models.
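The "simple information fusion operations among tokens and channels" mentioned above can be illustrated with a minimal sketch. The shapes, weights, and block structure below are illustrative assumptions in the spirit of MLP-Mixer-style models, not the paper's actual architecture:

```python
# Minimal sketch of the two fusion operations MLP-like models alternate:
# channel mixing (an MLP applied per token, across channels) and token
# mixing (the same kind of MLP applied across tokens). Weights here are
# hand-picked toy values, purely for illustration.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def transpose(m):
    return [list(col) for col in zip(*m)]

def mixer_block(x):
    # x: (num_tokens, num_channels) -- one input as a token-by-channel grid.
    # Channel mixing: each token's channel vector is transformed independently.
    w_channel = [[1.0, 0.5],
                 [0.5, 1.0]]          # (channels, channels)
    x = matmul(x, w_channel)
    # Token mixing: transpose so the same matrix-multiply mixes information
    # across token positions instead (identity here, so it is a no-op).
    w_token = [[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]]       # (tokens, tokens)
    x = transpose(matmul(transpose(x), w_token))
    return x

tokens = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(mixer_block(tokens))  # -> [[2.0, 2.5], [5.0, 5.5], [8.0, 8.5]]
```

Real models interleave these two mixings with nonlinearities, normalization, and residual connections; the point of the sketch is only that both operations reduce to plain matrix multiplies over different axes.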
1 code implementation • 15 Oct 2021 • Yu Bai, Heyan Huang, Kai Fan, Yang Gao, Yiming Zhu, Jiaao Zhan, Zewen Chi, Boxing Chen
By introducing the compression rate -- the information ratio between the source and the target text -- we regard the MT task as a special CLS task with a compression rate of 100%.
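The compression-rate idea can be sketched concretely. Measuring length in whitespace-split tokens is a simplifying assumption for illustration; the function name and example strings are hypothetical:

```python
# Sketch of the compression rate described above: the length ratio between
# the target and the source text. Token = whitespace-split word here,
# a simplification chosen only to keep the example self-contained.

def compression_rate(source: str, target: str) -> float:
    """Ratio of target length to source length, in tokens."""
    return len(target.split()) / len(source.split())

# A summary is much shorter than its source, so its rate is well below 1.0 ...
doc = "the quick brown fox jumps over the lazy sleeping dog"
summary = "fox jumps over dog"
print(compression_rate(doc, summary))  # -> 0.4

# ... while a translation keeps roughly the same length, i.e. a compression
# rate near 100%, which is how MT becomes a limiting case of cross-lingual
# summarization (CLS).
```

Under this view, summarization-like tasks occupy the low-rate regime and translation sits at the 100% end of the same spectrum.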