no code implementations • 29 May 2024 • Zhanhui Zhou, Zhixuan Liu, Jie Liu, Zhichen Dong, Chao Yang, Yu Qiao
In this work, we introduce $\textit{weak-to-strong search}$, framing the alignment of a large language model as a test-time greedy search to maximize the log-likelihood difference between small tuned and untuned models while sampling from the frozen large model.
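The greedy test-time rule described above can be sketched in a toy form: continuations are sampled from the frozen large model, and the one maximizing the log-likelihood gap between the small tuned and untuned models is kept. The log-probability tables below are made-up stand-ins, not real model outputs; a real implementation would query three actual language models.

```python
import math

# Hypothetical per-token log-probabilities from the two small models
# (toy numbers for illustration only).
logp_tuned = {"safe": math.log(0.7), "risky": math.log(0.3)}
logp_untuned = {"safe": math.log(0.4), "risky": math.log(0.6)}

def weak_to_strong_step(candidates):
    """One greedy search step: among candidate continuations sampled from
    the frozen large model, keep the one that maximizes
    log p_tuned(t) - log p_untuned(t)."""
    return max(candidates, key=lambda t: logp_tuned[t] - logp_untuned[t])

# Suppose the frozen large model proposed these two continuations:
best = weak_to_strong_step(["safe", "risky"])
print(best)  # "safe" wins: its tuned-vs-untuned log-likelihood gap is larger
```

In this toy instance, "safe" is selected because log 0.7 − log 0.4 ≈ 0.56 exceeds log 0.3 − log 0.6 ≈ −0.69; the small model pair steers decoding without ever updating the large model's weights.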
no code implementations • 16 Jan 2024 • Zhixuan Liu, Peter Schaldenbrand, Beverley-Claire Okogwu, Wenxuan Peng, Youngsik Yun, Andrew Hundt, Jihie Kim, Jean Oh
Accurate representation in media is known to improve the well-being of the people who consume it.
1 code implementation • 28 Jan 2023 • Zhixuan Liu, Youeun Shin, Beverley-Claire Okogwu, Youngsik Yun, Lia Coleman, Peter Schaldenbrand, Jihie Kim, Jean Oh
It has been shown that accurate representation in media improves the well-being of the people who consume it.
1 code implementation • 23 Oct 2022 • Peter Schaldenbrand, Zhixuan Liu, Jean Oh
We introduce an approach to generating videos from a sequence of given language descriptions.
2 code implementations • 20 Mar 2022 • Zhixuan Liu, ZiHao Wang, Yuan Lin, Hang Li
Deep neural networks, empowered by pre-trained language models, have achieved remarkable results in natural language understanding (NLU) tasks.
1 code implementation • 24 Feb 2022 • Peter Schaldenbrand, Zhixuan Liu, Jean Oh
Generating images that fit a given text description using machine learning has improved greatly with the release of technologies such as the CLIP image-text encoder model; however, current methods lack artistic control over the style of the generated image.
1 code implementation • 4 Nov 2021 • Peter Schaldenbrand, Zhixuan Liu, Jean Oh
Generating images that fit a given text description using machine learning has improved greatly with the release of technologies such as the CLIP image-text encoder model; however, current methods lack artistic control over the style of the generated image.