no code implementations • 13 Feb 2024 • Maurice Diesendruck, Jianzhe Lin, Shima Imani, Gayathri Mahalingam, Mingyang Xu, Jie Zhao
When LLMs perform zero-shot inference, they typically use a prompt containing a task specification and generate a completion.
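A minimal sketch of this zero-shot setup: a prompt pairing a task specification with an input, leaving the model to generate the completion. The prompt format and function name here are illustrative assumptions, not the paper's actual template.

```python
# Hypothetical zero-shot prompt builder: task specification + input,
# with the completion left for the model to generate.
def build_zero_shot_prompt(task_spec: str, user_input: str) -> str:
    """Combine a task specification and a single input into one prompt string."""
    return f"{task_spec}\n\nInput: {user_input}\nOutput:"

prompt = build_zero_shot_prompt(
    "Classify the sentiment of the input as positive or negative.",
    "The movie was a delight from start to finish.",
)
```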
no code implementations • 10 Jan 2024 • Manqing Mao, PaiShun Ting, Yijian Xiang, Mingyang Xu, Julia Chen, Jianzhe Lin
Recent advancements in large language models (LLMs) have provided a new avenue for chatbot development, yet most existing research has centered on single-user chatbots that focus on deciding "What" to answer after user inputs.
no code implementations • 1 Sep 2023 • Jianzhe Lin, Maurice Diesendruck, Liang Du, Robin Abraham
We make two initial observations about prompting with batched data.
1 code implementation • 6 Jan 2022 • Maryam Hosseini, Fabio Miranda, Jianzhe Lin, Claudio Silva
While designing a sustainable and resilient urban built environment is increasingly promoted around the world, significant data gaps have made research on pressing sustainability issues challenging to carry out.
no code implementations • 18 Oct 2021 • Diwei Sheng, Yuxiang Chai, Xinru Li, Chen Feng, Jianzhe Lin, Claudio Silva, John-Ross Rizzo
Visual place recognition (VPR) is critical not only in localization and mapping for autonomous driving vehicles, but also in assistive navigation for the visually impaired population.
1 code implementation • CVPR 2022 • Guande Wu, Jianzhe Lin, Claudio T. Silva
There is growing interest in integrating user queries into video summarization, i.e., query-driven video summarization.
1 code implementation • 6 Sep 2021 • Guande Wu, Jianzhe Lin, Claudio T. Silva
This type of method includes a summarizer and a discriminator.
no code implementations • 6 Sep 2021 • Jianzhe Lin, Tianze Yu, Z. Jane Wang
To address such concerns, we rethink crowdsourced annotation: our simple hypothesis is that if annotators only partially annotate multi-label images, using the salient labels they are confident in, there will be fewer annotation errors and annotators will spend less time on uncertain labels.
1 code implementation • 15 Aug 2021 • Tianze Yu, Jianzhe Lin, Lichao Mou, Yuansheng Hua, Xiaoxiang Zhu, Z. Jane Wang
In our experiments, trained with the single-labeled MAI-AID-s and MAI-UCM-s datasets, the proposed model is tested directly on our collected Multi-scene Aerial Image (MAI) dataset.
1 code implementation • 22 Apr 2021 • Yuansheng Hua, Lichao Mou, Jianzhe Lin, Konrad Heidler, Xiao Xiang Zhu
To be more specific, we first learn the prototype representation of each aerial scene from single-scene aerial image datasets and store it in an external memory.
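One common way to realize a per-class "prototype" is the mean of the feature embeddings of that class's training images, with the prototypes kept in an external memory. The sketch below assumes pre-extracted features and a dict-based memory; both are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

# Hedged sketch: build an external "prototype memory" mapping each scene label
# to the mean of its image feature embeddings. The feature extractor itself is
# assumed to have run already; features_by_scene holds its outputs.
def build_prototype_memory(features_by_scene: dict) -> dict:
    """features_by_scene maps a scene label to an (n_images, d) feature matrix."""
    return {scene: feats.mean(axis=0) for scene, feats in features_by_scene.items()}

rng = np.random.default_rng(0)
memory = build_prototype_memory({
    "forest": rng.normal(size=(10, 8)),   # 10 forest images, 8-dim features
    "harbor": rng.normal(size=(12, 8)),   # 12 harbor images, 8-dim features
})
```

At inference time, a multi-scene image's features could then be compared against each stored prototype to score the presence of each scene.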
no code implementations • CVPR 2021 • Jianzhe Lin, Ghazal Sahebzamani, Christina Luong, Fatemeh Taheri Dezaki, Mohammad Jafari, Purang Abolmaesumi, Teresa Tsang
The model is trained using a few annotated frames across the entire cardiac cine sequence to generate consistent detection and tracking of landmarks, and adversarial training is proposed to take advantage of these annotated frames.
1 code implementation • 23 Jun 2020 • Jing Wang, Jiahong Chen, Jianzhe Lin, Leonid Sigal, Clarence W. de Silva
To solve this problem, we introduce a Gaussian-guided latent alignment approach to align the latent feature distributions of the two domains under the guidance of the prior distribution.
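One standard way to guide latent features toward a Gaussian prior: if an encoder outputs a diagonal-Gaussian posterior N(mu, sigma²) per sample, its KL divergence to the standard normal prior N(0, I) has the closed form below, and pulling both domains' posteriors toward the same prior indirectly aligns their latent distributions. This is a generic sketch of the idea, with illustrative names, not the paper's actual formulation.

```python
import numpy as np

# Closed-form KL divergence from a diagonal Gaussian N(mu, exp(log_var))
# to the standard normal N(0, I), averaged over a batch. Minimizing this
# for both source- and target-domain encodings pulls them toward the same
# Gaussian prior, which serves as the alignment guidance.
def kl_to_standard_normal(mu: np.ndarray, log_var: np.ndarray) -> float:
    """Mean over the batch of sum_d 0.5 * (sigma_d^2 + mu_d^2 - 1 - log sigma_d^2)."""
    kl_per_dim = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var)
    return float(kl_per_dim.sum(axis=1).mean())

mu = np.zeros((4, 3))       # batch of 4 samples, 3-dim latent
log_var = np.zeros((4, 3))  # unit variance
loss = kl_to_standard_normal(mu, log_var)  # → 0.0 when posterior equals prior
```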
Ranked #1 on Domain Adaptation on SYNSIG-to-GTSRB
no code implementations • 23 Sep 2018 • Jianzhe Lin, Qi Wang, Rabab Ward, Z. Jane Wang
Previous transfer learning methods based on deep networks assume that knowledge should be transferred between the same hidden layers of the source and target domains.