2 code implementations • 12 Mar 2024 • Peiyuan Liu, Hang Guo, Tao Dai, Naiqi Li, Jigang Bao, Xudong Ren, Yong Jiang, Shu-Tao Xia
Unlike existing methods that train models on a single modality of time series input, LLM-based multivariate time series forecasting (MTSF) methods with cross-modal text and time series input have recently shown clear superiority, especially with limited temporal data (a minimal sketch of the cross-modal idea follows the tags below).
Tasks: Knowledge Distillation, Multivariate Time Series Forecasting, +2
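The excerpt above describes feeding a text modality alongside the raw series into an LLM-style backbone. The sketch below is an illustration of that general cross-modal setup only, not the authors' released code: it projects time-series patches and precomputed text embeddings into a shared token space and decodes a forecast. All names, dimensions, and the small `TransformerEncoder` standing in for a frozen LLM backbone are assumptions.

```python
import torch
import torch.nn as nn

class CrossModalForecaster(nn.Module):
    """Hedged sketch of cross-modal forecasting: time-series patch tokens
    and text-prompt tokens share one sequence. Not the paper's method."""
    def __init__(self, patch_len=16, d_model=256, horizon=96, text_dim=768):
        super().__init__()
        self.patch_len = patch_len
        self.ts_proj = nn.Linear(patch_len, d_model)   # temporal branch
        self.txt_proj = nn.Linear(text_dim, d_model)   # text branch (e.g. frozen encoder outputs)
        self.backbone = nn.TransformerEncoder(         # stand-in for an LLM backbone
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(d_model, horizon)

    def forward(self, series, text_emb):
        # series: (batch, n_vars, lookback); text_emb: (batch, n_tokens, text_dim)
        b, v, t = series.shape
        patches = series.reshape(b * v, t // self.patch_len, self.patch_len)
        ts_tokens = self.ts_proj(patches)                            # (b*v, n_patch, d)
        txt_tokens = self.txt_proj(text_emb).repeat_interleave(v, dim=0)
        tokens = torch.cat([txt_tokens, ts_tokens], dim=1)           # cross-modal sequence
        h = self.backbone(tokens)
        return self.head(h[:, -1]).reshape(b, v, -1)                 # (batch, n_vars, horizon)
```

For instance, with a lookback of 96, patch length 16, and 7 variables, `CrossModalForecaster()(torch.randn(2, 7, 96), torch.randn(2, 4, 768))` returns a `(2, 7, 96)` forecast tensor.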
1 code implementation • 23 Feb 2024 • Hang Guo, Jinmin Li, Tao Dai, Zhihao Ouyang, Xudong Ren, Shu-Tao Xia
In this way, our MambaIR takes advantage of local pixel similarity and reduces channel redundancy.
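The excerpt points at two mechanisms: a local operation that exploits pixel similarity and channel attention that suppresses redundant channels. The block below is an illustrative approximation, not the released MambaIR code: the actual Mamba state-space scan (which requires the `mamba_ssm` package) is replaced with a 1x1-conv placeholder so the sketch stays self-contained, while the depthwise 3x3 convolution and squeeze-and-excitation channel attention show the two ideas named in the abstract.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: reweights channels
    globally to suppress redundant ones."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),            # global spatial squeeze
            nn.Conv2d(channels, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)                   # per-channel reweighting

class LocalEnhancedBlock(nn.Module):
    """Illustrative block: global mixer (placeholder for the Mamba scan),
    depthwise conv for local pixel similarity, then channel attention."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.GroupNorm(1, channels)
        self.mixer = nn.Conv2d(channels, channels, 1)  # placeholder for the SSM scan
        self.local = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.ca = ChannelAttention(channels)

    def forward(self, x):
        y = self.mixer(self.norm(x))
        y = self.local(y)           # exploit local pixel similarity
        return x + self.ca(y)       # residual connection + channel reweighting
```

As a shape check, `LocalEnhancedBlock(64)(torch.randn(1, 64, 32, 32))` returns a tensor of the same `(1, 64, 32, 32)` shape, as expected for a residual restoration block.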