no code implementations • 23 May 2024 • Haixu Wu, Huakun Luo, Yuezhou Ma, Jianmin Wang, Mingsheng Long
To mitigate this inherent deficiency of the default scatter-point optimization, this paper proposes and theoretically studies a new training paradigm termed region optimization.
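The contrast between scatter-point and region optimization can be sketched as follows. This is a hypothetical illustration, not the paper's method: the residual function, the uniform region partition, and the per-region squared-mean penalty are all assumptions made to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D PDE residual r(x); mocked as a smooth function of x
# so the sketch needs no actual solver.
def residual(x):
    return 0.1 * np.sin(2 * np.pi * x) + 0.01

# Default scatter-point objective: average squared residual at sampled points.
x = rng.uniform(0.0, 1.0, size=1024)
scatter_loss = np.mean(residual(x) ** 2)

# Region-style objective (sketch): partition the domain into regions and
# penalize the squared mean residual per region, so the optimization sees
# aggregated regional error rather than isolated points.
n_regions = 16
bins = np.clip((x * n_regions).astype(int), 0, n_regions - 1)
region_means = np.array([residual(x[bins == k]).mean() for k in range(n_regions)])
region_loss = np.mean(region_means ** 2)
```

The regional average lets pointwise errors of opposite sign cancel within a region, which is one simple way to make the objective sensitive to collective rather than isolated behavior.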
1 code implementation • ICLR 2024 • Shiyu Wang, Haixu Wu, Xiaoming Shi, Tengge Hu, Huakun Luo, Lintao Ma, James Y. Zhang, Jun Zhou
Going beyond the mainstream paradigms of plain decomposition and multiperiodicity analysis, we analyze temporal variations from a novel multiscale-mixing view, based on an intuitive but important observation: time series present distinct patterns at different sampling scales.
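The observation behind this view can be illustrated with a minimal sketch (not the paper's model): a series mixing a slow and a fast component, downsampled by average pooling. The signal construction and pooling factor are assumptions chosen for the example.

```python
import numpy as np

# A series mixing a slow trend (period 128) with a fast oscillation (period 8):
# distinct structure appears at different sampling scales.
t = np.arange(256)
slow = np.sin(2 * np.pi * t / 128)
series = slow + 0.3 * np.sin(2 * np.pi * t / 8)

def downsample(x, factor):
    """Average-pool a 1D series by `factor` (one common way to build scales)."""
    return x[: len(x) // factor * factor].reshape(-1, factor).mean(axis=1)

# Fine scale keeps both components; at the coarse scale the period-8 component
# averages out over each window of 8 samples, leaving mostly the slow trend.
fine = downsample(series, 1)
coarse = downsample(series, 8)
```

Here the fast sinusoid sums to zero over any aligned window of 8 integer samples, so the coarse view coincides with the pooled slow trend, a toy version of scales exposing different patterns.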
no code implementations • 4 Feb 2024 • Haixu Wu, Huakun Luo, Haowen Wang, Jianmin Wang, Mingsheng Long
Transformers have empowered many milestones across various fields and have recently been applied to solve partial differential equations (PDEs).
1 code implementation • 30 Jan 2023 • Haixu Wu, Tengge Hu, Huakun Luo, Jianmin Wang, Mingsheng Long
A burgeoning paradigm is learning neural operators to approximate the input-output mappings of PDEs.
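The idea of approximating a PDE's input-output mapping from data can be sketched in a minimal linear setting. This is an illustration, not the paper's operator: because the 1D heat equation is linear in its initial condition, its solution operator is a matrix, which plain least squares can recover from (input, output) pairs; neural operators generalize this to learned nonlinear mappings. The finite-difference solver, grid size, and sample count below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32  # grid points on [0, 1]

# Stand-in "PDE solve": explicit finite-difference steps for the 1D heat
# equation u_t = u_xx with zero boundary conditions.
def heat_solve(u0, steps=50, dt=1e-4):
    u = u0.copy()
    dx = 1.0 / (n - 1)
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * lap
        u[0] = u[-1] = 0.0
    return u

# Build (input, output) pairs: random initial conditions -> solutions.
U0 = rng.standard_normal((200, n))
U0[:, 0] = U0[:, -1] = 0.0
UT = np.stack([heat_solve(u0) for u0 in U0])

# Fit the solution operator G (a matrix, since the map is linear) by
# least squares, then apply it to a held-out initial condition.
G, *_ = np.linalg.lstsq(U0, UT, rcond=None)
u_test = np.sin(np.pi * np.linspace(0.0, 1.0, n))
pred = u_test @ G
```

With 200 random samples against 32 unknowns per output column, the overdetermined fit recovers the exact linear map, so the learned operator matches the solver on unseen inputs; the nonlinear, discretization-robust version of this is what neural operators target.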