Research

G. Li*, Y. Huang*, T. Efimov, Y. Wei, Y. Chi, Y. Chen. “Accelerating Convergence of Score-Based Diffusion Models, Provably”, ICML 2024. [arxiv]

Y. Huang*, Z. Wen*, Y. Chi, Y. Liang. “How Transformers Learn Diverse Attention Correlations in Masked Vision Pretraining”, Preprint 2024. [arxiv]

Y. Huang, Y. Cheng, Y. Liang. “In-Context Convergence of Transformers”, NeurIPS 2023 Mathematics of Modern Machine Learning Workshop (Oral); ICML 2024. [arxiv] [slides]

G. Li, Y. Chen, Y. Huang, Y. Chi, H. V. Poor, Y. Chen. “Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods”, Preprint 2023. [arxiv]

Y. Huang*, Y. Cheng*, Y. Liang, L. Huang. “Online Min-max Problems with Non-convexity and Non-stationarity”, TMLR 2023. [link]

Y. Huang, Y. Liang, L. Huang. “Provable Generalization of Overparameterized Meta-Learning Trained with SGD”, NeurIPS 2022 (Spotlight). [arxiv]

Y. Huang, J. Lin, C. Zhou, H. Yang, L. Huang. “Modality Competition: What Makes Joint Training of Multi-modal Network Fail in Deep Learning? (Provably)”, ICML 2022. [arxiv]

Y. Huang*, C. Du*, Z. Xue, X. Chen, H. Zhao, L. Huang. “What Makes Multi-modal Learning Better than Single (Provably)”, NeurIPS 2021. [arxiv]

Y. Huang, L. Huang. “Heavy Traffic Analysis of Approximate Max-Weight Matching Algorithms for Input-Queued Switches”, Performance 2020. [link]