Research

G. Li*, Y. Huang*, T. Efimov, Y. Wei, Y. Chi, Y. Chen. “Accelerating Convergence of Score-Based Diffusion Models, Provably”. [arxiv]

Y. Huang*, Z. Wen*, Y. Chi, Y. Liang. “Transformers Provably Learn Feature-Position Correlations in Masked Image Modeling”. [arxiv]

Y. Huang, Y. Cheng, Y. Liang. “In-Context Convergence of Transformers”, NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning (M3L). (Oral) [arxiv] [slides]

Y. Huang*, Y. Cheng*, Y. Liang, L. Huang. “Online Min-max Problems with Non-convexity and Non-stationarity”, Transactions on Machine Learning Research (TMLR), 2023. [link]

Y. Huang, Y. Liang, L. Huang. “Provable Generalization of Overparameterized Meta-Learning Trained with SGD”, Advances in Neural Information Processing Systems (NeurIPS), 2022. (Spotlight) [arxiv]

Y. Huang, J. Lin, C. Zhou, H. Yang, L. Huang. “Modality Competition: What Makes Joint Training of Multi-modal Network Fail in Deep Learning? (Provably)”, International Conference on Machine Learning (ICML), 2022. [arxiv]

Y. Huang*, C. Du*, Z. Xue, X. Chen, H. Zhao, L. Huang. “What Makes Multi-modal Learning Better than Single (Provably)”, Advances in Neural Information Processing Systems (NeurIPS), 2021. [arxiv]

Y. Huang, L. Huang. “Heavy Traffic Analysis of Approximate Max-Weight Matching Algorithms for Input-Queued Switches”, International Symposium on Computer Performance, Modeling, Measurement and Evaluation (Performance), 2020. [link]