
Anh Nguyen (Aengus)

I am a Research Resident at Qualcomm AI Research, where I am fortunate to be advised by Staff Scientist Dr. Anh Tran.

I am actively pursuing a PhD position in Computer Science for the Fall 2026 intake and am excited to collaborate on impactful research! 🚀
Contact: aengus.ng8@gmail.com

I focus on deep generative modeling as a principled route toward machine intelligence that surpasses human performance.

Research: My research interests lie in developing robust generative models and applying them to a range of problems, including conditional sampling, disentangled representation learning, and reasoning. Currently, my goal is to build a generative model capable of producing high-fidelity, diverse, and high-dimensional samples in a single forward pass. My recent work focuses on image generation and manipulation with ODE/SDE-based generative models, specifically diffusion probabilistic models.
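
For context, here is a minimal, illustrative sketch (not code from my papers) of how such ODE-based models draw samples: a learned velocity field is integrated numerically from noise to data, and one-step methods aim to collapse this loop into a single network call. The `velocity_net` below is a hypothetical placeholder for a trained network.

```python
# Illustrative sketch only: Euler integration of a probability-flow ODE.
# `velocity_net` is a hypothetical stand-in for a trained model predicting dx/dt.
import torch

@torch.no_grad()
def sample_ode(velocity_net, shape, num_steps=50, device="cpu"):
    """Integrate dx/dt = v(x, t) from t=1 (pure noise) to t=0 (data) with Euler steps."""
    x = torch.randn(shape, device=device)            # start from Gaussian noise
    ts = torch.linspace(1.0, 0.0, num_steps + 1, device=device)
    for i in range(num_steps):
        t, t_next = ts[i], ts[i + 1]
        v = velocity_net(x, t.expand(shape[0]))      # predicted velocity at time t
        x = x + (t_next - t) * v                     # Euler update (dt < 0)
    return x                                         # approximate data sample
```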

Previously: I spent two years as a research resident in the highly selective AI Residency Program at VinAI Research, a lab ranked among the world’s top 20 for AI research based on publication output at top-tier conferences such as CVPR and NeurIPS. The program provided intensive, PhD-level training, during which I was responsible for the entire research lifecycle, from ideation and planning to experimentation and implementation. Residents of this rigorous program have gone on to receive 119 PhD scholarships worldwide. The program’s standing was further underscored by Qualcomm’s acquisition of VinAI’s generative AI unit in 2025.

Beyond Research: I love the combination of mathematics, coding, and intuition. When I’m not debugging models or reading papers, you’ll find me running half marathons 🏃‍♂️


news

Oct 6, 2025 🏆 I am honored to receive the Outstanding Resident in Research and Applied Demo Award 2025! The award is part of the 2025 Recognition Awards from the Qualcomm AI Residency Program, which honors “the exceptional achievements of our residents this year.”
Sep 18, 2025 Improved Training Technique for Shortcut Models was accepted at NeurIPS 2025. This paper tackles the five core issues that have held shortcut models back: the hidden flaw of compounding guidance, inflexible fixed guidance, frequency bias, divergent self-consistency, and curvy flow trajectories. Our method achieves state-of-the-art FID scores, making shortcut models a viable class of generative models capable of one-step, few-step, and multi-step sampling.
Jun 26, 2025 Supercharged One-step Text-to-Image Diffusion Models with Negative Prompts was accepted at ICCV 2025. This paper enables, for the first time, negative guidance in one-step diffusion models, unlocking precise creative control without sacrificing speed. The proposed method boosts both controllability and quality, achieving a new state-of-the-art HPSv2 score.

selected publications [full list]

(*) denotes equal contribution

  1. NeurIPS
    Improved Training Technique for Shortcut Models
    Anh Nguyen*, Viet Nguyen*, Duc Vu, Trung Dao, Chi Tran, Toan Tran, and Anh Tran
    In The Thirty-Ninth Annual Conference on Neural Information Processing Systems, 2025
  2. ICCV
    Supercharged One-step Text-to-Image Diffusion Models with Negative Prompts
    Viet Nguyen*, Anh Nguyen*, Trung Dao, Khoi Nguyen, Cuong Pham, Toan Tran, and Anh Tran
    In International Conference on Computer Vision, 2025
  3. arXiv
    On the Expressiveness of Visual Prompt Experts
    Minh Le*, Anh Nguyen*, Huy Nguyen, Chau Nguyen, Anh Tran, and Nhat Ho
    arXiv preprint arXiv:2501.18936