May 20, 2025 |
Proud to be selected as a Notable Reviewer at ICLR 2025! The award is given to those who reviewed four or more papers. In my case, I put considerable effort not only into the initial reviews but also into the discussions with the authors, all with one goal in mind: identifying which aspects of each paper needed to be added or improved. It was neither an easy nor a quick task, requiring several hours just to read and understand each paper, followed by extensive effort during the rebuttal phase (one paper involved a total of 14 back-and-forth exchanges). All of this happened while I was also busy as an author myself. In the end, I raised the ratings of 3 of the 4 papers I reviewed and supported their acceptance, and they were indeed accepted! :D I’m glad to see that my efforts were recognized.
|
Mar 21, 2025 |
It was a great pleasure to present our work on Concept Unlearning to the Machine Learning team at Canva. The slides are available here. I was glad to receive many interesting, practical, industry-oriented questions from the audience and to see how our research can be applied to their real-world problems.
|
Feb 28, 2025 |
I’m excited to share that I am officially a Chief Investigator of the Trustworthy Generative AI: Towards Safe and Aligned Foundation Models project, funded by the Department of Defence, Australia with an $800K AUD grant. The project focuses on four key areas of modern foundation models: Certification - Alignment - Multimodality - Personalization, where I am leading the Personalization stream. Our goal is to push the boundaries of safe and aligned generative AI, ensuring its responsible deployment in real-world applications. The project is led by Professor Dinh Phung and co-led by a team of experts from the Faculty of IT, Monash University, which I am honored to be part of.
|
Feb 27, 2025 |
I’m excited to share that our paper “Preserving Clusters in Prompt Learning for Unsupervised Domain Adaptation” (led by Long Vuong) has been accepted to CVPR 2025!  
While CLIP-based methods for Unsupervised Domain Adaptation (UDA) have shown promise, they face limitations in target domain generalization due to embedding distribution shifts. In this paper, we propose a novel approach that exploits the geometric relationships between visual and text embeddings through optimal transport theory. By leveraging clustering behavior in multi-modal embeddings and reference predictions from source prompts, our method achieves superior performance in target-prompt learning and representation quality.
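For readers curious about the optimal transport flavor of this idea, below is a minimal, illustrative sketch (not the paper’s actual algorithm) of entropic optimal transport (Sinkhorn iterations) used to softly align image embeddings with text-prompt embeddings. All names, shapes, and hyperparameters here are assumptions for illustration only.

```python
import torch

def sinkhorn(cost, eps=0.05, n_iters=50):
    """Entropic-regularized optimal transport between two uniform marginals,
    given a pairwise cost matrix of shape (n, m). Returns the transport plan."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)   # uniform weights over image embeddings
    nu = torch.full((m,), 1.0 / m)   # uniform weights over text (prompt) embeddings
    K = torch.exp(-cost / eps)       # Gibbs kernel
    u = torch.ones(n)
    for _ in range(n_iters):
        v = nu / (K.t() @ u)         # alternating Sinkhorn scaling updates
        u = mu / (K @ v)
    return torch.diag(u) @ K @ torch.diag(v)  # transport plan of shape (n, m)

# Illustrative usage with random, L2-normalized embeddings (hypothetical sizes).
img = torch.nn.functional.normalize(torch.randn(128, 512), dim=-1)  # image embeddings
txt = torch.nn.functional.normalize(torch.randn(10, 512), dim=-1)   # prompt embeddings
cost = 1.0 - img @ txt.t()                   # cosine distance as transport cost
plan = sinkhorn(cost)
soft_labels = plan / plan.sum(dim=1, keepdim=True)  # per-image soft assignment over prompts
```

The resulting transport plan gives each image a soft assignment over prompts, which loosely mirrors the kind of cluster-aware alignment the abstract above refers to.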
|
Jan 23, 2025 |
Hooray! I’m thrilled to finally share that our work has been accepted to ICLR 2025! This is more than just an acceptance: I’m truly proud that all reviewers recognized and appreciated the originality and creativity of our approach to concept unlearning, along with its clear motivation and comprehensive experiments. The paper can be found here.
|
Oct 4, 2024 |
Excited to share another paper that I am very proud of. This paper extends our NeurIPS 2024 paper, diving deeper into the impact of erasing one concept on others, but this time with a focus on the choice of target concepts. The paper can be found here. Our paper’s title was inspired by the movie “Fantastic Beasts and Where to Find Them”. Hopefully, the reviewers enjoy it as much as the movie.
|
Sep 26, 2024 |
Proud to share that our paper “Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation” has been accepted at NeurIPS 2024. We had a challenging rebuttal period, during which we worked hard to address the feedback from some tough but silent reviewers. Fortunately, other reviewers actively engaged with us, sought to understand our paper, and ultimately championed it. So, in this happy moment, I want to express my gratitude to the anonymous reviewers ❤️, as well as to my incredible collaborators from Monash and DST. We will soon update the paper with all the details and code. The paper can be found here, along with its slides. Hope you enjoy it.
|
Jun 28, 2024 |
I am thrilled and proud to see the Trustworthy Machine Learning project, to which I have been a key contributor since my PhD, extended into a new 3-year project funded by the Department of Defence, Australia. The project will focus on various aspects of Trustworthy Generative Models, including alignment, safety, and robustness. It is the first major grant on Generative AI not only in our DSAI department but also across the entire FIT at Monash University. 🎉 🎉 🎉
|
Nov 1, 2023 |
I officially became a Dr. today! My thesis “Enhancing Adversarial Robustness: Representation, Ensemble, And Distribution Approaches” is available here. Today is also my wedding anniversary. Hooray!
|
Sep 22, 2023 |
Our paper “Optimal Transport Model Distributional Robustness” has been accepted to NeurIPS 2023! 🎉 (led by Van-Anh Nguyen)
|
Jun 24, 2023 |
Presenting “Exploring Controllability of Conditioned Diffusion Models” at Prof. Gemma Roig’s lab under the Postdoc-NeT-AI program. Slide.
|
Jun 7, 2023 |
Finally, I have submitted my Ph.D. thesis for examination. Phew!
|
Apr 12, 2023 |
I have been awarded a DAAD AInet Fellowship.
|
Apr 2, 2023 |
Presenting “Holistic View of Adversarial Machine Learning” at our lab meeting. Slide
|
Sep 2, 2022 |
Presenting “Sharpness Aware Minimization: Recent Advances and Applications” at our lab meeting. Slide
|