News_2026_paper_tea
I'm thrilled to share that our paper on Test-Time Embedding Adjustment (TEA), a simple yet surprisingly powerful method for generative personalization, has been accepted at ICLR 2026. Here are our three key contributions:
Identifying a hidden problem. We are the first to explicitly highlight the semantic collapsing problem (SCP) in generative personalization: an under-explored failure mode, driven by unconstrained optimization during finetuning, in which the personalized model gradually loses semantic fidelity to the original concept.
A training-free fix that just works. We propose a simple, general, and highly effective method that adjusts the embedding of the personalized concept at inference time, with no retraining required. It is the first approach of its kind, and the results genuinely surprised us.
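To make the idea of an inference-time embedding adjustment concrete, here is a purely illustrative sketch, not the paper's actual algorithm: it assumes the adjustment blends the finetuned concept embedding back toward the original text-encoder embedding. The function name `adjust_embedding`, the blending coefficient `alpha`, and the linear form are all hypothetical.

```python
import numpy as np

def adjust_embedding(personalized: np.ndarray,
                     original: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """Pull the finetuned concept embedding back toward the original
    text-encoder embedding. alpha=0 keeps the personalized embedding
    unchanged; alpha=1 fully restores the original one.
    (Illustrative only; the linear blend is an assumption.)"""
    return (1.0 - alpha) * personalized + alpha * original

# Hypothetical token embeddings for a personalized concept,
# before finetuning (original) and after (personalized).
personalized = np.array([0.9, -1.2, 0.3])
original = np.array([0.1, -0.4, 0.8])
adjusted = adjust_embedding(personalized, original, alpha=0.5)
```

Because the adjustment touches only the concept's embedding at inference, no model weights change and no retraining is needed, which is what makes a test-time fix of this kind cheap to apply.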
A surprising vulnerability in Anti-DreamBooth. This is perhaps our most unexpected finding: TEA can partially reverse the protection effect of Anti-DreamBooth and recover the supposedly protected concept from a poisoned model. This reveals a counter-intuitive weakness: a model you believe is fully protected can still leak its concept in the post-processing phase. We are the first to uncover this vulnerability in anti-personalization frameworks.
We validate TEA across 7 state-of-the-art personalization methods, 2 architectures (Stable Diffusion and Flux), and 3 datasets (CS101, CelebA, Relationship), covering 22 concepts in total.
Our code is available at https://github.com/tuananhbui89/Embedding-Adjustment, and an accompanying blog post is available here.