This follow-up blog post reviews alternatives to classifier-free guidance (CFG) for diffusion models trained without conditioning dropout. In such cases, vanilla CFG cannot be applied. So, what do we do? How do we apply CFG-like guidance in purely unconditional generative setups?

Recent works reveal that the (jointly trained) unconditional model in CFG can be substituted with various alternatives, such as an impaired/inferior model. The inferior network can be conditional or unconditional and requires, by design, some sort of bottleneck compared to the main model. We will refer to positive and negative models to avoid confusion with the conditional and unconditional models of plain CFG. Visit our previous articles on diffusion models or CFG to grasp the fundamental concepts first.

Positive and negative notation

CFG requires a) an external condition (text or class label) and b) training both with and without the condition. The latter refers to randomly dropping the condition, typically 10-20% of the time. The two models can also be trained separately, but either way, this requirement is a limitation. It was a known limitation of CFG and one of the reasons for its initial rejection:

“From the AC’s perspective, the practical significance could be enhanced if the authors can generalize their technique beyond assisting conditional diffusion models.”

From now on, we will refer to the conditional and unconditional models in CFG as positive and negative models due to their sign in the CFG equation.

This generalizes the framework beyond the use of a condition, allowing arbitrary models to be combined: we extrapolate away from a "negative" model instead of the unconditional one. Typically, the positive model is a regular diffusion model, and the negative model is a modified version of it. We find this generalized framework of positive and negative terms in the CFG equation helpful and use it throughout this article.

\hat{D}_{\text{out}}(x \mid \sigma) = D_{\text{neg}}(x \mid \sigma) + (1+\gamma) \left( D_{\text{pos}}(x \mid \sigma) - D_{\text{neg}}(x \mid \sigma) \right).

The generalized formulation of CFG was first presented by Hong et al. 2023. Then, the big question boils down to: what should we use as the negative term instead of the unconditional model?
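To make this concrete, below is a minimal sketch of the generalized guidance step; the function and argument names are ours, for illustration, and d_pos/d_neg stand for any pair of positive/negative denoisers:

def guided_denoise(d_pos, d_neg, x, sigma, gamma):
    # Generalized CFG: extrapolate away from the negative model's prediction.
    pos = d_pos(x, sigma)  # prediction of the positive (stronger) model
    neg = d_neg(x, sigma)  # prediction of the negative (impaired) model
    # Identical to neg + (1 + gamma) * (pos - neg); gamma = 0 recovers d_pos.
    return pos + gamma * (pos - neg)

Every method discussed below differs only in how the negative denoiser is constructed.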

Self-Attention Guidance (SAG)

TL;DR: The negative model is the same as the positive model, but its input is modified via adversarial blurring. The main idea is to replace the unconditional model's prediction with the model's prediction on a partially blurred input: patches with high activation in the self-attention map are blurred.

Hong et al. try to develop a method, namely self-attention guidance (SAG), that

  • works out of the box for both conditional and unconditional models

  • aims to apply CFG-like guidance with any single diffusion model (unconditional or conditional), given a noised image $x_t$

  • manipulates the input image $x_t$

The authors of SAG develop ideas to derive a useful signal that can act as the negative term in the CFG equation. In their terms, they ablate various “masking strategies” in image space and measure the impact on FID.

To the best of our understanding, “baseline” in the ablation below refers to the conditional ADM diffusion model on ImageNet at 128×128 resolution. Based on this ablation study, one can deduce the following:




[Figure: masking-strategy] Ablation study of the masking strategies used to form the negative term $D_{\text{neg}}(x \mid \sigma)$.

  • Gaussian blur guidance (“global”) smooths the whole input via locally weighted intensity averaging to form the negative term at each step. Blurring reduces the local details (high frequencies) within the input signal.

  • High frequency refers to applying a high-frequency mask using the FFT on the denoised image. If I understand correctly, the high-frequency mask keeps the high frequencies and attenuates or removes the low frequencies from the signal (see the sketch after this list).

  • Random and square masking reset the pixel values inside the selected regions.

  • SAG and DINO use attention maps and apply Gaussian smoothing (blur) only to high-activation patches. The authors show that the attention maps generated by a diffusion model overlap with the high-frequency regions of the image. DINO is a pre-trained self-supervised vision transformer that has never seen noisy images.
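To illustrate the high-frequency masking mentioned above, here is a rough sketch of a radial high-pass filter in Fourier space (our own code, not the paper's; the cutoff parameter is an assumption):

import torch

def high_pass(img, cutoff=0.1):
    # Keep the high frequencies of an (H, W) image and attenuate the low ones.
    h, w = img.shape
    freq = torch.fft.fftshift(torch.fft.fft2(img))
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    # Radial distance of every frequency bin from the centered DC component.
    dist = (((yy - h // 2) ** 2 + (xx - w // 2) ** 2).float()).sqrt()
    mask = (dist > cutoff * min(h, w)).float()  # 1 = keep, 0 = remove
    return torch.fft.ifft2(torch.fft.ifftshift(freq * mask)).real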

Crucially, the attention map must be manually picked (there are L × H attention maps for L layers and H heads) and upscaled to the image size, e.g., from 16×16 to 224×224.
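Putting the pieces together, a SAG-style negative input could look roughly as follows; this is a sketch under our own assumptions about tensor shapes and the attention aggregation, not the official implementation:

import torch.nn.functional as F
import torchvision.transforms.functional as TF

def sag_negative_input(x, attn, threshold=1.0, sigma=3.0):
    # x: (B, C, H, W) input; attn: (B, N, N) self-attention map, N = side**2.
    b, c, h, w = x.shape
    scores = attn.mean(dim=1)  # attention mass each token receives (assumed)
    side = int(scores.shape[-1] ** 0.5)
    mask = (scores > threshold * scores.mean()).float().view(b, 1, side, side)
    mask = F.interpolate(mask, size=(h, w), mode="nearest")  # upscale to image size
    blurred = TF.gaussian_blur(x, kernel_size=2 * int(3 * sigma) + 1, sigma=sigma)
    return mask * blurred + (1 - mask) * x  # blur only high-activation patches

The model's prediction on this partially blurred input then serves as the negative term in the guidance equation.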




[Figure: cfg-vs-sag] Compared to CFG, which uses external class information, SAG extracts internal information via self-attention to guide the model, making it training- and condition-free. The figure is adapted from the SAG paper, with minor annotations highlighted in light purple, licensed under CC BY 4.0.

Their results point to the fact that subtracting high-frequency information is more beneficial, similar to other studies. In the above figure, I have highlighted in purple which term is positive and which is negative to avoid confusion.

Instead of blurring the whole image (high-frequency filtering), SAG blurs only the image patches with the highest attention activations. Read our previous blog post for more info on how self-attention works in diffusion models.

The improvement in FID compared to the baseline is less than 10%, yet the presented images look better. However, due to the aggressive perturbation applied directly to the model's input, SAG has been shown to produce out-of-distribution (OOD) samples and to be highly sensitive to its hyperparameters.




[Figure: unguided-vs-self-attention] Cherry-picked qualitative comparisons between unguided (top) and self-attention-guided (bottom) samples. Unlike classifier guidance or classifier-free guidance (CFG), self-attention guidance (SAG) does not necessarily require an external condition, e.g., a class label or text prompt, nor additional training: (a) unconditional ADM and (b) Stable Diffusion with an empty prompt. Source

The concern when doing guidance with impaired models boils down to increasing quality without introducing new artifacts. This method was the first to point the community in this direction, so it deserves some credit!

Attention-based self-guidance: perturbed self-attention (PAG)

Narrative: This work employs a conceptually similar approach to SAG, aiming to derive a training-free negative term from the same model. However, instead of attention-based adversarial image blurring, it directly impairs the attention module in the UNet, breaking the sample's semantic structure. The resulting difference $\widetilde{\Delta}_t$ between the positive and negative terms in CFG is intended to repair the semantic coherence of the final generated sample.

TL;DR: The negative term is formed by substituting a few self-attention matrices in the UNet’s encoder with an identity matrix. Works for conditional and unconditional models as positives.




[Figure: pag] PAG replaces the self-attention map with an identity matrix for the negative model. Source

Ahn et al. attempt to derive the guidance signal from a single network. To this end, they define various perturbations of the denoiser, denoted as $D_{\text{neg}}(x \mid \sigma) = \widetilde{D_{\theta}}(x_t)$:

\hat{D}_{\theta}(x_t) = D_{\theta}(x_t) + \gamma \left( D_{\theta}(x_t) - \underbrace{\widetilde{D_{\theta}}(x_t)}_{\text{perturbation}} \right) = D_{\theta}(x_t) + \gamma \widetilde{\Delta}_t.




[Figure: perturbations-self-attention] Ablation study on different perturbations of the self-attention map for $\widetilde{D_{\theta}}(x_t)$.

The key concept is that the negative/perturbed term $\widetilde{D_{\theta}}(\cdot)$ should break the structure of the sample while keeping its appearance intact. Recall that self-attention can be decomposed into a structure part (the attention map) and an appearance part (the values):

S_t = \text{Self-attention}(Q_t, K_t, V_t) = \underbrace{\mathrm{softmax}\left(\frac{Q_t K_t^T}{\sqrt{d}}\right)}_{\textit{structure}} \overbrace{V_t}^{\textit{appearance}} = A_t V_t.

This simple approach replaces the selected self-attention map with an identity matrix $I_{h \times w}$:

\text{Perturbed self-attention}(Q_t, K_t, V_t) = I V_t = V_t.
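In code, the perturbation is almost trivial: once the attention map is replaced by the identity, the softmax product collapses to the values. A minimal sketch (the perturb flag is our own device):

import torch.nn.functional as F

def self_attention(q, k, v, perturb=False):
    if perturb:
        # A = I: each token attends only to itself, destroying the structure
        # while keeping the appearance (the values) intact.
        return v
    attn = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v

During sampling, the network is run twice per step: once normally (positive term) and once with the flag enabled in the selected layers (negative term).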




[Figure: unguided-vs-pag] Qualitative comparisons between unguided (baseline) and perturbed-attention-guided (PAG) diffusion samples. Source

However, both methods (SAG and PAG) remain architecture-dependent: which layers of the UNet am I supposed to select? What if my UNet is larger or smaller than the one used in the paper? What if I don't have a UNet model at all?

Autoguidance: Guiding a Diffusion Model with a Bad Version of Itself

Narrative: This work distills the idea of creating a negative model from an impaired positive one into a simpler form. The core intuition is that the negative model should make errors similar to the positive one but stronger.

TL;DR: Create the negative term in CFG with a “weak” model, achieved by i) training the network less (using an earlier checkpoint), ii) using a network of smaller learning capacity (fewer parameters). Combining the two achieves state-of-the-art generative performance in class-conditional models on ImageNet.

The vanilla version of CFG is only applicable to conditional generation. Since the denoiser is trained with two objectives (task discrepancy), the sampling trajectory of CFG can overshoot the conditional distribution. Moreover, prompt alignment is controlled by the guidance scale $\gamma$, but increasing it harms visual metrics such as FID. Karras et al. demonstrate how to disentangle these effects and control them in isolation. Recall that in the vanilla CFG setup, the denoiser $D_{\theta}(x \mid \sigma)$ is trained both with and without the condition, and the unconditional prediction serves as the negative term.

Autoguidance refers to leveraging an inferior version of the denoiser $D_{\text{pos}}(x \mid c; \sigma)$ as the negative model $D_{\text{neg}}(x \mid c; \sigma)$ in the generalized CFG equation.

They find that the positive model can be bottlenecked, by reducing its number of parameters or training iterations, to obtain the negative model. In this way, the positive and negative models are both conditional or, as the authors describe it:

“Our method does not suffer from the task discrepancy problem because we use an inferior version of the main model itself as the guiding model, with unchanged conditioning.”

Intuition. The intuition behind autoguidance is that the weak model is prone to making similar yet stronger errors, likely in the same regions. Autoguidance seeks to amplify the difference of these errors, $D_{\text{pos}}(x \mid c; \sigma) - D_{\text{neg}}(x \mid c; \sigma)$, pushing samples away from the regions where both models err.

In practice, a well-tuned combination of a smaller model, e.g., with 30 to 50% of the parameters, and a shorter training schedule ($\tau \in [\frac{T}{16}, \frac{T}{3.5}]$, where $T$ is the amount of training received by the main model) achieves the best results.
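A sketch of a single autoguidance denoising step, assuming main is the fully trained model and weak is its inferior version (both conditional; the signatures are ours):

def autoguidance_step(main, weak, x, c, sigma, gamma):
    # Both models receive the same condition c; they differ only in capacity
    # and/or in how long they were trained (weak = earlier checkpoint).
    d_pos = main(x, c, sigma)
    d_neg = weak(x, c, sigma)
    return d_pos + gamma * (d_pos - d_neg)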

The models' EMA (exponential moving average of the weights) is a third, more nuanced hyperparameter pair, influencing the balance between the positive and negative terms during sampling. To decide on the best EMA hyperparameters ($p_{\text{EMA}}$ for the main and the guiding model), the authors sweep over both EMA lengths.
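For context, an EMA of the weights is simply a running average maintained alongside training; its averaging length is the knob being swept here. A generic sketch (not EDM2's post-hoc power-function EMA):

import torch

@torch.no_grad()
def update_ema(ema_model, model, beta=0.999):
    # Larger beta = longer averaging memory, i.e., a "longer" EMA.
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(beta).add_(p, alpha=1 - beta)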

In the table below, the authors emphasize that

  1. using a smaller EMA value for the main model (positive) is beneficial in practice.

  2. using an earlier checkpoint is more impactful than using a model of inferior capacity.




[Figure: imageNet-512-vs-imageNet-64] Results on ImageNet-512 and ImageNet-64. The parameters of autoguidance refer to the capacity and amount of training received by the guiding model. The latter is given relative to the number of training images shown to the main model (T). The columns $\text{EMA}_m$ and $\text{EMA}_g$ refer to the EMA lengths of the main and guiding models, respectively.

Below, you can find the curves that show the individual factor sensitivities:




[Figure: sensitivity-autoguidance] Sensitivity w.r.t. autoguidance parameters, using EDM2-S on ImageNet-512. The shaded regions indicate the min/max FID over 3 evaluations. (a) Sweep over guidance weight w while keeping all other parameters unchanged. The curves correspond to how much the guiding model was trained, relative to the number of images shown to the main model. (b) Sweep over guidance weight for different guiding model capacities. (c) Sweep over the two EMA length parameters for our best configuration, denoted with ⋆ in (a) and (b). Source

The price you pay for such good numbers is that the approach is not training-free if you want to train a smaller negative model, and it requires a vast hyperparameter search at inference time (model size, checkpoint, positive EMA, negative EMA). Finally, if you do not train your own models, you will typically have only a single publicly available checkpoint, rendering the method impractical in such cases.

No Training, No Problem: Rethinking Classifier-Free Guidance for Diffusion Models

ICG: Another training-free solution is independent condition guidance (ICG): sample a random condition for the negative term instead of using the unconditional model. The purpose of this approach is to provide a training-free alternative for conditional models that have not been trained with condition dropout. Below are some cherry-picked results from the paper:




[Figure: cfg-icg-comparison] Comparison between CFG and ICG for (a) Stable Diffusion and (b) DiT-XL. Both CFG and ICG improve the image quality of the baseline. Source




[Figure: cfg-icg-quantitative-comparison] Quantitative comparison between CFG and ICG. ICG achieves metrics similar to standard CFG by extracting the unconditional score from the conditional model itself. Source

Nonetheless, ICG solely applies to conditional models. It is worth mentioning that the same authors propose a similar variant for unconditional models that applies a hacky perturbation to the timestep embedding of the unconditional model.
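For a class-conditional model, a minimal sketch of ICG's negative term could look like this (our own code; the label sampling is an assumption based on the paper's idea of an independent condition):

import torch

def icg_step(model, x, c, sigma, gamma, num_classes):
    # A random, independent condition stands in for the unconditional model.
    c_rand = torch.randint(num_classes, c.shape, device=c.device)
    d_pos = model(x, c, sigma)
    d_neg = model(x, c_rand, sigma)  # approximates the unconditional prediction
    return d_pos + gamma * (d_pos - d_neg)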

Self-improving Diffusion Models with Synthetic Data

Another fantastic idea is to re-train your (already trained) diffusion model on its own synthetic samples, with a re-training budget B, to create the impaired model. The impaired model's prediction is then used as the negative term in a CFG-like fashion.
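Schematically, building the impaired model could look like the sketch below; sample_fn and finetune_step are hypothetical helpers that generate synthetic batches and perform one training update, respectively:

import copy

def build_sims_negative(base, sample_fn, finetune_step, num_steps):
    # Fine-tune a copy of the trained model on its own generations (budget B).
    impaired = copy.deepcopy(base)
    for _ in range(num_steps):
        x_syn = sample_fn(base)         # draw a batch of synthetic samples
        finetune_step(impaired, x_syn)  # one optimization step on synthetic data
    return impaired  # used as the negative model during guided sampling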




[Figure: sims] Algorithm description of SIMS: Self-improving Diffusion Models with Synthetic Data. Source

Now, one must decide on two hyperparameters: the synthetic dataset size and the re-training budget.

The authors chose the synthetic dataset size to roughly match the real dataset size.




[Figure: fid-sims] Top row: FID between the SIMS algorithm and the real data distribution as a function of the guidance scale w at three different checkpoints of the training budget B, as measured by the number of million images seen (Mi) during re-training of the auxiliary/impaired model. Bottom row: FID of the SIMS model as a function of the training budget for three different values of the guidance scale w. Source

The above plot shows the impact of the training budget as measured by the number of samples seen during re-training and using different guidance strengths.

Smoothed Energy Guidance: Guiding Diffusion Models with Reduced Energy Curvature of Attention

This is another approach to creating the negative model from the base model, called Smoothed Energy Guidance (SEG). The authors blur the attention weights of a few self-attention blocks in the Stable Diffusion UNet. They also draw connections between attention and Hopfield energy: performing a self-attention operation is equivalent to taking a gradient minimization step in an energy landscape. The authors argue that Gaussian blurring of the attention weights modulates the underlying landscape to have less curvature, meaning it is smoother.

Blurring the attention weights means applying a localized Gaussian kernel with a specific standard deviation.

G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{(x-\mu_x)^2 + (y-\mu_y)^2}{2\sigma^2}}

The kernel is slid across the image dimensions; the mathematical operation is cross-correlation, or in deep learning terms, “convolution”. An important property of Gaussian blurring is that it preserves the mean of the signal.

(\mathbf{Q}\mathbf{K}^\top)_{\mathrm{SEG}} = G * (\mathbf{Q}\mathbf{K}^\top)

The 2D Gaussian filter is a set of weights of size $K \times K$ that uses a 2D Gaussian function to assign weights to neighboring pixels. The weights depend on the pixels' locations, not their intensities. To the best of my knowledge, the authors fix the guidance scale and kernel size and only vary sigma (the standard deviation). Based on the open-source code, the kernel size is computed as:

import math

sigma = 1.0  # standard deviation of the Gaussian blur (the main knob in SEG)

# Smallest odd integer >= ceil(6 * sigma), so the kernel has a central pixel.
kernel_size = math.ceil(6 * sigma) + 1 - math.ceil(6 * sigma) % 2
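By linearity of convolution, blurring Q over its spatial layout is equivalent to blurring QK^T along the query axis, since (G * Q)K^T = G * (QK^T). Below is a sketch of that query-blurring, following the equations above (our own code; the official implementation may differ):

import math
import torch
import torch.nn.functional as F

def seg_blur_queries(q, height, width, sigma=3.0):
    # q: (B, N, D) queries whose N tokens lie on a (height, width) grid.
    b, n, d = q.shape
    k = math.ceil(6 * sigma) + 1 - math.ceil(6 * sigma) % 2
    ax = torch.arange(k, dtype=torch.float32) - k // 2
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    g2d = torch.outer(g1d, g1d)
    g2d = (g2d / g2d.sum()).view(1, 1, k, k).repeat(d, 1, 1, 1)  # depthwise kernel
    q2d = q.transpose(1, 2).reshape(b, d, height, width)
    q2d = F.conv2d(q2d, g2d, padding=k // 2, groups=d)  # per-channel Gaussian blur
    return q2d.reshape(b, d, n).transpose(1, 2)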

The advantages of SEG are that:

  • It is a training- and condition-free method

  • It has an interpretation in terms of the curvature of the underlying energy landscape

  • Only the standard deviation (sigma) needs to be tuned.

However, a veiled limitation is that you must pick specific attention layers to apply smoothing, similar to previous methods (SAG and PAG).




[Figure: conditional-generation-controlnet] Source

Conclusion

Overall, no single out-of-the-box method replaces the vanilla CFG, and more research is needed. This article briefly discussed ways to perform sampling with diffusion models in a CFG-like fashion. For more in-depth material, check out Coursera’s course on image generation or the new book from Manning Publications on Generative AI with PyTorch. If you feel like you've learned something useful, share this blog post on social media or donate to help us reach a broader audience.

Citation

@article{adaloglou2024cfg,
  title   = "An overview of classifier-free guidance for diffusion models",
  author  = "Adaloglou, Nikolas and Kaiser, Tim",
  journal = "theaisummer.com",
  year    = "2024",
  url     = "https://theaisummer.com/classifier-free-guidance"
}

Disclaimer

Figures and tables shown in this work are provided based on arXiv preprints or published versions when available, with appropriate attribution to the respective works. Where the original works are available under a Creative Commons Attribution (CC BY 4.0) license, the reuse of figures and tables is explicitly permitted with proper attribution. For works without explicit licensing information, permissions have been requested from the authors, and any use falls under fair use consideration, aiming to support academic review and educational purposes. The use of any third-party materials is consistent with scholarly standards of proper citation and acknowledgment of sources.
