Diffusion Models For Image Generation

KEEP IN TOUCH | THE GEN AI SERIES

Aaweg I
6 min read · Mar 12, 2024

Diffusion models in image generation draw inspiration from the field of physics, particularly thermodynamics. These models offer a versatile approach to generating images and can be categorized into unconditioned and conditioned generation models.

Image source: https://www.cloudskillsboost.google/course_sessions/3404060/video/379146
  • Unconditioned diffusion models require no additional input or instructions. They can be trained solely on images of a specific category, such as faces, and learn to generate new images within that category. Another application of unconditioned generation is super-resolution, which reconstructs high-quality images from low-resolution inputs.
  • Conditioned generation models, on the other hand, allow for more specific control over the image generation process. For instance, text-to-image models generate images based on textual prompts, while image inpainting and text-guided image-to-image models enable editing and manipulation of existing images by adding or removing elements.

INNER WORKINGS OF DIFFUSION MODELS

Let’s dive deeper into the inner workings of diffusion models and understand the process in more…
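To make the thermodynamics analogy concrete, the forward (noising) half of a diffusion model gradually corrupts a clean image with Gaussian noise over many timesteps. A minimal NumPy sketch of this forward process is below; the linear beta schedule and the `forward_diffusion` helper are illustrative choices (in the style of DDPM), not code from this article:

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng=None):
    """Sample a noised image x_t from the clean image x0 in closed form.

    x0:    clean image as an array
    t:     timestep index (0 = almost clean, last step = almost pure noise)
    betas: per-step noise schedule
    """
    rng = rng or np.random.default_rng(0)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]            # cumulative signal fraction up to step t
    noise = rng.standard_normal(x0.shape)
    # Mix the original image with Gaussian noise; as t grows, the
    # signal term shrinks and the noise term dominates.
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise

# Illustrative linear schedule over 1000 steps.
betas = np.linspace(1e-4, 0.02, 1000)
x0 = np.ones((4, 4))                             # toy stand-in for an image
xt_early, _ = forward_diffusion(x0, t=0, betas=betas)    # still close to x0
xt_late, _ = forward_diffusion(x0, t=999, betas=betas)   # nearly pure noise
```

Training then teaches a network to reverse this corruption step by step, which is what lets the model generate new images from noise.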
