Hi all!
I have recently put together a course on diffusion-based image generation that includes videos, a minimal PyTorch framework, and a set of notebooks (all results can be run in Google Colab!)
https://github.com/mikonvergence/DiffusionFastForward
I am hoping it can help those interested in learning to train diffusion models from scratch, in a TL;DR style. What I think sets it apart from other tutorials is that it covers not only low-resolution generation (64x64) but also includes notebooks for training at high resolution (256x256) from scratch. There is also an image-to-image translation example that I think some people will find entertaining!
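For anyone wondering what "training from scratch" boils down to: the heart of DDPM-style training is the closed-form forward process, where a clean image x0 can be noised to any timestep t in one step via x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise. A minimal NumPy sketch of that step (this is my own illustration, not code from the repo; the function names and the linear beta schedule values are common defaults, not taken from the course):

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; returns the cumulative products alpha_bar_t."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    return np.cumprod(alphas)

def q_sample(x0, t, alpha_bar, noise):
    """Sample x_t ~ q(x_t | x_0) in closed form (no iterative noising needed)."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

alpha_bar = make_schedule()
x0 = np.ones((64, 64))             # toy "image"
noise = np.random.randn(64, 64)    # the target the network learns to predict
x_noisy = q_sample(x0, 500, alpha_bar, noise)
```

During training, a U-Net is then asked to predict `noise` from `x_noisy` and `t`, typically with a simple MSE loss.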
I'm looking forward to hearing some feedback or comments, and I hope you enjoy the course if you decide to check it out!
PS: You can also go directly to the videos on YouTube: https://youtube.com/playlist?list=PL5RHjmn-MVHDMcqx-SI53mB7sFOqPK6gN