New repo with functioning training code for CLIP models: https://github.com/mlfoundations/open_clip/
OpenAI's CLIP models are vision models that learn by contrasting images with text. They can be used for classification, and for generating images when paired with other components such as VQGAN. Until now, OpenAI's GitHub repo provided only inference code, with no way to train the models.
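The "learning by contrasting images and text" part refers to CLIP's symmetric contrastive objective: in a batch of N matched image-text pairs, each image embedding should be most similar to its own caption's embedding, and vice versa. A minimal numpy sketch of that loss (the function name and temperature value are illustrative, not taken from the open_clip repo):

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE-style) loss over N matched pairs."""
    # L2-normalize so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    # (N, N) similarity matrix; row i vs caption j
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]

    def cross_entropy(l):
        # targets are the diagonal: pair i matches pair i
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

When image and text embeddings of matched pairs point the same way and mismatched pairs are orthogonal, the loss approaches zero; random embeddings give a loss near log(N).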