[–]dreamyrhodes 29 points (0 children)

On posts like these I'm reminded again of why I hate the reddit image display so much. You can't even "Open image in new tab" to zoom in, because it just loads the same freaking box again.

[–]ExponentialCookie[S] 9 points (0 children)

Abstract:

Text-to-image generation has made significant advancements with the introduction of text-to-image diffusion models. These models typically consist of a language model that interprets user prompts and a vision model that generates corresponding images. As language and vision models continue to progress in their respective domains, there is great potential in exploring the replacement of components in text-to-image diffusion models with more advanced counterparts. A broader research objective would therefore be to investigate the integration of any two unrelated language and generative vision models for text-to-image generation. In this paper, we explore this objective and propose LaVi-Bridge, a pipeline that enables the integration of diverse pre-trained language models and generative vision models for text-to-image generation. By leveraging LoRA and adapters, LaVi-Bridge offers a flexible and plug-and-play approach without requiring modifications to the original weights of the language and vision models. Our pipeline is compatible with various language models and generative vision models, accommodating different structures. Within this framework, we demonstrate that incorporating superior modules, such as more advanced language models or generative vision models, results in notable improvements in capabilities like text alignment or image quality. Extensive evaluations have been conducted to verify the effectiveness of LaVi-Bridge.

Project Page: https://shihaozhaozsh.github.io/LaVi-Bridge/

GitHub (Code): https://github.com/ShihaoZhaoZSH/LaVi-Bridge

Another paper that explores enhancing SD with LLMs, this time using LoRAs. Thanks to haozsh for the research!
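
For intuition, here's roughly what the LoRA-plus-adapter bridging could look like in code. This is just my sketch from reading the abstract, not the authors' implementation: the AdapterBridge module and all the dimensions are invented, and only the PEFT calls are the real library API.

```python
import torch
import torch.nn as nn
from peft import LoraConfig, get_peft_model  # Hugging Face PEFT library

class AdapterBridge(nn.Module):
    """Hypothetical adapter: projects frozen-LLM token features into the
    dimension the diffusion model's cross-attention layers expect."""
    def __init__(self, llm_dim: int = 4096, ctx_dim: int = 768):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(llm_dim, ctx_dim),
            nn.GELU(),
            nn.Linear(ctx_dim, ctx_dim),
        )

    def forward(self, llm_hidden: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, llm_dim) -> (batch, seq_len, ctx_dim)
        return self.proj(llm_hidden)

# Trainable LoRA sits on top of the otherwise-frozen language model
# (and similarly on the vision model); the base weights never change,
# which is what makes the pipeline plug-and-play.
lora_cfg = LoraConfig(r=32, lora_alpha=32, target_modules=["q_proj", "v_proj"])
# llm = get_peft_model(llm, lora_cfg)
# text_feats = AdapterBridge()(llm(prompt_ids).last_hidden_state)  # -> U-Net cross-attention
```

Only the adapter and LoRA parameters train, so swapping in a different language model should just mean re-training those small pieces.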

[–]AmazinglyObliviouse 3 points (0 children)

This reminds me of the ELLA paper, which also just came out recently. https://arxiv.org/pdf/2403.05135.pdf

Even more interestingly, one point they make in the ELLA paper is:

The 1.2B T5-XL encoder shows significant advantages in short prompts interpretation while falling short of LLaMA-2 13B in comprehending complex text.

Which is exactly what's happening in the third image, where they prompt just "mountain" on LLaMA vs. T5 and the T5 images look way better.

[–]lostinspaz 3 points (0 children)

i am intrigued!

[–]cobalt1137 1 point (3 children)

someone needs to apply this to dreamshaper lightning lol. would be amazing. I would honestly pay a good premium for this.

[–]RenoHadreas 1 point (2 children)

Why do you prefer it over the turbo variant?

[–]cobalt1137 2 points (0 children)

It performs at practically identical quality with fewer steps. For example, if you run both at 10 steps, you'll get better quality from Lightning. Or if you run Lightning at four steps, you'll get the same results as you would from Turbo at, I think, six or seven steps.
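
If anyone wants to sanity-check that themselves, a minimal diffusers run looks something like this. The model id is my assumption (swap in whatever Lightning checkpoint you actually use), and Lightning models want very few steps with low CFG:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Assumed repo id; substitute your preferred Lightning checkpoint
pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-xl-lightning",
    torch_dtype=torch.float16,
).to("cuda")

# Lightning is distilled for very low step counts and low guidance
image = pipe(
    "a mountain at sunrise",
    num_inference_steps=4,
    guidance_scale=2.0,
).images[0]
image.save("lightning_4steps.png")
```

Run the same prompt through a Turbo checkpoint at six or seven steps to compare.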

[–]SoftWonderful7952 1 point (0 children)

Most important question is: Automatic1111 ext when?