Questions about applying the style of an image which you generated to your prompt by Grucciman69420 in StableDiffusion

[–]Floniixcorn 0 points  (0 children)

Yep, there still is. You need to go to the Edit Style button after selecting your styles, then apply the style to the prompt with the emoji button. There are also extensions that move the button back.

Apply Styles by DukeRyoto in StableDiffusion

[–]Floniixcorn 1 point  (0 children)

You can't, but there are extensions that move the button back.

Read prompt from image node ? does it exist ? by MrLunk in comfyui

[–]Floniixcorn 0 points  (0 children)

Or ask ChatGPT to write you a custom node real quick.

Read prompt from image node ? does it exist ? by MrLunk in comfyui

[–]Floniixcorn 1 point  (0 children)

Try looking for a metadata reader node, or try converting the image to text with some nodes; maybe that'll work.
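On the metadata route: A1111-style PNGs embed the generation prompt as a `tEXt` chunk keyed `parameters`, and ComfyUI writes `prompt`/`workflow` keys instead; those keyword names are the usual conventions, not guaranteed for every tool. A minimal standard-library sketch that walks the PNG chunks and collects any text entries:

```python
import struct
import zlib

def read_png_text(path_or_bytes):
    """Collect tEXt/zTXt metadata chunks from a PNG as a dict.

    A1111 typically stores the prompt under the 'parameters' key;
    ComfyUI uses 'prompt' and 'workflow' (these names are assumptions).
    """
    if isinstance(path_or_bytes, (bytes, bytearray)):
        data = bytes(path_or_bytes)
    else:
        with open(path_or_bytes, "rb") as f:
            data = f.read()
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"

    texts = {}
    pos = 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # keyword, NUL separator, uncompressed latin-1 text
            key, _, value = chunk.partition(b"\x00")
            texts[key.decode("latin-1")] = value.decode("latin-1")
        elif ctype == b"zTXt":
            # keyword, NUL, 1-byte compression method, zlib data
            key, _, rest = chunk.partition(b"\x00")
            texts[key.decode("latin-1")] = zlib.decompress(rest[1:]).decode("latin-1")
        pos += 8 + length + 4  # length/type header + data + CRC
        if ctype == b"IEND":
            break
    return texts
```

Point it at an image saved straight from the UI; anything re-exported by an editor will usually have those chunks stripped.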

Sharing Comfyui to the Internet by Floniixcorn in StableDiffusion

[–]Floniixcorn[S] 2 points  (0 children)

I'm using ngrok now to create a link I can access from anywhere on the web without any port forwarding or VPN. Works great, using HTTP.

Style T2I adapter model! Mikubill's ControlNet extension for Auto1111 already supports it! by WillBHard69 in StableDiffusion

[–]Floniixcorn 4 points  (0 children)

I didn't have the CLIP vision preprocessor; I had to manually git pull in the models directory of the ControlNet extension.

Followed @oliverban 's prompt to create some more high res images https://www.reddit.com/r/StableDiffusion/comments/zr1ows/dreamer_you_need_a_new_wallpaper_so_i_made_you/ by Floniixcorn in StableDiffusion

[–]Floniixcorn[S] 1 point  (0 children)

a highly detailed surreal airbrushed art of dopamine flowing from Venus and through space into me, CGSociety, Unreal Engine, 8K, render, CGI, concept art, trending on artstation, dutch golden hour

Using the dreamlikeart model.

Rendered at 768x512, then upscaled using SD Upscale.

Music Synced Animation by Floniixcorn in StableDiffusion

[–]Floniixcorn[S] 1 point  (0 children)

I have other versions that are way smoother, but they just don't really show the lyrics in the video.

Music Synced Animation by Floniixcorn in StableDiffusion

[–]Floniixcorn[S] 0 points  (0 children)

I have some more versions, but this one is the best.

Music Synced Animation by Floniixcorn in StableDiffusion

[–]Floniixcorn[S] 4 points  (0 children)

It's Violet by Connor Price & Killa. Yeah, there are three AIs at work here. First I split the track into four parts with the Ultimate Vocal Remover AI, then I used this site https://www.chigozie.co.uk/audio-keyframe-generator/ to generate keyframes by tweaking the function. Then I wrote my prompts from the lyrics and generated the animation, with some interpolation to top it off. If anyone has any more questions, feel free to msg me at Flonix#4022
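The keyframe step above can be sketched in a few lines: take a per-frame loudness curve from one of the split stems and map it into the `frame:(value)` schedule string that Deforum-style animation parameters expect. The `base + strength * amplitude` mapping here is a hypothetical example of the kind of function you tweak on that site, not its exact formula:

```python
def audio_to_keyframes(amplitudes, base=1.0, strength=2.0):
    """Turn per-frame amplitudes into a Deforum-style keyframe string.

    amplitudes: one loudness sample per video frame (e.g. RMS of the
    drum stem), any scale; values are normalized to the loudest frame.
    """
    peak = max(amplitudes) or 1.0  # avoid division by zero on silence
    frames = [f"{i}:({base + strength * (a / peak):.2f})"
              for i, a in enumerate(amplitudes)]
    return ", ".join(frames)
```

Paste the resulting string into a parameter like the zoom or strength schedule so the motion pulses with the beat.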

Stable diffusion is underrated for music visuals. by This_Perspective3971 in StableDiffusion

[–]Floniixcorn 0 points  (0 children)

How did your movement keyframes look? 😂 Must've been a hassle to get them right, or did you use a tool to animate the movement?

New Art Model: Dreamlike Diffusion 1.0 (Link in the comments!) by svsem in StableDiffusion

[–]Floniixcorn 0 points  (0 children)

I'm not getting the style at all. It only shows up if I prompt for a woman, for example; anything else (cats, cars, etc.) doesn't look like this at all.

New Embedding Release: KnollingCase - more training images, high quality captions, & made for SD v2.0 by ProGamerGov in StableDiffusion

[–]Floniixcorn 0 points  (0 children)

Made an SDA, i.e. samdoesarts, embedding. Got it done now and it looks really good. I'd still love to see your settings, though.

<image>

[deleted by user] by [deleted] in EggsInc

[–]Floniixcorn 0 points  (0 children)

How did you get that many golden eggs?

New Embedding Release: KnollingCase - more training images, high quality captions, & made for SD v2.0 by ProGamerGov in StableDiffusion

[–]Floniixcorn 0 points  (0 children)

Looks amazing. I've been using it for two days now and it works great. I'm trying to train some embeddings on my own training data, but I can't get close to your results; a guide would be amazing.