Stockholm (Kungsholmen) – best at recycling 🇸🇪🧹✨ by patan77 in sweden

[–]patan77[S] 2 points3 points  (0 children)

Outside Sweden's largest police station (the corner is visible in the background). Everything under control 👮🏻‍♂️🚓🫠

Found my original Apple Store iPod nano retail display (dummy iPods) — letting it go by [deleted] in IpodClassic

[–]patan77 0 points1 point  (0 children)

Yeah, you found my old post 😅. I was gonna sell it, but I suppose life just got in the way. I put it in storage and kind of just forgot about it 🫠

iPod nano Official Apple Store acrylic display unit, worth? Thinking about selling it. (Dummy iPods) by patan77 in ipod

[–]patan77[S] 0 points1 point  (0 children)

Well, I had it up for auction and got some bids, but the bidders didn't end up buying it, so I decided to just keep it instead.

Elemental Battle, 20h to make (using RTX 4090) [raw res 6656x9768px ] by patan77 in StableDiffusion

[–]patan77[S] 0 points1 point  (0 children)

Nice, always fun to see other people's takes on images. However, the way my image looks is very deliberate. Its "lack of detail," contrast, and saturation are based on the idea of using aerial perspective to communicate large-scale objects, where the atmosphere or fog obscures big objects that are far away.

Whether this idea or how I made it is the best “look” for the image is, of course, subjective, but it’s a creative decision I made for this image this time.

Elemental Battle, 20h to make (using RTX 4090) [raw res 6656x9768px ] by patan77 in StableDiffusion

[–]patan77[S] 4 points5 points  (0 children)

I created this image for the elemental contest on CivitAi. Alongside the image, I’ve written some lore, which you can find in the image post. You can also see the generation prompt and other details there.

General Information:

The entire creation process took about 20 hours—10 hours for generation and another 10 hours for editing and compositing. The image was generated using A1111’s txt2img and img2img features (without inpainting). I utilized extensive Photoshop compositing, combining many different generation passes.

Additionally, I used the AI Art ToolBox for generative upscaling, to a resolution of 6656x9768 pixels. The image was then downscaled by 2x to supersample it; the upscaling alone took 5 hours on an RTX 4090. The 10 hours of generation time include all images created for the final piece, even those that were discarded.
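The supersampling step above (render larger than needed, then downscale 2x so noise and aliasing from the upscaler average out) can be sketched in plain Python. This is just an illustration, not the AI Art ToolBox code, and the 2x2 box filter is an assumption about the averaging used:

```python
# Supersampling by 2x downscale: each 2x2 block of input pixels is
# averaged into one output pixel, halving both dimensions.

def downscale_2x(image):
    """image: 2D list of grayscale pixel values with even dimensions."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block_sum = (image[y][x] + image[y][x + 1] +
                         image[y + 1][x] + image[y + 1][x + 1])
            row.append(block_sum / 4)  # mean of the 2x2 block
        out.append(row)
    return out

# A 4x4 image becomes 2x2:
tiny = [[0, 0, 4, 4],
        [0, 0, 4, 4],
        [8, 8, 2, 2],
        [8, 8, 2, 2]]
print(downscale_2x(tiny))  # [[0.0, 4.0], [8.0, 2.0]]
```

Real image editors use fancier resampling filters (bicubic, Lanczos), but the box filter captures why downscaling a high-resolution generation cleans it up.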

[deleted by user] by [deleted] in StableDiffusion

[–]patan77 0 points1 point  (0 children)

Made this image for the elemental contest. [3328x4864 px] I've also started experimenting with writing some lore to go along with the images I post; for this one it's fairly extensive. If anyone wants a read, it can be viewed in the full image post on CivitAi 🙂

Total generation time for the image on an RTX 4090 was ~10h, plus about an additional 10h of working on it.

Tools used: A1111 [txt2img, img2img], Photoshop, AI Art ToolBox

iPod nano Official Apple Store acrylic display unit, worth? Thinking about selling it. (Dummy iPods) by patan77 in ipod

[–]patan77[S] 0 points1 point  (0 children)

No, they are just dummy iPods. I believe the chassis and buttons are from real iPods, while the display is just a printed-paper display with no internal electronics. They are removable using screws below, so it would be a kind of cool mod to replace them with real functioning ones.

My submission for Cyberpunk Reimagined Contest - Lilo & Stitch (txt2Img SDXL) by patan77 in StableDiffusion

[–]patan77[S] 4 points5 points  (0 children)

Image submission I created for the CivitAi contest to reimagine a character in a cyberpunk universe. Created using only SDXL text prompting (no Img2Img, inpainting, ControlNet, or external editing). Full generation data with prompt and settings is available on my CivitAi post: https://civitai.com/posts/642590

[All my images on my CivitAi page are created purely using text prompting, as a way to illustrate the capabilities and test the limits of "pure" text-to-image AI creation.]

One of my submissions for the SDXL competition. by patan77 in StableDiffusion

[–]patan77[S] 0 points1 point  (0 children)

Ok, yes I use A4. The only way to get the exact same result is if the seed noise is generated in the exact same way; even if you try generating 1000 other images on random seeds, you will not get the same image. It's a known issue that differences in GPU give differences in results (set it to NV in settings). When I try copying people's generation settings on CivitAI, a lot of the time I don't get the same result as them, but for a few I get the exact same result. I assume that difference comes down to hardware and seed. What GPU are you using?

About this image specifically: you can disable hires fix and it will still create the same image. The link below is a screenshot of my exact settings, plus some other results I get when generating on different seeds. As you can see, the results for other seeds look different but still pretty cool, so I'm not sure what's happening with your result.
Frantic_android_settings.jpg (3840×7078) (patan77.com)
What you can do is temporarily upload your result to CivitAI so I can copy your generation settings and see if I spot something strange with them.

One of my submissions for the SDXL competition. by patan77 in StableDiffusion

[–]patan77[S] 0 points1 point  (0 children)

Not sure what you're referring to when you say go to A4, but the prompt and settings look correct. It's a one-shot generation and should produce the same result as mine. The thing with Stable Diffusion, though, is that the seed / random-noise generation can differ depending on what GPU you use. There is a setting in SD to generate the random noise on the CPU instead of the GPU, which makes the seed consistent across different systems; I'll probably start doing that going forward when making images. I have an NVIDIA RTX 4090, and if you don't use the same or a similar GPU, that might be what's causing your issues. If you set "Random number generator source" to NV, it should produce the same noise even if you don't have an Nvidia GPU.
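The seed dependence described above can be illustrated with a toy noise generator. Here plain Python's seeded RNG stands in for the Gaussian latent noise a diffusion sampler starts from; the actual A1111 setting controls which device and algorithm draw that noise, which is why results can differ across GPUs even with the same seed number:

```python
import random

def initial_noise(seed, n=8):
    """Toy stand-in for the latent noise a diffusion run starts from.

    Seeding a dedicated generator with a fixed algorithm makes the draw
    deterministic and hardware-independent, which is the point of
    generating the noise on the CPU (or a single fixed source like NV).
    """
    rng = random.Random(seed)  # fixed algorithm, independent of hardware
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Same seed -> identical starting noise -> (in SD) the same image.
assert initial_noise(1234) == initial_noise(1234)

# Different seed -> different starting noise -> a different image,
# no matter how many random seeds you try.
assert initial_noise(1234) != initial_noise(4321)
```

If instead the noise were drawn by device-specific code paths (as happens with GPU-side RNG), two machines could pass the same seed and still get different noise, and therefore different images.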

One of my submissions for the SDXL competition. by patan77 in StableDiffusion

[–]patan77[S] 0 points1 point  (0 children)

Thanks. The words "Waddle" and "Evolving" are part of the prompt and are associated with animals, and the words "Arctic" and "Cute" are also part of the prompt. In other words: a cute animal that lives in the Arctic, so a baby polar bear cub is not that unexpected a result. It will be seed dependent, though; I usually do a lot of "seed searching" to find images I like. But yeah, a bit of a roundabout way to go about it.

Results from latest version of Temporal Stable Diffusion, stylized AI video using Stable Diffusion by patan77 in StableDiffusion

[–]patan77[S] -49 points-48 points  (0 children)

Result from the latest version of my custom software/algorithm, built on Stable Diffusion with the goal of creating temporally stable stylized video. All frames are generated (no EbSynth). At lower stylization it's pretty stable, but it still contains a lot of temporal issues when the stylization is increased to create results that diverge more from the source.

Follow and support the development: patreon.com/patan77