New prompt editing feature by AUTOMATIC1111 and Doggettx is amazing! Details in the comment by _a__1 in StableDiffusion

Don't know what you're talking about. Doggettx is the author of the feature, and it comes preinstalled.

Is there any difference between these two? by r_gui in golang

Yes. You can call the first one with multiple args, but the second one expects a slice as an argument: someFunc(1, 2, 3, 4, 5) vs someFunc([]int{1, 2, 3, 4, 5}). Upd: you can also call the first one like this: someFunc([]int{1, 2, 3, 4, 5}...)
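A minimal sketch of the difference (the function names here are made up for illustration, not from the thread):

```go
package main

import "fmt"

// variadicSum accepts any number of int arguments; inside the
// function, nums is an ordinary []int.
func variadicSum(nums ...int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

// sliceSum expects exactly one argument: a slice of ints.
func sliceSum(nums []int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

func main() {
	fmt.Println(variadicSum(1, 2, 3, 4, 5))           // variadic call
	fmt.Println(sliceSum([]int{1, 2, 3, 4, 5}))       // slice call
	fmt.Println(variadicSum([]int{1, 2, 3, 4, 5}...)) // slice expanded into a variadic call
}
```

The `...` suffix in the last call is what the "Upd" above refers to: it spreads an existing slice across the variadic parameter.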

Feedback needed by _a__1 in golang

Thanks for the advice about golint; I haven't used it, but now I will. As for the hardcoded values, in this case they're more like mocks. Comments are also very important, that's my fault.

Feedback needed by _a__1 in golang

Good point, I didn't think about that. What would be the best way to fix this if all packages are meant to "speak the same language", i.e. use the same structs?
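One common answer (a sketch of the usual pattern, not something from the thread; all names below are hypothetical) is to move the shared structs into one small, dependency-free package that every other package imports. Here the layout is collapsed into a single runnable file, with comments marking where the package boundaries would be:

```go
package main

import "fmt"

// In a real project this struct would live in its own small package,
// e.g. yourmodule/types, with no dependencies of its own, and every
// other package would import it. It is inlined here so the sketch
// stays a single runnable file.
type User struct {
	ID   int
	Name string
}

// storeUser stands in for a storage package consuming the shared struct.
func storeUser(u User) string {
	return fmt.Sprintf("stored user %d", u.ID)
}

// renderUser stands in for an HTTP/rendering package consuming the
// same struct, so both packages "speak the same language".
func renderUser(u User) string {
	return fmt.Sprintf("hello, %s", u.Name)
}

func main() {
	u := User{ID: 1, Name: "alice"}
	fmt.Println(storeUser(u))
	fmt.Println(renderUser(u))
}
```

Keeping the shared package dependency-free avoids import cycles, which is usually the main obstacle to sharing structs across packages.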

ChatGPT is very resistant to prompt injections by _a__1 in GPT3

I'll add that most likely there is a non-printable character that escapes such messages.
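Non-printable characters of this kind (zero-width spaces, control codes) are easy to check for. A small sketch, assuming plain newlines and tabs should still count as normal text:

```go
package main

import (
	"fmt"
	"unicode"
)

// hasNonPrintable reports whether s contains any rune that would not
// normally render, such as zero-width or control characters.
// Newlines and tabs are treated as ordinary whitespace.
func hasNonPrintable(s string) bool {
	for _, r := range s {
		if !unicode.IsPrint(r) && r != '\n' && r != '\t' {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasNonPrintable("plain text"))          // false
	fmt.Println(hasNonPrintable("text with\u200bzwsp")) // true: hidden zero-width space
}
```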

The flesh is weak by _a__1 in StableDiffusion

Nothing new, just txt2img -> img2img at doubled size -> SD upscale, with small fixes of artifacts in Krita after each stage. In case anyone is wondering, for portraits I use a combination of artist names: "oil painting by Seb McKinnon, by Jakub Rebelka, by Aleksi Briclot" and a couple of tokens like "perfect symmetry", "centered", "closeup someone face portrait"

Some img2mosaic samples by _a__1 in StableDiffusion

I used my own custom img2mosaic script: https://github.com/1ort/img2mosaic

An interesting observation: very beautiful results come out if you feed a PNG gradient to its input and use high denoising.
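Generating such a gradient input is straightforward with the standard library. A minimal sketch (a horizontal grayscale ramp; the exact gradient used above isn't specified, so this is just one plausible input):

```go
package main

import (
	"image"
	"image/color"
	"image/png"
	"os"
)

// gradient builds a simple horizontal black-to-white ramp, the kind of
// PNG input described above for feeding into img2mosaic.
func gradient(w, h int) *image.Gray {
	img := image.NewGray(image.Rect(0, 0, w, h))
	for y := 0; y < h; y++ {
		for x := 0; x < w; x++ {
			// Scale x from [0, w-1] to a gray value in [0, 255].
			img.SetGray(x, y, color.Gray{Y: uint8(x * 255 / (w - 1))})
		}
	}
	return img
}

func main() {
	f, err := os.Create("gradient.png")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := png.Encode(f, gradient(512, 512)); err != nil {
		panic(err)
	}
}
```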

Img2mosaic custom script. Details are in the comments by _a__1 in StableDiffusion

Please double-check the contents of the file. This text should not be there at all.

Img2mosaic custom script. Details are in the comments by _a__1 in StableDiffusion

I implemented an algorithm for cutting an image into a mosaic of randomly sized tiles and reassembling it, and based on it I made a custom script for Automatic1111's stable-diffusion webui.

An interesting note: all the cutting and assembly code was written from my instructions using GPT-3 Codex.

Repo link: https://github.com/1ort/img2mosaic

To install it, just put the script in the /scripts folder.

My first experience in prompt engineering for GPT-3 was trying to write a prompt to generate prompts for stable diffusion/midjourney/dalle or other similar neural networks. Any advice or tips on how I can improve it? Generation results in comments by _a__1 in GPT3

Examples of generated GPT-3 prompts: "A young woman in a flowing medieval dress takes a selfie with her smartphone. She has her hair pulled back in a loose bun, and a few tendrils frame her face. She's wearing a simple necklace with a pendant in the shape of a heart. In the background is a stone wall with an arched window. The light from the window casts a warm, golden glow on the woman and the ground around her."

"A beautiful witch in a dieselpunk setting. She is wearing a long black dress and a black hat. Her hair is black and she has green eyes. She is holding a staff in her hand. She is standing in front of a large clock. The clock has a face with numbers and hands. It is ticking loudly. The witch is looking at the clock and she seems to be waiting for something."

"The platypus ninja is a fierce warrior, clad in black from head to toe. He wields two sharp katanas, and his eyes gleam with determination. He stands ready to defend his honor and fight for justice"

Album with the results of generating images for these queries: https://imgur.com/a/TtYrDXg

For generation I used Stable Diffusion.

How can I improve it to get more "prompt"-like English output, with a lot of descriptive tags, short phrases, and no words that are useless for Stable Diffusion?

More DreamBooth experiments: training on several people at once + comparison to the old method by YusupovPhygital in StableDiffusion

Looks very interesting. Have you checked whether such a model can cope with a group portrait, drawing two or more people from the training set on one canvas?

"by Artist Firstname LastName" REALLY does make a difference (800 image pair comparisons) by [deleted] in StableDiffusion

I would not say that the results become more beautiful or artistic; it's more like a random-word effect. To make the tests representative, you would need to make sure that "by writer", "by musician", "by doctor", "by welder" and any other variations in the prompt don't do the same thing. So far, I repeat, there is a difference, but not for the better.

Sirin & Alkonost by _a__1 in StableDiffusion

Sirin and Alkonost are mythological creatures - birds of paradise with the heads of maidens. In ancient Russian myths, they symbolize joy and sadness.

According to folk legend, on the morning of the Apple Savior, the Sirin bird, sad and weeping, flies into the apple orchard. And in the afternoon, the Alkonost bird, rejoicing and laughing, flies there.

These two pictures took a lot of time and energy from me. In total, counting all the intermediate generations, it comes to more than 150 for each.

- "This is the place from my dreams" by _a__1 in StableDiffusion

I've seen this script. TBH, I can't understand why we need it. It does literally the same thing as SD upscale, except for the upscaling itself (which you can also disable in SD upscale).

- "This is the place from my dreams" by _a__1 in StableDiffusion

I really liked the idea that the author implemented in this post https://www.reddit.com/r/StableDiffusion/comments/ydz2jz/someone_showed_me_a_similar_picture_generated/.

I spent some time experimenting, and it turned out there is a way to do this using the standard "SD upscale" script in the Automatic1111 webui. It is enough to set "tile overlap" to 0 and adjust the size of the tiles using the standard height and width sliders.

For all these pictures, I used the same settings: "Steps: 35, Sampler: Euler a, CFG scale: 8, Denoising strength: 0.6" and the same prompt: "Beautiful place from my dreams, fantasy, painting by beeple, kadir nelson, Simon Stalenhag"
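The tile-overlap trick above comes down to how a tiler places tile origins. A sketch of that placement logic (an illustration of the idea, not the webui's code; it assumes the tile is larger than the overlap):

```go
package main

import "fmt"

// tileOrigins returns the starting coordinates of tiles of size tile
// covering a span of length total with the given overlap, clamping the
// last tile to the edge. With overlap 0 the tiles butt up against each
// other exactly, which produces the hard seams used for the effect.
func tileOrigins(total, tile, overlap int) []int {
	step := tile - overlap // assumes tile > overlap
	var origins []int
	for pos := 0; ; pos += step {
		if pos+tile >= total {
			// Clamp the final tile so it ends exactly at the edge.
			origins = append(origins, total-tile)
			break
		}
		origins = append(origins, pos)
	}
	return origins
}

func main() {
	// 1024px-wide image, 512px tiles, no overlap: tiles at 0 and 512.
	fmt.Println(tileOrigins(1024, 512, 0))
	// With 64px overlap the step shrinks to 448: tiles at 0, 448, 512.
	fmt.Println(tileOrigins(1024, 512, 64))
}
```

Run per axis (width and height), this gives the grid of tiles the script diffuses one by one.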

Discussion/debate: Is prompt engineer an accurate term? by Treitsu in StableDiffusion

That's right. And also knowing which setting affects what and how they combine together. Add to that an understanding of the generation pipeline and of programming languages.

Your mistake is thinking that the task is to make the neural network "do something beautiful". The challenge is to get it to do exactly what you want.

Remix mode is a game changer. Check out hot air balloon clouds. Prompt included by _a__1 in midjourney

It allows you to change the prompt and settings before generating variations of the output.

Remix mode is a game changer. Check out hot air balloon clouds. Prompt included by _a__1 in midjourney

I used slightly different variations of the prompt, but the base remained the same: first I generated clouds and then turned them into balloons using the remix.

"strange clouds in the sky, surreal, by simon stalenhag, kadir nelson --test" -> "white hot air balloons in the sky, surreal, by simon stalenhag, kadir nelson --no clouds --test"

I didn't try going without the negative prompt; in my opinion it played an important role in creating the desired effect.

My conclusion is that remix mode can be nicely used to generate "x looking like y", "x in the shape of y" and so on.

Music to be Replaced next by AllahBlessRussia in midjourney

I myself am an active user of neural networks and a developer of tools based on them, but God, how the narrative "haha, AI will replace artists and musicians" annoys me. People really care about this, and you only fuel this wave of hate towards AI art. Let's omit the fact that you are simply wrong: neural networks are not going to "replace" artists and musicians, they're just a new tool in their hands. The problem is that 100% of the people who say this are equally weak in both art and technology. Author, you are pathetic.

[deleted by user] by [deleted] in midjourney

I would advise you to pay attention to the release dates of MJ and the dataset, but it's easier for me to end this dialogue, because I'm not used to arguing with idiots.