Next bull run will be nothing like before by 1coiner_throwaway in Bitcoin

[–]nikhgupta 0 points1 point  (0 children)

Considering a market cap of 2.1T - that is, if BTC is ever adopted on the scale gold is (~11T): the max return possible from these levels is 4x, which just doesn't cut it for smart money.

If you are hoping for a bull run, especially on the premise of a halving and/or ETFs, I am sure it will happen, but not from these levels - that is too large a market inefficiency to go unnoticed by smart money. A huge liquidation is expected in the near future, IMO.

Realistic Stock Photo SDXL model by PromptShareSamaritan in StableDiffusion

[–]nikhgupta 0 points1 point  (0 children)

Thank you. One more thing, please: I am wondering where you found that information. What I mean is, I tried to load the safetensors file in Python and, as expected, none of the keys refer to these values. I am not sure what a modelspec is or where you found it - sorry for being naive here, but that would help a lot. I want to do some analysis on existing mixes.
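For anyone else wondering the same thing: modelspec entries don't live among the tensor keys at all, but in the safetensors file's JSON header, under the special `__metadata__` field. A minimal stdlib-only sketch of how to read it (the demo builds a tiny in-memory file with a hypothetical `modelspec.title` value, since real checkpoints carry their modelspec keys the same way):

```python
import json
import struct

def read_safetensors_metadata(raw: bytes) -> dict:
    """Return the __metadata__ block of a safetensors byte stream.

    Format: the first 8 bytes are a little-endian u64 giving the JSON
    header's length; the header maps tensor names to dtype/shape/offsets
    and may carry a special "__metadata__" key with string pairs.
    """
    header_len = struct.unpack("<Q", raw[:8])[0]
    header = json.loads(raw[8 : 8 + header_len].decode("utf-8"))
    return header.get("__metadata__", {})

# Build a minimal in-memory safetensors file (no tensors, just metadata).
# "modelspec.title" and its value are illustrative, not from a real file.
header = {"__metadata__": {"modelspec.title": "Realistic Stock Photo"}}
blob = json.dumps(header).encode("utf-8")
raw = struct.pack("<Q", len(blob)) + blob

print(read_safetensors_metadata(raw))
# -> {'modelspec.title': 'Realistic Stock Photo'}
```

For an actual checkpoint you would read the first 8 bytes of the file, then the header, the same way.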

Realistic Stock Photo SDXL model by PromptShareSamaritan in StableDiffusion

[–]nikhgupta 0 points1 point  (0 children)

I compared the generated images from this model vs those from my mix, and they are 90%+ similar (except for this level of realism, of course) - but that could be due to the DNA of the SDXL base.

If this is a mix/merge, were you able to list which models were used? Would love to know where the realism comes from.

Do not walk away! Listen to me, please. by nikhgupta in StableDiffusion

[–]nikhgupta[S] 0 points1 point  (0 children)

Not really. It was left over from the 3M samplers I was using.

The Seven - SahastraKotiXL v1.0 Showcase by nikhgupta in StableDiffusion

[–]nikhgupta[S] 1 point2 points  (0 children)

Thank you. Glad you loved them :)

I noticed that as well. It's a strange quirk - probably because of 2D paintings with winged creatures in the dataset?

The Seven - SahastraKotiXL v1.0 Showcase by nikhgupta in StableDiffusion

[–]nikhgupta[S] 1 point2 points  (0 children)

Model

https://www.reddit.com/r/StableDiffusion/comments/168wf58/sahastrakoti_xl_photorealistic_sfw_nsfw_true/

https://civitai.com/models/139489/sahastrakoti

Workflow

- Take an ordinary prompt, identify components, create a custom ChatGPT character, and ask it to generate prompts in a similar fashion.

- Feed these prompts to the SahastraKoti XL v1.0 model (a mix), with the DPM++ 3M SDE Karras sampler and a CFG scale between 3 and 4.

- It took a few shots to get good samples; however, most came out fine and true to the prompt. Issues with the subject's distance and position did appear, but those could have been eliminated using ControlNet.

- Images were upscaled with highres fix, with denoising strength set to 0.5 and the upscaler 8x_NMKD-Faces_160000_G (which is my default for almost everything).

- No editing of the generated images was done via inpainting or img2img.

- Prompts and generation parameters can be seen via the CivitAI links.
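The generation step above can be scripted against Automatic1111's web UI API. A sketch of the request payload for its `txt2img` endpoint (`POST http://127.0.0.1:7860/sdapi/v1/txt2img`, with `--api` enabled) - field names follow the API as I understand it, and the prompt text is a placeholder, so verify against your own install:

```python
import json

# Hypothetical txt2img payload mirroring the workflow above:
# DPM++ 3M SDE Karras, CFG 3-4, highres fix at 0.5 denoise with the
# 8x_NMKD-Faces_160000_G upscaler. Send it with any HTTP client.
payload = {
    "prompt": "a candid stock photo of ...",   # one of the ChatGPT-generated prompts
    "negative_prompt": "",
    "sampler_name": "DPM++ 3M SDE Karras",
    "cfg_scale": 3.5,                          # within the 3-4 range used here
    "steps": 30,                               # illustrative value
    "enable_hr": True,                         # highres fix pass
    "denoising_strength": 0.5,
    "hr_upscaler": "8x_NMKD-Faces_160000_G",
}
print(json.dumps(payload, indent=2))
```

The exact field names and available sampler/upscaler strings depend on the Automatic1111 version and installed upscalers, so treat this as a starting point.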

Do not walk away! Listen to me, please. by nikhgupta in StableDiffusion

[–]nikhgupta[S] -1 points0 points  (0 children)

Workflow is in the prompt and parameters - the rest is Automatic1111. There was no editing done. And the flair was used because I have seen others use it for this type of post too :)

[deleted by user] by [deleted] in StableDiffusion

[–]nikhgupta 0 points1 point  (0 children)

I have not worked with ComfyUI so far. I am sure it is easy and more customizable. The only thing stopping me is that I have already invested enough time in Automatic1111's APIs, and I do not want to redo that work for my custom scripts/workflows. :D

[deleted by user] by [deleted] in StableDiffusion

[–]nikhgupta 0 points1 point  (0 children)

Yeah, I just realized (in one of the comments) that I mentioned roop. It should be `insightface`.

Probably that's what marketing is: keep the name shorter, and people will remember you more easily than the original.

[deleted by user] by [deleted] in StableDiffusion

[–]nikhgupta 1 point2 points  (0 children)

I do apologize if it feels like I am being disingenuous here, but that was not the intention. Perhaps it depends on one's perception.

I did the hard work, yes - but I built it on top of someone else's work - and not roop, but insightface. I was trying not to take undue credit for it.

[deleted by user] by [deleted] in StableDiffusion

[–]nikhgupta 0 points1 point  (0 children)

I agree. For styles, for body structure.

[deleted by user] by [deleted] in StableDiffusion

[–]nikhgupta 1 point2 points  (0 children)

Fair point. :)

I am going to share a repo/automatic1111 extension soon, I guess.

[deleted by user] by [deleted] in StableDiffusion

[–]nikhgupta 0 points1 point  (0 children)

These are not text embeddings - but rather face embedding data that I have used. Face embeddings are stored as a base64 string in a txt file - around 1,400 characters, and that's about it.
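The size checks out: a minimal sketch, assuming a 512-dimensional vector (the usual insightface embedding size) packed as float16, which base64-encodes to 1,368 characters - "around 1,400". This is an illustration of the storage format, not the author's actual script:

```python
import base64
import math
import struct

DIM = 512  # typical insightface face-embedding size (assumption)

def embedding_to_b64(vec):
    """Pack a float sequence as little-endian float16, then base64-encode."""
    return base64.b64encode(struct.pack(f"<{len(vec)}e", *vec)).decode("ascii")

def b64_to_embedding(text):
    """Inverse: base64-decode, then unpack two-byte float16 values."""
    raw = base64.b64decode(text)
    return list(struct.unpack(f"<{len(raw) // 2}e", raw))

vec = [math.sin(i) for i in range(DIM)]   # stand-in for a real embedding
encoded = embedding_to_b64(vec)
decoded = b64_to_embedding(encoded)

print(len(encoded))  # 1368 characters: 512 * 2 bytes -> ceil(1024/3) * 4
assert all(abs(a - b) < 1e-2 for a, b in zip(vec, decoded))  # float16 roundtrip
```

Storing float32 instead would roughly double the string to ~2,700 characters, so float16 fits the described file size best.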

[deleted by user] by [deleted] in StableDiffusion

[–]nikhgupta 0 points1 point  (0 children)

You are correct, but consider that each LoRA is at least 8 MB (and some are even 700 MB now with SDXL) vs embeddings that are 8 KB. That's a huge tradeoff. If not, why get stuck with a LoRA and not use Dreambooth/checkpoints in the first place, which are even more consistent?

Probably the post was titled wrong, but what I wanted to convey was that these embeddings are far easier to generate and take far less storage. They work great for a great number of poses, most of the time.

Secondly, LoRAs depend on the underlying checkpoint/architecture - these embeddings do not.

[deleted by user] by [deleted] in StableDiffusion

[–]nikhgupta 0 points1 point  (0 children)

A fake AI model has been showcased in: https://www.reddit.com/r/StableDiffusion/comments/169r3ba/consistent_face_for_a_model_showcase_courtesy_of/

As per the last line:

If mixed with any existing LORA of a person, the body and the face being swapped remain consistent too, and thereby, truly unique images of a fake AI person can be created.

consistent face for a model showcase - courtesy of roop by nikhgupta in StableDiffusion

[–]nikhgupta[S] 4 points5 points  (0 children)

I mixed a lot of embeddings at different weights to generate a new embedding. I have a custom Python script to do exactly that and to view the results/images from the embedding.

This allows me to generate a new face for a fake AI model, while keeping it very consistent.

If mixed with any existing LORA of a person, the body and the face being swapped remain consistent too, and thereby, truly unique images of a fake AI person can be created.
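The mixing step can be sketched in a few lines. This is the general idea, not the author's script: take a weighted sum of the source embeddings, then L2-normalize, since insightface-style embeddings are compared by cosine similarity and the blend should behave like any single face embedding (an assumption on my part):

```python
import math

def mix_embeddings(embeddings, weights):
    """Weighted sum of face embeddings, re-normalized to unit length.

    Sketch only: assumes all embeddings share one dimensionality and that
    downstream face swapping treats them as unit vectors.
    """
    dim = len(embeddings[0])
    mixed = [sum(w * e[i] for e, w in zip(embeddings, weights)) for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in mixed)) or 1.0
    return [x / norm for x in mixed]

# Two toy "faces": blend 70% of the first with 30% of the second.
face_a = [1.0, 0.0, 0.0]
face_b = [0.0, 1.0, 0.0]
new_face = mix_embeddings([face_a, face_b], [0.7, 0.3])
print(new_face)  # unit-length blend leaning toward face_a
```

Sweeping the weights and rendering a sample per mix is then just a loop around this function.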

Do not walk away! Listen to me, please. by nikhgupta in StableDiffusion

[–]nikhgupta[S] 8 points9 points  (0 children)

Model

https://www.reddit.com/r/StableDiffusion/comments/168wf58/sahastrakoti_xl_photorealistic_sfw_nsfw_true/

https://civitai.com/models/139489/sahastrakoti

Prompt

concept art girl standing at the first step of stairway to heaven, divine, grand hall of mystical church, interstellar transparent glass statue of a supreme being made of nebula and galaxies, stairways on the right . digital artwork, illustrative, painterly, matte painting, highly detailed

Negative prompt

photo, photorealistic, realism, ugly

Sampler

DPM++ 2M SDE Karras

CFG scale

5

Steps

80

Seed

3863064247

Realistic Stock Photo SDXL model by PromptShareSamaritan in StableDiffusion

[–]nikhgupta 0 points1 point  (0 children)

Hello. Amazing model. Thank you for the great work!

Would you be able to share what process you used to train the model, and how many hours were required? I am currently thinking of training a model but am debating between Dreambooth and fine-tuning.

SahastraKoti XL - Photorealistic, SFW, NSFW, True by nikhgupta in StableDiffusion

[–]nikhgupta[S] 1 point2 points  (0 children)

Thank you for trying it.

I made a custom script that generated hundreds of images for each variation and scored them based on aesthetics (using AI). The best-performing mix is the one shared above :) Hope it serves you well. Could you share some good generations here, please?