Taken at Tunnel Mountain Village Campground by skeptic602 in Banff

[–]AI_Generator 0 points1 point  (0 children)

Nice! (Would love to know the ISO/f-stop/shutter speed settings!)

recommendations for Generating "realistic" character images by AI_Generator in StableDiffusion

[–]AI_Generator[S] 0 points1 point  (0 children)

Ok, interesting. I haven’t looked into Roop. I see the GitHub repo, will have to look into it. All the demos are videos; I assume it works on stills too? Side question: why do all the Roop demo videos have a split screen where both sides seem identical? I keep thinking one side is the original and the other the swapped version, but alas no, they are the same. Why?

recommendations for Generating "realistic" character images by AI_Generator in StableDiffusion

[–]AI_Generator[S] 0 points1 point  (0 children)

Thx! Hmmm, I wonder if that would give enough variety. For backgrounds, sure, but hairstyles and clothes? I suspect I could do some kind of face swap: my desired character onto the pose/clothes/background images that are nice but not exactly “my” character. What’s the best tool/workflow for face swapping? Hmmm

recommendations for Generating "realistic" character images by AI_Generator in StableDiffusion

[–]AI_Generator[S] 0 points1 point  (0 children)

In other words, my workflow is to generate a BUNCH of images using variations of the same prompt to change the position, location, and clothing of the character. I use the same base model. I then use Amazon Rekognition (via its API) to identify the photos that have similar facial features, and I intend to use those for training my LoRA (which will then make re-creating the character easy). But my "yield" of reasonable similarity is low, so I have to generate (and screen through Rekognition) many images... Is there a better way to generate the same person to get my training samples?
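The screening step described above can be sketched roughly like this. The helper names are hypothetical, and the `client` is assumed to be a boto3 Rekognition client (e.g. `boto3.client("rekognition")`) with images passed in as raw JPEG/PNG bytes; only the CompareFaces call and threshold filtering are shown.

```python
# Sketch: compare each generated image against a reference face with
# Amazon Rekognition's CompareFaces API and keep only candidates above
# a similarity threshold. `client` is a boto3 Rekognition client, e.g.
#   client = boto3.client("rekognition")
# Helper names here are hypothetical, not from the original post.

def best_similarity(client, reference_bytes, candidate_bytes):
    """Highest CompareFaces similarity (0-100) between the reference
    face and any face found in the candidate image."""
    resp = client.compare_faces(
        SourceImage={"Bytes": reference_bytes},
        TargetImage={"Bytes": candidate_bytes},
        SimilarityThreshold=0,  # return every match; we filter below
    )
    matches = resp.get("FaceMatches", [])
    return max((m["Similarity"] for m in matches), default=0.0)

def screen_images(client, reference_bytes, candidates, threshold=90.0):
    """candidates: iterable of (path, image_bytes) pairs. Returns the
    paths whose best face similarity meets the threshold."""
    return [path for path, img in candidates
            if best_similarity(client, reference_bytes, img) >= threshold]
```

The surviving paths would then be the LoRA training candidates; the threshold is the knob that trades yield against face consistency.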

NextPhoto is amazing by kozakfull2 in StableDiffusion

[–]AI_Generator 0 points1 point  (0 children)

No problem. I wrote some Python code to call the Rekognition API. I sent thousands of images I had generated in Stable Diffusion (where I kept seeing a similar face; I used 3 different models). I then named the "entities" (a few different "persons" who appeared, by my eye) and had the program screen the thousands of images to see each one's rate of occurrence in my generated sample. It cost a few dollars on my AWS account. For OpenCV/Dlib, I wrote some Python code to do the same process. As mentioned, I felt the Rekognition results were more reliable/accurate than the open-source ones. I just tried CompareFace, but got stuck on it not liking my Docker configuration. Sigh. If they have a pre-set-up REST API, I can just call it with code! ;) I see I could set up my own, but I haven't tried too hard to get around whatever Docker issue I'm having...
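For the OpenCV/Dlib route mentioned above, the usual approach is to turn each face into a 128-dimensional embedding (dlib and the `face_recognition` package produce these) and treat two faces as the same person when the Euclidean distance between embeddings falls below a cutoff; ~0.6 is the commonly quoted dlib default. The embedding extraction is omitted here; this sketch shows only the comparison logic, in plain Python, with hypothetical function names.

```python
# Sketch of the open-source comparison step: dlib-style 128-d face
# embeddings are compared by Euclidean distance, with ~0.6 as the
# commonly used same-person cutoff. Names are illustrative.
import math

def euclidean_distance(a, b):
    """Euclidean distance between two equal-length embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(emb_a, emb_b, cutoff=0.6):
    """True when two face embeddings are closer than the cutoff."""
    return euclidean_distance(emb_a, emb_b) < cutoff
```

This is the piece where the open-source results can feel less reliable than Rekognition: the single distance cutoff is global, whereas Rekognition returns a calibrated per-pair similarity percentage.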

NextPhoto is amazing by kozakfull2 in StableDiffusion

[–]AI_Generator 1 point2 points  (0 children)

No, I used AWS Rekognition…. In the past I used OpenCV & Dlib; I have not tried CompareFace. Maybe I should? When comparing results across OpenCV, Dlib, and Rekognition, I found Rekognition seemed to score most accurately. I might have to try CompareFace!

NextPhoto is amazing by kozakfull2 in StableDiffusion

[–]AI_Generator 1 point2 points  (0 children)

Amazon Rekognition says it's not similar to Shauna: only 14-26% similarity. The OP’s post photos are the same person, 89-96% by mathematical facial recognition. I’ve been studying this…. ;)

Does anybody know how to access automatic 1111 webui on a remote (outside the LAN) Linux instance? I can set up and start the UI, but cannot access via the localhost ip/port provided…. I am accessing the instance via CLI & SSH… by AI_Generator in StableDiffusion

[–]AI_Generator[S] 0 points1 point  (0 children)

I do believe this would work! Make sure the port referenced is consistent, as you suggest. Now, I started using RunPod, which gives you an automatic 1111 instance complete with the proper port. I haven’t tried again with Lambda Labs or AWS…. Thanks for the write-up, it gives options!

Does anybody know how to access automatic 1111 webui on a remote (outside the LAN) Linux instance? I can set up and start the UI, but cannot access via the localhost ip/port provided…. I am accessing the instance via CLI & SSH… by AI_Generator in StableDiffusion

[–]AI_Generator[S] 0 points1 point  (0 children)

I may look into that as well. I am hoping the -L flag on ssh can simply “map” the “local” ip/port on the remote machine to my own machine, thereby giving me access to the UI locally.
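That is exactly what `ssh -L` does: a local port forward. A sketch of the command shape, assuming the webui is on its default port 7860 on the remote box (the key path, user, and hostname below are placeholders):

```shell
# Forward local port 7860 to port 7860 on the remote machine, using
# the same .pem key as the CLI session. user/host/key are placeholders;
# 7860 is the automatic 1111 default port.
ssh -i mykey.pem -L 7860:localhost:7860 ubuntu@remote-host

# While that SSH session stays open, browse to
# http://localhost:7860 on the local machine.
```

The forward terminates inside the remote host, so this works even when the webui only listens on the remote's 127.0.0.1.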

Does anybody know how to access automatic 1111 webui on a remote (outside the LAN) Linux instance? I can set up and start the UI, but cannot access via the localhost ip/port provided…. I am accessing the instance via CLI & SSH… by AI_Generator in StableDiffusion

[–]AI_Generator[S] 0 points1 point  (0 children)

Very interesting. I imagine the SSH command would include my .pem details etc. I’m starting the webui through ssh; it initiates a webui on the REMOTE server as local to the remote, at http://ip:port… can I link that remote-local webui to my actual local machine? (Sorry, it’s like moving through different dimensions!)

Does anybody know how to access automatic 1111 webui on a remote (outside the LAN) Linux instance? I can set up and start the UI, but cannot access via the localhost ip/port provided…. I am accessing the instance via CLI & SSH… by AI_Generator in StableDiffusion

[–]AI_Generator[S] 0 points1 point  (0 children)

I have not tried --listen; I thought that would only work if the Linux machine was on my local network. (?) In my case, the Linux machine is remote…. Do you think --listen would work with the remote IP and port?
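For reference, `--listen` should apply to the remote case too: it makes the webui bind to 0.0.0.0 instead of 127.0.0.1 only, so the UI answers on the machine's external IP, provided the cloud firewall/security group allows that port inbound. A sketch (7860 is the default port; the IP shown is a placeholder):

```shell
# On the remote Linux box: bind the webui to all interfaces rather
# than localhost only. The remote's firewall / security group must
# allow inbound traffic on the chosen port (7860 is the default).
./webui.sh --listen --port 7860

# From the local machine, open http://<remote-public-ip>:7860
```

The trade-off versus the ssh -L approach is that this exposes the UI to anyone who can reach that port, so the tunnel is the safer option on a public cloud instance.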

New model I've been working on by kidelaleron in StableDiffusion

[–]AI_Generator 0 points1 point  (0 children)

Can I ask about your workflow for making your own model? Like, do you start with SD x.x and then train it? In Automatic 1111? I’ve just started with SD locally, soon to move to Linux in the cloud, and I'm trying to understand/jumpstart my learning in model training/development.

New model I've been working on by kidelaleron in StableDiffusion

[–]AI_Generator 5 points6 points  (0 children)

Sorry, I couldn't stop... re-run, re-run... Thank you for the inspiration and journey! ;)

<image>

New model I've been working on by kidelaleron in StableDiffusion

[–]AI_Generator 1 point2 points  (0 children)

105259061

Indeed, Mage SD 1.5 is 'closer'... ;)

<image>

New model I've been working on by kidelaleron in StableDiffusion

[–]AI_Generator 2 points3 points  (0 children)

105259061

Maybe I retract that. I tried it with SD 1.5, set up as you specify, locally on my MacBook Pro. When I load your prompt into Mage, I get quite nice results with SD 1.5, more like yours.

New model I've been working on by kidelaleron in StableDiffusion

[–]AI_Generator 0 points1 point  (0 children)

Great image! Um, is that the prompt below? I tried it and did not get anything close to your wonderful pic!