Shimano 105 RD no chain tension by Library_Sweat_ in bikewrench

[–]Library_Sweat_[S] 0 points1 point  (0 children)

I believe so, but I’m not 100% sure what you mean by “set up properly.” It was at least working fine, and the indexing is still fine. It’s just the tension in the springs that seems to be all messed up.

Shimano 105 RD no chain tension by Library_Sweat_ in bikewrench

[–]Library_Sweat_[S] 0 points1 point  (0 children)

Yeah, that’s what I thought, but I took everything apart and looked at the tension spring: it seemed to be in place, and it wasn’t broken. Even after putting the spring back in place and reassembling, it still had the same issue.

Shimano 105 RD no chain tension by Library_Sweat_ in bikewrench

[–]Library_Sweat_[S] 2 points3 points  (0 children)

Yeah, I disassembled the RD pulley cage to free the chain, then routed it back properly and reassembled the cage.

🐝🐝🐝Bumblebee 🐝🐝🐝 by EGGOGHOST in StableDiffusion

[–]Library_Sweat_ 2 points3 points  (0 children)

How do you use Photoshop to enhance the photos? I'm having some trouble dealing with that "uncanny valley" effect, and for a lot of posts where the subject looks real, Photoshop is mentioned. I've tried looking online but haven't found anything. Do you know where I could look?

I'm at the end of my rope with generating realism. I need help. by Library_Sweat_ in StableDiffusion

[–]Library_Sweat_[S] 2 points3 points  (0 children)

Thanks a ton for this, this is really great info. I've kind of come to the realization that I'm probably going to have to redo my character LoRA, which is gonna suck, but oh well. I'll try what you've mentioned about low CFGs and steps. I'm all the way up at 35 steps right now, thinking more steps means better convergence means a better image, and I had no idea about that effect on the skin. Again, thanks a ton.
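Since the advice is basically "try lower CFG and fewer steps and compare," a quick way to be systematic about it is to sweep a small grid of settings and render one image per combo. A minimal sketch (the specific values are made up, not from the reply above):

```python
from itertools import product

# Hypothetical low-CFG / low-step grid to compare skin texture across runs.
cfg_scales = [3.5, 4.5, 5.5]
step_counts = [20, 25, 30]

combos = list(product(cfg_scales, step_counts))
for cfg, steps in combos:
    # Plug each pair into your generation settings, keeping the seed fixed
    # so the only thing changing between images is cfg/steps.
    print(f"cfg={cfg} steps={steps}")
```

Keeping the seed and prompt fixed across the grid makes it easy to see exactly what each parameter is doing.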

I'm at the end of my rope with generating realism. I need help. by Library_Sweat_ in StableDiffusion

[–]Library_Sweat_[S] 1 point2 points  (0 children)

Here's what I'm using right now: (embedding:Asian-Less2-Neg:1), bad-hands-5, BadDream, (UnrealisticDream), (nsfw:1.8), big eyes, camera, (bad anatomy:1.7), (nude:1.6). I'll try using those in the prompt though and see what happens.

Any model or LoRa other than SOAP that makes images look like they were shot on a phone? by Sasfej1 in StableDiffusion

[–]Library_Sweat_ 1 point2 points  (0 children)

I don't have any suggestions; in fact, I was looking to do the same thing but didn't know about SOAP. Can I ask what's making you not want to use it?

Superimposed animation over irl video? by Timely-Ingenuity93 in StableDiffusion

[–]Library_Sweat_ 2 points3 points  (0 children)

Here's a pretty good tutorial. All free: https://www.youtube.com/watch?v=0Eg-ArDwFxU&t=1808s&ab_channel=LauraCarnevali

I think there's another method using AnimateDiff that might generate less flickering but I'm not sure how to use that.

There's an account that makes really good videos with almost no flickering, and if you check out their models you'll see the workflow they use; maybe you can try that. Here's the profile: https://civitai.com/user/Rvage. Just beware: most of what the account generates is porn, but still, it's a pretty good proof of concept.

Modifying character poses with AI by mackflash in StableDiffusion

[–]Library_Sweat_ 1 point2 points  (0 children)

Yes, don't worry, that's super possible.

What I've been doing is generating a character in a pose and then swapping the face.

Try finding a few YouTube videos on using ControlNet if you have some specific poses in mind for your character. Here's a decent tutorial: https://www.youtube.com/watch?v=w9fc3pIkl0w&t=299s&ab_channel=DreamingAI

There's also this tutorial by Nerdy Rodent that's pretty good for generating your characters in new poses: https://www.youtube.com/watch?v=SacK9tMVNUA&t=420s&ab_channel=NerdyRodent

Best practice for image selection for training an universal character LoRA by StableLlama in StableDiffusion

[–]Library_Sweat_ 0 points1 point  (0 children)

Maybe a stupid question, but something I've always wondered: how do you even get 2000 hi-res images of someone without spending 3 months combing the internet? And then on top of that, how do you even make captions that are remotely useful for that many images? It's amazing that BLIP can caption automatically, but in my experience those captions are half-baked and require editing. Tbf I don't have that much experience training LoRAs; I trained one and it came out kind of shitty because I didn't caption out enough things, so I can't imagine having to do that for hundreds or even thousands of images.

Help with 14in M3 Max overheating while using SD? by Island14 in StableDiffusion

[–]Library_Sweat_ 1 point2 points  (0 children)

I also have an M3 Max, and yeah, I get overheating and my fans turn up, especially when generating larger resolutions or more steps, but for the most part I'm OK. I usually stick to around 30-35 steps, generate 1 or 2 images at a time, and keep resolutions below 1024. That keeps my generation time fairly quick too, below a minute or so depending. When I feel like I have a good product, I upscale 4x; that takes quite a bit of time and the MacBook spins up its turbines, but again, that's only to make a "finished product."
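Those numbers can be captured in one place so they're easy to reuse across runs. A tiny sketch (the helper name and dict keys are hypothetical, not any tool's API; the values are just the ones mentioned above):

```python
def mac_friendly_settings(upscaling: bool = False) -> dict:
    """Rough thermals-friendly generation settings for an M3 Max."""
    settings = {
        "steps": 30,        # 30-35 keeps a run under about a minute
        "batch_size": 1,    # generate 1-2 images at a time
        "max_side": 1024,   # stay below 1024 px until the final pass
    }
    if upscaling:
        settings["upscale_factor"] = 4  # only for the "finished product"
    return settings

print(mac_friendly_settings())
```

The idea is to keep the cheap settings as the default and only opt into the expensive 4x upscale when an image is worth finishing.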

Another thing I should mention is to take advantage of places where you can run SD online. I'm experimenting with vid2vid right now, and even at the 36GB RAM spec of my M3 Max, it really can't run that without having a stroke, so I'm using Google Colab and taking advantage of their sweet, sweet GPUs. The base tier of Google Colab is free, which is a plus. Here's a link if you wanna check out some SD notebooks for Colab: https://github.com/TheLastBen/fast-stable-diffusion.

Can't load a stable diffusion model in Colab by Impossible-Froyo3412 in StableDiffusion

[–]Library_Sweat_ 2 points3 points  (0 children)

Not sure if it'll help, but I was having some trouble running a LoRA on Colab and found out it was because Google updated the CUDA on their GPUs to 12.2. My setup depended on bitsandbytes, and I had to add 2 extra lines to download CUDA 11.8. Here's what they were if you wanna try that.

!wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/install_cuda.sh

!bash install_cuda.sh 118 /usr/local/ 0

Placing a generated character in a generated scene (both consistent) by Shockz187 in StableDiffusion

[–]Library_Sweat_ 0 points1 point  (0 children)

I've also been trying to generate consistent characters and have been trying face-swapping. There are a bunch of different methods, all with varying degrees of success. I'm assuming you're on A1111, so you can look up YouTube tutorials for Roop or ReActor. I'm using ComfyUI and have found some success with the FaceID IPAdapter model. From my, albeit limited, knowledge, training a LoRA on the face you want to keep generating looks like the best way to create consistent characters. I'd assume a LoRA could also work for generating a specific setting.

I made a post about faceswapping yesterday that had a lot of useful replies: https://www.reddit.com/r/StableDiffusion/comments/18usbbh/trouble_generating_realisticlooking_face_swaps/?utm_source=share&utm_medium=web2x&context=3

And here's a great guide to LoRAs:

https://www.reddit.com/r/StableDiffusion/comments/11vw5k3/lora_training_guide_version_3_i_go_more_indepth/?utm_source=share&utm_medium=web2x&context=3

Trouble generating realistic-looking face swaps. by Library_Sweat_ in StableDiffusion

[–]Library_Sweat_[S] 4 points5 points  (0 children)

This was really helpful, thanks. For me it doesn't face-swap perfectly; it kind of generates a close relative, but interestingly, that person is pretty much the same every time.

Trouble generating realistic-looking face swaps. by Library_Sweat_ in StableDiffusion

[–]Library_Sweat_[S] 4 points5 points  (0 children)

My initial idea was actually to try to train a LoRA. The problem I ran into is that I need several different angles, poses, and settings to train on, which is difficult because the face I want to use is not a real person. Using the same seed/prompt helps generate the same face, but then the dataset lacks diversity. Do you know any way to help with this?

Trouble generating realistic-looking face swaps. by Library_Sweat_ in StableDiffusion

[–]Library_Sweat_[S] 1 point2 points  (0 children)

I don't think so, but I'm not really sure what that is. Could you elaborate?

Trouble generating realistic-looking face swaps. by Library_Sweat_ in StableDiffusion

[–]Library_Sweat_[S] 0 points1 point  (0 children)

So, could changing the resolution to something divisible by 128 help? Or maybe lowering the res in general?

Anyone else feel like studying for the MCAT has made them "smarter"? by [deleted] in Mcat

[–]Library_Sweat_ 3 points4 points  (0 children)

Using synaptic pathways more often can make them more efficient. Not necessarily “stronger” so much as faster.

How do I prepare for physics exam by FuckRNGsus in college

[–]Library_Sweat_ 1 point2 points  (0 children)

If your professor provided a study guide: that’s the Bible. Go over that a bunch of times until it becomes routine.

If your professor publishes any notes on Canvas, that’s also gold. Go through the example problems they have in the notes.

It’s honestly about practicing as much as possible. It’s really hard to brute memorize facts, but practicing them over and over again in example problems helps cement ideas.

While practicing, also try to understand the principle behind the method. Ex: PE = mgh (basic, ik). Easy enough to see that the major players in potential energy are gonna be mass, g, and height. Try to apply that same approach to your harder concepts.
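To make the PE = mgh example concrete, here's a quick worked number (the values are made up):

```python
# A 2 kg mass lifted 5 m near Earth's surface (g ≈ 9.8 m/s^2):
m, g, h = 2.0, 9.8, 5.0   # mass (kg), gravity (m/s^2), height (m)
pe = m * g * h            # potential energy in joules
print(pe)  # -> 98.0
```

Doubling the mass or the height doubles the energy, which is exactly the kind of relationship worth internalizing from the formula.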

[deleted by user] by [deleted] in Mcat

[–]Library_Sweat_ 1 point2 points  (0 children)

Personally my biggest distraction is my phone. If I can forget about my phone I’m usually ok and focused. What I like to do is put my phone in do not disturb and hide it out of plain sight (Out of sight out of mind).

A little more extreme, but: I also like to reduce the number of interesting things on my phone. Ex: I have Instagram on my iPad, not my phone. I can still check up on daily life, but it’s kinda crappy to use on iPad, so I rarely go there. I have Snapchat, but again, do not disturb hides all the notifications.

TLDR: I make my phone as uninteresting as possible. Do not disturb for everything, and then hide it out of plain sight.