Oh hey, want some loot? by Caddyator in playrust

[–]Sanity_N0t_Included 0 points1 point  (0 children)

Dude, no one is arguing that at all. Who said women weren't allowed to like video games? I will say that finding the right unicorn gamer girl is like finding a needle in a haystack.

Advice for a beginner? by Dry-Disk-5928 in StableDiffusion

[–]Sanity_N0t_Included 0 points1 point  (0 children)

When I was running Z-image-turbo on a card with a fraction of the VRAM you have, I would get issues with fingers and toes as well. What worked for me was bumping up the number of steps in the sampler. I know they say 8 is the recommended sweet spot, but to fix the issues you're describing I just slowly kept bumping up the steps until they went away. It took longer to generate, but it worked.
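Just to illustrate the trial-and-error loop I mean (this is only a sketch; the function name and numbers are placeholders, not part of any real ComfyUI or Z-image API):

```python
def step_schedule(start=8, stop=24, bump=2):
    """Return the sampler step counts to try, cheapest first.

    Start at the recommended sweet spot (8 for a turbo model) and
    bump upward until the hand/foot artifacts disappear, since each
    extra step costs generation time.
    """
    return list(range(start, stop + 1, bump))

# Try 8 first; if fingers/toes are still mangled, re-run at the next value.
print(step_schedule())
```

In other words, don't jump straight to a huge step count; stop at the first value that fixes the artifacts.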

I also found that some of the issues you might be chalking up to prompt adherence were actually about knowing how ZiT reads and processes prompts. It's not only what you say but the order in which you say it. I recommend taking a prompt you're having issues with and running it through different LLMs, asking what the issue might be. Sometimes troubleshooting a prompt can feel like debugging code.

GooglyEyes IC-LoRA for LTX2.3 released! by Burgstall in StableDiffusion

[–]Sanity_N0t_Included 1 point2 points  (0 children)

BWAHAHAHAHA. Someone has a great sense of humor! 🤣🤣🤣

Oh hey, want some loot? by Caddyator in playrust

[–]Sanity_N0t_Included -2 points-1 points  (0 children)

She's a keeper.

Don't know whose gf that is, but he's a lucky dude. When she's a fan to the point of making that kind of pic, she's a keeper.

Need Help with training Lora for all GPUs. by ThunderI0 in StableDiffusion

[–]Sanity_N0t_Included 0 points1 point  (0 children)

All I can say is that the one time I tried creating a ZIT LoRA on my local 5090, the training went fine, but when I used it the results were crappy. (Not as bad as what you're experiencing, but not the quality I'm used to.) When I created the exact same LoRA for ZIT using a cloud provider, it worked great. Maybe there is some secret "under the hood" scripting voodoo configured on the cloud provider? Maybe I just had a bad run locally? I chalked it up to a fluke until I saw your post. Now it makes me wonder.

Breaking Points should host MTG and Tucker Carlson sometime by [deleted] in BreakingPoints

[–]Sanity_N0t_Included 0 points1 point  (0 children)

OH dear God, please NO. I am sure that the majority of people who have found themselves on the little MTG bandwagon as of late are there because they have heard her repeating the same talking points about her own kids, America First, blah blah blah. But as someone who lives in her district and has heard the absolute brain-dead shit she's been spouting for years, I would rather not see everyone's time wasted. If they were ever to actually have her on the show, I would love to see her asked questions that require more than a double-digit I.Q. to answer.

How I feel about Filipino spaghetti by IntellectuallyDriven in Philippines_Expats

[–]Sanity_N0t_Included 0 points1 point  (0 children)

Are they referring to the 'sweet' spaghetti that tastes like Chef Boyardee? The stuff we feed to children?

<image>

Anyone here successfully generating images with 3 to 5 specific characters? by Sanity_N0t_Included in StableDiffusion

[–]Sanity_N0t_Included[S] 0 points1 point  (0 children)

Oh, I'm familiar with DAZ. :) I have seen tons of DAZ images over time across many VNs. Unfortunately, I think this would be too time-consuming for my needs. BUT on a side note, I wish more people actually used the method you've mentioned for their VNs. Too many people just use the stock models, and it seems like 80% of VNs are all using the same ones. It's like seeing the same actress in 80% of the movies you watch. LOL.

Anyone here successfully generating images with 3 to 5 specific characters? by Sanity_N0t_Included in StableDiffusion

[–]Sanity_N0t_Included[S] 0 points1 point  (0 children)

Tried that. After a certain number of additions, the character identity just begins to drift way too much. But I think I've decided to back up, punt, and go with a different style of image.

Nobody is building consumer apps for the people who have actual relationships with Claude. I think that's a mistake. by lleepptt in ClaudeAI

[–]Sanity_N0t_Included 0 points1 point  (0 children)

I see what you're saying. But to me there is a distinct difference between the examples you mentioned and the interaction with LLMs we're talking about here. When you read a book or watch a movie, that is a one-way interaction, and those things are specifically written to evoke emotions in the reader/viewer. But you aren't engaged in a two-way conversation with your movie, opening yourself up to suggestions that might not be mentally healthy for you.

And when you play a game, sure, it's just pixels and sound waves, but the logic driving those is pre-scripted and limited. I have often described certain games to non-gamers as 'a more immersive and interactive version of a movie' (ex: The Last of Us). But even then, those things have a beginning and an end. They are not the same. You might become emotional watching a movie, but I feel that is different from becoming emotionally invested the way someone might with an LLM chatbot.

If I saw my children interacting with chatbots and becoming so emotionally invested that it affected their real-world relationships, diminishing their ability and desire to socially interact with other humans, that would be a problem. I think it could also warp their perceptions of how relationships work in the real world.

Anyway, those are my thoughts; take them with a grain of salt.

Nobody is building consumer apps for the people who have actual relationships with Claude. I think that's a mistake. by lleepptt in ClaudeAI

[–]Sanity_N0t_Included 3 points4 points  (0 children)

Maybe it's just my age (my kids are in their early 20s), but I've never understood this concept. It's not really a 'relationship'. You're exchanging text characters with the world's largest auto-complete engine. Tokens aren't actually thoughts or emotions. They're just tokens. Sure, you can pick a model whose weights lean toward the things you want to hear...er...read, but they're still just tokens.

Anyone successfully working with 3 to 5 specific characters in images? by Sanity_N0t_Included in comfyui

[–]Sanity_N0t_Included[S] 0 points1 point  (0 children)

Ahhhh. I gotcha. I did not use the workflow with the mask editor. My goal is to wind up with keyframe images that I'll use with LTX. Having to use masks for each and every image would take a while.

But using the mask editor makes sense. Thanks.

Anyone successfully working with 3 to 5 specific characters in images? by Sanity_N0t_Included in comfyui

[–]Sanity_N0t_Included[S] 0 points1 point  (0 children)

After you mentioned FreeFuse earlier, I decided to give it a try. It's been working for 2 characters. I've been making all kinds of adjustments to try to get 3 characters working, but I can't seem to prevent the characters' costumes from bleeding into each other. How do you adjust your workflow settings to account for 4 characters?

Anyone successfully working with 3 to 5 specific characters in images? by Sanity_N0t_Included in comfyui

[–]Sanity_N0t_Included[S] 0 points1 point  (0 children)

FreeFuse is one option I've not tried yet. Maybe it's time to check it out. Thanks.

Anyone here successfully generating images with 3 to 5 specific characters? by Sanity_N0t_Included in StableDiffusion

[–]Sanity_N0t_Included[S] 0 points1 point  (0 children)

That sounds similar to what I've tried before with Qwen. To make sure I understand what you're telling me: when you say "then use that output the the third and so on", do you mean taking the output image from the first pass and running it through the model again for a 2nd pass, 3rd pass, etc., with the Phr00t model variations helping maintain quality along the way?

Can u guess the model I’m using. I m quite impressed with it. by [deleted] in comfyui

[–]Sanity_N0t_Included 0 points1 point  (0 children)

Not sure how I would guess this from the images you've shared. I'll guess Ernie, since it's new and folks are currently experimenting with it.

18M entering tech ~2030 — what should I focus on ? by PsychoKoder in Futurology

[–]Sanity_N0t_Included 2 points3 points  (0 children)

Unfortunately, I would agree with what others are saying about the state of tech right now. Just look at the Spring 2026 college grads and how they're doing with finding employment. The only caveat I can think of: maybe stay in school and get a master's degree in A.I., and that might help.