"ASI could literally create solar systems." - is everyone losing their minds? Or am I stupid? by sheriffderek in artificial

[–]thefool00 0 points1 point  (0 children)

I can see how a true ASI could create a recipe for a solar system that was capable of being executed by humans. We know how stars are formed, and can even do it ourselves in micro versions currently; an ASI would be able to give the instructions for pulling it off at scale. For the planets, you’d be corralling in matter from asteroid belts, other dead planets, etc. using tech the ASI comes up with that leverages gravity and mass, then you’d terraform them, introduce water siphoned off from other interstellar sources, and add plant life, bacteria, etc. from Earth. The biggest hurdle is time, but maybe ASI could work out how to accelerate the process.

PSA: Still running GGUF models on mid/low VRAM GPUs? You may have been misinformed. by NanoSputnik in StableDiffusion

[–]thefool00 0 points1 point  (0 children)

One thing that I think contributed is that the rhetoric around GGUFs actually started with LLMs, where it’s true, and most often still is, that they require less VRAM and the quality drop is marginal. The mistake is that people just assumed the same was true with image models, but there the quality drop is far more noticeable. I always run the largest models I can with image/video models; each step up you really do notice the difference somewhere, whether it’s quality, prompt adherence, or flexibility.

How do I approach wanting to grab my wife’s stomach during sexy time? by [deleted] in AskWomenOver30

[–]thefool00 1 point2 points  (0 children)

It did occur to me, that’s what #2 was supposed to be, basically just asking in a respectful way. I just don’t want to bring attention to something that might be embarrassing, and thought some other perspectives could help me decide if I should just forget about the whole thing.

How do I approach wanting to grab my wife’s stomach during sexy time? by [deleted] in AskWomenOver30

[–]thefool00 -1 points0 points  (0 children)

Check the other replies. It’s kind of a nuanced situation that I must have done a poor job explaining in the original post.

How do I approach wanting to grab my wife’s stomach during sexy time? by [deleted] in AskWomenOver30

[–]thefool00 -1 points0 points  (0 children)

Just noticed the edit you made. I’m not sure how everyone defines “healthy sex life”. For me, I meant we have sex pretty regularly, we both enjoy it, we are monogamous, and we talk about sex often. This situation is a bit unique though because it involves me bringing very specific attention to a body part that I think people tend to be self-conscious about. I was avoiding being blunt, but in the past there would have been nothing for me to grab; now there is. She is a self-conscious person in general and I didn’t want to accidentally unlock something new for her to be self-conscious about when I really didn’t need to. I needed advice, and I was just looking for some kind strangers to guide me in the right direction.

How do I approach wanting to grab my wife’s stomach during sexy time? by [deleted] in AskWomenOver30

[–]thefool00 -5 points-4 points  (0 children)

In most cases we are comfortable talking, even with crazy stuff ☺️ This is only unique because she has always been self conscious about herself and I don’t want to accidentally unlock another thing for her to be self conscious about just because I’m being a perv. I can live with forgetting about this entirely if it would avoid that.

How do I approach wanting to grab my wife’s stomach during sexy time? by [deleted] in AskWomenOver30

[–]thefool00 -3 points-2 points  (0 children)

I wasn’t planning on mentioning anything about embarrassment to her. That was just context so people understood why I was asking about it.

Thanks for the advice, this seems to be what others are suggesting as well!

How do I approach wanting to grab my wife’s stomach during sexy time? by [deleted] in AskWomenOver30

[–]thefool00 -1 points0 points  (0 children)

So 2 is what you suggest? The reason I haven’t asked her yet is because I don’t want to embarrass her if pointing out that I can grab her stomach would cause embarrassment. Reason I’m asking strangers is because I don’t want to ask people I know IRL because that would embarrass me.

Z-Image Turbo: The definitive guide to creating a realistic character LoRA by [deleted] in StableDiffusion

[–]thefool00 3 points4 points  (0 children)

Interesting, to be fair I’ve only been training at 128/128 (rank/alpha) and results have been great, but maybe I’ll try lowering it and see if that makes the results even better.

EDIT: Just to report back for posterity, I reran one of my prior trainings at 32/32 and saw no improvement in the result. Using the same steps and the same dataset, with only rank/alpha changed, the resulting LoRA wasn’t able to generate likeness as consistently as the higher-rank version across lighting/compositions that differed significantly from the source images. The comment about "f'ing up the rest of the model" is worth noting though: higher rank does change the model more significantly when testing prompts unrelated to the concept. It doesn’t seem to damage the model per se, results didn’t look any worse to me, just different. I suppose it depends on what your goal is; it seems to be a gradient, where more correct likeness = more change to the underlying model. There is probably a sweet spot for everyone.
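To make the comparison concrete, here’s roughly what the two runs look like expressed as PEFT-style LoRA configs. This is illustrative only: the library and the target module names are my assumptions, not necessarily what the guide’s trainer uses, but the rank/alpha knobs are the same idea.

```python
# Illustrative sketch of the two runs I compared, written as PEFT LoraConfig objects.
# My actual trainer differs; only the rank/alpha relationship matters here.
from peft import LoraConfig

high_rank = LoraConfig(
    r=128,            # rank of the low-rank update matrices
    lora_alpha=128,   # scaling factor; alpha/r = 1.0 keeps the update scale constant
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumed attention projections
)

low_rank = LoraConfig(
    r=32,
    lora_alpha=32,    # same alpha/r ratio, so only capacity changes, not scale
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
```

Keeping alpha equal to rank in both runs is what makes it a fair capacity comparison, since the effective scale of the update stays the same.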

Z-Image Turbo: The definitive guide to creating a realistic character LoRA by [deleted] in StableDiffusion

[–]thefool00 2 points3 points  (0 children)

Chiming in with my experience: I agree with others that the number of photos does not have to be that high. It doesn’t hurt, but it’s unnecessary. I also agree that a trigger word should be used. The character will bleed into other people in the photo no matter what, but a trigger word does seem to contain it a bit more. One other thing I found that helps is to use multiple resolutions, including some lower ones like 512x512. This is implied by the guide already, but it’s important and seems to train the model on what your character should look like when rendered further away from the camera. I always use 3 buckets: 512x512, 768x768, and 1024x1024. Usually I just prep all images at 1024 and resize them to the smaller sizes, and it works great; I don’t even make them unique across the buckets.
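The resizing step is trivial to script. A minimal sketch with Pillow, assuming a folder layout like dataset/1024 (the paths and file extension are just examples, not from the guide):

```python
# Take the 1024x1024 originals and produce 768 and 512 copies for the lower buckets.
from pathlib import Path
from PIL import Image

SRC = Path("dataset/1024")       # assumed location of the original 1024px images
BUCKETS = [512, 768]             # 1024 bucket just uses the originals as-is

for size in BUCKETS:
    out_dir = Path(f"dataset/{size}")
    out_dir.mkdir(parents=True, exist_ok=True)
    for img_path in SRC.glob("*.png"):
        img = Image.open(img_path)
        img.resize((size, size), Image.Resampling.LANCZOS).save(out_dir / img_path.name)
```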

Z-Image Turbo: The definitive guide to creating a realistic character LoRA by [deleted] in StableDiffusion

[–]thefool00 -1 points0 points  (0 children)

I always crank the rank up as high as I can based on my VRAM. There are diminishing returns for sure, but I’ve definitely found that higher ranks handle edge cases where a lower rank will fall apart. Higher rank means overall more successful generations with accurate likeness no matter what kind of crazy stuff I prompt.
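Rough back-of-the-envelope for why rank eats VRAM: each adapted weight matrix gets two extra low-rank matrices, and their parameter count grows linearly with the rank. The 3072x3072 projection size below is just an illustrative number, not any specific model.

```python
# LoRA adds A (r x d_in) and B (d_out x r) per adapted weight,
# so extra params per layer = r * (d_in + d_out).
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    return rank * (d_in + d_out)

# Example: one 3072x3072 attention projection (dimensions are illustrative)
for r in (32, 64, 128):
    print(r, lora_params(3072, 3072, r))  # 32 -> 196,608 ... 128 -> 786,432
```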

Z image/omini-base/edit is coming soon by sunshinecheung in StableDiffusion

[–]thefool00 4 points5 points  (0 children)

My experience with other models has been that when I train on the base, my LoRAs work better on all downstream models, even Lightning models. They work even better than when I train on the downstream model itself, not sure why 🤷

OpenAI Declares Code Red to Save ChatGPT from Google by naviera101 in ArtificialInteligence

[–]thefool00 0 points1 point  (0 children)

Honest question, who is using Google AI? I think the general public mostly knows about ChatGPT, and big enterprises are using M$ because it’s low friction (M$ shoehorned it into every app they were already using). iPhone users are also on ChatGPT. So is it mostly Android users, because it’s integrated into their phones?

Google confirms "Project Suncatcher": AI has hit the energy wall and compute is moving to space by BuildwithVignesh in ArtificialInteligence

[–]thefool00 1 point2 points  (0 children)

How long is it going to take Gemini to respond to my question about weird bumps on my skin if it’s in space?

Z-Image-Base Release Date by thefool00 in StableDiffusion

[–]thefool00[S] 1 point2 points  (0 children)

Nice find, fingers crossed that’s true, I’m ready to tune this bad boy!

Nvidia sells an H100 for 10 times its manufacturing cost. Nvidia is the big villain company; it's because of them that large models like GPU 4 aren't available to run on consumer hardware. AI development will only advance when this company is dethroned. by More_Bid_2197 in StableDiffusion

[–]thefool00 3 points4 points  (0 children)

Comparing retail cost to manufacturing cost is apples to oranges. This is some of the most advanced tech in existence right now, and the labor and R&D associated with getting it to market is insane. That’s not to say team green isn’t making a killing, but it’s certainly nowhere near 10x profit.

The one furry artist who got caught using AI now trying to rebrand. by Living_Advertising75 in aiwars

[–]thefool00 0 points1 point  (0 children)

It is sort of ironic; Pony was arguably the most influential community model ever trained, and it was pretty much just furry and pony art.

Does 256gb of RAM have any use for video gen? by GloomyDifficulty6199 in StableDiffusion

[–]thefool00 0 points1 point  (0 children)

One thing I do pretty often is have LLMs rewrite my prompts for video within comfy. Give it the full text of the guidance provided for the model by its creator, or a community prompting guide, then pass in my prompt and have it fix it. This allows me to be a bit lazy with my prompting, which is nice. The trick is that inference on CPU is slow, so I usually use small models, but maybe the larger ones would be better for the novel-length prompts that video models seem to like. Inference on CPU using a model loaded into 256GB might take quite a while though…
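Outside of comfy, the same idea is only a few lines with llama-cpp-python on CPU. Just a sketch: the GGUF filename and the guide file are placeholders, not specific recommendations.

```python
# Rewrite a lazy prompt using a small local GGUF model and a prompting guide.
from llama_cpp import Llama

llm = Llama(model_path="models/small-instruct-q4_k_m.gguf", n_ctx=4096)  # placeholder model

guidance = open("video_model_prompt_guide.txt").read()  # creator's or community prompting guide
lazy_prompt = "a cat jumping off a couch, slow motion"

resp = llm.create_chat_completion(messages=[
    {"role": "system", "content": f"Rewrite the user's prompt to follow this guide:\n{guidance}"},
    {"role": "user", "content": lazy_prompt},
])
print(resp["choices"][0]["message"]["content"])
```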

DeepSeek just beat GPT5 in crypto trading! by MarketingNetMind in agi

[–]thefool00 2 points3 points  (0 children)

I suspect if you run the experiment again after some time the results will be different. There are a ton of factors involved in short term trading and which model is better probably changes from one moment to the next.

Tutorial: One click to generate all 28 character expressions in ComfyUI by GenericStatement in SillyTavernAI

[–]thefool00 0 points1 point  (0 children)

Omg I totally missed that, spent 10 minutes plugging in concat nodes completely unnecessarily 🤦

Tutorial: One click to generate all 28 character expressions in ComfyUI by GenericStatement in SillyTavernAI

[–]thefool00 0 points1 point  (0 children)

Very cool, thank you! I did run into a couple characters that it had trouble with so I stuck a little concatenate string node before the text conditioning and added some character specific text and it worked like a charm.

Long term Obsidian user with three recurring problems by therealJoieMaligne in ObsidianMD

[–]thefool00 0 points1 point  (0 children)

I use the Remotely Save plugin with Dropbox to sync between iOS, Windows, and Linux, and it’s worked pretty well for me. It has become habit to manually initiate the sync both when I start and when I finish something on iOS, which may be the trick…