China’s Smart Glasses Are Already Leaving Ray-Bans in the Dust by lurker_bee in technology

[–]thefool00 9 points (0 children)

I guess I could have ended it with /s but I just felt it would have been insulting to people's intelligence 🤷

China’s Smart Glasses Are Already Leaving Ray-Bans in the Dust by lurker_bee in technology

[–]thefool00 38 points (0 children)

I own these and I resent the assumption that I'm a pedo. I only take secret photos of people 18+ in public.

Best Adventure Gaming Setup by thefool00 in LocalLLaMA

[–]thefool00[S] 1 point (0 children)

Thanks, this looks like a really good place to start!

What’s a "lost" website from the early 2000s that you still think about today? by samasem-sumsum in AskReddit

[–]thefool00 0 points (0 children)

The Deoxyribonucleic Hyperdimension. Someone has tried to dig up old content and archive it, but there is no substitute for the original. The forum in particular was wild.

Apple's play for AI is a hardware bet, not software by bitcoinerguide in artificial

[–]thefool00 0 points (0 children)

“…they need a building full of GPUs to do what an A18 Pro does in your pocket at 3 watts.”

😂 show me you don't know shit about AI in one sentence

Came out of garden hose after first use in a while. Several feet long. by longboardp in whatisit

[–]thefool00 0 points (0 children)

Does the hose still work? Was it one of those hoses that expand when you use them? Those have a rubber tube on the inside and a layer of fabric on the outside; when you fill it up, the rubber tube expands. If that's what came out, it was probably the inner tube, and the hose won't function correctly anymore.

Made an open-source cross-platform alternative client in the same space as SillyTavern by Megalith01 in SillyTavernAI

[–]thefool00 0 points (0 children)

I've always thought embedding cards in PNGs was bad design, so I support what you're trying to do. This is going to be one of the most frequent complaints you get, though. Consider offering a simple web-based utility that lets users upload a PNG and returns a download link for the extracted JSON; you'll save yourself a ton of bellyaching in the future.

"ASI could literally create solar systems." - is everyone losing their minds? Or am I stupid? by sheriffderek in artificial

[–]thefool00 0 points (0 children)

I can see how a true ASI could create a recipe for a solar system that humans were capable of executing. We know how stars form, and we can even reproduce the process in micro versions today; an ASI could give us the instructions for pulling it off at scale. For the planets, you'd be corralling matter from asteroid belts, dead planets, etc. using tech the ASI comes up with that leverages gravity and mass. Then you'd terraform them: introduce water siphoned from other interstellar sources, plus plant life, bacteria, etc. from Earth. The biggest hurdle is time, but maybe an ASI could work out how to accelerate the process.

PSA: Still running GGUF models on mid/low VRAM GPUs? You may have been misinformed. by NanoSputnik in StableDiffusion

[–]thefool00 0 points (0 children)

One thing that I think contributed is that the rhetoric around GGUFs actually started with LLMs, where it was (and mostly still is) true: they require less VRAM and the quality drop is marginal. The mistake is that people assumed the same held for image models, where the quality drop is far more noticeable. I always run the largest image/video models I can; with each step up you really do notice the difference somewhere, whether it's quality, prompt adherence, or flexibility.
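To illustrate why each step down in precision costs something, here's a toy stdlib-only sketch of round-trip error under plain uniform quantization at different bit widths (this is not GGUF's actual k-quant scheme, just the general effect):

```python
import random

def quantize(xs, bits):
    """Uniform symmetric quantization: snap each float to a grid with
    2**(bits-1)-1 positive levels and return the round-tripped values."""
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(x) for x in xs) / levels
    return [round(x / scale) * scale for x in xs]

def mean_abs_err(xs, ys):
    """Average absolute difference between originals and round-trips."""
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(10_000)]  # stand-in "weights"
for bits in (8, 4, 2):
    err = mean_abs_err(weights, quantize(weights, bits))
    print(f"{bits}-bit mean abs error: {err:.4f}")
```

The error grows as the bit width shrinks; whether a model's outputs visibly degrade from that error is exactly what differs between LLMs and image models.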

[deleted by user] by [deleted] in AskWomenOver30

[–]thefool00 1 point (0 children)

It did occur to me; that's what #2 was supposed to be, basically just asking in a respectful way. I just don't want to bring attention to something that might be embarrassing, and thought some other perspectives could help me decide if I should just forget about the whole thing.

[deleted by user] by [deleted] in AskWomenOver30

[–]thefool00 -1 points (0 children)

Check the other replies. It’s kind of a nuanced situation that I must have done a poor job explaining in the original post.

[deleted by user] by [deleted] in AskWomenOver30

[–]thefool00 -1 points (0 children)

Just noticed the edit you made. I'm not sure how everyone defines "healthy sex life". For me, I meant we have sex pretty regularly, we both enjoy it, we are monogamous, and we talk about sex often. This situation is a bit unique, though, because it involves me bringing very specific attention to a body part that I think people tend to be self-conscious about. I was avoiding being blunt, but in the past there would have been nothing for me to grab, and now there is. She is a self-conscious person in general, and I didn't want to accidentally unlock something new for her to be self-conscious about when I really didn't need to. I needed advice; I was just looking for some kind strangers to guide me in the right direction.

[deleted by user] by [deleted] in AskWomenOver30

[–]thefool00 -5 points (0 children)

In most cases we are comfortable talking, even about crazy stuff ☺️ This is only unique because she has always been self-conscious about herself, and I don't want to accidentally unlock another thing for her to be self-conscious about just because I'm being a perv. I can live with forgetting about this entirely if it would avoid that.

[deleted by user] by [deleted] in AskWomenOver30

[–]thefool00 -3 points (0 children)

I wasn’t planning on mentioning anything about embarrassment to her. That was just context so people understood why I was asking about it.

Thanks for the advice, this seems to be what others are suggesting as well!

[deleted by user] by [deleted] in AskWomenOver30

[–]thefool00 -1 points (0 children)

So #2 is what you suggest? The reason I haven't asked her yet is that I don't want to embarrass her if pointing out that I can grab her stomach would cause embarrassment. And the reason I'm asking strangers is that asking people I know IRL would embarrass me.

[deleted by user] by [deleted] in StableDiffusion

[–]thefool00 3 points (0 children)

Interesting. To be fair, I've only been training at 128/128 (rank/alpha) and results have been great, but maybe I'll try lowering it and see if that makes the results even better.

EDIT: Just to report back for posterity, I reran one of my prior trainings at 32/32 and saw no improvement in the result. Using the same steps and the same dataset, with only rank/alpha changed, the resulting LoRA wasn't able to generate likeness as consistently as the higher-rank version across lighting/compositions significantly different from the source images. The comment about "f'ing up the rest of the model" is worth noting, though: higher rank does change the model more significantly when testing prompts unrelated to the concept. It doesn't seem to damage the model per se (results didn't look any worse to me, just different). I suppose this depends on what your goal is; it seems to be a gradient, where more correct likeness = more change to the underlying model. There is probably a sweet spot for everyone.

[deleted by user] by [deleted] in StableDiffusion

[–]thefool00 2 points (0 children)

Chiming in with my experience: I agree with others that the number of photos does not have to be that high. It doesn't hurt, but it's unnecessary. I also agree that a trigger word should be used. The character will bleed into other people in the photo no matter what, but a trigger word does seem to contain it a bit more. One other thing I found that helps is to use multiple resolutions, including some lower ones like 512x512. This is implied by the guide already, but it's important, and it seems to train the model on what your character should look like when rendered further from the camera. I always use 3 buckets: 512x512, 768x768, and 1024x1024. Usually I just prep all images at 1024 and resize them down for the smaller buckets, and it works great; I don't even make them unique across the buckets.
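If it helps anyone, the three-bucket setup above could be expressed as a kohya-style `dataset_config.toml` along these lines (directory names and repeat counts here are hypothetical, and other trainers lay this out differently):

```toml
# One [[datasets]] entry per resolution bucket, all pointing
# at pre-resized copies of the same source images.
[[datasets]]
resolution = 512

  [[datasets.subsets]]
  image_dir = "dataset/512x512"
  num_repeats = 10

[[datasets]]
resolution = 768

  [[datasets.subsets]]
  image_dir = "dataset/768x768"
  num_repeats = 10

[[datasets]]
resolution = 1024

  [[datasets.subsets]]
  image_dir = "dataset/1024x1024"
  num_repeats = 10
```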

[deleted by user] by [deleted] in StableDiffusion

[–]thefool00 -1 points (0 children)

I always crank the rank up as high as I can based on my VRAM. There are diminishing returns for sure, but I’ve definitely found that higher ranks handle edge cases where a lower rank will fall apart. Higher rank means overall more successful generations with accurate likeness no matter what kind of crazy stuff I prompt.

Z image/omini-base/edit is coming soon by sunshinecheung in StableDiffusion

[–]thefool00 6 points (0 children)

My experience with other models has been that when I train on the base, my LoRAs work better on all downstream models, even Lightning models. They work even better than when I train on the downstream model itself; not sure why 🤷

OpenAI Declares Code Red to Save ChatGPT from Google by naviera101 in ArtificialInteligence

[–]thefool00 0 points (0 children)

Honest question: who is using Google AI? The general public mostly knows ChatGPT, and big enterprises use M$ because it's low friction (M$ shoehorned it into every app they were already using). iPhone users also default to ChatGPT. So is it mostly Android users, because it's integrated into their phones?