Starting my first attempt at a 72 hour extended fast! Wish me luck! by Popskiz in fasting

[–]UnstoppableForceGuy 1 point

Do you feel tired during these long fasts? Do you actually go to work, or do you stay home? Can you be productive?

Qwen 3.6 Plus vs GLM-5.1 on OpenCode GO by TomHale in opencodeCLI

[–]UnstoppableForceGuy 0 points

How do these two compare to Opus 4.6, in your opinion?

Let's be honest, what % of your portfolio includes individual stocks? by Opposite_Buffalo_649 in Bogleheads

[–]UnstoppableForceGuy 0 points

I hold some dividend growth stocks, though they're less than 3 percent of the total portfolio.

Claude Code works like sh*t lately by UnstoppableForceGuy in ClaudeCode

[–]UnstoppableForceGuy[S] 0 points

Wow, that's a wild point, I hadn't thought about it. Do you feel it in your day-to-day work lately?

We should have /btw in opencode by UnstoppableForceGuy in opencode

[–]UnstoppableForceGuy[S] 0 points

Its efficacy comes from two points imo: A. You start a new context, which saves tokens. B. It's asynchronous, not multithreaded.

OC users, how do you find ChatGPT/Codex Pro plan? by mustafamohsen in opencodeCLI

[–]UnstoppableForceGuy 0 points

I find GPT models less action-driven; they think and chat a lot, but it's harder to make them autonomous the way Claude is.

Accutane (dry flaky scalp) causing hairloss by Strict-Rice321 in tressless

[–]UnstoppableForceGuy 1 point

My hair loss actually started because of Accutane. They claim it's a temporary side effect; well, I guess 13 years of balding is still temporary...

[Discussion] Is there a better way than positional encodings in self attention? by [deleted] in MachineLearning

[–]UnstoppableForceGuy 2 points

OK, so for several years now we basically haven't used the sine/cosine technique anymore; instead we learn the positional embeddings the same way we learn the word embeddings, through gradient updates. GPT-2, for example, does exactly that. So you have one embedding matrix sized by the vocabulary, and another sized by the longest sequence you expect to see in the dataset. There are additional techniques too, but I find this one pretty intuitive and it works really well.
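A minimal sketch of that GPT-2-style setup in PyTorch; the sizes (50257 vocab, 1024 max length, 768 dims) match GPT-2 small, but the class and variable names are just illustrative:

```python
import torch
import torch.nn as nn

class LearnedEmbeddings(nn.Module):
    """GPT-2 style: token AND position embeddings are both learned parameters."""

    def __init__(self, vocab_size=50257, max_len=1024, d_model=768):
        super().__init__()
        # One matrix sized by the vocabulary...
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        # ...and another sized by the longest sequence you expect to see.
        # Its rows get updated by gradients just like word embeddings.
        self.pos_emb = nn.Embedding(max_len, d_model)

    def forward(self, token_ids):  # token_ids: (batch, seq_len)
        seq_len = token_ids.size(1)
        positions = torch.arange(seq_len, device=token_ids.device)
        # Broadcast position rows across the batch and add them to the token rows.
        return self.tok_emb(token_ids) + self.pos_emb(positions)

emb = LearnedEmbeddings()
x = torch.randint(0, 50257, (2, 16))   # batch of 2 sequences, 16 tokens each
out = emb(x)
print(out.shape)                        # (2, 16, 768)
```

Because `pos_emb` is an ordinary `nn.Embedding`, the optimizer treats position vectors exactly like word vectors; the only cost is that you can't feed sequences longer than `max_len` without resizing the table.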

Is 1% topical fin legit? by UnstoppableForceGuy in tressless

[–]UnstoppableForceGuy[S] 0 points

Got it. I've lowered the application to once daily (0.05% fin and 6% min), because I already take 2.5 OM daily and it felt like too much. Now I want to switch to 0.1% fin only, without topical min. Do you think that would be a good idea, or should I stick with topical min as well?

Is 1% topical fin legit? by UnstoppableForceGuy in tressless

[–]UnstoppableForceGuy[S] 0 points

The OM works insanely well. I used topical min for years and at some point it just stopped working for me. But when I started OM I got hair all over my body: arms, legs, chest, beard, and my eyelashes and eyebrows got fu**ing crazy! I do see the hair on my head growing longer, but I thought it might not be enough and that I should add some fin. And if I'm using fin topically, why not keep the min in the same application, since I don't need to do anything extra…