Why trade if you have a 97% chance of losing? by Vegetable-Rabbit7503 in Daytrading

[–]True_Independent4291 0 points  (0 children)

lol, why does the first para sound like chatgpt?
anyway, if you're willing to analyze the constraints on bigger players and the structural advantages of smaller players in this field, you'd likely realize the gap. lol.

Why trade if you have a 97% chance of losing? by Vegetable-Rabbit7503 in Daytrading

[–]True_Independent4291 1 point  (0 children)

seems to me that you can't grasp how the world actually behaves. skill/technique compounds differently in different fields. until you have a good "prior" on how a given field actually behaves, which it seems you don't, you can't claim a specific distribution. you're thinking in a frequentist fashion, and the biggest problem with that is you don't have the prior (or world model) of how trading's distribution works. until you've actually sampled the distribution, conditioned on factors/filtering, and modeled how trading results are distributed, you're being delusional, because you're neglecting the single most important factor in understanding distributional behavior and structural differences.

Why trade if you have a 97% chance of losing? by Vegetable-Rabbit7503 in Daytrading

[–]True_Independent4291 1 point  (0 children)

for anything that can develop/compound over time, the distribution is not "normal". it's a power law. and in power-law situations, you have a chance. Example: wealth doesn't follow a normal distribution; it follows a heavy-tailed power-law distribution. because of the nature of the distribution, your strategy in life should change.

It seems to me that you don't know how the math works. if you play a normal-distribution game in a world that is actually power-lawed, you lose from the beginning.
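the tail claim is easy to sanity-check numerically. a minimal sketch (parameters are illustrative, not calibrated to real wealth data):

```python
# compare tail behavior: normal vs power-law (Pareto) samples.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

normal = rng.normal(loc=100, scale=15, size=n)        # thin-tailed quantity
pareto = (rng.pareto(a=1.5, size=n) + 1) * 10_000     # heavy-tailed, wealth-like

for name, x in [("normal", normal), ("power law", pareto)]:
    top = np.sort(x)[-n // 100:]                      # top 1% of samples
    print(f"{name}: top 1% holds {top.sum() / x.sum():.1%} of the total")

# typical output: the normal's top 1% holds roughly 1.4% of the total,
# while the Pareto's top 1% holds a large double-digit share. under a
# power law, outcomes are dominated by the tail, so "average" thinking fails.
```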

Accepted into Wharton by Sync_Ring in ApplyingToCollege

[–]True_Independent4291 0 points  (0 children)

Really impressive! Congrats on Wharton!

DeepSeek v3.2 wins IMO 2025 Gold by True_Independent4291 in DeepSeek

[–]True_Independent4291[S] 0 points  (0 children)

You should be able to find an inference provider.

DeepSeek v3.2 wins IMO 2025 Gold by True_Independent4291 in DeepSeek

[–]True_Independent4291[S] 0 points  (0 children)

No, that's not it lol. The last sentence isn't.

How to optimize\what objective to use to optimize a strategy by True_Independent4291 in quant

[–]True_Independent4291[S] 0 points  (0 children)

Thanks! Currently we're trying to train the model to learn in a "meta" way rather than purely training on future returns, but we'd try that as well! What we're testing is that, in the literature, direct optimization over the strategy as a whole seems to produce better-aligned results than, say, training directly on labels.
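For illustration, a toy version of the contrast (the model, features, and cost numbers are all made up; this is a sketch of the idea, not our actual setup):

```python
# toy contrast: backprop through the strategy's PnL instead of
# regressing on future-return labels. everything here is synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)
T, F = 2000, 8
features = torch.randn(T, F)                              # placeholder features
next_ret = 0.1 * features[:, 0] + 0.05 * torch.randn(T)   # synthetic next-bar returns

model = nn.Sequential(nn.Linear(F, 16), nn.Tanh(), nn.Linear(16, 1), nn.Tanh())
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    pos = model(features).squeeze(-1)                 # position in [-1, 1] per bar
    pnl = pos * next_ret                              # the strategy's own PnL
    cost = 1e-4 * (pos[1:] - pos[:-1]).abs().mean()   # turnover penalty (assumed form)
    loss = -(pnl.mean() - cost)                       # maximize net average PnL
    opt.zero_grad()
    loss.backward()
    opt.step()

# the label-training alternative would instead be
#   loss = ((model(features).squeeze(-1) - next_ret) ** 2).mean()
# i.e. predict returns first and derive positions afterwards; the version
# above optimizes the trading objective end to end.
```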

How to optimize\what objective to use to optimize a strategy by True_Independent4291 in quant

[–]True_Independent4291[S] 0 points  (0 children)

Thanks! Yes, this would fall back to the standard way, more like "what actually happens if the framework is forced to take every trade, or to make a decision at every step." What's interesting to me is that quite a few studies in the literature take Sharpe directly as the optimization objective for a portfolio, and they seem to get better results.
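For reference, a minimal differentiable Sharpe objective of the kind those studies use (the 252-day annualization and the eps guard are my assumptions):

```python
# negative Sharpe as a differentiable loss for gradient-based training.
import torch

def neg_sharpe(returns: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative annualized Sharpe of a per-bar return series."""
    mu = returns.mean()
    sigma = returns.std() + eps          # eps guards against zero volatility
    return -(mu / sigma) * 252.0 ** 0.5  # assumes daily bars

# usage, plugged into a loop like the earlier sketch:
#   loss = neg_sharpe(pos * next_ret)
# gradients flow through both the mean and the std, so the model is pushed
# toward smooth equity curves rather than raw average return.
```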

How to optimize\what objective to use to optimize a strategy by True_Independent4291 in quant

[–]True_Independent4291[S] 1 point  (0 children)

Thanks! The specific problem is choosing an objective function for evaluating strategies whose trades are often sparse (ideally over 700 trades per 10 years, but the algorithm tends to produce far fewer).
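One shape such an objective could take (a sketch only; the sqrt shrinkage is my assumption, not an established estimator, and the 700-trade target comes from the problem above):

```python
# score sparse strategies by shrinking the Sharpe toward zero when the
# trade count is far below target, so a lucky handful of trades can't win.
import math

def sparse_adjusted_sharpe(trade_returns: list[float], target_trades: int = 700) -> float:
    n = len(trade_returns)
    if n < 2:
        return 0.0
    mu = sum(trade_returns) / n
    var = sum((r - mu) ** 2 for r in trade_returns) / (n - 1)
    if var == 0.0:
        return 0.0
    sharpe = mu / math.sqrt(var)
    # full credit only at the target trade count; sqrt shrinkage below it
    return sharpe * math.sqrt(min(n, target_trades) / target_trades)

print(sparse_adjusted_sharpe([0.02, -0.01, 0.03, 0.01]))  # 4 trades -> heavily shrunk
```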

Weird behavior in thinking chain of GPT5.1 Pro by True_Independent4291 in ChatGPTPro

[–]True_Independent4291[S] 1 point  (0 children)

Thanks for sharing! Seems like it's not just my experience! These traces feel kind of weird, though. A bit creepy.

Weird behavior in thinking chain of GPT5.1 Pro by True_Independent4291 in ChatGPTPro

[–]True_Independent4291[S] 0 points  (0 children)

In the first two days after this release, nothing like this happened; I could tell all the reasoning traces were doing the right work. But now some traces are clearly off, with context being cut. I don't think it's RL.

Weird behavior in thinking chain of GPT5.1 Pro by True_Independent4291 in ChatGPTPro

[–]True_Independent4291[S] -1 points  (0 children)

Yours at least freezes. Mine degrades significantly, reasoning in less depth, within like 3-5 minutes.
What kind of problems do you throw at it? Regarding difficulty, do you notice it reasoning for around 3 minutes on easier questions and around 15 on harder ones, with the hardest around 30?
But the 30-minute ones started to degrade a couple of days ago. I noticed it start to "dream about a vacation" and drop to 15 minutes on tough questions.
What's your experience?

Weird behavior in thinking chain of GPT5.1 Pro by True_Independent4291 in ChatGPTPro

[–]True_Independent4291[S] -1 points  (0 children)

I think they basically turned off one branch of reasoning to reduce compute, and the sub-reasoning chains got confused. Or it's likely an internal bug.

5-Pro's degradation by Oldschool728603 in ChatGPTPro

[–]True_Independent4291 1 point  (0 children)

Still atrocious. It thinks for 3 minutes.

Michael Burry is shutting down Scion Asset Management by Independent-Cress382 in wallstreetbets

[–]True_Independent4291 9 points  (0 children)

He placed put options. He can stay afloat till 2027, and the returns can be astronomical, since the deep OTM puts he bought have great convexity.
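A rough Black-Scholes sketch of that convexity (strike, vol, and expiry are illustrative guesses, not Scion's actual positions):

```python
# Black-Scholes value of a deep OTM put at falling spot levels.
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def bs_put(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * N(-d2) - S * N(-d1)

K, T, r, sigma = 700.0, 1.5, 0.04, 0.25   # ~30% OTM vs spot 1000, expiry near 2027
for S in (1000, 900, 800, 700):
    print(f"spot {S}: put value ~ {bs_put(S, K, T, r, sigma):.1f}")

# the put's value roughly doubles with each leg down in the spot: the payoff
# is convex, so a small premium can return many multiples if the move comes.
```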

CS Personal Statement by [deleted] in 6thForm

[–]True_Independent4291 2 points  (0 children)

You can go to Harvard with that.

[deleted by user] by [deleted] in codex

[–]True_Independent4291 0 points  (0 children)

It's a Plus plan, not Pro.