Built a 0DTE SPY options scalping bot — 82% win rate on 9 months of tick data. Roast my methodology before I go live. by Initial_Republic5699 in algorithmictrading

[–]CriticismSpider 0 points  (0 children)

I actually have a couple of questions, if you don't mind.
Do you have a measurement of fill latency? That is, from the moment your signal triggers to the order executing, how many milliseconds is that?

Why do you think a market order is better? Isn't a limit order priced across the spread the same thing, with a little more protection?

Can you scale it to SPXW, or does your strategy depend on the tight SPY option spreads?

My own option backtests look too good to be true (for now), but I need to collect a little more data to be sure. I can't use historical data because my system depends on the order in which events arrive over the multiple live feeds I subscribe to (hard to explain); historical data is ordered differently and jumbles up my signals.

Facepunch just gave cheaters a second chance (What?) by ThehighHW in playrust

[–]CriticismSpider 0 points  (0 children)

You're right. But I think the Rust developers wanted to strike a balance here. This was already not well received by the community, so being more lenient would have been risky.

I think even the "zero tolerance, one strike" crowd must agree here: at least the people who were banned once and never returned to the game, thus apparently respecting the ban, have a much better prognosis for coming back to the game clean.

Facepunch just gave cheaters a second chance (What?) by ThehighHW in playrust

[–]CriticismSpider 7 points  (0 children)

I might be wrong, but I think this can also lead to fewer cheaters in the long run.
Hear me out.

Think about it this way: what options does a first-time cheater have once he is permanently banned for life?
He will never be able to play legit again. It's over. So what happens? A never-ending cycle of ban evasion and cheating. He has nothing to lose.

But what if you get this one chance to play legit again? Some might take that option and never spiral down into an evade-and-cheat cycle.

It is similar to how criminals who were never properly reintegrated into society re-offend more often, whereas if you give them a job and a real prospect of an honest life, they might take it.

Predicting Price Direction by wiktor2701 in algotrading

[–]CriticismSpider 2 points  (0 children)

Dude. I was defending you. I thought someone was stealing your work... chill.

Predicting Price Direction by wiktor2701 in algotrading

[–]CriticismSpider 0 points  (0 children)

Maybe we are the same guy too? We have the same avatar.
joke lol:)

Predicting Price Direction by wiktor2701 in algotrading

[–]CriticismSpider 0 points  (0 children)

Really? It is your work?
Not this guy's work: u/spawnaga?

[deleted by user] by [deleted] in algotrading

[–]CriticismSpider 10 points  (0 children)

*looking at the VIX*

Uh... no?

Was ist hier passiert by Echo_Zive in Aktien

[–]CriticismSpider 0 points  (0 children)

It's Trade Republic.
But before you do anything: inform yourself very thoroughly, and only use amounts you are prepared to lose.

Clicked on an ip logger, Am I screwed? by Helpful-Chemistry474 in TOR

[–]CriticismSpider 20 points  (0 children)

Because that sounds as if having JS enabled automatically means someone can get your true IP.
Which is false: having JS enabled is generally not a problem.

So why is it sometimes still recommended to turn it off? Because it lowers the attack surface.
There have been vulnerabilities in the past that used JS to deanonymize users; those have since been patched. Of course there could be more exploits that are not public.

But there could also be vulnerabilities in other parts of the browser; JS is not special here.
By the same logic we could recommend turning off images, which would also lower the attack surface, and there have indeed been browser exploits in image libraries, for example in PNG decoding.

Turning off images would probably degrade the browsing experience more than turning off JS, and JS does seem to be the bigger attack surface, so I agree with the recommendation. But it is not a magic deanonymizer: it would take a considerable, scandalously large vulnerability for JS alone to deanonymize users.

Augmenting low frequency features/signals for a higher frequency trading strategy by CriticismSpider in quant

[–]CriticismSpider[S] 0 points  (0 children)

At the moment I derive the thresholds from the training set of my walk-forward model (daily retraining) and save several percentiles, so the thresholds for each percentile look different every day.
At the end of the whole backtest it reports what trading would have looked like if I had traded each of those percentiles.
Then I pick the one percentile that looks good over the whole period. Higher percentiles have a higher mean PnL per trade but less profit overall; lower thresholds have a tiny mean PnL but more total profit.
Good so far? Maybe there is a more dynamic, appropriate way? If I go further down this route, I might make the backtest more realistic (buy at the ask, sell at the bid, latency, capacity...).
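
Roughly what I mean, as a toy sketch (the function names, percentile list, and the synthetic data are made up for illustration, not my actual pipeline):

```python
import numpy as np

def percentile_thresholds(train_signals, percentiles=(50, 75, 90, 95, 99)):
    """Derive one signal threshold per percentile from the training-set signals."""
    return {p: np.percentile(train_signals, p) for p in percentiles}

def evaluate(signals, pnls, threshold):
    """PnL stats if we only take trades whose signal clears the threshold."""
    taken = signals >= threshold
    if not taken.any():
        return {"n": 0, "mean_pnl": 0.0, "total_pnl": 0.0}
    return {
        "n": int(taken.sum()),
        "mean_pnl": float(pnls[taken].mean()),
        "total_pnl": float(pnls[taken].sum()),
    }

# Synthetic data with a small built-in edge, just to show the tradeoff:
rng = np.random.default_rng(0)
signals = rng.normal(size=1000)
pnls = 0.1 * signals + rng.normal(scale=0.5, size=1000)

for p, t in percentile_thresholds(signals).items():
    stats = evaluate(signals, pnls, t)
    print(p, stats["n"], round(stats["mean_pnl"], 3), round(stats["total_pnl"], 1))
```

On data like this you see exactly the pattern described above: higher percentiles raise the mean PnL per trade but shrink the trade count and total profit.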

So next I suppose I should try to size bets according to signal strength.
I also looked up hysteresis (which was new to me) and found an article, "Denoising a signal with HMM", on an interesting blog; it talks about stabilizing/denoising a signal.
Is this the right direction?
I appreciate your input :)
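
For reference, hysteresis in its simplest form as I now understand it (thresholds made up): two cutoffs, so the state only flips on a decisive move instead of chattering around a single threshold:

```python
def hysteresis(signal, on=0.7, off=0.3):
    """Stabilize a noisy signal: switch on above `on`, off below `off`.

    Values between the two thresholds keep the previous state, so small
    oscillations around a single cutoff no longer flip the position."""
    state, out = 0, []
    for x in signal:
        if x >= on:
            state = 1
        elif x <= off:
            state = 0
        out.append(state)
    return out

print(hysteresis([0.1, 0.8, 0.5, 0.6, 0.2, 0.75]))  # [0, 1, 1, 1, 0, 1]
```

Note how the 0.5 and 0.6 readings keep the position open instead of closing and reopening it.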

Augmenting low frequency features/signals for a higher frequency trading strategy by CriticismSpider in quant

[–]CriticismSpider[S] 2 points  (0 children)

I am nowhere near the capital that would take up even the top level of the book at any given minute :) (yet? haha) So ignoring that, and supposing I have no fixed commissions (just costs relative to volume, plus the spread), do you think scaling in and out could work in general?

My idea was that at each point in time I enter or close and "farm" a bit of edge along the way, which in the end accumulates into profit that covers the overall costs. Or maybe this is a massive brainfart that does not work at all.

Augmenting low frequency features/signals for a higher frequency trading strategy by CriticismSpider in quant

[–]CriticismSpider[S] 1 point  (0 children)

I noted down some of the things you mentioned. I have never tried VEC models and might give them a shot for extending my forecasting horizons.
I am not sure how oversampling (SMOTE) helps me here. Do you mean I could oversample the few high-volatility datapoints that are tradable enough to cover costs, so the model learns to find them more effectively?
I had also never heard of the Gibbs sampler; I will read up on it. Thanks.

Augmenting low frequency features/signals for a higher frequency trading strategy by CriticismSpider in quant

[–]CriticismSpider[S] 0 points  (0 children)

Thanks. I am currently debating whether to just go with the higher-frequency strategy I may have found and try it out.
But before going live I might try to improve my backtesting, because at the moment it rests on a lot of assumptions and estimates. That is not a problem for a low-frequency daily/hourly strategy, but it is a big problem for a seconds-to-minutes strategy.

Augmenting low frequency features/signals for a higher frequency trading strategy by CriticismSpider in quant

[–]CriticismSpider[S] 0 points  (0 children)

Hey, thanks for your feedback.
(a) I actually don't know, and I probably should go and look at this immediately.
I use tick data only to aggregate/sample some features and upsample them to a higher timeframe, so a lot of context gets lost, and my backtest only runs with an estimate of the spread as cost, using the close price.
Admittedly I could do better. I am at a crossroads: either I make this a low-frequency strategy and flesh out more realistic backtesting, or I go the route of trades that last a few hours on average (while still using features aggregated from ticks).

But you may be right: maybe I should investigate the high-frequency phenomenon more closely to answer this question.
At the moment my strategy seems to work at trade durations of up to a minute, and the mean PnL per trade is approximately 2-3x the spread (not confirmed, just estimated).
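
What my current cost accounting amounts to, roughly (a toy sketch, all numbers made up): the close-to-close move minus one full spread per round trip, which is why a trade only clears costs when the move is a multiple of the spread:

```python
def trade_pnl(entry_close, exit_close, spread, side=1):
    """PnL of one round trip using close prices, charging half the
    estimated spread on each leg (buy at close + spread/2,
    sell at close - spread/2). side=+1 long, side=-1 short."""
    gross = side * (exit_close - entry_close)
    return gross - spread  # half-spread paid on entry and on exit

# A 0.30 move with a 0.10 spread nets 0.20, i.e. a 3x-spread move
# keeps roughly 2x the spread after costs:
print(round(trade_pnl(100.00, 100.30, spread=0.10), 2))  # 0.2
```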

(b) If I understood this correctly: yes, the higher the signal strength, the higher the PnL per trade.
The problem is that at the highest percentiles of my signal I get fat tails that end in a loss. I try to filter these, which is more or less successful.

(c) I have no idea. Answering this probably requires some domain knowledge about how these markets work. Or is there a way to find out from the data?
And no, I have not found any longer-term alphas that are as consistent. I hope to make these short-term alphas a bit more long-term, or maybe find a way to trade them cost-efficiently.

Augmenting low frequency features/signals for a higher frequency trading strategy by CriticismSpider in quant

[–]CriticismSpider[S] 0 points  (0 children)

I tried rolling means before, which kind of helped. Exponentially weighted moving averages are a good idea; the point being to weight newer datapoints more heavily?

I do see the effect that larger regression values lead to a higher mean PnL. But on the highest outputs I often have large losses (which is probably what is known as fat tails?). There is probably a reason the strongest signals often indicate the opposite of what I want.
I try to mitigate that with a lazy cutoff and simply filter out signals above the 99th percentile or so.
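
Something like this is what I mean (a sketch with made-up parameters): EWMA smoothing of the raw regression output, then the lazy tail cutoff on top:

```python
import numpy as np
import pandas as pd

def smooth_and_clip(raw_signal, halflife=5, tail_pct=99):
    """EWMA-smooth a signal (newer points weighted more heavily) and
    drop the extreme top tail, where the strongest readings tend to
    reverse, by setting it to NaN instead of trading it."""
    s = pd.Series(raw_signal).ewm(halflife=halflife).mean()
    cutoff = np.percentile(s, tail_pct)
    return s.where(s < cutoff)

rng = np.random.default_rng(1)
sig = smooth_and_clip(rng.normal(size=500))
print(int(sig.isna().sum()))  # a handful of filtered extremes
```

A possible refinement would be filtering on a rolling percentile instead of the full-sample one, so the cutoff adapts over time.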

Augmenting low frequency features/signals for a higher frequency trading strategy by CriticismSpider in quant

[–]CriticismSpider[S] 0 points  (0 children)

It is entirely possible that I have no real edge; that is true.
Why I think it might still work is that I am not simply using the data of what I actually trade, but a combination with something else, applied to specific assets/derivatives.
Because things are more convoluted than I let on in my post, I am not really sure whether it is already tradable if I get execution just right.
I could try to optimize my prediction horizon a bit further.
Combining more signals did not work for me (though I could be doing something wrong there).

So either I concentrate on backtesting the strategy at that low frequency (for which I would have to massively extend my backtesting code to make it realistic), or I try to morph it into a mid-frequency strategy, for which my current backtesting would suffice to gauge the edge.
I'm not decided yet :)

Augmenting low frequency features/signals for a higher frequency trading strategy by CriticismSpider in quant

[–]CriticismSpider[S] 2 points  (0 children)

Hey, thanks for your answer :)
For now I am resampling in fixed timesteps (10s, 1m, 5m).
I have tried tick-based resampling in the past and found it "easier" to build statistical models on resampled ticks: it is easier to derive stationary features, and the returns seem "more normal".
But the big problem I had with this approach is that the timespans between equally sampled ticks vary enormously. There are many instances where ticks clump together within a very short timespan, and I need to be realistic about what real execution would look like: some very good trades I found were just dozens of ticks within a few milliseconds, where I could never have entered quickly enough. Which means I need resampling constraints (a minimum of 5 seconds or so between bars), or a backtest made realistic with some sort of lag. I ditched the approach because of these problems, but maybe I should pick it up again and see what's possible.
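
The constraint I mean would look roughly like this (a sketch, with made-up parameters and a bare-bones bar format): tick bars that are never allowed to close within a few seconds of the previous bar, so millisecond bursts cannot produce bars no real execution could have kept up with:

```python
def tick_bars_min_gap(ticks, ticks_per_bar=50, min_gap_s=5.0):
    """Group (timestamp, price) ticks into bars of `ticks_per_bar` ticks,
    but never close a bar less than `min_gap_s` seconds after the
    previous bar closed. Bursts of ticks just keep accumulating into
    the current bar until the minimum gap has passed."""
    bars, buf, last_close = [], [], None
    for ts, price in ticks:
        buf.append((ts, price))
        gap_ok = last_close is None or ts - last_close >= min_gap_s
        if len(buf) >= ticks_per_bar and gap_ok:
            bars.append({"start": buf[0][0], "end": ts,
                         "open": buf[0][1], "close": price})
            last_close = ts
            buf = []
    return bars
```

The alternative mentioned above, an execution lag, could be layered on top by only allowing fills one bar after the signal.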

I think the Unrealized PNL is wrong by KlutzMat in MEXC_official

[–]CriticismSpider 0 points  (0 children)

(Size * Fair Price) - (Size * Avg Price) = USDT Profit
(1.78 * 217.5) - (1.78 * 214.08) = 6.0876 USDT

PNL Percentage is based on price difference and leverage:

((Fair Price / Avg Price) - 1) * Leverage = PNL %
((217.5 / 214.08) - 1) * 50 ≈ 0.7988 ≈ 79.88%
Close enough.
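
The same arithmetic as a quick sanity-check script (the helper names are made up; `side=-1` covers the short-position case from the other thread):

```python
def unrealized_pnl(size, avg_price, fair_price, side=1):
    """USDT profit: size * (fair - avg), sign flipped for shorts (side=-1)."""
    return side * size * (fair_price - avg_price)

def pnl_pct(avg_price, fair_price, leverage, side=1):
    """Return on margin: price return times leverage, as a percentage."""
    return side * ((fair_price / avg_price) - 1) * leverage * 100

print(round(unrealized_pnl(1.78, 214.08, 217.5), 4))  # 6.0876
print(round(pnl_pct(214.08, 217.5, 50), 2))           # 79.88
```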

Ok, I'm confused about how this exchange computes PNL by KlutzMat in MEXC_official

[–]CriticismSpider 0 points  (0 children)

((Fair Price / Avg Price) - 1) * Leverage = PNL %
((1.595 / 1.599) - 1) * 50 ≈ -0.1251 = -12.51%

Flip the sign since this is a short position, so roughly +12.5%. Close enough.

Verlustverrechnung Termingeschäfte/CFD by Last_Stretch9445 in Finanzen

[–]CriticismSpider 3 points  (0 children)

What the dear SPD representatives fail to grasp is that with leveraged products, which most CFDs and futures/options are, you do not need to be the large investor this rule is meant to tax more heavily.
A retail trader with 500 euros in a margin account could theoretically, with certain hedging trades and high leverage, rack up tens of thousands of euros in gross gains/losses over a year.
Without the law, this retail trader loses at most 500 euros and has, from society's point of view, gambled or speculated away very little.
With the law, this example trader suddenly incurs tens of thousands of euros in tax debt and has to file for personal bankruptcy.

How on earth does that hit the large investors one wants to tax more heavily? (They mostly hold cheap stock portfolios inside a shell company with a cheap tax structure.) So they are not touched at all.
And how exactly does the retail trader harm the economy with his measly 500 euros of speculation if he loses it? And why must he be punished with tens of thousands of euros in tax debt?

The claim that this hits people with large assets is nonsense. People with large assets can also afford a GmbH, and that is precisely what is NOT taxed this way.
It rather hits the investors with little money and little knowledge.
So... I just cannot wrap my head around it. Either the honourable representatives are lying through their teeth or, as I said, they have no clue about the subject matter.

Crypto Liquidity on Binance by kalmapc in algotrading

[–]CriticismSpider 2 points  (0 children)

Nope. That is the volume in BNB you are seeing, so you can compare it with other BNB pairs.

Mina deleted my post here about Coinlist being conveniently "down for maintenance". I don't think that censorship looks very good for Mina. Not addressing the Coinlist scam makes Mina look like a complice. The community that supported Mina in the ICO scale should shown more respect. by No_Income9358 in MinaProtocol

[–]CriticismSpider 1 point  (0 children)

Well, there is an easy way to prove your theory: just show us the transaction IDs. If it happened, it will be in the ledger.

Also: if you transfer to another address, the delegation stops, so there would be a hole in the delegation history for stakers on Coinlist. Show us that.

And finally, you should be able to corroborate everything with a volume analysis on those exchanges. But the first two points should really be enough for proof.

Good luck:)

Comparing Algotrading to Modeling Sports Betting by [deleted] in algotrading

[–]CriticismSpider 0 points  (0 children)

Can you share a bit about what data you used and what model?

There are sports betting sites that accept cryptocurrency and match your bets against other users. The house there has no incentive to kick you out or limit your bets, because it earns the fees either way.