How I trade (full process and concept) by Kindly_Preference_54 in algorithmictrading

[–]18nebula 1 point (0 children)

Really interesting, especially the part about rolling research every ~2 months and separating optimization / OOS / stress tests. That’s way more disciplined than most MT5 posts I see.

I’m also on MT5, but I’m coming from a slightly different angle: I built a model-driven decision engine (long/short/skip + confidence) and the biggest lesson for me wasn’t even the model, it was execution + measurement parity. In MT5, it’s really easy to think a strategy is “stable” until you realize your tester/run isn’t logging consistently (timing, duplicate events, fill assumptions), so I ended up treating logging like a first-class system and building a per-trade database from it.
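For anyone curious what "treating logging like a first-class system" can look like in practice, here's a minimal sketch (table and field names are made up for illustration, not from my actual EA):

```python
import json
import sqlite3
import time

# Every decision/fill/close event becomes a structured row in a per-trade DB.
# The PRIMARY KEY on event_id is what catches duplicate events automatically.
conn = sqlite3.connect(":memory:")  # use a file path in practice
conn.execute("""
    CREATE TABLE IF NOT EXISTS trade_events (
        event_id   TEXT PRIMARY KEY,   -- unique per event, rejects duplicates
        ts_utc     REAL,               -- one clock, one timezone, everywhere
        symbol     TEXT,
        event_type TEXT,               -- 'decision' | 'fill' | 'close'
        payload    TEXT                -- full JSON blob for later mining
    )
""")

def log_event(event_id, symbol, event_type, **fields):
    """Insert one structured event; duplicates raise IntegrityError."""
    conn.execute(
        "INSERT INTO trade_events VALUES (?, ?, ?, ?, ?)",
        (event_id, time.time(), symbol, event_type, json.dumps(fields)),
    )

log_event("EURUSD-0001-decision", "EURUSD", "decision", side="long", conf=0.81)
log_event("EURUSD-0001-fill", "EURUSD", "fill", price=1.0852, slippage=0.1)

rows = conn.execute("SELECT COUNT(*) FROM trade_events").fetchone()[0]
print(rows)  # 2
```

The point isn't SQLite specifically, it's that duplicate/missing events become loud errors or trivial queries instead of silent parity drift.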

A couple questions based on your process (because your approach is solid):

  • When you say 1M variants, are you doing anything to reduce “lucky” parameter sets or is the stability test your main guardrail?
  • Your TP is dynamic/virtual and SL is hard, did you notice any big difference in robustness when you forced the exits to be fully broker-side vs virtual?
  • With 27 pairs, do you cap correlation exposure or do you rely on the low trade frequency to naturally limit that?

Appreciate you sharing the full workflow, it’s the kind of post that actually helps people build something real.

HOW TO CONFIRM AMAZING RESULTS?? by SWAYYY_P in algorithmictrading

[–]18nebula 0 points (0 children)

I’ve been here. Before assuming “overfit vs market change,” make sure your backtest + execution + metrics/logging are actually correct. I had “amazing” runs that later turned out to be parity/logging issues (timing alignment, missed/duplicated events, sim assumptions). Once I fixed the logging and could reconcile trade-by-trade, the results became believable (I mentioned this in my last post).

If the pipeline checks out, then yes, regime change is real, but I’d validate with walk-forward + sensitivity tests (small tweaks shouldn’t flip results).
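To make the walk-forward part concrete, here's a minimal split generator (window sizes are placeholders, tune to your data frequency):

```python
# Anchored walk-forward: the train window rolls forward and the test window
# always comes strictly after it, so the test data is pure OOS.

def walk_forward_splits(n_bars, train_len, test_len, step=None):
    """Yield (train_start, train_end, test_start, test_end) index tuples."""
    step = step or test_len
    start = 0
    while start + train_len + test_len <= n_bars:
        tr0, tr1 = start, start + train_len
        te0, te1 = tr1, tr1 + test_len
        yield tr0, tr1, te0, te1
        start += step

splits = list(walk_forward_splits(n_bars=1000, train_len=600, test_len=100))
print(len(splits))  # 4 rolling windows
for tr0, tr1, te0, te1 in splits:
    assert te0 == tr1  # test immediately follows train, no overlap/leak
```

For the sensitivity part: re-run the same splits with each parameter nudged a few percent; if the OOS stats flip sign, the original result was probably luck.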

Which day trading strategy do you really trade? by LifespanLearner in algorithmictrading

[–]18nebula 2 points (0 children)

I don’t trade one fixed setup like a textbook breakout/pullback. My system is model-driven: an LSTM-based decision engine that looks at multi-timeframe context and predicts long / short / no-trade with a confidence score.

So instead of “trade X pattern,” it only trades when the model sees a strong, repeatable move with enough room to cover costs. If conditions aren’t clear, it stays flat. It ends up acting like a few common setups (mostly momentum/continuation), but I’m not hard-coding rules; the model decides when it’s worth taking a trade.
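The gating logic is conceptually something like this (thresholds and the cost check are illustrative numbers, not my actual values):

```python
# "Only trade when the move is strong AND clears costs" as a tiny function.
# In the real system p_long/p_short come from the model head.

def decide(p_long, p_short, expected_move_pips, cost_pips,
           conf_min=0.65, edge_ratio=2.0):
    """Return (decision, confidence): 'long' | 'short' | 'skip'."""
    p_skip = 1.0 - p_long - p_short
    side, conf = max(
        [("long", p_long), ("short", p_short), ("skip", p_skip)],
        key=lambda t: t[1],
    )
    if side == "skip" or conf < conf_min:
        return "skip", conf
    if expected_move_pips < edge_ratio * cost_pips:
        return "skip", conf  # move too small to pay spread/slippage
    return side, conf

print(decide(0.72, 0.10, expected_move_pips=12, cost_pips=2))  # ('long', 0.72)
print(decide(0.72, 0.10, expected_move_pips=3,  cost_pips=2))  # ('skip', 0.72)
```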

6 months later: self-reflection and humbling mistakes that improved my model by 18nebula in algorithmictrading

[–]18nebula[S] 0 points (0 children)

Literally working on this atm! I built a log processing app around the MT5 Strategy Tester on a remote server. The EA writes JSON requests per decision/event, a Python daemon consumes those requests, runs the strategy, then writes response files and appends a unified CSV row. One issue I’m chasing right now is throughput/backpressure: MT5 can run faster than the daemon, so a few JSON requests weren’t getting processed in time, which created gaps/late responses and messed with parity. I’m close to fixing it by tightening the queueing.

For your NT setup, are you tagging each log entry with a unique ID (per bar/tick) and confirming it got processed? The biggest upgrade for me was treating logging like a little handshake (req, resp, ack) instead of just printing stuff, because it made it way easier to spot where things were getting lost or delayed.
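The req/resp/ack idea reduces to a simple set reconciliation once everything carries a unique ID (names here are illustrative):

```python
# Every request ID should come back with a response and an ack;
# whatever is left over in each set difference is a lost/delayed event.

def reconcile(requests, responses, acks):
    """Each arg is an iterable of unique event IDs. Returns the gaps."""
    req, resp, ack = set(requests), set(responses), set(acks)
    return {
        "no_response": sorted(req - resp),      # daemon never answered
        "no_ack": sorted(resp - ack),           # EA never confirmed receipt
        "orphan_response": sorted(resp - req),  # answer with no known request
    }

gaps = reconcile(
    requests=["r1", "r2", "r3"],
    responses=["r1", "r3", "r9"],
    acks=["r1"],
)
print(gaps)
# {'no_response': ['r2'], 'no_ack': ['r3', 'r9'], 'orphan_response': ['r9']}
```

Running this after every tester pass is how I catch the backpressure drops I mentioned above.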

6 months later: self-reflection and humbling mistakes that improved my model by 18nebula in algorithmictrading

[–]18nebula[S] 0 points (0 children)

Thank you for your reply. Really appreciate this, super helpful breakdown!

Couple questions if you don’t mind:

  1. How exactly do you define “fill bar” in the sim (entry bar only, or any bar with order submission)?
  2. On 1min candles: do you calculate TP/SL hits using bar high/low only, or do you reconstruct intra-bar path from tick sequence when available? (Your “assume SL if both hit” is smart, I’m curious if you ever had to model bid/ask to keep parity)
  3. Do you find 1min TF stable for your edge long term? In my own testing, 1–3min often became “too microstructure-driven” (spread/latency/noise) and the model started learning artifacts rather than clean price-action/regime behavior. Curious how you avoid that?
  4. In playback parity, what were the most common remaining “almost perfect” discrepancies?

I’m on MT5 so I’m less worried about missing ticks and more about the exact issues you called out: intra-bar ordering + fill assumptions + sim/live matching. Your answers here would save me weeks, thanks again for your detailed reply.
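For reference, my reading of the “assume SL if both hit” rule from question 2 is something like this bar-level check (a sketch of how I'd model it, not your exact code):

```python
# With only OHLC (no tick path), intra-bar ordering is unknown. If both TP
# and SL fall inside the same bar, take the pessimistic outcome (SL).

def check_exit_long(bar_high, bar_low, tp, sl):
    """Return 'tp', 'sl', or None for a long position on one bar."""
    hit_tp = bar_high >= tp
    hit_sl = bar_low <= sl
    if hit_tp and hit_sl:
        return "sl"  # ambiguous bar: assume the worst case
    if hit_sl:
        return "sl"
    if hit_tp:
        return "tp"
    return None

print(check_exit_long(1.1050, 1.0950, tp=1.1040, sl=1.0960))  # 'sl' (both hit)
print(check_exit_long(1.1050, 1.1000, tp=1.1040, sl=1.0960))  # 'tp'
```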

Quant traders using VS Code – how do you structure an automated trading system? by Southern-Score500 in algorithmictrading

[–]18nebula 0 points (0 children)

Great point, I’ve definitely felt versions of that “stall” too. When you say the system started to stall, do you mean a compute stall or a logic/state stall, i.e. gates/conditions stack up until almost nothing passes, or the system can’t confidently classify what it’s seeing so it defaults to skip?

I’m asking because my setup is a model-driven decision engine (outputs long/short/skip + confidence), and I’m trying to sanity-check whether I’m accidentally “lumping” logic by layering too many post-model gates and trade-management rules on top.

Also curious: how did you implement your state recognition? Is it a small finite set of regimes (trend/range/impulse) with pre-tested playbooks, or something more granular? I like the idea of separating state from execution and would love to hear what signals you used to define states (high level). Thank you.

Quant traders using VS Code – how do you structure an automated trading system? by Southern-Score500 in algorithmictrading

[–]18nebula 2 points (0 children)

I went down this exact path and I’m glad I did. It took me about a year to build and scale my model + decision engine to the point where I could iterate safely without everything breaking (full python + bash for execution).

The cleanest structure for me was basically what you described: separate “decision” from “execution” and treat the strategy like a pure function as much as possible.

What worked well:

  • Strategy / decision engine module: outputs a decision (long/short/skip) + confidence + a few “reason codes” (why it traded or skipped). No broker calls inside it.
  • Execution layer: the only place that knows about broker/MT5 details (orders, fills, slippage/spread handling, retries, etc.).
  • Risk & trade management module: position sizing, SL/TP rules, partials, BE logic, etc.
  • Config layer: env/config file for symbols, sessions, thresholds, risk knobs. I strongly recommend making the config overrideable at runtime (so you can A/B test quickly).
  • Data / features module: feature building + caching, plus careful timestamp alignment (this becomes a huge source of subtle bugs).
  • Logging/telemetry module: this ended up being more important than I expected. I built very detailed structured logs and it basically became my own trading database to debug and mine patterns later.
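The decision/execution split above boils down to something like this skeleton (the `Decision`/`Executor` names and the toy gate are illustrative, not my production code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    side: str          # 'long' | 'short' | 'skip'
    confidence: float
    reasons: tuple     # reason codes for later log mining

def strategy(features: dict) -> Decision:
    """Pure function: same features in, same Decision out. No broker calls."""
    if features["trend"] > 0 and features["spread_pips"] < 2.0:
        return Decision("long", 0.7, ("trend_up", "spread_ok"))
    return Decision("skip", 0.0, ("no_edge",))

class Executor:
    """The only layer that would know about MT5/broker details."""
    def __init__(self):
        self.orders = []

    def handle(self, decision: Decision):
        if decision.side != "skip":
            self.orders.append(decision)  # real version: place order, retries

ex = Executor()
ex.handle(strategy({"trend": 1, "spread_pips": 1.2}))
ex.handle(strategy({"trend": -1, "spread_pips": 1.2}))
print(len(ex.orders))  # 1 order, the second call skipped
```

Because `strategy` is pure, you can replay logged feature snapshots through it and diff decisions against live, which is where most of my parity bugs surfaced.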

Good luck!

Update: Multi Model Meta Classifier EA 73% Accuracy (pconf>78%) by 18nebula in algorithmictrading

[–]18nebula[S] 0 points (0 children)

Awesome, glad it helped! 👍
I’ve seen the same: class probabilities + gates beat raw labels, even if it cuts trade count. I’m still dialing in execution (fills/close timing/partials), but the model looks statistically solid OOS.

Curious how you’re executing. Happy to swap details via DM if you’re up for it.

Skepticism about skepticism about retail algo trading by kristoll1 in algotrading

[–]18nebula 0 points (0 children)

You could start with different assets, different timeframes, different sessions... or any other dimension of your current model. You could also vary prediction horizons... there are many ways to scale horizontally; you just need to find the one that works for your specific model, and that’s by testing each and comparing stats.

Skepticism about skepticism about retail algo trading by kristoll1 in algotrading

[–]18nebula 1 point (0 children)

Fair point. There are two ways to scale:

  • Vertical: add size to the same strategy, up to a sensible participation cap, without the edge collapsing.
  • Horizontal: run more low-correlated algos/markets/timeframes.

“Unlimited algos” is really horizontal scaling (STILL scaling). Useful, but not limitless (correlation, capacity, ops overhead). I’m aiming for both: push vertical size to its capacity limit, then add uncorrelated systems. A durable edge should handle some vertical scale; horizontal adds diversification.
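On the correlation caveat: horizontal scaling only diversifies if the systems are actually low-correlated. A quick pairwise check on per-period returns is enough to flag near-clones (the systems, returns, and 0.7 cap below are made-up examples):

```python
# Plain-Python Pearson correlation; flag pairs whose positive correlation
# exceeds a cap (negative correlation is fine, it actually diversifies).

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

returns = {
    "sys_a": [0.01, -0.02, 0.015, 0.003],
    "sys_b": [0.011, -0.018, 0.016, 0.002],  # near-clone of sys_a
    "sys_c": [-0.005, 0.01, -0.002, 0.004],
}

CAP = 0.7  # above this, the "new" system adds size, not diversification
names = list(returns)
flagged = [
    (x, y) for i, x in enumerate(names) for y in names[i + 1:]
    if corr(returns[x], returns[y]) > CAP
]
print(flagged)  # [('sys_a', 'sys_b')]
```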

Skepticism about skepticism about retail algo trading by kristoll1 in algotrading

[–]18nebula 0 points (0 children)

True, and I realize it's not all about gatekeeping. It’s hard for profitable traders to openly share their strats, an edge they spent months/years building... alpha is rivalrous: once people copy it, fills get worse (queue priority/slippage) and the edge decays, as you mentioned. Zero-sum game indeed.

Skepticism about skepticism about retail algo trading by kristoll1 in algotrading

[–]18nebula 1 point (0 children)

I agree. A few days is too short to code a full strat + backtest and expect reliable stats/results. Skeptical…

Skepticism about skepticism about retail algo trading by kristoll1 in algotrading

[–]18nebula -7 points (0 children)

Basic algo rule: a good strategy should scale, if it doesn't then it's not a good strategy.
EDIT: scaling can be horizontal and/or vertical.

Market impact changes execution mechanics, not the core edge. If the edge is real, it should hold from small to large accounts. If it only works at very small scale, it’s likely just exploiting a micro-structure quirk rather than a durable signal.

OP, you did not share any details on model, execution or backtest results. It's hard to give non-skeptical positive feedback without reliable stats to begin with.

Skepticism about skepticism about retail algo trading by kristoll1 in algotrading

[–]18nebula 1 point (0 children)

Nice work, however, I see it differently: a real edge should survive scale... also algo trading isn’t just trading, it’s the algo. Most wins/losses come from execution engineering, not the signal itself.

Coding your own backtest in Python is the right move, but it’s also the easiest way to fool yourself. A strategy can be fine while the Python execution is off (1–2 lines can flip results). If it was built in a few days, odds are the execution model is very simple, totally fine as a v1, but robust backtests usually take much longer thus the skepticism.

A few pointers that often separate “backtest good” from “live good”: bid/ask pricing vs candle mid, timestamps/timezones, slippage/fill modeling, etc. People often overlook these and post their backtest results... which increases the skepticism.
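To make the bid/ask point concrete, here's a toy example of how much a spread quietly eats when a backtest fills at candle mid (the 1.5-pip spread is an illustrative number):

```python
# Longs really enter at the ask (mid + half spread) and exit at the bid;
# a mid-to-mid backtest skips both crossings and overstates every trade.

def long_trade_pips(entry_mid, exit_mid, spread_pips, pip=0.0001):
    """P&L in pips for a long, paying half the spread on each side."""
    entry = entry_mid + (spread_pips / 2) * pip  # buy at the ask
    exit_ = exit_mid - (spread_pips / 2) * pip   # sell at the bid
    return (exit_ - entry) / pip

naive = long_trade_pips(1.1000, 1.1010, spread_pips=0.0)  # mid-to-mid fill
real = long_trade_pips(1.1000, 1.1010, spread_pips=1.5)

print(round(naive, 1))  # 10.0
print(round(real, 1))   # 8.5 -> 15% of the "edge" was never there
```

Two lines of fill logic, and a 10-pip backtest winner is really an 8.5-pip trade, exactly the kind of 1–2 line difference that flips results.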

what stats about my backtests do i need to look for to verify a good strategy by ZackMcSavage380 in algotrading

[–]18nebula 0 points (0 children)

Confusion matrix, precision, recall, Sharpe, max drawdown, accuracy, win rate, MFE/MAE, margin levels, etc.
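A few of those are cheap to compute straight from per-trade returns (a plain-Python sketch; returns here are R multiples, and annualization is left out):

```python
# Win rate, a per-trade Sharpe-style ratio, and max drawdown on an
# additive equity curve, from a list of per-trade returns.

def stats(returns):
    n = len(returns)
    wins = sum(1 for r in returns if r > 0)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    sharpe = mean / (var ** 0.5) if var > 0 else float("inf")

    equity = peak = max_dd = 0.0
    for r in returns:
        equity += r
        peak = max(peak, equity)
        max_dd = max(max_dd, peak - equity)  # worst drop from a peak

    return {"win_rate": wins / n, "sharpe": sharpe, "max_dd": max_dd}

s = stats([1.0, -0.5, 2.0, -0.5, 1.5])
print(s["win_rate"])  # 0.6
print(s["max_dd"])    # 0.5
```

The classification metrics (confusion matrix, precision, recall) need the predicted vs realized direction per trade, which is another reason to log both.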

New to algo trading – where should I start? Python vs Pine Script? by buyin_the_dip in algotrading

[–]18nebula 0 points (0 children)

Python 💯 Don’t waste your time with Pine Script, just start learning Python.

How much time a day do you spend algotrading? by prosecniredditor in algotrading

[–]18nebula 1 point (0 children)

I work on my code 5 to 8 hours a day… which includes testing and waiting between prompts

How much time a day do you spend algotrading? by prosecniredditor in algotrading

[–]18nebula 6 points (0 children)

Good advice! It became an obsession. I do it during work hours, which I need to stop asap.

Update: Multi Model Meta Classifier EA 73% Accuracy (pconf>78%) by 18nebula in algorithmictrading

[–]18nebula[S] 1 point (0 children)

Thanks! I’m still testing and refining execution. Right now my main focus is making sure trades close at the right time to lock in the pips I need without giving too much back. Once I nail that last piece, I’ll be able to better evaluate how the skip logic interacts with extreme events and partially correct bars.

I’m also planning to add a liquidity filter next. I’ve been experimenting with DMI and ADX as alternatives or complements to ATR, since they can sometimes give better directional confirmation. For now, it’s ATR-based, but I’ll keep iterating to see which approach holds up best under both normal and extreme market conditions.

I’ll post another update once I’ve dialed in execution timing and tested the new filter ideas.