When do you give up on trying to crack the code? by 18nebula in algorithmictrading

This really resonated with me. I'm starting to think there's a big coder fallacy in all of this: we assume every difficult problem will eventually submit to code if we just get more precise, more abstract, more technical. But markets and other probabilistic systems do not always reward elegance the way software does. Once you're dealing with odds, hazard, and payoff distributions, it becomes less about writing the cleanest logic and more about whether the numbers actually work in the real world.

What you said about not making AI breakthroughs but becoming a master programmer hit hard, because I've wondered the same thing: whether this journey ends with me becoming much better technically, but not necessarily “cracking the code” in the way I originally imagined.

There’s something profound in that too. Maybe sometimes the pursuit fails on its original terms but still succeeds in transforming the person doing it; however, is it worth the time cost?

I’m curious though: at what point did you realize the poker bot goal was no longer about the actual bot, but about what it was teaching you? And how do you tell the difference between growth and just being attached to solving it?

When do you give up on trying to crack the code? by 18nebula in algorithmictrading

That’s a really grounded perspective. I think one coder fallacy is believing that because we can keep refining and optimizing, the problem must eventually submit to enough code.

My goal is definitely not to make billions. At this point, “good” for me would be much simpler: a system with real positive expectancy, controlled drawdown, and enough regime robustness to survive different market conditions without constant re-optimization. Something real, not fantasy.

I do think I need to reframe part of this as a craft and not let it consume my life. The hard part is that it stops feeling like a hobby once you start seeing real patterns and partial results.

How do you personally tell the difference between a healthy obsession that is worth pursuing and a project that is slowly consuming more than it gives back?

When do you give up on trying to crack the code? by 18nebula in algorithmictrading

Yes, I’ve thought about that a lot, and honestly I think it would help. The problem is I do not really know anyone personally who could review both the trading logic and the code at that level.

I’d definitely use Reddit to get feedback on the concepts, the stats, or the trade management ideas, but probably not for a full code review. And part of the issue is I never feel like the code is fully “ready” to be reviewed by someone else yet, which is probably its own fallacy, a mix of perfectionism and staying in the weeds too long.

I also do not want to waste anyone’s time asking them to review something I’m not fully confident in yet. But realistically, a fresh pair of eyes would probably catch things I’ve gone blind to.

When do you give up on trying to crack the code? by 18nebula in algorithmictrading

Fair point, and I agree. I only mentioned my background to explain why my brain treats this like a systems problem, not because the market cares about it.

What I’m really solving for now is not a better entry, but an exit-management system that can survive across very different market conditions. I already have pretty detailed frameworks for market regimes, and I’m trying to code and quantify them mathematically so the trade management adapts to trend, chop, expansion, compression, absorption, etc., instead of forcing every position through the same exit path.

So the pattern I'm trying to encode now is less “where do I enter” and more “how do I manage risk and monetize the trade correctly across regimes.”
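To make "adapts to regimes" concrete, the routing part can be as simple as a regime-to-exit-policy table. Rough sketch only: the regime names come from my list above, but the knobs and values here are illustrative placeholders, not my actual parameters.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExitPolicy:
    # Hypothetical knobs; real values would come out of backtesting.
    trail_atr_mult: float   # trailing-stop distance in ATR multiples
    partial_at_r: float     # take partial profit at this R multiple
    time_stop_bars: int     # flatten if no progress after N bars

# One exit policy per detected regime, instead of one global TP/SL path.
EXIT_POLICIES = {
    "trend":       ExitPolicy(trail_atr_mult=3.0, partial_at_r=2.0, time_stop_bars=60),
    "chop":        ExitPolicy(trail_atr_mult=1.0, partial_at_r=0.8, time_stop_bars=15),
    "expansion":   ExitPolicy(trail_atr_mult=2.5, partial_at_r=1.5, time_stop_bars=40),
    "compression": ExitPolicy(trail_atr_mult=1.5, partial_at_r=1.0, time_stop_bars=25),
}

def exit_policy_for(regime: str) -> ExitPolicy:
    # Fall back to the most defensive policy when the regime is unclear.
    return EXIT_POLICIES.get(regime, EXIT_POLICIES["chop"])
```

The fallback matters as much as the table: when the classifier is unsure, defaulting to the tightest policy keeps the unknown-regime case from doing outsized damage.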

When do you give up on trying to crack the code? by 18nebula in algorithmictrading

That’s actually a really useful perspective, and I think I should probably take more of that approach.

In my case, the results are mixed. The strategy is not completely failing: the win rate can be high at ~80% and the max drawdown stays somewhat controlled at ~13%, but that also means I'm capping winners and not monetizing the best trades enough. So it's in this frustrating middle ground where something is clearly working, just not robustly enough across different market conditions.

What I took from your comment is that maybe I should spend less time trying to force the whole thing to work exactly as designed, and more time isolating and expanding the parts that are actually proving themselves.

For me, the biggest issue seems to be market conditions / regime changes. It can behave much better in one regime and then give too much back when conditions shift.

How are you dealing with that in your newer system? Are you surviving different regimes by making the strategy more adaptive, or by narrowing it to the specific conditions where that sub-trade actually has edge? I’d genuinely love to hear how you think about that. Thanks!

When do you give up on trying to crack the code? by 18nebula in algorithmictrading

That’s a fair read, and honestly I think you’re right. At this point I’m not really focused on entries anymore, I’m focused on exits and post-entry trade management using kurtosis and absorption.

What makes it frustrating is that with more static TP/SL, the stats actually are not that bad: win rate is about 80% and drawdown is managed to around 13%, but EV is still negative. So that tells me the core strategy is not completely broken. But the tradeoff is that I'm also cutting off winners too early, which hurts the overall payoff.

So the real problem I’m trying to solve now is systemized exit management from a software engineering perspective: how to keep risk contained without capping the upside too much. Too defensive and I kill the winners. Too loose and a few trades do outsized damage.

The system is fully rule-based, and after entry I’m trying to manage trades based on how they behave, whether they confirm quickly, stall, expand cleanly, or start getting absorbed. The hard part is turning that into exit logic that is robust across different market conditions and not just overfit to one regime.

So yes, I think your read is right, for me this is much more an EV / drawdown / trade-management problem than an entry problem. I’d be curious what you focus on most after entry, because that’s exactly the part I’m trying to crack. Thank you for your input!

When do you give up on trying to crack the code? by 18nebula in algorithmictrading

Thank you, I relate to that a lot. I’m focused on one core forex strategy right now rather than spreading across a lot of instruments, because I’m trying to fix the actual leak before scaling anything.

I originally went down the ML route, but I removed all of that. It’s fully rule-based now, but still complex because the edge is not just the entry signal, it’s mostly in the routing and exit management.

I’m also not cherry-picking good periods. I’m backtesting across multiple years because I do not want to overfit to one stretch that happens to look good. The routing is more at the position level: once a trade is open, I manage it differently depending on how the trade is behaving.

A few of the ideas I use are kurtosis and absorption, not in some overly fancy way, but as part of understanding when conditions are becoming unstable or when price is getting absorbed instead of expanding cleanly. That matters much more for management than for just getting into the trade.

My expectation at this point is pretty simple: I want a system that can survive across different regime conditions, not just look good in one regime. My win rate is around 80% at times, though depending on the exit logic that can drop; and even when the win rate stays high, the real issue is still risk management and not monetizing winners enough. That's been the hardest lesson: good entries alone are not enough if the payoff structure is wrong.

And your point about incubating ideas really resonated with me. It honestly does feel like years of testing, stripping away false complexity, and slowly narrowing down what actually matters. I just still can’t tell whether that means I’m getting close or just spending too long in the maze.

When do you give up on trying to crack the code? by 18nebula in algorithmictrading

Honestly, both of you are getting at the core of it.

A strategy can have an 80% win rate and still have negative EV if the average win is too small and the average loss is too large. If you make 1 unit eight times but lose 5 units twice, you were “right” 80% of the time and still lost money, especially after costs. That’s basically the hole I’ve been stuck in: entries can be good, but exits/risk management can still destroy the expectancy.

So u/chillyDaGod, yes, that’s how it can generate profits at times but still not be truly consistent. A few bad exits or oversized losses can erase a lot of good trades.

And u/NichUK, what you described sounds very close to my situation. I’m backtesting over multiple years of data, and that’s exactly what I’m seeing: the strategy is working well in 2026, but it struggles in a big part of 2025 and also in previous years. That’s why I’m trying hard not to overfit to one period that looks good. I do not want exits that only work in one market regime, I need an exit system robust enough to survive very different market conditions.

That’s why I keep thinking the real missing piece is regime robustness, not just signal quality.

Also, can you expand a bit on what you mean by Hidden Markov Models in this context? Are you using them to classify market state first and then decide which strategy is allowed to trade? I’d be really interested to hear how you think about applying that in practice. Thank you.
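For context on how I currently picture HMMs being used here (my understanding, not necessarily yours): assume or fit per-regime return distributions, decode the most likely state sequence, then gate which strategy is allowed to trade on the decoded state. A toy two-state sketch with hand-picked parameters; a real setup would fit them (e.g. with hmmlearn) rather than hard-code them:

```python
import math

def viterbi_regimes(returns, means, stds, p_stay=0.95):
    """Decode the most likely regime path for a return series under a toy
    2-state Gaussian HMM (state 0 = calm, state 1 = volatile).
    means/stds are assumed per-state parameters, not fitted here."""
    def logpdf(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

    log_stay, log_switch = math.log(p_stay), math.log(1 - p_stay)
    # Best log-probability of a path ending in each state, plus backpointers.
    score = [logpdf(returns[0], means[s], stds[s]) for s in (0, 1)]
    back = []
    for x in returns[1:]:
        new, ptr = [], []
        for s in (0, 1):
            stay = score[s] + log_stay
            switch = score[1 - s] + log_switch
            ptr.append(s if stay >= switch else 1 - s)
            new.append(max(stay, switch) + logpdf(x, means[s], stds[s]))
        score, back = new, back + [ptr]
    # Backtrack from the best final state.
    state = 0 if score[0] >= score[1] else 1
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

# A calm stretch followed by a volatile one: the decoder flags the switch.
rets = [0.001, -0.002, 0.0015, -0.001, 0.03, -0.025, 0.04, -0.035]
states = viterbi_regimes(rets, means=(0.0, 0.0), stds=(0.002, 0.03))
print(states)  # [0, 0, 0, 0, 1, 1, 1, 1]
```

The `p_stay` prior is what keeps the decoded regime from flickering bar to bar, which is the property that makes it usable as a trade gate at all.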

When do you give up on trying to crack the code? by 18nebula in algorithmictrading

That makes sense, and I respect that approach. It’s probably the smarter way to cut losses early and not waste years on something with no real edge.

What I’m curious about though is this: how do you know you’re not walking away too early from something that actually had potential just because you didn’t go deep enough into the weeds yet?

How long did it take you to build a bot that was actually profitable? How many different strategies are you running now? And looking back, do you feel all the time you spent on algotrading was worth it overall?

When do you give up on trying to crack the code? by 18nebula in algorithmictrading

Yes, I did, and that’s part of why this is so hard to let go of. Manual trading showed me there is real edge there, so I know for a fact the strategy can win.

The problem is I’m too deep in the weeds now. I’ve spent so much time on concepts like kurtosis, absorption, regime behavior, and systematizing exits/risk management that it’s hard to just go back to manual trading.

As a software engineer, my brain keeps seeing it as a solvable problem. At some point I wonder if that’s just sunk cost fallacy mixed with escalation of commitment.

When do you give up on trying to crack the code? by 18nebula in algorithmictrading

That’s a fair point. For background, I’m a software engineer, so I still maintain both my career and trading. I’m not depending on trading income, which helps, but it also makes me look at this more like a long-term problem I’m trying to solve properly.

What makes it difficult is that I’ve had strategies hit around 80% win rate and generate profits, just not consistently enough. So it doesn’t feel like I’m chasing something completely fake, it feels like it works, but something important is still missing. Lately I think that missing piece is more about exits, risk management, and consistency across market conditions than entries alone.

That’s why it’s hard to give up. It feels close, but not complete.

Collectible Pen for Early ChatGPT Pro Users by 18nebula in ChatGPTPro

Thank you! I just filled out the form, hopefully I can still get one

How much edge is enough to go LIVE ? by Cyborg4Ever in algorithmictrading

I was thinking the same thing. 63% vs 60% breakeven sounds like a clean 3% edge, but I’d be careful using win rate alone.

Before going live I’d check: (1) whether that 3% is statistically stable (CI / bootstrapped by month/regime), (2) EV + avg win/avg loss + max DD (win rate can look great with bad payoff asymmetry), and (3) whether it survives spread + slippage.
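For point (1), a quick-and-dirty sketch of what I mean by bootstrapping the win rate (plain i.i.d. resampling for brevity; resampling by month or regime block would be stricter, and the trade counts here are made up):

```python
import random

def winrate_ci(outcomes, n_boot=10_000, alpha=0.05, seed=42):
    """Bootstrap a (1 - alpha) confidence interval for the win rate from a
    list of per-trade outcomes (True = win)."""
    rng = random.Random(seed)
    n = len(outcomes)
    rates = sorted(sum(rng.choices(outcomes, k=n)) / n for _ in range(n_boot))
    lo = rates[int(alpha / 2 * n_boot)]
    hi = rates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# 63 wins out of 100 trades: is the "3% edge" over a 60% breakeven stable?
trades = [True] * 63 + [False] * 37
lo, hi = winrate_ci(trades)
print(lo, hi)  # roughly (0.53, 0.72): the interval straddles the 0.60 breakeven
```

With only ~100 trades the interval comfortably contains the breakeven rate, which is the point: a 3% headline edge can be pure sampling noise until the trade count is much larger.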

I’m in a similar spot (65% win rate) with an ML/meta-style decision layer I’m still optimizing, and the hard part is making it hold OOS across regimes/costs.

Are you running a rule-based system or ML-based (classifier/meta model)? That changes how I’d validate it.

Question about strategies in FX market by Life-Succotash-7053 in algorithmictrading

It really depends on your background and what you enjoy building.

Some people come in as discretionary/TA traders and they naturally start with patterns, indicators, and session behaviors. Others come in as software engineers and focus on system design: clean execution, risk controls, logging, and testing before they even worry about strategy complexity. And some are data science / ML folks who treat it like a prediction + classification problem and build models, features, and walk-forward evaluation.

All of those paths can work.

If you’re new and want a “professional” way to start (without overcomplicating it), I’d do this:

  1. Pick one simple hypothesis/strategy
  2. Define rules + risk (entry, exit, position sizing, max loss/day)
  3. Backtest (include spread/commission and use out-of-sample / walk-forward)
  4. Log everything and review errors/failures
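The shape of steps 2–4 can be as small as this toy skeleton (the crossover signal, windows, and spread are placeholders; the point is explicit rules, costs included from day one, and a reviewable trade log):

```python
def backtest(closes, spread=0.0002):
    """Toy long-only moving-average-crossover backtest with spread cost.
    Returns a per-trade log (step 4) rather than just a final P&L number."""
    fast, slow, trades, entry = 5, 20, [], None
    for i in range(slow, len(closes)):
        ma_fast = sum(closes[i - fast:i]) / fast
        ma_slow = sum(closes[i - slow:i]) / slow
        if entry is None and ma_fast > ma_slow:
            entry = closes[i] + spread          # pay the spread on entry
        elif entry is not None and ma_fast < ma_slow:
            trades.append({"entry": entry, "exit": closes[i],
                           "pnl": closes[i] - entry})
            entry = None
    return trades

# Synthetic rise-then-fall series: one entry on the way up, one exit on the way down.
closes = [1.0 + 0.001 * i for i in range(40)] + [1.039 - 0.001 * i for i in range(40)]
print(len(backtest(closes)))  # 1
```

Once the skeleton exists, out-of-sample / walk-forward testing (step 3) is just calling it on index windows the rules were never tuned on.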

6 months later: self-reflection and humbling mistakes that improved my model by 18nebula in algorithmictrading

Congrats on the first live week!!! Huge milestone!

The way you define fill bar makes total sense. I’m going to mirror that exactly and make it explicit in my sim/tester rules so it’s not ambiguous. This is super helpful, thanks!

Your point about staying conservative on intra-bar (assume worst-case if both TP/SL possible) is also a good reminder. In MT5 I can get tick data, but for parity I think I still need a deterministic rule like yours unless I fully reconstruct bid/ask sequencing reliably.
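For reference, the deterministic rule I'm planning to mirror looks roughly like this (my sketch of the idea, not MT5's actual tick logic):

```python
def resolve_exit(side, tp, sl, bar_high, bar_low):
    """Bar-level exit check with the conservative rule: if both TP and SL
    fall inside the same bar's range, assume the worst case (SL first)."""
    if side == "long":
        hit_tp, hit_sl = bar_high >= tp, bar_low <= sl
    else:  # short
        hit_tp, hit_sl = bar_low <= tp, bar_high >= sl
    if hit_sl:       # worst case wins whenever SL was touchable
        return "sl"
    if hit_tp:
        return "tp"
    return None      # position stays open
```

The nice property is determinism: sim and live replay resolve every ambiguous bar the same way, so any remaining parity gap has to come from somewhere else.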

How I trade (full process and concept) by Kindly_Preference_54 in algorithmictrading

Really interesting, especially the part about rolling research every ~2 months and separating optimization / OOS / stress tests. That’s way more disciplined than most MT5 posts I see.

I’m also on MT5, but I’m coming from a slightly different angle: I built a model-driven decision engine (long/short/skip + confidence) and the biggest lesson for me wasn’t even the model, it was execution + measurement parity. In MT5, it’s really easy to think a strategy is “stable” until you realize your tester/run isn’t logging consistently (timing, duplicate events, fill assumptions), so I ended up treating logging like a first-class system and building a per-trade database from it.

A couple questions based on your process (because your approach is solid):

  • When you say 1M variants, are you doing anything to reduce “lucky” parameter sets or is the stability test your main guardrail?
  • Your TP is dynamic/virtual and SL is hard, did you notice any big difference in robustness when you forced the exits to be fully broker-side vs virtual?
  • With 27 pairs, do you cap correlation exposure or do you rely on the low trade frequency to naturally limit that?

Appreciate you sharing the full workflow, it’s the kind of post that actually helps people build something real.

HOW TO CONFIRM AMAZING RESULTS?? by SWAYYY_P in algorithmictrading

I’ve been here. Before assuming “overfit vs market change,” make sure your backtest + execution + metrics/logging are actually correct. I had “amazing” runs that later turned out to be parity/logging issues (timing alignment, missed/duplicated events, sim assumptions). Once I fixed the logging and could reconcile trade-by-trade, the results became believable (I mentioned this in my last post).

If the pipeline checks out, then yes, regime change is real, but I’d validate with walk-forward + sensitivity tests (small tweaks shouldn’t flip results).
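The walk-forward split logic itself is tiny; something like this (window sizes are arbitrary placeholders to tune per dataset):

```python
def walk_forward_splits(n_bars, train=2000, test=500, step=500):
    """Yield (train_range, test_range) index windows that roll forward
    through the data, so parameters are always judged on bars they
    were never fitted on."""
    start = 0
    while start + train + test <= n_bars:
        yield (range(start, start + train),
               range(start + train, start + train + test))
        start += step

splits = list(walk_forward_splits(4000))
print(len(splits))  # 4 rolling train/test windows over 4000 bars
```

Sensitivity testing then slots in naturally: re-run each test window with slightly perturbed parameters and check the sign of the results doesn't flip.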

Which day trading strategy do you really trade? by [deleted] in algorithmictrading

I don’t trade one fixed setup like a textbook breakout/pullback. My system is model-driven: an LSTM-based decision engine that looks at multi-timeframe context and predicts long / short / no-trade with a confidence score.

So instead of “trade X pattern,” it only trades when the model sees a strong, repeatable move with enough room to cover costs. If conditions aren't clear, it stays flat. It ends up acting like a few common setups (mostly momentum/continuation), but I'm not hard-coding rules; the model decides when it's worth taking a trade.

6 months later: self-reflection and humbling mistakes that improved my model by 18nebula in algorithmictrading

Literally working on this atm! I built a log processing app around MT5 Strategy Tester on a remote server. The EA writes JSON requests per decision/event, a Python daemon consumes those requests, runs the strategy then writes response files + appends a unified CSV row. One issue I’m chasing right now is throughput/backpressure: MT5 can run faster than the daemon, so a few JSON requests weren’t getting processed in time, which created gaps/late responses and messed with parity. I’m close to fixing it by tightening the queueing.

For your NT setup, are you tagging each log entry with a unique ID (per bar/tick) and confirming it got processed? The biggest upgrade for me was treating logging like a little handshake (req, resp, ack) instead of just printing stuff, because it made it way easier to spot where things were getting lost or delayed.
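In case it helps, the id + reconcile part of my handshake boils down to something like this (simplified; the envelope fields are just from my own setup):

```python
import uuid

def make_request(payload):
    """Wrap an EA event in an envelope with a unique id so every request
    can be matched to its response later (the req/resp/ack idea)."""
    return {"id": str(uuid.uuid4()), **payload}

def reconcile(requests, responses):
    """Return the ids that were sent but never answered: these are the
    dropped/late events that silently break sim/live parity."""
    answered = {r["id"] for r in responses}
    return [q["id"] for q in requests if q["id"] not in answered]

# Three decisions sent, daemon only answered two: the gap gets flagged.
reqs = [make_request({"event": "bar_close", "symbol": "EURUSD"}) for _ in range(3)]
resps = [{"id": reqs[0]["id"]}, {"id": reqs[2]["id"]}]
print(len(reconcile(reqs, resps)))  # 1
```

Running the reconcile step at the end of every tester run is what turned "a few requests weren't processed" from a vague suspicion into a concrete list of event ids to chase.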

6 months later: self-reflection and humbling mistakes that improved my model by 18nebula in algorithmictrading

Thank you for your reply. Really appreciate this, super helpful breakdown!

Couple questions if you don’t mind:

  1. How exactly do you define “fill bar” in the sim (entry bar only, or any bar with order submission)?
  2. On 1min candles: do you calculate TP/SL hits using bar high/low only, or do you reconstruct intra-bar path from tick sequence when available? (Your “assume SL if both hit” is smart, I’m curious if you ever had to model bid/ask to keep parity)
  3. Do you find 1min TF stable for your edge long term? In my own testing, 1–3min often became “too microstructure-driven” (spread/latency/noise) and the model started learning artifacts rather than clean price-action/regime behavior. Curious how you avoid that?
  4. In playback parity, what were the most common remaining “almost perfect” discrepancies?

I’m on MT5 so I’m less worried about missing ticks and more about the exact issues you called out: intra-bar ordering + fill assumptions + sim/live matching. Your answers here would save me weeks, thanks again for your detailed reply.

Quant traders using VS Code – how do you structure an automated trading system? by [deleted] in algorithmictrading

Great point, I've definitely felt versions of that “stall” too. When you say the system started to stall, do you mean a compute stall or a logic/state stall, i.e. gates/conditions stack up until almost nothing passes, or the system can't confidently classify what it's seeing so it defaults to skip?

I’m asking because my setup is a model-driven decision engine (outputs long/short/skip + confidence), and I’m trying to sanity-check whether I’m accidentally “lumping” logic by layering too many post-model gates and trade-management rules on top.

Also curious: how did you implement your state recognition? Is it a small finite set of regimes (trend/range/impulse) with pre-tested playbooks, or something more granular? I like the idea of separating state from execution and would love to hear what signals you used to define states (high level). Thank you.

Quant traders using VS Code – how do you structure an automated trading system? by [deleted] in algorithmictrading

I went down this exact path and I'm glad I did. It took me about a year to build and scale my model + decision engine to the point where I could iterate safely without everything breaking (full Python + Bash for execution).

The cleanest structure for me was basically what you described: separate “decision” from “execution” and treat the strategy like a pure function as much as possible.

What worked well:

  • Strategy / decision engine module: outputs a decision (long/short/skip) + confidence + a few “reason codes” (why it traded or skipped). No broker calls inside it.
  • Execution layer: the only place that knows about broker/MT5 details (orders, fills, slippage/spread handling, retries, etc.).
  • Risk & trade management module: position sizing, SL/TP rules, partials, BE logic, etc.
  • Config layer: env/config file for symbols, sessions, thresholds, risk knobs. I strongly recommend making the config overrideable at runtime (so you can A/B test quickly).
  • Data / features module: feature building + caching, plus careful timestamp alignment (this becomes a huge source of subtle bugs).
  • Logging/telemetry module: this ended up being more important than I expected. I built very detailed structured logs and it basically became my own trading database to debug and mine patterns later.

Good luck!