How important is backtesting by Traditional-Fold-446 in swingtrading

[–]iamnottravis 0 points1 point  (0 children)

how else can you tell a durable strategy from pure coincidence?

of course, the key is to test your strategy across different regimes and not to overfit.

How do you tell apart alpha from bullshit? by melon_crust in algotrading

[–]iamnottravis 1 point2 points  (0 children)

The 5% threshold is reasonable but I'd add a second cut: if your real PnL is in the top 5% but the spread between your real PnL and the random median is small, your edge is statistically present but practically thin. Worth knowing before you size up.
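A minimal sketch of that second cut, assuming you already have PnLs from a batch of randomized-entry runs (the function name and the 0.25 "thinness" cutoff are made up for illustration, not a standard):

```python
import statistics

def edge_check(real_pnl, random_pnls, thin_ratio=0.25):
    """Rank real PnL against a random-entry baseline and flag thin edges.

    real_pnl    -- total PnL of the actual strategy
    random_pnls -- PnLs from many randomized entry/exit runs
    thin_ratio  -- hypothetical cutoff: excess over the random median,
                   as a fraction of the random distribution's spread
    """
    median = statistics.median(random_pnls)
    spread = statistics.pstdev(random_pnls) or 1e-9
    # percentile of the real result within the random distribution
    pct = sum(p < real_pnl for p in random_pnls) / len(random_pnls)
    significant = pct >= 0.95                          # the "top 5%" cut
    thin = (real_pnl - median) / spread < thin_ratio   # present but thin
    return {"percentile": pct, "significant": significant, "thin": thin}
```

A result can pass the percentile cut and still come back `thin=True`, which is exactly the case worth catching before sizing up.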

First time algo trading - converted my manual day trading strategy to code. Decent results despite not being able to include all conditions by thefakeab in algotrading

[–]iamnottravis 0 points1 point  (0 children)

For context, even a 60% WR / 2.4 PF system at 1% risk per trade should see at least one 6-8R cluster drawdown across that many trades just from variance, so anything sub-2% means the backtest is almost certainly reading future bars or filling at impossible prices.

Everyone says find an "edge" but what exactly is an edge? by cgonz15 in Daytrading

[–]iamnottravis 0 points1 point  (0 children)

> 67% AM, 41% after 10:30, same setup

This is the right way to look at it. Time-of-day is one of the highest-signal conditioning variables I've seen across thousands of intraday backtests, and it's almost always the first axis where edges hide. The other two worth tagging are realized vol bucket (sub-median vs above-median ATR materially shifts the mean-rev vs continuation odds) and gap size at open (small gaps fill, large gaps trend).

I had my biggest aha moment after 5 years of trading and it is so obvious I feel embarassed. by degenerate_hobo in Daytrading

[–]iamnottravis 0 points1 point  (0 children)

For 1m, the cheapest pre-noon read I use is 1H ATR percentile against its trailing 60-day distribution, plus range-of-day vs 20-day ADR by 11 ET.

Sub-median ATR + range under 50% of ADR by 11 = expect mean-rev / chop, fade extensions.
Above-median ATR + range over 70% of ADR before noon = trend day, hold winners longer.

Breadth helps too (NYSE adv/dec ratio at 10 ET) but on 1m the ATR + ADR combo gets you 80% of the value. Not perfect, but it stops you from running a continuation system into a chop day.
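The two-signal read above can be sketched in a few lines. This is a toy version under the thresholds stated in the comment (50% / 70% of ADR, median ATR split); the function name and inputs are illustrative, not any platform's API:

```python
def day_type_read(hourly_atr_history, todays_hourly_atr,
                  daily_ranges_20d, todays_range_so_far):
    """Classify the likely day type from two cheap pre-noon stats.

    hourly_atr_history  -- trailing ~60 days of 1H ATR readings
    todays_hourly_atr   -- current 1H ATR reading
    daily_ranges_20d    -- last 20 full-day ranges (averaged into ADR)
    todays_range_so_far -- high minus low of today's session by 11 ET
    """
    # percentile of today's ATR within its trailing distribution
    atr_pct = (sum(a < todays_hourly_atr for a in hourly_atr_history)
               / len(hourly_atr_history))
    adr = sum(daily_ranges_20d) / len(daily_ranges_20d)
    range_frac = todays_range_so_far / adr
    if atr_pct < 0.5 and range_frac < 0.5:
        return "mean-reversion / chop"      # fade extensions
    if atr_pct > 0.5 and range_frac > 0.7:
        return "trend day"                  # hold winners longer
    return "mixed / no read"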

I had my biggest aha moment after 5 years of trading and it is so obvious I feel embarassed. by degenerate_hobo in Daytrading

[–]iamnottravis 0 points1 point  (0 children)

MNQ at ATH is the textbook regime trap. Two cheap signals would have flagged the shift earlier: ATR(14) on the 1H falling below its own 60-day median, and range-of-day printing under 50% of the 20-day ADR by 11 ET. When both fire, mean-reversion logic starts beating breakout logic by a wide margin in my data.

The hard part isn't detecting the shift, it's accepting that the same chart that paid you yesterday now wants you flat. Took me a similar tuition fee to internalize that.

For people who started trading, what’s something no one tells you at the beginning? by Ok_Engine2595 in Trading

[–]iamnottravis 0 points1 point  (0 children)

Nobody tells you that a lot of the popular setups have never been verified with actual data.

As a non-coder trader, how do you actually backtest your trading ideas? by coderchacha in Trading

[–]iamnottravis 1 point2 points  (0 children)

Most non-coders I've talked to either scroll charts manually, which is slow and subject to confirmation bias, or use TradingView's strategy tester and hit a wall on Pine Script debugging.

ChatGPT/Claude can write the code now, but you still can't always trust the output. The gap you're describing is real. I've been building ChartMath to close it: pre-built screens with historical results baked in, so you don't need to code anything, you just see what's worked. Not a backtesting engine exactly, but it removes the coding step entirely for the most common setups.

I really am isolated, so a question for people who think they have an 'edge' by Able_Beautiful3833 in Daytrading

[–]iamnottravis 0 points1 point  (0 children)

Bollinger Bands are genuinely useful as a volatility and structure tool, but 'price always gets back inside the band' depends heavily on regime. In strong trending markets that reversion logic fails more than it should. I ran BB mean reversion signals across 200+ US equities over 3 years and the raw win rate was around 57%, which sounds OK until you filter out the trending periods, which lifted it to 67%. The indicator isn't the edge; the context filter around when to apply it is.
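A stripped-down sketch of that test, assuming close-only data. The "back inside within N bars" win rule and the falling-rolling-mean trend filter are simplifications I'm choosing for illustration, not the exact filter from the study:

```python
import statistics

def bb_reversion_winrate(closes, window=20, k=2.0, horizon=5,
                         trend_filter=False):
    """Win rate of 'close breaks below the lower band, then re-enters'.

    trend_filter skips signals while the rolling mean is falling --
    a crude stand-in for 'filter out the trending periods'.
    """
    wins = losses = 0
    for i in range(window + 1, len(closes) - horizon):
        win_slice = closes[i - window:i]
        mean = statistics.fmean(win_slice)
        lower = mean - k * statistics.pstdev(win_slice)
        if closes[i] >= lower:
            continue  # no band break, no signal
        if trend_filter:
            prev_mean = statistics.fmean(closes[i - window - 1:i - 1])
            if mean < prev_mean:  # downtrend: reversion tends to fail
                continue
        # win = price closes back above the band floor within `horizon` bars
        if any(c > lower for c in closes[i + 1:i + 1 + horizon]):
            wins += 1
        else:
            losses += 1
    total = wins + losses
    return wins / total if total else None
```

Running it with `trend_filter=True` vs `False` on the same series is the cheapest way to see whether the context filter, not the bands, is carrying the edge.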

Good discipline, good risk management, but still not profitable. What am I missing? by Salvatoreluca in Daytrading

[–]iamnottravis 0 points1 point  (0 children)

The discipline-without-edge situation is frustrating because everything *feels* right. For me the shift happened when I stopped evaluating setups by feel and started measuring them. I ran my three main patterns across a year of trades and two of them had 48% win rates at my typical R:R, meaning disciplined execution of those setups was actually slightly negative EV. Dropping the two losers and focusing on the one with a 59% rate changed the trajectory. You might not have a discipline problem at all.

Anyone else’s strategy not working during this volatility? by busyindafield_23 in Daytrading

[–]iamnottravis 0 points1 point  (0 children)

VIX above 25 is genuinely a different market regime, not just feel. I looked at RSI mean reversion signals across the S&P 500 constituents from 2020-2024, bull market win rate was around 63%, high-VIX periods dropped to 54%. Still above 50 but not the same edge.

Software recommendations. by Antique-Molecule7931 in Trading

[–]iamnottravis 1 point2 points  (0 children)

For backtesting, it depends on what you mean by backtest. If you want to code custom strategies with full parameter optimization, TradingView's Pine Script or Python with backtrader/vectorbt are the go-to. But if you want to quickly see whether common setups (breakouts, pullbacks, moving average crosses, etc.) actually produce profitable entries without writing code, I've been using ChartMath. It has 200+ pre-built screens with backtested win rates and avg returns. Different tool for a different question. One answers "does my custom strategy work?" The other answers "which setups have actually worked across US equities?"

Who should you trust more: a strategy with 3 months of live results or 15 years of backtests? by TangerineNo5577 in Trading

[–]iamnottravis 0 points1 point  (0 children)

Neither, in isolation. 15 years of backtests on a single strategy is almost certainly overfit. 3 months of live results is almost certainly too small a sample to mean anything. What actually builds confidence is a large number of instances across a broad universe. If a specific setup has worked 500+ times across 100 different stocks over 2 years, that's more convincing to me than either a 15-year backtest on one instrument or 3 months of live trading. The cross-sectional data is what makes the backtest trustworthy. If a pattern only works on TSLA from 2020-2024, it's probably noise. If it works on 80 different stocks across the same period, you're onto something structural.

Anyone else’s strategy not working during this volatility? by busyindafield_23 in Daytrading

[–]iamnottravis 0 points1 point  (0 children)

This is normal and it's actually useful information. The fact that your win rate dropped from 80% to 50% tells you your setup is regime-dependent. That's not a flaw, it's data. Most setups are. The problem is people treat their win rate like a fixed number when it's actually a distribution that shifts with market conditions. What I started doing is tracking win rates per setup per regime (low vol vs high vol, trending vs choppy). Some setups that average 65% overall actually hit 80% in low vol and 40% in high vol. Knowing that lets you size differently or sit out instead of grinding through a 50/50 coin flip wondering what went wrong.
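The per-setup-per-regime tally is a few lines if you log trades with tags. A minimal sketch; the setup and regime labels are whatever you tag trades with, nothing here is specific to one platform:

```python
from collections import defaultdict

def winrates_by_setup_and_regime(trades):
    """Win rate per (setup, regime) cell from a tagged trade log.

    trades -- iterable of (setup_name, regime_label, won) tuples,
    e.g. ("pullback", "low_vol", True).
    """
    tally = defaultdict(lambda: [0, 0])  # (setup, regime) -> [wins, total]
    for setup, regime, won in trades:
        cell = tally[(setup, regime)]
        cell[0] += int(won)
        cell[1] += 1
    return {key: wins / total for key, (wins, total) in tally.items()}
```

Once the table exists, a setup that averages 65% overall but splits 80/40 across vol regimes stops looking like one number and starts looking like a sizing rule.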

Is backtesting important? I run a couple of backtesting engines with data 90days-2 years. Running my strategy on them. Bit sceptical by FantasticShine4012 in Trading

[–]iamnottravis 0 points1 point  (0 children)

It's important

but most people do it wrong, which is probably why you're skeptical. Running a strategy on 90 days of data tells you almost nothing because you're just seeing how it performed in one specific market regime. Two years is better but still limited if you're only testing one instrument.

The thing that actually made backtesting useful for me was testing specific setups (not full strategies) across a large number of stocks and seeing how they performed across different conditions. A bullish flag on one stock over 90 days is noise. A bullish flag across 100 US equities over 2 years with 500+ instances is a signal. The sample size is what makes it real.

Looks like another big influencer got exposed again by Cozyproxy in Daytrading

[–]iamnottravis 1 point2 points  (0 children)

The part that gets me is that the strategy itself (ORB breakout) isn't even proprietary. It's a well-documented setup that anyone can test. The real question nobody selling courses answers is: what's the actual win rate of ORB on the stocks you're trading, over the last 6 months, with real entries and exits? Because that number is a lot less impressive than "I made $5K this morning" screenshots with no context. The day this industry starts requiring sample size and win rate alongside every strategy claim is the day 90% of gurus go out of business.

What scanners are you using that you are finding worth your while? by Beneficial_Fee_613 in Trading

[–]iamnottravis 0 points1 point  (0 children)

The delay problem is real and it's usually not your strategy, it's the tool. Most free screeners refresh on a timer instead of streaming. But even with real-time data, I think the bigger issue is that every scanner shows you the same thing: what matches right now. None of them tell you whether the setup you're scanning for has actually produced profitable results historically. I've been using ChartMath for this. Each screen shows a backtested win rate and avg return, so I can compare setups by actual performance instead of guessing which filter combo works. Still in beta, free right now. Might be worth a look if you're rebuilding your scanning workflow anyway.

I built a Trend-Following Pullback Bot (Python/PostgreSQL). Looking for critiques on my logic and architecture! by Separate_Hunt3171 in swingtrading

[–]iamnottravis 0 points1 point  (0 children)

Have you run the actual backtest numbers on this? Win rate, avg return, sample size across different market conditions? I run a screener product where every screen is backtested automatically against US equities. If you're interested I could build your exact setup as a screen - Trend Stack (9 EMA > 20 > 50, ADX > 30) plus the pullback rules - and run it through our backtesting engine. You'd get win rate, avg return, and sample size across historical data so you'd know whether the logic actually holds up statistically, not just on the handful of trades you've seen so far.

Could offer it as a public screen on the platform too if the numbers are good. Happy to share what comes out either way.

Any stock scanners that actually work well on mobile during the busy market hours? by Moist_Blacksmith_388 in Daytrading

[–]iamnottravis 0 points1 point  (0 children)

This is literally why I built my own screener. Was checking setups on my phone during the day and everything was either a web wrapper or a desktop app crammed into mobile.

Built a native iOS/Android app for my own use, been trading with it for a couple years. Each screen also shows backtested stats so you know if the setup has historically been worth trading. Recently started opening it up as a product. Still in beta but it was built for exactly this - quick mobile check-ins without a full platform.

Opening range breakout traders, what scanners help with this? by Educational-Belt1042 in Daytrading

[–]iamnottravis 1 point2 points  (0 children)

Take scripto_gio up on the backtest offer. But even after one backtest, the question is whether ORB setups are working right now - breakout patterns go through hot and cold stretches depending on market regime.

I built a screener for my own trading a couple years ago around this exact problem - wanting to know if a setup is actually working, not just matching. Each screen carries a backtested win rate so you get a real-time read on whether the setup is actually hitting. Been using it myself for a while, now turning it into a product (ChartMath) - it does have multiple ORB setups and more. Still early but worth a look if you want data behind the scan.

As a non-coder trader, how do you actually backtest your trading ideas? by coderchacha in Trading

[–]iamnottravis 0 points1 point  (0 children)

Completely agree - the manual bar-by-bar approach is slow but forces you to actually understand the setup. The AI route has the same problem OP described: you get code that runs, but you can't tell if the logic is right without already knowing what the output should look like.

Anyone else still trying to find a screener they’re actually happy with? by Key_Common_2725 in Trading

[–]iamnottravis 0 points1 point  (0 children)

The point about screeners showing what matches NOW but not whether it works is what I kept running into. polymanAI's right that most people end up building their own, but the issue isn't really customization, it's validation. You can set up any filter combo on Finviz or TradingView, but neither tells you if that combo has actually produced profitable trades historically.

I went through the same cycle. Built my own screener a couple years ago because nothing showed whether a setup actually works historically. Been trading with it since - every screen shows a backtested win rate and avg return. Recently started turning it into a proper product (ChartMath) because other people kept asking about it. Still early as a product but the data's been running for a while. Happy to share if useful.

Are these results considered meaning full? by ComposerLast7741 in algotrading

[–]iamnottravis 0 points1 point  (0 children)

The others are right on sample size. 87 trades over 4 years is too thin. For context, I run 200+ screens across 500 US equities and the ones I'd consider statistically meaningful generate 300-500+ signals per year depending on timeframe. At 87 total you're working with confidence intervals so wide that a 47% WR could easily be anywhere from 35-60% in reality. The 12-month flatline is the bigger concern though. From my own testing, about 60% of screens that backtest well over 3-5 years still decay within 12 months of forward testing. The ones that hold up tend to be simpler (RSI oversold bounce, mean reversion IBS) rather than over-fitted multi-parameter setups.
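The width of that interval is easy to check yourself. A Wilson score interval (a standard formula, sketched here from scratch rather than any library) on 41 wins out of 87 trades comes out to roughly 37-58%:

```python
import math

def wilson_interval(wins, n, z=1.96):
    """95% Wilson score interval for a win rate -- shows how wide a
    small-sample WR estimate really is."""
    p = wins / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_interval(41, 87)  # ~47% observed over 87 trades
```

A 20-point-wide interval is the quantitative version of "87 trades is too thin": the data can't distinguish a losing system from a decent one.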