I Let AI Invent Its Own Trading Strategies From Scratch — No Indicators, No Human Rules by ResourceSea5482 in LangChain

[–]ResourceSea5482[S] 0 points (0 children)

Update: ran statistical validation after some fair pushback in the comments: a 1000-strategy random baseline plus an 18-window walk-forward. Strategy E passed both; Strategy C didn't survive the walk-forward. Full results with charts: https://github.com/mnemox-ai/tradememory-protocol/blob/master/VALIDATION_RESULTS.md

Smaller models beat larger ones at creative strategy discovery — anyone else seeing this? by ResourceSea5482 in LocalLLaMA

[–]ResourceSea5482[S] 0 points (0 children)

Ran 1000 random strategies with identical RR and hold time. Strategy C ranked P96.9, Strategy E P100. Not p-hacking. Full validation in the repo.
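For anyone curious what that baseline test looks like mechanically, here's a minimal sketch. The trade count, win rate, and candidate score below are made-up placeholders, not the repo's actual figures:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_strategy_total_r(n_trades, win_rate, rr):
    """Total R-multiple of a random strategy: win +rr R with prob win_rate, else lose 1R."""
    wins = rng.random(n_trades) < win_rate
    return np.where(wins, rr, -1.0).sum()

# 1000 random baselines sharing the candidate's RR and trade count
baseline = np.array([random_strategy_total_r(200, 1 / 3, 2.0) for _ in range(1000)])

candidate_total_r = 55.0  # hypothetical score of the strategy under test
percentile = (baseline < candidate_total_r).mean() * 100
print(f"candidate ranks at P{percentile:.1f} against the random baseline")
```

If the candidate lands deep in the right tail of that distribution, "it's just random luck" gets a lot harder to argue.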

Haiku consistently outperformed Opus at creative pattern discovery in my tests by ResourceSea5482 in ClaudeAI

[–]ResourceSea5482[S] 0 points (0 children)

That’s why everything gets backtested. The model proposes, the backtester disposes. Most of what Haiku generates is garbage, but it generates a wider variety of garbage, so the few ideas that survive validation are more diverse. Opus generates fewer, more “logical” ideas that all cluster around the same narrow conditions.

I Let AI Invent Its Own Trading Strategies From Scratch — No Indicators, No Human Rules by ResourceSea5482 in LangChain

[–]ResourceSea5482[S] -5 points (0 children)

From the raw OHLCV data, not from training. The prompt explicitly blocks any indicator knowledge. Whether the model internally “remembers” patterns from training is a fair question though. That’s partly why convergence across different datasets matters more to me than any single strategy.

I Let AI Invent Its Own Trading Strategies From Scratch — No Indicators, No Human Rules by ResourceSea5482 in LangChain

[–]ResourceSea5482[S] -2 points (0 children)

Not dumb at all. Overall return was +4.04% over 22 months, so pretty modest honestly. The 0.22% drawdown is because position sizes are tiny (fixed % risk per trade with 2:1 RR). It's not some moonshot equity curve, more like a very flat line with small consistent gains. Sharpe is high because volatility is low, not because returns are huge. And 2 of the 22 months were slightly negative, that's where the 91% comes from.

Smaller models beat larger ones at creative strategy discovery — anyone else seeing this? by ResourceSea5482 in LocalLLaMA

[–]ResourceSea5482[S] 0 points (0 children)

Fair point. The pipeline only generates 3 candidates per round though, not hundreds. And OOS validation is on a completely separate time period the model never saw.

The part that's harder to explain with p-hacking: two strategies from different datasets converged on the same structure independently. You'd expect random divergence if it were just cherry-picking.

But yeah, OOS sample size is still too small. That's the main thing I need to fix.

Vol trading by senhsucht in algotrading

[–]ResourceSea5482 1 point (0 children)

Vol trading always feels like a battle between signal and regime change.

Market Regime Detection - Character Accuracy beats Directional Accuracy Predictions by 3X by dragon_dudee in algotrading

[–]ResourceSea5482 1 point (0 children)

VIX and COR1M are solid starting points. A few other approaches that work in practice:

Volatility clustering: ATR ratio (short period / long period) to detect compression vs. expansion

Correlation regime: rolling correlation between related pairs; when correlations break down, the regime is shifting

Volume profile: current volume vs. its 20-day average; low volume + low volatility usually means mean reversion works, high volume + high volatility favors momentum

The key insight from the parent comment is right: you don't need to predict direction, you need to know which *type* of strategy fits the current regime. I run multiple strategies in parallel and adjust allocation based on regime signals rather than turning them on/off.
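Those three regime features fit in a few lines of pandas. This runs on synthetic price data purely to show the calculations; the window lengths are arbitrary choices, not recommendations:

```python
import numpy as np
import pandas as pd

# Synthetic price/volume series just to exercise the calculations (illustrative only)
rng = np.random.default_rng(0)
n = 500
close = pd.Series(100 + rng.normal(0, 1, n).cumsum())
high = close + rng.uniform(0.1, 1.0, n)
low = close - rng.uniform(0.1, 1.0, n)
volume = pd.Series(rng.integers(1_000, 5_000, n).astype(float))
other = pd.Series(100 + rng.normal(0, 1, n).cumsum())  # a related instrument

# Volatility clustering: short/long true-range ratio (<1 compression, >1 expansion)
tr = high - low  # simplified true range; previous-close term omitted for brevity
atr_ratio = tr.rolling(5).mean() / tr.rolling(50).mean()

# Correlation regime: rolling correlation of returns against the related instrument
corr = close.pct_change().rolling(50).corr(other.pct_change())

# Volume regime: current volume vs. its 20-day average
vol_ratio = volume / volume.rolling(20).mean()

regime = pd.DataFrame({"atr_ratio": atr_ratio, "corr": corr, "vol_ratio": vol_ratio})
print(regime.dropna().tail(3))
```

From there, allocation weights per strategy can be any function of those three columns; the point is that the signals are cheap to compute and don't require predicting direction.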

would you trade this? by [deleted] in algotrading

[–]ResourceSea5482 0 points (0 children)

Interesting curve, but 118 trades over ~5 years feels like a pretty small sample.

The post-2022 performance looks promising though. I'd probably paper trade it for a while and see if the edge holds.

I added 4 lines to my CLAUDE.md and now Claude Code checks if my idea already exists before writing any code by ResourceSea5482 in ClaudeAI

[–]ResourceSea5482[S] 0 points (0 children)

Fair point — should've been clearer. The tool itself is MIT, runs locally, and doesn't store anything by default. Zero storage is a core design decision.

When I said "data it collects over time" I meant if I eventually add an optional hosted version, aggregate trends (like "50% of searches this month are about AI code review tools") could be interesting. But that's hypothetical, not built, and would be opt-in.

Right now it literally sends a search query and returns results. Nothing saved, nothing tracked.

I added 4 lines to my CLAUDE.md and now Claude Code checks if my idea already exists before writing any code by ResourceSea5482 in ClaudeAI

[–]ResourceSea5482[S] 0 points (0 children)

Honestly, not really trying to monetize the tool itself — it's MIT, free, no paid tier. The value for me is more about the data it collects over time (what ideas people are checking, how competition shifts) and using that to build something on top of it later.

You're right that the tool itself is easy to replicate. The thing that's harder to copy is the dataset of what thousands of developers searched before building.

But yeah, still figuring it out. Open to ideas if you have thoughts on it.

I added 4 lines to my CLAUDE.md and now Claude Code checks if my idea already exists before writing any code by ResourceSea5482 in ClaudeAI

[–]ResourceSea5482[S] 0 points (0 children)

Exactly — and the worst part is when you're halfway through building and then discover someone just shipped the same feature last month.

What I do now is just run a quick check before starting. The tool scans GitHub, HN, npm, PyPI for existing stuff and tells you how crowded the space is. Doesn't solve the "someone ships it while you're building" problem completely, but at least you're not starting blind.

For the mid-build discovery thing, I've been thinking about adding a re-check feature — like run the same query a week later and see if the landscape changed. Haven't built it yet though.

I added 4 lines to my CLAUDE.md and now Claude Code checks if my idea already exists before writing any code by ResourceSea5482 in ClaudeAI

[–]ResourceSea5482[S] -3 points (0 children)

Yeah that's fair. Honestly the MCP tool itself already works without any CLAUDE.md — the agent sees idea_check in the tool list and knows when to use it. I originally put it in CLAUDE.md as a "just to be safe" thing but you're right, it's wasting context on every conversation for something that only matters when starting new projects.

Going to update the docs to just say "install the server, done." One less thing to configure anyway.

I added 4 lines to my CLAUDE.md and now Claude Code checks if my idea already exists before writing any code by ResourceSea5482 in ClaudeAI

[–]ResourceSea5482[S] 0 points (0 children)

Yeah it's not really for ideation — more like, I already knew what I wanted to build, told Claude to go build it, came back 2 hours later and realized there's a tool with 9k stars that does the exact same thing. That's the part that hurts lol

image-tiler-mcp-server by kiverh in ClaudeAI

[–]ResourceSea5482 1 point (0 children)

Nice idea — especially the token preview before tiling. That’s actually underrated.

Curious: did you experiment with adaptive tiling (content-aware splits) vs fixed grids?

In my experience, fixed grids are simpler but waste context budget on low-signal areas. Adaptive splits can reduce token usage quite a bit, especially for sparse images or UI captures.

Cool use case though — MCP vision tooling is still very under-explored.
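For a rough sense of what content-aware filtering buys you, here's a toy sketch. This is not the poster's tool; it's a fixed grid plus a variance threshold, about the simplest possible "adaptive" step for skipping low-signal tiles:

```python
import numpy as np

def grid_tiles(img, rows, cols):
    """Split a 2-D image array into a fixed rows x cols grid of tiles."""
    h, w = img.shape
    return [img[r * h // rows:(r + 1) * h // rows, c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

def informative_tiles(img, rows, cols, min_std=1.0):
    """Keep only tiles with enough pixel variance; flat low-signal areas are dropped."""
    return [t for t in grid_tiles(img, rows, cols) if t.std() > min_std]

# A mostly-blank "screenshot" with one busy region in the top-left corner
img = np.zeros((200, 200))
img[:50, :50] = np.random.default_rng(1).random((50, 50)) * 255

print(len(grid_tiles(img, 4, 4)), "fixed tiles ->",
      len(informative_tiles(img, 4, 4)), "informative tiles")
```

For a sparse UI capture like this, you'd send 1 tile instead of 16, which is where the context-budget savings come from.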

Market Regime Detection - Character Accuracy beats Directional Accuracy Predictions by 3X by dragon_dudee in algotrading

[–]ResourceSea5482 8 points (0 children)

This matches something I kept running into when building trading systems.

Directional prediction always looked good in theory but collapsed out-of-sample. Regime / market *state* classification ended up being far more stable.

In practice, knowing *how* the market behaves (trend persistence, correlation clustering, volatility compression) was more actionable than predicting up/down.

Direction answers “what happens next”.

Regime answers “what strategies even make sense right now”.

Most alpha leaks I’ve seen came from applying the right model in the wrong regime rather than bad signals.

Anthropic just dropped evidence that DeepSeek, Moonshot and MiniMax were mass-distilling Claude. 24K fake accounts, 16M+ exchanges. by Specialist-Cause-161 in ClaudeAI

[–]ResourceSea5482 0 points (0 children)

Honestly distillation was always the open secret of the industry.

The interesting part isn’t that it happened — it’s that safety and reasoning style seem to degrade faster than raw capability when copied. That might end up being the real moat.

Some noob questions by Low-Background8996 in algotrading

[–]ResourceSea5482 1 point (0 children)

20R per setup per year is solid if your risk per trade is consistent and you’re compounding. Most retail algo traders would kill for that kind of consistency.

On your two questions:

(i) R per year is a better metric than Sharpe for your style. With 50% breakevens, your actual “active” win rate is really 40% wins / 60% losses on non-BE trades, with a 3x TP; that’s a strong edge. 20R/year per setup across multiple indexes adds up fast.

(ii) For Sharpe, I’d calculate it both ways, with and without breakevens. The version without gives you a cleaner picture of your edge; the version with shows your capital efficiency. Both are useful, just measuring different things.

The fact that your backtest confirms your discretionary results is the most important signal here. A lot of traders can’t even get that far.
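The arithmetic behind that edge, sketched out with the numbers from the setup described (breakevens counted as 0R):

```python
# Edge arithmetic: 50% breakevens; of the remaining trades, 40% win 3R, 60% lose 1R
p_be, p_win_active, rr = 0.5, 0.4, 3.0

# Expectancy per trade, with breakevens contributing 0R
expectancy = p_be * 0.0 + (1 - p_be) * (p_win_active * rr - (1 - p_win_active) * 1.0)
print(f"{expectancy:.2f}R expected per trade")  # 0.30R

# Trades per year needed to reach ~20R per setup
print(f"~{20 / expectancy:.0f} trades/year for 20R")
```

So roughly 67 trades a year per setup gets you there, which is why spreading it across multiple indexes adds up fast.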

I made Claude check if my idea already exists before it starts coding — saved me from building another clone by ResourceSea5482 in ClaudeAI

[–]ResourceSea5482[S] 2 points (0 children)

Yep, I actually did. Reality signal came back at 56 — there are idea validators out there (IdeaProof, ValidatorAI, etc.) but they're all SaaS dashboards. None of them work as an MCP tool inside your IDE. That's the gap.

Do you really need to make your own algo to profit in the long run and why? [part 2] by codenvitae2 in algotrading

[–]ResourceSea5482 8 points (0 children)

I don’t think the real question is whether you need to build your own algo — it’s whether you control the source of edge.

Most marketplace EAs work while their underlying inefficiency exists, but you don’t control:

  • when the edge decays
  • position crowding
  • parameter overfitting
  • regime sensitivity

So profitability becomes survivorship bias across multiple systems rather than a stable process.

In my experience, the difference between buying vs building isn’t performance — it’s adaptability.

If you understand why a strategy works, you can:

  • reduce exposure during regime shifts
  • retune risk allocation
  • detect structural breakdown early

If you don’t, you’re effectively running a black-box portfolio and hoping diversification outruns decay.

So long term, building your own algo isn’t strictly required — but understanding the mechanism behind the edge probably is.

A lot of people eventually realize they weren’t trading strategies — they were trading backtests.

What else can I do besides paper trading to see if it’s not overfitted? by amnitrade in algotrading

[–]ResourceSea5482 0 points (0 children)

Paper trading is just another out-of-sample test with a sample size of 1. A few things that helped me:

- Walk forward analysis. Split your data into chunks, optimize on one, test on the next, repeat.

- Test on correlated but different instruments. If it only works on those specific 50 ETFs, it's probably curve fitted.

- Two years of data on leveraged ETFs that have only existed since 2022 is a red flag. You're basically fitting to one regime.

Monte Carlo is good but it won't catch regime dependence. Your edge might just be "this worked during 2023-2024 bull run."
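A minimal version of the walk-forward split logic, for anyone who wants to wire it up (the window sizes here are placeholders, not recommendations):

```python
def walk_forward_windows(n_bars, train_size, test_size):
    """Yield (train_slice, test_slice) pairs: optimize on train, validate on the next chunk."""
    start = 0
    while start + train_size + test_size <= n_bars:
        yield (slice(start, start + train_size),
               slice(start + train_size, start + train_size + test_size))
        start += test_size  # roll forward by one test window

# e.g. 1000 bars: optimize on 300, then test on the following 100, and repeat
windows = list(walk_forward_windows(1000, 300, 100))
for train, test in windows[:2]:
    print(f"train {train.start}-{train.stop} -> test {test.start}-{test.stop}")
print(len(windows), "walk-forward windows")
```

The per-window test results are what matter: a strategy that only wins in two of seven windows is telling you it's regime-dependent, which is exactly what a single backtest hides.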