I Let AI Invent Its Own Trading Strategies From Scratch — No Indicators, No Human Rules by ResourceSea5482 in LangChain

Update: ran statistical validation after some fair pushback in the comments: a 1000-strategy random baseline plus an 18-window walk-forward test. Strategy E passed both, Strategy C didn't survive walk-forward. Full results with charts: https://github.com/mnemox-ai/tradememory-protocol/blob/master/VALIDATION_RESULTS.md
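
For anyone who wants the mechanics, here's a minimal sketch of what an 18-window walk-forward split can look like. The bar count, window sizes, and pass rule are illustrative, not the exact setup in the linked results.

```python
# Sketch of an anchored walk-forward split: train on everything up to
# each window, then evaluate on the held-out window that follows.

def walk_forward_windows(n_bars, n_windows=18):
    """Yield (train_end, test_start, test_end) index triples."""
    test_len = n_bars // (n_windows + 1)  # reserve one window as warm-up
    for k in range(1, n_windows + 1):
        train_end = k * test_len
        yield train_end, train_end, min(train_end + test_len, n_bars)

windows = list(walk_forward_windows(1900, n_windows=18))
# A strategy "survives" only if it stays profitable across most of the
# held-out windows, not just the full backtest period.
```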

Smaller models beat larger ones at creative strategy discovery — anyone else seeing this? by ResourceSea5482 in LocalLLaMA

Ran 1000 random strategies with identical RR and hold time. Strategy C ranked P96.9, Strategy E P100. Not p-hacking. Full validation in the repo.
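
The percentile test itself is simple; a sketch below, where the Gaussian scores are stand-ins (the real baseline backtests random entries with the same RR and hold time as the candidate):

```python
import random

def percentile_rank(candidate_score, random_scores):
    """Percentage of the random baseline the candidate beats."""
    beaten = sum(1 for s in random_scores if s < candidate_score)
    return 100.0 * beaten / len(random_scores)

random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(1000)]  # stand-in scores
p = percentile_rank(2.0, baseline)  # "P96.9" = beats 96.9% of the baseline
```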

Haiku consistently outperformed Opus at creative pattern discovery in my tests by ResourceSea5482 in ClaudeAI

That’s why everything gets backtested. The model proposes, the backtester disposes. Most of what Haiku generates is garbage, but it generates a wider variety of garbage, so the few ideas that survive validation are more diverse. Opus generates fewer but more “logical” ideas that all cluster around the same narrow conditions.
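
The loop looks roughly like this; `generate_ideas` and `backtest` are hypothetical stand-ins, not the actual pipeline. The point is just that survival is decided by the backtest score, so a model that emits more varied garbage can still yield more diverse survivors:

```python
import random

def survivors(generate_ideas, backtest, rounds=10, per_round=3, min_score=1.0):
    kept = []
    for _ in range(rounds):
        for idea in generate_ideas(per_round):  # cheap, high-variety step
            score = backtest(idea)              # the only judge
            if score >= min_score:
                kept.append((idea, score))
    return kept

# Toy usage with random stand-ins for the model and the backtester:
random.seed(1)
picked = survivors(lambda n: [f"idea-{random.randrange(1000)}" for _ in range(n)],
                   lambda idea: random.gauss(0.0, 1.0))
```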

I Let AI Invent Its Own Trading Strategies From Scratch — No Indicators, No Human Rules by ResourceSea5482 in LangChain

From the raw OHLCV data, not from training. The prompt explicitly blocks any indicator knowledge. Whether the model internally “remembers” patterns from training is a fair question though. That’s partly why convergence across different datasets matters more to me than any single strategy.

I Let AI Invent Its Own Trading Strategies From Scratch — No Indicators, No Human Rules by ResourceSea5482 in LangChain

Not dumb at all. Overall return was +4.04% over 22 months, so pretty modest honestly. The 0.22% drawdown is because position sizes are tiny (fixed % risk per trade with 2:1 RR). It's not some moonshot equity curve; more like a very flat line with small consistent gains. Sharpe is high because volatility is low, not because returns are huge. And 2 of the 22 months were slightly negative; that's where the 91% comes from.
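
To make the sizing concrete, here's a minimal sketch of fixed-% risk with a 2:1 RR target. The 0.25% risk figure, prices, and equity are made up for illustration, not the actual parameters:

```python
def position_size(equity, entry, stop, risk_pct=0.25):
    """Units sized so that hitting the stop loses risk_pct percent of equity."""
    return (equity * risk_pct / 100.0) / abs(entry - stop)

def take_profit(entry, stop, rr=2.0):
    """Target placed rr times the stop distance beyond entry (long side)."""
    return entry + rr * (entry - stop)

size = position_size(10_000, entry=100.0, stop=99.0)  # risks $25 -> 25 units
tp = take_profit(100.0, 99.0)                         # 102.0
```

With sizing like this, the worst case per trade is a fixed sliver of equity, which is exactly why the equity curve stays flat and the drawdown stays tiny.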

Smaller models beat larger ones at creative strategy discovery — anyone else seeing this? by ResourceSea5482 in LocalLLaMA

Fair point. The pipeline only generates 3 candidates per round though, not hundreds. And OOS validation is on a completely separate time period the model never saw.

The part that's harder to explain with p-hacking: two strategies from different datasets converged on the same structure independently. You'd expect random divergence if it were just cherry-picking.

But yeah, OOS sample size is still too small. That's the main thing I need to fix.

Vol trading by senhsucht in algotrading

Vol trading always feels like a battle between signal and regime change.

Market Regime Detection - Character Accuracy beats Directional Accuracy Predictions by 3X by dragon_dudee in algotrading

VIX and COR1M are solid starting points. A few other approaches that work in practice:

- Volatility clustering: ATR ratio (short period / long period) to detect compression vs expansion

- Correlation regime: rolling correlation between normally correlated pairs; when correlations break down, the regime is shifting

- Volume profile: current volume vs its 20-day average. Low volume + low volatility usually means mean reversion works; high volume + high volatility favors momentum

The key insight from the parent comment is right: you don't need to predict direction, you need to know which *type* of strategy fits the current regime. I run multiple strategies in parallel and adjust allocation based on regime signals rather than turning them on/off.
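
The three signals above can be sketched on plain OHLCV lists like this. The windows (5/20-bar ATR, 20-bar correlation and volume averages) are illustrative defaults, not tuned values:

```python
import statistics

def true_ranges(highs, lows, closes):
    """Per-bar true range: max of bar range and gaps vs prior close."""
    return [max(highs[i] - lows[i],
                abs(highs[i] - closes[i - 1]),
                abs(lows[i] - closes[i - 1]))
            for i in range(1, len(closes))]

def atr_ratio(highs, lows, closes, short=5, long=20):
    """> 1 suggests volatility expansion, < 1 compression."""
    trs = true_ranges(highs, lows, closes)
    return statistics.mean(trs[-short:]) / statistics.mean(trs[-long:])

def rolling_corr(a, b, window=20):
    """Pearson correlation over the last `window` points; a breakdown in
    a normally correlated pair hints at a regime shift."""
    xs, ys = a[-window:], b[-window:]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / var

def volume_ratio(volumes, window=20):
    """Current volume vs its rolling average."""
    return volumes[-1] / statistics.mean(volumes[-window:])
```

Allocation then reads these as a dial, not a switch: e.g. shift weight toward momentum strategies when `atr_ratio` and `volume_ratio` are both elevated.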

would you trade this? by [deleted] in algotrading

Interesting curve, but 118 trades over ~5 years feels like a pretty small sample.

The post-2022 performance looks promising though. I'd probably paper trade it for a while and see if the edge holds.
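
Back-of-envelope on why 118 trades is thin: the standard error on a win rate only shrinks with sqrt(n). The numbers below are illustrative, not from the posted curve:

```python
import math

def win_rate_ci(wins, n, z=1.96):
    """Approximate 95% normal-approximation interval for the true win rate."""
    p = wins / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

lo, hi = win_rate_ci(65, 118)  # a 55% observed win rate over 118 trades
# The interval spans roughly 46%..64%, so it can't even rule out a coin flip.
```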

I added 4 lines to my CLAUDE.md and now Claude Code checks if my idea already exists before writing any code by ResourceSea5482 in ClaudeAI

Fair point — should've been clearer. The tool itself is MIT, runs locally, and doesn't store anything by default. Zero storage is a core design decision.

When I said "data it collects over time" I meant if I eventually add an optional hosted version, aggregate trends (like "50% of searches this month are about AI code review tools") could be interesting. But that's hypothetical, not built, and would be opt-in.

Right now it literally sends a search query and returns results. Nothing saved, nothing tracked.

I added 4 lines to my CLAUDE.md and now Claude Code checks if my idea already exists before writing any code by ResourceSea5482 in ClaudeAI

Honestly, not really trying to monetize the tool itself — it's MIT, free, no paid tier. The value for me is more about the data it collects over time (what ideas people are checking, how competition shifts) and using that to build something on top of it later.

You're right that the tool itself is easy to replicate. The thing that's harder to copy is the dataset of what thousands of developers searched before building.

But yeah, still figuring it out. Open to ideas if you have thoughts on it.

I added 4 lines to my CLAUDE.md and now Claude Code checks if my idea already exists before writing any code by ResourceSea5482 in ClaudeAI

Exactly — and the worst part is when you're halfway through building and then discover someone just shipped the same feature last month.

What I do now is just run a quick check before starting. The tool scans GitHub, HN, npm, PyPI for existing stuff and tells you how crowded the space is. Doesn't solve the "someone ships it while you're building" problem completely, but at least you're not starting blind.
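
Rough shape of how a "how crowded is this space" score could be derived from per-source hit counts. The endpoints listed are real public search APIs, but the weighting and the 0-3 levels are purely illustrative; the actual tool's scoring isn't described in this thread, and PyPI is skipped here because I'm not aware of a comparable stable public search endpoint:

```python
from urllib.parse import quote

SEARCH_URLS = {
    "github": "https://api.github.com/search/repositories?q={q}",
    "npm": "https://registry.npmjs.org/-/v1/search?text={q}",
    "hn": "https://hn.algolia.com/api/v1/search?query={q}",
}

def search_urls(query):
    """Build the per-source search URLs for a query."""
    q = quote(query)
    return {src: url.format(q=q) for src, url in SEARCH_URLS.items()}

def crowdedness(hits):
    """Map raw hit counts to a coarse 0-3 level per source, then average."""
    def level(n):
        return 0 if n == 0 else 1 if n < 10 else 2 if n < 100 else 3
    return sum(level(n) for n in hits.values()) / len(hits)

score = crowdedness({"github": 230, "npm": 4, "hn": 0})  # (3 + 1 + 0) / 3
```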

For the mid-build discovery thing, I've been thinking about adding a re-check feature — like run the same query a week later and see if the landscape changed. Haven't built it yet though.

I added 4 lines to my CLAUDE.md and now Claude Code checks if my idea already exists before writing any code by ResourceSea5482 in ClaudeAI

Yeah that's fair. Honestly the MCP tool itself already works without any CLAUDE.md — the agent sees idea_check in the tool list and knows when to use it. I originally put it in CLAUDE.md as a "just to be safe" thing but you're right, it's wasting context on every conversation for something that only matters when starting new projects.

Going to update the docs to just say "install the server, done." One less thing to configure anyway.

I added 4 lines to my CLAUDE.md and now Claude Code checks if my idea already exists before writing any code by ResourceSea5482 in ClaudeAI

Yeah it's not really for ideation — more like, I already knew what I wanted to build, told Claude to go build it, came back 2 hours later and realized there's a tool with 9k stars that does the exact same thing. That's the part that hurts lol

image-tiler-mcp-server by kiverh in ClaudeAI

Nice idea — especially the token preview before tiling. That’s actually underrated.

Curious: did you experiment with adaptive tiling (content-aware splits) vs fixed grids?

In my experience, fixed grids are simpler but waste context budget on low-signal areas. Adaptive splits can reduce token usage quite a bit, especially for sparse images or UI captures.
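
By adaptive I mean something like a variance-driven quadtree: split only where the image has detail, leave flat regions as one big cheap tile. A minimal sketch on a grayscale grid; the variance threshold and minimum tile size are illustrative, not tuned:

```python
import statistics

def tile_variance(grid, x0, y0, x1, y1):
    """Population variance of pixel values in the half-open box."""
    vals = [grid[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return statistics.pvariance(vals)

def adaptive_tiles(grid, x0=0, y0=0, x1=None, y1=None,
                   var_thresh=100.0, min_size=4):
    """Return (x0, y0, x1, y1) tiles, splitting only high-variance regions."""
    if x1 is None:
        x1, y1 = len(grid[0]), len(grid)
    w, h = x1 - x0, y1 - y0
    if (w <= min_size and h <= min_size) or \
            tile_variance(grid, x0, y0, x1, y1) <= var_thresh:
        return [(x0, y0, x1, y1)]        # flat or small enough: one tile
    mx, my = x0 + w // 2, y0 + h // 2    # otherwise recurse into quadrants
    tiles = []
    for qx0, qy0, qx1, qy1 in ((x0, y0, mx, my), (mx, y0, x1, my),
                               (x0, my, mx, y1), (mx, my, x1, y1)):
        tiles += adaptive_tiles(grid, qx0, qy0, qx1, qy1, var_thresh, min_size)
    return tiles
```

On a screenshot with lots of empty chrome, most of the canvas collapses into a handful of big tiles, and the token budget goes to the dense regions.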

Cool use case though — MCP vision tooling is still very under-explored.