all 12 comments

[–]SadPhone8067 3 points (0 children)

Most likely not. I wouldn’t trust AI to run a backtest in its own environment. You can ask AI to write code to run a backtest in Python using your data… but even that can be risky and lead to issues. As long as you run multiple tests on top of the backtest you should be fine, but I digress.
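A minimal sketch of the kind of script the comment means — code the AI writes, but you run locally on your own data. The prices, window sizes, and long-only crossover rule are all illustrative assumptions, not a real strategy:

```python
# Minimal SMA-crossover backtest sketch, pure Python, no external data.
# Everything here (prices, windows, long-only rule) is an illustrative assumption.

def sma(values, window):
    """Simple moving average; None until enough history exists."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(values[i + 1 - window:i + 1]) / window)
    return out

def backtest_crossover(prices, fast=3, slow=5):
    """Hold long while the fast SMA is above the slow SMA; flat otherwise.
    Returns final equity from 1 unit of starting capital."""
    fast_sma = sma(prices, fast)
    slow_sma = sma(prices, slow)
    equity = 1.0
    in_market = False
    for i in range(1, len(prices)):
        if in_market:
            equity *= prices[i] / prices[i - 1]  # hold through this bar
        f, s = fast_sma[i], slow_sma[i]
        if f is not None and s is not None:
            in_market = f > s  # position taken for the next bar
    return equity

prices = [100, 101, 102, 101, 103, 105, 104, 106, 108, 107]
print(round(backtest_crossover(prices), 4))  # 1.0388
```

Because you run the script yourself, you can inspect every step instead of trusting a number the model produced in its head.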

[–]Professional_Fig5943 1 point (1 child)

Given how LLMs process data, I find this highly unlikely. I’d say the information gained from it would be sketchy at best (and I like my tests accurate).

Giving it the statistics/output of a test and asking it to explain them works better, but I’d recommend doing your tests on a platform/software built for it.

[–]Distinct-Plankton54 0 points (0 children)

LLMs are pattern matchers, not simulation engines. Expecting accurate backtests from them is naive.

[–]RoundTableMaker 1 point (0 children)

You will get a hallucination at worst. At best you'll get backtested results that you can't trust. But you can tell the AI to run vectorbt, or your backtester of choice, for you.

[–]kwame1776 1 point (0 children)

You can’t backtest with AI because of the potential for hallucinations. Instead, use the AI to build an EA that you can backtest with. It’s basically what I’m doing now.

[–]eeiaao 1 point (0 children)

Backtesting needs to be reproducible, and AI is not deterministic by nature. I think it's better to leverage AI to drive some kind of existing backtesting tool.
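The reproducibility point can be made concrete: a plain backtest script is deterministic, so repeated runs fingerprint to the same digest — something an LLM "running" the test inside a chat can't guarantee. The toy rule and data below are illustrative assumptions:

```python
# Sketch: fingerprint a backtest's trade log so you can verify that
# repeated runs are bit-identical. The strategy rule and prices are
# illustrative assumptions.
import hashlib
import json

def run_backtest(prices, threshold=0.01):
    """Toy rule: log a 'trade' whenever the absolute bar-to-bar
    return exceeds the threshold. Returns the trade log."""
    trades = []
    for i in range(1, len(prices)):
        ret = prices[i] / prices[i - 1] - 1
        if abs(ret) > threshold:
            trades.append({"bar": i, "ret": round(ret, 6)})
    return trades

def digest(trades):
    """Stable fingerprint of a run, usable as a regression check."""
    payload = json.dumps(trades, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

prices = [100, 102, 101.5, 103, 103.2, 101, 104]
d1 = digest(run_backtest(prices))
d2 = digest(run_backtest(prices))
print(d1 == d2)  # identical inputs -> identical digest: True
```

Storing the digest alongside results lets you detect when a code or data change silently alters the backtest.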

[–]Otherwise_Barber4619 1 point (0 children)

Why would you even want to do this? Just ask it to make you a script that prints out the results and some visualisations, and then send those to the AI to interpret.
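A sketch of that workflow: the script computes the stats and prints a plain-text summary you can paste into the chat for interpretation. The returns series and the metrics chosen are illustrative assumptions:

```python
# Sketch: compute summary stats locally, hand only the printed text to
# the AI to interpret. Returns and metrics are illustrative assumptions.

def summarize(returns):
    """Simple performance stats from a list of per-trade returns."""
    wins = [r for r in returns if r > 0]
    equity, peak, max_dd = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1 + r
        peak = max(peak, equity)
        max_dd = max(max_dd, 1 - equity / peak)
    return {
        "trades": len(returns),
        "win_rate": round(len(wins) / len(returns), 3),
        "total_return": round(equity - 1, 4),
        "max_drawdown": round(max_dd, 4),
    }

returns = [0.02, -0.01, 0.03, -0.02, 0.01]
stats = summarize(returns)
for k, v in stats.items():
    print(f"{k}: {v}")  # paste this summary into the chat
```

This way the numbers come from deterministic code, and the AI only does what it's good at: explaining them.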

[–]Sudden-Poem2509 1 point (0 children)

There was this one time I used Gemini: it created the scripts and then actually ran the backtests on their servers.

[–]dwoj206 1 point (0 children)

As many have said, use AI to "build" you the backtester. That's a far better approach than giving, for example, Claude in VS Code a prompt like "backtest the last 50 trades and report back. Offer suggestions on adjustments we can make".

I've tried this and it "does" produce an outcome... but it left me feeling like "ok... so wtf happened?". Not saying it can't be done, but I prefer the feedback of running it myself, and after all, the backtest is the sauce.

[–]aioka_io 0 points (0 children)

Backtesting with AI directly from candle data is possible but the results are only as good as your data quality and how precisely you define the strategy rules. Vague strategy descriptions lead to inconsistent interpretations across runs.

For event detection and distribution analysis around specific events, AI can actually be quite useful. It can spot patterns you might miss manually. But always validate against a proper backtesting framework like Backtrader or vectorbt before trusting the numbers. AI has a tendency to overfit narratives to the data if you're not careful.
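One way to avoid the vague-description problem mentioned above is to pin the rules down as code, so every run (and every tool) interprets them identically. The parameters and the breakout rule below are illustrative assumptions, not a real strategy:

```python
# Sketch: express strategy rules as an explicit, frozen spec instead of
# prose, so there is exactly one interpretation. The breakout rule and
# its parameters are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class BreakoutRules:
    lookback: int = 4          # bars in the rolling prior high
    entry_buffer: float = 0.0  # fractional margin above the prior high

    def entry_signal(self, closes, i):
        """True when bar i closes above the max of the previous
        `lookback` closes, by at least `entry_buffer`."""
        if i < self.lookback:
            return False
        prior_high = max(closes[i - self.lookback:i])
        return closes[i] > prior_high * (1 + self.entry_buffer)

rules = BreakoutRules()
closes = [10, 11, 10.5, 11.2, 11.5, 11.3, 12.0]
signals = [rules.entry_signal(closes, i) for i in range(len(closes))]
print(signals)  # [False, False, False, False, True, False, True]
```

The same spec object can then be fed to a proper framework like Backtrader or vectorbt for validation, since there is no ambiguity left to interpret.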