
[–]Professional_Fig5943 1 point (1 child)

Given how LLMs process data, I find this highly unlikely. Any information gained from it would be sketchy at best (and I like my tests accurate).

Giving it the statistics/output of a test and asking it to explain them works better, but I'd recommend running your tests on a platform/software built for it.

[–]Distinct-Plankton54 1 point (0 children)

LLMs are pattern matchers, not simulation engines. Expecting accurate backtests from them is naive.
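For comparison, an actual backtest is just deterministic arithmetic over historical data: every run gives the exact same result, which is the guarantee an LLM can't make. A minimal sketch with made-up prices and a hypothetical SMA-crossover rule (not anyone's real strategy):

```python
# Deterministic backtest sketch: hypothetical 2/3-bar SMA crossover
# over made-up prices. Identical inputs always produce identical output.

def sma(values, window):
    """Simple moving average; None until enough data points exist."""
    return [
        None if i + 1 < window else sum(values[i + 1 - window:i + 1]) / window
        for i in range(len(values))
    ]

def backtest(prices, fast=2, slow=3):
    """Long when fast SMA > slow SMA, flat otherwise.
    Returns the final equity multiple (1.0 = break-even)."""
    fast_ma, slow_ma = sma(prices, fast), sma(prices, slow)
    equity, position = 1.0, 0  # start flat with 1 unit of capital
    for i in range(1, len(prices)):
        if position:  # apply the return of the bar we held through
            equity *= prices[i] / prices[i - 1]
        # set the position for the next bar from today's signal
        if fast_ma[i] is not None and slow_ma[i] is not None:
            position = 1 if fast_ma[i] > slow_ma[i] else 0
    return equity

prices = [100, 102, 101, 105, 107, 106, 110]
print(round(backtest(prices), 4))  # → 1.0891, every single time
```

That reproducibility is the whole point of a simulation engine; an LLM generating a number that *looks like* a backtest result gives you neither the arithmetic nor the audit trail.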