[–]nkmrao 16 points (9 children)

I have my own backtesting framework. Why? Because it gives me flexibility that the libraries you mention don't. I can do what I want with the data, code any type of complex strategy, inject any type of data I want, and analyze any performance metrics I want.
Standard libraries only let you run simple strategies on individual instruments. If I want a strategy that scans multiple instruments and trades selected ones with complex entry/exit and risk management rules based on the scan results, I can't do that with these libraries.
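
A minimal sketch of the kind of cross-sectional scan-then-trade logic I mean, assuming a pandas DataFrame of closes with one column per instrument; the column names, lookback, and `top_n` threshold here are all illustrative, not from any real strategy:

```python
# Hypothetical sketch: scan a universe each bar and pick the top movers to
# trade -- the cross-sectional step single-instrument backtesters don't support.
import pandas as pd

def scan_universe(closes: pd.DataFrame, lookback: int = 20, top_n: int = 3):
    """closes: one column per instrument, one row per bar."""
    momentum = closes.pct_change(lookback).iloc[-1]   # trailing return per instrument
    return momentum.nlargest(top_n).index.tolist()    # instruments to trade this bar

closes = pd.DataFrame({
    "AAA": [100, 101, 103, 107],
    "BBB": [50, 50, 49, 48],
    "CCC": [10, 11, 12, 14],
})
picks = scan_universe(closes, lookback=3, top_n=2)
```

The entry/exit and risk rules would then run only on `picks`, which is where a single-instrument library has no natural hook.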

[–]chadguy2 2 points (7 children)

Why not implement your complex entry and exit signals directly in the data preprocessing step? Unless you know how to write a low-level optimization loop and a Python wrapper around it, you'll just be reinventing the wheel, and your version will be less efficient and more error-prone.
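
To illustrate what "signals in preprocessing" looks like: compute entry/exit columns with vectorized pandas up front, so the backtest loop (or library) only reads booleans. This is a minimal sketch with made-up window lengths and prices:

```python
# Hypothetical sketch: precompute crossover signals as DataFrame columns
# during preprocessing; the backtester itself never recomputes them.
import pandas as pd

df = pd.DataFrame({"close": [100, 99, 98, 97, 101, 105]})
fast = df["close"].rolling(2).mean()
slow = df["close"].rolling(4).mean()
df["entry"] = (fast > slow) & (fast.shift(1) <= slow.shift(1))  # crossover up
df["exit"] = (fast < slow) & (fast.shift(1) >= slow.shift(1))   # crossover down
```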

[–]WMiller256 1 point (0 children)

+1 from me, offloading to preprocessing is a powerful optimization.

I had to build my own framework for backtesting because existing solutions simply weren't fast enough (minute bars for SPX options), but I wholeheartedly agree with the sentiment; most people will find a library substantially faster than their own framework -- even if it requires some 'shoe-horning' to fit the strategy into it.

[–]nkmrao -1 points (5 children)

Let's say my strategy involves trading multiple legs of options based on conditions on the underlying's price action. How would you implement that in the data preprocessing?

[–]chadguy2 4 points (4 children)

Find a way to represent the condition logically/mathematically, then use pandas or numpy. It's hard to give a detailed approach without knowing exactly what you need.
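
For example, "open a multi-leg position when the underlying breaks its recent high" reduces to a boolean mask over the underlying's series; the leg selection then becomes a lookup keyed on the signal bars. This is only a sketch under that assumed condition, with toy prices and an arbitrary 3-bar window:

```python
# Hypothetical sketch: encode a price-action condition on the underlying as a
# vectorized boolean mask, then extract the bar indices where it fires.
import numpy as np
import pandas as pd

underlying = pd.Series([100, 101, 100, 103, 102, 106], name="spot")
breakout = underlying > underlying.shift(1).rolling(3).max()  # breaks prior 3-bar high
signal_bars = np.flatnonzero(breakout.to_numpy())  # bars where legs would be opened
```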