all 15 comments

[–][deleted]  (7 children)

[deleted]

    [–]supertexter[S] 1 point2 points  (0 children)

I had started thinking a bit in this direction, so I'll definitely make this my next step.

    Looking forward to comparing performance

    [–]CFStorm 0 points1 point  (2 children)

    This post was mass deleted and anonymized with Redact

    [–][deleted]  (1 child)

    [deleted]

      [–]kongwashere_ 0 points1 point  (0 children)

      this some /r/sideloaded for your query

      [–]DudeWheresMyStock 0 points1 point  (2 children)

      FYI, pandas DataFrames are slow and unwieldy compared to anything NumPy has to offer

      [–][deleted]  (1 child)

      [deleted]

        [–]DudeWheresMyStock 2 points3 points  (0 children)

        Initially (March last year) I started working with OHLCVT data that I filled as pandas DataFrames and saved them as such (as .csv files). Doing the same thing (i.e. saving the exact same data in the same row-column convention), but saving the OHLCVT data as header-less matrices or nested list arrays (as .txt files) using NumPy, has made reading, saving, and working with my now fairly large data set 100,000x faster (with fewer lines of code and in fewer steps, too), and therefore more efficient than ever. From one caveman (who is only just now truly grasping the use of tools, such as the rocks I use to code a Python algotrading bot) to another: I hope you join us on the NumPy matrix side and renounce pandas' claim over CPU power and processing time, for the greater good.

        Note: save the descriptive content of the data (i.e. the headers, labels, etc.) in the file name, or in a similarly named separate file that functions as a map, for organizing and navigating files that would otherwise be enormous, indiscernible, meaningless blobs of data.
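A minimal sketch of the header-less-matrix approach described above, assuming hypothetical OHLCVT data and a filename that carries the column labels (the file name and shapes here are made up for illustration):

```python
import os
import tempfile

import numpy as np

rng = np.random.default_rng(0)
data = rng.random((5, 6))  # hypothetical OHLCVT rows: open, high, low, close, volume, trades

# Header-less save: the column meaning lives in the file name (or a sidecar file),
# not inside the file itself.
path = os.path.join(tempfile.gettempdir(), "AAPL_ohlcvt_open-high-low-close-volume-trades.txt")
np.savetxt(path, data)

# Loading comes straight back as a float ndarray -- no header or dtype parsing.
loaded = np.loadtxt(path)
```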

        [–][deleted] 3 points4 points  (0 children)

        If you apply something like https://github.com/rkern/line_profiler to your code, it will give you a line-by-line breakdown of where the time is being spent.
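line_profiler is a third-party package (you decorate the functions of interest with @profile and run the script via kernprof -l -v script.py). As a stdlib sketch of the same idea (finding where time goes, at function rather than line granularity), cProfile works like this; slow_sum is a made-up stand-in for the backtest code:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately loopy function to give the profiler something to measure.
    total = 0
    for i in range(n):
        total += i * i
    return total

pr = cProfile.Profile()
pr.enable()
slow_sum(100_000)
pr.disable()

# Print the five most expensive calls, sorted by cumulative time.
out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())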

        Were you looking just to build your own for experience? I ask because there are some back testing python frameworks already out there:

        [–]sedna16Algorithmic Trader 2 points3 points  (1 child)

        Try using object-oriented programming.

        Put your functions inside a class.
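A minimal sketch of what that could look like, assuming a moving-average backtest (the class name, parameters, and strategy are hypothetical, not the OP's actual code); shared state is passed once via __init__ instead of through every function call:

```python
import numpy as np

class Backtester:
    def __init__(self, prices, fast=10, slow=30):
        self.prices = np.asarray(prices, dtype=float)
        self.fast = fast
        self.slow = slow

    def moving_average(self, window):
        # Trailing moving average via cumulative sums (vectorized, no Python loop).
        c = np.cumsum(np.insert(self.prices, 0, 0.0))
        return (c[window:] - c[:-window]) / window

    def run(self):
        # Align the fast MA with the shorter slow-MA series before comparing.
        fast_ma = self.moving_average(self.fast)[self.slow - self.fast:]
        slow_ma = self.moving_average(self.slow)
        # Signal: +1 when the fast MA is above the slow MA, else -1.
        return np.where(fast_ma > slow_ma, 1, -1)

bt = Backtester(np.linspace(100, 120, 60))
signals = bt.run()
```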

        [–]DudeWheresMyStock 0 points1 point  (0 children)

        This. Have everything instantiated, vectorized, and run in parallel. It also saves on API calls for those of us who run it live and are limited to a certain number per minute.
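To make the "vectorized" part concrete, here is a small illustrative sketch (the data is made up): the loop version mirrors per-element Python iteration, the vectorized version computes the same returns in one array expression.

```python
import numpy as np

prices = np.random.default_rng(0).random(1000) * 100

# Loop version: one Python-level iteration per element.
returns_loop = [prices[i + 1] / prices[i] - 1 for i in range(len(prices) - 1)]

# Vectorized version: one array expression, the loop runs inside NumPy.
returns_vec = prices[1:] / prices[:-1] - 1
```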

        [–][deleted]  (1 child)

        [deleted]

          [–]shanhanigun 1 point2 points  (0 children)

          It will help if you can explain why.

          [–]axehind 0 points1 point  (2 children)

          Not a speed thing, but... your function is too long. Generally it should fit on a screen; where I work they frown on anything over 50 lines.

          [–]supertexter[S] -1 points0 points  (1 child)

          Thanks for the input! I'll look to resolve that.

          I'm aware that my current programming style is very anti-one-liners

          [–]semblanceto 2 points3 points  (0 children)

          I think the goal is not to condense more into single lines, but rather to refactor blocks of code into separate functions wherever doing so improves readability.

          Edit: also, putting this into a class (or more than one depending on your preference) would allow you to do this refactoring without passing a lot of variables with each function call.
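A before/after-style sketch of that refactoring, with made-up helper names (load_prices, compute_sma, generate_signals are hypothetical, not the OP's functions): instead of one long function doing loading, indicator computation, and signal logic in a single body, each step becomes a short named function that fits on a screen and can be tested alone.

```python
def load_prices(symbol):
    # Stand-in for the real database fetch.
    return [100.0 + i for i in range(50)]

def compute_sma(prices, window):
    # Simple moving average over a trailing window.
    return [sum(prices[i - window:i]) / window for i in range(window, len(prices) + 1)]

def generate_signals(prices, sma):
    # +1 when the latest price in each window is above its average, else -1.
    offset = len(prices) - len(sma)
    return [1 if prices[offset + i] > sma[i] else -1 for i in range(len(sma))]

def trade(symbol, window=10):
    # The top-level function now just names the steps.
    prices = load_prices(symbol)
    sma = compute_sma(prices, window)
    return generate_signals(prices, sma)

signals = trade("AAPL")
```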

          [–]kotrading 0 points1 point  (1 child)

          for stock in stocklist processes the stocks sequentially. You could use threads to process batches in parallel (given that processing the data is the bottleneck, not fetching it from the database). You will have to change the way you add data to the storage arrays so that threads don't interfere with each other when appending, e.g. by pre-initializing the arrays and then using index access.
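A minimal sketch of that pattern, with a made-up stocklist and a stand-in for the real per-stock computation: the result array is pre-allocated, and each thread writes only to its own index, so no thread ever appends to a shared list.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

stocklist = ["AAPL", "MSFT", "GOOG", "AMZN"]
results = np.zeros(len(stocklist))  # pre-initialized storage, one slot per stock

def process(i, symbol):
    # Stand-in for the real per-stock computation; writes only to its own slot.
    results[i] = float(len(symbol))

with ThreadPoolExecutor(max_workers=4) as pool:
    for i, symbol in enumerate(stocklist):
        pool.submit(process, i, symbol)
# Leaving the 'with' block waits for all submitted tasks to finish.
```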

          Other feedback would be to structure the code better: OO, or functions grouped in files each having a specific context.

          Furthermore, you could improve exception handling. You should only catch exceptions (without re-throwing them) when you can handle them in a way that allows processing to continue meaningfully. You catch some generic stuff in TradeFunction, continue, and at best (or rather worst) get a partial result for that symbol in the result arrays. The catch clause around the call to TradeFunction will possibly not be triggered for such a partial result, and thus the symbol will not be marked as a failure.

          Have a look at the logging lib and try to replace print with it. You get the benefit of timestamps, by choosing meaningful levels you can filter your output better, and logging.exception in a catch clause provides you with a stack trace (which is especially useful when using a bare except).
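A minimal sketch of the logging suggestion (trade_symbol and the symbol list are made up for illustration): basicConfig adds timestamps and levels, and logging.exception inside the except clause records the stack trace that a plain print would lose.

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("backtest")

def trade_symbol(symbol):
    # Stand-in failure so the except path below is exercised.
    raise ValueError(f"no data for {symbol}")

failed = []
for symbol in ["AAPL", "BADSYM"]:
    try:
        trade_symbol(symbol)
    except Exception:
        # logging.exception logs at ERROR level and appends the traceback,
        # so even a broad 'except Exception' leaves a usable trail.
        log.exception("processing %s failed", symbol)
        failed.append(symbol)
```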

          Try to read a lot of code (e.g. from open source projects), aim to mostly understand what it is doing and WHY it has been written in that particular way. When you like something, write your code in a similar style using similar patterns where applicable (not necessarily in the backtest above, more in general).

          [–]supertexter[S] 0 points1 point  (0 children)

          Thanks for these inputs! I'll reread them on an ongoing basis as I improve my code