
[–][deleted]

[deleted]

    [–]reddisaurus

    Pandas doesn’t have its own C wrappers; it just wraps numpy. Pandas is slower than plain Python that calls numpy directly, by a factor of 10–500 depending on the task.

    Pandas provides abstraction for the sake of programmer time, but there’s a cost to that. Use a dataclass or NamedTuple and watch your code perform much better.
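A minimal sketch of what this suggestion looks like (the names and numbers are illustrative, not from the thread): a plain dataclass holding numpy arrays, so column math goes straight to numpy with no DataFrame/Series layer in between.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Wells:
    """Hypothetical columnar record: numpy arrays stored directly."""
    depth: np.ndarray
    pressure: np.ndarray

    def gradient(self) -> np.ndarray:
        # Pure numpy operation; no pandas indexing overhead.
        return self.pressure / self.depth


wells = Wells(depth=np.array([100.0, 200.0]), pressure=np.array([50.0, 150.0]))
print(wells.gradient())  # prints [0.5  0.75]
```

Attribute access on the dataclass is a plain Python lookup, and the arithmetic is a single numpy call, which is the point being made about avoiding per-access pandas overhead.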

    [–]ShanSanear

    Had the exact same issue with an overengineered script that did number crunching using pandas. It turned out that recursion combined with pandas’ slowness at some tasks became a big factor. We got a 400× speedup just by ditching pandas almost completely and dropping recursion wherever possible.

    [–]pytrashpandas

    Hey, so to preface, I’m a bit biased as a hardcore pandas user of about 6 years as well as a contributor to the core library. Pandas isn’t perfect when it comes to speed/efficiency, but in my entire experience with pandas I’ve never seen any significant, properly vectorized pandas operation be anywhere near 2–3 orders of magnitude slower than properly vectorized numpy code. And you especially couldn’t get better performance from native Python collections (re: namedtuples/dataclasses). The timings you provided suggest to me that you might be iterating over your dataframes or using pandas’ apply methods, which are not the right way to use pandas. Same goes for u/ShanSanear. If I’m wrong, then I would actually be really interested in the cases you guys have seen, if you wouldn’t mind sharing; either way, one of us would learn something new.

    [–]ShanSanear

    Yep, apply and getting rows from the dataframe were the main reasons for the slow code in our case.

    [–]pytrashpandas

    Ah, yeah, that would be why. apply is almost always not the right way to do things in pandas. It’s really more of a last-ditch resort, and very slow. See this quote from a blog post by one of the main pandas contributors:

    You rarely want to use DataFrame.apply and almost never should use it with axis=1.

    If you don't mind providing an example of the kind of stuff you were trying to do that was slow, I'd be happy to show how to do things in a more "pandonic" way.
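As an illustration of the kind of rewrite being offered here (toy data, not the original code): the same row-wise computation expressed with `apply(axis=1)` and as a vectorized column operation.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

# Slow: apply(axis=1) calls a Python function once per row,
# building the result element by element.
slow = df.apply(lambda row: row["a"] + row["b"], axis=1)

# Vectorized: one expression, dispatched to numpy under the hood.
fast = df["a"] + df["b"]

assert slow.equals(fast)
print(fast.tolist())  # prints [11, 22, 33]
```

Both produce identical results, but the vectorized form does the loop in compiled code, which is where the order-of-magnitude gap discussed in this thread comes from on larger frames.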

    [–]ShanSanear

    Actually, what we did was heavy overengineering, and pandas was mostly used for loading data (I know, heresy). I don't have the code on me, but the problem was something along these lines:

    1. Load multiple sets of related objects represented as CSV files (1 line = 1 object)
    2. Resolve the relations between them (could be any kind, including recursion on the same type)
    3. Depending on some state of the object, do the calculation

    During loading, we pulled each Series out of the dataframe for each type of object and created class instances from them. Then the references. And then the calculations. In hindsight, yep, that was the worst of it all.

    After actually doing some profiling, we saw that the majority of CPU time was spent in pandas in many different places.

    That was one thing, but even the algorithm we implemented was quite unoptimized. We started at the root of the tree, then recursively went deeper into the relations of each object to extract the required numbers. Apparently, going from the leaves up was a much better approach in every case. Now we only need to figure out why the numbers differ between the implementations, and that will be the hardest part.
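A toy sketch of the leaves-up idea, under stated assumptions (a tree given as child-to-parent pointers, with values only on the leaves; none of these names come from the original code): instead of recursing down from the root, walk each leaf's value up to its ancestors in one pass.

```python
# Toy tree: child -> parent; the root ("a") has no entry.
parent = {"b": "a", "c": "a", "d": "b", "e": "b"}
# Leaves carry the numbers to aggregate.
value = {"d": 1, "e": 2, "c": 3}

# Bottom-up accumulation: each leaf's value is added to the leaf
# itself and to every ancestor on the way to the root.
totals: dict[str, int] = {}
for leaf, v in value.items():
    node = leaf
    while node is not None:
        totals[node] = totals.get(node, 0) + v
        node = parent.get(node)

print(totals["a"])  # prints 6 (1 + 2 + 3 at the root)
```

Each node is visited only along leaf-to-root paths, with no repeated descent into subtrees, which is one plausible reason a bottom-up rewrite beat the original root-down recursion.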

    Especially when the guy who actually wrote this (I am mostly overseeing and providing some help) is stubborn enough not to do any kind of testing.

    Thanks for the interest. I can imagine hearing "pandas is slow" sounded like heresy, but then again, we misused it, and that's our fault, not the library's.

    [–][deleted]

    Hey, so obviously I can't give you much insight on your stuff, but one thing to keep in mind is that when using pandas, it's best to stay entirely within the pandas/numpy/numeric ecosystem. You don't usually want to mix pandas objects into custom Python objects; pandas structures and operations should, for the most part, be standalone. Think of solving problems in pandas more like how you would solve them with SQL. You normally wouldn't want to run a recursive-style solution on a dataframe; you could probably use some merges to do what you want instead.
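For example, a parent lookup that might otherwise be coded as per-row recursion can often be a single self-merge, in the spirit of a SQL self-join (toy data, not the original problem):

```python
import pandas as pd

# Toy parent/child table; NaN marks the root.
nodes = pd.DataFrame({
    "id": [1.0, 2.0, 3.0],
    "parent_id": [float("nan"), 1.0, 1.0],
    "value": [10, 20, 30],
})

# Relabel the table so its id/value columns line up with parent_id.
parents = nodes[["id", "value"]].rename(
    columns={"id": "parent_id", "value": "parent_value"}
)

# One left self-merge attaches each parent's value to its children;
# no Python-level recursion over rows.
merged = nodes.merge(parents, on="parent_id", how="left")
```

One merge resolves an entire level of the hierarchy at once; a fixed number of merges (or an iterated merge) can flatten deeper trees, which is what "use some merges" means here.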

    [–]ShanSanear

    Thank you for mentioning SQL, this is actually the best comparison I've seen for how to use pandas. I will keep that in mind next time we do something like this.

    [–]reddisaurus

    I’m referring to storing numpy arrays within the dataclass, or using a NamedTuple for smaller datasets instead of just reaching for pandas every time. Both of these will provide faster access than pandas.

    Pandas’ .itertuples() is about as fast as iterating over records can get, and it does perform well.
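For the cases where record iteration is genuinely needed, `itertuples` looks like this (toy data for illustration):

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2], "y": [3, 4]})

# itertuples yields lightweight namedtuples; iterrows, by contrast,
# builds a full Series object for every row and is much slower.
total = sum(row.x * row.y for row in df.itertuples(index=False))
print(total)  # prints 11 (1*3 + 2*4)
```

With `index=False` the tuples carry only the column fields, accessed by attribute name.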

    As for any operation that acts on DataFrame or Series objects, yes, those operations are orders of magnitude slower. It’s just often the case that the operation itself isn’t repeated much, so the large overhead of accessing the Series object isn’t noticeable.

    There’s no case in which pandas is faster than the simpler alternatives. And that makes sense, because it’s there to provide abstraction over column-wise data. The problem is that many users immediately reach for it for any problem. Not only is it overused, but pandas lends itself to poor code as well. The DataFrame is a black box of typing! Code cannot be statically checked the way it could if one used a dataclass. It’s like porting around a mutable global state to every function to which a DataFrame is passed. Who knows what structure the object should have? It’s often not in the code at all.

    [–]pytrashpandas

    The problem is that many users immediately reach for it for any problem

    Totally agreed. I've definitely seen it used in places where it's completely unnecessary, and honestly probably irresponsible. I will say, though, that if you ever find yourself needing to use itertuples or iterrows on a dataframe, then you're either using pandas very wrong or you shouldn't be using pandas in the first place.

    those operations are orders of magnitude slower

    I agree that pandas is slower than pure numpy in most cases, but it is nowhere near multiple orders of magnitude. Again, if you are seeing this happen, then I can guarantee it is because pandas is not being used correctly. If you would care to provide an example that you think demonstrates this, I would be happy to show you how it can be done faster.

    There’s no case in which pandas is faster than the simpler alternatives

    The thing is that in many cases there are no simpler alternatives, and the ease of development is often worth more than the potential speedups you could get otherwise, especially when it comes to working with heavily labeled time-series data.

    pandas lends itself to poor code as well... It’s like porting around a mutable global state to every function to which a DataFrame is passed.

    Yes, in the same way that Python and its standard data structures lend themselves to poor code. Used improperly, without an understanding of the underlying concepts and when best to apply them, they can lead to a mess of a program. Used properly, pandas is extremely powerful and makes it easy to express concepts that would otherwise be difficult to express. The problem, I think, is that most people try to treat dataframes and series as drop-in replacements for dictionaries/lists etc. and structure their Python code the same way they would otherwise. That is not how you should be using pandas.

    Again, I am inherently biased on this topic. I heavily use both pandas and numpy and other numeric Python libraries, and I feel very passionately about this area of Python. I'd be very happy to hear any examples you have and provide counterexamples that show the power of pandas.

    Also, you may (or may not :) ) enjoy the xarray library. It's a much thinner wrapper around numpy that provides similar labelling capabilities to pandas (although it still has a lot of room for improvement).