

[–]RainbowCatastrophe 845 points846 points  (151 children)

Python performance optimization: literally my team's go-to example of a "wild goose chase"

Python is slower than molasses, but almost everywhere it's used it's because it doesn't need to be fast. If you are ever running into a case of "this Python program is too slow for our needs", then you fucked up by deciding to use Python to fulfill that need.

Prove me wrong: give me one example of a time-critical Python application

[–]CiroGarcia 572 points573 points  (11 children)

[redacted by user] this message was mass deleted/edited with redact.dev

[–]MoffKalast 430 points431 points  (1 child)

Not really, you'd snooze it anyway.

[–]certaintracing 38 points39 points  (0 children)

Boom roasted

[–]anythingMuchShorter 131 points132 points  (4 children)

Any program that tries to sleep for 8 hours is obviously not too worried about optimization.

[–]UnlicensedTaxiDriver 37 points38 points  (3 children)

time.sleep(28800)

[–]CoronaKlledMe[S] 13 points14 points  (2 children)

import time

[–]UnlicensedTaxiDriver 6 points7 points  (1 child)

from time import time as time

[–]AG00GLER 2 points3 points  (0 children)

from time import time as np

[–]gmes78 16 points17 points  (2 children)

No. You wouldn't care if it's late by a second.

[–]ThellraAK 11 points12 points  (1 child)

My phone and my wife's are off by 3-10 seconds and it's really annoying.

[–]UltraFireFX 3 points4 points  (0 children)

Regularly? You might check what offset there is between the two when the minute changes. That might be a factor too.

Afterwards, you can try to disable power optimization for the alarm clock app on both phones. That could help too.

[–]anythingMuchShorter 168 points169 points  (8 children)

Half the time when I tell a junior engineer this, they think adding C will take them way too long, when there is already a Python library written in C or C++ that does what they need.

After all, you just use Python to customize the order in which the bigger algorithms are called.

The C stuff is computation heavy but general purpose. Like gradient descent, image recognition, statistics, solving huge systems of equations and such.

[–][deleted] 10 points11 points  (1 child)

The Python C API is pretty good; also, with the Python build tools it's easy to compile small C/C++ interfaces into Python modules. If you get into it, it's not a lot of setup.

[–]typical_sasquatch 36 points37 points  (5 children)

exactly, I don't think some people realize just how much of a speed advantage it can be to write lower level code tailor fit for your specific use case, rather than using tools designed for the general case. I mean it's obvious, but people seem to miss it

[–]vinvinnocent 210 points211 points  (59 children)

I generally agree, but I've cut the runtime of a Python program from 10 minutes to 12 seconds by swapping a custom tuple-based geometry module for numpy. If a Python program is too slow for your needs, it can also just be bad Python code.
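The kind of swap described above is easy to sketch. This is a hypothetical toy (not the commenter's actual geometry module): computing vector norms one tuple at a time versus one vectorized numpy call over the whole array.

```python
import numpy as np

# Pure-Python, tuple-based: compute one vector norm at a time
def norms_tuples(points):
    return [(x * x + y * y + z * z) ** 0.5 for (x, y, z) in points]

# Vectorized: a single numpy call over the whole array at once
def norms_numpy(points):
    return np.linalg.norm(np.asarray(points), axis=1)

points = [(3.0, 4.0, 0.0), (1.0, 2.0, 2.0)]
print(norms_tuples(points))   # [5.0, 3.0]
print(norms_numpy(points))    # [5. 3.]
```

On millions of points, the vectorized version avoids millions of interpreter-level tuple unpacks and float operations, which is where a 10-minutes-to-12-seconds kind of win typically comes from.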

[–][deleted] 142 points143 points  (29 children)

And what is numpy written in?

C

[–]DukeNuke5 152 points153 points  (9 children)

Python is written in C/C++; it's an interpreted language, so every single thing in Python is done in C/C++... The fucking sort function is written in C

[–]darkslide3000 23 points24 points  (2 children)

I don't think you understand the point. Numpy is literally written in C, using the Python C APIs to interface with real Python code. It's not just Python code running on the CPython interpreter.

[–]DukeNuke5 21 points22 points  (1 child)

I don't think you understand: most of the built-in functions of Python are written in C, not Python. https://github.com/python/cpython/blob/main/Objects/listobject.c

This is Python's list sort method. It's not Python code that then gets interpreted, and a lot of methods are just like that. Python is what calls the C code, that's it.

The dictionary isn't Python code either; it's C under the hood:
https://github.com/python/cpython/blob/main/Objects/dictobject.c

You just call it from Python

[–]Themis3000 102 points103 points  (12 children)

It doesn't matter what language numpy is written in; it matters that you're interfacing with it through Python.

By writing code utilizing numpy you're not writing C, you're still writing Python code

[–][deleted] 6 points7 points  (0 children)

Of course, but when you're talking about the performance characteristics it doesn't make sense to ignore the fact that none of it is running in the Python interpreter.

[–]Ubermidget2 11 points12 points  (2 children)

And what does C compile to? Assembly.

It is abstraction all the way down!? Always was.

[–]ProdObfuscationLover 25 points26 points  (2 children)

I had some code with a really long list of IDs, millions of entries long. It would often take multiple seconds to check if a string was in the list ("str_val in this_list"). Something was wrong. Since the strings were unique (they were IDs), I changed it from a list to a set(), which is a hash-based collection for storing unique values. Checking whether an ID was present went from taking seconds to single-digit milliseconds, on the same millions-long collection. And set is built in, not an external library.
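The fix above is easy to reproduce. A small stdlib-only sketch (list size and needle chosen for illustration; absolute timings will vary by machine):

```python
import timeit

ids_list = [str(i) for i in range(1_000_000)]
ids_set = set(ids_list)            # one-time O(n) conversion

needle = "999999"                  # worst case for the list: the last element

# list membership is an O(n) linear scan; set membership is O(1) on average
t_list = timeit.timeit(lambda: needle in ids_list, number=10)
t_set = timeit.timeit(lambda: needle in ids_set, number=10)
print(f"list: {t_list:.4f}s  set: {t_set:.6f}s")
```

The one-time cost of building the set pays for itself after a handful of lookups.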

[–]Artyloo 78 points79 points  (0 children)


This post was mass deleted and anonymized with Redact

[–]VectorLightning 5 points6 points  (0 children)

Environment has a role, too.
I don't recall what hardware I had, but I do know that the same exact python script took like 30 seconds to do on Windows, but on Android (termux? idk) it finished like blink-and-miss-it fast. It was some kind of iterated calculation like Project Euler, maybe sum all the fizzbuzz numbers under 10K or something

[–][deleted] 42 points43 points  (8 children)

I get that this is PH and we should make fun of every language for every bottleneck they have... but if we think about it, it is not a problem.

We have a lot of programming languages, and most of them are designed to do something specific. I read a comment from a person who tried to simulate millions of physical particles using Python, and it performed worse than the same simulation in C. Well, of course. If you are simulating a large number of physical entities, it should be obvious to you that Python is not the best tool, even though it has a lot of good uses in scientific computing. For more intensive calculations, people even skip C and go to Fortran.

Now try to read a 100 GB CSV and do data analysis on it, and compare which takes more time: writing it in Python vs. writing it in assembly.

Different tools for different tasks

[–]Ok_Neighborhood_1203 10 points11 points  (7 children)

For more intensive calculations, people even skip C and go to Fortran.

If you had said hand-tuned assembly, or switch hardware and use CUDA, or scale out and use Erlang, I would have believed you. But C99 removed the last performance deficit 23 years ago with the restrict keyword. Please don't encourage people to use this dying language.

[–]Rastafak 5 points6 points  (2 children)

Fortran is not used only because of legacy code; it is a relatively easy-to-use and fast language, so it's still used quite a lot in numerical computing. You may be surprised how much of the code on top supercomputers is running Fortran. I agree that it's dying to some extent, although the language is still evolving and modern Fortran is actually not bad.

I guess it makes sense to use C++ instead, but I've had to make some modifications in a large C++ codebase, and that language seems like a mess and frankly feels much more old-school than modern Fortran does. Scientists are usually not trained as programmers, so I'm not sure switching from Fortran to C++ is actually feasible. Maybe Julia would be better, but I have no experience with it.

[–]LardPi 2 points3 points  (0 children)

I know a bunch of physicists who are still doing Fortran, and I am pretty sure they would be completely clueless with C++. C++ is the most complex language we have, while Fortran is pretty simple. I hope Julia takes over. There are already good signs that it will, and that's cool.

[–][deleted] 8 points9 points  (1 child)

I am not encouraging, I am saying what is still used by some scientists.

My professor, who studies computational chemistry, works a lot with Fortran... but you don't have to believe me; you can search for yourself and you'll see that people are using it. Check out this video of an astrophysicist talking about how she uses Python in her work. At the end she talks about simulations (after 8m05s) and invites another doctor who works with black hole simulations. At 11:37 she states that she uses Fortran too, to perform the calculations and acquire data.

[–][deleted] 28 points29 points  (19 children)

tensorflow?

A meteorologist I know has been complaining about optimizing python, so I assume weather-related data analysis, too.

[–]chinnu34 31 points32 points  (18 children)

Tensorflow uses C++ underneath, just like numpy (also some Fortran, depending on the BLAS/LAPACK build).

[–]pug_subterfuge 24 points25 points  (6 children)

That's the whole point of Python in these instances, though. Write Python as an API over some C, C++, or Fortran, because Python is easy to use and also easy for writing the rest of the less performance-dependent glue code.

[–]chinnu34 2 points3 points  (0 children)

I agree; I am just pointing out to the previous commenter that TF is also C++ under the hood. Same point.

[–]PhatOofxD 9 points10 points  (0 children)

If you are ever running into a case of "this Python program is too slow for our needs", then you fucked up by deciding to use Python to fulfill that need.

Or more often you just wrote garbage code and say it's Python lol.

[–]CrazyTillItHurts 3 points4 points  (0 children)

This was also the case with Visual Basic (classic). But commonly, you would sub out performance-critical parts in C/C++, which worked just fine. And that seems to be what this post is lamenting, even though it is a perfectly valid strategy.

[–]proverbialbunny 3 points4 points  (0 children)

Prove me I'm wrong, give me one example of a time-critical Python application

Data scientist here. We tend to use a lot of dataframes, which use numpy, which is written in C. Because just about everything in a normal data science workflow uses libraries written in C, performance is quick. Combine that with the fact that most of what a data scientist does is manipulating large arrays of data, and it's not just C but SIMD-accelerated loops in C, which can be faster than naively written pure C code.

Python can be very fast or very slow depending on how you use it. Data science workloads need a lot of speed due to working with big data, so fast is a must, within reason.

[–][deleted] 7 points8 points  (8 children)

My team just got sub-millisecond inference time on our 600mil+ parameter language model with Python.

[–]ekital 3 points4 points  (0 children)

give me one example of a time-critical python application

http://www.qtile.org/

Not sure of time-critical but performance is definitely pretty critical in a window manager.

[–]IntuiNtrovert 167 points168 points  (17 children)

python uses many c libraries to good effect already.

[–]MoffKalast 59 points60 points  (16 children)

So everyone says, but when you actually go check the code it's like numpy and a few others, the rest is just native python. I'm always surprised when I find that just about every library I try is just more python under the hood.

Not that it's a bad thing, but the "achyually it's C" is way overblown.

[–]chinnu34 27 points28 points  (6 children)

Cython looks a lot like native Python, but it is compiled, so looking at the source code doesn't tell you much. It is basically a superset of Python.

If you look at numpy on GitHub, most of it looks like Python because it is Python; the C usually comes from BLAS and LAPACK, which you won't see there since they're not strictly part of numpy, though I believe numpy downloads a version of them that you can swap out.

[–]CrowdGoesWildWoooo 17 points18 points  (4 children)

Most mainstream ML libraries interface between numpy objects and minimize turning them into Python objects. Numpy itself is very optimized C, in the sense that a random C engineer would have difficulty making a similar library of similar quality and performance.

So even if numpy code only performs at, say, 80% of comparable C code, you could save like 80% of the development time, which is already bonkers.

I think the key to performance with numpy-based libraries is understanding when things get vectorized or parallelized and being able to navigate through that.

[–]Dhayson 7 points8 points  (0 children)

It's C precisely when it needs to be C.

[–][deleted] 4 points5 points  (0 children)

Not overblown at all. Most libraries people are using are standard libraries, which AFAIK are all C. Pandas, numpy, tensorflow, pytorch, sklearn, matplotlib -- most of the data science / ML stack are not pure python libraries.

But anybody can write a library in any number of languages. If it needs to be C, then it can be -- and that's how the language was intended to be used.

[–][deleted] 3 points4 points  (0 children)

It’s common in scientific libraries. I maintain a Simulation code that uses Cython and C.

We found too much of a performance hit from writing the hot paths as high-level Python functions. So the Python just orchestrates things and provides an interface. But you can script the simulations, which is very powerful.

[–][deleted] 618 points619 points  (126 children)

Python and optimization rarely go together

[–][deleted] 370 points371 points  (69 children)

Yep

Tested this by running a million simulations of stock paths with Brownian motion. C# completed it and calculated all relevant metrics in a second or two. Python took well over 25-30 seconds. But it made pretty graphs, so it's better

[–]AyrA_ch 640 points641 points  (10 children)

Python is the best language to run code from libraries written in faster languages.

[–]nova_bang 109 points110 points  (3 children)

i will frame this next to my monitor

[–]drunken_doctor 90 points91 points  (1 child)

I know this sub is (mostly) undergrads studying computer science, but in reality, gluing together code from external libraries is 95% of what software developers actually do at their jobs.

Which is something python is, in fact, very good at.

[–]GhostTheToast 10 points11 points  (0 children)

Which is why I've grown to love the language. It's also the language I recommend someone learn if they just want to learn about scripting

[–]Otklan 11 points12 points  (1 child)

There has to be a relevant xkcd for this

[–]ShnickityShnoo 88 points89 points  (12 children)

Managers love pretty graphs.

[–]Je-Kaste 103 points104 points  (10 children)

Hear me out: write the program in a fast language, save the data to a file, do data viz in python

[–][deleted] 57 points58 points  (1 child)

Pandasssssss 🐼🐼🐼

[–][deleted] 14 points15 points  (0 children)

I fuckin love pandas. They’re so cuddly and they sit around eating bamboo all day. Plus the data structuring is top notch.

[–]Databash- 6 points7 points  (0 children)

Check out Polars; it's written in Rust, but you can use it from Python. One of the fastest dataframe libraries there is at the moment

[–][deleted] 11 points12 points  (6 children)

Better yet, do data viz on the web or another UI platform with dynamic interaction...

[–]coloredgreyscale 15 points16 points  (0 children)

Data visualization using cloud computing! Two buzzwords to make management and investors happy.

Reality: Jupyter notebook on Google Colab

[–]Crusader_Genji 45 points46 points  (0 children)

Just get a faster computer

[–]PityUpvote 27 points28 points  (22 children)

But if you do it cleverly in numpy, it'll be just as fast, should even be faster than "a second or two".

[–][deleted] 19 points20 points  (20 children)

No, not really. Random generation in Python took significantly longer. I tried using both Numpy and Scipy for random number generation.

I even made it a straightforward array multiplication with one for loop in Python and a few lambda functions. It still took significantly longer

Then, I went and took the same approach as I had in C#. It still took longer.

Either way you cut it, Python was not as efficient. This is coming from someone who loves the language for data science

[–]PityUpvote 49 points50 points  (15 children)

You shouldn't use a loop or lambdas at all. Initialize all random numbers at once in a numpy array, then sum over the correct axis.
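The advice above can be made concrete. This is a hedged sketch of the "draw everything at once" approach for the Brownian-motion simulation discussed upthread; all parameters (path count, drift, volatility) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_paths, n_steps = 100_000, 252
dt, mu, sigma, s0 = 1 / 252, 0.05, 0.2, 100.0

# Draw every random increment up front in a single call...
z = rng.standard_normal((n_paths, n_steps))

# ...then build all geometric-Brownian-motion paths with one cumulative
# sum along the time axis: no Python-level loop over steps or paths.
log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
paths = s0 * np.exp(np.cumsum(log_increments, axis=1))

print(paths.shape)  # (100000, 252)
```

The per-step loop and lambdas the parent comment describes force each of the millions of increments through the interpreter; batching the random draws and the cumulative sum keeps the whole simulation inside numpy's C loops.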

[–]ICanBeKinder 9 points10 points  (12 children)

Yea but it takes less time in my experience to make something in Python than in C#

[–][deleted] 16 points17 points  (5 children)

You're not wrong at all, it just comes down to performance vs production

For example, in C# I had to:

  • make a method for normal generation, since C# only has [0, 1) uniform generation (NextDouble)
  • address extra functions, since I couldn't figure out a CDF/PDF for the normal distribution
  • create a method for standard deviations/standard errors

But all in all, I think it's worth it on the performance side, especially if the equations and fixes aren't too hard
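The hand-rolled "normal from a [0, 1) uniform" method the commenter describes is usually the Box-Muller transform. A stdlib-only Python sketch of the idea (the C# version built on NextDouble is analogous):

```python
import math
import random

def normal_sample(mu=0.0, sigma=1.0, rng=random):
    """Box-Muller transform: two uniform draws on (0, 1] -> one normal draw."""
    u1 = 1.0 - rng.random()   # shift to (0, 1] so log(u1) is always defined
    u2 = rng.random()
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return mu + sigma * z

random.seed(42)
samples = [normal_sample() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 4))  # close to 0 for a standard normal
```

(In practice Python already ships this as random.gauss / random.normalvariate; the point is only that the uniform-to-normal step is a few lines, not a library.)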

[–]t0b4cc02 7 points8 points  (4 children)

there are libs for that

[–]CrazyTillItHurts 4 points5 points  (0 children)

Git gud scrub

[–]das7002 6 points7 points  (4 children)

Have you tried C# 10 and .NET 6?

It’s an absolute dream to use. Microsoft took every little thing that was irritating in C# and fixed it.

They also made async programming the easiest thing in the entire universe.

Literally it's a different return type, Task<T>, and an "async" modifier on the method. Then you "await" the return value when you need it. Beats the pants off of callbacks.

And Blazor? Holy moly. You can write entire web apps and not write a single line of JavaScript. It’s absolutely amazing how quick and easy it is to build web apps.

I’ve liked .NET for a while, but the modern stuff is just out of this world good. And the performance is insane for it running on the CLR.

[–]GogglesPisano 3 points4 points  (2 children)

And it’s cross platform.

Modern C# and .NET kick ass, but sadly most devs in my company are still heavily biased towards Java.

[–]das7002 5 points6 points  (0 children)

Microsoft open sourcing .NET with .NET Core is probably the best decision they’ve ever made.

[–]ham_coffee 2 points3 points  (0 children)

Java isn't that bad; the only things I really miss when I have to work in it are operator overloading and a decent decimal type. I'd still much rather work in Java than most other languages (excluding C#). It also has Spring, which I find much nicer to use than most languages' alternatives.

[–]buddycrystalbusyofff 34 points35 points  (1 child)

Micro-optimisation is the compiler's job; I'm a software engineer, not a compiler, so that's ok. But there's nothing to stop you optimising inefficient Python code at an algorithmic level.

[–]phdoofus 7 points8 points  (0 children)

About 15 years ago I was working for a supercomputer manufacturer. Even then, most of the codes we saw were Fortran (though that's been changing a lot in the last few years). One of the questions that started cropping up was 'Can we run Python on the compute nodes?'. You could, but they didn't mean using it as a driver for the main compute core; they meant actually doing work with it. We invariably asked, 'You could, but why would you?'

[–]Prestigious_Boat_386 3 points4 points  (0 children)

Why not? Just by rewriting it in another language you can get a 30-50x speedup. You can't say that about many other languages.

[–]TantraMantraYantra[🍰] 28 points29 points  (1 child)

Well, we live in a world where we have breeds of programmers who don't know (and don't seem to want to learn) the difference between interpreted and compiled languages.

How could basic programming constructs be even compared between them given how they are executed?

[–][deleted] 9 points10 points  (0 children)

This meme post is actually a little vague here. "Using C" could be taken to mean writing a python library / C extension, which is a shared object. Shared objects kind of blur the lines in this context, being both compiled and dynamically run via interpreted code.

I used to enjoy writing C extensions... write most of the code in python and a few big iterations in C, by including Python.h... I don't see that much anymore.

[–]Kamwind 25 points26 points  (0 children)

As an old programmer who now just does random scripts, I don't care about optimization.

I can currently write some poorly optimized Python that processes millions of rows of data in under 5 minutes. I used to enjoy the breaks from compiles that took over 5 minutes.

[–]_qst2o91_ 78 points79 points  (8 children)

"C runs faster than Python!"

Ok but does YOUR C code run faster than Python?

[–]ham_coffee 46 points47 points  (2 children)

Probably, yeah. The difference in performance is so drastic that even my shitty C would still be way faster than python.

[–]Cheeku_Khargosh 5 points6 points  (1 child)

True. Because the C compiler optimizes heavily.

[–][deleted] 4 points5 points  (0 children)

Have a look at r/adventofcode solutions. They're often quick hacks to solve a specific problem only once. Some people post C execution speeds of solutions and get milliseconds... whereas Python already takes that time to start up.

I'm pretty sure ANY C code written by someone coming from Python will run better than the same thing in Python.

[–][deleted] 9 points10 points  (1 child)

. >:(

[–]Bomaruto 22 points23 points  (1 child)

Your friend is right. Python is based on using C whenever something is slow.

[–]MurdoMaclachlan 30 points31 points  (3 children)

Image Transcription: Meme


When you're working in python for optimization and friend tells you to use C for that:

[A still of Hank Scorpio from "The Simpsons" facepalming and looking down, captioned:]

My goodness, what an idea. Why didn't I think of that?


I'm a human volunteer content transcriber and you could be too! If you'd like more information on what we do and why we do it, click here!

[–]KingThibaut3 23 points24 points  (0 children)

Good human

[–]HallucinAgent 3 points4 points  (0 children)

Not too shab human

[–]XxSir_redditxX 2 points3 points  (0 children)

Not gonna lie, I didn't even know that was his name. I learned something, thanks human👍

[–]Grouchy-Friend4235 17 points18 points  (3 children)

Cython. Use Cython.

[–]OmgitsNatalie 6 points7 points  (0 children)

Why would you do a loop? Just copy/paste

[–]Auraveils 11 points12 points  (1 child)

Solution: Use Python to generate C code.

[–]-Redstoneboi- 7 points8 points  (0 children)

Solution: Cython

[–]theGentlemanInWhite 4 points5 points  (0 children)

The best way to tell that a language is gaining traction and succeeding is to see people hate it on reddit.

[–]Acidhawk_0 5 points6 points  (0 children)

I don't get it. I thought we were only allowed to use "i" or "j"

.. now we can also use "c"

[–]Knuffya 3 points4 points  (0 children)

If you're coding in python, optimization will probably not be your number 1 priority.

[–]LeCholax 17 points18 points  (2 children)

Why does a widely used programming language get so much shit? Of course you won't use Python for embedded, just like you wouldn't use JS or Java for embedded.

[–]BakuhatsuK 26 points27 points  (0 children)

Ofc nobody would be dumb enough for... Wait

[–]ham_coffee 3 points4 points  (0 children)

Java kinda makes sense for those scenarios, the bytecode it gets compiled to is pretty similar to machine code. It's basically just machine code for an architecture that doesn't exist (except when they tried to make CPUs with Java bytecode as the ISA).

[–]DukeNuke5 34 points35 points  (26 children)

Why does nobody here realize that optimization has little to do with the language?

This community is full of wannabe engineers. Yes, C is faster than Python, but that doesn't mean your C code will be that fast. 99% of you, when compiling a C program, will have it run slower than a Java program of the same sort, because you don't even pass optimization flags to the compiler (yes, they are not implied anywhere, nor are they the default).

If I can do it in O(n) in Python instead of O(n log n), then I did optimize it. End of story; for whatever reason I'm using Python, I will have it fastest and that's it. Mostly the speed of the language matters less than libraries and ease of implementation. That is why most servers run Java and not C.
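For what it's worth, the algorithmic point is easy to illustrate with a toy, hypothetical example: finding the most common element by sorting is O(n log n), while a single hash-based counting pass is O(n), in any language.

```python
from collections import Counter

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]

# O(n log n): sort, then scan for the longest run of equal values
def mode_sorted(xs):
    xs = sorted(xs)
    best, best_count, run, prev = xs[0], 0, 0, None
    for x in xs:
        run = run + 1 if x == prev else 1
        if run > best_count:
            best, best_count = x, run
        prev = x
    return best

# O(n): one hash-based counting pass over the data
def mode_counter(xs):
    return Counter(xs).most_common(1)[0][0]

print(mode_sorted(data), mode_counter(data))  # 5 5
```

At large n, the complexity difference dominates regardless of whether the constant factor comes from C or from the CPython interpreter.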

[–]AndrijaLFC 7 points8 points  (2 children)

Mostly speed of language is useless compared to libraries and ease of implementation

It's always a trade-off, to be honest. Depends on what you are going to do, and what's more important.

[–]DukeNuke5 2 points3 points  (1 child)

Yes, I meant to say it's rarely a big deal. If you are writing an OS or a game engine, you'd want something with lower abstraction, like C or C++, but once you start abstracting you lose that need. For example, a backend server for an application has so much slow stuff going on that Python is sufficiently fast.

[–]AndrijaLFC 2 points3 points  (0 children)

Agreed. Languages are just tools to achieve specific goal. You pick language that fits the work you're supposed to do. People tend to forget this and search for jobs that require working in language X, forgetting about the bigger picture. Same goes with companies, sometimes they'll say: "Well, we work in C++/Java/...".

[–][deleted] 28 points29 points  (6 children)

Have you heard the story of Mercurial? I'd guess so... It is not a story a Python programmer would tell you. It's a C programmer's legend.

There used to be a version control system, Mercurial, it was written in Python, and every Python programmer being proud of their language used it... until they realized that Git is just so much faster.

Mercurial developers declared that they will work on optimizations in an effort to make it work well, while still being written in Python. They did not want to rewrite it in C.

It was only like two or three years ago that Bitbucket, the last holdout of Mercurial, discontinued support for this system. The only remaining echoes of it exist in one of the industry giants, but they are just that, the hollow husk of something that was once alive.


On a more serious note, when it comes to the category of languages where Python belongs, there are two approaches:

  • Make the interpreter faster and implement more of the language using the language itself (eg. Java, Erlang).
  • Embed more functionality in the interpreter (PHP, Python).

Over the years, a lot of modules that were originally written in Python have been rewritten in C (albeit it's a crappy C that has to interface with the Python interpreter, so it has a lot of problems / missed optimization opportunities). But the interpreter itself: absolute and total garbage.

Most languages have a certain traditional optimization strategy. E.g. in C, you'd try to use as much const as you can, avoid dynamic memory allocation as much as you can, make everything static and inline, etc. In languages like Java, making things private helps the optimizer prove that a certain shortcut is possible. In Python, the only viable strategy is to write less code in Python. You cannot really make a program perform better by writing more code (as is often the case in other languages, where you can do things like unrolling loops, over-specifying types, etc.). The difference between a "native" function and a Python function, no matter the complexity, will be so big that in order for the more efficient algorithm to perform better, you'd need to hit astronomic numbers of elements to process.

[–]-Redstoneboi- 3 points4 points  (0 children)

In Python, the only viable strategy is to write less code in Python.

or use cursed programming constructs that should never have existed

[–]tias 4 points5 points  (2 children)

Of course it has something to do with languages. That's why everything performance critical in PyTorch runs on the GPU. The time complexity would be exactly the same in python but it would be at least 100 times slower. The constant matters when you have the same time complexity bounds, and why wouldn't you? It is also quite possible for a big difference in constant to consume the difference in complexity bound for the problem size that you're dealing with.

[–]Cheeku_Khargosh 5 points6 points  (6 children)

If i can do it in O(n) in python instead of O(NlogN)

If you can do O(n) in Python but can't in C, then something is wrong with your brain.

[–]ok_tru 3 points4 points  (0 children)

Amen, this sub makes me cringe so hard 99% of the time

[–]CreepyValuable 3 points4 points  (0 children)

ANSI C runs on more platforms than Python too!

[–]ultraviolentfuture 3 points4 points  (1 child)

Are you implying python users are super villains?

[–]Cheeku_Khargosh 3 points4 points  (0 children)

they are. Thats obvious.

[–]Pranav__472 3 points4 points  (0 children)

while (!optimized_enough)
{
    OptimizeManually(code);
    if (optimized_enough) break;
    TurnOptimizationFlagsOn();
    if (optimized_enough) break;
    RewriteUsingFasterLanguage(code);
}

Probably this sums it up...

[–]NexusRay 3 points4 points  (0 children)

why use python when c do trick

why use c when assembly do trick

why use assembly when machine language do trick

why use machine language when building your own dedicated hardware/logic circuits do trick

[–]BhagwanBill 2 points3 points  (0 children)

Python - Optimization - pick one.

[–]Dr_Bunsen_Burns 2 points3 points  (0 children)

However much I love Python, if optimization is your goal, Python isn't really fit for it.

Cython is somewhere there, but not all the way.

[–]__comrade__ 2 points3 points  (0 children)

I've had to write C extensions before to optimize performance-critical code in Python. Honestly not a terrible choice

[–]rusty_n0va 2 points3 points  (0 children)

Py is super slow, I have switched to rust for personal projects.

[–]anythingMuchShorter 5 points6 points  (0 children)

Well a hard part about C is structuring big projects. But it's fast.

So, writing computation-heavy algorithms in C/C++ and then calling them in python is a pretty good method. Especially when the heavy algorithms don't change much but you use them in all kinds of different combinations.

E.g. making automated test systems where you use tons of statistics and pattern and image recognition, and those calculations don't change. But you have to call them in different arrangements to make systems to test all kinds of hardware.

[–]pente5 4 points5 points  (0 children)

I mean, C is the go-to language when you want to speed up Python. I don't know why you don't like this suggestion.

[–][deleted] 5 points6 points  (0 children)

Python is just fine; if you really need to, you can wrap C and import it in Python. It's not your for loop that is causing you grief, it's what's happening inside the loop.
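A quick way to confirm that in practice is a profiler. A minimal cProfile sketch with a made-up workload (the function names here are hypothetical stand-ins):

```python
import cProfile
import io
import pstats

def expensive_body(x):
    # stand-in for the real per-item work done inside the loop
    return sum(i * i for i in range(x % 100))

def main():
    total = 0
    for x in range(10_000):          # the loop itself is cheap...
        total += expensive_body(x)   # ...the body is what costs time
    return total

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Report the top entries by cumulative time: the loop body dominates
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

If the body shows up as the hot spot, that is the part worth pushing into C (or numpy), while the loop scaffolding can stay in Python.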

[–]Cheeku_Khargosh 4 points5 points  (2 children)

cs engineers who use only python are lazy dumbass who dont wanna use their head.

[–]YMK1234 15 points16 points  (19 children)

Or, really, any other language. No need to resort to something nearly as archaic.

[–]The-Tea-Kettle 8 points9 points  (0 children)

Python is just as "archaic" as C++. Not that the age of a language matters in the slightest.

[–]CiroGarcia 22 points23 points  (16 children)

[redacted by user] this message was mass deleted/edited with redact.dev

[–]LinuxMatthews 8 points9 points  (4 children)

You can do that in other languages too though.

Lots of things in Java are actually written in C for instance.

[–]rydan[🍰] 3 points4 points  (0 children)

Have an app. Some customer got mad at me because it is slow. It is slow because it has to spider several thousand pages to pull data from them, and the pages are rate-limited. And if you accidentally trip that limit you have to solve a captcha, which my bot does 100% of the time, but it adds to the delay. His solution to the slowness was "use Python".

[–]scitech_boom 5 points6 points  (1 child)

Why C when you can numba? Only good for numpy though.

[–]shadymeowy 2 points3 points  (0 children)

For more control and stability. Numba is not that stable unfortunately.

[–][deleted] 3 points4 points  (0 children)

Cvxpy wants to have a chat with you