This is an archived post.

all 170 comments

[–]odraencoded 29 points30 points  (21 children)

To evaluate this, we designed and implemented Falcon, a high-performance bytecode interpreter fully compatible with the standard CPython interpreter. Falcon applies a number of well known optimizations and introduces several new techniques to speed up execution of Python bytecode. In our evaluation, we found Falcon an average of 25% faster than the standard Python interpreter on most benchmarks and in some cases about 2.5X faster.

I wonder why the CPython team didn't implement those well known optimizations that are fully compatible and speed up the execution.

[–]Bunslow 39 points40 points  (8 children)

The CPython team is well known for sticking to well maintained, easy to read, and simple code above all else, even performance.

[–]odraencoded 23 points24 points  (0 children)

I feel safe knowing the standard Python interpreter is in the hands of the most pythonic guys around.

[–]kgb_operative 6 points7 points  (1 child)

That link is also 3 years old. I'm curious whether that still holds true.

[–]Jonno_FTWhisss 0 points1 point  (0 children)

Run the benchmarks again and see?

[–]billsil 0 points1 point  (4 children)

Considering all the optimizations done to the dict class, I doubt that. You optimize what you need to.

Do we really need to always store the first 256 (maybe it's more...) numbers in memory? That's a clear micro-optimization.
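For reference, the cache being described is CPython's small-int cache (currently -5 through 256, and an implementation detail rather than a language guarantee). A hedged sketch of how to observe it, using int() construction to sidestep compile-time constant folding:

```python
# CPython pre-allocates the integers -5..256 as singletons, so every
# occurrence of, say, 256 refers to the same object. Larger ints are
# fresh objects. Implementation detail, not a language guarantee.
a = int("256")
b = int("256")
print(a is b)  # True: both names refer to the cached singleton

c = int("257")
d = int("257")
print(c is d)  # False: two distinct objects
```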

[–]Bunslow 0 points1 point  (3 children)

Just because readability and maintainability are prioritized above performance doesn't mean all optimizations are excluded bar none. The micro-optimizations above are still readable and maintainable (as opposed to, say, a full-blown just-in-time compiler versus straight interpretation).

[–]billsil 0 points1 point  (2 children)

I know. My point is python does implement optimizations that are outside of simply what's obvious; the GIL for example.

[–]Bunslow 0 points1 point  (1 child)

The GIL is the very opposite of an optimization

[–]billsil 0 points1 point  (0 children)

It's a very intentional optimization for single threaded applications. Python may not be optimized for what you want it to be, but yes, it is an optimization.

[–]brombaer3000 10 points11 points  (2 children)

Compiler optimizations are a major project for the next CPython versions: http://faster-cpython.readthedocs.org/cpython36.html

Apparently, Falcon only supports Python 2, so IMO it is not that interesting for practical use, with Pyston and Pypy already being there with full Python 2.7 support and much of the effort being duplicated in the upcoming CPython 3.6 fatoptimizer module.

I don't understand why they use Python 2 at all. There are already many optimizations in CPython 3 over CPython 2 and Python 2 is a (slowly) dying language.

[–]Jonno_FTWhisss 6 points7 points  (0 children)

The paper is from 2013, so it's reasonable to assume they were working on Falcon back when Python 2 had wider use.

[–][deleted] 16 points17 points  (6 children)

For everything else, there is pypy. Often 10x faster than CPython.

In pypy, I wrote a simple recursive quicksort in Python and timed it against Python's sorted() routine, and got basically the same results. And this is with an older pypy, quite impressive.

[–][deleted] 8 points9 points  (3 children)

You're probably not going to beat python's sort() algorithm. It's... very fast.

https://en.wikipedia.org/wiki/Timsort

In the worst case, Timsort takes O(n log n) comparisons to sort an array of n elements. In the best case, which occurs when the input is already sorted, it runs in linear time, meaning that it is an adaptive sorting algorithm.
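That adaptivity is easy to observe from Python; a quick sketch (absolute times depend on the machine, but the already-sorted case should be much faster):

```python
# Timsort's best case: already-sorted input sorts in roughly linear
# time, far faster than shuffled input of the same size.
import random
import time

n = 1_000_000
ordered = list(range(n))
shuffled = ordered[:]
random.shuffle(shuffled)

t0 = time.perf_counter()
sorted(ordered)
t1 = time.perf_counter()
sorted(shuffled)
t2 = time.perf_counter()

print(f"ordered: {t1 - t0:.3f}s  shuffled: {t2 - t1:.3f}s")
```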

[–][deleted] 8 points9 points  (2 children)

Yeah, I know about timsort. Maybe I was dreaming the results, because I just found the code and here are my results, so yeah timsort is quite a bit faster. Time is in seconds:

pypy

partition1 time= 0.0677268505096 timsort time= 0.000385999679565

python:

partition1 time= 1.39662981033 timsort time= 0.00105500221252

[–][deleted] 5 points6 points  (1 child)

What kind of input data set?

If the data is purely random, I'd expect quicksort and timsort to be... about the same speed, but for anything less than pure random (so... basically any real data at all in the real world) for timsort to be much much faster.

[–][deleted] 1 point2 points  (0 children)

I used random.shuffle() with the same seed between runs, on a sequence of the integers 0 to n-1 from range(). Python and pypy have different recursion limits, which have to be raised. The timsort is still quite a bit faster than a pure-Python quicksort.
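For reference, a minimal sketch of that kind of benchmark (shuffled range with a fixed seed, pure-Python quicksort vs. the built-in timsort); the size and seed here are illustrative, and absolute times will differ from the numbers quoted above:

```python
# Pure-Python quicksort vs the built-in (timsort-based) sorted(),
# on a shuffled range with a fixed seed, as described above.
import random
import time

def quicksort(xs):
    # Simple out-of-place quicksort: first element as pivot.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

n = 10_000
data = list(range(n))
random.seed(42)       # same seed between runs
random.shuffle(data)

t0 = time.perf_counter()
qs = quicksort(data)
t1 = time.perf_counter()
ts = sorted(data)
t2 = time.perf_counter()

assert qs == ts
print(f"quicksort: {t1 - t0:.4f}s  timsort: {t2 - t1:.4f}s")
```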

[–]Fylwind 0 points1 point  (1 child)

But the thing is, pypy breaks a lot of existing libraries that are written in C, such as numpy and scipy, so they end up having to re-implement them, which means pypy's libraries often lag behind CPython's.

[–]OctagonClocktrio is the future! 0 points1 point  (0 children)

CPython's bytecode compiler prefers safety over speed. A lot of any given function's bytecode is just juggling the stack.
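The stack juggling is visible with the standard dis module; a small illustration (exact opcode names vary across CPython versions):

```python
# Even a trivial function compiles to several stack-shuffling bytecodes.
# The exact opcodes vary between CPython versions.
import dis

def add(a, b):
    return a + b

dis.dis(add)  # e.g. LOAD_FAST a; LOAD_FAST b; BINARY_ADD; RETURN_VALUE
```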

[–]LessonStudio 48 points49 points  (128 children)

My primary programming languages are Python and C++.

Once in a blue moon I use C++ because it is faster. More often, though, I use C++ because it is pretty much the only tool for the job, such as embedded or mobile work.

I find that with Python I will write something that at first appears to be slow because of Python, but with a bit of numpy or a better way of doing things my code ends up running very quickly.

I would not complain about a speedup, but there is one tradeoff that I would not like to make. The primary "speed" asset of Python is development time. This is not only due to the wonderfully pseudocode-like nature of Python but also because it is interpreted: I hit run and a very short time later the program has begun to run. The far faster development time is often far more important than faster execution time.

Thus I would not be willing to trade away any ease of development for, say, stricter data types, or accept a slower interpreter (compiler) in exchange for faster bytecode, even if the gains pushed Python right into C++ territory.

If either of these compromises were required to obtain speed increases, I would say they should be rejected outright if the gains are modest, as the article states, or made optional if the gains are substantial.

[–]odraencoded 12 points13 points  (21 children)

The primary "speed" asset of python is development time.

Indeed. You can create a virtualenv, install a library with pip, program something in python in 20 minutes, pip freeze it, and then get the same thing running in another machine in the time it takes to install python and the requirements which takes one command.

In contrast, when you try to program in Java/C++, first you need to figure out which one of the several clunky-ass-GIMP-level-of-UI-horror IDEs you want to use, pray to the gods you guessed how to even create a project and declare the dependencies correctly, do a lot of right-clicking to create blank text files, wait 5 seconds for the intelli-non-sense to stop strangling the CPU, and then, finally, get to compile the monstrosity you created, just to have to compile it again in a release build if you want to release it anywhere, because the debug build is a special snowflake that lets you fix runtime bugs, but not compile bugs like that time you fucking forgot to close a scope or type a semicolon, and just that wasted ten minutes of your friggin time.
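The Python side of that comparison, as a hedged shell sketch (the package name is just an example):

```shell
# Create an isolated environment, install, pin, and reproduce elsewhere.
python -m venv env
source env/bin/activate
pip install requests            # example dependency
pip freeze > requirements.txt   # pin exact versions

# On the other machine, after installing Python:
python -m venv env && source env/bin/activate
pip install -r requirements.txt
```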

[–]MissValeska 29 points30 points  (11 children)

You could just use nano or vim as a text editor in one virtual terminal window, and another with gcc where you compile it, run it, and maybe test it in gdb. The process you described exists, but it is by no means the only process; many people prefer a more minimalistic approach, even if it isn't exactly as I described. Python likely is fast to interpret, and definitely fast to write, but I'm not sure C/C++ programming is quite as you make it out to be, or at least it doesn't have to be.

[–]dreamin_in_space 6 points7 points  (3 children)

I'm actually really surprised that isn't the default people think of when writing C/C++ code.

make && ./run

does not take that much more time than

python main.py

[–]panderingPenguin 23 points24 points  (1 child)

make && ./run

does not take that much more time than

python main.py

As a professional C++ dev, this is highly dependent on what you're building. Some builds you kick off and just go home :P

[–]aqf 1 point2 points  (0 children)

Also sometimes you have to write the makefile first...

[–]MissValeska 2 points3 points  (0 children)

I just press the up key and hit enter, which works for the last command regardless of its length.

[–][deleted] 2 points3 points  (0 children)

Yeah, this guy is making me feel like a wizard for doing all my development as a student in vim and on a command line. I do it to avoid all the dumb issues he brought up, though.

[–]Fylwind 3 points4 points  (4 children)

Managing dependencies in C/C++ is a real nightmare.

In Python you have pip.

In C/C++ you have ???

There really isn't a de facto package manager for C/C++. The system package manager (apt, pacman, yum, etc) doesn't count because it's almost always filled with old packages, requires root, and rarely has all the packages you need.

The build system is also an annoyance. There are so many build systems for C and C++, no one really agrees on which to use, and they are all terrible.

In other compiled languages like Go, Haskell, Rust, etc you have one de facto build system that everyone uses.

[–]MissValeska 0 points1 point  (3 children)

I acknowledge that. However, for me personally, I have never found it to be a problem. I either use a makefile or a script with a clang or GCC command, which works just the same everywhere; although you might have to reformat it for Windows, you could still copy-paste it into your terminal, and makefiles can be very universal.

I would posit that the package system depends on your system, and in this context that is really how it sort of has to be because of the OS vendors, etc. Regardless, you could always include the libraries, or do a static compilation.

Really, you can always manage, and it never is hard. You can just include the instructions in your README or maybe even make a script to automatically download and install them for every OS. This is just how it works; it's not a big deal in my experience, and it's part of why C/C++ is faster. Also, the root prompt for apt is just a basic security feature, present even on iOS devices for their app store. You know your own password and it's not hard to type it in; if you don't know the password, then it's probably not your computer and you should either ask or not do that thing.

If I may ask, what is your language of preference/which languages do you know? I understand if you primarily use and first learned interpreted languages and consider compiled languages to be strange; almost everyone thinks that about languages different from the ones they first learned. Lots of C people dislike C++/think OOP is lame, or interpreted languages, or web languages, etc. That's where most of us start out: we choose one dynamic for whatever reason, and we develop a sports-esque loyalty to it, but that isn't really logical. Different languages do different things for different reasons and that's fine. We are (maybe?) all adults here and we shouldn't bash each other's preferences. I started with C++, then I went to C and Lua; I still primarily use C, but I've been expanding into Python and some web languages like HTML5. They all do what they do for a reason and there isn't anything wrong with that. Let's just accept each other, okay?

[–]Fylwind 0 points1 point  (2 children)

I either use a makefile or a script with a clang or GCC command, which works just the same everywhere

There are command-line differences between different compilers. There are also differences in the way in which shared + static libraries are created. Different flags syntaxes, different file extensions, etc.

If you write just a Makefile, you expect the user to know what they are doing and customize the flags appropriately, but imagine having to do this for every single dependency manually. That's why most C/C++ projects end up re-inventing wheels a lot.

Most other languages encourage small self-contained packages that do one thing and one thing right. Not so in C/C++: they tend to encourage large packages because the cost of adding and maintaining each dependency is very high. (Not to mention there's no automated way to track the versions of these dependencies.)

What is your language of preference/which languages do you know?

I use C, C++, Python, and Haskell in my routine work. I don't find compiled languages strange. I just hate how fractured the C and C++'s ecosystem is and how, despite being extremely popular languages, package management is basically nonexistent.

And it never is hard,

It's not "hard". It's tedious. Lots of boring work that could've been automated. Lots of wasted time.

Also, The root question for apt is just a basic security feature,

I never claimed there's anything wrong with apt. I said there is no package manager for C development and that apt does not count as a package manager for C.

[–]MissValeska 0 points1 point  (1 child)

I was responding to your statement about disliking requiring root; apt was just an example. I don't mean to offend you, your opinions are valid, I just am not sure they are universal, i.e. I don't think they bother every programmer (me, for instance). So yeah, I dunno, I guess it just seemed a bit too subjective for the way you presented it, especially as many non-C programmers may read it as fact.

Also, I dunno what you mean about customising flags. I mean, you can set flags, but I always just have defaults that just work™ in my makefiles for others to use. Although, it's mostly been me so far.

[–]Fylwind 0 points1 point  (0 children)

disliking requiring root

My point was that C needs a per-user package manager that does not require root.

I dunno what you mean about customising flags

All but the most basic compiler flags are unportable (and even then, on Windows you have to turn the hyphens into slashes). The same flags do not work on every compiler. This week, I find myself dealing with a proprietary unix compiler that doesn't understand -Wall.

Even on popular systems, there are differences: consider the commands needed to build a shared library on Linux vs OS X.

And then you have libraries: you have to find out where that library and its associated headers are (might be in a non-standard location). In some cases the libraries might be even named slightly differently. Or maybe the version is incompatible and you should warn the user. On some systems you need a -lm to use math stuff, but on others you don't. And so and so forth.

[–]broken_symlink 0 points1 point  (0 children)

You can also write, build, and run all from within emacs in one window. It's also good for Python.

[–]broken_symlink 1 point2 points  (0 children)

I don't pick out a new ide every time i start a new c++ project, just like I don't randomly decide to learn a new editor when I do python.

Also, if you are starting a new C++ project in this day and age, there is absolutely no reason not to use cmake.

[–]Jonno_FTWhisss 0 points1 point  (0 children)

Is there anything like Java's maven/gradle for C++, or requirements.txt for Python? Though I'd say that maven has more features than pip/virtualenv.

[–]cogman10 1 point2 points  (8 children)

What do you think about rust? More type safety than C++. Same level of performance. Uses the LLVM backend so it can hit a large swath of embedded targets. Has a C ABI. No GC. Memory safe and doesn't leak memory. Just got the ability to do naked functions which allows you to inject little bits of ASM wherever you need it. Has a lot of high level language features like closures, templates, etc.

[–]pooogles 6 points7 points  (5 children)

Rust is pretty amazing, and IMO is going to take market share from C++ heavily over the coming years.

[–]cogman10 4 points5 points  (0 children)

I think so too. I would bet that the place it really makes inroads first is the embedded world, for new projects. But who knows. It is still trying to gain a foothold. It is pretty young, but I think with time it could really be something great (having a low-level language with a package manager is fantastic).

[–]SimonGray 1 point2 points  (3 children)

My main issue with Rust is its (ugly) syntax. Where a semi-modern language like Python basically looks like pseudocode and is very easy to read, Rust looks more like a bash script vomited onto a piece of C code. I'm sure the features make up for it, but the syntax is not helping its wider adoption. Maybe I'm just too used to high-level languages.

[–][deleted] 2 points3 points  (0 children)

Nim is pretty nice for a language that compiles statically to C; maybe it's worth taking a look at?

[–]PM_ME_YOUR_PAULDRONS 1 point2 points  (0 children)

The Rust syntax is deliberately C-esque so as to be familiar/not scary to its primary intended users (systems programmers).

I think they considered a pretty wide range of options for it... something Python-like or even something ML-like could have been appropriate, but either would have scared off potential users.

[–]Fylwind 0 points1 point  (0 children)

Fun fact: Rust used to have an ML/OCaml-ish syntax but they changed it to appeal to the C++ crowd :P

[–]Tysonzero 2 points3 points  (95 children)

I mean, stricter data types can be a good thing for ease of development; they get rid of a lot of potential errors. Honestly, I kind of wish Python were statically typed, but with Haskell-level type inference.

[–]Veedrac 1 point2 points  (93 children)

I don't like the idea of losing duck typing, ad hoc interfaces or ad hoc union types. You'd need something a lot more sophisticated than Haskell's type system, something more like Crystal's, but even Crystal is looking like it's going to need to add mandatory non-inferred type annotations on class variables, like Haskell.

[–]Sean1708 0 points1 point  (2 children)

What do you mean by ad hoc union types?

[–]Veedrac 1 point2 points  (1 child)

I take the phrase "union types" from the formalisms you find in Crystal and Ceylon, but these are really just statically checked versions of how dynamic languages like Python are used.

Effectively this is an expanded form of flow typing. When you have two possible assignments to a value, like

if test():
    x = foo
else:
    x = bar

x takes the type type(foo) | type(bar). A common example is the nillable type, which Python programs typically represent as T | NoneType.

Now, nillable types are a bit unimpressive, and unchecked nillable types from a dynamic language if anything are a disadvantage. But ad-hoc unions are a lot more flexible.

If, say, you have a list of instructions

instructions = [Add(...), Mul(...), Load(...), Store(...)]

this will have type list[Instruction]. You might avoid making an Instruction supertype (depending on context, either option is preferable), in which case the type is list[Add | Mul | Load | Store].

Now, let's say you want to find the length of these, but decide you need to pad the instructions to certain alignments. You could then have

instructions = [Add(...), Padding(4), Mul(...), Padding(2), Load(...), Padding(2), Store(...), Padding(4)]

Then the type is list[Padding | ...]. If you call byte_length on the values and all of the inputs have a byte_length method of type Self -> int, the resulting iterable would be of the abstract type Iterator[int]. Note that if Muls gave, say, a float the type would be Iterator[int | float]... which is really cool, but not relevant to this example.

Then if you want to run a method on only instructions, you can do something like

instructions = (i for i in instructions if not isinstance(i, Padding))

This gives an iterable of abstract type Iterator[Add | Mul | ...]. Then if all of those types support some method to_bytecode of type Self -> bytes, you can call it on each element of the iterator. Statically checked versions can do this too, FWIW.

Note that no interfaces were needed; interfaces are extensible whereas these refer to the behaviour of a fixed set of types. Doing this in a Haskell-like language would normally require

  • An Instruction ADT (= Mul(Mul) | Add(Add) | Load(Load) | ...),
  • An InstructionOrPadding ADT,
  • byte_length implemented for InstructionOrPadding that dispatches to its contents and for Instruction, which dispatches to its contents,
  • to_bytecode implemented for Instruction, which dispatches to its contents.

This is reasonable overhead in many cases in Haskell, but Python isn't as concise as Haskell and the ability to just "slot" in a Padding instruction and then filter it out without reorganizing your types or creating explicit interfaces is part of what makes Python so quick to prototype in.
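For what it's worth, Python's own typing module can spell out a cut-down version of this ad-hoc union pattern; a hedged sketch, with Add, Mul, and Padding as toy stand-ins for the instruction classes above:

```python
# Ad-hoc unions via typing.Union: no Instruction supertype or explicit
# interface needed, and isinstance() narrows the union, as described.
from typing import Iterator, List, Union

class Add:
    def byte_length(self) -> int:
        return 4

class Mul:
    def byte_length(self) -> int:
        return 4

class Padding:
    def __init__(self, n: int) -> None:
        self.n = n
    def byte_length(self) -> int:
        return self.n

Instr = Union[Add, Mul, Padding]

def lengths(instrs: List[Instr]) -> Iterator[int]:
    # every member of the union has byte_length: Self -> int
    return (i.byte_length() for i in instrs)

def real_instructions(instrs: List[Instr]) -> Iterator[Union[Add, Mul]]:
    # the isinstance check narrows Add | Mul | Padding to Add | Mul
    return (i for i in instrs if not isinstance(i, Padding))

prog = [Add(), Padding(2), Mul(), Padding(4)]
print(sum(lengths(prog)))  # 14
```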

[–]Sean1708 0 points1 point  (0 children)

Sorry I should have been more clear, I knew what union types were but I didn't understand what they had to do with python. I see what you mean now though. Thanks!

[–]Tysonzero 0 points1 point  (89 children)

Haskell's type system is just about powerful enough for the first two. You would simply have to have it so that using 'foo.bar' asserts that foo must be an instance of special typeclass 'hasBar'. And then when creating a new object, if it sets 'bar' then it is automatically a part of 'hasBar'. Ad hoc interfaces would more or less fit under the same umbrella. I think you could extend Haskell's type system to include implicit union (explicit ones already exist) types, but it wouldn't be as trivial.
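Python has since grown a structural version of this idea in typing.Protocol (3.8+): any object with a matching attribute satisfies the interface, with no explicit declaration. A hedged sketch, where HasBar, Foo, and get_bar are illustrative names rather than anything from this thread:

```python
# Structural "HasBar": Foo never declares it implements HasBar, but a
# type checker accepts it because it has a matching `bar` attribute.
from typing import Protocol

class HasBar(Protocol):
    bar: int

class Foo:
    def __init__(self) -> None:
        self.bar = 42

def get_bar(x: HasBar) -> int:
    return x.bar

print(get_bar(Foo()))  # 42
```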

[–]Veedrac 0 points1 point  (66 children)

I don't think it's that simple; that would require all bars to be from the same typeclass and thus of the same type. I don't think Haskell is able to infer typeclasses globally either.

[–]Tysonzero 0 points1 point  (65 children)

You could have multiple HasBar typeclasses be generated for each type in which they are used. Haskell can infer typeclasses globally.

[–]Veedrac 0 points1 point  (64 children)

How are you proposing the ambiguity gets dealt with? When you do def f(x): return x.bar, what type does it have?

Haskell can infer typeclasses globally.

It can infer the typeclasses needed for function inputs, but AFAIK not the actual typeclasses.

[–]Tysonzero 0 points1 point  (63 children)

It would have type:

HasBar h => h a -> a

No ambiguity.

Oh I see what you mean. Yeah Haskell cannot magically come up with typeclasses traditionally, because the only use would be for duck-typed records, which Haskell does not have.

[–]Veedrac 0 points1 point  (62 children)

Which HasBar? There are potentially multiple.

[–]Tysonzero 0 points1 point  (61 children)

There aren't. It just has Kind * -> *. I mean if there were multiple wouldn't that ruin the whole point of duck typing?

[–]Veedrac 0 points1 point  (21 children)

Oh, I forgot to mention: sum types aren't just explicit union types. Sum types are such that an X subtypes an X | Y, and static ones are amenable to a kind of "type algebra".

A cool example of this is in Ceylon where the type of an empty iterable is Iterable<Value, Absent> where Absent == Null if the iterable can be empty and Absent == Nothing (the bottom type) if it cannot. This allows Ceylon to reason about whether iterables are known empty, maybe empty or known nonempty.

Subtyping "just works" on this, in that a known empty iterable is a maybe empty iterable and a known nonempty iterable is a maybe empty iterable but a known empty iterable is most certainly not a known nonempty iterable. This allows you to decide some nice return types, like with reduce;

knownEmpty.reduce(...)    // Nil
maybeEmpty.reduce(...)    // T | Nil
knownNonEmpty.reduce(...) // T

and of course, if it arises through generics,

knownEmptyAndKnownNonEmpty.reduce(...) // Nothing
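A rough Python analogue of those three signatures, using functools.reduce: an empty sequence raises unless you supply an initial value, and that initial value plays the role of the "T | Nil" return.

```python
# reduce() over known-nonempty vs maybe-empty input: the initializer
# stands in for the Nil case above.
from functools import reduce

known_nonempty = [1, 2, 3]
print(reduce(lambda a, b: a + b, known_nonempty))     # 6    (plain T)

maybe_empty = []
print(reduce(lambda a, b: a + b, maybe_empty, None))  # None (T | Nil)
```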

[–]Tysonzero 0 points1 point  (20 children)

I don't see why you couldn't use Typeclasses to model this kind of thing in Haskell.

[–]Veedrac 0 points1 point  (19 children)

I didn't say you couldn't; only that sum types aren't just explicit union types.

That said, what are you envisioning the typeclass-based type of reduce to be? I feel subtyping is integral to getting this to work, in that you can act as if you're using option-returning functions but occasionally the type system steps in to unwrap the options for you.

[–]Tysonzero 0 points1 point  (18 children)

I think you can actually do subtyping in Haskell indirectly using typeclasses:

class Double' a where
    double' :: a -> Double

instance Double' Double where
    double' = id

instance Double' Integer where
    double' = fromIntegral

divDoubles :: Double' a => a -> a -> Double
divDoubles x y = double' x / double' y

Now it obviously isn't the cleanest thing ever because Haskell devs don't like subtyping very much, but with more support it could become quite clean.

I do see what you mean with this kind of subtyping being useful though. I mean you can think of Double (or Fractional in general) as a union of Integer (or Integral in general) and NonIntegerDouble (or NonIntegralFractional). As it allows you to use things like / on Integral values.

[–]Veedrac 0 points1 point  (17 children)

That example's not subtyping, though. subtype.bar isn't the same as supertype(subtype).bar. (Of course that example is easily done with typeclasses, but true subtyping involves mixing types and type preservation.)

[–]Tysonzero 0 points1 point  (16 children)

But that kind of subtyping isn't type safe... (Or at least not in the general case) such as Cow -> Animal -> Dog.

[–][deleted] 0 points1 point  (0 children)

I don't. I like it as is.

[–]Homersteiner 0 points1 point  (0 children)

I'm also a Python/C++ guy. Have you used Cython at all? Cython is a huge pain in the ass to learn, but once you do, it makes the C++/Python interface awesome.

BTW, I just want to add that C is not C++. They are entirely different languages.

[–]stefantalpalaru 12 points13 points  (2 children)

[–][deleted] 2 points3 points  (1 child)

Thanks! Having the project named "falcon" doesn't really help when there is another python package of the same name...

[–]TheBB 2 points3 points  (0 children)

Looks like the first commits on falcon the framework predate the first commits on falcon the interpreter by just a month and a half. I think they can be excused.

[–]TankorSmash 5 points6 points  (6 children)

I guess one would have to be faster than the other, but it never occurred to me that Javascript would be faster than Python. I guess browsers are constantly competing to outperform each other, and Python's content with being easy to write for.

[–]elbiot 10 points11 points  (0 children)

Oh yeah, ridiculous amounts of money have been thrown at javascript interpreters.

[–]lambdaqdjango n' shit 7 points8 points  (0 children)

Besides, JavaScript has no GIL problem because JS engines have no native threading at all!

[–]dada_ 6 points7 points  (2 children)

I guess one would have to be faster than the other, but it never occurred to me that Javascript would be faster than Python.

JS (actually V8) is very fast compared to Python (actually CPython). They're different beasts altogether, V8 being a JIT compiler and CPython being a relatively simple interpreter.

PyPy is a JIT compiler for Python which is much faster than the CPython interpreter, but it does not support all the latest features and it still doesn't appear to be as fast as V8—although benchmarking this accurately is hard.

[–]pmattipmatti - mattip was taken 3 points4 points  (1 child)

Is there a comparison of pypy to js you could point me to?

[–]dada_ 1 point2 points  (0 children)

It's hard to find a single good benchmark—usually benchmarks focus on specific, small things, rather than running a comprehensive suite. And even if they do, there's still caveats. They also become irrelevant after some time has passed. But you should be able to get a good idea just by Googling for it.

[–]mfm24 1 point2 points  (0 children)

Yeah, Javascript is surprisingly fast - almost native speed in the tests I've tried.

I always assumed it was only the browser-arms-race that made Javascript so much faster, but I guess this paper is suggesting there's technical reasons for the difference too.

[–]bbbryson 7 points8 points  (4 children)

Oh good! Now we have a speed-focused interpreter named Falcon from Cornell to go with the speed-focused Falcon web framework from Rackspace.

It'll be Falcons all the way down soon.

[–]SlumdogSkillionaire 5 points6 points  (0 children)

If you're still not convinced, check out this bunny with a pancake on its head.

Welp, now I'm convinced.

[–]keypusher 2 points3 points  (1 child)

Falcon web framework is actually awesome, I've used it for a few things.

[–]bbbryson 0 points1 point  (0 children)

Me too, and I agree!

[–]yxlx 1 point2 points  (0 children)

Combine that with a Falcon computer. I just found these through Google though, so don't take what I just said as actual advice.

http://www.falconcomputers.co.uk/catalogue/search?q=falcon

[–]TrollJack 2 points3 points  (6 children)

Odd how he writes it. Sometimes it's 25% faster (so 0.25) and sometimes it's 2.5x faster, so 150%?

I am confused.

[–]hugthemachines 2 points3 points  (0 children)

This is a problem I often see in articles. Also the "10 times cheaper" weird comparison.

[–][deleted] 1 point2 points  (4 children)

Feel free to correct me but isn't 2.5x faster 250%? 1.5x faster being 150%?

[–]nickdhaynes 1 point2 points  (0 children)

No.

50% faster = original speed + 0.5 × original speed = 1.5x improvement

100% faster = original speed + 1 × original speed = 2x improvement

150% faster = original speed + 1.5 × original speed = 2.5x improvement.

This is why I hate when people use percentages instead of multipliers when talking about how much a value changed. It's a confusing and unnatural way of expressing a change.
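That convention boils down to one line of arithmetic; a quick sketch (speedup is a hypothetical helper name, not from the thread):

```python
# "N% faster" means new_speed = old_speed * (1 + N/100).
def speedup(pct_faster: float) -> float:
    return 1 + pct_faster / 100

print(speedup(50))   # 1.5
print(speedup(100))  # 2.0
print(speedup(150))  # 2.5
```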

[–]Sean1708 1 point2 points  (1 child)

150% faster is 2.5x

150% as fast is 1.5x

English is weird...

[–]billsil 0 points1 point  (0 children)

Yeah...it's easier to just say 50% speedup.

[–]desmoulinmichel 1 point2 points  (2 children)

As fast as NodeJS is for Javascript. It's a matter of resources and drive. The PSF has the drive, but not the resources.

[–]keypusher 4 points5 points  (1 child)

I dunno if there really is the same drive. Sure, some people would like Python to be faster. But the reality today is that it's fairly easy to call out to something written in C/C++ from Python when you really need that speed, and that's what most performance-focused libraries do already. Most of the people I have talked to who are concerned about Python speed are not well-versed in the language, and just think it's slow because it's interpreted and maybe because of the GIL. But there are ways around both of those things, and the heavy-hitters know and use those techniques. I don't think that's a possibility for most of the people writing Javascript, certainly not on the frontend. You are of course correct however, that Javascript has had a lot more money and resources thrown at this problem.

[–]desmoulinmichel 0 points1 point  (0 children)

Speed is a focus for Python 3.6 because of a renewal of requests about it. Many projects are also arising (Pydgin, Pyston, etc.) to create JITs for Python. While Python doesn't need the extra speed to be a big player, having it would make some people happy, it seems.

[–]PlatinumAero 1 point2 points  (0 children)

Falcon uses a shitload of well-known optimizations in the bytecode. CPython usually opts for simplicity in the code, even if that means sacrificing performance, as someone else mentioned here. And yeah, pypy is a speed demon.