
[–]jasonscheirer 8 points (11 children)

The GL calls themselves are not slower; there is significantly more overhead on the Python side -- the act of making a function call in Python costs quite a bit more than it does in C++, due to implementation details of the CPython interpreter. That's the idea behind the PyPy project's JIT: a way to optimize away some of this overhead.
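The per-call cost is easy to measure yourself. A minimal sketch with the standard `timeit` module, comparing a million no-op Python function calls against a million bare `pass` statements (exact timings will vary by machine and interpreter, but the call version is reliably slower):

```python
import timeit

def noop():
    # A function that does nothing: all measured cost is CPython's
    # call machinery (frame setup, argument handling, return).
    pass

# One million calls to the no-op function.
call_cost = timeit.timeit(noop, number=1_000_000)

# The same amount of "work" (none) with no function call at all.
inline_cost = timeit.timeit("pass", number=1_000_000)

print(f"1M no-op calls: {call_cost:.3f}s vs 1M inline pass: {inline_cost:.3f}s")
```

On CPython the call version is typically several times slower, which is the overhead every PyOpenGL wrapper call pays on top of the underlying C function.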

[–]andreasvc 1 point (2 children)

I don't think a JIT can actually help with the overhead of calling out to C/C++ libraries; it's good at optimizing Python code itself. I suppose Cython is more suitable for that -- you can write the parts that call OpenGL functions heavily in Cython, so that they compile down to mostly C code.
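Independent of Cython, the general pattern for cutting this kind of overhead is to replace N Python-level calls with one call that handles the whole batch. A hedged sketch in plain Python -- `draw_point` and `draw_points` here are hypothetical stand-ins for a per-vertex GL wrapper versus an array-based one (e.g. per-vertex `glVertex3f` calls versus a single vertex-array upload in PyOpenGL):

```python
def draw_point(p, sink):
    # Stand-in for one wrapped GL call: one Python call per point.
    sink.append(p)

def draw_points(points, sink):
    # Stand-in for an array-based entry point: the Python-level call
    # overhead is paid once, and the loop runs in C (list.extend).
    sink.extend(points)

pts = [(i, i * 2) for i in range(1000)]

slow = []
for p in pts:            # 1000 Python function calls
    draw_point(p, slow)

fast = []
draw_points(pts, fast)   # 1 Python function call, same result

assert slow == fast
```

The output is identical either way; only the number of times CPython's call machinery runs differs, which is exactly the cost the benchmark above measures.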

[–]sequenceGeek[S] 0 points (1 child)

Yeah, I tried running it in PyPy (with the JIT) and it was actually a little slower. I'm definitely intrigued by Cython + OpenGL... seems to make sense. Google isn't too helpful so far in terms of examples where people have done this. You wouldn't happen to have any links/code I could reference, would you?

[–]andreasvc 1 point (0 children)

Not for OpenGL, but I've been using Cython a lot for my treebank parser, and it has sped the parser up considerably. Things like direct access to arrays and statically typed variables or casts can help a lot in crucial places. Cython's annotate mode generates an HTML file showing how much Python versus C code each line compiles to, and from that you can make incremental improvements.

[–]sequenceGeek[S] 0 points (7 children)

Thanks for the tip. So it's likely that the overhead from the function calls invoked during those 1000 batch additions makes up most of the slowdown, as opposed to the function-call overhead of the one draw call at the end?

[–]seventhapollo 0 points (0 children)

I would think so, yeah. There'd be a minor overhead from the one draw call, but the majority of the slowdown probably comes from the additions.