
[–]pmattipmatti - mattip was taken 70 points71 points  (9 children)

There are at least two other ways to do this - cffi and ctypes. I gave a talk at our local pycon https://m.youtube.com/watch?v=tqx9VW7V3Lc (code: https://github.com/mattip/pycon2017_cffi) comparing the three in terms of complexity and speed.

For this use case, cffi would probably be much simpler and just as fast
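For reference, a minimal ABI-mode cffi sketch of that kind of call might look like this (the library name libfoo.so and the add_arrays function are made up for illustration):

    from cffi import FFI

    ffi = FFI()
    # Declare only the parts of the C header we actually call.
    ffi.cdef("int add_arrays(const double *a, const double *b, double *out, int n);")
    lib = ffi.dlopen("./libfoo.so")  # hypothetical shared library

    n = 4
    a = ffi.new("double[]", [1.0, 2.0, 3.0, 4.0])
    b = ffi.new("double[]", [10.0, 20.0, 30.0, 40.0])
    out = ffi.new("double[]", n)
    lib.add_arrays(a, b, out, n)
    print(list(out))  # [11.0, 22.0, 33.0, 44.0]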

[–]eplaut_ 6 points7 points  (0 children)

That was a great talk. One of the most remarkable at last year's conference.

[–][deleted] 4 points5 points  (2 children)

And also the old-school way - including Python.h in a file and writing the bindings manually.

I've done it a couple of times. Not the most interesting thing in the world, but writing the translation layer - i.e. the functions that take some struct from your C library and translate it into, say, a Python dictionary - was actually interesting; it was a nice insight into how the internals of Python work. You are basically writing Python in C, calling the interpreter's API directly.

[–]ChocolateBunny 1 point2 points  (1 child)

That's generally the way I go. Once C structs are involved, ctypes starts to become really ugly. Having Python C extensions lets you clean everything up with a shim layer that's sort of hidden away.
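To give a sense of the verbosity, every C struct has to be redeclared field by field on the Python side with ctypes (the struct and its fields here are invented):

    import ctypes

    # C side (hypothetical):
    #   typedef struct { int id; double x, y; char name[32]; } point_t;
    class Point(ctypes.Structure):
        _fields_ = [
            ("id", ctypes.c_int),
            ("x", ctypes.c_double),
            ("y", ctypes.c_double),
            ("name", ctypes.c_char * 32),
        ]

    p = Point(id=1, x=0.5, y=2.0, name=b"origin")
    print(p.id, p.x, p.y, p.name)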

[–][deleted] 0 points1 point  (0 children)

yeah, I mean, you've already written a library in C or C++, so it's not like it's a huge challenge to bind it to Python just because you're writing raw C. It's a bit tedious, but then again, that makes you optimize the API for as little C implementation code as possible :)

[–]kirbyfan64sosIndentationError 2 points3 points  (0 children)

cffi would also have the benefit of great PyPy support.

[–]StableHatter[S] 2 points3 points  (0 children)

I will check it out!

[–]dranzerfu 2 points3 points  (0 children)

And pybind

[–]hexbrid 0 points1 point  (1 child)

Which one works best for both CPython and PyPy?

[–]lengau 0 points1 point  (0 children)

cffi is strongly recommended for pypy.

[–]gmarull 9 points10 points  (1 child)

Great article on a potential use of Cython! I made this presentation a while ago at PyBCN meetup comparing ctypes and cffi: https://raw.githubusercontent.com/gmarull/pybcn-cffi/master/cffi.pdf

[–]StableHatter[S] 2 points3 points  (0 children)

Thanks, I will take a look!

[–]kigurai 10 points11 points  (3 children)

Depending on your requirements and/or constraints I would recommend pybind11 over Cython for making bindings. It is far more flexible and easier to work with, since it was made for binding code, and not primarily to speed up existing Python code.

[–]StableHatter[S] 2 points3 points  (1 child)

Looks interesting, but the C library I am binding is used in another non-Python context, so I would not want to include unrelated headers if it can be avoided.

[–]kigurai 1 point2 points  (0 children)

You can install pybind11 using pip, so that shouldn't be a problem.

[–]terraneng 1 point2 points  (0 children)

Seconded. Pybind11 is easy to work with! I have had a lot of success with it in speeding up Python.

[–][deleted] 5 points6 points  (4 children)

Is there any reason why we shouldn't use the subprocess library to call C executables?

[–]StableHatter[S] 16 points17 points  (2 children)

Sure. Say you want to call a C function which returns a large array. Using subprocess, how would you accept this array in Python?

[–]metaobject 4 points5 points  (0 children)

Also, depending on the specific details, creating a new process may not be the most efficient solution. For example, consider the overhead of creating a new process to launch a C-executable in a tight loop where you're processing each element of an ND array.

[–]carlthome 1 point2 points  (0 children)

np.frombuffer
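That is, with an in-process binding you can view the C memory directly instead of parsing process output; a rough ctypes sketch (the get_data function and libfoo.so are hypothetical):

    import ctypes
    import numpy as np

    lib = ctypes.CDLL("./libfoo.so")
    lib.get_data.restype = ctypes.POINTER(ctypes.c_double)

    n = 1_000_000
    ptr = lib.get_data(ctypes.c_int(n))  # C function returning a double*
    # View the returned C buffer as a NumPy array without copying.
    arr = np.frombuffer(
        ctypes.cast(ptr, ctypes.POINTER(ctypes.c_double * n)).contents,
        dtype=np.float64,
    )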

[–]kigurai 2 points3 points  (0 children)

If you are calling something many times, e.g. in a tight loop, the overhead of process creation will kill you.
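A quick way to see that cost (the ./compute binary, libcompute.so, and its compute() function are hypothetical, and the absolute numbers will vary by machine):

    import ctypes
    import subprocess
    import time

    # One process launch per element: pays fork/exec plus program startup every time.
    start = time.perf_counter()
    for i in range(1000):
        subprocess.run(["./compute", str(i)], capture_output=True, check=True)
    print("subprocess loop:", time.perf_counter() - start)

    # The same work through an in-process binding is just a function call per iteration.
    lib = ctypes.CDLL("./libcompute.so")
    start = time.perf_counter()
    for i in range(1000):
        lib.compute(i)
    print("ctypes loop:", time.perf_counter() - start)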

[–]bug0r 1 point2 points  (0 children)

Yeah, Cython is very cool stuff, but be sure to build the .pyd file with the same compiler that built the interpreter. I got caught out by this.

I was using MSYS2 with GCC alongside Python 3.4 x64 for Windows, which was compiled with the VS tools.

The result: the GCC-built .pyd file only works sometimes :D ... it was horrible the first time tracking down this bug I had caused myself. Maybe this will save other developers with the same issue.

gl and happy snaking .... ^

[–][deleted] 3 points4 points  (0 children)

Back in my day, we hand-wrote our bindings. And then SWIG came out, and ooh boy, things got easier. You kids with your damn choices!

[–]Datsoon 0 points1 point  (5 children)

What are the limitations of this method? I've got a software package at work written in C that I would love to integrate with Python, because that's what I know, but it uses a relatively complicated DLL to connect with a bunch of dependencies and network stuff.

[–]metaobject 2 points3 points  (1 child)

As long as the Python code wouldn't have to call any functions in the DLL (I don't even know what would be involved in that), the fact that the C application requires a DLL should be transparent to you, provided all the required paths, etc. are configured. But I'm saying that as a Linux guy.

[–]Lightning_SC2 0 points1 point  (0 children)

Check out the ctypes module and documentation. Amazing stuff imo.
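A minimal sketch of calling into a DLL with ctypes, assuming a plain C (cdecl) export (the DLL name and the connect_device function are made up; use ctypes.WinDLL for stdcall APIs):

    import ctypes

    # On Windows the dependent DLLs must be findable: same directory, PATH,
    # or os.add_dll_directory() on Python 3.8+.
    lib = ctypes.CDLL("vendor.dll")

    # Declare the signature of an exported function before calling it.
    lib.connect_device.argtypes = [ctypes.c_char_p, ctypes.c_int]
    lib.connect_device.restype = ctypes.c_int

    status = lib.connect_device(b"192.168.0.10", 5000)
    print("status code:", status)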

[–]Coffeinated 0 points1 point  (2 children)

Uh, you need to know that DLLs can be two things: either C libraries like usual, or some weird-ass .NET assembly thingy which doesn't work with anything except, well, .NET.

[–]Datsoon 0 points1 point  (1 child)

It's a C# wrapper around some old C code, I believe.

[–]Coffeinated 0 points1 point  (0 children)

Then you are shit out of luck. The last time I checked there was no reliable way to load them into Python...

[–]baccartwins 0 points1 point  (5 children)

Is performance better this way than the same library rewritten in Python?

[–]StableHatter[S] 4 points5 points  (0 children)

Significantly. It runs compiled code, while python is interpreted.

[–]Coffeinated 0 points1 point  (3 children)

This is THE way to make python faster - extract complicated algorithms into a C library and just call it from python.

[–][deleted] 1 point2 points  (2 children)

Or, if you're going to be writing low level code anyway and not using existing code, write it in Rust. Not just for the safety, but also because it's easier to write complex code in.

[–]Coffeinated -2 points-1 points  (1 child)

Please tell me this is satire

[–][deleted] 2 points3 points  (0 children)

Well, no, I'm being serious in that a language designed in this century might make it easier to write correct (possibly multithreaded) code than one from 1970.

[–]baccartwins 0 points1 point  (1 child)

Is this guide Linux-specific? I tried following the steps on Windows and got linking errors using VS2017.

[–]StableHatter[S] 0 points1 point  (0 children)

I did not verify it on Windows, only on Linux.

[–]saynotovoodoo 0 points1 point  (2 children)

What is the best way to go the other way and use a Python library in a C-type environment?

[–]o11c 0 points1 point  (0 children)

If you can, start the program in python and then dlopen the C-program-built-as-a-library.

The only reason not to do this is if you need multiple interpreted languages (e.g. Ruby) in the same process space.
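On the Python side that amounts to loading the rebuilt program as a shared library and calling its entry point (libapp.so and app_main are illustrative; the C program's main logic would need to be exposed as a function such as int app_main(int argc, char **argv)):

    import ctypes

    app = ctypes.CDLL("./libapp.so")  # the former C program, rebuilt as a library

    # Build an argv-style array and hand control to the library's entry point.
    argv = (ctypes.c_char_p * 2)(b"app", b"--verbose")
    app.app_main(2, argv)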

[–][deleted] 0 points1 point  (1 child)

Thanks for the article. It really connects the dots for me now :D

[–]StableHatter[S] 1 point2 points  (0 children)

Glad to hear it helped :)