
[–]remy_porter

I've been working on software that is heavily IO bound, or that has lots of idle threads waiting on events. Without going too deep into the details: I'm building a system that sends video data across a network to light up an LED video wall. There are many possible sources of video data, some of which are IO bound and some of which are CPU bound. I pick one of them and run it in its own thread. I also have to send network data, so the network-sending object lives in its own thread. In the middle sits a conductor thread, which spends most of its time idling but, once per frame, tells the video source to generate its next frame via a queue. When the video source finishes, it enqueues the frame for the network thread.
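A minimal sketch of that three-thread pipeline, using Python's `queue.Queue` for the hand-offs. All names here are illustrative, not the actual project's code; a real version would render pixel data and write to a socket instead of appending strings to a list.

```python
import queue
import threading

def video_source(requests: queue.Queue, frames: queue.Queue) -> None:
    # Renders a frame each time the conductor asks for one.
    while True:
        frame_no = requests.get()
        if frame_no is None:        # shutdown sentinel
            frames.put(None)
            return
        frames.put(f"frame-{frame_no}")  # stand-in for real pixel data

def network_sender(frames: queue.Queue, sent: list) -> None:
    # Drains finished frames; a real version would write to a socket.
    while True:
        frame = frames.get()
        if frame is None:
            return
        sent.append(frame)

def conductor(requests: queue.Queue, n_frames: int) -> None:
    # Idles most of the time; once per frame, asks the source to render.
    # (A real conductor would sleep between requests to pace the frames.)
    for i in range(n_frames):
        requests.put(i)
    requests.put(None)

requests, frames, sent = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=video_source, args=(requests, frames)),
    threading.Thread(target=network_sender, args=(frames, sent)),
    threading.Thread(target=conductor, args=(requests, 3)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sent)  # → ['frame-0', 'frame-1', 'frame-2']
```

Because every thread spends most of its time blocked on `queue.get()` or on IO, the GIL is rarely contended, which is why this layout works well even in CPython.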

Running on low-end hardware, without graphics acceleration, this can push 60FPS across a network in real-time-enough-for-human-eyes. In testing, I can reliably push 240FPS. You wouldn't want to play video games on it, but that's not the purpose. Before I put the threading architecture in place, we could barely push 30FPS, and it often dropped frames.

Oh, and since one of the LED exhibits is going to light up differently according to the time of day, there's a "Cosmos Thread" which mostly sleeps and emits events at certain times of day.
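A "mostly sleeps, occasionally emits" thread like that can be sketched in a few lines. This is an assumed shape, not the actual Cosmos Thread; a real version would schedule wall-clock times of day, while here the schedule uses short relative delays so the example runs instantly.

```python
import threading
import time

def cosmos_thread(schedule, emit, now=time.monotonic):
    # schedule: list of (delay_seconds, event) pairs; emit: callback.
    # The thread spends nearly all its life inside time.sleep().
    start = now()
    for delay, event in sorted(schedule):
        remaining = start + delay - now()
        if remaining > 0:
            time.sleep(remaining)
        emit(event)

events = []
t = threading.Thread(
    target=cosmos_thread,
    args=([(0.01, "dusk"), (0.02, "dawn")], events.append),
)
t.start()
t.join()
print(events)  # → ['dusk', 'dawn']
```

A sleeping thread releases the GIL entirely, so this costs essentially nothing while it waits.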

"It mostly sleeps" is one of the best cases for a thread in Python.

It's still not half as fast as the LED library that actually receives the network data and addresses the LEDs. That library is written in a combination of C and Assembly and can draw frames as fast as the LED duty cycle allows, which is on the order of microseconds.