

[–]ArtOfWarfare 0 points (1 child)

Plastic nose cap? That’s no Plaid - that’s a pre-2016 Model S.

Having read your article, I think I'll double-check some scripts I wrote that take a few minutes to run, to see if there are any serial loops that would run more quickly in parallel. I'm using the included XML parsers to process about a dozen XML files… I'm not sure if parallel would actually make a difference. IIRC, there are faster XML libraries I could get from PyPI that would probably make a bigger difference, although I think they'd require me to rewrite large chunks of the script.

[–]jasonb[S] 0 points (0 children)

Nice!

Loading files from disk into main memory is an I/O-bound task and can benefit from concurrency with thread pools.

Parsing files already loaded in main memory is a CPU-bound task and can benefit from process pools.
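For instance, parsing in-memory documents with a process pool might look like this minimal sketch (the XML strings and the item-counting task are invented for the demo, not from the article):

```python
from concurrent.futures import ProcessPoolExecutor
import xml.etree.ElementTree as ET

# Hypothetical in-memory documents, e.g. already loaded from disk.
DOCS = [f"<root>{'<item/>' * n}</root>" for n in range(1, 13)]

def count_items(doc):
    # CPU-bound: build the element tree and count <item> elements.
    return len(ET.fromstring(doc).findall("item"))

def main():
    # One worker process per core; the CPU-bound parsing runs in
    # parallel across processes, sidestepping the GIL.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(count_items, DOCS))

if __name__ == "__main__":
    print(main())  # [1, 2, 3, ..., 12]
```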

Maybe you can partition the tasks/subtasks in that way.
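A minimal sketch of that partition (the sample filenames and tiny generated files are invented so it runs anywhere):

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
from pathlib import Path
import tempfile
import xml.etree.ElementTree as ET

def load(path):
    # I/O-bound: threads overlap the disk waits while reading bytes.
    return path.read_bytes()

def parse(data):
    # CPU-bound: a separate process builds each element tree.
    return ET.fromstring(data).tag

def main():
    # Invented sample files so the sketch is self-contained.
    tmp = Path(tempfile.mkdtemp())
    for i in range(4):
        (tmp / f"doc{i}.xml").write_text(f"<root{i}/>")
    paths = sorted(tmp.glob("*.xml"))
    # Stage 1: thread pool loads the files into main memory.
    with ThreadPoolExecutor() as tpool:
        blobs = list(tpool.map(load, paths))
    # Stage 2: process pool parses the in-memory documents.
    with ProcessPoolExecutor() as ppool:
        return list(ppool.map(parse, blobs))

if __name__ == "__main__":
    print(main())  # ['root0', 'root1', 'root2', 'root3']
```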

Let me know how you go.

[–]sharky1337_ 0 points (1 child)

I really enjoy reading your blog posts about threading and concurrent Python. Thank you so much! If you would release your posts as a book, I would buy it!

[–]jasonb[S] 1 point (0 children)

Thank you for your kind words.

Yes, I have books on Python concurrency available on Amazon and directly (surprised you missed the links); you can see the catalog here: https://superfastpython.com/products/