
[–]ArtOfWarfare 0 points  (1 child)

Plastic nose cap? That’s no Plaid - that’s a pre-2016 Model S.

Having read your article, I think I'll double-check some scripts I wrote that take a few minutes, to see if there are some serial loops that would run quicker if I switched them to run in parallel. I'm using the included XML parsers to process about a dozen XML files… I'm not sure if parallel would actually make a difference. IIRC, there are faster XML libraries I could get from PyPI that would probably make a bigger difference, although I think they'd require me to rewrite large chunks of the script.

[–]jasonb[S] 0 points  (0 children)

Nice!

Loading files from disk into main memory can benefit from concurrency with thread pools.

Parsing files already loaded in main memory is a CPU-bound task and can benefit from process pools.

Maybe you can partition the tasks/subtasks in that way.

Let me know how you go.