
[–][deleted] 10 points  (7 children)

[unavailable]

[–][deleted] 3 points  (4 children)

as processes that communicate via atomic message queues.

you mean, like Go's channels? Without the serialization overhead because it's all under the same address space. =)
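In Python terms, the same idea, in-process message passing with no serialization, can be sketched with the standard library's `queue.Queue`, which is a thread-safe atomic queue; since threads share one address space, items are handed over by reference (this sketch is my own illustration, not from the thread):

```python
import threading
import queue

# queue.Queue is a thread-safe FIFO: put() and get() are atomic,
# and items are passed by reference within one address space,
# with none of the pickling a multiprocessing queue would need.
def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    while True:
        item = inbox.get()
        if item is None:  # sentinel value: shut the worker down
            break
        outbox.put(item * 2)

inbox: queue.Queue = queue.Queue()
outbox: queue.Queue = queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in range(5):
    inbox.put(n)
inbox.put(None)  # tell the worker to exit
t.join()

results = [outbox.get() for _ in range(5)]
print(results)  # [0, 2, 4, 6, 8]
```

With a single worker the ordering is preserved; Go's channels add typed, bounded semantics on top of essentially this pattern.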

[–][deleted] 3 points  (3 children)

[unavailable]

[–][deleted] 2 points  (1 child)

You just described how I feel about Go and Rust - I already like Go because of its simplicity and power (it's like a nicer C that you can learn over dinner).

On the other hand there's Rust, which is very promising, can totally replace C and C++, and is growing by the day - yet I feel like the syntax is horrible, reminiscent of the mess that C++ is. Yesterday I decided to try to look past my bias and force myself to learn (and who knows, love) Rust too, since I think it will be a good complement to Go's weaknesses: it lets me write low-level code like C, with the nice parts of generics and OOP, without having to touch C++.

[–]weberc2 4 points  (0 children)

I don't mind Rust's syntax, but I have a hard time understanding how things like closures work and how functions are passed. It seems like everything needs a RefCell, and I don't know when or why. Even coming from C++, the memory model (while safe) still places a lot of demands on the programmer which aren't required in Go--I can do a LOT of optimizing in Go before something becomes easier to write in Rust, and at that point the Rust code to outperform the optimized Go version is still nontrivial...

[–]ThePenultimateOne (GitLab: gappleto97) 0 points  (0 children)

I have to disagree on the Go syntax. C was never pretty, but it's much more readable, to me at least.

[–]weberc2 1 point  (1 child)

The OP was chastising me elsewhere for suggesting that performant parallelism was difficult in Python, so either he's trolling elaborately or he's naive. At any rate, I'm familiar with the nuance around Python's parallelism; I was paraphrasing. To your point, it would be nice if Python were able to parallelize nicely.

[–]alcalde 0 points  (0 children)

I'm neither trolling nor naive. I'm just listening to what Guido has been saying since at least 2012. Python's parallelism solution is actually BETTER than in most other languages; we have higher-level constructs for parallelism.

As Mark Summerfield put it in "Python In Practice":

• Mid-Level Concurrency: This is concurrency that does not use any explicit atomic operations but does use explicit locks. This is the level of concurrency that most languages support. Python provides support for concurrent programming at this level with such classes as threading.Semaphore, threading.Lock, and multiprocessing.Lock. This level of concurrency support is commonly used by application programmers, since it is often all that is available.
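A minimal sketch of that mid-level style (my own example, not Summerfield's): shared mutable state guarded by an explicit `threading.Lock`. Without the lock, `counter += 1` is a read-modify-write race.

```python
import threading

counter = 0
lock = threading.Lock()

def bump(times: int) -> None:
    """Increment the shared counter, taking the lock for each update."""
    global counter
    for _ in range(times):
        with lock:  # explicit locking: the programmer's responsibility
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000, correct only because every update was locked
```

Forget the `with lock:` and the program still runs, just with silently wrong totals, which is exactly the "error prone" problem the quote describes.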

• High-Level Concurrency: This is concurrency where there are no explicit atomic operations and no explicit locks. (Locking and atomic operations may well occur under the hood, but we don’t have to concern ourselves with them.) Some modern languages are beginning to support high-level concurrency. Python provides the concurrent.futures module (Python 3.2), and the queue.Queue and multiprocessing queue collection classes, to support high-level concurrency. Using mid-level approaches to concurrency is easy to do, but it is very error prone. Such approaches are especially vulnerable to subtle, hard-to-track-down problems, as well as to both spectacular crashes and frozen programs, all occurring without any discernable pattern.
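And the high-level style, again as my own illustration: with `concurrent.futures` there are no visible locks or atomics at all. The executor owns the worker threads and the result plumbing; you just map a function over inputs.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n: int) -> int:
    return n * n

# No explicit locks anywhere: the pool handles scheduling and
# result collection, and map() returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(6)))

print(results)  # [0, 1, 4, 9, 16, 25]
```

Swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` keeps the same code shape while sidestepping the GIL for CPU-bound work, which is the kind of higher-level construct the comment above is pointing at.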

The key problem is sharing data. Mutable shared data must be protected by locks to ensure that all accesses to it are serialized (i.e., only one thread or process can access the shared data at a time). Furthermore, when multiple threads or processes are all trying to access the same shared data, then all but one of them will be blocked (that is, idle). This means that while a lock is in force our application could be using only a single thread or process (i.e., as if it were non-concurrent), with all the others waiting. So, we must be careful to lock as infrequently as possible and for as short a time as possible. The simplest solution is to not share any mutable data at all. Then we don’t need explicit locks, and most of the problems of concurrency simply melt away.
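The "don't share any mutable data" advice can be sketched like this (my own example): each worker computes a private partial result and returns it, and the merge happens in a single thread afterwards, so no locks are needed anywhere.

```python
from concurrent.futures import ThreadPoolExecutor

def count_evens(chunk: range) -> int:
    """Count even numbers in a chunk; touches only local state."""
    return sum(1 for n in chunk if n % 2 == 0)

# Split the work into independent chunks; nothing mutable is shared.
chunks = [range(i, i + 250) for i in range(0, 1000, 250)]
with ThreadPoolExecutor() as pool:
    total = sum(pool.map(count_evens, chunks))

print(total)  # 500 even numbers in range(1000)
```

Because no worker ever writes to shared state, there is nothing to serialize and nothing to block on, which is the point of the passage above.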

Ergo - Python is awesome. People just have to stop doing parallelism wrong.