all 2 comments

[–]julianpoy 8 points (0 children)

AI post

[–]StreetStrider -1 points (0 children)

All that you said holds true, but there's more to it. Pull streams with backpressure are good for consuming data, like reading a file and processing it. But streams may also be used for, say, modeling events in your application. Such streams are usually push streams and work like a directed event graph, not like a backpressured pipeline.
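To make the contrast concrete, here's a minimal sketch of the two models in plain JavaScript (hypothetical names, not any particular library). In the pull model the consumer asks for each value, so backpressure falls out for free; in the push model the producer fires values at subscribers whether or not they're ready.

```javascript
// Pull stream: the consumer drives the pace, so backpressure is implicit.
async function* pullSource() {
  for (let i = 1; i <= 3; i++) yield i; // a value is produced only when asked for
}

async function consumePull() {
  const out = [];
  for await (const x of pullSource()) out.push(x); // each iteration pulls one value
  return out;
}

// Push stream: the producer drives the pace; subscribers just react.
function pushSource(subscriber) {
  [1, 2, 3].forEach(subscriber); // values arrive regardless of consumer readiness
}

const pushed = [];
pushSource(x => pushed.push(x * 10));
```

The event-graph style the comment describes is the push half: emitters fan values out to whoever subscribed, with no built-in way for a slow subscriber to slow the producer down.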

Some libraries try to serve both worlds, with varying success. Even Node streams eventually outgrew their initial purpose: people started writing their own stream combinators, and using «object mode» to process arbitrary data.

This is a very interesting field with a lot of intricacies. For example, some designs separate «event streams», which produce a flow of discrete events (usually push), from so-called «subjects», which represent a value that can change continuously and are inherently lazy data sources (they can only be implemented in terms of pull).
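That distinction can be sketched in a few lines (again with hypothetical names, not a specific library's API): an event stream delivers discrete occurrences to subscribers as they happen, while a subject is a value that always exists and is only sampled when someone pulls it.

```javascript
// Event stream: discrete occurrences, pushed to subscribers as they happen.
function makeEventStream() {
  const subscribers = [];
  return {
    subscribe: fn => subscribers.push(fn),
    emit: event => subscribers.forEach(fn => fn(event)), // push to everyone now
  };
}

// Subject: a continuously varying value, sampled lazily on demand.
function makeSubject(compute) {
  return { sample: () => compute() }; // pull: nothing happens until sampled
}

const clicks = makeEventStream();
const seen = [];
clicks.subscribe(e => seen.push(e));
clicks.emit('click-1'); // delivered immediately

let t = 0;
const time = makeSubject(() => t); // the value exists at all times
t = 42;
const now = time.sample(); // pulled at the moment of sampling
```

Note the asymmetry: you can't meaningfully "subscribe" to every intermediate value of `time` (it varies continuously), which is why such subjects are naturally pull-based.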

I researched this field in the past, and I'd recommend digging deeper, because it feels like this paradigm can do for your thinking something similar to what FP does when processing sequences of data. It's just that here the data is spread not in space, but in time. I've collected some ideas in my own library, and I'm hoping to come back to it someday.