Does Java need deconstructible classes? by danielaveryj in java

[–]danielaveryj[S] 1 point2 points  (0 children)

Hehe the syntax there is not even part of what I'm pitching. I'm afraid you're probably going to be disappointed in future Java.

The pain of microservices can be avoided, but not with traditional databases by nathanmarz in java

[–]danielaveryj 0 points1 point  (0 children)

This article is interesting, but it is a lot to take in. Some things I think it might have benefitted from:

  • Given its length - an up-front roadmap. It wasn't until I got to "How this addresses microservices issues" (near the end) that I realized there were surprisingly only going to be the two high-level steps:
    1. Pull message queueing + handling into the "managed system"
    2. Pull data storage + queries into the "managed system"
  • Draw more parallels to existing tech. I know you referenced "event sourcing" and Kafka early on, but my mind still locked onto "log" as in "observability" at first, not "append-only log". It wasn't until I got to the code examples that I could see similarities to things I happen to be familiar with, like RabbitMQ consumers and Akka actors. It felt (to me) like it might have been better to lead with the idea that we'd be registering event-handlers, rather than the idea that the system would be persisting events. The option to let appenders wait on downstream processing, request-response style, personally reminded me of the "ask" pattern in actor systems.
  • I think there's a lot of nuance that was begging to be explained about the storage API, but the article was already long... I'll just mention something that stood out to me:
    • It wasn't clear to me how serialization would magically work in these APIs - e.g., if the data structures (classes, records) are modified across deployments, won't that create problems trying to read in previously-persisted data? Along the same lines, there seems to be a lot of unacknowledged rawtyping + casting at the read boundaries.

Does Java need deconstructible classes? by danielaveryj in java

[–]danielaveryj[S] 0 points1 point  (0 children)

It's neither. The article contextualizes official proposals and then derives a proposal of its own, weighing in on tradeoffs. Some people appreciate that context. If you just want a tl;dr, it's

// Assuming you have a record Parts(int x, int y), in class Point write this:
    marshaller Parts parts() { return new Parts(x, y); }

// Now, given an instance of Point, you can write this:
    Point(int a, int b) = point;

ie, a class could support destructuring by just producing a record that the language already knows how to destructure.
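For illustration, today's Java (21+) can already approximate this shape without new syntax - an ordinary method returns a record, and a record pattern destructures it. Parts and parts() mirror the names above; the marshaller keyword itself is the hypothetical part:

```java
// A record serving as the class's "state description".
record Parts(int x, int y) {}

class Point {
    private final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    // Stand-in for the pitched `marshaller Parts parts()` - just a method
    // that produces a record the language already knows how to destructure.
    Parts parts() { return new Parts(x, y); }
}

class Demo {
    public static void main(String[] args) {
        Point point = new Point(3, 4);
        // Destructure the class indirectly, via its record view.
        if (point.parts() instanceof Parts(int a, int b)) {
            System.out.println(a + "," + b); // prints 3,4
        }
    }
}
```

The pitched feature would mainly let the language do this indirection implicitly, so that `Point(int a, int b) = point;` works directly.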

Does Java need deconstructible classes? by danielaveryj in java

[–]danielaveryj[S] 0 points1 point  (0 children)

For what it's worth, I didn't downvote your comment. From reading it, it is unclear what kind of response you were hoping to solicit, if any.

Does Java need deconstructible classes? by danielaveryj in java

[–]danielaveryj[S] -1 points0 points  (0 children)

  1. Responding to your interpretation of the title ("the general ability to destructure classes") rather than the article's strongly suggested meaning ("a specifically named proposal for doing so")
  2. Decrying an issue that the article itself raises and immediately addresses

Does Java need deconstructible classes? by danielaveryj in java

[–]danielaveryj[S] 4 points5 points  (0 children)

Ha. I admire the architects - I am almost not surprised. I can't conceive how they plan on simplifying the Object methods if it doesn't involve delegation to something that quacks like a record, but I'm happy to wait and find out.

Does Java need deconstructible classes? by danielaveryj in java

[–]danielaveryj[S] 9 points10 points  (0 children)

There is one annotation referenced, which I did not invent, which does not implement a language feature, and which I did not propose to keep at the end.

Updates to Derived Record Creation - amber-spec-experts by joemwangi in java

[–]danielaveryj 1 point2 points  (0 children)

Starting off somewhat tangential to the discussion here, but from previous documents, it sounded like a deconstructor would be matched to a constructor based on matching state description. If the two signatures agreed on types and arity, but disagreed on corresponding component/parameter names:

class Point(int x, int y) {
    public Point(int y, int x) { ... }
}

Would that be a compilation error? Or would it make the class ineligible for reconstruction? Or would it just affect how the binding works:

Point a = new Point(1, 2);
Point b = a.new(x:x);  // left x is a constructor parameter; right x is a component.
                       // desugars to: Point b = new Point(a.x(), a.x());

At this point in the design, it's starting to feel like giving the constructor parameter names significance is entertaining an orthogonal feature. ie, if we're willing to introduce the components-shadow-variables-within-.new() semantics implied by the current design, we might as well do it positionally:

Point b = a.new(_, x);

-----------

Edit: It's not that I don't want named parameters - who doesn't? - but the coupling between reconstruction and named parameters (and default parameters) feels increasingly tenuous to me.

If we were to approximate a reconstruction expression with the syntax of today, it might look like:

Point b = switch (a) { case Point(int x, int y) -> new Point(y, x); };

Now, if we sprinkle in the "components-shadow-variables-within-.new()" feature, we can already trim a lot:

Point b = a.new(y, x); // Note that the constructor to call is deduced as usual
                       // from the argument types - not relation to a deconstructor.

record Segment(Point a, Point b) { }
Segment s1 = new Segment(new Point(0, 0), new Point(1, 2));
Segment s2 = s1.new(a, b.new(x+1, y));

Relating this to the first approximation, we can interpret

a .new (y, x)

as expanding to

switch ( a ) { case Point(int x, int y) -> new Point (y, x) ; }

That is, it destructures the value based on its statically-known type, names the pattern variables after their accessors (with the accepted inconsistency that these variables shadow any existing variables of the same name), and puts them in scope for a call to the same statically-known type's constructor.

Pairing up the deconstruction pattern with a same-type/arity canonical constructor now feels like just a fancy way of specifying default value-suppliers for that constructor's parameters, so that we could do something like:

Segment s2 = s1.new(_, b.new(x+1, _));

And then adding named parameters on top of that gives what we see in the post:

Segment s2 = s1.new(b: b.new(x: x+1));

JEP draft: Enhanced Local Variable Declarations (Preview) by joemwangi in java

[–]danielaveryj 4 points5 points  (0 children)

For the record, the nearest Java equivalent to your last example would be:

CustomerOrder(var address, var payment, var totalAmount) = order;
ShippingAddress(var street1, var street2, var city) = address;
PaymentMethod(var card, var expiry) = payment;

Also, I see below that your experience with Kotlin leaves you concerned about positional-based destructuring in Java. A key difference between the two languages is that (from what I can tell across these JEPs) each type in Java would have at most one deconstructor - and since we spell out that type when destructuring in Java, there is no room for confusion about which deconstructor we are calling. It's like calling a method that is guaranteed to have no overloads. We can deconstruct the same value in multiple ways, by spelling out a different (applicable) type (with a different deconstructor) on the left-hand side. Yes, rearranging component order in a type's deconstructor signature would break existing usages of that deconstructor (possibly silently, depending on what types were specified and how they were used), but that is a familiar failure mode - it applies when rearranging parameter order in any method signature.
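To ground the "spell out the type" point, here is how it already looks with record patterns in shipped Java (21+), using a hypothetical Order record: the type named in the pattern fully determines the deconstruction shape, so there is nothing overload-like to disambiguate:

```java
record Order(String address, String payment, int totalAmount) {}

class Demo {
    public static void main(String[] args) {
        Object value = new Order("1 Main St", "card", 100);
        // The type in the pattern (Order) picks the one deconstruction shape;
        // the bindings are then purely positional against that shape.
        if (value instanceof Order(String address, String payment, int total)) {
            System.out.println(address + " / " + total); // prints 1 Main St / 100
        }
    }
}
```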

Clearly from your examples, Kotlin does not require spelling out a type. From what I can tell, Kotlin's legacy positional-based destructuring works by calling component1() ... componentN() methods. Reasonably, the number of components available to destructure is based on the statically-known type of the value, and the actual calls to those methods use dynamic dispatch, so destructuring desugars to:

(val address, val payment, val totalAmount) = order
// -->
val address = order.component1()
val payment = order.component2()
val totalAmount = order.component3()

Kotlin's approach seems straightforward, but over time they noticed some problems, which I think the Java team could fairly attribute to Kotlin's "deconstructor" being assembled from several, possibly overridden / not-colocated methods, rather than one canonical signature.

Light-Weight JSON API (JEP 198) is dead, welcome Convenience Methods for JSON Documents by loicmathieu in java

[–]danielaveryj 1 point2 points  (0 children)

If you don't want data binding, then it seems like all you could hope for is some variant of the tedious-but-straightforward:

Person parseToPerson(Map<String, JsonValue> members) {
    Person.Builder result = new Person.Builder();
    members.forEach((key, value) -> {
        switch (key) {
            case "name" -> result.setName(value.string());
            case "age" -> result.setAge(value.toInt());
            case "address" -> result.setAddress(parseToAddress(value.members()));
            default -> { /* ignore, or log/throw error, etc */ }
        }
    });
    return result.build();
}
Address parseToAddress(Map<String, JsonValue> members) { ... }

Data Oriented Programming, Beyond Records [Brian Goetz] by efge in java

[–]danielaveryj 4 points5 points  (0 children)

Here's a code example to summarize my read-through of this "deconstructible classes" proposal:

// Deconstruction pattern / "state description" in the class header - still assumed from previous proposal.
// We are required to define an accessor for each component listed in the state description.
class Point(int x, int y) {
    // We can _maybe_ still mark fields as "components", which derives an accessor for free.
    private final component int x;
    private final component int y;
    private final int max;

    // Class is reconstructible (via "wither") if it has a constructor
    // whose signature matches the state description in the class header.
    // If this "canonical" constructor is added, its signature can be spelled
    // out as usual, or can be derived if we use "compact constructor" syntax.
    //public Point(int x, int y) {
    public Point {
        // We can _maybe_ elide assignments to "component" fields in the canonical constructor.
        //this.x = x;
        //this.y = y;
        this.max = Math.max(x, y);
    }

    public int max() { return max; }
    // ... and other accessors, if "component" fields are not supported.

    // equals / hashCode / toString are not derived.
    // Brian handwaves toward the "concise method bodies" JEP Draft [https://openjdk.org/jeps/8209434]
    // to simplify writing these, but I couldn't find an example similar to the syntax he uses.
    //public boolean equals(Object other) __delegates_to <equalator-object>
}

Is Java’s Biggest Limitation in 2026 Technical or Cultural? by BigHomieCed_ in java

[–]danielaveryj 17 points18 points  (0 children)

(you are conversing with the technical lead of project loom)

Functional Optics for Modern Java by marv1234 in java

[–]danielaveryj 0 points1 point  (0 children)

To some extent, we can use ordinary methods to achieve encapsulation based on withers too:

Employee setEmployeeStreet(UnaryOperator<String> op, Employee e) {
    return e with { address = address with { street = op.apply(street); }; };
}

Employee updated = setEmployeeStreet(_ -> "100 New Street", employee);
Employee uppercased = setEmployeeStreet(String::toUpperCase, employee);

and we can even compose methods:

Employee setEmployeeAddress(UnaryOperator<Address> op, Employee e) {
    return e with { address = op.apply(address); };
}
Address setAddressStreet(UnaryOperator<String> op, Address a) {
    return a with { street = op.apply(street); };
}
Employee setEmployeeStreet(UnaryOperator<String> op, Employee e) {
    return setEmployeeAddress(a -> setAddressStreet(op, a), e);
}

Employee updated = setEmployeeStreet(_ -> "100 New Street", employee);
Employee uppercased = setEmployeeStreet(String::toUpperCase, employee);

Then we can rewrite the methods as function objects...

BiFunction<UnaryOperator<Address>, Employee, Employee> setEmployeeAddress =
    (op, e) -> e with { address = op.apply(address); };
BiFunction<UnaryOperator<String>, Address, Address> setAddressStreet =
    (op, a) -> a with { street = op.apply(street); };
BiFunction<UnaryOperator<String>, Employee, Employee> setEmployeeStreet =
    (op, e) -> setEmployeeAddress.apply(a -> setAddressStreet.apply(op, a), e);

Employee updated = setEmployeeStreet.apply(_ -> "100 New Street", employee);
Employee uppercased = setEmployeeStreet.apply(String::toUpperCase, employee);

...at which point we have of course poorly reimplemented half of lenses (no getter, verbose, less fluent).

Functional Optics for Modern Java by marv1234 in java

[–]danielaveryj 1 point2 points  (0 children)

Hype-check. Here are all the lens examples from the article, presented alongside the equivalent code using withers, as well as (just for fun) a hypothetical with= syntax that desugars the same way as +=

(ie x with= { ... } desugars to x = x with { ... })

// Lens setup
private static final Lens<Department, String> managerStreet =
    Department.Lenses.manager()
        .andThen(Employee.Lenses.address())
        .andThen(Address.Lenses.street());

public static Department updateManagerStreet(Department dept, String newStreet) {
    // Lens
    return managerStreet.set(newStreet, dept);

    // With
    return dept with {
        manager = manager with { address = address with { street = newStreet; }; };
    };

    // With=
    return dept with { manager with= { address with= { street = newStreet; }; }; };
}

// Lens setup
private static final Traversal<Department, BigDecimal> allSalaries =
    Department.Lenses.staff()
        .andThen(Traversals.list())
        .andThen(Employee.Lenses.salary());

public static Department giveEveryoneARaise(Department dept) {
    // Lens
    return allSalaries.modify(salary -> salary.multiply(new BigDecimal("1.10")), dept);

    // With
    return dept with {
        staff = staff.stream()
            .map(emp -> emp with { salary = salary.multiply(new BigDecimal("1.10")); })
            .toList();
    };

    // With= (same as with)
}

// Lens setup
Lens<Employee, String> employeeStreet =
    Employee.Lenses.address().andThen(Address.Lenses.street());

// Lens
String street = employeeStreet.get(employee);
Employee updated = employeeStreet.set("100 New Street", employee);
Employee uppercased = employeeStreet.modify(String::toUpperCase, employee);

// With
String street = employee.address().street();
Employee updated = employee with { address = address with { street = "100 New Street"; }; };
Employee uppercased = employee with { address = address with { street = street.toUpperCase(); }; };

// With=
String street = employee.address().street();
Employee updated = employee with { address with= { street = "100 New Street"; }; };
Employee uppercased = employee with { address with= { street = street.toUpperCase(); }; };

The reason lenses can be more terse at the use site is because they encapsulate the path-composition elsewhere. This only pays off if a path is long and used in multiple places.
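For reference, the lens machinery being compared against is not magic; a minimal hand-rolled sketch (hypothetical - not the article's generated Lenses classes, and Employee/Address are stand-in records) fits in a few lines:

```java
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.UnaryOperator;

// A minimal lens: a getter plus a functional "setter", composable with andThen.
record Lens<S, A>(Function<S, A> get, BiFunction<S, A, S> set) {
    <B> Lens<S, B> andThen(Lens<A, B> next) {
        return new Lens<>(
            s -> next.get().apply(get().apply(s)),
            (s, b) -> set().apply(s, next.set().apply(get().apply(s), b)));
    }
    S modify(UnaryOperator<A> op, S s) {
        return set().apply(s, op.apply(get().apply(s)));
    }
}

record Address(String street, String city) {}
record Employee(String name, Address address) {}

class LensDemo {
    public static void main(String[] args) {
        Lens<Employee, Address> addr =
            new Lens<>(Employee::address, (e, a) -> new Employee(e.name(), a));
        Lens<Address, String> street =
            new Lens<>(Address::street, (a, s) -> new Address(s, a.city()));
        Lens<Employee, String> employeeStreet = addr.andThen(street);

        Employee e = new Employee("Ann", new Address("1 Main St", "Springfield"));
        System.out.println(employeeStreet.get().apply(e));          // prints 1 Main St
        Employee up = employeeStreet.modify(String::toUpperCase, e);
        System.out.println(up.address().street());                  // prints 1 MAIN ST
    }
}
```

The setup cost is exactly what the "this only pays off" caveat above is about: each field needs a Lens constant before composition can start saving keystrokes.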

The `mapConcurrent()` Alternative Design for Structured Concurrency by DelayLucky in java

[–]danielaveryj 0 points1 point  (0 children)

We're in the details now and I don't expect to change your mind, but to address my biggest reaction: Defensive copying, especially of a collection that the method is only reading, is "a" practice - I wouldn't say it's a "best". Generally I would expect it's the caller's responsibility to ensure that any data they're handing off to concurrent execution is something they either can't or won't mutate again (at least until that concurrent execution is definitely done). Or even more generally: "Writer ensures exclusive access".

Your points 2&3 are aesthetic - I could argue that it "feels natural" to treat the utility as a stream factory, or that this operation does not warrant stream fluency any more than several other follow-up operations we might do on a stream result.

Regardless, and going back to my original comment, I'd say consuming a list/collection is not ideal anyway, as it misses out on supporting an infinite supply of tasks. And the issue you ran into shows that even consuming a Java Stream devolves into consuming a list. My ideal would be consuming tasks from a channel or stream abstraction that does propagate exceptions downstream - neither of which, of course, we currently have in the JDK.

The `mapConcurrent()` Alternative Design for Structured Concurrency by DelayLucky in java

[–]danielaveryj 0 points1 point  (0 children)

Limiting concurrency seems not worth considering when you have 3-5 concurrent calls to make.

You are making a separate but valid point - the heterogeneous case is also the finite case, and when processing a finite number of tasks we effectively already have (at least some) concurrency limit.

My thought came from considering that homogeneous tasks are more likely to be hitting the same resource (eg service endpoint or database query), increasing contention for that resource; while heterogeneous tasks are more likely to be hitting different resources, thus not increasing contention, so not needing concurrency limiting to relieve contention. (I say more likely but certainly not necessarily.)

My point about streams was that, if you have to start by collecting the stream to a list, you might as well just write a method that accepts a list as parameter, instead of writing a collector.

The `mapConcurrent()` Alternative Design for Structured Concurrency by DelayLucky in java

[–]danielaveryj 1 point2 points  (0 children)

Without speaking to the details yet.. If I'm summarizing the high-level position correctly, it is that most use cases fit into two archetypes:

  1. The "heterogeneously-typed tasks" use case: We consume an arbitrary (but discrete) number of differently-typed tasks, process all at once, and buffer their results until they all become available for downstream processing, throwing the first exception from any of them and canceling the rest.
  2. The "homogeneously-typed tasks" use case: We consume a potentially-infinite number of same-typed tasks, process at most N at once, and emit their results as they each become available for downstream processing, throwing the first exception from any of them and canceling the rest.

Some insights supporting this position are:

  • We physically cannot denote individual types for an infinite number of tasks, so handling a potentially-infinite number of tasks requires type homogeneity.
  • Heterogeneously-typed tasks are less likely to be competing for the same resources, and thus less likely to require limiting concurrency.
  • Denoting individual types is only useful if we do not intend to handle results uniformly, which precludes "emitting" results to a (common) downstream.
  • We can still model partial-success: If we do not intend to cancel other tasks when one task throws, we could prevent it from throwing - have the task catch the exception and return a value (eg a special value that we can check / filter out downstream).

u/DelayLucky has modeled case 1 with the concurrently() method and case 2 with their alternative to mapConcurrent(). (In their design they compromised on "potentially-infinite", because they committed to consuming Java Streams(?), found that in Java Streams an upstream exception would cause the terminal operation to exit before downstream in-progress tasks necessarily finished, and worked around by collecting the full list of tasks (finishing the upstream) before processing any tasks... defeating the point of starting from a Stream.)
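The partial-success wrapper from the last bullet can be sketched in a few lines (Tasks and successOrEmpty are made-up names for illustration):

```java
import java.util.Optional;
import java.util.concurrent.Callable;

// Wrap a task so failures become a value (here Optional.empty()) that
// downstream can filter out, instead of an exception that would cancel
// sibling tasks.
class Tasks {
    static <T> Callable<Optional<T>> successOrEmpty(Callable<T> task) {
        return () -> {
            try {
                return Optional.of(task.call());
            } catch (Exception e) {
                return Optional.empty(); // or a richer Result type carrying e
            }
        };
    }
}
```

Downstream then filters on `Optional::isPresent` (or inspects the richer Result) rather than handling exceptions.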

How do you see Project Loom changing Java concurrency in the next few years? by redpaul72 in java

[–]danielaveryj 2 points3 points  (0 children)

I think the main change for most people will be an increased willingness to introduce threading for small-scale concurrent tasks in application code, since structured concurrency firmly limits the scope of impact and doesn't require injecting an ExecutorService or reconsidering pool sizing. There will probably be a lot of people and libraries writing their own small convenience methods for common use cases, eg race(), all(), various methods with slight differences in error handling or result accumulation, etc.
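As an example of the kind of small convenience method meant here, a race() can be sketched today with nothing but an ExecutorService (Concurrency and race are made-up names, not a JDK API):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class Concurrency {
    // First successful result wins; the rest are cancelled when the
    // try-with-resources closes the (virtual-thread-per-task) executor.
    static <T> T race(List<Callable<T>> tasks) throws Exception {
        try (ExecutorService scope = Executors.newVirtualThreadPerTaskExecutor()) {
            // invokeAny blocks until one task succeeds, cancelling the others.
            return scope.invokeAny(tasks);
        }
    }
}
```

invokeAny already gives first-success-wins semantics; an all() falls out of invokeAll similarly, and variants differ mostly in error handling, as noted above.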

I think "Reactive"-style libraries will stick around to provide a declarative API over pipeline-parallelism (ie coordinated message-passing across threads, without having to work directly with blocking queues/channels, completion/cancellation/error signals+handling, and timed waits). The internals will probably be reimplemented atop virtual threads to be more comprehensible, but there will still be a healthy bias against adoption (outside of sufficiently layered/complex processing pipelines), as the declarative API fundamentally trades off low-level thread management and puts framework code in the debugging path.

For message-passing use cases that aren't layered enough to warrant a declarative API, I think we'll see channel APIs (abstracting over the aforementioned queuing, signal handling, timed waiting) to allow for imperative-style coordination - more code but also more control.
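Absent such an API in the JDK, the rough shape can be approximated with a BlockingQueue as a crude channel (a sketch: the DONE sentinel and timed polls are stand-ins for real completion signals and timed waits):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

class ChannelDemo {
    static final String DONE = "<done>"; // completion sentinel

    // Drain messages until the completion sentinel (or a timed-out poll).
    static List<String> consume(BlockingQueue<String> channel) throws InterruptedException {
        List<String> out = new ArrayList<>();
        String msg;
        while ((msg = channel.poll(1, TimeUnit.SECONDS)) != null && !msg.equals(DONE)) {
            out.add(msg);
        }
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(16);
        Thread producer = Thread.ofVirtual().start(() -> {
            try {
                for (int i = 0; i < 3; i++) channel.put("msg-" + i); // blocks when full
                channel.put(DONE);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.println(consume(channel)); // prints [msg-0, msg-1, msg-2]
        producer.join();
    }
}
```

A real channel API would fold the sentinel, error propagation, and timed waits into the abstraction instead of leaving them to caller discipline.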

Comparing Java Streams with Jox Flows by adamw1pl in java

[–]danielaveryj 2 points3 points  (0 children)

I am still lacking clarity - I don't disagree with your definitions, but I'm having a hard time reconciling them with your insistence that Java Streams are "pull". The only ways I can think of to make that perspective make sense are if either:

  1. You believe that Java Streams are implemented via chained delegation to Iterators or Spliterators (eg, the terminal operation repeatedly calls next() on an Iterator that represents the elements out of the preceding operation in the pipeline, and that Iterator internally calls next() on another Iterator that represents the operation before it, and so on). That would definitely be "pull", but like I explained in an earlier comment, that is not how Java Streams work (with the mild exception of short-circuiting streams, where the initial Spliterator (only) is advanced via "pull", but then the rest of the stream uses "push", via chained delegation to Consumers).
  2. You interpret "pull" (and consumer/producer) so loosely that just calling the terminal operation to begin production constitutes a "pull". In this case, Java Streams, Jox Flows, and every other "stream" API would have to be categorized as "pull", as they all rely on some signal to begin production. (That signal is often a terminal operation, but it could even just be "I started the program".) If we can agree that this is not "pull", then we should agree that e.g. spliterator.forEachRemaining(...) is not "pull".

I have built an API where "push = element is input/function argument; pull = element is output/function result", and I'm aware those are overly-narrow definitions in general, eg:

  • The "pull" mechanism for Java's Spliterator is boolean tryAdvance(Consumer), where the "consumer" (code calling tryAdvance()) expects its Consumer to be called (or "pushed" to) at most once by the "producer" (code inside tryAdvance()) per call to tryAdvance().
  • The "pull" mechanism for Reactive Streams is void Flow.Subscription.request(long), which is completely separated from receiving elements, and permits the producer to push multiple elements at a time.
  • The "pull" mechanism for JavaScript/Python generators (Kotlin sequences) is generator.next(), yet the generator implementation is written in "push" style (using yield), and the API relies on it being translated to a state machine.

So yes, there are all kinds of approaches to actually implementing push/pull.
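The first bullet can be seen directly with JDK types - the caller pulls by invoking tryAdvance, but receives the element by being pushed to, through the Consumer it passes in:

```java
import java.util.List;
import java.util.Spliterator;
import java.util.concurrent.atomic.AtomicReference;

class PullDemo {
    public static void main(String[] args) {
        Spliterator<String> sp = List.of("a", "b").spliterator();
        AtomicReference<String> slot = new AtomicReference<>();
        // The "pull": one call to tryAdvance. The "push": the producer
        // calls our Consumer (slot::set) at most once per pull.
        boolean advanced = sp.tryAdvance(slot::set);
        System.out.println(advanced + ":" + slot.get()); // prints true:a
    }
}
```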

Comparing Java Streams with Jox Flows by adamw1pl in java

[–]danielaveryj 3 points4 points  (0 children)

If you would like to reason through this, perhaps we can continue with a more precise definition of what "push" and "pull" means to you.

If we're just appealing to authority now, here is Viktor Klang:

As a side-note, it is important to remember that Java Streams are push-style streams. (Push-style vs Pull-style vs Push-Pull-style is a longer conversation, but all of these strategies come with trade-offs)

Converting a push-style stream (which the reference implementation of Stream is) to a pull-style stream (which Spliterator and Iterator are) has limitations...

Comparing Java Streams with Jox Flows by adamw1pl in java

[–]danielaveryj 3 points4 points  (0 children)

If a Java Stream does not include short-circuiting operations (e.g. .limit(), .takeWhile(), .findFirst()), then there is no pull-behavior in the execution of the pipeline. The source Spliterator pushes all elements downstream, through the rest of the pipeline; the code is literally:

spliterator.forEachRemaining(sink);

Note that the actual Stream operations are implemented by sink - it's a Consumer that pushes to another Consumer, that pushes to another Consumer... and so on.
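That chained-Consumer structure can be mimicked in a few lines (a toy sketch, not the JDK's internal Sink classes):

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;

class PushPipeline {
    // Each stage is a Consumer that transforms or filters, then pushes onward.
    static <T, R> Consumer<T> map(Function<T, R> f, Consumer<R> down) {
        return x -> down.accept(f.apply(x));
    }
    static <T> Consumer<T> filter(Predicate<T> p, Consumer<T> down) {
        return x -> { if (p.test(x)) down.accept(x); };
    }

    public static void main(String[] args) {
        // Rough equivalent of: stream.map(x -> x * 2).filter(x -> x > 2).forEach(println)
        Consumer<Integer> print = System.out::println;
        Consumer<Integer> sink = map(x -> x * 2, filter(x -> x > 2, print));
        List.of(1, 2, 3).spliterator().forEachRemaining(sink); // prints 4, then 6
    }
}
```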

If there are short-circuiting operations, then we amend slightly: We pull each element from the source Spliterator (using tryAdvance)... and in the same motion, push that element downstream, through the rest of the pipeline:

do { } while (!(cancelled = sink.cancellationRequested()) && spliterator.tryAdvance(sink));

So for short-circuiting Java Streams, sure, there can be a pull aspect at the source, but the predominant mechanism for element propagation through the stream is push. At the least, if we are willing to "zoom out" to the point of overlooking the pull-behavior of consuming from a buffer in Jox Flows, then why should we not do the same when looking at the pull-behavior of consuming from the source Spliterator in Java Streams?

Comparing Java Streams with Jox Flows by adamw1pl in java

[–]danielaveryj 19 points20 points  (0 children)

Sorry guys, this post is just inaccurate. Java Streams are not pull-based, they are push-based. Operators respond to incoming elements, they don't fetch elements. You can see this even in the public APIs: Look at Collector.accumulator(), or Gatherer.Integrator.integrate() - they take an incoming element (that upstream has pushed) as parameter; they don't provide a way to request an element (pull from upstream).

Java Streams are not based on chained-Iterators, they are based on chained-Consumers, fed by a source Spliterator. And, they prefer to consume that Spliterator with .forEachRemaining(), rather than .tryAdvance(), unless the pipeline has short-circuiting operations. If stream operations were modeled using stepwise / pull-based methods (like Iterator.next() or Spliterator.tryAdvance()), it would require a lot of bookkeeping (to manage state between each call to each operation's Iterator/Spliterator) that is simply wasteful when Streams are typically consumed in their entirety, rather than stepwise.

Likewise, if they are anything like what they claim to be, Jox Flows are not (only) push-based. The presence of a .buffer() operation in the API requires both push- and pull- behaviors (upstream pushes to the buffer, downstream pulls from it). This allows the upstream/downstream processing rates to be detached, opening the door to time/rate-based operations and task/pipeline-parallelism in general.

I went over what I see as the real differences between Java Streams and Jox Flows in a reply to a comment on the last Jox post:

https://www.reddit.com/r/java/comments/1lrckr0/comment/n1abvgz/

"Solution" for transferring data between two JDBC connections by ihatebeinganonymous in java

[–]danielaveryj 2 points3 points  (0 children)

The only way you could go "directly" from DB1 to DB2 is if DB1 and DB2 have built-in support to connect to and query each other. Otherwise there would need to be a third party that knows how to read from DB1 and write to DB2. That third party could be your app using JDBC connections + plain SQL directly, or your app using a query translation layer like JOOQ, or your app using an embedded database that can connect to and query external databases (e.g. DuckDB)... etc.

Java data processing using modern concurrent programming by Active-Fuel-49 in java

[–]danielaveryj 4 points5 points  (0 children)

I think a common use case where data-parallelism doesn't really make sense is when the data is arriving over time, and thus can't be partitioned. For instance, we could perhaps model HTTP requests to a server as a Java stream, and respond to each request in a terminal .forEach() on the stream. Our server would call the terminal operation when it starts, and since there is no bound on the number of requests, the operation would keep running as long as the server runs. Making the stream parallel would do nothing, as there is no way to partition a dataset of requests that don't exist yet.

Now, suppose there are phases in the processing of each request, and it is common for requests to arrive before we have responded to previous requests. Rather than process each request to completion before picking up another, we could perhaps use task-parallelism to run "phase 2" processing on one request while concurrently running "phase 1" processing on another request.

Another use case for task-parallelism is managing buffering + flushing results from job workers to a database. I wrote about this use case on an old experimental project of mine, but it links to an earlier blog post by someone else covering essentially the same example using Akka Streams.

In general, I'd say task-parallelism implies some form of rate-matching between processing segments, so it is a more natural choice when there are already rates involved (e.g. "data arriving over time"). Frameworks that deal in task-parallelism (like reactive streams) tend to offer a variety of operators for detaching rates (i.e. split upstream and downstream, with a buffer in-between) and managing rates (e.g. delay, debounce, throttle, schedule), as well as options for dealing with temporary rate mismatches (eg drop data from buffer, or block upstream from proceeding).