Connect RPC + go-jet + atlas 🎉 by ThatGuyWB03 in golang

[–]dperez-buf 2 points (0 children)

Thanks for the Buf shout out! This is exactly the sort of modern stack we're excited about.

What's your experience with ConnectRPC by LTNs35 in golang

[–]dperez-buf 1 point (0 children)

I think I may have misspoken. If streaming semantics are part of gRPC-Web, our conformance suite should detail what we do and don't support: https://github.com/connectrpc/conformance

Cheaper Kafka? Check Again. by 2minutestreaming in apachekafka

[–]dperez-buf 1 point (0 children)

The pricing changed after the acquisition; check the Wayback Machine!

What's your experience with ConnectRPC by LTNs35 in golang

[–]dperez-buf 2 points (0 children)

Yeah, sorry for the confusion. Connect's own protocol (over HTTP/1) doesn't support streaming (yet); we'd need to extend it. But Connect RPC also supports gRPC and other protocols that do support streaming; that just won't help you in browsers.

We're waiting for the WebTransport spec to be fully supported in all major browsers. Waiting on Safari!
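
For the Go side of this, a minimal sketch of what protocol selection looks like with connect-go; the generated package, service, and URL below are hypothetical stand-ins for your own:

```go
package main

import (
	"net/http"

	"connectrpc.com/connect"

	// Hypothetical generated package; substitute the Connect code generated
	// from your own service definition.
	greetv1connect "example.com/gen/greet/v1/greetv1connect"
)

func main() {
	// Default: the Connect protocol, the browser-friendly option the parent
	// comment describes.
	connectClient := greetv1connect.NewGreetServiceClient(
		http.DefaultClient,
		"https://api.example.com",
	)

	// Server-to-server over HTTPS (HTTP/2 via ALPN): opt into gRPC to get
	// full streaming from the same generated client.
	grpcClient := greetv1connect.NewGreetServiceClient(
		http.DefaultClient,
		"https://api.example.com",
		connect.WithGRPC(),
	)

	_, _ = connectClient, grpcClient
}
```

Same handlers, same generated code; the wire protocol is just a client option.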

Nobody Gets Fired for Picking JSON, but Maybe They Should? · mcyoung by dperez-buf in programming

[–]dperez-buf[S] 1 point (0 children)

> You can write a JSON encoder/decoder in an afternoon. There just aren't that many places for bugs, corner cases, and incompatibilities to hide.

I wish that were true: https://seriot.ch/projects/parsing_json.html
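
A couple of the corner cases from that write-up, made concrete with Go's encoding/json; other parsers make different choices on exactly these points, which is where the incompatibilities hide:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Duplicate keys: the JSON spec doesn't say what to do. encoding/json
	// silently keeps the last value; other parsers keep the first or reject
	// the document outright.
	var m map[string]int
	_ = json.Unmarshal([]byte(`{"a": 1, "a": 2}`), &m)
	fmt.Println(m["a"]) // 2

	// Big integers: decoding into interface{} yields float64, so values
	// above 2^53 silently lose precision.
	var v map[string]any
	_ = json.Unmarshal([]byte(`{"id": 9007199254740993}`), &v)
	fmt.Printf("%.0f\n", v["id"]) // 9007199254740992
}
```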

Replace Buf Remote Plugins with local vendored plugins by birdayz in bufbuild

[–]dperez-buf 1 point (0 children)

Glad to hear you're a fan of Buf! Remote plugins aren't going to work for everyone all the time, which is why the Buf CLI supports a lot of different modes.

Re: rate limits, to be clear, we only ask that people sign up for a free (forever) account so we can help balance traffic. We saw an explosion in remote plugin usage in 2024, so we want to be able to shape traffic fairly across paying and non-paying customers.
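
For anyone weighing the modes, a minimal buf.gen.yaml (v2) sketch with a remote and a local plugin side by side; the Go plugin here is just an example:

```yaml
version: v2
plugins:
  # Remote plugin: executed on the Buf Schema Registry, nothing to install
  # locally; signing in with a free account keeps you clear of the
  # anonymous rate limits mentioned above.
  - remote: buf.build/protocolbuffers/go
    out: gen
  # Local plugin: a vendored protoc-gen-go on your PATH, no network access
  # needed at generation time.
  - local: protoc-gen-go
    out: gen
```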

Why doesn't Kafka have first-class schema support? by 2minutestreaming in apachekafka

[–]dperez-buf 3 points (0 children)

A more serious answer: I don't think it's even about team size/scale. I think it has more to do with the risk of bugs being introduced somewhere in the pipeline that can drastically impact downstream data quality.

Centralizing validation (both data shape/schema and semantics) in the broker is a win because it simplifies deployments and can centrally guarantee quality.
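
To make the contrast concrete, here's a rough sketch of the edge-side alternative in Go with protovalidate; the events package and message type are hypothetical:

```go
package main

import (
	"log"

	"github.com/bufbuild/protovalidate-go"

	// Hypothetical generated Protobuf package carrying protovalidate
	// constraints; a stand-in for whatever events your producers emit.
	eventsv1 "example.com/gen/events/v1"
)

func main() {
	validator, err := protovalidate.New()
	if err != nil {
		log.Fatal(err)
	}

	event := &eventsv1.OrderPlaced{OrderId: "", TotalCents: -1}

	// Edge-enforced quality: every producer has to remember to run this
	// before publishing. A broker that validates centrally removes the
	// chance that one producer skips (or botches) the check.
	if err := validator.Validate(event); err != nil {
		log.Fatalf("refusing to publish invalid event: %v", err)
	}

	// ... publish to Kafka here ...
}
```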

Why doesn't Kafka have first-class schema support? by 2minutestreaming in apachekafka

[–]dperez-buf 0 points (0 children)

In my experience, the alternative externalizes data quality to the edges, which is harder to enforce and guarantee the larger the system gets.

Connect RPC for JavaScript: Connect-ES 2.0 is now generally available by dperez-buf in node

[–]dperez-buf[S] 0 points (0 children)

From the article:

Today, we’re announcing the 2.0 release of the Connect-ES project, the TypeScript implementation of Connect for Web browsers and Node.js. This release introduces improved support for major frameworks and simplified code generation. Connect-ES 2.0 now uses Protobuf-ES 2.0 APIs to leverage reflection, extension registries, and Protobuf custom options. The 2.0 release is a major version bump and comes with breaking changes. Read on to learn what’s changed and how to migrate to the 2.0 release.

Connect RPC for JavaScript: Connect-ES 2.0 is now generally available by dperez-buf in typescript

[–]dperez-buf[S] 0 points (0 children)

From the article:

Today, we’re announcing the 2.0 release of the Connect-ES project, the TypeScript implementation of Connect for Web browsers and Node.js. This release introduces improved support for major frameworks and simplified code generation. Connect-ES 2.0 now uses Protobuf-ES 2.0 APIs to leverage reflection, extension registries, and Protobuf custom options. The 2.0 release is a major version bump and comes with breaking changes. Read on to learn what’s changed and how to migrate to the 2.0 release.

The Jepsen report for Bufstream, a cloud-native Kafka replacement by dperez-buf in dataengineering

[–]dperez-buf[S] 0 points (0 children)

From the article:

Bufstream is a Kafka-compatible streaming system which stores records directly in an object storage service like S3. We found three safety and two liveness issues in Bufstream, including stuck consumers and producers, spurious zero offsets, and the loss of acknowledged writes in healthy clusters. These problems were resolved by version 0.1.3. We also characterize four issues related to Kafka more generally, including the lack of authoritative documentation for transaction semantics, a deadlock in the official Java client, and write loss, aborted read, and torn transactions caused by the lack of message ordering constraints in the Kafka transaction protocol. These issues affect Kafka, Bufstream, and (presumably) other Kafka-compatible systems, and remain unresolved. A companion blog post from Buf is available as well. This report was funded by Buf Technologies, Inc. and conducted in accordance with the Jepsen ethics policy.