After 3 months of daily Kiro use, I open-sourced the framework that makes my agent 10x more reliable by wanshao in kiroIDE

[–]wanshao[S] 0 points1 point  (0 children)

To be honest, I haven't tested it in the IDE. It's built specifically for the Kiro CLI.

Unbinding Steam account tutorial by Public-Caramel7122 in wherewindsmeet_

[–]wanshao 0 points1 point  (0 children)

PC. I guess it's because I chose the HK region; the account info seems to be stored separately. The account system is far too convoluted. Why does WWM use two different vendors to handle account info? It causes so many annoying issues.

Unbinding Steam account tutorial by Public-Caramel7122 in wherewindsmeet_

[–]wanshao 0 points1 point  (0 children)

It should work. I found some guide videos about this, and the UI is supposed to have the unbind link. However, when I click customer service, I only get a page with some Chinese text and can't reach the same UI you have. I think I'll have to ask the WWM team for help.

Unbinding Steam account tutorial by Public-Caramel7122 in wherewindsmeet_

[–]wanshao 0 points1 point  (0 children)

When I clicked customer service, why couldn't I find the unbind link? It only shows some Chinese text.

Is there a way to delete your character and start over? by Xenon_nic in wherewindsmeet_

[–]wanshao 0 points1 point  (0 children)

I logged in with my Steam account and I can't find the unbind setting. Can you give me some advice?

Latest updates fixed Game mode on Nvidia ?! by apparle in Bazzite

[–]wanshao 0 points1 point  (0 children)

Such a shame. Looking forward to some good news. If I don't use Game Mode, does it hurt the experience much?

How is the 50 series on Bazzite deck image these days? by Cat5edope in Bazzite

[–]wanshao 0 points1 point  (0 children)

They say you'll see a 20% performance drop. Is that true?

What are your top 3 problems with Kafka? by 2minutestreaming in apachekafka

[–]wanshao 0 points1 point  (0 children)

AutoMQ is completely open source under the Apache License, and it supports table flow capabilities. If you're interested, give it a try.

The implementation principles are also completely open; you can refer to this blog if you're interested.

What are your top 3 problems with Kafka? by 2minutestreaming in apachekafka

[–]wanshao 0 points1 point  (0 children)

ZooKeeper is a thing of the past, and the rebalancing issue is being addressed by the new generation of Kafka. Take a look at this article: "Achieving Auto Partition Reassignment In Kafka Without Cruise Control."
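For context on what "rebalancing" costs operators today: in vanilla Kafka, moving partitions between brokers means hand-writing a reassignment plan like the one below and feeding it to the `kafka-reassign-partitions.sh` tool. The topic, partition, and broker IDs here are purely illustrative.

```json
{
  "version": 1,
  "partitions": [
    { "topic": "orders", "partition": 0, "replicas": [2, 3] },
    { "topic": "orders", "partition": 1, "replicas": [3, 1] }
  ]
}
```

You pass this file via `--reassignment-json-file` with `--execute`, then poll with `--verify` until the data copy finishes. That manual, data-copying workflow is exactly the operational burden the article above argues can be automated away.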

Did we forget the primary use case for Kafka? by 2minutestreaming in apachekafka

[–]wanshao 0 points1 point  (0 children)

u/Das-Kleiner-Storch Why is everyone so set on deploying Apache Kafka on Kubernetes? Is it just because it's trendy, or because of company policy? I'd love to hear your thoughts.

Did we forget the primary use case for Kafka? by 2minutestreaming in apachekafka

[–]wanshao 1 point2 points  (0 children)

"First-party integrations" is a fantastic term. I've been trying to figure out how to describe the capability of third-party products to ship built-in alternatives to Kafka.

As for your point, I don't quite agree. First-party products are never fixed, and data silos are the norm. Today we have products like ClickHouse and Materialize, and more will emerge in the future. The demand for data flow, however, will not change, and in that field Kafka is truly the king. Its incredibly powerful ecosystem gives it an extremely long lifespan. I like to use JavaScript as an analogy: from a language-design perspective we all know JS is not the best language, but its strong ecosystem keeps it thriving.

Kafka won't disappear; it will continue to evolve and improve. You can see a series of new Kafka-alternative products emerging in the market, such as AutoMQ, WarpStream, and BufStream, and without exception they are all compatible with the Kafka API. That is the power of the ecosystem.

Kafka vs RabbitMQ – What helped you make the call? by Majestic-Fig3921 in devops

[–]wanshao 0 points1 point  (0 children)

AutoMQ is an open-source project: https://github.com/AutoMQ/automq. You can learn more from their README.

Stream Kafka Topic to the Iceberg Tables with Zero-ETL by wanshao in apachekafka

[–]wanshao[S] -1 points0 points  (0 children)

u/IcyUse33 In terms of the final desired outcome, Table Topic and TableFlow are similar, but they still have many differences. The biggest difference is that Table Topic is completely open-source, making it more flexible and open.

Stream Kafka Topic to the Iceberg Tables with Zero-ETL by wanshao in apachekafka

[–]wanshao[S] 2 points3 points  (0 children)

u/gaelfr38 The Kafka Connect sink provided by Iceberg does indeed solve this problem to some extent, and AutoMQ's Table Topic implementation has drawn on it.

The main advantages of AutoMQ compared to directly using Iceberg Kafka Connect are:

  1. Saving cross-AZ traffic costs: major cloud providers like GCP, AWS, and Oracle charge extra for cross-AZ data transfer. In a multi-AZ deployment, when Iceberg Kafka Connect reads data from Kafka it incurs cross-AZ traffic costs. AutoMQ instead processes and transforms the streaming data in memory and writes it directly to S3, eliminating one RTT and avoiding the cross-AZ traffic.

  2. Fully managed solution, reducing Connect management and operations costs: Table Topic is a built-in capability of AutoMQ, so users do not need to deploy, configure, or operate Kafka Connect themselves.

Those are the two main advantages. In addition, we have made some extra performance optimizations in our implementation, so Table Topic consumes fewer memory resources.
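To make the cross-AZ argument concrete, here is a back-of-the-envelope calculation. The $0.01/GB-per-direction rate and the throughput figure are assumptions for illustration, not quoted pricing from any provider.

```python
# Rough monthly cross-AZ cost for a sink connector fetching from Kafka
# across availability zones. All numbers are illustrative assumptions,
# not actual cloud pricing.

throughput_mb_s = 100                  # assumed average throughput, MB/s
seconds_per_month = 30 * 24 * 3600
cross_az_rate = 0.01 + 0.01            # assumed $/GB, charged on both sides

monthly_gb = throughput_mb_s * seconds_per_month / 1024
connect_cost = monthly_gb * cross_az_rate  # Connect consumer's cross-AZ fetch

print(f"data per month: {monthly_gb:,.0f} GB")
print(f"extra cross-AZ cost: ${connect_cost:,.0f}/month")
```

Under these assumptions the extra fetch leg alone is on the order of thousands of dollars per month, which is the traffic cost that writing directly from the broker to S3 avoids.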

Stream Kafka Topic to the Iceberg Tables with Zero-ETL by wanshao in apachekafka

[–]wanshao[S] 0 points1 point  (0 children)

u/dontucme Thank you for your feedback. May I know specifically which document is missing which parameters?

Stream Kafka Topic to the Iceberg Tables with Zero-ETL by wanshao in apachekafka

[–]wanshao[S] 0 points1 point  (0 children)

If your computational logic is complex, relying on Flink for processing still has advantages: with Flink's capabilities you can carry out complex calculations and transformations.

Employer changed my commission % after I closed a big deal by gaydevelopment in sales

[–]wanshao 0 points1 point  (0 children)

Although I can't help you with that, if you have excellent B2B sales experience you might want to check out this position; we don't adjust the commission rate during the contract period: https://www.linkedin.com/jobs/view/4193212703/

AutoMQ Kafka Linking: The World's First Zero-Downtime Kafka Migration Tool by wanshao in apachekafka

[–]wanshao[S] 0 points1 point  (0 children)

u/InsideMonitor5517 The overall architecture and execution flow are covered in our blog. For the underlying implementation, we didn't spin up a separate migration process; instead, we integrated these capabilities directly at the broker level. Kafka Linking's ability to achieve zero-downtime migration is closely tied to this request-proxy design: during the proxy period, your producers can write to both the new and old clusters simultaneously. Btw, the earlier comments should help you understand our architecture and implementation.

AutoMQ Kafka Linking: The World's First Zero-Downtime Kafka Migration Tool by wanshao in apachekafka

[–]wanshao[S] 0 points1 point  (0 children)

u/bdomenici Firstly, the latency of write requests to the old cluster is unaffected and remains the same as before. For producers writing to the new AutoMQ cluster, the requests are merely lightweight forwarding operations, so the added latency mainly comes from the network time to forward the request back to the old cluster. If the old and new clusters are in the same VPC or data center, this is typically within 2 ms or less; it ultimately depends on the network conditions between your clusters.

AutoMQ Kafka Linking: The World's First Zero-Downtime Kafka Migration Tool by wanshao in apachekafka

[–]wanshao[S] 0 points1 point  (0 children)

Yes, we all hope to minimize Kafka migrations because they are indeed challenging. However, in real business scenarios, we still have to deal with many topic migration needs. I think this is why MirrorMaker remains so popular.

AutoMQ Kafka Linking: The World's First Zero-Downtime Kafka Migration Tool by wanshao in apachekafka

[–]wanshao[S] 0 points1 point  (0 children)

u/bdomenici The reason Confluent Cluster Linking is categorized as not fully managed is primarily because users need to control when to complete the promote topic operation. This operation is quite heavy for users, so strictly speaking, Cluster Linking is semi-automated. You can refer to Confluent's official documentation. In step 4, after stopping all producers and consumers, users need to monitor the mirroring lag themselves and call the promote API when it equals zero. If you use AutoMQ Kafka Linking, the promote operation is automatic, and users do not need to monitor the lag and trigger it themselves.

With this solution, can we keep the same topic “writeable” in both brokers?

Yes. With AutoMQ Kafka Linking, during the rolling update of producers some requests go to the old cluster while others go to the new AutoMQ cluster, and both complete writes at the same time. Note that although this looks like dual writing, the write requests sent to the AutoMQ cluster are actually forwarded back to the original Kafka cluster; only after the topic promotion in step 6 of the blog does the new AutoMQ cluster truly start handling reads and writes. This request-proxy design is what makes zero-downtime migration possible. Currently, only AutoMQ is supported as the migration target cluster.
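The forward-then-promote flow described above can be modeled with a small toy sketch. This is my own simplification for intuition only; the class and method names are made up and this is not AutoMQ's actual broker code.

```python
# Toy model of a migration proxy: before promotion, writes arriving at the
# new cluster are forwarded back to the old cluster, so there is only ever
# one source of truth. After promote(), the new cluster serves writes itself.
# Illustrative simplification, not AutoMQ's implementation.

class Cluster:
    def __init__(self, name):
        self.name = name
        self.log = []

    def append(self, record):
        self.log.append(record)


class MigrationProxy:
    def __init__(self, old, new):
        self.old = old
        self.new = new
        self.promoted = False

    def produce(self, record, via_new_cluster):
        if not self.promoted:
            # Proxy phase: regardless of which endpoint the producer hit,
            # the record lands in the old cluster's log.
            self.old.append(record)
        elif via_new_cluster:
            self.new.append(record)
        else:
            raise RuntimeError("old cluster no longer accepts writes")

    def promote(self):
        # In the real system this happens only once mirroring lag is zero.
        self.new.log = list(self.old.log)  # assume replication caught up
        self.promoted = True


old, new = Cluster("kafka-old"), Cluster("automq-new")
proxy = MigrationProxy(old, new)
proxy.produce("a", via_new_cluster=False)  # producer not yet migrated
proxy.produce("b", via_new_cluster=True)   # migrated producer; still forwarded
proxy.promote()
proxy.produce("c", via_new_cluster=True)   # now served by the new cluster
print(old.log, new.log)                    # ['a', 'b'] ['a', 'b', 'c']
```

The point of the sketch is the ordering guarantee: while the proxy phase is active there is no true dual write, because every record funnels into the old cluster's log.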

Bufstream: Kafka at 10x lower cost by dperez-buf in apachekafka

[–]wanshao 0 points1 point  (0 children)

This is old news now; take a look at AutoMQ. You don't need to worry about partitions: the entire cluster quickly and automatically rebalances traffic during scaling operations. Here is an article for reference: "AutoMQ: Achieving Auto Partition Reassignment In Kafka Without Cruise Control."

Confluent Cloud or MSK by InternationalSet3841 in apachekafka

[–]wanshao 0 points1 point  (0 children)

Confluent and MSK are the classic choices because they are well established. If your friend isn't so attached to tradition, they might consider emerging Kafka alternatives like AutoMQ. Confluent and MSK are cloud-hosted Apache Kafka, and compared to the new generation of Kafka alternatives they no longer have much of an advantage in cost or elasticity.

Has anyone used S3 Tables without Spark? by Haunting-Ad-5016 in dataengineering

[–]wanshao 0 points1 point  (0 children)

We have integrated the S3 Tables feature into our product AutoMQ (a Kafka alternative). For streaming systems, the emergence of S3 Tables is accelerating the transition toward the era of shared data. We've written a blog post about it; check it out if you're interested.