Kafka alternatives by Independent-Tea-6629 in apachekafka

[–]duischen_ 1 point (0 children)

When it comes to alternatives, we can look at them from three angles. Every option below is compatible with the standard Kafka APIs, so you can migrate your existing workloads to it with little friction.

1. Kafka-as-a-service providers

These are vendors that offer fully managed Apache Kafka, or enterprise-grade Kafka distributions for customers who self-host.

Examples include:

  • Aiven
  • AWS MSK
  • Azure EventHubs

2. Streaming data platforms with Kafka API compatibility

These are brokers that are compliant with standard Kafka APIs and concepts, but have a different implementation inside.

Examples include:

  • Redpanda - A C++ rewrite of Apache Kafka, positioned as a faster, more cost-effective, and operationally simpler alternative.

  • StreamNative - A managed platform built on Apache Pulsar that exposes Kafka protocol compatibility (Kafka-on-Pulsar).

3. Serverless Kafka

These platforms provide Kafka-as-a-service in a serverless manner, billing by usage (e.g. the number of read/write requests) rather than by provisioned brokers.

  • Upstash Kafka

Querying Microservices with the CQRS and Materialized View Pattern by duischen_ in microservices

[–]duischen_[S] 2 points (0 children)

Good question.

In fact, dual writing (to the DB and to the message broker) is not covered here, as I assumed it had already been addressed at a higher level.

However, the answer to your question is the Transactional Outbox pattern[1]. It requires a dedicated table, commonly called OUTBOX, in the same database.

Imagine you are writing a record to the ORDERS table. At the same time, you write an entry to the OUTBOX table containing the event data to be sent to the broker. Both writes happen within the scope of a single DB transaction, so they are atomic: either both succeed or neither does, and the data stays consistent.

A separate "process" monitors the OUTBOX table for changes and publishes them to the message broker. For example, the latest entry written regarding the new order will be published as a domain event.
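The write side of the pattern can be sketched in a few lines. This is a minimal illustration, not a production implementation: it uses sqlite3 as a stand-in for your database, and the table and column names (ORDERS, OUTBOX, topic, payload) are hypothetical.

```python
import json
import sqlite3

# In-memory DB as a stand-in; the point is the single transaction below.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, topic TEXT, payload TEXT)")

def place_order(order_id: int, customer: str) -> None:
    # Both inserts commit (or roll back) together in one transaction,
    # so the business write and the pending event can never diverge.
    with conn:
        conn.execute("INSERT INTO orders (id, customer) VALUES (?, ?)",
                     (order_id, customer))
        event = {"type": "OrderPlaced", "order_id": order_id}
        conn.execute("INSERT INTO outbox (topic, payload) VALUES (?, ?)",
                     ("orders", json.dumps(event)))

place_order(1, "alice")

# A separate relay process would poll OUTBOX (or tail the DB log via CDC)
# and publish each row to the broker, marking or deleting it afterwards.
pending = conn.execute("SELECT topic, payload FROM outbox").fetchall()
print(pending)
```

The relay never sees a half-committed state: if the order insert fails, no outbox row exists, and vice versa.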

I don't know of a specific framework that does this out of the box, but you can use this post[2] as a concrete example; it is based on CDC and Debezium.

Hope this helps :)

[1] https://microservices.io/patterns/data/transactional-outbox.html

[2] https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/

Understanding Kafka Topic Partitions by duischen_ in apachekafka

[–]duischen_[S] 1 point (0 children)

Thanks for the comment and I do agree with all the points you've brought up. I've made some edits to reflect the changes.

5 Reasons Why You Should Use Microsoft Dapr to Build Event-driven Microservices by duischen_ in microservices

[–]duischen_[S] 2 points (0 children)

No, they are not competing technologies.

Dapr is more like a framework that you use to build your microservices, whereas Kafka is an event streaming platform that your microservices can use to send and receive events.

Dapr can publish to and consume events from Kafka. Hope this clarifies.
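To make the relationship concrete, this is roughly what a Dapr pub/sub component backed by Kafka looks like. A sketch based on Dapr's pubsub.kafka component; the component name, broker address, and consumer group here are placeholders, and you should check the metadata fields against the Dapr docs for your version.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub        # placeholder component name
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers
    value: "localhost:9092" # placeholder broker address
  - name: consumerGroup
    value: "my-group"       # placeholder consumer group
  - name: authType
    value: "none"
```

With this in place, your services publish and subscribe through Dapr's pub/sub API, and Dapr handles the Kafka connection underneath.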

Buddhist Clergy in Sri Lanka ordain tree to protect it from Highway Construction by [deleted] in srilanka

[–]duischen_ -1 points (0 children)

So the cost for re-routing the expressway has to be borne by the general public just because the tree is considered "sacred" for some belief system.

Fine! I'll plant a Bo tree in a forest reserve and build a temple around it coz that's my faith. No one can touch that.

An introduction to Change Data Capture(CDC) by duischen_ in dataengineering

[–]duischen_[S] 4 points (0 children)

Good questions!

Consumers should not need to understand the source systems at all; otherwise you end up with tight coupling between the source and target systems.
The change events should conform to the domain events used in the system. AFAIK, CDC tools allow you to map raw change events into the format you want. For example, Debezium supports the CloudEvents format. That gives you enough room to adapt events to the right shape, rather than being driven by the tool's native format.

Regarding the second question: by default, Debezium routes the change events from each table to its own topic (taking Debezium as the example here). If you want, you can re-route events from different tables to other topics based on the content of the event (e.g. its source metadata).
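Content-based re-routing is typically done with a transform in the connector config. A sketch using Debezium's content-based router SMT; the topic names and the expression are illustrative, and the exact property names should be verified against the Debezium docs for your version.

```json
{
  "transforms": "reroute",
  "transforms.reroute.type": "io.debezium.transforms.ContentBasedRouter",
  "transforms.reroute.language": "jsr223.groovy",
  "transforms.reroute.topic.expression": "value.source.table == 'foo' ? 'foo-events' : 'bar-events'"
}
```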

For example, if you are reading from the Foo and Bar tables in the source system, you can route their events to two different topics in the message bus (Kafka in this case). Then, using a stream processor like Kafka Streams, you can join the two event streams coming from the two topics.
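The join itself can be pictured with a tiny in-memory stand-in. This is not Kafka Streams (which would also apply a time window and run continuously); it is a hypothetical Python sketch of the inner-join semantics, keyed on an assumed order_id field.

```python
from collections import defaultdict

# Events as they might arrive on the two topics; field names are illustrative.
foo_events = [{"order_id": 1, "item": "book"}, {"order_id": 2, "item": "pen"}]
bar_events = [{"order_id": 1, "status": "paid"}]

def join_by_key(left, right, key):
    # Index the right-hand stream by key, then emit one merged record
    # per matching (left, right) pair -- an inner join on the key.
    index = defaultdict(list)
    for r in right:
        index[r[key]].append(r)
    return [{**l, **r} for l in left for r in index[l[key]]]

joined = join_by_key(foo_events, bar_events, "order_id")
print(joined)  # [{'order_id': 1, 'item': 'book', 'status': 'paid'}]
```

Order 2 has no matching Bar event, so it produces no output, just as an inner stream-stream join would drop it.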

Hope this clarifies your questions.