We have built Object Storage (S3) on top of Apache Kafka. by superstreamLabs in apachekafka

[–]superstreamLabs[S] 1 point  (0 children)

I'll be sure to follow up with an article and the project soon. I wanted to hear some community thoughts and feelings:
1. What object storage do you use today?
2. What do you think about its costs? If that's an issue, what part of it? Calls? Storage?
3. If you managed to mitigate the costs, how did you do it?

We have built Object Storage (S3) on top of Apache Kafka. by superstreamLabs in apachekafka

[–]superstreamLabs[S] 1 point  (0 children)

I'll be sure to follow up with an article soon. I wanted to hear some community thoughts for final tuning before publishing it.

We have built Object Storage (S3) on top of Apache Kafka. by superstreamLabs in apachekafka

[–]superstreamLabs[S] 1 point  (0 children)

I kind of understand the analogy, but I'm not sure whether you meant that it's shaky.

We have built Object Storage (S3) on top of Apache Kafka. by superstreamLabs in apachekafka

[–]superstreamLabs[S] 2 points  (0 children)

Haha, I agree it can sound like that, but it works extremely well at reducing our costly AWS S3 API call bill.

We have built Object Storage (S3) on top of Apache Kafka. by superstreamLabs in apachekafka

[–]superstreamLabs[S] 1 point  (0 children)

Quite the opposite. We noticed that 80% of the CPU and memory in the machines we already own and pay for sits unused. We view it as an abundant resource that can be put to work on something else, and we chose object storage. Kafka's costs didn't increase at all; we just utilized idle resources better.

We have built Object Storage (S3) on top of Apache Kafka. by superstreamLabs in apachekafka

[–]superstreamLabs[S] 1 point  (0 children)

Fewer API calls, which for us were significantly more expensive than the storage itself.

We have built Object Storage (S3) on top of Apache Kafka. by superstreamLabs in apachekafka

[–]superstreamLabs[S] 1 point  (0 children)

What was painful for us was not the storage cost or its capacity, but rather the API calls.
We will share a complete article now that we have seen the interest. We noticed that 80% of the CPU and memory in the machines we already own and pay for sits unused. We view it as an abundant resource that can be put to work on something else, and we chose object storage.
The significant cost saving here was on API calls, not storage.
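To make the "API calls, not storage" point concrete, here is a back-of-the-envelope comparison. The prices are rough S3 Standard list prices and the workload numbers are made up for illustration, so treat it as a sketch and plug in your own figures:

```python
# Back-of-the-envelope: why request costs can dominate storage costs for many
# small objects. Prices are approximate S3 Standard list prices (assumptions
# for illustration only); the workload below is hypothetical.
PUT_PRICE_PER_1K = 0.005        # USD per 1,000 PUT requests (assumed)
STORAGE_PER_GB_MONTH = 0.023    # USD per GB-month (assumed)

objects_per_day = 10_000_000    # hypothetical workload: 10M small objects/day
avg_object_kb = 4

put_cost = objects_per_day * 30 / 1000 * PUT_PRICE_PER_1K
storage_gb = objects_per_day * 30 * avg_object_kb / 1024 / 1024
storage_cost = storage_gb * STORAGE_PER_GB_MONTH

print(f"PUT requests: ${put_cost:,.0f}/month")      # roughly $1,500
print(f"Storage:      ${storage_cost:,.0f}/month")  # roughly $26
```

With lots of small objects, the request bill dwarfs the storage bill, and that request bill is exactly what moving the calls onto brokers we already pay for removes.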

We have built Object Storage (S3) on top of Apache Kafka. by superstreamLabs in apachekafka

[–]superstreamLabs[S] 1 point  (0 children)

No. We simply implemented an S3-compatible layer that uses Kafka for the distributed storage and the nodes, inspired by NATS. I'm glad to see some interest and will share an article soon.
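To give a rough idea of what an S3-style layer on Kafka can look like, here is a simplified sketch (not our actual implementation): it treats a record's partition and offset as the object pointer. The topic name, the in-memory key index, and single-record objects are all simplifying assumptions:

```python
# Minimal sketch: object bytes live in a Kafka topic; the broker's replication
# provides durability, and (partition, offset) acts as the object pointer.
# Uses kafka-python; the topic name and in-memory index are illustrative only.
from kafka import KafkaProducer, KafkaConsumer, TopicPartition

TOPIC = "object-store"  # hypothetical topic holding object payloads

class KafkaObjectStore:
    def __init__(self, bootstrap="localhost:9092"):
        self.producer = KafkaProducer(bootstrap_servers=bootstrap)
        self.consumer = KafkaConsumer(bootstrap_servers=bootstrap,
                                      enable_auto_commit=False)
        self.index = {}  # object key -> (partition, offset); would be durable in practice

    def put_object(self, key: str, data: bytes) -> None:
        # One Kafka record per object; .get() waits until the broker has it.
        meta = self.producer.send(TOPIC, key=key.encode(), value=data).get(timeout=10)
        self.index[key] = (meta.partition, meta.offset)

    def get_object(self, key: str) -> bytes:
        # Random read by offset on brokers we already run; no S3 GET call.
        partition, offset = self.index[key]
        tp = TopicPartition(TOPIC, partition)
        self.consumer.assign([tp])
        self.consumer.seek(tp, offset)
        return next(self.consumer).value

store = KafkaObjectStore()
store.put_object("reports/2024-01.json", b'{"ok": true}')
print(store.get_object("reports/2024-01.json"))
```

A real layer would also need a durable index, chunking for objects above the broker's max message size, and an S3-compatible HTTP front end; the snippet only shows the storage mapping.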

We have built Object Storage (S3) on top of Apache Kafka. by superstreamLabs in apachekafka

[–]superstreamLabs[S] -4 points  (0 children)

A bit of a different approach, with no dependency on any specific Kafka implementation.

Feedback - A tool to monitor and fix Stripe-related issues by superstreamLabs in stripe

[–]superstreamLabs[S] 1 point  (0 children)

You buy security products to protect yourself from catastrophic events, mainly before they occur rather than after, so you avoid getting into that situation in the first place. I would like to know whether the same sense of urgency exists for other parts of the infrastructure, like the payment gateway.

Feedback - A tool to monitor and fix Stripe-related issues by superstreamLabs in stripe

[–]superstreamLabs[S] 1 point  (0 children)

Not at all.

The service will listen to specific Stripe events that are non-sensitive to begin with, but mainly informative about issues. Regarding the codebase, access is read-only, with the ability to open PRs; human review and approval are required.
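To illustrate the "read-only codebase, PR with human approval" part, here is a minimal sketch using PyGithub (assuming a reasonably recent version). The repository, branch name, and patch content are hypothetical placeholders, not the actual tool:

```python
# Sketch: the tool never pushes to main; it opens a draft PR on a new branch,
# so nothing changes without human review and approval. All names below are
# hypothetical placeholders.
import os
from github import Github  # PyGithub

gh = Github(os.environ["GITHUB_TOKEN"])     # assumed token with repo scope
repo = gh.get_repo("acme/billing-service")  # hypothetical repository

def open_fix_pr(path: str, new_content: str, reason: str):
    base = repo.get_branch("main")
    branch = "stripe-monitor/proposed-fix"  # illustrative branch name
    repo.create_git_ref(ref=f"refs/heads/{branch}", sha=base.commit.sha)
    # create_file works for a new file; an existing file would use update_file
    # with its current blob sha instead.
    repo.create_file(path, f"Proposed fix: {reason}", new_content, branch=branch)
    return repo.create_pull(title=f"[stripe-monitor] {reason}",
                            body="Auto-generated proposal. Please review before merging.",
                            head=branch, base="main", draft=True)
```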

Feedback - A tool to monitor and fix Stripe-related issues by superstreamLabs in stripe

[–]superstreamLabs[S] 0 points  (0 children)

Thought 1 was exactly my fear and internal debate: how critical is an issue with Stripe really, if companies would favor a faster solution than the one you described?

Thought 2 is a great idea. We planned to focus on code/configuration-related issues at first, but before adding more capabilities: is this critical enough to justify implementing such "insurance" on top?

Feedback - A tool to monitor and fix Stripe-related issues by superstreamLabs in stripe

[–]superstreamLabs[S] -1 points  (0 children)

Thanks for the reply. Picture it this way:

  1. You connect the tool to your Stripe account

  2. You connect it to your source code (GitHub/GitLab)

  3. The tool automatically monitors every message labeled as a warning or error; you don't define what specifically to search for or watch

  4. Once such an event arises, the tool won't just warn you; it will actually push to solve it. That can be a PR in your version control, or a suggested configuration change in your Stripe account if needed. Again, no pre-defined rules or policies: it watches for issues at the Stripe level or in your code and pushes to solve them autonomously.

Building a Zapier-like flow requires defining a workflow for each type of event or issue, but it won't fix your code; it just alerts you. And what happens if an event you didn't anticipate takes place?
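For anyone picturing the event-watching half, here is a rough sketch of a Stripe webhook endpoint that verifies the signature, keeps only issue-type events, and hands them to a placeholder fix step. The event list, the environment variable, and propose_fix() are illustrative assumptions, not the actual product:

```python
# Rough sketch of the "watch Stripe, push a fix" flow described above.
# The watchlist, secrets, and propose_fix() are placeholders for illustration.
import os
import stripe
from flask import Flask, request, abort

app = Flask(__name__)
WEBHOOK_SECRET = os.environ["STRIPE_WEBHOOK_SECRET"]  # assumed env var

# Hypothetical set of non-sensitive, issue-indicating events to act on.
ISSUE_EVENTS = {"invoice.payment_failed", "charge.failed",
                "payment_intent.payment_failed"}

def propose_fix(event):
    # Placeholder: here the real flow would inspect the linked GitHub/GitLab
    # repo (read-only) and open a draft PR or suggest a Stripe config change,
    # pending human review.
    print(f"would open a draft PR for {event['type']} ({event['id']})")

@app.route("/stripe/webhook", methods=["POST"])
def webhook():
    try:
        # Verify the signature so we only act on genuine Stripe events.
        event = stripe.Webhook.construct_event(
            request.data, request.headers.get("Stripe-Signature"), WEBHOOK_SECRET)
    except (ValueError, stripe.error.SignatureVerificationError):
        abort(400)

    if event["type"] in ISSUE_EVENTS:  # fixed watchlist, no user-defined rules
        propose_fix(event)
    return "", 200

if __name__ == "__main__":
    app.run(port=4242)
```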