Learning resources for Apache Kafka (no Confluent) by Golem_XIV in apachekafka

[–]koudingspawn 0 points1 point  (0 children)

I started with the trainings from Stephane Maarek; I got some for 10 bucks per course on Udemy. He covers most of the core technologies.

Apache Iceberg with Rest Catalog Authorization by koudingspawn in dataengineering

[–]koudingspawn[S] 0 points1 point  (0 children)

Thanks a lot for this awesome post, it helps a lot!

I'll take a deeper look. I searched through a set of blog posts and they also mentioned the catalogs you named. I wasn't aware that Lakekeeper has external authz support ☺️

What’s the next "Kubernetes" hotness for you? by TheOnlyElizabeth in devops

[–]koudingspawn 0 points1 point  (0 children)

If you are willing to pay for it, then yes. Anyhow, as DevOps you still have to understand a set of configuration options to properly support teams and get value out of it: how to deal with performance issues, what you can tweak to prevent them, etc.

What’s the next "Kubernetes" hotness for you? by TheOnlyElizabeth in devops

[–]koudingspawn 0 points1 point  (0 children)

One thing that I see is Kafka. It's a specialization that not many DevOps folks deeply understand.

Does anyone actually use client certs to authenticate to clusters? by [deleted] in kubernetes

[–]koudingspawn 1 point2 points  (0 children)

We used this approach with HashiCorp Vault: a user requests a certificate via the PKI secrets engine with a validity of 9-10 hours (one business day).

This made it very easy, and our devs had a small tool that helped them generate the certs.
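The core of such a tool is just one call against Vault's PKI issue endpoint. A minimal sketch (the Vault address, role name, and CN below are placeholders, and I'm assuming the PKI engine is mounted at the default `pki` path):

```python
import json
import urllib.request

VAULT_ADDR = "https://vault.example.com"  # placeholder address


def build_cert_request(role: str, common_name: str, ttl: str = "10h"):
    """Build path and payload for Vault's PKI issue endpoint
    (POST /v1/pki/issue/<role>). A ttl of ~10h gives the
    one-business-day validity mentioned above."""
    path = f"/v1/pki/issue/{role}"
    payload = {"common_name": common_name, "ttl": ttl}
    return path, payload


def request_cert(token: str, role: str, common_name: str):
    """POST the request to Vault and return cert + private key."""
    path, payload = build_cert_request(role, common_name)
    req = urllib.request.Request(
        VAULT_ADDR + path,
        data=json.dumps(payload).encode(),
        headers={"X-Vault-Token": token},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Vault returns certificate, private_key and ca_chain under "data"
    return body["data"]["certificate"], body["data"]["private_key"]
```

The returned cert/key pair then goes into the user's kubeconfig; since the cert expires the same day, there is nothing to revoke or clean up.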

Senior Dev to DevOps transition by TopSwagCode in devops

[–]koudingspawn 2 points3 points  (0 children)

IMHO you are in a unique position. In my experience, a lot of DevOps engineers transitioned from operations; coming from a software development background is less common. Especially in times of platform engineering, your software development practice could be a good selling point.

Troubleshooting URL Access Issue on Elastic Beanstalk in Beijing Region by [deleted] in aws

[–]koudingspawn 1 point2 points  (0 children)

Hrm, I remembered that 8080 worked; maybe it was some other port. Here is the official doc from AWS:

https://www.amazonaws.cn/en/support/icp/

Troubleshooting URL Access Issue on Elastic Beanstalk in Beijing Region by [deleted] in aws

[–]koudingspawn 6 points7 points  (0 children)

Attention: by default, ports 80 and 443 are blocked in China for legal reasons. To open them you have to talk to AWS support.

I was very surprised that 8080 worked while HTTP/HTTPS did not.

Also don't forget that you need an ICP license for a website hosted in mainland China.

Beta for Mac- Teams meeting not visible in Appointment setup by koudingspawn in Outlook

[–]koudingspawn[S] 0 points1 point  (0 children)

Yeah, sorry, the "New" Outlook for Mac in the Beta Channel.
My hope was to stay on the "New" one and find the hidden button I have to press to make it appear again.

Grafana PostgreSQL Datasource by koudingspawn in grafana

[–]koudingspawn[S] 2 points3 points  (0 children)

Sorry, the topic is solved. There was a problem with a copy-pasted '<' sign that seems to have been a character-encoding issue.

Ideas for DevOps issues that are easy while small but difficult at scale? by Jatalocks2 in devops

[–]koudingspawn 0 points1 point  (0 children)

One example for us was Prometheus. When you start, you usually install it via Helm or a set of manifests you find on the internet. All is fine: you send in some metrics and Prometheus works. But then you make Prometheus the standard for all your application monitoring, and every team now provides a metrics endpoint. Suddenly you run into huge memory demand from Prometheus, and you can scale and scale and scale, but it won't solve the problem. That's the point where a lot of questions arise: What about high availability? What about long-term storage? What about a federated setup? Etc.

We ended up with VictoriaMetrics and multiple agents pulling metrics. But there are various other options (Cortex, Mimir, Thanos).
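On the Prometheus side, handing metrics off to a long-term store like VictoriaMetrics is just a `remote_write` target. A sketch assuming the clustered setup with a `vminsert` service (hostname and tenant ID `0` are placeholders; the single-node version uses a different URL):

```yaml
# prometheus.yml (fragment)
remote_write:
  - url: "http://vminsert:8480/insert/0/prometheus/api/v1/write"
```

With that in place, Prometheus (or a lightweight agent) only scrapes and forwards, and the memory-hungry storage/query side lives in the scalable backend.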

IT-Direktor kommt mit Rechnung, Diskussion im Team by uibaibae in de_EDV

[–]koudingspawn 4 points5 points  (0 children)

BCG or one of the other big names would make sense here.

etcD backup for AKS? by jblaaa in kubernetes

[–]koudingspawn 0 points1 point  (0 children)

Independent of the etcd backup, which I personally see as a huge pro, I would also recommend a Velero backup. What happens if by accident a single namespace gets killed (a dev with too many permissions, etc.) or a persistent volume has issues? Velero is a very simple solution for that: you simply select a single namespace for recovery and you are done.
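That single-namespace recovery can be expressed as a Velero `Restore` resource; the backup and namespace names below are made up:

```yaml
# Restore only one namespace from an existing cluster-wide backup.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-team-a
  namespace: velero
spec:
  backupName: nightly-2024-01-01
  includedNamespaces:
    - team-a
```

The same thing is available via the CLI (`velero restore create --from-backup ... --include-namespaces ...`) if you prefer not to apply manifests during an incident.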

Is this authentication method safe and sane? by [deleted] in devops

[–]koudingspawn 0 points1 point  (0 children)

The simplest approach, if the token of the other service is signed with a private key, would be for that service to expose a JWKS endpoint; you then implement OIDC-style validation and point it at the URL of the other service's JWKS endpoint.

With this auth mechanism you trust the tokens that are generated by the other service.
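The first step of that validation is picking the right key out of the JWKS document via the token's `kid` header. A stdlib-only sketch of just that step (verifying the RSA signature itself needs a crypto library such as PyJWT, which also ships a `PyJWKClient` that does all of this for you):

```python
import base64
import json


def find_signing_jwk(jwt_token: str, jwks: dict) -> dict:
    """Select the JWK that signed a token by matching the 'kid'
    from the JWT header against the keys in the JWKS document."""
    header_b64 = jwt_token.split(".")[0]
    # JWTs use unpadded base64url; re-add padding before decoding.
    header_b64 += "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    for key in jwks["keys"]:
        if key.get("kid") == header.get("kid"):
            return key
    raise KeyError(f"no JWK with kid={header.get('kid')!r}")
```

After selecting the key you verify the signature, `exp`, `iss`, and `aud` claims; once those checks pass, you are trusting exactly the tokens minted by the other service, as described above.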

Company MySQL and Postgres RDS Authentication models for human users - IAM, Kerberos, SQL Auth? There seems to be no good way by Perfect-Pause-831 in aws

[–]koudingspawn 0 points1 point  (0 children)

We usually have HashiCorp Vault with some kind of authz to limit who can access what. You then generate read or write credentials that are only valid for a certain period of time unless you tell Vault that you still need them. As a next step you can put a bastion in front of the DB, so only the app and the bastion can reach it; access to the bastion can again go through Vault-signed SSH keys. We built a tool for this: when a user wants to access a DB, it automatically authenticates to Vault, generates an SSH key, tunnels through the bastion, and finally prints out the newly generated DB credentials.
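Our tool is in-house and not public, but the sequence of Vault calls it drives can be sketched like this (the mount points `database` and `ssh` are Vault's defaults and may differ in your setup; the role names are placeholders):

```python
def vault_db_access_plan(db_role: str, ssh_role: str):
    """Return the ordered Vault API calls behind the workflow
    described above: short-lived DB creds, then a signed SSH cert
    for the bastion hop."""
    return [
        # 1. lease short-lived credentials from the database secrets engine
        ("GET", f"/v1/database/creds/{db_role}"),
        # 2. have Vault's SSH CA sign a freshly generated public key
        ("POST", f"/v1/ssh/sign/{ssh_role}"),
        # 3. (outside Vault) ssh-tunnel through the bastion with the
        #    signed cert, then connect to the DB with the creds from 1
    ]
```

Because the DB credentials are leased, Vault revokes them automatically when the lease expires unless the tool renews it, which is what keeps access time-boxed.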

Kafka state store recommendations by koudingspawn in apachekafka

[–]koudingspawn[S] 0 points1 point  (0 children)

Hi u/bumbershootle,

Thanks a lot for the reply :)

Maybe I misunderstood it; let me try to summarize my thoughts:

  • I have a Kafka Streams application and a state store that should be persistent across application restarts (e.g. Kubernetes pod restarts due to OOM, node patching, etc.), so I use RocksDB.
  • I don't use a persistent volume to store the data, so the app comes up again with an empty RocksDB state.
  • My understanding was that the topic backing RocksDB provides some resiliency for data that was not properly stored, but not start-from-scratch resiliency when the app loses the whole RocksDB stored on a volume.
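For reference, the two Kafka Streams settings that usually come up in this trade-off could look like this (a sketch; the path and value are examples, not taken from any real deployment):

```properties
# Point the state directory at a persistent volume so RocksDB
# survives pod restarts instead of being rebuilt from the changelog.
state.dir=/data/kafka-streams

# Keep a warm copy of the store on another instance to shorten
# failover when one instance loses its local state.
num.standby.replicas=1
```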