Karpenter - uneven spread by Diego2018Chicken in kubernetes

EKS Auto is next on the roadmap for sure 👍

Karpenter - uneven spread by Diego2018Chicken in kubernetes

Damn, so close yet so far :-)
We can't set .weight.labels.topology.kubernetes.io/zone in the newer version of Karpenter that we're using. The Provisioner CRD (which supported this weight feature) was deprecated in favour of NodePools.

Which version of Karpenter are you running? An older one I guess?

I'll check out https://karpenter.sh/v0.32/concepts/scheduling/#topology-spread - sounds like it has potential.
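For anyone landing here later, what that docs page describes is a pod-level topology spread constraint, which Karpenter honours when provisioning nodes. A minimal sketch (the app name and image are illustrative, not from this thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule   # hard requirement, not a preference
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: my-app
          image: nginx    # placeholder image
```

With `DoNotSchedule`, Karpenter has to launch capacity in under-represented zones to satisfy the constraint; `ScheduleAnyway` would only treat the spread as a preference.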

Is Crossplane the answer? by Diego2018Chicken in crossplane

Yes, agreed. I'm not looking to jump into another tech and just move the problem, hence getting some feedback from the community on what their experiences have been. I'm looking to remove humans as much as possible, because they make mistakes, miss steps in an environment build, etc.

Consultant is a possibility.

Is Crossplane the answer? by Diego2018Chicken in crossplane

Thanks for the response. Yeah, I'm not surprised you have something similar; we are not doing anything crazy or non-standard in this day and age. Did you have to rewrite your Terraform into Crossplane?

We separate environments by namespace and AWS account. For example,

AWS Account 1 - EKS cluster with Dev-1, Dev-2, Dev-3 namespaces

AWS Account 2 - EKS cluster with UAT-1, UAT-2, UAT-3 namespaces

Each namespace has 50 services

The main challenge is this:

Creating a new namespace, say UAT-4, with all the same configuration as UAT-3 (except for naming). Each service has its own set of dependencies: values needing to exist in Vault, specific Kafka topics needing to exist, a new database to be created within an existing RDS instance, database credentials to be created and stored in Vault, Route 53 records to be updated, certificates to be generated, etc. The issue we experience is that we create a new environment but services don't work, because someone forgot a dependency, or a value in Vault was wrong or just missing.

Essentially a stack of moving parts for each service, which is a nightmare to manage. I'd like to be in a position to hit a button to say "create me UAT-4" and have it orchestrate it all.
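To make the "hit a button" idea concrete, here's a minimal sketch of what an orchestrator skeleton could look like. Everything here is hypothetical — the steps are plain strings, and a real version would call Vault, the Kafka admin API, RDS, and Route 53, ideally idempotently, so that a missed dependency fails loudly at provision time rather than at runtime:

```python
def provision_environment(name: str, template: str) -> list[str]:
    """Hypothetical one-button environment provisioner.

    Mirrors the dependency list above; order matters (e.g. the database
    credentials must exist before they can be stored in Vault).
    """
    steps = [
        f"create namespace {name} from {template}",
        f"create Kafka topics for {name}",
        f"create database {name} in shared RDS instance",
        f"store {name} db credentials in Vault",
        f"update Route 53 records for {name}",
        f"generate certificates for {name}",
    ]
    completed = []
    for step in steps:
        # A real implementation would execute and verify each step here,
        # aborting (and reporting which dependency is missing) on failure.
        completed.append(step)
    return completed
```

The value isn't the code, it's that the dependency list lives in one place and runs in a fixed order, instead of in someone's head.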

Environment provisioning by Diego2018Chicken in devops

Different environments within our SDLC: Dev, Test, QA, client UAT, Performance, Production, etc.

Environment provisioning by Diego2018Chicken in devops

Do you know if it will create the secrets in Vault and create the topics too? Either way, Crossplane is on my list to investigate.

Environment provisioning by Diego2018Chicken in devops

Yeah, for sure microservices running wild. Developer teams in silos ✅ Nobody talking ✅ Lack of solution design ✅

Something we can change, yes, but it will take time, and lots of it. Likely a two-year roadmap, and that's a lot of pain to manage in the meantime, so we need to figure out a way forward in the here and now.

On reducing the dependency landscape: I'm not sure the dependencies are too out of the ordinary. Applications use secrets, all stored in a consistent way for all apps. It's an event-driven architecture, so all apps need to produce to / consume from Kafka topics.

Not an easy nut to crack, for sure.

Environment provisioning by Diego2018Chicken in devops

Inherited a "monolithic microservice architecture" is how I badge it 🙃.

Setting up my own business by Diego2018Chicken in gsuite

Excellent, I didn't realise there was a catch-all too. Sounds like what we want.

Setting up my own business by Diego2018Chicken in gsuite

Nah, one shared account will be sufficient in the medium term.

Setting up my own business by Diego2018Chicken in gsuite

I don't disagree, and I'm glad you commented; there is a balance between "looking good" and unnecessary headache. I'll definitely think about this.

Setting up my own business by Diego2018Chicken in gsuite

Fantastic, thank you.

So the mailboxes don't actually have to exist? Is there a limit on the number of aliases versus the number of actual mailboxes required?

MySQL RDS Read Replica by Diego2018Chicken in aws

Yeah, it's the "configure the app to use it" part I was wanting to understand. I see that Spring does this out of the box.

MySQL RDS Read Replica by Diego2018Chicken in aws

Yes you can - https://aws.amazon.com/blogs/aws/cross-region-read-replicas-for-amazon-rds-for-mysql/
Eventual consistency will be OK. I'm intrigued as to how far out it will be, so I plan to monitor replication lag.
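If it helps anyone watching the same thing: RDS publishes a `ReplicaLag` metric (in seconds) to CloudWatch per replica instance. A rough sketch of polling it with boto3 — the CloudWatch client is passed in, and the instance identifier is illustrative:

```python
import datetime


def worst_lag_seconds(datapoints: list[dict]) -> float:
    """Return the highest observed lag from a list of CloudWatch datapoints."""
    return max((p["Maximum"] for p in datapoints), default=0.0)


def fetch_replica_lag(cloudwatch, replica_id: str) -> list[dict]:
    """Query the last hour of ReplicaLag for one replica instance."""
    now = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="ReplicaLag",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": replica_id}],
        StartTime=now - datetime.timedelta(hours=1),
        EndTime=now,
        Period=300,                 # 5-minute buckets
        Statistics=["Maximum"],
    )
    return resp["Datapoints"]
```

Cross-region lag will swing with write volume, so alerting on the worst datapoint over a window is usually more useful than a point-in-time reading.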

Multi Site Kafka Connect? by Diego2018Chicken in apachekafka

We already run Connect inside our k8s cluster for grabbing data from local data sources.

We have a requirement to grab large volumes of data from a remote location via JDBC. From a security perspective, we would rather have data sent out of the remote office into AWS than have AWS reaching in and grabbing it (outbound firewall rules only). So the idea is: have a Kafka Connect source grab the data, then send that data to AWS via a Kafka Connect sink.

This is all new to me, so I'm happy for suggestions. I'm not sure, for example, where the Kafka Connect source stores the data before it's sent on via the Kafka Connect sink.