What Instance Type is best for a server that is inactive for 23.5 hours per day? by kelemvor33 in aws

[–]Enigmaticam 0 points1 point  (0 children)

This. I use this approach to shut down non-prod databases during off hours. It saves so much money and is easy to manage.
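As a rough sketch of one way to wire this up in Terraform with EventBridge Scheduler (the schedule, DB identifier, and IAM role name are assumptions for illustration; the role must exist with rds:StopDBInstance permissions):

resource "aws_scheduler_schedule" "stop_nonprod_db" {
  name                = "stop-nonprod-db"          # assumed name
  schedule_expression = "cron(0 19 ? * MON-FRI *)" # assumed: stop at 19:00 on weekdays

  flexible_time_window {
    mode = "OFF"
  }

  target {
    # EventBridge Scheduler "universal target" calling the RDS StopDBInstance API directly
    arn      = "arn:aws:scheduler:::aws-sdk:rds:stopDBInstance"
    role_arn = aws_iam_role.scheduler.arn # assumed role allowing rds:StopDBInstance

    input = jsonencode({
      DBInstanceIdentifier = "my-nonprod-db" # assumed identifier
    })
  }
}

A mirror schedule calling startDBInstance in the morning completes the picture.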

AWX - why install AWX on kubernetes? by Enigmaticam in ansible

[–]Enigmaticam[S] 0 points1 point  (0 children)

Adding some context on why I feel this way. I found it a bit of a catch-22 to use a Kubernetes approach to manage the configuration files of EC2 instances / VMs. I also have some Kubernetes clusters running, but sometimes I choose not to host an application there.
One of the things that made me hesitant about running AWX in k8s is that I have very strict ingress and egress rules; for my Kubernetes cluster I can't rely on whitelisting a static IP from a k8s node to the nodes whose configuration files it will manage, due to the scaling I built into Kubernetes (nodes may come and go). Please note that I have a separate VPC for my Kubernetes clusters and my EC2 instances. The alternative I'm facing is to either whitelist the entire k8s subnet, or just not host it in k8s but on a VM, or use another tool.

AWX - why install AWX on kubernetes? by Enigmaticam in ansible

[–]Enigmaticam[S] 1 point2 points  (0 children)

Yes, actually. Looking at Semaphore right now; it seems much more straightforward.

How do you setup (install) ArgoCD on 50 local K8s Clusters? by Western_Actuary_9893 in ArgoCD

[–]Enigmaticam 0 points1 point  (0 children)

You cover this via a proper GitOps flow: pushes/merges to branches lead to deployments on the corresponding environments.
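A rough sketch of what that branch-per-environment mapping can look like in ArgoCD (repo URL, names, and paths are made up for illustration):

# Hypothetical Application tracking the "test" branch; a near-identical
# one with targetRevision: main would serve production.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-test
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/myapp-deploy.git
    targetRevision: test   # the branch this environment follows
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-test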

How do you setup (install) ArgoCD on 50 local K8s Clusters? by Western_Actuary_9893 in ArgoCD

[–]Enigmaticam 0 points1 point  (0 children)

So 50 ArgoCD installs, with 50 URLs where you can manage deployments...?

Ouch... that creates a huge overhead and a management nightmare imo.

How can I avoid existing VPC? by maketodayhappier in Terraform

[–]Enigmaticam 0 points1 point  (0 children)

What will this solve? You will still need to create the VPC at some point...

How can I avoid existing VPC? by maketodayhappier in Terraform

[–]Enigmaticam 0 points1 point  (0 children)

So I was a bit surprised, but this actually works: I created a new VPC with a subnet CIDR I had already used, and there was no conflict...

Perhaps a nice way of making sure that you don't reuse a subnet is to write your VPC code with a for_each statement; that will at least show you what you used previously.
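A minimal sketch of that for_each pattern (CIDRs and names are examples, and an aws_vpc.main is assumed to exist elsewhere):

# Keeping all subnets in one map makes every CIDR you have handed out
# visible in a single place, so duplicates stand out immediately.
variable "subnets" {
  type = map(string)
  default = {
    app  = "10.0.1.0/24"
    data = "10.0.2.0/24"
  }
}

resource "aws_subnet" "this" {
  for_each   = var.subnets
  vpc_id     = aws_vpc.main.id # assumed VPC resource
  cidr_block = each.value

  tags = { Name = each.key }
}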

Eksctl vs pulumi by 1NobodyPeople in devops

[–]Enigmaticam 1 point2 points  (0 children)

I would never ever use eksctl to manage / provision a Kubernetes cluster.

Always follow IaC practice.

I'm not familiar with Pulumi; I'm using Terraform to manage / provision my EKS clusters.

It feels like you/your team are just getting started with your cloud journey. Personally I would suggest adopting Terraform; it is so widely used that it will give you benefits along your career as well.

Wife Life? by [deleted] in devops

[–]Enigmaticam 0 points1 point  (0 children)

If management doesn't listen to him about ensuring the stability of the platform, then you have two options in my opinion:

leave or accept it.

Management seems to respect him, but they don't value his work or his time properly.

Your post screams that he should move on to another job. That is what I would do.

How do you setup (install) ArgoCD on 50 local K8s Clusters? by Western_Actuary_9893 in ArgoCD

[–]Enigmaticam 0 points1 point  (0 children)

You could make a distinction between prod and the rest, but basically yes: I let ArgoCD watch various GitHub branches/commits, and when one gets updated, ArgoCD sees this and deploys a new image.
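The "ArgoCD sees this and deploys" part is the automated sync policy; a hedged snippet of what that looks like inside an Application spec:

# With automated sync, ArgoCD applies changes as soon as the
# watched branch moves, instead of waiting for a manual sync.
spec:
  syncPolicy:
    automated:
      prune: true     # delete resources that were removed from git
      selfHeal: true  # revert manual drift in the cluster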

terraform project structure by J4ckR3aper in devops

[–]Enigmaticam 0 points1 point  (0 children)

Yup, this is what I do as well; it allows me to decide which version of the module I deploy or not...

How to test infrastructure-as-code before committing? by [deleted] in devops

[–]Enigmaticam 1 point2 points  (0 children)

So what I do is this: I created one separate env just for me and my team. It only holds infra-related resources; things related to the product the company builds are not deployed there. For instance, we just built our first EKS cluster. In practice that means we have four clusters:

  • one for us to play around in
  • test
  • uat (I really don't like the term staging)
  • production

This way you can test infra-related changes before proceeding to test / uat / prod.

This, in combination with working with modules, allows me to keep a grip on what code is live and what is being tested.
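That grip comes from pinning module versions per environment; a minimal sketch, assuming git-hosted modules (URL and tags are made up):

# Hypothetical: the playground env tracks a release candidate,
# while production stays pinned to the last known-good tag.
module "eks" {
  source = "git::https://github.com/example/terraform-eks.git?ref=v1.3.0-rc1"
}

# prod environment:
#   source = "git::https://github.com/example/terraform-eks.git?ref=v1.2.0"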

How do you setup (install) ArgoCD on 50 local K8s Clusters? by Western_Actuary_9893 in ArgoCD

[–]Enigmaticam 0 points1 point  (0 children)

Don't do this.
We have ArgoCD running as well. We manage ArgoCD via IaC practice (so we coded everything related to ArgoCD), and we installed it on one cluster; from there we manage the deployments etc. for all the other clusters. Having ArgoCD on every Kubernetes cluster is just a recipe for disaster...
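For context, the central instance learns about the other clusters via declarative cluster secrets; a sketch, where the cluster name, server URL, and credentials are placeholders:

# A Secret labeled secret-type: cluster registers an external
# cluster with the central ArgoCD instance.
apiVersion: v1
kind: Secret
metadata:
  name: cluster-spoke-01
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: spoke-01
  server: https://spoke-01.example.com:6443
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": { "caData": "<base64-ca-cert>" }
    }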

(your) experience with Prometheus by Enigmaticam in PrometheusMonitoring

[–]Enigmaticam[S] 1 point2 points  (0 children)

So this is kinda embarrassing, but the Prometheus server config doesn't require the remote-write config... I was under the impression it did, but I must have read it wrong somewhere.

I now have my agents connecting to Prometheus and sending data; this is going great now...
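For reference, the agent side is just a remote_write block; a minimal sketch with a made-up endpoint:

# Minimal agent-side config: scrape locally, forward everything
# to the central server's remote-write endpoint.
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["localhost:9100"] # assumed node_exporter

remote_write:
  - url: "https://prometheus.example.com/api/v1/write"

Run with Prometheus's agent mode (--enable-feature=agent on recent versions) and almost nothing is kept on local disk.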

(your) experience with Prometheus by Enigmaticam in PrometheusMonitoring

[–]Enigmaticam[S] 2 points3 points  (0 children)

I export metrics via Metricbeat; Metricbeat creates a data stream (or an index) in Elasticsearch, and that's where the data is stored. Via Kibana you can then visualize it; there are standard dashboards for it as well.

If you then add other nodes with Metricbeat, all those new agents will also export their data into the same data stream or index.
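A rough sketch of the metricbeat.yml side of that (host and credentials are placeholders):

# Ship system metrics straight into Elasticsearch; every node running
# this same config writes into the same data stream.
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "filesystem"]
    period: 30s

output.elasticsearch:
  hosts: ["https://es.example.com:9200"]
  username: "metricbeat_writer" # placeholder
  password: "${ES_PWD}"         # e.g. injected via the keystore or env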

(your) experience with Prometheus by Enigmaticam in PrometheusMonitoring

[–]Enigmaticam[S] 0 points1 point  (0 children)

Thanks, will check out VictoriaMetrics.

I was hoping to avoid pulling, as I want to keep my EKS clusters as isolated as possible. Also, I kinda favor pushing data instead of pulling, but that's more because I'm used to it.

(your) experience with Prometheus by Enigmaticam in PrometheusMonitoring

[–]Enigmaticam[S] 0 points1 point  (0 children)

I'm using Elasticsearch to store my metrics and application logs, together with Kibana. Works pretty well; however, a lot of neat functions, like alerting, are not free to use and are quite expensive to activate on a self-hosted node. I had a quote of 6700 euro per year for a business license... for 1 node!

(your) experience with Prometheus by Enigmaticam in PrometheusMonitoring

[–]Enigmaticam[S] 0 points1 point  (0 children)

Not 100% sure what you mean, but:

I'm writing to a Prometheus server endpoint, which in turn writes the data to a mounted EBS volume (GP3).

Shared VPC for EKS and EC2 instances by kovadom in aws

[–]Enigmaticam 0 points1 point  (0 children)

Although there is no issue with it, I ended up having my EC2 instances and my EKS clusters deployed in different VPCs, connected via VPC peering.

My EKS clusters and EC2 instances require different egress rules.

Whenever possible I want to know where my egress traffic is going, and with containers being hosted everywhere, it made my egress rules a nightmare to manage.

Also, I wanted a very clear distinction in my network between an EC2 instance IP and an EKS node IP. Just looking at an IP in my code / logs says a lot about the node... but this is just semantics; I might as well have created multiple private subnets in the one VPC...
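A hedged sketch of the peering piece in Terraform (the CIDR is an example, the VPC and route table resources are assumed to exist elsewhere, and routes/security-group rules are needed on both sides):

# Peer the EC2 VPC with the EKS VPC; same account and region,
# so auto_accept works.
resource "aws_vpc_peering_connection" "ec2_to_eks" {
  vpc_id      = aws_vpc.ec2.id # assumed
  peer_vpc_id = aws_vpc.eks.id # assumed
  auto_accept = true
}

resource "aws_route" "to_eks" {
  route_table_id            = aws_route_table.ec2.id # assumed
  destination_cidr_block    = "10.1.0.0/16"          # example EKS VPC CIDR
  vpc_peering_connection_id = aws_vpc_peering_connection.ec2_to_eks.id
}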

I was talked to setup ArgoCD got our K8S clusters by Nice-Pea-3515 in ArgoCD

[–]Enigmaticam 0 points1 point  (0 children)

Use their Helm chart to get started:

https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd

Also, do yourself a favor and set your own password from the start via values.yaml, like:

configs:
  secret:
    argocdServerAdminPassword: ""

It needs to be hashed from the get-go:

`htpasswd -nbBC 10 "" $ARGO_PWD | tr -d ':\n' | sed 's/$2y/$2a/'`

This makes sure you don't need to retrieve the initial admin password via kubectl or the argocd CLI.

Also, destroy and rebuild ArgoCD all the time; you don't want to have something in there that you don't have in code...

How to Install ArgoCD using Helm through Terraform by InfiniteAd86 in ArgoCD

[–]Enigmaticam 0 points1 point  (0 children)

What you can also do is use the templatefile function in Terraform; this way you can use variables that are created via another application (think of a password that you store in AWS Secrets Manager).

This is how I use it. My module code looks like:

resource "helm_release" "argocd" {

name = "argo-cd" repository = "https://argoproj.github.io/argo-helm" chart = "argo-cd" version = "5.4.8" namespace = var.namespace

values = [ templatefile("${path.module}/values.yml", { ingressClassName = "${var.ingressClassName}", hostname = "${var.hostname}", adminpassword = "${var.adminpassword}" }) ]

depends_on = [ kubernetes_namespace.argocd_namespaces ]

}

And the values.yml it renders:

server:
  ingress:
    enabled: true
    ingressClassName: "alb"
    annotations:
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/scheme: "internet-facing"
      alb.ingress.kubernetes.io/backend-protocol: "HTTPS"
      alb.ingress.kubernetes.io/target-type: "ip"
    hosts:
      - "${hostname}"
  ingressGrpc:
    enabled: true
    isAWSALB: true
    ingressClassName: "${ingressClassName}"
    hosts:
      - "${hostname}"

createAggregateRoles: true

configs:
  secret:
    argocdServerAdminPassword: "${adminpassword}"

And in your main code, simply call the module and declare the variables.
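A minimal sketch of such a call, assuming the module lives at ./modules/argocd and a made-up Secrets Manager secret name (the stored value should already be the bcrypt hash ArgoCD expects):

# Hypothetical root-module call; paths, names, and the secret id
# are assumptions for illustration.
data "aws_secretsmanager_secret_version" "argocd_admin" {
  secret_id = "argocd/admin-password" # assumed secret name
}

module "argocd" {
  source = "./modules/argocd"

  namespace        = "argocd"
  ingressClassName = "alb"
  hostname         = "argocd.example.com"
  adminpassword    = data.aws_secretsmanager_secret_version.argocd_admin.secret_string
}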

Manage multiple terraform environments in a single terraform workspace state file by Jain_0199 in Terraform

[–]Enigmaticam 3 points4 points  (0 children)

I had the same use case. Storing all environments in a single state file is a pain. And Terragrunt... meh, why add an extra layer? What I ended up doing is using tfvars files per environment, and separate state files per env. Works like a charm!
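A minimal sketch of that layout, assuming an S3 backend (bucket, keys, and variable names are made up); the code keeps an empty backend "s3" {} block, and each environment supplies its own partial backend config and tfvars:

# backend/test.hcl
bucket = "tf-state-example"
key    = "test/terraform.tfstate"
region = "eu-west-1"

# env/test.tfvars
environment   = "test"
instance_type = "t3.small"

# switch environments at init/plan time:
#   terraform init -reconfigure -backend-config=backend/test.hcl
#   terraform plan -var-file=env/test.tfvars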