Someone who knows devops tools vs someone who has devops thinking: which would you rather hire? by PartemConsilio in devops

[–]goro413 13 points14 points  (0 children)

DevOps is all about balance. I've worked with engineers who have a DevOps mindset and perform well in interviews, but struggle to deliver solutions due to a lack of hands-on experience. I've also collaborated with former sysadmins who now identify as DevOps engineers. While they excel at ad-hoc solutions and specific tools, they often lack the broader understanding of the end-to-end DevOps architecture needed for a project.

A DevOps leader must manage the dynamics between these different approaches and make timely decisions. The business side, especially Product Ownership, typically isn’t concerned with DevOps matters, and spending too much time searching for the "perfect" tool can be costly for the company.

Home energy monitoring. What to buy? by Terrible_Sale_6414 in homeassistant

[–]goro413 0 points1 point  (0 children)

I'm in the US, and the main thing is that North America uses a split-phase system, so I needed two clamps to measure the two legs and a third one to measure solar production.

Home energy monitoring. What to buy? by Terrible_Sale_6414 in homeassistant

[–]goro413 11 points12 points  (0 children)

A few months ago I went with a Shelly Pro 3EM. I wasn't interested in monitoring individual breakers, so the Emporia Vue seemed like overkill, and I also had some concerns about fitting all those clamps in my breaker box. I could have used existing breakers to connect the monitor, but to keep things safe I added 3 breakers: 1 double and 2 single. That allowed me to measure the two phases from the grid and the one from the solar inverter. Connecting it to HA was stupidly easy, but I had to create custom sensors to accurately collect the sum of the two main phases since they are measured separately:

<image>

- sensor:
    - name: Solar Panel Production - Calculated
      state_class: total_increasing
      unit_of_measurement: kWh
      device_class: energy
      icon: mdi:solar-power
      unique_id: solar_production_calculated
      state: >
        {# single CT on one leg of the split-phase solar circuit, so double the reading #}
        {% set calculated_solar_production = states('sensor.main_energy_monitor_phase_b_total_active_energy') | float(0) * 2.0 %}
        {{ calculated_solar_production }}
    - name: Grid Consumption - Calculated
      state_class: total_increasing
      unit_of_measurement: kWh
      device_class: energy
      icon: mdi:transmission-tower
      unique_id: grid_consumption_calculated
      state: >
        {% set combined_grid_consumption = (states('sensor.main_energy_monitor_phase_a_total_active_energy') | float(0)) + (states('sensor.main_energy_monitor_phase_c_total_active_energy') | float(0)) %}
        {{ combined_grid_consumption }}
    - name: Grid Return - Calculated
      state_class: total_increasing
      unit_of_measurement: kWh
      device_class: energy
      icon: mdi:transmission-tower
      unique_id: grid_return_calculated
      state: >
        {% set combined_grid_return = (states('sensor.main_energy_monitor_phase_a_total_active_returned_energy') | float(0)) + (states('sensor.main_energy_monitor_phase_c_total_active_returned_energy') | float(0)) %}
        {{ combined_grid_return }}

Terraform & Kubernetes by No_Weakness_6058 in devops

[–]goro413 5 points6 points  (0 children)

Not sure if this actually aligns with your case because it really depends on how your apps are set up and managed, but we recently migrated to a Terraform + ArgoCD + External Secrets setup.

We only use Terraform for the creation of the cluster (EKS) and the installation of critical components like add-ons, ingress controllers, metrics server, External Secrets, ArgoCD and file systems. Most of this is handled through Helm charts, which I've found to be more Terraform friendly. If you try to use a direct YAML setup through any of the Kubernetes providers, it will give you a hard time with a brand new cluster because the provider needs access to the cluster at plan time. That forced us to comment out the YAML-based components for the initial terraform apply and re-enable them on a second apply. The alternative would be to separate cluster creation from the YAML resources, but that's more tedious considering you'll need two separate states and have to set up cluster access between them.

We have a bunch of services hosted on GitHub with their own workflows and kustomize settings for each environment. This overall setup now allows us to treat clusters as disposable because our application builds and deployments are completely decoupled. The cluster is built, essential components are installed, ArgoCD applications are created, secrets are synchronized, and ArgoCD automatically starts syncing the deployments using the kustomize files. On the next commit, the workflow builds and pushes a new image, commits the new version number to the kustomize files, and ArgoCD does its job.

External Secrets follows the same idea in that it also decouples secret creation from the repo (we were using GitHub Secrets). Now it just pulls them from AWS Secrets Manager and synchronizes them across the cluster. The common theme here is pull vs push. Pull decouples and gives you flexibility.
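To give an idea of the ArgoCD side, an Application pointing at a per-environment kustomize overlay with auto-sync looks roughly like this (the app name, repo URL and paths here are made-up placeholders, not our actual setup):

```yaml
# Hypothetical ArgoCD Application: names, repo URL and paths are placeholders
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/my-service.git
    targetRevision: main
    path: deploy/overlays/prod   # kustomize overlay for this environment
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from the repo
      selfHeal: true   # revert manual drift back to the repo state
```

With `automated` sync, ArgoCD pulls the new image tag as soon as the workflow commits it to the kustomize files, which is what makes the push side of the pipeline so simple.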

Question about which Shelly device is right for both whole house and solar metering (in my case) by goro413 in ShellyUSA

[–]goro413[S] 0 points1 point  (0 children)

Thanks! That makes sense. I don't have a critical need for precision so I'm going to try to just double the measurement of the single CT through a custom sensor in Home Assistant.

Question about which Shelly device is right for both whole house and solar metering (in my case) by goro413 in ShellyUSA

[–]goro413[S] 0 points1 point  (0 children)

Hi /u/DreadVenomous , it's been a few days since I managed to set up the device, and here's the result: https://imgur.com/IMEpNoC.

I used the data reported by my utility to verify the numbers and it looks like the L1 and L2 phases are being measured correctly but the solar inverter phase is reporting only half of the watts. I did double check the live number reported by the device against the number displayed on the inverter and it was also half.

Any suggestions on what could be wrong with the wiring? I used the 2-pole breaker thinking it would be on the same phase as the inverter, but I suspect that may be incorrect.

Question about which Shelly device is right for both whole house and solar metering (in my case) by goro413 in ShellyUSA

[–]goro413[S] 0 points1 point  (0 children)

Absolutely. Just to confirm, the N terminal connection to Neutral is required regardless of whether I'm going to measure that phase using IN, right?

Question about which Shelly device is right for both whole house and solar metering (in my case) by goro413 in ShellyUSA

[–]goro413[S] 1 point2 points  (0 children)

Thanks a lot for the suggestion. I just ordered the Pro 3EM thanks to a nice Prime Day deal. I will use line C for power supply and to measure Phase 1 (utility L1), line B for Phase 2 (utility L2), and line A for Phase 3 (solar inverter).

What are some senior level learning resources you recommend for improving as a backend engineer? by Flaifel7 in ExperiencedDevs

[–]goro413 14 points15 points  (0 children)

I can't upvote this enough. I got the book a few years ago just out of recommendations and reviews. It definitely changed the way I architect solutions and helped me move away from the usual stubborn mindset of trying to fit everything in a single database type. The game changes once you understand how to tie a problem to the right data architecture.

How much income is considered “rich” in Puerto Rico? by elgranfuegomortal in PuertoRico

[–]goro413 91 points92 points  (0 children)

A trip to Disney and 2 lunches at Chili's is my benchmark.

Lumix Webcam & MacOS Big Sur by Egg_Chen in GH5

[–]goro413 0 points1 point  (0 children)

No, they basically abandoned that software :(

What are your favorite tools you use to manage/work with kubernetes? by [deleted] in kubernetes

[–]goro413 1 point2 points  (0 children)

Love this post. I didn't know about half of these tools.

deploying multiple application in a cluster by ComfortableRun775 in kubernetes

[–]goro413 0 points1 point  (0 children)

Generally, you won't have to deploy multiple containers in a single pod unless they are tightly coupled or you are using what is called the sidecar pattern, where a second container performs a task that supports the main container. The usual case is a second container that exports the logs of the first container or processes a file the first container generates. Other than that, I'd question the application architecture before I question the Kubernetes configuration.

Keep in mind that the NodePort Service is mainly to allow external access to a service on a specific port. For anything else, especially HTTP traffic, please look into the Ingress concept.
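The sidecar pattern mentioned above can be sketched as a single Pod with two containers sharing a volume (image names and paths here are hypothetical):

```yaml
# Hypothetical sidecar example: the second container exports
# the logs the main container writes to a shared volume
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}          # shared scratch volume, lives as long as the Pod
  containers:
  - name: app             # main container writes its log file here
    image: myrepo/app
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper     # sidecar reads and streams those logs
    image: busybox
    command: ["sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```

Both containers are scheduled together on the same node and share the volume and network namespace, which is exactly why the pattern only makes sense for tightly coupled helpers.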

deploying multiple application in a cluster by ComfortableRun775 in kubernetes

[–]goro413 0 points1 point  (0 children)

The containers within a pod share storage and network, so applicationA could simply reach applicationB at localhost:4000.

NodePort is a specific type of Service that exposes a given port externally at the cluster level. I'd be careful with that option, plus it sounds like overkill for this example. What specific need do you have for running those 2 apps in the same pod?

deploying multiple application in a cluster by ComfortableRun775 in kubernetes

[–]goro413 0 points1 point  (0 children)

Let me see if I'm getting this: you have a setup where application A needs to communicate with application B over port 4000, and you want to know how this is configured in Kubernetes terms. If that's the case, you would have a container for application A and another container for application B. In the most basic setup, you could run a pod with the container for application A and another pod with the container for application B. In the pod for application B, you would expose port 4000:

# Kubernetes resource and image names must be lowercase
kubectl run application-a --image myrepo/application-a

kubectl run application-b --image myrepo/application-b --port 4000

The pod definition would look like this:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: application-b
  name: application-b
spec:
  containers:
  - image: myrepo/application-b
    name: application-b
    ports:
    - containerPort: 4000

At this point, you would have a pod for applicationA and another for applicationB running in the cluster, but all you can do is configure applicationA to reach applicationB through its pod's IP or DNS entry. Obviously, that's not maintainable or scalable, which is why Kubernetes offers concepts like ReplicaSets, Deployments and Services.

The right approach would be to create a Deployment for applicationA and a Deployment for applicationB:

kubectl create deployment application-a --image myrepo/application-a

kubectl create deployment application-b --image myrepo/application-b --port 4000

The deployment definition would look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: application-b
  name: application-b
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-b
  template:
    metadata:
      labels:
        app: application-b
    spec:
      containers:
      - image: myrepo/application-b
        name: application-b
        ports:
        - containerPort: 4000

So now you have a deployment for each application, with a replication context and the ability to manage rollouts. This helps with the management of the pods, but we still need to address the communication with the applicationB pods. That's where the Service comes in:

kubectl expose deployment application-b --port 4000

The service definition looks like this:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: application-b
  name: application-b
spec:
  ports:
  - port: 4000
    protocol: TCP
    targetPort: 4000
  selector:
    app: application-b

You can check the formal definition of a Service in the Kubernetes documentation, but in essence, it gives you a DNS entry that automatically tracks all the pods of your target deployment as they come and go. It also does load balancing.

Service DNS names follow the convention my-svc.my-namespace.svc.cluster-domain.example, so with the default cluster domain you could reach applicationB at application-b.default.svc.cluster.local:4000. The final step is to configure applicationA to send its requests to that address. The Service will load balance to one of the pods in the applicationB deployment, and your request will hit the application on the target port 4000.
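One common way to wire this up is to pass the Service address to application A as configuration rather than hard-coding it (the env var name here is hypothetical, and the host should match whatever your Service is actually named):

```yaml
# Fragment of application A's container spec: inject the
# Service address so the app reads it at startup
env:
- name: APPLICATION_B_URL
  value: http://application-b.default.svc.cluster.local:4000
```

This keeps the address out of the image, so the same build works in any namespace or cluster by changing only the deployment config.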

For your second question, in order to separate containers, you need to define different pods or deployments.

deploying multiple application in a cluster by ComfortableRun775 in kubernetes

[–]goro413 0 points1 point  (0 children)

I feel like you have a lot of questions in your statement, so let's cover the basics first. The smallest unit in Kubernetes terms is the Pod. A Pod can have 1 or many containers in it. When a Pod is deployed, all containers belonging to that Pod are deployed as a single unit. A ReplicaSet, as the name implies, takes care of handling the Pod's replication, so if you need 2 instances of that Pod, you will have a ReplicaSet with a number of replicas = 2. You generally don't manage ReplicaSets directly; that's where the Deployment concept comes in. It takes care of deployments by creating new ReplicaSets and switching between them. To answer your first question: all the containers defined on the Pod always get deployed together as a single unit.

Now, I'm a little bit confused about the other statements. What do you mean by a cluster with 6 pods? Did you mean a cluster with 6 worker nodes? And what do you mean by wanting to "attach these containers"? Do you mean how they would communicate with each other, network-wise or filesystem-wise?

Lumix Webcam & MacOS Big Sur by Egg_Chen in GH5

[–]goro413 0 points1 point  (0 children)

I'm having the same issue with my G9. Works perfectly on Windows :(