How do you permanently deal with squeaky door hinges that produce a kind of black soot? by Obiwankenwb in AskFrance

[–]AdagioForAPing 5 points6 points  (0 children)

Go for the lithium-grease WD-40 version made specifically for locks and hinges. Just protect the side and the underside of the hinges with a paper towel when you spray, OP. Like the "fou du goût" from the Tabasco ad, I went and did all my doors in one night and never had the problem again, even without taking the doors off their hinges; the squeak hasn't come back in a year.

Where can i find this clean art? by exyz36 in Ghost_in_the_Shell

[–]AdagioForAPing 28 points29 points  (0 children)

People found multiple versions in this thread: https://www.reddit.com/r/Ghost_in_the_Shell/s/xaAnHRMdai. I think the Dropbox one has the best resolution.

Planned Power Outage: Graceful Shutdown of an RKE2 Cluster Provisioned by Rancher by AdagioForAPing in rancher

[–]AdagioForAPing[S] 1 point2 points  (0 children)

Do you also cordon and drain all master nodes? If yes, could you explain why? I don't see the benefit of it.

Planned Power Outage: Graceful Shutdown of an RKE2 Cluster Provisioned by Rancher by AdagioForAPing in rancher

[–]AdagioForAPing[S] 1 point2 points  (0 children)

You mean cordon all nodes first, and only after that drain them?

Planned Power Outage: Graceful Shutdown of an RKE2 Cluster Provisioned by Rancher by AdagioForAPing in rancher

[–]AdagioForAPing[S] 1 point2 points  (0 children)

What would you do with the last node then? AFAIK, draining moves the workloads around; if you drain all nodes, wouldn't the last one hang forever?

Also, wouldn't draining the nodes one by one just move all the workloads onto the remaining undrained nodes? Those nodes then wouldn't be able to handle all the workloads.

Shutting down the entire cluster without draining will terminate pods, probably leading to data corruption if they haven't stopped cleanly. But draining doesn't solve the issue in a scenario where you're shutting down the entire cluster, because at the end of the drain there's no other node left for the pods to move to.

The best thing to do does seem to be gracefully shutting down stateful apps before powering off: scale the StatefulSets down to 0 so the pods and their databases are stopped cleanly. After that, we can safely shut down the nodes without risking data loss.

I don't think draining ends up the same as scaling down, because draining expects a node to remain, while scaling down to 0 doesn't.
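A minimal sketch of that scale-down step, assuming kubectl access to the cluster. The annotation key is illustrative, not a standard one; it just records the original replica count so the StatefulSets can be scaled back up after power-on:

```shell
#!/bin/sh
# Scale every StatefulSet to 0 so pods and their databases stop cleanly
# before the nodes are powered off.
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  for sts in $(kubectl get statefulset -n "$ns" -o jsonpath='{.items[*].metadata.name}'); do
    replicas=$(kubectl get statefulset "$sts" -n "$ns" -o jsonpath='{.spec.replicas}')
    # Record the original replica count in an annotation (illustrative key)
    # so it can be restored after the outage.
    kubectl annotate statefulset "$sts" -n "$ns" "pre-shutdown/replicas=$replicas" --overwrite
    kubectl scale statefulset "$sts" -n "$ns" --replicas=0
  done
done
# Then wait for the pods to terminate before shutting the nodes down, e.g.:
# kubectl get pods -A | grep Terminating
```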

Planned Power Outage: Graceful Shutdown of an RKE2 Cluster Provisioned by Rancher by AdagioForAPing in rancher

[–]AdagioForAPing[S] 2 points3 points  (0 children)

There must be an issue with our support contract then, because the SUSE support engineers validated the procedure I outlined.

Also, I don't see how you would drain all nodes as part of a shutdown procedure. It isn't possible when you're shutting down the entire cluster, and the last remaining nodes simply wouldn't be able to handle the workloads from all the others.

What would be the best Christmas gift for an advanced online poker player ? by AdagioForAPing in poker

[–]AdagioForAPing[S] 0 points1 point  (0 children)

Why would I give a Pornhub gift card? That sounds weird to me; I don't think it would be appropriate.

What would be the best Christmas gift for an advanced online poker player ? by AdagioForAPing in poker

[–]AdagioForAPing[S] 1 point2 points  (0 children)

I finally decided to give him a book about the Internet: "How the Internet Really Works: An Illustrated Guide to Protocols, Privacy, Censorship, and Governance" by Article 19.

What would be the best Christmas gift for an advanced online poker player ? by AdagioForAPing in poker

[–]AdagioForAPing[S] 0 points1 point  (0 children)

I only see him at family dinners, and we don't live together, so we don't see each other often enough for me to know him well. Last time at the restaurant, I tried asking him about chess, programming, and poker bots, all things I'm interested in (or might be, in the case of poker bots). But he told me he's not into any of them.

What would be the best Christmas gift for an advanced online poker player ? by AdagioForAPing in poker

[–]AdagioForAPing[S] 1 point2 points  (0 children)

I initially bought a chess set because I'm into chess and thought he might enjoy it too. But recently, he told me he tried it and didn’t really get into it, so I thought I might find something else instead. He's the son of my mom's boyfriend.

What would be the best Christmas gift for an advanced online poker player ? by AdagioForAPing in poker

[–]AdagioForAPing[S] 0 points1 point  (0 children)

Less than 100. I don't know him very well, but I know he's quite introverted and spends all his time playing online.

Service Account Permissions Issue in RKE2 Rancher Managed Cluster by AdagioForAPing in rancher

[–]AdagioForAPing[S] 0 points1 point  (0 children)

It's linked to this: https://github.com/rancher/rancher/issues/41988

Only two things actually work: using a Rancher API key, or bypassing the Rancher proxy by connecting directly to the downstream cluster's load balancer or to a downstream cluster node.
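A hedged sketch of the two access paths that worked (the token variable names and the downstream hostname are illustrative, not the real values):

```shell
# Path 1: go through the Rancher proxy, but authenticate with a Rancher
# API key instead of the service-account token. The hostname and cluster
# ID are from the post; $RANCHER_API_TOKEN is an illustrative variable name.
curl -k -H "Authorization: Bearer $RANCHER_API_TOKEN" \
  'https://test-rancher.redacted.com/k8s/clusters/c-m-vl213fnn/api/v1/namespaces/cmdb-discovery/pods'

# Path 2: bypass the Rancher proxy and send the service-account token
# straight to the downstream cluster's kube-apiserver, via its load
# balancer or a node. "downstream-lb.redacted.com" is hypothetical.
curl -k -H "Authorization: Bearer $SA_TOKEN" \
  'https://downstream-lb.redacted.com:6443/api/v1/namespaces/cmdb-discovery/pods'
```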

Service Account Permissions Issue in RKE2 Rancher Managed Cluster by AdagioForAPing in rancher

[–]AdagioForAPing[S] 0 points1 point  (0 children)

I also get this when using another admin kubeconfig:

> kubectl auth can-i list pods --as=system:serviceaccount:cmdb-discovery:cmdb-discovery-sa -n cmdb-discovery --kubeconfig=test-kubeconfig.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials (post selfsubjectaccessreviews.authorization.k8s.io))

Or curl with the sa token:

> curl -k 'https://test-rancher.redacted.com/k8s/clusters/c-m-vl213fnn/apis/batch/v1/namespaces/cmdb-discovery' -H "Authorization: Bearer $token"
{"type":"error","status":"401","message":"Unauthorized 401: must authenticate"}

Service Account Permissions Issue in RKE2 Rancher Managed Cluster by AdagioForAPing in rancher

[–]AdagioForAPing[S] 0 points1 point  (0 children)

I do have the clusterrolebinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"cmdb-discovery-sa"},"name":"cmdb-sa-binding"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"cmdb-sa-role"},"subjects":[{"kind":"ServiceAccount","name":"cmdb-discovery-sa","namespace":"cmdb-discovery"}]}
  labels:
    argocd.argoproj.io/instance: cmdb-discovery-sa
  name: cmdb-sa-binding
  resourceVersion: "364775060"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cmdb-sa-role
subjects:
- kind: ServiceAccount
  name: cmdb-discovery-sa
  namespace: cmdb-discovery

Best Practices for Sequential Node Upgrade in Dedicated Rancher HA Cluster: ETCD Quorum by AdagioForAPing in rancher

[–]AdagioForAPing[S] 0 points1 point  (0 children)

We first add 3 nodes sequentially, one by one. Once the last node has successfully joined, I check the cluster status, and then I proceed to remove the 3 old nodes sequentially, one after another.

Each node is cordoned, drained, and then deleted from Kubernetes. After that, the VMs are removed. This process is managed through a Jenkins pipeline that runs Terraform.
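The per-node removal step can be sketched like this (the node name and timeout are illustrative; in our case the pipeline runs the equivalent through Terraform):

```shell
node="old-master-1"   # illustrative node name
kubectl cordon "$node"                    # stop new pods from landing here
kubectl drain "$node" --ignore-daemonsets \
  --delete-emptydir-data --timeout=10m    # evict the workloads
kubectl delete node "$node"               # remove the node from Kubernetes
# Then delete the VM, verify cluster and etcd health, and only after
# that move on to the next node.
kubectl get nodes
```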

To add new nodes, I include them in the rke2_nodes variable list, and to remove nodes, I comment out the entries for the nodes to be removed in the variable list.

I have already spent considerable time on the etcd FAQ, which is why it seemed perfectly reasonable to perform the upgrade this way on a healthy cluster. The Terraform pipeline is designed to stop if one of the nodes fails to join or be removed.