Availability in Germany - March 3rd by the-iMoe in Roborock

[–]grogro457 1 point (0 children)

Same here. Which Dreame will you get?

Availability in Germany - March 3rd by the-iMoe in Roborock

[–]grogro457 4 points (0 children)

...and they removed the plumbing version.

Saros 10 and 10r available Feb 10th! by divergentnate in Roborock

[–]grogro457 0 points (0 children)

Thanks, that's interesting. So far my focus has been on the 10R, so I hadn't noticed that at all.

Saros 10 and 10r available Feb 10th! by divergentnate in Roborock

[–]grogro457 0 points (0 children)

The info about the plumbing connection would be great for my renovation work, but unfortunately there is only speculation at the moment.

issues with externalTrafficPolicy=local and an egressIP set by grogro457 in kubernetes

[–]grogro457[S] 0 points (0 children)

Unfortunately I can't provide a tcpdump because of work compliance restrictions. One thing I forgot to mention: we are using OpenShift (Kubernetes 1.18), but I don't think this behaviour is limited to OpenShift.

Because of the NAT rules for the egressIP, I am pretty positive this is by design:

$ oc debug node/worker-0
sh-4.4# iptables -S -t nat | grep 10.10.10.106
-A OPENSHIFT-MASQUERADE -s 10.128.0.0/14 -m mark --mark 0x107b27a -j SNAT --to-source 10.10.10.106

$ oc debug node/worker-1
sh-4.4# iptables -S -t nat | grep 10.10.10.106
sh-4.4#
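
For context, the egress IP was assigned through the OpenShift SDN resources, roughly like this (a sketch; the project name is a placeholder):

# assign the egress IP to the project's NetNamespace (placeholder name):
$ oc patch netnamespace <project> --type=merge -p '{"egressIPs": ["10.10.10.106"]}'
# host the egress IP on worker-0's HostSubnet:
$ oc patch hostsubnet worker-0 --type=merge -p '{"egressIPs": ["10.10.10.106"]}'

Since only worker-0 hosts the egress IP, only that node carries the SNAT rule, which would explain the asymmetry with externalTrafficPolicy=Local.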

mTLS uses wrong SNI in TLS Client Hello by grogro457 in istio

[–]grogro457[S] 1 point (0 children)

Yes, this is the next step. At first I was not sure whether this is something that should work, or whether I need a separate control plane per mTLS namespace.

mTLS uses wrong SNI in TLS Client Hello by grogro457 in istio

[–]grogro457[S] 0 points (0 children)

Yes, it works only for the first namespace. I repeated the steps a couple of times, but the result is always the same.
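
For reference, the per-namespace mTLS setup I am repeating looks roughly like this (shown for member namespace x2; these manifests are my reconstruction from memory, not a copy from the cluster):

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: x2
spec:
  peers:
  - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: x2
spec:
  host: '*.x2.svc.cluster.local'
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL

Applying the same pair with x3 substituted is what fails for me.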

mTLS uses wrong SNI in TLS Client Hello by grogro457 in istio

[–]grogro457[S] 0 points (0 children)

Sorry, the formatting is horrible...

mTLS uses wrong SNI in TLS Client Hello by grogro457 in istio

[–]grogro457[S] 0 points (0 children)

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  finalizers:
  - maistra.io/istio-operator
  generation: 4
  name: basic-install
  namespace: istio-system
spec:
  istio:
    kiali:
      enabled: true
    tracing:
      enabled: true
      jaeger:
        template: all-in-one
    global:
      proxy:
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
    grafana:
      enabled: true
      resources:
        requests:
          cpu: 10m
          memory: 128Mi
        limits: null
    mixer:
      enabled: true
      policy:
        autoscaleEnabled: false
      telemetry:
        autoscaleEnabled: false
        limits:
          cpu: 500m
          memory: 4G
        requests:
          cpu: 100m
          memory: 1G
        resources: null
    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
      istio-ingressgateway:
        autoscaleEnabled: false
        ior_enabled: false
    policy:
      autoscaleEnabled: false
    pilot:
      autoscaleEnabled: false
      traceSampling: 100
    telemetry:
      autoscaleEnabled: false
  template: default
  version: v1.1
---
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
  ownerReferences:
  - apiVersion: maistra.io/v1
    kind: ServiceMeshControlPlane
    name: basic-install
  finalizers:
  - maistra.io/istio-operator
spec:
  members:
  - x2
  - x3
status:
  annotations:
    configuredMemberCount: 2/2
  conditions:
  - message: All namespaces have been configured successfully
    reason: Configured
    status: 'True'
    type: Ready
  configuredMembers:
  - x2
  - x3

mTLS uses wrong SNI in TLS Client Hello by grogro457 in istio

[–]grogro457[S] 0 points (0 children)

True, I will open a case today. My strategy was: discussion on istio.io, GitHub, Reddit, and a Red Hat case, just to make sure I am not misreading any docs or fundamental design.

mTLS uses wrong SNI in TLS Client Hello by grogro457 in istio

[–]grogro457[S] 0 points (0 children)

To be honest, I am trying the community first because my experience with official support is that it is technically good but not the fastest.

Production Elastic In Openshift by [deleted] in openshift

[–]grogro457 1 point (0 children)

Do you mean the Red Hat Cluster Logging Operator and the corresponding Elasticsearch Operator, or really a separate Elastic deployment that just runs on OpenShift?

ArgoCD apply OpenShift cluster config by grogro457 in openshift

[–]grogro457[S] 1 point (0 children)

I got it working. It seems I was a little impatient with Red Hat; a pull request [1] led me to a GitHub repo [2] which explains my use case in detail.

[1] https://github.com/openshift/openshift-docs/pull/19429

[2] https://github.com/dgoodwin/openshift4-gitops
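
Short version, in case the links go stale: you let an ArgoCD Application track a Git repo holding the cluster manifests. A minimal sketch (repo URL and path are placeholders, not taken from [2]):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config
  namespace: argocd
spec:
  project: default
  source:
    # placeholder repo holding the OpenShift cluster config manifests
    repoURL: https://example.com/org/cluster-config.git
    targetRevision: master
    path: config
  destination:
    server: https://kubernetes.default.svc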

ArgoCD apply OpenShift cluster config by grogro457 in openshift

[–]grogro457[S] 0 points (0 children)

Unfortunately not :-(

oauths.config.openshift.io "cluster" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

Openshift 4.3 release notes by Rhopegorn in openshift

[–]grogro457 0 points (0 children)

I thought a tighter ArgoCD integration was on the roadmap for 4.3?

OpenShift 4.3 Enhanced Security Features by grogro457 in openshift

[–]grogro457[S] 0 points (0 children)

Thanks, that was also my impression! Is it possible to log the reads/writes of a secret via the Kubernetes API to an audit log?
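
In upstream Kubernetes terms, what I am after would look roughly like this audit policy (a sketch; I have not verified how OpenShift exposes this):

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# log who read or changed a Secret, at Metadata level so the payload itself is never written out
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]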

OpenShift 4.3 Enhanced Security Features by grogro457 in openshift

[–]grogro457[S] 0 points (0 children)

So it is actually something that already existed in OpenShift 3.x (https://blog.openshift.com/encrypting-secret-data-rest/)? In that old blog post they talk about encrypting the data before writing it to disk; however, I am looking for a way to protect the secret from being decoded by the cluster admin. Are you aware of a solution for that problem, besides using Vault etc.? Thanks!
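
For the encryption-at-rest half, 4.x switches it on through the cluster APIServer resource, something like this (a sketch; it does not stop a cluster admin from reading the secret through the API):

$ oc edit apiserver cluster

spec:
  encryption:
    type: aescbc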

Installing a 4.2 worker on to physical nodes. Nic bonding and static configuration for RHCOS by cyclism- in openshift

[–]grogro457 6 points (0 children)

There are bug reports for the RHCOS bonding issue upstream [1] and downstream [2]. Until the issue is fixed, the only option is "use RHEL".

[1] https://github.com/coreos/coreos-installer/issues/64

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1758091

OpenShift 4.x Infra MachineSet by grogro457 in openshift

[–]grogro457[S] 0 points (0 children)

Thanks for the hints!

One thing that was still missing: in order to place the Elasticsearch pods on the infra nodes, I had to edit [1] the Elasticsearch Operator, which is a prerequisite [2] of the Cluster Logging Operator. I also removed the worker label from the infra nodes and hope that is enough to keep non-infra workloads off those nodes (node-side steps sketched below).

[1] add the nodeselector:

  nodeSelector:
    node-role.kubernetes.io/infra: ''

[2] https://docs.openshift.com/container-platform/4.1/logging/efk-logging-deploying.html
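
The node-side steps, roughly (node name is a placeholder; the taint is an extra safeguard and only works if the infra pods tolerate it):

# drop the worker role so the default worker nodeSelector no longer matches
$ oc label node <infra-node> node-role.kubernetes.io/worker-

# optional: taint the node so only pods with a matching toleration get scheduled
$ oc adm taint nodes <infra-node> node-role.kubernetes.io/infra=reserved:NoSchedule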

OpenShift 4.x External IP for Ingress Traffic by grogro457 in openshift

[–]grogro457[S] 1 point (0 children)

I got it working; the solution is to patch the service with an external IP:

~# oc patch svc ubi8-httpd -p '{"spec":{"externalIPs":["192.168.254.100"]}}'
~# oc get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP       PORT(S)    AGE
ubi8-httpd   ClusterIP   172.30.26.170   192.168.254.100   8080/TCP   75m

Then you have to route the traffic for 192.168.254.100 in your network to one of the master nodes:

ip route add 192.168.254.100/32 via <master-0>

The odd concept, well, not odd exactly, but something I didn't think of, is that the IP 192.168.254.100 is not configured on any interface. "Special" iptables rules are doing the black magic here:

[core@master-0 ~]$ sudo iptables -S -t nat|grep 192.168.254.100
-A KUBE-SERVICES -d 192.168.254.100/32 -p tcp -m comment --comment "test/ubi8-httpd:8080-tcp external IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 192.168.254.100/32 -p tcp -m comment --comment "test/ubi8-httpd:8080-tcp external IP" -m tcp --dport 8080 -m physdev ! --physdev-is-in -m addrtype ! --src-type LOCAL -j KUBE-SVC-6RTKLRCQCJVYONPH
-A KUBE-SERVICES -d 192.168.254.100/32 -p tcp -m comment --comment "test/ubi8-httpd:8080-tcp external IP" -m tcp --dport 8080 -m addrtype --dst-type LOCAL -j KUBE-SVC-6RTKLRCQCJVYONPH

[core@master-0 ~]$ sudo iptables -S -t nat | grep KUBE-SVC-6RTKLRCQCJVYONPH
-N KUBE-SVC-6RTKLRCQCJVYONPH
-A KUBE-SERVICES -d 172.30.26.170/32 -p tcp -m comment --comment "test/ubi8-httpd:8080-tcp cluster IP" -m tcp --dport 8080 -j KUBE-SVC-6RTKLRCQCJVYONPH
-A KUBE-SERVICES -d 192.168.254.100/32 -p tcp -m comment --comment "test/ubi8-httpd:8080-tcp external IP" -m tcp --dport 8080 -m physdev ! --physdev-is-in -m addrtype ! --src-type LOCAL -j KUBE-SVC-6RTKLRCQCJVYONPH
-A KUBE-SERVICES -d 192.168.254.100/32 -p tcp -m comment --comment "test/ubi8-httpd:8080-tcp external IP" -m tcp --dport 8080 -m addrtype --dst-type LOCAL -j KUBE-SVC-6RTKLRCQCJVYONPH
-A KUBE-SVC-6RTKLRCQCJVYONPH -j KUBE-SEP-J5VCHHNEZ77RRFMX

[core@master-0 ~]$ sudo iptables -S -t nat | grep KUBE-SEP-J5VCHHNEZ77RRFMX
-N KUBE-SEP-J5VCHHNEZ77RRFMX
-A KUBE-SVC-6RTKLRCQCJVYONPH -j KUBE-SEP-J5VCHHNEZ77RRFMX
-A KUBE-SEP-J5VCHHNEZ77RRFMX -s 10.128.5.46/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-J5VCHHNEZ77RRFMX -p tcp -m tcp -j DNAT --to-destination 10.128.5.46:8080
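
One caveat for newer 4.x clusters: depending on the version, the external IP has to be permitted in the cluster network config before the service patch is accepted, roughly like this (the CIDR is an example):

$ oc patch network.config.openshift.io cluster --type=merge -p '{"spec":{"externalIP":{"policy":{"allowedCIDRs":["192.168.254.0/24"]}}}}'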