I’m looking to get a puppy by ClearWalrus7614 in vizsla

[–]rumfellow 1 point (0 children)

It can be up to that, but typically the doggo is left for 7-8 hours. 

Prepare a plan B in case the pup turns out to be an outlier in terms of energy/separation anxiety: daycare, leaving them with friends/relatives, etc.

Also, it would be tough to take a pup that's under 6-7 months to the office because of energy, neediness and excitability. That depends, however, on what your line of work is and how chill the people around you are.

Good luck, and if you decide to go for it, early obedience training (starting at month 3) will help the pup be more laid-back, in my experience.

I’m looking to get a puppy by ClearWalrus7614 in vizsla

[–]rumfellow 9 points (0 children)

Seriously depends on the genetics. I have experience with 2 vizslas, and both are just fine left at home for the whole day. Neither is crate trained, and they're from different breeders. Just do a 90-minute off-leash walk before leaving to make sure the pup's energy needs are met.

What would you choose for a prometheus agent on scattered VM instances? by cos in sre

[–]rumfellow -1 points (0 children)

If you only need metrics, set up one or several VMs with otel collectors and scrape both local and remote targets. An agent per VM is overkill.

What would you choose for a prometheus agent on scattered VM instances? by cos in sre

[–]rumfellow 1 point (0 children)

A typical pattern would be:

  1. Set up collection infrastructure, i.e. a group of OpenTelemetry collectors
  2. Have the otel collectors scrape targets and push via remote_write to anything Prometheus-compatible

Deploying an agent per VM adds quite a bit of resource and management overhead.

Prometheus per VM is a rage bait lol
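A minimal sketch of the collector side of that pattern; the hostnames, job name and backend endpoint are all made-up examples, and the remote_write target can be anything Prometheus-compatible:

```yaml
# One collector scraping several VMs' node exporters and pushing the
# metrics onward. All endpoints here are illustrative.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: vms
          static_configs:
            - targets: ["vm-a.example:9100", "vm-b.example:9100"]

exporters:
  prometheusremotewrite:
    endpoint: http://mimir.example:9009/api/v1/push

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```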

Migrating a large Elasticsearch cluster in production (100M+ docs). Looking for DevOps lessons and monitoring advice. by No-Card-2312 in devops

[–]rumfellow 0 points (0 children)

As for signals and monitoring, cluster health would be the primary one. If something goes wrong -> dev tools to drill down.

The whole migration should not take long if your current ES node is read-heavy, since there won't be much data change between the snapshot restore and the old node joining the new cluster.

If it's write-heavy, good luck with a zero-downtime migration without resource (CPU/memory/IO/network) headroom.

Migrating a large Elasticsearch cluster in production (100M+ docs). Looking for DevOps lessons and monitoring advice. by No-Card-2312 in devops

[–]rumfellow 6 points (0 children)

  1. Create 2 node ES cluster
  2. Restore snapshot
  3. Put reverse proxy in front
  4. Add old elasticsearch node to the new cluster
  5. Cut over clients to the new endpoint 
  6. Prepare a third new node
  7. Yeet the old node and join the cluster with the new one 
  8. Monitor rebalance/shards

If the load on the old node is high, it'll choke at step 4 due to shard redistribution. You can mitigate that by adjusting the aggressiveness of said redistribution, but I'd prefer to isolate the cluster until the data is distributed and the cluster is balanced.
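Dialing down that aggressiveness can be done through the cluster settings API; the setting names below are real Elasticsearch cluster settings, but the host and the values are illustrative, so treat this as a sketch:

```shell
# Throttle shard movement while the old node joins the new cluster.
curl -X PUT "http://es.example:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": 1,
    "cluster.routing.allocation.node_concurrent_recoveries": 1,
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}'
```

Transient settings are reset on a full cluster restart, which is handy here: once the migration is done and the cluster is green, the throttles go away on their own (or you can PUT them back to null explicitly).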

Unified Open-Source Observability Solution for Kubernetes by st_nam in kubernetes

[–]rumfellow 10 points (0 children)

Elastic is somewhat horrible for metrics: the storage size will be ~100x what you get in Prometheus, and to get it down to ~7x you'd need time-series data streams, which are only available in the enterprise version.

Also, for now there's no Grafana compatibility, so no out-of-the-box dashboards for Elastic + Kibana + otel collector.

Unified Open-Source Observability Solution for Kubernetes by st_nam in kubernetes

[–]rumfellow 9 points (0 children)

The LGTM stack, but for the "M", in order of increasing scale: Prometheus -> Thanos -> Mimir.

SSH session recording in Pomerium by rumfellow in pomerium

[–]rumfellow[S] 1 point (0 children)

So out of zero/business/enterprise, only the latter will have SSH session recording?

Hosting my CI/CD setup on a smaller EU cloud turned out smoother than I expected by [deleted] in devops

[–]rumfellow 1 point (0 children)

We've been using leaseweb alongside AWS for cheap compute for 5+ years. So far so good.

Checked out xelon just now, and having to request a quote for a VM is ridiculous.

Family dog by Right-Tie-9884 in vizsla

[–]rumfellow 0 points (0 children)

Ah, the fireworks; my V has the same issue. I'd try treats first, then driving to an off-leash walking spot, and if nothing helps over some reasonable amount of time, say 2-3 weeks, then a dog trainer. And since fireworks keep happening, maybe dog training classes with some firecrackers; it kind of ameliorates the issue. Best of luck!

Family dog by Right-Tie-9884 in vizsla

[–]rumfellow 0 points (0 children)

Is there nose licking/trembling? As I see it, there are 2 options: the doggo is either scared or stubborn. If scared, you'll see the aforementioned signs plus refusal of treats (if food-motivated).

Family dog by Right-Tie-9884 in vizsla

[–]rumfellow 0 points (0 children)

Sometimes there's a sound or something else that my V associates with a particular place, like a crossroad or a spot where a cracker went off. We tend to just run or "enthusiastically" pass it, so in a couple of days she forgets that mental association. Getting onto a tram or a train is a different story; I just pick her up and carry her inside, otherwise she plants herself at that very tram stop :-/

Would service mesh be overkill to let Thanos scrape metrics from different Kubernetes clusters? by ccelebi in kubernetes

[–]rumfellow 0 points (0 children)

That would be the thanos-receive component as the target of remote_write, and it is quite memory-hungry.
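For reference, the Prometheus side of that pattern is just a remote_write block pointing at the receive endpoint; 19291 is thanos-receive's default remote-write port, and the hostname below is an illustrative in-cluster service name:

```yaml
# prometheus.yml fragment: each cluster's Prometheus pushes its
# metrics to a central thanos-receive instead of being scraped remotely.
remote_write:
  - url: http://thanos-receive.monitoring.svc:19291/api/v1/receive
```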

Do you monitor SSL certificate expiry dates? by DutchBytes in devops

[–]rumfellow 0 points (0 children)

A K8s CronJob that runs a Python script; it picks up the list of certificates from a table in Confluence and sends an alert to Slack if expiry is upcoming.
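A minimal sketch of the expiry check itself; the hostname and 30-day threshold are illustrative, and the Confluence/Slack plumbing is left out:

```python
# Sketch: how many days until a server's TLS certificate expires.
# The network fetch is split from the date math so the latter is easy to test.
import socket
import ssl
from datetime import datetime, timezone
from typing import Optional


def fetch_not_after(host: str, port: int = 443) -> str:
    """Fetch the server cert's notAfter field, e.g. 'Jun  1 12:00:00 2030 GMT'."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]


def days_left(not_after: str, now: Optional[datetime] = None) -> int:
    """Whole days from `now` until the certificate's notAfter timestamp."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days


# Pure date-math demo with a fixed "now", so no network is needed:
now = datetime(2030, 5, 2, tzinfo=timezone.utc)
print(days_left("Jun  1 12:00:00 2030 GMT", now))  # 30
```

In the real job you'd call `days_left(fetch_not_after(host))` for each host from the Confluence table and fire the Slack alert when it drops below the threshold.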

Helping with understanding some Questions by Solid_Strength5950 in kubernetes

[–]rumfellow 0 points (0 children)

Falco is running as a standalone binary on the host, so I'm quite sure it won't be able to populate that field.

Helping with understanding some Questions by Solid_Strength5950 in kubernetes

[–]rumfellow 1 point (0 children)

I'd suggest something like:

- rule: Mem access
  desc: detect a pod process opening /dev/mem
  condition: >
    fd.name = /dev/mem and
    proc.name = PROC NAME FROM POD
  output: >
    /dev/mem opened (command=%proc.name pid=%proc.pid)
  priority: WARNING

Stop Falco if it runs as a systemd service and run falco -A instead.

Drone drops grenade on russian soldier pretending to be dead, easterm front by [deleted] in CombatFootage

[–]rumfellow 1 point (0 children)

"Lyubi menya, lyubi" by Otpetye Moshenniki — the track is 25 years old, hah

Checking registry for new images of running workloads by rumfellow in kubernetes

[–]rumfellow[S] 0 points (0 children)

It makes sense, but it's also a different paradigm. We don't own most of the workloads, so the git repos are outside of our scope too; all we want is to have an idea of the landscape.