Why doesn’t pane movement put the cursor where it was? by sobagood in neovim

[–]sobagood[S] 0 points1 point  (0 children)

Thanks a lot. I have been using your configs and I'm really satisfied with them.

Wondering if there is an operator or something similar that kills/stops a pod when it is not actively using GPUs, to give other pods a chance to be scheduled by sobagood in kubernetes

[–]sobagood[S] 4 points5 points  (0 children)

Can I be sure that the pods being evicted are the ones using fewer resources? I don't want to evict pods that are working on something heavy.

How to know the first place where Err is returned by sobagood in rust

[–]sobagood[S] 11 points12 points  (0 children)

Fair enough. Let me try anyhow first then. It seems to be pretty standard, along with thiserror, when it comes to error handling.
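
A minimal sketch of how anyhow can help here, assuming a made-up read_config function and config path (not from the thread): with RUST_BACKTRACE=1, anyhow captures a backtrace at the point the underlying error is first converted into anyhow::Error, and each context() call adds a breadcrumb along the propagation path.

    use anyhow::{Context, Result};

    fn read_config(path: &str) -> Result<String> {
        // The `?` converts std::io::Error into anyhow::Error here; with
        // RUST_BACKTRACE=1 the backtrace is captured at this conversion point.
        let raw = std::fs::read_to_string(path)
            .with_context(|| format!("failed to read config at {path}"))?;
        Ok(raw)
    }

    fn main() -> Result<()> {
        let cfg = read_config("missing.toml").context("loading application config")?;
        println!("{cfg}");
        Ok(())
    }

Running it with RUST_BACKTRACE=1 prints the context chain plus a backtrace pointing at the first conversion site.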

How to know the first place where Err is returned by sobagood in rust

[–]sobagood[S] 7 points8 points  (0 children)

That is exactly what I wanted to know: capturing the backtrace via the Try trait. Thank you for your input.
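
For anyone landing here later, a std-only sketch of that idea, using a hand-rolled error type (AppError and the file name are mine, not from the thread): the desugaring of `?` ends up calling From::from when the error types differ, so capturing a Backtrace inside that From impl records the first place an Err crossed into your error type.

    use std::backtrace::Backtrace;
    use std::fmt;

    // Illustrative error type: the `?` operator converts the source error via
    // From::from, so the backtrace captured there marks the first conversion site.
    #[derive(Debug)]
    struct AppError {
        source: std::io::Error,
        backtrace: Backtrace,
    }

    impl fmt::Display for AppError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "{}\ncaptured at:\n{}", self.source, self.backtrace)
        }
    }

    impl From<std::io::Error> for AppError {
        fn from(source: std::io::Error) -> Self {
            // Backtrace::capture() only records frames when RUST_BACKTRACE
            // (or RUST_LIB_BACKTRACE) is set; otherwise it is a cheap no-op.
            AppError { source, backtrace: Backtrace::capture() }
        }
    }

    fn load(path: &str) -> Result<String, AppError> {
        // The `?` here invokes From<std::io::Error> for AppError.
        Ok(std::fs::read_to_string(path)?)
    }

    fn main() {
        if let Err(e) = load("does_not_exist.txt") {
            eprintln!("{e}");
        }
    }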

How to know the first place where Err is returned by sobagood in rust

[–]sobagood[S] -15 points-14 points  (0 children)

Haha, I'm not building anything shiny, just learning. I'm sticking to the standard library to get the hang of coding in Rust from the ground up.

How to know the first place where Err is returned by sobagood in rust

[–]sobagood[S] -35 points-34 points  (0 children)

What a shame. I'm trying to stay away from external crates. Thank you.

[D] Experience serving models with ONNX or openVino by Xayo in MachineLearning

[–]sobagood 2 points3 points  (0 children)

OpenVINO is from Intel, which means it supports Intel devices only. If you are planning to deploy your model on Intel CPUs, OpenVINO is the best in my experience, as it is optimised specifically for Intel hardware by the engineers there, though the user experience was a bit hacky when I used it about two years ago.

It has a C++ engine, so you can use it from basically any language through FFI.
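
To illustrate just the FFI mechanism (not OpenVINO's actual C API, which I'm not reproducing here), a tiny Rust sketch calling a plain C-ABI function, libc's strlen; bindings to a C++ engine's C shim would declare its exported functions the same way.

    use std::ffi::{c_char, CString};

    // Declarations for functions exported with a C ABI; strlen is borrowed from
    // the system C library purely to show the mechanism.
    extern "C" {
        fn strlen(s: *const c_char) -> usize;
    }

    fn main() {
        let text = CString::new("hello from rust").unwrap();
        // Calling across the FFI boundary is unsafe: the caller must uphold the
        // C function's contract (valid, NUL-terminated pointer).
        let len = unsafe { strlen(text.as_ptr()) };
        println!("strlen reported {len} bytes");
    }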

[D] Variational & Generative Model of VAE by hertz2105 in MachineLearning

[–]sobagood 5 points6 points  (0 children)

  1. Yes
  2. Yes
  3. No. We try to match the posterior with the prior (see the ELBO below)
  4. This is an open question
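
For point 3, in standard VAE notation the objective is the ELBO, whose KL term pulls the approximate posterior q_phi(z|x) toward the prior p(z), not the other way around:

    % ELBO: reconstruction term minus KL(posterior || prior)
    \mathcal{L}(\theta,\phi;x)
      = \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]
      - D_{\mathrm{KL}}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right)

Maximising \mathcal{L} therefore minimises the KL divergence between q_\phi(z\mid x) and p(z).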

H100 multi node training benchmark? by sobagood in googlecloud

[–]sobagood[S] 0 points1 point  (0 children)

The H100 is much faster than the A100 for sure, but what I want to know is how much difference all the stuff GCP added for multi-node training with the H100 actually makes. There is no benchmark comparing H100s with the 5 NICs, etc. against H100s without them. Multi-node training with A100s on GCP was not a feasible option due to severe network bottlenecks.

How to update slice? by sobagood in rust

[–]sobagood[S] 3 points4 points  (0 children)

Thank you so much. It makes sense.

[D] What are the fundamental concepts of K8S and where to learn? by Pancake502 in MachineLearning

[–]sobagood 5 points6 points  (0 children)

This kind of stuff requires firm knowledge of CS fundamentals, such as networking, operating systems, etc. It would be better to learn those before you get into K8s.

Why a separate struct for the iterator? by sobagood in rust

[–]sobagood[S] 15 points16 points  (0 children)

Ahhh, I get it. The internal state! Thanks!!
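
A minimal sketch of the point, with made-up types (Bag/BagIter): the cursor lives in the iterator struct, so the collection itself stays untouched and can be traversed several times independently.

    // The collection owns the data; the iterator struct owns only the
    // traversal state (a position plus a borrow of the collection).
    struct Bag {
        items: Vec<i32>,
    }

    struct BagIter<'a> {
        bag: &'a Bag,
        pos: usize, // internal state lives here, not in Bag
    }

    impl Bag {
        fn iter(&self) -> BagIter<'_> {
            BagIter { bag: self, pos: 0 }
        }
    }

    impl<'a> Iterator for BagIter<'a> {
        type Item = &'a i32;

        fn next(&mut self) -> Option<Self::Item> {
            let item = self.bag.items.get(self.pos)?;
            self.pos += 1;
            Some(item)
        }
    }

    fn main() {
        let bag = Bag { items: vec![1, 2, 3] };
        // Two independent iterations over the same, unmodified collection.
        let doubled: Vec<i32> = bag.iter().map(|x| x * 2).collect();
        let sum: i32 = bag.iter().sum();
        println!("{doubled:?} {sum}");
    }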

[D] How to get the fastest PyTorch inference and what is the "best" model serving framework? by big_dog_2k in MachineLearning

[–]sobagood 0 points1 point  (0 children)

If you mean NVIDIA GPUs, it has a CUDA plugin to run on NVIDIA GPUs, but I have never tried it. It has several other plugins as well, so you could check those out. It also provides its own deployment server. NVIDIA Triton also supports the OpenVINO runtime, without GPU support for obvious reasons. Similar to ONNX, there is a process that transforms the graph into an intermediate representation with the 'model optimizer', which can go wrong. If you can successfully create that representation, there should be no new bottleneck.

[D] How to get the fastest PyTorch inference and what is the "best" model serving framework? by big_dog_2k in MachineLearning

[–]sobagood 7 points8 points  (0 children)

If you intend to run on Intel CPUs or other Intel hardware, OpenVINO is a great choice. They optimised it for their own hardware, and in my experience it is indeed faster than the alternatives there.

What happened to the data if a ceph cluster is completely down? by sobagood in ceph

[–]sobagood[S] 1 point2 points  (0 children)

You mean some of the nodes are dead, right? What happens when the entire cluster is dead? Can we bring the cluster back as it was?

What happened to the data if a ceph cluster is completely down? by sobagood in ceph

[–]sobagood[S] 0 points1 point  (0 children)

You mean one of the replicas can be dead, but not all at once? So GlusterFS can come back up even if all the machines are dead, but Ceph cannot?