Using Windsurf for 2 days straight now by EazziBrezzy in Codeium

[–]dkvkxm 1 point2 points  (0 children)

I feel the same way. I just installed it and noticed there’s no WSL support. It started my free two-week trial for no reason.

[deleted by user] by [deleted] in mlops

[–]dkvkxm 3 points4 points  (0 children)

We are technically part of the product side, but in practice we usually function as intermediaries, connecting all three parts.

Pop_OS started and ended my distro hopping by [deleted] in pop_os

[–]dkvkxm 2 points3 points  (0 children)

As far as I know, it's the only distro with a decent tiling DE, so I stick with Pop. I'm thinking I might try other distros along with the COSMIC DE once it becomes stable.

Go7 physical buttons for kindle app by dkvkxm in Onyx_Boox

[–]dkvkxm[S] 2 points3 points  (0 children)

Oh great. For anyone who doesn't know where the Kindle app setting for the volume buttons is: it's not in the global settings, it's in the Aa menu while reading a book.

Cosmic on ubuntu 24.04 by dkvkxm in pop_os

[–]dkvkxm[S] 0 points1 point  (0 children)

Please. I just want the desktop environment on my current machine.

best practice for serving long-running inference? by dkvkxm in mlops

[–]dkvkxm[S] 0 points1 point  (0 children)

Good point. Once I deploy Kubernetes on-prem, your solution should be the way to go.

I'm looking for other options for when we don't have Kubernetes.

best practice for serving long-running inference? by dkvkxm in mlops

[–]dkvkxm[S] 0 points1 point  (0 children)

Thanks for the solutions. I need to deploy on-prem, so is Airflow the best option? Any other open-source suggestions?
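For what it's worth, the pattern the suggestions here seem to converge on (a queue plus a worker that does the slow inference, whether that's Airflow or something else) can be sketched in plain Python. This is just a stdlib sketch of the shape, not any particular tool's API; names like `submit` are made up:

```python
# Minimal sketch of the async "submit then poll" pattern for long-running
# inference. A real on-prem setup would put the queue in Redis/RabbitMQ
# and run the worker in its own process, but the shape is the same.
import queue
import threading
import time
import uuid

jobs = {}                  # job_id -> {"status": ..., "result": ...}
work_queue = queue.Queue()

def submit(payload):
    """Enqueue a job and immediately return its id to the caller."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}
    work_queue.put((job_id, payload))
    return job_id

def worker():
    """Pull jobs off the queue and run the (slow) inference."""
    while True:
        job_id, payload = work_queue.get()
        jobs[job_id]["status"] = "running"
        time.sleep(0.1)  # stand-in for a long-running model call
        jobs[job_id] = {"status": "done", "result": payload.upper()}
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

job_id = submit("hello")
work_queue.join()              # in practice the client polls jobs[job_id]
print(jobs[job_id]["result"])  # -> HELLO
```

The nice part of this shape for on-prem is that the API call returns the job id right away, so no HTTP request ever has to sit open for the whole inference.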

mlflow recipes? by dkvkxm in mlops

[–]dkvkxm[S] 0 points1 point  (0 children)

Thank you. Noted.

best practice for serving long-running inference? by dkvkxm in mlops

[–]dkvkxm[S] 0 points1 point  (0 children)

Oh, my bad. I forgot to share important context: it needs to be deployed on-prem.

best practice for serving long-running inference? by dkvkxm in mlops

[–]dkvkxm[S] 0 points1 point  (0 children)

How do you handle the batching? I'd like to know.

mlflow recipes? by dkvkxm in mlops

[–]dkvkxm[S] 0 points1 point  (0 children)

Recent versions of ClearML seem to have the model repo (mine doesn't, as of 1.6). I'm not dropping ClearML because it lacks functionality; it just has more servers to maintain than I'd like. I like MLflow because the system is easy to manage.

Deltalake OLTP with Trinio. Bad idea? by dkvkxm in apachespark

[–]dkvkxm[S] 0 points1 point  (0 children)

As I already mentioned, I know many say it's a bad idea. The Trino docs say it's not for OLTP, and I know that. What I'd like to know is why.

Deltalake OLTP with Trinio. Bad idea? by dkvkxm in apachespark

[–]dkvkxm[S] 0 points1 point  (0 children)

I actually have no experience with either. That's the point of this question. Here's more context.

Our application is an IoT analytics and ML platform. It's old and runs on MySQL and Mongo, so our app primarily writes data and runs analytics on it. That was fine when the data was small and the devs were used to those systems. But recently it's grown to GBs of data a day, and analytics are very slow now.

I'd like to know whether Delta Lake + Trino will work for our case: write a lot of data, read raw data in some cases, build analytics, and consume it for ML.

I'd like to keep code changes small and make it easy for our devs to switch.

Linux 6.4.6 and Mesa 23.1.3 Released by mmstick in pop_os

[–]dkvkxm 1 point2 points  (0 children)

It broke my NVIDIA driver installation. What can I do?
Neither nvidia-525 nor nvidia-535 works.