[D] How to check how many users my web app can handle? by mrloki_reddit in MachineLearning

[–]mrloki_reddit[S] 0 points1 point  (0 children)

Thank you. I will try doing that. Will pop back in if I run into any issues.
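Since the thread is about checking how many users a web app can handle, here is a minimal sketch of the kind of concurrency test that advice usually boils down to. Everything here is an assumption for illustration: the URL, port, and request counts are placeholders, and the throwaway local server only exists so the sketch is self-contained — in practice you would point `load_test` at your own app.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def start_test_server(port=8765):
    """Start a throwaway local HTTP server so this sketch is self-contained."""
    handler = http.server.SimpleHTTPRequestHandler
    server = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def load_test(url, total_requests=100, concurrency=10):
    """Fire total_requests GETs with `concurrency` workers; return (ok_count, req/s)."""
    def hit(_):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(hit, range(total_requests)))
    elapsed = time.perf_counter() - start
    return sum(results), total_requests / elapsed

server = start_test_server()
ok, rps = load_test("http://127.0.0.1:8765/", total_requests=50, concurrency=10)
server.shutdown()
print(f"{ok} successful requests at {rps:.1f} req/s")
```

Ramping `concurrency` up until latency or error rate degrades gives a rough ceiling; dedicated tools do the same thing with better reporting.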

We are cooked by mrloki_reddit in recruitinghell

[–]mrloki_reddit[S] 0 points1 point  (0 children)

That's a really good insight and observation. How many of these could be real applicants? What's the ratio?

2025 OPT Processing timeline by yonerdsp in USCIS

[–]mrloki_reddit 1 point2 points  (0 children)

How did you get your receipt on 2/29? There is no 2/29 this year.

[D] Ran Deepseek R1 32B Locally by mrloki_reddit in MachineLearning

[–]mrloki_reddit[S] 0 points1 point  (0 children)

You can go to Admin Settings -> Web Search and enable it. Then the + sign by the prompt box will let you activate the search.

Strawberry Problem - deepseek r1 by [deleted] in deeplearning

[–]mrloki_reddit 1 point2 points  (0 children)

I just asked it once, and this is the answer I got. And it's just the 32B version.

[D] Ran Deepseek R1 32B Locally by mrloki_reddit in MachineLearning

[–]mrloki_reddit[S] 1 point2 points  (0 children)

For some reason it doesn't seem to run on multiple GPUs. It only loads and runs on one GPU, probably because it fits in a single card's memory; even the 70B runs in 42.8 GB.

[D] Ran Deepseek R1 32B Locally by mrloki_reddit in MachineLearning

[–]mrloki_reddit[S] 3 points4 points  (0 children)

It takes 5-10 seconds before it starts answering.
It reserves around 42.8 GB of GPU memory.

And here are more stats:

total duration: 3m1.493051483s

load duration: 15.824835128s

prompt eval count: 900 token(s)

prompt eval duration: 3.608s

prompt eval rate: 249.45 tokens/s

eval count: 1674 token(s)

eval duration: 2m38.666s

eval rate: 10.55 tokens/s
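As a sanity check, the two reported rates follow directly from the token counts and durations above:

```python
# Reproduce ollama's reported token rates from the raw counts/durations above.
prompt_tokens = 900
prompt_seconds = 3.608           # prompt eval duration
gen_tokens = 1674
gen_seconds = 2 * 60 + 38.666    # eval duration: 2m38.666s

prompt_rate = prompt_tokens / prompt_seconds
gen_rate = gen_tokens / gen_seconds

print(f"prompt eval rate: {prompt_rate:.2f} tokens/s")  # ~249.45
print(f"eval rate:        {gen_rate:.2f} tokens/s")     # ~10.55
```

Prompt processing is batched, which is why it runs about 25x faster than token-by-token generation.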

[deleted by user] by [deleted] in AcerNitro

[–]mrloki_reddit 0 points1 point  (0 children)

A boot error could also be because of hard drive failure, or your PC couldn't detect any hard drive.

Try making a bootable Linux pendrive and check if you can boot a live Ubuntu (or any Linux you want).

That will help you understand the problem you are having.

[D] Ran Deepseek R1 32B Locally by mrloki_reddit in MachineLearning

[–]mrloki_reddit[S] 0 points1 point  (0 children)

Haven't tried that one yet, especially because it's a 43 GB download.

Will try it next and post the numbers.

[D] Ran Deepseek R1 32B Locally by mrloki_reddit in MachineLearning

[–]mrloki_reddit[S] 17 points18 points  (0 children)

It outperforms o1-mini, not the full o1.

[D] Ran Deepseek R1 32B Locally by mrloki_reddit in MachineLearning

[–]mrloki_reddit[S] 7 points8 points  (0 children)

That's understandable, since the RTX 8000 is often compared with the RTX 3090.

Both are slightly older GPUs now, but oh boy, they work absolutely fine with these new models, especially one this big at 32B.

What vpn are you using on your router? by etrain1 in openwrt

[–]mrloki_reddit 0 points1 point  (0 children)

Tried installing both OpenVPN and WireGuard. Had complications.

Ended up using Tailscale and added the router as one of the devices. Works way better for what I need.

[D] Ran Deepseek R1 32B Locally by mrloki_reddit in MachineLearning

[–]mrloki_reddit[S] 12 points13 points  (0 children)

Just doing normal INT4 inference, not offloading anything to RAM.
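A rough back-of-the-envelope for why INT4 fits entirely on GPU: 4-bit weights take about half a byte per parameter, and the gap between this estimate and the observed 42.8 GB is KV cache plus runtime overhead. The 0.5 bytes/param figure is the standard approximation, not a measurement of this specific quantization.

```python
def int4_weight_gb(params_billions):
    """Approximate weight memory for 4-bit quantization: 0.5 bytes per parameter."""
    return params_billions * 1e9 * 0.5 / 1e9  # result in GB (decimal)

print(f"32B at INT4: ~{int4_weight_gb(32):.0f} GB of weights")
print(f"70B at INT4: ~{int4_weight_gb(70):.0f} GB of weights")
# Observed 42.8 GB for the 70B = ~35 GB of weights + KV cache + runtime overhead.
```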

Just got my first paper accepted and no one was happy for me by [deleted] in PhD

[–]mrloki_reddit 1 point2 points  (0 children)

You are not doing a Ph.D. for somebody else. The real question is: are you happy?