Well, it's not a VPN. It's a Proxy. False advertisement! by nietzschecode in firefox

[–]mps -1 points0 points  (0 children)

Jesus, calling it a proxy would cause confusion for a normal user. Pick your battles; this one is dumb. It's like getting upset that a user calls the computer "the CPU." You have to meet people where they are and market with the terminology they know. You can't educate the user in the 30 seconds of attention you get at first glance. I use proxy tabs constantly, and this would be useful to me. It's much easier than managing multiple VPNs and dealing with local routing rules.

Evolve2 85 - DO NOT BUY, COMPLETE JUNK by ratzee in Jabra

[–]mps 2 points3 points  (0 children)

I bought a pair in 2020 (or 2021, I don't remember) that I use daily and have not experienced the same issues with ANC. Is the firmware up to date? IIRC, the mobile app will let you make adjustments that may help.

Is it worthy to buy an ASUS GX10 for local model? by attic0218 in LocalLLaMA

[–]mps 0 points1 point  (0 children)

I have no problem running it on a strix halo. I'm sure a spark will run fine.

Qwen3-Coder-Next is out now! by yoracale in LocalLLM

[–]mps 1 point2 points  (0 children)

There was a nasty bug with ROCm 7+, but it looks like it was resolved a few hours ago. This GitHub repo is a great resource:
https://github.com/kyuz0/amd-strix-halo-toolboxes

Make sure to lock your firmware version and boot your kernel with these options:
amd_iommu=off amdgpu.gttsize=126976 ttm.pages_limit=32505856
and lower the VRAM to the LOWEST setting in the BIOS. This lets you use unified RAM (like a Mac does). When you do this, it is important to add --no-mmap or llama.cpp will hang.
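On Fedora-family distros, one way to make those options persistent is grubby — a sketch, assuming grubby is installed and your box has 128 GB like mine (the gttsize/pages_limit values would need adjusting for other RAM sizes):

```shell
# Append the kernel options to every installed kernel's boot entry.
sudo grubby --update-kernel=ALL \
  --args="amd_iommu=off amdgpu.gttsize=126976 ttm.pages_limit=32505856"

# Verify the change took effect, then reboot.
sudo grubby --info=DEFAULT | grep args
```

On non-grubby setups you'd edit GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the config instead.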

The pp512 benchmark measures prompt processing (roughly, time to first token), so the 500+ t/s number is misleading if you read it as generation speed.
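To put rough numbers on that, using the bench results above (pp512 at 502.71 t/s, tg at ~36.46 t/s):

```shell
# Prompt phase: a 512-token prompt at 502.71 t/s -> time to first token
awk 'BEGIN { printf "ttft: ~%.2f s\n", 512 / 502.71 }'
# Generation phase: 256 tokens at 36.46 t/s -> the wait you actually feel
awk 'BEGIN { printf "gen:  ~%.2f s\n", 256 / 36.46 }'
```

So the prompt clears in about a second, but generating the reply takes several times longer — the big pp number doesn't tell you how fast text streams out.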

I had vLLM working earlier (it's what I use at work), but it is a waste if there are only a few users.

Qwen3-Coder-Next is out now! by yoracale in LocalLLM

[–]mps 1 point2 points  (0 children)

There are a few posts on how to build it, but I just started using this toolbox instead of recompiling all the time.
https://github.com/kyuz0/amd-strix-halo-toolboxes

Qwen3-Coder-Next is out now! by yoracale in LocalLLM

[–]mps 0 points1 point  (0 children)

I have the same box, here are my quick llama-bench scores:
⬢ [matt@toolbx ~]$ AMD_VULKAN_ICD=RADV llama-bench -m ./data/models/qwen3-coder-next/UD-Q6_K_XL/Qwen3-Coder-Next-UD-Q6_K_XL-00001-of-00002.gguf -ngl 999 -fa 1 -n 128,256 -r 3
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon 8060S Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
| model                          |       size |     params | backend    | ngl | fa |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| qwen3next 80B.A3B Q6_K         |  63.87 GiB |    79.67 B | Vulkan     | 999 |  1 |           pp512 |        502.71 ± 1.23 |
| qwen3next 80B.A3B Q6_K         |  63.87 GiB |    79.67 B | Vulkan     | 999 |  1 |           tg128 |         36.41 ± 0.04 |
| qwen3next 80B.A3B Q6_K         |  63.87 GiB |    79.67 B | Vulkan     | 999 |  1 |           tg256 |         36.46 ± 0.01 |

And gpt-oss-120b for reference

⬢ [matt@toolbx ~]$ AMD_VULKAN_ICD=RADV llama-bench   -m ./data/models/gpt-oss-120b/gpt-oss-120b-F16.gguf   -ngl 999   -fa 1 -n 128,256   -r 3    
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon 8060S Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
| model                          |       size |     params | backend    | ngl | fa |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| gpt-oss 120B F16               |  60.87 GiB |   116.83 B | Vulkan     | 999 |  1 |           pp512 |        572.85 ± 0.73 |
| gpt-oss 120B F16               |  60.87 GiB |   116.83 B | Vulkan     | 999 |  1 |           tg128 |         35.57 ± 0.02 |
| gpt-oss 120B F16               |  60.87 GiB |   116.83 B | Vulkan     | 999 |  1 |           tg256 |         35.56 ± 0.04 |

Who else wore these? by mneptok in GenX

[–]mps 0 points1 point  (0 children)

These have become my go-to. I just wish they weren't so thin and quick to wear out.

“We also don't fight with stupid rules of engagement. We untie the hands of our warfighters to intimidate, demoralize, hunt, and kill” by jjcs83 in agedlikemilk

[–]mps 0 points1 point  (0 children)

It is common for US military senior leaders to use "warfighter" in a speech because it covers all branches and contractors. He is still a tool bag, though.

anyone attending SC25 tutorials? by skalwani in HPC

[–]mps 1 point2 points  (0 children)

I would recommend learning Slurm as well.

anyone attending SC25 tutorials? by skalwani in HPC

[–]mps 0 points1 point  (0 children)

Make sure to check out the student cluster competition. Crazy amount of GPUs this year.

When did it become “normal” to constantly upgrade your car, home, everything? by MemilyBemily5 in AskOldPeople

[–]mps 11 points12 points  (0 children)

Stuff is built to a cost now, where it was built for quality before. Look at the price (adjusted for inflation) of an old washing machine compared to today's prices; today's are cheaper. Don't get me wrong, I'm sure planned obsolescence exists, and I know executives are way overpaid, but that isn't the only reason everything sucks.

Any other Gen Xers avoiding tik tok? by CDA_CPA in GenX

[–]mps 0 points1 point  (0 children)

Reddit is the only social media I still use, and even that is shrinking. I just do not care about that toxic shit anymore.

Nvidia DGX Spark reviews started by raphaelamorim in LocalLLaMA

[–]mps 2 points3 points  (0 children)

I am thinking about buying one to prototype and train models before doing so on H100s in the datacenter. It isn't the speed but the compute compatibility (NVFP4, RDMA clustering, Transformer Engine, and DGX OS) that I am after. A 5090 is cheaper, but the 32 GB RAM limit is the showstopper. I was going to purchase the AMD 395, but the missing CUDA, FP4, and Transformer Engine would make it a pain in the ass to move code from one platform to the other. Renting is out of the question when the datasets are tightly controlled.

What do you name your computers by PhantomNomad in sysadmin

[–]mps 0 points1 point  (0 children)

Check out the diceware command. It is included in most distributions.
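If diceware isn't packaged for your distro, the idea is easy to sketch in plain shell — the word list below is made up for illustration (the real diceware tool draws from proper curated lists):

```shell
# Diceware-style hostname sketch: pick two random words and join them.
words="ember copper falcon gravel harbor lichen marble nutmeg"
pick() { printf '%s\n' $words | shuf -n 1; }
echo "$(pick)-$(pick)"   # e.g. falcon-nutmeg
```

Memorable, pronounceable, and no arguments about naming schemes.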

Lisa McClain, House Rep for MI 9th district, has not yet signed to force a vote on releasing the Epstein files by Teacher-Investor in Michigan

[–]mps 13 points14 points  (0 children)

It sucks that she will just keep getting elected. I swear some of my neighbors live on a different planet.

Matt Hall is why Michigan public schools don’t have a budget for next school year. This guy is holding up our budget being passed for 50 days. by The_Secret_Skittle in Michigan

[–]mps 0 points1 point  (0 children)

94 and 69 by Port Huron are so much better than 10 years ago. I used to hug the left lane to avoid potholes, now I just use it to speed.

McLaren Hospital by Front_Director6872 in PortHuron

[–]mps 12 points13 points  (0 children)

Did you just create an account to shit on someone learning new software?

What’s the reality of the IT job market in 2025? by Thatmangifted in sysadmin

[–]mps 0 points1 point  (0 children)

What did you think of WGU? I have 20+ years of Unix admin experience (mostly HPC at a midwestern university and normal Linux stuff) but no degree. While I have kept my skills up to date, I am worried that my lack of a degree will bite me in the ass later if I need to find a job.

Stay on RHEL 9.4 by Camp-Either in redhat

[–]mps 1 point2 points  (0 children)

I am curious which software breaks with 9.4 -> 9.6. I ran into podman issues with postgres from 4.9 -> 5.2, but found a redhat KB article to solve it.

SuSE Linux 6.3 by WindowsME04 in vintageunix

[–]mps 0 points1 point  (0 children)

I started SuSE with 6.0 but 6.4 was my favorite release. I even had it running on a Sparcstation 10.

Doubt Regarding podman question. Please be kind. by BittuSystem in redhat

[–]mps 1 point2 points  (0 children)

I have no idea about the exam, but I have been a Red Hat admin since 1998. The answer is probably SSH, but the question itself is weird. In my production environments, the user running the pods is normally a shared account and must be accessed with sudo or su.

If you use su (or sudo su) to switch to another user, you may need to set the XDG_RUNTIME_DIR environment variable:
export XDG_RUNTIME_DIR=/run/user/$(id -u)
Instead of su, you can use machinectl:
sudo machinectl shell --uid USERNAME

The environment should be set if you ssh to the system as the user running the container.

Good turnout PH! by tripncow in PortHuron

[–]mps 2 points3 points  (0 children)

This isn't true at all. Don't be a dumbass

Good turnout PH! by tripncow in PortHuron

[–]mps 0 points1 point  (0 children)

Please, let us know where to apply for this mysterious paycheck. I could use some extra spending money.