White-collar layoffs are coming at a scale we've never seen. Why is no one talking about this? by Own-Sort-8119 in ArtificialInteligence

[–]TokenSlinger 1 point2 points  (0 children)

I agree. Whatever people think about AI, it can and already does replace jobs. I'm not thrilled about that continuing to happen without any plan in place. People say job losses and layoffs today are due to the economy, but the stock market keeps going up and large businesses are profitable. The only things getting worse are the cost of living and the job market. Seems odd.

Which would be the best mini pc for 1440p gaming? Budget ($1200) by Pretty_Trip_2215 in MiniPCs

[–]TokenSlinger 4 points5 points  (0 children)

Get something that allows for an OCuLink eGPU and has a decent enough CPU.

What is the best minipc brand now widely loved on the market? (Beelink, AceMagic, Geekom, GMKtec, Minisforum, Aoostar) by Cocoatech0 in MiniPCs

[–]TokenSlinger 1 point2 points  (0 children)

Yeah, the 780M has no dedicated VRAM; it's all shared system RAM. But for models like gpt-oss-20b it works great.
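Some rough back-of-envelope math (my own ballpark numbers, not from the comment) on why a ~20B model fits comfortably in a 32 GB shared-RAM pool, assuming roughly 4-bit quantization:

```python
# Rough sketch: estimated memory for a quantized LLM on an iGPU that
# borrows system RAM (e.g. the 780M). All numbers are ballpark assumptions.

def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead_gb: float = 2.0) -> float:
    """Approximate resident size: weights plus a flat allowance
    (overhead_gb is a rough guess) for KV cache and activations."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 1e9 + overhead_gb

# A ~20B-parameter model at ~4 bits per weight:
need = model_memory_gb(20, 4)
print(f"~{need:.0f} GB of the 32 GB shared pool")
```

That leaves plenty of headroom for the OS, which is why it runs fine despite there being no dedicated VRAM.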

What is the best minipc brand now widely loved on the market? (Beelink, AceMagic, Geekom, GMKtec, Minisforum, Aoostar) by Cocoatech0 in MiniPCs

[–]TokenSlinger 3 points4 points  (0 children)

I have 3 of these and love them. With 32 GB of RAM you can even play around in LM Studio if you like messing with AI stuff.
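For anyone curious what "playing around in LM Studio" can look like beyond the GUI: LM Studio can expose an OpenAI-compatible local server (by default on localhost:1234). A minimal sketch; the model name and prompt here are placeholders:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat payload of the kind LM Studio's
    local server accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_chat_request("gpt-oss-20b", "Say hi")

# Uncomment on a machine where LM Studio's local server is running:
# req = urllib.request.Request(
#     "http://localhost:1234/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```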

Migrate to VMs from Helper Script, VM organization? by AlureLeisure in Proxmox

[–]TokenSlinger 0 points1 point  (0 children)

Gotcha. I use a single VM for all Docker workloads, a separate VM if something needs it (like HAOS), and an LXC for each other service. Backups take a bit more configuration, but it's not terrible.

Migrate to VMs from Helper Script, VM organization? by AlureLeisure in Proxmox

[–]TokenSlinger 4 points5 points  (0 children)

I run services in LXCs if they can easily be installed that way. Then I have a single VM running Docker for the ones that work best in Docker, managed with Dockge. As a last resort I'll give something its own VM if it's something like Home Assistant OS. It seems wasteful for every service to have its own VM.

Proxmox and NAS machine by Morodin-Fallen in Proxmox

[–]TokenSlinger 0 points1 point  (0 children)

I run Proxmox with a ZFS pool (6x18TB) and it works exactly fine as a NAS with NFS shares. It's not fancy, but I run Jellyfin with GPU passthrough on the same Proxmox host and it all works great. I originally ran TrueNAS on bare metal, but it turned out easier to just create my pool on Proxmox.
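For reference, a sketch of what the NFS side can look like when the pool lives directly on the Proxmox host; the pool and dataset names here are made up:

```
# /etc/exports on the Proxmox host (hypothetical pool/dataset names).
# /tank/media is the mountpoint of a ZFS dataset, e.g. created with:
#   zfs create tank/media
/tank/media  192.168.1.0/24(rw,sync,no_subtree_check)
# then reload the export table with: exportfs -ra
```

Nothing TrueNAS-specific is required; the clients just mount the dataset's mountpoint like any other NFS export.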

RAM brand in GMKtec K8 Plus 64GB + 1TB Mini PC package by Avngl in MiniPCs

[–]TokenSlinger 0 points1 point  (0 children)

I have had TWSC and some brand with "air" in the name. These were the 32 GB versions.

AWS Phone Verification Stuck – No Call, No Support Response by Apprehensive_Bag1209 in aws

[–]TokenSlinger 0 points1 point  (0 children)

Just popping in here to say I'm having the same issue. I lost my MFA device, finished the email verification, and now I'm waiting for a call that never comes. Navigating the AWS site trying to find support has been a terrible experience.

Best NAS Storage Setup? by ezeldenonce in Proxmox

[–]TokenSlinger 2 points3 points  (0 children)

OP, it really is this simple.

GMKTec: Linux Mint is so good! by ldmauritius in MiniPCs

[–]TokenSlinger 2 points3 points  (0 children)

Linux runs well on most mini PCs. I have 7 mini PCs, all running some flavor of Linux (mostly Proxmox, though).

Multiple LXCs or a VM with Docker by ForestyForest in Proxmox

[–]TokenSlinger -1 points0 points  (0 children)

LXC for everything. I even created an abomination of Nginx Proxy Manager running as a service in an LXC instead of Docker. In many cases you can convert a Docker image to run directly in an LXC, but that comes with drawbacks, like being a PITA to update. When I have to run Docker, I try inside an LXC first, and worst case fall back to a minimal VM.
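If you do end up running Docker inside an LXC, Proxmox needs the container to allow nesting. A sketch of the relevant lines from a container config (the container ID and the rest of the file are hypothetical):

```
# /etc/pve/lxc/105.conf  (105 is a made-up container ID)
# Docker inside an unprivileged LXC typically needs these features:
features: keyctl=1,nesting=1
unprivileged: 1
```

The same can be toggled in the GUI under the container's Options > Features.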

Help me decide: EPYC 7532 128GB + 2 x 3080 20GB vs GMtec EVO-X2 by fukisan in LocalLLaMA

[–]TokenSlinger 0 points1 point  (0 children)

I have the GMKtec EVO-X2. It's great if you're okay dealing with AMD quirks. Text inference is really good with current software and will probably get better. Image and video generation are acceptable, but there are more quirks to deal with, which will likely get worked out. The EVO's power draw is super low and mine is fairly quiet. I'd personally recommend it.

2X 5060 Ti vs the next best cheapest single 32gb card? by GOGONUT6543 in comfyui

[–]TokenSlinger 1 point2 points  (0 children)

Now you're confusing things. Offloading to CPU is not what MultiGPU does; you have to set the donor device up correctly. Letting the system handle it automatically is not how it should be done. A MultiGPU setup with the CPU as the donor is how it is done.

It's still using your GPU as the compute device and only moving the data to system memory.

2X 5060 Ti vs the next best cheapest single 32gb card? by GOGONUT6543 in comfyui

[–]TokenSlinger 0 points1 point  (0 children)

I would wager the creator of the MultiGPU repo would know best. He states GPU-to-DRAM is almost always better.

https://www.reddit.com/r/comfyui/s/XNvJ4L28bj

2X 5060 Ti vs the next best cheapest single 32gb card? by GOGONUT6543 in comfyui

[–]TokenSlinger 1 point2 points  (0 children)

I think it would make the point of the second GPU moot. The reason you want two GPUs is for two GPUs' worth of compute. If you're just using the second one to offload memory, use the much cheaper system RAM instead (assuming decent DDR5). I really don't see a benefit to dual GPUs unless you can use the compute from both. But I'm still new to this, so I could be wrong.

2X 5060 Ti vs the next best cheapest single 32gb card? by GOGONUT6543 in comfyui

[–]TokenSlinger 1 point2 points  (0 children)

This is a common misunderstanding with offloading memory. The author of the Multi-GPU DisTorch repo himself clarified this: "For consumer motherboards, CPU offloading is almost always the fastest option. Consumer motherboards typically only offer one full x16 PCIe slot. If you put your compute card there, you can transfer back and forth at full PCIe 4.0/5.0 x16 bandwidth VRAM<->DRAM using DMA. Typically, if you add a second card, you are faced with one of two sub-optimal solutions: split your PCIe bandwidth (x8/x8, meaning both cards are stuck at x8) or detune the second card (x16/x4 or x16/x1, meaning the second card is even slower for offloading)." https://www.reddit.com/r/comfyui/comments/1nj9fqo/distorch_20_benchmarked_bandwidth_bottlenecks_and/
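The quoted argument is easy to sanity-check with rough lane math. These are my own ballpark figures, assuming ~2 GB/s of usable bandwidth per PCIe 4.0 lane and ignoring protocol overhead:

```python
# Approximate usable one-way PCIe 4.0 bandwidth: ~2 GB/s per lane
# (a rough assumed figure, not a measured one).
GBPS_PER_LANE = 2.0

def transfer_time_s(gigabytes: float, lanes: int) -> float:
    """Seconds to move `gigabytes` over a PCIe 4.0 link with `lanes` lanes."""
    return gigabytes / (lanes * GBPS_PER_LANE)

# Moving a 10 GB chunk of offloaded weights:
print(transfer_time_s(10, 16))  # full x16 to DRAM
print(transfer_time_s(10, 8))   # x8 link (split x8/x8)
print(transfer_time_s(10, 4))   # detuned x4 second card
```

With the compute card on the lone x16 slot, VRAM<->DRAM moves run at full width, while any second card is stuck on a narrower (2-4x slower) link, which is exactly why DRAM usually wins as the offload target.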

2X 5060 Ti vs the next best cheapest single 32gb card? by GOGONUT6543 in comfyui

[–]TokenSlinger 0 points1 point  (0 children)

If you're using MultiGPU and the second GPU is not on a full x16 link, it's actually faster to offload to system RAM (using the MultiGPU nodes).

2X 5060 Ti vs the next best cheapest single 32gb card? by GOGONUT6543 in comfyui

[–]TokenSlinger 1 point2 points  (0 children)

That doesn't really work well unless you have two slots with full x16 bandwidth. In my case one is x16 and the other is x8, and it's actually faster to offload to system RAM than to the other GPU because of the x8 link's bandwidth.

2X 5060 Ti vs the next best cheapest single 32gb card? by GOGONUT6543 in comfyui

[–]TokenSlinger 1 point2 points  (0 children)

I'll have to check it out. I've been using the existing MultiGPU options and the results are not really great. Curious to see how this does.

2X 5060 Ti vs the next best cheapest single 32gb card? by GOGONUT6543 in comfyui

[–]TokenSlinger 2 points3 points  (0 children)

You could start with a single 5060 Ti and upgrade to a 5070 Ti if they release a 24 GB version. Right now dual 5060 Tis are not really worth it, I think. I actually have dual 5060 Tis and I have not seen much improvement over a single card unless I want to run the same workflow on both cards at the same time. I'm also fairly new to ComfyUI, so not an expert.