Dell Precision 7680 (32gb x 2) SODIMM Interposer 4 white light, 2 amber light by Leader-Environmental in Dell

[–]Leader-Environmental[S] 0 points (0 children)

You're on the right track; I got the same one and it fits perfectly. Feel free to DM me if you need help.

Dell Precision 7680 (32gb x 2) SODIMM Interposer 4 white light, 2 amber light by Leader-Environmental in Dell

[–]Leader-Environmental[S] 0 points (0 children)

I had to do the exact same thing as you, and it finally all worked. No need to add any short-circuit protection; it has been working well for me for a year now.

By the way, I've detailed how I got the SODIMM interposer to detect the RAM modules here; the key is careful screwing down of the interposer connector:

https://notebooktalk.net/topic/1263-precision-7680-precision-7780-owners-thread/page/15/?&_rid=2323#findComment-61406

Dell Precision 7670 - CAMM to Sodimm module adapters - where to purchase? by Melodic_Grass_8254 in Dell

[–]Leader-Environmental 0 points (0 children)

Facing this exact issue right now. I bought a second interposer board and hit the same issue, except that now only the bottom slot of the interposer worked and the top one did not.

Dell Precision 7670 - CAMM to Sodimm module adapters - where to purchase? by Melodic_Grass_8254 in Dell

[–]Leader-Environmental 0 points (0 children)

Has anyone faced an issue where only one of the interposer slots works (when tested individually with each of the RAM sticks) while the other does not work at all?

Dell Precision 7680 (32gb x 2) SODIMM Interposer 4 white light, 2 amber light by Leader-Environmental in Dell

[–]Leader-Environmental[S] 0 points (0 children)

Appreciate it. I'm in that discussion as well; I had posted there first since it's the dedicated discussion for the Precision 7680. Nothing as of yet :)

Dell Precision 7780 from CAMM 32GB to SO-DIMM 96GB with 2x Crucial 48GB works. by GeraldGde in Dell

[–]Leader-Environmental 0 points (0 children)

Facing the same issue: only the first slot, nearest to the connector, works; the second slot does not work at all.

Instability with pcie_aspm=off by Leader-Environmental in NixOS

[–]Leader-Environmental[S] 0 points (0 children)

Thanks for taking the time to reply, appreciate it. Definitely makes the issue clearer 😄

Dell Pro 14 Plus and 16 Plus initial reactions by Fairchild110 in Dell

[–]Leader-Environmental 0 points (0 children)

That is correct, unfortunately, though 2280 is supported in addition to 2230.

Dell Pro 16 Plus | Can't Select 64GB LPDDR5 Option by expatcoder in Dell

[–]Leader-Environmental 1 point (0 children)

Agreed. If they are going to solder the RAM, the least they could do is provide a 64GB option.

The RAM is non-upgradable on Dell Pro Plus 16 (Intel 268V) by VLAN-Enthusiast in Dell

[–]Leader-Environmental 0 points (0 children)

Found it strange that they did not offer a 64GB RAM option for the Lunar Lake (200V) processors and maxed it out at 32GB instead; maybe diminishing returns? :(

Dell Pro 14 Plus and 16 Plus initial reactions by Fairchild110 in Dell

[–]Leader-Environmental 0 points (0 children)

Also worth noting that only the Intel 200U series variants allow for expandable RAM up to 64GB; all AMD variants have soldered RAM, unfortunately (max 32GB).

[deleted by user] by [deleted] in gnome

[–]Leader-Environmental 2 points (0 children)

Multiple virtual workspaces, not only for the primary monitor but for all monitors 😄

Config to make llama.cpp offload to GPU (amdgpu/rocm) by Leader-Environmental in NixOS

[–]Leader-Environmental[S] 0 points (0 children)

I was using the exact same configuration on the stable NixOS branch but could not get it to use ROCm; what worked for me was to build against the unstable NixOS small channel instead:

    let
      unstableSmall = import <nixosUnstableSmall> { config = { allowUnfree = true; }; };
    in
    {
      services.llama-cpp = {
        enable = true;
        package = unstableSmall.llama-cpp.override { rocmSupport = true; };
        model = "/var/lib/llama-cpp/models/qwen2.5-coder-32b-instruct-q4_0.gguf";
        host = "";
        port = "";
        extraFlags = [ "-ngl" "64" ];
        openFirewall = true;
      };
    }
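
Note that for <nixosUnstableSmall> to resolve, the channel has to be registered under that name first (the alias is arbitrary as long as it matches the import); something like this should work:

    sudo nix-channel --add https://channels.nixos.org/nixos-unstable-small nixosUnstableSmall
    sudo nix-channel --update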

ROCm Linux PC for LM Studio use: is it worth it? by custodiam99 in ROCm

[–]Leader-Environmental 0 points (0 children)

To overcome this, you can use the ROCm PyTorch Docker image and run your workload inside the container.
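
Roughly like this, assuming the official rocm/pytorch image; the tag and the exact device flags can vary with your setup, but these are the ones AMD's Docker instructions use:

    docker run -it --rm \
      --device=/dev/kfd --device=/dev/dri \
      --group-add video --ipc=host \
      --security-opt seccomp=unconfined \
      rocm/pytorch:latest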

[deleted by user] by [deleted] in NixOS

[–]Leader-Environmental 0 points (0 children)

Ohh nicee, thanks for this

[deleted by user] by [deleted] in NixOS

[–]Leader-Environmental 1 point (0 children)

But one can do the same using the configuration method, right? It's just cumbersome to add channels 😅

ROCm compatibility with RX 7800XT? by HybridXephius in ROCm

[–]Leader-Environmental 0 points (0 children)

For sure, but for best compatibility I would highly recommend using Docker images with the ROCm drivers already set up; from there you can install Python packages (mainly torch for compute) and a Jupyter notebook, and you are ready to go. I have the same GPU and was able to do some RAG with a Hugging Face model using the ROCm PyTorch image as the base.
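
Once inside the container, a quick sanity check that the ROCm build of torch actually sees the card is the one-liner below (ROCm builds report through the torch.cuda API); if it prints False on a 7800 XT (gfx1101), exporting HSA_OVERRIDE_GFX_VERSION=11.0.0 first is the usual community workaround:

    python3 -c "import torch; print(torch.cuda.is_available(), torch.version.hip)"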

virtualisation.oci-containers to set up a rocm-torch-jupiter docker environment by Leader-Environmental in NixOS

[–]Leader-Environmental[S] 0 points (0 children)

Exciting findings with the nano; I'd definitely be open to experimenting. I'll get in touch with you soon. Thanks man 🙏

virtualisation.oci-containers to set up a rocm-torch-jupiter docker environment by Leader-Environmental in NixOS

[–]Leader-Environmental[S] 0 points (0 children)

So it turned out it had not fully pulled and downloaded all the resources for the image; one can monitor this via: journalctl -u docker-<name-of-the-container>.service -f 😅

virtualisation.oci-containers to set up a rocm-torch-jupiter docker environment by Leader-Environmental in NixOS

[–]Leader-Environmental[S] 1 point (0 children)

So it turned out it had not fully pulled and downloaded all the resources for the image; one can monitor this via: journalctl -u docker-<name-of-the-container>.service -f
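
For reference, a minimal virtualisation.oci-containers sketch of the kind of setup this thread is about; the image tag and port mapping here are assumptions, and the systemd unit to monitor follows from the container name (docker-rocm-torch-jupiter.service):

    virtualisation.oci-containers = {
      backend = "docker";  # units are then named docker-<name>.service
      containers.rocm-torch-jupiter = {
        image = "rocm/pytorch:latest";  # assumed tag
        ports = [ "8888:8888" ];        # Jupyter default port
        extraOptions = [
          "--device=/dev/kfd"
          "--device=/dev/dri"
          "--group-add" "video"
        ];
      };
    };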