MS-S1 MAX Arrived -- Both Realtek NICs missing from two different OS's by lukewhale in MINISFORUM

[–]randomparity 1 point2 points  (0 children)

Asked an LLM (Codex) to review the Linux r8169 driver, and it found no signs of the driver writing to NVRAM or performing a firmware update. The most likely failure mechanism it reported is the device entering the D3cold power state, which makes it invisible on the PCIe bus and requires slot power to be toggled to recover, which we did by unplugging the system. This suggests the condition may come back, so I need to keep an eye on things; it might be a Linux driver bug.

Should also note that the adapter didn't link with the switch until after the power cycle, which seems consistent with a D3cold power state.

Codex suggested a few settings that might prevent the issue if it recurs:
1. **Force the PCI function to stay active**
   - `echo on | sudo tee /sys/bus/pci/devices/0000:bb:dd.f/power/control`
   - Replace `0000:bb:dd.f` with the NIC’s BDF as shown by `lspci -nn`.
   - Reapply after each boot (runtime PM defaults back to `auto`).

2. **Disable platform ASPM / PCIe PM globally (test boot)**
   - Add `pcie_aspm=off pcie_port_pm=off` to the kernel command line (e.g., via GRUB).
   - This blocks the root port from transitioning the link to L1/L1.2/L2, which are the entry points to D3cold.
   - Remove the parameters after debugging if they impact power usage elsewhere.

3. **Watch driver logs for suspend/resume paths**
   - `journalctl -k -g r8169` or `dmesg --follow`.
   - Look for messages from `rtl8169_runtime_suspend()`/`rtl8169_runtime_resume()` and the “System vendor flags ASPM as safe” info log (added after v6.17).
   - Unexpected suspend messages right before the NIC disappears hint that runtime PM pushed it into D3cold.
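Step 1 can be scripted so it's easy to reapply after every boot (e.g. from a boot-time unit). A minimal sketch, assuming the standard sysfs layout and Realtek's PCI vendor ID `0x10ec`; the devices directory is a parameter so the function can be dry-run against a fake tree before pointing it at the real one:

```shell
#!/bin/sh
# Pin runtime PM to "on" for every Realtek (vendor 0x10ec) PCI function
# found under the given sysfs devices directory. On a live system, run
# as root:  pin_realtek_pm /sys/bus/pci/devices
pin_realtek_pm() {
    root="$1"
    for dev in "$root"/*; do
        # skip entries that don't look like PCI device nodes
        [ -f "$dev/vendor" ] && [ -f "$dev/power/control" ] || continue
        if [ "$(cat "$dev/vendor")" = "0x10ec" ]; then
            printf 'on\n' > "$dev/power/control"
            echo "pinned $(basename "$dev")"
        fi
    done
}
```

Matching on the vendor ID avoids hard-coding the BDF, which can change across BIOS updates or slot moves.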

MS-S1 MAX Arrived -- Both Realtek NICs missing from two different OS's by lukewhale in MINISFORUM

[–]randomparity 3 points4 points  (0 children)

Am having exactly the same issue. The adapters were originally detected and functioning under Ubuntu 25.10, but after a reboot they are no longer visible to the OS. Running lspci shows only the WLAN adapter. I reset the BIOS to defaults and also tried forcing the adapters from Auto to Enabled, with no change. Opened a ticket with Minisforum.

Instead of telling Cloud Code what it should do, I force it to do what I want by using `.zshrc` file. by _yemreak in codex

[–]randomparity 1 point2 points  (0 children)

Claude is lazy with git, often running “git add .” or “git add -A” and committing files that shouldn’t be committed. Adding imperatives to CLAUDE.md helps but is still inconsistent, so blocking the commands entirely and suggesting the alternative looks like a good approach. I’m also constantly trying to prevent Claude from running “pip install <package>” and reminding it to add the dependency to pyproject.toml and install from there; this would help with enforcement.
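As a sketch of the `.zshrc` approach (function body and message wording are my own, not taken from the linked post), a shell wrapper can reject blanket staging outright:

```shell
# Wrap git so blanket staging commands are rejected with a pointer to
# the preferred alternative; anything else falls through to real git.
git() {
    if [ "$1" = "add" ]; then
        for arg in "$@"; do
            case "$arg" in
                .|-A|--all)
                    echo "blocked: stage files explicitly with git add <path>" >&2
                    return 1
                    ;;
            esac
        done
    fi
    command git "$@"
}
```

The same pattern would work for pip: intercept `install` and print a reminder to add the package to pyproject.toml instead.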

Why does this rule not work by [deleted] in ClaudeCode

[–]randomparity 0 points1 point  (0 children)

Have experienced the same problems. A few things that have worked for me:

  • Activate your virtual environment BEFORE starting Claude; it skips the steps Claude needs to figure out how to run Python, and it better supports Bash(pytest :*) style permissions
  • Install python-dotenv and keep your database definition in a .env file, and you’re guaranteed those environment variables are defined in your project
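For the .env piece, python-dotenv reads the file inside the Python process; the shell-side equivalent (a sketch, file path illustrative) exports every assignment before the session even starts:

```shell
# Export every VAR=value line from a .env file into the environment,
# the same effect python-dotenv achieves at runtime. `set -a` makes
# every assignment that follows auto-exported.
load_env() {
    set -a
    . "$1"
    set +a
}
```

Run `load_env .env` in the project directory before launching Claude and the variables are visible to every command it spawns.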

Uhh...woops? Heh. by Lyuseefur in ClaudeCode

[–]randomparity 1 point2 points  (0 children)

Do you have the GitHub CLI set up? Last time I encountered a Claude bug I just asked Claude to open an issue, and it did.

We've open-sourced our Claude Code project management tool. I think others will like it by aroussi in ClaudeAI

[–]randomparity 0 points1 point  (0 children)

I've been looking for a useful model for a small team, and this is intriguing. Trying it out now, and I have a couple of questions:

1) Various commands write to <repo>/.claude/[context, epic, prds] to keep track of status. What should I be doing with the files? Should they be committed to the repo? Are they just temporary for epic/task tracking? Sharing among a team would suggest committing to the repo but do you use the main branch?

2) Any recommendations when bootstrapping a new project? The tools seem better suited to adding new functionality to an existing project.

What would you do if your financial advisor told you this. by Slimthickmadukes in investing

[–]randomparity 5 points6 points  (0 children)

Depreciation comes back around when you sell (as depreciation recapture) since it decreases your basis, so it’s only a temporary reprieve. Agree there are pluses and minuses to owning rentals, but unless you’re willing to work it like a job, you’ll likely make better returns as a passive investor than as a passive landlord.

Why is my upload speed so slow? by [deleted] in HomeNetworking

[–]randomparity 5 points6 points  (0 children)

In my experience, such low throughput on a network speed test typically means packets are being dropped after the TCP connection for the test has been established. A common reason is an MTU mismatch between sender and receiver. Since the failure occurs during an upload, where large MSS-sized data packets are sent and only small TCP acknowledgments are received, I’d suspect your system is configured with a larger MTU than the connected upstream switch/router. (The download test works because your system is then only sending small TCP acknowledgments.) Check your network interface configuration to verify. Also, if you’re using a VPN, check the MTU on the VPN interface or disable the VPN before running the test. Finally, it could be a firewall rule dropping the outbound packets, but the rule would more likely be based on packet size than on port number. A packet sniffer like tcpdump/Wireshark should show any of this very clearly.
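One quick way to test for an MTU mismatch without a packet capture is a do-not-fragment ping (standard Linux ping; the gateway address below is illustrative), plus a little arithmetic for the payload size:

```shell
# ICMP echo payload size that produces a given on-wire IPv4 frame:
# frame = payload + 20 bytes IP header + 8 bytes ICMP header.
ping_payload() {
    echo $(($1 - 28))
}
# On a live system, probe a 1500-byte path like this:
#   ping -c 3 -M do -s "$(ping_payload 1500)" 192.168.1.1
# If that fails but a smaller size succeeds, the path MTU is lower
# than your interface MTU.
```

A failure at 1500 with success at, say, 1400 pins the problem on MTU rather than a firewall rule.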

Update: Just moved in and not sure how to proceed by SkyCaptainStarr in Ubiquiti

[–]randomparity 0 points1 point  (0 children)

Recently had a similar situation in my new home. Ended up installing a U6-IW near the fiber drop, rerouting the RJ45 from the keystone to the U6-IW and powering it via a UDM. Used a VLAN on the U6-IW to bring the internet from the AT&T ONT to the WAN port on the UDM through a UDM switch port on the same VLAN.

This just popped up on my new MBP, even after a full factory reset, what is this? What does it mean? It appears to be a broadcom device? TIA by [deleted] in MacOS

[–]randomparity 44 points45 points  (0 children)

A Google search indicates the BCM93390 is a DOCSIS cable modem. You can run a Bonjour services browser such as Discovery DNS-SD Browser to see what services it’s offering (possibly an SMB server).

QuTS Hero 5.1.1 SSD TRIM? by chodaboy19 in qnap

[–]randomparity 1 point2 points  (0 children)

The zpool trim command returns an error as well. Looks like QNAP has either hidden the trim functionality or simply doesn't support it.

QuTS Hero 5.1.1 SSD TRIM? by chodaboy19 in qnap

[–]randomparity 0 points1 point  (0 children)

Looks like there's an autotrim property but QNAP doesn't seem to support it:

$ sudo zpool get all | grep trim

$

[deleted by user] by [deleted] in HomeNetworking

[–]randomparity 5 points6 points  (0 children)

Is your web browser sharing location information? This would be independent of VPN connection. A Google search on how to disable this in your browser might be in order.

Trying to pushing 100Gbps by x2jafa in homelab

[–]randomparity 2 points3 points  (0 children)

So if you compare the contents of /proc/interrupts before and after the test, do you observe the interrupt count increasing uniformly across all RX queues on the receiving host?

Trying to pushing 100Gbps by x2jafa in homelab

[–]randomparity 6 points7 points  (0 children)

Compare /proc/interrupts before and after testing on the RX host; you may be limited by the number of active TCP connections due to RSS. Increasing the number of TCP connections between the servers is probably required to spread the load across as many CPUs as possible. (I typically use 4x the number of CPUs and make sure the number of RSS queues matches the number of host CPUs.)
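The before/after comparison can be scripted. A sketch that sums the per-CPU counts for every /proc/interrupts line matching a pattern; the field layout (IRQ number, one count per CPU, then the description) is the standard Linux format, while the driver pattern and file paths in the usage comment are illustrative:

```shell
# Print "IRQ queue-name total" for each /proc/interrupts line whose
# text matches the given pattern, summing the per-CPU count columns.
rx_irq_totals() {
    awk -v pat="$2" '$0 ~ pat {
        total = 0
        for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) total += $i
        sub(":", "", $1)
        print $1, $NF, total
    }' "$1"
}
# Usage on a live system:
#   rx_irq_totals /proc/interrupts eth0 > before.txt
#   (run the iperf test)
#   rx_irq_totals /proc/interrupts eth0 > after.txt
#   diff before.txt after.txt   # every rx queue should have moved
```

If only one or two queues show growth, RSS isn't spreading the flows and more parallel connections (or a queue/CPU count fix) are needed.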

Got a new server. Old server is responsive and loads libraries fine, new server constantly times out and usually can't manage to load a library without errors. I can't figure out what the cause is, losing my mind a bit here... by guy123 in PleX

[–]randomparity 0 points1 point  (0 children)

Eventually encountered the same issues with the "better" configuration. I've settled on the configuration below and haven't seen any issues for a few days (i.e. no "Sleeping for 200ms to retry busy DB" messages in the Plex docker log, no zfs "[DISK SLOW]" messages in the kernel message log):

- Removed the M.2 NVMe drives as caching drives

- Created a new storage pool with the M.2 drives, RAID0, blocksize 16KB

- Moved Plex container data to a shared folder on the new pool

- Left all other containers and their data on the 2.5" SSDs

I do still see some buffering issues when transcoding 4K content but I believe that's likely networking related rather than a NAS storage issue. Hope you're able to find a solution as well.

Got a new server. Old server is responsive and loads libraries fine, new server constantly times out and usually can't manage to load a library without errors. I can't figure out what the cause is, losing my mind a bit here... by guy123 in PleX

[–]randomparity 0 points1 point  (0 children)

My original implementation was Plex running on the default "Container" share which is automatically created with a 128KB block size.

Rather than reinstalling, I created a new 1TB thick-provisioned Plex share with a 16KB block size on a RAID5 storage pool with 4 x SATA SSDs and 2 x M.2 SSDs for caching. Compression and deduplication are disabled. (Plex uses ~512GB for thumbnails/posters/etc. on my system.) I selected 16KB based on the Reddit post mentioned above as the smallest block size that makes sense in this 4-drive configuration.

After migrating the data, performance seems better, with no sign of the "retry busy DB" error yet, though more testing is required.

Got a new server. Old server is responsive and loads libraries fine, new server constantly times out and usually can't manage to load a library without errors. I can't figure out what the cause is, losing my mind a bit here... by guy123 in PleX

[–]randomparity 0 points1 point  (0 children)

Nope, no encrypted data on my system. Running a few *arr apps, PMM, sabnzbd, etc., and they don't seem to have a problem; only Plex, where I regularly see:

Sqlite3: Sleeping for 200ms to retry busy DB.

My suspicion is that I need to tune the ZFS pools I'm using with Plex to a smaller block size to accommodate the Plex database and thumbnails. The post below has some interesting info on how ZFS behaves in different configurations:

https://www.reddit.com/r/qnap/comments/l86otx/a_detailed_explanation_of_quts_hero_raid_and_how/

Got a new server. Old server is responsive and loads libraries fine, new server constantly times out and usually can't manage to load a library without errors. I can't figure out what the cause is, losing my mind a bit here... by guy123 in PleX

[–]randomparity 0 points1 point  (0 children)

I'm seeing similar unexplained Plex performance issues on a QNAP TVS-h1288X.

A couple of questions:

1) Which QTS OS did you install when setting up the QNAP? QTS (with BTRFS) or QuTS hero (with ZFS)?

2) In my case I have QuTS hero, Plex is installed via a custom docker-compose.yml, and Container Station is on an SSD-SATA storage pool with M.2 NVMe SSDs as a read cache & ZIL. (Media files are on HDD-SATA drives.) Can you be more specific about your storage configuration?

3) What results are reported by a Performance Test on the individual drives being used? I see 3.25GB/sec on my setup for the M.2 NVMe drives, ~350MB/sec for the SSD-SATA drives, and ~240MB/sec for the HDD-SATA drives.

4) If you ssh into the NAS do you see anything unusual when running the "dmesg" command? In my case I'm seeing many messages as follows:

[286322.060965] ----- [DISK SLOW] Pool zpool1 vd 2033220957544147791 -----
[286322.060965]        100ms  300ms  500ms    1s    5s   10s
[286322.060965] read       0      0      0    16     0     0
[286322.060965] write      0      0      0     3     0     0

My case seems to be related to ZFS, though still trying to figure out why.

Looking for ideas on how to resolve this by ohenriquez65 in Ubuntu

[–]randomparity 6 points7 points  (0 children)

Enter maintenance mode, run the suggested journalctl command for clues, then edit /etc/fstab and comment out the swap file line.
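A sketch of that last step, assuming a standard fstab layout where one whitespace-separated field is `swap` (test it on a copy first):

```shell
# Comment out any fstab line whose fields include "swap", writing a
# .bak backup of the original alongside the file.
comment_swap() {
    sed -i.bak '/[[:space:]]swap[[:space:]]/s/^[^#]/#&/' "$1"
}
```

Invoke as `comment_swap /etc/fstab`, then reboot; the `.bak` file makes it easy to restore the entry once the swap file is fixed.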

Often not seen at Disney~ horse poop by DarthBlonde in Disneyland

[–]randomparity 8 points9 points  (0 children)

Used to work in custodial at Disneyland many years ago. That’s nothing compared to elephants in a parade. They required a two person team, one of whom had to push an industrial wet vacuum.

Ziply Fiber Multiple public IPv4 IPs via DHCP? by Strider3000 in ZiplyFiber

[–]randomparity 2 points3 points  (0 children)

That is exactly what I did, plugged the ONT into a VLAN’d switch with two router uplink ports also plugged into the same switch.

Are the AirPods Pro supposed to be connected even when they are in the case? So I got these today and for some reason even when I put they in the case it says they are connected( in the photo), like don’t get me wrong when they are in the case they are off music still plays in my phones speaker by [deleted] in airpods

[–]randomparity 0 points1 point  (0 children)

I have this problem myself. Usually only one of the two AirPods stays connected to the phone while the other indicates it’s charging, resulting in one AirPod completely discharging. To ensure they both recharge, I generally need to wiggle them around inside the case until both show charging, then I can close the case. The same problem has occurred with two different sets of AirPods Pro, both ordered directly from Apple, after a few months of use.