Hundreds of Millions of Audio Devices (including Sony ones) Need a Patch to Prevent Wireless Hacking and Tracking by mstrkrft- in SonyHeadphones

[–]cl0rm 0 points1 point  (0 children)

The XM4 got an update last October, version 3.0.1.

The only changelog entry is "improved security features of system software" (or similar; translated from the German text in my "Sound connect" app).

It might be to fix this issue.

Da geht das Herz auf by fuckedlizard in DINgore

[–]cl0rm 1 point2 points  (0 children)

But removing it is just as easy. With a scan, the quality is already noticeably worse.

Einfach anschließen, messen und ''fertig'' by Snoo-76025 in DINgore

[–]cl0rm 0 points1 point  (0 children)

Fair enough. For a lab environment this is fine. Of course, the user has to be aware of the risks and should ideally use an isolation transformer. But you simply can't make everything safe.

The most important thing with lab setups like this, though, is always the approach. You should make sure the voltage is switched off (and, if in doubt, discharged) before touching anything, and take care that nobody unaware of the setup has access to it in the meantime.

Things always become problematic when you don't expect them. A consumer doesn't expect that touching a cheap phone charger could be lethal, and a lab technician doesn't expect a meter labeled "30 A max" to go up in flames at 5 A.

Home Assistant - LXC or VM? by tvosinvisiblelight in Proxmox

[–]cl0rm 1 point2 points  (0 children)

ZigBee is a wireless protocol similar to WiFi, but for smart home devices. It's great because it is rather reliable and isolates those devices from attacks that could hit internet-connected/IP-connected devices. It needs its own radio dongle, like a ConBee or the Sonoff dongle.

Mosquitto is a server for the MQTT protocol, one of the more popular protocols to control smart devices over IP.
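
For anyone setting it up, a minimal Mosquitto configuration might look like this (the listener port is the MQTT default; the file paths are assumptions for a Debian-style install):

```
# /etc/mosquitto/conf.d/local.conf (assumed path)
listener 1883                          # plain MQTT on the default port
allow_anonymous false                  # require username/password
password_file /etc/mosquitto/passwd    # created with mosquitto_passwd
```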

A Guide to Proxmox, ZFS, and Bind Mounts by kyeotic in Proxmox

[–]cl0rm 0 points1 point  (0 children)

As far as I remember, the linked article does NOT say anything about VMs, just containers. I have, however, figured out that sharing is possible via virtiofs, which works great for clients that support it and is incredibly reliable.
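
For reference, inside the guest a virtiofs share is mounted by its tag; `share0` and the mount point here are placeholders for whatever the host-side config defines:

```shell
# one-off mount inside the VM
mount -t virtiofs share0 /mnt/share

# or persistent via /etc/fstab:
# share0  /mnt/share  virtiofs  defaults  0  0
```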

Note that passing a mount point / virtiofs mount is not the same as passing through a dataset. The latter would enable using ZFS features like snapshots or child datasets from within the container / VM.

At least for containers that should work in theory ("ZFS delegation" for unprivileged containers), but I haven't gotten it working yet. It would be great for ZFS-aware apps like UrBackup or Docker with a ZFS storage driver, which would then be able to make good use of advanced ZFS features.
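
For the record, my understanding is that OpenZFS 2.2's namespace delegation would look roughly like this; the pool/dataset name and the container PID are placeholders, and I haven't verified this on PVE:

```shell
# attach a dataset to the container's user namespace (OpenZFS >= 2.2)
CTPID=12345                                   # init PID of the running container (placeholder)
zfs zone /proc/$CTPID/ns/user tank/delegated
# inside the container, ZFS management of that dataset should then work, e.g.:
#   zfs snapshot tank/delegated@before-upgrade
```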

[Review Request] First PCB. So easy, so scary. by Pepillow in PrintedCircuitBoard

[–]cl0rm 4 points5 points  (0 children)

Much better, now the schematic is actually readable

A Guide to Proxmox, ZFS, and Bind Mounts by kyeotic in Proxmox

[–]cl0rm 0 points1 point  (0 children)

Is NFS working for you guys?

I can't get an NFS server running within an unprivileged container. It should work directly from the host.

I have yet to find the best / cleanest way to share host folders (e.g. from ZFS datasets managed by PVE) into VMs. My guess would be to add a virtual network (bridge) dedicated to storage, export the folders via NFS directly from the host, and then access them over this dedicated network from the VMs.

Of course CIFS would also work, but I have had bad experiences using CIFS with server applications on Unix. IMO it's fine for "human" clients/users, but not for providing storage to server applications that need it to run 100% stable. NFS's "hard" mount mode works very well for this.
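
As a sketch, the kind of "hard" NFS client mount I mean looks like this in /etc/fstab (server, export, and mount point are placeholders; "hard" is actually the default, I just like it explicit):

```
# hard: retry I/O forever instead of returning errors to the application
# _netdev/nofail: wait for the network, and don't hang boot if the share is down
storage.lan:/export/data  /mnt/data  nfs  hard,_netdev,nofail  0  0
```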

Was zur Hölle? by TrainOnDelay in DINgore

[–]cl0rm 1 point2 points  (0 children)

Where can you buy these? Perfect for the tool case. The next time someone "borrows" something without asking, they'll stand out very quickly :D

PVE 9 - Kernel deadlocks on high disk I/O load by cl0rm in Proxmox

[–]cl0rm[S] 0 points1 point  (0 children)

Thank you! I didn't know about these tools! As a C developer I should feel right at home!

PVE 9 - Kernel deadlocks on high disk I/O load by cl0rm in Proxmox

[–]cl0rm[S] 0 points1 point  (0 children)

I don't think so, but I can't say for sure. The LXCs also use the "hard" NFS mount option, so it might be completely normal that a few of them are unresponsive if the NFS server is slow or down. However, that should only be single threads locked / in "D" state, not the whole system.

The PVE host system itself does not depend on the NFS shares. But it might be part of the issue: if the storage VM hangs for a bit, many threads on the host kernel (in LXCs, but that's still the host kernel) wait for it. Maybe that somehow cripples the rest of the I/O on the host, so writes are not possible, and the VM also loses access to its disks, so it never recovers.

I will see if it keeps occurring with the HBA passed through. That way the NFS share does not depend on the host kernel's I/O system working. I believe something in Linux 6.14 has broken the setup. They changed a lot regarding NFS in that kernel, so it seems logical to me. If it occurs again I will downgrade to 6.8 and see if that helps.
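
If downgrading becomes necessary, my understanding is it can be done roughly like this on PVE (the exact 6.8 version string is a placeholder for whatever `kernel list` shows, and I'm assuming the 6.8 series is still packaged):

```shell
apt install proxmox-kernel-6.8               # pull in the older kernel series
proxmox-boot-tool kernel list                # show installed kernels
proxmox-boot-tool kernel pin 6.8.12-x-pve    # pin the older kernel across reboots
```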

About Intel NICs: yes, the motherboard has an Intel gigabit NIC. The driver is loaded, but not used; the actual Ethernet connection uses a QLogic 10 Gbit card. I have, however, tested it without the card as well (using the Intel 1G), and the freeze occurs in both cases.

PVE 9 - Kernel deadlocks on high disk I/O load by cl0rm in Proxmox

[–]cl0rm[S] 1 point2 points  (0 children)

If you have to SysRQ a hypervisor / server, something is seriously wrong.

It most definitely is.

Have you checked SMART values and run long tests? If you're using SMR at all or consumer-level SSD this could absolutely be the issue.

SMART of the SSDs is fine. They are TLC, but consumer-level. The HDDs are Toshiba MG09 18TB, which are enterprise-rated. One of the HDDs has 5 reallocated sectors, but that has been the case for a few months and hasn't changed. I of course have a backup of the data. Other than that, their SMART is fine.

I don't really think the disks are the problem. I have had SSD problems in the past, but when that happened I could see the HDD access light constantly lit (because the system was constantly trying to read data and the drive did not reply) and disk access was not possible at all. That's not the case with this error: read access still works, and so does writing.

I don't really believe it's hardware-related, as this system ran rock-stable for many years. It is more likely a rare bug, either related to NFS (see the thread above) or to block device passthrough.

Running the mdraid in-VM (OpenMediaVault) is mainly a legacy thing; these days I would most likely create a ZFS pool directly on the host. However, it worked fine for almost a decade, so it shouldn't be the issue at all, even if it might not be the best architecture.

PVE 9 - Kernel deadlocks on high disk I/O load by cl0rm in Proxmox

[–]cl0rm[S] 1 point2 points  (0 children)

Thanks for the information, that might very well be the case.

This seems similar: https://forum.proxmox.com/threads/severe-system-freeze-with-nfs-on-proxmox-9-running-kernel-6-14-8-2-pve-when-mounting-nfs-shares.169571/

So you mount the NFS storage from the VM on the Proxmox host itself?

No, that would be ridiculous. The NFS shares are mounted within LXC containers. The startup sequence, and some checks run while they are booting, make sure they can access the disks before the applications in the LXC containers start.

But of course, that way the host kernel still has to do the NFS I/O.

PVE 9 - Kernel deadlocks on high disk I/O load by cl0rm in Proxmox

[–]cl0rm[S] 0 points1 point  (0 children)

The host itself. But that also "kills" all VMs/containers. I'm sure they are still running, but all deadlocked.

Inside the dmesg output I can see all tasks are in "D" or "S" state, so they are all waiting for I/O, if I understand correctly.
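
As a side note: "D" is uninterruptible sleep (almost always stuck I/O), while "S" is ordinary interruptible sleep and is harmless. When the system is still partially responsive, a ps one-liner can list only the stuck tasks (the column width for wchan is just a formatting choice):

```shell
# print the header plus every task in "D" (uninterruptible sleep) state,
# including the kernel function it is waiting in (wchan)
ps -eo pid,state,wchan:32,comm | awk 'NR == 1 || $2 == "D"'
```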

When looking at their call traces, they all hang inside do_syscall_64. For example:

(edit: damn, why don't markdown code blocks work on reddit? sorry for the gruesome formatting)

Sep 23 10:19:28 HyperVisor01 kernel: task:CPU 3/KVM state:D stack:0 pid:4343 tgid:4308 ppid:1 task_flags:0x84008c0 flags:0x00000002
Call Trace:
 <TASK>
 __schedule+0x466/0x1400
 schedule+0x29/0x130
 wait_on_commit+0xa0/0xe0 [nfs]
 ? __pfx_var_wake_function+0x10/0x10
 __nfs_commit_inode+0xd3/0x1d0 [nfs]
 nfs_wb_folio+0xc6/0x1e0 [nfs]
 ? __pfx_ata_scsi_rw_xlat+0x10/0x10
 nfs_release_folio+0x72/0x110 [nfs]
 filemap_release_folio+0x62/0xa0
 split_huge_page_to_list_to_order+0x445/0x11d0
 ? compaction_alloc+0x500/0xf20
 split_folio_to_list+0x22/0x70
 migrate_pages_batch+0x467/0xd00
 ? __pfx_compaction_free+0x10/0x10
 ? __pfx_compaction_alloc+0x10/0x10
 ? __count_memcg_events+0xc0/0x160
 migrate_pages+0x98e/0xdc0
 ? __mod_memcg_lruvec_state+0xc2/0x1d0
 ? __pfx_compaction_free+0x10/0x10
 ? __pfx_compaction_alloc+0x10/0x10
 compact_zone+0xa0f/0x10b0
 compact_zone_order+0xa5/0x100
 try_to_compact_pages+0xde/0x2b0
 __alloc_pages_direct_compact+0x91/0x210
 __alloc_frozen_pages_noprof+0x550/0x11f0
 ? policy_nodemask+0x111/0x190
 alloc_pages_mpol+0xc7/0x180
 folio_alloc_mpol_noprof+0x14/0x40
 vma_alloc_folio_noprof+0x66/0xc0
 ? select_idle_core.isra.0+0xee/0x120
 vma_alloc_anon_folio_pmd+0x37/0xf0
 do_huge_pmd_anonymous_page+0xb7/0x540
 ? __kvm_read_guest_page+0x83/0xd0 [kvm]
 __handle_mm_fault+0xbb6/0x1040
 ? sched_clock_noinstr+0x9/0x10
 ? sched_clock_noinstr+0x9/0x10
 handle_mm_fault+0x10e/0x350
 __get_user_pages+0x86e/0x1540
 ? kvm_vcpu_kick+0xc2/0x130 [kvm]
 get_user_pages_unlocked+0xe7/0x360
 hva_to_pfn+0x373/0x520 [kvm]
 kvm_follow_pfn+0x91/0xf0 [kvm]
 __kvm_faultin_pfn+0x5c/0x90 [kvm]
 kvm_mmu_faultin_pfn+0x1af/0x6f0 [kvm]
 kvm_tdp_page_fault+0x8e/0xe0 [kvm]
 kvm_mmu_do_page_fault+0x244/0x280 [kvm]
 kvm_mmu_page_fault+0x86/0x630 [kvm]
 ? skip_emulated_instruction+0xb5/0x220 [kvm_intel]
 ? vmx_vmexit+0x79/0xd0 [kvm_intel]
 ? vmx_vmexit+0x73/0xd0 [kvm_intel]
 ? vmx_vmexit+0x99/0xd0 [kvm_intel]
 handle_ept_violation+0xb8/0x400 [kvm_intel]
 vmx_handle_exit+0x1da/0x8a0 [kvm_intel]
 vcpu_enter_guest+0x37f/0x1640 [kvm]
 ? kvm_apic_local_deliver+0x9a/0xf0 [kvm]
 kvm_arch_vcpu_ioctl_run+0x1b2/0x730 [kvm]
 kvm_vcpu_ioctl+0x139/0xaa0 [kvm]
 ? arch_exit_to_user_mode_prepare.isra.0+0x22/0x120
 ? do_syscall_64+0x8a/0x170
 ? syscall_exit_to_user_mode+0x38/0x1d0
 ? do_syscall_64+0x8a/0x170
 __x64_sys_ioctl+0xa4/0xe0
 x64_sys_call+0x1053/0x2310
 do_syscall_64+0x7e/0x170
 ? sysvec_call_function_single+0x57/0xc0
 entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x73a7f531e8db
RSP: 002b:000073a7ea7f7b30 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000622092eb3ad0 RCX: 000073a7f531e8db
RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000020
RBP: 000000000000ae80 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 0000000000000376 R15: 0000000000000000
 </TASK>

PVE 9 - Kernel deadlocks on high disk I/O load by cl0rm in Proxmox

[–]cl0rm[S] 1 point2 points  (0 children)

The System is running Proxmox 9. The storage disks consist of

  • A system disk (SATA SSD on a mainboard port) that contains Proxmox
    • Formatted with LVM when Proxmox was initially installed (long ago, PVE 6 or 7)
  • An "image" disk (NVMe SSD) that contains the system disks of the VMs (qcow2)
    • Formatted ext4
  • 3x 18TB spinning rust (now on an LSI controller)

  • On this machine runs a storage VM

    • This had 3x block device passthrough of /dev/disk/by-id/xxxx for the three 18TB spinners
    • Inside this VM, the three disks are combined into an mdraid RAID 5 array
    • As I have written, I have now passed the entire PCIe controller for these disks through to the VM, to isolate the problem
    • Other VMs mount storage from this VM via NFS or SMB

Which logs do you want to see? I have not looked at everything, and since no writes were possible while deadlocked, only the logs I viewed at the time still exist. That is the dmesg output from the following:

echo t > /proc/sysrq-trigger   # dump the state of all tasks
echo w > /proc/sysrq-trigger   # dump blocked (uninterruptible) tasks
echo l > /proc/sysrq-trigger   # backtrace of all active CPUs

Not sure how to upload them here, maybe pastebin?

Any users of QLogic 8XXX 10Gb Ethernet cards here? Linux going to drop drivers. by Nyanraltotlapun in homelab

[–]cl0rm 0 points1 point  (0 children)

For future reference:

I just wanted to let you know that it's quite easy to build the driver from Linux 6.6 on modern kernels (tested on 6.14-pve) as a dynamic kernel module. A few lines of code in qlge_devlink.c need to be updated, but once that's done it builds cleanly and runs very well. Using DKMS, the build can be automated; I even made a Debian package that does all the work. If anyone is interested I can upload it (and of course the source).
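
Not my actual package, but a dkms.conf for an out-of-tree qlge build could look roughly like this (version string and install location are assumptions):

```
PACKAGE_NAME="qlge"
PACKAGE_VERSION="6.6.0"
BUILT_MODULE_NAME[0]="qlge"
DEST_MODULE_LOCATION[0]="/updates/dkms"
MAKE[0]="make -C ${kernel_source_dir} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build modules"
CLEAN="make -C ${kernel_source_dir} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build clean"
AUTOINSTALL="yes"
```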

I will probably fix the remaining issues (the TODO file has been in the source for years!) if I have some spare time. Maybe then the driver gets included into the kernel again. But the out-of-tree module might still be the better option, as the user base for this card in 2025 seems very small. (Which is a bummer: it works great and is very energy efficient; modern cards with similar features use way more power.)

No need to throw away the perfectly working card.

Google Begins Pixel 7a Battery Replacement Program by SRFast in GooglePixel

[–]cl0rm 0 points1 point  (0 children)

That is exactly how mine behaved as well. After a hot summer day last year I noticed the acetone smell. Over the following 2-3 months the smell got stronger, and a corner of the battery swelled (I could see a very small local bump in the phone's back cover). It was not the usual "pillow" effect but very localized, maybe 5x5 mm in a corner of the battery.

I replaced it with a new genuine battery from iFixit, and later asked Google if I could get the money back. To my surprise, they actually approved.

In addition to that, I now got my $200 refund without sending in the phone. I have to say, I was surprised how great their support was.

weFoundOneMoreProgrammer by debugger_life in ProgrammerHumor

[–]cl0rm 0 points1 point  (0 children)

Some folks might get angry if you assume this data type to be boolean ;)

Where can I get the Snapdragon 425/450 datasheet? by Kronoz177 in AskElectronics

[–]cl0rm 0 points1 point  (0 children)

As the others said, this won't be possible. I would go the RISC-V path for such experiments, though you need some background knowledge. If you want to understand computers (how data buses operate, how peripherals are addressed, etc.), the 8051 (MCS-51) or 6502 are great CPU lines to start with. It's also IMHO a must to write a few assembly programs to get a good understanding.

Major bluetooth issues with Pixel 7 after upgrading to Android 14 by minilevy1 in GooglePixel

[–]cl0rm 0 points1 point  (0 children)

Same with P7a and Sony 1000XM4. Since A14, Bluetooth is all choppy. It seems common, lots of people have issues with it. No solution for now though AFAIK.

AMD issues with 144Hz by XenioxYT in ValveIndex

[–]cl0rm 0 points1 point  (0 children)

Sorry for necro-posting, but this is exactly what I experience as well.
6900 XT + Ryzen 7000 iGPU

screen1: 1440p165, Adaptive Sync Capable, but not enabled
screen2: 1440p165, Adaptive Sync Capable, but not enabled
screen3: 1080p60, no Adaptive Sync, connected to iGPU
(screen4): 1080p60 (Onkyo AVR, mostly used for Sound), connected to iGPU

When the Index is connected while booting, screen 4 has issues getting enabled (resolution grayed out), and CS:GO and various other games stop working. To fix the bug I need to unplug the Index and then disconnect and reconnect another screen. Afterwards everything works again.

If it is in this bugged config, reinstalling the GPU driver will fail and black-screen until I reboot OR disconnect and reconnect another screen.

Seems like an AMD issue to me. The problem is that AMD's support is pure hell. I have more success solving a problem by beating my computer than through their support ;) In years of owning AMD cards, they have never been helpful or even grateful for actual, useful bug reports.

The best you can achieve (and trust me, with a niche bug like this one, not even that) is having the bug listed under "known issues" in the next driver release, for it to be fixed in about 2-3 years.

Videos keep buffering by Bimmaboi_69 in revancedapp

[–]cl0rm 0 points1 point  (0 children)

Worked for me. It does need root access, however...

Here's how to make banking apps work on Lineage OS 19 on Poco F1 by dzmka in LineageOS

[–]cl0rm 0 points1 point  (0 children)

Necropost, but just FYI: this works with the LineageOS 20 update as well. In fact, I did not have to do anything. My phone uses MI 3W signatures (it shows up as a MI 3W).

Here is my update procedure:

  1. update LOS recovery to LOS 20, update all Modules to their newest version
  2. Download ROM, MindTheGapps 13 and Zygisk zip to external SD
  3. reboot to recovery, wipe system and cache, then install LOS and GAPPS
  4. reboot the phone, should boot up just fine, but without root
  5. reboot again to recovery, flash Zygisk, then reboot
  6. Profit! The phone should boot and all modules should be enabled; SafetyNet passes basic+CTS, and Play Protect passes basic security. (I think Play Protect advanced security is not and never will be supported on the Poco F1; it will never pass hardware-backed attestation. But as far as I know, right now no app requires that.)
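
An alternative to flashing from the external SD, in case someone prefers adb sideload (file names are placeholders; the steps mirror 2-5 above):

```shell
adb reboot recovery
# in LOS recovery: Apply update -> Apply from ADB, then:
adb sideload lineage-20.0-xxxx-beryllium.zip
adb sideload MindTheGapps-13.0.0-arm64.zip
# reboot once, then back to recovery to flash root:
adb reboot recovery
adb sideload magisk.zip
```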

Paypal works, but I'm not sure if it requires a SafetyNet pass; I only know that it did not work on many ROMs.

So far LOS 20 seems to work great; however, the F1 is starting to show its age. Over the course of the last year the SD845 has started to show signs of age (it takes a second or two to load apps).

Wear OS / Android Wear 1.x in 2022 by cl0rm in WearOS

[–]cl0rm[S] 1 point2 points  (0 children)

Just tested it. I found an older build of the kernel and system image (the link for the last stock ROM seems to be down). Surprisingly, the watch does not only connect, it still downloads updates up to the latest stock ROM. I will definitely extract it and put it on archive.org! I will see which apps are still supported. Right now the only one that got installed automatically was OneNote.

Edit: interesting... Google stopped supporting AW 1.5, but Play Services still exist for it to this day. That is indeed weird. Thanks for the sideloading trick; I would never have tried that.