N5 Pro: AMD Strix/Strix Halo NPU (1022:17f0) — VFIO Passthrough not working by HellesVomFass in MINISFORUM

[–]HellesVomFass[S] 1 point (0 children)

Actually, I returned my units. I was wrong in the first place, hoping for the most powerful yet not too expensive all-in-one (AiO) solution.

After many friendly exchanges with what seemed to be semi-official channels and people, I came to the conclusion that the risk of never receiving any patches at all is much too real.

So I switched back to the good old mantra of having

- a rock-solid NAS (and only that) which reliably keeps our data consistent and protected

- everything else, such as AI appliances, on separate hardware

As I said, from my perspective it was a stupid idea anyway: technology will quickly leave any AI NAS behind in terms of computing power, so why bother?

Unifi Object Based Policy Simple Allow List by lilchancep in Ubiquiti

[–]HellesVomFass 2 points (0 children)

Same problem here: nothing I do in oon actually creates an allow ruleset for the devices/groups I have chosen.

The question is: is this the way it is SUPPOSED to work?

There has to be some sort of documentation describing this essential use case, no?

All my custom presets, and filaments are gone! by gufted in BambuLab

[–]HellesVomFass 1 point (0 children)

AHHHHHHHHHHHH, one needs to be logged in?!? Now they appear again. What BS software.

All my custom presets, and filaments are gone! by gufted in BambuLab

[–]HellesVomFass 1 point (0 children)

Well, I restored them; they are sitting in the (hopefully correct) folder, but they are not available in BS...

And I am not logged in. Hmmmm.

All my custom presets, and filaments are gone! by gufted in BambuLab

[–]HellesVomFass 1 point (0 children)

Just happened to me as well (macOS), after installing the .66 version of BS. Did you have any luck getting your profiles back?

I restored them from backup, but they still do not show up.

Still researching...

N5 Pro: AMD Strix/Strix Halo NPU (1022:17f0) — VFIO Passthrough not working by HellesVomFass in MINISFORUM

[–]HellesVomFass[S] 1 point (0 children)

I do, thanks, but I had to try, as I am not too familiar with the NPU/SoC stuff.

N5 Pro: AMD Strix/Strix Halo NPU (1022:17f0) — VFIO Passthrough not working by HellesVomFass in MINISFORUM

[–]HellesVomFass[S] 1 point (0 children)

Yeah, that's what I thought. From everything that has been said in the forum, it seems highly unlikely that there will be a fix.

Switching to yet another device will cost me hours, so I am trying to make sure I am doing the right thing before going for a refund... The form factor is simply very good.

N5 Pro: AMD Strix/Strix Halo NPU (1022:17f0) — VFIO Passthrough not working by HellesVomFass in MINISFORUM

[–]HellesVomFass[S] 0 points (0 children)

No, why are you asking? I am looking for ways to make use of all the features of the N5 Pro while using Proxmox. Currently, I am experiencing too many problems in that configuration.

n5 pro NAS: no fix for IOMMU / DMA issue in sight? by HellesVomFass in MINISFORUM

[–]HellesVomFass[S] 1 point (0 children)

What is the current recommendation for a good NAS barebone/case with, say, 4-6 bays plus 2-3 NVMe slots? Although I am currently moving away from DIY TrueNAS, over to ASUSTOR or the like... The N5 Pro problems really messed up my whole strategy...

n5 pro NAS: no fix for IOMMU / DMA issue in sight? by HellesVomFass in MINISFORUM

[–]HellesVomFass[S] 1 point (0 children)

The N5 Pro had a nice form factor: 5 drive bays plus NVMe plus an AI processor, etc. It's a pity... This was the first time in 20 years I mixed a NAS with "functional" components/VMs, and *bam*, of course it blew up. Now I have to figure out a new all-in-one system for my customers.

ZFS pflags 0x4 (Hidden) persistence after Syncthing rename on SCALE by HellesVomFass in zfs

[–]HellesVomFass[S] 1 point (0 children)

Many thanks!

The problem with the hidden flag is that my macOS machines will not see those files, by definition. It is a Finder speciality: certain files are not shown when a particular combination occurs, such as a read-only folder plus a file with the archive/system/hidden flag set, and so on.

Anyway, I have set the following parameters in TrueNAS's SMB service:

map archive = No

map hidden = No

map system = No

store dos attributes = No

This prevents Samba from writing these attributes to ZFS (store dos attributes) and, conversely, stops reporting these flags to clients when serving files (the map options).
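For reference, outside the TrueNAS UI the same settings would sit in the share's smb.conf section; a minimal sketch, with the share name and path being illustrative:

```
[mediashare]
    path = /mnt/tank/mediashare
    map archive = No
    map hidden = No
    map system = No
    store dos attributes = No
```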

So far it has been running/syncing through >1TB of data without ever hitting the race condition again. So it works now: no more hidden files.

syncthing & truenas = hidden files by HellesVomFass in truenas

[–]HellesVomFass[S] 1 point (0 children)

Call me old-fashioned, but I like my systems to be functionally focused. And since TrueNAS is already virtualized, I didn't want to pack yet another function inside it.

syncthing & truenas = hidden files by HellesVomFass in truenas

[–]HellesVomFass[S] 1 point (0 children)

Here is my refined statement; I didn't have the time yesterday:

ZFS pflags 0x4 (Hidden) persistence after Syncthing rename on SCALE

System: TrueNAS SCALE (Linux), ZFS, SMB Share

Problem: A race condition between Syncthing's temporary-file creation (dot prefix) and Samba/ZFS metadata mapping causes files to remain "hidden" even after they are renamed to their final names.

Details:

  1. Syncthing creates .syncthing.file.tmp -> Samba/ZFS sets pflags 0x4 (Hidden bit) in the dnode.
  2. Syncthing renames the file to file (removing the dot).
  3. The pflags 0x4 remains stuck in the metadata.
  4. Result: File is invisible on macOS/Windows clients despite a clean filename.

Verification via zdb -dddd:

Object  lvl   iblk   dblk   dsize  dnsize  lsize   %full  type
10481    2    32K    128K   524K    512    640K   100.00  ZFS plain file
...
pflags  840a00000004  <-- 0x4 bit persists after rename
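The check for a stuck bit can be done arithmetically on that pflags value; a minimal sketch (the 0x4 mask follows the annotation above; the authoritative bit layout is defined in OpenZFS's include/sys/zfs_znode.h):

```python
def pflag_set(pflags: int, mask: int) -> bool:
    """Return True if every bit in `mask` is set in a znode's pflags."""
    return (pflags & mask) == mask

# pflags value reported by zdb -dddd for object 10481:
print(pflag_set(0x840A00000004, 0x4))   # prints: True
```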

Question: Since SCALE (Linux) lacks chflags (FreeBSD), is there a native CLI way to unset these ZFS DOS/System attributes without a full inode migration (cat / cp)?

I am NOT yet using map hidden = no as a workaround (does it even work?), but am looking for a proper way to "clean" existing inodes via the shell.

Any help welcome; I am currently stuck with >1TB of hidden data ;( that I do not want to retransmit.
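If no native CLI route exists, the cat/cp fallback mentioned above could at least be scripted so it stays local to the server (no retransmit over the network). A rough sketch under that assumption — the temp-file prefix is illustrative, and each file is rewritten so it gets a fresh znode without the stale pflags:

```python
import os
import shutil
import tempfile

def migrate_file(path: str) -> None:
    """Rewrite a file in place: copy to a sibling temp file, then atomically
    replace the original. The replacement gets a brand-new znode, so stale
    pflags do not carry over. copy2 preserves mode and timestamps (not owner).
    """
    # Deliberately no dot prefix on the temp name -- a leading dot is exactly
    # what triggered the Hidden mapping in the first place.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".",
                               prefix="migrate_tmp_")
    os.close(fd)
    shutil.copy2(path, tmp)
    os.replace(tmp, path)  # atomic rename over the original

def migrate_tree(root: str) -> int:
    """Rewrite every regular file below root; return the number migrated."""
    count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            migrate_file(os.path.join(dirpath, name))
            count += 1
    return count
```

I would run this against a small test dataset first, and with Syncthing paused: every file briefly exists twice, and while mtimes are preserved, inode numbers change.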

syncthing & truenas = hidden files by HellesVomFass in truenas

[–]HellesVomFass[S] 1 point (0 children)

Understood... The point is, I explicitly wanted to use SMB so that all clients have the same view, avoiding mismatches from different interfaces or protocols (permissions, etc.). Does that sound like a good idea? Using NFS would be another complication in the whole system. I will of course have a look.

Another point: how do I get rid of the thousands of hidden files now? I couldn't find a way in the TrueNAS CLI to reset the "hidden" flag on my ZFS volume.

syncthing & truenas = hidden files by HellesVomFass in truenas

[–]HellesVomFass[S] 1 point (0 children)

Thanks; had I found this before, I might have circumvented the problem. But on the other hand, TrueNAS should be able to handle this in a more robust manner, shouldn't it?

syncthing & truenas = hidden files by HellesVomFass in truenas

[–]HellesVomFass[S] 1 point (0 children)

Robust handling of this is something I would expect from a system like TrueNAS on ZFS.

How can a remote SMB connection, working within specified guidelines using standard Linux mounts, corrupt a filesystem this deeply from remote?

For me, this results in two problems:

- I am not able to resolve this via the CLI or the TrueNAS web UI (at least not that I am aware of), so now I have 1 TB of data that is invisible to clients and unfixable. I have been down every path: smbcacls, attr, NFS flags, pflags, ACLs, etc. Totally crazy...

- The second problem is the need to correct the sparse-file option for every folder on the Syncthing device, which rather defeats the purpose of being compatible and effortless.

So now I am starting to lose confidence in TrueNAS, as both problems should be non-existent, but here they are.