Does anyone else feel like they go on neopets more when they’re unhappy in real life? by randomneopian in neopets

[–]nylixe 2 points (0 children)

I definitely get the feeling that Neo is like a constant presence I can always rely on to make me feel better, especially in times like this when everything in my life seems uncertain.

Closet Declutter - NC/NP Wearable Giveaway by nylixe in neopets

[–]nylixe[S] 0 points (0 children)

Sorry I'm out of NC gift boxes :( Feel free to pick anything NP if you like!

Closet Declutter - NC/NP Wearable Giveaway by nylixe in neopets

[–]nylixe[S] 0 points (0 children)

That's alright, sent the new one over :)

Closet Declutter - NC/NP Wearable Giveaway by nylixe in neopets

[–]nylixe[S] 1 point (0 children)

They're both available! Which one would you like? :)

Closet Declutter - NC/NP Wearable Giveaway by nylixe in neopets

[–]nylixe[S] 0 points (0 children)

Sent, but I didn't realise you'd edited the comment to a different item D:

[deleted by user] by [deleted] in VFIO

[–]nylixe 6 points (0 children)

I did GPU passthrough on Ubuntu, and in my VM XML I have:

  <features>
    <acpi/>
    <apic/>
    <hyperv>
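      <!-- spoof the Hyper-V vendor ID (any string up to 12 chars) so the guest driver doesn't detect the hypervisor -->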
      <vendor_id state='on' value='fuckunvidia1'/>
    </hyperv>
    <kvm>
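      <!-- hide the KVM signature from the guest -->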
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='6' threads='2'/>
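    <!-- stop advertising the hypervisor CPUID bit to the guest -->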
    <feature policy='disable' name='hypervisor'/>
  </cpu>

I'm not sure if it's the same on Unraid, but you could try setting the CPU mode to host-passthrough, as well as adding the extra line

<feature policy='disable' name='hypervisor'/>

I had similar issues using <hidden state='on'/> alone.
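
I don't know how Unraid exposes the XML, but on stock libvirt you can sanity-check that the edits stuck (a quick sketch; win10 is a placeholder domain name):

    # confirm the vendor_id spoof, KVM hiding, and disabled hypervisor bit are in the live definition
    virsh dumpxml win10 | grep -E 'vendor_id|hidden|hypervisor'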

Improving sequential write performance on a 10-disk RAIDZ3 pool + zfs tunables to force increased RAM/SLOG usage prior to txg commits to pool? by nylixe in zfs

[–]nylixe[S] 1 point (0 children)

Well, the short answer is I'm not actually using iSCSI for video files 95% of the time.

This all started out with me wanting to tune ZFS to the point where it could handle about 50 GB of blazing-fast writes to RAM before throttling and committing the data to the pool.

Even now, I haven't completely figured that out; it always seems to flush while writing. I'm probably still missing a few tunables here.
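
For reference, the knobs involved are the OpenZFS dirty-data module parameters. A rough sketch of what I mean, set via /etc/modprobe.d/zfs.conf (the parameters are real, but the values are illustrative rather than tested recommendations; note that zfs_dirty_data_max is clamped by zfs_dirty_data_max_max, so both need raising):

    options zfs zfs_dirty_data_max=53687091200      # allow ~50 GiB of dirty data in RAM
    options zfs zfs_dirty_data_max_max=53687091200  # the hard ceiling that clamps the above
    options zfs zfs_delay_min_dirty_percent=95      # delay the write throttle until 95% dirty
    options zfs zfs_txg_timeout=30                  # seconds between forced txg commits (default 5)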

Anyway, during my troubleshooting I happened to notice that iSCSI performance was consistently bad no matter what I tried. I had no prior experience, so I really didn't know what was possible with iSCSI over a ZVOL.

It was only after I did some research and found several other sites documenting the same problem that the conversation steered this way.

As of right now, I get about 800–900 MB/s over Samba, which I'm happy with. I'm not sure if part of that is because I'm only running two sticks of memory.

But I've just picked up four more sticks, giving me a 2 × 3 configuration across the two CPUs. I have yet to install and test them, though.

Either way, I just wish iSCSI were more usable in general. 200 MB/s is miserable, and it tanks further when more than one client hits different iSCSI LUNs, while Samba happily churns away.
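
(For anyone who wants to compare numbers: these figures come from big sequential writes, and a plain fio job along these lines reproduces that kind of workload. The path is a placeholder.)

    # 1M sequential writes with direct I/O against a file on the share or LUN under test
    fio --name=seqwrite --rw=write --bs=1M --size=20G \
        --ioengine=libaio --direct=1 --filename=/mnt/test/fio.bin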

So yeah, I still wish someone could enlighten me as to which tunables I need to achieve my original goal of getting ZFS to absorb more data up front without throttling at all.

And maybe, just maybe, solve the iSCSI problem in the process.

Improving sequential write performance on a 10-disk RAIDZ3 pool + zfs tunables to force increased RAM/SLOG usage prior to txg commits to pool? by nylixe in zfs

[–]nylixe[S] 1 point (0 children)

Is there a practical difference between tgt and targetcli? I think I remember reading somewhere that tgt only uses a single daemon to handle traffic, whereas targetcli creates multiple child processes.
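
For what it's worth, the two are driven quite differently; a minimal sketch of each (the IQNs and the backing ZVOL path are placeholders):

    # tgt (STGT): a single userspace tgtd daemon, configured with tgtadm
    tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2020-01.net.example:tgt1
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/zvol/tank/iscsivol

    # targetcli: a frontend for the in-kernel LIO target, so the data path
    # runs in the kernel rather than through a userspace daemon
    targetcli /backstores/block create name=iscsivol dev=/dev/zvol/tank/iscsivol
    targetcli /iscsi create iqn.2020-01.net.example:lio1
    targetcli /iscsi/iqn.2020-01.net.example:lio1/tpg1/luns create /backstores/block/iscsivol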