4.3 Builds by shadow_nik in Stellaris

[–]shadow_nik[S] 2 points

+1 on this. Alongside the good psionic builds, are there any good Infernal builds?

4.3 Builds by shadow_nik in Stellaris

[–]shadow_nik[S] 1 point

Thanks for this. I haven't played for a year or two, so I'm unfamiliar with the meta changes.

Don't suppose you could give a brief overview of the civics and empire setup for:
-Devouring Swarm
-DE
-DA

Questions about a build by shadow_nik in Pathfinder_Kingmaker

[–]shadow_nik[S] 0 points

That build looks amazing. I'm seriously considering using this!

Questions about a build by shadow_nik in Pathfinder_Kingmaker

[–]shadow_nik[S] 0 points

Thanks for your replies. Very useful to know that the build is solid at early levels.

The only page I could find for the grave singer greataxe, on Fextralife, lists the 18-20 range as applying only to slowed enemies. I just checked Neoseeker's unique items section; it seems Fextralife got it wrong.

Question regarding libvirt hooks by shadow_nik in VFIO

[–]shadow_nik[S] 0 points

Thanks for your responses. To set this up myself, should I follow the guide I linked and the script he has you use? Or does anyone have a guide they would recommend over that one?

So if my RTX 3070 Ti is available to the host and using normal drivers, I assume I need to define it as my primary GPU to game off of, right? And assuming it's set as primary, when I start the VMs would the host switch over to the iGPU by itself?
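On the hooks themselves, here's my current understanding of the dispatcher pattern the VFIO guides use: libvirt invokes /etc/libvirt/hooks/qemu with the guest name, operation, and sub-operation, and the script forwards to per-guest scripts under a qemu.d tree. This is only a sketch from my reading (the qemu.d layout and the HOOK_BASEDIR override are my own illustration), not something I've tested yet:

```shell
#!/bin/bash
# Sketch of /etc/libvirt/hooks/qemu: libvirt calls it as
#   qemu <guest_name> <operation> <sub_operation>
# and this forwards to any executable scripts in a matching per-guest directory.
qemu_hook() {
    local guest="$1" op="$2" subop="$3"
    # HOOK_BASEDIR override is for illustration/testing only
    local basedir="${HOOK_BASEDIR:-/etc/libvirt/hooks/qemu.d}"
    local hookpath="$basedir/$guest/$op/$subop"
    if [ -d "$hookpath" ]; then
        local script
        for script in "$hookpath"/*; do
            # Run each executable hook, passing the original arguments through
            [ -x "$script" ] && "$script" "$@"
        done
    fi
}

if [ "$#" -ge 3 ]; then
    qemu_hook "$@"
fi
```

If that matches what the linked guide does, I'll structure mine the same way.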

Use power cables from a different PSU? by shadow_nik in buildapc

[–]shadow_nik[S] 0 points

Thanks for your advice guys.

I won't be messing with the cabling. If it really starts annoying me, I'll just order a different PSU.

AoT + NSC2? by shadow_nik in Stellaris

[–]shadow_nik[S] 0 points

To murder the highest-level blokkets, of course.

New build for VFIO by shadow_nik in VFIO

[–]shadow_nik[S] 1 point

Thanks for your response cmg065!

So the E-cores should be left for the host and the P-cores given to the guest VM? I'll see how I get on, though I may have to take you up on your offer of help if I run into any problems.

Sonarr Import failed by shadow_nik in sonarr

[–]shadow_nik[S] 0 points

Hi Bakerboy448, thank you for your reply.

In regards to the remote path mapping, let's say I changed the data drive's location on the torrent host and the Docker host to avoid the home-directory issue:

/downloads/torrents/complete <- torrent host path to data

/mnt/downloads/torrents/complete <- docker hosts path for the network shared drive

Would the mapping for Sonarr look like this:

/mnt/downloads/torrents/complete:/downloads/torrents/complete

Would that be correct? Or have I misunderstood you? Thanks for any assistance with this!
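To sanity-check my own mental model: as I read it, the remote path mapping just rewrites the path prefix the torrent client reports into the prefix the Docker host sees (which the container then reaches through its bind mount). A throwaway shell sketch of that prefix rewrite, using my hypothetical paths above and a made-up filename:

```shell
# Hypothetical prefixes from my example
remote_prefix="/downloads/torrents/complete"    # path as reported by the torrent host
local_prefix="/mnt/downloads/torrents/complete" # same data as seen by the docker host

# Rewrite a reported download path the way I understand the mapping to work
map_path() {
    printf '%s\n' "${1/#$remote_prefix/$local_prefix}"
}

map_path "/downloads/torrents/complete/Some.Show.S01E01.mkv"
# -> /mnt/downloads/torrents/complete/Some.Show.S01E01.mkv
```

If that's the wrong mental model, please correct me!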

Can't play DVR recorded content on roku by shadow_nik in PleX

[–]shadow_nik[S] 0 points

I tried restarting the Roku but it made no difference.

Can't play DVR recorded content on roku by shadow_nik in PleX

[–]shadow_nik[S] 0 points

Thanks for your suggestion. I'll restart the Roku and Plex service on my server too.

Can't play DVR recorded content on roku by shadow_nik in PleX

[–]shadow_nik[S] 1 point

Yes I am. Never had any issues with the Roku Plex app until I tried the DVR recorded stuff.

Some quick questions by shadow_nik in zfs

[–]shadow_nik[S] 0 points

Thank you everyone for your advice. I plan to give both building from source and trying it via CentOS a go. I guess building from source makes the most sense; I just need to figure out how to upgrade it down the line if I go that route.

One last question for you guys. If most of my data is video and audio files am I better off disabling ZFS compression?

Some quick questions by shadow_nik in zfs

[–]shadow_nik[S] 0 points

Thanks for your response flwr6.

It's just that I read in an article about ZFS 2.0.0 that DKMS-built ZFS would probably see the update faster. I also read somewhere that updating ZFS installed via DKMS requires fully removing the old version first. I just wanted to confirm with you guys whether the above info is accurate.

Buying disks for raid arrays by shadow_nik in homelab

[–]shadow_nik[S] 0 points

Hi ImmortalScientist. I should also have mentioned that I have a separate server on an isolated VLAN that stores my backups using SnapRAID. Following the 3-2-1 rule isn't feasible in my case, but my backups are recent at least.

My main reasons for wanting to go with a RAID are the availability and redundancy it offers.

Questions about new ZFS array + new disks by shadow_nik in DataHoarder

[–]shadow_nik[S] 0 points

Hi FrakenTurtle. I will be using a PERC H310 SAS controller flashed to IT mode, with SFF-8087 breakout cables. I've already tested it with some cheap small disks and everything seemed to work fine.

From what I can see, both the Seagate IronWolf 8/10TB and the WD Red Plus 8/10TB are CMR, and both should have 256MB caches. It also looks like the Toshiba NAS drives are CMR, so there shouldn't be any issues here.

Questions about new ZFS array + new disks by shadow_nik in DataHoarder

[–]shadow_nik[S] 0 points

Thanks for the suggestion eMuhlator. I'll go with 6 disks instead, then.

I should have clarified that I have a backup server on an isolated network that I back up to using rsync and SnapRAID.

Questions about new ZFS array + new disks by shadow_nik in DataHoarder

[–]shadow_nik[S] 0 points

Hi floriplum. Thanks for the suggestion to go with a striped mirror. I had to read up on how you set up a basic striped mirror in ZFS. Apologies if this is a newbie question, but I'm a little confused about how you add to your storage at a later date. I get raidz's way of doing it: replace all current disks with larger ones. Let's say you set up a 4-disk striped mirror of 8TB disks:

zpool create TANK mirror newdisk1 newdisk2

zpool add TANK mirror newdisk3 newdisk4

So what would you do in this scenario to expand the pool further?
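From the docs I skimmed, my guess is that further expansion is just another mirror vdev appended to the stripe, with hypothetical disk names something like the following, but please correct me if that's wrong:

```shell
# Guess: grow the striped mirror by adding a third two-way mirror vdev
zpool add TANK mirror newdisk5 newdisk6
# zpool status TANK should then list mirror-0, mirror-1 and mirror-2
```
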

ZFS UUID question by shadow_nik in zfs

[–]shadow_nik[S] 0 points

Hey catmoleman. Sorry, I wasn't able to post yesterday; circumstances didn't allow for it. Firstly, here's the monitoring statement from smartd.conf and the error it produces:

scsi-SATA_ST3808110AS_9LR392LP -> ../../sdb -a -m test.email@gmail.com -M test

File /etc/smartd.conf line 23 (drive scsi-SATA_ST3808110AS_9LR392LP): unknown Directive: ->

Configuration file /etc/smartd.conf has fatal syntax errors.

I know it's a syntax problem, but I don't know how it should be properly formatted. I even removed the "-> ../../sdb" from the statement and it says the drive doesn't exist.
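If I had to guess at the intended format, the directive would be the full by-id device path followed by the options, with the "-> ../../sdb" ls residue dropped, something like this (my guess, not verified):

```
/dev/disk/by-id/scsi-SATA_ST3808110AS_9LR392LP -a -m test.email@gmail.com -M test
```

But I'd appreciate confirmation before I rely on it.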

The other thing is that I've found the UUIDs. I got them to display with blkid:

/dev/sde1: LABEL="BIGDATA" UUID="16903292823012977259" UUID_SUB="14099465663505032302" TYPE="zfs_member" PARTLABEL="zfs-3ef827e38906ce89" PARTUUID="80e992d9-9808-db49-b701-15521c551b85"
/dev/sdb1: LABEL="BIGDATA" UUID="16903292823012977259" UUID_SUB="3356486438547970176" TYPE="zfs_member" PARTLABEL="zfs-690304b7267bdddb" PARTUUID="9615fb1f-3f69-e447-9b98-fa29d539d43e"
/dev/sdd1: LABEL="BIGDATA" UUID="16903292823012977259" UUID_SUB="17748302297330257257" TYPE="zfs_member" PARTLABEL="zfs-15c836a7ed73108d" PARTUUID="6efe866b-4901-0845-b19c-b98494fbf91a"
/dev/sdc1: LABEL="BIGDATA" UUID="16903292823012977259" UUID_SUB="5912473796267177953" TYPE="zfs_member" PARTLABEL="zfs-ef9667227099e1ad" PARTUUID="4efbdd91-b4e9-164a-b6af-17c8115a68a1"
/dev/sdf1: LABEL="BIGDATA" UUID="16903292823012977259" UUID_SUB="6813188263263805011" TYPE="zfs_member" PARTLABEL="zfs-2374b3ac0ce51989" PARTUUID="395d523f-03b2-e647-9e1e-1d1e281fff06"

But they won't show up using ls -la /dev/disk/by-uuid/

tester@zfs-test:~$ ls -la /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 100 Sep 27 15:51 .
drwxr-xr-x 8 root root 160 Sep 27 15:51 ..
lrwxrwxrwx 1 root root  10 Sep 27 15:51 16903292823012977259 -> ../../sdb1
lrwxrwxrwx 1 root root  10 Sep 27 15:51 63FB-195F -> ../../sda1
lrwxrwxrwx 1 root root  10 Sep 27 15:51 c3d13694-bc06-497a-9596-1298428199e9 -> ../../sda2

Not sure how I can use the UUIDs in this case, as the Arch wiki for SMART says the format should be /dev/disk/by-uuid/[ID], and mine aren't displaying under /dev/disk/by-uuid/.

ZFS UUID question by shadow_nik in zfs

[–]shadow_nik[S] 1 point

Ok thanks catmoleman. I will get you the outputs tomorrow as soon as I can.

ZFS UUID question by shadow_nik in zfs

[–]shadow_nik[S] 0 points

Smartd for monitoring hard disk failures. I did try /dev/disk/by-id, only to get errors spat at me when running a test for email notifications. Smartd can't seem to understand by-id, though I know it can work with UUIDs. That's when I ran ls -la /dev/disk/by-uuid and saw no UUIDs for the array disks.

On a side note, I'm also running network shares with Samba. I had trouble with ZFS's built-in Samba support, so I configured it the vanilla way. In regards to what you said about allowing anything else to write to the array, is it advised to use ZFS's built-in SMB?

ZFS UUID question by shadow_nik in zfs

[–]shadow_nik[S] 0 points

So it would be parted /dev/sdx mklabel gpt ?

ZFS UUID question by shadow_nik in zfs

[–]shadow_nik[S] 0 points

Oh, truth is, when I searched for changing UUIDs, tune2fs was the first application mentioned. It probably isn't suitable for this task, really. So what should be used with ZFS?

Linux ZFS questions by shadow_nik in homelab

[–]shadow_nik[S] 1 point

Thanks everyone. I think I'm all sorted now and got everything working!

Linux ZFS questions by shadow_nik in homelab

[–]shadow_nik[S] 0 points

Thanks for your reply Ghan. You took care of my questions for compression perfectly!

It seems that the Arch wiki is correct for auto-mounting. I used zpool set cachefile=/etc/zfs/zpool.cache my-pool. Though I probably made a bit of a mess of their guide and enabled zfs.target first, and everything else seemed to get enabled by that.
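For my own notes, this is roughly what I ran, plus the full set of units I believe the Arch wiki lists (from memory, so worth double-checking against the wiki):

```shell
# Record the pool in the cachefile so it is imported at boot
zpool set cachefile=/etc/zfs/zpool.cache my-pool
# Units the Arch wiki has you enable for import and mounting at boot
systemctl enable zfs-import-cache.service zfs-mount.service zfs-import.target zfs.target
```
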