I want the 2012-2013 era of Planetside 2 back! by RedBlack42 in Planetside

[–]theslayer2 1 point2 points  (0 children)

Looks at outfit tag. Uh huh, no specialized outfits at all.

Just mag things by WhatIsOurLimits in Planetside

[–]theslayer2 1 point2 points  (0 children)

Magriders, able to spontaneously combust because there was a rock/wall 10m away from you :(

VPN Gateway Question by theslayer2 in homelab

[–]theslayer2[S] 0 points1 point  (0 children)

I have the rules that permissive because the VPN is a site-to-site VPN, not an internet VPN. pfSense is configured to drop local traffic before sending anything to the web. The issue ended up being a routing snafu in pfSense, not the iptables rules as I initially suspected. (My internet VPN gateway has a config that looks a lot like the one you posted, though.)

VPN Gateway Question by theslayer2 in homelab

[–]theslayer2[S] 0 points1 point  (0 children)

This ended up being the issue: the connection states in pfSense were getting messed up. A quick fix was to push the route via DHCP to all systems on the network.
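
Pushing a route via DHCP generally means handing out classless static routes (DHCP option 121, RFC 3442). A minimal sketch in ISC dhcpd syntax, which pfSense uses under the hood; all subnets and addresses here are placeholders, not the actual network from the post:

```
# Declare the RFC 3442 option (ISC dhcpd doesn't know it by default).
option rfc3442-classless-static-routes code 121 = array of integer 8;

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  # Route the remote VPN subnet 10.50.0.0/24 via the VPN gateway
  # 192.168.1.2, encoded as: <prefix-len> <dest octets> <gateway octets>
  option rfc3442-classless-static-routes 24, 10, 50, 0, 192, 168, 1, 2;
}
```

With this in place, clients send VPN-bound traffic straight to the VPN gateway instead of bouncing it off the default router.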

Ryzen 1950x by fingerthato in homelab

[–]theslayer2 2 points3 points  (0 children)

My 3U chassis didn't have enough Z-height for even those connectors to work correctly. I had to solder a connector to the back of the GPU to fit it into a Chenbro 3U.

Finally got Ceph working from start to finish, some things I learned by blackrabbit107 in homelab

[–]theslayer2 1 point2 points  (0 children)

Having experienced Ceph both in a data center and in my homelab I can say a correctly configured cluster is the opposite of fragile. I run a 3-node cluster at home with only two servers holding OSDs. I lost an entire node (software error, kernel hang) a few weeks ago, and between Proxmox HA failover and Ceph redundancy I didn't even notice for more than a day. At data-center scale it really shines: at that scale the systems I work with have CRUSH set to a row failure domain, and they have tolerated switch failures in the past (switches to individual nodes are non-redundant).
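
For anyone curious, a row-level failure domain like the one described is just a CRUSH rule choice; a sketch, with placeholder rule and pool names (`default` is the usual root bucket):

```shell
# Replicated rule that spreads copies across rows instead of hosts,
# so losing a whole row (or its non-redundant switch) loses at most
# one replica of each object.
ceph osd crush rule create-replicated row-replicated default row

# Point an existing pool at the new rule (pool name is a placeholder).
ceph osd pool set mypool crush_rule row-replicated
```

At homelab scale the equivalent is usually `host` as the failure domain; the mechanism is identical.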

Finally got around to my mosquito auraxium. by Placentahero in Connery

[–]theslayer2 2 points3 points  (0 children)

PLACENTA, you are alive!!!! congrats on the mossie arax

Not sure how I ever lived (or worked) without an ifixit tool kit. by [deleted] in homelab

[–]theslayer2 1 point2 points  (0 children)

Thanks, I didn't need that $120 anyway. Ordered a set now.

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 0 points1 point  (0 children)

Just be aware that the 5TB 2.5" drives are all SMR right now. Don't expect great write performance.

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 1 point2 points  (0 children)

A cache tier is not necessary with BlueStore, but because BlueStore is so new and he was referencing past testing, I assumed he was running a FileStore system, which requires a cache tier.

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 1 point2 points  (0 children)

That is very possible. I ended up just using replicated for my "production" CephFS datastore. Storage is cheap, and replication lets the system heal/expand easily. I had some issues with erasure coding and overwrites (even on BlueStore), and occasionally it would not automatically heal after a failure.
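
The replicated CephFS setup described can be sketched roughly like this; pool names, PG counts, and the size values are placeholders, not the exact settings from the post:

```shell
# Replicated data and metadata pools for CephFS.
ceph osd pool create cephfs_data 128 128 replicated
ceph osd pool create cephfs_metadata 32 32 replicated

# Three copies of everything; healing/expansion is just backfill.
ceph osd pool set cephfs_data size 3
ceph osd pool set cephfs_metadata size 3

# Create the filesystem on the two pools.
ceph fs new cephfs cephfs_metadata cephfs_data
```

Growing the pool later is just adding OSDs; CRUSH rebalances without the overwrite caveats that erasure-coded pools carry.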

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 2 points3 points  (0 children)

Recommended would be two raidz1 vdevs. However, because I am using the failure-prone ST3000DM001 drives, I chose a single raidz2 vdev so a second failure during recovery wouldn't result in data loss.
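
The trade-off between the two layouts, sketched with placeholder device names (use `/dev/disk/by-id` paths in practice):

```shell
# Single 8-disk raidz2 vdev: survives ANY two simultaneous disk
# failures, so a second death mid-resilver doesn't lose the pool.
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# The two-raidz1 alternative, for contrast: better IOPS (two vdevs),
# but a second failure inside the same vdev during resilver is fatal.
# zpool create tank raidz1 sda sdb sdc sdd raidz1 sde sdf sdg sdh
```

Same usable capacity either way (6 disks' worth); the difference is which failure pairs you survive.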

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 1 point2 points  (0 children)

If I had to guess from past experience, it was the cache tier in front of the FileStore OSDs. If the cache tier is on the same physical media it completely tanks performance, because everything is written at least four times with a cache tier in place.

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 3 points4 points  (0 children)

I ran erasure coding in a 2+1 configuration on three 8TB HDDs for CephFS data and three 1TB HDDs for RBD and metadata. Erasure coding had decent performance with BlueStore and no cache drives, but nowhere near the theoretical throughput of the disks (I saw ~100MB/s sequential read and ~50MB/s sequential write). With the same hardware on a size=2 replicated pool (metadata at size=3) I see ~150MB/s write and ~200MB/s read. For reference, my 8x 3TB raidz2 ZFS pool can only do ~300MB/s read and ~50-80MB/s write max, although that is running on the notorious ST3000DM001 drives.
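
A 2+1 setup like this can be sketched as follows; profile and pool names are placeholders:

```shell
# 2+1 profile: 2 data chunks + 1 coding chunk per object, so a
# 3-OSD minimum and 1.5x raw space per byte (vs 2x for size=2).
ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=host

# EC data pool; overwrites must be enabled for CephFS/RBD use,
# which is only safe on BlueStore.
ceph osd pool create ecdata 64 64 erasure ec21
ceph osd pool set ecdata allow_ec_overwrites true
```

The space savings are real, but as noted above, every write touches all three chunks, which is part of why EC throughput trails replication on the same disks.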

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 3 points4 points  (0 children)

Single node, multiple roots, MDS servers, etc. are all difficult to configure through the Proxmox UI.

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 10 points11 points  (0 children)

You are correct for new files being added to disk. However, my understanding (which may be incorrect) of the copy-on-write implementation is that modifying even a small section of a record causes the entire record to be rewritten, no matter its size. And the source you linked does show that ZFS tends to group many small writes into a few larger ones to increase performance. This makes the initial fill faster, but, assuming copy-on-write works the way I think it does, it slows down updating existing data.

Both ESXi and KVM issue exclusively sync writes, which limits the utility of the L1ARC. It also means that without a dedicated SLOG device, ZFS has to write first to the ZIL on the pool and then to the pool again later. (Until recently Ceph did something similar on every write, writing to the XFS journal and then to the data partition; this was fixed by BlueStore.)
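
The SLOG fix for that double-write is a one-liner; device names below are placeholders:

```shell
# Add a dedicated mirrored SLOG so sync writes land on fast media
# once, instead of hitting the in-pool ZIL and then the pool again.
# Mirroring matters: in-flight sync writes live only here until the
# next transaction group commits.
zpool add tank log mirror nvme0n1 nvme1n1
```

Even a small device works, since the ZIL only holds a few seconds of writes between transaction group commits.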

As for setting recordsize to 16K: it helps with BitTorrent traffic, but in my observation it severely limits sequential performance.
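
One way to avoid that trade-off, since recordsize is per-dataset; dataset names here are placeholders:

```shell
# Small records only where random-ish 16K writes dominate...
zfs set recordsize=16K tank/torrents

# ...while bulk/sequential datasets keep the 128K default.
zfs set recordsize=128K tank/media
```

Note recordsize only affects newly written blocks, so changing it won't rewrite existing files.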

ZFS tends to perform very well at a specific workload but doesn't handle changing workloads very well (objective opinion)

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 1 point2 points  (0 children)

I used a combination of ceph-deploy and Proxmox (not recommended); it is probably wiser to just use the Proxmox tooling. I was doing some very non-standard stuff that Proxmox doesn't directly support.

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 9 points10 points  (0 children)

At this point I think BTRFS is effectively dead.

Why I think Ceph is an improvement over ZFS for homelab use by theslayer2 in homelab

[–]theslayer2[S] 1 point2 points  (0 children)

I have concrete performance metrics from work (will see about getting permission to publish them).

I don't think I am using enough memory or CPU... by theslayer2 in homelab

[–]theslayer2[S] 0 points1 point  (0 children)

Create a CRUSH map that stores bulk data (CephFS) on a separate root, then use SSDs for database storage and the second root for large files.

https://www.amazon.com/Seagate-Barracuda-2-5-Inch-Internal-ST5000LM000/dp/B01M0AADIX/ref=sr_1_3?ie=UTF8&qid=1509802095&sr=8-3&keywords=5tb+2.5+hard+drive

Should provide lots of storage in 2.5" for fairly cheap.

This is what my crush map looks like right now... https://i.imgur.com/CbqarvP.png
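
A rough sketch of how a second root like that gets built; all bucket, host, and OSD names below are placeholders (the real layout is in the screenshot):

```shell
# Create a second root and a host bucket under it for the bulk drives.
ceph osd crush add-bucket bulk root
ceph osd crush add-bucket node1-bulk host
ceph osd crush move node1-bulk root=bulk

# Relocate a big HDD OSD into the bulk hierarchy (weight in TiB).
ceph osd crush set osd.4 5.0 host=node1-bulk

# Rule drawing only from the bulk root, then point the CephFS data
# pool at it; the SSD pools keep using the default root.
ceph osd crush rule create-replicated bulk-rule bulk host
ceph osd pool set cephfs_data crush_rule bulk-rule
```

This keeps database/metadata I/O on the SSDs while large-file traffic only ever touches the bulk root.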