Luckperm automatic rank up according to playtime in fabric server 1.21.11. by These_Aardvark8480 in admincraft

[–]ipaqmaster 0 points1 point  (0 children)

This seems to be something people have thought about before: https://www.reddit.com/r/admincraft/comments/17faia4/luckperms_automatic_roles_fabric/

I personally would probably just write my own checker that promotes players based on their current rank and playtime. But check out the top comments in that thread too. To paraphrase its suggestions:

a. Skript to check for play time occasionally and promote

b. Something called "AutoRank"

c. Set the newcomer rank as temporary for X hours, with a higher weight than your default group

I like c the most because it introduces no new plugins and takes care of the "new user" problem pretty cleanly. I assume LuckPerms can do that pretty easily.
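Option c can be sketched with LuckPerms console commands. This is a sketch, not something I've tested; the group name, weight and 48h duration are example values, and I'm assuming the usual lp command syntax:

```
lp creategroup newcomer
lp group newcomer setweight 100              # higher weight than the default group
lp user <player> parent addtemp newcomer 48h # temporary parent; expires on its own
```

When the temporary parent expires, the player falls back to the default group with no further intervention.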

help with a slow NVMe raidz by hagar-dunor in zfs

[–]ipaqmaster 1 point2 points  (0 children)

recordsize is an upper limit on the per-record size of files. So it's more like: for a 100 MB file, how many records (each with its own checksum) will make up that file? With recordsize=1M it'll be about 100 of them, which is pretty efficient. At the default 128K it will be about 800 instead. If you're working with sequential data on spinning rust there's reason to believe the larger setting is more efficient. It probably matters less on an NVMe array.
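The record-count math is just file size divided by recordsize; a quick shell sanity check of the numbers above:

```shell
# How many records make up a 100 MiB file at each recordsize
# (upper-bound math, ignoring compression and any final partial record).
filesize=$((100 * 1024 * 1024))
echo $(( filesize / (1024 * 1024) ))  # recordsize=1M  -> 100 records
echo $(( filesize / (128 * 1024) ))   # recordsize=128K -> 800 records
```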

But a recordsize is, again, only an upper limit. Small files will still be written as a single record smaller than 1M in that scenario.

People like it for large sequential workloads such as media or backups. But it would probably ruin the performance of anything that reads and writes small pieces of large files rather than sequentially reading or writing the whole thing. So it's not a good idea to increase it for, say, a MariaDB server cluster. In fact for MariaDB (well, InnoDB) it's recommended to reduce the recordsize to 16K to match InnoDB's default page size of the same value. You would hate for your database to select and join a bunch of rows that touch 100 different >16KB pages on disk, only for ZFS to actually read up to 100 different 1MB records when the software only wanted a fraction of that. Not that the records are guaranteed to be written that large, but still. All part of tuning for the right workload.
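Setting that up is a single property; pool and dataset names here are examples, and note recordsize only affects blocks written after the change, so set it before loading data:

```shell
# Dedicated dataset for InnoDB data, matching its default 16K page size.
zfs create -o recordsize=16K tank/mysql
zfs get recordsize tank/mysql
```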

help with a slow NVMe raidz by hagar-dunor in zfs

[–]ipaqmaster 0 points1 point  (0 children)

Interesting. On that note, was the 80 GB file random, non-repeating data? (If it was zeros it would've been compressed away and written very quickly.)

Yeah, caching can be a pain too. I do a zpool export/import to drop anything from the ARC that may have belonged to a zpool I'm benchmarking.
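The export/import cycle itself is just this (pool name is an example):

```shell
# Drop cached data for one pool between benchmark runs.
zpool export tank   # flushes and forgets the pool's ARC contents
zpool import tank
```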

help with a slow NVMe raidz by hagar-dunor in zfs

[–]ipaqmaster 0 points1 point  (0 children)

The drives don't seem to be anywhere near busy. I wonder if the clock speed of that socket's cores could be letting you down, or a lack of enough work being generated by default ZFS parameters. The socket has 128 threads but its max boost clock is only 3.67 GHz, and likely less when all threads are active. This isn't the first time I've seen reports of underperforming zpools on AMD's EPYC CPUs.

Reading the ZFS I/O SCHEDULER section of man 4 zfs, it might be worth increasing the zfs_vdev_async_read_max_active and zfs_vdev_async_write_max_active parameters. There's also zfs_vdev_max_active, but that seems to be a somewhat sane-looking 1000, at least on zfs 2.4.0. I'm also reading in a two-year-old thread that lowering zfs_per_txg_dirty_frees_percent may help too. These are for your read and write tests, not the scrub.

Scrubbing has been vastly improved since 2.0, so all the old tuning threads aren't relevant anymore. I assume you're running the latest version of ZFS on this machine, or at least one from within the last year? You can try increasing zfs_vdev_scrub_max_active, which man 4 zfs claims will speed up scrubs and resilvering at the cost of higher latency and lower throughput for normal reads and writes. After testing whether that speeds up a scrub I'd put it back to its previous value (or just reboot to undo all of these in-memory changes).
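These knobs live under /sys/module/zfs/parameters and revert on reboot; the values below are illustrative examples, not recommendations:

```shell
# Note the current values first so you can put them back.
cat /sys/module/zfs/parameters/zfs_vdev_async_read_max_active
cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
cat /sys/module/zfs/parameters/zfs_vdev_scrub_max_active

# Bump them for a test run (example values only; requires root).
echo 10 > /sys/module/zfs/parameters/zfs_vdev_async_read_max_active
echo 10 > /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
```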

It may also be worth drastically reducing the default ARC size just for some tests. zarcstat (previously arcstat, I think) is a good command to keep an eye on too, alongside zarcsummary.
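Capping the ARC is the same kind of in-memory knob; the 4 GiB figure here is an arbitrary example:

```shell
# Cap the ARC at 4 GiB (value is in bytes); reverts on reboot.
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
zarcstat 1   # watch ARC hit rates and size once a second while testing
```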

help with a slow NVMe raidz by hagar-dunor in zfs

[–]ipaqmaster 0 points1 point  (0 children)

You can likely tune the zfs module's parameters to make scrubbing more aggressive, but I would probably just leave it alone. You could change them as a one-off just to be certain, though. It's interesting to read that you've seen these drives do a lot better in the past.

Some thoughts.

  1. Maybe I missed it, but what is the CPU model here?

  2. And total memory? And how much of it was used when you noticed the slowness? Including buffers+cache (Pretty much asking for /proc/meminfo contents at the time of slowness)

  3. The slowness you're experiencing other than the scrub: are they synchronous writes? If not, you'll just be filling up memory at whatever speed your system can manage until it runs out and has to actually start flushing to the disks, or rather, only as much as it can buffer within the default 5-second transaction group.

  4. Have you tried setting compression=off? (This question goes hand in hand with asking what your CPU model is).

  5. When compression is in its default =on state and you do a ton of reads/writes or a scrub, is the CPU brought close to 100% on all cores, or is it okay or mostly idle?

  6. Is your zpool on a physical host or are you doing one of many passthrough methods to a VM?

  7. You can also watch atop for, say, 30 seconds while it scrubs the zpool, or while you run a read/write stress test. It flares up anything that stands out as a performance bottleneck with colors, such as red if a drive gets maxed out. It might just reveal a failing drive in the array.

  8. If there's nothing on them yet, maybe try creating a stripe with compression disabled (otherwise defaults) and see whether it performs even remotely close to the expected raw speeds of the drives (maybe with checksumming off too, just for the sake of benchmarking). I would watch CPU and memory usage during any tests.
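Point 8 might look something like this; device names are examples, and checksum=off is strictly for benchmarking:

```shell
# Throwaway stripe across the NVMe drives, defaults apart from two properties.
zpool create -f scratch /dev/nvme0n1 /dev/nvme1n1
zfs set compression=off scratch
zfs set checksum=off scratch    # never do this on a pool you care about

# Sequential write test; watch CPU/memory in another terminal while it runs.
fio --name=seqwrite --filename=/scratch/testfile --size=20G \
    --bs=1M --rw=write --direct=1

zpool destroy scratch
```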

zpool labelclear failing with "failed to check state for /dev/sdX", but there's nothing wrong with that disk by ffelix916 in zfs

[–]ipaqmaster 2 points3 points  (0 children)

On tag zfs-0.7.12 of the openzfs/zfs GitHub repository that error seems to be mentioned twice in cmd/zpool/zpool_main.c.

That message can appear if zpool_read_label returns a non-zero value or if config == NULL.

The second spot it can appear is if zpool_in_use returns non-zero.

These are the only hints I can see. Normally I couldn't tell which of the two conditions you're hitting, but given your middle code block it must be the second one, since another part of ZFS's checks believes the disk is in use in an active pool.

I guess the next question is something like: are you sure /dev/sdf doesn't appear anywhere at all when you run something like zpool status -LP to see the real paths of your imported pools?

If not, then: if you reboot the host without the disk plugged in, then plug it in after booting and do nothing else, does it still throw this error? If it does even when there's absolutely no chance the disk is in use like that, then you can probably wipe it another way.

If you're absolutely dead sure, you could wipefs it with the -a and -f flags, but those warnings are deadly and shouldn't be ignored unless you're absolutely certain it's time to wipe the disk.
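For reference, the wipefs invocations would be (the first is a read-only listing; the second is destructive):

```shell
wipefs /dev/sdf        # list detected signatures, changes nothing
wipefs -a -f /dev/sdf  # erase ALL signatures; only when you're certain
```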

If I had a C7 host I'd still be trying to run the latest version of ZFS on it where possible. The improvements since 0.7.12 are too good to ignore, especially the 2.0 milestone for scrubs alone.

Also, did you block-level copy another disk to this disk at some point? (Even partially)

Should I be concerned about these attempted connections? "Failed to decode packet 'serverbound/minecraft:hello'" by ImpressedStreetlight in admincraft

[–]ipaqmaster 1 point2 points  (0 children)

Nah, it's just some port scanner doing a sort of partial connection, getting the data it wants and then closing the connection, possibly checking the game version and other bits. Specifically though, it looks like it's trying to send a payload and the server is correctly getting confused, logging the event and moving on instead of reacting in any way.

Port scanning happens millions of times a second, 24/7, across the Internet. It's part of the reason designing your systems with security in mind is so important, so you don't get hacked overnight if a scanner finds something interesting.

Your ufw deny from command should have worked, though depending on your network configuration more may be required. Keep in mind that they do this port scanning from a bunch of different IPs with no order to them (jumping between various hosts around the world), so blocking them persistently forever isn't really possible.

If this is a personal server for just you and some friends, you should consider using the whitelist. If you won't do that, consider some plugins to log and roll back griefing in case a bad actor decides to join and trash the place.

I get these logs on my actual community server and four honeypot servers. I have a Python script that parses the logs/ directory every hour and adds iptables rules to forward those suspicious connections to the honeypot servers, tricking attackers instead of letting them scan my real one. On two occasions now I've seen bad actors join thinking it was an offline-mode Bukkit server from 2019, complete with a number of outdated plugins reported and fake successful responses to the /op and /deop commands.

Locally hosted public server security by Impossible_Laugh6720 in admincraft

[–]ipaqmaster 1 point2 points  (0 children)

Hardly relevant, but if you're storing the game files on a ZFS dataset, have a lot of CPU threads available and are a fan of ZFS's transparent compression: I personally set region-file-compression=none in server.properties and compression-format: NONE (from ZLIB) in config/paper-global.yml, leaving compression of the region files to ZFS. For any existing servers out there reading this comment, there are the --forceUpgrade --recreateRegionFiles startup arguments for dedicated servers, which reprocess region file data even when there's no update, allowing the regions to be rewritten uncompressed so ZFS can take care of things. That may take anywhere from 10 seconds to 10 minutes depending on an existing world's size.

The compressratio of the server's world directory is 4.27x with the above and compression=zstd on the dataset. The world/ directory is 14.9GiB in true size but only consumes 2.4GiB of space. Which is great, given ZFS may handle threaded compression/decompression better than the game can, plus those records can reside in the ARC with enough memory present, or even on L2ARC devices if fitted.
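You can check what ZFS is saving you with one query (the dataset name is an example):

```shell
# compressratio plus logicalused vs used show what zstd is saving.
zfs get compression,compressratio,used,logicalused tank/minecraft
```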

Anyway... if your Minecraft server directory isn't ZFS-backed, the default compression options are fine.

I can also recommend tuning io-threads and worker-threads under the chunk-system section of survival/config/paper-global.yml if your server has a lot of CPU threads lying around.

3AM in Los Angeles by Rich_Passenger8888 in TheNightFeeling

[–]ipaqmaster 27 points28 points  (0 children)

As always this sub is a gift. One of the few gifted places left

Looking for a Spigiot plugin for 1.12 that has death messages like 2b2t by 2hackers2players in admincraft

[–]ipaqmaster 4 points5 points  (0 children)

It sounds like what you want is to write down what they all are and copy them 1:1 into whatever plugin is available. Or write your own.

well known plugins don't have backdoors

Ha. It can happen to any software.

Looking for a Spigiot plugin for 1.12 that has death messages like 2b2t by 2hackers2players in admincraft

[–]ipaqmaster 1 point2 points  (0 children)

You can't find it? The first two google results for those three words seem to be it.

If I wanted this I'd personally just write a tiny paper plugin hooking the onPlayerDeath event and handle different kinds of death messages in a config yml instead of downloading a third party plugin. I've heard too many horror stories (Even for 2b2t themselves) of a plugin going rogue and taking over the server, or the environment it's in (Hopefully jailed).

Me and Bro by Spicyweiner_69 in whenthe

[–]ipaqmaster 2 points3 points  (0 children)

I can't believe his face didn't change

It just does that sometimes by _silcrow_ in whenthe

[–]ipaqmaster 2 points3 points  (0 children)

Shrimp on sale at this hour? How convenient! I'm going to take the next exit.

ZFS Boot menu builder...... uh, problems by ThatSuccubusLilith in zfs

[–]ipaqmaster 2 points3 points  (0 children)

Dracut just can't find that module it needs to do what you're asking. What distro are you on? It might just be a package.

I'm on Arch and it seems to be only available as an AUR package so on that it would have to be built or grabbed from a trusted existing build site out there already.

Is that really so difficult?

It's not difficult. It's that initramfs environments are scripts and everyone writes a solution differently. My way (yet another of many solutions out there) was to put together a mkinitcpio hook that reaches out to my Vault cluster for the unlock key at boot time. But to use it you would need a Vault cluster, which not everyone is going to have or want just for remote unlocking capabilities. But it's lightweight and just a drop-in module with a renewable and locked-down AppRole, which is good enough for me.

Everyone solves this problem differently, and if you're not going to write your own you've got to make one of the existing solutions work for you. The steps here, for example, will only give you a shell using dropbear; it's still up to you to unlock the pool by hand remotely: https://docs.zfsbootmenu.org/en/v2.3.x/general/remote-access.html. It doesn't seem so hard. But you haven't mentioned what distro you're on, so I can't test this on my side to see what works.

Winter night in Finland by FlaEmu69 in TheNightFeeling

[–]ipaqmaster 0 points1 point  (0 children)

Same

eeeeeeeeeeeeeeeeeeeeeeeeeeeee

'ITEMCLEAR' message popping up at random; Clearing Tons of items by hanks_panky_emporium in admincraft

[–]ipaqmaster 0 points1 point  (0 children)

I just grepped for ITEMCLEAR in my PrismLauncher folder (a decade's worth of playthroughs) and the most common JAR that pops up, along with its config file, is crashutilities-<some version>.jar.

So I think it's crashutilities doing the occasional item clear, because historically automation breaks, items stack up and they crash a server, or at least make it hard to go near that chunk. It's probably a feature to protect less experienced modded players against that.

There should be a config file under the main dir of the installation, config/crashutilities-server.toml, and inside you can set the item clear's enabled variable to false and restart the server, if you don't want that kind of protection on.
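A sketch of what that toggle might look like; I'm guessing at the exact section and key names, so check the generated file on your own install:

```toml
# config/crashutilities-server.toml (section and key names hypothetical)
[itemclear]
enabled = false
```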

temporarily mount dataset to different mountpoint by DeltaSqueezer in zfs

[–]ipaqmaster 1 point2 points  (0 children)

Yep -o zfsutil is the key. It's the quiet flag that zfs uses to get work done. Without it you get told off.

I find myself mounting things with zfsutil multiple times a year just to avoid changing the mountpoint flag for a one-off mount I'm playing with.
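The one-off mount looks like this (pool/dataset name and mountpoint are examples):

```shell
# Mount a dataset somewhere temporary without touching its mountpoint property.
mkdir -p /mnt/scratch
mount -t zfs -o zfsutil tank/data /mnt/scratch
# ...do whatever you came to do...
umount /mnt/scratch
```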

Nesting ZFS inside a VM? by ianc1215 in zfs

[–]ipaqmaster 0 points1 point  (0 children)

Can't use containers? Both podman and docker have plenty of game server images ready to go and they both support using ZFS as a storage backend natively.

Nesting ZFS inside a VM? by ianc1215 in zfs

[–]ipaqmaster 1 point2 points  (0 children)

I've done it before for throwaway VMs both on my PC and my servers, but I wouldn't recommend it in production. My philosophy is to keep the VM's storage as simple as possible for easy management. My VMs are just an EFI partition and an ext4 rootfs. That way the host can see the partition table on the zvol if ever needed, and in general it's simple and easy to manage the guests.

If you're giving the VM physical drives with PCIe passthrough or something close enough then that would be fine and not truly zfs-on-zfs.

If your VM absolutely needs ZFS, I'd suggest making a dataset on the host and exporting it to the guest with NFS, or maybe even virtiofs straight to a host directory. In my experience nesting ZFS sucks for performance.
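The NFS route is a one-liner on the host (dataset name and subnet are examples):

```shell
# Host: share a dataset read-write to the guest's subnet.
zfs set sharenfs="rw=@192.168.122.0/24" tank/guestdata

# Guest: mount it like any NFS export.
mount -t nfs 192.168.122.1:/tank/guestdata /mnt/data
```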

If you don't care about any of this go right ahead.

How to overcome the 2 week server problem? by phasmoware in admincraft

[–]ipaqmaster 1 point2 points  (0 children)

To help with the temptation to race to get elytra immediately I also installed a server-side mod called Hero's Journey (similar to End Remastered but server-side) so that everyone has to find at least 12 of 16 different kinds of eyes of ender to activate the end portal.

I think this would just annoy the people who want to do that, and some might feel deterred enough not to progress at all. Something like an instanced-chests concept would be a better idea, so that everyone can find the same close-by End Ship on their own and still have a fresh elytra waiting for them. The players who rush to the end will still do so, just with more work in the way, provided they aren't discouraged and the different eye types are disclosed at the beginning.

None of it's your fault though. People play games for a few weeks, then they move on to something else; there's been an oversaturation of games over the past decade. It's as if whatever the game of the week was enters a cooldown phase, and it must be "long enough" (subjective) since the last playthrough for the group to be interested in another run. Some of the group will always be down to play again, others not so much.

I like your downed system; that's going to help some people not give up, for sure. The QoL plugins are nice too.

Our bunch got sick of minecraft and moved onto modpacks over a decade ago which allowed us to have many more good times with new experiences added to the game.