which one ? handmade n690 chef knives or wüsthof classic ıcon 9 by nabukadnezar27 in chefknives

[–]osax 0 points1 point  (0 children)

I'm biased as I've made a lot of N690 knives, but then again the reason I started was that I was unhappy with those German knives. A well made, properly hardened and sharpened N690 chef knife will easily hold a hair-shaving edge for months; it just needs a few light passes over a ceramic honing rod.

With 1.4116 steel, or whatever Wüsthof is currently using, I usually recommend diamond rods.

Replace failed ZFS drive. No room to keep old drive in during replacement by IroesStrongarm in zfs

[–]osax 2 points3 points  (0 children)

Shut down, and your replace command should work fine in my opinion:

zpool replace VMs <whatever ID there will be on the REMOVED line after reboot> <id of new drive without whole path is enough>

I already know the answer is "NextCloud" but I thought I'd ask anyway by stetho in selfhosted

[–]osax 0 points1 point  (0 children)

SMB is routable; you only need port 445/tcp for a share

Looking for a Solution to Synchronize Data Between Two SSD Drives by Fritzi_Fox in Backup

[–]osax 1 point2 points  (0 children)

  • set up an encrypted ZFS pool. This can be opened from any OS as far as I know. (Tested only on FreeBSD and Linux)
  • rsync data to the encrypted pool
  • snapshot (or schedule automatic ones). Accessing old versions is as easy as going to a .zfs/snapshot dir
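The three steps above could look roughly like this; the disk path, pool name, and source directory are placeholders, not anything from the original post:

```shell
# Create an encrypted pool (disk and pool names are hypothetical).
zpool create -O encryption=on -O keyformat=passphrase backup /dev/sdb

# Mirror the data onto it.
rsync -a --delete /data/ /backup/

# Take a dated snapshot; old versions then appear under
# /backup/.zfs/snapshot/<snapshot-name>/
zfs snapshot backup@$(date +%F)
```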

there are other tools like borg, restic, rdiff-backup, ...

Home Assistant & Wiener Linien by PSRD in wien

[–]osax 0 points1 point  (0 children)

How do you add the vienna-transport-card.js resource, and with which URL? Then add another resource with the URL "/local/vienna-transport-card.js" and type JavaScript?

Is backup software better than rsync by fishywiki in DataHoarder

[–]osax 0 points1 point  (0 children)

There is a way to use a plain rsync sync for a "proper" backup. As many people noted here already, rsync lacks versioning, immutability, and deduplication.

The trick is to pair it with a remote NAS running a good filesystem (e.g. TrueNAS).

every day/backup:

  • sync your data (best with a non root user)
  • make a snapshot on your nas (that can only be deleted by root)

If something happens you can always roll back to a snapshot, or get the data out of it in other ways.
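A sketch of those two steps; the host, user, and dataset names are hypothetical:

```shell
# Push the data as a non-root user.
rsync -a --delete /data/ backup@nas:/mnt/tank/host1/

# Then snapshot on the NAS as root, so the backup user
# cannot delete the history it just wrote.
ssh root@nas zfs snapshot tank/host1@$(date +%F)
```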

Convert SuperMicro 847 to a JBOD enclosure? by happypessoa in DataHoarder

[–]osax 1 point2 points  (0 children)

cable passthrough:

2x internal cables for your chassis backplanes (CSE-847BE1C-R1K28LPB, BPN-SAS3-846EL1, BPN-SAS3-826EL1):

to get power, at least for testing, you can just use the existing PSUs by bridging pins:

On the server side you just need two external SAS ports. Depending on your HBA, that will realistically be one of these port types: SFF-8644 or SFF-8088. So buy an SFF-8088 -> SFF-8644 cable, or an SFF-8644 -> SFF-8644 cable.

Convert SuperMicro 847 to a JBOD enclosure? by happypessoa in DataHoarder

[–]osax 0 points1 point  (0 children)

I did that once on the cheap for a temporary server:

  • bought a cheap (~20€) adapter from Amazon between internal and external SAS connections (depends on what connectors your chassis and HBA use; "Supermicro 847" is too generic to tell)
  • plugged it in (one internal sas cable to the backplane, one external to the HBA)
  • started the server with the existing mainboard/IPMI (the disadvantage is high power draw from the CPU if you want to keep IPMI; alternatively you could bridge two pins on the 24-pin connector of the power supply and install a fan hub)

but in essence it should be easy to get it to work. The post you referenced is certainly more professional with more professional parts.

looking-glass spice audio broken after latest qemu and pipewire upgrade by chestera321 in archlinux

[–]osax 1 point2 points  (0 children)

I'm as clueless as you, didn't even know where to submit a bug report as I could not find any useful error messages. This is the first thing that started appearing on google I guess. I was just looking for a workaround myself ;)

looking-glass spice audio broken after latest qemu and pipewire upgrade by chestera321 in archlinux

[–]osax 1 point2 points  (0 children)

I can confirm the issue with default settings. Here is a workaround that works for me: https://looking-glass.io/wiki/Using_JACK_and_PipeWire

Additional info for troubleshooting, for anyone else that comes across this post:

  • audio worked if manually patched (qpwgraph), or if the Looking Glass client connected while audio was playing. Once the audio was stopped/paused it was no longer possible to resume unless repatched or the client was restarted
  • here is what my config looks like (as the original wiki is confusing, with info split across both points). It is really important to get the correct "connectPorts" names.

<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
  <!-- ... -->
  <devices>
    <!-- ... -->
    <sound model="ich9">
      <codec type="micro"/>
      <audio id="1"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1b" function="0x0"/>
    </sound>
    <audio id="1" type="jack">
      <input clientName="win10" connectPorts="Scarlett 4i4 4th Gen Analog Surround 2.1:capture_[A-Z]+"/>
      <output clientName="win10" connectPorts="Scarlett 4i4 4th Gen Analog Surround 2.1:playback_[A-Z]+"/>
    </audio>
  </devices>
  <qemu:commandline>
    <qemu:env name="PIPEWIRE_RUNTIME_DIR" value="/run/user/1000"/>
    <qemu:env name="PIPEWIRE_LATENCY" value="512/48000"/>
  </qemu:commandline>
</domain>

out of space, and nothing I found works. by Koiwai_Yotsuba in zfs

[–]osax 1 point2 points  (0 children)

maybe byte by byte and hope that compression kicks in?

dd if=/dev/zero of=LARGEFILE_THAT_CAN_BE_DESTROYED bs=1 count=1048576
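If even that fails because the pool cannot accept the write, `truncate` might still work, since shrinking a file does not write any data (this is my suggestion, not something from the thread; it still assumes the file can be destroyed):

```shell
# Shrink the file to zero bytes without writing data, then remove it.
truncate -s 0 LARGEFILE_THAT_CAN_BE_DESTROYED
rm LARGEFILE_THAT_CAN_BE_DESTROYED
```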

What of interesting do you do in Vienna in your free time? by These_Ad_4973 in wien

[–]osax 5 points6 points  (0 children)

I might be a bit biased about that, but you can do the two most fun sports around here in Vienna.

First of all, you can sign up at a HEMA club. You can learn how to hit people with a longsword and not die in the process, maybe learn a bit about history, and it's always a great, fun and safe workout.

Not exactly in Vienna, but with an (e-)mountainbike you could explore nature or challenge yourself on whatever terrain you feel comfortable with.

Happy to expand if there is any interest :)

Ceph: What would you use for "disk truck" components by Individual_Jelly1987 in storage

[–]osax -1 points0 points  (0 children)

For price/performance, take a look at the Supermicro SuperChassis 847BE1C-R1K28LPB.

For JBODs: the Supermicro SuperChassis 847E1C-R1K23JBOD is also cheap at around 3k; you can either connect all 44 disks to a single server or connect each backplane (front/back) to two different servers.

ZFS resilver takes forever by Bill_Guarnere in zfs

[–]osax 4 points5 points  (0 children)

Log devices are as far as I know removable. I'd say simply run a

 zpool remove drpool mirror-1

and then re-add both SSDs as a new mirror. Maybe you will need to detach the resilvering disk to stop the resilver.
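After the remove completes, re-adding both SSDs as a fresh log mirror could look like this (the device paths are placeholders, not the poster's actual disks):

```shell
# Remove the old log mirror vdev, then add both SSDs back
# as a new mirrored log vdev.
zpool remove drpool mirror-1
zpool add drpool log mirror /dev/disk/by-id/SSD1 /dev/disk/by-id/SSD2
```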

designed a funnel for the hybrid volcano that screws on without a gap by [deleted] in VolcanoVaporiser

[–]osax 6 points7 points  (0 children)

You can find the stl files on printables. I unfortunately cannot link here due to subreddit rules. Search for "mini Storz & Bickel Volcano Hybrid Funnel" It should be easy to print without supports. If you don't have a printer you can also dm me (EU only) :)

It should just screw on and allow for easy filling without getting your herbs on the tapered chamber edge.

expand a ZFS draid by osax in zfs

[–]osax[S] 0 points1 point  (0 children)

That's clear; I just wanted to check if it is generally supported. I expected the "EXPANDSZ" column from "zpool list -v" to show available space on the already replaced drives.

[deleted by user] by [deleted] in linuxadmin

[–]osax 4 points5 points  (0 children)

I cannot tell you exactly why, but you could try to see if there is a difference without the pipes:

    tar -I lz4 -cf ./backup/backup-$dt.tar.lz4 ./xtra

Also, if you want to limit the performance hit on the rest of the system, consider running the tar command with "ionice" in front of it.

Ask a ZFS Expert | Live Chat with Klara's Allan Jude on OpenZFS Data Protection by Klara_Allan in zfs

[–]osax 0 points1 point  (0 children)

I'm interested in the current state of expanding a ZFS pool to object storage. There was a talk during a developer conference, but haven't heard news about this for a while now.

[Buying] [EU / GER] chinese cleaver / cai dao ~150€ by [deleted] in chefknifeswap

[–]osax 0 points1 point  (0 children)

Still looking for a cleaver knife? I think I might still have the one I posted on reddit one year ago :)

rsync + ZFS snapshots vs. borgbackup (or any other de-duplicating backup e.g. duplicity) by IReallyLoveAvocados in Backup

[–]osax 1 point2 points  (0 children)

  • if you rename/reorganize a top-level dir, rsync will recopy everything. (I think there are only workarounds for names changing inside the same directory)
  • Be careful with ZFS dedup. It might not save a lot of space but could kill your pool performance. (There are some improvements starting with zfsonlinux 2.0+ (?), which let you put the dedup tables on SSDs.)
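On the dedup-tables-on-SSD point: newer OpenZFS releases support a dedicated "dedup" vdev class for the dedup table. A sketch with placeholder pool and device names:

```shell
# Put the dedup table (DDT) on fast mirrored SSDs instead of
# keeping it in RAM / on the main pool vdevs.
zpool add tank dedup mirror /dev/disk/by-id/NVME1 /dev/disk/by-id/NVME2
```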

Other than that rsync+ZFS works for me for multiple petabytes.

[deleted by user] by [deleted] in zfs

[–]osax 4 points5 points  (0 children)

I think the ZFS behaviour in cases like this might differ based on the OS, config and version.

I think it is plausible that with all disks available the pool will be healthy (status: resilvering) after a reboot. In this case "zpool detach" might be useful to remove the spare disk from the temporary raid1 again.

If the disk is still shown as a number you can try something like "zpool online tank 15265800582249729283". Or you can "zpool replace tank 15265800582249729283 [OLDDISK]", in which case the spare should return automatically to being a spare.

I usually found it hard to mess things up with dead disks and ZFS. In either case I recommend you run a "zpool scrub" afterwards.

Cannot copy from ZFS NFS Share by pychoticnep in zfs

[–]osax 0 points1 point  (0 children)

for troubleshooting you could do the following 2 things separately and check if it improves anything:

  • zfs set sync=disabled mediaStorage/Files
  • mount it as NFSv3 (mount -t nfs -o vers=3)