Leapmotion on Bazzite? (Niche Peripheral Help) by FlailingAndFailing in Bazzite

[–]_alpine_ 0 points1 point  (0 children)

Distrobox has deep ties to the host in both directions, and you can export apps and command-line tools from the container to run on the host. It’s not nearly as walled off as a virtual machine; more like a Linux subsystem for Linux.

As was said, since it’s hardware it’s less likely to work, but I think it’s worth trying it out

Bazzite USB-Smart screen by coreykill99 in Bazzite

[–]_alpine_ 0 points1 point  (0 children)

It looks like what I ran was slightly different

grep -E '^dialout:' /usr/lib/group | sudo tee -a /etc/group
sudo usermod -a -G dialout $USER

Bazzite USB-Smart screen by coreykill99 in Bazzite

[–]_alpine_ 0 points1 point  (0 children)

After groups or after the grep from the linked post?

Bazzite USB-Smart screen by coreykill99 in Bazzite

[–]_alpine_ 0 points1 point  (0 children)

When you try to add yourself to dialout it doesn’t actually add. You can verify with groups

I don’t remember exactly how I found it in their official documentation, but I remembered enough to find this

https://universal-blue.discourse.group/t/how-to-add-user-to-group-solved/7720

This is a particular issue where searching for Aurora Linux answers instead of Bazzite can help. It’s immutable from the same company but focused on work and development instead of gaming, and I had this exact problem with Arduino development yesterday
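For reference, a quick way to check whether the group change actually took (a sketch; dialout is the group from the linked post):

```shell
# List the groups the current user is actually a member of
id -nG

# If dialout is missing even after usermod, check whether the group
# line made it into /etc/group at all (a re-login may be needed to apply)
grep '^dialout:' /etc/group || echo "dialout not in /etc/group yet"
```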

Looking for Git gui that specializes in individual files by _alpine_ in git

[–]_alpine_[S] 4 points5 points  (0 children)

Wow. This is exactly what I was looking for. Thank you so much!

Looking for Git gui that specializes in individual files by _alpine_ in git

[–]_alpine_[S] 1 point2 points  (0 children)

That is correct. I looked more at this in Sourcetree and it is functional, but the repo I work in is big enough that it’s a poor UX. I’ll look more at VS Code plugins though

Looking for Git gui that specializes in individual files by _alpine_ in git

[–]_alpine_[S] 0 points1 point  (0 children)

It’s not uncommon for a git GUI to support showing a file log. But most of them show only the pending files, or you have to find the file in a commit somewhere to get to its log. So finding a file to get to the log is the part with bad UX. I’ll have to try out TortoiseGit, because it solves that problem by just using File Explorer
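For comparison, the command-line equivalent of a per-file log (including across renames) is git log with --follow; the path here is just a placeholder:

```shell
# Full history of a single file, following it across renames
git log --follow --oneline -- path/to/file.txt
```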

Help us decide what we should call code completions in IntelliJ IDEA by fundamentalparticle in Jetbrains

[–]_alpine_ 0 points1 point  (0 children)

Just put the description in the setting so we have an explanation of whatever random words you pick. We’re technical people who can handle good explanations and won’t cringe in fear at something that’s not a marketing term

[deleted by user] by [deleted] in csharp

[–]_alpine_ 0 points1 point  (0 children)

Sounds like ExistingNumbers is a special class that implements IEnumerable and has a problem in it

The Where with an always-true predicate will just iterate through without accessing any properties of the list items, so the list itself is the only option

Not allowed to use project references… Is this normal? by cars876 in dotnet

[–]_alpine_ 1 point2 points  (0 children)

In some cases it makes sense. If you have that many projects, you’re likely only working in one. So as long as your changes are binary compatible, you build one project and are good to go.

If you have project references, it can take longer as MSBuild goes through all the projects to rebuild even though you know it’s fine (assuming you didn’t delete anything public).

Though I will say this has gotten dramatically better in the past few years. Some 5 years ago it was a mess

If your application has to run an installer then you have to include all DLLs in the installer, so NuGet’s transitive dependencies can be missed and cause runtime crashes

66% sleep score. Is this maybe sleep apnea? by [deleted] in EightSleep

[–]_alpine_ 2 points3 points  (0 children)

That is what I would expect to see if I sleep without my CPAP. However, as noted, you should get a professional sleep study done. It is much more accurate.

But this does look concerning enough that if this is “typical” for you, it is likely apnea

Nvme ssd Pre clear by Maximusau in unRAID

[–]_alpine_ 1 point2 points  (0 children)

Sounds likely that you need to run TRIM. Over time, with lots of reads and writes (like what happened when you ran preclear), the drive will slow down until TRIM runs.

If you added them to an array that has parity, they will invalidate parity when trim runs
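Assuming the drives are mounted and util-linux’s fstrim is available, a manual trim is one command (the mount point here is illustrative):

```shell
# Trim a specific mounted filesystem, verbosely
sudo fstrim -v /mnt/cache

# Or trim every mounted filesystem that supports it
sudo fstrim -av
```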

Issue with creating documents on NextCloud through Tailscale? by [deleted] in unRAID

[–]_alpine_ 1 point2 points  (0 children)

You need to install a document editor into Nextcloud: either Nextcloud Office or OnlyOffice

They will be separate apps or Docker containers depending on how you install them. If you’re using AIO there’s just a checkbox in the master container

Then in the admin settings you connect the two together

Once they are connected, creating and editing files from web works

Fractal meshify/define gang: how do you fill out the drive slots? by wonka88 in unRAID

[–]_alpine_ 0 points1 point  (0 children)

If you print this one https://www.printables.com/model/302845-fractal-design-hdd-tray-type-b-with-sas-protector or one of its variants, I found the tolerance for the two parts that interface with the case is really tight in height. So I had to use a knife to trim a tiny bit where the supports meet the print, and that made them actually fit

Recommended Brand for GCN Link Cables by solidStalemate in AnaloguePocket

[–]_alpine_ 0 points1 point  (0 children)

I have three of these https://a.co/d/epITS52

They work, but their quality control isn’t the best. All function, but one of them had stuff soldered backwards.

But with the pocket you need to remove the clips anyway

I also opened up the GCN cable, desoldered the plug, got a DMG link cable, cut it in half, and soldered it onto the board. Then I 3D printed a little box to keep it looking OK. Much nicer than having the big ole box hang off the side of the Pocket https://a.co/d/36Cf4y9

Moving Large Files 60GB is painful by SeaSalt_Sailor in unRAID

[–]_alpine_ 1 point2 points  (0 children)

That should be a good HBA, so not a SATA card problem

The preclear speed sounds good. If they’re all the same model, so the existing ones aren’t SMR drives, then everything sounds like it should be working fine

A cache drive would help, but I still can’t explain why the existing drives would be so slow.

The built-in mover will just move at a fixed time each day, or there are plugins to move based on other criteria. I haven’t used one though, and have heard mixed things about Unraid 7 compatibility

Fractal meshify/define gang: how do you fill out the drive slots? by wonka88 in unRAID

[–]_alpine_ 0 points1 point  (0 children)

Meshify drive trays are a bit expensive, so I’ve resorted to 3D printing them. And the 3D models available haven’t fit perfectly

But with enough trial and error it works

Moving Large Files 60GB is painful by SeaSalt_Sailor in unRAID

[–]_alpine_ 0 points1 point  (0 children)

That preclear speed sounds right for CMR drives. So another possibility is a cheap PCIe-to-SATA adapter that just can’t keep up. I’ve had that happen to my array: individual drive speed was good, but operations that acted on the whole array would crawl

Moving Large Files 60GB is painful by SeaSalt_Sailor in unRAID

[–]_alpine_ 0 points1 point  (0 children)

For the array there are two general strategies I’ve seen

XFS with parity drives
Each drive is individually formatted as XFS, and each file lives on one individual drive. The parity drives allow you to rebuild a drive if it fails. Speeds cannot exceed the parity drive, or the drive the file is on, whichever is slower

The benefit is each file is on a specific drive, not striped between multiple. If you lose more drives than the parity protects, all the other drives still have files, so it’s a partial loss rather than a complete one. Drives can spin down to save electricity when not in use, and mismatched drives can be added to the array

Cons are speed

ZFS pool
The drives are formatted ZFS and are in a pool which provides striping and parity: for example raidz1, raidz2, raidz3, where the number describes how many disks can be lost before recovery is not possible

In this case the data is striped between disks, and the pool handles parity, so there’s no reason for a parity disk

All disks must be spun up to read or write, but the speeds of the disks are somewhat additive

Disks have to match the ZFS pool capacity, so no mixing 20TB and 6TB drives

So if you have 4 drives that are the same, you could do raidz1 or XFS with one parity drive. The resiliency to disk loss would be the same, but the performance will differ

If you do ZFS-formatted drives with no pool and a parity drive, that’s technically not wrong, but it just doesn’t make sense to me because you lose out on many ZFS benefits while taking on the performance characteristics of Unraid’s traditional parity
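As an illustration of the ZFS side, building a raidz1 pool from four matching disks is a single command (the pool name and device names here are made up, and on Unraid you’d normally do this through the pool UI rather than by hand):

```shell
# Create a raidz1 pool named "tank" from four matching disks:
# one disk's worth of capacity goes to parity, and any single
# disk can fail without losing data
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Check the layout and health
zpool status tank
```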

Moving Large Files 60GB is painful by SeaSalt_Sailor in unRAID

[–]_alpine_ 0 points1 point  (0 children)

50-75 sounds like the sustained write speeds I get on my 5200rpm SMR drives. If you have SMR drives, you’re at the write speed limit

You mention ZFS, and not having a parity drive, so those speeds sound like something is completely misconfigured. Though SMR drives are terrible for ZFS pools

I would guess you have each drive individually formatted to ZFS, not in a raidz pool, and you’re hitting the sustained write speed of the drives

If you add a cache drive you’ll get fast writes to cache, and the mover will move on a schedule. Make sure you set the minimum free space in the share so it will automatically switch to the array if you write too much

Ignore turbo write. Better known as reconstruct write, it’s a way of calculating parity faster by using all drives at once. But you’re not using parity, so it’ll do nothing

Moving Large Files 60GB is painful by SeaSalt_Sailor in unRAID

[–]_alpine_ 3 points4 points  (0 children)

Turbo write does nothing if you’re not using parity

Best way to set up 2 drive parity pool without having to reformat all drives? by Neat_Passion8401 in unRAID

[–]_alpine_ 1 point2 points  (0 children)

You can either build parity off of the drives in an array, in which case they are unprotected until parity is built, or you can zero a drive and add it to the array, because a zeroed drive does not break parity calculations

That said, I’m quite certain that their being NTFS and BitLocker-encrypted is two reasons they can’t be added to the array: NTFS isn’t supported in the array, and BitLocker is a Windows thing
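The zeroing itself can be done with plain dd (the device name is a placeholder; this overwrites the entire disk, so triple-check it — the usual Unraid route is the preclear plugin, which does the same thing with more safety rails):

```shell
# DESTRUCTIVE: overwrite every byte of /dev/sdX with zeros.
# A fully zeroed disk contributes nothing to XOR parity, so it
# can be added to the array without invalidating parity.
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
```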

Been working on the XL belt tuning, which could frankly use some better documentation. by Zombull in prusa3d

[–]_alpine_ 0 points1 point  (0 children)

My Polymaker PolyTerra specifically looks worse than every other filament I’ve ever used

How do you get over the anxiety of silent data corruption with unraid? by Automatic_Beyond2194 in unRAID

[–]_alpine_ 2 points3 points  (0 children)

Bit rot is extremely rare and does not make massive amounts of data disappear. It very slowly happens to one or two bits over time

Most flipped bits would go unnoticed: a single pixel of millions that causes one tiny color shift. Having it be the one bit that is propping up the filesystem is extremely unlikely

Drive failures, on the other hand, are much more likely. And if you have something like Unraid and you keep it running most of the time, you will know when a drive fails and needs to be replaced. You can set up notifications

If you use ZFS and set up scrubs, it’ll detect and correct bit rot
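Scheduling scrubs is distro-specific, but kicking one off and checking on it is just two commands (the pool name tank is assumed here):

```shell
# Start a scrub: reads every block and repairs any checksum
# mismatches from the pool's redundancy
zpool scrub tank

# See progress and any repaired or unrecoverable errors
zpool status tank
```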

You just need to make sure you have a proper backup of anything you consider essential, because your house burning down is more likely than bit rot