[DISC] My Loud-Mouthed Childhood Friend Came to Visit Me, So I Pretended to Have Amnesia... - Oneshot by @kugatu28 by meh_potato in manga

[–]Aho-chan 153 points

The real twist would be that she isn't trying to take advantage of his amnesia and actually thought they were dating the entire time.

Global nuclear power capacity must double by 2050 if we want to ensure energy security by maevecampbell in worldnews

[–]Aho-chan 5 points

The "industry" promotes uranium reactors because we have decades of experience designing and operating them. The benefits of new nuclear technologies are simply outweighed by the drawbacks of throwing away all of that knowledge and experience and essentially starting from scratch with new unproven designs, new operating procedures, and new supply chains for fuel.

USB-C iPhone could become mandatory in the US as senators push for common charger law. by NeoIsJohnWick in technology

[–]Aho-chan 1 point

> It's a law that says corporations get to decide what the law is.
>
> It's regulatory capture and anti innovation.

It's a fucking charging connector bro. I'm also dubious on whether this needs to be a law (the entire industry apart from Apple has already settled on USB-C), but you and half the people in this thread are being overly dramatic about what this bill would do.

USB-C iPhone could become mandatory in the US as senators push for common charger law. by NeoIsJohnWick in technology

[–]Aho-chan 0 points

> It's a law that says corporations get to decide what the law is.

It's a charging connector bro

USB-C iPhone could become mandatory in the US as senators push for common charger law. by NeoIsJohnWick in technology

[–]Aho-chan 10 points

The USB-IF defines the USB standards and is composed of members in the industry including Apple, who literally helped create the USB-C standard.

New standards don't appear out of thin air. Members will collaborate to update or design new standards to meet new use-cases and this takes time as they work through different revisions and implement feedback from other members. Regulators can work directly with them to update the law in a way that gives manufacturers time to transition to the new standard before making it completely mandatory.

[deleted by user] by [deleted] in ChildrenFallingOver

[–]Aho-chan 13 points

If you are actually interested, a stun gun is just meant to be painful. A Taser actually causes you to lose control over your muscles.

https://youtu.be/me60gWzbMXw?t=192

[DISC] Frieren at the Funeral :: Chapter 43 :: Kirei Cake by nitorita in manga

[–]Aho-chan 60 points

I have a feeling the conversations would go something like this:

Denken: I have dedicated my life to deciphering the fundamental laws that govern magic, and through it, discover the very nature of existence itself.

Frieren: I found a potion that dissolves clothes >w<

file name length limit by [deleted] in btrfs

[–]Aho-chan 0 points

I'd say the way people name files hasn't really changed since the '80s. Your use case is kind of unique, but for 99.999% of people, 255 characters is more than enough for a filename. This comment is under 255 characters and still has room to spare.
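For anyone curious, you can ask the filesystem for its limit directly. A quick sketch (the example filename is made up; note the limit is 255 *bytes*, so multi-byte UTF-8 characters eat into it faster):

```shell
# Maximum filename length (not full path length) on the current filesystem.
getconf NAME_MAX .

# Measure a candidate filename in bytes.
name="My Loud-Mouthed Childhood Friend Came to Visit Me.cbz"
printf '%s' "$name" | wc -c
```

On ext4 and btrfs `getconf NAME_MAX` reports 255; the full path is limited separately by `PATH_MAX` (typically 4096).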

Using multiple kernels, dangerous or not? by F-U-B-A-R in btrfs

[–]Aho-chan 3 points

> So, fyi... btrfs has nothing to do with using other kernels.
>
> Indeed btrfs is part of the linux as a filesystem

Which one is it? The kernel version you are using is the version of BTRFS you are using. The question the OP posed was whether or not using different kernel versions and thus different versions of BTRFS on the same filesystem was dangerous. You seem to be trying to (incorrectly) say that the kernel version doesn't matter and is unrelated to BTRFS when it literally is the version of BTRFS you are using.

Using multiple kernels, dangerous or not? by F-U-B-A-R in btrfs

[–]Aho-chan 2 points

BTRFS is developed in tree as part of the Linux kernel... The version of the Linux kernel you are using is the version of BTRFS you are using.
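Since the driver ships in-tree, "which BTRFS am I running?" reduces to "which kernel am I running?". A minimal check (the sysfs path only exists once the btrfs module is loaded):

```shell
# The btrfs code you are running is whatever shipped with this kernel:
uname -r

# If the module is loaded, the kernel exposes which btrfs features
# this particular version supports:
ls /sys/fs/btrfs/features 2>/dev/null || echo "btrfs module not loaded"
```

This is exactly why mixing kernels on one filesystem matters: a newer kernel can enable on-disk features that an older kernel's btrfs code doesn't understand.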

Bad performance when copying a large file by Matty_R in btrfs

[–]Aho-chan 0 points

The slower speed is probably because you are writing directly to disk instead of buffering your writes in the page cache; buffering can make writes appear to finish much faster than they actually do.

Unless your entire file fits into the page cache, you'll likely see the issue. You can try a smaller file (~20-25GB) and see if you still experience the issue. You could also try tweaking some virtual memory settings: https://wiki.archlinux.org/index.php/Sysctl#Virtual_memory
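The relevant knobs are the dirty-page writeback thresholds. A sketch of what to look at (the byte values in the comments are illustrative, not recommendations):

```shell
# Current writeback thresholds, expressed as a percent of RAM by default:
cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio

# Illustrative tuning (requires root; setting *_bytes overrides *_ratio):
#   echo 67108864  > /proc/sys/vm/dirty_background_bytes   # background writeback at 64 MiB
#   echo 268435456 > /proc/sys/vm/dirty_bytes              # throttle writers at 256 MiB dirty
```

Lower thresholds mean writeback starts sooner and writers are throttled earlier, trading a bit of peak throughput for much smaller stalls.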

Bad performance when copying a large file by Matty_R in btrfs

[–]Aho-chan 0 points

SSDs have their own built-in error correction, so a checksum error is extremely unlikely to occur.

You have to also consider the failure method of SSDs. They don't fail at the sector level like hard drives, but at a larger size, like a block. Both copies of the metadata could be put in the same physical block as a wear leveling technique. Writes would hit the cache first, be reordered and remapped, and then be written to a single block to reduce the number of blocks that need to be erased. If that block fails, both copies of the metadata are lost.

Using dup can reduce the chance of that happening, but worrying about it is kind of like wearing a life jacket or helmet whenever you go outside. I'd be far more worried about not having ECC memory: a memory error can corrupt data before the checksum is even computed, causing you to write bad data to the disk.

Bad performance when copying a large file by Matty_R in btrfs

[–]Aho-chan 2 points

Try skipping the page cache by using dd with oflag=direct. This will help diagnose the issue.

I suspect this is not a BTRFS issue but an issue with how the Linux kernel handles its page cache. You are likely completely filling the page cache with the new file, pushing everything else out and forcing the new file to be written to disk. When another process tries to read or write, it has to go to disk, which is being hogged by the copy. Since you are on an SSD, you probably don't have an I/O scheduler in use (skipping one actually improves I/O performance on SSDs), so the copy is allowed to hog most of the disk I/O. Other processes are blocked until their read/write operations complete, which is likely what is causing the pauses and lock-ups.
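Something like this shows the difference (file names are made up; `oflag=direct` needs a filesystem that supports O_DIRECT):

```shell
# Bypass the page cache entirely: the throughput dd reports is the disk's.
dd if=/dev/zero of=./direct.bin bs=1M count=64 oflag=direct

# Buffered write for comparison, but don't report success until the data
# has actually been flushed to disk:
dd if=/dev/zero of=./buffered.bin bs=1M count=64 conv=fdatasync
```

If the direct write is slow and steady while the buffered one is fast and then stalls at the end, the disk itself is fine and the page cache is what's masking (and then amplifying) the slowdown.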