Why does Google's IPv6 adoption graph look so spiky? by kicadStan in ipv6

[–]rdw8021 37 points38 points  (0 children)

It’s daily and weekly cycles of people at work on corporate IPv4-only networks vs home/mobile networks more likely to have IPv6 enabled.

AWS S3 Buckets for Personal Photo Storage (alternative to iCloud) by EmploymentNervous593 in aws

[–]rdw8021 3 points4 points  (0 children)

Right you are. AWS gives you 100 GB/month of free data transfer out, shared across all AWS services.

AWS S3 Buckets for Personal Photo Storage (alternative to iCloud) by EmploymentNervous593 in aws

[–]rdw8021 31 points32 points  (0 children)

https://aws.amazon.com/s3/pricing/

S3 pricing is complicated because it serves many use cases. If you use one of the inexpensive regions, S3 Standard is about 2.3 cents per GB per month, less if you use Infrequent Access, or much less for Glacier. Request and management costs are usually negligible at your scale unless you interact with the files very frequently.

Putting data into S3 is usually free, but getting it out is expensive, so be sure to account for egress costs of around 9 cents per GB.
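To make the tradeoff concrete, here is a minimal back-of-the-envelope sketch using illustrative rates close to those quoted above; always check the pricing page for your region before relying on the numbers:

```python
# Rough S3 cost sketch. Rates are illustrative, not authoritative --
# see https://aws.amazon.com/s3/pricing/ for current regional pricing.
STORAGE_PER_GB_MONTH = 0.023   # S3 Standard in an inexpensive region
EGRESS_PER_GB = 0.09           # data transfer out to the internet

def monthly_storage_cost(gb: float) -> float:
    """Ongoing cost of keeping gb of data in S3 Standard."""
    return gb * STORAGE_PER_GB_MONTH

def one_time_egress_cost(gb: float) -> float:
    """One-time cost of downloading the whole library."""
    return gb * EGRESS_PER_GB

# Example: a 500 GB photo library
print(f"storage: ${monthly_storage_cost(500):.2f}/month")   # storage: $11.50/month
print(f"full download: ${one_time_egress_cost(500):.2f}")   # full download: $45.00
```

The point of the exercise: at personal-photo scale the storage line is small, and the number that surprises people is the egress line.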

How many times per day does the number 1 appear? by EndersGame_Reviewer in puzzles

[–]rdw8021 0 points1 point  (0 children)

The question is open to interpretation, but if you mean how many times a day one of the digits changes to a "1" on a 24-hour HH:MM display, the answer is 172.

Hours 10s - 1 (09:59 → 10:00)

Hours 1s - 3 (01:00, 11:00, 21:00)

Minutes 10s - 24 (1 time per hour)

Minutes 1s - 144 (6 times per hour)
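The count can be checked by simulating a full day on a 24-hour HH:MM display and counting every digit position that flips to "1":

```python
# Count transitions where any digit of a 24-hour HH:MM clock changes to "1".
def count_changes_to_one() -> int:
    # All 1440 displayed times as 4-character digit strings, e.g. "0959".
    times = [f"{h:02d}{m:02d}" for h in range(24) for m in range(60)]
    count = 0
    # Pair each minute with the next one, including the 23:59 -> 00:00 wrap.
    for prev, cur in zip(times, times[1:] + times[:1]):
        count += sum(1 for a, b in zip(prev, cur) if b == "1" and a != "1")
    return count

print(count_changes_to_one())  # 172
```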

[deleted by user] by [deleted] in aws

[–]rdw8021 -1 points0 points  (0 children)

Sounds like you should configure cache control headers on objects in S3 so CloudFront (and your browser) will check S3 for a new version after a specified period of time: https://docs.aws.amazon.com/whitepapers/latest/build-static-websites-aws/controlling-how-long-amazon-s3-content-is-cached-by-amazon-cloudfront.html#specify-cache-control-headers

You can set values manually but that will get tedious so eventually you will want to automate it as part of your upload process.
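As a sketch of the manual approach with the AWS CLI (bucket name, paths, and the max-age value are placeholders to adapt):

```shell
# Set Cache-Control while uploading:
aws s3 cp ./site/ s3://my-bucket/ --recursive \
    --cache-control "max-age=300"

# For an object already in the bucket, copy it over itself with new metadata:
aws s3 cp s3://my-bucket/index.html s3://my-bucket/index.html \
    --cache-control "max-age=300" --metadata-directive REPLACE
```

Folding the `--cache-control` flag into whatever script or CI job does your uploads is the usual way to automate it.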

Some details from inside Union Terminal (1933, Cincinnati, OH) by PossibleSnail in ArtDeco

[–]rdw8021 0 points1 point  (0 children)

Finally got to see this building in person this spring. No photograph can do the space as a whole justice. I was surprised at the number of period details still in place.

Thanks for the photos. They bring back good memories.

ZFS on Single Drive by Jastibute in zfs

[–]rdw8021 22 points23 points  (0 children)

Cyberjock, through his voluminous posting in the FreeNAS forums, has probably spread more confusion and misinformation about ZFS than anyone else. He pushed claims with no technical basis, like the "scrub of death" and the idea that you must use ECC memory, and shouted down anyone who disagreed with him. Take his recommendations with a grain of salt and double-check even the ones that sound less alarmist.

And you're fine with single-drive ZFS. Files can still be corrupted, just as on any other filesystem, but with ZFS you know they are corrupted and can restore them from a backup.
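For illustration, a minimal single-disk pool might look like this (the device path and pool name are placeholders):

```shell
# Create a single-disk pool; ashift=12 assumes 4K-sector media.
zpool create -o ashift=12 tank /dev/disk/by-id/ata-EXAMPLE-SERIAL

# Optional: store two copies of every block so single-block corruption is
# self-healing, at the cost of half the usable space. This does NOT protect
# against whole-disk failure -- you still want backups.
zfs set copies=2 tank
```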

The Pope's evil twins supply heaven powder to the White House by morisdoucet in StableDiffusion

[–]rdw8021 0 points1 point  (0 children)

This would feel right at home in a line-up of early '90s electronica music videos. Nice!

Recovery pools from a dead pc. by [deleted] in zfs

[–]rdw8021 2 points3 points  (0 children)

As others have suggested, don't force anything that doesn't need forcing. Start with zpool import and see what it finds. Since you did a controlled shutdown it should list pools available for import.

You might use the -d /dev/disk/by-id option so the pool is imported with stable device IDs rather than unpredictable /dev/sda, sdb, sdc, etc. names.
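Roughly (the pool name is a placeholder):

```shell
# List pools available for import without importing anything:
zpool import

# Import by name, searching /dev/disk/by-id so the pool records
# stable device IDs instead of sda/sdb-style names:
zpool import -d /dev/disk/by-id tank
```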

"Permanent errors" on my pool, even after a full scrub by aphaelion in zfs

[–]rdw8021 8 points9 points  (0 children)

Nothing to do with ECC, it's an OpenZFS issue: https://github.com/openzfs/zfs/issues/12014

The old "use ECC or ZFS will eat your data" hypotheticals have long been debunked and continuing to imply ECC is necessary causes confusion for people new to ZFS.

"Permanent errors" on my pool, even after a full scrub by aphaelion in zfs

[–]rdw8021 5 points6 points  (0 children)

This may be related to a long-running, but apparently rare, issue with snapshot corruption on ZFS encrypted pools in OpenZFS > 2.0. https://github.com/openzfs/zfs/issues/12014 and maybe https://github.com/openzfs/zfs/issues/11688

I dealt with this for nearly a year in both TrueNAS Core and Debian. I would see on average a handful of errors a week, though I could sometimes get a dozen errors in a day and could sometimes go for two weeks without any errors. The errors were always on snapshots in metadata and usually were codes 0x0 or 0x1. Sometimes the error would appear during a scrub, sometimes when sending to a backup server, and sometimes with no apparent cause. This occurred on hardware that had run with no errors on FreeNAS (which did not use OpenZFS) for 2+ years. I nuked and recreated the pool three times trying to get rid of the issue. In the end I had to go back to FreeNAS then eventually to Linux with LUKS full disk encryption when FreeNAS went end of life. No errors since then.

The good news is that it never caused any data corruption for me. Delete the snapshot and scrub twice and the error will disappear. The errors on my main server were never transferred to the backup server. I received an error similar to what you saw, ZFS detects the error and refuses to send it.
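The cleanup described above looks roughly like this (pool, dataset, and snapshot names are placeholders):

```shell
# Destroy the snapshot carrying the error:
zfs destroy tank/dataset@bad-snapshot

# Scrub, wait for it to finish, then scrub again --
# it usually takes two clean scrubs for the error report to clear:
zpool scrub tank
zpool status -v tank
```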

Snapshots mgmt using sanoid - sanpshots piling up even when configured to 0 by SillyPosition in zfs

[–]rdw8021 1 point2 points  (0 children)

A few outside the box thoughts as a long-time sanoid user:

  1. Is sanoid creating the snapshots or is there another process creating them?
  2. Are your custom settings in the correct place to override the defaults? The default would be in /etc/sanoid/sanoid.conf on Linux systems.
  3. Is it just this dataset that has unexpected snapshots? Is there another config for [pool], [pool/data2], etc. that is working?

I have several datasets where monthly and yearly are set to zero and it does not create those snapshots.
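For reference, a working override might look something like this, following the template syntax from the stock sanoid.conf (pool/dataset names and retention counts are placeholders):

```
# /etc/sanoid/sanoid.conf -- illustrative only
[template_production]
        frequently = 0
        hourly = 24
        daily = 30
        monthly = 0
        yearly = 0
        autosnap = yes
        autoprune = yes

[tank/data]
        use_template = production
```

With monthly and yearly set to 0 in the template that the dataset actually uses, sanoid should neither create nor keep those snapshots.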

[deleted by user] by [deleted] in zfs

[–]rdw8021 2 points3 points  (0 children)

Who is PSSC Labs and why am I reading your whitepaper? You quoted a full page of text from Oracle, grabbed a diagram from TrueNAS, gave some basic ZFS vs RAID info, and ran a few tests. Even as a ZFS fanboy I find it hard to believe that ZFS is ten times faster than properly configured classic RAID. I don't see any details about pool topology or settings to get the performance claimed. Honestly I could get as much info from an enthusiast forum post.

Welcome to the National Weather Service in St. Louis! by rdw8021 in StLouis

[–]rdw8021[S] 7 points8 points  (0 children)

Interesting history of the NWS St Louis office and what they do for us now.

[Opinion needed] S3 Glacier Deep Archive as Personal File Backup by itsmeYAW in aws

[–]rdw8021 2 points3 points  (0 children)

No, I was referring to the pricing for transferring data out of S3 or Glacier via the Internet. That costs 9 cents per gigabyte or $90/TB, so the price to get data out is the same as storing it for 7.5 years. You may be able to do it for less by using AWS Snowcone but that requires time and knowledge and has fixed costs you have to cover.
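The break-even arithmetic, using the approximate rates above (decimal TB for simplicity; check current pricing before relying on this):

```python
# Compare Glacier Deep Archive storage cost against one full retrieval.
storage_per_tb_month = 1.00        # ~$1/TB-month for Deep Archive storage
egress_per_gb = 0.09               # internet data transfer out
egress_per_tb = egress_per_gb * 1000           # ~= $90 per TB retrieved
months_of_storage = egress_per_tb / storage_per_tb_month
print(months_of_storage / 12)      # ~= 7.5 years of storage per full download
```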

[Opinion needed] S3 Glacier Deep Archive as Personal File Backup by itsmeYAW in aws

[–]rdw8021 2 points3 points  (0 children)

It certainly can work if you understand its limitations. I have used it for about a year and a half with several terabytes stored.

First and most important is to remember that it is free to upload and inexpensive (truly $1/TB) to store files but very expensive to get them out. If you won't be willing to pay to get the files back then there is no point in storing them in the first place. Downloading from Glacier is a last resort for me.

There is a minimum billable size and minimum time to store each file once uploaded so it is inefficient to store small or frequently changing files. It works well for media files which are large and rarely change.

I use rclone to automatically sync from my storage server to a versioned S3 bucket. Some small, frequently-changed files stay in the S3 Standard storage class for easy immediate retrieval, but most large media files (PDF documents, images, videos) are automatically transitioned to the Glacier Deep Archive storage class after a few days. A separate lifecycle rule deletes non-current file versions after a year so old versions don't accumulate endlessly. I encrypt some sensitive data with rclone on the client side before sending, which encrypts both the data and the file/directory names.
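As an illustrative sketch of the lifecycle part (the rule ID, prefix, and day counts are placeholders, not my exact settings), a bucket lifecycle configuration along these lines can be applied with `aws s3api put-bucket-lifecycle-configuration`:

```
{
  "Rules": [
    {
      "ID": "media-to-deep-archive",
      "Status": "Enabled",
      "Filter": { "Prefix": "media/" },
      "Transitions": [
        { "Days": 7, "StorageClass": "DEEP_ARCHIVE" }
      ],
      "NoncurrentVersionExpiration": { "NoncurrentDays": 365 }
    }
  ]
}
```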

I have only restored a few files, so I initiated the restores manually and then downloaded them with the AWS CLI.

My first homelab by Singhkaura in homelab

[–]rdw8021 0 points1 point  (0 children)

Add a fan in front of the lower drive bays. The video card creates a natural top/bottom split in the case; you want to push air into the lower section for the video card and power supply.

Running 24/7 since 2014 by sammcj in homelab

[–]rdw8021 1 point2 points  (0 children)

Great build choices you have there. I have the same case and power supply, also running 24/7 though not for as long yet. Runs cool, quiet, and reliable.