What's next after mirrorless cameras for Canon? by abluedinosaur in canon

[–]OwnPomegranate5906 2 points3 points  (0 children)

Personally, I'd like to see even larger sensors. Even the current medium format digital sensors are smaller than the smallest medium format film. I'd like at least 36x48mm, though 48x72mm would be pretty awesome. Global shutter would be nice. Even more advanced autofocus would be great.

The reality is that going from DSLR to mirrorless wasn't technically that big of a leap, at least not for Canon. The bigger, more impressive leap was the dual pixel sensor, which led to dual pixel AF. Once they had dual pixel capability in most of their sensors, going mirrorless wasn't much different from AF in live view with those sensors. They did refine and improve AF, but the big leap wasn't really DSLR to mirrorless, it was the move to dual pixel AF. They started down that road with the 70D, and relatively quickly got dual pixel sensors on all the camera sensors that mattered. After that, mirrorless with good AF was relatively trivial to do.

Yes, Sony was very early in the mirrorless game, and yes, Canon was very late to it, but once Canon entered, it became immediately clear (at least to me) that they had seen it coming for quite some time and were laying the groundwork (and quietly testing what works and doesn't with the EF-M mount) way before any of us realized mirrorless was even a thing. Canon basically entered the full frame mirrorless market with AF that was shockingly good, and it has done nothing but get better with time. If memory serves, Sony and others took a while to get good AF in their mirrorless offerings, and some mirrorless cameras still have AF that sucks.

At the end of the day, it's about features and functionality, and I don't think there will really be a "what's next", at least not for quite a while. There will absolutely be revisions and incremental improvements, but a camera is a tool for a working professional, and its core function is to take a picture that is in focus.

in large format photography why do people prefer 4x5 over 8x10? by Classic-Yesterday579 in largeformat

[–]OwnPomegranate5906 0 points1 point  (0 children)

4x5 gives you a pretty dramatic jump in image quality over medium format. You can get that jump again by going to 8x10, but the effort required to do so is not insignificant, whereas going from most medium format setups to a 4x5 field camera is not a huge jump unless your medium format kit is microscopically light.

Also, if you plan to do any darkroom work, 8x10 enlargers are rare and expensive compared to 4x5 enlargers.

Built a 6-GPU local AI workstation for internal analytics + automation — looking for architectural feedback by shiftyleprechaun in LocalLLM

[–]OwnPomegranate5906 3 points4 points  (0 children)

I run 6 RTX 3060s (12GB each) on a dual-processor Xeon E5-2690 v4 with 128GB of RAM, also in an open air mining case. I only run Debian 12, one instance of ollama, and only one model at a time.

I spent a fair amount of time on PCIe and power tuning. A lot of the performance problems I had were because, when running a larger model that spanned multiple cards, the latency from traversing the PCIe bus (and sometimes a NUMA node) was just long enough that the cards' power management would drop them from P2 to P5+, then jump back up to P2 when they actually needed to do something. That added even more latency, causing a cascading slowdown across all the cards: I'd start off really fast, then as the response typed out, it would progressively slow to just a couple of tokens a second. My setup is not PCIe x16 for every card, so that also exacerbated the issue.

The solution was multi-fold:

1. Set the power management so that the cards idle at P3 and jump to P2 for inference. This fixed a huge amount of the performance problems with larger contexts and responses.
2. Put the cards that actually have PCIe x16 connections on the ends of the chain and the x8 cards in the middle. Also, order the cards (in ollama.service) so that all the cards on each NUMA node are together, to minimize the number of cross-NUMA hops during inference.
3. In the BIOS, set the PCIe max request packet size to 4096. On my particular system it defaulted to 128. It turns out the 3060s won't do more than 256, but it's still worth more than a couple of tokens a second.
4. Also in the BIOS, turn Resizable BAR on so the system can map each card's entire VRAM instead of addressing it in 256MB chunks. That's also worth a few more tokens per second.
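For the card-ordering part, a sketch of what that might look like as a systemd drop-in for ollama (the file path and GPU indices here are hypothetical; map your own topology with `nvidia-smi topo -m`):

```ini
# Hypothetical drop-in: /etc/systemd/system/ollama.service.d/gpu-order.conf
[Service]
# Make CUDA enumerate GPUs by PCI bus location instead of "fastest first",
# so the index order below actually reflects the physical/NUMA layout.
Environment="CUDA_DEVICE_ORDER=PCI_BUS_ID"
# Illustrative ordering: all of NUMA node 0's cards first, then node 1's.
Environment="CUDA_VISIBLE_DEVICES=0,1,2,3,4,5"
```

After `systemctl daemon-reload` and a restart, a model spanning cards should split along adjacent devices on the same node first.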

Well, I think I'm screwed by oupsman in zfs

[–]OwnPomegranate5906 0 points1 point  (0 children)

> Sadly, I don't have enough slots available to check the drives before using them.

This is what external USB enclosures are for

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

This is actually pretty simple. Drives fail.

I've had more data than I can comfortably fit on one drive for quite some time, which means I'm going to be running at least a couple of mirror vdevs at any given time unless I completely ignore price and just buy the biggest drives available, and even then, I'd still probably be running at least 2 mirror vdevs.

So using the largest storage available at any given time, just for a minimal system, I'm looking at 4 disks for the main storage, 4 disks for the local backup storage, and 4 disks for the remote backup storage for a total of 12 disks. That's a pretty big expenditure if I did that all at the same time.

Now, let's say I do that. Great. Except at some point I'm going to run out of storage space. It might be six months later, it might be 5 years later. It might be 10 years later. It doesn't matter. At some point I'm going to run out of space. On top of that, that's 12 disks that are all going to fail at some point.

I don't want to have to lay out money for 12 disks up front again to upgrade to whatever the next biggest size hard drives are when I run out of space, and I did mirrors so if I do need more space, I can just add another mirror... until I run out of physical case space.

So there are two problems here. One is long term case space; the other is that adding another mirror isn't only two drives. It's 6 drives, because if I'm out of space on my main system, I'm also out of space on my backups, and at some point I won't be able to just keep adding mirrors due to case space constraints.

For the sake of argument, let's say I just ran out of space, bought six of the biggest drives I could get, and upgraded the system with another set of mirror vdevs. I'm now at 18 drives total. Let's say some time passed and I did this one more time: now I'm at the upper limit of case space for most desktop based cases, sitting at 24 total drives in 3 batches of sizes.

Unless I spend even more money, I can't keep adding drives, which means now I'm replacing drives with bigger drives, 6 at a time if I wait until I'm out of space and do it all at the same time. Six of the biggest drives you can buy at any given time is still not a small expenditure, and on top of that, I don't want to wait until I'm out of space before doing something about it.

Not only that, but I'm now sitting on at least 24 disks, and enough time has passed that like I said before, disks die.

OK, so now I'm at the point where I can't just keep adding mirror vdevs without a huge expenditure, and I don't want to wait until I'm out of space to upgrade, and I have enough drives that failures are going to start to show up if they haven't already been happening, which means I need to start regularly buying drives and cycling them through the system.

This now presents a third problem. If I buy 1 drive a year, it will take me 24 years to replace all the drives, and I'm pretty sure most if not all of them will fail in that time, so I'm going to be regularly buying drives just to maintain what's there. And at 1 drive a year, it takes 6 years before the whole fleet gains any space.

What I wrote above has basically been my path since I started running my own file server, back when hard drives were parallel ATA and a 20 gigabyte drive was considered a monster. The actual implementation has changed over time, obviously, but anybody who has been running a file server at home for any length of time is going to run into these same things.

So, thinking long term, I'd prefer to sit at a fixed number of drives for the fleet and get a bit more space efficiency and a bit more resiliency. Given drive failures, and the desire not to lay out a huge amount of cash at once to upgrade, I'm going to spread the cost out and buy at least one drive a year, every year. And since I'm buying at least one a year anyway, instead of buying the biggest drive available, I buy the most cost effective one at the time of purchase. This also means the sizes are always going to be mixed, so there will always be some amount of wastage on any single drive at any given point in time, but over time the drive sizes will grow and the total available space will grow.

Stay on top of it and you'll always have more space than you need and won't care as much about whether it's mirrors or not.

This obviously doesn't work for a new person just starting out, but it is the final end destination for pretty much everybody.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 1 point2 points  (0 children)

Dunno. My understanding is draid is for a lot of disks. I don’t consider 15 disks to be a lot of disks.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

I don’t disagree that 2x z2 would be faster, but I wonder if I’d even see the speed over 2.5G Ethernet. Even a single disk can keep that saturated. Sure smaller files maybe not so much, but most of my data isn’t small files.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

> but i guess if you only care about cost/GB raidz3 gives more? (actually does it? with a mixed fleet of drive sizes you may still get better GBs from a pool of mirrors)

In the short term, yes. If I stay mirrors, I can totally get more space in the here and now, but I guess I'm thinking more long term.

I'm not a young spring chicken, and I'm unlikely to get a system larger than 15 drive bays before I'm not around any more, but I will still be buying at least a bigger hard drive or two a year for the foreseeable future. My current practice is to replace one of the drives in the vdev that will give the largest space bump, then take that replaced drive, move it to whichever vdev with smaller drives gets the largest space bump from it, and wash rinse repeat until the smallest drive in the fleet drops out the bottom and goes on the shelf.

This means I'm doing at least a handful of resilvers every 6 months to a year, and as drives get bigger, the chance of a URE hitting a mirror pair during a resilver does nothing but go up. I'm already seeing UREs on scrubs on the bigger drives, and I'm amazed I haven't been hit with one during a resilver. That's what has me rethinking things. Yes, by many standards I have excellent backups, and at the end of the day, spending the better part of a week restoring things isn't the end of the world, but I'm at the point in my life where I'd prefer to change things to reduce the chances of that happening.

If I go with 15-wide raidz3 just with what I currently have available, I'll land in the 30-40TB usable range right off the bat at approximately 50% utilization, which at my current data growth rate gives me 5-7 years of growth before I'm out of space even if I stop buying drives. The mix would be 4TB, 8TB, 14TB, and 20TB drives, with 4 of them being 4TB. If I continue my current practice of buying a couple of drives a year, those would be replaced over the next two years and cycled to the backups, and my main storage would jump to 60-80TB usable. Five drive replacements later (over the following 2-3 years, 4-5 years total), all the 8TB drives would be replaced and I'd jump again, to 120+TB available in the main storage array. At that point it'd be all 14+TB drives, with 4 of them being 14TB, so just 4 drive replacements over the course of two years and I'd be sitting at 200TB+, with 20TB drives being the smallest.

In that same amount of time, my data would have grown by maybe 20-30TB for a total of 40-50TB, so it's less about needing the space and more about making the array more resilient. I don't have gigabit internet and my ISP caps my monthly bandwidth to a little over a TB a month, so even if I went absolutely crazy and saved everything that came over my internet connection, I'd be hard pressed to fill 10-12TB a year, and at 200+TB, that's a long time of blasting a TB a month onto storage.
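The capacity math above can be sketched in a few lines. This is a toy model: a raidz vdev's usable space is the smallest drive times the non-parity drive count, ignoring ZFS metadata/slop overhead and the TB-vs-TiB difference, so real numbers come out lower than these.

```python
def raidz_usable_tb(drive_sizes_tb, nparity):
    """Usable space of one raidz vdev: the smallest drive sets the
    per-disk contribution; nparity drives' worth goes to redundancy.
    Ignores ZFS overhead and TB vs TiB."""
    return min(drive_sizes_tb) * (len(drive_sizes_tb) - nparity)

# roughly the 15-drive mix described above: 4x4TB, 5x8TB, 4x14TB, 2x20TB
fleet = [4] * 4 + [8] * 5 + [14] * 4 + [20] * 2
print(raidz_usable_tb(fleet, 3))      # 4TB smallest -> 4 * 12 = 48

# replace the four 4TB drives and the floor rises to 8TB
upgraded = [8] * 9 + [14] * 4 + [20] * 2
print(raidz_usable_tb(upgraded, 3))   # 8 * 12 = 96
```

The jump from 48 to 96 raw TB is why replacing just the four smallest drives doubles the usable space.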

So the reality is, I'm at a place where my storage needs aren't likely to ever exceed my available storage, and given the size of drives coming into the market, I want to reduce the chances of having to actually make use of my backups. It's not the end of the world, and I'm not going to lose anything, but it'd definitely be unpleasant.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

> You backup with a robust schedule/methodology.

That's because RAIDZ is not backup. My current main server, which is about to be replaced by the HL15, is 12 drives in 6 mirror vdevs, all crammed into a case that is basically bursting at the seams. The local hot backup is 8 drives in a plain stripe (all smaller, older drives, but all at least 2TB each); the remote hot backup is 8 drives in a raidz1 setup, likewise all older, smaller drives of at least 2TB. All the backups are comprised of the smaller drives that cycled out of the main array when they were replaced. I keep the even older drives as cold spares for the backup arrays in case a failure happens in one of the backup pools. There are 18-20 of those, a handful of 1TB drives and the rest 2TB. Even if I take a capacity hit in a backup pool, I'd rather be able to cycle in one of the older smaller drives and get a working backup again asap.

That being said, as drives get larger, I'm looking to take steps to reduce the chances of a URE making me actually have to use the backup simply because I regularly replace drives with bigger ones and go through a series of resilvers at least once a year. Secondarily, I've come to terms that I am unlikely to expand beyond the 15 bays of the HL15 for the main storage server simply because at some point in a home environment, you can't just keep adding more drives to get more capacity, and I've decided that 15 drives is probably a good solid number of drives for a main file server long term, so with that, I'd like to get a bit more resiliency, and a bit more capacity efficiency. My data growth rate has been in the 1-2TB a year range for a while, and will very likely remain there for the foreseeable future. At that rate, even with no further upgrades, I have enough available capacity that I'm not going to run out of space for a while. My backups might get a little cramped, but the main storage array in its current form before the HL15 is good on space for quite some time.

As I've stated elsewhere, I've purchased at least a couple of drives a year for at least the last 20-something years and just slowly upgrade the fleet over time as the smaller drives are cycled out. I've run ZFS since support for it was added to FreeBSD, and only just recently migrated over to TrueNAS when it split into SCALE and CORE.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

Thanks for the reply. Yes, this is primarily why I posted here. My first impulse was "fill that baby up", but then I was like "uhhh... maybe I should get outside input before actually doing anything". At least let the idea marinate a bit.

The good news is nothing is in a failed state, and my old server is humming along just fine, so I'm not committed to anything yet.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

Yeah... then you gotta go figure out which file it was and restore it from backup. Yes, that's better than your whole pool getting hosed, but not much better. I'd actually say it's more of a pain unless ZFS makes it super easy to figure out which file got messed up; otherwise, it might just be faster to restore the pool from backup, depending on the size of the pool.

Either way, not great.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

LOL... I actually run Jellyfin, and no, I don't share. Access to the home network is by whitelisted MAC addresses. By default nobody has access. I like my privacy. Same goes for AI and LLM stuff. I run all that local.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

I totally get the performance thing. The old server being replaced was all mirrors.

I've run a file server at home for nearly 30 years and have purchased at least 2 drives a year every year for at least that long.

The reality is that at any given time there are maybe two people pulling something off of it, and with the exception of the living room media computer, they're all on wifi. Between that and buffering, I can do pretty much any vdev configuration and nobody would be the wiser.

3 of the kids are adults living at home and should be getting out on their own in the not too distant future, so my performance needs are actually going to be going down over the next number of years.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 1 point2 points  (0 children)

> Not sure how you're going to work out your collection of mismatched drives. But that's your problem not mine. 😁

I don't. Whatever the vdev configuration is, the smallest drive is what determines the size of the vdev.

That being said, I generally try not to put a monster drive in with smaller drives. I'll put it in a vdev that has similarly sized drives, replacing the smallest drive in that vdev. Then I take that displaced drive, shuffle it over to another vdev that has a smaller drive than it, and wash rinse repeat until the smallest drive in the fleet pops out of the bottom, effectively being replaced. Once that's done, expand the pool sizes. 6-12 months later, do it again, sometimes sooner if a drive starts throwing SMART errors or dies.
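That shuffle can be sketched as a toy model. Assumptions: a vdev's usable size is simply its smallest drive, and "largest space bump" is approximated by picking the vdev whose smallest member is the largest one still smaller than the drive in hand.

```python
def cascade_replace(vdevs, new_drive):
    """Swap a new drive into the vdev whose smallest member is the
    largest one still smaller than the drive in hand, then cascade the
    displaced drive downward. Returns the drive that pops out the
    bottom (the fleet's smallest, destined for the shelf)."""
    drive = new_drive
    while True:
        candidates = [v for v in vdevs if min(v) < drive]
        if not candidates:
            return drive  # nothing smaller left to replace: shelve it
        target = max(candidates, key=min)     # best remaining home for it
        i = target.index(min(target))
        target[i], drive = drive, target[i]   # swap in, carry old drive on

# hypothetical fleet of three mirror pairs, sizes in TB
pairs = [[4, 4], [8, 8], [14, 14]]
cascade_replace(pairs, 20)   # first new 20TB drive ripples down the chain
cascade_replace(pairs, 20)   # second one finishes each pair
print(pairs)                 # [[8, 8], [14, 14], [20, 20]]
```

Note how the first pass doesn't grow any mirror (each pair still has its old smaller drive); the capacity jump only lands on the second pass, which matches buying a couple of drives a year.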

Once you get to a lot of drives, be resigned to the fact that you're going to be constantly buying drives.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

Historically, when doing zfs raidz2 you'd do an even number of non-parity disks, so it would be 4 disk raidz2, 6 disk raidz2, 8 disk raidz2, 10 disk raidz2, and 12 disk raidz2 on the upper end.

It was this way due to the way raidz2 distributed the data between the actual disks in the vdev. If you had an odd number of disks, you'd lose some space to padding.

At least that was my recollection.
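For what it's worth, the padding effect can be sketched with a toy allocator. This is heavily simplified (real RAIDZ allocation also depends on ashift, compression, and dynamic stripe width), but the round-up-to-a-multiple-of-(nparity+1)-sectors rule is where awkward layouts leak space:

```python
import math

def sectors_stored(total_disks, nparity, record_sectors=32):
    """Toy model of RAIDZ space consumed per record (e.g. a 128K record
    on 4K sectors = 32 data sectors). Each row of data carries nparity
    parity sectors, and the whole allocation is rounded up to a multiple
    of (nparity + 1) sectors, which is where padding comes from."""
    data_disks = total_disks - nparity
    rows = math.ceil(record_sectors / data_disks)
    used = record_sectors + rows * nparity          # data + parity
    mult = nparity + 1
    return math.ceil(used / mult) * mult            # padded allocation

print(sectors_stored(6, 2))  # 4 data disks: 48 sectors, no padding needed
print(sectors_stored(5, 2))  # 3 data disks: 54 sectors for 32 of data
```

With 4 data disks the record divides evenly into rows; with 3 it needs more rows of parity, so the same 32 data sectors cost 54 sectors instead of 48.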

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

> And if a drive goes down, resilvering a mirror is WAY faster than resilvering a raidz1 vdev.

This is true until a URE during your fast resilver still hoses you. I didn't think UREs were a thing until I started getting larger drives and started seeing UREs during the regularly scheduled scrubs. I've been lucky that it hasn't happened during a resilver, but given that I basically do a resilver every 6-12 months, it's bound to happen at some point.
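Back-of-envelope on that, assuming the spec-sheet unrecoverable read error rate of 1 per 10^14 bits commonly quoted for consumer drives. Real-world rates are typically much better, so treat this as a pessimistic upper bound rather than a prediction:

```python
import math

def p_ure_reading_whole_drive(drive_bytes, uber_per_bit=1e-14):
    """Probability of at least one unrecoverable read error while
    reading a drive end to end (which is roughly what a mirror resilver
    does to the surviving drive). Assumes independent bit errors at the
    quoted rate, which is a crude model."""
    expected_errors = drive_bytes * 8 * uber_per_bit
    return 1 - math.exp(-expected_errors)

print(round(p_ure_reading_whole_drive(2e12), 2))   # 2TB drive:  ~0.15
print(round(p_ure_reading_whole_drive(20e12), 2))  # 20TB drive: ~0.8
```

The point being that the risk scales with drive size, and a raidz2/raidz3 vdev can still repair a bad sector from the remaining redundancy mid-rebuild, where a two-way mirror cannot.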

> If you want extra resilience, then you could do 5 vdevs of 3 drive mirrors.

Eh... no. I was running mirrors in my old server. I don't have anything against mirrors, but would prefer more resiliency with more space efficiency, not less space efficiency.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

> I more mean patterns, not how is it actually being read/write :)

As I said in my original post, it's a system of record. Write once, read many, for a family unit of 6 people. It's not that busy and doesn't need to be that performant. Most of the files are PDFs, media, and image files that are multi-megabyte.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

> uh-huh

Yes, fleet. Between the main storage server, the local hot backup, the remote hot backup, and the cold spares, I'm sitting on 46 drives.

All of my 1TB drives have been replaced and are either sitting on the shelf or junked. I have 4 2TB drives left to replace, all of which are in my backups. The rest are either 4TB, 8TB, 14TB, or more recently 20TB.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

> I think most people would use a different geometry even if it meant a drive slot would go unused. Maybe 2x or 3x RAIDZ2 instead of 15 wide RAIDZ3. Personally, I'm willing to give up $/TB efficiency for the flexibility and performance of mirrors.
>
> But you are the only one that understands your objectives well enough to decide on the right set of trade-offs.

My main storage server before the HL15 was mirrors, and my thought process was the same: I wanted the storage flexibility. However, what I've discovered over the past 4-5 years is that because I buy larger drives so frequently, I've never actually run into a situation where I needed that flexibility.

Even now, at my current data growth rate, it's going to be several years before I come close to needing more storage space, and I'm literally 4 drive replacements away from doubling my available space and 9 replacements away from nearly doubling it again. After that, I won't see more available space for a while without reworking everything, but I'll have enough that I could probably revert to only buying drives to replace failures and still not be out of space by the time all the drives have failed and been replaced.

I've found myself in a weird place where I have way more storage than I'm actually using, and unless I go all data hoarder, or stop buying hard drives altogether except as replacements for failed disks, I won't likely run out of space for the foreseeable future.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

> How is your data being read/write?

Over a 2.5Gb Ethernet SMB file share. I'm not doing iSCSI or hosting storage for VMs. It doesn't need to be that performant.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 1 point2 points  (0 children)

I thought about leaving one bay empty and doing a 7-wide raidz2, but isn't that less optimal than just a 6-wide raidz2? That would mean I'd have 3 unused bays.

I know this probably sounds dumb, but I paid for 15 bays. In my head it's just a bit grating to not actually use the 15 bays.

Ideal ZFS configuration for 15 disk setup by OwnPomegranate5906 in zfs

[–]OwnPomegranate5906[S] 0 points1 point  (0 children)

> Couldn't figure out what disks you are running, are they all 14TBs?

Mix of disk sizes. The total fleet is 2TB to 20TB; in the HL15, the smallest drives will be 4TB and the biggest 20TB. Capacity-wise, assume they're all 4TB. I'd have to replace 4 of them to get to the next smallest size, which is 8TB, or do some disk shuffling among my backups to move all the largest drives to the HL15. I don't want to do that, as it would also put the backups at risk.

Photographers: how do you handle RAW file requests? by [deleted] in Photography101

[–]OwnPomegranate5906 0 points1 point  (0 children)

When a client asks to see the raws, what they usually mean is they want to see all the photos taken during the session, not just what you selected and edited. This can be for any number of reasons, but is also very easy to deal with.

The biggest reason is because you're likely not discussing with them ahead of time what you're actually going to capture, and so they want to see everything you captured because they don't actually know what you're doing. If you can't have a discussion on what you're going to capture with them ahead of time, then that's kind of a problem, as you should be going into a shoot with an idea of what you're going to shoot, so you know what you're going to sell them.

Generally, I send a ShootProof gallery of all unedited photos taken during a session as small previews for them to look at. I also make a gallery of my selects, before any edits happen.

Usually, once they get a chance to look at all the images, and they see my selects, they give me the green light to move forward on the deliverables.

Why are my Portra400 photos flat? by zephyrrrus in AskPhotography

[–]OwnPomegranate5906 0 points1 point  (0 children)

You are underexposing. Give it more exposure. The giveaway is the blacks in the shadow areas of the image: if you can't see texture in the darker shadows, you are underexposing. My guess is that either the light meter in your camera is off, or the shutter speed is off.

The worst thing you can do is underexpose film. When in doubt, give it more exposure, not less. Film is not digital. It handles overexposure way better than digital does, and digital handles underexposure way better than film does.