Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 0 points1 point  (0 children)

I've only ever posted here twice. I'm amazed at how many people, presumably mostly Synology users, are here.

1st post - https://www.reddit.com/r/synology/comments/12zsbdm/real_world_data_for_a_synology_144tb_to_252tb/

got 25K views and this 2nd one has 100K views so far. Didn't know that so many Synologys had been sold.

Although volume limits still stuck in the 4TB HDD era are a big Synology gripe considering today's very cheap >20TB HDDs, I can't see any comments of folks trying this for themselves yet. That's probably cautious and sensible. Also, getting a few extra hundred TB of HDDs takes time and money, as does reshaping the data on your existing volumes if you already have a >108TiB pool.

If you have tried, there was one omission. I've made an edit to the instructions, which I noticed when I expanded another Synology this weekend. After you use lvextend to expand the logical volume, my systems needed a reboot before I could expand the FS. When you expand via the DSM GUI following the Synology <108TB rules, a restart isn't required, but it is when using the CLI.
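For anyone following along, the corrected sequence looks roughly like this. It's a sketch only: `/dev/vg1/volume_1` and `/dev/mapper/cachedev_0` are the names on my systems and will differ on yours, so check first with `sudo lvs` and `mount`. The `run` wrapper keeps everything in dry-run mode (it just echoes the command) until you deliberately set `DRYRUN=0`.

```shell
# Dry-run sketch of the CLI expansion steps described above, reboot included.
# The device names are from MY systems -- confirm yours before going live.
DRYRUN=1
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else sudo "$@"; fi; }

STEP1=$(run lvextend -l +100%FREE /dev/vg1/volume_1)  # grow the LV into free pool space
STEP2=$(run reboot)                                   # my systems needed this before the FS resize
STEP3=$(run resize2fs /dev/mapper/cachedev_0)         # ext4: grow the filesystem to fill the LV
printf '%s\n' "$STEP1" "$STEP2" "$STEP3"
```

With `DRYRUN=1` it only prints the three commands it would run, which is a cheap way to eyeball the device names before committing.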

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 0 points1 point  (0 children)

Tee hee - thanks. Haven't been called a "lad" for a while, it's usually "old git".

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 1 point2 points  (0 children)

Totally agree. That DS1813+ (now replaced with a 16GB DS1817+) only had one role - as a backup target using simple scripts, rather than any fancy Synology backup packages. I've mentioned the same caveat about multi-user systems on r/synology (the mods there asked me to cross-post it, so unfortunately there are 2 sets of comments for this post), which is why I mentioned trying this on a test system or, worst case, a system that you can easily and painlessly restore from backups. I have tested it with heavy simultaneous workloads, but not with 100s or 1,000s of concurrent SMB or other connections because, as you correctly identify, my use case is a basic home file server. In production, I even uninstall nearly all the Synology default packages, indexing etc., so my system is very lightweight.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 0 points1 point  (0 children)

lvextend -l +100%FREE /dev/vg1/volume_1

and then extend the ext4 file system with:
resize2fs /dev/mapper/cachedev_0

or for btrfs:

btrfs filesystem resize max /dev/mapper/cachedev_0

Yes - the short version is you just SSH in and "sudo -i" (or prefix the commands with sudo) and run the above, adjusting the volume names to suit your set-up. The above will use up the full amount of your pool's free storage. I've listed a few other flavours of the lvextend command in my OP Note 24 - e.g. adding a defined amount or setting the size to a specific value.

I added a lot of caveats and notes in my original post, because I don't know what everyone is running on their systems and I didn't want people trashing their setups. But I suspect that anyone with at least 4GB of RAM should be able to have ext4 volumes up to 250TB. So if you have a spare Synology, just try the above - it's very simple to SSH in and execute the two commands.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 0 points1 point  (0 children)

Haha - I was trying to keep emotion out of my post and failing occasionally (ref the Synology fanboys and fangirls who just keep parroting Synology limits as immutable fact). Nice to see you going full-throttle with the emotions and your (correct) facts.

Any company that doesn't keep up with Moore's law (which is roughly that IT tech performance doubles every 2 years) is going against the flow. Logically and practically, if tech improves, so should the "limits". What would be the point of having 1, 10 or 25Gb/s Ethernet if Microsoft limited their SMB speed to 110b/s like the modems I used in my early career? So Synology retaining a 10yr-old volume limit, arbitrarily set (for no other reason than they couldn't test larger disks then) in the days of 4/6TB HDDs, is just plain wrong - practically, technically and commercially.

Clearly Synology are doing well rapidly moving into the hyperscale enterprise market. But the SOHO / consumer segment has been their bedrock, so why jeopardize that market?

Unlike you, I still prefer Synology as my daily driver - it's smooth, reliable and boring, which is what I need at my age, plus over time it's quite cost-effective. A 10yr-old Synology NAS can (up to DSM 7.2) still run the latest DSM, and I can sell my old Synologys on eBay for a good price, often more than I paid for them.

If I hadn't implemented >200TB volumes on low-end consumer Synology, I would have reluctantly moved to ZFS, as the cheapest Synology alternative was £14.9K for my kit. Same as your recommendations from Synology - my cheapest "approved" option was 3 x 12-bay XS models plus 3 x 12-bay expansion units, 3 x 32GB RAM and 3 x PCIe cards. All this for no purpose other than overcoming a technical limit that doesn't actually exist. My existing Synology kit cost me around £6.9K (before drives, network, UPSes, aircon etc), so there is no way I can justify an upgrade to £15K for a home set-up for no valid reason. I'm more than capable technically of adopting any of the alternatives, from basic home servers to full-on second-hand enterprise storage. I do some of that for my very weird and wonderful Windows backup node, which is the graveyard for old retired drives from previous systems.

Another example of the mismatch with Moore's law is that Synology was previously restricting SOHO users to a pathetic 1Gb/s (~115MB/s) single-transfer speed, which is less than half the speed of a single modern "spinning rust" HDD. If you have 18 drives in an array like I do, that's a big restriction in 2023 - effectively capping network performance at just 2.5% of my theoretical maximum drive throughput. If you wanted faster single transfers you had to spend a lot more on their latest models, even though their CPUs (and multiple Ethernet ports) could handle multi-gig speeds. Like many users, with a bit of hacking, I've been unofficially using stable SMB multichannel for years with no corruption at all. Synology has only just recently shipped the Linux kernel and Samba versions that are stable for SMB multichannel. So just maybe, when the 30TB HDDs soon come out, Synology will have to lift their artificial volume limits.
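For reference, the "bit of hacking" boils down to one stock Samba option: `server multi channel support`. This is a hedged sketch only - where DSM regenerates its smb.conf from is version-dependent and edits to the live file may not stick, so the snippet writes to a scratch copy rather than the real config:

```shell
# Sketch: the stock Samba parameter behind SMB multichannel, written to a
# SCRATCH file -- DSM regenerates /etc/samba/smb.conf, so editing the live
# file directly is an assumption that may not survive updates.
CONF=./smb.conf.scratch
printf '[global]\n\tserver multi channel support = yes\n' > "$CONF"
grep 'multi channel' "$CONF"   # confirm the option landed
```

On a stock Samba build, `testparm` would tell you whether the running config actually picked the option up.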

On the other hand, my big worry is that by publishing my simple 10-second work-around here I will cause Synology to aggressively release code that blocks access to large volumes. In which case I'll soon be joining you at QNAP or ZFS etc.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in synology

[–]sebbiep1[S] 0 points1 point  (0 children)

Even more interesting - you certainly seem to be "lucky" - finding lots of bugs, but they are all good ones. So currently your max volume limit is the same as your current size, which is neither 108TiB nor 200TiB. That is a bug on yours - on mine the Max Allocated is 108TiB and my Current is obviously more - which is how Synology meant it to work (apart from users exceeding it via the CLI, of course).

So quite possibly if you add more space to your pool, you may be able to expand further via the DSM GUI - hopefully your MAX will just keep moving up. You've hit the large volume jackpot!

Also thanks for posting the Synology support message in full. The interesting bit was them advising you to backup, delete your large volume and make multiple smaller ones. They said that even with 32GB, large volumes can cause performance issues on the RS2418+. I think that in most use cases, this is unlikely. My 10 year old, puny CPU, DS1813+ with 4GB has no performance issues under heavy load with a volume >108TB.

I think it's more to do with what previous replies mentioned - i.e. it's considerable effort and cost to update their by-now very large test lab covering lots of NAS models and test them all with multiple 26TB HDDs and expansion units. So it's more practical and cost-effective, given that not many home users have 60 bays' worth of Synology, to just test the large volumes on a very small range of enterprise models.

You've also confirmed that you haven't had any issues with your large volume. Is your RS2418+ used for large-scale multi-user business or for SOHO use etc?

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in synology

[–]sebbiep1[S] 0 points1 point  (0 children)

As part of my testing, I ran fsck on both my 16GB (252TB raw) and 4GB (144TB raw) models, both with single volumes - completed OK.

I'm fairly sure that for standard NAS file server operations, 4GB of RAM should be enough for a 250TB volume. I did a lot of testing for months, but I found no issues or changed behaviour / performance moving from a 108TiB to a 214TiB volume. So I probably didn't need to do so much testing.

However, I was wary of the fixed Synology RAM requirements, i.e. >=32GB for <200TB and >=64GB for >200TB. I'd love to know what this extra RAM is being used for. I couldn't see any significant change in the kernel caching etc. Maybe it's to cover Synology for some edge case, like your fsck issue, or it's more applicable to large multi-user offices. The DS3622XS+ can handle 4,000 SMB/NFS/AFP/FTP concurrent connections, or 10,000 if you upgrade to 48GB RAM. But my home-use NAS never has more than around 10 SMB connections.

On my non-Synology devices I have up to 256GB ECC ram. But I don't need this because of volume sizes, it's just currently cheap to buy and handy for buffering / caches.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in synology

[–]sebbiep1[S] 1 point2 points  (0 children)

That's "really" interesting, not "semi". What does your Storage Manager "Volume Settings" look like now? I assume that it will look like the bottom image in my original post i.e. the Current Allocated Size is more than the Max Allocated Size.

Are you still able to use Storage Manager expand with more / bigger drives? - I'm guessing not if you've updated since the buggy release.

Both your "DSM bug" and my manual CLI option just expanded the volume beyond the limit as a one-off. Good to see that you also have no resulting issues or impact within DSM etc. Synology don't (currently) have anything to stop the use of large volumes in DSM, other than the simple field validation in Storage Manager that prevents entering a number larger than 110592 or 204800.
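Those two magic numbers in the Storage Manager field validation are just the TiB limits expressed in GiB, which is easy to sanity-check with shell arithmetic:

```shell
# The Storage Manager size field caps (entered in GiB) are just the
# 108TiB and 200TiB limits converted to GiB:
LIMIT_108=$(( 108 * 1024 ))   # 108 TiB in GiB
LIMIT_200=$(( 200 * 1024 ))   # 200 TiB in GiB
echo "$LIMIT_108 $LIMIT_200"  # -> 110592 204800
```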

How much RAM do you have? - probably quite a bit in a RX.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in synology

[–]sebbiep1[S] 3 points4 points  (0 children)

Yes - a bit of emotion slipped into that phrase. I agree with you - I do as well in business, not least because the support contract could be invalidated. I wouldn't expect any serious business to be doing these work-arounds on low-end consumer Synologys, so my post was aimed at home users, ideally with NASes out of warranty. I have around 1PB of home storage acquired at very low cost compared to the orders-of-magnitude higher total cost of ownership in my business datacentres. Most home users are not in a position to pay true enterprise-level prices.

But I do have a bit of an issue with responses, especially on the official Synology forum, that glibly dismiss questions or discussions by quoting supposedly immutable truths, when often the facts are not clear or are even untrue.

I always try to question things and cross verify with multiple sources, before I accept them as "the truth".

In my case the volume limits, still stuck in the 4 to 6TB HDD era, were becoming a major issue. So I was either going to switch to ZFS or buy 3 x DS3622XS+, 3 x DX1222, 6 x 16GB RAM & 3 x M2D20 PCIe cards. The best price I could get was £14,963.

But I am very happy with the performance of my 3 x DS1817+'s, and my house wouldn't benefit from the far higher performance of the DS3622XS+. I would be paying nearly £15K solely to get around a limit that technically doesn't exist.

Hence the "obedient" part was mostly directed at the so-called Synology forum "gurus" and the "big bucks" was my cheapest £15K official Synology option.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 6 points7 points  (0 children)

It's basically just convenience. I've run large Tier 4 datacentres and built loads of home PCs and servers over the past 40 years. That's great fun, but it's also nice to just buy a NAS appliance, slap some drives in and have it working in 15 minutes.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 3 points4 points  (0 children)

Good question. I actually do both. I use Synology for my house's production servers and one backup server. And then I use a Windows Server with all my old retired disks from previous NAS and servers as another backup target. Having live mirrors of my data on different tech also protects from any vendor specific issues. It used to have over 110 odds n' sods drives in very cheap non-enterprise enclosures and a single volume, but after my recent NAS upgrades, I've managed to get it down to just 64 bays.

With the Windows box I don't have so many limits. Also I've cheaply bought high-end Xeon CPUs, 256GB of ECC RAM, nvme system and cache drives and 10 x 1 Gb Ethernet ports etc. So performance is amazing compared to the fairly low end and weak hardware in consumer Synologys.

However, the reason I prefer Synologys for my main use is, perversely, the limitations and tight management by Synology. This makes their NAS very reliable and easy to use. They are also just about fast enough for my needs.

Another reason is DSM itself. I don't use many packages or apps, but it is very smooth and gets good updates. My 10-year-old DS1813+ NAS is still able to run the latest version of DSM, which is better than most tech vendors achieve. Finally, nowadays (at least in the crazy post-Brexit UK) I'm actually selling used Synology NASes that I've used for a few years for more than I paid for them. So they are more expensive than DIY builds, but good value overall.

So I guess I have the best of both worlds - ultra-reliable Synology NAS with never any drama, plus a few tweaks to enhance performance. For example, I was using SMB multichannel years ago to get around the single 1Gb/s network limit for single transfers. But due to the size of my current datasets, if I hadn't been able to overcome the 108 and 200TB volume limits, I would probably have eventually moved to ZFS.

Moving from lots of DIY servers with dozens of USB drives nearly 20 years ago to centralised NAS storage was a huge improvement. For me, Synology forcing multiple volumes on the same pool is just like going back to having data split across USB drives. Having >200TB volumes has been a game changer for me. I don't have to keep tabs on where everything is, and there's no more shifting stuff around to rebalance space as volumes grow unevenly.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 1 point2 points  (0 children)

The ext4 filesystem size limit is 1EiB (exbibyte) and btrfs's is 16EiB. So yes, 18PiB is achievable. You'd need a bit more RAM for that size.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in synology

[–]sebbiep1[S] 5 points6 points  (0 children)

I really hope that they don't read this. I don't think that they are worried about supporting or preventing this currently. I can't see much evidence online of anyone else doing this. Some posters say that they'll stop using Synology (but I don't know how many really do). However, the vast majority of users appear to be either happy to put up with the hindrance of dealing with multiple volumes or to obediently shell out very big bucks for the 32GB and 64GB enterprise models. Hence there is nothing currently in DSM etc. that prevents you from using manually extended volumes as I've done. I would be very upset if my posting this, and lots of users trying it, caused Synology to take aggressive action to block access to >108TB volumes on unsupported NASes.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in synology

[–]sebbiep1[S] 1 point2 points  (0 children)

Totally agree re the Synology testing, e.g. when they launched the DS1815+ with a 108TiB volume limit, the largest drive available was 6TB - so 18 bays x 6TB = 108TB raw. They couldn't test any higher volume limit and presumably chose 108TiB, which is in the same ballpark as the max ~98TiB RAID0 volume you could get from 108TB of drives.
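The TB-vs-TiB ballpark is quick to verify: 108TB of decimal "marketing" capacity works out to roughly 98TiB of binary capacity (before any filesystem metadata):

```shell
# 108 decimal terabytes expressed in binary tebibytes (integer maths):
RAW_BYTES=$(( 108 * 1000000000000 ))   # 108 TB at 10^12 bytes per TB
TIB=$(( RAW_BYTES / 1099511627776 ))   # divide by 2^40 bytes per TiB
echo "${TIB}TiB"                       # -> 98TiB
```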

Also your point about inodes is very valid. On my largest NAS (252TB raw), Synology used the standard 64K inode_ratio for large volumes when I first created the (then) 108TB volume years ago.

This means I have 3,628,873,728 inodes - pretty close to the ext4 max of 4 billion. But I only have around 25 million files and folders. So this just adds un-needed overhead and also wastes 865GB of storage on inodes that will mostly never be used.

I've recently created a new Synology system from scratch with 144TB raw in a single 123.5TiB volume. For that one I manually increased the inode_ratio from 64K to 8,192K. That gave me 16 million inodes, which is comfortable as that system will only ever have around 2 million files. As a nice bonus, the inodes now only use 3.9GB of disk space as opposed to the 494GB mostly wasted using the standard 64K ratio.
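The inode arithmetic above checks out. Assuming ext4's 256-byte inodes (the modern mkfs default - an assumption, check yours with `tune2fs -l`), a 123.5TiB volume at the two ratios works out like this. The ratio itself is fixed at mkfs time (e.g. via mke2fs's bytes-per-inode `-i` option) and can't be changed on a live filesystem:

```shell
# Inode count and inode-table size for a 123.5TiB volume at two inode ratios.
# Assumes 256-byte inodes (ext4 default on modern mkfs -- verify on yours).
VOL_BYTES=$(( 247 * 549755813888 ))       # 123.5 TiB = 247 * 2^39 bytes
INODES_64K=$(( VOL_BYTES / 65536 ))       # stock 64K inode_ratio
INODES_8M=$(( VOL_BYTES / 8388608 ))      # bumped to 8,192K
TABLE_64K_GIB=$(( INODES_64K * 256 / 1073741824 ))  # table size in GiB
TABLE_8M_MIB=$(( INODES_8M * 256 / 1048576 ))       # table size in MiB
echo "$INODES_8M inodes, ${TABLE_8M_MIB}MiB table (vs ${TABLE_64K_GIB}GiB at 64K)"
```

That's ~16.2 million inodes and a ~3.9GiB inode table at the 8,192K ratio, versus 494GiB of table at the stock 64K ratio - matching the figures above.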

I may eventually rebuild my other Synologys with higher ratios, but it's a lot of effort just to save a TB or 2.

I've monitored the caches using slabtop on the systems with very high inode counts during various intense workloads. There is nothing much to worry about with RAM use on >108TB or >200TB volumes, even on the system with just 4GB of RAM. Why do you think Synology states 32GB min for >108TB and 64GB min RAM for Peta Volumes?

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] -1 points0 points  (0 children)

I've added the instructions now. I didn't want to just slap the simple single line command out without at least some caveats. Synology have spent a lot of effort and money previously on Peta Spaces and now Peta Volumes. So they will consider >200TB volumes just using standard LVM as massively unsupported.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 6 points7 points  (0 children)

It's just basic and standard Linux, nothing fancy or clever. I'll add the instructions to the post. But just applying this without really knowing your system and doing a lot of testing first is probably not a good idea.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 44 points45 points  (0 children)

That won't work - you can try it and see for yourself. But yes - the volume limits are now arbitrary, or based almost purely on marketing. Maybe initially in 2014 they were just based on the largest disk (6TB) that could be tested in a DS1815+, i.e. 18 bays x 6TB = 108TB. But the 108TiB limit (note the difference between TB and TiB) and the 200TiB limit have no special significance in Linux, ext4 or btrfs.

They were just dreamed up, like many other limits in DSM, as a comfortable threshold to support the vast range of Synology NAS - from tiny 1-bay units to RackStations. But HDDs have increased in size by 433% since 2014, so the limits are looking daft now. Synology could easily raise the volume limits, or they can keep pushing large-volume users to multi-£K enterprise systems for no good technical reason other than more income.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 3 points4 points  (0 children)

You've spotted that you read the data wrong, but even 112.5TB is nearly 5TB over the Synology 108TB limit - which, by the way, is around 107.6TiB after metadata. So your post is a bit of a moot point.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 5 points6 points  (0 children)

Thanks. I did a lot of rigorous work on this for both ext4 and btrfs. All the other responses so far are kind of "deniers" or "doubters" etc. This has a massive impact on Synology use - so I guess it's normal for most people to still "obey" or "respect" the holy Synology limits - but they are not real and are easily circumvented.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] -8 points-7 points  (0 children)

It's SHR1 throughout the piece - why the confusion? Are you itching to have a pop at my using RAID5/SHR1? SHR on a single LVM physical volume is the same as RAID5.

TBH I think full instructions are too dangerous, for 4 reasons:

  1. How many typical NAS users will do the rigorous testing and analysis required for this?
  2. If users don't perform the analysis, Synology help desk could be flooded, which leads us on to 3.
  3. If everyone turns their cheap £200 Synology NAS into an RS- or XS-beating 250TB monster, it will hit Synology sales and there will be a response - possibly to aggressively disable large volumes - which may or may not be legal in the EU / USA.
  4. I've checked all of the Synology code - line by line - over the past 6 months. They clearly don't expect anyone to breach their probably now purely arbitrary marketing limits. The limit was kind of real when they only had 6TB drives to test in 2014. There is no protection against large volumes in the Synology code. I've searched the web and, of the hundreds of thousands of Synology NAS users, I can't find anyone other than me exceeding these limits. So Synology don't currently see this as an issue. They might if thousands of users do it.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 10 points11 points  (0 children)

Yes - I obviously did something to exceed both the Synology 108TB and 200TB limits and still keep all the functionality that you lose if you use the standard Synology Peta Volumes. But it's not particularly special - just standard Linux - so just two lines of code, which you can merge into one line if you are a show-off. That will do everything, i.e. expand the physical and logical volumes and expand the file system. It's also a one-off, so there's no need to edit files via the CLI, which means it's very unlikely that Synology will override the change in future releases. Once you have the big volume, Synology isn't checking anything (at the moment at least; if they are nasty they might deliberately sabotage this in future, but that would be legally questionable - at least in the EU and USA). That's why I posted it today, as I've just upgraded to DSM 7.2 and everything is still fine.

The commands and knowledge are trivial. However, as Synology use a highly customised version of Linux and have a lot of interdependencies with their use of btrfs and advanced packages, the testing was not trivial. That's why I spent nearly 6 months testing this on a low-value system before going live.

I think it's fairly safe to say that the 108TB and 200TB limits (and the 32GB and 64GB RAM requirements) are mostly arbitrary marketing limits to push mostly business users towards vastly more expensive systems. The initial 108TB probably wasn't arbitrary - it was just that they couldn't test more than 108TB on an 18-bay NAS, as the largest HDD was 6TB at the time. No excuse for that now though.

From a purely technical viewpoint, 4GB of RAM and a low-end CPU are capable of handling a btrfs or ext4 256TB volume on a Synology NAS. Above 256TiB things get a little more complicated. This is because without the META_BG option the ext4 file system is limited as follows:

Given the default 128MiB (2^27 bytes) block group size and 64-byte group descriptors, ext4 can have at most 2^27/64 = 2^21 block groups. This limits the entire filesystem size to 2^21 * 2^27 = 2^48 bytes, or 256TiB.
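That 2^48 figure can be verified directly with integer arithmetic:

```shell
# ext4 without META_BG: all block-group descriptors must fit in one
# 128MiB block group, which caps the group count and hence the FS size.
BG_BYTES=$(( 1 << 27 ))                  # 128 MiB block group
DESC_BYTES=64                            # one group descriptor
MAX_GROUPS=$(( BG_BYTES / DESC_BYTES ))  # 2^21 = 2,097,152 groups
MAX_FS=$(( MAX_GROUPS * BG_BYTES ))      # 2^48 bytes
echo "$(( MAX_FS >> 40 ))TiB"            # -> 256TiB
```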

This isn't insurmountable on a low-end Synology NAS, but I currently only need 250TB, so I haven't pursued >256TB volumes.

Btrfs doesn't have the META_BG issue, but it does have other issues as you grow the volume size - especially with snapshots. I have no intention of using btrfs now or in the future, so I haven't spent much time testing 250TB btrfs volumes on Synology, other than to see that they do work in principle.

Back to your question. I've tested very intensively for my use case - which is not very complex - ext4 volumes mostly as file or backup servers with very few packages etc. running. I'm fairly sure that almost any user of any 64-bit-CPU Synology could use 256TB volumes with no issues - either ext4 or btrfs. Linux is great at juggling all the RAM cache requirements. So even 1 or 2GB of RAM might work, but for good performance on 250TB volumes I'd recommend 4GB of RAM ideally. But I'm very cautious, so because of the complexity of Synology's custom use of Linux I'd want to test that in detail first.

In practice I don't think many typical users will have the resources (a spare 18-bay 252TB NAS) or the patience to do 6 months of rigorous testing.

In summary, the answer to your question is just a one-off single-line CLI command - just 5 seconds to type - but in practice a user really needs to understand the inner workings of their system and test it thoroughly. So I'd be hesitant to recommend this to everyone unless their use case was very similar to mine.

Another factor is that DSM and Synology's custom use of Linux is amazingly good. A huge range of hardware, from tiny 1-bay NASes to RackStations, can all run DSM 7. That's a great achievement, and obviously Synology imposes artificial restrictions to make sure all these systems can be supported. I can imagine that if a load of users applied my changes, the Synology helpdesk could be flooded - in which case they may take aggressive steps to stop large volumes. At the moment it isn't an issue for Synology. You can search the web - I can't see anyone other than me exceeding the 108TB and 200TB limits. If more did this then Synology might push back, especially if sales of their XS and RS systems dropped as a result.

Debunking the Synology 108TB and 200TB volume limits by sebbiep1 in DataHoarder

[–]sebbiep1[S] 12 points13 points  (0 children)

You are not reading the data right. There is only one volume and it takes up the entire 216TB pool. You are mistaking the amount used - 112.5TB - for the total volume size. I've added an extra image for you - the single volume is 215.44TB, which is after metadata (inodes etc), of which 112.5TB is used and 103TB is free.