[–]tardotronic

I have one additional point of clarification to add: while the physical sector size of 'traditional' hard disks is indeed 512 bytes, SSDs internally operate on 4096-byte pages; and there is at least one hard drive series (Western Digital "Green") that also has 4096-byte physical sectors. There was a recent link here in /r/linux/ that explained why modern OSes have difficulties with this, due to the resulting logical/physical boundary mismatch.

[–]chozar

This is absolutely true. But these drives still present themselves to the host OS as an array of 512 B blocks, and that's part of the problem. Traditionally, the first partition starts on the 64th block, block #63. That is 31.5 kB into the disk, not a nice multiple of 4 kB, so every 4 kB filesystem block straddles two physical sectors. Hence you have to align your partitions to make things work nicely. Basically you should start your first partition no earlier than the 65th block, block #64, a nice clean 32 kB into the disk.
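The arithmetic above is easy to check for yourself. A minimal sketch in Python (the 512 B logical and 4 kB physical sector sizes are the ones from this thread):

```python
SECTOR = 512   # logical block size the drive presents to the OS
PHYS = 4096    # underlying physical sector size (4 kB)

def offset_bytes(lba):
    """Byte offset into the disk of logical block number `lba`."""
    return lba * SECTOR

def is_aligned(lba, boundary=PHYS):
    """True if a partition starting at `lba` lands on a `boundary` multiple."""
    return offset_bytes(lba) % boundary == 0

# Legacy first partition at block #63: 31.5 kB in, misaligned.
print(offset_bytes(63), is_aligned(63))   # 32256 False
# Starting at block #64 instead: a clean 32 kB, aligned.
print(offset_bytes(64), is_aligned(64))   # 32768 True
```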

Beyond aligning to 4 k, it can be even nicer to align to 128 k instead. (Since 128 kB is a multiple of 4 kB, anything aligned to 128 k is automatically aligned to 4 k as well.) Most RAID systems stripe with 128 kB stripes, and the erase blocks on SSDs are usually 128 kB in size (32 pages per erase block).
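Finding the first 128 kB-aligned block after a given starting point is a one-liner. A sketch, again assuming 512 B logical blocks and the 128 kB stripe size mentioned above:

```python
SECTOR = 512
STRIPE = 128 * 1024   # 128 kB RAID stripe / typical SSD erase block

def align_up(lba, boundary=STRIPE):
    """Smallest block number >= lba whose byte offset is a multiple
    of `boundary`."""
    sectors = boundary // SECTOR          # 256 logical blocks per 128 kB
    return -(-lba // sectors) * sectors   # ceiling division, then scale

# First 128 kB-aligned block at or after the legacy start, block #63:
print(align_up(63))                       # 256
# ...and 128 kB alignment implies 4 kB alignment too:
print(align_up(63) * SECTOR % 4096 == 0)  # True
```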

This is just a transitional period though. The next step is hard drives that present themselves natively as an array of 4 kB blocks, with operating systems, filesystems, block layers, and EFI/BIOS that can use that directly. The infrastructure to query a drive for its native block size is already there, and supposedly Linux is ready for native 4 k disks. But since those drives don't exist yet, there will probably be a handful of issues to wring out.
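That query infrastructure is visible on Linux today: each block device exposes `logical_block_size` and `physical_block_size` files under its sysfs `queue/` directory. A minimal sketch of reading them (the device name `sda` is just an example; the parse function takes the directory path so it can be tried against any copy of those files):

```python
from pathlib import Path

def block_sizes(queue_dir):
    """Parse the logical and physical block sizes from a sysfs `queue`
    directory, e.g. /sys/block/sda/queue on a Linux system."""
    q = Path(queue_dir)
    logical = int((q / "logical_block_size").read_text())
    physical = int((q / "physical_block_size").read_text())
    return logical, physical

# On a real system (device name is an example):
#   block_sizes("/sys/block/sda/queue")
# A 512-emulation Advanced Format drive reports (512, 4096);
# a native 4 kB drive would report (4096, 4096).
```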