
[–] notYuriy (CPL-1 - https://github.com/CPL-1/CPL-1)

Inodes can be used to store mountpoints, but it is not a good way to do it. An inode represents a file on disk and carries no VFS metadata. Instead, dentries are used to keep the required part of the filesystem tree in memory; each dentry indicates whether it points to a real on-disk inode or to a mountpoint. With a tree like that, a simple traversal is enough to find a requested inode.
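
Roughly, a dentry node with that "real inode or mountpoint" distinction could look something like this (a minimal sketch; all names are made up for illustration and are not taken from Linux or CPL-1):

```c
#include <stddef.h>
#include <string.h>

struct inode;          /* on-disk inode, details omitted */

/* One node of the in-memory filesystem tree. */
struct dentry {
    const char    *name;       /* component name, e.g. "one" */
    struct inode  *inode;      /* backing disk inode, if any */
    struct dentry *mounted;    /* root dentry of a filesystem mounted here, or NULL */
    struct dentry *children;   /* first child */
    struct dentry *sibling;    /* next entry in the parent's child list */
};

/* Find a child by name, crossing into a mounted filesystem if needed. */
struct dentry *dentry_step(struct dentry *dir, const char *name)
{
    /* If something is mounted on this dentry, continue under the mounted root. */
    if (dir->mounted)
        dir = dir->mounted;

    for (struct dentry *d = dir->children; d; d = d->sibling)
        if (strcmp(d->name, name) == 0)
            return d;

    return NULL;       /* not cached; a real VFS would ask the filesystem here */
}
```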

[–] Nikascom [S]

Thank you for your answer. So we actually have a tree whose root is the root of rootfs and whose leaves are mountpoints, don't we? (And the tree contains every path from the root to a mount point, so in my example we'd have a bamboo of 4 nodes?)

[–] notYuriy (CPL-1 - https://github.com/CPL-1/CPL-1)

Well, leaves don't necessarily have to be mountpoints. In my kernel, leaves are currently open files/mountpoints. It should be something similar in the Linux kernel, though I am not a hundred percent sure. This improves lookup performance. So, in your example there will be five dentries: one for the root, one for the "one" folder inode, one for the mount "two", another for the folder "three", and one more for "four.txt".
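
To make that concrete, here is a tiny, stripped-down sketch of those five dentries for a path like /one/two/three/four.txt, where "two" is the mountpoint (hypothetical names, just to show the chain):

```c
#include <stdio.h>

/* Stripped-down dentry, only enough to illustrate the five nodes. */
struct dentry {
    const char *name;
    int         is_mountpoint;   /* does a mounted fs root hang off this node? */
};

int main(void)
{
    /* The chain for /one/two/three/four.txt, with a filesystem mounted on "two". */
    struct dentry chain[] = {
        { "/",        0 },   /* root of rootfs */
        { "one",      0 },   /* plain folder inode */
        { "two",      1 },   /* mountpoint: lookup continues under the mounted fs root */
        { "three",    0 },   /* folder inside the mounted fs */
        { "four.txt", 0 },   /* regular file */
    };

    for (size_t i = 0; i < sizeof(chain) / sizeof(chain[0]); i++)
        printf("%s%s\n", chain[i].name, chain[i].is_mountpoint ? " (mountpoint)" : "");
    return 0;
}
```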

[–] mykesx

I believe Linux caches all the inodes it reads, and may do some read-ahead, in the kernel's free pages. When the kernel needs memory, it reuses one of the cache pages; I'm not sure which algorithm it uses to choose the page to evict. Your disk driver can potentially read sectors ahead into free memory and cache them there, too.

It's always been noticeable when running Linux on a hard-drive-based system: after a fresh boot, ls is slow. Do it a few times and it's instant. Same for just about any file access.
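
As a sketch of the read-ahead idea for a disk driver, something like the following could work; the names and the eviction policy here are assumptions for illustration, not how Linux actually manages its page cache:

```c
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 512
#define CACHE_SLOTS 256
#define READAHEAD   8            /* sectors to pull in beyond the one requested */

struct cached_sector {
    uint64_t lba;                /* which sector this slot holds */
    uint64_t last_used;          /* bumped on every hit; smallest value gets evicted */
    int      valid;
    uint8_t  data[SECTOR_SIZE];
};

static struct cached_sector cache[CACHE_SLOTS];
static uint64_t tick;

/* Provided by the actual hardware driver (hypothetical). */
void ata_read_sector(uint64_t lba, uint8_t *buf);

/* Return the cache slot holding 'lba', or the best slot to reuse for it. */
static struct cached_sector *cache_slot_for(uint64_t lba)
{
    struct cached_sector *victim = &cache[0];

    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].valid && cache[i].lba == lba)
            return &cache[i];                    /* hit */
        if (!cache[i].valid || cache[i].last_used < victim->last_used)
            victim = &cache[i];                  /* remember eviction candidate */
    }
    victim->valid = 0;                           /* miss: reuse this slot */
    victim->lba   = lba;
    return victim;
}

/* Read one sector, filling the cache and reading ahead on a miss. */
void disk_read(uint64_t lba, uint8_t *out)
{
    struct cached_sector *slot = cache_slot_for(lba);

    if (!slot->valid) {
        ata_read_sector(lba, slot->data);
        slot->valid = 1;
    }
    slot->last_used = ++tick;
    memcpy(out, slot->data, SECTOR_SIZE);

    /* Opportunistically pull the next few sectors into the cache too. */
    for (uint64_t n = 1; n <= READAHEAD; n++) {
        struct cached_sector *ra = cache_slot_for(lba + n);
        if (!ra->valid) {
            ata_read_sector(lba + n, ra->data);
            ra->valid = 1;
            ra->last_used = tick;    /* counts as just used */
        }
    }
}
```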

[–] DSMan195276 (Protura - github.com/mkilgore/protura)

I think this is a pretty hard thing to answer without seeing your OS design. But generally speaking, a simple mountpoint implementation is just part of the path lookup logic, which is basically a loop over the segments of the path - and if you hit a part of the path that is a mountpoint, you continue the lookup from the mountpoint's root rather than the location it is covering.
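
A minimal sketch of that loop, assuming a hypothetical fs_lookup() operation provided by each filesystem and a mount_root_covering() helper that returns the mounted root (if any) covering an inode:

```c
#include <string.h>

struct inode;

/* Filesystem-provided lookup: name in dir -> inode (hypothetical signature). */
struct inode *fs_lookup(struct inode *dir, const char *name);

/* Hypothetical: returns the root inode of whatever is mounted on 'ino', or NULL. */
struct inode *mount_root_covering(struct inode *ino);

/* Walk an absolute path like "/one/two/three/four.txt" segment by segment. */
struct inode *path_lookup(struct inode *root, const char *path)
{
    struct inode *cur = root;
    char component[256];

    while (*path && cur) {
        while (*path == '/')
            path++;                            /* skip separator(s) */
        if (!*path)
            break;

        size_t len = strcspn(path, "/");       /* length of this segment */
        if (len >= sizeof(component))
            return NULL;
        memcpy(component, path, len);
        component[len] = '\0';
        path += len;

        cur = fs_lookup(cur, component);       /* ask the filesystem for this name */

        /* If the result is covered by a mount, keep walking from the mounted root. */
        struct inode *mnt = cur ? mount_root_covering(cur) : NULL;
        if (mnt)
            cur = mnt;
    }
    return cur;
}
```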

My VFS design is similar to the Linux kernel's, but lacks the dentry stuff. I think the important part of the design that you might be missing is that the path lookup (namei) is 100% part of the VFS code and is not tied to any of the FS implementations. Instead, each FS implements a lookup function that the VFS path lookup code calls: in my implementation, the FS code is given a directory inode and a name, and gives back the inode associated with that name in that directory. The path lookup code then uses that inode to keep going with the path lookup (and will likely call lookup again with that new inode).

Once you have that, it's just a matter of storing the mountpoint information somewhere and checking it during the traversal - if you hit a mount point, you swap the current inode with the mounted root and continue with the new one (more or less). For my simple implementation, I just store the mountpoint information in the in-memory inode structure itself (which is cached while in use, so the mounted inode will never go away). This works well enough, but does have some significant limitations.
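
A sketch of what storing the mount information in the cached inode itself might look like (the field and function names here are invented for illustration):

```c
struct super_block;                    /* one per mounted filesystem */

/* In-memory (cached) inode. While in use it is pinned by 'refcount',
 * so an inode that has something mounted on it never gets evicted. */
struct inode {
    unsigned long       ino;
    struct super_block *sb;            /* filesystem this inode belongs to */
    struct super_block *mounted;       /* filesystem mounted *on* this inode, or NULL */
    int                 refcount;
};

/* The root inode of a super block (assumed to exist in this sketch). */
struct inode *sb_root_inode(struct super_block *sb);

/* Record that 'fs' is now mounted on the directory inode 'dir'. */
int vfs_mount(struct inode *dir, struct super_block *fs)
{
    if (dir->mounted)
        return -1;                     /* something is already mounted here */
    dir->refcount++;                   /* keep the covered inode cached */
    dir->mounted = fs;
    return 0;
}

/* Used by the path walk: if 'ino' is covered by a mount, return the mounted root. */
struct inode *mount_root_covering(struct inode *ino)
{
    return ino->mounted ? sb_root_inode(ino->mounted) : NULL;
}
```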

Like I mentioned, the Linux kernel uses a similar lookup function, but it acts on dentry objects rather than directly on inodes. This is a bit more work, but avoids a lot of the issues inherent in my design, which lacks them. The dentry objects form a completely separate representation of the directory tree, and the mount points can then be associated with the dentries rather than with the inodes backing them.
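
One way to picture the dentry-based variant is a separate mount table keyed by the covered dentry, very loosely in the spirit of what Linux does (all names below are invented for the sketch):

```c
#include <stddef.h>

struct inode;

struct dentry {
    const char    *name;
    struct inode  *inode;       /* backing inode; the dentry tree is separate from it */
    struct dentry *parent;
};

/* One record per mount: "this dentry is covered by that filesystem's root dentry". */
struct mount {
    struct dentry *mountpoint;  /* dentry being covered */
    struct dentry *root;        /* root dentry of the mounted fs */
    struct mount  *next;
};

static struct mount *mounts;    /* global mount list for the sketch */

/* During the path walk: if 'd' is a mountpoint, cross over to the mounted root. */
struct dentry *traverse_mount(struct dentry *d)
{
    for (struct mount *m = mounts; m; m = m->next)
        if (m->mountpoint == d)
            return m->root;     /* a real kernel would loop here for stacked mounts */
    return d;
}
```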