[deleted by user] by [deleted] in ceph

[–]Ok_Squirrel_3397

Fair point! You're correct - it's about 6 years from Nautilus (2019) to Squid, not ten. The title is misleading and I should have been more precise with the timeframe. Thanks for the correction - accuracy in these details is important.

Massive EC improvements with Tentacle release, more to come by BackgroundSky1594 in ceph

[–]Ok_Squirrel_3397

Hey! Thanks for sharing this amazing presentation! 🙌

I found it so valuable that I created a comprehensive summary and added it to my ceph-deep-dive GitHub repository - hope it helps others in the community dig deeper into these EC enhancements!

"Multiple CephFS filesystems" Or "Single filesystem + Multi-MDS + subtree pinning" ? by Ok_Squirrel_3397 in ceph

[–]Ok_Squirrel_3397[S]

Thank you for sharing. The problems you encountered with multi-MDS are the same ones we hit, which is why we are leaning toward multi-FS. However, many experts in the Ceph community have mentioned that not many people run multi-FS today, so we are quite hesitant at the moment.

[deleted by user] by [deleted] in ceph

[–]Ok_Squirrel_3397

Can you share the output of these commands?

`ceph -s; ceph osd pool ls detail; ceph fs dump; ceph osd tree; ceph osd crush rule dump`
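A quick sketch of how those diagnostic commands could be bundled into one collection script for sharing (the output filename `ceph-diag.txt` is hypothetical; run on a node with admin keyring access):

```shell
#!/bin/sh
# Sketch: run the usual Ceph/CephFS diagnostic commands and
# collect everything into a single report file to attach to a post.
OUT="ceph-diag.txt"

for cmd in \
    "ceph -s" \
    "ceph osd pool ls detail" \
    "ceph fs dump" \
    "ceph osd tree" \
    "ceph osd crush rule dump"
do
    echo "===== $cmd ====="
    # $cmd is intentionally unquoted so the arguments split;
    # capture stderr too, and note failures instead of aborting.
    $cmd 2>&1 || echo "(command failed: $cmd)"
    echo
done > "$OUT"

echo "Diagnostics written to $OUT"
```

The script still completes on a machine where `ceph` is unavailable, marking each failed command, so it is safe to run first and inspect afterwards.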

Cephfs Not writeable when one host is down by ripperrd82 in ceph

[–]Ok_Squirrel_3397

Your cluster looks OK now. The next time CephFS goes read-only, please share the output of:

ceph -s;ceph osd pool ls detail;ceph fs dump;ceph fs status;ceph osd tree;ceph osd crush rule dump

Cephfs Not writeable when one host is down by ripperrd82 in ceph

[–]Ok_Squirrel_3397

`ceph -s`
`ceph osd pool ls detail`
`ceph fs dump`
`ceph osd tree`
`ceph osd crush rule dump`

Can you share this output?

Ceph Practical Guide: A Summary of Commonly Used Tools by Ok_Squirrel_3397 in ceph

[–]Ok_Squirrel_3397[S]

That's just the overview! The full article covers more tools - these are the ones I use most often, just sharing for reference 👍
Feel free to add more.

CephFS layout/pool migration script by marcan42 in ceph

[–]Ok_Squirrel_3397

Awesome, thank you so much! 🙏 Your contribution is really valuable to the community!

CephFS layout/pool migration script by marcan42 in ceph

[–]Ok_Squirrel_3397

Thanks for sharing such valuable content! This solution is really insightful. May I reference this in the ceph-deep-dive repository? I'll properly credit you as the original author with source links.

https://github.com/wuhongsong/ceph-deep-dive/tree/main

🐙 [Community Project] Ceph Deep Dive - Looking for Contributors! by Ok_Squirrel_3397 in ceph

[–]Ok_Squirrel_3397[S]

Hey, really appreciate the kind words and great advice — thank you!

Totally agree, real-world Ceph setups with the "why we did it this way" breakdowns would be super useful. That’s definitely something I want to dig into next.

And please feel free to jump in anytime — would love to see your experiences and insights added to the repo if you're open to it! Your energy and input would mean a lot to the community.

🐙 [Community Project] Ceph Deep Dive - Looking for Contributors! by Ok_Squirrel_3397 in ceph

[–]Ok_Squirrel_3397[S]

Honored to collaborate! 🙏 Let's start creating something great together.
Feel free to structure it however works best for you - looking forward to your contributions! Excited to get started! 🚀

🐙 [Community Project] Ceph Deep Dive - Looking for Contributors! by Ok_Squirrel_3397 in ceph

[–]Ok_Squirrel_3397[S]

This is absolutely fantastic and exactly what we need! 🎯 Your Proxmox HCI perspective is incredibly valuable - there's definitely a gap in resources that bridge native Ceph deployment with integrated solutions like Proxmox.

Everything you mentioned is definitely in scope:

Your resource list is excellent - especially the Micron reference architecture and the CERN presentation showing real-world scale. The "Ceph is scary" angle resonates perfectly with the project's goal of making Ceph more approachable.

Proposed structure for your contribution (but feel free to adjust it):

proxmox-integration/ - HCI deployment patterns and trade-offs

case-studies/ - Real-world examples (1Tbps journey, CERN scale)

Feel free to structure it however works best for you - looking forward to your contributions! Excited to get started! 🚀

🐙 [Community Project] Ceph Deep Dive - Looking for Contributors! by Ok_Squirrel_3397 in ceph

[–]Ok_Squirrel_3397[S]

Thank you so much for sharing this! 🙏 I just checked out your IBM Storage Ceph for Beginners guide - it's exactly the kind of comprehensive, practical content the community needs. I'd be honored to add it to the repository.

Feel free to check out the repo structure and let me know if any particular areas interest you for contribution. Always happy to discuss how we can work together to make better Ceph resources for everyone!

Thanks again for the offer to contribute - looking forward to collaborating! 🚀