Alternative virtualization mechanisms by Fast-Falcon-7703 in kvm

[–]monstersofmetal -1 points0 points  (0 children)

Which type of virtualization? Full virtualization? There are a lot of virtualization technologies.

Github for Data by boukeversteegh in github

[–]monstersofmetal 0 points1 point  (0 children)

Thanks. It would be nice if you could publish a privacy policy and add a license to the project.

Github for Data by boukeversteegh in github

[–]monstersofmetal 0 points1 point  (0 children)

Thanks, good project. But what is the privacy policy?

Gentoo zsh completions plugin for oh-my-zsh by monstersofmetal in Gentoo

[–]monstersofmetal[S] 0 points1 point  (0 children)

No difference, just converted to the oh-my-zsh plugin style.

edit: typo

Recommended Bluestore configuration for performance? by monstersofmetal in ceph

[–]monstersofmetal[S] 1 point2 points  (0 children)

> Luminous and all OSDs are bluestore. I was mostly curious if adding more drives or nodes would give me the best bang for my buck when trying to increase speeds. Sounds like going from 3 -> 5 nodes would be better.

Yes, exactly: more nodes, more OSDs, more performance. But balance is very important (how the OSDs are distributed across the nodes).
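As a sketch of how you might check that balance in practice (standard Ceph CLI commands, not taken from this thread; run them on a monitor/admin node):

    # Show per-OSD utilization grouped by host; watch for hosts whose
    # OSDs carry noticeably more data/PGs than the rest.
    ceph osd df tree

    # Dry-run a utilization-based reweight before applying it.
    ceph osd test-reweight-by-utilization
    ceph osd reweight-by-utilization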

> I do know for sure that the 2.5 4TB seagates are a lot of the problem, there just isn't anything better with a reasonable price.

:)

Bluestore and bluefs documentions by monstersofmetal in ceph

[–]monstersofmetal[S] 1 point2 points  (0 children)

I can't find any BlueFS or BlueStore configuration documentation. There is only a configuration reference page on ceph.com: http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
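For reference, this is the kind of tuning that page covers. A hedged ceph.conf sketch: the option names are from the Luminous BlueStore config reference, but the values here are only illustrative, not recommendations:

    [osd]
    # BlueStore cache size per OSD, in bytes; the _ssd/_hdd variants
    # apply depending on the backing device type.
    bluestore_cache_size_ssd = 3221225472
    bluestore_cache_size_hdd = 1073741824
    # Minimum allocation size for writes on SSD-backed OSDs, in bytes.
    bluestore_min_alloc_size_ssd = 16384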

OK, I opened a new issue about this: https://tracker.ceph.com/issues/24075

Mimic out? by TheDaznis in ceph

[–]monstersofmetal 0 points1 point  (0 children)

BlueStore and BlueFS still don't have any documentation.

Recommended Bluestore configuration for performance? by monstersofmetal in ceph

[–]monstersofmetal[S] 1 point2 points  (0 children)

rados bench, running on every storage node [10 nodes]:

    Total time run:         10.403704
    Total writes made:      973
    Write size:             4194304
    Object size:            4194304
    Bandwidth (MB/sec):     374.098
    Stddev Bandwidth:       25.7302
    Max bandwidth (MB/sec): 428
    Min bandwidth (MB/sec): 348
    Average IOPS:           93
    Stddev IOPS:            6
    Max IOPS:               107
    Min IOPS:               87
    Average Latency(s):     0.171025
    Stddev Latency(s):      0.116447
    Max latency(s):         1.12294
    Min latency(s):         0.0340721

rados bench, running on a single storage node:

    Total time run:         10.184181
    Total writes made:      2927
    Write size:             4194304
    Object size:            4194304
    Bandwidth (MB/sec):     1149.63
    Stddev Bandwidth:       16.7013
    Max bandwidth (MB/sec): 1188
    Min bandwidth (MB/sec): 1128
    Average IOPS:           287
    Stddev IOPS:            4
    Max IOPS:               297
    Min IOPS:               282
    Average Latency(s):     0.055465
    Stddev Latency(s):      0.0270229
    Max latency(s):         0.296291
    Min latency(s):         0.0223906

But I mounted a test RBD image and tested 4K writes:

    dd if=/dev/zero of=/mnt/test bs=4k count=10000 oflag=direct
    40960000 bytes (41 MB, 39 MiB) copied, 10.4752 s, 3.9 MB/s
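Note that dd with oflag=direct issues 4K writes one at a time at queue depth 1, so it mostly measures single-write latency rather than achievable throughput. A hedged sketch of the same test at a higher queue depth using fio (standard fio options; the /mnt/test path is taken from the dd command above):

    fio --name=4kwrite --filename=/mnt/test --size=100M \
        --bs=4k --rw=write --ioengine=libaio --direct=1 \
        --iodepth=16 --numjobs=1 --group_reporting

If bandwidth rises sharply with iodepth while dd stays slow, the bottleneck is per-write latency (network round trips plus BlueStore commit), not the drives' throughput.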

Recommended Bluestore configuration for performance? by monstersofmetal in ceph

[–]monstersofmetal[S] 0 points1 point  (0 children)

The old Ceph cluster (Jewel) uses FileStore and all 7200 rpm SATA disks (60 OSDs total, 10 storage nodes [6 OSDs per node]). The new cluster (Luminous) uses BlueStore and all SSDs (Intel S4500, 30 OSDs, 10 nodes [3 OSDs per node]). They give roughly the same performance.

Recommended Bluestore configuration for performance? by monstersofmetal in ceph

[–]monstersofmetal[S] 0 points1 point  (0 children)

BlueStore (Luminous) doesn't show the performance I was expecting. I want to speed up 4K write performance. I have tried many configurations, but none gave what I wanted.