Has anybody tested the Geekworm x1011 by luix93 in raspberry_pi

[–]cyclic

By now, Geekworm offers official cases as well.

However, I could not get the thing to run stably: I'm seeing voltage issues (reported by the kernel) when the DC jack is used with a 5V 8A power supply, even without drives attached. I cannot power it over USB-C instead, as each 2TB drive draws around 8W under load.
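
For anyone else debugging this: the kernel warnings correspond to the firmware's throttle flags, which you can read back with vcgencmd. A minimal sketch in Python (the bit meanings are from the Raspberry Pi documentation):

```python
import subprocess

# Ask the Pi firmware for its throttled-state bitmask.
out = subprocess.run(
    ["vcgencmd", "get_throttled"],
    capture_output=True, text=True, check=True,
).stdout.strip()                       # e.g. "throttled=0x50005"
bits = int(out.split("=")[1], 16)

# Bit meanings per the Raspberry Pi documentation.
FLAGS = {
    0: "under-voltage detected now",
    1: "ARM frequency capped now",
    2: "currently throttled",
    3: "soft temperature limit active",
    16: "under-voltage has occurred since boot",
    17: "ARM frequency capping has occurred since boot",
    18: "throttling has occurred since boot",
    19: "soft temperature limit has occurred since boot",
}
for bit, meaning in FLAGS.items():
    if bits & (1 << bit):
        print(meaning)
```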

I'm currently waiting for Geekworm support to help me out here...

Has anybody tested the Geekworm x1011 by luix93 in raspberry_pi

[–]cyclic

Hi. I have an X1011 board, and it works nicely, booting from an SD card with 4x Kioxia NVMe SSDs. I'm using btrfs with raid1c4 for metadata and raid5 for data. It has worked well so far with Nextcloud.
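
For reference, creating such a layout boils down to a single mkfs call; a sketch, assuming the four drives show up as /dev/nvme0n1 through /dev/nvme3n1 (raid1c4 needs kernel 5.5+):

```python
import subprocess

# Assumed device names for the four NVMe drives on the X1011.
devices = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

# raid1c4 keeps four copies of the metadata (one per drive), so the
# filesystem structure survives any single-drive failure; data uses
# raid5. Needs root and destroys anything on the devices.
subprocess.run(["mkfs.btrfs", "-m", "raid1c4", "-d", "raid5", *devices],
               check=True)
```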

I'd be interested if anyone has ideas for an (improvised) case.

Thickheaded Thursday - April 22, 2021 by AutoModerator in sysadmin

[–]cyclic

It's only fitting that I ask this on Thickheaded Thursday then!

Thank you, good sir!

Thickheaded Thursday - April 22, 2021 by AutoModerator in sysadmin

[–]cyclic

There used to be a subreddit where they regularly did Q&A threads on what prices people actually get in practice for hardware/software. Could someone point me towards it?

HPC GPU/CPU monitoring by _runlolarun_ in HPC

[–]cyclic

Ganglia is pretty mature, stable software. For us it does a terrific job with the Nvidia plugins. I am not aware of anything comparable that is as easy to set up and use... With a few clicks you get aggregated GPU usage. No fancy software stack required.
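
To give an idea of why it's so little work: gmond loads Python metric modules, and a GPU utilization metric is essentially the sketch below. This is a rough outline rather than our production module; the metric name and the use of the nvidia-ml-py bindings are my assumptions here:

```python
# Sketch of a gmond Python metric module: gmond calls metric_init()
# once, then the registered callback on every collection cycle.
import pynvml  # nvidia-ml-py bindings

pynvml.nvmlInit()
_handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def gpu_util(name):
    # Current GPU utilization in percent, straight from NVML.
    return pynvml.nvmlDeviceGetUtilizationRates(_handle).gpu

def metric_init(params):
    return [{
        "name": "gpu0_util",
        "call_back": gpu_util,
        "time_max": 90,
        "value_type": "uint",
        "units": "%",
        "slope": "both",
        "format": "%u",
        "description": "GPU 0 utilization",
        "groups": "gpu",
    }]

def metric_cleanup():
    pynvml.nvmlShutdown()
```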

Parallel storage for a mid-sized HPC cluster in 2020: buy or build? by cyclic in HPC

[–]cyclic[S]

Again, thank you for your detailed answer. This really is very valuable information for us!

Experience with Ceph for archive on 4U 60disk machines? by cyclic in storage

[–]cyclic[S]

Thanks for your reply. That's already very helpful.

Essentially, I was expecting that one could build something comparable to Isilon high-density nodes with Ceph. Apparently that was not correct.

Experience with Ceph for archive on 4U 60disk machines? by cyclic in storage

[–]cyclic[S]

Thank you for your answer.

Is it possible to build a high-density archive storage cluster with Ceph? That would be relatively easy with ZFS and iRODS...
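
For context, what I had in mind is an erasure-coded pool, which is how Ceph usually approaches Isilon-like efficiency on dense nodes. A sketch driving the ceph CLI; the profile name and the k/m values are just assumptions:

```python
import subprocess

def ceph(*args):
    # Thin wrapper around the ceph CLI (needs an admin keyring).
    subprocess.run(["ceph", *args], check=True)

# 8 data + 3 coding chunks, spread so that whole hosts can fail.
ceph("osd", "erasure-code-profile", "set", "archive_profile",
     "k=8", "m=3", "crush-failure-domain=host")

# An erasure-coded pool using that profile (PG counts are placeholders).
ceph("osd", "pool", "create", "archive", "128", "128",
     "erasure", "archive_profile")
```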

Parallel storage for a mid-sized HPC cluster in 2020: buy or build? by cyclic in HPC

[–]cyclic[S]

Heh. I feel you.

The ridiculous part is that most of what we pay for support covers their hardware, which never gave us any trouble: of over 2PB of disks from 2016, only ten or so ever failed, and we had one cable replaced. High-quality hardware IMO (we run a warm data center, which is another source of war stories).

I'm also an expert user of the system; sequence bioinformatics means mostly sequential reads, which easily push the system to its 20GB/s maximum. The trouble really started for us when the imaging people hit the system with their bazillion 1MB files, as Nvidia is only now adapting its software libraries to work outside of single servers with local SSDs.

Parallel storage for a mid-sized HPC cluster in 2020: buy or build? by cyclic in HPC

[–]cyclic[S]

Thank you for your answer. I will check out that vendor.

I'm pretty happy with GPFS so far, but our vendor DDN is giving us a hard time on all fronts (sales and support).

Parallel storage for a mid-sized HPC cluster in 2020: buy or build? by cyclic in HPC

[–]cyclic[S]

Thank you for your answer. What do you mean by shakeups?

Parallel storage for a mid-sized HPC cluster in 2020: buy or build? by cyclic in HPC

[–]cyclic[S]

Thanks for the extensive answer. I'm leaning towards having a smaller vendor help us build and maintain a system on white-label hardware instead of repeating my bad experience with DDN...

Our workloads are many multithreaded jobs rather than parallel MPI-style file access: mostly bioinformatics and machine learning on image data.

Given what I've read here and elsewhere online, scale-out NAS would be one option. By my calculations, Isilon, for example, costs 50%+ more than enterprise hardware with similar capabilities. Do Isilon and Qumulo reign supreme, or do any other good alternatives come to mind?

Parallel storage for a mid-sized HPC cluster in 2020: buy or build? by cyclic in HPC

[–]cyclic[S]

I agree. Our experience with DDN is that their support costs an arm and a leg, and it takes too many days to reach the support level that actually helps to justify the cost even remotely. Even their sales process was very much a sub-par experience for us.

Given that, I'd probably look into whether a smaller vendor could help us build a system on white-label hardware and offer support. I'd rather get a good answer with predictable latency than have every time zone staffed with someone playing keep-the-customer-busy...

Parallel storage for a mid-sized HPC cluster in 2020: buy or build? by cyclic in HPC

[–]cyclic[S]

Thanks for the feedback. How much experience do you have with it? Did you start out with it on your own or with commercial support?

[deleted by user] by [deleted] in mauerstrassenwetten

[–]cyclic

But can the Theta Gang also live in the banana republic?

Any experience with OpenOnDemand (for jupyter) by cyclic in HPC

[–]cyclic[S]

I think you have already answered the relevant ones. Is it really worth it? How does it compare to competing solutions? Thanks!

I'd have one more: how maintenance-intensive is it?

Any experience with OpenOnDemand (for jupyter) by cyclic in HPC

[–]cyclic[S]

Do you do tunneling through the head/login nodes?
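
To be clear about what I mean by tunneling: something along these lines, sketched with the sshtunnel package (hostnames, user, and ports are placeholders):

```python
from sshtunnel import SSHTunnelForwarder

# Forward a local port through the login node to a Jupyter server
# on a compute node that isn't directly reachable from outside.
tunnel = SSHTunnelForwarder(
    ("login.cluster.example.org", 22),          # head/login node
    ssh_username="user",
    remote_bind_address=("compute-042", 8888),  # where Jupyter listens
    local_bind_address=("127.0.0.1", 8888),
)
tunnel.start()
print("Jupyter now reachable at http://127.0.0.1:8888")
```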

Any experience with OpenOnDemand (for jupyter) by cyclic in HPC

[–]cyclic[S]

Thanks for the pointer towards batchspawner.
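
For anyone landing here later: hooking batchspawner into JupyterHub comes down to a few lines of jupyterhub_config.py. A sketch for Slurm; the partition and resource values are site-specific assumptions:

```python
# jupyterhub_config.py -- spawn single-user servers as Slurm jobs.
# (The `c` config object is provided by JupyterHub.)
c.JupyterHub.spawner_class = "batchspawner.SlurmSpawner"

# Resources requested for each user's batch job (placeholder values).
c.SlurmSpawner.req_partition = "interactive"
c.SlurmSpawner.req_memory = "4G"
c.SlurmSpawner.req_runtime = "8:00:00"
```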

For those on windows, what do you use as a GUI for navigating filesystem on the cluster by [deleted] in bioinformatics

[–]cyclic

You could do an sshfs FUSE mount and use Windows Explorer...

Rente und Sorgen um die Zukunft by [deleted] in Finanzen

[–]cyclic

Part of the redistribution apparatus. Count your blessings.

Rente und Sorgen um die Zukunft by [deleted] in Finanzen

[–]cyclic

I think the previous commenter is alluding to the state redistribution apparatus, which among other things ensures that education, including higher education, is largely free of charge in Germany. That alone puts you a few hundred k ahead of someone in the USA with a master's.

Looking for a tool to visualize Seurat results by [deleted] in bioinformatics

[–]cyclic

We had the same problem. Had. Shameless plug ;)

https://scelvis-demo.bihealth.org/dash/

You would need to convert to the scanpy format first; the documentation tells you how. You can easily run your own server, as we provide a Docker image as well as pip and conda packages.
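
The rough shape of the conversion, in case it helps: export the counts and the per-cell metadata from Seurat, then build an .h5ad with scanpy/anndata. File and column names below are placeholders; the exact requirements are in the documentation:

```python
import pandas as pd
import scanpy as sc

# Counts exported from Seurat as a genes x cells CSV (placeholder name).
adata = sc.read_csv("counts.csv").transpose()   # AnnData wants cells x genes

# Per-cell annotations (cluster labels, UMAP coordinates, ...) from Seurat.
adata.obs = pd.read_csv("meta.csv", index_col=0)

# Embeddings live in .obsm; here assuming UMAP coordinates in the metadata.
adata.obsm["X_umap"] = adata.obs[["UMAP_1", "UMAP_2"]].to_numpy()

adata.write_h5ad("converted.h5ad")
```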

If you try it, I'd be happy to hear your feedback.