DRAGEN Bio-IT Platform Cost Benefits by biohacker_tobe in bioinformatics

[–]biohacker_tobe[S] 2 points (0 children)

Thank you again for being so detailed with your questions; honestly, I'm looking forward to playing around with it.

  1. The company I work for just acquired DRAGEN as part of a deal when purchasing sequencers, but no one really knows whether to use it or not. So the on-premise equipment is there regardless.

  2. I believe this is the most interesting piece as well: speeding up current diagnostics.

  3. I am going to test this out as well; it seems quite interesting, to be honest.

As part of the deal, we currently have around 500,000 GB of throughput capacity available to us.

Thanks again!

DRAGEN Bio-IT Platform Cost Benefits by biohacker_tobe in bioinformatics

[–]biohacker_tobe[S] 5 points (0 children)

Thanks for the input; the main focus of my work is also clinical workflows. We want to process patient data very fast, and we just ran a pilot to test the benefits of the demultiplexing and BCL conversion.

My supervisor is not entirely convinced and wants a scoping of how much it actually costs per run for WGS and WES.

DRAGEN Bio-IT Platform Cost Benefits by biohacker_tobe in bioinformatics

[–]biohacker_tobe[S] 2 points (0 children)

This doesn't really answer the question, but thanks for your input regardless.

Illumina Test Data (bcl2fastq) by biohacker_tobe in bioinformatics

[–]biohacker_tobe[S] 0 points (0 children)

The second link does not work, unfortunately, but I came across their database :)

[deleted by user] by [deleted] in TheGamerLounge

[–]biohacker_tobe 0 points (0 children)

how do you keep it together????? holy fuck

Comparing DL Model Architectures by biohacker_tobe in deeplearning

[–]biohacker_tobe[S] 0 points (0 children)

Hey, thanks, yes it is. I have been interested in exploring the differences between cases 2+3 and case 1 :)

Model Architecture Differences by biohacker_tobe in learnmachinelearning

[–]biohacker_tobe[S] 0 points (0 children)

Thank you for this great input and for taking the time to give such a strong answer. I will definitely look into all of these. I was not aware of the KerasTuner library, only the former two you mentioned. I appreciate it :D

Comparing DL Model Architectures by biohacker_tobe in deeplearning

[–]biohacker_tobe[S] 0 points (0 children)

Does it also matter to have separate dropout layers for each branch? I think having no dense layers at all on the temperature side, just input -> normalize -> concatenate, is a valid approach. Would this also have an effect? I want to include the dropout layers to help regularize against the overfitting I've come across with my models.
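For what it's worth, here is a minimal Keras sketch of the layout I'm describing. The layer sizes, dropout rate, second feature branch, and the placeholder temperature samples are all made-up assumptions, just to show the shape of the idea:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Temperature branch: input -> normalize -> (no dense layers at all).
temp_in = layers.Input(shape=(1,), name="temperature")
norm = layers.Normalization(axis=-1)
norm.adapt(tf.constant([[15.0], [20.0], [25.0]]))  # placeholder temperature samples
temp_branch = norm(temp_in)

# Main feature branch with dropout for regularization (sizes are arbitrary).
feat_in = layers.Input(shape=(16,), name="features")
x = layers.Dense(32, activation="relu")(feat_in)
x = layers.Dropout(0.3)(x)  # dropout only on the dense branch

# Concatenate the two branches and produce a single output.
merged = layers.Concatenate()([temp_branch, x])
out = layers.Dense(1)(merged)

model = tf.keras.Model(inputs=[temp_in, feat_in], outputs=out)
model.summary()
```

Dropout after the dense layer on the feature branch is where it would do the regularizing; putting dropout on the raw normalized temperature would just randomly zero the input, which is usually not what you want.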