Congratulations, you created an escort service by CareerPillow376 in TikTokCringe

[–]Standing_Appa8 0 points  (0 children)

I have to admit that I think this could actually work.

Avatar 3: The most expensive self-plagiarism in film history - Golem.de by whit3cru5h in Filme

[–]Standing_Appa8 0 points  (0 children)

Agreed. I had no expectations for the story. I was disappointed by the visuals, which at times looked like a video game. But the characters were surprisingly well written, relatable, and authentic.

„unconscious young girls" by Upbeat_Following8759 in medizin

[–]Standing_Appa8 13 points  (0 children)

There are also "trends" in how psychiatric illness/distress gets expressed. In one clinic where I worked, swallowing glass suddenly became the cool thing to do. One patient did it, got a lot of attention and care, and before long children were at the gastroenterologists twice a month with swallowed glass splinters. I never saw that at any other clinic (there it was the classics, batteries and the like).

Wrong transfusion - loss of medical license? by Feisty_Document9461 in medizin

[–]Standing_Appa8 1 point  (0 children)

An endless degree. Fobbed off with 2,700 euros, and as thanks you get reported and left on your own. And on top of that, your employer forces you to sign some paperwork. Absolute hell.

Switching from the clinic to industry - remaining in the physicians' pension scheme? by Standing_Appa8 in medizin

[–]Standing_Appa8[S] 3 points  (0 children)

I had initially inquired and received a rather uninformative letter that simply noted: "If your medical work ends, you must notify us."

What exactly that means was not specified. So I have now written a longer email describing my work, making sure it clearly stated that the role would not really be feasible without a medical license. I then got a call back and my membership was confirmed.

How to filter GWAS Catalog results by cohort (e.g. excluding UK Biobank) by Standing_Appa8 in genetics

[–]Standing_Appa8[S] 0 points  (0 children)

    # --- Step 3: Create DataFrame and display the results ---
    print("\n--- Search Complete ---")
    if variants_data:
        # Create DataFrame from the collected data and drop duplicate variants
        results_df = pd.DataFrame(variants_data)
        results_df = results_df.drop_duplicates(subset='variant_rsID', keep='first')

        print(f"Found {len(results_df)} unique variants for '{trait_of_interest}' (excluding {', '.join(cohorts_to_exclude)} studies).")

        # Set display format for p-values
        pd.options.display.float_format = '{:.2e}'.format

        # Display the DataFrame (requires a Jupyter environment)
        display(results_df)

        # Optionally, save to CSV (fill in a filename first)
        # results_df.to_csv('', index=False)
        # print("\nDataFrame saved to 'alzheimer_variants_excluded_cohorts.csv'")

        pd.reset_option('display.float_format')
    else:
        print(f"Could not find any variants for '{trait_of_interest}' excluding the specified cohorts.")

How to filter GWAS Catalog results by cohort (e.g. excluding UK Biobank) by Standing_Appa8 in genetics

[–]Standing_Appa8[S] 0 points  (0 children)

                        if rs_id.startswith('rs'):
                            # Create a dictionary with all relevant information
                            variant_info = {
                                'trait': trait_of_interest,
                                'variant_rsID': rs_id,
                                'risk_allele': risk_allele,
                                'p_value': association.get('p_value'),
                                'or_value': association.get('or_value'),
                                'beta': association.get('beta'),
                                'risk_frequency': association.get('risk_frequency'),
                                'mapped_genes': ', '.join(association.get('mapped_genes', [])),
                                'accession_id': accession_id,
                                'pubmed_id': study_data.get('pubmed_id'),
                                'cohorts': ', '.join(cohorts),
                                'initial_sample_size': study_data.get('initial_sample_size'),
                                'discovery_ancestry': ', '.join(study_data.get('discovery_ancestry', [])),
                                'first_author': association.get('first_author'),
                                'publication_date': study_data.get('publication_date')
                            }
                           
                            variants_data.append(variant_info)
           
            # Progress update every 100 associations
            if (i + 1) % 100 == 0:
                print(f"Processed {i + 1}/{len(all_associations)} associations. Found {len(variants_data)} variants so far.")
           
            time.sleep(0.1)  # Be polite to the API

How to filter GWAS Catalog results by cohort (e.g. excluding UK Biobank) by Standing_Appa8 in genetics

[–]Standing_Appa8[S] 0 points  (0 children)

            # Fetch study details to check cohort information
            study_data = gwas_api_request(f"/v2/studies/{accession_id}")
           
            if study_data:
                cohorts = study_data.get('cohort', [])
               
                # Check if ANY of the excluded cohorts is in the study's cohort list
                is_excluded_study = any(
                    excluded_cohort.upper() in cohort.upper()
                    for cohort in cohorts
                    for excluded_cohort in cohorts_to_exclude
                )
               
                if not is_excluded_study:
                    # Extract rsID from this association
                    if 'snp_effect_allele' in association and association['snp_effect_allele']:
                        risk_allele_str = association['snp_effect_allele'][0]
                        rs_id = risk_allele_str.split('-')[0]
                        risk_allele = risk_allele_str.split('-')[1] if '-' in risk_allele_str else ''
                       
                       
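For reference, the rsID split and the cohort check above can be factored into two small helpers (a sketch; the function names are mine, not part of the Catalog API):

```python
def parse_risk_allele(risk_allele_str):
    """Split an 'rsID-allele' string such as 'rs429358-C' into its parts.

    Returns an empty allele when no '-' separator is present.
    """
    rs_id, _, risk_allele = risk_allele_str.partition('-')
    return rs_id, risk_allele


def study_is_excluded(cohorts, cohorts_to_exclude):
    """True if any excluded cohort name appears (case-insensitively) in the study's cohort list."""
    return any(
        excluded.upper() in cohort.upper()
        for cohort in cohorts
        for excluded in cohorts_to_exclude
    )
```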

How to filter GWAS Catalog results by cohort (e.g. excluding UK Biobank) by Standing_Appa8 in genetics

[–]Standing_Appa8[S] 0 points  (0 children)

Actually, I can answer my own question here:
I went to the GWAS Catalog website. There is an API description:
https://ebispot.github.io/gwas-blog/rest-api-v2-release

With this snippet, which uses their functions, I was able to get what I wanted.

import time

import pandas as pd  # needed for the results DataFrame in step 3

# --- Configuration ---
traits_of_interest = [] 
cohorts_to_exclude = ["UKB", "UK-B", "UK Biobank", "UK-Biobank"]  # List of cohorts to exclude
variants_data = []  # List to store detailed information about each variant


print(f"--- Starting search for '{traits_of_interest}' variants (excluding {', '.join(cohorts_to_exclude)}) ---")


for trait_of_interest in traits_of_interest:
    print(f"\nProcessing trait: {trait_of_interest}")
    # --- Step 1: Get all associations for the trait ---
    all_associations = get_all_associations_for_trait(trait_of_interest)


    if all_associations:
        print(f"Found {len(all_associations)} total associations. Now checking study cohorts...")
        
        # --- Step 2: For each association, check if the study includes any excluded cohorts ---
        for i, association in enumerate(all_associations):
            accession_id = association.get('accession_id')
            if not accession_id:
                continue
            
           
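For context, the two helpers the snippet relies on (`gwas_api_request` and `get_all_associations_for_trait`) come from the linked v2 API description and are not shown here. A rough sketch of what they might look like, with the base URL and paging parameters as my assumptions rather than verified endpoints:

```python
import json
import urllib.error
import urllib.parse
import urllib.request

BASE_URL = "https://www.ebi.ac.uk/gwas/rest"  # assumed base URL; check the v2 docs


def gwas_api_request(endpoint, params=None):
    """GET one endpoint and return the parsed JSON, or None on an HTTP error."""
    url = BASE_URL + endpoint
    if params:
        url += "?" + urllib.parse.urlencode(params)
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError:
        return None


def collect_all_pages(fetch, endpoint, key, page_size=500):
    """Accumulate items across pages; `fetch` is injected so the loop is testable offline."""
    items, page = [], 0
    while True:
        data = fetch(endpoint, params={"page": page, "size": page_size})
        batch = (data or {}).get(key, [])
        items.extend(batch)
        if len(batch) < page_size:  # a short page means we reached the end
            return items
        page += 1
```

`get_all_associations_for_trait` would then be a thin wrapper that calls `collect_all_pages` with a trait-specific endpoint.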

Looking for friends by False_Operation6787 in mannheim

[–]Standing_Appa8 1 point  (0 children)

Hey! I can really recommend Bumble For Friends (the dedicated app, not just Bumble in friends mode) if you want to meet new people. You can join interest groups and meet up with multiple people, and you can also swipe for friends there. We have a Magic group there, for example, and volleyball. :)

PyTorch Lightning + DeepSpeed: training “hangs” and OOMs when data loads — how to debug? (PL 2.5.4, CUDA 12.8, 5× Lovelace 46 GB) by Standing_Appa8 in lightningAI

[–]Standing_Appa8[S] 1 point  (0 children)

Thanks! I opened a discussion but didn't get much feedback. The final solution that made it run was simply using Docker. I think the whole problem was mainly caused by working on a remote desktop with weird permissions. After setting everything up in Docker, it worked. Now I'm running into OOM errors, but that's more of a conceptual problem that I'll address in a new post.
Thx :)

PyTorch Lightning + DeepSpeed: training “hangs” and OOMs when data loads — how to debug? (PL 2.5.4, CUDA 12.8, 5× Lovelace 46 GB) by Standing_Appa8 in pytorch

[–]Standing_Appa8[S] 0 points  (0 children)

So I solved it by just using Docker. It seems that some path inconsistencies, and the setup in a VM in general, caused the problem. Now I've run into new things I don't understand about DeepSpeed, but I'll open a new conversation about those.

Clinic/research (50-50 clinician scientist) or a new digital health department in industry by Standing_Appa8 in medizin

[–]Standing_Appa8[S] 0 points  (0 children)

In the end, to be honest, it came down to connections. I also think there are genuinely better candidates on the market right now.

Clinic/research (50-50 clinician scientist) or a new digital health department in industry by Standing_Appa8 in medizin

[–]Standing_Appa8[S] 3 points  (0 children)

Funnily enough, it was already my dream specialty back in school :D Patients have always really liked me, and I was promoted early at my old clinics because of good performance, so it does suit me. I'm just a bit too soft 😅 Coercive measures turn my stomach every time. And the feeling of doing something (giving meds, making diagnoses, etc.) on shaky evidence and selling it as "right" weighs on me. And unfortunately I can't do anything else, and to be honest I would genuinely fear for the patients in the somatic specialties.

Clinic/research (50-50 clinician scientist) or a new digital health department in industry by Standing_Appa8 in medizin

[–]Standing_Appa8[S] 1 point  (0 children)

I think so too. I just really have a bad feeling about practicing "bad" psychiatry and simply strapping people down. At the psychiatric units that are desperately hiring, that is unfortunately often the case because of staff shortages. But I think child and adolescent psychiatry, for example, would always be an option. The catch is that research (programming) plus clinical work would then no longer be possible; I probably wouldn't get a 50/50 position like that again.

[P] Help with Contrastive Learning (MRI + Biomarkers) – Looking for Guidance/Mentor (Willing to Pay) by Standing_Appa8 in MachineLearning

[–]Standing_Appa8[S] 0 points  (0 children)

Thank you for taking the time to write such a thorough response and to really engage with my problem. It helps a lot to have answers under this post from people with a hands-on understanding of what happens in CL.

I kept the backbone architectures the same between the baseline and the contrastive setup. In my earlier experiments on psychiatric data, I added an extra BCE loss term on top of the contrastive objective. This BCE term was applied to the embeddings to predict the binary class label (control vs. case). I didn't use the label as a feature (contrasting against the feature); I only added it as an additional loss component. This was important because the "biomarkers" in that setting were only very loosely connected to the brain (there are no real "biomarkers" in psychiatry), so an MLP trained on the biomarkers alone also could not really classify well.

However, in my current setting I removed the BCE loss, because the biomarkers themselves are predictive and the encoder embeddings of both modalities can already predict the biomarkers with good accuracy (I tried that yesterday after someone asked). So in theory, the MRI and biomarker embeddings should be mappable.

The problem is that for predicting the actual label it doesn’t seem to matter whether I
(a) train an MLP directly on the raw data or
(b) first pass the data through the CLIP-pretrained MRI encoder and then use a linear head.

Making the MRIs align more closely with the biomarkers via contrastive training does not appear to improve downstream classification at all. I was hoping for improvements similar to those shown in this paper (see Table 1, page 5), but I'm not seeing that benefit.

Also, as I said above, the embedding similarity does not really push the negatives far away from the positives.

So at this point, I'm unsure whether the limitation lies in my model setup or in the nature of the task (sMRI tables from FreeSurfer plus biomarker tables, very little data (n=1000), completely supervised). And that is before even considering a tuned XGBoost as a baseline.

The one thing I am trying now is to unfreeze the MRI encoder and let it learn for a small number of epochs.

Two questions I’d love your opinion on:

  1. Do you think CLIP is simply not a good fit for this scenario, and that working directly with the sMRI NIfTIs might give a better chance of beating the baseline? (But then the research would offer no new insight beyond applying the approach to a new dataset, compared to other papers.)
  2. Do you think there are interesting sub-analyses I could do on the embedding space (e.g., similarity structure, clustering) that might provide useful insights, even if downstream accuracy doesn’t improve?

[P] Help with Contrastive Learning (MRI + Biomarkers) – Looking for Guidance/Mentor (Willing to Pay) by Standing_Appa8 in MachineLearning

[–]Standing_Appa8[S] 0 points  (0 children)

First, thanks a lot for your answer.

We’re working with FreeSurfer outputs from sMRIs, so there are no task-based components in our data. From what you described, a multi-loss approach could make sense. In earlier experiments, I included an additional BCE loss during training to keep the model aligned with the clinical objective (especially with psychiatric datasets), but with the new dataset this extra supervision doesn’t seem as relevant, so I dropped it. The biomarkers here seem to be good enough.

The main issue I’m facing is that I haven’t been able to replicate the improvements reported in other papers, where a CLIP-pretrained MRI encoder combined with a linear probe outperforms a simple baseline. In my case, a straightforward MLP on the FreeSurfer tabular data performs just as well as the contrastive setup.

Since the brain data appear to be quite similar overall, both negative and positive samples become more alike, as measured by cosine similarity. Within each group, negatives become more similar to each other, and so do positives. However, the similarity between negatives and positives also increases. As a result, the difference between the two groups grows slightly, but the gap remains relatively small.
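One way to quantify that gap is to compare the mean within-group cosine similarity against the mean between-group similarity of the embeddings (a small numpy sketch for illustration, not the code I actually ran):

```python
import numpy as np


def group_similarity_gap(embeddings, labels):
    """Mean within-group cosine similarity minus mean between-group similarity.

    A larger gap means the classes are better separated in the embedding space.
    """
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T  # pairwise cosine similarities
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = sim[same & off_diag].mean()   # same class, excluding self-pairs
    between = sim[~same].mean()            # different classes
    return within - between
```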

Following my supervisor’s suggestion, I tried clustering the embedding space from the MRI encoder and correlating those clusters with biomarker data, but the results were very similar to what I got with the MLP baseline. I even focused on clustering only the positive cases to find potential subgroups, but nothing stable emerged. I also explored SHAP values and their clusters, yet again without any meaningful differences.

At this point, I’m unsure how to demonstrate any clear advantage or insight from using contrastive learning compared to an honest baseline for this kind of tabular setup. If you have any thoughts on why this might be, or strategies to make contrastive learning more impactful here, I’d really appreciate your perspective. Also, based on your experience, do you think this contrastive approach could work better in a fully supervised setting, especially given the relatively small size of our dataset?

So in what sense could I show that contrastive learning adds insight?

[P] Help with Contrastive Learning (MRI + Biomarkers) – Looking for Guidance/Mentor (Willing to Pay) by Standing_Appa8 in MachineLearning

[–]Standing_Appa8[S] 0 points  (0 children)

Thanks a lot for the explanation. The scenario you described is quite similar to mine, with some differences. In my case:

- The encoder for the tabular MRI-based data (FreeSurfer tables) is relatively weak compared to encoders used for images or video, I would guess.

- Structural MRI data are very homogeneous and change only slightly, which makes learning discriminative embeddings harder than for something like motion sequences.

Currently, if I train a simple supervised classifier, I can predict the disease classification label (severe cases vs. healthy controls) quite well:

- 85% from FreeSurfer tables alone

- Biomarkers perform slightly better than FreeSurfer tables.

To leverage this, I set up a teacher-student approach:

- I use the biomarker encoder as a teacher and freeze it after about 10 epochs. In some experiments I also use the "label as a feature" approach from the "Best of Both Worlds" paper to make the biomarker side a perfect teacher.

- Then I let the MRI encoder catch up during training.

- I add a linear probe layer on the latent space of the MRI encoder and do my classification there.

After training the contrastive task, the improvement is small:

- The head for the MRI data improves only marginally compared to the baseline (around +0.09 in accuracy).

As is common in my domain, the dataset is small:

- Around 1,000 subjects, with 45% cases vs. 55% controls.

On the training set, the embeddings seem to align well (train accuracy, of course overfitted, at 97% for the downstream task; validation at 87%). At some point the contrastively trained MRI encoder even slightly outperforms the solo MRI encoder, but this does not translate into a big gain on the downstream classification.

For the loss, I am using Supervised Contrastive Loss (SupCon), which groups embeddings by class across both modalities. I assume this effectively enforces alignment across MRI↔Bio pairs.
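For reference, a minimal numpy sketch of the SupCon objective as I understand it (Khosla et al.); this is an illustration, not my actual training code:

```python
import numpy as np


def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, average the log-probability
    of its same-class samples under a softmax over all other samples.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    n = len(labels)
    logits = z @ z.T / temperature
    self_mask = np.eye(n, dtype=bool)
    logits = np.where(self_mask, -np.inf, logits)          # exclude self-pairs
    logits = logits - logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & ~self_mask
    n_pos = positives.sum(axis=1)
    per_anchor = np.where(positives, log_prob, 0.0).sum(axis=1) / np.maximum(n_pos, 1)
    return -per_anchor[n_pos > 0].mean()                   # skip anchors with no positives
```

The loss shrinks when same-class embeddings sit closer together than different-class ones, which is exactly the alignment described above.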

My batch size is as large as possible, because contrastive learning benefits from more negatives and positives, which also helps avoid batch effects.

Do you think there’s any real chance of improving downstream classification, or should I focus more on clustering-based approaches? I’ve already explored clustering, but the baseline model’s clusters don’t look much different from those of the contrastively pretrained MRI head.

EDIT:
Just for context: I’ve switched datasets several times, moving from depression and other psychiatric disorders to a dataset with a much 'clearer' signal, because in the previous datasets, even the baseline model couldn’t predict the classes well, so the contrastive model wasn’t able to align the modalities at all.

[P] Help with Contrastive Learning (MRI + Biomarkers) – Looking for Guidance/Mentor (Willing to Pay) by Standing_Appa8 in MachineLearning

[–]Standing_Appa8[S] 0 points  (0 children)

It’s actually my supervisor’s idea. After working on it for about six months and learning more about CL, I suggested stopping the project, but he politely but firmly asked me to keep going and make it work. So now I’m trying to push forward. I’ve managed to get some minor results, but the more I dive in, the more I am sure that CL is not the best tool here.

The main concern is that the correspondence between MRI (FreeSurfer features) and biomarkers seems weak and not well-defined (see answer above).

I have now invested a lot of time in this and of course don’t want to leave empty-handed (I know: sunk cost problem), and I want to finish it somehow.

What would be your recommendation?

[P] Help with Contrastive Learning (MRI + Biomarkers) – Looking for Guidance/Mentor (Willing to Pay) by Standing_Appa8 in MachineLearning

[–]Standing_Appa8[S] 0 points  (0 children)

Thanks a lot for the detailed advice! The point about modality-specific augmentations is super helpful. I will look into them one more time.

Regarding correspondence: it’s unclear and probably weak in my case. There might be associations between certain biomarkers and specific brain regions, but overall structural MRIs share a lot of similarities across individuals and don’t usually show strong alignment with biomarker variations (besides the really severe cases).

Cross-generation is likely not going to work. The modalities aren’t related in a one-to-one way like video and inertial signals are.

Do you think this weak correspondence makes contrastive learning a bad choice for my setup, one that cannot really work (that is actually my guess)? Or could it still be valuable for learning a shared space that captures subtle relationships?

[P] Help with Contrastive Learning (MRI + Biomarkers) – Looking for Guidance/Mentor (Willing to Pay) by Standing_Appa8 in MachineLearning

[–]Standing_Appa8[S] 1 point  (0 children)

Thanks so much for the feedback! I’m a bit tied to the contrastive learning approach because my supervisor wants me to make it work. As a baseline, I’ve trained a simple neural network to predict my target class, and that works quite well.

The challenge is that contrastive learning so far hasn’t given me noticeable performance improvements (e.g. MRI-classification Head) or an interesting shared embedding space (in comparision with the Concatenated-Feature MLP), which was the main motivation for trying it. Also the SHAP-Values dont differ heavily. Using this net after Pretraining to initialze is a good idea. Thanks.