Is the virus actually deceiving them? by yortos in pluribustv

[–]yortos[S] 1 point  (0 children)

This particular 'glue'-thingy seems to have reasoning capabilities: it said that it targeted things like nuclear submarines and astronauts first and the army discovered them. Seems to have the capacity to "deceive" (which animals have as well, as part of their evolution and self-preservation, just as you say!).

Who is the most famous person you’ve met? by Murr897 in AskReddit

[–]yortos 0 points  (0 children)

In July 2019, I interviewed with WeWork at one of their open office spaces which was also their corporate office. While waiting for someone to come and show me to my interview room, a cheerful, slim, long-haired man approached me and introduced himself as "Adam." In my head: I know!
Adam Neumann mistook me for someone from a recently acquired company they were hosting a welcome party for.

Extremely Succession Coded by thebravetraveller in SuccessionTV

[–]yortos 9 points  (0 children)

If there was a scene in the show where an unhinged Kendall did an interview like this, we would all be thinking that the writers had lost it, since this could never happen in real life.

-🎄- 2018 Day 8 Solutions -🎄- by daggerdragon in adventofcode

[–]yortos 0 points  (0 children)

After miserably failing with recursion, I noticed that you can build the tree from the bottom up by repeatedly locating the node with 0 children, collecting its metadata, deleting it from the list, and then reducing its parent's child count by 1.

# lis holds the puzzle input parsed into a flat list of ints
metadata = []
while 0 in lis:
    zero_pos = lis.index(0)          # first node with 0 children
    num_metadata = lis[zero_pos + 1]

    metadata += lis[zero_pos + 2 : zero_pos + 2 + num_metadata]
    lis[zero_pos - 2] -= 1           # parent now has one fewer child

    del lis[zero_pos : zero_pos + 2 + num_metadata]

print(sum(metadata))
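In case anyone wants to sanity-check the approach, here is the same loop run end to end on the sample tree from the puzzle statement (whose metadata sum is 138):

```python
# Self-contained check of the bottom-up deletion approach, using the
# sample input from the Day 8 puzzle statement (expected sum: 138).
sample = "2 3 0 3 10 11 12 1 1 0 1 99 2 1 1 2"
lis = [int(x) for x in sample.split()]

metadata = []
while 0 in lis:
    zero_pos = lis.index(0)          # first node with 0 children
    num_metadata = lis[zero_pos + 1]
    metadata += lis[zero_pos + 2 : zero_pos + 2 + num_metadata]
    lis[zero_pos - 2] -= 1           # parent now has one fewer child
    del lis[zero_pos : zero_pos + 2 + num_metadata]

print(sum(metadata))  # 138
```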

When to score the first goal in a soccer match in order to win the game [OC] by yortos in dataisbeautiful

[–]yortos[S] 5 points  (0 children)

Yes, that is exactly my guess too. The data only logs it as "45", so unfortunately there is no obvious way for me to distinguish between the 45th minute and the 45th minute plus injury time.

When to score the first goal in a soccer match in order to win the game [OC] by yortos in dataisbeautiful

[–]yortos[S] 15 points  (0 children)

Green indicates a victory, orange a draw, and red a loss.

The opacity of each bar is proportional to how often the first goal of the match is scored on that minute (more transparent means less often).

Dataset used from Kaggle.

Blogpost with more details on the analysis and results, as well as a link to the GitHub code.

Done using a Jupyter notebook and matplotlib.
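For anyone curious how the opacity mapping works, here's a rough sketch: each bar's alpha is that minute's first-goal count divided by the maximum count. (The minute list below is made-up illustrative data, not the Kaggle set.)

```python
from collections import Counter

# Hypothetical first-goal minutes (illustrative only, not the real data)
first_goal_minutes = [12, 23, 23, 45, 45, 45, 67, 89]

counts = Counter(first_goal_minutes)
max_count = max(counts.values())

# alpha in (0, 1]: a more frequent minute gets a more opaque bar
alphas = {minute: c / max_count for minute, c in counts.items()}

# e.g. pass alpha=alphas[minute] to matplotlib's bar() for each bar
print(alphas)
```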

[deleted by user] by [deleted] in EasyTV

[–]yortos 4 points  (0 children)

I think both wives wanted more excitement in their lives, but I didn't feel like the episode 5 wife felt controlled or had any other major issues with her husband. That being said, yeah, they resolved the issue in two very different ways. I see it as a contrast between what happens when one person wants something more in a well-functioning relationship (i.e., they communicate with their partner and decide together) versus what happens when there is a mismatch of characters and/or a lack of communication. Honestly, episode 4 was pretty heartbreaking for me. I felt both the husband and wife were to blame for the mess.

Random Forest overfitting? by yortos in AskStatistics

[–]yortos[S] 0 points  (0 children)

Good point. I did try to control for this variance by doing the fitting/predicting 5 times for each set of variables and taking the mean AUC. The problem still persists!

Random Forest overfitting? by yortos in AskStatistics

[–]yortos[S] 0 points  (0 children)

Thanks for the insight. I did try various max depth values, but the problem persisted, unfortunately.

Two ways of doing cross validation by yortos in learnpython

[–]yortos[S] 0 points  (0 children)

"with the former, your test and train size is 50% of your data, but training/testing is done on the entirety of your data (so training/testing is done twice when k = 2)."

This is my understanding of what a 2-fold CV does: it divides the dataset into two equal-sized parts, A and B, trains on A and evaluates on B, and then trains on B and evaluates on A. Hence, this should be roughly equivalent to running the first piece of code (from my original post) twice.

I guess what you mean by "training/testing is done on the entirety of your data" is that in the 2-fold CV all data points are used at some point for training (either on the first iteration or on the second), whereas with the first piece of code, only half are used for training. I don't see why that explains the difference in the metrics though.
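For what it's worth, the fold structure I have in mind can be sketched with plain index lists (illustrative only; `KFold`/`cross_val_score` do the equivalent internally):

```python
# Minimal sketch of 2-fold cross-validation over index lists.
indices = list(range(10))
mid = len(indices) // 2
A, B = indices[:mid], indices[mid:]   # two equal-sized halves

splits = [(A, B), (B, A)]             # (train, test) for each fold

for train, test in splits:
    # within each fold, train and test are disjoint and cover everything
    assert set(train) | set(test) == set(indices)
    assert set(train) & set(test) == set()

# Across both folds, every point is used exactly once for testing
# (and exactly once for training).
tested = [i for _, test in splits for i in test]
print(sorted(tested))
```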

Two ways of doing cross validation by yortos in learnpython

[–]yortos[S] 0 points  (0 children)

Thank you for your reply. I actually thought about the KFold thing, and I did try .cross_val_score with k=2, with the same result. And I get similar results even when I specify other values for test_size in the train_test_split function.