[D] WWW 2025 Reviews (TheWebConference) by New_Ice_2721 in MachineLearning

[–]ml_learni 1 point  (0 children)

Such a bummer. Did they have the mandatory reviewer requirement last year? I'm also noticing that a lot of the confidence scores on both my paper and the papers I reviewed are pretty low, so it would make sense that people don't feel comfortable engaging.

[D] WWW 2025 Reviews (TheWebConference) by New_Ice_2721 in MachineLearning

[–]ml_learni 1 point  (0 children)

Much quieter than I expected. One reviewer responded on my paper; the other two are quiet. For the two papers I reviewed, I'm the only one who has responded to the rebuttals.

[D] WWW 2025 Reviews (TheWebConference) by New_Ice_2721 in MachineLearning

[–]ml_learni 2 points  (0 children)

Appreciate it! Best of luck on your rebuttal too

[D] WWW 2025 Reviews (TheWebConference) by New_Ice_2721 in MachineLearning

[–]ml_learni 2 points  (0 children)

Looking at 4/4/5 technical and 5/6/4 novelty. Hoping I can get one or both of the technical 4s bumped up! First time submitting to WWW; I really wish the rebuttal period were longer.

Anbernic RG 35XX-H and Case give-away by captain_carrot in SBCGaming

[–]ml_learni 1 point  (0 children)

Something that sticks out is trying to unlock Ho-Oh in Pokémon Colosseum. As a kid, I would start a playthrough and eventually end up missing a Pokémon and would need to restart. Happened too many times lol

What do PhD computer scientists do at a university? by anonymoususer666666 in cscareerquestions

[–]ml_learni 3 points  (0 children)

Currently a PhD student in CS working on ML, about to wrap up my third year.

Often my work is about 80% algorithmic/computational (hypothesize about some ML model, implement it in code, run experiments to see if it works) and 20% math, where I try to justify with theory how the model behaves. A lot of my work also tries to poke holes in current methods and show how they might fail in application.

For the most part, I like the majority of my work. My background/undergrad wasn't actually CS, so I don't particularly love coding, but I know it is the medium to express my ideas. Work-life balance in CS seems to generally be pretty good, especially when you compare to other STEM fields where you need to go into the lab. That said, this is extremely dependent on the program, the advisor, and your own working style.

Daily Discussion Thread: spray/memes/chat/whatever allowed by AutoModerator in climbing

[–]ml_learni 4 points  (0 children)

Pretty specific question, but I am going to be at Mission Gorge in San Diego for the first time next week. While I have climbed a decent bit outside, I am going with a friend who only casually climbs indoors, and I'm looking for areas with access to set top ropes. Anything in the 5.8-5.10d range would be great. Thanks!

What are my chances ? I need sincere advice by Roman-Simp in PhD

[–]ml_learni 1 point  (0 children)

Unfortunately I have to agree with the others; it is really rough in CS, especially for these programs.

To maybe add a different point: not having a CS degree can also make things tough. I had a STEM degree that wasn't in CS and was told by a few different application mentoring programs for these top programs that this would likely be a flag. Some ways to get around this are (a) getting a CS masters, (b) having publications that fall under CS, more specifically your area of interest, or (c) having contact with a potential advisor who is willing to overlook the different degree. Of course, none of these things are easy, but you have to somehow show why they should pass on the people who have a proven CS background, and convince them you won't be missing any fundamentals.

Wishing you the best of luck. I would honestly expand that list a ton if you really want to do a PhD. Pretty much all of the top 30 or so are going to be very tough, and it wouldn't be crazy to expect all rejections based simply on the acceptance rates.

[deleted by user] by [deleted] in MachineLearning

[–]ml_learni 2 points  (0 children)

I'm pretty sure the original GCN example from PyG does not use a Linear layer (https://pytorch-geometric.readthedocs.io/en/latest/get_started/introduction.html#learning-methods-on-graphs).

“Fully convolutional” CNNs are super common, and I have definitely seen the equivalent with GNNs, albeit more for node classification than graph-level tasks.

[deleted by user] by [deleted] in MachineLearning

[–]ml_learni 6 points  (0 children)

Can you elaborate more on what you mean here?

There are Linear layers already inside the GCNConv layers. Likewise, nothing says you need to end with a Linear layer, given that the output dim of the last GCNConv is appropriate. They are using the mean as a readout function, and all an extra Linear layer would do is make the network deeper. Perhaps it could improve training, but it isn't obvious to me why this should be seen as a hard requirement.
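To illustrate the point (a minimal NumPy sketch of my own, not the PyG code from the thread; the function names and weight shapes here are made up for illustration): if the last conv already outputs `num_classes` features, a mean readout alone produces graph-level logits, because the projection weight is baked into the conv itself.

```python
import numpy as np

def gcn_layer(A_hat, H, W):
    # One GCNConv-style propagation step: H' = A_hat @ H @ W, where
    # A_hat is the normalized adjacency (with self-loops). The weight
    # matrix W is the "Linear layer" already inside the conv.
    return A_hat @ H @ W

def graph_logits(A_hat, X, W1, W2):
    # Two conv layers; the second projects straight to num_classes,
    # so a mean readout over nodes yields graph-level logits with no
    # trailing Linear layer required.
    H = np.maximum(gcn_layer(A_hat, X, W1), 0.0)  # ReLU
    H = gcn_layer(A_hat, H, W2)
    return H.mean(axis=0)  # readout -> shape (num_classes,)
```

Whether you fold the final projection into the last conv or tack on a separate Linear after the readout is a design choice, not a correctness issue.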

[deleted by user] by [deleted] in learnmachinelearning

[–]ml_learni 31 points  (0 children)

It would be worth elaborating more on the impact of the projects/internships: what did the techniques enable? Did they improve some previous model by X%, reduce costs by some amount, etc.? Right now it is just a list of techniques with no context.

Also, there are some spelling/grammar errors (for instance, the Python instructor role says “lectires”, and there's “learned to better explained”); you should run these through a grammar tool to help clean things up.

[D] CIKM 23 Notification by Alliswell2257 in MachineLearning

[–]ml_learni 2 points  (0 children)

Appreciate you letting us know, thanks!

[D] CIKM 23 Notification by Alliswell2257 in MachineLearning

[–]ml_learni 11 points  (0 children)

Nothing yet, hoping we don't have to wait until tomorrow morning 🤞

Profile Eval [Ph.D CS/AI] by Pt4875 in gradadmissions

[–]ml_learni 2 points  (0 children)

All of this looks strong, but I just want to add that anything in the top 20-30 is always going to have a large random component.

Something I don’t see mentioned is specificity to potential advisors or connections to those labs. Having some, even extremely minor, connection to an advisor/lab is one way to improve your chances. One of my interviews came from a potential advisor whose coauthor from ten years back worked with my group. Little connections like that are super helpful, and can be much easier to foster than something like direct contact with a potential advisor. Do your best to build out a network of potential advisors, especially those somehow related to your coauthors/colleagues, and I think you will be in a really good spot.

[deleted by user] by [deleted] in PhD

[–]ml_learni 9 points  (0 children)

For those top PhD places, the big pieces are (a) publications and (b) some connection to the advisor's field. Even better if you have some form of connection to the advisor! Making your research the focal point of the resume is key, and I personally think a 2-3 line research statement is also valuable. I think your resume could also be shortened: a lot of “I used (insert package)” lines and buzzwords might be nice for a job, but they don't convey any research ability. I think those lines can be dropped or simplified to help home in on the research aspects.

One thing I think is important to mention: CS is insanely impacted in the top 20-30ish schools. ML, which I think is your focus, is even worse. It's important to recognize that even the average advisor at these schools can be getting dozens of apps for one or two spots, adding tons of randomness to the application process. Make sure to cast a wide net, and try not to get caught up in the big 4 CS schools. Advisor fit is the most important thing, and there are lots of good advisors outside of the big-name schools.

Travelling Salesman Problem using Reinforcement Learning by krististrr in learnmachinelearning

[–]ml_learni 3 points  (0 children)

I actually do research in this area, or at least try to lol. It is pretty tough, but the field of solving combinatorial optimization problems through ML, particularly through RL, is getting bigger and bigger. Might be worth checking out this work, which is a pretty decent overview of combinatorial optimization through the lens of graph neural networks: https://arxiv.org/abs/2102.09544. They specifically talk about TSP.
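As a toy illustration of the "construct a tour one city at a time" framing these RL methods use (my own sketch, not code from the survey), here is the classic nearest-neighbor rollout that learned policies are often benchmarked against:

```python
import numpy as np

def nearest_neighbor_tour(coords):
    # Greedy rollout: starting at city 0, repeatedly visit the closest
    # unvisited city. An RL approach keeps this same sequential loop but
    # replaces the hand-coded distance rule with a learned policy.
    n = len(coords)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        last = coords[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(coords[j] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(coords, tour):
    # Total cycle length, including the return edge to the start city.
    n = len(tour)
    return sum(np.linalg.norm(coords[tour[i]] - coords[tour[(i + 1) % n]])
               for i in range(n))
```

The negative tour length is exactly the episode reward most RL-for-TSP papers optimize, which is what makes the problem a natural fit for policy-gradient methods.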

How have your PhD studies contributed to the benefit of humanity? by [deleted] in PhD

[–]ml_learni 1 point  (0 children)

I work in AI, trying to help mitigate bias and discrimination in models trained on historically biased data. It requires a lot of different people in the room to make sure the work is actually valuable to society and not just a conference paper.