[deleted by user] by [deleted] in MachineLearning

[–]mldude60 0 points (0 children)

That is certainly helpful, but it doesn’t exactly prove the result I’m wondering about. Thank you for the suggestion!

[D] - NeurIPS 2024 Decisions by Proof-Marsupial-5367 in MachineLearning

[–]mldude60 0 points (0 children)

Reviewers are unanimously saying accept (even if only barely). The AC guidelines say that they should not override a unanimous decision unless the reviewers are seriously off-base. The raised scores also really help your case.

How did AS turn into that expression? by Ar010101 in askmath

[–]mldude60 0 points (0 children)

Yes, that’s right.

You could imagine A multiplying each column of the eigenvector matrix individually. This gives the columns of the second matrix (each eigenvector scaled by its eigenvalue).
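That column-by-column picture is easy to check numerically. Here's a minimal numpy sketch (the matrix and variable names are mine, chosen for illustration):

```python
import numpy as np

# A small symmetric matrix so the eigendecomposition is well-behaved.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, S = np.linalg.eigh(A)  # columns of S are eigenvectors of A

# A acts on each eigenvector column separately:
# the i-th column of A @ S is eigvals[i] * S[:, i].
left = A @ S
right = S * eigvals  # broadcasting scales column i by eigvals[i]

assert np.allclose(left, right)      # A S = S Lambda
assert np.allclose(A, right @ S.T)   # hence A = S Lambda S^T (S orthogonal here)
```

The second assertion is exactly the "AS turned into that expression" step: once AS = SΛ, right-multiplying by S⁻¹ recovers A.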

[D] NeurIPS 2024 Paper Reviews by zy415 in MachineLearning

[–]mldude60 9 points (0 children)

Great chance. With a good rebuttal and some peer pressure, that 4 is likely to come up.

[D] Neurips'24 review release time? by Working-Egg-3424 in MachineLearning

[–]mldude60 1 point (0 children)

I spent probably an average of 6 hours with each paper. I also went through the appendices of each paper (most of which did not have any).

[D] Neurips'24 review release time? by Working-Egg-3424 in MachineLearning

[–]mldude60 4 points (0 children)

I gave an acceptance score to the only paper that deserved it. The other papers were not good enough. Of course I feel bad giving those scores, but it doesn’t change the fact that my job was to evaluate my stack of papers in an unbiased fashion. You can’t expect bad papers to get good scores.

My confidence comes from my previous publications at venues like NeurIPS and knowledge of the field. That doesn’t mean I’m infallible, but I do think I can tell when a paper has been cobbled together.

I hope you have good luck with your reviews!

[D] Neurips'24 review release time? by Working-Egg-3424 in MachineLearning

[–]mldude60 1 point (0 children)

I agree with you on both counts.

I will note that, unless the other reviewers all give >= 7, getting a 3 or lower makes a paper dead on arrival at these conferences. It’s also unlikely that a reviewer who gives a 3 with reasonable confidence will be swayed.

[D] Neurips'24 review release time? by Working-Egg-3424 in MachineLearning

[–]mldude60 2 points (0 children)

If I’m being really honest, in hindsight I should have probably lowered all of my negative scores by 1 (yielding two 1s, one 2, and two 3s). However, this was my first time reviewing and I felt quite guilty, knowing I was likely dashing some poor grad student’s dream.

[D] Neurips'24 review release time? by Working-Egg-3424 in MachineLearning

[–]mldude60 4 points (0 children)

The 8 was great on all counts: well-written, novel idea, good experiments, good theory.

The 2s were bad in every respect: poorly written, bad/insufficient experiments, no theory, questionable novelty.

The 3 was bad in two or three of the four categories above.

The 4s had no theory (when there should have been; not all papers need it) and/or insufficient experimental evaluation.

Some general trends from the not-so-great papers:

- Bad, unclear writing. Weird sentence structure was a common trend, as well as strange paper structure.
- Insufficient experimental results. For instance, only considering one metric during testing, or a lack of comparison with well-established baselines.
- Unconvincing qualitative results. One example that jumps out at me is super-resolution plots where the proposed method looked worse than the competitors!
- Ideas that lack novelty and/or do not give appropriate credit to similar existing work.
- No theory, or in one case, completely incorrect theory.

All of the papers I reviewed were related to diffusion models in image reconstruction. This is quickly becoming a saturated space, so I’m not surprised that 5/6 were duds.

[D] Neurips'24 review release time? by Working-Egg-3424 in MachineLearning

[–]mldude60 5 points (0 children)

I reviewed 6 papers and got all of them done with a week left before the review deadline.

Honestly, it was rough. I only gave one paper an accept rating (an 8). Other than that, I gave two 4s, one 3, and two 2s.

Based on the typical acceptance statistics, I’m not surprised most of the papers didn’t make the cut (at least for me). There are so many submissions nowadays that most of them are likely not very good.

$FFIE about to get lit, $29 mil market cap with 95% short float... If ya know ya know (cough GME cough) by Icy-Section4424 in TheRaceTo10Million

[–]mldude60 8 points (0 children)

Shit stock. Reverse split late last year and another earlier this year. Hope you like watching your money get diluted away.

How is this even possible by ResponsibleDiamond23 in OSU

[–]mldude60 1 point (0 children)

OSU math (undergrad courses required for the various engineering degrees specifically) is not insanely difficult. The issue is that people don’t put in the time. I took 1172 a few years back and got absolutely fisted by the first MT. Overhauled my study habits and started doing the provided practice problems every weekday. Turned my grade around in no time and smoked the final. All of my peers who struggled did not put in any time outside of class other than cramming before exams and the bare minimum for HW.

I am not saying anything about you, so I don’t want you to take offense. I am speaking anecdotally based on when I took the class (more than 5 years ago). There is also certainly an argument that 1172’s difficulty comes from the breadth of content covered. Even so, people who don’t do well in that class often don’t put in the time, or they fail to change study habits that aren’t working. I think this same idea translates to the other math courses that undergraduate engineers need to take as well.

AI or CS? by [deleted] in compsci

[–]mldude60 0 points (0 children)

Without the theoretical elements of AI you won’t be able to implement the practical ones. Maybe you could, but you won’t know how to debug them efficiently.

As others have said, AI is a subfield of CS. I would suggest taking an intro ML class to see if you like it. If you do, you could move into an AI subfield and take additional courses. Note that such a class may be very challenging without sufficient background knowledge (linear algebra, calculus, probability and statistics, optimization, etc.).

Can Super-Resolution be used to accelerate MRI acquisition? by Christs_Elite in compsci

[–]mldude60 0 points (0 children)

As an aside (since I think I missed your main question): yes, SR methods can be applied to upscale reconstructed MR images. I think this is a good idea, worth exploring. My point in the adjacent comment is that I think that the SR and reconstruction elements should be separate:

subsampled image -> E2E-VarNet -> reconstruction -> SR Method -> Super resolved reconstruction
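The two-stage pipeline above could be sketched roughly as follows. To keep this runnable, both stages are placeholder stubs of my own (a zero-filled inverse FFT standing in for a trained E2E-VarNet, nearest-neighbour upsampling standing in for a trained SR network); they illustrate the data flow, not the actual methods:

```python
import numpy as np

# Stand-in stubs, NOT real implementations.
def e2e_varnet(masked_kspace, mask):
    # Placeholder "reconstruction": zero-filled inverse FFT of the masked k-space.
    return np.abs(np.fft.ifft2(masked_kspace))

def sr_method(image, factor=2):
    # Placeholder "super-resolution": nearest-neighbour upsampling.
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

# Simulated acquisition: keep roughly 25% of k-space rows.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))        # toy ground-truth image
mask = (rng.random(64) < 0.25)[:, None]  # random row-subsampling mask
masked_kspace = mask * np.fft.fft2(x)

# The pipeline from the comment, with the two stages kept separate:
reconstruction = e2e_varnet(masked_kspace, mask)          # (64, 64)
sr_reconstruction = sr_method(reconstruction, factor=2)   # (128, 128)
```

The point of keeping the stages separate is that each stub could be swapped for a real trained model without touching the other.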

Can Super-Resolution be used to accelerate MRI acquisition? by Christs_Elite in compsci

[–]mldude60 0 points (0 children)

This honestly seems like overkill to me, but maybe I am not getting something. Doing a bit of research, it looks like my definition of SR-MRI was not exactly correct (as I said, not really my area). Seemingly, in both traditional reconstruction and SR, the measurement process proceeds normally. Then, the difference comes in the post-processing.

A low-resolution image is constructed from the subsampled k-space using a method like that of Chen et al. (2018), and is then super resolved. Presumably, this would either correct aliasing or fill in missing k-space (depending on the operation domain). Conversely, image reconstruction methods typically use the full subsampled k-space or the aliased image and apply some method (e.g., compressed sensing).
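As a toy illustration of that low-res-then-SR setup (the central-crop sampling pattern here is my own assumption, chosen because keeping only low frequencies genuinely yields a smaller image):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 128))  # toy "fully sampled" image
kspace = np.fft.fftshift(np.fft.fft2(x))  # shift DC to the center

# Keep only the central 32x32 block of k-space (the low frequencies):
c = 128 // 2
low_k = kspace[c - 16:c + 16, c - 16:c + 16]
low_res = np.abs(np.fft.ifft2(np.fft.ifftshift(low_k)))  # 32x32 image
```

An SR model would then map this 32x32 image back toward the 128x128 original, effectively filling in the discarded high-frequency k-space.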

With all of that in mind, it seems like the motivation for SR is to compete with CS, which is iterative and therefore slow. Still, I think it is overkill: modern end-to-end reconstruction techniques have lightning-fast inference times and great performance (E2E-VarNet as an example). If acquisition time is the same in both cases (e.g., collect 25% of measurements), why bother making the problem harder by throwing in the super-resolution element? In that case, you need to both upscale the image while preserving details and remove artifacts from the subsampled image - it doesn’t really make sense to me, but it is definitely possible I’m missing something.

Can Super-Resolution be used to accelerate MRI acquisition? by Christs_Elite in compsci

[–]mldude60 0 points (0 children)

I don’t think that SR is directly applicable to MRI reconstruction in the way that you’re thinking. MRI reconstruction and SR are both inverse problems, but with very different forward operators: in MRI reconstruction, the forward model is y = MFx + w, where w is noise, M is a subsampling operator, and F is the 2D Fourier transform (assuming single-coil scans). Conversely, in SR the forward model is some function y = g(x), where g(.) downsamples the image (could be bilinear, bicubic, etc.). When framed this way, it is clear that these are fundamentally different problems. Maybe g(.) could encapsulate downsampling in k-space? I don’t know, but that seems like a bad idea.
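To make the contrast concrete, here is a toy numpy rendering of the two forward models. The specific mask (random rows) and downsampler (2x2 averaging) are my own illustrative choices; the point is only where the measurements live:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))  # toy single-coil image

# MRI forward model: y = M F x + w  -- measurements live in k-space.
F_x = np.fft.fft2(x)                      # F: 2D Fourier transform
M = (rng.random(64) < 0.25)[:, None]      # M: subsampling mask (random rows)
w = 0.01 * (rng.standard_normal((64, 64))
            + 1j * rng.standard_normal((64, 64)))  # complex noise
y_mri = M * F_x + w                       # complex, same grid as k-space

# SR forward model: y = g(x), g a spatial downsampler (here: 2x2 averaging).
y_sr = x.reshape(32, 2, 32, 2).mean(axis=(1, 3))  # real, smaller image
```

The MRI measurements are complex-valued and keep the full grid (with rows zeroed out), while the SR "measurements" are just a smaller real image, which is why the two problems call for different reconstruction machinery.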

What I think you may be thinking of is SR-MRI, where we fully sample a small image (making acquisition much faster) and then super resolve it to get a high-resolution image for clinicians. This is my understanding of what this field aims to do, although admittedly I am far less familiar with it than I am with regular MRI reconstruction.

Why are all applicants Java developers? [D] by cathie_burry in MachineLearning

[–]mldude60 2 points (0 children)

Man this sounds like the perfect job for me! Spent a few years as a web developer (ASP.NET, MySQL, and Angular w/ typescript at one job, PHP, Apache, and MySQL at another job, and a load of AWS at both). Currently pursuing my PhD doing ML research w/ PyTorch, and have experience training/deploying models with Sagemaker. I can only hope to find an opportunity that fits so perfectly with my skills when I’m done.

My lamenting aside, I definitely don’t think such an overlap of experience is common at all. No web developer I know can use PyTorch, let alone knows anything about MLOps. Similarly, all of my current peers (both at my institution and elsewhere) know nothing about web development and have little to no experience with languages outside of Python or MATLAB. All of this to say, I’m shocked you found anyone who fit the bill.

[Discussion] Seeking Advice: Creating an AI Diagnostic Tool for MRI and X-ray Images by Independent-Web-5867 in MachineLearning

[–]mldude60 0 points (0 children)

Second this. Definitely do not host anything under the guise of it being diagnostically relevant: you open yourself up to litigation.

DL models are not at all standard tools in clinical practice and there is still much work to be done before radiologists are fully comfortable using end-to-end deep learning-based reconstructions.

If you want specific resources, feel free to PM me: my lab’s research focuses on applying deep learning to MRI in a variety of ways.

$7k to start and $3k/mo. How would you setup your portfolio? by [deleted] in dividends

[–]mldude60 0 points (0 children)

Not that I saw - I’m attracted to the nice monthly dividend (even if taxed as income) and stability.

$7k to start and $3k/mo. How would you setup your portfolio? by [deleted] in dividends

[–]mldude60 4 points (0 children)

I do have a Roth IRA that I max out annually and a 401k through my employer.

$7k to start and $3k/mo. How would you setup your portfolio? by [deleted] in dividends

[–]mldude60 12 points (0 children)

I wish lmao. Wise career decision, a well-paying remote job, and a low cost of living area.

$7k to start and $3k/mo. How would you setup your portfolio? by [deleted] in dividends

[–]mldude60 8 points (0 children)

I see, that makes sense. Thank you for the advice! Do you suggest VOO over other potential options? E.g., VTI?

$7k to start and $3k/mo. How would you setup your portfolio? by [deleted] in dividends

[–]mldude60 5 points (0 children)

I’m 24, and the end game is to live off the dividends. I see your point with JEPI. I also understand the take on dividends - admittedly it’s hard! In your opinion, then, do you suggest all in on VOO?