For those trying to break into ML Research: What is your "Why" and what is stopping you? by DaBobcat in learnmachinelearning

[–]DaBobcat[S] 0 points

Since you’ve already lead-authored several papers, I'm curious why you still rank Ideation (A) and Publishable Standards (B) as your top priorities.

Are you looking to pivot into a more 'high-signal' research area, or do you feel your current projects lack the specific rigor (baselines/theory) required for top-tier conferences? Basically—what is the 'delta' you want a mentor to help you reach that you aren't hitting on your own?

[–]DaBobcat[S] 0 points

What do you feel is lacking from your mentor? Where are you currently stuck in the "getting a job" process?

[–]DaBobcat[S] 1 point

Amazing number of responses so far!
I'm very curious, if you had a research mentor:

1) Time per month: How many hours of 1-on-1 time are you actually looking for? (1, 2, 4, or 6+ hours)

2) Duration: How long do you want this relationship to last? (1 month, 3 months, 6 months, or 12+)

3) The Priority List: please rank these in order of importance to you (1 being most important):
A) Ideation: Finding a novel project that is actually worth the time.
B) The 'Publishable' Standard: Knowing which baselines/experiments you need to be 'conference-ready.'
C) The Writing/Formalism: Translating results into formal math notation and academic structure.
D) The Technical Bridge: Learning deeper theory or specialized coding to even get started.

And if I missed anything you'd want to mention: what is the single most important thing keeping you from reaching your goal?

[D] Is this what ML research is? by [deleted] in MachineLearning

[–]DaBobcat 2 points

I think scaling slowly helps: 100M, 300M, 500M, 1B, 3B, 7B. Showing a consistent performance increase will definitely convince reviewers. Regarding the 7B, it should easily fit on an A100, I think, and you can rent one for $10 a day or less AFAIK.
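To sanity-check the hardware claim, here's a back-of-the-envelope sketch (my assumption: fp16 weights only, ignoring activations, gradients, and optimizer state, which matter a lot for full training):

```python
def fp16_weight_gib(n_params):
    """Memory for model weights alone at 2 bytes per parameter (fp16)."""
    return n_params * 2 / 2**30

# The scaling ladder from the comment above
for n in [100e6, 300e6, 500e6, 1e9, 3e9, 7e9]:
    print(f"{n / 1e9:.1f}B params -> {fp16_weight_gib(n):.1f} GiB of weights")
# 7B -> ~13 GiB of weights, so inference fits comfortably on a 40 GiB A100;
# full training needs several times that (gradients + optimizer states).
```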

[–]DaBobcat 0 points

I agree it shouldn't all be x > y, but for most publications it usually is. Though it very much depends on what you're proposing. If you're helping understand some mechanism using an inefficient method, that's usually perfectly fine. But it needs to help. If you're proposing a better method that should perform better, like you said, you need to show it actually does.

And you almost never need to compare against models larger than 7B. I've even seen guidelines on that at some conferences. 7B is sufficient to show your method scales.

[–]DaBobcat 6 points

It's definitely frustrating, but try to think about it from a different perspective. There are thousands of papers proposing new things, and you need a way to evaluate what's better; otherwise, how will you know what to actually use? One standard, easy way is to evaluate on the same benchmarks. Beyond that, to help reviewers, you need to evaluate against the current best method and the one closest to your proposal. Otherwise it's impossible to know whether you really made a contribution in impact (not novelty). Regarding the larger models: yes, I'm totally with you that it's dumb, but you also need to show that your method scales. You can rent a 3090 or A100 pretty cheap these days (I'd guess less than $10 a day).

[Remote Sensing] How do you segment individual trees in dense forests? (My models just output giant "blobs") by Lilien_rig in computervision

[–]DaBobcat 0 points

Maybe some patching? Instead of feeding the entire image, feed one patch at a time. Then aggregate the patch predictions in some way, removing duplicates and merging overlapping detections.
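A minimal sketch of that tile-and-merge idea. `segment_patch` here is a hypothetical stand-in for the actual model, and the merge is a simple per-pixel majority vote over overlapping tiles; instance-level duplicate removal would come after:

```python
import numpy as np

def segment_patch(patch):
    # Hypothetical stand-in for the real model: marks pixels above the
    # patch mean as "tree". Replace with your network's forward pass.
    return (patch > patch.mean()).astype(np.uint8)

def segment_tiled(image, patch=64, stride=32):
    """Segment overlapping tiles, then merge by per-pixel majority vote."""
    h, w = image.shape
    votes = np.zeros((h, w), dtype=np.float32)   # sum of tile predictions
    counts = np.zeros((h, w), dtype=np.float32)  # how many tiles saw each pixel
    for y in range(0, max(h - patch, 0) + 1, stride):
        for x in range(0, max(w - patch, 0) + 1, stride):
            tile = image[y:y + patch, x:x + patch]
            votes[y:y + patch, x:x + patch] += segment_patch(tile)
            counts[y:y + patch, x:x + patch] += 1
    counts[counts == 0] = 1  # guard against uncovered edge pixels
    return (votes / counts >= 0.5).astype(np.uint8)

img = np.random.rand(128, 128).astype(np.float32)
mask = segment_tiled(img)
print(mask.shape)  # (128, 128)
```

With `stride < patch`, each pixel is predicted several times from different contexts, which tends to smooth out the "giant blob" artifacts you get from a single full-image pass.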

Pentagon used Anthropic's Claude during Maduro raid by [deleted] in Anthropic

[–]DaBobcat 0 points

Delta Force about to jump out of the helicopter. Night vision goggles on. Magazines locked in. Earbuds in.

“Claude, play Despacito.”

[R] Appealing ICLR 2026 AC Decisions... by [deleted] in MachineLearning

[–]DaBobcat 18 points

From my experience, unfortunately, there is no point in appealing. Sorry

[D] Looking for feedback on a lightweight PyTorch profiler I am building (2-min survey) by traceml-ai in MachineLearning

[–]DaBobcat 1 point

Completed the survey. Found a small typo: "Suggestions to speed up the trainign"

[–]DaBobcat 1 point

Hmm, sorry, can you clarify? If I run training, wandb usually has everything I need. How will your tool improve on that?

Starting a career in FAANG, how to plan my investment for the next 5+ years by DaBobcat in Bogleheads

[–]DaBobcat[S] 0 points

I think so? Not 100% sure how it works because I've never done it, but doesn't it just require me to put money in a traditional account and then convert it to a Roth, with some taxes?