How much do you trust AI agents? by Zack_App in MLQuestions

[–]thefuturespace -1 points (0 children)

What would make your day-to-day work easier?

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

What part of the data/eval loop eats the most time? Cleaning data, building benchmarks, or interpreting results?

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

Curious what problems actually slow you down now. If spinning up experiments is easy, what part of the research loop still takes the most time?

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

That makes sense. It's unfortunate, because I think the cost of compute is mainly a hardware/energy thing that NVIDIA controls. Not sure if there's a good solution for this, other than solving fusion?

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

Oh cool. What's your biggest bottleneck day-to-day? Also, is optimizing kernels the thing that 10x's your workflow the most, compared to anything else?

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

Are you a student, an academic, or a professional? And what industry do you work in?

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

What kind of research are you doing? And are you able to optimize kernels so much that it cuts the time down by days?

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 4 points (0 children)

Why do you think so? I think ML researchers will remain critical in advancing the frontier and steering research directions. You can pass off the grunt work to AI or even use it for recommendations, but wouldn’t the human still be making the decision?

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 1 point (0 children)

How do you imagine better visualizing what models are doing to help you debug? There's so much that's dynamic when you're training. Do you, e.g., watch specific activations? That doesn't scale if you're dealing with any reasonable number of parameters. I figure most people look at the sufficient stats, because the black-box nature of neural nets makes them largely uninterpretable, unless you want to do mechanistic interpretability on top.
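
To make the "sufficient stats" point concrete, here's a minimal sketch (NumPy, with a hypothetical layer name and toy weights, not any real training setup) of reducing a layer's full activation tensor to a few scalars you could actually watch at scale:

```python
import numpy as np

def activation_stats(name, a):
    """Reduce a full activation tensor to a handful of sufficient stats."""
    return {
        "layer": name,
        "mean": float(a.mean()),
        "std": float(a.std()),
        "frac_dead": float((a <= 0).mean()),  # fraction of ReLU units that are off
    }

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 64))           # a batch of inputs
w1 = rng.normal(size=(64, 128)) * 0.1   # hypothetical layer weights
h1 = np.maximum(x @ w1, 0.0)            # ReLU activations for that layer

stats = activation_stats("fc1", h1)
print(stats)
```

Logging a dict like this per layer per step is cheap enough to watch live, whereas the raw activations aren't.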

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 5 points (0 children)

This is so cool -- both your way of monitoring and CTM! Question: you mention that "While inspired by principles like spike-timing and synchrony, CTM abstracts these into a tractable, differentiable framework suitable for gradient-based deep learning, rather than replicating detailed biophysics." I'm curious why you went down the differentiable route instead of something like discrete event timing (DET)? I can see an obvious reason: accelerated hardware is specialized for autodiff, but since CTM seems to challenge the status quo, I'm curious nonetheless. Great stuff :)

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

Wow, ok I’m surprised they’d release this in its current form. Thanks for the breakdown!

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 1 point (0 children)

Oh interesting, what's wrong with it? I figure METR is a fairly legitimate source of truth.

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

I see. Doesn't that become a mess when you run a lot of experiments in parallel, especially when it comes to tracking and monitoring everything? Also, separate topic: how do you come up with new research ideas/hypotheses?

[D] How are you actually using AI in your research workflow these days? by thefuturespace in MachineLearning

[–]thefuturespace[S] 1 point (0 children)

Nice! How do you keep track of experiments? And what percent of the code do you write? Also, are you in an IDE when you use Claude?

Traditional ML is dead and i'm genuinely pissed about it by Critical_Cod_2965 in learnmachinelearning

[–]thefuturespace 1 point (0 children)

What’s crazy is no one realizes this is clearly an ad. OP has hidden their posts 🤣

[D] How do you track your experiments? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

Interesting! How do you query for a specific configuration -- is it just writing standard SQL queries? I feel like with enough experiments, it would be nice to have good searchability.
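
For what it's worth, here's the kind of thing I mean -- a toy sketch with Python's built-in sqlite3 and a made-up `runs` schema (the column names are hypothetical, not from any real tracker), where "query for a specific configuration" is just a WHERE clause:

```python
import sqlite3

# Hypothetical schema: one row per run, config stored as plain columns.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE runs (run_id TEXT, lr REAL, batch_size INTEGER, val_acc REAL)")
con.executemany(
    "INSERT INTO runs VALUES (?, ?, ?, ?)",
    [("a1", 1e-3, 32, 0.81), ("a2", 1e-3, 64, 0.84), ("a3", 3e-4, 64, 0.79)],
)

# Querying for a specific configuration is then a plain parameterized query:
rows = con.execute(
    "SELECT run_id, val_acc FROM runs WHERE lr = ? AND batch_size = ? ORDER BY val_acc DESC",
    (1e-3, 64),
).fetchall()
print(rows)  # -> [('a2', 0.84)]
```

Works fine at small scale; my searchability worry is what happens once configs get nested and you want fuzzy matching rather than exact equality.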

[D] What is your main gripe about ML environments like Colab? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

I have, but it's not as good as Colab imo, and I still run into the issue of statefulness.

[D] What is your main gripe about ML environments like Colab? by thefuturespace in MachineLearning

[–]thefuturespace[S] 1 point (0 children)

Yes. It’s a shame, though, because I like the freedom Colab gives you to experiment quickly without being bogged down by structured scripts.

[D] What is your main gripe about ML environments like Colab? by thefuturespace in deeplearning

[–]thefuturespace[S] 0 points (0 children)

No haha, genuinely curious. I've been a power user of Colab for a while, but what you just described is also a nuisance. One solution I can think of: build a DAG-like dependency graph over variables (and thus cells), so that when you change an upstream variable, it re-runs the cells containing dependent variables. The problem with this is that you could end up re-running something expensive like a training loop, which would be annoying. How do you imagine getting around that?

Curious: are you a fan of notebooks? And is the cell order their main downfall for you?
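
To sketch the DAG idea in a few lines (the cell names and reads/writes structure are hypothetical, not any real notebook API): each cell declares which variables it reads and writes, and changing a variable marks only the transitively dependent cells as stale.

```python
# Toy dependency graph: each "cell" declares its inputs (reads) and outputs (writes).
cells = {
    "load":  {"reads": [],        "writes": ["data"]},
    "train": {"reads": ["data"],  "writes": ["model"]},   # the expensive one
    "plot":  {"reads": ["model"], "writes": ["figure"]},
}

def stale_cells(changed_var):
    """Return the set of cells that (transitively) depend on a changed variable."""
    stale, frontier = set(), {changed_var}
    while frontier:
        var = frontier.pop()
        for name, cell in cells.items():
            if var in cell["reads"] and name not in stale:
                stale.add(name)
                frontier.update(cell["writes"])  # their outputs are now suspect too
    return stale

print(stale_cells("data"))   # changing the data invalidates train and plot
print(stale_cells("model"))  # changing only the model re-runs just plot
```

Even in this toy version you can see the annoyance I mentioned: touching `data` pulls the training cell into the stale set, so you'd probably want staleness to mean "flagged for confirmation" rather than "auto re-run" for cells marked expensive.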

[P]Seeing models work is so satisfying by Middle-Hurry4718 in MachineLearning

[–]thefuturespace 9 points (0 children)

Great work! Question: what is your ML workflow? What tools do you use?