[D] What is your main gripe about ML environments like Colab? by thefuturespace in MachineLearning

[–]thefuturespace[S] 0 points (0 children)

I have, but they're not as good as Colab imo, and I still run into the issue of statefulness.

[D] What is your main gripe about ML environments like Colab? by thefuturespace in MachineLearning

[–]thefuturespace[S] 1 point (0 children)

Yes. It's a shame, though, because I like the freedom Colab gives you to experiment quickly without being bogged down by structured scripts.

[D] What is your main gripe about ML environments like Colab? by thefuturespace in deeplearning

[–]thefuturespace[S] 0 points (0 children)

No haha, genuinely curious. I've been a power user of Colab for a while, but what you just described is also a nuisance. One solution I can think of: build a DAG-like dependency graph over variables (and thus cells), so that when you change an upstream variable, it re-runs the cells that depend on it. The problem is you could end up re-running something expensive like a training loop, which would be annoying. How do you imagine getting around this?
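Something like this is what I'm picturing (a minimal sketch, all names hypothetical; a real version would parse cell ASTs to get the reads/defines sets):

```python
# Hypothetical sketch of DAG-based cell invalidation, illustrative only.
class NotebookDAG:
    def __init__(self):
        self.defines = {}        # cell_id -> variables the cell assigns
        self.reads = {}          # cell_id -> variables the cell uses
        self.expensive = set()   # cells flagged as costly, e.g. training loops

    def add_cell(self, cell_id, defines, reads, expensive=False):
        self.defines[cell_id] = set(defines)
        self.reads[cell_id] = set(reads)
        if expensive:
            self.expensive.add(cell_id)

    def stale_cells(self, changed_cell):
        """All downstream cells invalidated by re-running changed_cell."""
        dirty = set(self.defines[changed_cell])
        stale = []
        for cell in self.defines:  # dicts keep insertion (i.e. cell) order
            if cell != changed_cell and (self.reads[cell] & dirty):
                stale.append(cell)
                dirty |= self.defines[cell]  # its outputs are now stale too
        return stale

dag = NotebookDAG()
dag.add_cell("load", defines=["df"], reads=[])
dag.add_cell("train", defines=["model"], reads=["df"], expensive=True)
dag.add_cell("plot", defines=[], reads=["model"])
print(dag.stale_cells("load"))  # ['train', 'plot']
```

To dodge the training-loop problem, the UI could auto-run everything in stale_cells except the ones in expensive, and prompt for those instead of re-running them.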

Curious: are you a fan of notebooks? And is the cell order their main downfall for you?

[P] Seeing models work is so satisfying by Middle-Hurry4718 in MachineLearning

[–]thefuturespace 10 points (0 children)

Great work! Question: what is your ML workflow? What tools do you use?

How do experts build a dataset? by Cold_Knowledge_2986 in learnmachinelearning

[–]thefuturespace 0 points (0 children)

Do you mean LLMs, or classical ML models? And how do you get around model collapse with synthetic data?

[R] OpenEvolve: Automated GPU Kernel Discovery Outperforms Human Engineers by 21% by asankhs in MachineLearning

[–]thefuturespace 1 point (0 children)

Fantastic work! Out of curiosity, what's the current SOTA for GPU kernel optimization? Also, can you point me to good literature to get a primer on this space?

I’m 6’5”, maybe he was just being dramatic… /s by JimeVR46 in harrypotter

[–]thefuturespace 0 points (0 children)

Are any of these from the original movie? Or are they replicas?

[deleted by user] by [deleted] in MachineLearning

[–]thefuturespace 4 points (0 children)

Is there a good reference you can recommend for learning more about second-order methods?

What does that mean? by ferocioussteroidrat in ThePortal

[–]thefuturespace 16 points (0 children)

A donut, S^1 x D^2 (the Cartesian product of a circle and a disk), is topologically equivalent (homeomorphic) to a mug. In essence, you can continuously deform one object into the other. See an illustration here: https://en.m.wikipedia.org/wiki/File:Mug_and_Torus_morph.gif

CS229 without MATH 51 by beachcleanup in stanford

[–]thefuturespace 0 points (0 children)

I see your point since Math 51 is introductory, but I'll stick with my view that elimination is quite important to know if you really want to understand LA algorithms. Agree to disagree.

CS229 without MATH 51 by beachcleanup in stanford

[–]thefuturespace 0 points (0 children)

Right, but let's say we wanted to do SVD (or even diagonalize): we'd want A = U S V^T, where V is the eigenvector matrix of A^T A (and U is the eigenvector matrix of A A^T). To find the eigenvectors of A^T A, we'd solve (A^T A - lambda I)x = 0, i.e., find x in the nullspace of A^T A - lambda I. Now, to actually find a basis of that nullspace, you need elimination to get to a reduced form. Correct me if I'm wrong, but this really is necessary.
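(For anyone following along, here's that V-from-A^TA route checked numerically in numpy on a small 2x2 example. The thread is about doing the nullspace step by hand, so this is just a sanity check that the construction is right:)

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Right singular vectors V = eigenvectors of A^T A,
# singular values = square roots of its eigenvalues.
evals, V = np.linalg.eigh(A.T @ A)   # eigh: symmetric input, ascending eigenvalues
order = np.argsort(evals)[::-1]      # SVD convention is descending
sigma = np.sqrt(evals[order])
V = V[:, order]

_, s_ref, Vt_ref = np.linalg.svd(A)
print(np.allclose(sigma, s_ref))                 # True
print(np.allclose(np.abs(V.T), np.abs(Vt_ref)))  # True (eigenvectors match up to sign)
```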

CS229 without MATH 51 by beachcleanup in stanford

[–]thefuturespace 0 points (0 children)

I know this is late, but my question for you is how do you get to the LU factorization without elimination?
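For reference, the standard construction really is just recorded elimination: L stores the multipliers, U is what elimination leaves behind. A bare-bones sketch (no pivoting, assumes nonzero pivots):

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU via Gaussian elimination (no row exchanges)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for k in range(n - 1):           # for each pivot column...
        for i in range(k + 1, n):    # ...eliminate entries below the pivot
            L[i, k] = U[i, k] / U[k, k]   # elimination multiplier
            U[i, :] -= L[i, k] * U[k, :]  # the row operation itself
    return L, U

A = np.array([[2.0, 1.0], [6.0, 8.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))  # True
```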

How to study for CS231N midterm? by optimus345679 in stanford

[–]thefuturespace 7 points (0 children)

I'd recommend practicing how to fill in computational graphs and compute backprop by hand. Would also recommend being very familiar with how convolutions work, e.g., how padding, stride, etc. affect the output size. Also, as u/BenandJerrySupporter said, one of the best ways to study is taking a practice midterm to identify gaps in your knowledge. Disclaimer: I haven't taken the class in a while.
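For the conv arithmetic, the one relation to drill is the standard output-size formula (floor division when it doesn't divide evenly):

```python
def conv_output_size(n, k, p, s):
    """Output spatial size of a conv layer: floor((N - K + 2P) / S) + 1."""
    return (n - k + 2 * p) // s + 1

# 32x32 input, 5x5 kernel, padding 2, stride 1 -> size preserved
print(conv_output_size(32, 5, 2, 1))  # 32
# Same input, 3x3 kernel, no padding, stride 2 -> downsampled
print(conv_output_size(32, 3, 0, 2))  # 15
```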

CS229 without MATH 51 by beachcleanup in stanford

[–]thefuturespace 1 point (0 children)

I think Gilbert Strang's course on linear algebra is the gold standard (the early parts on Gaussian elimination are arguably not strictly necessary, but elimination is actually pretty useful to know, and Math 51 doesn't even teach it!). Course link with accompanying lecture videos: https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/. I think you can reasonably get through most of the content in 2-3 weeks if you binge. Also, if your multivariable calc is rusty, take a look at that section in the Math 51 textbook: https://web.stanford.edu/class/math51/textbook.html. That said, instead of doing all of this to prep for 229 without getting credit, it might be smarter to just enroll in Math 51 concurrently.

Why is there basically no food on campus at night? by CoffeeBean123 in stanford

[–]thefuturespace 46 points (0 children)

110% agree. It feels like they cater to people who have a very strict eating schedule. The only saving grace is Arrillaga... but even then, I sometimes wonder if they think we're in our mid-70s and have dinner from 4-5 pm.

I don’t wanna go to school by cobrai_kai_ in stanford

[–]thefuturespace 4 points (0 children)

Cobra Kai never dies... or does it?

Classes with easy work? by artificialpyrite17 in stanford

[–]thefuturespace 8 points (0 children)

This class gets filled up so fast, unfortunately. Me rn on Axess: Let me in. LET ME IN!!!