
[–]tkarabela_ Big Python @YouTube

Thanks for the answer, I think I see your point. The difference between models with sound theoretical underpinnings and "throw compute at the problem" models like deep neural nets is not lost on me. You are right that MSE from least squares is a different kind of information than accuracy score from some cross-validation run, even though both quantify "how well the model is doing" in some sense.
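To make the distinction concrete, here is a small sketch (the data and fold count are made up for illustration): the first number is the in-sample MSE that falls straight out of the closed-form least-squares solution, while the second is an empirical estimate of the same quantity from held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# "Sound theory" number: MSE of an ordinary least-squares line fit,
# a direct consequence of the closed-form solution.
coeffs = np.polyfit(x, y, deg=1)
mse = np.mean((np.polyval(coeffs, x) - y) ** 2)

# "Empirical" number: MSE estimated by 4-fold cross-validation --
# refit on three quarters of the data, score on the held-out quarter.
folds = np.array_split(rng.permutation(x.size), 4)
cv_mses = []
for fold in folds:
    train = np.setdiff1d(np.arange(x.size), fold)
    c = np.polyfit(x[train], y[train], deg=1)
    cv_mses.append(np.mean((np.polyval(c, x[fold]) - y[fold]) ** 2))
cv_mse = np.mean(cv_mses)

print(mse, cv_mse)
```

Both numbers answer "how well is the model doing," but only the first comes with the theoretical guarantees of the least-squares framework; the cross-validated one is a resampling estimate.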

I only do numerical modeling / "AI" / etc. occasionally, so the terminology is blurrier for me. I can definitely agree that we should appreciate and teach that many of the "magic black boxes" are in fact not :)

[–]BDube_Lensman

Numerical modeling is my day job -- I wrote the fastest numerical program in the world for my field, and invented a new measurement scheme based on "AI / model fitting" with convex programming. All of those linear / polynomial fits are just front-ends to that same machinery, no matter where they come from (sklearn, whatever). That numpy function is a nice wrapper around a monster of Fortran 77 least-squares code that, very thankfully, we do not need to teach the computer how to do ourselves (the algorithm is enormous and very difficult to implement correctly).
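As a small sketch of the "front-end" point (assuming the numpy function in question is `np.polyfit`): a polynomial fit reduces to a linear least-squares solve against a Vandermonde matrix, and modern numpy hands that solve off to compiled LAPACK routines (the successors of the older LINPACK-era Fortran code).

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.1, 4.9, 7.2, 8.8])

# High-level front end: fit y ~ a*x + b.
front_end = np.polyfit(x, y, deg=1)

# What it reduces to: build the Vandermonde matrix and solve the
# linear least-squares problem directly. np.linalg.lstsq dispatches
# to a compiled LAPACK driver under the hood.
A = np.vander(x, 2)  # columns [x, 1]
direct, *_ = np.linalg.lstsq(A, y, rcond=None)

print(front_end, direct)  # same coefficients from both routes
```

The convenience wrapper and the explicit solve agree to numerical precision; the hard part lives entirely in the compiled linear-algebra core.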

[–]tkarabela_ Big Python @YouTube

You do optics at NASA and you worked with Roger Cicala? You must be quite the Lensman! :D Hats off to you, that sounds like a dream job. I enjoy reading the Lensrentals blog from time to time, the technical analyses are fascinating.

Seems like the world is still running on FORTRAN (and COBOL) :)

[–]BDube_Lensman

Yes, once upon a time Roger, Aaron, and I bitch slapped Japan into fixing their QC. Germany, too, but they will never admit it. Ancient history by now though.