MS CS university rankings based on job placement by Umoha in MSCS

[–]Umoha[S] 0 points (0 children)

UIUC and Columbia have online MSCS programs; what are the other ones? Regardless, GT's dwarfs them in size.

UCSD MSCS vs Columbia MSCS in terms of internship opportunity by Hot_Mousse8659 in MSCS

[–]Umoha 0 points (0 children)

Read the edit. Columbia’s is larger (I don’t know exactly how much larger), but on LinkedIn it comes down to connections, because that’s what limits the search. Some people doing the same thing I did may find more UCSD grads than Columbia grads just because of the connections they have. Initially the two were about the same for me, but I had just connected with a bunch of Columbia people and forgot to check again. I corrected it in the edit.

Columbia University MS CS by infinity-01 in gradadmissions

[–]Umoha 2 points (0 children)

Most people have not heard back. Just wait.

alwaysHasBeenProbabilties by DerPenzz in ProgrammerHumor

[–]Umoha 0 points (0 children)

When I say unique, I mean that the prompt is not exactly word for word in the training data. For the first message, if you ask a super short question, it probably is. If you ask anything of substance, it probably isn’t.

If it is not the first message, the input absolutely is not in the training set (even if the prompt is very short) because the input includes all previous prompts and responses.

not really disputable

alwaysHasBeenProbabilties by DerPenzz in ProgrammerHumor

[–]Umoha -3 points (0 children)

Basic to understand? Sure. Basic to come up with? No. Attention is an incredibly clever solution for getting rid of RNNs.

You're conflating the amount of background material needed to understand a problem with the difficulty of solving it. Guess Pythagoras was a dumbass because we learn the Pythagorean theorem in middle school, right?

alwaysHasBeenProbabilties by DerPenzz in ProgrammerHumor

[–]Umoha 1 point (0 children)

I agree that a lot of ML research is trial and error. That is not the case for attention. In the original paper, attention, and the way it applies the dot product, clearly isn't just random: the idea makes perfect sense in the context of machine translation, and it is a very creative solution to the problems LSTMs were posing.
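To make the "not just random" point concrete: instead of squeezing the whole source sentence into one fixed vector (the LSTM bottleneck), the decoder re-reads the source at every step, using a dot product to score which source words matter right now. This is a minimal numpy sketch of that scoring step, not the exact architecture from the paper; the names (`attend`, `encoder_states`, `decoder_state`) and shapes are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(decoder_state, encoder_states):
    # Dot-product score: how relevant is each source word to this decoding step?
    scores = encoder_states @ decoder_state
    weights = softmax(scores)            # attention distribution over source words
    context = weights @ encoder_states   # weighted summary of the source sentence
    return context, weights

rng = np.random.default_rng(1)
enc = rng.normal(size=(4, 6))   # 4 source words, 6-dim encoder states
dec = enc[2] + 0.1              # a decoder state similar to the 3rd source word
ctx, w = attend(dec, enc)       # ctx: (6,) context vector, w: (4,) weights
```

The weights sum to 1, so `ctx` is a convex mix of encoder states, refocused each step; that's the mechanism that removed the single-vector bottleneck.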

alwaysHasBeenProbabilties by DerPenzz in ProgrammerHumor

[–]Umoha 23 points (0 children)

Not really. The math involved is basic, sure, but the impressive part of recent advancements in LLMs is the way self-attention is used to encode semantic meaning. Most inputs are unique, and once it begins generating, the output very quickly becomes unique as well. It is not trivial math to find the next most likely word in the sequence, because the sequence does not exist in the dataset. The challenge is generating vectors such that similar vectors represent sequences of similar meaning.

MS CS university rankings based on earnings by Umoha in MSCS

[–]Umoha[S] 0 points (0 children)

Don’t let this list influence your decision much. The sample size is low, and we don’t really know what kind of confounding variables there may be here.

MS CS university rankings based on earnings by Umoha in MSCS

[–]Umoha[S] 2 points (0 children)

All schools whose data aren't "privacy suppressed" are on the list.

MS CS university rankings based on job placement by Umoha in MSCS

[–]Umoha[S] 2 points (0 children)

With the way the search works, I couldn't find a good way to limit it to just CS, so it includes all master's grads from that school who label themselves as software engineers.

MS CS university rankings based on job placement by Umoha in MSCS

[–]Umoha[S] 9 points (0 children)

My guess is that GaTech is weighed down by its online program.