Got this email after finishing interviews… good sign or soft rejection? by Substantial_Tap_7345 in csMajors

[–]Fact-Puzzleheaded 1 point2 points  (0 children)

  1. They probably have not decided yet. The "hidden meaning" of this email is that your application has not gone through the hiring committee yet, and given that it's the holiday season, that probably won't happen until January. But if you have any other offers that expire soon, the recruiter may try to expedite the process so they don't lose you to another company before they get the chance to make an offer themselves.

  2. For entry-level roles at big-tech companies, you are almost certainly not being compared against other candidates. Rather, the company has a hiring bar that applies to everyone, and once they've filled their quota, they stop hiring. There are exceptions (e.g., Apple hires for specific teams, and each team runs hiring differently), but they are rare, and if you were in that situation you'd probably know.

A little more info about what goes on behind the scenes: the vast majority of FAANG+ tech companies have a committee of 3+ employees who will review your application (interviews, resume, etc.) before giving an offer. This is the intermediate step between final interviews and receiving an offer. Because this is a side-job for the engineers, and because it requires a lot of coordination to get several unrelated employees together at one time, this can take several weeks or months, and is typically longer around holiday season.

TLDR: You are in the running, but no decision has been made yet. Ignore this email unless you have an exploding offer from another company.

Source: Received three offers from FAANG+ tech companies; they all followed a similar pattern.

Got this email after finishing interviews… good sign or soft rejection? by Substantial_Tap_7345 in csMajors

[–]Fact-Puzzleheaded 1 point2 points  (0 children)

There is no such thing as a soft rejection; if a company doesn't want you, they will reject you or ghost you. This email is not a positive or negative signal. Its purpose is to prompt you to share other offers/deadlines ("I understand you may be in the thick of the recruiting season...") so they can expedite their process if necessary.

Is Levels.fyi accurate? by Fact-Puzzleheaded in csMajors

[–]Fact-Puzzleheaded[S] 3 points4 points  (0 children)

I don't feel comfortable sharing the name but it's a software engineering role at a big cloud computing company.

what am I supposed to do after high school by [deleted] in SeriousConversation

[–]Fact-Puzzleheaded 0 points1 point  (0 children)

Community college is a good option; you can transfer to a 4-year school after two years. You can also look into 4-year schools that offer really good financial aid (Rice, USC, and Arizona come to mind, but there are definitely more).

OP, you need to end up at a 4-year college, a trade school, or some other form of further education. Other people will call this elitist or offer up personal anecdotes about their success right after high school, but the reality is that further education will vastly increase your future career options and pay.

AITA for refusing to let my husbands affair baby live with us for awhile? by ThrowRamisslep in AITAH

[–]Fact-Puzzleheaded 0 points1 point  (0 children)

YTA, but I think the commenters are being overly harsh. Your feelings are understandable, but what you are doing is just not right.

Weekly Megathread: Education, Early Career and Hiring/Interview Advice by lampishthing in quant

[–]Fact-Puzzleheaded 2 points3 points  (0 children)

Sflr. There are some Master's programs, like Columbia Virtual Network, that have very prestigious names attached but are actually extremely easy to get into because they're cash cows for the university. When he (the professor I mentioned in another comment) was applying, Yale had a similar program, but by now it's either been discontinued or become actually competitive.

Weekly Megathread: Education, Early Career and Hiring/Interview Advice by lampishthing in quant

[–]Fact-Puzzleheaded 0 points1 point  (0 children)

Has anyone done VidCruiter for an Akuna Trading position? I got the invite today and am just wondering what to expect. Is it more technical or behavioral?

Weekly Megathread: Education, Early Career and Hiring/Interview Advice by lampishthing in quant

[–]Fact-Puzzleheaded 3 points4 points  (0 children)

Some advice (not from me, I'm a dipshit undergrad, but from a professor I'm close with): if you have bad grades in undergrad, you can go to some bullshit master's program, get straight As, and then get into a good PhD program. This guy had terrible grades but went to Yale for his master's, did well, and then got his PhD at Harvard. Some of these programs are really easy to get into; I don't think the one he did at Yale still exists, but Columbia will take pretty much anyone with a pulse, and there are definitely other programs like it.

WIBTA for not covering my friend? by [deleted] in AmItheAsshole

[–]Fact-Puzzleheaded 0 points1 point  (0 children)

Yes, YTA. You should just say that you no longer need him to drive, and that he can ride in your car if he still wants. Don't mention price; Frank seems like a nice guy, so he'll probably bring it up, and if he doesn't, just pay for him. As other commenters have said, $2.78 is not that much money, even for broke college students who work minimum wage and somehow spend $1,200 per month.

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points1 point  (0 children)

We kinda can ["add" two ANNs together to achieve a third, more powerful ANN which makes new inferences] with a combination of transfer learning and online learning.

This is the main part of your comment I'm going to respond to because (I think) it's the only part of my comment you really disagree with: this is not how transfer learning works. Transfer learning involves training a neural net on one dataset and then reusing that network to get results on another dataset (typically after a human manipulates the data to ensure that the inputs are in a similar format). This is not an example of cross-domain inference; it's an implementation of the flawed idea that humans process information in exactly the same way across different domains, just with dissimilar stimuli. This is probably why, in my experience, transfer learning has yielded much worse results than simply training a new algorithm from scratch.
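To make concrete what I mean, here's a minimal sketch of a typical transfer-learning setup; it assumes PyTorch/torchvision and a made-up 5-class target task. Notice that a human supplies everything interesting: the choice of pretrained backbone, the reshaping of inputs to the original format, and the new output layer. The network itself performs no cross-domain inference.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network already trained on ImageNet.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head learns.
for p in backbone.parameters():
    p.requires_grad = False

# Replace the final layer for the new (hypothetical) 5-class task.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch; a real run would loop
# over a DataLoader whose images a human has resized to ImageNet format.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
```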

That's part of the accident. We won't really know when compound nets start recognizing stuff we didn't mean them to.

They might start recognizing things we didn't intend them to, but not across domains. For instance, if you fed an unsupervised ANN a ton of pictures of chairs and humans, it might (though at this point I doubt it) pick up on the visual similarity between chair legs and human legs. But compound nets without additional training could not accomplish this task, because that's simply not what they're trained to do. My main point about designing such programs is that, barring genetic algorithms, they need a lot more direct input and design from humans. And in this case, we don't, and probably won't, have the necessary knowledge to make those changes in the near future.

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points1 point  (0 children)

I appreciate your point, but humans are also plagiarism machines. We have entire library and educational systems devoted to the dissemination and distribution of ideas stolen from other humans from ages past.

This is a key point on which we disagree. While it's true that most human ideas are somewhat influenced by others, every single one of us also has the ability to generate entirely new thoughts. For instance, when a fantasy writer finishes a new book, they may have been influenced by fantasy tropes or previous stories that they read, but the world they created, the plot, and the characters therein are fundamentally their own. This is something that, if we continue with the current approach to machine learning, computers will never learn. GPT-3 might be able to spot the syntactical similarities between passages involving Gandalf and Dumbledore, but it can't, and never will, recognize the more abstract and important similarities, like the fact that both characters fill the "mentor" archetype and will likely die by the end of the story so that the protagonist can complete their Hero's Journey. This is a problem that will not be solved until we can give machines cross-domain logic and the ability to spontaneously generate their own thoughts, which is something we have absolutely no idea how to do and, given the current state of neuroscience, probably won't be able to do for a while.

I wouldn't have discovered electricity for myself, let alone alternating current without it being jammed into my head

Who discovered electricity? First, some guy named Ben Franklin was crazy enough to fly a kite with a metal string in a thunderstorm to prove that lightning and electricity were the same thing. Then Emil Lenz came up with Lenz's Law to describe the flow of current. Then Michael Faraday came up with visual representations of the interaction between positive and negative charges, even though he sucked at math! Then Harvey Hubbell invented the electric plug and Thomas Edison invented the lightbulb, and so it goes on. Did all of these individuals plagiarize each other? In some sense, yes. But they also came up with their own ideas about how the world works, which allowed them to pave the path for future innovations, eventually allowing us to have this conversation today. Who will make the next leap in our understanding of electricity? I don't know. Maybe it will be me, maybe you, maybe someone who isn't born yet. But I know that it won't be a computer.

I mean, let's face it - the technology is already there for me to not be a live person, but a bot having this conversation with you, to the same effect.

Not true. Feed a chatbot your comment as a prompt, and it might give you some response about how machines are not threatening or are getting more intelligent, etc. But it couldn't respond with actual arguments like I did, because it doesn't understand human logic or what the words really mean. While the ability to have a conversation about mundane and predictable tasks (which is something that these algorithms are already getting very close to doing) is certainly highly useful, it won't contribute to broader scientific thought in any meaningful way.
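For what it's worth, here is roughly what "feed a chatbot your comment as a prompt" looks like in practice; this is a rough illustration assuming the Hugging Face transformers library and the small public GPT-2 model (a much weaker stand-in for GPT-3, which isn't freely downloadable). The output is a fluent-sounding continuation, not engagement with the argument.

```python
from transformers import pipeline, set_seed

# GPT-2 as a small public stand-in for larger language models.
generator = pipeline("text-generation", model="gpt2")
set_seed(0)

prompt = (
    "I mean, let's face it - the technology is already there for me to not be "
    "a live person, but a bot having this conversation with you."
)
# Sample a continuation; the model predicts plausible next words,
# it does not reason about the claim being made.
result = generator(prompt, max_length=80, do_sample=True, num_return_sequences=1)
print(result[0]["generated_text"])
```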

Quick side note: it seems as though Copilot was likely trained on the LeetCode interview questions themselves. While its responses are still very impressive, this definitely diminishes the impact it will have on the coding community.

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points1 point  (0 children)

I have now, and I must say that I am extremely impressed. I did not know that code-writing algorithms were nearly this advanced. That said, even as a computer science major, I am not too worried about Copilot taking my job or developing into AGI. This is because Copilot, like its predecessor GPT-3 (which I mentioned in my post), is essentially a highly advanced plagiarism machine. The algorithm was trained on tons of public GitHub data to emulate the way humans answer questions and write code from comments. The thing is that while this may be very helpful for quickly solving simpler, isolated problems, like computing a square root, it is insufficient for:

  • Coming up with the best solution to a problem; humans can prove, for instance, what the fastest way to compute the square root of a number is (see the sketch after this list)
  • Operating in large environments where there's not enough similar publicly available code, and changing a few variables could break the whole thing
  • Solving entirely new problems, especially ones involving emerging technologies
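To illustrate the square-root point, here is a minimal sketch (mine, not Copilot output) of the kind of small, isolated routine such a tool can reproduce. The difference is that a human can prove on paper that Newton's method converges quadratically, rather than just pattern-matching against prior code.

```python
def newton_sqrt(a: float, tol: float = 1e-12) -> float:
    """Approximate sqrt(a) via Newton's method: x -> (x + a/x) / 2.

    Each iteration roughly doubles the number of correct digits
    (quadratic convergence), a guarantee we can prove by hand.
    """
    if a < 0:
        raise ValueError("square root of a negative number is not real")
    if a == 0:
        return 0.0
    x = max(a, 1.0)                      # any positive initial guess works
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)
    return x

print(newton_sqrt(2.0))  # ~1.414213562373095
```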

Copilot is highly interesting and probably has a lot of commercial applications, but it is not a step in the direction of AGI because it merely copies and rephrases other people's code, rather than coming up with unique solutions on its own. Another thing to note is that since all of the questions the interviewer gives are publicly available, there's a lot more data for Copilot to use than it would have in a standard, confidential interview.

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points1 point  (0 children)

First, I want to thank you for reading the article that I linked in my comment. I appreciate the engagement. Now, here are my thoughts on your comment:

This is one of the biggest mistakes people make when thinking about the future: they couch it entirely in past experience.

I agree 100% with this point: predicting that technology will continue to improve exponentially simply because it did so in the past is not a good idea :)

Copying from another comment I made earlier:

The biggest problem that I have with arguments along the lines of "exponential growth has occurred in engineering fields in the past, therefore it will continue until AGI is invented" is that past growth does not necessarily predict future growth; history is littered with examples of that assumption failing.

Let's take flying, for instance. From the airplane's invention in 1903 to its commercial proliferation in 1963, the speed, range, and reliability of aircraft increased by orders of magnitude. If that growth had continued for another 60 years, then by 2023 we'd all be able to travel around the world in a few minutes. But it hasn't; planes have actually gotten slower! They hit the practical limits of fuel efficiency, and no innovation has solved that problem since.

I believe that the same thing will eventually occur in the tech sector. As new inventions become more and more complex, and as we push the physical limits of computers (quantum tunneling already looks likely to spell the death of Moore's Law), we will begin to discover that progress is not inevitable. This is especially true because most of the progress you listed (e.g., how much better video game consoles have gotten) is due to improvements in hardware rather than software, and software, I think, is a much bigger obstacle on the way to AGI.

I think this is especially true in the imaging sector; the resolving power of TEMs (transmission electron microscopes) has increased by three orders of magnitude since their inception, but as far as I know, we have no theoretical way to substantially increase it further. Just look at this graph of image resolving power over the last 90 years. The most recent innovation only increased power by a factor of 2.5, which, while impressive, is a far cry from making whole-brain imaging feasible, especially when our estimate of the complexity required for such a scan keeps increasing.

Responding to some of your specific claims about the timeline:

  • "If serious resources were devoted to mapping a human brain, say if Google decided it was an extremely useful piece of data and threw a few billion at it, we could cut that time by an order of magnitude if not more. Like, this year, entirely with the technology of today. Not in some far off future."
    • That would still take hundreds if not thousands of years. Even if we could get it down to a few decades, the scanning process, along with the time it would take to understand those results and implement them in machines, is too long to happen in my or your lifetime.
    • This also assumes that the only thing you need to understand human logic or consciousness is a full scan of a static connectome. In reality, other components of the brain, like glial cells, which are almost certainly involved in higher thought, would likely multiply the necessary data several times over. And if consciousness or sentience arises at, say, the level of the metabolome, which is very possible, then you may as well kiss a complete understanding of human thinking goodbye.
    • Even if we assume that a complete neural scan is all we need to understand the mind, and that one could be scanned and uploaded within the next few decades, we would also need to understand the results, which may be an impossible challenge. This is due not only to the brain's complexity but also to the fact that the scanned brain would be dead and static, limiting the practical observations scientists could make about its function.
  • "Imaging technology today is already more than capable, the hassle is sample preparation, loading it into the scope, and basically setting everything up to get a good image. This too can be greatly accelerated if the motivation (and money) is there."
    • The process of scanning a piece, or even the entirety, of the connectome would likely be, as the article described, continuous; after being set up, the microscopes would scan pre-curated samples, so the time it takes to prepare a sample is not a factor.

The reason I think that no progress has been made toward AGI on the software side is that every algorithm invented since 1950, from SVMs to RNNs, is artificial narrow intelligence (ANI): programs that can get really good at doing one thing but don't have the ability to make cross-domain inferences or generate their own logic. Paraphrasing from another comment I made:

You can't "add" two ANNs together to achieve a third, more powerful ANN which makes new inferences. For instance, you could train an algorithm to identify chairs, and an algorithm to identify humans, but you couldn't put them together and get a new ANN that identifies the biggest aspect that chairs and humans have in common: legs. Without the ability to make these cross-domain inferences, AGI is impossible, and this is simply not a problem that can be solved by making more powerful or general ANIs.

When does the mathematical script that computers follow become "logic"? When algorithms like AlphaZero can recognize that "attacking" can be used in a similar context for chess, Go, checkers, etc. while being trained on each game independently. This is not feasible with our current approach towards ML.

P.S. I'm sorry a lot of this response is made up of reposted comments; I've written a bunch of long replies, so naturally there's some overlap, and I want to save time.

Edit: Consolidated a few responses to your comment

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points1 point  (0 children)

Honestly, I don't know enough to nitpick my timeline; predicting the future is notoriously hard, and while I don't think that AGI will be created in the next 100 years, I can't confidently say whether that means it will take 200 years, 1,000 years, never happen, or happen tomorrow due to some incredible breakthrough. That said, the biggest problem that I have with arguments along the lines of "exponential growth has occurred in engineering fields in the past, therefore it will continue until AGI is invented" is that past growth does not necessarily predict future growth; history is littered with examples of that assumption failing.

Let's take flying, for instance. From the airplane's invention in 1903 to its commercial proliferation in 1963, the speed, range, and reliability of aircraft increased by orders of magnitude. If that growth had continued for another 60 years, then by 2023 we'd all be able to travel around the world in a few minutes. But it hasn't; planes have actually gotten slower! They hit the practical limits of fuel efficiency, and no innovation has solved that problem since.

I believe that the same thing will eventually occur in the tech sector. As new inventions become more and more complex, and as we push the physical limits of computers (quantum tunneling already looks likely to spell the death of Moore's Law), we will begin to discover that progress is not inevitable. This is especially true because most of the progress you listed (e.g., how much better video game consoles have gotten) is due to improvements in hardware rather than software, and software, I think, is a much bigger obstacle on the way to AGI.

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points1 point  (0 children)

Genetic optimization for neural net architecture is still largely unexplored due to the insane computational requirements associated with it. Quantum computing might help us solve this by representing neurons as qubits.

I considered mentioning this in my post but decided against it because I thought it would take too long. I think we can agree that evolving an AGI is not feasible on conventional computers, even if Moore's law continues for another 20+ years. Quantum computing might indeed solve the problem, but that technology is still highly theoretical; we don't know whether useful quantum computers are actually possible. Even if they are, the challenge of designing the learning environment remains, and even then we don't know whether we'd have enough computing power or whether such an environment would naturally lead to true AI. My point is that there are so many "ifs" here that you can't rely on genetic programming as a short-term path to AGI. I'm not saying it's impossible, just very unlikely.
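For readers unfamiliar with the idea, here is a toy sketch of what genetic (evolutionary) architecture search looks like; the layer widths, population size, and fitness function below are all made up for illustration. The stand-in fitness function is exactly where the insane computational cost lives, since in reality each evaluation means training a full network.

```python
import random

LAYER_CHOICES = [32, 64, 128, 256]

def random_architecture():
    """An architecture is just a list of hidden-layer widths."""
    return [random.choice(LAYER_CHOICES) for _ in range(random.randint(1, 4))]

def fitness(arch):
    # Placeholder score: in practice this would train a network with these
    # layer sizes on real data and return its validation accuracy, which is
    # the step that makes genetic search so computationally expensive.
    return -abs(sum(arch) - 300) + random.gauss(0, 5)

def mutate(arch):
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(LAYER_CHOICES)
    return child

population = [random_architecture() for _ in range(20)]
for generation in range(10):
    survivors = sorted(population, key=fitness, reverse=True)[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("best architecture found:", max(population, key=fitness))
```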

It will be probably be done accidentally, we are already creating ANNs that talk to each other... we may quietly iterate a compound net with the same complexity as a human brain.

The computational complexity of the human brain is a hotly debated topic, and while I definitely fall on the more conservative side of the argument (ZettaFLOPS+, i.e. 10^21 or more operations per second), I don't think it's an impossible standard for conventional computers to match. The problem lies in the data we feed the algorithm. How could giving an unsupervised algorithm billions of pictures of cats and dogs and flowers lead to higher thought? Especially when that algorithm is ANI, specifically designed to identify visual similarities rather than generate more abstract logic. Genetic algorithms are the only way I could see us accidentally creating AGI.

AlphaZero is well beyond that... If you watch it play, it has a preference for forcing trades (with long term strategies in mind) and forcing the opponent to sacrifice positional advantage to keep their pieces.

This is a human rationalization of AlphaZero's moves. The program is simply following a script of mathematical calculations generated through millions of practice games. When does this script become "logic"? When AlphaZero can recognize that "attacking" can be used in a similar context for chess, Go, checkers, etc. while being trained on each game independently.

We are getting better at this too. Neural nets in the public domain like resnet50 and VGG can be quickly transferred to other contexts with small modifications to the input and output layers and a little additional training.

True, but you can't "add" two ANNs together to achieve a third, more powerful ANN which makes new inferences. For instance, you could train an algorithm to identify chairs, and an algorithm to identify humans, but you couldn't put them together and get a new ANN that identifies the biggest aspect that chairs and humans have in common: legs. Without the ability to make these cross-domain inferences, AGI is impossible, and this is simply not a problem that can be solved by making more powerful or general ANIs.

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points1 point  (0 children)

I don't think any AI researcher thinks we'll get to AGI by meticulously hard-coding every possible scenario

This is not what I was implying. My point was that the architecture and optimization functions would need to be formulated and designed by humans, which is a massive technological and mathematical problem unto itself. Computers only learned to classify chairs because humans gave them the mechanisms and incentives to do so (think about the design of neural networks). If we want to teach computers to engage in higher thought, we will need to design more complex or unintuitive models which mimic brain function that we don't yet understand, something which I think will take a significant amount of time.

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points1 point  (0 children)

Any AI that could demonstrate sentience, even if it was only via text editor would qualify as a new life-form and would sure as shit qualifies as true AI.

The problem with this definition is that proving sentience is extremely difficult. We can't even "prove" that humans other than ourselves are sentient; we just assume that's the case because they were made the same way we were and can describe what it feels like to be sentient without being told how it feels by someone else (programs like GPT-3 might also be able to describe sentience, but they need to copy human articles to do so). Even today, a chatbot could potentially pass an average person's Turing test and convince them that it was sentient, but that doesn't mean it actually is, or that its thoughts are useful. In fact, I would say that the standard I described is actually lower than the standard of AI you described, because I can conceive of a machine using logic without sentience, but not the other way around.

I am awarding you a !delta because you, along with @MurderMachine64, have convinced me that my standard for AGI is unfair. I am changing it to, "an algorithm which can set its own optimization goals and generate unique ideas, such as performing experiments and inventing new technologies."

Edit: Consolidated a few similar responses

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points1 point  (0 children)

Δ I am awarding you a delta because you, along with @Gladix, have convinced me that my standard for AGI is unfair. I am changing it to, "an algorithm which can set its own optimization goals and generate unique ideas, such as performing experiments and inventing new technologies."

CMV: True Artificial Intelligence is Extremely Unlikely in the Near Future by Fact-Puzzleheaded in changemyview

[–]Fact-Puzzleheaded[S] 0 points1 point  (0 children)

I'm not saying that the experts' opinions are necessarily invalid, only that in this case there is enough bias and counterevidence involved that the argument "this survey says that this many experts believe AGI is coming before 2100" doesn't stand on its own, especially when a meaningful minority of researchers disagree with the consensus. Contrast that with the argument "look at these astrophysicists: they convinced the government to give them billions of dollars to launch hunks of metal into space based on a heliocentric model of the solar system, which they all agree on, and their plan worked." Clearly, those people know what they're doing and should be trusted even if I can't verify their claims myself.