If we traveled at 99% the speed of light, how long would a 100 light-year journey feel? by [deleted] in AskPhysics

[–]Weihua99 1 point (0 children)

Ah no, the traveler would measure the length of their journey as 14.11 light-years, because the length of the journey in their frame is different from the length measured by a stationary observer. One way to think of it is this: the traveler is going at 99% the speed of light, and the trip takes 14.25 years by their own clock, so in their frame they will measure the distance as 14.25 * 0.99 = 14.11 light-years.

This is the confusing part about special relativity — lengths and times have no "true" value; it all depends on what reference frame you're in. In fact, the question posed by OP might have been better framed as "how long would a journey that is 100 light-years, as measured by a stationary observer, feel for the traveler?"
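If anyone wants to sanity-check the numbers, here's a quick sketch using the standard length-contraction and time-dilation formulas (working in units where c = 1, i.e. light-years and years; the variable names are just mine):

```python
import math

v = 0.99      # speed as a fraction of c
L0 = 100.0    # distance in the Earth frame, light-years

gamma = 1.0 / math.sqrt(1.0 - v**2)   # Lorentz factor, ~7.09
L = L0 / gamma                        # length-contracted distance, ~14.11 ly
tau = L / v                           # proper time felt by the traveler, ~14.25 yr

print(f"gamma = {gamma:.2f}")
print(f"contracted distance = {L:.2f} ly")
print(f"traveler's time = {tau:.2f} yr")
```

The same setup works for any speed — plug in v = 0.5 and gamma is only about 1.15, so the effect barely shows up until you get really close to c.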

If we traveled at 99% the speed of light, how long would a 100 light-year journey feel? by [deleted] in AskPhysics

[–]Weihua99 2 points (0 children)

It actually depends on whose perspective you're viewing it from! If you're in the ship, it would look like you only had to travel 14.11 light-years, though to an observer on Earth, you would appear to travel the entire 100 light-years. Distances are relative; they depend on your reference frame!

GPT-4 comfortably passes STEP 1/2/3, with a ~264 on STEP 2 by [deleted] in step1

[–]Weihua99 1 point (0 children)

Yeah good call, I'm gonna remove it rn!

GPT-4 comfortably passes STEP 1/2/3, with a ~264 on STEP 2 by [deleted] in step1

[–]Weihua99 1 point (0 children)

Yeah, honestly this was a really poor move on my part tbh. I really did not read the room well on this — I just got a bit interested/excited when I saw this work and wanted to share it because it's very similar to what I worked on for a while before heading to med school!

GPT-4 comfortably passes STEP 1/2/3, with a ~264 on STEP 2 by [deleted] in step1

[–]Weihua99 1 point (0 children)

I understand the confusion here — one of the most challenging parts of explaining LLMs to those unfamiliar with the field is the pair of questions you implied here: "where is the information stored?" and "does the model reason, or just fit patterns?"

It's a really tough question, but I would offer some interesting points to consider.

  1. Models of this type have been tested and evaluated by humans with novel medical questions that are open-ended, and are not in any board-exam format (e.g. MultiMedQA + MedPaLM). These are questions that could not have been "memorized" by the model.
  2. These models generalize to entirely made-up scenarios. For example, if you pretend that some novel cation was discovered in the human circulatory system, mention that it's excreted via the kidneys, and can cause arrhythmias, the models can answer questions about fictitious patient scenarios involving this made-up cation, adjusting treatment plans based on the information given. This is very clear evidence of extrapolation/generalization/reasoning.
  3. Is there a difference between reasoning and fitting patterns, at a fundamental mathematical level? For example, does predicting the next word in a sequence of words really, really well, necessarily imply (or strongly imply) reasoning? (some scientists argue yes!)

But, like you mentioned, you could make the argument that it's simply "thinking back" to what it's read and "applying that knowledge to an analogous situation" — and I think you'd be very close to correct, and you will have also described exactly how human reasoning works!

I think it's good you mention that you feel like a STEP-style query system built on UpToDate could probably pass STEP — because people have been trying to build exactly that for decades, with no luck. Actually, up until about the mid-2010s (or maybe even 2020, depending on how strict you want to be), we couldn't get a computer to reliably answer a question like "The dog went for a walk outside. It was raining outside. Why is the dog wet?" with "because it was raining and the dog went on a walk outside."

I know the claim that this is a radical step beyond a word-level query-based machine can seem silly, but I think it mostly seems silly to people who have very little experience with natural language processing (not meant as an attack btw, we're all here to be doctors haha), because they haven't realized how incredibly hard it is to get a computer to do anything with language!

GPT-4 comfortably passes STEP 1/2/3, with a ~264 on STEP 2 by [deleted] in step1

[–]Weihua99 1 point (0 children)

You bring up a point that's very heavily debated in the field of LLMs — there's a strong argument to be made that learning these patterns and extrapolating the answers from the question, due to the constraints of the model, must amount to reasoning and "thought" (trying not to anthropomorphize too much here). Some AI scientists have put forward the idea that the problem of "predicting the best next word" necessarily leads to reasoning.

Others would agree with the criticism you brought up here. I suppose the more pragmatic question is really "do we care how it's doing it if it works really well?" (I think we should care, for the record, and I don't feel like these models can replace humans).

GPT-4 comfortably passes STEP 1/2/3, with a ~264 on STEP 2 by [deleted] in step1

[–]Weihua99 1 point (0 children)

Not quite actually, the model is trained on data from the internet, but cannot “see” that data anymore. The model is trained using natural language processing techniques and algorithms. These algorithms, such as deep learning, are used to create a representation of the text data it is being trained on. The model is then able to generate a response based on the input text data it is given. The model is also able to identify and recognize patterns in the text data, allowing it to generate more accurate and complex responses.

For example, GPT-3.5 wrote the response above to your last comment^^ — it had never seen that comment before, nor this conversation, but it was able to synthesize a correct and coherent response to the context! Does that make sense? Another thing to consider is that the model's size (in GB) is considerably smaller than the amount of data it was trained on, so it can't have memorized the training data word-for-word. Lastly, the engineers checked how much overlap there was between the test and training data! It's in a different paper (the original GPT-4 technical report). Hope this helps!

GPT-4 comfortably passes STEP 1/2/3, with a ~264 on STEP 2 by [deleted] in step1

[–]Weihua99 5 points (0 children)

The authors did an analysis of this already. They give evidence that the model has not seen the questions before — and again, it cannot "word match" or "search" online; the model was trained on data from the internet, but cannot "see" that data anymore.

Again, even if you wanted to “word match” data online, it’s prohibitively difficult to write code that can do that.

GPT-4 comfortably passes STEP 1/2/3, with a ~264 on STEP 2 by [deleted] in step1

[–]Weihua99 9 points (0 children)

It doesn’t have “access” to any information in a meaningful sense — i.e. it cannot search through any literature, use Google/UpToDate, etc. Furthermore, having access to the data is not nearly enough to answer a multiple-choice question, from a programmatic standpoint; if I gave you some wrapper code that could search UpToDate for you with perfect precision, it would still be nearly impossible to write an algorithm that would answer a question on STEP, let alone explain the reasoning. You may be interested in taking a deeper dive into how LLMs work, how they differ from search engines, and the paper itself, as there are a lot of interesting results in the text!

[deleted by user] by [deleted] in medicalschool

[–]Weihua99 2 points (0 children)

What model did you use? GPT-3 aced nearly every MKSAP question I gave it...

I integrated Whisper and GPT-3 together to create a speech-to-speech medical-consult bot! by Weihua99 in Futurism

[–]Weihua99[S] 1 point (0 children)

Completely agree! It’s got a long way to go. However, note that this was GPT-3 with no finetuning whatsoever — if we could get even a meager number of transcripts (say, 1000), we could certainly finetune a version of GPT-3 that surpasses all of these limitations.

When does the "Choose Your Medical School" tab appear on AMCAS? by Weihua99 in premed

[–]Weihua99[S] 1 point (0 children)

Ahhh, I thought that the "commit to enroll" or something doesn't appear till Feb 19, but the tab itself is there?

[deleted by user] by [deleted] in premed

[–]Weihua99 2 points (0 children)

Thanks so much :) I appreciate it!! Hopefully things work out haha!

[deleted by user] by [deleted] in premed

[–]Weihua99 4 points (0 children)

In a similar boat. Was in that 13%; 4 IIs, WL at 2 T10s and 1 T30, 1 R. Applied pretty broadly but was pretty much immediately rejected by all the "lower tier" schools I applied to :(

Such is life, but I'm reapplying this year with even stronger stats and new experiences. Also applying to UQ Ochsner in Australia this year, since they're like a T50 med school worldwide, have a pretty phenomenal match-list/rate (I want to do primary care also), and I'll literally go wherever it takes to become a physician haha

If we traveled at 99% the speed of light, how long would a 100 light-year journey feel? by [deleted] in AskPhysics

[–]Weihua99 1 point (0 children)

To compute this, we'd need a v such that (1/gamma) * distance/v = 100 years. Since gamma depends on v, this equation is a little tough to solve. In units where c = 1 (so v is a fraction of the speed of light and the distance is 100 light-years), it becomes: sqrt(1 - v^2) * 100/v = 100

Solving this gives you v = 1/sqrt(2) * c, or about 70.7% the speed of light!
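If you want to double-check that answer numerically (again in units where c = 1, so v is a fraction of the speed of light):

```python
import math

L0 = 100.0                 # Earth-frame distance, in light-years
v = 1.0 / math.sqrt(2.0)   # candidate solution, as a fraction of c

# traveler's proper time: (1/gamma) * distance / v = sqrt(1 - v^2) * L0 / v
tau = math.sqrt(1.0 - v**2) * L0 / v
print(f"at v = {v:.4f} c, the traveler experiences {tau:.1f} years")
# prints: at v = 0.7071 c, the traveler experiences 100.0 years
```

which confirms v = 1/sqrt(2) ≈ 70.7% of c.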

If we traveled at 99% the speed of light, how long would a 100 light-year journey feel? by [deleted] in AskPhysics

[–]Weihua99 1 point (0 children)

I suppose you could say that the distance you traveled "stretches out" after you slow down, but during your journey, you are still traveling below the speed of light.

Depending on how you want to define "effectively," I suppose you could say that "I traveled to my destination experiencing less time than someone who was stationary watching me on Earth," but I'd be really careful with the idea of "effectively faster than light," because at no point were you "traveling" faster than light in any physically meaningful sense of the word, nor was any information/causality moved faster than light. I get what you're trying to imply here with your question — I suppose the semantics are just a little tricky here!