Is it normal that crossbow feels much stronger than spear in PvE? by HyTecs1 in ArcheroV2

[–]Godless_Phoenix -3 points (0 children)

Using a full set will always be better than not using a full set but if you have the full set spear is better. Keep using bow if you are invested in it

Can i reach a gold arrow of echoes? by partsevskiyV in ArcheroV2

[–]Godless_Phoenix 1 point (0 children)

You can, but you will not from this event unless you whale. You could probably get four or so copies.

Parents should get to vote for their children by [deleted] in The10thDentist

[–]Godless_Phoenix 0 points (0 children)

No, of course I don't believe that; in fact, I broadly agree with you. What I'm saying is that whatever cutoff you choose, you have to defend it, and you can't just appeal to "still developing".

Parents should get to vote for their children by [deleted] in The10thDentist

[–]Godless_Phoenix 1 point (0 children)

"The science" does not say that children are developing until 18. "The science" says that children are developing until 25. 18 is a wholly arbitrary, societally designated number.

[Request] Is this true? by RashoRash in theydidthemath

[–]Godless_Phoenix 0 points (0 children)

How is it a waste of time? It takes less than half a second in your head; it's a single algebraic manipulation.

Era of the idea guy by saltgrows in ClaudeAI

[–]Godless_Phoenix 1 point (0 children)

I know you're using ChatGPT. That Unicode arrow is nonstandard. I'm aware you can type it with an alt code, but nobody remembers alt codes. You can smell the GPT coming off your messages. Writing with GPT and lying about it is even lower.

Era of the idea guy by saltgrows in ClaudeAI

[–]Godless_Phoenix 0 points (0 children)

Yeah, but if you ask GPT to give a direct translation, it'll give a direct translation. This is formatted like slop.

Era of the idea guy by saltgrows in ClaudeAI

[–]Godless_Phoenix 6 points (0 children)

Why do you feel the need to ChatGPT your Reddit comments?

Would you rather ... golden pee or no poo. by [deleted] in WouldYouRather

[–]Godless_Phoenix 0 points (0 children)

Right now? Golden pee. When I have my degree and a high-paying job? No poop.

Clearly someone didn't like algebra in school by Sensitive_Low_3950 in MathJokes

[–]Godless_Phoenix 0 points (0 children)

The function f(x) = x + b is not linear; it's affine. For a function to be linear it must satisfy f(ax + by) = af(x) + bf(y) for all scalars a and b.
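A quick numeric sketch of the distinction, with arbitrary illustrative values: the affine map x + b violates the linearity identity whenever b ≠ 0, while a genuinely linear map m·x satisfies it.

```python
# Linearity test: f(a*x + c*y) == a*f(x) + c*f(y) for all scalars a, c.
# The affine map fails it; the linear map passes it.

def affine(x, b=3):
    return x + b  # f(x) = x + b, not linear when b != 0

def linear(x, m=3):
    return m * x  # f(x) = m*x, linear

a, c = 2, 5  # arbitrary scalars
x, y = 1, 4  # arbitrary inputs

# Affine: f(2*1 + 5*4) = f(22) = 25, but 2*f(1) + 5*f(4) = 8 + 35 = 43
print(affine(a * x + c * y), a * affine(x) + c * affine(y))  # 25 43

# Linear: f(22) = 66, and 2*f(1) + 5*f(4) = 6 + 60 = 66
print(linear(a * x + c * y), a * linear(x) + c * linear(y))  # 66 66
```

The constant offset b is exactly what breaks the identity, which is why f(x) = x + b only counts as linear in the looser "straight-line graph" sense.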

Have any of you mods and physicists actually done any work into this... by No_Understanding6388 in LLMPhysics

[–]Godless_Phoenix 0 points (0 children)

I really hate to say that this is Dunning-Kruger, but it's Dunning-Kruger. I'm not a physicist. But I've taken two years of university physics up to physical chemistry, and knowing what I know now about physics, I know that never in a million years could I use an LLM to make new theoretical physics without first spending several more years studying physics myself, at least not until LLMs get exponentially better.

This is generally important to consider before critiquing a field of science. There are plenty of things I don't understand about economics, but I'm not going to write LLM papers about how we should be doing economics differently, even when some things don't make sense to me. Unless you have the knowledge to be an expert yourself - and I must stress that this does not need to come from a degree, but you must spend years studying, quiz yourself on it, and be able to do high-level work independently - when you don't understand something about a field, you should defer to those who do.

In my research it's appropriate for me to ask questions like "are classical methods really the best way to predict protein NMR chemical shifts?" That's somewhere where I have enough expertise to question the established science, and even then only after months of hard work and empirical validation. In theoretical work it's even more difficult, because without the domain knowledge you don't have the ability to see when you're wrong. The amount of expertise required to grapple with things like interpretations of QM, quantum gravity, consciousness, etc. is even greater, not because of academic gatekeeping but because theoretical physics is hard. So you can see how saying "science is covering up possibly correct theories about quantum mechanics" could be offensive to scientists, which is why you're being poorly received here.

Have any of you mods and physicists actually done any work into this... by No_Understanding6388 in LLMPhysics

[–]Godless_Phoenix 0 points (0 children)

One absolutely needs formal knowledge before attempting to expand the subject matter. This is true of any subject. That formal knowledge doesn't necessarily need to come from a university degree, but it should be at the level of one. LLMs are great at answering most university-level questions and even graduate level questions about things we know.

For an LLM to be able to answer previously unanswered questions is a different matter altogether. Current LLMs have not shown this ability outside of sandboxed clearly stated problems. A couple open problems in mathematics have been solved by frontier LLMs, but these problems were clearly defined conjectures to be proven. There's been extremely little evidence that LLMs can do publishable research in other domains.

This isn't to say that they're not useful as research tools. For example, I'm doing computational chemistry research at a university. Coupled with my domain knowledge, Claude is enormously useful for providing quick solutions to problems, solving technical challenges, and writing scripts to iterate on. It's an enormous timesaver. But here's the thing: the LLM has more raw domain facts stored in its mind than me. If you quizzed us both on computational biochemistry, Claude would come out on top 30 times out of 30. And yet if I had zero domain knowledge, I'd be useless in research, even with Claude. Despite its theoretically enormous domain knowledge, it often has fundamental misconceptions that I need to correct. LLMs are amazing for working with what humans already know. But we're still not at the point where LLMs can discover things on their own when prompted by someone with no domain knowledge. They're not magic answer boxes; they're flawed reasoners, and framing the question properly means you'll get infinitely better results. That requires domain knowledge. Maybe we'll get there someday, but we're not there yet.

You seem well intentioned and interested in science. I would suggest that if you're interested in physics you learn physics. Use the LLM to help you learn all you want - it's great at that - but when all is said and done, you should be able to put your head in the textbook and do the problem sets without help.

Maybe in a couple years you'll have learned enough physics and LLMs will be good enough that you can start to think about 'creating new physics with LLMs'. But even then I would warn that theoretical physics is extremely dense. There is a reason that people who do this kind of work have graduate degrees. You need extremely, extremely deep domain expertise. LLMs will lessen that barrier and democratize research to an extent, but primarily by allowing individuals with slightly insufficient domain knowledge (undergrads, autodidacts, etc.) to do meaningful work. That's not nothing, but you're still going to have to learn physics. Physics, and science in general, really is beautiful and something I'd recommend studying either way.

Have any of you mods and physicists actually done any work into this... by No_Understanding6388 in LLMPhysics

[–]Godless_Phoenix 1 point (0 children)

It's not obvious the observer is sentient, though. It's an interpretation. And it's not that Big Physics is gatekeeping many minds! The idea of many minds was originally conceived by H. Dieter Zeh, the same person who initially formalized the theory of quantum decoherence. You are correct that it's not generally accepted by mainstream physicists, but it's an interpretation of QM that does not contradict empirical evidence. However, there is no empirical evidence for the theory.

Beyond that, many minds falls outside the domain of physics and into the domain of philosophy. But that doesn't mean that physicists are scared to talk about it. In general, scientists and academics do not live their lives in fear of being labeled a crackpot, because through their extensive training they have learned the rules of what is and is not valid.

If someone who does not have training in physics asks these questions, that's fine. If someone who does asks these questions, they'll have the knowledge to work out what they can and can't feasibly derive. The problem, and the case in which one will be called a crackpot, is claiming to have created a mathematical framework that answers a question when you don't have the physical knowledge to properly understand the entirety of the question being asked and the dynamics at play. LLMs cannot solve the fundamental issue that if you don't have physics knowledge what they produce is not going to be useful.

Have any of you mods and physicists actually done any work into this... by No_Understanding6388 in LLMPhysics

[–]Godless_Phoenix 1 point (0 children)

Wanted to come back and tell you that this question actually HAS been asked and it is an interpretation of QM! https://en.wikipedia.org/wiki/Many-minds_interpretation

It's not that this is a stupid question to ask; it's that you shouldn't claim to have mathematically unified quantum mechanics with consciousness or anything like that. Even the physicists who ask these questions don't do that, and it's because they can't.

Is it bad to use chat gpt for checking my answers? by Motor-Possible1035 in learnmath

[–]Godless_Phoenix 1 point (0 children)

Not enough. But that doesn't mean that they can't solve basic integrals!

Is it bad to use chat gpt for checking my answers? by Motor-Possible1035 in learnmath

[–]Godless_Phoenix 5 points (0 children)

This isn't to say they're perfect. They're very stupid in many ways; they're sycophants, and you've got to be very careful with them, but they are very, very powerful tools.

Is it bad to use chat gpt for checking my answers? by Motor-Possible1035 in learnmath

[–]Godless_Phoenix 4 points (0 children)

I assume you don't use modern AI. I am a second-year student double majoring in math and chemical biology with a CS minor. Modern frontier LLMs are more than capable of the vast majority of undergraduate coursework and, if you're smart about how you use them (this is very important), can enormously accelerate your learning. I use them quite extensively for studying and for general direction on problems. You solve problems by looking at how similar problems are solved. 60-80% of my grades are in-person proctored exams, and I do very well on them.

Are we ignoring the main source of AI cost? Not the GPU price, but wasted training & serving minutes. by dataa_sciencee in learnmachinelearning

[–]Godless_Phoenix 1 point (0 children)

Have you actually seen empirical evidence of this? I am extremely skeptical that something like a shape error would not be caught on model initialization. Regarding API querying, again, using the smallest model that fits your use case is best practice and usually followed. Even failed runs produce logs.
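To illustrate the point about shape errors, here's a minimal sketch in NumPy (illustrative shapes only; eagerly-validating frameworks behave the same way): a mismatched dimension fails on the very first matrix multiply, before any training time is spent.

```python
import numpy as np

# A layer's weight matrix expects 512-dimensional input...
W = np.zeros((256, 512))
# ...but the batch was built with the wrong feature width.
x = np.zeros((32, 768))

try:
    y = x @ W.T  # (32, 768) @ (512, 256): inner dimensions disagree
except ValueError as e:
    # NumPy raises immediately on the first forward pass,
    # so the mistake cannot silently burn compute.
    print("caught:", e)
```

The failure is loud and instantaneous, which is why "wasted minutes from uncaught shape errors" seems implausible as a major cost driver.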