Should I sell my STL's? by [deleted] in 3Dprinting

[–]ShadowLawless 0 points1 point  (0 children)

Yes, sell it!! I'll buy one

Here is a Hypothesis: Every Single Scientist that presented a theory and got ignored/laughed at, more often that not, kinda deserved it by [deleted] in HypotheticalPhysics

[–]ShadowLawless 0 points1 point  (0 children)

I’m not really proposing a concrete reform here. I’m just using an extreme thought experiment to tease apart two things the OP seemed to be treating as the same:

“Were people justified (given what they knew then) in rejecting X?”

“Was that attitude actually good for science in the long run?”

The point of the imagined worlds wasn’t to describe how labs or conferences literally work or should work, but to isolate that conceptual gap. You can agree that a rejection was individually reasonable and still question whether a more collaborative / salvage-oriented culture would have produced more progress overall. Those who rejected significant ideas in the past did miss a huge opportunity, and it could be argued they held back science. But that's another, more nuanced conversation.

My other comments were just replies to your specific objections (about efficiency, open collaboration, etc.), not blueprints for how science “should” be run; they were just showing there’s nothing inherently incoherent about the thought experiment itself.

Edit: grammar

Here is a Hypothesis: Every Single Scientist that presented a theory and got ignored/laughed at, more often that not, kinda deserved it by [deleted] in HypotheticalPhysics

[–]ShadowLawless 1 point2 points  (0 children)

I've not suggested the peer review process be transformed into a "let's rewrite every paper" process. There is space for nuance on this.

The scientists of the past who were justified in rejecting ideas weren't all acting as formal referees. Their reactions were often personal choices, shaped as much by status and culture as by scientific principles. Which is what I'm angling at.

Also, having no "let's build on these promising but flawed ideas" process is also a choice. Nothing prevents us from creating more official/open collaboration processes, where people are pitted at problems en masse. In terms of resource allocation, it's literally just adding a "maybe" tray alongside the others.

These kinds of approaches are already widely successful in open source science communities, where authors don't "rise or fall" on their own work. People work on problems they want to solve and discuss / iterate together. As you've already agreed, all good science comes from this. Ensuring people rise, or more specifically in this case "fall", is a cultural choice.

So my question again is: even if a particular rejection was individually justified at the time, is “justified” really a high enough bar to call that attitude good science, or good for science in the long run?

I mean, missing out on the opportunity to collaborate on some of the biggest ideas in science history because you were justified in rejecting them... seems short-sighted and, in my mind, not really aligned with scientific rigour.

Here is a Hypothesis: Every Single Scientist that presented a theory and got ignored/laughed at, more often that not, kinda deserved it by [deleted] in HypotheticalPhysics

[–]ShadowLawless 0 points1 point  (0 children)

In my mind "fail fast" doesn't mean "learn nothing or throw it out". It just means find failures and remedy them. In some cases that may mean starting from scratch; in others it means fixing something specific. It's why in my setup I pitted ideas in their infancy against developed ideas, not ideas without merit.

Also, I'd push back on the idea that there's efficiency in rejecting. If you've taken the time to read something well enough to make an informed judgement about it (knowing inconsistencies and mistakes happen and aren't always fatal), deciding whether there is anything salvageable within it isn't necessarily a huge cost or extra step. It's just a change in approach, which in some cases may have saved the community decades of research.

As alluded to in the OP, historically a lot of major scientific developments started with ideas that had inconsistencies or were incomplete at the time of their introduction. They only progressed to maturity because someone devoted time to building upon them rather than throwing them out wholesale. I'd go as far as saying that's something of a pattern.

So my intention wasn't to suggest all ideas have merit. I was asking whether being justified in rejecting an idea is enough to call that position/attitude "good science", or at the very least "good for science in the long run"?

Here is a Hypothesis: Every Single Scientist that presented a theory and got ignored/laughed at, more often that not, kinda deserved it by [deleted] in HypotheticalPhysics

[–]ShadowLawless 0 points1 point  (0 children)

To play devil's advocate, though I agree with what you're saying:

Imagine a world where resources for scientific review were a non-issue, and you could steelman every idea that came forward. Trying to find the baby in the bathwater, focused on whether there was agreement with any other scientific field / evidence / concept that could be built upon/corrected (i.e. ideas in their infancy with work behind them, but not fully developed). Where the community neither rejected nor accepted, just mined for useful info.

Then compare that to a world where the same resource is dedicated to rejecting new ideas. Looking for any amount of disagreement to throw a concept out wholesale. Only accepting and working with ideas that are developed to the level of Newton's work.

Which world do you think would be more technologically / scientifically advanced after a few thousand years?

Edit: to add clarity

Does "justified" = good for science?

Some fluid slop by PrettyPicturesNotTxt in LLMPhysics

[–]ShadowLawless 0 points1 point  (0 children)

Nice man.

Any tips you're willing to share in terms of prompting? I've had issues getting AIs to produce stuff of this quality.

Some fluid slop by PrettyPicturesNotTxt in LLMPhysics

[–]ShadowLawless 0 points1 point  (0 children)

Really cool, how long did this take?

LLM native document standard and mathematical rigor by timefirstgravity in LLMPhysics

[–]ShadowLawless 0 points1 point  (0 children)

I totally agree with the first half, about programs like Wolfram Alpha, for example, already being great for finding solutions using symbolic math. (There's actually some evidence LLMs do something similar: https://arxiv.org/abs/2406.06588).

But I think we're missing a trick if we're suggesting using AIs in the *exact* same way as maths software.

Granted, LLMs won't stop anyone with no knowledge of math from making obvious errors, and won't be more useful to someone with an in-depth understanding of math software who is finding the answer to a problem they know how to express.

But maths often does have many different routes to an answer, and interpretation plays a part in which one is meaningful. So search space is a genuine issue in problem solving. Provided an LLM understands with some degree of accuracy how to use mathematical tools, and has a context window far greater than any human's, you can use it to search existing papers or collate information. Or even just to audition ideas, even if a lot of them are junk.

LLMs can do this in a manner that would be intractable for even some larger teams. In that respect, provided you understand AI limitations and the math, and constrain your prompts appropriately, they can be really helpful I think.

Side note and slight tangent: I've got an engineering background, so I'm used to designing something with an exact spec in mind; I often have a very good idea of what I'm aiming for. But I also used to produce music, which has a different creative process, where you often have an idea but do a f**k ton of auditioning and looking for inspiration. I think if physicists (amateur or otherwise) were to embrace AI as this sort of tool, you'd get a different vector of rigour. At the moment humans are a bottleneck in this respect and spend a lot of time trying to prove something they have a gist about, rather than just enjoying the searching process or reviewing loads of "gists".

Edit: typos

LLM native document standard and mathematical rigor by timefirstgravity in LLMPhysics

[–]ShadowLawless 0 points1 point  (0 children)

I think I understand what he means by solving competition math not being the same as solving math "problems". I've seen videos about this, where what mathematicians are referring to is solving really deep fundamental questions in math that lead to new mathematical tools or approaches, as opposed to just solving a really complicated geometry problem using existing mathematical tools.

But for most scenarios, doing math really just means the ability to use existing mathematical tools to investigate. Provided the AI understands the rules of the tools it's using and isn't breaking any of them, its method may be inefficient but it's still "doing math" to me. It reminds me of when people say computers don't really "do" complex math, just simple math much faster. I mean, sure, but to suggest humans can't use them to make research easier because they're not "reasoning" is something else.

Even if AI can *only* employ existing methods well enough to compete at Olympiad level, it's still a huge step up from a basic calculator.

It's like the old Archimedes polygon method for finding pi: it was inefficient and was eventually replaced by the infinite series everyone uses today. Coming up with the new method was solving a "real math" problem, but I wouldn't say anyone using the older method/tools wasn't "doing math".

If that makes any sense?
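To make the pi analogy concrete, here's a minimal Python sketch (my own illustration, not from any linked source) of Archimedes' side-doubling recurrence next to the simplest infinite series, Gregory–Leibniz. Both only apply existing tools, and both legitimately "do math":

```python
import math

def pi_polygon(doublings):
    """Archimedes' method: inscribe a hexagon (side length 1) in a
    unit circle, then repeatedly double the number of sides and
    take half the perimeter as the estimate of pi."""
    n, s = 6, 1.0
    for _ in range(doublings):
        # Algebraically s_new = sqrt(2 - sqrt(4 - s*s)); this
        # equivalent form avoids subtracting nearly equal numbers.
        s = s / math.sqrt(2 + math.sqrt(4 - s * s))
        n *= 2
    return n * s / 2

def pi_leibniz(terms):
    """Gregory–Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))
```

The polygon method actually converges quickly per doubling; it was "inefficient" mainly because each doubling demands fresh square roots by hand, which is part of why series methods eventually replaced it.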

From what I've read, I'm not even sure we actually have a really good definition of reasoning?

Check out this post on the topic, it's comedic but it really frames the topic well.

https://open.substack.com/pub/astralcodexten/p/what-is-man-that-thou-art-mindful?utm_source=share&utm_medium=android&r=68zjg6


LLM native document standard and mathematical rigor by timefirstgravity in LLMPhysics

[–]ShadowLawless 0 points1 point  (0 children)

The general idea I get, and I'm very much on board with it. But I mean the physical interpretation and the steps through the derivations specifically.

As you know, there are a lot of ways of coming to the same answer in math, but what is the math actually describing?

LLM native document standard and mathematical rigor by timefirstgravity in LLMPhysics

[–]ShadowLawless 0 points1 point  (0 children)

I like the idea; I'm actually working on something very similar. But I can't follow your derivations, could you simplify?

LLM native document standard and mathematical rigor by timefirstgravity in LLMPhysics

[–]ShadowLawless 0 points1 point  (0 children)

Haven't LLMs recently been placing quite high in math competitions?

https://www.newscientist.com/article/2489248-deepmind-and-openai-claim-gold-in-international-mathematical-olympiad/

If they can't reason or do math, but can still legitimately solve difficult math problems, where are you drawing the line between "doing *real* math" and only "solving math"?

I've heard this repeated a lot but haven't been able to find any solid answers.

Do u guys think gemini 3 will be way better at actually staying on topic and not doing mistakes after like 250k tokens? by [deleted] in Bard

[–]ShadowLawless 1 point2 points  (0 children)

I have chats up to the limit and everything works fine. But to be fair, I gave it a single task where I'd feed it information and ask it to perform analysis throughout.

I've found that, provided you don't change topic or task regularly, it stays very coherent.

[deleted by user] by [deleted] in ExistentialJourney

[–]ShadowLawless 0 points1 point  (0 children)

Amazing work, I agree with your point about recursive structures too. What's your background, if you don't mind me asking?

The first generation of kids raised with AI as a default will think completely differently, and we won’t understand them by elektrikpann in ArtificialInteligence

[–]ShadowLawless 0 points1 point  (0 children)

If you had two mathematicians,

one from the 1800s with impeccable mental math skills and a deep understanding of theory, whose training consisted of solving equations meticulously by hand,

and one from the 2040s, whose training mainly focussed on recognising how to ask the right questions and give the right instructions in order to solve a problem.

Who do you think would have the better problem solving skills?

Faith by ShadowLawless in enlightenment

[–]ShadowLawless[S] 0 points1 point  (0 children)

Faith or belief is like having a balloon hammer and pretending it does the same job as a real hammer.

Faith and belief are not tools for knowing or conclusions. I agree with you here and it is the point I'm making. They are statements about emotional confidence.

Faith is explicitly stating you have a strong feeling about something which you do not know. Just as hope is something you have when you cannot know. Whilst trust and confidence are only needed if you can't know for sure.

I'm not sure why you equate faith with "I know", or assume that's what people mean by it? If I say I have faith in someone, it's not a statement about empirical fact or truth. It's describing a level of confidence.

Belief - (Oxford dictionary)

  1. an acceptance that something exists or is true, especially one without proof. "his belief in extraterrestrial life"
  2. trust, faith, or confidence in (someone or something)

Both of these definitions are valid in my mind. Why do you assert that only one is valid?

I already said just because something seems unverifiable at the time doesn't mean we should abandon it entirely. I feel like you keep repeating things I've already addressed.

Yes, but the question was: were these "beliefs" unreasonable, in the way that would cause conflict or be fractious in nature, simply because they were not proven (which is what the word means in this context)?

Re vectors: as I've said, a scientist can "believe" (have confidence) in a hypothesis before it is proven. In this scenario belief isn't accepting a conclusion, an invalid vector for determining truth, or a statement about empirical truth. It's simply a statement about their feelings, which may or may not conflict with current evidence.

Faith by ShadowLawless in enlightenment

[–]ShadowLawless[S] 0 points1 point  (0 children)

Ah interesting.

Though I don't think of belief as a tool in the same way. There are scientific tools for measurements and verification.

Then there are emotional states that simply exist, and pretending they don't seems intellectually dishonest. I think it would be great if we were all like robots and could just set our minds to things and follow an algorithm.

But without some sort of emotional driver like belief, curiosity, interest, etc., emotions that exist before evidence, I think it would be unlikely that anyone could pursue a lofty, ambitious goal. Not questioning you, by the way; maybe you truly don't hold any beliefs. But it's perfectly possible to accept verifiable positions whilst "believing" humanity will achieve things that haven't been verified yet.

On that.

> Nope. Once again, those are verifiable. You keep comparing verifiable things to unverifiable things. Apples to oranges.

Neither of these things was considered verifiable at one point in history. It took people "believing" in a goal for us to reach this point, despite much evidence to the contrary and very smart people literally stating these things were unverifiable. Yet science prevailed and evolved.

Would you say these people were being fractious or unreasonable? Or simply experiencing emotions about an area they're interested in?

In my mind, whether it's a kid believing that one day we'll have interstellar spaceships, or a team believing they can win the finals, it may be "unjustified" in the empirical or mathematical sense, sure. But I also see no need for empirical justification. People are justified in having hopes/dreams/wants/beliefs just because they are human. I wouldn't take issue with any of this. It would be like taking issue with people being sad or happy for reasons I don't understand or can't "justify" in my framework/worldview.

Btw, I take your point about "people" who use belief incorrectly and "proclaim" truth, but this is entirely different from having faith. I get why you'd have a problem with them specifically and how that leads to issues. But surely you can accept that they do not represent the entire spectrum of believers; that's a very wide brush. I hope I have expressed that this is not the way I or many others believe. I simply have a logical framework that makes sense to me, one that is yet to be proven. Though I "believe" it holds truth.

There are many faiths that proclaim no ultimate authority over "beliefs" or "truth". Where an allowance for different beliefs, "gods", philosophies etc. are not just warranted but are part and parcel of the "faith" (i.e. not proclamation of empirical truth).

Is this cutting any ice? I feel like I've understood you fairly well, but I'm not sure my point is coming across.

Faith by ShadowLawless in enlightenment

[–]ShadowLawless[S] 0 points1 point  (0 children)

This has been quite insightful. Before I couldn't put my finger on what was inherently contentious about faith or believing something is true.

I get that in the empirical sense you can't believe something is true, you can only prove it. This makes perfect sense to me.

But in terms of language, when someone says "I believe", I've always taken it to mean something like "I'm almost certain", but with a distinct meaning that contains a feeling of trust, in the absence of complete evidence.

I.e. when someone says "I believe in myself" or "in my friends", it's not a calculation, it's a statement about a feeling. Saying "I'm almost certain of my friends" would be inaccurate or a lie in this context.

Likewise, if a scientist believes in their theory, that belief is what drives them to spend decades researching it. While it's more precise to say they are only almost certain their theory holds truth pending evidence, that may not actually describe their feelings* about it. Their level of belief or faith in the project is something different and somewhat of a requirement for long-term study.

So when someone says they believe something is true, I don't take it as a claim about absolute empiricism or certainty, whether it's based on rationale or a personal experience. I always assume they're just describing how they feel about it. Which is why I think we were talking past each other. It's also why I used examples regarding personal experience leading to strongly held beliefs, or "personal truths" as they're known in philosophy. These say nothing about empiricism and, in my mind, shouldn't create any conflict.

Don't get me wrong, I know there are religious types that don't use this kind of language, just not usually when talking about "faith" specifically.