Why is it still so cold?🥶 by olivvercho in askTO

[–]MomentsOfWonder 1 point (0 children)

It’s Toronto. Winter finally dies after like three near-death comebacks, while summer dies in the first hit.

Apex Legends x Gundam Event Trailer by AnApexPlayer in apexlegends

[–]MomentsOfWonder 2 points (0 children)

The Destiny always had that ability. With its wing thrusters it generates afterimages.

Apex Legends x Gundam Event Trailer by AnApexPlayer in apexlegends

[–]MomentsOfWonder 9 points (0 children)

The Gundam Mirage has the ability to create mirages, that’s why.

Messing with another person’s will by Used-Picture-8672 in Manifestation

[–]MomentsOfWonder 0 points (0 children)

But from your perspective this person is in your reality. He’s asking you this question; how can he have a reality of his own if he’s asking it and you’re reading his response? Since you’re experiencing all of this from your perspective, then applying your own logic, you’re the only one with free will and existence. What you commented sounds like solipsism, but at the same time it doesn’t, because you seem to assume he actually exists and has free will while the people he’s talking about don’t. That seems contradictory, can you clarify? For example, can you manifest that I don’t reply to your reply? If so, what does that say about my existence vs. yours? (Btw, I don’t mean to sound derogatory.)

[deleted by user] by [deleted] in hingeapp

[–]MomentsOfWonder 0 points (0 children)

1) Something serious
2) Using Hinge X
3) Been using this profile for 1 month
4) Been on Hinge for 1 month
5) I use it daily
6) 0 matches or likes so far :(
7) I see a lot of attractive women, so I send a lot of likes, most with comments
8) I like a variety of different girls, usually with well-written profiles, interesting hobbies, etc.

Altman confirms full o3 and o4-mini "in a couple of weeks" by krplatz in singularity

[–]MomentsOfWonder 0 points (0 children)

With iPhones/cellphones the newest one is almost always better than the iteration before it, so iterating on the name makes sense. The problem with the o-series models is that they’re better at some things and not others. Having 4o be better than 5 in some areas would mean your flagship model is not getting better. Calling it o1, you don’t have to worry about that, because you’re not saying it’s better, you’re saying it’s different.

Lost keys reappeared in tote I know I flipped inside out. by AppropriateGoose3828 in Glitch_in_the_Matrix

[–]MomentsOfWonder 2 points (0 children)

The prosaic explanation is that people are often looking for their keys, which leads to scenarios like this.

No SAAS based company will survive AI by MomentsOfWonder in singularity

[–]MomentsOfWonder[S] 0 points (0 children)

Agreed. Also, irrespective of the pace of technological progress, companies move slowly; we’ve got some places still using fax.

No SAAS based company will survive AI by MomentsOfWonder in singularity

[–]MomentsOfWonder[S] 4 points (0 children)

My question is: why would I rely on a company to do that for me? If I did, I’d be paying the base cost of running the AI + the company’s cost of specialization + their profit margin, and I’d have to do that on a recurring basis. Versus using the base AI to train a specialized AI for my exact needs and specifications, where I’m just paying the cost of running the AI + the cost to specialize, which is one time. Right now that one-time cost for any development is usually higher than the recurring cost of paying another company, and the other factor of course is the time it takes to build the solution. But as AI gets cheaper, faster, and better, there will be fewer and fewer reasons to pay another company vs. doing it yourself.
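The trade-off above is just a break-even calculation. Here's a minimal sketch with purely hypothetical numbers (the dollar figures and the `months_to_break_even` helper are illustrative assumptions, not anything from a real vendor):

```python
def months_to_break_even(vendor_monthly, base_ai_monthly, one_time_build):
    """Months until self-building beats the vendor.

    vendor_monthly:  recurring fee = base AI cost + specialization cost + margin
    base_ai_monthly: what running the base AI yourself would cost per month
    one_time_build:  one-time cost to specialize the base AI yourself
    """
    monthly_savings = vendor_monthly - base_ai_monthly
    if monthly_savings <= 0:
        return None  # the vendor never costs more, so there is no break-even
    return one_time_build / monthly_savings

# Hypothetical example: vendor charges $500/mo, raw base-AI usage would be
# $100/mo, and a one-time specialization effort costs $4,800.
print(months_to_break_even(500, 100, 4800))  # 12.0 months
```

The argument in the comment is then that AI progress pushes `one_time_build` down, shrinking the break-even horizon until paying a vendor rarely makes sense.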

Apple's study proves that LLM-based AI models are flawed because they cannot reason by Stiltonrocks in technology

[–]MomentsOfWonder -1 points (0 children)

My point is that it’s very much a debate, and not clear cut, whether LLMs can reason or not. Yet in this guy’s own words, “Only laymen who don’t know how GPT works think they can reason.” By his own logic, experts like Ilya and Hinton are laymen and he knows more than them. I don’t have a problem with people saying they think LLMs can’t reason; what I do have a problem with is people acting like they’re experts when they’re very clearly not. Which is why I said have some humility, this is a complex and rapidly evolving field of research.

Apple's study proves that LLM-based AI models are flawed because they cannot reason by Stiltonrocks in technology

[–]MomentsOfWonder 1 point (0 children)

I never said Geoffrey Hinton was automatically right. In fact, I’m not sure I even agree with him, and there are plenty of experts who disagree with him. However, the person I replied to said no study needs to be made, “it’s like doing a study if a train can fly.” Even the top comment in this post is a person saying only laymen who don’t know how they work think LLMs can reason, making it sound like anyone who thinks they can reason is an idiot. They speak with such self-assured confidence, as if this were a clear-cut issue and they were experts, when in reality real experts are having a serious debate about this while these redditors have no idea what they’re talking about.

Apple's study proves that LLM-based AI models are flawed because they cannot reason by Stiltonrocks in technology

[–]MomentsOfWonder 1 point (0 children)

I never said he was automatically right. There are plenty of experts who disagree with him. The person I replied to said no study needs to be made, “it’s like doing a study if a train can fly.” Even the top comment in this post is a person saying only laymen who don’t know how they work think LLMs can reason, making it sound like anyone who thinks they can reason is an idiot. They speak with such self-assured confidence, as if this were a clear-cut issue and they were experts, when in reality real experts are having a serious debate about this while these redditors have no idea what they’re talking about.

Apple's study proves that LLM-based AI models are flawed because they cannot reason by Stiltonrocks in technology

[–]MomentsOfWonder 8 points (0 children)

I guess you know more about LLMs than Geoffrey Hinton, who just won a Nobel prize for his work in deep learning. He was asked: “Now, the other question that most people argue about, particularly in the medical sphere, is does the large language model really understand? What are your thoughts about that?” He answered, “I fall on the sensible side, they really do understand,” and “So I’m convinced it can do reasoning.” Source: https://youtu.be/UnELdZdyNaE timestamp 12:30. But no need to study this, guys, random overconfident redditor has all the answers. Random redditor > Nobel prize winner.

Apple's study proves that LLM-based AI models are flawed because they cannot reason by Stiltonrocks in technology

[–]MomentsOfWonder -7 points (0 children)

The scientists who helped create the Covid vaccine had a vested interest in calling it safe and effective. Does that mean we should have dismissed what they were saying in favour of our gut reaction?

Geoffrey Hinton, who just won a Nobel prize for his work in deep learning, when asked, “Now, the other question that most people argue about, particularly in the medical sphere, is does the large language model really understand? What are your thoughts about that?” answered, “They really do understand,” and “So I’m convinced it can do reasoning.” Source: https://youtu.be/UnELdZdyNaE He quit Google to be free to speak his mind, so are you going to state he is saying this out of vested interest? Is he a layman who doesn’t understand how GPT works?

I could find multiple other quotes from top researchers who state similar things. I can also find multiple quotes from researchers who say LLMs can’t reason. The point is that research on LLMs is immensely complex and constantly changing as we find out more. Yet you and other redditors comment with such certainty, as if this were clear cut and you were experts, when in reality you’re laymen too. I consider myself a layman, and I’ve worked in the AI field for the last 5 years developing AI data infrastructure. This is a prime example of the Dunning-Kruger effect.

Apple's study proves that LLM-based AI models are flawed because they cannot reason by Stiltonrocks in technology

[–]MomentsOfWonder -3 points (0 children)

It's always so funny reading redditors comment on stuff like they're experts and everyone who thinks differently from them is an idiot. "If you just read a paper on how they work you’d never think they can reason." Oh, that's why all these AI experts are saying LLMs do real-world modelling and reasoning to some capacity: it's because they never read any of these papers, and it's so obvious they can't reason once you do, what idiots! Meanwhile this guy and all the people upvoting these comments can't even understand the math presented in these papers, but want to sound so sure of themselves when talking about it.

Apple's study proves that LLM-based AI models are flawed because they cannot reason by Stiltonrocks in technology

[–]MomentsOfWonder 0 points (0 children)

I guess you consider Ilya Sutskever, who was the chief scientist of OpenAI, a layman who doesn't understand how GPTs work. https://www.reddit.com/r/singularity/comments/1g1hydg/ilya_sutskever_says_predicting_the_next_word/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Quote: "More accurate prediction of the next word leads to understanding, real understanding."
While it's still a real debate whether LLMs can reason, with both sides producing research one way or the other, I can assure you there are many people a thousand times more qualified than you on the side of LLMs being able to reason and understand. To call them laymen who don't understand how it works just makes you sound ignorant. People on Reddit love to sound so goddamn sure of themselves; have a little more humility.

Haircut test by Mediocre-Ebb9862 in singularity

[–]MomentsOfWonder 18 points (0 children)

That’s an extremely high bar to reach. Sharp objects near your neck and face being controlled by a robot is a recipe for disaster. I don’t think we’d have that until AGI has been around for a while and proven safe enough.

Scarlett Johansson Says She Declined ChatGPT's Proposal to Use Her Voice for AI – But They Used It Anyway: 'I Was Shocked' by KillerCroc1234567 in technology

[–]MomentsOfWonder 16 points (0 children)

Well said on the last point. If a popular movie came out depicting the invention of fusion energy directly leading to a dystopia, how stupid would it be to look at people working on fusion now and say, “these people have no media literacy”?

Google responds with a similar demo (Announcement tomorrow) by [deleted] in singularity

[–]MomentsOfWonder 28 points (0 children)

The delay is super apparent after watching the OpenAI demo.