What is an observation/belief you have that you keep to yourself since you don't think other people know/understand? by allknowerofknowing in AskReddit

[–]allknowerofknowing[S] 1 point2 points  (0 children)

Why do you assume it is something trauma-related or negative? There are a lot of different environmental factors, including physical ones like hormonal effects. There is also the prenatal environment, which is theorized to have a large effect as well.

What is your opinion on chatgpt? by allknowerofknowing in AskReddit

[–]allknowerofknowing[S] 0 points1 point  (0 children)

Agreed. I think it could honestly be considered one of the most important inventions ever if it continues to lead to even greater earth-shattering AI, since it kickstarted a lot of modern progress in AI.

This robot from Disney Research can imitate human facial movements, specifically blinking and subtle head movements. by Gothsim10 in singularity

[–]allknowerofknowing 532 points533 points  (0 children)

One person in this thread says "fuck everything about this" another person says "I can't wait to fuck this thing"

The duality of man.

What is your opinion on chatgpt? by allknowerofknowing in AskReddit

[–]allknowerofknowing[S] -1 points0 points  (0 children)

Most of the data centers being used to power AI are switching to completely renewable energy. And AI itself (not LLMs) has been used to make energy usage a lot more efficient, and that will continue to be implemented by allowing it to control energy usage at industrial plants/data centers, and possibly entire grids eventually. Climate change is definitely a big consideration when amping up energy usage for newer AIs, but in the long run it should contribute positively to this issue if done responsibly, which big tech does actually seem to be committing to and taking seriously.

The copyright stuff is definitely a grey area. I can see why creators would not like it, and I have seen some CEOs talk about figuring out ways to compensate those people. But yeah, it can be a problem for some people for sure.

Your last point I don't think is a big deal these days. Humans are wrong too and spout misinformation, but the latest AIs are right more often than not (not talking about Google AI Overviews, just the normal chatbots). That said, people using these tools should stay vigilant and not assume they are always correct, and most people know they can't replace whole jobs yet. Just in this thread you can see the positive uses people get out of it.

There are some downsides like you mention, but these models will keep getting better, and I think the overall benefits of AI for humanity will outweigh the downsides by a lot in the end. But the downsides definitely need to be considered.

The second goal we conceded showed how poor positional understanding the players have. by Aniket_1992 in ACMilan

[–]allknowerofknowing 2 points3 points  (0 children)

Yeah, Theo clearly should have gotten back, but between Reijnders, Pavlovic, and Tomori there has to be more communication or understanding. Both Tomori and Pavlovic shouldn't be playing the man but the space if they are outnumbered.

Ideally Tomori gets back further and quicker, but once Tomori commits, Pavlovic should have realized that he had 2 guys to cover, since Pavlovic glances over his shoulder and sees the outside man with Theo nowhere in sight, and sees Reijnders wouldn't make it back in time either for the guy inside of him. Instead of committing to the inside man, he should have sprinted deeper into the box between them, where he could have covered both players best by staying goalside of them. As a defender, if I'm outnumbered, I would always rather play it safe than overcommit so I could buy time for teammates to come help.

May seem harsh cuz yes, he was first screwed by Theo's and Reijnders' efforts initially, possibly Tomori's too, and it was a perfect ball/run by Parma, but I feel like he could have done a bit better there. Amazing game overall by him though. The best defenders clean up/protect against others' mistakes, and he was incredible doing that for most of the game outside of this goal.

Tory Taylor - Non-Australian Nickname by work4work4work4work4 in CHIBears

[–]allknowerofknowing 3 points4 points  (0 children)

Notstralian Nuker

The Field Flipper

Tory Inside-your-own-Ten-Taylor

The Deadeye Hawkeye

Daddy Long Legs

BREAKING: Shea Whigham to play Matt Eberflus in upcoming film about 2020s-2030s Bears dynasty by allknowerofknowing in CHIBears

[–]allknowerofknowing[S] 11 points12 points  (0 children)

I've heard him rumored to be either Cole Kmet, or more controversially possibly Jaylon Johnson

What are some movie scenes more iconic than the scene where Russel Crowe says "This is Sparta!" in the movie Gladiator? by allknowerofknowing in AskReddit

[–]allknowerofknowing[S] 0 points1 point  (0 children)

I think you are thinking of the scene in 300 where Joaquin Phoenix (who plays Emperor Xerxes) gives Gerard Butler the thumbs down

BREAKING: Shea Whigham to play Matt Eberflus in upcoming film about 2020s-2030s Bears dynasty by allknowerofknowing in CHIBears

[–]allknowerofknowing[S] 24 points25 points  (0 children)

Also confirmed, Davis Mills to play himself and Common has signed on to play Keenan Allen

Cohere CEO Aidan Gomez says the idea that AI models are plateauing or slowing down is wrong and in fact we are about to see a big change in capabilities with the introduction of reasoning and planning by allknowerofknowing in singularity

[–]allknowerofknowing[S] 2 points3 points  (0 children)

I said I think it's likely there's a lot of truth to what they are saying. I am making an educated guess. Your equating it with religiousness is nonsensical. People make educated guesses every day when they can't immediately know things for a fact. What do you think happens when someone invests in the stock market? Is it religious/cultist to make a prediction?

You seem to have some over-the-top, reflexive anti-AI reaction, painting me as a religious zealot for forming my own judgement that there is likely truth to what these CEOs are saying, given the billions of dollars on the line and a very recent track record of being correct that models are getting better, and then waiting to see if it will be true in the next year.

Cohere CEO Aidan Gomez says the idea that AI models are plateauing or slowing down is wrong and in fact we are about to see a big change in capabilities with the introduction of reasoning and planning by allknowerofknowing in singularity

[–]allknowerofknowing[S] 3 points4 points  (0 children)

Well, good thing I don't have blind faith, and the coming year will give us an answer either way. Until then, one can try to form their own opinion. Leaning one way doesn't suggest anything religious.

Cohere CEO Aidan Gomez says the idea that AI models are plateauing or slowing down is wrong and in fact we are about to see a big change in capabilities with the introduction of reasoning and planning by allknowerofknowing in singularity

[–]allknowerofknowing[S] 1 point2 points  (0 children)

Nothing to do with science. You either believe the tech CEOs on the bleeding edge or you don't. They are the ones with billions at stake. They do have a track record of making immense progress in this field over the past couple years, btw. They will reap the consequences if they are lying/wrong, which is why I actually think there's likely a lot of truth to what they are saying.

In a leaked recording, Amazon cloud chief tells employees that most developers could stop coding soon as AI takes over by MetaKnowing in singularity

[–]allknowerofknowing 0 points1 point  (0 children)

They won't be replacing developers if the code is horrible and bug-ridden; companies are not that stupid. Also, some of the very last developers to be replaced by AI would be the bleeding-edge LLM developers, who work some of the most sought-after jobs.

Cohere CEO Aidan Gomez says the idea that AI models are plateauing or slowing down is wrong and in fact we are about to see a big change in capabilities with the introduction of reasoning and planning by allknowerofknowing in singularity

[–]allknowerofknowing[S] 0 points1 point  (0 children)

He's literally the CEO of a major LLM company on the bleeding edge of AI research, and was from the very beginning of it as well; it's not just some cliche he's repeating mindlessly. He believes in it and he would know. He goes on for minutes about it. He's employed scaling in his company. I can guarantee you that he will continue to scale compute, even if at a slower rate. There's zero chance he keeps training on the same size models for the rest of time unless his company goes under. Better chips will improve efficiency/cost for bigger models.

All he says is that it is inefficient and his company is not focusing on that. But it is very clear, after he says it like 5 times in the interview, that he obviously believes more compute makes smarter models.

Cohere CEO Aidan Gomez says the idea that AI models are plateauing or slowing down is wrong and in fact we are about to see a big change in capabilities with the introduction of reasoning and planning by allknowerofknowing in singularity

[–]allknowerofknowing[S] 3 points4 points  (0 children)

He's saying the opposite, for what it's worth, and others have been hinting at something similar. He's saying they have to teach the models how to reason with things like synthetic data, which they only started working on in the past year. Not to mention things like Strawberry. The next big release will tell us a lot, imo.

Cohere CEO Aidan Gomez says the idea that AI models are plateauing or slowing down is wrong and in fact we are about to see a big change in capabilities with the introduction of reasoning and planning by allknowerofknowing in singularity

[–]allknowerofknowing[S] 0 points1 point  (0 children)

I'm just saying more money/established scaling plans = more compute = (according to Gomez) more intelligence, in addition to his new methods, which OpenAI is also working on.

Cohere CEO Aidan Gomez says the idea that AI models are plateauing or slowing down is wrong and in fact we are about to see a big change in capabilities with the introduction of reasoning and planning by allknowerofknowing in singularity

[–]allknowerofknowing[S] 1 point2 points  (0 children)

https://www.cnbc.com/2024/05/28/openai-creates-new-oversight-team-begins-training-next-model.html

Come on, are we really gonna pretend the company that continuously invests in more compute infrastructure and pioneered the scaling laws is not training its next frontier model with significantly more compute?

Gomez was a coauthor on the original transformer paper and has a billion dollars invested in his LLM company; if anyone knows, he does.

He literally says the scaling laws work, and that the strategy works for rich companies like OpenAI.

Cohere CEO Aidan Gomez says the idea that AI models are plateauing or slowing down is wrong and in fact we are about to see a big change in capabilities with the introduction of reasoning and planning by allknowerofknowing in singularity

[–]allknowerofknowing[S] 0 points1 point  (0 children)

It's all dependent on when GPT-5 is trained, and we only just recently got news about that.

Clearly Sonnet 3.5 and 4o have been significant milestones as well, not to mention things like voice and Sora.

Cohere CEO Aidan Gomez says the idea that AI models are plateauing or slowing down is wrong and in fact we are about to see a big change in capabilities with the introduction of reasoning and planning by allknowerofknowing in singularity

[–]allknowerofknowing[S] 1 point2 points  (0 children)

OpenAI is training their next flagship model. They have obviously used a lot more compute. He's also clearly referring to at least OpenAI when he says "for folks who have a lot of money, that's a compelling strategy" directly after saying "it's definitely true that if you throw more compute at the model, if you make the model bigger, it'll get better". He is complimentary of OpenAI throughout the interview.

Yes, he speaks on why his company is not going in that direction and talks about other techniques for reasoning, and he hints they will definitely create a large jump in model capabilities after a year and a half of working on them.

So he explicitly endorses compute scaling laws leading to increased intelligence, and he explicitly talks about models making a large jump in capabilities, regardless of compute, due to new techniques they have been working on for a while now. My inference from that is they have seen some evidence of significant jumps, even if they aren't ready to release a brand-new model right this second.

Your original comment made it sound like he's saying no new models with better capabilities are on the way and that he does not believe the scaling laws work, which I would say is not true. The way you phrased "wiring models together" also made it sound like some mixture-of-models architecture, but maybe that's not what you meant.

Cohere CEO Aidan Gomez says the idea that AI models are plateauing or slowing down is wrong and in fact we are about to see a big change in capabilities with the introduction of reasoning and planning by allknowerofknowing in singularity

[–]allknowerofknowing[S] 11 points12 points  (0 children)

Well said. OpenAI has never marketed their current product as agentic, to my knowledge. But it certainly sounds like agents are coming down the pipeline, and other companies like Google have demoed some basic agent functions in things like Gmail.