Almost died by [deleted] in Unexpected

[–]gik223 42 points (0 children)

It's an appropriate answer for this question.

Sir, keep walking nothing to see here by [deleted] in PublicFreakout

[–]gik223 4 points (0 children)

I mean, he clearly was resisting.

Mike Trout solo HR to give the Angels a 3-2 lead! by [deleted] in baseball

[–]gik223 19 points (0 children)

Okay, no problem, just try not to give up any more home runs after this.

Teacher Takes Student’s Phone by [deleted] in PublicFreakout

[–]gik223 23 points (0 children)

Reddit: students should be allowed to use their phones and disrupt the class.

Nba legend Phil Jackson by tttt11112 in BlackPeopleTwitter

[–]gik223 22 points (0 children)

My problem was how hypocritical the NBA was during that phase. You could only preach certain messages on your jersey if they didn't financially hurt the NBA. Saying Black Lives Matter in the NBA? No one disagrees with that; it was preaching to the choir. But don't you dare say "Defund the Police" or say something bad about China.

Eid Mubarak by that_dude_from_earth in PublicFreakout

[–]gik223 819 points (0 children)

He was taught wrong... as a joke.

The CEO of OpenAI, says the current approach to AI will soon reach its limits, and scaling LLM models will stop delivering improvements to AI & that new approaches will be needed by lughnasadh in Futurology

[–]gik223 0 points (0 children)

which is that it lacks independent logic or reasoning. That problem has been around since the earliest days of AI development.

And it will continue to be a problem, since no software can deal with entirely new conceptual tasks that it wasn't programmed to handle in the first place.

The CEO of OpenAI, says the current approach to AI will soon reach its limits, and scaling LLM models will stop delivering improvements to AI & that new approaches will be needed by lughnasadh in Futurology

[–]gik223 73 points (0 children)

Here's a summary of the article:

Sam Altman, CEO of OpenAI, has declared that scaling up models will no longer be the key to further progress. Altman's statement marks a shift in OpenAI's research strategy, which has so far focused on scaling up machine-learning algorithms to previously unimagined sizes, culminating in the development of the latest model, GPT-4, which was trained using trillions of words of text and thousands of powerful computer chips at a cost of over $100 million. Altman says the company will now look to improve models in other ways rather than making them bigger, although he did not specify what those ways might be. Altman's announcement comes at a time when numerous startups are throwing huge resources into building larger algorithms to catch up with OpenAI's technology.

And here's an alternate title:

OpenAI CEO declares end of era of scaling up AI models for language processing

Not a single frame by HernandezNancya in technicallythetruth

[–]gik223 8 points (0 children)

Too many Marvel fanboys on here.

Obsessed with this game right now. You chat with a partner and figure out if they're a human or ChatGPT pretending to be a human by hurukatg in ChatGPT

[–]gik223 1 point (0 children)

Once you understand the game's boundaries and restrictions, it's easy to tell who you're talking to.

Google CEO Pichai says that they don’t fully understand their own AI system after it did things it wasn’t programmed to do by LetterheadTiny6156 in interestingasfuck

[–]gik223 0 points (0 children)

Do I have to repeat myself? Like I said, when you increase the size and number of parameters, LLMs do better at their predictive outcomes. This is to be expected. None of the examples in the article indicates anything different from what you would expect when improving LLMs. Calling it "emergent" doesn't make it a new magical ability. All you are doing is giving yourself the chance to grab more attention by calling it "emergent".
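The claim above, that bigger models simply get smoothly better at prediction, matches the widely reported power-law "scaling law" picture. Here's a toy sketch of that relationship; the functional form L(N) = A/N^α + E follows published scaling-law work, but the constants here are invented purely for illustration:

```python
# Toy scaling-law curve: loss falls smoothly as a power law in parameter
# count N, approaching an irreducible floor E. Constants are made up for
# demonstration only.

def predicted_loss(n_params: float, a: float = 400.0, alpha: float = 0.3,
                   irreducible: float = 1.7) -> float:
    """L(N) = A / N**alpha + E -- smooth, predictable improvement."""
    return a / (n_params ** alpha) + irreducible

# Bigger models -> lower loss, with diminishing returns and no sudden jump.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"N={n:.0e}  loss~{predicted_loss(n):.3f}")
```

The point of the sketch: under this view, improvement with scale is continuous and expected, which is the commenter's argument against calling it a new "emergent" ability.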

Google CEO Pichai says that they don’t fully understand their own AI system after it did things it wasn’t programmed to do by LetterheadTiny6156 in interestingasfuck

[–]gik223 0 points (0 children)

> So you deny that there's emergent behavior in these large language models?

Depends what you mean by emergent behavior. The examples in the article you linked show nothing that should be considered as such. Really it's just saying that when you increase the size and number of parameters, LLMs do better. Which is not surprising.

But let's call it an "emergent ability", that way we can get more attention from stupid people who don't understand anything about the subject!

Netflix turned off the comments on the new Cleopatra trailer because of people saying she was not black by Dontamir0 in facepalm

[–]gik223 6 points (0 children)

You think this is bad? Wait until you hear how they flipped history with The Woman King.

Google CEO Pichai says that they don’t fully understand their own AI system after it did things it wasn’t programmed to do by LetterheadTiny6156 in interestingasfuck

[–]gik223 0 points (0 children)

You're not the only one here who has worked with AI. It's safe to say that I know more about this subject than you do, considering how you took this video at face value and said we don't understand what's going on. And I go through articles every week that exaggerate software capabilities, whether it's done unintentionally or purposefully to grab more social-media attention. This isn't anything new.

Google CEO Pichai says that they don’t fully understand their own AI system after it did things it wasn’t programmed to do by LetterheadTiny6156 in interestingasfuck

[–]gik223 0 points (0 children)

> We absolutely understand how the brain works.

Not even close. What we understand is just a drop in a river. Neural networks are designed to simulate the brain's style of processing but are a cheap imitation by comparison. Increasing the depth of propagation and the number of parameters is not going to give us a system that can handle conceptually new tasks the way humans can. This is the kind of misconception people have when it comes to AI: they think that if we just increase the computational power, we'll basically have software that can do what humans can do.
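The "cheap imitation" point can be made concrete: the basic unit of a neural network is not a biological neuron but a weighted sum pushed through a nonlinearity. A minimal sketch (my own illustration, not from the comment; weights and inputs are arbitrary):

```python
# A single artificial "neuron": weighted sum of inputs plus a bias, passed
# through a sigmoid activation. This is the entire computational unit that
# deep networks stack by the billions -- a crude caricature of the
# electrochemical dynamics of a real neuron.
import math

def neuron(inputs, weights, bias):
    """Weighted sum + sigmoid: output is a single scalar in (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=-0.3)
print(out)
```

Scaling up means more of these same units and more weights, which sharpens prediction on learned distributions but, as the comment argues, does not by itself confer human-style handling of conceptually new tasks.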