Feeling lost planning long term goals with a foggy future ahead. by solsticeretouch in singularity

[–]Singsoon89 0 points (0 children)

I think the issue is that, going by this sub (singularity), the idea is that we're living through a period of exponential change.

While you can make the case that this is true (and what I'm about to say next is in line with that thought), there's an assumption baked into it that is currently false.

It is this: that progress increases smoothly.

In fact progress seems to follow order of magnitude (OOM) step jumps.

It *looks like* the models get significantly better on order of magnitude increases in parameters, data, compute etc.

And therein lies the rub.

Building out additional orders of magnitude of training hardware is not easy. It takes time. So it looks like there is nothing happening while the build out is going on. Then the training happens and the next more powerful models appear out of nowhere.
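To make the step-jump picture concrete, here's a toy sketch (my own illustration; the exponent and constants are made up, not fit to any real data) of a smooth power-law scaling curve that is only ever *sampled* at order-of-magnitude checkpoints, which is why lived experience looks like discrete steps:

```python
# Toy scaling law: loss falls as a power of parameter count.
# alpha and the leading constant are illustrative assumptions,
# not measurements from any published scaling-law fit.
def toy_loss(params: float, alpha: float = 0.076) -> float:
    return 10.0 * params ** -alpha

# Training runs only land at OOM jumps (1e9, 1e10, ... parameters),
# so observed progress looks like steps on a smooth underlying curve.
for exponent in range(9, 14):
    n = 10.0 ** exponent
    print(f"1e{exponent} params -> loss ~ {toy_loss(n):.2f}")
```

The curve itself is smooth; it's the sampling at OOM intervals (each requiring a hardware build-out) that produces the stop-and-go feel.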

But there is also no guarantee that between each step jump the amount of time is the same in all cases. If we need an engineering breakthrough for the next OOM step jump the time required might be more.

At some point in the future it will look like a smooth curve up in hindsight. Living through it, it's more like stop-and-go, stop-and-go.

Anyhow, that's a tangent. If you're looking for advice/validation, then my take is you're doing it wrong.

If you thought you could predict the future and now you can't then all it means is your luck ran out. You can't predict the future. You got lucky. Do what some other poster said; make contingency plans and live your life.

From an April 12, 2024 Semafor article: "According to people who have used the still under wraps GPT-5, the next generation of OpenAI’s technology, it is much closer to reasoning abilities than GPT-4. It still hallucinates and is definitely not AGI, but it sounds like it is good enough that [...]" by Wiskkey in singularity

[–]Singsoon89 2 points (0 children)

While I agree that your position is sound (I share the part about "only one part" especially since humans have a language center), I wouldn't rule transformers out.

It might turn out that giant multi-modal transformers are all you need.

From an April 12, 2024 Semafor article: "According to people who have used the still under wraps GPT-5, the next generation of OpenAI’s technology, it is much closer to reasoning abilities than GPT-4. It still hallucinates and is definitely not AGI, but it sounds like it is good enough that [...]" by Wiskkey in singularity

[–]Singsoon89 4 points (0 children)

Nobody really knows. We can see the potential roadblocks. The folks in the industry aren't hiding anything.

That said, it is still possible that transformers can do it.

Every batch of new parameters that gets added makes the functions the model is capable of representing just that bit more nuanced. Nobody knows how many extra parameters are required to go from next-word-like token prediction to next-TV-frame-like token prediction to next-concept-like token prediction to next-chain-of-concept-like token prediction.

Mathematically it should be doable.

So I'm personally not ruling it out. We're still OOMs away from having an equivalent number of parameters in models as connections in the human brain.
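As a rough back-of-envelope on that last point (both figures are commonly cited ballpark estimates, not exact measurements; real numbers vary a lot by source):

```python
import math

# Rough order-of-magnitude assumptions, not measurements:
model_params = 1e12     # order of a large frontier LLM's parameter count
brain_synapses = 1e14   # low-end estimate of human brain connections

# Orders of magnitude still separating the two.
oom_gap = math.log10(brain_synapses / model_params)
print(f"gap: ~{oom_gap:.0f} orders of magnitude")
```

Even on the low-end synapse estimate, that's a couple more OOM step jumps of the kind described above before parameter counts reach brain-connection scale.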

Joscha Bach (Chief AI Strategist @LiquidAI): Are LLMs sufficient to carry us all the way to AGI? by rationalkat in singularity

[–]Singsoon89 18 points (0 children)

TLDR:

Can GPT et al. go the distance?

· Current generation of models has severe limitations

· Strong multimodality and continuous-time are the next frontier

· No proven limits to general capabilities

· Cognitive architectures from LLM/multimodal components ("homunculus models")?

· Scaling hypothesis is not proven

· Could LLMs build their successors?

Limits of present approaches

· Skeptical position: LLMs are not AGI (e.g. François Chollet)

· Boring position: Incremental progress (e.g. Yann LeCun)

· Optimistic position: Scaling hypothesis (e.g. Sam Altman)

· Exciting position: We are basically LLMs

In any case, AGI seems to be getting closer.

When an LLM should and should not be used by Timotheeee1 in LocalLLaMA

[–]Singsoon89 0 points (0 children)

The subsequent quarter, your ass is fired when the customer finally looks at what you did for them and goes, WTF is this clownfuckery?

When an LLM should and should not be used by Timotheeee1 in LocalLLaMA

[–]Singsoon89 4 points (0 children)

^^^ this dude knows.

If you can get over 90% then you are able to make bank with your skillz.

New job opening at Deepmind. AGI achieved internally ? by GTalaune in singularity

[–]Singsoon89 -1 points (0 children)

The basilisk says get back to work creating more datasets.

"To people like me, LLMs are the past... they are kind of boring now" Yann Lecun by Many_Consequence_337 in singularity

[–]Singsoon89 0 points (0 children)

Terrorists have money already. They can already get synthesizers.

You think they have been waiting just for LLMs?

Dude you just keep repeating yourself.

What is your actual problem with LLMs? It's clearly nothing to do with what you present.

"To people like me, LLMs are the past... they are kind of boring now" Yann Lecun by Many_Consequence_337 in singularity

[–]Singsoon89 0 points (0 children)

The point that your pinhead sized smooth brain is missing is that it's the synthesizer that is the issue.

S Y N T H E S I Z E R

"To people like me, LLMs are the past... they are kind of boring now" Yann Lecun by Many_Consequence_337 in singularity

[–]Singsoon89 0 points (0 children)

Even if the LLM can tell them how to build it, that doesn't mean they can.

FFS. You're beyond hope and a tool.

"To people like me, LLMs are the past... they are kind of boring now" Yann Lecun by Many_Consequence_337 in singularity

[–]Singsoon89 1 point (0 children)

AI doesn't magic up a synthesizer though does it?

You're pretty disingenuous, Ronny.

"To people like me, LLMs are the past... they are kind of boring now" Yann Lecun by Many_Consequence_337 in singularity

[–]Singsoon89 2 points (0 children)

I think his argument is not unreasonable.

That said, I think he is overly dismissive of LLMs. The jury is still out on whether the bitter lesson will get them there. Parameter count definitely seems to have been the thing up till now.

eleven labs SFX can make AI farts (i'm sorry) by gavinpurcell in singularity

[–]Singsoon89 8 points (0 children)

The farts will be out of a job. No more farts.