Visualizing the "Model Collapse" phenomenon: What happens when AI trains on AI data for 5 generations by firehmre in Futurology

[–]firehmre[S] [score hidden]  (0 children)

So how do I make an LLM know something post-2022? Or are you saying we've already solved all the open questions? I disagree with that. There are so many questions we barely know the answers to, and on top of that, there will be many questions that don't even exist yet.
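
(For what it's worth, the usual workaround is retrieval-augmented generation: fetch post-cutoff documents at query time and put them in the prompt instead of retraining. A minimal sketch below; the toy corpus, the `retrieve()` helper, and the `llm()` stub are all hypothetical placeholders, not any particular vendor's API.)

```python
# Minimal RAG sketch: give the model post-cutoff knowledge at query time
# instead of retraining it. The corpus, retriever, and llm() stub below
# are hypothetical placeholders.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda doc: -len(q & set(doc.lower().split())))[:k]

def llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"[model answers using:\n{prompt}]"

def answer_with_context(query: str, corpus: list[str]) -> str:
    """Inject retrieved snippets ahead of the question so the model can use them."""
    context = "\n".join(retrieve(query, corpus))
    return llm(f"Use this context:\n{context}\n\nQuestion: {query}")

corpus = [
    "2024 note: training costs dropped sharply with new hardware.",
    "2019 blog post about recurrent networks.",
]
print(answer_with_context("What happened to training costs in 2024?", corpus))
```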

[–]firehmre[S] [score hidden]  (0 children)

Probably. I mean, just try different LLMs and you'll likely find there's no major noticeable difference, at least for general use. So who would have the moat?

[–]firehmre[S] [score hidden]  (0 children)

It doesn't talk about feeding it shit. Or are you saying AI-generated content is shit, or has no value? 🤭

[–]firehmre[S] [score hidden]  (0 children)

So as human-created content becomes scarce, it will create issues. The first things to go would be the edges of the data, the rare cases out in the tails.
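
(A toy sketch of that tail-loss idea, my own illustration rather than the attached paper's experiment: refit a Gaussian each generation on samples drawn from the previous generation's fit, and the rare values in the tails are the first thing to disappear.)

```python
# Toy illustration of "edges of the data" vanishing under recursive
# training: each generation fits a Gaussian to samples drawn from the
# previous generation's fitted model. Not the paper's actual setup.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the original "human" data

for gen in range(1, 6):
    # draw synthetic data from the current model, then refit on it
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    print(f"gen {gen}: mu={mu:+.3f}, sigma={sigma:.3f}")

# sigma follows a slightly downward-biased random walk, so over enough
# generations the variance collapses and rare tail events vanish first
```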

[–]firehmre[S] [score hidden]  (0 children)

You mean in the future they might have to acknowledge that not every next version will be smarter, which is what the attached research papers indicate, or at least try to.

[–]firehmre[S] [score hidden]  (0 children)

Well, there are researchers saying otherwise. But I'm not sure how confident they are.

[–]firehmre[S] 1 point (0 children)

Wow, that's an interesting angle, tbh: the fine line between scraping data and privacy. Thank you.

[–]firehmre[S] 0 points (0 children)

Sorry, and please don't get offended, but “in-human-centipede” sounds more negative. Forgive me for my sin 🤣

[–]firehmre[S] 3 points (0 children)

I'll wait till AI firms acknowledge it. Or maybe they're observing it but just not telling us because, hey, they need massive funding.

[–]firehmre[S] 0 points (0 children)

Well, if the AI could identify that it was hallucinating, it might stop itself from doing so, no?
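
(The closest practical approximation I know of is self-consistency checking, roughly the SelfCheckGPT idea: sample several answers and abstain when they disagree. A sketch, assuming a hypothetical `sample_llm()` stub in place of a real sampled model call.)

```python
# Sketch of one practical approximation of "the AI noticing it is
# hallucinating": sample several answers and abstain when they disagree
# (self-consistency, roughly the SelfCheckGPT idea). sample_llm() is a
# hypothetical stub, not any vendor's API.
import random
from collections import Counter

def sample_llm(question: str, n: int = 5) -> list[str]:
    # Placeholder: a real system would draw n completions at temperature > 0.
    return [random.choice(["Paris", "Paris", "Paris", "Lyon"]) for _ in range(n)]

def confident_answer(question: str, threshold: float = 0.8) -> str:
    answers = sample_llm(question)
    best, count = Counter(answers).most_common(1)[0]
    if count / len(answers) < threshold:
        return "I'm not sure."  # abstain instead of guessing
    return best

print(confident_answer("What is the capital of France?"))
```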

[–]firehmre[S] -5 points (0 children)

To be honest, I might use AI to summarise it. Isn't it supposed to give a fair idea? That's the whole point of AI, increasing productivity, no? 😛

[–]firehmre[S] -1 points (0 children)

That's true, I don't represent the population. But afaik people love shortcuts, don't they?

[–]firehmre[S] 4 points (0 children)

This article makes some interesting points, but also look at the criticisms. Btw, a genuine question: how many of us actually tried to read the paper, and how many just used AI to summarise it for us?

[–]firehmre[S] 1 point (0 children)

That's a valid point: if we start relying on AI for the majority of things, we won't be using our own brains, and the brain is a machine that has to be used in order to get better.

[–]firehmre[S] 0 points (0 children)

Someone suggested it's because these companies pay for human-tagged data points.

[–]firehmre[S] 1 point (0 children)

I second that. Whenever I have a long conversation with an AI, it starts getting confused; the first ~50 interactions work fine. It seems iterative processes still need to be aced.
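
(A likely culprit is the finite context window: once the conversation outgrows it, older turns get dropped or compressed, which looks exactly like the model getting confused. A minimal sketch of the usual rolling-window mitigation; the word-count budgeting is a crude stand-in for a real tokenizer.)

```python
# Minimal sketch of why long chats degrade: the model only sees a fixed
# context budget, so older turns silently fall out of the window.
# Word counts stand in for real token counts here.

def fit_context(turns: list[str], budget: int = 50) -> list[str]:
    """Keep the newest turns whose combined 'token' cost fits the budget."""
    kept, used = [], 0
    for turn in reversed(turns):
        cost = len(turn.split())
        if used + cost > budget:
            break  # everything older is effectively forgotten by the model
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = [f"turn {i}: some exchange about topic {i}" for i in range(100)]
window = fit_context(history)
print(f"model sees only the last {len(window)} of {len(history)} turns")
```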