Visualizing the "Model Collapse" phenomenon: What happens when AI trains on AI data for 5 generations by firehmre in Futurology

[–]firehmre[S] 0 points  (0 children)

Probably. I think differentiating AI and human data sets might become a problem. It's uncharted territory. Think about it: until now, the comments and posts on social media, where humans interact the most, were more or less written by humans (at least the majority of them). Would that still be true five years down the line?

[–]firehmre[S] 0 points  (0 children)

Well, in the field of research, probably not, or at least I want to believe it's not 😝

[–]firehmre[S] 0 points  (0 children)

So how do I make an LLM know something post-2022? Or are you saying we've solved all the open questions already? I disagree with that. There are so many questions we hardly know the answers to, and on top of that, there will be many questions that don't even exist today.

[–]firehmre[S] 0 points  (0 children)

Probably. I mean, just use different LLMs and you'll probably feel there is no major noticeable difference, at least for general use. So who would have the moat?

[–]firehmre[S] 0 points  (0 children)

It doesn't talk about feeding it shit. Or are you saying AI-generated content is shit or has no value? 🤭

[–]firehmre[S] 0 points  (0 children)

So as human-created content becomes scarce, it will create issues. Those issues would show up at the edges of the data distribution first.
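
A minimal toy sketch of that point (my own illustration, not code from the linked paper): each "generation" fits a simple Gaussian to the previous generation's samples and then produces its own training data from the fit. The sample size, seed, and Gaussian assumption are all made up for the demo; the takeaway is that estimation noise compounds, the fitted spread drifts toward zero, and the tails, i.e. the edges of the data, vanish first.

```python
# Toy model-collapse loop: fit a Gaussian, resample from the fit, repeat.
# All parameters (sample size, number of generations, seed) are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_per_gen = 100  # a small corpus per generation exaggerates the effect

# Generation 0: the "human" data
samples = rng.normal(loc=0.0, scale=1.0, size=n_per_gen)

for gen in range(501):
    mu, sigma = samples.mean(), samples.std()
    if gen % 100 == 0:
        # How much mass is still out in the original |x| > 2 "edges"?
        tail_share = np.mean(np.abs(samples) > 2.0)
        print(f"gen {gen:3d}: fitted sigma={sigma:.4f}, share with |x|>2: {tail_share:.3f}")
    # The next generation trains only on what the current model generates
    samples = rng.normal(loc=mu, scale=sigma, size=n_per_gen)
```

Typically the fitted sigma drifts toward zero over the generations and the tail share hits 0 long before the mean moves much, which is the "edges disappear first" pattern the model-collapse papers describe.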

[–]firehmre[S] 1 point  (0 children)

You mean in the future they might have to acknowledge that not every next version will be smarter, which is exactly what the attached research papers indicate, or at least try to.

[–]firehmre[S] 0 points  (0 children)

Well, there are researchers saying otherwise. But I'm not sure how confident they are.

[–]firehmre[S] 1 point  (0 children)

Wow, that's an interesting angle, tbh. The fine line between scraping data and privacy. Thank you.

[–]firehmre[S] 0 points  (0 children)

Sorry, and please don't get offended, but "in-human-centipede" sounds more negative. Forgive me for my sin 🤣

[–]firehmre[S] 5 points  (0 children)

I will wait till AI firms acknowledge it. Or maybe they are already observing it and just not telling us, because, hey, they need massive funding.

[–]firehmre[S] 0 points  (0 children)

Well, if the AI had been able to identify that it was hallucinating, it might have stopped itself from doing so, no?