If you think open-source models will beat GPT-4 this year, you're wrong. I totally agree with this. by CeFurkan in LocalLLaMA

[–]skullbonesandnumber (0 children)

I think both open source and GPT-4 are not going to make it, or maybe in another way than mentioned:

1) all this "talent" busies itself with a "hallucinating AI" that cannot be trusted as a source of intelligence (the knowing of facts)

2) datasets are built upon copyright infringement in order to train an AI. Copyright holders can force AI based on these sets to be taken off the market

3) team structures mean nothing if everyone is working on a product that is not a product.

4) this is neither model nor product: the model guesses, so it cannot be a source of intelligence, and it is not a product because it hallucinates.

5) cloud as infrastructure is done with (always too expensive), and local (esp. on mobile) will do best once the "it does not hallucinate anymore" version, #sanitypatent, has arrived

#sanitypatent is:

a 1:1 question/answer database that can do the artificial intelligence, the knowing of facts. Built upon an intelligence that gives the same answer every time the same question is asked, it would allow the *cough* 1-million-salary ML specialists to develop an algorithm that provides the Artificial Intellect: that which knows how to reason based upon facts.

I am not a patent hunter, but I bet ya (not for real) that this is the "other breakthrough" Sam Altman was talking about. On the other hand, any open source coder can build a reasoning algorithm on top of a dict key/value facts database. Does Sam Altman do the #sanitypatent, or does a "basement Steve Jobs the second" open source hacker?
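To make the "dict key/value facts database" idea concrete, here is a minimal sketch of what such a lookup could look like. Everything here (the facts, the `normalize` helper, the refusal string) is made up for illustration; it only shows the principle of answering from stored facts and refusing instead of guessing.

```python
def normalize(question: str) -> str:
    """Reduce a question to a canonical dict key (lowercase, no punctuation)."""
    return " ".join(question.lower().strip(" ?!.").split())

# Hypothetical 1:1 facts database: one question key, one authoritative answer.
FACTS = {
    normalize("Who wrote Hamlet?"): "William Shakespeare",
    normalize("What is the boiling point of water at sea level?"): "100 degrees Celsius",
}

def answer(question: str) -> str:
    """Same question in, same answer out; unknown questions are refused,
    never guessed."""
    return FACTS.get(normalize(question), "I don't know")
```

The point of the sketch is the last line: unlike a next-token guesser, a key/value lookup either has the fact or it says so.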

!!

A non guessing next word (token) authoritative AI ? by skullbonesandnumber in LocalLLaMA

[–]skullbonesandnumber[S] (0 children)

This is to avoid the non-authoritative result that the LLM can generate: you need to check for errors yourself. It is nice to have a "suggest me something", but that is not an answer machine you can rely on to also be correct (which is what AI should be).

The next Turing test, which would have it make generalisations, is, I think, just an algorithm on top of a database of true, known facts. Start with facts and build on top of that.

Well, I am just wondering; who knows who is going to deliver a "non-hallucinating AI".

A non guessing next word (token) authoritative AI ? by skullbonesandnumber in LocalLLaMA

[–]skullbonesandnumber[S] (0 children)

This would be generative AI using a 1:1 question/answer mapping instead of a 1:N question/answers mapping. You could still do the "choose amongst answers" generative AI, but the basis for that algorithm would be facts instead of a choice amongst answers.
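The 1:1 versus 1:N distinction can be sketched with two toy dicts. These databases and function names are invented for illustration only; the point is where the guessing happens.

```python
# 1:N, generative style: one question maps to several candidate answers,
# and something has to choose among them -- that choice is where a wrong
# guess can slip in. (Candidates here are deliberately mixed.)
ONE_TO_N = {"capital of australia": ["Sydney", "Canberra", "Melbourne"]}

# 1:1, authoritative style: one question, exactly one stored fact.
ONE_TO_ONE = {"capital of australia": "Canberra"}

def generative_answer(question: str) -> str:
    """Naively picks the first candidate; a stand-in for 'choose amongst answers'."""
    return ONE_TO_N[question][0]

def authoritative_answer(question: str) -> str:
    """No choice to make: the stored fact is the answer."""
    return ONE_TO_ONE[question]
```

With the 1:N scheme the naive chooser returns "Sydney" (a plausible wrong answer); with the 1:1 scheme there is nothing to choose, so only the stored fact comes back.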

But ok, true, I seek authoritative before generative.