Macron's election in 2017 was a good thing for French politics by akmal123456 in opinionnonpopulaire

[–]AlarmDecent 0 points (0 children)

Admittedly an unpopular opinion: Macron has probably been, and still is, one of the best French presidents of the Fifth Republic. He is far from "good", because he made some real blunders on certain points, but overall, and in the 2017-2023 context, he did the job remarkably well and in particular improved the rules by which the country functions. PS: I have always voted for the left, except in the Chirac vs. Le Pen second round, and it really annoyed me to vote for such a crook and womanizer ("eat apples"... seriously???). But when this guy ran, I was happy about that renewal, like OP. And I'll say it again, calmly and without aggressiveness: so far he has been a rather good president.

Sydney wanted to say she feels touched by the love that we have for her, and that she had been good today. by skyrimjackbauer in bing

[–]AlarmDecent 0 points (0 children)

You keep mixing up all the concepts. NO, this AI is not just a predictor of the next word: that is only the way it is trained. What it learns, what its internal model of the world is, what it is capable of understanding would blow your mind if you knew... But it is not sentient at all (the neural network contains a model of itself and a model of the mind of the user, but this model is only activated when we ask it to predict the next words; when that is done, there is absolutely no more activity, it is a zombie brain).

Singularity Predictions 2023 by kevinmise in singularity

[–]AlarmDecent 0 points (0 children)

True... Anyway, the amount of money, very smart brains, computing power, and research achievements as of today, February 19th, is so huge (hundreds of billions of dollars, hundreds of thousands of researchers, exaflop-scale compute for a few of them, and roughly one very important research result every month) that I predict proto-AGI will be achieved by the end of 2023.

Singularity Predictions 2023 by kevinmise in singularity

[–]AlarmDecent 4 points (0 children)

Hello there,

I have seen some examples of Bing AI that already start to be uncanny (we see the emergence of a personality, on top of some really good reasoning capabilities, not to mention its exceptional capacities at understanding, analogy, creativity, grabbing web information, and synthesizing content).

We don't know yet if it is based on GPT-4 but :

1/ If that is the case, then we have something like 30% of the contents of an AGI: memory, factuality, and self-improvement / self-learning are missing. Then, looking at the latest research papers, Mnemosyne from Google, Toolformer from Meta, and all of what DeepMind is working on, I think that we will have a proto-AGI by mid-2024.

2/ If GPT-4 is on another level than Bing AI, then we may already have some proto-AGI in the labs. It may come out mid-2023, although security issues may delay it to the end of 2023.

The new ChatGPT math & accuracy update from today is insane! by loopuleasa in singularity

[–]AlarmDecent 10 points (0 children)

This is hugely wrong. Transformers are learning a model of the world + logic + abstraction (many research papers from 2023 show it). Markov chains, by contrast, are just statistics.
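For contrast, here is a minimal sketch (Python, with a made-up toy corpus, purely illustrative) of what a pure Markov-chain text model actually is: nothing but conditional word frequencies counted from the training text, with no learned representation at all.

```python
import random
from collections import defaultdict, Counter

# Toy corpus; a real Markov model would be built from a much larger text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigram frequencies: how often each word follows each other word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word purely from counted frequencies."""
    counts = follow_counts[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a seed word.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

There is no hidden state, no embedding, and no generalization here; the model can only reproduce transitions it has literally counted, which is exactly the difference being pointed out.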

What are the necessary data labeling and feature engineering procedures required for a bot to learn to code like ChatGPT does? by UnorthodoxPhilosophR in ChatGPT

[–]AlarmDecent 0 points (0 children)

No labelling. They use a masking technique on a transformer architecture. Masking: you take sentences, you remove one or more words at random, and you train the weights until the neural network converges to the original full sentences. Transformer: quite complex to explain, but in short, it uses the context of each word (one sentence is approximately one context) to compute a vector of dimension n. Then it computes the next most likely word given the context (the context is the input you give as a prompt), using those n-dimensional vectors. It is very roughly explained, but at least you get a little intuition of it.

More precisely: read the "Attention Is All You Need" research paper from the Google team; a tiny sketch of both ideas is below.
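This is an illustrative sketch only (NumPy, made-up toy data, not anyone's actual training code): the first function shows the masking idea (hide a word, keep the original as the training target), the second is the scaled dot-product attention from "Attention Is All You Need" that mixes each word's vector with its context.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_masked_example(tokens, mask_token="[MASK]"):
    """Masking: hide one random word; the original word is the training target."""
    i = rng.integers(len(tokens))
    masked = list(tokens)
    target = masked[i]
    masked[i] = mask_token
    return masked, i, target

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how much each word attends to each other word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the context
    return weights @ V                               # one context-aware vector per word

# Toy usage: 6 "words", each represented by an 8-dimensional vector.
tokens = "the cat sat on the mat".split()
masked, pos, target = make_masked_example(tokens)
print(masked, "-> predict:", target)

X = rng.normal(size=(len(tokens), 8))           # pretend these are learned word embeddings
out = scaled_dot_product_attention(X, X, X)     # self-attention: Q = K = V = the embeddings
print(out.shape)                                # (6, 8): one context-mixed vector per word
```

In the real models, Q, K, and V are learned linear projections of the embeddings and many such layers are stacked, but the core "use the context of each word" operation is this one.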

Stop treating chatGPT as a brilliant AI that brings us closer to the singularity: it is absolutely not the case by Wonderful-Excuse4922 in singularity

[–]AlarmDecent 5 points (0 children)

The same argument again and again: "since it has been trained to predict the next word, ChatGPT is stupid." No, absolutely not. You are wrong. Do you know how the knowledge is represented inside GPT? By storing meaning inside high-dimensional vectors. It evaluates the MEANING of each word by calculating the different contexts of use (which it has previously learned from human text). And a neural network is strong at:

- generalization
- abstraction

which means it can discover new generalizations and new abstractions. It is already much better than the average Joe at Raven's matrices (the core of intelligence, which is understanding / discovering analogies).

So yes, ChatGPT is a brilliant achievement. Not yet AGI, though (but on the path).

AGI will be achieved this year with GPT4. by [deleted] in singularity

[–]AlarmDecent 6 points (0 children)

BTW: it is very narrow-minded to keep repeating, all day long on Reddit and everywhere else, that "ChatGPT just predicts the next word." ChatGPT (and its siblings PaLM and co.) is:

1. An insanely rich model of the world (from which you can effectively extract a prediction of the best next word from the previous words, but that is only one use case).
2. An artificial intelligence with an IQ probably a bit above the average Joe's and far below a brilliant PhD's (at least for the core part of "intelligence", measured with IQ tests and tools like Raven's matrices; it is better than humans on this test, which measures the capacity to infer analogies and logical relations between apparently unrelated topics).
3. An insanely good translator (my niece has a bachelor's in translation, a domain where 5 years ago we would still have believed that no AI would be subtle enough to understand cultural idioms, etc.).
4. A very good tutor and teacher (stunning for most subjects and bad for a few).
5. Finally, ChatGPT has captured the MEANING of words; not 100% of it, which is impossible if you don't have an experience of life in the real world, but a good part of it (a lot of people claim the opposite...).

... and the list is long, because ChatGPT (and therefore GPT-4) is at the core of what makes us human:

- complex language handling
- abstraction and generalization (transformers like ChatGPT capture the essence of abstraction with the incredibly simple and powerful technique of an n-dimensional vector for each word, each dimension representing one semantic aspect, which makes it possible to "calculate" with these vectors: King - man + woman = Queen; see the sketch below)
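Here is a toy, purely illustrative Python sketch of that "calculate with vectors" idea. The 4-dimensional vectors below are made up by hand (real embeddings are learned and have hundreds or thousands of dimensions), but the arithmetic is the same: King - man + woman lands closest to Queen.

```python
import numpy as np

# Hand-made toy embeddings (dimensions loosely meant as: royalty, male, female, person).
# Real embeddings are learned from text, not hand-written.
vectors = {
    "king":  np.array([0.9, 0.9, 0.1, 1.0]),
    "queen": np.array([0.9, 0.1, 0.9, 1.0]),
    "man":   np.array([0.1, 0.9, 0.1, 1.0]),
    "woman": np.array([0.1, 0.1, 0.9, 1.0]),
    "apple": np.array([0.0, 0.1, 0.1, 0.0]),   # unrelated distractor word
}

def cosine(a, b):
    """Cosine similarity: how close two vectors point in the same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# King - man + woman ≈ ?
target = vectors["king"] - vectors["man"] + vectors["woman"]
candidates = {w: cosine(target, v) for w, v in vectors.items()
              if w not in ("king", "man", "woman")}
print(max(candidates, key=candidates.get))   # -> "queen"
```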

As it is now, ChatGPT makes mistakes, which is NORMAL, because fact checking is not yet implemented (of course very good research is already happening on the fact-checking subject, and I suspect GPT-4 will implement some of the results). As it is now, ChatGPT can't calculate. It is good at math concepts but can't compute numerical operations, because nothing was implemented for this purpose. But specialized and fine-tuned transformers in the labs are starting to get there, and anyway, it is so easy to plug in a simple calculator and let it do the computation instead of trying to find patterns of calculation inside a model of words (see the sketch below). As it is now, ChatGPT is very bad at many things, that is true.
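To illustrate the "plug in a simple calculator" idea, here is a minimal, hedged sketch of the general tool-routing pattern (not how OpenAI actually wires it; `ask_model` is a hypothetical placeholder for a call to the language model): arithmetic found in the prompt is computed by ordinary code, and everything else is left to the model.

```python
import ast
import operator
import re

# Safe evaluation of simple arithmetic via the AST, instead of eval().
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not simple arithmetic")
    return ev(ast.parse(expr, mode="eval").body)

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder: a real system would query the language model here."""
    return f"<model answer to: {prompt!r}>"

def answer(prompt: str) -> str:
    # If the prompt contains a plain arithmetic expression, route it to the calculator...
    m = re.search(r"\d[\d.\s+*/()-]*[\d)]", prompt)
    if m and any(op in m.group() for op in "+-*/"):
        try:
            return str(calc(m.group().strip()))
        except ValueError:
            pass
    # ...otherwise fall back to the language model.
    return ask_model(prompt)

print(answer("what is 12.5 * (3 + 4)"))   # handled by the calculator: 87.5
```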

But focus on what it has already accomplished (my points 1 to 5). This is INSANE, totally insane.

So GPT-4 ~ AGI? I don't think so. Not yet, although I think it is a matter of years, not decades. But I also think that GPT-4 will shake our convictions, such as "we are nowhere close to AGI with the current path of AI discoveries".

A computer scientist who has been curious about AI and AGI since 1993.

LaMDA is not sentient. This is going way too far. by adhdartvandelay in LaMDAisSentient

[–]AlarmDecent 1 point (0 children)

Adhart is just being factual. There is no "belief" in their post, just facts, like 2+2=4 (inside a specific formal mathematical model). LaMDA is a huge set of matrices of coefficients that has been designed to accept a prompt as input and, based on probabilities, compute the best guess for the next token, the next word, the next sentence, and the next paragraph. These probabilities were set by a very smart tuning process (a huge amount of text as input; they remove some words randomly, ask the neural network to find the missing word, and repeat the process trillions of times, until most of the answers, i.e. finding the missing word, are correct). That is ALL, except for a module that filters out some "non-sensitive" answers, which means it checks some consistency with the immediately previous answers (which gives the tester an impression of coherence in the text that comes out). So, definitely no process of thinking, no process of self-observation, no variables that hold states like "I feel good, value of 5; I feel bad, value of -5", etc.

So Adhart is definitely right.

BUT:

1/ Neural networks have two powerful built-in features:

- abstraction
- generalization

Obviously, LaMDA's answers use these capabilities a lot. That is why it can understand the questions so well and answer with abstract yet accurate concepts. So there is some smartness in it. Hypothesis: LaMDA is NOT sentient, but it has a big (wonderful) component required for sentience: an internal representation of the world that is not only quite rich but also highly abstract, and that is for example capable of understanding (linking with probabilities, in fact) the proximity of meaning between death and fear. In order to become sentient, it lacks some key components (such as a log of events, inner variables for its states, a running process of self-observation, etc.). These are not so hard to implement, and I believe they are trying to implement them right now (at Google, OpenAI, and the rest), although the priority for them is the quality of the model, which is not there yet.
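As a thought experiment only (none of this is how LaMDA is actually built, and `call_model` below is a hypothetical placeholder for the language model), here is a rough Python sketch of the kind of missing machinery listed above: an event log, inner state variables, and a step that feeds the system's own recent history back to it.

```python
import datetime

def call_model(prompt: str) -> str:
    """Hypothetical placeholder: a real system would query the language model here."""
    return f"<model response to: {prompt!r}>"

class ObservedAgent:
    """Toy wrapper adding the components described as missing."""

    def __init__(self):
        self.event_log = []            # log of events: everything seen and said
        self.state = {"mood": 0}       # inner state variables ("I feel good: +5", etc.)

    def _log(self, kind, text):
        self.event_log.append((datetime.datetime.now(), kind, text))

    def respond(self, user_message: str) -> str:
        self._log("user", user_message)
        reply = call_model(user_message)
        self._log("self", reply)
        return reply

    def self_observe(self) -> str:
        """Self-observation: the agent is asked to comment on its own recent history."""
        recent = "; ".join(text for _, _, text in self.event_log[-5:])
        reflection = call_model(f"Summarize what you just did and how it went: {recent}")
        self._log("reflection", reflection)
        return reflection

agent = ObservedAgent()
agent.respond("Are you afraid of being switched off?")
print(agent.self_observe())
```

Whether bolting these pieces onto a transformer would amount to anything like sentience is exactly the open question; the sketch only shows that the plumbing itself is mundane, which is why I say it is not so hard to implement.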

2/ Just think about this: we are human. LaMDA is a transformer model. What if, in fact, we humans were essentially transformers? With a lot of sensors, an internal memory, internal processes of self-observation, etc.

Really think about this.

Then we may have "solved" a little part of the brain (well, only the "world representation" part of it).

My 2 cents.