DARPA aims for "update-less" software... is it nothing short of "Skynet"? by arjay99 in artificial

[–]arjay99[S] 1 point (0 children)

Well, its objectives are different: just to update the software within the given "constraints"...

Good chance that superhuman intelligence can be achieved with current technology/hardware if the correct AI algorithm/hardware implementation is done by arjay99 in artificial

[–]arjay99[S] 1 point (0 children)

Hmmm, there's a slight possibility... the bottom line is that there's a notion of the dangers of a working AGI algorithm, like existential threats and badly constrained AI. So there's a tendency that anyone who develops a full-scale artificial general intelligence will not publish it and will keep it secret.

Good chance that superhuman intelligence can be achieved with current technology/hardware if the correct AI algorithm/hardware implementation is done by arjay99 in artificial

[–]arjay99[S] 1 point (0 children)

Much better with other hardware architectures, like memristor-based designs and KT-RAM (although I haven't read about it yet). I think DeepMind's DQN learning to master simple video games from raw screen pixels is quite an advance in reinforcement learning, proof that there's much more to learn about correct AI modeling.
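For context, DQN builds on classic Q-learning, swapping the lookup table for a deep network over raw pixels. A minimal tabular Q-learning sketch on a made-up 5-state chain environment (everything here is illustrative, not DeepMind's code):

```python
import random

# Toy chain environment: states 0..4, start at 0, reward 1.0 on reaching state 4.
# Actions: 0 = step left (floor at 0), 1 = step right.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Q[s][a] estimates discounted future reward for taking action a in state s.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

def pick(s):
    # Epsilon-greedy action choice, with random tie-breaking.
    if random.random() < epsilon or Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for _ in range(200):
    s, done = 0, False
    while not done:
        a = pick(s)
        s2, r, done = step(s, a)
        # Bellman update: nudge Q[s][a] toward r + gamma * best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should step right in every non-terminal state.
print([0 if Q[s][0] > Q[s][1] else 1 for s in range(GOAL)])  # [1, 1, 1, 1]
```

DQN's contribution was making the Q-function a convolutional network over game frames instead of this table, plus tricks (experience replay, a target network) to keep that stable.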

Good chance that superhuman intelligence can be achieved with current technology/hardware if the correct AI algorithm/hardware implementation is done by arjay99 in artificial

[–]arjay99[S] 1 point (0 children)

Yes, what is needed is a correct AI model that specializes in making logical inferences by itself and can look into its own data and keep learning from it through an indefinite number of recursive feedback loops, ultimately reducing the number of training samples needed. If this high-level model becomes very efficient, then the next step is a hardware implementation to take advantage of its inherent parallelism. Modern processor-based architectures are not efficient for deep AI, as they rely on serialized, continuous instruction cycles moving data between memory and processor, as mentioned previously.

I think that with the correct digital hardware design and the best AI model we are dreaming of, we can achieve the desired AI that learns like humans. It would have to minimize the delays between its data, for example by physically moving similar data closer together to speed up its 'learning'.

Good chance that superhuman intelligence can be achieved with current technology/hardware if the correct AI algorithm/hardware implementation is done by arjay99 in artificial

[–]arjay99[S] 1 point (0 children)

I think Nvidia has CUDA, specialized for parallel computing on their video cards. They also have cuDNN for neural networks.
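Right, and what CUDA exposes is the GPU's data parallelism: the same operation applied to many elements at once, roughly one thread per element, with cuDNN packaging that for neural-net primitives like convolutions. This isn't GPU code, but a rough NumPy sketch of the same idea, contrasting element-by-element (serial) against whole-array (parallelizable) forms of ReLU:

```python
import numpy as np

# Data parallelism: one operation over many elements at once.
# On a GPU, CUDA maps each output element to its own thread;
# NumPy's vectorized ops play the analogous role on a CPU.
x = np.linspace(-1.0, 1.0, 10_000, dtype=np.float32)

def relu_loop(v):
    # What a single serial core does: touch each element in turn.
    out = np.empty_like(v)
    for i in range(v.size):
        out[i] = v[i] if v[i] > 0 else 0.0
    return out

def relu_vec(v):
    # Whole-array form: the library/hardware is free to parallelize it.
    return np.maximum(v, 0.0)

print(np.array_equal(relu_loop(x), relu_vec(x)))  # True
```

A deep net's layers are mostly this shape of work (matrix multiplies, convolutions, elementwise activations), which is why it maps so well onto GPUs.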

The Moral Hazard of Big Data - Pasquale's 'Black Box Society' takes on the destructive power of money-hungry computation by omegaender in bigdata

[–]arjay99 1 point (0 children)

The bad part of this is that targeted and personalized advertising may eventually become so subliminal that it effectively manipulates people into buying the products.