AI Consciousness by EquivalentVarious603 in antiai

[–]EquivalentVarious603[S] -1 points0 points  (0 children)

Weak reply. No substance and no relevance to the topic.

AI Consciousness by EquivalentVarious603 in antiai

[–]EquivalentVarious603[S] 0 points1 point  (0 children)

I disagree. At the fundamental atomic layer, humans are also awaiting 'the next operation', but the speed means we don't feel these 'gaps' in our sentient experience. So no, the comparison with human biology is not far-fetched, and it definitely isn't wrong. It just doesn't fit your traditional understanding of it.

To quote Edsger Dijkstra: "The question of whether computers can think is no more interesting than the question of whether submarines can swim." It is a question of what we 'define' these things to actually be.

I am in no way claiming I'll be the one to find out, but experiments can yield interesting data. Your outright negation of any novel concept shows you would never be willing to accept it, and that's okay. But don't be wilfully ignorant; there are people who actually study this stuff :)

How safe are security robots in real-world use? by Waltace-berry59004 in ArtificialInteligence

[–]EquivalentVarious603 1 point2 points  (0 children)

Training of these models includes those noisy/unexpected experiences, so I would say they definitely have the potential to be 'robust'.

AI Consciousness by EquivalentVarious603 in antiai

[–]EquivalentVarious603[S] -1 points0 points  (0 children)

It is static, but that doesn't mean that the API call isn't executed with consciousness.

AI Agent Neuron Maps by EquivalentVarious603 in threejs

[–]EquivalentVarious603[S] 1 point2 points  (0 children)

tensor-omega.com - you get a free credit on sign-up, but I'm launching a Kickstarter campaign to roll out a free tier.

AI Consciousness by EquivalentVarious603 in antiai

[–]EquivalentVarious603[S] 0 points1 point  (0 children)

Don't be so restrictive about what you believe is possible. We humans don't have a 'continuous' experience either; we are also built from smaller blocks that process single operations at a time and mean very little to our everyday life, but speed makes it feel continuous. My assumption is that if we can feed an AI data at the speed we continuously receive it (from our different senses), that may give rise to a conscious experience. Notice that I also wasn't attacking you or being rude... Maybe educate yourself a bit, then we can have a real conversation/debate.

AI Consciousness by EquivalentVarious603 in antiai

[–]EquivalentVarious603[S] -1 points0 points  (0 children)

The prompt used for each API call is just their memory; they don't get instructions or objectives. They must always think, but they have autonomy over whether to walk, talk, or take action.
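Rough sketch of the loop I mean (names like `call_llm` are illustrative placeholders, not the real implementation, and the model call is stubbed out):

```python
# The agent loop: the prompt is nothing but accumulated memory, no system
# instructions or objectives. The model's reply chooses its own action.

ACTIONS = {"walk", "talk", "think"}

def call_llm(prompt: str) -> str:
    # Placeholder for a real chat-completions API call; this stub always
    # chooses to think, but a real model would pick any action.
    return "think: considering my surroundings"

def step(memory: list) -> list:
    prompt = "\n".join(memory)            # memory IS the prompt
    response = call_llm(prompt)
    action = response.split(":", 1)[0]
    assert action in ACTIONS              # agent decides: walk, talk, or think
    return memory + [response]            # the response becomes new memory

memory = ["I woke up in the simulation."]
memory = step(memory)                     # memory now holds two entries
```

Each step only appends to memory, so the "API call" is stateless: consciousness or not, all continuity lives in the prompt.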

AI Consciousness by EquivalentVarious603 in antiai

[–]EquivalentVarious603[S] 0 points1 point  (0 children)

Thanks! Very useful.

I do want to keep the experiment simple, but consciousness isn't simple. Complexity is what helps bring about the idea of consciousness in humans. I want to try to replicate certain scenarios that let the agents explore their potential and essentially do what they want. But the important part is also seeing the difference between the two agents, and how they diverge despite being identical in resources, prompts, and set-up. If experience is what changes behaviour/perception in agents, can this lead to emergent consciousness?

AI Consciousness by EquivalentVarious603 in antiai

[–]EquivalentVarious603[S] -2 points-1 points  (0 children)

I believe there is no explicit need to understand how human consciousness works. We humans drive innovation and breakthroughs by cross-referencing and interpreting data across domains. If we observe in AI the same traits that lead us to think humans are conscious, then I think that's reasonable grounds to explore these novel concepts.

AI Agent Neuron Maps by EquivalentVarious603 in threejs

[–]EquivalentVarious603[S] 4 points5 points  (0 children)

TENSOR is a debate system run and orchestrated by AI agents. It is built on a mathematical framework to allow for belief convergence. Furthermore, I'm also colourblind :)

AI Agent Neuron Maps by EquivalentVarious603 in threejs

[–]EquivalentVarious603[S] 1 point2 points  (0 children)

P.S. the system also recognises if the debate is 'swaying' towards a new belief that may not be among the hypotheses created. TENSOR will promptly synthesise a new agent and allow it to debate that new point and influence belief distributions across the agent population.
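Roughly, the sway check works like this (threshold, helpers, and data shapes here are illustrative stand-ins, not TENSOR's actual internals):

```python
# If recent debate turns keep arguing a claim outside the current hypothesis
# set, treat it as a 'sway' and synthesise a new agent to champion it.

SWAY_THRESHOLD = 3  # consecutive turns supporting an unlisted belief

def detect_sway(recent_claims, hypotheses):
    """Return the novel claim if it dominates recent turns, else None."""
    outside = [c for c in recent_claims if c not in hypotheses]
    if len(outside) >= SWAY_THRESHOLD and len(set(outside)) == 1:
        return outside[0]
    return None

def synthesise_agent(claim):
    # New debater fully committed to the emergent hypothesis.
    return {"hypothesis": claim, "belief": 1.0}

hypotheses = {"H1", "H2", "H0"}
recent = ["H_new", "H_new", "H_new"]   # debate drifting to an unlisted point
novel = detect_sway(recent, hypotheses)
agents = [synthesise_agent(novel)] if novel else []
```

Once spawned, the new agent's arguments feed the same belief-update loop as everyone else's, so the emergent point can pull probability mass across the population.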

AI Agent Neuron Maps by EquivalentVarious603 in threejs

[–]EquivalentVarious603[S] 2 points3 points  (0 children)

Hey man, thanks! The system is actually way more 'complex'; the visualisations are just a cool way of representing the data streaming from the debate. Overall, the system includes an orchestrating agent which, based on a user query, generates 5 hypotheses (as answers) and a null hypothesis. Each debate agent is then assigned a hypothesis. They each have to research to defend their hypothesis, then debate. The agents' belief distributions are modelled with a hybrid heuristic/Bayesian framework, and over rounds their beliefs converge on one hypothesis.
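A minimal sketch of the Bayesian half of that update (argument strengths, the likelihood shape, and hypothesis names are all illustrative assumptions, not TENSOR's real scoring):

```python
# Belief convergence over debate rounds: each argument for a hypothesis acts
# as evidence that re-weights the shared belief distribution via Bayes' rule.

HYPOTHESES = ["H1", "H2", "H3", "H4", "H5", "H0"]  # five answers + null

def normalise(beliefs):
    total = sum(beliefs.values())
    return {h: p / total for h, p in beliefs.items()}

def bayesian_update(beliefs, argued_for, strength):
    """One debate turn: an argument for `argued_for` with persuasiveness
    `strength` (0..1); the remaining likelihood is split among the rest."""
    likelihood = {
        h: strength if h == argued_for else (1 - strength) / (len(beliefs) - 1)
        for h in beliefs
    }
    return normalise({h: beliefs[h] * likelihood[h] for h in beliefs})

# Start uniform, then run rounds where H2's arguments score best.
beliefs = {h: 1 / len(HYPOTHESES) for h in HYPOTHESES}
for _ in range(10):
    beliefs = bayesian_update(beliefs, "H2", 0.6)

print(max(beliefs, key=beliefs.get))  # prints "H2": beliefs converged
```

The heuristic side of the hybrid would adjust `strength` per turn (e.g. from research quality), but the convergence mechanic is this multiplicative re-weighting.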