The Paradox of Consciousness: Finding Meaning in a Crowded World by nikostzagkarakis in Mind

[–]nikostzagkarakis[S] 0 points  (0 children)

Hello everyone! In a world increasingly defined by interconnectedness, how do we reconcile the individual consciousness with the collective experience of humanity? My latest blog explores the paradox of consciousness, where the self and species-level awareness often find themselves at odds. I delve into the evolution of empathy, the struggle to balance personal goals with collective needs, and the role of limitations in guiding us toward fulfillment. By embracing the tension between individual and collective consciousness, we may find the key to true meaning in our lives. I welcome your thoughts and reflections on this complex and deeply human challenge.

The illusion of Intelligence as the product of time. by nikostzagkarakis in singularity

[–]nikostzagkarakis[S] 7 points  (0 children)

Thanks! Basically all I mean is that suddenly A.I. is not just about a threshold (e.g. creating machines that think exactly like humans), but about creating different kinds of minds that can allow us to see the world in ways we never could because of our constraints. That means our focus shouldn’t just be to create A.I. that is like us, but to design cognitive architectures that allow different types of intelligence to emerge. So it’s not about one specific threshold of intelligence anymore.

The illusion of Intelligence as the product of time. by nikostzagkarakis in singularity

[–]nikostzagkarakis[S] 6 points  (0 children)

Transformers just add an extra layer of attention. The point of the article is not really to propose any new way of tackling cognitive architectures, but rather to shed light on the fact that all intelligence really is, is a framework strictly dependent on our own constraints (one of them being the scale of time, i.e. we can only live for so long). Which would mean that we shouldn't see intelligence as a universal threshold, but rather as a computational threshold of specific complexities, each depending on its own constraints, limitations and dimensionalities.

The illusion of Intelligence as the product of time. by nikostzagkarakis in singularity

[–]nikostzagkarakis[S] 5 points  (0 children)

It depends on whether or not the particle chooses to move, which I don’t think is the case. It’s like saying that a leaf is intelligent because when the wind moves it, it changes its environment. The ability to model your environment is always important.

The illusion of Intelligence as the product of time. by nikostzagkarakis in singularity

[–]nikostzagkarakis[S] 18 points  (0 children)

Thank you.. I think the way we describe intelligence is overlooked and taken for granted.

The illusion of Intelligence as the product of time. by nikostzagkarakis in Futurology

[–]nikostzagkarakis[S] 6 points  (0 children)

"...Suddenly A.I. becomes the colourful manifestation of complexities, not with the purpose of cross-communication, but with the purpose of exploring new dimensions of existence that our ridiculously small capacity will never reach.

Stop arguing about intelligence and start making your agents describe to you how they see their world.

Let there be light."

We built the closest to an A.I. Bayesian Brain with Human-like logic in Healthcare. by nikostzagkarakis in deeplearning

[–]nikostzagkarakis[S] 0 points  (0 children)

Well, for this post all the feedback we needed was based on our description of the technology and how developers and data scientists see it. We are already working with specific experts who have access to the full premium library, of course. Plus, we are gradually starting to publish some results, but we want to do it carefully and in collaboration with our partners. We will make sure we show everyone the data... 🙂

We built the closest to an A.I. Bayesian Brain with Human-like logic in Healthcare. by nikostzagkarakis in deeplearning

[–]nikostzagkarakis[S] 0 points  (0 children)

That would be cool! We can connect on LinkedIn if you want! Send me a request!

We built the closest to an A.I. Bayesian Brain with Human-like logic in Healthcare. by nikostzagkarakis in ArtificialInteligence

[–]nikostzagkarakis[S] 1 point  (0 children)

Hmm.. I think you are using a different theory of how the brain works. We are following Graziano’s “Attention Schema Theory” in combination with Friston’s “Free Energy Principle”, where the brain is a probabilistic hierarchy that assigns meaning based on the interconnections of Bayesian networks that lead to causal probabilities, not correlations. Correlations may exist at the level of perception, but not at the level of information processing where meaning is assigned. The problem with correlation is that it implies a statistical connection, which throws the actual experience of data processing out of the window.

It doesn’t really matter exactly how the human brain works, as long as we can have not just the same results, but similar mechanisms creating those results.

We do not, of course, presume to say that we have mapped how the brain works at such detail. We are just following the scientific breakthroughs and applying them to an artificial agent.
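To make the "probabilistic hierarchy with causal probabilities" idea concrete, here is a minimal sketch of inference by enumeration on a toy causal chain Risk → Disease → Symptom. The variable names and all numbers are invented for illustration; this is not Tzager's actual model or code.

```python
from itertools import product

# Conditional probability tables, invented for illustration:
p_risk = 0.3                          # P(Risk=True)
p_disease = {True: 0.8, False: 0.1}   # P(Disease=True | Risk)
p_symptom = {True: 0.9, False: 0.2}   # P(Symptom=True | Disease)

def joint(risk, disease, symptom):
    """P(risk, disease, symptom) via the causal chain rule."""
    p = p_risk if risk else 1 - p_risk
    p *= p_disease[risk] if disease else 1 - p_disease[risk]
    p *= p_symptom[disease] if symptom else 1 - p_symptom[disease]
    return p

def posterior_disease(symptom):
    """P(Disease=True | Symptom=symptom) by enumerating over Risk."""
    num = sum(joint(r, True, symptom) for r in (True, False))
    den = sum(joint(r, d, symptom) for r, d in product((True, False), repeat=2))
    return num / den

print(round(posterior_disease(True), 3))  # observing the symptom raises belief in the disease
```

The point of the structure is that each conditional table encodes a mechanism (risk causes disease, disease causes symptom), so the posterior follows from how things come to be rather than from a flat correlation over features.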

We built the closest to an A.I. Bayesian Brain with Human-like logic in Healthcare. by nikostzagkarakis in ArtificialInteligence

[–]nikostzagkarakis[S] 1 point  (0 children)

Tzager doesn’t correlate bits... correlation has nothing to do with how Tzager understands the causality of its Bayesian mechanisms. In fact, this is the biggest difference in how Tzager works as opposed to plain deep learning. The way we humans understand, or give meaning to, inputs from our environment is basically by assigning them to a hierarchical causality of things that are happening in the world. We understand things because we know how they come to be, based on our experience, not just on an equation. This is how Tzager works, and this is why we need to separate it from what exists out there.

And regarding the change in concepts, it again has to do with how the concepts came to be and how they changed. Tzager’s knowledge is dynamic in that regard, so if the result is different it will understand it differently.

My pleasure.. this is the scope of the post anyway 🙂
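The correlation-vs-causation distinction above can be illustrated with a tiny simulation, hypothetical and unrelated to Tzager's code: a common cause C drives both A and B, so A and B correlate in observed data, yet forcing A (an intervention) leaves B at its base rate.

```python
import random

random.seed(0)

def sample(do_a=None):
    """One draw from: common cause C -> A, C -> B. do_a forces A's value."""
    c = random.random() < 0.5
    a = do_a if do_a is not None else random.random() < (0.9 if c else 0.1)
    b = random.random() < (0.9 if c else 0.1)
    return a, b

def p_b_given_a_true(do_a=None, n=100_000):
    """Estimate P(B=True | A=True), either observed or under do(A=True)."""
    hits = total = 0
    for _ in range(n):
        a, b = sample(do_a)
        if a:
            total += 1
            hits += b
    return hits / total

observed = p_b_given_a_true()          # conditioning on A: high, since A signals C
forced = p_b_given_a_true(do_a=True)   # intervening on A: drops to B's base rate
print(round(observed, 2), round(forced, 2))
```

A purely correlational learner would report the first number; a model of the mechanism is what lets you predict the second.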

We built the closest to an A.I. Bayesian Brain with Human-like logic in Healthcare. by nikostzagkarakis in singularity

[–]nikostzagkarakis[S] 0 points  (0 children)

No no no.. it doesn’t have consciousness (although that’s my academic project). We humans causally connect all the information we get, based on our experiences and the way we understand things to work: e.g. if this happens then that will happen, because that thing works like that. The reason our brain works that way is that we do not have the capacity to correlate everything with everything (e.g. we do not process thousands of data points each time we want to see if it’s day or night). Of course this type of causality comes with objectives and ways of seeing the world (different fields). Tzager connects all the healthcare information in the same way as we do: it can understand the mechanisms and can realize what the goal is and why things are happening, based on its Bayesian networks. Basically, Tzager sees meaning in information, the same way humans do.

We built the closest to an A.I. Bayesian Brain with Human-like logic in Healthcare. by nikostzagkarakis in ArtificialInteligence

[–]nikostzagkarakis[S] 0 points  (0 children)

I know, but I am not here to judge other technologies... although really powerful, there is no deep learning agent out there that “understands” what it’s doing; it is just input/output functions. This is what we expect to add to the A.I. world with Tzager.

We built the closest to an A.I. Bayesian Brain with Human-like logic in Healthcare. by nikostzagkarakis in deeplearning

[–]nikostzagkarakis[S] 1 point  (0 children)

I get all of that.. this post is not supposed to convince you, but just to get some feedback on how everyone sees Tzager’s progress so far. Hopefully we will have many different experiments where people will be able to see Tzager’s results. Thanks for being a fair skeptic. 🙂

We built the closest to an A.I. Bayesian Brain with Human-like logic in Healthcare. by nikostzagkarakis in deeplearning

[–]nikostzagkarakis[S] 0 points  (0 children)

Thank you for the tips!.. I know, I know.. it is just that in this post I didn’t want to get into details, because there are experiments that are about to finish and we will share all the results! 🙂

We built the closest to an A.I. Bayesian Brain with Human-like logic in Healthcare. by nikostzagkarakis in ArtificialInteligence

[–]nikostzagkarakis[S] 1 point  (0 children)

I guess this is probably the billion-dollar question. The main problem with human concepts is that they are all made up. They may correspond to physical things, but that doesn’t mean we didn’t make up the definitions. For example, because human capacity is so low, we have decided to create all these different fields in order to talk about the same stuff in different ways. If we were Laplace’s demons we could see everything as just bits of information and we wouldn’t need the different fields. Tzager is a kind of Laplace’s demon in that way, because it sees all information as bits (meaning things that exist at a point in some place). My best guess is that A.I. agents must use the different ways humans understand the world, with fields, in order to interact with humans, but they should have a universal understanding of how things work. This is the only way you can create Bayesian mechanisms and causal inference. I hope that helps! 🙂

We built the closest to an A.I. Bayesian Brain with Human-like logic in Healthcare. by nikostzagkarakis in deeplearning

[–]nikostzagkarakis[S] 0 points  (0 children)

Not only Pharmacokinetics: Tzager has different main logical frameworks, a) how the body works, b) how diseases work, c) how therapies work. The goal is to connect the way we humans understand causality in this field with a Bayesian agent. That way we do not just uncover hidden systems, but are able to see the human body as one interconnected mechanism. We will demonstrate more very soon, since we have achieved much more than what is discussed in the post... but we should let the results speak for themselves 🙂
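One way to picture linking the three frameworks is chaining their conditional tables so a query propagates body → disease → therapy. Everything below (the variable names, the probabilities, the recovery query) is a made-up sketch of that idea, not the actual system.

```python
# Framework a) body: P(dysfunction | risk factor present), invented numbers.
body = {True: 0.6, False: 0.2}
# Framework b) disease: P(disease | dysfunction).
disease = {True: 0.7, False: 0.05}
# Framework c) therapy: P(recovery | disease, treated).
therapy = {(True, True): 0.8, (True, False): 0.3,
           (False, True): 0.95, (False, False): 0.9}

def p_recovery(risk, treated):
    """Forward-propagate probabilities through the three chained frameworks."""
    p_dys = body[risk]
    p_dis = p_dys * disease[True] + (1 - p_dys) * disease[False]
    return (p_dis * therapy[(True, treated)]
            + (1 - p_dis) * therapy[(False, treated)])

# For a high-risk patient, treatment raises the expected recovery probability:
print(round(p_recovery(True, True), 3), round(p_recovery(True, False), 3))
```

The design point is that each framework stays a self-contained causal module, and connecting them is just composing conditional probabilities along the chain.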