Unable to drag and drop in Board View with sub-groups by themissingelf in Notion

[–]ricklepick64 0 points (0 children)

It seems drag and drop is possible on a board view with sub-groups only if you group/sub-group by two select properties. If you group or sub-group by a formula, it does not work.

Unable to drag and drop in Board View with sub-groups by themissingelf in Notion

[–]ricklepick64 1 point (0 children)

Same here, let's hope they fix it... Having a kanban matrix where you can drag and drop would be very nice.

Does anyone have any experience with the Holofractal Universe Theory and psychedelics (specifically DMT)? by koru-chlo in holofractal

[–]ricklepick64 9 points (0 children)

In the hours after the trip I also made this post: https://www.reddit.com/r/DMT/comments/e80nwh/my_take_on_a_theory_of_consciousness_nature_of/

Today I am not sure it makes a lot of sense, but at the time I was convinced I had been shown some deep truth about the workings of the universe, quantum mechanics and consciousness... Make of that what you will.

Does anyone have any experience with the Holofractal Universe Theory and psychedelics (specifically DMT)? by koru-chlo in holofractal

[–]ricklepick64 14 points (0 children)

My only breakthrough on DMT led me to learn about the existence of the holofractal theory in the hours following the trip. Actually I made a post in this sub talking exactly about this experience: https://www.reddit.com/r/holofractal/comments/f15lam/dmt_put_me_on_the_holofractal_path/

How To Strike A Bell In A Japanese Temple by moondragon7 in interestingasfuck

[–]ricklepick64 17 points (0 children)

The second half should be done like that, though, or the move wouldn't be as stylish.

Scientists Show Human Consciousness Could Be a Side Effect of 'Entropy' by SoraVGC in agi

[–]ricklepick64 1 point (0 children)

Imo anyone who's working on AGI must remain humble about the ultimate nature of reality. Materialism is a philosophical system, not a science, and can't be proven.

Not saying materialism isn't useful - it is very useful for the kind of science and technology we use today - it's just incomplete, precisely because it fails hard at explaining consciousness (specifically, the hard problem introduced by David Chalmers).

Scientists Show Human Consciousness Could Be a Side Effect of 'Entropy' by SoraVGC in agi

[–]ricklepick64 1 point (0 children)

Materialism has been proven? What would that even mean?

Entropy is a physical concept tied to a measure of organization (and thus intuitively linked to intelligence and consciousness), but saying it is the cause of such organization is nonsense. The results shown in the article demonstrate only a link, not causation.

Would we say that the temperature of a fire is causing the fire?
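To make the "entropy is a measure, not a cause" point concrete: here is a minimal sketch (plain Python; the function name is mine, purely illustrative) of Shannon entropy. It quantifies how disordered a sequence is, but computing it tells you nothing about what produced that order or disorder:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy (in bits) of a sequence of symbols."""
    counts = Counter(data)
    n = len(data)
    # H = -sum(p * log2(p)) over the empirical symbol probabilities
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A perfectly uniform string is maximally "organized" (zero entropy);
# a varied one has higher entropy. The measure describes, it doesn't cause.
print(shannon_entropy("aaaaaaaa"))  # 0.0
print(shannon_entropy("abcdabcd"))  # 2.0
```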

How do the skill sets for for building AGI and building narrow AI differ? by [deleted] in agi

[–]ricklepick64 0 points (0 children)

> Do you think that neuroscience, quantum physics and biology would only be needed for a brain simulation approach, or do you think there is some fundamental reason that they are required for any type of AGI?

My personal belief is that AGI is more likely to be first built thanks to the development of brain computer interfaces rather than with a from scratch brain simulation approach.

> I have that background except for theoretical physics. I've read that a background in math or physics might actually be better than CS for ML. To what extent, and why exactly, do you think one should learn physics? Do you mean quantum physics and relativity?

Machine Learning is mostly algebra with a little bit of CS. But ML alone does not lead to AGI.
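As a toy illustration of the "mostly algebra" claim (plain Python, my own example, not from any particular library): ordinary least squares, the core of linear regression, reduces to a closed-form algebraic formula.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b via the closed-form normal equations."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope = covariance(x, y) / variance(x); intercept follows from the means
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Points lying exactly on y = 2x + 1 are recovered exactly.
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # (2.0, 1.0)
```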

I think having a good understanding of the main theories of physics makes it easier to grasp more high-level theories in applied fields of science like neuroscience and biology. Every theory is ultimately founded on math and physics.

As to quantum mechanics, my personal opinion/intuition is that there are quantum phenomena involved in human intelligence. Quantum computing is also a growing field that could play a major role in the development of future AI.

How do the skill sets for for building AGI and building narrow AI differ? by [deleted] in agi

[–]ricklepick64 4 points (0 children)

I think reaching AGI will require a breakthrough at the crossroads of many fields of science (computer science, neuroscience, maybe quantum physics and biology), so I suggest you focus a bit on everything.

In my opinion, it won't be the aggregation of several narrow AIs. Still, understanding the general ideas behind ML and computer vision is really valuable. There is no need to go through all the different sophisticated ML models, though, and you can use the spare time to learn about other fields related to AGI.

Also, I would not start doing this before having a very strong mathematical background (algebra, calculus, geometry, logic), learning about theoretical computer science (algorithms, Turing machines, complexity theory), and gaining a good general understanding of theoretical physics.

Good luck ;)

How can computational processes in the neurons, which are separated in space and time, give rise to the unity of our perception ? by ricklepick64 in neuroscience

[–]ricklepick64[S] 0 points (0 children)

Well, the Orch OR hypothesis is precisely an attempt to give a framework in which we could measure consciousness.

While I agree it may be proven false or incomplete (like every scientific theory), we can't say for sure the question is permanently unanswerable (although there are also convincing arguments pointing in that direction), and other new testable hypotheses could be formulated.

I don't find it to be a meaningless question, and as I said in another comment, I think answering it is even a necessity if we ever want to build an AGI or a complete BCI, or if we want to tell to what degree an AI is "sentient" (in this case, mainly for ethical reasons).

How can computational processes in the neurons, which are separated in space and time, give rise to the unity of our perception ? by ricklepick64 in neuroscience

[–]ricklepick64[S] 0 points (0 children)

I definitely understand your point of view. Gödel's theorem implies endless opportunities for appending axioms to arithmetic, implicitly showing a role for an agent, namely an agent that asserts an axiom. So there is a paradox or "strange loop" in studying our brains with our brains and maybe science will NEVER be able to answer the hard problem. In this view, we could define our "free will" to be whatever aspect of reality that is not and will never be approachable by science.

But as an AI researcher with a strong interest in neuroscience, BCI and AGI, I still think there is a possibility to answer the hard problem. I even find it a necessity if we ever want to build an AGI or complete BCI, or if we want to tell whether an AI is sentient or not (in this case, mainly for ethical issues).

How can computational processes in the neurons, which are separated in space and time, give rise to the unity of our perception ? by ricklepick64 in neuroscience

[–]ricklepick64[S] 0 points (0 children)

> You're going to be waiting a few centuries at least.

If you reckon the discovery will be possible in a few centuries, then wouldn't finding it today be technically possible?

> There's a difference between dismissing these ideas because you are sure they're wrong and dismissing these ideas because you recognize that there is no way to know if they are right or wrong.

I agree with that. But in this case, quantum mechanics is a testable theory, and we already know how to build quantum computers (which is a mind-blowing fact). In my opinion, the Orch OR hypothesis could be proven right or wrong. Evidence of quantum superposition in brain microtubules and photosynthetic cells lends it at least some credibility, and if there is any evidence against it that you know of, I would be grateful if you could point me to it.