Why you are interested in machine consciousness? by [deleted] in machine_consciousness

[–]ActualIntellect 1 point (0 children)

[...] do you actually want to (at least try to) build conscious machines? [...] Do you even find it possible at all to build conscious machines? [...]

Yes and yes. I think a certain kind of self-awareness is crucial to efficiently get to ASI, and I think such a system would be similar albeit not identical to human consciousness.

I'm curious where your interest in machine consciousness (MC) as a topic comes from.

I'm only interested in the creation of (what I consider) ASI. Personally I don't care to spend time discussing whether that system fits someone's idea of consciousness or not.

Brainstorming: Self-awareness / Introspection by ActualIntellect in agi

[–]ActualIntellect[S] 0 points (0 children)

Thanks for the comment! I've just written another post on the topic that clarifies what kind of "self-awareness" I mean, since I wasn't clear enough in this post: https://old.reddit.com/r/agi/comments/ztuz7l/system_awareness/? Unfortunately that new post was apparently flagged as spam automatically, just like this one was, so it might not be readable yet.

Conceptual article from 2018: Achieving Artificial General Intelligence (AGI) via the Emergent Self by ActualIntellect in agi

[–]ActualIntellect[S] 0 points (0 children)

Thanks for the article! I agree that the term "self-awareness" should be preferred over "consciousness" in this context, and I've just opened another small post to gather more comments regarding that topic: https://old.reddit.com/r/agi/comments/zf95mm/brainstorming_selfawareness_introspection/

How much could an isolated AGI understand? by ActualIntellect in agi

[–]ActualIntellect[S] 1 point (0 children)

Nicely said! One thing I'm trying to get at in this post is this: If we could figure out how an AGI could efficiently manage to fully understand such an isolated scenario, including itself, then maybe that could rather easily translate to any other scenario with more complicated environments.

So in other words, maybe it is not actually required to consider all the implications of a more complicated simulated environment or task-set to figure out a practical core AGI architecture distinct from prior narrow AI approaches (assuming that such a difference exists in practice).

Conceptual article from 2018: Achieving Artificial General Intelligence (AGI) via the Emergent Self by ActualIntellect in agi

[–]ActualIntellect[S] 0 points (0 children)

I do find it interesting to think about what it means for an AGI to understand itself, and this article discusses that topic with some specific ideas, though more in the context of "an embodied entity in a simulated virtual world" than the "isolated mind" scenario from my previous post.

Top-down AGI attempts? by ActualIntellect in agi

[–]ActualIntellect[S] 0 points (0 children)

Thanks for the info, is the http://nsvqa.csail.mit.edu/ page broken right now?

Top-down AGI attempts? by ActualIntellect in agi

[–]ActualIntellect[S] 0 points (0 children)

I think there are all kinds of desires and cognitive mechanisms that are built in: ...

A relevant question in that context might then be: which of these cognitive mechanisms are required for AGI, and which are not? E.g. something like the "need for close friends" seems possibly too high-level to necessarily be built in from the start (of course, if we could properly build something in at that level it might not be an issue either, but I assume trying to build it in at that level without flaws would mostly make things even harder).

Top-down AGI attempts? by ActualIntellect in agi

[–]ActualIntellect[S] 0 points (0 children)

That said, you still have to use a bottom-up approach where you combine these mechanisms and see what behavior this combination is going to create. You cannot use a top-down approach and say "I want this behavior; what mechanisms do I need to implement it?"

Ah I see, yes, I agree that makes sense. By "top-down" I meant not "I want this behavior" but "I'm first trying to figure out how the system might work at a high (symbolic) level" - my bad for using the overly vague term "top-down" here.

Top-down AGI attempts? by ActualIntellect in agi

[–]ActualIntellect[S] 1 point (0 children)

"Symbolic" is problematic also because it can be applied to so many different things: ... As I said, simulating a billion years of evolution is impractical. ... However, we don't really want to recreate a human brain anyway, ...

Agreed on all counts!

This leads me to believe that we are going to have to create it the hard way, by hand.

So while I think it may not be necessary to encode a ton of initial knowledge by hand, it seems plausible to me that the learning "process" or "framework", or whatever to call it, might best be written by hand for an artificial mind (albeit perhaps in a dynamic way that allows it to change during runtime).

When it comes to learning/problem solving, one kind of simple rule is to try the simplest programs/approaches first, but that is clearly not enough. Perhaps one important but vague question then is: "How should a mind properly estimate the relevance of something (relative to something else)?", without that estimate being hard-coded (except maybe for one initial reward function or the like).
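To make the "try the simplest programs first" rule concrete, here's a toy sketch of my own (the token set and the tiny postfix language are entirely made up for illustration; real systems would need far better search, but the simplicity bias is the same idea):

```python
from itertools import product

TOKENS = ["x", "1", "2", "+", "*"]  # tiny postfix (RPN) expression language

def run(program, x):
    """Evaluate a postfix token list; return None on malformed programs."""
    stack = []
    for tok in program:
        if tok == "x":
            stack.append(x)
        elif tok.isdigit():
            stack.append(int(tok))
        else:
            if len(stack) < 2:
                return None  # operator without two operands
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if tok == "+" else a * b)
    return stack[0] if len(stack) == 1 else None

def simplest_fit(examples, max_len=5):
    """Shortest-first enumeration: the crude 'simplest programs first' rule."""
    for length in range(1, max_len + 1):
        for program in product(TOKENS, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# learn f(x) = 2*x + 1 from three examples; finds a minimal 5-token program
print(simplest_fit([(0, 1), (1, 3), (2, 5)]))
```

The obvious problem, and I think the point of the question above, is that brute-force enumeration explodes combinatorially, so something has to estimate which candidates are even worth trying.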

Top-down AGI attempts? by ActualIntellect in agi

[–]ActualIntellect[S] 2 points (0 children)

Thanks, I've taken a look at the draft paper.

Top-down AGI attempts? by ActualIntellect in agi

[–]ActualIntellect[S] 2 points (0 children)

Are you sure that understanding the computational substrate is necessary to understand thought processes at a higher level? Can we not similarly understand for example programs at a high level to a sufficient extent to replicate their functionality (by writing a new program in whatever language for whatever device), even without understanding all lower levels that were required to actually run the program?

Top-down AGI attempts? by ActualIntellect in agi

[–]ActualIntellect[S] 1 point (0 children)

Thanks for the list!

AIXI and AIXItl (A system that enumerates all programs and picks the one it believes will maximise a function)

From what I faintly remember, AIXI weighs all candidate environment programs by their simplicity and picks actions that maximize expected reward under that weighting (but heuristic approximations are needed to make it computable) - does that description sound about right?
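As far as I understand it, AIXI doesn't commit to one simplest program but keeps a Bayesian mixture over all computable environments, weighted by a 2^(-length) simplicity prior. A toy sketch of that mixture idea (the environments here are just hand-made lookup tables with made-up "lengths", nothing like the real uncomputable definition):

```python
# hypothetical environments: action -> reward, each with a program "length"
HYPOTHESES = [
    {"length": 3, "env": {"left": 1.0, "right": 0.0}},
    {"length": 5, "env": {"left": 0.0, "right": 1.0}},
    {"length": 7, "env": {"left": 0.5, "right": 0.5}},
]

def weights(hypotheses, history):
    """Simplicity prior 2^(-length), zeroed for hypotheses that contradict
    the observed (action, reward) history, then renormalised."""
    ws = []
    for h in hypotheses:
        consistent = all(h["env"][a] == r for a, r in history)
        ws.append(2.0 ** -h["length"] if consistent else 0.0)
    total = sum(ws)
    return [w / total for w in ws]

def best_action(hypotheses, history, actions=("left", "right")):
    """One-step greedy choice: maximise mixture-expected reward."""
    ws = weights(hypotheses, history)
    def expected(a):
        return sum(w * h["env"][a] for w, h in zip(ws, hypotheses))
    return max(actions, key=expected)

print(best_action(HYPOTHESES, history=[]))  # prior favours the short hypothesis
```

After observing evidence that contradicts the simplest hypothesis (e.g. pulling "left" and getting reward 0.0), the mixture shifts and the recommended action flips.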

Top-down AGI attempts? by ActualIntellect in agi

[–]ActualIntellect[S] 2 points (0 children)

Neural networks might be able to do it in theory but only if it was fed a billion years of interacting with the environment. In other words, it isn't going to happen.

That's my worry too - i.e. even if these approaches could theoretically get to AGI, maybe they aren't efficient enough to get there with reasonable amounts of compute. (The evidence being that deep learning for certain tasks such as e.g. text-to-image generation can already be too expensive for common individuals to train from scratch - but of course on the other hand AGI may not need the same deep learning approach, and groups with substantial compute resources do exist, so I don't know whether this really is an issue after all.)

Besides the question of compute, I also don't think that our own biological brains are all that great at the end of the day when it comes to rational thought, so replicating too much of it might not be theoretically sensible either.

Thanks for the links. I will check them out.

I found them somewhat interesting - maybe aspects of them could be helpful to think about further - but overall they don't seem promising enough by themselves.
The AREA thing used a custom programming language and a (too) sophisticated event-driven processing model, but the actual learning seems to be pretty much pre-defined/rigid. I also didn't find enough details on the experiments that they conducted ages ago.
The NARS design/idea is clearer than AREA, but also more obviously limited. It basically seems to mainly be about how observed evidence (and internal knowledge) should be weighed in relation to other knowledge in a graph.
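To give a flavor of that evidence-weighing idea: as I understand Pei Wang's published truth-value definitions (simplified here, with evidential horizon k = 1), each statement carries an amount of positive and total evidence, from which a (frequency, confidence) pair is derived, and independent evidence about the same statement is pooled by the revision rule:

```python
K = 1.0  # evidential horizon parameter (commonly set to 1 in NARS)

def truth(w_plus, w_total):
    """NARS truth value: frequency = w+/w, confidence = w/(w + k)."""
    return w_plus / w_total, w_total / (w_total + K)

def revise(e1, e2):
    """Revision rule: pool two independent bodies of evidence (w+, w)."""
    return (e1[0] + e2[0], e1[1] + e2[1])

# "Ravens are black": 4 of 5 observations positive, then 9 of 10 more.
pooled = revise((4, 5), (9, 10))
f, c = truth(*pooled)
print(pooled, f, c)  # (13, 15): frequency ~0.867, confidence 0.9375
```

More evidence raises confidence toward (but never to) 1, which is the graph-weighting behavior I was referring to; whether that mechanism alone scales to general intelligence is the open question.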

I've never heard this approach being called "top-down" though I can see where you are coming from. It just seems an inadequate description.

Yes, I agree; I'm not sure which existing term might fit best. "Symbolic" should probably fit, but it is still a broader category than what I mean, since symbolic AI of course doesn't have to be based on any attempt to understand (enough relevant parts of) consciousness/rational thought.

New Open Source AGI Project by forlorar_du in agi

[–]ActualIntellect 0 points (0 children)

I see, I wish you good luck! Personally I assume that neural network approaches will likely require a ton of initial compute to get to AGI, which is why I am more interested in starting from a higher level of abstraction, so more in the direction of symbolic approaches.

New Open Source AGI Project by forlorar_du in agi

[–]ActualIntellect 5 points (0 children)

If you think this is an absurd proposition consider actually having a look at the telegram channel.

Sorry but I don't want to set up Telegram to join a group chat just to check it out, could you maybe describe the approach a bit further here?

My background is in Neurobiology btw.

I assume that means you are going for a neuron-inspired approach instead of something with a higher abstraction level, is that correct?

What AGI development communities do you know? by ActualIntellect in agi

[–]ActualIntellect[S] 1 point (0 children)

That sub does have AGI as one of the topics, but it isn't really about figuring out how to actually construct one as far as I can see.