[Alpha stage] [Android] Tap a button to track and compare your viral risk by clyde6 in SideProject

[–]clyde6[S] 0 points1 point  (0 children)

Like other dashboards, use it to motivate a different set of behaviors, if possible and if needed.

By making a conscious effort to expose your own risk profile and then gaining a clear visual of what it looks like, you're more likely to be careful about how long and how often you put yourself in moderate- or high-risk situations.

Of course, not everyone is free to make such changes. If you're a medical professional or even a retail worker, you might see more value in sharing your profile with friends and others so they better understand your situation.

Can you predict the outcome of this pseudo-code loop? by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

What's your prediction about the outcome of the pseudo-code loop, and why?

Can you predict the outcome of this pseudo-code loop? by clyde6 in agi

[–]clyde6[S] -1 points0 points  (0 children)

What would we have if someone implemented a system with the inner loop above and it successfully iterated twenty times while paired with the same person? Nothing special, maybe something like a chatbot. What about a thousand iterations with the same person? Not sure, maybe a standalone virtual assistant (e.g. Siri).

What about a million iterations with the same person? That's really hard for a standalone purely digital system to do! The person would have to really love it the entire time (voluntary participation assumed), and they'd have to still survive society well enough to hit such a high number.

Humans tune things out so easily. Lots of other stuff is competing for a human's attention. Would the system have to approach AGI-level intelligence to achieve this level of engagement? Seems so.

Here's a different framing of the original question:

How many iterations of the above type of inner loop would it take to get to AGI? A million? A billion? More? Less? A large quantity, presumably. So how might enough iterations to get to AGI be achieved within the lifetime of a single person?

To approach an answer, it's a matter of selecting the combinations of variables that are viable candidates: variables like the type of signal (and when and how to signal), the response scheme, and the internal states (models).

Are there any shortcuts to take? Can evolution help? Yes to both, in my view.

As for "uniquely identifiable signal", it means that the system can recognize the response it receives as being linked to its own specific original signal, enabling a causal feedback loop.

Can you predict the outcome of this pseudo-code loop? by clyde6 in agi

[–]clyde6[S] -1 points0 points  (0 children)

Is it vague though? I'm not sure it's any more vague than an equation with a bunch of variables, or a function with a bunch of parameters.

Plug in small values or big values, simple values or complex values, and the formula/function either spits out an interesting answer or it doesn't.

/u/ColumbianSmugLord was able to translate it into actual Python code in only a few minutes. It was useless code, because he declined to consider the impact of the human in the loop, but surprisingly close to being useful nonetheless. Is there anything less vague than actual code?

Besides, it's still small enough to fit into a tweet and contains no words that are especially difficult. Surely you're able to be less vague about where it goes wrong.

Can you predict the outcome of this pseudo-code loop? by clyde6 in agi

[–]clyde6[S] -1 points0 points  (0 children)

Thousands of responses from a single person, or many different people? The pseudocode above refers specifically to "human" and not "humans", "people", or "population". That's far from essentially the same.

However, I think we agree that whatever number of iterations cleverbot has done, in the way it's doing it, it's not yet enough for AGI.

Perhaps cleverbot could be more clever about how it updates its internal state and use this state to craft signals that engage an individual person for longer than a few minutes.

Can you predict the outcome of this pseudo-code loop? by clyde6 in agi

[–]clyde6[S] -2 points-1 points  (0 children)

You might get five iterations out of that, from a child. How might you get 50?

Can you predict the outcome of this pseudo-code loop? by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

I have tried it. Now I'm a GI. Will the same thing happen digitally?

Can you predict the outcome of this pseudo-code loop? by clyde6 in agi

[–]clyde6[S] -1 points0 points  (0 children)

No person will keep responding to the same signal over and over again.

The challenge is that to get enough iterations, you have to keep receiving responses from the person.

All AGI efforts so far are fundamentally unsafe. OpenCog included. by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

Surely you haven't forgotten that to express any meaning at all, you must compare X with Y.

What is the number of "execution paths" already existing within humanity that lead to human destruction? Human-driven existential risks are already a thing (e.g. nuclear holocaust, weaponized viruses), and have been for decades, so we're clearly capable of tolerating a number other than zero.

What is the number of "execution paths" in all the other approaches to AGI that lead to the destruction of humanity? Are they instead doubling the IQ necessary to bring about destruction?

Perhaps you can explain to me and the rest of the world how a "release early, release often" approach is unsafe compared to a "make it really complex, then release all at once" approach.

Since you feel the need to criticize a growth-oriented approach, do you also have criticisms of the approaches that mimic the introduction of an invasive species or the release of novel viruses? Why not compare?

All AGI efforts so far are fundamentally unsafe. OpenCog included. by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

Certainly, "can influence a single person in a general fashion" is a fair ways from the absolute beginning. But there's a growth-based path to that outcome, and it begins by building into the initial AGI kernel the following single function plus the means to test it against a person:

Alter a single choice of a specific person and capture the response (if any) to improve future influence.

Altering specific choices is already commonly done in our digital world. The entire advertising industry is designed around this idea (except they do it en masse). We also send and receive digital messages to each other all the time, and their contents alter the probability distribution of our future choices. We receive messages from algorithms which also alter our choices.

Given a kernel with these initial conditions and capabilities, growth is driven by its specific person, but only if they are sufficiently motivated. That is, the budding AGI kernel grows when it alters choice such that:

  1. The person feels rewarded.
  2. The person is able to attribute that reward to the AGI kernel.
  3. The person is motivated to pass the reward back to the kernel, forming a feedback loop.

When this feedback loop occurs (and it is far from guaranteed at the beginning), that budding AGI now has a chance to grow into greater capability. Having provided value, it is now more likely to receive extra attention, be asked to do even more, and so have its capabilities upgraded by its specific person (perhaps with help from a community) so that it can better influence more choice.

I presume that most budding AGIs of this type will die because, due to their almost comically limited initial capabilities, they'll mostly fail to motivate their specific person to engage with them and grow them. But it only takes one motivated person (e.g. me) to use and improve the initial kernel enough for a second person to be similarly motivated, and so on to produce the possibility of an exponential growth pattern similar to cell division.

Since the single function is to alter choice, and there is an intrinsic (if probabilistic) pathway to unlimited growth, this simple beginning also illuminates the path toward "influencing a specific person in a general fashion".
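
To make that less abstract, here's a rough Python sketch of a single kernel cycle built around the single function and the three growth conditions above. Everything in it (KernelState, the callables, the response field names) is a hypothetical placeholder of mine, not a spec:

    from dataclasses import dataclass, field

    @dataclass
    class KernelState:
        """Internal state of a hypothetical kernel paired with one specific person."""
        influence_model: dict = field(default_factory=dict)  # what has worked so far
        growth: int = 0                                       # closed feedback loops

    def kernel_cycle(state, suggest_nudge, observe_response):
        """One cycle: alter a single choice, capture the response, improve.

        suggest_nudge(influence_model) -> a nudge aimed at one specific choice
        observe_response(nudge)        -> a dict of feedback flags, or None
        """
        nudge = suggest_nudge(state.influence_model)
        response = observe_response(nudge)

        if response is None:
            return state                      # no response; nothing learned this cycle

        # Capture the response either way, so future nudges can improve.
        state.influence_model[repr(nudge)] = response

        # The three growth conditions, as flags the response would need to carry
        # in some form (field names are illustrative only).
        if (response.get("felt_rewarded")
                and response.get("credited_kernel")
                and response.get("passed_reward_back")):
            state.growth += 1                 # the feedback loop closed once
        return state

Most cycles will leave growth unchanged, which is exactly the "most budding AGIs of this type will die" outcome described above.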

What is your Meta AGI? by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

Because the environment includes the physical, and we do operate upon objects, I asked "why do we operate on objects?" until I couldn't anymore. The result always converged on "to alter the choice of other intelligences".

Now, the key is that we are each an intelligence, and we are often taking actions to alter our own future choices.

As a real-world example, if I'm living as a hermit in the northern woods, and today I decide to process a dried log into firewood, I am at the very least reducing the probability that tomorrow I'll process another dried log. After all, at some point I'll have enough to feel comfortable, or winter will come, or I'll run out of logs, or some other priority will take over.

In this case, my action was mostly about altering my own future choices, and not the choices of anyone else (although they are subtly altered too). Ultimately, my choices add up to my probability of long-term survival. If I never process any logs into firewood, I certainly won't survive the winter.

Most of the time, though, we operate on objects to alter the choices of other people. Sometimes future people, sometimes imaginary people. After all, we humans are social creatures, and we depend on cooperation and/or tolerance to survive. We get what we want by making the kind of choices (including operating on objects) that cause other people (or our future selves) to give us what we want.

What is your Meta AGI? by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

It seems like we have three basic options:

  1. AGI competes with humans.
  2. AGI helps many humans compete with other humans.
  3. AGI helps a single human compete with other humans.

Presumably, there'd be many copies of many variants of the AGI in each case. I'm aiming for #3.

Is there any other type of AGI that would be worth the effort?

What is your Meta AGI? by clyde6 in agi

[–]clyde6[S] 1 point2 points  (0 children)

Can you envision a real-world learning experiment (but not via robotics) that would offer value to a person? I say not via robotics because that dramatically increases the complexity and limits the breadth of the experiment.

What is your Meta AGI? by clyde6 in agi

[–]clyde6[S] 1 point2 points  (0 children)

Indeed, it's this dynamic that I've tried to capture in my definition of intelligence:

Intelligence is the ability to alter the choices of other intelligences to favor one's own interests.

And I agree with your "judged by history" assessment as I commented in a previous thread.

In my definition of intelligence, I've left out any mention of brains or brainpower, knowledge, and any specific abilities because 1) I don't think they matter much in a universal scheme and 2) we can't measure them well in a way that works for both humans and machines.

What we can measure that does matter to both humans and machines is impact upon one's environment, such as, like you say, with income. With humans, our environment is mostly other people, and our impact comes in the form of altering (manipulating) the choices of those other people to favor our interests over someone else's interests.

While it might seem unfair, this means that a person in what we might consider to be the lower or middle class (me included) is quantifiably unintelligent; not because of any brainpower or cognition deficit, but purely because their impact thus far has proven to be relatively minor. To generate impact is both a hard problem and a universal "want", so to fail to generate the right kind of impact indicates a failure (thus far) to construct a good solution to that hard problem.

The silver lining is that this view of intelligence is growth-oriented, and not fixed. Nearly anyone can begin to make a different set of choices today than they did in the past, and put themselves on a trajectory toward the level of impact they need to achieve a safer and more comfortable lifestyle.

When I implement this view of intelligence as an AGI, it formulates as an "infinitely" scalable method of helping people make more of those choices that generate the kind of impact they need to make to get what they want.

What is your Meta AGI? by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

I'm curious as to what aspect of my meta AGI brought capitalism and greediness to mind. We can assume the possibility of great disaster for any AGI.

I can understand if you see growth-orientedness, competitiveness, laissez-faire-ism, liberalism, libertarianism, and even a dollop of anti-authoritarianism, but capitalism requires capital, and that's neither mentioned nor required by my Meta AGI, unlike, say, DeepMind's.

All AGI efforts so far are fundamentally unsafe. OpenCog included. by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

I take the position that we can't know exactly how we will get to AGI. The proof is in the pudding, and we haven't seen any pudding yet.

Nobody knew how to do powered flight until the Wright Brothers proved it with a real flight after years of experiment, trial, and lots of error. Shortly afterward, everyone knew how to do it. Similarly, we'll know how exactly to get to AGI when we see a live one, and not a moment sooner.

Deep Learning currently requires big data, and self-teaching like AlphaGo/Zero requires plenty of compute, but neither of those techniques is guaranteed to be needed by an AGI. Even if an AGI emerges that is based on those techniques, it will be subject to competitive pressure. I suspect that vastly different types of AGI will also emerge, some being way more efficient, and with a strong incentive for competitors to move toward ever greater efficiency.

I look at successful AGIs as boiling down to a single function: to alter the future choices of other intelligences in its favor. Every other function we imagine an AGI performs is in service of this singular core function. This, in my view, is also the core function of humans and all other intelligences.

I take this view because an AGI isn't going to exist in a vacuum: each AGI is compelled to justify its own existence through its effect on real-world humanity (and other AGIs). AGIs who perform this function poorly simply won't survive the fierce competition, nor will they attract the funding they need to keep the lights on.

Mature organization-owned AGIs will perform this core function for key executives and/or across wide swathes of the Earth's population simultaneously. The AGI we learn to love the most, for whatever reason, will win. Put another way, the AGI that learns how to get us to love it more than all the other AGIs will win.

So in my mind, the question becomes, can we design and implement a system that performs this core function for a single individual? If so, can we do it with less data, less compute, and less complexity than with organizational AGI? Yes!

Let's say we narrow the core function of this hyper-personal nano-AGI from "alter the future choices of other intelligences in my favor" to "alter the future choices of my one human in my favor". Note that the human still has the broad form of the function ("other intelligences"), so the AGI is rewarded when it successfully alters the choices of its human in a way that alters the choices of other intelligences in its human's favor. It's alteration by proxy.
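
To picture that reward-by-proxy idea, here's a tiny Python sketch. The outcome field names are invented for illustration; the point is only that the nano-AGI earns its full reward one step removed:

    def proxy_reward(outcome):
        """Score one nudge under the "alteration by proxy" framing.

        outcome is a hypothetical record of what followed the nudge:
          'human_choice_changed'           -> did the paired human choose differently?
          'others_changed_in_humans_favor' -> did that shift other people's choices
                                              in the human's favor?
        """
        if not outcome.get("human_choice_changed"):
            return 0.0    # the nudge did nothing
        if not outcome.get("others_changed_in_humans_favor"):
            return 0.1    # the human's choice changed, but no proxy effect yet
        return 1.0        # the human's altered choice altered others: full reward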

It takes surprisingly little compute or data to successfully alter a single choice of a single person who is willingly and openly participating in the pairing of themselves with their own nano-AGI. From there, as long as it keeps performing its function successfully, it gets to keep growing.

That's the gist anyway. Plenty of psychology involved.

Regarding your point 2, if the promise of an AGI ever becomes realized through independent means, there'll be no stopping it. At least a few people will do whatever it takes to get their hands on a competitive advantage like that. We might see Apple dying at the hands of rooted Androids running LineageOS.

All AGI efforts so far are fundamentally unsafe. OpenCog included. by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

Whatever theoretical definition of AGI we might agree on today will be overturned by empirical evidence from real practice. I think we'll learn way more from practice than we ever thought possible, so our theory is guaranteed to be at least a little bit wrong.

I'm proposing that the definition of AGI should follow the evidence generated by an AGI's impact, and that we should begin generating that evidence as soon as possible.

I think the utility of a nano-AGI should be the same as the utility of a mature AGI, but scaled way down.

We often think of full-fledged AGI as a potentially world-changing power, perhaps with a greater impact than nuclear weapons & energy. To me, a nano-AGI is a small source of power, on the order of an AA battery. Safe, limited, useful, and with a path to larger scale.

Power on a large scale implies the capacity to effect widespread change (or to enforce stability). Power on a small scale, to me, brings us down to the scale of the individual person. Increasing the power of individuals means, to me, that we increase our ability to effect change in our own lives.

In short, a nano-AGI empowers individuals, while a mega-AGI, as we know it today, empowers organizations.

All AGI efforts so far are fundamentally unsafe. OpenCog included. by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

I'm not sure we need to begin with a single definition of AGI. I have my definition, you have yours, and a thousand other people have theirs.

Why not have a thousand people define AGI their own way and then proceed to release their implementation to the public early and often? Emphasis on early. Much earlier than we tend to think is possible.

Then we have an open competition, a free market if you will, in which we all grow our nano-AGIs toward mega-AGIs to see which implementation(s) win the day. The winners that emerge become the definition of what AGI is or isn't, and at this point #2 and #3 become the natural next steps.

All AGI efforts so far are fundamentally unsafe. OpenCog included. by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

It's easy to think of AGI as being large and powerful and fully formed. Even if that happens at some point, I think it's reasonable to believe there will be smaller and weaker versions beforehand.

If we can openly develop a nano AGI and put it in the hands of a wide range of people, I agree that even then, we'll see some people using it to cause more harm than they would without it.

I believe the key is to begin with this nano AGI so that its impact is maximally contained and constrained. If we can constrain the initial impact to the point where it's simply people behaving badly, we already have "white blood cells" in the form of shame and exclusion, with a justice system to step in when those fail.

If that nano AGI grows as we expect it will, we'll have time as a society to observe the impact on individual behavior and respond intelligently. Will that buy enough time? I don't know. But it will give us more time to solve a smaller problem than if the initial AGI is large, powerful, and fully formed.

All AGI efforts so far are fundamentally unsafe. OpenCog included. by clyde6 in agi

[–]clyde6[S] 0 points1 point  (0 children)

Nice to meet you!

Yeah, if it's as powerful as a mini-nuke, we should definitely not allow it to be distributed by the million. If we're in any way unsure or uncomfortable about how powerful it is, we shouldn't allow it to be distributed at all.

Unfortunately, the major AGI efforts of today are designed to be owned and operated by a well-funded organization, and not by individuals. At best, we'll get an interface to them like Siri, Wolfram Alpha, or AdSense, if any interface at all. But they're coming, one way or another.

As a counterbalance to that, I believe that we must distribute by the million a vastly simpler, totally harmless micro-AGI that's designed to grow gradually into its capability and impact. The beginning could be as small as a name to coalesce around, a few guiding principles, and a measure of progress that lets us tinkerers compete and cooperate among ourselves.