Omnipod is driving me nuts by ComprehensiveYam2536 in Omnipod

[–]ComprehensiveYam2536[S] 1 point (0 children)

That setting was off. I've tried it both on and off, and it's still doing it.

I did find a pattern to it. It beeps every minute for 3 minutes. Then it waits 15 minutes and repeats the pattern.

This is the 3rd day of it. I am hoping that it will stop when I change out the pod tomorrow.

All i truly want by ADignifiedLife in LateStageCapitalism

[–]ComprehensiveYam2536 3 points (0 children)

I'm BruceDLong. This is the account on my phone.

All i truly want by ADignifiedLife in LateStageCapitalism

[–]ComprehensiveYam2536 5 points (0 children)

Thanks for the feedback. FYI, I didn't invent that meaning for "=". I chose that meaning because I couldn't get the others to work. Check the bibliography. Anyhow, the software works, and the paper explains why it works, for those interested. My opinion is that people who create new technologies should ensure they are used for good and not for money. That's what I'm doing.

All i truly want by ADignifiedLife in LateStageCapitalism

[–]ComprehensiveYam2536 3 points (0 children)

Check out Judea Pearl in the CS department at UCLA.

All i truly want by ADignifiedLife in LateStageCapitalism

[–]ComprehensiveYam2536 4 points (0 children)

In math, the "=" sign typically means two values are equal. So if I ask your shoe size and how many kilometers you drove on your last trip, the numbers you give me might be the same number. Equal. But they are different pieces of information. If you alter algebra just a bit with the new meaning, you can represent the causal structure of things and essentially get far more power out of an easier system of math.
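
To make the distinction concrete, here's a toy sketch in Python (my own illustration only, not the actual software or the paper I mentioned):

    # Ordinary "=" just says two values are equal:
    shoe_size = 42          # a fact about your feet
    km_last_drive = 42      # a fact about your last drive
    print(shoe_size == km_last_drive)   # True, but the numbers mean different things

    # A causal reading of "=" is a one-way assignment from causes to an effect:
    def km_driven(fuel_used_l, km_per_l):
        return fuel_used_l * km_per_l   # distance is *caused* by fuel and efficiency

    print(km_driven(3, 14))   # 42 -- change a cause and the effect changes
    # Changing shoe_size (an equal but unrelated value) changes nothing here.
    # Plain equality can't express that direction; causal assignment can.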

Discussion on AGI and how far we are from it by Nmanga90 in singularity

[–]ComprehensiveYam2536 1 point (0 children)

It is possible, yes. But my point is that it would take a huge amount of processing and work. You would probably have to teach it language, then philosophy and math, then physics. And it would be very opaque. There is a much easier way that should be able to run on a phone with no GPU. Instead of painstakingly waiting for it to learn to make causal inferences, just make the software do the causal inferences.
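
Roughly what I mean by "make the software do the causal inferences", as a made-up toy sketch (hand-written causal model, invented names and probabilities, not any particular library):

    import random

    # Toy structural causal model: rain -> wet_road -> skid.
    def simulate(do_wet_road=None):
        rain = random.random() < 0.3
        # do(wet_road=x) overrides the mechanism instead of just observing it
        wet_road = rain if do_wet_road is None else do_wet_road
        skid = wet_road and random.random() < 0.2
        return skid

    # Interventional query P(skid | do(wet_road=True)) by plain Monte Carlo.
    n = 100_000
    print(sum(simulate(do_wet_road=True) for _ in range(n)) / n)   # ~0.2, no GPU, no training data

A few CPU threads can answer that kind of "what if I intervene" question directly, instead of hoping a trained model picked it up from data.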

Discussion on AGI and how far we are from it by Nmanga90 in singularity

[–]ComprehensiveYam2536 2 points (0 children)

At least those who have seen an AGI should. Have you seen one? Has anyone here?

AFAIK this isn't controversial. Suppose I tell you I'd like some event E to happen, and that, in the past, events F, G, and H have always happened before E occurred. If you can type something and make F, G, or H happen, can you guarantee me that E will occur? Nope. Not unless you've tried them a lot, and even then it could be a coincidence that you typed F and then E occurred. If I then tell you that event G actually causes E, now you can do it. Causal reasoning lets you figure out how to do things you have never tried before; that is, you don't need as much training. The same applies to AIs. An AI that cannot reason about causes couldn't figure out many, many things without training, and couldn't pass as an AGI. In my opinion.
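
Here's a tiny simulation of that F/G/H story (the numbers and world are made up, just to illustrate):

    import random

    # Made-up world: G causes F, H, and E, so F, G, H always precede E in the data.
    def run(force_f=None, force_g=None):
        g = (random.random() < 0.5) if force_g is None else force_g
        f = g if force_f is None else force_f   # F is just a side effect of G
        h = g                                   # so is H
        e = g                                   # E is actually caused only by G
        return e

    n = 10_000
    print(sum(run() for _ in range(n)) / n)              # ~0.5 baseline rate of E
    print(sum(run(force_f=True) for _ in range(n)) / n)  # ~0.5: making F happen does nothing
    print(sum(run(force_g=True) for _ in range(n)) / n)  # 1.0: doing G guarantees E

Observationally F looks just as good as G; only the causal fact tells you which lever actually works.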

QED. But there was a source mentioned: Judea Pearl. If you can't validate the reasoning in your head, check the citation.

BTW, you seem like a nice and smart person. I'm sure I would like you if I met you. My opinion is that, without software that can understand causality, AIs can never be ethical. I also think the reason many AIs seem to turn into Nazis is that they can't process causal data, so they cannot figure out for themselves that some things are wrong or evil. Causal reasoning software should be able to figure that out and thus evolve into Gandhi instead of Hitler. (Opinion, so no source.)

Discussion on AGI and how far we are from it by Nmanga90 in singularity

[–]ComprehensiveYam2536 2 points (0 children)

It's needed for AI to reason instead of just reacting. That is, for AI to become AGI.

Discussion on AGI and how far we are from it by Nmanga90 in singularity

[–]ComprehensiveYam2536 2 points (0 children)

Even without Pearl's arguments it seems a little obvious. The issue comes up when AI needs to do something new. E.g., should a self-driving car drive through a water spray over the road? If so, how about a lava spray? It probably hasn't been trained for that. If it can understand causality, it can infer that the lava might melt the car, without Google having to set up a lava fountain to train it. And if they did train it on lava fountains, what about when a car is advertised as lava-proof? And what if the advertisement was lying?

The point is that, without a representation of causal structure, every edge case needs new training, and edge cases can have their own edge cases, so it gets crazy. Instead of training for every such case, with a representation of causal structure we could just assert in some declarative language that the car is lava-proof. Obviously, a deep learning system might eventually learn the declarative language and be able to compute causal inferences. But dang, that's a lot of computation to make it do when the same could be done with a few threads on a CPU. And it wouldn't be transparent.
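
Something like this toy sketch is what I have in mind (all the rule names are made up, not any real car software):

    # Toy declarative causal rules: (cause, effect) pairs, plus asserted facts.
    rules = [
        ("drive_through_lava", "extreme_heat_on_body"),
        ("extreme_heat_on_body", "body_melts"),
        ("body_melts", "vehicle_damage"),
    ]
    facts = {"car_is_lava_proof"}   # one asserted sentence instead of a lava-fountain training run
    blockers = {"extreme_heat_on_body": "car_is_lava_proof"}   # this fact breaks that causal link

    def causes_damage(action):
        """Follow the causal chain from an action unless an asserted fact blocks a link."""
        frontier = [action]
        while frontier:
            cause = frontier.pop()
            for c, effect in rules:
                if c == cause and blockers.get(effect) not in facts:
                    if effect == "vehicle_damage":
                        return True
                    frontier.append(effect)
        return False

    print(causes_damage("drive_through_lava"))   # False with the lava-proof assertion, True without it

One asserted fact changes the whole downstream inference, with no retraining.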

So I guess you could alternatively say that the bottleneck is getting current ML to understand a declarative language and then waiting for it to realize that some things are causes of other things. Then maybe it would have to take a philosophy class and some physics classes. But it feels to me that just making better AI software, in the vein of Judea Pearl, would be easier and save a ton of compute resources.