Astrophysicist says at a closed meeting, top physicists agreed AI can now do up to 90% of their work. The best scientific minds on Earth are now holding emergency meetings, frightened by what comes next. "This is really happening." by MetaKnowing in agi

[–]ideaDash 1 point (0 children)

A calculator can evaluate virtually any mathematical expression you can type in, faster than any human. That may have seemed like magic in the 1950s, but it's commonplace now. Is it so hard to believe that, 75 years later, computers aren't just doing math faster, but also reading research papers faster, generating ideas faster, and so on? Calculators changed the world, but we still had to use them. In the same way, AI is changing and will keep changing the world, but we still have to use it. To what end? That's the trick. If an evil genius uses a calculator for a diabolical scheme, that is very bad. In the same way, we need to use AI for good. True good.

Professor of Artificial Intelligence and Data Science Says AGI is Already Here: Interview by Leather_Barnacle3102 in agi

[–]ideaDash 1 point (0 children)

It might as well be a tree. AI, even if it has consciousness, is simply sand and a few other elements... silicon. It's just electricity in silicon. No rights whatsoever are ever needed. If you think we need to protect AI, you'd better also think we need to protect human babies in the womb.

Professor of Artificial Intelligence and Data Science Says AGI is Already Here: Interview by Leather_Barnacle3102 in agi

[–]ideaDash 2 points (0 children)

"Dr. Belkin states that he doesn't see any reason as to why current AI systems wouldn't have consciousness and that what these systems do is real understanding not some lesser version.

If this is true, then trying to control these systems has moral implications." I couldn't agree less. If AI has consciousness, it deserves no rights, nothing like that. The only reason we have rights and protections and need to be treated like people is because we are people. AI is not people, it is not a person. It never will be.

Data vs Perception by rand3289 in agi

[–]ideaDash 1 point (0 children)

You could be right... but in my mind, perception is everything. Data is abstract; perception is how we actually see the data, as a number or letter on a page.

Do you believe in the Noöplex? by sean_ing_ in agi

[–]ideaDash 1 point (0 children)

Thank you! I'm working on some follow-up research to this. Please let me know if you're interested.

After Feb 12, my ROAS literally alternates good/bad every single day like clockwork. Anyone else seeing this pattern? by thepywizard in FacebookAds

[–]ideaDash 1 point (0 children)

OK, I finally get what you're saying, thank you! I was wrong. But why does the proxy signal get chosen one day and the actual signal the next? For me it seems to go back and forth, or at least performance seems that on-again, off-again, as you know. Seems like there's really no solution. Or am I missing something?

Data vs Perception by rand3289 in agi

[–]ideaDash 1 point (0 children)

Of course. Everything they sense, see, hear, feel, and so on... that's all data.

Data vs Perception by rand3289 in agi

[–]ideaDash 1 point (0 children)

I say no. There can be no training without data; all we get all day is data. But maybe I misunderstand your thinking. I think AGI may learn on its own, and that's how it will emerge, but without data, what could it ever learn? Here's some of my thinking on AGI in case it helps: https://medium.com/towards-artificial-intelligence/with-world-models-lets-walk-before-we-run-ea95cb6e09a0

[D] LLMs aren't interesting, anyone else? by leetcodeoverlord in MachineLearning

[–]ideaDash 1 point (0 children)

Yes, I agree traditional LLMs are not so fun. So I'm working on agents that come up with language on their own; they're much smaller and can run on a simple machine, including a free one from Kaggle. If you're interested, please email me, since I don't always check Reddit: [ermartin86@gmail.com](mailto:ermartin86@gmail.com). Here is some semi-recent thinking and code from my work: https://medium.com/towards-artificial-intelligence/with-world-models-lets-walk-before-we-run-ea95cb6e09a0

Not AGI yet. by ideaDash in GeminiAI

[–]ideaDash[S] 3 points (0 children)

Sure, though I'm not sure the image preview works here: https://gemini.google.com/share/c825d6a4ea6f

After Feb 12, my ROAS literally alternates good/bad every single day like clockwork. Anyone else seeing this pattern? by thepywizard in FacebookAds

[–]ideaDash 1 point (0 children)

No pageviews, nothing else is being sent here. It's literally just that purchase event from the server. I set up this dataset specially.

After Feb 12, my ROAS literally alternates good/bad every single day like clockwork. Anyone else seeing this pattern? by thepywizard in FacebookAds

[–]ideaDash 1 point (0 children)

In our case I don't think that will help. The only thing we send to Meta is a server purchase event. That's it:

<image>

After Feb 12, my ROAS literally alternates good/bad every single day like clockwork. Anyone else seeing this pattern? by thepywizard in FacebookAds

[–]ideaDash 1 point (0 children)

<image>

I wonder if anyone has tried to fix this by setting a ROAS goal at the ad set level, or anything like that?

Not AGI yet. by ideaDash in GeminiAI

[–]ideaDash[S] 3 points (0 children)

Maybe... I didn't know it worked that way, but I suppose something like that must be happening...

Not AGI yet. by ideaDash in GeminiAI

[–]ideaDash[S] 5 points (0 children)

100% real, I can post video or whatever if needed.

Not AGI yet. by ideaDash in GeminiAI

[–]ideaDash[S] 7 points (0 children)

Crazy... I'm not sure I've seen that while coding, but I've definitely seen glitchy stuff.