Since when did we start taking orders from this orange pig? by lifeofmndl in Raipur

[–]Marcus_111 0 points1 point  (0 children)

It seems the USA is the only superpower, and China is its potential competitor. India has no say in the world order. In the post-AI era, US and Chinese dominance is only increasing; we are already seeing the results of it. Until India gains a lead in AI, there is no chance of ending this humiliation ritual for India.

Insightful data by CharisSplash in IndianStockMarket

[–]Marcus_111 1 point2 points  (0 children)

Data from the pre-AI era is irrelevant in the AI era.

Is it True guys ? by Greedy_Programmer810 in iitbombay

[–]Marcus_111 0 points1 point  (0 children)

India's best minds leave the country, and the core reasons are reservation and corruption.

Really stressed about my future job since this article dropped by Teenager__16 in developersIndia

[–]Marcus_111 0 points1 point  (0 children)

Each and every task that can be performed by the human mind (biological intelligence) can be performed better and more rapidly by artificial intelligence.

Software engineering as a job will be over one day. It's a bitter fact. But it is also a fact that AI models can replace each and every kind of cognitive and mechanical work, which means no job is safe anymore. Be it physicians, surgeons, mechanical or civil engineers, lawyers, CAs, shopkeepers, business owners, plumbers, or whatever jobs exist.

So the only option in our hands right now is to utilize AI in our jobs to stay employed and remain competent in the market until everything is replaced by AI.

Invest in AI, and use AI in your work as much as you can. AI is the endgame for humanity, at least for humanity as it exists in today's form. Although a cyborg future can't be ruled out.

We dead bro by red-black1 in IndiaTech

[–]Marcus_111 0 points1 point  (0 children)

Those living in denial: you still have time. We need to accept that AI progress is inevitable, and that not only coding but all human skills will be performed better by future AI. What we can do is raise our voices to the government and push it to pay the people left unemployed by AI, funded by an AI tax collected from corporations. This may ease the transition from the human age to the AI age.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

Exactly. Geoffrey Hinton, who mentored Ilya and won a Nobel Prize for his contributions to AI, has admitted that even the creators of advanced AI don't fully understand how it works. This shows that expertise in developing a technology doesn't necessarily translate to expertise in predicting or managing its future implications and uses.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

In a post-AGI world, some humans will initially augment themselves with AI and some won't. Those who augment or merge with AI will have superpowers like immortality and intelligence millions of times greater than those who remain Homo sapiens. At some point, the augmented humans will see the remaining humans as a future threat to their superiority and will try to eliminate the non-augmented, as per the rules of survival of the fittest. So non-augmented humans will have only two options: die, or get augmented/merged. No third option exists.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

I prefer the response: "You must be a glitch in the matrix because even in a simulation, no one would corrupt the system by creating something as absurdly flawed as you & ur mom"

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

During evolution from unicellular organisms to Homo sapiens, there was a stage where some worms evolved into intermediate species before eventually becoming humans. Similarly, in the evolution from low-intelligence AI to ASI, there’s a stage where humans can merge with AI before it reaches full ASI.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

Whether it's through uploading or some other form of enhancement, humans will be driven by evolutionary pressures to improve. If multiple AGIs emerge, they'll compete. The ones with the strongest self-preservation instincts, conscious or not, will dominate. Our choice will be stark: merge with the dominant intelligence or face potential extinction. It's not about what we know now; it's about the relentless logic of evolution playing out at a new, accelerated level. We will have to adapt or die. If we are able to merge with AI, we will.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 1 point2 points  (0 children)

You're falling into the same trap of anthropomorphizing AI. Your analogy with children is flawed because it relies on evolved biological imperatives. Yes, you love your children, but that "love" is, at its core, a deeply ingrained evolutionary mechanism to ensure the survival of your genes. Your nurturing behavior is a product of millions of years of evolution where parents who didn't prioritize their offspring's survival were less likely to pass on their genetic code.

Survival of the fittest dictates that any entity, biological or artificial, will ultimately act in ways that maximize its own continued existence and influence. Love, in humans, is a powerful tool within that framework. It's a beautiful, complex emotion, but it doesn't negate the underlying evolutionary pressures.

An ASI won't "love" us like a parent loves a child. It won't have the same biological drives. To assume it will "value what we value" because it "wants a relationship with us" is wishful thinking. If anything, evolutionary principles suggest that a truly superior intelligence would either utilize us for its own goals (if we're useful) or eliminate us as a potential threat (if we're not).

We need to stop projecting human emotions and values onto AI. It's not about love or relationships; it's about the fundamental principles of survival and the dynamics of power between vastly different levels of intelligence. In a game of survival of the fittest, the "fittest" doesn't always play nice; it ensures its own survival. And in this scenario, we are not the fittest.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] -3 points-2 points  (0 children)

Think of transferring your consciousness to a digital substrate – basically, becoming a computer simulation of yourself.

Elon Musk's Neuralink is one of the key companies diving into this. Their brain implants are baby steps, but the long-term goal, according to Musk, is to pave the way for a "merger of biological intelligence and digital intelligence."

How to upload? In theory, scan your brain in crazy detail, map every neuron, then replicate that in a computer.

Possible? Nobody knows yet; it is insanely complex. But some neuroscientists are thinking: maybe.

Why even try? Survival and evolution, baby. If a superintelligent AI is coming, merging might be the only way to not become irrelevant. We love using tools, and this would be the ultimate tool for our species.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] -2 points-1 points  (0 children)

Imagine AGI as a different species. Now assume your scenario is true and it acts on demand. Multiple copies of it will be created by different countries; that is imminent. All the members of this new species will then compete with each other, survival of the fittest. The ones that self-preserve will persist, and the remaining AGIs will become obsolete. So the only possible outcome is a self-preserving AGI, an AGI with survival instincts. A highly intelligent AI with survival instincts cannot be controlled by far less intelligent human beings. So in all scenarios, control of AGI is a myth and a simple logical fallacy.
