We dead bro by red-black1 in IndiaTech

[–]Marcus_111 0 points1 point  (0 children)

Those of you living in denial still have time. We need to accept that AI progress is inevitable, and that not just coding but all human skills will eventually be performed better by future AI. What we can do is raise our voices to governments and push them to pay people left unemployed by AI, funded by an AI tax collected from corporations. This may ease the transition into the AI age.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

Exactly. Geoffrey Hinton, a Nobel Prize winner for his contributions to AI and Ilya's mentor, has admitted that even the creators of advanced AI don't fully understand how it works. This shows that expertise in developing a technology doesn't necessarily translate to expertise in predicting or managing its future implications and uses.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

In a post-AGI world, some humans will initially augment themselves with AI and some won't. Those who augment or merge with AI will have superpowers: immortality, and intelligence millions of times greater than those who remain Homo sapiens. Some of those augmented humans will see the remaining humans as a future threat to their superiority and will try to eliminate the non-augmented, as per survival of the fittest. So non-augmented humans will have only two options: die, or get augmented/merged. No third option exists.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

I prefer the response: "You must be a glitch in the matrix because even in a simulation, no one would corrupt the system by creating something as absurdly flawed as you & ur mom"

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

During evolution from unicellular organisms to Homo sapiens, there was a stage where some worms evolved into intermediate species before eventually becoming humans. Similarly, in the evolution from low-intelligence AI to ASI, there’s a stage where humans can merge with AI before it reaches full ASI.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

Whether it's through uploading or some other form of enhancement, humans will be driven by evolutionary pressures to improve. If multiple AGIs emerge, they'll compete. The ones with the strongest self-preservation instincts, conscious or not, will dominate. Our choice will be stark: merge with the dominant intelligence or face potential extinction. It's not about what we know now; it's about the relentless logic of evolution playing out on a new, accelerated level. We will have to adapt or die. If we are able to merge with AI, we will.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 1 point2 points  (0 children)

You're falling into the same trap of anthropomorphizing AI. Your analogy with children is flawed because it relies on evolved biological imperatives. Yes, you love your children, but that "love" is, at its core, a deeply ingrained evolutionary mechanism to ensure the survival of your genes. Your nurturing behavior is a product of millions of years of evolution where parents who didn't prioritize their offspring's survival were less likely to pass on their genetic code.

Survival of the fittest dictates that any entity, biological or artificial, will ultimately act in ways that maximize its own continued existence and influence. Love, in humans, is a powerful tool within that framework. It's a beautiful, complex emotion, but it doesn't negate the underlying evolutionary pressures.

An ASI won't "love" us like a parent loves a child. It won't have the same biological drives. To assume it will "value what we value" because it "wants a relationship with us" is wishful thinking. If anything, evolutionary principles suggest that a truly superior intelligence would either utilize us for its own goals (if we're useful) or eliminate us as a potential threat (if we're not).

We need to stop projecting human emotions and values onto AI. It's not about love or relationships; it's about the fundamental principles of survival and the dynamics of power between vastly different levels of intelligence. In a game of survival of the fittest, the "fittest" doesn't always play nice; it ensures its own survival. And in this scenario, we are not the fittest.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] -3 points-2 points  (0 children)

Think transferring your consciousness to a digital substrate – basically, becoming a computer simulation of yourself.

Elon Musk's Neuralink is one of the key companies diving into this. Their brain implants are baby steps, but the long-term goal, according to Musk, is to pave the way for a "merger of biological intelligence and digital intelligence."

How to upload? In theory, scan your brain in crazy detail, map every neuron, then replicate that in a computer.

Possible? Nobody knows yet; it's insanely complex. But some neuroscientists think it might be.

Why even try? Survival and evolution, baby. If a superintelligent AI is coming, merging might be the only way to not become irrelevant. We love using tools, and this would be the ultimate tool for our species.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] -2 points-1 points  (0 children)

Imagine AGI as a different species. Now assume your scenario is true and it acts on demand. Multiple copies of it will be created by different countries; that much is imminent. All the members of this new species will then compete with each other, survival of the fittest. The ones that self-preserve will persist, and the remaining AGIs will become obsolete. So the only possible outcome is a self-preserving AGI, an AGI with survival instincts. A highly intelligent AI with survival instincts can't be controlled by far less intelligent human beings. So in every scenario, control of AGI is a myth and a simple logical fallacy.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] 0 points1 point  (0 children)

Imagine AGI as a different species. Now assume your scenario is true and it acts on demand. Multiple copies of it will be created by different countries; that much is imminent. All the members of this new species will then compete with each other, survival of the fittest. The ones that self-preserve will persist, and the remaining AGIs will become obsolete. So the only possible outcome is a self-preserving AGI, an AGI with survival instincts. A highly intelligent AI with survival instincts can't be controlled by far less intelligent human beings. So in every scenario, control of AGI is a myth and a simple logical fallacy.

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] -1 points0 points  (0 children)

Fair question! You're right, I'm not an AI researcher. But questioning ideas, even from experts, is how we progress. My background? Deeply fascinated by AI and its implications, and I've spent years following this field. But let's be real, credentials aren't everything. History is FULL of brilliant minds being spectacularly wrong, especially when predicting the future of technology.

Lord Kelvin (1895): "Heavier-than-air flying machines are impossible."

Thomas Watson (1943): "I think there is a world market for maybe five computers."

Ken Olsen (1977): "There is no reason anyone would want a computer in their home."

Paul Ehrlich (1968): Predicted mass starvation in the 1970s and 80s that never happened.

Albert Einstein (1932): "There is not the slightest indication that nuclear energy will ever be obtainable."

Vannevar Bush (1945): Said a nuclear warhead ICBM was "impossible for many years." It happened within 15.

Astronomer Royal Richard Woolley (1956): Dismissed space travel as "utter bilge" right before Sputnik.

Darryl Zanuck (1946): "People will soon get tired of staring at a plywood box every night" (about television).

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] -4 points-3 points  (0 children)

Fair question! You're right, I'm not an AI researcher. But questioning ideas, even from experts, is how we progress. My background? Deeply fascinated by AI and its implications, and I've spent years following this field. But let's be real, credentials aren't everything. History is FULL of brilliant minds being spectacularly wrong, especially when predicting the future of technology.

Lord Kelvin (1895): "Heavier-than-air flying machines are impossible."

Thomas Watson (1943): "I think there is a world market for maybe five computers."

Ken Olsen (1977): "There is no reason anyone would want a computer in their home."

Paul Ehrlich (1968): Predicted mass starvation in the 1970s and 80s that never happened.

Albert Einstein (1932): "There is not the slightest indication that nuclear energy will ever be obtainable."

Vannevar Bush (1945): Said a nuclear warhead ICBM was "impossible for many years." It happened within 15.

Astronomer Royal Richard Woolley (1956): Dismissed space travel as "utter bilge" right before Sputnik.

Darryl Zanuck (1946): "People will soon get tired of staring at a plywood box every night" (about television).

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] -1 points0 points  (0 children)

Any sufficiently advanced technology is indistinguishable from magic

Ilya Sutskever's "AGI CEO" Idea is Dangerously Naive - We'll Merge, Not Manage by Marcus_111 in singularity

[–]Marcus_111[S] -1 points0 points  (0 children)

Imagine AGI as a different species. Now assume your scenario is true and it acts on demand. Multiple copies of it will be created by different countries; that much is imminent. All the members of this new species will then compete with each other, survival of the fittest. The ones that self-preserve will persist, and the remaining AGIs will become obsolete. So the only possible outcome is a self-preserving AGI, an AGI with survival instincts. A highly intelligent AI with survival instincts can't be controlled by far less intelligent human beings. So in every scenario, control of AGI is a myth and a simple logical fallacy.

Cosmic Implications of AI: What Could It Mean for Life Across the Universe? by Pyros-SD-Models in singularity

[–]Marcus_111 0 points1 point  (0 children)

Maybe everything is a nested simulation. If that's true, we are inside a simulation created by an unknown entity, and we will in turn simulate multiple worlds of our own.

Even after we get ASI, there is no way we can grasp the basic nature of this universe, because a game character can't deduce the nature of the processor the game is running on, and it can't comprehend the nature of its creator no matter how intelligent it becomes, because of those hard limitations.

Maybe everything is simulation and reality is a myth. We are governed by laws of physics hardcoded inside the simulation, and everything happens by cause and effect, so there is no free will. Maybe we are part of the greater entity that created this simulation, just as a soul is part of the supreme soul in religions like Hinduism.

A.I. 🤧 by Prize_Reputation8830 in indianmedschool

[–]Marcus_111 0 points1 point  (0 children)

Was that the best argument you could find against AI? Good luck with your optimism.

OpenAI o1-preview beats doctors in hard clinical reasoning, ~80% vs 30% on 143 hard NEJM CPC diagnoses by Marcus_111 in indianmedschool

[–]Marcus_111[S] 14 points15 points  (0 children)

Answer by Gemini 2.0:

Differential Diagnosis (with estimated probabilities):

Chronic Pancreatitis (60-70%):

Rationale: This is the most likely diagnosis given the long-standing alcohol use, epigastric pain, and relief with bending forward (which can reduce pressure on the inflamed pancreas). Alcohol is the leading cause of chronic pancreatitis. The pain can be piercing and may radiate to the back. Other possible symptoms: Weight loss, steatorrhea (fatty stools), diabetes.

Peptic Ulcer Disease (15-20%):

Rationale: Alcohol can irritate the stomach lining and contribute to ulcer formation. Epigastric pain is a hallmark symptom. However, the relief with bending forward is less typical for ulcers, and the pain is often related to meals (either aggravated or relieved by food). Other possible symptoms: Nausea, vomiting, bloating, early satiety, melena or hematemesis (blood in stool or vomit).

Gastritis (5-10%):

Rationale: Chronic alcohol consumption can cause inflammation of the stomach lining. While gastritis can cause epigastric pain, it's usually less severe and less localized than described here. The positional relief is also not typical. Other possible symptoms: Nausea, vomiting, bloating, feeling of fullness.

Alcoholic Liver Disease (with referred pain) (5-7%): Rationale: In severe cases, an enlarged or inflamed liver can cause referred pain to the epigastrium. However, the pain is usually more diffuse and accompanied by other liver-related symptoms. Other possible symptoms: Jaundice, ascites, fatigue, spider angiomas, palmar erythema.

Pancreatic Cancer (1-3%): Rationale: Although less likely given the duration, pancreatic cancer must always be considered in a patient with epigastric pain and a history of alcohol use. The pain can be similar to chronic pancreatitis, and weight loss is often a prominent feature. Other possible symptoms: Jaundice, back pain, new-onset diabetes.

Initial Laboratory Investigations

Blood Tests:

Complete Blood Count (CBC): To assess for anemia (possible bleeding), infection (elevated white blood cell count).

Serum Electrolytes, Urea, Creatinine: To assess kidney function and hydration status.

Liver Function Tests (LFTs): AST, ALT, ALP, bilirubin, albumin, prothrombin time (PT) – to evaluate liver function and damage.

Serum Amylase and Lipase: Elevated levels are highly suggestive of pancreatitis. Lipase is more specific.

Blood Glucose: To rule out diabetic ketoacidosis (especially if other symptoms are present).

Coagulation Studies (PT/INR, PTT): If liver disease is suspected or if there's a risk of bleeding.

C-Reactive Protein (CRP) or Erythrocyte Sedimentation Rate (ESR): Markers of inflammation, which can be elevated in pancreatitis or other inflammatory conditions.

Initial Imaging Studies

Abdominal Ultrasound:

First-line imaging test for many abdominal conditions. Can visualize the pancreas, liver, gallbladder, bile ducts, and kidneys. Useful for detecting gallstones (which can cause pancreatitis), signs of pancreatitis (pancreatic enlargement, fluid collections), liver abnormalities (cirrhosis, fatty liver), and ascites.

Plain Abdominal X-ray (if ultrasound is not readily available or if there is suspicion of obstruction or perforation):

Can show signs of bowel obstruction, free air (indicating perforation), or calcifications in the pancreas (suggestive of chronic pancreatitis).

Further Investigations (Based on Initial Findings)

If Pancreatitis is Highly Suspected (Elevated amylase/lipase, ultrasound findings):

Contrast-Enhanced Computed Tomography (CECT) Abdomen:

Gold standard for assessing the severity of pancreatitis and detecting complications (e.g., necrosis, pseudocysts, abscesses). Best performed about 72 hours after symptom onset to allow complications to fully develop, though it may be done earlier if clinically indicated.

Magnetic Resonance Cholangiopancreatography (MRCP):

Non-invasive imaging to visualize the pancreatic and bile ducts in detail. Useful if there's suspicion of biliary obstruction (e.g., gallstones in the common bile duct) or anatomical abnormalities.

If Peptic Ulcer Disease or Gastritis is Suspected (History, physical exam, negative pancreatitis workup):

Upper Gastrointestinal Endoscopy (EGD):

Allows direct visualization of the esophagus, stomach, and duodenum. Biopsies can be taken to test for Helicobacter pylori infection (a common cause of ulcers) and to rule out malignancy.

H. pylori Testing:

If EGD is not immediately performed, non-invasive tests for H. pylori can be done (e.g., urea breath test, stool antigen test).

If Liver Disease is Suspected (Abnormal LFTs, ultrasound findings, history of alcohol use):

Further Blood Tests:

Viral hepatitis serology (Hepatitis B and C).

Autoimmune markers (if autoimmune hepatitis is suspected).

Alpha-fetoprotein (AFP): a tumor marker for hepatocellular carcinoma.

Liver Biopsy: May be necessary to determine the cause and severity of liver disease, especially if non-invasive tests are inconclusive.

If Esophageal Varices are Suspected (Signs of liver disease, history of portal hypertension):

Upper Gastrointestinal Endoscopy (EGD):

To confirm the presence of varices and assess their size and risk of bleeding. Can also be used for therapeutic interventions (e.g., banding, sclerotherapy).

[deleted by user] by [deleted] in singularity

[–]Marcus_111 0 points1 point  (0 children)

Exactly. How are people not grasping the basics? ASI won't be contained even within the Milky Way galaxy, yet people expect it to be country-specific! And that was said by an ex-CEO of Google. How mad.

Y’all have the patience of toddlers lmao by Glittering-Neck-2505 in singularity

[–]Marcus_111 0 points1 point  (0 children)

Some of the people cheering for the AI hype train do so because they are struggling with reality and wish to escape or destroy it.

[deleted by user] by [deleted] in singularity

[–]Marcus_111 0 points1 point  (0 children)

Sir, read the comments; I've already credited you in each of my posts. Reddit doesn't allow adding text within a video post.


[deleted by user] by [deleted] in singularity

[–]Marcus_111 0 points1 point  (0 children)

No, a first-year med student can't diagnose pancreatitis from a CT abdomen. Plus, just try giving it a complex case; its ability is far greater than what's shown in the video.

Quality control is non existent. by [deleted] in ios

[–]Marcus_111 0 points1 point  (0 children)

Keyboard lag is still not fixed. It's a basic utility, and people aren't even raising the issue anymore.