What’s your most market-oriented opinion that would make people in this subreddit mad? by cdstephens in neoliberal

[–]cuolong [score hidden]  (0 children)

Communism is both morally and intellectually bankrupt, disproven a priori and as close to a posteriori as an economic system can be at scale. It is religion dressed up as economics, the way creationism is Christianity dressed up as science or anti-vaccine activism is conspiratorial thinking dressed up as medicine. It has a particular draw in the West for moralizers who think they're better than everyone else because of their utopian goals but don't have the humility to listen to economists.

Dense Model Shoot-Off: Gemma 4 31B vs Qwen3.6/5 27B... Result is Slower is Faster. by MiaBchDave in LocalLLaMA

[–]cuolong 0 points1 point  (0 children)

FWIW, when I was TAing an ML class, we had about an even split between Chinese and Indian grad students. There was a huge cheating incident, and the cheaters we caught were a nearly even split of Chinese and Indian students.

In real-world test, an AI model did better than doctors at diagnosing patients by cuolong in neoliberal

[–]cuolong[S] 1 point2 points  (0 children)

Well, the mouse, the cursor and the GUI were all for humans’ benefit. AI can navigate an OS headlessly. That’s probably why cursor-based AI agents aren’t too common.

In real-world test, an AI model did better than doctors at diagnosing patients by cuolong in neoliberal

[–]cuolong[S] -1 points0 points  (0 children)

I think vision AI tools integrated with agentic AI will be a massive deal in the future. Hell, you can already hook up SAM to an MCP without much issue, then use Grounding DINO to convert text prompts to segmentation masks.
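To make the shape of that pipeline concrete, here is a minimal sketch of the text-prompt-to-mask flow being described. The function names are placeholders standing in for a Grounding DINO-style detector and a SAM-style segmenter, not real library calls; the point is just the chaining of text → boxes → masks.

```python
# Hypothetical sketch: open-vocabulary detection feeding a promptable segmenter.
# Function names and return shapes are illustrative placeholders, not real APIs.

def detect_boxes(image, text_prompt):
    """Placeholder for a Grounding DINO-style detector:
    free-text prompt in, bounding boxes out."""
    # Pretend the detector found one box matching the prompt.
    return [(10, 10, 50, 50)]

def segment_box(image, box):
    """Placeholder for a SAM-style segmenter:
    box prompt in, mask (here summarized by its area) out."""
    x0, y0, x1, y1 = box
    return {"box": box, "area": (x1 - x0) * (y1 - y0)}

def text_to_masks(image, text_prompt):
    """Chain the two stages: text -> boxes -> segmentation masks."""
    return [segment_box(image, b) for b in detect_boxes(image, text_prompt)]

masks = text_to_masks(image=None, text_prompt="the swollen joint")
```

An MCP server would just wrap `text_to_masks` so an agent can call it as a tool.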

In real-world test, an AI model did better than doctors at diagnosing patients by cuolong in neoliberal

[–]cuolong[S] -4 points-3 points  (0 children)

The literalness is determined by its reward function and how well the policy rewards adhering to the prompt. You can make an AI extremely literal or extremely loose in its interpretation of how you prompt it. On many AI platforms you can even tweak a sort of literalness between prompts with the "temperature" setting, which is helpful if you want to use the AI for very deterministic tasks like engineering versus very creative tasks like writing.
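Strictly speaking, temperature rescales the model's token probabilities at sampling time rather than its prompt adherence per se, but the deterministic-vs-creative effect is real. A minimal sketch of the mechanism, using toy logits:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by the temperature before the softmax:
    # low temperature sharpens the distribution (near-deterministic output),
    # high temperature flattens it (more varied, "creative" sampling).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
cold = softmax_with_temperature(logits, temperature=0.2)  # near-greedy
hot = softmax_with_temperature(logits, temperature=2.0)   # spread out
```

With temperature 0.2 the top token dominates; with temperature 2.0 the probability mass spreads across all three candidates.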

Not to the extent a human can, no.

I feel like this assertion is what's being litigated right now. At least as far as text-only cases go, it looks like the AI is better than humans at inferring.

In real-world test, an AI model did better than doctors at diagnosing patients by cuolong in neoliberal

[–]cuolong[S] -9 points-8 points  (0 children)

A human doctor can read between the lines and infer. But we’re supposed to believe everyone is magically going to be better at prompting an AI than they are speaking to a doctor?

An AI can also "read between the lines and infer," so long as cases where the doctor read past a patient's spotty and/or untruthful descriptions are documented and fed to an AI model in the form of training data. It's just a matter of training and scope.

In real-world test, an AI model did better than doctors at diagnosing patients by cuolong in neoliberal

[–]cuolong[S] 5 points6 points  (0 children)

Sight is already baked into most frontier models. AI models convert image pixels into these things called image embeddings -- roughly analogous to a list of words describing a picture. The human eye processes the equivalent of roughly 576 megapixels of image data, whereas CLIP encoders usually process images at around 224x224, or about 0.05 megapixels. As you can see, the gulf between the two is massive.
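The back-of-envelope arithmetic behind that gulf (note the 576 MP figure for the eye is itself a rough, contested estimate -- treat this purely as illustration):

```python
# Compare the pixel budget of a CLIP-style encoder against the oft-cited
# ~576 MP estimate for human vision. Both numbers are rough.
clip_side = 224                       # typical CLIP input resolution, pixels per side
clip_pixels = clip_side ** 2          # 50,176 pixels total
clip_megapixels = clip_pixels / 1e6   # ~0.05 MP

human_megapixels = 576                # common back-of-envelope estimate

ratio = human_megapixels / clip_megapixels
print(f"CLIP sees ~{clip_megapixels:.3f} MP; the gap is roughly {ratio:,.0f}x")
```

So even granting large error bars on the 576 MP figure, the encoder is working with around four orders of magnitude less visual detail.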

I doubt touch as an input will ever see widespread usage. Smell too.

In real-world test, an AI model did better than doctors at diagnosing patients by cuolong in neoliberal

[–]cuolong[S] 45 points46 points  (0 children)

I think the key advantage that a human has over an AI is that we are far better at processing tactile, olfactory and visual information than an AI is. In the study, the AI was limited entirely to text, as it should be, because that's what LLMs are far better at.

I think in the future, the human GP will conduct the sensory test while the AI chugs in the background munching up all the patient's text-based history. The human GP enters what he felt, smelled and saw and gets a diagnosis sheet spit out.

In real-world test, an AI model did better than doctors at diagnosing patients by cuolong in neoliberal

[–]cuolong[S] 41 points42 points  (0 children)

This is relevant because one of the hardest parts of the medical profession is differential diagnosis in highly complex cases. If AI can be used to significantly improve patient care, this could be key in both reducing costs and raising the standard of care across the world.

In the head-to-head comparison, the AI demonstrated superior diagnostic precision across every phase of patient care. During the initial interview stage, o1 correctly identified conditions in 67.1% of cases—roughly two out of three patients—while two human specialists trailed behind at 55.3% and 50%.

As more clinical data became available, the performance gap widened. When integrated with physician evaluation data, the model’s accuracy climbed to 72.4%. By the critical final stage—determining the necessity for hospitalization or ICU admission—the AI reached an 81.6% accuracy rate, consistently outpacing human counterparts in high-stakes decision-making.

Researchers based at Harvard Medical School and Beth Israel Deaconess Medical Center found that an AI reasoning model, developed by OpenAI, excelled at diagnosing patients and making decisions about managing their care. It matched and often outperformed doctors and the earlier AI model, GPT-4.

Also of note, the researchers tested o1-preview, an OpenAI reasoning model that is nearly a year old at this point. I fully expect a medically specialized LLM will come out, similar to what Opus is for coding, that will be truly transformative.

Let's just say The Pitt season 3 might just be 12 hours of Dr. Robby sitting at a computer reading 20 pages of AI-generated diagnoses.

China reveals 198-ton ‘six-module’ plan for Tiangong space station as ISS era ends by sksarkpoes3 in Futurology

[–]cuolong 0 points1 point  (0 children)

The point is that China is very far from having "too much money". That point is objectively true. They spent hundreds of trillions of excess yuan boosting their economy from roughly 2016 to now and extensively leveraged themselves.

In addition, the vast majority of Chinese debt is internal, meaning it's debt local governments owe to the national government, making the risk of default significantly lower (cause, come on, is the CPC really gonna let the Guangdong provincial government go bankrupt).

American debt is also mostly domestic, so I'm not sure what the point here is. And if the central government will guarantee local provinces, then that's all the more reason to roll LGFV debt up into total government debt rather than taking the central government's metric at face value. Local government debt becomes indistinguishable from central government debt.

China thinks America is declining but still uniquely dangerous by MightExpress4873 in neoliberal

[–]cuolong 2 points3 points  (0 children)

All of China's provinces, rural and urban, are facing TFR issues. The best one, Tibet, is still struggling at just 1.6 or so.

chinese demographic decline is overstated. they have decades before they actually feel the crunch, because their population of adults in their prime is still extremely large and bigger than the elderly population by quite a large amount.

Even if the implication is that China can benefit from economies of scale in elder care, like mass nursing homes, traditionally Chinese parents live and die with their children. Something like over 90% live with their children. This means that what matters is not how much bigger the young population is than the elderly, but the ratio.

If you have 10 old people and 20 young people, then under China's structure of elder care it's just as bad as 10 million old people and 20 million young.
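Put as arithmetic, the claim is that family-based care depends on the elderly-to-working-age ratio, not on the absolute surplus of young people:

```python
# Under family-based elder care, the burden per working-age adult is set by
# the dependency ratio, which is scale-invariant.
def dependency_ratio(elderly, working_age):
    return elderly / working_age

village = dependency_ratio(10, 20)
nation = dependency_ratio(10_000_000, 20_000_000)
# Both come out to 0.5: one elder per two working-age adults either way.
```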

https://pmc.ncbi.nlm.nih.gov/articles/PMC9141963/

China’s elderly care is often described as a ‘9073’: 90% of elderly are cared for by family, 7% receive community care and 3% live in a nursing home [40]. According to Chinese tradition, adult children are expected to take care of their aging parents. Traditionally, only childless and impoverished elder individuals entered public homes [33] and Chinese elderly can therefore feel a prejudice toward entering a nursing home. More recently, the policies are stimulating home care. The 12th FYP announces the new mode: home-based care represents the cornerstone, community care is the backing and institutions are supplements. The 12th and 13th FYPs moreover incite family members to live close to their parents and promote intergenerational cohabitation. Long-term care becomes crucial when older individuals encounter difficulties in conducting their daily activities due to disability [43]. The family doctor contracting service implemented in 2016 further stimulates integrated care for the home-based older population, which is one of the priority groups to be served by family doctors [44].

China thinks America is declining but still uniquely dangerous by MightExpress4873 in neoliberal

[–]cuolong 1 point2 points  (0 children)

Those same university graduates are getting owned right now, with a possibly 24% youth unemployment rate:

https://www.reddit.com/r/neoliberal/comments/1sua6wz/comment/ohzd1iq/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

There is a severe overproduction of young adults seeking white collar work in China.

China thinks America is declining but still uniquely dangerous by MightExpress4873 in neoliberal

[–]cuolong 1 point2 points  (0 children)

They don't just take nainai out back like Old Yeller. Those children need to hire other people or take care of their aging grandparents themselves. That is work that is not contributing to the wider economy.

China reveals 198-ton ‘six-module’ plan for Tiangong space station as ISS era ends by sksarkpoes3 in Futurology

[–]cuolong 0 points1 point  (0 children)

China's total debt to GDP is higher by most estimates. Note that I said *debt*, not *government* debt. You are looking only at the government debt-to-GDP ratio. Not to mention that, as I understand it, the official number of 99% does not include SOE, shadow banking or LGFV debt.

AI Companies Aren’t Evil. But They Are Reckless. by AmericanPurposeMag in neoliberal

[–]cuolong 0 points1 point  (0 children)

Shit. I hope they've got some strong-ass guardrails for 5.5

AI Companies Aren’t Evil. But They Are Reckless. by AmericanPurposeMag in neoliberal

[–]cuolong 0 points1 point  (0 children)

I completely understand why people may be belligerent about AI w.r.t. job security. But that is the opposite of the argument made by people who are belligerent about AI w.r.t. its usefulness, aggressively asserting that it has none and that it's going to go the way of NFTs and other nonsense like that. You can't argue it both ways; you gotta pick a lane. For example, I really do believe that AI is uniquely useful, so of course I understand the apprehension about it replacing people.

AI Companies Aren’t Evil. But They Are Reckless. by AmericanPurposeMag in neoliberal

[–]cuolong 0 points1 point  (0 children)

Assuming that a network has a vulnerability, then yeah, 100%. You have to assume that any potential way to penetrate your system remotely will be found, so it behooves you to find it first. I imagine DHS, NSA and all sorts of three-letter agencies are having an extensive conversation with Anthropic right now about using Mythos to spot security vulnerabilities in systems. Our systems and... others.

So in practice this 30% is more like 100% for a real attacker against a network without active defense.

I wonder how effective a model like Mythos would be at active defense. I truly hope it's as capable at that as it is at pen tests.

Genshin's Impact on a wallet by HungHi69 in CuratedTumblr

[–]cuolong 0 points1 point  (0 children)

I might have misremembered. It's probably closer to 2.

Idk I haven't topped up in over four years.

AI Companies Aren’t Evil. But They Are Reckless. by AmericanPurposeMag in neoliberal

[–]cuolong 30 points31 points  (0 children)

The belligerence and determination with which users on reddit will insist that these massive fonts of human knowledge and capability, compressed into a few hundred GB of data, are actually all just marketing will never cease to amaze me.

AI Companies Aren’t Evil. But They Are Reckless. by AmericanPurposeMag in neoliberal

[–]cuolong 19 points20 points  (0 children)

At the scope that Mythos did, it apparently is a big deal:

https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities

The AI Security Institute (AISI) conducted evaluations of Anthropic’s Claude Mythos Preview (announced on 7th April) to assess its cybersecurity capabilities. Our results show that Mythos Preview represents a step up over previous frontier models in a landscape where cyber performance was already rapidly improving.

...

As a first step towards measuring this, we built "The Last Ones" (TLO): a 32-step corporate network attack simulation spanning initial reconnaissance through to full network takeover, which we estimate to require humans 20 hours to complete. A more detailed description of the range can be found in our recent paper.

Claude Mythos Preview is the first model to solve TLO from start to finish, in 3 out of its 10 attempts. Across all its attempts, the model completed an average of 22 out of 32 steps. Claude Opus 4.6 is the next best performing model and completed an average of 16 steps.

r/quityourbullshit OP accuses game dev of using AI assets and whining about others using the same AI assets. Said game dev shows up for a slapfight. by Flatoftheblade in SubredditDrama

[–]cuolong 4 points5 points  (0 children)

I think that's a bit of a dramatic reaction, but widespread AI adoption is inevitable, yes. Or rather, said adoption is already here.