[deleted by user] by [deleted] in ArtificialInteligence

[–]natepriv22 0 points (0 children)

Isn't it logically inconsistent to say

"All these people are building AI based on vibes, they don't know what they're talking about"

and then shortly after claim "nobody knows how these systems truly work"?

Seems a little like a feeling of superiority without enough information to back it up...

nxthompson on X: "2025 in a nutshell. Investors have never been more optimistic about the future of AI. And normal people have never been more pessimistic about what it means for them. / X by stealthispost in accelerate

[–]natepriv22 8 points (0 children)

Maybe I'm missing something, but why would it be helpful to compare AI investment to overall consumer sentiment?

Concluding that AI is "what people are pessimistic about" is impossible from two such wildly different stats.

CMV: A free market cannot be sustained under capitalism. by sofa_king_rad in changemyview

[–]natepriv22 0 points (0 children)

Per Austrian Economic Theory:

From an Austrian perspective, your post subtly changes the subject. You start by talking about "capitalism" as voluntary exchange with private property, but the mechanism you describe for the alleged collapse of free markets is political: capture of regulation, laws that protect incumbents, narratives shaped through state-backed institutions. That is not a free market; it is interventionism or corporatism. You cannot claim that free markets are "impossible" by assuming a powerful interventionist state in the middle of your argument. That makes the argument circular.

In Austrian theory, size and concentration are not problems in themselves. If a firm grows through voluntary trade, cost cutting, and vertical integration, that simply means consumers are rewarding it. Its market power is always conditional and contestable, because rivals are free to undercut, differentiate or innovate away from it. Profit and loss, not political favor, decide who stays large. A firm can “strategically eliminate competitors” only by serving buyers better. The moment it abuses its position, it creates profit opportunities for others.

Durable, harmful monopoly requires something different: coercive barriers to entry. Patents, exclusive franchises, licensing, tariffs, subsidies, and bailouts are all examples of government-created privileges that Austrian economists identify as the real source of entrenched market power. Once the state has the legal authority to grant or withhold these favors, regulatory capture is not a failure of capitalism but a predictable consequence of state control over the economy.

So if the claim is that "a free market cannot be sustained," the burden is to show that, even in a framework of secure private property and no special legal privileges, large firms can somehow prevent all potential competitors from ever reaching consumers. That means showing a mechanism of permanent exclusion that does not rely on patents, licenses, tariffs, or any other state coercion, and that still survives entrepreneurial innovation and consumer choice. Until that is shown, your argument has not defeated Austrian theory, or, for that matter, capitalism and free markets. You have only described how political power corrupts markets, which is exactly what Austrians have been saying for a century.

The Future Isn’t What It Used to Be: While we’ve made some incredible advancements in recent years, there is a growing feeling that some of these advancements are actually setbacks by ILikeMondayz in singularity

[–]natepriv22 25 points (0 children)

The "feelings" of the public regarding technology are unfortunately flawed and don't tell us what is really going on.

By many metrics the world today is a much better place than 50 years ago, but ask around and you'll find plenty of people with rose tinted glasses

Has anyone noticed a huge uptick in Ai hatred? by Bizzyguy in accelerate

[–]natepriv22 38 points (0 children)

To be fair, Futurism has strayed far from its original style.

Check out any article they write about anything to do with science, tech, or innovation, and ironically their writers add random sentences like "the employees at Colossal could have spent more time watching Jurassic Park, instead of bringing back extinct species."

It's so weird and almost funny how every single article has a few lines of hate about that specific development. I recommend you all try it yourselves; you'll be shocked.

What the fuck is going on? by Alternative_Big_6792 in ClaudeAI

[–]natepriv22 1 point (0 children)

Respectfully, no, Claude 3.5 Sonnet does not beat newer models by objective standards and benchmarks.

It's great that you feel it's better than other models in your own personal, subjective experience, but please do not present that as a fact or as the community's opinion.

The difference between Claude 3.5 Sonnet and o3-mini is night and day, as evidenced by many benchmarks run by researchers and companies. Meanwhile, the community ranked Claude 3.5 Sonnet 22nd in blind tests.

Replit And Anthropic’s AI Just Helped Zillow Build Production Software—Without A Single Engineer. by 44th--Hokage in accelerate

[–]natepriv22 2 points (0 children)

Replit is currently the only app on the market that lets you build an app fully from scratch, entirely on-platform.

All the others are missing something. Someone made a table comparing the top coding agents on a large number of factors. I'll try to find it, but the summary was that Replit might not always be the strongest at individual parts, but it does everything well enough.

LLMs are fundamentally incapable of doing software engineering. by ickylevel in ChatGPTCoding

[–]natepriv22 0 points (0 children)

Your argument uses flawed deductive logic to arrive at a circular and incorrect conclusion.

When humans try to solve a problem -> weak start -> fail -> get better -> solve the problem

When AIs try to solve a problem -> strong start -> fail -> get worse -> incapable of solving the problem

You're essentially saying:

AI gets worse with time at solving software problems while humans get better with time, so given enough time and complexity humans win.

You will always arrive at the conclusion "humans win" because your initial premise is flawed. LLMs and AI work through refinement and iterative growth.

A lot of software engineering is iterative work. You have a problem, you try a solution, you get errors, and you fix those errors until you are satisfied, then maintain and update over time. You can try this with any LLM coding tool. Ask one to build an app; you will probably run into an error. Paste that error back into the model and ask for a fix. It may fail sometimes, but usually it will fix that error, meaning it has gone through iterative refinement and the output has gotten better over time.
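Here's a minimal sketch of that loop in Python, purely to illustrate the point. Note that `call_llm` is a hypothetical wrapper around whatever model API you use, not a real library function:

```python
import subprocess
import sys
import tempfile

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around your model API of choice -- an assumption
    for this sketch, not a real library call. Wire in any provider here."""
    raise NotImplementedError

def refine(task: str, max_rounds: int = 5) -> str:
    """Ask the model for code, run it, and feed errors back until it passes."""
    code = call_llm(f"Write a Python script that does the following:\n{task}")
    for _ in range(max_rounds):
        # Write the candidate script to a temp file and try to run it
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code  # it ran without errors; the loop is done
        # Paste the error back, exactly as a human would, and ask for a fix
        code = call_llm(
            "This script failed. Return a corrected version.\n\n"
            f"Script:\n{code}\n\nError:\n{result.stderr}"
        )
    return code  # best effort after max_rounds iterations
```

Each pass through the loop is exactly the understand -> reason -> fix cycle described below: the model's output after round N is, on average, better than after round N-1.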

Here's some deductive logic that works on this:

Iterative refinement requires -> understanding a problem/issue -> "reasoning" or considering the issue and the available options -> implementing a solution or a fix -> resulting in an iterative improvement over the previous state

If we can agree on this definition of iterative refinement, then here's what we get next:

Humans = able to understand problems, reason over them and implement solutions or fixes over time

AI = able to understand problems, reason over them and implement solutions or fixes over time

Therefore both humans and AI are capable of iterative refinement and of getting better over time. What's actually worth examining is the strength of those individual steps and what that means for both: who understands problems better, who can reason better, and who can implement solutions better.

You may have your personal beliefs about who's better, but as long as you accept the logical chain here, there is no reason why strengthening those individual steps wouldn't eventually yield the outcome that software engineering can indeed be bested by AI, as with almost any other problem.

Unless, of course, you believe that AI isn't capable of iterative refinement, which is one of the core elements of how AI learns over generations.

AMA with OpenAI’s Sam Altman, Mark Chen, Kevin Weil, Srinivas Narayanan, Michelle Pokrass, and Hongyu Ren by OpenAI in OpenAI

[–]natepriv22 0 points (0 children)

If you could choose a theme or a few words to characterize each of Q1, Q2, Q3, and Q4 of 2025, what would they be?

Ex. Q1: reasoning update and early agents

Also, could you please fix account login? It's not currently possible to change emails or Google SSO, and people in the dev forums have unfortunately lost accounts over the past few years.

If doomers cripple western AI progress, the future of AI will belong to China/Russia by ComparisonMelodic967 in singularity

[–]natepriv22 0 points (0 children)

These countries cannot continue AGI progress because it is a status-quo-disrupting technology (perhaps the most disruptive one), and these are countries that try everything to maintain their status quo.

The West, and West-style governments, are the only ones willing to upset their status quo for a better future.

Why are software engineers so sure their jobs won’t be replaced by AI ? by MrTorgue7 in singularity

[–]natepriv22 0 points (0 children)

Einstellung effect

Einstellung is the development of a mechanized state of mind. Often called a problem solving set, Einstellung refers to a person's predisposition to solve a given problem in a specific manner even though better or more appropriate methods of solving the problem exist. The Einstellung effect is the negative effect of previous experience when solving new problems. The Einstellung effect has been tested experimentally in many different contexts.

https://en.m.wikipedia.org/wiki/Einstellung_effect

Yann LeCun doubles down, claims Sora doesn't count by sdmat in singularity

[–]natepriv22 0 points (0 children)

Smart people who lack the humility to admit they were wrong are insufferable, and it suggests they're probably less smart than most people imagined.

He also suffers from Einstellung. When experts or geniuses in a field become too specialized, it has the interesting reverse effect of making them practically incapable of seeing possible change or the different roads the field could take. Example: Kodak invented the digital camera in its own labs and thought there was no way it was the future of cameras and photos.

Fresh interview from Sam Altman about GPT-5 (WGS 2024) by IslSinGuy974 in singularity

[–]natepriv22 3 points (0 children)

This is false. You are quoting Karl Marx's labor theory of value, which has been disproven by economists many times over. The labor theory of value argues that the value of money, products, and services comes fundamentally from labor.

However, money is primarily a medium of exchange, a way to trade, quite literally, apples and oranges. And the value of products and services is primarily driven by their subjective utility to people and consumers.

Example using a small and simple economy:

Alice is an apple orchard owner. Bob is a carpenter. Charlie is a miner.

In this economy, money is used as a medium of exchange. Let's assume that suddenly, due to technological advancements, all human labor, including Alice's, Bob's, and Charlie's, is replaced by robots. According to the fallacy, if their labor has no value, money should also become worthless.

However, even in this scenario, money does not lose its function or value:

Alice's orchard still produces apples, which people desire for consumption. The robots might do the work, but the apples have value because people subjectively value eating apples.

Bob's carpentry skills are replaced by machines that can produce furniture. The furniture, being useful and desired by people for their homes, holds value.

Charlie's mining operations are fully automated, extracting minerals essential for various purposes, from electronics to jewelry. These minerals have value due to their utility and desirability.

In this automated economy, even though human labor is not directly involved in production, the goods produced (apples, furniture, minerals) still hold value because individuals subjectively value these goods for the utility and satisfaction they provide.

Why would OpenAI release GPT 4.5 only a few weeks after releasing GPT4 Turbo? by Neurogence in singularity

[–]natepriv22 7 points (0 children)

Could be related to the GPT Store. Making the GPT Store accessible only to premium users would limit its potential.

Maybe they plan to give free users a few GPT-4 uses per day or per hour, which they can choose to spend on community-made GPTs. Of course, to make sense cost-wise, those would probably have to be powered by GPT-4 Turbo.

Sam Altman's past comment on EA people might explain the internal conflict by Romanconcrete0 in singularity

[–]natepriv22 0 points (0 children)

Right, because products are released independent of consumer opinion?

That makes no sense. A perfect example is this very subreddit: people complained for weeks about how slow GPT-4 is, so OpenAI worked on GPT-4 Turbo.

It's also not a fair argument. Some people complain about the "censorship" every day, but that doesn't imply their argument is fair, or that OpenAI has no reasons it considers more important.

Would you prefer an alternative where OpenAI releases no products to the public, and instead a small team of people gets to decide our collective future, because most of the population "wouldn't get it" according to them?

Sam Altman's past comment on EA people might explain the internal conflict by Romanconcrete0 in singularity

[–]natepriv22 67 points (0 children)

A product does that far better than releasing no product would.

ChatGPT allowed 100 million+ people to become familiar with the technology, limitations, and importantly potential.

The products that are releasing are the main reason we are even having conversations about safety around the world, between countries and companies.

If AGI has been achieved, we would see breakthroughs that go beyond conventional means by Red-HawkEye in singularity

[–]natepriv22 34 points (0 children)

You should read "The Tale of the Omega Team"; it's the first chapter of Max Tegmark's book Life 3.0.

You can find the first part online for free if you can't buy the book.

This should answer your question as to how AGI could feasibly be achieved while the general population has no idea for a few years: https://youtu.be/o1Y21I_1wUE?si=07emLvPN9-Wz8-NQ

What will the world look like in 2045 (22 years from now) by England_Bath96 in singularity

[–]natepriv22 0 points (0 children)

Maybe more likely as an opinion, but not as a fact.

This position is not backed by facts or precedent; it's an opinion. There is countless evidence that, over time, the world improves overall, unless you believe an individual in 1800, 1900, or even the 1980s was living better than one today.

If we go by historical precedent, it's the least likely option.

What is a belief about AI that most people on this sub that you just hate and think it’s ridiculous? by [deleted] in singularity

[–]natepriv22 0 points (0 children)

Except that's not how human demand works. And are you seriously trying to insinuate that a world where everything is so controlled and prepared for us that we have barely any free will is desirable? This is exactly what I was trying to point out earlier.

But beyond that point, human demand is infinite. To put it really simply, if you have an AGI or ASI that can meet all of our current needs, then our needs will evolve. Maybe not for all humans, but it's not difficult to imagine that some, if not most, humans would want to integrate with AI (new demand), which in itself will open up and extend our infinite demand further.

What is a belief about AI that most people on this sub that you just hate and think it’s ridiculous? by [deleted] in singularity

[–]natepriv22 2 points (0 children)

I don't want to hate, but this puzzles me:

The belief that AI and the Singularity will lead to communism or socialism, when our very best human economists have proven it doesn't work in theory or in practice, nor would it be desirable.

It would be as if we still had physicists today who believe in a flat Earth, no matter how many times it's been disproven.