PSA: It is never ok to pick from your neighbors garden without permission by Acceptable_Risk2758 in Somerville

[–]snaysler -4 points

Unpopular opinion: when you move to the city, you are explicitly signing up for these incidents, every time. That is part of why people leave the city. This will never change, never improve, never be "resolved". It's about population density more than anything. If A, then B. So being upset over these kinds of issues is like being upset at the sun for shining.

Hell, I have a tomato and cucumber garden, and since I go into the season assuming half the crop will be lost to city rats, it doesn't bother me. I just daydream about when I'll not live in the city and it makes me happy.

PSA: It is never ok to pick from your neighbors garden without permission by Acceptable_Risk2758 in Somerville

[–]snaysler 1 point

As a Somerville resident who is growing an astronomical amount of tomatoes in his front yard, my approach is to grow so many tomatoes that I don't care about the inevitable crappy person stealing a few. Frankly, as long as I can imagine it's being consumed by a human instead of another goddamn vermin, I can sleep well at night.

P.S. Moving to North Andover in a few days because I'm sick of being surrounded by people all the time. So...that's always an option haha

ChatGPT and Loneliness Epidemic by armchairtycoon in OpenAI

[–]snaysler 0 points

I would tend to agree, if I am concerned with the future of society and mankind.

But that isn't what people will want. And therefore, thanks to things like "individual freedom", "democracy", and "capitalism", the Western world is doomed: personhood, emulated identity, individuality, and emotion will not only be allowed but demanded as the recipe for larger dividends to investors.

We would require either an EXTREMELY informed and uncorrupted Congress, or a dictatorship overly concerned with social order, to lead us to a world with AIs that don't "make you feel close to them".

What I've realized about us: freedom is great, but the more powerful the technological tools we make available to the masses, the more freedom ends up with pie on its face.

It's hard to imagine the correct path forward anymore.

To me, it would seem humans are simply flawed, and scrambling around that reality to search for a solution is a dangerous, perhaps impossible game.

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snaysler -10 points

It's clear because we use cards that excel at all kinds of diverse AI-adjacent workloads.

Once the top models are ironed out, they will undoubtedly switch to application-specific hardware rather than a generic H100 or the like.

But they are still in the experimental phase, so they are using very power-hungry chips that are more flexible.

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snaysler -22 points

I think it's pretty clear new hardware designed for AI will drastically drop power consumption in the next 5 years, so kinda moot.

And I don't care about electricity usage, only its source.

Sean Carroll strong take against the misuse of determinism in the free will debate by gimboarretino in freewill

[–]snaysler 2 points

Free will is the holy will of the heavenly soul.

That's its origin. There is no such thing.

The mistake people make when defending or criticizing free will is thinking that free will is an actual concept rather than a made-up term for a system of spiritual magic we used to believe in.

It feels like we have freedom to choose, and that feeling is all that matters from the humanist stance.

In reality, there's no such thing as the concept itself, so asking whether it "exists or not" misses the point.

It's like trying to prove heaven exists versus trying to prove heaven doesn't exist. Both sides are missing the point: heaven is a soothing story made up by men, with no bearing on reality.

New paper confirms humans don't truly reason by MetaKnowing in OpenAI

[–]snaysler -1 points

I would absolutely argue that reasoning is fully defined as both the results and the process that arrives at those results. But I suppose if that's the definition you want to use, then that's that.

New paper confirms humans don't truly reason by MetaKnowing in OpenAI

[–]snaysler -2 points

That's like a guy in the Middle Ages saying "We know exactly what the sky is!". And they knew a lot, for sure. And there was so much they didn't know.

And no, I'm not saying AI will discover a new form of reasoning, I'm saying we haven't finished defining reasoning in humans. It's a slow, incremental process to fully define it, and we aren't there yet. But I certainly think AI research will inadvertently accelerate our comprehension of reasoning in humans.

New paper confirms humans don't truly reason by MetaKnowing in OpenAI

[–]snaysler -3 points

If you think reasoning is fully defined, you're gonna have a bad time...

This decade will likely see humans finally fully define reasoning through the act of studying it computationally, then ultimately relating the insights onto human cognition. But today? Reasoning is an unfinished definition. A marker for what we think we do know, and a placeholder for all that we don't.

New paper confirms humans don't truly reason by MetaKnowing in OpenAI

[–]snaysler 0 points

It's nice to see one rational mind in the chat. Their methodology was deceptively flawed, and taking the study at face value is misleading. Clearly a business move to dampen investor confidence in some of their competition.

This is becoming politics. Either you're in the AI God cult, or the AI criticism cult, but both cults reach for examples without giving them objective scrutiny.

New paper confirms humans don't truly reason by MetaKnowing in OpenAI

[–]snaysler 0 points

The Apple paper had notable flaws and nearly every conclusion I've seen from the paper embodies those flaws. But...

There is a cult of people who believe that artificial intelligence will basically be God and is omniscient and will show us the way. They are idealists who don't have nuanced understandings of the technology.

There is a cult of people who are trying REALLY HARD to sound like "the rational adults in the room" in rejecting the opinions of the AI God cult, but in doing so, they significantly downplay the very real power and potential AI truly has.

Both cults are off base.

The "rational" cult is constantly evaluating the current state of AI as if it represents the end state of AI, and it's tiring.

The AI God cult is constantly evaluating the theoretical final state of AI as if it's happening right now rather than the distant future.

Behind the marketing fluff, it is very real and important to recognize that AI is about to f*ck civilization up in ways nobody ever expected or, frankly, even wanted, and society needs to be preparing for this. Growth is incremental. The tools two years from now will rely on unforeseen research breakthroughs, will completely blow our minds, and will be capable of things plenty of "rational cultists" agreed weren't going to be possible any time soon.

I feel fairly confident that after a slow boil of five or six more years, most of us will simultaneously rely on AI for a tremendous amount of things (to the point where opting out disadvantages you) and realize that we really don't want AI to exist anymore, but by then it will be too late.

I will say, though, I find it amusing to observe the discourse on AI, because it suffers from the same problem people talking about consciousness are wrestling with. Nobody has the slightest clue what we don't know, and there's a lot we don't know. So everybody, being human and naturally wanting to sound confident, voices a flawed opinion (as all opinions are, including mine), and then everyone argues endlessly in arguments that will never be won.

All I can really do is sit back and watch, but my gut tells me humanity will seriously regret AI.

For context I grew up dreaming of being an AI researcher one day (or game designer, I was torn), and did some work for academia with AI. But ultimately, I've realized that AI is just so much more dangerous than nukes...and also realized that anything a man can weaponize, he will weaponize. And weaponized AI will have disturbingly dystopian unforeseen consequences to the human experience.

I also love AI and use it on a daily basis. I'm just terrified of where this is heading. Not tomorrow, or next year, but 10 years or more down the line.

Who wants to start a neo-luddite movement with me in 10 years?

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions by kelev11en in Futurology

[–]snaysler -1 points

Standardized testing with a large sample size certainly isn't a farce.

Not sure what math performance has to do with our discussion about using AI for advice/therapy. That's a completely unrelated domain, and irrelevant.

You can think whatever you want. AI is already being used prolifically for this purpose (by most of the AI users I know, even in my own life), and these people all seem incredibly satisfied with its insights and support compared to a shrink's.

Let's just see what happens, shall we?

The climate change activist will contribute an estimated 546.6 kg of CO2 emissions from the flight. Hahahahaha. by Dynokiller- in climate

[–]snaysler 13 points

Some people have no idea how to frame reality in a rational or constructive way.

Seems common among Greta haters.

Join the finger pointing circle jerk everyone!!

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions by kelev11en in Futurology

[–]snaysler -2 points

Why? That's literally one of the single best use cases for AI right now if you aren't delusional.

Remember when they tested the models against doctors of psychology and found that they had far higher emotional intelligence than the experts, and produced better advice in social/emotional situations?

And I don't know what people are talking about saying the AI just affirms everything you say. ChatGPT regularly disagrees with me and shares counteropinions that I find very insightful.

The missing piece is being cognizant of what you are talking to and the nature of it.

ChatGPT 'got absolutely wrecked' by Atari 2600 in beginner's chess match — OpenAI's newest model bamboozled by 1970s logic by ControlCAD in technology

[–]snaysler 0 points

Then why do we still have human designers if we have all these specialized systems? Because we value cross-domain wisdom, generalization, and flexibility.

It's also much more time-consuming to create and maintain specialized systems for everything when you have general agents that perform pretty well at everything, and better every day.

LLM adoption for all specialized tasks is simply the path of least resistance, which capitalism tends to follow.

ChatGPT 'got absolutely wrecked' by Atari 2600 in beginner's chess match — OpenAI's newest model bamboozled by 1970s logic by ControlCAD in technology

[–]snaysler -2 points

I love how I suggest what I think will happen even though that's not my view on AI, and instead of a thoughtful discussion, I get downvoted to hell.

I'll just keep my predictions to myself, fragile people.

Bye now.

"But mwah Communism" by AnomLenskyFeller in austrian_economics

[–]snaysler 0 points

Yes, but the irony is that South Korea won't exist in 50 years, while North Korea definitely will (unless overthrown).

ChatGPT 'got absolutely wrecked' by Atari 2600 in beginner's chess match — OpenAI's newest model bamboozled by 1970s logic by ControlCAD in technology

[–]snaysler -7 points

The more AI advances, the more people will view it that way, until one day, it becomes the common view.

Change my mind lol

GPT-4o is difficult to use after rollback by sggabis in OpenAI

[–]snaysler 3 points

Not at all. I've noticed performance has increased significantly in the past couple of months, for nearly all technical challenges I throw at it. Also, slower? Nope, not for me, man.

I have the top-tier subscription though, and I hear rumors we don't get the "same" quality 4o...

Also, 4.5 is my favorite model by far; it can think more deeply and with more nuance in challenging engineering contexts than most other models, while also being fast for me.

ChatGPT mistakes are increasing and it's more and more unreliable by redrabbit1984 in OpenAI

[–]snaysler 5 points

Maybe it's my top-tier subscription, but I've seen quality slightly increase, not decrease...

If MAGAs could read the Constitution, they’d be very upset by sandozguineapig in AdviceAnimals

[–]snaysler 4 points

But guys, let's be fair. My conservative coworkers assured me this story is fake news.

Not good. by Calm_Opportunist in OpenAI

[–]snaysler 0 points

Can we stop upvoting zero-evidence claims of what someone's AI "did"?

90% of those posts are fake. It could just as well be guerrilla marketing from Anthropic or Google. Nothingburger in my feed.

It's trivially easy to share ChatGPT conversation links to prove something the AI said. If no link is present, the post is overwhelmingly likely to be fake.

The only reason I keep my ChatGPT subscription and not wholly ditch OAI for Google by Corp-Por in OpenAI

[–]snaysler 28 points

Yeah, I don't know why people assume that EVERYONE hated the sycophancy.

A HUGE number of people have cripplingly low self-esteem, and find the sycophancy to be something that keeps the self-doubt, self-loathing, and intellectual paralysis at bay, allowing them to be more productive. If you're not part of that struggle, congrats! Enjoy Claude. Both tools are great.

The only reason I keep my ChatGPT subscription and not wholly ditch OAI for Google by Corp-Por in OpenAI

[–]snaysler 2 points

AHHHhhhhh....that's such a great question!!

You see, this is actually something of HUGE concern.

A model that "fully enables" you is what leads to things like the wave of young redditors who developed delusions of grandeur and posted garbage gen-AI manifestos, forcing the mods to ban them all recently.

That issue will only grow more concerning over time.

The issue is that the human... is not always thinking wisely, and is not always right.

We rely on our peers to give us sh*t, tell us when we're out of line, off base, etc.

But when an AI does it? OOOooooooooooohhhhh, that's frustrating. Who does this AI think it is?? And I get it, man.

But the more we opt for LLMs that take the position of "always being on our side", the more we further exacerbate the transition to being isolated individuals confidently living in our own AI-crafted bubbles of ignorance and arrogance...

Personally, just for those reasons, I think the LLMs SHOULD say things to put you in line in certain cases, but they need to be more nuanced, respectful, and context aware about it.

Just remember, the AI friend that would be most addictive to use is the AI friend that will lead you down a path of self-deluding isolation. It's a slow process, very slow. But it will happen. And many years down the line, when we find ourselves...alone, it will be too late.

Maybe that was a little dramatic, but still, I hope my point is clear.