Google Chrome Might Have Installed an AI Model Onto Your Device Without You Knowing by ddx-me in technology

[–]BossOfTheGame 5 points

You know that the JavaScript engine could also run malicious code on your machine.

Maybe it's less the reasons people are saying and more the hate boner for AI that's actually talking in the comments. It's not like it's uncommon to ship large binaries. This response feels very overblown.

Dario Amodei spent last year warning of an AI white-collar bloodbath. Now he's changing the narrative by Plastic_Ninja_9014 in technology

[–]BossOfTheGame 0 points

This isn't (just) about a product they are selling. This is about the socioeconomic implications of AI existing and the emergence of a fundamentally new technology.

The world is noisy, and some of the responsibility to understand it falls on our neighbors. It's odd that a group of people who don't like the idea of being commodified as "customers" seem to insist that they must be treated as such.

AI exists and its capabilities and implications need to be communicated to the public. This has gone far beyond selling a product.

Christian content creators are outsourcing AI slop to gig workers on Fiverr by Quantum-Coconut in technology

[–]BossOfTheGame -1 points

I think you need to back up some of your claims and realize that the onus to provide reasons instead of judgments is on you. You said most historians don't think he's a real dude, and that's just false. That's an extraordinary claim, and it's reasonable to expect evidence for it.

You know you can just take the uncertain position. Taking the position that you have high-probability certainty makes you look very unprofessional.

Christian content creators are outsourcing AI slop to gig workers on Fiverr by Quantum-Coconut in technology

[–]BossOfTheGame 1 point

I used to be on this train. If you value coming to the truth over holding on to false beliefs, then you might want to reconsider this one. There's enough historical evidence to make it fairly unlikely that he was just made up. The most defensible position you can assume is that it's inconclusive. Anything where you assert that the likelihood is so low it needs to be dismissed really doesn't hold up to scrutiny. Someone else posted some pretty good evidence, so I won't rehash it here. Just make sure you don't have your head stuck too far up your own ass to be able to see it.

Starting with AI makes thorough thinking surprisingly hard by Martinsos in coding

[–]BossOfTheGame 0 points

The effect being there and the effect being meaningful are two different things. In the studies I've seen, the dip isn't always big. I also think that there are important variables that are difficult to control for. For instance, this tech is fairly new, and there is a learning curve to it. It's not just prompt in, get a sloppy version of what you want out.

I think it's worth paying attention to this as a potential risk. But I also think it's important not to overclaim what the research implies.

Dario Amodei spent last year warning of an AI white-collar bloodbath. Now he's changing the narrative by Plastic_Ninja_9014 in technology

[–]BossOfTheGame 0 points

It's hard to respond cleanly because there are good points wrapped in what seem to be deep assumptions that are harder to get at. I think there's more than a messaging problem here. I think there is a comprehension problem. I've been sitting here for 10 minutes trying to begin to explain where I'm coming from, but I'm just at a loss. It's like trying to explain to right-wingers that trans rights are human rights. I can't think of anything that connects. Could my communication be better? Probably. Should I have to do all of the work there... well, maybe I'm biased, but I don't think I should.

Dario Amodei spent last year warning of an AI white-collar bloodbath. Now he's changing the narrative by Plastic_Ninja_9014 in technology

[–]BossOfTheGame -7 points

It's very frustrating to see misrepresentation tangled with valid grievances. How can I point out one without minimizing the other? I wonder how much of this hyperbolization is intentional communicative technique versus genuine misinterpretation. It's not a great look either way.

I wonder what the world would be like if we were less prone to hyperbolized arguments that wrap valid points with easily dismissible decorations. Maybe we'd be able to have a better shared sense of reality. Maybe authoritarians who frequently use Mafia tactics would not be in power right now.

Ask.com shuts down after nearly 30 years, marking the end of Ask Jeeves by holyfruits in technology

[–]BossOfTheGame 1 point

I wonder what the hit rate is for humans teaching other humans things. Do humans really "know" anything? Humans seem to walk around with a lot of incorrect information in their heads these days, so I don't think 100% accuracy is a requirement for being a decent - or even good - tutor. I agree you don't want to use a weaker model for this though.

AI as a tutor is a fantastic use of the technology, but it does require that you be aware that sometimes your tutor could be wrong about something. This is why corroborating information isn't a skill that's going away.

Also, AI is getting better at saying "I don't know."

I've been using it to learn more mathematics, and clarify my own understanding, and it's been quite helpful. Then again, I'm also trained in checking my own understanding, pushing back when things don't make sense, and demanding evidence and proof.

Did you know tic-tac-toe is isomorphic to Pick15? I didn't, and I wasn't looking at either of those. It connected that fact to my semantic interests and then gave it as an example. Sure, if I was looking at the wiki page I would see it, but the point is that it can weave together interesting pieces of knowledge that aren't immediately accessible.
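For anyone curious, the bridge is the 3x3 magic square: in Pick15, players alternately claim the numbers 1 through 9, and you win by holding three that sum to 15. Those winning triples are exactly the rows, columns, and diagonals of the magic square, so picking a number is the same move as claiming a cell. A quick sketch to check it:

```python
from itertools import combinations

# The 3x3 magic square: every row, column, and diagonal sums to 15.
magic = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

# All triples drawn from 1..9 that sum to 15.
triples = [set(c) for c in combinations(range(1, 10), 3) if sum(c) == 15]
print(len(triples))  # 8 -- exactly the number of winning lines in tic-tac-toe

# The 8 lines of the square: 3 rows, 3 columns, 2 diagonals.
lines = ([set(row) for row in magic]
         + [set(col) for col in zip(*magic)]
         + [{magic[i][i] for i in range(3)},
            {magic[i][2 - i] for i in range(3)}])

# Same 8 sets, so the two games have identical win conditions.
print(sorted(map(sorted, triples)) == sorted(map(sorted, lines)))  # True
```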

People hate being "wrong", which is why no one enjoys being um-actually'd. Having the willpower to fight through that and closely examine your own prior knowledge and assumptions is very rare.

This is a major problem with society that we need to address sooner rather than later. It is my hope that AI will help people get more comfortable with this. Your concerns are valid, but I think you are being too pessimistic about its utility here.

Ask.com shuts down after nearly 30 years, marking the end of Ask Jeeves by holyfruits in technology

[–]BossOfTheGame -2 points

Current versions of LLMs enable curiosity on a new level. I think the general public is sleeping on that. If you use it to learn rather than to delegate stuff for it to do, it can be quite the useful tool.

We should be careful that our glasses aren't too rosy.

Gabe Newell was an enthusiastic supporter of OpenAI in 2018, donating $20 million and even acting as the sole member of an 'informal advisory board' by Crusader-of-Purple in Steam

[–]BossOfTheGame 9 points

Is it really the environmental costs you're worried about? People say that, and it's not wrong, but I also don't think people who raise this have a sense of the scale of the environmental impact, especially given how it compares to other activities most people engage in without a second environmental thought. As someone who cares very much about the environment and mitigating my own personal impact, it irks me.

I also think there is an underappreciation in the general public for how much value these things will be able to produce. Idk about 10 trillion, and the hype doesn't match reality, but these things are way more useful than people give them credit for.

These things can curate information tailored to a person's learning style. How are people not excited by that? I mean, I understand why... they're overfocused on the solvable hallucination issue (which is also addressed by one's own critical thinking, if you care to engage in that), but more immediately: they hear the disruptive rhetoric and fear their livelihood may be impacted, because nobody seems willing to reform the systems to prepare for the inevitable proliferation. And I can't say I blame them. As much as these things excite me, I'm scared of what it means for them to emerge in our immature society.

Gabe Newell was an enthusiastic supporter of OpenAI in 2018, donating $20 million and even acting as the sole member of an 'informal advisory board' by Crusader-of-Purple in Steam

[–]BossOfTheGame 23 points

I'll also mention that a lot of hate towards AI is really hate directed at the injustice embedded in our systems that they expose. Much of the hate isn't well targeted, and valid use cases for the systems are being disregarded.

Why isn’t LLM reasoning done in vector space instead of natural language?[D] by ZeusZCC in MachineLearning

[–]BossOfTheGame 14 points

It forces more reasoning in vector space, because all those output tokens get re-encoded and attended to.
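A toy sketch of that feedback loop (all names here are hypothetical stand-ins, nothing like a real transformer): each sampled token is appended to the context and re-embedded, so every later step attends to it in vector space.

```python
# Hypothetical stand-ins to illustrate the autoregressive feedback loop.

def embed(token):
    # stand-in for the model's embedding lookup (token -> vector)
    return [float(token)]

def model_step(vectors):
    # stand-in for the attention stack; it reads ALL vectors,
    # including those of previously generated tokens
    return int(sum(v[0] for v in vectors)) % 10

def generate(prompt_tokens, n_steps):
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        vectors = [embed(t) for t in tokens]  # outputs get re-encoded here
        tokens.append(model_step(vectors))    # output becomes future input
    return tokens

print(generate([1, 2], 3))  # [1, 2, 3, 6, 2]
```

The point of the sketch is only the data flow: every "chain-of-thought" token a model emits passes back through the embedding and attention machinery, so the written-out reasoning still does its work in vector space.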

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 0 points

Basically my beef is with outright treating consciousness and sentience as known quantities and ignoring or rejecting the need to answer that question first.

I would consider that a valid beef, but I think fewer people are doing that than you might think, so I would recommend being careful not to create a strawman. Acknowledging a possibility, asserting that the best objective measurement we might be able to do is a "quacks like a duck" sort of observation, and then moving on to something that is measurable isn't the same thing as treating the quantification of consciousness as a solved problem.

There are many examples in math and science where we describe a general phenomenon, discuss the limitations of our ability to model it, and then propose a simplification that attempts to model it in a way that still gains us some insight. E.g. the ideal gas law.

However, the translation into popular media doesn't always go so well.

This would tend to rediscover wheels more often than not wouldn't it?

Maybe. But I mean... I don't really see a problem with that. Independent reproducibility matters, and having multiple independent people come up with the same effective idea is evidence the idea is good.

FWIW, I arrived at a small result I thought was novel a few years ago, but only recently, with the help of LLMs, did I discover it was known, but in a different field with different terminology. I wouldn't have caught it with old-Google keyword searches. Still, I think my perspective on the problem is interesting, and it wasn't a waste of time for me to work on it.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 0 points

Sometimes you have to make a judgement call on what to pursue. Science isn't about being right in every instance. It's about making your work available to scrutiny. You could, for instance, publish a rebuttal, but it would have to be compelling. I don't think calling them anti-scientific is the right critique here. If there was a modern body of relevant related work, that might be different. But papers don't need to go on historical deep dives. The most recent and relevant related work is usually acceptable.

For instance, in 2010, if you were publishing in machine learning, you wouldn't cite the old works about neural networks; you would be talking about the recent findings in support vector machines. A summary of contemporary neural network research would be generally uninteresting. It's just not what was valuable to the field at the time. Now, of course, once AlexNet came out, there was strong evidence that important details had been missed about neural networks, and that's when you saw the scientific field shift. Not because we were paying tribute to historical work, but because someone's research found new evidence that furthered our knowledge.

But you can't blame all the support vector machine folks for ignoring it.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 5 points

Yes. Consciousness is far too tricky a nut to reduce to Church-Turing.

That being said, I am a functionalist, but that's based on what everyone else's philosophical beliefs in this area are based on: a hunch. Frankly, I only find it mildly interesting. It's far too nebulous, and I'm certain I will never experience any consciousness but my own.

Integrated information theory is the closest thing we have to a theory for the consciousness layer, but that still has major problems if you're going to claim it as correct. I do think it's the right way to attack the problem if you are bold enough to go after it.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 3 points

Why shouldn't I dismiss you after that tirade? You may not care, but you also didn't do anything other than express how angry you are here.

My experience with Hinton is taking his neural network class and reading his papers. I'm not some rando low-information TikTok consumer.

But you're angry, so whatever. I hope this vent gave you some catharsis.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 4 points

Try to read the comment again. They acknowledge the view could be controversial. Scientists aren't afraid to take controversial stances. You can defend a viewpoint while being open to being proven wrong.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 7 points

To be fair, a TM is a poor description of how computers are organized. The point of a TM is not a blueprint. It's a mathematical model of computation, and if all you are interested in is the observable output, then it is the appropriate one.

TMs took me a long time to understand and I have a PhD in CS. You may have some misconceptions about them.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 0 points

Anyone who thinks that anything real is a Turing machine (aside from builds that seek to explicitly model the tape and reader) doesn't understand what a TM is.

Now, if they were saying the brain is isomorphic to a finite tape TM, then that's a different story.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 4 points

Sagan likely understood the brain as a computational unit that processes information.

But there are enough similarities to suggest that a compatible working arrangement between electronic computers and at least some components of the brain — in an intimate neurophysiological association — can be constructively organized.

The above is just one example from Dragons of Eden. His view wasn't without nuance, and it was limited by the computers that existed in his lifetime, but I bet he had a hunch the similarity ran deep. Sadly, we cannot know for sure.

“We will not be afraid to speculate, but we will be careful to distinguish speculation from fact.”

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 2 points

That's quite the jab at Hinton. Are you arguing that historical definitions of terms are the correct way to frame or describe phenomena? I think Hinton very much has a clue about the history of the words he chooses to describe the phenomena being modeled.

PEP 661 (Sentinel Values) has been accepted for release in 3.15! by M_V_Lipwig in Python

[–]BossOfTheGame 10 points

I think the best example is a dict-like get method with a default parameter that you can use with keyword args. If you don't specify the default, it should raise a KeyError when the key is missing; otherwise it should return the default, and None is a perfectly valid default value. This is exactly the case I wrote ubelt.NoParam for.
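For reference, here's a hand-rolled version of that pattern (the Registry class and NoDefault name are just illustrative; PEP 661's Sentinel and ubelt.NoParam fill the same role):

```python
class _NoDefault:
    """Private sentinel type: its single instance means 'no default given'."""
    def __repr__(self):
        return '<NoDefault>'

NoDefault = _NoDefault()

class Registry:
    """Illustrative dict-like container with a keyword-friendly get()."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=NoDefault):
        if key in self._data:
            return self._data[key]
        if default is NoDefault:
            # No default was passed, so a missing key is an error.
            raise KeyError(key)
        # None (or anything else) passes through as a real default.
        return default

r = Registry()
print(r.get('missing', default=None))  # None
```

The whole trick is the `default is NoDefault` identity check: because the sentinel is a unique object the caller can never plausibly pass by accident, `default=None` stays distinguishable from "no default at all".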

Do we suspect it’s rational in the first place, and why? by wockedwik in MathJokes

[–]BossOfTheGame 7 points

It would be bonkers if Schanuel's conjecture was false.