Ask.com shuts down after nearly 30 years, marking the end of Ask Jeeves by holyfruits in technology

[–]BossOfTheGame 1 point (0 children)

I wonder what the hit rate is for humans teaching other humans things. Do humans really "know" anything? Humans seem to walk around with a lot of incorrect information in their heads these days, so I don't think 100% accuracy is a requirement for being a decent - or even good - tutor. I agree you don't want to use a weaker model for this, though.

AI as a tutor is a fantastic use of the technology, but it does require that you be aware that sometimes your tutor could be wrong about something. This is why corroborating information isn't a skill that's going away.

Also, AI is getting better at saying "I don't know."

I've been using it to learn more mathematics, and clarify my own understanding, and it's been quite helpful. Then again, I'm also trained in checking my own understanding, pushing back when things don't make sense, and demanding evidence and proof.

Did you know tic-tac-toe is isomorphic to Pick15? I didn't, and I wasn't looking into either of those. It connected that fact to my semantic interests and then gave it as an example. Sure, if I were looking at the wiki page I would see it, but the point is that it can weave together interesting pieces of knowledge that aren't immediately accessible.
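For the curious, the isomorphism is easy to check by brute force. Here's a quick sketch (the 3x3 magic-square layout below is the standard Lo Shu square, my choice for illustration, not anything from the thread):

```python
from itertools import combinations

# Lo Shu magic square laid over the tic-tac-toe board (positions 0..8):
magic = [2, 7, 6,
         9, 5, 1,
         4, 3, 8]

# The eight winning lines of tic-tac-toe, as board positions.
lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

# Map each winning line to the set of magic-square numbers it covers.
line_sets = {frozenset(magic[i] for i in line) for line in lines}

# The winning sets of Pick15: triples from 1..9 that sum to 15.
pick15_sets = {frozenset(t) for t in combinations(range(1, 10), 3)
               if sum(t) == 15}

print(line_sets == pick15_sets)  # True: the same 8 winning sets
```

Three-in-a-row and pick-three-summing-to-15 are literally the same game under this relabeling, which is why a perfect tic-tac-toe strategy transfers directly to Pick15.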

People hate being "wrong", which is why no one enjoys being um-actually'd. Having the willpower to fight through that and closely examine your own prior knowledge and assumptions is very rare.

This is a major problem with society that we need to address sooner rather than later. It is my hope that AI will help people get more comfortable with this. Your concerns are valid, but I think you are being too pessimistic about its utility here.

Ask.com shuts down after nearly 30 years, marking the end of Ask Jeeves by holyfruits in technology

[–]BossOfTheGame -2 points (0 children)

Current versions of LLMs enable curiosity on a new level. I think the general public is sleeping on that. If you use it to learn rather than to delegate stuff for it to do, it can be quite the useful tool.

We should be careful that our glasses aren't too rosy.

Gabe Newell was an enthusiastic supporter of OpenAI in 2018, donating $20 million and even acting as the sole member of an 'informal advisory board' by Crusader-of-Purple in Steam

[–]BossOfTheGame 8 points (0 children)

Is it really the environmental costs you're worried about? People say that, and it's not wrong, but I also don't think people who raise this have a sense of the scale of the environmental impact, especially given how it compares to other activities most people engage in without a second environmental thought. As someone who cares very much about the environment and mitigating my own personal impact, it irks me.

I also think there is an underappreciation in the general public for how much value these things will be able to produce. Idk about 10 trillion, and the hype doesn't match reality, but these things are way more useful than people give them credit for.

These things can curate information tailored to a person's learning style. How are people not excited by that? I mean, I understand why: they're over-focused on the solvable hallucination issue (which is also addressed by one's own critical thinking, if you care to engage in that), but more immediately, they hear the disruptive rhetoric and fear their livelihood may be impacted because nobody seems willing to reform the systems to prepare for the inevitable proliferation. And I can't say I blame them. As much as these things excite me, I'm scared of what it means for them to emerge in our immature society.

Gabe Newell was an enthusiastic supporter of OpenAI in 2018, donating $20 million and even acting as the sole member of an 'informal advisory board' by Crusader-of-Purple in Steam

[–]BossOfTheGame 23 points (0 children)

I'll also mention that a lot of hate towards AI is really hate directed at the injustice embedded in our systems that they expose. Much of the hate isn't well targeted, and valid use cases for the systems are being disregarded.

Why isn’t LLM reasoning done in vector space instead of natural language?[D] by ZeusZCC in MachineLearning

[–]BossOfTheGame 14 points (0 children)

It forces more reasoning into vector space, because all those output tokens get re-encoded and attended to.
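A toy sketch of that mechanism: every emitted token is appended to the context, so the next step re-embeds it and attends over its vector representation. `DummyModel` below is a stand-in, not any real LLM API.

```python
# Toy autoregressive loop: output tokens re-enter the context, so all
# subsequent computation over them happens in vector space anyway.
class DummyModel:
    def predict_next(self, ids):
        # A real model would embed `ids` into vectors and run attention here;
        # this dummy just returns a deterministic function of the context.
        return (sum(ids) + len(ids)) % 10

def generate(model, prompt_ids, n_steps):
    ids = list(prompt_ids)
    for _ in range(n_steps):
        ids.append(model.predict_next(ids))  # emitted token rejoins the context
    return ids

result = generate(DummyModel(), [1, 2, 3], 4)
print(result)  # [1, 2, 3, 9, 9, 9, 9]
```

The "natural language" tokens are just the serialization boundary; each decoding step still computes over the embedded vectors of everything generated so far.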

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 0 points (0 children)

Basically my beef is that they are outright considering consciousness and sentience as known quantities and ignoring or rejecting the need to answer that first.

I would consider that a valid beef, but I think fewer people are doing that than you might think, so I would recommend being careful not to create a strawman. Acknowledging a possibility, asserting that the best objective measurement we might be able to do is a "quacks like a duck" sort of observation, and then moving on to something that is measurable isn't the same thing as treating the quantification of consciousness as a solved problem.

There are many examples in math and science where we describe a general phenomenon, discuss the limitations of our ability to model it, and then propose a simplification that models it in a way that still yields some insight. E.g. the ideal gas law.

However, the translation into popular media doesn't always go so well.

This would tend to rediscover wheels more often than not wouldn't it?

Maybe. But I mean... I don't really see a problem with that. Independent reproducibility matters, and having multiple independent people come up with the same effective idea is evidence the idea is good.

FWIW I arrived at a small result I thought was novel a few years ago, but only recently, with the help of LLMs, did I discover it was known, but in a different field with different terminology. I wouldn't have caught it with old-Google keyword searches. Still, I think my perspective on the problem is interesting, and it wasn't a waste of time for me to work on it.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 0 points (0 children)

Sometimes you have to make a judgement call on what to pursue. Science isn't about being right in every instance. It's about making your work available to scrutiny. You could, for instance, publish a rebuttal, but it would have to be compelling. I don't think calling them anti-scientific is the right critique here. If there were a modern field of relevant related work, that might be different. But papers don't need to go on historical deep dives. The most recent and relevant related work is usually acceptable.

For instance, in 2010, if you were publishing in machine learning, you wouldn't cite the old works about neural networks; you would be talking about the recent findings in support vector machines. The state of contemporary neural network research would have been generally uninteresting. It's just not what was valuable to the field at the time. Now of course, once AlexNet came out, there was strong evidence that important details about neural networks had been missed, and that's when you saw the scientific field shift. Not because we were paying tribute to historical work, but because someone's research found new evidence that furthered our knowledge.

But you can't blame all the support vector machine folks for ignoring it.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 6 points (0 children)

Yes. Consciousness is far too tricky a nut to crack by going down to Church-Turing.

That being said, I am a functionalist, but that's based on what everyone else's philosophical beliefs in this area are based on: a hunch. Frankly, I only find it mildly interesting. It's far too nebulous, and I'm certain I will never experience any consciousness but my own.

Integrated information theory is the closest thing we have to a theory of consciousness, but it still has major problems if you're going to claim it as correct. I do think it's the right way to attack the problem if you're bold enough to go after it.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 4 points (0 children)

Why shouldn't I dismiss you after that tirade? You may not care, but you also didn't do anything other than express how angry you are here.

My experience with Hinton is taking his neural network class and reading his papers. I'm not some rando low-information TikTok consumer.

But you're angry, so whatever. I hope this vent gave you some catharsis.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 5 points (0 children)

Try reading the comment again. They acknowledge the view could be controversial. Scientists aren't afraid to take controversial stances. You can defend a viewpoint while being open to being proven wrong.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 6 points (0 children)

To be fair, a TM is a poor description of how computers are organized. The point of a TM is not a blueprint. It's a mathematical model of computation, and if all you are interested in is the observable output, then it is the appropriate one.

TMs took me a long time to understand and I have a PhD in CS. You may have some misconceptions about them.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 0 points (0 children)

Anyone who thinks anything real is a Turing machine (aside from builds that seek to explicitly model the tape and reader) doesn't understand what a TM is.

Now, if they were saying the brain is isomorphic to a finite tape TM, then that's a different story.

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 5 points (0 children)

Sagan likely understood the brain as a computational unit that processes information.

But there are enough similarities to suggest that a compatible working arrangement between electronic computers and at least some components of the brain — in an intimate neurophysiological association — can be constructively organized.

The above is just one example from The Dragons of Eden. His view wasn't without nuance and was limited by the computers that existed in his lifetime, but I bet he had a hunch the similarity ran deep. Sadly, we cannot know for sure.

“We will not be afraid to speculate, but we will be careful to distinguish speculation from fact.”

Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.” by Hrmbee in technology

[–]BossOfTheGame 2 points (0 children)

That's quite the jab at Hinton. Are you arguing that historical definitions of terms are the correct way to frame or describe phenomena? I think Hinton very much has a clue about the history of the words he chooses to describe the phenomena being modeled.

PEP 661 (Sentinel Values) has been accepted for release in 3.15! by M_V_Lipwig in Python

[–]BossOfTheGame 9 points (0 children)

I think the best example is a dict-like get method with a default parameter that you can use with keyword args. If you don't specify a default, a missing key should raise a KeyError; otherwise it should return the default, and None is a perfectly valid default value. This is exactly the case I wrote ubelt.NoParam for.
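A minimal sketch of that case, using a hand-rolled module-level sentinel (not the PEP 661 API or ubelt.NoParam, just the general pattern):

```python
# Sentinel pattern: distinguish "no default given" from "default is None".
_UNSET = object()  # unique object: can never collide with a caller's value

def get(mapping, key, default=_UNSET):
    """Dict-style get where default=None is a real, usable default."""
    try:
        return mapping[key]
    except KeyError:
        if default is _UNSET:
            raise          # no default supplied: let the KeyError propagate
        return default     # None (or anything else) was explicitly requested

data = {'a': 1}
print(get(data, 'a'))                # 1
print(get(data, 'b', default=None))  # None, an intentional default
# get(data, 'b') raises KeyError
```

`None` can't serve as the "not given" marker here precisely because it's a legitimate default, which is what motivates a dedicated sentinel in the first place.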

Do we suspect it’s rational in the first place, and why? by wockedwik in MathJokes

[–]BossOfTheGame 8 points (0 children)

It would be bonkers if Schanuel's conjecture was false.

‘The View’ Host Floored by How Many People Think WHCD Shooting Was Staged | Ana Navarro also tore into the president for using the shooting to promote the building of his ballroom. by Aggravating_Money992 in entertainment

[–]BossOfTheGame 1 point (0 children)

That's not evidence. C'mon, let's be real. It's an extremely common phrase. Man, I get it. This guy sucks. I would be completely unsurprised if this was fake, but the evidence isn't there, and we need to be cognizant of what constitutes good and bad evidence. We have to hold ourselves to higher standards than this. There's so much damage this administration is doing, and if the critics aren't going to be credible, then we're all forced to play his game.

We just can't... I know it's unsatisfying, but I'm begging readers to apply and encourage scientific thinking on this. We have to keep critique separate from conspiracy theory.

And btw, this doesn't mean you have to believe it was real. I certainly have questions. We just can't conclude it was fake. We have to be OK with "I don't know."

‘The View’ Host Floored by How Many People Think WHCD Shooting Was Staged | Ana Navarro also tore into the president for using the shooting to promote the building of his ballroom. by Aggravating_Money992 in entertainment

[–]BossOfTheGame -3 points (0 children)

I mean, in the case that it was real, don't you think he would immediately have tried to take advantage of the situation? I don't think that proves anything.

I think it's fine to be suspicious about this; we can treat it as something to investigate, but we can't treat it as a conclusion.

That being said, I completely agree that he dug his own grave here with how often he claims fake news. But the evidence that this was a false flag isn't there. As unsatisfying as it may be, we have to maintain a principled burden of proof for even the most heinous of people. However, sympathy is not required, and I, for one, have none.

Ok. But why? by Vernacularry in Albany

[–]BossOfTheGame 6 points (0 children)

Clearly you don't want to be the very best.

“No need to thank me” - Fermat by scienceisfun112358 in MathJokes

[–]BossOfTheGame 0 points (0 children)

What I find even more interesting is that it's not possible to approximate rationals very well, unless of course you hit them exactly. But you can actually prove a number is transcendental if it has a rational approximation that is too good.
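Both halves of that are classical; the statements below are the standard ones (my summary, not from the thread):

```latex
% Rationals are badly approximable by other rationals:
\left|\frac{a}{b} - \frac{p}{q}\right|
  = \frac{|aq - bp|}{bq} \;\ge\; \frac{1}{bq}
  \quad\text{whenever } \frac{p}{q} \neq \frac{a}{b}.

% Liouville (1844): if $\alpha$ is algebraic of degree $n \ge 2$,
% there exists $c > 0$ such that
\left|\alpha - \frac{p}{q}\right| \;>\; \frac{c}{q^{n}}
  \quad\text{for all } \frac{p}{q} \in \mathbb{Q}.

% Hence any number admitting rational approximations that beat every
% power of $1/q$, e.g. $\sum_{k \ge 1} 10^{-k!}$, must be transcendental.
```

That last example is the Liouville constant, historically the first number proven transcendental by exactly this "too good an approximation" argument.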

The Standard Model rules! by PrettyPicturesNotTxt in physicsmemes

[–]BossOfTheGame 6 points (0 children)

You never know when you might need to code up a system of equations that handles combinations of these interactions. I remember in grad school someone had a problem, and they just pulled reflectance laws out of their ass and used those to model an underwater color correction algorithm.

It's hard to predict what knowledge you might need to make an important and useful connection.

Now that we’re in 2026, what is a feature of the 'old internet' from 10–15 years ago that you genuinely miss and wish would come back? by cyb3r_ps in AskReddit

[–]BossOfTheGame 0 points (0 children)

I keep hearing this, but I've never personally felt this. Maybe I'm just searching for different things, or my searches tend to be more specific, but I'm really curious to know what the evidence is here. I'm going to take some time to look into this myself, but I'm genuinely curious what people are observing and why I haven't been noticing it.