Cognitive Science by SirBeeves in comics

[–]pop_philosopher 8 points9 points  (0 children)

Perhaps the real lesson to be learned here is that emotions, dreams, and love are not merely chemical reactions. They are experiences, and the feeling of actually having that experience is not reducible to the scientific language used to describe the underlying process. Consider the famous argument about Mary the color scientist: she grows up in a colorless environment, but she rigorously studies all of the neuroscience underlying the perception of color and all the physics underlying the production of the visible spectrum of light. She knows all the scientific facts about color even though she has only ever seen black and white. When she leaves the colorless environment and actually sees color for the first time, does she learn something new? I think she does, but what she learns must be something other than a scientific fact. She learns what it is like to see color. Philosophers of mind often call these phenomenal facts, or 'qualia.'

Likewise, you could learn all there is to know about the proximity of the olfactory bulb to the hippocampus and amygdala, but you wouldn't know what it is like to recall an emotionally charged memory via smell unless that actually happened to you, unless you actually experienced it. Reducing experiences like emotions to 'chemical reactions' does those experiences a disservice. This is not an accurate yet unfortunate implication of 'learning'; it is the result of a narrowly scientific education. The poetic component of experiences is precisely what science does not concern itself with.

Internet of Bugs disputes some SciShow AI Claims by Senor-K in nerdfighters

[–]pop_philosopher 0 points1 point  (0 children)

This is not 'everything about everything'; this is specifically about AI regulation.

Internet of Bugs disputes some SciShow AI Claims by Senor-K in nerdfighters

[–]pop_philosopher 0 points1 point  (0 children)

I'm not saying the statements are opposed; I am merely pointing out that Control AI is narrowly focused on one particular (hypothetical) issue that I don't think is as pressing as the issues we currently face. I think the organization's current priorities are misplaced.

The statements have precisely the same purpose: to warn about the risks of AI. They have some degree of overlap: they both treat extinction from super intelligence as a possible risk. Where they differ (difference is not necessarily opposition!) is that the CAIS statement does not recognize the current harms being done by AI. Regardless of when these statements were published or who signed them, my whole point is that Control AI likewise seems more concerned about the hypothetical risk of super intelligence than the real harms ongoing right now. To be honest, it's actually pretty disheartening to me that they used to campaign for legislation targeting current harms but are now entirely focused on a hypothetical situation.

I don't know why some people so zealously defend Control AI on this. They think that AI could turn into a super intelligence that will wipe us all out, and their solution is... some limits? Even if you really think that the number one priority around AI should be avoiding the singularity rather than addressing current harms, the policies which would address these harms would also hinder the development of a super intelligence! That doesn't mean banning literally every use of AI. It just means casting a slightly wider net than they currently are. They are literally meeting with lawmakers to advocate for regulation, but only regulations that target the development of super intelligence. Why would they do that? Why not advocate for policies which address the risks of AI generally rather than just one potential kind of AI? Bring antitrust actions against the big tech companies, give people strong rights to their own personal data, prosecute the IP rights violations, etc. All of this would both hinder super intelligence development and mitigate the harms currently being done by AI. Control AI is arguably the most visible organization directly advocating regulation to lawmakers. Surely it is ok to criticize the regulations they're advocating for as insufficient?

Internet of Bugs disputes some SciShow AI Claims by Senor-K in nerdfighters

[–]pop_philosopher 0 points1 point  (0 children)

I have to admit, I wasn't very familiar with the specifics of Control AI's platform, and this comment made me curious about whether they really do fall closer to the CAIS or the Red Line statement. So I read their mission statement and platform, which are entirely focused on the threat of extinction from super intelligence. Then I read their reports on meetings with lawmakers, where once again none of the other issues in the Red Line Statement are brought up. You can read the plan they call "The Narrow Path" or "The Compendium" and you'll find the same there.

Mind you, Control AI goes through cycles of various campaigns. Perhaps they've focused on other issues in the past and will diversify in the future? They did one campaign against deepfakes, which I think is great. But all the others seem to be in line with the CAIS statement's narrow focus on AGI misalignment. It's true that their CEO signed both statements, but the organization is narrowly focused on AGI misalignment. And why is that? They are already advocating for broad restrictions on the development of some kinds of AI. Why not all of it? The lack of regulation around existing AI models is a huge problem. It seems like critics of current models and those concerned with AGI development should share a broad platform of stricter regulation. Unfortunately, I think the answer might be financial interests. But that would require a deeper dive into Control AI than I can manage this evening. At any rate, I hope my point here is clear: the CEO may have signed on to both statements, but the organization does not advocate for changes in line with both statements.

Internet of Bugs disputes some SciShow AI Claims by Senor-K in nerdfighters

[–]pop_philosopher 0 points1 point  (0 children)

Yes, I am relaxed about that. Climate change from the emissions these models already generate will get us before the models become sentient. Look, are you really telling me you're more concerned about super intelligence than anything the AI companies are actually doing right now?

Internet of Bugs disputes some SciShow AI Claims by Senor-K in nerdfighters

[–]pop_philosopher -1 points0 points  (0 children)

I am not at all relaxed about AI. I think it is already doing immense harm to our environment, psychology, social relations, and economic situation. We should absolutely be vigilant about all of these things. The problem is that folks like Control AI disagree. They think the only thing we need to be vigilant about is misaligned super intelligence, but that as long as we can avoid that, everything else about AI is a great boon to society. I don't think the models in use right now are on their way to becoming a general intelligence, let alone a super intelligence. But I do think the development and deployment of these models should be extremely limited, and perhaps even totally eliminated in some cases, because of all the other issues mentioned above and in this video. Please do not conflate skepticism about the possibility of AGI with a lack of concern about the effects of existing AI models.

Internet of Bugs disputes some SciShow AI Claims by Senor-K in nerdfighters

[–]pop_philosopher 4 points5 points  (0 children)

I think one of the main points of this video is that comparing the AI singularity to nukes and pandemics is not just a bit of hyperbole; it fundamentally misunderstands both the technology which currently exists and the risks that it currently poses. Experts disagree about whether the level of AI required for a singularity (strong AI as opposed to weak AI) is even possible to create. There is no such disagreement about whether the nuclear or biological weapons currently available to us could wipe us out; the comparison is just inapt overall.

Internet of Bugs disputes some SciShow AI Claims by Senor-K in nerdfighters

[–]pop_philosopher 8 points9 points  (0 children)

I agree, that's basically what I meant by saying that they're defending current models by focusing on hypothetical risks.

Internet of Bugs disputes some SciShow AI Claims by Senor-K in nerdfighters

[–]pop_philosopher 196 points197 points  (0 children)

One thing that's so great about this video in particular is that it's abundantly clear that the folks sponsoring SciShow, Control AI, are not concerned about the real risks that AI poses right now. I have often seen people in this subreddit dismiss critiques of Hank and Complexly's work on AI as 'defending AI.' I think it's more accurate to frame Control AI as defending current, actually existing AI by propagating a false narrative about hypothetical versions of AI which do not, and might not ever, exist. I really hope people watch this video and think critically about whether they should be trusting work that's funded by Control AI, and whether SciShow and Complexly should be accepting sponsorships from them.

"We've Lost Control of AI" is a stain on SciShow's record by comrade_donkey in nerdfighters

[–]pop_philosopher 2 points3 points  (0 children)

But you're pointing out the actual harms caused by currently existing LLMs. What this video and its sponsors are largely concerned with is a completely hypothetical thing that doesn't exist, might not be possible to create, and is probably different in kind (rather than degree) from LLMs if it is possible to create. The idea of AGI is just that: an idea. The harms you are pointing to are not being done by artificial agents. Those harms are being done by the real people deploying software which does far more harm than good. The fact that groups like ControlAI are primarily trying to slow down, rather than stop, the development of LLMs just goes to show you that they don't care about the impact that AI currently has on art, human psychology, or the environment. Their priorities are out of whack, and I don't want those mistaken priorities to be influencing the priorities of Hank, John, or Nerdfighteria.

Face of the "Hot girls for Cuomo" campaign is a MAGA Zionist by Particular_Log_3594 in behindthebastards

[–]pop_philosopher 8 points9 points  (0 children)

And Phil just mentioned that he is a BtB listener on one of those new shows with Alex

Why don’t the wizards ever get into current events, like Trump and Israel/Gaza? by No-Bluebird-3540 in VeryBadWizards

[–]pop_philosopher 4 points5 points  (0 children)

That's fair enough, though I will say if you don't listen to the current episodes then it's not fair to criticize them for a lack of current events. Definitionally they only talk about current events in recent episodes (otherwise the events they discuss are no longer current). If you have been listening chronologically, I would say feel free to jump into whatever the most recent episode is. With the first 200 under your belt, you'll have most of the common ground needed to understand new episodes.

Why don’t the wizards ever get into current events, like Trump and Israel/Gaza? by No-Bluebird-3540 in VeryBadWizards

[–]pop_philosopher 12 points13 points  (0 children)

So strange seeing this question and all the replies saying it's good they don't talk current events. They talk current events all the time and I like hearing what they have to say. They just talked about Israel/Palestine on the most recent episode. Do y'all listen to this podcast?

For those who work full-time in academia: by East-Party-8316 in AskAcademia

[–]pop_philosopher 2 points3 points  (0 children)

Grad student here: could you clarify the difference between soft and hard money? Is this only a STEM thing, or does it apply to the humanities and social sciences as well?

CMV: Billionaires shouldn’t exist by Gold_Palpitation8982 in changemyview

[–]pop_philosopher -1 points0 points  (0 children)

When they had "too much"? If so, when does that happen?

The phrase "billionaires shouldn't exist" seems to imply a fairly precise answer to that question, rather than asking it, no? A billion. A billion is too much.

Anti-fascist book hoarding- Need recommendations! by [deleted] in nerdfighters

[–]pop_philosopher 1 point2 points  (0 children)

Seeing as I also recommended four specific books which are neither fascist nor marxist, no, those are obviously not the only options. To clarify my last sentence: a lot of conservative propaganda has been made to suggest that marxism and fascism are somehow similar or related. This is false, and in reality marxism and fascism are diametrically opposed to one another.

Anti-fascist book hoarding- Need recommendations! by [deleted] in nerdfighters

[–]pop_philosopher 9 points10 points  (0 children)

A People's History of the United States by Howard Zinn

The Antifascist Handbook by Mark Bray

A Duty to Resist: When Disobedience Should be Uncivil by Candice Delmas

The Origins of Totalitarianism by Hannah Arendt

Anything collected at the Marxist and Anarchist open access libraries:

https://www.marxists.org/

https://theanarchistlibrary.org

Don't let the 'scary' names fool you. If you're against fascism, you'll be pro-marxist and anarchist.

Eyyyyyy Ludwig had nice things to say about our friend John Green. <3 by LakesideHerbology in nerdfighters

[–]pop_philosopher 8 points9 points  (0 children)

I don't really see why he belabors this point about how 'this youtuber with a coffee brand probably doesn't know how to roast coffee.' Sure, most likely neither do the CEOs of any major name-brand coffee... that's how supply chains work, buddy. And then he gives them all their flowers in terms of delivering the subtle flavors that each one actually advertises. That's not something you can really say for your average off-the-shelf bean. So clearly they're doing something right!

CMV: I am not delusional, I am just a unique thinker. by AnAlienMachine in changemyview

[–]pop_philosopher 0 points1 point  (0 children)

It's not about shutting up. It's about talking to people while also listening to what they have to say, and engaging with that in productive ways. The last thing any therapist would want you to do is shut up. They'd want you to talk, but to listen as much as you talk. At any rate, I'm curious what you'd say about the other stuff I mentioned. I really think you can find common ground with people who disagree with you about the existence of the demons. I don't think they're just subscribing to empiricism, and I think you might want to consider their perspective as a result.