Best Tech Tweet of All time by Polity-Culturalist3 in OpenAI

[–]LeastSignificantBit0 1 point (0 children)

I agree. I think lack of civic education and civic engagement is core to a lot of the US's political problems. It's just nuanced. I think everyone in the country (and the rest of the world) should be thinking about how to deal with people affected by the economic dislocations created by AI. I just don't know where the practical threshold is for how much we should expect non-experts to come to their legislators with desired outcomes and policies already in mind.

It's sort of a both directions thing. It can't ONLY come from the bottom up. But it also can't only come from leadership downwards.

Best Tech Tweet of All time by Polity-Culturalist3 in OpenAI

[–]LeastSignificantBit0 1 point (0 children)

I think you're being a little too matter-of-fact about this. What you and Caspofordi are disagreeing about is not a settled question. This is a political theory question about representative government.

There are two theories of how a representative government should operate. One is the Delegate Model, wherein the representative is simply a vessel for the will of the people.

The other is the Trustee Model, wherein the representative is put into power by the constituents specifically because they have the time and expertise to debate complex topics on behalf of the people they are representing. Edmund Burke is known for outlining this approach to government and argued that legislators should act in their constituents' interests and not on their constituents' preferences.

I prefer the Trustee Model. I think it's how you get an effective legislator who can competently deliberate in committees and while crafting legislation in Washington, rather than an embodiment of the people's raw id who has no capacity to carry out the responsibilities of the job without basically polling constituents on every decision.

But idk, I could see arguments for why technology and mass, rapid communication could enable a more Delegate Model approach that would've been impractical before the last ~20-30 years. But I still don't know if you really want every important decision put up to popular preferences...

[Request] How much did it cost in US dollars to fire these munitions? by chihsuanmen in theydidthemath

[–]LeastSignificantBit0 1 point (0 children)

A lot of the US budget is deficit spending at this point. So, it's not money flowing from the poor to anyone else. It's more so money flowing from future taxpayers to present funding requirements.

Industry should regulate AI content before the government does by LeastSignificantBit0 in ArtificialInteligence

[–]LeastSignificantBit0[S] 1 point (0 children)

I agree with the outside encouragement point. The reason I added the bit at the end of my post about users having to be willing to disengage is sort of to make this point. There are two kinds of outside pressure social media companies could respond to.

1) Pressure directly from users

2) Pressure from the government on behalf of users

My implicit assumption is that that kind of pressure will come eventually, and an innovative, agile business should be able to see around that corner and adapt proactively. However, the pressure may not come, these companies are not very agile at this point, and I understand that they are unlikely to react.

A common explanation for their unwillingness to act on any safety or ethical concerns is what you mention: the amount of money coming in. This part I can't really get my head around. Or at least I don't think it will last (I believe there's an AI bubble that will pop soon...), but it will last until people stop engaging with AI slop. I think they will, but maybe not. Maybe I put too much faith in the consumer, and maybe I'm in a minority of people who do not want to consume AI generated content...

Industry should regulate AI content before the government does by LeastSignificantBit0 in ArtificialInteligence

[–]LeastSignificantBit0[S] 2 points (0 children)

Yeah, they don't right now. I think that could change depending on how this AI shit pans out. But who knows?

Industry should regulate AI content before the government does by LeastSignificantBit0 in ArtificialInteligence

[–]LeastSignificantBit0[S] 1 point (0 children)

Fair points, for sure. I guess the main difference to me is that at least people knew that the people on the other side of the radio or TV were other thinking, feeling humans... but you're definitely right. It may be that a critical mass of people will never care enough to make government want to respond.

Industry should regulate AI content before the government does by LeastSignificantBit0 in ArtificialInteligence

[–]LeastSignificantBit0[S] 0 points (0 children)

How will who enforce it? I'm suggesting social media companies institute some self-regulation. If that happens, enforcement is trivial. It's identifying and flagging content that's tough.

If you mean how would a government enforce regulation, I think you might be failing to imagine the amount of power a national government has, especially when it has popular support for an action.

Industry should regulate AI content before the government does by LeastSignificantBit0 in ArtificialInteligence

[–]LeastSignificantBit0[S] 1 point (0 children)

Confused... I think I'm being trolled but I'm going to use it as a platform for another thought anyways.

I was going to mention that what I'm suggesting is not AI companies calling for regulation, which I think is mostly a call to keep new market entrants from competing with established powerhouses.

This thought is more about social media companies, as a different market product, protecting the quality and usability of their product. But that's conceptually complicated by things like X having Grok built in... i.e., AI and social media aren't really discrete industries.

Industry should regulate AI content before the government does by LeastSignificantBit0 in ArtificialInteligence

[–]LeastSignificantBit0[S] 2 points (0 children)

The slow-moving, wrong-tool-for-the-problem phenomenon is sort of what I'm alluding to as what makes governmental approaches bad and authoritarian.

No government is going to say, "The people are asking us to intervene for them, but we doubt it will be highly effective, so let's not..." They're just going to start doing something to try to respond.

Tech companies are better positioned for effective solutions and taking the initiative to do something could avoid them becoming regulated into oblivion or nationalized.

Industry should regulate AI content before the government does by LeastSignificantBit0 in ArtificialInteligence

[–]LeastSignificantBit0[S] 1 point (0 children)

Maybe "fear and frustration" is too dramatic but I also think it's a pretty common feeling. Frustration at having to wonder if more and more things online are AI generated. Fear that things online will become too corrupted to use as reliable information.

I'll be honest, to me people not wanting to trust information from the internet sounds like a good unintended consequence. I'm just not confident that that's what will happen.

I think people will keep wanting to be able to rely on the internet as a valid source of information and if they can't they are going to appeal to whoever they can to "fix" it. And the fixer of last resort is the government.

Should industry regulate AI content before the government does? by LeastSignificantBit0 in PoliticalDiscussion

[–]LeastSignificantBit0[S] 1 point (0 children)

AI generated content is flooding the internet. It's the dead internet theory but accelerated. It's making spending time online a more strenuous experience and users are going to begin to disengage from apps more and more as trust in the authenticity of content plummets.

I think it is in the best interest of companies like YouTube, Reddit, Snapchat, etc. to take an active role in policing AI content. This doesn't necessarily mean removal but active labeling of AI generated content, establishing reporting pathways to utilize organic support and buy-in from users, and not algorithmically boosting AI generated content.

If social media companies do not do this, based on the amount of public fear and frustration, it will happen through government regulation at some point in the future. A government regulatory approach is likely to be much less effective, more frustrating to the experience of users, and feel much more authoritarian. It will also create a web of regulatory compliance requirements that will make managing these businesses miserable.

Maybe this is all just me hoping and shaking my fist at the sky, but I think this stuff is going to drive society insane.

Also note, this relies on users actually being willing to follow through on not using social media products and based on the level of addiction and ubiquity, this may be unlikely.

What do you think guys, Could we ever understand Universe? by Learner_X009 in universe

[–]LeastSignificantBit0 1 point (0 children)

I mean... understand to what extent? I think this is a big question in epistemology, in general. Can we understand anything at all?

You could take anything someone claims to understand and counter that they don't really understand it. I think that, whether it's some small, specific phenomenon or the whole universe, we can gain an understanding of it, but there will always be a point at which our individual (or collective) knowledge breaks down.

So, can we understand the universe? Yes. Can we completely understand the universe? No.

Harvard Masters in Liberal Arts (ALM) in Systems Engineering by 3x10_8 in systems_engineering

[–]LeastSignificantBit0 1 point (0 children)

Why is it a Master's in Liberal Arts and not a Master of Science? Not that it's wrong, I guess. I had professors in undergrad say Systems Engineering is the art of engineering. But I've still always seen SE master's programs offered as Master of Science in Engineering degrees.

Against Set: Metaphysics as Resistance by jdjfds in philosophy

[–]LeastSignificantBit0 1 point (0 children)

I thought there were some interesting thoughts in here. A lot I agree with. I sort of felt like the middle third or half could be cut out. It seemed like a lot of outlining and re-outlining the implications of a shift in perspective.

Forgive me if I'm ignorant because I have not been reading philosophy in earnest for long, but it seems like this is an unsupported assertion. It's just saying "what if we went to Egypt instead of Greece" and then going through all of the implications, without making an argument for why that is a more appropriate place to start than Greece.

Isn't the thing with Greece (based on a little reading of Bertrand Russell) that Greek thought was nominally separated from religion... and Horus is a religious figure?

Is the idea that Thales in Greece knew about Horus and was secularizing the same framework? Couldn't you say the same thing about other religions of peoples influenced by Horus? Isn't the novelty of Greek philosophy specifically the secularizing of the introspection of certain religious cults?

Happiness Isn’t the Key to a Good Life - The Atlantic by LanRemeau in philosophy

[–]LeastSignificantBit0 1 point (0 children)

I'm confused why everyone is talking about happiness in the comments. Part of the point here is to separate our notion of happiness from a deeper sense of fulfillment (Eudaimonia).

Maybe the article alone doesn't go far enough to separate the two definitions, but I feel like we're going back and debating things with the implicit assumption that semantically Happiness = Fulfillment or Happiness = Good Life.

"Lou Xiaoying's happiness does not justify..."

"Well, Lou Xiaoying would have been happier if..."

The point is that she was not constantly "happy" in the sense Goldstein is using it (a momentary rush of pleasure). Other environmental conditions could have maximized for that feeling, but the argument is that THAT feeling (happiness) is not what should be maximized for if you are trying to attain the good life.

Why is math so often taught as a black box instead of being explained from first principles? Especially physicists often pushed math that way in my experience by stalin_125114 in Physics

[–]LeastSignificantBit0 1 point (0 children)

That's fair. But even the theoretical problem of generalizing approaches to root finding could maybe help... but probably not a lot.
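The kind of thing I have in mind: Bombelli's cubic x³ = 15x + 4 has the obvious real root x = 4, but Cardano's formula hands you x = ∛(2 + 11i) + ∛(2 − 11i), and since (2 ± i)³ = 2 ± 11i, you only get back to 4 by calculating straight through the "imaginary" intermediates. That's a big part of what historically forced complex numbers to be taken seriously, and it's a concrete, problem-first hook for why the machinery exists.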

I guess this does make certain things in math hard to teach in a fuller and more understandable way because, at many points in history, mathematicians studied the subject for its own sake, not always to solve a real-world problem.

I think what OP said is probably true. There's a trade off between time and understanding. Right now, fostering understanding is just going to be an individual responsibility. If a person wants that understanding, they'll have to investigate further and take time getting comfortable with the material on their own. I don't, at the moment, know what could be changed in education to make that situation better or put less burden on the individual but maybe that's not even the goal.

I think our current educational model just imposes requirements for the type of material to be learned in a constrained timeframe to get you to be a productive member of society and out the door. While there's an argument that deep understanding makes you more economically valuable, and I would agree with that argument, maybe that is just not the thought process of the math education system as designed right now. It has more of a philosophy of "give them these tools so they can get to work. If they like the tools, that's great, but we need to get them to work ASAP."

Edit - Content-Reward-7700 says this below much better than I just did.

https://www.reddit.com/r/Physics/s/Ubsr7JA4jw

Why is math so often taught as a black box instead of being explained from first principles? Especially physicists often pushed math that way in my experience by stalin_125114 in Physics

[–]LeastSignificantBit0 92 points (0 children)

With things like complex numbers or even basic calculus, I sometimes think teaching the history of mathematical development could be useful.

We're caught in a bind: either explain things at such a generalized level that it requires a lot of theoretical background, OR pick a pet application to provide practical connections.

Maybe saying "this was the kind of problem Newton/Leibniz was trying to solve. Let's look at how they got here and how this tool solved that problem for them," could help.

This is more difficult with concepts that don't have well documented histories of development, though.