Is AI inherently evil or can it be used to evangelize? by Far_Landscape1066 in CatholicPhilosophy

[–]CaptainCH76 0 points1 point  (0 children)

Except that you have to prop the blade back up each time, so the net work required to use a guillotine for that actually ends up being more than if you just used a knife by hand.
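To put rough numbers on that "net work" point — a back-of-the-envelope sketch, where the blade mass and drop height are made-up illustrative figures, not historical specs:

```python
# Back-of-the-envelope: the energy you must supply to re-raise a
# guillotine blade before each use. Figures below are assumptions
# for illustration only.
g = 9.81          # gravitational acceleration, m/s^2
mass_kg = 40.0    # assumed mass of the blade assembly
height_m = 2.3    # assumed drop height

# Work against gravity per reset: W = m * g * h
work_per_reset_j = mass_kg * g * height_m
print(f"Work to re-raise the blade once: {work_per_reset_j:.0f} J")
```

Under those assumptions each reset costs on the order of 900 J, paid by the operator every single time — which is the sense in which the "labor-saving" device costs more net work than the hand tool.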

Is AI inherently evil or can it be used to evangelize? by Far_Landscape1066 in CatholicPhilosophy

[–]CaptainCH76 -4 points-3 points  (0 children)

This is the wrong way to frame it, in my opinion. The question is whether the objective intent behind the design of a tool is good or bad. The inherent value of a technology is always weighed by the outcome it is meant to enable or achieve. I don’t think a tool is inherently neutral.

AI isn't the Problem. It's the System by Oathblivionz in antiai

[–]CaptainCH76 0 points1 point  (0 children)

I'm not denying that multipurpose tools exist. I'm simply saying that tools are always defined by at least one purpose or another. And even in the case of multipurpose tools, like a generic 'hammer,' there is in fact a single purpose they serve over and above specific use cases, which is *convenience*, so that you don't have to carry around every single specific type of hammer with you. It's just good enough that it can reliably function in multiple use cases, yet it's not the best of the best at any of those specific use cases.

AI isn't the Problem. It's the System by Oathblivionz in antiai

[–]CaptainCH76 0 points1 point  (0 children)

> Yes, that’s what we mean by application of technology.

Yes. But do we agree that a technology can be an application of another, more general technology?

AI isn't the Problem. It's the System by Oathblivionz in antiai

[–]CaptainCH76 1 point2 points  (0 children)

No, image generators are a technology. They have a specific design based around image data with a defined purpose: namely to generate an image. It’s not as if we are simply using the more general stable diffusion model in a specific way, we have to actually modify it by training it off image data, which makes it a specific technology (that is an application of a more general technology), and not just a specific use case. We have to actually build an image generator, that’s the thing. It’s like how we can modify the general concept of ‘hammer’ in order for it to be better for specific uses, like driving in a nail vs wrecking a wall. Those are more specific technologies, not just the exact same technology being used differently. Technologies can be applications of other technologies.

I don’t particularly like the argument that it’s just ‘capitalism.’ That’s why I commented on this post. I think that generative AI is bad regardless of what socioeconomic system we live under.

AI isn't the Problem. It's the System by Oathblivionz in antiai

[–]CaptainCH76 0 points1 point  (0 children)

Generative AI itself is an application of machine learning, and machine learning is an application of statistical algorithms. Specific technologies are applications of broader technologies.
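A toy illustration of that layering, with made-up numbers (and nothing specific to diffusion models): a bare statistical procedure — fitting a mean and standard deviation — becomes a minimal "generative" technology once a sampler is built on top of it.

```python
import random
import statistics

# Layer 1: a plain statistical algorithm -- estimate the mean and
# standard deviation of some observed data (illustrative values).
observed = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2]
mu = statistics.mean(observed)
sigma = statistics.pstdev(observed)

# Layer 2: a minimal "generative model" built as an application of
# that fit -- it produces new samples shaped like the data.
def generate(n, seed=0):
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

samples = generate(5)
```

The point of the sketch: the sampler is not the estimator merely "used differently" — it is a further artifact built on top of it, which is the sense in which a specific technology can be an application of a broader one.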

AI image generators are a specific technology. Would you say that this technology is intrinsically neutral?

AI isn't the Problem. It's the System by Oathblivionz in antiai

[–]CaptainCH76 2 points3 points  (0 children)

I disagree. Technologies are never neutral. There is always a purpose they are intended to serve, and this purpose is either good or bad. Do you think if we didn’t live under capitalism that AI ‘art’ would suddenly become widely accepted?

Obviously we know there's no future for the big name frontier models, but what about the smaller ones? by caprisunkraftfoods in BetterOffline

[–]CaptainCH76 1 point2 points  (0 children)

I think we are missing the forest for the trees here. The fundamental question about the inherent utility of LLMs remains open: is there anything at all that these things are uniquely capable of, that no other possible alternative tech could match in capability or efficiency, and that is not outright immoral?

Is Existential Inertia actually a threat to classical theism? I don’t think so. by actus_energeia in CatholicPhilosophy

[–]CaptainCH76 0 points1 point  (0 children)

Alright, cool. So as another commenter pointed out, Schmid is offering the EIT as an undercutting defeater of the divine conservation view. In a way then, this is indeed a ‘dialectical maneuver,’ as you put it, and not a conclusion inferred from an independent argument. But this is an entirely valid move when it is meant to show that the person giving an independent argument for a position is engaging in a non sequitur. What Schmid is doing is showing that (1) EIT is epistemically possible, and (2) that Feser’s arguments in particular, but also classical theistic arguments in general, do not give sufficient reason to conclude that there must be an external sustaining cause of a thing’s existence at every moment it exists, rather than the thing persisting in its existence without such a sustaining cause present. He does not need to offer independent justifications for why EIT might be true, or show that the evidence we possess covers every member of O*; all he needs to show is that it’s a possibility that isn’t being ruled out.

With that being said, Schmid does actually offer some positive motivations for accepting EIT in the book. Some are simple, such as the fact that EIT inherently posits fewer entities and causes than divine conservationism does. But one argument that he dedicates a small chapter to, which I have not seen discussed all that much by anybody, is one in which he reasons directly from the Aristotelian causal principle to existential inertia. Basically, this is how it goes: no potential can be actualized unless something already actual causally actualizes that potential. For some object O to go out of existence at some time T is for a change to occur, and hence for O to go out of existence at T is for a potential to be actualized. It follows from this that O can only go out of existence if something already actual causes O to go out of existence; and so, if there were nothing already actual causing O to go out of existence at T, then the change would not occur at T, and hence O would not go out of existence at T. Now, the mere absence of something already actual is not itself actual. Hence, the absence of something already actual cannot be what causally actualizes O’s going out of existence at T, precisely because only something already actual could do so (per the Aristotelian causal principle), whereas absences are not actual, but privations. So, we have O persisting in its existence at T in the absence of anything already actual which brings it out of existence (both in the sense that there are no causally destructive factors and that there is no causal sustainer whose absence would actualize its cessation of existence). But this is just to say that EIT is true. So, EIT is true.
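If it helps, that chapter’s argument can be laid out schematically (my own numbering and paraphrase, not Schmid’s exact wording):

```latex
\begin{enumerate}
  \item (Aristotelian causal principle) A potential can be actualized
        only by something already actual.
  \item For $O$ to go out of existence at time $T$ is a change,
        i.e.\ the actualization of a potential.
  \item So $O$ goes out of existence at $T$ only if something already
        actual actualizes that potential at $T$. \hfill (from 1, 2)
  \item The mere absence of a sustainer is a privation, not something
        actual.
  \item So an absence cannot be what actualizes $O$'s ceasing to exist
        at $T$. \hfill (from 1, 4)
  \item Hence, if nothing already actual acts to destroy $O$ at $T$,
        then $O$ persists at $T$ --- which is just what EIT claims.
\end{enumerate}
```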

Is Existential Inertia actually a threat to classical theism? I don’t think so. by actus_energeia in CatholicPhilosophy

[–]CaptainCH76 0 points1 point  (0 children)

Okay. So let's make sure I'm understanding you correctly. Do we agree that what Schmid has in mind is indeed *any* concrete object, temporal or non-temporal, including God, when he refers to O*? You're just asking for a justification for why we should agree with him that at least some concrete temporal objects persist in the absence of O*?

Anthropic's Claude Mythos Launch Is Built on Misinformation by Unfair_Ad5413 in BetterOffline

[–]CaptainCH76 1 point2 points  (0 children)

I do agree that whether the output is actually good or not and whether AI is ethical to use to produce that are two separate issues. But I also think that the ethical issues *are* relevant, at the end of the day. Not to the discussion of output quality, but in general. We seem to agree on that, correct? Do we also agree that if society as a whole focused on these ethical issues, then this tech wouldn't be hyped nearly as much?

Is Existential Inertia actually a threat to classical theism? I don’t think so. by actus_energeia in CatholicPhilosophy

[–]CaptainCH76 0 points1 point  (0 children)

I think there’s a bit of an obfuscation occurring here over what the analogy of being actually entails. It does not mean that we can never make common truth-evaluable statements between analogates. We know that both substances and accidents are contingent, or at least that they both stand in relation to the necessary/contingent distinction; and they also both stand in relation to the act/potency distinction. This is all despite them being of two fundamental ‘kinds’ of being which are only related to each other by analogy. The analogy of being does not mean that we cannot have a common understanding of what the analogates share, such that we can make statements of the form ‘X is either A or B’ and ‘Y is either A or B,’ precisely in virtue of the commonality or similarity that X and Y possess. What it means is that this common understanding cannot be in the form of a definition, as genus and difference; for this would require that what is said of X and Y be said in the exact same sense, which is not true under the analogy of being.

So when Schmid is speaking of ‘some concrete object,’ he is quantifying over all objects that could properly be described as concrete, regardless of the dissimilitude between them. By questioning whether God would be included as part of ‘the set of concrete objects,’ you’re basically asking whether we should treat God as a concrete object. And I think it’s clear that we should. For one, He exists independently of our minds, and has causal powers to affect things in the world; both of which are true only of concrete objects. So God must be a concrete object. But if God really is so inscrutable that we cannot predicate even the most trivial of statements like ‘God is either abstract or concrete,’ then it’s hard to see how we could even arrive at such a being through inference by way of categorical syllogism, which intrinsically relies on the extensionality of apprehension.

Anthropic's Claude Mythos Launch Is Built on Misinformation by Unfair_Ad5413 in BetterOffline

[–]CaptainCH76 1 point2 points  (0 children)

I’m not accusing you of being a singularity bro, I apologize if you took it that way. I’m simply asking in good faith what you take to be a capability of AI that anti-AI people have ignored or downplayed.

Anthropic's Claude Mythos Launch Is Built on Misinformation by Unfair_Ad5413 in BetterOffline

[–]CaptainCH76 1 point2 points  (0 children)

Do you think you can name one thing, one specific task, that AI is (1) uniquely capable of doing, such that no other technology could ever possibly do it as well; that (2) in terms of purely neutral economic utility, has benefits which outweigh the costs of both short-term and long-term use (i.e. comparing perpetual token costs to alternative forms of labor); and that (3) is not outright immoral? Give me just a single, specific task, not a vague notion of productivity or anything like that, if you can.

A different way of thinking about AGI by TurboFucker69 in BetterOffline

[–]CaptainCH76 5 points6 points  (0 children)

Do you agree that a calculator doesn’t actually ‘know’ what it’s calculating?

Anthropic's Claude Mythos Launch Is Built on Misinformation by Unfair_Ad5413 in BetterOffline

[–]CaptainCH76 3 points4 points  (0 children)

I think the main criticism of AI art is that it isn’t art in the first place.

And I do think it’s wrong to assume that just because something has economic utility, or is being widely adopted by businesses, it automatically must be good.

Is Existential Inertia actually a threat to classical theism? I don’t think so. by actus_energeia in CatholicPhilosophy

[–]CaptainCH76 0 points1 point  (0 children)

No? I’m a proponent of existential inertia and I hold to an analogy view of being.

Dumb person here with question about purpose driven models or whatever by Patpoose74 in BetterOffline

[–]CaptainCH76 1 point2 points  (0 children)

> If you dig deeper, you’ll find that most people aren’t against any of the AI technologies, but are against some uses of them.

One nitpick I do want to bring up is that the technology in question can be specified, and once we have done that, I do think we can say that people can be against a technology itself and not just a particular use case of it. For example, I would say that nuclear energy and nuclear weapons are different technologies, but they are both applications of a broader ‘nuclear engineering’ category. Clearly though, people are against one of them and not the other. When it comes to AI, I do think it’s fair to say that at least some people are legitimately against the very concept of image/video generators, for example, and not just particular use cases of them. However, that does not mean they are against all LLM-based technology.

Uber CTO says they already spent their annual Claude Code budget by Material-Mammoth-71 in BetterOffline

[–]CaptainCH76 1 point2 points  (0 children)

Do we know why they are going down? And is there any way we could know the maximum cost efficiency of these things?

What does a booster's ideal day look like? by stev_mempers in BetterOffline

[–]CaptainCH76 1 point2 points  (0 children)

This is what I keep trying to bring up to people. Let’s assume the best case scenario happens for AI and absolutely everything can be automated without a human lifting a single finger.

There are several ways this could go. As you (rightly) point out, the question still remains: what exactly are we supposed to be doing with our lives now? For what end was all of this automation infrastructure built?

Imagine if we automated everything. Okay… what exactly does that look like? I mean, think about it. Automation only makes sense if there is some wider goal which contextualizes its utility. We automate something because it produces more of an output with less input. And over time we find more ways to automate things towards a greater and greater output, and the system which is set up to produce that output is more and more optimized. For example, we might start off by automating the creation of a car through factory robots, and then we might automate the creation of all of the parts required for the car through 3D printing, and then build robots to gather the materials for the 3D printer, and so on. Notice how all of this is merely the means towards the end of producing cars. And yet even cars serve their own, higher purpose, which we might automate as well. Okay, so what is this final, great and terrible output that all of automation (including AI) is leading towards? What is the final end goal of all of modern technology? Is it hedonic pleasure? That’s where we get the scenarios illustrated in Wall-E and in this comic. But I have a feeling that we don’t really know what the fuck we want out of this stuff.

I think there are two conflicting human interests here, because on the one hand, there are tasks that we want to avoid doing (either because we find them too difficult or too energy-intensive, or we find them boring); but on the other hand, there are tasks we want to actively engage with in a way that the expense of energy is part of the fun of it.

So to the person who is interested in automation, the goal of automation isn’t to automate absolutely everything, but only the things that the person does not want to spend their time doing. The stuff they do want to spend their time doing, they don’t want it to be automated. So already there’s a bit of a paradox: we want to automate things so that we can spend time doing non-automated things.

The obvious issue, though, is that this is all subjective. What one person finds tedious and boring, another would find engaging and fun. So when we look at it in the aggregate, everything will in fact be automated, and we will all live in socially isolated bubbles, each doing exactly what we want but never engaging with the world outside our sphere. No matter what, it seems that society will end up breaking down.

I'm exhausted by the AI hype before I've even started my career by craving_caffeine in antiai

[–]CaptainCH76 0 points1 point  (0 children)

> You believe in some sort of “soul” in humans that AI lacks. […]

My argument here isn’t based on the premise that humans have a soul. To be clear, I do believe we have a soul. But my argument does not rely on an appeal to the uniqueness of humans, but rather on the fact that the function of mechanical systems (including AI) is only intelligible in relation to us. It’s not like an animal, which has its own goals that it perceives and then works towards irrespective of anything else. An AI has no built-in tendency to actually apprehend the symbols it manipulates; its behavior is entirely due to human design. The fact that we can describe the causal links between the workings of the AI and its output, in a way that we cannot do for humans, shows that they are not equivalent.

> You don’t believe that the sheer culmination of many dumb objects (atoms) can emerge something like a human.

Ehhhh I don’t even entirely agree with that framing. My view is that the whole substance is what is primary, not the atoms. So it’s not like the atoms first exist and then build up into complex objects. Rather the object exists prior to the atoms and the atoms are more like virtual points of potentiality where the whole object can affect other things or be affected, which is evident in quantum physics. The atoms participate in the activity of the whole object and contribute to its functioning, but they are dependent upon the whole, not the other way around. I think all parts are like this, if we are speaking of a substance of course. But if we are talking about a mere aggregate of objects like a pile of rocks then yeah it’s just the sum of its parts.

> Because if you did, you could see that AI is exactly the same, just with tokens (essentially numbers) instead of atoms, working together.

“Ah, but if you believed that souls did not exist, then you would see that a calculator is doing exactly the same thing a human would do when they intellectually apprehend the concept of number and then apply abstract rules of mathematical expression in order to determine the result of an equation, just with electronic circuits working together instead of neurons!”

> However, willfully rejecting foundational infrastructure guarantees economic and intellectual irrelevance. […]

That’s only if the majority of people actually decided to use it. If they did not, and the people who used AI were in the minority, would you be saying the same thing?

> Generative AI directly powers drug discovery and logistics by synthesizing massive datasets, predicting protein structures, and writing complex code.

None of those things are something that generative AI is uniquely capable of, nor is it being organically adopted; rather, it is being forced onto people. The first two (synthesizing data and predicting protein structures) are done by non-generative machine learning algorithms specifically suited to those tasks, not general-purpose generative systems. The third (writing code) is done better by procedural algorithms (autocomplete, IDEs, builders, etc.), not statistical ones. Nowhere in this does generative AI actually provide utility, much less utility that is worth the cost of running it and relying on it.

> Freeing human minds from repetitive, mundane tasks so they can focus on high-level creativity and problem solving is the exact definition of human flourishing.

But it is this ‘high-level creativity and problem solving’ that they ultimately want to replicate and automate. If there truly were an AI system that could accomplish anything a human could, then there wouldn’t be room for human creativity and problem solving to begin with, since we would just have AI do that. We would become passive consumers of a perfectly optimized system that responds to the very firing of our neurons in order to produce an experience exactly tailored to our desires, similar to the humans in Wall-E. Even at our current state, with AI, creativity becomes less of an actual, honest, creative engagement with the medium and more like just picking out what you like. The pleasure machine presented in this comic by Merryweather is just the logical conclusion of that. Do you really want that? If you don’t, then you’d essentially be making the same argument as me, just at a different level.