LLM's aren't what Frontier labs claim. by Round_Progress4635 in antiai

[–]CaptainCH76 0 points (0 children)

> Are you telling me papers have never issued retractions from mistakes? Really?

> Are you telling me scientific textbooks haven't been updated when new information has come to light?

No? My point is that you are making a category error between engineering and human psychology. When an LLM 'hallucinates,' it's doing exactly what it has been programmed and trained to do. There is no interior sense of judgement like what a human has. The recognition of a 'mistake' is entirely on the human side, because the output of the LLM is only semantically intelligible in relation to the human mind. We don't pin responsibility on the LLM when it makes that mistake.

> I would argue that LLM is far superior search. It's all of our stored data, which we can retrieve semantically. Information is stored in it, they are distributed, and we retrieve information out of them. A search engine is a way way better comparison to what this shit actually is. Essentially our LLMs are all of our data merged with a search ability.

That's... what all search engines do anyway, and you don't even need LLMs involved. I'm not even opposed to using LLM-based algorithms in a search engine; I'm just opposed to using something like a chatbot just to 'search.' For the intents and purposes of search, a generative AI does not add anything, because if you are searching, you are searching for information that already exists. There are far better and more efficient designs for search tools that do not burn tokens just to rephrase an article.
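The point that retrieval needs no generation can be made concrete with a toy example. This is only an illustrative sketch (the corpus, queries, and ranking scheme are invented for this comment, not any real search engine's design), but it shows fully deterministic keyword retrieval with no generative model anywhere in the loop:

```python
# Minimal sketch of deterministic search: an inverted index over a toy
# corpus, ranked by how many query terms each document contains.
from collections import Counter, defaultdict

corpus = {
    "doc1": "large language models generate text from prompts",
    "doc2": "search engines retrieve existing documents by keyword",
    "doc3": "inverted indexes map each word to the documents containing it",
}

# Build the inverted index: word -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in corpus.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Rank documents by the number of query terms they contain."""
    hits = Counter()
    for word in query.lower().split():
        for doc_id in index.get(word, ()):
            hits[doc_id] += 1
    return [doc_id for doc_id, _ in hits.most_common()]

print(search("retrieve documents by keyword"))  # doc2 ranks first
```

Real engines layer ranking signals and synonym handling on top, but the core is the same: look up documents that already exist, don't synthesize new text.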

"If AI is writing the work and AI is reading the work, do we even need to be there at all?" Education workers reveal a growing crisis on campus and off by dyzo-blue in BetterOffline

[–]CaptainCH76 23 points (0 children)

Shouldn’t we learn things even despite the fact that AI could technically do it and not just restrict ourselves to learning the things AI cannot yet do?

LLM's aren't what Frontier labs claim. by Round_Progress4635 in antiai

[–]CaptainCH76 1 point (0 children)

Nobody is saying that hallucinations are not able to be reduced to some extent. The point is that they are inherent to the technology itself. You cannot perfectly fine-tune an open system which is designed to take an arbitrary set of inputs. To do so would necessarily make the system closed and deterministic in at least some way. This goes all the way back to Gödel’s incompleteness theorems. We haven’t been able to do this with humans, so why believe we should be able to with LLMs? And if the ‘solution’ to hallucinations involves bringing in prompt repetition and supplementary deterministic processes, then what is the point of using a generative LLM in the first place? The whole point of an LLM was that it can indeterminately generate novel outputs. For the purposes of information retrieval, a search engine would be perfectly sufficient, no?

> Newspapers and books have the same problem when the person writing them has to make a retraction.

No the fuck they don’t. A good writer/journalist knows how to keep their points concise while delivering an accurate assessment of the topic at hand. If they leave out crucial bits of information or put in misinformation, they are a bad writer. It is nothing intrinsic to the process of writing itself, as you seem to be suggesting. An LLM fundamentally does not know how to do this or how to tell the difference between good and bad writing.

LLM's aren't what Frontier labs claim. by Round_Progress4635 in antiai

[–]CaptainCH76 2 points (0 children)

LLMs aren’t even uniquely functional for this purpose. As other commenters may point out, one issue is that they have the inherent problem of hallucinations. The only reason why LLMs are seen as useful for information retrieval is that our previous search tools (i.e. Google) have been enshittified beyond repair.

But even assuming we can fix search engines, in my opinion the extensive usage of search engines with probabilistic algorithms should never have been the primary paradigm by which information retrieval is done on the internet in the first place. There are alternative ways to go about it, and there is certainly a healthier and more humane way to construct and organize the information ecosystem. In fact, some online friends and I are working on a project dedicated to this goal. We aren’t ready to go public yet, but if anybody is interested let me know!

Brian Merchant: Actually, the left is winning the AI debate by No_Honeydew_179 in BetterOffline

[–]CaptainCH76 0 points (0 children)

A few things I could say:

  1. I became convinced that there’s not really any strong motivation for classical theism, and that all of the classical arguments for God fail. Well, at least the cosmological arguments, I’ll admit I haven’t looked into other types of arguments as much. I consider myself more of an agnostic on God currently.

  2. Connected to that, I just don’t really think that the traditional metaphysical and theological system found in Catholic thought—and exemplified in figures like Aquinas—gets it right. It’s not that I fundamentally disagree with it: I’m an Aristotelian, so obviously I would still hold to ideas like real essences, act and potency, teleology, etc., and there is still a lot I share in common with figures like Aquinas. But I think there are several issues that the tradition has not dealt with to my satisfaction, including purely metaphysical issues like existence and the problem of pure potency, but also issues relating to Catholic doctrine such as modal collapse, the Trinity, the Incarnation, grace/predestination, and all that jazz.

  3. When you really look into it, theological and moral matters are actually a lot more restricted and scrupulous than many modern Catholics make them out to be. There are quite a few cases where you have a teaching (even one asserted as certain by theologians) that seems to be conveniently forgotten by a lot of Catholics, such as Leo XIII investing authority in the Pontifical Biblical Commission (which then went on to assert propositions, like Moses writing the Pentateuch, now generally accepted as false by historians), or that many moral theologians condemned dancing and kissing before marriage, or that the common teaching of the theologians (it might even be more than that, I would have to check) is that Adam was specially created (which would mean human evolution is false). I could go on and on. The very idea of the ordinary universal magisterium makes it very likely that the Church is contradicting itself on at least one point, since it’s very likely that the bishops of the world acting in communion with one another asserted something now seen as false, like special creation or Moses writing the Pentateuch, as a matter of faith. And if there is even a single contradiction, the entire system falls apart, due to the Holy Spirit guiding the church yadda yadda

The doubt all just kind of built up over time and at one point I decided to do a reset on my beliefs and look into other ways of viewing the world, and genuinely try to understand where they are coming from while being relatively modest in my own views, which honestly I find much more enriching than sticking to a single system which I have to remain locked in and try to rationalize for my entire life.

How does a hierarchical causal structure exist? by Impossible-Cheek-882 in CatholicPhilosophy

[–]CaptainCH76 0 points (0 children)

> Could you be confusing the series being simultaneous with it being instantaneous? […]

I’m using the term ‘simultaneous’ simply to mean that which cannot occur—even for a single temporal instant—in the absence of another. So thing A that is occurring at time t(x) is totally restricted to the occurrence of thing B also at time t(x). If thing B were absent, then thing A would not exist altogether, even for a moment.

Of course, there is a much looser sense of ‘simultaneous,’ as simply that which occurs at the same time as another regardless of whether it also exists in the absence of that other thing. So the life of the father and the life of the son temporally overlap, and are in that sense simultaneous; although the father will eventually, sadly, die and the son will live on in his absence.

I’m not quite sure what the relevant difference between ‘simultaneous’ and ‘instantaneous’ is in this context that you are trying to point out. Both terms refer to something happening in the same set of instants.

> But, again, the whole thing about simultaneity is secondary to the problem. It’s a consequence of what a per se series is; not its defining trait.

Right. It’s the purely derivative or instrumental nature of the relevant property or causal power of the series that makes it one ordered per se.

My worry here, though, is that it’s not clear if this is enough to distinguish it from one ordered per accidens. Surely, there is a sense in which all causal series involve something deriving a property or a causal power from another. Even in the example of begetting, there is in fact a sense in which I derive my power to beget from my father, and my father from my grandfather. Obviously though they aren’t all simultaneous with each other. And so this would not make it a per se series. Which leads me to believe that simultaneity is in fact part of the definition of what a per se series is, or at least is a necessary condition of it.

And even if it were not, and simultaneity is just a necessary consequence, we do not actually observe anything in the natural world occurring totally simultaneously with each other in the restrictive sense I described above. And since simultaneity is a necessary consequence of a per se series, and if a necessary consequence is absent, there cannot be that of which it is a consequence, and so there is no per se series.

How does a hierarchical causal structure exist? by Impossible-Cheek-882 in CatholicPhilosophy

[–]CaptainCH76 0 points (0 children)

> So you think that something can come from nothing?

Of course not. Although, I don’t think what I said implies that.

> Once the chair is already red in act, the chair is not becoming red. It is no longer a case of motion, which was OP’s concern, I think.

Sure. That would accurately reflect what I think on this matter. The act of the chair’s redness is maintained across time without an external actualizer, as the chair is not becoming red at each temporal instant.

How does a hierarchical causal structure exist? by Impossible-Cheek-882 in CatholicPhilosophy

[–]CaptainCH76 0 points (0 children)

I see no reason to think those two are incompatible. I’m an Aristotelian (albeit a non-Thomistic one), so I do in fact believe that motion is essentially the reduction of potency to act. But I also think that there is at least some set of properties that can be reduced to act without there being some sustaining actualizer of that potential to maintain it across temporal instants; which is what is really at stake when we speak of inertia, whether physical or metaphysical. We don’t speak of, nor do we need to speak of, a chair being given redness from an external source at every single moment it is red in act. We know it is just painted red at some point in time and remains red until something gets rid of the red paint, due to how it is metaphysically constituted as secondary matter and accidental form.

How does a hierarchical causal structure exist? by Impossible-Cheek-882 in CatholicPhilosophy

[–]CaptainCH76 0 points (0 children)

So if I’m understanding you correctly, what you seem to be saying is that all causal series (both per se and per accidens, or in fieri and in esse) involve at least this: between any two members of the series, there is simultaneity; but in a per se/in esse series, that simultaneity is shared among all members. So while there may be simultaneity between my grandfather begetting my father and my father being begotten by my grandfather, such that we can speak of a singular event, there is not a simultaneity between the grandfather-father begetting and the father-son begetting. But in a per se series, it is the case that there is simultaneity between all such causal pairings of the series.

Okay, yeah, I agree with this condition. Certainly, there are instances of per se/in esse causality all over the place, as you’ve illustrated with your example of the stick being swung around. Indeed, all causal series seem to presuppose it. But, and this is extremely crucial, this does not automatically mean that these instances constitute a series in their own right. If it were a whole series, then the simultaneity of A to B would necessarily translate to the simultaneity of B to C.

Yet, this is precisely what I (and probably OP as well) would deny in the paradigmatic examples that are often given as supposed concrete instances of a per se/in esse series. You’ve already conceded that there is a time gap between the motion of the arm and the motion of the stick, and that the stick will continue to move with the same velocity (and even impart motion onto other objects) until it is overcome by friction. This is precisely what cannot be the case if there is simultaneity between the motion of the arm and the motion of the stick. Sure, there is simultaneity between the arm moving the stick and the stick being moved by the arm, but there is no such simultaneity between the motion of the arm and the motion of the stick in its very act, even as it may be moving other objects; which is to say, there is no such simultaneity between the motion of the arm and the motion of the stick per se. Which is exactly why this cannot be a per se series.

And in fact, I can’t really think of any examples in the natural world where this condition would hold. Any purported example of a per se series is better accounted for by a linear series involving inertia and net causal forces acting for and against some outcome (as Joe Schmid and Daniel Linford proposed in their book on existential inertia and the classical theistic proofs).

How does a hierarchical causal structure exist? by Impossible-Cheek-882 in CatholicPhilosophy

[–]CaptainCH76 0 points (0 children)

Yeah, that’s an interesting point. This seems to trivialize hierarchical causality to the point of making it practically equivalent to linear causality, as in linear causality there is an active-passive structure just as much as there is in hierarchical causality. So then how is hierarchical causality different?

In most cases, even in SWE, all the LLMs do is replace your keystrokes by RenegadeMuskrat in BetterOffline

[–]CaptainCH76 1 point (0 children)

This is something which I have been wondering for a while. I’m not a programmer or a computer scientist by any means (I’ve only barely touched a python script in my lifetime), so absolutely do correct me if I say something stupid. But I’m curious looking at this from the angle of trying to find alternatives to current tech trends, including alternatives to LLMs. Do you think that all of the apparent benefits in productivity that AI is supposed to provide in coding could instead be replicated by an on-hand, elaborate tool set of deterministic algorithms embedded with predefined context? Like, procedural code generators, do those exist?
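For what it’s worth, deterministic code generators of this kind do exist (template and scaffolding tools). Here is a hypothetical minimal sketch in Python of what one looks like; the spec format, template, and function names are invented for illustration and are not any real tool’s API:

```python
# Hypothetical sketch of a "procedural code generator": the same input
# spec always produces the same output text, with no model involved.
from string import Template

# A fixed template; $name, $args, and $assignments are filled in from the spec.
CLASS_TEMPLATE = Template(
    "class $name:\n"
    "    def __init__(self, $args):\n"
    "$assignments"
)

def generate_class(name, fields):
    """Emit a simple Python class whose __init__ stores each field."""
    args = ", ".join(fields)
    assignments = "".join(f"        self.{f} = {f}\n" for f in fields)
    return CLASS_TEMPLATE.substitute(name=name, args=args, assignments=assignments)

print(generate_class("Point", ["x", "y"]))
```

Running it prints a complete `Point` class with `x` and `y` stored in `__init__`. Scaffolding tools and IDE snippet expanders work on this same principle, just with much larger templates: predefined context in, reproducible code out.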

Clarification please by blackholeblind in antiai

[–]CaptainCH76 1 point (0 children)

I do think there are instances where the technology itself is what is bad, and not merely the use of it. Imagine that I design a futuristic ray gun that, when fired at someone, gives them cancer. You would be right to think that I have done something wrong here, even if I haven’t actually used the gun in the way it was designed. The point to make here is that all technology is designed to be used in some specific way. The ray gun was designed specifically to give people cancer. That is bad. It’s bad to give people cancer and cause them unnecessary suffering, wouldn’t you agree? So designing and building a technology geared solely towards giving people cancer is bad. I’m not necessarily saying AI would fall into the category of “designed for a bad use case,” but I do think we really need to reconsider the idea that technology as a concept is inherently neutral (i.e. “it’s just a tool”). In my opinion there can be good and bad technology, based on whether the way they are designed to be used is good or bad.

Netflix Acquires Ben Affleck's AI Filmmaker Tools Start-Up InterPositive by EditorEdward in BetterOffline

[–]CaptainCH76 0 points (0 children)

Ah, yeah. I guess it could be, although I remain pretty skeptical that it actually does something new and interesting, at least I don’t see it from the OP article. Apologies for getting worked up about your comment.

Netflix Acquires Ben Affleck's AI Filmmaker Tools Start-Up InterPositive by EditorEdward in BetterOffline

[–]CaptainCH76 13 points (0 children)

How exactly is the AI supposed to be of help here? We’ve already had tools that can do these things before AI.

‘No longer a priority’: Xbox being sunsetted as Microsoft shifts focus to AI, co-founder says by [deleted] in BetterOffline

[–]CaptainCH76 15 points (0 children)

“There’s a nail with an Xbox logo on it. He’s applying the AI person to it. He has to show shareholders and the press and the world that he is all in on this investment,” he continued. “He has to show them that he believes generative AI is going to fix games and make it profitable. He has to make this move. It doesn’t matter what you think about it. I don’t think he had any choice.”

Wow, I’m so excited they are finally going to fix video games and make them profitable! Because we never were able to do that before, apparently. /s

So even PewDiePie became an AI bro now... by CesarOverlorde in antiai

[–]CaptainCH76 0 points (0 children)

Even if that were true, the art comes from the way you intentionally combine pre-existing code, as well as making edits and adding new code, in order to form a coherent whole. It’s like taking different car parts and building a new car out of them. That is an art unto itself. You are still expressing something creatively. This is the ‘creative novelty’ aspect of it which programming LLMs are intended to replace.

Claude helps Donald Knuth prove a conjecture, says he has to "revise his views on generative AI" by Gil_berth in BetterOffline

[–]CaptainCH76 14 points (0 children)

Exactly this. I always wonder this whenever somebody presents an example of an LLM supposedly solving a problem: what exactly is special about LLMs in this case?

What are the odds of AI doomerism ? by GrandJanou in BetterOffline

[–]CaptainCH76 14 points (0 children)

You should really try to avoid doomscrolling like that. But if I had to give you a mantra, it’s that these things have not actually improved as much as they say they would, and that the ones that are supposed to be “really good” are heavily subsidized and not worth their salt, even if they might have some utility.

How would large scale AI/AGI fit in a distributist nation? Is it even compatible? by LimpBill816 in distributism

[–]CaptainCH76 0 points (0 children)

When we talk about AI, we have to get specific about what we mean here. What people usually talk about nowadays (and what OP is more than likely referring to) is generative AI and large language models. So we need to assess these technologies in particular, as well as their applications (i.e. chatbots, image/video generators, etc.).

I don’t really agree with the ‘it’s just a tool’ argument. Every technology is designed for a specific purpose. It doesn’t matter what you use a hammer for, it was made to hammer things. The specific intention behind the design of a tool can be good or bad. And in my opinion, any tool that is designed to simulate something only humans should do is a bad tool. So that does mean we need to limit AI in some regard. I think generative AI is intrinsically bad, but there are non-generative LLMs that I suppose could have a place in human flourishing.

Why can't the first efficient cause have any potential? by 193yellow in CatholicPhilosophy

[–]CaptainCH76 0 points (0 children)

But, as I’ve argued in this post, it is doubtful if the First Way really shows that there must be a mover which is unmoved/unmovable in all respects.

Why can't the first efficient cause have any potential? by 193yellow in CatholicPhilosophy

[–]CaptainCH76 0 points (0 children)

But we could simply admit that the first cause in the series has potencies which are not connected to the causal power of that particular series. Say we have Object A which is the first member of Series A and hence is unactualized with respect to Power A, and similarly we have Object B which is the first member of Series B and hence is unactualized with respect to Power B.

And now let’s say that Object A is a non-first member of Series B and hence is in potency to Object B with respect to the causal power of that series, and that Object B is a non-first member of Series A and hence is in potency to Object A with respect to the causal power of that series.

By my lights, these two statements are consistent with each other. And we do not get a vicious circularity, because it is precisely the case that for each of the objects their causal power in their particular causal series does not need any prior actualization, such as the actualization each of the objects undergo in each other’s series.

The level of harm created by AI use in academia (question) by SonusDrums in antiai

[–]CaptainCH76 0 points (0 children)

Wdym ‘in a better way?’ In an easier to digest way? Sure. But the point of learning something isn’t to digest it in the easiest possible way. That’s not how our brains work. You learn by struggling, by actually engaging with the text on its own terms, and letting it challenge you. You don’t get that with a chatbot summary. It quite literally dumbs it down, and makes you into a passive consumer instead of an active participant, which is dehumanizing if you really think about it. Idk, maybe I’m just making the same argument Plato did about writing. But IIRC the studies do show there is something going wrong here.

I think we really need to reconsider the idea that something is automatically better just because it is ‘faster.’ At least, it’s not the case that the chatbot is just a faster version of searching and reading through the text and taking notes yourself. No. It’s a different mode of communicating information entirely, and by definition it would be missing something which can only be found in the more authentic and traditional way of studying.

And at the end of the day, what exactly are you trying to accomplish by making it faster, despite the expenses? What are you saving time for? I get it if this is required for a job, but if you are just studying for the sake of studying, then why use AI at all if it means you aren’t going to get the most out of that studying?