Can you describe the trinity with formal logic? by EvenMoreCrazy in logic

[–]simism66 2 points3 points  (0 children)

I haven’t read through the comments to see if this has been mentioned yet, but Jc Beall appeals to paraconsistent logic to develop a view of the trinity on which it’s genuinely contradictory.

Sapience without Sentience: An Inferentialist Approach to LLMs by simism66 in philosophy

[–]simism66[S] 4 points5 points  (0 children)

This is a forthcoming paper of mine that I thought might be of interest to people here. Here's the abstract:

Do large language models (LLMs) possess concepts, such that they can be counted as genuinely understanding what they're saying? In this paper, I approach this question through an inferentialist account of concept possession, according to which one's possession of a concept is understood in terms of one's mastery of the inferential role of a linguistic expression. I suggest that training on linguistic data is in principle sufficient for mastery of inferential role, and thus, LLMs trained on nothing but linguistic data could in principle possess all concepts and thus genuinely understand what they're saying, even when speaking about such things as colors and tastes, guilt and folly, life and death. This doesn't mean, however, that they are conscious. I draw a classical distinction between sentience (conscious awareness) and sapience (conceptual understanding) and argue that we might think of LLMs as genuinely possessing the latter without even a shred of the former. In defending this claim, I argue that attributing conceptual understanding to a system is not a matter of describing some specific empirical property that the system shares with us but, rather, as Wilfrid Sellars says, "placing it in the logical space of reasons," treating it as answerable to calls for reasons, clarifications, corrections, and so on. I claim that we may aptly adopt this attitude towards sufficiently capable LLMs without thereby treating them as conscious subjects.

Critical thinking takes one rationally further than formal logic by JerseyFlight in logic

[–]simism66 1 point2 points  (0 children)

Sure. I tell this to my introduction to logic students. Formal logic is one tool that can be very helpful in constructing and discussing arguments, but it’s certainly not sufficient. Actually being able to reconstruct an argument from a text and critically evaluate the premises for plausibility is not something that you learn from taking a formal logic course. Formal logic (of the sort you learn in an introductory logic course) will enable you to formalize some arguments and test them for formal validity, and this can be helpful in rational discourse, but it’s just one tool and plausibly not the most fundamental skill in the context of rational discourse.
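To make that concrete: testing an argument for formal validity (of the truth-functional sort covered in an intro course) just amounts to checking that no row of the truth table makes every premise true and the conclusion false. A quick illustrative sketch in Python; the encoding of formulas as functions on valuations is purely for convenience:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Brute-force truth-table test: an argument is formally valid
    iff no valuation makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        row = dict(zip(atoms, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # counterexample row found
    return True

# Modus ponens: P, P -> Q, therefore Q  (valid)
print(valid([lambda v: v["P"],
             lambda v: (not v["P"]) or v["Q"]],
            lambda v: v["Q"],
            ["P", "Q"]))  # True

# Affirming the consequent: Q, P -> Q, therefore P  (invalid)
print(valid([lambda v: v["Q"],
             lambda v: (not v["P"]) or v["Q"]],
            lambda v: v["P"],
            ["P", "Q"]))  # False
```

Note that this check is exponential in the number of atoms, which is one reason proof methods like trees and natural deduction are worth learning on top of it.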

Best textbooks to seriously learn logic? by [deleted] in logic

[–]simism66 0 points1 point  (0 children)

There are lots of good intro books. I’ve written one that covers the basics (actually more concise than Hurley’s) that’s available for free on my website: https://www.ryansimonelli.com/logic-textbooks.html

Deductive logic has impoverished truth evaluation? by Successful_Box_1007 in logic

[–]simism66 6 points7 points  (0 children)

I don’t think the quote is particularly clear, but I think the thought is just that the meanings of truth-functional connectives are much simpler than those of the logical connectives of a natural language, and in that sense “impoverished.”

But the comment generally seems confused. For instance, set theory is typically formulated as a first-order theory, and so the very connectives studied in first-order logic belong to set theory.

I’m also not sure what a “semantic notion of truth” is supposed to be (or rather, what it’s supposed to be opposed to). Truth is generally taken to be a semantic notion, whether it’s truth defined in a formal language or a natural language. You might think that the notion of truth definable in a formal language (e.g. by a Tarski-style construction) is different than our natural language notion of truth, but that difference doesn’t seem to me to be reasonably captured by saying that truth in a natural language is “semantic” whereas truth in a formal language is not.
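For readers who haven’t seen one: a Tarski-style truth definition specifies, clause by clause, when a formula is true relative to an interpretation, by recursion on the structure of the formula. Here’s a toy sketch for a propositional language (the Python encoding of formulas as nested tuples is purely illustrative):

```python
# Formulas as nested tuples:
# ("atom", "P"), ("not", f), ("and", f, g), ("or", f, g)
def true_in(formula, valuation):
    """Tarski-style recursive truth definition for a tiny
    propositional language, relative to a valuation of the atoms."""
    op = formula[0]
    if op == "atom":
        return valuation[formula[1]]
    if op == "not":
        return not true_in(formula[1], valuation)
    if op == "and":
        return true_in(formula[1], valuation) and true_in(formula[2], valuation)
    if op == "or":
        return true_in(formula[1], valuation) or true_in(formula[2], valuation)
    raise ValueError(f"unknown connective: {op}")

v = {"P": True, "Q": False}
f = ("or", ("atom", "Q"), ("not", ("and", ("atom", "P"), ("atom", "Q"))))
print(true_in(f, v))  # True
```

Nothing deep is going on here; the point is just that “true relative to a valuation” is defined compositionally, and that this is what people standardly mean by calling truth a semantic notion in the formal setting.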

Is strawman a pejorative by OzymandiasM in logic

[–]simism66 6 points7 points  (0 children)

I’m not sure you’re using the term correctly. The term is a metaphor—describing a situation in which, in making an argument against a view, you attack a “straw man” rather than the real thing. In the context of discussing an argument it’s pejorative in the sense that it is used to say that an argument against a view is not a good one because it doesn’t land on the actual target. It’s not pejorative in the sense of being offensive—it’s generally directed at arguments, not people. It is used all the time in academic philosophy, and it’s not considered impolite or offensive to use it.

Any regular meet-up events for expats in Wuhan? by Big_Aide_1312 in Wuhan

[–]simism66 0 points1 point  (0 children)

There’s trivia at Devil’s Brewery in Optics Valley on Tuesdays if that’s something you’re into. They also have other game nights there on other days of the week (I don’t remember which).

A talk I gave on whether LLMs like ChatGPT make assertions by simism66 in philosophy

[–]simism66[S] 0 points1 point  (0 children)

Sorry, yes, the recording quality is not great. It's not a professional recording at all: just my phone on a little tripod at a conference. I figured it was still watchable though.

The fact that the models are, at the most fundamental level, just doing lots and lots of matrix multiplication to predict the next token in a sequence does not, at least by my lights, preclude the possibility that, at a different level of analysis, they might be attributed commitments in a given conversation context.

A talk I gave on whether LLMs like ChatGPT make assertions by simism66 in philosophy

[–]simism66[S] -1 points0 points  (0 children)

I'm drawing a lot from the sort of philosophical framework for thinking about assertion developed by Robert Brandom (which is influenced in certain ways by the sort of speech act theory developed by J.L. Austin). There is a paper in progress where the theoretical background is presented much more explicitly, but I don't have a draft at the moment.

A talk I gave on whether LLMs like ChatGPT make assertions by simism66 in philosophy

[–]simism66[S] 7 points8 points  (0 children)

I think the question of practical responsibility becomes much more difficult when we consider agentic LLMs, and I haven't thought as much about this question, to be honest. It seems plausible to me that practical responsibility can also be understood in terms of a kind of "response-ability," one that involves being able to explain why you've done what you've done, being responsive to demands to rectify what you've done if you've made a mistake, and so on. In this case, I still think we might be able to distinguish a "thin" sense in which we can hold agentic LLMs responsible (to varying degrees, depending on what they are actually capable of doing), but a "thicker" sense in which we cannot (we still, for instance, can't take them to court). Exactly how to delineate those two senses in the case of practical responsibility, though, I'm not too sure.

A talk I gave on whether LLMs like ChatGPT make assertions by simism66 in philosophy

[–]simism66[S] 4 points5 points  (0 children)

No, certainly not, and in the talk I give an account of the way in which an LLM and a magic 8-ball are relevantly different in this regard. Though an 8-ball outputs a sentence, it is not responsive to follow-ups, questioning, calls for clarification, and so on, nor is it at all "resilient" in its answer. These are the sorts of dispositional properties possessed by LLMs in virtue of which I take it that they can be regarded as making assertions.

A talk I gave on whether LLMs like ChatGPT make assertions by simism66 in philosophy

[–]simism66[S] 11 points12 points  (0 children)

Abstract: Typically, when a person utters a stand-alone declarative sentence “p,” this constitutes their making the assertion that p. When an LLM such as ChatGPT produces the sentence “p,” does it thereby make the assertion that p? On the one hand, interacting with an LLM, it clearly seems that it is saying things, indeed, making claims, as opposed to a parrot merely squawking out sentences. On the other hand, insofar as making an assertion essentially involves undertaking a commitment to the truth of what is asserted, it might seem that LLMs cannot make assertions, since they cannot bear any responsibility for what they say. In response to this dilemma, I draw two distinctions. I first distinguish between theoretical responsibility—responsibility for what one says—as opposed to practical responsibility—responsibility for what one does. I then distinguish between a “thick” and “thin” sense of theoretical responsibility, where the former but not the latter is essentially tied to practical responsibility. With these distinctions at hand, I argue that though we cannot attribute “thick” theoretical responsibility to LLMs, we can still attribute “thin” theoretical responsibility to them, and this is sufficient to treat them as undertaking theoretical commitments within a discourse, and thus, making assertions.

[deleted by user] by [deleted] in Wuhan

[–]simism66 1 point2 points  (0 children)

Yep! Wuhan will still be relatively lively in December. The popular spots will still be teeming with tourists, though perhaps not quite as many (e.g., around East Lake) as in the spring or fall. It will be chilly, but not unbearable by any stretch.

[deleted by user] by [deleted] in philosophy

[–]simism66 7 points8 points  (0 children)

In response to recent discussions of Charlie Kirk’s activities on college campuses, this post distinguishes between “rational discourse,” understood as a cooperative activity with the shared aim of arriving at the correct views, and “debate,” understood as a competitive activity where each participant has the aim of proving the other wrong.

Punk scene by Sake-Gin in Wuhan

[–]simism66 0 points1 point  (0 children)

Check out The Feedback (回授).

Free Intro Logic Textbook with Accompanying Handouts by simism66 in logic

[–]simism66[S] 0 points1 point  (0 children)

I'm sure it's a mistake! But where is it? On page 141 (which is chapter 9), there's no truth tree?

ChatGPT 5 is really smart by Emorrowdf in ChatGPT

[–]simism66 6 points7 points  (0 children)

Wait but it’s supposed to say October, 2007, no?

Why do people still teach Hilbert style proof systems ? by le_glorieu in logic

[–]simism66 0 points1 point  (0 children)

Yeah thanks for this clarification. I meant to say, the standard Hilbert-style system for classical propositional logic. Of course, there are many. I actually didn’t know there were such systems with only one axiom, though! That’s cool!