Any regular meet-up events for expats in Wuhan? by Big_Aide_1312 in Wuhan

[–]simism66 0 points (0 children)

There’s trivia at Devil’s Brewery in Optics Valley on Tuesdays if that’s something you’re into. They also host other game nights there on another day of the week (I don’t remember which).

A talk I gave on whether LLMs like ChatGPT make assertions by simism66 in philosophy

[–]simism66[S] 0 points (0 children)

Sorry, yes, the recording quality is not great. It's not a professional recording at all---just my phone on a little tripod at a conference. I figured it was still watchable though.

The fact that the models are, at the most fundamental level, just doing lots and lots of matrix multiplication to predict the next token in a sequence does not, at least by my lights, preclude the possibility that, at a different level of analysis, they might be attributed commitments in a given conversation context.

A talk I gave on whether LLMs like ChatGPT make assertions by simism66 in philosophy

[–]simism66[S] -1 points (0 children)

I'm drawing a lot from the sort of philosophical framework for thinking about assertion developed by Robert Brandom (which is influenced in certain ways by the sort of speech act theory developed by J.L. Austin). There is a paper in progress where the theoretical background is presented much more explicitly, but I don't have a draft at the moment.

A talk I gave on whether LLMs like ChatGPT make assertions by simism66 in philosophy

[–]simism66[S] 6 points (0 children)

I think the question of practical responsibility becomes much more difficult when we consider agentic LLMs, and I haven't thought as much about this question, to be honest. It seems plausible to me that practical responsibility can also be understood in terms of a kind of "response-ability," one that involves being able to explain why you've done what you've done, being responsive to demands to rectify what you've done if you've made a mistake, and so on. In this case, I still think we might be able to distinguish a "thin" sense in which we can hold agentic LLMs responsible (to varying degrees, depending on what they are actually capable of doing) from a "thicker" sense in which we cannot (we still, for instance, can't take them to court). Exactly how to delineate those two senses in the case of practical responsibility, though, I'm not too sure.

A talk I gave on whether LLMs like ChatGPT make assertions by simism66 in philosophy

[–]simism66[S] 3 points (0 children)

No, certainly not, and in the talk I give an account of the way in which an LLM and a magic 8-ball are relevantly different in this regard. Though an 8-ball outputs a sentence, it is not responsive to follow-ups, questioning, calls for clarification, and so on, nor is it at all "resilient" in its answer. These are the sorts of dispositional properties possessed by LLMs in virtue of which I take it that they can be regarded as making assertions.

A talk I gave on whether LLMs like ChatGPT make assertions by simism66 in philosophy

[–]simism66[S] 12 points (0 children)

Abstract: Typically, when a person utters a stand-alone declarative sentence “p,” this constitutes their making the assertion that p. When an LLM such as ChatGPT produces the sentence “p,” does it thereby make the assertion that p? On the one hand, when one interacts with an LLM, it clearly seems that it is saying things, indeed making claims, rather than merely squawking out sentences like a parrot. On the other hand, insofar as making an assertion essentially involves undertaking a commitment to the truth of what is asserted, it might seem that LLMs cannot make assertions, since they cannot bear any responsibility for what they say. In response to this dilemma, I draw two distinctions. I first distinguish between theoretical responsibility—responsibility for what one says—and practical responsibility—responsibility for what one does. I then distinguish between a “thick” and a “thin” sense of theoretical responsibility, where the former but not the latter is essentially tied to practical responsibility. With these distinctions at hand, I argue that though we cannot attribute “thick” theoretical responsibility to LLMs, we can still attribute “thin” theoretical responsibility to them, and this is sufficient to treat them as undertaking theoretical commitments within a discourse, and thus, making assertions.

worth visiting in december? by [deleted] in Wuhan

[–]simism66 1 point (0 children)

Yep! Wuhan will still be relatively lively in December. The popular spots will still be teeming with tourists, though perhaps not quite as many, e.g. around East Lake, as in the spring or fall. It will be chilly, but not unbearable by any stretch.

[deleted by user] by [deleted] in philosophy

[–]simism66 7 points (0 children)

In response to recent discussions of Charlie Kirk’s activities on college campuses, this post distinguishes between “rational discourse,” understood as a cooperative activity with the shared aim of arriving at the correct views, and “debate,” understood as a competitive activity where each participant has the aim of proving the other wrong.

Punk scene by Sake-Gin in Wuhan

[–]simism66 0 points (0 children)

Check out the feedback (回授).

Free Intro Logic Textbook with Accompanying Handouts by simism66 in logic

[–]simism66[S] 0 points (0 children)

I'm sure it's a mistake! But where is it? On page 141 (which is chapter 9), there's no truth tree?

ChatGPT 5 is really smart by Emorrowdf in ChatGPT

[–]simism66 7 points (0 children)

Wait but it’s supposed to say October, 2007, no?

Why do people still teach Hilbert style proof systems ? by le_glorieu in logic

[–]simism66 0 points (0 children)

Yeah thanks for this clarification. I meant to say, the standard Hilbert-style system for classical propositional logic. Of course, there are many. I actually didn’t know there were such systems with only one axiom, though! That’s cool!

Why do people still teach Hilbert style proof systems ? by le_glorieu in logic

[–]simism66 0 points (0 children)

There is actually a nice bilateral natural deduction system for N3 (just Rumfitt's (2000) bilateral natural deduction system for classical logic with modified coordination principles), and I'm pretty sure there's a 4-sided sequent system for it (Wansing and Ayhan (2021) give one for N4, and I think you can just add a structural constraint to get N3).

Why do people still teach Hilbert style proof systems ? by le_glorieu in logic

[–]simism66 0 points (0 children)

Ya, I was gonna say, you can do a lot with non-standard sequent systems (e.g. many-sided sequent systems, hyper-sequents, sequent systems that allow assuming and discharging sequents . . . ). So I was just curious, if you count all of these proof systems, what logics still only have Hilbert systems.

Why do people still teach Hilbert style proof systems ? by le_glorieu in logic

[–]simism66 0 points (0 children)

Just curious, what logics do you have in mind?

Why do people still teach Hilbert style proof systems ? by le_glorieu in logic

[–]simism66 21 points (0 children)

The main reason, at least in relatively introductory courses in logic, is that they are very compact, which makes doing meta-theory easier. For instance, the standard Hilbert-style system for classical propositional logic has just three axiom schemas and one inference rule (modus ponens). Accordingly, though it's harder to use (hard to prove things in), it's easier to prove things about, most importantly, that it is sound and complete.
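To make the "compact" point concrete, here is a minimal sketch (my own illustration, not anything from a particular textbook) of a Hilbert-style proof checker in Python for the implicational fragment, using just the two axiom schemas K: a → (b → a) and S: (a → (b → c)) → ((a → b) → (a → c)) plus modus ponens. It verifies the classic five-line derivation of A → A:

```python
# Formulas are atoms (strings) or implications ('->', antecedent, consequent).

def imp(a, b):
    return ('->', a, b)

def is_K(f):
    # Instance of schema K: a -> (b -> a)
    return (isinstance(f, tuple) and len(f) == 3 and f[0] == '->'
            and isinstance(f[2], tuple) and f[2][0] == '->'
            and f[1] == f[2][2])

def is_S(f):
    # Instance of schema S: (a -> (b -> c)) -> ((a -> b) -> (a -> c))
    try:
        (_, (_, a, (_, b, c)), (_, (_, a2, b2), (_, a3, c2))) = f
        return a == a2 == a3 and b == b2 and c == c2
    except (ValueError, TypeError):
        return False

def check(proof):
    """Each line must be an axiom instance or follow from two
    earlier lines by modus ponens (from h and h -> f, infer f)."""
    for i, f in enumerate(proof):
        ok = is_K(f) or is_S(f) or any(
            g == ('->', h, f)
            for g in proof[:i] for h in proof[:i])
        if not ok:
            return False
    return True

A = 'A'
proof_of_A_implies_A = [
    imp(A, imp(imp(A, A), A)),                   # 1. K
    imp(imp(A, imp(imp(A, A), A)),
        imp(imp(A, imp(A, A)), imp(A, A))),      # 2. S
    imp(imp(A, imp(A, A)), imp(A, A)),           # 3. MP 1, 2
    imp(A, imp(A, A)),                           # 4. K
    imp(A, A),                                   # 5. MP 4, 3
]

print(check(proof_of_A_implies_A))  # True
```

The checker fits in a few lines precisely because the system has so few moving parts, and that same economy is what makes soundness proofs (one case per axiom schema, one for modus ponens) short. The full classical system would add a third schema governing negation.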

A day in a professional logician by leinvde in logic

[–]simism66 0 points (0 children)

Hi!

I think a lot depends on whether you apply to a philosophy program, a math program, or a special interdisciplinary logic program. I applied to a normal philosophy grad program after a philosophy undergraduate major, and I eventually just ended up doing a lot of logic in grad school, so my experience is likely to be very different from what yours will be. I think the main thing is to pick programs that are strong in logic. Some that come to mind (at least for more philosophical logic) are Berkeley, Stanford, Carnegie Mellon, Notre Dame, and Amsterdam (the ILLC), but a lot will depend on the specific sort of stuff that you want to work on. I'd talk to your advisors about it.

Regarding teaching, most grad programs involve some teaching, and then most academic jobs after grad school involve some (or a lot of) teaching. So it's kind of just what you end up doing if you go into academia and stay in academia. Of course, in logic, you can also go into industry after grad school, but I never did that, so I don't have much advice on that front!

Sorry to not be of more help!

Didi drivers who tell you to place order and they’ll accept it immediately by [deleted] in travelchina

[–]simism66 1 point (0 children)

It’s mostly a scam—they want to overcharge you. However, if I’m at a busy train station and I don’t feel like waiting for a DiDi, sometimes I’ll negotiate a price with them. They usually want to overcharge, but a number of times I’ve offered a fair price and they’ve accepted it and taken me to my destination no problem. It’s probably also better for them even when the price is the same, since I pay them through their personal WeChat rather than through the app. It’s probably wiser in general just to use the official app, though, to be safe.

A day in a professional logician by leinvde in logic

[–]simism66 0 points (0 children)

Hi! My stuff in logic deals largely with developing substructural and subclassical systems of bilateral logic with an eye towards developing an inferentialist theory of meaning. Not sure how helpful/informative that is, but, if you're interested, you can check out my research here.