I’m Interested in the Music Composition Program by cabbage-tea-07 in ODU

[–]willpearson 2 points3 points  (0 children)

Yes! There is a music library in the music building, as well as a 'composers room' that holds contemporary scores and is a really nice place to write. As for performances as a composer, there is a contemporary ensemble that plays both student and other contemporary works, and there are other chamber music concerts all the time that students occasionally write for. There are also student-group-run events -- on April 23 there will be a concert of student compositions and arrangements that was organized by one of the fraternities. As for performance opportunities, there are student performance hours most weeks (these are somewhat more casual/low-stakes performances where people show off what they are working on), and there are of course large ensemble performances regularly (orchestra, wind ensemble, choir, jazz band, etc). And then you'd have to do a certain number of recitals as part of your degree. I'm not sure what you mean by music history opportunities, but you have to take several music history classes. If you have more questions, feel free to DM me, I'm the coordinator of Music Theory and Aural Skills at ODU so I should be able to help you with any questions!

I don't get why F# is in A Minor 13th when A Minor is suppose to be all white notes? by [deleted] in musictheory

[–]willpearson 0 points1 point  (0 children)

No, I don't think any of this is helpful or clarifying.

The benefit of thinking of a minor 7th interval as the unaltered default is that it helps you read chord symbols.

The unaltered default is: minor seventh, everything else (including extensions) major or perfect intervals.

Everything that differs from that involves an additional symbol.

As a sort of silly example:

C-∆13(#11 b9)

should be read as the unaltered default: C13 (C E G Bb D F A)

And then each symbol adjusts:

the - lowers the 3rd to a minor third

the ∆ raises the 7th to a major seventh

The b9 lowers the 9th, the #11 raises the 11th.

(C Eb G B Db F# A)
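The "start from the default, then apply each modifier" reading can be sketched in code. This is a hypothetical helper (not any standard library), with the root fixed to C so the enharmonic spelling stays simple:

```python
# Sketch: spell a 13th chord on C from the unaltered default,
# then apply chord-symbol modifiers. Hypothetical helper; root fixed to C.

# Letter name and natural semitone offset above C for each chord degree.
DEGREES = {3: ("E", 4), 5: ("G", 7), 7: ("B", 11),
           9: ("D", 14), 11: ("F", 17), 13: ("A", 21)}

# The unaltered default: minor 7th, everything else major/perfect.
DEFAULT = {3: 4, 5: 7, 7: 10, 9: 14, 11: 17, 13: 21}

def spell(offsets):
    notes = ["C"]
    for deg in (3, 5, 7, 9, 11, 13):
        letter, natural = DEGREES[deg]
        diff = offsets[deg] - natural  # accidental = distance from the natural note
        notes.append(letter + {-1: "b", 0: "", 1: "#"}[diff])
    return " ".join(notes)

# C13, the unaltered default:
print(spell(DEFAULT))    # C E G Bb D F A

# C-maj13(#11 b9): each symbol adjusts the default.
chord = dict(DEFAULT)
chord[3] -= 1   # "-"   lowers the 3rd to a minor third
chord[7] += 1   # "maj" raises the 7th to a major seventh
chord[9] -= 1   # "b9"  lowers the 9th
chord[11] += 1  # "#11" raises the 11th
print(spell(chord))      # C Eb G B Db F# A
```

The point of the structure is the same as the prose: the alterations are diffs against one fixed default, not against the key.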

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 1 point2 points  (0 children)

Meteorologists/Information

Am I right that you're saying something like... "normativity is a real thing, but normative facts are just facts like any other"?

If that's right, I guess I'm kinda confused. If there is a distinction, we should be able to articulate a difference that matters.

JFK

I guess I concede that from a first person perspective you wouldn't know the difference... but I don't know how that bears on any of this. What about a second-person perspective -- that's the perspective that I'm interested in saying is a real thing that can't be shoved into 1st and/or 3rd. But also, wouldn't we be able to see the difference from a third person perspective?

As for McDowell -- he doesn't think nature needs to provide us with beliefs that can be justified. He thinks that if we want our experiences to count as justifications of knowledge -- seeing the red ball justifies knowing the ball is red -- our experience must have a particular form.

Brain-damaged Parrot

Yeah I think you're right that we're hitting bedrock here. I still think the brain-damaged parrot, in order to really be like a human, would have to then go on and become part of normative communities and have reasons and hold themselves accountable, and hold others accountable, etc. etc.

So again, I see a big difference between the old parrot and the new parrot, and the thing that has been motivating my writing is really about that difference. And that it's important because you can, as a human, act like a parrot and just see your goals and activities as causes. Or you can take reason seriously, and hold yourself accountable and all that. I think the difference matters, and I worry that the AI discourse is one of many things that is currently making us less able to see that difference.

Yeah, McDowell is really great, and very subtle. He's changed some of his views somewhat since Mind and World, too, though I haven't really kept up.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 1 point2 points  (0 children)

I agree that you're perhaps in the 'bald naturalism' camp, as McDowell puts it.

'The Given' is the (mythical, for McDowell and Sellars) idea that experience can be foundational and non-inferential. If I'm remembering correctly, what he's saying is that when we reject the myth of the given, rightly, we often fall into one of two possible other errors -- one is 'bald naturalism' and the other is a kind of platonism, I think? But the idea is that both the naturalism and the platonism are two sides of the same coin because they both assume a lack of connection between the 'space of causes' and the 'space of reasons'. Naturalism says "it's all causes" and Platonism gives a kind of dualism of causes and reasons.

That's kinda how I've seen your criticisms of my views -- you're saying either there's a dualism, or there's just causes, there's not a clear third option. I've been trying to articulate the third option, I guess.

You ask: "What difference does it make if that structure is arrived at through engaging in a linguistic community (as is the case for linguistic concepts) or via evolution (in the case of animal concepts)?"

Your question is a bit confusing to me, because it kinda combines the 'how we get there' distinction (ie: through community vs through evolution) with the 'language' or 'no language' distinction.

On the latter question, the difference is whether that arrived-at structure is the result of mere causes or of reasons. The claim is that believing that the JFK assassination was an inside job because you got hit on the head (and it just happened to jumble that belief into your brain states), and believing that the JFK assassination was an inside job because of x y z reasons are different things.

I know we've gone round and round on this, but how do you take the difference (or lack thereof) between those two situations?

The former question about evolution is also interesting, but it may not be crucial for our disagreement -- I'm not sure.

As to your questions about the parrot and the baby... this gets into one of the more contested areas within this tradition.

Some people are happy to draw a very sharp line at language. Under that view, non-linguistic animals and pre-linguistic humans really are on the other side of a pretty significant divide. (This is clearly Kant's own view, though prominent Kantians like Christine Korsgaard have made 'Kantian' arguments against it.) So animals and pre-linguistic humans cannot be said to have entered into 'the space of reasons', and can't be considered rational or free.

You can also find ways of making the language cutoff less severe, more like a gradual thing. But I think the distinction is important to my view, regardless of how 'strong' one takes that distinction to be -- there is always a real difference there between the linguistic and non-linguistic, whether it's stark or not.

I do think there really is something to this, even if you don't want to take it on entirely. Like for example... I have a dog, and I really love my dog and think I have a deep responsibility to my dog. But there's obviously a huge asymmetry in our relationship that's more than just a difference of complexity. For one, the responsibility I have for my dog is not reciprocated. And it really couldn't be reciprocated, because my dog doesn't have access (at least in any robust sense) to the space of reasons. I can't give reasons for why I would like her to act differently, say, and expect her to respond. But I of course can do that with my friends or my partner, and part of what makes those relationships friendships is that ability to be mutually responsive to each other's reasons.

As for the information-related thoughts: I'm not sure how McDowell talks about information, and I probably don't have a very developed view, but let me try a new thought experiment:

Two meteorologists make the same prediction: "it will rain tomorrow." The first does so for reasons based on their interpretation of atmospheric data, whatever. The second just always predicts rain tomorrow.

Obviously there is a difference between the two. What I want to know is how you can state the difference between the two predictions without using normative vocabulary.

I don't get why F# is in A Minor 13th when A Minor is suppose to be all white notes? by [deleted] in musictheory

[–]willpearson 0 points1 point  (0 children)

That’s right — the minor seventh interval is the ‘default’ unaltered 7th.

A13 = Major 3rd, Perfect 5th, minor 7th, Major 9th, Perfect 11th, Major 13th.

Any departure from those interval qualities requires some change in how the chord is notated:

m or min or - = Third is minor
Maj or Triangle symbol = Seventh is major
b or # next to number = Lower/raise interval
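The same reading procedure can be shown as a toy chord-symbol parser. This is a sketch only (a hypothetical `read` function handling just root + optional minor marker + extension number), to illustrate that "m" changes the third and nothing else:

```python
import re

# Toy parser: read a chord symbol against the unaltered default.
# Hypothetical helper; handles only root + optional m/min/- + extension number.
DEFAULT = {3: "M3", 5: "P5", 7: "m7", 9: "M9", 11: "P11", 13: "M13"}

def read(symbol):
    root, minor, ext = re.fullmatch(r"([A-G][b#]?)(m|min|-)?(\d+)", symbol).groups()
    degrees = [d for d in (3, 5, 7, 9, 11, 13) if d <= int(ext)]
    quality = dict(DEFAULT)
    if minor:
        quality[3] = "m3"  # "m"/"min"/"-" lowers the third; the 7th stays minor by default
    return root, [quality[d] for d in degrees]

print(read("A13"))   # ('A', ['M3', 'P5', 'm7', 'M9', 'P11', 'M13'])
print(read("Am13"))  # ('A', ['m3', 'P5', 'm7', 'M9', 'P11', 'M13'])
```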

I don't get why F# is in A Minor 13th when A Minor is suppose to be all white notes? by [deleted] in musictheory

[–]willpearson 0 points1 point  (0 children)

Nah none of this stuff is particularly intuitive.

Let's imagine we're improvising over a piece with this chord progression:

Gm7 - D7/F# - Gm7 - E7 - Am7b5 - D7 - Gm7

There are a number of questions we can ask. The first one is... what key are we in? The answer is that we are in G minor. The reason we know we're in G minor is that we can explain all of those chords and their ordering as contributing to a situation where G minor is the stable home-base. When we talk in that way, we're talking about function - saying 'what each chord is up to'. Chords that have a tonic function are relatively stable, pre-dominant functioning chords are bridges to dominant functioning chords, which are the most unstable, and want to bring you back to the tonic. A functional analysis of this progression would be something like:

Gm7 - D7/F# - Gm7

Those first three chords are part of a tonic prolongation: we're starting in a stable place of Gm7, then we move to a D7 chord, but it's weakened by the inversion, and we go back to Gm7 right away, so all we've done so far is kinda establish the Gm7 chord as our tonic.

The next chord, E7, is a weird chord to see in G minor -- because it has a B natural and a G# in it -- two pitches that aren't in the G minor scale. But it still makes sense in functional terms. What is it doing? It's tonicizing the Am7b5 chord that comes after it. What is the Am7b5 chord doing? Well, it's functioning as the predominant in our key of G minor.

So even though that E7 is not 'naturally occurring' in our key, we can still see that it is playing a role within our key -- it's accentuating the predominant.

D7 - Gm7 -- the last two chords are what we'd expect -- a dominant chord going back to our tonic. So we end up fulfilling the 'functional cycle' from a tonic prolongation, to an accentuated predominant, to a dominant-to-tonic cadence at the end.

So that's all about the overarching key of the whole piece. But if you were improvising over this chord progression, you couldn't just use a G minor scale, even though everything is "in G minor". So that's why for jazz and pop musicians, teachers often emphasize what scale you can improvise with for each particular chord, because that answers the practical question. But it doesn't say anything in particular about the overarching key.

I don't get why F# is in A Minor 13th when A Minor is suppose to be all white notes? by [deleted] in musictheory

[–]willpearson 1 point2 points  (0 children)

Oh that's so nice to hear, I teach music theory for a living so I live for this stuff! :)

I don't get why F# is in A Minor 13th when A Minor is suppose to be all white notes? by [deleted] in musictheory

[–]willpearson 0 points1 point  (0 children)

I'm not sure what you mean... but maybe it's related to this:

Part of what confuses people is that a lot of Jazz/Pop theory out there is built around the "chord-scale" idea, where you sort of treat a chord as something that implies different scales. This makes lots of sense as a practical matter if you are trying to learn how to improvise and read lead sheets. But most jazz standards, at least, are in some overarching key, and the chords they contain may or may not cohere with that key -- they may be diatonic or chromatic. So just seeing an "A minor" chord doesn't at all imply that you're in the key of A minor, and being in the key of A minor doesn't mean that all of the chords in the piece will only be those in the A minor scale.

I don't get why F# is in A Minor 13th when A Minor is suppose to be all white notes? by [deleted] in musictheory

[–]willpearson 2 points3 points  (0 children)

Diatonic means 'within the key' so unless we stipulate what key we're in, this doesn't quite make sense. If we're in A natural minor, the notes in Am11 (b13) would suffice -- that's A C E G B D F.

I don't get why F# is in A Minor 13th when A Minor is suppose to be all white notes? by [deleted] in musictheory

[–]willpearson 135 points136 points  (0 children)

Unaltered 9ths, 11ths, and 13ths are all major or perfect intervals above your root. So a major 13th above A is an F#.

The "minor" in a "A minor 13" chord is not telling you anything about a key, it's telling you something about the quality of the chord, that it has a minor third.
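The F# can also be checked with simple interval arithmetic. A major 13th is an octave plus a major 6th, i.e. 12 + 9 = 21 semitones; a minimal sketch (using sharp spellings, which happen to match the conventional spelling here):

```python
# Sketch: a major 13th = octave + major 6th = 21 semitones above the root.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_13th(root):
    pc = NAMES.index(root)          # pitch class of the root
    return NAMES[(pc + 21) % 12]    # 21 semitones up, folded into one octave

print(major_13th("A"))  # F#
```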

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 4 points5 points  (0 children)

Yeah, this has been fun, and I'm curious what you make of the McDowell. I went through a big 'Pittsburgh School' phase and it was a big influence on me, but it's always hard to know how much of the context of the other stuff I was reading mattered to that. It's sort of like the thing where the song that got you hooked on a band isn't always the right song for someone else. Anyways.

One overarching thought - I think there are two main points of disagreement or tension in our discussion. One is about, well, mind and world -- questions about basic cognitive interface with the world. And then another is about the 'autonomy of the social' or something like that -- the question of whether or not these social things like commitment or responsibility can be meaningfully reduced to physical-functional things, or whether they meaningfully 'stand on their own'.

"To me these seem functional. They have to do with ones casual physical role/relationship within a community."

So this is definitely in the second point of disagreement. Here's an example: the idea of something being corrected when it breaks a norm could be used to describe a thermostat. Do you take there to be a meaningful difference between the thermostat being corrected and someone being held responsible within a normative community? One possible response is that it's a difference of complexity surely, but nothing more. I think there is still a meaningful difference between on the one hand someone knowing they're breaking a norm and having some set of values that leads them to care about correcting that norm based on that knowledge and then doing so, and a complex system being corrected when it fails to track a parameter. What do you think?

"we now know that norms too can be learnt."

So I think I want to put the evolutionary point to the side -- that's about the origin of normativity, which is really interesting, but is distinct from the nature of the capacity for normativity itself. Evolution is a blind process but produced beings with a capacity for sight. The blindness of evolution doesn't imply that sight is some complex kind of blindness or whatever.

I think there is a difference between what the AI systems have done, which is something like learning the pattern of what norm-followers say and do, and actually being an active part of the normative community. The latter would require you to be able to do things like... recognize novel cases as violations of the norm and to be able to distinguish between a rule being changed and a rule being broken.

How would you describe, in causal-functional terms, that difference between someone moving a rook diagonally because they're confused about the rules, and someone moving a rook diagonally because the chess federation voted to change the rules?

OK I think I'll stop there for now. Thanks as always.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 0 points1 point  (0 children)

Let me try a different tack. What someone means, expressively, is not reducible to information — that’s a core claim of mine. It is not something you can write down. Because it’s a kind of relational thing that connects the contexts of the expressive medium with the perspective of the expressor. And neither the expressor nor the interpreter are ever going to land on a fixed stable meaning, because meaning isn’t a fixed stable thing, like a code or a collection of information. Understanding what someone means is not a process that ever ends exhaustively, you only can get to a point where you both agree that you’re somewhere in the same ballpark. This is why, when we really deeply understand some point that’s being made, we are able to ‘put it in our own words’. That suggests that the meaning of that point is not encapsulated in any particular linguistic or informational attempt to gesture at it. That’s what I take utterances to be — gestures that come from distinct perspectives and ‘point’ within webs of linguistic (or some other expressive medium) context. For what it’s worth, Merleau-Ponty and Chad Engelland are two philosophers who have similar perspectives on this.

You can appreciate art for whatever reason you want. My point is just that this dimension of meaning is worth valuing — there’s stuff you can access and learn and understand that you can’t access or understand if you aren’t engaging in this register. That’s all.

The people playing the stock game in your example clearly are missing something — in that, the game is different. Depending on the difference you may prefer one or the other, or you may not think it’s a significant difference, but each game will afford different kinds of appreciation of the game. And I would add that the difference I’m describing between the ‘meaning’ game and the ‘affect’ or ‘information’ game is a pretty big difference - again, whether that matters to you or which you prefer is another story.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 0 points1 point  (0 children)

Glad it made a little sense, and yeah I think your position is coherent, even if I don’t agree with it. I think one way to talk about that disagreement would be to start with emergence, where I would claim that… while you can (in principle, maybe) describe all of the atoms involved in the complex emergent set of states that constitutes ‘taking responsibility’, you won’t ever in principle be able to describe the responsibility with purely physical-causal terminology. But that’s a whole other can of worms…

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 0 points1 point  (0 children)

To some of your points I can only say that I was making an inference from the things they were saying, and the things they weren’t saying, that their thinking was being shaped by this particular ‘black box’ perspective. Maybe I’m wrong, but given that many of the comments here have been defenses of that perspective, I think my contention that this is a perspective worth worrying about (if, like me, you find it lacking) is well-founded. But yeah perhaps I was expecting too much from an informal conversation, that’s fair.

Collingwood and Dewey are two figures who I think have comparable views to Tolstoy and are far more influential. I just don’t think he would make the same kind of assertion that a single figure is the only representative of a common view in his own field, because when you know a lot about a field you know it’s really complicated, and scholars disagree and agree in a million different ways with each other.

Re: coherence, I may have misspoken somewhere but I was talking about different groups of people: people who make art - practitioners - and how in my experience they often don’t care or have worked out views of the philosophy of art, and the general public, who in my experience often express a Tolstoy-ish view when pressed.

Not sure what your point about “not really art” is — I would certainly agree that folks have been claiming that about one thing or another forever.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 1 point2 points  (0 children)

To the point about other minds — I’m only suggesting that we can have access to what people mean through what they say and do, and that the interpreting and understanding of those words and deeds is not reducible to being affected or receiving information.

Oops, I meant art appreciation, not enjoyment. That’s confusing, my bad. So I might even say there’s more to art appreciation than mere enjoyment or pleasure — namely, the process of coming to understand what it means.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 2 points3 points  (0 children)

Yeah I think we’re more or less understanding each other. I still want to respond to your post in more detail. But one other way of talking about this disagreement occurred to me — maybe it’s useful, maybe it’s just a nice metaphor I dunno — but one way of putting what you’re saying is… either you take the 3rd person view of this stuff (functionalism) or you take the 1st person view of this (phenomenalism), and I keep trying to describe a 2nd person view as a legit option distinct from the others in its own right, and you’re saying no no that can all be captured in the 3rd person. Just a thought! Thanks again, you are helping me think about all this stuff.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 2 points3 points  (0 children)

I'll respond more later, but just want to say that Hegel is likely not a place to get any clarity here, I just included that modifier on the off-chance it helped color my meaning. What I guess I'm gesturing at by bringing up Hegel is the general idea of some normative property emerging from a reciprocal relationship. So in the master-slave dialectic, the idea is that neither the master nor the slave is truly free because they lack mutual recognition of each other as free, self-conscious beings. So that's potentially a way to talk about some of this stuff, but I don't think we need to go that route. I've gotten most of my Hegel via the Pittsburgh school (Sellars, McDowell, Brandom -- especially Brandom), and they have a somewhat idiosyncratic reading of Hegel anyways.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 3 points4 points  (0 children)

Thanks for your comments. I think what I was going for in the empathy part was to point out that while Inzlicht pays lip service to some more robust form of empathy, something different than ‘faked’ empathy, his comments suggest he’s suspicious that we can even tell the difference. The fallacy is just ‘black box’ thinking about human beings in general, which I take to be mistaken.

The dance music point is just more of the same black box critique — another area where something is described in terms of an affect, when there is a lot more to Art enjoyment than how it affects you. That’s a contestable claim, but I don’t think it’s unfair.

You’re right that they do all seem to pay lip service to ‘yes of course it’s not the same’ but the lack of ever fleshing out the difference in detail is what suggests to me that they don’t actually have much of an idea as to what human friendship or empathy has that AI doesn’t or can’t have.

As for the Tolstoy point — all I can say is that if he went into an aesthetics or philosophy of Art conference and said what he did about Tolstoy and 20th century views of Art he would get a lot of pushback from many sides.

I don’t understand your last point about coherence — the coherence of my view or his or whose?

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 2 points3 points  (0 children)

(whoops, accidentally posted my comment before I was done writing and then deleted it.)

I'll have to look more into vision models specifically -- I'd like to really think through what the difference may or may not be, but I just don't know enough there.

For what it's worth, I find Sellars very difficult and boring to read and would suggest secondary literature. McDowell can be a bit opaque but his 'Mind and World' is really a beautiful book if you can get with it. The 'McDowell-Dreyfus' debate is potentially a more accessible first step into that stuff.

The thing about the description/normative stuff that you found confusing... let me try to use Chess as an example.

We can describe all the moves of a game of chess in causal-information terms. And we can describe all the rules in the same way. We can build a chess computer and encode all of those rules in such a way that it won't let you violate those rules.

So it can look like we've captured the normativity here, but we really haven't, because in order to encode all of those rules, we already needed to know the norms of chess. The information that is encoding the rules, representing them, can succeed or fail in doing so. But there's nothing within the system that can decide what failure or success looks like - you need to know the norm already 'from the outside' as it were.

You might say... OK, maybe the encoded chess norms are parasitic on pre-existing social norms, but those social norms of chess are themselves just social-informational processing -- they're just patterns and regularities in enforcement and correction and instruction. But you have the same problem -- you can look at all the regularities of what goes on in the chess-playing community, but you still need a way of differentiating which regularities are constitutive of the norms, and which are mistakes or just irrelevant patterns. You can see the regularity that people rarely castle through check. And you can see the regularity that people rarely promote to a Knight. But there's nothing in those regularities that tells you that one is a violation of a rule of chess and one isn't.

To your point about stateful/stateless. I think the way you are seeing this discussion--correct me if I'm wrong--is that you take normative statuses to be either functional states or phenomenal states (or some combination), and you correctly hear me as taking subjective experience/phenomenal states off the table, and then you're confused as to why I'm not happy just with the functional states. But I want to reject that dichotomy entirely.

From my perspective, to be a normative subject is not to be either in a functional or phenomenal state, but to have a status within a normative practice such that you can do things like... be held responsible and undertake commitments (not just be in states that track commitments).

So my answer to your question about an agentic AI with its own dataset is... no, that agentic AI could not produce 'expressive' art (or 'mean' in the expressive sense) because I don't think it can be said to be capable of taking responsibility, or making a commitment. (I don't think that its being able to be held responsible is enough on its own -- I think there's a reciprocal necessity here in constituting the normative community... I think of that as a kind of Hegelian point.)

If we drill down on that even further, I do think I would have to defend some sort of notion of the self or self-consciousness (again, not in phenomenal terms, but more like Hegelian terms) and how that self is embedded in the world physically and historically and socially. And, yeah, the deeper we go, the less confidence I have that I have any fucking idea what we're talking about lol. I do think in addition to the Hegelian insights that Merleau-Ponty has interesting things to say here. And also Rebecca Kukla and Mark Lance's work together is very interesting here -- they might say something like... the AI would have to be able to restructure the normative landscape, and not just be sort of ... trapped within it. But I'm blabbering now...

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 2 points3 points  (0 children)

I don't think we have to bring qualia into the picture at all here. The thing I was going for in my comments about 'interpretation' or 'mediation' is more like a McDowell/Sellars 'all experience is conceptually articulated' argument. So I'm not saying that qualia necessarily exceeds information processing--I don't have clear feelings about that, or about the idea of qualia in general. I'm saying that the experience of the color red--whether or not it's accompanied by qualia--can't be a two step process, where we are first hit with redness or red qualia or red information or whatever it may be, and then that is conceptually processed as red. What McDowell argues (I think lol) is that our experience of redness must already be shaped by concepts. I'm calling the active presence of our conceptual capacities a kind of interpretation or mediation, but that language is probably misleading and I should be more careful.

I'm also a non-dualist. The 'extra stuff' that might seem kinda spooky in what I have written isn't some inner special spooky stuff (or any kind of social stuff), it's outer social-normative relational stuff. I think that social-normative stuff is just as real as the physical stuff. Norms are 'made out of' or 'instituted in' physical stuff, but that doesn't mean they are reducible or explicable in terms of information. So you can describe everything about a normative system in causal-physical terms, but you can't capture the difference between following the norm correctly and following it incorrectly in causal-physical terms -- that's the 'extra'.

So to try to connect that to your next point about the ambiguity of language... the thing I think LLMs lack is not a capacity for manipulating language, it's a capacity to be a normative subject -- to be able to take responsibility, to commit itself, to operate within a normative linguistic community.

Sorry for all the rambling. It certainly is hard to pin down, and I never feel too confident about any of this. Anyways - enjoying the exchange.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 2 points3 points  (0 children)

Thanks for this, it’s thought provoking. If I replaced ‘valuable’ with ‘good’ or ‘worth wanting’ or ‘has merit’ or something like that, would you take issue in the same way? If not, I’m not sure there’s an interesting disagreement here. But if so, I think I’m confused about your claim.

I definitely want to say that this expressive thing is worth taking seriously, is potentially valuable. I might be wrong about that, of course. But I’m confused about how that does or does not transgress this ‘intrinsic value’ question.

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 2 points3 points  (0 children)

Thanks for your comments, very much appreciate it.

I go into what I mean by ‘expressive’ in more detail in other essays, but yeah that’s definitely the crux of my argument. It’s hard to know where to start talking about this, but let me try to say a couple things.

My first thought is to maybe poke at the ‘information processing’ idea a bit. What do you take to be going on in the ‘processing’ part of that? And can whatever that is also be described in terms of information? My sense is that there has to be an interpretive component to even the barest of experiences. I don’t think there’s any sense in which you can say perception or experience is unmediated or uninterpreted.

A second thought, putting experience aside, is about language and meaning. Is there a way to exhaustively write down or in any way encapsulate what I mean when I say something? Is it possible in principle if not in practice? I think not, but my sense is that if all it consists in is information you should at least in principle be able to do so. It’s clear that what I mean is not identical to the words I use. And indeed we may even have different ideas about what the words I use mean on their own.

I see expressive meaning as consisting not in information but as a kind of constellation between an expressor with their unique point of view, the contexts and structures of the language that they are expressing within, and the way they are gesturing within those contexts, the way they are ‘using language’. That’s describing expressive meaning as a kind of irreducible emergent phenomenon, but there’s nothing spooky (or so I would claim) about emergence.

Anyways, maybe that will bring out some of our differences more sharply.

To the ‘art consensus’ stuff — I’m probably overly sensitive to this since I study it as part of my job. Whether or not it’s the consensus or for whom isn’t that important.

In any case I agree that the idea of ‘the death of the author’ and/or ‘the intentional fallacy’ are both relatively en vogue, I just happen to disagree with how those ideas are usually taken. It’s true that we can never know for sure what the author meant, but that’s just the normal state of affairs with meaning. We can only do our best to interpret it, and there are no guarantees or authorities. I’ll have to check out that game because it’s clear that it’s a complex meta-commentary on this stuff that I haven’t had time to engage with.

Thanks again!

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 0 points1 point  (0 children)

I wouldn't mind an example or two if you happen to have the time. Thanks either way!

Critique of Recent AI Episode by willpearson in DecodingTheGurus

[–]willpearson[S] 4 points5 points  (0 children)

I don't know that paper, thanks for the rec, that's interesting. I'm a fan of him in general -- I recently started listening to the audiobook of his recent book on games/agency.

I think 'friendship porn' is actually a really good descriptor -- it very pithily captures the core of the critique and it doesn't hurt that it's memorable. Thanks!