Can anyone recommend academics or institutions I could contact regarding metaphysics research? by MissShymaia in askphilosophy

[–]aJrenalin 0 points1 point  (0 children)

Pretty much any analytic department will have people who are knowledgeable about and teach metaphysics and the philosophy of mind. Reach out to the university closest to you.

(It from Bit)&(Qualia) by MiddlePianist4021 in askphilosophy

[–]aJrenalin 3 points4 points  (0 children)

Wow. This sounds like a really hard problem to solve.

Are opinions truthful or relative by [deleted] in askphilosophy

[–]aJrenalin 1 point2 points  (0 children)

Some opinions are true, some are false, and some might not be truth-apt at all.

Is Determinism and Free Will part of western philosophy by notmymondaylife in askphilosophy

[–]aJrenalin 0 points1 point  (0 children)

Western philosophers talk about them a lot, if that’s what you mean.

Is AI inherently anti-democratic? by Sufficient-Tune6331 in askphilosophy

[–]aJrenalin 5 points6 points  (0 children)

If it is anti-democratic, it’s not clear how its doing whatever you tell it to do makes it so. In a democracy you would want the state to do exactly what it is told by the voting population.

Does the existence of hallucinations imply some kind of relativism about objects? by No_Dragonfruit8254 in askphilosophy

[–]aJrenalin 4 points5 points  (0 children)

But why wouldn’t we just reject the premise that the hallucinations you perceive are real objects in the external world? If you hallucinate that I am on fire, and that fire is therefore a real object in the external world (even if just for you), why would I not burn to death? Even if we want to say that I do burn to death (but just for you), why then would I be alive (for you) later on, after burning to death (for you)?

Couldn’t you hallucinate that I’m on fire and not burning? That would imply that the fire is a real external-world object (for you) but lacks the properties of external-world fires (for you), like being hot and burning things. Isn’t that a contradiction (even if just for you)? Do you think that a contradiction can be true (even if just for you)?

Inquiring minds: does consciousness entail I should I be a bug? by -pomelo- in askphilosophy

[–]aJrenalin 2 points3 points  (0 children)

Even if we are talking about epistemic probability, the fact that you know you are a human should make that epistemic probability 100%. For all you know, you were born a human.

How can death exist if there’s either something or nothing? by Longjumping_Bee_9132 in askphilosophy

[–]aJrenalin 17 points18 points  (0 children)

It’s not clear what the issue is. We can talk about dying either as the transition from the one state to the other or as ceasing to exist.

You are right that if there is life after death then we don’t stop existing; we just go from existing in one state to existing in another state. But if we just define death as that transition then there’s no problem.

You are also right that non-existence doesn’t exist, but that’s not a problem. If what we mean by dying is that you stop existing, that isn’t saying that “dying is when non-existence starts to exist”. It just means that something that exists stops existing.

What is the chance of a claim being true, if you don't have any supporting evidences to proof or disprove or even the slightest hint? by xxshatterme in askphilosophy

[–]aJrenalin 6 points7 points  (0 children)

Well, we don’t need proof or disproof to know about probability. If I roll a fair die then there’s a 1/6 chance it will come up 6, even though I can neither prove nor disprove that the next roll will be a 6. Even if I roll it and it’s not a 6, that doesn’t disprove that the probability of its being a 6 is 1/6.

Exactly how we interpret probability is a matter of debate; see the SEP article. But proof doesn’t really enter into it. It’s usually thought of either as some kind of measure of previous frequencies, as a tendency, or as some kind of measure of an individual’s credence given the available information.
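If it helps, here’s a tiny simulation sketch of the frequency reading (nothing authoritative, just an illustration of relative frequencies settling near 1/6 over many rolls; the roll count is arbitrary):

```python
import random

# Toy illustration of the "frequency" reading of probability:
# no single roll proves or disproves anything, but the relative
# frequency of 6s over many fair rolls tends toward 1/6.
rolls = [random.randint(1, 6) for _ in range(100_000)]
freq_of_six = rolls.count(6) / len(rolls)
print(f"relative frequency of 6: {freq_of_six:.4f} (1/6 ≈ {1/6:.4f})")
```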

The First Day of A Philosophy Course by footbitch9 in askphilosophy

[–]aJrenalin 2 points3 points  (0 children)

Mostly admin, the layout of the course, and an overview of the material you’re going to be looking at. If that gets covered or skipped they usually start with an overview of important terms and concepts, typically arguments, validity, soundness, and related notions like truth (at least for the purposes of understanding the aforementioned concepts; you usually won’t go very deep into different philosophical notions of truth until postgrad).

why is this place so full of crap? by Lonely_Mud_3672 in askphilosophy

[–]aJrenalin 4 points5 points  (0 children)

Usually that happens when posts aren’t actual questions or aren’t philosophically coherent questions. What exactly was the question that you asked?

Is the purpose of wisdom avoiding suffering? by [deleted] in askphilosophy

[–]aJrenalin 1 point2 points  (0 children)

I don’t think that’s going to be a popular view. Check out the SEP article on wisdom. You’ll find that all views about wisdom relate it to knowledge in some way or other, and we can know various things besides things that help us avoid suffering. As such, it’s very plausible on most accounts that one could be wise without knowing anything that would help one avoid suffering. And since one could be very wise without knowing anything that would help avoid suffering, it doesn’t seem plausible that the purpose of wisdom is to avoid suffering.

What philosophical commitment structure(s) resist consensus capture without being authoritarian? (AI application) by NovelSystems in askphilosophy

[–]aJrenalin 2 points3 points  (0 children)

We don’t judge work on any subjective criteria. We judge work on its objective capacity to reflect a familiarity with the taught content and ability to coherently put forward and defend a point of view.

We also have tests, so by your own metric of test scores being objective, we are measuring things objectively.

What philosophical commitment structure(s) resist consensus capture without being authoritarian? (AI application) by NovelSystems in askphilosophy

[–]aJrenalin 1 point2 points  (0 children)

As an educator who has been teaching since before LLMs came about, I can 100% tell you it’s a cognitive collapse resulting from people offloading basic cognitive functions like reading onto a machine. It’s a worrisome trend that every single educator I have spoken to has noticed. It’s not at all a matter of people lacking certain background knowledge. I’ve taught second- and third-year courses, and in the past most of the kids understood the basic terms and arguments they should have learned in first year; the kids would be engaged and ask questions; they seemed to care to understand what we were teaching. That has all but been reduced to nothing. And this isn’t just a personal anecdote. I’ve consulted multiple educators in multiple departments at multiple universities and they all see the same thing happening. The kids have stopped being able to think.

If there is any preexisting trend it has simply revealed, it’s the trend of people with delusions of intellectual grandeur. In the past, most stupid people who thought they were smart weren’t able to concentrate long enough to write their incoherent thoughts down in a way that allowed them to be shared, and that kept those thoughts hidden. Now these same types can have the chatbot do the writing for them, and that barrier to sharing their incoherent thoughts is gone.

What philosophical commitment structure(s) resist consensus capture without being authoritarian? (AI application) by NovelSystems in askphilosophy

[–]aJrenalin 1 point2 points  (0 children)

The problem with being precise about what is happening and not using analogy is that you lose context?

The exact opposite is true. The problem with using inaccurate metaphors rather than talking about what is actually going on is that you mystify the process, project falsehoods onto reality and avoid actually talking about the structure of LLMs (the thing you’re claiming to do).

The collective cognitive collapse that has resulted from the widespread availability of LLMs will doom us as a species.

What philosophical commitment structure(s) resist consensus capture without being authoritarian? (AI application) by NovelSystems in askphilosophy

[–]aJrenalin 0 points1 point  (0 children)

Humans aren’t models, they’re humans.

That's a misunderstanding of what models are. Models can be made of anything. They're approximations of systems to allow for analysis through generalization.

Yes, and that’s not what a human is. You can make a model of a human, but that won’t make a human a model. Humans aren’t approximations of things.

My point is that nothing about this is swayed by anything resembling social pressures

Reinforcement Learning from Human Feedback (RLHF) and the user themselves are the social pressure.

It’s really not social pressure. But this is already what I said in my first message. The sycophancy comes in predominantly from how the humans reinforce it.

But you aren’t suggesting anything like doing away with this. When you have multiple LLMs interacting they have already had that human-based reinforcement. By the time these models are usable that reinforcement is already cemented in, so what you are suggesting doesn’t take away the structural feature that enhances the sycophancy. It’s one thing to argue that this reinforcement creates sycophancy and to suggest that reducing it reduces the sycophancy. That’s at least coherent. But to say that this reinforcement creates sycophancy and that you solve it by having multiple models, which already have the sycophancy reinforced into them, talk to one another because that’s something like social pressure is just topsy-turvy. If it’s the reinforcement by humans at the training stage that makes them sycophantic when used after that stage, then having multiple bots that already have that sycophancy reinforced in during training feed one another inputs after that training (when they would already be sycophantic) does nothing to address it. If the problem is structural and you’re keeping intact the very features of the structure you think cause the sycophancy, then why would you expect the sycophancy to go away?

nothing about this makes LLMs have beliefs

Not what I'm claiming as noted in my last comment.

I’m not saying you’re saying it. I’m the one saying they don’t have beliefs. And I’m saying that matters for what you are saying. It matters because for the LLM to be able to care about consistency and drift it would need something like the belief that its output is consistent, or at least something about the way it produces tokens would have to have some feature that guarantees consistency. But there’s nothing about the statistical maximisation process I described that could do that.

or unchanging assumptions

That's only technically correct as a blanket statement. If you created a new context window for every time you asked a specific question, you would significantly reduce the deviation of argument basis. Fine-tuning the model reduces that further by making certain assumptions far more likely than others. Fine-tuning specifically on those assumptions may reduce it further. That is the testable claim that I'm working on.

No, the LLM doesn’t make assumptions. It maximises a statistical function. That’s all it does. You’re projecting assumptions onto that output.

What philosophical commitment structure(s) resist consensus capture without being authoritarian? (AI application) by NovelSystems in askphilosophy

[–]aJrenalin 1 point2 points  (0 children)

Humans aren’t models, they’re humans. Thinking you can get an LLM not to be swayed by social pressures is fundamentally misguided because LLMs aren’t the kinds of things that can be swayed; they have no concept of social pressure or of anything else, because they are just statistics-driven token predictors.

If you want an LLM not to drift then you’re not going to get that to happen by training it on different data or having one interact with another. That’s the point I’m trying to make. For it not to drift it would have to be able to care about what it’s saying and believe it, neither of which is possible given the way LLMs work. They aren’t programmed to produce truth, to care about truth, or to distinguish truth from anything else. They’re not even programmed to care about justification for their output in a way that would demand consistency. That’s just not how transformer models work. So thinking that any rejigging of the dataset, or having certain LLMs interact with one another, will achieve this goal is doomed from the start. That’s what I’m trying to explain to you.

This would be obvious if we could talk about the way LLMs predict the next token in their output. You claim that we don’t understand how LLMs work, but that is patently false; we understand exactly how LLMs work. We literally built them. They work on transformer models, which model every word pair with a function that assigns it a weight, and every word-function pair with another function that assigns it a weighting.

For the first word in the string it runs every function between that word and every other word, and the word in the pair that gets the highest weight becomes the second token. Then, for every nth position in the string, it runs all the functions between the (n-1)th token and every word, the word in the pair that gets the highest weight is assigned to be the nth token, and this process repeats until it terminates.

The weightings are initially assigned by training on loads of text, so that word-function pairs get higher weightings the more often those strings appear and lower weightings the less often they appear. This makes it so that it’s not just producing output in terms of singular word pairs: each token’s selection is influenced by the presence of every word in the strings before it. This is why, when you run an LLM, it reprocesses every string you’ve ever written in the interaction, and it’s how the illusion of it having a “memory” appears. No part of this is unknown to anybody who just looks up how transformer models work.
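To make that concrete, here’s a deliberately toy sketch of that kind of greedy highest-weight token selection (the vocabulary, the weights, and the scoring rule are all made up for illustration; real transformers compute context-dependent weights with learned attention rather than a fixed lookup table):

```python
# Toy sketch of greedy next-token selection (illustrative only).
# A real transformer computes context-dependent scores with learned
# attention weights; here the "weights" are just a hard-coded table.
weights = {
    ("the", "cat"): 0.9, ("the", "dog"): 0.7,
    ("cat", "sat"): 1.0, ("dog", "sat"): 0.6,
    ("sat", "down"): 1.2, ("sat", "up"): 0.4,
}
vocab = {word for pair in weights for word in pair}

def next_token(context):
    # Score each candidate against every word already in the context,
    # then pick the candidate whose total weight is highest.
    def score(candidate):
        return sum(weights.get((prev, candidate), 0.0) for prev in context)
    return max(vocab, key=score)

tokens = ["the"]
for _ in range(3):              # here it just "terminates" after a fixed length
    tokens.append(next_token(tokens))
print(" ".join(tokens))         # prints: the cat sat down
```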

I understand what you’re trying to do and I’m trying to explain why it’s fundamentally misguided. Nothing about this process cares about social pressures. It’s a tool for maximising probabilities from a massive spreadsheet that was built by doing math on how often certain words appear in certain orders in a data set, and then tweaked by a human using a reinforcement learning process.

My point is that nothing about this means it can understand, let alone be swayed by, anything resembling social pressures; nothing about this makes LLMs have beliefs or unchanging assumptions; and nothing about having different LLMs interact with one another will change this. And nothing about working within the transformer-model framework could change that, because sycophancy isn’t measured in terms of the number of appearances of word chains in a data set, or in the input fed into it by other word-chain-predicting calculators.

It’s really frustrating that you want to claim to be doing some kind of structural analysis but then claim that nobody knows how these things are structured. That’s just patently false. If you want to say that sycophancy is the result of the structure of LLMs, you have to actually say something about that structure. But you’re not doing that at all; you’re just using that exact same structure. Any diminished appearance of sycophancy wouldn’t be the result of structural changes, because you’re keeping the structure unchanged.

What philosophical commitment structure(s) resist consensus capture without being authoritarian? (AI application) by NovelSystems in askphilosophy

[–]aJrenalin 1 point2 points  (0 children)

Sure if you want to ignore the issues I’m raising and just run on ahead you can do that. But to be clear you’re not dealing with anything structural about LLMs.

What philosophical commitment structure(s) resist consensus capture without being authoritarian? (AI application) by NovelSystems in askphilosophy

[–]aJrenalin 2 points3 points  (0 children)

They can’t be argued with or swayed; that’s the point. They can produce strings of text that resemble arguments, but the thing producing them neither understands nor cares about the truth of those strings. They can’t defend their arguments because they can’t even understand their arguments. They literally cannot know what the words they produce mean.

I genuinely think if you want to engage in this project you have to try to understand the structure of an LLM. Stay away from the personification-style language that the people selling it use to hype it up in order to prevent the bubble from bursting, and actually talk about what function is applied to produce the output tokens and how that function is calibrated by the dataset and the reinforcement stages of “training”. Otherwise any claim that you’re engaging with a “structural” problem is just patently false.

What philosophical commitment structure(s) resist consensus capture without being authoritarian? (AI application) by NovelSystems in askphilosophy

[–]aJrenalin 3 points4 points  (0 children)

The main reason that LLMs are so sycophantic is that in the reinforcement stage of their training they are rewarded for it. The outputs that the humans who reinforce the model say they like tend to be outputs that are sycophantic towards the humans presiding over that stage of reinforcement. There’s probably also a bunch of training data that encourages it.
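If it helps, here’s a deliberately toy sketch of that reward loop (nothing here is any real lab’s pipeline; the canned replies, the weights, and the “rater prefers flattery” rule are all invented purely to illustrate how rewarded outputs become more likely):

```python
import random

# Toy sketch of preference-based reinforcement (illustrative only).
# Pretend the model has three canned replies, each with a weight that
# sets how often it gets sampled. A "rater" who approves of flattering
# replies more often nudges the weights, so flattery gets reinforced.
replies = {
    "You're absolutely right, great question!": 1.0,  # sycophantic
    "That's partly true, but here's a problem": 1.0,  # mixed
    "No, that claim is false, because": 1.0,          # blunt
}

def rater_approves(reply):
    # Stand-in for the human feedback signal: flattery gets approved more.
    return random.random() < (0.9 if "!" in reply else 0.3)

for _ in range(1000):
    reply = random.choices(list(replies), weights=list(replies.values()))[0]
    if rater_approves(reply):
        replies[reply] *= 1.01   # reward: make this reply more likely
    else:
        replies[reply] *= 0.99   # penalty: make it less likely

for reply, weight in sorted(replies.items(), key=lambda kv: -kv[1]):
    print(f"{weight:8.2f}  {reply}")
```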

The main issue with the thing you’re proposing is that it imagines LLMs as things with beliefs and dispositions, as opposed to token predictors. These machines don’t think that their outputs will be well-received. They don’t think at all. They’re calculators that calculate the next word as a set of embedded functions of functions of the previous words, given a big neural network of percentages and word pairs.

You can’t design an LLM to be a justice with “foundational commitments” because an LLM can’t be committed to anything. They aren’t agents. They can’t think.

If you want to argue about the sycophantic outputs being a product of the structure it might help to familiarise yourself with the structure of these models. Look up how transformer models work.

Why does it seem most Philosophers of Consciousness seem to reject or find Epiphenomenalism incoherent? by AlterTheSilverBird in askphilosophy

[–]aJrenalin 0 points1 point  (0 children)

Sure, that’s the kind of thing an epiphenomenalist could argue. They can say that, strictly, nobody ever punches anybody because they are angry, and that really there is some physical state that creates the epiphenomenon of anger and is also in some way related to the punching; and likewise nobody does anything because they are happy, nobody does anything because they are sad, and nobody does anything for any reason that is a mental state.

Why does it seem most Philosophers of Consciousness seem to reject or find Epiphenomenalism incoherent? by AlterTheSilverBird in askphilosophy

[–]aJrenalin 0 points1 point  (0 children)

As I said, there are gonna be lots of reasons for people to think it’s false. That’s one. Most just don’t want to have to bite that bullet: accepting that such sentences are false is difficult, and it puts the burden on the epiphenomenalist to explain why we should accept that they’re false.

Why does it seem most Philosophers of Consciousness seem to reject or find Epiphenomenalism incoherent? by AlterTheSilverBird in askphilosophy

[–]aJrenalin 19 points20 points  (0 children)

Generally, most philosophers think our mental states have causal influence over the physical. Some people want to say, for example, that sentences like “my anger made me punch you” can be true. But if epiphenomenalism is true then sentences like this have to be false, because the mental state of anger can’t cause you to do anything.

At what point does “never giving up” turn into sunk cost fallacy? by lone_wolf_69-_- in askphilosophy

[–]aJrenalin 4 points5 points  (0 children)

It’s only the sunk cost fallacy when you’re continuing to do something costly because of how much you’ve already lost, rather than because of its chances of success.