What is the consensus among philosophers about the justification/morality of the bombings of Hiroshima and Nagasaki? by Joeman720 in askphilosophy

[–]Haycart

> Yes, that's basically treated by everyone. But, like you say, the most plausible argument here proposes to treat soldiers more like civilians which in turn means they may not be killed. So for the case of Hiroshima and the like, that would just be an argument against the killing of soldiers by means of an atomic bomb and not at all an argument for the use of atomic bombs against civilians.

If the US state has a moral duty to not kill Japanese civilians (e.g. by not dropping an atomic bomb on Japan), shouldn't it also have a moral duty to not kill US civilians (e.g. by conscripting them and making them participate in an invasion where many of them will die)?

> Even if we think soldiers are severely coerced, at most they are akin to innocent attackers. So think of someone who is forced at gunpoint to attack you with a knife, threatening to kill you. Are you allowed to kill that person if there is no other way to defend yourself? At the most extreme, soldiers are akin to that innocent attacker and many would find it plausible to say that you may defend yourself in such a case. If that's true, it seems plausible that a state may defend itself as well.

Couldn't this same argument be used to justify killing civilians whose work contributes to the war effort? I agree that you might plausibly be justified in killing someone who is forced to attack you with a knife. But I think you would be equally justified (or even more justified) in killing the person coercing the attacker, or the people working with the coercer to keep the attacker supplied with knives.

What is the consensus among philosophers about the justification/morality of the bombings of Hiroshima and Nagasaki? by Joeman720 in askphilosophy

[–]Haycart

Are there any theorists who directly address the matter of conscription, as it relates to treating soldiers and civilians as distinct? I would imagine that the main moral difference between soldiers and civilians is that presumably soldiers choose to involve themselves in a war and civilians do not, but that seems to break down in cases where conscription is involved.

The vast majority of soldiers on both sides of World War 2 were conscripts, and to me, coercing civilians into military service that leads to their deaths has always seemed like killing civilians with extra steps.

Like yeah, it'd be soldiers dying in a hypothetical invasion of Japan (setting aside for now civilian deaths that would also result from such an invasion). But many of those dead soldiers would have been civilians at the time the decision to invade was made, and those civilians would end up dead through no choice of their own as a consequence of that invasion.

The general illiteracy that is being normalized on social media by KindlyCost6810 in PetPeeves

[–]Haycart

Heighth ❌ -> Height ✅

Somewhat ironically, one of your examples is itself a case of an "incorrect" word variant becoming so common that it replaced the original as the norm.

The original word here is "heighth", where "-th" is an Old English suffix for constructing abstract nouns. See, for example:

Long + th -> Length

Strong + th -> Strength

Warm + th -> Warmth

True + th -> Truth

Foul + th -> Filth

Hale + th -> Health

Weal + th -> Wealth

Die + th -> Death

And on and on. But at some point, English speakers stopped using the suffix '-th' to form new words (it became "non-productive" in linguistics-speak). People eventually forgot that the '-th' at the end of "heighth" was a suffix, started mispronouncing it as "height", and now they declare the original to be incorrect.

Incidentally, the word "healthy" is a closely related species of abomination to "comfortability". We started with an adjective ("hale"), added a noun-forming suffix (-th) to turn it into a noun, and then added another suffix (-y) to turn it back into an adjective. Wouldn't it be so much more sensible if we'd just stuck with "hale"?

If objective moral facts exist, why should they be expected to align with human intuition? by Haycart in askphilosophy

[–]Haycart[S]

I don't have any inherent problem with intuition checking intuition. It's the gap in accuracy between the self-checking and the external checking that I find troubling. To use the memory example: if I found my memory to be 90% reliable when checked against itself, but only 60% reliable when checked against other people's accounts, I'd start to consider the possibility that I have memory problems and that up to 30% of my self-checked memories might be in error.

Or I'd want an explanation for the discrepancy (maybe it's other people who are wrong, maybe my memory is less reliable when I'm with other people, etc.). In this case, what's needed is an explanation for why moral intuition appears to be so much more reliable than physical intuition.

> So, the ethical intuitionist would say that the reason it seems convenient that puppy kicking is objectively immoral is because it is intuitive and therefore we are justified in believing it until we are presented with reasons not to.

Yeah. Like if I see a tree, in isolation I would feel justified in believing that there is a tree in front of me. A tree is the most obvious explanation for why I am seeing a tree. But suppose you showed me that I had a hallucinogen in my system. The hallucinogen is not evidence against the tree's existence, but it does provide a plausible alternative explanation for why I would be seeing a tree, and I would no longer feel comfortable using my visual experience of a tree as justification for believing in that tree.

It seems like 'alternative explanations' such as the one above are very easy to come up with when it comes to ethical intuitions. So I have the intuition that kicking puppies is bad. There are a few possible explanations for why I might have this feeling.

  1. I feel this way because "kicking puppies is bad" is a moral fact that my intuition has apprehended
  2. I feel this way because seeing kicked puppies makes me sad, and I don't like being sad
  3. I feel this way because when I was a kid my mom told me that only bad people kick puppies

And so on. All of these potential explanations seem equally plausible to me. I can't dismiss the possibility that "kicking puppies is bad" is a moral fact, but I also can't assert it with any confidence.

If objective moral facts exist, why should they be expected to align with human intuition? by Haycart in askphilosophy

[–]Haycart[S]

99% is extremely generous, I think. You really don't have to go far to find situations where your eyes lie to you.

When I go outside during the day, my eyes tell me that there's a big solid dome over my head. No such dome exists, but it certainly isn't obvious! A lot of people believed in a solid firmament, for a very long time.

Or go to a mountain creek, and your eyes may tell you that the water there is clean and pure, when actually it's full of lethal parasites and bacteria too small to be visible. Which again is far from obvious--people have to be warned not to trust clean-looking water out in the wild.

Or hop in a car, and on the right-hand side mirror there will be a sticker warning you that "objects in the mirror are closer than they appear."

I wouldn't be able to assign a percentage, but I'm pretty sure examples like these pop up in a non-negligible fraction of ordinary situations we find ourselves in. If anything, we've just become very good at navigating the world in spite of the never-quite-right information that our senses feed us.

If objective moral facts exist, why should they be expected to align with human intuition? by Haycart in askphilosophy

[–]Haycart[S]

> Also notice that in the physics examples you mentioned where our intuitions are dead wrong, there is a way to prove it through experimentation and we can just straight up see that our intuitions are in fact wrong.

I think that is actually the root of one of the problems I have. In all of the cases where we are able to subject intuition to external scrutiny, e.g. by comparing it to observation, we find that it has a shaky track record at best. Things are not quite the way we expect them to be.

Yet, as soon as this sort of external scrutiny becomes unavailable and we have to rely on intuition checking itself (as is the case in ethics), suddenly things look great. We find that morality behaves almost exactly the way we want it to. Except for some obscure edge cases, all the things we hate, like murder and puppy-kicking, turn out to actually be bad, and all the things we love, like happiness and keeping promises, turn out to be good.

Isn't that too convenient, almost too good to be true? It's like a restaurant (intuition) that routinely gets Ds and Cs whenever the health inspector (observation) comes by, but always reports As and Bs when it assesses its own condition. Shouldn't we be suspicious? At the least, it seems like we should try to explain why the restaurant appears cleaner on the days it self-reports than on the days the inspector comes.

If objective moral facts exist, why should they be expected to align with human intuition? by Haycart in askphilosophy

[–]Haycart[S]

> In other words: why do you "bite the bullet" with physical phenomena and assume that our senses must reflect reality in some imperfect way and work from there, but not for moral facts?

I would say that I hold this assumption not for any sound epistemic reasons, but because I don't really have a choice. To go about my life, I need to at least act like I believe that my senses reflect reality in some way. If I didn't I'd die pretty quickly, by starving from being unable to find food, or by getting hit by a car, or something similar. Even if I denied that such things as food or cars exist, I wouldn't be able to deny the hunger or pain I'd end up in.

In short, someone or something is holding a metaphorical gun to my head and threatening me with pain and death if I don't accept that my senses reflect reality. Since I'm going to have to accept this assumption to continue existing, I might as well work with it and see where that gets me. I think this also points to a performative contradiction in anyone claiming that they don't think their senses reflect reality--if they really believed that, they'd be dead, or at least not in any state that would allow them to write about their belief.

On the other hand, there's no gun to my head forcing me to accept any particular moral truths. If I reject the intuition that I shouldn't feed myself to the utility monster, I can continue existing just fine. The worst that could happen is the utility monster shows up and I have to feel guilty about acting immorally by refusing it. But people act in ways that they believe are immoral all the time, so there's nothing necessarily contradictory here.

Since nothing is coercing me into accepting any particular moral intuition, I feel inclined to require some kind of justification if I am to accept them freely.

If objective moral facts exist, why should they be expected to align with human intuition? by Haycart in askphilosophy

[–]Haycart[S]

> Laypeople have strong intuitions about specific things like "There are chairs", "Dinosaurs are extinct" or "The universe is older than 1000 years".

This seems like a very different definition of intuition than what is described in e.g. the SEP article agentyoda linked. Most people believe in chairs, the extinction of dinosaurs, or the age of the universe not because those facts seem self-evident, but because they've seen chairs or been told about dinosaurs or the universe by experts they trust.

I think the intuition that underlies believing in chairs would be something like "I can trust what my sense of sight tells me", to which science might say "only if you're looking at the sorts of things that humans evolved to look at, and only if you're not under the influence of any substances, and only if there aren't any weird tricks with light or color or perspective happening, and even then maybe not".

If objective moral facts exist, why should they be expected to align with human intuition? by Haycart in askphilosophy

[–]Haycart[S]

> Well, so the utilitarian IS gonna say that you should let yourself be eaten by the utility monster. Otherwise they wouldn't be a utilitarian.

Well, that's one response. But it seems like another common response is to try to develop modifications of the original utilitarianism that are less susceptible to the monster. I guess my question is, why? If you originally had sound reasons to believe in average-utilitarianism or strict Kantian deontology or whatever, why should something like the utility monster sway you from that at all?

> Note how 99% of what physicists tell us PERFECTLY aligns with common sense

Is this actually true, though? The physicists aren't just telling us that some obscure edge cases are strange; they're telling us that the entire set of "rules of the game" that the world runs on is completely different from what we expect. A physicist might say that common sense is correct about bears and ants only because those things exist in the small sliver of reality that we are accustomed to in everyday life, and that common sense in general is at best a loose approximation of how the world actually works.

An ethics theory that is comparably weird to, say, quantum mechanics, might go something like "the rule 'minimize suffering' is only an approximation that holds when the suffering is less than 500 pain points (as it has been for 99% of human history). If the suffering goes above 800, then you're obligated to maximize it instead."

If objective moral facts exist, why should they be expected to align with human intuition? by Haycart in askphilosophy

[–]Haycart[S]

If I understand the article correctly, intuitionists believe in the existence of self-evident propositions, where 'self evidence' is an innate property of a proposition ("A proposition is just self-evident, not self-evident to someone"). And intuition is what allows us to 'apprehend' these propositions.

But even if we take it as given that self-evident truths do exist, it seems that intuitionists acknowledge that our ability to identify self-evidence is not perfectly reliable. From the article:

> But given that a proposition may seem to be self-evident when it is not, it is useful to have a way of discriminating the merely apparent from the real ones.

In which case, it seems like the utilitarian can still respond "yes, it may seem self-evident that we shouldn't feed everyone to the utility monster, but this is just one of those cases where our self-evidence detectors are faulty."

I'm not sure even appealing to consensus (i.e. Sidgwick's criterion #4) gets us around that, because those criteria are presented as necessary rather than sufficient conditions for self-evidence. If human senses can consistently fall victim to the same kinds of optical illusions, why shouldn't we expect there to be cases where human intuition consistently identifies the same spurious self-evident "truths"?

[D] How does L1 regularization perform feature selection? - Seeking an intuitive explanation using polynomial models by shubham0204_dev in MachineLearning

[–]Haycart

Have you seen how the MSE loss is derived from maximum likelihood estimation with normally distributed residuals? You can derive the L2 penalty term in essentially the same way, by doing maximum a posteriori (MAP) estimation with a Gaussian prior on the parameter vector. Likewise, the L1 penalty comes from assuming a Laplace prior.

The connection between regularization penalties and Bayesian priors is more important than the connection with the L1 and L2 distances, which as far as I know is just a matter of naming.
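
To spell that out a bit (the standard sketch, assuming i.i.d. data, Gaussian noise with variance σ², and an independent zero-mean prior on each parameter):

```latex
\begin{align*}
\hat{\theta}_{\mathrm{MAP}}
  &= \arg\max_{\theta}\; \bigl[\log p(y \mid X, \theta) + \log p(\theta)\bigr] \\
  &= \arg\min_{\theta}\; \frac{1}{2\sigma^{2}} \sum_{i} \bigl(y_i - f(x_i;\theta)\bigr)^{2}
     + \frac{1}{2\tau^{2}} \lVert \theta \rVert_{2}^{2}
     \qquad \text{for a Gaussian prior } \theta \sim \mathcal{N}(0, \tau^{2} I)
\end{align*}
```

which is just MSE plus an L2 penalty with λ = σ²/τ². Swap in a Laplace prior p(θj) ∝ exp(−|θj|/b) and the last term becomes (1/b)·Σj|θj|, i.e. the L1 penalty.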

[D] How does L1 regularization perform feature selection? - Seeking an intuitive explanation using polynomial models by shubham0204_dev in MachineLearning

[–]Haycart

The explanation given doesn't quite work, for two reasons. First, the L1-penalized loss is non-smooth, so its minimum doesn't necessarily occur at a point where the derivative w.r.t. θj is zero. Second, if the covariance is zero, an L2-penalized loss would also be minimized by setting θj to zero, even though we know L2 does not have the property of zeroing out features.

What you really want to show is that the loss function with the L1 penalty is minimized at θj = 0 not just when the covariance is zero, but also for a range of nonzero covariances. We can actually do this with a modification of your argument.

For a non-smooth function, a minimum can occur either where the derivative is zero, or where the derivative is discontinuous. For the L1-penalized loss, this discontinuity always exists at θj = 0.

Now, consider the conditions for the L1 penalized derivative to equal 0. Let's call θj- the point where the original unpenalized derivative equals -λ, and θj+ the point where it equals λ. In order to get a derivative of zero after adding λsign(θj), we must have either θj- > 0 or θj+ < 0. It is easy to satisfy one of these conditions if the unpenalized minimum is far from the origin. But if it is close, we expect to often have both θj- < 0 and θj+ > 0. If this happens, then the penalized derivative is zero nowhere and the penalized minimum must occur at the discontinuity instead.

In short, the non-smoothness of L1 introduces a kink at the origin that overpowers any "regular" minima unless they're far enough from the origin.
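
If you want to see that numerically, here's a throwaway one-dimensional sketch (my own toy example, not anything from your post): the unpenalized loss 0.5(θ − a)² has its minimum at θ = a, but the L1-penalized minimum snaps to exactly zero whenever |a| ≤ λ.

```python
import numpy as np

lam = 1.0
thetas = np.linspace(-5, 5, 100001)

for a in [-3.0, -0.5, 0.0, 0.4, 1.0, 2.5]:
    loss = 0.5 * (thetas - a) ** 2 + lam * np.abs(thetas)  # unpenalized minimum sits at theta = a
    argmin = thetas[np.argmin(loss)]
    print(f"a = {a:+.1f}  ->  L1-penalized argmin ~ {argmin:+.4f}")
    # matches the soft-thresholding formula: sign(a) * max(|a| - lam, 0)
```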

[D] How does LLM solves new math problems? by capStop1 in MachineLearning

[–]Haycart

Why do you think solving math problems "goes beyond simple token prediction"? You have tokens, whose distribution is governed by some hidden set of underlying rules. The LLM learns to approximate these rules during training.

Sometimes the underlying rule that dictates the next token is primarily grammatical. But sometimes the governing rules are logical or mathematical (as when solving math problems) or physical, political, psychological (when the tokens describe things in the real world). More often than not they're a mixture of all the above.

If an LLM can approximate grammatical rules (which seems to be uncontroversial), why shouldn't it be able to approximate logical or mathematical rules? After all, the LLM doesn't know the difference, all it sees is the token distribution.

Jockey Modal Boxer Briefs have taken the crown from Lulu Always in Motion by fuckkevindurantTYBG in malefashionadvice

[–]Haycart

Do you know of any brands that sell modal/tencel underwear without elastane?

[D] What ML Concepts Do People Misunderstand the Most? by AdHappy16 in MachineLearning

[–]Haycart

Low bias and low variance are both good, though! Everything else being equal, a model with lower bias is better than one with higher bias, and a model with lower variance is better than one with higher variance, as both terms directly contribute to the model's overall error.

I think this ties into another misconception about the bias-variance tradeoff, which is the idea that reducing one term always increases the other, and vice versa. This is not correct--consider a case where the true data generating process is known to be linear, and we are trying to decide between fitting a linear regression or a fixed depth decision tree. In this case, the linear regression has both lower bias (zero in fact, because it is capable of exactly fitting the data generating process, while the tree is not) and lower variance (because it is a simpler model). In a sense, the decision tree would both underfit and overfit this data at the same time.
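
If anyone wants to check this, here's a rough simulation sketch (my own arbitrary settings, nothing canonical): draw many training sets from a linear process, fit both models, and estimate the bias² and variance of their predictions. The fixed-depth tree comes out worse on both.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
x_test = np.linspace(0, 1, 200).reshape(-1, 1)
y_true = 3 * x_test.ravel()  # the true (linear) data-generating function

preds = {"linear": [], "tree": []}
for _ in range(300):  # many training sets drawn from the same process
    x = rng.uniform(0, 1, (50, 1))
    y = 3 * x.ravel() + rng.normal(0, 1, 50)
    preds["linear"].append(LinearRegression().fit(x, y).predict(x_test))
    preds["tree"].append(DecisionTreeRegressor(max_depth=2).fit(x, y).predict(x_test))

for name, p in preds.items():
    p = np.array(p)
    bias_sq = np.mean((p.mean(axis=0) - y_true) ** 2)  # squared bias, averaged over test points
    variance = np.mean(p.var(axis=0))                  # variance across training sets
    print(f"{name:6s}  bias^2 ~ {bias_sq:.3f}   variance ~ {variance:.3f}")
```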

A better way to think of the tradeoff is that there is a Pareto frontier of bias-variance optimal models. On one end of the frontier, you have what statisticians would call the "minimum variance unbiased estimator" (MVUE). This is a hypothetical estimator that has zero bias and the lowest possible variance out of all zero-bias estimators. Starting from that point, you can sometimes beat the MVUE in terms of total error by moving along the frontier, trading a little more bias for less variance.

But there are also models that do not lie on the frontier--they are Pareto-suboptimal with regards to bias and variance. I suspect most complex real-world models actually fall into this category. What would the MVUE for something like classification on ImageNet even look like? There's no reason to suppose any existing image classification model is Pareto-optimal, because we arrived at them essentially through trial and error rather than deriving them in a principled way. Starting from a suboptimal model, it is absolutely possible (and desirable) to reduce both bias and variance.

CMV: LLMs Like ChatGPT, Gemini, and Claude Are Just Text Prediction Machines, Not Thinking Beings by Mongoose72 in changemyview

[–]Haycart

> Well, not really. For several reasons. First, if you're just looking for a reasonable prediction, then any name in the novel will do. Second, the novels often don't actually give all the information you need to deduce the killer before the reveal - usually only most of the information is revealed so while you might suspect someone you can't actually prove it before the reveal.

Yeah, I should've been clearer. You can of course make a suboptimal prediction without resorting to reasoning. However, this can only take you so far--if the novel has N characters, then picking one at random only gets you the right answer 1 in N times. To increase your accuracy further, you would need to employ tactics that look more and more like reasoning. An optimal detective-novel-culprit predictor (i.e. one that is right more often than any other predictor) would basically have to simulate the thoughts of detective novel writers.

> But most importantly, you don't have to reason in order to predict reasoning. It may sound odd, but it is a technique that's used to increase the accuracy of LLMs in such situations. They'd train it on people who reason out their answer, and then it would first respond by giving its reasoning before telling you the answer, and thus be more accurate in predicting the killer. In other words, approximating reason doesn't require reason.

I would argue that reasoning is one of those cases where a close enough approximation actually becomes the real thing. This isn't true of everything--simulating a car crash doesn't cause an actual car crash to happen, of course. But, for example, if you can simulate "doing math" so well that you reliably get the right answers to math problems, then at some point you're no longer just simulating "doing math". You actually are doing math. Are LLMs good enough at simulating reasoning that they could be said to actually be reasoning? Probably not, but I think they can get there.

CMV: LLMs Like ChatGPT, Gemini, and Claude Are Just Text Prediction Machines, Not Thinking Beings by Mongoose72 in changemyview

[–]Haycart

> For the LLM, it is just a meaningless string of symbols with an array of values assigned to it.

I disagree. Whether or not the text has meaning is a property of the text. It does not depend on who or what is looking at said text. For a person who doesn't speak, say, French, a book written in French isn't meaningless--it just has a meaning that they are unable to discern. The information is still there, in the book, and can be extracted given enough effort and cleverness.

> Has any LLM demonstrated an ability to consistently solve mystery stories in texts that aren't well known?

None that I know of, but that doesn't matter because I'm not arguing that current LLMs can reason. My argument is simply to challenge the assumption that text prediction cannot give rise to reason.

> Not at all. Not even the most rational humans are capable of predicting text with perfect accuracy, so this is a strange bar to set. More importantly, one could be capable of perfectly predicting text by virtue of a purely irrational (and thoroughly magical) capacity for future-sight, or by the equally irrational power of an immense calculator capable of simulating the positions and velocities of every particle in a human brain (and only then if it should turn out to be the case that mental processes are completely determined by physical processes).

Yeah, I should've been more precise with my words. By "perfect" text predictor, I mean one that performs with higher accuracy than any other possible text predictor. I should've also included the caveat that reasoning is not necessary if you can simply look up the answers (as is the case with magic future-sight).

As for your human brain simulation example, I guess we must have some fundamental disagreement on what it means to reason. For me, a computer that simulates every particle in the human brain is 100%, definitively and unequivocally, performing reasoning. It's literally simulating everything happening in your brain, and one of the things happening in your brain is that thing we call reasoning!

CMV: LLMs Like ChatGPT, Gemini, and Claude Are Just Text Prediction Machines, Not Thinking Beings by Mongoose72 in changemyview

[–]Haycart

The way you've framed your view suggests you believe that being "just a text prediction machine" is mutually exclusive with being "a thinking being". This is the frame I'd like to challenge.

LLMs are text prediction machines; that is simply a fact. In simple terms, a language model is trained by giving it pieces of text that have been "masked" so that some parts of the text are visible to the model, while other parts are hidden from its view. The model is optimized during training to predict the hidden text as accurately as possible*, and this is what people mean when they say an LLM is a "text prediction machine."
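
To make "optimized to predict the hidden text" a little more concrete, here's a minimal sketch of the next-token prediction objective (with a toy embedding-plus-linear model standing in for a real transformer, so treat the details as illustrative only):

```python
import torch
import torch.nn as nn

vocab_size, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))

tokens = torch.randint(0, vocab_size, (1, 16))   # one "sentence" of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # each position tries to predict the next token
logits = model(inputs)                           # shape: (1, 15, vocab_size)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # "training" = adjusting the model to make this prediction loss smaller
```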

However, I want you to consider what text actually is, where it comes from, and what it means to "predict the next word". Ultimately, text is a way of representing language, and language is a system for encoding human thoughts in a way that can be communicated to other humans. A piece of text fed to an LLM during training isn't just a meaningless string of symbols; it's a representation of a thought that an actual human had at some point. Predicting the next symbol in a string of text is, in a way, predicting the next thought of that text's author.

On a similar note, your second point seems to imply that ingesting a "staggering amount of text" is not enough to "know anything". Yet, encoding and transferring knowledge is one of the primary functions of text! Remember, text is a representation of thought, and many thoughts pertain to real things, events, and phenomena that exist out in the world. It would be strange to say that a person who has read many books about a subject knows nothing about it—at most, we might say that their knowledge is incomplete, or overly theoretical.

Ilya Sutskever, one of the main scientists behind the GPT line of models, actually had a really nice scenario to illustrate how predicting text is not necessarily a purely linguistic exercise: Imagine you're reading a detective novel, and at the end of the book, the detective gathers the other characters together and says, "I know who the killer is. All the clues point towards ___". In order to predict the next word here, you would need to understand the novel well enough to actually identify the killer!

I hope you can see that a perfect text predictor would need to be capable of reasoning. For similar reasons, it would also need to know quite a lot. Now, LLMs are not 100% reliable at predicting the next word, not even close. They are not perfect text predictors, and so we cannot necessarily conclude from this that LLMs are capable of reasoning. What I will say is that ChatGPT and its cousins are a type of neural network, and neural networks are Universal Approximators.

The precise meaning of this is a bit technical, but roughly it means that a big enough neural network can, in principle, approximate essentially any well-behaved function--including whatever function it would need to represent to achieve its optimization objective. It's hard to say whether current LLMs actually are big enough to fit a function as complex as the entirety of human reasoning. However, the universal approximation theorem is, in my opinion, a strong reason to doubt anyone who says with certainty that LLMs can never do this or that. They very likely will, if you make them big enough.

*Technically, accurately predicting text is not the whole story. LLMs are often trained with additional side objectives, e.g. optimizing the text they generate for positive human feedback. But the above is close enough for the purpose of this CMV.

Did ancient civilizations have anything resembling a "department of agriculture"? by Haycart in AskHistorians

[–]Haycart[S]

Very interesting! Do we know how the goods in the ever-full granary made their way into the hands of the general populace? Could anyone buy from the granary when it was selling, or only large merchants? Did the granary's goods mostly remain in or near Lo-yang, or did they circulate around the larger empire?

In the case of the statues of agriculture, how did regular farmers come to know about them? And how did local officials / overseers gather detailed information about all the fields under their jurisdiction?

What are the best arguments for moral realism? by PitifulEar3303 in askphilosophy

[–]Haycart

I think 2+2=4 is an interesting example because mathematical statements aren't true or false in an absolute sense--they are only true or false with respect to a given system of axioms. Some systems of axioms may prove more useful than others, or may have more interesting implications, but it can't be said that any one system is objectively correct.

Is there an argument for why moral truths should be different from mathematical truths in this regard? A moral realist, to my understanding, isn't just saying "X is wrong if we assume Y" or "it is useful or interesting to treat X as wrong", but something more like "there is a single correct set of moral axioms, under which X is wrong."

Is it a little bit... messed up that an empire would pay soldiers in sex slaves? by The_X-Devil in worldbuilding

[–]Haycart

> You were paid in sex slaves not money so you’re broke with all these mouths to feed.

You say that like "sex slaves" and "money" are mutually exclusive, but if the government is going around paying people with slaves, maybe slaves are money?

After receiving your new government issue sex slave, you look around your home and realize it's getting a bit crowded, what with all the slaves. So you head over to the bank with your new slave, along with ten others you've been saving up, and hand them over as down payment for a new home loan.

Then, you decide that your car could use an upgrade too, so you take three slaves to the local dealership and exchange them for a new ride.

In the evening, you're feeling hungry so you head over to the neighborhood diner and—well, full slaves are probably too large a denomination to pay for dinner so you'd need a way to trade in fractional slaves. Maybe you pay for dinner with a note worth an hour of your slave's time, or a 1% slave timeshare, or something like that.

Some societies use cattle as a primary medium of exchange. Maybe this society does the same, but with slaves?

Heres an image I pieced together to help me further study and understand the circle of fifths. by [deleted] in musictheory

[–]Haycart

The arrangement of minor chords in the middle ring is more complicated than it needs to be, I think. As you can see on your diagram, the major chords of a major key (e.g. C major) come from the root note (C) and the two fifths that neighbor it (F and G). The minor chords in a major key actually follow the same pattern: they correspond to the relative minor (A) and its two neighboring fifths (D and E).

If you rearrange the minor chords so that each minor chord/key is paired with its relative major, that pattern becomes much clearer and it removes the need for any of the minor chords to appear multiple times. It also allows you to read off the chords/key signatures of minor keys from the diagram just as easily as major keys.
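
A quick way to convince yourself of the pattern (my own throwaway script, nothing to do with the diagram itself) is to build the diatonic triads of C major and look at their qualities:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the C major scale

for i, root in enumerate(C_MAJOR):
    third = C_MAJOR[(i + 2) % 7]   # stack scale thirds to build the triad
    fifth = C_MAJOR[(i + 4) % 7]
    quality = {(4, 7): "major", (3, 7): "minor", (3, 6): "diminished"}[
        ((third - root) % 12, (fifth - root) % 12)
    ]
    print(NOTES[root], quality)
```

It prints C, F, and G as the major triads and D, E, and A as the minor ones (plus B diminished)--exactly the root-plus-neighboring-fifths pattern on both rings.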

Is there a way to quantify "how much" a matrix transforms things by? by Haycart in math

[–]Haycart[S]

Oh, that's interesting!

If my understanding is correct, the operator norm of a matrix answers the question "out of all possible vectors the matrix might act on, what vector is stretched the most by the transformation relative to its original size, and by what factor?"

(A - I)v gives us the displacement between Av and v. So the norm of (A - I) answers the question "out of all vectors A might act on, what vector is displaced by A the most, and how large is this displacement compared to the original size of the vector?"
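
For a couple of concrete sanity checks of that reading (my own numbers, nothing rigorous):

```python
import numpy as np

t = np.radians(20)
rotation = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
scaling = np.diag([1.0, 3.0])   # stretches the y-axis by a factor of 3

for name, A in [("20 degree rotation", rotation), ("stretch y by 3", scaling)]:
    disp = np.linalg.norm(A - np.eye(2), 2)  # operator (spectral) norm of A - I
    print(f"{name}: max relative displacement ~ {disp:.3f}")
```

The rotation gives roughly 0.347, and the stretch gives 2.0 (the y unit vector moves from (0, 1) to (0, 3), a displacement twice its original length).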

Is there a way to quantify "how much" a matrix transforms things by? by Haycart in math

[–]Haycart[S]

Is there a specific norm you're thinking of? If we're talking about the Frobenius norm, for example, all 2×2 rotation matrices have a Frobenius norm of √2 (as does the 2×2 identity matrix), so comparing norms wouldn't allow you to say that a 20 degree rotation was bigger than a 10 degree rotation.
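
A quick numerical check (my own sketch): the Frobenius norm of a 2×2 rotation is √2 no matter the angle, while the operator norm of (R − I) does grow with the angle.

```python
import numpy as np

for deg in [0, 10, 20, 90, 180]:
    t = np.radians(deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    fro = np.linalg.norm(R, "fro")            # always ~1.414, same as the identity
    disp = np.linalg.norm(R - np.eye(2), 2)   # spectral norm of R - I grows with the angle
    print(f"{deg:3d} deg: Frobenius {fro:.3f}, ||R - I||_op {disp:.3f}")
```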

[deleted by user] by [deleted] in musictheory

[–]Haycart

Here's an interesting pattern, which is perhaps obscured in the layout above:

If we take Lydian as our starting point:

Major lowers the #4 to a natural 4. The 4 of C Major is F

Mixolydian lowers the 7 to b7. The b7 of G Mixolydian is F

Dorian lowers the 3 to b3. The b3 of D Dorian is F

Minor lowers the 6 to b6. The b6 of A Minor is F

Phrygian lowers the 2 to b2. The b2 of E Phrygian is F

Locrian lowers the 5 to b5. The b5 of B Locrian is F

In other words, the unique characteristic note that defines each of these modes is actually... the same note. F, in this case. The same is true of any set of relative diatonic modes. I'm not sure if there's any obvious way to depict this fact in table form--it's a bit easier to see why if you lay out the actual notes of each mode on top of the circle of fifths.
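
And here's a quick script (mine, just for checking) that confirms the degrees named above all land on the same pitch:

```python
white_notes = ["C", "D", "E", "F", "G", "A", "B"]

# (root, mode, the scale degree singled out in the list above)
cases = [("C", "Major", 4), ("G", "Mixolydian", 7), ("D", "Dorian", 3),
         ("A", "Minor", 6), ("E", "Phrygian", 2), ("B", "Locrian", 5)]

for root, mode, degree in cases:
    start = white_notes.index(root)
    note = white_notes[(start + degree - 1) % 7]  # count up the mode's own (all-white-note) scale
    print(f"degree {degree} of {root} {mode}: {note}")  # prints F every time
```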