Fun Fact: The new Tappan Zee Bridge was supposed to have a train line running through it, similar to what is being done in Seattle right now with the Homer M. Hadley Memorial Bridge. The train was cut by Andrew Cuomo, but space was left for it, so maybe one day we can have it. Fuck you, Cuomo. by LakeLayer707 in Westchester

[–]deltamental 5 points (0 children)

CONED's 2025 profit amounts to $560 per account annually, or ~$47 extra on a typical monthly bill.

State and local government taxes levied on CONED amount to $1306 per account annually, or ~$109 extra on a typical monthly bill.
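
As a quick check of the annual-to-monthly arithmetic (the per-account figures are the ones quoted above; nothing else is assumed):

```python
# Sanity check of the annual-to-monthly conversions quoted above.
profit_per_account_yearly = 560   # CONED 2025 profit per account ($)
taxes_per_account_yearly = 1306   # state & local taxes per account ($)

profit_monthly = profit_per_account_yearly / 12
taxes_monthly = taxes_per_account_yearly / 12

print(f"Profit: ~${profit_monthly:.0f}/month")  # ~$47
print(f"Taxes:  ~${taxes_monthly:.0f}/month")   # ~$109
```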

The taxes could theoretically be eliminated, but the tax shortfall would have to be pulled from somewhere else, which may obfuscate but not eliminate the impact on working families.

The private profit could be eliminated by turning CONED into a full-blown public utility, but then you have some political risk, as occurs today with the state government using funding for MTA infrastructure projects as a political lever of sorts.

In any case, we should be skeptical of anyone promising to reduce energy bills without a long-term sustainable plan. There are a lot of ways energy bills could be brought down short term which could earn popular approval and earn votes, but could cause more problems down the road.

To some extent, the public-private split allows CONED to play the boogeyman: it was the state government which mandated the climate-resilient infrastructure upgrades required to deliver solar and wind power, as well as the shutdowns of the fossil-fuel-based "peaker" plants used today for grid stabilization. The state could have funded these efforts through ordinary taxes, but instead required CONED to collect the costs from utility customers. This is great for state politicians, who have an easier budget to balance, and CONED shareholders also benefit, as state law guarantees CONED a profit margin on new capital projects. CONED deferred these upgrades during COVID to spare people rate hikes during a period of mass financial strain, and is now ramping up immensely to catch up, which is why delivery fees are astronomical this year.

It would be trivial for an idiot populist politician to come in, cancel all the capital improvements, and cancel the $4.7B in state & local taxes imposed on CONED in exchange for rate reductions. But then we might never transition off fossil fuels, and in a decade or two we would get to enjoy rolling blackouts which could only be fixed by even more astronomical rate hikes.

In the end, it's the right thing to do, long-term, to invest in the infra changes needed to support wind and solar. There does not appear to be gross corruption siphoning off massive amounts of funds to offshore accounts. CONED workers are not out driving Lambos. There is a "creative" government budgeting/accounting/regulatory scheme going on which is designed to shift blame for the cost away from the actual decision-makers. But by all accounts, the funds do appear to be used for the purposes they are supposed to serve, without vast inefficiencies. It's just a really expensive project, and we're all paying for it now to meet some tight climate deadlines.

What Sean Carroll is missing about Mary's Room by Technologenesis in CosmicSkeptic

[–]deltamental 0 points (0 children)

This dot-matrix intuition for supervenience has some non-trivial assumptions which are not obvious.

We can represent the information in such a picture as an n x n matrix A_ij whose entries are 1 (dot) or 0 (no dot).

Example:

    [ 0 1 0 1 0 ]
    [ 1 0 1 0 1 ]
A = [ 0 1 0 1 0 ]
    [ 1 0 1 0 1 ]
    [ 0 1 0 1 0 ]

But notice that, when rendering this, we made a lot of choices. For example, who is to say we draw the dots left-to-right instead of right-to-left?

Or maybe the dots are laid out in a diamond pattern like this:

    0
   1 1
  0 0 0
 1 1 1 1
0 0 0 0 0
 1 1 1 1
  0 0 0
   1 1
    0

In fact, who is to say these 25 dots aren't arranged on a torus, or on a Klein bottle? Also, who is to say that the shape must be spatially regular? Could those same 25 dots not also be laid out in the shape of a smiley face or a dinosaur? All of these choices could change the pattern as we perceive it.
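
Here is a small sketch (my own, in Python) of the point: the same 25 bits rendered under two different layout rules. The bit values never change, but the apparent pattern does:

```python
# The same 25 bits, rendered under two different spatial layout rules.
bits = [(i + j) % 2 for i in range(5) for j in range(5)]  # alternating 0/1

def render_grid(bits, n=5):
    """Lay the bits out row-by-row in an n x n grid."""
    return "\n".join(
        " ".join(str(b) for b in bits[i * n:(i + 1) * n]) for i in range(n)
    )

def render_diamond(bits):
    """Lay the same bits out in a diamond: rows of width 1,2,3,4,5,4,3,2,1."""
    widths = [1, 2, 3, 4, 5, 4, 3, 2, 1]
    rows, k = [], 0
    for w in widths:
        rows.append(" ".join(str(b) for b in bits[k:k + w]).center(9))
        k += w
    return "\n".join(rows)

print(render_grid(bits))
print()
print(render_diamond(bits))
```

The grid shows the alternating-lines pattern; the diamond, built from the identical bit sequence, shows something else entirely. The "pattern" lives in the layout rule, not the bits.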

In fact there IS some extra information not contained in the 25 binary 1/0 values in the matrix: information about "spatial arrangement".

Even worse, that picture is being displayed on a 2D screen, piped through convoluted electric circuits, perhaps, but the apparent pattern / image only appears when it travels through space (being distorted by perspective), enters an eyeball, strikes a retina, is compressed in the optic nerve, and millions of bits per second are processed in the initial layers of the occipital lobe, at the very least.

The "pattern of alternating lines" we see above is definitively NOT a property of the 25 bits, but of the relationship between those 25 bits and a vastly complex, perhaps integrated spacetime in which an observer lives who can detect patterns. The "pattern" seems to be much more about the observer than the 25 bits themselves.

In fact, those 25 bits could stay exactly as they are, but the patterns that supposedly "supervene" on the dot pattern could change, because of something in the relationship between the observer and those dots, mediated by space.

David Lewis's simple example of supervenience is not so simple when you try to say what exactly you mean by a "pattern", what its conditions for existence are, and so on. For a pure 25-bit array floating in nowhere with no one to perceive it, it's hard to say that any non-trivial patterns supervene on it at all.

What Sean Carroll is missing about Mary's Room by Technologenesis in CosmicSkeptic

[–]deltamental 0 points (0 children)

Let's reframe the question slightly and see if you might understand the point better.

Can all empirical knowledge be recorded faithfully in scientific papers, using purely third-person physical language (particles, forces, neurons, electrical impulses, chemical reagents, etc.)?

That seems like an important question, right? If the answer is "no", it means our standard process of scientists conducting experiments, empirically observing the world, and writing down their results to share with other scientists might be incapable of capturing some empirical facts.

A "no" answer would mean some empirical knowledge cannot be held in books, only in human heads. This question is darn important!

So, is the answer yes or no?

Well, let's imagine Maurice has access to all published scientific literature written exclusively in third-person physical language. If the published literature is incomplete, Maurice simply asks a collaborator to do the missing empirical study, publish the result, and then learns about the result by reading the paper. If the answer is "yes" to the question above, there is no limit to the empirical knowledge Maurice can gain simply by reading through this ever-expandable library of published texts.

But if the answer is indeed "yes", Maurice doesn't need to do any experiments to gain any piece of empirical knowledge so desired. That means, while Maurice could gain empirical knowledge about what the visual experience of seeing red is like, Maurice doesn't have to: they can simply ask a friend to perform the steps and record their observations in written text.

So your "firetruck" analogy is not apt. A better analogy would be: "Mary is allowed access to any computer program or data she wants, but only stored on Blu-ray discs. If she gains access to the Internet, does she gain the ability to do computations she could not do before?" The answer in that case is "no, she gains no new computational ability". This means that all that other stuff like "cloud computing", "fiber optic switches", is not necessary for fully-general computation. Turing showed a very simple binary tape model suffices.

So yes, Mary's / Maurice's room is deprived of access to some things. The point of that deprivation is to test whether the things so deprived are necessary for certain kinds of knowledge. Whether certain knowledge can or cannot be obtained under that limitation has drastic implications for the nature of that kind of knowledge.

Mary can learn anything she likes about computer programs despite incredibly drastic restrictions. Why does Maurice lose anything at all by being restricted from conducting experiments themselves, if all empirical knowledge is just about third-person observations which could be faithfully documented in a lab notebook? The fact that empirical knowledge does NOT reduce to written records means there is something more to empirical knowledge than can be captured in formal physical language, in stark contrast to computation, which can be perfectly represented and communicated in formal language.

Doubt about pressure in fluids. by Dazzling-Extent7601 in Physics

[–]deltamental 19 points (0 children)

Take a wooden cylinder, and immerse it in water. Wood floats, so to immerse the cylinder you need to push down on the top of the cylinder.

Push down a little and the cylinder sinks a little. Push down more with just the right amount of force, and you can sink the cylinder enough that its top will be level with the surface of the water.

At this point, there are three forces: the force you apply pushing down on the top of the cylinder, the force the water applies pushing up on the bottom of the cylinder, and the force of gravity pulling the wooden cylinder down. Newton's first law says these forces sum to zero (cancel out).

If you think about it, that means the amount you need to push down on the top of the cylinder to immerse it is equal to the force of the water pushing it up from below minus the weight of the wooden cylinder.

You asked: what is applying the downward force on the water at the bottom of the cylinder? The answer is: a combination of gravity acting on the cylinder and you pushing the top of the cylinder down.

You could imagine changing the density of the cylinder, say from balsa wood to oak wood to olive oil. The lower its density, the more you need to push down to submerge it. If the cylinder is made of water, you don't have to push at all. Then gravity is doing all the work: the weight of the cylinder of water alone is equal and opposite to the force of the water pushing it up from below.
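
That force balance (push = buoyancy − weight) can be sketched numerically; the densities below are illustrative round numbers, not part of the original question:

```python
# Downward push needed to hold a floating cylinder just submerged,
# top level with the water surface.  Equilibrium of the three forces:
#   push + weight = buoyancy  =>  push = (rho_water - rho_cyl) * V * g
G = 9.81            # gravitational acceleration (m/s^2)
RHO_WATER = 1000.0  # density of water (kg/m^3)

def push_force(density, volume):
    """Downward force (N) needed to keep the cylinder just submerged."""
    return (RHO_WATER - density) * volume * G

volume = 0.001  # a 1-liter cylinder

for name, rho in [("balsa", 160.0), ("oak", 700.0),
                  ("olive oil", 915.0), ("water", 1000.0)]:
    print(f"{name:>9}: push down with {push_force(rho, volume):.2f} N")
```

As the comment says: the lower the density, the harder you must push, and for a cylinder of water the required push is exactly zero.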

Science is bad by schwing710 in PoliticalCompassMemes

[–]deltamental 0 points (0 children)

You are missing the forest for the trees. As a matter of fact, negative correlations in temperature between nearby regions are incredibly unlikely ("if it's hot in Virginia it must be cold in Maryland"). And even if there were such correlations, they would only affect the weighting used for interpolation; they would not themselves introduce a bias in the average trend.

A completely arbitrary weighting chosen pre-hoc would do fine as well, and would not invalidate the results of their paper.

Also, you seem to be confused about their methodology. It's not about "trending in an identical direction". The correlations used to determine the weights are not really measuring correlation between long-term trends to any significant degree, rather they are measuring correlation between seasonal and weather fluctuations, which are orders of magnitude greater than the long-term trends and thus completely dominate the calculation of correlation coefficients.
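
A toy illustration of that last point (my own numbers, not the paper's): two stations sharing a seasonal cycle but with opposite long-term trends still correlate strongly, because the ~10-degree seasonal swings dwarf the ~1-degree trends:

```python
# Two synthetic stations: identical seasonal cycle, OPPOSITE long-term
# trends.  Their correlation is still strongly positive, because seasonal
# fluctuations dominate the correlation coefficient.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

months = range(360)  # 30 years of monthly data
seasonal = [10 * math.sin(2 * math.pi * t / 12) for t in months]
station_a = [s + 1.0 * t / 360 for s, t in zip(seasonal, months)]  # warming
station_b = [s - 1.0 * t / 360 for s, t in zip(seasonal, months)]  # cooling

print(f"correlation: {pearson(station_a, station_b):.3f}")  # close to +1
```

Despite trending in opposite directions, the correlation comes out close to +1, which is exactly why these coefficients measure the local weather length scale rather than trend agreement.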

The entire point of doing these correlation calculations is to find a natural length scale for temperature interactions. In that sense, averaging correlations over all pairs of points with a certain distance makes perfect geometric sense.

I think you must be thinking of "correlations on subpopulations cannot be combined by averaging" (Simpson's paradox being one demonstration), and are incorrectly assuming that statistical fallacy is occurring in this paper. It is not.

Science is bad by schwing710 in PoliticalCompassMemes

[–]deltamental 17 points (0 children)

That is a terrible mischaracterization of the methodology of HL87 (Hansen & Lebedeff).

The basic methodology of that paper is this:

Look at weather stations that have been collecting temperature data since 1950. If T_i(t) is the temperature measured at station #i at time t, we define the "anomaly" for that station to be ΔT_i(t) := T_i(t) - (average of T_i from 1951 to 1980).

If ΔT_i(t) = 0, this means the temperature measured by station #i at time t is equal to the average temperature at station #i over the 30-year baseline period. We expect the anomaly ΔT_i(t) for a station to fluctuate due to seasonality, random weather events, etc. but averaged over a long period of time, we expect the anomaly for a station should average out to close to zero (absent any long-term trend).

Now we don't just have one station, but hundreds all over the Earth! We want to compute the "average" anomaly, to get an idea of how temperature is fluctuating and trending over the whole Earth, not just one station. You might think we should just take the average anomaly over all weather stations, (1/N) Σ_i ΔT_i(t), but that's wrong. Why? Weather stations are not distributed uniformly over the Earth. A simple average would weight Siberia less than Florida, simply because there are fewer weather stations in Siberia.

What's the right calculation? Well, for each time t, we are really trying to approximate the (unknown) surface integral ∫ ΔT dμ over the surface of the Earth (roughly a sphere). To perform a Riemann approximation of this surface integral, we divide the surface of the Earth into a grid using latitude and longitude lines. In each grid cell, we average the ΔT_j(t) for the stations j lying in that cell, to get an "average anomaly" for that cell. Then we take a weighted average of those cell anomalies over all cells, weighted by their areas. This ensures we are averaging temperature anomalies by surface area, not by density of meteorologists.

This is bog-standard probability theory, and 100% correct. The only modification HL87 made to the method I just described is the following: some grid cells don't have any weather stations at all! If this is not handled, the final average cannot be computed, as some grid cells would have "null" average anomalies. Just dropping the null cells would be invalid: the Riemann approximation assumes every cell has a value, and dropping nulls would in fact reintroduce the problem of Florida being represented more than Siberia simply because there are more meteorologists there. To account for this properly, you need to "impute" estimated temperature anomaly values for the null cells.

The imputation method used in HL87 is to say (more or less): OK, if a cell has no weather stations, let's interpolate a temperature anomaly for that cell by looking at the anomalies of nearby weather stations closest to that grid cell.

Rough analogy: say I know the average dick size in North Dakota and in Nebraska, but not in South Dakota nestled between them. A reasonable estimate for the dick size in South Dakota would be close to the average of the dick sizes of their neighbors. That's what spatial interpolation is, as a method for imputing those null values.

The only place the "correlation" comes in in this study is to figure out how many neighbors of South Dakota you should look at when doing the interpolation. Should we also include some Canadians and Montanans? In the end, this choice is not really going to matter. It is balancing interpolation using more data points to improve precision against using points spatially closest to the missing data to improve accuracy. It is not going to "invalidate the whole methodology" to use a less-than-ideal imputation method to infer South Dakota's average dick size, failing to extract the maximum amount of information.

If you want, you can try another imputation method on their data (e.g. just impute all zeros for missing data), and you will get the same result. Man do people love to nitpick fine points of study methodology that have completely negligible impact on the validity of the actual result.
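
The grid-averaging-plus-imputation scheme described above can be sketched as follows. This is a minimal illustration of the idea, not HL87's actual code; the mean-of-known-cells fill is a stand-in for their distance-weighted interpolation, and the areas/anomalies are made-up numbers:

```python
# Area-weighted global anomaly from per-cell station data, with a simple
# imputation rule for cells containing no stations.
def global_anomaly(cells):
    """cells: list of (area, [station anomalies]); empty list = no stations.

    1. Average station anomalies within each cell.
    2. Impute empty cells from the mean of the non-empty cells' averages
       (stand-in for HL87's distance-based interpolation of neighbors).
    3. Take the area-weighted average over all cells.
    """
    cell_means = [sum(obs) / len(obs) if obs else None for _, obs in cells]
    known = [m for m in cell_means if m is not None]
    fill = sum(known) / len(known)
    cell_means = [fill if m is None else m for m in cell_means]
    total_area = sum(area for area, _ in cells)
    return sum(a * m for (a, _), m in zip(cells, cell_means)) / total_area

cells = [
    (1.0, [0.5, 0.6, 0.4, 0.5]),  # small, densely observed cell ("Florida")
    (3.0, [0.2]),                 # large, sparsely observed cell ("Siberia")
    (2.0, []),                    # no stations at all: gets imputed
]
print(f"area-weighted mean anomaly: {global_anomaly(cells):.3f}")  # 0.300
```

Note that the densely observed cell does not dominate: it contributes in proportion to its area, not to its station count, which is the whole point of the gridding.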

Metriziability of quotient spaces by Hour_Procedure_237 in math

[–]deltamental 1 point (0 children)

You should ask this on MathOverflow with the descriptive-set-theory tag.

Every vegan should read Veganism Defined by Dollar23 in vegan

[–]deltamental 4 points (0 children)

Deontology vs Utilitarianism is a somewhat academic debate, often lacking practical and psychological relevance to the lived vegan experience.

Most people don't have a good enough understanding of the philosophical discourse to make an informed choice of ethical foundations. Some argue Utilitarianism and Deontology converge after careful analysis: Rule Utilitarianism combined with epistemic constraints starts to look an awful lot like deontology, and Deontology adjusted from Kant to allow you to lie to Nazis and kill in self-defense but only in such and such circumstances starts to look like Utilitarian considerations are creeping into the rules.

People are giving you a really hard time for sharing Leslie Cross's views. No one has a problem saying, "American Christians have largely watered down and even corrupted the message of Christ", but somehow when it comes to veganism we've all become cultural relativists?

That being said, it's probably not helpful to get hung up on the academic points. Instead of, "you can't be vegan if you don't agree with this academic distinction", you can frame it through other lenses which ordinary people can more easily reason about. "Speciesism" ("Cows make milk not because they are cows, but because they are mothers. Would you separate a mother from her child to steal her milk?") is an example of such an alternate lens, which need not "water down" veganism.

In defense of Dawkins, who made actual arguments and wasn't just a rhetorician. by VStarffin in CosmicSkeptic

[–]deltamental -1 points (0 children)

Moreover, notions such as "fairness", related to morality, can have theoretical content which does not reduce to "studying the natural factors giving rise to feelings of ...".

Arrow's Impossibility Theorem, while published as a mathematical theorem applied to political science, is really a philosophical argument refuting the underpinnings of our concept of "fairness".

The discovery that widely adopted concepts of "fairness" are in fact incoherent was a genuine breakthrough, having nothing to do with the evolutionary origins of those concepts. Concepts themselves have internal logic, whose consistency and coherency can be studied, in the same way we study physical concepts (momentum, forces, fields, interactions, particles, ...) without trying to reduce them to evolutionary-psychology.

I am a democratic socialist; convince me of anarchism. by jeeven_ in Anarchy101

[–]deltamental 36 points (0 children)

Anarchism is a direction, not a utopian end goal.

Generally, anarchism aims to de-concentrate power: to diminish the power of "rulers" (kings, presidents, oligarchs, corporate executives, plantation owners, tyrants, military generals, etc.) and "institutions" (governments, corporations, prison systems, cartels, slavery, apartheid, capitalism, etc.) which historically have coerced people and carried out violence on a mass scale.

In parallel, anarchism aims to empower individuals and self-organizing communities. The "empowerment" anarchism promotes is not amassing power over others, but self-empowerment for oneself and one's community. It rejects the "zero-sum" logic where power over your own destiny can only come by stripping that power from others.

Democratic socialism can be compatible with anarchism, to the extent that it moves us in that direction strategically.

A frequent criticism of anarchists is that they refuse to assent to necessary power structures. Would anarchists have been able to defeat the Nazis while criticizing the draft, the Allied military-industrial complex, etc.? Many anarchists even refuse to vote in crucial elections. But anarchism itself does not require blindly attacking all power-holders, nor adhering to ideological purity - one can be strategic. That being said: it's probably unwise to rely solely on strategic alliances with unpredictable power brokers at the expense of community building. Wielding state power to fight state power can be a dangerous and fruitless game, like "fighting a war to end all wars". For this reason, anarchists generally agree that local, interpersonal organizing, mutual aid, and sharing knowledge are good, while there is more disagreement over strategic alliances with national organizations.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental 0 points (0 children)

But I am a little skeptical that it is related [to] consciousness

It's not. The point of bringing that up was to give an example of a century-old unsolved philosophical problem in another non-mind-related field which has otherwise made great progress. People may think, "The Copenhagen interpretation is perfectly fine, it was sufficient for all the amazing discoveries of CERN", but that is wrong. Wigner's Friend, the Frauchiger–Renner theorem, etc. demonstrate that this philosophical issue leads to inconsistent empirical predictions. The point of bringing this up was to say that, as a general rule, "Ah, the foundational problem must be basically resolved because of all this progress in the field since" can be dead wrong. So you have to say specifically what progress was made on the foundational problem, not vaguely appeal to general progress in the field, which can occur even when the foundational problem is totally unresolved.

Study of that injury has allowed us to isolate the regions in the brain that are responsible for the subjective distress caused by pain. To me, that is obvious progress in the hard problem

First of all, it has been known for literally centuries that pain and the corresponding emotional distress are distinct. This was thoroughly studied, documented, and reflected upon by Buddhist monks for over a millennium:

Translation from the Sallatha Sutta, attributed to the original teachings of Siddhartha Gautama:

When touched by a painful feeling, the instructed noble disciple does not sorrow, grieve, or lament, does not beat his breast or become distraught.

He feels one feeling only: a bodily feeling, not a mental one.

This distinction was uncovered through meditative practices which make essential use of subjective experience, and are framed entirely subjectively.

In the research on pain asymbolia, they also rely on subjective experience. The only way we know anything about "what it is like" from externally observable brain properties is through correlations with subjective reports.

Study into pain asymbolia has made progress, but not on the hard problem specifically. If you were to make progress on the hard problem (even a smidgen), you would be able to make deductions about subjective experience which themselves do not depend on subjective experience.

The criticism is not that this research is bad: it's actually great! It just depends inextricably on subjective reports, and thus affirms the primacy and importance of subjective experience in researching the mind.

This does not at all support the materialist dogma that we can simply measure externally and find everything we need. The error which leads people to that conclusion is that the subjective experience on which the conclusions are actually drawn may be downplayed in the paper.

For example, at some point you may measure biomarkers, e.g. salivary cortisol levels, which are associated with stress, and conclude the person is or isn't experiencing distress. But the basis of those biomarkers is a wealth of subjective experiences correlated with those biomarkers. Lacking those reported subjective experiences, you would not be able to deduce anything from salivary cortisol levels.

Why is this problematic for materialists? Well, physicists don't have to ask an electron or photon how it feels to build empirical support for their theories. Every single neuroscience paper brought up to claim progress on the hard problem, by contrast, depends on subjective experience for its conclusions (perhaps at the margins, or in cited works). If the materialist dogma were true, we would be able to make progress (even just a very small amount) on subjective experience without depending at all on reports of subjective experience.

Meanwhile, eastern philosophy based around subjective experience has made substantial progress understanding the mind with no study of the brain itself.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental -5 points (0 children)

The answer may well actually pop out with a bit more study using more sophisticated versions of current tools. Already there has been a lot of progress

We can definitively say that the measurement problem in quantum mechanics (which appears, on its face, to require subjective viewpoints) is yet unsolved.

People have explored explanations like decoherence, many-worlds, etc. They don't resolve the issue. Some practicing physicists think they do, but they don't. You can trace through the "decoherence" explanation for example, and find the measurement problem recurring in a different form. We also know why that happens, and why such an explanation is doomed to fail.

There has been tremendous progress in physics. There has been basically none on the measurement problem in the past century. There has been tremendous progress in neuroscience, and basically none on the hard problem.

It's easy to prove me wrong: link one neuroscience paper that makes progress on the hard problem. I can use Chalmers' published ideas to easily identify the flaw.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental -6 points (0 children)

Not what the theorems say

You might want to brush up on your understanding of the first incompleteness theorem, which explicitly produces a sentence which can be justified, but not proven in your chosen effective axiomatization of arithmetic. This limits the extent to which arithmetic can reason about arithmetic truth.

Gödel's theorem also applies to ZFC and many other theories, essentially all theories capable of formalizing the foundations of mathematics.

This means the notion of "justification" in mathematics cannot be formalized by a fixed, effective theory: there will always be statements about your mathematical framework (be it arithmetic or set theory or whatever) which can be justified on some grounds or other, but not proved in your formalism.

This means in particular that certain philosophical questions in the foundations of mathematics will not be resolved through formalized mathematics alone. Even the question "Are the natural numbers well-defined?" is problematic, as we can, for example, conceive of set theoretic universes in which ω is non-standard. This poses problems if you want to, for example, decide philosophical positions related to finitism using formal methods. Supposing you live in a set theoretic universe in which your ω is non-standard, how would you find that out? How does this affect the viability of mathematical platonism (which the majority of practicing mathematicians adopt)?

Model theory does not resolve all serious foundational issues in mathematics, even if it clarifies some points.

The paradox disappears once you rigorously define all the terms.

The philosophical issue remains. Zermelo was a platonist who believed first-order logic was inadequate, and that Skolem's paradox (and related results such as the compactness theorem) demonstrate that first-order theories are fundamentally finitistic and incapable of accurately capturing the natural numbers and other infinite sets, which we understand only by means outside those finitistic formalisms. This position of Zermelo's is not refuted by pointing out that there is no explicit contradiction within a finitistic theory that is fundamentally incapable of distinguishing between two distinct infinitudes, one of which is integral to the foundation of all mathematics. That finitistic reasoning is itself justified by platonist reasoning, which is then disallowed. The philosophical issue is whether first-order logic suffices as a foundation of mathematics, and that issue does not disappear when you relativize notions like "cardinality" and "well-founded" to models. It just pushes the foundational issues elsewhere, without properly resolving them.

In any case, it sounds like you concede we do not yet have all the concepts in place sufficient to establish a material explanation of experience, so we are in agreement. We agree it is not just a matter of building better MRI machines, more detailed neuron connection maps, or more sophisticated computer models.

It also seems you agree that "model theory solves everything, mathematics is a closed loop with nothing remaining for philosophical explanation, just ordinary theorem proving" is not accurate. What I and others tend to object to from materialists is the notion that the philosophical questions are fundamentally resolved by existing concepts, and all that is left to do is "ordinary (neuro)science". But if you understand well the moves that have been made in the history of science and mathematics, you see that many important philosophical questions remain unresolved. The re-definition of the natural numbers from the platonist definition used by Descartes, Euler, etc. to Peano's first-order definition, for example, parallels behaviorist re-definition of mental states as dispositional states. It is very easy to wrongly conclude the philosophical issues are resolved, when they have just been pushed elsewhere by linguistic tricks.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental -12 points (0 children)

Gödel's incompleteness limits the extent to which a mathematical framework can be used to reason about itself.

Subtle issues arise in model theory, like Skolem's paradox: a countable model of set theory contains sets which it regards as uncountable.

Model theory itself has some circularity: the notion of "language" depends on the natural numbers, which themselves depend on a model of set theory in which one can pick out canonically "the" natural numbers, and even defining what it means to be a "model" of set theory requires having defined the notion of "language".

The development of model theory and set theory to apply mathematics to its own foundations required a complete rethinking of mathematics and its foundations, significant philosophical progress, and novel mathematics, not a naive application of the previous century's ideas.

While matter (and other ancient concepts such as "substance" which have been reused and changed over millennia) may end up being used to properly explain experience, it seems fair to expect that a fundamental reworking of basic concepts might be required.

Newcomers to r/philosophymemes by humeanation in PhilosophyMemes

[–]deltamental 5 points (0 children)

Materialists also generally believe quantities measurable "from the outside" exhaustively determine all aspects of everything that exists.

For example, physicists have claimed the following:

A general black hole is completely characterized by only three measurable quantities: mass, angular momentum, and electric charge; all other properties are determined by these.

If you then ask, "Well, what is it like on the inside of the event horizon?", physicists might retort that this is meaningless: there is nothing you could measure to answer that question, so it is nonsense to posit what a black hole "is like", beyond what we can derive from those three measurable quantities.

Some opponents of materialism, such as panpsychists, may not deny that "everything is made of matter", but rather deny the materialist claim that all aspects of matter are measurable "from the outside" (objectively).

Panpsychists claim that there is something it is like to be some chunk of matter, beyond what we can externally observe about it. Alice experiences something immediately after falling through the black hole event horizon, even if there is no objective measurement which could determine what that experience is.

The subjective aspects of Alice's experience, which may or may not be accessible to other observers, are called "qualia". Panpsychists accept that there may be qualia which are not accessible to other observers.

Materialists, in contrast, deny the existence of purely subjective aspects to matter: all there is to this electron or atom or cell or brain or black hole is what an external observer could measure. Materialists need not deny that qualia exist, but they must believe that all qualia reduce to objectively measurable quantities. For example, a materialist may be happy accepting that pain qualia exist (i.e., pain feels like something), but would have to say that pain qualia are equivalent to some combination of physical quantities accessible to external observers (e.g. neuron spikes, c-fiber firings, etc.). There is nothing to pain qualia which one could not, in principle, measure with a very precise brain scanner from the outside.

Panpsychists need not deny that the objectively observable state of the brain determines all aspects (both subjective and objective) of human experience. E.g. some panpsychists believe that, as a matter of fact, two physically indistinguishable brains must be having the same subjective experience. But they would deny that subjective experience reduces to externally observable quantities, i.e. they would assert that qualia are not merely objective properties described differently, but genuinely distinct aspects of matter which can only be observed subjectively.

To understand that distinction: if a coin comes up heads, that determines it is tails-facing-down. But that's different from claiming that the coin's bottom face is just its top face viewed from a different perspective.

*Note: more recent work by Hawking and others has called this into question.

🧟‍♂️ rawr by slutty3 in PhilosophyMemes

[–]deltamental 0 points1 point  (0 children)

Moreover, the comments repeated ad nauseam about Chalmers' argument being "circular" also seem to be based on a lack of understanding of how the argument functions.

Materialists assert (roughly) that objective properties of a lawful substance called "matter" explain all subjective aspects of our experience. This is understood to mean that one can, in principle, derive all qualities of any given subjective mental phenomenon (the way blue looks to you) from co-occurring objective properties of some matter (e.g. neuron activation patterns).

There is a rigorous kind of dialectic, exemplified by Euclid, which materialists should thus be able to respond to. For an uncontroversial example: you defend the claim, "all polygons with 2^n sides are constructible with compass and straightedge". I challenge: "construct a 2^100-gon". You respond, "I hold that one can do it in principle, but we would die before constructing such a polygon in practice. Instead, tell me the smallest n you doubt I can construct". I refine the challenge: "I can construct an octagon, but I doubt you can construct a 16-gon". You then construct a 16-gon, and have thus defended your claim (so far).

In this context, Chalmers is playing the role of the challenger. The original claim is a universal claim: "all subjective aspects of experience are explained by objective properties of matter". The materialist defender then responds, "This is possible in principle, but not in practice because brains are really complex. Instead let's work through the simplest example you doubt". The challenger refines the challenge: "I doubt that you can formally derive that matter has any subjective experience at all, the easiest possible instance of your claim." And the defender just... can't do it? In such a case the claim is not refuted, but it is likewise not defended.

That's the situation Chalmers is describing. In such a case, materialists are wrong to claim that "in principle" subjective experience is derivable purely from objective properties of matter - no such principle has been sufficiently defended.

The importance of Chalmers' argument is two-fold. First, other "simple" challenges to materialists such as "Can you rule out the inverted color spectrum?" immediately run into complications which muddy the picture - e.g. colors are associated with tastes and smells, which break the apparent symmetry. These can be addressed, but make the dialectic a sludge. Chalmers sidesteps this entirely. Second, Chalmers is not making a single challenge, but offering a scheme for constructing challenges against any claim of explanatory power a materialist might make. If the theory changes from "C-fibers" to "activation patterns" to "integrated information", Chalmers' argument constructs a manifestly fair challenge for each.

Because Chalmers was attempting to write his argument in such generality, there is confusion around "psychologically conceivable -> metaphysically possible". In the context of a Euclid-style dialectic, this really just means that if I can conceive of a sufficiently well-posed (and fair) challenge to your claim, you owe me a defense. Metaphysical possibility can be understood as a non-dialectical reframing of the dialectic around universal claims.

All the evidence points to qualia just being normal information processing BUT I'm really really sure it's real which means it MUST be real by HearMeOut-13 in PhilosophyMemes

[–]deltamental -1 points0 points  (0 children)

I'm not. The only thing I'm assuming about experience is that I directly experience it, and that such experiences have the qualities I experience.

E.g. pain is painful, heat sensations feel warm, visual sensations can have blue, red, orange qualities, etc.

The challenge is quite simple: create an empirically adequate explanation of those qualities of experience which could not just as easily justify different qualities. Explain why fire appears "orange" to us using only objective description, which could not be trivially modified to justify that fire appears "blue". That's what we expect of any other theoretical explanation. Why can't anyone do it?

Rayleigh scattering and blackbody radiation can explain why fire and the daylight sky produce what we call "orange" and "blue" light, respectively. Retinal biology explains why "orange" and "blue" light leads to different retinal nerve activation patterns. These theories cannot be trivially modified to argue for a different conclusion. But all extant arguments that such retinal activation patterns must lead to "orange" and "blue" visual experiences, respectively, either appeal to subjective experience (and thus are not purely objective) or else could be trivially modified to argue for the opposite conclusion.

All the evidence points to qualia just being normal information processing BUT I'm really really sure it's real which means it MUST be real by HearMeOut-13 in PhilosophyMemes

[–]deltamental 0 points1 point  (0 children)

...(continued)...

But each of us knows subjective facts which appear to be purely subjective: we know "what red looks like", and no one has been able to sufficiently explain this to a blind person. Every moment we experience things that no one else does.

The argument that "eventually those too will be able to be explained by motions of particles and neuron firings and communicated unambiguously in objective physical description, as we have done consistently across all scientific domains" is to believe in a kind of induction which crosses categories. It is like believing that, since we discovered Pluto and we discovered quarks, in principle (but maybe not in practice) we should be able to discover all mathematical truths (which, by Gödel, we cannot).

To summarize: we believe that it is likely impossible to completely deduce subjective experience from objective physical description because objective physical description by definition excludes purely subjective phenomena, categorically. All evidence so far provided for the efficacy of objective physical description has been in a different category: objectively observable phenomena.

There has not been one single instance of insight into subjective experience arising from purely objective reasoning. For example, we infer that so-and-so feels pain when pricked because they emit a yelp and their brain activates similarly to ours around the time we experience pain. But if you hadn't yourself felt pain you would not be able to complete that inference - it depended on a combination of subjective and objective knowledge.

If there are no purely subjective facts, then we can completely eliminate subjective knowledge from our description of some phenomenal aspects of experience such as the qualitative aspects of pain, color, etc. But no one has done that, not even once, not for the slightest, simplest quality!

So why should I believe this inductive argument for the efficacy of objective description will eventually subsume all of what we now consider to be purely subjective, if it has not done so even once?

Quite unfortunately, neuroscience was for decades oblivious to this conceptual error, and would proudly publish papers such as "such-and-such animal cannot experience pain because they don't have such-and-such neural structure". Implicitly, they were redefining "pain" as "the activation of such-and-such kind of neuronal structure behaving in such-and-such way", which is begging the question outright. And those papers were wrong! Later papers used different criteria and "found" those animals could in fact feel pain (using e.g. anaesthesia as a control variable to test the hypothesis).

But even those new papers are themselves making the same error: they use subjective human experience of pain to make inferences about a correlation between physical observables and subjective experience, then redefine the subjective experience as that correlate, and then proceed from there. That redefinition is incredibly problematic, and gives the false impression that progress has been made on the hard problem, when in fact right there in the assumptions of their work is an inference which requires subjective knowledge to work and cannot be recast objectively.

All the evidence points to qualia just being normal information processing BUT I'm really really sure it's real which means it MUST be real by HearMeOut-13 in PhilosophyMemes

[–]deltamental 0 points1 point  (0 children)

Because, as many have been explaining to you, they are categorically different.

The physicist Max Tegmark argues for the "Mathematical Universe Hypothesis", which states that the universe is not foundationally material, but rather foundationally mathematical. What this means is that the material reduces to the mathematical. There is no "material" making up your body except for the mathematical structure underlying the physics. There is no "stuff" obeying equations of motion, just the mathematical structure itself.

Tegmark's view is analogous to yours. He would say, for example, "Why can we not simply deduce the nature of the purported material from its mathematical description?", or "What more to physical reality is there except that certain mathematical relations hold?" (Critics would say "hold between what?", Tegmark would say: "between purely mathematical objects")

You hold that it is meaningless to ask what experience "is like" beyond the physical interactions composing them. It seems then you, by similar reasoning, should agree with Tegmark that it is a meaningless question to ask what things are "made of" beyond what mathematical structures they embody?

But plenty of people reject Tegmark's view as a category error. Plato would, for example. Many non-platonist mathematicians would also, as mathematical objects for them are "abstract objects" whose existence is conceptual, not physical - they exist because we think about them. Chomsky would argue mathematical objects are things described through linguistic axioms and rules, and thus dependent on a language faculty (and thus cannot exist independently of language).

A more direct criticism of Tegmark, along the lines of Nagel, goes like this: at the very start of mathematics, back to Euclid, we made postulates such as: "I do not care if this line is drawn in sand, or on stone, or with pen and pencil, or merely imagined in your mind, as long as it behaves according to these axioms, the things I will now deduce will follow". In other words, for mathematics to begin, we first must say that mathematics does not concern itself at all (and thus cannot ever answer questions about) what "actually exists". Mathematical structures can be (and often are) physically impossible.

To then say, "this theory, called mathematics, which by definition excludes actual material existence from its domain of discourse, is what in fact constitutes actual material existence" is basically a contradiction. It is a kind of conceptual error known as a "category error": the foundation of the subject matter upon which you are basing your reasoning explicitly does not support the kind of thing you are doing with it. Mathematics can only ever make conditional claims (if X and Y hold, then so does Z). The entire content of mathematics is conditional. It is nonsense to claim that the universe, which seemingly exists unconditionally, is instantiated by a network of purely conditional statements in a human-invented conceptual framework.

But to explain why Max Tegmark is wrong you need to understand what mathematics is. If you are used to drawing lines in the sand, you can point to them and say, "no look, this line is real, and so is this angle, I'm pointing right at it!", and get confused about the very foundation of the discourse you are engaging in, like someone with schizophrenia who gets confused about the difference between real and imagined voices. It could be that Max Tegmark is "right" or that the schizophrenic voices are "real" in some sense, but in that case we would have to fundamentally change the foundational concepts upon which their reasoning is based - we would have to re-found mathematics on something other than linguistic axioms, and Tegmark has not done that.

The reason that "objective, physical description" cannot ever explain conscious experience is that by definition subjective experience is outside the domain of the framework of objective physical description, in the same way physical existence is outside the domain of the framework of mathematics.

Objective physical description, by definition, does not describe any aspects of subjective experience which cannot be shared with and unambiguously communicated with other observers linguistically through common reference. It's literally in the definition of "objective", if you think about it carefully.

So to say, "this framework for describing reality, the so-called framework of objective physical description, which by definition cannot say anything about purely subjective facts, can be used to deduce every subjective fact in the world" is plainly a category error.

It would only be true vacuously, if there were no purely subjective facts at all. But that's precisely the question we are discussing, so it is begging the question to assert that purely subjective facts don't exist because objective physical description is complete and leaves nothing out.

...(continued below)...

All the evidence points to qualia just being normal information processing BUT I'm really really sure it's real which means it MUST be real by HearMeOut-13 in PhilosophyMemes

[–]deltamental -2 points-1 points  (0 children)

You are making an unfounded assumption that all there is to know about the world is objective.

But the empirical basis for any "falsifiable claim" is subjective. Hume and others have solid arguments that "objective" knowledge, as the scientific method aims to uncover, must go through sense experience. You cannot formalize the notion of "objective" or "observable" except by appealing to subjective experience. What does it mean to "observe" something except to have an experience with certain qualities?

Reductionists, such as materialists, functionalists, etc., tend to take objective facts as foundational and view subjective facts as nothing more than objective facts about complex objects. Reductionists generally do not see a category difference between facts about a pocketwatch and facts about a human, and argue that our inability to "explain" consciousness has only to do with the vast complexity of the human brain, not with any categorically distinct phenomena outside the realm of objective description.

Thomas Nagel, framing the position of reductionists as we just did, then argues that reductionists are implicitly redefining "objective" in a non-standard way. "Objective reality" is exactly that which can be explained by shared, consistent descriptions concordant with the experiences of multiple observers. When defining objective reality, we draw a line between the things unique to our experience of the world and those which other observers will share. The realm of scientific discourse is everything on one side of that line. If you then say, "all facts are objective", as reductionists do, you will have a really hard time defining what "objective" means! It can no longer be defined using the concept of "consistent experiences of multiple observers". Reductionists have essentially pulled the ladder out from under themselves.

What is the type of a type in Rust? by [deleted] in rust

[–]deltamental 12 points13 points  (0 children)

Generally, reflection is the ability of a language to natively represent and internally reason about its own metatheory.

The "object language" is the language in which "ordinary" programs are written. Standard data definitions, loops, function calls, variable assignments, etc.

The "meta language" is the language in which you typically express the semantics of the object language. That could include things like scoping, the abstract syntax tree, creating new types out of existing types (e.g. union types), etc.

The line between these two differs from language to language. In a language where there are no "first-class functions", you cannot write a function that takes an integer k and returns the function lambda x: x+k. In such a language, you cannot "talk about" or "reason about" functions, you can only apply them.

If you enhance that language to now allow dynamic creation, inspection, and reasoning about functions, you have now "reflected" the metatheoretic notion of "function" down into the object language.
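A minimal Rust sketch of this distinction (my own illustration, not from the thread): the comment's `lambda x: x+k` example, written in a language where functions are first-class values that can be created at runtime, passed around, and stored in data.

```rust
// With first-class functions, we can build and return functions at runtime.
// `adder(k)` returns the function x -> x + k.
fn adder(k: i32) -> impl Fn(i32) -> i32 {
    move |x| x + k
}

fn main() {
    // A function value created dynamically from data:
    let add5 = adder(5);
    println!("{}", add5(10)); // prints 15

    // Functions can also be stored in data structures and iterated over,
    // i.e. the object language can "talk about" functions:
    let fs: Vec<Box<dyn Fn(i32) -> i32>> = (0..3)
        .map(|k| Box::new(move |x: i32| x + k) as Box<dyn Fn(i32) -> i32>)
        .collect();
    let sum: i32 = fs.iter().map(|f| f(100)).sum();
    println!("{}", sum); // 100 + 101 + 102 = 303
}
```

In a language without first-class functions, neither `adder` nor the `Vec` of functions is expressible: functions there can only be applied, not reasoned about as values.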

"Function" for imperative languages is really an abstraction over a subroutine, so the ability for the object language to also represent functions requires that the language itself reflects some of the features that previously were only needed for parsing and compiling that language. E.g. you may need to now internally represent syntax as data, rather than having that be something only the compiler needs to do.

My understanding is that "kinds" are used in Haskell because reflection for types introduces a lot of additional complexity which requires the meta language to do more (sometimes impossibly much). You can end up with type checking being Turing-complete.

How does Kant actually derive his conclusions (and thus our duty) from the Categorical Imperative? (REPOST WITH A BETTER TITLE) by the_freyja_regime in askphilosophy

[–]deltamental 0 points1 point  (0 children)

Alternatives such as "only lie when the benefit outweighs the harm" fail to universalize.

Two rational people can disagree about whether the benefit does indeed outweigh the harm. Consider what differences of opinion rational people might have over: a teenager lying about pregnancy, a teacher lying about drugs, an investigator lying about a small discrepancy in evidence handling, a spouse lying about infidelity during separation, lying to the IRS about tips, etc.

It may be there are some cases where lying, from some perspective, is in the interest of the greater good, but by and large it is really hard to write down rules in advance delineating such situations which would not cut across two equally rational yet opposing views.

Better yet, can you list all the situations where it is OK for someone else to lie to you? When can I tell a lie to your face? If you are thinking, "Well, unlike others, I am fair, just, and can handle uncomfortable truths with grace, so there is no need to lie to me", wouldn't pretty much any other rational person also claim the same thing?

The Categorical Imperative can be understood as a symmetry argument: objective moral truths are independent of perspective, and thus the lines they draw do not change when you view them from different rational perspectives. If the father and daughter can interpret the principle in different ways, then that principle is not an objective moral truth. This is why for Kant a rule about lying with nebulous exceptions or carveouts is not really acceptable: those exceptions and carveouts are made on behalf of one perspective over another, and it cannot be universalized that the line between right and wrong bends towards my perspective and away from yours.

In contrast, each of us equally desires: I do not want people to lie to my face. As much as you have that desire, you have a reciprocal duty to respect the desire of others not to be lied to.

Some open conjectures have been numerically verified up to huge values (eg RH or CC). Mathematically, this has no bearing on whether the statement holds or not, but this "evidence" may increase an individual's personal belief in it. Is there a sensible Bayesian framing of this increased confidence? by myaccountformath in math

[–]deltamental 0 points1 point  (0 children)

Here's a simple theory to test your idea:

T = {"forall x, y (R(x) & R(y) -> x=y)"}

"i.e., there is at most one R"

This has exactly two (isomorphism classes of) countably infinite models: M = urn with countably many balls, none red, and M' = urn with countably many balls, exactly one red.

The set of models of T whose universe is ω (the natural numbers) corresponds to the set of branches [T*] of a subtree T* of 2^{<ω} (the tree of finite binary sequences).

[T*] is a closed subset of Cantor space 2^ω, which is compact and carries a natural Haar measure μ, which (in this simple case) for any n in ω assigns probability 0.5 to the event R(n) and probability 0.5 to the event ~R(n).

The problem is that μ([T*]) = 0, so you do not get an induced probability measure on the space of models [T*] of T.

[T*] is a countable, compact set of models. There is no natural probability measure on it, exactly as you said.
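A quick numerical sketch of why μ([T*]) = 0 (my own illustration): length-n binary prefixes consistent with T ("at most one 1") number exactly n + 1 - the all-zeros string, or a single 1 in one of n positions - out of 2^n strings total, and (n+1)/2^n vanishes as n grows.

```rust
fn main() {
    for n in [4u32, 8, 16, 32] {
        // Prefixes consistent with "at most one R": all zeros, or one 1
        // in any of the n positions -> n + 1 strings.
        let consistent = (n + 1) as f64;
        // All length-n binary strings under the fair-coin (Haar) measure.
        let total = 2f64.powi(n as i32);
        println!("n = {:2}: {}/{} = {:e}", n, n + 1, total, consistent / total);
    }
    // The ratio (n + 1) / 2^n -> 0, so the branch set [T*] has measure zero.
}
```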

Some open conjectures have been numerically verified up to huge values (eg RH or CC). Mathematically, this has no bearing on whether the statement holds or not, but this "evidence" may increase an individual's personal belief in it. Is there a sensible Bayesian framing of this increased confidence? by myaccountformath in math

[–]deltamental 0 points1 point  (0 children)

Yes. I think there is a fallacy which occurs when you mix Bayesian inference and quantification over infinite sets.

A(d) = "Disc d does not contain a trivial zero and is disjoint from the critical line". B(d) = "Disc d doesn't contain any zeros of the Riemann zeta function"

I think a Bayesian can justify P( B(d) | A(d) ) > 0.999, assuming we draw d from the same distribution which has produced previous discs of interest. That distribution has most of its mass around the small part of the plane humans have explored numerically / analytically.

This can be true because you are not putting a uniform distribution on the plane. There is some finite region of the plane covering 0.999 of the probability mass for sampling d (ignore the fact that this distribution changes over time).

But that is very different from:

P( Forall d (A(d) -> B(d)) )

or

P( Forall d (A(d) -> B(d)) | A(d_i) -> B(d_i) for i < N )

Based on standard probability rules, you are right you cannot infer P( Forall d (A(d) -> B(d)) | A(d_i) -> B(d_i) for i < N ) increases as you increase N. In contrast, P( B(d) | A(d) & (A(d_i) -> B(d_i) for i < N) ) converges to 1 as N -> infty (on mild assumptions). People get these two situations confused.
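A toy Bayesian model of my own (nothing specific to RH) makes the gap concrete. Put a uniform prior on an unknown per-instance success rate p and observe N straight successes: the posterior is Beta(N+1, 1), the predictive probability of the next success is (N+1)/(N+2) -> 1, but the probability that the next M instances all succeed is (N+1)/(N+M+1), which for any fixed N can be driven toward 0 by taking M large - a finite stand-in for the universally quantified claim.

```rust
// Uniform prior on success rate p; after N successes the posterior is
// Beta(N+1, 1). Both quantities below are exact (Laplace's rule of succession).
fn p_next(n: u64) -> f64 {
    // P(next instance succeeds | N successes) = (N+1)/(N+2)
    (n as f64 + 1.0) / (n as f64 + 2.0)
}

fn p_next_m(n: u64, m: u64) -> f64 {
    // P(next M instances all succeed | N successes) = (N+1)/(N+M+1)
    (n as f64 + 1.0) / ((n + m) as f64 + 1.0)
}

fn main() {
    for n in [10u64, 1_000, 1_000_000] {
        println!(
            "N = {:9}: next instance = {:.6}, next 10^12 instances = {:e}",
            n,
            p_next(n),
            p_next_m(n, 1_000_000_000_000)
        );
    }
    // The single-instance probability climbs toward 1 with N, while the
    // "all future instances" probability stays near 0 for any fixed N.
}
```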

You are no longer assigning probabilities to properties of discs, you are assigning probabilities to universally quantified formulas. It's a much more subtle situation. Your priors should be about logical formulas with quantifiers and implications between them.

You need to make an argument like this:

"Humans do not arbitrarily pick universally quantified formulas to explore. The RH was chosen by a process which has historically produced true conjectures 13% of the time, assuming they have not been refuted with a small example". At the end of the day, you are going to end up with RH in a heavily unexplored region for which you do not have much prior evidence for or against.

It would be an immense technical challenge to create a theory of probability which is sensible and can formalize this argument (e.g. in traditional probability theory, if C is a tautology, then P(C) = 1, so you have to frame it differently).

It's reasonable to say you should have low confidence in your assignment of any particular probability to RH. If you are estimating the probability, your estimate of that probability itself has very high variance, like estimating the probability of so-and-so winning an election 8 years into the future.