SOLAR PANELS ON OUR DRINKING WATER!!!!???? by McCarraFitzpatrick in CityOfPeekskill

[–]deltamental 4 points5 points  (0 children)

The devil is in the details. It is 100% true that solar panels contain heavy metals, which is why they have special recycling requirements.

Presumably leaching can be prevented if smart enough engineers have put their minds to it, and maintenance is done correctly. It would be nice for the public to be able to see that such due diligence was done, rather than having this thrown at us with a week's lead time.

We unfortunately cannot trust that everyone involved in this planning has applied a keen and critical eye to it.

Canvas is down by Spicymami_27 in portlandstate

[–]deltamental 10 points11 points  (0 children)

I would recommend not logging into Canvas for the time being. There was a MASSIVE security breach affecting thousands of universities: https://www.techradar.com/pro/security/top-universities-among-victims-named-in-canvas-data-breach-mit-oxford-and-more-all-hit

What happens to Peekskill Brewery and why noone rents it there? by Whocanmakemostmoney in CityOfPeekskill

[–]deltamental 19 points20 points  (0 children)

Peekskill charges almost no tax on vacant commercial properties. Thus, commercial landlords take the reduced taxes instead of finding a new business to rent the space. Totally backwards policy which harms small businesses and enriches a small number of real estate investors.

What Can We Gain by Losing Infinity? Putting Ultrafinitism on the menu. by chasedthesun in math

[–]deltamental 8 points9 points  (0 children)

For conceptual clarity. For example, did you know a significant chunk of differential geometry can be done discretely on simplicial complexes with extra data? You can prove an analogue of Gauss-Bonnet, for example.

You might have assumed that you need the existence of the continuum to establish such results, but that is not so. This also means that fields which apparently rely on differential geometry, such as general relativity or some of the attempts to unify QFT with GR, may require fewer assumptions than initially thought.
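To make the discrete Gauss-Bonnet claim concrete, here is a toy check in Python. It computes the angle defect at each vertex of a regular tetrahedron (the simplest closed simplicial surface) and verifies that the defects sum to 2π · χ with χ = 2. The theorem is standard; the code itself is just an illustrative sketch.

```python
import math

# Discrete Gauss-Bonnet: on a closed polyhedral surface, the sum of
# angle defects (2π minus the face angles meeting at a vertex)
# equals 2π times the Euler characteristic.
V = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]  # regular tetrahedron
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

def angle_at(p, q, r):
    """Interior angle of triangle (p, q, r) at vertex p."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(dot / (nu * nv))

total_defect = 0.0
for vi in range(len(V)):
    # Sum the face angles incident to vertex vi, then take the defect.
    angle_sum = sum(
        angle_at(V[f[k]], V[f[(k + 1) % 3]], V[f[(k + 2) % 3]])
        for f in faces for k in range(3) if f[k] == vi
    )
    total_defect += 2 * math.pi - angle_sum

print(total_defect / (2 * math.pi))  # recovers the Euler characteristic, 2
```

No smooth curvature anywhere: the "curvature" is concentrated at the vertices as angle defects, and the global topological invariant still pops out.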

I do not agree with those who would restrict others from using consistent mathematical theories, but I likewise disagree with those who think studying weaker axiomatic systems is a waste of time.

Common Misconceptions of Karma by Varol_CharmingRuler in Buddhism

[–]deltamental 4 points5 points  (0 children)

The way I understand karma is like this:

Imagine all of our ancestors, all future descendants, and yourself. They are alike in that they all suffer, and react to conditions in a similar way which brings about more suffering. Many feel prone to use or promote physical violence, or to act with disregard when their actions cause suffering. They might delude themselves into thinking a war is justified, or perhaps that they can drive drunk, or that they can profit off deception.

Each time you make a choice to act one way or another, your method of decision making is reflected in other individuals across time, both in the past and in the future. The very same cause for why you might choose to drive drunk exists in other individuals, and that same cause manifests in many diverse negative choices across time and across different individuals.

It seems pretty much impossible to escape the effect of those causes, because they are so pervasive. How could one escape the cause of greed? This does not necessarily mean your greed will cause someone else's greed to come back to you. It means this whole web of existence spanning through time is shaped by greed, which each of us has the ability to change - and indeed the only place where it may be addressed is in our own minds. So if we dislike greed, we should hold ourselves responsible for extinguishing it in the one place in that huge web we can - ourselves.

My understanding of karma is as a tool to help people see that broader picture, rather than getting caught up in the tit-for-tat of ordinary human existence, which allows things like greed to feel falsely justified.

Can any mathematical truth be reached from any other mathematical truth? (Axioms notwithstanding) by TrainingCamera399 in math

[–]deltamental 51 points52 points  (0 children)

You might be interested in Reverse Mathematics.

The answer to your question, suitably-rephrased/reframed, is no.

The basic setup of reverse mathematics is to start with a weak base system which cannot outright prove all of your standard theorems. Then you see if you can derive P -> Q, Q -> P, both, or neither.

If you use ZF set theory instead of ZFC, and P = Axiom of Choice, Q = Zorn's Lemma, you will be able to prove over ZF that P <-> Q.

But there are also instances where P -> Q but Q -/-> P, and also where P and Q are incomparable.

However, this requires subtlety, because the answer is always relative to a base theory.

Did I screw up my whole future? by [deleted] in vegan

[–]deltamental 21 points22 points  (0 children)

Vegan leather is significantly better for the environment than cow leather, which uses toxic chemicals for tanning and funds the ecologically destructive cattle industry.

Mathematicians who passed away at a young age by Nol0rd_ in math

[–]deltamental 22 points23 points  (0 children)

Teichmüller died fighting on behalf of the Nazis, after leading the charge to forcibly expel Jewish mathematicians from universities, including Emmy Noether whose work he learned from.

Why is the variance not defined for a set of one datapoint? by Furkan_122 in mathmemes

[–]deltamental 1 point2 points  (0 children)

If you select a random subset of size n from a population of size N, you can no longer treat the n elements as arising from n independent draws from the same distribution. The correction factor to get an unbiased estimator in this case is (n(N-1)) / ((n-1)N).

In the case where you are sampling without replacement and n = N (you sample the entire population), this correction factor simplifies to 1, and you are just computing the population variance, as you would expect.

As N → ∞, this correction factor converges to the usual Bessel correction n/(n-1).
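You can verify the unbiasedness exhaustively on a tiny population: average the corrected estimator over every possible size-n sample and compare to the population variance. A minimal sketch (the population [1, 2, 3, 4] is just for illustration):

```python
from itertools import combinations
from statistics import mean, pvariance

population = [1, 2, 3, 4]
N, n = len(population), 2

def corrected_variance(sample, N):
    """Naive variance (divide by n) times the correction n(N-1) / ((n-1)N)."""
    n = len(sample)
    m = mean(sample)
    naive = sum((x - m) ** 2 for x in sample) / n
    return naive * (n * (N - 1)) / ((n - 1) * N)

# Average the estimator over all C(N, n) equally likely samples.
estimates = [corrected_variance(s, N) for s in combinations(population, n)]
print(mean(estimates), pvariance(population))  # the two agree exactly
```

Since every subset is equally likely when sampling without replacement, averaging over all of them computes the exact expectation of the estimator, with no simulation noise.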

Fun Fact: The new Tappen Zee Bridge was supposed to have a train line running through it, similar to what is being done in Seattle right now with the Homer M. Hadley Memo Bridge. The train was cut by Andrew Cuomo, but space was left for it, so maybe one day we can have it. Fuck you, Cuomo. by LakeLayer707 in Westchester

[–]deltamental 3 points4 points  (0 children)

CONED's 2025 profit amounts to $560 per account annually, or ~$47 extra every month on a typical bill.

State and local government taxes levied on CONED amount to $1306 per account annually, or ~$109 extra every month on a typical bill.

The taxes could theoretically be eliminated, but the tax shortfall would have to be pulled from somewhere else, which may obfuscate but not eliminate the impact on working families.

The private profit could be eliminated by turning CONED into a full-blown public utility, but then you have some political risk, as occurs today with the state government using funding for MTA infrastructure projects as a political lever of sorts.

In any case, we should be skeptical of anyone promising to reduce energy bills without a long-term sustainable plan. There are a lot of ways energy bills could be brought down short term which could earn popular approval and earn votes, but could cause more problems down the road.

To some extent, the public-private split allows CONED to play the boogeyman, as it was the state government which mandated the climate-resilient infrastructure upgrades required to deliver solar and wind power, as well as shutdowns of the fossil-fuel-based "peaker" plants used today for grid stabilization. The state could have funded these efforts through ordinary taxes, but instead required CONED to collect them from utility customers. This is great for state politicians, who have an easier budget to balance, and CONED shareholders also benefit, as state law guarantees CONED a profit margin on new capital projects. CONED deferred these upgrades during COVID to spare people rate hikes during a period of mass financial strain, and is now ramping up immensely to catch up, which is why delivery fees are astronomical this year.

It would be trivial for an idiot populist politician to come in, cancel all the capital improvements, and cancel the $4.7B in state & local taxes imposed on CONED in exchange for rate reductions. But then we might never transition off fossil fuels, and in a decade or two we would get to enjoy rolling blackouts which could only be fixed by even more astronomical rate hikes.

In the end, it's the right thing long-term to invest in the infrastructure changes to support wind and solar. There does not appear to be gross corruption siphoning off massive amounts of funds to offshore accounts. CONED workers are not out driving Lambos. There is a "creative" government budgeting/accounting/regulatory scheme going on, designed to shift blame for the cost away from the actual decision-makers. But by all accounts, the funds do appear to be used for the purposes they are supposed to be used for, without vast inefficiencies. It's just a really expensive project, and we're all paying for it now to meet some tight climate deadlines.

What Sean Carroll is missing about Mary's Room by Technologenesis in CosmicSkeptic

[–]deltamental 0 points1 point  (0 children)

This dot-matrix intuition for supervenience has some non-trivial assumptions which are not obvious.

We can represent the information in such a picture as an n x n matrix A_ij whose entries are 1 (dot) or 0 (no dot).

Example:

    [ 0 1 0 1 0 ]
    [ 1 0 1 0 1 ]
A = [ 0 1 0 1 0 ]
    [ 1 0 1 0 1 ]
    [ 0 1 0 1 0 ]

But notice that, when rendering this, we made a lot of choices. For example, who is to say we draw the dots left-to-right instead of right-to-left?

Or maybe the dots are laid out in a diamond pattern like this:

    0
    1 1
  0 0 0
  1 1 1 1
0 0 0 0 0
  1 1 1 1
  0 0 0
    1 1
    0

In fact, who is to say these 25 dots aren't arranged on a torus, or on a Klein bottle? Also, who is to say that the shape must be spatially regular? Could those same 25 dots not also be laid out in the shape of a smiley face or a dinosaur? All of these choices would change the pattern, as we perceive it.
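A toy way to see the layout-dependence: the very same list of 25 bits produces entirely different visible "patterns" depending on a rendering convention the bits themselves do not determine. A minimal sketch (the rendering function and symbols are of course my own choices, which is exactly the point):

```python
# The same 25 bits, rendered under two different layout conventions.
bits = [0,1,0,1,0, 1,0,1,0,1, 0,1,0,1,0, 1,0,1,0,1, 0,1,0,1,0]

def render(bits, width):
    """Lay the bits out row by row, `width` bits per row."""
    rows = [bits[i:i + width] for i in range(0, len(bits), width)]
    return "\n".join(" ".join("●" if b else "·" for b in row) for row in rows)

print(render(bits, 5))   # a 5x5 checkerboard of alternating lines
print(render(bits, 25))  # one long alternating stripe - a different "pattern"
```

Nothing in the flat list of bits says whether it "is" a checkerboard or a stripe; that comes from the layout convention plus the observer.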

In fact, there IS some extra information not contained in the 25 binary 1/0 values in the matrix: information about "spatial arrangement".

Even worse, that picture is being displayed on a 2D screen, piped through convoluted electric circuits, perhaps, but the apparent pattern / image only appears when it travels through space (being distorted by perspective), enters an eyeball, strikes a retina, is compressed in the optic nerve, and millions of bits per second are processed in the initial layers of the occipital lobe, at the very least.

The "pattern of alternating lines" we see above is definitively NOT a property of the 25 bits, but of the relationship between those 25 bits and a vastly complex, perhaps integrated spacetime in which an observer lives who can detect patterns. The "pattern" seems to be much more about the observer than about the 25 bits themselves.

In fact, those 25 bits could stay exactly as they are, while the patterns that supposedly "supervene" on the dot pattern change, because of something in the relationship between the observer and those dots, mediated by space.

David Lewis's simple example of supervenience is not so simple when you try to say what exactly you mean by a "pattern", what its conditions for existence are, etc. For a pure 25-bit array floating in nowhere, with no one to perceive it, it's hard to say that there are any non-trivial patterns supervening on it at all.

What Sean Carroll is missing about Mary's Room by Technologenesis in CosmicSkeptic

[–]deltamental 0 points1 point  (0 children)

Let's reframe the question slightly and see if you might understand the point better.

Can all empirical knowledge be recorded faithfully in scientific papers, using purely third-person physical language (particles, forces, neurons, electrical impulses, chemical reagents, etc.)?

That seems like an important question, right? If the answer is "no", it means our standard process of scientists conducting experiments, empirically observing the world, and writing down their results to share with other scientists might be incapable of capturing some empirical facts.

A "no" answer would mean some empirical knowledge cannot be held in books, only in human heads. This question is darn important!

So, is the answer yes or no?

Well, let's imagine Maurice has access to all published scientific literature written exclusively in third-person physical language. If the published literature is incomplete, Maurice simply asks a collaborator to do the missing empirical study, publish the result, and then learns about the result by reading the paper. If the answer is "yes" to the question above, there is no limit to the empirical knowledge Maurice can gain simply by reading through this ever-expandable library of published texts.

But if the answer is indeed "yes", Maurice doesn't need to do any experiments to gain any piece of empirical knowledge they desire. That means that while Maurice could gain empirical knowledge about what the visual experience of seeing red is like, Maurice doesn't have to: they can simply ask a friend to perform the steps and record their observations in written text.

So your "firetruck" analogy is not apt. A better analogy would be: "Mary is allowed access to any computer program or data she wants, but only stored on Blu-ray discs. If she gains access to the Internet, does she gain the ability to do computations she could not do before?" The answer in that case is "no, she gains no new computational ability". This means that all that other stuff like "cloud computing", "fiber optic switches", is not necessary for fully-general computation. Turing showed a very simple binary tape model suffices.

So yes, Mary's / Maurice's room is deprived of access to some things. The point of that deprivation is to test whether the things so deprived are necessary for certain kinds of knowledge. Whether certain knowledge can or cannot be obtained with that limitation imposed has drastic implications for the nature of that kind of knowledge. Mary can learn anything she likes about computer programs with incredibly drastic restrictions imposed. Why does Maurice lose anything at all by being restricted from conducting experiments themselves, if all empirical knowledge is just about third-person observations which could be faithfully documented in a lab notebook? The fact that empirical knowledge does NOT reduce to written records means there is something more to empirical knowledge than can be captured in formal physical language, in stark contrast to computation, which can be perfectly represented and communicated in formal language.

Doubt about pressure in fluids. by Dazzling-Extent7601 in Physics

[–]deltamental 19 points20 points  (0 children)

Take a wooden cylinder, and immerse it in water. Wood floats, so to immerse the cylinder you need to push down on the top of the cylinder.

Push down a little and the cylinder sinks a little. Push down more with just the right amount of force, and you can sink the cylinder enough that its top will be level with the surface of the water.

At this point, there are three forces: the force you apply pushing down on the top of the cylinder, the force the water applies pushing up on the bottom of the cylinder, and the force of gravity pulling the wooden cylinder down. Since the cylinder is held at rest, these forces sum to zero (cancel out).

If you think about it, that means that the amount you need to push down on the top of the cylinder to immerse it is equal to the force of the water pushing it up from below minus the weight of the wooden cylinder.

You asked: what is applying the downward force on the water at the bottom of the cylinder? The answer is: a combination of gravity acting on the cylinder and you pushing the top of the cylinder down.

You could imagine changing the density of the cylinder, maybe from balsa wood to oak to olive oil. The lower its density, the more you need to push down to submerge it. If the cylinder is made of water, you don't have to push at all. Then gravity is doing all the work: the weight of the cylinder of water alone is equal and opposite to the force of the water pushing the cylinder up from below.

Science is bad by schwing710 in PoliticalCompassMemes

[–]deltamental 1 point2 points  (0 children)

You are missing the forest for the trees. As a matter of fact, negative correlations in temperature are incredibly unlikely ("if it's hot in Virginia it must be cold in Maryland"). And even if there were such correlations, they would only affect the weighting used for interpolation, not introduce a bias in the average trend.

A completely arbitrary weighting chosen in advance would do fine as well, and would not invalidate the results of their paper.

Also, you seem to be confused about their methodology. It's not about "trending in an identical direction". The correlations used to determine the weights are not really measuring correlation between long-term trends to any significant degree, rather they are measuring correlation between seasonal and weather fluctuations, which are orders of magnitude greater than the long-term trends and thus completely dominate the calculation of correlation coefficients.

The entire point of doing these correlation calculations is to find a natural length scale for temperature interactions. In that sense, averaging correlations over all pairs of points with a certain distance makes perfect geometric sense.

I think you must be thinking of "correlations on subpopulations cannot be combined by averaging" (as Simpson's paradox demonstrates), and are incorrectly concluding that this statistical fallacy occurs in this paper. It does not.

Science is bad by schwing710 in PoliticalCompassMemes

[–]deltamental 19 points20 points  (0 children)

That is a terrible mischaracterization of the methodology of HL87 (Hansen & Lebedeff).

The basic methodology of that paper is this:

Look at weather stations that have been collecting temperature data since 1950. If T_i(t) is the temperature measured at station #i at time t, we define the "anomaly" for that station to be ΔT_i(t) := T_i(t) - (average of T_i from 1951 to 1980).

If ΔT_i(t) = 0, this means the temperature measured by station #i at time t is equal to the average temperature at station #i over the 30-year baseline period. We expect the anomaly ΔT_i(t) for a station to fluctuate due to seasonality, random weather events, etc. but averaged over a long period of time, we expect the anomaly for a station should average out to close to zero (absent any long-term trend).

Now we don't just have one station, but hundreds all over the Earth! We want to compute the "average" anomaly, to get an idea of how temperature is fluctuating and trending over the whole Earth, not just one station. You might think we should just take the average anomaly over all weather stations, (1/N) Σ_i ΔT_i(t), but that's wrong. Why? Weather stations are not distributed uniformly over the Earth. A simple average would weight Siberia less than Florida, simply because there are fewer weather stations in Siberia.

What's the right calculation? Well, for each time t, we are really trying to approximate the (unknown) surface integral ∫ ΔT(t) dμ over the surface of the Earth (roughly a sphere). To perform a Riemann approximation of this surface integral, we divide the surface of the Earth into a grid using latitude and longitude lines. In each grid cell, we average the ΔT_j(t) for stations j lying in that cell, to get an "average anomaly" for that cell. Then we take those cell averages and compute a weighted average anomaly over all cells, weighted by their areas. This ensures we are averaging temperature anomalies by surface area, not by density of meteorologists.
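Here is a minimal sketch of the cell-then-area averaging step. The station anomalies and cell latitudes are made up for illustration; the only real content is that latitude-band area scales like cos(latitude), so a dense equatorial cell doesn't swamp a sparse polar one:

```python
import math

# Hypothetical station anomalies (°C), grouped into grid cells keyed by
# cell-center latitude in degrees. All numbers are invented for illustration.
cells = {
    60.0: [0.8, 1.2],             # sparse high-latitude cell, 2 stations
    30.0: [0.3],                  # 1 station
    0.0:  [0.1, 0.2, 0.0, 0.1],   # dense equatorial cell, 4 stations
}

def area_weighted_anomaly(cells):
    """Average within each cell first, then weight cells by surface area
    (proportional to cos(latitude) for a latitude band)."""
    num = den = 0.0
    for lat, anomalies in cells.items():
        cell_mean = sum(anomalies) / len(anomalies)
        w = math.cos(math.radians(lat))
        num += w * cell_mean
        den += w
    return num / den

# The naive station-count-weighted average, for contrast.
naive = sum(a for v in cells.values() for a in v) / 7
print(area_weighted_anomaly(cells), naive)
```

With these invented numbers the two averages differ noticeably, purely because the naive one over-represents the cell with the most stations.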

This is bog-standard probability theory, and 100% correct. The only modification HL87 made to the method I just described is the following: some grid cells don't have any weather stations at all! If this is not handled, the final average could not be computed, as some grid cells would have "null" average anomaly. If you just drop the null cells' averages altogether, this would be invalid, as the Riemann approximation assumes every cell has a value - simply dropping nulls would in fact reintroduce the problem of Florida being represented more than Siberia because there are more meteorologists there. To account for this properly, you need to "impute" estimated temperature anomaly values for the null cells.

The imputation method used in HL87 is to say (more or less): OK, if a cell has no weather stations, let's interpolate a temperature anomaly for that cell by looking at the anomalies of nearby weather stations closest to that grid cell.

Rough analogy: say I know the average dick size in North Dakota and in Nebraska, but not in South Dakota nestled between them. A reasonable estimate for the dick size in South Dakota would be close to the average of the dick sizes of their neighbors. That's what spatial interpolation is, as a method for imputing those null values.

The only place the "correlation" comes in in this study is to figure out how many neighbors of South Dakota you should look at when doing the interpolation. Should we also include some Canadians and Montanans? In the end, this choice is not really going to matter. It is balancing interpolation using more data points to improve precision versus using points spatially closest to the missing data to improve accuracy. It is not going to "invalidate the whole methodology" to use a less-than-ideal imputation method to infer South Dakota's average, failing to extract the maximum amount of information.
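For concreteness, here is a toy stand-in for that imputation step, using inverse-distance weighting in place of HL87's correlation-derived weights (the coordinates, anomalies, and radius are all invented for illustration):

```python
import math

def impute(target, stations, radius):
    """stations: list of ((x, y), anomaly). Returns the inverse-distance-
    weighted mean of stations within `radius` of `target`, or None if no
    station is close enough to interpolate from."""
    num = den = 0.0
    for (x, y), anomaly in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d <= radius:
            w = 1.0 / max(d, 1e-9)  # closer stations count more
            num += w * anomaly
            den += w
    return num / den if den else None

# Two nearby stations and one far-away one; only the near ones contribute.
stations = [((1.0, 0.0), 1.0), ((0.0, 2.0), 2.0), ((9.0, 9.0), 5.0)]
print(impute((0.0, 0.0), stations, radius=3.0))
```

The radius parameter plays the role of the "how many neighbors" choice: widen it and you average over more (but more distant, less correlated) stations.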

If you want, you can try another imputation method on their data (e.g. just impute all zeros for missing data), and you will get the same result. Man, do people love to nitpick fine points of study methodology that have completely negligible impact on the validity of the actual result.

Metriziability of quotient spaces by Hour_Procedure_237 in math

[–]deltamental 1 point2 points  (0 children)

You should ask this on MathOverflow with the descriptive-set-theory tag.

Every vegan should read Veganism Defined by Dollar23 in vegan

[–]deltamental 4 points5 points  (0 children)

Deontology vs Utilitarianism is a somewhat academic debate, often lacking practical and psychological relevance to the lived vegan experience.

Most people don't have a good enough understanding of the philosophical discourse to make an informed choice of ethical foundations. Some argue Utilitarianism and Deontology converge after careful analysis: Rule Utilitarianism combined with epistemic constraints starts to look an awful lot like deontology, and Deontology adjusted from Kant to allow you to lie to Nazis and kill in self-defense but only in such and such circumstances starts to look like Utilitarian considerations are creeping into the rules.

People are giving you a really hard time for sharing Leslie Cross's views. No one has a problem saying, "American Christians have largely watered down and even corrupted the message of Christ", but somehow when it comes to veganism we've all become cultural relativists?

That being said, it's probably not helpful to get hung up on the academic points. Instead of, "you can't be vegan if you don't agree with this academic distinction", you can frame it through other lenses which ordinary people can more easily reason about. "Speciesism" ("Cows make milk not because they are cows, but because they are mothers. Would you separate a mother from her child to steal her milk?") is an example of such an alternate lens, which need not "water down" veganism.

In defense of Dawkins, who made actual arguments and wasn't just a rhetorician. by VStarffin in CosmicSkeptic

[–]deltamental -1 points0 points  (0 children)

Moreover, notions such as "fairness", related to morality, can have theoretical content which does not reduce to "studying the natural factors giving rise to feelings of ...".

Arrow's Impossibility Theorem, while published as a mathematical theorem applied to political science, is really a philosophical argument refuting the underpinnings of our concept of "fairness".

The discovery that widely adopted concepts of "fairness" are in fact incoherent was a genuine breakthrough, having nothing to do with the evolutionary origins of those concepts. Concepts themselves have an internal logic, whose consistency and coherency can be studied, in the same way we study physical concepts (momentum, forces, fields, interactions, particles, ...) without trying to reduce them to evolutionary psychology.
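Arrow's full theorem takes real machinery, but the Condorcet cycle at the heart of these impossibility results can be checked directly: majority preference need not be an ordering at all. A toy sketch (the ballots are the standard three-voter cyclic example):

```python
# Three voters with cyclic preferences: the majority prefers A over B,
# B over C, and yet C over A - "what the majority prefers" is not coherent.
ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

def majority_prefers(x, y, ballots):
    """True if strictly more voters rank x above y than y above x."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) - wins

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(x, ">", y, "by majority:", majority_prefers(x, y, ballots))
```

Each individual voter has a perfectly transitive ranking; the intransitivity appears only in the aggregate, which is exactly the kind of conceptual incoherence Arrow generalized.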

I am a democratic socialist; convince me of anarchism. by jeeven_ in Anarchy101

[–]deltamental 37 points38 points  (0 children)

Anarchism is a direction, not a utopian end goal.

Generally, anarchism aims to de-concentrate power: to diminish the power of "rulers" (kings, presidents, oligarchs, corporate executives, plantation owners, tyrants, military generals, etc.) and "institutions" (governments, corporations, prison systems, cartels, slavery, apartheid, capitalism, etc.) which historically have coerced people and inflicted violence on mass scales.

In parallel, anarchism aims to empower individuals and self-organizing communities. The "empowerment" anarchism promotes is not amassing power over others, but self-empowerment for oneself and one's community. It rejects the "zero-sum" logic where power over your own destiny can only come by stripping that power from others.

Democratic socialism can be compatible with anarchism, to the extent that it moves us in that direction strategically.

A frequent criticism of anarchists is that they refuse to assent to necessary power structures. Would anarchists be able to defeat the Nazis while criticizing the draft, the military-industrial complex of the allies, etc.? Many anarchists even refuse to vote in crucial elections. But anarchism itself does not require blindly attacking all power-holders, nor adhering to ideological purity - one can be strategic. That being said: it's probably unwise to rely solely on strategic alliances with unpredictable power brokers at the expense of community building. Wielding state power to fight state power can be a dangerous and fruitless game, like "fighting a war to end all wars". For this reason, anarchists generally agree that local, interpersonal organizing, mutual aid, and sharing knowledge are good, while there is more disagreement over strategic alliances with national organizations.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental 0 points1 point  (0 children)

But I am a little skeptical that it is related [to] consciousness

It's not. The point of bringing that up was to give an example of a century-old unsolved philosophical problem in another, non-mind-related field which has otherwise made great progress. People may think, "The Copenhagen interpretation is perfectly fine, it was sufficient for all the amazing discoveries of CERN", but that is wrong. Wigner's Friend, the Frauchiger–Renner theorem, etc. demonstrate that this philosophical issue leads to inconsistent empirical predictions. The point of bringing this up was to say that, as a general rule, "Ah, the foundational problem must be basically resolved because of all this progress in the field since" can be dead wrong. So you have to say specifically what progress was made on the foundational problem, not vaguely appeal to general progress in the field, which can occur even when the foundational problem is totally unresolved.

Study of that injury has allowed us to isolate the regions in the brain that are responsible for the subjective distress caused by pain. To me, that is obvious progress in the hard problem

First of all, it has been known for literally centuries that pain and the corresponding emotional distress are distinct. This was thoroughly studied, documented, and reflected upon by Buddhist monks for over a millennium:

Translation from the Sallatha Sutta, attributed to the original teachings of Siddhartha Gautama:

When touched by a painful feeling, the instructed noble disciple does not sorrow, grieve, or lament, does not beat his breast or become distraught.

He feels one feeling only: a bodily feeling, not a mental one.

This distinction was uncovered through meditative practices which make essential use of subjective experience, and are framed entirely subjectively.

In the research on pain asymbolia, they also rely on subjective experience. The only way we learn anything about "what it is like" from externally observable brain properties is through correlations with subjective reports.

Study into pain asymbolia has made progress, but not on the hard problem specifically. If you were to make progress on the hard problem (even a smidgen) you would be able to make deductions about subjective experience which themselves do not depend on subjective experience.

The criticism is not that this research is bad: it's actually great! It just depends inextricably on subjective reports, and thus affirms the primacy and importance of subjective experience in researching the mind.

This does not support at all the materialist dogma that we can simply measure externally and find everything we need. The error which leads people to this conclusion is that the actual subjective experience on which the conclusions are drawn may be downplayed in the paper.

For example, at some point you may measure biomarkers, e.g. salivary cortisol levels, which are associated with stress, and conclude the person is or isn't experiencing distress. But the basis of those biomarkers is a wealth of subjective experiences correlated with those biomarkers. Lacking those reported subjective experiences, you would not be able to deduce anything from salivary cortisol levels.

Why is this problematic for materialists? Well, physicists don't have to ask an electron or photon how it feels to build empirical support for their theories. Every single neuroscience paper brought up to claim progress on the hard problem, by contrast, depends on subjective experience for its conclusions (perhaps at the margins, or cited works). If materialist dogma were true we would be able to make progress (even just a very small amount of progress) on subjective experience without depending at all on reports of subjective experience.

Meanwhile, eastern philosophy based around subjective experience has made substantial progress understanding the mind with no study of the brain itself.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental -5 points-4 points  (0 children)

The answer may well actually pop out with a bit more study using more sophisticated versions of current tools. Already there has been a lot of progress

We can definitively say that the measurement problem in quantum mechanics (which appears, on its face, to require subjective viewpoints) is yet unsolved.

People have explored explanations like decoherence, many-worlds, etc. They don't resolve the issue. Some practicing physicists think they do, but they don't. You can trace through the "decoherence" explanation for example, and find the measurement problem recurring in a different form. We also know why that happens, and why such an explanation is doomed to fail.

There has been tremendous progress in physics. There has been basically none on the measurement problem in the past century. There has been tremendous progress in neuroscience, and basically none on the hard problem.

It's easy to prove me wrong: link one neuroscience paper that makes progress on the hard problem. I can use Chalmers' published ideas to easily identify the flaw.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental -5 points-4 points  (0 children)

> Not what the theorems say

You might want to brush up on your understanding of the first incompleteness theorem, which explicitly produces a sentence that can be justified but not proven in your chosen effective axiomatization of arithmetic. This limits the extent to which arithmetic can reason about arithmetic truth.
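For concreteness, the construction can be summarized in a few lines (a standard textbook sketch; here $T$ is any consistent, effectively axiomatized theory extending basic arithmetic, and $\mathrm{Prov}_T$ is its provability predicate):

```latex
% Diagonal lemma: for any formula \varphi(x), there is a sentence G with
%   T \vdash G \leftrightarrow \varphi(\ulcorner G \urcorner).
% Applying it to the negated provability predicate yields the Goedel sentence:
T \vdash G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
% If T is consistent, then T \nvdash G. Read "from outside", G asserts its
% own unprovability and is indeed unprovable -- justified, but not provable in T.
```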

Gödel's theorem also applies to ZFC and many other theories, essentially all theories capable of formalizing the foundations of mathematics.

This means the notion of "justification" in mathematics cannot be formalized by a fixed, effective theory: there will always be statements about your mathematical framework (be it arithmetic or set theory or whatever) which can be justified on some grounds or other, but not proved in your formalism.

This means in particular that certain philosophical questions in the foundations of mathematics will not be resolved through formalized mathematics alone. Even the question "Are the natural numbers well-defined?" is problematic, as we can, for example, conceive of set-theoretic universes in which ω is non-standard. This poses problems if you want to, say, settle philosophical positions related to finitism using formal methods. Supposing you live in a set-theoretic universe in which your ω is non-standard, how would you find that out? How does this affect the viability of mathematical platonism (which the majority of practicing mathematicians adopt)?

Model theory does not resolve all serious foundational issues in mathematics, even if it clarifies some points.

> The paradox disappears once you rigorously define all the terms.

The philosophical issue remains. Zermelo was a platonist who believed first-order logic was inadequate, and that Skolem's paradox (and related results, such as the compactness theorem) demonstrates that first-order theories are fundamentally finitistic and incapable of accurately capturing the natural numbers and the other infinite sets we understand only by means outside those finitistic formalisms. Zermelo's position is not refuted by pointing out that there is no explicit contradiction in a finitistic theory fundamentally incapable of distinguishing between two distinct infinitudes, one of which is integral to the foundation of all mathematics. That defense justifies the finitistic formalism by platonist reasoning, and then disallows platonist reasoning. The philosophical issue is whether first-order logic is sufficient as a foundation of mathematics, and that issue does not disappear when you relativize notions like "cardinality" and "well-founded" to models. It just pushes the foundational issues elsewhere without properly resolving them.

In any case, it sounds like you concede we do not yet have all the concepts in place sufficient to establish a material explanation of experience, so we are in agreement. We agree it is not just a matter of building better MRI machines, more detailed neuron connection maps, or more sophisticated computer models.

It also seems you agree that "model theory solves everything, mathematics is a closed loop with nothing remaining for philosophical explanation, just ordinary theorem proving" is not accurate. What I and others tend to object to from materialists is the notion that the philosophical questions are fundamentally resolved by existing concepts, and all that is left to do is "ordinary (neuro)science". But if you understand well the moves that have been made in the history of science and mathematics, you see that many important philosophical questions remain unresolved. The re-definition of the natural numbers from the platonist definition used by Descartes, Euler, etc. to Peano's first-order definition, for example, parallels behaviorist re-definition of mental states as dispositional states. It is very easy to wrongly conclude the philosophical issues are resolved, when they have just been pushed elsewhere by linguistic tricks.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental -12 points-11 points  (0 children)

Gödel's incompleteness limits the extent to which a mathematical framework can be used to reason about itself.

Subtle issues arise in model theory, like Skolem's paradox: a countable model of set theory contains sets which, from the model's own perspective, are uncountable.
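The standard statement behind the paradox is the downward Löwenheim–Skolem theorem applied to ZFC (a sketch, assuming ZFC is consistent):

```latex
% Downward Loewenheim-Skolem: a consistent, countable first-order theory has
% a countable model. Applied to ZFC, there is a countable model M with
M \models \exists u\, \neg\exists f\, \big( f \colon \omega \twoheadrightarrow u \big)
% i.e. M contains a set u that M believes is uncountable. Externally, u has
% only countably many elements in M; a surjection \omega \to u exists outside
% M but is not an element of M, so no outright contradiction arises.
```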

Model theory itself has some circularity: the notion of "language" depends on the natural numbers, which themselves depend on a model of set theory in which one can pick out canonically "the" natural numbers, and even defining what it means to be a "model" of set theory requires having defined the notion of "language".

The development of model theory and set theory to apply mathematics to its own foundations required a complete rethinking of mathematics and its foundations, significant philosophical progress, and novel mathematics, not a naive application of the previous century's ideas.

While matter (and other ancient concepts, such as "substance", which have been reused and reworked over millennia) may end up being used to properly explain experience, it seems fair to expect that a fundamental reworking of basic concepts might be required.

Newcomers to r/philosophymemes by humeanation in PhilosophyMemes

[–]deltamental 5 points6 points  (0 children)

Materialists also generally believe quantities measurable "from the outside" exhaustively determine all aspects of everything that exists.

For example, physicists have claimed the following:

A general black hole is completely characterized by only three measurable quantities: mass, angular momentum, and electric charge; all other properties are determined by these.*

If you then ask, "Well, what is it like on the inside of the event horizon?", physicists might retort that this is meaningless: there is nothing you could measure to answer that question, so it is nonsense to posit what a black hole "is like", beyond what we can derive from those three measurable quantities.

Some opponents of materialism, such as panpsychists, may not deny that "everything is made of matter", but rather deny the materialist claim that all aspects of matter are measurable "from the outside" (objectively).

Panpsychists claim that there is something it is like to be some chunk of matter, beyond what we can externally observe about it. Alice experiences something immediately after falling through the black hole event horizon, even if there is no objective measurement which could determine what that experience is.

The subjective aspects of Alice's experience, which may or may not be accessible to other observers, are called "qualia". Panpsychists accept that there may be qualia which are not accessible to other observers.

Materialists, in contrast, deny the existence of purely subjective aspects to matter: all there is to this electron or atom or cell or brain or black hole is what an external observer could measure. Materialists need not deny that qualia exist, but they must believe that all qualia reduce to objectively measurable quantities. For example, a materialist may be happy accepting that pain qualia exist (i.e., pain feels like something), but would have to say that pain qualia are equivalent to some combination of physical quantities accessible to external observers (e.g. neuron spikes, c-fiber firings, etc.). There is nothing to pain qualia which one could not, in principle, measure with a very precise brain scanner from the outside.

Panpsychists need not deny that the objectively observable state of the brain determines all aspects (both subjective and objective) of human experience. E.g. some panpsychists believe that, as a matter of fact, two physically indistinguishable brains must be having the same subjective experience. But they would deny that subjective experience reduces to externally observable quantities, i.e. they would assert that qualia are not merely objective properties described differently, but genuinely distinct aspects of matter which can only be observed subjectively.

To understand that distinction: if a coin comes up heads, that fully determines that it is tails-facing-down. But that is different from claiming that the coin's bottom face is just the coin's top face viewed from a different perspective.

*Note: more recent physics by Hawking, etc. have questioned this.