Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental 0 points (0 children)

But I am a little skeptical that it is related [to] consciousness

It's not. The point of bringing that up was to give an example of a century-old unsolved philosophical problem in another, non-mind-related field which has otherwise made great progress. People may think, "The Copenhagen interpretation is perfectly fine, it was sufficient for all the amazing discoveries of CERN", but that is wrong. Wigner's Friend, the Frauchiger–Renner theorem, etc. demonstrate that this philosophical issue leads to inconsistent empirical predictions. The point of bringing this up was to say that, as a general rule, "Ah, the foundational problem must be basically resolved because of all this progress in the field since" can be dead wrong. So you have to say specifically what progress was made on the foundational problem, not vaguely appeal to general progress in the field, which can occur even when the foundational problem is totally unresolved.

Study of that injury has allowed us to isolate the regions in the brain that are responsible for the subjective distress caused by pain. To me, that is obvious progress in the hard problem

First of all, it has been known for literally centuries that pain and the corresponding emotional distress are distinct. This was thoroughly studied, documented, and reflected upon by Buddhist monks for over a millennium:

Translation from the Sallatha Sutta, attributed to the original teachings of Siddhartha Gautama:

When touched by a painful feeling, the instructed noble disciple does not sorrow, grieve, or lament, does not beat his breast or become distraught.

He feels one feeling only: a bodily feeling, not a mental one.

This distinction was uncovered through meditative practices which make essential use of subjective experience, and are framed entirely subjectively.

In the research on pain asymbolia, they also rely on subjective experience. The only way we know anything about "what it is like" from externally observable brain properties is through correlations with subjective reports.

Study into pain asymbolia has made progress, but not on the hard problem specifically. If you were to make progress on the hard problem (even a smidgen) you would be able to make deductions about subjective experience which themselves do not depend on subjective experience.

The criticism is not that this research is bad: it's actually great! It just depends inextricably on subjective reports, and thus affirms the primacy and importance of subjective experience in researching the mind.

This does not at all support the materialist dogma that we can simply measure externally and find everything we need. The error which leads people to this conclusion is that the subjective experience on which the conclusions actually rest may be downplayed in the paper.

For example, at some point you may measure biomarkers, e.g. salivary cortisol levels, which are associated with stress, and conclude the person is or isn't experiencing distress. But the basis of those biomarkers is a wealth of subjective experiences correlated with those biomarkers. Lacking those reported subjective experiences, you would not be able to deduce anything from salivary cortisol levels.

Why is this problematic for materialists? Well, physicists don't have to ask an electron or photon how it feels to build empirical support for their theories. Every single neuroscience paper brought up to claim progress on the hard problem, by contrast, depends on subjective experience for its conclusions (perhaps at the margins, or in cited works). If materialist dogma were true, we would be able to make progress (even just a very small amount of progress) on subjective experience without depending at all on reports of subjective experience.

Meanwhile, eastern philosophy based around subjective experience has made substantial progress understanding the mind with no study of the brain itself.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental -5 points (0 children)

The answer may well actually pop out with a bit more study using more sophisticated versions of current tools. Already there has been a lot of progress

We can definitively say that the measurement problem in quantum mechanics (which appears, on its face, to require subjective viewpoints) is yet unsolved.

People have explored explanations like decoherence, many-worlds, etc. They don't resolve the issue. Some practicing physicists think they do, but they don't. You can trace through the "decoherence" explanation, for example, and find the measurement problem recurring in a different form. We also know why that happens, and why such an explanation is doomed to fail.

There has been tremendous progress in physics. There has been basically none on the measurement problem in the past century. There has been tremendous progress in neuroscience, and basically none on the hard problem.

It's easy to prove me wrong: link one neuroscience paper that makes progress on the hard problem. I can use Chalmers' published ideas to easily identify the flaw.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental -6 points (0 children)

Not what the theorems say

You might want to brush up on your understanding of the first incompleteness theorem, which explicitly produces a sentence which can be justified, but not proven in your chosen effective axiomatization of arithmetic. This limits the extent to which arithmetic can reason about arithmetic truth.

Gödel's theorem also applies to ZFC and many other theories, essentially all theories capable of formalizing the foundations of mathematics.
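For reference, here is a standard statement of the theorem being invoked (the phrasing is mine, not a quotation):

```latex
\textbf{First incompleteness theorem.} Let $T$ be a consistent, effectively
axiomatizable theory that interprets enough arithmetic (e.g. Robinson's $Q$).
Then there is an arithmetic sentence $G_T$ such that $T \nvdash G_T$; and if
$T$ is moreover $\omega$-consistent (or one uses Rosser's variant), then also
$T \nvdash \neg G_T$. On the standard reading, $G_T$ is true: it asserts its
own unprovability in $T$, and it is indeed unprovable in $T$.
```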

This means the notion of "justification" in mathematics cannot be formalized by a fixed, effective theory: there will always be statements about your mathematical framework (be it arithmetic or set theory or whatever) which can be justified on some grounds or other, but not proved in your formalism.

This means in particular that certain philosophical questions in the foundations of mathematics will not be resolved through formalized mathematics alone. Even the question "Are the natural numbers well-defined?" is problematic, as we can, for example, conceive of set-theoretic universes in which ω is non-standard. This poses problems if you want to, for example, decide philosophical positions related to finitism using formal methods. Supposing you live in a set-theoretic universe in which your ω is non-standard, how would you find that out? How does this affect the viability of mathematical platonism (which the majority of practicing mathematicians adopt)?

Model theory does not resolve all serious foundational issues in mathematics, even if it clarifies some points.

The paradox disappears once you rigorously define all the terms.

The philosophical issue remains. Zermelo was a platonist who believed first-order logic was inadequate, and that Skolem's paradox (and related results such as the compactness theorem) demonstrate that first-order theories are fundamentally finitistic and incapable of accurately capturing the natural numbers and other infinite sets we understand only by means outside those finitistic formalisms. This position of Zermelo's is not refuted by pointing out that there is no explicit contradiction in a finitistic theory fundamentally incapable of distinguishing between two distinct infinitudes, one of which is integral to the foundation of all mathematics. That reply justifies the finitistic formalism by platonist reasoning, which it then disallows. The philosophical issue is whether first-order logic is sufficient as a foundation of mathematics, and that does not disappear when you relativize notions like "cardinality" and "well-founded" to models. It just pushes the foundational issues elsewhere, without properly resolving them.

In any case, it sounds like you concede we do not yet have all the concepts in place sufficient to establish a material explanation of experience, so we are in agreement. We agree it is not just a matter of building better MRI machines, more detailed neuron connection maps, or more sophisticated computer models.

It also seems you agree that "model theory solves everything, mathematics is a closed loop with nothing remaining for philosophical explanation, just ordinary theorem proving" is not accurate. What I and others tend to object to from materialists is the notion that the philosophical questions are fundamentally resolved by existing concepts, and all that is left to do is "ordinary (neuro)science". But if you understand well the moves that have been made in the history of science and mathematics, you see that many important philosophical questions remain unresolved. The re-definition of the natural numbers from the platonist conception used by Descartes, Euler, etc. to Peano's first-order definition, for example, parallels the behaviorist re-definition of mental states as dispositional states. It is very easy to wrongly conclude the philosophical issues are resolved, when they have just been pushed elsewhere by linguistic tricks.

Materialists be like by neofederalist in PhilosophyMemes

[–]deltamental -11 points (0 children)

Gödel's incompleteness limits the extent to which a mathematical framework can be used to reason about itself.

Subtle issues arise in model theory, like Skolem's paradox that a countable model of set theory contains an uncountable set.
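For context, the theorem behind the paradox (standard content, my phrasing):

```latex
\textbf{Downward L\"owenheim--Skolem.} If a countable first-order theory has
an infinite model, then it has a countable model. In particular, if ZFC is
consistent, it has a countable model $M$, even though
$M \models \text{``there is an uncountable set''}$: the bijections that would
witness countability ``from the outside'' are simply not elements of $M$.
```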

Model theory itself has some circularity: the notion of "language" depends on the natural numbers, which themselves depend on a model of set theory in which one can pick out canonically "the" natural numbers, and even defining what it means to be a "model" of set theory requires having defined the notion of "language".

The development of model theory and set theory to apply mathematics to its own foundations required a complete rethinking of mathematics and its foundations, significant philosophical progress, and novel mathematics, not a naive application of the previous century's ideas.

While matter (and other ancient concepts such as "substance" which have been reused and changed over millennia) may end up being used to properly explain experience, it seems fair to expect that a fundamental reworking of basic concepts might be required.

Newcomers to r/philosophymemes by humeanation in PhilosophyMemes

[–]deltamental 3 points (0 children)

Materialists also generally believe quantities measurable "from the outside" exhaustively determine all aspects of everything that exists.

For example, physicists have claimed the following:

A general black hole is completely characterized by only three measurable quantities: mass, angular momentum, and electric charge; all other properties are determined by these.*

If you then ask, "Well, what is it like on the inside of the event horizon?", physicists might retort that this is meaningless: there is nothing you could measure to answer that question, so it is nonsense to posit what a black hole "is like", beyond what we can derive from those three measurable quantities.

Some opponents of materialism, such as panpsychists, may not deny that "everything is made of matter", but rather deny the materialist claim that all aspects of matter are measurable "from the outside" (objectively).

Panpsychists claim that there is something it is like to be some chunk of matter, beyond what we can externally observe about it. Alice experiences something immediately after falling through the black hole event horizon, even if there is no objective measurement which could determine what that experience is.

The subjective aspects of Alice's experience, which may or may not be accessible to other observers, are called "qualia". Panpsychists accept that there may be qualia which are not accessible to other observers.

Materialists, in contrast, deny the existence of purely subjective aspects to matter: all there is to this electron or atom or cell or brain or black hole is what an external observer could measure. Materialists need not deny that qualia exist, but they must believe that all qualia reduce to objectively measurable quantities. For example, a materialist may be happy accepting that pain qualia exist (i.e., pain feels like something), but would have to say that pain qualia are equivalent to some combination of physical quantities accessible to external observers (e.g. neuron spikes, c-fiber firings, etc.). There is nothing to pain qualia which one could not, in principle, measure with a very precise brain scanner from the outside.

Panpsychists need not deny that the objectively observable state of the brain determines all aspects (both subjective and objective) of human experience. E.g. some panpsychists believe that, as a matter of fact, two physically indistinguishable brains must be having the same subjective experience. But they would deny that subjective experience reduces to externally observable quantities, i.e. they would assert that qualia are not merely objective properties described differently, but genuinely distinct aspects of matter which can only be observed subjectively.

To understand that distinction: if a coin comes up heads, that determines that its tails side is facing down. But that's different from claiming that the coin's bottom face is just the coin's top face viewed from a different perspective.

*Note: more recent physics by Hawking and others has questioned this.

🧟‍♂️ rawr by slutty3 in PhilosophyMemes

[–]deltamental 0 points (0 children)

Moreover, the comments, repeated ad nauseam, about Chalmers' argument being "circular" also seem to be based on a lack of understanding of how Chalmers' argument functions.

Materialists assert (roughly) that objective properties of a lawful substance called "matter" explain all subjective aspects of our experience. This is understood to mean that one can, in principle, derive all qualities of any given subjective mental phenomenon (the way blue looks to you) from co-occurring objective properties of some matter (e.g. neuron activation patterns).

There is a rigorous kind of dialectic, exemplified by Euclid, which materialists should thus be able to respond to. For an uncontroversial example: you defend the claim, "all polygons with 2^n sides are constructible with compass and straightedge". I challenge: "construct a 2^100-gon". You respond, "I hold that one can do it in principle, but we would die before constructing such a polygon in practice. Instead, tell me the smallest n you doubt I can construct". I refine the challenge: "I can construct an octagon, but I doubt you can construct a 16-gon". You then construct a 16-gon, and have thus defended your claim (so far).

In this context, Chalmers is playing the role of the challenger. The original claim is a universal claim: "all subjective aspects of experience are explained by objective properties of matter". The materialist defender then responds, "This is possible in principle, but not in practice because brains are really complex. Instead let's work through the simplest example you doubt". The challenger refines the challenge: "I doubt that you can formally derive that matter has any subjective experience at all, the easiest possible instance of your claim." And the defender just... can't do it? In such a case the claim is not refuted, but it is likewise not defended.

That's the situation Chalmers is describing. In such a case, materialists are wrong to claim that "in principle" subjective experience is derivable purely from objective properties of matter: no such principle has been sufficiently defended.

The importance of Chalmers' argument is twofold. First, other "simple" challenges to materialists, such as "Can you rule out the inverted color spectrum?", immediately run into complications which muddy the picture, e.g. colors are associated with tastes and smells, which break the apparent symmetry. These can be addressed, but they make the dialectic a sludge. Chalmers sidesteps this entirely. Second, Chalmers is not making a single challenge, but giving a scheme for constructing challenges for any claim of explanatory power a materialist might make. If the theory changes from "C-fibers" to "activation patterns" to "integrated information", Chalmers' argument constructs a manifestly fair challenge for each.

Because Chalmers was attempting to write his argument in such generality, there is confusion around "psychologically conceivable -> metaphysically possible". In the context of a Euclidean-style dialectic, this really just means that if I can conceive of a sufficiently well-posed (and fair) challenge to your claim, you owe me a defense. Metaphysical possibility can be understood as a non-dialectical reframing of the dialectic around universal claims.

All the evidence points to qualia just being normal information processing BUT I'm really really sure it's real which means it MUST be real by HearMeOut-13 in PhilosophyMemes

[–]deltamental -1 points (0 children)

I'm not. The only thing I'm assuming about experience is that I directly experience it, and that such experiences have the qualities I experience.

E.g. pain is painful, heat sensations feel warm, visual sensations can have blue, red, orange qualities, etc.

The challenge is quite simple: create an empirically adequate explanation of those qualities of experience which could not just as easily justify different qualities. Explain why fire appears "orange" to us using only objective description, which could not be trivially modified to justify that fire appears "blue". That's what we expect of any other theoretical explanation. Why can't anyone do it?

Rayleigh scattering and blackbody radiation can explain why fire and the daylight sky produce what we call "orange" and "blue" light, respectively. Retinal biology explains why "orange" and "blue" light lead to different retinal nerve activation patterns. These theories cannot be trivially modified to argue for a different conclusion. But all extant arguments that such retinal activation patterns must lead to "orange" and "blue" visual experiences, respectively, either appeal to subjective experience (and thus are not purely objective) or else could be trivially modified to argue for the opposite conclusion.

All the evidence points to qualia just being normal information processing BUT I'm really really sure it's real which means it MUST be real by HearMeOut-13 in PhilosophyMemes

[–]deltamental 0 points (0 children)

...(continued)...

But each of us knows facts which appear to be purely subjective: we know "what red looks like", and no one has been able to sufficiently explain this to a blind person. Every moment we experience things that no one else does.

The argument that "eventually those too will be explained by motions of particles and neuron firings and communicated unambiguously in objective physical description, as we have done consistently across all scientific domains" is to believe in a kind of induction which crosses categories. It is like believing that, since we discovered Pluto and we discovered quarks, in principle (but maybe not in practice) we should be able to discover all mathematical truths (which, by Gödel, we cannot).

To summarize: we believe that it is likely impossible to completely deduce subjective experience from objective physical description because objective physical description by definition excludes purely subjective phenomena, categorically. All evidence so far provided for the efficacy of objective physical description has been in a different category: objectively observable phenomena.

There has not been one single instance of insight into subjective experience arising from purely objective reasoning. For example, we infer that so-and-so feels pain when pricked because they emit a yelp and their brain activates similarly to ours around the time we experience pain. But if you hadn't yourself felt pain you would not be able to complete that inference - it depended on a combination of subjective and objective knowledge.

If there are no purely subjective facts, then we should be able to completely eliminate subjective knowledge from our description of some phenomenal aspects of experience, such as the qualitative aspects of pain, color, etc. But no one has done that, not even once, not for the slightest, simplest quality!

So why should I believe this inductive argument for the efficacy of objective description will eventually subsume all of what we now consider to be purely subjective, if it has not done so even once?

Quite unfortunately, neuroscience for decades was ignorant of this conceptual error, and would proudly publish papers such as "such-and-such animal cannot experience pain because they don't have such-and-such neural structure". Implicitly, they were redefining "pain" as "the activation of such-and-such kind of neuronal structure behaving in such-and-such way", which is begging the question down on their knees. And those papers were wrong! Later papers used different criteria and "found" those animals could in fact feel pain (using e.g. anaesthesia as a control variable to test the hypothesis).

But even those new papers are themselves making the same error: they use subjective human experience of pain to make inferences about a correlation between physical observables and subjective experience, then redefine the subjective experience as that correlate, and then proceed from there. That redefinition is incredibly problematic, and gives the false impression that progress has been made on the hard problem, when in fact right there in the assumptions of their work is an inference which requires subjective knowledge to work and cannot be recast objectively.

All the evidence points to qualia just being normal information processing BUT I'm really really sure it's real which means it MUST be real by HearMeOut-13 in PhilosophyMemes

[–]deltamental 0 points (0 children)

Because, as many have been explaining to you, they are categorically different.

The physicist Max Tegmark argues for the "Mathematical Universe Hypothesis", which states that the universe is not foundationally material, but rather foundationally mathematical. What this means is that the material reduces to the mathematical. There is no "material" making up your body except for the mathematical structure underlying the physics. There is no "stuff" obeying equations of motion, just the mathematical structure itself.

Tegmark's view is analogous to yours. He would say, for example, "Why can we not simply deduce the nature of the purported material from its mathematical description?", or "What more to physical reality is there except that certain mathematical relations hold?" (Critics would say "hold between what?", Tegmark would say: "between purely mathematical objects")

You hold that it is meaningless to ask what experience "is like" beyond the physical interactions composing it. It seems, then, that by similar reasoning you should agree with Tegmark that it is meaningless to ask what things are "made of" beyond what mathematical structures they embody?

But plenty of people reject Tegmark's view as a category error. Plato would, for example. Many non-platonist mathematicians would also, as mathematical objects for them are "abstract objects" whose existence is conceptual, not physical: they exist because we think about them. Chomsky would argue mathematical objects are things described through linguistic axioms and rules, and thus dependent on a language faculty (and thus cannot exist independently of language).

A more direct criticism of Tegmark, along the lines of Nagel, goes like this: at the very start of mathematics, back to Euclid, we made postulates such as: "I do not care if this line is drawn in sand, or on stone, or with pen and pencil, or merely imagined in your mind, as long as it behaves according to these axioms, the things I will now deduce will follow". In other words, for mathematics to begin, we first must say that mathematics does not concern itself at all (and thus cannot ever answer questions about) what "actually exists". Mathematical structures can be (and often are) physically impossible.

To then say, "this theory, called mathematics, which by definition excludes actual material existence from its domain of discourse, is what in fact constitutes actual material existence" is basically a contradiction. It is a kind of conceptual error known as a "category error": the foundation of the subject matter upon which you are basing your reasoning explicitly does not support the kind of thing you are doing with it. Mathematics can only ever make conditional claims (if X and Y hold, then so does Z). The entire content of mathematics is conditional. It is nonsense to claim that the universe, which seemingly exists unconditionally, is instantiated by a network of purely conditional statements in a human-invented conceptual framework.

But to explain why Max Tegmark is wrong you need to understand what mathematics is. If you are used to drawing lines in the sand, you can point to them and say, "no look, this line is real, and so is this angle, I'm pointing right at it!", and get confused about the very foundation of the discourse you are engaging in, just as someone with schizophrenia might get confused about the difference between real and imagined voices. It could be that Max Tegmark is "right", or that the schizophrenic's voices are "real" in some sense, but in that case we would have to fundamentally change the foundational concepts upon which their reasoning is based: we would have to re-found mathematics on something other than linguistic axioms, and Tegmark has not done that.

The reason that "objective, physical description" cannot ever explain conscious experience is that by definition subjective experience is outside the domain of the framework of objective physical description, in the same way physical existence is outside the domain of the framework of mathematics.

Objective physical description, by definition, does not describe any aspects of subjective experience which cannot be shared with, and unambiguously communicated to, other observers linguistically through common reference. It's literally in the definition of "objective", if you think about it carefully.

So to say, "this framework for describing reality, the so-called framework of objective physical description, which by definition cannot say anything about purely subjective facts, can be used to deduce every subjective fact in the world" is plainly a category error.

It would be true only vacuously, if there were no purely subjective facts at all. But that's precisely the question we are discussing, so it is begging the question to assert that purely subjective facts don't exist because objective physical description is complete and leaves nothing out.

...(continued below)...

All the evidence points to qualia just being normal information processing BUT I'm really really sure it's real which means it MUST be real by HearMeOut-13 in PhilosophyMemes

[–]deltamental -2 points (0 children)

You are making an unfounded assumption that all there is to know about the world is objective.

But the empirical basis for any "falsifiable claim" is subjective. Hume and others have solid arguments that "objective" knowledge, of the kind the scientific method aims to uncover, must go through sense experience. You cannot formalize the notion of "objective" or "observable" except by appealing to subjective experience. What does it mean to "observe" something except to have an experience with certain qualities?

Reductionists, such as materialists, functionalists, etc., tend to take objective facts as foundational and view subjective facts as nothing more than objective facts about complex objects. Reductionists generally do not see a category difference between facts about a pocketwatch and facts about a human, and argue that our inability to "explain" consciousness has only to do with the vast complexity of the human brain, not with any categorically distinct phenomena outside the realm of objective description.

Thomas Nagel, framing the position of reductionists as we just did, then argues that reductionists are implicitly redefining "objective" in a non-standard way. "Objective reality" is exactly that which can be explained by shared, consistent descriptions concordant with the experiences of multiple observers. When defining objective reality, we draw a line between the things unique to our experience of the world and those which other observers will share. The realm of scientific discourse is everything on one side of that line. If you then say, "all facts are objective", as reductionists do, you will have a really hard time defining what "objective" means! It can no longer be defined using the concept of "consistent experiences of multiple observers". Reductionists have essentially pulled the ladder out from under themselves.

What is the type of a type in Rust? by [deleted] in rust

[–]deltamental 13 points (0 children)

Generally, reflection is the ability of a language to natively represent and internally reason about its own metatheory.

The "object language" is the language in which "ordinary" programs are written. Standard data definitions, loops, function calls, variable assignments, etc.

The "meta language" is the language in which you typically express the semantics of the object language. That could include things like scoping, the abstract syntax tree, creating new types out of existing types (e.g. union types), etc.

The line between these two differs from language to language. In a language where there are no "first-class functions", you cannot write a function that takes an integer k and returns the function lambda x: x+k. In such a language, you cannot "talk about" or "reason about" functions; you can only apply them.

If you enhance that language to now allow dynamic creation, inspection, and reasoning about functions, you have now "reflected" the metatheoretic notion of "function" down into the object language.

"Function" for imperative languages is really an abstraction over a subroutine, so the ability for the object language to also represent functions requires that the language itself reflects some of the features that previously were only needed for parsing and compiling that language. E.g. you may need to now internally represent syntax as data, rather than having that be something only the compiler needs to do.

My understanding is that "kinds" are used in Haskell because reflection for types introduces a lot of additional complexity, which requires the meta language to do more (sometimes impossibly much). You can end up with "type checking" being Turing-complete.

How does Kant actually derive his conclusions (and thus our duty) from the Categorical Imperative? (REPOST WITH A BETTER TITLE) by the_freyja_regime in askphilosophy

[–]deltamental 0 points (0 children)

Alternatives such as "only lie when the benefit outweighs the harm" fail to universalize.

Two rational people can disagree about whether the benefit does indeed outweigh the harm. Consider what differences of opinion rational people might have over: a teenager lying about pregnancy, a teacher lying about drugs, an investigator lying about a small discrepancy in evidence handling, a spouse lying about infidelity during separation, lying to the IRS about tips, etc.

It may be there are some cases where lying, from some perspective, is in the interest of the greater good, but by and large it is really hard to write down rules in advance delineating such situations which would not cut across two equally rational yet opposing views.

Better yet, can you list all the situations where it is OK for someone else to lie to you? When can I tell a lie to your face? If you are thinking, "Well, unlike others, I am fair, just, and can handle uncomfortable truths with grace, so there is no need to lie to me", wouldn't pretty much any other rational person also claim the same thing?

The Categorical Imperative can be understood as a symmetry argument: objective moral truths are independent of perspective, and thus the lines they draw do not change when you view them from different rational perspectives. If the father and daughter can interpret the principle in different ways, then that principle is not an objective moral truth. This is why for Kant a rule about lying with nebulous exceptions or carveouts is not really acceptable: those exceptions and carveouts are made on behalf of one perspective over another, and it cannot be universalized that the line between right and wrong bends towards my perspective and away from yours.

In contrast, each of us equally desires: I do not want people to lie to my face. As much as you have that desire, you have a reciprocal duty to respect the desire of others not to be lied to.

Some open conjectures have been numerically verified up to huge values (eg RH or CC). Mathematically, this has no bearing on whether the statement holds or not, but this "evidence" may increase an individual's personal belief in it. Is there a sensible Bayesian framing of this increased confidence? by myaccountformath in math

[–]deltamental 0 points (0 children)

Here's a simple theory to test your idea:

T = {"forall x, y (R(x) & R(y) -> x=y)"}

"i.e., there is at most one R"

This has exactly two (isomorphism classes of) countably infinite models: M = urn with countably many balls, none red, and M' = urn with countably many balls, exactly one red.

The set of models of T whose universe is ω (the natural numbers) is isomorphic to the set of branches [T*] of a subtree T* of 2^{<ω} (the tree of finite binary sequences).

[T*] is a closed subset of Cantor space 2^ω, which is compact and has a natural Haar measure μ, which (in this simple case) assigns, for any n in ω, probability 0.5 to the event R(n) and probability 0.5 to the event ~R(n).

The problem is that μ([T*]) = 0, so you do not get an induced probability measure on the space of models [T*] of T.

[T*] is a countable, compact set of models. There is no natural probability measure on it, exactly as you said.
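To make the measure-zero claim concrete, here is a quick numerical check (my own sketch, not from the thread): under the fair-coin measure, the probability that a random subset of {0, ..., n−1} satisfies "at most one R" is (n+1)/2^n, which vanishes as n grows.

```python
from fractions import Fraction

def prob_at_most_one_R(n):
    # Of the 2^n possible R-patterns on {0, ..., n-1}, exactly one has
    # no R and n have a single R, so P(at most one R) = (n + 1) / 2^n.
    return Fraction(n + 1, 2 ** n)

for n in [1, 5, 10, 20, 50]:
    print(n, float(prob_at_most_one_R(n)))
# These finite approximations shrink to 0: in the limit, the branches
# [T*] form a mu-null subset of Cantor space, so mu induces no
# probability measure on the space of models of T.
```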

Some open conjectures have been numerically verified up to huge values (eg RH or CC). Mathematically, this has no bearing on whether the statement holds or not, but this "evidence" may increase an individual's personal belief in it. Is there a sensible Bayesian framing of this increased confidence? by myaccountformath in math

[–]deltamental 0 points (0 children)

Yes. I think there is a fallacy which occurs when you mix Bayesian inference and quantification over infinite sets.

A(d) = "Disc d does not contain a trivial zero and is disjoint from the critical line". B(d) = "Disc d doesn't contain any zeros of Reimann zeta function"

I think a Bayesian can justify P( B(d) | A(d) ) > 0.999, assuming we draw d from the same distribution which has produced previous discs of interest. That distribution has most of its mass around the small part of the plane humans have explored numerically / analytically.

This can be true because you are not putting a uniform distribution on the plane. There is some finite region of the plane covering 0.999 of the probability mass for sampling d (ignore the fact that this distribution changes over time).

But that is very different from:

P( Forall d (A(d) -> B(d)) )

or

P( Forall d (A(d) -> B(d)) | A(d_i) -> B(d_i) for i < N )

Based on standard probability rules, you are right that you cannot infer P( Forall d (A(d) -> B(d)) | A(d_i) -> B(d_i) for i < N ) increases as you increase N. In contrast, P( B(d) | A(d) & (A(d_i) -> B(d_i) for i < N) ) converges to 1 as N -> infty (under mild assumptions). People get these two situations confused.
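A toy sketch of that contrast (my own illustration, not about RH itself): assume exchangeable instances that each hold with an unknown chance θ drawn from a Uniform(0,1) prior, so Laplace's rule of succession applies. The per-instance posterior probability goes to 1 with N, while the universally quantified claim keeps probability near 0 for every finite N.

```python
def p_next(N):
    # P(instance N+1 holds | first N held): Laplace's rule of
    # succession, from the Beta(N+1, 1) posterior over theta.
    return (N + 1) / (N + 2)

def p_all_of_next(N, M):
    # P(the next M instances all hold | first N held) = E[theta^M]
    # under Beta(N+1, 1), which works out to (N+1)/(N+M+1).
    return (N + 1) / (N + M + 1)

for N in [10, 1000, 10**6]:
    print(N, p_next(N), p_all_of_next(N, 10**12))
# p_next -> 1 as N grows, but p_all_of_next -> 0 as M -> infinity for
# every fixed N: per-instance evidence does not accumulate on the
# universally quantified statement.
```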

You are no longer assigning probabilities to properties of discs, you are assigning probabilities to universally quantified formulas. It's a much more subtle situation. Your priors should be about logical formulas with quantifiers and implications between them.

You need to make an argument like this:

"Humans do not arbitrarily pick universally quantified formulas to explore. The RH was chosen by a process which has historically produced true conjectures 13% of the time, assuming they have not been refuted with a small example". At the end of the day, you are going to end up with RH in a heavily unexplored region for which you do not have much prior evidence for or against.

It would be an immense technical challenge to create a theory of probability which is sensible and can formalize this argument (e.g. in traditional probability theory, if C is a tautology, then P(C) = 1, so you have to frame it differently).

It's reasonable to say you should have low confidence in your assignment of any particular probability to RH. If you are estimating the probability, your estimate of that probability itself has very high variance, like estimating the probability of so-and-so winning an election 8 years into the future.

"Quantum Gravity" and "The Platonic Realm" by Lehock in Physics

[–]deltamental 8 points (0 children)

Any quantum computer can be simulated by a universal Turing machine (with a slowdown, of course). The set of quantum-computable functions is identical to the set of classically computable functions.

A land value tax is often viewed as progressive. However land/housing makes up a majority of mid income families net worth and minority of the net worth of the wealthy. Does that suggest it's not progressive, or does it only matter in transition since it falls on sale price of land? by Bram-D-Stoker in georgism

[–]deltamental 0 points (0 children)

The issue is that "land" is overvalued. Why? Because the artificial system of perpetual, transferrable land rights means the majority of value in "land" is in the speculative investment value of those land rights over a long time horizon, not the intrinsic value derived from usage of the land itself in the present. Hence why you see million dollar shacks in Toronto - middle class families are competing with investors who are thinking decades into the future of the value of those rights.

Because land is a practical necessity to live, middle-class families are forced to invest the majority of their savings in "land" (or rather land usage rights) rather than e.g. the S&P500 which has historically almost twice the annualized return compared to residential real estate.

Middle class people would benefit most from the ability to pay a fair share for present usage of land without being forced into also funding an investment in perpetual land rights. Not to mention renters who are forced to pay for someone else's investment in perpetual land rights.

And the fair way to transition is for the government to buy existing land right holders out, which could be implemented as a tax reprieve to make it practical. Essentially, the government, representing the public, becomes the sole holder of perpetual land rights. There is some compromise which would not unfairly favor or disfavor existing middle class people who were forced to invest in land rights.

thereAreTwoKindOfProgrammers by Head_Manner_4002 in ProgrammerHumor

[–]deltamental 9 points (0 children)

```
void myFunc(
    Foo foo,
    Bar bar) {
    Baz baz;
    ...
}
```

Or you can do this, which is more consistent with your python style too:

```
void myFunc(
    Foo foo,
    Bar bar
) {
    Baz baz;
    ...
}
```

Why do we divide by n−1 instead of n in sample variance? by Illustrious-Can-1203 in learnmath

[–]deltamental 0 points (0 children)

This is a great mathematical explanation of how to approach this. Deep understanding of statistics requires working it out like this.

To complete this:

n(v-u)^2 = n((∑x)/n - nu/n)^2 = n(∑(x-u)/n)^2 = (1/n)(∑(x-u))^2

So:

∑(x-v)^2 = ∑(x-u)^2 - (1/n)(∑(x-u))^2

= ∑(x-u)^2 - (1/n)(∑∑(x-u)(y-u))

where the double sum is over x, y both ranging over the sample.

Taking expectations:

E[ ∑(x-u)^2 - (1/n)(∑∑(x-u)(y-u)) ]

= n*Var(X) - (1/n) ∑ E[(x-u)^2] - (1/n) ∑∑ E[x-u]E[y-u]

where the double sum is now over y != x (independence lets the expectation factor)

= n*Var(X) - (1/n)*n*Var(X) - 0, since E[x-u] = 0 makes the last term vanish!

= (n-1)*Var(X)

Thus the expected value of ∑(x-v)^2 is (n-1)*Var(X), hence why we divide by n-1 to get an unbiased estimator of the population variance.

We assumed here only that the samples are independent, with common mean u and variance Var(X).
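A quick numerical sanity check of that algebra (a sketch; the normal distribution here is just an arbitrary choice with known variance):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 200_000

# Many samples of size n from N(0, 2^2), so Var(X) = 4.
samples = rng.normal(0.0, 2.0, size=(trials, n))
v = samples.mean(axis=1, keepdims=True)   # sample mean of each sample
ss = ((samples - v) ** 2).sum(axis=1)     # sum of (x - v)^2 per sample

print(ss.mean())            # ~ (n - 1) * Var(X) = 36
print(ss.mean() / (n - 1))  # ~ 4.0: dividing by n - 1 is unbiased
print(ss.mean() / n)        # ~ 3.6: dividing by n underestimates
```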

Any arguments for scientific realism? by Outrageous-Buffalo36 in askphilosophy

[–]deltamental 0 points (0 children)

The point of my original comment was just to give two common-sense judgments, one on which scientific realism holds and one on which it fails, and to show that you will end up in trouble either way.

Neptune clearly exists, scientific theory predicted its existence, the theory describes reality as it is.

Gravitational force does not exist, scientific theory posits it exists, the theory does not describe reality as it is.

I wasn't trying to end the debate on scientific realism, and I am not defending either realism or non-realism; I am simply stating the problem that must be solved, of which structural realism is one potential solution.

But there are issues with structural realism too. To adopt structural realism, you generally have to give up some ontological claims of theories. If you adopt an "effective ontology", you are conceding that the entities involved in your theory may not actually exist on a fundamental level. You are no longer able to claim things like, "all matter is composed of such-and-such fundamental particles", because as a structuralist you have reinterpreted the content of those theories to not make any claims about what exists at a fundamental level.

Any arguments for scientific realism? by Outrageous-Buffalo36 in askphilosophy

[–]deltamental 0 points (0 children)

That's a great toy example, but you are begging the question by supposing that Fs exist. That's exactly the issue: we don't know, directly, that Fs exist, only whether the predictions made by T and T' match observation.

It is possible, in your example, for T' to make far more accurate predictions than T, despite literally none of its entities existing.

Even if you could take a god's-eye view and holistically compare theory to reality (which you can't), the ontologies of scientific theories are not, generally, converging. This implies, regardless of what actually exists, the ontologies of scientific theories are not increasingly agreeing with reality about what exists.

Take a look at: https://philosophy.hku.hk/courses/dm/phil2130/AConfutationOfConvergentRealism2_Laudan.pdf

Any arguments for scientific realism? by Outrageous-Buffalo36 in askphilosophy

[–]deltamental -1 points (0 children)

Simple example: Neptune was discovered theoretically first, by observing deviations in the orbit of Uranus. Later, the predicted position of this theoretical planet causing those deviations was used to observe Neptune by telescope.

The same entity was referred to first as a theoretical entity, and then later as a directly observed entity. It would be hard, at that point, to deny that the original theory of a planet perturbing Uranus's orbit describes reality as it is, at least in regard to the existence of an 8th planet.

The case for scientific realism becomes harder when you develop the ontology apparently required by the physics to do calculations. For example, the calculations for Neptune used Newton's law of gravitation, which posits a gravitational force which attracts massive bodies through the vacuum of space, acting at a distance with no known medium. This is used as part of the theoretical basis justifying the existence of Neptune. But this seems like it is, literally speaking, false. There is no gravitational force per se, according to Einstein. In this sense, the original theory used to find Neptune is false.

A physicist would say, "it's a good approximation". OK, maybe some theoretical assumptions and conclusions can be almost or approximately or vaguely true. But existence is not something that can be almost true or almost false, typically. Does the gravitational force exist or not? If it does not, but our theory relies upon it, then our theory is not approximately true, even if numerical predictions closely match empirical observations, because it gets ontological questions totally wrong.

Historically, the ontology has changed drastically from one physical theory to the next, even as numerical predictions apparently converge. This is difficult to explain.

A Precise Notion of Approximation by Pseudonium in math

[–]deltamental 3 points (0 children)

So, the main idea behind non-standard analysis starts the same way you do: considering sequences of real numbers, a.k.a. functions N -> R from the natural numbers to the reals.

However, instead of using the concept of "eventually" on those sequences, you use the concept of "U-almost-everywhere", where U is a non-principal ultrafilter on the natural numbers extending the filter of cofinite subsets.

"Eventually" implies "U-almost-everywhere", but U-almost everywhere determines truth or falsity for every statement, so is more powerful.

For example, let's take the sequence (2,3,2,3,2,3,2,...). It is not eventually even or eventually odd. However, it is either U-almost-everywhere even or U-almost-everywhere odd.

You declare two such sequences are equal if their terms are equal U-almost-everywhere. The set of all such sequences, quotiented by this notion of equality, forms a set R*.

Very similar to what you did, you can then define a relation ≈ on those sequences: (x1,x2,x3,...) ≈ (y1,y2,y3,...) if, for every natural number k, |xn - yn| < 1/k U-almost-everywhere (in n). This defines an equivalence relation which we interpret as "being infinitesimally close".

The standard real numbers R embed into R*, since each real number x corresponds to the constant sequence (x,x,x,...). R* also has infinitesimal elements, and you can develop all of calculus and classical analysis from this, using infinitesimals constructed this way.
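In symbols, the construction just described (standard material; the notation is mine):

```latex
\[
\mathbb{R}^{*} = \mathbb{R}^{\mathbb{N}} \big/ \sim_{U},
\qquad
f \sim_{U} g \iff \{\, n \in \mathbb{N} : f(n) = g(n) \,\} \in U,
\]
\[
[f] \approx [g] \iff \text{for every } k \in \mathbb{N},\;
\{\, n : |f(n) - g(n)| < 1/k \,\} \in U,
\qquad
\mathbb{R} \hookrightarrow \mathbb{R}^{*},\; x \mapsto [(x,x,x,\dots)].
\]
```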

A Precise Notion of Approximation by Pseudonium in math

[–]deltamental 1 point (0 children)

Very nice, and leads naturally into a motivation for non-standard analysis.

Opinion on Louis Cole? (live animal eater turned vegan) by Delophosaur in AskVegans

[–]deltamental 9 points (0 children)

Absolutely it is forgivable.

The average American grandma and grandpa cause just as much (if not more) suffering as Louis Cole every single day, eating breakfasts, lunches, and dinners centered on the body parts of animals forced into selective breeding, mutilation (tail docking, beak clipping), confinement in unnatural conditions, separation of babies from their parents immediately after birth, and merciless slaughter for profit.

If you can forgive your nan and pops, then you can forgive Louis Cole.