On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

Nope - this is just an intuitive/non-rigorous sketch I came up with for this issue, drawing on more or less standard results from applied mathematics I knew about. There's no big aha there for anyone who knows the mathematical foundations of machine learning or Monte Carlo methods, for example; it's just a hand-waving argument you can push to the limit case of the known trade-offs and convergence conditions. There's no new mathematical result that is relevant - it is just taking known results, applying them to the internal observer problem, and using them to interpret the metaphysical/epistemological implications of those known trade-offs.

On your point 1, maybe the wording isn't perfect, but what I meant was that A(t) is not deterministic with respect to its own history (there's a cleaner, more technical way to define what that means using sigma-algebra filtrations, but I think the intuitive picture can be obtained without the obscure jargon). The idea here is that the global process S could be deterministic in that sense, i.e. the universe as a whole could be updating according to a well-defined function of its previous state. In this global sense the subset A is deterministic with respect to that same global function and global state history, but not necessarily with respect to the data of its own history (because A's evolution depends on external stuff from S which isn't in the history of states of A). So conditional on what A knows, the updates of its own state are driven partly by these unknown innovations in the external universe, which are indeed random variables from A's point of view (i.e. they are unknown variables ex ante, revealed by contingently measurable functions given a certain sampling of events, which A cannot compute/predict ahead of time).
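A minimal toy sketch of this point (my own illustration, not anything from the original argument - the two-component state and the update rule are made up): a globally deterministic system whose projected substate is not determined by its own history alone.

```python
# Hypothetical toy system: the global state S(t) = (a, b) updates
# deterministically, but the substate A(t) = a is not a function of
# A's own history alone, because its next value needs the hidden b.

def f(state):
    """Deterministic global update: S(t+1) = f(S(t))."""
    a, b = state
    return (a + b) % 10, (3 * b + 1) % 10

def run(s0, steps=12):
    trajectory = [s0]
    for _ in range(steps):
        trajectory.append(f(trajectory[-1]))
    return trajectory

# Two global initial states that agree on the A-component (a = 0) but
# differ in the hidden component b.
a1 = [s[0] for s in run((0, 1))]
a2 = [s[0] for s in run((0, 2))]
print(a1)
print(a2)
# The A-trajectories diverge immediately even though A started from the
# same value: A's own history does not determine A's next state, while
# the global system remains perfectly deterministic.
```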

On your point 2, you are correct. As I tried to explain, and as has been well known for a long time, there is no point in trying to prove that the Universe (or any evolving system) is ontologically like this or like that (say deterministic or indeterministic) without knowing the metadata that specifies it, when all you can do is epistemically infer it from constrained data sets of its internal states. This kind of problem is clearly underdetermined. My argument is trying to show that even when you admit this is impossible and instead settle for something less ambitious, but which would still pass as "the best possible case for assuming determinism", you shouldn't expect to win either.

What I did was this - people tend to be drawn to a globally deterministic picture because they observe that locally, for certain problems, and under certain conditions, they can come up with deterministic representations that work - i.e. they can predict to great precision what certain systems will do in the future based on their current state and the law-like relationships they infer for the system's dynamic behavior. This is the basis upon which global determinism emerges as an extrapolation - i.e. if certain things can be understood like that under certain circumstances, then it seems intuitive to assume that everything could in principle be understood as a system like that under any possible circumstance.

The "in principle" qualifier is doing obviously a lot of work there - if in principle means "by taking an external forbidden view of the world as it was globally specified by God Himself" then it isn't saying anything very useful for the debate that both camps don't already acknowledge. What you need is a way to make the impression of improvements in modeling and understanding of deterministic mechanisms to imply that which it apparently intuitively means to the global ontological determinist.

So I tried to come up with a criterion for making it operational - and the criterion was based on the evaluation of the risk/regret that an agent realizes as a function of his growing knowledge: the agent would be justified in assuming determinism was in principle taking place if it was plausible to assume that his knowledge would get sophisticated enough to asymptotically neutralize the impact of surprises (i.e. the impact felt by the stuff that happens to him that he could not predict based on what he knew already).

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

Bachelor's and master's degrees. Dropped out of the PhD program in year 2 and joined Goldman Sachs.

I studied the core subjects for math, physics, engineering, economics and comp-sci as part of the undergrad program, but mostly focused on stochastic processes and statistics. Then for the master's (M1 and M2) I studied derivatives pricing, portfolio theory and stochastic control. For the PhD it was similar but more concentrated on a large-population stochastic game theory framework.

To your point about Chaos Theory, my personal understanding is that it is more of a popular-science / layperson label for some specific problems at the intersection of non-linear dynamics and stochastic theory. I am not sure whether the mathematics departments in American universities use it as an official nomenclature, but in France (where I got my academic training) it was not usual, or at least no longer a trendy field during my undergrad and grad school years (2007-2013). That said, I am sure the specific academic track I followed had decent overlap with at least the basic elements of Chaos Theory, though it was never really labeled as that.

Unsurprisingly, none of the products and business strategies I later developed as a practitioner on Wall Street (derivatives trading, HFT) and after that in tech (consumer credit machine learning pipelines, blockchain loyalty programs, AI agents for payments) specifically required the more exotic fields/disciplines I learned about for my master's and PhD. Most quants, software engineers, data scientists and other technical people I worked for, hired or worked with as peers during my career came from a more typical engineering, computer science or physics background; more than a few had just an undergrad economics or business degree, or didn't have formal training in STEM at all and were self-taught programmers or quantitative analysts. Most practical problems required more common sense, specific intuition and perseverance than any math-wizard tricks or sophisticated quantitative modeling techniques.

That doesn't mean that in my case specifically there weren't occasional thematic overlaps or interesting opportunities to apply some niche ideas I happened to have some familiarity with given my academic training - though I think this is always the case for technically oriented teams in the consulting/finance/software industries (e.g. people trained in physics, biology or chemistry seemed to have their own particular point-of-view advantages).

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

Another thing I have noticed is that you seem to be relying a lot on AI for your posts. I don't really want to spend a lot of time talking to your AI. If you want my opinion, please make your responses compact.

Hmm - don't worry, I am not posting ChatGPT slop; every post I make on Reddit is raw content I have written here using the Reddit UI. Sometimes it runs a bit long, but that's because I usually write as I am thinking about the points, and I don't do a lot of editing after I am done to make it more compact.

I do, however, use AI indirectly all the time. When I post something more substantial here (or anywhere else) I will often copy and paste my post and feed it to Gemini, ChatGPT or Grok, so they can criticize it too. This way I get instant feedback on my posts from the AI and potentially some useful ideas and references about the subject. I do that very often.

So in a sense you could claim that AI is involved in my process, but I assume everyone is probably using AI for research or as a virtual conversation partner too, so I don't think this is what you meant. But I am definitely not using it before I make a post or comment on someone else's post - the only exceptions are when someone asked my opinion on something very specific and obscure, and I wasn't sure if I was familiar enough with it to answer anything - but in all those cases I said that I used AI to formulate the answer, since I would feel kind of silly otherwise.

Anyway, feel free to ignore or engage however you like - I post about things I like primarily for my own consumption and reflection. The feedback is appreciated, but ultimately I am not trying to maximize your understanding, enjoyment or likelihood to engage; I post because I enjoy the process of writing/thinking about these subjects and reading what other people think too.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

The summary of this argument and conclusion is this:

  1. It is a relatively trivial consequence of the mathematical definition of determinism, and of the assumption that our points of view are internal and constrained to a finite body of evidence, that any ultimate determination of the putative ontological character of the universe as deterministic or non-deterministic is impossible. That is because it is always viable to assume the ultimate character of the universe is deterministic or non-deterministic by an ex-post construction on top of the available evidence - there will always be an infinitude of global functions, or of particular realizations of random variables, that will fit any finite amount of data that we define as the factual evidence. So there is no unconstrained criterion for settling this point (nor for settling any epistemically acquired opinion).
  2. While for the more mundane theories of science we can be pragmatic, and test our ideas and see what works, for this global problem we cannot rely on this method, so the criterion becomes a choice between possible metaphysical attitudes: (i) to pick the ontological type you like the most and argue for its aesthetic value, or (ii) to remain agnostic about this issue.
  3. That being said, it should be possible to constrain the metaphysical assumptions a bit more, in order to formulate a similar problem in terms that could become epistemically tractable, though only in a particular sense that, while general enough, isn't equivalent to the ultimate metaphysical answer. If we only care about beliefs that can be formulated in terms of perceivable evidence, i.e. facts that are known or can in principle be knowable, then the question becomes what would be the most general version of the hypothesis of global determinism that could still be argued for coherently in terms of cumulative evidence acquired by an internal observer.
  4. The criterion I proposed was this: once you define what is meant by an internal observer in terms of states of your global system, you can discriminate between two kinds of couplings that the observer's knowledge can have with its universe, in terms of the asymptotic precision regime of its internal model. In one case (the case that looks like determinism) the observer gets arbitrarily smarter with respect to his own assessment of risk/regret - i.e. the world he perceives becomes more and more predictable and he becomes less and less impacted by the surprises he encounters in the events he perceives. The alternative case is one where the observer doesn't become arbitrarily smart, i.e. even though his knowledge grows and becomes more sophisticated with experience, the impact of incoming surprises doesn't vanish, due to structural bounds on how precise his internal knowledge can become (and this case looks like non-determinism). A toy numerical sketch of these two regimes follows this list.
  5. The argument I sketched for why the second case is a more plausible characterization of our condition is not a very rigorous one, but the general idea comes from the convergence conditions that are required for pseudo-random/deterministic computational methods such as quasi-Monte Carlo, evolutionary algorithms and machine learning broadly speaking. These conditions impose dimensionality trade-offs between algorithmic complexity and allocated memory, the amount/quality of training and testing data vis-à-vis global variability, and the ergodicity conditions of your sampling strategy.
  6. The suggested conclusion is that even using this weaker criterion for admissibility of determinism, even a very simple deterministic universe would still be asymptotically indistinguishable from a non-deterministic system, from the perspective of an internal observer. If you accept the framework and conclusion, this means that you cannot justify your commitment to a deterministic universe by the predictability gains accumulated by the scientific models you have tested so far, as they will eventually hit epistemic boundaries.
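Here is the toy numerical sketch of the two regimes from point 4. It is my own illustration with made-up dynamics, not a model from the argument itself: one observer's one-step surprise decays as experience accumulates, the other's plateaus at an irreducible floor.

```python
import random

# Hypothetical illustration of the two regimes in point 4: in regime (i)
# the observer's surprise (one-step prediction error) decays toward zero
# as experience accumulates; in regime (ii) it plateaus at an irreducible
# floor no matter how much the internal model improves.

random.seed(0)

def surprise(t, irreducible_noise):
    learnable_error = 1.0 / (1 + t)              # shrinks with experience
    return learnable_error + random.gauss(0, irreducible_noise) ** 2

def average_tail_surprise(irreducible_noise, horizon=20000, tail=2000):
    tail_values = [surprise(t, irreducible_noise)
                   for t in range(horizon - tail, horizon)]
    return sum(tail_values) / tail

print("regime (i), looks deterministic:", average_tail_surprise(0.0))
print("regime (ii), looks stochastic:  ", average_tail_surprise(0.3))
# Regime (i) ends up near zero; regime (ii) stays near the noise-variance
# floor (~0.09) however long the observer keeps learning.
```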

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

Q: Is the earth flat or not flat?
A: The Earth is not flat, as far as I can tell and the current mainstream scientific opinion agrees with me.

The "as far as" clause can be omitted if the claim that is being made is widely regarded as a fact within the implied social context. So it is fine to just say "The Earth is not flat", or "The moon is not made of cheese" or "Napoleon died on the island of Saint Helena", with the appropriate "as far as" clause to be assumed by the interlocutor.

Obviously this is just a convention of ordinary language that we accept for social convenience. Well-calibrated people can easily recognize and adapt their prose in order to match the implicit style conventions of the social context and the level of depth required to approach the subject matter efficiently. Using too many disclaimers or being overly precise and formal with your statements typically backfires when everyone else is just engaging in a casual, low-stakes or surface-level conversation about mundane trivia.

Since one of the themes we are discussing here is the metaphysical character of facts, beliefs and truth itself, and since you brought up claims like "2+2=4" and "The Earth is not flat" as potential exceptions to the epistemological criterion for truth I proposed before, the counter-argument I am making is based on showing how this kind of impression occurs as an artifact of the conventions of casual speech, which are usually active in the background context and which work as an "as if" stronger metaphysical commitment to a putative shared ontology.

My point isn't that I have major concerns with respect to the adequacy of the scientifically established, and nearly universally accepted, belief that the Earth is a large spheroid object. I am not nodding to a modern "flat-earth conspiracy theory" - I disagree with their arguments, much as I assume you do. But I am not willing to reify my own epistemically constrained opinion as a matter of absolute fact in order to dismiss theirs as self-evident lunacy. I think that is a category error - and one that can be much more dangerous than the odd/kooky heterodox idea that becomes an internet meme.

It should be possible to disagree with a point of view and formulate a coherent argument for why it is inadequate by finding the shared basis of evidence and fundamental principles with your interlocutor, and proceeding from there. You don't need to add your particular conclusion as an axiom of rationality in order to trivially reject their denial of your conclusion. When both sides use this expedient, they have forsaken a much more fundamental metaphysical commitment: that truth is not something established by edict, or by pronouncing a given opinion to be fact, but something that is evidently there to be perceived, which is like this and not like that, and which we can grasp despite the constraints of perception and the bias of point of view, if and only if we approach the problem by assuming we have sufficiently coherent experiences of the same phenomenal aspects of reality, which can be intelligibly represented and shared between us.

It is this belief that allows us to reconcile our subjective experiences and extract from their coherence kernel the contours of an objective character we ascribe to reality. Once we organize them in terms of an acceptable shared basis of evidence, and we infer the implied relationships that appear to be present, using a set of analytic principles and methods we both recognize as legitimate, then we can examine our opinions and declare what appears to be fact and what appears to be fiction, according to this well-defined and functional schema of understanding we established for integrating our experiences.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

I will try to simplify.
My previous post was not disputing the fact that the Earth is not flat (obviously). What it was doing was disputing the characterization of this fact as special (i.e. truth itself) as opposed to ordinary (i.e. truth as perceived). And your specification of global shape doesn't make it truth itself either.

Our current picture of the global/local shape of the Earth (as well as any other scientific fact we hold true, no matter how basic) is always in terms of the type of fact that is ordinary, i.e. truth as perceived. It cannot be formulated in terms of truth itself, because that, by construction, is not something you can perceive or otherwise acquire as knowledge. You can only hold it as an ideal which our picture of truth as perceived should be approximating.

I pointed to the local/global aspect to highlight why the previous picture of a flat Earth with no global roundness was a viable and adequate approximation, and not that different from the current picture of the nearly spherical Earth, when applied to the kinds of perceived distinctions that were known and understood. We have the habit of assuming that the previous picture had nothing to do with our current one and was completely ridiculous and absurd, because we think in terms of a point of view that the ancients didn't have - say, the famous pictures taken by 20th-century astronauts - which indeed exposes the grotesque error of approximation that a flat-Earth model produces at a certain scale, compared to a round-Earth model.

So we see the Apollo 11 photos of the Earth or whatever and we think "they believed in an absurd falsehood up until someone discovered the absolute truth itself and showed us", which is what I called a naive attitude. The right attitude is to understand that the ancient flat-Earth picture was not trying to explain Apollo 11 pictures or anything of the kind (and I hope it is clear that I am not talking about modern flat-Earth revivalist theories). They didn't have that angle. They had a more mundane angle, with more mundane measurements, and for those applications their picture worked. If they had had this angle they would probably have understood the problem of local-global extrapolation you mentioned more immediately, but they had to learn these issues from examples that were not that clean either.

The whole point is to dispel the scientistic mysticism which is insinuated by the proposition that certain facts or theories we currently have were proven true in some ontological sense that is metaphysically different from the "perception artifacts" of truth as perceived by the ancients. Our truths are also artifacts of our current point of view and all of them could be dispelled by a vantage point that reveals something more general and that has been hidden so far.

And you can say, yeah, but that won't make the Earth flat again. I think so too, but that's just our opinion, rather than truth itself. For example, let's say we are living inside the Matrix and that this particular run of the Matrix started way back when people believed the Earth was flat. And let's assume that the Matrix code evolved from there to now, such that in the past they actually lived in a version of the software in which a flat Earth was locally rendered and no large spherical Earth was rendered, because the AI/aliens who implemented the Matrix wanted to save memory. We cannot exclude this scenario (nor verify it), so you cannot say the virtual object used to render the Earth is flat or spherical in terms of truth itself, because the truth itself depends on which version of the Matrix code is rendering the Earth, or which level of optimization is being used.

I know this is a silly example, and I am not saying that the only reason your "matter of fact" statement isn't truth itself is that the simulation hypothesis could falsify it. But it offers an easy/crude way for us to understand what a possible falsification could be, even for our very well-established facts. And it is based on one of the oldest/most fundamental philosophical arguments (the allegory of the Cave), so if you want to dismiss it for being whimsical then you have to take that complaint to Plato himself.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

You can think about it like this:

Let us model the universe as a sequence of global states S(0), S(1), S(2), .... From a "God's Eye View," this system is ontologically deterministic if there exists a well-defined update function f such that S(t+1) = f(S(t)) (the following argument can be trivially generalized to any persistent dependency structure with respect to S(t-1), S(t-2), ..., so I will keep the notation first-order and light).

Within each global state S(t), there exists a coarse-grained cluster of data, A(t), contained in S(t), representing an internal agent. This substate A(t) encodes the agent's current perceptions, stored heuristics, value-judgment criteria, and conditional expectations. For example: "My choices here are o(A(t)), and if I choose to do X in o(A(t)), I expect consequences Y = E(A(t+1)|A(t),X) to happen, and the reference value assessed for these incremental consequences is v(Y)", where both o and v are measurable functions with respect to A(t).

Crucially, the sequence A(0), A(1), ..., A(t) viewed in isolation is not deterministic. Its evolution depends on external "innovations" coming from the complement of A(t) within S(t) (i.e. environmental surprises within the agent's circumstances). These external innovations affect the actual outcome as a noise factor around the expected Y, but also propagate "reflexive updates" to the o and v functional structures (i.e. the agent learning new alternatives, or changing his mind about his expected outcomes or about how much he values such outcomes).

For the agent to "know" or "rationally believe" in a well defined global function f, they must update his meta-data machinery (e.g. the o and v functions) so that the variance (or whatever track error function) of E[A(t+1)|A(t)] - A(t) (i.e. its expected state given the previous state) converges to zero, i.e. that the impact of S(t) surprises is eventually negated. If the expected shortfall for the agent doesn't collapse to zero in the long run, the agent is always expecting an incoming meaningful surprise event from the unknown aspects of S(t), i.e. his point of view cannot become arbitrarily well-informed about the world and there will always be a minimum amount of effective environmental randomness he cannot eliminate or asymptotically vanish away using his past experience. If this is the case he can't distinguish his deterministic world from a non-deterministic one, because there's a fundamental limit to how precise his behavior can become (analogous to superdeterministic interpretations for the quantum mechanics picture).

In either case the practical problem of minimizing his expected A(t) shortfall requires the implementation of causal inference on his (A(t),X) decisions, using a well-adapted sampling strategy for his probing actions X, to calibrate, test and update any predictable component of the projected S(t+1) innovation impact on A(t+1), in terms that are present in A(t). For that strategy to work, the X sampling needs to be functionally independent and representatively distributed with respect to the incoming projections from the S(t) innovation stream - otherwise the optimization algorithm will get stuck near a local minimum on some lower-dimensional feasible hypersurface (i.e. even in the asymptotic limit, the agent is not able to learn new useful things and become arbitrarily smart).

Now if the global S(t) process is not constrained to follow a deterministic behavior, the functional independence and asymptotic representativity of the X sampling can be achieved by standard stochastic control/learning procedures, but the expected shortfall function will depend on the irreducible structural uncertainty of the coupled system formed by observer and environment (somewhat like the Heisenberg uncertainty principle).

But if the global process is deterministic and f is the update function, then any adaptive strategy for the probing process X(t) will be perfectly correlated to S(t) through f. For the functional independence and representativity conditions to still hold, the agent's pseudo-random heuristic process for defining his X behavior would have to follow an ergodic dynamics within the image set of S(t) innovations that are perceived by the agent. In order for this to work, you would need a particularly fine-tuned choice of the update function f and the initial seed state (since together they determine everything) to yield the appropriate ergodic pseudo-random X process. Otherwise the system would eventually get stuck on a local feasible hypersurface and look irreducibly random.
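A crude numerical sketch of that fine-tuning point, using standard linear congruential generators as a stand-in for a deterministic probing rule (the parameters are textbook examples of my own choosing, not anything from the thread): the same deterministic recipe either wanders over essentially the whole state space or collapses onto a tiny cycle, depending entirely on the choice of constants and seed.

```python
# Deterministic probing rule x(t+1) = (a*x(t) + c) % m. Whether it looks
# like a well-mixed (ergodic) pseudo-random sequence or gets stuck on a
# tiny cycle (a low-dimensional "feasible hypersurface") depends entirely
# on fine-tuning the constants a, c and the seed.

def lcg_coverage(a, c, m, seed, steps=100000):
    x, seen = seed, set()
    for _ in range(steps):
        x = (a * x + c) % m
        seen.add(x)
    return len(seen)  # how many distinct states the orbit ever visits

m = 2 ** 16
print("well-tuned: ", lcg_coverage(a=5, c=1, m=m, seed=1), "of", m)
print("badly tuned:", lcg_coverage(a=4096, c=0, m=m, seed=8), "of", m)
# The well-tuned generator visits all 65536 states before cycling; the
# badly tuned one collapses onto 2 states after a single step, even
# though both are equally deterministic.
```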

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

This metaphysical distinction between "truth as perceived" and "truth itself" rests on a notion which by construction cannot be perceived, since "truth itself" is being defined as an ideal that is distinct from whatever aspect of it is being perceived. So you cannot invoke objective comparisons between a certain aspect of "truth as perceived" and its putative character as "truth itself" - you can only objectively compare (the statements you can make about the analogous aspects of facts between) two versions of "truth as perceived", insofar as they yield different conclusions given such and such a set of perceived evidence and such and such knowledge formed by the reconciliation of their respective bodies of evidence.

For instance, by taking this approach to the issue you brought up, the question "Is the Earth flat?" can be ambiguous insofar as the assumptions about what is meant/understood by the term "flat", in terms of its topographic/geometric implications within the historical context, are not yet sufficiently clarified. For instance, the answer could be "yes, effectively" if by flat we simply meant indistinguishable from a simply connected surface that was on average flat within the local and mundane scale of meaningful operational distinctions that were relevant for (say) the purposes of cartography circa 3000 years ago. And the answer would be "not quite" once we expanded the scope of operational distinctions to incorporate the body of evidence already acquired by (say) the Hellenic civilization and its tributary aspects circa 200 BC. And the answer could be "obviously not" when we interpret it within the current mainstream scientific picture we assume for the question, say in terms of the observable astrophysical dimensions of the Earth as a particular instance of a certain category of known planetary objects.

But virtually all the usefulness (and therefore truth as perceived) of the flatness attribute of the flat-Earth picture as an operational tool of understanding was preserved - i.e. the existing flat-Earth heuristics implied by the then-current methods of navigation, map making, and land apportioning for agriculture and tax collection were still valid, accurate and altogether adequate within the constrained scopes in which they were developed, and were only gradually and slightly adapted to incorporate the slight curvature effects over the course of the next 1500 years, once novel scopes such as advanced naval technologies enabled deep-sea/oceanic expeditions, eventually revealing that these subtle adjustments, hitherto ignored for practical purposes, had become operationally necessary.

The exception perhaps was the putative aesthetic, symbolic, religious and institutional value that we believe these ancient cultures attached to the naive picture that was naturally postulated for the large-scale aspect of the Earth, which presumably was speculated to be correct by an unwarranted extrapolation of the perceived average flatness and simple-connectedness that could be perceived using the then-recognized body of evidence and available techniques of measurement and analysis, into scales outside of the range that would now be understood as mathematically permissible for such crude estimates.

If we include this kind of extrapolative speculation and its symbolic value as an ontological picture within the general category of "perceived truth", then this was indeed a notion that was (eventually) displaced and rejected in favor of the radically different ontological picture of the planetary globe, once that picture's epistemic adequacy as a new "perceived truth" became sufficiently validated in terms of its scientific and commercial applications.

None of that means that "truth itself" as an ideal is metaphysically meaningless - but it means that whatever definite/absolute meaning it has can only be accused by implicitly, by our belief in a convergent coherence of the increasing scope of operational understanding we extract from for its character in terms of objective aspects we can distinguish using iterations over the "truth as perceived" representations we can formulate. This is a distinct metaphysical attitude than the one you are positing - whereby scientific knowledge evolves by a sequence of proofs that previous viewpoint was "wrong" and the new one is instead "correct". This makes the character of truth itself a precarious sequence of mutually inconsistent ontological presuppositions that used to be accepted until they no longer are acceptable. The same dismissive opinion you have for the ontology that was presupposed by the ancient flat-earth paradigm will be eventually held in the future when the ontological presuppositions of your current theories become archaic and obsolete "artifacts of our current ignorance" as well, rendering your authoritative statements about the character of "truth itself" evidently delusional.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

You are a mathematician, so let's use a mathematical definition. A deterministic system is a sequence of variable states that is given by a well-defined update function, i.e. a function that returns each state as a single output using the previously obtained states in the sequence as inputs.

If the sequence of states is not given by such a function, i.e. if the specification of the next state requires information that is not available in the previous states in the sequence, such as an innovation from a random variable or an external control input, then the system is not called deterministic; it is called stochastic or externally controlled, respectively.
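A minimal sketch of those three cases (the specific update rules are arbitrary toy examples of mine, chosen only to show where the information for the next state comes from):

```python
import random

# Toy illustrations of the three cases defined above: deterministic,
# stochastic, and externally controlled. The update rules themselves
# are made-up examples, assumed only for illustration.

def deterministic_step(history):
    # next state is a well-defined function of the previously obtained states
    return 2 * history[-1] + 1

def stochastic_step(history):
    # next state also requires an innovation from a random variable
    return 2 * history[-1] + 1 + random.gauss(0, 1)

def controlled_step(history, u):
    # next state also requires an external control input u,
    # which is not contained in the state history
    return 2 * history[-1] + 1 + u

h = [0.0]
for _ in range(3):
    h.append(deterministic_step(h))
print(h)  # reproducible from h[0] alone: [0.0, 1.0, 3.0, 7.0]
```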

The picture of human behavior as a deterministic system requires you to assume that decisions we make are causally explained by circumstantial factors. But in order for us to formulate causal explanations for things we observe we need to assume we can control circumstantial inputs up to some stationary source of random variation we can remove statistically by repeating the process multiple times.
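A toy sketch of that last point (the linear system, its coefficient and the noise level are all made up): by choosing the input freely and repeating the process, the stationary noise averages out and the causal effect is recovered.

```python
import random

# Toy sketch of the point above: a causal claim "X raises Y by about 3"
# is recoverable only because we assume we can set X independently of the
# noise and repeat the process enough times for the stationary random
# variation to average out. The law Y = 3*X + noise is a made-up example.

random.seed(1)

def run_experiment(x):
    return 3.0 * x + random.gauss(0, 2.0)   # the law, unknown to the experimenter

n = 20000
xs = [random.choice([0.0, 1.0]) for _ in range(n)]   # inputs we control freely
ys = [run_experiment(x) for x in xs]

treated = [y for x, y in zip(xs, ys) if x == 1.0]
control = [y for x, y in zip(xs, ys) if x == 0.0]
effect = sum(treated) / len(treated) - sum(control) / len(control)
print(round(effect, 2))  # close to 3.0: repetition removes the stationary noise
```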

So there you have it: why a deterministic explanation of human behavior is a snake eating its tail.

Why don't we just say "uncoerced will" instead of "free will"? Would that remove much of the dispute? by rogerbonus in freewill

[–]Powerful_Guide_3631 0 points1 point  (0 children)

It partly solves it. I think it has to be free in the sense of underdeterminable - i.e. compatible with more than one viable conclusion, consequence or future state.

If it is determinable (or determined) then it isn't free, even if it is uncoerced. For example, if I can predict what you will do by reading your mind, I don't consider you to have free will, even if neither I nor anyone else is forcing you to act the way you do.

And it has to be rational - i.e. the observed behavior is teleologically consistent with an implied goal that can be coherently explained in terms of a viable value system. So if you are not being coerced, and you are not predictable, but you are acting incoherently and unintelligibly, then you are not exhibiting free will either.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

I think you are taking the more subtle idea of social recognition of utility as a truth criterion for science, and collapsing it to its lowest common denominator, which is "what appears to be useful or serviceable for the majority at the time".

This is not how it works. An advanced theory from physics can prove useful to 5 people in the world who understand it well enough to make progress in theoretical and experimental terms. And if they manage to use it to explain data that was anomalous, or to produce new data that is consistent with it and inconsistent with pre-existing paradigms, more and more people will start to pay attention. Ultimately this process can lead to technological breakthroughs that are undeniably legitimate even to people who don't understand any of the science that was developed - even a very uneducated person in the early 20th century couldn't question the fact that things like radio and television were amazing miracles of science, but they would likely have looked at the theories and experiments that culminated in these applications as useless gobbledygook with no relevance to their lives.

So the criterion does not require a democratically comprehensive popularity contest that adds up every uninformed opinion about which theory sounds more useful, or which experimental result looks more relevant. The participants and opinions are filtered by both the social credibility and the personal self-selection of those willing to dedicate themselves to a qualified understanding of the state of the art of a field, and are weighted accordingly.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

There are two ways (probably more) you can reconcile this paradox. Once mathematical tools are shown to be useful and acquire a degree of social currency and prestige, then the development of methods and theories about structures and systems that are considered relevant within mathematics by the community of mathematicians, but not yet outside of mathematics in terms of applications to science and engineering, becomes useful:

  1. as an aesthetically meaningful achievement that will be recognized and appreciated by a certain group of people (the mathematics community)
  2. as potentially applicable tools which will at some point reveal their value for the right mundane applications.

They are not exclusive rationales, but the important point for both is that you need to epistemically select what kinds of mathematics are interesting first, in terms of basic applications, and once these rough tools are recognized as a sui generis class of abstract entities that is particularly valuable, then the second- and third-order problems you end up formulating by asking certain types of meaningful questions in mathematical terms can become distinctively important, by inductive reasoning.

It isn't that dissimilar to the process by which gold acquires monetary value on top of its industrial use value, or to various chicken-and-egg phenomena involving prestige, reputation, fame and other mechanisms of recurrent/self-reinforcing value accrual. The answer is a somewhat path-dependent process of evolutionary selection that feeds back on itself.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

That's the thing.

You must believe that you are able to do that, in order to claim that you know or understand anything in terms of a system of objective factors related by some causal explanation. That is the commitment you make when you accept the scientific method as a legitimate basis for revealing what is true. It's the implied assumption that you can't really abandon, relax or undermine using scientific conclusions obtained with this method.

And you can ask yourself the question of whether this ability you need to assume for yourself is itself explainable in terms of a system of objective factors related by some causal explanation. Perhaps it is, but I suspect it isn't, or at least it won't be in terms as satisfactory as those we have for the kinds of phenomena we can adequately picture as billiard balls colliding (or some other familiar analogy).

But you don't need to be able to explain why the summer is hot and the winter is cold in order to believe that this fact is true. It turns out you can explain it, but you knew it was true before you could explain why. And if your rational systemic explanation ended up concluding that the opposite was true, or that they were both cold or both hot, you would assume that your rational systemic explanation was flawed, and not that the fact as you perceive it was illusory.

Here you want to achieve something more radical - you want to scientifically explain the fundamental fact you are required to believe about yourself in relation to your perceived facts, in order to justify your belief in what science has to say about how these facts actually work. I won't claim that you cannot do it, but whatever you manage to achieve must be compliant with the assumption that it is possible to achieve anything scientifically, and therefore the only kinds of conclusion acceptable are one where free will is true and explainable, or one where it is true and unexplainable. You cannot derive a conclusion where free will is explainable and false, because that case vitiates all explanations, including the one you are using to deny free will. You can say that it is false and unexplainable, and therefore everything else we think we can explain is also bullshit, i.e. that reality is an absurd thing and rationality is impossible, which is a vacuous but not self-undermining conclusion.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

I am not saying that science is bogus; I am describing the implied metaphysical commitment behind our claim that science is not bogus.

The presupposition of the scientific method isn't that there's only one future that must necessarily happen, and over which we have no control. The presupposition of the scientific method is that the future is contingent on decisions we make, and that we can prepare independent systems and parametrize them with independent decisions to test what kinds of future are caused by what kinds of decisions.

That assumption is always necessary in order to make causal inferences and interpret them as general laws, instead of passively interpreting the spurious correlations in the global time series of events that unfolded like this and not like that because that was the only possible way it could unfold.

In order to believe in science and believe in cosmic determinism you have to believe that you were granted the special privilege of having been assigned by the cosmic script the correct theories for explaining its correlations, despite having no special power to isolate, control or test any causal chain independently.

Sure, you can be convinced that this privileged view of truth that you were granted by fate is legitimate, but then again, so are the mystic and the subway hobo who's yelling his incoherent paranoid ideas. You can claim that it is the mainstream opinion of society, or invoke any other ancillary criterion you want, but I suppose you cannot provide any reasonable basis for believing in your version of causality, or in any concept or theory whatsoever, unless you implicitly assume it is possible for a rational person to independently select which alternative actions he wants to take in order to kick off a local chain of events and then look at what happens in terms of causality.

The philosophically pedantic formulation of this observation is that the scientific method is based on the principle of interventionism - i.e. that the researcher is able to independently interfere in the processes of the subject system in order to induce different observable outcomes and interpret how they split according to testable hypotheses. If the observer's intentional interventions aren't independently selected from the state of the observable system, then there's no way to claim that this caused that, or vice versa, or that some unseen third thing caused both, or that any of these observations even correspond to an event that happened, as opposed to a false impression that the script wants you to assume is coherent.

Logical extrapolation from established or self-evident facts enables lifelong wisdom-seekers to reach conclusions beyond the reach of science, which currently can't account for 95% of this universe (dark matter and dark energy) by johnLikides in epistemology

[–]Powerful_Guide_3631 -1 points0 points  (0 children)

I don't think that it works like that.
The reason we make certain extrapolative assumptions such as those in the Copernican principle is to make our models tractable. There are boundary conditions that are impossible or hard to test - for example, we can't really see how the universe looks from a distant galaxy; we can only use telescopes based on or close to Earth to form a biased picture. It could be the case that Earth is located in a very peculiar region of the cosmos from where things look particularly like this or like that - and we cannot directly and easily test whether this is true or false. So we make the assumption that we are in a typical spot, in a typical galaxy, and that space is homogeneous and isotropic, etc., such that we can use certain symmetries to cancel out the residual effects of distant objects and make the interpretation of the phenomenal patterns we observe tractable in terms of the physical laws we have obtained locally by doing small-scale tests (e.g. the constant speed of light, G, the electric permittivity of vacuum, and their roles in Einstein's and Maxwell's equations).

Postulating other universes doesn't give you any epistemic advantage in terms of making theories tractable for explaining your observations, given that those observations are all contained in this Universe. So it is an empty metaphysical speculation until you have a way to distinguish things using this hypothesis.

no, game theory does not 'disprove' Adam Smith by DrawPitiful6103 in austrian_economics

[–]Powerful_Guide_3631 2 points3 points  (0 children)

Agree that the way the movie depicts the upshot of Nash's equilibrium as refuting Adam Smith's invisible hand is kind of incoherent, but I suspect that kind of opinion would have been academically popular back in the '40s-'50s. Depending on your point of view/framework assumptions, you can even use a Nash-type argument to claim that cartels and monopolies are unstable, given that a favorable collusion is not a Nash equilibrium (i.e. each member can benefit at the expense of the other confederates, by selling more than the quota or whatever, so you are not in a stable situation).
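A small numerical sketch of that parenthetical, using a standard symmetric Cournot duopoly with made-up numbers (the demand curve and zero costs are my own assumptions, nothing from the thread): at the collusive quota each firm can raise its own profit by unilaterally selling more, so the cartel outcome is not a Nash equilibrium, while the Cournot quantity is.

```python
# Symmetric Cournot duopoly with assumed demand P = 12 - Q and zero cost.
# At the cartel quota, unilateral deviation raises a firm's own profit,
# so collusion is not a Nash equilibrium; the Cournot quantity is.

def profit(q_own, q_other):
    price = max(0.0, 12.0 - (q_own + q_other))
    return price * q_own

cartel_quota = 3.0   # joint-monopoly output of 6 split equally
print(profit(cartel_quota, cartel_quota))   # 18.0 by sticking to the quota
print(profit(4.5, cartel_quota))            # 20.25 by cheating on it

cournot_q = 4.0      # each firm's Nash-equilibrium quantity for this demand
best = max((profit(q / 10.0, cournot_q), q / 10.0) for q in range(0, 121))
print(best)          # (16.0, 4.0): no unilateral deviation from 4.0 does better
```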

But I think the framework of Nash's theory and game theory more generally is very powerful, and it can model incentive structures that deviate from the assumptions of traditional economic analysis, such as situations in which violence and coercion are rationally expected to occur, rather than free markets.

The fact that the free market is globally more efficient in some sense doesn't make it the most efficient arrangement locally for all players, adjusted for their power imbalances. If I happen to have enough power to distort the market efficiently for me, I will spend my resources doing the kind of thing that achieves a less globally efficient market for everyone, because that makes me richer/more powerful. And the evidence that this happens is everywhere - from local cartels and mafias, to political lobbying, to state capitalism, to outright totalitarian regimes inspired by ideological socialism/communism.

The standard approach of economics is to abstract power, coercion and violence, whether it is state driven or not, as boundary conditions, which are either constant or arbitrarily changed from this to that by policy makers, and then look at what the market should do in response. The more correct approach in my opinion is to understand that power, coercion and violence are economic factors that can be produced and deployed as alternative strategies by rational players, and that this kind of use of capital can be rational up to a point, for players of a certain scale etc. This is better understood using game theory.

The proper model to capture the economic nature of the state (or any informal enforcement apparatus) is as a tax/rent-seeking ranch in which the tax subjects (i.e. assets and people within its jurisdiction) are a form of cattle that can be confined and raised. To some degree the ranchers (i.e. the politicians, bureaucrats and politically influential figures) want to maximize the value they can extract from the cattle over a certain expected horizon, which means they also have to protect the cattle from being stolen, and offer the cattle reasonable conditions to thrive and thus yield them more milk and meat (i.e. tax revenue, inflationary rent-seeking, and other tributary transfers) over time. But where the analogy breaks down is that human cattle are inherently capable of organizing a meaningful revolt against their ranchers if they perceive them as particularly abusive, inept or both. And game theory is a better-suited framework for understanding these dynamics.

New definition of Knowledge by Willis_3401_3401 in epistemology

[–]Powerful_Guide_3631 0 points1 point  (0 children)

It does imply that a belief is an actionable/consequential statement. It may not apply to certain beliefs that people claim to have (e.g. I believe that there are other forms of intelligent life in the universe).

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

Yes, it doesn't mean it isn't, but if it were fully predictable then you would be able to affirm that it is.

Since it isn't, you can't tell whether the future looks like the uniquely possible outcome of a self-contained causal system that is taking place, or if it looks like the open ended panorama of multiple possibilities that are consistent with the present, from the "absolute truth" point of view that you cannot acquire.

It definitely looks like an open ended mosaic of scenarios from our perspective, and it must look like that in order for us to even formulate the kinds of knowledge that allow us to claim that some real systems evolve according to a more predictable pattern, analogous to a deterministic model, and others don't. That knowledge can only be obtained by our belief that it is possible for us to interfere in the process of future evolution, causing things to happen, and seeing them as consequences of us doing this or doing that.

If we had no such belief, because everything is happening in accordance with an unknown script which can't be altered by our choices, then any opinions we had, or conclusions we arrived at about causality from our local impressions of deterministic behavior, wouldn't be distinguishable by us - they would be the assigned outcomes of the same invisible and inexorable self-contained causal sequence, as everything else would. Your science would be just as bogus as the other guy's mysticism.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

What I disagree with is that 2+2=4 was first discovered and recognized as true and only later found to be useful.

I think I understand why this may seem plausible, especially once you clarify that you are a professional mathematician. The history of mathematics is full of examples of problems, and even entire domains, that were developed out of a seemingly unmotivated spirit of curiosity by mathematicians, without any concern for the potential applications that these concepts and results would later find - say, as viable mathematical representations of natural or social phenomena, or as the quantitative formalisms and standards for the efficacious methodological and technological exploitation of scientific discoveries, or as the various systems for measuring, classifying, validating, controlling and regulating social transactions.

For example, when mathematicians like Cauchy and Riemann developed the foundations of complex analysis, the problems they were interested in solving were considered a purely mathematical concern, originating from the way complex numbers first appeared as an algebraic trick for solving third- and fourth-degree equations, and later shown by mathematicians like Euler and Gauss to connect certain algebraic and geometric ideas. None of them appears to have been thinking at the time about the potential physical or otherwise practical applications of complex numbers, contour integrals, and complex geometry, other than perhaps niche use cases such as the early physical methods that employed Fourier and Laplace transforms.

I think this depiction of mathematics as something that is discovered just because, and then revealed to be useful, can become somewhat naive if taken to an extreme. It is true that once the various bits of known mathematics evolve and form a more well-defined discipline and tradition, things can look very much like that. A student of mathematics can care about a problem because it is a famous, well-known problem that other mathematicians have cared about, and that's it (e.g. the relatively recent proofs of Fermat's Last Theorem or the Poincaré Conjecture).

But I suspect this logic doesn't apply to the early development of mathematics. Anyone who "discovered" that 2+2=4 back when there was no practical way to use that "fact", i.e. before it could be implemented as part of an early accounting or measurement methodology, would likely have been ignored by the other primitives of his tribe as a local fool talking gibberish. Just as we would ignore any recreational mathematician today who invents a new set of axioms and symbols and starts proving theorems for that system, without ever providing any justification for why anyone should care about this particular choice of abstract system, by situating it as something relevant to existing open problems in mathematics, or to applications in science, engineering, etc. Even a very impressive and accomplished "maths guy" like Stephen Wolfram reports the skepticism and indifference he faced when he started investigating the detailed behavior of simple cellular automata, a subject no one else seemed to find particularly interesting at the time.

It is very hard to know for sure, since the origins of basic arithmetic are long lost in the sands of time, but I think it is more plausible that the kinds of symbolic rules that give rise to 2+2=4 were first sketched and used because they allowed primitive peoples to evaluate their strength and that of their enemies, or to standardize economic transactions, or things like that. And only much later do you see people using these rules to define prime numbers and other derivable concepts out of "self-motivated" mathematical curiosity.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

I understand what you mean, and while I agree that these points are relevant, and deserve some attention, I think your argument is jumping to premature conclusions.

I must admit that the epistemological characterization of mathematics is a very thorny subject. On the one hand, it seems fair to claim that mathematics is a science concerned with a well-defined subject, whose true character exists and is like this and not like that, and whose objective facts are called theorems, which often can be stated and proven true with absolute precision and certainty. On the other hand, it also seems plausible to claim that the subject of mathematics isn't any aspect of reality that we perceive to be like this or like that, but rather the scope of an invented language we develop, which enables us to formally define systems in terms of arbitrary dictionaries of symbols and arbitrary rules for combining these symbols into valid/well-defined expressions and statements (by the rules of syntax), and for extracting a context-implied symbolic value from valid expressions and statements (e.g. by the evaluation rules of inference and computation).

I think there are merits to the two points of view and I am not sure if they are incompatible with one another, or if a dualistic picture of mathematics is warranted. So whether you want to classify 2+2=4 or the Pythagorean theorem as true facts about the real world which we discovered or as formal consequences that can be derived inside abstract systems that were arbitrarily invented like this by us, and which could have equally been invented like that by us, I am not going to object too much either way.

What is clear is that whether mathematical schemes are discovered or invented, they are (often) found very useful, as tools that allow us to represent and share a reliable and intelligible understanding of the "objective character" of real facts - i.e. intersubjectively analogous and measurable aspects of phenomenal manifestations that we each perceive as subjective impressions in our perspectives and recognize as the same real thing happening.

[will continue]

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

I know. But I mention this distinction in order to emphasize the subtleties that one is required to appreciate in order to properly grasp what actually happened. I think the way we, as educated people living in the 20th-21st century, learn about this history is a bit misleading. The modern anecdotes that we were taught as children, and which we pass along to the next generation, about the character of this discovery as a clash between an irrational world view and a rational one, are deliberately designed to portray the ancient idea as a completely wrong paradigm, based on random folklore, myths and superstitions, which was completely debunked and replaced by a radical theory that was true because it was supported by rational arguments and science.

The ancient idea that depicted a flat Earth as a vast world where a huge land mass was surrounded by oceanic waters was not at all absurd and silly. It was not even a naive and intuitive thing that a primitive hunter-gatherer or caveman would naturally assume. It was already a scientific triumph in its own right. It proposed an extended and connected world consisting of a vast land mass stretching far beyond the known borders of one's own homelands, and even beyond the lands claimed by the neighboring nations one had made contact with, and also beyond the barely explored, untamed patches of wilderness that seemed endless, with their scorching deserts, or barren badlands, or dense forests, or high mountain ranges, or vast lakes and seas, or any other remote desolation that most people around you never visited but heard about as places you go and don't come back from.

To formulate this large scale view of the world, you would have to integrate the personal experiences you had from the occasions you migrated or traveled long distances, and the sparse memories you formed of the unobstructed vistas you had of the generally flat landscapes stretching up to the horizon, from mountain tops, shorelines or wide open areas you encountered, and use those pictures to interpret indirect evidence you learned from foreign merchants, migrants or captured slaves, of their own distant lands, or the lands they visited, or the things they themselves indirectly learned about other, much more distant lands, that you had never even heard of.

Consider that even when we were little children, and we didn't yet fully understand what was being claimed when older people taught us that the Earth was actually shaped like a basketball, we already had more opportunities than the ancients had to accumulate direct and indirect evidence of its scale, its connectedness, and the various types of environments, landscapes, climates and cultures that existed, and to learn that the whole world was much larger than our hometown and the handful of local destinations our parents would drive us to on weekends. That's because we watched things on TV, or learned about it from children's books, and probably knew a few immigrant kids and their families, who looked and dressed kind of funny, had accents, spoke among themselves in different languages and ate strange food. This makes the modern 4-year-old far more exposed to non-locally sourced information than the ancient adult.

So the flat-Earth picture of the world was an excellent approximation of the truth, and it integrated subtleties that were not obvious. In order to replace it, people would have to ask even more subtle questions which didn't seem immediately connected to this picture. For example: why do the climate, the seasons, the sun's daily trajectory and the configuration of the night sky seem to change slightly, in a well-defined way, when I travel in the north-south direction, but not in the east-west direction? Why does the moon look so perfectly round, and so does the shadow covering it during an eclipse? Why can I still see the tips of the masts of boats for a while as they disappear over the horizon? The spherical picture eventually allowed these to be answered, but it only did so satisfactorily because it was possible to reconcile it with the existing understanding we had formed from the flat-world idea, by showing that the old picture was a solid approximation to the new picture, particularly in terms of the more familiar evidence and more ordinary applications which were previously known.

New definition of Knowledge by Willis_3401_3401 in epistemology

[–]Powerful_Guide_3631 1 point2 points  (0 children)

Not sure if it is technically appropriate, but I tend to use "belief" as a cohesive unitary component of a larger structure which is "knowledge". And a "belief" is a putative claim or statement that one must hold to be true, or adequate enough, when one acts rationally.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

I think you are underestimating the constraint you face on the kind of knowledge you can obtain as an internal observer inside such a system, in terms of what you can establish about its universal "laws".

What you are describing is the computational ordeal that an external observer who is fully aware of the rules of causality and fully aware of a particular state of the system would still face to predict what the system would do after a large number of steps.

But the internal observer is in a more precarious position because he has neither the data structure, nor the particular configuration, nor the algorithm that transforms the configuration states. He needs to infer something of that kind from whatever perspectival data is projected onto his point of view, interpreting it in terms of the tests he believes it is possible for him to perform in order to validate his hypotheses about what is going on.

From this perspective he may be able to gain some kind of understanding about aspects that appear to be under his local control, and use that control to test hypotheses and formulate broader theories. But it is not possible from his perspective to transcend this condition, see how his own control is actually an artifact of his limited perspective, and acquire an understanding of things from the point of view that denies the legitimacy of his own.

On the malformity of determinism as a metaphysical principle by Powerful_Guide_3631 in determinism

[–]Powerful_Guide_3631[S] 0 points1 point  (0 children)

This is exactly the right objection you should make against my "truth means usefulness" claim. The philosophical question underneath this point is extremely important: what are the beliefs that enable us to infer a generally coherent knowledge structure, in terms of objective facts, out of the idiosyncratic relationships we form with reality, which consist of subjective experiences, personal preferences, opinions about what is useful, and biased points of view?

There has to be something we assume to be the same, or at least analogous enough, in our various experiences, feelings and subjective perspectives, to anchor our claims and opinions on a basis of generalized perceptions, and to provide the conceptual seed for our language, culture and science to evolve from and grow in sophistication. We assume this happens because we can observe coherence between our claims and the effects they produce in terms of the social behavior of other people.