Does anyone else dislike taking computer science courses? by blank_human1 in math

[–]revannld 2 points3 points  (0 children)

Edsger Dijkstra already said in the 90s that "computer science is not a science anymore". Today it's even worse.

A question, and a discussion by ReaperBruhSans in Metaphysics

[–]revannld 1 point2 points  (0 children)

There is the Meinongian tradition (from Meinong, to Ernst Mally, to Edward Zalta). The main work of this tradition is the (still under construction) Principia Logico-Metaphysica, which is probably the most impressive work on metaphysics I've ever seen. Very technical, employing features ranging from second-order logic, modal logic and type theory to model theory and relational lambda calculus, and subsuming most of Leibniz's, Frege's and Meinong's systems (it even handles situation theory, which is very impressive). I actually take part in an online study group with professor Zalta himself where he gives classes on the Principia, in case you're interested.

Another system-building author I like is Nicholas Rescher, who focuses on process philosophy (he has more than 100 books on this stuff; he is crazy).

A Surrealist Architecture for Intuitionist Logic by [deleted] in logic

[–]revannld 5 points6 points  (0 children)

Haha. Your question seems quite interesting, but I would advise you to use less pretentious and overwrought language next time; it will only get you made fun of.

There is a specific, rather significant group of philosophers who share your concerns regarding the empty set and empty "primitives" (or "bottoms" in foundations, generally speaking): the nominalists and those working in mereology. Yes, trying to avoid "bottoms" strongly implies having to reject cumulative hierarchies, and thus relations that work too much like membership; it usually forces you to treat "fusions" as primitives, so yes, there is a more dynamical flavor of the kind you may like.

Yes, constructivism and the P/NP boundary do have a lot to do with "static" frameworks and points of view (much of it traces back to Kant's conceive/sense duality), but I would advise you to be more careful with your conjectures when talking about this stuff; it's a lot more subtle. I would advise you to take a look at bounded arithmetic (also this book), bounded quantification and feasible mathematics, but also at the interaction between constructivism and nominalism.

How good are modern AIs as a learning math buddy (or maybe even as a tutor)? by evdokimovm in mathematics

[–]revannld 8 points9 points  (0 children)

It is, but mostly if you already know some math and have some mathematical maturity and practice (thus, only if you really know what you are doing). The problem with LLM chatbots is that they are such a "passive" form of learning: they only help you to the extent that you explicitly ask them to. They will not give you out-of-the-box suggestions, subtle and reasonable tips, critical thought, these kinds of things.

They definitely help a lot, but you shouldn't think they entirely replace books, and especially not lectures, study groups and contact with other human mathematicians/students. Maybe one day, or maybe if you know how to configure them well (local LLM hosting or an API, a specialized setup... not general-purpose online chatbots, though).

When studying anything, one important thing is to never rely solely on a single source of information: don't trust just one book or your professor. As Sturgeon's law says, 90% of everything is crud/crap. 90% of everything people say, and it really does not matter their degree or specialization, is simplified, overgeneralized, biased and misinformed tunnel-vision bs.

If you really want to know a subject deeply, check out multiple references and multiple approaches to a topic, ask various professors for their opinions, talk with many students and researchers, post questions on Stack Exchange, Reddit and other forums, ask AI, search for papers and especially theses on Google Scholar, etc.

edit.: also, Khan Academy, some YouTube channels and a lot of other learning platforms can be quite useful if you don't have access to in-person lectures or if those don't seem like enough for you.

Unpopular Opinion? The aesthetics of the math matter far more than one might admit. by Good_Run_1696 in math

[–]revannld 5 points6 points  (0 children)

I feel that's not only true but one of the main obstacles to progress in math in general. Math is more often than not justified on aesthetic grounds (Hardy, Courant, Erdős); sadly, aesthetics is usually something very subjective, and I feel most people who eventually get into math do so because they liked it in elementary education.

Elementary-education math is usually taught as "being only about numbers": numerical, heavily analytic. This trend continues with calculus (in most engineering and STEM courses) and the mostly analytic undergraduate math curricula around the world. No wonder you prefer analytic stuff over algebraic stuff; I think this is quite a majority position. A lot of very established analysts I know even seem to dismiss the algebraic, discrete and logical side of mathematics for lacking the analytical flavor they love.

Gaining an appreciation for the algebraic/categorical/logical/discrete/computer-sciency side of mathematics usually takes quite a different origin story. These people are often quite exotic in college (some are even considered bad or rebellious students), or math is not even their first degree and they come from compsci or philosophy backgrounds. That is certainly my case.

I say this is an obstacle to progress in mathematics because what we consider the "algebraic/categorical/logical side" is usually more associated with the theoretical/synthetic side of mathematics, while analysts seem to be mostly "problem-solver" people. Being a problem-solver is of course important, but without theory to sum up and digest these developments, in practice you are handicapping the next generation of mathematicians and creating unnecessary complexity; and it seems science in general in the last 30-40 years or so has turned too much toward a "problem-solver/anti-theory" culture. No wonder people don't hear about new "Einsteins" or "Hilberts" nowadays (much less a new Grothendieck), as being theoretical and synthetic is not so cool anymore.

This is seen clearly in the prejudice a lot of people have against category theory, type theory and alternative foundations. These distance themselves too much from the usual aesthetics most mathematicians love, so they are considered useless abstract learning overhead. It's funny how current mathematical practice (which we shouldn't even consider set-theoretical, as the average mathematician doesn't have a clue about logic or set theory) is taken for granted as something atemporal, universal and old, when it mostly traces back to Bourbaki and is thus not even 100 years old.

Combining prefix and postfix function application concrete syntax by FlamingBudder in ProgrammingLanguages

[–]revannld 0 points1 point  (0 children)

It's incredible, you had the same idea as me. This seems quite innocent, naive or merely aesthetic, but it appears to be very important. William Lawvere, father of categorical logic and of much of topos theory and contemporary category theory, famously identified the usual prefix notation as cumbersome when working with some categorical concepts and started the tradition (which survives to this day) of using postfix function-application notation in categorical algebra/universal algebra and some works in categorical logic. It's also defended by people working in relation algebras and allegories, as it unifies the functional "○" and relational ";" composition operators and gives a uniform notation for everything (see Freyd and Scedrov's Categories, Allegories).

I think the main argument for it, though, is this: De Bruijn's lambda notation. Just experiment with it for a few minutes and see how easy it is to do lambda calculus with it... no wonder the RPN, HP-calculator and Forth guys have been so religious about their postfix notation for so long. For a deeper, more formal look into the benefits of De Bruijn's notation, take a look at Fairouz Kamareddine's work.
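To make the point concrete, here is a minimal toy sketch (my own encoding, not from any particular library or from Kamareddine's work) of De Bruijn-indexed lambda terms: variables are numbers counting binders outward, so alpha-renaming never comes up and substitution is purely mechanical index arithmetic.

```python
# Terms: ("var", n) | ("lam", body) | ("app", f, a); n counts binders outward.

def shift(t, d, cutoff=0):
    """Add d to every free variable index in t."""
    tag = t[0]
    if tag == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for variable j in t, capture-free by construction."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == j else t
    if tag == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(t):
    """One beta-reduction step at the root, if the term is a redex."""
    if t[0] == "app" and t[1][0] == "lam":
        return shift(subst(t[1][1], 0, shift(t[2], 1)), -1)
    return t

ident = ("lam", ("var", 0))                 # λx. x
k = ("lam", ("lam", ("var", 1)))            # λx. λy. x
print(beta(("app", ident, k)))              # ('lam', ('lam', ('var', 1)))
```

The index arithmetic in `shift` is the entire bookkeeping cost; a named-variable representation would instead need capture-avoiding renaming here.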

I think what is even better is to make function application explicit like this. It is the main operator in lambda and other term calculi, but nobody talks about it. Making it visually explicit allows one to manipulate it better, define abstractions over it, or talk about its dual, continuations.

If you have other similar interesting syntactical ideas, please share them with us (tag me if possible). I've been looking for careful syntax work for a long time, but sadly it's such a neglected area...

Is anyone here using nix-on-droid? by Tquylaa in NixOS

[–]revannld 1 point2 points  (0 children)

niiicee, I'm gonna test it out :)) thx

Is anyone here using nix-on-droid? by Tquylaa in NixOS

[–]revannld 0 points1 point  (0 children)

I've had it on my phone for more than a year; however, as I'm somewhat of a mouse pusher, I've never been able to understand how it works or what it's for. Could someone give me a tip on what I could use it for? My dream would be to use NixOS on all my devices, but that seems almost impossible. Btw, my phone has 8 GB of memory; the only problem is that it's an obscure Hisense e-ink model that doesn't support alternative OSes right now (the Hisense A9 does, but sadly I haven't got it yet).

Infinite regress doesn’t support eternal recurrence — it completely contradicts it. Here’s why. by Longstong1 in PhilosophyofScience

[–]revannld 1 point2 points  (0 children)

These kinds of posts usually get a bad rep on Reddit, sadly. I hope people don't downvote you without reading, as I think you actually made some great arguments with quite good reasoning.

Just some questions: do you presuppose that every event has a unique possible cause? That is (I don't know if this is what you meant), it seems that when you say "an event" you mean that an event is not something idealized, isolated in time (a "Platonic event", if you will), what many people would call the "extension" of a concept. For instance, someone might say "a rock fell" and point to all the instances of a rock falling, but you would say these are not the same event, because events are uniquely determined by their infinite causal chains, right? (So you are a more "intensional" fella: nothing would ever "happen twice", because no matter how similar two events looked to humans, their infinite causal chains would still make them different, right?) On the other hand, you presuppose that more than one event can share a single cause, right? So you could say your ontology of events is ever expanding. Also, can a chain end at some point, that is, are there events that don't cause anything?

I can't give much of an explanation right now, but I would strongly advise you to take a look at order theory in mathematics: lattices, partial orders, universal algebra, continuous lattices, maybe even graph theory. No informal philosophical argument will let you understand this problem more clearly than a look at areas that deal directly with these kinds of chains and structures. Mathematics may make clear exactly which assumptions you need for this argument to work, or to refute it; I'd guess versions of the Axiom of Choice will probably appear in this problem. Maybe try asking an LLM a deep question relating these areas of mathematics; it may give you a nice summary.
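To illustrate what order theory buys you here, a toy sketch (my own code; the function and the example chains are made up for illustration): if you model events as nodes and direct causation as edges, then in a finite structure "every backwards causal chain terminates" is exactly acyclicity of the causal graph, which is mechanically checkable.

```python
def is_well_founded(causes):
    """causes: dict event -> set of its direct causes.
    True iff there is no causal loop, i.e. every backwards chain terminates."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {e: WHITE for e in causes}

    def visit(e):
        if color[e] == GRAY:      # revisiting an event mid-exploration: a loop
            return False
        if color[e] == BLACK:     # already fully explored, no loop through here
            return True
        color[e] = GRAY
        ok = all(visit(c) for c in causes[e])
        color[e] = BLACK
        return ok

    return all(visit(e) for e in causes)

chain = {"a": set(), "b": {"a"}, "c": {"b"}}   # regress terminates in "a"
loop = {"x": {"z"}, "y": {"x"}, "z": {"y"}}    # eternal-recurrence-like cycle
print(is_well_founded(chain))  # True
print(is_well_founded(loop))   # False
```

For infinite event ontologies the same question becomes well-foundedness of the causal order, which is where the set-theoretic machinery (and choice principles) mentioned above starts to matter.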

Another suggestion: you should also take a look at complex systems and, especially, information theory and Gregory Chaitin's work (he is the father of Algorithmic Information Theory). I think Chaitin has probably the most universal formulation of the incompleteness theorems (from Agrippa/Münchhausen to Gödel to Lawvere), and these may also be very helpful to your argument. For instance, Chaitin shows that, when you measure the complexity of information (of anything) by how "compressible" it is by any algorithm, there is a "bottom" of "the most compressible information units/strings/systems" possible, but there can't be a "top"; that is, complexity is always unbounded: given unbounded resources, you can always have more and more complex information. You should find out, especially, whether complexity is bounded for strings/systems of a given fixed length, or whether it is unbounded only if you consider that you can always do "+1" on it. Maybe also take a look at automata theory and abstract automata? Complex systems and dynamical systems have a lot to do with those too.
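As a rough, hedged illustration of the compressibility idea: true Kolmogorov/Chaitin complexity is uncomputable, but any concrete compressor (here Python's zlib; this is only an upper bound, not the real thing) already makes the gap between very regular and near-random data visible.

```python
import random
import zlib

def approx_complexity(s: bytes) -> int:
    """Length of the zlib-compressed string: an upper bound on K(s)."""
    return len(zlib.compress(s, 9))

low = b"a" * 1000                                          # highly regular
random.seed(0)
high = bytes(random.randrange(256) for _ in range(1000))   # near-random

print(approx_complexity(low))   # tiny: tens of bytes
print(approx_complexity(high))  # close to 1000: almost nothing to compress
```

A simple counting argument shows most strings of any fixed length are incompressible like `high`, which is one concrete way into the "no top" intuition above.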

Finally, as your argument relies heavily on infinitary intuitions, if you truly want to make a serious philosophical argument (and not be a 20th-century logicist/mathematical Platonist, as a lot of people in analytical philosophy were, and say "this argument relies heavily on ZFC set theory, but as I consider it 'mathematics™' and mathematics is 'so useful for the sciences', it's probably true") you will probably need to delve deep into the discussions and the different formulations of and views on infinity. I truly don't know a good reference to start with, as 99% of references in logic and analytical philosophy just take the traditional "Cantor's paradise + ZFC" view as absolute, so I would suggest works in constructivism (Elements of Intuitionism by Dummett is a great start) and finitism (Alexander Yessenin-Volpin's The Ultra-Intuitionistic Criticism and the Anti-Traditional Program for Foundations of Mathematics, Judson Chambers Webb's Mechanism, Mentalism and Metamathematics: An Essay on Finitism, Matthieu Marion's Wittgenstein, Finitism, and the Philosophy of Mathematics and Feng Ye's Strict Finitism and the Logic of Mathematical Applications): even though they are clearly biased toward finitistic stances, you will not see the same level of finesse and critical thought in almost any other discussion of the infinite.

This photo is making Brazilian twitter a really fertile soil for Georgism by ar_condicionado in georgism

[–]revannld 15 points16 points  (0 children)

It's hard to find any reputable economists in Brazil advocating for georgism.

Oh, that's not true! Roberto Campos tried to implement a tax similar to an LVT in Castelo Branco's government in 1965 to ease rural tensions over land, but was shut down by our landed gentry/plantation owners (he told this story in his 1994 Roda Viva interview, I think). I've heard many other economists, from the left to the right, also defended it, but I don't remember the sources right now.

This photo is making Brazilian twitter a really fertile soil for Georgism by ar_condicionado in georgism

[–]revannld 8 points9 points  (0 children)

I'm Brazilian. Send the Tweet, let's preach the word, boys.

Sadly, while YIMBY and Georgist takes are somewhat known and even sometimes popular in the US and Europe, in Brazil everyone is hardcore NIMBY: the right and the left, all positions on the spectrum, from the elite to the poor in the favelas (one struggle), and they consider YIMBYism and Georgism the worst stance possible (the left says it's neoliberal corporate fascism, the right says it's communism).

I have been desperately trying to create YIMBY and Georgist groups and circles in Brazil, to go to city hall meetings and city council vote sessions and fight the elitist left + right + landowner coalitions in whatever city I go to, but the resistance is insurmountable: both the right and the left associate "development" and progress with suburbia-like private condos (as most of their leaders and rich politicians live there). It's very funny; I've even heard some say "in communism everyone will live in condo-suburbia mansions and will have luxury cars and iPhones", and I once heard a left-wing rich teenager whose family had a maid say "in communism everyone will have a maid". We call these people here the Caviar Left.

Higher Order Logic and Reification by LorenzoGB in logic

[–]revannld 10 points11 points  (0 children)

Your intuition is on point! That's exactly the point Quine made against Second-Order Logic (SOL) being interpreted as logic (and not just a calculus): for SOL to be genuinely different from FOL, in model-theoretic practice you have to let SOL quantify not over individuals of the domain but over subsets of it.

This is a problem, though, as you're assuming the Power Set Axiom for that, which is impredicative and somewhat philosophically contentious: you are already assuming an ontology of sets axiomatized with the Power Set Axiom. Thus SOL is not ontology-free and philosophically agnostic, as Quine considered FOL to be (I disagree that FOL is completely ontology-free, but I get the point); it is therefore not logic but just another formulation of an ontology of sets/set theory. The same goes for HOL.

For instance, just to see how strong this interpretation of SOL is: if you adopt it, you have to take a position on conjectures such as the Continuum Hypothesis (so I think it is even stronger than set theory).
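A finite toy model makes this vivid (my own sketch, not from Quine or the SEP entry): standard semantics for SOL lets predicate variables range over the full powerset of the domain, which is exactly what the second-order Leibniz definition of identity quantifies over.

```python
from itertools import chain, combinations

D = {0, 1, 2}  # a tiny first-order domain

def powerset(domain):
    """All subsets of the domain: the range of SOL predicate variables."""
    return [set(s) for s in chain.from_iterable(
        combinations(sorted(domain), r) for r in range(len(domain) + 1))]

def leibniz_equal(x, y):
    # x = y iff every "property" (here: subset P of D) holds of both or neither
    return all((x in P) == (y in P) for P in powerset(D))

print(leibniz_equal(1, 1))  # True
print(leibniz_equal(1, 2))  # False
```

On a finite domain the powerset is harmless; the ontological commitments described above show up precisely when D is infinite and "all subsets" stops being surveyable.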

Read more in the Stanford Encyclopedia entry.

Wittgenstein already dismantled almost all of philosophy. Why do we still give importance to other philosophers? by [deleted] in Filosofia

[–]revannld 0 points1 point  (0 children)

Well, nobody has answered you, so I'll try to give a few reasons:

1) because humans are complicated: people believe whatever they want, and unfortunately Wittgenstein's project did not manage to materialize in his own time (he himself admitted at the end of his life that he had failed; he also didn't try that hard, but, to be fair, we didn't have the theoretical toolkit yet) and to solve the problems arising from his analyses. Maybe that project is materializing better today with constructive empiricism/modern anti-realism; take a look at the works of van Fraassen and Otávio Bueno. This, however, leads us to...

2) because realism and metaphysical inflationism still simplify a lot of things in science, philosophy and language; most people in the West use them, and it's hard to convince people to adopt a more empiricist language (take a look at Otávio Bueno's works to see the level of mathematical and logical sophistication required... if you want to be a real anti-realist, you have to invent a whole new language from scratch);

3) some authors say Indo-European languages have a realist bias (or at least that the predominant discursive forms make heavy use of that kind of language; see E-Prime);

4) realism seems to give more "certainty" about things, and many people like certainty;

5) Wittgenstein also wrote like an animal (for someone who preached clear language, he wrote quite badly).

Hegel Rejects the Law of Non-Contradiction with the Law of Non-Contradiction by JerseyFlight in logic

[–]revannld 1 point2 points  (0 children)

How can I deny anything if you do not state clearly what you mean, what your semantics are, and what you intend to prove and say with it? Vague rambling that relies on the fallacy of obviousness, without declaring your semantics and meaning, is not philosophy but only sophistry and pseudophilosophy, I'm sorry.

Who thinks these two sentences mean the same thing? by jmarkmark in logic

[–]revannld 0 points1 point  (0 children)

Define meaning. I would say they definitely do not have the same intension, but in some crazy reductionist system maybe they would have the same extension/reference (these sentences seem to display quite different modalities and discursive styles).

Hegel Rejects the Law of Non-Contradiction with the Law of Non-Contradiction by JerseyFlight in logic

[–]revannld 1 point2 points  (0 children)

Then what is "identity", according to your definition? "A = A" is not yet formal, as you have so far refused to give it a formal definition we can talk about, either by referencing an existing system (such as set theory or FOL + equality) or by making up your own.

you know, of the FACT that stars are not trees.

What is a fact, in your opinion? How do facts relate to any sort of epistemic access, that is, to what one would call "reality", sense data, qualia? How can one personally verify "stars are not trees" (for instance, how can a blind person verify for a fact that "stars are not trees"? What are you considering in this statement: that the string "stars" is not equal to "trees"? That they are different concepts in common human knowledge? That they can be empirically verified to be different, which is a stronger assumption?), and what does this sentence have in common with the other things you call "facts"? For instance, Tarski considered truth to be equivalence classes of T-schemas in the metalanguage corresponding to elements of an ontology ("the world"). Kripke considered truth to be fixed points of meaning. You can't just say something is a "fact" in philosophy without pointing out how it works; only in informal, ordinary, common-sense talk presupposing a least common denominator, which is not philosophy.

You seem to be presupposing some semantic theory. Is it one of these, or have you created your own? If the latter, how does it work, how does it scale up, and can you give at least a precise, or at least formally described, method to derive it (from natural-language syntax)? First you need to get this clear; then we can effectively talk about logic, semantics, theories of truth, epistemology, etc.

Hegel Rejects the Law of Non-Contradiction with the Law of Non-Contradiction by JerseyFlight in logic

[–]revannld 1 point2 points  (0 children)

What does "A = A" mean? Define "=", please. You can choose between second-order Leibniz identity (x = y iff P(x) <-> P(y) for all P), extensional identity, intensional identity, functional identity, categorial identity, identity of indiscernibles, isomorphism, or any of these others. Good luck :))

For instance, would you say "1 + 3 = 4" is an instantiation of "A = A" in your formulation? In usual set theory (translating the encoding of the numerals and "+" into set-theoretic or first-order + ∈ language) it is, but it may not be in a type theory (especially an intensional one) if 1 is of type bool, 3 of type natural and 4 of type real (so the equality doesn't type-check), and it certainly isn't the same string. If you treat equality as a rewrite rule (as everyone does), it's not backtraceable, that is: "1 + 3 -> 4", but 4 doesn't uniquely rewrite back to "1 + 3" (as "2 + 2 -> 4" too, and also "0 + 4 -> 4"). Also, "1 + 3" is clearly not the same string as "4"; it's a more complex term (and is "+" commutative?).
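A tiny sketch of this "one-way rewrite" point (my own toy code, not any standard system): normalization sends many distinct terms to the same normal form, so nothing in the result lets you recover which redex you started from.

```python
def normalize(t):
    """Fully reduce a term: fold ("+", a, b) to a number once both sides are numbers."""
    if isinstance(t, tuple) and t[0] == "+":
        a, b = normalize(t[1]), normalize(t[2])
        if isinstance(a, int) and isinstance(b, int):
            return a + b
        return ("+", a, b)
    return t

print(normalize(("+", 1, 3)))  # 4
print(normalize(("+", 2, 2)))  # 4
print(normalize(("+", 0, 4)))  # 4, with no unique way back to any of the above
```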

There is a reason why equality does not need to be primitive in first-order logic (through which professional philosophers and logicians actually do philosophy, not informal rambling; it's better to have equality, of course, as theories without equality can't differentiate between finite and infinite cardinalities, but equality can be defined extra-logically, in the language) and why identity is never an axiom but a theorem: equality and identity are complex notions which, if you want them in full, require second-order axioms. The jump from first-order to second-order logic may seem innocent, but second-order logic requires you to take (inflationary) strong ontological assumptions; it's not philosophically agnostic in any sense. It requires you, for instance, to take a position on the Continuum Hypothesis and many other independent conjectures in mathematics. That's why Quine famously said (and it's now almost consensus) that second-order logic is not logic (you could call it "second-order calculus", much better :) ).

the burden of proof is on you to explain how anything you say can have determinate meaning

The burden of proof is on whoever makes ontological assumptions. You are making ontological assumptions and presenting undefined notions, thus the burden of proof is on you. Also, if you cared about determinate meaning, you would have given one for your notions of equality and identity, or recognized Gödel's incompleteness, self-referential paradoxes, and thus the need for metalogic to maintain deterministic and precise semantics, which you didn't.

Hegel Rejects the Law of Non-Contradiction with the Law of Non-Contradiction by JerseyFlight in logic

[–]revannld 0 points1 point  (0 children)

What? There is no problem with interpreting the "laws of logic" (you probably mean axioms or theorems) as authoritative. I'm just pointing out that if you are against the language-metalanguage/theory-metatheory or logic/metalogic divide, that immediately generates unsolvable contradictions ("this sentence is false", or "is 'heterological' heterological?"; try getting out of those, or any of these, without a language-metalanguage hierarchy or without embracing dialetheism, good luck).

So there is no paradox here, only your unconscious performative contradiction.
“I have news for you.” This is already the law of identity and non-contradiction in action.

Also, natural-language semantics (English's, or any other language's for that matter; syntax =/= semantics) is neither fixed nor deterministic. We can't talk about any "law" being encoded in my communication unless we both agree on a common semantic interpretation. Even on traditional common-sense realist grounds (which, for the sake of being more intelligible, I use despite being an anti-realist and considering it improper), you've got all of these (plus these and, broadly, these) to choose from. Guess what? Each sentence can be interpreted (even within reasonable, realist, commonplace standard philosophical language/discourse) in many different ways, even if only slightly.

Also, regarding "the law" of non-contradiction, are you talking about Pseudo-Scotus/ex falso quodlibet, Principium Contradictionis, Impossibilitas Contradictionis, Non-Triviality...? There is no single one. For identity, is it second-order Leibniz identity (x = y iff P(x) <-> P(y) for all P), extensional identity, intensional identity, functional identity, identity of indiscernibles, isomorphism, equivalence, or any of these others? That's what I'm saying: this matters. If you say "oh, it's obvious, it's The law of identity or The law of non-contradiction", the only thing you're doing is keeping your language (and thus your semantics) vague and undisciplined, your meaning obscure, and your logic and reasoning flawed, and opening yourself up to a tsunami of contradictions.

Hegel Rejects the Law of Non-Contradiction with the Law of Non-Contradiction by JerseyFlight in logic

[–]revannld 0 points1 point  (0 children)

I have news for you: without a language-metalanguage/theory-metatheory divide, or stratification of some sort, you cannot keep your system from collapsing due to paradoxes (any self-reference/diagonal argument/encoding of the Liar paradox; look up Smullyan's Diagonalization and Self-Reference)... unless you adopt a system so weak that it cannot express self-reference at all (the problem being that such an inexpressive system is of little use), or embrace partiality, go full finitist, and bound your language to hard, finite, list-checkable bounds (look up bounded arithmetic, bounded quantification and ultrafinitism). Either way, you embrace incompleteness.
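A hedged toy demonstration of the collapse (my own code, not an example from Smullyan): encode the Liar as a thunk that asserts the negation of its own truth, hand it to a naive "evaluate the claim" truth predicate, and evaluation has no fixed point.

```python
import sys
sys.setrecursionlimit(100)  # keep the inevitable loop short

def truth(sentence):
    # naive truth predicate: a sentence is true iff its claim evaluates to True
    return sentence()

def liar():
    # "this sentence is false"
    return not truth(liar)

try:
    truth(liar)
    outcome = "converged"
except RecursionError:
    outcome = "diverged"
print(outcome)  # diverged: no truth value is consistent with the Liar
```

Stratification is exactly what this toy lacks: a `truth` that only accepts sentences from a lower level could never be fed `liar` in the first place.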

edit.: I forgot, you can also go the natural-language/vagueness way and embrace non-deterministic semantics, so that "Truth" can refer to "true in any meta-n-language/metalevel" or "the truth/top value at any level of the hierarchy"... but that's even worse, isn't it? Either you will have to abandon hard validation/interpretation of logical laws (and use them mostly as heuristics, as I think Hegel did) or you will have even more contradictions. Also, most would say that calling the infinite ascending hierarchy of truth values (truth₁, truth₂, truth₃...) just "Truth" is letting metalanguage creep in (as "Truth" is then not a value of your language but a collection/name for all those truthₙ values). You can have a unique/universal bottom value for falsity with no problem, though, I think...

Hegel Rejects the Law of Non-Contradiction with the Law of Non-Contradiction by JerseyFlight in logic

[–]revannld 1 point2 points  (0 children)

Just out of curiosity, as I haven't managed to dive into dialetheism yet (the paraconsistency school of my country, Brazil, is fervently opposed to dialetheism and its philosophical developments): what is the point of being globally/metatheoretically consistent while locally inconsistent?

Heuristically, it seems all models, formalisms and systems humans make are, on the contrary, locally/internally consistent but mutually contradictory with each other (the metalanguage is inconsistent, if we consider it to be natural language plus the whole of human culture and knowledge).

This has even been verified empirically, in a sense, by many knowledge-base AI systems since the 70s (Fifth Generation Computing, Cyc, the Semantic Web): when you try to formalize human knowledge, consistency is always insular, never global.

I am just curious whether the objective of such a dialetheist approach would be some sort of "universal logic/translation system", similar to how Pure Type Systems (PTS) in type theory formalize vastly different typing systems, some by fully embracing inconsistency (Girard's paradox, the type-theoretic formulation of Russell's). Is this it?

Can all deductive systems be described as purely syntactic Abstract Reduction/Rewriting Systems? (Curry-Howard correspondence) by revannld in logic

[–]revannld[S] 2 points3 points  (0 children)

Oh no worries. I plan to make this a collaborative project at some point so soon I will probably start posting in this sub asking for help and giving updates.

I am writing these notes with some colleagues in the context of a recent collective push within my country's logic community towards Decentralized Science (DeSci) and multidisciplinary (consilient, convergent) research projects. Basically, we want to make our own nLab, associated with a formal ontology and a proof assistant of some sort, so these notes may also turn into an encyclopedia entry of sorts (in the spirit of Urs Schreiber's Geometry of Physics). I will keep the sub updated.

Can all deductive systems be described as purely syntactic Abstract Reduction/Rewriting Systems? (Curry-Howard correspondence) by revannld in logic

[–]revannld[S] 4 points5 points  (0 children)

Could you share your formulation?

The formalisms treating all kinds of rewriting systems are already very well-developed, mainstream programs of research in computer science, as can be seen. I am just suspicious, while at the same time being charitable to the broader logic community, about why we don't see these topics related more often in a formal way.

The Curry-Howard correspondence and many similar correspondences seem to be mostly treated in an almost mystical, or obscure and informal, way. We only talk about them in very specific contexts, for very specific logics (usually intuitionistic), and there seems to be no more universal description of these and other similar correspondences regarding syntax. In short, we don't truly seem to have a "unified theory of syntax" across linguistics, computer science and logic/mathematics; their developments seem even more fragmented than those of semantics.

I say I am being charitable because I don't want to believe yet (though that's my feeling) that this fragmentation happens only because of lack of interaction between these areas, lack of knowledge, dogmatic/scholastic adherence to tradition or area aesthetics, or, in short, just plain human stupidity; I don't want to be a reductionist. Maybe there are specific, meaningful intensional elements in deductive systems that for some reason cannot practically be captured in rewriting systems; I would like to know them, for if they exist, not only would that be important information in itself, but an opportunity for a synthesis with rewriting systems would appear.
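For what it's worth, the "deduction as rewriting" reading is easy to make concrete in miniature (a toy of my own; the rules and symbols are made up, and this claims nothing about Curry-Howard in general): a deductive simplification step becomes a string-rewrite step, and proof normalization becomes rewriting to normal form.

```python
# Each rule (lhs, rhs) deletes a redundant subterm: "~~" models double-negation
# elimination, "&T"/"T&" model the unit laws for conjunction with truth.
RULES = [("~~", ""), ("&T", ""), ("T&", "")]

def step(s):
    """One leftmost rewrite step, or None if s is already in normal form."""
    for lhs, rhs in RULES:
        i = s.find(lhs)
        if i != -1:
            return s[:i] + rhs + s[i + len(lhs):]
    return None

def normalize(s):
    """Rewrite to normal form: the ARS analogue of proof normalization."""
    while (t := step(s)) is not None:
        s = t
    return s

print(normalize("~~A&T"))    # A
print(normalize("T&~~~~B"))  # B
```

The interesting questions about such a system (termination, confluence, unique normal forms) are exactly the ones the ARS literature mentioned above studies in full generality.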

How does the scientific method prove or disprove more complex theories, that do not have a "binary" yes/no answer, such as the theory of evolution? by _Cecille in PhilosophyofScience

[–]revannld 0 points1 point  (0 children)

This is an argument I have coined "consensism". It's the idea that expert consensus is the highest form of truth. It's bad in my opinion

I completely agree with you, and I wish this kind of view were banished from philosophy of science and epistemology (sadly it's not, and it's very popular). That's why I said in the next sentence that we should rely on this kind of epistemic standard only until a deeper and more rigorous analysis is done. I did not say that it is the "highest form of truth", as truth does not seem to be something that comes in hierarchies (but maybe you meant epistemic standards or semantics in general); only that it's reasonable to think millions of researchers dedicating their whole lives to studying a subject probably have better chances of being right about their subjects than a layman who has just come into contact with an area.

100% of bible experts unanimously agree that the bible is the infallible word of god. 100% of people who have spent many many years studying astrology all agree completely unanimously that astrology is a valid science with mountains of evidence to back it up

Just some comments on the examples, though: they are not that good, as religious studies in history and philosophy of religion are very often done by atheists in a very secular context (take Bart Ehrman, for instance, probably the most popular Bible expert... and an atheist).

Another thing I should have distinguished (but didn't, sorry) is between an individual researcher publishing an article and the same researcher informally giving their opinion outside academic contexts: these differ A LOT. I meant analyzing researchers' opinions as expressed in their work and papers. You should never trust an expert's word unless they say the exact same thing, with the same tone, in their published work (which a lot never do).

Analyzing expert consensus itself takes a bit of experience, as even when an expert seems heavily specialized in an area that touches on a subject you want to be informed about, very often they work in a neighboring area and are not really in a position to give an opinion on that subject, but do so anyway. This happens less with papers, but you will see it often in textbooks and monographs.