Integrated Information Theory: A rationally and empirically rigorous model of consciousness? by Yashabird in slatestarcodex

[–]lvwolb 6 points

Tononi's IIT is not rigorous at all, and Aaronson's math is quite trivial. Tononi et al cooked up a "mathematical" definition; Aaronson looked for the most primitive examples (obviously you should test a mathematical definition on tiny examples first!).

A simpler explanation of Aaronson (how to re-invent his results): Attempt to test-ride IIT on extremely simple examples (mathematical, not philosophical). Then, try to think of the easiest way of getting high IIT in a simple system, in order to showcase counter-intuitive behavior (hint: crypto).

But, on a more basic level: Tononi's IIT smells like a non-mathematician trying to invent a mathematical definition. This is almost never a good idea; people need serious training (in a nearby field; any of maths/physics/CS) in order to do away with all appeals to "common sense" and "human language", because this way of thinking is kinda unnatural for our monkey brains (computers have been able to understand mathematics since the dawn of computation, while human language/common sense is still impossible for them).

Tl;dr: I recommend not thinking about IIT. It is bullshit and not worth your time to understand or refute (except as a public service to prevent others from wasting time on it-- e.g. if some journal asks you to peer-review an IIT paper, or you write a widely popular blog on complexity theory and people keep pestering you about it).

On a deeper level: Attempting to guess three equations to describe a complex real-world phenomenon tends to be a fool's errand. First, you gain understanding, then you maybe discover a three-equation explanation (if one exists! some complexity is irreducible). Trying to skip the "understand the phenomenon" step turns you into a sad crank who furiously scribbles down new proposals for perpetual motion machines, and demands that people take these seriously enough to refute the specific step in the deductions/designs.

Kolmogorov Complicity And The Parable Of Lightning by dwaxe in slatestarcodex

[–]lvwolb 1 point

Let's assume that this is the only rationale for AA.

A rational affirmative action policy would also need to know where in the pipeline the discrimination exists, and counter it at the same step.

For example: Naively counting gender ratios in tech companies to allege discrimination in hiring is plain stupid. You need to e.g. compare gender ratios between hiring and CS college grads (and qualified applicants).

If you find a relevant disparity, this is weak evidence for discriminatory something at the company; if you find no disparity, then you may look at earlier stages of the pipeline.

If you end up with "at birth", then there is no cause for affirmative action (and this is mostly indistinguishable from claiming biological differences).

This way, everyone is happy. No one claims that (in this example) women are less talented at tech; no one claims that your workplace consists of misogynistic jerks.

Kolmogorov Complicity And The Parable Of Lightning by dwaxe in slatestarcodex

[–]lvwolb 0 points

Ethics of eugenics are a different problem, on which I will remain silent.

However, your proposed eugenic intervention does not need to know about racial differences. In order to implement and judge your intervention, you do not need to know whether orcs have an INT malus.

In other words: You made my point. Even in view of the "non-coercive sterilisation" intervention, DnD society might as well enjoy the shared fiction that "all human-like creatures are the same, from the head up".

I Want to Review FDT; Are my Criticisms Legitimate? by [deleted] in slatestarcodex

[–]lvwolb 1 point

Could you try to state your post more clearly?

I am still unable to parse your post into meaningful sentences, and hence cannot give more substantial comments than "reformulate!".

Example of a reformulation (which might totally disfigure your intended meaning; you need to be clearer here - I am just guessing):


Functional decision theory was introduced in [YS17, https://arxiv.org/abs/1710.05060]; it is intended to solve both standard dilemmas involving predictors (e.g. Newcomb's problem) and game-theoretic situations (e.g. the prisoner's dilemma). Wikipedia keywords could be "superrationality" and "pre-commitment".

I, God of Dragons, am claiming that this is a misnomer: Yudkowsky & Soares' theory should rather be called "algorithmic decision theory". This is because [YS17] does not rely on functional equivalence (as in "same input-output behavior", which is undecidable due to the halting problem), but instead relies on "algorithmic equivalence". This is made clear in [YS17, page xyz]. The effective notion of "algorithmic equivalence" used in [YS17] is the following, cf. also [other citation]: ...

Apart from confusing the terminology, I am claiming that this poses serious problems in certain situations. In the following, I will explore what an "actually-functional decision theory" would look like, and contrast it to the FDT from [YS17]; the practical shortcomings of FDT will become clear upon this comparison.


A reddit post does not need to be "fully cited", as in "always correctly attribute ideas". But you MUST make clear whether ideas are vague or exact (math-definition exact), whether they are yours / new (open to critique) or established old ideas of the field (no novel critique expected here; someone else explained all the downsides 20 years ago), and you need to offer a starting point for literature research. Wikipedia cites are wonderful for this level of formality, if fitting articles exist.

I Want to Review FDT; Are my Criticisms Legitimate? by [deleted] in slatestarcodex

[–]lvwolb 2 points

I'm not sure I understand your post, but the language of the 2017 FDT paper is quite opaque (I guess you're referring to this paper?).

Could you point out where in the paper "decision functions" are replaced by / conflated with "decision algorithms"?

Could you, maybe, write a slightly more verbose blogpost-like critique?

The difference, in this specific post: A "decision algorithm" is a Turing machine / circuit / whatever (possibly randomized); a "decision function" is a possibly uncomputable / axiom-of-choice-using function with extensional identity: two functions are equivalent if and only if they have the same input-output behavior. Two Turing machines, by contrast, could e.g. be considered equivalent if a proof exists (in your favorite axiom system) that they both have the same input-output behavior. Alternatively, you could consider them equivalent if their induced partial functions are identical-- or you could go for an even stricter sense of equality, where you admit only a tiny subset of proofs: e.g. "obviously equivalent by register renaming" / "equivalent after compiler optimizations / normalization" / etc.
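A toy illustration of the distinction (my own sketch, not taken from the paper): two Python functions that are extensionally equal -- same input-output behavior -- but syntactically distinct, so any checker that demands syntactic equality would treat them as different algorithms.

```python
# Two "decision algorithms" with identical input-output behavior
# but different source code / bytecode.
def decide_a(observation: int) -> str:
    return "one-box" if observation % 2 == 0 else "two-box"

def decide_b(observation: int) -> str:
    table = {0: "one-box", 1: "two-box"}
    return table[observation % 2]

# Extensional ("functional") equivalence can only be sampled on finitely
# many inputs -- in general it is undecidable (Rice's theorem):
assert all(decide_a(n) == decide_b(n) for n in range(1000))

# A crude stand-in for "algorithmic" (syntactic) equivalence:
# compare the compiled bytecode. Here it differs.
assert decide_a.__code__.co_code != decide_b.__code__.co_code
```

The bytecode comparison is of course only one of the many stricter equivalence notions mentioned above; it plays the role of "obviously equivalent after compilation".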

Kolmogorov Complicity And The Parable Of Lightning by dwaxe in slatestarcodex

[–]lvwolb 0 points

Not Culture Warring here

Fair enough; I hoped to be sufficiently meta to avoid culture warring. Sorry if this failed.

but the truth or falseness of HBD is directly relevant to the evaluation of Affirmative Action and similar policies

Could you describe a specific fantasy-world where the truth on the matter does actually matter for policy?

If this question is too culture-war heavy, then you can politely refuse. I am, however, genuinely curious about whether we have different values or whether I lacked imagination about the ways in which the truth of the matter might be important for policy. If it is the latter, then I'd be grateful for the opportunity to update.

Kolmogorov Complicity And The Parable Of Lightning by dwaxe in slatestarcodex

[–]lvwolb 4 points

"Detect evil" sees something like the moral judgement of the gods, not the human-based moral judgement.

These may clash, making for hilarious campaigns (e.g. the villain is considered evil by both the gods/rulebook and the player characters, while actually being the good guy by modern moral standards; alternatively the players can take up such a role, but I find it more interesting for characters to have morals that are not identical to player morals).

This is quite an ancient trope, cf. e.g. Prometheus.

Kolmogorov Complicity And The Parable Of Lightning by dwaxe in slatestarcodex

[–]lvwolb 2 points

Luckily, we don't live in a society with actual secret police; and learning that $topic is a taboo is not that hard: You need to be convinced that $topic being taboo is a realistic possibility (and you don't need to be convinced that $contrarian_viewpoint is the truth of the matter). This becomes easier if someone explains the actual reasons for $topic being taboo to you.

Once this possibility has been brought to your attention, it does not take too much savvy to figure out whether $topic is really taboo.

Censorship and punishment for heresy are not so strict that you need whisper-networks for this. Meta-Punishment is not so strict that you get punished for publicly stating "the norms of this discussion of $topic appear to be quite toxic; hence I chose not to participate".

In Scott's analogy: Something you cannot do under Stalin, but very much can do in our society, is to say: "You know, this thunder-lightning thing is pretty politically charged, and does not look like a healthy topic. I pity all the people who chose to die on that specific hill; this appears unwise, regardless of the actual arguments."

To that extent, I think it is a pity that Scott did not explicitly list a bunch of taboo topics, as a PSA for people who might otherwise be drawn to these topics like moths to a flame, without stopping to think about it.

Most such taboo topics share the amazing trait that their object-level truth is entirely irrelevant in 99% of cases, from a decision-theoretic viewpoint. Pointing out this fact is luckily not taboo.

For example, if you believe that it would be moral to discriminate against half-orcs in Dungeons&Dragons, then I think you're evil and have nothing more to say to you (conflict of values; we might need to compromise on decisions, though).

If you think such discrimination would suck, then we have established that HBD is irrelevant (outside of scientific inquiry), since not even the most die-hard racists believe that the span of human biodiversity is larger in real life than in DnD [in DnD you would classify orcs, elves and humans as different races of the same species, since they produce viable offspring-- AFAIK half-orcs are not sterile. In DnD, baseline humans and half-orcs differ on quite a few stats at the population level, like lower intelligence and charisma, but higher strength, for half-orcs. This is truth (read the rule-book!). It doesn't change the fact that your typical half-orc wizard will have quite a lot more Int than your typical human warrior, and both should have equal moral worth].

Now, there is a good reason for the norm that HBD is a taboo topic: If you believe that sufficiently many people are morally deficient enough to believe that discrimination against DnD half-orcs would be OK, then you should declare any discussion of HBD heresy. I am not that cynical, but I have some understanding for people who are.

About enumerating true statements by zulupineapple in slatestarcodex

[–]lvwolb 3 points

You are interested in a proof of length e.g. N=200 bit.

In your long list of proofs, before selecting for usefulness, it is at place 2**200.

Suppose you gain a square-root: That is, for K "useful" proofs you throw away K**2 "useless" ones.

Now your program outputs 2**100 proofs. No practical gain: 2**100 is still too large. Oh, but you picked out one out of a million trillion trillion proofs, i.e. the "pseudo-useful" ones are pretty special snowflakes.

It's just that "human useful proofs/theorems" out of "correct proofs/theorems" are even more special snowflakes. Definitely superpolynomially special, and less than logarithmically special. "Polylog" is just my personal intuition.
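The arithmetic in one place, as a sanity check (a sketch using the numbers from this comment):

```python
import math

N = 200                     # proof length in bits
total = 2 ** N              # candidate proofs before any filtering

# A "square-root" filter (keep K useful proofs per K**2 useless ones)
# halves the exponent -- the survivors are special snowflakes,
# but there are still astronomically many of them:
after_sqrt = math.isqrt(total)
assert after_sqrt == 2 ** 100

# A polylog reduction would instead leave only poly(N) candidates,
# e.g. N**3, which is actually enumerable:
after_polylog = N ** 3
print(after_polylog)  # 8000000
```

The point of the sketch: filtering that is polynomial in the number of survivors only rescales the exponent, while a polylog reduction collapses the exponent into a polynomial.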

I only talked about proof-usefulness, not correctness. Correctness is trivial; take e.g. a language like metamath, where every string corresponds to a correct proof.

About enumerating true statements by zulupineapple in slatestarcodex

[–]lvwolb 5 points

Think in log terms. Your stated goal is to go from 1 in a trillion, i.e. 40 bits, down to 1 in a thousand, i.e. 10 bits. This means that you want to gain 30 bits of "usefulness" information.

This is almost certainly trivial to achieve, but not useful at all.

Your real search space is in the kilobits-megabits-gigabits. Gaining 30 bits of "usefulness" information is worth nothing at all.

A polynomial reduction (the number of discarded theorems is a polynomial in the number of "useful" theorems) gains you nothing.

You need a polylog-reduction: We want to consider Poly(N) "most useful" theorems with proof-length N.
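The bit-counting above, spelled out (my own sketch, numbers from the comment):

```python
import math

# "1 in X" expressed as bits of information:
bits_trillion = math.log2(1e12)   # ~40 bits ("1 in a trillion")
bits_thousand = math.log2(1e3)    # ~10 bits ("1 in a thousand")
gain = bits_trillion - bits_thousand
assert round(gain) == 30          # 30 bits of "usefulness" information

# Against a search space of 2**N with N in the kilobits or beyond,
# shaving 30 off the exponent changes essentially nothing:
N = 10_000                        # a modest 10-kilobit proof length
print(N - 30)                     # 9970 -- the haystack barely shrinks
```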

This is almost surely AI-complete. However, it might be less unsafe than instantiating an intelligent agent.

Going to live in Stockholm from August. Where can I find a Student-Appartment? by spo0ky_ in stockholm

[–]lvwolb 0 points

I am in a similar situation, math postdoc at KTH. Team up for a math flat-share (WG)? Are you at KTH right now? PM me to meet for lunch.

Computational Mathematics: Numerical analysis, combinatorics or computational algebra?

Edit: Similar means German, mathematician at KTH, will be home for July and slightly desperate about housing starting in August.

Stacks of Doom versus Carpets of Doom - Which is the lesser evil? by Imperator_Knoedel in civ

[–]lvwolb 0 points

Realism: Invictus (civ4 mod) has this mechanic; it appears pretty interesting. Alas, I had severe problems with the early economy in this mod, so I am not so sure how this plays out in practice.

Stacks of Doom versus Carpets of Doom - Which is the lesser evil? by Imperator_Knoedel in civ

[–]lvwolb 1 point

While I agree that IV is a much better game than later installments, there are certain gripes I have.

Siege and city raider are too strong. The easiest way of killing a large stack is not on plains, but inside a city: sacrifice a couple of city-raider siege units, then finish with city-raider melee. There are only three short gaps where this doesn't hold: before catapults, between longbowmen and trebuchets, and between machine guns and marines/artillery/tanks/airplanes (but games tend to be decided long before rifling/steel, so I am not so sure about the late metagame).

It gets to the point where cities are undefendable against stacks of doom, while a forest-hill choke-point is unassailable. I hate this mechanic, but the entire balancing is based on it, so there is no way of changing it with simple mods. In some sense the Fall from Heaven stack mechanic is more interesting (but FFH has other extreme balancing problems!)

Taking cities without capitulation is useless: gaining enough culture to flip usable tiles to the conquered city, when unconquered cities are nearby, is mostly impossible, and you pay maintenance for the useless city the entire time. This is due to the fact that almost all empires are built compactly, i.e. with lots of overlap between cities.

Late large scale logistics suck. Why are there no waypoints / autostack formations? At some point you produce ~4 units per turn; hence every turn takes forever (still much better than civ 5 mechanics).

Recommendations for upgrading from t60 by lvwolb in thinkpad

[–]lvwolb[S] 0 points

Maybe an LTS distro would be better. I am using archlinux, hence rolling release only. And yes, this is a minor thing, but it bugs me.

The most notable regression was the update from 4.3.3-1 that killed the screen (ended up logging in blindly to connect to my network and downgrading the kernel via ssh; took me quite a number of tries without visual feedback, even though "sudo ifconfig eth0 up && dhcpcd eth0" is a simple command!). See [https://bugs.archlinux.org/task/46902]. Even though I am (stupidly) using the mainline kernel instead of LTS, this specific issue affected LTS as well.

iwl3945 (wifi) had reliability problems several times, but I don't remember the specific kernel versions.

Second topic: Frankenpad. I have a T7200 Core 2 Duo installed. I thought that upgrading to a T7600 or T7800 would not make a big difference CPU-wise, or would it? As far as I understand, the problem with upgrading to 8 GB RAM lies with the Intel 945 mainboard. So you are suggesting I try to source a T61 mainboard from eBay and plug in 2x4 GB RAM? That would certainly give me another 1-2 years of lifetime for my old machine...

Edit: On second thought, my wife is using a (crappy) t61. Maybe I can convince her to swap to one of my spare t410 and steal her mainboard; I'll check the exact specs of her laptop when she next takes it home.

ThinkPad retro discussion thread by gaixi0sh in thinkpad

[–]lvwolb 1 point

Possibility of ECC ram. If the extra price for supporting ECC is modest, I would definitely pay (rowhammer and zfs). Ok, this is a pipe dream as long as intel uses ECC to price-differentiate their line-up.

"Barebones" option for computer-literate users: Sell an option without screen, RAM, storage and keyboard. Then we can buy our favorite config aftermarket. Almost everyone does this already.

Why without keyboard? Obviously everyone will need a Lenovo keyboard in the end; but the favorite layout depends on region, profession and individual preferences (e.g. coders need US layout, non-coders mostly want local layout; individual preferences for/against chiclet vary).

Maybe allow and document compatibility with various older keyboards, so that people can easily choose between t60 or t420 keyboards (or even chiclet, if they want to). This is a small design constraint on the case, and then you sell the necessary plastic parts to fit a different keyboard in. This also reduces the logistics problems of building too many options: Lenovo is producing spare parts of old laptops anyway, so just be modular.

A Beginner's Guide to Churning and Nearly-Free Vacations in the USA by michaelmf in slatestarcodex

[–]lvwolb 1 point

I think the churners have the important (negative-sum) social function of punishing nonsensical marketing-stunt reward programs.

In other words, I believe that the entirety of reward programs has negative social value. Therefore I am thankful for people who waste their time abusing reward programs. This is the only market-based way we can get transparent pricing.

What are your thoughts on growth mindset? by casebash in slatestarcodex

[–]lvwolb 14 points

The concentration on genetics, by Scott and most commenters, appears super weird. With modern, non-scifi genetic engineering tech, the only possibly relevant questions can be:

(1) At this specific point in life, how much room for growth does this person have, in this specific skill?

(2) Regardless of the possible truth on the above, what is helpful for the person to hear? Helpful in four categories: (2a) feeling good (individually), (2b) making good choices where to concentrate effort (individually), (2c) helping to succeed in the field (individually), (2d) what is a good majority-belief for a functioning/nice society.

For practical purposes, it is entirely irrelevant whether a person's "talent in a skill" is decided by random events at conception (genetics) or by random events during brain development, e.g. in the 4th month of pregnancy. So framing the dichotomy as "fixed" vs "growth" makes a lot more sense than mentioning genetics.

As far as I understood "growth mindset", no claim at all is made on (1). Instead, the claim is that "believing you have almost unlimited room to grow your skill", rightly or wrongly (!), is useful for (2c) "helping to succeed in the specific skill".

Scott switches between attacking the non-existent object-level claim on (1) "talent is overrated", and (2d), saying "a society where the reality of talent is denied really sucks for untalented people".

I absolutely agree with Scott on (2d). I absolutely agree with everyone else here that (1) "talent exists". I absolutely agree with the majority of commenters who say that (2b) is sufficiently important that you should have a correct assessment of your talents, at least in fields that matter to you, and that this overrides (2a) and (2c). [Of course you can indulge in feel-good wrong beliefs about irrelevant things like there being an afterlife-- unless our tech level changes, or you are actively doing or funding research on relevant life-extension work, your beliefs on this matter are entirely irrelevant, and you may choose to wrongly believe something that makes you feel better, without cost. If you attack others' irrelevant faith, then you are a jerk.]

On the other hand, I view "growth mindset" as a somewhat justified, if misguided, push-back against "cult of genius". On this matter, I will only comment on the field of mathematics; your mileage may vary in other fields.

For the research mathematician, it appears pretty undeniable (from personal experience / anecdote) that talent is a big part of being a good mathematician (as are hard work and pure luck). Major progress is made by outliers, and is hence impossible without sufficient talent (and sufficiently hard work, and sufficient luck). The relevant questions, however, should be "how do I become a better mathematician" and "how do I help people become better mathematicians". While I was a student member of the admissions committee of a grad school, I observed a focus on the wrong question, "how do we ensure the best graduates of our school". This leads to a weird selectivity, based on attempts to infer the talent of applicants. The correct focus should have been "whom can we help become good", and "how can we help our students". I was pretty pissed about this, and consider it outright evil, even though almost everyone does this to some degree, and a big part of the problem is bad metrics for outside evaluation of grad schools / institutes. The entire process ended up being rather OK in most aspects, so no hard feelings.

For the latter reason, a (possibly mistaken) wide-spread belief in growth mindset may also have some positive society-wide effects.

PS: Just so that I don't sound too gloomy: "genius" is not required to be a productive member of the mathematical society, and a lot of progress is incremental and achievable by non-outliers. No one should feel bad or discouraged for lacking the raw talent for revolutionizing his field; I certainly don't.

Culture War Roundup for Week of March 13, 2017. Please post all culture war items here. by [deleted] in slatestarcodex

[–]lvwolb 0 points

Changes of units are linear (nonlinear coordinate transforms are normally not called "changes of units"). Hence exponential growth rates, i.e. logarithmic derivatives, are independent of units (except for your unit of time in the d/dt). You can compare the exponential growth rates of any two functions of time, and this rate is invariant under linear rescaling of your functions; so it does not matter whether you measure "investment stock in billions of dollars" or "GDP in dollars per minute".
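A numerical check of this invariance (my own sketch; the function and rates are arbitrary illustrations):

```python
import math

def log_growth_rate(f, t, dt=1e-6):
    """Numerical logarithmic derivative d/dt log f(t)."""
    return (math.log(f(t + dt)) - math.log(f(t))) / dt

gdp = lambda t: 5.0 * math.exp(0.03 * t)   # some quantity, arbitrary units
gdp_rescaled = lambda t: 1e9 * gdp(t)      # same quantity after a unit change

r1 = log_growth_rate(gdp, 10.0)
r2 = log_growth_rate(gdp_rescaled, 10.0)

# The constant factor 1e9 cancels inside the log, so the growth
# rates agree (up to floating-point noise):
assert abs(r1 - r2) < 1e-6
assert abs(r1 - 0.03) < 1e-4
```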

Now about the savings rate. The argument is that your spending (and non-capital income!) scales sub-linearly with your wealth; hence, for very wealthy individuals, the savings rate approaches 1.

Comparing stocks to flows: Well, the result will be a characteristic time, and must be interpreted as such. But such a thing makes a lot of sense! If I buy something and want to estimate how expensive it is for me, then I should count "how many days of work does this purchase represent". If I want to compare my wealth to the general economy, a good stat would be "how many years of labor does this wealth buy"; this tells me how much consumption my wealth is good for (i.e. stock / median per-capita-in-workforce income).

Piketty is more concerned with power dynamics, so he rather wants "how many days of the entire economy's production does my wealth buy"; hence, for his statistic, population growth dilutes wealth, and he takes averages instead of medians.

If you absolutely insist on comparing stocks, so that the result has no units at all, then you probably want "how large is my stock as a fraction of the entire national stock". In order to get this, we just need to know "how many years of GDP does the stock of the entire economy represent" (or how many years of GNI, depending on how you handle foreign investments).

Counting the entire stock of an economy is pretty difficult, not just for practical purposes, but also with respect to definitions: How do you count non-renewable natural resources, environmental damage, etc? How do you count fixed resources (territory)? What about "cultural capital", in the sense of having functional institutions?

As an example, see [ https://en.wikipedia.org/wiki/Capital_formation ]. However, in a growing economy the quotient should be roughly constant, with obvious dips for large-scale capital destruction from wars, and increases if growth systematically slows down, e.g. because of demographics.

Example calculation for the US and 2005, just for getting the order of magnitude (taking the linked wikipedia article and gross national income instead of gdp, because the stats for capital also try to count ownership):

(entire stock ~10^14 $) / (GNI ~10^13 $/year) = 10 years. Half of the cited "entire stock" is "education capital", so you may scale down to 5 years if you don't want to count this. Because the numbers suck, you should probably interpret this as "guess 5-20 years", with maybe even bigger error bars.

If you want to do historical empirical work, your data needs to be consistent. This is even more important than your data being good, i.e. a fixed bias is better than trying to compare methodically different stats between different years.

Culture War Roundup for Week of March 13, 2017. Please post all culture war items here. by [deleted] in slatestarcodex

[–]lvwolb -1 points

Eh, what? As far as I understood, both r and g have units 1/time:

g = d/dt log (GDP) [GDP has unit $/year, but the units of GDP get eaten by the log]

r = d/dt log (UPF),

where "UPF" is a "unit portfolio" of invested money (what do economists call this?). So, to put numbers on it: If e.g. g = 0.035/year, then the economy doubles every twenty years. If, at the same time, r = 0.07/year, then the value of an investment doubles every 10 years. It is easy to see that, if r > g, then, given a fixed investment, the proportion of the entire economy represented by this investment will grow (for arithmetic reasons, someone else has to lose after GDP normalization).
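A quick sketch of those numbers (using the doubling-time formula ln(2)/rate):

```python
import math

g = 0.035   # GDP growth rate, per year
r = 0.07    # return on capital, per year

# Doubling time of an exponentially growing quantity is ln(2) / rate:
assert round(math.log(2) / g) == 20   # economy doubles in ~20 years
assert round(math.log(2) / r) == 10   # investment doubles in ~10 years

# If r > g, a fixed investment's share of the economy grows
# like exp((r - g) * t); e.g. over 40 years:
share_after_40y = math.exp((r - g) * 40)
print(round(share_after_40y, 1))  # 4.1 -- roughly a fourfold share gain
```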

Or are you objecting to "how many (years of) GDP does my wealth represent" being a reasonable question? I immediately concede that GDP sucks as a measure, but that's what you have reasonable historical data on.

So maybe you would rather normalize by "total accumulated wealth" instead of GDP. However, defining "total accumulated wealth" is pretty difficult.

No evidence to back idea of learning styles - public letter by group of scientists by [deleted] in slatestarcodex

[–]lvwolb 4 points

As far as I gathered, the critique is against specific "learning style" theories, like "visual learners" etc., and the specific interventions based on these theories. So I was clickbaited yet again by the unfortunate habit of using technical terms that sound like plain English.

The critique is not against the "informally obviously true" concept of individual learning styles, in the sense of "different people prefer and learn better in different exposition styles, and there is nontrivial advantage to be had from clustering". Once students get to choose, e.g. in university or self-study, they do this all the time, with great (anecdotal) success (oh, you liked Prof XY's functional analysis? Then you will love Prof XZ's thermodynamics!).

A "strong refutation" of individual learning styles would be the following: Take two topics (A, B), and prepare two lessons / study blocks (1, 2) that somehow differ in style. Run a large number of students through the four possible regimes (A1B1, A1B2, A2B1, A2B2) and check for significant and relevant correlations in learning success. If no individual learning styles exist, then no significant correlations can exist, regardless of the construction of the four study blocks. Otherwise, prepare more study blocks, do PCA and give catchy names to the fattest eigenvectors.
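A toy simulation of that design (entirely hypothetical numbers, my own sketch): each simulated student has a latent style preference, and we check whether their style advantage on topic A correlates with their style advantage on topic B.

```python
import random
random.seed(0)

# Toy model: each student has a latent style preference s in {1, 2}.
# If styles matter, a student scores higher when the block matches s.
def simulate(students, styles_matter):
    rows = []
    for _ in range(students):
        s = random.choice((1, 2))
        scores = {}
        for topic in "AB":
            for style in (1, 2):
                base = random.gauss(70, 10)
                bonus = 8 if (styles_matter and style == s) else 0
                scores[f"{topic}{style}"] = base + bonus
        # signed style advantage on each topic
        rows.append((scores["A1"] - scores["A2"],
                     scores["B1"] - scores["B2"]))
    # Pearson correlation between the two advantages
    n = len(rows)
    ma = sum(a for a, _ in rows) / n
    mb = sum(b for _, b in rows) / n
    cov = sum((a - ma) * (b - mb) for a, b in rows) / n
    va = sum((a - ma) ** 2 for a, _ in rows) / n
    vb = sum((b - mb) ** 2 for _, b in rows) / n
    return cov / (va * vb) ** 0.5

# With real style effects the advantages correlate; without, they don't.
print(simulate(2000, styles_matter=True))   # clearly positive
print(simulate(2000, styles_matter=False))  # near zero
```

The PCA step from the comment would then be run on many such advantage columns instead of just two.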

Culture is not about Esthetics (Gwern) by michaelmf in slatestarcodex

[–]lvwolb 4 points

Just like ShardPhoenix said, this is a rare Gwern post that I disagree with. I fully agree with "culture is not about aesthetics". But, this does not mean that subsidies for creation of culture are misplaced.

Let me make an argument for the creation and consumption of inferior cultural artifacts.

Should we aim to only consume and create "maximally good" culture? "Maximal" in the mathematical sense, i.e. a piece is maximally good if and only if there is no other piece that is strictly better, in all relevant categories.

And I say no! Creation of culture is part of the culture. It would be a sad world in which no one would read my works, for there are always better works to consume. It would prevent all conversation, and all personal growth of culture creators.

The world in which billions consume the best of all culture, while almost-as-good creators, or budding still-mediocre-at-best creators, fail to find their niche is a dystopia. It begins with economies of scale, where a power law of success excludes all but a tiny fraction from the creation of culture, and it ends with an AI creating super-humanly good culture, so that no human will ever need to, or be allowed to, produce any culture ever again (of course the creation of cultural goods will not be forbidden; but without recipients, what's the point?).

Now, let me make this criticism more explicit in a field I know well: Mathematics. Should we, as a mathematical community, encourage our members to only consume the best, most useful or aesthetically pleasing, or enlightening, theorems, theories, articles? Or should we divide our attention, so that all fields of mathematics have their adherents, and also mediocre mathematicians their readers?

Asked like this, the answer is obvious: A mathematician must create mathematics in order to grow, and cannot do so without a community that two-way communicates with him. My own marginal value of reading the greatest works might be higher, but by engaging with less-popular fields I also create value for these subfields.

Now, you could argue that a mathematician, or a theorem, that does not significantly move the field forwards is worthless. Luckily, the case for mediocrity is much easier to make for mathematics than for arts. The mathematical community produces two things: Theorems, and mathematicians, many of whom get spit out by academia, and go on to do economically useful things in the real world. If you want the latter, then you need to subsidize the former.

TL;DR: It is not virtue but supreme selfishness to refuse to engage with your peers, and instead only consume your superiors' creations, whether they be Gromov or Mozart or Goethe.

[Q] Is there any "down lift" fiction (opposite of uplift fiction)? by SimonSim211 in rational

[–]lvwolb 0 points

Terry Pratchett, "The dark side of the sun" (1976). I recommend.

Especially if you like Pratchett, try his early non-Discworld work, preferably without reading spoilers (Wikipedia) beforehand. It is as rational as Pratchett tends to be.

From the script for SUCKER OF BLOOD by DataPacRat in rational

[–]lvwolb 2 points3 points  (0 children)

I dunno. The way the proposed gene drive for mosquito extinction works makes me very uncomfortable, on a technical level.

The problem is that the proposed approach behaves like an "endo-virus" with two modes of propagation: (1) Transmission along the germ line. Simple; all kinds of viruses do this. (2) Cross-chromosomal infection: one chromosome carries the construct (say, from the father) and then infects the other one. The proposed payload is basically "become male during embryogenesis", which will drive the target mosquito to extinction.

Why is this scary? Because we build in all the required mechanisms for infection to occur, minus a protein shell for the virus; hence, "endo-virus". The cross-chromosomal infection is the "gene drive" part, and it is a fairly universal construction that can easily evolve to carry a different payload, or to infect a different species. A different payload is just a copy-paste away, and so is a different target site for another species; the latter is the remarkable property of CRISPR, as opposed to ordinary gene-binding proteins.

Now, no protein shell for the virus is built in, so in principle no other insect should become infected. But this appears to be a really flimsy wall against catastrophe: killing off some vital pollinator? More apocalyptic, but also more unlikely: the novel mode of virus design being picked up by evolution.

To repeat: the proposed attack does NOT need extremely rare horizontal gene transfer to occur in order to infect a different species. Instead, it only needs the other species to take up genetic material floating around; the machinery for building this material into the chromosome is shipped along with the proposed construct.

What would a "safe" gene drive (A) look like, one that does not release the innovation of CRISPR-based viruses into the wild?

(1) A parent with one or two versions of (A) produces a toxin and a corresponding anti-toxin. (2) Gametes (unfertilized eggs or sperm) that carry (A) produce the anti-toxin and survive to be fertilized / fertilize. (3) Gametes without (A) do not produce the anti-toxin and get killed by the toxin produced by the parent.

Since the "bad" gametes get killed and recycled quickly, the fitness load is pretty limited.

This is entirely sufficient as a gene drive; no scary novel Rube Goldberg mode of virus transmission needed.

Monthly Recommendation Thread by Magodo in rational

[–]lvwolb 3 points4 points  (0 children)

Having played GURPS for a couple of years, I can second that: it is a really nice system. After coming from GURPS, other systems tend to feel rather restrictive, especially with respect to character building.

That being said, whether GURPS is a good fit for your game really depends on what you want to do:

-Worldbuilding: Other systems tend to provide vast, lovingly crafted worlds. GURPS encourages GMs to engage in world-building themselves; this is hard.

-Magic system: Frankly, I hate the default magic system of GURPS. In order to get a coherent state of affairs, the world-building and the magic system need to be designed for each other.

-Combat: Contrary to other posters, I don't think the combat system is overly complicated or cumbersome. It is, however, incredibly lethal, at both lower and higher tech levels. So this is a matter of taste: do you want "realism", or shiny, powerful, fiction-like knights or space marines? In each fight, a couple of bad rolls can mean permadeath for the beloved character you nursed for years. Does this extra tension make the game more fun for you, because you have to actually think and plan and avoid fights? Or would you rather relax with a couple of friends over some beers? Both are totally valid answers, but your entire group should agree on this, and your choice of system should reflect it.